
Installing and Configuring the Compute service (NOVA)

Configuring the Hypervisor (KVM)

  • Install virtualization RPMs
# yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python python-virtinst libvirt-client bridge-utils

# yum groupinstall Virtualization "Virtualization Client" "Virtualization Platform" "Virtualization Tools"

# yum install openstack-utils memcached qpid-cpp-server openstack-nova dnsmasq-utils  python-keystone-auth-token

  • Pre-configure the network (this is how it comes up by default).
# ifconfig virbr0
virbr0    Link encap:Ethernet  HWaddr 52:54:00:54:65:A1  
          inet addr:  Bcast:  Mask:
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:45 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 b)  TX bytes:7585 (7.4 KiB)

  • Configuration requirements with RHEL
    • Ensure auth=no is set in /etc/qpidd.conf.
    • Use the openstack-config package to turn off forced DHCP releases: sudo openstack-config --set /etc/nova/nova.conf DEFAULT force_dhcp_release False (No output is shown, but the configuration file is modified.)
    • If you intend to use guest images that don't have a single partition, allow libguestfs to inspect the image so that files can be injected, by setting: sudo openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_inject_partition -1 (No output is shown, but the configuration file is modified.)
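The two openstack-config calls above simply write key/value pairs into the DEFAULT section of /etc/nova/nova.conf. As a minimal sketch of the same edit using Python's configparser (the file path here is a temporary stand-in, not the real config file):

```python
import configparser
import os
import tempfile

# Hypothetical stand-in for /etc/nova/nova.conf
path = os.path.join(tempfile.mkdtemp(), "nova.conf")
with open(path, "w") as f:
    f.write("[DEFAULT]\nverbose = True\n")

cfg = configparser.ConfigParser()
cfg.read(path)
# Equivalent of: openstack-config --set /etc/nova/nova.conf DEFAULT force_dhcp_release False
cfg.set("DEFAULT", "force_dhcp_release", "False")
cfg.set("DEFAULT", "libvirt_inject_partition", "-1")
with open(path, "w") as f:
    cfg.write(f)

# Read the file back to confirm the keys were written
cfg2 = configparser.ConfigParser()
cfg2.read(path)
print(cfg2.get("DEFAULT", "force_dhcp_release"))  # False
```

Like openstack-config itself, this prints nothing while writing; reading the file back is how you confirm the change landed.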

Configuring OpenStack Compute

Two files need to be modified:
  • nova.conf
logdir = /var/log/nova
verbose = True
state_path = /var/lib/nova
lock_path = /var/lib/nova/tmp

auth_strategy = keystone


volume_group = vg_vol01
#volume_name_template = volume-%08x
iscsi_helper = tgtadm

# Cloud Controller DATABASE

libvirt_type = kvm
connection_type = libvirt
#instance_name_template = instance-%08x



network_manager = nova.network.manager.FlatDHCPManager
force_dhcp_release = True
dhcpbridge_flagfile = /etc/nova/nova.conf
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
# Change my_ip to match each host
my_ip =
public_interface = eth0
#vlan_interface = eth0
flat_network_bridge = virbr0
flat_interface = eth0
fixed_range =


# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_listen =
vncserver_proxyclient_address =

# Qpid
qpid_hostname = stack-01.cnaf.infn.it
rpc_backend = nova.rpc.impl_qpid

dhcpbridge = /usr/bin/nova-dhcpbridge
injected_network_template = /usr/share/nova/interfaces.template
libvirt_xml_template = /usr/share/nova/libvirt.xml.template
libvirt_nonblocking = True
libvirt_inject_partition = -1
vpn_client_template = /usr/share/nova/client.ovpn.template
credentials_template = /usr/share/nova/novarc.template
root_helper = sudo nova-rootwrap
remove_unused_base_images = True

  • Verify that the following settings are present at the end of /etc/nova/api-paste.ini (do not touch the part before it; the IP is Keystone's).

paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host =
service_port = 5000
auth_host =
auth_port = 35357
auth_protocol = http
auth_uri =
admin_tenant_name = service 
admin_user = nova
admin_password = XXXXXXX
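Before restarting the services it can help to sanity-check these values. A small sketch, assuming the values mirror the fragment above (XXXXXXX stands in for the real password):

```python
# Hypothetical snapshot of the auth_token filter settings shown above.
settings = {
    "service_port": "5000",
    "auth_port": "35357",
    "auth_protocol": "http",
    "admin_tenant_name": "service",
    "admin_user": "nova",
    "admin_password": "XXXXXXX",  # placeholder; must be replaced with the real one
}

problems = []
for key in ("service_port", "auth_port"):
    if not settings[key].isdigit():
        problems.append(f"{key} is not a number")
if settings["admin_password"] in ("", "XXXXXXX"):
    problems.append("admin_password still looks like a placeholder")
print(problems)
```

A non-empty problems list means the fragment was pasted but not filled in, which typically shows up later as 401 errors from the Nova API.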

  • Run the following commands:
# for svc in api objectstore compute network volume scheduler cert; do echo openstack-nova-$svc; service openstack-nova-$svc stop ; chkconfig openstack-nova-$svc on; done

# nova-manage db sync

# for svc in api objectstore compute network volume scheduler cert; do echo openstack-nova-$svc; /etc/init.d/openstack-nova-$svc start ; done 

  • On the Cloud Controller, check the status of the Nova services (some WARNING lines are not reported below):
# nova-manage service list

Binary           Host                                 Zone             Status     State Updated_At
nova-scheduler   stack-03.cnaf.infn.it                nova             enabled    :-)   2012-08-31 09:46:24
nova-compute     stack-03.cnaf.infn.it                nova             enabled    :-)   2012-08-31 09:46:26
nova-network     stack-03.cnaf.infn.it                nova             enabled    :-)   2012-08-31 09:46:25
nova-cert        stack-03.cnaf.infn.it                nova             enabled    :-)   2012-08-31 09:46:24
nova-volume      stack-03.cnaf.infn.it                nova             enabled    :-)   2012-08-31 09:46:24
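In the listing above, :-) marks a service whose heartbeat is current; nova-manage prints XXX instead when a service has stopped checking in. A small sketch that scans such output for dead services (the sample reuses the format above, with one deliberately broken line to exercise the check):

```python
# Sample nova-manage service list output; the nova-compute line is
# deliberately marked XXX here to exercise the check.
sample = """\
Binary           Host                    Zone   Status    State Updated_At
nova-scheduler   stack-03.cnaf.infn.it   nova   enabled   :-)   2012-08-31 09:46:24
nova-compute     stack-03.cnaf.infn.it   nova   enabled   XXX   2012-08-30 10:00:00
"""

def dead_services(output):
    """Return the binaries whose State column is XXX (not checking in)."""
    dead = []
    for line in output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 5 and fields[4] == "XXX":
            dead.append(fields[0])
    return dead

print(dead_services(sample))  # ['nova-compute']
```

An empty list, as in the healthy output above, means every service is reporting in.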

  • You must run the command that creates the network and the bridge, using the virbr0 interface specified in the nova.conf file, to create the network that the virtual machines use. This example shows the network range used as the fixed range for our guest VMs, but you can substitute the range for the network you have available. We label it private in this case.
# nova-manage network create private --multi_host=T --fixed_range_v4= --bridge_interface=virbr0 --num_networks=1 --network_size=256

# nova-manage network list
id      IPv4                    IPv6            start address   DNS1            DNS2            VlanID          project         uuid           
1        None           None            None            None            052f9b4b-e6d7-4ad9-a3f1-929e80008372
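The --network_size=256 above corresponds to a /24 fixed range. As a sketch with a hypothetical range (the actual range is not shown above; 192.168.122.0/24 is used here purely as an example), Python's ipaddress module confirms the sizing:

```python
import ipaddress

# Hypothetical fixed range; substitute the range you actually pass
# to `nova-manage network create`.
fixed_range = ipaddress.ip_network("192.168.122.0/24")

# A /24 holds 256 addresses, matching --num_networks=1 --network_size=256.
print(fixed_range.num_addresses)  # 256
```

If you pick a different prefix length, adjust --network_size to match the address count of the range.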

-- PaoloVeronesi - 2012-08-30

Topic revision: r12 - 2012-08-31 - PaoloVeronesi