
Installation and configuration of the Compute service (NOVA)

Hypervisor (KVM) configuration

  • Install the virtualization packages:
    # yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python python-virtinst libvirt-client bridge-utils
    # yum groupinstall Virtualization "Virtualization Client" "Virtualization Platform" "Virtualization Tools"
    # yum install openstack-utils memcached qpid-cpp-server openstack-nova dnsmasq-utils  python-keystone-auth-token
    

  • Verify that the virtual network interface, named virbr0 by default, is up. The ifconfig command should show it in its output:
    virbr0    Link encap:Ethernet  HWaddr 52:54:00:54:65:A1  
              inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:45 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0 
              RX bytes:0 (0.0 b)  TX bytes:7585 (7.4 KiB)
    

  • Configuration requirements for RHEL
    • Make sure that auth=no is set in the /etc/qpidd.conf file.
    • Use the openstack-config command to set the force_dhcp_release parameter to False:
      # openstack-config --set /etc/nova/nova.conf DEFAULT force_dhcp_release False
      
      Note: the command prints no output, but it does modify the configuration file.
    • If you intend to use images that do not use a single partition, run the following command:
      # openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_inject_partition -1
      
      Note: the command prints no output, but it does modify the configuration file.
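
  • To quickly confirm that the hypervisor stack is in place, you can check that the KVM kernel modules are loaded and that libvirtd is running (a minimal check; the init script name libvirtd matches the RHEL packaging used here):
    # lsmod | grep kvm
    # service libvirtd status
    # virsh -c qemu:///system list --all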

Compute service configuration

  • Edit two files in /etc/nova as follows.
    • Full contents of nova.conf:
      [DEFAULT]
      # LOG/STATE
      logdir = /var/log/nova
      verbose = True
      state_path = /var/lib/nova
      lock_path = /var/lib/nova/tmp
      
      # AUTHENTICATION
      auth_strategy = keystone
      
      # SCHEDULER
      #compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
      
      # VOLUMES
      volume_group = <VOLUME_NAME>
      #volume_name_template = volume-%08x
      iscsi_helper = tgtadm
      
      # DATABASE on the Cloud Controller
      sql_connection = mysql://nova:<YOUR_NOVADB_PASSWORD>@openstack-01.cnaf.infn.it/nova
      
      # COMPUTE
      libvirt_type = kvm
      connection_type = libvirt
      #instance_name_template = instance-%08x
      #api_paste_config=/etc/nova/api-paste.ini
      #allow_resize_to_same_host=True
      
      # APIS
      #osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
      #ec2_dmz_host=192.168.206.130
      #s3_host=192.168.206.130
      
      # GLANCE
      #image_service=nova.image.glance.GlanceImageService
      #glance_api_servers=192.168.206.130:9292
      
      # NETWORK
      network_manager = nova.network.manager.FlatDHCPManager
      force_dhcp_release = True
      dhcpbridge_flagfile = /etc/nova/nova.conf
      firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
      # Change my_ip to match each host
      my_ip = <THIS_SERVER_IP>
      public_interface = eth0
      #vlan_interface = eth0
      flat_network_bridge = virbr0
      flat_interface = eth0
      fixed_range = 192.168.122.0/24
      
      # NOVNC CONSOLE
      #novncproxy_base_url=http://192.168.206.130:6080/vnc_auto.html
      
      # Change vncserver_proxyclient_address and vncserver_listen to match each compute host
      vncserver_listen = <THIS_SERVER_IP>
      vncserver_proxyclient_address = <THIS_SERVER_IP>
      
      # Qpid
      qpid_hostname = openstack-01.cnaf.infn.it
      rpc_backend = nova.rpc.impl_qpid
      
      # OTHER
      dhcpbridge = /usr/bin/nova-dhcpbridge
      injected_network_template = /usr/share/nova/interfaces.template
      libvirt_xml_template = /usr/share/nova/libvirt.xml.template
      libvirt_nonblocking = True
      libvirt_inject_partition = -1
      vpn_client_template = /usr/share/nova/client.ovpn.template
      credentials_template = /usr/share/nova/novarc.template
      root_helper = sudo nova-rootwrap
      remove_unused_base_images = True
      
      Where:
      • <VOLUME_NAME> is the name of the volume group on the server being configured
      • <YOUR_NOVADB_PASSWORD> is the password of the "nova" user of the "nova" DB on the Cloud Controller
      • <THIS_SERVER_IP> is the IP of the server being configured
      • in the sql_connection and qpid_hostname parameters, "openstack-01.cnaf.infn.it" is the server hosting the Cloud Controller
      • in the flat_network_bridge parameter, "virbr0" is the virtual network interface of the server being configured
        
        
    • Final part of /etc/nova/api-paste.ini (the preceding part remains unchanged):
      [...]
      
      [filter:authtoken]
      paste.filter_factory = keystone.middleware.auth_token:filter_factory
      service_protocol = http
      service_host = <KEYSTONE_SERVICE_IP>
      service_port = 5000
      auth_host = <KEYSTONE_SERVICE_IP>
      auth_port = 35357
      auth_protocol = http
      auth_uri = http://<KEYSTONE_SERVICE_IP>:5000/
      admin_tenant_name = service 
      admin_user = nova
      admin_password = <NOVA_PASSWORD>
      
      Where:
      • <KEYSTONE_SERVICE_IP> is the IP of the server hosting Keystone (in the prototype, the IP of openstack-01.cnaf.infn.it)
      • <NOVA_PASSWORD> is the password associated with the Nova service user in Keystone
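
  • Instead of editing nova.conf by hand, the same keys can also be set from the command line with openstack-config, as was done above for force_dhcp_release. A short sketch using the placeholders described above:
    # openstack-config --set /etc/nova/nova.conf DEFAULT sql_connection mysql://nova:<YOUR_NOVADB_PASSWORD>@openstack-01.cnaf.infn.it/nova
    # openstack-config --set /etc/nova/nova.conf DEFAULT my_ip <THIS_SERVER_IP>
    # openstack-config --set /etc/nova/nova.conf DEFAULT flat_network_bridge virbr0
    As with the commands shown earlier, no output is printed; the file is modified in place.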
        
        
  • Note: the nova-manage command may print some warning messages about deprecated methods.
    
    
  • To start the Nova services and initialize the DB, run the following commands:
    # for svc in api objectstore compute network volume scheduler cert; do echo openstack-nova-$svc; service openstack-nova-$svc stop ; chkconfig openstack-nova-$svc on; done
    # nova-manage db sync
    # for svc in api objectstore compute network volume scheduler cert; do echo openstack-nova-$svc; /etc/init.d/openstack-nova-$svc start ; done 
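
  • Optionally, ask each init script for its status to confirm that every service is actually running (same service list as above):
    # for svc in api objectstore compute network volume scheduler cert; do service openstack-nova-$svc status; done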
    

  • On the Cloud Controller, check the status of the Nova compute services:
    # nova-manage service list
    
    Binary           Host                                 Zone             Status     State Updated_At
    nova-scheduler   stack-03.cnaf.infn.it                nova             enabled    :-)   2012-08-31 09:46:24
    nova-compute     stack-03.cnaf.infn.it                nova             enabled    :-)   2012-08-31 09:46:26
    nova-network     stack-03.cnaf.infn.it                nova             enabled    :-)   2012-08-31 09:46:25
    nova-cert        stack-03.cnaf.infn.it                nova             enabled    :-)   2012-08-31 09:46:24
    nova-volume      stack-03.cnaf.infn.it                nova             enabled    :-)   2012-08-31 09:46:24
    

  • Create the private subnet, whose IPs will be assigned to the virtual machine instances, using the virtual interface (virbr0 in this example). The following command, for example, creates a subnet with range 192.168.122.0/24.
    # nova-manage network create private --multi_host=T --fixed_range_v4=192.168.122.0/24 --bridge_interface=virbr0 --num_networks=1 --network_size=256
    
    # nova-manage network list
    id      IPv4                    IPv6            start address   DNS1            DNS2            VlanID          project         uuid           
    1       192.168.122.0/24        None            192.168.122.2   8.8.4.4         None            None            None            052f9b4b-e6d7-4ad9-a3f1-929e80008372
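
  • At this point a test instance can be booted to verify that it receives a fixed IP from the 192.168.122.0/24 range. A sketch, assuming the Keystone credentials (OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME, OS_AUTH_URL) are exported in the environment and <IMAGE_ID> is an image already registered in Glance:
    # nova boot --image <IMAGE_ID> --flavor m1.tiny test-vm
    # nova list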
    

Configuration of public (Floating) IP addresses

Public and private IP addresses

Each virtual instance is automatically assigned a private IP address (belonging to the subnet created in the previous step). It is also possible to assign public addresses to instances. If you plan to use this feature, you must add the following line to your nova.conf file to specify which interface the nova-network service will bind public IP addresses to:
public_interface = eth0
Restart the nova-network service if you change nova.conf while the service is running.
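
As with the other nova.conf parameters, this can be set with openstack-config and followed by a restart of the service (a sketch using the init script names from the installation above):

# openstack-config --set /etc/nova/nova.conf DEFAULT public_interface eth0
# service openstack-nova-network restart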

Enabling IP forwarding

By default, IP forwarding is disabled on most Linux distributions. The floating IP feature requires IP forwarding to be enabled in order to work. You can check whether forwarding is enabled by running the following command:
# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0
In this example, IP forwarding is disabled. You can enable it on the fly by running the following command:
# sysctl -w net.ipv4.ip_forward=1
To make the change permanent, edit /etc/sysctl.conf and update the IP forwarding setting:
net.ipv4.ip_forward = 1
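
If /etc/sysctl.conf already contains a net.ipv4.ip_forward line, edit it in place; otherwise the setting can be appended and reloaded as follows (sysctl -p simply re-reads /etc/sysctl.conf):

# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
# sysctl -p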

Creating a List of Available Floating IP Addresses

Nova maintains a list of floating IP addresses that are available for assigning to instances. Use the nova-manage floating create command to add entries to this list, as root. For example:

# nova-manage floating create 131.154.101.220

The following nova-manage commands apply to floating IPs.

  • nova-manage floating list: List the floating IP addresses in the pool.
  • nova-manage floating create [cidr]: Create specific floating IPs for either a single address or a subnet.
  • nova-manage floating delete [cidr]: Remove floating IP addresses using the same parameters as the create command.
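
For example, a whole block of addresses can be registered at once by passing a CIDR instead of a single address (131.154.101.224/28 is purely illustrative; use a range actually routed to your hosts):

# nova-manage floating create 131.154.101.224/28
# nova-manage floating list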

Adding a Floating IP to an Instance

Adding a floating IP to an instance is a two step process:
  1. nova floating-ip-create: Allocate a floating IP address from the list of available addresses.
  2. nova add-floating-ip: Add an allocated floating IP address to a running instance.

Here's an example of how to add a floating IP to a running instance with ID 63c5b9ba-3308-43ce-af61-d7b5dbc08c15:

# nova floating-ip-create
+-----------------+-------------+----------+------+
|        Ip       | Instance Id | Fixed Ip | Pool |
+-----------------+-------------+----------+------+
| 131.154.101.220 | None        | None     | nova |
+-----------------+-------------+----------+------+

# nova add-floating-ip 63c5b9ba-3308-43ce-af61-d7b5dbc08c15 131.154.101.220

# nova-manage floating list
c10d9c9f296b47f8a1212dd7a98357e0        131.154.101.220 63c5b9ba-3308-43ce-af61-d7b5dbc08c15    nova    eth0

If the instance no longer needs a public address, remove the floating IP address from the instance and de-allocate the address:

# nova remove-floating-ip 63c5b9ba-3308-43ce-af61-d7b5dbc08c15 131.154.101.220
# nova floating-ip-delete 131.154.101.220

Automatically adding floating IPs

The nova-network service can be configured to automatically allocate and assign a floating IP address to virtual instances when they are launched. Add the following line to nova.conf and restart the nova-network service:
auto_assign_floating_ip=True

Note that if this option is enabled and all of the floating IP addresses have already been allocated, the nova boot command will fail with an error.
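
The same change can be applied with openstack-config, consistent with the commands used earlier on this page:

# openstack-config --set /etc/nova/nova.conf DEFAULT auto_assign_floating_ip True
# service openstack-nova-network restart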

-- PaoloVeronesi - 2012-08-30
