Aeolus at the Ohio LinuxFest - Part I Cloud Providers Sep 24, 2012
This weekend I head out to Columbus, OH for the Ohio LinuxFest, where I will be presenting Aeolus. This is one of the largest community-run Linux conferences in the States, and should be my biggest presentation audience yet, so I'm really looking forward to going. The presentation leans heavily on a live demonstration of the software, with a minimal introduction to the topic and architecture at the beginning. This week a few of us in the Aeolus community are going to be blogging / tweeting / etc. to try to drive a bit of buzz around our recent work, which includes support for more cloud providers (prominently OpenStack!), various command line utilities and migration tooling, and our community efforts.
The demo consists of two machines: my desktop workstation acting as an 'external' cloud provider, and my laptop running a VM with the Aeolus suite on it. Network resources are always unreliable at conferences, so I'm not planning on deploying to EC2 or similar; rather, I have set up OpenStack and oVirt (RHEV) on my desktop, each bonded to its own management network interface, through which they will be controlled from my laptop. The exact same commands and setup I will be demoing work with any cloud provider supported by the Aeolus suite (which is the most comprehensive IaaS cloud management suite to date).

Attached below is my guide to setting up oVirt and OpenStack on a fresh F17 box (after the jump). I plan on posting an Aeolus overview later to demo interaction with the providers set up below.
Setting up OpenStack on Fedora
This guide is fairly monumental, and aims to be a definitive end-to-end guide to setting up OpenStack on Fedora. OpenStack may be installed in a VM, but it may _not_ be run off a live CD (due to current restrictions on memory/disk transfer sizes).
To start off, set up your environment:
# export ADMIN_TOKEN=$(openssl rand -hex 10)
# export OS_USERNAME=admin
# export OS_PASSWORD=cloudpass
# export OS_TENANT_NAME=admin
# export OS_AUTH_URL=http://<endpoint>:5000/v2.0/
# export SERVICE_ENDPOINT=http://<endpoint>:35357/v2.0/
# export SERVICE_TOKEN=$ADMIN_TOKEN
# export ADMIN_PASSWORD=$OS_PASSWORD
# export SERVICE_PASSWORD=servicepass
Make sure to set <endpoint> above to the hostname or IP address of the machine that will be running the OpenStack services.
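Purely as a convenience (and not part of the original steps), you may want to capture these variables in a file, hypothetically ~/keystonerc, so they can be re-sourced in later shells:
# env | grep -E '^(OS_|SERVICE_|ADMIN_)' | sed 's/^/export /' > ~/keystonerc
# . ~/keystonerc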
Yum install the necessary components:
# yum install mysql-server qpid-cpp-server-daemon
# yum install --enablerepo=updates-testing openstack-utils openstack-nova openstack-glance openstack-keystone openstack-dashboard
Set the root mysql password:
# mysqladmin -uroot password "cloudpass"
# service mysqld restart
Setup the openstack databases:
# openstack-db --service nova --init -y --rootpw cloudpass
# openstack-db --service glance --init -y --rootpw cloudpass
# openstack-db --service keystone --init -y --rootpw cloudpass
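As a quick sanity check (not part of the original guide), you can confirm the three databases were created:
# mysql -uroot -pcloudpass -e 'show databases;' | grep -E 'nova|glance|keystone'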
Create a volume group for openstack-nova-volume (default 'nova-volumes')
# dd if=/dev/zero of=/var/lib/nova/nova-volumes.img bs=1M seek=2k count=0
# vgcreate nova-volumes $(sudo losetup --show -f /var/lib/nova/nova-volumes.img)
# openstack-config --set /etc/nova/nova.conf DEFAULT volume_group nova-volumes
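Note that the loopback device backing this volume group will not survive a reboot; a minimal sketch of re-attaching it after boot (e.g. from rc.local), assuming the image path above:
# losetup --show -f /var/lib/nova/nova-volumes.img
# vgchange -a y nova-volumes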
Configure OpenStack to use Keystone for authentication:
# openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
# openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_tenant_name service
# openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_user nova
# openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_password servicepass
# openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
# openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
# openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
# openstack-config --set /etc/glance/glance-api-paste.ini filter:authtoken admin_tenant_name service
# openstack-config --set /etc/glance/glance-api-paste.ini filter:authtoken admin_user glance
# openstack-config --set /etc/glance/glance-api-paste.ini filter:authtoken admin_password servicepass
# openstack-config --set /etc/glance/glance-registry-paste.ini filter:authtoken admin_tenant_name service
# openstack-config --set /etc/glance/glance-registry-paste.ini filter:authtoken admin_user glance
# openstack-config --set /etc/glance/glance-registry-paste.ini filter:authtoken admin_password servicepass
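If your openstack-utils build supports it, openstack-config --get is a handy way to double check an entry before moving on (this one should print 'keystone'):
# openstack-config --get /etc/nova/nova.conf DEFAULT auth_strategy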
Start the openstack nova, glance, keystone services & their dependencies
# systemctl start qpidd.service
# systemctl enable qpidd.service
# systemctl start libvirtd.service
# systemctl enable libvirtd.service
# systemctl start openstack-keystone.service
# systemctl enable openstack-keystone.service
# systemctl start openstack-glance-api.service
# systemctl enable openstack-glance-api.service
# systemctl start openstack-glance-registry.service
# systemctl enable openstack-glance-registry.service
# systemctl start openstack-nova-api.service
# systemctl enable openstack-nova-api.service
# systemctl start openstack-nova-objectstore.service
# systemctl enable openstack-nova-objectstore.service
# systemctl start openstack-nova-compute.service
# systemctl enable openstack-nova-compute.service
# systemctl start openstack-nova-network.service
# systemctl enable openstack-nova-network.service
# systemctl start openstack-nova-volume.service
# systemctl enable openstack-nova-volume.service
# systemctl start openstack-nova-scheduler.service
# systemctl enable openstack-nova-scheduler.service
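With this many units it is easy to miss one; a quick (optional) loop to confirm everything came up:
# for s in qpidd libvirtd openstack-keystone openstack-glance-{api,registry} openstack-nova-{api,objectstore,compute,network,volume,scheduler}; do echo -n "$s: "; systemctl is-active $s.service; done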
Create initial keystone accounts
# openstack-keystone-sample-data
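You can verify the sample tenants, users, and service endpoints landed using the keystone client (with the credentials exported earlier):
# keystone tenant-list
# keystone user-list
# keystone endpoint-list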
Create a network for openstack
# nova-manage network create demonet 10.0.0.0/24 1 256 --bridge=demonetbr0
# modprobe nbd
Import a base image to base new instances off of; you may grab a simple one from my server (anything supported by qemu/kvm will do):
# virsh net-destroy default
# wget http://syracloud.net/~mmorsi/f16-x86_64-openstack-sda.qcow2
# glance add name=f16-jeos is_public=true disk_format=qcow2 container_format=bare < f16-x86_64-openstack-sda.qcow2
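The image should now show up in the glance index, and glance show gives the full details (including its status):
# glance index
# glance show $(glance index | grep f16-jeos | awk '{print $1}')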
If you're running in a VM, you will need to set up software virtualization:
# openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu
# openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_default_filters AllHostsFilter
# setsebool -P virt_use_execmem on
Assuming there were no errors, OpenStack should now be running on your system. You may start instances as any user with access to the system (as defined in the environment in the first step).
Create a keypair to use to log into instances:
$ nova keypair-add mykey > oskey.priv
$ chmod 600 oskey.priv
Start and verify an instance:
$ nova boot myserver --flavor 2 --key_name mykey --image $(glance index | grep f16-jeos | awk '{print $1}')
$ nova list
$ sudo virsh list # once started
SSH into the running instance
$ ssh -i oskey.priv root@10.0.0.2
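If the instance never becomes reachable, the nova client can report more detail; a couple of checks (assuming your python-novaclient provides them):
$ nova show myserver
$ nova console-log myserver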
Setting up oVirt on Fedora
The oVirt website is quite comprehensive and includes everything needed to get it up and running. oVirt may not be installed in a VM nor run off a live CD.
Set a hostname for the host that will run oVirt; this will need to be resolvable by whoever is connecting to oVirt:
# echo "192.168.122.218 ovirt" >> /etc/hosts
# echo "HOSTNAME=ovirt" >> /etc/sysconfig/network
Configure the prerequisites:
# dnsmasq --bind-interfaces --listen-address=127.0.0.1
# echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
# service sshd restart
Configure NFS by copying the following to /etc/exports:
/ext/ovirt31storage *(rw,async,no_subtree_check,all_squash,anonuid=36,anongid=36)
/ext/ovirt31export *(rw,async,no_subtree_check,all_squash,anonuid=36,anongid=36)
/ext/ovirt31isos *(rw,async,no_subtree_check,all_squash,anonuid=36,anongid=36)
Finish setting up NFS to store images, ISOs, and other cloud data:
# mkdir -p /ext/ovirt31storage /ext/ovirt31export /ext/ovirt31isos
# chown vdsm.kvm /ext/ovirt31storage /ext/ovirt31export /ext/ovirt31isos
# chmod 775 /ext/ovirt31storage /ext/ovirt31export /ext/ovirt31isos
# service nfs restart
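After restarting NFS it is worth confirming the directories are actually being exported (a quick check):
# exportfs -ra
# showmount -e localhost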
Install / set up oVirt:
# yum localinstall http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm
# yum install ovirt-engine
# engine-setup
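Before moving to the browser, a quick check that the engine actually came up (assuming the service is named ovirt-engine):
# systemctl status ovirt-engine.service
# curl -s -o /dev/null -w '%{http_code}\n' http://ovirt/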

- Open a web browser and navigate to http://ovirt
- Click the 'Administrative' interface and log in (with the credentials from the engine-setup command)
- Click storage domains and then create / new domain
- Fill in the NFS details of the storage domain, noting:
- add and activate the 'ovirt31storage' domain before the isos or export
- make sure to specify a mount point that is externally accessible (e.g. ovirt:/ovirt31storage), this will need to be the EXACT SAME mountpoint you mount on the Aeolus side
- make sure to select the 'iso' and 'export' types for the other domains
- each time your machine reboots you will need to make sure the domains start off as unmounted before activating them in the oVirt web interface (see this script, or the sketch below)
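The linked script isn't reproduced here, but a rough sketch of the idea (unmounting any leftover ovirt31* mounts after a reboot so the domains can be activated cleanly) might look like:
#!/bin/sh
# Unmount any leftover mounts of the oVirt storage domains after a reboot so
# they can be activated cleanly from the web interface.
for mnt in $(mount | grep ovirt31 | awk '{print $3}'); do
    umount "$mnt"
done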
Download / import an ISO disk image to boot instances off of. For example download Fedora here and import it into oVirt with:
# engine-iso-uploader -i is1 upload Downloads/Fedora-17-x86_64-Live-Desktop.iso

Next, add the local machine as a host via the admin interface:
- Fill in the details for the local host. No need to configure power management.
- This most likely will reboot the machine as the node is added to oVirt
Finally, in the admin interface, you can click 'VMs', then 'New Server' to create a new instance:
- You will need to attach a disk image to the instance when prompted
- You may optionally attach a network interface to the instance when prompted
- You may launch the instance with 'Run once', selecting the ISO you uploaded before to boot off of
Final steps
In my next post I will be discussing using Aeolus to connect to the OpenStack and oVirt instances set up above. Since I will be running Aeolus on my laptop, which will be the only point of communication with my desktop, I disabled the firewall on the desktop and on the OpenStack VM running on top of it to simplify the setup. The laptop assigns the desktop static IP addresses, which are also aliased in my local /etc/hosts file (along with the VM running Aeolus):
110.220.110.30 ovirt
110.220.110.10 openstack
110.220.110.40 openstack_vm
192.168.122.129 aeolus
Make sure to stay tuned for the next steps in the process which should be coming later today / tomorrow!