Installing OpenStack on Ubuntu 12.04 LTS in 10 Minutes

Tue, 01 Oct 2013


OpenStack's technology stack consists of a series of interrelated projects that control a given deployment of hardware, providing processing, storage, and networking. Deployments are managed using a simple UI and a flexible API which can be used by third-party software.

Infrastructure is meant to be open, trustworthy, and secure. The best way to ensure trust in infrastructure is to use Open Source software and hardware exclusively at the infrastructure level.

This guide is dedicated to helping individuals deploy OpenStack for use with the project. The project's goal is to create a highly distributed cloud backed by a simple cryptocurrency payment system. Participation in an enabled compute pool provides resource and revenue sharing among participants.

This guide and the software it contains are released under the MIT Open Source license. Anyone is welcome to use these scripts to install OpenStack for evaluation or production use.


  1. You need a minimum of one rig with at least 8GB of RAM, 4 cores, one SSD drive, and one Ethernet card.
  2. You need a clean install of Ubuntu 12.04 LTS 64-bit Linux on your box. The server version of 12.04.x also works.
  3. You'll need a router which supports IPv6. Ideally, your router is also configured for a small group of publicly routable IPv4 addresses.
  4. Optionally, you should have an account on a pool. If you aren't a member of a pool, you may join StackMonkey's pool for free. Please note, the pool software is not complete at this time.
  5. Optionally, you can install Veo's sgminer to mine alt currencies with your rig's GPUs without impacting instance performance.
  6. Optionally, fire up a good music track to listen to while you watch the bytes scroll by.
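Prerequisite 2 can be confirmed from a shell on each rig; the grep below assumes Ubuntu's standard /etc/lsb-release format:

```shell
# Print the installed Ubuntu release; this guide expects 12.04.
grep DISTRIB_RELEASE /etc/lsb-release
```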

Note: Each OpenStack cluster needs a single controller which is in charge of managing the cluster. Certain steps below are labeled to indicate they need to be run only on the controller. All other steps which are not labeled will need to be completed for each and every node in the cluster - including the controller.

Forum Discussion

There is a forum based discussion area on Google Groups for posting technical questions regarding the guide.

IRC Channel

The IRC channel for the project is located in the #stackgeek channel on Mibbit.

Support is only provided for participants.

Install Bugs?

If you encounter a bug in the installation code, you may open a ticket.

Video Guide

The video for this guide is located on Vimeo.



Assuming a fresh install of Ubuntu Desktop, you'll need to log in locally to each rig and install openssh-server to allow remote SSH access:

sudo apt-get install openssh-server

You may now log in remotely to your rig via SSH. As root, add the Grizzly package repository and do an upgrade:

echo '# STACKGEEK ADDED THIS' >> /etc/apt/sources.list
echo 'deb precise-updates/grizzly main' >> /etc/apt/sources.list
apt-get install ubuntu-cloud-keyring -y
apt-get update -y
apt-get upgrade -y

The upgrade will take a while. When it is done, install git with apt-get:

sudo su
apt-get -y install git

Check out the StackGeek OpenStack setup scripts from GitHub:

git clone git://
cd openstackgeek/grizzly

Network Interfaces

You need to manually configure your ethernet interface to support a non-routable static IPv4 address and an auto configured IPv6 address. Externally routed IPv4 addresses will be added in a later section. To start, run the following script:


The script will output a short configuration block which should be placed manually in /etc/network/interfaces. Be sure to edit the IP address before you save the file! I suggest you use an ordered set of IPs like .100, .101, .102, etc. for your rigs.

# loopback
auto lo
iface lo inet loopback

# primary interface
auto eth0
iface eth0 inet static

# ipv6 configuration
iface eth0 inet6 auto

Reboot the rig after saving the file.

Privacy and Tracking Notice

A few of these scripts contain tracking pings which are used to analyze the install process flow. The IP address of the machine(s) you are installing will be reported; no other personal information is transmitted by the tracking pings. You may examine the Open Source code for handling the ping requests here.

You may run the following script if you would like to disable the tracking pings in these scripts:


Note: These scripts are provided free of charge and serve to assist users in setting up OpenStack to participate in a highly distributed cloud. The intent is to bring trust and transparency to the Internet's infrastructure. The compute rigs you are installing already call into several other services, including Ubuntu's hosted repos and various OpenStack cloud image hosting servers. The impact of these tracking pings on your privacy is minimal at worst. Your participation in tracking your install is appreciated.

Another Note: Please also be aware that the script below sends your configuration file to a hosted pastebin knockoff, which keeps it until you delete it (instructions below). If you don't want this functionality, please edit the script to your liking.

Test and Update

After editing the network, you'll need to test your rig for virtualization support:


If your rig doesn't support virtualization, you will need to check your virtualization settings in the BIOS or upgrade your hardware. If it does support virtualization, you'll be prompted to update your Ubuntu install:


The update should come back pretty quickly, as you've already updated the system.
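If you'd rather verify virtualization support by hand, the standard check (independent of these scripts) is to count the vmx/svm flags in /proc/cpuinfo:

```shell
# vmx = Intel VT-x, svm = AMD-V; a count of 0 means hardware
# virtualization is unsupported or disabled in the BIOS.
egrep -c '(vmx|svm)' /proc/cpuinfo
```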


Note: Be sure to take a look at the scripts before you run them. Keep in mind the setup scripts will periodically prompt you for input, either for confirming installation of a package, or asking you for information for configuration.

Start the installation by running the setup script:


You will be asked whether or not this rig is to be configured as a controller. If you answer yes, the result of the setup will be a setuprc file in the install directory. The setup script will also output a URL which is used to copy the existing setup to a compute rig. Here's an example URL:

If you indicated the rig is not a controller node, you will be prompted for the URL output by the controller installation as mentioned above. Paste this URL in and hit enter to start the compute rig install.

Note: If you are installing a compute rig, you may skip to the Cinder Setup section below.

Database Setup (Controller Only)

The next part of the setup installs MySQL and RabbitMQ. This is only required for the controller rig. Skip this step if you are setting up a compute rig for your cluster. Start the install on the controller rig by typing:


The install script will install Rabbit and MySQL. During the MySQL install you will be prompted to set a password for the MySQL root user; use the password you entered earlier. You'll be prompted again toward the end of the script when it creates the databases.

Keystone Setup (Controller Only)

Keystone is used by OpenStack to provide central authentication across all installed services. Start the install of Keystone by typing the following:


When the install is done, test Keystone by setting the environment variables using the newly created stackrc file. Note: This file can be sourced any time you need to manage the OpenStack cluster from the command line.

. ./stackrc
keystone user-list

Keystone should output the current user list to the console:

|                id                |   name  | enabled |       email        |
| 5474c43e65c840b5b371d695af72cba4 |  admin  |   True  | |
| dec9e0adf6af4066810b922035f24edf |  cinder |   True  | |
| 936e0e930553423b957d1983d0a29a62 |   demo  |   True  | |
| 665bc14a5da44e86bd5856c6a22866fb |  glance |   True  | |
| bf435eb480f643058e27520ee3737685 |   nova  |   True  | |
| 7fa480363a364d539278613aa7e32875 | quantum |   True  | |
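The stackrc file sourced above typically exports the standard OpenStack credential variables. A sketch with placeholder values follows; your generated file will differ:

```shell
# Placeholder values -- the real stackrc is generated by the Keystone script.
export OS_USERNAME=admin
export OS_PASSWORD=<your-admin-password>
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://<controller-ip>:5000/v2.0/
```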

Glance Setup (Controller Only)

Glance provides image services for OpenStack. Its images are prebuilt operating systems packaged to run on OpenStack. There is a list of available images on the OpenStack site.

Start the Glance install by typing:


Once the Glance install completes, you should be able to query the system for the available images:

glance image-list

The output should be something like this:

| ID                                   | Name             | Disk Format | Container Format | Size      | Status |
| df53bace-b5a0-49ba-9b7f-4d43f249e3f3 | Cirros 0.3.0     | qcow2       | bare             | 9761280   | active |

Cinder Setup

Cinder is used to provide additional volume attachments to running instances and snapshot space. Start the install of Cinder by typing:


Once the install of Cinder is complete, determine your space requirements and run the loopback volume creation script:


Keep in mind you have to create a loopback file that is at least 1GB in size. After you complete the Nova setup for the controller below, you should be able to query installed storage types:

cinder type-list

You may then create a new volume to test (again, this requires running the Nova setup for the controller below):

cinder create --volume-type Storage --display-name test 1
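For reference, the loopback-backed store the volume creation script builds can be sketched as follows; the file path and 2G size are assumptions, and the script's own values may differ:

```shell
# Create a sparse backing file (2G here) for the cinder-volumes group.
dd if=/dev/zero of=/var/lib/cinder-volumes.img bs=1 count=0 seek=2G

# Attach it to a loop device and create the LVM volume group (root only):
#   losetup -f --show /var/lib/cinder-volumes.img   # prints e.g. /dev/loop0
#   pvcreate /dev/loop0 && vgcreate cinder-volumes /dev/loop0
```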

Note: If you are installing a compute rig, you may skip to the Nova Compute Setup section below.

Nova Setup (Controller Only)

Nova provides multiple services to OpenStack for controlling networking, imaging and starting and stopping instances. If you are installing a compute rig, please skip to the following section to install the base nova-compute methods needed for running a compute rig.

Start the controller's nova install by typing the following:


When the install is complete, you may query the running services by doing the following:

nova service-list

You should see output that looks similar to this:

| Binary           | Host   | Zone     | Status  | State | Updated_at                 |
| nova-cert        | tester | internal | enabled | up    | 2014-02-20T10:37:25.000000 |
| nova-conductor   | tester | internal | enabled | up    | 2014-02-20T10:37:17.000000 |
| nova-consoleauth | tester | internal | enabled | up    | 2014-02-20T10:37:25.000000 |
| nova-network     | tester | internal | enabled | up    | 2014-02-20T10:37:25.000000 |
| nova-scheduler   | tester | internal | enabled | up    | 2014-02-20T10:37:24.000000 |

Nova Compute Setup (Compute Rigs Only)

If you are installing a controller, this step has already been completed using the Nova Setup section above. You may skip this if you are installing a controller rig.

You may run this on any number of compute rigs. Start the Nova Compute setup on a given compute rig by typing the following:


Once the compute rig has been configured, you may log back into the controller rig and run the nova service list command again:

nova service-list

You should see new entries for the newly added compute rig:

| Binary           | Host    | Zone     | Status  | State | Updated_at                 |
| nova-cert        | nero    | internal | enabled | up    | 2014-04-13T17:20:52.000000 |
| nova-compute     | booster | nova     | enabled | up    | 2014-04-13T17:20:55.000000 |
| nova-compute     | nero    | nova     | enabled | up    | 2014-04-13T17:20:55.000000 |
| nova-conductor   | nero    | internal | enabled | up    | 2014-04-13T17:20:52.000000 |
| nova-consoleauth | nero    | internal | enabled | up    | 2014-04-13T17:20:52.000000 |
| nova-network     | booster | internal | enabled | up    | 2014-04-13T17:20:52.000000 |
| nova-network     | nero    | internal | enabled | up    | 2014-04-13T17:20:52.000000 |
| nova-scheduler   | nero    | internal | enabled | up    | 2014-04-13T17:20:52.000000 |

Flat Networking Setup (Controller Only)

This guide completely ignores the disaster-ridden Neutron/Quantum project. If you are interested in Neutron, this is not the place to seek help.

Begin by creating a private IPv4 network range for instances, which blocks out the network:

nova-manage network create private --fixed_range_v4= --num_networks=1 --bridge=br100 --bridge_interface=eth0 --network_size=255

You'll need to add a route in your router to point to the new network managed by the controller (pseudo command here):

route add gw

Now enter a set of publicly available IPv4 based addresses:

nova-manage floating create

This example would allow a floating IP address to be assigned to an instance from the given range.

You can view the private network by querying nova:

nova network-list

Output should look like this:

| ID                                   | Label   | CIDR          |
| 22aca431-14b3-43e0-a762-b02914770e6d | private | |

View the available floating pool addresses by querying nova again:

nova floating-ip-bulk-list

Output should look like this (truncated for space):

| project_id | address       | instance_uuid | pool | interface |
| None       | | None          | nova | |
| None       | | None          | nova | |

There will be additional guides posted on best practices for IPv6 allocation and IPv4 mapping and isolation. Hold tight.

Horizon Setup (Controller Only)

Horizon provides OpenStack's management interface. Install Horizon by typing:


Now reboot the controller rig:


Once the rig comes back up, you should be able to log into your OpenStack cluster with the following URL format (changing the IP of course):

Your user/pass combination will be 'admin' and whatever you entered for a password earlier. If you accidentally run this command before adding the network above, you may see errors in the UI.

Note: If you log into the dashboard and get errors regarding quotas, log out of the UI by clicking on 'sign out' at the top right and then reboot the rig. The errors should go away when you log back in.

Install the StackMonkey Virtual Appliance

StackMonkey is a pool instance of the highly distributed cloud framework. If you elect to install the appliance, this OpenStack node will provide a small portion of its compute power to help build a highly distributed cloud. You will earn Bitcoin doing this.

The virtual appliance setup can be run by typing the following command:


More information about setting up a new appliance can be viewed on StackMonkey (requires Google login).

OpenStack Cheat Sheet

An OpenStack Command Line Cheat Sheet is available on Anystacker's site. Commands can be run once the setuprc file has been sourced:

. ./setuprc

Delete the Paste File

The URL created for a multi-rig install is stored on an AppEngine application based on Rupa's sprunge project. You should delete the paste after you are done with your setup for security's sake:

curl -X DELETE

If you have any questions, issues or concerns, please feel free to join IRC, post on the forum, or create a ticket!