Running a devstack virtual machine with limited memory

If you have a system with only 4GB of RAM, you need to assign at least 2.5GB (2560M) to a virtual machine to install devstack. Even with this amount of RAM, the devstack installation will sometimes fail.

One way to give the installation process an opportunity to complete is to configure your virtual machine with swap space. The amount of swap space you can configure may be limited by the size of your initial disk partition (8GB in this configuration). The following steps add a 2GB swap file to your virtual machine.

# Check current swap and memory usage
sudo swapon -s
free -m
# Create a 2GB swap file, restrict its permissions and format it
sudo fallocate -l 2G /swapfile
ls -lh /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
# Enable the swap file and confirm it is in use
sudo swapon /swapfile
sudo swapon -s
free -m
# Make the swap file permanent across reboots
echo "/swapfile   none    swap    sw    0   0" | sudo tee -a /etc/fstab
cat /etc/fstab

The use of swap space by your virtual machine instead of available RAM will cause a significant slowdown of any software. For the purposes of a minimal installation, this option provides a means to observe a running minimal OpenStack cloud.

Downloading and installing devstack

The following instructions assume you have a running Linux virtual machine that can support the installation of devstack to demonstrate a simple working OpenStack cloud.

For more information about the preparation needed for this step, see these pre-requisite instructions:

Pre-requisites

You will need to log in to your Linux virtual machine as a normal user (e.g. stack if you followed these instructions).

To verify the IP address of your machine you can run:

$ ifconfig eth1

NOTE: This assumes you configured a second network adapter as detailed.

You need to determine the IP address assigned. If this is your first time using VirtualBox and it was configured with default settings, the value will be 192.168.56.101.

eth1      Link encap:Ethernet  HWaddr 08:00:27:db:42:6e  
          inet addr:192.168.56.101  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fedb:426e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:398500 errors:0 dropped:0 overruns:0 frame:0
          TX packets:282829 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:35975184 (35.9 MB)  TX bytes:59304714 (59.3 MB)

Verify that you have applicable sudo privileges.

$ sudo id

If you are prompted for a password, then your privileges are not configured correctly. See here.
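
If you need to configure this, a minimal sketch (assuming your username is stack, matching the post-installation steps later in this guide) is:

$ sudo su -
# Create a sudoers entry so the stack user is never prompted for a password
$ echo "stack ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/stack
$ exit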

Download devstack

After connecting to the virtual machine, the following commands will download the devstack source code:

$ sudo apt-get install -y git-core
# NOTE: You will not be prompted for a password
#       This is important for the following installation steps
$ git clone https://git.openstack.org/openstack-dev/devstack

Configure devstack

The following will create an example configuration file suitable for a default devstack installation.

$ cd devstack
# Use the sample default configuration file
$ cp samples/local.conf .
$ HOST_IP="192.168.56.101"
$ echo "HOST_IP=${HOST_IP}" >> local.conf

NOTE: If your machine has a different IP address, you should specify that value instead.
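
If you are unsure of the address, one way to derive it automatically rather than hard-coding it (a sketch, assuming the host-only adapter is eth1) is:

# Extract the IPv4 address assigned to eth1
$ HOST_IP=$(ip -4 addr show eth1 | awk '/inet / {print $2}' | cut -d/ -f1)
$ echo "HOST_IP=${HOST_IP}" >> local.conf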

Install devstack

$ ./stack.sh

Depending on your physical hardware and network connection, this takes approximately 20 minutes.

When completed you will see the following:

...
This is your host IP address: 192.168.56.101
This is your host IPv6 address: ::1
Horizon is now available at http://192.168.56.101/dashboard
Keystone is serving at http://192.168.56.101:5000/
The default users are: admin and demo
The password: nomoresecrete

While the installation of devstack is running, you should read the Configuration section and look at the devstack/samples/local.conf sample configuration file being used.

Accessing devstack

You now have a running OpenStack cloud. There are two easy ways to access the running services to verify it.

  • Connect to the Horizon dashboard in your browser using the URL (e.g. http://192.168.56.101/), and log in with the user and password described (e.g. admin and nomoresecrete).
  • Use the OpenStack client that is installed with devstack, for example:
$ source accrc/admin/admin
$ openstack image list
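
A few additional client commands can confirm the core services are responding, for example:

$ openstack service list
$ openstack network list
$ openstack flavor list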

See Using your devstack cloud for more information about analyzing your running cloud, restarting services, configuration files and how to demonstrate a code change.

Other devstack commands

There are some useful commands to know about with your devstack setup.

If you restart your virtual machine, you reconnect to devstack by re-running the installation (there is no longer a rejoin-stack.sh):

$ ./stack.sh

To shut down a running devstack:

$ ./unstack.sh

To clean up your VM of devstack-installed software:

$ ./clean.sh

Setting up Ubuntu using vagrant

As discussed in Setting up an Ubuntu virtual machine using VirtualBox there are several other alternatives to defining an Ubuntu virtual machine. One of these alternatives is using Vagrant.

Pre-requisites

Vagrant requires the installation of VirtualBox.

Install Vagrant

See Vagrant Downloads for the correct file for your platform.

For Ubuntu, the following commands will download a recent copy and install it on your computer.

$ wget https://releases.hashicorp.com/vagrant/1.8.1/vagrant_1.8.1_x86_64.deb
$ sudo dpkg -i vagrant_1.8.1_x86_64.deb

Launching an Ubuntu image

The following commands will initialize and start an Ubuntu 14.04 vagrant instance.

$ vagrant init ubuntu/trusty64
$ vagrant up --provider virtualbox
$ vagrant ssh

You should now be connected to the new virtual machine.

Vagrant creates a port forwarding configuration from your local machine automatically. You can connect via ssh directly with:

ssh vagrant@localhost -p 2222 -i .vagrant/machines/default/virtualbox/private_key

NOTE: Port 2222 may be different if this is already in use. You can verify this via the output of the vagrant up command, for example:

...
==> default: Forwarding ports...
    default: 22 (guest) => 2222 (host) (adapter 1)
...
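
You can also ask Vagrant for the exact connection settings it configured:

# Show the SSH host, port and private key in use
$ vagrant ssh-config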

Post configuration

In order to access your vagrant instance with a specific IP address and leverage the recommended devstack instructions, you need to add the config.vm.network line to the Vagrantfile in the directory used on your host computer. You also need to set the virtual machine memory to at least 2.5GB to get a minimal devstack operational.

Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "private_network", type: "dhcp"
 
  config.vm.provider "virtualbox" do |v|
    v.memory = 2560
  end
end

You will then need to restart the vagrant image in order to have a host-only IP assigned to the virtual machine and the new memory setting applied.

$ vagrant reload
$ vagrant ssh
$ ifconfig eth1
$ free -m

This has created a suitable virtual machine ready for Downloading and installing devstack.

Setting up CentOS on VirtualBox for RDO

Create a CentOS Virtual Machine (VM)

NOTE: There are several different ways of creating a base CentOS VM image. These steps are the more manual approach; they are provided for completeness in understanding the varying options.

To create a virtual machine in VirtualBox select the New icon. This will prompt you for some initial configuration. Use these recommendations:

  • Name and operating System
    • Name: RDO
    • Type: Linux
    • Version: Red Hat (64-bit)
  • Memory Size
    • Use at minimum 4GB.
  • Hard Disk
    • Use the default settings including 8.0GB, VDI type, dynamically allocated, File location and size.

By default your virtual machine is ready to install; however, the following network recommendation will make it easier to access your running virtual machine via SSH and the RDO web interface and APIs from your host computer.

  • Click Settings
  • Select Network
  • Enable Adapter 2 and attach to a Host-only Adapter and select vboxnet0
  • Ok

Install CentOS Operating System

You are now ready to install the Operating System on the virtual machine with the following instructions.

  • Click Start
  • Open the CentOS .iso file you just downloaded.
  • You will be prompted for a number of options; select the defaults provided and use the following values when prompted.
  • Install CentOS 7
  • Select English and English (United States) (or your choice of language)
  • Select System to configure your installation destination
    • Click Done to use the default VM disk and automatically configure partitioning
  • Select Network & hostname
    • Enable both of the listed Ethernet connections
    • Enter rdo for the Host Name
    • Click Done
  • Click Begin Installation
  • Click Root Password
    • Enter password of your choosing
  • Click User Creation
    • Enter rdo for user name (or any value of your choice)
    • Enter Openstack for password (or any password of your choice)
    • Click Done

When the installation is complete, click Reboot.

You will now be able to login with username: rdo and password: Openstack (or the values you chose).

Post Installation

While the second Ethernet adapter for your VM is configured, it is not enabled. The following commands enable it.

$ su -
# Enter root password
$ sed -ie "s/ONBOOT=no/ONBOOT=yes/" /etc/sysconfig/network-scripts/ifcfg-enp0s8
$ ifup enp0s8
$ ip addr
# RDO does not operate with NetworkManager
$ sudo systemctl stop NetworkManager.service
$ sudo systemctl disable NetworkManager.service

The ip output will verify the IP address that was assigned. If you configured the VirtualBox host-only adapter with defaults, the address will be 192.168.56.1XX.

To verify access to your virtual machine from your host computer, you should SSH with:

$ ssh [email protected]

Setting up Ubuntu on VirtualBox for devstack

As discussed, devstack enables a software developer to run a standalone minimal OpenStack cloud on a virtual machine (VM). In this tutorial we are going to step through the installation of an Ubuntu VM using VirtualBox manually. This is a pre-requisite to installing devstack.

NOTE: There are several different ways of creating a base Ubuntu VM image. These steps are the more manual approach; they are provided for completeness in understanding the varying options.

Pre-requisites

  1. You will need a computer running a 64-bit operating system (Mac OS X, Windows, Linux or Solaris) with at least 4GB of RAM and 10GB of available disk drive space.
  2. You will need to have a working VirtualBox on your computer. See Setting up VirtualBox to run virtual machines as a pre-requisite for these steps.
  3. You will need an Ubuntu server .iso image. Download the Ubuntu Server 14.04 (Trusty) server image (e.g. ubuntu-14.04.X-server-amd64.iso) to your computer. This will be the base operating system of your virtual machine that will run devstack.

If using Mac OS X or Linux you can obtain a recent .iso release with the command:

$ wget http://releases.ubuntu.com/14.04/ubuntu-14.04.4-server-amd64.iso
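
You can optionally verify the integrity of the download against the published checksums:

# Verify the .iso against the Ubuntu release checksums
$ wget http://releases.ubuntu.com/14.04/SHA256SUMS
$ sha256sum -c SHA256SUMS 2>/dev/null | grep ubuntu-14.04.4-server-amd64.iso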

NOTE: devstack can be installed on different operating systems. As a first-time user, Ubuntu 14.04 is used as this is a more common platform (and used by OpenStack infrastructure). Other operating systems include Ubuntu (14.10, 15.04, 15.10), Fedora (22, 23) and CentOS/RHEL 7.

Create an Ubuntu Virtual Machine

To create a virtual machine in VirtualBox select the New icon. This will prompt you for some initial configuration. Use these recommendations:

  • Name and operating System
    • Name: devstack
    • Type: Linux
    • Version: Ubuntu (64-bit)
  • Memory Size
    • If you have 8+GB use 4GB.
    • If you have only 4GB use 2.5GB. (Note: testing during the creation of this guide found that 2048M was insufficient, and that a minimum of 2560M was needed.)
  • Hard Disk
    • Use the default settings including 8.0GB, VDI type, dynamically allocated, File location and size.

By default your virtual machine is ready to install; however, the following network recommendation will make it easier to access your running virtual machine and devstack from your host computer.

  • Click Settings
  • Select Network
  • Enable Adapter 2 and attach to a Host-only Adapter and select vboxnet0
  • Ok

You are now ready to install the Operating System on the virtual machine with the following instructions.

  • Click Start
  • Open the Ubuntu .iso file you just downloaded.
  • You will be prompted for a number of options; select the defaults provided and use the following values when prompted.
  • Install Ubuntu Server
  • English (or your choice)
  • United States (or your location)
  • No for configure the keyboard
  • English (US) for keyboard (or your preference)
  • English (US) for keyboard layout (or your preference)
  • Select eth0 as your primary network interface
  • Select default ubuntu for hostname
  • Enter stack for full username/username
  • Enter Openstack for password (or your own preference)
  • Select No to encrypt home directory
  • Select Yes for time zone selected
  • Select Guided – use entire disk for partition method
  • Select highlighted partition
  • Select Yes to partition disks
  • Select Continue for package manager proxy
  • Select No automatic updates
  • Select OpenSSH Server in software to install
  • Select Yes to install GRUB boot loader
  • Select Continue when installation complete

The new virtual machine will now restart and you will be able to login with the username and password specified (i.e. stack and Openstack).

Post Installation

After successfully logging in, run the following commands to complete the Ubuntu setup needed as a pre-requisite to installing devstack.

$ sudo su -
# Enter your stack user password
$ umask 266 && echo "stack ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/stack
$ apt-get update && apt-get upgrade -y
$ echo "auto eth1
iface eth1 inet dhcp" >> /etc/network/interfaces
$ ifup eth1
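
Before proceeding you can verify the setup; sudo should not prompt for a password, and eth1 should have a 192.168.56.x address:

$ exit
# Back as the stack user, verify passwordless sudo and the second adapter
$ sudo id
$ ifconfig eth1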

You are now ready to download and install devstack.

You can also setup an Ubuntu virtual machine via vagrant which simplifies these instructions.

More information

This blog is a series for the software developer with no experience in OpenStack, intended to demonstrate just the tip of its functionality and features and spark more interest in the project.

VirtualBox networking for beginners

When using VirtualBox for my OpenStack development I always configure two network adapters for ease of development. The first is a NAT adapter that gives the guest VM connectivity to the Internet via the host. The second is a host-only adapter that enables my host computer (aka my terminal windows) to SSH directly to the guest VM, or to access a web interface, for example. This enables the easy use of tools like ssh, scp and rsync with multiple VMs without thinking about different ports.

Having the two adapters is very convenient; however, when you install products such as devstack or RDO, these require additional steps to manage the interfaces and configure the installation. These steps are relatively straightforward, but they make the most simple instructions more complex.

An alternative is to use only the NAT adapter and enable port forwarding. For example, you can configure forwarding of host port 2222 to guest port 22 with (when the VM is not running):

$ VBoxManage modifyvm "vm-name" --natpf1 "guestssh,tcp,,2222,,22"
$ VBoxManage startvm "vm-name"

You can now connect to the guest VM via port forwarding on your host, in this case connecting to port 2222.

$ ssh user@localhost -p 2222

Personally I find this a disadvantage. You need to provide port forwarding for all ports you want to communicate on, e.g. ssh (22), http (80) and keystone (5000). You need to do it in advance of using your VM, and you also need to do this for each VM.
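
For completeness, forwarding the additional ports mentioned would look like this (the host ports chosen here are arbitrary examples):

# Forward guest http (80) and keystone (5000) ports (VM must not be running)
$ VBoxManage modifyvm "vm-name" --natpf1 "guesthttp,tcp,,8080,,80"
$ VBoxManage modifyvm "vm-name" --natpf1 "guestkeystone,tcp,,5000,,5000"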

However, depending on your needs and experience this is a valid alternative.

Ubuntu two adapter configuration

On Ubuntu, the following configuration file defines two DHCP network adapters.

$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet dhcp

You can verify adapter information (e.g. IP address) using ifconfig.

$ ifconfig
eth0      Link encap:Ethernet  HWaddr 08:00:27:7f:a0:e2  
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe7f:a0e2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:585 errors:0 dropped:0 overruns:0 frame:0
          TX packets:455 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:129820 (129.8 KB)  TX bytes:64215 (64.2 KB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:66:8d:cb  
          inet addr:192.168.56.102  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe66:8dcb/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:221 errors:0 dropped:0 overruns:0 frame:0
          TX packets:152 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:28289 (28.2 KB)  TX bytes:19443 (19.4 KB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:371 errors:0 dropped:0 overruns:0 frame:0
          TX packets:371 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:90509 (90.5 KB)  TX bytes:90509 (90.5 KB)

The ip command is also available.

To configure a fixed IP address on the host-only adapter network, which is useful when running many machines, you would use:

auto eth1
iface eth1 inet static
address 192.168.56.50
netmask 255.255.255.0
gateway 192.168.56.1
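
After editing /etc/network/interfaces you need to restart the interface for the change to take effect:

# Restart eth1 and confirm the new address
$ sudo ifdown eth1
$ sudo ifup eth1
$ ifconfig eth1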

CentOS two adapter configuration

CentOS keeps a configuration file per interface. We start by determining the interface names.

$ ls -l /etc/sysconfig/network-scripts/ifcfg-*
-rw-r--r--. 1 root root 310 Mar 30 16:53 /etc/sysconfig/network-scripts/ifcfg-enp0s3
-rw-r--r--. 1 root root 278 Mar 30 17:00 /etc/sysconfig/network-scripts/ifcfg-enp0s8
-rw-r--r--. 1 root root 277 Mar 30 16:53 /etc/sysconfig/network-scripts/ifcfg-enp0s8e
-rw-r--r--. 1 root root 254 Sep 16  2015 /etc/sysconfig/network-scripts/ifcfg-lo

We can then review the per-interface configuration with:

$ cat /etc/sysconfig/network-scripts/ifcfg-enp0s3
TYPE="Ethernet"
BOOTPROTO="dhcp"
DEFROUTE="yes"
PEERDNS="yes"
PEERROUTES="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_PEERDNS="yes"
IPV6_PEERROUTES="yes"
IPV6_FAILURE_FATAL="no"
NAME="enp0s3"
UUID="2c0bdd66-badc-449c-8db8-b2c85a716dab"
DEVICE="enp0s3"
ONBOOT="yes"

$ cat /etc/sysconfig/network-scripts/ifcfg-enp0s8
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=enp0s8
UUID=4127685d-a7b9-4b8d-a399-6dcacdb3396d
DEVICE=enp0s8
ONBOOT=yes
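
As with Ubuntu, you can assign a fixed address on the host-only network. A minimal static sketch for ifcfg-enp0s8 (the address shown is an example) would be:

TYPE=Ethernet
BOOTPROTO=static
NAME=enp0s8
DEVICE=enp0s8
ONBOOT=yes
IPADDR=192.168.56.50
NETMASK=255.255.255.0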

You can verify the network configuration using the ip command.

$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:32:c0:4c brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 15876sec preferred_lft 15876sec
    inet6 fe80::a00:27ff:fe32:c04c/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:43:22:85 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.103/24 brd 192.168.56.255 scope global dynamic enp0s8
       valid_lft 1056sec preferred_lft 1056sec
    inet6 fe80::a00:27ff:fe43:2285/64 scope link
       valid_lft forever preferred_lft forever

CentOS does not provide ifconfig by default; it is included in the net-tools package (which RDO, for example, installs).

$ sudo yum install -y net-tools
$ ifconfig
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::a00:27ff:fe32:c04c  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:32:c0:4c  txqueuelen 1000  (Ethernet)
        RX packets 437445  bytes 441847285 (421.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 142720  bytes 8897712 (8.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.103  netmask 255.255.255.0  broadcast 192.168.56.255
        inet6 fe80::a00:27ff:fe43:2285  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:43:22:85  txqueuelen 1000  (Ethernet)
        RX packets 14250  bytes 1708162 (1.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15367  bytes 13061787 (12.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 5706360  bytes 778262566 (742.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5706360  bytes 778262566 (742.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Installing VirtualBox for OpenStack development

Download VirtualBox for your operating system

VirtualBox is an open source virtualization product that will allow you to create virtual machines on a computer using Linux, Mac OS X or Windows. While the current version is 5.x, older versions will also work if you are already using this software.

NOTE: There are different products that can provide virtualization on your computer. For a first-time user of virtualization, VirtualBox is a common open source product used by developers.

Install VirtualBox on your system

Just follow the default prompts.

Recommended VirtualBox Networking

To provide a better experience for installing and accessing devstack or RDO, the following VirtualBox configuration is recommended to create a host-only adapter network on your host machine.

  • Start VirtualBox
  • Open Preferences (e.g. File|Preferences)
  • Select Network
  • Select Host-only Networks
  • Add Network (accept all defaults)

This additional step will create a network configuration in VirtualBox that is called vboxnet0. This will define a network in the 192.168.56.X range, and will configure a DHCP server that will issue IP addresses starting at 192.168.56.101. This will enable you to more easily access your VMs from your host computer as discussed in VirtualBox networking for beginners.
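
If you prefer the command line, the same host-only network and DHCP server can be created with VBoxManage (a sketch, assuming no vboxnet interfaces exist yet):

# Create vboxnet0 and attach a DHCP server matching the defaults above
$ VBoxManage hostonlyif create
$ VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1
$ VBoxManage dhcpserver add --ifname vboxnet0 --ip 192.168.56.100 --netmask 255.255.255.0 --lowerip 192.168.56.101 --upperip 192.168.56.254 --enable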

Installing OpenStack with devstack, a first-time guide

This guide will enable the reader to install a minimal OpenStack cloud using devstack for the first time.

This guide assumes you have never installed virtualization software, used or configured devstack, or even observed a running OpenStack cloud. It does assume that you can perform some basic software development instructions as documented.

This guide is targeted towards the software developer that may want to review the Python code and contribute to the open source project, or the system architect that wants to evaluate some of the features of OpenStack. If you are an end user, you should try a public cloud that runs OpenStack, such as OVH, Rackspace or other public cloud providers listed in the OpenStack Marketplace.

There are some hardware requirements and various copy/paste command line instructions on a Linux virtual machine. While it would be possible to publish a completed virtual machine you could download and click to run, understanding the underpinnings of the most basic installation and configuration of devstack will provide an appreciation of the complexity of the product and the software development capabilities.

At the end of this process you will have a running OpenStack cloud on your computer that is running on a Linux virtual machine. You will be able to access this with your browser and be able to perform basic cloud infrastructure tasks, such as creating a compute instance. This guide is not intended to talk about the benefits or usages of a cloud.

You will need a computer running Mac OS X, Windows, Linux (see supported list) or Solaris with at least 4GB of RAM and 10GB of available disk drive space in order to complete the following steps.

  1. Installing VirtualBox
  2. Setting up an Ubuntu virtual machine using VirtualBox
  3. Downloading and installing devstack
  4. Using your OpenStack devstack cloud

NOTE: These steps will provide one means of installing devstack with one type of virtualization software on a specific Linux operating system. This is only meant as a first-time user's guide, and some pre-defined decisions have been made. There are multiple ways to implement and use devstack with different software and operating systems.

What’s next?

Without knowing your purpose in following this first-time guide, what's next depends on you. As a software developer you may be interested in looking at OpenStack Bugs or contributing to new features of one of the many projects. As an architect you may want to understand a more complex configuration setup as you plan what may be necessary to utilize a cloud infrastructure in your organization. This guide is only intended as a first introduction, and hopefully has provided the intended result for the reader to consider what OpenStack can provide.

More references

We will assume you have never installed virtualization software on your computer and have not installed devstack, or even seen an OpenStack interface. The devstack documentation does not make this assumption and so these more generic instructions are useful to the uninitiated. While some (including this author) feel these are instructions worthy of the official devstack documentation, others (with valid reasons) do not and hence the democracy of a large distributed open source project. For more information see review #290854. This guide joins the many others searchable by Internet search engines.

devstack, your personal OpenStack Cloud

As a software developer or system architect that is interested in looking at the workings of OpenStack, devstack is one of several different ways to start a personal cloud using the current OpenStack code base.

In its most basic form, you can run devstack in a virtual machine and manage your personal cloud via the Horizon web interface (known as the Dashboard), or via several CLIs such as the OpenStack client (OSC). You can use this to launch compute services, manage boot images and disk volumes, define networking and configure administrative users, projects and roles.

The benefit of devstack is for the developer and deployer. You can actually see the running cloud software, and interact and engage with individual services. devstack is a valuable tool for debugging and bug-fixing services. devstack is used by the OpenStack CI/CD system for testing, so it is robust enough to evaluate the core projects and many of the available projects that can be configured to be installed with devstack. You can also configure devstack to use trunk (i.e. master) code, or specific branches or tags for individual services. The CI system, for example, will install the trunk of services, and the specific branch of a new feature or bug fix for one given project, in order to perform user and functional testing.

devstack also enables more complex configuration setups. You can set up devstack with LXC containers, run a multi-node setup, or run with Neutron networking. While devstack installs a small subset of projects including keystone, nova, cinder, glance and horizon, you can use devstack to run other OpenStack projects such as Manila, Trove, Magnum, Sahara, Solum and Heat.

The benefit of devstack is for evaluation of capabilities. devstack is not a product to use to determine a path for production deployment of OpenStack. That process includes many more complex considerations: determining why you want to implement an infrastructure for your organization, and the most basic technical needs such as uptime and SLA requirements, high availability, monitoring and alerting, security management and upgrade paths for your software.

If you are ready to see what OpenStack could provide and want to run a local cloud, you can start with Installing OpenStack with devstack, a first-time guide.

What is OpenStack?

OpenStack is a cloud computing software product that is the leading open source platform for creating cloud infrastructure. Used by hundreds of companies to run public, private and hybrid clouds, OpenStack is the second most popular open source project after the Linux Kernel.

OpenStack is a product of many different projects (currently over 50), written primarily in Python, and as of 2016 comprises over 4 million lines of code.

OpenStack Distributions

There are numerous distributions of OpenStack from leading vendors such as Red Hat, IBM, Canonical, Cisco, SUSE, Oracle and VMware, to name a few. Each vendor provides a means of installing and managing an OpenStack cloud and integrates the cloud with a large number of hardware and software products and appliances. Regardless of your preferred host operating system or deployment methodology with Ansible, Puppet or Chef, there is a project or provider to suit your situation.

Evaluating OpenStack

For the software developer or system architect there are several ways to evaluate the basic features of OpenStack. Free online services such as Mirantis Express and TryStack, or production clouds at Rackspace and OVH, offer you an infrastructure to handle compute, storage, networking and orchestration features, and you can engage with your personal cloud via web and CLI interfaces.

If however you want to delve behind the UIs and APIs to see OpenStack in operation, there are simple VM-based means using devstack, RDO or Ubuntu OpenStack that can operate a running OpenStack on a single VM or multiple VMs. You can follow the OpenStack documentation installation guides, which cover openSUSE 13.2, SUSE Linux Enterprise Server 12, Red Hat Enterprise Linux 7, CentOS 7 and Ubuntu 14.04 (LTS), to install OpenStack on a number of physical hardware devices (a minimal configuration is three servers).

These options will introduce you to what *may* be possible with OpenStack personally. This is however the very tip of a very large iceberg. Considering a cloud infrastructure for your organization is a much more complex set of decisions about the impact, usefulness and cost-effectiveness for your organization.

Retiring an OpenStack project

As part of migrating Oslo Incubator code to graduated libraries I have come across several inactive OpenStack projects. (An inactive project does not mean the project should be retired or removed.) However, in my case, when consulting the mailing list it was confirmed that the kite project met the criteria to proceed.

There are good procedures for Retiring a project in the Infra manual. In summary these steps are:

  • Inform the developer community via the mailing list of the intention to retire the project, confirming there are no unaware interested parties.
  • Submit an openstack-infra/project-config change to remove the zuul gate jobs that are run when reviews are submitted. This is needed as the subsequent review to remove code will fail if these checks are enabled.
  • Submit a project review that removes all code and updates the README with a standard message: “This project is no longer maintained. …” See the Infra manual for full details. This change will have a Depends-On: reference to your project-config review. If that review is not yet merged, you should add a Needed-By: reference accordingly.
  • Following the approved review to remove the code from HEAD, a subsequent review to openstack-infra/project-config is needed to remove other infrastructure usage of the project and mark the project as read only.
  • Finally, a request to openstack/governance is made to propose removal of the repository from the governance.

As with good version control, the resulting code for the project is not actually removed. As per the commit comment, you simply git checkout HEAD^1 to access the project in question.
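
For example, to retrieve the retired kite code discussed above (a sketch; the repository path follows the pattern used elsewhere in this series):

# Clone the retired project and step back one commit to the full tree
$ git clone git://git.openstack.org/openstack/kite
$ cd kite
$ git checkout HEAD^1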

Oracle OpenStack leveraging MySQL Cluster and Docker

At Oracle OpenWorld this year, Oracle OpenStack Release 2 was announced. This Kilo-based distribution included some new deployment features not seen in other OpenStack distros, including the use of Kolla, Docker and MySQL Cluster. The press release states “First commercially available OpenStack implementation completely packaged as Docker instances”.

Using Docker to containerize each component service is a very convenient means of dev/test/prod management. Your single-node developer environment gets HA automatically, you can easily deploy varying services to two or more management nodes, and they look and act just like your production environment. I have often struggled with developing in OpenStack, whether with single-project environments, creating a devstack, or a previously installed 3-node physical server deployment which takes up room and power in my office, and also comparing other single-node solutions including Canonical and Mirantis. I am often left using online services such as Mirantis Express, TryStack and HP Cloud to more easily evaluate the end product, but without any access to the operating cloud under the covers.

It is an interesting move to use MySQL Cluster, and I liked this announcement. This is a very robust master/master MySQL-compatible product that starts with high availability implemented through a shared-nothing architecture. You get the benefit of dynamically adding data shards as your system grows. However, MySQL Cluster is a very different product under the covers. For those familiar with managing MySQL Server, a different set of skills is required, starting with the concepts of a management node, data nodes and SQL nodes, and the different ways to manage, monitor and triage. MySQL Cluster is effectively an in-memory solution, so this will require some additional sizing considerations, especially for production deployments. Your backup/recovery/disaster recovery strategy will also change. All of this administration exists for MySQL Cluster; it is just different. If you have only used MySQL Server, these are new skills to master. As an expert in MySQL Server who has not used MySQL Cluster for at least 7 years, I cannot provide any insights, for example, on the use of common monitoring tools (including newer SaaS offerings). Still, MySQL Cluster is an extremely stable and production-ready product that scales to millions of QPS easily. Using it as an HA solution gives you a rock-solid base, which is what you need first.

While I attended a number of sessions and took the Hands On Lab for Oracle OpenStack, the proof is in having a running local environment. My next post will talk about my experiences installing it.

Deploying Ubuntu OpenStack Kilo

My previous Ubuntu OpenStack setup used the Juno release. I encountered some installation problems for Kilo using the stable repo, and so I switched to the experimental repo. This comes with a number of surface changes.

  • The interactive installation asks for the installation type first, and password second.
  • The IP range of installed OpenStack services changes from 10.0.4.x to 10.0.7.x.
  • Juju GUI is no longer installed by default. You need to specifically add this as a service after initial installation.
  • The GUI displays additional information during installation.
  • The LXC container name changes from uoi-bootstrap to openstack-single-<user>.

Uninstall any existing environment

Remove any existing installed OpenStack cloud.

sudo openstack-install -k
sudo openstack-install -u

NOTE: Be sure to remove your existing cloud before upgrading. Failing to do so will mean you need to manually clean up some things with:

sudo lxc-stop --name uoi-bootstrap
sudo lxc-destroy --name uoi-bootstrap
rm -rf $HOME/.cloud_install

Update the OpenStack installer

Upgrade Ubuntu OpenStack with the following commands. In my environment this installed version 0.99.14.

sudo apt-add-repository ppa:cloud-installer/experimental
sudo apt-get update
sudo apt-get upgrade openstack

Install OpenStack Kilo

Installing an Ubuntu OpenStack environment still uses the openstack-install command, with an additional argument.

sudo openstack-install --upstream-ppa

NOTE: Updated 6/18/15: When using the experimental repo with version 0.99.12 or earlier, you must specify the --extra-ppa argument and value, i.e. sudo openstack-install --extra-ppa ppa:cloud-installer/experimental. Thanks stokachu for pointing this out.

Adding Services

After setting up a Kilo cloud using Ubuntu OpenStack I was able to successfully add a Swift component, something else that was not quite working as expected in stable.

Writing and testing unit tests in OpenStack

The following outlines an approach of identifying and improving unit tests in an OpenStack project.

Obtain the source code

You can obtain a copy of current source code for an OpenStack project at http://git.openstack.org. Active projects are categorized into openstack, openstack-dev, openstack-infra and stackforge.

NOTE: While you can find OpenStack projects on GitHub, these are just a mirror of the source repositories.

In this example I am going to use the Magnum project.

$ git clone git://git.openstack.org/openstack/magnum 
$ cd magnum

Run the current tests

The first step should be to run the current tests to verify the current code. This is a good habit to adopt, especially if you have made modifications and are returning to your code. This will also create a virtual environment, which you will want to use later to test your changes.

$ tox -e py27

Should this fail, you may want to ensure all OpenStack developer dependencies are in place on your OS.

Identify unit tests to work on

You can use the code coverage of unit tests to determine possible places to start adding to existing unit tests. The following command will produce an HTML report in the cover/ directory of your project.

$ tox -e cover

This output will look similar to this example coverage output for Magnum. You can also produce a text-based version with:

$ coverage report -m 

I will use this text version as a later verification.

Working on a specific unit test

Drilling down on any individual test file will give you an indication of code that does not have unit test coverage, for example in magnum/common/utils.

Once you have found a place to work and have identified the corresponding unit test file in the magnum/tests/unit sub-directory (in this example I am working on magnum/tests/unit/common/test_utils.py), you will want to run this individual unit test in the virtual environment you previously created.

$ source .tox/py27/bin/activate
$ testr run test_utils -- -f

You can now start making your changes in whatever editor you wish. You may also want to work interactively in python initially to test and verify classes and methods, especially if you are unfamiliar with how the code functions. For example, using the identical import found in test_utils.py for the test coverage, I started with these simple checks.

(py27)$ python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from magnum.common import utils
>>> utils.is_valid_ipv4('10.0.0.1') == True
True
>>> utils.is_valid_ipv4('') == False
True

I then created some appropriate unit tests for these two methods based on this interactive validation. These tests show that I not only test for valid values, I also test various boundary conditions for invalid values, including blank, character and out-of-range values of IP addresses.

    def test_valid_ipv4(self):
        self.assertTrue(utils.is_valid_ipv4('10.0.0.1'))
        self.assertTrue(utils.is_valid_ipv4('255.255.255.255'))

    def test_invalid_ipv4(self):
        self.assertFalse(utils.is_valid_ipv4(''))
        self.assertFalse(utils.is_valid_ipv4('x.x.x.x'))
        self.assertFalse(utils.is_valid_ipv4('256.256.256.256'))
        self.assertFalse(utils.is_valid_ipv4(
                         'AA42:0000:0000:0000:0202:B3FF:FE1E:8329'))

    def test_valid_ipv6(self):
        self.assertTrue(utils.is_valid_ipv6(
                        'AA42:0000:0000:0000:0202:B3FF:FE1E:8329'))
        self.assertTrue(utils.is_valid_ipv6(
                        'AA42::0202:B3FF:FE1E:8329'))

    def test_invalid_ipv6(self):
        self.assertFalse(utils.is_valid_ipv6(''))
        self.assertFalse(utils.is_valid_ipv6('10.0.0.1'))
        self.assertFalse(utils.is_valid_ipv6('AA42::0202:B3FF:FE1E:'))

After making these changes you want to run and verify your modified test works as previously demonstrated.

$ testr run test_utils -- -f
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./magnum/tests/unit} --list  -f
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./magnum/tests/unit}  --load-list /tmp/tmpDMP50r -f
Ran 59 (+1) tests in 0.824s (-0.016s)
PASSED (id=19)

If your tests fail you will see a FAILED message like the following. I find it useful to intentionally write a failing test just to validate that the testing process is working.


${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./magnum/tests/unit}  --load-list /tmp/tmpsZlk3i -f
======================================================================
FAIL: magnum.tests.unit.common.test_utils.UtilsTestCase.test_invalid_ipv6
tags: worker-0
----------------------------------------------------------------------
Empty attachments:
  stderr
  stdout

Traceback (most recent call last):
  File "magnum/tests/unit/common/test_utils.py", line 98, in test_invalid_ipv6
    self.assertFalse(utils.is_valid_ipv6('AA42::0202:B3FF:FE1E:832'))
  File "/home/rbradfor/os/openstack/magnum/.tox/py27/local/lib/python2.7/site-packages/unittest2/case.py", line 672, in assertFalse
    raise self.failureException(msg)
AssertionError: True is not false
Ran 55 (-4) tests in 0.805s (-0.017s)
FAILED (id=20, failures=1 (+1))

Confirming your new unit tests

You can verify that this has improved the coverage percentage by re-running the coverage commands. I use the text-based version as an easy way to see a decrease in the number of lines not covered.

Before

$ coverage report -m | grep "common/utils"
magnum/common/utils    273     94     76     38    62%   92-94, 105-134, 151-157, 208-211, 215-218, 241-259, 267-270, 275-279, 325, 349-384, 442, 449-453, 458-459, 467, 517-518, 530-531, 544
$ tox -e cover

After

$ coverage report -m | grep "common/utils"
magnum/common/utils    273     86     76     38    64%   92-94, 105-134, 151-157, 241-259, 267-270, 275-279, 325, 349-384, 442, 449-453, 458-459, 467, 517-518, 530-531, 544

I can see 8 lines of improvement, which I can also verify by looking at the HTML version.

(Before and after screenshots of the HTML coverage report are omitted here.)

Additional Testing

Make sure you run a full test before committing. This runs all tests in multiple Python versions and runs the PEP8 code style validation for your modified unit tests.

$ tox

Here are some examples of PEP8 problems found with my improvements to the unit tests.

pep8 runtests: commands[0] | flake8
./magnum/tests/unit/common/test_utils.py:88:80: E501 line too long (88 > 79 characters)
./magnum/tests/unit/common/test_utils.py:91:80: E501 line too long (87 > 79 characters)
...
./magnum/tests/unit/common/test_utils.py:112:32: E231 missing whitespace after ','
./magnum/tests/unit/common/test_utils.py:113:32: E231 missing whitespace after ','
./magnum/tests/unit/common/test_utils.py:121:30: E231 missing whitespace after ','
...
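
You can run just the style checks while iterating on fixes (pep8 is a standard OpenStack tox environment):

$ tox -e pep8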

Submitting your work

In order for your time and effort to be included in the OpenStack project there are a number of key details you need to follow, which I outlined in Contributing to OpenStack. Specifically, these documents are important.

You do not have to be familiar with the procedures in order to look at the code, and even look at improving the code. You will need to follow the steps as outlined in these links if you want to contribute your code. Remember if you are new, the best access to help is to jump onto the IRC channel of the project you are interested in.

This example, along with additions for several other methods, was submitted (see patch). It was reviewed and ultimately approved.

References

Some additional information about the tools and processes can be found in these OpenStack documentation and wiki pages.

Contributing to OpenStack

Following my first OpenStack Summit in Vancouver in April 2015, it was time to become involved with contributing to OpenStack.

I have lurked around the mailing lists and several IRC channels for a few weeks and familiarized myself with OpenStack in varying forms including devstack, the free hosted Mirantis Express and the VM version, Ubuntu OpenStack, and even building my own 3 physical server cloud from second hand hardware purchased on eBay.

There are several resources available however I suggest you start with this concise presentation I attended at the summit by Adrian Otto on “7 Habits of Highly Effective Contributors” (slides, video).

You should also look at contributions from existing developers by looking at current code being submitted for review at https://review.openstack.org. I spent several weeks just looking at submissions, and I look at new submissions most days. While it does not always make sense (including a lot initially), it's important to look at the full scope of all the projects. It is extremely valuable to look at how the review process works, how others comment on contributions, and the types of patches and code changes that are being contributed. There are a number of ways of not doing it right, which can be discouraging when you first start contributing. The following links are vital to read, and re-read.

Individual projects also have various information, for example Magnum’s Ways to Contribute.

The benefit of observing for some time is that you can be better prepared when you start to contribute. I was also new to how unit testing and automated testing worked in Python (about 7th on my list of known languages), and so learning about running OpenStack tests with tox and understanding the different OpenStack tox configs were valuable lessons, helped by feedback from OpenStack developers on the mailing list and IRC (if you have not looked at the 7 Habits presentation, now is a great time).

I took the time to find areas of interest and value which become more apparent after attending my first Design Summit. I even committed to assist in a design priority in the Magnum project as a result of my learning about how unit testing worked.

And if you write about your experiences, another thing you can do is add your blog to Planet OpenStack. I have received great feedback from the OpenStack community when writing about my first experiences.

Tracking the Ubuntu OpenStack installation process

Following on from Installing Ubuntu OpenStack the following steps help you navigate around the single server installation, monitoring and debugging the installation process.

Configuration

The initial execution of the installer will create a default config.yaml file that defines the container and OpenStack services. After a successful installation this looks like:

$ more $HOME/.cloud-install/config.yaml
container_ip: 10.0.3.149
current_state: 2
deploy_complete: true
install_type: Single
openstack_password: openstack
openstack_release: juno
placements:
  controller:
    assignments:
      LXC:
      - nova-cloud-controller
      - glance
      - glance-simplestreams-sync
      - openstack-dashboard
      - juju-gui
      - keystone
      - mysql
      - neutron-api
      - neutron-openvswitch
      - rabbitmq-server
    constraints:
      cpu-cores: 2
      mem: 6144
      root-disk: 20480
  nova-compute-machine-0:
    assignments:
      BareMetal:
      - nova-compute
    constraints:
      mem: 4096
      root-disk: 40960
  quantum-gateway-machine-0:
    assignments:
      BareMetal:
      - quantum-gateway
    constraints:
      mem: 2048
      root-disk: 20480

This file changes during the installation process, which I describe later.

The LXC Container

The single server installation is managed within a single LXC container. You can obtain details of and connect to the container with the following.

$ sudo lxc-ls --fancy
NAME           STATE    IPV4                                 IPV6  AUTOSTART
----------------------------------------------------------------------------
uoi-bootstrap  RUNNING  10.0.3.149, 10.0.4.1, 192.168.122.1  -     YES

$ sudo lxc-info --name uoi-bootstrap
Name:           uoi-bootstrap
State:          RUNNING
PID:            19623
IP:             10.0.3.149
IP:             10.0.4.1
IP:             192.168.122.1
CPU use:        27692.85 seconds
BlkIO use:      63.94 GiB
Memory use:     24.29 GiB
KMem use:       0 bytes
Link:           vethC0E9US
 TX bytes:      507.43 MiB
 RX bytes:      1.43 GiB
 Total bytes:   1.93 GiB

$ sudo lxc-attach --name uoi-bootstrap

You can also connect to the server directly. As I prefer to NEVER configure or connect to a server as root, this is how I access the LXC container.

$ ssh [email protected]

Juju Status

When connected to the LXC container you can then look at the status of the Juju orchestration with:

$ export JUJU_HOME=~/.cloud-install/juju

$ juju status
environment: local
machines:
  "0":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: localhost
    instance-id: localhost
    series: trusty
    state-server-member-status: has-vote
  "1":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: 10.0.4.62
    instance-id: ubuntu-local-machine-1
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=4096M root-disk=40960M
  "2":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: 10.0.4.77
    instance-id: ubuntu-local-machine-2
    series: trusty
    containers:
      2/lxc/0:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.147
        instance-id: ubuntu-local-machine-2-lxc-0
        series: trusty
        hardware: arch=amd64
      2/lxc/1:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.15
        instance-id: ubuntu-local-machine-2-lxc-1
        series: trusty
        hardware: arch=amd64
      2/lxc/2:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.135
        instance-id: ubuntu-local-machine-2-lxc-2
        series: trusty
        hardware: arch=amd64
      2/lxc/3:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.133
        instance-id: ubuntu-local-machine-2-lxc-3
        series: trusty
        hardware: arch=amd64
      2/lxc/4:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.119
        instance-id: ubuntu-local-machine-2-lxc-4
        series: trusty
        hardware: arch=amd64
      2/lxc/5:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.88
        instance-id: ubuntu-local-machine-2-lxc-5
        series: trusty
        hardware: arch=amd64
      2/lxc/6:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.155
        instance-id: ubuntu-local-machine-2-lxc-6
        series: trusty
        hardware: arch=amd64
      2/lxc/7:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.36
        instance-id: ubuntu-local-machine-2-lxc-7
        series: trusty
        hardware: arch=amd64
      2/lxc/8:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.11
        instance-id: ubuntu-local-machine-2-lxc-8
        series: trusty
        hardware: arch=amd64
    hardware: arch=amd64 cpu-cores=2 mem=6144M root-disk=20480M
  "3":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: 10.0.4.10
    instance-id: ubuntu-local-machine-3
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=2048M root-disk=20480M
  "4":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: 10.0.4.96
    instance-id: ubuntu-local-machine-4
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=512M root-disk=8192M
  "5":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: 10.0.4.140
    instance-id: ubuntu-local-machine-5
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=512M root-disk=8192M
  "6":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: 10.0.4.197
    instance-id: ubuntu-local-machine-6
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=512M root-disk=8192M
services:
  glance:
    charm: cs:trusty/glance-11
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      cluster:
      - glance
      identity-service:
      - keystone
      image-service:
      - nova-cloud-controller
      - nova-compute
      object-store:
      - swift-proxy
      shared-db:
      - mysql
    units:
      glance/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/4
        open-ports:
        - 9292/tcp
        public-address: 10.0.4.119
  glance-simplestreams-sync:
    charm: local:trusty/glance-simplestreams-sync-0
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      identity-service:
      - keystone
    units:
      glance-simplestreams-sync/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/5
        public-address: 10.0.4.88
  juju-gui:
    charm: cs:trusty/juju-gui-16
    exposed: false
    units:
      juju-gui/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/1
        open-ports:
        - 80/tcp
        - 443/tcp
        public-address: 10.0.4.15
  keystone:
    charm: cs:trusty/keystone-12
    exposed: false
    relations:
      cluster:
      - keystone
      identity-service:
      - glance
      - glance-simplestreams-sync
      - neutron-api
      - nova-cloud-controller
      - openstack-dashboard
      - swift-proxy
      shared-db:
      - mysql
    units:
      keystone/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/2
        public-address: 10.0.4.135
  mysql:
    charm: cs:trusty/mysql-12
    exposed: false
    relations:
      cluster:
      - mysql
      shared-db:
      - glance
      - keystone
      - neutron-api
      - nova-cloud-controller
      - nova-compute
      - quantum-gateway
    units:
      mysql/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/0
        public-address: 10.0.4.147
  neutron-api:
    charm: cs:trusty/neutron-api-6
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      cluster:
      - neutron-api
      identity-service:
      - keystone
      neutron-api:
      - nova-cloud-controller
      neutron-plugin-api:
      - neutron-openvswitch
      shared-db:
      - mysql
    units:
      neutron-api/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/7
        open-ports:
        - 9696/tcp
        public-address: 10.0.4.36
  neutron-openvswitch:
    charm: cs:trusty/neutron-openvswitch-2
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      neutron-plugin:
      - nova-compute
      neutron-plugin-api:
      - neutron-api
    subordinate-to:
    - nova-compute
  nova-cloud-controller:
    charm: cs:trusty/nova-cloud-controller-51
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      cloud-compute:
      - nova-compute
      cluster:
      - nova-cloud-controller
      identity-service:
      - keystone
      image-service:
      - glance
      neutron-api:
      - neutron-api
      quantum-network-service:
      - quantum-gateway
      shared-db:
      - mysql
    units:
      nova-cloud-controller/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/3
        open-ports:
        - 3333/tcp
        - 8773/tcp
        - 8774/tcp
        - 9696/tcp
        public-address: 10.0.4.133
  nova-compute:
    charm: cs:trusty/nova-compute-14
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      cloud-compute:
      - nova-cloud-controller
      compute-peer:
      - nova-compute
      image-service:
      - glance
      neutron-plugin:
      - neutron-openvswitch
      shared-db:
      - mysql
    units:
      nova-compute/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: "1"
        public-address: 10.0.4.62
        subordinates:
          neutron-openvswitch/0:
            upgrading-from: cs:trusty/neutron-openvswitch-2
            agent-state: started
            agent-version: 1.20.11.1
            public-address: 10.0.4.62
  openstack-dashboard:
    charm: cs:trusty/openstack-dashboard-9
    exposed: false
    relations:
      cluster:
      - openstack-dashboard
      identity-service:
      - keystone
    units:
      openstack-dashboard/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/6
        open-ports:
        - 80/tcp
        - 443/tcp
        public-address: 10.0.4.155
  quantum-gateway:
    charm: cs:trusty/quantum-gateway-10
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      cluster:
      - quantum-gateway
      quantum-network-service:
      - nova-cloud-controller
      shared-db:
      - mysql
    units:
      quantum-gateway/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: "3"
        public-address: 10.0.4.10
  rabbitmq-server:
    charm: cs:trusty/rabbitmq-server-26
    exposed: false
    relations:
      amqp:
      - glance
      - glance-simplestreams-sync
      - neutron-api
      - neutron-openvswitch
      - nova-cloud-controller
      - nova-compute
      - quantum-gateway
      cluster:
      - rabbitmq-server
    units:
      rabbitmq-server/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/8
        open-ports:
        - 5672/tcp
        public-address: 10.0.4.11

You can also look at a subset of the status for a particular service, for example keystone with:

$ juju status keystone
environment: local
machines:
  "1":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: 10.0.4.128
    instance-id: ubuntu-local-machine-1
    series: trusty
    containers:
      1/lxc/2:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.142
        instance-id: ubuntu-local-machine-1-lxc-2
        series: trusty
        hardware: arch=amd64
    hardware: arch=amd64 cpu-cores=2 mem=6144M root-disk=20480M
services:
  keystone:
    charm: cs:trusty/keystone-12
    exposed: false
    relations:
      cluster:
      - keystone
      identity-service:
      - glance
      - glance-simplestreams-sync
      - neutron-api
      - nova-cloud-controller
      - openstack-dashboard
      shared-db:
      - mysql
    units:
      keystone/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 1/lxc/2
        public-address: 10.0.4.142

Monitoring the Installation

When performing an installation you can monitor the executed commands with:

$ tail -f $HOME/.cloud-install/commands.log

...

This provides a lot of debugging output. A more streamlined log is available with the automated installation described later.
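If the full log is too noisy, one option (a sketch, assuming the juju invocations appear verbatim in the log as above) is to follow only those lines:

$ tail -f $HOME/.cloud-install/commands.log | grep --line-buffered juju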

Uninstalling

Because the single server instance runs in an LXC container, uninstalling the environment is, as the documentation states, a rather trivial process that takes only a few seconds.

This tears down the cloud but leaves user data available for a subsequent deployment.

$ sudo openstack-install -k
Warning:

This will destroy the host Container housing the OpenStack private cloud. This is a permanent operation.
Proceed? [y/N] Y
Removing static route
Removing host container...
Container is removed.

You can also do a more permanent uninstall of the cloud and packages.

$ sudo openstack-install -u
Warning:

This will uninstall OpenStack and make a best effort to return the system back to its original state.
Proceed? [y/N] Y
Restoring system to last known state.
Ubuntu Openstack Installer Uninstalling ...Single install path.

This does not, however, seem to clean up $HOME/.cloud-install. You can safely remove this, or move it sideways, when re-deploying without any issues.
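For example, a minimal sketch of moving it sideways before re-deploying:

$ mv $HOME/.cloud-install $HOME/.cloud-install.$(date +%Y%m%d)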

Installation automation

As described in my original post, the openstack-install script provides a curses-based interactive view. You can automate the installation by defining the needed setup inputs in a separate configuration file and running in headless mode.

$ echo "install_type: Single
openstack_password: openstack" > install.yaml

$ sudo openstack-install --headless --config install.yaml

This has the added benefit of providing a more meaningful log of the state of the installation, with less verbose information than in the commands.log file.

[INFO  • 06-02 12:02:42 • cloudinstall.install] Running in headless mode.
[INFO  • 06-02 12:02:42 • cloudinstall.install] Performing a Single Install
[INFO  • 06-02 12:02:42 • cloudinstall.task] [TASKLIST] ['Initializing Environment', 'Creating container', 'Bootstrapping Juju']
[INFO  • 06-02 12:02:42 • cloudinstall.task] [TASK] Initializing Environment
[INFO  • 06-02 12:02:42 • cloudinstall.consoleui] Building environment
[INFO  • 06-02 12:02:42 • cloudinstall.single_install] Prepared userdata: {'extra_sshkeys': ['ssh-rsa ...\n'], 'seed_command': ['env', 'pollinate', '-q']}
[INFO  • 06-02 12:02:42 • cloudinstall.single_install] Setting permissions for user rbradfor
[INFO  • 06-02 12:02:43 • cloudinstall.task] [TASK] Creating container
[INFO  • 06-02 12:04:20 • cloudinstall.single_install] Setting DHCP properties for host container.
[INFO  • 06-02 12:04:20 • cloudinstall.single_install] Adding static route for 10.0.4.0/24 via 10.0.3.160
...
[INFO  • 06-02 12:22:50 • cloudinstall.consoleui] Checking availability of nova-cloud-controller: pending
[INFO  • 06-02 12:23:31 • cloudinstall.consoleui] Checking availability of nova-cloud-controller: installed
[INFO  • 06-02 12:23:52 • cloudinstall.consoleui] Checking availability of nova-cloud-controller: started
[INFO  • 06-02 12:24:34 • cloudinstall.consoleui] Checking availability of keystone: started
[INFO  • 06-02 12:24:44 • cloudinstall.consoleui] Checking availability of keystone: started
[INFO  • 06-02 12:24:44 • cloudinstall.consoleui] Checking availability of nova-cloud-controller: started
[INFO  • 06-02 12:27:38 • cloudinstall.consoleui] Checking availability of quantum-gateway: started
[INFO  • 06-02 12:27:38 • cloudinstall.consoleui] Checking availability of nova-cloud-controller: started
[INFO  • 06-02 12:27:38 • cloudinstall.consoleui] Validating network parameters for Neutron
[INFO  • 06-02 12:27:53 • cloudinstall.consoleui] All systems go!

And 25 minutes later you have an available cloud.
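In headless mode you can also script a wait for completion. A sketch that polls the deploy_complete flag written to config.yaml (shown later in this post):

$ until grep -q "deploy_complete: true" $HOME/.cloud-install/config.yaml; do sleep 60; done
$ echo "Cloud deployment complete"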

If you attempt to look at the GUI status page with openstack-status you will instead be given a text-based version of the messages, like:

$ sudo openstack-status
[INFO  • 06-02 12:06:21 • cloudinstall.core] Running openstack-status in headless mode.
[INFO  • 06-02 12:06:21 • cloudinstall.consoleui] Loaded placements from file.
[INFO  • 06-02 12:06:21 • cloudinstall.consoleui] Waiting for machines to start: 3 unknown
[INFO  • 06-02 12:08:20 • cloudinstall.consoleui] Waiting for machines to start: 1 pending, 2 unknown
[INFO  • 06-02 12:08:48 • cloudinstall.consoleui] Waiting for machines to start: 2 pending, 1 unknown
[INFO  • 06-02 12:09:04 • cloudinstall.consoleui] Waiting for machines to start: 1 down (started), 1 pending, 1 unknown
[INFO  • 06-02 12:09:13 • cloudinstall.consoleui] Waiting for machines to start: 1 down (started), 2 pending
[INFO  • 06-02 12:09:20 • cloudinstall.consoleui] Waiting for machines to start: 2 down (started), 1 pending
[INFO  • 06-02 12:09:26 • cloudinstall.consoleui] Waiting for machines to start: 1 pending, 2 started
[INFO  • 06-02 12:09:51 • cloudinstall.consoleui] Waiting for machines to start: 1 down (started), 2 started
[INFO  • 06-02 12:10:44 • cloudinstall.consoleui] Verifying service deployments
[INFO  • 06-02 12:10:44 • cloudinstall.consoleui] Missing ConsoleUI() attribute: set_pending_deploys
[INFO  • 06-02 12:10:44 • cloudinstall.consoleui] Checking if MySQL is deployed
[INFO  • 06-02 12:10:44 • cloudinstall.consoleui] Deploying MySQL to machine lxc:1
[INFO  • 06-02 12:10:49 • cloudinstall.consoleui] Deployed MySQL.
[INFO  • 06-02 12:10:49 • cloudinstall.consoleui] Checking if Juju GUI is deployed
[INFO  • 06-02 12:10:49 • cloudinstall.consoleui] Deploying Juju GUI to machine lxc:1
[INFO  • 06-02 12:11:00 • cloudinstall.consoleui] Deployed Juju GUI.
[INFO  • 06-02 12:11:00 • cloudinstall.consoleui] Checking if Keystone is deployed
[INFO  • 06-02 12:11:00 • cloudinstall.consoleui] Deploying Keystone to machine lxc:1
...

It seems you can trick it into providing both a GUI and a text version with the following in another shell session.

$ sed -ie "/headless/d" $HOME/.cloud-install/config.yaml
$ sudo openstack-status

NOTE: You will not get any output until the initial container is complete. This also leaves a .pid file that must be manually cleaned up if you run it too soon. The next invocation provides the following message.

$ sudo openstack-status
Another instance of openstack-status is running. If you're sure there are no other instances, please remove ~/.cloud-install/openstack.pid
$ rm $HOME/.cloud-install/openstack.pid

Monitoring the installation progress

The running config.yaml file changes over the duration of the installation.
Its most basic configuration (when starting with the GUI) is:

$ more $HOME/.cloud-install/config.yaml
current_state: 0
openstack_release: juno

The release is also defined in the $HOME/.cloud-install/openstack_release file.
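For example (assuming the juno release shown in the config above):

$ cat $HOME/.cloud-install/openstack_release
juno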

When starting by passing the configuration file as previously mentioned, its initial state is:

$ more $HOME/.cloud-install/config.yaml
config_file: install.yaml
current_state: 0
headless: true
install_type: Single
openstack_password: openstack
openstack_release: juno

This is updated when the LXC container is installed.

$ more $HOME/.cloud-install/config.yaml
config_file: install.yaml
container_ip: 10.0.3.77
current_state: 0
headless: true
install_type: Single
openstack_password: openstack
openstack_release: juno

It is updated again during installation, for example:

$ more $HOME/.cloud-install/config.yaml
config_file: install.yaml
container_ip: 10.0.3.77
current_state: 0
headless: true
install_type: Single
openstack_password: openstack
openstack_release: juno
placements:
  controller:
    assignments:
      LXC:
      - nova-cloud-controller
      - glance
      - glance-simplestreams-sync
      - openstack-dashboard
      - juju-gui
      - keystone
      - mysql
      - neutron-api
      - neutron-openvswitch
      - rabbitmq-server
    constraints:
      cpu-cores: 2
      mem: 6144
      root-disk: 20480
  nova-compute-machine-0:
    assignments:
      BareMetal:
      - nova-compute
    constraints:
      mem: 4096
      root-disk: 40960
  quantum-gateway-machine-0:
    assignments:
      BareMetal:
      - quantum-gateway
    constraints:
      mem: 2048
      root-disk: 20480

When completed, the configuration has the following settings:

config_file: install.yaml
container_ip: 10.0.3.77
current_state: 2
deploy_complete: true
install_type: Single
openstack_password: openstack
openstack_release: juno
placements:
...

Problems

When using the GUI installer, the first time you quit (using Q) it seems to leave the terminal in a broken state. The following will reset this to normal use.

$ stty sane    # then press Ctrl-J (rather than Enter) to submit the command

Subsequent uses of openstack-status do not have the same problem.

References

In my next post I am going to talk about the analysis undertaken to debug errors in the installation, starting with the Keystone – hook failed: “config-changed” message I got attempting to install kilo, and hence this more detailed analysis of the installation process components.

Installing Ubuntu OpenStack

The Canonical Distribution of Ubuntu OpenStack provides a simple installer to run an OpenStack cloud. You can deploy a simple single-machine setup with fully containerized services (11 in total), or a multi-server installation leveraging MAAS – Metal as a Service and Landscape Autopilot.

Installation

This post describes my experiences with the single machine setup on a 4-core machine with 32GB of RAM running a clean Ubuntu 14.04 LTS OS. The installation requires the following commands to configure the repo, then install and configure your OpenStack cloud. In this example, the installed version is 0.22.3.

sudo apt-add-repository -y ppa:cloud-installer/stable
sudo apt-get update
sudo apt-get install -y openstack
sudo openstack-install --version
sudo openstack-install

The final step uses a curses-based interface and only requires two inputs before the installation:

  • Specify a password
  • Specify the install type




The UI provides a progress status of the installation. Initially new containers will start with a Pending status. Once the Juju GUI container has started, the footer bar shows the URL for the JujuGUI, in my case http://10.0.4.112. Once the OpenStack Dashboard has started you will also get a Horizon URL in the footer, such as http://10.0.4.74/horizon.

Horizon

The Horizon display is what you generally expect.

JujuGUI

The JujuGUI provides a display of the deployment orchestration via charms. You can also drill down to specific services. An example is for the glance service using the charm cs:trusty/glance-11. This describes the relationships and configuration which are also seen in the GUI. You can also view online the full source code used to create this deployed service.

OpenStack Status

You can view the state of your containerized cloud with openstack-status, a curses-based display of the running installation, the same as used during the installation. This displays the units deployed, status messages and a footer URL bar that indicates the URLs of Horizon and JujuGUI. Each time you invoke this it will also check services, as indicated by the [INFO] messages.

Connecting to Containers

The installer will automatically create an SSH key for the user that you use to run the openstack-install command. This enables you to SSH to any of the containers, for example to connect to the MySQL container.

$ ssh ubuntu@<mysql-container-ip>
$ mysql -uroot -p`sudo cat /var/lib/mysql/mysql.passwd` -e "SHOW SCHEMAS"
+--------------------+
| Database           |
+--------------------+
| information_schema |
| glance             |
| keystone           |
| mysql              |
| neutron            |
| nova               |
| performance_schema |
+--------------------+

You can use the various OpenStack clients to access OpenStack services. These are not installed by default.

$ sudo apt-get install -y python-glanceclient python-openstackclient python-novaclient python-keystoneclient
$ source $HOME/.cloud-install/openstack-admin-rc
$ glance image-list
+--------------------------------------+---------------------------------------------------------------+-------------+------------------+-----------+--------+
| ID                                   | Name                                                          | Disk Format | Container Format | Size      | Status |
+--------------------------------------+---------------------------------------------------------------+-------------+------------------+-----------+--------+
| f3cd4ec6-8ce6-4a44-85ec-2f8f066f351b | auto-sync/ubuntu-trusty-14.04-amd64-server-20150528-disk1.img | qcow2       | bare             | 257294848 | active |
+--------------------------------------+---------------------------------------------------------------+-------------+------------------+-----------+--------+
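As python-openstackclient is also installed above, the same listing should be available via the unified client (a sketch; the column layout will differ slightly):

$ openstack image list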

More Information

Read Tracking the Ubuntu OpenStack installation process for more detailed information on monitoring the installation process.

Thanks to the New York OpenStack Group and a presentation by Mark Baker of Canonical, who demonstrated MAAS and Landscape AutoPilot installation of OpenStack. Slides: Automating hard things.

The benefit of attending the OpenStack Summit

I attended my first OpenStack Summit in Vancouver in May 2015. While I have used various cloud computing technologies for eight years and presented cloud content at events such as Cloud Expo, this was my first involvement with OpenStack.

It was an essential experience for becoming more familiar with the technology, the ecosystem, the contributors, and the process by which OpenStack software evolves.

If you are new to OpenStack I would strongly advise you to read about and play with OpenStack first, so that you have a basic knowledge of some of the core projects. The conference can then provide the accelerated learning to guide you toward a possible home in the ecosystem. OpenStack is a large and complex product with many different projects. Having been involved in the mailing list and IRC for a few weeks prior to the conference, I was able to interact with core OpenStack developers and PTLs. This made it much easier to introduce myself at the conference and meet people face to face.

If you missed it, there are many of the Vancouver Summit Videos online.

Running during the summit, alongside the conference sessions for attendees, was the Design Summit. This is where the individual projects discuss the “design” of the next release of OpenStack, including talks on the features projects are hoping to include. These sessions are very informative for seeing just how the OpenStack ecosystem improves and develops the product, and I found them highly valuable. Not being a seasoned OpenStack developer, this also enabled me to evaluate a range of different projects. I would encourage you to be open-minded and attend design sessions in multiple projects to see how individual teams work, and to find areas of interest. My design summit experience included attending sessions on Keystone, Magnum, OpenStackClient, Infra, Tempest and Rally.

Learning the OpenStackClient (OSC)

As a way to navigate the extent of the CLI options for the nova, keystone, glance and openstack commands, I came up with an educational approach.

While still in early development, the goal is to provide Beginner/Intermediate/Expert views exposing various commands and options to help the user learn in a controlled way.

This initial V0.5 version provides all the glue to test commands against an OpenStack Cloud.

The tool is designed to be self-explanatory if you have the most basic understanding of the OSC. Starting with Help gives you an overview of the options.

As this runs on my webserver, the default operation works well for displaying help syntax and error conditions. It also connects to a functioning OpenStack Cloud; in my first example it works with Mirantis Express. I intend to provide a means to install it within a devstack installation.

The walkthrough covers the following screens:

  • Home Page
  • Help
  • Running your first command (using help)
  • Running your first command (producing error output)
  • Adding Authentication
  • Listing Images
  • Show details of an Image in JSON format

Disabling the temporary authorization token in devstack keystone

While building my own OpenStack cloud on physical servers I realized that Keystone uses a temporary authorization token in the Create the service entity and API endpoint and Create projects, users, and roles steps.

The Verify operation step makes reference to removing this mechanism; however, my current devstack installations have not done this.

To verify this I use the SERVICE_TOKEN as defined in my devstack/local.conf and the Keystone Admin URL.
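For example, a hypothetical local.conf entry matching the placeholder token used below:

$ grep SERVICE_TOKEN devstack/local.conf
SERVICE_TOKEN=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee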

$ openstack --os-token=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee --os-url=http://controller:35357/v2.0 user list
+----------------------------------+----------------------------------+
| ID                               | Name                             |
+----------------------------------+----------------------------------+
| 554209509f5b47e286e0379bcbf66762 | admin                            |
| 59ac0457a80d449c9dac3b66848f2b5b | demo                             |
| 8aab962698f9460692efb8d3aab35886 | verify_tempest_config-1304647972 |
| 8b602467cd9045888687987067cbd3f6 | alt_demo                         |
| a134c3b33e94475fb5398643dd816053 | glance                           |
| c68c68579ec0437094a14dfbc4728224 | cinder                           |
| e65bd34ca85a429ea5c56bf980f77d67 | nova                             |
+----------------------------------+----------------------------------+

Removing the configuration settings from /etc/keystone/keystone-paste.ini as documented DOES NOT disable this level of access.

NOTE: This edit removes the admin_token_auth option from the pipeline setting in the [pipeline:public_api], [pipeline:admin_api] and [pipeline:api_v3] sections.

$ sudo sed -ie "s/ admin_token_auth / /" /etc/keystone/keystone-paste.ini
$ openstack --os-token=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee --os-url=http://controller:35357/v2.0 user list
+----------------------------------+----------------------------------+
| ID                               | Name                             |
+----------------------------------+----------------------------------+
| 554209509f5b47e286e0379bcbf66762 | admin                            |
| 59ac0457a80d449c9dac3b66848f2b5b | demo                             |
| 8aab962698f9460692efb8d3aab35886 | verify_tempest_config-1304647972 |
| 8b602467cd9045888687987067cbd3f6 | alt_demo                         |
| a134c3b33e94475fb5398643dd816053 | glance                           |
| c68c68579ec0437094a14dfbc4728224 | cinder                           |
| e65bd34ca85a429ea5c56bf980f77d67 | nova                             |
+----------------------------------+----------------------------------+

An additional (and not presently documented) step of restarting apache is needed to invalidate this access.

$ sudo service apache2 restart
$ openstack --os-token=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee --os-url=http://controller:35357/v2.0 user list
ERROR: openstack Could not find token: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-617961b7-012a-4d61-bdfb-aa738b8f788f)
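A quick check (a sketch) confirms the filter is no longer referenced in any pipeline:

$ grep admin_token_auth /etc/keystone/keystone-paste.ini || echo "admin_token_auth removed"
admin_token_auth removed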

The results for the command as shown can be produced by using the user/password credentials with the Keystone public URL.

$ openstack --os-username=admin --os-password=passwd --os-project-name=admin --os-auth-url=http://localhost:5000/ user list
+----------------------------------+----------------------------------+
| ID                               | Name                             |
+----------------------------------+----------------------------------+
| 554209509f5b47e286e0379bcbf66762 | admin                            |
| 59ac0457a80d449c9dac3b66848f2b5b | demo                             |
| 8aab962698f9460692efb8d3aab35886 | verify_tempest_config-1304647972 |
| 8b602467cd9045888687987067cbd3f6 | alt_demo                         |
| a134c3b33e94475fb5398643dd816053 | glance                           |
| c68c68579ec0437094a14dfbc4728224 | cinder                           |
| e65bd34ca85a429ea5c56bf980f77d67 | nova                             |
+----------------------------------+----------------------------------+

Understanding the different OpenStack tox configs

OpenStack projects use tox to manage virtual environments and run unit tests, which I talked about here.

In this example I am using the oslo.config repo to look at the various tox configs in use across OpenStack. The Governance Project Testing Interface is a starting point for reading about project guidelines.

Get the current codebase

$ git clone git://git.openstack.org/openstack/oslo.config
$ cd oslo.config/
$ git rev-parse HEAD
7b1e157aeea426c58e3d4c9a76a231f0e5bc8241

The last line helps me identify the specific git commit I am working with. When moving between branches, or when looking at a repo that may be a few days old, this is all I need to recreate the exact codebase. For example, to look at the prior version 3ab403925e9cb2928ba8e893c4d0f4a6f4b27d72:

$ git checkout 3ab403925e9cb2928ba8e893c4d0f4a6f4b27d72
Note: checking out '3ab403925e9cb2928ba8e893c4d0f4a6f4b27d72'.
...
HEAD is now at 3ab4039... Merge "Added Raw Value Loading to Test Fixture"
$ git rev-parse HEAD
3ab403925e9cb2928ba8e893c4d0f4a6f4b27d72

To revert back to the current repo master.

$ git checkout master
Previous HEAD position was 3ab4039... Merge "Added Raw Value Loading to Test Fixture"
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'.
$ git rev-parse HEAD
7b1e157aeea426c58e3d4c9a76a231f0e5bc8241

NOTE: You don’t need to specify the full commit hash. In this example 3ab4039 also works.

tox configuration

The tox.ini file contains various config sections.

  • [tox] are global options
  • [testenv] are the default options for each virtual environment
  • [testenv:NAME] are the specific test environments

tox.ini

$ cat tox.ini
[tox]
distribute = False
envlist = py33,py34,py26,py27,pep8

[testenv]
setenv = VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands = python setup.py testr --slowest --testr-args='{posargs}'

[testenv:pep8]
commands = flake8

[testenv:cover]
setenv = VIRTUAL_ENV={envdir}
commands =
  python setup.py testr --coverage

[testenv:venv]
commands = {posargs}

[testenv:docs]
commands = python setup.py build_sphinx

[flake8]
show-source = True
exclude = .tox,dist,doc,*.egg,build

NOTE: These files differ between projects. See later for an example comparison with python-openstackclient, nova and horizon.

Prerequisites

The default [testenv] options first refer to requirements.txt and test-requirements.txt, which define the specific packages and required versions: either a minimum (e.g. netaddr>=0.7.12), a range (e.g. stevedore>=1.3.0,<1.4.0) or more specific constraints (e.g. pbr>=0.6,!=0.7,<1.0).

requirements.txt

$ cat requirements.txt
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.

pbr>=0.6,!=0.7,<1.0
argparse
netaddr>=0.7.12
six>=1.9.0
stevedore>=1.3.0,<1.4.0  # Apache-2.0

test-requirements.txt

$ cat test-requirements.txt
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.

hacking>=0.10.0,<0.11

discover
fixtures>=0.3.14
python-subunit>=0.0.18
testrepository>=0.0.18
testscenarios>=0.4
testtools>=0.9.36,!=1.2.0
oslotest>=1.5.1,<1.6.0  # Apache-2.0

# when we can require tox>= 1.4, this can go into tox.ini:
#  [testenv:cover]
#  deps = {[testenv]deps} coverage
coverage>=3.6

# this is required for the docs build jobs
sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
oslosphinx>=2.5.0,<2.6.0  # Apache-2.0

# Required only for tests
oslo.i18n>=1.5.0,<1.6.0  # Apache-2.0

# mocking framework
mock>=1.0

Style Guidelines (PEP8)

The first test we look at is pep8, run by flake8. This starts by reviewing the code against the Style Guide for Python Code and any specific OpenStack Style Guidelines.

$ tox -e pep8
GLOB sdist-make: /home/rbradfor/oslo.config/setup.py
pep8 inst-nodeps: /home/rbradfor/oslo.config/.tox/dist/oslo.config-1.10.0.zip
pep8 runtests: PYTHONHASHSEED='3973315668'
pep8 runtests: commands[0] | flake8
_________ summary ___________
  pep8: commands succeeded
  congratulations :)

As with all unit tests you want to see "The bar is green, the code is clean". An example of a failing test would look like:

$ tox -e pep8
GLOB sdist-make: /home/rbradfor/oslo.config/setup.py
pep8 inst-nodeps: /home/rbradfor/oslo.config/.tox/dist/oslo.config-1.10.0.zip
pep8 runtests: PYTHONHASHSEED='820640265'
pep8 runtests: commands[0] | flake8
./oslo_config/types.py:51:31: E702 multiple statements on one line (semicolon)
        self.choices = choices; self.quotes = quotes
                              ^
ERROR: InvocationError: '/home/rbradfor/oslo.config/.tox/pep8/bin/flake8'
________ summary _________
ERROR:   pep8: commands failed


$ tox -e pep8
GLOB sdist-make: /home/rbradfor/oslo.config/setup.py
pep8 inst-nodeps: /home/rbradfor/oslo.config/.tox/dist/oslo.config-1.10.0.zip
pep8 runtests: PYTHONHASHSEED='1937373059'
pep8 runtests: commands[0] | flake8
./oslo_config/types.py:52:13: E113 unexpected indentation
            self.quotes = quotes
            ^
./oslo_config/types.py:52:13: E901 IndentationError: unexpected indent
            self.quotes = quotes
            ^
ERROR: InvocationError: '/home/rbradfor/oslo.config/.tox/pep8/bin/flake8'
__________ summary __________
ERROR:   pep8: commands failed

Running tests

To run all tests for a given Python version you just specify said version.

$ tox -e py27
GLOB sdist-make: /home/rbradfor/oslo.config/setup.py
py27 inst-nodeps: /home/rbradfor/oslo.config/.tox/dist/oslo.config-1.10.0.zip
py27 runtests: PYTHONHASHSEED='1822382852'
py27 runtests: commands[0] | python setup.py testr --slowest --testr-args=
running testr
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ . --list
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ .  --load-list /tmp/tmpbHjMgm
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ .  --load-list /tmp/tmpLA0oO0
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ .  --load-list /tmp/tmpMqT_s_
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ .  --load-list /tmp/tmpyJLbu8
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ .  --load-list /tmp/tmpF5KG5t
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ .  --load-list /tmp/tmpebkBDp
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ .  --load-list /tmp/tmpscXbNV
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ .  --load-list /tmp/tmpTv0jAn
Ran 1182 tests in 0.475s (-0.068s)
PASSED (id=4)
Slowest Tests
Test id                                                                                                Runtime (s)
-----------------------------------------------------------------------------------------------------  -----------
tests.test_cfg.ConfigFileOptsTestCase.test_conf_file_dict_value_no_colon                               0.029
oslo_config.tests.test_cfg.ConfigFileReloadTestCase.test_conf_files_reload_default                     0.024
oslo_config.tests.test_cfg.SubCommandTestCase.test_sub_command_resparse                                0.016
tests.test_cfg.ConfigFileOptsTestCase.test_conf_file_dict_ignore_dname                                 0.016
tests.test_cfg.ConfigFileOptsTestCase.test_conf_file_list_spaces_use_dgroup_and_dname                  0.016
tests.test_cfg.MultipleDeprecatedCliOptionsTestCase.test_conf_file_override_use_deprecated_multi_opts  0.015
oslo_config.tests.test_cfg.OverridesTestCase.test_default_override                                     0.014
oslo_config.tests.test_cfg.ConfigFileOptsTestCase.test_conf_file_list_default_wrong_type               0.014
oslo_config.tests.test_cfg.RequiredOptsTestCase.test_missing_required_group_opt                        0.012
tests.test_generator.GeneratorTestCase.test_generate(long_help,output_file)                            0.011
________ summary _______
  py27: commands succeeded
  congratulations :)

You can pass a specific test or tests via the command line, identifying the names by looking at the test classes.

$ ls -l oslo_config/tests/[^_]*.py
-rw-rw-r-- 1 rbradfor rbradfor  12788 Apr 30 12:46 oslo_config/tests/test_cfgfilter.py
-rw-rw-r-- 1 rbradfor rbradfor 144538 Apr 30 12:46 oslo_config/tests/test_cfg.py
-rw-rw-r-- 1 rbradfor rbradfor   4938 Apr 30 12:46 oslo_config/tests/test_fixture.py
-rw-rw-r-- 1 rbradfor rbradfor  16479 Apr 30 12:46 oslo_config/tests/test_generator.py
-rw-rw-r-- 1 rbradfor rbradfor   3865 Apr 30 12:46 oslo_config/tests/test_iniparser.py
-rw-rw-r-- 1 rbradfor rbradfor  13259 Apr 30 12:46 oslo_config/tests/test_types.py

NOTE: This project has a top-level /tests directory which uses the old import API and, I am informed, is being removed for Liberty.

$ tox -e py27 -- test_types
GLOB sdist-make: /home/rbradfor/oslo.config/setup.py
py27 create: /home/rbradfor/oslo.config/.tox/py27
py27 installdeps: -r/home/rbradfor/oslo.config/requirements.txt, -r/home/rbradfor/oslo.config/test-requirements.txt
py27 inst: /home/rbradfor/oslo.config/.tox/dist/oslo.config-1.10.0.zip
py27 runtests: PYTHONHASHSEED='1505218584'
py27 runtests: commands[0] | python setup.py testr --slowest --testr-args=test_types
running testr
...
Ran 186 (-996) tests in 0.100s (-0.334s)
PASSED (id=6)
Slowest Tests
Test id                                                                                     Runtime (s)
------------------------------------------------------------------------------------------  -----------
tests.test_types.BooleanTypeTests.test_other_values_produce_error                           0.001
oslo_config.tests.test_types.DictTypeTests.test_equal                                       0.001
oslo_config.tests.test_types.BooleanTypeTests.test_not_equal_to_other_class                 0.000
tests.test_types.IntegerTypeTests.test_positive_values_are_valid                            0.000
tests.test_types.DictTypeTests.test_dict_of_dicts                                           0.000
oslo_config.tests.test_types.ListTypeTests.test_not_equal_with_non_equal_custom_item_types  0.000
tests.test_types.IntegerTypeTests.test_with_max_and_min                                     0.000
oslo_config.tests.test_types.FloatTypeTests.test_exponential_format                         0.000
tests.test_types.BooleanTypeTests.test_yes                                                  0.000
tests.test_types.ListTypeTests.test_repr                                                    0.000
________ summary ________
  py27: commands succeeded
  congratulations :)
$ echo $?
0
$ tox -epy27 -- '(test_types|test_generator)'

A failing test is going to produce the following.

$ tox -epy27 -- test_types
GLOB sdist-make: /home/rbradfor/oslo.config/setup.py
py27 create: /home/rbradfor/oslo.config/.tox/py27
py27 installdeps: -r/home/rbradfor/oslo.config/requirements.txt, -r/home/rbradfor/oslo.config/test-requirements.txt
py27 inst: /home/rbradfor/oslo.config/.tox/dist/oslo.config-1.10.0.zip
py27 runtests: PYTHONHASHSEED='3672144590'
py27 runtests: commands[0] | python setup.py testr --slowest --testr-args=test_types
running testr
...
======================================================================
FAIL: oslo_config.tests.test_types.IPv4AddressTypeTests.test_ipv4_address
tags: worker-0
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/rbradfor/oslo.config/oslo_config/tests/test_types.py", line 386, in test_ipv4_address
    self.assertConvertedValue('192.168.0.1', '192.168.0.2')
  File "/home/rbradfor/oslo.config/oslo_config/tests/test_types.py", line 27, in assertConvertedValue
    self.assertEqual(expected, self.type_instance(s))
  File "/usr/lib/python2.7/unittest/case.py", line 515, in assertEqual
    assertion_func(first, second, msg=msg)
  File "/usr/lib/python2.7/unittest/case.py", line 508, in _baseAssertEqual
    raise self.failureException(msg)
AssertionError: '192.168.0.2' != '192.168.0.1'
Ran 186 (-117) tests in 0.102s (-0.046s)
FAILED (id=8, failures=2 (+2))
error: testr failed (1)
ERROR: InvocationError: '/home/rbradfor/oslo.config/.tox/py27/bin/python setup.py testr --slowest --testr-args=test_types'
________ summary ________
ERROR:   py27: commands failed
$ echo $?
1

Testr

This is a wrapper for the underlying testr command (found in the commands setting of the [testenv] section). We can reproduce what this runs manually with:

$ source .tox/py27/bin/activate
(py27)$  python setup.py testr
running testr
...
Ran 1182 tests in 0.443s (-0.025s)
PASSED (id=5)

The current tox.ini config includes the --slowest argument, which is self-explanatory.

One benefit of running this specifically is when writing failing tests (i.e. the Test Driven Development (TDD) approach to agile software development). You do not really want to run all tests in order to see a failure. The -f option helps.

$ testr run
...
Ran 1182 (+637) tests in 2.058s (+1.064s)
FAILED (id=12, failures=2 (+1))
$ testr run -- -f
...
Ran 545 (-637) tests in 1.075s (-0.900s)
FAILED (id=13, failures=1 (-1))
$ testr run test_types -- -f
...
Ran 34 (-152) tests in 0.030s (-0.000s)
FAILED (id=18, failures=1 (-1))

NOTE: It takes a bit to work out the syntax of tox and testr and the handling of doubledash (--) placement. When you work it out you realize you can reproduce this with tox directly using:

$ tox -e py27 -- test_types -- -f
...
Ran 151 (+117) tests in 0.125s (+0.120s)
FAILED (id=19, failures=2 (+1))
error: testr failed (1)
ERROR: InvocationError: '/home/rbradfor/oslo.config/.tox/py27/bin/python setup.py testr --slowest --testr-args=test_types -- -f'
________ summary ________
ERROR:   py27: commands failed

The reason for dropping into an activated virtual environment and running testr manually is that tox will destroy and recreate your virtual environment each time the command is executed, which is time consuming.
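So a typical edit/test loop (a sketch using the commands shown above) looks like:

$ source .tox/py27/bin/activate
(py27)$ testr run test_types -- -f
(py27)$ deactivate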

The Testr source can be found at testrepository, identified by (py27)$ more `which testr`.

Testr syntax

Testr has multiple options and commands you can read about via various help options:

$ testr help
$ testr quickstart
$ testr commands
$ testr help run

Usage: testr run [options] testfilters* doubledash? testargs*
...

While debugging, several testr commands were useful.

List all tests

$ testr list-tests
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ . --list
oslo_config.tests.test_cfg.ChoicesTestCase.test_choice_bad
oslo_config.tests.test_cfg.ChoicesTestCase.test_choice_default
oslo_config.tests.test_cfg.ChoicesTestCase.test_choice_good
oslo_config.tests.test_cfg.ChoicesTestCase.test_conf_file_bad_choice_value
oslo_config.tests.test_cfg.ChoicesTestCase.test_conf_file_choice_bad_default
oslo_config.tests.test_cfg.ChoicesTestCase.test_conf_file_choice_empty_value
...

(py27)$ testr list-tests | wc -l
1183

1183 - 1 (the running= header line) corresponds to the 1182 tests run.
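Excluding that header line directly (a sketch):

(py27)$ testr list-tests | grep -vc "^running="
1182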

Last run

This enables you to review the last test run (even from a separate shell session) and also get the correct error response code.

(py27)$ testr last
Ran 1182 tests in 0.575s (+0.099s)
PASSED (id=27)
(py27)$ echo $?
0
(py27)$ testr last
======================================================================
FAIL: oslo_config.tests.test_types.IPAddressTypeTests.test_ipv4_address
tags: worker-6
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/rbradfor/oslo.config/oslo_config/tests/test_types.py", line 386, in test_ipv4_address
    self.assertConvertedValue('192.168.0.1', '192.168.0.2')
  File "/home/rbradfor/oslo.config/oslo_config/tests/test_types.py", line 27, in assertConvertedValue
    self.assertEqual(expected, self.type_instance(s))
  File "/usr/lib/python2.7/unittest/case.py", line 515, in assertEqual
    assertion_func(first, second, msg=msg)
  File "/usr/lib/python2.7/unittest/case.py", line 508, in _baseAssertEqual
    raise self.failureException(msg)
AssertionError: '192.168.0.2' != '192.168.0.1'
======================================================================
FAIL: oslo_config.tests.test_types.IPv4AddressTypeTests.test_ipv4_address
tags: worker-7
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/rbradfor/oslo.config/oslo_config/tests/test_types.py", line 386, in test_ipv4_address
    self.assertConvertedValue('192.168.0.1', '192.168.0.2')
  File "/home/rbradfor/oslo.config/oslo_config/tests/test_types.py", line 27, in assertConvertedValue
    self.assertEqual(expected, self.type_instance(s))
  File "/usr/lib/python2.7/unittest/case.py", line 515, in assertEqual
    assertion_func(first, second, msg=msg)
  File "/usr/lib/python2.7/unittest/case.py", line 508, in _baseAssertEqual
    raise self.failureException(msg)
AssertionError: '192.168.0.2' != '192.168.0.1'
Ran 1182 tests in 0.445s (-0.130s)
FAILED (id=28, failures=2 (+2))
(py27)$ echo $?
1

Code Coverage

The tox.ini also provides a section for code coverage.

$ tox -e cover
GLOB sdist-make: /home/rbradfor/oslo.config/setup.py
cover inst-nodeps: /home/rbradfor/oslo.config/.tox/dist/oslo.config-1.10.0.zip
cover runtests: PYTHONHASHSEED='546795877'
cover runtests: commands[0] | python setup.py testr --coverage
running testr
...
Ran 1182 tests in 0.493s (-0.046s)
PASSED (id=26)
_________ summary _________
  cover: commands succeeded
  congratulations :)

Which is a wrapper for:

$ python setup.py testr --coverage
...
Ran 1182 tests in 0.592s (+0.116s)
PASSED (id=27)

These commands produce a /cover directory (which is not currently in .gitignore). The contents are HTML. I suspect there is likely an option for a more CLI-readable format; however, for simplicity I published these to an available running web server.
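For a quick look without configuring a full web server, Python's built-in module is an alternative sketch (I used the Apache setup below instead):

$ cd cover && python -m SimpleHTTPServer 8000
# then browse http://localhost:8000/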

Apache Setup

In order to view what code coverage produces I configured Apache with a separate port and vhost in this devstack environment.

$ echo "ServerName "`hostname` | sudo tee /etc/apache2/conf-enabled/servername.conf
$ echo "Listen 81


    DocumentRoot /var/www/html
    
        Options FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        allow from all
    

    LogLevel warn
    ErrorLog \${APACHE_LOG_DIR}/localhost.error.log
    CustomLog \${APACHE_LOG_DIR}/localhost.access.log combined
" | sudo tee /etc/apache2/sites-enabled/localhost.conf
$ sudo apache2ctl graceful

Then I simply copied the project's coverage output as a quick hack to view it.

$ sudo cp -r cover/ /var/www/html/
$ sudo apt-get install lynx-cur
$ lynx http://localhost:81/cover
                             Module                            statements missing excluded coverage
   Total                                                       12         0       0        100%
   .tox/py27/lib/python2.7/site-packages/oslo/config/__init__  6          0       0        100%
   .tox/py27/lib/python2.7/site-packages/oslo/config/cfg       1          0       0        100%
   .tox/py27/lib/python2.7/site-packages/oslo/config/cfgfilter 1          0       0        100%
   .tox/py27/lib/python2.7/site-packages/oslo/config/fixture   1          0       0        100%
   .tox/py27/lib/python2.7/site-packages/oslo/config/generator 1          0       0        100%
   .tox/py27/lib/python2.7/site-packages/oslo/config/iniparser 1          0       0        100%
   .tox/py27/lib/python2.7/site-packages/oslo/config/types     1          0       0        100%

   coverage.py v3.7.1

Documentation

The last testenv setup in oslo.config is for documentation.

$ tox -e docs
GLOB sdist-make: /home/rbradfor/oslo.config/setup.py
docs create: /home/rbradfor/oslo.config/.tox/docs
docs installdeps: -r/home/rbradfor/oslo.config/requirements.txt, -r/home/rbradfor/oslo.config/test-requirements.txt
docs inst: /home/rbradfor/oslo.config/.tox/dist/oslo.config-1.10.0.zip
docs runtests: PYTHONHASHSEED='4293391351'
docs runtests: commands[0] | python setup.py build_sphinx
running build_sphinx
creating /home/rbradfor/oslo.config/doc/build
creating /home/rbradfor/oslo.config/doc/build/doctrees
creating /home/rbradfor/oslo.config/doc/build/html
Running Sphinx v1.2.3
loading pickled environment... not yet created
Using openstack theme from /home/rbradfor/oslo.config/.tox/docs/local/lib/python2.7/site-packages/oslosphinx/theme
building [html]: all source files
updating environment: 15 added, 0 changed, 0 removed
reading sources... [100%] types
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [100%] types
writing additional files... genindex py-modindex search
copying static files... WARNING: html_static_path entry u'/home/rbradfor/oslo.config/doc/source/static' does not exist
done
copying extra files... done
dumping search index... done
dumping object inventory... done
build succeeded, 1 warning.
creating /home/rbradfor/oslo.config/doc/build/man
Running Sphinx v1.2.3
loading pickled environment... done
Using openstack theme from /home/rbradfor/oslo.config/.tox/docs/local/lib/python2.7/site-packages/oslosphinx/theme
building [man]: all source files
updating environment: 0 added, 0 changed, 0 removed
looking for now-outdated files... none found
writing... osloconfig.1 { cfg opts types configopts cfgfilter helpers fixture parser exceptions namespaces styleguide generator faq contributing }
build succeeded.
___________________________________________________________________________________________________________________________ summary ____________________________________________________________________________________________________________________________
  docs: commands succeeded
  congratulations :)

This creates a /doc directory (in .gitignore) which I copied to my previously configured web server to view in HTML.

$ sudo cp -r doc/ /var/www/html/
$ lynx http://localhost:81/doc/build/html

Other tox.ini configuration

As I navigate around other OpenStack projects I have noticed some differences. These include:

Alternative global settings

[tox]
minversion = 1.6
skipsdist = True

More detailed [testenv]

[testenv]
setenv = VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands = python setup.py testr --slowest --testr-args='{posargs}'

[testenv]
usedevelop = True
install_command = pip install -U {opts} {packages}
setenv = VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands = python setup.py testr --testr-args='{posargs}'
whitelist_externals = bash

Some fancy output coloring.

[testenv]
usedevelop = True
install_command = pip install -U {opts} {packages}
setenv = VIRTUAL_ENV={envdir}
         NOSE_WITH_OPENSTACK=1
         NOSE_OPENSTACK_COLOR=1
         NOSE_OPENSTACK_RED=0.05
         NOSE_OPENSTACK_YELLOW=0.025
         NOSE_OPENSTACK_SHOW_ELAPSED=1
# Note the hash seed is set to 0 until horizon can be tested with a
# random hash seed successfully.
         PYTHONHASHSEED=0
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands = /bin/bash run_tests.sh -N --no-pep8 {posargs}

[testenv]
usedevelop = True
# tox is silly... these need to be separated by a newline....
whitelist_externals = bash
                      find
install_command = pip install -U --force-reinstall {opts} {packages}
# Note the hash seed is set to 0 until nova can be tested with a
# random hash seed successfully.
setenv = VIRTUAL_ENV={envdir}
         OS_TEST_PATH=./nova/tests/unit
         LANGUAGE=en_US
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands =
  find . -type f -name "*.pyc" -delete
  bash tools/pretty_tox.sh '{posargs}'
# there is also secret magic in pretty_tox.sh which lets you run in a fail only
# mode. To do this define the TRACE_FAILONLY environmental variable.

Alternative [testenv:NAME] sections

[testenv:functional]
commands = bash -x {toxinidir}/functional/harpoon.sh

[testenv:debug]
commands = oslo_debug_helper -t openstackclient/tests {posargs}

[tox:jenkins]
downloadcache = ~/cache/pip

[testenv:jshint]
commands = nodeenv -p
           npm install jshint -g
           /bin/bash run_tests.sh -N --jshint

[testenv:genconfig]
commands =
  bash tools/config/generate_sample.sh -b . -p nova -o etc/nova

Different Style guidelines

[flake8]
show-source = True
exclude = .tox,dist,doc,*.egg,build

[flake8]
show-source = True
exclude =  .venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,build,tools
[testenv:pep8]
commands = flake8

[testenv:pep8]
commands =
  /bin/bash run_tests.sh -N --pep8
  /bin/bash run_tests.sh -N --makemessages --check-only

Different Code Coverage

[testenv:cover]
commands = python setup.py testr --coverage --testr-args='{posargs}'

[testenv:cover]
# Also do not run test_coverage_ext tests while gathering coverage as those
# tests conflict with coverage.
commands =
  coverage erase
  python setup.py testr --coverage \
    --testr-args='{posargs}'
  coverage combine
  coverage html --include='nova/*' --omit='nova/openstack/common/*' -d covhtml -i

Different Docs

[testenv:docs]
commands = python setup.py build_sphinx

[testenv:docs]
commands =
  python setup.py build_sphinx
  bash -c '! find doc/ -type f -name *.json | xargs -t -n1 python -m json.tool 2>&1 > /dev/null | grep -B1 -v ^python'

Additional sections

[testenv:pip-missing-reqs]
# do not install test-requirements as that will pollute the virtualenv for
# determining missing packages
# this also means that pip-missing-reqs must be installed separately, outside
# of the requirements.txt files
deps = pip_missing_reqs
       -rrequirements.txt
commands=pip-missing-reqs -d --ignore-file=nova/tests/* nova

[hacking]
import_exceptions = oslo_log._i18n

What's Next

In a followup blog I will be talking about debugging with pdb and how to use this with tox.

References

Running openstack tests with tox

Recently the OSC (python-openstackclient) project removed the run_tests.sh #177066 and tools/install_venv.py #177086 scripts.

As I was very new to OpenStack development practices this threw me. I had read several OpenStack documentation pages, including Getting the code, which in Hacking on your laptop and running unit tests specifically mentions an example Setting Up a Developer Environment, and had consulted with a friend who is an ATC; this was the way I had learned to set up virtual environments and run tests.

The Testing OpenStack Projects documentation also refers to run_tests.sh, however it caveats this with “There is an older convention, as follows. Most projects have a shell script, named “run_tests.sh”, that runs the unit tests of that project.” (i.e. the devil is in the details).

With run_tests.sh and tools/install_venv.py removed, what is the *correct* way?

Setting up a virtual environment with tox

First you create the tox virtual environments. The tox configuration is held in tox.ini and supports multiple environments for compatibility testing. You can see the environments created with:

$ grep envlist tox.ini
envlist = py33,py34,py26,py27,pep8

I have chosen to specify the Python 2.7 version. I am running Ubuntu 14.04.2 LTS, which has Python 2.7 and Python 3.4. The HACKING.rst docs make reference to “OpenStackClient strives to be Python 3.3 compatible.” Side note: Python 3.4 initially failed to work with the openstackclient codebase; see later for the issues I was seeing.

I setup the 2.7 virtual environment without running any tests.

$ tox -e py27 --notest
py27 create: /home/rbradfor/tmp/python-openstackclient/.tox/py27
py27 installdeps: -r/home/rbradfor/tmp/python-openstackclient/requirements.txt, -r/home/rbradfor/tmp/python-openstackclient/test-requirements.txt
py27 develop-inst: /home/rbradfor/tmp/python-openstackclient
_________ summary ____________
  py27: skipped tests
  congratulations :)

I can then reference the openstack binary directly in this virtual environment with:

$ .tox/py27/bin/openstack --version
openstack 1.1.0

You can use this virtual environment without requiring any pathing by activating it.

$ source .tox/py27/bin/activate
$ openstack

This actually adds the applicable /bin directory to PATH and not PYTHONPATH.

$ which openstack
/home/rbradfor/tmp/python-openstackclient/.tox/py27/bin/openstack
$ openstack --version
openstack 1.1.0

Running Tests

There are now several ways I can run individual or full tests.

$ tox -epy27 -- test_shell
py27 create: /home/rbradfor/tmp/python-openstackclient/.tox/py27
py27 installdeps: -r/home/rbradfor/tmp/python-openstackclient/requirements.txt, -r/home/rbradfor/tmp/python-openstackclient/test-requirements.txt
py27 develop-inst: /home/rbradfor/tmp/python-openstackclient
py27 runtests: PYTHONHASHSEED='2928878700'
py27 runtests: commands[0] | python setup.py testr --testr-args=test_shell
running testr
...
Ran 32 tests in 0.148s (+0.029s)
PASSED (id=6)
_____ summary _________________
  py27: commands succeeded
  congratulations :)

NOTE: It seems every second invocation of this fails. This is what I see.

$ tox -epy27 -- test_shell
py27 recreate: /home/rbradfor/tmp/python-openstackclient/.tox/py27
ERROR: invocation failed (exit code 3), logfile: /home/rbradfor/tmp/python-openstackclient/.tox/py27/log/py27-0.log
ERROR: actionid=py27
msg=getenv
cmdargs=['/usr/bin/python', '-m', 'virtualenv', '--setuptools', '--python', '/home/rbradfor/tmp/python-openstackclient/.tox/py27/bin/python2.7', 'py27']
env={'LESSOPEN': '| /usr/bin/lesspipe %s', 'SSH_CLIENT': '192.168.1.2 60030 22', 'LOGNAME': 'rbradfor', 'USER': 'rbradfor', 'PATH': '/home/rbradfor/tmp/python-openstackclient/.tox/py27/bin:/home/rbradfor/tmp/python-openstackclient/.tox/py27/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games', 'HOME': '/home/rbradfor', 'PS1': '(py27)\\[\\e]0;\\u@\\h: \\w\\a\\]${debian_chroot:+($debian_chroot)}\\u@\\h:\\w\\$ ', 'LANG': 'en_US.UTF-8', 'TERM': 'xterm', 'SHELL': '/bin/bash', 'SHLVL': '1', 'PYTHONHASHSEED': '4072653076', 'XDG_RUNTIME_DIR': '/run/user/1000', 'VIRTUAL_ENV': '/home/rbradfor/tmp/python-openstackclient/.tox/py27', 'XDG_SESSION_ID': '12', '_': '/usr/local/bin/tox', 'SSH_CONNECTION': '192.168.1.2 60030 192.168.1.60 22', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'SSH_TTY': '/dev/pts/2', 'OLDPWD': '/home/rbradfor', 'PWD': '/home/rbradfor/tmp/python-openstackclient', 'MAIL': '/var/mail/rbradfor', 'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:'}
The executable /home/rbradfor/tmp/python-openstackclient/.tox/py27/bin/python2.7 (from --python=/home/rbradfor/tmp/python-openstackclient/.tox/py27/bin/python2.7) does not exist

ERROR: InvocationError: /usr/bin/python -m virtualenv --setuptools --python /home/rbradfor/tmp/python-openstackclient/.tox/py27/bin/python2.7 py27 (see /home/rbradfor/tmp/python-openstackclient/.tox/py27/log/py27-0.log)
______ summary ___________
ERROR:   py27: InvocationError: /usr/bin/python -m virtualenv --setuptools --python /home/rbradfor/tmp/python-openstackclient/.tox/py27/bin/python2.7 py27 (see /home/rbradfor/tmp/python-openstackclient/.tox/py27/log/py27-0.log)

Using the tox.ini command syntax:

$ python setup.py testr --slowest --testr-args=test_shell
...

PASSED (id=5)
Slowest Tests
Test id                                                                             Runtime (s)
----------------------------------------------------------------------------------  -----------
openstackclient.tests.test_shell.TestShellHelp.test_help_options                    0.026
openstackclient.tests.test_shell.TestShellPasswordAuth.test_only_url_flow           0.015
openstackclient.tests.test_shell.TestShellPasswordAuth.test_only_project_id_flow    0.009
openstackclient.tests.test_shell.TestShellTokenAuth.test_empty_auth                 0.009
openstackclient.tests.test_shell.TestShellTokenEndpointAuth.test_only_url           0.008
openstackclient.tests.test_shell.TestShellPasswordAuth.test_only_auth_type_flow     0.007
openstackclient.tests.test_shell.TestShellPasswordAuth.test_only_project_name_flow  0.007
openstackclient.tests.test_shell.TestShellPasswordAuth.test_only_trust_id_flow      0.007
openstackclient.tests.test_shell.TestShellTokenAuthEnv.test_only_auth_url           0.007
openstackclient.tests.test_shell.TestShellTokenAuthEnv.test_only_token              0.007

References

Thanks to Jeremy Stanley (fungi) and Doug Hellmann from the openstack-dev mailing list for setting me on the correct path.

Problems with Python 3.4 on Ubuntu 14.04.2 LTS

I was unable to run tests with Python 3.x. I have not spent the time to investigate why there are issues with libyaml, which is not listed as a core dependency in requirements.txt.
UPDATE: It seems this also is a simple problem. I did not have the -dev package installed.

$ sudo apt-get install -y python3.4-dev

And it’s all fine. Thanks Doug for that insight.
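
With the dev package installed, forcing tox to recreate the py34 environment picks up the fix (a quick sketch; the -r option simply rebuilds the virtualenv from scratch):

$ tox -r -epy34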

My next objective is to also install Python 3.3, as this is referenced as the baseline compatibility for the project.

$ git rev-parse HEAD
416d840dc4cb00026bac2512b259ce88a0e4a765

$ tox -epy34 -- notests
py34 create: /home/rbradfor/tmp/python-openstackclient/.tox/py34
py34 installdeps: -r/home/rbradfor/tmp/python-openstackclient/requirements.txt, -r/home/rbradfor/tmp/python-openstackclient/test-requirements.txt
ERROR: invocation failed (exit code 1), logfile: /home/rbradfor/tmp/python-openstackclient/.tox/py34/log/py34-1.log
ERROR: actionid=py34
msg=getenv
cmdargs=[local('/home/rbradfor/tmp/python-openstackclient/.tox/py34/bin/pip'), 'install', '-U', '-r/home/rbradfor/tmp/python-openstackclient/requirements.txt', '-r/home/rbradfor/tmp/python-openstackclient/test-requirements.txt']
env={'XDG_RUNTIME_DIR': '/run/user/1000', 'VIRTUAL_ENV': '/home/rbradfor/tmp/python-openstackclient/.tox/py34', 'LESSOPEN': '| /usr/bin/lesspipe %s', 'SSH_CLIENT': '192.168.1.2 60030 22', 'LOGNAME': 'rbradfor', 'USER': 'rbradfor', 'HOME': '/home/rbradfor', 'PATH': '/home/rbradfor/tmp/python-openstackclient/.tox/py34/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games', 'XDG_SESSION_ID': '12', '_': '/usr/local/bin/tox', 'SSH_CONNECTION': '192.168.1.2 60030 192.168.1.60 22', 'LANG': 'en_US.UTF-8', 'TERM': 'xterm', 'SHELL': '/bin/bash', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'SHLVL': '1', 'SSH_TTY': '/dev/pts/2', 'OLDPWD': '/home/rbradfor', 'PWD': '/home/rbradfor/tmp/python-openstackclient', 'PYTHONHASHSEED': '1330227753', 'MAIL': '/var/mail/rbradfor', 'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:'}
Collecting pbr!=0.7,<1.0,>=0.6 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 4))
  Using cached pbr-0.10.8-py2.py3-none-any.whl
Collecting six>=1.9.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 5))
  Using cached six-1.9.0-py2.py3-none-any.whl
Collecting Babel>=1.3 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 7))
  Using cached Babel-1.3.tar.gz
Collecting cliff>=1.10.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 8))
  Using cached cliff-1.12.0.tar.gz
Collecting cliff-tablib>=1.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 9))
  Using cached cliff-tablib-1.1.tar.gz
Collecting os-client-config (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 10))
  Using cached os-client-config-0.8.0.tar.gz
Collecting oslo.config>=1.9.3 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 11))
  Using cached oslo.config-1.11.0-py2.py3-none-any.whl
Collecting oslo.i18n>=1.5.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 12))
  Using cached oslo.i18n-1.6.0-py2.py3-none-any.whl
Collecting oslo.utils>=1.4.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 13))
  Using cached oslo.utils-1.5.0-py2.py3-none-any.whl
Collecting oslo.serialization>=1.4.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 14))
  Using cached oslo.serialization-1.5.0-py2.py3-none-any.whl
Collecting python-glanceclient>=0.15.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached python_glanceclient-0.18.0-py2.py3-none-any.whl
Collecting python-keystoneclient>=1.1.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 16))
  Using cached python_keystoneclient-1.4.0-py2.py3-none-any.whl
Collecting python-novaclient>=2.22.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 17))
  Using cached python_novaclient-2.24.1-py2.py3-none-any.whl
Collecting python-cinderclient>=1.1.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 18))
  Using cached python_cinderclient-1.2.0-py2.py3-none-any.whl
Collecting python-neutronclient<3,>=2.3.11 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 19))
  Using cached python_neutronclient-2.5.0-py2.py3-none-any.whl
Collecting requests!=2.4.0,>=2.2.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 20))
  Using cached requests-2.6.2-py2.py3-none-any.whl
Collecting stevedore>=1.3.0 (from -r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 21))
  Using cached stevedore-1.4.0-py2.py3-none-any.whl
Collecting hacking<0.11,>=0.10.0 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 4))
  Using cached hacking-0.10.1-py2.py3-none-any.whl
Collecting coverage>=3.6 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 6))
  Using cached coverage-3.7.1.tar.gz
Collecting discover (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 7))
  Using cached discover-0.4.0.tar.gz
Collecting fixtures>=0.3.14 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 8))
  Using cached fixtures-1.0.0.tar.gz
Collecting mock>=1.0 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 9))
  Using cached mock-1.0.1.tar.gz
Collecting oslosphinx>=2.5.0 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 10))
  Using cached oslosphinx-2.5.0-py2.py3-none-any.whl
Collecting oslotest>=1.5.1 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 11))
  Using cached oslotest-1.6.0-py2.py3-none-any.whl
Collecting requests-mock>=0.6.0 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 12))
  Using cached requests_mock-0.6.0-py2.py3-none-any.whl
Collecting sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 13))
  Using cached Sphinx-1.2.3-py3-none-any.whl
Collecting testrepository>=0.0.18 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 14))
  Using cached testrepository-0.0.20.tar.gz
Collecting testtools!=1.2.0,>=0.9.36 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 15))
  Using cached testtools-1.7.1-py2.py3-none-any.whl
Collecting WebOb>=1.2.3 (from -r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 16))
  Using cached WebOb-1.4.1.tar.gz
Requirement already up-to-date: pip in ./.tox/py34/lib/python3.4/site-packages (from pbr!=0.7,<1.0,>=0.6->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 4))
Collecting pytz>=0a (from Babel>=1.3->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 7))
  Using cached pytz-2015.2-py2.py3-none-any.whl
Collecting argparse (from cliff>=1.10.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 8))
  Using cached argparse-1.3.0-py2.py3-none-any.whl
Collecting cmd2>=0.6.7 (from cliff>=1.10.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 8))
  Using cached cmd2-0.6.8.tar.gz
Collecting PrettyTable<0.8,>=0.7 (from cliff>=1.10.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 8))
  Using cached prettytable-0.7.2.tar.bz2
Collecting pyparsing>=2.0.1 (from cliff>=1.10.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 8))
  Using cached pyparsing-2.0.3-py2.py3-none-any.whl
Collecting tablib (from cliff-tablib>=1.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 9))
  Using cached tablib-0.10.0-py2.py3-none-any.whl
Collecting PyYAML>=3.1.0 (from os-client-config->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 10))
  Using cached PyYAML-3.11.tar.gz
Collecting netaddr>=0.7.12 (from oslo.config>=1.9.3->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 11))
  Using cached netaddr-0.7.14-py2.py3-none-any.whl
Collecting iso8601>=0.1.9 (from oslo.utils>=1.4.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 13))
  Using cached iso8601-0.1.10-py33-none-any.whl
Collecting netifaces>=0.10.4 (from oslo.utils>=1.4.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 13))
  Using cached netifaces-0.10.4.tar.gz
Collecting msgpack-python>=0.4.0 (from oslo.serialization>=1.4.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 14))
  Using cached msgpack-python-0.4.6.tar.gz
Collecting pyOpenSSL>=0.11 (from python-glanceclient>=0.15.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached pyOpenSSL-0.15.1-py2.py3-none-any.whl
Collecting warlock<2,>=1.0.1 (from python-glanceclient>=0.15.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached warlock-1.1.0.tar.gz
Collecting simplejson>=2.2.0 (from python-novaclient>=2.22.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 17))
  Using cached simplejson-3.6.5.tar.gz
Collecting flake8==2.2.4 (from hacking<0.11,>=0.10.0->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 4))
  Using cached flake8-2.2.4.tar.gz
Collecting pep8==1.5.7 (from hacking<0.11,>=0.10.0->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 4))
  Using cached pep8-1.5.7-py2.py3-none-any.whl
Collecting mccabe==0.2.1 (from hacking<0.11,>=0.10.0->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 4))
  Using cached mccabe-0.2.1.tar.gz
Collecting pyflakes==0.8.1 (from hacking<0.11,>=0.10.0->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 4))
  Using cached pyflakes-0.8.1-py2.py3-none-any.whl
Collecting testscenarios>=0.4 (from oslotest>=1.5.1->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 11))
  Using cached testscenarios-0.4.tar.gz
Collecting python-subunit>=0.0.18 (from oslotest>=1.5.1->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 11))
  Using cached python-subunit-1.1.0.tar.gz
Collecting mox3>=0.7.0 (from oslotest>=1.5.1->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 11))
  Using cached mox3-0.7.0.tar.gz
Collecting docutils>=0.10 (from sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 13))
  Using cached docutils-0.12.tar.gz
Collecting Jinja2>=2.3 (from sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 13))
  Using cached Jinja2-2.7.3.tar.gz
Collecting Pygments>=1.2 (from sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 13))
  Using cached Pygments-2.0.2-py3-none-any.whl
Collecting unittest2>=1.0.0 (from testtools!=1.2.0,>=0.9.36->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 15))
  Using cached unittest2-1.0.1-py2.py3-none-any.whl
Collecting traceback2 (from testtools!=1.2.0,>=0.9.36->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 15))
  Using cached traceback2-1.4.0-py2.py3-none-any.whl
Collecting extras (from testtools!=1.2.0,>=0.9.36->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 15))
  Using cached extras-0.0.3.tar.gz
Collecting python-mimeparse (from testtools!=1.2.0,>=0.9.36->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 15))
  Using cached python-mimeparse-0.1.4.tar.gz
Collecting cryptography>=0.7 (from pyOpenSSL>=0.11->python-glanceclient>=0.15.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached cryptography-0.8.2.tar.gz
Collecting jsonschema<3,>=0.7 (from warlock<2,>=1.0.1->python-glanceclient>=0.15.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached jsonschema-2.4.0-py2.py3-none-any.whl
Collecting jsonpatch<2,>=0.10 (from warlock<2,>=1.0.1->python-glanceclient>=0.15.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached jsonpatch-1.9.tar.gz
Collecting markupsafe (from Jinja2>=2.3->sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 13))
  Using cached MarkupSafe-0.23.tar.gz
Collecting linecache2 (from traceback2->testtools!=1.2.0,>=0.9.36->-r /home/rbradfor/tmp/python-openstackclient/test-requirements.txt (line 15))
  Using cached linecache2-1.0.0-py2.py3-none-any.whl
Collecting pyasn1 (from cryptography>=0.7->pyOpenSSL>=0.11->python-glanceclient>=0.15.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached pyasn1-0.1.7.tar.gz
Collecting setuptools (from cryptography>=0.7->pyOpenSSL>=0.11->python-glanceclient>=0.15.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached setuptools-15.2-py2.py3-none-any.whl
Collecting cffi>=0.8 (from cryptography>=0.7->pyOpenSSL>=0.11->python-glanceclient>=0.15.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached cffi-0.9.2.tar.gz
Collecting jsonpointer>=1.5 (from jsonpatch<2,>=0.10->warlock<2,>=1.0.1->python-glanceclient>=0.15.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached jsonpointer-1.7.tar.gz
Collecting pycparser (from cffi>=0.8->cryptography>=0.7->pyOpenSSL>=0.11->python-glanceclient>=0.15.0->-r /home/rbradfor/tmp/python-openstackclient/requirements.txt (line 15))
  Using cached pycparser-2.12.tar.gz
Installing collected packages: pbr, six, pytz, Babel, argparse, pyparsing, cmd2, PrettyTable, stevedore, cliff, tablib, cliff-tablib, PyYAML, os-client-config, netaddr, oslo.config, oslo.i18n, iso8601, netifaces, oslo.utils, msgpack-python, oslo.serialization, pyasn1, setuptools, pycparser, cffi, cryptography, pyOpenSSL, requests, jsonschema, jsonpointer, jsonpatch, warlock, python-keystoneclient, python-glanceclient, simplejson, python-novaclient, python-cinderclient, python-neutronclient, pyflakes, pep8, mccabe, flake8, hacking, coverage, discover, linecache2, traceback2, unittest2, extras, python-mimeparse, testtools, fixtures, mock, oslosphinx, testscenarios, python-subunit, mox3, testrepository, oslotest, requests-mock, docutils, markupsafe, Jinja2, Pygments, sphinx, WebOb
  Running setup.py install for Babel
  Running setup.py install for cmd2
  Running setup.py install for PrettyTable
  Running setup.py install for cliff
  Running setup.py install for cliff-tablib
  Running setup.py install for PyYAML
    Complete output from command /home/rbradfor/tmp/python-openstackclient/.tox/py34/bin/python3.4 -c "import setuptools, tokenize;__file__='/tmp/pip-build-p19auoc2/PyYAML/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-xlgu_evx-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/rbradfor/tmp/python-openstackclient/.tox/py34/include/site/python3.4/PyYAML:
    running install
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-3.4
    creating build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/representer.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/tokens.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/constructor.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/reader.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/__init__.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/error.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/scanner.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/loader.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/parser.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/nodes.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/serializer.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/cyaml.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/emitter.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/events.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/dumper.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/resolver.py -> build/lib.linux-x86_64-3.4/yaml
    copying lib3/yaml/composer.py -> build/lib.linux-x86_64-3.4/yaml
    running build_ext
    creating build/temp.linux-x86_64-3.4
    checking if libyaml is compilable
    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.4m -I/home/rbradfor/tmp/python-openstackclient/.tox/py34/include/python3.4m -c build/temp.linux-x86_64-3.4/check_libyaml.c -o build/temp.linux-x86_64-3.4/check_libyaml.o
    checking if libyaml is linkable
    x86_64-linux-gnu-gcc -pthread build/temp.linux-x86_64-3.4/check_libyaml.o -lyaml -o build/temp.linux-x86_64-3.4/check_libyaml
    building '_yaml' extension
    creating build/temp.linux-x86_64-3.4/ext
    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.4m -I/home/rbradfor/tmp/python-openstackclient/.tox/py34/include/python3.4m -c ext/_yaml.c -o build/temp.linux-x86_64-3.4/ext/_yaml.o
    ext/_yaml.c:8:22: fatal error: pyconfig.h: No such file or directory
     #include "pyconfig.h"
                          ^
    compilation terminated.
    error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

    ----------------------------------------
    Command "/home/rbradfor/tmp/python-openstackclient/.tox/py34/bin/python3.4 -c "import setuptools, tokenize;__file__='/tmp/pip-build-p19auoc2/PyYAML/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-xlgu_evx-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/rbradfor/tmp/python-openstackclient/.tox/py34/include/site/python3.4/PyYAML" failed with error code 1 in /tmp/pip-build-p19auoc2/PyYAML

ERROR: could not install deps [-r/home/rbradfor/tmp/python-openstackclient/requirements.txt, -r/home/rbradfor/tmp/python-openstackclient/test-requirements.txt]; v = InvocationError('/home/rbradfor/tmp/python-openstackclient/.tox/py34/bin/pip install -U -r/home/rbradfor/tmp/python-openstackclient/requirements.txt -r/home/rbradfor/tmp/python-openstackclient/test-requirements.txt (see /home/rbradfor/tmp/python-openstackclient/.tox/py34/log/py34-1.log)', 1)
________________________________________________________________________________________________________________________ summary ________________________________________________________________________________________________________________________
ERROR:   py34: could not install deps [-r/home/rbradfor/tmp/python-openstackclient/requirements.txt, -r/home/rbradfor/tmp/python-openstackclient/test-requirements.txt]; v = InvocationError('/home/rbradfor/tmp/python-openstackclient/.tox/py34/bin/pip install -U -r/home/rbradfor/tmp/python-openstackclient/requirements.txt -r/home/rbradfor/tmp/python-openstackclient/test-requirements.txt (see /home/rbradfor/tmp/python-openstackclient/.tox/py34/log/py34-1.log)', 1)

Inconsistent messaging for OpenStackClient

As I mentioned earlier in Moving to OpenStackClient CLI, I came across several differences when reconciling the new client with the legacy CLI tools.

I have also come across very inconsistent messaging. Here is a simple example.

In nova

$ nova list
ERROR (CommandError): You must provide an auth url via either --os-auth-url or env[OS_AUTH_URL] or specify an auth_system which defines a default url with --os-auth-system or env[OS_AUTH_SYSTEM]

In openstack

$ openstack service list
ERROR: openstack Authorization Failed: Cannot authenticate without an auth_url

$ openstack keypair list
ERROR: openstack Authentication requires 'auth_url', which should be specified in 'HTTPClient'

$ openstack catalog list
ERROR: openstack 'NoneType' object has no attribute 'auth'

$ openstack volume list
ERROR: openstack unsupported operand type(s) for +: 'NoneType' and 'str'

All of these errors are effectively the same: I have not set up the environment variables (OS_PROJECT_NAME, OS_AUTH_URL, OS_USERNAME, OS_PASSWORD) or passed them all correctly as arguments.

In delving into the openstackclient source code, specifically auth.py#147, I see another message: “Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url”. api.auth is also referenced in the Authentication Documentation as the place to start.

Time to delve into the code to see what I can find out.

cd
git clone git://git.openstack.org/openstack/python-openstackclient
cd python-openstackclient
python tools/install_venv.py
source .venv/bin/activate
# Don't need sudo for local environment
which openstack

After setting OS_USERNAME, OS_PASSWORD, and OS_PROJECT_NAME (and not setting OS_AUTH_URL in my test) I run the following.
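
For reference, that environment was set along these lines (illustrative placeholder values, not real credentials; OS_AUTH_URL is deliberately left unset):

export OS_USERNAME=admin
export OS_PASSWORD=devstack
export OS_PROJECT_NAME=demo
unset OS_AUTH_URL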

$ openstack image list
WARNING: openstackclient.shell Possible error authenticating: Missing parameter(s):
Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url

ERROR: openstack Missing parameter(s):
Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url

This matches what I saw in the code, so the installed version is simply older code. While the user, service and keypair lists return the same message, volume and catalog still do not.

(.venv)rbradfor@octogon:~/python-openstackclient$ openstack volume list
WARNING: openstackclient.shell Possible error authenticating: Missing parameter(s):
Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url

ERROR: openstack unsupported operand type(s) for +: 'NoneType' and 'str'

(.venv)rbradfor@octogon:~/python-openstackclient$ openstack catalog list
WARNING: openstackclient.shell Possible error authenticating: Missing parameter(s):
Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url

ERROR: openstack 'NoneType' object has no attribute 'auth'

Moving to OpenStackClient CLI

In working with the keystone CLI within the TripleO scripts I came across the following deprecation warning message.

$ keystone token-get
/usr/local/lib/python2.7/dist-packages/keystoneclient/shell.py:65: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient.
  'python-keystoneclient.', DeprecationWarning)

Time to switch to using the OpenStackClient, historically also called the unified CLI.

Very easy to install.

$ sudo pip install python-openstackclient

$ openstack help
usage: openstack help [-h] [cmd [cmd ...]]

print detailed help for another command

positional arguments:
  cmd         name of the command

optional arguments:
  -h, --help  show this help message and exit

I would also suggest you add the following alias to your startup shell rc.

alias os='openstack'
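
To make the alias permanent (assuming bash), append it to your startup file:

$ echo "alias os='openstack'" >> ~/.bashrc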

The --help option also provides a much more detailed list of available argument options.

$ os --help
usage: openstack [--version] [-v] [--log-file LOG_FILE] [-q] [--debug]
                 [--os-region-name ]
                 [--os-cacert ] [--verify | --insecure]
                 [--os-default-domain ] [--timing]
                 [--os-compute-api-version ]
                 [--os-network-api-version ]
                 [--os-image-api-version ]
                 [--os-volume-api-version ]
                 [--os-identity-api-version ]
                 [--os-auth-type ] [--os-username ]
                 [--os-identity-provider ]
                 [--os-project-domain-name ]
                 [--os-project-domain-id ]
                 [--os-project-name ]
                 [--os-auth-url ]
                 [--os-trust-id ]
                 [--os-service-provider-endpoint ]
                 [--os-user-domain-id ]
                 [--os-domain-name ]
                 [--os-identity-provider-url ]
                 [--os-token ] [--os-domain-id ]
                 [--os-url ]
                 [--os-user-domain-name ]
                 [--os-user-id ] [--os-password ]
                 [--os-project-id ]
                 [--os-object-api-version ] [-h]

Command-line interface to the OpenStack APIs

optional arguments:
  --version             show program's version number and exit
  -v, --verbose         Increase verbosity of output. Can be repeated.
...

The new CLI provides a number of benefits beyond the consolidation of syntax into a single client. There is flexibility in formatting output, both in selecting columns and in choosing the output format.

Working with nova provides some simple initial examples.

$ nova image-list
+--------------------------------------+---------------------------------+--------+--------+
| ID                                   | Name                            | Status | Server |
+--------------------------------------+---------------------------------+--------+--------+
| a8672506-049f-4fda-bc58-e64a646d587e | cirros-0.3.2-x86_64-uec         | ACTIVE |        |
| 557dec3a-f912-430c-bda1-ace9c669b78d | cirros-0.3.2-x86_64-uec-kernel  | ACTIVE |        |
| 1d9b6c2e-96d9-4426-a98d-7fa378c26189 | cirros-0.3.2-x86_64-uec-ramdisk | ACTIVE |        |
+--------------------------------------+---------------------------------+--------+--------+
$ openstack image list
+--------------------------------------+---------------------------------+
| ID                                   | Name                            |
+--------------------------------------+---------------------------------+
| a8672506-049f-4fda-bc58-e64a646d587e | cirros-0.3.2-x86_64-uec         |
| 1d9b6c2e-96d9-4426-a98d-7fa378c26189 | cirros-0.3.2-x86_64-uec-ramdisk |
| 557dec3a-f912-430c-bda1-ace9c669b78d | cirros-0.3.2-x86_64-uec-kernel  |
+--------------------------------------+---------------------------------+

As you can see, the Status and Server columns are not in the default output. You can specify a list of columns, however both the column order and even the contents (e.g. ACTIVE v active) mean you need to adjust your existing scripts. In this case Status exists, but Server does not.

$ openstack image list --long
+--------------------------------------+---------------------------------+-------------+------------------+----------+--------+------------+-----------+----------------------------------+-----------------------------------------------------------------------------------------------------+
| ID                                   | Name                            | Disk Format | Container Format |     Size | Status | Visibility | Protected | Owner                            | Properties                                                                                          |
+--------------------------------------+---------------------------------+-------------+------------------+----------+--------+------------+-----------+----------------------------------+-----------------------------------------------------------------------------------------------------+
| a8672506-049f-4fda-bc58-e64a646d587e | cirros-0.3.2-x86_64-uec         | ami         | ami              | 25165824 | active | public     | False     | 0087dccc995e4ea3aaf209ddc8ad33e2 | kernel_id='557dec3a-f912-430c-bda1-ace9c669b78d', ramdisk_id='1d9b6c2e-96d9-4426-a98d-7fa378c26189' |
| 1d9b6c2e-96d9-4426-a98d-7fa378c26189 | cirros-0.3.2-x86_64-uec-ramdisk | ari         | ari              |  3723817 | active | public     | False     | 0087dccc995e4ea3aaf209ddc8ad33e2 |                                                                                                     |
| 557dec3a-f912-430c-bda1-ace9c669b78d | cirros-0.3.2-x86_64-uec-kernel  | aki         | aki              |  4969360 | active | public     | False     | 0087dccc995e4ea3aaf209ddc8ad33e2 |                                                                                                     |
+--------------------------------------+---------------------------------+-------------+------------------+----------+--------+------------+-----------+----------------------------------+-----------------------------------------------------------------------------------------------------+

Column Options

openstack provides the ability to pass column names, so we can try to simulate what we saw in the legacy nova client.

$ openstack image list -c ID
+--------------------------------------+
| ID                                   |
+--------------------------------------+
| 1b79bbb2-e8b6-430f-8cb7-ced8f3589807 |
| e52bd241-db30-40e2-9230-40f21ff3e4c7 |
| 59e28cdf-c4af-4b02-bdeb-3779ec4c306d |
+--------------------------------------+

$ openstack image list -c ID -c Name
+--------------------------------------+---------------------------------+
| ID                                   | Name                            |
+--------------------------------------+---------------------------------+
| 1b79bbb2-e8b6-430f-8cb7-ced8f3589807 | cirros-0.3.2-x86_64-uec         |
| e52bd241-db30-40e2-9230-40f21ff3e4c7 | cirros-0.3.2-x86_64-uec-ramdisk |
| 59e28cdf-c4af-4b02-bdeb-3779ec4c306d | cirros-0.3.2-x86_64-uec-kernel  |
+--------------------------------------+---------------------------------+

$ openstack image list -c ID -c Name -c Status
+--------------------------------------+---------------------------------+
| ID                                   | Name                            |
+--------------------------------------+---------------------------------+
| 1b79bbb2-e8b6-430f-8cb7-ced8f3589807 | cirros-0.3.2-x86_64-uec         |
| e52bd241-db30-40e2-9230-40f21ff3e4c7 | cirros-0.3.2-x86_64-uec-ramdisk |
| 59e28cdf-c4af-4b02-bdeb-3779ec4c306d | cirros-0.3.2-x86_64-uec-kernel  |
+--------------------------------------+---------------------------------+

The Status column, which is in both the nova and openstack output above, seems to produce no output, nor even an error. However, if specified as the only column it does draw an expected error message.

stack@ubuntu:~$ openstack image list -c Status
ERROR: openstack No recognized column names in ['Status']

Not knowing what the actual list of valid columns is, as it does not match the --long output, we move on.

Update 4/23/15
It pays to read the source code, specifically openstackclient/image/v2/image.py. In order to list additional columns you must also specify --long.

$ python openstackclient/shell.py image list --long -c ID -c Name -c Status
+--------------------------------------+---------------------------------+--------+
| ID                                   | Name                            | Status |
+--------------------------------------+---------------------------------+--------+
| c5153c2b-df9c-488c-995e-5cb347c0ee35 | cirros-0.3.2-x86_64-uec         | active |
| ac7e0c04-e47c-43da-ba28-b0d47d293eb7 | cirros-0.3.2-x86_64-uec-ramdisk | active |
| 52cc99f5-a9aa-4f27-98b8-d8ec762a246d | cirros-0.3.2-x86_64-uec-kernel  | active |
+--------------------------------------+---------------------------------+--------+

I should also point out that column names are case sensitive. Id is not valid.

$ python openstackclient/shell.py image list --long -c Id -c Name -c Status
+---------------------------------+--------+
| Name                            | Status |
+---------------------------------+--------+
| cirros-0.3.2-x86_64-uec         | active |
| cirros-0.3.2-x86_64-uec-ramdisk | active |
| cirros-0.3.2-x86_64-uec-kernel  | active |
+---------------------------------+--------+

Format Options

One cool feature is the formatting options. There are currently 6 types.

TABLE

$ openstack image list --format table
+--------------------------------------+---------------------------------+
| ID                                   | Name                            |
+--------------------------------------+---------------------------------+
| a8672506-049f-4fda-bc58-e64a646d587e | cirros-0.3.2-x86_64-uec         |
| 1d9b6c2e-96d9-4426-a98d-7fa378c26189 | cirros-0.3.2-x86_64-uec-ramdisk |
| 557dec3a-f912-430c-bda1-ace9c669b78d | cirros-0.3.2-x86_64-uec-kernel  |
+--------------------------------------+---------------------------------+

CSV

$ openstack image list --format csv
"ID","Name"
"a8672506-049f-4fda-bc58-e64a646d587e","cirros-0.3.2-x86_64-uec"
"1d9b6c2e-96d9-4426-a98d-7fa378c26189","cirros-0.3.2-x86_64-uec-ramdisk"
"557dec3a-f912-430c-bda1-ace9c669b78d","cirros-0.3.2-x86_64-uec-kernel"

JSON

$ openstack image list --format json | python -m json.tool
[
    {
        "ID": "1b79bbb2-e8b6-430f-8cb7-ced8f3589807",
        "Name": "cirros-0.3.2-x86_64-uec"
    },
    {
        "ID": "e52bd241-db30-40e2-9230-40f21ff3e4c7",
        "Name": "cirros-0.3.2-x86_64-uec-ramdisk"
    },
    {
        "ID": "59e28cdf-c4af-4b02-bdeb-3779ec4c306d",
        "Name": "cirros-0.3.2-x86_64-uec-kernel"
    }
]
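
If you have jq installed, the json output is easy to slice up. A sketch (jq is a third-party tool, not part of openstackclient):

$ openstack image list --format json | jq -r '.[].ID'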

HTML

$ openstack image list --format html
<table>
<thead>
<tr><th>ID</th>
<th>Name</th></tr>
</thead>
<tr><td>a8672506-049f-4fda-bc58-e64a646d587e</td>
<td>cirros-0.3.2-x86_64-uec</td></tr>
<tr><td>1d9b6c2e-96d9-4426-a98d-7fa378c26189</td>
<td>cirros-0.3.2-x86_64-uec-ramdisk</td></tr>
<tr><td>557dec3a-f912-430c-bda1-ace9c669b78d</td>
<td>cirros-0.3.2-x86_64-uec-kernel</td></tr>
</table>

YAML

$ openstack image list --format yaml
- {ID: a8672506-049f-4fda-bc58-e64a646d587e, Name: cirros-0.3.2-x86_64-uec}
- {ID: 1d9b6c2e-96d9-4426-a98d-7fa378c26189, Name: cirros-0.3.2-x86_64-uec-ramdisk}
- {ID: 557dec3a-f912-430c-bda1-ace9c669b78d, Name: cirros-0.3.2-x86_64-uec-kernel}

SHELL

$ openstack image list --format shell
usage: openstack image list [-h] [-f {csv,html,json,table,yaml}] [-c COLUMN]
                            [--max-width ]
                            [--quote {all,minimal,none,nonnumeric}]
                            [--public | --private] [--property ]
                            [--long] [--sort [:]]
openstack image list: error: argument -f/--format: invalid choice: 'shell' (choose from 'csv', 'html', 'json', 'table', 'yaml')

As you can see, shell is an invalid format for the list command, however it is valid for the show command.

$ openstack image show a8672506-049f-4fda-bc58-e64a646d587e
+------------------+-----------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                           |
+------------------+-----------------------------------------------------------------------------------------------------------------+
| checksum         | 4eada48c2843d2a262c814ddc92ecf2c                                                                                |
| container_format | ami                                                                                                             |
| created_at       | 2015-03-31T19:42:53.000000                                                                                      |
| deleted          | False                                                                                                           |
| disk_format      | ami                                                                                                             |
| id               | a8672506-049f-4fda-bc58-e64a646d587e                                                                            |
| is_public        | True                                                                                                            |
| min_disk         | 0                                                                                                               |
| min_ram          | 0                                                                                                               |
| name             | cirros-0.3.2-x86_64-uec                                                                                         |
| owner            | 0087dccc995e4ea3aaf209ddc8ad33e2                                                                                |
| properties       | {u'kernel_id': u'557dec3a-f912-430c-bda1-ace9c669b78d', u'ramdisk_id': u'1d9b6c2e-96d9-4426-a98d-7fa378c26189'} |
| protected        | False                                                                                                           |
| size             | 25165824                                                                                                        |
| status           | active                                                                                                          |
| updated_at       | 2015-03-31T19:42:54.000000                                                                                      |
+------------------+-----------------------------------------------------------------------------------------------------------------+
$ openstack image show a8672506-049f-4fda-bc58-e64a646d587e --format=shell
checksum="4eada48c2843d2a262c814ddc92ecf2c"
container_format="ami"
created_at="2015-03-31T19:42:53.000000"
deleted="False"
disk_format="ami"
id="a8672506-049f-4fda-bc58-e64a646d587e"
is_public="True"
min_disk="0"
min_ram="0"
name="cirros-0.3.2-x86_64-uec"
owner="0087dccc995e4ea3aaf209ddc8ad33e2"
properties="{u'kernel_id': u'557dec3a-f912-430c-bda1-ace9c669b78d', u'ramdisk_id': u'1d9b6c2e-96d9-4426-a98d-7fa378c26189'}"
protected="False"
size="25165824"
status="active"
updated_at="2015-03-31T19:42:54.000000"

Interactive

The CLI also provides an interactive shell.

$ openstack
(openstack) image list
+--------------------------------------+---------------------------------+
| ID                                   | Name                            |
+--------------------------------------+---------------------------------+
| 1b79bbb2-e8b6-430f-8cb7-ced8f3589807 | cirros-0.3.2-x86_64-uec         |
| e52bd241-db30-40e2-9230-40f21ff3e4c7 | cirros-0.3.2-x86_64-uec-ramdisk |
| 59e28cdf-c4af-4b02-bdeb-3779ec4c306d | cirros-0.3.2-x86_64-uec-kernel  |
+--------------------------------------+---------------------------------+
(openstack) quit

AWS cost saving tips – EBS Volumes

Here is a trivial cost saving tip for checking if you are spending money in your AWS environment on unused resources. This is especially appropriate when using provisioned IOPS EBS volumes.

$ ec2-describe-volumes | grep available

VOLUME	vol-44dff904	8	snap-d86d0884	us-east-1b	available	2014-08-01T14:11:24+0000	standard
VOLUME	vol-62dff922	100		us-east-1b	available	2014-08-01T14:11:24+0000	io1	1000
VOLUME	vol-15dff955	8	snap-d86d0884	us-east-1b	available	2014-08-01T14:11:24+0000	standard
VOLUME	vol-80a88ec0	8	snap-d86d0884	us-east-1b	available	2014-08-01T15:12:54+0000	standard
VOLUME	vol-ca82a48a	100		us-east-1b	available	2014-08-01T16:13:49+0000	standard
VOLUME	vol-5d79581d	8	snap-d86d0884	us-east-1b	available	2014-08-01T18:27:01+0000	standard
VOLUME	vol-baf9dbfa	8	snap-d86d0884	us-east-1b	available	2014-08-03T18:20:59+0000	standard
VOLUME	vol-53ffdd13	8	snap-d86d0884	us-east-1b	available	2014-08-03T18:25:52+0000	standard
VOLUME	vol-ade7daed	8	snap-d86d0884	us-east-1b	available	2014-08-13T20:10:46+0000	standard
VOLUME	vol-34e2df74	8	snap-065a2e52	us-east-1b	available	2014-08-13T20:26:17+0000	standard
VOLUME	vol-cacef38a	100	snap-280ffb7f	us-east-1b	available	2014-08-13T21:19:18+0000	standard
VOLUME	vol-41350a01	8	snap-f23ccba5	us-east-1b	available	2014-08-14T16:54:27+0000	standard
VOLUME	vol-51350a11	100	snap-fc3ccbab	us-east-1b	available	2014-08-14T16:54:27+0000	standard
VOLUME	vol-912f10d1	8	snap-96ee24c1	us-east-1b	available	2014-08-14T17:15:06+0000	standard
VOLUME	vol-a82f10e8	100	snap-9dee24ca	us-east-1b	available	2014-08-14T17:15:06+0000	standard

These are available (that is, unattached and unused) EBS volumes which you should consider deleting.
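
If you are using the newer unified aws CLI rather than the legacy ec2-api-tools, an equivalent check is the following sketch (add --region or --profile as appropriate for your setup):

$ aws ec2 describe-volumes --filters Name=status,Values=available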

Improving performance – A full stack problem

Improving the performance of a web system involves knowledge of how the entire technology stack operates and interacts. There are many simple and common tips that can provide immediate improvements for a website. Some examples include:

  • Using a CDN for assets
  • Compressing content
  • Making fewer requests (web, cache, database)
  • Asynchronous management
  • Optimizing your SQL statements
  • Having more memory
  • Using SSDs for database servers
  • Updating your software versions
  • Adding more servers
  • Configuring your software correctly
  • … And the general checklist goes on

Understanding where to invest your energy first, knowing what the return on investment can be, and most importantly measuring and verifying every change made is the difference between blind trial and error and a solid plan and process. Here is a great example of the varied range of outcomes for the point about “Updating your software versions”.

On one project the MySQL database was reaching saturation, both in the maximum number of database connections and the maximum number of concurrent InnoDB transactions. The first is a configurable limit; the second was a hard limit of the very old version of the software. Changing the first configurable limit can have dire consequences (there is a tipping point), however that is a different discussion. A simple software upgrade of MySQL, which had many possible benefits, combined with corrected configuration specific to this new version, made an immediate improvement. The result moved a production system from crashing consistently under load to at least barely surviving under load. This is an important first step in improving the customer experience.
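
For reference, the first configurable limit mentioned is max_connections. Comparing the limit against actual usage needs nothing more than standard MySQL commands, along these lines:

$ mysql -e "SHOW GLOBAL VARIABLES LIKE 'max_connections'; SHOW GLOBAL STATUS LIKE 'Threads_%';"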

In the PHP application stack for the same project, upgrading several commonly used frameworks, including Slim and Twig, seemed like a good idea to the engineering department. However applicable load testing and profiling (performed after it was deployed, yet another discussion point) found the impact was a 30-40% increase in response time for the application layer. This made the system worse, and cancelled out prior work to improve the system.

How to tune a system to support a 100x load increase with no impact on performance takes knowledge, experience, planning, testing and verification.

The following summarized graphs, using New Relic monitoring as a means of representative comparison, show three snapshots of the average response time during various stages of full stack tuning and optimization. This is a very simplified graphical view that is supported by more detailed instrumentation using different products, specifically with much finer granularity across hundreds of metrics.

These graphs represent the work undertaken for a system under peak load, from an average 2,000ms response time to the same workload at a 50ms average response time. That is a 40x improvement!

If your organization can benefit from these types of improvements feel free to Contact Me.

There are numerous steps to achieving this. A few highlights to show the scope of work you need to consider include:

  • Knowing server CPU saturation versus single core CPU saturation.
  • Network latency detection and mitigation.
  • Understanding the virtualization mode options of virtual cloud instances.
  • Knowing the network stack benefits of different host operating systems.
  • Simulating production load is much harder than it sounds.
  • Profiling, Profiling, Profiling.
  • Instrumentation can be misleading; know how different monitoring products work with sampling and averaging.
  • Tuning the stack is an iterative process.
  • The single greatest knowledge is to know your code, your libraries, your dependencies, and how to optimize each specific area of your technology stack.
  • Not everything works; some expected wins provided no overall or observed benefit.
  • There is always more that can be done. Knowing when to pause and prioritize process optimizations over system optimizations.

These graphs show the improvement work in the application tier (1500ms to 35ms to 25ms) and the database tier (500ms to 125ms to 10ms) at various stages. These graphs do not show, for example, improvements made in DNS resolution, different CDNs, managing static content, different types and ways of compression, removing unwanted software components and configuration, standardizing consistent stack deployments using chef, and even a reduction in overall servers. All of these successes contributed to a better and more consistent user experience.

40x performance improvements in LAMP stack

Writing re-runnable shell scripts

I recently started playing with devstack again (an all-in-one OpenStack developer setup). The last time was over 3 years ago; I remember submitting a pull request for a missing dependency at the time.

The installation docs provide information to bootstrap your system with a necessary user and privileges, however like many docs for software setup they contain one-off instructions.

adduser stack
echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

When you write operations code you need to always be thinking about “testability” and “automation”. It is important to write re-runnable code. You should always write parameterized code when possible, which can be refactored into reusable functions at any time (see the sketch following the example below).

This is a good example to demonstrate a simple test condition for making the initial instructions re-runnable.

sudo su -
NEW_USER="stack"
# This creates a default group of the same username
# This creates the user with a default HOME in /home/stack
# The anchored grep avoids matching ${NEW_USER} as a substring elsewhere in /etc/passwd
[ $(grep -c "^${NEW_USER}:" /etc/passwd) -eq 0 ] && useradd -s /bin/bash -m ${NEW_USER}
NEW_USER_SUDO_FILE="/etc/sudoers.d/${NEW_USER}"
[ ! -s ${NEW_USER_SUDO_FILE} ] && umask 226 && echo "${NEW_USER} ALL=(ALL) NOPASSWD: ALL" > ${NEW_USER_SUDO_FILE}
ls -l ${NEW_USER_SUDO_FILE}
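
As suggested above, the same logic refactors naturally into a parameterized function. A minimal sketch (using getent as a cleaner existence test than grepping /etc/passwd):

create_sudo_user() {
    local new_user="$1"
    # Create the user (default group and home) only if it does not already exist
    getent passwd "${new_user}" > /dev/null || useradd -s /bin/bash -m "${new_user}"
    local sudo_file="/etc/sudoers.d/${new_user}"
    # Write the sudoers entry only if missing or empty; umask 226 produces 0440 permissions
    [ ! -s "${sudo_file}" ] && umask 226 && echo "${new_user} ALL=(ALL) NOPASSWD: ALL" > "${sudo_file}"
}

create_sudo_user stack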

Another reason to avoid RDS

My list of reasons for never using or recommending Amazon’s MySQL RDS service grows every time I experience problems with customers. This was an interesting and still unresolved issue.

ERROR 126 (HY000): Incorrect key file for table '/rdsdbdata/tmp/#sql_5b7_1.MYI'; try to repair it

You may notice this refers to a MyISAM table (the .MYI file). The MySQL database is version 5.5, uses all InnoDB tables, and is very small, 100MB in total size.
What is happening is that MySQL is generating an internal temporary table, and this table is being written to disk. I am unable to change the code to improve the query causing this disk I/O.
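
Even with no server access, a plain SQL connection can confirm what is happening with internal temporary tables. A sketch using standard MySQL commands (the endpoint is a placeholder; tmpdir should report the /rdsdbdata/tmp path seen in the error):

$ mysql -h myinstance.rds.amazonaws.com -e "SHOW GLOBAL VARIABLES LIKE 'tmpdir'; SHOW GLOBAL STATUS LIKE 'Created_tmp%';"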

What I cannot understand, and have no ability to diagnose, is why this error occurs only sometimes, and generally when the database is under additional system load. With RDS you have no visibility of the server running the production database. While you have SQL access, an API for managing MySQL configuration options (and I should add, not all MySQL variables), and limited system statistics via a graphical interface, all information about system performance, disk configuration etc. is hidden and not accessible. This is a frustrating limitation of using RDS.

NOTE: While I cannot recommend RDS, I am very happy with AWS EC2 services when correctly configured. For a cloud-based MySQL solution I would definitely recommend greater control over your MySQL database using EC2 and EBS.

Basic scalability principles to avert downtime

In the press over the last two days has been the reported outage of Amazon Web Services Elastic Compute Cloud (EC2) in just one North Virginia data center. This has affected many large websites including FourSquare, Hootsuite, Reddit and Quora. A detailed list can be found at ec2disabled.com.

For these popular websites, was this avoidable? Absolutely.

Basic scalability principles, if deployed in these systems' architecture, would have averted the significant downtime regardless of the development stack. While I work primarily in MySQL, these principles are not new, nor are they complicated; they are fundamental concepts in scalability that apply to any technology, including the popular MongoDB that is being used by a number of affected sites.

Scalability 101 involves some simple basic rules. Here are just two that seem to have been ignored by many affected by this recent AWS EC2 outage.

  1. Never put all your eggs in one basket. If you rely on AWS completely, or you rely on just one availability zone, that is putting all your eggs in one basket.
  2. Always keep your important data close to home. When it comes to what is most critical to your business you need access to and control of your information. At 5am when the CEO asks how long the business will be unavailable and what is needed to resolve the problem, the answer “We have no control over this and have no ETA” is not acceptable.

With a successful implementation and appropriate data redundancy you may not have an environment immediately available, however you have access to your important information and the ability to create one quickly. Many large hosting companies can provide additional H/W at near demand, especially if you have an initial minimal footprint. Indeed, using Amazon Web Services (AWS) as a means to avert a data center disaster is an ideal implementation of Infrastructure As A Service (IAAS). Even with this issue, organizations that had planned for this type of outage could have easily migrated to another AWS availability zone that was unaffected.

Furthermore, a system architecture that supports various levels of data availability and scalability ensures you can handle many more types of unavailability without the significant system downtime recently seen. There are many different types of availability and unavailability; know what your definition of downtime is. Supporting disasters should be a primary focus of your scalability design, not an afterthought.

As an expert in performance and scalability I can help your organization in the design of a suitable architecture to support successful scalability and disaster recovery. This is not rocket science, however many organizations gamble without the expertise of a professional to ensure business viability.