Do you control your database outages?

Working with a client last week I noted in my analysis, “The MySQL server was restarted on Thursday and so the [updated] my.cnf settings seem current”. This occurred between starting my analysis on Wednesday and delivering my findings on Friday.

# more /var/lib/mysql/ip-104-238-102-213.secureserver.net.err
160609 17:04:43 [Note] /usr/sbin/mysqld: Normal shutdown

The client, however, stated they did not restart MySQL, and would never do so at 5pm, which is still a high-usage time for the production system. This is unfortunately not an uncommon finding: a production system had an outage that the client neither knew about nor instigated.

There are several common causes, and the “DevOps” mindset of current production systems has made this worse.

  • You have managed hosting and the provider performs software updates with or without your knowledge. I have, for example, worked with several Rackspace customers where there would be an outage because Rackspace engineers decided to apply an upgrade at a time that suited them, not their customers.
  • You have chosen automated updates for your Operating System.
  • Your developers update the software when they like.
  • You are using a 3rd party product that is making an arbitrary decision.

In this case the breadcrumbs led to the last option: cPanel is performing this operation, as hinted at by the cPanel-specific MySQL binaries installed.

$ rpm -qa | grep -i mysql
cpanel-perl-522-MySQL-Diff-0.43-1.cp1156.x86_64
MySQL55-devel-5.5.50-1.cp1156.x86_64
MySQL55-client-5.5.50-1.cp1156.x86_64
cpanel-perl-522-DBD-mysql-4.033-1.cp1156.x86_64
compat-MySQL50-shared-5.0.96-4.cp1136.x86_64
MySQL55-server-5.5.50-1.cp1156.x86_64
cpanel-mysql-libs-5.1.73-1.cp1156.x86_64
MySQL55-shared-5.5.50-1.cp1156.x86_64
MySQL55-test-5.5.50-1.cp1156.x86_64
compat-MySQL51-shared-5.1.73-1.cp1150.x86_64
cpanel-mysql-5.1.73-1.cp1156.x86_64

Also note that cPanel still uses MySQL 5.1 shared libraries.

So why did cPanel perform not one shutdown, but two in immediate succession? The first took 17 seconds, the second 2 seconds. Not being experienced with cPanel I cannot offer an answer for this shutdown occurrence. I can for others, which I will detail later.

160609 17:04:24 [Note] /usr/sbin/mysqld: Normal shutdown
...
160609 17:04:28 [Note] /usr/sbin/mysqld: Shutdown complete
...
160609 17:04:41 [Note] /usr/sbin/mysqld: ready for connections.
...
160609 17:04:43 [Note] /usr/sbin/mysqld: Normal shutdown
...
160609 17:04:45 [Note] /usr/sbin/mysqld: ready for connections.
...

And why did the customer not know about the outage? If you use popular SaaS monitoring solutions such as New Relic and Pingdom you would not have been informed, because these products have a sampling time of 60 seconds. I use these products along with Nagios on my personal blog site as they provide adequate instrumentation for its frequency of usage. For this one reason alone, I would not recommend these tools as the only tools used in a high-volume production system. In a high-volume system you need sampling at a much finer granularity.
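
As a hedged illustration of finer granularity, the following sketch probes the server every second and records only failures. It assumes client credentials are available in ~/.my.cnf, and the log path is hypothetical.

#!/bin/sh
# Minimal one-second availability probe (sketch only).
# Assumes a monitoring login in ~/.my.cnf; adjust the log path to suit.
while true; do
  if ! mysqladmin ping >/dev/null 2>&1; then
    echo "$(date '+%y%m%d %H:%M:%S') mysqld not responding" >> /var/log/mysql-probe.log
  fi
  sleep 1
done

Even this trivial loop would catch the 17-second and 2-second outages that a 60-second sampler misses.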

So just when you were going to argue that 17 seconds, while unexpected, is tolerable, I want to point out that the following subsequently occurred, and that outage was over 4 minutes.

160619 11:58:07 [Note] /usr/sbin/mysqld: Normal shutdown
...
160619 12:02:26 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
...

An analysis of the MySQL error log, which correctly is not rotated away (as I always recommend), showed a pattern of regular MySQL updates, from 5.5.37 thru 5.5.50. This is the most likely reason a 3rd party product has performed a database outage: to perform a software update at a time of their choosing, not yours.
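
A filter along the following lines is enough to surface that history. This is a sketch using the error log path shown earlier; grep's group-separator lines are trimmed from the listing that follows.

$ grep -A 1 'ready for connections' /var/lib/mysql/ip-104-238-102-213.secureserver.net.err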

150316  3:54:11 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.37-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

150316  3:54:22 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.37-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

150316 19:07:31 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.37-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

150317  2:05:45 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.40-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

150317  2:05:54 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.40-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

150319  1:17:26 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.42-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

150319  1:17:34 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.42-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

150616  1:39:44 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.42-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

150616  1:39:52 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.42-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

151006  1:01:45 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.45-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

151006  1:01:54 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.45-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

151027  1:21:12 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.46-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160105  1:31:35 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.47-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160211  1:52:47 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.48-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160211  1:52:55 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.48-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160503  1:14:59 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.49-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160503  1:15:03 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.49-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160521 18:46:24 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.49-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160522 11:51:45 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.49-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160529 15:26:41 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.49-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160529 15:30:12 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.49-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160604 23:29:15 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.49-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160609 17:04:41 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.50-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160609 17:04:45 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.50-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160619 12:21:58 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.50-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 0  MySQL Community Server (GPL)

What is intriguing from this analysis is that several versions were skipped, including .38, .39, .41, .43 and .44. One may ask why.

For Clients

This leads to several questions of the strategy used in your organization for controlling outages of your MySQL infrastructure for upgrades or for other reasons.

  • What is an acceptable outage time?
  • What is the acceptable maintenance window to perform outages?
  • What is your release cadence for MySQL upgrades?
  • Who or what performs updates?
  • Can your monitoring detect small outages?

You should also consider in your business strategy having a highly available (HA) MySQL infrastructure to avoid any outage, or application intelligence to support varying levels of data access as I describe in Successful MySQL Scalability Principles.

Understanding the MySQL Release Cadence

At the recent New York Oracle Users Group summer general meeting I gave a presentation to the Oracle community on the MySQL product release cycle. Details included:

  • Identifying the server product options covering community, enterprise and ecosystem.
  • Describing the MySQL enterprise products, features and support options.
  • Describing the DMR, RC, GA, EOL and labs product lifecycle.
  • Discussing the GA release frequency.
  • Talking about the MySQL upgrade path.

Utilizing OpenStack Trove DBaaS for deployment management

Trove is used for self-service provisioning and lifecycle management for relational and non-relational databases in an OpenStack cloud. Trove provides a RESTful API interface that is the same regardless of the type of database. CLI tools and a web UI via Horizon are also provided, wrapping Trove API requests.

In simple terms: you are a MySQL shop. You run a replication environment with daily backups and failover capabilities which you test and verify regularly. You have defined DBA and user credential ACLs across dev, test and prod environments. Now there is a request to use MongoDB or Cassandra; the engineering department has not decided, but they want to evaluate the capabilities. How long, as an operator, does it take to acquire the software, install, configure, set up replication, backups and ACLs, and enable the engineering department to evaluate the products?

With Trove DBaaS this complexity is eliminated, thanks to a consistent interface for performing provisioning, configuration, HA, backup and recovery, and ACL management across other products in exactly the same way you perform these tasks for MySQL. This enables operations to be very proactive towards changing technology requests, supporting digital transformation strategies.
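
As a sketch of that consistency, provisioning a production MySQL instance and a MongoDB evaluation instance can look identical from the operator's seat. The instance names, flavor id, sizes and datastore versions below are illustrative, and exact flag spellings vary by troveclient version.

$ trove create prod-mysql-01 2 --size 10 --datastore mysql --datastore_version 5.6
$ trove create eval-mongo-01 2 --size 10 --datastore mongodb --datastore_version 2.6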

Enabling this capability is not an automatic approval of a new technology stack. It is important that strategic planning, support and management are included in the business strategy to understand the DBaaS capability for your organization. Examples of operations due diligence would include how to integrate these products into your monitoring, logging and alerting systems; determining what additional disk storage requirements may be needed; and testing, verifying and timing recovery strategies.

Trove specifically leverages several other OpenStack services for source image and instance management. Each Trove guest image includes a base operating system, the applicable database software and a database-technology-specific Trove guest agent. This agent is the intelligence that knows the product-specific syntax and version requirements needed to perform the tasks. The agent is also the communication mechanism between Trove and the running Nova instance.

Trove is a total solution manager for the instance running your chosen database. Instances have no ssh, telnet or other general access. The only access is via SQL communication on the defined ports, e.g. 3306 for MySQL.

The Trove lifecycle management covers the provisioning, management, security, configuration and tuning of your database. Amrith Kumar in a recent presentation at the NYC Postgres meetup provides a good description of the specifics.

Trove is capable of describing and supporting clustering and replication topologies for the various data stores. It can support backup and restore, failover and resizing of clusters without the operator needing to know the specific syntax or complexities of a database product they are unfamiliar with.

A great example is the subtle difference in replication management using GTIDs between MySQL and MariaDB. To the developer, the interaction with MySQL and MariaDB via SQL is the same; the management of a replication topology is not identical, but it is handled by the Trove guest agent. To the operator, the management is the same.
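
A minimal sketch of this in practice: creating a replica requires no product-specific replication commands. The instance names continue from the earlier sketch, and the --replica_of flag spelling is that of the era's troveclient, so treat it as illustrative.

$ trove create prod-mysql-02 2 --size 10 --replica_of prod-mysql-01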

Also in his presentation, Kumar described Tesora, an enterprise-class Trove distribution that provides a number of important additional features. Tesora supports additional database products including Oracle and DB2 Express, as well as commercial versions of Oracle MySQL, EnterpriseDB, Couchbase, Datastax, and MongoDB. Using the Horizon UI customizations with pre-defined Trove instances greatly reduces the work needed for operators and deployers to build their own.

Are you a responsible developer?

What is a good example of individual developer responsibility? Here is just one example.

A developer downloads a copy of the core production database to their own development laptop. Why? Because it’s easy to work with real data, and it’s hard to consider building applicable test data that all engineers can utilize.

What could be wrong with this approach? Here are a few additional points.

  • Security. Should the developer accidentally leave their laptop on that 90-minute train commute each way daily, could that data result in negative publicity for the organization? For employees that work at more sensitive organizations, is theft a possibility? Or does an employee, disgruntled by lack of management and with poor ethical values, take the names, emails, addresses and purchase history of your customers so it can be used for other means?
  • Data Cleansing. This includes removing pay rate information of employees of the company that developers should never have access to. It is about obfuscating the email addresses of millions of customers so that test code to improve receipt generation doesn’t accidentally email 1,000 existing customers with a repeat receipt that now contains invalid data (a minimal sketch follows this list). It is about providing a subset of information that is applicable and relevant.
  • Testing philosophy. Testing is all about trying to break your software, not testing that one small feature works in the likely path of use. It is easy to unit test the developer change for editing a customer profile to add an emergency contact field. It is right to consider the lifecycle of customer data: the full workflow, and the multiple paths to creating and editing a customer profile. The organization needs to be consistent across the entire experience, not just one singular perspective. In simple terms it is about functionality testing at the time of development, not the narrow view of unit testing where all other detailed testing is somebody else’s responsibility.
  • Time. How long did it take to download the 10G dataset and import it? How much of that data is really needed? Does five years of historical products and orders ensure adequate unit and functional testing? Sure, it is easy to have the available disk space, however what efficiency improvements could exist for a data set 20x smaller? If it took five minutes to reset the test data for development instead of one hour, would a developer refresh more often?
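
On the data cleansing point, even a first pass is inexpensive. A minimal sketch, assuming a hypothetical customer table in a development copy of the data (never run against production):

$ mysql devdb -e "UPDATE customer
    SET email = CONCAT('user', id, '@example.com'),
        name  = CONCAT('Customer ', id)"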

Before considering the means to meet an immediate problem such as this one example: stop, think, and act on improving the process for the benefit of all technical resources. This is what sets apart an engineer that is just a coder from a software developer.

It is unfortunate that engineering managers are not constantly focused on process and productivity improvements for sustaining software across the entire lifecycle of a product. The reality is that many have worked as developers without applicable mentoring and management, and an entire generation of software developers is now influencing the next generation. Historically, the rigidness of the traditional waterfall approach to the software development lifecycle instilled a number of key principles that agile-only environments have not fostered or understood.

Understanding the DBaaS capability for your organization

As your organization transforms to embrace the wealth of digital information that is becoming available, the capability to store, manage and consume this data in any given format or product becomes an increasing burden for operations.

How does your organization handle the request, “I need to use product Z to store data for my new project?” There are several responses I have experienced first-hand with clients.

  1. Enforce the company policy that Products O and S are all that can be used.
  2. Ignore the request.
  3. Consider the request, but antagonize your own internal organization with long wait times (e.g. months or years) and with repeated delays to evaluate a product you simply do not want to support.
  4. Do whatever the developers say, they know best.

Unfortunately I have seen too many organizations use the first three options as the answer. You may consider the last option a non-valid answer; however, I have also seen it prevail when there is no operations team or strategic technical oversight.

Ignoring the question only leads to a greater pressure point at a later time. This may be when your executive team enforces the requirement on their timetable. I have seen this happen, with painful ramifications. With the ability to consume public cloud resources with only access to a credit card, development resources can now proceed unchecked more easily if ignored or delayed. When a successful proof of concept is produced this way and there is now a more urgent need to deploy, support and manage it, the opportunity to have a positive impact on the design decision of a new data product has passed.

Using DBaaS is one enabling tool within a strategic business model for your organization to satisfy this question with greater control. This however is not the solution but rather one tool combined with applicable processes. In order to scope the requirements for the original question, your model also needs to consider the following:

  • Provisioning capabilities
  • Strategic planning and insight
  • Support and management
  • Release criteria

Provisioning

This is the strength of DBaaS. Operations can enable development to independently provision resources and technology with little additional impeding dependency. There is input from operations to enable varying products to be available by self-service, however there is also some control. DBaaS can be viewed as a controlled and flexible enabler. A specific example is an organization that uses the MySQL relational database, where a developer now wishes to use the MongoDB NoSQL unstructured store. An operator may cringe at the notion of a lack of data consistency, structured data query access or performance capabilities. These are all valid points; however, they are discussions at a strategic level about your workflow pipeline and should not be an impediment to iterating quickly. Without oversight, though, iterating quickly can lead to unmanageable outcomes.

Strategic Planning

There always needs to be oversight combined with applicable strategy. A single developer stating they want to use the new product Z for a distributed key/value store needs to be vetted first within the engineering organization and its own developer peers. If another project is already using Product Y, which has the same core data access and features, the burden of supporting an additional product should be a self-contained discussion validating the need first.

This is one strength of a good engineering manager that balances the requirements of the business needs and objectives with the capabilities of the resources available, including staff, tools and technology. Applicable principles put in place should also ensure that some aspect of planning is instilled into the development culture.

Support and Management

The development and engineering resources rarely consider the administration and support required for the suite of products and services used in an organization. The emphasis is on feature development and customer requirements, not the sustainability, longevity and security of any system. Operational support is a long list of needs, just a few include:

  • Information security.
  • Information availability.
  • Service level agreements (SLAs) between partners, service providers and the internal organization
  • The backup ecosystem, time taken, consistency, point-in-time recovery, testing and verification, cost of H/W, S/W, licenses.
  • Internet connectivity.
  • Capacity planning and cost analysis of storing and archiving ever increasing sources of data.
  • Hardware and software upgrades.

Two-way communication, which is often overlooked, is the start of better understanding. That is, operations being included and involved in strategic development planning, and engineering resources being included in operations needs and requirements for ensuring those new product features operate for the benefit of customers. In summary, “bridging the communication chasm”.

DevOps is an abused term; it implies that developers now perform a subset of the responsibilities of operations. As an individual that has worked in development teams and led operations teams, I can attest that the skills, personality, rational thinking and decision-making needs of your resources are vastly different between an engineering task and a production operations task.

Developers need to live a 24-hour day (with the unnecessary 3am emergency call) in the shoes of an operator. The reverse is also true; however, the ramifications to business continuity are not the same. Just one factor, the cost, or more specifically the loss to the business due to a production failure, alters the decision-making process. Failure can be anything from a hardware or connectivity problem, to bad code that was released, to a data breach.

Release Criteria

If an organization has a strong (and flexible) policy on release criteria, all parties, from the stakeholders, executive, engineering, operations and marketing, should be able to contribute to the discussion and decision for a new product and its applicable in-house or third-party support. This discussion is not a pre-requisite for any department or developer to iterate quickly; it is, however, a pre-requisite to migrate from a proof-of-concept prototype to a supported feature. This is another often overlooked criterion in the pursuit of rapid deployment of new features, which are significantly more difficult to remove after publication.

Expired MySQL passwords

I was surprised to find on one of my websites the message “Connect failed: Your password has expired. To log in you must change it using a client that supports expired passwords.”

Not knowing that I was using a MySQL password expiry policy, I quickly reviewed the 5.7 documentation, which *clearly* states “The default default_password_lifetime value is 0, which disables automatic password expiration.”.

I then proceeded to investigate further; my steps are detailed below, after the following comment.

However, it is always important with MySQL documentation and a new feature (in this case a 5.7 feature) to review the release notes when installing versions, or at least to read ALL the documentation, because you may miss important information, such as:

Note:
From MySQL 5.7.4 to 5.7.10, the default default_password_lifetime value is 360 (passwords must be changed approximately once per year). For those versions, be aware that, if you make no changes to the default_password_lifetime variable or to individual user accounts, all user passwords will expire after 360 days, and all user accounts will start running in restricted mode when this happens. Clients…

I would encourage you to view the MySQL password expiry policy documentation to see the full note. I have only included the intro here as a teaser, because you need to read the entire note.

Analysis

Back to my impatient analysis steps.

$ mysql -u admin -p 
*********

SELECT VERSION();
+-----------+
| VERSION() |
+-----------+
| 5.7.9-log |
+-----------+

SHOW GLOBAL VARIABLES LIKE 'default_p%';
+---------------------------+-------+
| Variable_name             | Value |
+---------------------------+-------+
| default_password_lifetime | 360   |
+---------------------------+-------+


SELECT host,user,password_last_changed 
FROM mysql.user
WHERE password_last_changed + INTERVAL @@default_password_lifetime DAY < CURDATE();
+-----------+--------------+-----------------------+
| host      | user         | password_last_changed |
+-----------+--------------+-----------------------+
| localhost | XXX          | 2014-12-01 12:53:36   |
| localhost | XXXXX        | 2014-12-01 12:54:04   |
| localhost | XX_XXXX      | 2015-06-04 11:01:11   |
+-----------+--------------+-----------------------+

Indeed there are some passwords that have expired.

After finding the applicable application credentials I looked at verifying the problem.

$ mysql -uXX_XXXX -p
*******************
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Server version: 5.7.9-log

mysql>

Interestingly, there was no error in making a client connection. However:

mysql> use XXXX;
ERROR 1820 (HY000): You must reset your password using ALTER USER statement before executing this statement.

I then proceeded to change the password with the applicable hint shown.

ALTER USER XX_XXXX@localhost IDENTIFIED BY '*************************';

I chose to reuse the same password because changing the password would require a subsequent code change. MySQL accepted the same password. (A topic for a separate discussion on this point).

A manual verification showed the application and users operating as they should, so immediate loss of data was averted. Monitoring of the site's home page, however, did not detect this partial page error, so should a password expiry policy be used, an applicable check in a regularly scheduled operational task is a good feature request.
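
A minimal sketch of such a check, suitable for a scheduled task. It assumes a monitoring login with SELECT privileges on mysql.user, and the 14-day warning window is arbitrary:

$ mysql -N -e "SELECT user, host, password_last_changed
    FROM mysql.user
    WHERE @@default_password_lifetime > 0
    AND password_last_changed + INTERVAL (@@default_password_lifetime - 14) DAY < CURDATE()"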

All of this could have been avoided if my analysis started with reading the documentation and the note (partly shown above) which has an alternative and potentially more practical immediate solution.

In a firefighting operational mode it can be a priority to correct the problem, however more detailed analysis is prudent to maintain a "Being proactive rather than reactive" mindset. Being a Friday, I feel the old saying "There is more than one way to skin a cat" is applicable.

I am also more familiar with the SET PASSWORD syntax, so reviewing this 5.7 manual page is also a good read, to determine what specific syntax is now deprecated and to note that "ALTER USER is now the preferred statement for assigning passwords".

Digital transformation strategies

“The cosmos is complex, the cloud does not have to be”.

This quote by Ben Amaba, Worldwide Executive at IBM Cloud, early in his presentation at the Performance without Limits 3.0 on IBM Cloud event, was his introduction to what I interpreted as stepping back from “what do I do with the cloud?” to consider “what makes my business successful?”. Indeed “the cloud”, i.e., Infrastructure as a Service (IaaS), should not be the complicated component in your business strategy.

The realization is that today, digital information exists and its growth is accelerating exponentially. Any strategy to embrace this need is essential to maintaining business success. This implies that to achieve transformation, your business has to include using available and potentially un-thought-of digital information to innovate and personalize. The present traditional approach towards provisioning resources and services simply cannot meet this need. Hence the adoption of “utilizing the cloud” is becoming the ubiquitous answer.

The business model that one develops for maximizing this infrastructure-on-demand needs to be a provable, reproducible, resilient and a flexible reference architecture. It needs to have set principles to embrace the potential of the cloud. It needs to minimize the potential of failure.

Amaba talked about having three guiding principles in his presentation. These are:

  1. Hybrid
  2. Discipline
  3. Analytical

As we consider digital transformation strategies, an understanding of the potential capability of a hybrid structure is required. This will vary by organization and industry, and will rely on a balance of private and public cloud services. Locking your organization into an all-public cloud solution (e.g. AWS), or an all-private cloud solution (e.g. all VMware), limits your capacity to adapt.

Implementing an on-premise cloud infrastructure that leverages OpenStack merely to replace existing proprietary off-premise cloud providers such as AWS, Azure and Google Cloud is not the ROI you should hope for. Rather, a hybrid private/public strategy, with the capacity to enable greater access to applicable and real-time data and tools for your employees, associated researchers, strategic partners and even individuals (via a foldit gamification type approach), increases the innovative transformation that can ensure your company is the disruptor, not the disrupted.

Let’s consider a theoretical example where a hybrid cloud strategy enables the capacity for innovation to occur in record time. CERN announces the release of 300TB of LHC data. If this was released into one specific cloud infrastructure, could your organization support that? For example, if this was the SoftLayer cloud infrastructure, leveraging the compute resources of this cloud provider would be beneficial to your internal organization, because one feature of this cloud is that it includes free data transfer across the entire worldwide network. This one feature on a specific cloud has an immediate cost-benefit.

Not being capable of expanding your organization's authentication, intellectual property analysis engines and tools to quickly and seamlessly cater for the data could also be a competitive disadvantage. Amaba noted that a third of top companies will face disruption in the next few years. Disruption is not limited to a competitor, a customer or a supplier. It can include the lack of ability to adapt timely to opportunity. Is being able to utilize any public cloud, rather than one specific public cloud, included in your business process? Could your internal global infrastructure and network support an additional 300TB of data immediately? While this is a specific use case, the availability of data and the ability of your organization's employees to consume, digest and analyze it is a digital strategy you need to be prepared for, to compete in speed and innovation within your industry. Is a new source of data for your organization an opportunity or a problem?

The discussion of what a hybrid cloud is, and how my organization caters for and uses a hybrid cloud, is the reason that thought leadership is needed. Understanding the enterprise architecture of legacy systems, the capacity of new cloud-native applications, and the huge divide in transitioning between these to enable utilization of existing data-wealth must also be part of your transformation strategy.

Amaba’s presentation also discussed the discipline needed to ensure and provide consistency. This ranges from the varying views of information presented to your consumers, to the choices for workload assignment and access. Analytics was the third principle, encompassing the capability to determine insights, from big data analysis to cognitive computing.

These thoughts are a reflection on the few notes taken at the time. I am really looking forward to seeing the slides and video presentation to fully reflect and comment in more detail.

Understanding the Oslo Libraries

Underpinning all of the OpenStack projects including Nova, Cinder, Keystone, Glance, Horizon, Heat, Trove, Murano and others is a set of core common libraries that provide a consistent, highly tested and compatible feature set. The Oslo project is a collection of over 30 libraries that are designed to reduce the technical debt of code duplication across projects and provide for a greater quality code path due to the frequency of use in OpenStack projects.

These libraries provide a variety of different features from the more commonly used functionality found in projects including configuration, logging, caching, messaging and database management to more specific features like deprecation management, handling plugins as well as frameworks for command line programs and state machines. The Oslo Python libraries are designed to be Python 2.7 and Python 3.4 compatible, leading the way in migration towards Python 3.

The first stable Oslo library oslo.config was included in the Grizzly release. Now over 30 libraries comprise the Oslo project. These libraries fall into a number of broad categories.

1. Stable OpenStack specific libraries

These libraries, using the oslo. prefix, are generally well described by the library name.

  • oslo.cache
  • oslo.concurrency
  • oslo.context
  • oslo.config
  • oslo.db
  • oslo.i18n
  • oslo.log
  • oslo.messaging
  • oslo.middleware
  • oslo.policy
  • oslo.privsep
  • oslo.reports
  • oslo.serialization
  • oslo.service
  • oslo.utils
  • oslo.versionedobjects
  • oslo.vmware

2. Python libraries that can easily operate with other projects

In addition to the oslo namespace libraries, Oslo has a number of generically named libraries that are not OpenStack specific. The goal is that these libraries can be utilized outside of OpenStack by any Python project. These include:

  • automaton – a framework for building state machines.
  • cliff – a framework for building command line programs.
  • debtcollector – a collection of python patterns that help you collect your technical debt in a non-destructive manner (following deprecation patterns and strategies and so-on).
  • futurist – a collection of async functionality and additions from the future.
  • osprofiler – an OpenStack cross-project profiling library.
  • hacking – a library that provides a set of tools for enforcing coding style guidelines.
  • pbr – (or Python Build Reasonableness) an add-on library that helps provide (and enforce) a set of sensible default setuptools behaviours.
  • pyCADF – a python implementation of the DMTF Cloud Audit (CADF) data model.
  • stevedore – a library for managing plugins for Python applications.
  • taskflow – a library that helps create applications that handle state/failures… in a reasonable manner.
  • tooz – a library that aims at centralizing the most common distributed primitives like group membership protocol, lock service and leader election.
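
Because these libraries are intentionally not OpenStack specific, trying one outside of OpenStack is straightforward. A small sketch using oslo.config's generator to inspect the options that oslo.log exposes (assuming a standard pip environment):

$ pip install oslo.config oslo.log
$ oslo-config-generator --namespace oslo.log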

3. Convenience libraries

There are also several libraries that are used during the creation of, or support of OpenStack libraries.

The first was oslo-incubator where, as the name suggests, initial libraries were incubated. As this code matured it was refactored into standard libraries. Projects have either graduated, been incorporated elsewhere or been deprecated. While the Oslo Incubator was emptied of libraries in Mitaka, one of the goals of the Newton cycle is to see the adoption of Oslo libraries in all projects. We will be providing a series of blogs to detail the walkthroughs and reviews of existing projects for reference.

Other libraries include:

  • oslosphinx is a sphinx add-on library that provides theme and extension support for generating documentation with Sphinx. The Developer Documentation, Release Notes, a number of the OpenStack manuals including the Configuration Reference and now the Nova API Reference rely on this library.

  • oslotest is a helper library that provides base classes and fixtures for creating unit and functional tests.
  • oslo-cookiecutter is a project that creates a skeleton Oslo library from a set of templates.

4. Proposed or deprecated libraries

Some libraries fall outside of these categories, such as oslo.rootwrap. This was a mature library for handling fine filtering of shell commands to run as root. This is now deprecated in favor of oslo.privsep which is a mechanism for running selected python code with elevated privileges.

pylockfile is a legacy (and adopted) inter-process lock management library that was never used within OpenStack.

oslo.version is an example of a currently proposed library, to help use Python metadata to determine versioning.

The Oslo team is also evaluating what other common code may be suitable for an Oslo library.

The meaning behind the Oslo Name

Each OpenStack project has some reason behind the name. Oslo is in reference to the Oslo Peace Accords and “bringing peace” to the OpenStack project.

Oslo is also the capital of Norway, and in Norway you can find Moose. The moose is our project mascot.

Are you running KVM or QEMU launched instances?

A recent operators mailing list thread asked this question regarding the OpenStack user survey results of April 2016 (See page 39).

As I verified my own local multi-node devstack dedicated H/W environment with varying commands, I initially came across the following error (which later was found to be misleading).

$ virt-host-validate
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking for device /dev/kvm                                         : FAIL (Check that the 'kvm-intel' or 'kvm-amd' modules are loaded & the BIOS has enabled virtualization)
  QEMU: Checking for device /dev/vhost-net                                   : WARN (Load the 'vhost_net' module to improve performance of virtio networking)
  QEMU: Checking for device /dev/net/tun                                     : PASS
   LXC: Checking for Linux >= 2.6.26                                         : PASS

This is an attempt to collate a list of varying commands collected from various sources, and the output of these in my Ubuntu 14.04 LTS environment.

# Are you running 64-bit architecture (0=bad; >0 is good)
$ egrep -c ' lm ' /proc/cpuinfo
8

# Does your processor support hardware virtualization (0=bad; >0 is good)
$ egrep -c '^flags.*(vmx|svm)' /proc/cpuinfo
8

# Are you running a 64-bit OS
$ uname -m
x86_64

# Have I installed the right Ubuntu packages
$ dpkg -l | egrep '(libvirt-bin|kvm|ubuntu-vm-builder|bridge-utils)'
ii  bridge-utils                        1.5-6ubuntu2                          amd64        Utilities for configuring the Linux Ethernet bridge
ii  libvirt-bin                         1.2.2-0ubuntu13.1.17                  amd64        programs for the libvirt library
ii  qemu-kvm                            2.0.0+dfsg-2ubuntu1.24                amd64        QEMU Full virtualization

# Have packages configured user privileges
$ grep libvirt /etc/passwd /etc/group
/etc/passwd:libvirt-qemu:x:108:115:Libvirt Qemu,,,:/var/lib/libvirt:/bin/false
/etc/passwd:libvirt-dnsmasq:x:109:116:Libvirt Dnsmasq,,,:/var/lib/libvirt/dnsmasq:/bin/false
/etc/group:libvirtd:x:116:rbradfor,stack

# Have I configured QEMU to use KVM
$ cat /etc/modprobe.d/qemu-system-x86.conf
options kvm_intel nested=1

# Have I loaded the KVM kernel modules
$ lsmod | grep kvm
kvm_intel             143630  3 
kvm                   456274  1 kvm_intel

# Are there any KVM related system messages
$ dmesg | grep kvm
[ 2030.719215] kvm: zapping shadow pages for mmio generation wraparound
[ 2032.454780] kvm [6817]: vcpu0 disabled perfctr wrmsr: 0xc1 data 0xabcd

# Can I use KVM?
$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

# Can I find a KVM device
$ ls -l /dev/kvm
crw-rw---- 1 root kvm 10, 232 May 11 14:15 /dev/kvm

# Have I configured nested KVM 
$ cat /sys/module/kvm_intel/parameters/nested
Y

All of the above is the default output of a stock Ubuntu 14.04 install on my H/W, with a correctly configured BIOS (which requires a hard reboot to verify, and a camera to record the proof).

Some more analysis when changing the BIOS.

$ sudo kvm-ok
INFO: /dev/kvm does not exist
HINT:   sudo modprobe kvm_intel
INFO: Your CPU supports KVM extensions
INFO: KVM (vmx) is disabled by your BIOS
HINT: Enter your BIOS setup and enable Virtualization Technology (VT),
      and then hard poweroff/poweron your system
KVM acceleration can NOT be used

When running a VirtualBox VM, the following is found.

$ sudo kvm-ok
INFO: Your CPU does not support KVM extensions
KVM acceleration can NOT be used

Now checking my OpenStack installation for related KVM needs.

# Have I configured Nova to use KVM virtualization
$ grep virt_type /etc/nova/nova.conf
virt_type = kvm

# Checking hypervisor type via API's
$ curl -s -H "X-Auth-Token: ${OS_TOKEN}" ${COMPUTE_API}/os-hypervisors/detail | $FORMAT_JSON | grep hypervisor_type
            "hypervisor_type": "QEMU",
            "hypervisor_type": "QEMU",

# Checking hypervisor type via OpenStack Client
$ openstack hypervisor show -f json 1 | grep hypervisor_type
  "hypervisor_type": "QEMU"

devstack by default has configured libvirt to use KVM.

Spinning up an instance I ran the following additional checks.


# List running instances
$ virsh -c qemu:///system list
 Id    Name                           State
----------------------------------------------------
 2     instance-00000001              running

# Check processlist for KVM usage
$ ps -ef | grep -i qemu | grep accel=kvm
libvirt+ 19093     1 21 16:24 ?        00:00:03 qemu-system-x86_64 -enable-kvm -name instance-00000001 -S -machine pc-i440fx-trusty,accel=kvm,usb=off...
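
As a further cross-check, the libvirt domain XML records the actual domain type; a KVM-accelerated guest reports type='kvm' rather than type='qemu'.

# Confirm the domain type of the running instance
$ virsh -c qemu:///system dumpxml instance-00000001 | grep "<domain type"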

Information from the running VM in my environment.

$ ssh [email protected]

$ egrep -c ' lm ' /proc/cpuinfo
1

$ egrep -c '^flags.*(vmx|svm)' /proc/cpuinfo
1

$ uname -m
x86_64


$ cat /proc/cpuinfo
processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 6
model name	: QEMU Virtual CPU version 2.0.0
...

So, the topic of the ML thread does indeed cover a real confusion: OpenStack reports the hypervisor type as QEMU even when, as my analysis shows, KVM acceleration is enabled. I find the original question a valid problem for operators.

And finally, while this exercise was a lesson in understanding a little more about hypervisors and the commands available, the original failing output was simply an operator error: sudo was needed for virt-host-validate (and not for the other commands).

$ sudo  virt-host-validate
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking for device /dev/kvm                                         : PASS
  QEMU: Checking for device /dev/vhost-net                                   : PASS
  QEMU: Checking for device /dev/net/tun                                     : PASS
   LXC: Checking for Linux >= 2.6.26                                         : PASS

Using your devstack cloud

You have set up and installed devstack. Now what?

The Horizon UI will allow you to administer your running cloud from a web interface. We are not going to discuss the web UI in this post.

Using the command line will provide you access to the following initial developer/operator capabilities.

  • Duplicating the features of the UI with the client tools
  • Observing the running services
  • Understanding the logging of OpenStack services
  • Understanding the configuration of OpenStack services
  • Understanding the source code of OpenStack services

This is not an exhaustive list or explanation of each point but an intro into navigating around the running OpenStack services.

Duplicating UI features

OpenStack has a number of individual command line clients for many services, and a common client openstack.

To get started:

$ openstack user list
Missing parameter(s): 
Set a username with --os-username, OS_USERNAME, or auth.username
Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url
Set a scope, such as a project or domain, set a project scope with --os-project-name, OS_PROJECT_NAME or auth.project_name, set a domain scope with --os-domain-name, OS_DOMAIN_NAME or auth.domain_name

By default you will need to provide applicable authentication details via arguments or environment variables.
Using the output of the devstack setup, we can obtain applicable details needed for most parameters.

$ ./stack.sh
...
...
...
This is your host IP address: 192.168.56.101
This is your host IPv6 address: ::1
Horizon is now available at http://192.168.56.101/dashboard
Keystone is serving at http://192.168.56.101:5000/
The default users are: admin and demo
The password: passwd

We can now retrieve a summary list of users defined in your project with:

$ openstack --os-username=admin --os-password=passwd --os-auth-url=http://192.168.56.101:5000/ --os-project-name=demo user list
+----------------------------------+----------+
| ID                               | Name     |
+----------------------------------+----------+
| a531ea1011af43bb8277f3e5edfea34b | admin    |
| d6ce303e83b64a2998228c55ebd274c3 | demo     |
| fe7301aa4d2b44b482cd6ba19c24f6b8 | alt_demo |
| e18ae48148df4593b4067785c5e72820 | nova     |
| 9a49deabb7b64454abf411de87c2862c | glance   |
| 1315257f265740f8a32988b014c9e693 | cinder   |
+----------------------------------+----------+

One required parameter for which no information was provided in the devstack installation output is the project. There are a number of projects defined in the installation, which you can obtain with:

$ openstack --os-username=admin --os-password=passwd --os-auth-url=http://192.168.56.101:5000/ --os-project-name=admin project list
+----------------------------------+--------------------+
| ID                               | Name               |
+----------------------------------+--------------------+
| 3b9f48af38ac40a495ca7b22d4d5c036 | demo               |
| 42c574962a114974bfe35e4a3467df60 | service            |
| 7af69c571e764d5f99688ed2e59930d5 | alt_demo           |
| 893b8954952c4319abd6596b587bba5f | admin              |
| da71fdc9c88f4eddac38937dfef542a2 | invisible_to_admin |
+----------------------------------+--------------------+

By defining authentication with environment variables you can greatly simplify CLI command usage. For example:

$ export OS_USERNAME=admin
$ export OS_PASSWORD=passwd
$ export OS_AUTH_URL=http://192.168.56.101:5000/
$ export OS_PROJECT_NAME=demo
$ openstack user list
...

devstack pre-packages a few source files that enable you to avoid specifying these arguments or environment variables manually. For example, to duplicate the example above:

$ source accrc/admin/demo
$ openstack user list

The openstack command provides a --help option to list the available options. You can also inquire as to commands with the command list option.

$ openstack --help
$ openstack command list

With the openstack command line interface you can perform all the operations needed to configure, administer and run your cloud services.

Observing the running services

OpenStack is made up of a number of services; the key services in devstack start with nova, keystone, glance, cinder and horizon. devstack conveniently packages the individual running services into separate screen processes, providing a curses-based view of services via the output of log files.

You can view the running screen sessions by reattaching with:

$ screen -r

If you get the following error when attempting to reattach “Cannot open your terminal ‘/dev/pts/0′ – please check.”, you have likely tried reconnecting in a different shell session. You can address this with:

$ script /dev/null
$ screen -r

Commands in screen are driven by a key combination starting with ^a (ctrl-A). ^a d will detach the screen session you just reattached to; this is what gets you out of screen. See the later section for the full list of screen help commands.

On the command line you can run the following command to list the available images via the glance service.

$ openstack image list
+--------------------------------------+---------------------------------+--------+
| ID                                   | Name                            | Status |
+--------------------------------------+---------------------------------+--------+
| 864bad45-d0de-4031-aea6-80b6af72cf2a | cirros-0.3.4-x86_64-uec         | active |
| 75e8b1ef-ae84-41aa-b0a0-7ea785771f14 | cirros-0.3.4-x86_64-uec-ramdisk | active |
| f694bdb1-4bb0-4f18-a7c9-290ad26b1fc8 | cirros-0.3.4-x86_64-uec-kernel  | active |
+--------------------------------------+---------------------------------+--------+

Within screen you can look at the glance api screen log (^a 5) and can observe the logging that occurs in relation to this command. For example we can see an INFO message to get the images (GET /v2/images), and we can see several DEBUG messages. We will use these DEBUG messages in a later post to describe handling logging output.

The INFO message will look like:

2016-04-04 16:24:00.139 INFO eventlet.wsgi.server [req-acf98429-60de-4d18-a69c-36a7d80bed7c a531ea1011af43bb8277f3e5edfea34b 3b9f48af38ac40a495ca7b22d4d5c036] 192.168.1.60 - - [04/Apr/2016 16:24:00] "GET /v2/images HTTP/1.1" 200 2202 0.116774

While we will discuss logging formats in another post, the standard format (in devstack) includes:

  • Date/Time
  • Logging Level
  • Package
  • Request context. This is made up of:
    • req-acf98429-60de-4d18-a69c-36a7d80bed7c a request-id, useful for grouping logging records (see the grep example after this list)
    • a531ea1011af43bb8277f3e5edfea34b refers to the user id (as seen in user list above, i.e. admin)
    • 3b9f48af38ac40a495ca7b22d4d5c036 refers to the project id (as seen in the project list above, i.e. demo)
  • The actual log message

In order to page back in screen output, enter copy mode (“^a [”); you can then use the ^b (page back) and ^f (page forward) keys.
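
For example, to group every log record for the single request shown above across all service logs:

$ grep 'req-acf98429-60de-4d18-a69c-36a7d80bed7c' /opt/stack/logs/*.log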

Understanding the logging of OpenStack services

What is actually observed in the screen output is what is being logged for the Glance API service. We can verify this with the log file located in /opt/stack/logs.

$ tail -f /opt/stack/logs/g-api.log

NOTE: You may see that there are colors within both the screen and log output. This is an optional configuration setup used by devstack (not an OpenStack default for logging). We will use this later to show a change in the logging of the service.

We can verify the details of the command used within the screen session (^a 5) by killing the running process with ^c.

Using the bash history, you can press the up arrow to observe the last run command, and restart it.

/usr/local/bin/glance-api --config-file=/etc/glance/glance-api.conf & echo $! >/opt/stack/status/stack/g-api.pid; fg || echo "g-api failed to start" | tee "/opt/stack/status/stack/g-api.failure"

The actual log file is produced by the screen configuration defined in devstack/stack-screenrc.

screen -t g-api bash
"tuff "/usr/local/bin/glance-api --config-file=/etc/glance/glance-api.conf
logfile /opt/stack/logs/g-api.log.2016-04-04-110956
log on

In a running OpenStack environment you would configure logging output to file as per the log_file option.

Understanding the configuration of OpenStack services

This command indicated a configuration file /etc/glance/glance-api.conf. Glance, like other services, may contain several configuration files. These are by default defined in the individual project's namespace under /etc.

$ ls -l /etc/glance/
total 152
-rw-r--r-- 1 stack stack 65106 Apr  4 11:12 glance-api.conf
-rw-r--r-- 1 stack stack  3266 Mar 11 12:22 glance-api-paste.ini
-rw-r--r-- 1 stack stack 13665 Apr  4 11:12 glance-cache.conf
-rw-r--r-- 1 stack stack 51098 Apr  4 11:12 glance-registry.conf
-rw-r--r-- 1 stack stack  1233 Mar 11 12:22 glance-registry-paste.ini
drwxr-xr-x 2 stack root   4096 Apr  4 11:12 metadefs
-rw-r--r-- 1 stack stack  1351 Mar 11 12:22 policy.json
-rw-r--r-- 1 stack stack  1380 Mar 11 12:22 schema-image.json

This is an appropriate time to point to several documentation sources including the Glance Developer Documentation – Configuration Options and the Configuration Guide Image Service options which describe in more detail these listed configuration files and the possible options available. You can find similar documentation for other services.

To demonstrate just how the configuration and logging work with a running service the following will modify the logging of the Glance API service by commenting out the logging configuration lines, and then reverting to the oslo.log configuration defaults.

$ sudo vi /etc/glance/glance-api.conf

Comment out the four logging_ options in the [DEFAULT] section.

[DEFAULT]
#logging_exception_prefix = %(color)s%(asctime)s.%(msecs)03d TRACE %(name)s ^[[01;35m%(instance)s^[[00m
#logging_debug_format_suffix = ^[[00;33mfrom (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d^[[00m
#logging_default_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [^[[00;36m-%(color)s] ^[[01;35m%(instance)s%(color)s%(message)s^[[00m
#logging_context_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [^[[01;36m%(request_id)s ^[[00;36m%(user)s %(tenant)s%(color)s] ^[[01;35m%(instance)s%(color)s%(message)s^[[00m

Now, repeating the earlier steps within the g-api screen window, kill and restart the service.
The first thing you will observe is that the logging no longer contains color (this helps greatly for log file analysis). Repeat the CLI option to list the images, and you will notice a slightly modified logging message occur.

2016-04-05 11:38:57.312 17696 INFO eventlet.wsgi.server [req-1e66b7e5-3429-452e-a9b7-e28ee498f772 a531ea1011af43bb8277f3e5edfea34b 3b9f48af38ac40a495ca7b22d4d5c036 - - -] 192.168.1.60 - - [05/Apr/2016 11:38:57] "GET /v2/images HTTP/1.1" 200 2202 11.551233

The request context now is a modified format (containing additional - - - values) as a result of using the default value of logging_context_format_string. We will discuss the specifics of logging options in a later post.

There are a reasonable number of log files for a minimal devstack installation; some services have multiple log files.

$ cd /opt/stack/logs; ls -l *.log
lrwxrwxrwx 1 stack stack       27 Apr  5 12:49 c-api.log -> c-api.log.2016-04-05-124004
lrwxrwxrwx 1 stack stack       27 Apr  5 12:49 c-sch.log -> c-sch.log.2016-04-05-124004
lrwxrwxrwx 1 stack stack       27 Apr  5 12:49 c-vol.log -> c-vol.log.2016-04-05-124004
-rw-r--r-- 1 stack stack 16672591 Apr  5 14:01 dstat-csv.log
lrwxrwxrwx 1 stack stack       27 Apr  5 12:42 dstat.log -> dstat.log.2016-04-05-124004
lrwxrwxrwx 1 stack stack       27 Apr  5 12:48 g-api.log -> g-api.log.2016-04-05-124004
lrwxrwxrwx 1 stack stack       27 Apr  5 12:48 g-reg.log -> g-reg.log.2016-04-05-124004
lrwxrwxrwx 1 stack stack       29 Apr  5 12:50 horizon.log -> horizon.log.2016-04-05-124004
lrwxrwxrwx 1 stack stack       32 Apr  5 12:42 key-access.log -> key-access.log.2016-04-05-124004
lrwxrwxrwx 1 stack stack       25 Apr  5 12:42 key.log -> key.log.2016-04-05-124004
lrwxrwxrwx 1 stack stack       27 Apr  5 12:48 n-api.log -> n-api.log.2016-04-05-124004
lrwxrwxrwx 1 stack stack       29 Apr  5 12:49 n-cauth.log -> n-cauth.log.2016-04-05-124004
lrwxrwxrwx 1 stack stack       28 Apr  5 12:48 n-cond.log -> n-cond.log.2016-04-05-124004
lrwxrwxrwx 1 stack stack       27 Apr  5 12:49 n-cpu.log -> n-cpu.log.2016-04-05-124004
lrwxrwxrwx 1 stack stack       27 Apr  5 12:48 n-crt.log -> n-crt.log.2016-04-05-124004
lrwxrwxrwx 1 stack stack       28 Apr  5 12:42 n-dhcp.log -> n-dhcp.log.2016-04-05-124004
lrwxrwxrwx 1 stack stack       27 Apr  5 12:48 n-net.log -> n-net.log.2016-04-05-124004
lrwxrwxrwx 1 stack stack       29 Apr  5 12:49 n-novnc.log -> n-novnc.log.2016-04-05-124004
lrwxrwxrwx 1 stack stack       27 Apr  5 12:49 n-sch.log -> n-sch.log.2016-04-05-124004
lrwxrwxrwx 1 stack stack       46 Apr  5 12:40 stack.sh.log -> /opt/stack/logs/stack.sh.log.2016-04-05-124004

To turn off color in logging across all services, you can configure this in the devstack local.conf file before starting the stack.

# local.conf
LOG_COLOR=False

Understanding the source code of OpenStack services

devstack installs the OpenStack code in two ways, via packaging and via source.

Generally all libraries are installed via packaging. You can discern these by looking at the Python packages via pip with:

$ pip freeze
...
oslo.cache==1.5.0
oslo.concurrency==3.6.0
oslo.config==3.9.0
oslo.context==2.2.0
oslo.db==4.6.0
oslo.i18n==3.4.0
oslo.log==3.2.0
oslo.messaging==4.5.0
oslo.middleware==3.7.0
oslo.policy==1.5.0
oslo.reports==1.6.0
oslo.rootwrap==4.1.0
oslo.serialization==2.4.0
oslo.service==1.7.0
oslo.utils==3.7.0
oslo.versionedobjects==1.7.0
oslo.vmware==2.5.0
...
python-barbicanclient==4.0.0
python-ceilometerclient==2.3.0
python-cinderclient==1.6.0
python-designateclient==2.0.0
python-glanceclient==2.0.0
python-heatclient==1.0.0
python-ironicclient==1.2.0
python-keystoneclient==2.3.1
python-magnumclient==1.1.0
python-manilaclient==1.8.0
python-memcached==1.57
python-mimeparse==1.5.1
python-mistralclient==2.0.0
python-neutronclient==4.1.1
python-novaclient==3.3.0
python-openstackclient==2.2.0
python-saharaclient==0.13.0
python-senlinclient==0.4.0
python-subunit==1.2.0
python-swiftclient==3.0.0
python-troveclient==2.1.1
python-zaqarclient==1.0.0
...

This is a list of all Python packages, so it is not possible to determine from it alone which are OpenStack specific and which are dependencies. These installed packages are actual Python source that you can view and even modify.

$ ls -l /usr/local/lib/python2.7/dist-packages/
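
For an individual library you can confirm the installed version and source location with pip; for example, using oslo.log from the list above:

$ pip show oslo.log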

You can approximate the installed OpenStack packages via source by looking at the base source directory:

$ ls -l /opt/stack
total 92
drwxr-xr-x 10 stack stack 4096 Mar 11 12:23 cinder
drwxr-xr-x  6 stack root  4096 Apr  5 12:42 data
-rw-r--r--  1 stack stack  440 Apr  5 12:52 devstack.subunit
drwxr-xr-x  4 stack stack 4096 Mar 11 12:27 dib-utils
drwxr-xr-x 10 stack stack 4096 Mar 11 12:22 glance
drwxr-xr-x 15 stack stack 4096 Mar 11 12:26 heat
drwxr-xr-x  7 stack stack 4096 Mar 11 12:27 heat-cfntools
drwxr-xr-x 10 stack stack 4096 Mar 11 12:27 heat-templates
drwxr-xr-x 11 stack stack 4096 Mar 11 14:13 horizon
drwxr-xr-x 13 stack stack 4096 Mar 11 11:57 keystone
drwxr-xr-x  2 stack stack 4096 Apr  5 12:50 logs
drwxr-xr-x 12 stack stack 4096 Mar 11 15:45 neutron
drwxr-xr-x 13 stack stack 4096 Mar 11 12:25 nova
drwxr-xr-x  8 stack stack 4096 Mar 11 12:24 noVNC
drwxr-xr-x  4 stack stack 4096 Mar 11 12:27 os-apply-config
drwxr-xr-x  4 stack stack 4096 Mar 11 12:27 os-collect-config
drwxr-xr-x  5 stack stack 4096 Mar 11 12:27 os-refresh-config
drwxr-xr-x  7 stack stack 4096 Apr  5 12:51 requirements
drwxr-xr-x 13 stack stack 4096 Mar 11 15:47 solum
drwxr-xr-x  3 stack stack 4096 Apr  4 11:13 status
drwxr-xr-x 10 stack stack 4096 Mar 11 12:22 swift

devstack enables you to configure which packages you want to install via source. Check out the plugins documentation for more information. For example, the following added to local.conf would install solum.

# local.conf
...
enable_plugin solum git://git.openstack.org/openstack/solum

devstack gives you complete flexibility over which branch and version of each package is used. This enables you to use devstack as a testing tool for code changes.
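
As a sketch of this flexibility, devstack defines per-project *_BRANCH variables (in stackrc) that you can override in your local.conf; for example, pinning glance to a branch (illustrative branch name):

# local.conf
GLANCE_BRANCH=stable/mitaka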

To understand more about how the software is installed, check out the devstack documentation and review the stack.sh script.

What’s next

This is only a cursory introduction to what devstack sets up during the installation process. Subsequent posts will talk more on topics including the configuration options, the different logging capabilities and how to test code changes.

screen help

^a ? will provide the following help output.

                                                                                     Screen key bindings, page 1 of 2.

                                                                                     Command key:  ^A   Literal ^A:  a

  break       ^B b         dumptermcap .            info        i            meta        a            pow_detach  D            reset       Z            title       A            xoff        ^S s      
  clear       C            fit         F            kill        K k          monitor     M            prev        ^H ^P p ^?   screen      ^C c         vbell       ^G           xon         ^Q q      
  colon       :            flow        ^F f         lastmsg     ^M m         next        ^@ ^N sp n   quit        \            select      '            version     v         
  copy        ^[ [         focus       ^I           license     ,            number      N            readbuf     <            silence     _            width       W         
  detach      ^D d         hardcopy    h            lockscreen  ^X x         only        Q            redisplay   ^L l         split       S            windows     ^W w      
  digraph     ^V           help        ?            log         H            other       ^A           remove      X            suspend     ^Z z         wrap        ^R r      
  displays    *            history     { }          login       L            pow_break   B            removebuf   =            time        ^T t         writebuf    >         

^]   paste .
"    windowlist -b
-    select -
0    select 0
1    select 1
2    select 2
3    select 3
4    select 4
5    select 5
6    select 6
7    select 7
8    select 8
9    select 9
I    login on
O    login off
]    paste .
|    split -v
:kB: focus prev

Running a devstack virtual machine with limited memory

If you have a system with only 4GB of RAM, you need to assign at least 2.5GB (2560M) to a virtual machine to install devstack. Even with this limited RAM there are times the devstack installation will fail.

One way to give the installation process an opportunity to complete is to configure your virtual machine to have swap space. The amount of swap space you can configure may be limited to the size of your initial disk partition configuration (which is 8GB). The following steps add a 2GB swap file to your virtual machine.

# Check current swap usage and available memory
sudo swapon -s
free -m
# Create a 2GB swap file and restrict its permissions
sudo fallocate -l 2G /swapfile
ls -lh /swapfile
sudo chmod 600 /swapfile
# Format the file as swap and enable it
sudo mkswap /swapfile
sudo swapon /swapfile
# Verify the swap space is now active
sudo swapon -s
free -m
# Make the swap file persistent across reboots
echo "/swapfile   none    swap    sw    0   0" | sudo tee -a /etc/fstab
cat /etc/fstab

The use of swap space by your virtual machine instead of available RAM will cause a significant slowdown of any software. For the purposes of a minimal installation this option provides a means to observe a running minimal OpenStack cloud.
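
If the slowdown becomes painful, one optional mitigation (a general Linux tweak, not devstack specific) is to lower the kernel's tendency to swap:

# Reduce swapping aggressiveness for the current boot
sudo sysctl vm.swappiness=10
# To persist the setting across reboots
echo "vm.swappiness=10" | sudo tee -a /etc/sysctl.conf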

Downloading and installing devstack

The following instructions assume you have a running Linux virtual machine that can support the installation of devstack to demonstrate a simple working OpenStack cloud.

For more information about the preparation needed for this step, see these pre-requisite instructions:

Pre-requisites

You will need to log in to your Linux virtual machine as a normal user (e.g. stack if you followed these instructions).

To verify the IP address of your machine you can run:

$ ifconfig eth1

NOTE: This assumes you configured a second network adapter as detailed.

You need to determine the IP address assigned. If this is your first time using VirtualBox and it was configured with default settings, the value will be 192.168.56.101.

eth1      Link encap:Ethernet  HWaddr 08:00:27:db:42:6e  
          inet addr:192.168.56.101  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fedb:426e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:398500 errors:0 dropped:0 overruns:0 frame:0
          TX packets:282829 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:35975184 (35.9 MB)  TX bytes:59304714 (59.3 MB)

Verify that you have applicable sudo privileges.

$ sudo id

If you are prompted for a password, then your privileges are not configured correctly. See here.

Download devstack

After connecting to the virtual machine the following commands will download the devstack source code:

$ sudo apt-get install -y git-core
# NOTE: You will not be prompted for a password
#       This is important for the following installation steps
$ git clone https://git.openstack.org/openstack-dev/devstack

Configure devstack

The following will create an example configuration file suitable for a default devstack installation.

$ cd devstack
# Use the sample default configuration file
$ cp samples/local.conf .
$ HOST_IP="192.168.56.101"
$ echo "HOST_IP=${HOST_IP}" >> local.conf

NOTE: If your machine has a different IP address, you should specify that value instead.
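
If you prefer not to hard-code the value, a small sketch (assuming the eth1 host-only adapter described earlier) can derive it:

$ HOST_IP=$(ip -4 addr show eth1 | awk '/inet /{print $2}' | cut -d/ -f1)
$ echo "HOST_IP=${HOST_IP}" >> local.conf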

Install devstack

$ ./stack.sh

Depending on your physical hardware and network connection, this takes approximately 20 minutes.

When completed you will see the following:

...
This is your host IP address: 192.168.56.101
This is your host IPv6 address: ::1
Horizon is now available at http://192.168.56.101/dashboard
Keystone is serving at http://192.168.56.101:5000/
The default users are: admin and demo
The password: nomoresecrete

While the installation of devstack is happening, you should read the Configuration section, and look at the devstack/samples/local.conf sample configuration file being used.

Accessing devstack

You now have a running OpenStack cloud. There are two easy ways to access the running services to verify this.

  • Connect to the Horizon dashboard in your browser with the URL (e.g. http://192.168.56.101/), and use the user and password described (e.g. admin and nomoresecrete).
  • Use the OpenStack client that is installed with devstack, for example:
$ source accrc/admin/admin
$ openstack image list
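
A few other read-only OpenStack client commands are useful for a first look around your new cloud, for example:

$ openstack service list
$ openstack flavor list
$ openstack network list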

See Using your devstack cloud for more information about analyzing your running cloud, restarting services, configuration files and how to demonstrate a code change.

Other devstack commands

There are some useful commands to know about with your devstack setup.

If you restart your virtual machine, you reconnect to devstack by re-running the installation (there is no longer a rejoin-stack.sh):

$ ./stack.sh

To shut down a running devstack:

$ ./unstack.sh

To clean up your VM of devstack-installed software:

$ ./clean.sh

Setting up Ubuntu using vagrant

As discussed in Setting up an Ubuntu virtual machine using VirtualBox there are several other alternatives to defining an Ubuntu virtual machine. One of these alternatives is using Vagrant.

Pre-requisites

Vagrant requires the installation of VirtualBox.

Install Vagrant

See Vagrant Downloads for the correct file for your platform.

For Ubuntu, the following commands will download a recent copy and install on your computer.

$ wget https://releases.hashicorp.com/vagrant/1.8.1/vagrant_1.8.1_x86_64.deb
$ sudo dpkg -i vagrant_1.8.1_x86_64.deb

Launching an Ubuntu image

The following commands will initialize and start an Ubuntu 14.04 vagrant instance.

$ vagrant init ubuntu/trusty64
$ vagrant up --provider virtualbox
$ vagrant ssh

You should now be connected to the new virtual machine.

Vagrant creates a port forwarding configuration from your local machine automatically. You can connect via ssh directly with:

ssh vagrant@localhost -p 2222 -i .vagrant/machines/default/virtualbox/private_key

NOTE: Port 2222 may be different if this is already in use. You can verify this via the output of the vagrant up command, for example:

...
==> default: Forwarding ports...
    default: 22 (guest) => 2222 (host) (adapter 1)
...
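
You can also confirm the current forwarded port and private key location at any time with:

$ vagrant ssh-config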

Post configuration

In order to access your vagrant instance with a specific IP address and leverage the recommended devstack instructions, you need to add the config.vm.network line to the Vagrantfile in the directory used on your host computer. You also need to set the virtual machine memory to at least 2.5GB (2560M) to get a minimal devstack operational.

Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "private_network", type: "dhcp"
 
  config.vm.provider "virtualbox" do |v|
    v.memory = 2560
  end
end

You will then need to restart the vagrant instance in order to have a host-only IP address assigned to the virtual machine and the applicable memory applied.

$ vagrant reload
$ vagrant ssh
$ ifconfig eth1
$ free -m

This has created a suitable virtual machine ready for Downloading and installing devstack.

Setting up CentOS on VirtualBox for RDO

Create a CentOS Virtual Machine (VM)

NOTE: There are several different ways of creating a base CentOS VM image. These steps are the more manual approach; they are provided for completeness in understanding the varying options.

To create a virtual machine in VirtualBox select the New icon. This will prompt you for some initial configuration. Use these recommendations:

  • Name and operating System
    • Name: RDO
    • Type: Linux
    • Version: Red Hat (64-bit)
  • Memory Size
    • Use at minimum 4GB.
  • Hard Disk
    • Use the default settings including 8.0GB, VDI type, dynamically allocated, File location and size.

By default your virtual machine is ready to install; however, the following network recommendation will make it easier to access your running virtual machine via SSH, and the RDO web interface and APIs, from your host computer.

  • Click Settings
  • Select Network
  • Enable Adapter 2 and attach to a Host-only Adapter and select vboxnet0
  • Ok

Install CentOS Operating System

You are now ready to install the Operating System on the virtual machine with the following instructions.

  • Click Start
  • Open the CentOS .iso file you just downloaded.
  • You will be prompted for a number of options; select the default provided and use the following values when prompted.
  • Install CentOS 7
  • Select English and English (United States) (or your choice of language)
  • Select System to configure your installation destination
    • Click Done to use the default VM disk and automatically configure partitioning
  • Select Network & hostname
    • Enable both of the listed Ethernet connections
    • Enter rdo for the Host Name
    • Click Done
  • Click Begin Installation
  • Click Root Password
    • Enter password of your choosing
  • Click User Creation
    • Enter rdo for user name (or any value of your choice)
    • Enter Openstack for password (or any password of your choice)
    • Click Done

When the installation is complete, click Reboot.

You will now be able to login with username: rdo and password: Openstack (or the values you chose).

Post Installation

While the second Ethernet adapter for your VM is configured, it is not enabled. The following commands enable it and disable NetworkManager.

$ su -
# Enter root password
$ sed -i -e "s/ONBOOT=no/ONBOOT=yes/" /etc/sysconfig/network-scripts/ifcfg-enp0s8
$ ifup enp0s8
$ ip addr
# RDO does not operate with NetworkManager
$ sudo systemctl stop NetworkManager.service
$ sudo systemctl disable NetworkManager.service

The ip output will verify the IP address that was assigned. If you configured the VirtualBox host-only adapter with defaults, the address will be 192.168.56.1XX.

To verify access to your virtual machine from your host computer, you should SSH with:

$ ssh rdo@192.168.56.1XX

Setting up Ubuntu on VirtualBox for devstack

As discussed, devstack enables a software developer to run a standalone minimal OpenStack cloud on a virtual machine (VM). In this tutorial we are going to step through the installation of an Ubuntu VM using VirtualBox manually. This is a pre-requisite to installing devstack.

NOTE: There are several different ways of creating a base Ubuntu VM image. These steps are the more manual approach; they are provided for completeness in understanding the varying options.

Pre-requisites

  1. You will need a computer running a 64-bit operating system on Mac OS X, Windows, Linux or Solaris with at least 4GB of RAM and 10GB of available disk drive space.
  2. You will need to have a working VirtualBox on your computer. See Setting up VirtualBox to run virtual machines as a pre-requisite for these steps.
  3. You will need an Ubuntu server .iso image. Download the Ubuntu Server 14.04 (Trusty) server image (e.g. ubuntu-14.04.X-server-amd64.iso) to your computer. This will be the base operating system of your virtual machine that will run devstack.

If using Mac OS X or Linux you can obtain a recent .iso release with the command:

$ wget http://releases.ubuntu.com/14.04/ubuntu-14.04.4-server-amd64.iso
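
You can optionally verify the download against the published checksums (an MD5SUMS file is available in the same directory on releases.ubuntu.com):

$ wget http://releases.ubuntu.com/14.04/MD5SUMS
$ md5sum -c MD5SUMS 2>/dev/null | grep ubuntu-14.04.4-server-amd64.iso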

NOTE: devstack can be installed on different operating systems. As a first-time user, Ubuntu 14.04 is used as this is a more common platform (and used by OpenStack infrastructure). Other supported operating systems include Ubuntu (14.10, 15.04, 15.10), Fedora (22, 23) and CentOS/RHEL 7.

Create an Ubuntu Virtual Machine

To create a virtual machine in VirtualBox select the New icon. This will prompt you for some initial configuration. Use these recommendations:

  • Name and operating System
    • Name: devstack
    • Type: Linux
    • Version: Ubuntu (64-bit)
  • Memory Size
    • If you have 8+GB use 4GB.
    • If you have only 4GB use 2.5GB. (Note: testing during the creation of this guide found that 2048M was insufficient, and that a minimum of 2560M was needed.)
  • Hard Disk
    • Use the default settings including 8.0GB, VDI type, dynamically allocated, File location and size.

By default your virtual machine is ready to install; however, the following network recommendation will make it easier to access your running virtual machine and devstack from your host computer.

  • Click Settings
  • Select Network
  • Enable Adapter 2 and attach to a Host-only Adapter and select vboxnet0
  • Ok

You are now ready to install the Operating System on the virtual machine with the following instructions.

  • Click Start
  • Open the Ubuntu .iso file you just downloaded.
  • You will be prompted for a number of options; select the default provided and use the following values when prompted.
  • Install Ubuntu Server
  • English (or your choice)
  • United States (or your location)
  • No for configure the keyboard
  • English (US) for keyboard (or your preference)
  • English (US) for keyboard layout (or your preference)
  • Select eth0 as your primary network interface
  • Select default ubuntu for hostname
  • Enter stack for full username/username
  • Enter Openstack for password (or your own preference)
  • Select No to encrypt home directory
  • Select Yes for time zone selected
  • Select Guided – use entire disk for partition method
  • Select highlighted partition
  • Select Yes to partition disks
  • Select Continue for package manager proxy
  • Select No automatic updates
  • Select OpenSSH Server in software to install
  • Select Yes to install GRUB boot loader
  • Select Continue when installation complete

The new virtual machine will now restart and you will be able to login with the username and password specified (i.e. stack and Openstack).

Post Installation

After successfully logging in, run the following commands to complete the Ubuntu installation setup needed as pre-requisites to install devstack.

$ sudo su -
# Enter your stack user password
$ umask 266 && echo "stack ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/stack
$ apt-get update && apt-get upgrade -y
$ echo "auto eth1
iface eth1 inet dhcp" >> /etc/network/interfaces
$ ifup eth1
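
To confirm the passwordless sudo configuration took effect, a quick sanity check (sudo -n fails rather than prompting for a password):

$ su - stack
$ sudo -n true && echo "passwordless sudo OK"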

You are now ready to download and install devstack.

You can also setup an Ubuntu virtual machine via vagrant which simplifies these instructions.

More information

This blog series is for the software developer with no experience in OpenStack, to experience just the tip of its functionality and features and to become more interested in the project.

VirtualBox networking for beginners

When using VirtualBox for my OpenStack development I always configure two network adapters for ease of development. The first is a NAT adapter that enables the guest VM connectivity to the Internet via the host. The second is a host-only adapter that enables my host computer (aka my terminal windows) to SSH directly to the guest VM, or to access a web interface for example. This enables the easy use of tools like ssh, scp and rsync with multiple VMs without thinking about different ports.

Having the two adapters is very convenient; however, when you install products such as devstack or RDO these require additional steps to manage the interface and configure the installation. These steps are relatively straightforward but they make the simplest instructions more complex.

The alternative is to use only the NAT adapter and enable port forwarding. For example you can configure forwarding of host port 2222 to guest port 22 with (when the VM is not running):

$ VBoxManage modifyvm "vm-name" --natpf1 "guestssh,tcp,,2222,,22"
$ VBoxManage startvm "vm-name"

You can now connect to the guest VM via port forwarding on your host, in this case connecting to port 2222.

$ ssh user@localhost -p 2222

Personally I find this a disadvantage. You need to configure port forwarding for every port you want to communicate on, e.g. ssh (22), http (80) and keystone (5000). You need to do it in advance of using your VM, and you also need to do this for each VM.
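
For example, additionally exposing the http and keystone ports mentioned above would require a rule per port, using the same VBoxManage syntax (illustrative host port choices):

$ VBoxManage modifyvm "vm-name" --natpf1 "guesthttp,tcp,,8080,,80"
$ VBoxManage modifyvm "vm-name" --natpf1 "guestkeystone,tcp,,5000,,5000"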

However, depending on your needs and experience this is a valid alternative.

Ubuntu two adapter configuration

On Ubuntu, the following configuration file defines two DHCP network adapters.

$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet dhcp

You can verify adapter information (e.g. IP address) using ifconfig.

$ ifconfig
eth0      Link encap:Ethernet  HWaddr 08:00:27:7f:a0:e2  
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe7f:a0e2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:585 errors:0 dropped:0 overruns:0 frame:0
          TX packets:455 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:129820 (129.8 KB)  TX bytes:64215 (64.2 KB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:66:8d:cb  
          inet addr:192.168.56.102  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe66:8dcb/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:221 errors:0 dropped:0 overruns:0 frame:0
          TX packets:152 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:28289 (28.2 KB)  TX bytes:19443 (19.4 KB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:371 errors:0 dropped:0 overruns:0 frame:0
          TX packets:371 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:90509 (90.5 KB)  TX bytes:90509 (90.5 KB)

The ip command is also available.

To configure a fixed IP address on the host-only adapter network (useful when running many machines), you would use:

auto eth1
iface eth1 inet static
address 192.168.56.50
netmask 255.255.255.0
gateway 192.168.56.1
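
After editing /etc/network/interfaces, restart the interface for the change to take effect (ifdown/ifup are part of the standard Ubuntu ifupdown tooling):

$ sudo ifdown eth1 && sudo ifup eth1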

CentOS two adapter configuration

CentOS keeps a configuration file per interface. We start by determining the interface names.

$ ls -l /etc/sysconfig/network-scripts/ifcfg-*
-rw-r--r--. 1 root root 310 Mar 30 16:53 /etc/sysconfig/network-scripts/ifcfg-enp0s3
-rw-r--r--. 1 root root 278 Mar 30 17:00 /etc/sysconfig/network-scripts/ifcfg-enp0s8
-rw-r--r--. 1 root root 277 Mar 30 16:53 /etc/sysconfig/network-scripts/ifcfg-enp0s8e
-rw-r--r--. 1 root root 254 Sep 16  2015 /etc/sysconfig/network-scripts/ifcfg-lo

We can then review the per-interface configuration with:

$ cat /etc/sysconfig/network-scripts/ifcfg-enp0s3
TYPE="Ethernet"
BOOTPROTO="dhcp"
DEFROUTE="yes"
PEERDNS="yes"
PEERROUTES="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_PEERDNS="yes"
IPV6_PEERROUTES="yes"
IPV6_FAILURE_FATAL="no"
NAME="enp0s3"
UUID="2c0bdd66-badc-449c-8db8-b2c85a716dab"
DEVICE="enp0s3"
ONBOOT="yes"

$ cat /etc/sysconfig/network-scripts/ifcfg-enp0s8
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=enp0s8
UUID=4127685d-a7b9-4b8d-a399-6dcacdb3396d
DEVICE=enp0s8
ONBOOT=yes
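
For comparison with the earlier Ubuntu static example, a fixed-address sketch of this host-only interface would look like the following (illustrative values):

TYPE=Ethernet
BOOTPROTO=none
IPADDR=192.168.56.50
NETMASK=255.255.255.0
NAME=enp0s8
DEVICE=enp0s8
ONBOOT=yes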

You can verify the network configuration using the ip command.

$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:32:c0:4c brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 15876sec preferred_lft 15876sec
    inet6 fe80::a00:27ff:fe32:c04c/64 scope link 
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:43:22:85 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.103/24 brd 192.168.56.255 scope global dynamic enp0s8
       valid_lft 1056sec preferred_lft 1056sec
    inet6 fe80::a00:27ff:fe43:2285/64 scope link 
       valid_lft forever preferred_lft forever

CentOS does not provide ifconfig by default; it is included in the net-tools package (which RDO, for example, installs).

$ sudo yum install -y net-tools
$ ifconfig
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::a00:27ff:fe32:c04c  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:32:c0:4c  txqueuelen 1000  (Ethernet)
        RX packets 437445  bytes 441847285 (421.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 142720  bytes 8897712 (8.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.103  netmask 255.255.255.0  broadcast 192.168.56.255
        inet6 fe80::a00:27ff:fe43:2285  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:43:22:85  txqueuelen 1000  (Ethernet)
        RX packets 14250  bytes 1708162 (1.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15367  bytes 13061787 (12.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 5706360  bytes 778262566 (742.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5706360  bytes 778262566 (742.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


Installing VirtualBox for OpenStack development

Download VirtualBox for your operating system

VirtualBox is an open source virtualization product that will allow you to create virtual machines on a computer using Linux, Mac OS X or Windows. While the current version is 5.x, older versions will also work if you are already using this software.

NOTE: There are different products that can provide virtualization on your computer. As a first time user with virtualization, VirtualBox is a common open source product used by developers.

Install VirtualBox on your system

Just follow the default prompts.

Recommended VirtualBox Networking

To provide a better experience for installing and accessing devstack or RDO, the following VirtualBox configuration is recommended to create a host-only adapter network on your host machine.

  • Start VirtualBox
  • Open Preferences (e.g. File|Preferences)
  • Select Network
  • Select Host-only Networks
  • Add Network (accept all defaults)

This additional step will create a network configuration in VirtualBox that is called vboxnet0. This will define a network in the 192.168.56.X range, and will configure a DHCP server that will issue IP addresses starting at 192.168.56.101. This will enable you to more easily access your VMs from your host computer as discussed in VirtualBox networking for beginners.

Installing Openstack with devstack, a first-time guide

This guide will enable the reader to install a minimal OpenStack cloud using devstack for the first time.

This guide will assume you have never installed virtualization software, used or configured devstack or even observed a running OpenStack cloud. This guide does assume that you can perform some basic software development instructions as documented.

This guide is targeted towards the software developer that may want to review the Python code and contribute to the open source project, or the system architect that wants to evaluate some of the features of OpenStack. If you are an end user, you should try a public cloud that runs OpenStack, such as OVH, Rackspace or other public cloud providers listed in the OpenStack Marketplace.

There are some hardware requirements and various copy/paste command line instructions on a Linux virtual machine. While it would be possible to publish a completed virtual machine you could download and click to run, understanding the underpinnings of the most basic installation and configuration of devstack will provide an appreciation of the complexity of the product and the software development capabilities.

At the end of this process you will have a running OpenStack cloud on your computer that is running on a Linux virtual machine. You will be able to access this with your browser and be able to perform basic cloud infrastructure tasks, such as creating a compute instance. This guide is not intended to talk about the benefits or usages of a cloud.

You will need a computer running Mac OS X, Windows, Linux (see supported list) or Solaris with at least 4GB of RAM and 10GB of available disk drive space in order to complete the following steps.

  1. Installing VirtualBox
  2. Setting up an Ubuntu virtual machine using VirtualBox
  3. Downloading and installing devstack
  4. Using your OpenStack devstack cloud

NOTE: These steps will provide one means of installing devstack with one type of virtualization software on a specific Linux operating system. This is only meant as a first-time user's guide, and some pre-defined decisions have been made. There are multiple ways to implement and use devstack with different software and operating systems.

What’s next?

Without knowing your purpose in following this first-time guide, what’s next depends on you. As a software developer you may be interested in looking at OpenStack Bugs or contributing to new features of one of the many projects. As an architect you may want to understand a more complex configuration setup as you plan what may be necessary to utilize a cloud infrastructure in your organization. This guide is only intended as a first introduction, and hopefully it has provided the intended result: for the reader to consider what OpenStack can possibly provide.

More references

We will assume you have never installed virtualization software on your computer, have not installed devstack, and have not even seen an OpenStack interface. The devstack documentation does not make this assumption, and so these more generic instructions are useful to the uninitiated. While some (including this author) feel these instructions are worthy of the official devstack documentation, others (with valid reasons) do not; such is the democracy of a large distributed open source project. For more information see review #290854. This guide joins the many others searchable by Internet search engines.

devstack, your personal OpenStack Cloud

As a software developer or system architect that is interested in looking at the workings of OpenStack, devstack is one of several different ways to start a personal cloud using the current OpenStack code base.

In its most basic form, you can run devstack in a virtual machine and manage your personal cloud via the Horizon web interface (known as the Dashboard), or via several CLIs such as the OpenStack client (OSC). You can use this to launch compute services, manage boot images and disk volumes, define networking and configure administrative users, projects and roles.

devstack benefits both the developer and the deployer. You can actually see the running cloud software, and interact and engage with individual services. devstack is a valuable tool for debugging and bug-fixing services. devstack is used by the OpenStack CI/CD system for testing, so it is robust enough to evaluate the core projects and many of the other available projects that can be configured for installation with devstack. You can also configure it to use trunk (i.e. master) code, or specific branches or tags for individual services. The CI system, for example, will install the trunk of all services plus the specific branch of a new feature or bug fix for one given project, in order to perform user and functional testing.

devstack also enables more complex configuration setups. You can set up devstack with LXC containers, you can run a multi-node setup, and you can run with Neutron networking. While devstack installs a small subset of projects including keystone, nova, cinder, glance and horizon, you can use devstack to run other OpenStack projects such as Manila, Trove, Magnum, Sahara, Solum and Heat.

The benefit of devstack is the evaluation of capabilities; it is not a product to use to determine a path for production deployment of OpenStack. That process involves many more complex considerations, starting with why your organization wants to implement a cloud infrastructure, through to the most basic technical needs such as uptime and SLA requirements, high availability, monitoring and alerting, security management and upgrade paths of your software.

If you are ready to see what OpenStack could provide and want to run a local cloud, you can start with installing Openstack with devstack, a first-time guide.

What is OpenStack?

OpenStack is a cloud computing software product that is the leading open source platform for creating cloud infrastructure. Used by hundreds of companies to run public, private and hybrid clouds, OpenStack is the second most popular open source project after the Linux Kernel.

OpenStack is made up of many different projects (currently over 50), is written primarily in Python, and as of 2016 comprises over 4 million lines of code.

OpenStack Distributions

There are numerous distributions of OpenStack from leading vendors such as Red Hat, IBM, Canonical, Cisco, SUSE, Oracle and VMware to name a few. Each vendor provides a means of installing and managing an OpenStack cloud and integrates the cloud with a large number of hardware and software products and appliances. Regardless of your preferred host operating system or deployment methodology with ansible, puppet or chef, there is a project or provider to suit your situation.

Evaluating OpenStack

For the software developer or system architect there are several ways to evaluate the basic features of OpenStack. Free online services such as Mirantis Express and TryStack or production clouds at Rackspace and OVH offer you an infrastructure to handle compute, storage, networking and orchestration features and you can engage with your personal cloud via web and CLI interfaces.

If however you want to delve behind the UIs and APIs to see OpenStack in operation, there are simple VM-based means using devstack, RDO or Ubuntu OpenStack that can operate a running OpenStack on a single VM or multiple VMs. You can follow the OpenStack documentation installation guides, which cover openSUSE 13.2, SUSE Linux Enterprise Server 12, Red Hat Enterprise Linux 7, CentOS 7 and Ubuntu 14.04 (LTS), to install OpenStack on a number of physical hardware devices (a minimal configuration is three servers).

These options will introduce you to what *may* be possible with OpenStack personally. This is however the very tip of a very large iceberg. Considering a cloud infrastructure for your organization is a much more complex set of decisions about the impact, usefulness and cost-effectiveness for your organization.

Retiring an OpenStack project

As part of migrating Oslo Incubator code to graduated libraries I have come across several inactive OpenStack projects. (An inactive project does not mean the project should be retired or removed.) However, in my case, when consulting the mailing list it was confirmed that the kite project met the criteria to proceed.

There are good procedures for Retiring a project in the Infra manual. In summary these steps are:

  • Inform the developer community via the mailing list of the intention to retire the project, confirming there are no unaware interested parties.
  • Submit an openstack-infra/project-config change to remove the zuul gate jobs that are run when reviews are submitted. This is needed as the subsequent review to remove code will fail if these checks are enabled.
  • Submit a project review that removes all code, and updates the README with a standard message “This project is no longer maintained. …” See Infra manual for full details. This change will have a Depends-On: for your project-config review. If this review is not yet merged, you should add a Needed-By: reference accordingly.
  • Following the approved review to remove the code from HEAD, a subsequent review to openstack-infra/project-config is needed to remove other infrastructure usage of the project and mark the project as read only.
  • Finally, a request to openstack/governance is made to propose removal of the repository from the governance.

As with good version control, the resulting code for the project is not actually removed. As per the commit comment, you simply git checkout HEAD^1 to access the project in question.
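
For example, to look at the pre-retirement code of the kite project mentioned above (a sketch assuming the standard git.openstack.org repository layout):

$ git clone git://git.openstack.org/openstack/kite
$ cd kite
$ git checkout HEAD^1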

Oracle OpenStack leveraging MySQL Cluster and Docker

At Oracle OpenWorld this year, Oracle OpenStack Release 2 was announced. This Kilo-based distribution included some new deployment features not seen in other OpenStack distros, including the use of Kolla, Docker and MySQL Cluster. The press release states “First commercially available OpenStack implementation completely packaged as Docker instances”.

Using Docker to containerize each component of services is a very convenient means of dev/test/prod management. Your single-node developer environment gets HA automatically, you can easily deploy varying services to two or more management nodes, and they look and act just like your production environment. I have often struggled with developing in OpenStack, whether with single-project environments, creating a devstack, a previously installed 3-node physical server deployment (which takes up room and power in my office), or comparing other single-node solutions including Canonical and Mirantis. I am often left using online services such as Mirantis Express, TryStack and HP Cloud to more easily evaluate the end product, but without any access to the operating cloud under the covers.

It is an interesting move to using MySQL Cluster, and I liked this announcement. This is a very robust Master/Master MySQL-compatible product that provides high availability through a shared-nothing architecture. You get the benefit of dynamically adding data shards as your system grows. However, MySQL Cluster is a very different product under the covers. For those familiar with managing MySQL Server a different set of skills is required, starting with the concepts of a management node, data nodes and SQL nodes, and the different ways to manage, monitor and triage. MySQL Cluster is effectively an in-memory solution so this will require some additional sizing considerations, especially for production deployments. Your backup/recovery/disaster recovery strategy will also change. All of this administration exists for MySQL Cluster; it is just different. If you only use MySQL Server these are new skills to master. As an expert in MySQL Server who has not used MySQL Cluster for at least 7 years, I cannot provide insights, for example, on the use of common monitoring tools (including newer SaaS offerings). Still, MySQL Cluster is an extremely stable and production-ready product that scales to millions of QPS easily. Using this as an HA solution gives you a rock solid base, which is what you need first.

While I attended a number of sessions and took the Hands On Lab for Oracle OpenStack, the proof is in having a running local environment. My next post will talk about my experiences installing it.

Managing MySQL Version Upgrades Presentation

The following presentation was given at the Oracle Technology Network (OTN) Latin America 2015 tour events in Uruguay, Argentina, Chile and Peru.

In this presentation I talk about the various versions and means of installing and upgrading MySQL including:

  • MySQL version history from 3.23 to 5.7.8
  • Historical installation options
  • Recommended use of Oracle yum repository for current version
  • The installation and upgrade process, and errors that occur
  • Compatibility changes between MySQL 5.5 and MySQL 5.6 including
    • Reserved words (and their true impact)
    • Legacy TIMESTAMP usage
    • FULLTEXT indexes
    • The query optimizer
    • Clear text password warnings and security improvements
  • Important configuration differences
  • Other recommendations
  • The use of replication

Testing and Verifying your MySQL Backup Strategy Presentation

This past week I have been the sole MySQL representative on the Oracle Technology Network (OTN) Latin America 2015 tour events in Uruguay, Argentina, Chile and Peru.

In this presentation I talk about the important steps for testing and verifying your MySQL backup strategy to ensure your business continuity in any disaster recovery situation. This includes:

  • Overview of the primary product options
  • Backup and recovery strategy considerations
  • Technical requirements
  • Common problems observed
  • What about a failover strategy

Deploying Ubuntu OpenStack Kilo

My previous Ubuntu OpenStack setup used the Juno release. I encountered some installation problems for Kilo using the stable repo, and so I switched to using the experimental repo. This comes with a number of surface changes.

  • The interactive installation asks for the installation type first, and password second.
  • The IP range of installed OpenStack services changes from 10.0.4.x to 10.0.7.x.
  • Juju GUI is no longer installed by default. You need to specifically add this as a service after initial installation.
  • The GUI displays additional information during installation.
  • The LXC container name changes from uoi-bootstrap to openstack-single-<user>.

Uninstall any existing environment

Remove any existing installed OpenStack cloud.

sudo openstack-install -k
sudo openstack-install -u

NOTE: Be sure to remove your existing cloud before upgrading. Failing to do so will mean you need to manually clean up some things with:

sudo lxc-stop --name uoi-bootstrap
sudo lxc-destroy --name uoi-bootstrap
rm -rf $HOME/.cloud_install

Update the OpenStack installer

Upgrade Ubuntu OpenStack with the following commands. In my environment this installed version 0.99.14.

sudo apt-add-repository ppa:cloud-installer/experimental
sudo apt-get update
sudo apt-get upgrade openstack

Install OpenStack Kilo

Installing an Ubuntu OpenStack environment still uses the openstack-install command with an additional argument.

sudo openstack-install --upstream-ppa

NOTE: Updated 6/18/15. When using the experimental repo with version 0.99.12 or earlier you must specify the --extra-ppa argument and value, i.e. sudo openstack-install --extra-ppa ppa:cloud-installer/experimental. Thanks stokachu for pointing this out.

Adding Services

After setting up a Kilo cloud using Ubuntu OpenStack I was able to successfully add a Swift component, something that was not quite working as expected in stable.

Writing and testing unit tests in OpenStack

The following outlines an approach of identifying and improving unit tests in an OpenStack project.

Obtain the source code

You can obtain a copy of the current source code for an OpenStack project at http://git.openstack.org. Active projects are categorized into openstack, openstack-dev, openstack-infra and stackforge.

NOTE: While you can find OpenStack projects on GitHub, these are just a mirror of the source repositories.

In this example I am going to use the Magnum project.

$ git clone git://git.openstack.org/openstack/magnum 
$ cd magnum

Run the current tests

The first step should be to run the current tests to verify the current code. This is a good habit to develop, especially if you have made modifications and are returning to look at your code. This will also create a virtual environment, which you will want to use later to test your changes.

$ tox -e py27

Should this fail, you may want to ensure all OpenStack developer dependencies are in place on your OS.

Identify unit tests to work on

You can use the code coverage of unit tests to determine possible places to start adding to existing unit tests. The following command will produce an HTML report in the cover/ directory of your project.

$ tox -e cover

This output will look similar to this example coverage output for Magnum. You can also produce a text based version with:

$ coverage report -m 

I will use this text version as a later verification.

Working on a specific unit test

Drilling down on any individual test file you will get an indication of code that does not have unit test coverage. For example in magnum/common/utils:

Once you have found a place to work and have identified the corresponding unit test file in the magnum/tests/unit sub-directory (in this example I am working on magnum/tests/unit/common/test_utils.py), you will want to run this individual unit test in the virtual environment you previously created.

$ source .tox/py27/bin/activate
$ testr run test_utils -- -f

You can now start making your changes in whatever editor you wish. You may also want to work interactively in Python initially to test and verify classes and methods, especially if you are unfamiliar with how the code functions. For example, using the identical import found in test_utils.py for the test coverage, I started with these simple checks.

(py27)$ python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from magnum.common import utils
>>> utils.is_valid_ipv4('10.0.0.1') == True
True
>>> utils.is_valid_ipv4('') == False
True

I then created some appropriate unit tests for these two methods based on this interactive validation. These tests show that I not only test for valid values, I also test various boundary conditions for invalid values, including blank, character and out-of-range IP addresses.

    def test_valid_ipv4(self):
        self.assertTrue(utils.is_valid_ipv4('10.0.0.1'))
        self.assertTrue(utils.is_valid_ipv4('255.255.255.255'))

    def test_invalid_ipv4(self):
        self.assertFalse(utils.is_valid_ipv4(''))
        self.assertFalse(utils.is_valid_ipv4('x.x.x.x'))
        self.assertFalse(utils.is_valid_ipv4('256.256.256.256'))
        self.assertFalse(utils.is_valid_ipv4(
                         'AA42:0000:0000:0000:0202:B3FF:FE1E:8329'))

    def test_valid_ipv6(self):
        self.assertTrue(utils.is_valid_ipv6(
                        'AA42:0000:0000:0000:0202:B3FF:FE1E:8329'))
        self.assertTrue(utils.is_valid_ipv6(
                        'AA42::0202:B3FF:FE1E:8329'))

    def test_invalid_ipv6(self):
        self.assertFalse(utils.is_valid_ipv6(''))
        self.assertFalse(utils.is_valid_ipv6('10.0.0.1'))
        self.assertFalse(utils.is_valid_ipv6('AA42::0202:B3FF:FE1E:'))

After making these changes you want to run and verify your modified test works as previously demonstrated.

$ testr run test_utils -- -f
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./magnum/tests/unit} --list  -f
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./magnum/tests/unit}  --load-list /tmp/tmpDMP50r -f
Ran 59 (+1) tests in 0.824s (-0.016s)
PASSED (id=19)

If your tests fail you will see a FAILED message like the following. I find it useful to write a failing test intentionally just to validate that the actual testing process is working.


${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./magnum/tests/unit}  --load-list /tmp/tmpsZlk3i -f
======================================================================
FAIL: magnum.tests.unit.common.test_utils.UtilsTestCase.test_invalid_ipv6
tags: worker-0
----------------------------------------------------------------------
Empty attachments:
  stderr
  stdout

Traceback (most recent call last):
  File "magnum/tests/unit/common/test_utils.py", line 98, in test_invalid_ipv6
    self.assertFalse(utils.is_valid_ipv6('AA42::0202:B3FF:FE1E:832'))
  File "/home/rbradfor/os/openstack/magnum/.tox/py27/local/lib/python2.7/site-packages/unittest2/case.py", line 672, in assertFalse
    raise self.failureException(msg)
AssertionError: True is not false
Ran 55 (-4) tests in 0.805s (-0.017s)
FAILED (id=20, failures=1 (+1))
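
One way to produce such a failure deliberately is a throwaway test like this sketch (the method name is hypothetical; remove it before committing):

    def test_deliberate_failure(self):
        # Hypothetical sanity check: always fails, proving failures are reported.
        self.assertTrue(False)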

Confirming your new unit tests

You can verify this has improved the coverage percentage by re-running the coverage commands. I use the text-based version as an easy way to see a decrease in the number of lines not covered.

Before

$ coverage report -m | grep "common/utils"
magnum/common/utils    273     94     76     38    62%   92-94, 105-134, 151-157, 208-211, 215-218, 241-259, 267-270, 275-279, 325, 349-384, 442, 449-453, 458-459, 467, 517-518, 530-531, 544
$ tox -e cover

After

$ coverage report -m | grep "common/utils"
magnum/common/utils    273     86     76     38    64%   92-94, 105-134, 151-157, 241-259, 267-270, 275-279, 325, 349-384, 442, 449-453, 458-459, 467, 517-518, 530-531, 544

I can see 8 lines of improvement, which I can also verify by looking at the HTML version.

Additional Testing

Make sure you run a full test before committing. This runs all tests in multiple Python versions and runs the PEP8 code style validation for your modified unit tests.

$ tox

Here are some examples of PEP8 problems with my improvements to the unit tests.

pep8 runtests: commands[0] | flake8
./magnum/tests/unit/common/test_utils.py:88:80: E501 line too long (88 > 79 characters)
./magnum/tests/unit/common/test_utils.py:91:80: E501 line too long (87 > 79 characters)
...
./magnum/tests/unit/common/test_utils.py:112:32: E231 missing whitespace after ','
./magnum/tests/unit/common/test_utils.py:113:32: E231 missing whitespace after ','
./magnum/tests/unit/common/test_utils.py:121:30: E231 missing whitespace after ','
...

Submitting your work

In order for your time and effort to be included in the OpenStack project there are a number of key details you need to follow, which I outlined in Contributing to OpenStack. Specifically, these documents are important.

You do not have to be familiar with the procedures in order to look at the code, or even to look at improving the code. You will need to follow the steps outlined in these links if you want to contribute your code. Remember, if you are new, the best access to help is to jump onto the IRC channel of the project you are interested in.

This example, along with additions for several other methods, was submitted (see patch). It was reviewed and ultimately approved.

References

Some additional information about the tools and processes can be found in these OpenStack documentation and wiki pages.

Contributing to OpenStack

Following my first OpenStack Summit in Vancouver 4/2015, it was time to become involved with contributing to OpenStack.

I have lurked around the mailing lists and several IRC channels for a few weeks and familiarized myself with OpenStack in varying forms including devstack, the free hosted Mirantis Express and the VM version, Ubuntu OpenStack, and even building my own 3 physical server cloud from second hand hardware purchased on eBay.

There are several resources available however I suggest you start with this concise presentation I attended at the summit by Adrian Otto on “7 Habits of Highly Effective Contributors” (slides, video).

You should also look at contributions from existing developers by looking at current code being submitted for review at https://review.openstack.org. I spent several weeks just looking at submissions, and I look at new submissions most days. While it does not always make sense (including a lot initially), it is important to look at the full scope of all the projects. It is extremely valuable to look at how the review process works, how others comment on contributions, and the types of patches and code changes that are being contributed. There are a number of ways of not doing it right, which can be discouraging when you first start contributing. The following links are vital to read, and re-read.

Individual projects also have various information, for example Magnum’s Ways to Contribute.

The benefit of observing for some time is that you can be better prepared when you start to contribute. I was also new to how unit testing and automated testing worked in Python (about 7th on my list of known languages), and so learning about running OpenStack tests with tox and understanding the different OpenStack tox configs were valuable lessons, helped by feedback from OpenStack developers on the mailing list and IRC. (If you have not looked at the 7 Habits presentation, now is a great time.)

I took the time to find areas of interest and value which become more apparent after attending my first Design Summit. I even committed to assist in a design priority in the Magnum project as a result of my learning about how unit testing worked.

And if you write about your experiences, another thing you can do is Add your blog to Planet OpenStack. I have received great feedback from the OpenStack community when writing about my first experiences.

Tracking the Ubuntu OpenStack installation process

Following on from Installing Ubuntu OpenStack, the following steps help you navigate around the single server installation, monitoring and debugging the installation process.

Configuration

The initial execution of the installer will create a default config.yaml file that defines the container and OpenStack services. After a successful installation this looks like:

$ more $HOME/.cloud-install/config.yaml
container_ip: 10.0.3.149
current_state: 2
deploy_complete: true
install_type: Single
openstack_password: openstack
openstack_release: juno
placements:
  controller:
    assignments:
      LXC:
      - nova-cloud-controller
      - glance
      - glance-simplestreams-sync
      - openstack-dashboard
      - juju-gui
      - keystone
      - mysql
      - neutron-api
      - neutron-openvswitch
      - rabbitmq-server
    constraints:
      cpu-cores: 2
      mem: 6144
      root-disk: 20480
  nova-compute-machine-0:
    assignments:
      BareMetal:
      - nova-compute
    constraints:
      mem: 4096
      root-disk: 40960
  quantum-gateway-machine-0:
    assignments:
      BareMetal:
      - quantum-gateway
    constraints:
      mem: 2048
      root-disk: 20480

This file changes during the installation process, which I describe later.
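
If you want to watch the file evolve in real time during an installation, one simple sketch is:

$ watch -n 5 cat $HOME/.cloud-install/config.yaml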

The LXC Container

The single server installation is managed within a single LXC container. You can obtain details of and connect to the container with the following.

$ sudo lxc-ls --fancy
NAME           STATE    IPV4                                 IPV6  AUTOSTART
----------------------------------------------------------------------------
uoi-bootstrap  RUNNING  10.0.3.149, 10.0.4.1, 192.168.122.1  -     YES

$ sudo lxc-info --name uoi-bootstrap
Name:           uoi-bootstrap
State:          RUNNING
PID:            19623
IP:             10.0.3.149
IP:             10.0.4.1
IP:             192.168.122.1
CPU use:        27692.85 seconds
BlkIO use:      63.94 GiB
Memory use:     24.29 GiB
KMem use:       0 bytes
Link:           vethC0E9US
 TX bytes:      507.43 MiB
 RX bytes:      1.43 GiB
 Total bytes:   1.93 GiB

$ sudo lxc-attach --name uoi-bootstrap

You can also connect to the server directly. As I prefer to NEVER configure or connect to a server as root, this is how I access the LXC container.

$ ssh ubuntu@10.0.3.149

Juju Status

When connected to the LXC container you can then look at the status of the Juju orchestration with:

$ export JUJU_HOME=~/.cloud-install/juju

$ juju status
environment: local
machines:
  "0":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: localhost
    instance-id: localhost
    series: trusty
    state-server-member-status: has-vote
  "1":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: 10.0.4.62
    instance-id: ubuntu-local-machine-1
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=4096M root-disk=40960M
  "2":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: 10.0.4.77
    instance-id: ubuntu-local-machine-2
    series: trusty
    containers:
      2/lxc/0:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.147
        instance-id: ubuntu-local-machine-2-lxc-0
        series: trusty
        hardware: arch=amd64
      2/lxc/1:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.15
        instance-id: ubuntu-local-machine-2-lxc-1
        series: trusty
        hardware: arch=amd64
      2/lxc/2:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.135
        instance-id: ubuntu-local-machine-2-lxc-2
        series: trusty
        hardware: arch=amd64
      2/lxc/3:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.133
        instance-id: ubuntu-local-machine-2-lxc-3
        series: trusty
        hardware: arch=amd64
      2/lxc/4:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.119
        instance-id: ubuntu-local-machine-2-lxc-4
        series: trusty
        hardware: arch=amd64
      2/lxc/5:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.88
        instance-id: ubuntu-local-machine-2-lxc-5
        series: trusty
        hardware: arch=amd64
      2/lxc/6:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.155
        instance-id: ubuntu-local-machine-2-lxc-6
        series: trusty
        hardware: arch=amd64
      2/lxc/7:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.36
        instance-id: ubuntu-local-machine-2-lxc-7
        series: trusty
        hardware: arch=amd64
      2/lxc/8:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.11
        instance-id: ubuntu-local-machine-2-lxc-8
        series: trusty
        hardware: arch=amd64
    hardware: arch=amd64 cpu-cores=2 mem=6144M root-disk=20480M
  "3":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: 10.0.4.10
    instance-id: ubuntu-local-machine-3
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=2048M root-disk=20480M
  "4":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: 10.0.4.96
    instance-id: ubuntu-local-machine-4
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=512M root-disk=8192M
  "5":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: 10.0.4.140
    instance-id: ubuntu-local-machine-5
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=512M root-disk=8192M
  "6":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: 10.0.4.197
    instance-id: ubuntu-local-machine-6
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=512M root-disk=8192M
services:
  glance:
    charm: cs:trusty/glance-11
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      cluster:
      - glance
      identity-service:
      - keystone
      image-service:
      - nova-cloud-controller
      - nova-compute
      object-store:
      - swift-proxy
      shared-db:
      - mysql
    units:
      glance/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/4
        open-ports:
        - 9292/tcp
        public-address: 10.0.4.119
  glance-simplestreams-sync:
    charm: local:trusty/glance-simplestreams-sync-0
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      identity-service:
      - keystone
    units:
      glance-simplestreams-sync/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/5
        public-address: 10.0.4.88
  juju-gui:
    charm: cs:trusty/juju-gui-16
    exposed: false
    units:
      juju-gui/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/1
        open-ports:
        - 80/tcp
        - 443/tcp
        public-address: 10.0.4.15
  keystone:
    charm: cs:trusty/keystone-12
    exposed: false
    relations:
      cluster:
      - keystone
      identity-service:
      - glance
      - glance-simplestreams-sync
      - neutron-api
      - nova-cloud-controller
      - openstack-dashboard
      - swift-proxy
      shared-db:
      - mysql
    units:
      keystone/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/2
        public-address: 10.0.4.135
  mysql:
    charm: cs:trusty/mysql-12
    exposed: false
    relations:
      cluster:
      - mysql
      shared-db:
      - glance
      - keystone
      - neutron-api
      - nova-cloud-controller
      - nova-compute
      - quantum-gateway
    units:
      mysql/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/0
        public-address: 10.0.4.147
  neutron-api:
    charm: cs:trusty/neutron-api-6
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      cluster:
      - neutron-api
      identity-service:
      - keystone
      neutron-api:
      - nova-cloud-controller
      neutron-plugin-api:
      - neutron-openvswitch
      shared-db:
      - mysql
    units:
      neutron-api/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/7
        open-ports:
        - 9696/tcp
        public-address: 10.0.4.36
  neutron-openvswitch:
    charm: cs:trusty/neutron-openvswitch-2
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      neutron-plugin:
      - nova-compute
      neutron-plugin-api:
      - neutron-api
    subordinate-to:
    - nova-compute
  nova-cloud-controller:
    charm: cs:trusty/nova-cloud-controller-51
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      cloud-compute:
      - nova-compute
      cluster:
      - nova-cloud-controller
      identity-service:
      - keystone
      image-service:
      - glance
      neutron-api:
      - neutron-api
      quantum-network-service:
      - quantum-gateway
      shared-db:
      - mysql
    units:
      nova-cloud-controller/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/3
        open-ports:
        - 3333/tcp
        - 8773/tcp
        - 8774/tcp
        - 9696/tcp
        public-address: 10.0.4.133
  nova-compute:
    charm: cs:trusty/nova-compute-14
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      cloud-compute:
      - nova-cloud-controller
      compute-peer:
      - nova-compute
      image-service:
      - glance
      neutron-plugin:
      - neutron-openvswitch
      shared-db:
      - mysql
    units:
      nova-compute/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: "1"
        public-address: 10.0.4.62
        subordinates:
          neutron-openvswitch/0:
            upgrading-from: cs:trusty/neutron-openvswitch-2
            agent-state: started
            agent-version: 1.20.11.1
            public-address: 10.0.4.62
  openstack-dashboard:
    charm: cs:trusty/openstack-dashboard-9
    exposed: false
    relations:
      cluster:
      - openstack-dashboard
      identity-service:
      - keystone
    units:
      openstack-dashboard/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/6
        open-ports:
        - 80/tcp
        - 443/tcp
        public-address: 10.0.4.155
  quantum-gateway:
    charm: cs:trusty/quantum-gateway-10
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      cluster:
      - quantum-gateway
      quantum-network-service:
      - nova-cloud-controller
      shared-db:
      - mysql
    units:
      quantum-gateway/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: "3"
        public-address: 10.0.4.10
  rabbitmq-server:
    charm: cs:trusty/rabbitmq-server-26
    exposed: false
    relations:
      amqp:
      - glance
      - glance-simplestreams-sync
      - neutron-api
      - neutron-openvswitch
      - nova-cloud-controller
      - nova-compute
      - quantum-gateway
      cluster:
      - rabbitmq-server
    units:
      rabbitmq-server/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/8
        open-ports:
        - 5672/tcp
        public-address: 10.0.4.11

You can also look at a subset of the status for a particular service, for example keystone with:

$ juju status keystone
environment: local
machines:
  "1":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: 10.0.4.128
    instance-id: ubuntu-local-machine-1
    series: trusty
    containers:
      1/lxc/2:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.142
        instance-id: ubuntu-local-machine-1-lxc-2
        series: trusty
        hardware: arch=amd64
    hardware: arch=amd64 cpu-cores=2 mem=6144M root-disk=20480M
services:
  keystone:
    charm: cs:trusty/keystone-12
    exposed: false
    relations:
      cluster:
      - keystone
      identity-service:
      - glance
      - glance-simplestreams-sync
      - neutron-api
      - nova-cloud-controller
      - openstack-dashboard
      shared-db:
      - mysql
    units:
      keystone/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 1/lxc/2
        public-address: 10.0.4.142
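
The YAML output is also handy for scripting. As a hedged sketch (juju status additionally supports --format=json), the following lists each unit and its public address:

$ juju status --format=json | python -c '
import json, sys
status = json.load(sys.stdin)
for name, svc in status["services"].items():
    for unit, info in (svc.get("units") or {}).items():
        print("%-30s %s" % (unit, info.get("public-address", "-")))
'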

Monitoring the Installation

When performing an installation you can monitor the executed commands with:

$ tail -f $HOME/.cloud-install/commands.log

...

This provides a lot of debugging output. More streamlined logging is possible with the automated installation described later.
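
When something goes wrong it is easier to filter this output. A minimal sketch:

$ tail -f $HOME/.cloud-install/commands.log | grep -iE "error|fail|traceback"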

Uninstalling

As the single server installation runs in an LXC container, uninstalling the environment is, as the documentation states, a rather trivial process that takes only a few seconds.

This will tear down the cloud but leave user data available for a subsequent deployment.

$ sudo openstack-install -k
Warning:

This will destroy the host Container housing the OpenStack private cloud. This is a permanent operation.
Proceed? [y/N] Y
Removing static route
Removing host container...
Container is removed.

You can also do a more permanent uninstall of the cloud and packages.

$ sudo openstack-install -u
Warning:

This will uninstall OpenStack and make a best effort to return the system back to its original state.
Proceed? [y/N] Y
Restoring system to last known state.
Ubuntu Openstack Installer Uninstalling ...Single install path.

This does not, however, seem to clean up $HOME/.cloud-install. You can safely remove this directory, or move it sideways, before re-deploying without any issues.
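
For example, to move it sideways with a dated suffix before re-deploying:

$ mv $HOME/.cloud-install $HOME/.cloud-install.$(date +%Y%m%d)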

Installation automation

As described in my original post, the openstack-install script provides a curses-based interactive interface. You can automate the installation by defining the needed setup inputs in a separate configuration file and running in headless mode.

$ echo "install_type: Single
openstack_password: openstack" > install.yaml

$ sudo openstack-install --headless --config install.yaml

This has the added benefit of providing a more meaningful log of the state of the installation, with less verbose information than the commands.log file.

[INFO  • 06-02 12:02:42 • cloudinstall.install] Running in headless mode.
[INFO  • 06-02 12:02:42 • cloudinstall.install] Performing a Single Install
[INFO  • 06-02 12:02:42 • cloudinstall.task] [TASKLIST] ['Initializing Environment', 'Creating container', 'Bootstrapping Juju']
[INFO  • 06-02 12:02:42 • cloudinstall.task] [TASK] Initializing Environment
[INFO  • 06-02 12:02:42 • cloudinstall.consoleui] Building environment
[INFO  • 06-02 12:02:42 • cloudinstall.single_install] Prepared userdata: {'extra_sshkeys': ['ssh-rsa ...\n'], 'seed_command': ['env', 'pollinate', '-q']}
[INFO  • 06-02 12:02:42 • cloudinstall.single_install] Setting permissions for user rbradfor
[INFO  • 06-02 12:02:43 • cloudinstall.task] [TASK] Creating container
[INFO  • 06-02 12:04:20 • cloudinstall.single_install] Setting DHCP properties for host container.
[INFO  • 06-02 12:04:20 • cloudinstall.single_install] Adding static route for 10.0.4.0/24 via 10.0.3.160
...
[INFO  • 06-02 12:22:50 • cloudinstall.consoleui] Checking availability of nova-cloud-controller: pending
[INFO  • 06-02 12:23:31 • cloudinstall.consoleui] Checking availability of nova-cloud-controller: installed
[INFO  • 06-02 12:23:52 • cloudinstall.consoleui] Checking availability of nova-cloud-controller: started
[INFO  • 06-02 12:24:34 • cloudinstall.consoleui] Checking availability of keystone: started
[INFO  • 06-02 12:24:44 • cloudinstall.consoleui] Checking availability of keystone: started
[INFO  • 06-02 12:24:44 • cloudinstall.consoleui] Checking availability of nova-cloud-controller: started
[INFO  • 06-02 12:27:38 • cloudinstall.consoleui] Checking availability of quantum-gateway: started
[INFO  • 06-02 12:27:38 • cloudinstall.consoleui] Checking availability of nova-cloud-controller: started
[INFO  • 06-02 12:27:38 • cloudinstall.consoleui] Validating network parameters for Neutron
[INFO  • 06-02 12:27:53 • cloudinstall.consoleui] All systems go!

And 25 minutes later you have an available cloud.
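
If you want to keep a copy of this streamlined log, a simple approach (assuming, as above, the messages are written to the console) is to capture the output with tee:

$ sudo openstack-install --headless --config install.yaml 2>&1 | tee install.log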

If you attempt to look at the GUI status page with openstack-status you will be given a text-based version of the messages, like:

$ sudo openstack-status
[INFO  • 06-02 12:06:21 • cloudinstall.core] Running openstack-status in headless mode.
[INFO  • 06-02 12:06:21 • cloudinstall.consoleui] Loaded placements from file.
[INFO  • 06-02 12:06:21 • cloudinstall.consoleui] Waiting for machines to start: 3 unknown
[INFO  • 06-02 12:08:20 • cloudinstall.consoleui] Waiting for machines to start: 1 pending, 2 unknown
[INFO  • 06-02 12:08:48 • cloudinstall.consoleui] Waiting for machines to start: 2 pending, 1 unknown
[INFO  • 06-02 12:09:04 • cloudinstall.consoleui] Waiting for machines to start: 1 down (started), 1 pending, 1 unknown
[INFO  • 06-02 12:09:13 • cloudinstall.consoleui] Waiting for machines to start: 1 down (started), 2 pending
[INFO  • 06-02 12:09:20 • cloudinstall.consoleui] Waiting for machines to start: 2 down (started), 1 pending
[INFO  • 06-02 12:09:26 • cloudinstall.consoleui] Waiting for machines to start: 1 pending, 2 started
[INFO  • 06-02 12:09:51 • cloudinstall.consoleui] Waiting for machines to start: 1 down (started), 2 started
[INFO  • 06-02 12:10:44 • cloudinstall.consoleui] Verifying service deployments
[INFO  • 06-02 12:10:44 • cloudinstall.consoleui] Missing ConsoleUI() attribute: set_pending_deploys
[INFO  • 06-02 12:10:44 • cloudinstall.consoleui] Checking if MySQL is deployed
[INFO  • 06-02 12:10:44 • cloudinstall.consoleui] Deploying MySQL to machine lxc:1
[INFO  • 06-02 12:10:49 • cloudinstall.consoleui] Deployed MySQL.
[INFO  • 06-02 12:10:49 • cloudinstall.consoleui] Checking if Juju GUI is deployed
[INFO  • 06-02 12:10:49 • cloudinstall.consoleui] Deploying Juju GUI to machine lxc:1
[INFO  • 06-02 12:11:00 • cloudinstall.consoleui] Deployed Juju GUI.
[INFO  • 06-02 12:11:00 • cloudinstall.consoleui] Checking if Keystone is deployed
[INFO  • 06-02 12:11:00 • cloudinstall.consoleui] Deploying Keystone to machine lxc:1
...

It seems you can trick it into providing both GUI and text versions by running the following in another shell session.

$ sed -ie "/headless/d" $HOME/.cloud-install/config.yaml
$ sudo openstack-status

NOTE: You will not get any output until the initial container is complete. This also leaves a .pid file that must be manually cleaned up if you run it too soon. The next invocation provides the following message.

$ sudo openstack-status
Another instance of openstack-status is running. If you're sure there are no other instances, please remove ~/.cloud-install/openstack.pid
$ rm $HOME/.cloud-install/openstack.pid

Monitoring the installation progress

The running config.yaml file changes over the duration of the installation.
Its most basic configuration (when starting with the GUI) is:

$ more $HOME/.cloud-install/config.yaml
current_state: 0
openstack_release: juno

The release is also defined in the $HOME/.cloud-install/openstack_release file.
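
For example, with the juno install above:

$ cat $HOME/.cloud-install/openstack_release
juno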

When starting by passing the configuration file as previously mentioned, its initial state is:

$ more $HOME/.cloud-install/config.yaml
config_file: install.yaml
current_state: 0
headless: true
install_type: Single
openstack_password: openstack
openstack_release: juno

This is updated when the LXC container is installed.

$ more $HOME/.cloud-install/config.yaml
config_file: install.yaml
container_ip: 10.0.3.77
current_state: 0
headless: true
install_type: Single
openstack_password: openstack
openstack_release: juno

It is also updated during installation, for example:

$ more $HOME/.cloud-install/config.yaml
config_file: install.yaml
container_ip: 10.0.3.77
current_state: 0
headless: true
install_type: Single
openstack_password: openstack
openstack_release: juno
placements:
  controller:
    assignments:
      LXC:
      - nova-cloud-controller
      - glance
      - glance-simplestreams-sync
      - openstack-dashboard
      - juju-gui
      - keystone
      - mysql
      - neutron-api
      - neutron-openvswitch
      - rabbitmq-server
    constraints:
      cpu-cores: 2
      mem: 6144
      root-disk: 20480
  nova-compute-machine-0:
    assignments:
      BareMetal:
      - nova-compute
    constraints:
      mem: 4096
      root-disk: 40960
  quantum-gateway-machine-0:
    assignments:
      BareMetal:
      - quantum-gateway
    constraints:
      mem: 2048
      root-disk: 20480

When completed, the configuration has the following settings:

config_file: install.yaml
container_ip: 10.0.3.77
current_state: 2
deploy_complete: true
install_type: Single
openstack_password: openstack
openstack_release: juno
placements:
...

Problems

When using the GUI installer, the first time you quit (using Q) it seems to leave the terminal in a broken state. The following will reset this to normal use.

$ stty sane  ^j    # i.e. type Ctrl-J after the command, as Enter may not register

Subsequent uses of openstack-status do not have the same problem.
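
If stty sane does not fully restore the terminal, the more heavy-handed reset command (also typed blind) reinitializes it; this is standard terminal recovery rather than anything installer specific.

$ reset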

References

In my next post I talk about the analysis undertaken to debug errors in the installation, starting with the Keystone – hook failed: “config-changed” message I received when attempting to install kilo, which prompted this more detailed analysis of the installation process components.

Installing Ubuntu OpenStack

The Canonical Distribution of Ubuntu OpenStack provides a simple installer to run an OpenStack cloud. You can deploy a simple single machine setup with fully containerized services (11 in total), or a multi-server installation leveraging MAAS – Metal as a Service – and Landscape Autopilot.

Installation

This post describes my experiences with the single machine setup on a 4-core machine with 32GB of RAM running a clean Ubuntu 14.04 LTS OS. The installation requires the following commands to configure the repo, then install and configure your OpenStack cloud. In this example, the installed version is 0.22.3.

sudo apt-add-repository -y ppa:cloud-installer/stable
sudo apt-get update
sudo apt-get install -y openstack
sudo openstack-install --version
sudo openstack-install

The final step uses a curses-based interface and requires only two inputs before the installation:

  • Specify a password
  • Specify the install type

The UI provides a progress status of the installation. Initially, new containers start with a Pending status. Once the Juju GUI container starts, the footer bar shows the URL for the JujuGUI, in my case http://10.0.4.112. Once the OpenStack Dashboard starts you then get a Horizon URL, also detailed in the footer, such as http://10.0.4.74/horizon.

Horizon

The Horizon display is what you generally expect.

JujuGUI

The JujuGUI provides a display of the deployment orchestration via charms. You can also drill down to specific services. An example is the glance service, using the charm cs:trusty/glance-11. This describes the relationships and configuration, which are also seen in the GUI. You can also view online the full source code used to create this deployed service.
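
As a hedged aside, the same charm configuration can be inspected from the command line (run from within the container, where Juju is bootstrapped) with juju get:

$ juju get glance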

OpenStack Status

You can view the state of your containerized cloud with openstack-status, a curses-based display of the running installation, the same one used during the installation. This displays the units deployed, status messages, and a footer URL bar that indicates the URLs of Horizon and JujuGUI. Each time you invoke it, it also checks services, as indicated by the [INFO] messages.

Connecting to Containers

The installer will automatically create an SSH key for the user that runs the openstack-install command. This enables you to SSH to any of the containers, for example to connect to the MySQL container.

$ ssh ubuntu@<mysql-container-ip>
$ mysql -uroot -p`sudo cat /var/lib/mysql/mysql.passwd` -e "SHOW SCHEMAS"
+--------------------+
| Database           |
+--------------------+
| information_schema |
| glance             |
| keystone           |
| mysql              |
| neutron            |
| nova               |
| performance_schema |
+--------------------+

You can use the various OpenStack clients to access OpenStack services. These are not installed by default.

$ sudo apt-get install -y python-glanceclient python-openstackclient python-novaclient python-keystoneclient
$ source $HOME/.cloud-install/openstack-admin-rc
$ glance image-list
+--------------------------------------+---------------------------------------------------------------+-------------+------------------+-----------+--------+
| ID                                   | Name                                                          | Disk Format | Container Format | Size      | Status |
+--------------------------------------+---------------------------------------------------------------+-------------+------------------+-----------+--------+
| f3cd4ec6-8ce6-4a44-85ec-2f8f066f351b | auto-sync/ubuntu-trusty-14.04-amd64-server-20150528-disk1.img | qcow2       | bare             | 257294848 | active |
+--------------------------------------+---------------------------------------------------------------+-------------+------------------+-----------+--------+
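
The other clients work the same way once the credentials are sourced, for example (output omitted):

$ nova list
$ keystone tenant-list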

More Information

Read Tracking the Ubuntu OpenStack installation process for more detailed information on monitoring the installation process.

Thanks to the New York OpenStack Group and a presentation by Mark Baker of Canonical, who demonstrated MAAS and Landscape Autopilot installation of OpenStack. Slides: Automating hard things.