Deploying Ubuntu OpenStack Kilo

My previous Ubuntu OpenStack setup used the Juno release. I ran into installation problems with Kilo using the stable repo, so I switched to the experimental repo. This comes with a number of surface changes.

  • The interactive installation asks for the installation type first, and password second.
  • The IP range of installed OpenStack services changes from 10.0.4.x to 10.0.7.x.
  • Juju GUI is no longer installed by default. You need to specifically add it as a service after the initial installation (see the sketch after this list).
  • The GUI displays additional information during installation.
  • The LXC container name changes from uoi-bootstrap to openstack-single-<user>.
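
As a hedged sketch (I have not verified this against the Kilo installer), adding the Juju GUI afterwards should follow the standard charm deployment workflow from within the container:

$ export JUJU_HOME=~/.cloud-install/juju
$ juju deploy juju-gui
$ juju expose juju-gui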

Uninstall any existing environment

Remove any existing installed OpenStack cloud.

sudo openstack-install -k
sudo openstack-install -u

NOTE: Be sure to remove your existing cloud before upgrading. Failing to do so will mean you need to manually clean up with:

sudo lxc-stop --name uoi-bootstrap
sudo lxc-destroy --name uoi-bootstrap
rm -rf $HOME/.cloud_install

Update the OpenStack installer

Upgrade Ubuntu OpenStack with the following commands. In my environment this installed version 0.99.14.

sudo apt-add-repository ppa:cloud-installer/experimental
sudo apt-get update
sudo apt-get upgrade openstack

Install OpenStack Kilo

Installing an Ubuntu OpenStack environment still uses the openstack-install command with an additional argument.

sudo openstack-install --upstream-ppa

NOTE: Updated 6/18/15: When using the experimental repo with version 0.99.12 or earlier you must specify the --extra-ppa argument and value, i.e. sudo openstack-install --extra-ppa ppa:cloud-installer/experimental. Thanks stokachu for pointing this out.

Adding Services

After setting up a Kilo cloud using Ubuntu OpenStack I was able to successfully add a Swift component, something that was not working as expected in the stable release.
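
As a hedged sketch of what adding Swift involves at the Juju layer (the charm names, relations, and required configuration such as zone and replica settings are assumptions to verify against the charm documentation):

$ juju deploy cs:trusty/swift-proxy
$ juju deploy cs:trusty/swift-storage
$ juju add-relation swift-proxy keystone
$ juju add-relation swift-proxy swift-storage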

Writing and testing unit tests in OpenStack

The following outlines an approach to identifying and improving unit tests in an OpenStack project.

Obtain the source code

You can obtain a copy of the current source code for an OpenStack project at http://git.openstack.org. Active projects are categorized into openstack, openstack-dev, openstack-infra and stackforge.

NOTE: While you can find OpenStack projects on GitHub, these are just a mirror of the source repositories.

In this example I am going to use the Magnum project.

$ git clone git://git.openstack.org/openstack/magnum 
$ cd magnum

Run the current tests

The first step should be to run the current tests to verify the existing code. This is a good habit to get into, especially if you have made modifications and are returning to your code. It will also create a virtual environment, which you will want to use later to test your changes.

$ tox -e py27

Should this fail, you may want to ensure all OpenStack developer dependencies are in place on your OS.
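
As a hedged example, this is a typical set of build dependencies on Ubuntu for OpenStack Python development; consult the project documentation for the authoritative list:

$ sudo apt-get install -y python-dev python-pip git libssl-dev libxml2-dev libxslt1-dev libffi-dev
$ sudo pip install tox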

Identify unit tests to work on

You can use the code coverage of unit tests to determine possible places to start adding to existing unit tests. The following command will produce an HTML report in the cover/ directory of your project.

$ tox -e cover

This output will look similar to this example coverage output for Magnum. You can also produce a text-based version with:

$ coverage report -m 

I will use this text version later for verification.

Working on a specific unit test

Drilling down on any individual test file will give you an indication of the code that does not have unit test coverage, for example in magnum/common/utils.

Once you have found a place to work and have identified the corresponding unit test file in the magnum/tests/unit sub-directory (in this example I am working on magnum/tests/unit/common/test_utils.py), you will want to run this individual unit test in the virtual environment you previously created.

$ source .tox/py27/bin/activate
$ testr run test_utils -- -f

You can now start making your changes in whatever editor you wish. You may also want to work interactively in Python initially to test and verify classes and methods, especially if you are unfamiliar with how the code functions. For example, using the same import found in test_utils.py, I started with these simple checks.

(py27)$ python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from magnum.common import utils
>>> utils.is_valid_ipv4('10.0.0.1') == True
True
>>> utils.is_valid_ipv4('') == False
True

I then created some appropriate unit tests for these two methods based on this interactive validation. These tests show that I not only test for valid values, I also test various boundary conditions for invalid values including blank, character and out of range IP addresses.

    def test_valid_ipv4(self):
        self.assertTrue(utils.is_valid_ipv4('10.0.0.1'))
        self.assertTrue(utils.is_valid_ipv4('255.255.255.255'))

    def test_invalid_ipv4(self):
        self.assertFalse(utils.is_valid_ipv4(''))
        self.assertFalse(utils.is_valid_ipv4('x.x.x.x'))
        self.assertFalse(utils.is_valid_ipv4('256.256.256.256'))
        self.assertFalse(utils.is_valid_ipv4(
                         'AA42:0000:0000:0000:0202:B3FF:FE1E:8329'))

    def test_valid_ipv6(self):
        self.assertTrue(utils.is_valid_ipv6(
                        'AA42:0000:0000:0000:0202:B3FF:FE1E:8329'))
        self.assertTrue(utils.is_valid_ipv6(
                        'AA42::0202:B3FF:FE1E:8329'))

    def test_invalid_ipv6(self):
        self.assertFalse(utils.is_valid_ipv6(''))
        self.assertFalse(utils.is_valid_ipv6('10.0.0.1'))
        self.assertFalse(utils.is_valid_ipv6('AA42::0202:B3FF:FE1E:'))

After making these changes you want to re-run and verify your modified tests work as previously demonstrated.

$ testr run test_utils -- -f
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./magnum/tests/unit} --list  -f
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./magnum/tests/unit}  --load-list /tmp/tmpDMP50r -f
Ran 59 (+1) tests in 0.824s (-0.016s)
PASSED (id=19)

If your tests fail you will see a FAILED message like the following. I find it useful to write a failing test intentionally just to validate that the testing process is actually working.

${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./magnum/tests/unit}  --load-list /tmp/tmpsZlk3i -f
======================================================================
FAIL: magnum.tests.unit.common.test_utils.UtilsTestCase.test_invalid_ipv6
tags: worker-0
----------------------------------------------------------------------
Empty attachments:
  stderr
  stdout

Traceback (most recent call last):
  File "magnum/tests/unit/common/test_utils.py", line 98, in test_invalid_ipv6
    self.assertFalse(utils.is_valid_ipv6('AA42::0202:B3FF:FE1E:832'))
  File "/home/rbradfor/os/openstack/magnum/.tox/py27/local/lib/python2.7/site-packages/unittest2/case.py", line 672, in assertFalse
    raise self.failureException(msg)
AssertionError: True is not false
Ran 55 (-4) tests in 0.805s (-0.017s)
FAILED (id=20, failures=1 (+1))

Confirming your new unit tests

You can verify the improved coverage percentage by re-running the coverage commands. I use the text-based version as an easy way to see the decrease in the number of lines not covered.

Before

$ coverage report -m | grep "common/utils"
magnum/common/utils    273     94     76     38    62%   92-94, 105-134, 151-157, 208-211, 215-218, 241-259, 267-270, 275-279, 325, 349-384, 442, 449-453, 458-459, 467, 517-518, 530-531, 544
$ tox -e cover

After

$ coverage report -m | grep "common/utils"
magnum/common/utils    273     86     76     38    64%   92-94, 105-134, 151-157, 241-259, 267-270, 275-279, 325, 349-384, 442, 449-453, 458-459, 467, 517-518, 530-531, 544

I can see 8 lines of improvement (94 missed lines down to 86), which I can also verify by looking at the HTML version.

Additional Testing

Make sure you run a full test before committing. Running tox without arguments runs all tests in multiple Python versions as well as the PEP8 code style validation for your modified unit tests.

$ tox

Here are some examples of PEP8 problems flagged in my improvements to the unit tests.

pep8 runtests: commands[0] | flake8
./magnum/tests/unit/common/test_utils.py:88:80: E501 line too long (88 > 79 characters)
./magnum/tests/unit/common/test_utils.py:91:80: E501 line too long (87 > 79 characters)
...
./magnum/tests/unit/common/test_utils.py:112:32: E231 missing whitespace after ','
./magnum/tests/unit/common/test_utils.py:113:32: E231 missing whitespace after ','
./magnum/tests/unit/common/test_utils.py:121:30: E231 missing whitespace after ','
...

Submitting your work

In order for your time and effort to be included in the OpenStack project there are a number of key details you need to follow, which I outlined in Contributing to OpenStack; the documents referenced there are important.

You do not have to be familiar with the procedures in order to look at the code, or even to look at improving the code. You will need to follow the steps outlined in these links if you want to contribute your code. Remember, if you are new, the best place to get help is the IRC channel of the project you are interested in.

This example, along with additions for several other methods, was submitted (see patch). It was reviewed and ultimately approved.

References

Some additional information about the tools and processes can be found in these OpenStack documentation and wiki pages.

Contributing to OpenStack

Following my first OpenStack Summit in Vancouver in 4/2015, it was time to become involved with contributing to OpenStack.

I have lurked around the mailing lists and several IRC channels for a few weeks and familiarized myself with OpenStack in varying forms including devstack, the free hosted Mirantis Express and the VM version, Ubuntu OpenStack, and even building my own three-server physical cloud from second-hand hardware purchased on eBay.

There are several resources available; however, I suggest you start with this concise presentation I attended at the summit by Adrian Otto on “7 Habits of Highly Effective Contributors” (slides, video).

You should also look at contributions from existing developers by reviewing current code submitted for review at https://review.openstack.org. I spent several weeks just looking at submissions, and I still look at new submissions most days. While it does not always make sense (including a lot initially), it is important to look at the full scope of all the projects. It is extremely valuable to see how the review process works, how others comment on contributions, and what types of patches and code changes are being contributed. There are a number of ways of not doing it right, which can be discouraging when you first start contributing. The following links are vital to read, and re-read.

Individual projects also have various information, for example Magnum’s Ways to Contribute.

The benefit of observing for some time is that you are better prepared when you start to contribute. I was also new to how unit testing and automated testing worked in Python (about 7th on my list of known languages), so learning about running OpenStack tests with tox and understanding the different OpenStack tox configs were valuable lessons, helped by feedback from OpenStack developers on the mailing list and IRC (if you have not looked at the 7 Habits presentation, now is a great time).
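
For reference, this is a minimal sketch of the kind of tox.ini you will find at the root of an OpenStack project of this era; the exact environments, dependencies and commands vary per project:

[tox]
envlist = py27,pep8

[testenv]
usedevelop = True
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands = python setup.py testr --slowest --testr-args='{posargs}'

[testenv:pep8]
commands = flake8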

I took the time to find areas of interest and value, which became more apparent after attending my first Design Summit. I even committed to assist with a design priority in the Magnum project as a result of learning how unit testing worked.

And if you write about your experiences, another thing you can do is add your blog to Planet OpenStack. I have received great feedback from the OpenStack community when writing about my first experiences.

Tracking the Ubuntu OpenStack installation process

Following on from Installing Ubuntu OpenStack the following steps help you navigate around the single server installation, monitoring and debugging the installation process.

Configuration

The initial execution of the installer will create a default config.yaml file that defines the container and OpenStack services. After a successful installation this looks like:

$ more $HOME/.cloud-install/config.yaml
container_ip: 10.0.3.149
current_state: 2
deploy_complete: true
install_type: Single
openstack_password: openstack
openstack_release: juno
placements:
  controller:
    assignments:
      LXC:
      - nova-cloud-controller
      - glance
      - glance-simplestreams-sync
      - openstack-dashboard
      - juju-gui
      - keystone
      - mysql
      - neutron-api
      - neutron-openvswitch
      - rabbitmq-server
    constraints:
      cpu-cores: 2
      mem: 6144
      root-disk: 20480
  nova-compute-machine-0:
    assignments:
      BareMetal:
      - nova-compute
    constraints:
      mem: 4096
      root-disk: 40960
  quantum-gateway-machine-0:
    assignments:
      BareMetal:
      - quantum-gateway
    constraints:
      mem: 2048
      root-disk: 20480

This file changes during the installation process, which I describe later.

The LXC Container

The single server installation is managed within a single LXC container. You can obtain details of and connect to the container with the following.

$ sudo lxc-ls --fancy
NAME           STATE    IPV4                                 IPV6  AUTOSTART
----------------------------------------------------------------------------
uoi-bootstrap  RUNNING  10.0.3.149, 10.0.4.1, 192.168.122.1  -     YES

$ sudo lxc-info --name uoi-bootstrap
Name:           uoi-bootstrap
State:          RUNNING
PID:            19623
IP:             10.0.3.149
IP:             10.0.4.1
IP:             192.168.122.1
CPU use:        27692.85 seconds
BlkIO use:      63.94 GiB
Memory use:     24.29 GiB
KMem use:       0 bytes
Link:           vethC0E9US
 TX bytes:      507.43 MiB
 RX bytes:      1.43 GiB
 Total bytes:   1.93 GiB

$ sudo lxc-attach --name uoi-bootstrap

You can also connect to the server directly. As I prefer to NEVER configure or connect to a server as root, this is how I access the LXC container.

$ ssh ubuntu@10.0.3.149

Juju Status

When connected to the LXC container you can then look at the status of the Juju orchestration with:

$ export JUJU_HOME=~/.cloud-install/juju

$ juju status
environment: local
machines:
  "0":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: localhost
    instance-id: localhost
    series: trusty
    state-server-member-status: has-vote
  "1":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: 10.0.4.62
    instance-id: ubuntu-local-machine-1
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=4096M root-disk=40960M
  "2":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: 10.0.4.77
    instance-id: ubuntu-local-machine-2
    series: trusty
    containers:
      2/lxc/0:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.147
        instance-id: ubuntu-local-machine-2-lxc-0
        series: trusty
        hardware: arch=amd64
      2/lxc/1:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.15
        instance-id: ubuntu-local-machine-2-lxc-1
        series: trusty
        hardware: arch=amd64
      2/lxc/2:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.135
        instance-id: ubuntu-local-machine-2-lxc-2
        series: trusty
        hardware: arch=amd64
      2/lxc/3:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.133
        instance-id: ubuntu-local-machine-2-lxc-3
        series: trusty
        hardware: arch=amd64
      2/lxc/4:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.119
        instance-id: ubuntu-local-machine-2-lxc-4
        series: trusty
        hardware: arch=amd64
      2/lxc/5:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.88
        instance-id: ubuntu-local-machine-2-lxc-5
        series: trusty
        hardware: arch=amd64
      2/lxc/6:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.155
        instance-id: ubuntu-local-machine-2-lxc-6
        series: trusty
        hardware: arch=amd64
      2/lxc/7:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.36
        instance-id: ubuntu-local-machine-2-lxc-7
        series: trusty
        hardware: arch=amd64
      2/lxc/8:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.11
        instance-id: ubuntu-local-machine-2-lxc-8
        series: trusty
        hardware: arch=amd64
    hardware: arch=amd64 cpu-cores=2 mem=6144M root-disk=20480M
  "3":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: 10.0.4.10
    instance-id: ubuntu-local-machine-3
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=2048M root-disk=20480M
  "4":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: 10.0.4.96
    instance-id: ubuntu-local-machine-4
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=512M root-disk=8192M
  "5":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: 10.0.4.140
    instance-id: ubuntu-local-machine-5
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=512M root-disk=8192M
  "6":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: 10.0.4.197
    instance-id: ubuntu-local-machine-6
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=512M root-disk=8192M
services:
  glance:
    charm: cs:trusty/glance-11
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      cluster:
      - glance
      identity-service:
      - keystone
      image-service:
      - nova-cloud-controller
      - nova-compute
      object-store:
      - swift-proxy
      shared-db:
      - mysql
    units:
      glance/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/4
        open-ports:
        - 9292/tcp
        public-address: 10.0.4.119
  glance-simplestreams-sync:
    charm: local:trusty/glance-simplestreams-sync-0
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      identity-service:
      - keystone
    units:
      glance-simplestreams-sync/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/5
        public-address: 10.0.4.88
  juju-gui:
    charm: cs:trusty/juju-gui-16
    exposed: false
    units:
      juju-gui/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/1
        open-ports:
        - 80/tcp
        - 443/tcp
        public-address: 10.0.4.15
  keystone:
    charm: cs:trusty/keystone-12
    exposed: false
    relations:
      cluster:
      - keystone
      identity-service:
      - glance
      - glance-simplestreams-sync
      - neutron-api
      - nova-cloud-controller
      - openstack-dashboard
      - swift-proxy
      shared-db:
      - mysql
    units:
      keystone/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/2
        public-address: 10.0.4.135
  mysql:
    charm: cs:trusty/mysql-12
    exposed: false
    relations:
      cluster:
      - mysql
      shared-db:
      - glance
      - keystone
      - neutron-api
      - nova-cloud-controller
      - nova-compute
      - quantum-gateway
    units:
      mysql/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/0
        public-address: 10.0.4.147
  neutron-api:
    charm: cs:trusty/neutron-api-6
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      cluster:
      - neutron-api
      identity-service:
      - keystone
      neutron-api:
      - nova-cloud-controller
      neutron-plugin-api:
      - neutron-openvswitch
      shared-db:
      - mysql
    units:
      neutron-api/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/7
        open-ports:
        - 9696/tcp
        public-address: 10.0.4.36
  neutron-openvswitch:
    charm: cs:trusty/neutron-openvswitch-2
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      neutron-plugin:
      - nova-compute
      neutron-plugin-api:
      - neutron-api
    subordinate-to:
    - nova-compute
  nova-cloud-controller:
    charm: cs:trusty/nova-cloud-controller-51
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      cloud-compute:
      - nova-compute
      cluster:
      - nova-cloud-controller
      identity-service:
      - keystone
      image-service:
      - glance
      neutron-api:
      - neutron-api
      quantum-network-service:
      - quantum-gateway
      shared-db:
      - mysql
    units:
      nova-cloud-controller/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/3
        open-ports:
        - 3333/tcp
        - 8773/tcp
        - 8774/tcp
        - 9696/tcp
        public-address: 10.0.4.133
  nova-compute:
    charm: cs:trusty/nova-compute-14
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      cloud-compute:
      - nova-cloud-controller
      compute-peer:
      - nova-compute
      image-service:
      - glance
      neutron-plugin:
      - neutron-openvswitch
      shared-db:
      - mysql
    units:
      nova-compute/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: "1"
        public-address: 10.0.4.62
        subordinates:
          neutron-openvswitch/0:
            upgrading-from: cs:trusty/neutron-openvswitch-2
            agent-state: started
            agent-version: 1.20.11.1
            public-address: 10.0.4.62
  openstack-dashboard:
    charm: cs:trusty/openstack-dashboard-9
    exposed: false
    relations:
      cluster:
      - openstack-dashboard
      identity-service:
      - keystone
    units:
      openstack-dashboard/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/6
        open-ports:
        - 80/tcp
        - 443/tcp
        public-address: 10.0.4.155
  quantum-gateway:
    charm: cs:trusty/quantum-gateway-10
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      cluster:
      - quantum-gateway
      quantum-network-service:
      - nova-cloud-controller
      shared-db:
      - mysql
    units:
      quantum-gateway/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: "3"
        public-address: 10.0.4.10
  rabbitmq-server:
    charm: cs:trusty/rabbitmq-server-26
    exposed: false
    relations:
      amqp:
      - glance
      - glance-simplestreams-sync
      - neutron-api
      - neutron-openvswitch
      - nova-cloud-controller
      - nova-compute
      - quantum-gateway
      cluster:
      - rabbitmq-server
    units:
      rabbitmq-server/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 2/lxc/8
        open-ports:
        - 5672/tcp
        public-address: 10.0.4.11

You can also look at a subset of the status for a particular service, for example keystone with:

$ juju status keystone
environment: local
machines:
  "1":
    agent-state: started
    agent-version: 1.20.11.1
    dns-name: 10.0.4.128
    instance-id: ubuntu-local-machine-1
    series: trusty
    containers:
      1/lxc/2:
        agent-state: started
        agent-version: 1.20.11.1
        dns-name: 10.0.4.142
        instance-id: ubuntu-local-machine-1-lxc-2
        series: trusty
        hardware: arch=amd64
    hardware: arch=amd64 cpu-cores=2 mem=6144M root-disk=20480M
services:
  keystone:
    charm: cs:trusty/keystone-12
    exposed: false
    relations:
      cluster:
      - keystone
      identity-service:
      - glance
      - glance-simplestreams-sync
      - neutron-api
      - nova-cloud-controller
      - openstack-dashboard
      shared-db:
      - mysql
    units:
      keystone/0:
        agent-state: started
        agent-version: 1.20.11.1
        machine: 1/lxc/2
        public-address: 10.0.4.142

Monitoring the Installation

When performing an installation you can monitor the executed commands with:

$ tail -f $HOME/.cloud-install/commands.log

...

This provides a lot of debugging output. More streamlined logging is possible with the automated installation described later.

Uninstalling

As the single server instance is in an LXC container, uninstalling the environment is, as the documentation states, a rather trivial process that takes only a few seconds.

This will tear down the cloud but leave user data available for a subsequent deployment.

$ sudo openstack-install -k
Warning:

This will destroy the host Container housing the OpenStack private cloud. This is a permanent operation.
Proceed? [y/N] Y
Removing static route
Removing host container...
Container is removed.

You can also do a more permanent uninstall of the cloud and packages.

$ sudo openstack-install -u
Warning:

This will uninstall OpenStack and make a best effort to return the system back to its original state.
Proceed? [y/N] Y
Restoring system to last known state.
Ubuntu Openstack Installer Uninstalling ...Single install path.

This does not, however, seem to clean up $HOME/.cloud-install. You can safely remove this directory, or move it sideways, when re-deploying without any issues.
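
For example, my own habit (not a documented step) is to move it sideways with a simple rename:

$ sudo mv $HOME/.cloud-install $HOME/.cloud-install.old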

Installation automation

As described in my original post, the openstack-install script presents a curses-based interactive view. You can automate the installation by defining the needed setup inputs in a separate configuration file and running in headless mode.

$ echo "install_type: Single
openstack_password: openstack" > install.yaml

$ sudo openstack-install --headless --config install.yaml

This has the added benefit of providing a more meaningful log of the state of the installation, with less verbose information than in the commands.log file.

[INFO  • 06-02 12:02:42 • cloudinstall.install] Running in headless mode.
[INFO  • 06-02 12:02:42 • cloudinstall.install] Performing a Single Install
[INFO  • 06-02 12:02:42 • cloudinstall.task] [TASKLIST] ['Initializing Environment', 'Creating container', 'Bootstrapping Juju']
[INFO  • 06-02 12:02:42 • cloudinstall.task] [TASK] Initializing Environment
[INFO  • 06-02 12:02:42 • cloudinstall.consoleui] Building environment
[INFO  • 06-02 12:02:42 • cloudinstall.single_install] Prepared userdata: {'extra_sshkeys': ['ssh-rsa ...\n'], 'seed_command': ['env', 'pollinate', '-q']}
[INFO  • 06-02 12:02:42 • cloudinstall.single_install] Setting permissions for user rbradfor
[INFO  • 06-02 12:02:43 • cloudinstall.task] [TASK] Creating container
[INFO  • 06-02 12:04:20 • cloudinstall.single_install] Setting DHCP properties for host container.
[INFO  • 06-02 12:04:20 • cloudinstall.single_install] Adding static route for 10.0.4.0/24 via 10.0.3.160
...
[INFO  • 06-02 12:22:50 • cloudinstall.consoleui] Checking availability of nova-cloud-controller: pending
[INFO  • 06-02 12:23:31 • cloudinstall.consoleui] Checking availability of nova-cloud-controller: installed
[INFO  • 06-02 12:23:52 • cloudinstall.consoleui] Checking availability of nova-cloud-controller: started
[INFO  • 06-02 12:24:34 • cloudinstall.consoleui] Checking availability of keystone: started
[INFO  • 06-02 12:24:44 • cloudinstall.consoleui] Checking availability of keystone: started
[INFO  • 06-02 12:24:44 • cloudinstall.consoleui] Checking availability of nova-cloud-controller: started
[INFO  • 06-02 12:27:38 • cloudinstall.consoleui] Checking availability of quantum-gateway: started
[INFO  • 06-02 12:27:38 • cloudinstall.consoleui] Checking availability of nova-cloud-controller: started
[INFO  • 06-02 12:27:38 • cloudinstall.consoleui] Validating network parameters for Neutron
[INFO  • 06-02 12:27:53 • cloudinstall.consoleui] All systems go!

And 25 minutes later you have an available cloud.

If you attempt to look at the GUI status page with openstack-status you will be given a text-based version of messages like this:

$ sudo openstack-status
[INFO  • 06-02 12:06:21 • cloudinstall.core] Running openstack-status in headless mode.
[INFO  • 06-02 12:06:21 • cloudinstall.consoleui] Loaded placements from file.
[INFO  • 06-02 12:06:21 • cloudinstall.consoleui] Waiting for machines to start: 3 unknown
[INFO  • 06-02 12:08:20 • cloudinstall.consoleui] Waiting for machines to start: 1 pending, 2 unknown
[INFO  • 06-02 12:08:48 • cloudinstall.consoleui] Waiting for machines to start: 2 pending, 1 unknown
[INFO  • 06-02 12:09:04 • cloudinstall.consoleui] Waiting for machines to start: 1 down (started), 1 pending, 1 unknown
[INFO  • 06-02 12:09:13 • cloudinstall.consoleui] Waiting for machines to start: 1 down (started), 2 pending
[INFO  • 06-02 12:09:20 • cloudinstall.consoleui] Waiting for machines to start: 2 down (started), 1 pending
[INFO  • 06-02 12:09:26 • cloudinstall.consoleui] Waiting for machines to start: 1 pending, 2 started
[INFO  • 06-02 12:09:51 • cloudinstall.consoleui] Waiting for machines to start: 1 down (started), 2 started
[INFO  • 06-02 12:10:44 • cloudinstall.consoleui] Verifying service deployments
[INFO  • 06-02 12:10:44 • cloudinstall.consoleui] Missing ConsoleUI() attribute: set_pending_deploys
[INFO  • 06-02 12:10:44 • cloudinstall.consoleui] Checking if MySQL is deployed
[INFO  • 06-02 12:10:44 • cloudinstall.consoleui] Deploying MySQL to machine lxc:1
[INFO  • 06-02 12:10:49 • cloudinstall.consoleui] Deployed MySQL.
[INFO  • 06-02 12:10:49 • cloudinstall.consoleui] Checking if Juju GUI is deployed
[INFO  • 06-02 12:10:49 • cloudinstall.consoleui] Deploying Juju GUI to machine lxc:1
[INFO  • 06-02 12:11:00 • cloudinstall.consoleui] Deployed Juju GUI.
[INFO  • 06-02 12:11:00 • cloudinstall.consoleui] Checking if Keystone is deployed
[INFO  • 06-02 12:11:00 • cloudinstall.consoleui] Deploying Keystone to machine lxc:1
...

It seems you can trick it into providing both a GUI and text version with the following in another shell session.

$ sed -ie "/headless/d" $HOME/.cloud-install/config.yaml
$ sudo openstack-status

NOTE: You will not get any output until the initial container is completed. This also leaves a .pid file that must be manually cleaned up if you run it too soon. The next invocation provides the following message.

$ sudo openstack-status
Another instance of openstack-status is running. If you're sure there are no other instances, please remove ~/.cloud-install/openstack.pid
$ rm $HOME/.cloud-install/openstack.pid

Monitoring the installation progress

The running config.yaml file changes over the duration of the installation.
Its most basic configuration (when starting with the GUI) is:

$ more $HOME/.cloud-install/config.yaml
current_state: 0
openstack_release: juno

The release is also defined in the $HOME/.cloud-install/openstack_release file.
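
For example (the output shown is assumed from the configuration above):

$ cat $HOME/.cloud-install/openstack_release
juno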

When starting by passing the configuration as previously mentioned, its initial state is:

$ more $HOME/.cloud-install/config.yaml
config_file: install.yaml
current_state: 0
headless: true
install_type: Single
openstack_password: openstack
openstack_release: juno

This is updated when the LXC container is installed.

$ more $HOME/.cloud-install/config.yaml
config_file: install.yaml
container_ip: 10.0.3.77
current_state: 0
headless: true
install_type: Single
openstack_password: openstack
openstack_release: juno

It is also updated during installation, for example:

$ more $HOME/.cloud-install/config.yaml
config_file: install.yaml
container_ip: 10.0.3.77
current_state: 0
headless: true
install_type: Single
openstack_password: openstack
openstack_release: juno
placements:
  controller:
    assignments:
      LXC:
      - nova-cloud-controller
      - glance
      - glance-simplestreams-sync
      - openstack-dashboard
      - juju-gui
      - keystone
      - mysql
      - neutron-api
      - neutron-openvswitch
      - rabbitmq-server
    constraints:
      cpu-cores: 2
      mem: 6144
      root-disk: 20480
  nova-compute-machine-0:
    assignments:
      BareMetal:
      - nova-compute
    constraints:
      mem: 4096
      root-disk: 40960
  quantum-gateway-machine-0:
    assignments:
      BareMetal:
      - quantum-gateway
    constraints:
      mem: 2048
      root-disk: 20480

When completed, the configuration has the following settings.

config_file: install.yaml
container_ip: 10.0.3.77
current_state: 2
deploy_complete: true
install_type: Single
openstack_password: openstack
openstack_release: juno
placements:
...

Problems

When using the GUI installer, the first time you quit (using Q) it seems to leave the terminal in a broken state. The following will reset it to normal use.

$ stty sane  ^j    # ^j means press Ctrl-J, as Enter may not work in this state

Subsequent uses of openstack-status do not have the same problem.

In my next post I am going to talk about the analysis taken to debug errors in the installation, starting with the Keystone – hook failed: “config-changed” message I got attempting to install Kilo, and hence this more detailed analysis of the installation process components.

Installing Ubuntu OpenStack

The Canonical Distribution of Ubuntu OpenStack provides a simple installer to run an OpenStack cloud. You can deploy a simple single machine setup with fully containerized services (11 in total), or a multi server installation leveraging MAAS – Metal as a Service and Landscape Autopilot.

Installation

This post describes my experiences with the single machine setup on a 4 core machine with 32GB of RAM and a clean Ubuntu 14.04 LTS OS. The installation requires the following commands to configure the repo, then install and configure your OpenStack cloud. In this example, the installed version is 0.22.3.

sudo apt-add-repository -y ppa:cloud-installer/stable
sudo apt-get update
sudo apt-get install -y openstack
sudo openstack-install --version
sudo openstack-install

The final step uses a curses-based interface and only requires two inputs before the installation.

  • Specify a password
  • Specify the install type

The UI provides a progress status of the installation. Initially new containers start with a Pending status. Once the Juju GUI container has started, the footer bar shows the URL for the Juju GUI, in my case http://10.0.4.112. Once the OpenStack Dashboard has started, you will also get a Horizon URL in the footer, such as http://10.0.4.74/horizon.

Horizon

The Horizon display is what you generally expect.

JujuGUI

The JujuGUI provides a display of the deployment orchestration via charms. You can also drill down to specific services. An example is the glance service, using the charm cs:trusty/glance-11. This describes the relationships and configuration that are also seen in the GUI. You can also view online the full source code used to create this deployed service.

OpenStack Status

You can view the state of your containerized cloud with openstack-status, which is a curses-based display of the running installation, the same one used during the installation. This displays the units deployed, status messages and a footer URL bar that indicates the URLs of Horizon and the Juju GUI. Each time you invoke this it will also check services, as indicated by the [INFO] messages.

Connecting to Containers

The installer will automatically create a SSH key for the user that you use to run the openstack-install command. This enables you to SSH to any of the containers, for example to connect to the MySQL container.

ssh ubuntu@<mysql-container-ip>
$ mysql -uroot -p`sudo cat /var/lib/mysql/mysql.passwd` -e "SHOW SCHEMAS"
+--------------------+
| Database           |
+--------------------+
| information_schema |
| glance             |
| keystone           |
| mysql              |
| neutron            |
| nova               |
| performance_schema |
+--------------------+

You can use the various OpenStack clients to access OpenStack services. These are not installed by default.

$ sudo apt-get install -y python-glanceclient python-openstackclient python-novaclient python-keystoneclient
$ source $HOME/.cloud-install/openstack-admin-rc
$ glance image-list
+--------------------------------------+---------------------------------------------------------------+-------------+------------------+-----------+--------+
| ID                                   | Name                                                          | Disk Format | Container Format | Size      | Status |
+--------------------------------------+---------------------------------------------------------------+-------------+------------------+-----------+--------+
| f3cd4ec6-8ce6-4a44-85ec-2f8f066f351b | auto-sync/ubuntu-trusty-14.04-amd64-server-20150528-disk1.img | qcow2       | bare             | 257294848 | active |
+--------------------------------------+---------------------------------------------------------------+-------------+------------------+-----------+--------+

More Information

Read Tracking the Ubuntu OpenStack installation process for more detailed information on monitoring the installation process.

Thanks to the New York OpenStack Group and a presentation by Mark Baker of Canonical, who demonstrated MAAS and Landscape Autopilot installation of OpenStack (slides: Automating hard things).

My thoughts on Architecture and Software Development with MySQL

Yesterday I was able to present to the Portland MySQL Users Group two presentations that are important foundations for effective development with MySQL.

With 26 years of architectural experience in RDBMS and 16 years of MySQL knowledge, my extensive consulting exposure to large and small companies has led to these presentations covering common obstacles I have seen and helped organizations with, all of which can be easily avoided. The benefits of improved development and better processes lead to better quality software, better performance and a lower cost of ownership; that is, saving companies money.

Thanks to Emily and Daniel for organizing and New Relic for hosting.

Python 3 semantics for integer division

As I refresh my skills moving from Python 2 to Python 3 semantics, I discovered there is a difference in the division operator (i.e. /).

When using integers in Python 2 the result (by default) is an integer. For example:

$ python2
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
>>> 1/2
0
>>> 1/2.0
0.5

In Python 3 the result is a float.

$ python3
Python 3.4.0 (default, Apr 11 2014, 13:05:11)
>>> 1/2
0.5

The Porting Python 2 Code to Python 3 documentation encourages you to perform the following import.

$ python2
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
>>> from __future__ import division
>>> 1/2
0.5

I was also unaware of the floor division operator (i.e. //) as specified in PEP 238.

$ python2
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
>>> from __future__ import division
>>> 1//2
0
>>> 1/2
0.5

I uncovered this by playing with algorithms between versions. This Newton's Method for the square root was something I was unaware of, and the referenced example failed when using Python 2.
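
To illustrate why (assuming the referenced example used expressions such as 1/2 rather than 1/2.0), under Python 2 the core Newton step collapses to zero:

$ python2
>>> number = 10
>>> root = number/2                 # integer division: root is 5
>>> (1/2)*(root + number/root)      # (1/2) is 0 in Python 2, so the new estimate is always 0
0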

My improved version of the referenced example, which needs no import:

def squareroot(number, precision=5):
  root = number/2.0                            # initial estimate, forced to a float
  for i in range(20):
    nroot = (1/2.0)*(root + (number/root))     # Newton iteration for sqrt
    #print i, nroot
    if abs(root - nroot) < 1.0/10**precision:  # stop once the estimate converges
      break
    root = nroot
  return round(nroot, precision)

>>> squareroot(10)
3.16228
>>> squareroot(10,1)
3.2
>>> squareroot(10,2)
3.16
>>> squareroot(10,5)
3.16228
>>> squareroot(10,10)
3.1622776602

Installing Python 3.3 on Ubuntu 14.04.2 LTS

Ubuntu 14.04 by default provides Python 2.7 and 3.4. You may want to install Python 3.3, in my case because various OpenStack projects maintain 3.3 compatibility.

I had a hard time finding what I would consider an official means. The following steps use a third-party PPA. I welcome comments for any other ways to install multiple Python environments.

$ sudo apt-get install -y python-software-properties
$ sudo add-apt-repository -y ppa:fkrull/deadsnakes
gpg: keyring `/tmp/tmpuljbio98/secring.gpg' created
gpg: keyring `/tmp/tmpuljbio98/pubring.gpg' created
gpg: requesting key DB82666C from hkp server keyserver.ubuntu.com
gpg: /tmp/tmpuljbio98/trustdb.gpg: trustdb created
gpg: key DB82666C: public key "Launchpad Old Python Versions" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)

NOTE: The add repo command prompts for a user response without -y.

$ sudo add-apt-repository ppa:fkrull/deadsnakes

 This PPA has older and newer Python versions for Ubuntu. The packages in the official archives generally don't go back all that far, but people might still need to develop and test against these old Python interpreters. There also was a time when Google App Engine still ran on Python 2.5, but nobody likes to talk about that.

A disclaimer first: I do not guarantee any kind of updates. In particular, I shed all responsibility for security issues in these packages. If you want to use them in a security-or-otherwise-critical environment (say, on a production server), you do so at your own risk.

For Python 2.7 updates for supported Ubuntu releases, see my dedicated 2.7 PPA:

https://launchpad.net/~fkrull/+archive/ubuntu/deadsnakes-python2.7

Reporting Issues
================
Issues can be reported in the master issue tracker at:

https://bitbucket.org/fk/deadsnakes-issues

Donations
=========
If you like what I'm doing here, you can show your appreciation by donating:

https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=TTTFWBJ2DZK6E

Supported Ubuntu Versions
=========================
Supported — Precise, Trusty, Utopic

Generally, I try to support Ubuntu releases until their official End-of-Life.

Supported Python Versions
=========================
Currently supported releases — 2.3, 2.4, 2.5, 2.6, 2.7, 3.1, 3.2, 3.3, 3.4

Basically, if an Ubuntu version doesn't have an official package for a specific major Python version (be it "any more" or "yet"), look for one in this PPA. However, for a given Python major release, don't expect to find newer point releases if there is already an older point release in the official Ubuntu repositories (i.e., if an Ubuntu release has a package for Python 2.6.4, I won't provide a package with 2.6.5 for that Ubuntu release): newer Python point releases shouldn't add new features or change behaviour, so they're rather pointless (no pun intended) for development and testing; conversely, if that Python point release has a bug that is fixed in a newer release, that's still an issue with the original package and should be taken up with the Ubuntu or Debian maintainer of the package. Besides, making these update packages externally to the original repositories is a bit of a pain.

As an exception, I have updated Python 2.7 packages for several Ubuntu releases in a separate PPA:

https://launchpad.net/~fkrull/+archive/ubuntu/deadsnakes-python2.7

Supported Python Packages
=========================
Using third-party modules packaged for Debian or Ubuntu with the Python interpreters from this repository is a bit of a mixed bag. For Python 2, Python modules from the official repositories will not work, as a consequence of how Python packaging works in Debian. For Python 3 on the other hand, all pure-Python module packages at least should be available; compiled extension modules will not work however.

In general, you're better off installing Python modules using the common Python packaging tools rather than the system package manager. For an introduction into the Python packaging ecosystem and its tools, refer to the Python Packaging User Guide [1].

[1] https://packaging.python.org
 More info: https://launchpad.net/~fkrull/+archive/ubuntu/deadsnakes
Press [ENTER] to continue or ctrl-c to cancel adding it

gpg: keyring `/tmp/tmp1rda6tjx/secring.gpg' created
gpg: keyring `/tmp/tmp1rda6tjx/pubring.gpg' created
gpg: requesting key DB82666C from hkp server keyserver.ubuntu.com
gpg: /tmp/tmp1rda6tjx/trustdb.gpg: trustdb created
gpg: key DB82666C: public key "Launchpad Old Python Versions" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
OK
$ sudo apt-get update
$ sudo apt-cache show python3.3
Package: python3.3
Priority: optional
Section: python
Installed-Size: 255
Maintainer: Felix Krull 
Architecture: amd64
Version: 3.3.6-4+trusty1
Suggests: python3.3-doc, binutils
Depends: python3.3-minimal (= 3.3.6-4+trusty1), libpython3.3-stdlib (= 3.3.6-4+trusty1), mime-support
Filename: pool/main/p/python3.3/python3.3_3.3.6-4+trusty1_amd64.deb
Size: 136098
MD5sum: 924f7fcd5e84d0938c1ae8e8c5b7f226
SHA1: 8e34edec87644b7c620a98a5f2e5fa54f4cbba67
SHA256: 44bc419559695dd78f9b5e37a9bf7ce586c4d3b1922e896bb15ed9d243cb578c
Description-en: Interactive high-level object-oriented language (version 3.3)
 Python is a high-level, interactive, object-oriented language. Its 3.3 version
 includes an extensive class library with lots of goodies for
 network programming, system administration, sounds and graphics.
Description-md5: d77f2abf1b0095e2e7bf5e21022e3d54
Multi-Arch: allowed
Original-Maintainer: Matthias Klose 

And now you can install the specific version.

$ sudo apt-get install -y python3.3 python3.3-dev

At this time I now have 3 different versions as well as the python and python3 aliases.

$ python --version
Python 2.7.6

$ python3 --version
Python 3.4.0

$ python2.7 --version
Python 2.7.6

$ python3.3 --version
Python 3.3.6

$ python3.4 --version
Python 3.4.0

This enabled me to use Python 3.3 for my OpenStack tox testing, which at first pass produces the same error as Python 3.4.

$ tox -epy33 --notest

Percona Live Presentation: MySQL Security Essentials

The slides for my MySQL Security Essentials presentation at Percona Live 2015 MySQL Conference and Expo are now available.

In this presentation I discuss just how insecure legacy versions of MySQL are and what are the essential requirements for securing your installation on disk, via network and with user privileges. I provide recommendations for how to manage application access for your most important data asset.

This presentation describes the key security improvements in MySQL 5.6 and MySQL 5.7 as well as additional features provided in MariaDB 10.0 and 10.1 supporting roles and encryption.

I have also included slides for how easy it is to Hack MySQL and examples of denial of service attacks that are possible with even limited MySQL access.

Percona Live Presentation: Improving Performance With Better Indexes

The slides for my Improving Performance With Better Indexes presentation at Percona Live 2015 MySQL Conference and Expo are now available.

In this presentation I discuss how to identify, review and analyze SQL statements in order to create better indexes for your queries. This includes understanding the EXPLAIN syntax and how to create and identify covering and partial column indexes.

This presentation is based on work with a customer, showing a 95% improvement in a key 15-table join query running at 15,000 QPS in an infrastructure processing 25 billion SQL statements per day.

As mentioned, Explaining the MySQL Explain is an additional presentation that goes into more detail for learning how to read Query Execution Plans (QEP) in MySQL.

Updating MySQL on Ubuntu 12.04 LTS to MySQL 5.6

The Ubuntu 12.04.3 LTS release only provides MySQL 5.1 and MySQL 5.5 using the default Ubuntu package manager.

Oracle (owner of the MySQL(tm) trademark) now provides Debian/Ubuntu APT repositories for all GA and DMR versions of MySQL, including support for Ubuntu 12.04.

The following steps demonstrate upgrading from the Ubuntu 5.5 server package to the Oracle 5.6 server package.

Verify MySQL Packages

$ apt-cache search mysql-server
mysql-server - MySQL database server (metapackage depending on the latest version)
mysql-server-5.5 - MySQL database server binaries and system database setup
mysql-server-core-5.5 - MySQL database server binaries
auth2db - Powerful and eye-candy IDS logger, log viewer and alert generator
cacti - Frontend to rrdtool for monitoring systems and services
torrentflux - web based, feature-rich BitTorrent download manager

Verify MySQL on Server

$  dpkg -l | grep mysql
ii  libdbd-mysql-perl                      4.020-1build2                                       Perl5 database interface to the MySQL database
ii  libmysqlclient-dev                     5.5.34-0ubuntu0.12.04.1                             MySQL database development files
ii  libmysqlclient18                       5.5.34-0ubuntu0.12.04.1                             MySQL database client library
ii  mysql-client-5.5                       5.5.31-0ubuntu0.12.04.1                             MySQL database client binaries
ii  mysql-client-core-5.5                  5.5.34-0ubuntu0.12.04.1                             MySQL database core client binaries
ii  mysql-common                           5.5.34-0ubuntu0.12.04.1                             MySQL database common files, e.g. /etc/mysql/my.cnf
ii  mysql-server-5.5                       5.5.31-0ubuntu0.12.04.1                             MySQL database server binaries and system database setup
ii  mysql-server-core-5.5                  5.5.31-0ubuntu0.12.04.1                             MySQL database server binaries
ii  php5-mysqlnd                           5.3.10-1ubuntu3.8                                   MySQL module for php5 (Native Driver)

Results may vary based on dependencies.

Checking the MySQL error log (as it is always good practice to do):

$ sudo tail -50 /var/log/mysql/error.log
150402 16:02:49 [Note] Plugin 'FEDERATED' is disabled.
150402 16:02:49 InnoDB: The InnoDB memory heap is disabled
150402 16:02:49 InnoDB: Mutexes and rw_locks use GCC atomic builtins
150402 16:02:49 InnoDB: Compressed tables use zlib 1.2.3.4
150402 16:02:49 InnoDB: Initializing buffer pool, size = 1.0G
150402 16:02:49 InnoDB: Completed initialization of buffer pool
150402 16:02:49 InnoDB: highest supported file format is Barracuda.
150402 16:02:49  InnoDB: Waiting for the background threads to start
150402 16:02:50 InnoDB: 5.5.31 started; log sequence number 20079278867
150402 16:02:50 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306
150402 16:02:50 [Note]   - '127.0.0.1' resolves to '127.0.0.1';
150402 16:02:50 [Note] Server socket created on IP: '127.0.0.1'.
150402 16:02:50 [Note] Event Scheduler: Loaded 0 events
150402 16:02:50 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.31-0ubuntu0.12.04.1'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)

A check shows that this is not the most current 5.5 version available in the Ubuntu packages.

$ apt-cache show mysql-server-5.5
Package: mysql-server-5.5
Priority: optional
Section: database
Installed-Size: 31947
Maintainer: Ubuntu Developers 
Original-Maintainer: Debian MySQL Maintainers 
Architecture: amd64
Source: mysql-5.5
Version: 5.5.41-0ubuntu0.12.04.1
...

Package: mysql-server-5.5
Status: install ok installed
Priority: optional
Section: database
Installed-Size: 31950
Maintainer: Ubuntu Developers 
Architecture: amd64
Source: mysql-5.5
Version: 5.5.31-0ubuntu0.12.04.1
...

Just to be consistent with keeping current versions, you may choose to update MySQL 5.5 to the current available version.

$ sudo apt-get install mysql-server-5.5
...

Installing Oracle APT Packaging

The recommended documented way to move to using the Oracle repo is:

cd /tmp
# See https://dev.mysql.com/downloads/repo/apt/ for your right distro version
wget https://dev.mysql.com/get/mysql-apt-config_0.3.3-2ubuntu12.04_all.deb
sudo dpkg -i mysql-apt-config*.deb

This unfortunately uses a curses-based interface, which is not something you can automate for production systems, and not the approach I would suggest.

So, doing manually what this does:

echo "deb http://repo.mysql.com/apt/ubuntu/ precise mysql-apt-config
deb http://repo.mysql.com/apt/ubuntu/ precise mysql-5.6" | sudo tee /etc/apt/sources.list.d/mysql.list
curl -s http://ronaldbradford.com/mysql/mysql.gpg | sudo apt-key add -
sudo apt-get update

Now we can look at available versions.

$ apt-cache search mysql-server
mysql-server-5.5 - MySQL database server binaries and system database setup
mysql-server-core-5.5 - MySQL database server binaries
auth2db - Powerful and eye-candy IDS logger, log viewer and alert generator
cacti - Frontend to rrdtool for monitoring systems and services
torrentflux - web based, feature-rich BitTorrent download manager
mysql-community-server - MySQL Server
mysql-server - MySQL Server meta package depending on latest version

This is where life gets a little confusing. Because Ubuntu supplied MySQL 5.1 (as mysql-server) and MySQL 5.5 (as mysql-server-5.5), the package names can be misleading.

$ apt-cache show mysql-server
Package: mysql-server
Source: mysql-community
Version: 5.6.23-1ubuntu12.04
Architecture: amd64
Maintainer: MySQL Release Engineering 
Installed-Size: 46
Depends: mysql-community-server (= 5.6.23-1ubuntu12.04)
Homepage: http://www.mysql.com/
Priority: optional
Section: database
Filename: pool/mysql-5.6/m/mysql-community/mysql-server_5.6.23-1ubuntu12.04_amd64.deb
Size: 11644
SHA256: 1cb166cd230d2a4daca761ea80f2f34ee1fc0c92aaae972c914d81746f235d63
SHA1: 63548c852d5faeda751fbf038c0799fbbeac9905
MD5sum: da2f709a29a7cac97c834e6e69929891
Description: MySQL Server meta package depending on latest version
 The MySQL(TM) software delivers a very fast, multi-threaded, multi-user,
 and robust SQL (Structured Query Language) database server. MySQL Server
 is intended for mission-critical, heavy-load production systems as well
 as for embedding into mass-deployed software. MySQL is a trademark of
 Oracle. This is a meta package that depends on the latest mysql server
 package available in the repository.

Package: mysql-server
Priority: optional
Section: database
Installed-Size: 114
Maintainer: Ubuntu Developers 
Original-Maintainer: Debian MySQL Maintainers 
Architecture: all
Source: mysql-5.5
Version: 5.5.41-0ubuntu0.12.04.1
...

We are looking to ensure the Maintainer is MySQL Release Engineering, i.e. the official release.
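
A quick way to confirm which version apt will now install is apt-cache policy; the abbreviated output below is a sketch based on the versions shown above:

$ apt-cache policy mysql-server
mysql-server:
  Installed: (none)
  Candidate: 5.6.23-1ubuntu12.04
...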

Upgrading to MySQL 5.6

sudo service mysql stop
ps -ef | grep mysql
$ sudo apt-get install -y mysql-server
...
The following extra packages will be installed:
  mysql-client mysql-common mysql-community-client mysql-community-server
The following packages will be REMOVED:
  mysql-client-5.5 mysql-client-core-5.5 mysql-server-5.5 mysql-server-core-5.5
The following NEW packages will be installed:
  mysql-client mysql-community-client mysql-community-server mysql-server
The following packages will be upgraded:
  mysql-common
...
Configuration file `/etc/mysql/my.cnf'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** my.cnf (Y/I/N/O/D/Z) [default=N] ? N
...
Installing new version of config file /etc/apparmor.d/usr.sbin.mysqld ...
2015-04-02 16:53:07 0 [Warning] Using unique option prefix key_buffer instead of key_buffer_size is deprecated and will be removed in a future release. Please use the full name instead.
2015-04-02 16:53:07 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
150402 16:53:14 mysqld_safe Can't log to error log and syslog at the same time.  Remove all --log-error configuration options for --syslog to take effect
...

You may think the process is complete, but it is not. Always, always check the error log. Have you checked your MySQL error log today?

$ sudo tail -300 /var/log/mysql/error.log
...
2015-04-02 16:53:14 20429 [Warning] Buffered warning: Changed limits: table_cache: 431 (requested 2000)

2015-04-02 16:53:14 20429 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
2015-04-02 16:53:14 20429 [Note] Plugin 'FEDERATED' is disabled.
2015-04-02 16:53:14 20429 [Note] InnoDB: Using atomics to ref count buffer pool pages
2015-04-02 16:53:14 20429 [Note] InnoDB: The InnoDB memory heap is disabled
2015-04-02 16:53:14 20429 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2015-04-02 16:53:14 20429 [Note] InnoDB: Memory barrier is not used
2015-04-02 16:53:14 20429 [Note] InnoDB: Compressed tables use zlib 1.2.3.4
2015-04-02 16:53:14 20429 [Note] InnoDB: Using Linux native AIO
2015-04-02 16:53:14 20429 [Note] InnoDB: Not using CPU crc32 instructions
2015-04-02 16:53:14 20429 [Note] InnoDB: Initializing buffer pool, size = 1.0G
2015-04-02 16:53:15 20429 [Note] InnoDB: Completed initialization of buffer pool
2015-04-02 16:53:15 20429 [Note] InnoDB: Highest supported file format is Barracuda.
2015-04-02 16:53:15 20429 [Note] InnoDB: 128 rollback segment(s) are active.
2015-04-02 16:53:15 20429 [Note] InnoDB: Waiting for purge to start
2015-04-02 16:53:15 20429 [Note] InnoDB: 5.6.23 started; log sequence number 20079286519
2015-04-02 16:53:15 20429 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 487fda28-d97a-11e4-9254-e0cb4e3feb73.
2015-04-02 16:53:15 20429 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306
2015-04-02 16:53:15 20429 [Note]   - '127.0.0.1' resolves to '127.0.0.1';
2015-04-02 16:53:15 20429 [Note] Server socket created on IP: '127.0.0.1'.
2015-04-02 16:53:15 20429 [ERROR] Column count of mysql.events_waits_current is wrong. Expected 19, found 16. Created with MySQL 50541, now running 50623. Please use mysql_upgrade to fix this error.
2015-04-02 16:53:15 20429 [ERROR] Column count of mysql.events_waits_history is wrong. Expected 19, found 16. Created with MySQL 50541, now running 50623. Please use mysql_upgrade to fix this error.
2015-04-02 16:53:15 20429 [ERROR] Column count of mysql.events_waits_history_long is wrong. Expected 19, found 16. Created with MySQL 50541, now running 50623. Please use mysql_upgrade to fix this error.
2015-04-02 16:53:15 20429 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_host_by_event_name' has the wrong structure
2015-04-02 16:53:15 20429 [ERROR] Incorrect definition of table performance_schema.events_waits_summary_by_thread_by_event_name: expected column 'THREAD_ID' at position 0 to have type bigint(20), found type int(11).
2015-04-02 16:53:15 20429 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_user_by_event_name' has the wrong structure
2015-04-02 16:53:15 20429 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_account_by_event_name' has the wrong structure
2015-04-02 16:53:15 20429 [ERROR] Column count of mysql.file_summary_by_event_name is wrong. Expected 23, found 5. Created with MySQL 50541, now running 50623. Please use mysql_upgrade to fix this error.
2015-04-02 16:53:15 20429 [ERROR] Column count of mysql.file_summary_by_instance is wrong. Expected 25, found 6. Created with MySQL 50541, now running 50623. Please use mysql_upgrade to fix this error.
2015-04-02 16:53:15 20429 [ERROR] Native table 'performance_schema'.'host_cache' has the wrong structure
2015-04-02 16:53:15 20429 [ERROR] Incorrect definition of table performance_schema.mutex_instances: expected column 'LOCKED_BY_THREAD_ID' at position 2 to have type bigint(20), found type int(11).
2015-04-02 16:53:15 20429 [ERROR] Native table 'performance_schema'.'objects_summary_global_by_type' has the wrong structure
2015-04-02 16:53:15 20429 [ERROR] Incorrect definition of table performance_schema.rwlock_instances: expected column 'WRITE_LOCKED_BY_THREAD_ID' at position 2 to have type bigint(20), found type int(11).
2015-04-02 16:53:15 20429 [ERROR] Native table 'performance_schema'.'setup_actors' has the wrong structure
2015-04-02 16:53:15 20429 [ERROR] Native table 'performance_schema'.'setup_objects' has the wrong structure
2015-04-02 16:53:15 20429 [ERROR] Native table 'performance_schema'.'table_io_waits_summary_by_index_usage' has the wrong structure
2015-04-02 16:53:15 20429 [ERROR] Native table 'performance_schema'.'table_io_waits_summary_by_table' has the wrong structure
2015-04-02 16:53:15 20429 [ERROR] Native table 'performance_schema'.'table_lock_waits_summary_by_table' has the wrong structure
2015-04-02 16:53:15 20429 [ERROR] Column count of mysql.threads is wrong. Expected 14, found 3. Created with MySQL 50541, now running 50623. Please use mysql_upgrade to fix this error.
2015-04-02 16:53:15 20429 [ERROR] Native table 'performance_schema'.'events_stages_current' has the wrong structure
2015-04-02 16:53:15 20429 [ERROR] Native table 'performance_schema'.'events_stages_history' has the wrong structure
2015-04-02 16:53:15 20429 [ERROR] Native table 'performance_schema'.'events_stages_history_long' has the wrong structure
...

Completing the MySQL 5.6 Upgrade

An upgrade of the mysql meta schema (the system tables) is necessary.

$ sudo mysql_upgrade -uroot -p
Enter password:
Looking for 'mysql' as: mysql
Looking for 'mysqlcheck' as: mysqlcheck
Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock'
Warning: Using a password on the command line interface can be insecure.
Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock'
Warning: Using a password on the command line interface can be insecure.
mysql.columns_priv                                 OK
mysql.db                                           OK
mysql.event                                        OK
mysql.func                                         OK
mysql.general_log                                  OK
mysql.help_category                                OK
mysql.help_keyword                                 OK
mysql.help_relation                                OK
mysql.help_topic                                   OK
mysql.host                                         OK
mysql.ndb_binlog_index                             OK
mysql.plugin                                       OK
mysql.proc                                         OK
mysql.procs_priv                                   OK
mysql.proxies_priv                                 OK
mysql.servers                                      OK
mysql.slow_log                                     OK
mysql.tables_priv                                  OK
mysql.time_zone                                    OK
mysql.time_zone_leap_second                        OK
mysql.time_zone_name                               OK
mysql.time_zone_transition                         OK
mysql.time_zone_transition_type                    OK
mysql.user                                         OK
Running 'mysql_fix_privilege_tables'...
Warning: Using a password on the command line interface can be insecure.
Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock'
Warning: Using a password on the command line interface can be insecure.
Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock'
Warning: Using a password on the command line interface can be insecure.
...
OK
$ sudo service mysql restart
 * Stopping MySQL Community Server 5.6.23
....
 * MySQL Community Server 5.6.23 is stopped
 * Re-starting MySQL Community Server 5.6.23
150402 17:06:17 mysqld_safe Can't log to error log and syslog at the same time.  Remove all --log-error configuration options for --syslog to take effect.
......
 * MySQL Community Server 5.6.23 is started
$ sudo tail -300 /var/log/mysql/error.log
...
2015-04-02 17:06:15 20429 [Note] /usr/sbin/mysqld: Shutdown complete

150402 17:06:15 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
150402 17:06:17 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
2015-04-02 17:06:17 0 [Warning] Using unique option prefix key_buffer instead of key_buffer_size is deprecated and will be removed in a future release. Please use the full name instead.
2015-04-02 17:06:17 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2015-04-02 17:06:17 20994 [Warning] Buffered warning: Changed limits: max_open_files: 1024 (requested 5000)

2015-04-02 17:06:17 20994 [Warning] Buffered warning: Changed limits: table_cache: 431 (requested 2000)

2015-04-02 17:06:17 20994 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
2015-04-02 17:06:17 20994 [Note] Plugin 'FEDERATED' is disabled.
2015-04-02 17:06:17 20994 [Note] InnoDB: Using atomics to ref count buffer pool pages
2015-04-02 17:06:17 20994 [Note] InnoDB: The InnoDB memory heap is disabled
2015-04-02 17:06:17 20994 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2015-04-02 17:06:17 20994 [Note] InnoDB: Memory barrier is not used
2015-04-02 17:06:17 20994 [Note] InnoDB: Compressed tables use zlib 1.2.3.4
2015-04-02 17:06:17 20994 [Note] InnoDB: Using Linux native AIO
2015-04-02 17:06:17 20994 [Note] InnoDB: Not using CPU crc32 instructions
2015-04-02 17:06:17 20994 [Note] InnoDB: Initializing buffer pool, size = 1.0G
2015-04-02 17:06:17 20994 [Note] InnoDB: Completed initialization of buffer pool
2015-04-02 17:06:17 20994 [Note] InnoDB: Highest supported file format is Barracuda.
2015-04-02 17:06:17 20994 [Note] InnoDB: 128 rollback segment(s) are active.
2015-04-02 17:06:17 20994 [Note] InnoDB: Waiting for purge to start
2015-04-02 17:06:17 20994 [Note] InnoDB: 5.6.23 started; log sequence number 20081020877
2015-04-02 17:06:17 20994 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306
2015-04-02 17:06:17 20994 [Note]   - '127.0.0.1' resolves to '127.0.0.1';
2015-04-02 17:06:17 20994 [Note] Server socket created on IP: '127.0.0.1'.
2015-04-02 17:06:17 20994 [Note] Event Scheduler: Loaded 0 events
2015-04-02 17:06:17 20994 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.6.23'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server (GPL)

Correcting errors

As you can see there are several warnings/errors when starting MySQL.

The first is

 mysqld_safe Can't log to error log and syslog at the same time.  Remove all --log-error configuration options for --syslog to take effect.

We solve this with

$ sudo rm -f /etc/mysql/conf.d/mysqld_safe_syslog.cnf
$ sudo service mysql restart
 * Stopping MySQL Community Server 5.6.23
....
 * MySQL Community Server 5.6.23 is stopped
 * Re-starting MySQL Community Server 5.6.23
......
 * MySQL Community Server 5.6.23 is started

This is an Ubuntu default that conflicts with the my.cnf log_error we are familiar with in monitoring the MySQL error log. You can read my opinion on this in The correct approach to rolling MySQL logs.

The second is

2015-04-02 17:06:17 0 [Warning] Using unique option prefix key_buffer instead of key_buffer_size is deprecated and will be removed in a future release. Please use the full name instead.

We solve this with

sudo sed -i -e "s/^key_buffer\([^_]\)/key_buffer_size\1/" /etc/mysql/my.cnf

Next

2015-04-02 17:18:06 22123 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.

We solve this with

sudo sed -i -e "s/^myisam-recover\([^-]\)/myisam-recover-options\1/" /etc/mysql/my.cnf

The remaining warnings are interesting, and will be part of a following post on the MySQL 5.6 configuration changes discussed in the next section.

2015-04-02 17:22:08 22626 [Warning] Buffered warning: Changed limits: max_open_files: 1024 (requested 5000)
2015-04-02 17:22:08 22626 [Warning] Buffered warning: Changed limits: table_cache: 431 (requested 2000)

Leveraging MySQL 5.6 benefits

We may now have a MySQL 5.6 installation, however we are far from utilizing the benefits of MySQL 5.6 fully. In a subsequent post I will talk about the configuration options we now need to consider, both new options such as innodb_purge_threads and important improvements such as sync_binlog. There are more complex changes including innodb_file_per_table, master_info_repository and relay_log_info_repository, and then changes in defaults such as performance_schema.
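
As a preview, and purely as a sketch of the settings to review (the values shown are illustrative assumptions, not recommendations for your workload), the my.cnf entries in question look like:

[mysqld]
# New in 5.6: dedicated purge thread(s) separate from the master thread
innodb_purge_threads       = 1
# Improved group commit makes this durable setting far more practical in 5.6
sync_binlog                = 1
innodb_file_per_table      = 1
# Crash-safe replication metadata, new in 5.6
master_info_repository     = TABLE
relay_log_info_repository  = TABLE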

Validating MySQL version numbers

As part of a MySQL 5.5 to MySQL 5.6 upgrade across several Ubuntu servers of varying distros an audit highlighted a trivial but interesting versioning identification error in Ubuntu’s packaging of MySQL.

Ubuntu 12.04 LTS

$ sudo dpkg -l | grep mysql-server-5.5
ii  mysql-server-5.5   5.5.41-0ubuntu0.12.04.1  ...
$ mysql -uroot -p -e "SELECT VERSION()"
+-------------------------+
| VERSION()               |
+-------------------------+
| 5.5.41-0ubuntu0.12.04.1 |
+-------------------------+

But when you look at the mysql --version output it does NOT say 5.5.41.

$ mysql --version
mysql  Ver 14.14 Distrib 5.5.34, for debian-linux-gnu (x86_64) using readline 6.2

Ubuntu 14.04 LTS

On 14.04 I get expected results.

$ sudo dpkg -l | grep mysql-server-5.5
ii  mysql-server-5.5       5.5.41-0ubuntu0.14.04.1   ...
$ mysql -uroot -p -e "SELECT VERSION()"
+-------------------------+
| VERSION()               |
+-------------------------+
| 5.5.41-0ubuntu0.14.04.1 |
+-------------------------+
$ mysql --version
mysql  Ver 14.14 Distrib 5.5.41, for debian-linux-gnu (x86_64) using readline 6.3
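
To check for this mismatch quickly on a given server, a minimal /bin/sh sketch might look like the following (the package name is as above; the comparison is a simple prefix match, and assumes the version string formats shown in this post):

#!/bin/sh
# Compare the packaged version with the client binary version
PKG=`dpkg -l mysql-server-5.5 | awk '/^ii/ {print $3}'`
CLI=`mysql --version | awk '{print $5}' | tr -d ','`
echo "package=$PKG client=$CLI"
case "$PKG" in
  "$CLI"*) echo "OK: client matches package" ;;
  *)       echo "WARNING: version mismatch" ;;
esac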

Dynamic recreation of InnoDB redo logs

MySQL 5.6 will now automatically recreate the InnoDB redo log files during a MySQL restart if the size (or number) of these logs changes, i.e. a change to innodb_log_file_size. See Changing the Number or Size of InnoDB Log Files which states “If InnoDB detects that the innodb_log_file_size differs from the redo log file size, it will write a log checkpoint, close and remove the old log files, create new log files at the requested size, and open the new log files.”

Before MySQL 5.6 it was necessary to stop MySQL and remove the InnoDB log files manually before restarting MySQL.
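
For reference, the pre-5.6 manual procedure was something like this (a sketch assuming a clean shutdown and the default datadir):

$ sudo service mysql stop
$ sudo mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /tmp
$ # edit innodb_log_file_size in my.cnf, then restart to create new files
$ sudo service mysql start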

The error log shows:

tail -f /mysql/log/error.log
...
2015-03-28 21:51:25 3767 [Warning] InnoDB: Resizing redo log from 2*3072 to 2*65536 pages, LSN=1626017
2015-03-28 21:51:25 3767 [Warning] InnoDB: Starting to delete and rewrite log files.
2015-03-28 21:51:25 3767 [Note] InnoDB: Setting log file ./ib_logfile101 size to 1024 MB
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000
2015-03-28 21:51:28 3767 [Note] InnoDB: Setting log file ./ib_logfile1 size to 1024 MB
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000
2015-03-28 21:51:31 3767 [Note] InnoDB: Renaming log file ./ib_logfile101 to ./ib_logfile0
2015-03-28 21:51:31 3767 [Warning] InnoDB: New log files created, LSN=1626017
...

It was however odd that MySQL indicated it had successfully started while the underlying redo log files were not yet complete and in place, as seen by the following directory listings.

$ sudo service mysql start
......
 * MySQL Community Server 5.6.23 is started
$
$ ls -lh /var/lib/mysql/
total 1.9G
-rw-rw---- 1 mysql mysql   56 Mar 28 19:42 auto.cnf
-rw-rw---- 1 mysql mysql  12M Mar 28 21:51 ibdata1
-rw-rw---- 1 mysql mysql 902M Mar 28 21:51 ib_logfile1
-rw-rw---- 1 mysql mysql 1.0G Mar 28 21:51 ib_logfile101
drwxr-x--- 2 mysql mysql 4.0K Mar 28 19:42 mysql
drwx------ 2 mysql mysql 4.0K Mar 28 19:42 performance_schema
$ ls -lh /var/lib/mysql/
total 2.1G
-rw-rw---- 1 mysql mysql   56 Mar 28 19:42 auto.cnf
-rw-rw---- 1 mysql mysql  12M Mar 28 21:51 ibdata1
-rw-rw---- 1 mysql mysql 1.0G Mar 28 21:51 ib_logfile0
-rw-rw---- 1 mysql mysql 1.0G Mar 28 21:51 ib_logfile1
drwxr-x--- 2 mysql mysql 4.0K Mar 28 19:42 mysql
drwx------ 2 mysql mysql 4.0K Mar 28 19:42 performance_schema

SQL, ANSI Standards, PostgreSQL and MySQL

I have recently been working with the Donors Choose Open Data Set which happens to be in PostgreSQL. Easy enough to install and load the data into PostgreSQL, however as I live and breathe MySQL, let's load the data into MySQL.

And here is where we start our discussion; first, some history.

SQL History

SQL – Structured Query Language – is a well known common language for communicating with Relational Databases (RDBMS). It is not the only language, I might add, having used QUEL many years ago and just mentioned it at a Looker Look and Tell event in New York. It has also been around since the 1970s, making it, along with C, one of the oldest programming languages in general use today.

SQL became an ANSI standard in 1986, and an ISO standard in 1987. The purpose of a standard is to provide commonality when communicating or exchanging information; in our case; a programming language communicating with a RDBMS. There have been several iterations of the standard as functionality and syntax improves. These are commonly referred to as SQL-86, SQL-89, SQL-92, SQL:1999, SQL:2003, SQL:2006, SQL:2008 and SQL:2011.

And so, with SQL being a standard it means that what we can do in PostgreSQL should translate to what we can do in MySQL.

SQL Communication

Both products provide a Command Line Interface (CLI) client tool for SQL communication, mysql for MySQL and psql for PostgreSQL. No surprises there. Both use by default the semicolon ; as a SQL statement terminator, and both CLI tools use \q as a means to quit and exit the tool. Certainly not a standard but great for syntax compatibility.

DDL Specification

Our journey begins with defining tables.

DROP TABLE

Both products' SQL syntax supports DROP TABLE. In fact, both support the DROP TABLE [IF EXISTS] syntax.

DROP TABLE donorschoose_projects;
DROP TABLE IF EXISTS donorschoose_projects;

CREATE TABLE

Both support CREATE TABLE.

Both support defining columns in the typical format <column_name> <datatype>, and both support the NOT NULL attribute. Talking about specific datatypes for columns is a topic on its own and so I discuss this later.

The PostgreSQL syntax included a table option WITHOUT OIDS, which is not valid in MySQL. It is also obsolescent syntax in PostgreSQL 9.3. From the PostgreSQL manual: “This optional clause specifies whether rows of the new table should have OIDs (object identifiers) assigned to them. The default is to have OIDs. Specifying WITHOUT OIDS allows the user to suppress generation of OIDs for rows of a table. This may be worthwhile for large tables … Specifying WITHOUT OIDS also reduces the space required to store the table on disk by 4 bytes per row of the table, thereby improving performance.”

In this example, as this is just for testing, dropping the WITHOUT OIDS syntax creates a mutually compatible syntax.
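
As an illustration (the column names here are hypothetical, not from the dataset), the following definition parses unchanged in both products once WITHOUT OIDS is dropped:

CREATE TABLE demo_compat (
  c1 CHARACTER(32) NOT NULL,
  c2 VARCHAR(100)  NOT NULL
);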

Comments

Both MySQL and PostgreSQL support -- as an inline comment in an SQL statement. No need to strip those out.

ALTER TABLE

Both support ALTER TABLE ADD CONSTRAINT syntax which in our example is used to define the PRIMARY KEY, however while the syntax remains the same, the choice of datatype affects the outcome.

The following works in both products when the datatype is CHARACTER(32). More about CHARACTER() later.

ALTER TABLE donorschoose_projects ADD CONSTRAINT pk_donorschoose_projects PRIMARY KEY(_projectid);

In our example dataset, the primary key is defined with a TEXT datatype, and in MySQL this fails.

ERROR 1170 (42000): BLOB/TEXT column '_projectid' used in key specification without a key length

As further analysis shows the primary key data in this dataset is indeed a 32 byte hexadecimal value, it is changed to CHARACTER(32) to be compatible for this data loading need. This however is an important key difference in any migration process with other data sets.

Side Note

Both products support the definition of the PRIMARY KEY in the CREATE TABLE syntax in two different ways.

CREATE TABLE demo_pk1 (id character(32) NOT NULL PRIMARY KEY);
CREATE TABLE demo_pk2 (id character(32) NOT NULL, PRIMARY KEY(id));

CREATE INDEX

Both use CREATE INDEX syntax, however with our sample dataset this is the first observed difference in syntax in the provided sample SQL statements.

PostgreSQL

CREATE INDEX projects_schoolid ON projects USING btree (_schoolid);

MySQL
The USING <type> qualifier must appear after the index name and before the ON <table>.

CREATE INDEX projects_schoolid USING btree ON projects (_schoolid);

In both products USING btree is optional syntax, so for maximum compatibility removing it provides consistency.
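
Removing it gives a single statement that runs unchanged in both products:

CREATE INDEX projects_schoolid ON projects (_schoolid);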

Data Types

The following data types are defined in the PostgreSQL example data set. Each is discussed to identify a best fit in MySQL.

character

This data type is for a fixed width character field and requires a length attribute. MySQL supports CHARACTER(n) syntax for compatibility, however generally CHAR(n) is the preferred syntax. Indeed, PostgreSQL also supports CHAR(n).

The following showing both variants is valid in both products.

CREATE TABLE demo_character(c1 CHARACTER(1), c2 CHAR(1));

varchar/character varying

While this dataset does not use these datatypes, they are critical in any general discussion of character (aka string) types. This refers to a variable length string.

While character varying is not a valid MySQL syntax, varchar is compatible with both products.

CREATE TABLE demo_varchar(vc1 VARCHAR(10));

text

In PostgreSQL, text is used for strings of unbounded length. The maximum length of a field is 1GB as stated in the FAQ.

In MySQL however TEXT only stores 2^16 bytes (64KB). The use of LONGTEXT is needed to support the full length capacity of PostgreSQL. This stores 2^32 bytes (~4GB).

Of all the complexity of this example dataset, the general use of text will be the most difficult to modify to a more applicable VARCHAR or TEXT datatype when optimizing in MySQL.
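
A minimal illustration of this trade-off (the table names are hypothetical):

CREATE TABLE demo_text(t1 TEXT);          -- valid in both, but only 64KB in MySQL
CREATE TABLE demo_longtext(t1 LONGTEXT);  -- MySQL only, ~4GB capacity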

integer

PostgreSQL uses the integer datatype for a signed 4 byte integer value. MySQL supports the same syntax, however generally prefers the shorter INT syntax. Overall, both products support both.

mysql> CREATE TABLE demo_integer(i1 INTEGER, i2 INT);
Query OK, 0 rows affected (0.11 sec)

mysql> INSERT INTO demo_integer VALUES (1,-1);
Query OK, 1 row affected (0.05 sec)

mysql> SELECT * FROM demo_integer;
+------+------+
| i1   | i2   |
+------+------+
|    1 |   -1 |
+------+------+
1 row in set (0.00 sec)
demo=# CREATE TABLE demo_integer(i1 INTEGER, i2 INT);
CREATE TABLE
demo=# INSERT INTO demo_integer VALUES (1,-1);
INSERT 0 1
demo=# SELECT * FROM demo_integer;
 i1 | i2
----+----
  1 | -1
(1 row)

And just to note the boundary of this data type.

mysql> TRUNCATE TABLE demo_integer;
Query OK, 0 rows affected (0.04 sec)

mysql> INSERT INTO demo_integer VALUES (2147483647, -2147483648);
Query OK, 1 row affected (0.04 sec)

mysql> SELECT * FROM demo_integer;
+------------+-------------+
| i1         | i2          |
+------------+-------------+
| 2147483647 | -2147483648 |
+------------+-------------+
1 row in set (0.00 sec)
demo=# TRUNCATE TABLE demo_integer;
TRUNCATE TABLE

demo=# INSERT INTO demo_integer VALUES (2147483647, -2147483648);
INSERT 0 1
demo=# SELECT * FROM demo_integer;
     i1     |     i2
------------+-------------
 2147483647 | -2147483648
(1 row)

The difference is in out-of-bounds value management, and here MySQL defaults suck. You can read my views at DP#4 The importance of using sql_mode.

demo=# TRUNCATE TABLE demo_integer;
TRUNCATE TABLE
demo=# INSERT INTO demo_integer VALUES (2147483647 + 1, -2147483648 - 1);
ERROR:  integer out of range
demo=# SELECT * FROM demo_integer;
 i1 | i2
----+----
(0 rows)
mysql> TRUNCATE TABLE demo_integer;
Query OK, 0 rows affected (0.04 sec)

mysql> INSERT INTO demo_integer VALUES (2147483647 + 1, -2147483648 - 1);
Query OK, 1 row affected, 2 warnings (0.07 sec)

mysql> SELECT * from demo_integer;
+------------+-------------+
| i1         | i2          |
+------------+-------------+
| 2147483647 | -2147483648 |
+------------+-------------+
1 row in set (0.00 sec)

While not in this dataset, both support the bigint data type. While the PostgreSQL docs indicate bigint is 8 bytes, testing with PostgreSQL 9.3 failed. The likely cause is that the literal expression 2147483647 + 1 is evaluated with integer arithmetic before the INSERT, so the addition itself overflows; casting the operands to bigint should succeed. Something to investigate more later.

demo=# CREATE TABLE demo_bigint(i1 BIGINT);
CREATE TABLE
demo=# INSERT INTO demo_bigint VALUES (2147483647 + 1), (-2147483648 - 1);
ERROR:  integer out of range
mysql> CREATE TABLE demo_bigint(i1 BIGINT);
Query OK, 0 rows affected (0.12 sec)

mysql> INSERT INTO demo_bigint VALUES (2147483647 + 1), (-2147483648 - 1);
Query OK, 2 rows affected (0.04 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> SELECT * from demo_bigint;
+-------------+
| i1          |
+-------------+
|  2147483648 |
| -2147483649 |
+-------------+
2 rows in set (0.01 sec)

And for reference, both products support smallint, a 2-byte integer.

Each product has additional integer data types.

numeric

For a fixed-precision number, PostgreSQL uses numeric but supports decimal. It would not be surprising to know that MySQL uses DECIMAL and for compatibility supports NUMERIC.

This leads to a side-bar discussion on knowing your data types for your product. In a recent interview for a MySQL Engineer, a candidate (with SQL Server experience) provided a code example defining the NUMERIC datatype. I knew it was technically valid MySQL syntax, but had never actually seen it in use. When I asked the candidate what syntax is commonly used for a fixed-precision datatype, they were unable to answer.
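
For completeness, both spellings parse in both products (a trivial sketch with a hypothetical table name):

CREATE TABLE demo_numeric(n1 NUMERIC(10,2), d1 DECIMAL(10,2));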

real/double precision

This dataset does not include these data types, however for reference, PostgreSQL uses real for 4 bytes, and double precision for 8 bytes. MySQL uses float for 4 bytes, and double for 8 bytes. MySQL supports both PostgreSQL syntax options, while PostgreSQL supports float, but not double.

demo=# CREATE TABLE demo_floatingpoint(f1 FLOAT, f2 REAL, d1 DOUBLE, d2 DOUBLE PRECISION);
ERROR:  type "double" does not exist
LINE 1: ...TE TABLE demo_floatingpoint(f1 FLOAT, f2 REAL, d1 DOUBLE, d2...

demo=# CREATE TABLE demo_floatingpoint(f1 FLOAT, f2 REAL, d2 DOUBLE PRECISION);
CREATE TABLE
mysql> CREATE TABLE demo_floatingpoint(f1 FLOAT, f2 REAL, d1 DOUBLE, d2 DOUBLE PRECISION);
Query OK, 0 rows affected (0.07 sec)

date

Both PostgreSQL and MySQL use the date data type.

timestamp

Both PostgreSQL and MySQL use the timestamp data type to store date/time values. However, there is a difference in both precision and implementation here.

In PostgreSQL, timestamp supports dates before the epoch, while in MySQL TIMESTAMP does not; MySQL requires the DATETIME datatype for that range.

Using PostgreSQL timestamp and MySQL DATETIME, both support microsecond precision. MySQL however only started to provide this in MySQL 5.6.

A key difference in column definition is the PostgreSQL timestamp without time zone syntax, used in our example dataset. Analysis of data loading will determine the impact here.
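
Pending that analysis, a plausible mapping (stated as an assumption for illustration, not a verified migration rule) is:

-- PostgreSQL source definition
CREATE TABLE demo_ts(created_at TIMESTAMP WITHOUT TIME ZONE);
-- Possible MySQL 5.6 equivalent, preserving pre-epoch dates and microseconds
CREATE TABLE demo_ts(created_at DATETIME(6));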

boolean

SQL:1999 calls for a Boolean datatype, and both PostgreSQL and MySQL support defining a column as BOOLEAN. MySQL however implicitly converts this to a SIGNED TINYINT, and any future DDL viewing shows this reference.

When referencing a boolean, in PostgreSQL WHERE column_name = TRUE or WHERE column_name = 't' retrieves a true value; in MySQL it is WHERE column_name = TRUE or WHERE column_name = 1. When you SELECT a boolean, in PostgreSQL the answer is ‘t’, in MySQL the answer is 1.

demo=# CREATE TABLE demo_boolean (b1 boolean);
CREATE TABLE
demo=# INSERT INTO demo_boolean VALUES (TRUE),(FALSE);
INSERT 0 2
demo=# SELECT * FROM demo_boolean;
 b1
----
 t
 f
(2 rows)
mysql> CREATE TABLE demo_boolean (b1 boolean);
Query OK, 0 rows affected (0.11 sec)

mysql> INSERT INTO demo_boolean VALUES (TRUE),(FALSE);
Query OK, 2 rows affected (0.03 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> SELECT * FROM demo_boolean;
+------+
| b1   |
+------+
|    1 |
|    0 |
+------+
2 rows in set (0.00 sec)

Other Data Types

Only the data types in this example have been reviewed.

Other syntax

In our sample SQL script, there is psql specific syntax to show a debugging line with \qecho .... For compatibility these are removed.

The loading of data with the \COPY <table_name> FROM PSTDIN WITH CSV HEADER is PostgreSQL specific and so loading the data is a future topic.

Finally, the VACUUM ANALYZE <table_name> command is also PostgreSQL specific and removed. This is effectively a means of optimizing and analyzing the table.

Both PostgreSQL and MySQL have an ANALYZE command, however the syntax is different, with the required TABLE keyword in MySQL.

PostgreSQL

ANALYZE donorschoose_projects;

ANALYZE TABLE donorschoose_projects;
ERROR:  syntax error at or near "table"

MySQL

ANALYZE donorschoose_projects;
ERROR 1064 (42000): You have an error in your SQL syntax;...

ANALYZE TABLE donorschoose_projects;

MySQL has an OPTIMIZE TABLE syntax; however, while technically valid, this is not supported by the default storage engine InnoDB.

mysql> OPTIMIZE TABLE donorschoose_projects;
+----------------------------+----------+----------+-------------------------------------------------------------------+
| Table                      | Op       | Msg_type | Msg_text                                                          |
+----------------------------+----------+----------+-------------------------------------------------------------------+
| test.donorschoose_projects | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| test.donorschoose_projects | optimize | status   | OK                                                                |
+----------------------------+----------+----------+-------------------------------------------------------------------+
2 rows in set (0.32 sec)

Loops in shell scripting

If you are a die hard Bourne shell (/bin/sh) scripter, it can be a challenge not to be enticed by the syntax niceties of the Bourne-Again Shell (/bin/bash).

One example is the {..} brace expansion syntax

#!/bin/bash
for I in {0..5}
do
   echo $I
done
0
1
2
3
4
5

This syntax is not valid in /bin/sh on Linux.

#!/bin/sh
for I in {0..5}
do
   echo $I
done
{0..5}

NOTE: However it apparently does work on Mac OS X, where /bin/sh is actually bash rather than a strict Bourne shell.

/bin/sh gives you a for loop but it requires the full list of iterated values instead of a range.

#!/bin/sh

for I in 0 1 2 3 4 5
do
  echo $I
done

Note: Passing a string does not work by default.

#!/bin/sh

for I in "0 1 2 3 4 5"
do
  echo $I
done

The approach to produce the same result requires some format management.

#!/bin/sh

OIFS=$IFS
IFS=" "
for I in `echo "0 1 2 3 4 5"`
do
  echo $I
done
IFS=$OIFS

You can use while

#!/bin/sh

I=0
while [ $I -le 5 ]
do 
  echo $I
  I=`expr $I + 1`
done

You can use one of several other shell commands, in this example awk

#!/bin/sh

for I in `awk 'BEGIN{for (i=0;i<=5;i++) print i}'`
do 
  echo $I
done

Or, the command specifically designed for sequences of numbers, seq

#!/bin/sh

for I in `seq 0 5`
do 
  echo $I
done

And beyond these few examples, there are more possibilities to achieve close to feature parity with the /bin/bash syntax.
An example found on BSD is jot - 0 5. This is not available on Ubuntu by default but is installed with the athena-jot package. However the syntax then differs for correct usage.

Understanding when EXPLAIN is not using an index as intended

When reading a MySQL Query Execution Plan (QEP) produced by the EXPLAIN command, generally one of the first observations is to validate an index is being used per table (i.e. per row of output). In MySQL, this is observed with the key column.

In the following two simple single table examples we see the use of the PRIMARY key. To the untrained eye this may lead one to assume that the right index is being used.

Example 1

+----+-------------+----------------+-------+---------------+---------+---------+------+------+-------------+
| id | select_type | table          | type  | possible_keys | key     | key_len | ref  | rows | Extra       |
+----+-------------+----------------+-------+---------------+---------+---------+------+------+-------------+
|  1 | SIMPLE      | txxxxxxxxxxxx  | index | NULL          | PRIMARY | 4       | NULL |  100 | Using where |
+----+-------------+----------------+-------+---------------+---------+---------+------+------+-------------+

Example 2

+----+-------------+------------+-------+---------------+---------+---------+-------+------+-------+
| id | select_type | table      | type  | possible_keys | key     | key_len | ref   | rows | Extra |
+----+-------------+------------+-------+---------------+---------+---------+-------+------+-------+
|  1 | SIMPLE      | txxxxxxxxx | const | PRIMARY       | PRIMARY | 4       | const |    1 |       |
+----+-------------+------------+-------+---------------+---------+---------+-------+------+-------+

However this is not entirely true. While an index is used, it is not used as intended. A MySQL index can be used during three different stages of query execution: the JOIN/WHERE, the GROUP BY and the ORDER BY. This type of use can be seen by looking at the Index Hint Syntax, which enables you to suggest or force an index at these three stages.
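
For example, the hint syntax distinguishes these stages explicitly (the table and index names here are placeholders):

SELECT * FROM t1 USE INDEX FOR ORDER BY (PRIMARY) ORDER BY id;
SELECT * FROM t1 FORCE INDEX FOR JOIN (idx_col1) WHERE col1 = 1;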

Without looking at the SQL statements for the above plans, the first giveaway is the possible_keys column, which indicates the indexes considered when evaluating the JOIN/WHERE of your SQL statement. Its absence in the first query informs you that no index was considered there. The use of the PRIMARY key in the first query is the result of the ORDER BY syntax. If we remove this ORDER BY we see the true possible execution.

+----+-------------+----------------+------+---------------+------+---------+------+--------+-------------+
| id | select_type | table          | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
+----+-------------+----------------+------+---------------+------+---------+------+--------+-------------+
|  1 | SIMPLE      | txxxxxxxxxxxx  | ALL  | NULL          | NULL | NULL    | NULL | 262827 | Using where |
+----+-------------+----------------+------+---------------+------+---------+------+--------+-------------+

The first query also includes a LIMIT and hence gives the perception that only a small number of rows are processed. In relational theory you would surmise this query is not efficient. Unfortunately, as this example was on MySQL 5.5, it was not possible to use the optimizer_trace functionality of MySQL 5.6 to delve deeper into what decisions MySQL would take.

This example is not to say that you should add an index. This is one possible outcome in optimizing the SQL statement, however there are advantages and disadvantages. Each SQL statement should be reviewed in conjunction with its usage, the overall table structure(s) and all other SQL statements that utilize the applicable tables.

What is testing?

In software development this is a simple question. What is [the purpose of] testing? If asked to give a one sentence answer what would you say? I have asked this simple question of attendees at many presentations, and also to software developers I have worked with or consulted to.

The most common answer is: “Testing is about making sure the software works, that the function you're testing does what it should, for example saves the information you entered”.

Unfortunately this is not the purpose of testing, and this attitude leads to what I generally term as poor quality software. “Testing is about trying to break your product any way possible, all the time.”

With this clarification in understanding of a basic and necessary software engineering principle, the attitude towards software development and the entire focus and mindset of engineering and quality assurance can change for the better.

Another very simple example which I often ask when consulting: What does your website look like when it’s down? Again, the general answer is often vague and/or incomplete. How do you know when your website is down? I have heard the response “The users will let you know”. You may laugh, but it is certainly not funny. Show me your website in a down state. Show me your website in a degraded state. When the answer is unclear, or at a recent employer simply the same vague response, there has been little thought put into producing a quality product through a testing process that is intent on breaking your software.

What procedures do you follow when receiving alerts about errors? What procedures do you put in place to ensure they do not happen again? Again, one has to be disappointed when the response is, “I will set up an email alert to the team for this type of error?” This reactive response is not addressing the problem, only acknowledging the existence of a problem. What is needed is being proactive. Was a bug raised? Can the problem be easily reproduced? How was the problem fixed the first time? Can this be corrected in the code? Can the interim resolution be automated?

When there is a negative user experience from any type of failure or error another important feedback loop is the post-mortem to review the when, why, how and who of the situation and to create a plan to ensure this does not happen again.

Testing needs to be baked in to everything that is done, and practice makes for a more perfect outcome. In a high volume environment it is critical to have a simulated environment where you can benchmark the performance of any new release for regressions. A well defined load testing environment can be used to review experimental branches of possible performance improvements. It is also where you can determine the bottleneck and breaking point as you increase load 2X, 5X, 10X. It is impossible to be proactive when your system can fail at 2X load, as the engineering resources needed to implement a solution will not happen in time.

Disaster is inevitable. It will happen, whether small or large. Hardware and software inherently fails. How it fails and what is done to mitigate this to ensure the best possible consistent and rewarding consumer experience is only possible by consistently practicing to break your software at all stages in the development and deployment lifecycle.

MySQL Admin 101 for System Admins – key_buffer_size

As discussed in my presentation to NYLUG, I wanted to provide system administrators with some really quick analysis and performance fixes, even with limited knowledge of MySQL.

One of the most important things with MySQL is to tune memory properly. This can be complex as there are global buffers, and per session buffers, memory tables, and differences between storage engines. Even this first tip has conditions.

Configuration of MySQL can be found in the my.cnf file (How can I find that). Some variables are dynamic and some are not, and these can change between versions. Check out The most important MySQL Reference Manual page that everybody should bookmark for reference.

Here is a great example for the key_buffer_size found in the [mysqld] section of my.cnf. This is also historically known in legacy config files as key_buffer. This older format has been removed in 5.7. This is a global buffer that is responsible for caching the MyISAM Index data only. Two important things here, this is for the MyISAM storage engine only, and it’s only for indexes. MyISAM data relies on the OS file system cache.

We can confirm the current value in a running MySQL instance with:

mysql> SELECT LOWER(variable_name) as variable, variable_value/1024/1024 as MB 
       FROM   information_schema.global_variables 
       WHERE  variable_name = 'key_buffer_size';
+-----------------+------+
| variable        | MB   |
+-----------------+------+
| key_buffer_size |   16 |
+-----------------+------+
1 row in set (0.00 sec)

The following query will give you the current size of MyISAM indexes stored on disk in your instance.

mysql> SELECT FORMAT(SUM(data_length)/1024/1024,2) as data_mb, 
              FORMAT(SUM(index_length)/1024/1024,2) as index_mb 
       FROM   information_schema.tables 
       WHERE  engine='MyISAM';
+--------------+--------------+
| data_mb      | index_mb     |
+--------------+--------------+
| 504.01       | 114.48       |
+--------------+--------------+
1 row in set (2.36 sec)

NOTE: This is all MyISAM indexes in all schemas. At this time we have not determined what is “hot” data, “cold” data, backup tables etc. It’s a crude calculation, but in the absence of more information, seeing that MyISAM is being used and the buffer is either not configured (the default is generally 8MB) or configured poorly as in this example shows that changing this value is an important step to consider. However, the first part of solving the problem is identifying the problem.

Tuning the buffer is hard. You have to take into consideration the amount of system RAM, whether the server is dedicated to MySQL only or shared, for example with a web container such as Apache, whether other storage engines are used (for example InnoDB) that require their own buffer sizes, and whether there are multiple MySQL instances on the server.

For this example of tuning, we are assuming a dedicated MySQL server and no other storage engines used.

Determining the system RAM and current usage can be found with:

$ free -m
             total       used       free     shared    buffers     cached
Mem:          3955       3846        109          0        424       1891
-/+ buffers/cache:       1529       2426
Swap:         1027          0       1027

With this information, we see a system with 4G of RAM (plenty of available RAM), a key_buffer_size of 16M, and a current maximum size of indexes of 114M. For this most simple case it’s obvious we can increase this buffer, to say 128M, and not affect overall system RAM usage, but improve MyISAM performance.
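
key_buffer_size is a dynamic variable, so a sketch of applying this change immediately would be:

mysql> SET GLOBAL key_buffer_size = 128 * 1024 * 1024;

and in the [mysqld] section of my.cnf so the change persists across restarts:

key_buffer_size = 128M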

Here are the same numbers for a different system to give you a comparison of what you may uncover.

mysql> SELECT LOWER(variable_name) as variable, variable_value/1024/1024 as MB
    ->        FROM   information_schema.global_variables
    ->        WHERE  variable_name = 'key_buffer_size';
+-----------------+------+
| variable        | MB   |
+-----------------+------+
| key_buffer_size |  354 |
+-----------------+------+
1 row in set (0.00 sec)

mysql> SELECT FORMAT(SUM(data_length)/1024/1024,2) as data_mb,
    ->               FORMAT(SUM(index_length)/1024/1024,2) as index_mb
    ->        FROM   information_schema.tables
    ->        WHERE  engine='MyISAM';
+------------+------------+
| data_mb    | index_mb   |
+------------+------------+
| 150,073.57 | 122,022.97 |
+------------+------------+
1 row in set (3.71 sec)

As I follow up in my next post on the innodb_buffer_pool_size, I will further clarify the complexity of MySQL memory tuning, and show that this information gathering is only a guide, and first step to a more complex analysis and tuning operation.

Improving performance – A full stack problem

Improving the performance of a web system involves knowledge of how the entire technology stack operates and interacts. There are many simple and common tips that can provide immediate improvements for a website. Some examples include:

  • Using a CDN for assets
  • Compressing content
  • Making fewer requests (web, cache, database)
  • Asynchronous management
  • Optimizing your SQL statements
  • Have more memory
  • Using SSD’s for database servers
  • Updating your software versions
  • Adding more servers
  • Configuring your software correctly
  • … And the general checklist goes on

Understanding where to invest your energy first, knowing what the return on investment can be, and most importantly the measurement and verification of every change made is the difference between blind trial and error and a solid plan and process. Here is a great example for the varied range of outcome to the point about “Updating your software versions”.

On one project the MySQL database was reaching saturation, both the maximum number of database connections and the maximum number of concurrent InnoDB transactions. The first is a configurable limit, the second was a hard limit of the very old version of the software. Changing the first configurable limit can have dire consequences; there is a tipping point, however that is a different discussion. A simple software upgrade of MySQL, which had many possible improvement benefits, combined with corrected configuration specific to this new version, made an immediate improvement. The result moved a production system from crashing consistently under load to at least barely surviving under load. This is an important first step in improving the customer experience.

In the PHP application stack for the same project the upgrading of several commonly used frameworks including Slim and Twig by the engineering department seemed like a good idea. However applicable load testing and profiling (after it was deployed, yet another discussion point) found the impact was a 30-40% increase in response time for the application layer. This made the system worse, and cancelled out prior work to improve the system.

How to tune a system to support 100x load increase with no impact in performance takes knowledge, experience, planning, testing and verification.

The following summarized graphs; using New Relic monitoring as a means of representative comparison; shows three snapshots of the average response time during various stages of full stack tuning and optimization. This is a very simplified graphical view that is supported by more detailed instrumentation using different products, specifically with much finer granularity of hundreds of metrics.

These graphs represent the work undertaken for a system under peak load showing an average 2,000ms response time, to the same workload under 50ms average response time. That is a 40x improvement!

If your organization can benefit from these types of improvements feel free to Contact Me.

There are numerous steps to achieving this. A few highlights to show the scope of work you need to consider includes:

  • Knowing server CPU saturation versus single core CPU saturation.
  • Network latency detection and mitigation.
  • What are the virtualization mode options of virtual cloud instances?
  • Knowing the network stack benefits of different host operating systems.
  • Simulating production load is much harder than it sounds.
  • Profiling, Profiling, Profiling.
  • Instrumentation can be misleading. Knowing how different monitoring works with sampling and averaging.
  • Tuning the stack is an iterative process.
  • The single greatest knowledge is to know your code, your libraries, your dependencies and how to optimize each specific area of your technology stack.
  • Not everything works, some expected wins provided no overall or observed benefits.
  • There is always more that can be done. Knowing when to pause and prioritize process optimizations over system optimizations.

These graphs show the improvement work in the application tier (1500ms to 35ms to 25ms) and the database tier (500ms to 125ms to 10ms) at various stages. These graphs do not show for example improvements made in DNS resolution, different CDNs, managing static content, different types and ways of compression, removing unwanted software components and configuration, standardized and consistent stack deployments using chef, and even a reduction in overall servers. All of these successes contributed to a better and more consistent user experience.

40x performance improvements in LAMP stack

Correctly setting your mysql prompt using sudo

If you run multiple MySQL environments on multiple servers it’s a good habit to set your MySQL prompt to double check which server you are on.
However, using the MYSQL_PS1 environment variable I found this does not work under sudo (the normal way people run sudo).

I.e., the following invocations work.

$ mysql
$ sudo su - -c mysql
$ sudo su - ; mysql

but the following does not.

$ sudo mysql

The trick is actually to ensure via /etc/sudoers that you inherit the MYSQL_PS1 environment variable.

echo "export MYSQL_PS1="`hostname` [d]> "" | sudo tee /etc/profile.d/mysql.sh
echo 'Defaults    env_keep += "MYSQL_PS1"' | sudo tee /tmp/mysql
sudo chmod 400 /tmp/mysql
sudo mv /tmp/mysql /etc/sudoers.d
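
After logging in again (so /etc/profile.d/mysql.sh sets MYSQL_PS1 in your session), sudo now preserves the variable and the prompt shows the host and current database; something like this, where myhost is a hypothetical hostname:

$ sudo mysql
myhost [(none)]>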

What is FTS_BEING_DELETED.ibd

On a MySQL 5.6 database using innodb_file_per_table I currently have the following individual tablespace file.

schema/FTS_00000000000001bb_BEING_DELETED.ibd

The schema is all InnoDB tables, and there ARE NO full text indexes. I cannot comment on whether a developer has tried to create one previously.
I am none the wiser in explaining the ongoing use of these files, or whether they can or should be deleted.

On closer inspection there are in fact a number of FTS files.

$ ls -al FTS*
-rw-r----- 1 mysql mysql 98304 Jan 29 16:21 FTS_00000000000001bb_BEING_DELETED_CACHE.ibd
-rw-r----- 1 mysql mysql 98304 Jan 29 16:20 FTS_00000000000001bb_BEING_DELETED.ibd
-rw-r----- 1 mysql mysql 98304 Jan 29 16:26 FTS_00000000000001bb_CONFIG.ibd
-rw-r----- 1 mysql mysql 98304 Jan 29 16:21 FTS_00000000000001bb_DELETED_CACHE.ibd
-rw-r----- 1 mysql mysql 98304 Jan 29 16:00 FTS_00000000000001bb_DELETED.ibd
-rw-r----- 1 mysql mysql 98304 Jan 29 16:20 FTS_00000000000001c7_BEING_DELETED_CACHE.ibd
-rw-r----- 1 mysql mysql 98304 Jan 29 16:26 FTS_00000000000001c7_BEING_DELETED.ibd
-rw-r----- 1 mysql mysql 98304 Jan 29 16:21 FTS_00000000000001c7_CONFIG.ibd
-rw-r----- 1 mysql mysql 98304 Jan 29 16:20 FTS_00000000000001c7_DELETED_CACHE.ibd
-rw-r----- 1 mysql mysql 98304 Jan 29 16:20 FTS_00000000000001c7_DELETED.ibd

Any MySQL gurus with knowledge to share, please do, for the benefit of others that search the Internet at a later time.

Related articles included Overview and Getting Started with InnoDB FTS and Difference between InnoDB FTS and MyISAM FTS but do not mention file specifics.

The article InnoDB Full-text Search in MySQL 5.6 (part 1) provides more insight: these files remain even if a full text index was created and has since been removed. It is not clear from the filename which tables these files relate to.
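
One possible way to identify the parent table, assuming the hexadecimal portion of the filename (here 1bb) is the InnoDB table id, is to query the InnoDB data dictionary tables available in MySQL 5.6:

mysql> SELECT table_id, name
    ->   FROM information_schema.INNODB_SYS_TABLES
    ->  WHERE table_id = CONV('1bb', 16, 10);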

Good Test Data

Over the years you collect datasets you have created for various types of testing, seeding databases etc. I have always thought one needs to better manage this for future re-use. Recently I wanted to do some “Big Data” playing, and again the question of what datasets I could use led me to review the past collated list at Seeking public data for benchmarks.

The types of things I wanted to do led me to realize a lot of content is “public domain” and Project Gutenberg is just one great source of text in multiple languages. This was just one aspect of my wish list, but text based data is used in blogs, comments, articles, microblogs etc, and multiple languages were important for some text analysis.

With a bit of thinking about the building blocks, I created Good Test Data. A way for me to have core data, IP’s, people’s names, User Agents strings, text for articles, comments and a lot more. And importantly the ability to generate large randomized amounts of this data quickly and easily.

Now I can build a list of 1 million random names with unique usernames and emails with ease. I can generate millions of varying articles, from a short microblog, a comment, a blog to a multi page article. Then be able to produce HTML/PDF/PNG versions giving me file attachments. I’ve been playing more with image generation, creating banner images with varying text, and now I’m generating MP4 video to simulate the various standard sizes for advertising and just to see what people need.

I’m not sure of the potential use and benefit for others, and that wasn’t the primary goal, however I would like to know how these building blocks could be used. The data is relatively agnostic, being able to be easily loaded into MySQL tables. Depending on demand, being able to create pre-configured open source product data for e-commerce products, CRM or blogging are all possible options.

The GRANT/REVOKE dilemma

It is common practice to grant your application the privileges of “GRANT SELECT, INSERT, UPDATE, DELETE ON yourdb.* TO youruser@yourhost”.

But what if you want to ensure you cannot DELETE data from just one table?

Ideally I want to be able to “REVOKE DELETE ON yourdb.important_table FROM youruser@yourhost”. You cannot currently do this with the MySQL privilege system.

If your schema has 100 tables and you want to remove DELETE from one, you have to define DELETE for the 99 others, and for each new table you need to remember to also modify user privileges.
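
A common workaround is to generate the per-table statements from the data dictionary (a sketch using the schema, table and account names from the example above):

SELECT CONCAT('GRANT SELECT, INSERT, UPDATE, DELETE ON yourdb.',
              table_name, ' TO youruser@yourhost;')
FROM   information_schema.tables
WHERE  table_schema = 'yourdb'
AND    table_name  <> 'important_table';

followed by a reduced grant for the protected table:

GRANT SELECT, INSERT, UPDATE ON yourdb.important_table TO youruser@yourhost;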

Unexplained halts using mysql command line client

I recently came across an issue trying to connect to a MySQL server using the mysql client. It appeared as though the connection was hanging.

A subsequent connection using the -A option highlighted the problem with the previous connection stuck in the state “Waiting for table metadata lock”.

mysql> SHOW PROCESSLIST\G
*************************** 1. row ***************************
     Id: 37
   User: root
   Host: localhost
     db: tmp
Command: Query
   Time: 90
  State: preparing
   Info: create table missing as select id from AK where id not in (select id ..
*************************** 2. row ***************************
     Id: 38
   User: root
   Host: localhost
     db: tmp
Command: Field List
   Time: 50
  State: Waiting for table metadata lock
   Info:
*************************** 3. row ***************************
     Id: 39
   User: root
   Host: localhost
     db: tmp
Command: Query
   Time: 0
  State: init
   Info: show processlist

In this example you can see a long running CREATE TABLE … SELECT statement as the cause of the problem. The -A or --no-auto-rehash argument disables tab completion in the mysql command line client; this avoids the Field List request for table and column metadata, which is what was blocked waiting on the metadata lock.
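
For reference, the diagnostic connection simply adds the option (the connection details are placeholders):

$ mysql -A -h hostname -u user -p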

Giving thanks to MySQL authors challenge

Next week the US celebrates Thanksgiving Day. For those that are American or live here, this is a significant event. Three different experiences recently have led me to write this request for ALL MySQL community members to give thanks to those that have contributed to the MySQL ecosystem. I have made a commitment to myself, and I would like to challenge others, to write one book review per week in December; that’s 4 book reviews for the MySQL books on my bookshelf that have made an impact in some way. I ask others to give it a go too.

It only takes a few minutes to pen a comment on Amazon, or a publishers site, but to authors it means so much more. I can only speak for myself, but any comment; good, bad or ugly; helps to know you are out there and you took the time to acknowledge somebody’s work of art (in this case a publication).

I only have to look at my bookshelf and I find the following MySQL books (in the order they are currently placed, which is no specific order): MySQL Crash Course by Ben Forta, MySQL Clustering by Alex Davies and Harrison Fisk, MySQL Cookbook by Paul DuBois, MySQL Stored Procedure Programming by Guy Harrison, Developing Web Applications with Apache, MySQL, memcached, and Perl by Patrick Galbraith, Pro MySQL (The Expert’s Voice in Open Source) by Mike Kruckenberg and Jay Pipes, MySQL Administrator’s Bible by Sheeri Cabral, MySQL (Third Edition) by Paul DuBois, High Performance MySQL: Optimization, Backups, Replication, and More (Second Edition) by Baron Schwartz, Peter Zaitsev, Vadim Tkachenko, Jeremy Zawodny, Arjen Lentz and Derek Balling, High Performance MySQL: Optimization, Backups, and Replication (Third Edition) by Baron Schwartz, Peter Zaitsev and Vadim Tkachenko, MySQL High Availability: Tools for Building Robust Data Centers by Charles Bell, Mats Kindahl and Lars Thalmann, Expert PHP and MySQL by Andrew Curioso, Ronald Bradford and Patrick Galbraith, Effective MySQL Backup and Recovery (Oracle Press) by Ronald Bradford, Effective MySQL Replication Techniques in Depth by Ronald Bradford and Chris Schneider, Effective MySQL Optimizing SQL Statements (Oracle Press) by Ronald Bradford, Database in Depth: Relational Theory for Practitioners by Chris Date, Pentaho Solutions: Business Intelligence and Data Warehousing with Pentaho and MySQL by Roland Bouman and Jos van Dongen, and MySQL Administrator’s Guide and Language Reference (2nd Edition) and MySQL 5.0 Certification Study Guide by Paul DuBois, Stefan Hinz and Carsten Pedersen.

And they are just the physical books, I have several PDF and Kindle only versions, and other MySQL Books I know about and have not purchased.

I would also like to give a special shout out to Sheeri Cabral and MySQL Marinate. A program using the O’Reilly Learning MySQL book to help anybody that wants to learn. At my most recent Effective MySQL Meetup a beginner question was asked by an audience member, and it was another audience member (not even myself) that piped up and recommended MySQL Marinate.

Finally, I learned MySQL by reading the online reference manual from cover to cover, and I did it again several years later, and probably should do it again some day. I am unable to find the names of the manual’s authors, present or past, nor the right place to leave a comment, but thanks to those I do know about: Jon Stephens, Mike Hillyer, Stefan Hinz, MC Brown and Paul DuBois.

Kick all the tires before you buy the product

Translating theory to practice is never easy. Morgan gives us the right steps in a play environment to move from the native dev.mysql.com MySQL RPMs to the new MySQL yum repository. I thought I would try it out.

1. Confirming existing packages

A necessary step; however, I immediately have more dependencies, including Perl DBD (used by several utilities, including MHA).

$ sudo su -
$ rpm -qa | grep -i mysql
MySQL-devel-5.6.13-1.el6.x86_64
MySQL-test-5.6.13-1.el6.x86_64
MySQL-shared-compat-5.6.13-1.el6.x86_64
MySQL-server-5.6.13-1.el6.x86_64
perl-DBD-MySQL-4.013-3.el6.x86_64
MySQL-client-5.6.13-1.el6.x86_64
MySQL-embedded-5.6.13-1.el6.x86_64
MySQL-shared-5.6.13-1.el6.x86_64
mha4mysql-node-0.54-1.el5.noarch

A further trap exists in more complex real-world environments; in my case, the installation of Percona XtraBackup. This will become apparent in the next step, so we need to check for these packages too.

$ rpm -qa | grep -i percona
percona-release-0.0-1.x86_64
percona-xtrabackup-2.1.4-656.rhel6.x86_64

2. Update your environment

There is a mixed blessing here. Assuming you keep your machines current (and you should), the impact should be minimal, but buyer beware. In my case the update wanted to update java-1.7.0-openjdk. That should not be a big deal, but what other products are impacted by updates? Java on this system, for example, is used by the New Relic MySQL monitoring. What if there were some important application component that became unravelled by an update?
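
As a hedge (not part of the original steps), yum can hold specific packages back during an update; a sketch, assuming you want to leave Java alone:

$ yum update --exclude="java-1.7.0-openjdk*"

To make this persistent, add exclude=java-1.7.0-openjdk* to the [main] section of /etc/yum.conf.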

Doing a blanket update on a reasonably current CentOS 6.4 system broke the transaction.

$ yum update
...
Installing:
 Percona-SQL-shared-compat                      x86_64                5.0.92-b23.89.rhel6                      percona                1.1 M
     replacing  MySQL-shared.x86_64 5.6.13-1.el6
 Percona-Server-shared-compat                   x86_64                5.5.34-rel32.0.591.rhel6                 percona                3.4 M
     replacing  MySQL-shared.x86_64 5.6.13-1.el6
 Percona-Server-shared-compat-51                x86_64                5.1.72-rel14.10.597.rhel6                percona                2.4 M
     replacing  MySQL-shared.x86_64 5.6.13-1.el6
 kernel                                         x86_64                2.6.32-358.23.2.el6                      updates                 26 M
Transaction Summary
============================================================================================================================================
Install       4 Package(s)
Upgrade      26 Package(s)

...
Transaction Check Error:
  file /usr/lib64/libmysqlclient.so.12.0.0 from install of Percona-SQL-shared-compat-5.0.92-b23.89.rhel6.x86_64 conflicts with file from package MySQL-shared-compat-5.6.13-1.el6.x86_64
  file /usr/lib64/libmysqlclient.so.14.0.0 from install of Percona-SQL-shared-compat-5.0.92-b23.89.rhel6.x86_64 conflicts with file from package MySQL-shared-compat-5.6.13-1.el6.x86_64
  file /usr/lib64/libmysqlclient_r.so.12.0.0 from install of Percona-SQL-shared-compat-5.0.92-b23.89.rhel6.x86_64 conflicts with file from package MySQL-shared-compat-5.6.13-1.el6.x86_64
  file /usr/lib64/libmysqlclient_r.so.14.0.0 from install of Percona-SQL-shared-compat-5.0.92-b23.89.rhel6.x86_64 conflicts with file from package MySQL-shared-compat-5.6.13-1.el6.x86_64
  file /usr/lib64/libmysqlclient.so.16.0.0 from install of Percona-Server-shared-compat-5.5.34-rel32.0.591.rhel6.x86_64 conflicts with file from package MySQL-shared-compat-5.6.13-1.el6.x86_64
  file /usr/lib64/libmysqlclient_r.so.16.0.0 from install of Percona-Server-shared-compat-5.5.34-rel32.0.591.rhel6.x86_64 conflicts with file from package MySQL-shared-compat-5.6.13-1.el6.x86_64
  file /usr/lib64/libmysqlclient.so.12.0.0 conflicts between attempted installs of Percona-Server-shared-compat-5.5.34-rel32.0.591.rhel6.x86_64 and Percona-SQL-shared-compat-5.0.92-b23.89.rhel6.x86_64
  file /usr/lib64/libmysqlclient.so.14.0.0 conflicts between attempted installs of Percona-Server-shared-compat-5.5.34-rel32.0.591.rhel6.x86_64 and Percona-SQL-shared-compat-5.0.92-b23.89.rhel6.x86_64
  file /usr/lib64/libmysqlclient_r.so.12.0.0 conflicts between attempted installs of Percona-Server-shared-compat-5.5.34-rel32.0.591.rhel6.x86_64 and Percona-SQL-shared-compat-5.0.92-b23.89.rhel6.x86_64
  file /usr/lib64/libmysqlclient_r.so.14.0.0 conflicts between attempted installs of Percona-Server-shared-compat-5.5.34-rel32.0.591.rhel6.x86_64 and Percona-SQL-shared-compat-5.0.92-b23.89.rhel6.x86_64
...

The problem here is an unhealthy relationship between the repositories for Percona XtraBackup. I don’t know the reason, but this is the curse of dependencies that makes real-world upgrades more complex.
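
One way to sidestep a conflicting repository while updating the rest of the system (a workaround sketch, not something I tested for this client) is yum’s --disablerepo option:

$ yum --disablerepo=percona update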

Updating just the MySQL server package is rather useless, as it was installed directly from an RPM and is not provided by any configured repository.

$ yum update MySQL-server
Loaded plugins: fastestmirror, presto
Loading mirror speeds from cached hostfile
 * base: mirror.cogentco.com
 * extras: mirror.cogentco.com
 * updates: mirror.trouble-free.net
Setting up Update Process
No Packages marked for Update

Removing existing MySQL

Stopping MySQL is easy. A decision about whether to back up important files, such as the configuration file or the data, should be made here. The reality for a production system is to assume something unexpected will happen, even if you have tested it. Being able to go back at any step of an upgrade is actually more important than the step itself. So the following IS NOT ENOUGH on a production system.

service mysql stop
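
A minimal sketch of what “enough” might look like, assuming a configuration file at /etc/my.cnf and a datadir of /var/lib/mysql (adjust both to your environment):

$ service mysql stop
$ cp -a /etc/my.cnf /etc/my.cnf.$(date +%Y%m%d)
$ tar czf /backup/mysql-data-$(date +%Y%m%d).tar.gz /var/lib/mysql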

This old dinosaur learned a new trick in the yum interactive shell. Cool!

Note: I am doing a point-release upgrade here, from 5.6.13 to 5.6.14.

$ yum shell
> remove MySQL-client-5.6.13-1.el6 MySQL-embedded-5.6.13-1.el6 MySQL-server-5.6.13-1.el6 MySQL-shared-5.6.13-1.el6 MySQL-devel-5.6.13-1.el6 MySQL-test-5.6.13-1.el6 MySQL-shared-compat-5.6.13-1.el6
> install mysql-community-server
> run

Now the fun begins. You have to read carefully what is happening; check out the “Removing for dependencies” section.

============================================================================================================================================
 Package                              Arch                 Version                            Repository                               Size
============================================================================================================================================
Installing:
 mysql-community-server               x86_64               5.6.14-3.el6                       mysql-community                          51 M
Removing:
 MySQL-client                         x86_64               5.6.13-1.el6                       installed                                81 M
 MySQL-devel                          x86_64               5.6.13-1.el6                       installed                                19 M
 MySQL-embedded                       x86_64               5.6.13-1.el6                       installed                               431 M
 MySQL-server                         x86_64               5.6.13-1.el6                       installed                               235 M
 MySQL-shared                         x86_64               5.6.13-1.el6                       installed                               8.4 M
 MySQL-shared-compat                  x86_64               5.6.13-1.el6                       installed                                11 M
 MySQL-test                           x86_64               5.6.13-1.el6                       installed                               318 M
Installing for dependencies:
 mysql-community-client               x86_64               5.6.14-3.el6                       mysql-community                          18 M
 mysql-community-common               x86_64               5.6.14-3.el6                       mysql-community                         296 k
 mysql-community-libs                 x86_64               5.6.14-3.el6                       mysql-community                         1.8 M
Removing for dependencies:
 cronie                               x86_64               1.4.4-7.el6                        @CentOS6-Base/$releasever               166 k
 cronie-anacron                       x86_64               1.4.4-7.el6                        @CentOS6-Base/$releasever                43 k
 crontabs                             noarch               1.10-33.el6                        @CentOS6-Base/$releasever               2.4 k
 mha4mysql-node                       noarch               0.54-1.el5                         installed                                98 k
 percona-xtrabackup                   x86_64               2.1.4-656.rhel6                    @percona                                 24 M
 perl-DBD-MySQL                       x86_64               4.013-3.el6                        @base                                   338 k
 postfix                              x86_64               2:2.6.6-2.2.el6_1                  @CentOS6-Base/$releasever               9.7 M
 sysstat                              x86_64               9.0.4-20.el6                       @base                                   807 k

Transaction Summary
============================================================================================================================================
Install       4 Package(s)
Remove       15 Package(s)

Also, you need to quit the yum shell.

> quit
Leaving Shell

Verification

So, we have now installed MySQL via the new yum repositories, and we can verify this.

$ service mysqld start
Starting mysqld:                                           [  OK  ]
$ mysql -e "SELECT VERSION()"
+------------+
| VERSION()  |
+------------+
| 5.6.14-log |
+------------+
$ chkconfig mysqld on
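
You can also confirm the package now comes from the new repository (output abbreviated; the “From repo” label assumes a yum-installed package):

$ yum info mysql-community-server | grep -i repo
From repo   : mysql-community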

But we now have a broken system, because dependencies were removed.

Extra steps needed

Re-installation of Percona XtraBackup.

$ yum install percona-xtrabackup
Loaded plugins: fastestmirror, presto
Loading mirror speeds from cached hostfile
 * base: mirrors.advancedhosters.com
 * extras: mirror.cogentco.com
 * updates: mirror.trouble-free.net
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package percona-xtrabackup.x86_64 0:2.1.5-680.rhel6 will be installed
--> Processing Dependency: perl(DBD::mysql) for package: percona-xtrabackup-2.1.5-680.rhel6.x86_64
--> Running transaction check
---> Package perl-DBD-MySQL.x86_64 0:4.013-3.el6 will be installed
--> Processing Dependency: libmysqlclient.so.16(libmysqlclient_16)(64bit) for package: perl-DBD-MySQL-4.013-3.el6.x86_64
--> Processing Dependency: libmysqlclient.so.16()(64bit) for package: perl-DBD-MySQL-4.013-3.el6.x86_64
--> Running transaction check
---> Package Percona-Server-shared-51.x86_64 0:5.1.72-rel14.10.597.rhel6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================================================================================
 Package                                  Arch                   Version                                      Repository               Size
============================================================================================================================================
Installing:
 percona-xtrabackup                       x86_64                 2.1.5-680.rhel6                              percona                 6.8 M
Installing for dependencies:
 Percona-Server-shared-51                 x86_64                 5.1.72-rel14.10.597.rhel6                    percona                 2.1 M
 perl-DBD-MySQL                           x86_64                 4.013-3.el6                                  base                    134 k

Transaction Summary
============================================================================================================================================
Install       3 Package(s)

Total size: 9.1 M
Total download size: 2.3 M
Installed size: 30 M
Is this ok [y/N]: y
Downloading Packages:
Setting up and reading Presto delta metadata
Processing delta metadata
Package(s) data still to download: 2.3 M
(1/2): Percona-Server-shared-51-5.1.72-rel14.10.597.rhel6.x86_64.rpm                                                 | 2.1 MB     00:00
(2/2): perl-DBD-MySQL-4.013-3.el6.x86_64.rpm                                                                         | 134 kB     00:00
--------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                       9.7 MB/s | 2.3 MB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : Percona-Server-shared-51-5.1.72-rel14.10.597.rhel6.x86_64                                                                1/3
  Installing : perl-DBD-MySQL-4.013-3.el6.x86_64                                                                                        2/3
  Installing : percona-xtrabackup-2.1.5-680.rhel6.x86_64                                                                                3/3
  Verifying  : percona-xtrabackup-2.1.5-680.rhel6.x86_64                                                                                1/3
  Verifying  : perl-DBD-MySQL-4.013-3.el6.x86_64                                                                                        2/3
  Verifying  : Percona-Server-shared-51-5.1.72-rel14.10.597.rhel6.x86_64                                                                3/3

Installed:
  percona-xtrabackup.x86_64 0:2.1.5-680.rhel6

Dependency Installed:
  Percona-Server-shared-51.x86_64 0:5.1.72-rel14.10.597.rhel6                      perl-DBD-MySQL.x86_64 0:4.013-3.el6

Complete!

Please explain this one to me, Batman: removing MySQL removed the sysstat package. Very weird.

$ iostat
-bash: iostat: command not found
$ yum install sysstat
Loaded plugins: fastestmirror, presto
Loading mirror speeds from cached hostfile
 * base: mirrors.advancedhosters.com
 * extras: mirror.cogentco.com
 * updates: centos.someimage.com
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package sysstat.x86_64 0:9.0.4-20.el6 will be installed
--> Processing Dependency: /etc/cron.d for package: sysstat-9.0.4-20.el6.x86_64
--> Running transaction check
---> Package cronie.x86_64 0:1.4.4-7.el6 will be installed
--> Processing Dependency: dailyjobs for package: cronie-1.4.4-7.el6.x86_64
--> Processing Dependency: /usr/sbin/sendmail for package: cronie-1.4.4-7.el6.x86_64
--> Running transaction check
---> Package cronie-anacron.x86_64 0:1.4.4-7.el6 will be installed
--> Processing Dependency: crontabs for package: cronie-anacron-1.4.4-7.el6.x86_64
---> Package postfix.x86_64 2:2.6.6-2.2.el6_1 will be installed
--> Running transaction check
---> Package crontabs.noarch 0:1.10-33.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================================================================================
 Package                              Arch                         Version                                 Repository                  Size
============================================================================================================================================
Installing:
 sysstat                              x86_64                       9.0.4-20.el6                            base                       225 k
Installing for dependencies:
 cronie                               x86_64                       1.4.4-7.el6                             base                        70 k
 cronie-anacron                       x86_64                       1.4.4-7.el6                             base                        29 k
 crontabs                             noarch                       1.10-33.el6                             base                        10 k
 postfix                              x86_64                       2:2.6.6-2.2.el6_1                       base                       2.0 M

Transaction Summary
============================================================================================================================================
Install       5 Package(s)

Total download size: 2.4 M
Installed size: 11 M
Is this ok [y/N]: y
Downloading Packages:
Setting up and reading Presto delta metadata
Processing delta metadata
Package(s) data still to download: 2.4 M
(1/5): cronie-1.4.4-7.el6.x86_64.rpm                                                                                 |  70 kB     00:00
(2/5): cronie-anacron-1.4.4-7.el6.x86_64.rpm                                                                         |  29 kB     00:00
(3/5): crontabs-1.10-33.el6.noarch.rpm                                                                               |  10 kB     00:00
(4/5): postfix-2.6.6-2.2.el6_1.x86_64.rpm                                                                            | 2.0 MB     00:00
(5/5): sysstat-9.0.4-20.el6.x86_64.rpm                                                                               | 225 kB     00:00
--------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                        21 MB/s | 2.4 MB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : 2:postfix-2.6.6-2.2.el6_1.x86_64                                                                                         1/5
  Installing : cronie-1.4.4-7.el6.x86_64                                                                                                2/5
  Installing : crontabs-1.10-33.el6.noarch                                                                                              3/5
  Installing : cronie-anacron-1.4.4-7.el6.x86_64                                                                                        4/5
  Installing : sysstat-9.0.4-20.el6.x86_64                                                                                              5/5
  Verifying  : crontabs-1.10-33.el6.noarch                                                                                              1/5
  Verifying  : cronie-1.4.4-7.el6.x86_64                                                                                                2/5
  Verifying  : 2:postfix-2.6.6-2.2.el6_1.x86_64                                                                                         3/5
  Verifying  : sysstat-9.0.4-20.el6.x86_64                                                                                              4/5
  Verifying  : cronie-anacron-1.4.4-7.el6.x86_64                                                                                        5/5

Installed:
  sysstat.x86_64 0:9.0.4-20.el6

Dependency Installed:
  cronie.x86_64 0:1.4.4-7.el6    cronie-anacron.x86_64 0:1.4.4-7.el6    crontabs.noarch 0:1.10-33.el6    postfix.x86_64 2:2.6.6-2.2.el6_1

Complete!

Re-installing MySQL MHA node.

$ cd /tmp
$ wget http://mysql-master-ha.googlecode.com/files/mha4mysql-node-0.54-0.el6.noarch.rpm
$ sudo rpm -ivh mha4mysql-node-*.noarch.rpm
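
A quick sanity check that the node package is back (version string assumed from the RPM downloaded above):

$ rpm -q mha4mysql-node
mha4mysql-node-0.54-0.el6.noarch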

There is now a lot more work needed to check and recheck the dependencies, and to verify that what worked previously still works.

At this time, doing this on 20 DB servers to move to the new yum repository is a fail for this client. It’s simply not worth it.

Conclusion

Theory easy, practice not so easy!

What SQL is running in MySQL

Using the MySQL 5.6 Performance Schema it is very easy to see what is actually running on your MySQL instance. No more sampling with SHOW PROCESSLIST, installing software, enabling the general query log and worrying about disk I/O performance, or sniffing the TCP/IP stack.

The following SQL gives me a quick 60-second view of ALL statements executed on a running MySQL system.

use performance_schema;
update setup_consumers set enabled='YES' where name IN ('events_statements_history','events_statements_current','statements_digest');
truncate table events_statements_current;
truncate table events_statements_history;
truncate table events_statements_summary_by_digest;
do sleep(60);
select now(),(count_star/(select sum(count_star) FROM events_statements_summary_by_digest) * 100) as pct, count_star, left(digest_text,150) as stmt, digest from events_statements_summary_by_digest order by 2 desc;
update setup_consumers set enabled='NO' where name IN ('events_statements_history','events_statements_current','statements_digest');

NOTE: These statements are for simple debugging and demonstration purposes. If you want to monitor SQL statements on an ongoing basis, you should not simply truncate tables and globally enable/disable options.
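
For ongoing monitoring, a less intrusive sketch is to snapshot the cumulative digest counters and compute deltas rather than truncating (the table name digest_snapshot is my own):

CREATE TEMPORARY TABLE digest_snapshot AS
  SELECT digest, count_star
  FROM performance_schema.events_statements_summary_by_digest;
DO SLEEP(60);
SELECT d.digest, LEFT(d.digest_text,150) AS stmt,
       d.count_star - IFNULL(s.count_star,0) AS delta
FROM performance_schema.events_statements_summary_by_digest d
LEFT JOIN digest_snapshot s USING (digest)
ORDER BY delta DESC;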

There are four Performance Schema tables that are applicable for initial SQL analysis.

  1. The events_statements_summary_by_digest table, shown below, gives (as the name suggests) a way to summarize all queries into a common query pattern (or digest). This is great for getting a picture of the volume and frequency of SQL statements.
  2. The events_statements_current table shows the currently running SQL statements.
  3. The events_statements_history table is where the fun is, because it provides a short (by default, 10 events per thread) history of the SQL statements that have run in any given thread; see the sketch after this list.
  4. The events_statements_history_long table (when enabled) gives you a history of the most recent 10,000 events.
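
A sketch of pulling that per-thread history, assuming the events_statements_history consumer is enabled as in the script above (the thread id is illustrative):

SELECT thread_id, event_id, LEFT(sql_text, 80) AS stmt
FROM performance_schema.events_statements_history
WHERE thread_id = 25
ORDER BY event_id;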

One query can give me a detailed review of the type and frequency of ALL SQL statements run. The ALL is important, because on a slave you also see ALL applied replication events.

mysql> select now(),(count_star/(select sum(count_star) FROM events_statements_summary_by_digest) * 100) as pct, count_star, left(digest_text,150) as stmt, digest from events_statements_summary_by_digest order by 2 desc;
+---------------------+---------+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+
| now()               | pct     | count_star | stmt                                                                                                                                                   | digest                           |
+---------------------+---------+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+
| 2013-11-07 18:24:46 | 60.6585 |       7185 | SELECT * FROM `D.....` WHERE `name` = ?                                                                                                                | d6399273d75e2348d6d7ea872489a30c |
| 2013-11-07 18:24:46 | 23.4192 |       2774 | SELECT nc . id , nc . name FROM A.................. anc JOIN N........... nc ON anc . ............_id = nc . id WHERE ......._id = ?                   | c6e2249eb91767aa09945cbb118adbb3 |
| 2013-11-07 18:24:46 |  5.5298 |        655 | BEGIN                                                                                                                                                  | 7519b14a899fd514365211a895f5e833 |
| 2013-11-07 18:24:46 |  4.6180 |        547 | INSERT INTO V........ VALUES (...) ON DUPLICATE KEY UPDATE v.... = v.... + ?                                                                           | ffb6231b78efc022175650d37a837b99 |
| 2013-11-07 18:24:46 |  1.0891 |        129 | SELECT COUNT ( * ) FROM T............... WHERE rule = ? AND ? LIKE concat ( pattern , ? )                                                              | 22d984df583adc9a1ac282239e7629e2 |
| 2013-11-07 18:24:46 |  1.0553 |        125 | SELECT COUNT ( * ) FROM T............... WHERE rule = ? AND ? LIKE concat ( ? , pattern , ? )                                                          | a8ee43287bb2ee35e2c144c569a8b2de |
| 2013-11-07 18:24:46 |  0.9033 |        107 | INSERT IGNORE INTO `K......` ( `id` , `k......` ) VALUES (...)                                                                                         | 675e32e9eac555f33df240e80305c013 |
| 2013-11-07 18:24:46 |  0.7936 |         94 | SELECT * FROM `K......` WHERE k...... IN (...)                                                                                                         | 8aa7dc3b6f729aec61bd8d7dfa5978fa |
| 2013-11-07 18:24:46 |  0.4559 |         54 | SELECT COUNT ( * ) FROM D..... WHERE NAME = ? OR NAME = ?                                                                                              | 1975f53832b0c2506de482898cf1fd37 |
| 2013-11-07 18:24:46 |  0.3208 |         38 | SELECT h . * FROM H........ h LEFT JOIN H............ ht ON h . id = ht . ......_id WHERE ht . ........._id = ? ORDER BY h . level ASC                 | ca838db99e40fdeae920f7feae99d19f |
| 2013-11-07 18:24:46 |  0.2702 |         32 | SELECT h . * , ( POW ( ? * ( lat - - ? ) , ? ) + POW ( ? * ( ? - lon ) * COS ( lat / ? ) , ? ) ) AS distance FROM H........ h FORCE INDEX ( lat ) WHER | cd6e32fc0a20fab32662e2b0a282845c |
| 2013-11-07 18:24:46 |  0.1857 |         22 | SELECT h . * , ( POW ( ? * ( lat - ? ) , ? ) + POW ( ? * ( - ? - lon ) * COS ( lat / ? ) , ? ) ) AS distance FROM H........ h FORCE INDEX ( lat ) WHER | a7b43944f5811ef36c0ded7e79793536 |
| 2013-11-07 18:24:46 |  0.0760 |          9 | SELECT h . * , ( POW ( ? * ( lat - ? ) , ? ) + POW ( ? * ( ? - lon ) * COS ( lat / ? ) , ? ) ) AS distance FROM H........ h FORCE INDEX ( lat ) WHERE  | 4ccd8b28ae9e87a9c0b372a58ca22af7 |
| 2013-11-07 18:24:46 |  0.0169 |          2 | SELECT * FROM `K......` WHERE k...... IN (?)                                                                                                           | 44286e824d922d8e2ba6d993584844fb |
| 2013-11-07 18:24:46 |  0.0084 |          1 | SELECT h . * , ( POW ( ? * ( lat - - ? ) , ? ) + POW ( ? * ( - ? - lon ) * COS ( lat / ? ) , ? ) ) AS distance FROM H........ h FORCE INDEX ( lat ) WH | 299095227a67d99824af2ba012b81633 |
| 2013-11-07 18:24:46 |  0.0084 |          1 | SELECT * FROM `H........` WHERE `id` = ?                                                                                                               | 2924ea1d925a6e158397406403a63e3a |
| 2013-11-07 18:24:46 |  0.0084 |          1 | SHOW ENGINE INNODB STATUS                                                                                                                              | 0b04d3acd555401f1cbc479f920b1bac |
| 2013-11-07 18:24:46 |  0.0084 |          1 | DO `sleep` (?)                                                                                                                                         | 3d6e973c2657d0d136bbbdad05e68c7a |
| 2013-11-07 18:24:46 |  0.0084 |          1 | SHOW ENGINE INNODB MUTEX                                                                                                                               | a031f0e6068cb12c5b7508106687c2cb |
| 2013-11-07 18:24:46 |  0.0084 |          1 | SELECT NOW ( ) , ( `count_star` / ( SELECT SUM ( `count_star` ) FROM `events_statements_summary_by_digest` ) * ? ) AS `pct` , `count_star` , LEFT ( `d | 8a9e990cd85d6c42a2e537d04c8c5910 |
| 2013-11-07 18:24:46 |  0.0084 |          1 | SHOW SLAVE STATUS                                                                                                                                      | d2a0ffb1232f2704cef785f030306603 |
| 2013-11-07 18:24:46 |  0.0084 |          1 | TRUNCATE TABLE `events_statements_summary_by_digest`                                                                                                   | a7bef5367816ca771571e648ba963515 |
| 2013-11-07 18:24:46 |  0.0084 |          1 | UPDATE `setup_consumers` SET `enabled` = ? WHERE NAME IN (...)                                                                                         | 8205ea424267a604a3a4f68a76bc0bbb |
| 2013-11-07 18:24:46 |  0.0084 |          1 | SHOW GLOBAL STATUS                                                                                                                                     | ddf94d7d7b176021b8586a3cce1e85c9 |
+---------------------+---------+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+

This immediately shows me a single simple application query that is executed 60% of the time. Further review of the data and usage pattern shows that this query should be cached. This is an immediate improvement on system scalability.

While you can look at the raw Performance Schema data, using ps_helper from Mark Leith makes life easier via the statement_analysis view, because it normalizes timers into human-readable formats (check out lock_latency).

mysql> select * from ps_helper.statement_analysis order by exec_count desc limit 10;
+-------------------------------------------------------------------+-----------+------------+-----------+------------+---------------+-------------+-------------+--------------+-----------+---------------+--------------+------------+-----------------+-------------+-------------------+----------------------------------+
| query                                                             | full_scan | exec_count | err_count | warn_count | total_latency | max_latency | avg_latency | lock_latency | rows_sent | rows_sent_avg | rows_scanned | tmp_tables | tmp_disk_tables | rows_sorted | sort_merge_passes | digest                           |
+-------------------------------------------------------------------+-----------+------------+-----------+------------+---------------+-------------+-------------+--------------+-----------+---------------+--------------+------------+-----------------+-------------+-------------------+----------------------------------+
| CREATE VIEW `io_by_thread_by_l ... SUM ( `sum_timer_wait` ) DESC  |           |     146117 |         0 |          0 | 00:01:47.36   | 765.11 ms   | 734.74 us   | 00:01:02.00  |         3 |             0 |            3 |          0 |               0 |           0 |                 0 | c877ec02dce17ea0aca2f256e5b9dc70 |
| SELECT nc . id , nc . name FRO ...  nc . id WHERE ......._id = ?  |           |      41394 |         0 |          0 | 16.85 s       | 718.37 ms   | 407.00 us   | 5.22 s       |    155639 |             4 |       312077 |          0 |               0 |           0 |                 0 | c6e2249eb91767aa09945cbb118adbb3 |
| BEGIN                                                             |           |      16281 |         0 |          0 | 223.24 ms     | 738.82 us   | 13.71 us    | 0 ps         |         0 |             0 |            0 |          0 |               0 |           0 |                 0 | 7519b14a899fd514365211a895f5e833 |
| INSERT INTO V........ VALUES ( ...  KEY UPDATE v.... = v.... + ?  |           |      12703 |         0 |          0 | 1.73 s        | 34.23 ms    | 136.54 us   | 696.50 ms    |         0 |             0 |            0 |          0 |               0 |           0 |                 0 | ffb6231b78efc022175650d37a837b99 |
| SELECT * FROM `D.....` WHERE `name` = ?                           |           |      10620 |         0 |          0 | 3.85 s        | 25.21 ms    | 362.52 us   | 705.16 ms    |         1 |             0 |            1 |          0 |               0 |           0 |                 0 | d6399273d75e2348d6d7ea872489a30c |
| SELECT COUNT ( * ) FROM T..... ... ? LIKE concat ( pattern , ? )  |           |       2830 |         0 |          0 | 1.22 s        | 2.14 ms     | 432.60 us   | 215.62 ms    |      2830 |             1 |       101880 |          0 |               0 |           0 |                 0 | 22d984df583adc9a1ac282239e7629e2 |
| SELECT COUNT ( * ) FROM T..... ... KE concat ( ? , pattern , ? )  |           |       2727 |         0 |          0 | 932.01 ms     | 30.95 ms    | 341.77 us   | 189.47 ms    |      2727 |             1 |        38178 |          0 |               0 |           0 |                 0 | a8ee43287bb2ee35e2c144c569a8b2de |
| INSERT IGNORE INTO `K......` ( `id` , `k......` ) VALUES (...)    |           |       2447 |         0 |          0 | 499.33 ms     | 9.65 ms     | 204.06 us   | 108.28 ms    |         0 |             0 |            0 |          0 |               0 |           0 |                 0 | 675e32e9eac555f33df240e80305c013 |
| SELECT * FROM `K......` WHERE k...... IN (...)                    |           |       2237 |         0 |          0 | 1.58 s        | 62.33 ms    | 704.19 us   | 345.61 ms    |     59212 |            26 |        59212 |          0 |               0 |           0 |                 0 | 8aa7dc3b6f729aec61bd8d7dfa5978fa |
| SELECT COUNT ( * ) FROM D..... WHERE NAME = ? OR NAME = ?         |           |       1285 |         0 |          0 | 797.72 ms     | 131.29 ms   | 620.79 us   | 340.45 ms    |      1285 |             1 |            8 |          0 |               0 |           0 |                 0 | 1975f53832b0c2506de482898cf1fd37 |
+-------------------------------------------------------------------+-----------+------------+-----------+------------+---------------+-------------+-------------+--------------+-----------+---------------+--------------+------------+-----------------+-------------+-------------------+----------------------------------+

Indeed, this simple query highlights a pile of additional information necessary for analysis, such as:

  1. What is that CREATE VIEW command that is executed many more times?
  2. In this view, query 2 is executed some 3x more than query 4, yet in my 60 second sample it was 3x less. Has the profile of the query load changed? What exactly is being sampled in this view?
  3. The lock_latency shows some incredibly large lock times, over 5 seconds for the top SELECT statement. Is this an outlier? Unfortunately the views give min/avg/max for total_latency but no breakdown of lock_latency, so it is hard to see how much of a problem this actually is.

A quick note: the statement_analysis_raw view gives you the full SQL statement; for example, for the first point listed, the statement actually was:

select query from ps_helper.statement_analysis_raw order by exec_count desc limit 1;
CREATE VIEW `io_by_thread_by_latency` AS SELECT IF ( `processlist_id` IS NULL , `SUBSTRING_INDEX` ( NAME , ? , - ? ) , `CONCAT` ( `processlist_user` , ? , `processlist_host` ) ) SYSTEM_USER , SUM ( `count_star` ) `count_star` , `format_time` ( SUM ( `sum_timer_wait` ) ) `total_latency` , `format_time` ( MIN ( `min_timer_wait` ) ) `min_latency` , `format_time` ( AVG ( `avg_timer_wait` ) ) `avg_latency` , `format_time` ( MAX ( `max_timer_wait` ) ) `max_latency` , `thread_id` , `processlist_id` FROM `performance_schema` . `events_waits_summary_by_thread_by_event_name` LEFT JOIN `performance_schema` . `threads` USING ( `thread_id` ) WHERE `event_name` LIKE ? AND `sum_timer_wait` > ? GROUP BY `thread_id` ORDER BY SUM ( `sum_timer_wait` ) DESC

Monitoring an online MySQL ALTER TABLE using Performance Schema

Recently a client asked me how long it would take for an ALTER TABLE to complete. Generally the answer is “it depends”. While one was running on a production system, I used the Performance Schema in MySQL 5.6 to try to work out an answer to this question. While I never got to investigate various tests using INPLACE and COPY for comparison, Morgan Tocker made the request for experiences with online ALTER in A closer look at Online DDL in MySQL 5.6. Hopefully somebody with more time can expand on my preliminary observations.

Using Mark Leith’s ps_helper (an older version), I monitored the file I/O to see if, when using innodb_file_per_table, I could determine the percentage of the table write that had completed.

Other data access on this slave server was disabled, so these results represent the I/O of a single ALTER TABLE statement.

  • We can clearly see the reading of the Txxxxxxxxxxxxxx.ibd table, starting at 4.03 GiB, then 7.79 GiB, and following that a peak of 9.75 GiB.
  • We see the writing of Txxxxxxxxxxxxxx.ibd in the last samples, and it completes at 9.85 GiB; the write_pct shows ~50%, a good indicator of a full read and write of the table.
  • What we also see is an “Innodb Merge Temp File” which is initially all writes, an indication of what may be a “copy”; however, what is *interesting* is that the amount of reads and writes is 6x the size of the underlying table.
  • Recall that this server is effectively *idle*, so there is no need to be keeping version information of actual table changes.
  • This “Innodb Merge Temp File” also works in 1 MiB chunks, so comparing the count values against the base table’s 16 KiB operations can be misleading if you just look at count deltas during the process.

Given more time, I would have performed more extensive monitoring, including timestamps, and run a test using the COPY algorithm to see if this took less time.
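
For reference, a minimal sketch of the kind of Performance Schema query a view like io_global_by_file_by_bytes is built on (the aliases are my own, not the ps_helper definitions):

SELECT file_name,
       count_read, sum_number_of_bytes_read AS total_read,
       count_write, sum_number_of_bytes_write AS total_written,
       ROUND(100 * sum_number_of_bytes_write /
             NULLIF(sum_number_of_bytes_read + sum_number_of_bytes_write, 0), 2) AS write_pct
FROM performance_schema.file_summary_by_instance
ORDER BY sum_number_of_bytes_read + sum_number_of_bytes_write DESC
LIMIT 10;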

Sample 1

mysql> select * from ps_helper.io_global_by_file_by_bytes limit 10;
+---------------------------------------------+------------+------------+-----------+-------------+---------------+-----------+-----------+-----------+
| file                                        | count_read | total_read | avg_read  | count_write | total_written | avg_write | total     | write_pct |
+---------------------------------------------+------------+------------+-----------+-------------+---------------+-----------+-----------+-----------+
| @@datadir/nsikeyword/Txxxxxxxxxxxxxx.ibd    |     264207 | 4.03 GiB   | 16.00 KiB |         191 | 2.98 MiB      | 16.00 KiB | 4.03 GiB  |      0.07 |
| @@datadir/Innodb Merge Temp File            |          0 | 0 bytes    | 0 bytes   |        1912 | 1.87 GiB      | 1.00 MiB  | 1.87 GiB  |    100.00 |
| @@datadir/ibdata1                           |        412 | 8.41 MiB   | 20.89 KiB |         693 | 46.52 MiB     | 68.73 KiB | 54.92 MiB |     84.69 |
| @@datadir/ib_logfile1                       |          2 | 64.50 KiB  | 32.25 KiB |       16670 | 11.92 MiB     | 750 bytes | 11.98 MiB |     99.47 |
| /mysql/binlog/mysql-relay-bin.003236        |        749 | 5.85 MiB   | 8.00 KiB  |       49943 | 5.85 MiB      | 123 bytes | 11.70 MiB |     50.00 |
| @@datadir/mysql/slave_master_info.ibd       |          4 | 64.00 KiB  | 16.00 KiB |           0 | 0 bytes       | 0 bytes   | 64.00 KiB |      0.00 |
| @@datadir/mysql/slave_relay_log_info.ibd    |          4 | 64.00 KiB  | 16.00 KiB |           0 | 0 bytes       | 0 bytes   | 64.00 KiB |      0.00 |
| @@datadir/mysql/slave_worker_info.ibd       |          4 | 64.00 KiB  | 16.00 KiB |           0 | 0 bytes       | 0 bytes   | 64.00 KiB |      0.00 |
+---------------------------------------------+------------+------------+-----------+-------------+---------------+-----------+-----------+-----------+

Sample 2

mysql> select * from ps_helper.io_global_by_file_by_bytes limit 10;
+---------------------------------------------+------------+------------+-----------+-------------+---------------+-----------+-----------+-----------+
| file                                        | count_read | total_read | avg_read  | count_write | total_written | avg_write | total     | write_pct |
+---------------------------------------------+------------+------------+-----------+-------------+---------------+-----------+-----------+-----------+
| @@datadir/nsikeyword/Txxxxxxxxxxxxxx.ibd    |     510483 | 7.79 GiB   | 16.00 KiB |         191 | 2.98 MiB      | 16.00 KiB | 7.79 GiB  |      0.04 |
| @@datadir/Innodb Merge Temp File            |          0 | 0 bytes    | 0 bytes   |        3517 | 3.43 GiB      | 1.00 MiB  | 3.43 GiB  |    100.00 |
| @@datadir/ibdata1                           |        412 | 8.41 MiB   | 20.89 KiB |         693 | 46.52 MiB     | 68.73 KiB | 54.92 MiB |     84.69 |
| @@datadir/ib_logfile1                       |          2 | 64.50 KiB  | 32.25 KiB |       16670 | 11.92 MiB     | 750 bytes | 11.98 MiB |     99.47 |
| /mysql/binlog/mysql-relay-bin.003236        |        749 | 5.85 MiB   | 8.00 KiB  |       49943 | 5.85 MiB      | 123 bytes | 11.70 MiB |     50.00 |
| @@datadir/mysql/slave_master_info.ibd       |          4 | 64.00 KiB  | 16.00 KiB |           0 | 0 bytes       | 0 bytes   | 64.00 KiB |      0.00 |
| @@datadir/mysql/slave_relay_log_info.ibd    |          4 | 64.00 KiB  | 16.00 KiB |           0 | 0 bytes       | 0 bytes   | 64.00 KiB |      0.00 |
| @@datadir/mysql/slave_worker_info.ibd       |          4 | 64.00 KiB  | 16.00 KiB |           0 | 0 bytes       | 0 bytes   | 64.00 KiB |      0.00 |
+---------------------------------------------+------------+------------+-----------+-------------+---------------+-----------+-----------+-----------+

Sample 3

mysql> select * from ps_helper.io_global_by_file_by_bytes limit 10;
+---------------------------------------------+------------+------------+-----------+-------------+---------------+-----------+------------+-----------+
| file                                        | count_read | total_read | avg_read  | count_write | total_written | avg_write | total      | write_pct |
+---------------------------------------------+------------+------------+-----------+-------------+---------------+-----------+------------+-----------+
| @@datadir/Innodb Merge Temp File            |      19503 | 19.05 GiB  | 1.00 MiB  |       23798 | 23.24 GiB     | 1.00 MiB  | 42.29 GiB  |     54.96 |
| @@datadir/nsikeyword/Txxxxxxxxxxxxxx.ibd    |     638920 | 9.75 GiB   | 16.00 KiB |         191 | 2.98 MiB      | 16.00 KiB | 9.75 GiB   |      0.03 |
| @@datadir/ibdata1                           |        412 | 8.41 MiB   | 20.89 KiB |         693 | 46.52 MiB     | 68.73 KiB | 54.92 MiB  |     84.69 |
| @@datadir/ib_logfile1                       |          2 | 64.50 KiB  | 32.25 KiB |       16670 | 11.92 MiB     | 750 bytes | 11.98 MiB  |     99.47 |
| /mysql/binlog/mysql-relay-bin.003236        |        749 | 5.85 MiB   | 8.00 KiB  |       49943 | 5.85 MiB      | 123 bytes | 11.70 MiB  |     50.00 |
| @@datadir/mysql/proc.MYD                    |        692 | 423.96 KiB | 627 bytes |           0 | 0 bytes       | 0 bytes   | 423.96 KiB |      0.00 |
| @@datadir/mysql/slave_master_info.ibd       |          4 | 64.00 KiB  | 16.00 KiB |           0 | 0 bytes       | 0 bytes   | 64.00 KiB  |      0.00 |
| @@datadir/mysql/slave_relay_log_info.ibd    |          4 | 64.00 KiB  | 16.00 KiB |           0 | 0 bytes       | 0 bytes   | 64.00 KiB  |      0.00 |
+---------------------------------------------+------------+------------+-----------+-------------+---------------+-----------+------------+-----------+

Sample 4

mysql> select * from ps_helper.io_global_by_file_by_bytes limit 10;
+---------------------------------------------+------------+------------+-----------+-------------+---------------+-----------+------------+-----------+
| file                                        | count_read | total_read | avg_read  | count_write | total_written | avg_write | total      | write_pct |
+---------------------------------------------+------------+------------+-----------+-------------+---------------+-----------+------------+-----------+
| @@datadir/Innodb Merge Temp File            |      36865 | 36.00 GiB  | 1.00 MiB  |       41160 | 40.20 GiB     | 1.00 MiB  | 76.20 GiB  |     52.75 |
| @@datadir/nsikeyword/Txxxxxxxxxxxxxx.ibd    |     638920 | 9.75 GiB   | 16.00 KiB |         191 | 2.98 MiB      | 16.00 KiB | 9.75 GiB   |      0.03 |
| @@datadir/ibdata1                           |        412 | 8.41 MiB   | 20.89 KiB |         693 | 46.52 MiB     | 68.73 KiB | 54.92 MiB  |     84.69 |
| @@datadir/ib_logfile1                       |          2 | 64.50 KiB  | 32.25 KiB |       16670 | 11.92 MiB     | 750 bytes | 11.98 MiB  |     99.47 |
| /mysql/binlog/mysql-relay-bin.003236        |        749 | 5.85 MiB   | 8.00 KiB  |       49943 | 5.85 MiB      | 123 bytes | 11.70 MiB  |     50.00 |
| @@datadir/mysql/proc.MYD                    |        692 | 423.96 KiB | 627 bytes |           0 | 0 bytes       | 0 bytes   | 423.96 KiB |      0.00 |
| @@datadir/mysql/slave_master_info.ibd       |          4 | 64.00 KiB  | 16.00 KiB |           0 | 0 bytes       | 0 bytes   | 64.00 KiB  |      0.00 |
| @@datadir/mysql/slave_relay_log_info.ibd    |          4 | 64.00 KiB  | 16.00 KiB |           0 | 0 bytes       | 0 bytes   | 64.00 KiB  |      0.00 |
+---------------------------------------------+------------+------------+-----------+-------------+---------------+-----------+------------+-----------+

Sample 5

mysql> select * from ps_helper.io_global_by_file_by_bytes limit 10;
+---------------------------------------------+------------+------------+-----------+-------------+---------------+------------+------------+-----------+
| file                                        | count_read | total_read | avg_read  | count_write | total_written | avg_write  | total      | write_pct |
+---------------------------------------------+------------+------------+-----------+-------------+---------------+------------+------------+-----------+
| @@datadir/Innodb Merge Temp File            |      57009 | 55.67 GiB  | 1.00 MiB  |       60144 | 58.73 GiB     | 1.00 MiB   | 114.41 GiB |     51.34 |
| @@datadir/nsikeyword/Txxxxxxxxxxxxxx.ibd    |     638922 | 9.75 GiB   | 16.00 KiB |       92839 | 2.56 GiB      | 28.88 KiB  | 12.31 GiB  |     20.78 |
| @@datadir/ibdata1                           |        412 | 8.41 MiB   | 20.89 KiB |        8081 | 1.44 GiB      | 186.98 KiB | 1.45 GiB   |     99.43 |
| @@datadir/ib_logfile0                       |          4 | 3.50 KiB   | 896 bytes |        4443 | 845.59 MiB    | 194.89 KiB | 845.60 MiB |    100.00 |
| @@datadir/ib_logfile1                       |          2 | 64.50 KiB  | 32.25 KiB |       19327 | 735.53 MiB    | 38.97 KiB  | 735.60 MiB |     99.99 |
| /mysql/binlog/mysql-relay-bin.003236        |        749 | 5.85 MiB   | 8.00 KiB  |       49943 | 5.85 MiB      | 123 bytes  | 11.70 MiB  |     50.00 |
| @@datadir/mysql/proc.MYD                    |        696 | 426.43 KiB | 627 bytes |           0 | 0 bytes       | 0 bytes    | 426.43 KiB |      0.00 |
| @@datadir/mysql/slave_master_info.ibd       |          4 | 64.00 KiB  | 16.00 KiB |           0 | 0 bytes       | 0 bytes    | 64.00 KiB  |      0.00 |
+---------------------------------------------+------------+------------+-----------+-------------+---------------+------------+------------+-----------+

Sample 6

mysql> select * from ps_helper.io_global_by_file_by_bytes limit 10;
+---------------------------------------------+------------+------------+-----------+-------------+---------------+------------+------------+-----------+
| file                                        | count_read | total_read | avg_read  | count_write | total_written | avg_write  | total      | write_pct |
+---------------------------------------------+------------+------------+-----------+-------------+---------------+------------+------------+-----------+
| @@datadir/Innodb Merge Temp File            |      60144 | 58.73 GiB  | 1.00 MiB  |       60144 | 58.73 GiB     | 1.00 MiB   | 117.47 GiB |     50.00 |
| @@datadir/nsikeyword/Txxxxxxxxxxxxxx.ibd    |     638922 | 9.75 GiB   | 16.00 KiB |      348927 | 9.85 GiB      | 29.61 KiB  | 19.60 GiB  |     50.27 |
| @@datadir/ibdata1                           |        427 | 8.64 MiB   | 20.72 KiB |       31941 | 5.30 GiB      | 173.84 KiB | 5.30 GiB   |     99.84 |
| @@datadir/ib_logfile0                       |          4 | 3.50 KiB   | 896 bytes |       15763 | 2.97 GiB      | 197.34 KiB | 2.97 GiB   |    100.00 |
| @@datadir/ib_logfile1                       |          2 | 64.50 KiB  | 32.25 KiB |       29864 | 2.72 GiB      | 95.62 KiB  | 2.72 GiB   |    100.00 |
| /mysql/binlog/mysql-relay-bin.003236        |        749 | 5.85 MiB   | 8.00 KiB  |       49943 | 5.85 MiB      | 123 bytes  | 11.70 MiB  |     50.00 |
| @@datadir/mysql/proc.MYD                    |        700 | 428.89 KiB | 627 bytes |           0 | 0 bytes       | 0 bytes    | 428.89 KiB |      0.00 |
| @@datadir/mysql/innodb_index_stats.ibd      |          5 | 80.00 KiB  | 16.00 KiB |           1 | 16.00 KiB     | 16.00 KiB  | 96.00 KiB  |     16.67 |
+---------------------------------------------+------------+------------+-----------+-------------+---------------+------------+------------+-----------+

MySQL shutdown via service reporting ERROR

Working with MySQL 5.6 under CentOS 6.4, I came across the following problem: MySQL reporting that it did not shut down successfully.

$ sudo su -
$ service mysql stop
Shutting down MySQL................................................................................................................................................................................................................................................................................................
.........................................................................................................................................................................................................................................................................................................
.........................................................................................................................................................................................................................................................................................................
................. ERROR!

However, you have to look into the problem in the all-important MySQL error log, where I found a different story.

$ tail -100 /mysql/log/error.log
...
2013-11-04 14:43:40 32351 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2013-11-04 14:43:40 32351 [Note] Shutting down plugin 'INNODB_LOCKS'
2013-11-04 14:43:40 32351 [Note] Shutting down plugin 'INNODB_TRX'
2013-11-04 14:43:40 32351 [Note] Shutting down plugin 'InnoDB'
2013-11-04 14:43:40 32351 [Note] InnoDB: FTS optimize thread exiting.
2013-11-04 14:43:40 32351 [Note] InnoDB: Starting shutdown...
2013-11-04 14:44:41 32351 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-04 14:45:41 32351 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-04 14:46:41 32351 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-04 14:47:41 32351 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-04 14:48:41 32351 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-04 14:49:41 32351 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-04 14:50:42 32351 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-04 14:51:42 32351 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-04 14:52:42 32351 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-04 14:53:42 32351 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-04 14:54:42 32351 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-04 14:55:42 32351 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-04 14:56:43 32351 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-04 14:57:43 32351 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-04 14:58:43 32351 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-04 14:59:43 32351 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-04 15:00:43 32351 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-04 15:01:43 32351 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-04 15:02:01 32351 [Note] InnoDB: Shutdown completed; log sequence number 109188218872
2013-11-04 15:02:01 32351 [Note] Shutting down plugin 'MRG_MYISAM'
2013-11-04 15:02:01 32351 [Note] Shutting down plugin 'CSV'
2013-11-04 15:02:01 32351 [Note] Shutting down plugin 'MEMORY'
2013-11-04 15:02:01 32351 [Note] Shutting down plugin 'MyISAM'
2013-11-04 15:02:01 32351 [Note] Shutting down plugin 'sha256_password'
2013-11-04 15:02:01 32351 [Note] Shutting down plugin 'mysql_old_password'
2013-11-04 15:02:01 32351 [Note] Shutting down plugin 'mysql_native_password'
2013-11-04 15:02:01 32351 [Note] Shutting down plugin 'binlog'
2013-11-04 15:02:01 32351 [Note] /usr/sbin/mysqld: Shutdown complete

131104 15:02:01 mysqld_safe mysqld from pid file /var/run/mysqld/mysql.pid ended

The log indicates that the server actually shut down; an attempt to stop it again results in:

$ service mysql stop
 ERROR! MySQL server PID file could not be found!
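
Before trusting the init script, it is worth confirming whether mysqld is in fact still running; a quick check (credentials assumed):

$ pgrep -lf mysqld
$ mysqladmin -u root -p ping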

Doing some more investigation to pinpoint whether the issue is “service” (i.e. “init.d”) related, I found that the command took suspiciously close to exactly 15 minutes (on several occasions), and MySQL had indeed not finished shutting down when it returned. For example:

$ sudo su -
$ time service mysql stop; tail /mysql/log/error.log
Shutting down MySQL................................................................................................................................................................................................................................................................................................
..........................................................................................................................................................................................................................................................................................................
..........................................................................................................................................................................................................................................................................................................
................ ERROR!

real	15m4.003s
user	0m0.561s
sys	0m2.194s
2013-11-06 16:50:37 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-06 16:51:38 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-06 16:52:38 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-06 16:53:38 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-06 16:54:38 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-06 16:55:38 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-06 16:56:39 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-06 16:57:39 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-06 16:58:39 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-06 16:59:39 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool

However, given enough time, the shutdown did indeed complete successfully.

$ tail -f /mysql/log/error.log
2013-11-06 16:51:38 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-06 16:52:38 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-06 16:53:38 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-06 16:54:38 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-06 16:55:38 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-06 16:56:39 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-06 16:57:39 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-06 16:58:39 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-06 16:59:39 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-06 17:00:39 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-06 17:01:39 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-06 17:02:40 19887 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2013-11-06 17:03:03 19887 [Note] InnoDB: Shutdown completed; log sequence number 109974145720
2013-11-06 17:03:03 19887 [Note] Shutting down plugin 'MRG_MYISAM'
2013-11-06 17:03:03 19887 [Note] Shutting down plugin 'CSV'
2013-11-06 17:03:03 19887 [Note] Shutting down plugin 'MEMORY'
2013-11-06 17:03:03 19887 [Note] Shutting down plugin 'MyISAM'
2013-11-06 17:03:03 19887 [Note] Shutting down plugin 'sha256_password'
2013-11-06 17:03:03 19887 [Note] Shutting down plugin 'mysql_old_password'
2013-11-06 17:03:03 19887 [Note] Shutting down plugin 'mysql_native_password'
2013-11-06 17:03:03 19887 [Note] Shutting down plugin 'binlog'
2013-11-06 17:03:03 19887 [Note] /usr/sbin/mysqld: Shutdown complete

131106 17:03:03 mysqld_safe mysqld from pid file /var/run/mysqld/mysql.pid ended
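
As a closing note, a common mitigation I did not test here is to flush dirty pages before issuing the stop, so InnoDB has less page_cleaner work to do at shutdown; a sketch:

mysql> SET GLOBAL innodb_max_dirty_pages_pct = 0;
mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';

Once the dirty page count approaches zero:

$ service mysql stop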