A testimony to Linux resilience

A client released a new version of their website onto 20 AWS m1.medium instances (current site at peak load runs approximately 60 m1.medium webservers).
It was clearly an unsuccessful release. What was surprising was that the system did not actually crash; it was effectively a meltdown, yet the servers were still operational with load averages above 100. I was impressed that Linux could still (just) function.

parallel-ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 -i -h   uptime
 18:01:00 up 18:44,  0 users,  load average: 104.26, 110.03, 113.12
 18:01:00 up 18:56,  1 user,  load average: 62.33, 87.75, 90.40
 18:01:03 up 18:44,  0 users,  load average: 105.28, 115.33, 115.61
 18:01:03 up 18:44,  0 users,  load average: 149.35, 155.74, 133.68
 18:01:03 up 18:51,  0 users,  load average: 124.63, 121.31, 115.91
 18:01:03 up 18:44,  0 users,  load average: 118.99, 109.92, 110.60
 18:01:04 up 18:44,  0 users,  load average: 121.73, 118.40, 113.50
 18:01:04 up 18:44,  0 users,  load average: 113.89, 120.56, 114.64
 18:01:05 up 18:44,  0 users,  load average: 119.30, 119.71, 115.65
 18:01:05 up 18:44,  0 users,  load average: 126.33, 120.33, 119.02
 18:01:05 up 18:44,  0 users,  load average: 117.47, 113.01, 112.84
 18:01:05 up 18:44,  0 users,  load average: 172.21, 158.62, 135.19
 18:01:05 up 18:44,  0 users,  load average: 115.81, 114.96, 116.18
 18:01:05 up 18:44,  0 users,  load average: 122.25, 115.32, 115.27
 18:01:05 up 18:44,  0 users,  load average: 164.13, 168.04, 153.03
 18:01:05 up 18:44,  0 users,  load average: 123.80, 114.94, 110.29
 18:01:06 up 18:44,  0 users,  load average: 173.64, 173.80, 158.76
 18:01:06 up 18:44,  0 users,  load average: 132.52, 140.94, 135.43
 18:01:06 up 18:44,  0 users,  load average: 166.17, 151.68, 135.23
 18:01:06 up 18:44,  0 users,  load average: 170.14, 164.03, 145.31

The AWS m1.medium is a single-CPU instance, so a load average above 100 means more than 100 runnable processes were competing for a single core.

$ cat /proc/cpuinfo
processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 45
model name	: Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
stepping	: 7
cpu MHz		: 1800.000
cache size	: 20480 KB
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu de tsc msr pae cx8 cmov pat clflush mmx fxsr sse sse2 ss ht syscall nx lm up rep_good aperfmperf unfair_spinlock pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt aes hypervisor lahf_lm arat epb xsaveopt pln pts dts
bogomips	: 3600.00
clflush size	: 64
cache_alignment	: 64
address sizes	: 46 bits physical, 48 bits virtual
power management:
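
As a cross-check, the same parallel-ssh invocation used above (hosts file argument omitted here, exactly as in the original command) can confirm the core count on every instance in the fleet:

$ parallel-ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 -i -h   "grep -c ^processor /proc/cpuinfo"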

An unexplained connection experience

The “Too many connections” problem is a common issue, particularly with applications that connect with accounts granted excessive (global) privileges, and with those who grant said privileges. MySQL always reserves one extra connection for a user with the SUPER privilege, so an administrator can connect, run SHOW PROCESSLIST to investigate the problem, and check the configured limits. In this case, however, I found the following.

mysql> show global variables like 'max_connections';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 2000  |
+-----------------+-------+
1 row in set (0.00 sec)

mysql> show global status like 'max%';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| Max_used_connections | 6637  |
+----------------------+-------+
1 row in set (0.00 sec)

How can Max_used_connections exceed max_connections? This is possible because max_connections can be changed dynamically in a normal MySQL environment. However, this is AWS RDS, where you cannot change variables dynamically via the mysql client; it can be done via the RDS command line tools (by modifying a parameter group), but that has not happened here. Furthermore, this server uses the default.mysql5.5 parameter group, which further supports the claim that the value has not been changed.
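
For reference, on a self-managed MySQL server the limit can be raised on the fly, whereas on RDS the change must go through a (non-default) parameter group. A minimal sketch, assuming the current AWS CLI and a hypothetical custom parameter group named my-mysql55 attached to the instance (the default.mysql5.5 group itself cannot be modified):

mysql> SET GLOBAL max_connections = 5000;   -- requires SUPER; possible on self-managed MySQL, not on RDS

$ aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mysql55 \
    --parameters "ParameterName=max_connections,ParameterValue=5000,ApplyMethod=immediate"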

I do not have an answer for the client in this case.

I would also add this as another ding against the usability of RDS in production environments. I was locked out of the database for a long time, with no visibility of what was going on. The only options were to wait, or to restart the server. RDS does not provide visibility of the processlist via a privileged user who could see what was happening. Perhaps that is an interface they should consider in the future.

Additional DB objects in AWS RDS

To expand on Jervin’s Default RDS Account Privileges, RDS for MySQL provides a number of routines and triggers defined in the ‘mysql’ meta schema. These help with various administrative tasks because the SUPER privilege is not provided.

SELECT routine_schema,routine_name
FROM information_schema.routines;
+----------------+-----------------------------------+
| routine_schema | routine_name                      |
+----------------+-----------------------------------+
| mysql          | rds_collect_global_status_history |
| mysql          | rds_disable_gsh_collector         |
| mysql          | rds_disable_gsh_rotation          |
| mysql          | rds_enable_gsh_collector          |
| mysql          | rds_enable_gsh_rotation           |
| mysql          | rds_kill                          |
| mysql          | rds_kill_query                    |
| mysql          | rds_rotate_general_log            |
| mysql          | rds_rotate_global_status_history  |
| mysql          | rds_rotate_slow_log               |
| mysql          | rds_set_configuration             |
| mysql          | rds_set_gsh_collector             |
| mysql          | rds_set_gsh_rotation              |
| mysql          | rds_show_configuration            |
| mysql          | rds_skip_repl_error               |
+----------------+-----------------------------------+
15 rows in set (0.00 sec)
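
These routines stand in for operations that would otherwise need SUPER. For example, instead of the KILL statement, the documented rds_kill and rds_kill_query procedures take a thread id from SHOW PROCESSLIST (12345 below is a hypothetical id):

mysql> CALL mysql.rds_kill(12345);        -- terminate the connection
mysql> CALL mysql.rds_kill_query(12345);  -- terminate only the running statement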

SELECT trigger_schema, trigger_name,
          CONCAT(event_object_schema,'.',event_object_table) AS table_name,
          CONCAT(action_timing,' ',event_manipulation) AS trigger_action
FROM information_schema.triggers;
+----------------+--------------+------------+----------------+
| trigger_schema | trigger_name | table_name | trigger_action |
+----------------+--------------+------------+----------------+
| mysql          | block_proc_u | mysql.proc | BEFORE UPDATE  |
| mysql          | block_proc_d | mysql.proc | BEFORE DELETE  |
| mysql          | block_user_i | mysql.user | BEFORE INSERT  |
| mysql          | block_user_u | mysql.user | BEFORE UPDATE  |
| mysql          | block_user_d | mysql.user | BEFORE DELETE  |
+----------------+--------------+------------+----------------+
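
The exact conditions these triggers enforce on the mysql.proc and mysql.user tables can be reviewed by inspecting the trigger bodies:

SELECT trigger_name, action_statement
FROM information_schema.triggers
WHERE trigger_schema = 'mysql'\G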

Cloning MySQL 5.6 instances

A tip for all those cloud users who like cloning database servers (as described in my book Effective MySQL – Replication Techniques in Depth).

Starting with MySQL 5.6, MySQL instances have a UUID. Cloning servers to quickly create slaves will result in the following error message.

mysql> SHOW SLAVE STATUS\G
...
  Last_IO_Error: Fatal error: The slave I/O thread stops because master and slave have equal MySQL server UUIDs; these UUIDs must be different for replication to work.
...

The solution is simple. Remove the auto.cnf file that holds the UUID (located in the MySQL datadir) and restart the MySQL instance; a new UUID is generated on startup.

$ rm -f /mysql/data/auto.cnf
$ service mysql restart
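
To confirm the clone now has its own identity, check the server UUID after the restart:

mysql> SELECT @@server_uuid;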

Basic scalability principles to avert downtime

In the press over the last two days has been the reported outage of Amazon Web Services Elastic Compute Cloud (EC2) in just one Northern Virginia data center. This has affected many large websites including FourSquare, Hootsuite, Reddit and Quora. A detailed list can be found at ec2disabled.com.

For these popular websites was this avoidable? Absolutely.

Basic scalability principles, if deployed in these systems' architecture, would have averted the significant downtime regardless of the development stack. While I work primarily with MySQL, these principles are not new, nor are they complicated; they are fundamental concepts in scalability that apply to any technology, including the popular MongoDB used by a number of the affected sites.

Scalability 101 involves some simple rules. Here are just two that seem to have been ignored by many of those affected by this recent AWS EC2 outage.

  1. Never put all your eggs in one basket. Relying on AWS completely, or on just one availability zone, is putting all your eggs in one basket.
  2. Always keep your important data close to home. When it comes to what is most critical to your business, you need access to and control of your information. When the CEO asks at 5am how long the business will be unavailable and what is needed to resolve the problem, “We have no control over this and have no ETA” is not an acceptable answer.

With a successful implementation and appropriate data redundancy you may not have an environment immediately available, but you have access to your important information and the ability to create one quickly. Many large hosting companies can provide additional hardware at near on-demand notice, especially if you have an initial minimal footprint. Indeed, using Amazon Web Services (AWS) as a means to avert a data center disaster is an ideal implementation of Infrastructure as a Service (IaaS). Even with this issue, organizations that had planned for this type of outage could easily have migrated to another, unaffected AWS availability zone.

Furthermore, a system architecture that supports various levels of data availability and scalability ensures you can handle many different types of unavailability without the significant system downtime recently seen. There are many different types of availability and unavailability; know what your definition of downtime is, and make surviving disasters a primary focus of your scalability planning, not an afterthought.

As an expert in performance and scalability I can help your organization design a suitable architecture to support successful scalability and disaster recovery. This is not rocket science; however, many organizations gamble without the expertise of a professional to ensure business viability.