Simple steps to increasing site availability

A recent production database migration with a large client highlighted a fundamental flaw in their architecture with respect to site availability. While the development team had taken several good steps to improve the scalability of the site, there was a clear failure in understanding and supporting different levels of data availability, which I cover in my presentation Successful Scalability Principles.

The development manager decided to shut down the entire site to perform the final DB migration. The downtime was only 60 seconds, but this approach was completely unnecessary: all user requests were simply rejected without any explanation.

The Problem

The system had already been siloed/partitioned/sharded into 5 distinct sources of information. 4 of these data sources in MySQL had separate read and write capacity (i.e. MySQL replication), and application configuration to support reading data from a source other than the primary. Both of these principles are good steps towards scalability and performance. What was lacking was availability.
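
As a minimal sketch of what that application configuration can look like (the hostnames, credentials and helper name here are illustrative, not the client's actual setup), here is the read/write split in Python:

    # Route writes to the primary and reads to a replica (illustrative
    # hostnames; any MySQL client library would do, pymysql shown here).
    import pymysql

    DB_ENDPOINTS = {
        "primary": {"host": "ads-primary.internal"},
        "replica": {"host": "ads-replica.internal"},
    }

    def get_connection(for_write=False):
        """Primary for writes; replica for reads, so read traffic can
        survive a primary outage."""
        role = "primary" if for_write else "replica"
        return pymysql.connect(host=DB_ENDPOINTS[role]["host"],
                               user="app", password="secret", db="ads")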

The Wrong Way

The migration of the final partition involved moving from AWS RDS to AWS EC2 instances running MySQL. This final, all-important module managed advertisements, campaigns and ad tracking, and required that no data be lost.

In AWS, the approach taken was to remove approximately 60 webservers from the public load balancer (ELB). The result was that all requests, some 20,000 to 25,000 of them, simply hung or likely produced an HTTP 500 error.

This was the first fundamental flaw. What does your website look like when it is unavailable? In this case that was never considered or planned for. At worst, all sites should have an emergency “site unavailable due to maintenance” page, trivially managed by a second virtual host in your Apache web server configuration. This can be enabled with zero downtime. While it inconveniences the end user, you are informing the end user, and they will be more receptive to proactive information.
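
For illustration only (the hostname and paths are hypothetical), such a maintenance virtual host might look like the following. Swapping it in and running a graceful reload serves the static page without dropping a single connection:

    # Illustrative maintenance virtual host. Returning 503 with a
    # Retry-After header tells clients and crawlers the outage is
    # temporary. Requires mod_rewrite and mod_headers.
    <VirtualHost *:80>
        ServerName www.example.com
        DocumentRoot /var/www/maintenance
        ErrorDocument 503 /index.html
        RewriteEngine On
        # Serve the maintenance page itself; answer everything else with 503.
        RewriteCond %{REQUEST_URI} !=/index.html
        RewriteRule ^ - [R=503,L]
        Header always set Retry-After "120"
    </VirtualHost>

Enabling it is then a configuration swap followed by an apachectl graceful, i.e. zero downtime.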

The second fundamental flaw is that the unavailability of one part of the system should not affect the entire system when there is no interaction between the parts. There were 5 distinct and standalone partitions; only 1 required downtime.

The Right Way

In this situation there was more than one approach to minimize downtime while switching data sources and to ensure all data was captured.

Most sites fail to support the fundamental principle of different levels of data availability. In this specific case, one partition (i.e. 1/5 of the data) would be unavailable. Why should that situation affect 100% of your website? Furthermore, only the ability to write was affected; why then should that affect the ability to read ads?

There are at least four types of data availability: the ability to write data, read data, read cached data, and no data access. There are also more fine-grained methods, one of which I will discuss below.
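
As a sketch (in Python; the partition names are hypothetical), these levels can be made explicit in the application and tracked per partition rather than site-wide:

    # The four levels of data availability as an explicit, ordered enum.
    from enum import Enum

    class DataAvailability(Enum):
        READ_WRITE = 3   # full access
        READ_ONLY = 2    # writes disabled, e.g. during a primary migration
        READ_CACHED = 1  # database unreachable; serve cached data only
        UNAVAILABLE = 0  # no data access for this partition

    # Tracked per partition, not site-wide: during the ads migration only
    # that one partition needs to drop below READ_WRITE.
    availability = {
        "users": DataAvailability.READ_WRITE,
        "content": DataAvailability.READ_WRITE,
        "ads": DataAvailability.READ_ONLY,
    }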

Defining your data availability requires your application to support and manage data access. This is not easy if your application was not developed with this in mind. I will give you a simple example. Many popular LAMP frameworks, including Drupal and WordPress, were never designed for read scalability; they relied on a single MySQL server. The act of scaling reads, and of providing a read-only site, is an afterthought, and many websites struggle to find creative ways to retrofit this primary architectural design pattern.

Knowing that a user request requires the ability to read and/or write data is the first key step. Knowing what type of data it is is the second. Providing a mechanism that communicates the current level of data access, and the ability to turn off features while maintaining site uptime, is critical for improving site availability.
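
Continuing the sketch above (the handler and helper names are hypothetical, and the helpers are stubbed), feature gating then reduces to a comparison against the partition's current level:

    # Gate each feature on the level of data access it needs, reusing the
    # DataAvailability enum from the earlier sketch.
    def can(partition, required, availability):
        return availability[partition].value >= required.value

    def record_click(event):
        pass  # normal write path to the ads primary (stub)

    def queue_click_for_replay(event):
        pass  # degraded path: persist locally, apply after migration (stub)

    def handle_ad_click(event, availability):
        if can("ads", DataAvailability.READ_WRITE, availability):
            record_click(event)
        elif can("ads", DataAvailability.READ_ONLY, availability):
            queue_click_for_replay(event)  # feature degrades, site stays up
        else:
            return "ad tracking temporarily unavailable"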

More advanced approaches then consider the role of caching data. Generally sites will use caching to assist reads, but caching can also be implemented to support non-critical writes. In this particular example, a write to cache presented a small but tangible risk of data loss. The solution was to implement a secondary logging strategy: a separate persistent write capability during the downtime, with the ability to replay the changes afterwards. By limiting the writes to log-only (i.e. write-once) operations, it became very simple to migrate from one system to the other, logging and reapplying all data changes, with no site downtime and no data loss.
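
Here is a minimal sketch of that secondary logging strategy (the file path and event format are illustrative; the real implementation would match the client's stack):

    # Append write-once events to a durable local log during the cutover,
    # then replay them against the new primary afterwards.
    import json
    import os

    LOG_FILE = "/var/log/app/ad-events.log"   # illustrative path

    def log_event(event):
        """Persist the event while the ads database is being migrated."""
        with open(LOG_FILE, "a") as f:
            f.write(json.dumps(event) + "\n")
            f.flush()
            os.fsync(f.fileno())   # durable on disk, not just buffered

    def replay_events(apply):
        """After cutover, reapply every logged event to the new primary."""
        with open(LOG_FILE) as f:
            for line in f:
                apply(json.loads(line))

Because each event is write-once, replaying them in order cannot conflict with data already in the new system, which is what makes this approach so simple.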

Conclusion

Managing site availability comes back to a very important question: what are your uptime needs? Clearly define them.