The Facebook effect

First it was the Slashdot effect, then it was the Digg effect, now it’s the Facebook effect. I have a friend at Facebook who was talking about the impact of the Facebook Platform API released a few weeks ago. Sites are now struggling to cope with massive amounts of new traffic, ensuring that experienced MySQL consultants will have plenty of scale-out opportunities.

Here is an excerpt from an article I read recently, Analyzing the Facebook Platform, three weeks in:

Translation: unless you already have, or are prepared to quickly procure, a 100-500+ server infrastructure and everything associated with it — networking gear, storage gear, ISP interconnections, monitoring systems, firewalls, load balancers, provisioning systems, etc. — and a killer operations team, launching a successful Facebook application may well be a self-defeating proposition.

This is a “success kills” scenario — the good news is you’re successful, the bad news is you’re flat on your back from what amounts to a self-inflicted denial of service attack, unless you have the money and time and knowledge to tackle the resulting scale challenges.

This comes from the success of iLike. Some further reading: “Crazy love when startup iLike hits pay dirt” and “Holy cow… 6mm users and growing 300k/day!”. Wow!

You can’t buy viral marketing that produces this kind of traffic growth. It ties in with the current MySQL 12 Days of Scale-out series, and with recent experiences where clients are seeking HA and scale-out solutions but have not architected their present systems to support any level of scale-out via MySQL’s proven techniques of replication and sharding.

It’s an important lesson: any organization wanting to develop a successful web site needs to ensure the architecture is designed with massive scale-out in mind from the beginning. This means starting with an application that supports partitioning of your data (both vertically and horizontally) and supports replication, including the possibility of lag with MySQL replication slaves.
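To make that concrete, here is a minimal sketch of application-level hash sharding with read/write splitting. The shard map, DSNs, and helper names are hypothetical examples for illustration, not a prescribed layout:

```python
# Minimal sketch of horizontal partitioning (sharding) plus read/write splitting.
# The shard map, DSNs, and helper names are hypothetical examples.
import hashlib

# Each shard has one master (writes) and one or more replicas (reads).
SHARDS = [
    {"master": "mysql://db-master-0/app", "replicas": ["mysql://db-replica-0/app"]},
    {"master": "mysql://db-master-1/app", "replicas": ["mysql://db-replica-1/app"]},
]

def shard_for(user_id: int) -> dict:
    """Route a user to a shard by hashing the user id (horizontal partitioning)."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

def dsn_for(user_id: int, for_write: bool) -> str:
    """Writes always go to the shard master; reads may go to a (possibly lagging) replica."""
    shard = shard_for(user_id)
    return shard["master"] if for_write else shard["replicas"][0]

# Example: user 42's data is written to its shard master and read from that shard's replica.
print(dsn_for(42, for_write=True))
print(dsn_for(42, for_write=False))
```

The key point is that the routing decision lives in the application from day one, so adding shards or replicas later is a configuration change rather than a rewrite.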

You see on a lot of larger Web 2.0 sites these days, after saving data, a message like “Your information will be available momentarily”, rather than the saved data being displayed immediately. This is a clear means of supporting replication lag, even if it is only a few milliseconds. It is just the first of many steps in designing an application to manage a scale-out architecture.
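A minimal sketch of that pattern is below. The master and replica are simulated with dictionaries, and the lag check uses a simple version number; in a real system the write would go to the MySQL master and the read to a replication slave:

```python
# Sketch of tolerating replication lag in the application layer.
# MASTER/REPLICA dictionaries and the version check are simulated stand-ins
# for real master/slave database calls.
import time

MASTER = {}    # simulated master copy: key -> (value, version)
REPLICA = {}   # simulated replica copy, updated asynchronously by "replication"

def save_to_master(key, value):
    """Write to the master and return a version the client can remember."""
    version = time.time()
    MASTER[key] = (value, version)
    return version

def read_from_replica(key, expected_version):
    """Return the value if the replica has caught up, otherwise a holding message."""
    row = REPLICA.get(key)
    if row is None or row[1] < expected_version:
        return "Your information will be available momentarily."
    return row[0]

# Usage: the write acknowledges immediately; the first read may still be stale.
v = save_to_master("profile:42", "new bio")
print(read_from_replica("profile:42", v))      # replica not caught up yet
REPLICA["profile:42"] = MASTER["profile:42"]   # replication applies the change
print(read_from_replica("profile:42", v))      # now the fresh data is returned
```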

Tagged with: Databases MySQL
