BIGINT v INT. Is there a big deal?

The answer is yes.

In this face-off we have two numeric MySQL data types, both integers. In fact MySQL has 9 different numeric data types for integer, fixed precision and floating point numbers; however, we are just going to focus on two, BIGINT and INT. This design consideration is part of my recent presentation Top 20 Design Tips for Data Architects.

What is the difference?
We turn first to the MySQL Reference Manual; in 10.1.1. Overview of Numeric Types we see the following.


INT[(M)] [UNSIGNED] [ZEROFILL]

A normal-size integer. The signed range is -2147483648 to 2147483647. The unsigned range is 0 to 4294967295.

BIGINT[(M)] [UNSIGNED] [ZEROFILL]

A large integer. The signed range is -9223372036854775808 to 9223372036854775807. The unsigned range is 0 to 18446744073709551615.

OK, so an INT can store a value up to about 2.1 billion, and a BIGINT can store a value of up to 20 digits. That overview doesn't help much with the details; we have to dig deeper to find 10.2. Numeric Types, in which we find that an INT is a 4 byte integer and a BIGINT is an 8 byte integer.

So what’s the big deal?

Quite a lot actually. Using INT rather than BIGINT can make a significant reduction in disk space. This one change alone can save you 10%-20% (depending on your particular situation). More significantly, when used for primary keys, foreign keys and indexes, it can reduce your index size by up to 50%, and this will improve performance when those indexes are used.

My approach is this. Let's focus on primary keys and foreign keys to begin with. Are you going to store more than 2.1 billion rows in your table? The answer should be no. Should you say yes, then you do have grand plans, but you are also failing to consider the ramifications of handling larger data sets (a topic for later discussion).

There are exceptions to this rule: if you do a huge number of inserts and deletes, then while you may never hold 2.1 billion rows, you may have performed 2.1 billion inserts and exhausted the AUTO_INCREMENT range. Again, better design practices should be considered in this case.
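
If you want a quick sanity check of where your keys stand today, the INFORMATION_SCHEMA can show how close each AUTO_INCREMENT counter is to the signed INT maximum of 2,147,483,647. A minimal sketch (the schema name 'sakila' is only an example):

SELECT table_name, auto_increment,
       ROUND(auto_increment / 2147483647 * 100, 2) AS pct_of_int_max
FROM   information_schema.tables
WHERE  table_schema = 'sakila'
AND    auto_increment IS NOT NULL
ORDER  BY pct_of_int_max DESC;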

The Test

As with everything, we need some evidence to back the claim. For this we use the Sakila sample database.

We start with a simple intersection table that has a high number of numeric-only columns. This shows the best-case situation.

We will create two tables, one with all BIGINT columns and one with all INT columns, and then compare their sizes. These tables are only small, but they show the proportion of disk space saved.

CREATE TABLE inventory_bigint LIKE inventory;
ALTER TABLE inventory_bigint
  MODIFY inventory_id  BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  MODIFY film_id BIGINT UNSIGNED NOT NULL,
  MODIFY store_id BIGINT UNSIGNED NOT NULL;
INSERT INTO inventory_bigint SELECT * from inventory;
CREATE TABLE inventory_int LIKE inventory;
ALTER TABLE inventory_int
  MODIFY inventory_id  INT UNSIGNED NOT NULL AUTO_INCREMENT,
  MODIFY film_id INT UNSIGNED NOT NULL,
  MODIFY store_id INT UNSIGNED NOT NULL;
INSERT INTO inventory_int SELECT * from inventory;

select table_name,engine,row_format, table_rows, avg_row_length,
        (data_length+index_length)/1024/1024 as total_mb,
         (data_length)/1024/1024 as data_mb,
         (index_length)/1024/1024 as index_mb
from information_schema.tables
where table_schema='sakila'
and   table_name LIKE 'inventory%'
order by 6 desc;
+------------------+--------+------------+------------+----------------+-------------+-------------+-------------+
| table_name       | engine | row_format | table_rows | avg_row_length | total_mb    | data_mb     | index_mb    |
+------------------+--------+------------+------------+----------------+-------------+-------------+-------------+
| inventory_bigint | InnoDB | Compact    |     293655 |             51 | 43.60937500 | 14.51562500 | 29.09375000 |
| inventory_int    | InnoDB | Compact    |     293715 |             37 | 29.54687500 | 10.51562500 | 19.03125000 |
| inventory        | InnoDB | Compact    |     293707 |             33 | 22.54687500 |  9.51562500 | 13.03125000 |
+------------------+--------+------------+------------+----------------+-------------+-------------+-------------+
3 rows in set (0.15 sec)

In this example, the data portion decreased from 14MB to 10MB or 28%, and the index portion from 29M to 19M or 34%.

CREATE TABLE customer_bigint LIKE customer;
ALTER TABLE customer_bigint
     MODIFY customer_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
     MODIFY store_id BIGINT UNSIGNED NOT NULL,
     MODIFY address_id BIGINT UNSIGNED NOT NULL,
     MODIFY active BIGINT UNSIGNED NOT NULL;

CREATE TABLE customer_int LIKE customer;
ALTER TABLE customer_int
     MODIFY customer_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
     MODIFY store_id INT UNSIGNED NOT NULL,
     MODIFY address_id INT UNSIGNED NOT NULL,
     MODIFY active INT UNSIGNED NOT NULL;

select table_name,engine,row_format, table_rows, avg_row_length,
        (data_length+index_length)/1024/1024 as total_mb,
         (data_length)/1024/1024 as data_mb,
         (index_length)/1024/1024 as index_mb
from information_schema.tables
where table_schema='sakila'
and   table_name LIKE 'customer%'
order by 6 desc;

+-----------------+--------+------------+------------+----------------+-------------+-------------+-------------+
| table_name      | engine | row_format | table_rows | avg_row_length | total_mb    | data_mb     | index_mb    |
+-----------------+--------+------------+------------+----------------+-------------+-------------+-------------+
| customer_bigint | InnoDB | Compact    |     154148 |            139 | 37.09375000 | 20.54687500 | 16.54687500 |
| customer_int    | InnoDB | Compact    |     151254 |            121 | 30.06250000 | 17.51562500 | 12.54687500 |
| customer        | InnoDB | Compact    |      37684 |            125 |  7.81250000 |  4.51562500 |  3.29687500 |
| customer_list   | NULL   | NULL       |       NULL |           NULL |        NULL |        NULL |        NULL |
+-----------------+--------+------------+------------+----------------+-------------+-------------+-------------+
4 rows in set (0.22 sec)

In this example, the data portion decreased from 20MB to 17MB or 15%, and the index portion from 16M to 12M or 25%.

NOTE: The sample data set was increased for this example.

Conclusion

Even with these simple tables and small data sets it's clear that INT provides a saving in disk space over BIGINT. At many clients I've seen huge savings in multi-TB databases from just a small number of schema optimizations. Even if the saving from a more optimized database design were only 10%, it is an easy 10% that translates into a direct improvement in performance.


About the Author

Ronald Bradford provides Consulting and Advisory Services in Data Architecture, Performance and Scalability for MySQL Solutions. An IT industry professional for two decades with extensive database experience in MySQL, Oracle and Ingres, his expertise covers data architecture, software development, migration, performance analysis and production system implementations. His knowledge from 10 years of consulting across many industry sectors, technologies and countries has provided unique insight into providing solutions to problems. For more information Contact Ronald.


Off to OSCON

I will be heading to my first OSCON next week, where I will be presenting MySQL Proxy: from Architecture to Implementation in conjunction with Giuseppe Maxia.

As Colin Charles wrote in Our booth is yours… Sun at OSCON, Sun/MySQL would appear to have a reasonable turnout. So it will be good to see some old colleagues and friends, and hopefully meet some new contacts.

While I am based on the East Coast, I do also provide expert MySQL consulting for clients in any location. Should you like to find out more about my offerings covering Architecture, Performance, Scaling, Migration and Knowledge Transfer for MySQL Solutions, please Contact Me and I will arrange a time to meet next week.

Why is SQL_MODE important? Part I

MySQL prior to version 5.0 was very lax in its management of valid data. It was easy for data integrity to be abused if you knew how. The most common examples were truncations and silent conversions that, if not understood, could cause serious data integrity issues.

In version 5.0, the stricter settings of SQL_MODE addressed this problem. We will look at one example of how SQL_MODE can be enabled to provide improved data integrity.

You want to store the individual RGB (red/green/blue) decimal values of colors in a table. Each of these has a range from 0 to 255. You read that a TINYINT integer data type can store values up to 255, so you create a table like:

DROP TABLE IF EXISTS color_to_decimal;
CREATE TABLE color_to_decimal(
name VARCHAR(20) NOT NULL PRIMARY KEY,
red    TINYINT NOT NULL,
green TINYINT NOT NULL,
blue   TINYINT NOT NULL);

You insert some data like:

INSERT INTO color_to_decimal (name, red,green,blue) VALUES ('white',255,255,255);
INSERT INTO color_to_decimal (name, red,green,blue) VALUES ('black',0,0,0);
INSERT INTO color_to_decimal (name, red,green,blue) VALUES ('red',255,0,0);
INSERT INTO color_to_decimal (name, red,green,blue) VALUES ('green',0,255,0);
INSERT INTO color_to_decimal (name, red,green,blue) VALUES ('blue',0,0,255);
INSERT INTO color_to_decimal (name, red,green,blue) VALUES ('yellow',255,255,0);

Great, but when you look at your data, what do you get?

SELECT     name, red, green, blue
FROM       color_to_decimal
ORDER BY name;
+--------+-----+-------+------+
| name   | red | green | blue |
+--------+-----+-------+------+
| black  |   0 |     0 |    0 |
| blue   |   0 |     0 |  127 |
| green  |   0 |   127 |    0 |
| red    | 127 |     0 |    0 |
| white  | 127 |   127 |  127 |
| yellow | 127 |   127 |    0 |
+--------+-----+-------+------+
6 rows in set (0.01 sec)

What happened? You delete the data and re-insert it, only to find no change. You have been the victim of a silent conversion, by means of truncation.

The TINYINT data type is 1 byte (8 bits). 8 bits can store the values from 0 to 255. When you use this integer data type, only 7 bits are actually available for the value, which gives the range 0 to 127. Why? Because MySQL reserves one bit for the sign, either positive or negative, even though you didn't want a sign.

So knowing this, you go back and recreate your table with the following definition.

DROP TABLE IF EXISTS color_to_decimal;
CREATE TABLE color_to_decimal(
name VARCHAR(20) NOT NULL PRIMARY KEY,
red    TINYINT UNSIGNED NOT NULL,
green TINYINT UNSIGNED NOT NULL,
blue   TINYINT UNSIGNED NOT NULL);

You load your data and look at it again, and you see.

+--------+-----+-------+------+
| name   | red | green | blue |
+--------+-----+-------+------+
| black  |   0 |     0 |    0 |
| blue   |   0 |     0 |  255 |
| green  |   0 |   255 |    0 |
| red    | 255 |     0 |    0 |
| white  | 255 |   255 |  255 |
| yellow | 255 |   255 |    0 |
+--------+-----+-------+------+
6 rows in set (0.00 sec)

But should you have been told about this? Should there have been an error? Well, in MySQL this is actually a warning, and most applications never cater for warnings. It is only when you use the mysql client program, as in these examples, that you are given an indication, with the following line after each insert, if you look closely.

mysql> INSERT INTO color_to_decimal (name, red,green,blue) VALUES ('blue',0,0,255);
Query OK, 1 row affected, 1 warning (0.00 sec)

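If you run SHOW WARNINGS immediately after one of these inserts you can see the detail. Output along the following lines could be expected, though the exact message wording varies between MySQL versions:

mysql> SHOW WARNINGS;
+---------+------+-----------------------------------------------+
| Level   | Code | Message                                       |
+---------+------+-----------------------------------------------+
| Warning | 1264 | Out of range value for column 'blue' at row 1 |
+---------+------+-----------------------------------------------+
1 row in set (0.00 sec)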

However there is a savior for this situation, and that is SQL_MODE.

When SQL_MODE is set to TRADITIONAL, an error rather than a warning is generated, and most applications do support catching errors. Look at what happens in our example using the original table.

SET SQL_MODE=TRADITIONAL;
DROP TABLE IF EXISTS color_to_decimal;
CREATE TABLE color_to_decimal(
name VARCHAR(20) NOT NULL PRIMARY KEY,
red    TINYINT NOT NULL,
green TINYINT NOT NULL,
blue   TINYINT NOT NULL);
INSERT INTO color_to_decimal (name, red,green,blue) VALUES ('white',255,255,255);
ERROR 1264 (22003): Out of range value for column 'red' at row 1


As an added benefit you get this data integrity for free. We didn't test it, because we know the data coming in is in the range 0-255, but what if the user entered 500, for example? Let's see.

TRUNCATE TABLE color_to_decimal;
SET SQL_MODE='';
INSERT INTO color_to_decimal (name, red, green, blue) VALUES('a bad color',500,0,0);
SELECT name, red, green, blue FROM color_to_decimal;
SET SQL_MODE=TRADITIONAL;
INSERT INTO color_to_decimal (name, red, green, blue) VALUES('a bad color',500,0,0);

Looking closely at the response in the client.

mysql> TRUNCATE TABLE color_to_decimal;
Query OK, 0 rows affected (0.02 sec)

mysql> SET SQL_MODE='';
Query OK, 0 rows affected (0.00 sec)

mysql> INSERT INTO color_to_decimal (name, red, green, blue) VALUES('a bad color',500,0,0);
Query OK, 1 row affected, 1 warning (0.00 sec)

mysql> SELECT name, red, green, blue FROM color_to_decimal;
+-------------+-----+-------+------+
| name        | red | green | blue |
+-------------+-----+-------+------+
| a bad color | 127 |     0 |    0 |
+-------------+-----+-------+------+
1 row in set (0.00 sec)

mysql> SET SQL_MODE=TRADITIONAL;
Query OK, 0 rows affected (0.00 sec)

mysql> INSERT INTO color_to_decimal (name, red, green, blue) VALUES('a bad color',500,0,0);
ERROR 1264 (22003): Out of range value for column 'red' at row 1

As discussed in my presentation Top 20 design tips for Data Architects, the UNSIGNED column attribute should always be defined unless there is a specific reason not to.

In some respects I would argue that the default for an integer column should actually be UNSIGNED, and that SIGNED should be specified when you want a sign. Most integer columns in schemas only contain positive numbers. There are of course plenty of counter-examples: positional geo data, financial data and medical data, for example.

One could also argue that MySQL should make the default SQL_MODE at least TRADITIONAL, and that you should only change the SQL_MODE when you want backward compatibility.
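
Adopting this behaviour yourself is a one-line change. A minimal sketch (the GLOBAL setting requires the SUPER privilege and is lost on a server restart; add sql-mode=TRADITIONAL to the [mysqld] section of my.cnf to make it permanent):

SET SESSION sql_mode = 'TRADITIONAL';   -- current connection only
SET GLOBAL  sql_mode = 'TRADITIONAL';   -- all new connections, until the server restarts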

This is Part I on SQL_MODE; there are a few more interesting cases to discuss at a later time.



Sun Stock Prices

Sun Microsystems (NASDAQ:JAVA) hit a low this week of $8.71. There was a stronger rally and a close at $9.16 today. The Financial Times reports Sun Micro chief sees rays of hope, and Bloomberg Sun Rises After Fourth-Quarter Profit Tops Estimates.

I cashed out in March at $16.32, so that's about a 50% drop in share price. I was lucky, having been at MySQL long enough to have options vest. Newer employees are not that lucky. I certainly hope MySQL Sun employees get the Q4 weighted bonuses. (A structure I didn't believe compensated as well as the old bonus structure.)

I have been following more closely since Matt Asay’s comments in Who is buying Sun?



Image courtesy of Google Finance.

A Bill Gates bio

In the recent Wired magazine (yes, the paper one), there was an interesting timeline of Bill Gates. It was a rather odd format, but I found the two-page spread an enjoyable read. Some things of note from his early childhood, tips perhaps for those of us wanting to be successful.

  • 1968 Gates and Allen learn BASIC and blow the school's entire budget of computing time in a few weeks.
  • 1968 Employed to report PDP-10 software bugs.
  • 1971 Writes class scheduling program that places him in classes with the “right” girls.
  • 1973 Photographic memory, lucky him.
  • 1975 Writes BASIC for MITS.
  • 1976 Registers the trade name Microsoft.
  • 1980 Buys QDOS for $50k, later renaming and reselling as DOS.
  • 1984 Microsoft is one of the first software developers on Macintosh.
  • 1986 Company goes public.
  • 1996 Daily income is $30million, that’s per day.
  • 1998 The famous pie incident.

So it seems finding bugs, using technology to meet the right women, and buying and reselling somebody else's work worked out, but will it work now?

I read elsewhere that the companies of today, such as Amazon, eBay, Google and Facebook, didn't even exist 15 years ago.

There is still the opportunity for people out there, like you and I to be billionaires. Like one of my fridge magnets states. “Life isn’t about finding yourself. Life is about creating yourself.” Time to go create the next great thing we all must have.

Auditing your MySQL Data – Part 2

Continuing from my earlier post Auditing your MySQL Data, Roland has accurately highlighted that my initial post left out some important information for auditing. As the original charter was only to keep a history for the purpose of comparing certain columns, a history was all that was needed.

Providing a history of changes forms the basis of auditing, and in keeping with my post title and intended follow-up, this is the all-important second part. However, in order to provide true auditing, additional information is necessary. This includes:

  1. When was an operation performed
  2. What operation was performed, i.e. INSERT, UPDATE and DELETE
  3. Who performed the operation

Date and operation can be determined via the database, but in order to gather all this information, interaction with the application is necessary to obtain the true user information (this can't be determined via a trigger).

The issue becomes a greater need for design understanding. What is the purpose of the audit data? How will it be accessed? How much complexity in maintaining the data do you wish to take on?

One alternative is to keep a separate log of audit history. The benefits are a clear and easy way to provide a history of a user's actions, the structure of the base table and the audit table can remain the same, and the triggers can remain relatively simple. However, if you want to look at the data together with its audit history, it is better to embed these columns within each table, and the triggers then have to be customized and maintained in more detail than in my original post.

When considering the progression of these points, the design process normally arrives at the following conclusion: the following columns are added to the base table.

  • A create_timestamp column is added
  • A last_update_timestamp column is added
  • A last_update_user_id column is added

The create_timestamp is optional from an auditing perspective, because the last_update_timestamp of the first audit row will contain the same value; however, experience has shown this column is valuable for other design considerations.

The only remaining issue is the type of operation: INSERT, UPDATE or DELETE. Both INSERT and UPDATE can be inferred; DELETE cannot. To maintain the simplicity model, a common approach is to use a BEFORE DELETE trigger to insert an audit record with the same values as the previous row, with the last_update_timestamp set manually. A DELETE can then be recognized because no other column values changed.
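
A minimal sketch of this approach, using a hypothetical item table (none of these names come from the original article) with the audit columns listed above and a like-structured history table:

CREATE TABLE item (
  item_id               INT UNSIGNED NOT NULL PRIMARY KEY,
  name                  VARCHAR(50) NOT NULL,
  create_timestamp      DATETIME NOT NULL,
  last_update_timestamp TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  last_update_user_id   INT UNSIGNED NULL
);
CREATE TABLE item_history LIKE item;
ALTER TABLE item_history DROP PRIMARY KEY;

DELIMITER $$
DROP TRIGGER IF EXISTS item_bdr$$
CREATE TRIGGER item_bdr
BEFORE DELETE ON item
FOR EACH ROW
BEGIN
  -- Copy the row being removed, stamping last_update_timestamp with the time
  -- of the DELETE. A history row whose other values match the prior audit row
  -- can then be read as the DELETE operation.
  INSERT INTO item_history
    (item_id, name, create_timestamp, last_update_timestamp, last_update_user_id)
  VALUES
    (OLD.item_id, OLD.name, OLD.create_timestamp, NOW(), OLD.last_update_user_id);
END;
$$
DELIMITER ;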

The ultimate conclusion comes down to your application design and needs. For example, your design may include a flag or row status to indicate deletes, which are later cleaned up via a batch process, so you don't really care about the date/time of the actual purging of data. This negates the need for any DELETE trigger.

Again, thanks to Roland for providing a link to Putting the MySQL information_schema to Use, which provides a number of SQL statements that help in the generation of triggers to support full auditing.

You should be aware that CURRENT_USER normally serves no purpose if all changes are made via a single application user.
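
One common workaround, sketched here with a hypothetical @audit_user session variable, is for the application to set the real user at the start of each connection, and for the trigger logic to record that value, falling back to CURRENT_USER() when it is not set:

SET @audit_user = 'jane.smith';   -- set by the application for each connection
-- an audit trigger (or the audit INSERT) can then record:
SELECT COALESCE(@audit_user, CURRENT_USER()) AS audit_user;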

At this time, you also have another design consideration. Do you introduce a procedure to re-create the triggers via automated means for each schema change, or do you manually maintain the triggers with schema changes? With either approach, additional checking and verification is necessary to ensure your triggers are correctly configured.
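
As part of that verification, the INFORMATION_SCHEMA can at least confirm which triggers currently exist for a schema (the schema name here is only an example):

SELECT trigger_name, action_timing, event_manipulation, event_object_table
FROM   information_schema.triggers
WHERE  trigger_schema = 'sakila'
ORDER  BY event_object_table, event_manipulation;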

Auditing your MySQL Data

I was asked recently by a client to help with providing a history of data in certain tables. Like most problems, there is no one single solution, and in this case there are several possible solutions. I was able to provide a database-only solution, with just minimal impact to the existing schema.

Here is my approach; your feedback and alternative input is, as always, welcome.

The problem

Client: I want to keep a history of all changes to two tables, and have a means of viewing this history.

For the purposes of this solution, we will use one table, called ‘customer’ from the Sakila Sample Database.

Solution

For tables to be audited, we will introduce a new column called 'audit_id' which is NULLABLE, and which hopefully will not affect any existing INSERT statements providing column naming (a Best Practice) is used.
We do this to ensure that the Audit Table has the same structure (number and ordering of columns) and can have a Primary Key defined.

Schema Preparation

mysql> USE sakila;
mysql> ALTER TABLE customer ADD audit_id INT UNSIGNED NULL;

We can then create an Audit Schema to store the Audit Table. This helps to ensure a clean schema and supports appropriate backup and recovery. We use a standard of suffixing existing schemas with '_audit'.

mysql> CREATE DATABASE IF NOT EXISTS sakila_audit;
mysql> USE sakila_audit;
mysql> SET FOREIGN_KEY_CHECKS=0;   # (1)
mysql> CREATE TABLE customer LIKE sakila.customer;
mysql> ALTER TABLE customer DROP KEY `email`;   # (2)
mysql> # Foreign Keys (3)
mysql> ALTER TABLE customer
           DROP PRIMARY KEY,
           MODIFY customer_id SMALLINT UNSIGNED NOT NULL,  # (4)
           MODIFY audit_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY;

NOTES:
(1) FOREIGN_KEY_CHECKS is disabled due to the Foreign Key Constraints on the source table.
(2) We need to remove all existing UNIQUE Keys. I use an I_S SQL statement, as shown below, to find these programmatically; you can also view them via SHOW CREATE TABLE customer;
(3) It appears that Foreign Keys are not created with the LIKE syntax. This may change in the future, so this note is a reminder to check appropriately.
(4) We need to drop the primary key, but as this involves an AUTO_INCREMENT column we need to alter that too, and this step involves naming the primary key column, which will vary per table.

We now have an Audit table in a separate schema that has the same columns. Part of the process of any new Schema Release is to ensure these tables are kept in sync. An appropriate I_S statement could be used for verification. Support for a TRIGGER that runs on instance startup and throws an error to the error log or system error log (possible via a UDF) would enable a balance check here.
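
One possible verification, comparing column names and positions between the base and audit schemas used in this example (an empty result means the tables are in sync):

SELECT b.table_name, b.column_name, b.ordinal_position
FROM   information_schema.columns b
LEFT JOIN information_schema.columns a
       ON  a.table_schema     = 'sakila_audit'
       AND a.table_name       = b.table_name
       AND a.column_name      = b.column_name
       AND a.ordinal_position = b.ordinal_position
WHERE  b.table_schema = 'sakila'
AND    b.table_name   = 'customer'
AND    a.column_name IS NULL;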

Access to Audit Information

We provide a VIEW over the Audit table for history purposes. In addition, we use this to simplify trigger management.

mysql> CREATE VIEW sakila.customer_history AS SELECT * FROM sakila_audit.customer;

NOTE: No index optimization has been performed on the Audit Table. It would be anticipated that existing Indexes could indeed be dropped and replaced with new indexes appropriate for data access.

Trigger Creation

In order to keep a copy of all data, we introduce two database triggers to maintain a copy of all data in the history table. One could say that History consists of the Current Version (in the base table) plus all previous versions in the Audit Table; this requires 3 triggers. An alternative is to keep a full copy of all versions, including the current one, in the Audit Table. This requires 2 triggers and takes more disk space, however it is a simpler and cleaner implementation.

USE sakila;
DELIMITER $$
DROP TRIGGER IF EXISTS customer_ari$$
CREATE TRIGGER customer_ari
AFTER INSERT ON customer
FOR EACH ROW
BEGIN
  INSERT INTO customer_history
  SELECT * FROM customer
  WHERE  customer_id = NEW.customer_id;
END;
$$
DROP TRIGGER IF EXISTS customer_aru$$
CREATE TRIGGER customer_aru
AFTER UPDATE ON customer
FOR EACH ROW
BEGIN
  INSERT INTO customer_history
  SELECT * FROM customer
  WHERE  customer_id = NEW.customer_id;
END;
$$
DELIMITER ;

NOTE: I do not generally like to use 'SELECT *', however in this situation the trigger is significantly simplified. This is of benefit if you are maintaining audit triggers on many tables. The disadvantage is you must ensure your schema tables (e.g. sakila and sakila_audit) are always kept in sync with the same number and order of columns. Failing to add a column to the audit database will result in an error, which is a good confirmation. Failing to add a column in the right order may corrupt your data. Exercise caution when modifying the schema in this situation.

Testing

As with any proper coding, we need to test this. The following sample SQL was run to test on a sample database.

SET FOREIGN_KEY_CHECKS=0;
USE sakila;
TRUNCATE TABLE sakila.customer;
TRUNCATE TABLE sakila_audit.customer;
SELECT 'no customer data', IF (count(*)=0,'OK','ERROR'),COUNT(*) AS VALUE from customer;
SELECT 'no customer history data', IF (count(*)=0,'OK','ERROR'),COUNT(*) AS VALUE from customer_history;
INSERT INTO customer (customer_id,store_id,first_name,last_name,email,address_id,active,create_date)
              VALUES(NULL,1,'mickey','mouse','[email protected]',1,TRUE,NOW());
SELECT 'customer data = 1 row', IF (count(*)=1,'OK','ERROR'),COUNT(*) AS VALUE from customer;
SELECT 'customer history data = 1 row', IF (count(*)=1,'OK','ERROR'),COUNT(*) AS VALUE from customer_history;
INSERT INTO customer(customer_id,store_id,first_name,last_name,email,address_id,active,create_date)
              VALUES(NULL,1,'donald','duck','[email protected]',1,TRUE,NOW());
SELECT 'customer data = 2 rows', IF (count(*)=2,'OK','ERROR'),COUNT(*) AS VALUE from customer;
SELECT 'customer history data = 2 rows', IF (count(*)=2,'OK','ERROR'),COUNT(*) AS VALUE from customer_history;
UPDATE customer SET email='[email protected]' where email='[email protected]';
SELECT 'customer data = 2 rows', IF (count(*)=2,'OK','ERROR'),COUNT(*) AS VALUE from customer;
SELECT 'customer history data = 3 rows', IF (count(*)=3,'OK','ERROR'),COUNT(*) AS VALUE from customer_history;
DELETE FROM customer where email='[email protected]';
SELECT 'customer data = 1 rows', IF (count(*)=1,'OK','ERROR'),COUNT(*) AS VALUE from customer;
SELECT 'customer history data = 3 rows', IF (count(*)=3,'OK','ERROR'),COUNT(*) AS VALUE from customer_history;
DELETE FROM customer;
SELECT 'customer data = 0 rows', IF (count(*)=0,'OK','ERROR'),COUNT(*) AS VALUE from customer;
SELECT 'customer history data = 3 rows', IF (count(*)=3,'OK','ERROR'),COUNT(*) AS VALUE from customer_history;

Improvements

It would be great if MySQL's procedural language were a little more flexible and robust. Some improvements I'd love to see that would enable a more programmatic solution, as the above contains a number of dependencies on schema names and column names:

  • Raise Error Handling to throw errors appropriately
  • Anonymous code block support, e.g. BEGIN …code… END; with automatic execution, rather than needing to create a Procedure and then execute it.
  • Ability to execute dynamic SQL more easily, for example CREATE DATABASE IF NOT EXISTS @variable; or CREATE VIEW @schema.@table_name_history FROM … (a PREPARE-based workaround is sketched after this list)
  • Support for multiple type (BEFORE|AFTER INSERT|UPDATE|DELETE) triggers per table.
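
Until then, one partial workaround available today is server-side prepared statements, which will execute a statement assembled in a user variable (only a limited set of statement types can be prepared). A sketch, following the schema names used above:

SET @schema = 'sakila_audit';
SET @table  = 'customer';
SET @sql = CONCAT('CREATE TABLE IF NOT EXISTS ', @schema, '.', @table,
                  ' LIKE sakila.', @table);
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;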

INFORMATION_SCHEMA Query

mysql> SET @schema='sakila';
mysql> SET @table='customer';
mysql> SELECT DISTINCT CONCAT('ALTER TABLE ', table_name, ' DROP KEY `',constraint_name,'`;') AS cmd
           FROM INFORMATION_SCHEMA.table_constraints
           WHERE constraint_schema=@schema
           AND table_name=@table
           AND constraint_type='UNIQUE';

Focus on what you do best

When you have a great idea for a web application, it can be hard, with all the moving parts, to focus just on what is your uniqueness or differentiator from everybody else.

You may want to have control over your forums, comments, chat, photo management etc., i.e. user data, but how much does that help you? Is allocating resources to these features, when plenty of completed applications exist, distracting you and lengthening your time to market?

I always like to refer to Guy Kawasaki's quote "Don't worry, be crappy". While I don't necessarily agree with just throwing functionality out to the web, I believe in quality over quantity; you want to ensure that more time is spent reviewing the input for new or improved features rather than fixing bugs, bugs, bugs.


Ping.fm and Plurk are two new community-driven sites that have leveraged the functionality of other sites, these being Get Satisfaction – People-Powered Customer Service for Everything! and Disqus – Turn your blog comments into a webwide discussion.

There are advantages and disadvantages to this approach. As a smaller web site with a growing community, exposing what you do to a wider audience by using a third party to manage something can greatly help with exposure and associated marketing at no cost. On the down side, you are losing traffic to another site.

You need to ensure you can always get access to your data and your community contributions. Ensuring adequate APIs for integration and data extraction is key. From a technology perspective, BitKeeper and LaunchPad come to mind. BitKeeper is a closed source version control system that MySQL used. This was a killer for community contributions, where individual users simply could not contribute, and if they even wanted access to the source code via the repository they had to pay for an appropriate client. SourceForge and Apache are two examples of huge communities that leverage the power of the community. LaunchPad is the latest kid on the block, but suffers from the fact that while access to applications hosted there is free, the actual LaunchPad code itself is closed. This has caused some issues.

It's a fine line, and in the genre of software development, the Internet can create copies of just about anything overnight. More and more I hear about companies working in stealth mode rather than with open community input and interaction, but that's a topic for another discussion.

Getting Started with Simple DB

With my continued investigation of alternative data management options in cloud computing, I'm now evaluating Amazon SimpleDB. Still in restricted beta, it helps to have a friend on the inside.

Working through the Getting Started Guide (API Version 2007-11-07) was OK, if annoying in parts. Here are some issues I found. I was working with Java as the programming language.

  • The docs enable you to view the language syntax in Java, C#, Perl, PHP, VB.NET or ScratchPad. You can also restrict the view to a specific language, a rather cool feature. One observation is that there is no Python, which is rather ironic, as my first investigation was Google App Engine (GAE), and the only language there is Python. Something I had to learn first.
  • Preparing the Samples asks you to download the Amazon SimpleDB Sample code, but this is not actually a link to the sample code, rather an index to Community Code. I used the Java Library for Amazon SimpleDB, which wasn't even on the first page of results.
  • The supplied docs for specifying the Classpath were rather wrong; it helps to simply find all .jar files and include these. Mine looks like:
    • #!/bin/sh
      #  http://docs.amazonwebservices.com/AmazonSimpleDB/2007-11-07/GettingStartedGuide/?ref=get-started
      
      SDB_HOME="/put/directory/to/unzip/here"
      export CLASSPATH=$CLASSPATH:$SDB_HOME/src/com/amazonaws/sdb/samples/
      export CLASSPATH=$CLASSPATH:$SDB_HOME/lib/amazon-simpledb-2007-11-07-java-library.jar
      export CLASSPATH=$CLASSPATH:$SDB_HOME/third-party/log4j-1.2.14/log4j-1.2.14.jar
      export CLASSPATH=$CLASSPATH:$SDB_HOME/third-party/commons-codec-1.3/commons-codec-1.3.jar
      export CLASSPATH=$CLASSPATH:$SDB_HOME/third-party/commons-logging-1.1/commons-logging-1.1.jar
      export CLASSPATH=$CLASSPATH:$SDB_HOME/third-party/jaxb-ri-2.1/jaxb-xjc.jar
      export CLASSPATH=$CLASSPATH:$SDB_HOME/third-party/jaxb-ri-2.1/activation.jar
      export CLASSPATH=$CLASSPATH:$SDB_HOME/third-party/jaxb-ri-2.1/jaxb-impl.jar
      export CLASSPATH=$CLASSPATH:$SDB_HOME/third-party/jaxb-ri-2.1/jaxb-api.jar
      export CLASSPATH=$CLASSPATH:$SDB_HOME/third-party/jaxb-ri-2.1/jsr173_1.0_api.jar
      export CLASSPATH=$CLASSPATH:$SDB_HOME/third-party/commons-httpclient-3.0.1/commons-httpclient-3.0.1.jar
      
  • All examples in the docs then refer to making changes such as "invokeCreateDomain(service, action); line and add the following lines after // @TODO: set action parameters here:"; the problem is the samples don't have the action variable, but rather a variable called request. The comment in the code "// @TODO: set request parameters here" is at least accurate.
  • The docs contain a lot of Java syntax that would not, for example, compile correctly. Plenty of occurrences of a missing semicolon ';'.
  • Each example defines
    String accessKeyId = "";
    String secretAccessKey = "";
    OK for the first example, but as soon as I moved to the second, I re-factored these into an Interface called Constants.
  • In all the examples, they never provide any sample output; this would help just to confirm things. In the Java Library for Amazon SimpleDB download link there is an example output, but it's outdated, with a new data attribute called BoxUsage. My output is:
    • CreateDomain Action Response
      =============================================================================
      
          CreateDomainResponse
              ResponseMetadata
                  RequestId
                      f04df8eb-71fa-4d4e-9bd5-cc98e853a2e4
                  BoxUsage
                      0.0055590278
      
  • And now some specifics. In a relational database such as MySQL, you have Instance/Schema/Table/Column. With SimpleDB it would appear that you need a separate AWS account for Instance management. That's probably a good thing, as it will enable tracking of costs. There appears to be no concept of a Schema. Data is stored in Domains; this is the equivalent of a Table. Within each Domain, you specify Attributes, a correlation with Columns. One key difference is the ability to define a set of Attributes with the same identity (much like a list that is supported via Python/GAE). For any row of data, you must specify an itemName, this being equivalent to a Primary Key. These names take me back to the old days (20 years ago) of Logical Data Models that used entities, attributes and relationships.
  • The term Replace is used when updating data for a given row.
  • When retrieving data, you first return a list of itemNames, then you can view the Attributes for each given item.
  • You can perform a simple where qualification using a Query Expression, including against multiple Attributes via intersection syntax.
  • An observation that is of significant concern is the lack of security around any type of operation. The Getting Started guide ends with Deleting the Domain. Is there no means to define permissions for types of users, such as an Application User and a Database Administrator for managing the objects?

Well, it took me longer to write this post than to run through the example, but at least on a lazy Sunday afternoon, a first look at SimpleDB was quite simple.

I did also run into an error initially. I first started just via the CLI under Linux (CentOS 5), but switched to installing Eclipse on Mac OS X for better error management, and of course the error didn't occur there.

Do you store credit cards in your MySQL Database?

The Payment Card Industry Data Security Standard (PCI DSS) has been developed to help organizations that process card payments to prevent credit card fraud, cracking and various other security vulnerabilities and threats.

This has been developed by the major credit card companies such as MasterCard and Visa. If one of the companies that created the standard, MasterCard International, uses PCI General for MySQL, then you can be confident that the software is of the highest quality and satisfies all requirements.

A few questions to consider.

Q: Why is PCI compliance important?
A: Credit card companies will start to demand that organizations storing credit card numbers have adequate security for their (as in the credit card company's) data.

Q: How can I support PCI compliance with minimal impact?
A: Any solution that requires coding changes and then the necessary testing and verification will incur a large cost for a successful deployment. A turn key solution that can be implemented in a near seamless manner without code changes is ideal for any company.

PCI General for MySQL achieves this. With the security and encryption managed at the operating system kernel level, MySQL data and application communications are totally secure. Introducing PCI General via MySQL replication, with a controlled fail-over to a MySQL slave running under PCI General, is the simplest and easiest method of introducing PCI compliance into your production environment.

For more information, please visit PCI General for MySQL or Contact Me and I’ll direct you to additional information.

Facebook performance woes today

It seems that of late a number of successful community web sites have been experiencing problems with scalability and performance. Today it's Facebook.

Initially I got a “Problem loading page” browser message. No big deal, you get those. A few refreshes and I’m in. I was trying to send a message, I got the following popup error message.

No Network
Transport error (#1001) while retrieving data from endpoint `/inbox/ajax/ajax.php': A network error occurred. Check that you are connected to the internet.

Well yes, I'm connected to the Internet; every other site is fine. A more technical look gives the lovely connection aborted message.

lynx http://facebook.com

Looking up facebook.com
Making HTTP connection to facebook.com
Sending HTTP request.
HTTP request sent; waiting for response.
HTTP/1.1 302 Found
Data transfer complete
HTTP/1.1 302 Found
Using http://www.facebook.com/common/browser.php
Looking up www.facebook.com
Making HTTP connection to www.facebook.com
Sending HTTP request.
HTTP request sent; waiting for response.
Retrying as HTTP0 request.
Looking up www.facebook.com
Making HTTP connection to www.facebook.com
Sending HTTP request.
HTTP request sent; waiting for response.
Alert!: Unexpected network read error; connection aborted.
Can't Access `http://www.facebook.com/common/browser.php'
Alert!: Unable to access document.

lynx: Can't access startfile

Deleting from ARCHIVE tables

I can't say I've used the ARCHIVE storage engine before, but at the NY MySQL Meetup last night there was discussion of the improvements to ARCHIVE in 5.1 and the fact that you cannot DELETE from an ARCHIVE table. A simple test confirmed this indeed throws an error.

DROP TABLE IF EXISTS url_log;
CREATE TABLE url_log(
log_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
log_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
user_id INT UNSIGNED NULL,
url VARCHAR(100) NOT NULL,
PRIMARY KEY (log_id))
ENGINE=ARCHIVE;

DELETE FROM url_log;
ERROR 1031 (HY000): Table storage engine for 'url_log' doesn't have this option

However, part of MySQL 5.1, which is at RC status, is partitioning. Thinking that one could partition, say, a log table by day of month, then do what you want with the data in a partition and finally delete the partition, I tried the following test.
NOTE: For the purposes of testing, I used SECOND() rather than DAY() and smaller ranges for simplicity.

DROP TABLE IF EXISTS url_log;
CREATE TABLE url_log(
log_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
log_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
user_id INT UNSIGNED NULL,
url VARCHAR(100) NOT NULL,
PRIMARY KEY (log_id))
ENGINE=ARCHIVE
PARTITION BY RANGE ( SECOND(log_date) ) (
    PARTITION p0 VALUES LESS THAN (10),
    PARTITION p1 VALUES LESS THAN (20),
    PARTITION p2 VALUES LESS THAN (30),
    PARTITION p3 VALUES LESS THAN MAXVALUE
);

However this throws an error.

ERROR 1503 (HY000): A PRIMARY KEY must include all columns in the table's partitioning function

Primary keys and AUTO_INCREMENT do not play well with partitioning, so for the purpose of this proof of concept I'll drop these.

DROP TABLE IF EXISTS url_log;
CREATE TABLE url_log(
log_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
user_id INT UNSIGNED NULL,
url VARCHAR(100) NOT NULL)
ENGINE=ARCHIVE
PARTITION BY RANGE ( SECOND(log_date) ) (
    PARTITION p0 VALUES LESS THAN (10),
    PARTITION p1 VALUES LESS THAN (20),
    PARTITION p2 VALUES LESS THAN (30),
    PARTITION p3 VALUES LESS THAN MAXVALUE
);

Create a simple stored procedure to randomly generate some data. The procedure is not efficient, using RAND() and SLEEP(), but it does generate some data.

DELIMITER $$
DROP PROCEDURE IF EXISTS load_url_log$$
CREATE PROCEDURE load_url_log (insert_count INT)
BEGIN
  DECLARE i INT DEFAULT 1;

  WHILE i < insert_count
  DO
     INSERT INTO url_log(user_id, url)
     VALUES (FLOOR(RAND()*100), CONCAT(REPEAT('x',FLOOR(RAND()*99)),SLEEP(RAND())));
     SET i = i + 1;   -- increment the loop counter
  END WHILE;
END $$

DELIMITER ;

CALL load_url_log(500);

A quick check shows a distribution of data.

mysql> select distinct(second(log_date)) from url_log;
mysql> select distinct(second(log_date)) from url_log where second(log_date) < 10;
+--------------------+
| (second(log_date)) |
+--------------------+
|                  0 |
|                  1 |
|                  2 |
|                  3 |
|                  4 |
|                  5 |
|                  6 |
|                  7 |
|                  8 |
|                  9 |
+--------------------+
10 rows in set (0.00 sec)

And now the purpose of the test. Deleting data via deleting a partition.

mysql> ALTER TABLE url_log DROP PARTITION p0;
Query OK, 0 rows affected (0.01 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> select distinct(second(log_date)) from url_log where second(log_date) < 10;
Empty set (0.00 sec)

And it works. Re-creating the partition, however, did not.

ALTER TABLE url_log ADD PARTITION (PARTITION p0 VALUES LESS THAN (10));
ERROR 1481 (HY000): MAXVALUE can only be used in last partition definition

OK, so in my example I was lazy; I didn't create specific partitions as I would in the real world, e.g. 31 partitions for days. Let's simulate a little better.

DROP TABLE IF EXISTS url_log;
CREATE TABLE url_log(
log_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
user_id INT UNSIGNED NULL,
url VARCHAR(100) NOT NULL)
ENGINE=ARCHIVE
PARTITION BY RANGE ( SECOND(log_date) ) (
    PARTITION p0 VALUES LESS THAN (10),
    PARTITION p1 VALUES LESS THAN (20),
    PARTITION p2 VALUES LESS THAN (30),
    PARTITION p3 VALUES LESS THAN (60)
);
ALTER TABLE url_log DROP PARTITION p0;
ALTER TABLE url_log ADD PARTITION (PARTITION p0 VALUES LESS THAN (10));
ERROR 1493 (HY000): VALUES LESS THAN value must be strictly increasing for each partition


Still doesn’t work. RTFM indicates this.

DROP TABLE IF EXISTS url_log;
CREATE TABLE url_log(
log_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
user_id INT UNSIGNED NULL,
url VARCHAR(100) NOT NULL)
ENGINE=ARCHIVE
PARTITION BY LIST ( SECOND(log_date) ) (
    PARTITION p0 VALUES IN (0,1,2,3,4,5,6,7,8,9),
    PARTITION p1 VALUES IN (10,11,12,13,14,15,16,17,18,19),
    PARTITION p2 VALUES IN (20,21,22,23,24,25,26,27,28,29),
    PARTITION p3 VALUES IN (30,31,32,33,34,35,36,37,38,39),
    PARTITION p4 VALUES IN (40,41,42,43,44,45,46,47,48,49),
    PARTITION p5 VALUES IN (50,51,52,53,54,55,56,57,58,59)
);

ALTER TABLE url_log DROP PARTITION p0;
ALTER TABLE url_log ADD PARTITION (PARTITION p0 VALUES IN (0,1,2,3,4,5,6,7,8,9));

And that works.

So by creating an ARCHIVE table with 31 LIST partitions, one for each day of the month, you could use ARCHIVE to log data into day partitions, then analyze, summarize, log or copy the data from the previous day, and purge it within 28 days.
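
A sketch of that layout, untested here, using DAYOFMONTH() (which returns 1 to 31) and a hypothetical table name:

CREATE TABLE url_log_daily(
log_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
user_id INT UNSIGNED NULL,
url VARCHAR(100) NOT NULL)
ENGINE=ARCHIVE
PARTITION BY LIST ( DAYOFMONTH(log_date) ) (
    PARTITION p01 VALUES IN (1),  PARTITION p02 VALUES IN (2),
    PARTITION p03 VALUES IN (3),  PARTITION p04 VALUES IN (4),
    PARTITION p05 VALUES IN (5),  PARTITION p06 VALUES IN (6),
    PARTITION p07 VALUES IN (7),  PARTITION p08 VALUES IN (8),
    PARTITION p09 VALUES IN (9),  PARTITION p10 VALUES IN (10),
    PARTITION p11 VALUES IN (11), PARTITION p12 VALUES IN (12),
    PARTITION p13 VALUES IN (13), PARTITION p14 VALUES IN (14),
    PARTITION p15 VALUES IN (15), PARTITION p16 VALUES IN (16),
    PARTITION p17 VALUES IN (17), PARTITION p18 VALUES IN (18),
    PARTITION p19 VALUES IN (19), PARTITION p20 VALUES IN (20),
    PARTITION p21 VALUES IN (21), PARTITION p22 VALUES IN (22),
    PARTITION p23 VALUES IN (23), PARTITION p24 VALUES IN (24),
    PARTITION p25 VALUES IN (25), PARTITION p26 VALUES IN (26),
    PARTITION p27 VALUES IN (27), PARTITION p28 VALUES IN (28),
    PARTITION p29 VALUES IN (29), PARTITION p30 VALUES IN (30),
    PARTITION p31 VALUES IN (31)
);

-- purge a day and re-create its partition, as tested above
ALTER TABLE url_log_daily DROP PARTITION p01;
ALTER TABLE url_log_daily ADD PARTITION (PARTITION p01 VALUES IN (1));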

Check your spelling

I've been Plurking more lately rather than Twittering. I'd like to offer to help out at Twitter if I could find the right person to talk to.

I'm no English major, but I do like to ensure my spelling is correct (at least for the bulk of the audience). You see grammar problems on sites, due to English not being the first language of many people, but one should always check one's spelling, as per this popup message I got today.

Time Warner Cable Speed

I had my Time Warner cable installed yesterday, a rather painless process. Reported as having Internet speeds of 10 Mbps down and 0.5 Mbps up, these were confirmed with speedtest.net.

Installation – July 5

A day later – Jul 6

Setting up on EC2

Thanks to my friend Dustin and his EC2 demo using the Elasticfox Firefox Extension for Amazon EC2, I got an EC2 image set up. With other references (Link 1, Link 2, Link 3) I was also able to create my own AMI.

Some notes specific for my configuration.

Pre-configure the ElasticFox key for launching SSH connections directly from ElasticFox.

mkdir ~/ec2-keys
mv ~/Downloads/elasticfox.pem ~/ec2-keys/id_elasticfox
chmod 600 ~/ec2-keys/id_elasticfox
chmod 700 ~/ec2-keys/
ssh -i /Users/rbradfor/ec2-keys/id_elasticfox [email protected]

Installed Software.

apt-get update
apt-get -y autoremove
apt-get -y install apache2
apt-get -y install mysql-server
# Prompts for password (very annoying)
apt-get -y install php5
apache2ctl graceful
echo "Hello World" > /var/www/index.html
echo "< ? phpinfo() ?>" > /var/www/phpinfo.php

Configuration to save AMI.

scp -i ~/ec2-keys/id_elasticfox ~/ec2-keys/id_elasticfox pk-CHK7DP4475BWUKIUF4WFDIW3VMYDYOHQ.pem cert-CHK7DP4475BWUKIUF4WFDIW3VMYDYOHQ.pem [email protected]:/mnt
ec2-bundle-vol -d /mnt -c cert-CHK7DP4475BWUKIUF4WFDIW3VMYDYOHQ.pem -k pk-CHK7DP4475BWUKIUF4WFDIW3VMYDYOHQ.pem -u AccountNumber -r i386 -p ubuntu804_lamp
ec2-upload-bundle -b rbradford_804_lamp_ami -m /mnt/ubuntu804_lamp.manifest.xml -a AccessID -s SecretKey

Giving control of your data to the cloud

I've been doing some research and evaluation of more cloud computing options. Specifically my focus has been on data stores, and considering how to augment an existing operation using a popular database such as MySQL.

I've been looking first at Google App Engine, and now that I have my SimpleDB beta I will be looking there next.

Some observations I’ve struggled with are:

  • No Native CLI, say for basic data setup. You can do some programmatic input for SELECT statements in a Query object in a SQL like syntax called GQL, but you can’t do DML
  • No simple data viewer. In production you would not do this, but I’m in evaluation and still looking at functionality, verification of results etc. A phpmyadmin clone is what I’m seeking for example. I suspect this would have been a good Google Summer of Code project.
  • Python only. While it was great for me to spend a half day learning Python syntax, it was just another small starting hurdle. If your organization doesn't do or use Python, this is another skill or resource needed.

But the biggest concern and hurdle, coming from my traditional principles, is loss of control. Loss of control for monitoring and instrumentation, performance, availability, and backup and recovery.

Recent issues with performance and unavailability have highlighted that App Engine is not suitable for mission-critical web sites, at least not as your primary focus. I see huge benefits in augmenting access to information, perhaps more historical data for example; a well defined API within your application could easily support options to use cloud storage as secondary storage and primary retrieval of less important data.

My focus when revisiting will be looking at means of object translation between tables and the Data Store, and maybe an API for data transfer.

I suspect that when I get to evaluating EC2/S3 more I will have much more support by being able to leverage existing tools and techniques.

Your site unavailable page

When your site is down, what do people see? If it is overloaded, do you respond well or not?

For much larger organizations with the infrastructure and DNS management, this should be part of your DR strategy. Yesterday was Firefox Download Day. I don't think Mozilla coped as well as they could have; they were not prepared. Looking at the two screens below you can see examples of error pages I got.

In this case it was a planned event; the increase in traffic was predictable ahead of time. Surely they could have had a static page with something about the event and the high load, and even a static page of links if your goal was to download Firefox 3.

My site certainly doesn't support automated failover; I have to make a DNS change at my domain registrar Go Daddy to a different host that is already prepared, and then wait for delegation, but I'm ready for a significant outage like the previous one.

What determines authoritative information

I had need to visit a particular store in New York on Sunday, on referral by a friend. I knew they had two locations. Like all tech-savvy people I googled sports authority new york. I even visited the website from the link of the top result (with map). I typed the address into my iPhone as listed by the map; why somebody hasn't invented a means to point and click I don't know (it probably does exist, but finding it and knowing about it is a completely more complex problem). I even clicked on the map and zoomed in for the nearest subway stop.

I got on the NY subway and headed into Manhattan. What resulted was me scratching my head when I could not find the intended store. In fact, the store never existed at 57 W 57th St. Even trying the phone number as per Google resulted in no answer. Fortunately the trusty iPhone with Internet access, and viewing the store locator on the official Sports Authority website, enabled me to find the closest store, over a mile away.

I have often joked about the reliance on online information and the assumption of accurate information from even trusted sites. I've used this example previously: I know 1 mile is approximately 1.6 kilometers, and if you put in "convert 1 mile to kilometer" Google gives you an answer of "1 mile = 1.609344 kilometers". What if that was indeed wrong, and it was 1.659344 for example?

Where are the safeguards for verifying information? Could it even be possible?
The benefit of readily available information does not equate to good information; indeed, today searching for something can provide too much information and not exactly what you are seeking.

One wonders!

Don't use HostMonster

Following a 2-4 day outage of my dedicated server at my hosting provider, I decided to move non-critical websites to shared hosting. I have one account with 1&1, but I created a second account to share load and act as a backup with www.hostmonster.com after a recommendation from a friend. I was able to move stuff and get some domains there, but it didn't last long.

What a disaster. NEVER USE HOSTMONSTER!

Probably about a week after my account was created, they decided to move my account. They didn't notify me it was going to happen, they just did it. They lost all my files, and did not tell me, even after multiple inquiries and phone calls.

Here is some history.

Tue, Jun 10, 2008 at 8:59 PM

Dear Hostmonster Customer,

Hostmonster has started migrating your account (ronaldbr).
Below you will find important migration details.  Please refer to your ticket
number 0 for any specific details.


Although Hostmonster will do everything possible to ensure that your
migration goes quickly and smoothly; it is important to understand
that your account will be moving from one physical server location to
another. During migration the IP address attached to your domain name
will be changed from your old server IP to your new server IP. This will
cause a temporary interruption in email, ftp, and the visibility
of your website.


This window of interruption occurs because most Internet Service Providers
(ISP's) take 24-72 hours to clear their Cache.


Although this window could last approximately 24-72 hours it typically
only lasts 48 hours before your site becomes fully functional again. Your
web browser (IE, Firefox, Netscape) has a Cached version of your site
stored on your local system. In some cases it will help if you clear your
browser cache. For more Information about Cache and clearing your browser
Cache please review our article on:

http://helpdesk.hostmonster.com/kb/index.php?x=&mod_id=2&id=352

If after waiting 48 hours and clearing your browsers Cache, your
website has not begun functioning normally please contact our World
Class U.S. based Support Team by phone:

   Hostmonster Support:

       * Main Line: (866) 573-4678

       * Outside U.S: (801) 494-8462

       Support Questions: Press 2


Important Migration Details:

Your username and cPanel password will remain the same.

   * Migration Date: June 10, 2008

   * Migration Start Time: 06:00 PM MST

   * Migration End Time: (estimate) 04:00 AM MST

   * Old Server IP: 74.220.207.97

   * New Server IP:


Check out the CEO's Blog!

Come see the latest news, information, and updates on Hostmonster. While
you're there tell me how you think our company is doing!

Thank you again for choosing HostMonster.Com!

Matt Heaton (CEO)

http://www.mattheaton.com/

I opened a ticket 24 hrs later at Wed Jun 11 2008 09:26PM.

I got a quick response, but was lied to when told no data was lost.

Wed Jun 11 2008 10:02PM by [email protected]
Dear Customer,
Thanks for contacting us.
We apologize for the trouble you've been having, we are working on the issue the migration has some complications none of your data was lost please allow 24 hours tops for the site to be fully functional again.
Thanks
Corbin
Level 1 Support Engineer
hostmonster.com

I asked for a reason why this was done, and why I wasn't even notified. I was given a lame response saying it "isn't likely" my account will be migrated again, but it "might be" needed, both in the same sentence.

Thu Jun 12 2008 10:13PM by [email protected]
We migrated your site to free up hard-disk space on our server, and I apologize that appropriate notice wasn't given. I have notified my supervisor in an effort to recommend improved communications. It isn't likely that your account will be migrated again for our business needs, but it might be needed in the future.

Thank you for your inquiry.

John Pratt
Support Level 1
HostMonster.com
866.573.4678

Probably my second or third call now; it is Friday morning, and some lame text has been added to my ticket saying it's being escalated.

Fri Jun 13 2008 08:32AM by [email protected]
I am reopening this ticket. It was supposedly moved from host97 to host262 but cpanel man. is still showing host97 and tracert shows host97. I've tried going to both host262.hostmonster.com:2082 and host97.hostmonster.com:2082 but it's not logging me in. I've tried to change the password and tried both host262.hostmonster.com:2082 and host97.hostmonster.com:2082 with the new password and won't login either. An L3 says the username broke in the migration but it's a different kind of "broken" since it doesn't have the error message in Cpanel Manager. I got L3 approval to move this ticket to the escalations queue.

Nicholas Martin
Support Level 1
HostMonster.com
866.573.4678

So now it's Sunday morning, and I'm on the phone again, with no information, reason or help forthcoming. A note was added to the ticket Saturday 2pm (not visible to me): apparently all files were lost on both the old server and the new server. When were you planning to tell the customer?

This service is woeful, I want my money back.

Of course when I said I wanted to cancel my account and get my money back (I can’t login remember), I was told I would have to call back when the billing department was open.

Well, my complaint will be going to [email protected] – Supervisor of Host Monster Tech Support.

Handling Disaster 101

I’ve had to accept the “practice what you preach” pill recently due to a disaster at my hosting provider. See Learning from a Disaster.

While it was my own personal site on the dedicated server in question, and not a revenue-generating business, I found that my MySQL Backup Strategy was incomplete (it is also based on code 4 years old). I found that I had not tested my Disaster Recovery Plan adequately. I have used my backup and recovery approach successfully in the past for various controlled situations and testing.

So what mistakes did I make? There were two.

1. I was using a cold backup approach, that is, copying the entire MySQL database in a controlled manner at the file system level. These copies were also stored on an alternate shared hosting server. This works fine when your backup server supports restoring data in this format; however, if your backup shared hosting facility does not give you access to the MySQL data area, this does not work. Not wanting to pay for two dedicated hosts, this backup solution is impractical for my present hosting. Time to consider alternatives, such as being prepared with an EC2 image.

2. I recently moved to using two MySQL instances, 5.0 GA and 5.1 RC. The problem is I didn’t adjust my backup scripts appropriately to reflect the two instances. Of course when my server was unavailable for 43 hours I was completely stuck; at best I could only throw up a Site Unavailable page rather than my website. Combined with my hosting provider totally screwing up DNS and admin access to manage DNS for 3-4 days, on top of almost 2 days of total server unavailability, I also had to change providers for my DNS. It would have been easier if I had a DNS DR plan.

The lesson is, no matter how trivial the information or website, if you actually want the content, make sure you test your recovery strategy ahead of a disaster.

Learning from a Disaster

As Farhan has already pointed out to us in Disaster is Inevitable – Must shutdown generators, my primary hosting provider The Planet had a serious meltdown: 9,000 servers unavailable, along with DNS and the administration application.

My server was effectively totally unavailable from 3PM Saturday until 10AM Monday, 43 hours in total.

The problem didn’t stop there. I started and verified servers and domains, but some 8 hours later I found that the DNS was wrong on two important domains (I didn’t discover this earlier because I have them in my local /etc/hosts), because I had moved them to a different IP address about 2 weeks ago.

The Planet denied any problems; a ticket was logged to get them fixed because the admin interface was still down.
Wind forward, and the Service Updates page states:

June 3 – 3:00pm CDT
All DNS Zone files for ns1, ns2, ns5 and ns6 are completely updated as of the information that was available 5:30PM Central Saturday. All DNS servers have been rebooted and BIND has been restarted.

What a flat out lie. A DNS lookup directly against the name servers confirms it’s wrong. At 12:00am Wednesday the admin interface finally gives access to view and update zone files. What the? It indicates the IP address it should be, but clearly not what it actually is.

I will so be sending a complaint to [email protected] when I finally get this resolved.

And just to make this all worse, my present professional site ronaldbradford.com is of critical importance. It’s my only exposure for providing information to potential employers, and it’s used for the preparation of consulting information. Because of events starting today for NY Internet Week, I was forced earlier today to purchase DNS services at www.easydns.com. That also didn’t go to plan, but that’s yet another story.

Working with Google App Engine

Yesterday I took a more serious look at Google App Engine; I got a developer account some weeks ago.

After going through the getting started demo some time ago, I chose an idea for a Facebook application and started in true eXtreme Programming (XP) style (i.e. what’s the bare minimum required for a first iteration?). I taught myself some Python and within just a few minutes had some working data being randomly generated, all within the development SDK environment on my MacBook. I was not able to deploy initially via the big blue deploy button; the catch is you have to register the application manually online first.

Then it all worked, and hey presto, my application was up at the provided domain hosting at appspot.com.

Coming from a truly relational environment, most notably MySQL in recent years, I found the Datastore API different in a number of ways.

  • There is no means of sequences/auto increment. There is an internal unique key, but it’s a string, not an integer, so I could not re-use it.
  • The ListProperty enables lists in Python (like arrays) to be easily stored.
  • The ReferenceProperty is used as a foreign key relationship, and can then be referenced further within an object hierarchy.
  • I really missed an interactive interface. You have no ability to look at your data; specifically, I wanted to inspect some data and then delete some data, but I had to do all this via code.

Having developed a skeleton Facebook application before in PHP, I figured a Python version would not be that much more work, but here is where I got stumped. The information at Hosting a Facebook Application on Google AppEngine, leveraging the PyFacebook project, didn’t enable me to integrate Google App Engine with Facebook just yet.

This had me thinking I needed to resort to a simple standalone Python Facebook application to confirm the PyFacebook usage. Now my problems started. On a Mac it’s a lot more complex to install and configure Python/Django etc. than under Linux. I tried to do it on my dedicated server, but drat, Python is at 2.3.4, and it seems 2.5.x is needed.

Still, it was a valuable exercise. I dropped the Facebook goal and just worked on more Google App Engine stuff. It is still early days, but it was productive to try out this new technology.

What I need to work on now is how to hold state within the Python infrastructure so I can manage a user login, and store and retrieve user data for my sample app.

Screen recorder software for Mac

I’ve been researching what software exists for the Mac to produce video-style screen recordings with audio. Multiple people have recommended Camtasia, however this is Windows only.

Today, with some research, I came across the Jing Project, which at first test seems perfect: easy to install, easy to use, and it even deploys to an integrated Screencast. And it’s free.

What’s perplexing is that it’s a product from the same company, TechSmith, yet it’s not listed as a product?

Beyond Blogs

I was reading today, in the printed magazine Business Week, the article Beyond Blogs. It’s unusual these days to actually read on paper what we can find in our online world.

What’s interesting is that the printed article did actually contain content I didn’t find online. There was a section called “We Didn’t See ‘em Coming”, and its final important site mentioned was iTunes. I found the following comment extremely relevant: “…But we didn’t guess it would become the leading destination for podcast downloads. Contrary to our expectations, podcasts have evolved into a feature of traditional radio, not a rival to it.”

It’s important that with any business model you know, understand and consistently review your competitors. I find many organizations don’t do this. You need to know your competitors. But as mentioned with iTunes, the designers of podcasts could easily have considered radio to be a competitor initially. One must always evaluate the changing times regularly.

The following are three more quotes of interest.

But in the helter-skelter of the blogosphere, we wrote, something important was taking place: In the 10 minutes it took to set up a blogging account, anyone with an Internet connection could become a global publisher. Some could become stars and gain power.

Like the LAMP stack has done for websites, the cost of entry to get exposure is now very low. The problem is that too much content now exists to review, compare and evaluate effectively.

Turned out it wasn’t quite that simple. The magazine article, archived on our Web site, kept attracting readers and blog links. A few professors worked it into their curricula, sending class after class of students to the story. With all this activity, the piece gained high-octane Google juice.

I’d not heard of Google juice before.

In relation to LinkedIn, Facebook and MySpace: “While only a small slice of the population wants to blog, a far larger swath of humanity is eager to make friends and contacts, to exchange pictures and music, to share activities and ideas. These social connectors are changing the dynamics of companies around the world. Millions of us are now hanging out on the Internet with customers, befriending rivals…”

Corruption using MySQL AES_[EN|DE]CRYPT functions

I was contacted this week by a previous client regarding a failure in processing data. This was a Contact, Financial and Media Management system I developed for a not-for-profit organization on a LAMJ stack, and I’ve had to do nothing to it in the 3 years since deployment: no bug fixes, no feature enhancements. The only thing lacking is additional management reporting, and data is extracted for that purpose at present.

It runs on commodity hardware, Linux and MySQL, and its only unscheduled downtime was a power failure before UPS power was installed. However this all changed last week: processing of regularly scheduled encrypted data simply failed unexpectedly.

A summary of the environment (a minimal sketch of the read/write pattern follows the list):

  • Data is inserted with AES_ENCRYPT(data,key);
  • Data is retrieved with AES_DECRYPT(data,key);
  • Data is never updated.
  • New data is always added, and historical data always remains.
  • The application has no end user access to modify data.
  • The application has no function anywhere to modify the data, so no rogue change could have occurred.
  • An AUTO_INCREMENT column and TIMESTAMP columns provide a level of auditing for data forensics.
  • Backup copies of data exist up to 3 years for reference.
  • The seed key has not changed.
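
As a minimal sketch of the pattern described above (the table and column names here are hypothetical, not the actual application schema), the write and read paths look something like this:

-- Hypothetical table illustrating the pattern; the real schema differs.
-- AES_ENCRYPT() returns a binary string, so a binary column type is used.
CREATE TABLE secure_data (
  id       INT UNSIGNED NOT NULL AUTO_INCREMENT,
  created  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  payload  BLOB NOT NULL,
  PRIMARY KEY (id)
);

-- Data is only ever inserted, never updated
INSERT INTO secure_data (payload) VALUES (AES_ENCRYPT('sensitive text','seed-key'));

-- Data is retrieved by decrypting with the same seed key
SELECT id, created, AES_DECRYPT(payload,'seed-key') AS clear_text FROM secure_data;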

The problem

Selecting the first 10 rows saved in the table (by AUTO_INCREMENT key, and confirmed by dates), 8 of 10 are now corrupt. Selecting the last 10 rows inserted, zero are corrupt. Across 20,000 records, 75% are now corrupt.
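
For reference, the kind of query used to quantify the damage looks like the following (again using the hypothetical names from the sketch above; AES_DECRYPT() generally returns NULL when it cannot decrypt a value with the supplied key):

-- Count rows whose payload no longer decrypts with the seed key
SELECT COUNT(*) AS total_rows,
       SUM(AES_DECRYPT(payload,'seed-key') IS NULL) AS corrupt_rows
FROM secure_data;

-- Compare the oldest rows (ordered by AUTO_INCREMENT key) against the newest
SELECT id, created, AES_DECRYPT(payload,'seed-key') IS NULL AS corrupt
FROM secure_data ORDER BY id LIMIT 10;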

A lot of analysis has been performed to identify and track the data that was recorded, with a certain amount of data forensics, and it was confirmed that the information was successfully processed last month, for example. As this system performs financial transactions there is a lot more auditing available and being reviewed, however it is simply a mystery that I can’t solve.

  • What options remain? Is this a hardware problem: disk, or even memory?
  • What other data may be corrupt?
  • How can more investigation occur to track the cause of the problem?
mysql> select version();
+------------------+
| version()        |
+------------------+
| 4.1.10a-standard |
+------------------+
1 row in set (0.00 sec)

Updated Website and Professional Blog

For those that have my existing blog bookmarked, or use any RSS/Atom feeds please update your information now.

My new blog can be found at http://ronaldbradford.com/blog/

RSS 2 is http://ronaldbradford.com/blog/feed/rss2, Atom is http://ronaldbradford.com/blog/feed/atom

OK, so before you read on, please update now. While all old links will redirect, I encourage you to take a few moments to update your browser bookmarks and RSS reader.

So what’s new? Well, as Baron stated in I moved this blog to pairLite with zero downtime, and it was easy, I moved my old blog site blog.arabx.com.au and my temporary staging site ronaldbradford.com, transitioning my password-protected beta site, all with no downtime. Even with all my testing over several weeks, and several test users, my production migration had one hitch with some internal URL redirects not working correctly, but my site’s main links were not affected. You would have had to be quick at 1:30am on Saturday to catch it.

Other features include a consolidation of information from various sites, now all under my brand ronaldbradford.com, allowing me to expand on that. I’ve also split my professional blog from my personal views, photos and comments, as well as adding new SEO/SEM optimizations. More on these experiences later. My theme was actually inspired by some business cards I had made in Australia while home last week: quality 300GSM color cards that were designed and printed in just a few hours and are very good quality (which was very impressive).

I’m now free to work on so many other projects, including many MySQL draft posts, now that Version 1.0 of my new site is out. As with all release early, release often sites, expect 1.1 soon!

Everything open source from Sun

In the recent interview Missed Twitter Questions from Jonathan Schwartz Interview at Web 2.0 Expo Sun CEO Jonathan Schwartz is quoted as saying “Everything Sun delivers will be freely available, via a free and open license (either GPL, LGPL or Mozilla/CDDL), to the community.”

Presently not everything is under one of these licenses. Java getting fully Open Source highlights that Java is getting there, but it still contains closed source components. OpenSolaris, NetBeans, GlassFish, VirtualBox and OpenOffice are; even the mainline Solaris 10 is. StarOffice is one that is not.

MySQL is also not there, and presently has at least three different licenses. You have the GPL for the MySQL Community Server. You have a subscription model for MySQL Enterprise, which includes additional bug fixes not available in the Community Server. The subscription model also includes MySQL Enterprise Monitor, software that will fail to operate if your subscription lapses. Additional upcoming MySQL Proxy features will also be subscription-based, or have been termed “closed source” only. You also have the different licensing model used by MySQL Workbench.

I know many that work on these products at MySQL do not agree with the various license policies. Indeed, recent comments from Rich Green of Sun have also indicated that Sun is not going to change anything at MySQL.

I was initially interested in whether Sun would move MySQL to the CDDL; however, this question was recently raised and Marten Mickos stated there were no plans to move away from GPL v2.

I can only hope that as Sun continues to promote itself as the largest open source company, these differences subside and disappear, and don’t continue to evolve and change.

Migrating my blog & updating WordPress

I’m migrating my existing WordPress-run blog site at blog.arabx.com.au to my new site ronaldbradford.com (which is not yet publicly available).

As part of this process I’ll be doing a number of upgrades/changes including:

  1. Update blog software from 2.0.2 to 2.5.1 (I’d previously upgraded from 2.0.2 to 2.3.2, but not deployed it)
  2. Migrate to new domain
  3. Upgrade existing MySQL 5.1 version from 5.1.11 to 5.1.24
  4. Migrate database to using MySQL 5.1.24, from 5.0.22 (my server runs 5.0 and 5.1 instances)
  5. Split my blog into Professional & Personal

Upgrading
The upgrade is straightforward: back up the database, download the latest WordPress software. I keep full revisions of older versions via directories + symlinks, so my installation is more complicated, but fully recoverable. Install, and run the upgrade script. That all works, but my site breaks. Suspecting my heavily customized theme, I disable it and my site is up. One to add to the TODO list.

Migrate to new domain
Dump + reload the data into the new schema. Copy the WordPress install. I had to make two data changes to correctly use the new domain.


update wp_options set option_value='http://ronaldbradford.com/blog' where option_name='siteurl';
update wp_options set option_value='http://ronaldbradford.com/blog' where option_name='home';
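
A quick sanity check that both options now point at the new domain (nothing more than a confirming select):

select option_name, option_value from wp_options where option_name in ('siteurl','home');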

Upgrade MySQL 5.1 version
That was also relatively straightforward. I was surprised I was running such an old 5.1.11, but I remember originally using 5.1.6 in production use before that.

Migrate database to 5.1 from 5.0
This is where my problems began. WordPress does not appear to like the host+port configuration. I can confirm access via the MySQL client. At 9:30pm on a Sunday night, this may have to remain in the unresolved bin for a few days.
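
For what it is worth, one simple way to confirm which of the two instances a client session has actually reached (a minimal check, assuming nothing about the WordPress configuration itself) is to ask the server directly:

select @@version, @@port;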