What is a BLOB?

If your answer was Binary Large Object, then you would be wrong. Check out Mike's comments on the History of Database Blobs from the man who created the BLOB, Mr Jim Starkey.

I had the chance to meet Jim a few months ago, and I spent quite some time talking with him on various database topics. He is one cool guy.

Mercurial Version Control Software

I was asked (being a Java developer) what was involved in creating an Eclipse Plugin for Mercurial. Well, in true Google style, why invent when somebody probably already has? A quick check finds Mercurial Eclipse by VecTrace.

Now, until last week I'd never heard of Mercurial, so this is really an introduction for somebody that has no idea.

What is Mercurial?

Mercurial is a fast, lightweight Source Control Management system designed for efficient handling of very large distributed projects.

Ok, so big deal, I use CVS. I also use Subversion (SVN) for my Apache contributions, and for MySQL GUI products. Why do we need another Version Control Product? Mercurial is a Distributed Software Configuration Management Tool. The following is from the Mercurial Wiki:

A distributed SCM tool is designed to support a model in which each Repository is loosely coupled to many others. Each Repository contains a complete set of metadata describing one or more projects. These repositories may be located almost anywhere. Individual developers only need access to their own repositories, not to a central one, in order to Commit changes.

Distributed SCMs provide mechanisms for propagating changes between repositories.

Distributed SCMs are in contrast to Centralised SCMs.

So, clearly a distributed model would work well in a large distributed organisation where external factors limit continuous access to a single central repository. Low bandwidth, poor internet connectivity, being on a plane, and travelling are all things that make a distributed model a more ideal solution. I know I've taken my laptop away, and being an “Agile Methodology” developer, I commit often. Having several days of uncommitted work goes against the normal operation.

You can get more information at the official website at http://www.selenic.com/mercurial. A few quick links are: Quick Start Guide, Tutorial, Glossary.

Installing Mercurial

su -
cd /src
wget http://www.selenic.com/mercurial/release/mercurial-0.9.tar.gz
tar xvfz mercurial-0.9.tar.gz
cd mercurial-0.9
# NOTE: Requires python 2.3 or better.
python -V
python setup.py install --force

A quick check of the syntax.

$ hg
Mercurial Distributed SCM

basic commands (use "hg help" for the full list or option "-v" for details):

 add        add the specified files on the next commit
 annotate   show changeset information per file line
 clone      make a copy of an existing repository
 commit     commit the specified files or all outstanding changes
 diff       diff repository (or selected files)
 export     dump the header and diffs for one or more changesets
 init       create a new repository in the given directory
 log        show revision history of entire repository or files
 parents    show the parents of the working dir or revision
 pull       pull changes from the specified source
 push       push changes to the specified destination
 remove     remove the specified files on the next commit
 revert     revert files or dirs to their states as of some revision
 serve      export the repository via HTTP
 status     show changed files in the working directory
 update     update or merge working directory

A more detailed list:

$ hg help
Mercurial Distributed SCM

list of commands (use "hg help -v" to show aliases and global options):

 add        add the specified files on the next commit
 annotate   show changeset information per file line
 archive    create unversioned archive of a repository revision
 backout    reverse effect of earlier changeset
 bundle     create a changegroup file
 cat        output the latest or given revisions of files
 clone      make a copy of an existing repository
 commit     commit the specified files or all outstanding changes
 copy       mark files as copied for the next commit
 diff       diff repository (or selected files)
 export     dump the header and diffs for one or more changesets
 grep       search for a pattern in specified files and revisions
 heads      show current repository heads
 help       show help for a given command or all commands
 identify   print information about the working copy
 import     import an ordered set of patches
 incoming   show new changesets found in source
 init       create a new repository in the given directory
 locate     locate files matching specific patterns
 log        show revision history of entire repository or files
 manifest   output the latest or given revision of the project manifest
 merge      Merge working directory with another revision
 outgoing   show changesets not found in destination
 parents    show the parents of the working dir or revision
 paths      show definition of symbolic path names
 pull       pull changes from the specified source
 push       push changes to the specified destination
 recover    roll back an interrupted transaction
 remove     remove the specified files on the next commit
 rename     rename files; equivalent of copy + remove
 revert     revert files or dirs to their states as of some revision
 rollback   roll back the last transaction in this repository
 root       print the root (top) of the current working dir
 serve      export the repository via HTTP
 status     show changed files in the working directory
 tag        add a tag for the current tip or a given revision
 tags       list repository tags
 tip        show the tip revision
 unbundle   apply a changegroup file
 update     update or merge working directory
 verify     verify the integrity of the repository
 version    output version and copyright information

Mercurial Eclipse Plugin

The plugin is still in its early days, but the “FREEDOM” of open source enables me to easily review it. After a quick install and review of the docs, I shot off an email to the developer, stating why I was looking, and, while I have other projects on the go, asking what I could do to help. It's only been 2 days and we have already communicated via email several times on various topics. That's one reason why I really love the Open Source Community: generally people are very receptive to feedback, comments and especially help.

Within Eclipse

  • Help -> Software Updates -> Find and Install…
  • Select “Search for new features to install”, click Next
  • Click “New Remote site…”
  • Enter following details and click Ok
    • Name: MercurialEclipse Beta site
    • URL: http://zingo.homeip.net:8000/eclipse-betaupdate/
  • Follow the prompts to accept the license and download.

So now within Eclipse, on a project you can simply go Right Click -> Team -> Share Project -> Select Mercurial.

A Quick Mercurial Tutorial

Of course the quickest way to learn about using Mercurial is to look at an existing product, so let's take this plugin project for a spin.

$ cd /tmp
$ hg clone http://zingo.homeip.net:8000/hg/mercurialeclipse com.vectrace.MercurialEclipse
$ hg clone com.vectrace.MercurialEclipse example
$ cd example
# Create some new dummy files
$ touch test.txt
$ touch html/test.html
# View files against repository status
$ hg status
? html/test.html
? test.txt
# Add the new files
$ hg add
adding html/test.html
adding test.txt
# Commit changes
$ hg commit -m "Testing Mercurial"

So, other than the second clone command (which enabled me not to mess up the original repository, and to test distributed handling next), this is just the same as CVS (checkout, diff, add, commit).

# The Distributed nature involves first Pulling from the "upstream" repository
$ hg pull ../com.vectrace.MercurialEclipse
pulling from ../com.vectrace.MercurialEclipse
searching for changes
no changes found
# Confirm our new file is not in the "upstream" repository
$ ls ../com.vectrace.MercurialEclipse/test.txt
ls: ../com.vectrace.MercurialEclipse/test.txt: No such file or directory
# Push local repository changes to the "upstream" repository
$ hg push ../com.vectrace.MercurialEclipse
pushing to ../com.vectrace.MercurialEclipse
searching for changes
adding changesets
adding manifests
adding file changes
added 1 changesets with 2 changes to 2 files
$ ls ../com.vectrace.MercurialEclipse/test.txt
ls: ../com.vectrace.MercurialEclipse/test.txt: No such file or directory

Hmmm, missed something here. The Quick Start Guide docs seem to want to work back in the original repository, pulling in changes from the “downstream” repository, then executing a merge and commit command. I don't want to do this just in case I start messing up the original repository. Time for me to create a new standalone repository so I don't screw anything up. Stay tuned for an updated tutorial.

Of course this is very basic. I still need to look into the equivalents of CVS operations like login, tagging (versioning), branching, diffing revisions and merging, but this is a start.

I do have a concern with the number of distributed repositories: when and how do you know you are in sync with the “master”, and indeed what is the “master”? It does merit some investigation to see what management is in place, like identifying all repositories and comparing them, for example.

Conclusion

Ok, so now I've got a grasp on Mercurial; time to review the Java code and see what works and what doesn't in the Eclipse environment. Of course I don't use Mercurial day to day, so what I may consider required functionality may be lower priority to the users out there. Any feedback I'm sure will be most appreciated by the original developer.

What is software quality?

Greg Lehey wrote today Is MySQL getting buggier?. The underlying question of his comments is a more fundamental and passionate topic, especially for me: “Software Quality”.

The quintessential question is this: “How do you determine the ‘software quality’ of a product?” It is quickly followed by, “How do you benchmark this with other software products?”

The short answer to the second question is simple: you can't. The reasons why become apparent in addressing the first question. (There's a mathematical term for this two-question situation; another one of the million things to research and remember one day.)

15 years ago, as part of my masters research, I worked on “Improving Software Quality and Software Productivity”. At the time when I started, I found that these were generally considered mutually exclusive. Quite simply, to improve software quality was to decrease productivity, and when you had to improve productivity, quality declined. Unless you had a clear end goal (which you can't have in software: features, cost, satisfaction etc. keep moving), it was impossible to measure the total “cost”. It was also impossible to clearly determine the impact on key factors, as with software you never have a “control group” working in isolation of other groups trying different things, so you can never do an empirical comparison analysis.

In summary, I found that to improve software quality and software productivity you had to reconsider the approach at several different levels.

  1. You had to strive to work towards a simpler business solution, rather than a more complex solution.
  2. You had to leverage converting business knowledge that was held in lengthy and often dated paper documents into electronically manageable content. Some of the benefits included concurrency, consistency and comparative analysis of the knowledge. The early days of CASE repositories were built on this principle, again before they became too complex.
  3. You also had to look at better ways of writing code, and the area I focussed on was code generation. That is, “Code writing Code”. (I have a very successful story of 400,000-500,000 lines of C code. The ODT was a specialised, intelligent, peer-to-peer selective replication system between a central repository and 32 distributed sites, allowing for independent heterogeneous operations. It provided levels of complexity within replication including two-way mastering of partial database data, selective row replication between nodes depending on business needs, and collision management. And I did this in 1992. A story for another day.)

I also found, in the business and industry sectors of my research, that it was near impossible to get commonality of some components, a higher level of re-use, or a higher value of cost savings. There were many factors for this; another topic for another time.

Now that was a long time ago, and a number of things have changed greatly, including my views. These changes are far more prevalent regarding Open Source.

Some historical general principles in measurement don't apply today in Open Source. I remember reading recently, in the browser wars, that Firefox was “too honest”, being more open in acknowledging problems/bugs and correcting them than the Microsoft perspective. How do you compare quality in this situation? (Side Note: I wish there was a search engine that would search all pages that you had viewed, bookmarked etc., rather than searching the WWW for everything. It would make life much easier in finding references like this.)

Open Source does something commercial software can't do. If you find a bug and you can't wait, then fix it yourself, or at worst pay somebody to fix it for you. The scope, functionality and rules are constantly changing, and the resourcing and priorities as well. How do you adequately measure something so fluid?

I am a proponent of “Agile Development Methodologies”, having used eXtreme Programming (XP) for many years, either rather seriously, or by taking small attributes and applying them to existing infrastructures. Terms include Test Driven Development (TDD), Feature Driven Development (FDD), Code Coverage, and Unit Tests.

Speaking just on Unit Tests: you write a Unit Test before you write your code. It helps to understand what you are trying to achieve before you try to achieve it. It also provides coverage for when the rules change: you write new or changed tests, and the existing tests then analyse your code to expose problems. I've found that writing extensive tests becomes a self-documentation system of the functionality; it highlights what can and can't occur. Simply ‘grep’ing can offer a clear summarised description of the code, even if you have no knowledge of it. In Java you write long method names (some abstract examples would be ‘testFunctionDoesThisAndNotThis’, ‘testFunctionAcceptsTwoOfThese’, ‘testFunctionFailsWithValuesOfThis’ etc.). I need to provide a more concrete example to better explain.

An Agile approach also tends to better encapsulate the Web 2.0 principle of considering software as a service, and not a product. This entirely changes the general thought of software development: the large cost of software development, followed by a maintenance period and cost of the product, followed by a review of potential new products as circumstances change, followed by … (you get the picture).

If you consider an acceptable ongoing cost for providing a service, perhaps pegged against other business revenue/expenses, and your product was then managed by what available resources you had (time, money, people), rather than by functionality requirements, your entire perspective changes. You include a high level of user ownership, the release-often principle, and the capacity to review and adapt to changing and improving technology and requirements; all these factors change the outcome of how software is developed, managed and perceived.

The other thing about all this is, it's common sense. It's a simpler approach, and if everybody considered “How can I make this simpler?” first, it would change the way software is developed and perceived.

So, getting back to the topic of Software Quality: I found that in the right circumstances (and there are wrong circumstances), employing a change in principle towards Agile concepts can have a significant effect on Software Quality. But beauty is in the eye of the beholder, and Software Quality is much the same. What may be acceptable for somebody is unacceptable for somebody else. How then do you measure this?

So after all this, we are left with the question: “How do you determine the ‘software quality’ of a product?” Well, we can only consider a single product in isolation, and here we are talking about the MySQL server. I should note that you can't consider all MySQL products in one analysis; different factors, including the relative age of the product, the technologies etc., all affect the measurement. You need to consider each product individually.

So why is the number of bugs, or the number of open bugs, a measure of software quality? What about how big the product is (e.g. lines of code), how complex the product is, how mature it is, how tested it is? What if we considered the lifespan of critical bugs as a measurement? So what if there were a larger number of critical bugs, but they were fixed immediately; is that a good or poor reflection on software quality? If software is in the Web 2.0 “Perpetual Beta”, then are bugs really just incomplete functionality?

In software and for organisations, I think an approach could be similar to one mentioned by a prospective employer recently: “We know things aren't the best they could be, but they are better than they were. If our goal could be to improve just 1% each day, then we would easily be 100% better in 6 months.” So with whatever metrics are in place (and you have to have some form of measurement), if there is always continual improvement, then software quality is always improving.

Differences in syntax between mysql and mysqltest

As I wrote earlier in Using the MySQL Test Suite I found an issue with using the current MySQL Sakila Sample Database as a test with mysqltest.

I was running an older version, 5.1.7 beta, so I figured the best course of action was to upgrade to 5.1.11 beta.

Well, the problem still exists, and I found that the cause was the syntax of the DELIMITER command. Assuming the creation of the schema tables, here is an example of what I found.

Running the following in an interactive mysql session works:

DROP TRIGGER ins_film;
DELIMITER ;;
CREATE TRIGGER `ins_film` AFTER INSERT ON `film` FOR EACH ROW BEGIN
    INSERT INTO film_text (film_id, title, description)
        VALUES (new.film_id, new.title, new.description);
  END;;
DELIMITER ;

On a side note, why does DROP TRIGGER not support the IF EXISTS syntax like DROP DATABASE and DROP TABLE do? It's a shame, as it's a nice SQL extension that MySQL provides.
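
For comparison, a short sketch of the difference; note that DROP TRIGGER IF EXISTS was not available in the betas used here, although later MySQL releases did add it:

DROP TABLE IF EXISTS film_text;      -- supported: no error when the table is absent
DROP TRIGGER ins_film;               -- errors if the trigger is absent
-- DROP TRIGGER IF EXISTS ins_film;  -- the wished-for extension (added in later releases)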

Now running the same SQL in a mysqltest test (with the appropriate create tables) produces the following error:

TEST                            RESULT
-------------------------------------------------------
sakila-trigger                 [ fail ]

Errors are (from /opt/mysql/mysql-test/var/log/mysqltest-time) :
mysqltest: At line 53: Extra delimiter "" found
(the last lines may be the most important ones)

It seems that the DELIMITER command within mysql accepts a line terminator as a statement terminator, while in mysqltest this is not so.
The solution is to be more strict in the mysqltest syntax, as follows; note that the first and last DELIMITER lines are terminated by the appropriate delimiter in use.

DELIMITER //;

CREATE TRIGGER `ins_film` AFTER INSERT ON `film` FOR EACH ROW BEGIN
  INSERT INTO film_text (film_id, title, description)
  VALUES (new.film_id, new.title, new.description);
END//

DELIMITER ;//

Surprisingly, this syntax then does not work in mysql. Is this a bug? I'm not sure; perhaps somebody could let me know.

Using PBXT 0.9.5

Paul has released Version 0.9.5 of his PBXT MySQL Transactional Storage Engine.

Here is what I did to get it operational under CentOS 4.3.

su -
useradd pbxt
su - pbxt
wget http://www.primebase.com/xt/download/mysql-4.1.16-pbxt-0.9.5.tar.gz
tar xvfz mysql-4.1.16-pbxt-0.9.5.tar.gz
cd mysql-4.1.16-pbxt-0.9.5
./configure --with-debug=full --with-pbxt-storage-engine --without-query-cache --with-mysqld-user=pbxt  --prefix=/home/pbxt/mysql
make
make install
scripts/mysql_install_db
cd /home/pbxt/mysql
./bin/mysqld_safe --user=pbxt --basedir=/home/pbxt/mysql &

Now, let's test and confirm PBXT is operational.

bin/mysql -uroot mysql
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 4.1.16-pbxt-0.9.5-debug

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show engines;
+------------+---------+------------------------------------------------------------+
| Engine     | Support | Comment                                                    |
+------------+---------+------------------------------------------------------------+
| MyISAM     | DEFAULT | Default engine as of MySQL 3.23 with great performance     |
| HEAP       | YES     | Alias for MEMORY                                           |
| MEMORY     | YES     | Hash based, stored in memory, useful for temporary tables  |
| MERGE      | YES     | Collection of identical MyISAM tables                      |
| MRG_MYISAM | YES     | Alias for MERGE                                            |
| ISAM       | NO      | Obsolete storage engine, now replaced by MyISAM            |
| MRG_ISAM   | NO      | Obsolete storage engine, now replaced by MERGE             |
| InnoDB     | YES     | Supports transactions, row-level locking, and foreign keys |
| INNOBASE   | YES     | Alias for INNODB                                           |
| BDB        | NO      | Supports transactions and page-level locking               |
| BERKELEYDB | NO      | Alias for BDB                                              |
| NDBCLUSTER | NO      | Clustered, fault-tolerant, memory-based tables             |
| NDB        | NO      | Alias for NDBCLUSTER                                       |
| EXAMPLE    | NO      | Example storage engine                                     |
| ARCHIVE    | NO      | Archive storage engine                                     |
| CSV        | NO      | CSV storage engine                                         |
| BLACKHOLE  | NO      | Storage engine designed to act as null storage             |
| PBXT       | YES     | Super fast transactional, row-level locking engine         |
+------------+---------+------------------------------------------------------------+
18 rows in set (0.00 sec)
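
The listing shows PBXT is compiled in. Since MySQL of this era substitutes the default engine (with only a warning) when a requested engine is unavailable, a quick sanity check with a hypothetical throwaway table is worth doing; SHOW CREATE TABLE should report ENGINE=PBXT:

mysql> CREATE TABLE pbxt_check (id INT NOT NULL PRIMARY KEY) ENGINE=PBXT;
mysql> SHOW CREATE TABLE pbxt_check;
mysql> DROP TABLE pbxt_check;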

Now to take it for a spin with the MySQL Sakila Sample Database. At this time I need a modified version of the schema, as the sample database is designed for Version 5.

Download sakila.mysql4 script.
Download sakila.pbxt.sql script.

bin/mysql -uroot mysql
mysql>  source sakila.mysql4
mysql> source  sakila.pbxt.sql
# NOTE: Alter table of film_text fails as expected due to FULLTEXT index
mysql> show create table actor;
mysql> exit;

There are a few warnings from my sakila.mysql4 script I'll need to review, but it looks ok from an installation standpoint. Now back to the tests I was writing against the Sakila Sample Database, so I can really test the ACID state and performance of PBXT.

Using the MySQL Test Suite

MySQL provides two different tools to test the MySQL Server with SQL statements: one is mysqltest, and in 5.1 there is also mysqlslap. These tools have quite different purposes. This is a quick review of the usage of mysqltest.

Current Usage

Under Linux deployments, the README in the mysql-test directory gives you all the information you need to know to run the tests.

To run the full test suite.

cd /opt/mysql/mysql-test
 ./mysql-test-run

Rather easy. It does take some time, and I was surprised to find a number of tests marked as ‘skipped’. The general purpose of having tests in a product is to provide coverage of software functionality, and tests should always be forward compatible, unless they are no longer applicable and should be removed. I wonder why the time wasn't taken to ensure they work in each release.

Should you already have a mysqld running, you can also use the --extern option to run specific tests against that server. For example:

./mysql-test-run --extern alias analyze

There are over 700 tests, so this approach is not practical for all tests. It is noted in the README that some tests can't run in extern mode.

Looking a bit closer at both the server configuration and a running test sheds some light on the types of parameters that are used, and how mysqltest operates.

/opt/mysql/bin/mysqld --no-defaults --server-id=1 --basedir=/opt/mysql --port=9306 --port-open-timeout=380 --local-infile
--exit-info=256 --core --datadir=/opt/mysql/mysql-test/var/master-data --pid-file=/opt/mysql/mysql-test/var/run/master.pid
--socket=/opt/mysql/mysql-test/var/tmp/master.sock --log=/opt/mysql/mysql-test/var/log/master.log
 --character-sets-dir=/opt/mysql/share/mysql/charsets --default-character-set=latin1 --tmpdir=/opt/mysql/mysql-test/var/tmp
--language=/opt/mysql/share/mysql/english/ --innodb_data_file_path=ibdata1:128M:autoextend --open-files-limit=1024
--log-bin-trust-function-creators --loose-binlog-show-xid=0 --rpl-recovery-rank=1 --init-rpl-role=master --key_buffer_size=1M
--sort_buffer=256K --max_heap_table_size=1M --log-bin=/opt/mysql/mysql-test/var/log/master-bin --user=root

/opt/mysql/bin/mysqltest --no-defaults --socket=/opt/mysql/mysql-test/var/tmp/master.sock --database=test --user=root
--password= --silent -v --skip-safemalloc --tmpdir=/opt/mysql/mysql-test/var/tmp --port=9306 -R r/count_distinct3.result

Extending MySQL Test Suite

That's all nice, but the power of this infrastructure is that you can incorporate your own tests. This may be beneficial if you have complex statements and use bleeding-edge versions of MySQL. Indeed, it's another approach to being able to provide MySQL with reproducible test cases when you have a problem.

There are basically 2 steps to incorporating your own tests.

  1. Create a test script in the t subdirectory.
  2. Create a test results file in the r subdirectory.

Using the MySQL Sakila Sample Database as a test case, I created a test with the default sakila-schema.sql and sakila-data.sql files. I did need to replace the DROP and USE DATABASE commands and add a set of DROP TABLE statements.
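
As an illustration, here is a hedged and heavily abbreviated sketch of what such a t/sakila.test contains; the real file wraps the full sakila-schema.sql and sakila-data.sql content, with one table shown here:

--disable_warnings
DROP TABLE IF EXISTS actor;
--enable_warnings

CREATE TABLE actor (
  actor_id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
  first_name VARCHAR(45) NOT NULL,
  last_name VARCHAR(45) NOT NULL,
  PRIMARY KEY (actor_id)
);

INSERT INTO actor (first_name, last_name) VALUES ('PENELOPE','GUINESS');
SELECT COUNT(*) FROM actor;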

cd /opt/mysql/mysql-test
vi t/sakila.test
./mysql-test-run --record sakila
more r/sakila.result

Example Results

./mysql-test-run sakila

Stopping master cluster
Installing Test Databases
Removing Stale Files
Installing Master Databases
running  ../bin/mysqld --no-defaults --bootstrap --skip-grant-tables     --basedir=.. --datadir=mysql-test/var/master-data --skip-innodb --skip-ndbcluster --skip-bdb
Installing Master Databases 1
running  ../bin/mysqld --no-defaults --bootstrap --skip-grant-tables     --basedir=.. --datadir=mysql-test/var/master-data1 --skip-innodb --skip-ndbcluster --skip-bdb
Installing Slave Databases
running  ../bin/mysqld --no-defaults --bootstrap --skip-grant-tables     --basedir=.. --datadir=mysql-test/var/slave-data --skip-innodb --skip-ndbcluster --skip-bdb
Manager disabled, skipping manager start.
Loading Standard Test Databases
Starting Tests

TEST                            RESULT
-------------------------------------------------------
sakila                         [ pass ]
-------------------------------------------------------

Ending Tests
Shutting-down MySQL daemon

Master shutdown finished
Slave shutdown finished
All 1 tests were successful.

Now, to my surprise, with my present version of 5.1.7 beta, a number of statements from the sakila-schema failed.
You can download my working version of sakila.test here. The first failure was:

TEST                            RESULT
-------------------------------------------------------
sakila2                        [ fail ]

Errors are (from /opt/mysql/mysql-test/var/log/mysqltest-time) :
mysqltest: At line 220: query ';
CREATE TRIGGER `ins_film` AFTER INSERT ON `film` FOR EACH ROW BEGIN
INSERT INTO film_text (film_id, title, description)
VALUES (new.film_id, new.title, new.description);
END;;
CREATE TRIGGER `upd_film` AFTER UPDATE ON `film` FOR EACH ROW BEGIN
IF (old.title != new.title) or (old.description != new.description)
THEN
UPDATE film_text
SET title=new.title,
description=new.description,
film_id=new.film_id
WHERE film_id=old.film_id;
END IF;
END;;
CREATE TRIGGER `del_film` AFTER DELETE ON `film` FOR EACH ROW BEGIN
DELETE FROM film_text WHERE film_id = old.film_id;
END;;
DELIMITER ;
--' failed: 1065: Query was empty
(the last lines may be the most important ones)

Now is the time to upgrade to the latest 5.1.11 beta to confirm operations and isolate the errors into smaller tests. Hopefully all is well; if not, I've got something to contribute back.

Using Cascade in Foreign Keys

For those that are using a Referential Integrity based Storage Engine in MySQL, the use of Foreign Keys can provide an extra level of data integrity within your application. There are of course caveats to this statement, such as correct design, proper transactional use etc., but that's a topic for another time. The use of CASCADE has an effect on this.

So, just a quick refresher, using the MySQL Sakila Sample Database (if you don't have it, get it; it can be a valuable tool in many other areas). Let's first check out the Schema Diagram. We will look at the first relationship in the top left corner of the diagram.

There is a defined relationship with the actor and film_actor tables. An actor record can belong to one or more film_actor records, and a film_actor record must have one corresponding actor record.

Schema Definition

CREATE TABLE actor (
  actor_id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
  first_name VARCHAR(45) NOT NULL,
  last_name VARCHAR(45) NOT NULL,
  last_update TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY  (actor_id),
  KEY idx_actor_last_name (last_name)
)ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE film_actor (
  actor_id SMALLINT UNSIGNED NOT NULL,
  film_id SMALLINT UNSIGNED NOT NULL,
  last_update TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY  (actor_id,film_id),
  KEY idx_fk_film_id (`film_id`),
  CONSTRAINT fk_film_actor_actor FOREIGN KEY (actor_id)
         REFERENCES actor (actor_id) ON DELETE RESTRICT ON UPDATE CASCADE,
  CONSTRAINT fk_film_actor_film FOREIGN KEY (film_id)
         REFERENCES film (film_id) ON DELETE RESTRICT ON UPDATE CASCADE
)ENGINE=InnoDB DEFAULT CHARSET=utf8;

Explanation

The above shows the CREATE TABLE syntax as provided in Version 0.8 of the Sakila Sample Database. Looking specifically at the constraint we wish to analyse:

  CONSTRAINT fk_film_actor_actor FOREIGN KEY (actor_id)
         REFERENCES actor (actor_id) ON DELETE RESTRICT ON UPDATE CASCADE,

The FOREIGN KEY for a specific column REFERENCES an appropriate table and column, and includes optional UPDATE and DELETE actions.

The ON DELETE RESTRICT is indeed optional, as this is the default operation, but it is good practice for both readability and future compatibility.
The ON UPDATE CASCADE in this case is indeed unwarranted, as the actor primary key is an AUTO_INCREMENT column and should never be updated. By good design definition, a PRIMARY KEY should never be updated, and so ON UPDATE should be set to RESTRICT. If however you ever did wish to update a primary key value in RESTRICT mode, you couldn't: you would have to UPDATE the children first, but that means setting them to a value that does not yet exist in the parent, which fails validation. The best approach here is to always use AUTO_INCREMENT primary keys, and never update the values.
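
To illustrate what the (unwarranted) ON UPDATE CASCADE would actually do here, a hedged sketch; with the constraint defined as above, changing the parent key should propagate to the children:

START TRANSACTION;
-- Move the parent to a new, unused key value
UPDATE actor SET actor_id = 501 WHERE actor_id = 1;
-- The 19 film_actor child rows should now reference the new key
SELECT COUNT(*) FROM film_actor WHERE actor_id = 501;
ROLLBACK;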

Use

So let's take the current implementation for a spin to see how the default RESTRICT implementation operates.

mysql> select count(*) from film_actor where actor_id=1;
+----------+
| count(*) |
+----------+
|       19 |
+----------+
1 row in set (0.05 sec)
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)
mysql> delete from actor where actor_id=1;
ERROR 1451 (23000): Cannot delete or update a parent row: a foreign key constraint fails
    (`sakila/film_actor`, CONSTRAINT `fk_film_actor_actor` FOREIGN KEY (`actor_id`) REFERENCES `actor` (`actor_id`)
   ON UPDATE CASCADE)
mysql> rollback;
Query OK, 0 rows affected (0.06 sec)

As you can see, an attempt to delete an actor fails due to a foreign key constraint failure.

The correct implementation for this design is to delete child relationships first. For example:

mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)
mysql> delete from film_actor where actor_id=1;
Query OK, 19 rows affected (0.00 sec)
mysql> delete from actor where actor_id=1;
Query OK, 1 row affected (0.00 sec)
mysql> rollback;
Query OK, 0 rows affected (0.06 sec)

Let’s now try CASCADE.

mysql> alter table film_actor drop foreign key fk_film_actor_actor;
Query OK, 5462 rows affected (0.78 sec)
Records: 5462  Duplicates: 0  Warnings: 0
mysql> alter table film_actor add CONSTRAINT `fk_film_actor_actor` FOREIGN KEY (`actor_id`) REFERENCES `actor` (`actor_id`) ON DELETE CASCADE;
Query OK, 5462 rows affected (0.69 sec)
Records: 5462  Duplicates: 0  Warnings: 0
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)
mysql> select count(*) from film_actor where actor_id=1;
+----------+
| count(*) |
+----------+
|       19 |
+----------+
1 row in set (0.01 sec)
mysql> delete from actor where actor_id=1;
Query OK, 1 row affected (0.00 sec)
mysql> select count(*) from film_actor where actor_id=1;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)
mysql> rollback;
Query OK, 0 rows affected (0.00 sec)

As you can see, by changing the definition to CASCADE, the requirement to delete all child rows is nullified.

Considerations

When designing I try to avoid the use of CASCADE. While it has its benefits, it is a “catch” by the database, and can hide poor design and coding techniques. It can however, in more heavily loaded systems, provide a performance improvement, as the additional commands for deleting don't incur the network communications overhead. I'll need to add this as a note to my performance testing guidelines to see what effect this has.

Another benefit of a more strict approach is that MySQL, for example, currently does not fire triggers as a result of cascading constraint operations; explicit deletes keep your triggers in play.
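
A sketch of that behaviour, assuming a hypothetical audit table and trigger:

CREATE TABLE film_actor_audit (
  actor_id SMALLINT UNSIGNED NOT NULL,
  deleted_at TIMESTAMP NOT NULL
) ENGINE=InnoDB;

DELIMITER //
CREATE TRIGGER del_film_actor AFTER DELETE ON film_actor FOR EACH ROW
BEGIN
  INSERT INTO film_actor_audit (actor_id, deleted_at) VALUES (old.actor_id, NOW());
END//
DELIMITER ;

-- An explicit DELETE FROM film_actor fires the trigger for each row;
-- a DELETE FROM actor that cascades into film_actor does not.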

Try it yourself

As you can see it’s easy to use. To try it yourself.

  • Use an appropriate storage engine, e.g. InnoDB. You can use ALTER TABLE [table] ENGINE=InnoDB; on your tables.
  • Add an index to the target (foreign key) column; your source (primary key) will have an index by definition of being a primary key.
  • Ensure your source and target columns are the same data type.
  • Create the appropriate constraint.

While the MySQL Manual states that both the source (primary) and target (foreign) columns must have indexes, I've found this not to be the case in 5.1; an ALTER TABLE will auto-create the target (foreign) column's index if it doesn't exist.

Just note, traps for young players: both the source (primary) and target (foreign) columns must have the identical datatype definition. A common problem I've seen is that UNSIGNED is used for a Primary Key definition and not for the Foreign Key definition.
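
A minimal sketch of that trap with hypothetical tables; the child column lacking UNSIGNED is enough to make the constraint fail:

CREATE TABLE parent (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
) ENGINE=InnoDB;

CREATE TABLE child (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  parent_id INT NOT NULL,  -- mismatch: INT versus the parent's INT UNSIGNED
  FOREIGN KEY (parent_id) REFERENCES parent(id)
) ENGINE=InnoDB;           -- fails with ERROR 1005 (errno: 150)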

References
Wikipedia definition of “Referential Integrity”
MySQL Manual Page – FOREIGN KEY Constraints
MySQL Sakila Sample Database Forum
MySQL Forge Wiki on Sakila Sample Database

What constitutes a good error message to the user?

Today will go down in my professional history as quite possibly the lowest I would ever think of a software developer. I've carefully avoided the term “fellow coder”, speaking of an IT industry sticking by fellow IT people, but not today.

I presently support an existing production system of some 1000+ users that's been in place around 3 years, in which I had no prior involvement until recently. Integration with other external parties within the system has produced an escalation in errors being reported in this external communication (the lack of adequate feedback to the user is another topic). Email is the means of administratively reporting user errors, again another topic of issue. Within these emails, which are almost impossible to manage due to the limited internal GUI-only toolset and the lack of access to the actual email account files to automate scripting (yet another topic; do you see a trend here?), is some relevant information regarding the transaction, and then most importantly the error message. The thing I need to investigate the problem.

The line reads (removing some stuff):

Error Code: 234567892

Ok, well a little cryptic, but surely you can work out from the external system what this means. Investigation of some more errors in the mail GUI product (yet another series of open windows; you can't view messages like a regular mail client with a summary list and a detail panel) provides for a trend in the error messages:

Error Code: 1235492567
Error Code: -434783134
Error Code: 34345199

The trend being there is none. Of course, by mid morning today the email error count is into the hundreds, and I'm none the wiser. Well, time to closely investigate the code (I've already contacted the external party and asked if I can provide some error codes to receive greater information).

The following two lines of code constitute the entire determination of the error messages I've shown so far. Note, this code takes the external system response, and then “attempts to determine useful error content for presentation back to the user”.


// The "error code" presented to the user is simply a random number
int errorNo = new Random().nextInt();
String error = "Error Code: " + errorNo;

Now, everybody laughed out loud, including fellow developers, the DBA, the IT manager, the Business Owners and the Users (who can't read code but could understand the first of these two lines when I highlighted it), and yes, it really was funny in the context of the bigger picture, but really it wasn't funny at all. Some things make my blood boil, and this was one of them. With all the time lost between multitudes of people, users, the call centre etc., I'd never felt a stronger conviction to hunt down the developer that wrote this.

The end of the story is that even after trawling old CVS repository entries I was unable to piece together sufficient information to determine the author. Most likely it was done pre version control, and that trail leads to only a few names I've heard mentioned before.

I’d like to see somebody top that one!

MySQL Ideas

It seems I have, over time, thought of many ideas, jotted notes on some, and even done some work, but everybody knows that “home projects” can take a long time.

Here are a few that have resurfaced over the past month, and I doubt I’ll ever get to them, or perhaps some other enterprising person has already done a similar thing. Of course most are for my own personal/professional gratification, but input from others of “great idea, when do we see it” could sway my interests.

INFORMATION_SCHEMA for MySQL Version 4

Why?

Well, querying the INFORMATION_SCHEMA is very cool, and long overdue for information gathering, including statistics, schema definitions, schema version comparison tools etc. Of course there are concerns regarding the performance of using the INFORMATION_SCHEMA, and any design should significantly consider this limitation. The INFORMATION_SCHEMA is not physically represented as a normal MySQL database with tables and real data files that could be optimised appropriately; queries reference various sources of information and return the results at query time.

How would I achieve this? Here is an approach. The biggest issues would be the choice of programming technology and non-Linux support.

  • Generate a physical INFORMATION_SCHEMA create SQL script from the most current database version. This would allow you to create a real database with real tables (see the sketch after this list).
  • Population. Well, at least some tables will be empty, but some work would be required for the population of other information. A first pass with some easy shell scripting and SHOW commands would easily populate information for SCHEMATA (SHOW DATABASES), TABLES (SHOW TABLES), COLUMNS (DESC [table]). Some are covered by the mysql schema. You get the picture. Of course some information will be much more difficult to populate easily.
  • Syncing. Populating is one thing, but if the information is not current then it becomes useless. Depending on your organisation's management, in a controlled environment simply re-running the population process when schema information is changed, or running it nightly, may work. A more brute force approach may be to look at filesystem timestamps of the database table definitions and trigger when changes occur, but I suspect this would be a poor approach.
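
As a sketch of the first step, the physical SCHEMATA table could mirror the Version 5 definition (the column list is assumed from 5.0, and a stand-in database name is used since information_schema itself is reserved from 5.0):

CREATE DATABASE info_schema;

CREATE TABLE info_schema.SCHEMATA (
  CATALOG_NAME               VARCHAR(512),
  SCHEMA_NAME                VARCHAR(64) NOT NULL,
  DEFAULT_CHARACTER_SET_NAME VARCHAR(64),
  DEFAULT_COLLATION_NAME     VARCHAR(64),
  SQL_PATH                   VARCHAR(512)
) ENGINE=MyISAM;

-- Population (the second step) would then be INSERT statements generated from SHOW DATABASES.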

Storage Engines in use

I've often wondered: do people use storage engine X?

I would love to see plastered on the MySQL Home Page stats of 99,999,999 Downloads, 99,999,999 Installations reporting in, and then some breakdowns of database versions, storage engine usage, even demographics. This type of statistical information can be great, but it can also be dangerous in the hands of competitors, or can be too detailed to be useful. Consider the objective a general census.

I was recently introduced to Fotolog, and in the top left corner are Members, Photos, and Photos today. Of course now that they have volume the numbers are impressive. You blink, do a refresh, and they have changed.

Having a more general information goal, rather than a detailed one, would be my guideline. Here are some points.

  • Each MySQL installation is registered, providing it a unique id. This is used for all collection and communication, and there is never any real reference back to organisation/name etc. How this works correctly is a little difficult, but it would need to involve some checksumming of information like hostname, os, mysql version, maybe ip.
  • A script runs to generate basic statistical information, including unique installation id, present GMT date/time, MySQL version, master/slave, installation date, number of databases, breakdown of storage engines used per database/installation (see the sketch after this list), and total number of tables per database/installation.
  • The information needs to be recorded so it can be transmitted, and also used for historical comparison; the best option would be a flat file on the filesystem.
  • Have an automated approach to then, say, email this information to a known server. There would need to be some means of authentication, and feedback to confirm success. Using email is not an ideal approach, but it is a more readily available medium. Another easy means would be a webservices approach, but that would require a more direct internet connection, while mail could overcome this limitation.
  • More advanced ideas would be to record the information in an XML format, have an XML storage engine, and enable reports to be run against this collated information, so the System Administrator has at their fingertips the historical details that are gathered. The benefit of an XML approach is that it can be more easily collated with clever use of XSLT, for example. The fallback could be a CSV file, but the information being sent may well need to be more structured for various reasons.
  • Of course if it was limited to Version 5 up, the benefits of stored procedures, events and UDFs mean a lot of this could be implemented in MySQL-specific technologies, but what about the bulk of installations pre Version 5? You would definitely want to include these.
  • My concerns would definitely be SPAM abuse, or false figures, so these points alone may nullify the process even being worth it.
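
As a sketch of the storage engine breakdown point above, on Version 5 and up this could be gathered with a single query; pre-Version 5 installations would need to script SHOW TABLE STATUS instead:

SELECT table_schema, engine, COUNT(*) AS table_count
FROM information_schema.TABLES
GROUP BY table_schema, engine;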

This is just a broad stroke overview; I've got more detailed analysis of the pros, cons, example file layouts, and a means of collating the information and providing dynamic reporting on the server.

Of course there are a few ideas I have implemented, and I’m still passionate about and working on in that “spare time” we all never have.

ShowCase MySQL

A showcase of MySQL (via existing popular Open Source apps). I’ve done this via a VMWare Appliance called UltimateLAMP.

I did this for two reasons: first, of course, I wanted to easily show applications that could be used without any technical requirements (indeed the hardest part is installing the free VMware player), and second, if there was more penetration of possible MySQL options in my present home city of Brisbane, it might provide some MySQL work opportunities. Of course, this is an application approach to demonstrating MySQL, rather than a MySQL features approach, and it is dependent on the community; these applications may be less than ideal in a large-scale deployment, or may not be of a quality to be placed alongside MySQL, but it's one way to get leverage.

I would like some way to poll the usage of MySQL within organisations in my home town, but it's a very difficult approach, and I'm sure some organisations would not even know they are using MySQL. The process may also provide a means of offering a service, say quarterly updates of software (MySQL and open source products) and customisations, but I think it would be a difficult sell. Having an army of people applying these ideals across a number of cities may provide a more marketable approach.

ACID Performance Benchmark

Leveraging the Apache project JMeter, which has all the infrastructure for handling threads, reporting, graphing, and volume load testing, I've written two enhancements for the JMeter JDBC Sampler. The first was CALL support, enabling calls to stored procedures. The second is TRANSACTION support, enabling a number of SQL statements to be executed in a true transaction, all pass or all fail.
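
As a hedged sketch, with hypothetical order tables, this is the kind of unit the TRANSACTION enhancement would submit as a single all-or-nothing JDBC sample:

START TRANSACTION;
INSERT INTO orders (customer_id, order_date) VALUES (42, NOW());
UPDATE inventory SET qty_on_hand = qty_on_hand - 1 WHERE product_id = 7;
COMMIT;  -- the statements pass or fail as a single unit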

These allow me to provide an application-specific performance approach. Now MySQL has various means of testing: the mysql-test suite that comes with installations (I've got more on this for a later time), as well as mysqlslap (again, some more points on this at a later time). Both of these tools have benefits, but I wanted a more application-specific approach. Using JMeter introduces an overhead, but it provides a more realistic application approach; my goal is not to get the maximum throughput possible by being as close to the metal as possible, but to provide a more realistic result.

Of course, my objective is to use application-specific SQL, the SQL you run, and this would need to be gathered via various means. Information gathering, including what types of transactions, the interactions, and the volumes of use, would be needed to create realistic approaches. The benefits of JMeter, for example, are that I can simulate various transactions, enable them to interact, and give them implied delays to simulate real user use. These are some of the advantages over the current MySQL performance approaches.

Providing this level of simulation however is only half the requirement. This provides a benchmark for performance, but what about validation? Sure, the system can be used with 100 threads with X Insert Orders, Y Select Orders, and Z Invoice Prints etc. per thread, but what if there were issues and errors?

Data Quality is a key interest, and this level of benchmarking also has to have the extent of validating the results, and taking appropriate actions for failures. I'm planning also that this can be used by enabling Referential Integrity (via an appropriate storage engine) on an existing application and then confirming the transactional state of queries within applications that are not presently transaction safe.

Presently, I'm using this approach to assist in the testing of the MySQL Community Storage Engine PBXT. I'm hoping, in conjunction with the Sakila Sample Database and various data sets, that I can also publish comparison results with other Storage Engines.

Schema Designer Analyser

My background is database modelling and large systems design, and I'm writing A MySQL Guide to Database Modelling, Design and Support (still under development). This is an online and very simple approach for open source development to improve its image in database modelling. Keeping things simple will help all those non-professionals out there.

To back up this article, which has a checklist for organisations to follow and then incorporate into standards, is a tool to do the analysis that the checklist requires.

I had hoped to initially develop this using the GRT portion of the MySQL Workbench, but I've decided to make that a secondary project for now, focusing on a technology of greater understanding, possibly Java. Of course the biggest issue is accurate gathering of schema information, so you can see why an INFORMATION_SCHEMA for MySQL Version 4 would come in handy.

Conclusion

I've got a lot more ideas, and discussions with other MySQL peers rekindle ideas I've thought of previously. I really should get to putting them all out there to see how many people think I'm crazy, or to find other crazy people to contribute, or even fund! One can dream.

Improving Open Source Databases – WordPress

As part of both my UltimateLAMP side project, and my greater involvement with MySQL Workbench, I’ve been wanting to review and document the database schemas of Open Source products.

Indeed, as part of discussions with Mike Zinner of MySQL AB at the recent MySQL Users Conference, I suggested an idea used in the Java world, for example by Clover and Agitar, where, to promote the usefulness of their respective tools (in this case code coverage), they provide results against Open Source Products. For an example check out Clover Examples.

With MySQL Workbench, to get greater exposure of the use of a graphical Data Modelling tool, I mentioned the idea of providing a repository of schemas, which may help in exposure, usage and feedback of MySQL Workbench. Indeed my work is towards this being added to MySQL Forge; however, I've found in my first in-depth review that WordPress 2.0.2 has been a less than successful experience.

The schema would rate a 2 out of 5 on my scale of optimal database schemas. Here are some of my initial findings.

  • Inconsistent naming standards
    • ID uppercase for primary keys in some tables, but not in others.
    • Different standards for primary keys: some id, some [table]_id.
    • Inconsistent identification of foreign keys (e.g. post_parent). All keys should have _id.
    • Inconsistent datatypes for optional foreign keys (e.g. a default of 0 rather than NULL).
    • Inconsistent column naming standards: some tables prefix columns with the table name or a table-like name (e.g. post_), but not all columns. Some tables have multiple prefix values (e.g. cat_, category_).
  • Poor usage of datatypes. Examples include BIGINT(20) for primary keys, TEXT for post title, and INT(10) for columns whose values a TINYINT can support.
  • Inconsistent use of enums (e.g. Y/N for some values, and yes/no for others).
  • No Referential Integrity support (i.e. Foreign Keys), limiting the benefits of a modelling tool showing table relationships.
  • Different rules for primary keys (most tables have a surrogate key, but not all).

I am actually working on A MySQL Guide to Database Modelling, Design and Support, which covers a lot of these points and more. I'm some way from completing this document, but this initial research of the WordPress product highlights its importance to the open source community.

Of course we can just make noise about what we don’t like, or we can do something about it. I’m submitting my ideas, suggestions and code changes to the project. What is done about it is outside of my control, but I’ve at least given my input to this Open Source Product.

WordPress 2.0.2 Schema SQL (taken from a mysqldump after installation)

Here is a graphical representation of the Data Model from MySQL Workbench.
NOTE: My MySQL Workbench will not enable me to do an Export Image, so I'll need to add this at a later time.

Introducing Referential Integrity

Unfortunately there is no automated means of creating appropriate Referential Integrity for a schema. With more strict naming standards, it would be possible to reconstruct this information generally.

The approach to create these new rules was manual, but a tool could be developed to provide approximations for a number of them.

# Implement Foreign Keys. Changes include:
#  - appropriate Storage Engine
#  - ensure column is of same definition as matching primary key (e.g. datatype + unsigned)
#  - create index for foreign key
#  - ensure optional keys use NULL

ALTER TABLE wp_categories ENGINE=InnoDB;
ALTER TABLE wp_comments ENGINE=InnoDB;
ALTER TABLE wp_linkcategories ENGINE=InnoDB;
ALTER TABLE wp_links ENGINE=InnoDB;
ALTER TABLE wp_options ENGINE=InnoDB;
ALTER TABLE wp_post2cat ENGINE=InnoDB;
ALTER TABLE wp_postmeta ENGINE=InnoDB;
ALTER TABLE wp_posts ENGINE=InnoDB;
ALTER TABLE wp_usermeta ENGINE=InnoDB;
ALTER TABLE wp_users ENGINE=InnoDB;

ALTER TABLE wp_usermeta MODIFY user_id  bigint(20) UNSIGNED NOT NULL DEFAULT '0';
ALTER TABLE wp_usermeta ADD FOREIGN KEY usermeta_users_fk (user_id) REFERENCES wp_users(ID) ON DELETE CASCADE;

ALTER TABLE wp_posts MODIFY post_author BIGINT(20) UNSIGNED NOT NULL;
CREATE INDEX post_author ON wp_posts(post_author);
ALTER TABLE wp_posts ADD FOREIGN KEY posts_users_fk (post_author) REFERENCES wp_users(ID) ON DELETE CASCADE;

ALTER TABLE wp_posts MODIFY post_parent BIGINT(20) UNSIGNED NULL DEFAULT NULL;
UPDATE wp_posts SET post_parent = NULL WHERE post_parent = 0;
CREATE INDEX post_parent ON wp_posts(post_parent);
ALTER TABLE wp_posts ADD FOREIGN KEY posts_posts_fk (post_parent) REFERENCES wp_posts(ID);

ALTER TABLE wp_postmeta MODIFY post_id  bigint(20) UNSIGNED NOT NULL DEFAULT '0';
ALTER TABLE wp_postmeta ADD FOREIGN KEY postmeta_posts_fk (post_id) REFERENCES wp_posts(ID) ON DELETE CASCADE;

ALTER TABLE wp_categories MODIFY cat_ID BIGINT(20) UNSIGNED NULL DEFAULT NULL AUTO_INCREMENT;
ALTER TABLE wp_categories MODIFY category_parent BIGINT(20) UNSIGNED NULL DEFAULT NULL;
UPDATE wp_categories SET category_parent = NULL WHERE category_parent = 0;
ALTER TABLE wp_categories ADD FOREIGN KEY categories_categories_fk (category_parent) REFERENCES wp_categories(cat_ID);

ALTER TABLE wp_post2cat MODIFY rel_id BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT;
ALTER TABLE wp_post2cat MODIFY post_id BIGINT(20) UNSIGNED NOT NULL;
ALTER TABLE wp_post2cat MODIFY category_id BIGINT(20) UNSIGNED NOT NULL;
ALTER TABLE wp_post2cat ADD FOREIGN KEY post2cat_posts_fk (post_id) REFERENCES wp_posts(ID) ON DELETE CASCADE;
ALTER TABLE wp_post2cat ADD FOREIGN KEY post2cat_categories_fk (category_id) REFERENCES wp_categories(cat_ID) ON DELETE CASCADE;
ALTER TABLE wp_comments MODIFY user_id BIGINT(20) UNSIGNED NULL DEFAULT NULL;
UPDATE wp_comments SET user_id = NULL WHERE user_id = 0;
CREATE INDEX user_id ON wp_comments(user_id);
ALTER TABLE wp_comments ADD FOREIGN KEY comments_users_fk (user_id) REFERENCES wp_users(id) ON DELETE CASCADE;
ALTER TABLE wp_comments MODIFY comment_post_ID BIGINT(20) UNSIGNED NULL DEFAULT NULL;
ALTER TABLE wp_comments ADD FOREIGN KEY comments_posts_fk (comment_post_ID) REFERENCES wp_posts(ID) ON DELETE CASCADE;

ALTER TABLE wp_comments MODIFY comment_parent BIGINT(20) UNSIGNED NULL DEFAULT NULL;
UPDATE wp_comments SET comment_parent = NULL WHERE comment_parent = 0;
CREATE INDEX comment_parent ON wp_comments(comment_parent);
ALTER TABLE wp_comments ADD FOREIGN KEY comments_comments_fk (comment_parent) REFERENCES wp_comments(comment_id);

ALTER TABLE wp_linkcategories MODIFY cat_id BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT;
ALTER TABLE wp_links MODIFY link_category BIGINT(20) UNSIGNED NOT NULL;
ALTER TABLE wp_links ADD FOREIGN KEY links_category_fk (link_category) REFERENCES wp_linkcategories(cat_id);

While there are no column name changes, by default the WordPress code should operate with this revised schema. Issues would include a poor delete strategy that still violates the constraints now enforced. Special handling of 0 for optional columns, where the value is now NULL, may also be a problem.

Revised Data Model in MySQL Workbench.
NOTE: My MySQL Workbench will not enable me to do an Export Image, so I'll need to add this at a later time.

Introducing improved Standards

Of course, I’d like to see the schema improve, here are my first draft ideas for improvements of the schema.

# Naming Improvements ===================================================================================
# Rename to lowercase id
ALTER TABLE wp_users CHANGE ID id BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT;
ALTER TABLE wp_posts CHANGE ID id BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT;
ALTER TABLE wp_categories CHANGE cat_ID cat_id BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT;
ALTER TABLE wp_comments CHANGE comment_ID comment_id BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT;
ALTER TABLE wp_comments CHANGE comment_post_ID comment_post_id BIGINT(20) UNSIGNED NOT NULL;

# Include _id for all Foreign Keys
ALTER TABLE wp_posts CHANGE post_author post_author_id BIGINT(20) UNSIGNED NOT NULL;
ALTER TABLE wp_posts CHANGE post_parent post_parent_id BIGINT(20) UNSIGNED NULL DEFAULT NULL;
ALTER TABLE wp_categories CHANGE category_parent category_parent_id BIGINT(20) UNSIGNED NULL DEFAULT NULL;
ALTER TABLE wp_comments CHANGE comment_parent comment_parent_id BIGINT(20) UNSIGNED NULL DEFAULT NULL;
ALTER TABLE wp_links CHANGE link_category link_category_id BIGINT(20) UNSIGNED NULL DEFAULT NULL;

# Primary Key Name Standardisation
ALTER TABLE wp_posts CHANGE id post_id BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT;
ALTER TABLE wp_users CHANGE id user_id BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT;

# Foreign Key Standardisation
ALTER TABLE wp_categories CHANGE category_parent_id parent_id BIGINT(20) UNSIGNED NULL DEFAULT NULL;
ALTER TABLE wp_comments CHANGE comment_parent_id parent_id BIGINT(20) UNSIGNED NULL DEFAULT NULL;

# Other Column Standardisations
ALTER TABLE wp_categories CHANGE cat_name category_name VARCHAR(55) NOT NULL;

# Column width improvements
ALTER TABLE wp_categories MODIFY category_count MEDIUMINT(5) NOT NULL DEFAULT '0';
ALTER TABLE wp_posts MODIFY post_title VARCHAR(255) NOT NULL;
ALTER TABLE wp_linkcategories MODIFY cat_name VARCHAR(255) NOT NULL;
ALTER TABLE wp_linkcategories MODIFY cat_id TINYINT(3) UNSIGNED NOT NULL AUTO_INCREMENT;
ALTER TABLE wp_options MODIFY autoload ENUM('Y','N') NOT NULL DEFAULT 'Y';

# Obvious Index Improvements
# Make login a unique index
DROP INDEX user_login_key ON wp_users;
CREATE UNIQUE INDEX user_login_key ON wp_users(user_login);

These will require code changes and adequate testing, something I’m not hopeful would happen unless I drove it. But you never know; the next time I review the WordPress schema I might be impressed.

Conclusion

This is just a broad-stroke first review of the schema. As with all good design, testing of the code, further review and refinement based on a better understanding of the product, and peer review are all important steps not yet undertaken.

Migrating a MyISAM schema to use Referential Integrity

Here are some of the steps involved, using the current MySQL de facto transactional engine, InnoDB. Of course, Falcon, PBXT and others will enable alternative engines to be used.

Convert Table Storage Engine Types

$ mysql -u[user] -p[password] [database] -e "SHOW TABLES" | grep -v "Tables_in" | sed -e "s/^/ALTER TABLE /" | sed -e "s/$/ ENGINE=InnoDB;/" > upgrade.sql
$ mysql -u[user] -p[password] [database] < upgrade.sql

NOTE: This may not work for all tables; for example, those with FULLTEXT indexes will fail.
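If you want to identify the affected tables before running the script, a quick check is possible via the INFORMATION_SCHEMA (a sketch, assuming MySQL 5.0 or later; substitute your schema name for the [database] placeholder):

SELECT DISTINCT table_name
FROM information_schema.statistics
WHERE index_type = 'FULLTEXT'
AND table_schema = '[database]';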

For the introduction of Referential Integrity we need to ensure the following.

  • Each Foreign Key column should have an index. Prior to 4.1 I believe this was a requirement; regardless, it is good general practice for SQL performance.
  • The datatype must match between Primary Key and Foreign Key columns. The most obvious oversight is normally UNSIGNED; you must also, for example, match INT with INT, not INT with BIGINT.
  • Optional Foreign Keys must allow NULL values, rather than the common practice of having a default of 0 (see the sketch below).
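Putting these three requirements together for a single optional foreign key column, a generic sketch (table and column names are illustrative only):

ALTER TABLE child MODIFY parent_id INT UNSIGNED NULL DEFAULT NULL;
UPDATE child SET parent_id = NULL WHERE parent_id = 0;
CREATE INDEX child_parent ON child(parent_id);
ALTER TABLE child ADD FOREIGN KEY child_parent_fk (parent_id) REFERENCES parent(id);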

A Sample Foreign Key Syntax.

mysql> ALTER TABLE wp_usermeta ADD FOREIGN KEY usermeta_user_fk (user_id) REFERENCES wp_user(id);
ERROR 1005 (HY000): Can't create table './wordpress/#sql-cd9_10.frm' (errno: 150)

A closer investigation of what this error really is:

$ perror 150
MySQL error code 150: Foreign key constraint is incorrectly formed

A confirmation of table definitions.

mysql> desc wp_usermeta;
+------------+---------------------+------+-----+---------+----------------+
| Field      | Type                | Null | Key | Default | Extra          |
+------------+---------------------+------+-----+---------+----------------+
| umeta_id   | bigint(20)          |      | PRI | NULL    | auto_increment |
| user_id    | bigint(20) unsigned |      | MUL | 0       |                |
| meta_key   | varchar(255)        | YES  | MUL | NULL    |                |
| meta_value | longtext            | YES  |     | NULL    |                |
+------------+---------------------+------+-----+---------+----------------+
4 rows in set (0.00 sec)

 mysql> desc wp_users;
+---------------------+---------------------+------+-----+---------------------+----------------+
| Field               | Type                | Null | Key | Default             | Extra          |
+---------------------+---------------------+------+-----+---------------------+----------------+
| id                  | bigint(20) unsigned |      | PRI | NULL                | auto_increment |
| user_login          | varchar(60)         |      | UNI |                     |                |
| user_pass           | varchar(64)         |      |     |                     |                |
| user_nicename       | varchar(50)         |      |     |                     |                |
| user_email          | varchar(100)        |      |     |                     |                |
| user_url            | varchar(100)        |      |     |                     |                |
| user_registered     | datetime            |      |     | 0000-00-00 00:00:00 |                |
| user_activation_key | varchar(60)         |      |     |                     |                |
| user_status         | int(11)             |      |     | 0                   |                |
| display_name        | varchar(250)        |      |     |                     |                |
+---------------------+---------------------+------+-----+---------------------+----------------+
10 rows in set (0.00 sec)

A second pair of eyes (thanks Jon) showed that I had actually spelt a table name wrong. Did you spot it? Of course it would have been nice if the error message actually told me this. This rather bland message Can’t create table could actually mean:

  • a missing index (pre 4.1)
  • incompatible column data types
  • an invalid table name

I’m sure if I tried to break it I’d find more examples, but it’s just a trap for unsuspecting people.
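For the record, here is the statement with the table name spelt correctly; it should now succeed (assuming all existing user_id values reference valid users):

mysql> ALTER TABLE wp_usermeta ADD FOREIGN KEY usermeta_user_fk (user_id) REFERENCES wp_users(id);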

Now, migrating an existing schema to use Referential Integrity provides some initial benefits (row level locking, and catching misplaced key updates/deletes), but it does not provide true integrity unless your application has been written to support transactions. Chances are it hasn’t, but this is at least the first step.

Restyling a MediaWiki Installation – Lesson 1

Following my implementation of UltimateLAMP (read heaps more at this thread), I undertook to provide customizations of a MediaWiki Installation. Here is the first lesson that you can undertake if you wish to beautify the default MediaWiki Installation.

For the purposes of this demonstration, I am going to help out Jay & Colin and propose a restyle of the MySQL Forge to fit in with the default Home Page. Hopefully you will see it there soon!

Lesson 1 – Updating the default Monobook.css

There are several different ways to make style changes; the simplest is to customize the system-wide Monobook.css, and this lesson will focus on that.

By accessing the link [http://my.wiki.site/]index.php/MediaWiki:Monobook.css you will be able to make the following changes.

The best way to approach this, as with any good programming practice, is to make small changes, test and confirm, and then continue.


1. Cleanup Backgrounds

body        { background-image: none; background-color: #FFFFFF;}
.portlet    { background-color: #FFFFFF; }

The MediaWiki page is made up of three sections; these are represented by the styles .portlet, #content, and #footer. For the purposes of our first example, the content section and the footer section are already white.


At this point I should recommend that you use Firefox for future work. You should then install the Web Developer Add-on. What results from this plugin is invaluable information about the internals of a web page. The two options most useful for this exercise are:

  • Information | Display Id and Class Details.
  • Information | Display Element Information (move the cursor around to get information)

2. Cleanup Borders

I don’t feel that borders around things are warranted. I’m more a clean look kinda guy. Remove all borders, save one to separate the footer from the page.

#content    { border-width: 0px; }
.portlet .pBody
            { border-width: 0px; }
#footer     { border-top: 1px solid #888888; border-bottom-width: 0px; }

Maybe that’s a little too clean. Add some separators to the left side options.

#p-navigation,
#p-search,
#p-tb       { border-top: 1px dotted #888888; }

3. Links

Using the Forge Styles http://forge.mysql.com/css/shared.css we can adjust the links accordingly.

a:link      { color: #10688E; text-decoration: none; }
a:hover     { color: #003366; text-decoration: underline; }
a:visited   { color: #106887; text-decoration: none; }

a.new:link  { color: #AA0000; text-decoration: none; }
a.new:hover { color: #AA0000; text-decoration: underline; }


4. Content

Ok, we have played around a little; now to move into some more serious changes, starting with the general look and feel that you see in most page content.


html,body,p,td,a,li
            { font: 12px/19px Verdana, "Lucida Grande", "Lucida Sans Unicode", Tahoma, Arial, sans-serif; }

h1          { font: bold 24px Helvetica, Arial, sans-serif; color: #EB694A; letter-spacing: -1px;
              margin: 1.5em 0 0.25em 0;
              border-bottom: 1px dotted #888888; line-height: 1.1em; padding-bottom: 0.2em; }

h2          { font: bold 18px Helvetica, Arial, sans-serif;  color: #EB694A; letter-spacing: -1px;
              margin: 2em 0 0 0;
              border-bottom: 1px dotted  #888888; line-height: 1.1em; padding-bottom: 0.2em; }

h3          { font-size: 12px; color: #6F90B5; }

h4          { font-size: 12px; color: #6F90B5; }

At this time, I’ve created an Example Page to better demonstrate the look and feel, as the default MediaWiki Main Page has limited content.

Some Text Content at the start of the page.

= A First Level Heading =
The first paragraph text.
[[Main Page | Internal Link to existing Page]],
[[Nonexistent Page | Internal Link to nonexistent Page]],
[http://forge.mysql.com  External Link],
http://www.mysql.com  (plain text link)

== A Second Level Heading ==
The first paragraph text.

== A Second Level Heading ==
The first paragraph text.
* List Item Line 1
* List Item Line 2
* List Item Line 3
=== A Third Level Heading ===
The first paragraph text.
* List Item Line 1
** Sub Item 1
** Sub Item 1
** Sub Item 1
*** Sub Item 1
*** Sub Item 2
*** Sub Item 3
* List Item Line 2
* List Item Line 3
=== A Third Level Heading ===
The first paragraph text.
# List Item Line 1
## Sub Item 1
## Sub Item 1
## Sub Item 1
### Sub Item 1
### Sub Item 2
### Sub Item 3
# List Item Line 2
# List Item Line 3
== A Second Level Heading ==
The first paragraph text.

== A Second Level Heading ==
The first paragraph text.


5. Table of Contents

By default, the Table of Contents shows at the top of a page when a given number of sections or amount of content is present (I’m not sure what the exact trigger is). The issue is, for larger pages, it means you need to scroll down before you can see any page content. You can disable this with the __NOTOC__ option, but a better solution is to position the Table of Contents so as to not interfere with the initial content.

#toc        { float: right; margin: 0 0 1em 1em; border: solid 1px #888888; background-color: #EFEFEF; color: #333333; }
#toc td     { padding: 0.5em; }
#toc .tocindent
            { margin-left: 1em; }
#toc .tocline
            { margin-bottom: 0px; }
#toc p      { margin: 0; }
#toc .editsection
            { margin-top: 0.7em;}

Ok, we are about half way there.

6. Menu Options

In order to get a look and feel like the Forge Home Page, we now have to work on the rest of the navigation options at the top of the page above the content. Let’s start with the second line of menu options (I’ll explain why later).

#p-cactions { padding-right: 0px; margin-right: 0px; background-color: #006486; width: 100%; top: 77px; }
#p-cactions ul
            { margin: 0; padding: 0; list-style: none;  font-size: 85%; margin-left: 10px; }
#p-cactions li
            { float:left; margin:0; padding:0; text-indent:0; border-width: 0px; }
#p-cactions li a
            { display:block;  color:#F7F7F7;  font-weight: bold;
              background-color: #666666;  border:solid 1px #DDDDDD;
              border-width: 0px;  border-left-width:1px;  text-decoration:none; white-space:nowrap;}
#p-cactions li a:hover
            { background-color: #FFBC2F; color: #666666; }
#p-cactions li.selected a
            { background: #EA7F07;  border:none;}
#p-cactions li.selected  a:hover
            { color: #000000; }

7. Top Menu Options

#p-personal .pBody
            { background: #FFFFFF url(http://forge.mysql.com/img/bggradient.jpg) no-repeat top right; }

#p-personal li a,
#p-personal li a.new
            { color: #FFFFFF; text-decoration: none; font-weight: bold; }
#p-personal li a:hover
            { color: #E97B00; background-color: transparent; text-decoration: underline; }

The down side is that it should be the same height as the Forge page. This required a little more work, and other sections had to be adjusted accordingly, which is why I left this to last. (The size is based on the later mentioned logo height plus margins.)

#p-personal { height: 62px; }
#p-personal .pBody
            { height: 62px; }
#p-cactions { top: 62px; }
#content    { margin-top: 84px; }


8. Some Miscellaneous Things

  • De-emphasise the text in the footer a little
  • Remove the annoying arrow that occurs after external links
  • Remove the logo from the login link
  • Hide the My Talk link
#footer     { color: #888888; }
#bodyContent a[href ^="http://"]
            { background:inherit !important; padding-right:inherit !important}
li#pt-userpage
            { background: none; }
li#pt-mytalk
            { display: none; }


The Badging

Let us not forget the final step, the logo badging.

This requires a change to a MediaWiki filesystem file, LocalSettings.php.

$wgLogo             = "http://forge.mysql.com/img/mysqllogo.gif";

And the following Style changes.

#p-logo, #p-logo a, #p-logo a:hover
            { width:100px; height: 52px; }
#p-logo     { margin-left: 10px; margin-top: 5px; margin-bottom: 5px; }

#p-cactions { left: 0px; }
#p-cactions ul
            { margin-left: 180px; }

In order to overcome the top options bleeding to white text on a white background, I’ve increased the right side fill of the default bggradient image, replacing the appropriate URL with the following.

#p-personal .pBody
            { background: #FFFFFF url(/images/bggradient.png) no-repeat top right; }

Conclusion

It’s not quite perfect yet, but this shows how it can be done. Some minor things are left, but I’ve run out of time for the few hours I allocated to this.
The end result of monobook.css for this lesson is:

/* edit this file to customize the monobook skin for the entire site */

/* Background Display */

body        { background-image: none; background-color: #FFFFFF;}
.portlet    { background-color: #FFFFFF; }

/* Borders */

#content    { border-width: 0px; }
.portlet .pBody
            { border-width: 0px; }
#footer     { border-top: 1px solid #888888; border-bottom-width: 0px; }

#p-navigation,
#p-search,
#p-tb       { border-top: 1px dotted #888888; }

/* Links */
a:link      { color: #10688E; text-decoration: none; }
a:hover     { color: #003366; text-decoration: underline; }
a:visited   { color: #106887; text-decoration: none; }

a.new:link  { color: #AA0000; text-decoration: none; }
a.new:hover { color: #AA0000; text-decoration: underline; }

a.external:link {color: #000000; text-decoration: none; }
a.external:hover { color: #AA0000; text-decoration: underline; }


/* Page Look & Feel */
html,body,p,td,a,li
            { font: 12px/19px Verdana, "Lucida Grande", "Lucida Sans Unicode", Tahoma, Arial, sans-serif; }

h1          { font: bold 24px Helvetica, Arial, sans-serif; color: #EB694A; letter-spacing: -1px;
              margin: 1.5em 0 0.25em 0;
              border-bottom: 1px dotted #888888; line-height: 1.1em; padding-bottom: 0.2em; }

h2          { font: bold 18px Helvetica, Arial, sans-serif;  color: #EB694A; letter-spacing: -1px;
              margin: 2em 0 0 0;
              border-bottom: 1px dotted  #888888; line-height: 1.1em; padding-bottom: 0.2em; }

h3          { font-size: 12px; color: #6F90B5; }

h4          { font-size: 12px; color: #6F90B5; }

/* Table of Contents */
#toc        { float: right; margin: 0 0 1em 1em; border: solid 1px #888888; background-color: #EFEFEF; color: #333333; }
#toc td     { padding: 0.5em; }
#toc .tocindent
            { margin-left: 1em; }
#toc .tocline
            { margin-bottom: 0px; }
#toc p      { margin: 0; }
#toc .editsection
            { margin-top: 0.7em;}


/* Second Line Top Menu Options */

#p-cactions { padding-right: 0px; margin-right: 0px; background-color: #006486; width: 100%; }

#p-cactions ul
            { margin: 0; padding: 0; list-style: none;  font-size: 85%; margin-left: 10px; }

#p-cactions li
            { float:left; margin:0; padding:0; text-indent:0; border-width: 0px; }

#p-cactions li a
            { display:block;  color:#F7F7F7;  font-weight: bold;
              background-color: #666666;  border:solid 1px #DDDDDD;
              border-width: 0px;  border-left-width:1px;  text-decoration:none; white-space:nowrap;}

#p-cactions li a:hover
            { background-color: #FFBC2F; color: #666666; }

#p-cactions li.selected a
            { background: #EA7F07;  border:none;}

#p-cactions li.selected  a:hover
            { color: #000000; }

/* Top Menu Options */
#p-personal .pBody
            { background: #FFFFFF url(/images/bggradient.png) no-repeat top right; }

#p-personal li a,
#p-personal li a.new
            { color: #FFFFFF; text-decoration: none; font-weight: bold; }
#p-personal li a:hover
            { color: #E97B00; background-color: transparent; text-decoration: underline; }

/* Top Menu Height Adjustments */
#p-personal { height: 62px; }
#p-personal .pBody
            { height: 62px; }
#p-cactions { top: 62px; }
#content    { margin-top: 84px; }

/* Minor Things */
#footer     { color: #888888; }
#bodyContent a[href ^="http://"]
            { background:inherit !important; padding-right:inherit !important}
li#pt-userpage
            { background: none; }
li#pt-mytalk
            { display: none; }

/* Badging */
#p-logo, #p-logo a, #p-logo a:hover
            { width:100px; height: 52px; }
#p-logo     { margin-left: 10px; margin-top: 5px; margin-bottom: 5px; }

#p-cactions { left: 0px; }
#p-cactions ul
            { margin-left: 180px; }

References

User Styles Alternative
Gallery of user styles
Skins
LocalSettings.php Style Information
Navigation Bar
User Rights
Wikipedia Monobook
MediaWiki Skin HowTo

UltimateLAMP

This product is no longer available. It was created over 6 years ago and the software is now too out of date.


As I discussed earlier in A picture can tell a thousand words, I outlined briefly what the intention of UltimateLAMP was. Let me spill the beans, so to speak.

What is UltimateLAMP?

UltimateLAMP is a fully functional environment allowing you to easily try and evaluate a number of LAMP stack software products without requiring any specific setup or configuration of these products. UltimateLAMP runs as a Virtual Machine with VMware Player (FREE). This demonstration package also enables the recording of all user entered information for later reference; indeed you will find a wealth of information already available within a number of the Product Recommendations, starting with the supplied Documentation.

My executive punch line with the “right” buzz words is:


You have heard of all the hype in Open Source with lowering Total Cost of
Ownership (TCO) or Total Economic Impact (TEI)? Evaluate Open Source now
in your organisation at no cost or risk with this software package.

What are the uses for UltimateLAMP?

Well, in a nutshell, UltimateLAMP allows for instant exposure to Open Source products that use MySQL. You have to remember my goal here is to promote MySQL. Unfortunately, as with any database, the features alone will not get the new sale; it’s the applicability to an organisation, and with Open Source the wealth of existing and developing applications, that can. I don’t necessarily like this approach. Indeed many open source products are poorly designed and can be poor choices in a large scale enterprise solution, but the flip side is, where else can you start?

The benefit of selling a MediaWiki, for example, due to the success and scalability of Wikipedia, is great. So it’s important that the product list includes proven products and currently developing products (rather than stale ones). This is something that the community can definitely provide valuable feedback on to help in this selection.

Other than becoming a CD used as a drink coaster, I feel the potential is here already to provide a copy to people, even install it on a manager’s computer. You can’t break the software, so why not install it for your non-computer user friends/family. The goal is to move up to executive management, however I feel the exposure to the general community first will greatly help.

How it came about

There is some history to this idea. Here are some of the highlights.

  • Late 2005, several planning sessions with Jon and Morgan about a more practical Open Source Contribution user group led obviously to LAMP stack products for simplicity and exposure. This led to exposure of LAMP stack products to more of the general public, and split from the original intended goal, but was a great idea.
  • Early 2006, further discussions of how MySQL could get exposure and traction into organisations. For myself professionally, how could I promote it in the industry sectors that I work in?
  • In April 2006, the MySQL Users Conference, with discussions of this idea with others and the positive feedback.
  • In May 2006, the VMware Appliance Challenge was the possible exposure and deadline needed for me to “Just Do It”. Originally the idea was intended as a Live CD, but in some ways a virtual machine is just as good.

Where to from now!

Well, supply and demand. I don’t know if anybody else has a particular use, or will even download to use it, or market it.

I could see the potential for MySQL User Groups to get behind my idea, and enable members to filter this into known organisations. I could see that for targeted opportunities/events, CDs or information could be distributed. With the support and backing of MySQL AB, I could see the opportunity for even a breakfast CEO/CTO/CIO introduction or other format of meeting the ultimate intended audience.

I could see the potential that an organisation or entity could provide free hosting (30 days) to an organisation that pointed a 3rd level domain to the provider (e.g. wiki.acme.com). It’s like a honeypot: if it takes off, after 30 days the company will either want to pay for hosting, or want to move it. And that’s also fine; the organisation gets a MySQL dump and links to documented installation instructions, or perhaps a sale of services for initial installation/customisation/training can be made.

What can I do?

There is always a list of things that can be done. For now the greatest thing I can ask for is feedback. The good, the bad and the ugly. It’s fine to get comments that say “That’s great”, or “Good job”, or “I can use this”, but it’s just as important to get comments that are proactive about what’s not good. I would value any feedback. Please feel free to Download UltimateLAMP.

On my immediate ToDo List or even partially complete is:

  • Document VMPlayer installation instructions for Windows/Linux (partial).
  • Add more product sample content.
  • Add more mediawiki content about the product, like customisation options, references to specific documentation, or other online working examples.
  • Documenting the installation/creation instructions for individual products.
  • Figure out a better way for users to contribute content that gets rolled back into the Virtual Machines. For MediaWiki, I could see a public online copy, but for other products it could become harder.
  • Optimise the VM image (removing unnecessary OS stuff), removing product language support (not ideal), but my goal is to provide a 2 CD pack. The first CD has VMplayer in Win/Linux/RPM formats and the default VMware Browser Appliance (all software from VMware). The second CD is UltimateLAMP. Combined in a DVD 2x case, along with perhaps a small booklet of a few pages, this would enable the product to potentially move to a commercial state.
  • Investigating other products

Should anybody wish to help, leave a comment; that way I can see somebody is reading this, and so can others.

Related Post:

UltimateLAMP Passwords

A picture can tell a thousand words

I’m a keen advocate of MySQL. However, while I use it and promote it within my limited IT circles, I often wonder how MySQL can get better traction and exposure, especially within both the industry sectors and physical locations where I am presently.

This presents a dilemma; it’s almost like the term that has been used in Venture Capital, and in the well-named book, Crossing the Chasm. I see and believe that MySQL already has good penetration within certain industry sectors, specifically Internet Based, Startup Based, or Small Based segments. However, I’m sure within other commercial sectors, MySQL has either a token exposure or little to no exposure at all, at least in the circles I mix with.

So how can MySQL the product and MySQL AB the company get both better exposure and penetration? Ultimately it’s great for the community, both in dollars driving product features and product support, and, from my interest, more jobs. I figure there are many different approaches, and they all target organisations, and the individuals making decisions within those organisations, differently. You have the 24×7 support with MySQL Network and certified installations that can satisfy management; you have the speed, flexibility, performance and capability that can appeal to a DBA. But this I doubt is the ultimate hurdle.

How do you do the active sell in 60 seconds to a potential cold contact, or how do you, as a MySQL Open Source advocate within an organisation, do the active or passive sell to a CEO/CTO/CIO/IT Manager etc.?

Well I have an idea, it’s called UltimateLAMP. It’s not the ideal solution, it’s not the great sell, but I figure it’s one approach that I can contribute to and promote.

Stay tuned, more to come very soon.

MySQL GUI Products

I’ve started now to actively use more of MySQL Workbench, MySQL Administrator, MySQL Query Browser and the MySQL Migration Toolkit. I am traditionally a very command line person, and it’s important that these skills are never put on the back burner. For all those budding Developers and DBA’s, you need these skills, especially for any MySQL Certification.

To indicate my intent, I’ve even created a new blog category specifically for MySQL GUI Products.
As I’ve mentioned previously, Mike Zinner from MySQL AB really impressed me with where the products were going in development. I’m still rather disappointed about the stability and compatibility when attempting to run under Linux (which is my desktop), but I’ve put that aside and configured a suitable environment to best use and test these products under Windoze! I’ll also be able to trial products under both environments and provide valuable feedback to Workbench Bugs.

To overcome this, as I use Windows reluctantly in a VMware environment, I’ve decided to install all MySQL GUI Products on an older laptop (a Dell Inspiron 5000, PIII 600MHz), which I use for media playing (DivX .avi’s) on my TV. Yes, I hate to resort back to Windows, but I had difficulties getting the TV Out working properly under Linux, and I only have so much free time. I would have liked to nut it out, but I find Windows superior in only 2 things, driver compatibility and running Photoshop, so I’ll leave it at that.

So being able to VNC to this machine gives me a spread of processing usage which is acceptable for now, and I’m working from a single desktop. (Time to dream some more about that Dell 24″ UltraSharp Widescreen LCD Flat Panel I’d like.)

Well again, having just started MySQL Workbench, and launching MySQL Administrator for the first time ever, I came across Windows functionality I’d not seen before. Within MySQL Administrator, under the Windows menu option, I was able to switch to MySQL Workbench, a completely different running Windows program. Now, I’d not seen this before between separately running products in Windows. Obviously the power of the GRT enables this; what a nice feature. Well, back to looking at this product some more.

MySQL :: Developer Zone Quick Polls

I don’t get to the MySQL Developer Zone main page often enough. In thinking about what pages I view everyday or regularly, it doesn’t rate as high as Planet MySQL, MySQL Forums or even the MySQL Forge.

I was most disappointed in the results of a recent poll, What did you think of the 2006 Users Conference?. The top response was I had no idea there was a Users Conference. That’s not good to see.

In an interesting poll, What are you most looking forward to at the MySQL Users Conference (April 24-27)?, the clear winner was Drinking beer with MySQL gurus. What does this say about the attendees? Either they are all alchos, or they just want to be around gurus in a less technical way.

I see this page also has a live feed of Planet MySQL. Perhaps we should get some more stuff down the right side of Planet MySQL, like the current Quick Poll itself and a feed of the current developer articles at the Developer Zone.

A Post MySQL Conference review. The 4 F's

Finally back home after some R&R at Yosemite before leaving the US. In conclusion, to sum up my experience of the 4th Annual MySQL Users Conference: “Excellent”.
Here’s my take: Friends, Functionality & New Features, and the Future.

Friends

I’ve used MySQL now for over 6 years, and full time for a number of years, yet I’ve only become active in the MySQL community, particularly Planet MySQL, in the past 6 months. Over that time, I’ve read a lot from members, and heard from many people. It was great at the conference to meet many of these people for the first time. The list includes: Community Members Frank Mash, Mike Kruckenberg, Markus Popp, Roland Bouman, Giuseppe Maxia and Paul McCullagh; MySQL Employees Mike Hillyer, Colin Charles, Jay Pipes, Mike Zinner; and New Contacts Kristian Köhntopp, Jeremy Cole, Sheeri Kritzer, Taneli Otala, Laura Thompson, just to start the list.

Functionality

Not only was there plenty of discussion on Server Functionality, there was plenty of MySQL Client functionality including the MySQL Workbench, MySQL Migration Toolkit and the other MySQL GUI products.
There were a number of discussions on uses and implementations of MySQL in large web deployments. It would be great to see some more white papers here.

New Features

A few months ago I wrote an article A call to arms!. In some part, I was just giving my opinion and hoping to gee up some support and feedback from the community. Well, the MySQL 5 Pluggable Storage Architecture got a great boost with announcements of the transactional storage engines Falcon by Jim Starkey, Solid and PBXT. Add details of InnoDB New Features, MyISAM additions, and indications of other wonderful if not entirely practical options. I’m sure there is much more in store to come this year that wasn’t discussed.

A number of talks featured Cluster, including Monday’s tutorial, and with 5.1 and beyond I can see next year there will be more discussion of successful Cluster implementations. There were a lot of talks about scaling out. I’d like to see more practical examples, perhaps a detailed tutorial.

The Future

What does the future hold for MySQL? The MySQL Server and Storage Engine Roadmap provided an insight into the upcoming planned features and releases over the next 2 years. Of course, the marketplace can change quickly, and MySQL is in a great position to react to the needs of the community quickly.

And before you know it, the 5th Annual MySQL Conference will be in play.

Conference Feedback

One thing I had a chance to discuss with Jay Pipes after the conference (I wasn’t the first to mention it, and plans are already in motion) was that a number of talks just needed more time. Moving the schedule to 55 minute talks gives that extra time, even if it is open question time from the floor, and it also makes knowing when sessions are on much easier, if they always start at the top of the hour.

In Conclusion

Frank (a.k.a. Farhan Mashraqi) asked me what session I liked the most. Hard to say. Agile Database Techniques: Data Doesn’t Have to be a Four-Letter Word Anymore rated very highly, as the content was close to heart and my expertise. HackFest B: Creating New SHOW Commands by Brian Aker showed just how easy it was to get into the MySQL source. Of course the internals are much more complex than this, but it was a good introduction. My favourite keynote was The Ubuntu Project: Improving Collaboration in the Free Software World. There were a number of talks I was disappointed in, as well as a number I didn’t get to, due to 8 sessions in parallel.

I would have to say that what impressed me most was not any one single talk, but the functionality of the GRT Shell that Mike Zinner and his team have built into the GUI product line. I was very impressed, and I could see this providing extensive functionality beyond just MySQL centric tools. This will be the area I’ll be focusing my contributions on in the near future.

MySQL Stored Procedures Performance

Another one of the sessions at the MySQL Users Conference I attended was Tuning MySQL5 SQL and Stored Procedures by Guy Harrison from Quest Software, a global company with 6000+ customers.

Guy has written a number of Oracle performance books in the past. His work now is on the “Spotlight” product family – database diagnostic tools converting data to graphical representations. These products require MySQL 5 and InnoDB only, simply due to needing access to the right internal information for presentation. There are freeware MySQL product downloads.

In this presentation he stated that nothing he was talking about was specifically new. He did make quite a funny comment: he is “now seeking refugee status in the MySQL Community”.

Guy is the author of the O’Reilly “MySQL Stored Procedure Programming” book, which I managed to get for free at the conference from the MySQL Quiz night, in addition to a shirt and cap for stumping a Guru.

His talk was on tools and techniques for tuning MySQL.

  • Explain Command – reveals what the optimizer intends to do
  • Explain Extended
mysql> explain extended select ...;
mysql> show warnings\G
Shows what the optimizer actually did. In this example, an IN was converted to an EXISTS.

There were 4 ways to provide optimizer hints.

  1. STRAIGHT_JOIN
  2. USE INDEX(…)
  3. FORCE INDEX(…)
  4. IGNORE INDEX(…)
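As a rough sketch of where these hints are placed (table and index names here are purely illustrative):

SELECT STRAIGHT_JOIN c.name, o.total
FROM customer c JOIN orders o ON c.customer_id = o.customer_id;

SELECT name FROM customer USE INDEX (name_idx) WHERE name LIKE 'A%';
SELECT name FROM customer FORCE INDEX (name_idx) WHERE name LIKE 'A%';
SELECT name FROM customer IGNORE INDEX (name_idx) WHERE name = 'Smith';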

In addition to the Slow Query Log, there are InnoDB specific status commands, two in particular.

show status like 'innodb%'
* innodb_buffer_pool_read_requests
* innodb_data_read

Indexing and the optimizer

  • In MySQL an index is the best tool to improve performance, however sometimes it’s better to access the entire table.
  • Indexes are generally effective when between 5% and 20% of rows are accessed.
  • Subqueries need to be satisfied by an index or performance will be quite inefficient.
  • Overloading indexes with additional columns, when key queries only use a few columns, can enable improved performance.

Not all indexes are created equal. In the following examples, each advancement improved performance.

  • No indexes ()
  • Single Index (customer)
  • multiple indexes (customer, product)
  • concatenated indexes (customer + product)
  • covering index (including required columns, customer+product+qty)
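A sketch of this progression, using a hypothetical sales table (customer, product, qty):

# single index
CREATE INDEX sales_customer ON sales(customer);
# multiple indexes
CREATE INDEX sales_product ON sales(product);
# concatenated index
CREATE INDEX sales_cust_prod ON sales(customer, product);
# covering index (key queries can be satisfied entirely from the index)
CREATE INDEX sales_cust_prod_qty ON sales(customer, product, qty);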

Examples of SQL that can’t benefit from Indexes.

  • Derived tables – a SELECT in a FROM clause creates a temporary table and will never get an index.
  • Views with UNIONS/GROUP BY

A comment from the audience was that derived tables can be of benefit to a correlated sub-query in specific examples.
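A contrived example of the derived table case (table and column names are illustrative); the materialised temporary table dt has no indexes in 5.0/5.1:

SELECT dt.customer, dt.total
FROM (SELECT customer, SUM(qty) AS total
      FROM sales
      GROUP BY customer) dt
WHERE dt.customer = 'ACME';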

Stored Procedures provide a mixed blessing for performance.

  • Can improve performance when there is high network overhead.
  • Some improvement on parsing.
  • Breaking up complex queries may provide benefits.
  • SQL is highly optimized for SET operations.
  • An SP is not optimized for number crunching; computationally it is not a fast language.

A routine written to calculate prime numbers provided the following performance (from most expensive to least): MySQL SP, Oracle SP, PHP, Perl, Java, VB.NET, C (gcc). This showed an example that was excessively inefficient. On the other hand, if the program is network dependent (e.g. accessing a million rows, performing some statistical aggregation), performance is comparatively the same between Java and an SP run locally, but the SP is much better when the client is on a remote host.

The performance of the SQL in an SP will dominate overall performance. Where SQL is tuned, go to tried and proven traditional optimisation techniques:

  • Optimize iterations
  • Optimize Logic/Testing
  • Avoid recursion

Loop Management

  • Only perform necessary code within iterations
  • LEAVE or ITERATE when possible in loops
  • Test the most likely IF/THEN statements first
  • Extract duplicated IF comparisons to produce nested IFs (within reason); see the sketch below
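A small sketch of these loop guidelines in a stored procedure (the routine and its logic are purely illustrative):

DELIMITER //
CREATE PROCEDURE find_first_match(IN p_target INT, OUT p_found INT)
BEGIN
  DECLARE i INT DEFAULT 0;
  SET p_found = 0;
  search_loop: LOOP
    SET i = i + 1;
    IF i > 1000 THEN
      LEAVE search_loop;      # exit the loop as early as possible
    END IF;
    IF i = p_target THEN      # test the most likely condition first
      SET p_found = i;
      LEAVE search_loop;
    END IF;
  END LOOP search_loop;
END//
DELIMITER ;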

Some guidelines for Triggers (a minimal example follows the list).

  • Triggers will have a non-trivial overhead, even for the simplest trigger.
  • Because triggers are FOR EACH ROW only, don’t have expensive SQL in any trigger.
  • Very carefully tune SQL in triggers.
  • An empty trigger produced 12% overhead.
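To put the last point in perspective, even the simplest possible trigger (sales is an illustrative table name) still fires for every affected row:

DELIMITER //
CREATE TRIGGER sales_bi BEFORE INSERT ON sales
FOR EACH ROW
BEGIN
  # intentionally empty; even this adds measurable per-row overhead
END//
DELIMITER ;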

For more information check out www.quest.com/mysql

Mark Shuttleworth

Thursday’s Keynote speaker at the MySQL Conference was Mark Shuttleworth talking on The Ubuntu Project: Improving Collaboration in the Free Software World.

Ubuntu from Canonical Ltd is fast becoming one of the most popular Linux distributions, based on Debian. Here are some notes from his presentation. Mark had some really powerful one or two line slides that typified both his presentation intention and the goal of Ubuntu.

Delivering on the Promise of Free Software

  • MySQL levelled the playing field; individuals could use MySQL to build applications that could now compete in the industry.
  • Apache, Python and Linux made it all possible.
  • This has provided the opportunity and passion to build software that can serve a purpose.
  • Ubuntu is the continuation of levelling the playing field, making it always available at no cost.
  • Whatever your vision, you have the opportunity without financial limitation.
  • To be a pioneer in new fields, open source freedom enables the opportunity for anybody.

The New Deal of Free software

* Different Economics

  • Ubuntu is world class and free of charge.
  • Important to recognise the governance.
  • We don’t build 99% of it; it’s built by communities. We want to maintain good, healthy and open relationships with the communities of software developers and contributors.

* Different culture

  • Transparency. You can participate, and you get to see how the software is built. Co-ordination of people worldwide is difficult; the advantage is you get to work with people in the public space everywhere.
  • To be able to try, to experience, to express their own inspiration.

Our vision is to build a complete community-driven distribution that can grow and sustain itself through support and services.

Self-sustaining at a professional level. Community driven.

  • The Ubuntu Proposition
  • Genuine Freedom and Flexibility
  • Zero Licensing Fees
  • Certification: Hardware, Software, Skills
  • Superb Commercial Technical Support
  • Standards Compliance
  • Internationalization and customization
  • Collaboration to the Core

We do the best work when we focus on extreme challenges.

Mark made a great comment about the Chinese version: what was in English on any page was, for Mark, the saving grace, but for the Chinese user it was the wall; so a committed emphasis on internationalisation was important.

Ubuntu as you know it today is widely rated the #1 desktop Linux, with millions of users globally, with complete office and home desktop distributions. We started on the desktop, and we are continuing to expand into different variants, including the enterprise. The next release in June 2006 – Dapper – 6.06 will include LTS (Long Term Support): 3 years for desktop, 5 years for server.

We maintain relationships with the community ecosystem.

  • Looking at the user relationships.
  • The prime focus is simplicity. Also comprehensive, community, current.
  • For the business relationships, wanting to offer a Tiered Partnership Program and an Extensive Global Solutions Provider Network.

Free Software Relationships

  • Close to upstream (e.g. MySQL, Apache, Firefox etc)
  • Close also to other distributions (both debian based and not). We want the best of their work to also be part of what’s available to all.
  • Need to ramp up the collaboration of projects, especially between projects. Check out the launchpad.net site – much easier to collaborate across projects.
  • With MySQL and Ubuntu you have freedom to deploy without asking permission, and support when and where you need it
  • A key is to make the packaging of Ubuntu and MySQL the best, getting contributions from the community. This expresses most powerfully what MySQL can do.

Beating history into submission

  • We judge ourselves by history. We should look at our strengths. We should ask “What futures are enabled?”

Free software enables a different economic paradigm.
Embrace it.

A question from the audience was where the name Ubuntu comes from. The answer was:
Ubuntu is common to many African languages. It means “human-ness”. The Ubuntu root is common to a lot of words. One meaning included “The way we look after other people defines who we are.”

Kubuntu – has a meaning, “For Humanity”.



My MySQL Conference Presentation


My talk at the MySQL Users Conference went well. MySQL For Oracle Developers was one of a pair of talks; the second was by Mark Leith on MySQL For Oracle DBA’s.

I had a number of positive comments from attendees, including Ken Jacobs of Oracle, who also contributed some valuable information in comparison of UPDATE/DELETE ORDER BY/LIMIT statements that I was unaware of.

You can download my paper here.

My slides, like many talks, just scratched the surface in the allotted time. I’ve been working on additional reference material; a work in progress is available at MySQL For Oracle Developers. My continuation of this will depend on feedback from the community on its intended value.

Hacking MySQL Source improvements

Further to my earlier post Hacking MySQL Source (in a good way), in which I was having a compilation problem when re-compiling the MySQL source (i.e. it worked on the first build, but failed on subsequent re-compiles, even with no changes).
I’ve been able to solve my re-compiling issue, with special thanks to Jan Kenschke of MySQL AB, who was near during the MySQL Quiz Show.

As suspected, there is no need to re-run the ./BUILD/compile-pentium-debug command, as this cleans first; a simple make command is sufficient (I needed to confirm no other command args were needed). By doing this, it also better highlighted the actual error, where previously it wasn’t. I didn’t keep a copy of the error, but in effect I needed to do the following in order to successfully recompile.

su - mysqldev
cd mysql-5.1.10-beta-nightly-20060413
touch mysql-test/std_data/client-key.pem
touch mysql-test/std_data/client-cert.pem
touch mysql-test/std_data/cacert.pem
touch mysql-test/std_data/server-cert.pem
touch mysql-test/std_data/server-key.pem
make
# Success

I can’t answer why I had to do this manually (create these cert files), and why some level of make doesn’t (or does, but I don’t know the command), but it works, and right now that’s important. Hopefully a MySQL guru can enlighten me.

From here, I was able to test as I’d written previously. I just need to confirm my changes work before reporting back, as well as providing some guidelines for debugging with gdb.

Contributing to the MySQL Community

Everybody can contribute to the MySQL Open Source Community. Here are some ideas:

  • You can start by completing a MySQL Conference Survey form (those forms that we are all avoiding).
  • You can download, evaluate and test newer versions of MySQL Products.
  • You can contribute to the MySQL Forums to ask questions and even give answers.
  • Log Bugs & Enhancements on MySQL Products.
  • Write about your experiences with MySQL in a Blog and add your Blog to PlanetMySQL. With now over 100 feeds, if everybody wrote something about MySQL in the next year, we would have over 1000 feeds next conference.
  • Contribute your thoughts (via wiki), MySQL products, code snippets and examples to MySQL Forge (thanks Mike)
  • You can even contribute to the MySQL Source Code.

If you are passionate about MySQL and you are attending the MySQL User Conference, then jump into the Open Source MySQL Community.

Extracting User SQL Queries from a MySQL Server

Joe Kottke from FeedBurner in his MySQL Conference presentation today FeedBurner: Scalable Web Applications Using MySQL and Java mentioned a novel way of extracting SQL Select Statements from a MySQL Server.

The obvious MySQL ways include SHOW [FULL] PROCESSLIST, the Slow Query Log, and the General Query Log. In 5.1 you can also use INFORMATION_SCHEMA.PROCESSLIST.
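For example, in 5.1 something like the following should list the currently executing statements:

SELECT id, user, host, db, time, info
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE command = 'Query';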

Anyway, Joe mentioned he does a strings on tcpdump output to extract the queries, and there was no need to restart the server (this point interested me). So, not letting a challenge go by, and having never used tcpdump, I gave it a try.

Well, here’s how.


tcpdump -w mysqldump.txt -i lo

NOTE: You must log output to file with -w to get the full packets and not just headers.

In a separate session.


$ mysql -uroot -p mysql -h `hostname`
mysql> select * from user;
mysql> exit
# I did a ctrl-C in the other terminal (not sure if necessary)
$ strings mysqldump.txt
select * from user
mysql
user
user

NOTE: localhost wasn’t valid; it had to be the hostname of the server, as a localhost connection uses the Unix socket and so never crosses the lo interface. Of course you could access the server from a different client.
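If you are capturing on a busier interface (eth0 here is an assumption), a port filter on the default MySQL port of 3306 limits the capture to MySQL traffic:

tcpdump -w mysqldump.txt -i eth0 port 3306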

Of course there may be other or better ways, but it worked for me and was rather cool.

Perhaps this is a good reason to use SSL?

MySQL Migration Toolkit Extensions

Just to add some more features of the MySQL Migration Toolkit from the MySQL Conference presentation MySQL Migration Toolkit by Mike Zinner from MySQL AB.

You can extend and modify the functionality of the Migration Toolkit with the supplied Eclipse Java project. I’ll need to download the code to write some more detailed notes, but here were a few points from Mike.

Classes are in the com.mysql.grt.modules package space.
The main configurable classes are named Migration????.java and ReverseEngineer????.java, with ???? being the different database products.

Within the Eclipse project, you can incorporate other languages. For example, ReverseEngineerMySQLJdbc.java has a callGrtFunction to code written in C. This I’m assuming will allow you to incorporate all the other languages of the GRT, such as Python and Lua. (I’m still impressed you can actually do this; a mini project for another day to pull this apart.)

A question from the audience was about Oracle user defined data types. MigrationOracle.java, for example, can easily be extended to cater for the migration of user defined data types to a suitable valid MySQL alternative.

References
MySQL Migration Toolkit – Provides details, screenshots and Flash Tutorials for SQL Server, Oracle and Access Migrations.

The MySQL Migration Toolkit

A Wednesday afternoon presentation at the MySQL Conference was MySQL Migration Toolkit by Mike Zinner from MySQL AB.

In summary, the Migration Toolkit currently provides the following feature set.

  • Wizard like interface
  • Plug-in architecture
  • Migration of Oracle/MS SQL Server/MS Access/Generic JDBC Data Sources. 2006 support for Sybase and DB2.
  • Provides user support to expand to other databases
  • Customisation of the produced wizard scripts at any step
  • Completely scriptable (the wizard interface can produce command line replay scripts), allowing re-running and re-scheduling
  • Binaries include an Eclipse Java Project to enable easy extension of the Migration Toolkit.

The product is built on the Generic Runtime Environment (GRT) which is used in other MySQL GUI Products. The GRT is a thin C Layer that allows you to easily add modules written in other languages (e.g. C/C++, Java, Lua, Python etc).

The following are the steps of the Migration process.

  • Source/Target
  • Object selection
  • Object Mapping
  • Manual Editing
  • Schema Creation
  • Data Mapping
  • Bulk Transfer
  • Summary

Some more relevant points on some of the individual steps.

Source/Target

  • Screen caters for source specific requirements (e.g. SID for Oracle)

Object selection

  • Choose the different Schema Objects, including tables, views. Can limit by type, or individual objects
  • For Oracle additional objects include Schemas, Routines and Sequences

Object Mapping

  • Define Methods for mapping of schema objects
  • Includes language types (e.g. latin1), multilanguage (utf8), or user defined
  • For tables you can choose the purpose, which includes data consistency or statistical data. These rules will include determining the best storage engine, and interpreting auto increment columns.

Manual Editing

  • You can drill down in great detail to define the object mapping, down to a column level
  • Provides a migrated objects review, including warnings (for example, views need to be manually checked, or sequences not migrated)

Schema Creation

  • Can create objects in real-time, or create a SQL script for later execution
  • You can review the generated SQL code that will be used in real-time and perform further manual refinement (e.g. converting to a specific SMALLINT, INT, MEDIUMINT refinement)
  • Will show any warnings, and allow for manual correction

Data Mapping

  • Can create data in real-time in a MySQL database
  • Create INSERT statements (normal data and blob data) (optimized multi-row inserts with byte limits)
  • No facility at present for flat file creation/loading via LOAD DATA INFILE

Bulk Transfer
Summary

  • A migration report which can be saved to disk.
  • Create a Migration script for the complete process (which is a generated Lua script).

I can see the possibility for the following enhancements.

  • Consider ENUM fields by analysing data (character columns < 20 wide) and offering ENUM as a data type; see the sketch below
  • Consider determining AUTO_INCREMENT columns
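The ENUM analysis could start as simply as checking the cardinality of each short character column (status and orders are illustrative names):

# A short character column with few distinct values is an ENUM candidate
SELECT COUNT(DISTINCT status) AS distinct_values, COUNT(*) AS total_rows
FROM orders;
SELECT DISTINCT status FROM orders;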

References
MySQL Migration Toolkit – Provides details, screenshots and Flash Tutorials for SQL Server, Oracle and Access Migrations.

Hacking MySQL Source (in a good way)

HackFest B: Creating a New SHOW Command by Brian Aker at the MySQL Users Conference

Brian stepped through the steps for those attending to modify and deploy new functionality in the MySQL server. Cool. The end result I would consider relatively easy for an experienced developer (after avoiding the pitfalls).

NOTE: I wasn’t able to complete this successfully during the session, but I’ve posted this, so hopefully the input and review of others can help in overcoming the current issues. See Outstanding Issues throughout my notes.

Our goal: Produce a new command SHOW CONFERENCE;

The following commands were performed on CentOS 4.2. There may be some differences with different Linux Distros.

Prepare a current Source Code Tree


su -
useradd mysqldev
su - mysqldev
wget http://downloads.mysql.com/snapshots/mysql-5.1/mysql-5.1.10-beta-nightly-20060413.tar.gz
# NOTE: You should check the snapshots page for latest versions http://downloads.mysql.com/snapshots.php
tar xvfz mysql-5.1.10-beta-nightly-20060413.tar.gz
cd mysql-5.1.10-beta-nightly-20060413
# compile script will depend on H/W and requirements
./BUILD/compile-pentium-debug --prefix=/home/mysqldev
make install
scripts/mysql_install_db --datadir=/home/mysqldev/data
sql/mysqld --basedir=/home/mysqldev --datadir=/home/mysqldev/data &

NOTE: This can take a while, so it’s a good opportunity to leave this and come back at a later time. The --prefix option allows you to install into the new user’s directory, ensuring that you can test without affecting any currently installed mysql installations on your machine.

The BUILD directory contains many different compilations for platforms and variants. This version contains 41 specific scripts. For the purpose of our new SHOW command, we are going to run in debug mode for any necessary debugging.

Outstanding Issues

  • I was surprised a make install didn’t configure bin/mysqld and scripts/mysql_install_db as you would get in a binary distribution. There are probably additional commands necessary for this, but for the purpose of development it works.
  • I run multiple versions of MySQL on my machine, and I had some conflict with my /etc/my.cnf file, so I just removed it for the purpose of this test. There needs to be a cleaner way to ensure this isn’t used, and also to compartmentalise/isolate different source trees.

Confirm Installation

I ran this in a separate terminal window.

su - mysqldev
bin/mysql -e "SELECT VERSION()"
+------------------------------------+
| VERSION()                          |
+------------------------------------+
| 5.1.10-beta-nightly-20060413-debug |
+------------------------------------+

This gives us a suitable source baseline.

Implementing a new SHOW Command

The easiest means of developing a new SHOW command is to base this on an existing command. We are going to base this new SHOW command on the SHOW AUTHORS command.

We are going to be looking at the following files.

  1. lex.h
  2. sql_yacc.yy
  3. sql_lex.h
  4. sql_parse.cc
  5. mysql_priv.h
  6. sql_show.cc

All these files are found under the sql directory in the source tree.

lex.h

131:     { "CONFERENCE",       SYM(CONFERENCE_SYM)},

sql_yacc.yy

 202:  %token  CONFERENCE_SYM
...
8267:       | CONFERENCE_SYM
8268:         {
8269:           LEX *lex=Lex;
8270:           lex->sql_command= SQLCOM_SHOW_CONFERENCE;
8271:         }
...
9366:       | CONFERENCE_SYM        {}

sql_lex.h

112:   SQLCOM_SHOW_CONFERENCE,

sql_parse.cc

3515:   case SQLCOM_SHOW_CONFERENCE:
3516:     res= mysqld_show_conference(thd);
3517:     break;

mysql_priv.h

915: bool mysqld_show_conference(THD *thd);

sql_show.cc

229: bool mysqld_show_conference(THD *thd)
230: {
231:   List<Item> field_list;
232:   Protocol *protocol= thd->protocol;
233:   DBUG_ENTER("mysqld_show_conference");
234:
235:   field_list.push_back(new Item_empty_string("Name",100));
236:
237:   if (protocol->send_fields(&field_list,
238:                             Protocol::SEND_NUM_ROWS | Protocol::SEND_EOF))
239:     DBUG_RETURN(TRUE);
240:
241:   protocol->prepare_for_resend();
242:   protocol->store("Welcome to the MySQL User Conference 2006", system_charset_info);
243:   if (protocol->write())
244:     DBUG_RETURN(TRUE);
245:   send_eof(thd);
246:   DBUG_RETURN(FALSE);
247: }


cd ..
./BUILD/compile-pentium-debug --prefix=/home/mysqldev
...
gmake[3]: *** Waiting for unfinished jobs....
/bin/chmod +x mysql-test-run-t
/bin/mv mysql-test-run-t mysql-test-run
gmake[3]: Leaving directory `/home/mysqldev/mysql-5.1.10-beta-nightly-20060413/mysql-test'
gmake[2]: *** [all-recursive] Error 1
gmake[2]: Leaving directory `/home/mysqldev/mysql-5.1.10-beta-nightly-20060413/mysql-test'
gmake[1]: *** [all-recursive] Error 1
gmake[1]: Leaving directory `/home/mysqldev/mysql-5.1.10-beta-nightly-20060413'
gmake: *** [all] Error 2

Outstanding Issues

  • I’m sure there must be a more efficient means of compiling changes; this process effectively cleaned and started again.
  • I had a build failure which didn’t seem to make any sense.

At this point I wasn’t able to continue, but here are notes I took of next steps.

make install
gdb mysqld
run --gdb --debug

In another terminal session.

su - mysqldev
bin/mysql
mysql> SHOW CONFERENCE;

I’m keen to see it work in my own environment and documented for others to try.

Brian moved on to creating an INFORMATION_SCHEMA query, but we ran out of time to complete this. I’ve got some notes to document at a later date.

PBXT – The MySQL Community Transactional Storage Engine

In having a discussion with Paul McCullagh (the creator of the PBXT transactional storage engine) and Taneli Otala, MySQL AB CTO, after the keynote presentation at the MySQL User Conference, Taneli made the following comment (paraphrased and reproduced with permission).

“I talk about PBXT in discussions shamelessly. The development of PBXT was excellent timing in the MySQL community landscape”.

It was an excellent commendation that MySQL AB management considered the contributions from the community so highly. As mentioned in the opening keynote, MySQL with the Storage Engine API in MySQL 5.1 has great potential to expand what options are available to users of MySQL.

Good work Paul. I, like many others, wish you the best in your continued development and contribution.