Auditing an existing MySQL Installation

Yesterday I ran into an old colleague who now runs quite a successful computer store chain and a highly successful web store here in Australia. Long story short, he was having some MySQL problems, so I offered to cast my eye over it. Given that they had some data corruption in stock levels (e.g. getting to an inventory count of -1), my first split-second thought was inappropriate transaction management.

Thinking about it last night, I considered what I would do to quickly audit an existing MySQL installation and application, both for reference and to highlight issues, assuming I would have no access to any administrators or developers.

So what would you do? I made some preliminary notes; here's the full account of what I did.

Audit Steps

OS Specifics

$ mkdir /tmp/arabx
$ cd /tmp/arabx
# keep a quick copy of stuff
$ script
# Linux generals
$ hostname
$ uname -a
$ uptime
$ df
$ which mysql
$ mysql --version

MySQL Specifics

$ mysql -uroot -p mysql
mysql>  show variables;
mysql>  show status;
mysql>  show databases;
mysql>  show processlist;
mysql>  show full processlist;
mysql>  select host,user,password from user;
mysql> exit;
$ cat /etc/my.cnf
# file not found?
$ find / -name my.cnf
# Results
$ cat /etc/mysql/my.cnf
$ cp /etc/mysql/my.cnf .
$ cd /usr/local/mysql
$ du
$ cd data
$ ls -al
$ cp *.err /tmp/arabx  # for later review
# What, no databases? Surely they were not storing data in the mysql database
# Quick check: confirm there was another database in the earlier SHOW DATABASES list
$ grep datadir /etc/mysql/my.cnf
# I see, it's off somewhere else
$ cd /var/lib/mysql
$ ls -al
$ du
$ cp *.err /tmp/arabx  # for later review
$ cd /tmp/arabx
$ mysqldump --no-data  -u[user] -p [db] > schema.sql
$ grep TYPE schema.sql
# Observation:  All MyISAM, the first proof of my initial theory
$ mysql -u[user] -p [db]
mysql> show tables;
mysql> show table status;
mysql> exit;
$ cd /tmp
$ tar cvfz arabx.tar.gz arabx
$ scp arabx.tar.gz ...

Application Specifics
I'm not going to detail the steps here as this is really very implementation dependent. What I did, in summary, was:

  • Identified website location on filesystem, general du, ls -l commands
  • View working screens showing Stock Count logs, Fulfill Order and Product Return
  • Review code of these areas, as well as the data, confirming what was seen on screen via SQL

Recommendations

Immediate

  • Clean up the current directory of HTML files to remove old files and old backup copies. This ensures no PC has some old page with bad code bookmarked
  • Clean up /usr/local/mysql/data as the data directory is now defined as /var/lib/mysql. It threw me having two data directories and two sets of error logs. It was only that the database directory was not in the first location; otherwise I may have missed it initially.
  • The Stock Adjustment page needs to log an adjustment history for the Audit Trail (only Sales and Returns were in the Audit Trail, leaving a possible gap)
  • Implementation of transactions. Given the volume of transactions, and that it would appear LOCK TABLES had been implemented but removed due to performance, the obvious choice is to implement InnoDB tables and transaction management in code (quite some work); see the sketch after this list.
  • Change of some columns from DATE to DATETIME to record the time of occurrence. (Code was using NOW(), so that part's already done.) Or, implement some TIMESTAMP columns (as none were in use) and leverage MySQL standard functionality.
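
To illustrate the last two points, here is a minimal sketch of the direction I would take, using hypothetical stock, product_id and quantity names. On their MySQL 4.0 server the older TYPE= syntax applies (ENGINE= in later versions):

-- convert each table from MyISAM to InnoDB (one statement per table)
ALTER TABLE stock TYPE=InnoDB;

-- wrap a stock adjustment in a transaction; the WHERE guard stops the
-- count going negative, and the application checks the affected rows
START TRANSACTION;
UPDATE stock SET quantity = quantity - 1
WHERE product_id = 42 AND quantity > 0;
COMMIT;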

Medium Term

  • They are running MySQL 4.0, which is fine, but the data corruption and the lack of clean website code point to auditing via triggers as the easier solution.
  • Upgrade to MySQL 5.0. Key reason: triggers. These enable more audit analysis with even less need for an initial understanding of the application; see the sketch after this list.
  • Implement a CVS repository. Even for a single developer, it's a habit that far outweighs the impact of a few more commands.
  • Review backup strategies for the HTML code and repository, which was only being mirrored, but not backed up for a disaster recovery situation.
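
As an example of the trigger-based auditing mentioned above, here is a minimal sketch using MySQL 5.0 syntax; the stock and stock_audit names are hypothetical:

CREATE TABLE stock_audit (
  audit_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  product_id INT UNSIGNED NOT NULL,
  old_quantity INT NOT NULL,
  new_quantity INT NOT NULL,
  changed TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (audit_id)
) ENGINE=InnoDB;

DELIMITER $
CREATE TRIGGER stock_au AFTER UPDATE ON stock
FOR EACH ROW
BEGIN
  -- record every change to the stock level, independent of application code
  INSERT INTO stock_audit (product_id, old_quantity, new_quantity)
  VALUES (OLD.product_id, OLD.quantity, NEW.quantity);
END$
DELIMITER ;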

I guess if I ever do this again, there is merit in cleaning this all up and providing some level of automation.
Does anybody have any suggestions of other obvious things to consider?

MySQL Workbench 1.0.1 First Impressions

These are my first impressions of MySQL Workbench 1.0.1. Rant and rave you may say, but a new user or an experienced modeller would probably observe these same points. Also, given that I suspect a good number of users will be previous DBDesigner users (a poll would confirm this), some level of functionality that may not be considered initially in a new product should be considered for this migration path of legacy users.

I'll take the time to review the Forums, adding my points, and review the Bug System, but for now, just blogging is easier.

Conclusion

Note: Generally a conclusion is at the bottom, but I suspect most readers will switch off quickly, so I wanted to catch their gaze in the first few seconds.

In summary, I've just scratched the surface. I'd do a fuller QA review and testing, but I'm not certain yet of the best feedback loops, and whether it's warranted. A lot of what I've said may already be known internally. I don't want to sound petty, but I would not like to reference this as a 1.0 product; it's got some way to go to being stable, consistent and complete in a basic level of functionality. In addition, the MySQL website 5.0 page spiel lists MySQL Workbench as a 5.0 product, which I think is a little inaccurate.

This product appears to have a sound start, but it's a long way from me considering moving from DBDesigner, which is a real shame, as I've been waiting now probably 8 months to get my hands on this (and to lose some annoying bugs in the now unsupported DBDesigner), and I'm desperately wanting to get support for MySQL 5 re-engineering.

Main View

After startup you are presented with the MySQL GUI window, which consists of a number of elements: the menu line, the toolbars (horizontal and vertical), the Main View, and some tab views (shown on the right side).

  • You can't double-click on a table to get into edit mode; you must choose the pencil (bottom left corner of the table). When you have a table with 30+ columns this involves more scrolling, particularly depending on your view magnification, if you start by first finding the table in a large 150+ table model.
  • While the display of the different elements within a table is nice (e.g. columns, indices, foreign keys etc), when I've got 150+ tables in my diagram I don't want to see this verbose information. I'd like to see a user option for cut-down views, removing these headings and displaying/hiding indices, foreign keys and triggers.
  • What do the lock icon and the other bottom-right icon do? There seems to be no feedback as to what they do.
  • What does the other icon do (not pencil, not lock)? Again there appears to be no feedback.
  • I don't see a snap-to-grid. I also don't see any way to adjust the size of the grid.
  • I see in the Schema window the hopeful ability to reference multiple schemas (I'm unclear of the full benefit here), but for now I see no clear way to create more than one. Given this, when you select the Table Tool, the Tool Options gives you a drop-down box for schema. How do you create multiple schemas? Should you in one diagram? If you only have one defined, the drop-down box should not be present.
  • Tool Options shows 3 colors; that's nice, however how do you select more? The color only highlights the table name in the Main View. I'd like to see a background color for all tables. Again, with 150+ tables, color-coding tables by module makes it easier to see them; the heading is not enough.
  • You can color a table when creating it, but there is no ability to edit the color at a later time.
  • After creating two tables, both with primary keys, I tried to create a relationship between them, using all 3 relationship options. While I could select the source table, I couldn't select the destination table in any way (click, double-click, etc).
  • Creating a view with the View Tool provides the iconic buttons in the top section of the layout, different to the table layout. It should be consistent.
  • The View Tool allows you to enter SQL for the view; that's nice, but it should be displayed on the Data Model. The Data Model's responsibility is to show tables, columns and relationships. It should show table names and column names (in the case of '*', I think it should probably show '*', not the expanded columns, due to the complexity).
  • I created a table with an INT(11), VARCHAR(40) and X (which becomes X(0), see the Table Editor points). In the resultant Main View, I get INT, VARCHAR and X((0)) as the column definitions. There is no consistency here.
  • I would like to see the capability of removing data types from the display within tables. In a more structured environment, you would first start with a logical model, then move to a physical model; in early design, you are just defining the probable attributes of a table, before they become columns. The data type isn't determined, so just listing names and icons (key, mandatory) is all that's required.

Schema Tab

  • With my long list of tables, I would like to be able to quickly go from the table in this view to the table in the Main View. (Previously DBDesigner provided a double click to achieve this).
  • I understand that the ordering of categories within the database is alphabetical, but it should be in priority order (Table, View, Routine…). Two reasons: first, this is the likely creation process, as you create tables first; second, for any schema management prior to 5, the other three aren't relevant.
  • Following the previous point, I don't see any compatibility management for MySQL versions. While there are too many subtle changes to warrant a great deal of work, a simple compatibility setting of 4.0, 4.1, 5.0, 5.1 would clearly indicate that under 4.x compatibility, views, routines etc are invalid, and for simplicity these should be removed from views (Schema Tab) and even the vertical menu.

Editing

Table Editor.

  • Has no access to help. In this case, on first view I thought: what does Flags mean as a column attribute? While there may not be any help yet, the infrastructure should be in place.
  • Data Type has no completion text. (DBDesigner, for example, would autocomplete Date if you started with a D.)
  • Data Type: if you specify nothing, it defaults to (0), which is invalid. If you specify D it defaults to D(0). Regardless of what is entered, the table structure should be syntactically valid at all times on exit of the field.
  • Options Tab. There are a lot of MyISAM-specific options, yet I'm using the default engine, which was InnoDB. These should probably be hidden from view.
  • I don't like the fact you have to go Apply, then Close; it's so easy just to go Close and lose your changes without warning. I think there is a clear need for a 'Save' which saves your changes and returns you to the Main View.
  • Show Advanced/Hide Advanced is only relevant for the Columns tab (at this time), so it should not be visible on the other tabs.
  • I see no ability in this version (1.0.1+) to define the set of valid data types (in my case I like to remove some, to simplify autocompletion, which isn't present).

Other

User Interface

  • No tooltips for the menu icons.
  • Help | About didn't do anything. I would have expected a popup window.
  • I'd like to see some tooltips on the icons used within the objects placed on the Main View.
  • There are no other placeholders for help. Worst case, define the screens as 'to be completed', perhaps with some identifier, and then give users the ability to contribute to the help (a Web 2.0 thing); it will be better than writing it from scratch. Even a wiki to allow users to start on the help would, I think, be beneficial.

Infrastructure
The saved files are XML, yet the extension given is .mwb. While that is a noble identity, using a .xml extension shows it's XML and also makes it easier for other uses. I can think of two straight away. First, I may wish to use XPath Explorer to quickly search for something. Second, I may use XSLT to provide an HTML quick view of my schema.

Errors
Got the following error, but was not able to reproduce it: created a table, added some columns, moved around the tabs, added a new column lastUpdateTimestamp, [tab] to go to the data type, and crash.

** CRITICAL **: myx_grt_bridge_dict_item_get_value: assertion `dict != NULL’ failed

Testing/Trialing new MySQL Releases

By now I'm sure you have all heard about the free VMware Player, allowing easy and quick access to see, view and use other OSs. For those Windows users out there, now is your chance to trial MySQL under Linux with no impact to your system, so why wait?

See the MySQL Virtual Machine details on the VMware site. On closer inspection this effectively pushes you to the VMware Technology Network (VMTN) page within the MySQL website.

The MySQL guys need to update their site to reference the free Player, rather than a trial version of Workstation. Even VMware Server is free (and could be mentioned). You can read more about the offerings at a previous blog, Using VMware Server (free).

Also, MySQL is only offering MySQL 4.1.12 (released in May last year). Surely this would be a great way to push the newer releases. I'd like to see MySQL offer this for at least the current production GA version 5.0, and it would be a great way to promote new releases, like an alpha release of 5.1. How easy would it be then for people to trial and test the new features without any hint of messing up or changing any existing environments? Perhaps the MySQL marketing department could consider this?

Is it more work? Yes. Will it take resources? Yes. But look at the benefits. Instead of the diehards who are prepared to download bleeding edge releases, you can now reach a newer market of users with an easier, cleaner, throwaway installation of newer releases. You would not consider doing it for every dot release; more the current GA 5.0, and alphas and RCs for newer versions, I think.

A working MySQL Workbench Under Linux

I must admit I'd given up trying to get MySQL Workbench working under Linux. I guess I'd spent at least 4 or 5 days full time at it, and it was just out of my league, with GTK and C++ errors. It had seemed like a losing battle; I've had 3 detailed documented goes at 1.0.0 (last post 19th Jan), and 2 with 1.0.1 (last post 31st Jan).

Anyway, the good news is I now have MySQL Workbench working under Linux. I've been working with MySQL AB on a number of specific problems and you should expect a release from MySQL AB soon to address these. Here is a summary of my experience.

  • 1.0.1 removes the requirement for Java dependencies
  • 1.0.1 provides a README.linux with configuration requirements and a subset list of minimum library requirements (which needs updating)
  • libsigc++ 2.0.11 (an older version) is necessary; more recent versions, up to 2.0.17, do not work
  • Solving the library dependencies from true source .tar.gz was difficult to manage; however, I finally got to use RPM libraries to provide some consistent, reproducible steps. I'll document this later, as there were a number of issues with achieving it under CentOS 4.2
  • Solving the GLIBCXX_3.4.4, GLIBCXX_3.4.5, GLIBCXX_3.4.6 error. This is what had finally stopped me. Thanks to my new guru friend Alfredo who put me on the path to solving this.
  • An underlying glXGetProcAddress() compile error required code changes (which are being incorporated in the next release). This finally led to a binary being built.
  • Executing the binary lasted a split second: Segmentation fault $PRG-bin $*. Debugging determined the problem to be a specific NVIDIA issue; reverting to generic video drivers only left me with an OpenGL minimum 32MB 3D accelerated graphics card error.
  • Patched code for this problem (to be incorporated in the next release) has led to a working product, at least enough to play around with. Now the real fun begins.

Got heaps more to write about, however that would have slowed down the good news. More to come soon.

My thanks to the team at MySQL AB. I have received excellent care and response from the top down. I'd like to say that all the pain has been worth it in the end; had this not been an open source company, I would have given up a long time ago.

Federated Syntax

I've never used Federated. I'm waiting for the JDBC version capabilities so I can connect to a non-MySQL server (specifically Oracle). In reading the docs, I see that the syntax includes a CONNECTION string.


CREATE TABLE federated_table (
id INT(20) NOT NULL AUTO_INCREMENT,
name VARCHAR(32) NOT NULL DEFAULT '',
other INT(20) NOT NULL DEFAULT '0',
PRIMARY KEY (id),
INDEX name (name),
INDEX other_key (other)
)
ENGINE=FEDERATED
DEFAULT CHARSET=latin1
CONNECTION='mysql://root@remote_host:9306/federated/test_table';

I'm surprised at the state of the syntax for two reasons.

  • First, there are hardcoded variables in the string; this sort of breaks the rule of separating syntax from authentication specifics
  • Second, if you have multiple FEDERATED tables, you incur a level of duplication of this information, and then multiple necessary changes if the connection string is modified

I can't really ask for the following, as I don't use this syntax, but I'd like to suggest a level of abstraction of this information. Could the CONNECTION point to a referenced object (e.g. CONNECTION='HOST1')? This would solve my second point. The requirement then is that the HOST1 object contains a connection string, and where would this be stored? I don't have an answer for that one. Perhaps somebody else could make a suggestion?
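
To make the suggestion concrete, here is a purely hypothetical sketch; the CREATE CONNECTION statement and the HOST1 name are my own invention and not valid MySQL syntax today:

-- define the connection details once
CREATE CONNECTION HOST1 AS 'mysql://root@remote_host:9306/federated';

-- each FEDERATED table then references the named connection
CREATE TABLE federated_table (
id INT(20) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (id)
)
ENGINE=FEDERATED
CONNECTION='HOST1/test_table';

A change of host, port or credentials would then be a single statement, rather than an ALTER of every FEDERATED table.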

Updated
Thanks Mike for your comment; click on comments below to read it. Boy, it's been a long time since I read any ISO standards documents. Just to clarify Mike's point for those investigative types: SQL/MED is Management of External Data, and the official ISO document is, I believe:

ISO/IEC 9075-9:2003 Information technology – Database Languages – SQL – Part 9: Management of External Data (SQL/MED) ISO Reference

Abstract
ISO/IEC 9075 defines the SQL language. The scope of the SQL language is the definition of data structure and the operations on data stored in that structure. Parts 1, 2 and 11 encompass the minimum requirements of the language. Other parts define extensions.
ISO/IEC 9075-9:2003 defines extensions to SQL to support management of external data through the use of foreign-data wrappers and datalink types.

This document makes reference to DATALINK, a syntax I remember vaguely from my Oracle days. I will need to vet that. What's also of note in this document is the reference to using the TABLE TYPE of 'FOREIGN'.

I wonder why MySQL went with FEDERATED as the TABLE TYPE. I'm sure there is a good reason.

Brisbane MySQL Users Group Meeting with Brian Aker

We had the privilege of Brian Aker, Director of Architecture for MySQL, speaking at the Brisbane MySQL Users Group this week (28th Jan 2006). After the initial discussions on various topics, Brian got into his discussion of MySQL 5.1. I was surprised that only 2 people (myself being one of them) had 5.1 installed and in use in any way.

Here were some of the highlights of the talk. I should have taken some more notes.

MySQL 5.1 Features

Partitioning Will include Range, Hash, List and Key handling and Subpartitioning. Documentation · Forum.
Partitioning is a table structure syntax. No changes to any SQL statements are required to use partitioning within your application.
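
As a sketch of what this looks like (based on my reading of the 5.1 documentation; the sales table and partition names are my own):

CREATE TABLE sales (
sale_id INT UNSIGNED NOT NULL,
sale_date DATE NOT NULL
)
PARTITION BY RANGE (YEAR(sale_date)) (
PARTITION p2004 VALUES LESS THAN (2005),
PARTITION p2005 VALUES LESS THAN (2006),
PARTITION pmax VALUES LESS THAN MAXVALUE
);

A query such as SELECT * FROM sales WHERE sale_date >= '2005-06-01' requires no change; the server prunes to the relevant partitions.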

Log Tables Log files can be stored within the database. The error log will definitely not be available. Official information is sketchy.
The 5.1.6 release notes indicate mysqld --both-log-formats and --old-log-format arguments (as of 5.1.6), but there is nothing yet in The MySQL Log Files documentation.

Fast Alter Table To speed up ALTER TABLE, for example extending a VARCHAR column, only the table's .frm is rewritten, and no adjustment to table rows is required.
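
For example, a change such as the following (the customer table and name column are hypothetical) would under this feature only rewrite the table's .frm definition, not every row:

ALTER TABLE customer MODIFY name VARCHAR(100) NOT NULL;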

Replication Documentation

  • Row Based Replication (RBR) (5.1.5) Documentation
  • mysqld arguments --binlog-format and --binlog-row-event-max-size
  • Support for Cluster (NDB)
  • GUID support

There was a good discussion on Master/Master, including comparison to PostgreSQL. This could be achieved now using triggers and federated tables, or UDFs. Planned in 5.2 is a Conflict Resolution implementation via the use of Stored Procedures, which will be a pre-requisite before any MySQL multi-master implementation.

Federated Documentation · Forum

  • Transaction Support (still with some rollback issues) Note: Documentation still lists this as a limitation
  • 5.2 will provide full XA support

XML Parsing Support (5.1.5) Article
Provides support for XPath expressions in XML content stored in a database column.
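
A quick example of my understanding of what this provides (the XML literal is my own):

SELECT ExtractValue('<film><title>Gladiator</title></film>', '/film/title');
-- returns: Gladiator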

Events (5.1.6 – not yet available) Article

There was a strong discussion on occurrence management of events. Obviously the ability not to have cron jobs, username/passwords etc is great, but with most scripts there is always an amount of locking management (either via the filesystem or the database). I think we may have convinced Brian to consider an event syntax extension that would ensure only one instance of a given event can run at any one time. As events are separate threads, it's important for frequently run events that there is appropriate management to ensure recurring events don't back up. My preference is for syntax to run only one instance of the event (e.g. SINGLE INSTANCE, SINGLE OCCURRENCE); a sketch of a possible interim workaround follows below. We could have recommended BRISBANE as the syntax extension, however I doubt it would be considered standard.
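
As a sketch, the documented 5.1 event syntax combined with a named lock (GET_LOCK) could approximate single-instance behaviour today; the stock_check event and process_stock procedure are hypothetical names:

DELIMITER $
CREATE EVENT stock_check
ON SCHEDULE EVERY 1 MINUTE
DO
BEGIN
  -- GET_LOCK with a zero timeout returns 1 only when no other
  -- connection (i.e. no other running instance) holds the lock
  IF GET_LOCK('stock_check', 0) = 1 THEN
    CALL process_stock();
    DO RELEASE_LOCK('stock_check');
  END IF;
END$
DELIMITER ;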

Also discussed was appropriate time zone management, particularly to ensure events are scheduled appropriately across databases managed globally.

Loadable Engines
This feature may be deferred to 5.2 to be feature complete. The goal of loadable engines is to remove any need to recompile MySQL from source, or even restart an instance, for any change to an engine. I'm sure internally one reason why this is important is to allow ease of testing when writing a new storage engine. Documentation – Writing a Custom Storage Engine · Article

Cluster improvements (NDB) Documentation · RoadMap

  • replication works with Cluster
  • storage of records on disk, no longer limited to RAM
  • indexes still remain in RAM

Benchmarking Tool mysqlslap (5.1.4) Documentation.
The list of options looks impressive, and since my 5.1 version is 5.1.4 I’ll be playing some more here.

Archive Storage Engine Documentation · Forum

  • auto increment support (5.1.6)
  • Blob Speed Improvements

Instance Manager
Available in 5.0, but being better publicised in 5.1, is an Instance Manager allowing change of configuration, startup/shutdown etc. Documentation

Distribution Names
As part of 5.1, the use of -standard and -max versions will be eliminated. Brian mentioned he had lost count of the times he had answered the question of the difference between -max and MaxDB. As at 5.1.4 (what I have) the filenames still include -max, but we were assured this would be dropped.

INFORMATION_SCHEMA
Of course, as part of these changes there will be a number of new tables in the INFORMATION_SCHEMA, including ENGINES (5.1.5), PLUGINS (5.1.5), PARTITIONS (5.1.6) and EVENTS (5.1.6).
Of note, there is also a FILES table (5.1.6) for the storage of table information.
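
For example, I expect the new tables to be queryable like any other; something along these lines (hedged, as I have not confirmed the column names against 5.1.5+):

SELECT ENGINE, SUPPORT, COMMENT FROM INFORMATION_SCHEMA.ENGINES;
SELECT PLUGIN_NAME, PLUGIN_VERSION FROM INFORMATION_SCHEMA.PLUGINS;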

Other References
What's New in MySQL 5.1 · MySQL Change History for Version 5.1

Future Directions for MySQL

  • This year will also see a GUI Enterprise Manager for the community edition
  • FULLTEXT for other storage engines
  • More transactional engines
  • Federated tables to non-MySQL tables via JDBC
  • FULLTEXT loadable parser written in C
  • Alternative stored procedure languages
  • Replication Conflict Resolution Management by user-definable stored procedures
  • myforge – a SourceForge/Freshmeat-like open source system for MySQL projects

Other Discussions

There were a number of post-talk discussions. Those of interest to me included running other languages in stored procedures (myPerl/myPhp user contributions are available at Brian's open source site www.tangent.org), increasing exposure for ISPs to offer hosting with MySQL 5.0, including best practice guidelines, and, generating the most discussion, Asterisk, the Open Source PBX.

Building MySQL Workbench 1.0.1 for Linux (Part 2)

Following my earlier post MySQL Workbench 1.0.1 for Linux and logging a MySQL bug, I've had the bug verified, and a further update of a successful compile. Details of the compile are from Bug #16880.

Compiled with:

- glibmm-2.8.1
- gtk+-2.8.8
- libsigc++-2.0.11

I’ve got:


libsigc++-2.0.17
glib-2.6.6
glibmm-2.6.1
atk-1.9.0
pango-1.8.2
gtk+-2.6.9
gtkmm-2.6.5
lua-5.0.2

So, starting with the obvious: downgrading libsigc++.


$ su -
$ cd /src
$ wget http://ftp.gnome.org/pub/GNOME/sources/libsigc++/2.0/libsigc++-2.0.11.tar.gz
$ tar xvfz libsigc++-2.0.11.tar.gz
$ cd libsigc++-2.0.11
$ ./configure --prefix=/usr
$ make
$ make install

Now to retrace the MySQL compiling.


$ su -
$ cd /src
$ wget ftp://ftp.mysql.com/pub/mysql/download/mysql-workbench-1.0.1.tar.gz
$ tar xvfz mysql-workbench-1.0.1.tar.gz
$ cd mysql-workbench-1.0.1
$ cd mysql-gui-common
$ ./configure --enable-grt --enable-canvas
$ make

No errors, a sound start.


$ cd ../mysql-workbench/
$ ./configure
$ make
../../../mysql-gui-common/library_gc/source/libgcanvas.a(myx_gc_gl_helper.o)(.text+0x1b5): In function `getProcAddress(char const*)':
/src/mysql-workbench-1.0.1/mysql-gui-common/library_gc/source/myx_gc_gl_helper.cpp:264: undefined reference to `glXGetProcAddress'
collect2: ld returned 1 exit status
make[3]: *** [mysql-workbench-bin] Error 1
make[3]: Leaving directory `/src/mysql-workbench-1.0.1/mysql-workbench/source/linux'
make[2]: *** [all] Error 2
make[2]: Leaving directory `/src/mysql-workbench-1.0.1/mysql-workbench/source/linux'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/src/mysql-workbench-1.0.1/mysql-workbench/source'
make: *** [all-recursive] Error 1

Crash and burn. So, try to reproduce with the more accurate libraries. Of course it's a bit of trial and error; here is the end result in order, but it was much more difficult than previously to get to this point.


$ cd /src
$ wget ftp://ftp.gtk.org/pub/gtk/v2.8/pango-1.10.2.tar.gz
$ tar xvfz pango-1.10.2.tar.gz
$ cd pango-1.10.2
$ ./configure --prefix=/usr
$ make
$ make install
$ wget ftp://ftp.gtk.org/pub/gtk/v2.8/glib-2.8.6.tar.gz
$ tar xvfz glib-2.8.6.tar.gz
$ cd glib-2.8.6
$ ./configure --prefix=/usr
$ make
$ make install
$ wget ftp://ftp.gtk.org/pub/gtk/v2.8/atk-1.10.3.tar.gz
$ tar xvfz atk-1.10.3.tar.gz
$ cd atk-1.10.3
$ ./configure --prefix=/usr
$ make
$ make install
# (sigc++-2.0 >= 2.0.0 glib-2.0 >= 2.8.0 gobject-2.0 >= 2.8.0 gmodule-2.0 >= 2.8.0)
# Latest is 2.8.4
$ wget http://ftp.gnome.org/pub/GNOME/sources/glibmm/2.8/glibmm-2.8.1.tar.gz
$ tar xvfz glibmm-2.8.1.tar.gz
$ cd glibmm-2.8.1
$ ./configure --prefix=/usr
$ make
$ make install
# New Requirements for gtk 2.8.8
$ wget ftp://ftp.gtk.org/pub/gtk/v2.8/dependencies/cairo-1.0.2.tar.gz
$ tar xvfz cairo-1.0.2.tar.gz
$ cd cairo-1.0.2
$ ./configure --prefix=/usr
$ make
$ make install
# Pango not found. Pango built with Cairo support is required
$ cd /src/pango-1.10.2
$ ./configure --prefix=/usr --with-cairo=YES
$ make
$ make install
$ wget ftp://ftp.gtk.org/pub/gtk/v2.8/gtk+-2.8.8.tar.gz
$ tar xvfz gtk+-2.8.8.tar.gz
$ cd gtk+-2.8.8
# Requirements (glib-2.0 >= 2.7.1 atk >= 1.0.1 pango >= 1.9.0 cairo >= 0.9.2)
$ ./configure --prefix=/usr
$ make
$ make install
# gtkmm Latest is 2.8.2
$ wget http://ftp.gnome.org/pub/GNOME/sources/gtkmm/2.8/gtkmm-2.8.2.tar.gz
$ tar xvfz gtkmm-2.8.2.tar.gz
$ cd gtkmm-2.8.2
$ ./configure --prefix=/usr
# Note, this takes by far the longest to compile
$ make
$ make install

Side note: This was a maze, getting the right dependencies for the right products. It took many hours.

And around the merry-go-round we go again.


$ cd /src/mysql-workbench-1.0.1/mysql-gui-common
$ ./configure --enable-grt --enable-canvas
$ make
$ make install
$ cd ../mysql-workbench/
$ ./configure
$ make
/usr/lib/python2.3/config/libpython2.3.a(posixmodule.o)(.text+0x3c5a): In function `posix_tmpnam':
: warning: the use of `tmpnam_r' is dangerous, better use `mkstemp'
/usr/lib/python2.3/config/libpython2.3.a(posixmodule.o)(.text+0x3b94): In function `posix_tempnam':
: warning: the use of `tempnam' is dangerous, better use `mkstemp'
../../../mysql-gui-common/library_gc/source/libgcanvas.a(myx_gc_gl_helper.o)(.text+0x1b5): In function `getProcAddress(char const*)':
/src/mysql-workbench-1.0.1/mysql-gui-common/library_gc/source/myx_gc_gl_helper.cpp:264: undefined reference to `glXGetProcAddress'
collect2: ld returned 1 exit status
make[3]: *** [mysql-workbench-bin] Error 1
make[3]: Leaving directory `/src/mysql-workbench-1.0.1/mysql-workbench/source/linux'
make[2]: *** [all] Error 2
make[2]: Leaving directory `/src/mysql-workbench-1.0.1/mysql-workbench/source/linux'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/src/mysql-workbench-1.0.1/mysql-workbench/source'
make: *** [all-recursive] Error 1

Same result, so after another 4 or 5 hours, I still have nothing.

Update
Some more research (I really should devote my time to other pursuits) turned up an official comment from NVIDIA Corporation on their forums. View Thread

This is not a NVIDIA bug, glXGetProcAddress() is part of GLX 1.4, which the NVIDIA Linux/UNIX graphics driver doesn’t claim to support (nor do I think was GLX 1.4 ever finalized); you should always use glXGetProcAddressARB().

Blog/Wiki Spamming – What makes your blood boil

Well, this is low. I've just been spammed on my Wiki. And it was cunning; I only found it by accident. An enterprising hacker embedded hidden links into my Home Page that were not visible via normal page view, but ultimately would be via a search bot or some other means.

Even better, the pricks even deleted content on one page. Here is an example diff of pages. I even posted a few days ago What makes your blood boil? and that was just about an article on a news site with misleading, dated information. I could see the opportunity for more blood-boiling articles, but it doesn't solve the problem. The problem is, I don't know how to solve the problem.

So what can you do, other than clean up this hacker mess, put in checks to find it, and then continually revise these checks? What bodies can you complain to about the URLs listed, and how can you get these removed from the web? That's what I really want to know.

MediaWiki, the wiki software I use, did provide a number of manual tools to help identify and correct the information, but only in response to my accidental discovery. I was able to review the History of my Home Page, and then drill down to all User Contributions. But it's a manual process to clean up. Needless to say, I'll be developing something to identify this more easily. You can also use Block user to stop IP addresses. I'll also be adding them to my firewall rules.

For those that use wikis or blogs, please let me know the software you use, and I'll endeavour over time to perhaps provide tips for these products.

As per a request by a MySQL user, I opened my blog to allow unregistered user comments. I did this reluctantly, as I wanted that 'confirmed opt-in' step of a user registration; however, I had faith in the community. Well, I've since turned that off as well. I've had SPAM posted to my blog, and most recently a post I wanted to respond to, but as the email address supplied was clearly invalid, I could not.

So the penalty is that users wishing to make comments have to register. It's quite painless; however, the downside is that as soon as you enter an email on a website, who knows what's going to happen to it. And this could happen unwillingly. While the blog owner has no intention of using registered emails, a flaw in software could expose information to unlawful access.

I've implemented a solution to this, and I'm trialling another. The first is that I have a virtual domain, not just a virtual email. You have all heard about getting a free email account at Hotmail or Yahoo etc, and then using this email for subscriptions, therefore protecting your own identity email from obvious spam. Well, I go one step further. I use a whole domain (in this case myvirtualemail.biz), so it costs ~USD$10 per year, big deal, but what I do is create a new alias for everything I subscribe to. I then have total control of where I send the email, and what I want to do with it. I can also track if I get any spam, and which email alias it was addressed to, giving me greater granularity on the source of the problem. I can then simply trash the alias, so no more SPAM in my inbox. The problem is, the SPAM still comes; it takes network traffic, bandwidth and CPU, it gets rejected, and then it takes even more network traffic and bandwidth.

Now you have me started on a completely different topic: authenticated mail. Email has long since been poisoned, and it should be left to die, and out of the ashes a new phoenix should rise. Cut out the source of SPAM by ensuring the sender is authenticated. I guess that's a topic for another time, but it's the impossibility of moving people off email that makes it such an unenviable task.

And the other trial: I'm using Gmail to do my filtering for me. I've converted my default DNS administrator email (which gets a lot of spam as it's public on domain registrars) to an alias that redirects to a Gmail account, and from this I forward filtered mail to a POP account. With Gmail's strict policy on attachments, this approach may not be ideal for the end user, but for this purpose it's ideal. I'm still testing this idea, but it's looking good so far. BTW, if you want a free Gmail account, let me know and I'll send an invitation.

If only I had an enterprising billionaire to fund some of my ideas, I'd really like to be one person that makes a difference, and my difference is to rid us of obvious stuff that destroys our time. And don't get me started on viruses. Simple solution: if the Windows OS was read-only, and there was some authentication process for software installation, we could probably rid the world of viruses. I must admit, the VMware Player running a virtual machine solves the problem as well.

Update

Some info on Setting user rights in MediaWiki
in WIKI_HOME/LocalSettings.php I added

$wgWhitelistEdit = true;
$wgShowIPinHeader = false; # For non-logged in users
$wgSysopUserBans = false; # Allow sysops to ban logged-in users
$wgSysopRangeBans = false; # Allow sysops to ban IP ranges

Other references: Wikipedia:Vandalism, Counter Vandalism Unit

MySQL Sakila Sample Application

I’m sure you are all aware by now of Mike Hillyer’s MySQL Sakila Sample Database that will be launched at the MySQL Conference. We now have an official MySQL Forum for this as well.

As part of leveraging this existing database, and using it as the basis of my MySQL Conference presentation MySQL for Oracle Developers, I've released the first version of my MySQL Sakila Sample Application at http://sakila.arabx.com.au, which I would very much like some feedback on. Please use the official MySQL Forum for any comments, suggestions or complaints. Please note I am still very much in the planning and design phase.

We also have an unofficial Wiki that describes a little more of the concept and purpose of the Sample Application, and a call for others to get onboard to design and develop their own versions in varying languages.

So what does the sample application showcase? For now, work has been on the presentation of data, as we finalise the schema and data. In addition, the application has been designed to be self-documenting, describing via the top menu the functions available, business logic considerations, specific MySQL features, and a schema page to show the underlying tables in question. Look at Admin or Film for an initial example.

Here is a quick list of the functions of the MySQL Sakila Sample Application. For now there is no user authentication, so it's open for all to view.

  • Home
  • Customers
    • Index (NF), Search/List, Add, View (NF)
  • Rentals
    • Index(NF), New, Search (NF), Return(NF), Overdue(NF), Out(NF)
  • Film
    • Index, Movie List, Actor List, Categories, Languages
  • Reports
    • Index (NF), Top Film Rentals, Top Customer Rentals
  • Admin
    • Index, Staff, Stores, Countries, Cities

(NF) – Not Functional. Please note, as the data hasn't been finalised, some of the data is my own patch, just to display functionality.
More information on what’s available is in the Admin|Documentation page.

MySQL Workbench 1.0.1 for Linux

Just released at the MySQL Forums yesterday an updated source version of MySQL Workbench for Linux available at ftp://ftp.mysql.com/pub/mysql/download/mysql-workbench-1.0.1.tar.gz.

So can Version 1.0.1 compile when I had no success with compiling 1.0.0?


$ su -
$ cd /src
$ wget ftp://ftp.mysql.com/pub/mysql/download/mysql-workbench-1.0.1.tar.gz
$ tar xvfz mysql-workbench-1.0.1.tar.gz
$ cd mysql-workbench-1.0.1
$ cd mysql-gui-common
$ ./configure --enable-grt --enable-canvas
$ make
MySQLGRT/MGRTValueTree.cc:255: instantiated from here
/usr/include/sigc++-2.0/sigc++/adaptors/bound_argument.h:158: error: 'const class sigc::bound_argument >&>' has no member named 'visit'
make[3]: *** [MGRTValueTree.o] Error 1
make[3]: Leaving directory `/src/mysql-workbench-1.0.1/mysql-gui-common/source/linux'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/src/mysql-workbench-1.0.1/mysql-gui-common/source'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/src/mysql-workbench-1.0.1/mysql-gui-common'
make: *** [all] Error 2

Arrrrrggggghhhhh! What do I have to do to get this beast to compile? I am running the most current libsigc++, 2.0.17, available for download at http://ftp.gnome.org/pub/GNOME/sources/libsigc++/2.0/

The job was made easier this time by the source archive containing the all-important README that provides both dependencies and required additional configuration arguments. However, it only stated libsigc++ 2.0, which is what I have.

$ more /src/mysql-workbench-1.0.1/mysql-workbench/README.linux

Compiling Workbench from Scratch
================================

Required Software:
------------------

Much of the requirements are common for all GUI tools shipped by MySQL (such as
Administrator and Query Browser). Instead of building everything by hand, you
should try using the packages supplied by your distribution, preferably using
some tool like apt-get, smart or even yum. Do not forget that you need the -devel
(or -dev) version of each pacakge as well as the main one.

- gtkmm-2.4 or newer and it's dependencies, such as:
        - glibmm-2.4
        - libsigc++-2.0
        - gtk-2.4 (or whatever is required by gtkmm)
- libpcre-5
- MySQL client libraries and headers for MySQL >= 5.0
- OpenGL libraries and headers
- lua 5 (www.lua.org)

Building:
---------

- unpack workbench
- cd mysql-gui-common
- ./configure --enable-grt --enable-canvas
- make
- make install
- cd ../mysql-workbench
- ./configure
- make
- make install

I've logged this as Bug #16880. I understand there are many variables in different Linux operating systems and libraries; however, if I meet the minimum requirements, I have no other options. Perhaps some specifics of exactly which versions of the required libraries were used to compile in an internal environment would be helpful. Programs should be compiled on a generic OS distro, not a developer's machine, so the build can be reproduced and is testable.

Updated
In public response to pablo's comment (click on comments to see it; I think it is unqualified, please give more details): MySQL AB do read this. I have had positive comments from MySQL AB staff, and they have been proactive since I raised this and earlier issues. In the space of some 4 hours I've had confirmation and verification of my listed bug, so in this case the response is excellent. Previously, with 1.0.0, I was frustrated in a number of areas. First, I've been waiting probably 8 months to get my hands on this product; at present I don't have an alternative open source Linux modelling tool supporting MySQL 5. Second, there seemed to be slow progress, or perhaps insufficient feedback on progress, from MySQL AB, but unless we raise our voice how will they know?

Again in defense of MySQL AB (with which I am not associated), this is a developing Open Source product, and what we as the community ultimately get out of the product is what we collectively put in. If, for example, we just say it's buggy, but don't report it, seek advice, or try our best to reproduce the problem, we don't assist in providing a better product. Unfortunately the language and GUI requirements are skills I am not confident in. If they were, I would be more involved in coding to correct this problem and move forward.

Another reason for presenting my information is to seek advice on whether others are experiencing these issues, and to raise awareness with MySQL AB, who are an organisation with limited resources that continues to provide what I consider an excellent product for its current marketplace targets. I'm keen to see a working product, because then I can apply 15 years of database modelling experience in large systems design to the review and testing of the product, and hopefully contribute to identifying useful and practical functionality to continually improve it.

Again, in defense of MySQL AB: the GUI products released last year, on which MySQL Workbench is built, are only new products, and new products take time to mature. Of course I'd love a fully working product, but I'm not providing the finances of millions or billions in R&D for software development.

I think it's important that we all embrace the ideals of Open Source. Sure, they are not perfect at this time, but the ideal is not for one company to change the world of computing and computer software; it is for all individuals to have an opportunity to contribute.

Pablo, I would be most happy to take this discussion offline, ask some more questions regarding the concerns you have raised in your post, and try to better understand them, but without a suitable valid email address I'm unable to respond.

Downgrading a MySQL schema from 5 to 4 (Part 2)

As requested by Frank, here are the working parts of my earlier Downgrading a MySQL schema from 5 to 4 article.

The Problem

To recap, I received a MySQL version 5.0 schema via an SQL file; however, I was unable to upgrade from MySQL 4.0 to MySQL 5.0 on my old Red Hat 7.3 production server. As an interim solution, I still wanted the schema and data to allow for initial development (without the 5-specific features including views, triggers and procedures/functions). However, the MySQL 5.0 SQL file would not run in MySQL 4.0.

Sample

Here is a small subset of the MySQL Sakila Sample Database schema to demonstrate the problem.

DROP SCHEMA IF EXISTS sakila;
CREATE SCHEMA sakila;
USE sakila;
--
-- Table structure for table `actor`
--
CREATE TABLE actor (
  actor_id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
  first_name VARCHAR(45) NOT NULL,
  last_name VARCHAR(45) NOT NULL,
  last_update TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  active BOOLEAN NOT NULL DEFAULT TRUE,
  PRIMARY KEY  (actor_id),
  KEY idx_actor_last_name (last_name)
)ENGINE=InnoDB DEFAULT CHARSET=utf8;

Conversion

The following commands produced the output valid for a MySQL 4.0 database.


$ dos2unix sakila-schema-0.2.sql
$ cat sakila-schema-0.2.sql | sed -f mysql5to4.sed >sakila-schema-0.2.mysql4

NOTE: These are Linux commands. There are probably Windows-compatible equivalents; however, without Windows around to test them, I'm not going to offer any advice here. Anybody with experience here, please advise.
If you don't have Linux, then get it. Sound too hard? Well, start with a Live CD, for example Knoppix or Ubuntu. These allow you to boot off CD and will never affect your Windows machine. If you don't have broadband or a CD burner, look at buying a computer magazine; you will quite regularly find Live CDs included. Or, if you can't afford that, Ubuntu – ShipIt will ship you the CD for free.
Even better than this, you can now use VMware Player and have a working Linux environment running in parallel with Windows at the same time, without affecting Windows in any way. You just need the VMware Player, then a suitable virtual machine. If there is a single person with an excuse why they can't experiment with Linux, you're not embracing Open Source. Start today, start right now; I've given you the links.

Results

DROP DATABASE IF EXISTS sakila;
CREATE DATABASE sakila;
USE sakila;
--
-- Table structure for table `actor`
--
CREATE TABLE actor (
  actor_id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
  first_name VARCHAR(45) NOT NULL,
  last_name VARCHAR(45) NOT NULL,
  last_update TIMESTAMP NOT NULL,
  active TINYINT NOT NULL DEFAULT TRUE,
  PRIMARY KEY  (actor_id),
  KEY idx_actor_last_name (last_name)
)TYPE=InnoDB;

The Code

Oops, almost forgot the important bit. Note, the syntax in this file is specific; characters and spaces are significant. My advice: if you have no idea about sed regular expressions and syntax, don't touch it.

The contents of mysql5to4.sed

s/CREATE SCHEMA/CREATE DATABASE/
s/DROP SCHEMA/DROP DATABASE/
s/ DEFAULT CHARSET=utf8;$/;/
s/ENGINE=/TYPE=/
s/ ON UPDATE CURRENT_TIMESTAMP//
s/ DEFAULT CURRENT_TIMESTAMP//
s/BOOL /TINYINT /
s/BOOLEAN /TINYINT /
/^DELIMITER/d

Further Work

  • These workarounds work on a generated mysqldump file. For example, I bank on implied standards of UPPERCASE in the syntax of commands.
  • I don’t cater for situations where DEFAULT CURRENT_TIMESTAMP is defined on a TIMESTAMP column which is not the first listed in the database table. This could be catered for with some more shell scripting with awk (More about awk 1, 2, 3)
  • I don’t cater for syntax situations not currently described in the Sakila Sample Database.
  • For now, I've manually removed the TRIGGERS, VIEWS, PROCEDURES and FUNCTIONS; however, there would be good success via shell scripting to also achieve this.

MySQL Alternative

Morgan provided a suitable syntax for mysqldump in his Compatibility between MySQL Versions article; however, I found this did not work completely successfully. Here are my findings.

Syntax


mysqldump --compatible=name

Example

Using 5.1.4-alpha-max under Linux (not glib23)

$ mysqldump -P3307 -hlocalhost.localdomain -uroot -d --compatible=mysql40 sakila

The resultant file had the following lines; I've just cut and pasted to show the issues.
Using 4.0.13-standard:


mysql> DELIMITER ;;
ERROR 1064: You have an error in your SQL syntax. Check the manual that corresponds to your MySQL server version for the right syntax to use near 'DELIMITER' at line 1

Using 4.1.10a-standard

mysql> DELIMITER ;
ERROR:
DELIMITER must be followed by a 'delimiter' character or string

This invalid handling of DELIMITER has a downstream effect when it gets to triggers.

mysql> /*!50003 CREATE TRIGGER `ins_film` AFTER INSERT ON `film` FOR EACH ROW BEGIN
-> INSERT INTO film_text (film_id, title, description)
-> VALUES (new.film_id, new.title, new.description);
Query OK, 0 rows affected (0.00 sec)
mysql> END */;;
ERROR 1064: You have an error in your SQL syntax. Check the manual that corresponds to your MySQL server version for the right syntax to use near 'END */' at line 1

I need to get around to checking the Bug Database to see if these need to be logged.

What makes your blood boil?

It's appalling that in this day of technological advancement and communication, the excuse for publishing dated information just doesn't fly. 50 or 100 years ago you could be excused for writing something that was 6 months out of date, yet this article "Which Database Is Right For You?", dated 2006-01-05, states:

So what doesn’t MySQL support? Today, MySQL doesn’t support views (‘virtual’ tables made from other tables), stored procedures (small programs that can be stored in the database) or triggers (actions that the database can be told to do automatically when certain things happen). However, many of these features are promised in future versions.

Well as I finished the article, looking forward to the contact details to notify this “writer” of their dated information, I was beaten to the punch.

Editor’s Note – Correction: We were contacted by Jay Pipes,Community Relations Manager, North America MySQL, Inc. who says that “MySQL 5 (the current stable release of MySQL) does indeed support stored procedures, views, triggers, functions, and many more features.”

Why can't they put that note in the MySQL section of the article? It can easily be missed at the bottom.

It was only recently that I went out to buy a magazine on referral, to read an in-depth article, "The Usual Suspects". I was most disappointed in its dated and in some ways inaccurate content. You can read my comments at Review of Database Magazine Article – "The Usual Suspects".

Sequences in MySQL

One piece of SQL functionality that doesn't appear to have any consistency or ANSI SQL standard is the management of system-generated sequential numbers, used for example in surrogate keys.

MySQL uses AUTO_INCREMENT, which serves the purpose adequately; however, in my documenting of differences with Oracle for my upcoming MySQL Conference presentation "MySQL for Oracle Developers", there are a number of key differences with Oracle's SEQUENCE usage.

MySQL AUTO_INCREMENT to Oracle SEQUENCE Differences

  • AUTO_INCREMENT is limited to one column per table
  • AUTO_INCREMENT must be assigned to a specific table.column (not allowing multi table use)
  • AUTO_INCREMENT is INSERTed as an unspecified column, or with a value of NULL

The MaxDB Reserved Words list includes SEQUENCE for the CREATE SEQUENCE statement; however, I've never used MaxDB. Other popular open source products such as PostgreSQL and Ingres use sequences. Refer to the references section for more details.

Usage

The following provides example syntax usage within MySQL and Oracle.

MySQL

CREATE TABLE Movie(
id           INT NOT NULL AUTO_INCREMENT,
name     VARCHAR(60) NOT NULL,
released YEAR NOT NULL,
PRIMARY KEY (id)
) ENGINE=InnoDB;


INSERT INTO Movie (name,released) VALUES ('Gladiator',2000);
INSERT INTO Movie (id,name,released) VALUES (NULL,'The Bourne Identity',1998);

Oracle

CREATE TABLE Movie(
id          INT NOT NULL,
name     VARCHAR2(60) NOT NULL,
released INT NOT NULL,
PRIMARY KEY (id)
);
CREATE SEQUENCE MovieSeq;


INSERT INTO Movie (id,name,released) VALUES (MovieSeq.NEXTVAL,'Gladiator',2000);

Within Oracle, you can use a BEFORE INSERT trigger to simulate the handling of the MySQL INSERT syntax. Note: within Oracle you will require a SEQUENCE per table and a TRIGGER per table. Oracle supports multiple triggers of the same type per table (I'm not sure if MySQL supports this).

CREATE OR REPLACE TRIGGER BRI_MOVIE_TRG
BEFORE INSERT ON Movie
FOR EACH ROW
BEGIN
  SELECT MovieSeq.NEXTVAL INTO :new.id FROM DUAL;
END BRI_MOVIE_TRG;
.
RUN;


INSERT INTO Movie (name,released) VALUES ('The Lion King',1994);

Oracle's syntax uses the sequence name with .NEXTVAL or .CURRVAL.

Future Directions

I would like to see a SEQUENCE implementation in MySQL (whether official or unofficial). I'm sure some enterprising person in the community already has one. Database abstraction layer systems would also most likely have implementations. I liked the PostgreSQL syntax for ease of use, with the following commands.

  • NEXTVAL(‘sequence’);
  • CURRVAL(‘sequence’);
  • SETVAL(‘sequence’,value);

Wanting something and doing something about it are two different things, so here is what I whipped together to demonstrate a possible implementation. It needs a lot more work in appropriate error handling, transaction management, testing and performance analysis; however, it shows one possible implementation.

currval

DROP TABLE IF EXISTS sequence;
CREATE TABLE sequence (
name              VARCHAR(50) NOT NULL,
current_value INT NOT NULL,
increment       INT NOT NULL DEFAULT 1,
PRIMARY KEY (name)
) ENGINE=InnoDB;
INSERT INTO sequence VALUES ('MovieSeq',3,5);
DROP FUNCTION IF EXISTS currval;
DELIMITER $
CREATE FUNCTION currval (seq_name VARCHAR(50))
RETURNS INTEGER
CONTAINS SQL
BEGIN
  DECLARE value INTEGER;
  SET value = 0;
  SELECT current_value INTO value
  FROM sequence
  WHERE name = seq_name;
  RETURN value;
END$
DELIMITER ;

Some Testing:

mysql> SELECT currval('MovieSeq');
+---------------------+
| currval('MovieSeq') |
+---------------------+
|                   3 |
+---------------------+
1 row in set (0.00 sec)
mysql> SELECT currval('x');
+--------------+
| currval('x') |
+--------------+
|            0 |
+--------------+
1 row in set, 1 warning (0.00 sec)
mysql> show warnings;
+---------+------+------------------+
| Level   | Code | Message          |
+---------+------+------------------+
| Warning | 1329 | No data to FETCH |
+---------+------+------------------+
1 row in set (0.00 sec)

What was interesting was that I originally used a cursor, as below, but when passing an invalid argument (basic boundary testing), it returned an SQL error, while the above implementation returned a more manageable warning.

 DECLARE c CURSOR FOR
    SELECT current_value FROM sequence
    WHERE name = seq_name;
  OPEN c;
  FETCH c INTO value;


mysql> select currval('x');
ERROR 1329 (02000): No data to FETCH

Indeed, the Apache Object Relational Bridge Sequence Manager section shows a very cool syntax for MSSQL.

UPDATE TABLE SET @MAX_KEY = MAX_KEY = MAX_KEY + 1

That is, UPDATE table SET var = column = value, which effectively eliminates the need for a separate UPDATE and SELECT for this type of operation.
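
MySQL has a comparable idiom using the two-argument form of LAST_INSERT_ID(), which both assigns the value and remembers it for the current connection, also removing the separate SELECT (sketched here against my sequence table above):

UPDATE sequence
SET current_value = LAST_INSERT_ID(current_value + increment)
WHERE name = 'MovieSeq';
SELECT LAST_INSERT_ID();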

nextval

DROP FUNCTION IF EXISTS nextval;
DELIMITER $
CREATE FUNCTION nextval (seq_name VARCHAR(50))
RETURNS INTEGER
CONTAINS SQL
BEGIN
   UPDATE sequence
   SET          current_value = current_value + increment
   WHERE name = seq_name;
   RETURN currval(seq_name);
END$
DELIMITER ;

mysql> select nextval('MovieSeq');
+---------------------+
| nextval('MovieSeq') |
+---------------------+
|                  15 |
+---------------------+
1 row in set (0.09 sec)

mysql> select nextval('MovieSeq');
+---------------------+
| nextval('MovieSeq') |
+---------------------+
|                  20 |
+---------------------+
1 row in set (0.01 sec)

mysql> select nextval('MovieSeq');
+---------------------+
| nextval('MovieSeq') |
+---------------------+
|                  25 |
+---------------------+
1 row in set (0.00 sec)

setval

DROP FUNCTION IF EXISTS setval;
DELIMITER $
CREATE FUNCTION setval (seq_name VARCHAR(50), value INTEGER)
RETURNS INTEGER
CONTAINS SQL
BEGIN
   UPDATE sequence
   SET          current_value = value
   WHERE name = seq_name;
   RETURN currval(seq_name);
END$
DELIMITER ;

mysql> select setval('MovieSeq',150);
+------------------------+
| setval('MovieSeq',150) |
+------------------------+
|                    150 |
+------------------------+
1 row in set (0.06 sec)

mysql> select currval('MovieSeq');
+---------------------+
| currval('MovieSeq') |
+---------------------+
|                 150 |
+---------------------+
1 row in set (0.00 sec)

mysql> select nextval('MovieSeq');
+---------------------+
| nextval('MovieSeq') |
+---------------------+
|                 155 |
+---------------------+
1 row in set (0.00 sec)

Other References

Ingres CREATE SEQUENCE Page 105.
PostgreSQL CREATE SEQUENCE
Apache Object/Relational Bridge – Sequence Manager – Subproject of The Apache DB Project
MySQL CREATE PROCEDURE

To enum or not to enum?

I've never used database columns that embed the defined valid values within the schema definition. Within MySQL there are two such data types, ENUM and SET. There are a few reasons why, but first an explanation of these data types.

In summary, using the MySQL Sakila Sample Database:

CREATE TABLE film (
film_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
...
rating ENUM('G','PG','PG-13','R','NC-17') DEFAULT 'G',
special_features SET('Trailers','Commentaries','Deleted Scenes','Behind the Scenes') DEFAULT NULL,
PRIMARY KEY (film_id)
)ENGINE=InnoDB DEFAULT CHARSET=utf8;

So from this, the following commands allow you to inspect this information via mysql.


DESCRIBE film;
SHOW COLUMNS FROM film LIKE 'rating';

With the introduction of the INFORMATION_SCHEMA in MySQL 5, there is a more traditional method using a valid SELECT statement.


SELECT COLUMN_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA='sakila'
AND TABLE_NAME='film'
AND COLUMN_NAME = 'rating'

This is great; however, you have to know a lot of information. The table_name and column_name are fixed, but now the table_schema information must also be known by the application.

Why I don’t use these?

Historically, there have been 3 reasons why I’ve never used ENUM.

  • First, it’s not standard SQL (at least to my knowledge), and historically hasn’t been consistent between Database products
  • More importantly as data, it’s terrible to maintain easily, as it’s within the table structure
  • The management of valid values within an application, and the need to manage this data dynamically.

Now, I’m not certain if within MySQL there are any funky ways to manage this type of information. I’d welcome comments on what people do. However from my previous experience, this is my method of implementation.

All information of this type, I refer to as a code. I implement this within a common table, defined as reference_data (based historically on the Oracle cg_ref_codes). Here is a cut down version of my definition for simplicity purposes.


CREATE TABLE reference_data(
id INT UNSIGNED NOT NULL AUTO_INCREMENT,
type VARCHAR(20) NOT NULL,
code VARCHAR(20) NOT NULL,
value VARCHAR(100) NOT NULL,
PRIMARY KEY (id)
)ENGINE=InnoDB DEFAULT CHARSET=utf8;

Of course, I’ve extended this basic structure to include ordering of data per type, default value of a given type, grouping of types (for larger systems) and a status to manage historically codes no longer valid for new data.

Within tables, I define all codes with an extension of _code so as to clearly identify the content of a given column.

So, for this example I would add data such as:


INSERT INTO reference_data(type,code,value) VALUES
('rating','G','General Audience - All Ages Admitted'),
('rating','PG','Parental Guidance Suggested. Some Material May Not Be Suitable For Children'),
('rating','PG-13','Parents Strongly Cautioned. Some Material May Be Inappropriate For Children Under 13'),
('rating','R','Restricted. Under 17 Requires Accompanying Parent or Adult Guardian'),
('rating','NC-17','No One 17 And Under Admitted');
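
To show the approach in use, here is a sketch of how a table would then reference and present these values, assuming a hypothetical film table that stores a rating_code column rather than an ENUM:

SELECT f.title, r.value AS rating
FROM film f
JOIN reference_data r
ON r.type = 'rating' AND r.code = f.rating_code;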

This for me provides the following benefits:

  • The data is data, meaning it can be easily modified via normal application and SQL functionality
  • Additional information, such as a more general description, can be provided and presented to end users
  • Additional attributes not shown in this example provide ordering support and a default value
  • The same set of data for a given type can be used in multiple tables

Now, the clear downside of this level of functionality is that the data integrity managed by ENUM at the database level has been lost. You need to temper this with the level of access to your database; you can also implement this level of security via database triggers. However, I think the ability to tie the required values for an ENUM-style column to an application, for easy accessibility and management, is clearly a strength.
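
As a sketch of that trigger approach, reusing the hypothetical rating_code column from the earlier example (and noting that SIGNAL requires a later MySQL version, 5.5 or above):

DELIMITER $
CREATE TRIGGER film_rating_check
BEFORE INSERT ON film
FOR EACH ROW
BEGIN
   -- reject any code not present in reference_data
   IF NOT EXISTS (SELECT 1 FROM reference_data
                  WHERE type = 'rating' AND code = NEW.rating_code) THEN
      SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Invalid rating code';
   END IF;
END$
DELIMITER ;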

I’m yet to be convinced otherwise.

Support for Technology Stacks

As part of my next conference presentation Overcoming the Challenges of Establishing Service and Support Channels I've been struggling to find, via my professional sources, any quality organisations that provide full support for a technology stack, for example a LAMP stack or a Java Servlet stack.

Restricted to searching online, I've been impressed by what I've found at SpikeSource www.spikesource.com, an organisation with an experienced CEO, well known in the Java industry. They certainly have all the buzz words covered in their product information.

Benefits of their SpikeSource Core Stack:

  • Fully tested and certified
  • Installs in minutes with integrated installer
  • Enterprise-class maintenance and support available
  • Vendor neutral
  • Horizontally and vertically scalable

SpikeSource offers three prebuilt configurations that can have you up and running in around ten minutes. These configurations comprise the following component choices:

  • LAMP Stack – for Websites with dynamic database-driven content written using Perl and PHP.
  • Servlet Stack – for dynamic Websites written using Java-based Web technologies such as servlets.
  • J2EE Stack – for Web applications that separate Web interface and application logic using Java Servlets and Enterprise JavaBeans.

Supported platforms: what's of interest here is RHEL and SuSE, as well as Fedora Core 3, in line with, for example, Oracle software running under Linux.

What’s interesting, is they have MySQL 4.1.14 in their spikesource stack (1.6.2), so they are quite some months behind here. Especially now that MySQL 5 has been available 3 months now. Not only just stack technology, their infrastructure supports a large number of open source products and appears to provide infrastructure via a community to enhance the product offerings within this stack. The Spike Developer Zone Components List provides a long list of products.

Their release notes provide good instructions, in particular what configuration was used in the building of the software. For example, here are the MySQL Release Notes, MySQL Quick Start Guide and MySQL Troubleshooting Guide.

They talk about testing; Core Stack Testing provides more details here.

They also claim to provide a VMware Community Virtual Machine that can be run via the free VMware Player on any system without affecting the existing system. This is indeed impressive, however it doesn't seem to be available. There are many other installations available at the VMware site.

I’m interested to see what else existing in the marketplace for a fully supported technology stack, rather then support of individual components (e.g. RedHat for Linux, MySQL AB for MySQL, JBoss for a servlet container)

In reading comparisons, there is reference also to Source Labs – www.sourcelabs.com. Anybody that can offer recommendations that I can research would be great.

Downgrading a MySQL schema from 5 to 4

Why oh why would you want to do this? Well, in my case, I've committed to developing a web application using MySQL 5 features, knowing that I had to upgrade my production server from 4.0.

Well, as part of doing this, I hit a stumbling block. My current production web server runs RedHat 7.3, and even with all the latest rpm updates, it does not have glibc 2.3, which is required for MySQL 5. I'm no guru, but trying to upgrade from RedHat 7.3 to 9, to at least get these rpms, is not an easy process. I'm not confident to try to compile glibc 2.3 and all its dependencies on a production server, nor is it possible (I suspect) to recompile MySQL against the older library. All just too many variables. It appears the time has come to scrap it and work with a more current RedHat Enterprise Linux version. Downside: the 25 web sites are not going to be too happy.

Anyway, as an interim measure, to at least move forward as much as possible, I downgraded the provided schema.sql to run under 4.0. We all know about the new features in 5.0, so this is really just the same list, but it was worthwhile documenting. A scripted sketch of the mechanical substitutions follows the list.

  • Remove views
  • Remove triggers
  • Remove DELIMITER syntax
  • Change ENGINE= to TYPE=
  • Change DROP SCHEMA to DROP DATABASE
  • Change CREATE SCHEMA to CREATE DATABASE
  • Remove DEFAULT CHARSET=utf8 from table definition
  • Remove DEFAULT TIMESTAMP from TIMESTAMP columns (NOTE: Need to verify the column is the first TIMESTAMP column)
  • Change BOOLEAN to TINYINT(1)
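
Most of these are mechanical substitutions. A rough sed sketch (the views, triggers and DELIMITER blocks still need removing by hand, and the TIMESTAMP caveat above still applies):

$ sed -e 's/ENGINE=/TYPE=/g' \
      -e 's/CREATE SCHEMA/CREATE DATABASE/g' \
      -e 's/DROP SCHEMA/DROP DATABASE/g' \
      -e 's/ DEFAULT CHARSET=utf8//g' \
      -e 's/BOOLEAN/TINYINT(1)/g' \
      schema.sql > schema40.sql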

Update:
Friend and colleague Morgan points out that if I were dumping a schema with 5.1 I could use the syntax mysqldump --compatible=name (for example, --compatible=mysql40). Unfortunately in my case the schema was coming from a third party, however now that I know this syntax, I can always ask for it.

Update 2:
I know in earlier versions that you could get a MySQL product installation compiled against glibc23, and also one compiled against glibc22. I just figured that over time the older glibc22 builds were dropped, and were not available with 5.1. It seems domestic blindness has now hit me in computer software. I did review the MySQL 5.1 downloads page, and did not see, lower in the page mixed in with the 30 to 40 downloads, an option to download a glibc22 version. Thanks Arjen for pointing that out.


Domestic Blindness
“The inability to find a common object that is right in front of your face.”

The challenges of compiling non working Open Source (Part 2)?

Did I push too much in my last post? I don't think so, but I guess it's a fragile balance sometimes in Open Source between keen end users and the developers that give so much towards their own creations (I understand, I'm in that category myself).

I was very proud of my work yesterday; it took a whole day of my time (I do have better things to do, like finish my own Open Source project HTMLtags, which will allow me to build my sample application, which I can then use for my MySQL Users Conference presentation). I learnt to dig around the net a lot, went on the wild goose chase several times, and came to understand some more of what goes on under the hood of compiling, libraries and dependencies in the GTK world that I would otherwise not really have cared about. But as I said, I got to a brick wall by the end, and it was disheartening.

It seemed my Bug Report #16604 on compiling MySQL Workbench, listing a clear number of bugs, was not well received by the development team. Maybe I should have slept on it, but at about 1am in the morning I made another plea for assistance.

Well, I woke this morning wondering, as per my opening statement, whether I was too passionate about this pursuit. Perhaps not. Overnight, a positive response to #16604 provided feedback that my hard work was indeed incorporated, and the next clue of compilation (which could just be in a simple INSTALL file, hopefully in the future) was presented.

So, eagerly ignoring breakfast and the increasing pile of dirty dishes, I jumped in with this new intel. It was just one line.

you need to configure gui-common with: ./configure --enable-grt --enable-canvas to enable the components needed by Workbench.


$ cd /src/mysql-workbench-1.0.0/mysql-gui-common
$ ./configure --enable-grt --enable-canvas
$ make
make[3]: Entering directory `/src/mysql-workbench-1.0.0/mysql-gui-common/library_grt/source'
if gcc -DHAVE_CONFIG_H -I. -I. -I../.. -I../../library_grt/include -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -I/usr/include/libxml2 -I../../library_util/include -I../../library_util/shared_include -I../../library_grt/newt -I/opt/mysql/include -I/usr/include/pcre -I/usr/include/python2.3 -DENABLE_JAVA_MODULES -DENABLE_PYTHON_MODULES -DLUA_TEXT_DIALOGS -g -O2 -MT lua_dialogs.o -MD -MP -MF ".deps/lua_dialogs.Tpo" -c -o lua_dialogs.o lua_dialogs.c;
then mv -f ".deps/lua_dialogs.Tpo" ".deps/lua_dialogs.Po"; else rm -f ".deps/lua_dialogs.Tpo"; exit 1; fi
lua_dialogs.c:20:17: lua.h: No such file or directory
lua_dialogs.c:21:21: lauxlib.h: No such file or directory
....
make[3]: *** [lua_dialogs.o] Error 1
make[3]: Leaving directory `/src/mysql-workbench-1.0.0/mysql-gui-common/library_grt/source'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/src/mysql-workbench-1.0.0/mysql-gui-common/library_grt'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/src/mysql-workbench-1.0.0/mysql-gui-common'

More googling; hopefully http://www.lua.org is what this refers to?


$ cd /src
$ wget http://www.lua.org/ftp/lua-5.0.2.tar.gz
$ tar xvfz lua-5.0.2.tar.gz
$ cd lua-5.0.2
$ ./configure
$ vi config
# (change Line 151: INSTALL_ROOT from /usr/local to /usr)
$ ./configure
$ make
$ make install


$ cd /src/mysql-workbench-1.0.0/mysql-gui-common
$ make
make[3]: Entering directory `/src/mysql-workbench-1.0.0/mysql-gui-common/source/linux'
if g++ -DHAVE_CONFIG_H -I. -I. -I../.. -DXTHREADS -D_REENTRANT -DXUSE_MTSAFE_API -I/usr/include/libglade-2.0 -I/usr/include/gtk-2.0 -I/usr/include/libxml2 -I/usr/lib/gtk-2.0/include -I/usr/X11R6/include -I/usr/include/atk-1.0 -I/usr/include/pango-1.0 -I/usr/include/freetype2 -I/usr/include/freetype2/config -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -I/usr/include/gtkmm-2.4 -I/usr/lib/gtkmm-2.4/include -I/usr/include/glibmm-2.4 -I/usr/lib/glibmm-2.4/include -I/usr/include/gdkmm-2.4 -I/usr/lib/gdkmm-2.4/include -I/usr/include/pangomm-1.4 -I/usr/include/atkmm-1.6 -I/usr/include/sigc++-2.0 -I/usr/lib/sigc++-2.0/include -I/opt/mysql/include -I/usr/include/pcre -DENABLE_JAVA_MODULES -DENABLE_PYTHON_MODULES -I/usr/include/freetype2 -I../../library/include -I../../library_util/include -I../../library_util/shared_include -I../../library_grt/include -I../../library_grt_modules/include -I../../library_grt_workbench/include -I../../library_gc/include -I../../library_gc/ftgl/include -I.. -DDATADIRNAME=""share"" -DCOMMONDIRNAME="""" -g -O2 -MT MGRT.o -MD -MP -MF ".deps/MGRT.Tpo" -c -o MGRT.o `test -f 'MySQLGRT/MGRT.cc' || echo './'`MySQLGRT/MGRT.cc;
then mv -f ".deps/MGRT.Tpo" ".deps/MGRT.Po"; else rm -f ".deps/MGRT.Tpo"; exit 1; fi
MySQLGRT/MGRT.cc: In member function `void MGRT::init_thread(const std::string&)':
MySQLGRT/MGRT.cc:115: error: `myx_grt_shell_print_welcome' undeclared (first use this function)
MySQLGRT/MGRT.cc:115: error: (Each undeclared identifier is reported only once for each function it appears in.)
MySQLGRT/MGRT.cc:170: error: `myx_lua_init_loader' undeclared (first use this function)
MySQLGRT/MGRT.cc:194: error: `myx_grt_init_lua_shell' undeclared (first use this function)
MySQLGRT/MGRT.cc: In member function `void MGRT::perform_shell_command(const Glib::ustring&)':
MySQLGRT/MGRT.cc:275: error: `myx_grt_lua_shell_execute' undeclared (first use this function)
MySQLGRT/MGRT.cc: In member function `Glib::ustring MGRT::shell_prompt()':
MySQLGRT/MGRT.cc:283: error: `myx_grt_lua_shell_get_prompt' undeclared (first use this function)
make[3]: *** [MGRT.o] Error 1
make[3]: Leaving directory `/src/mysql-workbench-1.0.0/mysql-gui-common/source/linux'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/src/mysql-workbench-1.0.0/mysql-gui-common/source'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/src/mysql-workbench-1.0.0/mysql-gui-common'
make: *** [all] Error 2

This seems to be defined in library_grt/source/myx_grt_lua_shell.c, which has compiled to object code. So it's probably related to the lua dependency in some way, but I'm not a C++ developer, so apart from googling and grepping it's a good 3 levels over my head.

A very strong dead end again, clearly very C++ and MySQL specific. Back to my new buddy within MySQL AB, for the next clues.

The challenges of compiling non working Open Source?

One of the great benefits of Open Source: it's free, and you can get great support, sometimes even from the developers directly (rather than 5 levels of paid customer support for a commercial product). One of the greatest banes of Open Source: if you have a problem, and nobody has experienced and documented the problem you have with the same OS, libraries etc in a forum, you could be totally up the creek without a paddle, boat, or for that matter water. (Luckily you still have oxygen.)

Well, I’m having this problem with MySQL Workbench. A product promising so much, but if you can’t get the binary working on Linux to even start, where do you go.

You will see via the Forums, I’m not the only person. This is the current Bugs List.

Wanting to make a difference, even just for myself, and those others that also seem lost, I set out to pursue this to the bitter end. Long story short, some 6-7 hrs later I’m so close, and the response in a related Bug at the point I’ve now finally reached is:

at this moment we do not process bugs of mysql workbench, because it’s still in the intensive development, and we provided snapshot in order to give people first implression of WB.
I am changing status to ‘Analyzing’ and return to this bugreport when WB will be issued officially.

Well, my only statement here at 11:30pm at night is that I'm annoyed and frustrated. The only reason I'm compiling source is that the snapshot doesn't work; there is no information on when it will be issued officially, and in fact, as I've mentioned previously, there are 3 different versions for 3 different OS's at present (More Info in Forums). It's complicated, as it seems to be all one-way communication: people reporting problems, but no information fed back in return. Perhaps I should have worked on my own Open Source project?

My simple question in response to this comment is. “Please provide the source that built the binary, so at least we can work with the same apples. The released source and snapshot binary are not consistent.”

My only pursuit now is to publish my findings. I've made significant progress that will help others, but only so far. I've also uncovered 3 configuration errors, a library inconsistency against the minimum requirements (which isn't documented; it's hit and miss, trial and error), and a fatal compilation error stopping you in your tracks. My current logged bug report of this is Bug #16604.

The road already covered

My environment is CentOS 4.2 (a.k.a. RedHat Enterprise Linux RHEL 4.2, recompiled and free).

The first pot hole.

Downloaded Linux Binary 1.0.0-alpha from ftp://ftp.mysql.com/pub/mysql/download/mysql-workbench-1.0.0-alpha-linux-i386.tar.gz (announced on the forums: MySQL Workbench 1.0.0alpha for Linux (November 24, 2005)).

Errors on startup.
./mysql-workbench-bin: /usr/lib/libstdc++.so.6: version `GLIBCXX_3.4.5' not found (required by ./mysql-workbench-bin)
./mysql-workbench-bin: /usr/lib/libstdc++.so.6: version `GLIBCXX_3.4.4' not found (required by ./mysql-workbench-bin)

Refer to my Initial findings published at forums (December 16, 2005) over 1 month ago with details of my environment and installed libraries.

Looking at the hole, now it’s a trench.

Diving right in.

$ su -
$ cd /src
$ wget ftp://ftp.mysql.com/pub/mysql/download/mysql-workbench-1.0.0-alpha-linux.tar.gz
$ tar xvfz mysql-workbench-1.0.0-alpha-linux.tar.gz
$ cd /src/mysql-workbench-1.0.0/mysql-gui-common
$ ./configure

Errors
No package 'gtkmm-2.4' found
configure: error: Library requirements (libglade-2.0 gtkmm-2.4) not met; consider adjusting the PKG_CONFIG_PATH environment variable if your libraries are in a nonstandard prefix so pkg-config can find them.

Found http://www.gtkmm.org – gtkmm is the official C++ interface for the popular GUI library GTK+. First problem: the current version is 2.8, and the previous version (as on the home page links) is 2.2.

More digging: clicking the home page documentation link, some docs recommend binary installs (why am I not surprised), gtkmm 2.4 documentation. What the! The home page has 2.8 and 2.2, and the docs are 2.4. I'm a little confused, but I trundle on.

No RHEL binaries were to be found at http://www.gtkmm.org, so I tried those for Fedora Core 4. An attempt to add FC4 extras to my yum settings and do yum install gtkmm24-docs failed. (This was the docs' recommendation.)

So I tracked it down on a mirror. Now, that is now gtkmm24 version 2.6? I'm sure there's a reason, but I'm a GTK layperson and it's confusing me.


$ wget http://public.planetmirror.com/pub/fedora/linux/extras/4/i386/gtkmm24-2.6.2-2.i386.rpm
$ wget http://public.planetmirror.com/pub/fedora/linux/extras/4/i386/gtkmm24-devel-2.6.2-2.i386.rpm
$ wget http://public.planetmirror.com/pub/fedora/linux/extras/4/i386/gtkmm24-docs-2.6.2-2.i386.rpm

$ rpm -ivh gtkmm24-2.6.2-2.i386.rpm
warning: gtkmm24-2.6.2-2.i386.rpm: V3 DSA signature: NOKEY, key ID 1ac70ce6
error: Failed dependencies:
libglibmm-2.4.so.1 is needed by gtkmm24-2.6.2-2.i386
libsigc-2.0.so.0 is needed by gtkmm24-2.6.2-2.i386
libstdc++.so.6(GLIBCXX_3.4.4) is needed by gtkmm24-2.6.2-2.i386

I figured it wasn’t going to be that easy. However, found an interesting note, (GLIBCXX_3.4.4), hmmm, see that error before. Is that sunlight I see looking up out of the trench.

It doesn’t ran it pours.

Won’t focus here.

libsigc was not found in the CentOS RPMs; found at http://libsigc.sourceforge.net.

$ cd /src
$ wget http://ftp.gnome.org/pub/GNOME/sources/libsigc++/2.0/libsigc++-2.0.17.tar.gz
$ tar xvfz libsigc++-2.0.17.tar.gz
$ cd /src/libsigc++-2.0.17
$ ./configure
$ make
$ make install
$ rpm -ivh gtkmm24-2.6.2-2.i386.rpm

Still fails with the same dependencies, including libsigc-2.0.so.0, which is what I just compiled.

Using a trench digger now, shovelling was too much work.

I won’t bore you with the iterative details of my approach, needless to say, basically I had to work backwards from this resultant coding, a number of times. But this worked. (Compiling the dependancies that is)


$ cd /src
$ wget http://ftp.gnome.org/pub/GNOME/sources/libsigc++/2.0/libsigc++-2.0.17.tar.gz
$ tar xvfz libsigc++-2.0.17.tar.gz
$ cd /src/libsigc++-2.0.17
$ ./configure --prefix=/usr
$ make
$ make install

One down. Note the --prefix=/usr is significant. There must be some funky way to use LD_LIBRARY_PATH or LIBDIR; I just need somebody to explain why a simple environment variable doesn't work downstream.
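
The likely explanation, in hindsight: configure scripts find these libraries via pkg-config, which by default only searches paths such as /usr/lib/pkgconfig, so an install into the default /usr/local is invisible to it unless you point pkg-config there first. For example:

$ export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
$ ./configure

LD_LIBRARY_PATH only helps at run time, not at configure time.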


# configure: error: Package requirements (sigc++-2.0 >= 2.0.0 glib-2.0 >= 2.4.0 gobject-2.0 >= 2.4.0 gmodule-2.0 >= 2.4.0) were not met.
$ cd /src
$ wget http://ftp.gnome.org/pub/GNOME/sources/glibmm/2.4/glibmm-2.4.8.tar.gz
$ tar xvfz glibmm-2.4.8.tar.gz
$ cd /src/glibmm-2.4.8
$ ./configure --prefix=/usr
$ make
$ make install

Two down.


# Requires checking for glibmm-2.4 >= 2.4.0 atk >= 1.6.0... Package glibmm-2.4 was not found in the pkg-config search path.
$ cd /src
$ wget http://ftp.gnome.org/pub/GNOME/sources/gtkmm/2.4/gtkmm-2.4.11.tar.gz
$ tar xvfz gtkmm-2.4.11.tar.gz
$ cd /src/gtkmm-2.4.11
$ ./configure --prefix=/usr
$ make
$ make install

Three Down.


$ cd /src/mysql-workbench-1.0.0/mysql-gui-common
$ ./configure
checking for pcre-config... no
configure: error: Could not find pcre-config script. Make sure the pcre libraries are installed

What is pcre-config? Not much luck finding that; trying pcre, I come across http://www.pcre.org/ – Perl Compatible Regular Expressions. Well, could be; no info about pcre-config, and nothing on the Perl regular expressions man page. Well, no pain no gain; it's in a yum list, so:


$ yum install pcre

Already installed; well, that doesn't help. Some more digging around (more time digging a bigger hole), and there's a pcre-devel, so let's try that.


$ yum install pcre-devel

Voila! I have a pcre-config script.


$ cd /src/mysql-workbench-1.0.0/mysql-gui-common
$ ./configure

./configure: line 3488: -f: command not found

Ah, a problem, luckily already documented in the Forums (at least somebody has been this far).

Replace Line 3488
if ! -f po ; then
with
if test ! -f po ; then

Try Again.


$ cd /src/mysql-workbench-1.0.0/mysql-gui-common
$ ./configure

./configure: line 7368: syntax error near unexpected token `else'

Haven’t see this documented.

Replace Line 7366 (note: 2 lines earlier)
if test "${ac_cv_prog_PCRE_LIBS+set}" = set; theN
with
if test "${ac_cv_prog_PCRE_LIBS+set}" = set; then

Try Again.


$ cd /src/mysql-workbench-1.0.0/mysql-gui-common
$ ./configure
$ make
../../library_util/include/myx_util_functions.h:30:18: pcre.h: No such file or directory

Patience is running short; a find locates the file in the /usr/include/pcre directory.


# I'm now no longer amused with this. I'm not a C developer, but basic code should compile.
$ cp /usr/include/pcre/* /usr/include
$ ./configure
$ make
MGTableEditor.cc: In constructor `MGTableEditor::MGTableEditor(bool)':
MGTableEditor.cc:229: error: 'class Gtk::ComboBoxEntry' has no member named 'get_entry'
MGTableEditor.cc:265: error: 'class Gtk::ComboBoxEntry' has no member named 'get_entry'

Well, why am I not surprised. More reading, now into the bowels of GTK. API docs at http://www.gtkmm.org/docs/gtkmm-2.4/docs/reference/html/classGtk_1_1ComboBoxEntry.html show that get_entry is valid for 2.4.

Hmmm, more thought: now is that 2.4, or is that 2.6? Well, this trench digger just fell into the hole it was digging.

A bigger hammer, now the full blown Hydraulic Excavator

I won’t linger here, been down this path before, however more plumbing was required so it was an iterative process again.

cd /src
wget ftp://ftp.gtk.org/pub/gtk/v2.6/glib-2.6.6.tar.gz
tar xvfz glib-2.6.6.tar.gz
cd /src/glib-2.6.6
./configure --prefix=/usr
make
make install


# checking for sigc++-2.0 >= 2.0.0 glib-2.0 >= 2.6.0 gobject-2.0 >= 2.6.0 gmodule-2.0 >= 2.6.0... Requested 'glib-2.0 >= 2.6.0' but version of GLib is 2.4.7
cd /src
wget http://ftp.acc.umu.se/pub/GNOME/sources/glibmm/2.6/glibmm-2.6.1.tar.gz
tar xvfz glibmm-2.6.1.tar.gz
cd /src/glibmm-2.6.1
./configure --prefix=/usr
make
make install


cd /src
wget ftp://ftp.gtk.org/pub/gtk/v2.6/atk-1.9.0.tar.bz2
bunzip2 atk-1.9.0.tar.bz2
tar xvf atk-1.9.0.tar
cd /src/atk-1.9.0
./configure --prefix=/usr
make
make install


cd /src
wget ftp://ftp.gtk.org/pub/gtk/v2.6/pango-1.8.2.tar.gz
tar xvfz pango-1.8.2.tar.gz
cd /src/pango-1.8.2
./configure --prefix=/usr
make
make install


# Requested 'pango >= 1.8.0' but version of Pango is 1.6.0
cd /src
wget ftp://ftp.gtk.org/pub/gtk/v2.6/gtk+-2.6.9.tar.gz
tar xvfz gtk+-2.6.9.tar.gz
cd /src/gtk+-2.6.9
./configure --prefix=/usr
make
make install


# checking for ATKMM... Requested 'atk >= 1.9.0' but version of Atk is 1.8.0
# checking for GDKMM... Requested 'gtk+-2.0 >= 2.6.0' but version of GTK+ is 2.4.13
cd /src
wget http://ftp.acc.umu.se/pub/GNOME/sources/gtkmm/2.6/gtkmm-2.6.5.tar.gz
tar xvfz gtkmm-2.6.5.tar.gz
cd /src/gtkmm-2.6.5
./configure --prefix=/usr
make
make install

Now the test.


$ cd /src/mysql-workbench-1.0.0/mysql-gui-common
$ make clean
$ ./configure
$ make
$ make install

WOOOOHOOO! no errors here!

One down, one to go.


$ cd /src/mysql-workbench-1.0.0/mysql-workbench
$ make clean
$ ./configure
$ make

make[3]: Entering directory
`/src/mysql-workbench-1.0.0/mysql-workbench/source/linux'
make[3]: *** No rule to make target
`../../../mysql-gui-common/source/linux/libwbcommongui.a', needed by `mysql-workbench-bin'. Stop.

Investigation

$ cd /src/mysql-workbench-1.0.0/mysql-workbench/source/linux # (current make directory)
$ ls -l ../../../mysql-gui-common/source/linux/lib*
-rw-r--r-- 1 root root 7410114 Jan 18 22:43 ../../../mysql-gui-common/source/linux/libmacommongui.a
-rw-r--r-- 1 root root 8006062 Jan 18 22:44 ../../../mysql-gui-common/source/linux/libqbcommongui.a

Similar library names exist, but not libwbcommongui.a. Well, that's it; it finally now looks like not an environment problem, but more a software development problem. I've logged my findings at Bug #16604. I guess we will wait for a good response.

Was it all worth it? Well, 90 minutes of documenting later, this is the longest blog I'll ever write. If the next runner can move this forward to getting a binary from compilation that then starts, then it was worth it. But only if within a reasonable time.

Database Modelling Software for MySQL

I’m stuck between a rock and a hard place. I’ve been using DBDesigner 4 from FabForce, an open source visual design tool, and apart from working around a number of bugs, I’ve found it practical to design from scratch. The big plus, it works under Linux.

With the announcement that this was being incorporated into MySQL, called MySQL Workbench, I was looking forward to getting my hands on it. I guess that was about 8 months ago. Finally, about 6 weeks ago, Version 1.0.0-alpha was released for Linux. Unfortunately it didn't work; it would not even start for me. Logged as Bug #15421, which got marked as a duplicate of Bug #15218 (I could have sworn I did a search first). Anyway, this got promptly closed as Unable to reproduce, but I see it's finally been reopened again.

Windows is at 1.0.2, Mac OS X is at 1.0.3-alpha, and yet Linux is still at 1.0.0-alpha. What gives? There seems to be a clear lack of communication as to what's going on.

The problem is, I’m preparing presentatations for the Brisbane MySQL Users Group, and the MySQL User Conference, and I want to import an existing MySQL database into a modeling tool to generate diagrams. It seems that DBDesigner4 is not MySQL 5 friendly, and given it’s not supported anymore where do you turn.

In DBDesigner, I get the error "dbExpress error: Invalid Username/password". Umpteen double checks and then searching on the web finally leads to http://forums.mysql.com/read.php?113,32121,57081#msg-57081, and the trick SET PASSWORD FOR 'some_user'@'some_host' = OLD_PASSWORD('newpwd');

So now I can get a connection to the database, but there ain't no model to import. It seems I'm now forced to look elsewhere for an open-source and free modelling tool that runs under Linux.

Ultimately I can’t really complain, this is the primary problem with Open Source, on one side, it’s Free, the other side support, documentation and help can be a hit and miss affair, and there are no commitments to a release schedule or feedback as there is no money involved.

Still if after all this time, I’m forced to go out and buy a product, it will be a great shame.

How many installations, and just what are they doing?

Would it not be great if on the MySQL website there was a page of stats (updated daily) that provided statistics like number of installations, a breakdown of versions registered (not certain I like that exact word), OS's, countries etc. More specifically, some useful stats on the engine types in practical use, average number of tables per database etc. Of course the types of stats could be limitless, but with the success of MySQL as well as other open source projects, more empirical figures on installations, other than just downloads, would I think definitely benefit the current momentum. (Availability of information to competitors could be both a good and bad thing.) Perhaps figures could be shown as percentages, not actual numbers.

Anyway, nice idea you say; we can all come up with ideas, but how could you implement something like this?

I made a post yesterday and mentioned the thought of an XML storage engine. It seemed to stir some feedback. Those notes were actually taken from more detailed notes that I've never published, so I thought it would be good to explain some of the background.

Many months ago at the Brisbane MySQL Users Group, our informal discussions following the evening presentation turned to a more detailed analysis of the different engine types available within MySQL. A throwaway comment by somebody, like 'We have no idea if BDB is really used', prompted me to consider why this question couldn't be answered in the future.

How would it work? Well, for each installation of MySQL on a server (given you can have multiple installations of varying versions per machine), the option to provide a feedback loop is made available post installation; it could even be part of the install process (probably not in .rpm). The feedback loop configures the following basic settings.

  • Frequency (Monthly, Weekly, Daily, Once Off)
  • Granularity (Summary, Verbose)

What would I have this feedback loop submit:

  • MySQL Version
  • OS Version
  • Country/Locale
  • Number of User Databases (excluding mysql & test)
  • Number of tables per storage Engine per database

More detailed information, not necessary for stats but perhaps useful for MySQL internals, would be output from the SHOW set of commands (variables, status, table status, engines, innodb status). This alone is another topic, allowing collection of information that can be used to evaluate usage, possible tunings etc. Additional information, such as the number of triggers, stored procedures etc, would identify if this level of functionality is actually really being used out there.
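
As a sketch, on MySQL 5 the per-engine counts alone could be gathered with a single INFORMATION_SCHEMA query:

SELECT TABLE_SCHEMA, ENGINE, COUNT(*) AS tables
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'test')
GROUP BY TABLE_SCHEMA, ENGINE;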

By registering with MySQL to provide a feedback loop from your server installation, you would be provided with a unique id, which enables clear tracking for MySQL but provides clear anonymity for the customer, to ensure we are not taking any proprietary information from them.

We would never want to hide the data of an installation that is provided to MySQL; it needs to be in the open. Why not make it an open standard of data exchange (using XML), and why not then enable the installation itself to use these statistics internally for some level of reporting?

XML allows for the files to be reused outside of MySQL directly. Using XSLT you could take these XML files for better presentation of statistical information. This would probably go against the ethos of MySQL, with information now stored in XML instead of the database. So why not create an XML Storage Engine?

It would, for lack of a better word, be grossly inefficient; however it would be more a read-only type structure: you just write out a chunk of data in one go, and you always read the entire file. You don't do partial updates (you could, but you rewrite the entire file), so it lends itself to statistical information, written once only. Of course you would still have to satisfy the search requirements.

What other possible extensions are there for an XML storage engine? You could store and read RSS feeds, even OpenDocument files as implemented by OpenOffice for documents.

Now, regardless of all the associated complexities (like how do you retrofit this functionality to older versions, how do you automate the scheduling of the feedback loop across different OS's, blah, blah, blah), the purpose is just to throw the idea around for a minute and see what falls out.

Additional statistics more useful for internal MySQL usage could pinpoint numbers of upgrades, and from which version to which version (more specifically, if not the current GA version). The biggest downside is that as part of Open Source you can't enforce gathering any of this information, so when the website says 1 million installations, it's 1 million people that have taken the time to notify MySQL, and that could be 10% or 50% of the real figure; you just don't know.

Could there perhaps be space here for a commercial product? I doubt it, as this is very tightly coupled with MySQL, and unless there was seamless integration I suspect very few people would go to any trouble after installing MySQL.

It’s just an idea, but it’s nice to have ideas at least some of the time.

MySQL 5.1 is gaining some momentum

It wasn’t that long ago that MySQL released the GA Release of Version 5.0 with major new features (Oct 24 2005). It still took 5.0 about a year to go from alpha to GA, however I’d suspect a much shorter turnaround this time.

Version 5.1 is already at alpha, and the largest public functionality mentioned has been Partitioning. It is also anticipated that Storage Engines (a very handy MySQL feature in comparison to other RDBMS products) will become a hot-pluggable API instead of requiring a source re-compilation. Now, I've never even looked at the Storage Engine code, but it's been talked about a few times, particularly the CSV Storage Engine in general discussion.

There are some new features being documented now, but not yet generally available in an alpha build, including Events Management (5.1.6) and XPath support within SQL (5.1.5).

Well, I could use the events management immediately with some refactoring of code into a stored procedure. My current CRM allows for a system-wide (i.e. all users) Phone Batch to be created for phone donations. Batches are locked to daily, so a new batch must be manually created each day. The ability to both automatically create a daily open Phone Batch and automatically close the previous day's, while only a small task, removes a currently repetitive task from somebody's day. At present this is planned via a cron job and wrapper script; now I could do it all in the database, as sketched below.
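
A sketch of what that could look like with 5.1 events, where create_phone_batch and close_phone_batch are hypothetical stored procedures wrapping the existing logic (note the scheduler itself must be enabled):

SET GLOBAL event_scheduler = 1;

DELIMITER $
CREATE EVENT daily_phone_batch
ON SCHEDULE EVERY 1 DAY
STARTS CURRENT_DATE + INTERVAL 1 DAY
DO
BEGIN
   -- close yesterday's batch, then open today's (hypothetical procedures)
   CALL close_phone_batch(CURRENT_DATE - INTERVAL 1 DAY);
   CALL create_phone_batch(CURRENT_DATE);
END$
DELIMITER ;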

XPath expressions are interesting. I use XPath Explorer, a great standalone Java app and Eclipse plugin, but I guess now I can just do it at a SQL prompt (example below). What I'd really love to see is an XML Storage Engine.
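
For example, the 5.1.5 XML functions make this a one-liner at the prompt (a trivial sketch):

mysql> SELECT ExtractValue('<film><title>Alien</title></film>', '/film/title');

which simply returns Alien.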

XML Storage Engine

What about an XML Storage Engine? It would, for lack of a better word, be grossly inefficient; however it would be more a read-only type structure: you just write out a chunk of data in one go, and you always read the entire file. You don't do partial updates (you could, but you rewrite the entire file), so it lends itself to statistical information, written once only. So would it provide any benefit? Really only if you have sources of XML data and you just want to keep this information in an XML format; effectively the MySQL data file is an actual XML file that can be easily copied.

XML would allow for the files to be reused outside of MySQL directly. Using XSLT you could take these XML files for better presentation of statistical information. What other extensions for an XML storage engine could be possible? You could store and read RSS feeds, even OpenDocument files as implemented by OpenOffice for documents.

Would it serve any real purpose? Probably not. But with MySQL Storage Engines at least it’s easily possible.

Unit Testing A Database

In a recent job interview I was asked a question regarding Unit Testing/Automated Testing of a Database. An interesting question, and indeed an interesting problem. I thought it was a good topic to describe what I've done in the past, and where I would go for a more complete testing environment given the opportunity of an entire XP project.

This is the approach I have implemented successfully in the past. It’s not a complete solution, however at the time with the client it provided appropriate coverage.

I don’t use a framework such as dbUnit to load data via XML, or specifically test data. XML is ugly to store data, and also with maintenance and comparison. I start with a pre-configured database of representative sample data, refer to my notes later on this, and then I use the tests of the application to perform the necessary data manipulation. This ensures that you are as close as possible to testing actual situations, and ensuring that any issues the application does (such as enter bad data, or RI failures) are caught appropriately.

Within this process, an automated build test would first reset the database to a known set of data. I've also found that this helps as you can also recreate the schema if necessary. As part of Schema Design in an XP development, I have two ways to create the database schema. You can create it from scratch (so there is always appropriate SQL to create the current version of the schema, let's say BUILD_102). Alternatively, you can always upgrade between releases, for example between BUILD_101 and BUILD_102, with the appropriate upgrade scripts. Upgrade or patch scripts only move one version to the next version. It's not possible in a production environment to simply recreate your schema for each release; however for testing and training you can. It's also pointless after 50 releases to have to perform 50 patch releases from the original source schema for every automated build test.

This does lead to two paths necessary for creating a schema, but this can also be tested adequately in an automated way.

I also split my application test suites that use the sample data into two buckets: destructive and non-destructive. The reason is the non-destructive tests (i.e., no DML statements) can be re-run as many times as necessary. The destructive tests (i.e., DML statements) can only be run once before the database must be restored. Of course you can take the approach of setUp() and tearDown() within JUnit; however it's cleaner if you can extract this somewhat to a higher level, making the unit tests easier. By also running tests that don't continually use the same data, or build the data through test execution, you get better coverage of different data sets. To give an example: you could create a test that created a row of data, then edited the same row, then deleted the row. These are indeed valid, but if the first test fails, how do you know if the update and delete tests are also broken? They are dependent by default and will fail, but did they really? If, with your sample data, you created a new row, edited a different pre-configured row, and also deleted a different pre-configured row, you could eliminate the need for dependencies.

Now of course, there are situations where data must be specifically checked at the database level: for example, it may never be displayed in your application; it may be intermediate information that is then summarised for display, or internal audit information held against data (for example Create User, Last Updated User), or data created by procedures or triggers. There are also situations where, even with application testing that can verify the data in a user interface, you want to verify it at the source.

To this end I have a custom written JUnit extension that can perform specific SQL statements and comparison. I’ll need to write about this and provide this at a later time. (when I can dig it up)

Sample Data

Sample data in the database is pre-configured, not held in XML files, so it can be managed by more primitive means: either by a database GUI interface, or via SQL flat files or even text files. Why have pre-configured data this way? A few reasons.

  • It’s not coupled to your tests in any way, so it can be reused, for example as Training Data.
  • You can use database specific tools more easily say in loading the data in a relational way.
  • You can use the same database specific tools to export the data easily, if say you use an application to modify certain information.
  • You could more easily incorporate legacy data that is also being migrated if you use the same database specific tools.

Granted, XML is universal in its data representation and it's more self-descriptive, but it's a real pain to edit manually, and it's very verbose when there are simply more primitive methods for this type of data management.

So we are creating a pre-configured data set, and an extensive one when possible. As I mentioned, the re-use capabilities for training or demonstrations really work.

Training Data

I have successfully, with a number of systems, specifically CRM implementations, used a cartoon environment for the sample data. There are a few reasons for this. First, most people I've ever met can relate in some way to some set of the data. If they can't, they can read info online, or watch a movie etc, and get an appreciation from the representative data set; effectively I'm leveraging off the time and effort of others here, much better than a non-descript set of data.

You have the cartoon characters (e.g. Mickey Mouse, Donald Duck, Daffy Duck, Marvin the Martian, The Simpsons, The Flintstones, The Jetsons), use all the streets and rides at Disneyland as addresses, the animators as the users (e.g. Walt Disney, Chuck Jones, Stan Lee), you can use the different studios (e.g. Warner Bros, Disney, Pixar) for different states or countries, and you can use shows or movies (e.g. Toy Story, Shrek) to group characters in other ways.

With this type of data, common attributes such as birth date, family units, nick names, people deceased etc, are all part of the available data. It’s surprising how much information you can find when using The Simpsons for example, of full names, addresses, interests etc.

It’s impressive when the CEO of a company is showing the application to overseas business partners, when his knowledge of the application (from his management perspective) is sufficient, there is no knowledge of the data really necessary to use or explain as it’s commonly used and generally understood.

At this point I would like to ensure that I correctly acknowledge the registered trademarks of Disney, Warner Bros, Hanna-Barbera, Pixar and Dreamworks, and that I am not using the names for any profit.

Summary

So this is what I’ve used in the past. What would I do in the future if I was charged with bullet proof testing of a database, even independent of an application, effectively 100% test coverage of the data. Well, this is an unproven approach, but I’d relish the opportunity to give it a full blooded test one day.

How to test the database with an automated test approach.

I’d consider the breaking down of testing into 3 areas. These being:

  1. Schema
  2. Data
  3. Business Logic/Referential Integrity

Each of these effectively builds on the preceding points.

Schema

This would be quite straightforward; it's a flat comparison between schemas, which could be managed via the appropriate product's data dictionary tables using SQL. You could even simply compare two schemas in a few simple SQL statements. You could also use the approach of exporting the schema definition and then comparing flat files, as sketched below. You will find some downsides to these approaches: ordering is a big thing, as columns within a database table, or the order of the tables that are exported, may not be guaranteed. However, given appropriate standards are defined and appropriate tools used, this comparison could easily occur.
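
A minimal sketch of the flat file approach, using hypothetical database names for two builds:

$ mysqldump --no-data -u[user] -p db_build101 > schema_101.sql
$ mysqldump --no-data -u[user] -p db_build102 > schema_102.sql
$ diff schema_101.sql schema_102.sql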

Being able to verify both patches between releases and fully installed schemas is also possible. The schema is the easy part.

Data

Data could be tested in varying ways: counts, sums and sample comparisons. But it's also just data, so why not md5sum the entire data set? Why not even dump the data to flat files, and use basic difference tools for comparison? One simple approach, especially if you are loading data, then using or manipulating it, is to export and compare at a file level. This would work very well for data considered read-only for the life of testing.
This format may also allow you to compare data between two different database products, e.g. Oracle used for your transactional online processing, and MySQL used for a Web Data or Management Summary Reporting application.
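
A minimal sketch of the checksum approach, at both the dump level and the table level (using the sakila tables from earlier):

$ mysqldump --no-create-info -u[user] -p [db] > data.sql
$ md5sum data.sql

mysql> CHECKSUM TABLE film, actor;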

In order to test the data you need the schema, but how can you test the data without the business logic and Referential Integrity? Within MySQL you can easily disable foreign key constraints, or easily adjust the table type to one whose structure ignores this syntax. This could allow you to run tests both with and without Referential Integrity, to determine the strength of your application.
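
A minimal example of the former:

mysql> SET FOREIGN_KEY_CHECKS = 0;
mysql> -- load or manipulate data without RI enforcement
mysql> SET FOREIGN_KEY_CHECKS = 1;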

Of course this is a static version of the data. Performing separate testing of DML statements directly against the database could prove a waste of time, unless your application was written in such a way that your application database layer was a complete API. Still, you would be simulating what your application is ultimately doing, so it could be overkill. You could apply the techniques of comparison with known results after a successful run of automated build testing.

Business Logic/Referential Integrity

The hard stuff. Well you could be half way there with adequate schema and data testing. At least you are then confident that core integrity exists.

The problem is also to do with application integrity versus database integrity.

Let’s take a percentage, it goes from 0 to 100. Now using MySQL for example, you would define this as TINYINT(3) UNSIGNED, giving you a valid range of 0-255, and by default a display characteristic of 3 characters (the (3) in this example is just beautification).

Your application logic restricts the value in this column to 0 to 100. But do you enforce this at the database? It depends on your needs. If the application is the only way to insert and maintain data, then you could get away with it; if data can be managed from other external systems, you may have APIs that also need to manage it. What if you grant SQL access to DBAs; could they accidentally mess it up? That song "It's a fine line between pleasure and pain" comes to mind at this moment. I guess what I'm getting at here is that in solely database testing you could easily insert 255 into a percent column and pass a number of data-specific tests. I'd assume it would fail some as well, otherwise your tests aren't complete, but when using the application you could never test 255, as the client would never allow it.
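
To make it concrete, a quick sketch (percent_test being a throwaway table):

mysql> CREATE TABLE percent_test (pct TINYINT(3) UNSIGNED NOT NULL);
mysql> INSERT INTO percent_test VALUES (255);

The insert succeeds: the database happily accepts 255, and only the application, or an explicit check such as a trigger, prevents it.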

There are a lot more issues in RI testing: cascading UPDATE and DELETE rules, etc. And then, when you work all that out, you have to start on triggers and stored procedures. I'm not going to spend any time here at this time.

Unforeseen Side Effects

Data is a strange beast. It's the source of information, so I always like to go back to the data for comparison; however the lack of good data (most notably in legacy systems migration) can drive you mad. What good is it to have a new system but not be able to enforce an adequate level of Referential Integrity or business logic due to incomplete historical data? In essence, in the time I've supported large systems, a good portion of the development and support cost has proven to be bad data and/or the need for a simple application to have more complex rules to cater for so-called incomplete data.

I’ll give you a trivial example. Gender. In the new system an organisation will always ensure they get the gender of a customer (let’s not wonder how they do it, it’s just an example). So the application is designed to support Male/Female, Reference Data may exist to translate M & F to Male and Female respectively for data storage efficiency. Check constraints, enumeration data types (which I don’t like) may exist. Reports may do side by side comparisons.

Now the company buys a competitor, and then gets their database of 500,000 customers, but they don’t record the Gender. Do you then relax all your great integrity? Do you introduce a gender of Unknown? But that’s only for display, the maintenance screens can’t allow you to select it, so you then need a different level of Reference Data management. Do you make an educated guess and correct the data? Does the customer do an expensive mailout campaign and data collection process to correct this information? So what’s the big deal anyway the customer asks? Well if it’s an organisation that sells hygiene products, you don’t want to sending out material on Shaving Cream to Women on your list and Body Wax to Men. However you can’t use that explanation to describe solely a database driven reason to the customer for the cost of introducing this data. How do you show business value to the customer, when they simply what the data available?

I know this is a trivial example, but if I had a dollar for every trivial problem that customers spent months on, versus the really hard problems, I'd be writing this from a much more comfortable and relaxing resort haven. (A ski resort rather than a beach resort.)

Conclusion

So can you unit test a database solely, without an application? Yes. Would you want to? Maybe, to a certain degree, depending on your type of data. If all your information is highly visible in data entry and data retrieval, you should couple testing more closely with your application. If your data is largely generated and collated, rarely user-entered but bulk-loaded, such as Sensis information or GIS information, then dedicated testing of all aspects of the database, decoupled from the application, could indeed make your application testing easier, because it's easier to identify bad data that the application creates.

Database Modelling within an XP Methodology

In an eXtreme Programming (XP) Agile Methodology approach towards software development, the absence of adequate database design, or the scant regard for it, with the assumption that a framework and persistence infrastructure will take care of that, can be a disaster in a larger enterprise solution. In essence it's a scaling effect. In a smaller system, normally a smaller number of users, amount of functionality and volume of data does not show up the inefficiencies in database design, as they can be masked by acceptable performance. But scaling up the system, or designing a large enterprise system, the effect will become multiplied quickly. Of course, using solid XP practices, the ability to make changes and integrate will be easier, but the amount and complexity of changes may be significant.

A more pragmatic approach is necessary in Database Modelling and Design, especially in a larger enterprise solution, when using XP or an Agile approach. Assuming that the choice of relational database/s has been made, greater care is necessary and advance preparation and planning required. Purists could argue YAGNI, but ultimately the customer will be distraught if the system is perfect in functionality and user interface but can't handle the performance of a production load of users, or the gradual growth of users, when all was well in testing, demonstrations and customer training. The other catch is the need for additional disk space added monthly due to unknown requirements.

Additional considerations such as legacy systems data migration, database sizing, database growth, performance requirements, number of users etc can't conform to a traditional XP approach. These tasks require varying lead times; for example, the purchase and configuration of hardware and software needs to be augmented with the XP approach.

XP is not for every project, and in a number of instances I've been involved with, certain considerations due to the environment, the customer and usually management need to be adjusted or tailored for the specific project. I'm not stating that an XP approach can't apply to large scale enterprise Database Modelling, more that some adjustments, particularly within the Planning Process, are needed to integrate a more balanced solution.

The database is the foundation. I draw an analogy, when discussing this with friends, to building a house. If you don't have the core foundation correct (the slab, the essential primary fittings of plumbing, power etc, and a floor plan of the key, important and known things), you will forever, when building your house, be spending additional time and resources to prop this up, taking away time, money and energy from the significant part of building the house so it can ultimately be used. (Thought: I wonder how you would build a house XP style. What's the most important part for the customer? Something to ponder one day.)

Are foundations perfect? No. Do they change and adapt? Yes, they do. However the cost is significantly higher, and the investment in getting it right the first time is invaluable.

I can’t count the number of times I’ve come onto an existing project, and the Database Designer for lack of a better word is a novice. It makes me shutter. It’s an expertise I’ve specialised in, however I’ve had to broaden my skill set, as this task is not used throughout a traditional SDLC project of 12-24 months, and it really saddens me when simple 101 mistakes are made and the downstream impact is significant, and management don’t know or realise the impact. Even worse is when I tell them what’s needed for a correction, and they look at the impact, and the decision is that this fixed known cost to correct is worse then any projected unknown down stream costs of maintenance and future development.

Ultimately I’ve got my way on projects more easily when I’ve also focused on Application Performance Tuning in projects, where I look solely at the application needs, not just the hard core DBA or System Administration tuning. A DBA doesn’t really care about the application and end user impact, they care about the figures in the database. In this situation, low level structural changes and associated costs are weighted against application performance ultimately necessary for the end user. I remember one project that took the development/user/testing/release team 3 months of work (probably ~ 200 man weeks of work) to implement and deploy a key structural change that I had identified and proposed as part of longer performance analysis period of this system. Of course when you had 6 full-time DBA’s alone, 33 remote distributed systems and 1000’s of users, the resultant impact and the future improvements possible were worth the investment. The need not to upgrade the hardware alone was worth it. This project was also over 10 years ago and a lot of techniques have changed since then. Anyway, back to the point of the discussion.

Let me give you an example situation where you use a traditional XP approach to software development, and how to weave in a more structured database design and modelling approach.

User Stories

As part of the gathering of initial User Stories from the customer, the Database Designer should be reviewing these and beginning to build a high level Logical Data Model. The Database Designer is not needed in any initial interaction with the customer; however early reviews may clearly indicate gaps in stories, due to historical experience, that can then be fed back to the customer for consideration. Initially it should just be on paper or a whiteboard, corresponding to the high level of the User Stories and the fluid nature of changing user stories. However, it should immediately highlight known key entities, key relationships between entities, key integrations to external systems, and areas that involve early input of volume estimates of data, perhaps with comparison to existing legacy systems.

For example, a rates billing system I was involved in had ~400,000 clients billed quarterly (4 times a year), and each bill had on average 10 line items (just looking at a small sample, and getting a figure from the legacy system). Now it doesn't take much to consider the size of tables which would record billing history (400,000x4x10) for each cycle, or GL reconciliation over 5 years (400,000x4x10x2x5, that's 160 million). Disk storage alone, without any hint of the average row length or indexes, can be guesstimated when a number of key entities are identified. Of course it could be out by a factor of 2, 5, or 10 times at this early stage, but not a factor of 100 times.

In a situation like this, for any entity, even at the first mud map, I would record a number of indicators where possible. These would be:

  • Initial number of rows
  • Annual growth in rows
  • Projected rows after ‘x’ years

In addition I’d also flag each entity roughly for performance considerations: the type of access, and the volume of transactions. This helps in identifying key performance considerations which may impact the Database Design in areas such as indexes (i.e. disk space) and schema optimisations after normalisation. The two broad access types are:

  • OLTP
  • Batch

For example, the billing process that creates quarterly bills and inserts millions of rows is a Batch process with a frequency of once every three months, as opposed to creating new accounts, which is OLTP, happening in regular quantities every day, throughout the day.
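If you want the mud map to outlive the whiteboard, one option is to record these indicators in a small worksheet table as you go. A minimal sketch; the table and column names here are purely illustrative, and the GL figures come from the billing example above:

$ mysql
mysql> CREATE TABLE entity_volumetrics (
    ->   entity         VARCHAR(30) NOT NULL,
    ->   initial_rows   INT,            # rows at go-live
    ->   annual_growth  INT,            # rows added per year
    ->   projected_5yr  INT,            # rows after 5 years
    ->   access_type    VARCHAR(10),    # 'OLTP' or 'Batch'
    ->   PRIMARY KEY (entity)
    -> );
mysql> INSERT INTO entity_volumetrics
    -> VALUES ('gl_entry', 0, 32000000, 160000000, 'Batch');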

Now, in the mud map these figures don’t have to be accurate. In fact, in the billing system example, if a table didn’t have 10,000 rows it didn’t rate a mention, and 10,000 to < 400,000 was considered small. The threshold here is set by the volume of the key entity, simply because a number of tables will be a factor larger than this key figure, and these will swamp any insignificant tables. Again, this is week 1 of a potential 52-week project; it can afford to be vague in areas.

Spike

How do you term this in an XP environment? Call it a Spike. The purpose of a Spike is to explore options and evaluate risk. While the general practice is to throw away the results, in this case these Spikes should be kept and built on progressively.

Release Planning

As part of the Customer’s User Stories being estimated and prioritised within Release Planning, more refined Logical Modelling, and even Physical Modelling, becomes necessary. Of course, until the customer refines priorities based on estimates, the actual order of implementation can’t be confirmed.

It is possible, however, that a number of stories within an iteration may not relate to each other within the Database Model. How do you address this? Even when the actual attributes of these logical entities are not yet available, a physical model representation of the necessary tables and relationships can be built, so the functionality remains practical to implement.
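For example, if an iteration’s stories depend on an account entity whose attributes are still unknown, the Database Designer can provide a stub holding just the keys and the relationship. A minimal sketch; the names are illustrative only, borrowing from the billing example (TYPE=InnoDB being the syntax of that era):

mysql> CREATE TABLE account (
    ->   account_id INT NOT NULL,
    ->   PRIMARY KEY (account_id)
    -> ) TYPE=InnoDB;
mysql> CREATE TABLE bill (
    ->   bill_id    INT NOT NULL,
    ->   account_id INT NOT NULL,
    ->   PRIMARY KEY (bill_id),
    ->   INDEX (account_id),
    ->   FOREIGN KEY (account_id) REFERENCES account (account_id)
    -> ) TYPE=InnoDB;

The attributes can then be filled in as the related stories firm up, without blocking the stories already in flight.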

Should the need for missing relationships within the Database Model impact the priorities of User Stories? No. Estimates should accurately reflect whether additional work is needed for any given User Story; hence the Database Designer is necessary in the estimation process.

Iteration Planning

As part of Iteration Planning it is key that the physical Database Requirements are in place to enable Writing Unit Tests and Writing Code for each Development Story in the Iteration. This should occur at the start of the Iteration, but not before.
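In practice this can be as simple as each developer loading the iteration’s agreed schema baseline before any unit tests are written or run. A minimal sketch, with a hypothetical script name:

$ # load the schema baseline agreed at Iteration Planning
$ mysql -u[user] -p [db] < schema/iteration_05.sql
$ # unit tests and code for this iteration's stories now target a known schema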

Daily Stand Up Meeting

The Daily Stand Up Meeting is the best opportunity to advise of structural impacts, or for developers to request more involvement from the Database Designer in their specific task.

No system is perfect, and there will be times that the Database Model does not reflect the requirements a coder has for a task. At this time the developer is responsible for adjusting their own instance of the Database Model to complete the coding task. The issue is: does the coder have the ability to modify the database schema when checking in? No. Responsibility for the Database Model rests with the Database Designer, in contrast to the normal coding practice where any coder can modify any code. The next section describes the reason for this in detail. At this point there is an inconsistency within the repository, and any automated testing for a build will fail. And this is what should happen. From the developer’s perspective their instance of the code is correct, but for the entire iteration it is not.
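To illustrate, a blocked developer might apply a change like this to their own database instance only; the table and column are hypothetical, and the change is deliberately not checked in:

mysql> ALTER TABLE bill ADD COLUMN discount_amount DECIMAL(10,2);

The continuous build, still running against the checked-in schema, will fail the tests that depend on this column, which is exactly the signal for the Database Designer to review the change and incorporate or rework it.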

One could argue this is not valid. However, the cycle of Test, Code, Refactor, CheckIn, while applying to the individual task, should also apply to the Database Design, just at an iteration level rather than a task level. Having your continuous builds breaking will certainly raise a lot of red flags, and I’m sure this approach will ensure that the tests broken as a result are corrected by the schema definition (the Code), that there will be some refactoring if, for example, the developer didn’t serve the best interests of the bigger picture, that the schema is checked in, and that the continuous build bar is now green. Yippee. The bar is green, the code is clean, we can all go home now.

The Database Designer

The Database Designer should be a dedicated resource in the development team for this task. For example, in a team of 6-10 developers you would have 1 or 2 people; even in a team of 2 you would have 1 person responsible. Unlike coding, where any of the team pair up and work on tasks within the Iteration, the Database Design team is responsible for Database Design, just as the Development Team is responsible for Coding. There are a few reasons for this:

  • They are ultimately responsible for the structure, the foundation, and this is the bigger picture that includes visibility and scope of further iterations and releases.
  • The skills are more specific.
  • They should also be a part-time developer on the team, so as to best understand the dynamics and also be part of the team.

Correspondingly, the end of the iteration should include an additional code review that is schema related. A difference report of the schema definition at the start and end of the iteration should be produced and compared, to ensure best practices in the larger picture as well as standards, optimisations, future performance considerations (e.g. indexes) and future disk requirements (e.g. adding a new index to a 160 million row GL table will take 4GB of disk space).
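One way to produce that difference report is to dump the structure only at the iteration boundaries and compare. A minimal sketch, with hypothetical file names:

$ # at the start of the iteration
$ mysqldump --no-data -u[user] -p [db] > schema_iteration_start.sql
$ # at the end of the iteration
$ mysqldump --no-data -u[user] -p [db] > schema_iteration_end.sql
$ diff schema_iteration_start.sql schema_iteration_end.sql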

Other Factors

As part of my upcoming conference presentation Overcoming the Challenges of Establishing Service and Support Channels, I spend some time discussing Data Quality. Quite often this is the bane of support, due to the complexity of developing software to cater for data exceptions, or most commonly data anomalies from historical data not meeting the minimum referential integrity (RI) and data specifications of your new system.

Linux Format Reader Awards 2006

The Linux Format magazine is having its annual reader awards in a number of categories.

These include (I’ve included my picks after each category):

You can nominate at http://www.linuxformat.co.uk/nominate/. Nominations close Friday 10 Feb 2006.

Adding to the Library Collection

I took the chance to order some books from Amazon today to add to the library. Of course I’m still reading 2 current books, Spring in Action and the MySQL Certification Study Guide, in order to sit the second MySQL Professional Certification Exam.

As with most things, you start off looking for or reading about something on the web, and you end up somewhere else completely. In this case, I was looking at Linux Software Labs (Australia) and the price of their Linux distribution CDs, which led me to the book Beyond Java listed on their site. I called my local computer book store, but it wasn’t open (Boxing Day public holiday), which led me to think, “well, I’ve been meaning to order some books from Amazon; what were they again?”. That led to a whole new list, and I figured for the cost of freight to Australia I may as well order a few. So here is what I got.

Better, Faster, Lighter Java, MySQL (3rd Edition) (Developer’s Library), High Performance MySQL, and of course Beyond Java.

The hard part now is waiting the 6-10 days.

Speaking at MySQL Users Group

I’m preparing to speak at the next MySQL Brisbane Users Group in February 2006. My topic will be Know your competitor – A MySQL Developers Guide to Using Oracle Express Edition. You can get a full copy of my presentation slides at my Articles Page.

Having a strong background in Oracle, and having been using MySQL for the past 5 years, the release of Oracle Database 10g Express Edition (XE) as a free offering (with limitations of 1 CPU, 1GB RAM, and 4GB disk) is an interesting move by Oracle.

I’ve written a number of recent comments on various Oracle/MySQL things including Responses to some Oracle v’s MySQL Questions, How can Oracle 10g Express Edition target MySQL?, Oracle 10g Express Edition Target Audience. Is it MySQL?, Oracle 10g Express, Free v’s Open Source and OFA.

The question could be posed: what relevance does this have to MySQL developers? In some respects very little, but in others, knowing your competitor a little more, being able to see their offering, and in particular comparing it to MySQL, can help build a level of understanding of database differences. I am hoping that from the discussions, people will consider some approaches to design and development that are more “database compatible”, regardless of which database.

Will Oracle 10g Express Edition take off? A difficult question; there are many target markets. Will it compete with MySQL in Open Source? Hopefully my talk will spawn some discussion of people’s experiences in the various organisations and businesses represented at our meetings.

Upcoming Open Source Conference Presentation

I’ve been working recently on a paper I’m presenting to a conference in February 2006 titled Implementing Open Source for Optimal Business Performance. I got the final glossy brochure yesterday, so I now have something to show everybody. View Here (be warned, it’s a little bright).

The topic I have been asked to speak on is Overcoming the Challenges of Establishing Service and Support Channels. My notes are still in the early stages, but are available at http://wiki.arabx.com.au/index.php/ARABX_Articles:Overcoming_Service_and_Support_Channels

Conference Details on Ark Group Web Site

Review of Database Magazine Article – "The Usual Suspects"

In the “Australian Technology and Business Magazine” – December 2005 edition there was an article on comparing database products. Here are my comments, which I also plan to forward to the editor.

BTW: I’ve since also found this article’s content on another site here. It seems that most, if not all, is the same.

In response to your cover story article “The Usual Suspects – Four databases we suspect your business could be quite interested in”, which appeared in the December 2005 edition, I would have to sum up your article in one word: “Disappointing”. Let me provide some feedback from my perspective.

You start by defining a scenario, which is the only approach you can take for a suitable comparison of database products, given the diversity of features available in today’s products. A good start, and necessary to limit the discussion of features and functionality. However, you then specify some additional business requirements, for example “relatively small e-commerce” and “cost of the initial server and database software is certainly an issue”. Now, having worked for a number of small internet and e-commerce companies, you don’t have the budget for a Dell quad Xeon processor machine, nor the requirements for co-located hosting or dedicated network bandwidth for your fancy new hardware, let alone the additional staffing support costs. So immediately your scenario is unrealistic.

The major sticking point I have is your 4-processor requirement. The most efficient and cost-effective initial implementation is to lease dedicated servers; there are numerous reasons, including cost savings, better hardware support, larger bandwidth capacity and an easy growth path to start. You can also easily monitor growth and change with your needs more quickly than if you had a large initial hardware cost. I could continue regarding hosting, but this alone changes the requirements to single or dual processor machines given your scenario. With this in mind the playing field is now completely different, and a better reflection of the scenario. Your argument to “scale up to a small server farm” also does not hold, because you can easily get economy of scale by splitting application server and database server, splitting OLTP and batch database requirements, and other common practices, not to mention additional benefits such as redundancy.

My final comment on your hardware, specifically in relation to MySQL (even using Version 3): you can get significant performance given your small size requirements, and even with modest growth, on single and dual processor equipment. Other than the opening remarks, your article makes no further reference to performance requirements, or indeed any level of performance analysis of the products reviewed.

You make scant reference to other database products, mentioning only one, Sybase, in half a sentence in your opening and once again later. In 8 pages, surely the article could have been rounded out, to give a clearer perspective of the marketplace, with even one paragraph mentioning that there are many different database products, both commercial and open source, servicing differing business needs. Other major products not compared at this time include Sybase, Informix, Ingres, PostgreSQL, MaxDB and Berkeley DB, as well as many more.

Your choice of products is also not consistent with, or reflective of, your scenario. I’ll provide a few specific reasons. Firstly, you compare beta products against production products; if your criterion was current production products, you should have compared using MS SQL Server 2000, although that would clearly reflect poorly on Microsoft due to its clearly dated product. If you allowed one beta product, why then did you not use the MySQL 5.0 beta, which was available at the time? While you have taken the effort to adjust your article to include references to MySQL 5.0, and you in turn choose MySQL as your editor’s choice, you should have been consistent throughout the article, as you give mixed comparisons referencing two versions of one product. Further to this, you chose a dated Oracle product in 10g Release 1; 10g Release 2 has been available for a number of months. I would also question your decision to choose the more expensive Standard Edition over Standard Edition One, but this again could be solely due to your over-specified hardware.

If your rationale for including beta products was cost based, then you did Oracle a clear injustice. You make, again, only a half-sentence reference to Oracle’s newly released free product in the opening section. You mix more recent MySQL 5.0 information into your review of MySQL 4.1.14, yet you mention nothing of Oracle 10g Express Edition; for example, it’s a free product much like Microsoft SQL Server Express, with similar limitations of 1 CPU, 1GB RAM and 4GB of disk, but with all the power and functionality of other Oracle products, as well as the default inclusion of web-based administration tools with HTML DB.

Your quick product summary (4 columns of information) suffers from a number of already-mentioned points. However, in relation to the only commercial product with a free offering, MS SQL Server Express, you clearly gloss over the limitations: 1 CPU, 1GB RAM and 4GB of disk is critical information that should have been included in the product summary; you only go part of the way. Regarding MySQL, the summary should have clearly stated $0 under the GPL license. On that note, and as mentioned in your detailed review, there are limitations on the distribution of MySQL within a commercial product, and this is not in your summary.

Your article places no emphasis on performance or efficiency. Despite needing to mention that your testing was on quad-processor hardware, and making references to various CPU limits across products, memory and hardware requirements, as well as some generic maximum sizings, you say nothing about performance, throughput or growth potential, even though this was part of your opening scenario.

It’s not possible to clearly date when this review of products was performed. Granted, the marketplace has changed rapidly in recent months, but the fact that your article references Oracle 10g Express Edition indicates that changes to the article were still possible in early November.

As mentioned, in an 8-page article you could have allocated one column to noting that the database marketplace contains many more products. In particular, considering you included an Open Source product and selected it as the product of choice, I feel this gives even more justification for at least giving credit to the emerging Open Source database market. You make, I recall, only one mention of “Open Source”, which is significant in the context of your choice. Other products would include PostgreSQL, Berkeley DB, Apache Derby and even Ingres. While your article clearly need not analyse these at this time, by leading into this topic you would provide a clear opportunity for further discussion.

At the end of the day, while you provided a concise one-page breakdown of features and certain limits, this technical information does not provide a clear benefit to an IT manager, or even a technical person.