MySQL Stored Procedures Performance

Another session I attended at the MySQL Users Conference was Tuning MySQL5 SQL and Stored Procedures by Guy Harrison from Quest Software, a global company with 6000+ customers.

Guy has written a number of Oracle performance books in the past. His work now is on the “Spotlight” product family – database diagnostic tools that convert data into graphical representations. These products require MySQL 5 and InnoDB only, simply because they need access to the right internal information for presentation. Freeware MySQL product downloads are available.

In this presentation he stated that nothing he was talking about was particularly new. He did make quite a funny comment, that he is “now seeking refugee status in the MySQL Community”.

Guy is the author of the O’Reilly book “MySQL Stored Procedure Programming”, which I managed to get for free at the conference MySQL Quiz night, in addition to a shirt and cap for stumping a Guru.

His talk was on tools and techniques for tuning MySQL.

  • Explain Command – reveals what the optimizer intends to do
  • Explain Extended
mysql> explain extended select ...;
mysql> show warnings\G
This shows what the optimizer actually did. In this example, an IN was converted to an EXISTS (see the sketch below).
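
As a rough sketch of the above (the tables here are from the standard world sample database, my own choice rather than the example used in the talk):

mysql> explain extended
    ->   select Name from City
    ->   where CountryCode in (select Code from Country where Continent = 'Oceania');
mysql> show warnings\G

The Message column of the warning contains the query as the optimizer actually rewrote it, with the IN subquery expressed as an EXISTS.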

There were 4 ways to provide optimizer hints (a rough example of each follows the list):

  1. STRAIGHT_JOIN
  2. USE INDEX(…)
  3. FORCE INDEX(…)
  4. IGNORE INDEX(…)
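
As a rough sketch of each hint in use (the table, column and index names below are my own, not from the talk):

-- force the join order to match the order the tables are written in
SELECT STRAIGHT_JOIN c.name, o.qty
FROM customers c JOIN orders o ON o.customer_id = c.customer_id;

-- suggest, force or exclude a particular index on a table
SELECT * FROM orders USE INDEX (idx_customer) WHERE customer_id = 42;
SELECT * FROM orders FORCE INDEX (idx_customer) WHERE customer_id = 42;
SELECT * FROM orders IGNORE INDEX (idx_product) WHERE product_id = 7;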

In addition to the Slow Query Log, there are InnoDB-specific status counters, two in particular:

show status like 'innodb%'
* innodb_buffer_pool_read_requests
* innodb_data_read

Indexing and the optimizer

  • In MySQL an index is the best tool to improve performance; however, sometimes it’s better to access the entire table.
  • Indexes are generally effective when between 5% and 20% of rows are accessed.
  • Subqueries need to be satisfied by an index or performance will be quite inefficient.
  • Overloading indexes with additional columns, when key queries only use a few columns, can improve performance.

Not all indexes are created equal. In the following examples, each advancement improved performance (a rough sketch of the corresponding index definitions follows the list).

  • No indexes
  • Single index (customer)
  • Multiple indexes (customer, product)
  • Concatenated index (customer + product)
  • Covering index (including required columns, customer + product + qty)
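
A rough sketch of what that progression looks like as index definitions, against an assumed sales table rather than the presenter's actual schema:

-- single-column indexes
CREATE INDEX idx_customer ON sales (customer_id);
CREATE INDEX idx_product ON sales (product_id);

-- concatenated (composite) index
CREATE INDEX idx_cust_prod ON sales (customer_id, product_id);

-- covering index: also carries the selected column, so a query such as
--   SELECT qty FROM sales WHERE customer_id = 42 AND product_id = 7;
-- can be answered from the index alone
CREATE INDEX idx_cust_prod_qty ON sales (customer_id, product_id, qty);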

Examples of SQL that can’t benefit from Indexes.

  • Derived tables – a SELECT in the FROM clause creates a temporary table and will never get an index (see the sketch below).
  • Views with UNIONS/GROUP BY
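
To make the derived table point concrete, a query of the following shape (table and column names are mine) materialises the inner SELECT into a temporary table that has no index, so the outer WHERE clause has to scan it:

SELECT t.customer_id, t.total_qty
FROM (SELECT customer_id, SUM(qty) AS total_qty
      FROM sales
      GROUP BY customer_id) AS t
WHERE t.total_qty > 100;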

A comment from the audience was that, in specific cases, a derived table can be of benefit compared to a correlated sub-query.

Stored Procedures are a mixed blessing for performance.

  • Can improve performance when network overhead is high.
  • Some improvement on parsing.
  • Breaking up complex queries may provide benefits.
  • SQL is highly optimized for SET operations.
  • Stored procedures are not optimized for number crunching; computationally it is not a fast language.

A routine written to calculate prime numbers gave the following performance (from most expensive to least): MySQL SP, Oracle SP, PHP, Perl, Java, VB.NET, C (gcc). This showed an example where a stored procedure is excessively inefficient. On the other hand, if the program is network dependent (e.g. accessing a million rows and performing some statistical aggregation), a stored procedure is comparable to Java when run locally, but much better when the client is on a remote host.

The performance of the SQL within a stored procedure will dominate overall performance. Once the SQL is tuned, go to tried and proven traditional optimisation techniques:

  • Optimize iterations
  • Optimize Logic/Testing
  • Avoid recursion

Loop Management

  • Only perform necessary code within iterations
  • LEAVE or ITERATE (MySQL’s CONTINUE) as early as possible in loops (see the sketch after this list)
  • Test the most likely IF/THEN statements first
  • Extract duplicated IF comparisons into nested IFs (within reason)
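
A minimal sketch of these loop guidelines in a stored procedure, assuming a simple sales table (my own example, not code from the talk):

DELIMITER //
CREATE PROCEDURE find_first_negative_qty()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE v_qty INT;
  DECLARE cur CURSOR FOR SELECT qty FROM sales;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN cur;
  scan_loop: LOOP
    FETCH cur INTO v_qty;
    -- leave the loop as soon as there is nothing more to do
    IF done = 1 OR v_qty < 0 THEN
      LEAVE scan_loop;
    END IF;
  END LOOP scan_loop;
  CLOSE cur;
END//
DELIMITER ;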

Some guidelines for Triggers.

  • Triggers have a non-trivial overhead, even for the simplest trigger.
  • Because triggers are FOR EACH ROW only, don’t have expensive SQL in any trigger.
  • Very carefully tune SQL in triggers.
  • An empty trigger produced a 12% overhead (see the sketch below).
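
For reference, the kind of “empty” trigger such a benchmark would measure is simply the following (the table name t1 is assumed):

DELIMITER //
CREATE TRIGGER t1_before_insert BEFORE INSERT ON t1
FOR EACH ROW
BEGIN
  -- deliberately empty; even this adds a measurable per-row cost
END//
DELIMITER ;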

For more information check out www.quest.com/mysql

Mark Shuttleworth

Thursday’s Keynote speaker at the MySQL Conference was Mark Shuttleworth talking on The Ubuntu Project: Improving Collaboration in the Free Software World.

Ubuntu from Canonical Ltd, based on Debian, is fast becoming one of the most popular Linux distributions. Here are some notes from his presentation. Mark had some really powerful one- or two-line slides that typified both his presentation’s intention and the goal of Ubuntu.

Delivering on the Promise of Free Software

  • MySQL levelled the playing field, individuals could use MySQL to build applications that could now compete in the industry.
  • Apache, Python and Linux made it all possible.
  • This has provided the Opportunity and passion to build software that can serve a purpose.
  • Ubuntu is the continuation of levelling the playing field, making it always available at no cost.
  • Whatever your vision, you have the opportunity without financial limitation.
  • To be a pioneer in new fields: open source freedom gives that opportunity to anybody.

The New Deal of Free software

* Different Economics

  • Ubuntu is world class and free of charge.
  • It is important to recognise the governance.
  • We don’t build 99% of it; it’s built by communities. We want to maintain good, healthy and open relationships with the communities of software developers and contributors.

* Different culture

  • Transparency. You can participate, and you get to see how the software is built. Co-ordinating people worldwide is difficult, but the advantage is that the work happens in the public space, everywhere.
  • The ability to try it and experience it, and to express their own inspiration.

Our vision is to build a complete community-driven distribution that can grow and sustain itself through support and services.

Self-sustaining at a professional level. Community driven.

  • The Ubuntu Proposition
  • Genuine Freedom and Flexibility
  • Zero Licensing Fees
  • Certification: Hardware, Software, Skills
  • Superb Commercial Technical Support
  • Standards Compliance
  • Internationalization and customization
  • Collaboration to the Core

We do the best work when we focus on extreme challenges.

Mark made a great comment about the Chinese version: whatever was in English on a page was, for Mark, the saving grace, but for the Chinese user it was the wall, so a committed emphasis on internationalisation is important.

Ubuntu as you know it today is widely rated the #1 desktop Linux, with millions of users globally and complete office and home desktop distributions. We started on the desktop and are continuing to expand into different variants, including the enterprise. The next release in June 2006 – Dapper – 6.06 will include LTS (Long Term Support): 3 years for desktop, 5 years for server.

We maintain relationships with the community ecosystem.

  • Looking at the user relationships.
  • The prime focus is simplicity; also comprehensive, community and current.
  • For the business relationships, they want to offer a Tiered Partnership Program and an Extensive Global Solutions Provider Network.

Free Software Relationships

  • Close to upstream (e.g. MySQL, Apache, Firefox etc)
  • Close also to other distributions (both debian based and not). We want the best of their work to also be part of what’s available to all.
  • Need to ramp up collaboration, especially between projects. Check out the launchpad.net site – it makes it much easier to collaborate across projects.
  • With MySQL and Ubuntu you have freedom to deploy without asking permission, and support when and where you need it
  • A key is to make the packaging of MySQL in Ubuntu the best it can be, getting contributions from the community. It expresses most powerfully what MySQL can do.

Beating history into submission

  • We judge ourselves by history. We should look at our strengths. We should ask, “What futures are enabled?”

Free software enables a different economic paradigm.
Embrace it.

A question from the audience was where the name Ubuntu comes from. The answer:
Ubuntu is common to many African languages. It means “human-ness”. The Ubuntu root is common to a lot of words. One meaning given was “The way we look after other people defines who we are.”

Kubuntu – Has a meaning “For Humanity”.



My MySQL Conference Presentation


My talk at the MySQL Users Conference went well. MySQL For Oracle Developers was one of a pair of talks; the second was by Mark Leith on MySQL For Oracle DBAs.

I had a number of positive comments from attendees, including Ken Jacobs of Oracle, who also contributed some valuable information on the comparison of UPDATE/DELETE ORDER BY/LIMIT statements that I was unaware of.
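
For readers unfamiliar with this MySQL-specific form, single-table UPDATE and DELETE statements accept ORDER BY and LIMIT (the table and column names below are my own illustration, not from Ken’s comments):

UPDATE sessions SET expired = 1 ORDER BY last_access LIMIT 100;
DELETE FROM sessions WHERE expired = 1 ORDER BY last_access LIMIT 100;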

You can download my paper here.

My slides, like those of many talks, just scratched the surface in the allotted time. I’ve been working on additional reference material; a work in progress is available at MySQL For Oracle Developers. My continuation of this will depend on feedback from the community on its intended value.

Hacking MySQL Source improvements

Further to my earlier post Hacking MySQL Source (in a good way), in which I was having a compilation problem when re-compiling the MySQL source (i.e. it worked on the first build, but failed on subsequent re-compiles, even with no changes):
I’ve been able to solve my re-compiling issue, with special thanks to Jan Kenschke of MySQL AB, who was nearby during the MySQL Quiz Show.

As suspected, there is no need to re-run the ./BUILD/compile-pentium-debug command, as this cleans first; a simple make command is sufficient (I needed to confirm no other command args were needed). Doing this also better highlighted the actual error, where previously it wasn’t highlighted. I didn’t keep a copy of the error, but in effect I needed to do the following in order to successfully recompile.

su - mysqldev
cd mysql-5.1.10-beta-nightly-20060413
touch mysql-test/std_data/client-key.pem
touch mysql-test/std_data/client-cert.pem
touch mysql-test/std_data/cacert.pem
touch mysql-test/std_data/server-cert.pem
touch mysql-test/std_data/server-key.pem
make
# Success

I can’t answer why I had to do this manually (create these empty certificate files), and why some level of make doesn’t (or does, but I don’t know the command), but it works, and right now that’s important. Hopefully a MySQL guru can enlighten me.

From here, I was able to test as I’d written previously. I just need to confirm my changes work before reporting back, as well as providing some guidelines for debugging with gdb.

Contributing to the MySQL Community

Everybody can contribute to the MySQL Open Source Community. Here are some ideas:

  • You can start by completing a MySQL Conference Survey form (those forms that we are all avoiding).
  • You can download, evaluate and test newer versions of MySQL Products.
  • You can contribute to the MySQL Forums to ask questions and even give answers.
  • Log Bugs & Enhancements on MySQL Products.
  • Write about your experiences with MySQL in a Blog and add your Blog to PlanetMySQL. With now over 100 feeds, if everybody wrote something about MySQL in the next year, we would have over 1000 feeds next conference.
  • Contribute your thoughts (via wiki), MySQL products, code snippets and examples to MySQL Forge (thanks Mike)
  • You can even contribute to the MySQL Source Code.

If you are passionate about MySQL and you are attending the MySQL User Conference, then jump into the Open Source MySQL Community.

Extracting User SQL Queries from a MySQL Server

Joe Kottke from FeedBurner in his MySQL Conference presentation today FeedBurner: Scalable Web Applications Using MySQL and Java mentioned a novel way of extracting SQL Select Statements from a MySQL Server.

The obvious MySQL ways include SHOW [FULL] PROCESSLIST, the Slow Query Log, and the General Query Log. In 5.1 you can also use the INFORMATION_SCHEMA.PROCESSLIST table.
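
For example, a minimal query against that table in 5.1 (using the columns of the INFORMATION_SCHEMA definition) would be:

SELECT ID, USER, HOST, DB, TIME, INFO
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE COMMAND = 'Query';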

Anyway, Joe mentioned he runs strings on a tcpdump capture to extract the queries, with no need to restart the server (this point interested me). So, never having used tcpdump before, I wasn’t going to let the challenge go.

Well, here’s how.


tcpdump -w mysqldump.txt -i lo

NOTE: You must log output to file with -w to get the full packets and not just headers.

In a separate session.


$ mysql -uroot -p mysql -h `hostname`
mysql> select * from user;
mysql> exit
# I did a ctrl-C in the other terminal (not sure if necessary)
$ strings mysqldump.txt
select * from user
mysql
user
user

NOTE: localhost wasn’t valid (with localhost the client connects via the Unix socket, bypassing TCP); it had to be the hostname of the server. Of course you could also access the server from a different client machine.

Of course there may be other or better ways, but it worked for me and was rather cool.

Perhaps this is a good reason to use SSL?

MySQL Migration Toolkit Extensions

Just to add some more features of the MySQL Migration Toolkit from the MySQL Conference presentation MySQL Migration Toolkit by Mike Zinner from MySQL AB.

You can extend and modify the functionality of the Migration Toolkit with the supplied Eclipse Java project. I’ll need to download the code to write some more detailed notes, but here are a few points from Mike.

Classes are in the com.mysql.grt.modules package space.
The main configurable classes are named Migration????.java and ReverseEngineer???.java, with ???? standing for the different database products.

Within the Eclipse project you can incorporate other languages. For example, ReverseEngineerMySQLJdbc.java has a callGrtFunction to code written in C. This, I’m assuming, will allow you to incorporate all the other languages of the GRT, such as Python and Lua. (I’m still impressed you can actually do this; a mini project for another day is to pull this apart.)

A question from the audience was about Oracle User Defined data types. MigrationOracle.java for example can easily be extended to cater for the migration of user defined data types to a suitable valid MySQL alternative.

References
MySQL Migration Toolkit – Provides details, screenshots and Flash Tutorials for SQL Server, Oracle and Access Migrations.

The MySQL Migration Toolkit

A Wednesday afternoon presentation at the MySQL Conference was MySQL Migration Toolkit by Mike Zinner from MySQL AB.

In summary, the Migration Toolkit currently provides the following feature set.

  • Wizard like interface
  • Plug-in architecture
  • Migration of Oracle/MS SQL Server/MS Access/Generic JDBC Data Sources. 2006 support for Sybase and DB2.
  • Provides user support to expand to other databases
  • The produced wizard scripts are customisable at any step
  • Completely scriptable (the wizard interface can produce command-line replay scripts) – allows re-running and re-scheduling
  • Binaries include an Eclipse Java Project to enable easy extension of the Migration Toolkit.

The product is built on the Generic Runtime Environment (GRT) which is used in other MySQL GUI Products. The GRT is a thin C Layer that allows you to easily add modules written in other languages (e.g. C/C++, Java, Lua, Python etc).

The following are the steps of the Migration process.

  • Source/Target
  • Object selection
  • Object Mapping
  • Manual Editing
  • Schema Creation
  • Data Mapping
  • Bulk Transfer
  • Summary

Some more relevant points on some of the individual steps.

Source/Target

  • Screen caters for source specific requirements (e.g. SID for Oracle)

Object selection

  • Choose the different Schema Objects, including tables, views. Can limit by type, or individual objects
  • For Oracle additional objects include Schemas, Routines and Sequences

Object Mapping

  • Define Methods for mapping of schema objects
  • Includes language/character-set types (e.g. latin1), multilanguage (utf8), or a user-defined choice
  • For tables you can choose the purpose, which includes data consistency or statistical data. These rules will include determining the best storage engine and interpreting auto increment columns.

Manual Editing

  • You can drill down in great detail to define the object mapping, down to a column level
  • Provides a review of migrated objects, including warnings (for example, views need to be manually checked, or a sequence was not migrated)

Schema Creation

  • Can create objects in real time or create a SQL script for later execution
  • You can review the generated SQL code that will be used in real time and perform further manual refinement (e.g. converting to a specific SMALLINT, INT or MEDIUMINT type)
  • Will show any warnings, and allow for manual correction

Data Mapping

  • Can load data in real time into a MySQL database
  • Can create INSERT statements (for normal data and BLOB data), using optimized multi-row inserts with byte limits (see the sketch after this list)
  • No facility at present for flat file creation/loading via LOAD INFILE
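
The multi-row INSERT form mentioned above presumably looks along these lines (values invented purely for illustration), with each batch cut off once it reaches the configured byte limit:

INSERT INTO customers (customer_id, name, city) VALUES
  (1, 'Acme Corp', 'San Jose'),
  (2, 'Foo Pty Ltd', 'Brisbane'),
  (3, 'Bar GmbH', 'Berlin');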

Bulk Transfer
Summary

  • A migration report which can be saved to disk.
  • Can create a migration script for the complete process (a generated Lua script).

I can see the possibility for the following enhancements.

  • Consider ENUM fields by analysing the data in short character columns (e.g. CHAR < 20) and offering ENUM as the data type
  • Consider determining AUTO_INCREMENT columns

References
MySQL Migration Toolkit – Provides details, screenshots and Flash Tutorials for SQL Server, Oracle and Access Migrations.

Hacking MySQL Source (in a good way)

HackFest B: Creating a New SHOW Command by Brian Aker at the MySQL Users Conference

Brian stepped through the steps for those attending to modify and deploy new functionality in the MySQL server. Cool. I would consider the end result relatively easy for an experienced developer (after avoiding the pitfalls).

NOTE: I wasn’t able to complete this successfully during the session, but I’ve posted this, so hopefully the input and review of others can help in overcoming the current issues. See Outstanding Issues throughout my notes.

Our goal: Produce a new command SHOW CONFERENCE;

The following commands were performed on CentOS 4.2. There may be some differences with different Linux Distros.

Prepare a current Source Code Tree


su -
useradd mysqldev
su - mysqldev
wget http://downloads.mysql.com/snapshots/mysql-5.1/mysql-5.1.10-beta-nightly-20060426.tar.gz
# NOTE: You should check the snapshots page for latest versions http://downloads.mysql.com/snapshots.php
tar xvfz mysql-5.1.10-beta-nightly-20060413.tar.gz
cd mysql-5.1.10-beta-nightly-20060413
# compile script will depend on H/W and requirements
./BUILD/compile-pentium-debug --prefix=/home/mysqldev
make install
scripts/mysql_install_db --datadir=/home/mysqldev/data
sql/mysqld --basedir=/home/mysqldev --datadir=/home/mysqldev/data &

NOTE: This can take a while, so it’s a good opportunity to leave this and come back at a later time. The --prefix allows you to install into the new user’s directory, ensuring that you can test without affecting any currently installed MySQL installations on your machine.

The BUILD directory contains many different compilations for platforms and variants. This version contains 41 specific scripts. For the purpose of our new SHOW command, we are going to run in debug mode for any necessary debugging.

Outstanding Issues

  • I was surprised a make install didn’t configure the bin/mysqld and scripts/mysql_install_db as you would get in a binary distribution. There are probably additional commands necessary for this, but for the purposes of development it works.
  • I run multiple versions of MySQL on my machine, and I had some conflict with my /etc/my.cnf file, so I just removed it for the purpose of this test. There needs to be a cleaner way to ensure this isn’t used, and also to compartmentalise/isolate different source trees.

Confirm Installation

I ran this in a separate terminal window.

su - mysqldev
bin/mysql -e "SELECT VERSION()"
+------------------------------------+
| VERSION()                          |
+------------------------------------+
| 5.1.10-beta-nightly-20060413-debug |
+------------------------------------+

This gives us a suitable source baseline.

Implementing a new SHOW Command

The easiest means of developing a new SHOW command is to base this on an existing command. We are going to base this new SHOW command on the SHOW AUTHORS command.

We are going to be looking at the following files.

  1. lex.h
  2. sql_yacc.yy
  3. sql_lex.h
  4. sql_parse.cc
  5. mysql_priv.h
  6. sql_show.cc

All these files are found under the sql directory in the source tree.

lex.h

131:     { "CONFERENCE",       SYM(CONFERENCE_SYM)},

sql_yacc.yy

 202:  %token  CONFERENCE_SYM
...
8267:       | CONFERENCE_SYM
8268:         {
8269:           LEX *lex=Lex;
8270:           lex->sql_command= SQLCOM_SHOW_CONFERENCE;
8271:         }
...
9366:       | CONFERENCE_SYM        {}

sql_lex.h

112:   SQLCOM_SHOW_CONFERENCE,

sql_parse.cc

3515:   case SQLCOM_SHOW_CONFERENCE:
3516:     res= mysqld_show_conference(thd);
3517:     break;

mysql_priv.h

915: bool mysqld_show_conference(THD *thd);

sql_show.cc

229: bool mysqld_show_conference(THD *thd)
230: {
231:   List<Item> field_list;
232:   Protocol *protocol= thd->protocol;
233:   DBUG_ENTER("mysqld_show_conference");
234:
235:   field_list.push_back(new Item_empty_string("Name",100));
236:
237:   if (protocol->send_fields(&field_list,
238:                             Protocol::SEND_NUM_ROWS | Protocol::SEND_EOF))
239:     DBUG_RETURN(TRUE);
240:
241:   protocol->prepare_for_resend();
242:   protocol->store("Welcome to the MySQL User Conference 2006", system_charset_info);
243:   if (protocol->write())
244:     DBUG_RETURN(TRUE);
245:   send_eof(thd);
246:   DBUG_RETURN(FALSE);
247: }


cd ..
./BUILD/compile-pentium-debug --prefix=/home/mysqldev
...
gmake[3]: *** Waiting for unfinished jobs....
/bin/chmod +x mysql-test-run-t
/bin/mv mysql-test-run-t mysql-test-run
gmake[3]: Leaving directory `/home/mysqldev/mysql-5.1.10-beta-nightly-20060413/mysql-test'
gmake[2]: *** [all-recursive] Error 1
gmake[2]: Leaving directory `/home/mysqldev/mysql-5.1.10-beta-nightly-20060413/mysql-test'
gmake[1]: *** [all-recursive] Error 1
gmake[1]: Leaving directory `/home/mysqldev/mysql-5.1.10-beta-nightly-20060413'
gmake: *** [all] Error 2

Outstanding Issues

  • I’m sure there must be a more efficient means of compiling changes; this process effectively cleaned and started again.
  • I had a build failure which didn’t seem to make any sense.

At this point I wasn’t able to continue, but here are notes I took of next steps.

make install
gdb mysqld
run --gdb --debug

In another terminal session.

su - mysqldev
bin/mysql
mysql> SHOW CONFERENCE;
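
If the build and install succeed then, based on the sql_show.cc code above, the new command should return a single row, something like:

+-------------------------------------------+
| Name                                      |
+-------------------------------------------+
| Welcome to the MySQL User Conference 2006 |
+-------------------------------------------+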

I’m keen to see it work in my own environment and documented for others to try.

Brian moved on to creating an INFORMATION_SCHEMA query, but we ran out of time to complete this. I’ve got some notes to document at a later date.

PBXT – The MySQL Community Transactional Storage Engine

While having a discussion with Paul McCullagh (the creator of the PBXT transactional storage engine) and Taneli Otala, MySQL AB CTO, after the keynote presentation at the MySQL User Conference, Taneli made the following comment (paraphrased and reproduced with permission).

“I talk about PBXT in discussions shamelessly. The development of PBXT was excellent timing in the MySQL community landscape”.

It was an excellent commendation that MySQL AB management consider the contributions from the community so highly. As mentioned in the opening keynote, the Storage Engine API in MySQL 5.1 has great potential to expand the options available to users of MySQL.

Good work Paul. I like many others wish you the best in your continued development and contribution.

Improvements in SHOW STATUS

It just dawned on me, as this topic was mentioned in a MySQL Conference presentation yesterday as a manual process: why can’t the following functionality be added to the MySQL server?

My request for two new extensions to the SHOW STATUS command.

SAVE STATUS – This enables the current SHOW STATUS output to be saved (or more specifically cached); you can only keep one copy per server instance.

DIFF STATUS – This shows the difference between the current SHOW STATUS and the last saved show status from the SAVE STATUS command.

This would quickly, easily and interactively, via a mysql prompt, enable a DBA to see the state of change in a more condensed form. The syntax may need to cater for session/global scope, and of course would need to be appropriately named; my examples are just an idea.

Of course you can write an easy script that does the same and provides a diff (a rough sketch follows), but why not include it directly?
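
As a rough sketch of that script done purely in SQL, assuming the INFORMATION_SCHEMA.GLOBAL_STATUS table that appeared during the 5.1 series (the SAVE/DIFF labels are just mine):

-- "SAVE STATUS": snapshot the current counters
CREATE TEMPORARY TABLE status_snapshot AS
  SELECT VARIABLE_NAME, VARIABLE_VALUE
  FROM INFORMATION_SCHEMA.GLOBAL_STATUS;

-- ... run the workload of interest ...

-- "DIFF STATUS": show only the counters that changed
-- (non-numeric status values simply subtract to zero and are filtered out)
SELECT g.VARIABLE_NAME,
       g.VARIABLE_VALUE - s.VARIABLE_VALUE AS delta
FROM INFORMATION_SCHEMA.GLOBAL_STATUS g
JOIN status_snapshot s USING (VARIABLE_NAME)
WHERE g.VARIABLE_VALUE - s.VARIABLE_VALUE <> 0;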

Could not have said Agile better myself

I’ve just attended Scott Ambler’s presentation on Agile Database Techniques: Data Doesn’t Have to be a Four-Letter Word Anymore at the MySQL Users Conference.

There is so much content on the topic that it’s impossible to present it all in a short 45-minute session. I can speak with authority on the problem of condensing so much content, having had the same issue with my own presentation, MySQL for Oracle Developers.

I ask this question: why is common sense considered such a radical approach? I state this because Agile Methodology approaches are in so many ways common sense, but “traditionalists” (and I use this term for several groups of existing IT dinosaurs) see change and continual-improvement approaches as potentially evil, while they are constantly just trying to stay afloat daily with bloated, inefficient and overly complex legacy systems (I had to throw in several daggers at the same time, couldn’t resist).

This presentation echoed both a lot of my experiences and part of my writings and current projects. I have been a database modeller for over 16 years, and I have worked with the Extreme Programming Agile Methodology for over 6 years now. Here are some abstract and unstructured bullet points from the presentation, with some of my own comments intermingled. (Unfortunately, due to other discussions I missed the initial 15 minutes; however, given the content and my own professional standing, I can reasonably surmise what the initial content was.)

  • Agile Modeling Driven Development AMDD
  • Agile Data Modelling
  • Iteration 0 (zero) should consist of an initial domain model to provide scope and visibility of the bigger picture, but only at a high-level view. Any extended time spent is wasted time.
  • Why use this approach to data modelling? To handle change efficiently.
  • There is a clear lack of tools and techniques in Automated Testing and Code Coverage concepts specifically for Data. See my thoughts on this at the links at the end of this entry.
  • Incremental software development enables a production rollout after every iteration
  • There is movement to a Data Modelling Standard based on the UML Notation
  • Transitioning approach to schema changes. Interesting concept, not necessary in a new project with Test Code Coverage, but essential for Legacy systems for a low risk approach using small steps.
  • Stop talking about data quality and actually do something about it. Bad data – or more specifically “must have” migrated data that breaks all the integrity in a new system – has been the bane of my experience in large data migration projects.
  • Generalising Specialists

Scott threw in a lot of Agile terms, and for most people in the audience I observed that quite a few of these terms were indeed foreign. I could easily see the potential for a Talk on Introduction to Agile – Applying to MySQL Projects.

One term I didn’t hear was YAGNI – You Ain’t Gonna Need It. This is an essential XP principle which in some ways sums up the common sense approach to software development. Don’t even think about it until it’s made its way to the top of the customer requirements for the current iteration.

I’ll also be getting the book “Refactoring Databases – Evolutionary Database Design” that was mentioned; I’m keen to read in depth more of the principles I so strongly promote myself. Sometimes I feel quite isolated in this area of data modelling colliding with Agile methods.

I’ve previously written content that both reinforces a number of points of this presentation and complements it in a number of ways. These include Unit Testing A Database and Database Modelling within an XP Methodology.

I recall this quote from a tee-shirt I once owned, and I think it is valid for certain IT professionals who continue to cling to traditional approaches in data modelling. Perhaps I should put it on an Agile slogan shirt for myself: Evolve or Die.

In closing, I’ve been wanting to write a paper for quite some time titled “Better Productivity and Quality. An Agile Approach” to share my experiences. I haven’t been able to put my thoughts down, having two other major writings in progress at present, but this presentation has only renewed my vigor.

Opening Conference Keynote

Mike writes a good summary of the MySQL Conference opening keynote State of the Dolphin: Interview with Kaj, Monty and David of MySQL.

I’d like to add just two comments.

Firstly, it was great to see community awards to Giuseppe, Roland, Marcus and Rasmus. Well done!

Second, it was a great thing to see on the slides a reference to Paul’s PBXT Transactional Storage Engine. A MySQL storage engine from the community. I’ve had a chance to meet Paul and have a number of great discussions with him. I wish him all the best, and I’m happy to contribute what I can to see his engine make it into the MySQL product in the future.

Bootstrapping

Tuesday’s keynote sessions at the MySQL Users Conference included a presentation from Greg Gianforte of RightNow Technologies on “Bootstrapping: Starting an Open Source Business With Almost No Money!”

Taken directly from Greg’s presentation, here are some key points on bootstrapping that meant something to me.

  • Sales is Job #1
  • Don’t spend Beyond Your Means
  • There Is Always Another Way
  • What is Your Noble Purpose?

Some great points. In summary, not having funds should not be an excuse – find another way. You just need to think creatively. I agree completely; thinking creatively is something I do well. Immerse yourself – talk to lots of people, develop a one-page description, get it out to as many people as possible, talk to people, ask for an order. Of course sales is #1, but for me personally, and also within the essence of the presentation, there is Honesty. His passion for helping people get started has also led to publishing a book for bootstrappers (in paperback to save money).

One of the key goals is to “Help our clients to better serve our customers”. Greg was also correct about the need to address the loss of ethics in the IT and software industry, something I fight for back at home.

A number of these points had great relevance, given I’ve worked for two failed Internet startups.

The closing quiz raised some great questions. Here they are. (Hopefully now reproducible in the Open Source community, oops too late now!)

  • #1 What should you do first?
  • #2 You have an idea. What next?
  • #3 You have a good product, What next?
  • #4 You find a prospect. What next?
  • #5 Your product does not do everything a prospect wants. Should you?
  • #6. A major publication calls saying they are writing a big article about your products and wants you to buy an ad in the same issue of their magazine. You should?
  • #7 You are still gettting started and Dun & Bradstreet is asking for company info and detailed financial data. Should you:
  • #8 A major prospect says they want to fly to visit you, but you are still working out of your house. Should you:
  • #9 You are the only person working full time in the business and you are asked how many employees you have. Tell them:
  • #10 You’ve just made your first sale and generated your first monthly profit. You should:

I’ll save the expense by not giving any answers, but my favourite answer was for #9.

Greg started the presentation with the right words for me: Love Mountains, Love Skiing, check out our Careers. Anybody that knows me knows that my plan for this year is to be in the Northern Hemisphere for winter, working for an organisation that is close to the snow, enabling me to go skiing. I just may have to check it out.

My First BoF – MySQL GUI's

Last night, I attended Mike Zinner’s presentation on MySQL GUIs, most specifically MySQL Workbench. In summary, I was very impressed with the current work and the future potential. I’ve been waiting for about a year now to get access to a full-strength MySQL Workbench (it has been available for some time, but is still in early development), having previously used DB Designer for MySQL data modelling. One of the limitations has been finding a quality tool that runs under Linux. Well, MySQL Workbench will satisfy this and far more in the future.

A quick summary of some of the glossy features.

  • Reverse Engineering of schemas from multiple input sources including MySQL, Oracle, MS SQL Server.
  • Synchronisation with a MySQL Database.
  • Quick Re-Synchronisation.
  • Full GUI tools and features for creating and maintaining database objects including tables, primary keys, foreign keys, views.
  • Fully customisable styles via XML and SVG functionality
  • Intuitive point, click and drag functionality for database objects (primary keys, for example)
  • An interactive shell for the GRT.

The Best Feature

The introduction of the GRT Shell, an interactive shell, will provide near unlimited potential.

From the shell you can do everything you can do via the GUI, and you can do it in multiple languages including Lua, Python, C/C++, Java and, in future, Ruby, with the potential for many more. The potential of scripts and plugins is limitless. Mike demonstrated a number of simple examples, so I’ll be very keen to review and contribute to this functionality now that I’ve seen it in action. MySQL is actively working for community contribution with the recent introduction of the MySQL Forge.

Some of the discussion was about the future direction of the technology. Presently, there is some limitation on reuse in the end-user GUI. A rich GUI is provided, but it is platform specific, providing a different look and feel on Linux, Mac OS and Windows. One of the options, and definitely of interest, is the consideration of using Java as a standard GUI language.

One of the requests was for more feedback and greater community input and contribution. I agree there need to be better means here; one of my suggestions was an interactive feedback model built into beta products, actively prompting users to provide feedback to MySQL. This would also enable MySQL to gain greater insight into the users of the GUI products.

In summary, excellent work Mike, with a team of just 4 that supports and develops all the MySQL GUI products.

Here is an overview diagram of the MySQL Workbench.

Save the Falcon!

While many people will be blogging about the conference sessions and MySQL features and functionality, I thought it would be important to raise awareness of the creativity of developers, which is often lost during the product lifecycle.

At the Speakers Function last night I met Jim Starkey, the founder of Netfrastructure, Inc, a company recently acquired by MySQL. At the conference was the official launch of a new transactional storage engine to be included in the MySQL 5.1 release, codenamed “Falcon”.

Too often the flair and creativity of products, and also the initial history, is lost when it reaches the marketing department of an organisation. In this case, after talking with Jim, we decided it was important to ensure the name “Falcon” remains. Why should the name conform to something generic and, quite frankly, boring? So please join me.

Save The Falcon!

My MySQL Conference happenings

Everybody has been writing lately of their likes of sessions and events for the MySQL User Conference, time for my 2 cents worth, with a twist.

I’ll be leaving early tomorrow at 7am (Friday my time), that’s Thursday 2pm Conference time. A taxi, two trains, two planes, and some 21 hours later, I’ll be at San Jose. In time to get a car and head to Davis California, a small town, where I have lived previously. After some catchup, I’m off to Lake Tahoe, (all up about 3 hrs from San Jose), for some R & R, and a weekend of Skiing, starting at Heavenly. No snow in the last few days, but 4 feet in the past week, and a bumper season to date. (The added benefit is it’s much cooler, I’m not a warm weather person, and my current home in Brisbane Australia is just too warm for my liking)

A lazy Sunday afternoon drive back, and I’ll be ready for the MySQL User Conference starting on Monday. My presentation is nearer to the end on Thursday, so I’ll have plenty of time to soak in what’s on offer. There is too much on the MySQL front to mention – performance and new features including clustering highlights – but I’m also keen to hear Tim O’Reilly, as I’ve become more interested in Web 2.0, where it is and where it’s going. I see this philosophy of services rather than products, highly user driven in features, functionality and content, as in line with some of my principles and strong thoughts on Extreme Programming and Agile Development Methodologies.

I’ve also had positive comments and feedback from a number of MySQL colleagues over the past months, so it will be great to expand not only my intellect from the conference content, but also my circle of new friends in the MySQL community. (And places in the world to visit in the future.)

Why IT professionals get a bad name

Sometimes you just can’t find words to describe bad code, and if you are forced into maintenance it can be a minefield. I’m presently supporting an existing deployed web-based Java application, which I’ve had no involvement with previously, and, for lack of any compliments, it’s absolutely terrible.

How this code was ever written, and of course never reviewed (that’s another talk on software development), one can only shudder at. If it were an isolated case, then perhaps, but the code is bloated and riddled with unnecessary complexity. Basic 101 practices such as standard CSS, well-formed HTML pages (i.e. not with double <html> and <body> tags), and JavaScript functions that are not duplicated up the wazoo are things you can discover by spending 2 minutes looking at just the first screen, before you even look at the Java code.

Here is an example.

tempKeyStr = tempKeyStr.substring(0,2).concat(String.valueOf(Integer.parseInt(tempKeyStr.substring(2).trim())));

OK, I guess I should give you a hint of the type of values in tempKeyStr prior to this wonderful transformation. The string contains a 2-character alphanumeric prefix, then optionally a space, then a number (usually 2, 3 or 4 digits). An example may be 'XX 999' or 'AA99'.

Does that help you identify what this line does?

OK, well, to save the suspense, the purpose of the above line of code is to trim and squeeze any whitespace from the string. So something like the following would suffice.

tempKeyStr = tempKeyStr.replaceAll(" ","");

Of course refactoring would totally eliminate this line, placing the replaceAll call in a previous declaration.
Those observant Java gurus would also point out: what if this was written 5 years ago? replaceAll has only been available since 1.4. Yes, correct, but Pattern has always existed, replaceAll is just a convenience wrapper for it, and in any case the type conversion from String to Integer to int to String is unnecessary complexity.

On that note of refactoring, I was recently sent the following link: Refactoring to the Max. As the title suggests, sometimes people go too far.