Archive for the ‘Web Development’ Category

Poor programming practices

Friday, February 1st, 2013

When will it stop? These amateur programmers who simply cut and paste code really hurt the good programmers in the ecosystem trying to make a decent living. I was reviewing a developed (but incomplete) PHP/MySQL system using a common framework (which in itself is irrelevant for this post).

In one source file there were 12 repetitions of the following code:

   //permissions
    $this->security_model->setUserPermissions($id);
    if (!array_key_exists($id,$this->session->userdata['permissions']) OR
	!array_key_exists('id', $this->session->userdata['permissions'][$id]) OR
	!array_key_exists('scope', $this->session->userdata['permissions'][$id]['name'])){
      $this->session->set_flashdata('alert', 'You are not authorized to go there.');
      redirect($this->agent->referrer());
    }

It’s bad enough when code is repeated and not pulled into a simple refactored function. When it’s repeated 12 times in one file, and OMG over 100 times in the product, that is a recipe for bugs and high-maintenance code due to extremely poor coding practice.
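
For illustration only, here is a minimal sketch of how that block could be factored into a single helper method in a shared base controller (assuming a CodeIgniter-style framework, which the reviewed code resembles; the method name require_permission() is my own, not from the reviewed system):

//permissions check, now in exactly one place
protected function require_permission($id)
{
    // The body mirrors the repeated block verbatim, including its odd
    // check of 'scope' inside ['name'].
    $this->security_model->setUserPermissions($id);
    $permissions = $this->session->userdata['permissions'];
    if (!array_key_exists($id, $permissions) OR
        !array_key_exists('id', $permissions[$id]) OR
        !array_key_exists('scope', $permissions[$id]['name'])) {
      $this->session->set_flashdata('alert', 'You are not authorized to go there.');
      redirect($this->agent->referrer());
    }
}

Each of the 12 call sites then collapses to a single line: $this->require_permission($id);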

Another reason to avoid RDS

Wednesday, May 25th, 2011

My list of reasons for never using or recommending Amazon’s MySQL RDS service grows every time I experience problems on customer systems. This was an interesting and still unresolved issue.

ERROR 126 (HY000): Incorrect key file for table '/rdsdbdata/tmp/#sql_5b7_1.MYI'; try to repair it

You may notice this refers to a MyISAM table. The MySQL database is version 5.5, with all InnoDB tables, and is very small, 100MB in total size.
What is happening is that MySQL is generating a temporary table, and this table is being written to disk. I am unable to change the code to improve the query causing this disk I/O.

What I cannot understand, and have no ability to diagnose, is why this error occurs only sometimes, and generally when the database is under additional system load. With RDS you have no visibility of the server running the production database. While you have SQL access, an API for managing MySQL configuration options (and not all MySQL variables, I might add), and limited system statistics via a graphical interface, all information about the system performance, disk configuration etc. is hidden and not accessible. This is a frustrating limitation of using RDS.
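
With only SQL access, about the best you can do is watch the server status counters for temporary tables spilling to disk and correlate them with the periods of load. A minimal sketch in PHP (the endpoint and credentials are placeholders, and the use of PDO is simply my choice here):

<?php
// Sketch only: poll temporary table counters over the SQL connection,
// the only access RDS provides.
$pdo = new PDO('mysql:host=myinstance.rds.amazonaws.com;port=3306', 'user', 'password');
foreach (array('Created_tmp_tables', 'Created_tmp_disk_tables') as $counter) {
    $row = $pdo->query("SHOW GLOBAL STATUS LIKE '$counter'")->fetch(PDO::FETCH_NUM);
    printf("%s = %s\n", $row[0], $row[1]);
}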

NOTE: While I cannot recommend RDS, I am very happy with AWS EC2 services when correctly configured. For a cloud based MySQL solution I would definitely recommend greater control over your MySQL database using EC2 and EBS.

Basic scalability principles to avert downtime

Saturday, April 23rd, 2011

In the press in the last two days has been the reported outage of Amazon Web Services Elastic Compute Cloud (EC2) in just one North Virginia data center. This has affected many large websites including FourSquare, Hootsuite, Reddit and Quora. A detailed list can be found at ec2disabled.com.

For these popular websites, was this avoidable? Absolutely.

Basic scalability principles, if deployed in these systems’ architecture, would have averted the significant downtime regardless of the development stack. While I work primarily in MySQL, these principles are not new, nor are they complicated. They are fundamental concepts in scalability that apply to any technology, including the popular MongoDB being used by a number of affected sites.

Scalability 101 involves some simple basic rules. Here are just two that seem to have been ignored by many affected by this recent AWS EC2 outage.

  1. Never put all your eggs in one basket. If you rely on AWS completely, or you rely on just one availability zone, that is putting all your eggs in one basket.
  2. Always keep your important data close to home. When it comes to what is most critical to your business you need access to, and control of, your information. At 5am when the CEO asks how long the business will be unavailable and what is needed to resolve the problem, “We have no control over this and have no ETA” is not an acceptable answer.

With a successful implementation and appropriate data redundancy you may not have an environment immediately available, however you have access to your important information and the ability to create one quickly. Many large hosting companies can provide additional H/W almost on demand, especially if you have an initial minimal footprint. Indeed, using Amazon Web Services (AWS) as a means to avert a data center disaster is an ideal implementation of Infrastructure as a Service (IaaS). Even with this issue, organizations that had planned for this type of outage could have easily migrated to another, unaffected AWS availability zone.

Furthermore, a system architecture that supports various levels of data availability and scalability ensures you can handle many more types of unavailability without the significant system downtime recently seen. There are many different types of availability and unavailability; know what your definition of downtime is. Planning for disasters should be a primary focus of scalability, not an afterthought.

As an expert in performance and scalability I can help your organization in the design of a suitable architecture to support successful scalability and disaster recovery. This is not rocket science, however many organizations gamble without the expertise of a professional to ensure business viability.

Why is my database slow?

Tuesday, May 11th, 2010

Not part of my Don’t Assume series, but when a client states “Why is my database slow?”, you need to determine whether the database is indeed slow.

Some simple tools come to the rescue here; one is Firebug. If a web page takes 5 seconds to load, but the .htm file takes 400ms and 100+ assets are being downloaded from one base URL, then is the database actually slow? Tuning the database will only improve the 400ms portion of the 5,000ms download.
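
To separate the server-side portion from everything else, a trivial timer around page generation is often enough. A minimal sketch (the log format is arbitrary):

<?php
// Sketch: measure how long the server spends producing the HTML, so it can
// be compared with the full page load time reported by Firebug.
$start = microtime(true);

// ... existing page generation and database work ...

$elapsed_ms = (microtime(true) - $start) * 1000;
error_log(sprintf('page=%s render=%.1fms', $_SERVER['REQUEST_URI'], $elapsed_ms));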

There are some very simple tips here. MySQL is my domain expertise and I will not profess to improve the entire stack, however perception is everything to a user and you can often do a lot. Some simple points include:

  • Know about blocking assets in your <head> element, e.g. .js files.
  • Streamline .js, .css and images to what’s needed, e.g. don’t download a 100k image only to resize it to a thumbnail via style attributes.
  • Use CSS sprites. As with many small, efficient SQL statements, network overhead is your greatest expense.
  • Split images onto a different domain.
  • Split images across multiple domains (e.g. 3, needing only CNAME entries). Hint: learn about the protocol; browsers limit the number of parallel connections per hostname.
  • Cookieless domains for static assets
  • Lighter web container for static assets (e.g. nginx, lighttpd)
  • Know about caching, expires and etags
  • Stripping out http://www.domain.com from all your internal links (that one alone saved 12% of HTML page size for a client; see the sketch after this list). You may ask whether that is really a big deal; well, on a high volume site the sooner you can release the socket on your web server, the sooner you can start serving a different request.
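
As an illustration of that last point, here is a minimal sketch of stripping the absolute domain from internal links at output time, assuming the whole page is rendered through an output buffer (www.example.com is a placeholder domain):

<?php
// Sketch only: rewrite absolute internal links to root-relative ones
// before the page is sent.
function strip_internal_domain($html)
{
    return str_replace(
        array('href="http://www.example.com/', 'src="http://www.example.com/'),
        array('href="/', 'src="/'),
        $html
    );
}

ob_start('strip_internal_domain');
// ... existing page generation ...
ob_end_flush();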

Like tuning a database, some things work better than others, some require more testing than others, and consultants never tell you all the tricks.

References

As with everything in tuning, do your research and also determine what works in your environment and what doesn’t. Two excellent resources to start with are Steve Souders and Best Practices for Speeding Up Your Web Site by Yahoo.

Upcoming book – Expert PHP and MySQL

Wednesday, March 3rd, 2010

This month will see the release of the book Expert PHP and MySQL, of which I was a co-author. Initially this will be available for purchase in PDF format from the Wrox website and I am hopeful it will be available in print format for the MySQL Users Conference.

More than just your standard PHP and MySQL, there is detailed content on technologies including Memcached, Sphinx, Gearman, MySQL UDFs and PHP extensions. We will be posting more information at www.ExpertPhpandMySQL.com. You can download a PDF version of Chapter 1, Techniques Every Expert Programmer Needs to Know.

The book includes the following content:

  1. Techniques Every Expert Programmer Needs to Know
  2. Advanced PHP Constructs
  3. MySQL Drivers and Storage Engines
  4. Improved Performance through Caching
  5. Memcached MySQL
  6. Advanced MySQL
  7. Extending MySQL with User-defined Functions
  8. Writing PHP Extensions
  9. Full Text Search using SPHINX
  10. Multi-tasking in PHP and MySQL
  11. Rewrite Rules
  12. User Authentication with PHP and MySQL
  13. Understanding the INFORMATION_SCHEMA
  14. Security
  15. Service and Command Lines
  16. Optimization and Debugging

Problems compiling MySQL 5.4

Thursday, June 11th, 2009

It seems that in the year Sun has had for improving MySQL, even with an entirely new 5.4 branch, the development team could not fix the autoconf and compile dependency issues that have been in MySQL for all the years I’ve been compiling it. Drizzle has got it right, thanks to the great work of Monty Taylor.

I’m working on the Wafflegrid AWS EC2 AMIs for Matt Yonkovit, and while compiling 5.1 was straightforward under Ubuntu 8.10 Intrepid, compiling 5.4 was more complicated.

For MySQL 5.1 I needed only to do the following:

apt-get install -y build-essential
apt-get install libncurses5-dev
./configure
make
make install

For MySQL 5.4, I elected to use the BUILD scripts (based on Wafflegrid recommendations). That didn’t get far before I needed:

apt-get install -y automake libtool

You then have to let MySQL 5.4 compile for 10+ minutes to get an obscure error, and then consider what dependencies may be missing.
I don’t like to do a blanket apt-get of a long list of proposed packages unless I know they are actually needed.

The error was:

make[1]: Entering directory `/src/mysql-5.4.0-beta/sql'
make[1]: warning: -jN forced in submake: disabling jobserver mode.
/bin/bash ../ylwrap sql_yacc.yy y.tab.c sql_yacc.cc y.tab.h sql_yacc.h y.output sql_yacc.output -- -d --verbose
make -j 6 gen_lex_hash
make[2]: Entering directory `/src/mysql-5.4.0-beta/sql'
rm -f mini_client_errors.c
/bin/ln -s ../libmysql/errmsg.c mini_client_errors.c
make[2]: warning: -jN forced in submake: disabling jobserver mode.
rm -f pack.c
../ylwrap: line 111: -d: command not found
/bin/ln -s ../sql-common/pack.c pack.c
....
make[1]: Leaving directory `/src/mysql-5.4.0-beta/sql'
make: *** [all-recursive] Error 1

What a lovely error: ../ylwrap: line 111: -d: command not found

ylwrap is part of yacc, and by default in this instance it’s not even an installed package. I’ve compiled MySQL long enough to know that it requires yacc, and actually bison (installing the bison package resolves this particular error), but do you think it would hurt if configure told the user this?

It’s also been some time since I’ve compiled MySQL from source, focusing instead on Drizzle. I had forgotten just how many compile warnings MySQL throws. Granted, a warning is not an error, but you should not just ignore them when building a quality product.

Announcing Drizzle on EC2

Sunday, April 26th, 2009

I have published the very first sharable Drizzle Amazon Machine Image (AMI) for AWS EC2, based on the good feedback from my discussion at the Drizzle Developer Day on what options we should try.

This first version is a 32bit Developer instance, showcasing Drizzle and all necessary developer tools to build Drizzle from source.

What you will find on drizzle-ami/intrepid-dev32 – ami-b858bfd1

Ubuntu 8.10 Intrepid 32 bit base server installation:

  • build tools
  • drizzle dependencies
  • bzr 1.31.1

From the respective source trees the following software is available:

  • drizzle 2009.04.997
  • libdrizzle 0.0.2
  • gearman 0.0.4
  • memcached 1.2.8
  • libmemcached 0.28

Drizzle has been configured with necessary dependencies for PAM authentication, http_auth, libgearman and MD5 but these don’t seem to be available in the binary distribution.

I will be creating additional AMIs including 64-bit and LAMP-ready binary-only images.

The following example shows using drizzle on this AMI. Some further work is necessary for full automation, parameters and logging. I’ve raised a number of issues the Drizzle Developers are now hard at work on.

1. Starting Drizzle

ssh ubuntu@ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com
sudo /etc/init.d/drizzle-server.init start &

2. Testing Drizzle (the sakila database has been installed)

$ drizzle
Welcome to the Drizzle client..  Commands end with ; or \g.
Your Drizzle connection id is 4
Server version: 2009.04.997 Source distribution

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

drizzle> select version();
+-------------+
| version()   |
+-------------+
| 2009.04.997 |
+-------------+
1 row in set (0 sec)

drizzle> select count(*) from sakila.film;
+----------+
| count(*) |
+----------+
|     1000 |
+----------+
1 row in set (0 sec)

3. Compiling Drizzle

sudo su - drizzle
ls
deploy  drizzle  libdrizzle  sakila-drizzle
cd drizzle
./configure --help
Description of plugins:

   === HTTP Authentication Plugin ===
  Plugin Name:      auth_http
  Description:      HTTP based authentications
  Supports build:   static and dynamic

   === PAM Authenication Plugin ===
  Plugin Name:      auth_pam
  Description:      PAM based authenication.
  Supports build:   dynamic

   === compression UDFs ===
  Plugin Name:      compression
  Description:      UDF Plugin for compression
  Supports build:   static and dynamic
  Status:           mandatory

   === crc32 UDF ===
  Plugin Name:      crc32
  Description:      UDF Plugin for crc32
  Supports build:   static and dynamic
  Status:           mandatory

   === Error Message Plugin ===
  Plugin Name:      errmsg_stderr
  Description:      Errmsg Plugin that sends messages to stderr.
  Supports build:   dynamic

   === Daemon Example Plugin ===
  Plugin Name:      hello_world
  Description:      UDF Plugin for Hello World.
  Supports build:   dynamic

   === Gearman Logging Plugin ===
  Plugin Name:      logging_gearman
  Description:      Logging Plugin that logs to Gearman.
  Supports build:   dynamic

   === Query Logging Plugin ===
  Plugin Name:      logging_query
  Description:      Logging Plugin that logs all queries.
  Supports build:   static and dynamic
  Status:           mandatory

   === Syslog Logging Plugin ===
  Plugin Name:      logging_syslog
  Description:      Logging Plugin that writes to syslog.
  Supports build:   static and dynamic
  Status:           mandatory

   === MD5 UDF ===
  Plugin Name:      md5
  Description:      UDF Plugin for MD5
  Supports build:   static and dynamic

   === One Thread Per Connection Scheduler ===
  Plugin Name:      multi_thread
  Description:      plugin for multi_thread
  Supports build:   static
  Status:           mandatory

   === Old libdrizzle Protocol ===
  Plugin Name:      oldlibdrizzle
  Description:      plugin for oldlibdrizzle
  Supports build:   static
  Status:           mandatory

   === Pool of Threads Scheduler ===
  Plugin Name:      pool_of_threads
  Description:      plugin for pool_of_threads
  Supports build:   static
  Status:           mandatory

   === Default Signal Handler ===
  Plugin Name:      signal_handler
  Description:      plugin for signal_handler
  Supports build:   static
  Status:           mandatory

   === Single Thread Scheduler ===
  Plugin Name:      single_thread
  Description:      plugin for single_thread
  Supports build:   static
  Status:           mandatory

   === Archive Storage Engine ===
  Plugin Name:      archive
  Description:      Archive Storage Engine
  Supports build:   static
  Status:           mandatory

   === Blackhole Storage Engine ===
  Plugin Name:      blackhole
  Description:      Basic Write-only Read-never tables
  Supports build:   static and dynamic
  Configurations:   max, max-no-ndb

   === CSV Storage Engine ===
  Plugin Name:      csv
  Description:      Stores tables in text CSV format
  Supports build:   static
  Status:           mandatory

   === Memory Storage Engine ===
  Plugin Name:      heap
  Description:      Volatile memory based tables
  Supports build:   static
  Status:           mandatory

   === InnoDB Storage Engine ===
  Plugin Name:      innobase
  Description:      Transactional Tables using InnoDB
  Supports build:   static and dynamic
  Configurations:   max, max-no-ndb
  Status:           mandatory

   === MyISAM Storage Engine ===
  Plugin Name:      myisam
  Description:      Traditional non-transactional MySQL tables
  Supports build:   static
  Status:           mandatory

Report bugs to <http://bugs.launchpad.net/drizzle>.

Setting up MySQL on Amazon Web Services (AWS) Presentation

Wednesday, April 22nd, 2009

On Tuesday at the MySQL Camp 2009 in Santa Clara I presented Setting up MySQL on Amazon Web Services (AWS).

This presentation assumed you knew nothing about AWS and had no account. With Internet access via a browser and a valid credit card, you can have your own running web server on the Internet in under 10 minutes, just point and click.

We also stepped into more detail, with both online point-and-click and the supplied command-line tools, to demonstrate some more advanced usage.

Some Drupal observations

Monday, February 23rd, 2009

I had the opportunity to review a client’s production Drupal installation recently. This is a new site and traffic is just starting to pick up. Drupal is a popular LAMP stack open source CMS using the MySQL database.

Unfortunately I don’t always have the chance to focus on one product when consulting; sometimes the time available can be minutes to a few hours. Here are some observations from looking at Drupal.

Disk footprint

Presently, traffic and content volume is low, but expected to ramp up. I do however find 90% of the disk footprint in one table called ‘watchdog’:


+--------------+--------------+--------------+-------------+--------+
| table_schema | total_mb     | data_mb      | index_mb    | tables |
+--------------+--------------+--------------+-------------+--------+
| xxxxx        | 812.95555878 | 745.34520721 | 67.61035156 |    191 |
+--------------+--------------+--------------+-------------+--------+

+-------------------------------------------+--------+------------+------------+----------------+--------------+--------------+-------------+
| table_name                                | engine | row_format | table_rows | avg_row_length | total_mb     | data_mb      | index_mb    |
+-------------------------------------------+--------+------------+------------+----------------+--------------+--------------+-------------+
| watchdog                                  | MyISAM | Dynamic    |      63058 |            210 | 636.42242813 | 607.72516251 | 28.69726563 |
| cache_menu                                | MyISAM | Dynamic    |        145 |         124892 |  25.33553696 |  25.32577133 |  0.00976563 |
| search_index                              | MyISAM | Dynamic    |     472087 |             36 |  23.40134048 |  16.30759048 |  7.09375000 |
| comments                                  | MyISAM | Dynamic    |      98272 |            208 |  21.83272934 |  19.58272934 |  2.25000000 |

Investigating the content of the ‘watchdog’ table shows detailed logging. Drilling down just on the key ‘type’ records shows the following.

mysql> select message,count(*) from watchdog where type='page not found' group by message order by 2 desc limit 10;
+--------------------------------------+----------+
| message                              | count(*) |
+--------------------------------------+----------+
| content/images/loadingAnimation.gif  |    17198 |
| see/images/loadingAnimation.gif      |     6659 |
| images/loadingAnimation.gif          |     6068 |
| node/images/loadingAnimation.gif     |     2774 |
| favicon.ico                          |     1772 |
| sites/all/modules/coppa/coppa.js     |      564 |
| users/images/loadingAnimation.gif    |      365 |
| syndicate/google-analytics.com/ga.js |      295 |
| content/img_pos_funny_lowsrc.gif     |      230 |
| content/google-analytics.com/ga.js   |      208 |
+--------------------------------------+----------+
10 rows in set (2.42 sec)

Some 25% of rows are just reporting one missing file. Correcting this one file cuts down a pile of unnecessary logging.

Repeating Queries

Looking at just 1 random second of SQL logging shows 1200+ SELECT statements.
355 of them are SELECT changed FROM node queries:

$ grep would_you_rather drupal.1second.log
              7 Query       SELECT changed FROM node WHERE type='would_you_rather' AND STATUS=1 ORDER BY created DESC LIMIT 1
              5 Query       SELECT changed FROM node WHERE type='would_you_rather' AND STATUS=1 ORDER BY created DESC LIMIT 1
              3 Query       SELECT field_image_textarea_value AS value FROM content_type_would_you_rather WHERE vid = 24303 LIMIT 0, 1
              4 Query       SELECT changed FROM node WHERE type='would_you_rather' AND STATUS=1 ORDER BY created DESC LIMIT 1
              6 Query       SELECT changed FROM node WHERE type='would_you_rather' AND STATUS=1 ORDER BY created DESC LIMIT 1
             10 Query       SELECT changed FROM node WHERE type='would_you_rather' AND STATUS=1 ORDER BY created DESC LIMIT 1
              9 Query       SELECT changed FROM node WHERE type='would_you_rather' AND STATUS=1 ORDER BY created DESC LIMIT 1
              8 Query       SELECT changed FROM node WHERE type='would_you_rather' AND STATUS=1 ORDER BY created DESC LIMIT 1
              9 Query       SELECT field_image_textarea_value AS value FROM content_type_would_you_rather WHERE vid = 24303 LIMIT 0, 1

There is plenty of information regarding monitoring slow queries in MySQL, but I have also promoted that it’s not the slow queries that ultimately slow a system down, but the 1000s of repeating fast queries.

MySQL of course has the Query Cache to assist, but this is a coarse-grained solution, and in a high volume read/write environment it is meaningless.

There is a clear need for either application-level caching, or a database redesign to pull rather than poll this information; however, without a more in-depth review of Drupal I cannot make any judgment calls.
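
As an illustration only, here is a minimal application-level caching sketch for the repeating query above. The 60 second TTL, the key name, and the use of PDO and the Memcached extension are my assumptions for the example, not Drupal’s own API:

<?php
// Sketch: serve the repeating "latest changed" query from Memcached
// instead of hitting MySQL on every page view.
function latest_changed(PDO $pdo, Memcached $mc, $type)
{
    $key = 'node_changed_' . $type;
    $changed = $mc->get($key);
    if ($changed === false) {          // cache miss
        $stmt = $pdo->prepare(
            'SELECT changed FROM node WHERE type = ? AND status = 1 ' .
            'ORDER BY created DESC LIMIT 1');
        $stmt->execute(array($type));
        $changed = $stmt->fetchColumn();
        $mc->set($key, $changed, 60);  // arbitrary 60 second TTL
    }
    return $changed;
}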

Javascript Helpers

Monday, October 13th, 2008

Combined with my old favorites of Dynamic Drive, DHTML Goodies and Brain Jar, I’ve added the following to my list of Javascript sources.

Improving your web site compatibility with browsers

Friday, September 26th, 2008

Every website page uses two basic elements, HyperText Markup Language (HTML) and Cascading Style Sheets (CSS). Each of these has various standards: HTML has versions such as 3.2, 4.0, 4.01, and the newer XHTML 1.0, 1.1, 2.0, along with various flavors known as strict, transitional and frameset. CSS also has various versions including 1, 2 and 3.

Each browser renders your combined HTML and CSS differently. The look and feel can vary between Firefox, Safari, Chrome, Internet Explorer and the less common browsers. Indeed each version of a product also renders differently. With IE 8 just being released, its versions in common use are now 5.0, 5.5, 6.0, 7.0 and 8.0. This product alone has 5 versions that UI designers must test and verify.

To minimize presentation and rendering problems, adhering to the standards can only assist, and greatly benefits the majority of entrepreneurs, designers and developers who are not dedicated resources. There are two excellent online tools from the standards body, the World Wide Web Consortium (W3C), that can easily assist you: the HTML validator and the CSS validator.

You can also link directly to these sites, so it’s easier to validate your HTML and CSS directly from the relevant web page. For example, HTML Validation and CSS Validation.
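
A small footer snippet can send the page being viewed straight to both validators using their long-standing referer-based URLs (sketch only):

<?php
// Optional footer links that validate the current page.
echo '<a href="http://validator.w3.org/check?uri=referer">Validate HTML</a> ';
echo '<a href="http://jigsaw.w3.org/css-validator/check/referer">Validate CSS</a>';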

It’s not always possible to meet the standards, and when you are not the full-time developer of your site, it can be time consuming if you don’t check early and regularly.

Brand identity with undesirable domain names

Wednesday, September 24th, 2008

Choosing a domain name for your brand identity is the start. Protecting your domain name by registering for example .net, .org, and the many more extensions is one step in brand identity.

However, a recent very unpleasant experience in New York resulted in realizing that some companies also register undesirable domain names. I was one of many unhappy people, mainly tourists, as I was showing an Australian friend the sights of New York. We had chosen to use the City Sights NY bus line, but we were caught with some 100+ people in a clear “screw the paying customers” experience.

I was really annoyed that my friend, only in New York for 2 days, had to experience this and missed out on a night tour. I commented that I was going to register citysightsnysucks.com and share the full story of our experience, directing people to use Gray Line New York, which from observation was clearly providing the service we did not get.

To my surprise, the domain name was already taken. To my utter surprise, the owner of the domain is the same as citysightsny.com. Did they do this by choice, or did another unhappy person (at least in 2006) register it, only to perhaps be legally threatened into giving up the domain?

I would generally recommend this approach to brand identity, even though selected common misspellings and hyphenated versions, where applicable, can easily lead to a lot of domain names for your brand identity.

$ whois citysightsny.com

Whois Server Version 2.0

Domain names in the .com and .net domains can now be registered
with many different competing registrars. Go to http://www.internic.net
for detailed information.

   Domain Name: CITYSIGHTSNY.COM
   Registrar: INTERCOSMOS MEDIA GROUP, INC. D/B/A DIRECTNIC.COM
   Whois Server: whois.directnic.com
   Referral URL: http://www.directnic.com
   Name Server: NS0.DIRECTNIC.COM
   Name Server: NS1.DIRECTNIC.COM
   Status: clientDeleteProhibited
   Status: clientTransferProhibited
   Status: clientUpdateProhibited
   Updated Date: 31-dec-2006
   Creation Date: 28-nov-2004
   Expiration Date: 28-nov-2011

Registrant:
 CitySights New York LLC
 15 Second Ave
 Brooklyn, NY 11215
 US
 718-875-8200x103
Fax:718-875-7056

Domain Name: CITYSIGHTSNY.COM
$ whois citysightsnysucks.com

Whois Server Version 2.0

Domain names in the .com and .net domains can now be registered
with many different competing registrars. Go to http://www.internic.net
for detailed information.

   Domain Name: CITYSIGHTSNYSUCKS.COM
   Registrar: INTERCOSMOS MEDIA GROUP, INC. D/B/A DIRECTNIC.COM
   Whois Server: whois.directnic.com
   Referral URL: http://www.directnic.com
   Name Server: NS.PUSHONLINE.NET
   Name Server: NS2.PUSHONLINE.NET
   Status: clientDeleteProhibited
   Status: clientTransferProhibited
   Status: clientUpdateProhibited
   Updated Date: 26-jun-2007
   Creation Date: 11-aug-2006
   Expiration Date: 11-aug-

Registrant:
 CitySights New York LLC
 15 Second Ave
 Brooklyn, NY 11215
 US
 718-875-8200x103
Fax:718-875-7056

Domain Name: CITYSIGHTSNYSUCKS.COM

To www or not www

Monday, September 22nd, 2008

Domain names historically have been www.example.com, also written with the protocol prefix as http://www.example.com, but in reality www. is optional; only example.com is actually needed.

www. is technically a sub-domain and sub-domains incur a small penalty in search engine optimization.

There is no right or wrong. What is important is that you choose one, and the other needs to be a 301 Permanent Redirect to the one you have chosen.

You also need to know that creating a server alias in your web server configuration, for example in Apache or Tomcat, is not a permanent redirect; in fact it is technically duplicate content, with two identical web sites also incurring a search engine ranking penalty.
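
This is normally handled in the web server configuration, but as an illustration here is a minimal PHP sketch of forcing one canonical host with a true 301 (example.com is a placeholder, and choosing the non-www form is just for the example):

<?php
// Sketch: permanently redirect www.example.com to example.com.
if (strcasecmp($_SERVER['HTTP_HOST'], 'www.example.com') === 0) {
    header('HTTP/1.1 301 Moved Permanently');
    header('Location: http://example.com' . $_SERVER['REQUEST_URI']);
    exit;
}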

So what do the big players do? Here are a few.

Use www

  • www.google.com
  • www.facebook.com
  • www.cnn.com
  • www.yahoo.com
  • www.myspace.com
  • www.ebay.com
  • www.plurk.com
  • www.amazon.com
  • www.fotolog.com
  • www.linkedin.com

Do not use www

  • digg.com
  • wordpress.com
  • identi.ca

Show duplicate content

  • flickr.com
  • chi.mp
  • corkd.com
  • vimeo.com
  • garysguide.org
  • engineyard.com

Curiously youtube.com uses a 303 redirect, microsoft.com, stumbleupon.com and craigslist.org a 302 redirect.

How do you check? Use a CLI tool such as wget.

$ wget www.google.com
--2008-09-22 19:56:48--  http://www.google.com/
Resolving www.google.com... 72.14.205.99, 72.14.205.103, 72.14.205.104, ...
Connecting to www.google.com|72.14.205.99|:80... connected.
HTTP request sent, awaiting response... 200 OK

$ wget google.com
--2008-09-22 19:57:56--  http://google.com/
Resolving google.com... 64.233.167.99, 64.233.187.99, 72.14.207.99
Connecting to google.com|64.233.167.99|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://www.google.com/ [following]

$ wget www.facebook.com
--2008-09-22 20:07:59--  http://www.facebook.com/
Resolving www.facebook.com... 69.63.178.12
Connecting to www.facebook.com|69.63.178.12|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]

$ wget facebook.com
--2008-09-22 19:59:43--  http://facebook.com/
Resolving facebook.com... 69.63.176.140, 69.63.178.11
Connecting to facebook.com|69.63.176.140|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://www.facebook.com/ [following]

$ wget digg.com
--2008-09-22 20:10:47--  http://digg.com/
Resolving digg.com... 64.191.203.30
Connecting to digg.com|64.191.203.30|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 15322 (15K) [text/html]

$ wget www.digg.com
--2008-09-22 20:14:06--  http://www.digg.com/
Resolving www.digg.com... 64.191.203.30
Connecting to www.digg.com|64.191.203.30|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://digg.com/ [following]

$ wget twitter.com
--2008-09-22 20:26:18--  http://twitter.com/
Resolving twitter.com... 128.121.146.100
Connecting to twitter.com|128.121.146.100|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2655 (2.6K) [text/html]

$ wget www.twitter.com
--2008-09-22 20:26:41--  http://www.twitter.com/
Resolving www.twitter.com... 128.121.146.100
Connecting to www.twitter.com|128.121.146.100|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://twitter.com/ [following]

Professionally, I prefer shorter and simpler without www.


An interesting approach to free hosting

Wednesday, August 27th, 2008

I came across the OStatic free hosting service that provides Solaris + Glassfish (Java container) + MySQL.

They offer: “Now you can get free Web hosting on Cloud Computing environment free of charge for up to 12 months.”

The catch: “accumulate 400 Points for participating on the site”.

A rather novel approach: you get 100 points for registering, but then only 5-15 points per task. That equates to at least 30 or so tasks you need to perform, such as answering a question or reviewing somebody’s application. That seems like a lot of work.

In this case, free definitely has a cost and time factor to consider.

Project Darkstar

Thursday, August 14th, 2008

It may sound like either an astronomical research project or a Star Wars spin-off, but Project Darkstar is an open source infrastructure from Sun Microsystems that aims to “simplify the development and operation of massively scalable online games, virtual worlds, and social networking applications.”

The advertising sounds promising, like many sites. The emphasis seems to be on gaming throughout the material; it is interesting that they threw in the term “social networking applications” specifically in the opening descriptions.

I believe it is worthy of investigation, if only to see how they solve some classic problems. So, Learn some more, Start your rockets and Participate.

Extending application data to the cloud

Friday, August 1st, 2008

I was one of the invited panel speakers at A Panel on Cloud Computing this week in New York. As one of 2 non-vendor presenters, it was a great experience to be invited and be involved with the vendors.

While I never got to use my slides (available here), I did get to present certain content, and indeed questions and discussions on the night covered other points of my content.

Cloud computing is here; it’s early days and new players will continue to emerge. For example, from the panel there was AppNexus, reviewed favorably at InfoWorld in comparison with EC2 and Google App Engine; 10gen, an open source stack solution; and Kaavo, which from an initial 60 seconds of playing provides a management service on top of AWS similar to what ElasticFox provides. I need to investigate further how far the feature set extends and whether it would compete with others like RightScale, for example.

The greatest mystery came from Hank Williams and his stealth-mode Kloudshare. He did elaborate more on where they aim to provide services. A new term discussed was “Tools as a Service”, akin to moving, metaphorically, from writing in Assembly language to the advanced frameworks of today’s generation of languages such as Java and Ruby.

Thanks to Murat Aktihanoglu of Unype who chaired the event.

Installing Buildbot

Monday, July 28th, 2008

BuildBot is a system to automate the compile/test cycle required by most software projects to validate code changes.

Here is my environment.

$ uname -a
Linux app.example.com 2.6.18-53.el5 #1 SMP Mon Nov 12 02:14:55 EST 2007 x86_64 x86_64 x86_64 GNU/Linux
$ python
Python 2.4.3 (#1, May 24 2008, 13:57:05)

Here is what I did to get it installed successfully.

CentOS

$ yum install python-devel
$ yum install zope

Ubuntu

$ apt-get install python-dev
$ apt-get install python-zopeinterface
$ cd /tmp
# installation of Twisted
$ wget http://tmrc.mit.edu/mirror/twisted/Twisted/8.1/Twisted-8.1.0.tar.bz2
$ bunzip2 Twisted-8.1.0.tar.bz2
$ tar xvf Twisted-8.1.0.tar
$ cd Twisted-8.1.0
$ sudo python setup.py install
# installation of BuildBot
$ cd /tmp
$ wget http://downloads.sourceforge.net/buildbot/buildbot-0.7.8.tar.gz
$ tar xvfz buildbot-0.7.8.tar.gz
$ cd buildbot-0.7.8
$ sudo python setup.py install

And a confirmation.
$ buildbot --version
Buildbot version: 0.7.8
Twisted version: 8.1.0

You will notice a few dependencies. I found these out from the following errors.

Error requiring python-devel

$ python setup.py install
....
gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC -I/usr/include/python2.4 -c conftest.c -o conftest.o
building 'twisted.runner.portmap' extension
creating build/temp.linux-x86_64-2.4
creating build/temp.linux-x86_64-2.4/twisted
creating build/temp.linux-x86_64-2.4/twisted/runner
gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC -I/usr/include/python2.4 -c twisted/runner/portmap.c -o build/temp.linux-x86_64-2.4/twisted/runner/portmap.o
twisted/runner/portmap.c:10:20: error: Python.h: No such file or directory
twisted/runner/portmap.c:14: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token
twisted/runner/portmap.c:31: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token
twisted/runner/portmap.c:45: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘PortmapMethods’

Error requiring zope to be installed

$ buildbot start /home/buildbot/master/
Traceback (most recent call last):
  File "/usr/bin/buildbot", line 4, in ?
    runner.run()
  File "/usr/lib/python2.4/site-packages/buildbot/scripts/runner.py", line 939, in run
    start(so)
  File "/usr/lib/python2.4/site-packages/buildbot/scripts/startup.py", line 85, in start
    rc = Follower().follow()
  File "/usr/lib/python2.4/site-packages/buildbot/scripts/startup.py", line 6, in follow
    from twisted.internet import reactor
  File "/usr/lib64/python2.4/site-packages/twisted/internet/reactor.py", line 11, in ?
    from twisted.internet import selectreactor
  File "/usr/lib64/python2.4/site-packages/twisted/internet/selectreactor.py", line 17, in ?
    from zope.interface import implements
ImportError: No module named zope.interface

Installation was the easy part, configuration a little more complex.

Your data and the cloud

Saturday, July 19th, 2008

I will be speaking on July 29th in New York at an Entrepreneurs Forum, A Free Panel on Cloud Computing. With a number of experts including Hank Williams of KloudShare, Mike Nolet of AppNexus, and Hans Zaunere of New York PHP fame, it should be a great event.

The focus of my presentation will be on “Extending existing applications to leverage the cloud” where I will be discussing both the advantages of the cloud, and the complexities and issues that you will encounter such as data management, data consistency, loss of control, security and latency for example.

Using traditional MySQL-based applications, I’ll be providing an approach that can lead to your application gaining the greater power of cloud computing.



Getting Started with Simple DB

Monday, July 14th, 2008

Continuing my investigation of alternative data management options with cloud computing, I’m now evaluating Amazon Simple DB. Still in restricted beta, it helps to have a friend on the inside.

Working through the Getting Started Guide (API Version 2007-11-07) was OK, though annoying in parts. Here are some issues I found. I was working with Java as the programming language.

  • The docs enable you to view the language syntax in Java, C#, Perl, PHP, VB.NET or ScratchPad. You can also restrict the view to a specific language, a rather cool feature. One observation is that there is no Python, which is rather ironic as my first investigation was Google App Engine (GAE), where the only language is Python. Something I had to learn first.
  • Preparing the Samples asks you to download the Amazon SimpleDB Sample code, but this is not actually a link to the sample code, rather an index to Community Code. I used the Java Library for Amazon SimpleDB, which wasn’t even on the first page of results.
  • The supplied docs for specifying the Classpath were rather wrong; it helps to simply find all .jar files and include these. Mine looks like:
      #!/bin/sh
      #  http://docs.amazonwebservices.com/AmazonSimpleDB/2007-11-07/GettingStartedGuide/?ref=get-started
      
      SDB_HOME="/put/directory/to/unzip/here"
      export CLASSPATH=$CLASSPATH:\
      $SDB_HOME/src/com/amazonaws/sdb/samples/\
      $SDB_HOME/lib/amazon-simpledb-2007-11-07-java-library.jar:\
      $SDB_HOME/third-party/log4j-1.2.14/log4j-1.2.14.jar:\
      $SDB_HOME/third-party/commons-codec-1.3/commons-codec-1.3.jar:\
      $SDB_HOME/third-party/commons-logging-1.1/commons-logging-1.1.jar:\
      $SDB_HOME/third-party/jaxb-ri-2.1/jaxb-xjc.jar:\
      $SDB_HOME/third-party/jaxb-ri-2.1/activation.jar:\
      $SDB_HOME/third-party/jaxb-ri-2.1/jaxb-impl.jar:\
      $SDB_HOME/third-party/jaxb-ri-2.1/jaxb-api.jar:\
      $SDB_HOME/third-party/jaxb-ri-2.1/jsr173_1.0_api.jar:\
      $SDB_HOME/third-party/commons-httpclient-3.0.1/commons-httpclient-3.0.1.jar
      
  • All the examples in the docs then refer to making changes such as “invokeCreateDomain(service, action); line and add the following lines after // @TODO: set action parameters here:”. The problem is the samples don’t have an action variable, but rather a variable called request. The comment in the code “// @TODO: set request parameters here” is at least accurate.
  • The docs contain a lot of Java syntax that would not, for example, compile correctly. There are plenty of occurrences of a missing semicolon ‘;’.
  • Each example defines
    String accessKeyId = "";
    String secretAccessKey = "";
    OK for the first example, but as soon as I moved to the second, I refactored these into an interface called Constants.
  • In all the examples, they never provide any sample output; this would help just to confirm things. At the Java Library for Amazon SimpleDB download link there is an example output, but it’s outdated and does not include the new data attribute called BoxUsage. My output is:
      CreateDomain Action Response
      =============================================================================
      
          CreateDomainResponse
              ResponseMetadata
                  RequestId
                      f04df8eb-71fa-4d4e-9bd5-cc98e853a2e4
                  BoxUsage
                      0.0055590278
      
  • And now some specifics. In a relational database such as MySQL, you have Instance/Schema/Table/Column. Within SimpleDB it would appear that you need a separate AWS account for Instance management; that’s probably a good thing as it will enable tracking of costs. There appears to be no concept of a Schema. Data is stored in Domains, the equivalent of a Table. Within each Domain, you specify Attributes, a correlation with Columns. One key difference is the ability to define a set of Attributes with the same identity (much like a list, which is supported via Python/GAE). For any row of data, you must specify an itemName, this being equivalent to a Primary Key. These names take me back to the old days (20 years ago) of Logical Data Models that used entities, attributes and relationships.
  • The term Replace is used when updating data for a given row.
  • When retrieving data, you first return a list of itemNames, then you can retrieve the Attributes for each given item.
  • You can perform a simple where qualification using a Query Expression, including against multiple Attributes via intersection syntax.
  • An observation of significant concern is the lack of security around any type of operation. The Getting Started guide ends with deleting the Domain. Is there no means to define permissions per type of user, such as an Application User, and a Database Administrator for managing the objects?

Well, it took me longer to write this post than to run through the example, but at least on a lazy Sunday afternoon a first look at SimpleDB was quite simple.

I did also run into an error initially. I first started just via CLI under Linux (CentOS 5), but switched back to installing Eclipse on Mac OS/X for better error management, and of course this error didn’t occur.

Generating Word Clouds

Saturday, June 21st, 2008

I came across a wicked cool tool at wordle.net. I wanted to create a word cloud for the back of a business card, and now I know how.

Setting up on EC2

Friday, June 20th, 2008

Thanks to my friend Dustin and his EC2 demo using the Elasticfox Firefox Extension for Amazon EC2, I got an EC2 image set up. With other references (Link 1, Link 2, Link 3) I was also able to create my own AMI.

Some notes specific for my configuration.

Pre-config ElasticFox key for launching directly from ElasticFox SSH connections.

mkdir ~/ec2-keys
mv ~/Downloads/elasticfox.pem ~/ec2-keys/id_elasticfox
chmod 600 ~/ec2-keys/id_elasticfox
chmod 700 ~/ec2-keys/
ssh -i /Users/rbradfor/ec2-keys/id_elasticfox root@ec2-99-999-999-999.compute-1.amazonaws.com

Installed Software.

apt-get update
apt-get -y autoremove
apt-get -y install apache2
apt-get -y install mysql-server
# Prompts for password (very annoying)
apt-get -y install php5
apache2ctl graceful
echo "Hello World" > /var/www/index.html
echo "< ? phpinfo() ?>" > /var/www/phpinfo.php

Configuration to save AMI.

scp -i ~/ec2-keys/id_elasticfox ~/ec2-keys/id_elasticfox pk-CHK7DP4475BWUKIUF4WFDIW3VMYDYOHQ.pem cert-CHK7DP4475BWUKIUF4WFDIW3VMYDYOHQ.pem root@ec2-99-999-999-999.compute-1.amazonaws.com:/mnt
ec2-bundle-vol -d /mnt -c cert-CHK7DP4475BWUKIUF4WFDIW3VMYDYOHQ.pem -k pk-CHK7DP4475BWUKIUF4WFDIW3VMYDYOHQ.pem -u AccountNumber -r i386 -p ubuntu804_lamp
ec2-upload-bundle -b rbradford_804_lamp_ami -m /mnt/ubuntu804_lamp.manifest.xml -a AccessID -s SecretKey

Working with Google App Engine

Tuesday, June 3rd, 2008

Yesterday I took a more serious look at Google App Engine; I got a developer account some weeks ago.

After going through the getting started demo some time ago, I chose an idea for a FaceBook application and started in true eXtreme Programming (XP) style (i.e. what’s the bare minimum required for a first iteration). I taught myself some Python and within just a few minutes had some working data being randomly generated, totally within the development SDK environment on my MacBook. I was not able to deploy initially via the big blue deploy button; the catch is you have to register the application manually online.

Then it all worked, and hey presto, I’ve got my application up at the provided domain hosting at appspot.com.

Having come from a truly relational environment, most notably MySQL in recent years, I found the Datastore API different in a number of ways.

  • There is no means of Sequences/Auto Increment. There is an internal Unique Key, but it’s a String, not an integer, so I was not able to re-use it.
  • The ListProperty enables the use of Lists in Python (like Arrays) to be easily stored.
  • The ReferenceProperty is used as a foreign key relationship, and can then provide further references within an object hierarchy.
  • I really missed an interactive interface. You have no ability to look at your data; specifically, I wanted to inspect some data, then I wanted to delete some data, but I had to do all this via code.

Having developed a skeleton FaceBook application before in PHP, I figured a Python version would not be that much more work, but here is where I got stumped. The information at Hosting a Facebook Application on Google AppEngine, leveraging the PyFacebook project, didn’t enable me to integrate Google App Engine with FaceBook just yet.

This had me thinking I needed to resort to a standalone, simple Python Facebook application to confirm the PyFacebook usage. Now my problems started. Under Mac it’s a lot more complex to install and configure Python/Django etc. than under Linux. I tried to do it on my dedicated server, but drat, Python is at 2.3.4, and it seems 2.5.x is needed.

Still, it was a valuable exercise. I dropped the FaceBook goal and just worked on more Google App Engine stuff. It’s still early days, but it was productive to try out this new technology.

What I need to work on now is how to hold state within the Python infrastructure so I can manage a user login, and store and retrieve user data for my sample app.

Trying out Google App Engine

Saturday, April 12th, 2008

I got my registration for Google App Engine this morning after being Waitlisted previously.

Between flying for about 15 hrs tomorrow and then the 2008 MySQL Conference & Expo where I’m presenting and running an Exhibitors booth, fat chance I’ll get to look into this much over the next week.

Pity!

Share/Add This Buttons on sites

Saturday, April 12th, 2008

I’ve noticed a change of buttons lately on sites where you can bookmark/share the relevant information. So I’ve done a cross sample of A Computer Site – www.dell.com, The MySQL Acquirer – www.sun.com, A News site – www.cnn.com and a technology information site – www.techcrunch.com

What got me to go back and research is I’d never seen a FaceBook icon before, or perhaps I’d never paid enough attention.

In the end, most sites now wash out to a site called Add This.

NY PHP – Sun & MySQL: A New Hope

Wednesday, March 26th, 2008

Tonight’s New York PHP community meeting was a talk by Philip Antoniades, the MySQL Systems Engineering Manager.

With an interesting topic opener, “A New Hope”, I could not resist hearing Philip’s official MySQL presentation.

Some small points I took away from the presentation.

  • Sun is committed to Postgres with Josh Berkus and a team of 20 people.
  • Solaris.next is the next version of Solaris; I thought that was a cool internal name, albeit an obvious one.
  • A marketing slide of the highest traffic websites listed Meebo, yousendit, alexaholic, techcrunch, feedburner, istockphoto and vimeo, as reported by Pingdom. Not sure where they get their data, but Google, Yahoo, FaceBook, Wikipedia, MySpace and Fotolog are sites I think of as high traffic. Indeed, 3 of these listed sites I’ve never heard of.
  • MySQL 5.1 is expected GA in late Q2 (no year was mentioned).
  • Falcon in 6.0 was listed as the “Next Generation” transaction Storage Engine, an interesting term I’d not heard of before.
  • Sun provides Hosted Database Services (i.e. the cloud) via network.com.
  • MySQL has had an influx of Sun Engineers (60-80).
  • MySQL is being benchmarked more on Sun H/W.

This talk did remind me it’s time to download OpenSolaris. The interesting comment that, out of some 1,200 Sun engineers, over 1,000 were MacBook users gives me great confidence it will work just fine on my MacBook.

Out of other discussions there was talk of ZFS, so it will be interesting to see what backup opportunities may exist for MySQL without a true online backup solution.
There was also discussion of Sun’s GUI tool NetBeans, and this casts light on how it will sit with the MySQL GUI team and MySQL Workbench.

Interesting times, this new hope.

Correctly handling redirects

Friday, February 22nd, 2008

I’m in the process of consolidating all my work scattered across the WWW to www.ronaldbradford.com

As part of the process I was originally redirecting this to my present blog, but I was not doing it the correct way. You should follow Use standard redirects: don’t break the back button! from the W3C.

For now, I’ve reverted to a splash page and physical links until I can complete the new site.

The iPhone Safari Browser

Sunday, July 15th, 2007

One of my photo websites, www.ronaldbradford.com, doesn’t work properly on the iPhone under Safari, but it works under Safari on Mac OS X. The problem is, there are no debugging options and I see no way to work out why DOM-compliant Javascript doesn’t work!

Got the next great web thing!

Sunday, May 27th, 2007

I joined a new meetup group, New York Dot Com Hatchery, on advice from my friend Marc. Now if only my schedule keeps me in NY for the event; I’ll be very much looking forward to going.

I think this statement on the www.hatchedby.us website sums up the opportunity for expert advice.

The payoff – Your friends won’t tell you the truth, we will. And we can make your billion-dollar dreams come true. Your odds for success? Better than buying a lottery ticket.

What is the maximum number of colons ‘:’ that may appear in a valid URL?

Thursday, March 1st, 2007

In idle conversation I was asked by MM.

Question: What is the maximum number of colons ‘:’ that may appear in a valid URL?

* If you said zero, then you are a victim of browsers, and you have never used anything but a browser.

* If you said one, then you’re a novice.

* If you said two, then you have probably seen http://host:port at some time.

* If you said three, then you would be correct, the elite.

http://user:pass@host:port/location

For the record my initial answer was 2.

NY Tech Meetup – Idea Virus

Thursday, December 7th, 2006

One more thing that came from the NY Tech Meetup last night was the Idea Virus. It was handed out on a piece of paper. Here is what it said.

Idea Virus

  1. Think of a novel or awesome idea
  2. Write it on the back of your business card
  3. Spread the viral idea to somebody else
  4. Duplicate and re-propagate ideas that you like

When you go home, leave your accumulated ideas in the Halloween bucket. Check out idea-virus.org to see the results.

So I go to the site, and all I get is a “Not quite ready yet…” message page. They should have at least put up what I just typed out, and maybe some more “Coming Soon”, or a submission email or something. 15 minutes of work, rather than 2 minutes of work, would not leave people like me going “Well, why bother!”