Database Modelling within an XP Methodology

In an eXtreme Programming (XP) Agile approach to software development, the absence of adequate database design, or scant regard for it on the assumption that a framework and persistence infrastructure will take care of it, can be a disaster in a larger enterprise solution. In essence it’s a scaling effect. In a smaller system, with fewer users, less functionality and a lower volume of data, inefficiencies in database design don’t show because they are masked by acceptable performance. But as the system scales up, or when you are designing a large enterprise system, the effect multiplies quickly. Of course, solid XP practices make changes easier to make and integrate, but the amount and complexity of those changes may be significant.

A more pragmatic approach is necessary in Database Modelling and Design, especially in a larger enterprise solution when using XP or an Agile approach. Assuming the choice of Relational Database(s) has been made, greater care is necessary, and advance preparation and planning are required. Purists could argue YAGNI, but ultimately the customer will be distraught if the system is perfect in functionality and user interface but can’t handle the performance of a production load of users, or gradual growth in users, even though all is well in testing, demonstrations and customer training. The other catch is the need for additional disk space to be added monthly due to unknown requirements.

Additional considerations such as legacy data migration, database sizing, database growth, performance requirements, number of users etc. can’t conform to a traditional XP approach. These tasks require varying lead times (for example, the purchase and configuration of hardware and software), so they need to run alongside and augment the XP approach.

XP is not for every project, and in the instances I’ve been involved with, certain practices had to be adjusted or tailored for the specific project due to the environment, the customer and, usually, management. I’m not stating that an XP approach can’t apply to large-scale enterprise Database Modelling, rather that some adjustments, particularly within the Planning Process, are needed to arrive at a more balanced solution.

The database is the foundation. When discussing this with friends I draw an analogy to building a house. If you don’t get the core foundation right, the slab, the essential primary fittings of plumbing, power etc., and a floor plan covering the key, important and known things, then while building your house you will forever be spending additional time and resources propping it up, taking time, money and energy away from the significant part of building the house so it can ultimately be used. (Thought: I wonder how you would build a house XP style. What’s the most important part for the customer? Something to ponder one day.)

Are foundations perfect? No. Do they change and adapt? Yes, they do. However the cost of changing them is significantly higher, and the investment in getting them right the first time is invaluable.

I can’t count the number of times I’ve come onto an existing project and the Database Designer, for lack of a better word, is a novice. It makes me shudder. It’s an expertise I’ve specialised in, though I’ve had to broaden my skill set, as this task is not needed throughout a traditional SDLC project of 12-24 months. It really saddens me when simple 101 mistakes are made, the downstream impact is significant, and management don’t know or realise the impact. Even worse is when I tell them what’s needed for a correction, they look at the impact, and the decision is that this fixed, known cost to correct is worse than any projected, unknown downstream costs of maintenance and future development.

Ultimately I’ve got my way on projects more easily when I’ve also focused on Application Performance Tuning, where I look solely at the application’s needs, not just the hard-core DBA or System Administration tuning. A DBA doesn’t really care about the application and end-user impact; they care about the figures in the database. In this situation, low-level structural changes and their associated costs are weighed against the application performance ultimately necessary for the end user. I remember one project that took the development/user/testing/release team 3 months of work (probably ~200 man-weeks) to implement and deploy a key structural change that I had identified and proposed as part of a longer performance analysis period for the system. Of course, when you had 6 full-time DBAs alone, 33 remote distributed systems and thousands of users, the resulting impact and the future improvements possible were worth the investment. Avoiding a hardware upgrade alone was worth it. This project was also over 10 years ago and a lot of techniques have changed since then. Anyway, back to the point of the discussion.

Let me give you an example of a situation where you use a traditional XP approach to software development, and how to weave in a more structured database design and modelling approach.

User Stories

As part of the gathering of initial User Stories from the customer, the Database Designer should be reviewing these and beginning to build a high-level Logical Data Model. The Database Designer is not needed in the initial interactions with the Customer; however, early reviews may clearly indicate gaps in the stories, based on historical experience, that can then be fed back to the customer for consideration. Initially the model should just be on paper or a whiteboard, as at this stage it only needs to support a high-level comparison with the User Stories and accommodate their fluid, changing nature. However it should immediately highlight known key entities, key relationships between entities, key integrations with external systems, and areas that need early input of data volume estimates, perhaps by comparison with existing legacy systems.

For example, a rates billing system I was involved in had ~400,000 clients billed quarterly (4 times a year), and each bill had on average 10 line items (just looking at a small sample and getting a figure from the legacy system). Now it doesn’t take much to consider the size of the tables which would record billing history (400,000 x 4 x 10, that’s 16 million rows a year), or GL reconciliation over 5 years (400,000 x 4 x 10 x 2 x 5, that’s 160 million rows). Disk storage alone can be guesstimated once a number of key entities are identified, even without any hint of the average row length or indexes. Of course it could be out by a factor of 2, 5 or 10 times at this early stage, but not by a factor of 100.
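To make that back-of-the-envelope arithmetic concrete, here is a minimal sizing sketch in Python. The client, cycle and line-item figures come from the example above; the 200-byte average row length is purely an illustrative assumption, not a measurement from the actual system.

```python
# Rough sizing sketch for the billing example.
clients = 400_000
cycles_per_year = 4          # billed quarterly
line_items_per_bill = 10     # average taken from a small legacy-system sample

billing_rows_per_year = clients * cycles_per_year * line_items_per_bill
print(f"Billing history rows per year: {billing_rows_per_year:,}")   # 16,000,000

# GL reconciliation keeps two entries (debit/credit) per line item for 5 years.
gl_rows = billing_rows_per_year * 2 * 5
print(f"GL reconciliation rows over 5 years: {gl_rows:,}")           # 160,000,000

avg_row_bytes = 200  # assumed average row length, purely illustrative
print(f"Rough GL table size: {gl_rows * avg_row_bytes / 1024**3:.0f} GB")
```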

In a situation like this with any entity, even at the first mud map, I would be recording a number of indicators when possible. These would be:

  • Initial number of rows
  • Annual growth in rows
  • Projected rows after ‘x’ years

In addition I’d also roughly flag each entity for performance considerations, by access profile and by volume of transactions. This helps identify key performance considerations which may impact the Database Design in areas such as indexes (i.e. disk space) and schema optimisations after normalisation. The two broad flags are:

  • OLTP
  • Batch

For example, the billing process that creates quarterly bills and inserts millions of rows is a Batch process with a frequency of once every three months, as opposed to creating new accounts, which is OLTP and happens at a fairly regular rate every day, throughout the day.
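As a minimal sketch of how these early indicators might be recorded per entity. The structure and figures below are illustrative assumptions echoing the billing example, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class EntityEstimate:
    name: str
    initial_rows: int          # initial number of rows
    annual_growth_rows: int    # annual growth in rows
    access_profile: str        # "OLTP" or "Batch"
    projected_years: int = 5

    def projected_rows(self) -> int:
        """Projected rows after 'x' years."""
        return self.initial_rows + self.annual_growth_rows * self.projected_years

# Hypothetical mud-map entries for the billing example.
mud_map = [
    EntityEstimate("account", 400_000, 20_000, "OLTP"),
    EntityEstimate("bill_line_item", 0, 16_000_000, "Batch"),
]

for e in mud_map:
    print(f"{e.name}: ~{e.projected_rows():,} rows after "
          f"{e.projected_years} years ({e.access_profile})")
```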

Now, in the mud map these figures don’t have to be accurate. In fact, in the billing system example, if a table didn’t have 10,000 rows it didn’t rate a mention, and 10,000 to < 400,000 was considered small. The threshold here is the volume of the most significant entity, simply because a number of tables will be a factor larger based on this key figure, and this will swamp any insignificant tables. Again, this is week 1 of a potentially 52-week project; it can afford to be vague in areas.

Spike

How do you term this in an XP environment? Call it a Spike. The purpose of a Spike is to explore options and evaluate risk. While in general the practice would be to throw away the results, in this case these Spikes should be kept and built on progressively.

Release Planning

As part of the Customer’s User Stories being estimated and prioritised within Release Planning, more refined Logical Modelling and even Physical Modelling are necessary. Of course, until the customer refines priorities based on estimates, the actual order of implementation can’t be confirmed.

It is possible, however, that a number of stories within an iteration do not relate to each other within the Database Model. How do you address this? Even where the actual attributes of these logical entities are not yet available, a physical model representation of the necessary tables and relationships can be built to ensure the functionality can be used in practice.

Should the need for missing relationships within a Database Model impact the priorities of User Stories? No. Estimates should accurately reflect whether additional work is needed for any given User Story, hence the Database Designer is necessary in the estimation process.

Iteration Planning

As part of Iteration Planning it is key that the physical Database Requirements are in place to enable Writing Unit Tests and Writing Code for each Development Story in the Iteration. This should occur at the start of the Iteration, but not before.

Daily Stand Up Meeting

The Daily Stand Up Meeting is the best opportunity to advise of structural impacts, or for developers to request more involvement by the Database Designer in their specific task.

No system is perfect, and there will be times when the Database Model does not reflect a coder’s requirements for a task. At this point the developer is responsible for adjusting their own instance of the Database Model to complete the coding task. The issue is: does the coder have the ability to modify the database schema when checking in? No. Responsibility for the Database Model lies with the Database Designer, a difference from the normal coding practice where any coder can modify any code. The next section describes the reason in detail. At this point there is an inconsistency within the repository, and any automated testing for a build will fail. And this is what should happen. From the developer’s perspective their instance of the Code Ownership is correct, but for the entire iteration it is not.

One could argue this is not valid; however, the cycle of Test, Code, Refactor, Check In, while applying to the individual task, should also apply to the Database Design, though at an iteration level rather than a task level. Having your continuous builds breaking will certainly raise a lot of red flags, and I’m sure this approach will ensure that the tests broken as a result are corrected by the schema definition (the Code), there is some refactoring if, for example, the developer’s change didn’t serve the best interests of the bigger picture, the schema is checked in, and the continuous build bar is green again. Yippee. The bar is green, the code is clean, we can all go home now.
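As a sketch of the kind of automated check that turns such a schema inconsistency into a failing build. It assumes a PostgreSQL database, the psycopg2 driver, a hypothetical CI database and a hypothetical bill_line_item table, none of which are prescribed by the approach above:

```python
import psycopg2  # assumed driver; any DB-API driver would do

# The expected columns represent the Database Designer's checked-in model.
EXPECTED_COLUMNS = {
    "bill_line_item": {"bill_id", "line_no", "amount", "gl_code"},
}

def fetch_columns(conn, table: str) -> set:
    """Return the column names the database actually has for a table."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT column_name FROM information_schema.columns "
            "WHERE table_name = %s",
            (table,),
        )
        return {row[0] for row in cur.fetchall()}

def test_schema_matches_model():
    conn = psycopg2.connect("dbname=billing_ci")  # hypothetical CI database
    try:
        for table, expected in EXPECTED_COLUMNS.items():
            missing = expected - fetch_columns(conn, table)
            # A developer's local schema change that has not yet been
            # reconciled by the Database Designer shows up as a failing build.
            assert not missing, f"{table} is missing columns: {missing}"
    finally:
        conn.close()
```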

The Database Designer

The Database Designer should be a dedicated resource in the development team for this task. For example, in a team of 6-10 developers you would have 1 or 2 people; even in a team of 2 you would have 1 person responsible. Unlike coding, where anyone on the team can pair up and work on tasks within the Iteration, the Database Designer team is responsible for Database Design, just as the Development Team is responsible for Coding. There are a few reasons for this:

  • They are ultimately responsible for the structure, the foundation, and this is the bigger picture that includes visibility and scope of further iterations and releases.
  • The skills are more specific.
  • They should also be a part-time developer on the team, so as to best understand the dynamics and also be part of the team.

Correspondingly, the end of the iteration should include an additional, schema-related code review. A difference report of the schema definition at the start and end of the iteration should be reviewed to ensure best practices in the larger picture, as well as standards, optimisations, future performance considerations (e.g. indexes) and future disk requirements (e.g. adding a new index to a 160 million row GL table will take about 4GB of disk space).
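The 4GB figure is easy to sanity-check. Assuming roughly 25 bytes per index entry (key plus row pointer and block overhead; the real figure depends on the RDBMS and key width, so treat this as an assumption):

```python
gl_rows = 160_000_000
bytes_per_index_entry = 25   # assumed: key + row pointer + overhead

index_size_gb = gl_rows * bytes_per_index_entry / 1024**3
print(f"Estimated new index size: {index_size_gb:.1f} GB")   # ~3.7 GB, i.e. about 4GB
```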

Other Factors

In fact, as part of my upcoming conference presentation Overcoming the Challenges of Establishing Service and Support Channels, I spend some time discussing Data Quality. Quite often this is the bane of support, due to the complexity of developing software to cater for data exceptions, or most commonly data anomalies caused by historical data not meeting the minimum referential integrity (RI) and data specifications of your new system.

Comments

  1. Scott says

    Data modeling, and database development in general, is an important aspect of the development of any business application. Not only is it possible to take an evolutionary approach to data modeling, I believe that it is superior to the serial approach favored by far too many data “professionals”. Agile methods such as XP work, are becoming more and more popular, and are here to stay. If you intend to remain in the IT world, then agile is very likely in your future.

    I’ve written about agile/evolutionary data modeling at http://www.agiledata.org/essays/agileDataModeling.html , hope you find it interesting.

    – Scott

  2. Dr. Cyril M. Coupal says

    I was interested by your comment: how would you build a house XP style? Before going into the IT industry and obtaining my PhD in computer science, I was employed to produce blueprints for the housing industry. Based on my experience in both worlds, my answer to your question would be: you would not. A house is a result of many highly integrated, connected and strongly coupled components. Because of these characteristics, a complete design before beginning construction is a must. Any change, such as moving a furnace in the basement, can cause significant cost overruns, delays and changes to the structure above if the structure was not designed for such a change. However, a house is a very real physical artifact that even a non-architect (the client) can see and understand.
    For exactly these reasons you cannot relate any experience from building construction to software construction. First of all, software is not physical and observable (touchy-feely). It is far more complex than any house. Also, we build software that (ideally) is loosely coupled and strongly cohesive within functional units. Therefore, the XP approach works well for software but not for housing.

    Have a great day.

  3. Ralph Flora says

    I would say the house analogy is a bit off because the goals of a house are different from those of software. A house has a rigidly defined function that does not typically change much over the lifetime of the structure. On the other hand, consider a facility for a manufacturer. These are typically “loosely coupled” structures (i.e. an open floor plan frame building with exposed infrastructure like water pipes, electrical, etc.) that do support repurposing and rearrangement, depending on the type of construction. However, whether XP could be applied to construction of that type of facility would still be an interesting thought exercise.