Web 2.0. Not to be confused with Internet2.

What is Web 2.0? Well, the definitions out there aren’t clear and precise. Tim O’Reilly of O’Reilly Media has a detailed description at http://www.oreillynet.com/lpt/a/6228 (more notes from this below). His compact description is:

“Web 2.0 is the network as platform, spanning all connected devices; Web 2.0 applications are those that make the most of the intrinsic advantages of that platform: delivering software as a continually-updated service that gets better the more people use it, consuming and remixing data from multiple sources, including individual users, while providing their own data and services in a form that allows remixing by others, creating network effects through an ‘architecture of participation,’ and going beyond the page metaphor of Web 1.0 to deliver rich user experiences.”

The Web 2.0 Conference (www.web2con.com), with the theme “Revving the Web,” has some interesting content on its site.

Let me be clear: I didn’t know what Web 2.0 was two hours ago. I stumbled across an article, Web 2.0 Principles Applied by Yellowikis, while researching IT outsourcing jobs in India, China, etc. (go figure). Anyway, the following summation prompted me to read a little more about the topic.

* Web-based (of course) and uses wiki technology; the same MediaWiki software that powers Wikipedia.
* Any user can both read and write content – adding business listings and editing them. To put it in ‘Web 2.0 wanker’ terms, it harnesses collective intelligence.
* Requires a significant amount of ‘trust’ in the users.
* Can be deployed via the Web in countries all over the world (see Emily Chang’s interview with Paul Youlten for more details on this aspect).
* Is developed and maintained by a small team (just Paul and his 14-year-old daughter – both working part-time).
* Has fast, lightweight and inexpensive development cycles.
* Uses Open Source LAMP technologies (Linux, Apache, MySQL and PHP) – meaning it is very cheap to run.
* The content is freely licensed under the GNU Free Documentation License 1.2, so it is not locked up by copyright.
* Can and will hook into other Web systems, e.g. Google Maps. Indeed, if it introduces its own APIs it can be remixed by other developers (see the sketch after this list).
* Relies on word-of-mouth and other ‘viral’ marketing.
* Requires network effects to kick in if it is to succeed (at least at the scale of disrupting the Yellow Pages industry).
* Yellowikis will get better the more people use it; Wikipedia is an excellent example of this.
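
Since Yellowikis runs on MediaWiki, the “hooking into other Web systems” point is easy to sketch. MediaWiki can serve a page’s raw wikitext (index.php?action=raw), and another program can consume it – no formal API needed. The base URL, page name and ‘address’ field below are my own hypothetical stand-ins, since Yellowikis hasn’t published an API, but the shape of the remix is the point: pull data out of one system and feed it to another, here a Google Maps link.

```python
# A minimal remixing sketch: read a business listing out of a MediaWiki-based
# wiki and hand its address to another service (a Google Maps link).
# The base URL, page name and 'address' field are hypothetical.
import re
import urllib.parse
import urllib.request

BASE = "http://www.yellowikis.org/index.php"  # hypothetical wiki base URL

def fetch_wikitext(title: str) -> str:
    """Fetch the raw wikitext of a wiki page via MediaWiki's action=raw."""
    url = f"{BASE}?{urllib.parse.urlencode({'title': title, 'action': 'raw'})}"
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

def map_link_for(title: str) -> str:
    """Extract an 'address = ...' field and turn it into a Google Maps link."""
    text = fetch_wikitext(title)
    match = re.search(r"address\s*=\s*(.+)", text)  # assumes a template field
    if not match:
        raise ValueError(f"no address found on page {title!r}")
    address = match.group(1).strip()
    return "https://maps.google.com/?q=" + urllib.parse.quote_plus(address)

print(map_link_for("Acme_Plumbing_London"))  # hypothetical listing page
```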

Here are a few lines from Tim O’Reilly’s detailed description, as a quick taste to get you to read more.

  • Wikipedia, an online encyclopedia based on the unlikely notion that an entry can be added by any web user, and edited by any other, is a radical experiment in trust, applying Eric Raymond’s dictum (originally coined in the context of open source software) that “with enough eyeballs, all bugs are shallow,” to content creation. Wikipedia is already in the top 100 websites, and many think it will be in the top ten before long. This is a profound change in the dynamics of content creation!
  • It is a truism that the greatest internet success stories don’t advertise their products. Their adoption is driven by “viral marketing”–that is, recommendations propagating directly from one user to another. You can almost make the case that if a site or product relies on advertising to get the word out, it isn’t Web 2.0.
  • End of the Software Release Cycle – As noted above in the discussion of Google vs. Netscape, one of the defining characteristics of internet era software is that it is delivered as a service, not as a product. This fact leads to a number of fundamental changes in the business model of such a company:
  • One of the key lessons of the Web 2.0 era is this: Users add value. But only a small percentage of users will go to the trouble of adding value to your application via explicit means. Therefore, Web 2.0 companies set inclusive defaults for aggregating user data and building value as a side-effect of ordinary use of the application. As noted above, they build systems that get better the more people use them.
  • Contrast, however, the position of Amazon.com. Like competitors such as Barnesandnoble.com, its original database came from ISBN registry provider R.R. Bowker. But unlike MapQuest, Amazon relentlessly enhanced the data, adding publisher-supplied data such as cover images, table of contents, index, and sample material. Even more importantly, they harnessed their users to annotate the data, such that after ten years, Amazon, not Bowker, is the primary source for bibliographic data on books, a reference source for scholars and librarians as well as consumers. Amazon also introduced their own proprietary identifier, the ASIN, which corresponds to the ISBN where one is present, and creates an equivalent namespace for products without one. Effectively, Amazon “embraced and extended” their data suppliers.
  • Users must be treated as co-developers, in a reflection of open source development practices (even if the software in question is unlikely to be released under an open source license.) The open source dictum, “release early and release often” in fact has morphed into an even more radical position, “the perpetual beta,” in which the product is developed in the open, with new features slipstreamed in on a monthly, weekly, or even daily basis. It’s no accident that services such as Gmail, Google Maps, Flickr, del.icio.us, and the like may be expected to bear a “Beta” logo for years at a time.
  • …Support lightweight programming models that allow for loosely coupled systems… Think syndication, not coordination… Design for “hackability” and remixability…
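
The Amazon point above hides a neat bit of data design: the ASIN reuses the ISBN where a product has one and mints a new identifier where it doesn’t, so every product sits in a single namespace. A minimal sketch of that idea (the names and ID format are illustrative guesses, not Amazon’s actual scheme):

```python
# Sketch of the "equivalent namespace" idea behind the ASIN: reuse the ISBN
# as the identifier when a product has one, otherwise mint an internal ID,
# so every product is addressable through one scheme. Illustrative only.
import itertools

_counter = itertools.count(1)

def assign_id(product: dict) -> str:
    """Return the product's ISBN if it has one, else a freshly minted ID."""
    isbn = product.get("isbn")
    if isbn:
        return isbn.replace("-", "")  # books: identifier == ISBN
    return f"X{next(_counter):09d}"   # everything else: internal ID

book = {"title": "The Cathedral and the Bazaar", "isbn": "0-596-00108-8"}
gadget = {"title": "Digital camera"}  # no ISBN
print(assign_id(book))    # 0596001088
print(assign_id(gadget))  # X000000001
```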
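
And “syndication, not coordination” is as lightweight as it sounds: a loosely coupled consumer needs nothing beyond an HTTP fetch and an XML parse to reuse someone else’s data. Here’s a minimal RSS 2.0 reader using only the Python standard library (the feed URL is a placeholder; any RSS 2.0 feed will do):

```python
# A minimal "syndication, not coordination" consumer: read an RSS 2.0 feed
# with nothing but the standard library. The feed URL is a placeholder.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://example.com/feed.rss"  # placeholder feed URL

with urllib.request.urlopen(FEED_URL) as resp:
    tree = ET.parse(resp)

# RSS 2.0 puts each entry in an <item> element under <channel>.
for item in tree.findall("./channel/item"):
    title = item.findtext("title", default="(untitled)")
    link = item.findtext("link", default="")
    print(f"{title}\n  {link}")
```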

I could go on.

Other References
ZDNet: Web2Con