Another Enterprise LAMP stack provider

ActiveGrid, the Enterprise LAMP company, provides a service-oriented application platform built on the lightweight architecture of the proven LAMP software infrastructure stack. ActiveGrid Enterprise LAMP simplifies and speeds the development of service-oriented applications that weave together existing enterprise systems into new rich web applications and services. ActiveGrid Enterprise LAMP applications can be flexibly deployed on grids of commodity machines or at virtually any ISP.

Please see http://www.activegrid.com for more information.

Blog/Wiki Spamming – What makes your blood boil

Well this is low. I’ve just been spammed on my Wiki, and it was cunning; I only found it by accident. An enterprising hacker embedded hidden links into my Home Page that were not visible via a normal page view, but would ultimately be seen by a search bot or some other means.

Even better, the pricks deleted content on one page. Here is an example diff of the pages. Only a few days ago I posted What makes your blood boil?, and that was just about an article on a news site with misleading, dated information. I can see the opportunity for more blood-boiling articles, but that doesn’t solve the problem. The problem is, I don’t know how to solve the problem.

So what can you do, other than clean up this hacker mess, put in checks to find this, and then continually revise those checks? What bodies can you complain to about the URLs listed, and how can you get them removed from the web? That’s what I really want to know.

MediaWiki, the wiki software I use, did provide a number of manual tools to help identify and correct the information, but only in response to my accidental discovery. I was able to review the History of my Home Page, and then drill down to all User Contributions. But it’s a manual process to clean up. Needless to say, I’ll be developing something to identify this more easily. You can also use Block user to stop IP addresses; I’ll also be adding them to my firewall rules.
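
As a first sketch of that “something”, anonymous edits can be pulled straight from MediaWiki’s database (the table and column names below are from the MediaWiki schema of this era, so check them against your version):

SELECT rc_timestamp, rc_ip, rc_title
FROM recentchanges
WHERE rc_user = 0      -- edits by non-logged-in users
ORDER BY rc_timestamp DESC;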

For those who use wikis or blogs, please let me know the software you use, and I’ll endeavour over time to provide tips for these products.

As per a request by a MySQL user, I opened my Blog to allow unregistered user comments. I did this reluctantly, as I wanted that ‘confirmed opt-in’ step of a user registration, but I had faith in the community. Well, I’ve since turned that off as well. I’ve had SPAM posted to my blog, and most recently a post I wanted to respond to, but as the email address supplied was clearly invalid, I could not.

So the penalty is that users wishing to make comments have to register. It’s quite painless; however, the downside is that as soon as you enter an email address on a website, you don’t know what’s going to happen to it. And this could happen unwittingly: while the Blog owner has no intention of using registered emails, a flaw in the software could expose the information to unlawful access.

I’ve implemented a solution to this, and I’m trialling another. The first is that I have a virtual domain, not just a virtual email. You have all heard about getting a free email account at Hotmail or Yahoo, and then using this email for subscriptions, thereby protecting your own identity email from obvious spam. Well, I go one step further. I use a whole domain (in this case myvirtualemail.biz); it costs ~USD$10 per year, big deal, but what I do is create a new alias for everything I subscribe to. I then have total control of where the email is sent and what I want to do with it. I can also track any spam I get, see which email alias it was addressed to, and so have much greater granularity on the source of the problem. I can simply trash the alias, and there is no more SPAM in my inbox. The problem is, the SPAM still comes: it takes network traffic, bandwidth and CPU, it gets rejected, and then it takes even more network traffic and bandwidth.
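
The mechanics are nothing fancy. A minimal sketch, assuming Postfix is handling the virtual domain (the alias and destination addresses here are examples only):

# /etc/postfix/virtual -- one alias per thing I subscribe to
amazon@myvirtualemail.biz      mymailbox@example.com
somelist@myvirtualemail.biz    mymailbox@example.com
# To trash an alias that starts attracting spam, delete its line;
# Postfix will then reject mail for that address outright.
# Rebuild the lookup table after editing:
#   postmap /etc/postfix/virtual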

Now you have me started on a completely different topic: Authenticated Mail. Email has long since been poisoned, and it should be left to die, with a new phoenix rising out of the ashes. Cut out the source of SPAM by ensuring the sender is authenticated. I guess it’s a topic for another time, but it’s the impossibility of moving people off email that makes the current situation seem inevitable.

And the other trial: I’m using GMail to do my filtering for me. I’ve converted my default DNS administrator email (which gets a lot of spam, as it’s public with domain registrars) to an alias that redirects to a GMail account, and from this I forward the filtered mail to a POP account. With GMail’s strict policy on attachments, this approach may not be ideal for the end user, but for this purpose it’s ideal. I’m still testing this idea, but it’s looking good so far. BTW, if you want a free GMail account, let me know and I’ll send an invitation.

If only I had an enterprising billionaire to fund some of my ideas. I’d really like to be one person who makes a difference, and my difference is to rid us of the obvious stuff that destroys our time. And don’t get me started on viruses. Simple solution: if the Windows OS were read-only, and there were some authentication process for software installation, we could probably rid the world of viruses. I must admit, the VMware Player running a virtual machine solves the problem as well.

Update

Some info on Setting user rights in MediaWiki. In WIKI_HOME/LocalSettings.php I added:

$wgWhitelistEdit = true;
$wgShowIPinHeader = false; # For non-logged in users
$wgSysopUserBans = false; # Allow sysops to ban logged-in users
$wgSysopRangeBans = false; # Allow sysops to ban IP ranges
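
MediaWiki of this vintage also supports a crude spam filter that rejects any edit matching a regex; a sketch (the pattern below is only an example, tune it to the spam you actually receive):

$wgSpamRegex = "/casino|texas-holdem|viagra/i"; # reject any edit whose text matches this pattern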

Other references: Wikipedia:Vandalism, Counter Vandalism Unit

Support for Technology Stacks

As part of my next conference presentation, Overcoming the Challenges of Establishing Service and Support Channels, I’ve been struggling to find, through my professional sources, any quality organisations that provide full support for a technology stack, for example a LAMP stack or a Java Servlet stack.

Restricted to searching online, I’ve been impressed by what I’ve found at SpikeSource (www.spikesource.com), an organisation with an experienced CEO who is well known in the Java industry. They certainly have all the buzzwords covered in their product information.

Benefits of their SpikeSource Core Stack:

  • Fully tested and certified
  • Installs in minutes with integrated installer
  • Enterprise-class maintenance and support available
  • Vendor neutral
  • Horizontally and vertically scalable

SpikeSource offers three prebuilt configurations that can have you up and running in around ten minutes. These configurations comprise the following component choices:

  • LAMP Stack – for Websites with dynamic database-driven content written using Perl and PHP.
  • Servlet Stack – for dynamic Websites written using Java-based Web technologies such as servlets.
  • J2EE Stack – for Web applications that separate Web interface and application logic using Java Servlets and Enterprise JavaBeans.

Supported Platforms: what’s of interest here is RHEL and SuSE, as well as Fedora Core 3, in line with, for example, Oracle software running under Linux.

What’s interesting is that they have MySQL 4.1.14 in their SpikeSource stack (1.6.2), so they are quite some months behind here, especially now that MySQL 5 has been available for 3 months. Beyond just stack technology, their infrastructure supports a large number of open source products and appears to provide, via a community, the means to enhance the product offerings within this stack. The Spike Developer Zone Components List provides a long list of products.

Their release notes provide good instructions, in particular the configuration used in building the software. For example, here are the MySQL Release Notes, MySQL Quick Start Guide and MySQL Troubleshooting Guide.

They also talk about testing; Core Stack Testing provides more details here.

They also claim to provide a VMware Community Virtual Machine that can be run via the free VMware Player on any system without affecting the existing system. This is indeed impressive, however it doesn’t seem to be available. There are many other installations available at the VMware site.

I’m interested to see what else exists in the marketplace for a fully supported technology stack, rather than support of individual components (e.g. Red Hat for Linux, MySQL AB for MySQL, JBoss for a servlet container).

In reading comparisons, there is also reference to Source Labs (www.sourcelabs.com). Anybody who can offer recommendations for me to research would be great.

Ruby

Being a little despondent regarding Spring, a framework I’ve chosen to skill up in (Read More), I’ve changed tack to investigate Ruby further. I was in a training demonstration of Ruby late last year, I’ve had other colleagues talk about it, and in a number of recent readings Ruby has been making an impact, so it’s time to delve in. I’ve got my working notes on my Wiki, and it took all of a few minutes to be operational. There appears to be a good wealth of reference material available, including at least 2 online books. You can check these out in my Ruby References section.

Here is a comment from one of the books I’m currently reading, Beyond Java:

My partner and I decided to implement a small part of the application in Ruby On Rails, a highly productive web-based programming framework. We did this not to satisfy our customer, but to satisfy a little intellectual curiosity. The results astounded us:

  • For the rewrite, we programmed faster. Much faster. It took Justin, my lead programmer, four nights to build what it had taken four months to build in Java (RB: They were using Hibernate, Spring and WebWork). We estimated that we were between 5 and 10 times more productive.
  • We generated one-fourth the lines of code; one-fifth if you consider configuration files.
  • The productivity gains held up after we moved beyond the rewrite.
  • The Ruby on Rails version of the application performed faster. This is probably not true of all possible use cases, but for our application, the ROR Active Record persistence strategy trumped Hibernate’s Object Relational Mapping (ORM), at least with minimal tuning.
  • The customer cared much more about productivity than being on a safe Java foundation. (RB: Highlighted)

As you can well imagine, this shook my world view down to the foundation.

From Beyond Java by Bruce A. Tate, pp. 3-4.
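
For context on what the quote’s “Active Record persistence strategy” looks like, here is a minimal sketch. It assumes a Rails 1.x application with a books table; the model and columns are illustrative, not taken from the book:

class Book < ActiveRecord::Base  # table name and columns are inferred by convention
end

book = Book.find_by_title("Beyond Java")  # dynamic finder derived from the title column
book.price = 29.95                        # no XML mapping file, no DAO layer
book.save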

Adding to the Library Collection

I took the chance today to order some books from Amazon to add to the library. Of course I’m still reading 2 current books, Spring in Action and the MySQL Certification Study Guide, the latter in order to sit the second MySQL Professional Certification Exam.

As with most things, you start off looking for or reading about something on the web, and you end up somewhere completely different. In this case, it was looking at Linux Software Labs (Australia) for the price of their Linux distribution CDs, which led me to the book Beyond Java listed on their site. I called my local computer book store, but it wasn’t open (Boxing Day public holiday), which led me to think, well, I’ve been meaning to order some books from Amazon; what were they again? This led to a whole new list, and I figured that for the cost of freight to Australia, I may as well order a few. So here is what I got.

Better, Faster, Lighter Java, MySQL (3rd Edition) (Developer’s Library), High Performance MySQL, and of course Beyond Java.

The hard part now is the 6-10 days of waiting.

December Java Users Group talk on AJAX

I attended the December meeting of the Brisbane Java Users Group last night. The presenters, Alex and Brad from Working Mouse, a Brisbane-based J2EE solutions provider, gave a talk on AJAX.

What is AJAX? It stands for “Asynchronous JavaScript and XML”. While the name has stuck, it neither requires asynchronous communication nor needs to use XML; at least the JavaScript part stays. AJAX is also not a new language or technology, merely a collection of technologies grouped together to provide a given function: rich in-page functionality within a web browser. The presentation centred around the DWR (Direct Web Remoting) implementation; there are in fact a number around in various server languages.

Let me explain some more. Providing dynamic content on a website when a page is requested is straightforward; however, providing dynamic content within a page without refreshing the page (and in turn keeping all page state) is not a feature of the HTTP protocol. The most obvious case always presented is when selecting a value in a Country select box: a select box of States is populated based on the selection, without the user seeing the entire page reload and waiting for it. There are of course a number of other examples of use.
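
To make the mechanics concrete, here is a minimal hand-rolled sketch of that Country/State example (the /states.php endpoint and its pipe-delimited response are assumptions for illustration; a library like DWR hides this plumbing for you):

// Populate the State select box whenever the Country changes:
// <select id="country" onchange="loadStates(this.value)"> ... </select>
// <select id="state"></select>
function loadStates(country) {
  var xhr = window.XMLHttpRequest
    ? new XMLHttpRequest()                      // standards-based browsers
    : new ActiveXObject("Microsoft.XMLHTTP");   // older Internet Explorer
  xhr.open("GET", "/states.php?country=" + encodeURIComponent(country), true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState == 4 && xhr.status == 200) {
      // hypothetical response format: "NSW|QLD|VIC|..."
      var states = xhr.responseText.split("|");
      var select = document.getElementById("state");
      select.options.length = 0;                // clear the old options
      for (var i = 0; i < states.length; i++) {
        select.options[i] = new Option(states[i], states[i]);
      }
    }
  };
  xhr.send(null);
}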

AJAX isn’t new; in fact, the underlying requirements of AJAX, the DHTML, DOM manipulation and XMLHttpRequest, were available in 1997 (as mentioned in the presentation by Brad). Indeed, I implemented functionality to perform what AJAX does back in the late ’90s, probably starting in 1999, using solely JavaScript, and some of that is still in use today on at least one of my sites. Of course Google made this functionality popular with its use in Google Suggest a few years ago.

While the presentation was a good introduction for those who had not seen this in operation, the subsequent discussions over dinner prompted some strong reactions, which is good in our line of work.

This technology implementation is inherently flawed, primarily due to its reliance on a web browser: with a multitude of available browsers across platforms, and more specifically a lack of standards adoption, the technology is simply not available to all users. Of course Microsoft Internet Explorer is a significant pain in the butt here, as it’s simply not standards compliant, and you are forced to write bad code to work in IE simply due to its market penetration. There are of course more concerns: proxies at multiple levels of interaction can drive you mad, as can the increases in bandwidth and server load.

That aside, whether we need to provide this level of rich content within a browser is another very good question. It is driven by end-user demand, and ultimately it is rather ridiculous: it’s complicated code, it’s yet another language within the application to support, the support is difficult, and it’s even more complicated to provide any type of automated testing. But I guess the strongest comments came from Max, who recognised me after 15 years. Max was a lecturer in my undergraduate studies from ’87 to ’89, a long time ago. I would place Max (not his real name, by the way; it’s a long story which took some research at the time) as one of the top three lecturers in my studies who influenced my path to where I am today.

His points were totally valid: why oh why are we doing this? It’s just ridiculous, this level of complexity, to do what a browser was simply not designed to do. I would tend to agree; we are forced again by the influence of Microsoft technologies on end users to provide a level of experience they have been brainwashed into. It reminds me of The Matrix, where everybody is living under the power of the machines (Microsoft), and a small few are fighting a rebel cause to show them what the picture really looks like.

Web 2.0 Design Patterns

In his book, “A Pattern Language”, Christopher Alexander prescribes a format for the concise description of the solution to architectural problems. He writes: “Each pattern describes a problem that occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice.”

1. The Long Tail
Small sites make up the bulk of the internet’s content; narrow niches make up the bulk of the internet’s possible applications. Therefore: Leverage customer self-service and algorithmic data management to reach out to the entire web, to the edges and not just the center, to the long tail and not just the head.
2. Data is the Next Intel Inside
Applications are increasingly data-driven. Therefore: For competitive advantage, seek to own a unique, hard-to-recreate source of data.
3. Users Add Value
The key to competitive advantage in internet applications is the extent to which users add their own data to that which you provide. Therefore: Don’t restrict your “architecture of participation” to software development. Involve your users both implicitly and explicitly in adding value to your application.
4. Network Effects by Default
Only a small percentage of users will go to the trouble of adding value to your application. Therefore: Set inclusive defaults for aggregating user data as a side-effect of their use of the application.
5. Some Rights Reserved.
Intellectual property protection limits re-use and prevents experimentation. Therefore: When benefits come from collective adoption, not private restriction, make sure that barriers to adoption are low. Follow existing standards, and use licenses with as few restrictions as possible. Design for “hackability” and “remixability.”
6. The Perpetual Beta
When devices and programs are connected to the internet, applications are no longer software artifacts, they are ongoing services. Therefore: Don’t package up new features into monolithic releases, but instead add them on a regular basis as part of the normal user experience. Engage your users as real-time testers, and instrument the service so that you know how people use the new features.
7. Cooperate, Don’t Control
Web 2.0 applications are built of a network of cooperating data services. Therefore: Offer web services interfaces and content syndication, and re-use the data services of others. Support lightweight programming models that allow for loosely-coupled systems.
8. Software Above the Level of a Single Device
The PC is no longer the only access device for internet applications, and applications that are limited to a single device are less valuable than those that are connected. Therefore: Design your application from the get-go to integrate services across handheld devices, PCs, and internet servers.

What Is Web 2.0?

In his article What Is Web 2.0 – Design Patterns and Business Models for the Next Generation of Software Tim O’Reilly gives a very detailed description of these seven principles.

1. The Web As Platform
2. Harnessing Collective Intelligence
3. Data is the Next Intel Inside
4. End of the Software Release Cycle
5. Lightweight Programming Models
6. Software Above the Level of a Single Device
7. Rich User Experiences

Core Competencies of Web 2.0 Companies

In exploring the seven principles above, we’ve highlighted some of the principal features of Web 2.0. Each of the examples we’ve explored demonstrates one or more of those key principles, but may miss others. Let’s close, therefore, by summarizing what we believe to be the core competencies of Web 2.0 companies:

* Services, not packaged software, with cost-effective scalability
* Control over unique, hard-to-recreate data sources that get richer as more people use them
* Trusting users as co-developers
* Harnessing collective intelligence
* Leveraging the long tail through customer self-service
* Software above the level of a single device
* Lightweight user interfaces, development models, AND business models

The next time a company claims that it’s “Web 2.0,” test their features against the list above. The more points they score, the more they are worthy of the name. Remember, though, that excellence in one area may be more telling than some small steps in all seven.

Some of the information provided is very interesting; I will be waiting with interest to see whether this term “Web 2.0” becomes something, or not.

Myths Open Source Developers Tell Ourselves

Some interesting points from this ONLamp article on Myths Open Source Developers Tell Ourselves

Publishing your Code Will Attract Many Skilled and Frequent Contributors
Myth: Publicly releasing open source code will attract flurries of patches and new contributors.
Reality: You’ll be lucky to hear from people merely using your code, much less those interested in modifying it.

Feature Freezes Help Stability
Myth: Stopping new development for weeks or months to fix bugs is the best way to produce stable, polished software.
Reality: Stopping new development for a while to find and fix unknown bugs is fine. That’s only a part of writing good software.

The Best Way to Learn a Project is to Fix its Bugs and Read its Code
Myth: New developers interested in the project will best learn the project by fixing bugs and reading the source code.
Reality: Reading code is difficult. Fixing bugs is difficult and probably something you don’t want to do anyway. While giving someone unglamorous work is a good way to test his dedication, it relies on unstructured learning by osmosis.

Packaging Doesn’t Matter
Myth: Installation and configuration aren’t as important as making the source available.
Reality: If it takes too much work just to get the software working, many people will silently quit.

It’s Better to Start from Scratch
Myth: Bad or unappealing code or projects should be thrown away completely.
Reality: Solving the same simple problems again and again wastes time that could be applied to solving new, larger problems.

Programs Suck; Frameworks Rule!
Myth: It’s better to provide a framework for lots of people to solve lots of problems than to solve only one problem well.
Reality: It’s really hard to write a good framework unless you’re already using it to solve at least one real problem.

I’ll Do it Right *This* Time
Myth: Even though your previous code was buggy, undocumented, hard to maintain, or slow, your next attempt will be perfect.
Reality: If you weren’t disciplined then, why would you be disciplined now?

Warnings Are OK
Myth: Warnings are just warnings. They’re not errors and no one really cares about them.
Reality: Warnings can hide real problems, especially if you get used to them.

End Users Love Tracking CVS
Myth: Users don’t mind upgrading to the latest version from CVS for a bugfix or a long-awaited feature.
Reality: If it’s difficult for you to provide important bugfixes for previous releases, your CVS tree probably isn’t very stable.

Web 2.0. Not to be confused with Internet2

What is Web 2.0? Well the definitions out there aren’t clear and precise. Tim O’Reilly from O’Reilly Publishing has a detailed description at http://www.oreillynet.com/lpt/a/6228. (More notes from this below) His compact description is:

“Web 2.0 is the network as platform, spanning all connected devices; Web 2.0 applications are those that make the most of the intrinsic advantages of that platform: delivering software as a continually-updated service that gets better the more people use it, consuming and remixing data from multiple sources, including individual users, while providing their own data and services in a form that allows remixing by others, creating network effects through an “architecture of participation,” and going beyond the page metaphor of Web 1.0 to deliver rich user experiences.”

The Web 2 Conference (www.web2con.com) with the theme “Revving the Web” has some interesting content on the site.

Let me be clear: I didn’t know what Web 2.0 was 2 hours ago. I stumbled across the article Web 2.0 Principles applied by Yellowikis while researching IT outsourcing jobs in India/China etc. (go figure). Anyway, the following summation prompted me to read about this topic a little more.

* Web-based (of course) and uses wiki technology; the same MediaWiki software that powers Wikipedia.
* Any user can both read and write content – adding business listings and editing them. To put it in ‘Web 2.0 wanker’ terms, it harnesses collective intelligence.
* Requires a significant amount of ‘trust’ in the users.
* Can be deployed via the Web in countries all over the world (see Emily Chang’s interview with Paul Youlten for more details on this aspect).
* Developed and maintained by a small team (just Paul and his 14-year-old daughter – both working part-time).
* Has fast, lightweight and inexpensive development cycles.
* Uses Open Source LAMP technologies (Linux, Apache, MySQL and PHP) – meaning it is very cheap to run.
* The content has no copyright and is freely licensed under the GNU Free Documentation License 1.2.
* Can and will hook into other Web systems, e.g. Google Maps. Indeed if it introduces its own APIs, then it will be able to be remixed by other developers.
* Relies on word-of-mouth and other ‘viral’ marketing.
* Requires network effects to kick in, in order to be successful (at least at the scale of disrupting the Yellow Pages industry).
* Yellowikis will get better the more people use it. The Wikipedia is an excellent example of this.

Taking a few lines from Tim O’Reilly’s detailed description as a quick taste, to encourage you to read more:

  • Wikipedia, an online encyclopedia based on the unlikely notion that an entry can be added by any web user, and edited by any other, is a radical experiment in trust, applying Eric Raymond’s dictum (originally coined in the context of open source software) that “with enough eyeballs, all bugs are shallow,” to content creation. Wikipedia is already in the top 100 websites, and many think it will be in the top ten before long. This is a profound change in the dynamics of content creation!
  • It is a truism that the greatest internet success stories don’t advertise their products. Their adoption is driven by “viral marketing”–that is, recommendations propagating directly from one user to another. You can almost make the case that if a site or product relies on advertising to get the word out, it isn’t Web 2.0.
  • 4. End of the Software Release Cycle – As noted above in the discussion of Google vs. Netscape, one of the defining characteristics of internet era software is that it is delivered as a service, not as a product. This fact leads to a number of fundamental changes in the business model of such a company:
  • One of the key lessons of the Web 2.0 era is this: Users add value. But only a small percentage of users will go to the trouble of adding value to your application via explicit means. Therefore, Web 2.0 companies set inclusive defaults for aggregating user data and building value as a side-effect of ordinary use of the application. As noted above, they build systems that get better the more people use them.
  • Contrast, however, the position of Amazon.com. Like competitors such as Barnesandnoble.com, its original database came from ISBN registry provider R.R. Bowker. But unlike MapQuest, Amazon relentlessly enhanced the data, adding publisher-supplied data such as cover images, table of contents, index, and sample material. Even more importantly, they harnessed their users to annotate the data, such that after ten years, Amazon, not Bowker, is the primary source for bibliographic data on books, a reference source for scholars and librarians as well as consumers. Amazon also introduced their own proprietary identifier, the ASIN, which corresponds to the ISBN where one is present, and creates an equivalent namespace for products without one. Effectively, Amazon “embraced and extended” their data suppliers.
  • Users must be treated as co-developers, in a reflection of open source development practices (even if the software in question is unlikely to be released under an open source license.) The open source dictum, “release early and release often” in fact has morphed into an even more radical position, “the perpetual beta,” in which the product is developed in the open, with new features slipstreamed in on a monthly, weekly, or even daily basis. It’s no accident that services such as Gmail, Google Maps, Flickr, del.icio.us, and the like may be expected to bear a “Beta” logo for years at a time.
  • …Support lightweight programming models that allow for loosely coupled systems…. …Think syndication, not coordination… … Design for “hackability” and remixability…

I could go on.

Other References
ZDNET Web2Con

Quotes from Web 2.0 Conference Web Site

I’m writing something about Web 2.0, but I got distracted by the random header quotes that appear on the website at www.web2con.com. I’ve never been a Simpsons fan, but it reminds me of those sites with all of Bart’s blackboard quotes.

  • “Web 1.0 was making the Internet for people, Web 2.0 is making the Internet better for companies.” – Jeff Bezos
  • “I personally use the web as an Intelligence Amplifier” – Bran Ferren of Disney
  • “Truly great companies aren’t built by the greedy, but by the passionate” – William Gurley
  • “Never underestimate the Internet. Manipulate it. Respect it. But don’t try to dominate it.” – Jerry Yang
  • “Operate as if you are in perpetual beta.” – Tim O’Reilly
  • “The value of your product is in inverse proportion to the cost of customer acquisition.” – Shelby Bonnie
  • “Most people think money is the key to reducing risk. Preparation is.” – Mark Cuban
  • “The internet is the most underutilized advertising medium that’s out there.” – Mary Meeker
  • “In the era of Internet television, it will be as simple and cost-effective to create a microchannel as it is to create a Web site.” – Jeremy Allaire
  • “It used to be that Internet was considered a secondary market. Now it is the primary market.” – Sky Dayton
  • “Innovation is not the exclusive province of New Economy companies.” – John McKinley
  • “I’d rather do something interesting, solve an interesting problem, than do something boring and get rich.” – Louis Monier
  • “There’s always a curve ball! But that’s when the interesting stuff happens.” – Mark Fletcher

A better approach to using China for software development

India and China are the next powerhouses of software development, simply due to the numbers, but I’ve never heard a good report (maybe I have to dig deeper). My recent experiences are with Australian companies placing call centres in these countries, and almost always the language barrier is a clear limitation.

As part of an upcoming conference paper I’m giving, I have been looking more closely at the software options available, and I came across an interesting concept that has the background funding to get off the ground (a common problem for startups), and that addresses a number of issues including the language barrier (which is less prominent with code).

Sinocode (www.sinocode.com) is the new generation in Offshore Development Centres (ODCs), delivering high value developer expertise from China. Our service offers:

  • Strong economic value in robust software solutions;
  • Proven western style management expertise; and,
  • Highly talented staff.

All of these drivers underpin our proven capability to execute. Execution, on time and on budget, is our key attraction.

(This is their sales pitch, not mine)

Some more reading:
Article in the Australian on 4th October 2005 Ernst places faith in China
An ABC radio interview on 30th August 2005 with IT entrepreneur Lloyd Ernst

"JS Debugger Service is not installed" – Firefox debugger message

Attempting to run the Venkman JavaScript Debugger in Firefox 1.0.7 doesn’t work under CentOS 4.2.

It keeps launching an error window with “JS Debugger Service is not installed”. A Google search shows this is not an isolated issue; however, the solution proved a little harder to track down.

There is a thread at the Mozilla Bug Tracking site that sheds light on the problem.

1. First, ensure you have the latest version of the debugger (0.9.85, as far as I’m aware), available from http://hacksrus.com/~ginda/venkman/.

2. In the browser’s location bar, enter about:buildconfig and see if --disable-jsd is specified.

Should you see that, you have your problem: the Linux distro has disabled the JavaScript debugger service. The solution is to download the tar directly from the Firefox site; however, if your OS previously installed Firefox via RPM, as in my case, you have to consider running a different, non-RPM version.
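
A minimal sketch of the tarball approach (the exact filename and install path are assumptions; adjust for the build you download from mozilla.org):

# extract the official tarball alongside the RPM install
tar zxvf firefox-1.0.7.tar.gz -C /opt
# run the tarball version (with jsd enabled) instead of the distro build
/opt/firefox/firefox &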