December Java Users Group talk on AJAX

I attended the December meeting of the Brisbane Java Users Group last night. The presenters, Alex and Brad from Working Mouse, a Brisbane-based J2EE solutions provider, gave a talk on AJAX.

What is AJAX? It stands for “Asynchronous Javascript and XML”. While the name has stuck, AJAX neither requires asynchronous communication nor needs to use XML; at least the Javascript part remains. AJAX is also not a new language or technology, merely a collection of technologies grouped together to provide a given function, which is rich in-page functionality within a web browser. The presentation centered around one implementation, DWR (Direct Web Remoting); there are in fact a number available in various server languages.

Let me explain some more. Providing dynamic content on a website is straightforward when you request a page; however, providing dynamic content within a page without refreshing the page (and in turn keeping all page state) is not a feature of the HTTP protocol. The most obvious case always presented is a Country select box where, when a value is selected, a select box of States is populated based on the selection, without the user seeing the entire page reload and having to wait for it. There are of course a number of other examples of use.
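As a rough sketch of that Country/State case (the “/states” URL, the element ids and the pipe-delimited response format below are hypothetical, and a library such as DWR aims to hide this plumbing behind generated Javascript stubs), the browser requests just the list of states and rewrites one select box without reloading the page:

// Minimal sketch of the Country/State example. The "/states" URL, the
// element ids and the pipe-delimited response are illustrative assumptions.
function createRequest() {
  // Older IE needs ActiveX; standards-based browsers provide XMLHttpRequest.
  return window.XMLHttpRequest
      ? new XMLHttpRequest()
      : new ActiveXObject("Microsoft.XMLHTTP");
}

function loadStates(country) {
  var request = createRequest();
  request.open("GET", "/states?country=" + encodeURIComponent(country), true);
  request.onreadystatechange = function () {
    if (request.readyState === 4 && request.status === 200) {
      var states = request.responseText.split("|");   // e.g. "QLD|NSW|VIC"
      var select = document.getElementById("state");
      select.options.length = 0;                      // clear the old list
      for (var i = 0; i < states.length; i++) {
        select.options[i] = new Option(states[i], states[i]);
      }
    }
  };
  request.send(null);
}

// Wired to the country select box, for example:
// <select id="country" onchange="loadStates(this.value)"> ... </select>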

AJAX isn’t new; in fact the underlying requirements of AJAX, the DHTML, DOM manipulation and XMLHttpRequest, were available in 1997 (as mentioned in the presentation by Brad). In fact, I implemented functionality to perform what AJAX does back in the late ’90s, probably starting in 1999, using solely Javascript, and some of that is still in use today on at least one of my sites. Of course Google made this functionality popular with its use in Google Suggest a few years ago.

While the presentation was a good introduction for those who had not seen this in operation, the subsequent discussions over dinner prompted some strong reactions, which is good in our line of work.

This technology implementation is inherently flawed, primarily due to its reliance on the web browser: the multitude of available browsers across platforms, and more specifically the lack of standards adoption, means this technology is simply not available to all users. Of course Microsoft Internet Explorer is a significant pain in the butt here, as it’s simply not standards compliant, and you are forced to write bad code to work in IE simply due to its market penetration. There is of course a lot more of concern: proxies at multiple levels of interaction can drive you mad, as can the extra demands on bandwidth and server performance.

That aside, whether we need to provide this level of rich content within a browser at all is another very good question. It is driven by end-user demand, and ultimately it is rather ridiculous: it’s complicated code, it’s yet another language within the application to support, that support is difficult, and it’s even more complicated to provide any type of automated testing. But I guess the strongest comments came from Max, who recognised me after 15 years. Max was a lecturer in my undergraduate studies from 1987 to 1989, a long time ago. I would place Max (not his real name, by the way; it’s a long story which took some research at the time) as one of the top three lecturers in my studies who influenced my path to where I am today.

His points were totally valid: why oh why are we doing this? This level of complexity, to do what a browser was simply not designed to do, is just ridiculous. I would tend to agree; we are forced, again by the influence of Microsoft technologies on end users, to provide a level of experience they have been brainwashed into expecting. It so reminds me of The Matrix, where everybody is living under the power of the machines (Microsoft), and there are a small few fighting a rebel cause to show them what the picture really looks like.

XP Group in Brisbane

Brisbane has another XP group; I’ve just found out about it. Info can be found at http://groups.google.com/group/Brisbane-XP. I’ve been involved in some part in two previous groups in Brisbane.

I’m thinking about some ideas myself. I’ve got all the XP skills; however, I’m now skilling up in Spring (a full-stack Java/J2EE application framework) and Hibernate (a powerful, ultra-high performance object/relational persistence and query service for Java). And I’ve got two other friends in similar positions.

Wouldn’t it be great if, for six to eight weeks, a few hours a week, we could work on a project honing traditional XP skills, with some people experienced in these technologies helping others? Of course it comes back to some giving their all to others, but I’m sure it doesn’t have to be that way.

Speaking at MySQL Users Group

I’m preparing to speak at the next MySQL Brisbane Users Group in February 2006. My topic will be Know your competitor – A MySQL Developers Guide to Using Oracle Express Edition. You can get a full copy of my presentation slides at my Articles Page.

Having a strong background in Oracle, and having been using MySQL for the past five years, I find the release of Oracle Database 10g Express Edition (XE) as a free offering (with limitations of 1 CPU, 1GB RAM and 4GB disk) an interesting move by Oracle.

I’ve written a number of recent comments on various Oracle/MySQL things including Responses to some Oracle v’s MySQL Questions, How can Oracle 10g Express Edition target MySQL?, Oracle 10g Express Edition Target Audience. Is it MySQL?, Oracle 10g Express, Free v’s Open Source and OFA.

The question could be posed: what relevance does this have to MySQL developers? Well, in some respects very little, but in others, knowing a little more about your competitor, being able to see their offering and, in particular, comparing it to MySQL can help provide a level of understanding of database differences. I am hoping that from the discussions people will consider some approaches to design and development that are more “database compatible”, regardless of which database.

Will Oracle 10g Express Edition take off? A difficult question; there are many target markets. Will it compete with MySQL in open source? Hopefully my talk will spawn some discussion of people’s experiences in the various organisations and businesses represented at our meetings.

Upcoming Open Source Conference Presentation

I’ve been working recently on a paper I’m presenting at a conference in February 2006 titled Implementing Open Source for Optimal Business Performance. I got the final glossy brochure yesterday, so I now have something to show everybody. View Here (be warned, it’s a little bright).

The topic I have been asked to speak on is Overcoming the Challenges of Establishing Service and Support Channels. My notes are still in the early stage, but are available at http://wiki.arabx.com.au/index.php/ARABX_Articles:Overcoming_Service_and_Support_Channels

Conference Details on Ark Group Web Site

Review of Database Magazine Article – "The Usual Suspects"

In the December 2005 edition of the “Australian Technology and Business Magazine” there was an article comparing database products. Here are my comments, which I also plan to forward to the editor.

BTW: I’ve since also found this article’s content on another site here. It seems that most, if not all, is the same.

In response to your cover story “The Usual Suspects: Four databases we suspect your business could be quite interested in”, which appeared in the December 2005 edition, I would have to sum up your article in one word: “disappointing”. Let me provide some feedback from my perspective.

You start by defining a scenario, which is the only approach you can take for a suitable comparison of database products, due to the diversity of features available in today’s products. A good start, necessary to limit the discussion of features and functionality. However, you then specify some additional business requirements, for example “relatively small e-commerce” and “cost of the initial server and database software is certainly an issue.” Now, having worked for a number of small internet and e-commerce companies, you don’t have the budget for a Dell quad Xeon processor machine, nor the requirement for co-located hosting or dedicated network bandwidth for your fancy new hardware, let alone the additional staffing and support costs. So immediately your scenario becomes unrealistic.

The major sticking point I have is your four-processor requirement. The most efficient and cost-effective initial implementation is to lease dedicated servers; there are numerous reasons, including cost savings, better hardware support, larger bandwidth capacity and an easy growth path to start. You can also easily monitor growth and change your needs more quickly than if you had a large initial hardware cost. I could continue regarding hosting; however, this alone changes the requirements to using single or dual processor machines given your scenario. With this in mind the playing field is now completely different, but a better reflection of the scenario. Your argument to “scale up to a small server farm” also does not hold, because you can easily get economy of scale by splitting application server and database server, splitting OLTP and batch database requirements, and other common practices, not to mention additional benefits such as redundancy.

My final comment on your hardware, specifically in relation to MySQL (even using version 3): you can get significant performance from hardware given your small size requirements, and even with modest growth, on single and dual processor equipment. Other than the opening remarks, your article makes no further references to performance requirements, or indeed any level of performance analysis of the products reviewed.

You make scant reference to other database products, mentioning only one, Sybase, in half a sentence in your opening and once again later. In eight pages, surely you could have rounded out the article to give a clearer perspective of the marketplace, with even one paragraph mentioning that there are many different database products, both commercial and open source, servicing differing business needs. Other major products not compared at this time include Sybase, Informix, Ingres, PostgreSQL, MaxDB and Berkeley DB, as well as many more.

Your choice of products is also not consistent or reflective of your scenario. I’ll provide a few specific reasons. Firstly, you compare beta products against production products; if your criterion was current production products you should have compared using MS SQL Server 2000, however that would clearly reflect poorly on Microsoft due to its clearly dated product. If you allowed one beta product, why then did you not use the MySQL 5.0 beta, which was available at the time? While you have taken the effort to adjust your article to include references to MySQL 5.0, and you in turn chose MySQL as your editor’s choice, you should have been consistent throughout the article, as you give a mixed comparison referencing two versions of one product. Further to this, you chose to use a dated Oracle product in 10g Release 1; 10g Release 2 has been available for a number of months. I would also question your decision to choose the more expensive Standard Edition over Standard Edition One, but this again could be solely due to your over-specified hardware.

If your rationale for including beta products was cost based, then you did Oracle a clear injustice. You again make only a half-sentence reference to Oracle’s newly released free product in the opening section. You mix more recent MySQL 5.0 information within your review of MySQL 4.1.14, yet you mention nothing of Oracle 10g Express Edition; for example, it’s a free product much like Microsoft SQL Server Express, with similar limitations of 1 CPU, 1GB RAM and 4GB of disk, but all the power and functionality of other Oracle products, as well as the default inclusion of web-based administration tools with HTML DB.

Your quick product summary (four columns of information) suffers from a number of already mentioned points; however, in relation to the only commercial product with a free offering, MS SQL Server Express, you clearly gloss over the limitations. 1 CPU, 1GB RAM and 4GB of disk is critical information; this should have been included in the product summary, and you only go part of the way. Regarding MySQL, you should have clearly stated $0 under the GPL license. On that note, and as mentioned in your detailed review, there are limitations on the distribution of MySQL within a commercial product, and this is not in your summary.

Your article places no emphasis on performance or efficiency. Despite your need to mention testing on quad-processor hardware, you make references to various CPU limits across products, memory and hardware requirements, as well as some generic maximum sizings, but nothing on performance, throughput and growth potential, even though this was part of your opening scenario.

It’s not possible to clearly date when this review of products was performed. Granted, the marketplace has changed rapidly in recent months, but the fact that your article references Oracle 10g Express Edition clearly indicates that changes to the article were still possible in early November.

In an eight-page article, as mentioned, you could have allocated one column to note that the database marketplace contains many more products. In particular, considering you included an open source product and selected it as your product of choice, I feel this gives even more justification for at least giving credit to the emerging open source database market. As I recall, you actually make only one mention of “Open Source”, which is significant in the context of your choice. Other products would include PostgreSQL, Berkeley DB, Apache Derby and even Ingres. While your article clearly did not need to analyse these at this time, by leading into the topic you would provide a clear opportunity for further discussion.

At the end of the day, while you provided a concise one page breakdown of features and certain limits, this technical information does not provide a clear benefit to an IT manager, or even a technical person.

Web 2.0 Design Patterns

In his book, “A Pattern Language”, Christopher Alexander prescribes a format for the concise description of the solution to architectural problems. He writes: “Each pattern describes a problem that occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice.”

1. The Long Tail
Small sites make up the bulk of the internet’s content; narrow niches make up the bulk of the internet’s possible applications. Therefore: Leverage customer-self service and algorithmic data management to reach out to the entire web, to the edges and not just the center, to the long tail and not just the head.
2. Data is the Next Intel Inside
Applications are increasingly data-driven. Therefore: For competitive advantage, seek to own a unique, hard-to-recreate source of data.
3. Users Add Value
The key to competitive advantage in internet applications is the extent to which users add their own data to that which you provide. Therefore: Don’t restrict your “architecture of participation” to software development. Involve your users both implicitly and explicitly in adding value to your application.
4. Network Effects by Default
Only a small percentage of users will go to the trouble of adding value to your application. Therefore: Set inclusive defaults for aggregating user data as a side-effect of their use of the application.
5. Some Rights Reserved.
Intellectual property protection limits re-use and prevents experimentation. Therefore: When benefits come from collective adoption, not private restriction, make sure that barriers to adoption are low. Follow existing standards, and use licenses with as few restrictions as possible. Design for “hackability” and “remixability.”
6. The Perpetual Beta
When devices and programs are connected to the internet, applications are no longer software artifacts, they are ongoing services. Therefore: Don’t package up new features into monolithic releases, but instead add them on a regular basis as part of the normal user experience. Engage your users as real-time testers, and instrument the service so that you know how people use the new features.
7. Cooperate, Don’t Control
Web 2.0 applications are built of a network of cooperating data services. Therefore: Offer web services interfaces and content syndication, and re-use the data services of others. Support lightweight programming models that allow for loosely-coupled systems.
8. Software Above the Level of a Single Device
The PC is no longer the only access device for internet applications, and applications that are limited to a single device are less valuable than those that are connected. Therefore: Design your application from the get-go to integrate services across handheld devices, PCs, and internet servers.

What Is Web 2.0?

In his article What Is Web 2.0 – Design Patterns and Business Models for the Next Generation of Software, Tim O’Reilly gives a very detailed description of these seven principles.

1. The Web As Platform
2. Harnessing Collective Intelligence
3. Data is the Next Intel Inside
4. End of the Software Release Cycle
5. Lightweight Programming Models
6. Software Above the Level of a Single Device
7. Rich User Experiences

Core Competencies of Web 2.0 Companies

In exploring the seven principles above, we’ve highlighted some of the principal features of Web 2.0. Each of the examples we’ve explored demonstrates one or more of those key principles, but may miss others. Let’s close, therefore, by summarizing what we believe to be the core competencies of Web 2.0 companies:

* Services, not packaged software, with cost-effective scalability
* Control over unique, hard-to-recreate data sources that get richer as more people use them
* Trusting users as co-developers
* Harnessing collective intelligence
* Leveraging the long tail through customer self-service
* Software above the level of a single device
* Lightweight user interfaces, development models, AND business models

The next time a company claims that it’s “Web 2.0,” test their features against the list above. The more points they score, the more they are worthy of the name. Remember, though, that excellence in one area may be more telling than some small steps in all seven.

Some of the information provided is very interesting; I will be waiting with interest to see whether this term “Web 2.0” becomes something, or not.

Myths Open Source Developers Tell Ourselves

Some interesting points from this ONLamp article, Myths Open Source Developers Tell Ourselves:

Publishing your Code Will Attract Many Skilled and Frequent Contributors
Myth: Publicly releasing open source code will attract flurries of patches and new contributors.
Reality: You’ll be lucky to hear from people merely using your code, much less those interested in modifying it.

Feature Freezes Help Stability
Myth: Stopping new development for weeks or months to fix bugs is the best way to produce stable, polished software.
Reality: Stopping new development for awhile to find and fix unknown bugs is fine. That’s only a part of writing good software.

The Best Way to Learn a Project is to Fix its Bugs and Read its Code
Myth: New developers interested in the project will best learn the project by fixing bugs and reading the source code.
Reality: Reading code is difficult. Fixing bugs is difficult and probably something you don’t want to do anyway. While giving someone unglamorous work is a good way to test his dedication, it relies on unstructured learning by osmosis.

Packaging Doesn’t Matter
Myth: Installation and configuration aren’t as important as making the source available.
Reality: If it takes too much work just to get the software working, many people will silently quit.

It’s Better to Start from Scratch
Myth: Bad or unappealing code or projects should be thrown away completely.
Reality: Solving the same simple problems again and again wastes time that could be applied to solving new, larger problems.

Programs Suck; Frameworks Rule!
Myth: It’s better to provide a framework for lots of people to solve lots of problems than to solve only one problem well.
Reality: It’s really hard to write a good framework unless you’re already using it to solve at least one real problem.

I’ll Do it Right *This* Time
Myth: Even though your previous code was buggy, undocumented, hard to maintain, or slow, your next attempt will be perfect.
Reality: If you weren’t disciplined then, why would you be disciplined now?

Warnings Are OK
Myth: Warnings are just warnings. They’re not errors and no one really cares about them.
Reality: Warnings can hide real problems, especially if you get used to them.

End Users Love Tracking CVS
Myth: Users don’t mind upgrading to the latest version from CVS for a bugfix or a long-awaited feature.
Reality: If it’s difficult for you to provide important bugfixes for previous releases, your CVS tree probably isn’t very stable.

Web 2.0. Not to be confused with Internet2

What is Web 2.0? Well, the definitions out there aren’t clear and precise. Tim O’Reilly from O’Reilly Publishing has a detailed description at http://www.oreillynet.com/lpt/a/6228 (more notes from this below). His compact description is:

“Web 2.0 is the network as platform, spanning all connected devices; Web 2.0 applications are those that make the most of the intrinsic advantages of that platform: delivering software as a continually-updated service that gets better the more people use it, consuming and remixing data from multiple sources, including individual users, while providing their own data and services in a form that allows remixing by others, creating network effects through an “architecture of participation,” and going beyond the page metaphor of Web 1.0 to deliver rich user experiences.”

The Web 2 Conference (www.web2con.com) with the theme “Revving the Web” has some interesting content on the site.

Let me be clear, I didn’t know what Web 2.0 was two hours ago. I stumbled across an article, Web 2.0 Principles applied by Yellowikis, while researching IT outsourcing jobs in India/China etc. (go figure). Anyway, the following summation prompted me to read about this topic a little more.

* Web-based (of course) and uses wiki technology; the same MediaWiki software that powers Wikipedia.
* Any user can both read and write content – adding business listings and editing them. To put it in ‘Web 2.0 wanker’ terms, it harnesses collective intelligence.
* Requires a significant amount of ‘trust’ in the users.
* Can be deployed via the Web in countries all over the world (see Emily Chang’s interview with Paul Youlten for more details on this aspect).
* Developed and is maintained by a small team (just Paul and his 14-year old daughter – both working part-time).
* Has fast, lightweight and inexpensive development cycles.
* Uses Open Source LAMP technologies (Linux, Apache, MySQL and PHP) – meaning it is very cheap to run.
* The content has no copyright and is freely licensed under the GNU Free Documentation License 1.2.
* Can and will hook into other Web systems, e.g. Google Maps. Indeed if it introduces its own APIs, then it will be able to be remixed by other developers.
* Relies on word-of-mouth and other ‘viral’ marketing.
* Requires network effects to kick in in order to be successful (at least at the scale of disrupting the Yellow Pages industry).
* Yellowikis will get better the more people use it. The Wikipedia is an excellent example of this.

Taking a few lines from Tim O’Reilly’s detailed description as a quick taste, for you to read more:

  • Wikipedia, an online encyclopedia based on the unlikely notion that an entry can be added by any web user, and edited by any other, is a radical experiment in trust, applying Eric Raymond’s dictum (originally coined in the context of open source software) that “with enough eyeballs, all bugs are shallow,” to content creation. Wikipedia is already in the top 100 websites, and many think it will be in the top ten before long. This is a profound change in the dynamics of content creation!
  • It is a truism that the greatest internet success stories don’t advertise their products. Their adoption is driven by “viral marketing”–that is, recommendations propagating directly from one user to another. You can almost make the case that if a site or product relies on advertising to get the word out, it isn’t Web 2.0.
  • 4. End of the Software Release Cycle – As noted above in the discussion of Google vs. Netscape, one of the defining characteristics of internet era software is that it is delivered as a service, not as a product. This fact leads to a number of fundamental changes in the business model of such a company:
  • One of the key lessons of the Web 2.0 era is this: Users add value. But only a small percentage of users will go to the trouble of adding value to your application via explicit means. Therefore, Web 2.0 companies set inclusive defaults for aggregating user data and building value as a side-effect of ordinary use of the application. As noted above, they build systems that get better the more people use them.
  • Contrast, however, the position of Amazon.com. Like competitors such as Barnesandnoble.com, its original database came from ISBN registry provider R.R. Bowker. But unlike MapQuest, Amazon relentlessly enhanced the data, adding publisher-supplied data such as cover images, table of contents, index, and sample material. Even more importantly, they harnessed their users to annotate the data, such that after ten years, Amazon, not Bowker, is the primary source for bibliographic data on books, a reference source for scholars and librarians as well as consumers. Amazon also introduced their own proprietary identifier, the ASIN, which corresponds to the ISBN where one is present, and creates an equivalent namespace for products without one. Effectively, Amazon “embraced and extended” their data suppliers.
  • Users must be treated as co-developers, in a reflection of open source development practices (even if the software in question is unlikely to be released under an open source license.) The open source dictum, “release early and release often” in fact has morphed into an even more radical position, “the perpetual beta,” in which the product is developed in the open, with new features slipstreamed in on a monthly, weekly, or even daily basis. It’s no accident that services such as Gmail, Google Maps, Flickr, del.icio.us, and the like may be expected to bear a “Beta” logo for years at a time.
  • …Support lightweight programming models that allow for loosely coupled systems…. …Think syndication, not coordination… … Design for “hackability” and remixability…

I could go on.

Other References
ZDNET Web2Con

Quotes from Web 2.0 Conference Web Site

I’m writing something about Web 2.0, but I got distracted by the random header quotes that appear on the website at www.web2con.com. I’ve never been a Simpsons fan, but it reminds me of those sites out there with all of Bart’s blackboard quotes.

  • “Web 1.0 was making the Internet for people, Web 2.0 is making the Internet better for companies.” – Jeff Bezos
  • “I personally use the web as an Intelligence Amplifier” – Bran Ferren of Disney
  • “Truly great companies aren’t built by the greedy, but by the passionate” – William Gurley
  • “Never underestimate the Internet. Manipulate it. Respect it. But don’t try to dominate it.” – Jerry Yang
  • “Operate as if you are in perpetual beta.” – Tim O’Reilly
  • “The value of your product is in inverse proportion to the cost of customer acquisition.” – Shelby Bonnie
  • “Most people think money is the key to reducing risk. Preparation is.” – Mark Cuban
  • “The internet is the most underutilized advertising medium that’s out there.” – Mary Meeker
  • “In the era of Internet television, it will be as simple and cost-effective to create a microchannel as it is to create a Web site.” – Jeremy Allaire
  • “It used to be that Internet was considered a secondary market. Now it is the primary market.” – Sky Dayton
  • “Innovation is not the exclusive province of New Economy companies.” – John McKinley
  • “I’d rather do something interesting, solve an interesting problem, than do something boring and get rich.” – Louis Monier
  • “There’s always a curve ball! But that’s when the interesting stuff happens.” – Mark Fletcher

A better approach to using China for software development

India and China are the next powerhouses of software development, simply due to the numbers, but I’ve never heard a good report (maybe I have to dig deeper). My recent experiences are with Australian companies placing call centres in these countries, and almost always the language barrier is a clear limit.

As part of an upcoming conference paper I’m giving, I have been looking more closely at the software options available, and I came across an interesting concept that has the background funding to get off the ground (a common problem in startups) and addresses a number of issues, including the language barrier (which is less prominent with code).

Sinocode (www.sinocode.com) is the new generation in Offshore Development Centres (ODCs), delivering high value developer expertise from China. Our service offers:

  • Strong economic value in robust software solutions;
  • Proven western style management expertise; and,
  • Highly talented staff.

All of these drivers underpin our proven capability to execute. Execution, on time and on budget, is our key attraction.

(This is their sales pitch, not mine)

Some more reading:
Article in The Australian on 4th October 2005: Ernst places faith in China
An ABC radio interview on 30th August 2005: IT entrepreneur Lloyd Ernst

Handling SPAM

Well, it’s not a new debate, that’s for sure, and I have very strong views on this topic (especially blacklists and ISPs restricting trade), as well as an approach to a new protocol termed ‘Authenticated Mail’ or ‘amail’. I’ll need to put my notes on my blog one day.

The purpose of this note is the ongoing increase in spam, and the inappropriateness of mail, coming to a general web-accessible email address for a client. I’ve proposed they move to a Challenge-Response system for at least the general email addresses listed on web sites. This will at least cut down the amount received at the mailbox; however, it doesn’t eliminate the mail itself, or the traffic, time, money and space used in handling it.

What other approaches can you try? Well, not having a mailto link on your site is a good start. Using an image to display an email address, rather than text, is also a sound entry-level approach. I’d also recommend that for general web inquiries you rotate your addresses, for example: [email protected], [email protected], [email protected], etc. As the address is only used on the website, changing the email and turning the old name into an invalid mailbox will help.
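As a minimal sketch of the “no plain mailto link” idea (the address parts and the “contact” element id below are purely hypothetical), the address can be assembled by script when the page loads, so the raw HTML that address harvesters scan never contains a complete mailto link:

// Minimal sketch: build the contact address client-side so the raw HTML
// never contains a complete mailto link. The address parts and the
// "contact" element id are hypothetical examples.
function writeContactLink() {
  var user = "inquiries2006";           // rotate this part periodically
  var domain = "example.com";
  var address = user + "@" + domain;
  var anchor = document.createElement("a");
  anchor.href = "mailto:" + address;
  anchor.appendChild(document.createTextNode(address));
  document.getElementById("contact").appendChild(anchor);
}
window.onload = writeContactLink;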

I’ve also found another novel approach; this is the second time I’ve seen it. The website asks you to please add predefined characters into the subject line. Here’s an example:

http://r0.unctad.org/ecommerce/ecommerce_en/contact_en.htm

Blog Upgrade Time Again

Well, in the space of a few months I’ve outgrown the previous Drupal version I was using for blogs, which was itself a replacement for an earlier blog implementation. The primary reason was the need for better date-based indexing, either by month or via calendar functions. I was told that a calendar function was available, but I was unable to locate it (at least easily).

Anyway, I’m now using WordPress. It provides the added strength of Archives (which are by month), and one other great thing – Categories. Given I’ve been writing across a few different technologies and interests, it’s great to be able to separate these out, and then even submit them appropriately via RSS to some blog aggregators.

I’ve moved all my articles, but I’ve lost the dates of older stuff from my first move. The old blog is still available at http://techstuff.arabx.com.au

Degrees of Separation 1 – MySQL to Open Source Definition to 2005 Open Source Awards

When reading articles on the web I often end up somewhere I never started. You can find some amazing things, and of course lose a lot of time. I think it’s about time to document my degrees of separation from time to time.

Mount Windows Share under Linux with Samba

First check what shares are available on your Windoze box (in this case it is at 192.168.100.36, with a login of <username> and a password of <password>):

# List the shares available on the Windows box
$ smbclient -L 192.168.100.36 -U <username>
# Create a local mount point, then mount the share (mounting normally requires root)
$ mkdir /mnt/<sharedir>
$ mount -t smbfs -o username=<username>,password=<password> //192.168.100.36/<share> /mnt/<sharedir>
# Confirm the share is mounted by listing its contents
$ ls /mnt/<sharedir>

Photoshop CS .PSD Thumbnails in Windows Explorer

Adobe Photoshop CS (v8.x) no longer supports displaying PSD files as thumbnails in Windows Explorer’s thumbnail view; all you get is a standard Photoshop icon.

If you have upgraded from a previous version of Photoshop to Photoshop CS you will be OK, but a fresh install of CS will NOT include thumbnail previews of PSDs. However, I have discovered how to fix this; all we need is a missing DLL placed in the right folder.

  1. Download psicon.dll from DLL Dump
  2. Place the DLL in C:\Program Files\Common Files\Adobe\Shell
  3. Thumbnail previews of PSD files are now back

New Techstuff Blog

Well, it was about time to move to a more standard blog for my TechStuff, rather than mixing it with my personal blog at http://blog.ronaldandanna.com.

As I wrote that blog software in a few hours one day, and never got around to finishing things like a calendar and RSS feeds, it was also a good excuse to try out Drupal.

Change user file permissions when moving Windows disk to new machine

Set, view, change, or remove special permissions for files and folders

Important: If you are not joined to a domain and you want to view the Security tab:

1. Click Start, and then click Control Panel.
2. Click Appearance and Themes, and then click Folder Options.
3. Click the View tab, and then click to clear the Use simple file sharing [Recommended] check box in the Advanced settings box.

To set, view, change, or remove special permissions for files and folders:

1. Click Start, click My Computer, and then locate the file or folder where you want to set special permissions.
2. Right-click the file or folder, click Properties, and then click the Security tab.
3. Click Advanced, and then use one of the following steps:
To set special permissions for an additional group or user, click Add, and then in Name box, type the name of the user or group, and then click OK.
To view or change special permissions for an existing group or user, click the name of the group or user, and then click Edit.
To remove an existing group or user and the special permissions, click the name of the group or user, and then click Remove. If the Remove button is unavailable, click to clear the Inherit from parent the permission entries that apply to child objects. Include these with entries explicitly defined here check box, click Remove, and then skip steps 4 and 5.
4. In the Permissions box, click to select or click to clear the appropriate Allow or Deny check box.
5. In the Apply onto box, click the folders or subfolders where you want these permissions applied.
6. To configure security so that the subfolders and files do not inherit these permissions, click to clear the Apply these permissions to objects and/or containers within this container only check box.
7. Click OK two times, and then click OK in the Advanced Security Settings for FolderName box, where FolderName is the folder name.

CAUTION: You can click to select the Replace permission entries on all child objects with entries shown here that apply to child objects. Include these with entries explicitly defined here check box. If you do, all subfolders and files have all their permission entries reset to the same permissions as the parent object, and after you click Apply or OK you cannot undo this operation by clearing the check boxes.

Changing IE spinning logo and title

This tweak will change the animated bitmap image that displays when IE (or Outlook Express) is busy. This tweak also removes or changes the branding string added to the window title. To restore the defaults, delete the five values listed below. To use custom animated images (for instance, a spinning globe), first create 20-by-20-pixel and 38-by-38-pixel static bitmap images. Then create a bitmap image 20 pixels wide and a multiple of 20 pixels high, with each successive frame of animation positioned below the previous. Then create a 38-pixel-wide animated bitmap image in the same way.

Ripping CDs to MP3 (on Windows)

For use on Windows platforms I use Free MP3 Rip from mgshareware.com.

My settings are:
In the CD | Options menu:

* on the General tab, select Default Encoding Format MP3.

* on the Output Path tab, for Path Extension and File Name Extension I choose 3 – Both.

* on the Encoding tab, I choose a minimum bitrate of 192 kbps.

Then simply load a CD, Select All songs (with the green tick), and then Rip to Default Format (the CD icon).