webotech homepage Content © WOT Ltd - licensed under Creative Commons License

It started in the summer of 2006. I was on holiday with my girlfriend on the Black Sea coast in Bulgaria, her native country. As a young girl she used to spend her summer holidays there, and she was telling me how drastically it had all changed in recent years.

"Still, it would be nice to buy a small apartment here," she commented.

Less than six months later that passing remark had become a reality. We planned to use the property ourselves but also to rent it out, if we could. Given my software development skills, it was a natural step to set up a website which we could use to advertise the apartment.

That website is bulgariaseaview.com. If you are after a more in-depth introduction, have a look at my first entry.

Release Notes

Version 1.10 has been deployed; here is the change log:
There haven't been any big changes in this release, for good reason...

More Content

This release has focused on content. A volunteer has joined the project, and his first task was to add a guide. A lot of my time has been dedicated to bringing him up to speed and to sharing the development environment (especially since the volunteer lives in another country!). It was also important to get him doing work that could directly contribute to BSV, and the area of lowest risk is adding static content. Guides are nonetheless deceptively complex: the publishing technology has to be understood, images with the correct license have to be collated, map data needs to be added, the guide knowledge has to be gained, and finally all of this has to be brought together and validated. In this case, the results are the new Sofia guide pages. It's all good work, and the project now has a new, fully trained member.


While the guides were getting their overhaul, I thought I would add to them by integrating hotel search functionality. This was done with a third-party comparison engine that offers white-labelling technology. They give you access to some rich UI functionality, which I checked and found wasn't very good, so I rewrote it using compliant code and used that instead. Another advantage is that we now host it ourselves, which minimises the chances of something going wrong without us noticing. There is one outstanding item for this work: automated testing to ensure that the third-party service continues to work. Sounds like a job for the new team member to me.

Release Notes

In BSV 1.8 there are a few changes which were driven by the site itself. By that I mean that the way the site was being used was causing too much manual administration, so I added code to stop it; more on this below. Oh, and there is also a bit of pure creative input in the form of new content, which is always a bit of fun. Here are the release notes:

Captchas and Robots

Not all internet robots are written for a good purpose like search engine bots are. In fact, I would dare say that the vast majority of them are malicious, and by their very nature they will ignore the robots.txt file. This can be problematic.

BSV had been targeted by one or more of these. Basically, they would fill in the 'add a property' form with any old rubbish, probably in the hope that it would generate a listing automatically. If the listing contained links back to their sites, those sites might enjoy an increased page rank. It was never going to work, though, since BSV requires email validation and all posted content is verified for legitimacy. However, it did cause quite a bit of manual work for me, and it did leave some rubbish in the database.

One solution to this kind of problem is to put in a captcha image which effectively forces human interaction when a form is filled in. Some clever robots are able to deal with some captcha images but fortunately that was not the case with the ones targeting BSV.

In case you are wondering, I looked into using an open source captcha library, but I couldn't find a single decent one. Most didn't consider what might happen with a high transaction count, nor did they have any kind of session management, so I knocked one up myself with the help of ImageMagick.
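For illustration, here is a minimal sketch of the session-management side that most of the libraries I looked at neglected. This is in JavaScript rather than BSV's Perl, and all names are invented; the idea is simply that each captcha answer is stored against a one-time token with an expiry, and verification consumes the token.

```javascript
// Hypothetical captcha session store (image generation, which BSV does
// with ImageMagick, is out of scope here).
function CaptchaStore(ttlMs) {
  this.ttlMs = ttlMs;
  this.entries = {}; // token -> { answer, expires }
}

// Record the answer for a freshly generated captcha image.
CaptchaStore.prototype.issue = function (token, answer, now) {
  this.entries[token] = { answer: answer, expires: now + this.ttlMs };
};

// Check a user's guess; the token is single-use, so it is removed
// whether or not the guess is correct.
CaptchaStore.prototype.verify = function (token, guess, now) {
  var entry = this.entries[token];
  delete this.entries[token];
  return !!entry && now < entry.expires && entry.answer === guess;
};
```

A robot replaying an old token, or submitting after the expiry window, fails verification even if it has somehow read the image.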

Creative Content

Writing content, the lifeblood of the internet, is never dull. It's fun because it means I get to browse the web doing research, take photos, and/or use appropriately licensed photos from other sources.

Once I have collated the information I need, I get to put it down in a coherent and hopefully easy-to-read format before getting it validated by people who know the real truth. The process does take quite a bit of time, but it's worth it. In this release I added a guide to Bansko, a ski resort, and I got my property page translated into Bulgarian, since we have been getting a few Bulgarian clients recently.

It's all good.

Release Notes

The BSV 1.6 release is primarily a consolidation of existing code, so that BSV is ready for the onslaught of new features. There is a bit of reorganising here, some defect fixing there, and a splash of standardisation. Here are the release notes:

Third Party Code

I guess it is a coincidence that I came across two separate problems with other people's code during this release, but this deluge (OK, that's a bit strong) has highlighted the pros and cons of third-party libraries.

The obvious pro is that you presumably save time and effort because you are using someone else's code. Usually there are no problems; in fact, you avoid them, because the library is often well tested and well used. The problem is when something goes wrong. Do you ditch it, write your own, wait for a patch?

The first issue I came across was with the Perl XML::DTD package. All I wanted to do was validate my XML against its DTD. Pretty simple stuff. I didn't want to write the validator myself, as it is completely generic and perfectly specified, and to be honest I didn't want to maintain a library that might need to change if the spec changes. Third-party, open source code is the perfect solution for this.

Unfortunately, I ran into problems installing the package due to dependency issues. When I did eventually install the library, there was a bug which needed fixing. In true open source fashion, the bug was fixed quickly. However, when I wanted to go live with BSV, I realised that the package wasn't installed on the server, and my provider didn't want to install it because it isn't available in the FreeBSD ports collection. I can only guess it isn't available because of the dependency issues I came across. In the end I had to include XML::DTD in my own library tree, which I really wanted to avoid.

Google Maps

The new Google Maps (AJAX) API, which provides namespacing for all its members, was the next piece of code I had problems with. I wanted to implement the API because their web page says you have to if you want the latest stable release of Google Maps.

The instructions say to use the google.maps.* namespace for all classes, methods and properties you currently use in the Google Maps API, replacing the G prefix with this namespace. If you try that with the G_HYBRID_MAP property, it won't work. In this case you have to replace the G_ prefix; a minor difference, yet one that you can only discover by guessing or by reading through Google's code. Oh, and the code examples all use the old API, so they don't help.
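To make the two rename rules concrete, here is a hypothetical helper (not part of any Google library) that applies the documented rule plus the undocumented exception for G_-prefixed constants:

```javascript
// Illustrative only: maps an old-style Google Maps identifier to its
// namespaced equivalent.
function namespacedName(oldName) {
  if (oldName.indexOf('G_') === 0) {
    // Constants: only the "G_" prefix is replaced, the rest is kept.
    return 'google.maps.' + oldName.substring(2); // G_HYBRID_MAP -> google.maps.HYBRID_MAP
  }
  // Classes etc.: the documented rule, replacing just the "G" prefix.
  return 'google.maps.' + oldName.substring(1);   // GMap2 -> google.maps.Map2
}
```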

Once I worked that out, I got my code working fine with Firefox, so it was time to test it with IE. Surprise, surprise, IE bombed out. It just freezes when you try to load the map, across various versions of IE and operating systems. There are no error messages, no logs, nothing; it just freezes. I ended up posting the problem on a groups page (ignored so far) and reverting to the old API.

BSV 1.4 Released

Release Notes

It's a new month and a new release: BSV version 1.4, to be specific. Here are the release notes:

Production Is The Only True Test

Have you noticed the way a lot of software, usually web-based, is in beta? That's because software houses recognise that no amount of testing will guarantee a defect-free piece of software, so they are reluctant to give it the release stamp of approval. As a developer you will always miss something. Most likely you will only find out once it has been deployed to production and given a good bashing by the users.

I guess that this is the theme for today's release, since about half the items in the change log are due to problems that users encountered. I think they were minor enough for users not to notice, as I didn't get any direct complaints, but my logs showed me that not all was well. You will also note that this release has had new asserts added to the UI tests. These have been put in place to ensure the defects I fixed aren't unwittingly re-introduced.

User Driven Development

This release has also definitely been user driven. Having fixed the defects they unearthed, I improved some features which they use (the way map data is submitted and its content), and finally I added some admin features to help me manage my users.

The difficulty with the admin features was setting up the main framework for them. Now that that has been done, creating new admin functionality should be a lot easier. And another thing: since the admin features are only used by me, users won't be finding any defects in them!

Release Notes

It took a long time, but I have a couple of good excuses: new house, new job. Once I recovered from the initial shock of these life-defining events, I got back to some coding, and then, after dealing with a number of smaller aftershocks, I released BSV 1.2. As always, the release notes are here for reference:

Stats To August 2008

When I did the last release, I noted that I needed more site usage, so I concentrated on getting some good incoming links and, where appropriate, putting a link in the useful resources section. It's still early days, but it has had an effect. Here are the stats:

It's probably worth noting that it is high season in Bulgaria, so I will naturally get more visitors as they browse the internet for holidays. I hope that when that has died down I will end up in a better place than I was before the high season. That will be the real test of my e-marketing strategies.

Users Are Up

Apart from the Google Analytics stats, which are just its view of my visitors (and you have to remember there is a layer of JavaScript as well as a statistics application of some kind, so Google Analytics certainly does not report the absolute truth), there are two other sources that I use.

The first is the raw server stats, which are guaranteed to be the absolute truth, although interpreting them is the real challenge. The second is the number of registered users, or database stats in the case of BSV. Both of these sources are also on the increase.

Of course, with more users comes more support. On a recent occasion I had a user who forgot his password. Since there was no password retrieval facility, I had to hack the database and manually manage the change until the user could log in again. Once was enough for me to fix the problem properly; in today's release notes you can see an enhancement called 'forgot username and password functionality'. That's it. Sorry, there is no humorous anecdote, but the point is that the site is growing according to its needs. In my opinion, organic growth should always play a major part in the development of any project, or any business for that matter.

Release Notes

I released the next version of BSV over the weekend. It satisfies all my criteria so this release is 1.0 (finally). Here are the release notes:

What Next

Now that there is sufficient functionality for the website to be really useful without taking up too much of my time, it is time to take it to the next level. By that I mean making sure I have a few more users than I currently do. So I need to concentrate on a bit of e-marketing.

Firstly, you may have noticed from the release notes that I now have an 'about us' page, where I unashamedly ask people to link to the site and to donate, so that I can invest the money in some e-marketing. In addition, there is a useful sites page with a list of links to sites that I think are useful for my users. They are all sites that require a reciprocal link and have passed my quality criteria (they have to be useful and well used).

I'll keep you posted on the traffic but some preliminary stats are looking good!

Link Farms

Let me explain something about linking to other sites.

Google's early success was largely attributable to the accuracy of the results it returned when you did a search. It soon became clear that the algorithm used to prioritise results was absolutely critical. Some may think intellectual property that valuable would be a closely guarded secret, although there is an argument that you should be as open as possible, especially when a massive number of consumers is involved. Anyhow, I'm not sure why or exactly when, but Google decided to divulge the concept behind their code.

The idea is pretty simple: every web page that has been indexed is assigned a number that represents how popular it is (the page rank). For a given search, the results are ordered by popularity. So how can you possibly calculate how popular a web page is without access to exhaustive usage statistics for all web pages? You can't, but Google believe that with the information they have, they are able to approximate popularity. They do this by looking at how many inbound links there are and at the content that is wrapped in the links.
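As a toy illustration of the concept (emphatically not Google's actual algorithm, just the published idea), a page's score can be computed iteratively from the scores of the pages linking to it:

```javascript
// Toy page-rank sketch: each page starts with equal rank and repeatedly
// passes a damped share of its rank to the pages it links to.
function pageRank(links, iterations, damping) {
  // links: { page: [linkedPage, ...] } - every page must have outlinks here
  damping = (damping === undefined) ? 0.85 : damping;
  var pages = Object.keys(links);
  var n = pages.length;
  var rank = {};
  pages.forEach(function (p) { rank[p] = 1 / n; });

  for (var it = 0; it < (iterations || 20); it++) {
    var next = {};
    pages.forEach(function (p) { next[p] = (1 - damping) / n; });
    pages.forEach(function (p) {
      var outs = links[p];
      outs.forEach(function (q) {
        next[q] += damping * rank[p] / outs.length; // rank split across outlinks
      });
    });
    rank = next;
  }
  return rank;
}
```

A page with many inbound links from well-ranked pages ends up with a high score; a page nobody links to gets only the baseline.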

All well and good, as long as no one exploits this information. But this is the real world, so of course the same group of marketeers who exploited older search engine technologies to improve their listings used the algorithm information similarly, to the detriment of all web users. One of the tactics they used was creating link farms. Link farms are websites, or areas of a website, whose sole purpose is to improve the page rank of a site by exchanging links indiscriminately. Ultimately, link farms are bad because they offer no value to a user at all; they simply artificially (and therefore incorrectly) increase a website's popularity, which means Google's results aren't as accurate as they used to be.

Google are obviously aware of link farms and probably are taking, or have taken, steps to deal with them. There is a rumour going around that when they discover a link farm they simply strip the page rank of the offending site; personally, I don't think they would be quite so drastic. They certainly have been presented with a problem which isn't easily solved: how do you distinguish indiscriminately exchanged links from truly valuable ones in an automated manner? You either need a highly sophisticated heuristic approach, which will no doubt be constantly changing, or you do it manually (at least some of it).

All this is also a problem for website owners: if we choose to increase our popularity using links (and when you have no budget, like me, there is pretty much no alternative), how do we do it without being classed as a link farm? The answer is not to link indiscriminately, and that is what BSV's useful sites page does.

Software Releases

Since I am writing a journal about this, I may as well formalise a couple of things. Firstly, I need to give the software a name; BSV is good for me, so that's what I called it.

Next on the agenda is versioning. I have decided that version 1 will have to satisfy the following criteria:
  1. Full CRUD functionality for property management
  2. Working satisfactorily in IE 6 and Firefox
This release is missing part of the photo update functionality in IE, due to the browser not adhering even closely to W3C standards, so I'll somewhat arbitrarily call this version 0.8 (you have to start somewhere).

Release Notes

Here are three quick definitions that I will use in the release notes:
  • enhancement - refers to new functionality
  • change - refers to modification of existing functionality
  • defect - refers to correction of erroneous functionality
I know they are a bit fluffy around the edges, but they are only intended as an indication. Here are the release notes:

Caching Problem

The problem is that IE does not cache images at all if they are referenced via DOM JavaScript. The effects of this are obvious: pages are slower to load and bandwidth is consumed unnecessarily. When I came across this, I didn't try to change IE's settings, so it may be that you can fix the problem by playing with Tools > Internet Options. But more to the point, I wouldn't want to fix this by tweaking my settings; I'm operating a public website, and it should work well with out-of-the-box settings.

Sadly there is no quick fix and no workaround. I have confirmed it to be a problem in IE 6 and IE 7.


Here is a scenario to reproduce it:
  • create multiple image nodes iNode = document.createElement('img')
  • set the same source attribute on the images iNode.src= uriSrc
  • append the images to an existing DOM node domNode.appendChild(iNode)
IE will download every image as a distinct object, even though they have the same src attribute. Maybe I'm going slightly mad, but to make it worse it seems to take far longer to do this than it ought to. I'm not going to waste time verifying that, so I'll give it the benefit of the doubt.
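Put together, the repro steps look like this (a sketch to run in a browser; the function name and parameters are mine):

```javascript
// Appends several <img> nodes that all share one src. A compliant
// browser fetches the image once and reuses it; IE 6/7 fetched it
// once per node when the images were created via the DOM like this.
function buildThumbs(domNode, uriSrc, count) {
  var nodes = [];
  for (var i = 0; i < count; i++) {
    var iNode = document.createElement('img');
    iNode.src = uriSrc;           // identical src on every node
    domNode.appendChild(iNode);
    nodes.push(iNode);
  }
  return nodes;
}
```

Watching the network traffic while this runs is enough to see the duplicate requests.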

Real Life

What I've just described is all a bit abstract and you're probably thinking 'yeah, but why would you want to do that? Give me real scenario.' Well, here it is. Have a look at this snapshot:

[screenshot: the photo management widget]

There is a main image with thumbnails below it, each with mini icons above and below. The mini icons are used to manage photos: the arrows let you move a photo left and right, while the A and X icons let you edit the photo title and delete the photo. When you click on any mini icon, the widget's metadata changes, and the change is communicated to the back end asynchronously.

There are two situations where this widget needs to create image nodes: when it initialises and when metadata changes (since it syncs with the database). Obviously, when this happens the browser has to render each mini icon. In Firefox, the image loads once and all the icons appear at the same time. In IE, one loads, then another, then another. You get bored, go and grab a coffee, still loading; come back, still loading; take a sip, the next one appears... OK, that's exaggerated, but you get the picture.

What To Do?

So what should I do about this on bulgariaseaview.com? This is where my earlier decision to work on this project independently begins to pull its weight. There is no one to answer to here, so I have opted to leave it working slowly in IE and have added a note recommending the use of Firefox.

It's a bit naughty; after all, who am I to tell anyone what browser they should use? But on the other hand, imagine how completely I would be ignored if I asked Microsoft to fix their code. And maybe, by explaining why it works better in a more compliant browser, I will convince users to stop using inferior ones.

Quick Fix

The quick fix is this: do not use CSS class names that begin with an underscore (_) if you want them to be applied in IE 6. By the way, this bug has been fixed in IE 7.

IE CSS Underscore Bug

I was sort of aware of IE's CSS underscore hack, but I was doing my best to forget it, since it's a piece of information as mundane as it is common with IE ("yet another problem with IE"). However, this issue must be related to that hideous hack.

This is a screenshot of what I wanted to achieve in IE 6:

Simple enough, so I put together this HTML, which should be absolutely fine:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
 "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>IE 6 - styles starting with underscore</title>
<style type="text/css">
._mOuter {position:relative;top:-2px;left:20px;}
._mTabOn,._mTabOff {width:100px;min-width:100px;text-align:center;
 font-size:14px;border:1px solid #000000;border-bottom-width:2px;}
._mTabOn {background-color:#79B0D4;border-top-color:#79B0D4;}
._mTabOff {background-color:#497c9d;}
._mTabLn {color:#000000;display:block;padding:4px;}
._mTabSp {width:1px;background-color:#000000;}
a._mTabLn:link {text-decoration:none;}
a._mTabLn:visited {text-decoration:none;}
a._mTabLn:active {text-decoration:none;}
a._mTabLn:hover {background-color:#5c90b3;text-decoration:none;}
</style>
</head>
<body>
<table cellspacing="0" cellpadding="0" border="0" class="_mOuter">
<tr>
  <td class="_mTabSp"></td>
  <td><div class="_mTabOn"><a href="#" class="_mTabLn">Tab 1</a></div></td>
  <td><div class="_mTabOff"><a href="#" class="_mTabLn">Tab 2</a></div></td>
  <td class="_mTabSp"></td>
</tr>
</table>
</body>
</html>

However IE 6 renders this:


Can you spot what's wrong? Most people would, and I would argue that the difference is not acceptable for a public website, even though styles are subjective.

To fix it, you simply replace all the class names defined in the <style> tag that start with an underscore with names that don't. Here is the working code:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
 "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>IE 6 - styles starting with underscore</title>
<style type="text/css">
.XmOuter {position:relative;top:-2px;left:20px;}
.XmTabOn,.XmTabOff {width:100px;min-width:100px;text-align:center;
 font-size:14px;border:1px solid #000000;border-bottom-width:2px;}
.XmTabOn {background-color:#79B0D4;border-top-color:#79B0D4;}
.XmTabOff {background-color:#497c9d;}
.XmTabLn {color:#000000;display:block;padding:4px;}
.XmTabSp {width:1px;background-color:#000000;}
a.XmTabLn:link {text-decoration:none;}
a.XmTabLn:visited {text-decoration:none;}
a.XmTabLn:active {text-decoration:none;}
a.XmTabLn:hover {background-color:#5c90b3;text-decoration:none;}
</style>
</head>
<body>
<table cellspacing="0" cellpadding="0" border="0" class="XmOuter">
<tr>
  <td class="XmTabSp"></td>
  <td><div class="XmTabOn"><a href="#" class="XmTabLn">Tab 1</a></div></td>
  <td><div class="XmTabOff"><a href="#" class="XmTabLn">Tab 2</a></div></td>
  <td class="XmTabSp"></td>
</tr>
</table>
</body>
</html>

An Afterthought

This blog is an afterthought, one that came to me like so: I was happily doing some web 2.0 development, testing in Firefox with those very useful add-ons, and it was all looking good. Then I thought I'd better make sure it worked in IE, which is always slightly annoying, since the computer that has it installed isn't mine.

Having successfully negotiated use of the test machine, "I'll prepare you a hot chocolate if you let me use your lappy", I was ready to do my 5 minutes of validation before calling it a night. There was no way it was going to take longer than that, after all my code was W3C compliant. What could possibly go wrong?

IE bombed out with its usual ironic error message (you know, the one that is supposed to be helpful but is actually almost completely pointless). It soon became apparent that the only reliable way to fix the problem was to bombard the search engines with questions until they revealed the answer. On this occasion it came in the form of a tech site written in that unique form of hispano-germano-cyrillo English that you are probably familiar with.

Anyhow, this got me thinking. Wouldn't it be great to write down all the issues I encounter for the duration of the project, with details of how I solve them, so that others can benefit? What a unique and genius idea! Then I realised that actually it wasn't unique; that hispano-germano-cyrillo-English guy had already thought of it (along with millions of others). Nonetheless, I concluded that it was still a good idea to document the project, since there was bound to be at least one person who would find it useful.

That is why this journal is an afterthought, and because it is an afterthought, it has come into existence just over a year after the project began. So let me give you a brief rundown of what has happened so far.

My Approach

Firstly, I decided that this would be purely my own work. No partners, no clients (like I can just choose to have clients), no other developers, no time schedules, no one to answer to: fantastic. Embarking on a project with that mindset makes a huge difference.

Being my own client, I told myself that all I wanted was a quick brochure site that I could start e-marketing within a few days. I toyed with the idea of using Java, but then I decided that if I wanted to do things quickly, it would have to be done in Perl. Perl development is fast anyway, and on top of that I had tried-and-tested libraries that I had built up over the years, which would make it even faster.

I set to work to get a couple of pages with fresh content, good usability, nice styling (isn't that subjective?), targeted metadata and compliant mark-up. You know, all the elements needed for SEO, if you believe in that particular form of occultism. Then I used a bit of web 2.0 magic so that I could display my holiday snaps on a single compact page. I also implemented the Google Maps API so that users could get a bearing on where the apartment is, and an AJAX control so that they could send me the coordinates of areas of interest. Next I went shopping for a domain name with my favourite registrar, and set up hosting with them as well. An hour after that, I had deployed my new web app and registered it with the search engines.

A final step was needed: approval from one of the world's greatest consumers and an avid online reader. I showed the site to my girlfriend and watched her expectantly. She stared at the monitor for a couple of seconds, then her hand moved tentatively towards the mouse. She retracted it before it got there and turned to face me.

"Shall we go dancing tonight?" she blurted out.


How The Website Has Grown

The website has changed a lot since then. Without going into great detail, this is what I did on the technical front:

  • I packaged it up properly so that it could be reliably deployed in various (linux and FreeBSD) environments by referencing an appropriate configuration file.
  • I improved my publication technology so that it was able to publish in static or dynamic mode.
  • I modified my templating technology so that code was naturally divided into cohesive units.
  • I improved my core javascript library so that it had inherent support for AJAX calls.
  • I added a whole load of custom UI tags.
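As a rough sketch of what "inherent support for AJAX calls" can mean in practice (this is illustrative, not BSV's actual library), a thin wrapper around XMLHttpRequest centralises the ready-state and status handling:

```javascript
// Hypothetical helper: the XHR object is passed in, which keeps the
// wrapper testable and lets callers supply fallbacks on old IE.
function ajaxGet(xhr, uri, onSuccess, onError) {
  xhr.open('GET', uri, true);           // asynchronous request
  xhr.onreadystatechange = function () {
    if (xhr.readyState !== 4) return;   // 4 = request complete
    if (xhr.status === 200) onSuccess(xhr.responseText);
    else onError(xhr.status);
  };
  xhr.send(null);
}
```

In a browser you would call it as `ajaxGet(new XMLHttpRequest(), '/data.xml', handleData, handleError)`, so every call site stops repeating the same boilerplate.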

As far as what is considered creative in the traditional sense goes (even though, personally, I consider software development just as creative, if not more so), this is what I got up to:

  • I added guides with photos (my holiday pictures). For the guides I did some on-line research, had conversations with people from the area and added my personal experience.
  • I picked up a few other pictures with appropriate creative commons licenses and in doing so I had to improve my image widget so that I could properly attribute the photos.
  • I integrated skyscanner mini.

The Big Decision

By this time the site was attracting enough traffic to get a couple of enquiries about the property. More importantly, it felt like the code was solid. I had adequate unit test coverage, was working on integration tests using Selenium IDE, and a proper error notification framework made sure I fixed defects pretty quickly.

Also around this time, an acquaintance who owns an apartment in Bulgaria as well demanded that I add a page for his property to my site. My defences had been weakened by the effects of beer, and I responded enthusiastically: "Yeah, no problem. Get me another drink, will you?"

So I was faced with this question. Should I
create another static HTML page for his property
- or -
enhance the web app so that he could add and manage his property, in doing so allowing anyone to do the same with theirs?

The answer to this dilemma is obvious, and as far as I can remember these are the steps (more or less in order) that bring us to today:

Requirement 1: set up a straight-through processing form that
    • creates a user account with contact information, and
    • allows the user to enter the property address, descriptive details, a selection of communal and private facilities, property metadata, season dates, rental rates, rental and payment terms, photos with metadata and, of course, map coordinates.
  • I used the above requirement with a bit of agile modelling (scribbled on a piece of paper ultimately destined for the bin) to design the relational object model.
  • From the object model I derived a database schema.
  • I set up a data system for my web app which hooks into MySQL, and coded up repeatable and reversible scripts for
    • creating the schema
    • populating standing data
    • validating (at a basic level) data integrity
  • Then I wrote a load of Perl modules which together form the data access layer of my web app.
With that, my first requirement was satisfied, and I deployed the code to see if anyone would add their property.

Requirement 2: use the data collected from requirement 1 to create a single HTML page per property.
  • I had most of the technology for this; all I needed was to code up a component that extracted the data and then used my publishing and templating technologies to create the HTML.
  • I added support for scheduled publication of property pages (with cron) and the ability to publish a single page or the whole lot.
With the second requirement completed, I replaced my old static property page with my new database-generated one, and added the page for the second property as well.

Requirement 3: the property owner should be able to manage the content related to their property.
  • I dug out an old library that I wrote for authenticated web sessions. I improved it (a hell of a lot) until it was a proper MVC system using the command and front controller patterns as the core framework (I know it's not a JEE project, but they are good patterns).
  • I added support to the MVC framework so that I could plug my publishing and templating technologies into it.
  • Then for each object model I coded create, read, update and delete functionality.
Well, for almost every one. I have the photos model left, and if I hadn't started this blog, that would be ready as well.
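The front controller and command patterns mentioned above can be sketched like this (in JavaScript for illustration; BSV's real framework is Perl, and all names here are invented):

```javascript
// Each command handles exactly one action; the front controller is the
// single entry point that maps an incoming request to its command.
function FrontController() {
  this.commands = {};
}
FrontController.prototype.register = function (action, command) {
  this.commands[action] = command;
};
FrontController.prototype.dispatch = function (request) {
  var command = this.commands[request.action];
  if (!command) {
    return { status: 404, body: 'unknown action' };
  }
  return command(request);
};
```

Each create, read, update and delete screen then becomes one registered command, which is roughly why the CRUD work reduced to a per-object-model chore.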

