Saturday, 29 November 2008

Brunel, Robert Howlett and David White

Just came across this wonderful short piece on the BBC website. Well worth watching. What a great idea: well done to David White and the BBC.

Modern-day civil engineering rarely seems to aspire to the mixture of functional beauty and sense of proportion which Brunel's works all seem to have. But I must put in a good word for the revived, extended and transformed St. Pancras International station. The main concourse is a marvellous place to be (apart from the silly, cheesy, oversized statue of the couple): a couple of times recently, when on our way back to King's Cross station, Dan and I have walked in just to take in and admire what's been done.

Then we wander back to the dirty, grim, gloomy shed which is the current King's Cross. We've often talked about how wonderful it would be if something similar to St. Pancras were done for King's Cross. The good news is that Network Rail is working on it.

Friday, 28 November 2008

Logging in NetBeans using slf4j

It's probably not necessary to start this article with a mini-lecture on why logging, like unit-testing, is a Good Thing, so all I will say is that logging is not only important during development but also invaluable for diagnosing runtime problems once a solution is delivered and running. From here on I'm going to assume you're convinced.

So, you know you should be logging, but how? And which of the many logging frameworks should you use? This article sets out to answer the first question, showing you how to get started logging in Java, using the NetBeans IDE, and offers an opinion on the answer to the second question.

There are several well-established ways to do logging in Java but the best approach I have come across so far is to use slf4j (the Simple Logging Facade for Java) for all logging statements in source code. The slf4j facade interfaces are completely independent of any specific logging implementation. The binding between the slf4j facade interfaces and a concrete logger implementation is done statically when the application jar is compiled: there is no runtime trickery involved (e.g. using class-loaders).

This approach allows you to replace one logging implementation with another without altering any of your logging code, just by binding to a different concrete logger (i.e. referencing a different logger implementation jar).

Splitting the logging job into two in this way means you need two references in your Java project, in order to use slf4j:
  1. a reference to the slf4j API itself, which provides the interfaces you will use exclusively in your logging code.
  2. a reference to a concrete logging implementation, which your logging code will call via the slf4j facade.
As the slf4j interfaces must be bound to a concrete logger implementation, you need to choose which logger to use. I have chosen Logback, which appears to be an improved version of log4j, one of the more efficient Java logging solutions.

This article describes what you need to do to start logging using this combination. I'm going to assume that you understand the value of logging, and know a little about log statements and logging levels: these are common to all modern logging frameworks, so I won't cover them in detail.
Logger Libraries

Download slf4j and logback. Extract the distribution archives to wherever you keep your libraries. Then, in NetBeans, create libraries (via Tools/Libraries) for both distributions.
  • For slf4j, include the slf4j-api.jar. The precise name of the jar file will depend on the version you download: mine is slf4j-api-1.5.0.jar
  • For logback, you need logback-core.jar and logback-classic.jar. Again, precise file name depends on version e.g. logback-core-0.9.9.jar
You could create a combined library as a convenience for simple projects, if you know that you will frequently want to use the same concrete logger with the facade. However, bear in mind that for projects which build libraries or components used in other projects, you should only reference the slf4j library: the logger implementation binding should be done by the application which uses your component.

The source code for this tutorial assumes two NetBeans libraries, one for slf4j and the other for logback.

Adding Logging To Your Code

In your NetBeans application project, add references to the slf4j and logback libraries. To start logging from your code, you need to do the following things:
  1. Initialise a logger reference in every class you wish to log from.
  2. Add import statements for the required slf4j types.
  3. Add logging statements to your code.
Refer to the source code accompanying this article. Look first at the Main class: the first declaration in the class is for the logger:
private static final Logger _log = LoggerFactory.getLogger(Main.class);
Notice that this logger is declared static: this is because Main houses the static main() application method, so the logger must be static for it to be referenced from the main method. Above, in the imports section, you will see the two imports necessary to use slf4j:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
Now open the slf4j-sample-lib project, a trivial sample library which the main application references. Look at the StringReverser class, and notice that the logger is declared slightly differently:
private final Logger _log = LoggerFactory.getLogger(this.getClass().getName());
This is a non-static declaration: only non-static methods will have access to this logger. If you look carefully at the declaration, you'll spot that this one is portable: it doesn't contain the class name explicitly. This declaration can therefore be pasted without modification into any class, but remember that this is non-static, so will not be usable from any static methods in the class.

As the static logger is usable from everywhere (non-static and static contexts), wouldn't it be convenient to have that declaration added automatically in every new class you create? Perfectly possible, in NetBeans. To include the logger declaration in your NetBeans source code template, go to Tools / Templates, open the Java branch, select Java class and click Open in Editor. You'll see the Java class template source, complete with template parameters for the class name and some other stuff I'll ignore here. All we need to do is add a single line to the class skeleton:
public class ${name}
{
    private static final Logger _log = LoggerFactory.getLogger(${name}.class);

}
If you save the changes to your template and create a new class in NetBeans, you'll see that the logger declaration, with the correct class name filled in, has been added to the source. You could even add the necessary import statements to the template.
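For illustration, a template amended that way might look something like the following. This is only a sketch: the real template also carries licence-header and javadoc parameters which I've omitted here.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ${name}
{
    private static final Logger _log = LoggerFactory.getLogger(${name}.class);

}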

But the overhead of including the logger declaration is low (just one simple line), and NetBeans will take care of the imports for you (use Fix Imports with Ctrl-Shift-I), so you may decide it's not worth adding these lines to your templates.

Static versus Non-static Loggers

There's one more difference between static and non-static loggers which you should at least be aware of. If you declare the logger static, there will be exactly one instance of the logger retrieved for the class, shared across all instances. As the logger reference is shared, memory is allocated just once and the logger is initialised just once, rather than once per object. This is not usually a problem, but you should be aware that the same logger reference will be shared across every instance of the class.

When declared non-static, every instance of your class will incur the memory and initialisation cost for the logger reference variable. For the vast majority of classes this will not be a significant overhead, but again you should be aware of the tradeoff. Note that within the same application, retrieving a logger by a given name will always return the same logger object: using a non-static logger reference does not mean that every instance of a class gets a new logger object of its own.

For a complete account of this issue, you should read this section of the online slf4j documentation.

Logging Statements

As I said in the introduction, I'm assuming you are somewhat familiar with logging. If not, go back and read the material on the slf4j and logback sites, and follow up the references there.

Use the logger method appropriate to the level you wish to log at, for example:
_log.info("'{}' reversed is '{}'", instance.getValue(), instance.reverseValue());
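For reference, there is one logger method per level, and all of them accept the parameterized form shown above. The messages and variables below are invented for illustration:

_log.trace("Entering reverseValue() with '{}'", value);
_log.debug("Reversing '{}'", value);
_log.info("'{}' reversed is '{}'", value, reversed);
_log.warn("Value '{}' is empty, nothing to reverse", value);
_log.error("Could not reverse value", exception);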
Possibly the most important piece of advice I can give here is to consider the cost of logging carefully, especially debug-level logging. The actual call to the logger object is not usually the dominant cost in debug logging: you are likely to be constructing moderately large strings to output, and preparing these may be where you incur the biggest expense. Consider a line such as this:
_log.debug("Debugging string: " + expensiveGetter());
The runtime cost of constructing the string argument to this logging statement will always be incurred, whether the log-level is set to debug or to some other (less-detailed) level. Clearly that's undesirable, and fortunately there is a very simple coding idiom we can use to avoid this cost:
if (_log.isDebugEnabled())
{
    _log.debug("Debugging string: " + expensiveGetter());
}
Simply guard the debug log statements with a test for the current (runtime) level of the logger. This does add a little source code; it is possible to reduce simple statements like the one above to a one-liner but I generally avoid this, valuing clarity over concision.

The cost of the debug statement will now be reduced to a simple boolean test, which is much more acceptable.
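It's worth adding that the parameterized form shown earlier (with the {} placeholders) already avoids the string-construction cost when the level is disabled, because the message is only formatted if it is actually going to be logged. What it cannot avoid is the evaluation of the arguments themselves, so for an expensive call (expensiveGetter() here is just the hypothetical example from above) the guard is still worthwhile:

// formatting is deferred, but expensiveGetter() is still invoked:
_log.debug("Debugging string: {}", expensiveGetter());

// the guard avoids the call entirely when debug is off:
if (_log.isDebugEnabled())
{
    _log.debug("Debugging string: {}", expensiveGetter());
}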

Configuring Logback

If you build and run the sample project accompanying this piece, you'll see the log statements are written to the console. This is the default behaviour for most logging solutions, logback included. In most production situations you will probably want to direct your log messages somewhere else, to a file for example.

In common with other logger implementations (such as log4j), the logback subsystem can be controlled through an XML configuration file. For a complete treatment of the options and syntax you'll need to refer to the logback documentation; in this article I'll give only a brief account of a simple but usable configuration.

Here is the configuration file:
<?xml version="1.0" encoding="UTF-8"?>
<!--
    Document   : logback.xml
    Created on : 25 February 2008, 22:40
    Author     : Roger
    Description:
        Configuration for the logback logging framework.
        For this file to be read by logback at runtime, it must be placed
        on the classpath.
-->
<configuration>
  <appender name="RootFileAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>application-log.txt</file>
    <append>true</append>
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <level>debug</level>
    </filter>
    <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
      <fileNamePattern>application-log.%i</fileNamePattern>
      <maxIndex>2</maxIndex>
    </rollingPolicy>
    <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
      <maxFileSize>100000</maxFileSize>
    </triggeringPolicy>
    <layout class="ch.qos.logback.classic.PatternLayout">
      <pattern>%d{yyyy-MM-dd HH:mm:ss},%p,%c,%F,%L,%C{1},%M %m%n</pattern>
    </layout>
  </appender>
  <root>
    <level value="debug"/>
    <appender-ref ref="RootFileAppender"/>
  </root>
</configuration>
Although this is quite a simple configuration file, there's a lot of information in there and to understand it all fully you will need to study the logback documentation. I'll skim over the most important bits here and leave the rest to you.

Appenders are the components of the logging engine which actually write the log messages somewhere. You can choose from many different appenders – my favourite is the RollingFileAppender, which writes to a text file and 'rolls over' to a new file once the existing file passes a certain size, declared in the triggeringPolicy element. The file element declares the filename, and the rollingPolicy/fileNamePattern declares the filename pattern to use for old log files as the file rolls over.

Layouts define the format of the messages written to the log appender. I've gone with a standard pattern which appears in the logback documentation: please consult the logback documentation on format strings to discover all the options here. Essentially, every log statement will include the date and time, and the origin of the log message, including the line number.

So the net effect of this configuration is that logback will write all log activity out to the file application-log.txt in the working directory, rolling the file when it reaches the size limit I have set (roughly 100k).

Logback and the Classpath

As explained in the logback documentation, if you want to configure logback with an XML file then the file must be present on the classpath of the application, otherwise logback will not find it and will fall back to console logging.

If you examine the source code accompanying this piece you'll see that I have placed the logback.xml file in the root of the source folder (src). In the NetBeans project, this is the 'default package': the name of anything placed here will have no namespace prefix. When NetBeans builds a Java application, it will compile any Java sources it finds in src and all its subdirectories, and will also by default include everything it finds in the root folder in the jar file. To prove this to yourself, go into the src folder, create an arbitrary file, say 'fred.txt', and rebuild the project. Now open the NetBeans Files view (Ctrl-2 will open this if it's not already visible), locate the 'dist' folder and open the tree until you locate the slf4j-sample.jar file. The NetBeans file view allows you to browse the jar file contents – just open the tree to the next level:

[Screenshot: the NetBeans Files view, with slf4j-sample.jar expanded to show its contents]

And there are the two files, logback.xml and fred.txt, packed into the root of the jar. We don't need fred, so delete the file and rebuild the jar.

So the source hierarchy in NetBeans contributes to the runtime classpath, and the 'dist' folder is where NetBeans writes out all artifacts resulting from a build. As the NetBeans documentation (and the generated readme) tells you, you can distribute your application by zipping up and publishing the contents of the dist folder.

Let's run the application. Open a terminal window in the dist folder and type:
java -jar slf4j-sample.jar
The program should run, but nothing should be written out to the console. Instead, an application-log.txt file should appear in the dist directory. Open that file and you will see the result of the logging statements. Logback has obviously located the configuration file and applied the settings.

If you're thinking 'job done' at this point, not so fast! Just consider how logback has located and loaded the XML file. The logback XML file is packed into the (executable) jar file which is the main artifact generated by the build. When the Java runtime runs this jar it first examines the jar's manifest (the MANIFEST.MF file in the META-INF folder inside the jar), which looks like this (from the sample application jar):
Manifest-Version: 1.0
Ant-Version: Apache Ant 1.7.1
Created-By: 11.0-b15 (Sun Microsystems Inc.)
Main-Class: slf4jsample.Main
Class-Path: lib/logback-classic-0.9.9.jar lib/logback-core-0.9.9.jar l
ib/slf4j-api-1.5.0.jar lib/slf4j-sample-lib.jar
X-COMMENT: Main-Class will be added automatically by build
The runtime looks for the entry point of the program (the qualified name of the class containing the main method) and initialises the classpath from the Class-Path setting. Implicitly, '.' (the root of the jar structure) is also on the runtime classpath.

What this means is that when the logback system goes looking for logback.xml it will find the copy in the jar file. Consider the function of the logback.xml file: the users of the application will almost certainly want to configure the log behaviour themselves rather than accept the settings you have baked into the copy of logback.xml inside the jar file.

So, we want to distribute our default logback.xml file but not have it baked into the jar file: what to do? Having it in the source classpath is convenient because it appears in the NetBeans project view, so we can view and edit it from inside the IDE. However, we don't want it packed into the jar, but we do want it on the runtime classpath. How can we satisfy all of these constraints?

We can achieve this with two simple steps:
  1. To prevent NetBeans from including the XML file in the jar, we exclude logback.xml from the list of source artifacts that NetBeans considers as jar contents.
  2. Add a step to the build process which copies logback.xml into the dist folder, placing it next to the jar file.
For step (1), open the NetBeans project properties dialog and navigate to the Packaging subsection, below Build. Add "logback.xml" to the list of Excluded files:

[Screenshot: Project Properties, Build / Packaging, with logback.xml added to the Excluded files list]

If you rebuild the application, then use a zip tool or archive manager to open the resulting jar file, you'll see that the logback.xml file is no longer included. There is more help on excluding content from your jar in this page of the NetBeans documentation.

For step (2) we need to arrange for the logback.xml file to be copied into the dist folder during the build process. To do this, we will add a very simple custom build-step to the NetBeans build process. As you probably know, NetBeans uses Apache Ant as the build engine, and the IDE generates the Ant script automatically. I'm not going to go into much detail concerning what you can do with custom build steps – I suggest you read the well-commented Ant scripts, and look at this section of the NetBeans documentation.

There are standard stages in the build process where we can hook in our own steps ('targets' in Ant-speak). The build.xml file (in the project root) contains a comment describing these. All we want NetBeans to do is copy the logback.xml file into the dist folder after it has finished building the jar file, so we will add a -post-jar target to copy the file:
<target name="-post-jar">
    <copy file="${src.dir}/logback.xml" todir="${dist.dir}"/>
</target>
Add this target at the end of build.xml, just before the closing </project> tag. Now rebuild the project, and take a look in the dist directory: you should see that logback.xml has been copied there. Now you're ready to zip the dist folder and distribute your solution.

That's the end of this little piece. You should know enough now to add straightforward, high-performance logging to your own projects. But there's a lot more to learn: to go further, study the slf4j and logback documentation, follow the examples, and experiment.

Thursday, 20 November 2008

Leonard Cohen at the Albert Hall!

On Tuesday my wife and I were finally able to use her (rather belated) birthday present from me - tickets for Leonard Cohen.  At the Albert Hall!  It was spectacularly good - great seats (stalls, so good line of sight over the arena floor), amazing sound quality, and of course the Man himself, a living legend, who delivered an astonishing 3 or more hours of the finest entertainment I can recall.  The band were superb: every member individually talented but, as with all the best ensembles, the whole was even greater than the sum of the parts. I can't recall a better evening.

But it so nearly didn't happen, for us ...  I had originally bought tickets on eBay, the only place I had been able to find really good seats for the London concerts so far announced, which were at the O2 Arena.  Not a venue we fancied much, but at the time I don't think the Albert Hall dates had been arranged.  The O2 tickets were in the first couple of rows right in front of the stage, so we were pretty pleased with ourselves. I paid, and waited.

And waited ... and chased the seller, again and again.  Who turned out not to have the tickets at all, but was "expecting to receive them from the promoter" shortly before the date.  Lots of promises about keeping us informed, lots of reassurances they would arrive in time. Until eventually, two days before the gig, I receive an email from the seller saying that some tickets have arrived but are for the wrong event!  By now we're way past the eBay 90-day limit for disputes. Moral: do not trust individuals selling concert tickets on eBay. If you do go ahead, demand the tickets or your money back well within eBay's limit.

Fortunately, I found an online ticket company (Double8tickets) with the AH seats, via a Google ad which appeared in GMail! Normally, I ignore these ads, but this one leapt out at me at a time when I was desperate to find tickets, so I wouldn't have to disappoint Chris.  Just goes to show that these unobtrusive little ads can work very effectively, and are so much less irritating than the loud, flashy mess that flickers away above Microsoft's Live or Hotmail frame.


Friday, 14 November 2008

Community and Culture

While looking around for Java dependency tools I came across KirkK's site, and his JarAnalyzer.  As usual, I wanted to know a little more about the person behind the software, so looked up his blog and found an article from 2007 there which really resonated with me: .Net : Software & Technology @kirkk.com

Kirk and I have travelled in opposite directions: he crossed the tracks to work on a .NET project, whilst I've recently shifted my attention almost completely from .NET and the Microsoft environment (where I have spent the last 10 or so years) to the world of Java.  What's interesting is that he so quickly formed the same view of the Microsoft development culture that motivated my move to Java.

Microsoft's greedy behaviour has done so much damage to its reputation and the level of goodwill amongst independent developers. Kirk cites the example of TestDriven.Net, but there are examples of other 'alt.net' type projects (NAnt and NDoc, for example) which Microsoft has effectively (though not directly) either killed or marginalised, not with licensing terms but by introducing proprietary (and arguably weaker) competing technologies.  I'm sure that part of the reason for that, with the N-prefixed projects at least, was that they couldn't bear the prospect of absorbing something with what they would see as alien DNA into their product line.

This piece by Mike Hofer describes the NDoc demise and nicely summarises the twin underlying problems: the nature of the development community surrounding the Microsoft platform, and Microsoft's inability (or unpreparedness) to work with it. Mike's article, and the linked post by Charles Chen containing the email from NDoc's founder, make for quite depressing reading. Perhaps the emergence of a mean-spirited, mean-minded community is to be expected, when the centre of its universe is an avaricious commercial juggernaut?

I feel these things especially keenly, now that I'm looking over the wall from the Java community side.  The contrast really couldn't be more stark, more impressive and more compelling.  Even the large companies operating in this space, notably Sun Microsystems, appear intelligent, enlightened and innovative; there's a very healthy culture here. (A quick and revealing experiment: take Microsoft and Sun - now try to find the corresponding CEO's blog. Top hit in Google for Steve Ballmer when I tried was a send-up site; top hit in Google for Jonathan Schwartz was Jonathan Schwartz's blog. And it's worth reading).

Coming back to the issue of building a healthy community around a technology, Sun's Java is surely the shining example of how to embrace the great work done by independent developers and build on it, rather than trying to crush it. Just look at NetBeans: this is an enterprise-quality IDE, easily the equal of Visual Studio, but not only is it open-source (and free to download) it also employs established open-source tools instead of imposing inferior alternatives: For unit testing, JUnit is completely integrated; NetBeans uses Apache Ant as its underlying build-system; it can work seamlessly with Maven through the excellent plugin. And you choose the version-control system you prefer (e.g. Mercurial, Subversion) and NetBeans will allow you to make full use of it, right inside the IDE.

Lastly, you are free to extend NetBeans by writing whatever plugins (modules) you please, without running the risk of getting into litigious exchanges like the TestDriven.Net debacle described here, which seems to me to plumb the very depths of time-wasting pointlessness.

I never intended to write all that: it was originally just a reaction to a (rather old) blog entry. But as I revisited the world of the Microsoft monoculture through the tale of those N-projects, it just tumbled out.

The Managerese Disease

I recently stumbled upon a wonderful collection of that language we all love to hate: managerese.  Kevin Boone's list is here:

The K-Zone: New developments in managerese

There are other lists like this sprinkled around the net, but this one is the best I've seen so far because it manages to combine a bit of thoughtful deconstruction with some satisfying disdain.

Thursday, 9 October 2008

NetBeans - blank dialog box fix

I've been using LinuxMint 5 for some weeks now, and a very fine distro it is, too. I've noticed that NetBeans 6.1 starts and runs a great deal faster on Mint than on Windows XP - the difference is quite significant. I've been wondering why that is, given that this is Java and therefore presumably only the low-level loaders and file-system interactions differ across platforms. That's just a passing observation and not the point of this post.

I found one annoying thing happening occasionally (but often enough to be irritating): NetBeans dialog boxes would appear completely empty. Take a look at the screenshot:

[Screenshot: a NetBeans dialog box rendered completely blank]

There didn't seem to be any pattern to this behaviour; it wasn't always the same dialog, and if you closed the dialog and opened it again it would appear normally. Not good.

Well, it turns out this is a known problem when using the Compiz window manager on Linux - a quick search revealed a good post on the NetBeans forums about this subject. The answer is to use the latest JRE / JDK build, as it addresses this compatibility issue. I decided to download and install JDK 6u10 RC and give it a try.

To get NetBeans to use the newer JDK, you can do one of two things:
  1. Edit the NetBeans configuration file (in <installationFolder>/etc/netbeans.conf) and change the path assigned to netbeans_jdkhome (see the example line after this list).
  2. Change your system default Java installation.
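For reference, the relevant line in netbeans.conf looks something like this (the path shown is illustrative, for a 6u10 JDK unpacked under /usr/lib/jvm):

netbeans_jdkhome="/usr/lib/jvm/jdk1.6.0_10"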
Not very keen on (1) - feels like I'm hiding a workaround in just one application's config, but the problem exists for any Java application I run on this system.

So I decided to do (2). It worked perfectly, so I thought I'd share the steps I used to do it. First, where is Java actually installed on the filesystem? If you do 'which java' in a console, it will report /usr/bin/java. But if you look closely, this is just the first step in a chain of indirections; the ls tool reveals:

$ ls -l /usr/bin/java
lrwxrwxrwx 1 root root 22 2008-10-01 06:55 /usr/bin/java -> /etc/alternatives/java

And /etc/alternatives/java is another step:

$ ls -l /etc/alternatives/java
lrwxrwxrwx 1 root root 36 2008-10-01 06:55 /etc/alternatives/java -> /usr/lib/jvm/java-6-sun/jre/bin/java

And /usr/lib/jvm/java-6-sun is itself a symbolic link to the real Java installation folder, which lives in the same directory. That's a lot of indirection, and I am quite sure there are good reasons for all of it, but I haven't time to learn them all. There is a very detailed post by Anthony Richardson which patiently explains how to create a proper DEB package from the JDK download; this is almost certainly a good idea.

I chose to exploit the fact that the last step in the indirection chain is that symbolic link file. By unpacking the JDK directory into /usr/lib/jvm and pointing the java-6-sun link at the new JDK folder, I could replace 6u6 with 6u10, system-wide.

However, there is another file in this folder: the .java-6-sun.jinfo file (this is a '.' file - you'll only be able to see it if you use the -a switch with ls (e.g. ls -al) or View / Show Hidden Files in Nautilus). Looking inside it, only the first line appears to contain version-specific stuff - everything below uses paths which go via the symbolic link:

name=java-6-sun-1.6.0.06
alias=java-6-sun
priority=63
section=non-free

jre ControlPanel /usr/lib/jvm/java-6-sun/jre/bin/ControlPanel
jre java /usr/lib/jvm/java-6-sun/jre/bin/java
jre java_vm /usr/lib/jvm/java-6-sun/jre/bin/java_vm
jre javaws /usr/lib/jvm/java-6-sun/jre/bin/javaws
jre jcontrol /usr/lib/jvm/java-6-sun/jre/bin/jcontrol
<snip>

I decided to leave this as-is for now, and edit it later if required. So, in summary here is what I did:
  1. Copied the JDK 6u10 contents into /usr/lib/jvm.
  2. Went to the directory /usr/lib/jvm and opened a gnome-terminal as root (you need to be root or use sudo, to make changes here).
  3. Completely unnecessarily, I backed up the old symbolic link file, just in case. The easiest way is to use mv (by default mv doesn't follow sym-links), but if you'd prefer to make a copy of a symlink (rather than the object to which it points) you need to use the -P switch, e.g.
    cp -P java-6-sun java-6-sun_OLD.
  4. Made a copy of the jinfo file:
    cp .java-6-sun.jinfo .java-6-sun.jinfo_ORIGINAL_1.6.0.06
  5. Made java-6-sun symbolic link point to the new JDK directory:
    ln -s jdk1.6.0_10/ java-6-sun
And it does seem to work. NetBeans reports that it's using 6u10, and I haven't seen any empty dialogs yet! Better yet, NetBeans seems to start and run faster, too. (I haven't timed it - this may be placebo effect...)

Hope this may help out other folk using LinuxMint or Ubuntu, facing the same problem. Of course, remember that you can easily disable Compiz (set Visual Effects to None in the Appearances Preferences), and you can get some simple effects back using Gnome Compositing, available via the Mint Desktop tool in Control Center.

Interlocking Fragility

The October 1st edition of Edge includes the following quote from Nassim Taleb's book The Black Swan.  Relevant, given what's happening right now:

From Edge 259 (about 3/4 of the way down the page):
NOTABLE QUOTE

"Globalization creates interlocking fragility, while reducing volatility and giving the appearance of stability. In other words it creates devastating Black Swans. We have never lived before under the threat of a global collapse. Financial Institutions have been merging into a smaller number of very large banks. Almost all banks are interrelated. So the financial ecology is swelling into gigantic, incestuous, bureaucratic banks – when one fails, they all fall. The increased concentration among banks seems to have the effect of making financial crisis less likely, but when they happen they are more global in scale and hit us very hard. We have moved from a diversified ecology of small banks, with varied lending policies, to a more homogeneous framework of firms that all resemble one another. True, we now have fewer failures, but when they occur ….I shiver at the thought."

— Nassim Taleb, The Black Swan (2006)

Disclaimer: I haven't read this book yet. Some comments I've heard or read suggest that it's superficial, rambling pseudo-science, more a collection of entertaining but relatively facile observations (perhaps including the above) than an exposition of something profound. 

Still, this one seems to have come true.

Tuesday, 23 September 2008

Neil Bartlett's OSGi book

Just found Neil Bartlett's OSGi book project:

"OSGi in Practice"

Looking good so far, but I've only skimmed the first couple of chapters.  I plan to read and contribute to the comments and bug-tracker. 

I don't think there is another good OSGi book out there yet, so this is potentially important. Great that Neil has decided to license the book under Creative Commons, too.  Like most folk, if the book is good then I'll buy a paper copy.




Friday, 19 September 2008

Ubiquity, and Contrasting Cultures

This is interesting: Mozilla Labs » Blog Archive » Introducing Ubiquity

Watch the video demonstration.  What a great idea, and I'm impressed by how well this appears to work even at the prototype stage. I'm going to install and try it.

But even if it turns out not to be so great an experience for me (demos always work perfectly, don't they?), that's not the point: what matters is that some bright people are doing interesting and worthwhile things and freely sharing the outcome with us, while Microsoft apparently spends its time (and some of its money mountain) on stuff like this.

Links to these two articles were close to each other (can't recall where) and I was so struck by the contrast and what it reveals of the cultural differences that I felt compelled to write this.

Monday, 15 September 2008

Rick Wright

BBC NEWS | Entertainment | Floyd founder Wright dies at 65

Such a shock to see this BBC news headline.  Like many of my generation (40-something), I regard The Dark Side of the Moon as the key soundtrack of my early life.  As a teenager I knew every word and every note of every song (probably still do) from an album I listened to endlessly back then, and have listened to on-and-off ever since. 

The first copy I had was a copy of a school-mate's album, on a C90 cassette.  I borrowed a typewriter to write out the lyrics.  Somewhere, in the bottom of a box, I probably still have those sheets of paper.  I wore out that tape.  Eventually I'd saved enough for my own copy, but (would you believe it) I lent the album to someone and never got it back.  But I'm sure I've got the posters, probably in the bottom of that same box, wherever it is.  It's just impossible to explain how important that music was.  It still affects me, even now.  It will always be my favourite album.  Bar none. Nothing can displace it, ever.

"I am not frightened of dying.
Anytime will do, I don't mind.
Why should I be frightened of dying?
There's no reason for it,
You've got to go sometime..."
(from The Great Gig in the Sky.  Rick Wright / Clare Torry)

Google Protocol Buffers

The Protocol Buffers (PB) idea is obviously good from a performance point of view, but with so much messaging already defined in XSD, I want a tool which will generate the .proto file for a given XSD.  Does such a thing exist yet?  If not, it would make an interesting project. The obvious twin would be a library (linked to the generated .proto code) which accepts a DOM or a SAX stream and generates PB data - a kind of PB-adapter.

The value of protocol buffers is mostly in creating more compact representations on the wire, and speed in parsing/unparsing. But there's a lot of code out there already using XSD/XML internally which nobody wants to rewrite just to use PBs.  Would the cost of conversion to/from an in-memory DOM cancel out the gains?
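To make the idea concrete, here's the kind of mapping such a generator would need to produce. This is purely illustrative and hand-written - real schemas bring namespaces, attributes and deep nesting that make the general problem much harder. An XSD fragment like:

<xs:element name="Patient">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="id" type="xs:string"/>
      <xs:element name="name" type="xs:string" minOccurs="0"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>

might become this .proto definition:

message Patient {
  required string id = 1;
  optional string name = 2;
}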

Friday, 12 September 2008

Generating PDF from XML with XSL-FO

One of the healthcare solutions I'm working on has to generate diagnostic report documents in a form we can distribute directly (e.g. secure email) to clinical staff. The diagnostic report text, plus the usual patient demographics and such, arrives in a custom XML message from the integration engine. The solution uses HL7 v2 elsewhere, but the PAS messaging is exclusively via custom XML (not HL7 v3 XML).

The legacy document-generation subsystem used Word to create the document: a template was created to get the layout and formatting right, and simple placeholders used to mark the locations of the data we would substitute, taken from the XML data. This approach actually loads Word into memory (on the server!) to do the work - it's slow, memory-hungry and just generally clumsy and ugly. Plus you end up with a Word document, so the clinician needs Word (or the reader) to view it.

PDF is a more acceptable format, in my view. We can secure and digitally sign the document when we generate it to prevent subsequent changes. Recipients can view PDF on any platform with a free viewer. The problem for me was how to generate the PDF programmatically, from the XML data. There are probably several ways to do this, but I chose XSL-FO and the Apache FOP project mainly because I wanted to avoid using a proprietary PDF generator product (there are lots out there), but also since XSL-FO can do more than just generate PDF.
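For anyone curious what driving FOP from Java actually involves, the skeleton below follows the standard JAXP-based pipeline from the FOP embedding examples. Treat it as a sketch: the file names are invented, and you should check the API against the FOP version you download.

import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;

import javax.xml.transform.Result;
import javax.xml.transform.Source;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.sax.SAXResult;
import javax.xml.transform.stream.StreamSource;

import org.apache.fop.apps.Fop;
import org.apache.fop.apps.FopFactory;
import org.apache.fop.apps.MimeConstants;

public class FoToPdf
{
    public static void main(String[] args) throws Exception
    {
        FopFactory fopFactory = FopFactory.newInstance();
        OutputStream out = new BufferedOutputStream(new FileOutputStream(new File("report.pdf")));
        try
        {
            // a Fop instance bound to the PDF renderer, writing to our stream
            Fop fop = fopFactory.newFop(MimeConstants.MIME_PDF, out);

            // an identity transform, because the input is already XSL-FO
            Transformer transformer = TransformerFactory.newInstance().newTransformer();
            Source src = new StreamSource(new File("report.fo"));
            Result res = new SAXResult(fop.getDefaultHandler());
            transformer.transform(src, res);
        }
        finally
        {
            out.close();
        }
    }
}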

First problem: how do you create the FO which is the 'shape' of the generated document? Of course, you could simply read through all the manuals and write one from scratch. Well, I'm just plain lazy, you see, and I don't want to do all that. I want to take my nice OpenOffice document, or even Word document, and have a tool create an XSL-FO for that document. And I'd like the tool to be free (there are commercial tools of course, but I'm cheap). Does such a thing exist?

It does! Amazingly, precisely this facility exists in Abiword: not my favourite word-processor by a long, long way, but a good solution for this particular problem. OpenOffice should be really good at doing this, as it stores documents natively as XML and already uses FO internally for some style information. But, despite some promising hints, there is no mature support for this. This is a real shame: this is just the sort of thing OOo should be capable of, especially as it's apparently half way there already.

Here's what I managed to find on the OOo site:
It's also important to mention that Microsoft has an XSLT which you can apply to Word documents to generate XSL-FO. It's freely downloadable from this download page. I tried this, and it works, but the resulting FO is much messier than Abiword's.

Once you have the FO, the obvious step is to embed it in an XSL, add xsl:value-of elements in the appropriate places and use a transform to populate the template. This is the approach I took for the proof-of-concept and it worked well. The resulting PDF looks almost right - with a small amount of FO-tweaking, we should have something very usable.

But using XSL means loading up and running the (trivial) transform which I think may be very inefficient for such a simple case, plus it requires the FO to be edited. I've decided to use a simpler approach (using StringTemplate) which I hope will be more efficient, and requires less FO editing (just the addition of $fieldName$ placeholders). All we need is a list of (fieldname, XPath) pairs for our XML message, in order to drive this template.  Of course, most other applications will need the power of XSL (e.g. to deal with tables of entries): I'm only avoiding it here because the data is so simple.
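Here's a sketch of the StringTemplate version. The class and field names are invented for illustration; the API shown is StringTemplate 3.x, where the default placeholder delimiters are $...$:

import org.antlr.stringtemplate.StringTemplate;

public class ReportRenderer
{
    // foTemplate is the XSL-FO document text, with $fieldName$ placeholders in it
    public String render(String foTemplate, String patientName, String reportText)
    {
        StringTemplate st = new StringTemplate(foTemplate);
        st.setAttribute("patientName", patientName);   // fills $patientName$
        st.setAttribute("reportText", reportText);     // fills $reportText$
        return st.toString();                          // the populated FO, ready for FOP
    }
}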

This is something we're bound to want to do again, in different contexts, so I'm using this project and the prototype to build a tool-chain and utilities for this, so we can use the same approach more easily next time.

Sunday, 7 September 2008

Google Chrome, Part 2

Mmmm. It all looked so good in the cartoon. The reality isn't quite there yet, but you can see where this is heading, and overall I'm optimistic. Here's a summary:
  • Windows only at the moment. Understandable, given the market share stats, but what a pity I can't run it on this LinuxMint laptop. Wondering what underlying runtime they're using: surely one of the key features of Chrome is that it will become a client-side platform. They need to be able to run multiple processes (presumably native processes) to get one process per tab.
  • On Windows XP, I found Chrome simply ate CPU cycles. The memory (working set) story wasn't as bad as I had expected (no worse than Firefox, anyway) but the CPU cost meant I was experiencing significant interruptions in other applications. I did experiment a bit, but so far I haven't been able to characterise the circumstances under which I see this.
  • Loved the ability to drag a tab out of the frame to create a standalone 'application', e.g. Google Documents or GMail. That works very well.
  • Rendering speed seemed higher than Firefox.
  • Plugins for sound and video worked for me (e.g. BBC news) but I really didn't set out to test this with different formats.
  • The lack of my Firefox add-ins took a little adjusting to! I'm sure this will eventually come.
  • The UI is quite bare, but you get used to it. The downloads bar (at the bottom of the screen) is actually rather good - better than FF.
  • The search/address bar (can't recall what they call it) is excellent - I found it worked very well for me.
  • The fact that Chrome imports history etc. from FF if you want it to (I did) means it does very nearly allow you to pick up from where you left off in FF.
On the whole, very impressive. But I'm not ready to replace FF just yet.

Tuesday, 2 September 2008

Google Chrome

I must have missed the Chrome story, as I've only just seen a pointer to it from Darren Waters' BBC blog piece. Having just read quickly through the Google Chrome storybook, this looks like a potentially important and exciting development. The storybook format is really excellent, too: what a contrast with the way other software companies (e.g. Microsoft) introduce a product!

Technically, this looks like a winner to me. Using a process instead of a thread per tab is a sound idea in principle, with lots of benefits (explained in the cartoon) which you pay for with a slightly higher initial resource footprint. I've little doubt that this will be a worthwhile price to pay, though: almost all of us have machines with enough CPU and memory to accept this.

It seems to me that the most important thing Google has done is to recognize that the nature of the browser has changed utterly, from what was simply a way to view HTML through to something which is trying to be a complete application platform. With Google Mail, Google Apps and Gears, plus a handful of add-ins, Firefox is currently central to the way I work, but because the underlying architecture is still anchored in the past, Firefox isn't going to be stable or performant (or secure) enough to cut it for much longer.

This seems to be Chrome's architectural starting point, and the team have sensibly decided to start with a clean sheet of paper, rather than simply build just another branded browser on top of old-world technology. In some ways, I wish the Mozilla engineers had gone down this route. Firefox has been a great success, but if Chrome is as good as the comic-strip suggests, I think Mozilla will need to raise their game significantly.

Monday, 1 September 2008

Climbing in Great Langdale

On Friday, Becca and I had a wonderful day climbing and abseiling, in Langdale. We're both beginners with some experience of climbing walls but none so far on real rock. I managed to get a day's climbing instruction with Adam Marcinowicz of Adventure Peaks, via the Destination Cumbria agency. I can recommend both Destination Cumbria and Adventure Peaks: within hours of my initial enquiry, Lindsay Gibson had located three good options for us, and once we'd chosen AP it was all arranged quickly and efficiently. Great service.
[Photo: me on Lower Scout Crag]
We met Adam and colleagues at the Adventure Peaks store in Ambleside. AP is in the business of organising and running real expeditions to challenging places - the shop is full of serious (and expensive) equipment - so our little day out is really small beer.

We spent the morning climbing on Lower Scout Crag, in Great Langdale. This is a beautiful place, but somewhat busy. It wasn't long after we got started that two other groups arrived and began setting up. We tackled two routes, Cub's Wall and Cub's Crack.

The first is relatively easy, but the crack is far harder: the initial moves required to reach good handholds completely defeated me! This was due to a combination of lack of experience / poor technique, and my hands and forearms being weak and out of condition. Becca finally made it up this route, after hanging on the rope and persevering.

We had a brief lunch stop on the pass between Great and Little Langdale, looking out over the valley to Gimmer Crag - beautiful scenery, but we were plagued by flying ants!

Then Adam took us to Cathedral Quarry, to do an abseil down the 30m quarry face. The walk from the car takes you through attractive country, and eventually into the quarry tunnel which leads into the cathedral cavern.

[Photo: Becca and me in Cathedral Cavern]

There are other tunnels leading back through the rock to the approach path - we took a wander through these, too. They are at a constant (low) temperature and pitch dark: climbing helmet or torch (or both) recommended - I was glad of the helmet on one or two occasions!

The climb up to the abseil point is a scramble up some steep and wet paths - you need to take care here as you're more likely to get hurt on the way up here than on the way down the rope.

The abseil is around 30m, and sheer. About two thirds of the way down, the rock face ends and you lower yourself down past the cavern mouth. At the top (quite unprotected and with no warning signs!) I was initially somewhat nervous. Like most folk, I don't like edges at height, so I prepared myself mentally for something I knew I was going to find challenging. I concentrated on two things: (1) treating it as exactly the same as coming back down from a climb up, and (2) trusting the equipment. It worked! By the time I was attaching myself to the rope, I was ready.

As I leaned back, put load onto the rope and stepped onto the face, I felt OK, though my heart-rate was a good bit above resting! It was really only that initial step over the edge which was hard - once I was comfortably into the abseil, I didn't really think about the height.

I ended up doing the descent twice, the second time with a different (and better) belay device which made it more comfortable. Becca managed three descents!

Someone has posted a YouTube video of their descent, which (if it's still there) gives you some idea of the location. I think their estimate of the height (140 feet) is a little on the high side, though: Adam thought 30 metres or so, which is nearer 100 feet.

We had a fantastic day, achieving more than I had expected, all thanks to Adam who was patient with us, and worked hard to give us the best possible experience.

Tuesday, 26 August 2008

Loweswater Lake

It rained almost all of yesterday, but we still managed to get out for a decent walk, this time alongside Loweswater Lake which is one of the smaller lakes up here, but in a pretty setting, almost joined to the next lake, Crummock Water.

The walk we took takes you along a good path which lies about half way up the ridge overlooking Loweswater, eventually descending to a farm at the end of the lake, then back along the lakeside, past the National Trust bothy and the rowing boats for hire, which looked rather forlorn in this weather. We saw very few other people walking today, not surprisingly. The tops of the hills were hidden in cloud a lot of the time, but in true Lake District fashion they would suddenly clear, yielding glorious views.

Slightly annoyed to discover that my supposedly waterproof Berghaus jacket, isn't. You discover the meaning of 'waterproof' in this rain!

Sunday, 24 August 2008

Lake District Break

We're enjoying a very welcome week off, away from it all in the Lake District. We are staying in a really special place, Winder Hall in Low Lorton. Originally a Jacobean manor house, it looks like the Victorians had a hand in updating the interior, and much more recently the owners have created a magnificent place to relax. In the grounds there is even a hot-tub, and a summer-house containing a sauna! We've taken advantage of both - fantastic. The food is good, too - if you ever stay here, then be sure to have some of Clodagh's home-made seville-orange and lemon marmalade, it's epic!

There are some great pictures of Lorton Hall (before it became the Winder Hall hotel) on the Visit Cumbria website. Scroll down about half way to see the house and grounds, and the Pele Tower which forms part of the hall.

The surrounding area is simply outstanding. Today we walked up to the top of Barrow, close to Braithwaite, which is a short drive from Low Lorton. Barrow is not that high (around 1500 feet) but you get one of the best views in the Lake District from the top. The weather has been kind, so it was possible to see all the surrounding fells and a long way further out. This was a warmup for longer, higher walks later in the week - I'd like to get to the top of Great Gable, about twice the height of Barrow.

Thursday, 21 August 2008

NetBeans 6.5 Unusable?

I recently installed NetBeans 6.5 Beta on both OSs on my laptop (Acer 2GHz dual-core, 2GB, dual-booting Win XP and LinuxMint) and I almost immediately uninstalled it. What is going on? Some of the UI appears to be broken: controls not lining up correctly, some dialogs displaying completely blank contents, some the wrong size (e.g. far too big). Sometimes the last two could be fixed by closing/re-opening, but not every time. It also felt a lot slower than 6.1.

Seems other people have been having the same experience. A lot of what Casper wrote in his piece resonated with me, as I also come from the Visual Studio / C# background. I do hope the NetBeans team pays attention because this is an important product which is already good, and deserves to be better. I really don't want to use Eclipse, and I'd prefer not to pay for IntelliJ. Come on Sun!

97 Things Every Software Architect Should Know

Ran across another one of those 'things every architect should know' sites, via Michael Nygard's blog. I haven't read every single one yet, but my unscientific selection yielded a few really good ones, and one or two I felt were less worthy (and not really architecture-specific anyway). I found myself using the titles to select which ones to read in detail, and this proved a pretty reliable guide to the value of the idea and the quality of the writing.

A side-effect of looking at this list was being asked to sign up to Near-Time. I haven't yet figured out exactly what this is, but it appears to be a public, Web-2.0 collaborative workspace thingy. As usual, I'm probably way behind everyone else on this.

Tuesday, 12 August 2008

When Google Owns You

I must admit I do sometimes wonder how I would cope if Google decided to pull the plug on my account. For me, I would find life without Google (especially GMail, Google Docs, Google Reader and Google Maps) very hard indeed. It's happened to Chris Brogan. Read his post When Google Owns You | chrisbrogan.com.

Finding a replacement which doesn't suck would be hard right now. I just don't really like Yahoo or Microsoft Live much (though I have accounts on both), and even with a replacement service I'd still not have access to the giant heap of old email and documents which Google does such a great job of storing and searching for me. And, despite other peoples' suspicions that Google no longer lives according to its founding principle ("Don't be evil"), I instinctively trust Google much, much more than Microsoft or Yahoo.

All this great Google stuff is completely free, so it's very difficult for any of us to complain if it does get withdrawn. I don't for one moment think that Google is likely to do that without good reason, but it puts all of us in a position of having little or no leverage. Personally, I'd be more than happy to pay a nominal amount for my Google services (by which I mean almost everything except search), to ensure continuity of access, security of my data and a contract which means I'm owed the services. How much? I don't know - what about 30 USD a year? That's unlikely to upset anyone who really wants or needs the service, but if enough people paid it could provide Google with a ton of additional money they could use to invest in making those services even better.

Will Google ever transition to a paid model? The Picasa photo service has a free and paid service already, probably because Flickr and others do this as well as to cover storage costs. But for the rest of Google, I'm not sure. You can imagine how administrative overhead might soar with paid accounts: not only would they be obliged to keep track of everyone's identity, account and payments, but armed with a contract, the kind of people who are intolerant of even occasional outages would feel entitled to complain loudly. I wonder if the folks at Google have 'done the math' and decided that any additional revenue would be eaten up by this sort of thing?

Friday, 8 August 2008

Java and Ruby, C++ and VB

Were you doing Windows development, about 10 years or so ago? If so, then I'm pretty sure you'll recognize the following.

The hard stuff - or, more accurately, all the performance-critical stuff and the core components of your product - would be written in C++. The functionality in the libraries you and your team created was then composed into COM components, and you designed interfaces, wrote IDL and created type libraries which exposed the component functionality to the outside world.

Lesser mortals could then use COM-aware languages and tools, like Visual Basic (VB), to create sophisticated applications to exploit the power of those components. Component-oriented software development was a reality. UI designers, the subject-matter experts and the VB programmers could get together and build something using your C++ classes, all without having to know about vtables, pointer arithmetic, or what a pure virtual function is.

This was COM (and OLE2) and it was great technology, because it allowed us to do mixed-language development, to compose systems from components without necessarily knowing how they were implemented internally. We could use the right language for the task in hand, which might be C++, Delphi, VB or even JavaScript, knowing the resulting component could be used by any other COM-aware environment.

It wasn't perfect. The COM abstraction was somewhat leaky because even when using 'scripting languages' like VB to combine COM components, you still needed to understand COM's reference counting semantics, and marshaling of data across the COM boundary wasn't always straightforward. But the benefits manifestly outweighed any of the technical difficulties.

With Java and especially .NET we moved away from the idea that we'd build lower-level functionality in one language and use a less formal or type-safe language to 'script' applications on top of it. Microsoft have always claimed that mixed-language development is a cornerstone of .NET and the CLR. It's certainly true that the CLI does explicitly set out to enable multiple languages to target the infrastructure, but this misses the real point, which is that relatively few projects appear to have exploited this. Most .NET solutions are developed in just one language, either C# or VB.Net, the choice usually made early, based more on team culture than on technical grounds.

I haven't heard anyone saying "Well, we used C# for the hairy low-level components, but we built the application itself in VB.Net because it's so much quicker to prototype". Does this happen now? I don't think so, yet back in the C++ / VB world, it really did.

I believe we are in a roughly similar position now, with respect to building complete solutions, as we were when VB and C++ were joined at the hip via COM. We want the big languages and frameworks for building powerful libraries, for high performance and for when we need (or get real benefit from) strong, static typing. But we all want to use these lovely new dynamically-typed languages like Ruby, Python or Groovy because they're concise, powerful and fun. Look what I can do in 5 lines of Ruby! It would take 50 lines of Java to do that! And just look at the clever metaprogramming stuff! Wait! These languages are great! Why don't we write everything this way!

Now think back to VB ... who remembers Hardcore Visual Basic? This was one of those landmark books you recall long past the time you stop consulting it. Part of the charm of this book was the writing of Bruce McKinney, who managed to combine a determination to wring the very most out of VB with an iconoclastic view of Microsoft, the very people who both developed VB and published his book! I liked the book, and I loved the writing. Plus, it really was hardcore: if you paid attention to the book and practised what it preached, you became a better (or at least, more capable) VB developer.

Now, I'm not trying to suggest that Ruby (or Python, or ... etc.) is/are no better than VB6, because that's clearly nonsense. But I am suggesting there are strong parallels between the two situations. With the dynamic, interpreted languages like Ruby and Python we have flexibility, speed (of development) and agility, and we can get something running without all the troublesome attention to detail required by C# or Java. Just as we liked the VARIANT type, we like dynamic (and 'duck') typing, and we like the syntactic economy of not having to use type names everywhere.

So we have a similar situation now to that which existed when VB was at its hardcore height: the evangelists and enthusiasts want to build absolutely everything in Ruby (or Groovy, or Python, or etc...), whether it makes sense to or not. The problem is the 'evangelists'. Or fundamentalists - you decide which term you prefer. Their enthusiasm for their favourite language unfortunately causes them to stray into deeper academic debates concerning type systems. They are so anxious to promote their chosen path, they want to convince the rest of the world that you really don't need static typing, even that static typing cannot save you from anything, so why bother with it? Do everything in <someDynamicLanguage>. Hardcore! I'm not going to wade in to the silly static typing versus unit testing squabble. The essential point is that serious issues are often hijacked, trivialised and misrepresented to serve as ammunition in some language or platform flame-war. Here is an excellent, sane post, as an antidote, discovered as a link in this piece by the equally excellent Tim Bray.

The analogy I'm trying to use is a little strained, I will admit: VB was a fundamentally weak language (see Bruce McKinney's departing missive), whereas Ruby is not. The difference between then and now is that there is no need for a 'Hardcore Ruby' book because you don't need to strain at the boundaries of the language to do powerful things. The similarity rests on the fact that with technologies like JRuby (and IronPython) we have the ability to use a variety of languages and approaches to attack the different layers in our solutions.
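
Incidentally, that multi-language layering is easy to demonstrate on the JVM today. Here's a minimal sketch, assuming JRuby's JSR-223 script engine jar is on the classpath; the class name and the Ruby one-liner are just illustrative:

    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;

    public class RubyFromJava {
        public static void main(String[] args) throws Exception {
            // Assumes JRuby's JSR-223 bindings are on the classpath;
            // getEngineByName returns null if no such engine is registered.
            ScriptEngine ruby = new ScriptEngineManager().getEngineByName("jruby");

            // The 'scripting layer' computes a value in one line of Ruby;
            // the 'library layer' (Java) consumes the result.
            Object sum = ruby.eval("(1..5).map { |n| n * n }.inject { |a, b| a + b }");
            System.out.println("Sum of squares, computed in Ruby: " + sum);
        }
    }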

Sunday, 20 July 2008

LinuxMint 5 (Elyssa)

I've lost count of the number of Linux distros I've tried and rejected because of:
1) Ugliness (especially screen font-rendering)
2) Device problems, mainly wireless networking
3) Video driver problems, e.g. machine locks up when Compiz or similar is enabled.
4) No support (or broken support) for power management, hibernation etc.
5) All of the above.

Even recent 'big-name' distros like OpenSuse and Fedora have had one or more of these problems, at least for me, on my Acer laptop.

Finally, I believe I have found a Linux distro that I really, honestly may be able to use as a Windows replacement. It's LinuxMint 5.

I am very sensitive to (or intolerant of) poor font rendering on laptop displays. This has been the stumbling block for all previous distros, even those with support for anti-aliasing and font-hinting. Finally, the LinuxMint distro team appears to have sorted this out: the fonts render beautifully.

All the other issues in my list are addressed: the system picked up my wireless network immediately and connected as soon as I entered the pass-phrase. As far as I can tell, all devices appear to work, and the power-management stuff works.

Finally, for me the aesthetics are spot-on. The default theme and the choice of colours really work for me. This matters when you're spending significant time in front of the system.

This is a truly excellent distro - far and away the best I have ever used. This one will stay on my laptop and I plan to make the effort to use it instead of Windows.

Thursday, 17 July 2008

NetBeans 6.5 M1

I've just installed M1 and I'm disappointed to note that it appears not to do text anti-aliasing in the editor. Not only that, but the advanced options panel just doesn't have any editor settings in it at all. Strange - it might be something to do with migrating my old settings (from 6.1) which the installer does automatically.

I do hope this is just a temporary glitch - without the anti-aliasing, it's not a pretty sight...

Turns out other folk have spotted this: I Googled for posts on the subject and found a commenter on Tor Norbye's blog mentioning the same problem. Tor's reply to that comment suggests it's just a Vista issue, but I'm seeing it on XP Professional too.
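
In the meantime, the old workaround may help anyone else bitten by this: editor anti-aliasing can be forced on via a JVM system property in the NetBeans config file. I'm quoting from memory, so treat the exact file path and option as assumptions to verify against your install:

    # in <netbeans-install>/etc/netbeans.conf (path may vary by version):
    # append to the existing netbeans_default_options value; the -J prefix
    # passes the system property through to the IDE's JVM.
    netbeans_default_options="... -J-Dswing.aatext=true"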

Wednesday, 9 July 2008

Moving to Blogger

I'm not sure it's worth running a blog engine myself, any more.  With something like Blogger, free of charge and hosted by Google, it's hard to see why I should keep up with WordPress maintenance and have to think about upgrades, backups, MySQL and so on.

I've had my own Blogger account for a while, but done relatively little with it until this week.  I haven't ever been what you'd call a prolific blogger, but if I'm going to increase the amount and frequency of my writing, Blogger is almost certainly more convenient.

So, please see my Blogger page from now on.  This blog/feed will almost certainly disappear in the coming months.

Bletchley Park - we must save it.

Unbelievable though it may be, Bletchley Park is under threat, simply because it doesn't have a reliable source of funding. Apparently, it has been deemed ineligible for funding by the National Lottery, and turned down by the Bill & Melinda Gates Foundation.

This is an outrageous, shocking situation. Can we really be guilty of placing such a low value on maintaining this important site? If you know the history behind Bletchley Park and the Enigma code, then you will understand why I feel so strongly about this. If you are not familiar with this amazing story, read around the subject a little and you will soon appreciate the magnitude of the achievement at Bletchley and its importance to this country.

Please do sign the e-petition. Go to this page and follow the instructions.

Tuesday, 8 July 2008

Schema Tools

There are already lots of commercial tools out there for editing and managing XSDs but I wanted to create my own. I don't want to pay for a commercial tool, and I'll learn a lot in the process. The goals for the toolset are quite modest:

1) Given a schema and a root element, generate the element/attribute tree for that infoset.

When you are handed a new schema, it is usually helpful to be able to see what an instance document should look like. Of course, a single schema can potentially validate many different infosets, so the tool must be capable of generating the element/attribute tree for any given root element.
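
To give a flavour of what (1) involves, here is a deliberately naive sketch (the class name is invented, and this is not my real implementation) that treats the XSD as a plain XML document and prints nested xs:element declarations. A real version has to cope with imports, named type references, element refs, groups and so on:

    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.Node;

    public class SchemaTreeSketch {
        private static final String XSD = "http://www.w3.org/2001/XMLSchema";

        public static void main(String[] args) throws Exception {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setNamespaceAware(true);
            // args[0] is the path to the .xsd file
            Document doc = dbf.newDocumentBuilder().parse(args[0]);
            printElements(doc.getDocumentElement(), 0);
        }

        // Walk the raw schema document; indent one level for each
        // xs:element we pass through on the way down.
        private static void printElements(Element parent, int depth) {
            for (Node n = parent.getFirstChild(); n != null; n = n.getNextSibling()) {
                if (n instanceof Element) {
                    Element e = (Element) n;
                    boolean isDecl = XSD.equals(e.getNamespaceURI())
                                  && "element".equals(e.getLocalName());
                    if (isDecl) {
                        for (int i = 0; i < depth; i++) System.out.print("  ");
                        System.out.println(e.getAttribute("name"));
                    }
                    // Recurse through complexType/sequence/etc. to reach
                    // any nested element declarations.
                    printElements(e, isDecl ? depth + 1 : depth);
                }
            }
        }
    }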


2) Automatically generate test instance documents.

This is a natural extension of (1). In any application involving the processing of schema-constrained XML, you need to be able to test against valid and invalid instance documents, and to achieve full test coverage for any non-trivial schema you will need a lot of instance documents. This is possibly the most useful application for this project.

Schema types mostly correspond with domain types (i.e. ignoring metadata) and you ideally want to be able to use representative test-data. For instance, if the domain includes people's names, you would prefer to use a pool of semi-realistic family-name/given-name lists, instead of random sequences of characters. So the tool will also allow you to create lists of values and associate them with schema components, using data from the appropriate list when generating test instances.
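
The value-pool idea might be sketched like this; everything here (class and method names included) is hypothetical, purely to illustrate associating a schema component with realistic sample data:

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Random;

    public class ValuePools {
        // Keyed by element name here, for simplicity; the real tool would
        // key on the schema component itself.
        private final Map<String, List<String>> pools = new HashMap<String, List<String>>();
        private final Random random = new Random();

        public void register(String elementName, List<String> values) {
            pools.put(elementName, values);
        }

        // Called while generating a test instance: realistic data if we
        // have a pool for this element, otherwise a visible placeholder.
        public String sampleFor(String elementName) {
            List<String> pool = pools.get(elementName);
            if (pool == null || pool.isEmpty()) return "TODO-" + elementName;
            return pool.get(random.nextInt(pool.size()));
        }

        public static void main(String[] args) {
            ValuePools pools = new ValuePools();
            pools.register("familyName", Arrays.asList("Smith", "Jones", "Patel"));
            System.out.println(pools.sampleFor("familyName")); // e.g. "Jones"
            System.out.println(pools.sampleFor("postcode"));   // "TODO-postcode"
        }
    }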


3) Schema-driven UI generation and XForms

It follows from the previous two items that you will want to look at the XML instance data, not just when testing but also in the application. It's possible to look at the XML text, but it's much nicer to be able to create some sort of form or dialog which presents the data in a more natural way, e.g. as a set of label/text pairs. This part of the tool is intended to help with that; guided by the XSD and some input from the user, the tool will generate a UI for the instance document. I haven't decided which UI to target yet, but it's likely to be Swing/rich-client to start with. Eventually, I will want to generate web forms and XForms.
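
As a rough illustration of the label/text idea, the following hypothetical Swing fragment lays out one label/field pair per leaf element; in reality the XSD's types, facets and cardinalities would drive the layout, not a hard-coded map:

    import java.awt.GridLayout;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import javax.swing.JFrame;
    import javax.swing.JLabel;
    import javax.swing.JPanel;
    import javax.swing.JTextField;

    public class FormSketch {
        public static void main(String[] args) {
            // Stand-in for leaf element names/values pulled from an instance.
            Map<String, String> leaves = new LinkedHashMap<String, String>();
            leaves.put("givenName", "Ada");
            leaves.put("familyName", "Lovelace");

            // One row per leaf: a label on the left, a text field on the right.
            JPanel form = new JPanel(new GridLayout(leaves.size(), 2, 4, 4));
            for (Map.Entry<String, String> leaf : leaves.entrySet()) {
                form.add(new JLabel(leaf.getKey() + ":"));
                form.add(new JTextField(leaf.getValue(), 20));
            }

            JFrame frame = new JFrame("Generated form (sketch)");
            frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
            frame.getContentPane().add(form);
            frame.pack();
            frame.setVisible(true);
        }
    }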


I've managed to implement (1) already. I'm not sure it covers every case yet, but it does work. The code is messy and the presentation is currently a tiny Swing application. My goal for the UI is to create a NetBeans module and integrate it into the NetBeans IDE alongside the existing (excellent) schema support. Ideally, I'll hook my commands into the NB context menu, so that my functionality can be triggered from inside the standard NB schema views.

I have a few more ideas for exploiting this code, but those listed above are a good start I think. Once I have working code I'm prepared to release, I will post something online for people to play with (and/or tear apart).

Monday, 7 July 2008

Netbeans 6 - it's getting better all the time

It's great to see the Netbeans QA process working so well. I reported two issues recently, and both have been fixed. One was a Subversion related issue; within 24 hours I was contacted by the developer assigned to the issue, and offered a patched jar to try! It fixed the problem, and the patch will be rolled-up in v6.5. Impressive.

Netbeans is getting better, faster, than anything else out there, as far as I can see. There are some things I still don't think are good enough yet (such as the UML support) and I wish the whole thing would start up much more quickly, but it's important to recognize just how good this tool already is. And it's a free, small download. The comparison with Visual Studio is almost irresistible; VS is a DVD's worth of code, costs a fortune and offers a much less capable code editor, less refactoring support, and doesn't really support rich-client development to the extent NB6 does.

Look at the Ruby support in NB6, too: although there is a lot of interesting work going on with dynamic languages at Microsoft, the impression you get is that these are somewhat 'second-class' projects with no real presence in the main-line Visual Studio product plans. JRuby, on the other hand, is almost front-and-centre in the Netbeans world. The integration of languages via the JVM, and the development of integrated tooling in Netbeans, makes it possible to do serious work with Java and Ruby, right now.

Another thing: Sun isn't trying to shut down or marginalise any of the community, open-source projects which populate the Java tools landscape. Instead, they've recognized and accepted the strongest members of the community and built tool support for them in Netbeans. Just take the most obvious examples: Ant, JUnit and Maven. Compare that with what Microsoft has done: ignored NDoc and produced Sandcastle, created MSBuild to replace NAnt, and they want you to use (therefore buy) their own version-control system rather than embrace Subversion or Mercurial. Team System is probably fine, as long as you have deep pockets and you are prepared to submit totally to Microsoft's prescription for your development team processes.

Tuesday, 22 April 2008

PSL not DSL

I've long felt that the DSL moniker is a little inappropriate for some (perhaps most) applications of the ideas behind it. The majority of 'DSLs' are really just little languages which help solve specific problems in the software development space, e.g. Rake, which provides a very nice language for expressing graphs of dependent tasks.  Martin Fowler has written quite a lot on the subject of DSLs - this paper is a good example of a little language being described as a DSL.
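
Rake expresses those graphs in Ruby, but the core idea is language-neutral. As a purely hypothetical illustration (all names invented), here is the essence of a task-dependency 'little language' sketched in Java:

    import java.util.*;

    public class TaskGraph {
        private final Map<String, List<String>> deps = new HashMap<String, List<String>>();
        private final Map<String, Runnable> actions = new HashMap<String, Runnable>();
        private final Set<String> done = new HashSet<String>();

        // Declare a task with its prerequisites, Rake-style.
        public void task(String name, List<String> prereqs, Runnable action) {
            deps.put(name, prereqs);
            actions.put(name, action);
        }

        // Run a task: prerequisites first, each task at most once.
        public void invoke(String name) {
            if (!done.add(name)) return;
            for (String d : deps.get(name)) invoke(d);
            actions.get(name).run();
        }

        public static void main(String[] args) {
            TaskGraph g = new TaskGraph();
            g.task("compile", Collections.<String>emptyList(),
                   new Runnable() { public void run() { System.out.println("compiling"); } });
            g.task("test", Arrays.asList("compile"),
                   new Runnable() { public void run() { System.out.println("testing"); } });
            g.invoke("test"); // prints: compiling, then testing
        }
    }

The Java version works, but compare its ceremony with the two-line Rake equivalent and you can see why these little languages flourish in Ruby.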

So it was good to hear Jim Weirich (the creator of Rake) make this very point while talking about the DSL hype in this InfoQ interview. The relevant bit is something like 14 minutes in. I like the term Problem Specific Language (PSL), which Jim invents here.

I'm working on applying Ruby to modelling (and reasoning about) a real domain, and driving a tool-chain for forward-engineering artifacts from the Ruby description. Apart from Rake, I'm also looking at RSpec, which uses Ruby syntax to capture descriptions of system behaviour. I'm very keen to work on the 'real DSL' problem, looking for the sweet-spot where domain-driven design and programming overlap. This whole area seems very fruitful for Ruby, because the language allows us to create very rich and expressive scripts which not only read quite naturally, but can also control complex processes.

Thursday, 17 April 2008

Wubi and Xubuntu

After the last foray into the world of Linux I swore I wouldn't bother for another year or so.  Somehow I came across a link to Wubi and decided it sounded too interesting to ignore.

Live CDs are not really practical for doing anything much more than a cursory look, and installing to a partition is too much work. Wubi is a very clever 'third way', installing a distro as if it were a Windows application, yet allowing the OS to start from the Windows boot menu, at full speed.  Clever stuff.  It uses a virtual disk (like VMware), which the Wubi creators admit will make file I/O slightly slower.

Wubi is set up to work with Ubuntu Linux, which most people will experience via Gnome and the rather, er, brown theme.  Ubuntu is also available with KDE but there's a third alternative, Xubuntu, which uses the xfce window manager. I like this even more because it's lighter/faster than either of the others, and offers just enough functionality, without getting in my way.  I simply don't need (or want) a lot of silly 3D desktop effects, nor a hundred different ways of playing media files.

I simply ran the installer and selected Xubuntu.  Some time later, it was ready.  Reboot, select Xubuntu, and there it was.  Amazing!  Even more amazing was the fact that Xubuntu located all the laptop hardware, including the sound and WLAN chipsets.  As soon as I selected the network applet, it offered to connect me to my home WLAN.  I was up and running, connected to the internet, in minutes.  No other installer/distro combination has come even close to being this good.  Xubuntu is excellent - I have had no difficulty installing the extras I need (Java JDK, OpenOffice and Netbeans, for example) via Synaptic.

There is a way to convert the Wubi install to something more permanent using LVPM.  I will probably do this, but only when I've sorted out the partitioning of the drive - a chore I'm not looking forward to.

Thursday, 3 April 2008

Tomorrow's code

I've just finished reading this piece by Bill Thompson, on the BBC Technology site. After a potted history (taking in the dear old BBC Micro, of course) he makes a good point about the way we no longer seem to be encouraging school students to learn to program: greater importance is attached to teaching them to use office suites and access the internet than to enquiring into how all this software got written, or why it isn't more reliable.

This is just one symptom of a larger problem facing science and engineering generally. Physics is under threat, engineering (so we hear) is less popular than ever, and it often feels as if greater value is placed on producing 'entrepreneurs' and managers than engineers. I think we're running the risk of forever losing our position as world-leading engineers and innovators; perhaps that position is already lost?

I want to see people excited and enthused by the truly great scientific challenges facing us. Our future national prosperity depends on us being smarter than the competition: we cannot grow bigger, we cannot be much more populous, we cannot rely on mineral or oil wealth, and surely we are beginning to appreciate just how precarious our position can become when we depend too much upon financial markets whose behaviours are globally linked, not wholly predictable and not under our control.

We seem to spend a lot of time looking back wistfully at past triumphs (like the invention of Radar, cracking Enigma and the birth of computation, discovering DNA), rather than looking forward and preparing to celebrate the next ones. Are we talking ourselves out of a great future, collectively mentally preparing ourselves to accept that our greatest achievements lie behind us? Do we really want to turn our country into a kind of museum whose dusty artifacts catalogue a brilliant past?

Friday, 7 March 2008

Hate Vista, love C#

Came across this blog post while following up on a semantic-web-related Google Code project:

Why I hate Windows Vista (and can’t wait to re-install XP) « The Wandering Glitch 2

I have often wondered: how can the company which brought us C#, the .NET Framework and the CLR possibly be the same company which excreted Vista?

The guy behind the above-referenced post is clearly a fan of C# (as I am), but despises Vista (as I do, along with those who made all the colourful comments on his post). I have left Vista on my new laptop, but in a much reduced partition, and have given over the lion's share of the drive to my copy of XP Professional, which I still think is pretty damn good.

What about the competition? On the desktop, there isn't much that's credible. (I'm not considering MacOS because to get that you have to buy the hardware). So I spent (or should that read wasted?) a few hours with some of the latest crop of Linux distros last weekend: Fedora 8, openSUSE 10.3, the latest Mandriva, PCLinuxOS 2007 and probably another one but I can't be bothered to recall which. None of them impressed me in the least. Only ONE of them (PCLinuxOS) correctly detected all of the important devices on this laptop, including the wireless chipset. Plus they all looked terrible when compared to Windows XP, especially in terms of font rendering, so they couldn't even seduce me with glamorous graphics.

What really drives me mad is that none of these distros really stands out: they all look virtually the same (Gnome or KDE, plus or minus a colour-scheme and some desktop wallpaper), they all contain more or less the same rag-tag collection of packages, but (and here's the kicker) they're all different in irritatingly detailed respects, some of which, such as package management or filesystem layout, are downright inconvenient! Every year is heralded as 'the year of Linux on the desktop', and every year I dutifully have another look, then gratefully boot back into Windows XP and get on with business.

I wonder what's next for Windows? With Microsoft so chock full of truly bright and talented folk, I'm really hoping that we can look forward to the Windows that Vista should have been. And when they do finally release that, why not give away XP? Or, to avoid the inevitable complaints about anti-competitive pricing, make it 50 bucks? Then nobody except the bigots or fundamentalists will need to waste time with Linux. Go on Microsoft, do us all a favour.

Thursday, 28 February 2008

Solaris Developer Express 01/08

The latest release DVD dropped through the letterbox last week, and I eventually found time to try it out, hoping very much that it would fix the issues I mentioned in the previous post.

What a disappointment. First, although there is an option to upgrade the existing installation I decided not to take the small risk that this might not replace all drivers etc., so I went for a fresh install over the top of the old one. This went OK, right up to the point where I removed the DVD and rebooted...

After the initial startup, Solaris starts building a database of some sort. I think this is a one-off operation which it performs the first time a new installation is booted. I recalled seeing it before, so left it to run while I went to get a mug of tea. When I came back, I found a dead laptop - no login prompt, no power light, nothing. It was then that I realised that I'd been running on battery power during the install, and I hadn't flipped the power-supply wall switch to 'on'.

So what? Power switch to on, and start the laptop: it'll be fine. Wrong. The OS wouldn't boot. I can't recall all the rubbish that scrolled past, but I'm pretty sure that the power finally ran out while this configuration database (whatever it is) was building, and without it you're stuffed. I wasn't impressed.

But I was determined enough to start at the beginning (again), this time with AC power on! And of course this time it installed perfectly. However, after logging in I was disappointed to discover that none of the networking devices were recognized, nor was the sound, and I presume the graphics chipset wasn't either (as before). This time I didn't even see the nwam dialog, so I have no idea how to make Solaris usable on this laptop.

I'm afraid that's it, for me. I won't be wasting any more time on Solaris, until I'm sure it will at least connect to the network. Windows XP is still my favourite day-to-day desktop OS, because it just works. I'm so glad I didn't remove it.

But what a pity: I had high hopes that Solaris might offer what no Linux distro has so far managed - a good, professional alternative to Windows, created and supported by a trusted company like Sun.

Tuesday, 19 February 2008

Solaris Developer Express on Acer Laptop

I've been keen to try Sun's Solaris Developer Express (SXDE) for a while now. They'll ship you the DVD free-of-charge, so there's no reason not to give it a spin.  I'm waiting to get 1/08 (this year's first drop), so I installed 9/07.  You can read a short review of the 9/07 release here.

I made a partition for it on my Acer TravelMate 5720 (using Acronis Disk Director, which I can recommend), rebooted with the DVD in the drive, and followed my nose.  It was very straightforward, and once into the main part of the install process I was able to leave it chuntering away while I got on with other things.

Eventually, a reboot (without the DVD in the drive of course) and the first pleasant surprise was that the Grub bootloader had correctly detected the other two OS on this machine (Vista and XP), and I was able to boot into all three without a problem.  Full marks for that (and a sigh of relief).  However, once into SXDE I discovered a few annoying issues: hardware support (predictably enough) plus strange behaviour from the wireless network software.  Below is what I found, plus some pointers which I hope may help others in the same situation.

By default, SXDE uses something called Network AutoMagic (nwam) which supposedly detects wireless networks and gives you the opportunity to connect to them.  Well, it certainly detected my wireless network, but stubbornly refused to connect me to it.  It prompted (correctly) for the wireless network ID (not broadcast) and the WPA password, appeared to accept both, but didn't connect or give me an error message.  Every few minutes, the nwam dialog popped up again, but repeated attempts made no difference.  The documentation for nwam is quite poor and not helped by the fact that some URLs in Sun's online documentation seem to point to the wrong place.  The OpenSolaris project pages were probably the most helpful.  I tried stopping/starting the service and playing with the parameters, but this issue remains unresolved, and is very annoying.
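
For reference, the workaround most often suggested in the OpenSolaris material is to abandon nwam entirely and configure the network by hand. I'm quoting from memory, so treat the exact service names as assumptions to check against your release:

    # disable Network AutoMagic and fall back to manual configuration
    svcadm disable svc:/network/physical:nwam
    svcadm enable  svc:/network/physical:default

After that, the interface has to be brought up manually (ifconfig and friends), which is hardly the out-of-the-box experience nwam promises.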

Worse still, the NetLink BCM5787M Gigabit Ethernet PCI Express device (i.e. fixed-wire ethernet) also didn't appear to work, which meant no connectivity whatsoever!  Not a great position to be in.  But, there is hope - keep reading.

The other hardware issues were less irritating: the Texas Instruments 5-in-1 multimedia card reader has no Solaris driver (I don't really care), the audio controller (Intel 82801H) chipset is supposed to have a bundled driver but it doesn't work, and the Intel Mobile GM965/GL960 Integrated Graphics Controller is also supposed to have a bundled driver but isn't reported. 

For anyone else doing this, I strongly suggest running Sun's excellent Device Detection Tool, a free (web-start) download which will give you a detailed report of what hardware you have, and whether there is a Solaris / SXDE driver for it.  This gave me hints on where to look for the missing drivers, and told me exactly what hardware I actually have in this laptop - very useful.

This tool pointed me directly to the page on the Broadcom site where I can download the Ethernet drivers for my hardware.  Pretty good, but when you get there, you discover the following message: "Note: Broadcom does not offer UnixWare, SCO and Solaris drivers for NetLink Ethernet controllers."  Not good.  But it may be that I can use the Linux driver.  When I get time, I will return to this and post my experiences here.

Lastly, a note on aesthetics.  Solaris uses Gnome by default, which is OK rather than outstanding in any way.  But colours and fit/finish on the desktop are very good indeed: in my opinion, Solaris looks great in every way, except for font rendering.  There is a font-smoothing facility (in preferences) but this just doesn't approach the quality of Microsoft's ClearType implementation in Windows.  For me, font rendering is a big deal - I hate to look at ugly fonts or blurry, smeary characters.  I suppose this may be because Solaris is using a down-level video driver, because it doesn't support the Intel Mobile chipset on this laptop.  Get the font rendering right, and I'd be happy to sit in front of Solaris all day.

As soon as I have a network connection, I'll continue evaluating Solaris and post my findings here.

Sunday, 17 February 2008

Java Futures - and Software Processeses (sic)

If you're looking for a good summary of what's coming up in Java FX, the Java SE Update (the 'consumer JRE') or JDK 7, then this talk given by Chet Haase is a must-see.  It's quite long, but it's worth taking the extra time because this is good, solid content.  And it's so good to get all this material presented by a first-class technical presenter rather than having to sit through shallow Powerpoint 'fluff' from marketing.

After watching this I located Chet's blog, and found his hugely entertaining send-up of our industry preoccupation with methodology.  I think we have a new candidate taxonomy of the software process landscape!  Conference Driven Development reminded me of some Microsoft technologies I recall hearing about - people were building plans on top of some of this stuff before we'd even left the PDC...

I don't have time to play with Update N or to try Java FX Script just now.  FX Script interests me mainly because of the nice binding environment it promises, but JRuby will probably get my attention before FX, simply because I can put it to use on real problems more quickly.


Saturday, 9 February 2008

Oracle 10g XE, SQL Developer and Java

I've been forced to install and use Oracle 10g XE recently.  The experience hasn't been pleasant.  I'm sure the database technology itself is perfectly good, but the tools and support are dreadful, when compared to Microsoft's SQL Server Express Edition or MySQL.  I'm writing this while still cross, so it's not going to be very nice about Oracle or Java.

I need XE for several reasons, one of which is that I have a dump file containing a complete database which I need to work with.  Oracle XE comes with a (rather lame) web-based admin module which doesn't even seem to have tools for importing database dump (.dmp) files.  To do this you need the command-line imp tool.  This would have been OK if it had worked (it didn't), but even then I hate having to find the right command-line incantations for things I only occasionally have to do.

Why didn't imp work?  I'm not sure yet, because right now I'm so pissed-off I can't be bothered to scroll through the screenfuls of error messages it generated.  (By default it doesn't even write these to a log file either - you have to tell it to do that.  Duh.)  This is just flat wrong.  If there are version issues, or permission issues or similar, I should get a simple message, early, to tell me this, and the import process should stop right away, not struggle hopelessly on, scrolling pages of crap at me.
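
For anyone else about to fight this battle, the general shape of the command is something like the line below. The user, password and file names are placeholders, and it's the log= parameter that persuades imp to write its complaints to a file:

    imp system/yourpassword@XE file=mydb.dmp log=import.log full=y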

The online documentation isn't very good either.  I admit I haven't burrowed through every page but I shouldn't need to do that, just to import a dmp file and manage an additional database.

So, I looked for a management tool (like the tools you get with MySQL, SQL Server and PostgreSQL), and yes, there is one.  The Oracle SQL Developer is a Java application which you can download with or without the JRE.  Well I've got Java 1.5 and 1.6 on my machine (both JRE and JDK), so I figured I could take the smaller download.  You get a zip which you unpack somewhere, and run the top-level exe. 

So, I ran the EXE.  And what's the first thing I see?  This:

[screenshot: a dialog asking me to browse to the location of java.exe]

So, unlike many other Java applications, this one can't locate any of the (four) Java installations on my machine.  Poor, but not disastrous I suppose, so I browsed to the Java 1.6 JDK and clicked OK.  I expected the tool to start, but instead I got this:

[screenshot: an error dialog]

What?! Perhaps it doesn't like 1.6 - perhaps I'll try 1.5.  So I ran the EXE again, expecting to be prompted to browse to java.exe again.  But no!  Instead, I immediately got the second error dialog again!  The act of browsing to some java.exe seems to write a setting somewhere, which is used in subsequent launches, even if it's wrong.

It's hard to believe that one of the largest software companies on the planet can offer this kind of low-rent, sub-shareware experience - especially on an entry-level product surely intended to attract new users of Oracle technology.

I'm just not going to waste any more of my time on this.  It's more than enough to make me uninstall Oracle 10g XE and give up trying to use it.  I've used MySQL and Microsoft SQL Server Express happily in the past, so I'll see if I can import or convert the dmp file and use one or other of those instead.