Saturday, 18 December 2010

Madura Rules part 4

This one is about how to add external functions to Madura rules. First, let's look at a rule with a built in function:
formula: Customer "sum"
  totalAmount = sum(invoices.amount);
This shows a simple formula rule which uses the internal function named sum(). Here sum() adds up the amount field of every invoice attached to the customer object and stores the result in the totalAmount field.
Remember that if a new invoice is attached, an old one is removed, or the amount on any invoice is changed, this rule will fire and work out a new value for totalAmount.
We have a good selection of built-in functions: things that manage dates, number conversions and lists. Sum() is an example of a list function. But what if you have your own idea for a function? Can you write your own?
Well, of course, but there are some simple rules to observe so that the engine can use it. Here is an example of a user-written function:
public class SampleExternalFunctions {

  @Function
  public static String regex(String source, String pattern) {
    return "yes, that's okay";
  }

  @Function
  public static Number combine(Number a, Number b) {
    return a.doubleValue() + b.doubleValue();
  }
}

The example shows two functions. Neither of them is very impressive in terms of what it does, but that is not the point here. The functions are both static and both carry the @Function annotation. That's about all you need to remember. The class must go on the classpath of your application and also on the rules generator dependencies.
The arguments you pass to a function should be as neutral as possible to cater for all cases. So use Number rather than Long or Float. String and Date are fine.
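Once it is on the classpath, a user-written function is called from a rule just like a built-in one. A hypothetical formula using the combine() function from above (the field names are invented for illustration):

```
formula: Customer "combined amount"
  combinedAmount = combine(baseAmount, extraAmount);
```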

Saturday, 27 November 2010

Madura Rules Part 3

This has been a while coming. Part 1 covered the types of rules available and part 2 was about how to configure Madura Rules using Spring.

In this entry I want to cover some of the less obvious features in Madura Rules: decision tables and constants. These are closely related to choice lists in Madura Objects.

Decision Tables

Decision Tables are descendants of a construct in COBOL. Oh, yes, COBOL really did have some smart stuff in it. Actually these are somewhat simplified from the COBOL decision tables but they do a good job.

Start with an XML file that looks like this:

<DecisionTable name="business-customerType" scope="Customer" message="">
      <ColumnName autoAssign="true">business</ColumnName>
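Filled out for the example discussed below, the table might look something like this (the row and item element names are my guesses, not confirmed Madura syntax):

```
<DecisionTable name="business-customerType" scope="Customer" message="">
      <ColumnName>customerType</ColumnName>
      <ColumnName autoAssign="true">business</ColumnName>
      <Row><Item>A</Item><Item>AG</Item></Row>
      <Row><Item>B</Item><Item>AG</Item></Row>
      <Row><Item>B</Item><Item>FISH</Item></Row>
</DecisionTable>
```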

As you can see this is mostly rows and columns with values in each cell. COBOL's equivalent allows expressions in each cell.

The decision table has a name, a relevant object (Customer in this case) which is the equivalent of the scope in the other rules. There is also a message identifier which is delivered as an error if an attempt to set an incorrect value is made.
The column names refer to fields in the Customer object.

Below that are rows and columns. Each row specifies a valid combination. So if we set the customerType to B then the only valid values for business are AG and FISH. If your application examines the metadata for business, say to create a drop-down list, it will offer only those values.

If we set the customerType to A then there is only one valid value for business and, because we set the autoAssign attribute on that ColumnName, that value will actually be set.
We can, of course, decide to set the business value first and have it decide what options are available for customerType instead. And we can have more than two columns.

Your decision table data comes from the XML file by default, but you can write a factory in Java to deliver the data.

You can specify soft constants in your rules like this:

rule: Customer "Determine business from customerType"
  if (customerType == ${xyz})
    business = IndustryType.AG;

Here xyz is a constant, but one we might want to change at some point, so we specify it this way. Then we write some XML that looks like this:

  <Constant name="xyz">aaaab</Constant>

Like the decision table the value may come from the XML or it may be overridden by a factory you can supply yourself which delivers the value. This is useful for values that cannot be determined before deployment time, and/or for values that are used in multiple places and you want to ensure they are actually all the same value.

Injecting the XML

All of the decision tables and all of the constants can reside in a single XML document. In fact this document can also contain the choice lists used by Madura Objects. But they can also live in separate documents, injected separately into the rules engine. This allows options such as generating the decision tables by some external program etc. All documents are injected as Spring resources into the engine so you have all the deployment options supported by that (classpath, file, URL etc).
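As a sketch of what that wiring might look like (the class and property names here are illustrative, not the actual Madura bean definitions):

```
<bean id="rulesEngine" class="nz.co.senanque.rules.RulesPlugin">
    <!-- each document is an ordinary Spring resource -->
    <property name="decisionTableDocument" value="classpath:decision-tables.xml"/>
    <property name="constantsDocument" value="file:/etc/myapp/constants.xml"/>
</bean>
```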

There is still more to cover, such as how to add external functions to the rules and how to handle I18n issues. But that's another post.

Sunday, 7 November 2010

How to write a generic java compile target in ant

The problem

We use ant for our builds and it generally works just fine. But we've been rationalising our many build scripts lately, identifying the common stuff and putting it into a common file. Obvious, really, and that is going well. In the process I came across an awkwardness, a kind of mismatch inside ant. We wanted a generic compile target that handles:
  1. running javac to compile the source files
  2. copying the non-java files (xml, xsd, xsl etc) across to the bin directory
We often have multiple source paths and javac lets you specify them like this:

<javac srcdir="java/src:java/test:java/other"/>

So I have three directories with source files. But when I want to copy them I have to say:

<copy todir="${basedir}/bin" flatten="no" >
  <fileset dir="java/src" includes="**/*.properties,**/*.xml"/>
  <fileset dir="java/test" includes="**/*.properties,**/*.xml"/>
  <fileset dir="java/other" includes="**/*.properties,**/*.xml"/>
</copy>

Which would be okay but in different projects the list of source directories varies so I can't have a generic target that will compile anything. Well, actually I can...

The Solution

First I need to have ant-contrib in my classpath, then I define the taskdef for it like this:

<taskdef resource="net/sf/antcontrib/antcontrib.properties"/>

Once that is done I can make my generic target which looks like this:

<target name="compile-source">
        <delete dir="${basedir}/bin" failonerror="false"/>
        <mkdir dir="${basedir}/bin"/>
        <javac destdir="${basedir}/bin" srcdir="${srcpath}">
                <classpath>
                        <path refid="libs"/>
                </classpath>
        </javac>
        <foreach list="${srcpath}" delimiter=":" param="d" target="copysource"/>
</target>
<target name="copysource">
        <copy verbose="true" todir="${basedir}/bin" flatten="no" >
            <fileset dir="${d}" includes="**/*.properties,**/*.xml"/>
        </copy>
</target>

The key to this is the foreach tag in the compile-source target. That loops through the srcpath, splitting out the colon delimited fields and running the copy.

It took me a while to figure out how to use foreach, so hopefully this is a help.

Saturday, 23 October 2010

JAXB: Sharing object definitions between projects

This is about publishing java classes generated from xsd files using JAXB's xjc utility. Specifically I'm publishing a jar file from one project and using that jar in a second project. The second project also has an xsd file which imports the xsd from the first project. There are several tricks to this and I have put them into a sample here.

The sample would be two separate projects if it were in the real world, but it is easier to package it all into one. There is a first.xsd which is the xsd file from the first 'project'. You'll find a first-schema.xjb and a first-catalog.xml, and these are quite ordinary. The ant build script calls build-generated.xml to generate the java files and puts them into the generated1 directory. It then compiles those classes and packages them into a jar file along with the first.xsd file.

So, at that point we have a jar file called first.jar which contains the generated classes and an xsd file. This is what the first 'project' delivers. As I said, this is all very ordinary. If you're not too familiar with JAXB you might find them interesting but the second 'project' is where the good stuff is.

The second 'project' includes second.xsd, second-catalog.xml and second-schema.xjb. But before we get any further note that the first.jar file was placed into the temp/lib dir. The compile target (in build-generated.xml) has all the jars in temp/lib on its classpath. So first.jar is on the classpath for the second project compile.

Now, let's take a look at the second.xsd file. Note that it specifies its package name in the xsd like this:

      <jaxb:package name=""/>

You will find a similar entry in first.xsd because it needs a package name as well.

Also in second.xsd you will find this:

<xsd:import namespace="" />

As well as a reference in the header:


So that when we refer to the Invoice object, which is defined in first.xsd, we can say sandbox:Invoice and the reference will be resolved... well almost. There is another step involving the catalog file. Catalog is not a specifically JAXB thing, though I've not had to use it on anything else yet. The second-catalog.xml file
has this entry:


This is almost obvious. The namespace we referred to in the second.xsd file (and remember we only used the namespace, nothing else) is mapped to a URI. The URI refers to the jar file we placed in temp/lib earlier. So that is enough information to find the first.xsd file and import it.
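Since the namespace and catalog values are blank in the listings above, here is a hypothetical reconstruction of the pieces involved (every name and URI below is invented for illustration):

```
<!-- in second.xsd: declare the first project's namespace and import it -->
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:sandbox="http://example.org/sandbox/first"
            targetNamespace="http://example.org/sandbox/second">
    <xsd:import namespace="http://example.org/sandbox/first"/>
    <xsd:element name="Order">
        <xsd:complexType>
            <xsd:sequence>
                <xsd:element ref="sandbox:Invoice"/>
            </xsd:sequence>
        </xsd:complexType>
    </xsd:element>
</xsd:schema>

<!-- in second-catalog.xml: map that namespace to the xsd inside first.jar -->
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
    <public publicId="http://example.org/sandbox/first"
            uri="jar:file:temp/lib/first.jar!/first.xsd"/>
</catalog>
```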

We are almost there. If you run the build in the samples you will find that the first.xsd classes are generated twice, i.e. in both generated1 (where you would expect them) and in generated2, when we are really only trying to generate files for second.xsd. I've tried to suppress this second generation and you can see my commented-out efforts in second-schema.xjb. Neither of them actually worked for me, but even if they had they would have involved naming each of the classes I want to suppress. This is okay in a sample project where there is only one, but not practical when there are hundreds of classes being imported, which is my real world scenario.

So I'm going to just live with that issue. It is not such a big one anyway. My builds typically copy the generated classes off to another directory to compile them, and I can easily copy just the one package I want and ignore the others.

Other notes:
  1. You have the option of specifying the package name in the xjb file rather than the xsd file. I prefer the xsd file because we can only specify one xjb file at a time, so my second-schema.xjb file would have to specify the package name for the first.xsd which is redundant information. It would, in the real world, have to specify about a dozen packages which is a maintenance issue.
  2. The xjb files for both first and second have a globalBindings option. This can also be specified in the xsd files. But if you do that, and you put it in both xsd files, xjc will complain about finding two globalBindings entries. So this does need to be in the xjb file.
  3. I'm using ant and ivy in the samples. The ivy depends on the ivyroundup repository. It should self-setup for you as long as you have ant and run the build.xml file (install ant, cd to the JAXBSandbox dir and type 'ant').
  4. You can put the schemaLocation on the import in the xsd file rather than use a catalog file. But then the exporting project is dictating where the importing project must store its xsd files. I prefer to keep them decoupled and catalog achieves this nicely.

Friday, 10 September 2010

Fun with JUnit4

I decided it was well past time to take a closer look at JUnit4. If you are still using JUnit3 you need to know that JUnit4 uses annotations and, when used with Spring, you can use the annotations to load an application context. Here is a sample test:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
public class RunDatabaseScriptsTest {
    @Autowired private RunDatabaseScripts m_runDatabaseScripts;

    @Test
    public void testExecute() throws Exception {
        m_runDatabaseScripts.execute(); // exercise the injected bean
    }
}
What this does is run the method called testExecute which, unlike JUnit 3, can be named anything you like. With the @RunWith and @ContextConfiguration in place Spring will load the file RunDatabaseScriptsTest-context.xml and it will inject the bean called runDatabaseScripts into the autowired field. The autowired field does not even need a getter and setter.

So this is nice and succinct and you don't need to extend a class from Spring like you had to in JUnit3. It gave me some ideas though.

We use JNDI names to define our database settings, normal enough in things deployed to an application server. We also use them in our tests, so our bean definitions have JNDI references. We could just use a DriverManagerDataSource bean injected with the database settings but we have 3rd party library that everything depends on and that is most easily configured via JNDI.

The question arises, though, how to ensure our JNDI names have been set up before Spring loads the beans. Any later and Spring will complain about the missing JNDI. So I took a look at that @RunWith annotation which names the class Spring uses to do its work.

But before we go there I need to mention that Spring has added an easy way to extend its runner. You just add an annotation:

@TestExecutionListeners

near the @RunWith and you can supply a list of listener classes. These implement 

org.springframework.test.context.TestExecutionListener

which has methods to run before and after each test as well as before the first test and after the last test. This is useful, but not in my case because they all run after the beans are loaded which is too late for me. Also you cannot configure any options on these listeners which is limiting.

Now, back to the @RunWith which specifies the SpringJUnit4ClassRunner class. I extended this class so I could override this method:

@Override
protected TestContextManager createTestContextManager(Class<?> clazz) {
        return new TestContextManagerLocal(clazz);
}

This is so that I can get the class to take my own TestContextManager. The TestContextManager is the key to this. All it has to do is extend Spring's TestContextManager and override the registerTestExecutionListeners method.

Spring gathers its internal TestExecutionListeners and the ones defined in @TestExecutionListeners and passes them all to this method for registration. To get what I want I need to slip the JNDI loader into the list at the first position.

    public void registerTestExecutionListeners(TestExecutionListener... testExecutionListeners) {
        JNDILoad jndi = getTestContext().getTestClass().getAnnotation(JNDILoad.class);
        if (jndi != null) {
            super.registerTestExecutionListeners(new JNDIResources(jndi));
        }
        for (TestExecutionListener listener : testExecutionListeners) {
            super.registerTestExecutionListeners(listener);
        }
        for (Annotation annotation : getTestContext().getTestClass().getAnnotations()) {
            TestExecutionListener testExecutionListener = getTestExecutionListener(annotation);
            if (testExecutionListener != null) {
                super.registerTestExecutionListeners(testExecutionListener);
            }
        }
    }
First we check the test class for the @JNDILoad annotation and, if it is there, we register the JNDI loader. Then we register the list we were passed. Finally I loop through the other annotations on the test class. There's a method that figures out whether an annotation maps to a listener (it's a simple lookup/translate-to-a-class thing, you can work out your own easily enough) and we register that too.

So now my test class looks like this:

@RunWith(SpringJUnit4ClassRunnerLocal.class) // my extended runner (the name here is illustrative)
@ContextConfiguration
@JNDILoad
@DBLoad
public class SampleTestCase {
    // tests as before
}

The @RunWith now names my own runner class. @ContextConfiguration is still there and still works the same way. The new annotation @JNDILoad specifies my JNDI loader. There's a simple implementation that reads data sources from a file and loads them into a mock JNDI implementation. The main point is that it does this first.

The @DBLoad loads a listener that is configurable (as opposed to the ones defined in @TestExecutionListener which aren't). This particular one runs a database script using the connection defined in the bean. I use it when I am testing with a hypersonic or other in-memory database to pre-load the data.

Friday, 6 August 2010

Being dumb with JAXB and Spring

Yesterday I wasted a couple of hours and it is my own fault. Just in case you hit the same problem (or in case I hit it again and forget) I'll note it here.

I'm using a mix of JAXB and Spring and I was getting an error message:
javax.xml.bind.JAXBException: "com.mypackage.MyClass" 
    doesn't contain ObjectFactory.class or jaxb.index

This is when Spring tries to load this bean:
    <property name="contextPaths">

You may have spotted the obvious mistake already, but I didn't.
The contextPaths property has to be a list of packages, not classes.
Spring is trying to use the class name I gave it as a package name, looking for an ObjectFactory class or a jaxb.index file in that package. Since there is no such package it reports that it cannot find them. Fair enough.

I, however, assumed there was some complication to do with JAXB versions (I have had several of those) and wasted more time than I should have.

For the record, the bean definition needs to be like this:
    <property name="contextPaths">
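Spelt out (the package and class names follow the error message above, and the surrounding bean definition is omitted), the wrong and right versions differ only in the list entry:

```
<!-- wrong: a class name -->
<property name="contextPaths">
    <list>
        <value>com.mypackage.MyClass</value>
    </list>
</property>

<!-- right: the package name -->
<property name="contextPaths">
    <list>
        <value>com.mypackage</value>
    </list>
</property>
```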

Monday, 2 August 2010

Clonezilla
What a great idea.

I am quite careful about backups because I use my laptop for work, and if something goes wrong with it then I stop earning, or at least I inconvenience the people I work for.

So we have two laptops the same, one for me and one for my wife. If mine goes wrong I figured I can pop the hard disk from mine, plug it into hers and I'm away (okay, she's one laptop short but we're good at sharing).

But what if my hard disk is the thing that dies? I can go to my backups and restore on a new disk but I just know there will be some files somewhere that missed the backup, probably some vital system setting that I'll spend precious hours sorting out.

Enter Clonezilla. This will clone your disk onto a spare one. Everything. You do need a spare disk (at $NZ100-ish that's easy) and some way to connect it to the laptop (a USB drive enclosure).

Then you burn the Clonezilla software to a CD and boot from that, follow the instructions and it will clone your old disk to your new disk. For me the process took about 1.5 hours and I tested it by booting off the new disk. Everything was there.

I plan to redo this about every six months.

So if I lose my hard drive I can boot the clone, restore my backups to it, and I'm up and running with minimal fuss.

I usually back up weekly to another USB drive and all backup drives are kept at a separate site. We had a house fire once, so I know that storing them in a cupboard in my house is not good enough. I'd lose both the laptop and the backups.

I have another USB drive for archive stuff, but I plan to copy those to archive quality DVDs now that I have all my photos scanned.

Sunday, 18 July 2010

Testable Java

Lately I have been modifying some code other people have written. There are some things about writing Java that I consider fairly basic, but not so I find. I will elaborate.

We use Spring Framework a lot. Spring supports the dependency injection concept and, used properly, makes life much simpler. To use it properly the idea is to 'inject the dependencies' (well, obviously, you'd think).

Dependency Injection

One of the huge advantages with this approach is that the code is more testable. You might have a class A which depends on class B. Without dependency injection you'd probably hard code a 'new B()' in class A, or have B a static class, and then call it. That means when you want to test A you necessarily have to test B. You can't mock it out.

It is much better to create a triplet of:
  • B the interface
  • BImpl the production code
  • BMock a mock implementation that does very little
Then you add a field of type B to class A with a setter and getter. When you write a unit test for A you set BMock into it, so A calls BMock rather than BImpl when it is being tested.

Otherwise you find that to just test one class you have to do a massive amount of work to set up all the dependencies and their dependencies etc. Let's say B is a DAO which requires a database. BMock does not, it just returns some hard coded values. Now A's unit test (which uses BMock) needs no database. That's a help.
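A minimal sketch of the triplet, using the placeholder names from the text (none of these are real project classes):

```java
// B: the interface A depends on
interface B {
    String fetchValue(String key);
}

// BImpl: the production code, which would normally need a database
class BImpl implements B {
    public String fetchValue(String key) {
        throw new UnsupportedOperationException("needs a real database");
    }
}

// BMock: returns a hard-coded value, so no database is required
class BMock implements B {
    public String fetchValue(String key) {
        return "hard-coded";
    }
}

// A depends on B only through the interface and a setter
class A {
    private B b;
    public void setB(B b) { this.b = b; }
    public String describe(String key) {
        return "value=" + b.fetchValue(key);
    }
}
```

In production Spring injects BImpl; the unit test simply calls setB(new BMock()) and never goes near a database.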

Sometimes there is more than one BImpl, for example you might want a version of B that fetches data from a web service or LDAP. A should not need to know anything about this (this is called encapsulation and is fundamental to OO concepts).

The instantiation of these classes is done by Spring. An XML file defines the relationships between the classes, or you can use annotations. But this post is more about Java than Spring. The point I must make is that your Java code is not doing the instantiation, otherwise it would have to know which B to instantiate.

It is worth including your mock classes in your distribution. We often find other internal projects using our classes (eg writing their equivalent of class A) and they need to use BMock to test their code. So it makes sense to bundle BMock in the same jar file as BImpl.

It makes even more sense to produce two jars and have them test against one and deploy the other, but that is more work.

Static Classes

So, is there no place for a class with a few static methods? Is that always the wrong answer?

Actually no. You should still use statics, I like to call them Utils classes, when the following criteria are met.
  • The class has no non-static internal storage (well, obviously, otherwise the static methods would not do what you want)
  • It needs to be called from lots of places and getting it injected everywhere that needs it would be tedious.
  • It has no dependencies of its own. You can relax about JRE classes of course. You might even relax about 3rd party classes. But it should not depend on any classes you wrote yourself and anything it does depend on should be small/simple.
Consider Spring's StringUtils as an example. It has some static methods that manipulate Strings. Lots of code calls it and it depends on nothing else. Also there is only one StringUtils. There aren't several variations with different code needing different kinds of StringUtils.
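A home-grown example of the kind of Utils class that passes those criteria (invented for illustration, not a real library class):

```java
// A static utility class: no internal state, no dependencies beyond the JRE
final class TextUtils {

    private TextUtils() {} // no instances, statics only

    // Returns the string with its first letter upper-cased
    static String capitalise(String s) {
        if (s == null || s.isEmpty()) {
            return s;
        }
        return Character.toUpperCase(s.charAt(0)) + s.substring(1);
    }
}
```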

But pretty much everything else needs to be an Interface/Impl/Mock triplet.


Injecting Values

Sometimes you want to inject values into your classes rather than other classes. You might have a class that sends messages to a web service and you want to configure that externally. This allows people to adjust the final address used at deployment time. There are several ways to do this:
  • Spring allows you to load properties from one or more properties files and use them in your bean definitions.
  • MaduraConfiguration allows you to specify the values in an Apache Configuration file and (again) inject them into your classes by specifying them in the bean definition.
No doubt other people have other variations. But one thing you should never do is inject your configuration scheme into your class and have the class call it. The class should be unaware of where the configuration information came from. Something else called the setter and told it the value.

That has two advantages. First, you can change the way you configure things without having to change the configured classes. They never knew what was doing the configuring so there is no need for them to change.

Second, you can avoid writing a mock configuration source for your class to call if it never calls anything to do configuration. This saves a little work.
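A sketch of the first option, Spring's properties-file support (the bean, property and file names are invented for illustration):

```
<!-- values come from a properties file on the classpath -->
<context:property-placeholder location="classpath:app.properties"/>

<!-- the class only has a setEndpoint(String) setter; it never reads the file itself -->
<bean id="messageSender" class="com.example.MessageSender">
    <property name="endpoint" value="${ws.endpoint}"/>
</bean>
```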


This isn't the whole story but hopefully it gives an idea of how to make the code you write more testable.

Saturday, 10 July 2010

I wouldn't start from here... (solving serialization problems)

Last week we had a problem with a live system, therefore it was an emergency. The problem was around serialization. The application uses Spring Webflow and, to allow the user to save where they were up to and restore it later, we serialize the Webflow conversation to the database. And there lay the problem.

We used object serialization, which you probably know, stores a binary representation of the Java object. The Webflow conversation object has a bunch of other objects attached to it, including application objects. It works fine as long as those objects do not change, and there lay the problem.

A new version of the application was deployed and one of those objects changed. So immediately any of the older conversations refused to restore, causing all kinds of problems.

The way you're supposed to handle this is to ensure that all your Serializable objects have a serialVersionUID field added. Eclipse flags classes that implement the Serializable interface for this reason and you really ought to take the hint and add a serialVersionUID. Java uses the serialVersionUID to figure out that even though the class has changed you still want it treated the same because the serialVersionUID is the same (unless you changed it deliberately).
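For example, declaring the field looks like this (a made-up class, not one from the application in question):

```java
import java.io.Serializable;

class Customer implements Serializable {
    // Keep this value stable across releases; old serialized instances
    // will then still deserialize after the class gains new fields.
    private static final long serialVersionUID = 1L;

    private String name;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```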

Now, while that is all fine, it is not much help if you've already serialized the objects. At this point I'd like to point out that I had nothing to do with this part of the design, just so you know. I only got involved in solving the problem. But it is a bit like the old joke where you ask an Irishman how to get to Dublin and he says 'well, I wouldn't start from here!'

To solve this I figured we'd need to restore the objects into the old version, then write them to some neutral format, then serialise from there to the new format. Initially I was thinking I would need to run two passes on this. The first to write the neutral format to a file, and second to read the file. The two passes would be implemented in two different jar files so that the two versions of the same objects would not get mixed up. But someone suggested we could make this simpler by messing about with the class loader and I remembered madura-bundle which would provide a neat way to manage this.

Madura-bundle is kind of a simplified OSGi that is closely tied with Spring. You can create a bundle containing your objects and a spring beans file specifying some of them as beans. Then you have a main program with a main spring beans file, and you inject bundled beans into that. Then, and this is the interesting bit, you can switch from one bundle to another. The bundled beans injected into your main program change automatically.

So for this problem I loaded the objects from the two versions of the application into two different bundles, then I could restore the serialised objects from the database under bundle-1, switch to bundle-2 and serialise them back. Easy.

I used xstream for my neutral format. So, having unpacked the objects from the database I called xstream to get them into an XML string. Then, after I switched bundles, I used xstream to create the new version objects which I could then serialise to the database.

Xstream worked pretty much first time and it handles complex structures where an object is cross-referenced several times. Where XML would typically repeat the object definition, xstream stores an XPath when it finds a repeated object. I did have a small issue with one of the webflow objects, which uses Externalizable rather than Serializable. The FlowSessionImpl object has a protected constructor which, naturally, gives xstream a problem. To get around this I had to implement a converter for the org.springframework.webflow.engine.impl.FlowSessionImpl class.

Xstream's architecture uses a bunch of standard converter classes which handle pretty much everything as far as I can see. But when necessary you can add your own converter. In this case all I had to do was extend
and override the unmarshal method to instantiate the FlowSessionImpl class explicitly. I put the converter in the same package as FlowSessionImpl so it overcame the protected constructor.

The code ended up like this. To unpack from the blob I used this method:
public String getXMLFromBlob(InputStream is) {
    XStream xstream = new XStream();
    Object ret = null;
    if (is != null) {
        try {
            // ClassLoaderObjectInputStream is my ObjectInputStream extension, described below
            ObjectInputStream oji = new ClassLoaderObjectInputStream(is, classLoader);
            ret = oji.readObject();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
    return xstream.toXML(ret);
}
I put this into the first madura bundle, and all it does is take the input stream which comes from the blob and converts it. I did have to implement an extension of the standard ObjectInputStream because the standard one locked on to the main class loader. The extension just uses the following to override the resolveClass method:
protected Class<?> resolveClass(ObjectStreamClass desc)
        throws IOException, ClassNotFoundException {
    try {
        String name = desc.getName();
        return Class.forName(name, false, classLoader);
    } catch (ClassNotFoundException e) {
        return super.resolveClass(desc);
    }
}

To get the new objects from the XML I switch to the other bundle and then call this method:
public Object getObjectFromXML(String xml) {
    XStream xstream = new XStream();
    // register the custom converter (its constructor arguments were elided in the original post)
    xstream.registerConverter(
        new org.springframework.webflow.engine.impl.FlowSessionConverter(xstream.getMapper()));
    return xstream.fromXML(xml);
}
This also shows how the custom converter is registered with xstream. The object returned is always the Webflow Conversation object, plus the other objects that were attached to it.

So that's how things are serialized at the detail level. The process is controlled by this code:
setBundle("bundle-1"); // the bundle holding the old version's classes
InputStream is = readBlobFromDatabase(); // reading the blob is not shown here
String xml = getTranslator().getXMLFromBlob(is);
setBundle("bundle-2"); // switch to the bundle holding the new version's classes
Object obj = getTranslator().getObjectFromXML(xml);
Not very much to it really. The setBundle call, obviously, selects a different bundle and this switches back and forth between them as we process each record. I have not shown reading and writing the blob in the database but that is easy enough to figure out.

Although this does provide a solution to the problem you should not get into this situation in the first place.
  1. Do not serialise objects to a database like this. Put them into a neutral format such as XML (perhaps generated from xstream) instead. This feature of Java is best confined to sending objects between applications rather than persisting them.
  2. If you must serialize binary objects then declare the serialVersionUID field in every class. This is not completely foolproof. You can change the object enough to break it. But you'll probably be okay.
As part of this exercise I had a need to find the serial version of the class so that I could tell if I had an old one or a new one. My understanding is that Java uses the serialVersionUID if there is one and if it doesn't then it calculates a value based on the bytecode of the class.  In my case there was no serialVersionUID so I needed some way to find out what it was. I used this:
ObjectStreamClass objStreamClass = ObjectStreamClass.lookup(obj.getClass());
long serialVersion = objStreamClass.getSerialVersionUID();

Saturday, 12 June 2010

Ubuntu Upgrade (8.04->10.04)

I have been upgrading my laptop from Ubuntu 8.04 to 10.04.

The process was fairly simple. My laptop is a Dell Inspiron 9400 and, because I need to ensure maximum uptime, I bought another hard drive (WD Scorpio 250GB WD2500BEKT) to install the new version on. I like to have an easy bail-out strategy for these things and $NZ100-ish is a small price to pay for peace of mind. So I did not run the upgrade script on my existing setup, I installed from clean. This is a good thing to do from time to time anyway because rubbish does accumulate (stuff I thought I would use etc that is too much trouble to hunt down and remove).

How did it go? Very smoothly. After doing the main install I installed everything I needed using the package manager and that was almost all I needed to do. I had to install oracle XE from a separate download and I had a problem on my second monitor. The monitor display was 'wobbly', I could just about read it but certainly not well enough for it to be useful. Googling showed other people were struggling with this problem but the answer was from this guy. Very simple and it worked right away.

I had some minor issues with getting CVSNT working properly after I copied my local repository over but that was just a case of following the instructions closely and doing what they said.

Thunderbird files copied over just fine and set up all my folders and calendars etc. Firefox didn't but I used the export/import for that and it worked.

Things I have not yet verified:
1) Do the VirtualBox machines still see my USB ports? I had issues with that earlier and the answer was to use the non-open version of VirtualBox. No problem, but I'm not sure which version I installed this time.
2) I haven't yet installed my 3G modem. I rarely use it directly, I normally plug it into my wireless router and wireless is working just fine.
3) Phone sync. MyPhoneExplorer, which is a Windows program, was a pain to install on Linux earlier and it never has worked that well under Linux. It is very nice under Windows but when it runs under Wine on Linux enough extra features are crippled for me to look elsewhere for a simple sync program.

But my Java development environment works, email works, open office works etc etc. So I am all set. Now back to Madura Rules development.

Sunday, 6 June 2010

Madura Rules Part 2

My previous post covered how to specify rules for the Madura Rules engine. This post covers how to add Madura Rules to your application.

First, we are assuming that your application is already using Madura Objects and that all we need to do here is build on that.

Add something like this to your ant build script:

<taskdef name="xjr" classname="">
    <classpath>
        <fileset dir="${basedir}/temp/lib" includes="*.jar"/>
    </classpath>
</taskdef>

<target name="generateRules">
    <xjr destdir="${basedir}/generated"
         ... />
</target>

Some explanation: this defines an ant task in the taskdef tag. We're assuming you copied the MaduraRules jar file and its dependencies into the ${basedir}/temp/lib dir.

Then we've defined a target which generates the rules. The rules file, schema file and target package for the generated Java must be specified. The xsdpackageName should be the same as the package you specified in your XJC build when you generated the Java for your business objects. You can omit it if the two packages are the same.

That will generate a class for each rule. Generating Java from a text file is not, by itself, very smart (your Java compiler does that all the time). The rules are translated into Java, but the engine decides which rules are relevant to fire, and that is where the smart part lives.

Now let's add some things to your Spring file.

<context:component-scan base-package=""/>

Make sure the base-package name agrees with the packageName in your XJR command. This ensures that Spring scans that package looking for rules to gather up and use in this...

<bean id="MaduraRulesPlugin" class="" init-method="init">
  <property name="operations">
    <bean class=""/>
  </property>
  <property name="decisionTableDocument">
    <bean class="">
      <property name="fileLocation" value="/choices.xml"/>
    </bean>
  </property>
  <property name="constantsDocument">
    <bean class="">
      <property name="fileLocation" value="/choices.xml"/>
    </bean>
  </property>
</bean>

All the properties here are optional and we cover them later. The important thing for now is that the MaduraRulesPlugin is injected into the validationEngine plugin (ie Madura Objects) like this:

<bean id="validationEngine" class="">
  <property name="plugins">
    <list>
      <ref bean="MaduraRulesPlugin"/>
    </list>
  </property>
</bean>

Yes, you can wire multiple plugins into Madura Objects. They can do whatever you like, actually. Madura Rules is a rules engine, but you might like to add a pricing engine or something more exotic in there. The only thing to watch out for is that you must not cross map fields to more than one engine because the engines cannot see each other's updates.

That's the job done. We defined some rules in the text file, generated them with XJR and then wired the engine into our Spring file. Those rules are now active, and your application code doesn't have to know anything about them. So, of course, you can change the rules at any time without changing your application code.

In my next post on this I will cover those extra properties and some more about what the rules can do.

Saturday, 29 May 2010

Madura Rules part 1

My earlier posts described Madura Objects, which is a way to define Java objects that automatically call a validation engine whenever they change. A valid change is accepted and an invalid one is rejected, leaving the state of the objects unchanged (ie still valid). The objects are just POJOs with an added interface and one method used for fetching metadata. They are all generated using JAXB with a plugin, so you don't have to write any of the code to do all this.

But so far this only handles single field validation, relationships between fields cannot be described. So you can say a string field must be no more than 20 characters long and maybe has to look like an email address. But you can't say if the customer type field is "A" then the business type must be "Ag". Nor can you say there must be no more than n items in a list.

The way to manage this is to add a rules engine to the Madura Objects validation engine. I have implemented this as a plugin because, though I might think my own rules engine is the greatest, other people might want to implement their own. Also I'm not yet certain if I will open source the rules engine so I need to keep it separate from the open source Madura Objects.

What do the rules look like?

rule: Customer "Determine business from customerType"
  if (customerType == "A")
    business = "Ag";

This is a classic rule in that it has a condition and an action. Multiple actions are fine, but there's only one in this example. The syntax intentionally looks like Java. We have a Customer object and we want to ensure that if we set the customerType field to "A" the business field will be set to 'Ag'. Remember this happens automatically. It is probably obvious enough but the 'Customer' just after 'rule:' means the fields 'customerType' and 'business' are fields on the Customer object.

It is worth noting that if there is already a value in 'business', and it is different, then this rule will throw a constraint violation exception and roll back the last change.

constraint: Customer "check the count" "No more than 10 invoices"
  !(invoiceCount > 10);

This kind of rule has just a condition, no action. The condition must always be true, so after any relevant change this rule is checked. If it fails, the change is rejected and rolled back. In this case we might have invoiceCount derived by counting the number of invoices attached to this customer. Attaching the 11th invoice would be rejected by this rule. We can attach a message to the rule, "No more than 10 invoices", which is returned in the exception that is thrown.

formula: Customer "figure the invoice count"
  invoiceCount = count(invoices);

This is a formula rule, basically an algebraic statement that is always enforced. It is how we derived the invoiceCount used in the constraint rule. Yes, we could have combined the previous example and this one with a more complex condition, but that would not make such a good example.

With these rules slipped behind the validation engine you can perform complex validations as well as derive new values. New values that are deemed inconsistent with what has already been supplied are rejected using an exception. The inconsistent value is rolled back.
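As a rough analogy, and emphatically not the real engine, the formula and constraint rules above behave as if every relevant change re-ran logic like this plain Java sketch (all names invented):

```java
import java.util.ArrayList;
import java.util.List;

public class RuleSketch {

    static class Invoice {
        double amount;
        Invoice(double amount) { this.amount = amount; }
    }

    static class Customer {
        private final List<Invoice> invoices = new ArrayList<>();
        private int invoiceCount; // maintained by the "formula" rule

        void addInvoice(Invoice invoice) {
            invoices.add(invoice);
            invoiceCount = invoices.size();   // formula: invoiceCount = count(invoices)
            if (!(invoiceCount <= 10)) {      // constraint: !(invoiceCount > 10)
                invoices.remove(invoice);     // roll back the offending change
                invoiceCount = invoices.size();
                throw new IllegalStateException("No more than 10 invoices");
            }
        }

        int getInvoiceCount() { return invoiceCount; }
    }

    public static void main(String[] args) {
        Customer c = new Customer();
        for (int i = 0; i < 10; i++) c.addInvoice(new Invoice(100));
        try {
            c.addInvoice(new Invoice(100));      // the 11th is rejected
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());  // prints "No more than 10 invoices"
        }
        System.out.println(c.getInvoiceCount()); // prints 10, state rolled back
    }
}
```

The difference, of course, is that with Madura Rules none of this logic lives in your code; it is generated from the rules file and applied automatically.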

In my next post I will describe how this is configured.

Wednesday, 19 May 2010

New Headset

This replaces the headset I blogged about here.

The old headset suddenly stopped playing any bass and went all faint. No, it isn't my hearing, just not enough sound. It is still under warranty so I considered calling that in, but actually I am sick of the way it doesn't stay in my ears when I walk, so I am constantly adjusting it. So, I got a new one, a different one. I got a Sony DR-BT21G, which is the sort that wraps around the back of my head and has pads over the ears rather than plugs that (are supposed to) stick inside my ear holes.

I've had it a few days and I'm quite happy with it. Things worth noting, though.
  • The instructions say to push in one of the buttons to pair it. I thought it wasn't working at first because you have to hold that button down for quite a while before it takes. Don't give up too soon.
  • When I went walking the music skipped badly. The instructions say that when this happens it means it is trying to use a high bit rate and interference (various other radio sources) is messing the signal. I was able to switch to a lower bit rate and now it is fine, the odd skip, but not a problem.
  • The around the back of the head style takes a little getting used to. There's a slight inward pressure on each side of my head and that gave me a headache the first day. I've been working up to longer and longer with it and now it seems fine. So just something to get used to.
Other than those, the music sounds fine, and it manages calls perfectly well. It is generally comfortable and easy to drive. I think it is mostly invisible too, in spite of the ear pads, because my hair is long enough to cover it. That means when I am out and about and talking on the phone people assume I have 'the voices'. Maybe I do. Maybe I don't actually get any real calls. Works for me.

Thursday, 13 May 2010

Hibernate Mapping

Hibernate is such a cool product. I've been using it for a few years now, but I was a bit slow starting because I didn't connect the name with the function. I mean, something to do with going to sleep? Never mind, it is excellent. Just in case you're still wondering what it is: Hibernate lets you work with Java objects and looks after saving and fetching those objects to and from a relational database. But you probably knew that.

Naturally there are ways to map the database tables against the Java objects and, while this can be simple enough, sometimes it is complex and today was one of those times.

The problem I hit today concerned an existing (think legacy) database. I really wanted to avoid changes to the db structure, or even the data. The area of interest concerned two objects: ProdCatalog and ProdCatSection. ProdCatSection extends ProdCatalog.

These both map to the same table. Now, Hibernate handles this nicely enough. You define a discriminator column and two discriminator values, one for each object, so Hibernate knows how to instantiate the objects from the row. All the examples I found for this assumed a common super class that two concrete classes extend, but it works just fine when one concrete class extends another, as here.

I put this in the top of the ProdCatalog class definition:

@Table(name = "PROD_CATALOG", schema = "PRODUCT2")
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
public class ProdCatalog implements ProductCatalogMember {

In the ProdCatSection I only need this:

@Table(name = "PROD_CATALOG", schema = "PRODUCT2")
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
public class ProdCatSection extends ProdCatalog {

That would work... except that there is no recordType in the table. No, the guys who put this database together decided to keep the discriminator in another table, called PROD_CAT_STRUCTURE.

Even worse, this doesn't have one simple field. There is a column in PROD_CAT_STRUCTURE that may be null, indicating we have a ProdCatalog and not null indicates a ProdCatSection. There can be multiple records in the join as well.

Now, Hibernate does not seem to have a way of saying @DiscriminatorValue(value=null) and, anyway, I need a null case and an 'anything else' case.
The answer is to use a DiscriminatorFormula:

@DiscriminatorFormula(
    "case when exists (select 1 from PROD_CAT_STRUCTURE pcs " +
    "where pcs.CAT_ID = CAT_ID and pcs.CAT_ID_PARENT is null) " +
    "then 'ProdCatalog' else 'ProdCatSection' end")

The formula is just SQL (Oracle's dialect in this case) and this formula will return ProdCatalog or ProdCatSection depending on the result of the query. I could test this using my usual SQL tools to verify the query was valid (though I had to edit a literal into the second CAT_ID for that).

Now that the formula (rather than a DiscriminatorColumn) is being used we have the right values coming back and the DiscriminatorValue annotations can work. The names I chose can, of course, be anything at all, but it made sense to me to use the names of the objects I hoped to create. The actual requirement is that the result of the formula matches the DiscriminatorValue values. The formula only needs to go on the ProdCatalog class.

Part of the exercise of getting this working involved generating the classes using the Hibernate tools, specifically the Eclipse plugin. I have had trouble installing this in the past and went about it with a bit more determination than before. I'm using Eclipse 3.5 (Galileo). Invoking the update manager (help->install new software) seemed to work but, once it had finished, there was no sign of any Hibernate in my Eclipse workspace.

So I did the download and unzip approach. The version I used was 3.2.4.v200910211631N-H194-GA and that worked just fine.

Sunday, 9 May 2010

Serializing XML

I am trying to output an XML Document (org.w3c.dom.Document) from Java. I always forget how to do this; somehow it doesn't work with the way I think. So I googled and found this code:

TransformerFactory factory = TransformerFactory.newInstance();
Transformer transformer = factory.newTransformer();
DOMSource source = new DOMSource(document);
StreamResult result = new StreamResult(System.out);
transformer.transform(source, result);

But this gave me a NullPointerException down in the org.apache.xml.utils.TreeWalker.
Huh? I seemed to recall that there was a better way to do this than calling a dummy xsl transform, which is what the above code does.
A bit more searching and I found the method I've used previously:

XMLSerializer serializer = new XMLSerializer(System.out, new OutputFormat(document));
serializer.serialize(document);

And that works fine. It seems there is a bug in the tree walker class triggered when text nodes are empty. Not much detail but I found it here.
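For what it's worth, the JDK also has a third route, DOM Level 3 Load/Save, which avoids both the dummy transform and the Xerces-specific XMLSerializer. A minimal sketch (class and method names are mine):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.ls.DOMImplementationLS;
import org.w3c.dom.ls.LSSerializer;

public class DomToString {

    // Serialize a DOM Document to a String using DOM Level 3 Load/Save.
    static String serialize(Document document) {
        DOMImplementationLS ls =
                (DOMImplementationLS) document.getImplementation().getFeature("LS", "3.0");
        LSSerializer serializer = ls.createLSSerializer();
        return serializer.writeToString(document);
    }

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element root = doc.createElement("customer");
        doc.appendChild(root);
        // Prints the XML declaration followed by the customer element.
        System.out.println(serialize(doc));
    }
}
```

This stays within the standard API, so it should behave the same on any compliant JAXP implementation.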

Friday, 7 May 2010

Madura Objects part III

Last entry I explained how, when using Madura Objects, you generate your Java Beans from an XSD file. The resulting classes have code injected into them by the Madura Objects JAXB plugin so that they validate automatically when the setters are called.

Madura Objects injects other things as well, notably metadata. With simple Java objects you don't have much in the way of metadata. You have the field name, type and current value and that's about it. Now that Java has annotations you can attach more static information to the field. Madura Objects uses annotations to define the validation data as well as the label. It generates code like this:
    @Label(labelName = "xxx")
    @Description(name = "this is a description")
    @Regex(pattern = "a*b")
    @Length(minLength = "0", maxLength = "30")
    public String getName() {
        if (m_validationSession != null) {
            // ...generated validation code elided...
        }
        return name;
    }

The Regex and Length annotations here drive the validation engine in Madura Objects. You can see a little of the generated code in the getName() method. Other validation related annotations are Digits, Email, Range and BeanValidator. The last can specify some custom Java code you can inject into the validator engine.

But it is not just about validation. The Label annotation is, fairly obviously, something a UI might use to paint next to the rendered field. There are others which declare the field to be read only, inactive (suggesting it ought not to be displayed), required etc. Description can be used for documentation or help text.

Remember these are all specified in the XSD file that generated this Java. You don't have to edit them into the classes. In fact you really must not edit these files because next time you generate they will be overwritten. So how did we specify the label and description in the XSD?
<element name="name">
  <annotation>
    <appinfo>
      <md:Label labelName="xxx"/>
      <md:Description name="this is a description"/>
    </appinfo>
  </annotation>
  <simpleType>
    <restriction base="string">
      <maxLength value="30"/>
      <pattern value="a*b"/>
    </restriction>
  </simpleType>
</element>

I've used the annox tool to help out here. This is another JAXB plugin which grabs my reference in the XSD and turns it into an annotation. Annox comes from the same people who produced HyperJAXB3. In practice this is all looked after transparently during the generation of the Java objects.

Another important bit of metadata we need is a list of valid values. We can do this two ways. The first way is just the 'normal' way expected by JAXB by itself. This looks like this:
<element name="business" type="tns:IndustryType"/>
<xsd:simpleType name="IndustryType">
  <xsd:restriction base="xsd:string">
    <xsd:enumeration value="Ag"/>
    <xsd:enumeration value="fish"/>
    <xsd:enumeration value="finance"/>
  </xsd:restriction>
</xsd:simpleType>

In this example we defined a field called business and pointed it at a simple type definition with a list of values. Remember this is JAXB out-of-the-box. The resulting code is a Java class called IndustryType, which is an enum, and a field that looks like this:
public IndustryType getBusiness() {
    if (m_validationSession != null) {
        // ...generated validation code elided...
    }
    return business;
}

This, obviously, only accepts values from the enum which we know must be valid. The validation engine will check them anyway. This all works perfectly well as long as the list of values is quite static. To change the values you need to regenerate the Java.

If you need something more dynamic you can do this:
<element name="customerType">
  <annotation>
    <appinfo>
      <md:ChoiceList name="customerType"/>
    </appinfo>
  </annotation>
  <simpleType>
    <restriction base="string">
      <maxLength value="30"/>
    </restriction>
  </simpleType>
</element>

The important bit to note is the ChoiceList entry. Here is the generated code:
@ChoiceList(name = "customerType")
@Length(minLength = "0", maxLength = "30")
public String getCustomerType() {
    if (m_validationSession != null) {
        // ...generated validation code elided...
    }
    return customerType;
}

Probably no surprises there. But the validation engine knows about the ChoiceList and uses that instead of an enum. I won't go into the Spring wiring that I use to arrange this (there are full examples in the project) but you create an XML document containing the choice lists and their contents. The file can be regenerated any time, making the valid choices easily revisable.

We can access this metadata very easily because the Madura Objects plugin generates the code in the classes to support it. The plugin adds the interface to the generated objects and this has a getMetadata() method which returns an ObjectMetadata object. This is just a wrapper for a list of FieldMetadata objects, and they describe the current metadata for the fields.

Yes, you could do this yourself using reflection. But the getMetadata() method is simpler and it also supports dynamic metadata. Dynamic metadata, though, is a topic for next time.
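For comparison, here is roughly what the reflection approach looks like, using a made-up @Label annotation of my own rather than Madura's:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class MetadataByReflection {

    // A stand-in for a metadata annotation like @Label.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Label {
        String labelName();
    }

    static class Customer {
        private String name;

        @Label(labelName = "Customer name")
        public String getName() { return name; }
    }

    // Pull the label off a getter by reflection.
    static String labelOf(Class<?> clazz, String getter) throws Exception {
        Method method = clazz.getMethod(getter);
        Label label = method.getAnnotation(Label.class);
        return label == null ? null : label.labelName();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(labelOf(Customer.class, "getName")); // prints Customer name
    }
}
```

This only ever sees the static annotation values, which is exactly why a getMetadata() call that can reflect runtime state is more useful.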

Monday, 3 May 2010

Madura Objects part II

I kept the previous post short because I know I never read long blog posts. So here is some more about Madura Objects. Recall that we generate Java using JAXB/XJC and that is injected with plugins. Our resulting classes are just beans... with some extra stuff.

Now, let's assume the XSD file you fed into XJC defined an object called Customer which has a field called name. The definition for name looks like this:
<element name="name">
  <simpleType>
    <restriction base="string">
      <maxLength value="30"/>
      <pattern value="a*b"/>
    </restriction>
  </simpleType>
</element>

This is just ordinary XSD stuff and it says that this field must be no longer then 30 characters and specifies a regex pattern the contents must conform to.

Okay, now we build a little code....
    Customer customer = new Customer();
    validationSession.bind(customer);
    boolean exceptionFound = false;
    try {
        customer.setName("ttt"); // "ttt" does not match the a*b pattern
    } catch (ValidationException e) {
        exceptionFound = true;
    }

All I did here was instantiate the Customer object (Customer was generated by XJC) and then bind it to the validationSession I'd created earlier. Then I attempted to set a value in the name field. But the value does not pass the regex pattern so I get an exception. The bind call automatically binds every object attached to Customer, including ones I add after the bind, so we don't need to bind very often. After the bind there is actually no API to remember: just use the objects normally and handle exceptions.

If you instantiate an object, unpack it from XML using JAXB or fetch it from a database using Hibernate (all of which are easy to do with these generated objects) then you have to bind it. If you fetch from Hibernate make sure you don't use a lazy fetch because then the bind won't find the attached objects.

If you don't bind then the objects behave like normal beans, and this is sometimes just what you want.

Next time I'll blog about metadata.

Saturday, 24 April 2010

Announcing: Madura Objects

One of the reasons this blog has been a bit quiet is that I have been working furiously on Madura Objects. Now I have uploaded it to Github for all the world to see (if it wants).

What is it? Imagine you have a bunch of Java objects that represent real things like Customer, Order, Address, Product etc. We spend a lot of time plumbing these kinds of objects in our applications. Typically we write business logic to manage them, validate them, persist them etc. Using JAXB we have a way to serialise Java objects to and from XML, and I use this a lot when building web services.

Thanks to Hyperjaxb3 we can also take these same objects and persist them to Hibernate (and other places which I have not yet tried). This all simplifies the plumbing logic hugely. But I wanted to do a little more and the JAXB approach easily lends itself to transparently adding more 'stuff' to the Java objects.

I'll explain that a little more. You actually define your objects in XSD, pass them through the JAXB/XJC utility and that generates your Java classes. The resulting classes are just beans (empty constructor and getters and setters for each field). But, depending on what plugins you specified in the XJC command, there might be other stuff as well.

Now, for Madura Objects, what I did was add interceptors into the resulting setters. These do some logic based on the annotations on the relevant field and this allows them to do field validation. The annotations can be specified in various ways in the XSD file, so you get declarative validation completely transparent to your surrounding code.

This is different from Hibernate Validation (though I have kept the annotations as similar as possible). Hibernate Validation assumes you call the setters and then call a validator method. MaduraObjects does it automatically and it rejects the value without actually setting it. That means your objects are always clean and you don't have to write fixup logic to undo bad values.

There is somewhat more but I'll go into that next time. But you can check it out right now at Madura Objects.

Sunday, 11 April 2010

Water and luck.

Sometimes you get lucky.

It has been very dry here lately. Further north it is much worse but we average about 1500mm of rain a year normally, and January and February are our driest months. But it is now April and we've had a few showers over the last month but nothing much. Our pond is the lowest I have seen it since it was first dug (over 10 years ago). It is rain fed, not part of a stream. Our domestic water comes from a tank that is fed with water from our roof, just like all our neighbours. Our sheep use that water as well.

Our neighbours are buying water. A tanker truck delivers it and fills up their water tanks. But we are okay. Why? Because we got lucky.

Two reasons. First, we have a composting toilet which uses no water. Most households use 30% of their water flushing their toilet. We use 0%. We also have a front loading washing machine and that uses less than a top loading one. These were deliberate decisions, partly to conserve water but also because we liked the idea of using the output from the toilet as compost, and front loading washing machines give a better wash.

But where we really got lucky was in the house design. We had been looking at abbeys in southern France and that was what we wanted our house to look like. I certainly did not make the connection that these places are built for a dry climate. Their wide roofs are ideal for gathering water. So even a small shower delivers lots of water to our tank. I looked recently and the tank is still close enough to full, probably from the couple of showers we had a week ago.

I'd feel a lot more smug if I had planned this.

Friday, 9 April 2010

MaduraDocs 4.5 Released

I've just updated MaduraDocs. There are some minor cleanups in functionality but mostly tidy ups in the project structure making it easier to integrate into your ant projects. I'm now making use of ivyroundup which is a dedicated ivy repository. Previously I've used Maven repositories, which ivy can see okay, but they can be a bit inconsistent with naming, so it is easy to get duplicate jar files of different versions pulled into the project. ivyroundup is nicely 'moderated', and seems clean of such issues.

Before I say any more about ivyroundup I should say just a little about Apache Ivy. Apache Ivy manages the dependencies in your project. You supply a list of products, such as log4j or commons collections, that your project needs, plus one or more repositories to find them in. When requested, Ivy fetches the relevant jar files. It also knows that those products need other products, and so on, so it fetches those too. That's a very brief overview; it does more, but hopefully you get the idea. You can also easily build your own complete local ivy repository, which is normal for corporates who like that level of control. We do this at my day job.
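For anyone who has not seen one, a minimal ivy.xml looks something like this (the module name and revision numbers here are invented for illustration):

```xml
<ivy-module version="2.0">
    <info organisation="org.example" module="myapp"/>
    <dependencies>
        <!-- Ivy fetches these jars, plus everything they depend on -->
        <dependency org="log4j" name="log4j" rev="1.2.16"/>
        <dependency org="commons-collections" name="commons-collections" rev="3.2.1"/>
    </dependencies>
</ivy-module>
```

The transitive dependencies of each entry are resolved from the repository's own metadata, which is why the quality of that metadata matters so much.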

The typical repository holds the dependency references as well as the artifacts (jar files). But not necessarily. There's an option to only store the dependency references and a url to find them, which can be elsewhere. So the ivyroundup repository doesn't have to consume a vast amount of space because it is essentially a bunch of xml files. That means you can pull a copy down to your local machine and mess about with it to add any missing bits you need. Then you attach the patch file with your changes to the helpful project owner who does some QA and then adds it in.

I've got all the dependencies sorted out now and they are in the ivyroundup repository. So that means the MaduraDocs project is somewhat smaller as well as easy to use. You just include it in your ivy dependencies and, during your ant build, you invoke a small ant file that gets pulled down from ivy (so ivy is not restricted to jars). This is described in detail in the docs which are, of course, generated by MaduraDocs.

Since I now have things so tidy I have uploaded the project to the SVN repository in googlecode.

Edit: I've moved this to a maven project on github. So it doesn't use ivy now.

Thursday, 4 March 2010


I have just spent several days hunting down what should have been a trivial problem but, because it was well hidden, it turned into a monster. There was no need for this to happen, there are ways to avoid it.

The actual problem was in a third party EJB. It threw a NullPointerException which it turned into an EJBException and passed to the client application. The first problem is that the source we have for the EJB doesn't actually match what we run (obviously it is not open source), though we do have a copy of an older version which is near enough. That is not an intrinsic problem, but it can be if the thing that logs the exception loses the stack trace. I was getting a stack trace okay; it told me there was a null pointer, but it only showed down to the point where the exception was logged, not where it happened.

You'll appreciate, I'm sure, that stepping through the code without up to date source is awkward. My debug environment doesn't tell me local variable values if I don't have source, so I couldn't tell much at all. But, mostly by comparing the broken system with a similar, but working system, I was able to track down a minor difference in the way the database was set up.

It has made me think about throwing exceptions. Usually I just do the obvious and it works, but I notice PMD has been warning me about some of my exception handling and I am going to take more notice of its advice now. If this exception had been better reported I would have been able to find the problem faster.

This is really about failing safely, or at least helpfully. You're probably not going to handle every exception case that can happen, especially when you are fed stuff from external systems like databases and web services. So it is worth making sure the exceptions throw decent stack traces.
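One concrete habit that helps here: when you wrap an exception, pass the original as the cause so the real stack trace survives. A small sketch (class and method names are mine):

```java
public class ExceptionChaining {

    static void lowLevel() {
        throw new NullPointerException("the real problem");
    }

    // Bad: the original stack trace is thrown away.
    static void wrapBadly() {
        try {
            lowLevel();
        } catch (NullPointerException e) {
            throw new RuntimeException("something went wrong: " + e.getMessage());
        }
    }

    // Good: the cause is preserved and appears as "Caused by:" in the trace.
    static void wrapWell() {
        try {
            lowLevel();
        } catch (NullPointerException e) {
            throw new RuntimeException("something went wrong", e);
        }
    }

    public static void main(String[] args) {
        try {
            wrapWell();
        } catch (RuntimeException e) {
            // The wrapped exception still points at where it really happened.
            System.out.println(e.getCause().getClass().getSimpleName()); // prints NullPointerException
        }
    }
}
```

Had the EJB used the second form, the log would have shown where the null pointer actually occurred.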

Sunday, 28 February 2010


I have just released a small open-source project onto google code called MaduraConfiguration.

MaduraConfiguration is for those times when you want to distribute a packaged up application in a jar file, war file etc but you want to enable the deployer to configure it using an external file. Apache Commons Configuration (ACC) did a good job of this already so MaduraConfiguration uses that, but it adds the ability to wire (or inject) the configuration information directly into Spring beans. This is done with various factories which deliver various data types, including JDom Documents and List<String>.

In practice you just write Java beans as normal and then wire them just a little differently. Then you write an ACC XML file (ACC supports various formats but XML is the only one we actually use). If you are deploying on an application server you probably want to use a JNDI reference to find the file. This is easy with Spring. If you want you can wire it so that if the config file changes everything reloads, or you can trigger it using JMX. The reload is all part of ACC, except for the Spring bit.

Yes, I do know that you can define a properties file and tell Spring to use the values from it. But XML is richer than properties and the reload stuff is useful. There are also events triggered by ACC that you can add listeners for.

Edit: I've moved this to a maven project on github.

Saturday, 16 January 2010

Where Sheep May Safely Graze

I've just uploaded a collection of short stories and poems to Smashwords.

It is called Where Sheep May Safely Graze because one of the longer stories is called that.
There are several about cats, especially our (departed) cats and there's some NZ culture/history buried in there as well.

Excerpt from 'Catflap'

The cat stared up at him as though it knew all about the entry and he was being tiresome about it. Then it proceeded to have a bath, the kind of cello-playing bath one feels obliged to look away from unless one knows the cat well.

Most of them are very short and it is all free.

Where Sheep May Safely Graze

You'll find my other writings on smashwords too:
Summon Your Dragons
The White Fox

I have to put in a word for Smashwords here. These clever people are making it easy for people like me to publish fiction. Publishing conventionally is hard, very hard. Writers have to work at getting a publisher to take notice of them. But actually most of us just want to write. I have no expectations of getting wildly popular, I just want somewhere to put what I've written and people can find it if they want. Smashwords does a fine job of that, they automatically convert the work into formats readable by various devices such as eBook readers and phones (yes, people read this stuff on their smart phones). They also allow me to collect a small fee or give it away free.

My first book: Summon Your Dragons, is free. The sequel, The White Fox, has a tiny charge on it. This is mostly because people download things and don't necessarily read them. I figure if they like the first book and then PAY to download the second they probably actually like it, especially since they can read half of the second one for free. That encourages me to spend some time writing the third book.

Now, my stuff is okay if you like historical fantasy, but for a really good read check out my wife's writing. These are stories set in 19th century New Zealand and they are brilliant.

Start with Sentence of Marriage and you probably won't be able to put them down.

Wednesday, 6 January 2010

Open documents

I was given a file on a CD yesterday. It is a .pub file, so I assume it is a Microsoft Publisher file. Now, why would anyone think it a good idea to pass an MS Publisher file around to people?

MS Word files are a de facto standard of sorts, though you cannot assume everyone runs Word. But there are several other products which read Word files and MS provide a free reader so that is a reasonable enough approach. Of course the free reader doesn't run on Linux but that's okay, Open Office reads them. It doesn't handle some of the odd templates I come across but it is usable.

But MS Publisher doesn't have a reader, free or otherwise. You can install the whole product on a trial basis and, I think I have this right, it will still open files after the trial expires. Well... okay, but that doesn't help me on Linux, does it?

This is just a document. PDF, ASCII text, Word and HTML are all reasonable options here; even better is my own MaduraDocs format which, like HTML, can be viewed in a browser. But this is a .pub file, so I have to boot Windows, download a product I don't want to use and then figure out how to drive it.


Sunday, 3 January 2010

Autowiring Spring collections

The problem: an undefined number of classes that need to be added to a collection, which is then injected into another class. Assume the Spring Framework for the injection.

The obvious thing to do is this:

<bean id="ContainerBean" class="test.example.ContainerBean">
  <property name="myList">
    <list>
      <bean class="test.example.InjectedBean1"/>
      <bean class="test.example.InjectedBean2"/>
      <bean class="test.example.InjectedBean3"/>
    </list>
  </property>
</bean>

But that means every time I add a class I have to change the wiring.

Instead, using Spring 2.5 we can do this:

<context:component-scan base-package="test.example"/>

Yes, apart from the usual namespace declarations at the top, that's all I need in this file. The injected beans all have this in their declaration:

@Component
public class InjectedBean1 implements InjectedBeanInterface

and in the container bean we have this:

@Autowired
private List<InjectedBeanInterface> m_myList;

...along with the usual getter and setter, of course.

With that in place Spring happily finds the relevant classes and adds them to the collection. If I add one more injected bean then I just have to remember to add the @Component annotation to it and it will end up in the collection. I use this pattern quite a lot these days. Often other people are implementing the injected beans and I need to keep the list of things they have to remember as small as possible. This approach helps.

Spring is happy enough with a typed List or an array, so my bean implementers don't need to know what they are being injected into. In fact in my current project these injected beans are being generated by some other software. I can customise that to add the annotation automatically, but I don't have to figure out how to automatically edit the beans file.
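Put together, the pattern looks something like this. This is a sketch only, with invented method bodies around the class names used above, and it assumes Spring 2.5 or later on the classpath:

```java
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

// The shared interface the component scan collects on.
public interface InjectedBeanInterface {
    String getName();
}

// Each implementation only needs the @Component annotation.
@Component
class InjectedBean1 implements InjectedBeanInterface {
    public String getName() { return "bean1"; }
}

@Component
class InjectedBean2 implements InjectedBeanInterface {
    public String getName() { return "bean2"; }
}

// The container bean receives every InjectedBeanInterface Spring finds.
@Component
class ContainerBean {
    @Autowired
    private List<InjectedBeanInterface> m_myList;

    public List<InjectedBeanInterface> getMyList() {
        return m_myList;
    }
}
```

With `<context:component-scan base-package="test.example"/>` in the XML, Spring instantiates every @Component class in the package and fills the list; adding a new implementation needs no change to the wiring file.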

References which helped me get there:

There's a complete code example on my website