Sunday, 30 November 2014

Lubuntu!

I finally did it. After putting up with years of sniggers and snide remarks from me every time she ran a defrag or an anti-virus check on her ageing Windows XP machine (I didn't need to say ageing there, I guess), Mrs finally succumbed and moved to Linux.

The trigger was that I upgraded my machine, which was just as ancient, but I'd been running Ubuntu on it for years; lately I've been running Xubuntu, which is basically the same thing but with a different UI layer. It's lighter and suits the old machine a little better, plus I'm not a fan of the Unity interface on Ubuntu. My new machine would handle the extra load of Unity no problem, but I still prefer the Xubuntu UI. It is interesting that we get this choice with Linux.

So, with me sporting a newer, faster machine Mrs realised just how old and crappy her machine was getting. Little things like some of the keys playing up, the fact it tends to crash if we pull the power supply even though the battery is just fine, and so on.

But, of course, she knew that if she went from XP to Win8, which is what the new machines come with, it would be so different to get used to that she might as well switch to some Linux variant. I got the machine about a week ago and installed Lubuntu (another variation of Ubuntu, but this one has an even lighter UI and feels more like WinXP, which makes the transition easier).

When I do this I pop the hard drive and put a new one in. That way I can switch back in a moment if there are any warranty problems. I don't mess about with dual booting. If you're going to switch then switch, dammit. There were some minor issues with getting the machine to boot off the replacement drive, nothing much there. The job could have been done in a couple of hours all up but I needed to find a quiet point in her schedule to switch over.

That happened yesterday. She has one rarely used thing on VirtualBox running WinXP, another couple of things on Wine, both of them infrequently used, and the rest is mostly LibreOffice, Firefox and Thunderbird. She's had one glitch in LibreOffice relating to the behaviour of quotes, which took 5 minutes on Google to sort out.

Otherwise she's all good. And no more defrags and virus checks. The keys all work too, though there's a different keyboard to get used to.

Sunday, 5 October 2014

Secret Scripts: the antipattern

The Secret Script is something I've been aware of for a while and, although I'm fairly sure I didn't make it up, I can't find any reference to it on Google. So I'll need to explain what I mean rather than just supplying a link.

A Secret Script is some procedure, manual or automated, that you have to know about to complete a software build. It isn't intended to be secret as such; in fact it is one of those things that everyone knows, except the new guy, who can't get his build to run.

You can see how these things start, especially in a small team under pressure to deliver. Cut a quick ant script to create a directory and copy some files to it, hack the maven settings file, and make sure you set up that environment variable. Sure, now everything works, and it works every time, so we can forget about it.

If you are building open source projects, where you put the entire thing up in the cloud with the expectation that anyone in the world can pull it down and build it on their machine, you cannot have secret scripts. Not ever. If I pull down an open source project and it doesn't build first time that's a signal to delete it and find something else. But closed source projects have more leeway here, especially if the team is small and information can be easily passed around.

Note that I am not referring to documented steps. You can have a couple of dozen manual steps and that's fine, as long as they are in a ReadMe file that forms part of the project, ie anyone pulling down the software gets the ReadMe file in an easy-to-find directory. Those steps aren't secret, they are easily discoverable.

I said closed source projects have more leeway, but not very much more. Every new hire programmer needs to get past these secret scripts. You either have someone who knows sit with them to get them going, which is time consuming, or you leave them to figure it out for themselves, which is even more time consuming, not to mention discouraging for the new guy. There is always the danger that the people who know have forgotten they had to edit some config file two years ago anyway, and will be just as baffled as the new guy as to why his build fails.

And with modern tools there should be no need for this. If you're a Java shop you are probably using maven or maybe ant. On a new machine anyone ought to be able to type mvn or ant on a command line and get a working build. If they can't, the system is broken by secret scripts.

Secret scripts have a sort of mirror pattern which is, unfortunately, still an antipattern. I call these decoy scripts. A decoy script is a script or procedure that sits right there in plain sight, looking important and useful, but which does not actually work; everyone but the new guy knows to ignore it. The new guy tries it out, finds it doesn't work, and, certain that everyone else has it working, wastes hours trying to fix what has to be a problem local to his machine.

Here are some examples of decoy scripts:
  • Out of date instructions in the ReadMe file.
  • Unit tests that don't pass and aren't flagged with @Ignore (see the sketch below).
  • Old ant scripts that refer to invalid paths.
These things are fairly easily avoided, and fixing them can save a lot of time.
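
For the unit test case the flagging is cheap to do. Here is a minimal JUnit 4 sketch; the test name and the reason string are made up for illustration:

import org.junit.Ignore;
import org.junit.Test;

public class LegacyBuildTest {

    // A known-broken test: flag it so the new guy doesn't burn hours on it.
    @Ignore("Fails since the 2012 path change; see the issue tracker")
    @Test
    public void testOldAntPaths() {
        // the failing assertions live here
    }
}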

Saturday, 4 October 2014

My trousers caught fire!

There's the proof. Trousers and socks ablaze. Fortunately I was wearing other trousers at the time.

This time of year we burn the piles of prunings and other tree trimmings we've accumulated since the start of winter (it is spring here in NZ). We'd started it with some paper in a cardboard box, which I afterwards took to the back door to fetch some tools.

When the blaze was starting to die down we headed back and found the box was just ashes, and my gardening trousers, which lay beside it, were alight. Obviously the box had been too near that fire. Easily fixed and no damage done, except to a very old pair of trousers.

Could have been worse though.

Wednesday, 1 October 2014

Domestic Dramas

We got back from hols and, as you do, turned on the espresso machine. Bang!
What? Turned it off, then back on. No bang this time, but it wouldn't come on either. Checked the fuse box. Yep, one of the cutouts had flipped. I took the back off the espresso machine and wiggled some wires, one of which looked a bit blackened. Flicked the cutout back on and tried again. Bang! There were visible sparks this time. Okay, that's not good.

I disconnected the espresso machine altogether, called a service shop and arranged to take it in to get fixed. It's a wonderful machine, one of those Electras with a brass eagle on the top, and it does a good job; it's been doing it for nearly 20 years now.

Meanwhile I fetched the emergency backup espresso machine (yes, we do have an emergency espresso machine cos of a more minor problem 3 years ago) and set it up on the kitchen bench. Espressos followed.

Getting the main machine to the repair shop is a little tricky because it has a plumbed in water supply. I remembered from when we set this up that there's a shut off on the incoming pipe so the machine can be disconnected without getting water everywhere. It is hidden behind a panel in the bench below. I took off the panel, found the shut off and turned it off. Then I set to disconnecting the machine.
The moment I got the water supply line off the machine I got water sprayed in my face and pretty much everywhere. What? I shut off the water! I did! (This, among other things, was what I was yelling to Mrs as I asked her to go shut off the mains.)

That's complicated too. We gather water from our roof and store it in a tank, then a pump sends it to the house. The pump is smart, it detects the drop in pressure when we turn on a tap and cranks up to maintain it. So Mrs ran outside to the pump switch while I jammed my finger across the tube to stop it squirting. It seemed to take a long time.

Next step was to find out what was going on with the shut off that didn't. I got a wrench and took it off... and noticed water was still dribbling steadily from the pipe. Huh? We had turned off the pump. The kitchen tap was still on and not running, which proved it.

Except that this particular pipe end is lower than all the taps because it is under the bench. That means it is below the top of the water tank, so there's some pressure even without the pump running. I needed to turn off the tap on the tank. Leaving Mrs in charge of a bucket to catch the dribble, I went out and struggled with the tap, which hadn't been turned in at least 5 years and didn't feel inclined to turn now, until I started hitting it. That sorted out the dribble. So we were no longer in danger of flooding the kitchen any more than it was already.
I had enough bits and pieces to contrive a bung in the pipe and then I could turn the water on again. It held okay. Whew.

That's enough excitement for now, but I am going to take a long hard look at that shut off that didn't before I put this back together.

Sunday, 14 September 2014

Setting up Drools Execution Server

The Drools Execution server (6.2.0-Beta2) has a demo video which looks good, but I needed to know how to set it up on my own machine.

It is not quite obvious how to do this (but this is beta code, so there is a good excuse for that). Still, I thought I'd document how I did it here to save others time.

I'm running Xubuntu 14.04 but the operating system is unlikely to be relevant. Translate as necessary to Windows or OSX path names.

First you need an application server. I'm using Wildfly 8 (used to be called JBoss). Get Wildfly-8.1.0.Final.zip and unzip it into a directory we will call WILDFLY_HOME. You need to add a couple of user names before you start it up. Use the WILDFLY_HOME/bin/add-user.sh script to add two users we'll call user1 and user2. Make user1 an admin user and user2 an application user. Leave the groups field empty for user1 and (confusingly) make the group of user2 'admin'.
For the question after groups enter 'no'.

Now you can start Wildfly using WILDFLY_HOME/bin/standalone.sh, browse to http://127.0.0.1:9990 and log in as user1.

Download the two war files you'll need.
Drools Workbench (this is the Wildfly-specific version)
Drools Execution Server

Edit: I see those two links have gone dead. Here is how to find the right files. Go to this link (it may take a while to load). Now find the JBoss Releases repository. In the lower panel find org/kie/kie-drools-wb and org/kie/kie-server-services. Under each of those are various versions you can choose from. You want two war files with the same version, bearing in mind the workbench has a specific war file for each app server in each version.

Use the Wildfly Management console to load and enable the two war files.
Edit: it is a good idea to shorten the names when you load the war files (that's the second question during the Wildfly load); it means your URL will be shorter.

We're nearly done...

Log into this URL using user2
http://127.0.0.1:8080/kie-drools-wb-distribution-wars-6.2.0-20140912.025916-106-wildfly8

(or, if you shortened the name you might use http://127.0.0.1:8080/kie-drools-wb)

Okay, now you are in the demo. The demo in the video has some different files loaded but the ones you have now seem to work. You can certainly deploy them and you can certainly call them with a REST client.
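
If you want to check it from code rather than a browser REST client, here is a rough Java sketch that pings the server info endpoint with basic auth. The shortened context path (kie-server) and the /services/rest/server endpoint are assumptions based on the defaults, and the password is whatever you gave user2, so adjust all three:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import javax.xml.bind.DatatypeConverter;

public class PingKieServer {
    public static void main(String[] args) throws Exception {
        // Assumes the execution server war was shortened to 'kie-server'.
        URL url = new URL("http://127.0.0.1:8080/kie-server/services/rest/server");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // user2 is the application user created with add-user.sh.
        String credentials = DatatypeConverter.printBase64Binary("user2:password".getBytes("UTF-8"));
        conn.setRequestProperty("Authorization", "Basic " + credentials);
        conn.setRequestProperty("Accept", "application/xml");
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line); // expect an XML description of the server
        }
        in.close();
    }
}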



Saturday, 6 September 2014

People who say 'Well I think...'

Here I am listening to a radio broadcast of a panel discussion between several political candidates in our impending general election. It's a well run discussion (thanks to the skills of Wallace Chapman) and, unlike some similar events, everyone is behaving well, so we all get to learn a lot about who thinks what about what. This is good.

But what I'm more interested in here is not so much the specifics as the pattern they take. For example there is a question about what the voting age should be. Currently it is 18. I remember (just) when it was 21. Should it be higher? Should it be lower? What do you think?

This seems a good question, certainly worth asking. At 18 we can sign contracts, marry (actually we can do that at 16), vote, buy alcohol and join the army. We don't have any need to conscript soldiers just now, but when we have had such a need it has been tied to the voting age on the understanding that if you're not old enough to vote you're not old enough to be made to risk your life for your country.

So the panel tossed the question back and forth with answers starting with 'well I think...' and speculating that some younger people are old enough to vote and perhaps they should. Then others think it is about right or maybe could be higher, especially with alcohol purchasing.

What no one suggested is actually studying the people involved. Do we have any good data on how well informed 10 year olds actually are? 12 year olds? 14? 16? 18? Surely this is critical to setting the age. We know from other studies that IQ has been shifting upwards for years. This is called the Flynn Effect (Flynn is a New Zealand scientist, but the effect is observed world wide). So it is not unreasonable to suppose that our understanding of how younger people make decisions is out of date.

Step back a bit, though. This is really not about the specifics. It is about gathering data. It is not enough to base policy on 'well I think...', and it seems we do too much of this. What any of us 'think' about some issue is far less relevant than what the data says.

Another example just to round it out: we have a minimum wage in NZ. Some people think it should be raised, others want to scrap it. The latter tend to say that raising it will cost jobs. This is a 'well I think...' statement. It is very easy to show data that raising the minimum wage does not cost jobs by looking at what has happened when this was done elsewhere. I've yet to see a study referenced that shows it has cost jobs. Maybe there is one, but let's see it instead of having people say 'well I think...'

So watch out for people saying 'well I think...'. If you can, ask them why they think that. Press them for data, some reference to some properly conducted study that demonstrates what they are saying is true (or at least not false, which is subtly different but okay). If you want to pursue this further get hold of Karl Popper's The Open Society and its Enemies which is about some other things as well but he argues better than I can for evidence based policy. Popper wrote this while he was lecturing in Christchurch, NZ during WWII (as an Austrian Jew he needed to leave Europe, but his German heritage made him unwelcome in the armed forces on our side). So there is a local connection there too.

Saturday, 2 August 2014

Do we really want one law for all?

One Law for All. It's a good slogan and, like Motherhood and apple pie, pretty difficult to object to. Some of our politicians have started saying this. We have an election coming up here in New Zealand. The subtext of the slogan is that some people feel that Maori are getting a better-than-fair deal. Specifically two things:

We have a separate Maori electoral roll which elects its own MPs. Sounds very apartheid at first glance, but the first difference is that Maori themselves decide if they vote on the General roll with the rest of us, or on the Maori roll. And when I say 'decide' I mean each individual makes that choice, it isn't delegated to some representative body that is out of their control. If Maori don't want the roll they can just (individually) stop using it. The other difference is that the MPs are real MPs, with as much status as any other. So not apartheid.

The second thing is that there is consultation about various issues such as how land gets used, where motorways get built and so on. One of the objections to this is that such consultation takes up time, and time is money.

These two privileges get some people fired up and using the word 'unfair' and so on. I guess I'm in the demographic that gets fired up: white, middle class, middle aged, male. So why aren't I?

The answer is, quite simply, history.

If you were on the first boat load of people to arrive in this country and you were setting up a society for the first time, from scratch, 'One Law for All' is just obvious. But we aren't. Not by a long shot.

When the English took over New Zealand they did not do it by conquest. They signed a treaty with Maori. There are arguments over what that treaty says, not uncommon with legal documents, but it is pretty clear that there was a promise that Maori would be able to keep the rights they already had over the land. This is more complex than it might sound. In the UK the land is owned by 'the crown' and people actually own a 'title' to the land, which is a licence to use it rather than ownership. I've heard there are strange exceptions to that but it is generally the case. It is the same here in New Zealand. So the New Zealand land effectively became owned by Queen Victoria in 1840, with Maori now holding a title.

This allows the state to exercise compulsory purchase when, say, they want to build a motorway through your house, dig for oil under it or fly planes over it. We don't have a simple ownership right to our land, we have a title.

Okay, almost before the ink on the treaty was dry, boatloads of settlers began arriving from England and they wanted land. The history is a bit complicated but dirty deals were done, fighting broke out, troops from the UK were brought in to quell the rebellious natives and land was confiscated. It wasn't only confiscated; there were other, more subtle ways of prising Maori apart from their land. Remember the compulsory purchase? It was decided that Maori land did not need to be 'purchased' as such, it could be just taken if the state needed it. Since it was simpler to take Maori land rather than pay money for other land you can guess which was the preference. While the fighting was over fairly quickly, the compulsory 'purchase' arrangements continued well into last century.

Less formal was the casual prejudice that sidelined Maori economically. I grew up on a farm that was confiscated land and we had a Maori farm hand. I'm not sure my father and the farm hand gave much thought to the history, or even if they knew it. But the farm hand wanted to buy a car and he needed a loan. My father knew the bank manager well, so the farm hand talked to my father about it and my father approached the bank manager on his behalf. My father told me he explained to the manager that this farm hand was in steady work, was a reliable character and that he himself was prepared to guarantee the loan. The bank manager explained that this, then state owned, bank did not lend to Maori. So there was no loan. My father had a loan, but he was white.

The result was that all the best land got into the hands of white settlers who cleared it and farmed it and built our economy on it.

But the unfairness of this came back to haunt us eventually and late last century we started doing something about it. The first major compensation settlement was for Tainui, the people who used to own the rich Waikato area. They got some land back and cash. I remember at the time people pointing out that Tainui were magnanimous. The value of the compensation was nothing like what they had lost. Everyone understood that the only land that could be returned was land that had not passed into private hands, and there was no way the cash could make up for it. This has been the pattern for subsequent settlements. No one, at that point, suggested there ought to be one law for all. One law for all would see farmers turned off their lands and any government trying to make that happen would never survive another election.

Since then there have been moves to respond to this magnanimity. The Maori electorate seats were already there; they've been in place since 1867, shortly after the fighting, and they were brought in to ensure Maori could vote. In recent years the number has been expanded from 4 to 7, to more fairly reflect the number of people on the Maori electoral roll. Consultation with the local iwi (tribe) over various issues is commonplace, though not always comprehensive or even popular, but it is a move in the right direction.

We now get situations like this where a taniwha, or water monster, held up a highway construction for, it was claimed, 3 weeks while local Maori were consulted. Now, some people got very, very annoyed about this and talked up the cost of the project, the amount of time wasted and the 'ridiculous belief' in taniwha. I saw it as real progress. We often accommodate various 'ridiculous beliefs' such as the value of heritage and religious sites, and surely Maori, to whom we owe so much, ought to get similar accommodation.

But let's not get carried away about having one law for all. If we insist on that it will cost us dearly.

Tuesday, 29 July 2014

Connectivity

I live in the woods, so getting the internet here is not as straightforward as it is for most of you. A few months back, when I realised my neighbour had wired broadband, I tried getting it to my house too. The technician came out in a van and a hi-viz vest, poked at the phone line connection to the house, made that ticking sound with his tongue and drove off. Soon after I got a letter saying I was too far from the exchange to support wired broadband, oh and please send back the modem they had couriered me.

So, excitement over, I continued using my phone as a wifi hotspot, sharing its 3G connection. That works quite well but my plan is capped at 3GB. When I say capped that means no more data after that. Some plans allow you to pay something and put a bit more data on. Not this one. Capped means capped. It is actually an old plan and, based on the odd letter and phone call I get from Vodafone, they would like me to switch, but the only things they have to switch to have less data and more voice time. Since I almost always hit the end of my data and never run out of voice this is not attractive.

You may imagine I look at the wired broadband plans that offer 80GB caps (they're pretty well always capped in NZ) with envy. I can add to my 3GB by going to the library where I get 200MB a day, but I feel kind of 3rd world doing that. My favourite cafes offer unlimited but if I camp there all day I end up drinking too much coffee, though I tend to get all my updates done by making sure I breakfast out most weeks.

But when I get into something like installing new software, or developing new software (which inevitably needs me to pull down some new libraries), I can easily find my quota is blown. Checking G+ gobbles up about 50MB so I don't do that every day.

Mrs has a T-stick, a little USB 3G modem she plugs into her laptop. It will actually plug into a wireless modem we have, so it would be a lot like everyone else has, ie wifi all over the house connected to broadband. Her plan has a cap as well, an extendable cap. That's because it is with Telecom instead of Vodafone. So I got thinking: if I got another T-stick I'd get another 3GB. Extending Mrs' plan by another 3GB is more expensive than a separate one, and besides she doesn't quite trust me not to chew through all her quota (and she is right not to).

These T-sticks come trumpeting their support for Windows and Mac machines... but not Linux. So I expected to have to scratch around on the web and find out what to do. Mrs' T-stick doesn't work OOTB, I tried it. I believe it is possible and when I convince her to switch to Linux I'll need to figure that out. But my new T-stick (pictured... it's not a tampon) just worked. No config, nothing at all. It shows up as a wired connection. For anyone else trying this it is the E3531 T-stick and I have it running on Ubuntu 13.10. It has a little web server in it you can browse to if you need to change the config or send/receive TXT, or you can ignore it. Just browse to http://192.168.8.1

I'm not actually in the woods right now, I'm down in the city (Auckland) so I have it on my laptop, but when I get home I will plug it into the modem and use the house wifi. I probably need about 4GB/month so I can either top it up when I need to or I can use my phone, though I expect I will change the plan on the phone now, quite possibly I'll switch it over to Telecom since they seem to do a better job.

There's a second reason to switch the phone to Telecom. We don't have a land line connected. That's our choice, we used to have one, but we never used it because we have the mobiles. So when we had a big storm a while back the Vodafone network went down. The Telecom one stayed up. It meant Mrs could get online but with both phones on Vodafone we could not voice-call (and I had no internet). We didn't need to call anyone, but it might have been handy if we had an emergency. So it seems prudent to spread it around, especially since it is no longer vital to keep that 3GB plan.

With the house wifi providing internet I can use it like the rest of you do, ie multiple devices connecting to it, hang a USB drive off the modem for simple file exchange and backup and make better use of our wifi printer. My phone needs to be plugged into power to hotspot because that chews through the battery, so doing that less will be nice.

Monday, 9 June 2014

Winging it and the Peter Principle

This story from the Guardian came at me from multiple people who +1'd it or tweeted it or whatever. It's good, feels true etc. Read it now, I'll wait.
http://www.theguardian.com/news/oliver-burkeman-s-blog/2014/may/21/everyone-is-totally-just-winging-it?CMP=twt_gu

Like I said, it feels so true and at one level it certainly is. I was sitting at my laptop struggling with writing some tricky code, trying to figure out why something that ought to work wasn't working, and thinking 'yeah, I am totally winging it here', which was true except for the 'totally'.

We work on the edge of our expertise a lot of the time. For many of us that's where the fun is, the challenge. It makes us grow, or it tells us our limits or something. We don't even think about the things we do that are second nature.

This is related to the 'Peter Principle', the notion that we are all promoted to one level beyond our expertise, which is just another way of saying we work on the edge. The Peter Principle is about people being promoted until we find they are in a position they cannot perform in. They've been good up until now, but the latest rise up the corporate ladder was one too many. They stay there because the people who put them there are too gutless (or incompetent) to push them back down again.

Mostly this is rubbish. It assumes there is just one point where people switch from competence to incompetence and that this cannot be changed. I suggest that every new promotion has its challenges and each one takes time to work through, but what is really true is that we work on the edge of our expertise all the time. And it is true even for those of us who don't work in a corporate environment where 'promotion' is a foreign concept. We take on challenges because they are fun.

But are we totally winging it? Of course not.

Take some basics. I code using a laptop, mouse, second monitor etc. Do I have the slightest difficulty operating the mouse, getting it to point where I want and so on? No. It's not a trivial skill. Try teaching it to someone over 80 who hasn't used a mouse before. The development environment I prefer is Eclipse and I use it to write Java and C++ code. Navigating the file structures in that environment is something I do with hardly a thought. And, of course, those two languages have syntax which is as natural to me as speaking English. This isn't blowing a trumpet. Anyone who programs in this environment works this way.

A little while ago I reorganised my projects to build with maven instead of ant/ivy. I didn't know maven and I hate reading manuals. I generally read a little and try it, fall over, do a little more reading (just enough, I'm lazy) and get past that problem and so on. I was winging it the whole time. Now I use maven every day, and I don't think about it.

So it is true that we wing it all the time, because we like working on the edge, but we're only on that edge because we have a rump of expertise to be on the edge of. So winging it, sure, but not totally winging it.

Sunday, 1 June 2014

Madura Demos

I've re-deployed my demos. This is more interesting than it sounds, well I hope it is anyway. I've already posted about each of them. This is about the pain of getting them up there and running in the cloud.


Long ago I had them running on CloudFoundry and they worked fine, then the CloudFoundry business model changed and their free option went away. Since I make no money from this stuff the demo site has to be free, so I went to AppFog, who were running the same CloudFoundry software, and the demos lived there for a while... until the same thing happened.

Recently, after completing the Workflow project, I had some free time so I thought I'd try and get them going on Google Application Engine. But I gave up on that yesterday (yes, it was only yesterday) and loaded them onto OpenShift (Red Hat's cloud offering). They worked first time. It was so easy. I was surprised.

The reason I was surprised was that it was so very hard to do Google Application Engine. There were two main issues.

First, GAE doesn't like Logback, which is a commonly used logging mechanism and I use it everywhere (because commonly used means it should work everywhere, right?). GAE only likes JUL, and no one I know uses JUL. JUL (java.util.logging) is the native Java logging mechanism and it is the reason for products like Logback, ie people hate JUL and want something else.

But switching to JUL was not a big deal. I wouldn't want to have to work with it, but these demos are already debugged so that's mostly okay.

Next GAE needs lots of classes to be serializable so it can flick sessions between servers. I see the sense of this and I went off and made all my classes serializable. It didn't take very long, actually. Of course I tried it out using GAE's Eclipse plugin to make sure it was working and it was.

But when I deployed it for real it complained of more classes that needed to be serializable. However these aren't my classes, they're libraries from other people. Spring etc. I can't change them (well, not unless I want to maintain them ever after, and I don't). I suspect there is a solution to that particular problem but I'd noticed that the only way to find out any of this stuff is with the remote deploy. The local test told me nothing. This was true of the Logback issue too. So I realised this might go on forever and I bailed.

OpenShift, in contrast, was quite happy with Logback and didn't care about serialization. That may cost them something in efficiency, but it sure got up and running fast.

There seem to be two general approaches with OpenShift: using Git and using scp. The Git option assumes you store your project source on their Git repository and they build and deploy from there when you tell them. Sounds fine, but I already have a public Git repository at GitHub, and my maven build works just fine. I don't feel a pressing need to learn their build syntax etc, though it is likely easy enough.

The scp option is basically 'I have this war file, upload it and deploy it'. Great! That's what I did. Worked first time. I suspect the running applications are not as fast as they might be but for a free service I can live with that. I'll even listen to arguments that GAE's serialization requirements really help Google deliver a faster application. But working trumps speed and I have it working.

Madura Pizza Order Demo

I'd meant to post an entry about putting my demos on line so people can play with them and not have to build the software. Not everyone, it seems, is actually a software engineer :-). Then I realised I had never actually posted anything about the basic pizza order demo. So here it is.

The purpose is to show off the Madura software I've been working on for some time now. All of it can be found on my Github repositories, all open source and, with one exception, all of it free. The idea with Madura is that it makes use of rich business objects (Orders, Pizzas, Employees etc) to remove code from the application. These objects are very rich, in that they can include business rules that monitor what is going on. Most people do this with application logic and UI logic, which is harder to write and maintain than what we have here.

The demo needs to be run from Firefox. It also works on IE6 (I don't run Windows normally but I had an old version lying around). Don't use Chrome because that will show you the mobile UI, which is a different demo. I guess I should add: don't run it on your mobile device either. It works, but it isn't the demo I'm about to describe.

We are supposedly ordering a pizza (but don't worry, no pizza will be delivered). There are several pizza related products but the most interesting is the Configured Pizza, which dynamically adjusts the choices to be compatible with what has already been picked. The premise is that only some combinations of topping, base and size of pizza are allowed, and the Madura-driven UI only ever presents valid choices, dynamically altering the fields as necessary.

Click on the URL and log in as admin/admin. You should see something like this:
We'll configure a pizza in a moment, but first click on the Customer button.

Notice the four buttons at the bottom and that one of them, Dynamic, is disabled. Enter the name 'fred' (without the quotes) in the Name field and tab off it. The Dynamic button is now active. Change 'fred' to something else. The button is disabled again.

What is going on here? Do we have some custom javascript or custom java driving this? Actually no. That's what other people do.

There is a rule defined using Madura Rules which controls whether the button is disabled or not. This is the rule:

rule: Customer "dynamic" {
    if (name == "fred") {
        dynamic = true;
    }
}

Normally this kind of thing gets done in something like Java, and you'd write code to test if the name field was 'fred', and you'd also write code to test if it was not 'fred' and then set dynamic to false. One of the nice things about rules is that their converse is implied without having to write it, so this is all we need.
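
To make the contrast concrete, here is roughly the hand-rolled Java the rule replaces. The Customer class and the listener are illustrative stand-ins, not the actual demo code:

public class CustomerNameListener {

    // Minimal stand-in for the demo's rich Customer object.
    public static class Customer {
        private boolean dynamic;
        public void setDynamic(boolean dynamic) { this.dynamic = dynamic; }
        public boolean isDynamic() { return dynamic; }
    }

    private final Customer customer;

    public CustomerNameListener(Customer customer) {
        this.customer = customer;
    }

    // Called whenever the name field changes.
    public void nameChanged(String newName) {
        if ("fred".equals(newName)) {
            customer.setDynamic(true);   // the rule's 'then' clause
        } else {
            customer.setDynamic(false);  // the converse the rule gives you for free
        }
    }
}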

The next demo step is to enter something into the email field, eg 'abc'. Now you see an orange '!' beside the field. This is the usual way Vaadin indicates an error. Roll over it with the mouse and you will see

Failed email: label=email not a valid email address, attempted=abc 

which is the default error message for this kind of error. The email field requires an '@' sign so this is a simple validation error. There actually isn't a proper rule for this because it is too simple. We just add an Email annotation on the field and the rest happens. Naturally you can customise that error message and you can add I18n alternatives (which means it can show up in French etc if you need it to).
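
For reference, the annotation side is about this much work. The Customer shape below is an illustrative sketch (the real demo object is richer, and exactly where the annotation goes depends on how your objects are built), but the annotation itself is the real one from the validation engine:

import nz.co.senanque.validationengine.annotations.Email;

public class Customer {

    @Email // the only thing needed to get the '@' validation and its error message
    private String email;

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}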

Also notice that the Save button is now disabled. This happens automatically because there is an error. You don't even have to write a rule. When the error is cleared the Save button will enable.

Change the email field to something valid and click the Save button to get back to the home page. You have a list of products on the left hand side. Expand the Sides heading and click on Boneless Chicken.
There isn't any configuration for this product, just a price. But note that the message about the shopping cart now says it contains one item. This message is controlled by the following rules:

rule: Order "shoppingcartsize" {
    if (count(orderItems) > 0) {
        shoppingCartStatus = format("shopping.cart.status",count(orderItems));
    }
}

rule: Order "shoppingcartsize" {
    if (count(orderItems) == 0) {
        shoppingCartStatus = format("shopping.cart.status.empty",0);
    }
}


We could make this simpler if we were happy with saying 'Shopping cart contains 0 items', then we would only need one rule. But 'Shopping cart is Empty' looks nicer. The 'shopping.cart.status...' references are to a properties file that holds the actual messages we need, including the error messages mentioned above. The French messages are in a similar file, so if your user was French it would say "Le panier est vide" when the shopping cart is empty. To be clear, the language thing is just ordinary Java stuff, though it is rarely applied this well.
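
Since it is ordinary Java i18n, the lookup side amounts to a resource bundle. A minimal sketch, assuming the messages live in messages.properties and messages_fr.properties on the classpath (the actual bundle name Madura uses may differ):

import java.util.Locale;
import java.util.ResourceBundle;

public class MessageLookup {
    public static void main(String[] args) {
        // Fetch the bundle for the required locale; Java walks its usual
        // fallback chain if there is no exact match.
        ResourceBundle bundle = ResourceBundle.getBundle("messages", Locale.FRENCH);
        // With a messages_fr.properties present this prints "Le panier est vide".
        System.out.println(bundle.getString("shopping.cart.status.empty"));
    }
}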

Now press the Save to Order button to keep the product in the shopping cart. The Cancel button would remove it so don't press that. The next step is to expand Pizzas and pick Configured Pizza.
This is a more complex form. There is a date field which, like all the other fields, is a standard Vaadin widget. The amount at the bottom is zero until we pick enough options for the rules to work out a price.

But the first thing to note, which you cannot see very well on the form, is that this form is generated using exactly the same Java as the Boneless Chicken one. The only difference is that the object it maps to is different. This is part of the 'rich objects' thing. The objects themselves have enough information in them for the Java code to build us a form. So defining a new product doesn't necessarily involve writing more Java. It just needs the right object and the right rules.

Now we start configuring our pizza. Change the default Base to 'Puff' and pick Size='Medium'.
There is a new field called testing now and it has a red asterisk beside it, which is standard Vaadin for indicating a required field. The Save to Order button is disabled because we now have a required field that is not filled in. Finally you have an amount calculated. This is the relevant rule:

rule: Pizza "p3" {
    if (size == "Medium") {
        activate(testing);
        require(testing);
        amount = 15;
    }
}


activate(testing) makes the field visible and require(testing) makes it required. There are no rules to disable the Save to Order, that is handled in the same way as the error in the Email field, ie automatically.

But there is more. Check the options for Topping. They should be Italian and Spanish. Now clear the Size (and notice the testing field vanishes and the button re-enables). Check the options for Topping again. There are three others. The lists of valid values are being dynamically constrained depending on what else is picked. This is independent of the order they are picked, so if you pick Topping first then Size will be constrained and if you pick Size first then Topping will be constrained.

The rules behind this aren't convenient to express in the if/then format we've seen so far, and they are awful in an ordinary language like Java. Instead we use decision tables. The format of the table in this case is XML but it could be a database table or, if you're prepared to write some code, something else. I won't bore you with the XML here. Conceptually the table looks like this:

topping      size
Seafood      Small
Italian      Medium
Spanish      Medium
Hawaiian     Large
Greek        Large
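
To pin down what 'constrained' means, here is a toy Java rendering of the same table. This is purely illustrative; the real engine reads the XML table and constrains both fields symmetrically:

import java.util.ArrayList;
import java.util.List;

public class PizzaCombos {

    // The decision table above, as simple (topping, size) pairs.
    static final String[][] TABLE = {
        {"Seafood", "Small"}, {"Italian", "Medium"}, {"Spanish", "Medium"},
        {"Hawaiian", "Large"}, {"Greek", "Large"},
    };

    // Valid toppings given a size pick (null means size not picked yet).
    static List<String> validToppings(String size) {
        List<String> result = new ArrayList<String>();
        for (String[] row : TABLE) {
            if (size == null || row[1].equals(size)) {
                result.add(row[0]);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(validToppings("Medium")); // [Italian, Spanish]
        System.out.println(validToppings(null));     // all five toppings
    }
}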

Pick some valid combination, press Save to Order then press Checkout.
This just lists the resulting order items and their prices with a total. The total is calculated using a rule:

formula: Order "sum" {
    amount = sum(orderItems.amount);
}


This is fired whenever we add or remove an item from the orderItems list, or if we change the amount of something already in there.

Part of the richness of the objects is that they include permissions on the fields. For example the admin user you are logged in as has all permissions. Another user operator/operator has slightly less capability. For example he doesn't have access to the address field of the customer so when he goes to that form it doesn't show up. There is no code to write to handle that, it is just part of the object definition. Similarly if you type 'fred' into the name field the Dynamic button doesn't enable. Operator doesn't have permission to use the Dynamic button, so regardless of the value of the name it won't enable even if the rule tries to enable it. Declared permissions trump rule permissions.

All of this, apart from making the application a whole lot simpler, takes us to an interesting question. Since the business objects contain all the logic, how can the UI know what is going on?

It does it by knowing only that the objects are rich: it can query them for labels, for the currently valid picks (the pizza sizes and toppings), for whether fields are required or not, and so on. It does not ever know anything about the rules, including the simple validation things like the Email validation. And this actually makes it brilliant when we want to implement another user interface.

Usually when a business decides to implement a new interface, such as something that runs on a mobile platform, people discover they've implemented a lot of business logic in their UI and they have to rewrite it for the new system. Then they have to keep the two in sync, doubling the maintenance effort. But we don't have to do any of that. We just write the new UI and leave the business rules where they are.

To prove the concept I went ahead and wrote another Vaadin UI for the pizza order. If you browse to the same URL with your iPad or Android phone or a Chrome browser you'll see it. The login is a bit fiddly, you may need to press the user and password fields to activate them, depending on your device. They are over on the right.


This mobile demo is a bit simpler, it just does the configured pizza so there's no need for a blow-by-blow walk through. You know enough now to try picking different options and see what happens. It ought to be much the same as before.


We said it before but it is worth saying again: although the UI is different, all the rules and objects are the same. The only thing we had to change was the UI code. And we do mean the bare UI. We did not have to write code to restrict the choices based on earlier selections, nor did we have to write any code to track the total order or disable the submit button (actually the Log Out button here) until all the required fields are complete. All of those things are managed behind the UI with the same code as before. This shows the power of keeping the business logic clearly separated from the UI, and Madura is clearly an effective way to do that.

Wednesday, 21 May 2014

Deploying an Eclipse Update Site

I have been working on an Eclipse Plugin to support people who use my rules engine and workflow engine. These both require you to specify stuff in a text file, rules in one case and process definitions in the other. There is syntax to get right and the plugin provides a helpful editor that tells you about syntax errors, suggests things you might need to enter, colours keywords etc. It isn't nearly as smart as the Java editor in Eclipse but it should help.

Some people might be surprised I've done it this way rather than as a pointy-clicky user interface that non-programmers can use to define their workflows and rules. But I have been around this stuff a while now and I have yet to see a non-programmer actually write any rules or any workflow. Instead I have seen pointy-clicky user interfaces that are too dumb to actually do the job, because some things are just too hard to make that easy.

Also, even if non-programmers did get directly involved, they would have to learn how to handle issues like version control and rollout which would mean they would turn into... programmers.

So these products accept reality and make it as easy as possible for programmers, especially Java programmers, to work with them. Hence the need for an Eclipse plugin that helps with the editing. I use Eclipse all the time, have done for years, partly because of the way people extend it with plugins. There are alternatives to Eclipse but one plugin is enough for now.

Actually writing the plugin is pretty simple. It took less than a week to throw it together. The writing and testing part just needs one Eclipse plugin project and you write the code and tell it to run. That launches a second copy of Eclipse with the plugin installed and you try it out, step through with the debugger etc. Easy stuff.

Deploying it however, that was much more complicated.

The way the people who designed the plugin architecture would have me do it is to create several different projects which are interrelated. I need one for the plugin, of course, another for the 'feature' which contains the plugin, and another for the 'update site' which builds the repository other people can use to download from. If I use Tycho, which is what I'm supposed to do if I want to build with Maven, then I need another couple of projects as well. That seems too complicated, at least for a single plugin project. If I had several plugins I wanted to roll into one site then maybe; there would never be more than 3 'overhead projects'. But... not this time.

Instead I cooked up an Ant script which builds everything in one project. That should make it very simple for anyone who wants to pull my source code from Github and try building it. One complication is that the build.xml file (which is what Ant looks for by default) kept getting deleted by the Eclipse wizards so I changed its name to build1.xml. The file is a bit messy still but it does run okay and it delivers an update site in the 'site' subdirectory.

That means I can commit the site directory and use the Github project as its own public update site. So people who don't care about the source code and just want to install the plugin binaries into their copy of Eclipse can point to there and get it. It took me ages to work that out. I was expecting to deploy it into Sonatype, which is where I deploy all my other binaries, but they are Maven binaries and Eclipse update sites have a different structure. Fair enough, but there were hints around that it was possible which made me keep looking too long. Anyway, this approach is easy and seems to work fine.

Thursday, 1 May 2014

Madura Workflow

Here's what I have been working on recently: Madura Workflow.

At this point your eyes glaze over, of course, but let me assure you that this stuff is really neat, even though you almost certainly don't need it.

Over the decades I've come across quite a few Workflow products; sometimes they're called Business Process Engines and such like. There may be a problem with the specific definitions that I've never got to the bottom of; anyway, I was never happy with them. So here's my effort.

You use workflow when you're building the kind of computer application that needs multiple tasks done by different users, some of them external users, at different times, possibly over weeks or months. Maybe you need to handle a complex insurance application which needs several actuaries to look at different aspects of it, you might want to send parts of it out to external consultants and get a response back, some of those might be automated. The application's route through your system might need decisions made which influence its path. For example if some amount is larger than a cutoff point the application may have to go to a supervisor who is authorised to handle it rather than the lowly minion it would go to by default. It might need to send a credit check request off to the credit reference agency who have an automated response system, and they charge for each request so you want to avoid sending to them if you don't need to. Choices again.

The process definition works through the steps, but under that there are tightly integrated rules provided by my Madura Rules Engine. This means that you don't have to write code to make it all work. The rules fire automatically (tightly integrated, see?) and stuff just happens. This ensures the process definition is not polluted by a load of non-relevant logic. In other products like this they often are polluted, you see, and it means business rules are spread all over the place in the system, making it hard to find what you have to change. There are validation rules, process rules, UI handling rules and so on. In this they are all in Madura Rules.

We can send Web Service messages out to external services and process the responses. Again this is done with no code. We use Spring Integration to handle this at the moment, though the connection point to that is very thin and it could be replaced by something similar. Anyway, it means that, unlike many workflow systems, the process definition is not polluted by details of the messaging.

When you have a task that needs user interaction you can present a form generated from the data structure (again no code required) for the user to complete. Naturally the form is connected to an object that is monitored by Madura Rules. So you get all the validation and error handling etc you could want. You can, if you want something more specific, write some Java+Vaadin code for a specific form.

You'll gather by now that implementing a process is pretty simple because it is detached from the generally nasty business of defining application logic, UI code and messaging. That's the idea but there's another trick here too. In practice these kinds of coordinated processes can take weeks or months to pass through a business system. They might even get passed back and forth between two or more people several times. What happens during that time if you want to change the process definition, or the rules, or the form or the messages?

Obviously I wouldn't raise the issue unless I had good news. Madura Bundle allows us to package all of that stuff into a separate bundle file that is hot-swappable without having to restart the server(s). Existing process instances remember what bundle they started with and they keep using that. Newly launched processes use the latest bundle. Easy. So, for example, if that cut-off point mentioned above needed to change you could just regenerate the bundle with a new value.

If you've read this far you might want to take a look at www.madurasoftware.com. All this stuff is open source and, except for commercial implementations of the rules engine, entirely free.

There's a UI which is fairly complete based on Vaadin, but the underlying workflow library is agnostic to the UI technology, so if you prefer something else go for it. I already said the messaging platform, Spring Integration, can be switched for something else and actually even Madura Rules could be replaced because it is implemented as a plugin to Madura Objects and you could replace it with another plugin.

Monday, 7 April 2014

Strange Java issue with reflection and statics

Yesterday I found myself struggling with an odd Java problem. I suspect it is a Java bug, though I have a deeply, deeply ingrained resistance to blaming anything on my environment. I can hear university lecturers and early employers pointing out 'it is always you' when I naively suggested that possibility years ago. But this just may be the exception.

I'm using reflection to discover the data type of the first parameter on this method:

public void init(Email annotation, PropertyMetadata propertyMetadata)

In this case Email is actually an annotation rather than an ordinary class. It has a full type of

nz.co.senanque.validationengine.annotations.Email

But when I locate the init method using reflection and get the type of the first parameter like this:

Class p = (Class)method.getParameterTypes()[0];

the type I get is

java.lang.annotation.Annotation

rather than

nz.co.senanque.validationengine.annotations.Email

and this causes my validation engine to ignore validating email fields. I have other kinds of validation (eg Length) which work perfectly well; the type for Length is returned correctly, no problem. So I went over the code to see what the difference was. The good news is I eventually found it, the bad news is it doesn't really make a lot of sense.

The init method shown above lives in a class called EmailValidator. Again, I have a list of these such as LengthValidator, RegexValidator and so on. They all look the same, except that EmailValidator misbehaves. The one difference is that EmailValidator has some static Strings defined. It looks like this:

public class EmailValidator implements FieldValidator
{
    private static String ATOM = "[^\\x00-\\x1F^\\(^\\)^\\<^\\>^\\@^\\,^\\;^\\:^\\\\^\\\"^\\.^\\[^\\]^\\s]";
    private static String DOMAIN = "(" + ATOM + "+(\\." + ATOM + "+)*";
    private static String IP_DOMAIN = "\\[[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\]";
...


See? Just ordinary static Strings. They don't seem to have any relationship to the init method. But they do. If I make them non-static (ie remove the 'static' from the definition) then... it all comes right. The datatype comes back as

nz.co.senanque.validationengine.annotations.Email
 
as it should. There is no downside to making the statics into ordinary fields because these classes are singletons, instantiated only once. And, of course, the validation now works.
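
If you want to poke at this yourself, here is a minimal self-contained probe along the same lines. It is a sketch, not a guaranteed reproduction of the behaviour on your JDK, and there is one known source of this symptom worth ruling out: a generic interface makes the compiler emit a synthetic 'bridge' method whose parameter type erases to the bound (eg Annotation), and getMethods() returns the bridge alongside the real method. Method.isBridge() distinguishes them:

import java.lang.reflect.Method;

public class ReflectionProbe {

    // Cut-down stand-in for nz.co.senanque.validationengine.annotations.Email.
    @interface Email {}

    public static class EmailValidator {
        private static String ATOM = "[^\\s]"; // the statics under suspicion
        public void init(Email annotation, Object propertyMetadata) {}
    }

    public static void main(String[] args) {
        for (Method method : EmailValidator.class.getMethods()) {
            if (method.getName().equals("init")) {
                // Print what reflection reports for the first parameter,
                // and whether this is a compiler-generated bridge method.
                System.out.println(method.getParameterTypes()[0].getName()
                        + " bridge=" + method.isBridge());
            }
        }
    }
}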

The result, when I edit the '@' out of Amy's email address and tab off the field, shows the error signal and disables the 'save' button, which is what is supposed to happen if there is an error. When I roll the mouse over the error signal it pops up with more detail. The totally cool thing about this is that all I need to do to add validation to the field is annotate it with @Email; everything else just happens (as long as those static fields are changed).

Friday, 4 April 2014

The End of Windows XP

Microsoft have announced they will stop supporting Windows XP around now. Apparently there is still a lot of it about. It was a pretty good operating system in that it was stable and didn't use too much resource. So people hung onto it (including Mrs).
It won't suddenly stop working, of course, but there won't be any new patches to plug security holes that inevitably get found and exploited. That's really what 'support' means. And I'm listening to a discussion on the radio about your options if you are still running it.
The obvious thing to do is pay a couple of hundred dollars to MS and get a copy of Windows 8, install that (it will be a clean install, not an upgrade) and learn to use a whole new interface. Except that Windows 8 probably won't run on your old computer because W8 needs more resources (memory, disk space, CPU). Your old 3rd party software will run okay though, probably, if there is enough resource.
What they didn't mention on the radio is another option: Linux.
You should consider Linux and, specifically, if you are running on an older machine, you should look at Lubuntu. Here is why:
  • It will run on your existing machine, you don't need to upgrade your hardware. I'm running it just fine on a machine I bought in 2005.
  • It looks enough like XP to make you feel comfortable, probably more like it than Windows 8 does.
  • It will probably run your existing 3rd party software. No guarantees, but Word and Excel run just fine under a compatibility product called Wine. Setup is simple.
  • It is free. This point would be unimportant if it did not work, but it does work, so I'll say it again: FREE (as in beer). You just download it. You don't need your credit card.
  • Support is not going away and there is a vibrant community of helpful people. I've never yet had to ask anything, a quick Google always finds someone else with my question, and the answer.
Yes, you will have to do a complete re-install to get there, but you were up for that anyway. Linux is seriously simple to install these days, at least as easy as Windows and it happily auto-detects your hardware and sets it up without any fuss.
Lubuntu is one of many 'packaging' options for Linux and you can find lots of others, but Lubuntu was specifically designed to look more like XP to the user and to be lighter weight, ie to run on old machines.
You can even give it a quick try out. Download the CD image from here and burn it to a CD, then boot from the CD. The CD gives you the option of running stand-alone (without touching your disk drive, it is all in memory) or installing a system on your disk. Use option one to give it a try.
When you decide you like what you see you can install it alongside your Windows system, but you are better off putting in a new disk and installing onto that. The reason I recommend this is so that you absolutely know that you aren't going to press the wrong button somewhere and overwrite your existing system.
You can then put your old drive into a USB enclosure and copy your data across to the new drive.
All this is no more hassle than installing Windows 8, and far less if it saves you a hardware upgrade.

Wednesday, 19 March 2014

JMX, Tomcat and VisualVM

I've spent most of today wrestling with this and it ought to have been easier. Everywhere I looked for instructions there were lots of steps I didn't need, and almost all of them missed one vital step.

So what am I trying to do? Java has a feature called JMX which allows us to expose parts of our applications to the outside world for the purposes of monitoring and control. For example, I have a small lock management system. Although it never goes wrong, and never will, of course, it seems prudent to expose a way for a sysadmin to go kill a lock that has been left in place by mistake. JMX exposes 'MBeans', which are essentially Java classes whose methods JMX lets me call remotely.

The environment: Tomcat 7, JDK7, VisualVM (bundled with JDK)

Tomcat is my app server and it contains my application code. It has JMX services built in. VisualVM is my client; it gives me a UI I can do the monitor/control stuff from. In addition I am using Spring 3.2.6 in my application because it has code to simplify exposing the MBeans.

Spring used to have a separate module for JMX called spring-jmx, but I noticed it has not been updated since version 2. They've rolled the JMX code into spring-context, and I already have that library in my maven dependencies, so that's fine.
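
For reference, the dependency in the pom looks something like this (version matching the 3.2.6 release mentioned above):

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>3.2.6.RELEASE</version>
</dependency>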

I added the following code to my Spring configuration file:

<bean id="simpleLockerJMX" class="nz.co.senanque.locking.simple.SimpleLockerJMX" />

<bean class="org.springframework.jmx.export.MBeanExporter" lazy-init="false">
    <property name="beans">
        <map>
            <entry key="bean:name=simpleLockerJMX" value-ref="simpleLockerJMX" />
        </map>
    </property>
</bean>

The first bean, simpleLockerJMX, is the MBean I want to expose through JMX. The second one tells Spring that this is what I want to do. They do make this very easy. The simpleLockerJMX bean doesn't need to know it is an MBean; it is just a simple Java class. There are many, many posts around that make this more complicated, including in the Spring docs, but all I need here is enough to prove the concept, and this works. It is, I think, limited to the local machine and has no security (other than being limited to the local machine, of course). Those options can be added if you want more complexity.

Now to tell Tomcat we want it to do JMX. This is done by adding these to the catalina.sh file:

 -Dcom.sun.management.jmxremote
 -Dcom.sun.management.jmxremote.port=8090
 -Dcom.sun.management.jmxremote.ssl=false
 -Dcom.sun.management.jmxremote.authenticate=false
 -Dcom.sun.management.jmxremote.hostname=localhost

They get added to the CATALINA_OPTS variable. Adjust the port to one that is available on your system. If you're running Tomcat under Eclipse then add those entries to the VM Arguments in the Arguments tab of the Tomcat launcher. Also edit your tomcat-users.xml file to grant one of your users the role: manager-jmx.
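
A minimal sketch of that tomcat-users.xml entry (the username and password here are invented placeholders):

<tomcat-users>
  <role rolename="manager-jmx"/>
  <user username="admin" password="secret" roles="manager-jmx"/>
</tomcat-users>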

Now you can start Tomcat and it should be exposing the beans. You can check by logging into the Manager app in Tomcat (which is not there if you're running under Eclipse, so start it stand-alone for that step); there's a way to display the exposed beans. This is using an internal view of things, though, not quite the same as exposing them to the outside. Here is what you do:
  1. Browse to http://localhost:8080/manager/jmxproxy/
  2. log in as the user who has manager-jmx
You should see a fairly crude dump of the exposed MBeans, including the new one.
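
If you want to narrow the dump down to a single bean, the jmxproxy servlet also takes a query parameter; something like this, using the ObjectName from the Spring configuration above:

  http://localhost:8080/manager/jmxproxy/?qry=bean:name=simpleLockerJMX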

Next step is to start VisualVM. This is a utility that comes with the JDK. There is an older, similar utility called jconsole, which VisualVM supersedes. Run it by typing jvisualvm on a command line.

The first thing you must do once you start VisualVM is install the MBean plugin. I wasted hours by missing this step. Go to Tools > Plugins, then the Available Plugins tab, check the VisualVM-MBeans plugin and follow the short install procedure.

Then you should see something like this: [screenshot: the new MBean visible in VisualVM's MBeans tab]

There are more methods exposed on the class than I'd like, but we are definitely seeing it in VisualVM, so I call it working.

To reduce the excess exposure of the MBean I added Java annotations like this:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jmx.export.annotation.ManagedAttribute;
import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedOperationParameter;
import org.springframework.jmx.export.annotation.ManagedOperationParameters;
import org.springframework.jmx.export.annotation.ManagedResource;

@ManagedResource(objectName = "nz.co.senanque.locking:name=simpleLocker",
        description = "manager for Simple Lock Factory")
public class SimpleLockerJMX {

    @Autowired private SimpleLockFactory m_simpleLocker;

    public SimpleLockFactory getSimpleLocker() {
        return m_simpleLocker;
    }
    public void setSimpleLocker(SimpleLockFactory simpleLocker) {
        m_simpleLocker = simpleLocker;
    }

    @ManagedAttribute(description = "The current locks")
    public String getDisplayLocks() {
        return m_simpleLocker.toString();
    }
    public void setDisplayLocks(String s) {
        // deliberately a no-op: the attribute is effectively read-only
    }

    @ManagedOperation
    @ManagedOperationParameters({
        @ManagedOperationParameter(name = "lockName",
                description = "Name of the lock to kill")
    })
    public void killLock(String lockName) {
        m_simpleLocker.unlock(lockName);
    }
}

This bean just delegates to another bean, the SimpleLockFactory, to do what we want. That bean is not an MBean, though it is a Spring bean, so it is not exposed through JMX. I added the @Managed... annotations to the SimpleLockerJMX I had before and changed the Spring configuration a little:
    <bean id="simpleLockerJMX"
     class="nz.co.senanque.locking.simple.SimpleLockerJMX" />
    <context:mbean-export/>

And that is all I need. The result in VisualVM is that just getDisplayLocks and killLock are visible, which is what I want. If I annotate any other classes like this one they will be picked up automatically by Spring. It does mean I have Spring annotations in my Java, which makes it dependent on Spring, but I have Spring dependencies in other places already.
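
One small gotcha worth mentioning: for <context:mbean-export/> to parse, the context namespace has to be declared on the beans element of the configuration file. Something like this, using the standard Spring 3.2 schema locations:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.2.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context-3.2.xsd">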

This was all just fine until I added another JMX bean. It looks more or less like the one above, but with a different delegate bean injected. And that doesn't work at all. The stack trace is confusing, but it seems to be a problem with Spring's initialization: it is as if the MBean exporter requires all dependent beans to be initialized before the MBean can be completed. In this case they aren't, though it isn't clear to me why. I got around it by deferring the injection of the dependent bean. The code looks like this:
    @ManagedOperation
    public boolean isFrozen() {
        return getExecutor().isFrozen();
    }
    private Executor getExecutor() {
        if (m_executor == null) {
            m_executor = (Executor)m_beanFactory.getBean("executor");
        }
        return m_executor;
    }

This makes the class even more dependent on Spring, of course. But it does work.
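
For completeness: m_executor and m_beanFactory have to be declared and populated somewhere. The post doesn't show that wiring, so here is one plausible way to do it (an assumption on my part, not necessarily what the original class looks like; the class name and object name are invented, and Executor is the application's own class):

import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;
import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedResource;

@ManagedResource(objectName = "nz.co.senanque.locking:name=executorJMX")
public class ExecutorJMX implements BeanFactoryAware {

    private BeanFactory m_beanFactory;
    private Executor m_executor;

    // Spring calls this during wiring; keep the factory for later lookups
    public void setBeanFactory(BeanFactory beanFactory) {
        m_beanFactory = beanFactory;
    }

    @ManagedOperation
    public boolean isFrozen() {
        return getExecutor().isFrozen();
    }

    // defer fetching the delegate until first use, sidestepping the
    // initialization-order problem described above
    private Executor getExecutor() {
        if (m_executor == null) {
            m_executor = (Executor) m_beanFactory.getBean("executor");
        }
        return m_executor;
    }
}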

Wednesday, 12 March 2014

Ash Wednesday in the South

Ash Wednesday has just come and gone again this year. It is one of those things that us Anglicans have a mixed commitment to. Some of us do the whole ashes-on-the-forehead thing, some of us let it go by, and possibly some of us wonder what the point is.

For those of you who don't know, Ash Wednesday marks the beginning of Lent, which is a time we're supposed to hold back on the material things of life and take some time to build up our spiritual side so we can better appreciate Easter, which comes at the end of Lent. Anglicans don't have strict rules around what we do in Lent, so some of us get into fasting etc and some of us don't and no one minds. The ashes on the forehead are supposed to come from the little palm crosses they pass around on Palm Sunday (the Sunday before Easter), which we've kept all year and which get burned on Ash Wednesday. I knew this well as a kid, but I never did manage to keep track of my palm cross for a whole year. Somehow by the next Ash Wednesday it had gone missing. Still, it is a nice idea if you can do it.

There's a point to it all as well: the ashes are a reminder that we are all going to die (ashes to ashes), so this life is all temporary. In pre-Christian times, Roman generals enjoying their triumphal march into Rome after victories in far-off lands would have someone next to them whispering from time to time 'remember, one day you will die'. It was to keep them from getting too up themselves. There may be a connection with the ashes, though maybe not.

In the old days in the North, the subsistence farmers were nearing the end of winter and their stores were getting low. It was a really good time to cut back on food, but they knew spring was not far off and something to look forward to. Weaving this notion into the Christian story made good sense even though, it should be noted, the Christian story doesn't quite fit it. Jesus fasted for 40 days, which is supposed to be the Lent period, but he did it before he began his ministry, not three years later just before he was crucified. But they worked with what they had and made an annual cycle out of it all. It was probably helpful to have spiritual leaders prompting people to eke out their stores for the last of the lean season. Mardi Gras, or Carnivale, which comes just before Ash Wednesday, may be connected with the idea that there would not be enough food for any excess livestock in the next couple of months, so best to eat them now. Carnivale and carnivore are closely related words.

Here in the southern hemisphere the whole Lent thing is awkward. We're in our abundant season. Fruit is falling from our trees uneaten because we can't keep up with it. I feel the need to just go out there and munch down a few more pears and apples and plums and figs and... well it would stop them going to waste. Sure, we preserve stuff, and even give some away, but it is still hard to keep up with. Tightening my belt around now just doesn't seem right.

There are aspects of Lent that do not involve food, but I like to think I do my share of those the rest of the time, prayer and kindness and so on. So Lent does sort of pass me by usually. Possibly there would be value in doing something Lenten in six months time, around September, but in our climate that is when spring is well underway and we're getting the first asparagus. Possibly July, which is really winter, would make more sense.

So I don't really worry about Lent too much, and I enjoy Easter when it comes.

Sunday, 26 January 2014

One Smart Cop

This morning we saw an interesting catch by a traffic cop. It's a long weekend here and there are lots of cops around. Just south of where we live is a longish 80km/h section (usual open road limit is 100km/h). It's slower because there are several sneaky corners interspersed with tempting straight sections. Our car has a speed limiter on it and we make use of that to keep to the limits. This morning Mrs was driving with the limiter on, and a car behind tailgating 'cos he wanted to go faster, which is not unusual through there.

We passed a cop parked on the side of the road, no doubt with radar on etc (also not unusual through there). The tailgater pulled back when he saw the cop, of course, and we both passed him looking quite sedate.

Mrs noticed the cop pull out just after we passed and checked her speed just in case, it was fine. Once we were around the next corner the tailgater passed us. Shortly after that the cop passed us at speed. We wondered why. He didn't have his lights flashing and we made irreverent jokes about him being late for his tea break etc.

A little later we found him with his flashers on stopped behind the tailgater and with his pad out etc. Well, we knew the tailgater wanted to go faster than us, and so it seems did the cop. Presumably the cop had spotted him tailgating before he had a chance to pull back.

One smart cop. I wonder how often they use that trick?

Tuesday, 21 January 2014

On Reading a Book

I'm reading a book just now, or trying to. When I say a 'book' I mean a book made of paper, not an eBook. This is the kind of book people talk about when they wax eloquent about the joy of real books. It is 'London: The Biography' by Peter Ackroyd and it is a fine work. The cover is interesting, the paper is good quality and the binding is well done. It needs to be well bound because it is a thick book and fairly heavy. If it were new it would probably have a 'new book smell'.

The writing is excellent and the material is riveting. So there is every reason for me to be racing through this book.

And yet I am not, and I found myself wondering why.

I often read at breakfast. It is a good time to catch up on reading my weekly 'New Scientist' and my brain appreciates the warm up before the day really starts. But I cannot read this book at breakfast. It wants to flip closed all the time and it takes one hand to hold it open and two hands to turn a page. I need between one and two hands to eat so it doesn't work. The second problem is that there is a good chance I will spill something on it, especially when I'm struggling to hold it open and eat at the same time.

These aren't issues with magazines like New Scientist because they lie flat and they are ephemeral enough that the odd bit of egg or cereal landing on them doesn't matter. I also often pick up my tablet (iPad Mini) and check the newspapers. Again, I can work that with one hand and food spills wipe off without damage.

The other time I read is in bed before I put the light out. There's no food and I have both hands free. But propping up a heavy book gets a bit wearing and if Mrs wants to put the light out sooner than I do then we have to compromise. The tablet wins out there as well. It is not nearly as heavy as the book, and I can read it in the dark. I often wake up early and I can read in the dark before Mrs wakes.

I do rather like nice books, and I have a fair collection of them. But as for actually reading them, the tablet seems to do a better job. And books that are less than nice, such as cheap paperbacks, come a poor third.