Saturday, 4 July 2015

Cyber bullying legislation

We've just passed a new law here in NZ that is designed to curtail cyber-bullying. It's a tricky area because of the tension between restricting free speech and, for example, ensuring we can take down messages exhorting vulnerable people to kill themselves. It seems people are a lot meaner on-line than in real life, or something.

I have a bit of trouble relating to this. If someone was sending me hate txts I would just block their number. I don't know if their phone company would still charge them for sending the txts but I just wouldn't see them. As far as I can tell pretty much every social medium I use has some equivalent. Ignoring the whole space, for me anyway, would not be a hardship. Still, it would be wrong if the haters won that fight so we need something.

But I wanted to say something about bullying in general, rather than cyber-bullying alone, and I wonder if this ought to have been considered in the legislation. Bullying of any kind is about power. Powerful people exercise control over the less powerful. We all understand this, but it is easy to lose sight of in the murk sometimes.

When the boss makes a lewd comment at the new office girl and she gets upset he says something like 'Can't you take a joke?' and we all know he's being a bully and calling it humour. What if she makes a joke about his appearance (that overhanging gut, perhaps) which might seem to be just as hurtful, just as mean? Is it the same thing?

To say it isn't the same might seem like we're being unfair, but I suggest it is different (mean is still mean; I'm not saying either of these comments is good). He's in a position of power, she's not. She simply cannot bully her boss, unless there is something else going on that we cannot see, like blackmail. So while she is being offensive, she is not being a bully.

The distinction is important. We had a story I found astonishing here a few months back. When I first read it I assumed it was some kind of parody that hadn't quite worked, but it turned out our prime minister repeatedly 'playfully' pulled a waitress' pony tail at a cafe he frequented. This is after she asked him to stop. He suggested the cafe was the kind of place where all kinds of hi-jinx went on and he was part of that and it was all good fun etc.

Now turn this around (we have to imagine our prime minister has a pony tail which he doesn't) and have the waitress pulling the PM's pony tail. Here's the leader of the country surrounded by his goons (we didn't used to do this but now politicians, especially our PM, always have goons) getting his pony tail pulled. Yes, that's kind of funny. Everyone could laugh, call it hi-jinx, no one felt intimidated. But the other way around adds the kind of power difference that turns it into bullying. It's conceivable that the waitress might find it funny, but it is pretty obvious that it could so easily go wrong.

So bullying is about power. If someone makes a mean comment about a politician on twitter it isn't about power. You go into that job with a thick skin or you don't go into the job (or maybe you just stay away from twitter). But part of the story behind this new legislation includes politicians quoting the mean comments (mostly about their appearance) they have to put up with. Mean is mean, but it isn't bullying. Look where the power is.

Monday, 18 May 2015

Workflow Rant

Over the years I've done a bit of work with Workflow or Business Process products. Not quite enough to learn the difference between the two but enough to know what I like and what I don't like. Most of my recent work has been on jBPM, so I am heavily influenced by that product; when I look at others I compare them to it, not because I like it but because I know it. Most are similar with respect to the points I raise here. Anyway, I'm going to call everything 'workflow' and ignore the pedants who might attempt to correct me.

Workflow is essentially about handling long running transactions, things that might take weeks or months to work through, like a complex insurance claim that involves coordinating different participants such as assessors, surveyors and so on, all through some process that has actually been designed by someone to deliver what is needed. This is in contrast to people figuring things out as they go along, missing steps, going back, screwing up etc.

You need to store the current state of things in a database, have queues of tasks that are directed at different people, ways they can access those queues and do the work (surveyor must enter her assessment of the damage to the building) and ways things can escalate automatically if they take too long (your support request has not been addressed within 2 hours, bump it up to a supervisor).

This is all really worth doing.
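To make that concrete, here's a toy sketch (mine, not from any workflow product; all the names, TaskBroker included, are invented) of those three essentials: stored tasks, per-role queues, and timeout-based escalation:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Queue;

// Toy illustration of the essentials described above: stored task state,
// per-role queues, and automatic escalation after a timeout.
class TaskBroker {
    static class Task {
        final String description;
        final Instant created;
        Task(String description, Instant created) {
            this.description = description;
            this.created = created;
        }
    }

    // In a real workflow product these queues live in a database.
    private final Map<String, Queue<Task>> queues = new HashMap<>();

    void submit(String role, Task task) {
        queues.computeIfAbsent(role, r -> new ArrayDeque<>()).add(task);
    }

    // Move tasks older than the limit from one role's queue to another's,
    // e.g. "support request not addressed within 2 hours, bump it up".
    int escalate(String fromRole, String toRole, Duration limit, Instant now) {
        Queue<Task> from = queues.getOrDefault(fromRole, new ArrayDeque<>());
        int moved = 0;
        Iterator<Task> it = from.iterator();
        while (it.hasNext()) {
            Task t = it.next();
            if (Duration.between(t.created, now).compareTo(limit) > 0) {
                it.remove();
                submit(toRole, t);
                moved++;
            }
        }
        return moved;
    }

    int queueSize(String role) {
        return queues.getOrDefault(role, new ArrayDeque<>()).size();
    }
}
```

A real engine persists all of this in a database and wraps it in transactions, of course; this just shows the shape of the idea.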

How you go about it is the question. There is a belief that the people who write these processes should be business analysts rather than programmers. Consequently the tools to write them come with point-and-click graphical editors. Some of these are really impressive and a lot of work has gone into them. I, however, am always suspicious of point-and-click graphical editors. This stems from a time in the 80s when I worked with a rules-based product that had a truly wonderful graphical UI (especially for the time) and underneath it was a rather shoddy inference engine. It demoed very, very well. Actually building stuff was a real pain. I have to contrast that with any modern spreadsheet application, which accountants (not programmers) use daily and manage to get to do great work. So it is possible to get this right, but I'm wary.

The point-and-click graphical editors let you design processes and they save them into a format called variously BPMN, BPEL etc. There seems to be a tension here between adhering to a standard and coping with a standard that doesn't cover all the needs, so there are local extensions, and that means processes developed in one system aren't easy to port to another system. I just went through the exercise of migrating a demo process (ie a small one) from jBPM to Activiti and I ended up rewriting it rather than importing it. The pretty diagram imported just fine, but everything underneath it was different, for example how to define forms, how to define calls to external systems, how to define conditional branching. Activiti started out as a clone of an earlier version of jBPM so you might expect them to be similar in some respects but no.

And, while you will get business analysts making up the diagrams, that underlying part of the process definition (which is an integral part of the process) is quite fiddly. More fiddly than, say, spreadsheet formulae. Is this just a limitation of the tools? Maybe. But it is real. And it looks a lot more like programming than business analysis.

These two things, the lack of portability of process designs and their underlying detail, lead me to conclude that the people who actually use these tools are programmers rather than BAs. And this is also my experience. The BA will do a fine job specifying in detail what the process must do; a programmer will actually go and write it.

That leads me to the obvious conclusion: the tools need to be programmer-friendly rather than BA friendly. So I went and did that. I cooked up a product called Madura-Workflow where the process definitions look enough like Java for a programmer to read them immediately and be able to write them in half an hour or so. The point-and-click tools are much more complicated than that, there really is a lot of new stuff to know such as different kinds of conditional gateways and so on. Java already has conditional statements that do much the same thing so I use that syntax. Here's a sample:

queue: name="Q1" permission="ORDERCLERK";
queue: name="Q2" permission="STOCKCLERK" read-permission="ORDERCLERK";
queue: name="Q3" permission="SUPERVISOR" read-permission="ORDERCLERK";

process: Order "AcceptOrder" "Accept an order"
    launchForm=InitialForm queue="Q1" {
    if (decisionField) {
        try {
            form=SecondForm queue="Q2";
            compute=orderCompute log="some message";
        } catch (timeout=10 HOURS) {
            form=SupervisorForm queue="Q3";
        }
        if (rejected) {
            abort "Rejected this order"
        } else {
            ...
        }
    }
}
If you already know Java you probably know what is going on there, though you would have some questions. I'll answer those now:
  • The process named AcceptOrder is focused on an object named Order, which is a bean (with getters and setters) and all fields referenced are on the Order object eg Order.decisionField.
  • There are forms defined somewhere (else) named InitialForm, SecondForm and SupervisorForm. We don't care too much what they look like, but the process waits when they are invoked and they gather data and set fields in the Order object.
  • We can call Java classes with 'compute' statements and these can perform whatever Java we choose to write. Normally these are smallish computational things.
  • When we want to send a message somewhere we use a 'message' statement. These connect to Spring Integration (or similar) to orchestrate sending/retrying messages to external servers, getting responses back etc.
  • The launchForm is presented just before we start the process and it populates the Order object with initial information.
  • The queue definitions determine which users see the task generated. The ORDERCLERK can launch this process, can see the STOCKCLERK's tasks (eg SecondForm) but cannot see the SUPERVISOR tasks (SupervisorForm).
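For what it's worth, here's a guess at the kind of bean the process is working on. Only the two fields the sample references are shown, and the boolean types are my assumption:

```java
// A minimal guess at the Order bean the AcceptOrder process manipulates.
// A real Order would carry customer and line-item data as well.
class Order {
    private boolean decisionField;
    private boolean rejected;

    public boolean isDecisionField() { return decisionField; }
    public void setDecisionField(boolean decisionField) { this.decisionField = decisionField; }

    public boolean isRejected() { return rejected; }
    public void setRejected(boolean rejected) { this.rejected = rejected; }
}
```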
Sure there is still defining the data entry forms and the messages, but those are (quite rightly) outside the process definition and do not clutter it. In practice the message definitions consist of some XSL and some XML in a Spring context file. The data entry forms depend on your choice of technology.

My technology preference for defining forms is to use a mix of Vaadin, Madura Objects and Madura Rules. That way my forms are automatically validated and monitored by rules. For example the decisionField is a boolean that might be set by something on the InitialForm. It could be directly input as, say, a checkbox, but it might be invisible to the user and set by a rule triggered by the user entering some other field like a particular stock code. I supply a generic Vaadin form which generates itself from the object (eg the Order) and all it needs is an implementation class that extends it. You can leave the implementation empty for a generic form, or you can add code to customise the form if you need to.

The product does some other things too. You can add attachments to processes and this is fully supported by the UI, and by attachments I mean as many as you like and to any and every process instance. This is strangely missing from some workflow products I've used and seems to be needed by every workflow process I write.

However the core workflow is not dependent on a specific UI so people can choose what they like best.

Other features
  • On-the-fly deployment of processes, including their rules and UI details.
  • Field security on the forms, some users can see some fields but not others etc. This is a consequence of using Madura Objects.
  • Security based on Spring Security, so endlessly configurable.
  • Multiple servers for added throughput
  • A full workbench UI for launching and operating processes but, unlike other workflow offerings, it doesn't let you define processes in it; you do that from Eclipse.
  • An Eclipse plugin to assist programmers writing their processes (syntax checking etc.)
None of which is so very different from other workflow products.

But my focus here is really about the process definitions. I do believe we are going down the wrong path with clever diagrams that end up being a real pain for the people who end up having to implement them.

If you are interested in more tedious detail there are documents on the core workflow (PDF) and the workbench (PDF) which includes a demo script. The whole thing is on GitHub and releases are published to Maven.

Wednesday, 8 April 2015

In Defence of Anemia

The 'Anemic Data Model' is regarded by some influential people as an anti-pattern. Martin Fowler describes it as 'contrary to the basic idea of object-oriented design, which is to combine data and process together'. So, with some trepidation, I disagree. But I'm not sure Fowler would call what I do 'anemic'.

For those who don't know what an Anemic Data Model is imagine you have a collection of Java objects (but any OO language will do). These are things like Customer, Order, OrderLine and so on. We have fields in them like customerNumber and (in Java anyway) we normally have setter and getter methods to set and query the field values. So far so normal. Now if we leave it like that it is an Anemic Data Model. If we add logic to the Customer object to, say, validate the customerNumber then we have a non-Anemic Data Model. We might add other code, for example we might have code in the Order to create a new OrderLine etc.
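Here's a small invented example of the difference (my own, nothing to do with Fowler's examples): the same Customer, first anemic, then rich:

```java
// Anemic version: data only; any validation lives elsewhere,
// typically in a service layer.
class AnemicCustomer {
    private String customerNumber;
    public String getCustomerNumber() { return customerNumber; }
    public void setCustomerNumber(String n) { customerNumber = n; }
}

// Rich (non-anemic) version: the same data plus the business rule.
class RichCustomer {
    private String customerNumber;
    public String getCustomerNumber() { return customerNumber; }
    public void setCustomerNumber(String n) {
        // Invented rule for illustration: customer numbers are exactly
        // eight digits. In the rich model this check lives on the object.
        if (n == null || !n.matches("\\d{8}")) {
            throw new IllegalArgumentException("bad customer number: " + n);
        }
        customerNumber = n;
    }
}
```

The eight-digit rule is made up, obviously; the point is where the check lives.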

The opposite view, ie the one that prefers the Anemic Data Model, says you must put all this code into a Service Layer of objects that are there specifically to hold the business logic and manage the housekeeping.

There's actually quite a good summary of the pros and cons of ADM on stackoverflow, and I particularly like the #92 comment (the second one down, there aren't so very many to read) where Eric P says "Possibly it is because with Rich Domain [ie non-Anemic] it is more difficult to know on which classes to put the logic. When to create new classes? Which patterns to use? etc."

I'm working with a system at the moment where I suspect no one thought too much about these issues way back when it was written, so there are lots of service layer objects and data model objects, and they all have business logic scattered through them in a pattern I have yet to detect. Actually they have some of their business logic in the UI layer too, and this is not uncommon.

Why is this even important? It is not really that things are hard to find. The IDEs we use these days have really good search facilities. It might take a couple more steps to find something but it is not a huge problem. For example finding the validation for customerNumber just needs me to invoke a search for all references to that field and maybe to the setter for that field. Job done. No, the problems come about when you need to change things around, especially in bulk.

The difference between data model objects and all the others is that they end up being serialized, which usually means they get written to a database. The database structure is not defined by our Java domain objects, at least not directly, but they need to be in sync. So various tools are around to generate SQL from annotated Java, or generate Java from a database. And the moment you decide to re-generate your domain objects from a database what happens to that customerNumber validation you coded into the Customer object? Gone.

People don't change their database so very often but there is an annoying trickle of triple maintenance requests ever after: add a new field to the Customer object, fix the SQL scripts to include the field, add it to existing databases.

So most people who decide to implement an Anemic Data Model structure things like this:
The Service Layer tends to be a bit vague on diagrams like this but I suppose you'd say it was in the arrows. It sure isn't in the Domain.

But even if you say all the logic has to be outside the Domain putting it in the Service Layer is only an implication. It can still end up in the UI layer (and it does!). This UI thing is a real problem, actually. People put validation up there and maybe some computation. When there is a requirement to shift the UI to a new technology you suddenly have to re-implement a ton of business logic (and you probably did not realise it was there so this wasn't in the budget, cue death march music....).

So we're better to say very, very definitely where all the business logic goes. Service layer! Well of course, but let's get a bit more formal. And let's take the opportunity to simplify things. Remember simplifying things means your new-hire programmer will be able to understand your code faster. Easier coding, fewer screw ups, all good stuff. Sure, but how? Well first we encapsulate.

Encapsulation is a well proven approach for simplifying things. You encapsulate the components in such a way that when someone is looking at one part of the system they don't have to know the details of all the other parts. If you code you already know this.

I mentioned earlier that there are tools which generate Java objects from database structures and vice versa. The limitation with these tools is that this is all they do, and this approach will need a little more. So we can turn to JAXB, which generates the Java objects from an XSD file. XSD is a standard descriptor file for XML structures. XML? Well, yes. XML is another way of describing an Anemic Data Model. You can't add code to XML. So our first step is to write an XSD file that describes our objects.
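To give a flavour of it, an XSD fragment describing a Customer might look something like this (a hand-written sketch; the namespace and names are mine, not from any real schema):

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.com/domain"
           xmlns="http://example.com/domain"
           elementFormDefault="qualified">
  <!-- JAXB will generate a Customer class with these two fields,
       plus getters and setters, from this description. -->
  <xs:complexType name="Customer">
    <xs:sequence>
      <xs:element name="customerNumber" type="xs:string"/>
      <xs:element name="name" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>
```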

JAXB is a smart beast in that it accepts plugins that assist it generating the Java representation of the objects. One of those plugins adds JPA annotations, it's called HyperJAXB3. So the resulting objects are all set up to map to a database. You can generate SQL scripts or databases from there using the usual tools. You can also serialize to XML if you want.

JAXB can add all kinds of stuff to the generated code. For example maybe you want to add Ehcache annotations to the classes, that's a tiny tweak to the JAXB configuration. Then you regenerate and you haven't broken anything. You haven't lost any code because you never edit the generated objects. Actually this is a scenario worth exploring. You might want to add Hibernate caching to all your database objects. JAXB can do that no trouble. Then you might decide you'd prefer to use JPA caching. Again, no trouble, just change the settings and regenerate.

But we're not done yet. We can add JSR-303 validation to the objects. You specify the validations you want in the XSD file and the annotations will end up on the generated Java fields. It doesn't mean they will be validated, but it means you can provide generic code that will validate. This generic code doesn't count as business logic. The business logic is the annotation, it is an important difference.

Maybe we can do more? Yes, of course we can. We can use Madura Objects which implements setter-triggered delegation to a validation engine (yes, still validation so far). The important thing about this is that normally JSR-303 lets you set a value in a field and then check it. Madura Objects checks it beforehand and throws an exception if it is wrong. It means you don't need the logic to switch the value back if it fails. The validation function of Madura Objects still counts as generic code because it is driven by the validations (ie business logic) originally specified in the XSD.
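The shape of that idea, reduced to plain Java (this is not Madura Objects' actual API, just an illustration of validate-before-set delegation):

```java
import java.util.function.BiConsumer;

// Sketch of setter-triggered delegation: the setter hands the proposed
// value to a validation engine *before* storing it, so a failed
// validation never changes the object and nothing needs rolling back.
class ValidatedOrder {
    // The delegate inspects (fieldName, newValue) and throws to reject.
    private final BiConsumer<String, Object> validationEngine;
    private int quantity;

    ValidatedOrder(BiConsumer<String, Object> validationEngine) {
        this.validationEngine = validationEngine;
    }

    public int getQuantity() { return quantity; }

    public void setQuantity(int q) {
        validationEngine.accept("quantity", q); // throws if invalid
        this.quantity = q;                      // only reached when valid
    }
}
```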

This is still an Anemic Data Model but it is a delegating Anemic Data Model. And we want to put as much of the business logic into that delegated layer as we can. Want to add up the order lines into an order total? Do it in the delegation, triggered by the setting of a value on any order line. Want to calculate the tax on an order? Do it in the delegation, triggered by setting the total on the order.

The Service Layer now only needs to care about housekeeping stuff like creating objects, attaching them to other objects (adding an order line to an order), but never the real business logic. To extend that business logic we can add a rules engine, a plug-in to Madura Objects called Madura Rules. That implements declarative rules in a constraint engine. It also implements truth maintenance which means if you change a field that it derived some value from (say the value in an order line that fed into the order total) then the order total is automatically recalculated in much the same way as a spreadsheet.
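The spreadsheet comparison is quite literal. A toy version of that recalculation, with no rules engine and invented names:

```java
import java.util.ArrayList;
import java.util.List;

// Toy truth maintenance: the order total is never set directly, it is
// re-derived whenever a line amount changes, the same way a spreadsheet
// cell recalculates when one of its inputs changes.
class ReactiveOrder {
    class Line {
        private double amount;
        Line(double amount) { this.amount = amount; }
        void setAmount(double amount) {
            this.amount = amount;
            recalculate(); // changing an input re-derives the total
        }
    }

    private final List<Line> lines = new ArrayList<>();
    private double total;

    Line addLine(double amount) {
        Line line = new Line(amount);
        lines.add(line);
        recalculate();
        return line;
    }

    private void recalculate() {
        total = lines.stream().mapToDouble(l -> l.amount).sum();
    }

    double getTotal() { return total; }
}
```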

What should that diagram look like now?
I've drawn the arrows between the Domain and the Business Logic tiny to show that this is not Service Layer stuff. The Service Layer calls setters, but it has no idea that the Business Logic is there. Neither does the UI or, of course, the database.

Remember that XSD file? I said we could add other stuff to it. Well now is the time to tell you that we also added permission flags, readonly flags and so on. We can use these to make the UI smarter. We don't hard code that the orderTotal is readOnly in the UI. We declare it readOnly in the XSD file and have the UI figure it out from the resulting annotations. If the field is to only display to users with SUPERVISOR permission again we declare it in the XSD and have the UI suppress it if this user doesn't have that permission. This removes the need for lots of business logic decisions in the UI and reduces it to generic code.
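Sketched in plain Java with an invented annotation (Madura's real metadata is richer than this), the UI-side check is pure reflection, no business logic:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// Invented annotation standing in for the generated metadata flags.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface FieldMeta {
    boolean readOnly() default false;
    String permission() default "";
}

class OrderView {
    @FieldMeta(readOnly = true)
    double orderTotal;

    @FieldMeta(permission = "SUPERVISOR")
    String creditNotes;
}

// Generic UI helper: it only inspects metadata, it knows no rules.
class MetaReader {
    static boolean isReadOnly(Class<?> type, String fieldName) {
        FieldMeta m = meta(type, fieldName);
        return m != null && m.readOnly();
    }

    static boolean visibleTo(Class<?> type, String fieldName, String userPermission) {
        FieldMeta m = meta(type, fieldName);
        return m == null || m.permission().isEmpty() || m.permission().equals(userPermission);
    }

    private static FieldMeta meta(Class<?> type, String fieldName) {
        try {
            Field f = type.getDeclaredField(fieldName);
            return f.getAnnotation(FieldMeta.class);
        } catch (NoSuchFieldException e) {
            throw new RuntimeException(e);
        }
    }
}
```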

We do cheat just a little in two places. First the Service Layer needs to handle exceptions generated by the Business Logic. It usually just passes the exception message to the UI because such exceptions are normally user input errors. That makes it not totally transparent, but still generic. The second cheat is that we add a little metadata API to the Domain Objects. Most of the time the field metadata, eg whether the field is readOnly or not is static or permission dependent, and we can hold that in annotations. But sometimes it is dependent on what else this user just entered. Where that happens the rules can manipulate the metadata which, in turn, can influence the UI. The UI still avoids actual business logic this way.

For example, the Customer has a flag that tells us he is a credit risk. That means we want the user to enter some more information than usual to complete the order, so some fields that were not visible before become visible. The UI doesn't have code referring to credit risks, but the rules do and they set the status of the field.

Of course the UI layer has to be smart enough to do this, but if it is smart enough to do business logic it ought to be smart enough to have this added.

There is an implementation of this available using Vaadin for the UI, including a demo.

Saturday, 14 March 2015


We have four sheep. They are pets and they all have names. They don't actually know they are pets, nor do they know their names, but they are safe from being eaten, even though we do eat other sheep. At some level they seem to know that.

I grew up on a dairy/sheep farm and I never took much notice of the sheep. They were just woolly blobs that roamed the hills. My father spent a lot of time in early spring 'going around the sheep' which meant checking if they were in difficulty giving birth. In my early childhood one or two abandoned lambs were brought home for us kids to bottle feed. My sister raised a black lamb which was very tame, for her anyway, even when it was grown up.

Our current sheep all arrived as adults at various ages, though one of them was barely grown. With just four we get to observe them closely and they have distinct personalities. They are actually quite clever or, at least, they are good at what they do ie being sheep.

For example at this time of year they really, really like to spend time in the orchard. There is fruit on the trees which they mostly can't reach (this is the reason we don't keep llamas) but they can watch for when it falls. We go up there once a day or so and they mill about knowing we will toss them some crab apples. Crab apples seem to be the best thing in the whole world, they lick their lips as they run up to us. At other times of the year they are easy to move to the other paddock, but when the crab apples are there they are quite reluctant.

Except today. Today we're expecting a cyclone and the other paddock has more shelter. This morning there wasn't any wind to speak of but we had the weather forecast. So we went out expecting to argue with the sheep, who surely would not want to leave the crab apples. But they did. They'd felt the weather, well we could feel the drop in air pressure too, and they wanted to move.

When they want to move it is just a matter of opening the two gates and they run from one to the other. They know the drill, they're good at this.

But, like I said, they are all different. One time I was explaining to visitors who was who, I can recognise them from any angle and I noticed some skepticism. They were thinking 'they're just sheep, they're all the same.' But really they aren't.

The oldest (we think) is Limpy. He's a wether, a castrated male, and he still knows he's the boy, he's in charge. He's quite brave in his way. There's a little yappy dog next door who sometimes runs across our paddock. It's harmless, but it is a dog, therefore a wolf. The girls all flock up, Limpy places himself between them and the wolf, and stands his ground. It's his job. He's not at all scared of me either. One time I was in the forest near the fence to their paddock and I kind of burst out of the trees. Limpy was right by the fence on the other side and I gave him a start. He gave me a pained look as if to say 'thought it was something dangerous but it was just you.' One time I was trying to block his way (we'd got into some rare confusion when shifting them to the other paddock) and he just put his head down and pushed past me. He can be a bit of a bully with the others. He will push them away from any fruit we throw them if he can, but he usually doesn't bother because there's plenty.

Next is NDF (for No Distinguishing Features). Well we couldn't see anything different about her back when we were naming them. In fact she's a Romney where the others are Perendales, it means she's a bit bigger (she gets any low hanging fruit) and broader in the face. She is inclined to stand and stare at us with an astonished look on her face which makes us think she's not too bright. She is also the most nervous of the four. She does love crab apples though and we see her screwing up her courage to come up close to where we're throwing them. If it's not crab apples we have to throw whatever it is a little further for her.

The smartest of the four is Curious. She's the one who gets ideas. Sheep tend to follow each other but it takes one of them to think 'let's go over here' and the others to follow. That's usually Curious. She likes to eat the grass right at the fence line possibly because no one has peed on it ever. Also she's likely to be the one calling out to us when she sees us in the garden. She only does that in crab apple season, except once or twice when we've had unexpected visitors, so she's a sort of watch-sheep too. Not a very good one though. She's fairly tame but she won't take apples from our hands, only from the ground. Given that she sees the others doing it I have to conclude that she doesn't want to.

She is also inclined to sleep very soundly. One time we were moving them we thought she was dead. She was lying there in a heap and the others all ran through the gate when we opened it. We called her, clapped our hands and got no reaction. So we finished moving the others and went back, we expected, to bury her. As I approached her she suddenly jumped up, looked alarmed she was by herself and ran off after the others. I wonder if being smarter means her brain is wired a bit differently and it changes the way she sleeps. But I don't know.

The youngest, though several years old now, is Lambkin. She was definitely a pet when she was tiny, probably bottle fed by some kids, and she's the tamest. During crab apple season she is rubbing around our legs almost like a cat, nibbling our trousers and generally looking very excited. She never wants to be patted but she's friendly enough. And she really enjoys life. She tends to toss her head when she runs and sometimes she does what we call her happy dance. I still haven't caught this on video yet but imagine a kind of bounding gait where all four feet leave the ground as she prances about. It is hilarious and it is obvious she is doing it for sheer joy. Lately she's got Curious doing it too.

The happy dance turns up after a good feed of crab apples or when she's moved to fresh pasture. But it isn't inevitable, she does it about four times a year, the rest of the time she just seems normally cheerful.

We get to observe them every day because we have a small property and very few sheep. Most farmers I know think of their sheep as stupid because they get in a panic whenever the farmer does anything with them. But those farmers bring dogs to help (they have to with that many sheep) and the sheep don't know them anyway. The thing is we do eat sheep, and they know it. We'd be in a panic if we were in a room with a tiger, even one that didn't seem hungry right now. Ours seem to know they're safe from us. Fairly safe anyway. They'll run if I chase them (once a year I need to catch each one to administer tick medicine and they run away then). But no way do they panic when they see us.

But when we have visitors they don't behave the same way. They don't know the visitors, they tend to flock up and look nervous. They're good at being sheep.

Monday, 23 February 2015

Installing Oracle XE on Linux

This is about getting Oracle XE running on a Linux laptop, specifically my laptop, running Xubuntu, a variation on Ubuntu, but not the kind of variation that would mess up a database. Back in 2009 I was comfortable with Oracle XE, fairly comfortable anyway. But it seems installing Oracle on Linux has moved on to something much, much harder. I did get it working, though it took a lot of swearing.

First take a look at this, then assume you're taking the third way, ie use Docker. You will need to install Docker if you haven't already.
I did try the first option (twice actually) and it failed the same way both times, near the end (I think) with no diagnostics.

The details of the Docker image you use are here:
It is a 300MB download, and it is trivial to start. I did not have any success with the apex stuff though. It kept asking me for the same password over and over. But that is likely an Oracle thing rather than a Docker thing. I didn't need apex, though. All I wanted to do was get a database running and import a known good backup to it.

With the docker image you already have the first part, all that remains is the import.

First, copy your dump to somewhere you can map easily, I used /tmp.
Now you need to modify your docker run command to look like this:

sudo docker run -d -p 49160:22 \
    -p 49161:1521 -p 49162:8080 \
    -v /tmp:/tmp alexeiled/docker-oracle-xe-11g

That makes Docker map the /tmp directory on your host system to /tmp inside the container.

sudo ssh root@localhost -p 49160
(password: admin)
# you're now logged in as root on the docker image
cp /tmp/*.dmp /u01/app/oracle/admin/XE/dpdump/
chmod o+rw /u01/app/oracle/admin/XE/dpdump/*.dmp

Next, start sqlplus and do the following:

sqlplus system/oracle
create user platinum identified by mypassword;
create tablespace PLATINUM_DATA DATAFILE 'tbs_f2.dbf' SIZE 10M online;
alter database datafile 'tbs_f2.dbf' autoextend on maxsize unlimited;

The dump user is platinum and the dump has a tablespace of platinum_data. The only way I know to find this out is to try it before creating the tablespace, note the errors and then create what you need.

Then exit sqlplus and run the import from the shell:

impdp system/oracle dumpfile=D16-platinum-previous.dmp logfile=impschema.log full=y

There are dozens of more complex variations of the impdp command but that seems to be all you really need. Your dmp file will be different to mine, of course.

Finally you should change the system password or it will expire:

sqlplus system/oracle

(change the password to something other than oracle 'cos it will expire)

And that is all you need.

But I have to add that I really dislike doing anything with Oracle. To do anything they expect you to have the depth of knowledge a DBA ought to have, not a software developer. Lots of options, lots of complications, which I'm sure makes everything run very fast, but if you can't get it running at all (and I gave up on this more than once) then it is no use.

In the end the import was not too hard but it took me half a day to wade through the complexities and rat holes some simple documentation would fix. The actual install remains appalling, so I'm thankful for Docker.

My next task is cleaning up the debris left over from the failed installs....

But one more thing. If you restart your Docker image all the data will be gone, including the changed password. This might be a good thing in some cases but I wanted it to remember my import. For that I needed to make a new Docker image from the running one. It is quite simple. Follow the instructions under 'commit' on this page.
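From memory the commit step looks something like this; the image name is whatever you choose, and you get the container ID from docker ps first:

```
sudo docker ps                      # note the running container's ID
sudo docker commit <container_id> my-oracle-xe-with-data
sudo docker run -d -p 49160:22 -p 49161:1521 -p 49162:8080 \
  -v /tmp:/tmp my-oracle-xe-with-data
```

After that, restarting from the new image brings the database back with the import (and the changed password) intact.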

Sunday, 30 November 2014


I finally did it. After putting up with years of sniggers and snide remarks from me every time she ran a defrag or an anti-virus check on her ageing Windows XP machine (I didn't need to say ageing there, I guess), Mrs finally succumbed and moved to Linux.

The trigger was that I upgraded my machine, which was just as ancient, but I'd been running Ubuntu on it for years; more lately I've been running Xubuntu, which is basically the same thing with a different UI layer. It's lighter and suits an old machine a little better, plus I'm not a fan of Ubuntu's Unity interface. My new machine would handle the extra load of Unity no problem, but I still prefer the Xubuntu UI. It is interesting that we get this choice with Linux.

So, with me sporting a newer, faster machine Mrs realised just how old and crappy her machine was getting. Little things like some of the keys playing up, the fact it tends to crash if we pull the power supply even though the battery is just fine, and so on.

But, of course, she knew that if she went from XP to Win8, which is what the new machines come with, it would all be so different to get used to that she might as well switch to some Linux variant. I got the machine about a week ago and installed Lubuntu (another variation of Ubuntu, but this one has an even lighter UI and feels more like WinXP, to make the transition easier).

When I do this I pop the hard drive and put a new one in. That way I can switch back in a moment if there are any warranty problems. I don't mess about with dual booting. If you're going to switch then switch, dammit. There were some minor issues with getting the machine to boot off the replacement drive, nothing much there. The job could have been done in a couple of hours all up but I needed to find a quiet point in her schedule to switch over.

That happened yesterday. She has one rarely used thing on VirtualBox running WinXP, another couple of things on Wine, both of them infrequently used, and the rest is mostly LibreOffice, Firefox and Thunderbird. She's had one glitch in LibreOffice relating to the behaviour of quotes, which took 5 minutes on Google to sort out.

Otherwise she's all good. And no more defrags and virus checks. The keys all work too, though there's a different keyboard to get used to.

Sunday, 5 October 2014

Secret Scripts: the antipattern

The Secret Script is something I've been aware of for a while and, although I'm fairly sure I didn't make it up, I can't find any reference to it on Google. So I'll need to explain what I mean rather than just supplying a link.

A Secret Script is some procedure, manual or automated, that you have to know about to complete a software build. It isn't intended to be secret as such; in fact it is one of those things that everyone knows, except the new guy who can't get his build to run.

You can see how these things start, especially in a small team under pressure to deliver. Cut a quick ant script to create a directory and copy some files to it, hack the maven settings file, and make sure you set up that environment variable. Sure, now everything works, and it works every time, so we can forget about it.
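The fix is cheap: the moment a manual step appears, capture it in a script that lives in the repository next to the ReadMe. A minimal sketch of the idea (the file names and steps here are hypothetical):

```shell
#!/bin/sh
# setup.sh -- the one discoverable entry point for build prerequisites.
# Checked in beside the ReadMe so the new guy finds it on day one.
set -e  # stop on the first failure rather than half-configuring

# Create the directory the build expects
mkdir -p build/generated

# Write the settings file that used to be an 'everyone knows' manual edit
printf '%s\n' '<settings/>' > build/generated/settings.xml

echo "setup complete"
```

Nothing clever here; the point is that the steps are versioned and visible rather than folklore.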

If you are building open source projects, where you put the entire thing up in the cloud with the expectation that anyone in the world can pull it down and build it on their machine, you cannot have secret scripts. Not ever. If I pull down an open source project and it doesn't build first time, that's a signal to delete it and find something else. But closed source projects have more leeway here, especially if the team is small and information can be easily passed around.

Note that I am not referring to documented steps. You can have a couple of dozen manual steps in your ReadMe file, as long as that file forms part of the project, i.e. anyone pulling down the software gets the ReadMe in an easy-to-find directory. Those steps aren't secret; they are easily discoverable.

I said closed source projects have more leeway, but not very much more. Every new hire programmer needs to get past these secret scripts. You either have someone who knows sit with them to get them going, which is time consuming, or you leave them to figure it out for themselves, which is even more time consuming, not to mention discouraging for the new guy. There is always the danger that the people who know have forgotten they had to edit some config file two years ago anyway, and will be just as baffled as the new guy as to why his build fails.

And with modern tools there should be no need for this. If you're a Java shop you are probably using maven or maybe ant. On a new machine anyone ought to be able to type mvn or ant on a command line and get a working build. If they can't, the system is broken by secret scripts.

Secret scripts have a sort of mirror pattern which is, unfortunately, still an antipattern. I call these decoy scripts. A decoy script is a script or procedure that is right there in plain sight looking like it is important and useful, but which does not actually work and everyone but the new guy knows to ignore it. The new guy tries it out, finds it doesn't work, and tries to fix what is obviously a local problem. The certainty that everyone else has this script working will lead him to waste hours trying to fix what has to be a problem local to his machine.

Here are some examples of decoy scripts:
  • Out-of-date instructions in the ReadMe file.
  • Unit tests that don't pass (and aren't flagged with @Ignore).
  • Old ant scripts that refer to invalid paths.
These things are fairly easily avoided, and fixing them can save a lot of time.