The NHS Works

The NHS gets a lot of flak from all sorts of sources, and the media tend to delight in highlighting the terrible experiences people have had with it: horror stories about patients left on trolleys in A&E, or being unable to see a doctor out of hours.

Over Christmas, my 15 month old daughter caught a nasty virus/cold that knocked her for six, and we ended up calling our local surgery out-of-hours. Several times.  Our surgery participates in the local out-of-hours scheme, where there is a central number to call, they take your details & a Doctor calls you back.  On each occasion we had a call back within the hour.  On each occasion the Doctor was friendly & helpful.  On each occasion we ended up taking Zoe to the out-of-hours clinic to be checked, after being given a specific appointment time.  

  1. On the first visit, we were seen almost exactly on time, given a prescription and told which chemist near us was open and able to fill it.  We had the visit and the prescription filled, and were on our way home within the hour.  This was on Sunday 28th December.
  2. On our second visit, we were seen within 10 minutes of our time slot and given a slightly stronger antibiotic, which the Doctor made up there & then, as it was late and we wouldn’t have been able to find an open chemist until the morning.
  3. The third visit was much the same: seen within 10 minutes of the time slot, Zoe was thoroughly checked over & we were advised to finish the current course of medication.


The out-of-hours clinic is 15 minutes from our house, there’s plenty of parking, the staff are friendly, and they’re the same people who answer the out-of-hours telephone number.

Craigavon Area Hospital & Lurgan Medical Practice, hats off to you: your system works, you were there when we needed you, and you delivered a service you should be proud of.  I for one am glad that my income tax is being spent wisely.

More noise needs to be made when the NHS does something right; constant negativity is only self-fulfilling.

I’m fully aware that this was fairly simple primary care, and that things get a lot more complex with serious medical conditions, but that’s why I choose to pay for medical insurance that covers those major things.  Having a service that’s available 24 hours during the holidays when you have a sick child is a mind saver, if not a life saver.

WRR DNS with PowerDNS

I had an interesting challenge at work recently. We have 3 data centres running our applications, and the current round-robin (RR) DNS setup does what it’s supposed to: it spreads traffic evenly across the 3 DCs.  That works fine when all of your data centres have similar capacity, but ours don’t, which causes problems once load reaches the point where one of the DCs can’t cope.  There are many expensive and complicated solutions to this; this, however, isn’t one of them.  It’s quite simple and has its weaknesses, but as you’ll see it’s also quite elegant.


Our infrastructure already relies heavily on MySQL replication & PowerDNS; both are installed on all our public machines.  Indeed, we have a large MySQL replication loop with many spokes off it, ensuring that all of the MySQL data is available everywhere.  PowerDNS is used for both internal & external DNS services, all backed by the MySQL backend on the aforementioned replication loop.  This is important to us, as this solution required no new software, just some configuration file tweaks & some database table alterations.


Each record is assigned a weight. This weight influences the likelihood of that record being returned in a DNS request with multiple A records. A weight of 0 means the record will always be in the set of A records returned (well, almost always). A weight of 100 means the record will never be returned.
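The inclusion rule is easy to sanity-check outside the database. Here is a minimal Monte Carlo sketch in Python mirroring the `weight < rand()*100` filter used below (the function name is my own, purely for illustration):

```python
import random

def returned(weight: int) -> bool:
    # Mirrors the view/query pair: each row gets a fresh rv = rand()*100,
    # and the row is kept when weight < rv.
    return weight < random.random() * 100

# A record with weight 40 should be returned roughly 60% of the time.
trials = 100_000
rate = sum(returned(40) for _ in range(trials)) / trials
print(rate)  # around 0.60
```

In general a record with weight w is returned with probability (100 − w)/100, which is roughly what the larger measured samples show.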


The implementation takes three steps:

  1. Add an extra column to the PowerDNS records table, called weight, an integer.
  2. Create a view on the records table that attaches a fresh random value to each record every time it is queried.
  3. Alter the query PowerDNS uses to retrieve data so that it reads from the view and filters on the weight against the random value to decide whether the record should be returned.

We use this SQL to add the weight column:

alter table records add column `weight` int(11) default 0 after change_date;

The view then attaches a random number between 0 and 100 (via rand()*100) to each row:

create view recordsr as select content, ttl, prio, type, domain_id, name, rand()*100 as rv, weight from records;

The random data is then compared against the record weight to decide if the record should be returned in the request. This is done using the following line in the pdns.conf file:

gmysql-any-query=select content,ttl,prio,type,domain_id,name from recordsr where name='%s' and weight < rv order by rv

For small sample sets (100), the results are quite poor & the method proves inaccurate, but for larger sets, 10,000 and above, the accuracy improves greatly.  I’ve written some scripts to perform analysis against the database server & against the DNS server itself.  To test the DNS server, I set cache-ttl=1 and no-shuffle=on in pdns.conf.  With cache-ttl=1, I waited 1.1 seconds between DNS queries.
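The analysis scripts themselves aren’t reproduced here, but the tallying they do is straightforward. A rough Python sketch (names are my own) producing the same three figures reported for each DC — hit count, percentage of the sample size, and percentage of total returned records — plus per-position counts:

```python
from collections import Counter

def tally(answer_sets):
    """answer_sets: one list of DC names per DNS query, in returned order.
    Returns {dc: (hits, % of queries, % of all returned records)} and
    a Counter of (position, dc) pairs."""
    hits, positions = Counter(), Counter()
    for answers in answer_sets:
        for pos, dc in enumerate(answers, start=1):
            hits[dc] += 1
            positions[pos, dc] += 1
    queries = len(answer_sets)
    total = sum(hits.values())  # total RR across all queries
    report = {dc: (n, 100 * n / queries, 100 * n / total)
              for dc, n in hits.items()}
    return report, positions

# Example: 3 queries, 5 A records returned in total
report, positions = tally([["dc1", "dc2"], ["dc1"], ["dc3", "dc1"]])
print(report["dc1"])        # (3, 100.0, 60.0)
print(positions[1, "dc1"])  # 2
```

The “% (sample size)” figures can exceed 100% in total because each query returns more than one A record, while “% (total RR)” always sums to 100%.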

Here are some results gathered with those scripts:

Sample Size = 1,000

#### WRR DNS Results
dc1: 462, 46.2% (sample size), 23.38% (total RR)
dc2: 514, 51.4% (sample size), 26.01% (total RR)
dc3: 1000, 100% (sample size), 50.60% (total RR)
total_hits: 1976, 197.6% (sample size), 100% (total RR)

Desired priorities were:
dc1 2/100, 80%
dc2 5/100, 50%
dc3 0/100, 100%

Sample Size = 10,000

#### WRR DNS Results
dc1: 10000, 100% (sample size), 50.57% (total RR)
dc2: 5821, 58.21% (sample size), 29.43% (total RR)
dc3: 3952, 39.52% (sample size), 19.98% (total RR)

pos-1-dc1: 5869, 58.69% (sample size), 29.68% (total RR)
pos-1-dc2: 2509, 25.09% (sample size), 12.68% (total RR)
pos-1-dc3: 1622, 16.22% (sample size), 8.20% (total RR)
pos-2-dc1: 3332, 33.32% (sample size), 16.85% (total RR)
pos-2-dc2: 2548, 25.48% (sample size), 12.88% (total RR)
pos-2-dc3: 1540, 15.4% (sample size), 7.78% (total RR)
pos-3-dc1: 799, 7.99% (sample size), 4.04% (total RR)
pos-3-dc3: 790, 7.9% (sample size), 3.99% (total RR)
pos-3-dc2: 764, 7.64% (sample size), 3.86% (total RR)

total_hits: 19773, 197.73% (sample size), 100% (total RR)

Desired priorities were:
dc3 60/100, 40%
dc2 40/100, 60%
dc1 0/100, 100%

As you can see, with the larger sample size, the effect of the weighting becomes much more apparent.

dc1 appeared in the returned records 100% of the time, as expected, dc2 appeared 58.21% (desired percentage was 60%) and dc3 appeared 39.52% (desired percentage was 40%).

What is possibly more interesting & relevant is the number of times a particular dc appears in the top slot (pos-1) of the returned results, as this is the A record most likely to be used by the client.  dc1 appears in the top slot 58.69% of the time, with dc2 appearing 25.09% and dc3 16.22%.  These results diverge from the desired priorities quite a bit, but still follow the desired ordering.


The advantages of this approach:

  1. No new code/binaries to distribute
  2. Reuse of existing infrastructure
  3. Easy to roll back from


The disadvantages:

  1. Fairly coarse-grained control of load balancing (no feedback loop)
  2. At least 1 site should have a weight of 0
  3. No guarantee on the number of records that will be returned in a query (other than records with a weight of 0)
  4. Increased load on the database, generating 1 or more random numbers on each query against the view

jBPM Community Day

Friday 6th June 2008 was the first jBPM Community Day, held in the Guinness Storehouse in Dublin. This is practically on my doorstep, and as we’ve been looking at jBPM for some pilots recently, I couldn’t not go.

The speakers on the day were Tom Baeyens, Joram Barrez, Paul Browne and Koen Aers. It was great to hear that jBPM is being used in all sorts of environments, in some very large projects, and most of all to hear the direction of the project from its leaders. It was also good to hear about local take-up in & around Ireland (there were guests from all over Europe, including some Americans based in Budapest).

Tom & the rest of the team are taking their collective experience in BPM and building the Process Virtual Machine (PVM), a state engine that can execute processes described in many different languages, starting with jPDL, with BPEL and Seam PageFlow already on the horizon. The PVM looks set to be the definitive state machine for process management, with plugin interfaces for persistence, task management etc.

It was a great day, many thanks to all of those who contributed to the smooth running & interesting content, and selection of a great venue!

[It’s only just struck me what a great venue it is: making a product that’s as consistently good as Guinness requires clearly documented processes, which soon becomes clear when you take the tour of the Storehouse and see the process involved in taking the raw ingredients and producing something as fine as a smooth pint of Guinness]

Questions for the jBPM Community/Things I’m going to try and answer over the coming weeks

  • Where’s the absolute beginners guide? [or, as this is in a community, where can I start one and what needs to be in it? :-)]
  • What are the requirements/guidelines on replacing the jbpm-console or integrating functionality into your own app?
  • What are the interface points/techniques in PVM for other languages?
  • Drools/jBPM – what are the integration scenarios?
    • populate Drools with data/beans in a node of a process?
    • do both things operate independently?
  • Integration with authentication systems? (AD/LDAP instead of SQL based accounts)


There’s a flaw in ssh-vulnkey: it doesn’t always show you the name of the file containing an offending blacklisted key. Here are a couple of ways around this:

For a small machine, inspect the files by hand:

strace ssh-vulnkey -a 2>&1 | grep ^stat64 | grep -v NOENT | cut -d'"' -f2 | sort | uniq | xargs vi

Or, a little longer, using ssh-vulnkey to find all relevant keys, then reprocessing them to display each filename followed by the result of ssh-vulnkey for that individual file:

strace ssh-vulnkey -a 2>&1 | grep ^stat64 | grep -v NOENT | cut -d'"' -f2 | sort | uniq | xargs -i bash -c "echo ; echo {} ; ssh-vulnkey {};"

This really is a dirty hack, using strace to find the files ssh-vulnkey examines and then reprocessing them individually. There are a million ways this could be done better, but not in a single bash line 🙂
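If you’d rather avoid the strace-pipeline one-liner, the same filename extraction can be sketched in a few lines of Python. This rests on the same assumption as the shell version (that strace reports the key files via stat()/stat64() calls); the function name is my own:

```python
import re

def stat_paths(strace_lines):
    """Pull file paths out of strace stat()/stat64() lines,
    skipping files that didn't exist (ENOENT)."""
    paths = set()
    for line in strace_lines:
        if not line.startswith(("stat64(", "stat(")):
            continue
        if "ENOENT" in line:
            continue
        m = re.match(r'stat(?:64)?\("([^"]+)"', line)
        if m:
            paths.add(m.group(1))
    return sorted(paths)

# Feed it the output of `strace ssh-vulnkey -a`, then run
# `ssh-vulnkey <path>` on each returned path for per-file results.
```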

Thunderbird Essentials – Revisited.

I wrote this some time ago; Thunderbird has moved on quite a bit since then (I’m now running on my XP laptop), so here’s a quick roundup of the Add-ons/Extensions installed:

  • British English Dictionary
    Hey, colour has a u in it.
  • GMailUI
    mostly just so that I can use j/k to navigate around
  • Lightning
    Quickly check my published calendars (also see this: exchange to ics).
  • Mnenhy
    Mostly so that I can see what SpamAssassin (personal) or PureMessage (work) has thought of the email.
  • Nostalgy
    The major time saver for me these days, rapid filing of email, works well across IMAP accounts.

I was having various problems with GMailUI & Keyconfig not playing together properly, but Nostalgy has largely removed my need for keyconfig, as the rapid filing using Nostalgy’s Save function is great.