Faking Production – database access

One of our services has been around for a while, a really long time. It used to be developed in production, so there is an awful lot of work involved in making the app self-contained, to where it can be brought up in a VM and run without access to production or some kind of fake supporting environment. There's lots of stuff hard-coded in the app (like database server names/IPs etc.), and there's a lot of code designed to handle inaccessible database servers in some kind of graceful manner.

We've been taking bite-sized chunks out of all of this over the last few years, and we're on the home straight.

One of the handy tricks we used to make this application better self-contained was to avoid changing the database access layer (hint: there isn't one) and instead use iptables to redirect requests for production database servers to either a local empty database schema on the VM, or to shared database servers with realistic amounts of data.

We manage our database pools (master-dbs.example.com, slave-dbs.example.com, other-dataset.example.com etc.) using DNS (PowerDNS with a MySQL back end). In production, if you make a DNS request for master-dbs.example.com, you will get 3+ IPs back, one of which will be in your datacentre, the others in other datacentres; the app has logic for selecting the local DB first and using an offsite DB if there is some kind of connection issue. We also mark databases as offline by prepending the relevant record in MySQL with OUTOF, so that a request for master-dbs.example.com will return only 2 IPs, and a DNS request for OUTOFmaster-dbs.example.com will return any DB servers marked out of service.
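Because PowerDNS reads records straight out of MySQL, taking a server out of the pool is just an update to the name column. A minimal sketch, with a hypothetical record and IP:

update records set name = concat('OUTOF', name)
 where name = 'master-dbs.example.com' and content = '192.0.2.10';

Reversing the update (stripping OUTOF back off the name) puts the server straight back into the pool.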

Why am I telling you all of this? Well, it's just not very straightforward for us to update a single config file and have the entire app start using a different database server. (Fear not, our production databases aren't actually accessible from the dev environments.)

But what we can do easily is identify the IP:PORT combinations that an application server will try to connect to. And once we know those, it's pretty trivial to generate a set of iptables statements that will quietly divert that traffic elsewhere.

Here's a little Ruby that generates some iptables statements to divert access to remote production databases to local ports, where you can either use ssh port-forwarding to forward on to a shared set of development databases, or run several local empty-schema MySQL instances:

require 'rubygems'
require 'socket'

# map FQDNs to local ports
fqdn_port = Hash.new
fqdn_port['master-dbs.example.com']    = 3311
fqdn_port['slave-dbs.example.com']     = 3312
fqdn_port['other-dataset.example.com'] = 3314

fqdn_port.each do |fqdn, port|
  puts '#'
  puts "# #{fqdn}"

  # addresses for this FQDN
  fqdn_addr = Array.new

  # gethostbyname returns [hostname, aliases, family, addr1, addr2, ...];
  # everything from index 3 onwards is an address
  addr = TCPSocket.gethostbyname(fqdn)
  addr[3, addr.length].each { |ip| fqdn_addr << ip }

  # also divert any servers currently marked out of service
  begin
    addr = TCPSocket.gethostbyname('OUTOF' + fqdn)
    addr[3, addr.length].each { |ip| fqdn_addr << ip }
  rescue SocketError
    # no OUTOF record right now, nothing extra to divert
  end

  fqdn_addr.each do |ip|
    puts "iptables -t nat -A OUTPUT -p tcp -d #{ip} --dport 3306 -j DNAT --to 127.0.0.1:#{port}"
  end
end

And yes, this only generates the statements; pipe the output into bash if you want the commands actually run. Want to see what it's going to do first? Just run it. Simples.
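For the ssh port-forwarding half of the trick, something along these lines works; the gateway and database host names here are hypothetical:

# forward the local ports from the script above on to shared dev databases
ssh -N -L 3311:dev-master-db:3306 \
       -L 3312:dev-slave-db:3306 \
       -L 3314:dev-other-db:3306 devgw.example.com

Leave that running and the DNAT'd connections pop out on the shared development databases instead of production.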

Windows 7 Essentials

I've just rebuilt my laptop (a combination of McAfee Whole Disk Encryption slowing the current build down & a Crucial RealSSD 128GB that was too cheap to resist forced me to, honest guv), so it's time to refresh & re-document the essential software list:

  1. Windows 7 Professional 64bit
  2. VistaSwitcher (better alt-tab)
  3. WindowSpace (snap windows to screen edges & other windows, extended keyboard support for moving/resizing)
  4. Launchy
  5. Thunderbird 6
    1. Lightning (required for work calendars)
    2. OBET
    3. Provider for Google Calendar (so I can see my personal calendar)
    4. Google Contacts (sync sync sync)
    5. Mail Redirect (bounce/redirect email to a ticketing system)
    6. Nostalgy (move/copy mail to different folders from the keyboard)
    7. Phoenity Shredder or Littlebird (the default theme is a bit slow, these are lighter and quicker)
    8. Hacked BlunderDelay & mailnews.sendInBackground=true
  6. Chrome + Xmarks
  7. Xmarks for IE
  8. Evernote
  9. Dropbox & Dropbox Folder Sync
  10. PuTTY (remember to export HKEY_CURRENT_USER\Software\SimonTatham\PuTTY\Sessions)
  11. WinSCP
  12. Pidgin + OTR
  13. gVim
  14. Cisco AnyConnect (main work VPN)
  15. Cisco VPNClient (backup & OOB VPN)

I think that’s it for now.

The New Toolbox

In days gone by, any computer guy worth his salt had a collection of boot floppies, 5.25″ & 3.5″, containing a mix of MS-DOS, DR-DOS, Tom's Root Boot & Norton tools. Those days passed, and the next set of essentials was boot CD-Rs containing BartPE, RIPLinux, Knoppix etc. People quickly switched to carrying these tools on USB sticks: smaller, easier to change, great when the dodgy PC you were trying to breathe life into supported USB booting.

I think there's a better way, based on the last 3 days of hell spent setting up what should have been identical touchscreen machines (no CD drive, slow USB interfaces).

Your new toolkit is a cheap laptop, with a big hard disk, running the following:

  1. Your favourite Linux distro (I’ve used Ubuntu for this laptop)
  2. tftpd, dhcpd & dnsmasq set up for PXE booting other machines from this laptop (FOG uses dhcpd for all its automatic DHCP magic; use dnsmasq for simple local DNS, required for Unattended; see the dhcpd sketch after this list)
  3. FOG Cloning System
  4. Unattended Windows 2000/XP/2003 Network Install System
  5. CloneZilla PXE Image (for good measure)
  6. RIPLinux PXE Image
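The dhcpd side is the only piece that needs much thought: it has to hand out addresses and point clients at the laptop's tftpd. A minimal sketch of the relevant dhcpd.conf stanza, with hypothetical addresses (FOG's own setup normally drives this for you):

subnet 192.168.2.0 netmask 255.255.255.0 {
  range 192.168.2.50 192.168.2.150;
  next-server 192.168.2.1;      # the laptop, running tftpd
  filename "pxelinux.0";        # PXE boot loader under /tftpboot
}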

Why? USB booting still seems troublesome, and installing Windows from flash seems very slow. Nearly everything supports PXE these days; if it has a built-in ethernet port, it's pretty much guaranteed to support PXE booting. There is nothing like the feeling of being able to image a machine into FOG over a 1Gb crossover cable in a matter of minutes. Got everything working? Image it and walk away, safe in the knowledge that if somebody comes along and breaks things, you can image it back in minutes, instead of having to do another clean install and build all your updates & software back on top.

There's a little bit of pain in getting all of the separate packages to run from the one /tftpboot/pxelinux.cfg/default, but it's just a matter of careful copy & paste from the canned configs.
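For what it's worth, the combined file ends up looking something like this; the kernel/initrd paths are hypothetical and come from wherever each package unpacks its boot images:

DEFAULT menu.c32
PROMPT 0
TIMEOUT 100

LABEL fog
  MENU LABEL FOG Cloning System
  KERNEL fog/bzImage
  APPEND initrd=fog/init.gz root=/dev/ram0 rw

LABEL riplinux
  MENU LABEL RIPLinux
  KERNEL rip/kernel
  APPEND initrd=rip/rootfs.cgz root=/dev/ram0 rw

Each package's canned config contributes its own LABEL stanza, and menu.c32 turns the lot into a boot menu.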

WRR DNS with PowerDNS

I had an interesting challenge in work recently. We have 3 data centres running our applications, and currently the RR DNS system does what it's supposed to: it spreads the load around each of the 3 DCs evenly. This works fine when all of your data centres have a similar capacity. But ours don't. That causes problems when your load/traffic gets to the point where one of the DCs can't cope. Now, there are many expensive and complicated solutions to this; this however isn't one of them. It's quite simple, and has its weaknesses, but as you'll see it's also quite elegant.

Background

Our infrastructure already relies heavily on MySQL replication & PowerDNS; both are installed on all our public machines. Indeed, we have a large MySQL replication loop with many spokes off the loop, ensuring that all of the MySQL data is available everywhere. PowerDNS is used for both internal & external DNS services, all backed off the MySQL backend on the aforementioned replication loop. This is important to us, as this solution required no new software, just some configuration file tweaks & some database table alterations.

Overview

Each record is assigned a weight. This weight influences the likelihood of that record being returned in a DNS request with multiple A records: a record is returned when its weight is less than a per-query random value between 0 and 100, so a weight of 0 means the record will always be in the set of A records returned (well, almost always), a weight of 100 means it will never be returned, and a weight of 40 means it is returned roughly 60% of the time.

Method

  1. Add an extra column to the PowerDNS records table, called weight; this is an integer.
  2. Create a view on the records table that adds random values to each record every time it is retrieved.
  3. Alter the query used to retrieve data from the records table to use the view and filter on the weight and random data to decide if the record should be returned.

We use this SQL to add the column:

alter table records add column `weight` int(11) default 0 after change_date;

The view then generates a random number between 0 and 100 (via rand()*100) alongside each record:

create view recordsr AS select content,ttl,prio,type,domain_id,name, rand()*100 as rv, weight from records;

The random data is then compared against the record weight to decide if the record should be returned in the request. This is done using the following line in the pdns.conf file:

gmysql-any-query=select content,ttl,prio,type,domain_id,name from recordsr where name='%s' and weight < rv order by rv
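Weights then just get set per record. For illustration, with a hypothetical record name and IPs, the weights used in the 10,000-sample test below would be set like this:

update records set weight = 0  where name = 'www.example.com' and content = '192.0.2.10';    -- dc1: always returned
update records set weight = 40 where name = 'www.example.com' and content = '198.51.100.10'; -- dc2: ~60% of queries
update records set weight = 60 where name = 'www.example.com' and content = '203.0.113.10';  -- dc3: ~40% of queries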

For small sample sets (100), the results are quite poor & the method proves inaccurate, but for larger sets, 10,000 and above, the accuracy improves greatly. I've written some scripts to perform some analysis against the database server & against the DNS server itself. To test the DNS server, I set cache-ttl=1 and no-shuffle=on in pdns.conf, and with cache-ttl=1 I waited 1.1 seconds between DNS queries so that each query hit the backend rather than the packet cache.
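The sample-pdns.pl script itself isn't reproduced here, but the measurement loop is simple enough; a minimal Ruby sketch of the same idea (hypothetical record name, resolver on localhost) shows the shape of it:

require 'resolv'

samples = 1_000
counts  = Hash.new(0)
dns     = Resolv::DNS.new(nameserver: ['127.0.0.1'])

samples.times do
  # each query returns the subset of A records that passed the weight test
  dns.getaddresses('www.example.com').each { |ip| counts[ip.to_s] += 1 }
  sleep 1.1 # outlast cache-ttl=1 so every query reaches the MySQL backend
end

counts.each do |ip, hits|
  printf("%s: %d, %.2f%% (sample size)\n", ip, hits, 100.0 * hits / samples)
end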

Here are some results; sample-pdns.pl was used to gather this data:

Sample Size = 1,000

#### WRR DNS Results
dc1: 462, 46.2% (sample size), 23.38% (total RR)
dc2: 514, 51.4% (sample size), 26.01% (total RR)
dc3: 1000, 100% (sample size), 50.60% (total RR)
total_hits: 1976, 197.6% (sample size), 100% (total RR)

Desired priorities were:
dc1 2/100, 80%
dc2 5/100, 50%
dc3 0/100, 100%

Sample Size = 10,000

#### WRR DNS Results
dc1: 10000, 100% (sample size), 50.57% (total RR)
dc2: 5821, 58.21% (sample size), 29.43% (total RR)
dc3: 3952, 39.52% (sample size), 19.98% (total RR)

pos-1-dc1: 5869, 58.69% (sample size), 29.68% (total RR)
pos-1-dc2: 2509, 25.09% (sample size), 12.68% (total RR)
pos-1-dc3: 1622, 16.22% (sample size), 8.20% (total RR)
pos-2-dc1: 3332, 33.32% (sample size), 16.85% (total RR)
pos-2-dc2: 2548, 25.48% (sample size), 12.88% (total RR)
pos-2-dc3: 1540, 15.4% (sample size), 7.78% (total RR)
pos-3-dc1: 799, 7.99% (sample size), 4.04% (total RR)
pos-3-dc3: 790, 7.9% (sample size), 3.99% (total RR)
pos-3-dc2: 764, 7.64% (sample size), 3.86% (total RR)

total_hits: 19773, 197.73% (sample size), 100% (total RR)

#### Desired priorities were:
dc3 60/100, 40%
dc2 40/100, 60%
dc1 0/100, 100%

As you can see, with the larger sample size, the effect of the weighting becomes much more apparent.

dc1 appeared in the returned records 100% of the time, as expected, dc2 appeared 58.21% (desired percentage was 60%) and dc3 appeared 39.52% (desired percentage was 40%).

What is possibly more interesting & relevant is the number of times a particular DC appears in the top slot (pos-1) of the returned results, as this is the A record most likely to be used by the client. dc1 appears in the top slot 58.69% of the time, with dc2 appearing 25.09% and dc3 16.22%. These results diverge from the desired priorities quite a bit, but are still in order with the desired distribution.

Advantages

  1. No new code/binaries to distribute
  2. Reuse existing infrastructure
  3. Easy to roll back from.

Disadvantages

  1. Fairly coarse grained controls of load balancing (no feedback loop)
  2. At least 1 site should have a weight of 0
  3. No guarantee on the number of records that will be returned in a query (other than records with a weight of 0)
  4. Increased load on the database, generating 1 or more random numbers on each query against the view

Exchange to ICS

I found this post by Ryan Hadley a few days ago and got it working with a little bit of time. I noticed that Thunderbird was displaying all-day events oddly, so I checked the VEVENT info being generated & tweaked it to work correctly with Thunderbird/Lightning. I also dropped in the URL of the event in OWA & fixed it for situations where there are public & private names for the OWA/Exchange instance, handy when you want to go and amend an entry etc.

Hope you find it useful.

Exchange2ICS.tar.gz

Bright Ideas III: Flexible Project Management

There are lots of different ways of tracking a project (i.e. a list of tasks, dates, calendars, time frames, notes etc), with various tools (MS Project, Basecamp from 37Signals, Google Calendar, Horde and a gazillion other applications and online tools).

But so far none of them manages the ideal of all of the information, everywhere. I would love to have the information spread across my PDA (Palm, online & offline), laptop (Outlook, Thunderbird, online & offline) & web-based online access. A certain amount of this can be done with SyncML and various sync tools & sites (Zyb, ScheduleWorld, Funambol). I like replication. Safety in numbers. Add into this the fact that I work in some awkward environments (I have my corporate laptop with Outlook/Exchange, I have client sites where I only have OWA, and sometimes I want to get at the data when I'm out and about).

So, how to solve this problem? An application that supports various storage backends.

Tasks & calendars that can save out details to Horde, Google, and Exchange (over WebDAV/IMAP).