Pandering to prejudice

It’s hard to express how profoundly depressed I am by the current state of politics in the UK. I mean, I try to avoid it, but I keep on hearing about it.

The BNP is a racist party. That’s bad. UKIP seem a bit racist. They may not actually be racists, but they certainly have policies that appeal to racists. The current government, on the other hand, running scared from mid-parliamentary-term council election results, actively develops policies specifically to attract the votes of racists and xenophobes, because it’s easier to pander to prejudice than to educate.

It’s disappointing because it reinforces the divisive rhetoric of the xenophobic fringe. It’s disappointing because the Conservatives really should know better. They do know better.

Thus, we get nonsense like this:

The Immigration Bill, introduced in the Queen’s Speech today, would require future private landlords to make simple checks on new tenants to make sure that they are entitled to be in this country. The government will ensure that UK nationals are not adversely affected and avoid red tape on honest landlords in the private rented sector.

The private rented sector in England is desperately in need of regulation. Letting agents are almost universally terrible. I’ve had some pretty awful experiences dealing with them, and I’d prefer never to go near one again (unless I’m armed with some kind of edged weapon or other means of removing the head or destroying the brain, I suppose).

There are a hundred useful laws that could be written to clean up the rotten business of residential lettings. The government does none of this. Instead, it chooses jingoistic nonsense for the UKIP-panic-inspired Queen’s Speech, forcing landlords to check the immigration status of tenants.

This is a country in which rogue slumlords let converted garages and walk-in freezers at extortionate prices to desperate people. Do you think they’ll check the immigration status of their tenants/victims?

If this pointless requirement has any effect other than to give letting agents another excuse to tack another bullshit fee (common here, but illegal in Scotland!) onto the process of renting, I’ll eat my hat.

£53 a week

Iain Duncan Smith claims that he could live on £53 a week. I don’t doubt it. I could easily live on £53 a week.

I could save on travel by cycling from my flat, conveniently located in London’s Zone 2. If my bike breaks down, I can repair it with all my tools—or just ride my other bike.

For short journeys, I can walk, and if my shoes wear out, it’s OK: I’ve half a dozen other pairs.

I’d have to cut down on the electricity: storage heaters aren’t cheap. Fortunately, my flat is modern enough that it never really gets freezing cold, even with the heating off. I can always just put on some warm clothing. I’ve plenty of that.

Food’s no problem, either: I’ve a rack full of pans, and a cupboard full of herbs and spices that let me cook simple, wholesome food on the cheap. Dal and rice costs pennies per serving.

I’d have to cut down on wine and beer, but I could still drink plenty of fancy Japanese tea. I’ve a few nice teapots to brew it up in. It probably doesn’t cost much more per cup than generic teabags from the corner shop.

For the well equipped, it’s easy to economise; for those who aren’t, it’s a lot harder. If you can drive to Tesco, buy loo roll by the cubic metre, then store it in a store room of your mansion, you’ll pay less per movement than someone who buys a couple of rolls at a time from Costcutter. If you have a well-stocked kitchen, you can make nourishing food from a few cheap ingredients.

I suspect, however, that for those who have little, £53 doesn’t go very far, and that discussions of cost rather miss the underlying problems.

Writing Rails apps without wanting to kill everybody

I don’t think it’s any secret that working on a significant-sized Rails codebase is not nearly as free and easy as any number of make-a-blog-in-fifteen-minutes screencasts would have you believe.

The good news is, I think I have the solution, and it’s simple. The bad news is, that doesn’t necessarily mean it’s easy.

The short explanation is: write small components and minimise their coupling. This allows you to develop and test them largely in isolation. If that all sounds glaringly obvious, well, it is, in a way. The gulf between intention and action remains nonetheless huge.

This means that your controller actions should just create, update, and destroy resources. If those resources don’t correspond to database tables, that’s OK: just create an intermediate plain old Ruby object that does. It also means that your ActiveRecord objects should not be responsible for sending emails, resizing photos, or anything else not related to record persistence, retrieval, or validation. Trust that ActiveRecord works as advertised.
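To make that concrete, here’s a minimal sketch of the kind of intermediate object I mean, using ActiveModel::Model from Rails 4 (a few ActiveModel includes do the same job in Rails 3). The Signup, User, and Company names are purely illustrative:

class Signup
  include ActiveModel::Model   # enough interface for form_for, validations, and errors

  attr_accessor :email, :company_name

  validates :email, :company_name, presence: true

  # "Saving" a signup touches two tables; the controller neither knows nor cares.
  def save
    return false unless valid?
    company = Company.create!(name: company_name)
    User.create!(email: email, company: company)
    true
  end
end

The controller action then does nothing but build and save a Signup, exactly as it would for a table-backed model.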

Finally, disable database access (use NullDB) in all tests except for some full-stack integration tests that run through the happy path for each feature, and, with luck, you’ll find that you can make changes with relative ease.
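For the curious, the unit-test end of that looks roughly like this (a sketch, assuming the activerecord-nulldb-adapter gem):

# spec_helper.rb (sketch): unit specs connect to the null adapter, so any
# accidental database access does nothing instead of dragging the suite down.
# The schema file tells NullDB what columns exist without a real database.
ActiveRecord::Base.establish_connection(
  adapter: :nulldb,
  schema: 'db/schema.rb'
)

The handful of full-stack integration tests live in a separate suite that connects to the database in the normal way.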

On the last two greenfield projects on which I’ve tried this approach, the test suites run in under ten seconds. That’s a significant improvement in programmer happiness over typical test suites that take five, ten, or thirty minutes to run.

It’s not quite as easy as all that, though. There are a few things in Rails that make it a hard path to follow, and they’re almost all related to ActiveRecord.

First, the fact that ActiveRecord is responsible for a large part of the work of translating to and from the HTML-forms representation of data makes it difficult to swap in other Ruby objects.

Second, the natural object-oriented way of dealing with entity relationships—of delegating everything to the immediately related object—tends not to produce efficient queries. In fact, it can produce almost comically inefficient database usage. This is, of course, the evergreen object-relational impedance mismatch, but it’s still a problem.
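The canonical illustration, with hypothetical Post and Author models:

# The natural object-oriented code: one query for the posts,
# then one more per post. Ten posts, eleven queries.
Post.limit(10).each { |post| puts post.author.name }

# Two queries, but only because the caller knows about the database
# and asks for the authors up front.
Post.includes(:author).limit(10).each { |post| puts post.author.name }

The first version is what delegation naturally produces; the second is efficient precisely because it stops pretending the database isn’t there.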

It’s my strong suspicion (based on about eight years of wrestling with it) that ActiveRecord might be fundamentally inimical to making scalable and maintainable applications, and that we’d be better served by an easier way to build plain Ruby objects that can interact with forms and form data, combined with a means of expressing the full object graph required for each action.

If that sounds like I’m suggesting that Rails’s path of least resistance—of one database table to one model to one controller—is not the right convention, then yes, I think that’s what I’m suggesting. I can’t think of many applications I’ve worked on in which resources as seen by the user correspond one-to-one with database tables, and it seems almost absurdly technocentric to expect that they would.

I still don’t know quite what the solution looks like, but I feel like I’m getting closer.

I’d be very interested to learn about alternatives that might avoid the same pitfalls, in any language or on any platform.

Rail Travel Vouchers

I complained earlier about receiving compensation for a cancelled train in the form of £25 Rail Travel Vouchers rather than a more fungible means of exchange, but they’re not as useless as you might think, at least if you live in London: you can turn them into Oyster credit.

I successfully did this at an Overground station. I don’t know whether that’s significant: Overground stations are part of National Rail, despite their TfL branding (which is why the measurements are railway Imperial rather than Underground metric, among other little details); whether you can do the same at an Underground station I can’t say.

One does not simply … (Turbolinks edition)

The web is a profoundly broken medium in many ways. The network is unreliable, servers are unreliable, and the human beings who write code for dynamic services are most unreliable of all. Nonetheless, amongst all those problems, one that I’ve never really found significant is the milliseconds taken to load and process a page’s JavaScript and CSS—and if that were an issue, I think my solution would be to optimise and minimise the JavaScript and CSS, not write more of it!

The beta of Rails 4.0 is now available. One of the new features is Turbolinks. It’s a Ruby gem that actually contains code written in CoffeeScript, almost as if someone is trying to troll a large number of developers, but let’s leave that to one side. Here’s what it does:

Speed-up the client-side with Turbolinks, which essentially turns your app into a single-page javascript application in terms of speed, but with none of the developmental drawbacks (except, maybe, compatibility issues with some existing JavaScript packages).

I was wary of Turbolinks when first announced, as I said at the time:

I understand the motivation for this kind of stuff, and it’s neat, but I’m wary of it because of the additional complexity it introduces for a relatively small benefit.

I may be misleading myself, but it’s rare (on a desktop browser, at least) that it’s the page rendering time that I really notice: far more significant is usually the latency, or the time taken to transfer the significant proportion of a megabyte of HTML that’s smothering a few kilobytes of text.

On the downside, it replaces something that just works with something that … mostly just works. See elsewhere on this page: “Loads blank white screens in firefox 15” / “This is now fixed”. And that’s the problem: you’ve replaced something that works in every browser with something that you have to (or someone has to) test in every browser, and whose failure modes must all be handled. What happens when you click on a “turbolink” on an overloaded server, for example? My experience so far has been that this kind of enhanced link is usually faster, but the probability of nothing happening in response to a click is not insignificant.

I’m aware that I probably sound like an old grouch.

Others have made similar comments; Yehuda Katz wrote:

At the end of the day, unless Turbolinks can perfectly emulate the browser’s behavior, attempts to use Turbolinks with third-party JavaScript will either fail often or require an ever-growing library that handles more and more targeted edge-cases.

It seems that I wasn’t wrong: there have been 183 closed bugs on Turbolinks to date and five are currently still open. Don’t misinterpret me: it’s good that they’ve fixed these issues, but it reinforces the fact that it’s not just a simple case of swapping in the page body with a bit of JavaScript and an asynchronous request.

One does not simply swap in the page body with a bit of JavaScript

And that’s my objection, really: Turbolinks fixes a non-problem by adding a lot of complexity. That just seems a bit self-indulgent. Like those mobile websites that replace scrolling (just works everywhere) with heavy and broken swipe pagination (hey, we tested it on iOS 6 on WiFi!), it’s not driven by user needs.

Many of the websites I use on a daily basis are too clever for their own good, and break horribly when exposed to network latency, dropped connections, busy servers, or browsers in the wild. It doesn’t save me time if I have to phone up to give my electricity meter reading because the asynchronous form submission just doesn’t work. It doesn’t save me time if I have to log into a website in Firefox to close an item because nothing happens when I press the button in Chrome. Both of those happened yesterday, on the websites of companies worth hundreds of millions of dollars. My user experience would have been better if they’d been plain old CGI scripts! At least then I’d have got a sensible response from the submission: a timeout, or an error, but something.

If you must replace pages with an asynchronous call rather than just GET followed by 200 OK, then please do use Turbolinks, because there’s zero chance that you’ll get it working right on your own. But even then, maybe still don’t bother, because your visitors might actually prefer predictable click behaviour and memory usage, and websites that just work across a wide range of browsers and network connections. You’ll save yourself trouble, too.

Thesaurus attack!

I received a scam email today that had obviously been put through some kind of automatic synonym replacement filter with results that were both amusing (see below) and pointless (because it was flagged as spam anyway):

Hello very expensively in Christ I am called Bruno Lopez. Every day which passes, I very feel sick and I have more and more enough fear in me because of my diabetes

I have a bottom in the nap of ’ 450.000 euro which is nothing else than the rest of my properties(goods) and I would like to donate it, in the concerns(marigolds) to set by you to help persons in need. I ask you to accept this sum because she could well you are very useful. I ask you for nothing in return.

I can see that very expensively is just an inappropriate replacement of dearly, but I’m a bit puzzled by a bottom in the nap of. I guess that nap must be rest, and bottom might be base, though it still doesn’t make much sense.

But what in the alias of deity are concerns(marigolds)?!

One trillion bytes

I see that Google are offering 1 TB of free online storage as a sweetener for people who buy the rather expensive 239 PPI Chromebook Pixel:

Since this Chromebook is for people who live in the cloud, one terabyte of Google Drive cloud storage* is included with the Pixel.

I thought I’d calculate how long it would take me to upload a terabyte with my fairly-average-for-the-UK ADSL2+ internet connection.

Upstream data rate: 444 kbps
÷ 8 = 55.5 kB/s
× 60 × 60 × 24 = 4.79 × 10⁶ kB/day = 4.79 GB/day

1 TB = 1000 GB
1000 GB ÷ 4.79 GB/day = 209 days

At a nominal 30 days per month, that’s

209 days ÷ 30 days/month = 6.96 months

SEVEN MONTHS. And that’s an unrealistic maximum that doesn’t allow for overheads. In reality, I think it would be closer to a year. If you wanted to be able to use your ADSL connection for anything else during the same period, a couple of years.
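If you’d like to play with the numbers, the whole sum fits in a few lines of Ruby:

kb_per_day = 444.0 / 8 * 60 * 60 * 24  # upstream kbps -> kB/day, ~4.79 million
days = 1_000_000_000 / kb_per_day      # 1 TB = 10^9 kB
puts days.round                        # => 209
puts (days / 30).round(1)              # => 7.0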

I envy anyone who can actually use a terabyte of online storage. I don’t think this offer is going to cost Google very much.

Using UPnP IGD for simpler port forwarding

If your router or ADSL modem supports the UPnP Internet Gateway Device protocol (and most of them do), you can forward ports to services on your network much more easily and more flexibly than through the admin interface.

I had to buy a new ADSL modem/wireless router this week, as my old one was no longer working properly: instead of the normal slightly disappointing 6 Mbps I usually get here in ‘Digital Britain’, it was down to a few hundred kbps, with highly variable performance. I thought it might be the slightly swollen capacitor on the board, so I replaced that, but to no avail. Fortunately, you can now buy decent ADSL modem/wireless routers from the supermarket for not much money, so, whilst it did cost me £44, it was a fairly easy problem to solve. As a fringe benefit, I now have a much faster wireless network in my flat, so it’s not all bad.

My new router has all kinds of complex options on its management interface, but it’s much more limited than its predecessor in one respect: port forwarding. On the old one, I could forward arbitrarily many ports, and I could choose to map an external port to a different internal one—useful for slightly obfuscating SSH access without having to change the configuration of the internal network. On my new router, however, I can only forward ten distinct port ranges, and the external and internal ports must match. At least, that’s all I can do through the clunky and slow management interface. But it supports UPnP, and UPnP does allow mapping an external port to a different internal port.

Enter MiniUPnP, a project that provides a client and a daemon that implement the UPnP Internet Gateway Device specifications. We only need the client, which is available on Ubuntu in the miniupnpc package.

You can then forward a port as simply as:

upnpc -a 192.168.1.2 22 3333 TCP

This will forward TCP connections from the internet on port 3333 to port 22 on 192.168.1.2. To remove it, use:

upnpc -d 3333 TCP

That’s a bit slow, though, as it has to discover the router every time. You can speed that up by supplying the root description URL. First, find it:

upnpc -l | grep desc:

Then supply it as the -u parameter every time you use upnpc, e.g.:

upnpc -u http://192.168.1.1:80/DeviceDescription.xml -l

The remaining step is to set up the connection automatically. As my server is configured via DHCP, I can make this happen every time it’s connected to the local network by putting an executable script in /etc/dhcp/dhclient-exit-hooks.d/ (I called mine upnp, but the name doesn’t really matter). I’ve chosen to use upnpc to tell me the local IP address of the server:

#!/bin/bash
export LC_ALL=C

# The router's root description URL, found earlier with `upnpc -l | grep desc:`
upnpc="upnpc -u http://192.168.1.1:80/DeviceDescription.xml"
external=3333  # port exposed to the internet
port=22        # local SSH port

# upnpc reports the LAN address it's connecting from; extract it
ip=$($upnpc -l | grep "Local LAN ip address" | cut -d: -f2)

# Remove any stale mapping, then forward external port 3333 to local port 22
$upnpc -d $external TCP >/dev/null 2>&1
$upnpc -a $ip $port $external TCP >/dev/null 2>&1

Now, as soon as the server gets a DHCP lease, it will delete any existing port forwarding and forward port 3333 to its SSH server. The really nice thing is that the router doesn’t need to know about the server at all.

LRUG lightning talk: 1 + 2 = 3

Along with eight other people, I presented a lightning talk at LRUG on Monday night. Twenty slides, twenty seconds each, with automatic advance, a bit like Pecha Kucha or Ignite.

Photograph of me talking

I talked about addition and subtraction in Ruby, coercion, and how to subvert it to make Ruby treat numbers and strings with reckless abandon. I also defined a method called numberwang?

Skills Matter kindly recorded a video of the talk. (Unfortunately, the video is rather small on their site, and the access settings mean that you can’t watch it on the main Vimeo site, but get-flash-videos will let you download and watch it at a sensible size, if you like.)

Rather than mess around with the infinite time-sink of presentation software, I just wrote out what I wanted to say in Markdown format, with horizontal rules between slides, then wrote a few lines of Ruby to extract the slide contents, centre them in the terminal, and advance after 20 seconds. It worked out pretty well.
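The core of such a script is only a dozen or so lines; something along these lines does the job:

#!/usr/bin/env ruby
# Markdown terminal slides: split on horizontal rules, centre each slide,
# advance automatically every 20 seconds.
require 'io/console'

slides = File.read(ARGV.fetch(0)).split(/^---+\s*$/)

slides.each do |slide|
  rows, cols = IO.console.winsize
  lines = slide.strip.lines.map(&:chomp)
  print "\e[2J\e[H"                            # clear screen, cursor home
  puts "\n" * [(rows - lines.size) / 2, 0].max # vertical centring
  lines.each { |line| puts line.center(cols) }
  sleep 20
end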

The other possibly interesting fact is that all the Ruby in my slides was evaluated. I wrote a small script to take everything that was marked up as a block of Ruby code, evaluate it, and replace any # => comments with the result of evaluation, ensuring that everything I said actually worked!
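In outline, the evaluation step works something like this (a simplified sketch, not the exact script):

# For each line annotated with # =>, evaluate the code so far and splice
# in the real result. Re-running from the top for each annotation is
# wasteful, but keeps earlier definitions and multi-line constructs intact.
def annotate(code)
  seen = ''
  code.lines.map do |line|
    if (m = line.match(/\A(.*?)\s*# =>.*\z/m))
      value = eval(seen + m[1])
      seen << line
      "#{m[1]} # => #{value.inspect}\n"
    else
      seen << line
      line
    end
  end.join
end

A little more plumbing to find the Ruby blocks in the Markdown source and substitute the annotated versions completes the job.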

All the source documents for the talk are in a Git repository on BitBucket: I took advantage of their free private repositories to store my talk privately before the event, then opened access after I’d presented. Very handy.

Thanks to Stuart Eccles for the timely photograph.

Lubuntu 12.10: problems and solutions

In my ongoing quest for computing minimalism, I’m using the LXDE desktop environment (via Lubuntu) on my new laptop in place of XFCE, which (via Xubuntu) I’d been using for the past few years. I’m happy with it: it’s fast, flexible, and it doesn’t get in the way. It didn’t quite work right out of the box, though. The fixes were all simple, but not so easy to find.

My Lubuntu 12.10 desktop

It’s not really Lubuntu’s fault that things aren’t quite right. Most of its packages come from regular Ubuntu, and all the problems I encountered were due to Ubuntu packages that either contained bugs or didn’t specify all their dependencies.

I hope that if I write down the problems and fixes here, I’ll at least save someone else a late night.

Problem 1: No IBus IME indicator

If you use an IME for entering complex scripts (like, say, Japanese, as I do), you’ll notice that the indicator that shows the current input status is missing.

This is simply due to missing dependencies. The indicator applet uses Python, and needs a few libraries to work properly.

The solution is to install the following packages (via apt-get install or similar):

  • libappindicator1
  • python-appindicator
  • python-gconf
  • python-glade2
  • python-pexpect
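Or, in one command:

sudo apt-get install libappindicator1 python-appindicator python-gconf python-glade2 python-pexpect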

Problem 2: Dead keys don’t work when IBus is used

If you use IBus in Lubuntu for the aforementioned complex script input, you might find that typing simpler scripts becomes impossible. For example, I use a keyboard layout that allows me to type AltGr + " followed by a to produce ä.

What I got instead was just a plain old a, at least in GTK apps.

The solution is to install GTK support for IBus, by installing these packages:

  • ibus-gtk
  • ibus-gtk3
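Or, as a single command:

sudo apt-get install ibus-gtk ibus-gtk3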

Problem 3: No window shadows when using xcompmgr for compositing

This isn’t a serious problem. Openbox, LXDE’s window manager, doesn’t do compositing by itself. It’s easy to add compositing just by running a compositing manager like xcompmgr. This gives buffered windows, true transparency, and window shadows. Window shadows aren’t essential, but I find them helpful in making window ordering and edges clearer without wasting precious pixel space.

Version 1.1.6 of xcompmgr doesn’t draw window shadows with Openbox. The previous version, 1.1.5, does.

The solution is to install the older version and lock it. Version 1.1.5 can be found via pkgs.org (amd64 or i386 versions). Download it, then:

sudo dpkg -i xcompmgr_1.1.5*.deb
echo xcompmgr hold | sudo dpkg --set-selections

Incidentally, my settings for xcompmgr are:

xcompmgr -cCfF -r4 -l-5 -t-5

Apart from those small (now resolved) hassles, I’m really happy with Lubuntu so far.