Japanese elections STFU!

Japanese elections are a remarkably noisy affair: trucks drive around playing recorded exhortations to vote for a particular candidate for weeks beforehand. Candidates stand in public areas and drone on through microphones. It can be quite tiresome. But here’s how not to deal with it:

A Briton has been arrested in Tokorozawa, Saitama prefecture, on charges of disrupting the electoral process after grabbing a candidate’s microphone and shouting, “Japanese elections need to shut up!”

The Tokorozawa office of Saitama police on 23rd April arrested British citizen Edward Jones (34), a teacher of conversational English based in Nishinippori, Arakawa-ku, Tokyo, on charges of violating the public election law.

It is alleged that, on the evening of 23rd April, on the pedestrian path in front of JR Tokorozawa Station, the suspect grabbed the microphone being used by a city council election candidate in mid-speech, and shouted [in Japanese], “Japanese elections need to shut up!”

According to the city police, Jones had been drinking with a friend immediately before the incident. A campaigner reported the incident at a police box. A member of the station staff then detained the suspect on the station premises.

I translated the article from Sankei News.

What the suspect is actually alleged to have shouted down the microphone is “日本の選挙はうるさい”. The literal meaning is “Japanese elections are noisy”, but the word うるさい (noisy) is commonly used not so much as an observation of sound pressure levels as an invitation to the originator of the noise to desist from producing it. A better translation might even be “Japanese elections STFU!”

Thanks to Andrew Plummer for directing me to this story.

Crisis? What crisis?

Repeat after me: I will not let your lack of planning become my crisis. Keith Mitchell at PIPEX taught me this.——Bill Thompson

I don’t care about downtime nearly so much as I care about data loss, so, from my point of view, I’m pretty happy with the outcome of the Amazon Web Services problems last week.

To recap: Amazon AWS services hosted in Virginia suffered connectivity problems which weren’t fully resolved for several days. This affected a number of large sites, such as Reddit, and the hosting platform Heroku.

The reasons for using AWS instead of dedicated hosting are still valid: it’s quick, easy, and cheap to provision servers and storage because it’s an anonymous, automated system. Of course, those advantages turn to disadvantages when something goes wrong: it’s still an anonymous automated system that doesn’t really tell you much about what’s going wrong or when it’s going to be fixed.

But you know what? I think that’s OK. Computer systems fail, sometimes catastrophically. Hardware breaks. The only reason anyone noticed the Amazon outage is that it affected a large number of sites simultaneously. Running your own systems may make it easier to find someone to blame when they go down, and to nag while they try to fix it, but would it really be better? Can you hire the networking and database expertise you’d need at the price you’re willing to pay? Can you set up a multiple-location backup system that actually works?

A sense of perspective is important here. We’re talking about hosting websites, not brain surgery or moon landings. No one’s dying. People overreact, it may be a bit embarrassing, and it’s not nice to be the target of frothy-mouthed panic from jittery middle managers, but that’s not really critical. If you didn’t have a continuity plan a week ago, then you didn’t think the service was that important. As the quote above says, your lack of planning is not my crisis.

The worst possible outcome of the AWS downtime would be a reactionary exodus from AWS to another hosting platform that hasn’t failed yet. It won’t be any better.

Now, you might complain that Amazon said that this outage wouldn’t happen, so you didn’t plan for it. Have you planned for nuclear strikes on the eastern seaboard of the United States of America? If you have, then congratulations: the AWS outage probably didn’t affect you. If you haven’t, then you implicitly accepted the risk of downtime. In most cases, that’s the rational choice, but it’s also an acceptance of the fact that your website is not critical.

Amazon’s recent outage affected connectivity to virtual servers on the US east coast, and, by extension, RDS, their hosted database service. One of the interesting features of RDS is that it provides automated snapshots of the database at daily intervals. During the downtime, anyone using an affected RDS database was able to create a new working database instance from the last snapshot with two or three mouse clicks. Far from being a failure, that’s actually a remarkably resilient system, and a quality of backup far better than most people would manage running their own database servers. However, given the potential loss of up to 24 hours’ data, or the difficulty of re-integrating the data from that period later, I suspect that many people would have chosen simply to wait for connectivity to be restored. That’s an acknowledgement that preserving data is more important than uptime.

Take fright at ‘the cloud’ if you will. Run back to traditional hosting providers. Pay them lots of money. Wait days for them to rack up servers. Fill in change requests in Word documents whenever you want to do something. Make sure that they set up regular off-site backups of your databases and storage. Ensure that those backups work. And then, next time Amazon goes down, you’ll be safe. But don’t fool yourself that you won’t have downtime, or that it will be any easier or more reliable.

There are ways to architect applications for high availability, but they come with costs and trade-offs of their own. It’s your choice.

Said.fm hack weekend: RadioBox

A couple of weeks ago, along with a few others, I joined my friends at Said.fm for a hack weekend. We threw around some suggestions before and during the event; one of the ideas that coalesced was a visualiser for Said.fm’s RadioBox events, and that’s what I ended up working on.

What we wanted to achieve was to make it easy to produce a synchronised visual accompaniment to an audio recording, the idea being that the audience can sit and listen whilst simultaneously being presented with appropriate visual information.

SoundCloud lets you attach comments to specific points in tracks, which makes it a useful interface for easily synchronising data with an audio stream. Our hack uses the SoundCloud API to get these comments and, depending on what kind of information they contain, presents that in the browser as the audio reaches the appropriate point. For example, a Flickr URL becomes (via the Flickr API) a background image. A Wikipedia link is turned into an abstract of that encyclopedia page. Plain old text is displayed as-is.

At the moment, it’s a fairly basic system: the CSS is perfunctory, and it doesn’t look particularly appealing. But it is easily extensible: all that it takes to add a new kind of information is to define a server action that decodes the URL and to add a regular expression for that URL. With a little design love, it could be rather beautiful.
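The dispatch mechanism can be sketched roughly like this. This is an illustrative outline only, not the actual RadioBox source: the function and handler names are invented, and the regular expressions are simplified.

```javascript
// Each kind of comment content is recognised by a regular expression;
// anything that matches nothing falls through and is displayed as-is.
// (Illustrative sketch: names and patterns are invented, not RadioBox code.)
const handlers = [
  { type: 'flickr',    pattern: /https?:\/\/(?:www\.)?flickr\.com\/photos\/\S+/ },
  { type: 'wikipedia', pattern: /https?:\/\/\w+\.wikipedia\.org\/wiki\/\S+/ },
];

function classifyComment(text) {
  for (const handler of handlers) {
    const match = text.match(handler.pattern);
    if (match) return { type: handler.type, url: match[0] };
  }
  // Plain old text: no URL recognised, so show the comment verbatim.
  return { type: 'text', body: text };
}
```

Adding a new kind of information is then just a matter of appending another entry to the list and writing the corresponding server action, which is what makes the system easy to extend.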

The demo works in Chrome; it might work in Safari or even in Internet Explorer, though I haven’t tested those. It definitely won’t work in Firefox, unfortunately, because that browser doesn’t support MP3 audio, which is all that’s available from the SoundCloud API. It’s sad when politics and legal arguments get in the way of technology, but that’s the way it is.

If you’re on a slow connection, it will take a while to start: it loads all the images and data beforehand to ensure that everything can be shown at the right point in the audio without waiting for HTTP requests.
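The preloading step amounts to resolving every resource before playback starts. Here is a minimal sketch of that idea (again illustrative, not the actual code), with the loader injected so the same logic works for images, Flickr JSON, or Wikipedia abstracts:

```javascript
// Resolve once every resource has loaded; playback would be enabled
// in the .then() callback.
function preloadAll(urls, loadFn) {
  return Promise.all(urls.map((url) => loadFn(url)));
}

// In the browser, the loader for images might wrap `new Image()`:
function loadImage(url) {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => resolve(img);
    img.onerror = reject;
    img.src = url;
  });
}
```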

That said, here are a couple of examples (tip: move the mouse to the bottom of the page to get to the audio controls):

You can also see the source code on GitHub.

For comparison, have a look at the original content on Said.fm’s SoundCloud page.

Using RVM with tcsh

RVM is a very useful tool for working with multiple Ruby interpreters, and it’s especially handy for testing libraries against multiple interpreters. Unfortunately (for me), it only works with bash, zsh, and similar shells, and I use tcsh—but I’ve found a workaround.

I suspect that very few people use tcsh these days, and that the proportion of Ruby programmers among them is lower still. But I do, for reasons that I’ll have to write about another time.

A comment by chetstone pointed me in the direction of a solution. tcsh doesn’t have functions, so it’s not possible to modify the current environment in quite the same way as RVM does in bash, but it’s easy enough to use RVM to modify a bash environment and then load tcsh within it. You only need to do this for certain commands, such as rvm use. Here’s how to do it:

First, install RVM:

> curl -Lo rvm-install.sh https://rvm.beginrescueend.com/install/rvm
> bash rvm-install.sh

Next, create an rvm.tcsh wrapper script somewhere in your path, and make it executable:

#!/usr/bin/env tcsh

set rvm_command="source ${HOME}/.rvm/scripts/rvm; rvm $*"

if ($1 == "use") then
  bash -c "$rvm_command && tcsh"
else
  bash -c "$rvm_command"
endif

Finally, add an alias to ~/.tcshrc:

alias rvm rvm.tcsh

From now on, when you type rvm in tcsh, it will pass that command to the real RVM. If the command starts with rvm use, you’ll be dropped into a new tcsh session using the specified RVM environment.

I’ve added annotations to my prompt to tell me if I’m in an RVM sub-shell (also in ~/.tcshrc):

set prompt_info = ""
if ($?RUBY_VERSION) then
  set prompt_info = "[$RUBY_VERSION] $prompt_info"
endif
# plus any other information you want

set prompt = "$prompt_info%. %# "
# yours may be more colourful

You can do all the normal kinds of operations:

~ > ruby -v
ruby 1.8.7 (2010-06-23 patchlevel 299) [x86_64-linux]
~ > rvm install 1.9.2
Installing Ruby from source to: /home/paul/.rvm/rubies/ruby-1.9.2-p180,
this may take a while depending on your cpu(s)...
Install of ruby-1.9.2-p180 - #complete
~ > rvm use 1.9.2
Using /home/paul/.rvm/gems/ruby-1.9.2-p180
[ruby-1.9.2-p180] ~ > ruby -v
ruby 1.9.2p180 (2011-02-18 revision 30909) [x86_64-linux]

It’s a simple wrapper, but that’s all that’s needed to use most of RVM’s functionality.

A website is not a ship

A new crime-mapping website for England and Wales is experiencing a “temporary problem” as millions of people log on every hour, the Home Office has said.——BBC News

A website is not a ship. You don’t crack a bottle of champagne against its bows, push it down the ramp, and wave it off to the high seas. This ought to be obvious. And yet, it seems that many—maybe most—people don’t grasp this fact.

If your website is useful, you don’t need to launch it. It will be useful at any point in its lifespan: today, tomorrow, maybe even next year, insha’allah.

Conversely, if your website is not useful, then—quite apart from the fact that you shouldn’t have bothered in the first place—it won’t be any more useful in the next few hours than at any point in its sad, drawn-out existence.

All that launching your website—with the accompanying spots on breakfast news and columns in the morning freesheets—will achieve is to drive thousands of times the regular traffic to the site in a brief period. Can it handle that? Maybe, if you’ve designed it to handle that. But the time you spent doing that optimisation was almost certainly wasted, because you created the problem that you now need to solve.

I’ve said it many times before, and I’ll say it again: narcissism is one of the biggest causes of technical problems.

A website isn’t a TV programme. It’s not the X Factor, where a significant portion of the nation tunes in to share the ephemeral moment of watching some no-talent assclown gurn their way through a Beyoncé cover. (Caveat: I’ve never actually subjected myself to X Factor.) There is nothing to be gained by having everyone click at once.

If your website is indeed useful, then you certainly do want to get the word out. But a saturation media campaign is not, emphatically not, the way to do it, because it will not deliver a good service to the visitor.

There are ways to design a website to handle massive spikes of traffic. The best one of all, though, is to realise that the internet is a pull medium, not a push one, and to work with its limitations and peculiarities rather than pretending it’s just another broadcast channel. It isn’t a broadcast channel, it doesn’t behave like one, and treating it as one will not end well.

Make reasonable optimisations. Allow for traffic spikes: if your website is useful, then you should expect those to occur. But don’t try to drive the entire population of a country of 70 million people to your website on day one. That’s just stupid.

How much for a favicon?

Web development work (Logo and fonts £2,317.50, Favicon £585, E-newsletter £1,080)——Costs of new ICO corporate identity as at 21 July 2010

You know what a favicon is? It’s that little icon that you get in the corner of a browser tab (unless you’re using Safari). It’s a square image, 16 or 32 pixels on a side (a single .ico file can contain both sizes). It’s pretty easy to make, and it’s straightforward enough to deploy: at the simplest, you put it in the root of the web server’s directory tree, and it just works.

So £585 for a favicon is surely some kind of obscene rip-off, right?

ICO favicon

I’m not so sure. I mean, yes, £585 solely for making a 32 by 32 pixel image like this would be daylight robbery, but I bet there’s more to it than that. I bet that actually making the icon was the least of the work.

Don’t believe me? I’ve had to deal with enterprise-grade hosting providers that won’t, as a matter of policy, lift a finger until you’ve filled in and emailed a highly specific three-page Word document and given a magic password.

In any case, I wouldn’t expect an important government website to be deployed simply as and when a cowboy developer feels like it. So there’s probably a sensible process for testing beforehand. And, even though it seems a bit silly, even something as simple as adding a favicon has to go through that process. It probably makes sense to roll it into a bigger release, doesn’t it?

I’ve seen how febrile corporate types get when you helpfully add in a favicon in the course of other work. Never mind the fact that their frivolous, barely-visited, buggy website has barely worked for years: the sudden appearance of those 256 pixels is the most urgent existential threat that has ever impinged on their tiny brand-obsessed minds. That good turn will be punished with sudden, frantic phone calls demanding the immediate removal of said icon. (Yes, I am bitter about that. Buy me a beer and I’ll tell you all about it.) So you can’t just make a favicon. It needs to be approved by all the relevant stakeholders [shudder].

I know how everyone and their 12-year-old son who’s ‘quite good at art’ is an expert design critic whose opinion must be sought and concerns placated before any design can be considered finished.

I know how many hours of horse-trading it takes to prioritise development, and to get a feature approved that could actually just have been done in half the time.

So, yeah, I’ll put a favicon on your site for £10, but I’ll also bill you for the bureaucracy I have to go through to get to that point.

  • Making a favicon: £10
  • Stakeholder engagement process & deployment planning: £575


You can make websites quickly and cheaply, but not if you’re riding on the back of a lumbering pachyderm, whether it’s public or private sector. That’s why small businesses will always have some advantages over large organisations.

I wish government could be more responsive and efficient, but I think it’s probably unrealistic to expect it to have costs of the same order as a hobbyist developer sitting in his bedroom.

See also: Oh. Christmas tree. by Paul Clarke.

Edit: and also: On £585 favicons… by Harry Metcalfe.


anything you say may be taken down and coloured in——irkafirka.com

Sometimes, the internet delivers us delightful things.

In the past few days, I’ve read a few articles about the Japanese scientist who has promised to grow a live woolly mammoth within five or six years. I would love to see a real live woolly mammoth! I’ve seen some bits of mammoth from the Siberian permafrost (at the Aichi World Expo in 2005), and that merely whetted my appetite.

Today, I wrote on Twitter:

The mammoth cloners have really got my hopes up. They’d better not disappoint me. I demand woolly mammoths!

A few hours later, I received this illustration in reply:

drawing of mammoths on each other's backs

Irkafirka is an interesting project: they choose random tweets that inspire them, and post an illustration. Today, I was that random inspiration.

Library cards are useful

I just found out that my library card gives me online access to a whole range of reference material. Maybe you knew that already; maybe I’m the very last person in the UK to find out.

Here’s a selection of what’s available to me with a Southwark Libraries card:

  • The Oxford English Dictionary
  • Oxford Music Online (a superset of The Grove Dictionary of Music and Musicians)
  • A database of articles from major British and Irish newspapers—this even includes the now-paywalled Times.
  • The Times Digital Archive: every page from that newspaper, scanned and indexed.

I was utterly delighted to find that I could use the OED free of charge with my (also free) library card.

You might have a different selection available; you’ll have to check your own local library’s website.

The Times Digital Archive is an entertaining resource; a quick search for bicycle led me to this delightful article from 1869:

VELOCIPEDING.—A journey on bicyles from Liverpool to London, by way of Oxford and Henley, has just been accomplished by two of the Liverpool Velocipede Club. On Wednesday evening, Mr. A. S. Pearson and Mr. J. M. Caw, the honorary secretary of the club, set off from the shores of the Mersey for a “preliminary canter” to Chester, from which city they started in earnest on Thursday morning. After a ride of 59 miles they arrived at Newbridge, near Wolverhampton, where they stayed the night. On Friday the velocipedians, having traversed the Black Country, went on to Woodstock, a distance of 69 miles, where they slept. On Saturday night the tourists arived in London, feeling none the worse for their long ride. Their bicycles caused no little astonishment on the way, and the remarks passed by the natives were most amusing. At some of the villages the boys clustered round the machines, and, when they could, caught hold of them, and ran behind until they were tired out. Many inquiries were made as to the name of “them queer horses,” some calling them “whirligigs,” “menageries,” and “valaparaisos.” Between Wolverhampton and Birmingham attempts were made to upset the riders by throwing stones. The tourists carried their luggage in carpet bags, which can be fastened on by strapping them either in front or on the portmanteau plate behind. This is stated to be the longest bicycle tour yet made in this country, and the riders are of the opinion that, had they been disposed, they could have accomplished the distance in much less time.

Some things haven’t changed much in 140 years: the public is still confused by and occasionally antipathetic towards bicycles. Other things have, such as vocabulary: it’s no longer customary to refer to the denizens of the provinces as natives.

BBC fights against openness again

The BBC believes that you should only use its online services through the proprietary software of one of its preferred technology companies. As a result of this, you can no longer use iplayer-dl/iPlayer Downloader.

The BBC has recently changed the iPhone iPlayer to use an HTTPS connection for part of the process. This requires the client to offer an Apple-signed client certificate as part of the connection. This has always been the case for the iPad version, but the iPhone change is new.

This means that a client device that does not have an Apple certificate can no longer connect to the BBC servers. This includes my downloader.

I have a few options:

  1. Extract the client key from an iPhone or iPod Touch.
  2. Rewrite the downloader to use RTMPE instead of HTTP.
  3. Give up.

If anyone can explain how to extract the client certificate from an iOS device, I’ll be very grateful.

I shan’t be making any upgrades in the immediate future. In the meantime, however, if you’re looking for an alternative, you could look at get_iplayer.

The reason the BBC gives for restricting access to non-blessed clients is piracy concerns. I would therefore like to appeal to irony by suggesting that both BitTorrent and Usenet are excellent ways to get hold of BBC TV programmes. I encourage you to check them out.

Please don’t use the comments as a support forum for get_iplayer! There’s a mailing list for that.

The secret truth about WikiLeaks

When the wise man points at the moon, the fool looks at the finger——attributed to Confucius

Come, gather round; I’ve got a secret to reveal to you: WikiLeaks is not a website.

I mean, sure, it has a website. Several, in fact. It makes use of the internet to obtain and disseminate information. But it is not itself a website.

This seems to be a difficult concept for the establishment and sections of the media to grasp. When BBC News reported that:

US cables released by the Wikileaks website suggest that Yemen allowed secret US air strikes against suspected al-Qaeda militants.

it wasn’t really accurate. The cable was first published by the Guardian, who obtained it from WikiLeaks; it was subsequently published in WikiLeaks’s own growing online archive of publicly-available documents. But the Guardian printed the information on paper, too. The web is just one of several media by which it was reported.

When Amazon dropped WikiLeaks’s website from its cloud hosting platform, it was an inconvenience for them, but it made no difference to the flow of information: five newspapers in five countries already have the full quarter of a million cables. It doesn’t matter to WikiLeaks whether Amazon dropped them because Joe Lieberman asked them to or whether they spontaneously decided to. It matters a lot with regard to freedom of expression in this corporate age, but that’s another question. The fact that WikiLeaks.org was soon back online, hosted in Sweden, is similarly irrelevant.

When the WikiLeaks.org domain name registration was withdrawn, it raised difficult questions about the vulnerability of the domain name system to political and extrajudicial interference. It didn’t stop the cables coming, though. The papers already have them. And, although it doesn’t matter, WikiLeaks was still available directly via its numeric IP address, or through a number of alternative host names such as WikiLeaks.ch.

That the powers that be—and I realise that there is no great conspiracy of authority here; the cables themselves tell us that!—appear to be playing whack-a-mole with the WikiLeaks website makes me think that they don’t really understand the problem in front of them. In fact, it leads me to suspect that the portrayal of WikiLeaks as a website might have been a brilliant piece of misdirection. People in general don’t tend to grasp information theory, but it’s sometimes particularly easy to laugh at just how little understanding some sections of the establishment appear to have:

The Defense Department demands that WikiLeaks return immediately to the U.S. government all versions of documents obtained directly or indirectly from the Department of Defense databases or records

(That reminds me a lot of this exchange.)

There are, I think, two important things about WikiLeaks. The first is the use of technology—of the internet and cryptography—to facilitate the collection of information from anonymous sources. The second is the fact that information is available in a digitised form. This latter property means that leaking a gigabit of information is hardly more difficult than leaking a single bit. If someone has the information and the motivation to leak something, it will be leaked. All that WikiLeaks does is to solicit this information actively. It’s a brand, and an organisation, and a network, but it’s not really a website.

Still, something must be done! And trying to shut down websites does look like doing something. Keeps ’em busy, I suppose.