Not in fact any relation to the famous large Greek meal of the same name.

Wednesday, 4 January 2012

Giving An FTDI Serial Port A Persistent Device Node

There are plenty of perfectly good reasons why one might have several FTDI USB-to-serial adaptors attached to the same PC at the same time. Anyone who does, though, will notice that they’re a royal pain, because every time you unplug and replug one, it gets a different /dev/ttyUSBn device node.

Fortunately, there’s a way to use udev to set up unchanging aliases for those ever-changing device nodes. This is done by setting up a udev rule for each FTDI adaptor you care about: one that picks it out by serial number and gives it a custom symlink.

Some of the details are here, but just to sew it all together, here’s what you need to do.

  • First attach your FTDI adaptor, making sure all others are unplugged.

  • ls /dev/ttyUSB*
    — only one should be listed.
  • udevadm info --name=/dev/ttyUSB0 --attribute-walk
    — (using the ttyUSB number listed in the previous step) should dump a huge list of attributes
  • udevadm info --name=/dev/ttyUSB0 --attribute-walk | grep serial
    — will list only a few lines, such as the following —
    SUBSYSTEMS=="usb-serial"
    ATTRS{serial}=="XR00U1BU"
    ATTRS{serial}=="000000000000"
    ATTRS{serial}=="0000:00:1d.7"
  • The first ATTRS line is the one we’re interested in. That’s the USB serial number of the FTDI adaptor. (The subsequent lines are the USB serial number of an intermediate hub, and the PCI address of the USB controller.)
  • Now you need to go and find the udev rules directory: in a standard udev install, that’s /etc/udev/rules.d. In that directory, make a file called 99-local.rules (you’ll need to be root), and add a line like the following (and it must be all on one line; this blog might show it split) —
    KERNEL=="ttyUSB?", ATTRS{serial}=="XR00U1BU", SYMLINK+="ftdiDUINO", MODE="0666"
  • (An earlier version of this page suggested the name 10-local.rules, which with modern versions of udev is checked too early in the process; using 99-local.rules makes it work with any version of udev.)
  • Obviously the ATTRS clause must match the one your adaptor listed above. The SYMLINK clause can be anything (that doesn’t clash with any other device), so name it after the thing it’s plugged into at the other end.
  • Unplug and replug your FTDI adaptor. Whatever ttyUSB number it gets, the symlink /dev/ftdiDUINO will be pointing to it.
  • Oh yes, that udev rule also makes the device node world-writable. So you won’t need to be root to issue
    miniterm.py /dev/ftdiDUINO 115200
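Incidentally, if you’d rather test a new rule without unplugging anything, recent udev can be told to re-run its rules by hand; as root, and assuming a udev new enough to have udevadm:

udevadm control --reload-rules
udevadm trigger --subsystem-match=tty
ls -l /dev/ftdiDUINO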

Saturday, 9 April 2011

The Failure Mode Of Agile Development

Here’s the “traditional” waterfall model of software development:

+------------+------------+------------+
| Design     | Implement  | Test       |
+------------+------------+------------+
^ Start                                ^ Deadline

And here’s its almost inevitable failure mode: design takes too long, or implementation takes too long, but the deadline doesn’t move, so test is cut short and the product ships full of bugs:

+------------>>>+------------>>>+------+.....+
| Design        | Implement     | Test |
+------------>>>+------------>>>+------+.....+
^ Start                                ^ Deadline

Software development organisations have been wearily familiar with this outcome, whether in their own code or other people’s, since at least Brooks (1975). Both the cause and the effect of schedule crunch are widely known and well understood; indeed, the effect is so familiar that reviewers and end-users can often specifically detect it.

Here, by contrast, is the fashionable new agile model:

+------------+------------+------------+
| Red        | Green      | Refactor   |
+------------+------------+------------+
^ Start                                ^ Deadline

In this case, “red” means you write the unit-tests first, but of course they fail (red light), because you haven’t written the code yet. Then you write the code, so they pass (“green” light). But so far what you’ve done is “the simplest thing that could possibly work” – in other words, you’ve been deliberately closely-focussed, overtly short-termist, to get your tests to pass, and a refactoring stage is needed to reintegrate your new functionality with the bigger picture.
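Here’s the cycle in miniature: a sketch only, using a bare assert rather than any particular test framework, and an entirely hypothetical Stack class.

#include <cassert>
#include <vector>

// Green phase (written second): the simplest thing that could possibly work
class Stack
{
  std::vector<int> m_v;
public:
  void Push(int n) { m_v.push_back(n); }
  int Pop() { int n = m_v.back(); m_v.pop_back(); return n; }
};

// Red phase (written first): this test failed, indeed failed to compile,
// until Stack existed
void TestStack()
{
  Stack s;
  s.Push(42);
  assert(s.Pop() == 42);
}

// Refactor phase: nothing to show, and that's the point; reconciling Stack
// with the bigger picture is exactly the work that gets squeezed

int main() { TestStack(); return 0; }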

Agile, of course, involves many of these cycles between project start and project deadline, not just one. (Indeed, some say that each cycle should be as small as a small tomato. I find that the going rate is two small tomatoes to one cup of tea.) So the agile diagram isn’t drawn to quite the same scale as the waterfall one: still, though, developers acquire the same sense of schedule crunch, they skip on to the next task too soon, and the corresponding failure mode occurs:

+------------>>>+------------>>>+------+.....+
| Red           | Green         | Refac|
+------------>>>+------------>>>+------+.....+
^ Start                                ^ Deadline

The cause is the same, but what’s the effect? The code was complete, and – if not perfect – then at least unit-tested, at the end of the green phase. So the product as actually shipped works, which is more than can be said for its waterfallen equivalent.

All that’s missing, in fact, is some of the refactoring effort. Unfortunately, that’s the only place in agile development where any large-scale design work gets done: the design debt that an agile shop takes out by not doing Big Design Up-Front is paid off only in these refactoring instalments. This means that the real effect of schedule crunch on agile projects is that the system ends up under-designed and directionless at every level above the lowest.

And unlike the paper-bag obviousness of the waterfall model’s failure mode, the agile model’s failure mode is subtle and pernicious. Product 1.0 ships and works – because agile development “releases” every sprint, and thus is perfectly fitted for triaging features against time. But the system is a ball of mud. Feature development on Product 1.5 and Product 2.0 takes longer than expected – which agile development also helps to hide, given its stubborn reluctance to countenance long-term planning – because developers eventually spend all their time battling, not intrinsic problems of the target domain, but incidental problems caused by previous instances of themselves.

Only the most obsessed agiliste would claim that agile development doesn’t have a failure mode. But because agile development is new, its failure mode is unfamiliar to us; and because that failure mode is less visibly catastrophic than Brooks’s, it’s easier to overlook. It is, however, real; its very subtlety means we must take particular care to look out for it, and get right on top of fixing it once we see it start to happen.

Conversely, the fact that there are problems that agile development can’t solve, isn’t a fatal blow. Inevitably such problems are the most visible ones – because all the problems which agile development does solve easily, come and go without anyone really noticing. And the failure mode of agile development – the system’s complexity spiralling out of control – can be fixed without doing too much damage to the theory.

And how to fix it? Schedule in some serious refactoring, one subsystem at a time. In his paper Checklist for Planning Software System Production, RW Bemer, writing in August 1966 (August 1966!), asks:

Is periodic recoding recommended when a routine has lost a clean structural form?

Nearly 45 years later, that’s still effectively the best available advice. Whether you call it refactoring or periodic recoding, it of course takes advantage of all the unit tests that the ball of mud already contains. This time round, it also takes advantage of knowledge about how all the parts of the subsystem operate. That knowledge is unlikely to be acquired in a single two-week sprint, so unless you can put someone on the task who already knows the subsystem inside-out (and most of the time you’re in this state, there won’t even be such a person), you’ll find yourselves breaking some of the rules of agile development by formally or otherwise block-booking someone’s time for a larger period. (Agile development aims at having any team member able to take on any task in any sprint – but for that to be okay, there mustn’t be any software problems complex enough to require more than two weeks’ thought. Some software problems just are that big, and context-switching can break the developers’ train of thought.)

This is the answer to the sometimes-asked question, “If, in agile development, everyone does design [as often as once per small tomato], what’s the rôle of the architect?”. Agile development is, in a way, Christopher Alexander’s observation that most things can be made piecemeal. But simplicity cannot be made piecemeal. The contribution of the software architect is simplicity.

Saturday, 22 January 2011

Is “Factory Method” An Anti-Pattern?

Let’s take another look at this version of the “two implementations, one interface” code from that one about portability:

// Event.h v8
class Event {
public:
  virtual ~Event() {}
  virtual void Wake() = 0;
  ...
};

std::auto_ptr<Event> CreateEvent();

// Event.cpp
std::auto_ptr<Event> CreateEvent()
{
  ... return whichever derived class of Event is appropriate ...
}

What I didn’t say at the time is that this is sort-of the Factory Method pattern, though a strict following of that pattern would instead have us put the CreateEvent function inside the class as a static member, Event::Create(). And the pattern also includes designs where CreateEvent is a factory method on a different class from Event, but it’s specifically “self-factory methods” such as Event::Create that I’m concerned with here.
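For concreteness, a strict self-factory version would look something like this (a sketch only, keeping the original’s auto_ptr):

#include <memory>

// Event.h, strict Factory Method form
class Event {
public:
  virtual ~Event() {}
  virtual void Wake() = 0;

  // Defined in event.cpp: returns whichever derived class is appropriate
  static std::auto_ptr<Event> Create();
};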

(As an aside: the patterns literature comes in for a lot of criticism for being simplistic. Which it is: the GoF book could be a quarter the size if it engaged in less spoon-feeding and pedagoguery. (And by pedagoguery I mean pedagoguery.) But in a sense the simplicity and obviousness of the patterns they’re describing is the whole point: thanks to the patternistas (patternauts?), a lot of simple and obvious things that often come up in programmers’ natural discourse about code, and that didn’t previously have names, now do. Reading a patterns book might not make your code much better unless you’re a n00b, but that’s not what it’s for. It’s for making your discourse about code better. In any n>1 team, being able to discourse well about code is a vital skill.)

But what I also didn’t say at the time is that, whether CreateEvent is a member or not, so long as it’s in event.h, this code appears to have cyclic dependencies — to be a violation of the principle of “levelisability” set out in the Lakos book.

What’s going on is that, although the source file dependencies themselves don’t exhibit any cycles, viewing, as Lakos does, each header and its corresponding .cpp file as forming a single component produces a component dependency graph with cycles: win32/event ↔ event ↔ posix/event.

One way around that would be to move CreateEvent out into its own component: a freestanding event factory. With this change, the design is fairly clearly levelisable at both the file level and the component level. This refactoring is an example of what Lakos (5.2) calls escalation: knowledge of the multifarious nature of events has been kicked upstairs to a higher-level class that, being higher-level, is allowed to know everything. (The file event.cpp, as the implementation file for what may now be a completely abstract class, might not exist at all — or it might exist and contain the unit tests.)
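In code, the escalated design might look something like this sketch (the file names are mine):

// eventfactory.h: the new, higher-level component
#include <memory>
#include "event.h"

std::auto_ptr<Event> CreateEvent();

// eventfactory.cpp: the only file that knows about every implementation
#ifdef WIN32
#include "win32/event.h"
#else
#include "posix/event.h"
#endif

std::auto_ptr<Event> CreateEvent()
{
#ifdef WIN32
  return std::auto_ptr<Event>(new win32::Event);
#else
  return std::auto_ptr<Event>(new posix::Event);
#endif
}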

But is it worth it? We’ve arguably complicated the code — requiring users of events to know also about event factories — for what’s essentially the synthetic goal of levelisability: synthetic in the sense that it won’t be appearing in any user-story. Any subsequent programmer working on the code would be very tempted to fold the factory back into event.h under the banner of the Factory Method pattern.

Moreover, in this case the warning is basically a false positive: if Event is truly an abstract class (give or take its factory method), then the apparent coupling between, say, posix/event and event is not in fact present: posix/event can be unit-tested without linking against either event.o or win32/event.o. (Not that, in this particular example, posix/event and win32/event would both exist for the same target — but factory methods obviously also get used for cases where both potential concrete products exist in the same build.) Though conversely, if Event had any non-abstract methods — any with implementations in event.cpp — then it’d be a true positive, not a false positive, as all the different event types would be undesirably link-time coupled together.

One reason the refactoring is worth it is the same sort of reason that fixing compiler warnings (that is, altering the code so they don’t trigger) is worth it, even in instances when the warning doesn’t point out a bug: if you let warnings proliferate, real ones will get lost in the noise. Ideally you aim for the zero-warnings state, so that the introduction of any new warning is an easily-visible alert telling you that there’s a new potential bug to check for. Steve Maguire is talking here about unit tests, but the same applies to compiler warnings: “[W]hen they corner a bug, they grab it by the antennae, drag it to the broadcast studio, and interrupt your regularly-scheduled program”.

Exactly like compiler warnings, cyclic-dependency warnings — which are really design warnings — are sometimes false positives, but likewise it’s worth aiming for the “zero design warnings” state, because it makes new design warnings stand out so. I ran a cycle-checker script (in the spirit of the ones in Lakos) over my own Chorale project, and the result was that it effectively shone a spotlight on all the parts of the code where the design was a bit icky. Every cycle it produced — one was dvb::Service ↔ dvb::Recording, another was all the parts of db::steam depending on each other in a big loop — was a place where I’d at some time thought, “Hmm, this isn’t quite right, but it works for now and I’ll come back and do it properly”. And of course without anything to remind me, I never had gone back and done it properly.

So it turns out that you can’t have both factory methods and levelisability. You have to pick one or the other. And levelisability is better.

Wednesday, 6 October 2010

Quirking a Mac Pro into AHCI mode

The Apple Mac Pro (2006 model) has an Intel ESB2 south-bridge, which includes the SATA controller. Under MacOS, this SATA controller is driven in AHCI mode, which is a great improvement on the “traditional IDE” mode — for one thing, it enables “NCQ” SCSI-like command queueing. But the pseudo-BIOS that boots non-MacOS operating systems on Intel Macs, forces the chip into traditional-IDE mode. This is certainly a good idea for wide compatibility, but for an OS which has AHCI drivers — such as Linux — it’d be good to force it back again.

Instructions do exist for forcing it back again using a hacked Grub bootloader, but for one thing that’s a hack, and for another I don’t use Grub, I use Lilo. The “right” way (well, other than convincing Apple to put an AHCI mode into their pseudo-BIOS) is to use Linux’s “PCI quirk” infrastructure. This patch (against 2.6.35.6, but something similar should apply almost anywhere) gives me toasty AHCI goodness under Linux; the write of 0x40 to config offset 0x90 flips the controller’s SATA mode-select bits from IDE to AHCI:

--- drivers/pci/quirks.c~ 2010-09-27 01:19:16.000000000 +0100
+++ drivers/pci/quirks.c 2010-10-06 20:29:04.000000000 +0100
@@ -1044,6 +1044,15 @@ DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDO
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_HUDSON2_SATA_IDE, quirk_amd_ide_mode);
 DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_HUDSON2_SATA_IDE, quirk_amd_ide_mode);
 
+static void __devinit quirk_intel_esb2_ahci(struct pci_dev *pdev)
+{
+    pci_write_config_byte(pdev, 0x90, 0x40);
+    pdev->class = PCI_CLASS_STORAGE_SATA_AHCI;
+    dev_info(&pdev->dev, "Intel ESB2 AHCI enabled\n");
+}
+
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2680, quirk_intel_esb2_ahci);
+
 /*
  * Serverworks CSB5 IDE does not fully support native mode
  */
And here it is in my dmesg:
pci 0000:00:1f.2: Intel ESB2 AHCI enabled
ahci 0000:00:1f.2: flags: 64bit ncq pm led slum part 
ata1.00: 488397168 sectors, multi 16: LBA48 NCQ (depth 31/32)

All six SATA ports (four for the drive carriers, two by the optical drives) are seen by the ahci driver (which I have built-in to the kernel); loading the ata_piix driver as a module also enables the PATA port by the optical drives. (It's probably not a good idea to compile both ahci and ata_piix into the kernel, in case they fight over the ESB2 controller.)
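For reference, that arrangement corresponds to kernel configuration along these lines (assuming 2.6.35-era option names):

# ahci built in, so the boot disks are there from the start
CONFIG_SATA_AHCI=y
# ata_piix as a module, loaded later for the PATA port
CONFIG_ATA_PIIX=m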

Job’s a good ’un.

Monday, 31 May 2010

Portability, Or Cut-And-Paste?

Suppose you’ve got some functionality with two completely different implementations on different platforms:

// Event.h v1
class Event // Win32 waitable event
{
  void *m_hEvent;
public:
  Event() : m_hEvent(::CreateEvent(NULL, FALSE, FALSE, NULL)) {}
  void Wake();
  ...
};
versus
// Event.h v2
class Event // Posix waitable event
{
  int m_fd[2];
public:
  Event() { ::pipe(m_fd); } // or maybe eventfd on Linux
  void Wake();
  ...
};

And, by the way, if you’ve got a class where only some of the implementation is different on different platforms, you’ve first got an opportunity for refactoring: extract that part of the implementation as a self-contained class. Portability and maintainability are aided if each of your classes is either wholly portable or wholly platform-specific. But as it is, you’ve got a more immediate problem, which is that there are two classes both called Event, and any translation unit that includes both headers isn’t going to compile. Here’s one solution:

// Event.h v3
#ifdef WIN32
class Event
{
  ...
};
#else
class Event
{
  ...
};
#endif
This, plus its close relative
// Event.h v4
#ifdef WIN32
#include "win32/Event.h"
#else
#include "posix/Event.h"
#endif
could well be the most commonly-encountered solutions to this problem in real-world code. But they’re not without problems. One problem is with code analysis tools, including such things as production of automated documentation with Doxygen. Really you want such tools to analyse your whole codebase, and if they do they’ll get hopelessly confused by the two classes both called Event. (You can set up preprocessor defines in Doxygen so that it only sees one Event class -- but that only means the other doesn’t get documented, or that you have two entire sets of Doxygen output for the two platforms, neither of which sounds desirable.)
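(For what it’s worth, that Doxygen workaround amounts to a few Doxyfile settings, something like the following, where the WIN32 define matches the #ifdefs above:)

# Pretend to be Win32, so that Doxygen sees only one Event class
ENABLE_PREPROCESSING = YES
MACRO_EXPANSION      = YES
PREDEFINED           = WIN32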

To solve this problem, you’re going to have to give the classes different names:

// Event.h v5
class Win32Event
{
  ...
};

class PosixEvent
{
  ...
};

No, no, no, this is C++: there’s a built-in language facility for exactly this, namespaces, so you don’t have to reinvent it in the class names themselves:

// Event.h v6
namespace win32 {
class Event
{
  ...
};
} // namespace win32

namespace posix {
class Event
{
  ...
};
} // namespace posix

(Why the comments on the “}” end-of-namespace lines? Because the error messages you get from C++ when you accidentally miss out an end-of-namespace brace are often insane and impenetrable, especially if you do it in a header file, and it’s convenient to be able to grep namespace *.h *.cpp and check whether braces appear in matching pairs in the grep output.)

Notice that both declarations are parsed on every compilation on either platform. This might require you to move some method definitions (those calling such unportable APIs as ::CreateEvent and ::pipe) into a cpp file. But even compiling the other platform’s declarations helps ensure that changes on one platform won’t break the other platform without somebody noticing. And a Doxygen run, or other automated code analysis, will at least get to see both sets of declarations even if not both sets of definitions.
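Concretely, for the Posix half, that might mean something like this (a sketch; the Win32 half is analogous, and the .cpp file name is mine):

// Event.h: declarations only, so it parses on every platform
namespace posix {
class Event
{
  int m_fd[2];
public:
  Event();   // the ::pipe call moves out of the header
  void Wake();
};
} // namespace posix

// posix_event.cpp: compiled only on Posix targets
#include <unistd.h>
#include "Event.h"

posix::Event::Event() { ::pipe(m_fd); }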

While going down the route of helping platform developers avoid breaking other platforms, it’s been conspicuous so far that there’s nothing in Event.h making sure that the Win32 and Posix implementations keep the same API. Without such enforced consistency, there’s a risk of client code unwittingly using non-portable APIs. The way to enforce the consistency, of course, is to derive from an abstract base class containing the API:

// Event.h v7
class EventAPI {
public:
  virtual ~EventAPI() {}
  virtual void Wake() = 0;
  ...
};

namespace win32 {
class Event: public EventAPI
{
  ...
};
} // namespace win32

namespace posix {
class Event: public EventAPI
{
  ...
};
} // namespace posix

This is, after all, exactly the sort of thing you’d do if you had two different sorts of Event that were both used in the same build of the program. Really, the fact that no individual binary will contain instances of both classes doesn’t mean that the quest for well-factored design should be thrown out of the window and replaced with cut-and-paste.

Now, though, all the method calls have become virtual function calls. For most of any given codebase, that won’t matter, but there’ll always be hot paths and/or embedded systems where it does, and indeed an event class might quite plausibly be on such a critical path. And after all, compiling the client code itself provides a potentially large body of checks that the API has remained consistent over time. It’s reasonable to adopt the position that using design techniques to discourage unthinkingly changing core APIs in a non-portable way isn’t actually necessary, so long as the compilation-failure results of such a change are always speedily available from all target platforms, such as from an automated-build or continuous-integration system.

If, conversely, your class is only used in situations where nobody’s really counting individual machine cycles, you could hide the implementations altogether, at the cost of ruling out stack objects and member variables of type Event and requiring a heap (new) allocation every time one is created:

// Event.h v8
class Event {
public:
  virtual ~Event() {}
  virtual void Wake() = 0;
  ...
};

std::auto_ptr<Event> CreateEvent();

// Event.cpp
std::auto_ptr<Event> CreateEvent()
{
  ... return whichever derived class of Event is appropriate ...
}

But for now let’s assume you can’t really conceal the two platform-specific declarations, and go back to Event.h version 6 or version 7. Let’s look at how client code gets to pick which of the two implementations it uses. For a start, not like this:

class NominallyPortableDomainAbstraction
{
#ifdef WIN32
  win32::Event m_event;
#else
  posix::Event m_event;
#endif
  ...
  m_event.Wake();
  ...
};

Flouting our design rule that each class is either wholly portable or wholly platform-specific, this sort of thing couples portability decisions into all client code, which is very undesirable. This is much better:

// Event.h v9
... as before in v6 or v7

#ifdef WIN32
typedef win32::Event Event;
#else
typedef posix::Event Event;
#endif

// Client code
class NominallyPortableDomainAbstraction
{
  Event m_event;
  ...
  m_event.Wake();
  ...
};

It’s still not great, though: if there are lots of classes involved, lots of code ends up inside the #ifdefs. How can we minimise the amount of #ifdef’d code? Well, here’s one way:

// Event.h v10
... as before in v6 or v7

#ifdef WIN32
using namespace win32;
#else
using namespace posix;
#endif

But this, of course, introduces everything from those namespaces into the surrounding namespace. Not only is that namespace pollution, it’s potentially misleading for client code, as there may be Win32-specific classes or functions -- ones with no Posix equivalent, for use in Win32 situations only -- which are now accessible in portable code without the telltale win32:: prefix. (And, of course, vice versa for Posix-specific ones.) Really we want to explicitly enumerate which classes and APIs are intended to be portable. So we can do this:

// Event.h v11
... as before in v6 or v7

#ifdef WIN32
namespace platform = win32;
#else
namespace platform = posix;
#endif

using platform::Event;
using platform::PollThread;
...

The intention is that the using declarations list precisely those facilities which are available on all platforms, but with differing implementations on each. Non-portable classes or functions can stay in namespace win32 (or namespace posix) so that any use of them in client code immediately flags that code up as itself non-portable.

There is one problem with this neat solution, though: it doesn’t actually compile. Or rather, once you do the same thing in several different header files, you’ll find that multiple namespace X = Y; statements for the same namespace X aren’t allowed, even if Y is the same each time. So, sadly, you have to come up with a different name for “platform” each time (or centralise the using in one header file, causing potentially-undesirable widespread dependency on that file):

// Event.h v12
... as before in v6 or v7

#ifdef WIN32
namespace eventimpl = win32;
#else
namespace eventimpl = posix;
#endif

using eventimpl::Event;
using eventimpl::PollThread;
...

There’s only one remaining wrinkle, which is that you can’t forward-declare a class-name if that name only exists due to using. So if there are header files that traffic in Event* or Event& and could otherwise be satisfied with a forward declaration class Event; and avoid including Event.h, then you can’t use Event.h version 12. (Version 9 is also out, as you can’t forward-declare typedefs either.) The best you can do is probably this:

// Event.h v13
... as before in v6 or v7

#ifdef WIN32
namespace eventimpl = win32;
#else
namespace eventimpl = posix;
#endif

class Event: public eventimpl::Event {};
class PollThread: public eventimpl::PollThread {};
...

although naturally that only works as-written if the eventimpl base classes don’t have constructors other than the default constructor; if they did, you’d have to write forwarding constructors in each derived class, making the code a lot less neat.
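For example, if (hypothetically) the implementation classes took a name for debugging purposes, each wrapper would need a forwarding constructor along these lines:

class Event: public eventimpl::Event
{
public:
  explicit Event(const char *name) : eventimpl::Event(name) {}
};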

Wednesday, 26 May 2010

Think Same: Mapping An Apple Keyboard Like It’s A Normal One

The aluminium “laptop-style” Apple keyboards are really nice: quick and easy to type on. I’ve got one on my main Linux machine. But they’re mapped oddly (or at least the UK one is): backtick is next to Z, and backslash next to Enter. As I’m usually watching the screen and not the keyboard while typing, the fact that these think-different mappings are borne out by the printing on the keycaps doesn’t help.

So I needed to swap them back again. And, X being the stovepipe it is, I needed to do it twice, once for inside X (including Gnome and KDE) and once for the console.

X was actually easier: I added this to ~/.xinitrc:

# Apple silver keyboard has these keycodes swapped
xmodmap -e "keycode 94 = grave notsign grave notsign bar bar"
xmodmap -e "keycode 49 = backslash bar backslash bar bar brokenbar"

For console use there doesn’t seem to be a similar way of modifying just certain keycodes, so you end up needing to make a whole new keymap:

dumpkeys > keymap.txt
(edit keymap.txt)
loadkeys < keymap.txt

Here’s the diff I had to apply to the standard UK map to get the Apple keyboard working the way my fingers expect:

--- std_keymap  2010-03-12 13:49:34.000000000 +0000
+++ silverapple.keymap  2010-03-12 13:50:39.000000000 +0000
@@ -93,9 +93,9 @@
        shift   control keycode  40 = nul             
        alt     keycode  40 = Meta_apostrophe 
        shift   alt     keycode  40 = Meta_at         
-keycode  41 = grave            notsign          bar              nul             
-       alt     keycode  41 = Meta_grave      
-       control alt     keycode  41 = Meta_nul        
+keycode  86 = grave            notsign          bar              nul             
+       alt     keycode  86 = Meta_grave      
+       control alt     keycode  86 = Meta_nul        
 keycode  42 = Shift           
 keycode  43 = numbersign       asciitilde      
        control keycode  43 = Control_backslash
@@ -201,10 +201,10 @@
        control alt     keycode  83 = Boot            
 keycode  84 = Last_Console    
 keycode  85 =
-keycode  86 = backslash        bar              bar              Control_backslash
-       alt     keycode  86 = Meta_backslash  
-       shift   alt     keycode  86 = Meta_bar        
-       control alt     keycode  86 = Meta_Control_backslash
+keycode  41 = backslash        bar              bar              Control_backslash
+       alt     keycode  41 = Meta_backslash  
+       shift   alt     keycode  41 = Meta_bar        
+       control alt     keycode  41 = Meta_Control_backslash
 keycode  87 = F11              F23              Console_23       F35             
        alt     keycode  87 = Console_11      
        control alt     keycode  87 = Console_11

And to get this loaded on every boot, I stuck it in /etc/rc.local:

# Apple keyboards start up in "F-keys are magic" mode; this puts them in
# "F-keys are F-keys" mode
echo 2 > /sys/module/hid_apple/parameters/fnmode

# Also they have the backslash and backtick keys swapped
loadkeys < /etc/silverapple.keymap

Oh, right, yes, the function keys: they think different too, or at least they default to think-different mode on power-up. To get them thinking same, you need to have the Linux hid_apple module loaded, and to set its “fnmode” parameter as above. (Other systems may use other ways of setting module parameters.)
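(If your system loads hid_apple as a module rather than building it in, the same setting can be baked in with a modprobe configuration file instead; for example, a file /etc/modprobe.d/hid_apple.conf containing:)

# F-keys are F-keys
options hid_apple fnmode=2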

If you’ve found this and want to check that your fingers are mapped the same way mine are, this is the list of keys that, with the setup above, do not generate the symbol printed on the keycap:

labelled          generates
§                 ` (backtick)
± (shift-§)       ¬ (notsign)
@ (shift-2)       "
" (shift-')       @
\                 #
| (shift-\)       ~
` (backtick)      \
~ (shift-`)       |

Overall you lose the ability to type “§” and “±”, and gain “¬” (meh) and, rather surprisingly, the hash character “#”, which isn’t marked anywhere on the Apple keyboard.

Sunday, 31 January 2010

How Facebook Wasted An Hour And A Half Of My Life

Now it should be admitted up-front that, taken as a whole, Facebook has in fact wasted far more than an hour and a half of my life. But all the other hours and halves, I at least felt it was my choice to waste them. This time, however, it was definitely Facebook’s choice instead — or Apple’s.

The problem started when Facebook for Iphone refused to do anything, showing only the message “This version of the Facebook application is no longer supported. Please upgrade to version 3.0.”

Despite having always thought that the whole point of web applications, or websites in general, is that you design them carefully enough to start with that you can support old URLs or clients in the field indefinitely — you can’t, after all, reach out and rewrite everyone in the world’s bookmark files if you want to change a URL — I consoled myself that at least updates to Facebook for Iphone are free, so went off to the Iphone App Store to download the newer version. But that failed too: apparently version 3.0 of Facebook for Iphone required at least Iphone OS 3.0, and mine was still on 2.2.

Upgrading the OS on a jailbroken Iphone can be stressful, but I thought that, as Facebook was pretty much the only Iphone application I actually used, even the worst-case scenario of bricking the phone and going back to the Nokia 8890 wouldn’t lose me that much. It turns out that the procedure for upgrading a jailbroken Iphone to OS 3.1.2 is as follows:

  • Find out how. I’m not even including the finding in the hour and a half.
  • Reboot, because it only works from MacOS.
  • Download Pwnagetool, BL-3.9, and the IPSW. (I always think that the file icon for IPSW files ought to be a blue scarf, but it isn’t.)
  • Try and download BL-4.6, but the link doesn’t work.
  • Go and fetch BL-4.6 on a USB stick from the Windows laptop you last did an Iphone upgrade from. (This is actually an improvement over the previous process, which only lets you know you’ll need the BL files halfway through the upgrade itself.)
  • Run Pwnagetool, which immediately stops because you need at least Itunes 8 for the “restore from specific IPSW” feature, and you only have Itunes 7.
  • In Itunes, run “Check for updates”, which runs the system Software Updater program, which offers loads of nonsense. Untick everything except Itunes, and download and install it.
  • Run Pwnagetool again, which now does everything it needs to and creates the custom IPSW.
  • Put the Iphone into recovery mode.
  • Run Itunes, which refuses to start because it needs Quicktime 7.5.5 or later.
  • Go and find Software Updater (it’s in System Preferences) and run it again. Untick everything except Quicktime, and download and install it.
  • Reboot, because the Quicktime installer demands it.
  • Put the Iphone into recovery mode again because it’s fallen out of it during the reboot.
  • Re-run Itunes, and have it tell you “Itunes could not contact the Iphone software update server because you are not connected to the internet.”
  • Click on “More information” about this “no internet” error, and watch it open a web page in Firefox to tell you about it.
  • Start to wonder whether Apple are just deliberately messing with your head in order to scold you for jailbreaking their phone.
  • Unplug the network cable from the Mac and plug it directly into the router, bypassing the HTTP proxy.
  • Reconfigure both MacOS and Firefox not to use the proxy.
  • Quit and re-run Itunes, because there’s no obvious way of telling it to rescan for devices.
  • Actually install the upgrade, which takes ages to “prepare”, much of which time you spend eyeing the old Nokia 8890 which never caused you any of these problems.
  • Once the Iphone restarts, observe with relief the little magnifying-glass icon that 2.2 didn’t have, thus providing evidence that the upgrade actually did something. Also notice that it’s rearranged all your icons; arrange them back the way you like.
  • Install the Facebook application from the Iphone App Store.
  • Open “Contacts” and realise that in fact the upgrade has wiped all your user data.
  • Notice with relief that Itunes is now asking you whether to restore the Iphone from a backup.
  • Notice with alarm that the backup in question is from 2008.
  • Restore from the backup anyway because it’s got to be better than nothing.
  • Rearrange all your icons back the way you like them again.
  • Reinstall the Facebook application from the Iphone App Store, because the restore restored the old version.
  • Unplug the network cable again and plug the Mac in back behind the firewall where it belongs.
  • Reboot back into Linux.
  • Start wondering what important phone numbers you’ve obtained since 2008.
