1. ...as if millions of voices suddenly cried out in terror and were suddenly silenced. I fear something terrible has happened.

    Download an EPub edition of this post courtesy of redditor agonnaz

    Update: My erstwhile colleague Mathias wrote up his thoughts about his role in this story

    scribbled design notes

    Some time on Friday, IMDb announced that they intended to shut down their message board system, permanently. I don't find this to be a particularly surprising decision. I'm more surprised that the message boards are still there, in 2017, seemingly essentially unchanged for the last fifteen or so years. They've had a few coats of paint, and a handful of feature improvements, but they largely seem to be backed by the same system design developed by the in-house tech team, way back at the dawn of the century. And for the bulk of that early development time, I was the primary developer. As it has said on my homepage for many years, 'you can blame me for the message boards'.

    A long time ago in a galaxy far, far away

    I was incredibly excited to be asked to join the IMDb developer team at the end of 2001, aged 30, with almost a decade of professional software development under my belt already. 2001 sounds today like the relative stone age of the modern web, which of course in many ways it was. By this point I had already spent several years working on basic web applications in the original dot-com boom, and I was in awe of the IMDb, which even back then was a somewhat venerable internet institution. Founded in 1990, it predates the public World Wide Web, having started out as lists of data shared via USENET posts. At the time I joined, they were a couple of years into their Amazon ownership, and starting to expand the team.

    As I started, they were just on the cusp of launching IMDbPro and had an ambitious roadmap to completely rebuild the main website from the inside out, using the shiny new technology stack the small development team had built from the ground up to power the IMDbPro application server. This, I thought, was a very clever hack - imdb.com was a hugely popular website, and this approach of adding industry-focused features to a subscription remix of the site, built on top of the same data feeds (still basically formatted text lists, using the conventions of the old USENET based tools), meant that in effect we could use the far smaller user base of the pro site as a test-bed for the new tech, and gradually port sections of it across to the terrifyingly high volume 'consumer' site, without having to do a rewrite and a relaunch. To further sweeten the deal, the test-bed users would actually be paying to break in the newer software, and helping us iron out the bugs.

    In 2001, a shiny new high performance web stack meant perl. Apache 1.3.x running mod_perl, to be more precise. In case you don't know what mod_perl is, it's a piece of semi-deranged brilliance that embeds the perl interpreter into Apache as a persistent runtime and exposes the internal API of the HTTP server to it. This lets you write applications that are effectively themselves Apache webservers, with direct access to every part of the HTTP serving lifecycle. Furthermore, by using another neat hack, Registry.pm, you could take modules or scripts that had been designed to work as CGI scripts and get some of the same speed boosts, unmodified. With these techniques, you could write perl applications that went almost as fast as Apache could, and in the late 90s/early 00s it was this or PHP. PHP back then was pretty grotty, I thought, and the cool kids were all using perl. Perl had libraries, and excelled at gluing existing bits of UNIX together. This meant you had to write far less of the application by hand. Yup, by hand. Let me dig into that a little bit.
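
    For the curious, a mod_perl 1.x content handler looked roughly like the sketch below. This is a minimal illustration rather than anything from the IMDb codebase - the module name is invented - but it shows the shape of the thing: your code receives the Apache request object directly and hands back an Apache status constant.

    # A minimal mod_perl 1.x content handler, roughly the shape of what we
    # wrote against Apache 1.3. The package name is purely illustrative.
    package My::Hello;
    use strict;
    use Apache::Constants qw(OK);   # mod_perl 1.x status constants

    sub handler {
        my $r = shift;                    # the Apache request object
        $r->content_type('text/html');
        $r->send_http_header;             # explicit in the Apache 1.3 API
        $r->print("<html><body>Hello from inside Apache</body></html>");
        return OK;
    }
    1;

    # wired into Apache with something like:
    #   <Location /hello>
    #     SetHandler perl-script
    #     PerlHandler My::Hello
    #   </Location>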

    It's the pictures that got small

    Writing web software back then was a fairly different prospect. In my circles, we didn't really have much in the way of frameworks. There were a few enterprise-y things floating around that converted your big IBM and Oracle and Microsoft client/server application into some kind of terrible intranet suite that required ActiveX support to load any pages, and I'd poked around with Zope with some interest, but by and large if you were doing anything interesting, you used FreeBSD, or linux (2.2, with SMP support!).

    You'd most likely use Apache 1.3, forking, and write your site as a combination of static pages, server side templating and CGI exec-d programs, in some kind of UNIX scripting language (usually perl, but any of the usual suspects were relatively common, including actual honest-to-god shell scripts), or maybe you'd write a performance critical CGI as a binary in C.

    For data processing, you might connect your application directly to a pre-existing company RDBMS, if you had such a thing and your DBA, if you had such a thing, let you, or you might deploy a SQL db on or near your web host - usually MySQL 3.22 with ISAM tables and a quasi-religious intolerance of foreign key support, but that was OK, you could do all the data validation in application code. (A bit like JavaScript databases in 2017.)

    We had libraries for common tasks, like parsing wire protocols and file formats, and wrapping utilities to do things like generate or resize graphics, but you'd stitch a selection of these together in an ad-hoc fashion to make a 'system'. A typical web stack would be table-based HTML with attribute styling and inlined images for typography and spacing, possibly pre-rendered, but maybe dynamically generated, then some CGI scripts for user management full of hand coded cookie and session tracking. A relational database for persistence, using hand coded SQL and a custom database schema. Page generation via a self-written templating system, gluing skeletons of layout-oriented HTML around variable interpolation with inline conditionals. This part would often run as server-side includes, but sometimes this would also have just been handled by CGI scripts.
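
    Everybody seemed to have their own version of that 'variable interpolation with inline conditionals' templating idea. Purely as an illustration (this isn't any particular system, least of all IMDb's), the core of one of those hand-rolled engines was often little more than a substitution like this, with conditionals and loops bolted on from there:

    # A toy hand-rolled template interpolator, of the kind everyone seemed
    # to write for themselves circa 2001. Names are invented.
    sub render {
        my ($template, $vars) = @_;
        # replace {{name}} placeholders with values from a hash
        $template =~ s{\{\{(\w+)\}\}}{ defined $vars->{$1} ? $vars->{$1} : '' }ge;
        return $template;
    }

    print render('<td>{{title}} ({{year}})</td>', { title => 'Blade Runner', year => 1982 });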

    Maybe you'd have a hand built filesystem cache in front of this. 'Front-end' work back then often meant building static page representations, first in Photoshop or Illustrator, which would then be converted into single HTML page masters in Dreamweaver or FrontPage and handed over to the back-end coders to clean up and crack apart into templated fragments, by hand. Single byte string encodings throughout, no threading, a light veneer of Object Orientation over internal data structures. You'd have a small cluster of actual physical servers, perhaps in a data center, but often on-premises, sometimes in racks, sometimes actual tower servers in the corner, directly connected to an internet router of some pitiful capacity. Sometimes your cluster was as small as one machine.

    Architecturally you'd have a webserver, perhaps two if you wanted to split 'heavy' dynamic serving from lighter or static content. Your database might end up on its own box with better IO and networking. If you had enough web servers you might put some kind of load balancer in front, perhaps a HTTP reverse proxy as an accelerator cache (often another Apache, sometimes Squid). In 2001 I'm not sure I fully understood what a CDN even was. You'd deploy with FTP or maybe rsync; sometimes the production filesystems were locally mounted via NFS or SMB and you'd just copy stuff over, or edit it in place. Version control, if you even had any, might just be renaming files, perhaps SCCS or RCS. Advanced users might have CVS. Designers might have a pre-OS X Macintosh, suits would use Windows, and developers had something more of a free-for-all - Windows 2000, desktop linux, I used BeOS for several years whilst that was still a thing. And seemingly everybody, but everybody, used emacs to write code - GNU emacs was common, but the cool kids were using XEmacs. Sometimes a remote XEmacs client on your deploy host attached to your local X11 server over the wire. Crazy days.

    My God, it's full of stars

    So that's the scene in 2001 when I joined the amazon.com family as an SDE, working on the new IMDb platform. I was a fairly hot perl programmer, having spent a good few years designing and rewriting custom web 'frameworks' and optimising mod_perl architectures. I was really good at SQL, at least I thought I was in comparison to most of my peers, and I had developed a particular fondness for the then slightly uncommon PostgreSQL database engine. I'd done quite a few web things - early corporate intranet portals, hobby sites, moderately popular dot-com publishing houses, but this was a step change into an entirely bigger league.

    In reality, especially as I look back with hindsight, I can see I had very little idea what I was doing, but hardly anyone did. There wasn't a lot of published material on architecture - everyone read Greenspun, but there was nothing like the modern tech web, the scalability porn, the conference circuit. No HN, no Reddit, no twitter, no Facebook, and looking things up on StackOverflow was still almost a decade away. It wasn't even that easy to find what scant information there was; you have to remember that Google was barely yet a thing. Information sharing tended to happen on mailing lists, using actual email, or maybe still on USENET. (Paul Graham hadn't yet written 'A Plan for Spam', and we didn't really have functional automated spam filtering.)

    IMDb had an unusual working setup for the day, as befitted its birth from a federation of USENET correspondents. Everyone worked completely remotely, scattered around the world. At the time I joined, there was an express preference for staff who could attend a weekly company meeting over lunch, near Bristol (in a cafeteria, attached to a swimming pool), and the majority of the tech team building the software was by then based around this area. Home Internet connectivity was still largely 56kbps or lower dial-up, possibly metered, although I was lucky enough to be in a part of Bristol eligible for an insanely fast 1Mbps cable connection.

    Anticipating having to work on significant amounts of DP, potentially offline, I asked if I could be provided with a small server with SMP and RAID capacity, and was rather surprised by a small tower HP Proliant rig turning up at my house, cocooned onto a loading pallet too big to fit through the front door. I had to unglue it piece by piece and carry it up to my 'home office', a box bedroom full of IKEA tables, slightly too tall to be comfortable desks, and assemble it in place. I christened it mavis.imdb.com, and installed Debian stable on it, which involved most of a day figuring out the hardware RAID drivers, and from that point on its shrieking fans and disks were a constant part of my daily life for the next half-decade. Eventually a house move allowed me to get it into a makeshift server cupboard where I could deaden this persistent din behind a door and blankets and curtains. I occasionally wonder now, in my middle-age, if I have a frequency gap in my hearing to match that particular pitch, but if so, it's not affected me enough to care to get it measured. As the noise tended to interfere with music, for the first few years I developed a habit of listening to BBC Radio 4 morning to midnight, and therefore, when there wasn't a test match to listen to, for a brief period of my life I developed an unusual degree of expertise in the comings and goings of 'The Archers'.

    One consequence of the remote working, and patchy connectivity, was that the development work in the tech team was informally silo-ed up into sub-systems that individual engineers had ownership over. The very first task I worked on, after getting a working build of the entire stack onto mavis, was porting the statistics page across to the new web stack (internally known as 'mayhem', after Project Mayhem; everyone was big on movie references, naturally) by way of familiarising myself with the application and infrastructure. I made a perfunctory stab at that, and then I was searching around for something more substantial to own. The forums, or 'message boards', seemed to be a natural candidate.

    The most recent piece of work I'd done at my previous gig had been to contribute a threaded discussion system to our general purpose content management system, which allowed a tree of conversations to be attached to any content id in the catalogue, so the site users could have a threaded comments section attached to any content. This had worked out pretty well. By contrast, IMDb had a pretty threadbare generic forum system, a standalone phpBB installation, almost entirely isolated from the rest of the system, organised into a few dozen general purpose boards, with, I think, even a separate login system.

    A business goal for the next year was to drive up user registrations, and the forums system seemed like a good feature to assist with this. It offered additional site value that was only available to registered users. Another target was to integrate the boards system more directly into the movie database, allowing people to have conversations directly attached to the pages for movies and shows. Another important requirement was to allow for a system that would let the data contributors directly communicate with the data management team. So I was tasked to do something with the forums to meet these broad goals, and the implementation and design of it was largely up to me, informed by regular feedback from the wider team on weekly progress reports and via the team lunch meeting.

    We're going to need a bigger boat

    I considered a number of approaches.

    • I could have extended the PHP forum system as it was, to support the new features, but I didn't really consider that for more than a couple of minutes - it was PHP, which I didn't know terribly well, and disliked, and it would be harder to tightly integrate with the rest of the mayhem app, which was a domain optimised mod_perl web service.
    • I wondered about wrapping a USENET service, which had a lot of appeal, in as much as a lot of the base mechanics of hierarchy would already be covered, and it offered a highly scalable architecture and a portable standard with several existing back-end implementations. I really liked this idea a lot, but I rejected it eventually when I realised that it would be difficult to build an integrated web front end that offered as much functionality as a stand-alone newsreader. If I had been able to find a decent open-source web NNTP client I might very well have done this.
    • Another alternative would have been to find a different forum system that was more amenable to customisation. I considered using the slash system that powered slashdot.org, but I rejected that because at the time it had a reputation for poor performance and uptime, and was struggling to cope with trolls. I really should have paid more attention to those problems, both of which would come back to haunt me.
    • Eventually, using a mixture of naivety, hubris, ego, enthusiasm and pragmatism, I decided I'd build something custom, scaling up the ideas I'd used for the comments module in my previous job.

    The basis for that system was something I was quite proud of, and in some senses it was quite a clever hack. We had wanted threaded discussions, but it's famously tricky to model trees in SQL. My first attempt, with hydrating flat lists into trees at runtime from a SQL result set was computationally a little bit expensive for the hardware of the time, and slowed up page rendering in the articles with comments.

    So I came up with an ingenious scheme. I'd store several sort fields against the comment records - one representing the vertical position in the thread, and one representing the indentation level - and every time a reply was inserted into a comment thread, I'd compute the correct indent level by adding one to the parent's, set the vertical position to one larger than the parent's, and then increment every larger sort position by one, so that the records were stored sequentially in thread order when read by that index. As I was also storing the timestamp and a sequential post id, I thereby had a system that could trivially read back conversations by order of time, order of posting or order of reply. This meant that posting was relatively computationally expensive, but only on the database server, whereas reading was simple and fast. I reasoned that reads were many times more frequent than writes, and biasing the system this way would optimise it for the common case, and avoid the need to build a cache invalidation system.
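
    To make that concrete, the whole trick boils down to something like the sketch below. This is a toy reconstruction of the idea rather than the original schema or code - the table and column names are invented - but it shows where the cost lands: a reply is one renumbering UPDATE plus an INSERT, and reading a thread back in display order is a single indexed scan.

    use DBI;

    # posts(thread_id, post_id, sort_pos, indent, body): sort_pos is the
    # vertical position within the thread, indent is the nesting depth.
    my $dbh = DBI->connect('dbi:Pg:dbname=boards', '', '',
                           { AutoCommit => 0, RaiseError => 1 });

    sub add_reply {
        my ($thread_id, $parent_id, $body) = @_;

        # find where the parent sits in the thread
        my ($parent_pos, $parent_indent) = $dbh->selectrow_array(
            'SELECT sort_pos, indent FROM posts WHERE thread_id = ? AND post_id = ?',
            undef, $thread_id, $parent_id);

        # shift everything below the parent down one slot - this is the part
        # that gets more expensive as the thread grows
        $dbh->do('UPDATE posts SET sort_pos = sort_pos + 1
                   WHERE thread_id = ? AND sort_pos > ?',
                 undef, $thread_id, $parent_pos);

        # drop the reply into the gap, one level deeper than its parent
        $dbh->do('INSERT INTO posts (thread_id, sort_pos, indent, body)
                  VALUES (?, ?, ?, ?)',
                 undef, $thread_id, $parent_pos + 1, $parent_indent + 1, $body);

        $dbh->commit;
    }

    # reading a thread back in display order is then just:
    #   SELECT post_id, indent, body FROM posts
    #    WHERE thread_id = ? ORDER BY sort_pos;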

    This system actually had worked out pretty well in practice, at least for Accounting Web comments sections. Although it's conceptually neat, it's also actually pretty fucking dumb for a couple of reasons.

    1. updating records has a high overhead in PostgreSQL because of the mechanics of its concurrency implementation - under MVCC every UPDATE writes a whole new row version and leaves a dead tuple behind for the vacuum to clean up later
    2. this system means that adding comments becomes linearly more expensive as threads grow in size, since every post below the parent has to be renumbered; the more popular a board gets, the more the total work needed to maintain its threads grows, in a polynomial fashion

    Oops.

    I wasn't entirely stupid, I had anticipated this downside, and I'd done some scaling calculations on paper to see what the cost of implementing this for the IMDb would be. And here I made my first actually stupid mistake: I used the metrics of the existing forum system to try and predict the capacity of the new one. I can't remember the exact numbers now, and I've long since misplaced the notebooks, but it was something lower than a thousand posts a day, and the average thread length was a few dozen posts. Amazon could afford a useful database server, and it seemed like I easily had a couple of orders of magnitude of headroom. Telling myself that premature optimisation was the root of all evil, and conveniently ignoring the fact that this design was literally entirely borne of an optimisation hack, I decided to proceed with this scheme.

    Show me all the blueprints

    I gave the design a lot of thought. I had been a USENET user back in the glory days, before spam and binaries had rendered it toxically uninhabitable. I adored slashdot. I'd used a lot of shitty web forums since then, and I had designed a flexible engine that could handle any kind of post based discussion grouping. I thought this was a great opportunity to design a discussion system that I'd want to use myself. Scratch your own itch. I think I already mentioned that I didn't really have much idea what I was getting myself into. Ah, youth.

    I thought that most of the grief and spam I'd seen in other systems was primarily because of the cheapness and disposability of user identity. I figured we could tie that down by disallowing anonymous posts, which was aligned with the goal of increasing user registrations already - maybe ultimately we could link them to amazon.com accounts, and therefore real identities. I wanted to give the users the ability to personalise and curate their site home page, so they'd have an investment in a community they valued, and would be publicly accountable to.

    Another thing I'd noted about other forums was how quickly they stagnated into a dominant clique, and deterred new joiners. I decided this was in part because of the permanent record; the conversations got stale because everything had already been said, and the groups then tended to be dominated by handfuls of high-status members with visible post-history. Groupthink dominates, outsiders are shunned, filter-bubbles prevail. I thought that an interesting solution to this would be to actively expire user posts. IMDb already had a system of user reviews for more static user content attached to database entries. The boards were for conversation - so we'd just periodically remove older content, and make no secret about it. This should stop the entropy lock-down, and also give us a mechanism to keep a lid on the database / thread size to help with performance. Everything should stay fresh and sparkling and self-rejuvenate.

    I know lots of this was naive thinking and with 2017 hindsight, it's easy to see the flaws. In 2001 though, there was much less experience of online community management. We thought we knew about trolling, because we'd experienced previous communities, but I don't think anyone yet had a handle on the scale and the scope of it in a significantly mass-medium consumer Internet.

    I really wanted nested threading, which is a very good, perhaps too good, way to promote reply-oriented posting and reading. For that same reason, I didn't want threading to be the implied default mode, because I thought it promoted point-by-point refutation, which led to arguments and flame-wars. So I envisioned a system that could seamlessly move between a flat or a nested view, with a cookie to fix it to your individual preference.

    Each post would have two actions - a new top level post in the thread, or a reply to that particular post - and the different view options would allow you to see how the thread timeline fitted together from each point of view. I felt this would encourage replies, without mandating them as the only form of discourse. This meant that the organisational unit was the topic (either a generic board, or a database object), consisting of threads - each defined by the opening post made by any user at the topic level, which then collected numerous replies, which themselves could have sub-threads of replies.

    Mindful of the fact that this was still an era of expensive and slow dial-up and low end computers, I wanted the ability to view in narrow or expanded views. I didn't want to force people to download gigantic pages of browser and modem-choking deeply-nested table layouts, so we would flip between outline and expanded views as well as flat or nested. I wanted people to have a static, but customisable, home page that they could add content, style and flair to, hoping to give them a sense of curation and ownership and identity that would help act as a brake on too much antisocial or negative behaviour. I'm not sure I was even smart enough to wonder if people would use their home page to host offensive content. (Of course, some did.)

    So I started to build it. Initially it went really well. On the data model and storage engine side of things, I was on a pretty solid footing, it was familiar ground. I carried on using PostgreSQL, and we specified a decent (for the times) server to host it on. No H/A or replication at all. I'm shocked at that idea now, but at the time I had reasoned that we were building an ancillary, purposefully ephemeral side-car discussion system with a different storage layer to the main site, and we'd be fine with regular hot backups - in the case of disaster we could shut them down without affecting the main site, and restore from backup. In the case of total and utter catastrophe, we could just reset them to zero and start again, they weren't designed for permanence anyway. Feedback about the design and features from the rest of the team was positive, with plenty of enhancements and suggested tweaks, and the system started to take shape.

    The UX layer was way harder than I'd anticipated, and because of this, I started to get a bit bogged down in the 'second 90%' of the first deliverable. The mayhem engine that the team had built (a really clever piece of software design, which I don't really have time to do justice to here) had never yet really had to cater to highly dynamic pages - its core purpose was to serve flexible views of an almost read-only, statically compiled dataset of movies and people, and it was originally built around doing that in a particularly optimised way.

    I had to build up my own HTTP POST and form handling layers that would integrate with the existing HTTP handlers, from a somewhat lower starting point than I was used to, and this soaked up quite a bit of testing and debugging time. Even worse was the display code. We didn't really have much facility for dynamic page layout in the templating system, which was both highly customised and complex. The site page templates were used to drive the static build system, via a custom compiler - the markup in the template specified what data views would be generated by the build, which directed the data builders that compiled the binary movie database - and the pages were effectively just compiled to a stub handler for a specific route, which would seek to the object index in a particular data index and then basically sprintf the data out of port 80 as a hydrated web page. This was a fast way to serve varying pages with identical structure, but not immediately well suited to highly adaptive, constantly updated live pages or submission forms. Still, I wanted the boards system to sit in the existing stack as well as I could manage, and so I laboured to build the missing features into my system in a way that could integrate well, which involved at least one complete abandonment and rewrite of the internal API.

    The actual boards display templates themselves were a significant time soak. We had a great designer, who took my ugly box tables prototype output, and turned out nice looking blueprint designs for all the various view modes and forms as static web pages. This was of course the era of the browser wars, and we were expected to support a bewildering array of user agents from the Netscape 3.x era onward, inclusive of weird-ass things like AOL clients and MSN web-tv set top boxes and goodness knows what else...

    Busting these intricate table-based views apart and back together again into a cryptic markup and logic language, and adding the various (session global) mode flags such that all the different view combinations rendered as functional pages that degraded gracefully, took me weeks. I was slipping past shipping dates and entering a terrible crunch death-march to just try and get something out of the door. Unhelpfully, this was all happening at a time when I was having a few strains in my family life, and also struggling a little bit to balance it all into a sensible routine of working from home; I was ping-ponging between getting distracted away from 9-to-5 and then overcompensating by working through nights and weekends. Eventually we had to pull out features to ship.

    I drastically cut back the home page customisation, abandoned all the planned but unstarted work for a search index, and only had time to add the most rudimentary admin features. I had wanted to migrate the existing posts across to the new system, but I'd not even begun to start on that, and that also hit the cutting room floor. With a lot of assistance from the rest of the tech team to get it over the line, we hit publish on the initial TNG boards system some time in the summer of 2002, later than planned by some months. This pattern of the message boards being more work than expected for all parties that touched them would be the prevailing tone for the next several years.

    A test designed to provoke an emotional response

    User feedback was immediately negative, and highly vocal. Lobbying started instantly for the reinstatement of the previous system. People complained about the new designs, the complexity of the new display options, the inevitable launch bugs. I was silly enough to join in the conversation to help explain the launch and solicit feedback, and from that point on I had an onslaught of direct contact messages and emails, occasionally positive and friendly, but more often than not weird and offensive, sometimes abusive. You do try to tell yourself that you can just ignore the trolls, but in truth it is quite difficult to remain completely unaffected by emails that compare you to a child rapist and call for your death in offensive terms, even if it was only provoked by you breaking a font size in a particular version of Internet Explorer 3. You never quite get used to that, I find. I was pretty crestfallen with all the negativity after all that work, although the team were positive and assured me that some of the board users could be like that, that in general people are more vocal when they're complaining, and that they are naturally somewhat resistant to change. I still felt pretty down.

    My mood did change after a few weeks, though. The new boards were kind of a hit. Maybe a smash hit. They quickly overshot my scribbled calculations of scale in a slightly worrying manner. With some judicious database tuning, the performance stayed OK though. For now. Then we added links from every title page (IMDb pages were sub-grouped into title pages, for tv shows and movies, identified by a key called a tconst which looked like tt1234567, and name pages, for people, robots, animals etc. from cast and crew, which were identified by a key called an nconst which looked like nm1234567; top level boards un-linked to other database objects therefore got a new key type called a bdconst - somewhat inconsistently, these looked like bd1234567 - and didn't matter very much because there were only ever a few dozen standalone boards) and the numbers started to properly hockey stick.

    At the time we used to compute the page views in a weekly report which broke out the top N subsections according to first level directory. We never shared traffic numbers publicly, and so even after all this time I will be respectfully coy, but the highest chart topping positions were obviously things like /title, /name, /search, /news, /chart etc. At launch, the boards were lurking down the bottom, nowhere to be seen, but after we started the title conversations they were solidly into the top five, where they remained with ever-accumulating numbers, and user registrations clocking up correspondingly.

    From that point on, I spent a significant amount of my waking life 'doing the boards' for the next several years. Initially I was scrambling to put in the missing features we'd pulled before launch - post editing, then markup for posts and profiles in a hand-rolled version of BBCode. Again with a stupid insistence on display time optimisation, I converted this to HTML at write time, which meant that when we added post editing, I had to parse the HTML backwards into bbcode to be re-edited, all with a misconceived series of chained regular expressions. This led to an endless sea of parse bugs, which pretty much guaranteed that the markup and smiley set (they weren't called emoji yet) would, once fixed, be effectively sealed forever, even though I'd taken the trouble to add an admin edit tool that allowed for updates to markup to be made by non-developers through the CMS API.
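
    To give a flavour of why that was such a tarpit, the write-time conversion amounted to the kind of thing sketched below. This is illustrative only, not the original markup code, but round-tripping content through chained substitutions like these, in both directions, is exactly where the parse bugs bred.

    # bbcode -> HTML at write time, via chained substitutions (illustrative)
    sub bbcode_to_html {
        my ($text) = @_;
        $text =~ s/\[b\](.*?)\[\/b\]/<b>$1<\/b>/gs;
        $text =~ s/\[i\](.*?)\[\/i\]/<i>$1<\/i>/gs;
        $text =~ s/\[url=(.*?)\](.*?)\[\/url\]/<a href="$1">$2<\/a>/gs;
        return $text;
    }

    # editing a post then means running the regexes in reverse over the
    # stored HTML, which is where the trouble starts
    sub html_to_bbcode {
        my ($html) = @_;
        $html =~ s/<b>(.*?)<\/b>/[b]$1[\/b]/gs;
        $html =~ s/<i>(.*?)<\/i>/[i]$1[\/i]/gs;
        $html =~ s/<a href="(.*?)">(.*?)<\/a>/[url=$1]$2[\/url]/gs;
        return $html;
    }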

    I'd thrown together a naïve search API, entirely based on un-indexed SQL substringing, which I'd fully intended to replace after launch. It never worked: the system filled up so quickly that the search killed the page cache entirely by constantly table scanning the texts, so much so that I spiked it in the first week, and never got a chance to work on its planned replacement. I was still getting emails complaining about that five years later, after I'd left.
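
    The naive version amounted to something like the single query below (illustrative, not the real code; the table and column names are invented, as before). A leading-wildcard LIKE can't use an ordinary index, so every search was a full sequential scan over the posts table, evicting everything useful from the page cache as it went.

    # substring search over the whole corpus: '%term%' forces a sequential scan
    my $rows = $dbh->selectall_arrayref(
        q{SELECT post_id, subject FROM posts WHERE body LIKE ? ORDER BY posted_at DESC},
        undef, '%' . $term . '%');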

    With the surging popularity came increasing amounts of negative user behaviour, and I had to devote more and more development time to adding abuse processing tools for our small moderation team, onto what had only ever been an afterthought of an admin system. We never proceeded to link up the user accounts to amazon accounts, and I'd never planned to add user-driven moderation. My quixotic hopes for user killfiles (renamed to 'ignore lists', which is a far better and kinder name) and global killfiles (known as the 'Phantom Zone', because I love Superman) with account history purging and deletion weren't enough on their own, and the tooling for processing abuse reports was too clunky and slow, largely because I hadn't planned enough for it from the outset.

    I was now fighting a constant war on two fronts, with the popularity of the system way beyond my original estimate of a few thousand posts a day. We quickly escalated to a point where the really popular off-topic boards were ersatz real-time chatrooms, accepting hundreds of posts a second at peak times. All of this in a cursor-pooled, synchronously blocking database directly attached to the HTTP display servers. I spent a great deal of my work time just constantly rewriting sections of it all to squeeze efficiency out of this setup: first with indexes and schema changes, then with hardware upgrades and tuned and profiled system software, then with a complete rewrite of all of the database logic to use stored procedures, and finally a long overdue table sharding so we could cluster boards between different tables and tablespaces to balance the IO and garbage loads. At the same time, on the other front, we were trying to come up with ways to lower the proportionally increasing cost of trolling and abuse.
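
    The sharding itself was conceptually simple, even if the migration wasn't. Purely as a sketch of the shape of the thing - the real table names and the way boards were actually assigned to shards are long forgotten, so a simple hash stands in here - the routing looked roughly like this:

    # Toy shard routing: pin each board to one of a handful of physical
    # tables, so hot boards can sit in their own tablespaces and the IO and
    # vacuum load gets spread around. Names and the hash are invented.
    my @shards = qw(posts_a posts_b posts_c posts_d);

    sub posts_table_for {
        my ($board_id) = @_;
        return $shards[ $board_id % scalar @shards ];
    }

    # every query for a given board then targets the same table
    my $table = posts_table_for(42);   # e.g. board 42
    my $sth   = $dbh->prepare(
        "SELECT post_id, subject FROM $table WHERE board_id = ? ORDER BY sort_pos");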

    My partner was temporarily stationed away in London by this point, so I was home alone, aside from the dog. Workdays at this point quite often consisted of walking 12 paces from the bedroom, still brushing my teeth at about 09:30, getting a support email, starting to poke at something interesting with the boards, and then not giving up until the small hours of the next morning. I was fairly obsessed with all of it, and my health was suffering, although I was too close to all of it to properly see this at the time. I developed a weird collection of neurological symptoms which stubbornly refused diagnosis, and subsequently appear to have been entirely stress-induced.

    We were still choking at peak load times, and it was starting to have a knock-on effect on the rest of the site. Eventually, a super-talented colleague helped me out by implementing a workable version of my poorly articulated designs for a caching database proxy; implemented seemingly overnight by him in C, it spoke the PostgreSQL wire protocol and cached result sets in a filesystem that we mounted on a ramdisk. Kind of a home-brewed combination of memcached and pgbouncer. The simplicity and effectiveness of this just took my breath away, as did the lesson that if a software thing doesn't exist, you can just make it yourself. Everything is just ones and zeroes, as I am very fond of saying to this day.
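
    It's nothing like his C implementation, but the core idea fits in a few lines: key a cache on the text of the query, keep the result sets as files on a ramdisk, and only trouble PostgreSQL on a miss. A hypothetical perl rendering, with made-up paths:

    use DBI;
    use Digest::MD5 qw(md5_hex);
    use Storable qw(store retrieve);

    my $cache_dir = '/mnt/ramdisk/pgcache';   # hypothetical ramdisk mount

    sub cached_select {
        my ($dbh, $sql, @bind) = @_;
        my $key  = md5_hex(join("\0", $sql, @bind));
        my $file = "$cache_dir/$key";
        return retrieve($file) if -e $file;                        # hit: no database involved
        my $rows = $dbh->selectall_arrayref($sql, undef, @bind);   # miss: ask PostgreSQL
        store($rows, $file);                                       # remember for next time
        return $rows;                                              # (invalidation left as an exercise, as it always is)
    }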

    With this addition we got to a place where the system was in enough of a steady state. We implemented more banning and reporting, and added a reputation score based system that slowed the rate of posting for users with lower reputation scores, which also helped reduce the saturating write loads at peak. Eventually we added an automatic moderation robot with a learning capacity and pluggable rulesets. I called him Spike. He worked fairly well, if a little bluntly at times.
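
    The reputation throttle was, at heart, just a sliding minimum interval between posts. The thresholds below are invented; this is a sketch of the idea, not the actual rules we ran:

    # the lower your reputation, the longer you wait between posts
    sub seconds_between_posts {
        my ($reputation) = @_;
        return 10  if $reputation >= 100;   # trusted regulars: barely throttled
        return 60  if $reputation >= 10;
        return 300;                         # new or badly behaved accounts
    }

    sub may_post_now {
        my ($user) = @_;
        my $wait = seconds_between_posts($user->{reputation});
        return (time() - $user->{last_post_time}) >= $wait;
    }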

    I hope I'm not giving out the impression here that it was all entirely negative. It was definitely a rollercoaster few years. Exhilarating, and also very entertaining. The boards were a living thing that had sprung out of nowhere, literally something I'd created in my spare bedroom. It sort of felt like a Pacific-Ocean sized colony of sea-monkeys eternally fizzing away with unexpected activity right there in my spare room.

    Although they were often frustrating, the users were also inspiring, and creative, and surprising, and occasionally pretty funny, even some of the (gentler) trolls. On top of an understandable level of frustration and annoyance, I generally found I felt a sense of sympathy for them, and their complaints and frustrations with the system. All of this was before the age of 'social media', and I could almost feel the shape of it hanging there, slightly beyond where we were heading, off-piste and in a direction we probably shouldn't venture into.

    A consistent surprise was the amount of effort people put into curating their limited patch of profile space, and how social, and to us off-topic, it all was. We were constantly running into people trying to use the boards for personal social spaces - I argued for providing individual personal boards for every user at one point, but the management team explained that we weren't really in the core business of general social networking. It confused me at the time, and I had to think about it for a while, but I think that was correct thinking, and there's a lot of wisdom there. You simply can't do all possible things well. With a small team, and a big world, you benefit most from focusing entirely on the things you're best at and the things you want to be better at.

    A few of the sillier trolls stand out. There was one early griefer, who we very easily IP-traced to a school library, I think based in Canada. We waited until he was in mid-session one afternoon, and then, if I recall correctly, management called his head teacher, who was then able to apprehend him in the act. There was another, very silly catfish troller called tabitha_cyeg, with an obviously manufactured identity. Their M.O. was posting bizarre conspiracy theories about the site technology, and myself, which they claimed to have hacked into using l33t-sounding but completely irrelevant NetBIOS vulnerabilities, replete with faked server logs, and on one occasion 'hacked' emails from myself revealing my true name to be something along the lines of 'Claude M. Savoire'. Quite a few users were seemingly entirely convinced, but to me it was pythonesque.

    Getting contacted by the Feds to deal with users who'd been posting death threats about President Bush was weird, at least the first time, and I got a few PMs and emails from actual industry figures, which was always quite exciting. I personally banned a moderately famous Hollywood producer this one time, for abusive posting, which is something of a curiosity. I remember going to watch Jay and Silent Bob Strike Back at the cinema around this time frame, and getting a particular kick from the sub-plot where they individually visit all the internet forum posters who have been rude about their previous films.

    I watched people fight and friend. Saw a few romances and a marriage or two emerge from the regulars. I read, and occasionally got involved in, against my better judgement, fascinating and productive conversations. I still bump into people IN REAL LIFE who reminisce about the boards and are to this day impressed with me when I tell them I had a big hand in their genesis. I once spent an evening on a darkened restaurant patio overwhelmed to tears as a kind man explained to me that his young daughter, hospital-bound and dying of cancer, had used the Harry Potter IMDb boards as her main social life in her last year, and how much that had meant to him and her. Stories like that are just a profound privilege to have had even the most tangential involvement in.

    And I learned so much. Working with such a smart team, on such a great and special piece of the internet. Learning about every aspect of scaling a web stack from the disk blocks up to the network and back down again. This era was still 32-bit Intel hardware, and I learned a huge amount about that, and UNIX profiling, and the linux virtual memory system and file system, and HTTP caching. I made so many mistakes, because there just wasn't any other way for me to learn, and I did figure out how to fix or improve on many of them.

    I learned about PostgreSQL internals from the wire protocols all the way down to the storage models in some detail, and to this day I'm a pretty great PostgreSQL DBA, when I need to be. I learned a lot about UX influence and steering behaviour, albeit by mostly getting it wrong. I learned about building search engines, and service orientated architecture, and why you really shouldn't hang responsive systems off of blocking I/O, and maybe message queues are useful. I learned how to measure system performance all the way down to the CPU cache level. I learned how to keep focused on problems I didn't yet know how to solve, or perhaps didn't yet understand. I learned lots and lots of things about movies and cinema history, much of it just by osmosis poring over the data sources. I learned how to better manage my own time and projects, and I learned what it feels like to burn out, and what you should do about it when you know that you are. Since I left Amazon.com, I've had a great and varied career, and I think at least 75% of the useful things I know how to do well I learned first-hand on that gig, and I've always treasured, and respected that.

    Always. Be. Closing.

    And now they're shutting the boards down. I first heard about it via text message, oddly enough; but shortly after that it was all over my news feeds, followed by a slow stream of emails checking in. Friends, ex-colleagues, some of them former boards users. I felt an odd sense of shock about it, in a way, and slightly emotional. Sixteen years is a ridiculously long time in Internet years. The web itself wasn't sixteen years old when I joined Amazon, and nor was the still older IMDb. I don't use the boards myself any more, although I do occasionally look over them, perhaps once or twice a year. It's been clear for a while that they're not getting a fraction of the use that they once did, and that's fair. The web is a different thing entirely in 2017, and that's also a good thing.

    Communications technology evolves, and hopefully improves all the time. People have all kinds of social networking now for communicating, and the bulk of this is happening on different, smaller screens than anything I could have envisioned when I was first sketching out some pencil ideas in a gridded notebook. An actual Filofax I believe. It was very humbling to see the amount of twitter traffic noting the IMDb announcement, as well as the number of actual proper news sites that wrote this up as something significant. The Verge report seemed to think the IMDb message boards were era-defining. That's something, I guess. All things must pass.

    There's just one more thing that's bothering me

    'Mjeyds'. In the IMDb board bbcode syntax, there's a particular smiley that you mark up using this bizarre word. People occasionally ask what the term means, and I've always enjoyed the mystery, being one of very few people in the world with any claim to know the answer. I guess it's now or never for the reveal.

    The emoticon set was curated, uploaded and configured by my erstwhile designer colleague. He took responsibility for naming them. He wasn't English, hailing from Denmark, I believe via several other countries. When I pressed him for an explanation of 'mjeyds', he said it was supposed to be an onomatopoeic rendering of the way the late Graham Chapman says a languorous 'yes' whilst sucking on a pipe in a scene from Monty Python's The Meaning of Life. If it is, I guess it works better if you're using a Danish alphabet? If you've got all the way through this post only to find out the answer to that question, then I am sorry if it is an anticlimax, but thank you for reading. Maybe some things are better left mysterious. Another lesson learned.

    Crazy Credits

    This is a personal web page, and an entirely personal and subjective retelling of my own experience building and maintaining a small section of IMDb.com a long time ago. Whilst I'm happy to take personal responsibility for a large amount of the boards' creation and inspiration, I don't want people to get the impression that this was in any way a solo effort. All of the work outlined above was produced in the context of a small, dedicated team, and although I've refrained from naming names and attributing ideas, this is born more of a desire not to miss anyone out - after this amount of time there's simply no way I can credit individuals for parts I can remember without failing to attribute others for equally important contributions I have forgotten. I've done my best to be honest about facts and timelines, and tried not to infer too much about third party motivations, but I know I've forgotten things and misremembered others. Working from memory, after this amount of time, such errors are only human. If you spot anything terribly wrong, or have any questions or corrections, please get in touch. I'd like to thank the entirety of the IMDb team 2001-2005 for working with me on all the aforementioned things, and more. Great team, great times.

    • 2017-02-05
  2. And just like that we're back. What happened, cms?

    It was never entirely my intention to go offline for such an extended hiatus. Even though the web is intrinsically brittle and ephemeral, I like to do my bit to keep my little backwater serving 200 OKs to the half-dozen people who stop by to check in regularly, and the couple of dozen who linked to something I put up at some point. It's basic web-citizenship as far as I'm concerned.

    Before we went fully dark, I'd not posted for a long time already. And before that I'd slowed my posting down to something of a crawl. I think there's a few reasons for that. It's easy to get bored with blogging for the sake of blogging, especially in our current age where everyone shares profligately across many social platforms. It's fairly common to see blogs that have fallen into a recursion of no posts for months, then a post apologising about that, and then further disuse. I don't think this is one of those, but the proof is in the posting I suppose.

    There's certainly been less time in real life for auxiliary pursuits like online rambling, and that's a big part of the reason. No time for any proper content posts, concomitant with a surge of alternative social platforms to play around with, meant it often seemed a bit redundant to post arrays of short links, when I could just throw them up on twitter/adn/diaspora*/flickr/ello/imzy/whatever, with a bigger audience, and more interaction.

    I was also feeling a bit self-conscious about standing up in public. After leaving last.fm (fairly amicably, as these things go, fwiw, albeit with a slightly battered heart), which felt like a fairly visible shift sideways, I was quite deliberately courting more obscure, maybe more unexpected job roles, and I remember feeling like I really didn't want to bare my thoughts to the internet judgement machine whilst I wasn't even entirely sure what I was doing myself a good deal of the time. Also busy! Young family plus startups really left little time for anything much else.

    I was also really feeling the pain of WordPress. I never quite managed to find an authoring approach to use with it that didn't make writing anything seem like far harder work than it ought to be. And because I always insist on self-hosting, there was the sheer weight of maintenance and security updates, and backups, and DBA-ing, and having to write PHP or perhaps even plugins to do the customisations someone like myself inevitably finds themselves suckered into doing. So WordPress was a drag, which was feeding my reluctance to contribute much of substance. So I decided to pause on updating whilst, in true wannabe-hacker style, I whipped together some kind of alternative content publishing system.

    I'll just take a paragraph out to stress that I actually admire WordPress a great deal. It's a very sophisticated and flexible web platform, and a great choice for site management, in either managed or self-hosted configurations. It kept this site ticking along for years. It just isn't a particularly good fit for my requirements, which are extremely simple.

    I thought about using another off-the-shelf blogging system, which would have been the sensible route, but I figured that would just lead to a similar frustrated stalemate. So I started to sketch out an application that would allow me to quickly fling out tagged and dated content without much overhead of hosting or writing. And I carried on intermittently piecing this app together, often on trains, for a couple of years. As an exercise in procrastination, it worked out better than I expected, and I carried on posting short content to twitter and others, reasonably happy to continue to defer the responsibility.

    But then the site went dark. I was hosting it all on a linode instance. I've been a very enthusiastic linode user for perhaps ten or more years, I think they have an excellent product, offering well-provisioned VPS instances, inexpensively, with an easy to use management site. Generally I've been very happy with them to date.

    This changed somewhat last year, and my confidence deflated a little. There was an extended outage of service across linode in December 2015, apparently as a result of a targeted DDoS. This lasted for many days, and the communications about it from linode were muted and suspiciously vague. This isn't really what I expect from a first-tier ISP. I came away with the impression that there were some significant architectural problems with their infrastructure, probably from accrued technical debt, and potentially some exploitable vulnerabilities in their public facing application software. I decided it was time for a change.

    I did some research and rented a couple of new hosts. This time I've gone for low end, physical servers. This represented another procrastination opportunity, because when I originally set up the beatworm.co.uk linodes, almost ten years ago, I just hand configured everything by remote shell. Now I like to use the ansible configuration management system to set up hosts, and I took this opportunity to port my public infrastructure across to repeatable playbooks. This turned into another major yak-shave, because there was slightly more to it than just a WordPress deployment: I was hosting mail, calendars, media streaming, IM, DNS, the works. After getting lost in this tarpit for a couple of months, I decided to move the application tier over to the playbooks from the sovereign project, which covers much of the same ground, but is already written, and uses more modern components. Of course it wasn't entirely straightforward to integrate these plays over my existing base provisioning, and I ran into a couple of glitches and gotchas with some of the choices they'd made for configuration, but it only took a couple of weekends' worth of fiddling to get it all running in fairly acceptable shape. I moved the DNS across, at which point the WordPress site was left behind, and everything went dark.

    I was surprised at how much this bothered me.

    I like an outlet for sharing things. I enjoy the idea of having a stable internet identity. I don't like the way the modern web has folded these ideas into a handful of consumer products run by just a couple of corporate gatekeepers. That's not the web I grew up with, and it's not the web I want to see either. A very loosely federated ecosystem of ad-hoc resources, all mixed together as hypermedia, aggregated and accessed via an assorted bag of user-agents. That's how it works best. I like to write, because I like the practice and discipline of working toward articulating my thoughts for a general reader.

    I like being able to curate an archive, and keep control over how that information persists and is presented. This is hard enough to do when you have primary jurisdiction over the medium and material (there is plenty of bitrot on view in my archive, particularly in the really old material, which has been migrated across multiple publishing platforms now), and basically impossible if you're relying on a third party service, which periodically re-invents itself to better serve its own objectives, which are only ever tangentially aligned with your own, at best.

    I don't like the sense of obligation I get from formal social media platforms. There's a subliminal sense of pressure to perform, to update, to observe the conventions, to consider and measure the implied audience. I'm not a joiner by nature. I just end up gently resenting the throng. I like to feel like I have a voice, but I don't want, or even expect to reach, an automatically provided audience.

    So, I picked back up my now-neglected website platform experiment, and knocked it together enough to get an MVP out of the door. It serves HTML over HTTP. It has a relatively minimal set of style rules that should allow it to work gracefully across various screen dimensions. It has rudimentary support for RSS (not that many people use newsreaders any more). It's simple to run in a staging environment, and I can write posts in plain text in emacs, and edit and post them without much extra grief. It's only got about 22% of the functionality I had originally planned, but I feel the urge to ship it, use it, and hopefully I'll refine it in production.

    There's a couple of interesting quirks to this new hosting setup. It's an ARM-based micro-blade, hosted on a Scaleway C1. The blogging software is semi-static, in as much as it serves generated content from the filesystem. It's written in Common Lisp, and deployed on a different lisp to the one it's developed on. There are no frameworks (aside from using zurb foundation classes to base the CSS on). There's no database. There are no comments, because I haven't yet decided on a productive way to support them.

    • 2016-09-04
  3. Apple Vs GPL: Apple’s attitude to GPLv3 is making OS X an increasingly shonky UNIX developer system

    • 2014-10-12
  4. LambdaPi: A bare metal scheme based lispOS for the rPi

    • 2014-07-14
  5. ANS: Once upon a time, Apple used to ship some fairly funky, fairly chunky AIX boxes.

    • 2014-04-02
  6. Serapeum: is a conservative library of Common Lisp utilities. It is a supplement, not a competitor, to Alexandria.

    • 2014-03-31
  7. Bup: A highly efficient file backup system based on the git packfile format

    • 2014-03-30
  8. I've been using a set of Superman covers I scraped from Superdickery.com as a screensaver on my Mac for a couple of years. I just dropped them all in a folder, and pointed the built in "slideshow" saver at it. Set to "Shifting Tiles" with 'shuffle slide order' it makes a nice regular grid of comic books that zip in and out regularly.


    Last week I had a notion. I dusted off my old Canon LiDE A4 USB scanner, fired up VueScan and set about scanning a couple of boxes of my own comic book collection. It was a surprisingly therapeutic couple of hours' mechanical work to scan a few hundred, and the result is a more pleasingly personalised slideshow, with a larger number of member images.


    After running with it for a couple of days, I'm really pleased with the results. It could do with a little more variety, because I scanned from boxes where the material was alphabetically organised by title (what am I, some kind of nerd?). Some other observations - the 90s were really dark, both in the stupid post-Watchmen 'gritty heroism' sense, but also more literally in the colour palettes. This is really obvious contrasted against the four-colour poster silliness of the classic Super titles I've switched from. Ironic that high grade reproduction technology and digital colouring options, as well as the shift to fully painted illustrations, seem to have led to a more muted spectrum of offerings. Perhaps this says a little about my youthful tastes. Also, what was I thinking, sticking with that second run of 'Mage' ("The Hero Defined")? That book was pretty terrible as I recall, and I've certainly got no urge to reread it and check my assumptions. I'm leaving them in the set, because it seems dishonest not to.


    I've got another dozen or so boxes to scan. I should do some sums to work out what the storage implications of that are before I commit to bunging the rest of them onto my 256GB SSD though.


    As a side thought, I realised that everpix had diligently uploaded all my scan jpgs, so I can present a public gallery of the work so far for your bemusement.

    • 2013-10-26
  9. If you have a Mac, and you use Terminal.app to run UNIX commands, try executing this for a cool shell prompt


    export PS1="\360\237\220\232 $ "

    See what I did there?


    If you are using a UTF-8 encoding for your terminal, which you probably are, and if you're using a recent OS X, and have the right fonts installed, which you probably do, you should have a little sea-shell graphic for your prompt. Literally a cool shell prompt.


    (screenshot: the sea-shell emoji prompt in Terminal.app)


    In a recent revision to Unicode, code points were assigned for many emoji. Emoji-what-now? These are little emoticon glyphs that rose to popularity in Japan. Apple have included a nice typeface with full colour icons for a subset of these in the last couple of releases of both iOS and OS X, so you can use them in most applications that use the system type rendering library, like Messages. On OS X, this includes the bundled Terminal.app terminal emulator. So you can print little icons in your shell, if you know an encoding for a particular glyph.


    Here's the ever-popular 'pile of poo' ( U+1F4A9 ).


    [Screenshot: 2013-04-09 20:09:46]


     


    Not sure what that is supposed to be used for, but it's terribly popular on the internet. "But how", I hear you ask, "do you find out the encoding sequences for these appealing novelties?"


    Well, you can search for Unicode code tables on the internet. On the Mac though, the easiest thing to do is probably to enable the Character Viewer tool via the Language & Text pane in System Preferences.


    [Screenshot: 2013-04-09 20:19:23]


    This gets you a panel like this, where you can browse all the characters your computer knows how to render, including all the emoji sets, and find out their Unicode code points, and more importantly, a way to encode that code point in UTF-8.


    [Screenshot: Character Viewer]


    So, as you can see in my fecal example, the UTF-8 byte sequence for 'pile of poo' ( U+1F4A9 ) is F0 9F 92 A9, and we can print that in a bash shell using echo with the -e flag to enable interpretation of escape sequences, and the \x escape prefix to indicate bytes in hex.
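    In case you want to see it in a terminal, the incantation is just those hex bytes fed to echo -e, something like:


    echo -e "\xF0\x9F\x92\xA9"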


    Going back to the original shell trick, the shell emoji ( U+1F41A ) has the UTF-8 encoding F0 9F 90 9A. The bash shell doesn't seem to have an escape sequence for hex encoded bytes in its prompt string, but it does interpret 3 digit codes prefixed with a plain \ as octal encoded literal bytes, so if we convert this hex string to four octal numbers, using bc or od, or emacs, or just Calculator.app, we get the escape sequence from my initial shell example - "\360\237\220\232"
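    If you'd rather let the shell do the conversion, one way (a quick sketch, there are plenty of others) is to lean on printf's octal output format:


    # convert each hex byte into a zero-padded, backslash-prefixed octal escape
    for byte in F0 9F 90 9A; do printf '\\%03o' "0x$byte"; done; echo
    # prints \360\237\220\232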


    So far so cute. But is there anything vaguely useful you can do with this sort of thing? Sort of. A picture's worth a thousand words. So we could perhaps encode mnemonic information in icons, and somehow dynamically update the prompt to reflect this.


    Bash will execute the contents of an environment variable PROMPT_COMMAND as a shell command immediately before the shell prompt is printed. Typically this is used to update terminal colours or title strings with escape sequences, or update PS1 to add some content that can't be printed using the built-in prompt escape functions. I decided to make my prompt respond to the result of my most recent command.


    Here's the relevant shell glue I just stuck in my .bashrc 



    # emit a prompt string based on the exit status passed in as $1
    emoji () {
        if [ $1 -eq 0 ]
        then
            # last command succeeded - smiling face (U+1F603)
            echo "\360\237\230\203 $ "
        elif [ $1 -gt 0 ]
        then
            # last command failed - confused face (U+1F615)
            echo "\360\237\230\225 $ "
        fi
    }

    export PROMPT_COMMAND='PS1=$(emoji $?)'

    This runs a shell function called emoji in a subshell, which returns a string based on its input argument. The input argument I'm using is the exit status of the last shell command. This gets me a smiley face in my shell prompt, unless the last command I ran returned a non-zero exit status, which in UNIX indicates that a problem happened. This makes my prompt draw as a 'confused smiley' if something has gone wrong.


    [Screenshot: 2013-04-09 20:41:56 - prompt reacting to exit status]


    Still cute, and almost useful!


    I think I'll keep it for a while.


     

    • 2013-04-09
  10. I'm experimenting with desktop email clients again.


    I like Apple Mail a lot - it's one of my favourite examples of a GUI desktop application - but the last couple of iterations have made it a little more clumsy to use with keyboard navigation, and it doesn't scale terribly well to managing multiple, high-volume IMAP accounts. In particular, I find refiling groups of similar emails to be more labour intensive than the task would seem to require. By way of contrast, I love refiling mail on my iPhone using Apple Mail for iOS; in truth I love using Mail on my iPhone for any mail task way more than I'd expect - it's insanely usable for an email client on a tiny, squeezable hand-toy.


    The real impetus for investigating a desktop alternative has come from our recent switch to using GMail for our corporate mail service at work. I hate Google Mail's not-quite-IMAP IMAP implementation, I hate its sluggish IMAP performance through Mail.app, and I hate hate hate its god-awful webmail interface. So I've been putting some thought into rethinking the way I process email. Naturally, my first line of attack is to retreat to emacs.


    I've used emacs for mail before, on and off. When I first switched to using linux for my desktop systems, way back in the 90s, I used gnus on emacs for mail for a while; then, when I made the switch to XEmacs for a couple of years, I discovered VM, which was my main INBOX on and off - following me back to GNU Emacs, with occasional experiments with Netscape Navigator and Evolution - up until I switched to a Mac full-time, around 2001. I do recall trying Thunderbird a couple of times, but I could never tolerate it for much longer than half an hour. I also used Wanderlust for emacs for a few months when I first started working at last.fm, but I switched to using a Mac at work shortly after that, and added my work email to my Apple Mail setup.


    This time around I'm trying to fundamentally re-organise the way I approach mail. A few years ago, I started deleting mail after I'd read it, unless I definitely felt it warranted keeping. I really liked the feeling of freedom that seemed to open up, releasing me from worrying about tidy filing of hierarchical mail archives that always needed archiving and backing up. Inspired by GMail's approach to tagging and searching, the mail I did keep I filed into a small set of IMAP buckets and indexed in Apple Mail with labels and "smart folder" searches. So I'm trying to push that even further, and I'm trialling mu, a decidedly minimalist interface to email.


    mu works over a local mail store, ideally Maildir. So I've started syncing my work GMail account to my laptop, using the mature, Free software syncing tool offlineimap ( I installed it from macports ). offlineimap has specific GMail support, and it's super-easy to set this up to sync to a GMail account, although I had to add a 


    folderfilter = lambda foldername: foldername not in ['[Gmail]/All Mail']

    to the account configuration in ~/.offlineimaprc to stop it syncing the Gmail "All Mail" label as an IMAP folder, which meant I was pulling down two copies of every email. I set up a user launch agent via launchd to run offlineimap every 5 minutes, syncing to ~/Library/OfflineIMAP/lastfm/ (see the sketch below).
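    For reference, here's a minimal sketch of the sort of property list that launch agent might be - the local.offlineimap label is made up, and the /opt/local/bin/offlineimap path assumes the macports install; adjust both to taste, save it as ~/Library/LaunchAgents/local.offlineimap.plist, and load it with launchctl load.


    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
      "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
      <!-- hypothetical label; any unique reverse-DNS-ish name will do -->
      <key>Label</key>
      <string>local.offlineimap</string>
      <!-- -o runs a single sync pass and exits -->
      <key>ProgramArguments</key>
      <array>
        <string>/opt/local/bin/offlineimap</string>
        <string>-o</string>
      </array>
      <!-- re-run every 300 seconds -->
      <key>StartInterval</key>
      <integer>300</integer>
    </dict>
    </plist>

    Then launchctl load ~/Library/LaunchAgents/local.offlineimap.plist starts it ticking.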


    Once the mail was syncing both ways, I ran 


    MAILDIR=~/Library/OfflineIMAP/lastfm/ mu index 

    to initialise the mu indexes. I can now explore the mail archive from the shell using commands like 


    mu find from:jira date:2w..today

    which would return a summary list of emails matching the search criteria (i.e. all mail sent from JIRA in the last 2 weeks). mu is based on the Xapian indexing library, and these queries run lightning-quick. The indexing process is thus entirely separate from the IMAP sync, and the indexes need to be refreshed by re-executing the 'mu index' command. This takes fractions of a second after the original indexes are built.


    I'm not really interested in running searches from the shell though. mu is really an archive browser; ideal for integrating with other mail reading and sending utilities. mu ships with a nice keyboard-friendly emacs interface called mu4e. mu4e offers quick navigation shortcuts to browse IMAP folders, a simple syntax for mu searches, and a list of bookmarked searches, much like virtual folders. mu4e can be set to periodically update the mu index, and even run a Maildir sync command, such as offlineimap. Here's the elisp config block from my startup files.


    (setq-default
    mu4e-maildir "~/Library/OfflineIMAP/lastfm"
    mu4e-drafts-folder "/Drafts"
    mu4e-trash-folder "/Deleted Messages"
    mu4e-sent-folder "/Sent Messages"
    mu4e-refile-folder "/Archive"
    mu4e-mu-binary "/usr/local/bin/mu"
    mu4e-sent-messages-behavior 'delete
    mu4e-get-mail-command "true"
    mu4e-update-interval 300)

    all of which is quite straightforward. The root of the various folder paths is the top level Maildir. mu4e-sent-messages-behavior is set to the symbol delete, which is recommended for GMail accounts, as GMail auto-populates one of its magical pretend folders with all sent messages. I have set mu4e-get-mail-command to "true" because I prefer to have the Maildir synced via my launch agent, independently of emacs.


    There's a very nice mu4e manual which documents the package in detail; I haven't managed to work through it all yet. So far I'm managing quite well with manual searches, and the default set of keybindings and stored bookmarks. List view management follows the usual emacs semantics of building up 'marks' on list entries and then applying the actions in bulk, familiar to habituated emacs users from org-mode, wanderlust, dired etc.


    Mail editing and sending are borrowed from the usual emacs GNUS / smtpmail machinery, which is fine, as these work perfectly well.


    I've found only one tricksy wrinkle; mu4e, like any sensible thing, expects email to be in plain text. If the viewer is summoned on a rich text ( usually HTML ) mail, it tries to convert it to plain text for viewing. By default it is set up to use emacs' built-in html2text function, which frankly sucks, and failed to convert the majority of HTML mail in my INBOX. mu4e has a configuration option, mu4e-html2text-command, to use an external conversion command instead. This should be a utility that accepts HTML input on stdin, and returns converted text on stdout. The manual suggests using the python-html2text utilities, but I think on a Mac it makes more sense to use the mildly obscure, but occasionally useful, Apple provided shell tool - textutil


    It needs to be invoked like this to work with mu4e. 


     (setq mu4e-html2text-command
    "textutil -stdin -format html -convert txt -stdout")

    And with that, everything works great. I'm going to try living with it for a few weeks before I customise it further, but I'm looking forward to setting up Wanderlust-style dynamic refiles, and integrating crypto support, so I can return to GPG encrypting and signing my mail again, like I ought to, at my age. Never forgetting, of course, cms' 1st law of software :- "All mail clients suck, intrinsically"

    • 2013-02-04
  11. If you find that your Mac's 'Open With' menu is growing cluttered with identical menu entries for the same application, this indicates that your Launch Services database is confused. 


    In the normal course of events your computer scans for entries to merge into this database at boot time, and then at login for the user domains. The Finder updates it with new application information as and when new App or Framework bundles are encountered during its normal operation. Unfortunately this database does seem to be capable of becoming persistently corrupted, which results in symptoms like a duplicate-riddled 'Open With' menu, or incorrect or inconsistent filetype/application associations.


    On Mountain Lion, you can interact with the system database from the shell, using the lsregister utility. Run it without arguments to get basic usage instructions. It is not on any default paths; it's buried away inside /System/Library/Frameworks/CoreServices.framework.


    /System/Library/Frameworks/CoreServices.framework\
    /Versions/A/Frameworks/LaunchServices.framework\
    /Versions/A/Support/lsregister -dump

    will show you the current database in human readable form. To scrap and rebuild the database completely you might do something like this 


    /System/Library/Frameworks/CoreServices.framework\
    /Versions/A/Frameworks/LaunchServices.framework\
    /Versions/A/Support/lsregister -kill -all u,s,l -r -v

    The domain argument there ( u,s,l ) specifies that we should recursively ( -r ) scan for bundle directories in the user, system and local domains (i.e. "~/{Applications,Library}/", "/System/{Applications,Library}", and "/{Applications,Library}") and register their document type bindings and other information with the Launch Services agents, which will update their database with this information. The -v switch turns on progress logging, which is all written to stderr.


    If you're in the habit of installing apps or library bundles into alternative roots outside the built-in domains, you can pass those paths to the command instead of the domain flags, as in the sketch below.
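    For example, something like this - the long framework path is stashed in a shell variable purely for legibility, and the bundle location here is hypothetical:


    LSREGISTER=/System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/LaunchServices.framework/Versions/A/Support/lsregister

    # recursively scan and register a bundle that lives outside the standard domains
    "$LSREGISTER" -r -v ~/Applications/Extras/SomeApp.app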

    • 2012-11-29