Smartphone OS market share - Kantar Worldpanel data

On one hand, it's useful to know how market share is changing for the different smartphone operating systems.

On the other though, it's hard to draw any sort of meaning out of a change in market share without knowing how the size of the market has changed. I know that there are more smartphones than there were 12 months ago, and that Apple's share has declined. But does that mean that Apple is selling more phones? Or the same number of phones in a growing market?

Death of The Manual

It must be about ten years ago now when, stuck in a dead-end job and wondering what I wanted to do with myself, I decided that I wanted to write manuals for gadgets.

I had some experience in technical writing, I had a love of gadgets, and I felt that I could turn the terrible guides that came with a typical piece of technology into something that would actually be useful.

So, I started working on building up a portfolio and spoke to some people in that line of work. Generally doing my research — who is doing it well? Where is it all going? (Why are they so terrible?) But I learned three things;

  1. Technology was going mainstream. iPods were fairly new at the time, and nobody needed a manual for an iPod. But as more people wanted the gadgets, there would be more need for help for people who wanted the gadgets, but didn't love the gadgets.
  2. If printing something the size of a phone book, containing instructions in every imaginable language, and shipping them all over the world seemed like a good idea, then having a better manual, in full colour (including videos and interactive elements), available online seemed like a great idea. (So I started learning about web technologies instead.)
  3. The reason those manuals are terrible isn't because they are being translated from Japanese. It's because 90% of the work done by the people who write them is writing for engineers.

The second two made me change direction - 2. because I got caught up in web technologies, and 3. because the actual job of writing the kind of documentation I thought I could do well would mean also doing another job, one which I thought would directly harm my ability to do the first one well.

The first point seems to be having a strange impact on gadgets in general though.

Russell Davies has documented his recent experience of buying a new gadget. (Spoiler: His discovery was that "the manual" hasn't gone online. It's just gone.)

The thing I find most alarming is that the gadget he's bought appears to be a digital camera, made by one of the biggest consumer electronics brands in the world.

Dedicated digital cameras are currently in danger of becoming one of those gadgets that suddenly gets very old and dies - like VHS recorders in the age of Sky+, or portable CD players in the age of iPods. Whether there is a place for the dedicated digital camera in the age of smartphones remains to be seen.

Actually, it's not quite that simple. Sky+ is better than VHS. iPods are better than portable CD players in certain ways. Probably not in terms of how good the sound is though - the music on iPods (and other MP3 players) is — usually — compressed. The amount of music that fits on there is more important than the fidelity of the sound.

Dedicated cameras are generally better than smartphones. I say "generally" because it depends on what you want from a camera. If you want;

  • Better ISO for low-light picture quality
  • More space for storage (and hot-swappable SD cards)
  • Good zoom lens
  • Adjustable aperture/shutter speed
  • Specialised features
  • RAW images

…then you're probably thinking of dedicated compact cameras as "better" than smartphones.

But, if you want;

  • A camera that is always with you
  • Something to show your pictures as well as take them
  • Quick sharing online (as soon as you've taken the picture)
  • Ability to add titles, comments and conversations around your pictures

…then you might well be thinking about investing that £100-200 in a better smartphone, instead of a new camera.

But here is the problem. Apart from the zoom lens (which is pretty straightforward to use), all of those benefits of dedicated cameras need some sort of explanation. The typical person (who has never read a photography book, taken a class, followed a photography blogger, joined something like a Flickr community etc.) needs to be told what those features are before they can figure out why they would want them, when they should think about them, and how to actually use them.

So, maybe that isn't where cameras are going. Maybe they are all about convenience — point-and-shoot for the masses, who are never going to read the manual (whether it's a multi-lingual tome or an interactive online experience) and don't want a £600 telephone?

It doesn't look like it. Camera complexity seems not just to be a selling point — for camera manufacturers, it doesn't even seem to be a choice.

But camera manufacturers don't seem constitutionally capable of making a super-simple camera. They must be deeply convinced that the complexity of the feature set (which certainly does appeal to a lot of us) is an indivisible part of how they add value to their product, and the temptation to add more and more is something they can't forswear even for one product. I mean, with hundreds of cameras on the market, wouldn't you think they could make one that was super-simple, just for that segment of the population that wants it? And market it that way. You'd think. But no.
I think it's one of the "stealth reasons" why cellphones are encroaching on the camera market so rapidly. Not the only reason, not the main reason, but a reason. (I also think that as cameraphones gain an ever-enlarging share of the camera market, the cameras in them will inexorably get more complicated.)

via DaringFireball

I can only assume that the likes of Sony either believe that people today don't need a manual, because they can just google for a general "how to use a camera" blog post and figure out what they need to know, or that they are making gadgets like iPods — so simple and intuitive that they don't need manuals or handbooks.

So, that's the situation for the future of "manuals";

  • They are being written by the worst possible writers — people who don't write for "normal people"
  • Their existence is an indication that the gadget isn't good enough
  • They seem to be worth less than the paper they are printed on

I'd say that things aren't looking too good for a well documented future for gadgets…

Why "Natural Interfaces" are important

"A bit of video I was looking at recently stuck with me over the past few months. It showed a toddler sitting up holding a magazine. She tries to swipe it – she tries to expand it – she bangs it to try to make it play. Nothing happens. And in frustration she throws it away. To a toddler a magazine is a tablet that’s broken. That’s how this generation is growing up. It will have a totally different set of norms and behaviours."
Director-General of the BBC Tony Hall, in a speech about the future of the BBC.

This is a refrain I've heard many, many times over the last few years (I assume that this video is the one he is talking about); today's toddlers have grown up with touch screens. They expect screens to do things when you touch them. Something that doesn't react to physical interaction is broken.

There is a fundamental truth here. Touchscreens are just a part of it — "natural interfaces" are the new category of user interfaces. Once you start using a touch screen to interact directly with content, it's jarring to go back to a similar device where you have to operate a cursor with a keypad.

But… does a toddler of today think that a magazine is "broken"?

I think it's nonsense. I can't speak from experience — my eldest child was playing with an iPad before he could talk — but I'm pretty sure that before 2010, toddlers were not a significant part of the Vogue/Heat/FHM audience. (It's hard to be sure, because the NRS doesn't measure readership of under-15s. But I'm pretty confident.)

I'm pretty sure that before 2010, a toddler's reaction to a glossy magazine would have been to touch it, see if it did anything — maybe try to eat it, if they were below a certain age — and then either ignore it or rip it up. (Which is one of the reasons we tend not to give magazines to toddlers unless we don't mind them being screwed up, ripped and eaten. I do speak from experience when I say that toddlers are agents of chaos, on a mission to destroy everything they come into contact with.)

Earlier this week, I attended Microsoft's Advertising Retail Forum, where I heard a great example of what not to do with in-store technology; a big "outdoor clothing" brand had put a nice, interactive thing in their stores — you've probably seen the kind of thing; a computery thing where you can browse their catalogue or see a store guide or watch their TV adverts or whatever it is that they thought their customers would want to do. All well and good.

Except… they had to put a sticker on the thing asking people not to touch the screen; below it were a keyboard and trackball (because this was a computer), and people's greasy fingerprints were making the non-touch screen very dirty.

That is an illustration of why natural interfaces are important. Because grown-ups, who have spent their entire adult lives (and probably more) using computers with keyboards and mice, are suddenly assuming that big screens are things to be touched. I've seen it happen with large screen installations, with non-touch smartphones, with computers in kiosks, car park ticket machines - you name it. Touch screens are now the default for many adults. That's why anyone dealing with technology today — hardware or software — needs to be aware that expectations have changed.

Not because toddlers don't read magazines.

Today's Twitter

Twitter is looking for a new "Media Evangelist" (officially titled "Head of News and Journalism"), and NBC News Chief Vivian Schiller is currently the favourite for the position. But an opinion piece by Ruth Bazinet on Medium says that she is the wrong person for the job.

Why? Because of her Twitter profile.

But it lacks the most important element that should be ringing alarm bells at Twitter HQ — a significant number of tweets. How can someone who has tweeted less than 1,200 times have the practical, hands-on knowledge of the platform required to evangelize it to other news media professionals? Twitter needs a veteran, someone who is an expert not only about the platform itself, but who also understands how people, including other journalists, are using it.

In short, the view is that Twitter is heavily reliant on "power users" — those who are tweeting dozens of times a day.

I think that's a view that misses the point of what Twitter is and where it's heading. Maybe three or four years ago, when Twitter was a social network for bloggers, journalists and the technorati, it would have been a more valid point; but today, Twitter is something different. It has changed.

Most obviously, it is bigger. "Power users" today don't have follower counts in the tens of thousands any more — at the time of writing, there are 839 Twitter users with more than 2 million followers (with Mohammed Morsi just about to cross the mark.)

The best way of summing up this change probably comes from Twitter itself — at the top of their "About Twitter" page is the big, bold sentence;

The fastest, simplest way to stay close to everything you care about.

Below that;

An information network.
Twitter is a real-time information network that connects you to the latest stories, ideas, opinions and news about what you find interesting. Simply find the accounts you find most compelling and follow the conversations.

Compare that to what it said a couple of years ago;

Twitter is a real-time information network powered by people all around the world that lets you share and discover what's happening now.

Note the differences; out with "share", in with "follow." Out with "powered by people all around the world", and in with "latest stories, ideas, opinions and news".

Those screengrabs come from this blog post at Harvard Business Review, talking about a study on how people's Twitter usage changes when they get more followers;

We had two hypotheses as to why they do post. One is that they like to share information with world, that they want to reach others. This is an intrinsic motivation. They enjoy the act of contributing. The second hypothesis is that posting is self-promotional, a way to attract followers to be able to earn higher status on the platform. Judging by how people behaved once they achieved popularity—they posted far less content—we believe the second hypothesis is probably the primary motivation. If the primary motivation were to share with the world, most people would not slow down posting just because they were popular. But most people did slow down as they gained followers.

So I don't think the role of Twitter's "Head of News and Journalism" is going to be about showing journalists how they can talk to their audiences; it's about showing news organisations how they can use Twitter to broaden their audiences.

It isn't about showing editors how they can "listen" to what their readers are saying; it's about showing them what they can learn from the data coming from Twitter.

In other words, it's going to be about showing news organisations how to move forwards from the "old Twitter" world that Ruth Bazinet's article seems to be talking about, and towards the "new Twitter" that it is becoming, where Twitter isn't a platform for "engaging" or "interacting", but a platform for distribution.

Old Twitter wanted to be the internet's watering hole, where everyone came together to talk. New Twitter wants to be the internet's front page; Google will tell you what you want to know, but Twitter will tell you what you didn't know you wanted to know. Discovery, rather than Search. Ultimately, that's not really a change in what Twitter wants to be — but it is a slightly different way of becoming it.

I should probably note that I'm not particularly in favour of this shift that is going on (or rather, has already happened.) I like old Twitter, where it felt like the place where interesting things on the Web were happening, and it was small enough to feel like a community — where a celebrity making a typo or grammatical error wasn't seen as an invitation for hundreds of people to correct them. But… it's probably just a natural consequence of Twitter's need over time to grow its user base and develop its business. If they had decided against advertising as a core business model, perhaps it would be a very different story today.

But that's a whole other story…

(Thanks to Mat Morrison for pointing out the change in Twitter's description to me.)

[Edit 18:10, 11/10/13 - added screengrabs]

W3C and 'protected content'

An interesting point of view from the EFF on the decision by W3C that "playback of protected content" is within the scope of the W3C HTML Working Group, meaning that an Encrypted Media Extension protocol may be included in the HTML5.1 standard.

  1. This means that ultimately, the user is no longer fully in control of their web experience (in the way that they have been to date.)
  2. It also raises the question of what makes video so special. (Why don't writers, font designers, photographers – and web designers' HTML, CSS and Javascript code – get this special protection for their work?)
  3. The EFF also argues that it could be damaging for the W3C – the view being that engineering consensus is that DRM is a headache for designers, fails to prevent piracy, and is ultimately bad for the users.

It seems to me like the principle of the web is device independence; ie. no matter what hardware, operating system or browser you are using, web standards mean that web content is still accessible. The intrinsic 'openness' of the web is (or, maybe, has been) a side-effect of this principle, rather than a fundamental component of it. If this is what it takes to make sure that the future of video over the internet lives in web browsers rather than closed applications (which, by its nature, means limited to the most popular platforms), then it might be a pragmatic choice.

But if the alternative is most likely going to be proprietary browser plug-ins (ie. Flash or Silverlight) alongside native mobile applications for iOS and Android (maybe Windows Mobile at a push – I'd be surprised if anyone is focussing much development time on BlackBerry or Symbian these days), then it doesn't really change too much.

So, if this isn't something that is going to improve the web – but could start closing down other technologies, resulting in a less open World Wide Web – then it's probably not the kind of change that is going to get much love from the Mozilla guys. My guess is that whether it succeeds is going to depend almost entirely on what Google think of it. If they are in favour, then we should expect to see it embraced by Chrome and implemented in YouTube. If not, then don't expect one of the world's most popular browsers and the biggest video platform to rush to get on board. (Also bear in mind that Apple don't have any particular reason to want to support it, and if Google aren't in a rush to implement it on Android devices it could be a technology that is dead in the water on mobile platforms.) Which makes me worry about the amount of control that Google have over the wider web.

Worth a read

(Vaguely related; I recently spoke to someone who had been on a short coding workshop, who assured me that one of the great things about Javascript was that you could look at any website, inspect the Javascript code and figure out how it works. I held back the urge to ask them to explain how the Javascript on Google+ was doing its thing…)

The New iPhone

Every year, Apple has a big iPhone event where they announce their latest handset. Every year, the consumer tech, media, marketing and mobile industries get very excited. And every year, I tell myself that I'm not going to add to the noise of chatter, misinformed speculation and poor analysis.

And pretty much every year, I give up and post something at the last minute. 2013 is no exception… So, too late to be a part of the conversation and too early (at least, I think so anyway…) to be a snappy reaction…

Is the iPhone 5 "good enough"?

Last year, I said that the interesting thing about the iPhone 5 was that there wasn't a clear "interesting thing" about the iPhone 5.

But one thing that is a little different is that this time, there isn't really a 'headline feature' that is a unique selling point for the latest iPhone;
* Original iPhone – multitouch screen
* iPhone 3G – 3G, GPS, App Store
* iPhone 3GS – Main selling point was Speed – faster processor, faster networking, better camera. But it also introduced a built in compass, which enabled Augmented Reality for tech nerds, and better maps/navigation for normal people.
* iPhone 4 – Retina display, new design.
* iPhone 4S – Siri, iCloud
On stage, Phil Schiller said that every element of the iPhone 5 has been improved – display, wireless signal, voice processing, speakers and earphones have all been refined, physically the handset is thinner and lighter, yet the processor is faster and battery life is the same. […] If the story is that everything is improved, then nothing is truly different.

For the iPhone 5, I guess it was the taller screen (nice - but a selling point?), new dock connector (again — nice for a few reasons, but not going to sell handsets), LTE (only supported on one network in the UK until a few weeks ago). Smaller, thinner, lighter… All in all, good reasons for someone out of contract to buy the iPhone 5 over the older and cheaper iPhone 4S — but not quite convincing enough for me as an iPhone 4 owner to justify £600+ for the additional handset and contract commitments.

So far, the new iPhone rumours sound quite similar; incremental improvements (and a fingerprint scanner), but no major hardware features unique to the new handsets (as opposed to being a part of online services or iOS7.) The 3GS and 4S were incremental updates — "the same, but different." So it seems reasonable to expect the same for a 5S.

Which means figuring out what Apple's story tomorrow might be seems a bit trickier than usual, without a strong product story to tell…

"Designed by Apple in California."

Apple's WWDC conference sets the tone for the new iPhones, as Apple tells iOS (and OSX) developers what they need to know to get their apps ready for release date. This year was a particularly notable one, with iOS7 bringing a complete overhaul of the visual design as well as the usual new APIs.

Breaking from the usual format for an Apple event, the WWDC keynote opened up with a video – a public statement about their new brand signature, "Designed by Apple in California". For a brand well-known for their product-centric marketing, this seemed like an unusual departure — a message that is purely about the Apple brand.

At stratechery.com, Ben Thompson had this to say about Apple's intended audience;

The truth about the greatest commercial of all time – Think Different – is that the intended audience was Apple itself. Jobs took over a demoralized company on the precipice of bankruptcy, and reminded them that they were special, and, that Jobs was special. It was the beginning of a new chapter.
“Designed in California” should absolutely be seen in the same light. This is a commercial for Apple on the occasion of a new chapter; we just get to see it.

I think the way that Apple Inc. have set out to define the Apple brand says something about how they want to differentiate themselves from their competitors. While it's easy to focus on the "…in California" part (which isn't something many of their competitors can really compete with), it's the 'Designed by…' part that is probably most unique to Apple; they build everything from the CPUs and the devices that they sit in (at least for their mobile products), right through the software that powers them, the applications that run on them — and increasingly, the services (iCloud, Maps, iMessage etc.) that they use.

Presumably, the iPhone 5S will include a new chip at its core. It seems a safe bet that it will be called the A7. But the key point here is that while Google/Motorola and Microsoft/Nokia are getting their OS and hardware integration lined up, Apple are designing everything from the CPU to the interface. This gives them something to talk about from a marketing perspective (ie. "Designed by Apple, in California" — nobody else is designing the whole product in the way that Apple is doing).

Why is that so important?

Well, 5 years on from the iPhone 3G really changing the smartphone market, that market is now getting mature. By which I mean that most people buying smartphones today are smartphone owners already — they have a clear idea what they want. They aren't buying into the idea of "smartphones" - they are buying into a particular platform, whether that is iPhone, BlackBerry, Windows Phone, Galaxy etc. The 'early adopters' of 2008-2009 (iPhone 3G/3GS or early Android devices) will now be, assuming a 2 year smartphone contract/lifecycle, looking at their third device. They know what they want, what they don't want, how much they value it, and what it means to enter into a 2 year contract commitment in a fast-moving market.

Now, I've deliberately left Android off that list, because I don't think it's something that "normal people" see as a platform. Samsung — the most significant manufacturer of Android phones in the western market — don't even use the Android brand in their marketing materials. Have a look at the HTC One website and see if you can find out what version of Android it runs on. I can confidently predict that it will be hard to miss "iOS7" on Apple's iPhone page after today's event.

Android simply isn't a meaningful brand to the people selling the devices, and it isn't a meaningful brand to the people buying them. And those for whom it is meaningful are probably more interested in the Nexus brand, which promises an 'Android as Google intended it' experience; hardware designed by HTC/Samsung/LG, software designed by Google. But not Motorola, who design Android phones and are owned by Google… Oh - and then there is the "Google Experience" brand for non-Nexus devices that still have the stock Android OS and…

Android clearly has market share, and might even have devices that are as good as the iPhone. But it has issues with a fragmented marketplace, causing issues for developers, which causes quality and usability issues for users. Even the branding is fragmented.

My question is whether there is another opportunity here — something that Apple can do with a complete overhaul of the iPhone today (ie. new hardware and new OS) that the Android ecosystem wouldn't be able to reproduce? I don't know enough about the hardware side of things, but it feels like there is a space for innovation here — maybe it's stripping the hardware down to its bare essentials to make the most power-efficient, thinnest and lightest device. ("iPhone Air"?) Maybe it's some new service running at a level so deep in the internals that it would take years for an OS/Hardware partnership to reproduce?

'Value' model.

I saved the "obvious" stuff until last, because the rumours are all but confirmed that the iPhone is going from a "last years 'Great' model is this years 'Good' model" strategy to a "two new iPhones" line up.

In 2009, I said that the interesting thing about the iPhone 3GS's launch was the fact that the iPhone 3G was also remaining on the market - how this marked a split in the iPhone product line, from being a "premium" smartphone to a "regular" smartphone with a "premium" alternative. Although the 3G was only available in its smallest storage option, it meant 2 phone choices with 4 different price points.

When the iPhone 4S was released (2 years later), the iPhone 4 and 3GS stayed on the market (in the smallest storage option only) — so 3 iPhone choices with 5 different price points. The pattern was the same with the iPhone 5 launch last year, so the options today (at the end of the "iPhone 5" cycle) are;

  1. iPhone 4, 8GB, £319
  2. iPhone 4S, 16GB, £449
  3. iPhone 5, 16GB, £529
  4. iPhone 5, 32GB, £599
  5. iPhone 5, 64GB, £699

If the pattern were to continue, then you would expect to see the line up with an iPhone 5S/6 (or whatever it will be called) as;

  1. iPhone 4S, 8GB, £319
  2. iPhone 5, 16GB, £449
  3. iPhone 5S, 16GB, £529
  4. iPhone 5S, 32GB, £599
  5. iPhone 5S, 64GB, £699

But it seems pretty clear from the rumours that something different is happening - we will be seeing an iPhone 5S and an iPhone 5C. If the 4S were to remain, then we would be left with an old phone with an old screen size, an old dock connector and old (2.1) Bluetooth support. Given another year on the market (and a 2 year lifespan), I don't see this happening - I don't think Apple want people to be buying accessories with the old dock connector in 2016. (It seems like a pretty safe bet that the iPad 2 will be retired this month for similar reasons.)

And 8GB just isn't enough any more. With some music and videos and a reasonable collection of apps, 16GB becomes pretty tight pretty fast. So, assuming the 5C isn't just a "smallest space" model, my guess is that we will see something like;

  1. iPhone 5C, 16GB, £319
  2. iPhone 5C, 32GB, £359
  3. iPhone 5C, 64GB, £449
  4. iPhone 5S, 32GB, £529
  5. iPhone 5S, 64GB, £599
  6. iPhone 5S, 128GB, £699

I say "guess" — there is a lot of speculation about price points, based on things like subsidy values and the Chinese market — neither of which I will pretend to understand. But it seems to me that a "new" phone will outsell an "old" phone at the same price; Apple is doing very well out of the iPhone, and changing a successful balance in a way that might pull high-value customers into the low-value alternative model just doesn't make sense to me. (But like I said; I don't pretend to understand the Chinese and network market forces…)

Incidentally, although the names seem to be accepted as truth, I wouldn't be surprised if they turned out to be purely internal codenames rather than the brands to be marketed — 2 new phones that aren't really "new" but updates on last year's model seems like an odd move (especially considering the iOS overhaul.) Those of us who see the name as a meaningless label when you're getting entirely upgraded internals seem to be outnumbered by those who see the most important things about the phone as the name and casing design.

The other thing that I haven't seen anyone address is what the "new" naming pattern would be — ie. what happens next year? Will we get the iPhone 6 and 6C? Followed by the 6S and 6…CC? 6D? Or maybe next year will bring a new naming system - the "new iPhone" and "iPhone mini"?

Who knows. Whatever Apple's plans for next year are, it seems a safe bet that they already know what they plan to offer going into today's event. Whether or not we will get a hint of it remains to be seen.

… and the rest

The space that I think is going to be really interesting isn't so much the phones and software as the accessories — what happens when your phone is talking to your TV, your stereo, your home lighting, your fridge etc.

At WWDC, Apple opened with a demo from a new company called Anki; announcing the launch of their company, using the iOS development platform and devices "to bring artificial intelligence and robotics into our daily lives".

The commentary I've seen since the event seemed to be fairly dismissive of what is essentially a 3rd party apps and accessories developer showing off something that looks like a hybrid of toy cars and computer games. Perhaps it's down to simple confusion — why was this first up? Is it a toy, or a game, or a tech demo? If it's a game, how do you play it? If it's a toy, isn't this kind of AI a bit over the top just to entertain children for a while? (And therefore, won't the price tag be a bit much for the toys market?)

But despite the actual demo hardware, this company doesn't really seem to be about toys or games to me. This is about the power of an iOS device to do much more than run apps and surf the web, and how 3rd parties are building on this platform to do something a bit different.

While Google are talking about their project to build self-driving cars for around $150,000, Apple have shown how an iPhone can control a bunch of cars in real-time (albeit in a highly controlled environment.) I don't think anybody is thinking about putting their phone in control of their car, but the fact that a modern smartphone is even capable of this kind of data processing and wireless communication should be food for thought for anyone thinking about where this technology is heading. The focus for the last few years has been on handsets and apps — I'd love to see what happens if the industry shifts its thinking to what can be done with accessories.

Oh - and one more thing. iOS7 — lovely, "flat design", but with some interesting 3D layered effects, responding to your movements… Isn't anyone else wondering whether a 3D retina screen would be a possibility?

No? Just me then…

Who really invented "Inventing the future"?

A couple of months ago, I wrote about (among other things) a quote about "inventing the future".

This morning, via a Wired article, I came across the Quote Investigator blog, whose author has built up a large archive of quotes and dug out their true sources.

Needless to say, he did a more in-depth job than me; although Alan Kay has stated that he originated the maxim of the form "The best way to predict the future is to invent it", and began using it by 1971, the line "The future cannot be predicted, but futures can be invented." had already appeared in 1961, in a book by Dennis Gabor (inventor of holography) called "Inventing the future".

The future cannot be predicted, but futures can be invented. It was man’s ability to invent which has made human society what it is. The mental processes of inventions are still mysterious. They are rational but not logical, that is to say, not deductive.

Would you trust a photocopier?

Like a lot of people, I spend quite a lot of my working day for one reason or another in Excel. And I've learned not to trust it. Not the software - it seems pretty reliable (if a little inflexible at times.)

I don't trust my own work - it's too easy to make a mistake (mistyping a number or formula, putting the variables in an equation the wrong way around etc.) I double (or triple) check everything. But I have a reasonable idea how enthusiastic/bored I was when I was doing a particular piece of work, and how likely I was to have made a simple error at the time.

For other people's work, I'm less trusting. If someone has figures where I would expect to see a formula, I'll try to put together the formula to check that the figures are right. I'll check that percentages add up to 100% - basic checks that I probably wouldn't do on my own work.

But print… well, I trust that a bit more. Because the numbers on a page are what they are. It's quite literally black and white. Partly, I think this is because it's easier to assume that, for example, the cells that should be formulas are actually formulas (I'm an optimist…) But also because there is a finality to print — if I'm saving a working file to a network drive or emailing someone a work in progress, I'm not going to double check it in the same way as if I'm printing a copy off.

So I was pretty surprised to see this story about Xerox photocopiers 'randomly' altering numbers that they were scanning or photocopying. I had assumed that a copy was just that — it had never occurred to me that in a digital age, those massive copiers would be running compression/decompression algorithms.

So now I don't know what to trust any more…

Computers don't do amazing things

Thinking about a recent post and the idea of "augmenting human intellect" got me to thinking about what we look for in computer systems.

I think there is an idea among people who want a piece of work to be done without doing it themselves that computers do the work, and the person using the computer is just the "operator". Whether that is an image that you want someone to photoshop, or complex analysis using something like Excel, SPSS, or a social listening dashboard, the underlying (and probably unconscious) assumptions are;

  • Computers do amazing things
  • If I had the software/knew how to use it/had the time, I could do it myself
  • Even though I don't understand it, it seems pretty simple

I think those are 3 common assumptions, and they are all wrong. But I'm going to focus on the first one; the fact is, computers don't do amazing things.

People do amazing things with computers.

That's what the idea of "augmenting human intellect" is all about. Computers don't do the jobs — they help people to do the jobs.

Another way of looking at it is to think of the computer as an assistant. No good leader/manager would ever say that their problem is that they are leading the wrong people, or that they would be a better leader if they had a better assistant. But an assistant, by definition, has nothing to do without someone to assist.

And that's why looking for a magic system (ie. the best technology) to do a particular job is always going to be time that could be better spent looking for the right person.

If you work somewhere where technology is being brought in to do a job, but you don't already have people who are given the time and resources to do that job, then I would say that it is pretty inevitable that the job is going to fail.

End of "gaming"

This September, the new GTA game comes out. I am very excited about this.

Towards the end of the year (in time for Christmas), the PlayStation 4 and Xbox One are expected to launch.

They will be expensive (at least, more expensive than I can really justify, given how much less time I have for playing games these days). And presumably, they will have exclusives for all of the AAA titles within the next year or so.

Meanwhile, Lovefilm have announced that they are going to stop renting games. So if I want to play a new game, I'll have to pay £50-60+ to buy a copy — which, again, is something I don't do very often.

So, after GTA V, it's looking like my computer gaming days are going to be effectively over. In a few years' time, my son (currently 4) will be old enough to get involved with what will probably be the 9th or 10th generation of consoles (assuming that there is still a games industry like the one we have today by that point — which isn't an assumption I would personally put money on), and I've got little doubt that I will a) be encouraging Father Christmas to bring one and b) want to play on it myself. But I know it won't be the same.

But maybe it's not all bad.

For all the hours of fun I have had playing games like Mass Effect 3, Skyrim, various Call of Duties, other franchises, movie spin-offs and so on, I can only think of two games which have really blown me away in the years I have owned my PlayStation 3; Journey and Portal. Neither were £50+ "triple-A" titles, but 'experimental' games, both priced at less than £10. Both used the medium of video games to do something completely different with the way they told a story.

Meanwhile, mobile platforms have moved forwards at such a pace that not only has an entire business emerged in less time than a new generation of 'proper' consoles (albeit heavily focused on “casual” gaming so far), but it seems perfectly feasible to me that the next big experiments with storytelling through games will be coming to portable touch-screen devices, rather than to 'traditional' consoles.

So maybe, without the next wave of first person shooters (which don't translate too well to an iPhone or iPad environment), I'll be more invested in looking for quality mobile games — the kind that leave you wanting to find out what happens next (as opposed to just wanting to clear the next short level).

Or maybe I'll just be telling my son in a few years that 'in my day, we played proper games'…

Amateur web design

A post by Mark Boulton, a web designer who I have a lot of respect for, on the topic of the craft of web design;

For starters, it’s a designer-centric way of working. It’s a selfish exploit to pour love into your work. If you’re working commercially, who pays for that time? You? Well, that’s bad. The client? Well, that’s ok if they see the value. But many don’t.

This is why I'm happy to call myself an "amateur web designer/developer" — because I get to treat my projects the way I want to.

I think the difference between an amateur and a professional (not just in web design) has little if anything to do with "quality" of work — it's the ability to understand how much work is needed for a project, to set a deadline, set a value, and then manage the project to meet those two constraints. Because when someone else is paying for your time, you are responsible for setting their expectations and then meeting them.

As an amateur, I'm paying for my time. I find it slightly strange that more people don't think that way about web design or coding — it seems that it's perfectly acceptable to be an amateur painter, musician, writer, poet, etc. etc. But I don't seem to hear much about amateur developers or designers.

Maybe there aren't many people who think it's fun to spend time in BBEdit, Photoshop etc. (Believe me, when you don't need to worry about things like hacks to make your code work around a bug in Internet Explorer, it's a lot more fun…)

Or maybe there are more people who want to deal with deadlines and project management as a part of the design/development process.

I doubt it though.

Faith in the hard jobs

Last Friday, I was trying to send an email back to the office, but what seemed to be a flaky WiFi hot spot meant that although I could apparently connect, I couldn't actually send it.

The odd thing was, I was on a boat, in the middle of the Irish Sea. There was a WiFi network, which I could connect to, which is great. But then you get a splash screen (once – which I couldn't get to come up again) before you can actually connect to the internet.

Thing is, the wifi network is then connected to the internet through space - a satellite connection of some sort which I won't even pretend to understand. As far as I'm concerned, this goes beyond the kind of 'magic' that makes almost all of the video in the world stream in high quality to a little thing in my hand that is smaller than a C90 tape and into the kind of 'magic' that I can only take on faith that it's actually how it works (as opposed to what, I'm not sure. Perhaps the ferry drags a very long ethernet cable behind it as it travels across the sea.)

But I have more faith that the satellite connection is working properly than I have that the wifi network and its silly splash screen is going to work properly in letting me actually connect to the local network.

I think the reason I think that way is that if someone is going to go to the effort of setting up an internet connection that goes through space and then claim that it works, then I tend to believe them. But if someone says that they have a simple screen to pop up and tell you where your internet connection is coming from, that you just have to click "OK" to some small print agreement before you can connect, then I tend to assume that it won't work with something other than a Windows XP PC running an old version of Internet Explorer.

When it does work on a Mac, or an iPhone, or anything that seems to be newer than the interior design of wherever I happen to be at the time, I'm pleasantly surprised. But when I'm 50 miles away from land and I'm told that there is an internet, that is provided wirelessly, for free, and from space, then it's not much more than meeting my expectations.

It seems like my expectations might be a little backwards.

"Augmenting Human Intellect"

Earlier this month, on the 2nd July, Douglas Engelbart died aged 88.

I started putting together a brief note about his achievements as a kind of personal tribute/obituary, but it didn't seem quite right. Nailing down what he did by talking about the technologies he pioneered felt like it was somehow falling short, and I think the reason for that was because his 'big idea' was so big that I was missing the wood for the trees.

So instead, I'm going to try to explain why I believe he is a rare example of a true genius.

Inventing the future

If you try to imagine what the future will look like, the chances are that you will take things that you know about the past and current trends and extrapolate those to create some sort of 'future vision'. In other words, things that are getting faster will be faster, things that are getting smaller will be smaller, and things that are getting cheaper might even become free.

That is one way to get to a 'future vision'. There are plenty of people making a living out of that kind of thing (especially in areas like market sizing) and it works reasonably well on a short term basis, where you are talking about things like how many widgets are likely to be sold over the next 3-5 years. Tomorrow will be the same as today, but a little bit different; repeat to fade…

The thing is, that isn't really about the future. It's about understanding the present – where we are, how we got here, and assuming that we stay on the same trajectory.

What that kind of 'vision' misses are actual events; the 'game changers' that disrupt or transform businesses and industries. (For example, the effect that MP3s had on music sales in the late 1990s/early 2000s.) Usually, those kinds of events are clear in hindsight; often a continuation or convergence of technological trends, brought together and executed well to create a new product. They come from a deeper understanding of the present – how the current trajectory is going to be changed by different events from the past that aren't immediately obvious. (To continue with the MP3 example; hard drives and embedded computers were getting smaller, so it was possible to put much more music into a much smaller music-playing device.)

But occasionally, someone has a new idea, or an insight, which changes the future.

"Genius is a talent for producing something for which no determinate rule can be given, not a predisposition consisting of a skill for something that can be learned by following some rule or other" Immanuel Kant

"Talent hits a target no one else can hit. Genius hits a target that no one else can see." Arthur Schopenhauer

It is very rare that something happens which genuinely changes the trajectory of history. Especially when it comes from a vision; a possible or potential future that someone imagines. More rarely still, it comes from a vision that someone is able to not just articulate, but execute.

For example, some might call the concept of the iPhone 'genius' – it changed the smartphone industry, and what we think of as "portable" or "mobile" computers, but the development of a pocket-sized computer built around a touch screen wasn't new. Smartphones weren't new. Although it was a very well executed idea, it is hard to say that what Apple achieved with the iPhone was significantly different to the target that Palm, Nokia, RIM, Microsoft and others were aiming for with their phones, smartphones and pocket PCs.

I find it hard to think of a better example of what 'genius' really means than Douglas Engelbart.

Recently, I wrote a blog post about "inventing the future" where I said that;

[…] if you want to see the real meaning of "inventing the future", then you could do far worse than looking at [Alan] Kay and the work that was going on at Xerox PARC (Palo Alto Research Centre, where he worked in the 1970s). Because PARC was basically where most of the ideas behind what we think of as 'computers' were put together into a coherent product. At a point in time when the science fiction future of computers involved banks of switches, blinking lights and beeping computers, the guys at PARC were putting together computers with graphical interfaces (ie. WIMP - the idea of a user interface using Windows, Icons, Mouse and Pointer), the Paper Paradigm (the idea that the computer interface would be made up of analogues to the traditional desktop – so, the "desktop", files and folders, the trash can), WYSIWYG ("What You See Is What You Get" – beforehand, what you would type in a word processor wouldn't really give you any clear idea of what it would look like when you printed it out on paper.)

What I failed to mention (because I was focussing on the idea of "inventing the future" that Kay articulated, rather than the actual "inventing" part) was that while the work that was being done at Xerox PARC was putting together the ideas behind what we think of as "computers", they were very much standing on the shoulders of what Douglas Engelbart and his research team had achieved at SRI in coming up with those ideas. (The PARC team also included several of Engelbart's best researchers.)

Speaking of Alan Kay, he is quoted in Wired's obituary of Engelbart;

"The jury is still out on how long -- and whether -- people are actually going to understand," he said, what Engelbart created. But at least we have started to.

Ultimately, Douglas Engelbart's big idea was simply too big to fit into a simple blog post or article. The general theme of most of the obituaries I have read is to summarise his life's work as 'inventing the mouse.'

For example, this piece on the BBC website illustrates the shortcomings of over-simplifying what he did;

Douglas Engelbar [sic], the man who invented the computer mouse, has died. He passed away aged 88 and did not become rich through his invention, because he created the mouse before there was much use for it. Bill English explained: "The only money Doug ever got from it (the mouse) was $50,000 licence from Xerox when Xerox Parks [sic - actually Xerox PARC] started using the mouse. Apple never paid any money from it, and it took off from there."

Aside from the transcription errors, that brief summary puts this particular achievement into nice simple, concrete terms that anyone who has used a computer would understand. But in doing so, it pulls it out of context and massively over-simplifies what he did. (With the added distraction of how much money he failed to make from his invention. Failing to mention the $500,000 Lemelson-MIT prize he was awarded in 1997, for example.)

To put this particular invention into context; imagine using a computer without a mouse. I would guess that you're probably imagining the same kind of computer, but with a different kind of device to move a pointer around the screen. (Such as a laptop's trackpad, or a trackball, or a joystick.) If so, then you're missing the point of what he actually invented – not just the mouse, but the basic concept of the computer interface using a "pointing" device.

So, try again to imagine a computer interface that doesn't use a mouse, or a pointer. Now, I would guess that you are thinking about the kind of modern applications that don't involve a lot of mouse/pointer work (so, no Photoshop, no Powerpoint etc.) and maybe something more like a word processor. In other words, different ways of using a graphical user interface to operate a computer – which again was part of Engelbart's creation.

Hopefully, you're starting to get an idea of how much of the 2013 idea of a "computer" owes to what Engelbart was imagining half a century ago.

Robert X. Cringely sums it up;

In addition to the mouse and the accompanying chord keyboard, Doug invented computer time sharing, network computing, graphical computing, the graphical user interface and (with apologies to Ted Nelson) hypertext links. And he invented all these things — if by inventing we mean envisioning how they would work and work together to create the computing environments we know today — while driving to work one day in 1950.

Incidentally, that article closes with this beautiful quote;

I once asked Doug what he’d want if he could have anything. “I’d like to be younger,” he said. “When I was younger I could get so much more done. But I wouldn’t want to be any less than 50. That would be ideal.”

He has been widely credited with creating many of the basic concepts of modern computers, demonstrating many of them to the world for the first time at what has since been dubbed "the mother of all demos". But the impact of what he envisioned was much greater than the sum of its parts.

Augmenting Human Intellect

Even then, it's still an over-simplification. It still hasn't got to the bottom of Engelbart's vision. The mouse, the GUI, videoconferencing and networked computing are all just details of execution – they don't get to the bottom of why he was developing those ideas, and what they were for.

His vision of the potential of the computer went beyond what they did, or how a user would interact with them. It was – in an age of hugely expensive room-sized workstations, punch-cards and teletype terminals – about the role that a computer would have in people's lives.

In a blog post on Understanding Douglas Engelbart, John Naughton has this to say;

But reading through the obituaries, I was struck by the fact that many of them got it wrong. Not in the fact-checking sense, but in terms of failing to understand why Engelbart was such an important figure in the evolution of computing. Many of the obits did indeed mention the famous “mother of all demonstrations” he gave to the Fall Joint Computer Conference in San Francisco in 1968, but in a way they failed to understand its wider significance. They thought it was about bit-mapped, windowing screens, the mouse, etc. (which of course it demonstrated) whereas in fact that Engelbart was on about was the potential the technology offered for augmenting human intellect through collaborative working at a distance. Harnessing the collective intelligence of the network, in other words. Stuff we now take for granted. The trouble was that, in 1968, there was no network (the ARPAnet was just being built) and the personal computer movement was about to get under way. The only way Engelbart could realise his dream was by harnessing the power of the time-shared mainframe — the technology that Apple & Co were determined to supplant. So while people understood the originality and power of the WIMPS interface that Doug created (and that Xerox PARC and, later, Apple and then Microsoft implemented), they missed the significance of networking. This also helps to explain, incidentally, why after the personal computer bandwagon began to roll, poor Doug was sidelined.

To put it another way, before Engelbart, a "computer" was a device to process information – you put data in, it ran the numbers and gave you data out. In other words, a computer was something you gave some 'work' to, and it did it for you. (For example, you would use a keyboard to punch some information into a card, then put the card into the computer.)

Engelbart's vision was computers as devices to augment human intellect – partly by doing the "computing" work for you, and partly by doing the work with you (for example, by using an interface that was live, giving the user feedback in real time), but through networking and sharing resources, by connecting people and by working together, becoming a working team greater than the sum of its parts.

If you focus on his achievement as the tools he created to make the first part of this vision a reality — the mouse, the GUI and the desktop computing environment — then you could be forgiven for thinking that as we move to mobile devices and touch screens, we are leaving what he did behind.

I think that couldn't be further from the truth. When we move forwards to always-on, mobile, networked computing, with permanent availability of resources like Wikipedia, to social networks, to Dropbox and Maps and so on, the role of the device for "augmenting human intellect" becomes clearer than ever.

The Power of the Network

The system that Engelbart designed and helped to build was NLS - the "oN Line System", which enabled several users to work on the same computer at the same time. (This was the system that was shown off at 'the mother of all demos'.)

In 1969, the beginnings of ARPANET – one of the first packet-switching computer networks – were put into place. The second computer on the network (so the first network connection) was up and running in October, connecting a machine at UCLA to Douglas Engelbart's NLS system at the Stanford Research Institute. As this network developed, it was the first to use the TCP/IP protocol designed to allow computer networks to connect to one another, allowing machines on any of the connected networks to communicate with one another directly. The public 'network of networks' built on this protocol is the internet.

There is a great anecdote in this article from 1999 about a meeting between Engelbart and Steve Jobs which I think illustrates this friction between Engelbart's vision of the power of the network being the key to the computer, and the similar but competing vision of the personal, desktop computer as a self-contained box with 'all the computing power you need';

Apple Computer Inc.'s hot-shot founder touted the Macintosh's capabilities to Engelbart. But instead of applauding Jobs, who was delivering to the masses Engelbart's new way to work, the father of personal computing was annoyed. In his opinion, Jobs had missed the most important piece of his vision: networking. Engelbart's 1968 system introduced the idea of networking personal computer workstations so people could solve problems collaboratively. This was the whole point of the revolution. "I said, 'It [the Macintosh] is terribly limited. It has no access to anyone else's documents, to e-mail, to common repositories of information, "' recalls Engelbart. "Steve said, 'All the computing power you need will be on your desk top."' "I told him, 'But that's like having an exotic office without a telephone or door."' Jobs ignored Engelbart. And Engelbart was baffled. We'd been using electronic mail since 1970 [over the government-backed ARPA network, predecessor to the Internet]. But both Apple and Microsoft Corp. ignored the network. You have to ask 'Why?"' He shrugs his shoulders, a practiced gesture after 30 frustrating years, then recounts the story of Galileo, who dared to theorize that the Earth circles the sun, not vice versa. "Galileo was excommunicated, " notes Engelbart. "Later, people said Galileo was right." He barely pauses before adding, "I know I am right."

Apple's vision for the Mac in 2001 was the "digital hub", which would connect to all of your other electronic devices. It wasn't until just 2 years ago that the focus of Apple's vision shifted from the desktop computer to the network – specifically iCloud – as the "digital hub" which everything would connect to. Arguably, Apple's reputation for online services, and specific examples like the iWork for web applications announced last month (which work through a web browser, but still offer no way for people to work collaboratively on the same document at the same time) indicate that they still don't get it.

So – yes, he invented the mouse. And the idea of the computer interface that the mouse works within. But his greater idea was one that I think we are still getting to grips with; the idea of the computer as a tool for extending ourselves, for bringing people together, connecting them across countries and continents so that they can work together, share their things, talk, write and speak to one another.

All of this sparked by a man in 1950, driving on his way to work the day after getting engaged and realising that he needed to set himself some professional goals to keep himself interested once he had got married and was 'living happily ever after';

I finally said, "Well, let's just put as a requirement I'll get enough out of it to live reasonably well." Then I said, "Well, why don't I try maximizing how much good I can do for mankind, as the primary goal, and with the proviso that I pick something in there that will make enough livable income." So that was very clear, very simple as that.

Realising that the complexity of human problems was growing, as well as becoming more urgent, and realising that computers could provide a way to solve those problems, his mission (or 'crusade', as he later called it) was to turn that into a reality.

From that vision sprang the ideas behind what we think of as the computer, in terms of its graphical user interface, and the tools we use to connect with that interface. From his own computer system came the first network connection, to the network that later became the internet. But the vision he was putting together in the 1960s is only just now becoming clear to those of us who have moved into a world of ubiquitous, always-on, always-connected computers – as it moves past the desktop-bound paradigm that he saw and into a pocket-sized, portable and wireless world.

Whether he will forever be 'the man who invented the mouse', or eventually get wider recognition for the scope of that original vision remains to be seen, but no history of either the personal computer or the internet could be complete without mentioning his work. But the fact is that thanks to Douglas Engelbart's vision, pretty much anyone today with even a passing interest in where the ideas of the personal computer, the networked computer or the internet came from will be able to pull out their pocket-sized personal, networked computer and quickly trace them back to him.

A Day One bookmarklet for iOS

There is a promotion in the Apple App Store at the moment, giving away 10 apps for free to mark the 5 year anniversary of the App Store's launch.

One of the apps there is Day One, which I had heard some good things about, so decided to give it a whirl. And I like it.

One thing that I thought would be useful was a bookmarklet to send web pages from Safari into Day One entries. I had a quick look, but couldn't find anything. So I had a stab at building one myself.

This is the Day One URL scheme;

  • Open Day One: dayone://
  • Start an entry: dayone://post?entry="entry body"
  • Open Entries list: dayone://entries
  • Open Calendar: dayone://calendar
  • Open Starred: dayone://starred
  • Edit Entry: dayone://edit?entryId=[UUID]
  • Preferences: dayone://preferences
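
Just to make the scheme concrete, here is a quick sketch of my own showing how a "post" URL can be put together in Javascript (the entry text is a made-up example, not anything from the Day One documentation);

// A minimal sketch: composing a Day One "post" URL by hand.
// 'Hello from Safari' is just a hypothetical entry body.
var entry = 'Hello from Safari';
var url = 'dayone://post?entry=' + encodeURIComponent(entry);
// url is now "dayone://post?entry=Hello%20from%20Safari";
// navigating to it hands the text over to Day One as a new entry.
window.location = url;

That is essentially all the bookmarklets below are doing; the rest is just gathering up the page title, URL and any selected text to use as the entry body.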

And this is a bookmarklet I had previously made to work with the Drafts app;

javascript:window.location='drafts://x-callback-url/create?text='+encodeURIComponent(document.title+'\n')+encodeURIComponent(location.href)

To start with, I put the Day One URL into a very simple JavaScript bookmarklet;

javascript:window.location='dayone://post?entry="entry body"'

And it works - good start!

Taking the code from my Drafts bookmarklet to get the URL and page title gave me this;

javascript:window.location='dayone://post?entry="'+encodeURIComponent(document.title+'\n')+encodeURIComponent(location.href)+'"'

Which also worked. So this is basically the same as my Drafts bookmarklet (but without the Actions to trigger).

I had a more complex Drafts bookmarklet which checks for selected text (only works on the iPad when the Bookmarks bar is visible - otherwise any text is deselected when you pull up the bookmarks menu) - switching the base URL gave me this (I've added line breaks to make it readable here - you probably don't want them if you're using this bookmarklet yourself. Just copy/paste the code into a text editor and remove the line breaks so it is all on a single line.)

javascript:function%20
getSelText()%7Bvar%20txt=%27%27;
if(window.getSelection)%7Btxt=window.getSelection();%7D
else%20if(document.getSelection)%7Btxt=document.getSelection();%7D
else%20if(document.selection)%7Btxt=document.selection.createRange().text;%7D
else%20return%20%27%27;
return%20txt;%7D
var%20q=getSelText();
if(q!=%27%27)%7B
q='%3Cblockquote%3E%5Cn'+q+'%5Cn%3C%2Fblockquote%3E%5Cn';%7D
var%20l='dayone://post?entry='+'%5B'
+encodeURIComponent(document.title)+'%5D%28'
+encodeURIComponent(location.href+'%29%5Cn');
if(!document.referrer)%7Br='';%7D
else%7Br='via%20'+encodeURIComponent(document.referrer);%7D
window.location=l+r+'%5Cn'+q+'%5Cn';

Which, to my surprise (once I had got rid of some stray commas and semicolons), worked!
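
For reference, here is roughly what that bookmarklet does once the percent-encoding is stripped away. It is a decoded, readable sketch of the same logic, not the version to paste into a bookmark (the encoded one above is what actually goes there);

// Grab any selected text, trying the various selection APIs in turn.
function getSelText() {
  var txt = '';
  if (window.getSelection) { txt = window.getSelection(); }
  else if (document.getSelection) { txt = document.getSelection(); }
  else if (document.selection) { txt = document.selection.createRange().text; }
  return txt;
}
var q = getSelText();
if (q != '') {
  // Wrap the selection in a blockquote for the entry body.
  q = '<blockquote>\n' + q + '\n</blockquote>\n';
}
// The page title becomes a markdown link to the page URL...
var l = 'dayone://post?entry=' + '[' + encodeURIComponent(document.title) + '](' + encodeURIComponent(location.href + ')\n');
// ...followed by a "via" line if the page has a referrer.
var r = document.referrer ? 'via ' + encodeURIComponent(document.referrer) : '';
// Hand the whole thing over to Day One.
window.location = l + r + '\n' + q + '\n';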

With some text selected on a web page, this bookmarklet now opens Day One, creates an entry and populates it with the web page title (as a markdown link to the page URL) and any selected text in a blockquote HTML tag, and looks something like this;

Drafts and Safari bookmarklets — Some Random Nerd

It occurred to me that an app that plays so nicely with URL schemes (ie. sending things to other apps via their URL schemes) would probably have a scheme of its own for pulling things in. A little googling later and I found that you can; like this bookmarklet

Not a bad result at all - especially considering I managed to put it all together on my iPad on a 25 minute train journey.