Using a computer
I have a theory that there are three types of computer users…
Daniel Finkelstein, on the two rules of politics;
After several fruitless exchanges I fear that I responded to a personal comment by telling my interlocutor that I had reluctantly reached the conclusion that he was “pathetic”. I can’t say that I immediately regretted doing this. Or that, even now, I resile from the judgment. But the next day it didn’t feel as good as it did at the time. And a couple of days later I began to wince at the error.
The first might be considered the “shoot the messenger” rule…
When someone issues an angry rebuke, observers associate that person with anger. When accusing someone else of being pathetic, whatever the merit of my case, I mainly succeeded in making myself appear pathetic.
The second rule one might call the university fraternity rule. My favourite social psychologist, Robert Cialdini, points to a study of fraternity initiation rituals in the US. The more humiliating the initiation, the more the membership was valued. It's a psychological trick we play on ourselves to make us feel the humiliation was worthwhile.
Thus by calling someone pathetic I was increasing my antagonist’s commitment to the position I was arguing against. He would become even more wedded to it in order to justify to himself the insults he was enduring.
I had made myself look small while simultaneously making my adversary feel more certain that he was right. Good work there.
On one hand, it's useful to know how market share is changing for the different smartphone operating systems.
On the other though, it's hard to draw any sort of meaning out of a change in market share without knowing what the change in the market size is. I know that there are more smartphones than 12 months ago, and that Apple's share has declined. But does that mean that Apple is selling more phones? Or the same number of phones in a growing market?
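To make that concrete, here's a rough worked example (the numbers are invented for illustration, not real market figures): a platform's share can fall while its unit sales still grow, so long as the overall market grows faster.

```typescript
// Invented figures, purely to illustrate share vs. volume.
const marketLastYear = 100_000_000; // total smartphones sold last year
const marketThisYear = 150_000_000; // total sold this year (the market grew 50%)

const shareLastYear = 0.23; // 23% share last year
const shareThisYear = 0.18; // 18% share this year (share has fallen)

const unitsLastYear = Math.round(marketLastYear * shareLastYear); // 23,000,000
const unitsThisYear = Math.round(marketThisYear * shareThisYear); // 27,000,000

console.log(`Share change: ${((shareThisYear - shareLastYear) * 100).toFixed(1)} points`); // -5.0 points
console.log(`Unit change: ${unitsThisYear - unitsLastYear}`); // +4,000,000 phones
```

So a headline of "share down five points" is perfectly compatible with "selling four million more phones" — which is why the market-size number matters.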
From happy to share my personal information to carefully ensuring that I don't, through one simple act of web form design.
Especially for the "more megapixels means better photos" crowd.
(via DaringFireball)
It must be about ten years ago now when, stuck in a dead-end job and wondering what I wanted to do with myself, I decided that I wanted to write manuals for gadgets.
I had some experience in technical writing, I had a love of gadgets, and I felt that I could turn the terrible guides that came with a typical piece of technology into something that would actually be useful.
So, I started working on building up a portfolio and spoke to some people in that line of work. Generally doing my research — who is doing it well? Where is it all going? (Why are they so terrible?) But I learned three things;
The second two made me change direction - 2. because I got caught up in web technologies, and 3. because the actual job of writing the kind of documentation I thought I could do well would involve doing another job which I thought would directly harm my ability to do what I wanted to do well.
The first point seems to be having a strange impact on gadgets in general though.
Russell Davies has documented his recent experience of buying a new gadget. (Spoiler: His discovery was that "the manual" hasn't gone online. It's just gone.)
The thing I find most alarming is that the gadget he's bought appears to be a digital camera, made by one of the biggest consumer electronics brands in the world.
Dedicated digital cameras are currently in danger of becoming one of those gadgets that suddenly gets very old and dies - like VHS recorders in the age of Sky+, or portable CD players in the age of iPods. Whether there is a place for the dedicated digital camera in the age of smartphones remains to be seen.
Actually, it's not quite that simple. Sky+ is better than VHS. iPods are better than portable CD players in certain ways. Probably not in terms of how good the sound is though - the music on iPods (and other MP3 players) is — usually — compressed. The amount of music that fits on there is more important than the fidelity of the sound.
Dedicated cameras are generally better than smartphones. I say "generally" because it depends on what you want from a camera. If you want;
…then you're probably thinking of dedicated compact cameras as "better" than smartphones.
But, if you want;
…then you might well be thinking about investing that £100-200 in a better smartphone, instead of a new camera.
But here is the problem. Apart from the zoom lens (which is pretty straightforward to use), all of those benefits of dedicated cameras need some sort of explanation. The typical person (who has never read a photography book, taken a class, followed a photography blogger, joined something like a Flickr community etc.) needs to be told what those features are before they can figure out why they would want them, when they should think about them, and how to actually use them.
So, maybe that isn't where cameras are going. Maybe they are all about convenience — point-and-shoot for the masses, who are never going to read the manual (whether it's a multi-lingual tome or an interactive online experience) and don't want a £600 telephone?
It doesn't look like it. Camera complexity seems to not just be a selling point — it isn't even a choice.
But camera manufacturers don't seem constitutionally capable of making a super-simple camera. They must be deeply convinced that the complexity of the feature set (which certainly does appeal to a lot of us) is an indivisible part of how they add value to their product, and the temptation to add more and more is something they can't forswear even for one product. I mean, with hundreds of cameras on the market, wouldn't you think they could make one that was super-simple, just for that segment of the population that wants it? And market it that way. You'd think. But no.
I think it's one of the "stealth reasons" why cellphones are encroaching on the camera market so rapidly. Not the only reason, not the main reason, but a reason. (I also think that as cameraphones gain an ever-enlarging share of the camera market, the cameras in them will inexorably get more complicated.)
I can only assume that the likes of Sony either believe that people today don't need a manual, because they can just google for a general "how to use a camera" blog post and figure out what they need to know, or that they are making gadgets like iPods — so simple and intuitive that they don't need manuals or handbooks.
So, that's the situation for the future of "manuals";
I'd say that things aren't looking too good for a well documented future for gadgets…
An email newsletter I received claimed that "One in three millennials watch no broadcast TV".
I didn't believe it…
"A bit of video I was looking at recently stuck with me over the past few months. It showed a toddler sitting up holding a magazine. She tries to swipe it – she tries to expand it – she bangs it to try to make it play. Nothing happens. And in frustration she throws it away. To a toddler a magazine is a tablet that’s broken. That’s how this generation is growing up. It will have a totally different set of norms and behaviours."
Director-General of the BBC Tony Hall, in a speech about the future of the BBC.
This is a refrain I've heard many, many times over the last few years (I assume that this video is the one he is talking about); today's toddlers have grown up with touch screens. They expect screens to do things when you touch them. Something that doesn't react to physical interaction is broken…
There is a fundamental truth here. Touchscreens are just a part of it — "natural interfaces" are the new category of user interfaces. Once you start using a touch screen to interact directly with content, it's jarring to go back to a similar device where you have to operate a cursor with a keypad.
But… does a toddler of today think that a magazine is "broken"?
I think it's nonsense. I can't speak from experience — my eldest child was playing with an iPad before he could talk — but I'm pretty sure that before 2010, toddlers were not a significant part of the Vogue/Heat/FHM audience. (It's hard to be sure, because the NRS doesn't measure readership of under 15s. But I'm pretty confident.)
I'm pretty sure that before 2010, a toddler's reaction to a glossy magazine would have been to touch it, see if it did anything — maybe try to eat it, if they were below a certain age — and then either ignore it or rip it up. (Which is one of the reasons we tend not to give magazines to toddlers unless we don't mind them being screwed up, ripped and eaten. I do speak from experience when I say that toddlers are agents of chaos, on a mission to destroy everything they come into contact with.)
Earlier this week, I attended Microsoft's Advertising Retail Forum, where I heard a great example of what not to do with in-store technology; a big "outdoor clothing" brand had put a nice, interactive thing in their stores — you've probably seen the kind of thing; a computery thing where you can browse their catalogue or see a store guide or watch their TV adverts or whatever it is that they thought their customers would want to do. All well and good.
Except… they had to put a sticker on the thing to ask people not to touch the screen, because below was a keyboard and trackball, because this was a computer, and people's greasy fingerprints on the non-touch screen were making it very dirty.
That is an illustration of why natural interfaces are important. Because grown-ups, who have spent their entire adult lives (and probably more) using computers with keyboards and mouses, are suddenly assuming that big screens are things to be touched. I've seen it happen with large screen installations, with non-touch smartphones, with computers in kiosks, car park ticket machines - you name it. Touch screens are now the default for many adults. That's why anyone dealing with technology today — hardware or software — needs to be aware that expectations have changed.
Not because toddlers don't read magazines.
So, how many tweets does one need to be considered a social media expert?
— Jason Kottke (@jkottke) October 10, 2013
Twitter is looking for a new "Media Evangelist" — officially titled "Head of News and Journalism" — and NBC News Chief Vivian Schiller is currently the favourite for the position. But an opinion piece by Ruth Bazinet on Medium says that she is the wrong person for the job.
Why? Because of her Twitter profile.
But it lacks the most important element that should be ringing alarm bells at Twitter HQ — a significant number of tweets. How can someone who has tweeted less than 1,200 times have the practical, hands-on knowledge of the platform required to evangelize it to other news media professionals? Twitter needs a veteran, someone who is an expert not only about the platform itself, but who also understands how people, including other journalists, are using it.
In short, the view is that Twitter is heavily reliant on "power users" — those who are tweeting dozens of times a day.
I think that's a view that misses the point of what Twitter is and where it's heading. Maybe three or four years ago, when Twitter was a social network for the bloggers, journalists and technorati, it would have seemed a more valid point; but today, Twitter is something different. It has changed.
Most obviously, it is bigger. "Power users" today don't have follower counts in the tens of thousands any more — at the time of writing, there are 839 Twitter users with more than 2 million followers (with Mohammed Morsi just about to cross the mark.)
The best way of summing up this change probably comes from Twitter itself — at the top of their "About Twitter" page is the big, bold sentence;
The fastest, simplest way to stay close to everything you care about.
Below that;
An information network.
Twitter is a real-time information network that connects you to the latest stories, ideas, opinions and news about what you find interesting. Simply find the accounts you find most compelling and follow the conversations.
Compare that to what it said a couple of years ago;
Twitter is a real-time information network powered by people all around the world that lets you share and discover what's happening now.
Note the differences; out with "share", in with "follow." Out with "powered by people all around the world", and in with "latest stories, ideas, opinions and news".
Those screengrabs come from this blog post at Harvard Business Review, talking about a study on how people's Twitter usage changes when they get more followers;
We had two hypotheses as to why they do post. One is that they like to share information with the world, that they want to reach others. This is an intrinsic motivation. They enjoy the act of contributing. The second hypothesis is that posting is self-promotional, a way to attract followers to be able to earn higher status on the platform. Judging by how people behaved once they achieved popularity—they posted far less content—we believe the second hypothesis is probably the primary motivation. If the primary motivation were to share with the world, most people would not slow down posting just because they were popular. But most people did slow down as they gained followers.
So I don't think the role of Twitter's "Head of News and Journalism" is going to be about showing journalists how they can talk to their audiences; it's about showing news organisations how they can use Twitter to broaden their audiences.
It isn't about showing editors how they can "listen" to what their readers are saying; it's about showing them what they can learn from the data coming from Twitter.
In other words, it's going to be about showing news organisations how to move forwards from the "old Twitter" world that Ruth Bazinet's article seems to be talking about, and towards the "new Twitter" that it is becoming, where Twitter isn't a platform for "engaging" or "interacting", but a platform for distribution.
Old Twitter wanted to be the internet's watering hole, where everyone came together to talk. New Twitter wants to be the internet's front page; Google will tell you what you want to know, but Twitter will tell you what you didn't know you wanted to know. Discovery, rather than Search. Ultimately, that's not really a change in what Twitter wants to be — but it is a slightly different way of becoming it.
I should probably note that I'm not particularly in favour of this shift that is going on (or rather, has already happened.) I like old Twitter, where it felt like the place where interesting things on the Web were happening, and it was small enough to feel like a community — where a celebrity making a typo or grammatical error wasn't seen as an invitation for hundreds of people to correct them. But… it's probably just a natural consequence of Twitter's need over time to grow its user base and develop its business. If they had decided against advertising as a core business model, perhaps it would be a very different story today.
But that's a whole other story…
(Thanks to Mat Morrison for pointing out the change in Twitter's description to me.)
[Edit 18:10, 11/10/13 - added screengrabs]
Fascinating story of an exercise by a teacher in the 1960s the day after Martin Luther King was shot, and the impact it has had.
Well worth a read.
(via a Mental Floss article of famous experiments that could never happen today.)
An interesting point of view from the EFF on the decision by W3C that "playback of protected content" is within the scope of the W3C HTML Working Group, meaning that an Encrypted Media Extension protocol may be included in the HTML5.1 standard.
It seems to me like the principle of the web is device independence; ie. no matter what hardware, operating system or browser you are using, web standards mean that web content is still accessible. The intrinsic 'openness' of the web is (or, maybe, has been) a side-effect of this principle, rather than a fundamental component of it. If this is what it takes to make sure that the future of video over the internet lives in web browsers rather than closed applications (which, by its nature, means being limited to the most popular platforms), then it might be a pragmatic choice.
But if the alternative is most likely going to be proprietary browser plug-ins (ie. Flash or Silverlight) alongside native mobile applications for iOS and Android (maybe Windows Mobile at a push – I'd be surprised if anyone is focussing much development time on BlackBerry or Symbian these days), then it doesn't really change too much.
So, if this isn't something that is going to improve the web – but could start closing down other technologies, resulting in a less open World Wide Web – then it's probably not the kind of change that is going to get much love from the Mozilla guys. My guess is that whether it succeeds is going to depend almost entirely on what Google think of it. If they are in favour, then we should expect to see it embraced by Chrome and implemented in YouTube. If not, then don't expect one of the world's most popular browsers and its biggest video platform to rush to get on board. (Also bear in mind that Apple don't have any particular reason to want to support it, and if Google aren't in a rush to implement it on Android devices it could be a technology that is dead in the water on mobile platforms.) Which makes me worry about the amount of control that Google have over the wider web.
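For a sense of what "Encrypted Media Extensions" actually looks like to a web developer, here's a minimal sketch of roughly the shape the API ended up taking in browsers. The spec was still a moving target at the time of writing, so treat the method names as indicative rather than definitive; the Clear Key configuration and the license server URL below are made up for illustration.

```typescript
// Illustrative sketch of the Encrypted Media Extensions flow (Clear Key key system).
// The key-system configuration and license server URL are hypothetical.
async function playProtectedVideo(video: HTMLVideoElement) {
  const config: MediaKeySystemConfiguration[] = [{
    initDataTypes: ['cenc'],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
  }];

  // Ask the browser whether it (or its content decryption module) supports this key system.
  const access = await navigator.requestMediaKeySystemAccess('org.w3.clearkey', config);
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  // When the stream turns out to be encrypted, start a license exchange.
  video.addEventListener('encrypted', async (event) => {
    const session = mediaKeys.createSession();
    session.addEventListener('message', async (msg) => {
      // Hand the CDM's request to a (hypothetical) license server...
      const response = await fetch('https://license.example.com', {
        method: 'POST',
        body: msg.message,
      });
      // ...and pass the returned license back to the CDM.
      await session.update(await response.arrayBuffer());
    });
    if (event.initData) {
      await session.generateRequest(event.initDataType, event.initData);
    }
  });
}
```

The point, for the openness argument, is that the page only ever brokers messages; the actual decryption happens inside the browser's closed decryption module, which is exactly what the EFF objects to.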
(Vaguely related; I recently spoke to someone who had been on a short coding workshop, who assured me that one of the great things about JavaScript was that you could look at any website, inspect the JavaScript code and figure out how it works. I held back the urge to ask them to explain how the JavaScript on Google+ was doing its thing…)
Do You Miss the .com Button on the iOS 7 Keyboard? Use This Trick
Have to admit I hadn't noticed that it had gone away (although it constantly bothers me when apps and websites don't make use of the different iOS keyboard layouts, so I might have assumed that it was the 'wrong' keyboard instead.)
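For what it's worth, on the web side getting the "right" keyboard is mostly a matter of hinting at a field's purpose (in native apps, the rough equivalent is a text field's keyboardType setting). A quick sketch, with made-up field names:

```typescript
// Hinting to iOS (and other mobile browsers) which keyboard layout to show.
const emailField = document.createElement('input');
emailField.type = 'email'; // keyboard with @ and . placed conveniently

const phoneField = document.createElement('input');
phoneField.type = 'tel';   // numeric phone pad

const urlField = document.createElement('input');
urlField.type = 'url';     // keyboard geared towards typing web addresses

document.body.append(emailField, phoneField, urlField);
```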
Every year, Apple has a big iPhone event where they announce their latest handset. Every year, the consumer tech, media, marketing and mobile industries get very excited. And every year, I tell myself that I'm not going to add to the noise of chatter, misinformed speculation and poor analysis.
And pretty much every year, I give up and post something at the last minute. 2013 is no exception… So, too late to be a part of the conversation and too early (at least, I think so anyway…) to be a snappy reaction…
Last year, I said that the interesting thing about the iPhone 5 was that there wasn't a clear "interesting thing" about the iPhone 5.
But one thing that is a little different is that this time, there isn't really a 'headline feature' that is a unique selling point for the latest iPhone;
* Original iPhone – multitouch screen
* iPhone 3G – 3G, GPS, App Store
* iPhone 3GS – Main selling point was Speed – faster processor, faster networking, better camera. But it also introduced a built in compass, which enabled Augmented Reality for tech nerds, and better maps/navigation for normal people.
* iPhone 4 – Retina display, new design.
* iPhone 4S – Siri, iCloud
On stage, Phil Schiller said that every element of the iPhone 5 has been improved – display, wireless signal, voice processing, speakers and earphones have all been refined, physically the handset is thinner and lighter, yet the processor is faster and battery life is the same. […] If the story is that everything is improved, then nothing is truly different.
For the iPhone 5, I guess it was the taller screen (nice - but a selling point?), new dock connector (again — nice for a few reasons, but not going to sell handsets), LTE (only supported on one network in the UK until a few weeks ago). Smaller, thinner, lighter… All in all, good reasons for someone out of contract to buy the iPhone 5 over the older and cheaper iPhone 4S — but not quite convincing enough for me as an iPhone 4 owner to justify £600+ for the additional handset and contract commitments.
So far, the new iPhone rumours sound quite similar; incremental improvements (and a fingerprint scanner), but no major hardware features unique to the new handsets (as opposed to being a part of online services or iOS7.) The 3GS and 4S were incremental updates — "the same, but different." So it seems reasonable to expect the same for a 5S.
Which means figuring out what Apple's story tomorrow might be seems a bit trickier than usual, without a strong product story to tell…
Apple's WWDC conference sets the tone for the new iPhones, as Apple tells iOS (and OSX) developers what they need to know to get their apps ready for release date. This year was a particularly notable one, with iOS7 bringing a complete overhaul of the visual design as well as the usual new APIs.
Breaking from the usual format for an Apple event, the WWDC keynote opened up with a video – a public statement about their new brand signature, "Designed by Apple in California". For a brand well-known for their product-centric marketing, this seemed like an unusual departure — a message that is purely about the Apple brand.
At stratechery.com, Ben Thompson had this to say about Apple's intended audience;
The truth about the greatest commercial of all time – Think Different – is that the intended audience was Apple itself. Jobs took over a demoralized company on the precipice of bankruptcy, and reminded them that they were special, and, that Jobs was special. It was the beginning of a new chapter.
“Designed in California” should absolutely be seen in the same light. This is a commercial for Apple on the occasion of a new chapter; we just get to see it.
I think the way that Apple Inc. have set out to define the Apple brand says something about how they want to differentiate themselves from their competitors. While it's easy to focus on the "…in California" part (which isn't something many of their competitors can really compete with), it's the 'Designed by…' part that is probably most unique to Apple; they build everything from the CPUs and the devices that they sit in (at least for their mobile products), right through the software that powers them, the applications that run on them — and increasingly, the services (iCloud, Maps, iMessage etc.) that they use.
Presumably, the iPhone 5S will include a new chip at its core. It seems a safe bet that it will be called the A7. But the key point here is that while Google/Motorola and Microsoft/Nokia are getting their OS and hardware integration lined up, Apple are designing everything from the CPU to the interface. This gives them something to talk about from a marketing perspective (ie. "Designed by Apple, in California" — nobody else is designing the whole product in the way that Apple is doing).
Why is that so important?
Well, five years after the iPhone 3G really changed the smartphone market, that market is now maturing. By which I mean that most people buying smartphones today are smartphone owners already — they have a clear idea what they want. They aren't buying into the idea of "smartphones" - they are buying into a particular platform, whether that is iPhone, BlackBerry, Windows Phone, Galaxy etc. The 'early adopters' of 2008-2009 (iPhone 3G/3GS or early Android devices) will now be, assuming a 2 year smartphone contract/lifecycle, looking at their third device. They know what they want, what they don't want, how much they value it, and what it means to enter into a 2 year contract commitment in a fast-moving market.
Now, I've deliberately left Android off that list, because I don't think it's something that "normal people" see as a platform. Samsung — the most significant manufacturer of Android phones in the western market — don't even use the Android brand in their marketing materials. Have a look at the HTC One website and see if you can find out what version of Android it runs on. I can confidently predict that it will be hard to miss "iOS7" on Apple's iPhone page after today's event.
Android simply isn't a meaningful brand to the people selling the devices, and it isn't a meaningful brand to the people buying it. And for those to whom it is meaningful, they are probably more interested in the Nexus brand, which promises an 'Android as Google intended it' experience; hardware designed by HTC/Samsung/LG, software designed by Google. But not Motorola, who design Android phones and are owned by Google… Oh - and then there is the "Google Experience" brand for non-Nexus devices that still have the stock Android OS and…
Android clearly has market share, and might even have devices that are as good as the iPhone. But it has a fragmented marketplace, which causes problems for developers, which in turn causes quality and usability problems for users. Even the branding is fragmented.
My question is whether there is another opportunity here — something that Apple can do with a complete overhaul of the iPhone today (ie. new hardware and new OS) that the Android ecosystem wouldn't be able to reproduce? I don't know enough about the hardware side of things, but it feels like there is a space for innovation here — maybe it's stripping the hardware down to its bare essentials to make the most power-efficient, thinnest and lightest device. ("iPhone Air"?) Maybe it's some new service running at a level so deep in the internals that it would take years for an OS/Hardware partnership to reproduce?
I saved the "obvious" stuff until last, because the rumours are all but confirmed that the iPhone is going from a "last years 'Great' model is this years 'Good' model" strategy to a "two new iPhones" line up.
In 2009, I said that the interesting thing about the iPhone 3GS's launch was the fact that the iPhone 3G was also remaining on the market - how that marked a split in the iPhone product line from being a "premium" smartphone to a "regular" smartphone with a "premium" alternative. Although the 3G was only available in its smallest storage option, it meant 2 phone choices with 4 different price points.
When the iPhone 4S was released (2 years later), the iPhone 4 and 3GS stayed on the market (in the smallest storage option only) — so 3 iPhone choices with 5 different price points. The pattern was the same with the iPhone 5 launch last year, so the options today (at the end of the "iPhone 5" cycle) are;
If the pattern were to continue, then you would expect to see the line up with an iPhone 5S/6 (or whatever it will be called) as;
But it seems pretty clear from the rumours that something different is happening - we will be seeing an iPhone 5S and an iPhone 5C. If the 4S were to remain, then we would be left with an old phone with an old screen size, an old dock connector and old (2.1) Bluetooth support. Given a year on the market (and a 2 year lifespan), I don't see this happening - I don't think Apple want people to be buying accessories with the old dock connector in 2016. (It seems like a pretty safe bet that the iPad 2 will be retired this month for similar reasons.)
And 8GB just isn't enough any more. With some music and videos and a reasonable collection of apps, 16GB becomes pretty tight pretty fast. So, assuming the 5C isn't just a "smallest space" model, my guess is that we will see something like;
I say "guess" — there is a lot of speculation about price points, based on things like subsidy values and the Chinese market — neither of which I will pretend to understand. But it seems to me that a "new" phone will outsell an "old" phone at the same price; Apple is doing very well out of the iPhone, and changing a successful balance in a way that might pull high-value customers into the low-value alternative model just doesn't make sense to me. (But like I said; I don't pretend to understand the Chinese and network market forces…)
Incidentally, although the names seem to be accepted as truth, I wouldn't be surprised if they turned out to be purely internal codenames rather than the brands to be marketed — 2 new phones that aren't really "new" but updates on last year's model seems like an odd move (especially considering the iOS overhaul.) Those of us who see the name as a meaningless label when you're getting entirely upgraded internals seem to be outnumbered by those who see the most important things about the phone as the name and casing design.
The other thing that I haven't seen anyone address is what the "new" naming pattern would be — ie. what happens next year? Will we get the iPhone 6 and 6C? Followed by the 6S and 6…CC? 6D? Or maybe next year will bring a new naming system - the "new iPhone" and "iPhone mini"?
Who knows. Whatever Apple's plans for next year are, it seems a safe bet that they already know what they plan to offer going into today's event. Whether or not we will get a hint of it remains to be seen.
The space that I think is going to be really interesting isn't so much the phones and software as the accessories — what happens when your phone is talking to your TV, your stereo, your home lighting, your fridge etc.
At WWDC, Apple opened with a demo from a new company called Anki; announcing the launch of their company, using the iOS development platform and devices "to bring artificial intelligence and robotics into our daily lives".
The commentary I've seen since the event seemed to be fairly dismissive of what is essentially a 3rd party apps and accessories developer showing off something that looks like a hybrid of toy cars and computer games. Perhaps it's down to simple confusion — why was this first up? Is it a toy, or a game, or a tech demo? If it's a game, how do you play it? If it's a toy, isn't this kind of AI a bit over the top just to entertain children for a while? (And therefore, won't the price tag be a bit much for the toys market?)
But despite the actual demo hardware, this company doesn't really seem to be about toys or games to me. This is about the power of an iOS device to do much more than run apps and surf the web, and how 3rd parties are building on this platform to do something a bit different.
While Google are talking about their project to build self-driving cars for around $150,000, Apple have shown how an iPhone can control a bunch of cars in real-time (albeit in a highly controlled environment.) I don't think anybody is thinking about putting their phone in control of their car, but the fact that a modern smartphone is even capable of this kind of data processing and wireless communication should be food for thought for anyone thinking about where this technology is heading. The focus for the last few years has been on handsets and apps — I'd love to see what happens if the industry shifts its thinking to what can be done with accessories.
Oh - and one more thing. iOS7 — lovely, "flat design", but with some interesting 3D layered effects, responding to your movements… Isn't anyone else wondering whether a 3D retina screen would be a possibility?
No? Just me then…
A couple of months ago, I wrote about (among other things) a quote about "inventing the future".
This morning, via a Wired article, I came across the Quote Investigator blog, who has a large archive of quotes for which he has dug out the true sources.
Needless to say, he did a more in-depth job than me; Alan Kay has stated that he originated the maxim in the form "The best way to predict the future is to invent it" and began using it by 1971, but in 1961 the line "The future cannot be predicted, but futures can be invented." had already appeared in a book by Dennis Gabor (the inventor of holography) called "Inventing the Future".
The future cannot be predicted, but futures can be invented. It was man’s ability to invent which has made human society what it is. The mental processes of inventions are still mysterious. They are rational but not logical, that is to say, not deductive.
Like a lot of people, I spend quite a lot of my working day for one reason or another in Excel. And I've learned not to trust it. Not the software - it seems pretty reliable (if a little inflexible at times.)
I don't trust my own work - it's too easy to make a mistake (mistyping a number or formula, putting the variables in an equation the wrong way around etc.) I double (or triple) check everything. But I have a reasonable idea how enthusiastic/bored I was when I was doing a particular piece of work, and how likely I was to have made a simple error at the time.
For other people's work, I'm less trusting. If someone has figures where I would expect to see a formula, I'll try to put together the formula to check that the figures are right. I'll check that percentages add up to 100% - basic checks that I probably wouldn't do on my own work.
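As a flavour of the kind of basic check I mean (a rough sketch rather than a recommendation of any particular tool): given a set of percentages that are supposed to describe a whole, make sure they add up to 100%, give or take a little rounding.

```typescript
// Basic sanity check: do a set of percentages add up to (roughly) 100?
// A small tolerance allows for rounding in the source figures.
function sumsToHundred(percentages: number[], tolerance = 0.1): boolean {
  const total = percentages.reduce((sum, p) => sum + p, 0);
  return Math.abs(total - 100) <= tolerance;
}

console.log(sumsToHundred([23.4, 41.2, 35.4])); // true  (sums to 100.0)
console.log(sumsToHundred([23.4, 41.2, 30.4])); // false (sums to 95.0)
```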
But print… well, I trust that a bit more. Because the numbers on a page are what they are. It's quite literally black and white. Partly, I think this is because it's easier to assume that, for example, the cells that should be formulas are actually formulas (I'm an optimist…) But also because there is a finality to print — if I'm saving a working file to a network drive or emailing someone a work in progress, I'm not going to double check it in the same way as if I'm printing a copy off.
So I was pretty surprised to see this story about Xerox photocopiers 'randomly' altering numbers that they were scanning or photocopying. I had assumed that a copy was just that — it had never occurred to me that in a digital age, those massive copiers would be running compression/decompression algorithms.
So now I don't know what to trust any more…
A recent post, and the idea of "augmenting human intellect", got me thinking about what we look for in computer systems.
I think there is an idea among people who want a piece of work to be done without doing it themselves that computers do the work, and the person using the computer is just the "operator". Whether that is an image that you want someone to photoshop, or complex analysis using something like Excel, SPSS, or a social listening dashboard, the underlying (and probably unconscious) assumptions are;
I think these are three common assumptions, and they are all wrong. But I'm going to focus on the first one; the fact is, computers don't do amazing things.
People do amazing things with computers.
That's what the idea of "augmenting human intellect" is all about. Computers don't do the jobs — they help people to do the jobs.
Another way of looking at it is to think of the computer as an assistant. No good leader/manager would ever say that their problem is that they are leading the wrong people, or that they would be a better leader if they had a better assistant. But an assistant, by definition, has nothing to do without someone to assist.
And that's why looking for a magic system (ie. the best technology) to do a particular job is always going to be time that could be better spent looking for the right person.
If you work somewhere where technology is being brought in to do a job that you don't already have people with the time and resources to do, then I would say that it is pretty inevitable that the job is going to fail.
This September, the new GTA game comes out. I am very excited about this.
Towards the end of the year (in time for Christmas), the PlayStation 4 and Xbox One are expected to launch.
They will be expensive (at least, more expensive than I can really justify, given how much less time I have for playing games these days). And presumably, they will have exclusives for all of the AAA titles within the next year or so.
Meanwhile, Lovefilm have announced that they are going to stop renting games. So if I want to play a new game, I'll have to pay £50-60+ to buy a copy — which again, is something I don't do very often.
So, after GTA V, it's looking like my computer gaming days are going to be effectively over. In a few years' time, my son (currently 4) will be old enough to get involved with what will probably be the 9th or 10th generation of consoles (assuming that there is still a games industry like the one we have today by that point — which isn't an assumption I would personally put money on), and I've got little doubt that I will a) be encouraging Father Christmas to bring one and b) want to play on it myself. But I know it won't be the same.
But maybe it's not all bad.
For all the hours of fun I have had playing games like Mass Effect 3, Skyrim, various Call of Duties, and other franchises, movie spin offs and so on, I can only think of two games which have really blown me away in the years I've had my PlayStation 3; Journey and Portal. Neither of which were £50+ "triple-A" titles, but 'experimental' games, both priced at less than £10. Both used the medium of video games to do something completely different with the way they told a story.
Meanwhile, mobile platforms have moved forwards at such a pace that not only has an entire business emerged in less time than a new generation of 'proper' consoles (albeit heavily focused on “casual” gaming so far), but it seems perfectly feasible to me that the next big experiments with storytelling through games will be coming to portable touch-screen devices, rather than to 'traditional' consoles.
So maybe, without the next wave of first person shooters (which don't translate too well to an iPhone or iPad environment), I'll be more invested in looking for quality mobile games — the kind that leave you wanting to find out what happens next (as opposed to just wanting to clear the next short level).
Or maybe I'll just be telling my son in a few years that 'in my day, we played proper games'…
A post by Mark Boulton, a web designer who I have a lot of respect for, on the topic of the craft of web design;
For starters, it’s a designer-centric way of working. It’s a selfish exploit to pour love into your work. If you’re working commercially, who pays for that time? You? Well, that’s bad. The client? Well, that’s ok if they see the value. But many don’t.
This is why I'm happy to call myself an "amateur web designer/developer" — because I get to treat my projects the way I want to.
I think the difference between an amateur and a professional (not just in web design) has little if anything to do with "quality" of work — it's the ability to understand how much work is needed for a project, to set a deadline, set a value, and then manage the project to meet those two constraints. Because when someone else is paying for your time, you are responsible for setting their expectations and then meeting them.
As an amateur, I'm paying for my time. I find it slightly strange that more people don't think that way about web design or coding — it seems that it's perfectly acceptable to be an amateur painter, musician, writer, poet, etc. etc. But I don't seem to hear much about amateur developers or designers.
Maybe there aren't many people who think it's fun to spend time in BBEdit, Photoshop etc. (Believe me, when you don't need to worry about things like hacks to make your code work around a bug in Internet Explorer, it's a lot more fun…)
Or maybe there are more people who want to deal with deadlines and project management as a part of the design/development process.
I doubt it though.
Last Friday, I was trying to send an email back to the office, but what seemed to be a flaky WiFi hot spot meant that although I could apparently connect, I couldn't actually send it.
The odd thing was, I was on a boat, in the middle of the Irish Sea. There was a WiFi network, which I could connect to, which is great. But then you get a splash screen (once – which I couldn't get to come up again) before you can actually connect to the internet.
Thing is, the wifi network is then connected to the internet through space - a satellite connection of some sort which I won't even pretend to understand. As far as I'm concerned, this goes beyond the kind of 'magic' that makes almost all of the video in the world stream in high quality to a little thing in my hand that is smaller than a C90 tape and into the kind of 'magic' that I can only take on faith that it's actually how it works (as opposed to what, I'm not sure. Perhaps the ferry drags a very long ethernet cable behind it as it travels across the sea.)
But I have more faith that the satellite connection is working properly than I have that the wifi network and its silly splash screen is going to work properly in letting me actually connect to the local network.
I think the reason I think that way is that if someone is going to go to the effort of setting up an internet connection that goes through space and then claim that it works, then I tend to believe them. But if someone says that they have a simple screen to pop up and tell you where your internet connection is coming from, that you just have to click "OK" to some small print agreement before you can connect, then I tend to assume that it won't work with something other than a Windows XP PC running an old version of Internet Explorer.
When it does work on a Mac, or an iPhone, or anything that seems to be newer than the interior design of wherever I happen to be at the time, I'm pleasantly surprised. But when I'm 50 miles away from land and I'm told that there is an internet connection, provided wirelessly, for free, and from space, then it's not much more than meeting my expectations.
It seems like my expectations might be a little backwards.