Losing a Copilot

In July 2008, fifteen years ago, I wrote a blog post titled “Losing a Virtual Limb”, in which I tried to articulate the funny feeling I was getting that buying my first iPhone was going to change everything.

I’d had a similar feeling a couple of times before. In 1995, at my first job after leaving school, I was working in an office with a guy who was doing some sort of computer science degree, who introduced me to the internet. About a year later, I had a job running tests on software for mobile phones when I was introduced to text messaging - something that the ‘grown-ups’ in the company dismissed as a pointless thing that was only useful for network engineers and would probably be switched off for the "consumer version" of the software, but it was screamingly obvious to me (and the other teenagers I was working with) that it was going to be a really useful feature. (Teenagers simply didn’t have mobile phones back then - because teenagers back then didn’t have mobile phones; a kind of reverse network effect.)

I also started learning how to use a computer, and write code that did my work for me. In hindsight, it was quite a formative year - what I was just starting to learn about in the 'real world' had much more to do with my career over the next thirty-odd years than pretty much anything I had learnt at school.

Recently, I got a similar funny feeling again.

I’ve been doing some work over the last year or so writing Python code. I’ve been writing code of some sort or other for decades - much longer than I’ve had an iPhone - but a few months ago, I started using Visual Studio Code as my coding environment so that I could try out GitHub Copilot. Essentially, Copilot is a GPT-powered tool that “guesses” what the next bit of code you’re going to write will be.

It’s great. (It wasn’t the “funny feeling” moment, though.)

You can write out a comment saying “I’m going to do this thing next” and, nine times out of ten, it will generate a (useful) suggestion for the next line of code that will actually do the thing that you’re talking about doing. Sometimes - actually, quite often - I get to the end of a chunk of code I’m working on and, before I’ve really paused to think about what I need to do next, Copilot will autosuggest the next bit of code, doing the thing that I hadn’t yet figured out was exactly what I wanted to do. You still need to know the language - in that sense, it’s a bit like Google Translate, in that it will help you communicate with someone who speaks French, but it will help you more if you already have a basic understanding of the French language in the first place. (And it’s much more practical than an English-to-French dictionary.)

I know it’s “just” a large language model, and it’s “just” some probabilistic modelling etc. etc. But it feels like magic.

And, even more so than ChatGPT, it’s really useful. The more you use it, the more you get used to working in the way that it works - I’m recognising the sort of chunks that it will generate for me, and what it is that I’m doing that prompts useful completions. For example: suppose I’ve just been cleaning a bunch of data and I want to visualise it. I can write a comment along the lines of “plot X against Y” to prompt it in the right direction - or I can just type in “fig, ax” and it will suggest something like:

= plt.subplots(figsize=(10, 5))
ax.plot(test_full['Individuals'], label='Actual')
ax.plot(forecasts, label='Forecast')
ax.legend()
ax.set_title("Individuals/Weekly Total TV")
plt.show()

(That’s an actual example of some code that did exactly what I wanted it to do, and only took me a single press of the tab key to “write”.)
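For anyone who wants to try that snippet for themselves, here’s a self-contained version. The `test_full` dataframe and `forecasts` series from my project aren’t reproduced here, so I’ve swapped in made-up weekly figures as stand-ins; everything else is the same matplotlib pattern Copilot completed for me.

```python
# Self-contained version of the Copilot-completed plot above.
# 'actuals' and 'forecasts' are made-up stand-ins for the original
# test_full['Individuals'] column and the model's forecast series.
import matplotlib
matplotlib.use("Agg")  # render off-screen, so no display is needed
import matplotlib.pyplot as plt

actuals = [10.2, 9.8, 11.1, 10.5, 9.9, 10.7]      # invented weekly figures
forecasts = [10.0, 10.1, 10.6, 10.4, 10.2, 10.5]  # invented forecasts

fig, ax = plt.subplots(figsize=(10, 5))
ax.plot(actuals, label='Actual')
ax.plot(forecasts, label='Forecast')
ax.legend()
ax.set_title("Individuals/Weekly Total TV")
fig.savefig("forecast_vs_actual.png")
```

The only addition over the original is the `Agg` backend line and `savefig`, which let it run on a machine without a display.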

In a nutshell, writing code on a computer used to be kind of like written correspondence with a terribly pedantic customer service department - I’d write something out, I’d send it to the computer by hitting “run”, and the computer would give me the most basic response possible, following my instructions to the letter - whether or not what I asked it to do was what I meant.

Now, it’s more like a conversation - the computer is doing the equivalent of finishing my sentences, making suggestions, perhaps adding clarifications (such as when I think I’ve finished writing something and it suggests an addition I might not have considered).

And remember - what we have today is the “worst” version of this technology that there will ever be.

But here’s the thing.

At the same time as I'm writing code to do stuff, I have other open apps on my computer. I have one window that is almost always open, which I write brief notes in - things being mentioned on calls, or other stuff I think I might need to remember or deal with later, but I’m not going to deal with right now. I have another app that I write drafts of things like blog posts in. Plus another one with email, at least one MS Office app... Basically, I type text into various boxes throughout my day.

What happened was that I switched from my AI-enhanced, Copilot-supported coding app to a note-taking app and typed out a couple of lines - and then I paused for a moment, because there was an obvious pattern in what I had typed, so obvious that the next thing I was going to type was obvious too. I paused because I was waiting for the computer to join in the conversation and autocomplete what I was writing...

Obviously, it didn’t. It’s just a text box.

What it felt like was when you get your phone out to look something up and you’ve got no signal. My computer was doing exactly what it’s been doing for years - decades, even - exactly what I told it to do and only what I told it to do... but now, suddenly, it felt broken.

I’m not sure how much of this is a case of my fingers quickly getting too lazy to type, or my brain getting too lazy to tell my fingers what they should be typing, but it’s definitely a shift in my mental model of the world. A bit like when I realised that I don’t know anyone’s phone number any more, because the only way I ever called anyone was via speed dial on my mobile. A job that my brain always used to do... suddenly wasn’t. Or rather, it hadn’t been doing it for a few years, but I hadn’t noticed for a while.

The thing is, this has arrived so fast - I’m not even sure whether science fiction has really had a chance to catch up. I mean, there’s Spike Jonze’s "Her", about a man who falls in love with his AI assistant - but really, that’s a story about human relationships at its heart, and I think this is something different. Sure, there are plenty of stories where Artificial Intelligence goes apocalyptically crazy (my "losing a virtual limb" post originally had a Terminator arm as its accompanying image), or develops a soul/godlike powers - but this isn't really about either of those. (Not yet, anyway.) It’s something much more subtle.

It’s more like the moment when loads of films stopped making sense, because a plot point that used to work fine turned into "why didn't they just pick up the phone?" - and every film suddenly seemed to need an establishing scene where the protagonist lost their phone or ran out of battery before the tension was believable.

Now - some people are saying that this technology is inevitably going to start appearing in every text box; I don’t think that’s something I want. (I'm not a Windows user, so I'm not sure what Microsoft's rollout really means in practice; I'm guessing "Siri, but better" - at least for now.) Sometimes, I’m writing because a thing needs to be written (e.g. an email for work) - a job needs to be done, and it needs to be done quickly. Technology that makes that faster is - probably - a good thing.

But sometimes, I’m not writing because I need a thing to be written; I'm writing because I need/want to actually do the writing - the reason I’m writing all this down now is because writing is how I get my thoughts in order. By writing it out, I’m asking myself questions along the way that I probably wouldn’t otherwise, and that is clarifying what I’m thinking. Sometimes, the thing that I set out to write ends up having nothing to do with the thing I actually end up writing - and I don’t think I want that sort of “thing I end up thinking about” to get derailed by whatever a large language model thinks I wanted to be doing when I started. (Which, to be perfectly clear, is only ever what I would have told it I wanted to get done.) At least for now, large language models are too blunt a tool for that particular kind of ‘job’.

But I think the important question that Copilot raises is exactly the same question that I wrote about when I got my first iPhone fifteen years ago.

Am I ready to lose it?