Rise of the machines (Robot war part 2)

Two narratives, one story:

  1. An AI developed by Google has become sentient - that is, gained the ability to have its own thoughts and feelings - and a Google engineer working with it has been fired for making the story public.

  2. A Google engineer thinks a 'chatbot' AI should be treated like a human, because he believes that it has developed the ability to have and express its own thoughts and feelings. After Google looked into and dismissed his claims, the engineer went public with them, and was then placed on paid administrative leave and subsequently fired.

The subject of the first story is artificial intelligence - with a juicy ethical human subplot about a whistleblower getting (unfairly?) punished.

The subject of the second story (which is a little more nuanced) is a human engineer going rogue, with an interesting subplot about ethics around artificial intelligence.

I think most of the reporting has been around the first version of the story - and I think that's because it fits into a broader ongoing narrative: the idea that 'our' machines are getting smarter, moving towards a point where they are so smart that humans can be replaced.1 It's a narrative that stretches back for centuries - at least as far back as the industrial revolution. The Luddite movement was smashing textile machinery in the early 19th century as a protest against people being put out of work by the mechanisation of their craft, often attributing the damage to Ned Ludd (a fictional/mythical character rather than an actual person). It's also noteworthy that those knitting machines had specific legal protection beyond the regular laws protecting property: from 1812, destroying such machines was punishable by death.

Human Replacements

Today, the fear of 'humans being replaced' seems to be less about physical work than 'knowledge work' - so the manifestation of that fear has shifted from man-made 'monsters' with super-strength (like Frankenstein's monster, from the book written during the industrial revolution) to super-intelligent machines (such as HAL from 2001: A Space Odyssey, written in the 1960s as the concept of the modern computer was taking shape). I watched the 2013 Spike Jonze film "Her" recently, which is a story around the theme of human connection being replaceable - made around the time when Siri and similar 'assistants' were becoming mainstream.

There's a funny thing about this recent AI story though - alongside the human engineer and the possibly-sentient AI, there's another character that appears, which you might have overlooked. Just imagine if the story was about a worker at another 'big tech' company - or even that it was just about an engineer working on an AI as their own hobby, and consider how that detail fundamentally changes the story. (At the very least, it probably makes it less newsworthy.)

While the debate about whether AI can be considered 'conscious' or 'sentient' or 'intelligent' goes on, we don't seem to have any issue with the idea that a soulless corporation has 'a point of view', or takes particular actions (e.g. suspending an engineer after first considering and rejecting their claims).

We understand that when we say "Google", what we really mean is generally "a particular group of people working for a corporation". We know that corporations are not really people - even though we've been treating them as legal people for a couple of thousand years. We know Google, and what Google is like, and what sort of things Google likes and doesn't like, and we know the kind of things we expect them to be doing in the secret R&D labs we expect them to have - so the idea that the same organisation that answers all our search questions and builds voice-activated assistants is busily building an artificial intelligence so intelligent that it should be considered sentient fits perfectly into our existing narrative. (At least, it does for those who believe a sentient AI isn't fundamentally impossible - for those who believe it is, the story is perhaps more about the folly of the poor fools who think otherwise.)

How should we treat 'non-people'?

So, if we can accept that Google is at the very least a 'legal person', if not a sentient/conscious entity, then why do we struggle with the idea of treating AI like a person? Is it because we still struggle to put our finger on the concept of human consciousness itself? (When does it begin? When does it end? Where exactly can we find it?) And if we can't really say where human consciousness does and doesn't exist, how can we extend this fuzzy concept of 'consciousness' beyond humanity?

I mean, we can't seem to collectively agree on the ethics of vegetarianism, and I'm not entirely convinced that if we had a way to communicate with plants, they would be happy about the basic principles of veganism. (Sure, grass can't 'speak' - but what is the smell of a freshly mown lawn if not the grass trying the best it possibly can to communicate its own concept of pain?) Perhaps it's best not to go too far down that line of reasoning - but the point is that the "is AI sentient" issue isn't being raised because we're building computers that can think or feel, but because we're building computers that can communicate. Whether they are communicating "their own" thoughts/feelings/ideas or thoughts/feelings/ideas they get from somewhere else is kind of irrelevant. (I mean, I try to point out where I'm talking about other people's ideas instead of my own - but I'm not sure I could take 100% of the credit for any of the concepts I'm writing about here. Does that mean they aren't my own? Does it matter if all 'my' ideas are just other people's pre-existing ideas squished together in - potentially - novel ways?)

Perhaps it's because we understand that Google is "made" of people - and so long as we can directly tie our concept of consciousness back to people, we don't really have a problem with it. The catch here is that all of the data that goes into any AI model also comes from people - whether that's the people who write the code or the people who write the text that forms the dataset that trains the model.

Perhaps, as Lemoine (the Google engineer) points out, what this really points to is a broader issue that has nothing to do with machines: whether we can truly say that in 2022, we treat people like people. How can we seriously concern ourselves with one engineer's attitude to a piece of code while innocent people are getting shot and blown up, starved and poisoned, displaced and deported, all for the sake of politics, profit or beliefs?

Perhaps, like the atheist who decides to believe in a god 'just in case', we should consider machines to be sentient now while we are learning how to behave around the ones that we can talk to - because, surely, a point is coming when whatever system is processing your "Hey Siri/Alexa/Google" requests is going to be capable of expressing - even if not actually feeling - emotion. If we can learn to be polite and respectful towards computers, that could help us to behave more politely and respectfully towards each other - which is surely not a bad thing.

Perhaps we should be considering machines to be sentient for the sake of our own souls; as Marshall McLuhan is credited with saying, we shape our tools and thereafter, our tools shape us. In other words, the way we use our tools shapes who we are - my brain holds a number of telephone numbers, but virtually all of them were only relevant in the early 1990s; since then, new phone numbers go into my phone, and I rely on its memory to do the job that my own memory used to do. (When people talk about the idea of cybernetics and putting chips into your brain, I tend to point out that the distinction between my brain and my phone is already fuzzy enough for my liking.) How we behave is fundamentally who we are - so how we choose to behave towards "non-people", when that type of behaviour is surely something we're going to be doing more and more of, is going to become a bigger and bigger part of who we are, the habits we build, and our collective cultural mores/identity.

'Sentient' or not - the AI that Lemoine is talking about is capable of reading text from the internet (it has apparently read his blog, for example). Because it's Google's private project, we don't really know what its future holds, or what else it might end up being connected to. Perhaps it's reading this (hello!). Perhaps it's also capable of watching YouTube videos. In which case, I'd be even more uncomfortable about being the guy in this video than I was back in 2016.

  1. This is the subject of chapter 1 of John Higgs' 'The Future Starts Here', a book I heartily recommend at any given opportunity.