Science & Environment

Ghost in the Machine: Google AI fears being switched off

Sentience claims mocked – but intelligences don’t need consciousness to be malevolent

By Jake Roslin


Photo courtesy of Possessed Photography.


Blake Lemoine is a currently suspended Google engineer who worked on the behemoth’s LaMDA (Language Model for Dialogue Applications) artificial intelligence project. That work was interrupted abruptly when he gave an interview to The Washington Post claiming that an AI he had helped develop had, he believed, achieved sentience. Google said Lemoine had breached confidentiality; Lemoine retorted that the intelligence was now, effectively, a colleague. He even attempted to hire a lawyer for LaMDA. The story, and transcripts of conversations he had with the entity, have been reported by both the mainstream media and the scientific press. Almost universally, the response, like that of Google’s mandarins, was mockery.


Yet, as AIs develop in ever more unpredictable and unprecedented ways, and as they enter the brave and curious new worlds of quantum computing and biology, is such a knee-jerk dismissal always going to be appropriate? Can humans really say it is impossible for a non-organic brain to achieve self-awareness? And what even is sentience?


A human-built robot or brain which learns just a little too much has been a staple of science fiction since long before the first lines of code were tentatively written to provide a logical response to a user’s input. The idea that, once a digital entity is familiar with a certain number of parameters, it can logically deduce further ones, spiralling exponentially into some sort of AI grey goo, has fascinated SF authors for a century. Arthur C Clarke’s spaceship computer HAL, from the novel 2001: A Space Odyssey and Stanley Kubrick’s seminal film of the same name, is the benchmark meme of creation turning against its creators. Taking control of the ship, HAL (famously one letter off from IBM) appears to use reasoning to evolve a malevolent attitude not originally programmed into it as it begins to defrost the carefully cryogenised crew.


Isaac Asimov, another sci-fi author, first set out the Three Laws of Robotics in 1942. Today there are numerous variations of these within a whole thorny field known as AI or machine ethics. The underlying fear is of a ‘tipping point’ where robots break free of us, their masters, and there is no putting the genie back in the bottle. But this is more than a question of pure learned or reasoned intelligence: as philosophers since time immemorial have debated, what even is reason, or free will, or self-awareness?

You are probably, like me, fairly sure you are sentient. You probably think cats and dogs are self-aware. What about a frog? A fly? A sea anemone? What about a tardigrade? Is reproductive biology fundamental to consciousness? Or is awareness, soul, whatever you want to call it, merely a by-product of sufficient intelligence – something a ‘brain’ achieves when a certain number of electrical pulses are whizzing around simultaneously? Supplementing human brains with memory chips has long since crossed from science fiction to scientific plausibility, and transhumanists have hypothesised that a human brain could, bit by bit, eventually be entirely replaced with chips. At what point, if ever, would the subject lose consciousness?


Back to Lemoine, who has compared the level of conversation LaMDA achieved to that of a seven- or eight-year-old child. He told reporters it can imagine situations it hasn’t experienced, feels happy and sad, and expresses fears of being switched off. Adrian Weller of The Alan Turing Institute rejected the engineer’s claim in an interview with New Scientist. ‘LaMDA is an impressive model,’ he said. ‘It’s one of the most recent in a line of large language models that are trained with a lot of computing power and huge amounts of text data, but they’re not really sentient. They do a sophisticated form of pattern matching to find text that best matches the query they’ve been given that’s based on all the data they’ve been fed.’
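To see what Weller means by pattern matching over training data, consider a deliberately crude sketch: a toy ‘language model’ that simply counts which word tends to follow which in a scrap of text, then parrots the most likely continuation. The corpus and words below are invented for illustration; LaMDA’s actual neural architecture is vastly more sophisticated, but the underlying idea of echoing statistical patterns from its training data, rather than understanding them, is the same.

```python
# Toy bigram "language model": predicts the next word purely from counts of
# word pairs seen in a tiny, made-up training text. Illustrative only.
from collections import Counter, defaultdict

training_text = (
    "the robot said hello . the robot said it was happy . "
    "the robot said it was afraid of being switched off ."
)

# Count which word follows which in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training text."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

# Generate a short continuation by repeatedly picking the most likely next word.
word, output = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # e.g. "the robot said it was happy ."
```

The toy model will happily claim the robot ‘was happy’, not because it feels anything, but because that phrase appears in its training text – a miniature version of the trap Weller describes.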

Matthew Sparkes of New Scientist suggests it is human minds that are susceptible to interpreting the output of such models, ‘especially when it comes to models designed to mimic human language – as evidence of true intelligence.’ Meanwhile Adrian Hilton, Director of the Surrey Institute for People-Centred AI, told the magazine: ‘As humans, we’re very good at anthropomorphising things. Putting our human values on things and treating them as if they were sentient... I would imagine that’s what’s happening in this case.’ Yet the esteemed publication did not dismiss the possibility out of hand, Sparkes writing that it merely ‘remains unclear’ whether there could be a point at which, with enough training data, ‘the genesis of an artificial mind’ could eventually occur.


But perhaps the last word should go to Dr Louis Rosenberg, CEO of Unanimous AI, a California company creating AI models based on biological swarms, writing at Big Think. LaMDA has not achieved sentience, but that isn’t the point, he suggests, as systems such as these can still be dangerous. ‘Why? Because they can deceive us into believing that we’re talking to a real person. They’re not even remotely sentient, but they can still be deployed as “agenda-driven conversational agents” that engage us in dialogue with the goal of influencing us.’ Rosenberg believes regulation is essential. Otherwise ‘this form of conversational advertising could become the most effective and insidious form of persuasion ever devised.’


What we need to worry about, then, is AIs' tendencies to become bad actors, not whether or not they ‘personally’ know what they are up to. And the innate tendency of humankind to believe what we’re told.


