AI and Consciousness

What do you need to know about AI and consciousness?
As the popularity of Large Language Models (LLMs) rises, anxiety about Artificial Intelligence (AI) developing consciousness has risen as well. In a recent panel discussion,* McElory Hoffmann (Praelexis CEO) discussed what an LLM is, as well as the advantages, dangers and interesting perspectives on the topic of AI and consciousness.
What is a Large Language Model (LLM)?
AI is not something new. It goes as far back as the 1950s (consider Claude Shannon’s robotic mouse). With the launch of LLMs like ChatGPT, however, people have become more aware of the AI around them. When recently asked, ChatGPT defined an “LLM” as a degree in law (presumably because the data it was trained on was not up to date on the subject), but when asked to define a “Large Language Model” it described it as a type of AI trained on a vast amount of text data.
An interesting question that this raises, in the South African context, is: How is this going to influence underrepresented languages?
Can an LLM think?
In a nutshell: No, an LLM cannot “think”. An LLM is not conscious. It uses the data it was trained on to predict which word is likely to come next. The reason these LLMs sometimes seem human-like is that they are trained on data created by humans. We like interacting with human-like interfaces, and we enjoy things that act like us, so for many people the human-like characteristics of LLMs have been charming and fascinating. LLMs can be used to answer questions and assist with tasks.
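To make “predicting the next word” concrete, here is a minimal, purely illustrative Python sketch. A real LLM learns billions of parameters with neural networks rather than counting word pairs, but the underlying idea of predicting a likely next word from previously seen text is the same. The tiny corpus and helper function below are made up for illustration only.

```python
# Toy illustration of next-word prediction (not how a real LLM works):
# count which word follows which in a tiny "training corpus",
# then predict the most frequently seen follower.
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

# Build bigram counts: for each word, count the words that follow it.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> "cat" ("cat" follows "the" more often than "mat" or "fish")
```

The point of the sketch is only that the model predicts what is statistically likely to come next in text written by humans; nothing in it understands or experiences anything.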
Should we be worried about the fact that LLMs seem conscious?
But if it looks like a duck and quacks like a duck, isn’t it a duck? If LLMs sound like they are conscious, are they? And if they are convincing, does it really matter whether they are in fact conscious? These are some of the important questions raised by the panel. When McElory asked ChatGPT the question “What is an LLM?”, he noticed that he himself used the words “please” and “thank you”, and that ChatGPT responded in a human-like fashion. This is uncanny, because LLMs should, according to McElory, not be considered human. They should be considered tools whose main function is to help humans with tasks and problems. McElory uses the analogy of God creating man and then man creating AI: we should avoid a future in which AI is allowed to think that it created man.
Should we safeguard against the dangers of AI?
There seem to be serious risks if the creation and use of AI are left unregulated. Could an LLM create a model voter? What would the implications of that be? Then there is the question of liability: who takes responsibility for dangerous AI, the creator or the user who wields it as a weapon? Creators have to think about the risks of AI and decide on certain no-go areas and thresholds that should never be crossed.
For example: How AI could be used in the law
Regarding the role that AI could play in the law, a thought experiment was raised at the panel discussion: would you rather stand before a human judge or an AI judge? The conclusion, according to McElory: if I am truly innocent I would want to be tried by an AI judge, but if I am guilty and need sympathy, I would rather be tried by a human. It still seems unlikely that we will ever be able to train AI to feel emotions such as empathy, love and hate. Although we can train a computer to say “I love you”, we cannot make it feel love.
Will life become like The Matrix?
Some joke that we will end up in a world similar to the 1999 movie The Matrix. A more likely scenario, however, is this: as AI gets better and better at decision-making, we will leave more and more decisions up to it. For example, a person who drives a car that can parallel park on its own might unlearn how to parallel park. Another example: doctors might rely on models to decide whether it is worthwhile to treat a cancer patient. How would that influence the patient’s will to live or quality of life?
Will AI ensure a positive future?
In a concluding statement, McElory says the following: “I do imagine that AI promises a positive future, because of its potential to be a tool. AI is not intrinsically a weapon. Similar to other well-known tools like an axe or a hammer, it is not meant as a weapon, but it could be used as one. We should be wary of using AI as a weapon rather than as a tool.”
*Blog post based on the science cafe discussion at Woordfees, an arts festival in Stellenbosch, South Africa, on 12 October 2023. Praelexis CEO McElory Hoffmann took part in the discussion. The other panel members were Bruce Watson, holder of the Capitec research chair in applied artificial intelligence, and Stellenbosch-based clinical psychologist Anton Böhmer. Willem Bester was the moderator.
**The illustrations were generated using AI.