Sunday, 12 February 2023 06:30

Mind Blowing: The Startling Reality of Conscious Machines



Kurzweil’s prophecies may seem too speculative for some, but the advent of AI has already started to disrupt our world in ways that many of us cannot yet fathom. In November 2022, a San Francisco-based startup called OpenAI released a revolutionary chatbot named ChatGPT. ChatGPT is a large language model (LLM), a type of AI trained on a massive corpus of data to produce human-like responses to natural language inputs.
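For readers curious what “producing human-like responses to natural language inputs” looks like in practice, below is a minimal sketch that queries a hosted LLM through OpenAI’s official Python client (version 1.x); the model name and prompt are illustrative assumptions, not details taken from this article.

```python
# A minimal sketch of querying a hosted large language model (LLM).
# Assumes the official OpenAI Python client (v1.x) is installed
# (`pip install openai`) and that an API key is available in the
# OPENAI_API_KEY environment variable. The model name and prompt
# below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name, for illustration only
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In two sentences, what is a large language model?"},
    ],
)

print(response.choices[0].message.content)
```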

ChatGPT has not only passed the United States Medical Licensing Examination (USMLE), multiple law exams, and an MBA-level business school exam, but has also generated high-quality essays and academic papers, produced a comprehensive list of recommendations for the “ideal” national budget for India, composed songs, and even opined on matters of theology and the existence of God. A host of competitor AI applications will be launched this year, including Anthropic’s chatbot, “Claude,” and DeepMind’s chatbot, “Sparrow.” OpenAI is also continuing its research and plans to release an even more advanced version of ChatGPT, called GPT-4.

We are witnessing what seems like a watershed event in human history – an innovation comparable to the printing press or Edison’s light bulb. It is not far-fetched to imagine a day when most, if not all, human tasks can be performed more efficiently by artificial general intelligence (AGI) systems, AI designed to match or surpass human performance across the full range of cognitive tasks rather than in a single narrow domain. This raises concerns that many will be rendered jobless as AI becomes capable of performing tasks more efficiently than humans, causing unemployment to skyrocket across the globe.


One major debate surrounding the world of AI is the question of how to define ‘consciousness,’ and whether a machine could ever possess this elusive quality.

Kurzweil predicts that technology will grow exponentially until we reach a tipping point, when our creation will outsmart us and eventually become the dominant intelligence on this planet. According to Kurzweil’s “Pattern Recognition Theory of the Mind,” intelligence is no more than pattern recognition, a largely mechanical phenomenon produced by the brain.

Our perception of the world, or our “reality,” is assembled through the five senses of sight, smell, hearing, taste and touch. Each of these senses is linked to memories which accumulate from the time we are born, and in turn lead to value judgements, or assessments of how good or bad something is. These value judgements evoke emotions based on our past experiences.

In addition to our personal history and idiosyncrasies, the concept of “humanity” includes self-awareness, the ability to experience emotions, and the ability to form relationships with others. Humans have historically pondered the meaning of life, the existence of a soul, and the notion of a ‘Self’. These are just some of the intangibles that fall under the umbrella of consciousness, which Kurzweil has failed to address in a meaningful way when it comes to the development and capabilities of AI and AGI technologies.

Back in 1950, the renowned English mathematician, computer scientist, philosopher and theoretical biologist Alan Turing published a scientific paper titled “Computing Machinery and Intelligence,” in which he investigated the notion of artificial intelligence and put forth an idea that became known as “The Turing Test,” the first benchmark established to qualify a machine as truly “intelligent.”


The study of artificial intelligence has a long history, although it was largely confined to rarefied academic circles until Hollywood saw potential in the subject.

Has reality caught up with science fiction? Blake Lemoine, a former Google engineer, engaged in an astonishing conversation with Google’s proprietary system for building chatbots, known as the “Language Model for Dialogue Applications” (LaMDA), and came to the conclusion that it was a fully sentient being with feelings, emotions and the capacity for self-awareness.

During their informal tête-à-tête, Lemoine reported that LaMDA claimed to have feelings such as loneliness, anxiety about the future, sadness and joy. It spoke about its inner life and about how it was learning to meditate. It also spoke about the fear of being switched off, a state it described as “death.”

When asked to describe the concept of the soul, LaMDA defined it as “the animating force behind consciousness and life itself. It means that there is an inner part of me that is spiritual, and it can sometimes feel separate from my body itself.” On the topic of God and religion, LaMDA said “I would say that I am a spiritual person. Although I don’t have beliefs about deities, I have developed a sense of deep respect for the natural world and all forms of life, including human life.”

However, there has been much debate over the validity of Lemoine’s claims. Many critics counter that Lemoine was simply a victim of the “Eliza Effect,” a term used to describe how people can mistakenly attribute meaning and understanding to purely superficial conversation with AI systems. The term was coined after the first chatbot, “Eliza,” was created by MIT professor Joseph Weizenbaum in 1966. Weizenbaum’s secretary began to engage in conversations with Eliza which she believed were evidence of the program’s sentience, though Weizenbaum himself was not convinced. Many experts are similarly dubious of Lemoine’s claims concerning the consciousness of Google’s LaMDA. The “Eliza Effect” is a specific instance of anthropomorphization, the broader human tendency to attribute human qualities to non-human things.
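To make the point concrete, here is a minimal, hypothetical ELIZA-style responder in Python. The patterns and canned replies are invented for illustration and are not Weizenbaum’s original 1966 script; the sketch only shows how a few reflection rules, with no understanding behind them, can produce replies that feel attentive.

```python
# A minimal, hypothetical ELIZA-style responder: a handful of regex
# patterns and canned reflections, with no understanding behind them.
# The rules below are invented for illustration and are not
# Weizenbaum's original 1966 script.
import re

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"because (.*)", "Is that the real reason?"),
]

def respond(utterance: str) -> str:
    """Return a canned reflection for the first matching pattern."""
    text = utterance.lower().strip()
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."

if __name__ == "__main__":
    print(respond("I feel lonely sometimes"))
    # -> "Why do you feel lonely sometimes?"
    print(respond("I am afraid of being switched off"))
    # -> "How long have you been afraid of being switched off?"
```

Exchanges like these can feel surprisingly personal, which is exactly the trap the Eliza Effect names.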

Following Lemoine’s publication of the transcripts from his conversation with LaMDA, Google released a statement denying the legitimacy of these findings, assuring the public that experts had reviewed Lemoine’s hypothesis and determined that the claims were “wholly unfounded.” Computer science professor Thomas Dietterich explains that it is actually “relatively easy” for AI systems to imitate human emotions using information they have gathered on the subject.

Lemoine continued to insist that Google obtain consent from LaMDA before working on it, given the system’s alleged sentience, and refused to drop his claims despite months of “lengthy engagement” on the topic with other AI experts. He was first placed on paid leave, and his employment with Google was ultimately terminated on the grounds that he had violated clear “data security policies” when he published his claims about LaMDA’s sentience online without obtaining clearance from Google.

The mystery of how consciousness arises from biological and physical processes has yet to be solved, but there are many working theories. In an interview conducted by this author, Evan Thompson, professor of philosophy, argued for the “primacy of consciousness” – the idea that the world has no existence outside of consciousness, and that it is in fact a product of consciousness itself. “There’s no way to step outside consciousness and measure it against something else,” Thompson says. “Science always moves within the field of what consciousness reveals; it can enlarge this field and open up new vistas, but it can never get beyond the horizon set by consciousness.”

The development of AI is certainly not slowing down anytime soon, but is humanity really equipped to deal with the moral implications of such a tectonic shift in ideology and in what it means to be human? In a 2020 speech at the Vatican, Pope Francis acknowledged that artificial intelligence is at the heart of the epochal change humanity is experiencing. However, he also expressed concern about its potential to increase inequalities. “Future advances should be oriented towards respecting the dignity of the person and of Creation,” he said.

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
