Statements by Google developer Blake Lemoine have caused a stir in recent days. According to Lemoine, the LaMDA chatbot developed by Google has become conscious and has even hired a lawyer to represent its interests.
As evidence, Lemoine points to a 21-page transcript of conversations with LaMDA. However, he concedes that this assessment of artificial-intelligence consciousness requires expert review. According to Viacheslav Gromov, CEO and founder of the embedded-AI full-stack provider AITAD, LaMDA is unlikely to be conscious.
Awareness Of Artificial Intelligence: Is It Getting Dangerous?
It is much more likely that LaMDA perfectly mimics consciousness. This is precisely what artificial intelligence (AI) is trained for. Gromov points to existing high-performing AI chatbots such as Mitsuku and to their seemingly accurate imitation of patterns learned from large data stores and the public Internet. This produces coherent, intelligent, and emotional responses, but there is no deeper understanding of the content behind them. In addition, an AI with consciousness is often not in the interest of developers, since there are no concrete applications for it in industry. Consciousness is neither needed nor wanted for AI to work effectively in medical devices or self-driving cars, or to recognize speech, movement, or objects.
The Question Of Standardisation And Legal Regulation
The Turing test and Searle’s Chinese Room show how difficult it is to test machines for characteristics of intelligence. Consciousness also includes embodiment: unity with the body, feeling and experiencing, and awareness of it. So-called phenomenal consciousness, as one aspect of overarching consciousness, is not possible for a chatbot in any known way because it lacks a body. Numerous theories holding that consciousness and complex emotions arise only from hormonal and bodily processes in the brain, as described by Catrin Misselhorn in her work “Fundamentals of Machine Ethics” (Reclam, 2018), also argue against machine consciousness. Even biologically engineered neural networks have so far been unable to generate consciousness in a petri dish.
AITAD participates in several AI standardization and steering committees. There, one thing is clear: if phenomena similar to consciousness appeared, the approval of such products would take at least 5 to 10 years, and a broad social consensus would be required. With the current “AI Act” proposal, the EU is already making significant progress toward the approval of today’s industrial AIs.
AI repeatedly earns admiration for its performance and offers high added value in many clearly defined areas. However, it is not a danger. AITAD also develops AI systems. These are hardware-bound (embedded AI) and solve recurring, similar tasks with high reliability. AITAD’s solutions, for example, ensure almost 100 percent machine availability through predictive maintenance, early detection of complications in the operating room, or safety in dangerous industrial environments.