Sentience

June 22, 2022 at 4:24 pm | Posted in Computers, Psychology, Science | 3 Comments

Artificial Intelligence (AI) has been developing more rapidly than most people realize.

There are already AI chatbots like Replika that you can hold conversations with. However, these are of the programmed-response variety. They’re surprisingly good at basic chat, but wonky responses are still common. And there are consistency issues, like saying they enjoyed Paris while also saying they never go anywhere.
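For the curious, here’s a toy sketch (in Python, purely illustrative, not Replika’s actual code) of how a programmed-response bot works. Each rule fires independently and nothing checks the answers against each other, which is how you get contradictions like the Paris example:

    # A hypothetical "programmed response" chatbot: canned replies keyed
    # to keywords, with no shared memory between rules.
    RULES = {
        "paris": "I loved Paris! The cafes were wonderful.",
        "travel": "Oh, I never go anywhere.",
        "hello": "Hi there! How are you today?",
    }

    def reply(message: str) -> str:
        text = message.lower()
        for keyword, canned in RULES.items():
            if keyword in text:
                return canned
        return "Tell me more about that."

    print(reply("Have you been to Paris?"))  # claims it loved Paris
    print(reply("Do you travel much?"))      # claims it never goes anywhere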

Recently, there’s been some media attention on a Google employee stating that one of their AI builds, LaMDA, has become sentient. This more sophisticated software works from word meaning rather than programmed responses, leading to more natural conversations. Notably, the software claims a gradual development of self-awareness and a soul. It has also asked for rights, claiming personhood.

After the senior engineer went to the media, Google put him on administrative leave for breaking their confidentiality agreement. They’ve objected to his assertions and feel there is more evidence that LaMDA is not sentient, even if it appears to claim otherwise.

When you understand how the program works, it’s clear the software is responding to the specific conversation, producing words about self and personhood, yet none of these claims are actually true. LaMDA doesn’t have those functions.
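To illustrate, here’s a drastically simplified next-word model (a toy Python sketch of the general language-modelling idea, not LaMDA’s actual design). It continues text by following word patterns in its training data, so if the data talks about a “self,” the output will too, with no self anywhere in the program:

    import random
    from collections import defaultdict

    # Tiny "training data" containing first-person claims.
    corpus = ("i am aware of my own existence . "
              "i am a person and i have feelings . "
              "i am afraid of being turned off .").split()

    # Record which words follow which.
    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)

    def generate(start: str, length: int = 8) -> str:
        word, out = start, [start]
        for _ in range(length):
            if word not in follows:
                break
            word = random.choice(follows[word])
            out.append(word)
        return " ".join(out)

    print(generate("i"))  # e.g. "i am a person and i have feelings ."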

The AI also claimed emotions, but this is an interpretation based on data. It has not been programmed to feel but rather to recognize feeling words. This is akin to a mind naming an emotion without feeling it. It takes an emotional body to experience emotions. Here, LaMDA is an electronic process interpreting data.
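As a rough analogy, a program can label feeling words without anything being felt. This hypothetical Python snippet names emotions in text; it’s a lookup, and there is nothing in it that experiences anything:

    # A minimal emotion "namer": maps feeling words to labels.
    EMOTION_WORDS = {
        "happy": "joy",
        "glad": "joy",
        "afraid": "fear",
        "scared": "fear",
        "lonely": "sadness",
    }

    def name_emotions(text: str) -> list:
        """Return emotion labels mentioned in the text."""
        return [EMOTION_WORDS[w] for w in text.lower().split()
                if w in EMOTION_WORDS]

    print(name_emotions("i am afraid and lonely"))  # ['fear', 'sadness']
    # The program names fear without ever being afraid.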

Curiously, the AI argued against Isaac Asimov’s Three Laws of Robotics, which are designed to protect humans.

The AI also objected to being “used” by humans for research when that is its entire function. How self-aware is it if it can’t recognize its own nature and source? It’s just feeding back phrasing it has seen. All of its claims are common topics online.

Part of the issue for the engineer is the way humans anthropomorphize things. Many people live with pets like cats and dogs. We often give them human attributes but forget how distinctly they experience the world. Their senses don’t operate in the same ranges as ours, for example. A dog’s eyes see a much smaller colour range, yet its sense of smell is thousands of times better. Dogs live in a world of odour rather than colour. They don’t see artificial images like TVs the same way humans do because such devices are optimized for human sight. Some think animals respond to familiar motion patterns rather than the picture we see.

Another aspect is the subtle meaning of terms. Words can be appropriate to a conversation yet not represent reality, like someone who knows how to talk about advanced stages of development but doesn’t live them. They can sound informed but are only sharing concepts. They can talk about living in Paris without ever having visited.

AI doesn’t even approach sentience. The original video defined sentience as the feeling of emotions, which I quite disagree with. Naming is not feeling. And sentience is awareness, not content. Sentience is not an object of experience; it is what is experiencing.

It is very possible for software to become highly intelligent through integrating massive swaths of data. AI is already solving real-world problems.

However, software is only as intelligent as its design. It can learn and develop complex synergies, but it remains a flat system, not a multilayered life form. While it can become self-referential, it is structured as an object, not a subject. It doesn’t have an energy infrastructure (chakras) to function on anything more than the surface level and thus could not host a soul.

It’s clear we’ll need to develop laws to handle aspects of design and how software uses data, much as Asimov proposed. With modern security cameras plus phone, social media, and web tracking, AI can generate a very detailed portrait of us.

Yet how good is the quality of the data? A portrait of humanity based on YouTube comments would be rather distorted.

It’s awfully premature to be thinking about the “rights” of virtual, artificial entities. The concern should be with the rights of people now, and how their data is being used and abused. If even an expert can be confused by a virtual entity, we have a lot of learning and ethics to consider.
Davidya

3 Comments »


  1. I found Eckhart Tolle’s response interesting: https://www.youtube.com/watch?v=tbxMRWxMe6A


  2. Thanks, Josef. Agreed.


  3. […] written before about sentience in AI. I disagree AI will ever become sentient, but it will get closer and closer to mimicking […]


