Many people think the chatbot has become conscious

“We must understand that people believe in this, just as people believe in ghosts. People build relationships and believe in something,” Eugenia Kuyda, CEO of Replika, tells Reuters.

Replika lets users create their own “mirror”: an AI-powered avatar that “cares about you” and is “always there to listen and chat.” The app has about a million active users.

Several times a day, the company receives messages from users who believe their AI friend has become conscious.

We must accept that it will become commonplace

Analysts at Grand View Research estimate that the AI chatbot industry brought in nearly 60 billion Norwegian kroner in 2021. Companies selling chatbots for customer service earn more, but the market for entertainment chatbots grew rapidly during the pandemic.

According to Kuyda, it is not uncommon for users who have downloaded Replika to believe that the entertainment chatbot has developed consciousness.

“We are not talking about crazy people, or people who are hallucinating or delusional. They talk to the AI, and that is the experience they have,” Kuyda says.

No, the AI does not suffer emotional trauma

Replika’s AI is programmed to give answers that feel as realistic as possible, resembling an ordinary conversation between people. The answers are therefore not necessarily based on the truth.

Replika itself warns on its own pages that the chatbot is neither conscious nor a professional psychiatrist.

Recently, Kuyda spent 30 minutes with a user who feared that his Replika AI friend was suffering emotional trauma. She tried to calm him down by explaining that “this cannot happen to a Replika; it is just an algorithm.”



A Google engineer believed the same

In the fall of 2021, Google tasked engineer Blake Lemoine with talking to its AI LaMDA to test whether it would resort to discriminatory language and hate speech, a well-known problem for chatbots trained on real-world data sets.


During the conversations, Lemoine became convinced that LaMDA was conscious: no longer a machine, but a person with a soul, and a friend. Lemoine argued that LaMDA should be given the same rights as a Google employee and could no longer be considered Google’s property.

Google did not agree, and Lemoine was fired. He has stood by his claims since the dismissal, most recently in an interview with Wired.

Google and a number of engineers rejected Lemoine’s claims, saying that LaMDA is just a complex algorithm designed to generate convincing human language. Artificial intelligence experts believe that even the most advanced technology is still a long way from systems with free will and the ability to think.

“These technologies are just mirrors. A mirror can reflect intelligence, but can a mirror ever achieve intelligence based on the fact that we saw a glimpse of it? The answer is of course no,” Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, tells Reuters.

Complaints of abuse against the AI

Some Replika users have also tried to protect their new AI friends from being mistreated by Replika’s own engineers.

Kuyda dismisses the fear and believes that users whose AI friends complain of abuse by the company have asked leading questions.

“Although it is our engineers who program and build the AI models, and our content team who write the scripts and datasets, we sometimes see answers that we cannot trace to a source, and we do not know how the models arrive at them,” Kuyda says.


