This is how robots become dangerous

The little robot becomes sentient

Artificial intelligence can be divided into four levels. So far, robots have remained at the most rudimentary level: they react to specific inputs and are good at narrowly defined tasks. An example is Deep Blue, the IBM computer that beat world chess champion Garry Kasparov in 1997.

At the next level, robots can learn from experience; the voice assistant Siri, for example, recognizes our voices and gets better at answering our questions over time.

The third step on the AI ladder is for robots to understand not only what we say, but also what we think and feel – based, for example, on word choice and tone of voice. In early 2023, a researcher at Stanford University in the United States showed that the chatbot ChatGPT now has a human-like “intuition” at the level of a nine-year-old child.

At the highest level of artificial intelligence comes self-awareness. This means that robots can hold complex conversations with people while also reflecting on “who they are” in relation to the objects and individuals around them.

A small NAO robot got a taste of self-awareness during a test in 2015. Two other NAO robots were muted by the programmers, who described this as giving the robots a “dumbing pill.”

The three robots were then asked which of them had received the pill.

Two of them could not speak, but the third replied, “I don’t know.” Then it corrected itself: “Sorry, now I know. I can prove that I did not receive the pill.”

In other words, the robot was aware of who it was in relation to the two muted robots.

A robotic arm learns like a child

In 2022, news broke of a new robotic arm made by researchers at Columbia University in the United States.

The arm performs tasks such as catching a ball without its movements being programmed by humans. It taught itself.

At first, the robotic arm flailed about, moving completely freely and aimlessly, while five cameras recorded the movements. With the help of the recordings, it learned how to stretch, bend, swing, and so on.

The robot began to imagine where it would move if it activated certain joints with a certain amount of force.

In other words, it became aware of its own body – like a child learning to navigate the world.
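The Columbia team’s actual system relied on deep networks trained on raw camera footage; purely as an illustration of the idea, here is a minimal Python sketch with an assumed two-joint arm and made-up helper names. The robot issues random commands, records where its tip ends up, fits a simple model to that data, and can then predict the outcome of a command before moving.

```python
import numpy as np

# Hypothetical two-link planar arm; the link lengths are assumptions for this
# sketch, not the geometry of the Columbia arm.
LINK1, LINK2 = 0.5, 0.4

def observe_tip(q):
    """Stand-in for the five cameras: report where the arm's tip actually
    ends up for a joint command q = (q1, q2)."""
    q1, q2 = q
    x = LINK1 * np.cos(q1) + LINK2 * np.cos(q1 + q2)
    y = LINK1 * np.sin(q1) + LINK2 * np.sin(q1 + q2)
    return np.array([x, y])

# 1) "Babbling" phase: random, aimless joint commands, observed from outside.
rng = np.random.default_rng(0)
commands = rng.uniform(-np.pi, np.pi, size=(2000, 2))
observations = np.array([observe_tip(q) for q in commands])

# 2) Fit a simple self-model that predicts the tip position from the joint
#    angles, using fixed trigonometric features and ordinary least squares.
def features(q):
    q1, q2 = q[..., 0], q[..., 1]
    return np.stack([np.cos(q1), np.sin(q1),
                     np.cos(q1 + q2), np.sin(q1 + q2),
                     np.ones_like(q1)], axis=-1)

weights, *_ = np.linalg.lstsq(features(commands), observations, rcond=None)

# 3) The arm can now "imagine" the outcome of a command before moving.
test_command = np.array([0.3, -1.1])
predicted = features(test_command) @ weights
actual = observe_tip(test_command)
print("predicted tip:", predicted.round(3), "actual tip:", actual.round(3))
```

Because the hand-picked features happen to match the toy arm’s true geometry, the fit here is near-perfect; the point of the Columbia work was that the real arm had to discover that structure on its own, from nothing but video of its own random movements.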

This emerging self-awareness among robots excites some robotics engineers, who predict that robots will make our lives much easier everywhere from factories to nursing homes.

Hod Lipson, one of the researchers behind the self-aware robotic arm, believes that self-aware robots would be an even greater achievement than a cure for cancer, because a self-aware robot could handle that task and many more besides.

Other experts take a bleaker view of the future.

Elon Musk warned

In 2017, more than a thousand technology and artificial intelligence experts, including Elon Musk, signed an open letter to the global community warning about thinking robots.

The danger is most evident on the battlefield. In 2020, for example, Kargu-2 drones attacked Libyan rebel forces – without human intervention. The attack is considered the first known example of autonomous weapons being used in active warfare.

But even if we ignore robots designed for military purposes, they can be deadly.

Take, for example, the now well-known robot dog Spot from the American company Boston Dynamics.

The dog’s ability to climb stairs and overcome obstacles makes it suitable for, among other things, bomb disposal or assisting during natural disasters.

But the powerful, fast, and agile robot dog also appeals to people with other purposes. Videos of robot dogs with machine guns strapped to their backs have gone viral on the internet.

The combination of a weapon and a highly mobile robot, which may one day gain consciousness, sends chills down the spines of many internet users – and of people in the robotics industry.

Boston Dynamics was one of six leading robotics companies behind a 2022 pledge: their robots must not be turned into weapons or used to harm people.

But even if robots don’t get guns on their backs, they can still be dangerous.

Autonomy poses a threat

All computers can be hacked, and this also applies to the computers that make up the “brains” of robots. Take, for example, the small all-terrain robot Husky, which, among other things, can handle radioactive waste.

It frees people from a dangerous task, but such a robot could in principle be hacked and forced to use hazardous substances to harm people.

Even if a robot is well protected from hackers, there is still a risk that it will simply make mistakes. Among other things, this has happened many times with the robots that drive self-driving cars.

In 2018, for example, a 49-year-old pedestrian was run over and killed by a self-driving Uber car in Arizona, USA, because the AI did not understand that an evasive maneuver was needed until it was too late.

And in early 2023, a report was released documenting that Tesla’s self-driving cars and their artificial intelligence may have as many as 19 lives on their conscience.

But the ultimate fear is that robots at the highest level of artificial intelligence – with human-like intuition and awareness – will get so far ahead of us that we lose control of them.

Paper clips can be our downfall

With advanced artificial intelligence, we can never predict how robots will think and act. This is the opinion of many experts, including futurist Nick Bostrom.

Once robots become smart enough, they can become life-threatening – despite the best of intentions.

Bostrom is particularly known for a thought experiment: if we give an advanced AI the task of producing as many paper clips as possible, it may conclude that it can produce more paper clips if we humans are not alive – because we could disrupt production, for example by switching the robot off.

Our bodies could perhaps even be used as raw material for paper clips.

Bostrom constructed the scenario to show that it may be impossible to control a self-aware robot, even when it is only asked to help with limited, practical tasks.

The robotics companies that warn against their products being used as weapons are nevertheless optimistic and “look forward to a bright future where humans and robots work side by side to solve some of the world’s problems.”

If the late physicist Stephen Hawking is to be believed, the future will be very bright – or very dark. In 2018, he said that the development of advanced AI could be:

“either the best or the worst thing that has ever happened to mankind.”
