The Matrix, Westworld, I, Robot, Terminator, Blade Runner and so on. Common to these films and series, as well as many others, is the theme of robots trying to take over the world.
The films make a robot takeover possible by positing that AI has been invented, which turns the robots into independent thinkers.
If you follow technology news even a little, you will know that Google, OpenAI and several other major companies have taken some big steps in the development of AI. But does this mean we have to worry about robots taking over the world?
In this article I explain why I do not think we need to worry about robots taking over the Earth, and why we should instead be concerned about a number of other areas.
What is artificial intelligence?
There is a lot of confusion in this area, so let’s start with a definition.
Artificial intelligence is a field of computer science concerned with getting computers to perform tasks based on computations reminiscent of human thinking. The goal is to enable computers to perform more advanced tasks than they can today.
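To make the definition concrete, here is a minimal, hypothetical sketch of machine learning, the branch of AI most of the news coverage is about. Instead of following hand-written rules, the program classifies a new data point by comparing it with labelled examples it has seen before (a 1-nearest-neighbour classifier). All names and numbers are invented for illustration.

```python
# A tiny 1-nearest-neighbour classifier: the program "learns" from
# labelled examples instead of following hand-written rules.
# All data below is made up for illustration.

def distance(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(examples, new_point):
    # Return the label of the closest known example.
    closest = min(examples, key=lambda ex: distance(ex[0], new_point))
    return closest[1]

# Labelled examples: (features, label). The features could be any
# measurable quantities; here, two made-up numbers per data point.
training_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((8.0, 9.0), "dog"),
    ((9.0, 8.5), "dog"),
]

print(predict(training_data, (1.1, 0.9)))  # closest to the "cat" examples
print(predict(training_data, (8.5, 9.1)))  # closest to the "dog" examples
```

The point of the sketch is that nothing resembling intention or emotion is involved: the program only measures distances between numbers.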
If we want to become wiser about artificial intelligence, or discuss it in a well-informed manner, it is important to distinguish between some different technologies.
First, artificial intelligence is not the same as artificial emotion. Emotions are what we humans use as motives for acting the way we do, whether it is the desire to become richer, to be more loved, and so on; most of our actions are rooted in our feelings.
This is, of course, somewhat simplified, because humans also have drives and general reasoning that can run counter to our feelings, and these also act as motives for our actions. But to keep things simple, let us call it all emotion.
Why won’t the robots take over the world?
Machines do not have the ability to feel, and therefore they cannot motivate their own actions. A robot can certainly perform predictable actions, but those actions depend on the people who built it, and those people may have made mistakes or left out limitations.
In the movie “I, Robot”, the problem arises because the robots' built-in constraint to always protect humans is compromised: humanity is the greatest threat to itself.
In reality, however, robots and machines do not work that way – at least not unless artificial emotion is invented.
It is hard to say whether artificial emotion is even possible, and it is far from trivial to discuss whether human emotion is anything more than small electrical charges in the right places in the brain. Regardless of where you stand on that dilemma, we are very far from such a scenario.
There are two branches within the thinking of artificial intelligence:
Soft AI – the view that the computer will never be able to do everything a human can do.
Hard AI – the view that humans are nothing but atoms and chemistry, and that it is therefore possible for the computer to do everything a human can do, 100%. Eventually.
As you may have guessed, I belong to those who think hard AI is not possible.
The biggest AI problem is data!
Facebook, Google, Microsoft, the American online doctor service, and so on… The data-security scandals are many: case after case where data has been leaked, sold or passed on. And data is no longer just a question of which advertisements you see on Facebook.
Data can swing election campaigns; data can predict diseases, pregnancies, when you will die, and so on. And what data can do, the companies that hold that data can do too.
It sounds surreal, but there are examples from the US where young women who felt a little unwell visited the American version of the online doctor. That site sold information about its visitors to companies such as Walmart, after which some of these women were sent packages of baby products.
Based on this data, the companies could determine that the women were pregnant before they even knew it themselves.
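To illustrate the mechanism (not the companies' actual systems, which are not public), here is a hypothetical sketch of how purchase data alone can be turned into a prediction: count how often each product appears in the shopping histories of customers who later turned out to be pregnant versus those who did not, then score a new customer's basket accordingly. All products and data are invented.

```python
# Hypothetical sketch: predicting a label (e.g. "pregnant") purely
# from purchase histories, using simple per-item frequencies.
# All data is invented; real systems are far more complex.
from collections import Counter

def item_scores(histories):
    # histories: list of (set_of_items, label) pairs, label True/False.
    pos, neg = Counter(), Counter()
    n_pos = sum(1 for _, label in histories if label)
    n_neg = len(histories) - n_pos
    for items, label in histories:
        for item in items:
            (pos if label else neg)[item] += 1
    # Score: how much more often the item appears in positive histories.
    return {
        item: pos[item] / max(n_pos, 1) - neg[item] / max(n_neg, 1)
        for item in set(pos) | set(neg)
    }

def score_basket(scores, basket):
    # Sum the per-item scores for a new customer's basket.
    return sum(scores.get(item, 0.0) for item in basket)

histories = [
    ({"unscented lotion", "vitamins", "cotton balls"}, True),
    ({"unscented lotion", "vitamins"}, True),
    ({"beer", "chips"}, False),
    ({"chips", "cotton balls"}, False),
]
scores = item_scores(histories)
# A basket resembling the "pregnant" group scores higher:
print(score_basket(scores, {"unscented lotion", "vitamins"}))
print(score_basket(scores, {"beer", "chips"}))
```

Even this toy version shows why the anecdote is plausible: no one has to tell the system what pregnancy looks like; the pattern falls out of the data by itself.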
This is not a call to put on tinfoil hats and move far away from all technology, but it does mean we have to take a position on how we move around the web and how far we will allow companies to go in using the data they hold.
Lack of ethical rules for the use of AI
The EU's new personal data law, the GDPR, is a clear improvement for private individuals' privacy, but there is still much room for improvement, in the rest of the world but also within the EU.
The point is not to single out Google as the big bully of the moment, but because Google is a leader in AI, two of the cases we highlight here come from them.
Example 1: Google has built AI that can assess the likelihood of a patient dying.
Example 2: The new version of Android comes with an improved Google Assistant. It has become the focal point of a major ethical discussion because, using machine learning, the assistant has learned to imitate people so well that it is virtually impossible to tell human and machine apart.
Specifically, it is a feature where you can get the assistant to book, for example, a hairdresser's appointment for you. It works like this: you say, “Hey Google, book an appointment at Frisør Hansen on Nørrebro on Thursday between 10 and 14.” The assistant then finds the number itself and calls the hairdresser, not with your number but with its own.
During the call it presents itself as your assistant and books the appointment.
This is not a problem in itself, but because Google has combined so much data with machine learning, you cannot hear the difference between an ordinary human and Google's machine.
It's a very nice feature for us users of the Google Assistant, but it raises a question of morality: may a machine pretend to be a human?
Example 3:
A researcher has developed a program that can assess, with fairly good accuracy, whether a person is homosexual. The program can use data from, for example, public cameras to estimate the likelihood of homosexuality. The researcher developed it without a normative agenda, so in itself it is meant to be used neither for good nor for ill. But the program could have terrible consequences, especially if it is improved.
The rights of homosexuals are already under pressure in several countries, and in some there are even prison or death sentences for being homosexual. If such a country or a dictator got hold of this program, it could mean greater persecution of homosexuals, and that would most likely mean loss of life.
How can problems arise with AI technology?
The problem with AI, machine learning and deep learning is that we are now using machines to process data in far larger quantities than we have ever been able to before.
Previously, data had to be processed by human power, but more advanced computer programs have made it much easier. The internet is a goldmine for data collection, because we entrust most of our private data to the businesses behind our apps, computers and phones.
It is therefore important that we have whistleblowers and people who dare to question how data is used.