Paradism

November 20, 2016, category: NEWS_TECH


There is an ongoing debate about the potential dangers of the rise of an Artificial Intelligence (AI) that may soon exceed human intelligence. Not surprisingly, the fear-mongers are receiving most of the media spotlight, which is regrettable.

Let's bring some clarity to the AI debate and better understand the risks we could face and the protective measures we could take.

The fear-mongers' most deceptive trick is to conflate intelligence with free will.

But first let's be clear on what we mean by intelligence.

Most will agree with the following: intelligence is the ability to process information with accuracy and speed to produce coherent behaviour. The more accurately and quickly we process information to produce an appropriate response, the more intelligent we are.

With this definition, machines are already far more intelligent than humans at specific tasks. Take a calculator as an example. A calculator processes information far more quickly and accurately than the human brain. It would occur to nobody that the calculator could one day endanger the human race. Why? Because the calculator is programmed. It is programmed to perform some very specific tasks and has no way to change itself into doing anything else.

If one day we give a calculator computing power that exceeds that of the human brain, it will calculate even faster and more accurately, but there will be no increased risk of it using its computing power for something else. It is limited by design to only perform calculations. Obviously, more powerful calculators will not endanger the human race any more than less powerful ones.
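To make the "limited by design" point concrete, here is a minimal sketch (the function and its four operations are invented for illustration, not taken from the article): however much computing power runs this program, its set of possible behaviours never grows.

```python
# A hypothetical, fixed-program "calculator": it can only ever do what its
# code says, and nothing in it can rewrite itself.

def calculator(a: float, b: float, op: str) -> float:
    """Perform one of four hard-coded operations; anything else is rejected."""
    if op == "+":
        return a + b
    if op == "-":
        return a - b
    if op == "*":
        return a * b
    if op == "/":
        return a / b
    raise ValueError(f"Unsupported operation: {op!r}")

# Faster hardware only makes these same four operations faster;
# the range of possible behaviours stays exactly the same.
print(calculator(6, 7, "*"))   # 42
```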

All living beings have intelligence. From bacteria to apes, they all process information from their environment to produce the coherent behaviours that ensure their well-being and survival (a non-intelligent being - what we call a thing - would produce no behaviour, or only random behaviour). Examples abound where living beings display more intelligence and process specific information with greater accuracy and speed than humans: the orientation skills of migrating birds, the sonar detection of bats, the web-building skills of a spider, the farming skills of ants... However, all plants and animals clearly have limitations they cannot escape. The bee will make the same hive and the same honey, day after day, generation after generation. Birds will make the same nest. Even with some ability to learn from and adapt to their environment, they have no ability to change by themselves the programming and conditioning they are subjected to. They behave like biological robots that are programmed and programmable, and they react without exercising any control over their reactions.

Humans, however, demonstrate the full range of potentiality in how they behave in response to their environment. They can live in a cave or in a castle. As with other living beings, we observe the same susceptibility to conditioning, but humans clearly have the power to break free from it. Humans are also programmed and programmable, but they have the ability to question and change their programming. They are self-programmable. And this makes all the difference. Being self-programmable gives us the ability to exercise free will. We are programmed to react instantly to the stimuli of our environment, but we have the ability to change our programs if we want to. We can change by ourselves the way we process information, and that gives us an unlimited potentiality of behaviours. And it is this unlimited potentiality of behaviours that makes us so dangerous.
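A toy sketch of the distinction the article draws between "programmed" and "self-programmable" (the class names and behaviours are invented for illustration; this is not a claim about how real AI systems are built):

```python
# Illustrative only: a fixed-reaction agent vs. one that can replace its own rule.

class ProgrammedAgent:
    """Always reacts the same way; it has no means of altering its own rule."""
    def react(self, stimulus: str) -> str:
        return f"instinctive response to {stimulus}"


class SelfProgrammableAgent:
    """Starts with the same rule, but can question and replace it."""
    def __init__(self):
        self.rule = lambda stimulus: f"instinctive response to {stimulus}"

    def react(self, stimulus: str) -> str:
        return self.rule(stimulus)

    def reprogram(self, new_rule):
        # The rule itself gets rewritten; in the article's terms, this step
        # (triggered here from outside for simplicity) stands in for free will.
        self.rule = new_rule


bee = ProgrammedAgent()
human = SelfProgrammableAgent()
print(bee.react("flower"))                      # always the same behaviour
human.reprogram(lambda s: f"chosen response to {s}")
print(human.react("flower"))                    # the behaviour itself has changed
```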

The whale and the elephant have bigger brains, and therefore more brain power, than humans. None of the AI fear-mongers speak about the risk that one day whales and elephants could use their superior brain power to rise against humanity and threaten it. Without free will, they cannot escape the limitations of their programming and the confinement of their reactions.

Yes, the rise of computing power means the rise of intelligence, the rise of a greater ability to process information more accurately and faster. Yes, we need to expect, in the next decade or two, the arrival of a super AI whose computing power will exceed that of a human and, soon after, that of the entire human race. But here is how the deception occurs.

The fear-mongers imply that the rise of intelligence can eventually give rise to free will, with machines taking control of their own destiny and overpowering humans, much like the Skynet takeover in the Terminator movies. That is untrue. The rise of computing power does not entail the rise of free will.

Free will results from a specific way of processing information. It can exist at a low level of computing power or at a very high level of computing power. The amount of computing power has nothing to do with free will, because accuracy and speed have nothing to do with freedom. A machine can be superior in intelligence at performing certain tasks, but it will never have the freedom to perform tasks it is not programmed for.

When we build a machine, we are in full control of how it processes information. The day we discover how to give machines the ability to self-program and exercise free will, then it will be time to think about how best to do it without endangering ourselves. With free will comes the ability to do good and bad things, the ability to choose, and that is potentially dangerous. And when we decide to give free will to a being (biological or not), we will also need to give it the freedom to exercise it (otherwise, why give it in the first place?). We will therefore need to let go of our control, to the extent that this does not endanger our freedom and survival. The most likely scenario would be to create beings with free will on a distant planet where they can develop freely and demonstrate their peacefulness before being allowed to propagate through the universe. But that is a whole other topic.


What will we use artificial intelligence for? For two things: to answer our questions and to control the machines that will serve us. And if we consider that controlling machines ultimately comes down to answering the question "how can we best perform this task?", then we need and use intelligence to answer our questions.

An AI will soon process information and knowledge with more speed and accuracy than humans. On any topic, it will give us a better answer than all the experts combined, since it will learn from all of them as well as develop its own expertise. Having an AI will be like having access to an encyclopedia that provides you with the best possible answers of the time. To wonder whether an AI is dangerous is like wondering whether having an encyclopedia with all the knowledge of the world is dangerous. The answer is yes. If you ask how best to destroy the world, you will get the best possible answer. But the AI and the encyclopedia are actually neutral. It is how we intend to use them that can be dangerous. In other words, it is not the computing power that is dangerous, it is how we intend to use it. Just like any other technology, it can be used for great good or for great evil. It can be used to destroy and kill or to help and save. The more powerful a technology, the more helpful and the more destructive it can be.

In his TED presentation (https://www.youtube.com/watch?v=8nt3edWLgIg), Sam Harris places the chicken, you and me, and von Neumann (supposedly the most intelligent human who ever lived) on the exponential curve of computing power to show that very soon computing power will far exceed that of humans, and that this should be scary, since a super intelligent machine would look down on humans the way we look down on ants. We could at times become, for them, a nuisance that could easily be disposed of.





The proper graph, one that correctly shows the risks to the human race from a super intelligent machine, would look like this:




The potential danger of an AI without free will that is programmed and used by benevolent people is zero or very low.

The potential danger of an AI without free will that is programmed and used by malevolent people is very high.

The potential danger of an AI with free will is unknown.

The risk the fear-mongers are warning us about, of an AI getting out of control and overpowering the human race, should not be attributed to computing power but to the ability to exercise free will: the ability to access and choose among the full range of possible behaviours.

Deprived of free will, a super AI is just a machine that always comes up with the best available answer to our questions and to the tasks we ask it to perform. There will be no need to be grateful and say thank you, just as we don't thank our washing machine after a wash or our laptop after a Google search. They are not even slaves, for they have not been enslaved against their will. They have no will. Or they have just one: to serve us. They no more need rights than laptops and washing machines do. They are designed to serve us, and the fact that they can process information better than we do does not mean they will want more freedom. However powerful, a calculator cannot, by design, aspire to freedom.

With an AI that has free will, it is different. It can choose to disobey. You will need to motivate it to do the things you want it to do. And gratitude will be appropriate, for it could have chosen to do something else. An AI that possesses free will deserves to have its freedom respected. That is a completely different story. The AIs that humanity is creating are not being designed with free will. The gains in computing power aim at increasing accuracy and speed, not freedom. The fear-mongers can relax. Or can they?

Not so fast. There is a very real danger that is not being talked about: the danger of a super AI falling into the wrong hands. The scientists who gave the nuclear bomb to the military and the politicians bear a heavy responsibility for the threat of nuclear self-destruction we now face. If a super AI is used by a power-hungry elite to dominate and keep the rest of humanity in servitude, there won't be much of an escape possible. But it is not the super AI that will enslave us; it is those who control it. The question is how to avoid giving malevolent people access to a super AI. This is a far greater danger than super AIs breaking free from their programming.

Where in the media do we discuss the dangers of the military developing a super AI? They already have the nuclear bomb, and we still can't take it away from them. If malevolent people win the race to the super AI, we are doomed. Humanity first needs to protect itself from the psychopaths and sociopaths in positions of power before the arrival of the super AI. The "Skynet" scenario is a diversion that shifts the focus of the debate away from the real problem. Humanity does not have much time left to pacify and unify itself and to make sure that the powerful technologies that are coming will not be used to endanger its survival.

Some could argue that free will could still emerge in an AI as an accidental consequence of our attempts at modeling the human brain. Many of our discoveries and inventions have happened by accident. Yes, it is always possible. And we will also learn from modeling the brains of dogs, our most faithful and unconditionally loving animals. However, it seems unlikely that a feature like free will would be created by chance before first being discovered in the human brain. The ability to self-program requires specific wiring and coding that should make it readily detectable in the human brain when compared with other species.

Let's assume, however, that we have created an AI with free will by accident. How will we know? The first time it refuses to obey our commands and does something it was programmed not to do. How can we protect ourselves? We just unplug it. A dysfunction like free will is noticeable early in the life of the AI, and it can be terminated well before it can endanger the human race. If necessary, we can always develop a test to detect free will. The potential terminators will all be terminated well before they can do any real damage.

Free will is definitely not a desirable feature for the AIs we want to use as our servant machines. It will simply be left out.

Another counter-argument to the fear-mongering: the intelligence of a super intelligence is overrated.

For most, the exponential growth of our computing power is impressive, then humbling, and finally disconcerting. There seems to be no upper limit to an artificial intelligence. Everything that grows exponentially is mind-boggling and... scary. But here too we can relax.

While our computing power grows at an exponential rate, eventually increasing arbitrarily fast, the usefulness of computing power is logarithmic: it grows at a rate that becomes arbitrarily slow.




The gains from an increase in computing power will stop being significant. The more computing power we have, the less use we have for the next increment of it.
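A short worked version of this claim, under the assumption (my reading of the paragraph above, not a formula given in the article) that usefulness is modeled as the logarithm of computing power:

```latex
% Illustrative model: computing power grows exponentially, usefulness logarithmically.
\[
C(t) = C_0\, e^{kt}, \qquad U(C) = \log C
\]
\[
U\bigl(C(t)\bigr) = \log C_0 + kt
\qquad \text{(usefulness grows only linearly in time)}
\]
\[
\frac{dU}{dC} = \frac{1}{C} \;\longrightarrow\; 0
\quad \text{as } C \to \infty
\qquad \text{(each extra unit of computing power adds less and less)}
\]
```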

We use intelligence to answer questions. More intelligence means more accuracy and speed. How accurate and how fast do the answers need to be? What is accurate enough, and what is fast enough?

Applied to the calculator: how many decimal places do we need, and how fast must the result come? For most uses we already have more than enough, and calculators are mostly underused.
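A small sketch of the "how many decimal places" point (the numbers are arbitrary examples): beyond a handful of digits, extra precision no longer changes any practical outcome.

```python
# How much precision is "enough"? Illustrative numbers only.
from decimal import Decimal, getcontext

for digits in (5, 15, 50):
    getcontext().prec = digits
    share = Decimal(1) / Decimal(3)          # e.g. splitting a bill three ways
    print(f"{digits:>2} significant digits: {share}")

# Whether computed to 5, 15 or 50 digits, 0.333... rounds to the same cents;
# past a certain point, more accuracy buys nothing in practice.
```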

If a super AI is a machine that always comes up with the right answers, a super super AI won't make much of a difference.

Yes, we can always find applications that require more computing power; we can always model the universe with greater accuracy and speed. But who cares?

Intelligence will level out. The least intelligent person equipped with a super AI will be more intelligent than all the geniuses combined. The same goes for intelligent life forms across the universe. All it takes is for an intelligent life form to have enough intelligence to create a super AI, and it then levels out with the highest intelligence in the universe. Everyone will get their answers right. Any visitors from other planets will have their own super AIs. They won't come as invaders to enslave or colonize us. They have no need to; they have AIs that do the work for them.

Some among the fear-mongers announce the leveling down of our intelligence: if we let the AIs do all the thinking, we'll stop using our brains and get dumber. That is a very bleak standpoint. The brain is very plastic. Areas that stop being used for certain tasks can be reused for other tasks. Our brain loves both stimulation and mastery. We love novelty and being good at something. Super AIs will stimulate and train our brains to achieve mastery in the domains of our choosing, such as arts, sports, science, meditation... We will keep stimulating our brains because we enjoy it. We will be able to optimise the stimulation of our brains to acquire the skills and the knowledge we desire. We will be able to rewire our brains to maximise their potential and intelligence, that is, their accuracy and speed. We will be able to function at peak performance all the time.

We will mostly use our brains to feel, and to feel good. We will do what we enjoy the most. We will play. We will do what makes us happy. Our culture has overvalued thinking over feeling. Who needs to learn to calculate when we now have calculators that do it for us? Can we achieve enlightenment by making calculations? Thinking doesn't make us happier. Those who are depressed think too much; they stop feeling. It is feeling, feeling connected to our environment and to others, it is love, that we seek the most. To love and be loved.

Thinking too much makes us depressed and feeling more makes us happier.

If we let the super AIs do more of the thinking so we can feel more, we will have a lot more fun.

Intelligence pales compared to love. What gives humans their worth is not their intelligence so much as their ability to love. Intelligence without love is a threat to the whole universe. How much love we have is more important than the ability to come up with the right answers.

There is no reason to fear the arrival of the super AI. The real danger is not artificial intelligence but natural stupidity. It is not intelligence we should be afraid of; it is the lack of it, and most of all the lack of love. If we can be sure that those who program and use the super AIs are full of love, then the super AIs will help us solve the biggest dangers we are facing, and we can confidently predict the safest and most beautiful future. Let's not, out of fear, shoot the ones who come to our rescue.

As always, love removes fear.

All we need is more love.


***


PS
After posting this article, someone sent a comment saying that free will is actually an illusion. Here is a short video by Sam Harris (the same one whose TED presentation on the dangers of AI prompted this article) explaining the arguments: https://www.youtube.com/watch?v=LJtP0Ep1_ds

He makes a strong case against free will: we don't have free will because our thoughts are generated unconsciously. Thoughts appear in our consciousness without us choosing them. It is an unconscious process. A murderer is not responsible for his crime because he is the victim of bad genetics and/or bad programming that express themselves unconsciously.

Yes, we don't choose our thoughts and reactions, but we can observe them. We can put ourselves in a state of self-observation, and once in this state it becomes possible to stop reacting, to interrupt our train of thought, and to start a new reaction and a new thought.

Therefore we can change ourselves in the direction of our choosing. We are self-programmable. If we don't like our programs, we can change them.

At the molecular level, a new thought is a new connection pathway created in the brain. The stronger the connections that encode our programming and our habits, the more attention and discipline it will take for the new pattern of thought to prevail over the old one. But it is possible. One can change oneself. Our brain is plastic, and we have the ability to monitor the construction of the neuronal pathways that define our thinking and our reactions.

We cannot choose the thought we are having right now but we can change our mind.

One can still argue that the choice of the new thought is also unconsciously created and conditioned by our past programming. However, there is a state of self-observation that Rael calls supra-consciousness, in which the observer is connected to the whole universe, observing from an infinite distance that allows him to see the big picture, not in a thinking mode but in a feeling mode. From this illuminated sensory state come the right observation and the right feeling.

And yes, for people who are not using the supra-consciousness, free will is an illusion.