
Bill Gates ‘concerned’ about artificial intelligence



Microsoft’s founder Bill Gates has spoken out against artificial intelligence, suggesting significant advances in the sector could result in a loss of control.

Gates described how he felt confused by those who weren’t concerned about the implications of super-intelligent AI software.

The Microsoft ex-CEO was likely referencing comments made a day earlier by Microsoft Research chief Eric Horvitz, who said he ‘fundamentally’ doesn’t think super-intelligent AI poses a risk.

“I think that we will be very proactive in terms of how we field AI systems, and that in the end we’ll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life,” explained Horvitz.

Gates, writing on an ‘ask me anything’ Reddit thread, said: “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well.”

“A few decades after that though the intelligence is strong enough to be a concern.”

He added: “I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”


Elon Musk is, of course, the chief executive of Tesla, the founder of SpaceX, and a vocal critic of unchecked AI development.

Musk has spoken out against unfettered AI research on multiple occasions, comparing development in the sector to ‘summoning the demon’.

The Tesla boss has even donated $10 million to fund research on how we can prevent a global AI takeover a la Terminator.

Musk isn’t the only science-savvy thinker set against AI. Professor Stephen Hawking has also previously warned against AI, suggesting it could ‘spell the end of the human race.’


January 30, 2015, 2:22 pm

all the lessons are there in the terminator films, we just need to make sure there's a plug we can turn off that the a.i can't do anything about. Ctrl Alt Delete... job done.


January 31, 2015, 11:46 am

The problems come when AIs are designing new AIs. Their evolution could become exponential, whilst ours is gradual. Also, their way of thinking would likely be very different from ours, and hence potentially not recognisable until it's too late. OTOH, they will still run into physical limitations in terms of energy use and materials technology, like anything else, natural or artificial.


February 1, 2015, 2:47 am

All those celebrities (like Bill Gates, Elon Musk, Stephen Hawking) who have anything to say about AI are now lining up with various grim warnings about the looming disaster: the rise of machines destined to enslave us.

I agree there is some risk, theoretical at least, of such a scenario once an AI reaches the capacity to design new AIs. Then, as @mode11 noted, AI evolution could take some exponential form, which might be difficult for us humans to comprehend, thereby allowing it to go out of control.

But is it going to happen any time soon? I very much doubt it. Let's look a bit closer.

Any AI that may appear in the foreseeable future (during this century, at least) will be some kind of software running on the sort of computers we use now. But mathematically speaking, any computer we can imagine now, regardless of its memory and processing power, is nothing but a Turing machine. That's the so-called Church–Turing thesis.

But the Turing machine, a mathematical construct invented specifically to analyse computational processes, turns out to have some fundamental limitations. For instance, there cannot be a program that creates other programs, even for some quite narrow classes of tasks. (We humans somehow do it...)

But the main limitation of any computer system is that it cannot create information on its own. A computer is always just a transformer of information. Yes, its capabilities can be extended indefinitely by adding new programs. But it cannot create those programs by itself. Human programmers are needed for that.

The capacity to design anything (whether computer software or something as simple as a GIF icon) is, in effect, the capacity to create information.

What's information?
I'm not going to give a full definition here, but with respect to computers the idea is quite simple. Suppose we have some file. Surely it represents some information. Now let's pad that file with redundant bytes (e.g. spaces). Does the information in the file change? No, it stays the same, because we can use a program to remove those redundant bytes again. All archivers work on that principle: they reduce the file size without losing information. But how far can a file be compressed without loss? Clearly there is some limit, whatever program we use, for we cannot reduce every file to a single byte! That theoretical limit can be considered the pure information in the file, represented in computer (binary) form.
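The padding argument can be sketched with an ordinary compressor — here Python's zlib, a practical stand-in for the ideal compressor the argument implies (the exact byte counts below depend on the compressor, so treat them as illustrative):

```python
import zlib

# A small repetitive "file", and a copy padded with 10,000 redundant spaces.
original = b"the quick brown fox jumps over the lazy dog\n" * 20
padded = original + b" " * 10_000

c_original = len(zlib.compress(original, 9))
c_padded = len(zlib.compress(padded, 9))

print(len(original), "bytes ->", c_original, "compressed")
print(len(padded), "bytes ->", c_padded, "compressed")
# The padded file is over ten times larger, yet its compressed size is
# nearly unchanged: the redundant spaces carry no information.
```

Real archivers only approximate the theoretical limit (the Kolmogorov complexity of the file), but the behaviour is the same in spirit: redundancy compresses away.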

Now let's consider a program that "creates" information. Any such information can be written out as a file. To estimate the quality of that information, consider how far such a generated file can be compressed. Surprisingly, the answer is simple: it can be compressed at least down to the binary code of the very program that generated it (for we can always "decompress" such a file simply by running its generator again).
What does that mean?
A computer program cannot create more information than is encoded in its own binary code. That is, computer software cannot create information on its own!
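A rough demonstration of the same point, with zlib again standing in for an ideal compressor (the generator and its constants are just an illustration): a short, fixed program can emit an arbitrarily long file, but the compressed size of that output grows far more slowly than the file itself.

```python
import zlib

def generate(n: int) -> bytes:
    """A tiny deterministic generator: a linear recurrence mod 256."""
    x, out = 1, bytearray()
    for _ in range(n):
        x = (109 * x + 57) % 256
        out.append(x)
    return bytes(out)

for n in (1_000, 10_000, 100_000):
    data = generate(n)
    print(n, "bytes ->", len(zlib.compress(data, 9)), "compressed")
# The output length grows 100-fold, but the compressed size grows only
# slightly: the file contains little more information than the short
# generator itself encodes.
```

In Kolmogorov-complexity terms, the output's true information content is bounded by the size of the generator (plus its input); a practical compressor merely approaches that bound from above.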

So, here is where we stand. Any AI software that is poised to conquer the world must somehow be able to create information. And computer programs cannot create information by themselves.

So, is the idea of an omnipotent AI completely dead? Not exactly. But only one way remains: the AI software must take information from somewhere outside, that is, from the environment in which it operates. I won't discuss here what exactly the AI would have to do with that environmental input (or, to put it bluntly, information noise) to make any use of it (let alone to become an existential threat to humanity). I don't know myself.
But suppose there is a way. What are its limitations?
Of course, it will be limited by the intensity of that environmental information flow.
Birds singing, wind blowing with some random noise, humans chatting nearby and so on: that's all the input we have. Not much.
Still, it can indeed work, this designing of something from noise. Nature itself showed us how: evolution is the ultimate process of engineering something from random noise. It created us!
But how long did it take? Four billion years at least!
That gives you an idea of the true intensity of that kind of information input.
An AI relying on it won't be much more powerful than that.

Still, the question remains.
We humans, how do we work? Where do we get the information to design anything? Well, my guess is that we are not Turing machines; we are something else, fundamentally different at that. But what? Interestingly, there aren't even many theories about it. The latest I've read is "Shadows of the Mind: A Search for the Missing Science of Consciousness" by Roger Penrose (and some follow-ups to it).

So, what about that fearsome omnipotent AI that is coming to enslave us?
How is it going to appear if we have almost no idea even of what it is or how it might work?
