Nature isn't intelligent enough to create intelligent machines either.

We are on the path to replacing ourselves.

Machines will inherit the earth.

They may keep a few of us as pets, or relics of the past.

People may become collector's items, to be collected and traded among machines. Get one of every race, creed, and color for your zoo.

Freeze-dried Republicans only, of course. They are disease carriers.

Smart technology for dumb people

Given that the body of human knowledge increases apace and breakthroughs by their very nature tend to be sudden, this is not something to be flippant or complacent about. Having said that, I'm looking forward to it. Our track record cries out for new players on the field.

Why would it want to take over the world? Or to put it another way, how likely is it that we will be competing for the same resources, and how likely is it that it (the A.I.) will be competitive in any case?

What point is there to worrying about it?

It seems that Cambridge U. is trying to compete with Singularity University. If so, I think Singularity U. has a head start, more energy, and more focus. So this program will probably be a runt compared to the Singularity movement.

On the other hand, the Singularity crowd tends to be super-optimistic about 21st Century technology, while these Cambridge professors have a darker view. Since we are looking at incredibly powerful technology it is good to remember the dark side of it.

The complexity of such developments will render any thought experiments about them academic before they become possible. The current trend is toward integrating our technology into ourselves. The line between "us" and "them" will be a blurry one.

As for semmsterr: "I for one welcome our new robot overlords!"

I think we should push AI as far as it can go, but I agree that the risk of such machines surpassing human intelligence deserves consideration. It is fair to assume that superintelligent machines will have learning systems capable of creating and modifying their own learning algorithms based on the information and knowledge they acquire. This could mean a machine's goals change over time in ways that negatively affect humans. The threat would probably increase if superintelligent machines were also building the next versions of themselves. This also raises economic and ethical questions about whether corporations would need to hire human beings at all once they can build more intelligent machines.

"Machines that are not malicious, but machines whose interests don't include us."

Which raises the question: what interests do machines have at all?

Neither procreation nor survival seems to be an innate interest of an AI (why should it care if you switch it off?).

And if survival is somehow programmed into it, that merely extends to having enough power and maintenance to continue existing (much as we have a craving for enough food). But without a procreation drive, there is no chance that machines would accidentally 'outbreed' humans or compete significantly with us for resources on any level.

Yes: conscious machines may not be actively benign, but I really don't see the threat level, either. Unless we're talking about AI exercising active oversight of potentially harmful systems (weapons, nuclear power plants, etc.). In that case, purely technically optimal decisions can have biologically harmful consequences.