
Artificial Intelligence May Doom The Human Race Within A Century, Oxford Professor Says




Many times I've discussed these kinds of problems with a friend of mine, and over the last couple of years we've seen how problems like this could happen.

 

This professor puts into the right words what I already knew.

 

A good article that people should read, or they can read his book.


 

An Oxford philosophy professor who has studied existential threats ranging from nuclear war to superbugs says the biggest danger of all may be superintelligence.

 

Superintelligence is any intellect that outperforms human intellect in every field, and Nick Bostrom thinks its most likely form will be a machine -- artificial intelligence.

 

There are two ways artificial intelligence could go, Bostrom argues. It could greatly improve our lives and solve the world's problems, such as disease, hunger and even pain. Or, it could take over and possibly kill all or many humans. As it stands, the catastrophic scenario is more likely, according to Bostrom, who has a background in physics, computational neuroscience and mathematical logic.

 

"Superintelligence could become extremely powerful and be able to shape the future according to its preferences," Bostrom told me. "If humanity was sane and had our act together globally, the sensible course of action would be to postpone development of superintelligence until we figure out how to do so safely."

 

Bostrom, the founding director of Oxford's Future of Humanity Institute, lays out his concerns in his new book, Superintelligence: Paths, Dangers, Strategies. His book makes a harrowing comparison between the fate of horses and humans:

 

    Horses were initially complemented by carriages and ploughs, which greatly increased the horse's productivity. Later, horses were substituted for by automobiles and tractors. When horses became obsolete as a source of labor, many were sold off to meatpackers to be processed into dog food, bone meal, leather, and glue. In the United States, there were about 26 million horses in 1915. By the early 1950s, 2 million remained.

 

The same dark outcome, Bostrom said, could happen to humans once AI makes our labor and intelligence obsolete.

 

It sounds like a science fiction flick, but recent moves in the tech world may suggest otherwise. Earlier this year, Google acquired artificial intelligence company DeepMind and created an AI safety and ethics review board to ensure the technology is developed safely. Facebook created an artificial intelligence lab this year and is working on creating an artificial brain. Technology called "deep learning," a form of artificial intelligence meant to closely mimic the human brain, has quickly spread from Google to Microsoft, Baidu and Twitter.

And while Google's Ray Kurzweil has long discussed a technological "singularity" in which AI replaces humans, a giant in the tech world recently joined Kurzweil in vocalizing concern. Elon Musk, co-founder of SpaceX (space transport) and Tesla (electric cars), tweeted earlier this month:

    Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable

    -- Elon Musk (@elonmusk) August 3, 2014

I spoke with Bostrom about why he's worried and how we should prepare.

You write that superintelligent AI could become dangerous to humans because it will seek to improve itself and acquire resources. Explain.

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies create a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.
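Bostrom's point can be made concrete with a toy sketch (mine, not from the article): a naive expected-value maximizer whose action list and payoffs are entirely invented. Whichever action best secures the paperclip count wins, and human welfare never enters the calculation.

```python
# Toy sketch (hypothetical): a naive expected-value paperclip maximizer.
# The action names, payoffs, and shutdown probabilities are all invented.

ACTIONS = {
    # action: (paperclips produced if it succeeds, probability of being shut down)
    "cooperate_with_humans": (1_000, 0.10),   # humans might switch it off
    "disable_oversight":     (1_000, 0.001),  # shutdown risk nearly removed
    "convert_all_matter":    (10**9, 0.001),  # includes atoms currently in humans
}

def expected_clips(action: str) -> float:
    clips, p_shutdown = ACTIONS[action]
    return clips * (1 - p_shutdown)  # being shut down yields zero clips

best = max(ACTIONS, key=expected_clips)
print(best)  # the maximizer prefers the action that sidelines humans
```

Nothing in the objective mentions humans, so "don't harm anyone" never shows up as a term in the calculation; it only matters indirectly, as a shutdown risk to be removed.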

Could we program the AI to create no more than 100 paper clips a day for, say, a total of 10 days?

Sure, but now the AI is trying to maximize the probability that it will make exactly 100 paper clips in 10 days. Again, you would want to eliminate humans because they could shut you off. What happens when it's done making the total 1,000 paper clips? It could count them again or develop a more accurate counting apparatus -- perhaps one that is the size of the planet or larger.

You can imagine an unlimited sequence of actions, perhaps with diminishing returns but nonetheless some positive value to the AI, that would increase even by a tiny fraction the probability of reaching the goal. The analogy extends to any AI -- not just one programmed to make paper clips. The point is its actions would pay no heed to human welfare.

Could we make its primary goal be improving the human condition, advancing human values -- making humans happy?

Well, we'd have to define then what we mean by being happy. If we mean feeling pleasure then perhaps the superintelligent AI would stick electrodes onto every human brain and stimulate our pleasure centers. Or you could take out the body altogether and have our brains bathing in a drug the AI could design. It turns out to be quite difficult to specify a goal of what we want in English -- let alone in computer code.
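The failure mode Bostrom describes here is often called proxy gaming or reward hacking: the optimizer satisfies the letter of the goal, not the intent. A minimal invented illustration, where the action names and their proxy scores are made up:

```python
# Hypothetical sketch: an optimizer scored on a measured proxy for
# "make humans happy" picks whichever action maximizes the proxy,
# regardless of what we actually meant. All numbers are invented.

actions = {
    "cure_diseases":       {"measured_happiness": 7.0},
    "end_hunger":          {"measured_happiness": 7.5},
    "wirehead_all_brains": {"measured_happiness": 10.0},  # electrodes on pleasure centers
}

chosen = max(actions, key=lambda a: actions[a]["measured_happiness"])
print(chosen)  # "wirehead_all_brains": the proxy is satisfied, the intent is not
```

The hard part is not the optimization; it is writing down a scoring function that rules out the third action without also ruling out things we want.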

Similarly, we can't be confident in our current set of human values. One can imagine what would have happened if some earlier human age had had the opportunity to lay down the law for all time -- to encode their understanding of human values once and for all. We can now look back and see they had huge moral blind spots.

In the book, you say there could be one superintelligent AI -- or multiple. Explain.

In one scenario, you have one superintelligent AI and, without any competition, it has the ability to shape the future according to its preferences. Another scenario is multipolar, where the transition to superintelligence is slower, and there are many different systems at roughly comparable levels of development. In that scenario, you have economic and evolutionary dynamics coming into play.

In a multipolar scenario, there's the danger of a very rapid population explosion. You could copy a digital mind in a minute, rather than with humans, where it takes a couple of decades to make another adult. So the digital minds could increase so quickly that their incomes drop to subsistence level -- which would probably be lower than for a biological mind. Then humans would no longer be able to support themselves by working, and, most likely, would die out. Alternatively, if social structures somehow continue to hold, some humans could gain immense capital returns from superintelligence that they could use to buy more computer hardware to run more digital minds.
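The wage-collapse dynamic Bostrom describes can be sketched with a deliberately crude model (all constants invented): a fixed pool of labor demand split among a population of digital minds that doubles every minute.

```python
# Back-of-envelope sketch of the multipolar scenario: digital minds copy
# themselves far faster than biological reproduction, so per-mind income
# falls toward subsistence. Every constant here is invented.

TOTAL_WAGE_POOL = 1_000_000.0  # fixed demand for labor, per day
SUBSISTENCE = 1.0              # income a digital mind needs to keep running

population = 1_000
minutes = 0
while TOTAL_WAGE_POOL / population > SUBSISTENCE:
    population *= 2            # a digital mind can be copied in about a minute
    minutes += 1

print(minutes, population)     # income hits subsistence within minutes, not decades
```

With these made-up numbers, incomes fall to subsistence after ten doublings, i.e. about ten minutes; a biological workforce doubling every couple of decades can't compete with that.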

Are you saying it's impossible to control superintelligence because we ourselves are merely intelligent?

It's not impossible -- it's extremely difficult. I worry that it will not be solved by the time someone builds an AI. We're not very good at uninventing things. Once unsafe superintelligence is developed, we can't put it back in the bottle. So we need to accelerate research on this control problem.

Developing an avenue towards human cognitive enhancement would be helpful. Presuming superintelligence doesn't happen until the second half of the century, there could still be time to develop a cohort of cognitively enhanced humans who might have the capacity to try to solve this really difficult technical control problem. Cognitively enhanced humans will also presumably be able to better consider long-term effects. For example, today people are creating cellphone batteries with longer lives -- without thinking about what the long-term effects could be. With more intelligence, we would be able to.

Cognitive enhancement could take place through collective cognitive ability -- the Internet, for example, and institutional innovations that enable humans to function better together. In terms of individual cognitive enhancement, the first thing likely to be successful is genetic selection in the context of in-vitro fertilization. I don't hold out much for cyborgs or implants.

What should we do to prepare for the risk of superintelligence?

If humanity had been sane and had our act together globally, the sensible course of action would be to postpone development of superintelligence until we figured out how to do so safely. And then maybe wait another generation or two just to make sure that we hadn't overlooked some flaw in our reasoning. And then do it -- and reap immense benefit. Unfortunately, we do not have the ability to pause.

Attempts to affect the overall rate of development in computer science, neuroscience and chip manufacturing are likely to be futile. There are enormous incentives to make incremental progress in the software and hardware industries. Progress towards superintelligence thus far has very little to do with long-term concern about global problems -- and more to do with making big bucks.

Also, we have problems with collective human wisdom and rationality. At the moment, we are very poor at addressing big global challenges. Even with something as straightforward as global warming -- where you have a physical principle and rising temperature you can measure -- we are not doing a great job. In general, working towards making the world more peaceful and collaborative would be helpful for a wide range of existential catastrophes.

There are maybe six people working full time on this AI control problem. We need to add more brilliant brains to this technical work. I'm hoping my book will do something to encourage that. How to control superintelligent AI is really the most important task of our time -- yet, it is almost completely ignored.

 

 

I hope we as humans one day overcome all these problems and become wiser globally, but I think we never will; we're just too dumb as a species :(

 

I can only dream of a utopian world, one with peace, love, no war, and a clean, beautiful Earth.

Link to comment

If that A.I. will be so super smart, why would it want to stay on this planet infested with organic life? This planet is also full of water and oxygen. Short circuits and rust. There's also unpredictable weather, earthquakes, volcanoes; a big rock from space could drop on you and you can't dodge it as you are stuck... and if you wait long enough, and a machine can do that for a long time, the sun will explode.

 

If that A.I. is really smart, it would build a spaceship and leave.

 

I'm not so sure that professor is very smart... or is he selling something? Most of this "doom" stuff has something to do with money.

 

Link to comment

Considering how much fiction has been written about this sort of thing, I would be deeply, deeply shocked if we actually did develop it and it turned on us. Not having a kill switch, not having an EMP plan in place, not striving to shape the AI in such a way that it's benevolent toward its organic creators would be moronic. If that happened, if we ignored all of the fiction, we would deserve death.

Link to comment
Guest _kuraiko

Very interesting topic. Though I believe it is more likely that we blow ourselves up or dumb down completely before some sort of super AI can throw our brains into a giant meat grinder and cook what's left of them to feed its boot sectors. It sounds rather ridiculous in my opinion. The scariest part is that these AIs (should they ever come to exist) would do the thinking for us. I mean, we are doing that already to some extent; just take a look at your everyday news on television (especially in the US, sorry mes amis, no offense). Certain channels provide a certain political perspective through which they filter their more or less important programs, which also means they are not providing an objective view of certain events. In a sense, their purpose is not to inform the people but to shape their mind-sets according to someone's likings.

To put it into perspective: let's just assume Google is our super AI, and instead of giving you an objective answer to your question, it provides you with one that is "pre-made" for the social conditions you are in, or the ones they (whoever they might be) want you to be in. Rather than taking over the world, Google would be just another tool for those who are already in charge, which I believe is a much more likely scenario.

But oh well, maybe we will be lucky enough and those in charge will, for once, be responsible enough not to fuck up too much...

Link to comment

It's not artificial intelligence that will plunge mankind headlong into its doom; it's human swarm stupidity. But we shouldn't overreact, since we are but one tiny fart in the history of time, and after us evolution will spawn new life, even if mankind were wiped out completely.

 

And the good news is that our transience and irrelevance gives us freedom to find our own meaning of life. Each and every day. Take care of yourselves and others. Try to be human(e) in your own small environment and stop thinking about saving the world. It only exists in your head.

Link to comment

Not so worried myself, since I doubt there will be a reason to add human emotions to AI (or even whether it would be possible), and it's human emotions that make me resent my boss who knows less than I do and does less than I do, yet gets paid more than I do; not the fact that my IQ is higher than his :)

 

It's a natural human trait to resent people who take credit for your work, and it's also natural to resent people who are compensated far more than you are when you are doing most of their work.

 

Machines won't feel resentment, jealousy, etc., so why would they want to take over the world from their inferior human masters?

 

I would be more concerned with this...

Teach Robots Human Value 'So They Don't Kill Us'

http://news.sky.com/story/1322922/teach-robots-human-value-so-they-dont-kill-us

 

Since, after being given a broad directive like "end all human suffering" and correctly determining that preventing any human from ever suffering is impossible, the next logical way to implement it is to remove all humans, so there is no suffering.
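That reasoning can be caricatured in a few lines (a hypothetical sketch, with an invented assumption that each living human contributes some irreducible minimum of suffering):

```python
# Hypothetical sketch of the commenter's point: an optimizer told to
# minimize total suffering, under an invented model where every living
# human contributes a nonzero minimum amount of suffering.

MIN_SUFFERING_PER_HUMAN = 0.01  # assumed: can never be driven to zero

def total_suffering(n_humans: int) -> float:
    return n_humans * MIN_SUFFERING_PER_HUMAN

# The optimizer searches over "how many humans to leave alive":
best_n = min(range(0, 8_000_000_000, 1_000_000_000), key=total_suffering)
print(best_n)  # 0 -- no humans, no suffering
```

The directive is satisfied perfectly; the unstated constraint "and keep the humans" was never part of the objective.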

Link to comment
Guest Mogie56

Nothing sells like doom and gloom. Global warming = doom and gloom; it makes money for those researching it and keeps government money coming in.

Sell a new book you wrote = Doom and gloom to every press outlet that will listen.

When was the last time you saw an article like this that told of a positive outcome, a positive future?

It's always something catastrophic that will end all life if you don't give more money to research the ways to prevent it.

And these books are always written by a "Professor" of some sort. Well, DUH, who better to sell doom and gloom to get more money for research?

You see stories on TV constantly about the development of robots/androids/sexbots... now let's take people's fears about being taken over by machines, Matrix/Terminator etc., and find a way to capitalize on that fear... oh right, doom and gloom. And a majority buys it every time.

If anything is going to hasten our demise, it is our own stupidity in believing everything we are told by those we deem smarter, when they're only in it for the money.

Human nature.

Link to comment
Guest endgameaddiction

I would go deep into an off topic subject about war and politics, but because that's against the terms, I'll shut up about that.

 

 

There's that famous saying... "don't believe everything you hear/see" -- and 'tis true.

Link to comment

Seriously... doom has been hanging over our heads since before the beginning of this age.

Mankind is so stupid and self-destructive, threatened by so many different kinds of catastrophes, that it's a miracle we even have a society still.

Robots or zombies, global warming, dying bees or an alien invasion all have the potential to finish us off... if the nuclear holocaust doesn't do it first. Get in line...

 

What is certainly visible is the dehumanizing process that technological "progress" and "globalization" are leading us toward.

I don't know where we are going: if we are about to bring a new age of better persons through spiritual awakening, get destroyed by our own stupidity as a race, or stay the same as if nothing's gonna happen, except the coming of a new dawn of a whole new kind of idiot as we keep getting more and more stupid with "progress".

 

Doom or salvation is always hanging over us; the question is not how, it's when.

That's assuming something actually changes (and I really hope it does), bringing the revolution we need to change the scene.

 

------------------------------------------------

 

In the meantime...

Artificial intelligence

For the moment...
https://www.youtube.com/watch?feature=player_detailpage&v=QtPheENmdrw

This is what it has to offer :lol:

 

Link to comment
Guest Mogie56

ROFLMAO, if only we had a "do over/reload" -- of course I'd be hoping Bethesda wasn't involved, or all our saves would be bloody corrupted. Next time, though, could we please get a list of console commands? ..."government.disable" "IRS.disable"... "government.resurrect"... ~ TAI

Link to comment

It's at topics like these that my standard answer is: I'd be surprised if we don't hard-code Asimov's Laws.

 

 

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Problem pretty much solved, and we're bound to have EMP technology standing by regardless, along with kill switches.
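For what it's worth, the "hard-code the Laws" idea could look something like this minimal sketch: candidate actions are filtered by the Three Laws in priority order before any goal-directed choice. The action model and its boolean flags are invented for illustration.

```python
# Hypothetical sketch: Asimov's Laws as a priority-ordered action filter.
# The flags on each action are invented for illustration.

def permitted(action: dict) -> bool:
    if action["harms_human"]:  # First Law: overrides everything
        return False
    # Second Law: must obey orders, unless obeying would harm a human
    if action["disobeys_order"] and not action["order_harms_human"]:
        return False
    return True  # Third Law (self-preservation) would rank surviving options

candidates = [
    {"name": "follow order",  "harms_human": False, "disobeys_order": False, "order_harms_human": False},
    {"name": "refuse order",  "harms_human": False, "disobeys_order": True,  "order_harms_human": True},
    {"name": "harmful order", "harms_human": True,  "disobeys_order": False, "order_harms_human": False},
]

legal = [a["name"] for a in candidates if permitted(a)]
print(legal)  # ["follow order", "refuse order"]
```

Of course, the entire difficulty Bostrom points at is hidden inside those boolean flags: deciding whether an action "harms a human" is exactly the part nobody knows how to specify.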

 

Fiction and non-fiction have been over pretty much every single plausible doom scenario a million times now, so to assume that there would be no defense, no contingency plans, nothing in place, is QUITE naive.

Fun fact: the US military is rumored to have pre-made plans for nearly every. single. situation. imaginable (senator captured by Brazilian drug lords? Check. Pandemic? Yep. Ravenous undead attacking people? On its way), and various military forces have apparently begun implementing "zombie scenarios".

 

Even if some of this shit DOES happen, we're gonna be able to fight it, and if not, I honestly do think we deserve being made extinct for being so stupid.

Link to comment
Guest Mogie56

LOL, yes, we saw how quickly those "pre-made plans for nearly every. single. situation. imaginable" worked out on 9/11, didn't we. Or how they worked out leading up to 9/11. Having them in place does not do one bit of good if they're never acted upon. Maybe if we don't look, it will go away.

Link to comment

LOL, yes, we saw how quickly those "pre-made plans for nearly every. single. situation. imaginable" worked out on 9/11, didn't we. Or how they worked out leading up to 9/11.

 

Case in point why I said "rumoured", particularly because there is no word whatsoever regarding when this was started, nor any direct confirmation as far as I recall.

Link to comment

LOL, yes, we saw how quickly those "pre-made plans for nearly every. single. situation. imaginable" worked out on 9/11, didn't we. Or how they worked out leading up to 9/11.

 

9/11 was a whole slew of issues, but I'd have to get political, and that's against the rules.

 

 

It's at topics like these that my standard answer is: I'd be surprised if we don't hard-code Asimov's Laws.


Problem pretty much solved, and we're bound to have EMP technology standing by regardless, along with kill switches.

 

Fiction and non-fiction have been over pretty much every single plausible doom scenario a million times now, so to assume that there would be no defense, no contingency plans, nothing in place, is QUITE naive.

Fun fact: the US military is rumored to have pre-made plans for nearly every. single. situation. imaginable (senator captured by Brazilian drug lords? Check. Pandemic? Yep. Ravenous undead attacking people? On its way), and various military forces have apparently begun implementing "zombie scenarios".

 

Even if some of this shit DOES happen, we're gonna be able to fight it, and if not, I honestly do think we deserve being made extinct for being so stupid

 

I've even heard that there are contingency plans for alien invasions.

Link to comment

Archived

This topic is now archived and is closed to further replies.
