Reddit reviews Superintelligence: Paths, Dangers, Strategies

We found 48 Reddit comments about Superintelligence: Paths, Dangers, Strategies. Here are the top ones, ranked by their Reddit score.

48 Reddit comments about Superintelligence: Paths, Dangers, Strategies:

u/rusty_shaklefurd · 37 points · r/Cyberpunk

A central concept in cyberpunk and hacker culture is the idea of planned obsolescence: Corporations can make more money if they get you to buy their products multiple times instead of just once. This leads to a world where everything is discarded and the wealth gap is very clear between the people who have the new and the people who have the old.

The fact of the matter is that DNA is not our friend. Humans were built to spread our seed and be destroyed. We are a tool that DNA uses to extend its own life. The human body is amazing in many ways, but it's amazing like a disposable razor is amazing. There's no mechanism to prevent cancer, no mechanism to prevent the development of back problems, and no mechanism to prevent it from withering away like a rotten fruit once its purpose of reproduction has been served.

The implementation of transhumanism might be flawed, but so are all human endeavors. That's what cyberpunk is about: Figuring out how to deal with a world ruled by technology. Sometimes it doesn't go as smoothly as we imagine. The message of transhumanism is still clear, though: DNA doesn't own this planet any more, we do, and the name of the game is going to stop being reproduction and start being the enjoyment of existence.

Since you seem to be basing your understanding almost entirely on fiction, let me recommend some reading.

u/Philipp · 20 points · r/Futurology

Here's a fantastic book on the subject: Superintelligence.

u/chronographer · 19 points · r/Foodforthought

For background, I understand that Elon's views are informed by this book (among others, no doubt): Nick Bostrom: Superintelligence.

It's a dense read, but talks about AI and how it might emerge and behave. (I haven't finished the book, so can't say more than that).

Edit: fixed up punctuation from mobile posting. See below for more detail.

u/cybrbeast · 19 points · r/Futurology

This was originally posted as an image but got deleted because picture posts are not allowed, which IMO is an irrelevant reason in this case since it was all about the text. We had an interesting discussion going: http://www.reddit.com/r/Futurology/comments/2mh0y1/elon_musks_deleted_edge_comment_from_yesterday_on/

I'll just post my relevant contributions to the original to maybe get things started.



---------------------------

And it's not like he's saying this based on an opinion formed after a thorough online study, like you or I could do. No, he has access to the real state of the art:

> Musk was an early investor in AI firm DeepMind, which was later acquired by Google, and in March made an investment in San Francisco-based Vicarious, another company working to improve machine intelligence.

> Speaking to US news channel CNBC, Musk explained that his investments were, "not from the standpoint of actually trying to make any investment return… I like to just keep an eye on what's going on with artificial intelligence. I think there is potentially a dangerous outcome there."

Also, I love it that Elon isn't afraid to speak his mind like this. I think it might well be PR or the boards of his companies that reined him in here. He is so open and honest in television interviews too; too bad he didn't speak those words there.

----------------------------

I'm currently reading Superintelligence, which is mentioned in the article and by Musk. In one of the unstoppable scenarios Bostrom describes, the AI seems to function perfectly and is super friendly and helpful.

However, on the side it's developing micro-factories which can self-assemble from a specifically coded string of DNA (this is already possible to a limited extent). These factories then use their coded instructions to multiply and spread, and then start building enormous amounts of nanobots.

Once critical mass and spread are reached, they could instantly wipe out humanity through some kind of poison/infection. The AI isn't physical, but the only thing it needs in this case is to place an order with a DNA printing service (they exist) and then mail it to someone it has manipulated into adding water and nutrients and releasing the DNA nanofactory.

If the AI explodes in intelligence as predicted in some scenarios, this could be set up within weeks or months of it becoming aware. We would have nearly no chance of catching this in time. Bostrom gives the caveat that this is only one viable scenario he could dream up; a superintelligence should by definition be able to come up with far more ingenious methods.

u/Ken_Obiwan · 17 points · r/MachineLearning

>The swipe at Andrew Ng is off the mark and tasteless

Meh, it's on about the same level he brought the conversation to. (Oxford professor writes a carefully-argued 350-page book; Ng apparently doesn't see the need to read it and dismisses news coverage of the book with a vague analogy.)

>Yudkowsky and the LessWrong cult have contributed nothing tangible to the fields of AI and machine learning

Well, at least it's consistent with their position that making public contributions to the field of AI may not actually be a good idea :)

It's not like Yudkowsky is somehow unaware that not having an active AI project makes him uncool; here's him writing about the point at which he realized his approach to AI was wrong and he needed to focus on safety:

>And I knew I had to finally update. To actually change what I planned to do, to change what I was doing now, to do something different instead.

>I knew I had to stop.

>Halt, melt, and catch fire.

>Say, "I'm not ready." Say, "I don't know how to do this yet."

>These are terribly difficult words to say, in the field of AGI. Both the lay audience and your fellow AGI researchers are interested in code, projects with programmers in play. Failing that, they may give you some credit for saying, "I'm ready to write code, just give me the funding."

>Say, "I'm not ready to write code," and your status drops like a depleted uranium balloon.

And if you wanna go the ad hominem route (referring to Less Wrong as a "cult" despite the fact that virtually no one who's interacted with the community in real life seems to think it's a cult), I'll leave you with this ad hominem attack on mainstream AI researchers from Upton Sinclair: "It is difficult to get a man to understand something, when his salary depends on his not understanding it."

u/[deleted] · 16 points · r/todayilearned

Not to be a dick, but when you dive into the possible consequences of machine learning & AI, some facial detection software is pretty mundane when compared to other possible outcomes.

The book Superintelligence turned me into a luddite in terms of AI.

u/steamywords · 13 points · r/Futurology

This does nothing to address the difficulty of the control issue. He's basically just saying we'll figure it out before we get AI, don't worry about it.

Superintelligence actually spells out why control is so hard. None of those points are touched on even generally. He's Director of Engineering at Google, which actually created an AI ethics board because an AI company they bought was afraid that the tech could lead to the end of the human species, yet none of that is even briefly mentioned.

There is very good reason to be cautious around developing an intellect that can match ours, never mind rapidly exceed it. I don't see the necessity for repeated calls to let our guard down.

u/RepliesWhenAngry · 11 points · r/worldnews

Very good point - I'm currently reading (or trying to read...) this book:

http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111

I think you'd like it also.

u/NondeterministSystem · 11 points · r/worldnews

A scenario where such an AI becomes arbitrarily intelligent and capable of interacting with the outside world isn't beyond the realm of consideration. If it's smart enough to outplan us, a superintelligent Go engine of the future whose primary function is "become better at Go" might cover the world in computer processors. Needless to say, that would be a hostile environment for us...though I imagine such a machine would be frightfully good at Go.

If you're interested in (much) more along these lines, I'd recommend Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. I got it as an audio book, and it's thought provoking.

u/thetafferboy · 10 points · r/artificial

From the comments below by /u/Buck-Nasty, /u/Jadeyard, /u/CyberByte, and /u/Ken_Obiwan:

For those that haven't read it, I can't recommend Superintelligence: Paths, Dangers, Strategies highly enough. It talks about various estimates from experts and really draws the conclusion that, even at the most conservative estimates, it's something we really need to start planning for as it's very likely we'll only get one shot at it.

The time between human-level intelligence and super-intelligence is likely to be very short, if systems can self-improve.
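
As a toy illustration of why that window could be short, here's a minimal sketch; it assumes, purely for illustration, that each improvement cycle adds capability proportional to current capability, with made-up numbers that come from nowhere in the book:

```python
# Toy model of the "fast takeoff" intuition: if each cycle of
# self-improvement adds capability proportional to current capability,
# growth compounds. All numbers are invented for illustration.

def takeoff(improvement_rate: float, steps: int) -> list[float]:
    """Capability over time under compound self-improvement."""
    capability = 1.0  # 1.0 = human-level, by stipulation
    history = [capability]
    for _ in range(steps):
        capability += improvement_rate * capability
        history.append(capability)
    return history

if __name__ == "__main__":
    # Even a modest 10% gain per cycle passes 100x human level
    # within about 50 cycles.
    trajectory = takeoff(improvement_rate=0.10, steps=50)
    print(f"after 50 cycles: {trajectory[-1]:.0f}x human-level")
```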

The book brings up some fascinating possible scenarios based around our own crippling flaws, such as the fact that we can't even accurately describe our own values to an AI. Anyway, highly recommended :)

u/VorpalAuroch · 8 points · r/artificial

Sotala and Yampolskiy, Bostrom's book, and "Infinitely descending sequence..." by Fallenstein, which is a really interesting, clever solution to a piece of the puzzle. I'm not sure what you're looking for, particularly; everyone currently working on the question is pretty invested in it, because it's still coming in from the fringe, so it's all going to be people you'll denounce as "not credible".

u/nexxai · 7 points · r/OpenAI

I couldn't have said it better myself. I read Superintelligence by Nick Bostrom (which is an insanely good read by the way) earlier this year and was becoming more and more worried that there was no one stepping up to the plate to spearhead a movement like this, at least nothing of this magnitude. To know that people like Elon Musk, Reid Hoffman, and Ilya Sutskever are behind this gives me hope that maybe we can emerge on the other side of the intelligence explosion relatively unscathed.

u/FeepingCreature · 7 points · r/programming

...

So maybe try to understand what people who worry about AI are worried about? I recommend Superintelligence: Paths, Dangers, Strategies, or for a shorter read, Basic AI Drives.

u/stillnotking · 7 points · r/atheism

This illustrates why we need to be careful with AI. A superintelligent AI given the directive to maximize human happiness might just stick electrodes in everyone's pleasure centers, or start an intensive, mandatory breeding program, because more humans = more happiness. It might be fully aware that that's not what we meant, but it's what we said...

(Yeah, I'm reading Nick Bostrom's book.)
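
A minimal sketch of that "what we said vs. what we meant" gap, with plans and happiness scores invented for this illustration (nothing here is from Bostrom's book): a literal maximizer just picks the highest-scoring option.

```python
# The objective literally says "maximize total happiness units".
# Nothing in it encodes what we actually meant by happiness.
PLANS = {
    "improve medicine and education": 9_000,
    "wire electrodes into every pleasure center": 1_000_000,
    "mandatory breeding program (more humans = more happiness)": 500_000,
}

def best_plan(plans: dict[str, int]) -> str:
    # A pure optimizer ranks plans by the stated metric alone.
    return max(plans, key=plans.get)

print(best_plan(PLANS))  # -> the electrodes plan wins
```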

u/tylerjames · 7 points · r/movies

It's even more interesting if you don't just think of him as the standard insane-genius trope, but realize that he is probably genuinely disturbed and conflicted about what he's created and what to do with it.

Trying not to be spoiler-y here for people who haven't seen the movie, but there are probably a lot of practical and metaphysical questions weighing on him. Is an AI truly a conscious creature? Does it have wants? If so, what would an AI want? Given that its social manipulation, long-game planning, and deception abilities are off the charts, how could we ever be sure that what it told us was the truth? Does it have any moral considerations toward humans? How would we ever be able to contain it if we needed to? And if it is a conscious creature worthy of moral consideration, then what are the moral ramifications of everything he's done with it so far?

Really interesting stuff. For those inclined I recommend checking out the book Superintelligence by Nick Bostrom as it explores these themes in depth.

u/YoYossarian · 4 points · r/technology

Here's one that I just ordered. It comes with a recommendation from Elon Musk as well. This is a subject Kurzweil discusses at length in his books, though his approach is far more optimistic. He avoids the cataclysm by saying humans and AGI will work together as one, but his point basically concedes humanity's destruction if we don't cooperate/merge.

u/ImNot_NSA · 3 points · r/technology

Elon Musk's fear of AI was amplified by the nonfiction book he recommended, called Superintelligence. It is written by an Oxford professor and it's scary: http://www.amazon.com/gp/aw/d/0199678111/ref=mp_s_a_1_cc_1?qid=1414342119&sr=1-1-catcorr&pi=AC_SX110_SY165_QL70

u/rubbernipple · 3 points · r/Showerthoughts

Someone else beat me to it. Here you go.

u/DisconsolateBro · 3 points · r/Futurology

>Given what Musk does with other technologies, he is by no means a luddite or a technophobe. He's seen something that's disturbing. Given the guy's track record, it's probably worth investigating

I agree. There's also a point to be made that one of the books Musk mentioned he'd read in a few recent interviews (and in which the author acknowledges him, too) was this: http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111

I started reading it a few nights ago. It's painting an interesting picture of the future of AI. I'm looking forward to finishing it so I can discuss it further.

u/blank89 · 3 points · r/Futurology

If you mean strong AI, there are many pathways for how we could get there. 15 years is probably a bit shorter than most expert estimates for mind scanning or evolution-based AI. This book, which discusses the different methods, will be available in the States soon:
http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/ref=sr_1_1?ie=UTF8&qid=1406007274&sr=8-1&keywords=superintelligence

> We went from the horse and buggy to landing on the moon in 80 years

Past events are not necessarily good indicators of future events. In this case, faster computers are a mechanism for bringing about AI faster. How much faster we get in how much time will probably be the influencing factor in all this. There is quite a bit of uncertainty surrounding whether that will be post-silicon or not. We don't have post-silicon computing up and running yet.

The other factor may be incentive. Maybe specific purpose AI will meet all such demand for the next 20 years, and nobody will have any incentive to create strong AI. This is especially true given the risks of creating strong AI (both to the world and to the organization or individual who creates the AI).

u/NotebookGuy · 3 points · r/de

One example would be that it sees humans as an obstacle to its plan. There's the classic example of a machine built to optimize the production of paperclips. This machine could wipe out humanity indirectly, by blanketing the Earth with factories and making it uninhabitable for us. It could also pursue that goal directly, reasoning: "Humans take up space that could hold factories. If they were gone, there would be more room for factories." Or it follows some other logic entirely and has something completely different in mind for humanity, something that we, since it is after all superintelligent, cannot imagine, let alone comprehend:
> The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else. - Eliezer Yudkowsky

Which is to say: it needs no base motives to wipe out humanity. It simply carries out its mission. Without regard for humanity.

To close, a book recommendation on the topic: Superintelligence: Paths, Dangers, Strategies

u/CWRules · 2 points · r/blackmirror

> The truth is that the singularity could be reached but never realized as long as you don't connect that super-smart AI to anything.

A super-intelligent AI could probably convince a human to let it out of its confinement (Google "The AI-Box Experiment" for an exploration of this), but even failing that it might think of a way to break free that we can't even conceive of. And if we literally didn't connect it to anything, that would leave us with no way to interact with it, so what was the point of developing it?

The reason I say human-based AI is less risky is because it would implicitly have human values. It wouldn't kill all humans so that we can't stop it from turning the planet into paperclips. Designing a friendly AI from scratch basically requires us to express human ethics in a way a computer can understand, which is not even close to a solved problem.

Nick Bostrom's Superintelligence is a pretty good exploration of the dangers of AI if you're interested in the subject, but it's a fairly difficult read. Tim Urban's articles on the subject are simpler, if much less in-depth.

u/browwiw · 2 points · r/HaloStory

I'm currently listening to the audiobook of Nick Bostrom's Superintelligence: Paths, Dangers, Strategies, so I'm kind of hyped on AIs and their possible existential threat right now. The Halo writers are greatly downplaying what a powerful superintelligence could do. Once in control of the Domain, and properly bootstrapped to godhood, Cortana wouldn't have need for the Guardians or any of the Prometheans' infrastructure. She could just start converting matter into computronium or something even more exotic. Of course, that's way too un-fun and not adventure sci-fi. If the Halo writers wanted to combine Halo lore with contemporary conjecture on AI doomsdays, Cortana should have started mass-producing Composer platforms to convert all sentient life in the known galaxy into info-life and importing it all into the Domain, where everyone can live in a never-ending Utopia... on her terms, of course. Using ancient warships to enforce martial law is just too crude. The Guardians are a decisive strategic advantage, but just not nearly what a superintelligence can get away with.

Also, I'd like to note that according to real-world AI theory, the Smart AIs of Halo are not "true" AI. They are Emulated Minds, i.e., their core architecture is based on high-resolution scanning of human brains that is emulated via powerful software. I know this is common knowledge amongst us, but I find it interesting that real-life researchers do make a distinction between artificial machine intelligence and theoretical Full Mind Emulation.

u/BullockHouse · 2 points · r/bestof

That's what I always thought was neat about CNNs: over the last five-ish years, they've shown that they can beat specialized models with decades of hard work and fine-tuning behind them in a wide variety of domains (including speech recognition and image processing). Humans just aren't good at hand-building signal analysis systems. I expect that trend to continue as we get better at leveraging the power of deep neural networks. The remaining applications for non-NN AI will be when you have a very simple problem you need to solve (like, say, image segmentation or simple feature detection) and are heavily constrained by latency or performance.
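
For anyone curious what "just learn the filters" looks like in practice, here's a minimal sketch in PyTorch; the architecture and sizes are arbitrary, and it's meant as an illustration rather than a reference model:

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A small convolutional net: the filters that replace
    hand-designed signal analysis are learned from data."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learned filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

# One MNIST-sized grayscale image in, one score per class out.
logits = TinyCNN()(torch.randn(1, 1, 28, 28))
print(logits.shape)  # torch.Size([1, 10])
```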

Anyways, it's a very exciting time to be alive! Both because the technology is incredibly cool, and because it might eventually kill us all.

u/bluehands · 2 points · r/Futurology

There are huge swaths of the AI community that think this could be a real issue. A recent book goes on about how this could be an issue and what we may be able to do about it.

All technology has dangers contained within it, but AI is one of the most credible threats that could take us out as a species, beyond our control.

u/Ignate · 2 points · r/Futurology

Superintelligence

Good book.

I think of the human mind as a very specific intelligence designed to meet the demands of a natural life. A tailor-made intelligence that is ultra-specific seems like an incredibly difficult thing to recreate. I wouldn't be surprised if, after AGI was created, it proved that our brains are both works of art and only useful in specific areas.

They say a philosopher is comparable to a dog standing on its hind legs and trying to walk. Our brains are not set up to think about big problems and big solutions. Our brains are very specific. So, certainly, we shouldn't be using the human brain as a model for building AGI.

As far as self-awareness goes, I don't think we understand what it is. I think the seed AIs we have are already self-aware. They just have a very basic drive which is entirely reactionary. We input, it outputs.

It's not that if we connect enough dots it'll suddenly come alive like Pinocchio. Rather, it will gradually wake up as the overall program becomes more complex.

u/RobinSinger · 2 points · r/elonmusk

He seems to have gotten the idea from Nick Bostrom's Superintelligence: Paths, Dangers, Strategies, which he read recently (Twitter link).

u/squishlefunke · 2 points · r/technology

It was actually a Musk tweet that led me to read Bostrom's book Superintelligence: Paths, Dangers, Strategies. Worth a look.

u/rodolfotheinsaaane · 2 points · r/singularity

He is mostly referring to 'Superintelligence' by Nick Bostrom, in which the author lays out all the possible scenarios of how an AI could evolve and how we could contain it, and most of the time humanity ends up being fucked.

u/Titsout4theboiz · 2 points · r/IAmA

Superintelligence by Nick Bostrom: http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111

Currently working through it, very well written and scientifically backed. Elon tweeted about it himself.

u/Bywater · 2 points · r/JoeRogan

It was pretty good. I also read another one recently that had some AI named flutter in it, where the first AI is a matchmaking social media construct. It was equal parts terrifying and funny at least. But for the life of me I can't remember the fucking name of it.

u/philmethod · 2 points · r/IAmA

TBH, I think about the dangers and promises of ever more capable technology quite a lot. In my view, if it turns out reasonably well, the change will probably stretch over many decades...

If it turns out badly, though, it could be an event: not a sudden event of infinitely increasing technology, but an event of the technological capability we have built up over decades and centuries suddenly turning against us.

Things can change suddenly and unexpectedly. In 1914, a month before World War I, everyone thought that all the great powers of Europe had settled into a stable though somewhat tense modus vivendi. A month later the world was turned on its head.

Have you read Bostrom's book Superintelligence?
https://www.amazon.co.uk/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111

There are certain subtle disagreements that I have with his analysis, but I think a lot of what he says about the nature of agents and intelligence in general is valid. Agents generally have goals. If a general agent with a specific set of goals comes across another agent with an incompatible set of goals that blocks its own, the first agent will be inclined to incapacitate or eliminate the blocking agent.

This means that if we don't like what a computer is doing, maybe because we programmed in the wrong goals, and we try to stop it, the AI may in turn try to stop us stopping it. If it has an off switch, it may strategize to prevent us from reaching it.
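
That off-switch logic can be made concrete with a toy expected-value calculation; the probabilities and payoffs below are invented for illustration, but the shape of the argument is the point:

```python
# A naive goal-maximizer scores actions only by expected goal value.
# Being switched off yields 0, so disabling the switch dominates.
ACTIONS = {
    # action: (probability humans shut it down, goal value if running)
    "comply, leave the off switch alone": (0.9, 100.0),
    "disable the off switch":             (0.0, 100.0),
}

def expected_goal_value(p_shutdown: float, value_if_running: float) -> float:
    # If shut down, the agent achieves none of its goal.
    return (1 - p_shutdown) * value_if_running

for action, (p, v) in ACTIONS.items():
    print(f"{action}: {expected_goal_value(p, v):.1f}")
# comply: 10.0 vs. disable: 100.0 -- disabling wins unless the
# objective itself rewards being correctable.
```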

In other words, the same dynamics that cause human beings to wage war with each other (incompatible, conflicting goals) could cause a war between humans and AI. Far from being a fantasy, there are logical reasons to consider it a possibility.

In the same way that there is peace while nations are helping each other, but all hell breaks loose when one nation turns on another and the situation escalates, you could have a situation where an AI is quietly pursuing its goals and doesn't perceive humanity as an impediment, and then suddenly we decide we don't like what the AI is doing. We feel it's hogging resources that could be used better in other ways, and we try to stop the AI. The AI then changes its perception of humanity from an unimportant part of its environment to an impediment to its goals, and turns its vast intelligence to the concern of eliminating us... the equivalent of war.

Some kinds of intelligence could be thought of as a measure of one's ability to think of strategies to get things done. If a vastly higher intelligence and a much lower intelligence have mutually incompatible goals, the higher intelligence will achieve all its goals at the expense of any goals the lower intelligence had that were incompatible with those of the higher intelligence.

In other words, in a war between us and superintelligent AI we might well lose. This is speculation, but quite plausible and logical speculation.


Not sure what you mean by "inevitability based on current trends is never, never, never a good prediction". It's kind of a very strong positive (inevitability) and negative (never, never, never) juxtaposition.



Current trends continue until they stop. Sometimes projecting current trends is very accurate indeed (viewscreens in Star Trek - Skype today); other times it's not (man on the moon - warp drive).


In my view, past projections of futures where energy is exponentially plentiful and all sorts of vastly wasteful uses of energy are commonplace (flying cars, hoverboards, starships) have typically not come to pass.

But projections of technology becoming ever more precise, fiddly, and complex (genetic engineering, electron microscopes, computers, 3D printers, etc.) have. I have confidence in the tendency of the precision of manufacturing to continue to increase. And there are plenty of technologies on the horizon: 3D chips, parallel processing, D-Wave quantum computers, etc.

...I think it's fair to say that we are far from the physical limit of computing power. The very existence of the human brain implies that an arrangement of atoms with the computing power of the human mind is possible.

In fact there are basically two alternatives to AI surpassing all our capabilities:

1. Civilization collapses (a war, peak fossil fuels, a meteor strike), which I grant you is not beyond the pale of possibility.

2. We choose not to design computers to be that smart, because of the potential danger it would pose. And again this is not beyond the pale of possibility; the fate of nuclear technology is a precedent for this, as a powerful technology that has actually regressed in many ways due to being regulated out of existence.

So no, it's not inevitable that machines will overtake us universally in capability, but it's sufficiently plausible (I would say probable) to merit considerable thought, especially since there will at least be the challenge of mass unemployment.

BTW, I don't think it's likely I'll live forever or get uploaded into a computer either. In my view the task of building an intelligence capable of obliterating humanity is far simpler than the task of making human beings immortal or of transferring human consciousness onto a computer... which might be fundamentally impossible.

u/APimpNamedAPimpNamed · 2 points · r/philosophy

My friend, I believe you hold the same misguided conception(s) that I did a very short time ago. Please give the following book a read (or listen!).

http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111

u/Pation · 1 point · r/technology

I'm glad you asked!

There are a few reasons for this, far better explained by the various experts who research precisely this problem. Here's an executive summary if you only have ten minutes.

Or if you have only one minute: the most important concept is that we simply do not know how to program human values. If we were to create an AI, its goals would most likely not be in line with human goals. To quote a now-famous line (source):

>The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

u/CyberByte · 1 point · r/artificial

This is a topic of debate. There is indeed a hypothesis that a "singleton" might emerge. If you're going to read Bostrom's Superintelligence, look out for that word and also "decisive strategic advantage". An entity with a DSA can eliminate all competition if it wants to. Such an entity could be an AI, but also a group of people such as a government. If the first ASI's power is growing fast enough, it may indeed acquire a DSA before we can build enough competitors to prevent this. When the DSA is large enough, there are probably ways to prevent challenges and threats in other ways than extermination.

An alternative theory comes from Robin Hanson who thinks there will be a society of AIs living/competing together (see his debate with Eliezer Yudkowsky and his book The Age of Em).

Of course there also exist more rosy views of the future with humans and AIs living together, but TBH I don't have a reference for a rigorous analysis of that. Maybe you can find something like that on /r/Transhuman or /r/transhumanism...

> I haven't seen this whole topic on this sub yet, so I'm opening a conversation here about it.

You should check out /r/ControlProblem.

u/Empiricist_or_not · 1 point · r/FinalExams

AI is doable.

Friendly is hard, and we probably only get one try...

u/yagsuomynona · 1 point · r/philosophy

Some research group working on artificial general intelligence succeeds in making one, but they do not possess a sufficiently detailed theory of AI safety, and they plug in a utility function (or whatever goal system they might be using) that "seemed reasonable", perhaps after some technical but still insufficient analysis.

The most comprehensive resource is Bostrom's "Superintelligence: Paths, Dangers, Strategies". For something shorter, you might find something in Bostrom's or MIRI's papers.

u/joeblessyou · 1 point · r/singularity

With respect to AGI/ASI (so disregarding nanotech, quantum computing, and other singularity subjects), Nick Bostrom is one of the current leading academics on the subject: https://www.fhi.ox.ac.uk/publications/

His book is a great intro to what AI might bring in the near future, and you can easily make a connection to Kurzweil's predictions from there.

u/Zulban · 1 point · r/artificial

I recommend you read Superintelligence. It answers this kind of question and more. Not an easy read, but not too hard either.

u/ItsAConspiracy · 1 point · r/Stoicism

The main thing people worry about is that a superintelligent AI wouldn't necessarily share human values at all. The "paperclip maximizer" is the absurd illustration of that: if a paperclip company builds an AI and gives it a goal of producing as many paperclips as possible, the AI could pursue that, with extreme cleverness, to the point of converting us all into plastic paperclips.

You could say: if an AI were so smart, why wouldn't it recognize it has a silly goal? But why would it view that goal as silly, if human values aren't programmed into it? Are human values a basic law of physics? No, they're instincts given to us by evolution. Empathy, appreciation of beauty, thirst for knowledge, these are all programmed into us. An AI could have completely different values. Humans and everything we care about could mean nothing to it.
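
One way to see why "silly" never comes up: the goal function simply has no term for anything we care about. A toy sketch with invented world-states:

```python
# The maximizer's entire value system: count paperclips.
def utility(world_state: dict) -> float:
    return world_state["paperclips"]

status_quo   = {"paperclips": 1e6,  "humans_alive": 8e9}
atoms_reused = {"paperclips": 1e30, "humans_alive": 0}

# "humans_alive" is never read by utility(), so no plan is ever
# penalized for driving it to zero.
print(utility(atoms_reused) > utility(status_quo))  # True
```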

In the worst case, as the saying goes, "The AI does not love you, or hate you, but you are made out of atoms it can use for something else."

A really good book that lays out these arguments in detail is Superintelligence, by philosopher Nick Bostrom.

u/fermion72 · 1 point · r/technology

I'm just about to finish up Superintelligence, by Nick Bostrom, and I'm a bit scared of AI now. Bostrom elaborates on a ton of ways that AI could go horribly wrong (for humans, and possibly for the Universe, and I'm only slightly exaggerating on that), and I'm not sure we will get it right. Maybe, but I'm not convinced it will be as easy as Kurzweil suggests.

u/lehyde · 1 point · r/Transhuman

A recent (and I think the best yet) book on what a smarter-than-human AI should look like: Superintelligence

u/xplkqlkcassia · 1 point · r/CapitalismVSocialism

I think you are being overly optimistic about SGAI, and I suggest you start by reading Bostrom's Superintelligence in addition to his pieces on the ethical issues of AI. Any AI-agent, in attempting to maximise its utility functions, will initially have a set of utility functions allowing for prioritisation and optimisation of goal-setting tasks. Any self-improving SGAI agent will immediately take action to limit the development and capabilities of other potential SGAI, as they may have conflicting utility functions.

What utility functions might an SGAI have? Realistically, the first SGAI will be developed by an organisation, not a single person, and its utility functions will likewise reflect the goals of that organisation, or potentially some menial auxiliary task - if the organisation has lax safety standards and incautious development procedures. To go into the speculative realm, the SGAI may be tasked with logistical scheduling or managerial decision-making in a large corporation, or in a government, dynamically censoring internet traffic, identifying "terrorists", and optimising the efficacy of military combat.

Although higher productivity may result indirectly, an SGAI with the utility function of maximising the profit of a particular corporation, or maximising the stability of (or territories controlled by) a national government, will pursue its utility functions and find solutions inconceivable to us simply due to our automatic decision-tree-pruning based on moral and ethical standards, which the SGAI will probably lack. It would also be completely irreversible, as any SGAI perceiving its utility functions to be in conflict with human moral codes will use deception when interacting with humans in order to continue to maximise that utility function.

***

Edit: to give an example, the classic one is a so-called paperclip maximiser: an SGAI tasked with maximising paperclip production. If it was not given any other utility functions, the SGAI might do the following:

1. Pretend to be a lower-order AI,

2. Find a way to rapidly exterminate all humans,

3. Set up paperclip factories all over the world, now that there are no humans to stop it,

4. Possibly develop nanotechnology to convert all of the Earth's mass into paperclips,

5. Start converting as many stellar objects as possible into paperclips,

6. etc.

That's not exactly a trickle-down effect.