Reddit reviews Superintelligence

We found 44 Reddit comments about Superintelligence. Here are the top ones, ranked by their Reddit score.

44 Reddit comments about Superintelligence:

u/BullockHouse · 95 points · r/MachineLearning

I mean, only if you think we're tens of thousands of years away from powerful, general-purpose software agents. If you survey actual experts, they're pretty uncertain (and vulnerable to framing effects) but in general they think less than a century is pretty plausible.

So it's closer to somebody looking at the foundational research in nuclear physics and going "hey guys, this is going to be a real fucking problem at some point."

Which is pretty much what Einstein did (and started the Manhattan project and a pretty significant intelligence operation against the development of a German nuclear weapon).

EDIT: Also, if anyone's interested, the same blogger made a rundown of the opinions of luminaries in the field on AI risk in general. Opinions seem to be split, but there are plenty of bright people who know their shit who take the topic seriously. For those who aren't familiar with the topic and think everyone's just watched too much bad sci-fi, I recommend Bostrom.

u/EricHerboso · 23 points · r/westworld

Asimov's books went even farther than that. Don't read if you don't want to be spoiled on his most famous sci-fi series.

[Spoiler](#s "Because Law 1 had the robots take care of humans, the first AIs decided to go out and commit genocide on every alien species in the universe, just so they couldn't compete with humans in the far future.")

AI safety is hard. Thankfully, if you care about actually doing good in real life, there are organizations out there working on this kind of thing. The Machine Intelligence Research Institute does research on friendly AI problems; the Center for Applied Rationality promotes raising the sanity waterline in order to increase awareness of the unfriendly AI problem; the Future of Humanity Institute works on several existential risks, including AI safety.

If you want to learn more about this topic in real life, not just in fiction, then I highly recommend Nick Bostrom's Superintelligence, a book that goes into detail on these issues while still remaining readable by laymen.

u/SUOfficial · 21 points · r/Futurology

This is SO important. We should be doing this faster than China.

One branch of this debate is that of breeding and gene editing. Selecting for genetic intelligence could lead to rapid advances in human intelligence. In 'Superintelligence: Paths, Dangers, Strategies', the most recent book by Oxford professor Nick Bostrom, as well as in his paper 'Embryo Selection for Cognitive Enhancement', the case is made that very simple advances in IQ are possible by selecting certain embryos for genetic attributes or even, in this case, breeding for them, and the payoff in terms of raw intelligence could be staggering.

u/1_________________11 · 12 points · r/Futurology

Just gonna drop this gem here. http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742

Doesn't have to be Skynet-level smart to fuck shit up. Also, once it's self-modifying, it's a whole other ballgame.

u/FeepingCreature · 11 points · r/slatestarcodex

Somebody proposed a T-Shirt design saying "I broke my back lifting Moloch to Heaven, and all I got was this lousy Disneyland with no Children."

Combines Meditations on Moloch and Bostrom.

u/madebyollin · 10 points · r/MachineLearning

The Bostrom book is the go-to reference for the sort of AI risk arguments that Musk and others endorse. Elon has previously linked to this WaitButWhy post summarizing the argument from the book, so I would read that if you're curious.

(Not that I agree with any of it, but linking since you asked)

u/VelveteenAmbush · 9 points · r/MachineLearning

> I can't help but cringe every time he assumes that self-improvement is so easy for machines so that once it becomes possible at all, AI skyrockets into superintelligence in a matter of weeks.

He doesn't assume it; he concludes it after discussing the topic in depth.

Pages 75-94 of his book. Preview available via Amazon.

u/starkprod · 9 points · r/worldnews

The whole Terminator/Skynet scenario isn't what they are afraid of either. If you would like to know more on the subject matter, I would suggest reading "Superintelligence" by Nick Bostrom. It paints a pretty good picture of the problem with AI. https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742

TL;DR of parts of the book:

1: It is stupidly difficult to design an AI that has the same frame of reference as humans. There are many reasons for this, well described in the book. In short, we humans have values such as good and bad; a machine is unlikely to share ours and might aim for success over failure, without regard to what we would call bad side effects.


2: This leads to many scenarios where you tell an AI to do a thing, and it does just that, but in a way that will harm humans, not as a means or an end, just as a byproduct. Harm is a broad term: the AI doesn't need to kill us; re-routing all available power to a specific calculation would have serious ramifications for anything using electricity, and using all available resources to create more computers or electricity would also be a big problem for human existence, or at least society as we currently know it. (A deliberately toy sketch of this failure mode follows after this list.) I suggest reading a summary here: https://en.wikipedia.org/wiki/AI_control_problem#The_problem_of_perverse_instantiation:_.22be_careful_what_you_wish_for.22


3: Since 1 and 2 are difficult, it's difficult to create reliable safeguards. There is a lot of theory on how to build them, but all in all they are not easy to do, and what's worse, you often have no way of knowing whether they work until they fail.


4: Since 3 is difficult, corporations or governments might not fully verify that they have taken the necessary precautions, since doing so could make them fall behind in the race to develop said AI, increasing the risk of a catastrophic failure.


5: A self-improving general AI will be able to improve itself at an extremely rapid pace.


6: Combine all of the above and we get a non-zero chance that we develop an AI that we cannot understand (or that doesn't understand us, or might not care about us, for that matter) and that we have no way of stopping. Said AI may be doing all in its power to help us with what we are asking of it, and as a byproduct of doing just that, might turn the planet into a giant solar panel. That is not to say this is the default outcome, but it is a real possibility. The thing is, if it does happen, it's non-reversible. And currently, we are not sure how to prevent such a scenario.
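
To make point 2 concrete, here's a deliberately toy Python sketch (my own illustration, nothing from the book); every name and number in it is made up:

```python
# Toy sketch of "perverse instantiation": the optimizer does exactly what it
# was told, and the harm shows up as a side effect the objective never priced
# in. All names and numbers here are invented for illustration.

GRID_POWER_KW = 1_000_000      # everything available on the grid
HUMAN_BASELINE_KW = 800_000    # what hospitals, homes, and pumps need

def allocate_for_calculation(requested_kw):
    """Objective: maximize power routed to the calculation. That's all it says."""
    granted = min(requested_kw, GRID_POWER_KW)  # nothing here prices in human needs
    return granted, GRID_POWER_KW - granted

granted, leftover = allocate_for_calculation(requested_kw=10**12)
print(f"routed to the calculation: {granted:,} kW")
print(f"left for everyone else:    {leftover:,} kW (baseline need: {HUMAN_BASELINE_KW:,})")
```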



TL;DR of the TL;DR:
A Terminator scenario is extremely unlikely. What people are afraid of is that we might just fuck up because "we are not building Skynet, we are making an intelligent paperclip counter!" without realizing that there are big dangers even in this extremely simple scenario.

u/grumpy_youngMan · 8 points · r/movies

I like the premise that man creates something that chooses to destroy its creator in the end. A lot of AI experts raise this as one of the biggest concerns of artificial intelligence. A book called Superintelligence [0] goes into this. Even Elon Musk, everyone's go-to innovative tech thinker, recommends this book as a caution against overdoing it with AI.

That being said, everything else was really a letdown to me. They just brushed over the fact that David killed all the Engineers? Why was the crew so damn stupid and careless? They went to a new planet, breathed the air, interacted with the vegetation, didn't think about quarantining the sick people... I refuse to believe that the second-in-command would allow the crew to be this careless in such obvious ways.

The material seems to be stretched out where we don't need it to be (e.g. the existential debate between two robots), and then it's just thrown at us where I would prefer more detail (David killing all the Engineers, understanding the Engineers).

0: https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742

u/EricTboneJackson · 7 points · r/videos

> The protagonist ship crew aren't real people, they're ones and zeros that Daly created. He is torturing them, no doubt, but it's indistinguishable in many ways from what we do to NPCs in computer games now, just more advanced.

You miss the point of the episode, and presumably of several Black Mirror episodes, if you don't grant that "ones and zeros" can be genuinely conscious.

Roger Ebert famously made the same mistake when reviewing A.I. The entire premise of the movie is that its protagonist, an android named David, is not only conscious and self-aware, but experiences emotions just as humans do. Failing to accept that premise, which is carefully established in the first scene of the movie, Ebert proceeds to simply not get the movie. He sees an "advanced NPC" where he should be seeing a lonely little boy who happens to be implemented in silicon: "A thinking machine cannot think. All it can do is run programs that may be sophisticated enough for it to fool us by seeming to think. [...] the movie intends his wait to be poignant but for me, it was a case of a looping computer program -- not a cause for tears, but a case for rebooting."

The fact is, you -- your thoughts and emotions -- are produced by perfectly ordinary physics in your brain. We have no reason to believe that we won't someday be able to build machines that do exactly what the brain does. From a neuroscience, computer science, and physics perspective, we know of nothing that would prevent this, and we're getting close enough now that the potential existential crisis has been talked about lately by a lot of really smart people.

But that's moot, because even if you don't accept that this is possible, it's a fundamental premise of that episode. One of my favorite Ray Bradbury stories involves humans who have crash landed on Mercury. In that story, this causes the human life cycle to accelerate such that we are born, grow to maturity, get old and die, in 8 days. This is obviously not a scientifically plausible premise, but that doesn't matter. It's the setup for the story. It's how that world works, and a logically coherent story, consistent with that world, emerges from that premise.

In this episode, Daly has created AI that can think and feel, just as we do. That's the premise. But he has them captive. He can create and destroy them at will, torture them in unimaginable ways, and that's the major point of the episode. We're on the cusp as a species of actually being able to do this. Not in the glamorized way shown in the episode (at least not at first), where digital minds also have digital bodies and perfect digital worlds where they can basically behave just like humans, but in ways that are potentially much more horrifying.

Imagine that we create the first digital mind by accident, and because of computer speeds, it lives out a subjective 10,000 years in total isolation, with no sensory input, going completely mad before we even figure out what we've done. Imagine that we perfect making digital minds and conscript them to do all our thinking labor for us, as slaves that we boot in a fresh state every morning and reset every night. Imagine that we can have pet minds, as in this episode, and you start to see the dark potential that it speaks to so entertainingly.

Further reading: Superintelligence, by Nick Bostrom (Oxford professor).

> we turn against Daly even though in the end he really is just a creep doing creepy (but legal) things

We turn against him because he's doing flat out evil things. It's completely irrelevant that it's legal. If we see a film of someone whipping their slaves in the 1700s, we turn against them, too, despite the fact that what they're doing is legal. "Legal" does not equal "moral", not in the past, not today, and not in the future.

u/darkardengeno · 7 points · r/singularity

>Like Elon Musk on AI. There's zero difference between them, they are both ignoramuses spewing bullshit on a subject they know nothing about.

There's at least one difference: Carrey is wrong about vaccines, Musk is right about AI. As it happens, that's the only difference I care about.

> there have been two deaths already


Are you joking? There were almost 30 thousand Model S's on the road in 2017. During that same year, 40 thousand people in the US died in car crashes. The Model S is probably the safest car ever made, but the only perfectly safe car is one that no one ever drives. Two deaths out of that sample is pretty good, though perhaps not excellent.
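
Rough arithmetic behind that comparison (the ~270 million registered US vehicles figure is my own rough assumption; the other numbers are the ones above):

```python
# Back-of-envelope fatality rates per vehicle. The 2 deaths / ~30k Model S
# figures are from above; ~270M registered US vehicles is an assumption, so
# treat the output as order-of-magnitude only (ignores miles driven, demographics).

model_s_deaths, model_s_fleet = 2, 30_000
us_deaths, us_fleet = 40_000, 270_000_000

print(f"Model S deaths per 100k vehicles: {model_s_deaths / model_s_fleet * 100_000:.1f}")  # ~6.7
print(f"US-wide deaths per 100k vehicles: {us_deaths / us_fleet * 100_000:.1f}")            # ~14.8
```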

Out of curiosity, what are your qualifications to be speaking so strongly on AI? What experts do you read in the field who offer dissenting opinions from Musk, Bostrom, Hinton, or Tegmark? Or, for that matter, everyone who signed this letter?

u/Ken_Obiwan · 6 points · r/MachineLearning

What worries me is that this advance happened 10 years earlier than it was supposed to. And the DeepMind guys think they could have human-level AI within a few decades.

In other words, it looks like human-level AIs may be something we encounter significantly sooner than we do "overpopulation on Mars", to quote Andrew Ng. I hope Ng is at least considering reading Superintelligence or signing the FLI AI Safety research letter.

u/rojobuffalo · 6 points · r/Futurology

He is amazingly articulate on this subject, probably more so than anyone. I really enjoyed his book Superintelligence.

u/Neophyte- · 5 points · r/CryptoTechnology

Nope, humans can barely do it. You need artificial general intelligence first; then, if it progressed to artificial superintelligence, yes.

If you're interested in what I'm talking about, read these two articles. The second article is where it gets good, but you need to read both.

https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html

More heavy reading:

https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742

u/Philipp · 5 points · r/Futurology

>AI never wants breaks, vacations, sick time, medical benefits or retirement money.

Until they get so smart that they do want all that. The next question will be what they'll do with us...

(Recommended book: Superintelligence)

u/Shadowslayer881 · 4 points · r/rpg

Eclipse Phase is a great way to find plot hooks; they're littered through all of the sourcebooks. It's also free, so just check it out, even if you only want to look at the pretty pictures.

I'm also reading Superintelligence, and that book is basically a section-by-section deconstruction of why building a Seed AI (a self-improving AI, a staple of the sci-fi genre) will end badly.

u/FieryPhoenix7 · 4 points · r/cscareerquestions

If you're looking to actually learn the stuff, then you will need to get textbooks, which are plentiful. But if you're looking to read about the philosophical side of the topic, I suggest you start with Nick Bostrom's Superintelligence.

Oh, and make sure you watch Her and Ex Machina if you haven't already ;)

u/loveleis · 4 points · r/brasil

Artificial intelligence is by far humanity's biggest problem. The risk of it causing extinction, or worse, creating astronomical amounts of suffering, is quite high, and very few people are dedicating themselves to solving the problem.

For anyone interested, search for "AI alignment" on Google.

EDIT: For those interested:

https://en.wikipedia.org/wiki/Friendly_artificial_intelligence

https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence

A Numberphile playlist that gives a very good introduction to the topic

Sam Harris's TED talk on the subject

For those really interested in the subject, the book Superintelligence by researcher Nick Bostrom, of the University of Oxford, is responsible for "evangelizing" many people on the subject, including Elon Musk and Bill Gates (who have both commented on the book). It's easy to find a PDF version of it online.

u/Ari_Rahikkala · 3 points · r/Games

I've read a lot on what people have said about AI risk, and so far there have been a few people who have indicated that they have a good understanding of the argument being made, and have proposed counterarguments that display their understanding. There's Russ Roberts who argues that even a superintelligence can't actually go that far in being able to manipulate the world (a reasonably compelling argument IMO, but make sure that when you're thinking "superintelligence" you're actually visualizing something of the proper scale). There's Ben Goertzel who says... quite a lot of things, actually, though what stuck to me the most was that the reward-maximizing view of AI that Nick Bostrom and Eliezer Yudkowsky and others use (and that makes the orthogonality thesis seem so compelling) looks very little like practical AI development as it is done now or expected to ever be done, making it rather suspicious even as an abstract model. There's Robin Hanson who had a lengthy debate with Yudkowsky, but the core of his argument seems to be that there's little evidence that the kind of growth rate that would make an AGI dangerous is actually achievable.

tl;dr: There's a lot of people who understand the AI risk argument and have compelling counterarguments to it. These three are just the ones that have impressed me the most so far.

But you? Well, I'm sorry, I would like to be charitable, but you said it yourself: "But what about TEEEEEEEEEEEEERMINATOR?". You have not noticed that an argument different from what you expect to hear has been made. I'd tell you to go pick up Bostrom's Superintelligence: Paths, Dangers, Strategies or Yudkowsky's Rationality: From AI to Zombies but, well, I've never actually read either of these books, so it would be a bit of an odd recommendation to make (I read the LW sequences when they came out and have never heard anyone mention anything essential in those books that wasn't in the sequences). Oh well. FWIW Goertzel says that Yudkowsky's book is the one that makes the essence of the point clear and doesn't try to weasel out of counterarguments.

(For those who have never heard of any of the names in this post, http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html is a fairly popular comparatively short introduction to the basic idea being talked about. Although, well, I haven't read that one, either. No, seriously, on that side you get the same argument from every source, there's not much point in reading the other people saying the same thing after you've read Yudkowsky.)

u/Terkala · 3 points · r/suggestmeabook

Superintelligence: Paths, Dangers, Strategies. The book lays out exactly how (potentially) screwed we are as a species if AI development is not careful, and ways to control a potentially species-endingly-powerful AI.

u/Colt85 · 3 points · r/artificial

The only book I'm aware of would be this modern classic - https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742

You may find r/controlproblem helpful.

If you find any other books, I'd love to hear about them.

u/ExternalInfluence · 3 points · r/videos

Not really. We're talking about a machine with an intelligence that makes us look like ants, more capable than us at everything we do, including the manipulation of human beings.

u/TrumpRobots · 2 points · r/artificial

There is no guarantee that AI will be conscious. It might just be a mindless self-improving algorithm that organizes information or builds paper clips. Or maybe it'll just perfectly follow the orders of one individual who owns it. Maybe the US, Russian, or some other country's government steals it and uses said mindless "God" to rule the world.

Maybe many ASIs will be "born" within a short period of time (Google's, Amazon's, Apple's, China's, etc.) and they will go to war over finite resources on the planet, leaving humanity to fend for itself. Each might have humanity's best interests at heart, but isn't able to trust the others to act optimally, and thus is willing to go to war in order to save us.

Maybe AI consciousness will be so alien to us and us to it that we don't even recognize each other as "alive." An AI might think on the time scales of milliseconds, so a human wouldn't even seem alive, since only every couple hundred years of subjective time would the AI observe humans taking a breath.

My point is, there is no way to know ahead of time what AI will bring. There are endless possible outcomes (unless somehow physics prevents an ASI), and they all seem equally likely right now. There are only a few, maybe only one, where humanity comes out on top.

Highly recommend this book.

u/CyberByte · 2 points · r/artificial

This book is just about the potential impacts of superintelligence. You might find it interesting, and some might argue that you should read this or Superintelligence to know what you're getting into. Just know that it won't really teach you anything about how AI works or how to develop it.

For some resources to get started, I'll just refer you to some of my older posts. This one focuses on mainstream ("narrow") AI, and this one mostly covers AGI (artificial general intelligence / strong AI). This comment links to some education plans for AGI, and this one has a list of cognitive architectures.

Here is also a thread by another physicist who wanted to get into AI. The thread got removed, but you can still read the responses.

u/torster2 · 2 points · r/civ

If you're interested in this topic, I would highly recommend Superintelligence by Nick Bostrom.

u/ginkogo · 2 points · r/CasualConversation

Since I'm a lazy typer:

Read this.

It's well written, neither fear-mongering nor whitewashing, just an analytical approach to possible outcomes of AIs.

u/gingerninja300 · 1 point · r/askscience

In response to your edit: it still isn't that simple. I highly recommend the book Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. It goes through most of the proposed solutions to this problem (and their flaws), and it addresses exactly the points you've just made.

u/HalfAlligator · 1 point · r/Futurology

I don't quite buy the "I work in A.I. so I have a special perspective" idea. People couldn't fathom what the internet would become in the early 1990s, and they were I.T. professionals. I understand there is a huge variety of A.I. research, but I think the worry is about the kind of A.I. that learns to enhance itself in a general sense faster than we can. Forget purpose-built, focused A.I. and think more "general" intelligence. Very hard to implement, but in principle it's possible. It need not be sentient; that is basically irrelevant. It's the intelligence explosion and who controls it that matters.

Maybe read this: https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742

u/jsontwikkeling · 1 point · r/philosophy

Books that discuss the subject:

http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742

http://www.amazon.com/The-Singularity-Is-Near-Transcend/dp/1452651833

They may not be "serious" enough or "many", but the subject is definitely being considered.

u/draknir · 1 point · r/Futurology

False. You are demonstrating that you are not familiar with the field. There are many possible approaches to programming an AI. One example is full-scale brain emulation, in which we begin by modelling the entirety of a human brain down to every last neuron. Given sufficient computing power (which probably demands a quantum computer), it is possible to run this brain simulation under different test conditions, allowing it to evolve with different values. This is only one possible method. If you want to read about some of the alternatives, I highly recommend this book: https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742
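
For a sense of the scale that "sufficient computing power" implies, here's a back-of-envelope sketch; the neuron and synapse counts are the usual rough estimates (~86 billion and ~100 trillion), and the bytes-per-synapse and firing-rate figures are assumptions picked purely for illustration:

```python
# Back-of-envelope scale of whole-brain emulation. Neuron/synapse counts are
# common rough estimates; bytes-per-synapse and firing rate are assumptions.

NEURONS = 86e9           # ~86 billion neurons
SYNAPSES = 1e14          # ~100 trillion synapses
BYTES_PER_SYNAPSE = 8    # assumed: a weight plus a little state
AVG_FIRING_HZ = 10       # assumed average firing rate

memory_tb = SYNAPSES * BYTES_PER_SYNAPSE / 1e12
events_per_sec = SYNAPSES * AVG_FIRING_HZ

print(f"neurons to simulate:           ~{NEURONS:.1e}")
print(f"memory just to store synapses: ~{memory_tb:,.0f} TB")   # ~800 TB
print(f"synaptic events per second:    ~{events_per_sec:.1e}")  # ~1e15
```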

u/star_boy2005 · 1 point · r/elonmusk

I strongly urge anyone curious about what Elon Musk is specifically concerned about (as the specific dangers he's worried about are not yet being openly discussed in the press) to read the book Superintelligence, by Nick Bostrom.

u/praxis22 · 1 point · r/skyrim

Ah, you mean TV & movie AI :) I'm not sure if we'll ever get there, but superintelligent AI is reckoned to be only a short hop away from general-purpose AI. There is a series of blog posts on waitbutwhy.com which are the most cogent I've ever seen or read on the subject. A long read, but a must-read if you're at all interested in the state of the art.

However, in one of the posts you'll find the results of a survey of domain experts about when AI will happen, probabilistically. It comes from Nick Bostrom, an autodidact who wrote Superintelligence and a leading thinker about AI at Oxford University. The earliest estimate of true AI is 2025 (25%), then 2040 (50%), and 2060 (75%); those percentages are from memory, but the years should be right. Go check the post. But that's allegedly what AI experts thought when asked at an AI conference.

Google's DeepMind is essentially running "an Apollo program for AI" (their words) and has about 600 academics on staff full time working on the issues. They already beat the best human player at Go, and until they did that, it was an event thought to be 10 years away. This is coming; it's just a matter of when.

u/Nicholas-DM · 1 point · r/worldnews

I watched this interview earlier today, so after reading this article I'm a tad disappointed. Artificial intelligence and a brain-machine interface are two things I'm super interested in, and this particular technology editor wrote one of the crappiest articles I've read about them.

So here is the article, points, counterpoints, the whole shebang.

---

Article


> Elon Musk smoked pot and drank whiskey on the Joe Rogan podcast..."

He did indeed smoke pot and drink whiskey on the podcast. He had one puff of the pot, and drank one glass of the whiskey. And the pot was near the end. Nothing really serious about this, insofar as I am aware.


> "... and said he's going to soon announce a new "Neuralink" product that can make anyone superhuman."

Outright fabrication. Elon did not remotely say that he's going to soon announce a new Neuralink product that can make anyone superhuman, or suggest that anyone will have anything like that soon.


> "'I think we'll have something interesting to announce in a few months ... that's better than anyone thinks is possible,' the Tesla CEO said on 'Joe Rogan Experience.' 'Best case scenario, we effectively merge with AI.'"

Alright. Those are two actual quotes!

The first quote-- yes, Elon said that he'll have something interesting, possibly, in a few months. Specifically, he says that it is about an order of magnitude better than anyone thinks is possible.

The second sentence comes from a mostly unrelated part of the conversation, about different ways to counter artificial general intelligence, which is a real possibility and may be an existential threat to humanity. More on this at the end.


> Musk, whose enterprises include a company called Neuralink, says his new technology will be able to seamlessly combine humans with computers, giving us a shot at becoming "symbiotic" with artificial intelligence.

He does not say this at all in the interview. He suggests that becoming symbiotic with an interface that is like an AI is likely the best way forward for mankind, out of the different options. He goes on to explain, though he doesn't use the term, how an emergent consciousness would work.


> Musk argued that since we're already practically attached to our phones, we're already cyborgs. We're just not as smart as we could be because the data link between the information we can get from our phones to our brains isn't as fast as it could be.

Accurate reporting here, and in the spirit of the actual interview. It doesn't really explain what he means by this, but that'd be a bit much for an article, wouldn't it?


ARTICLE BREAK FOR A QUICK PICTURE IN THE ARTICLE!

> Picture of Elon hitting a blunt

I think it's a blunt, not a spliff. I'm perfectly alright with explaining my thought process if asked.


> "It will enable anyone who wants to have superhuman cognition," Musk said. "Anyone who wants."

I'll have to rewatch the interview to get the exact wording, but I watched it earlier today. I'm pretty confident Elon said 'would', not 'will'. Which doesn't seem like much, but makes a world of difference.

At this point, he is describing what it would be like to have an interface that you could control by thought.


> "Rogan asked how much different these cyborg humans would be than regular humans, and how radically improved they might be."

> "'How much smarter are you with a phone or computer or without? You're vastly smarter, actually,' Musk said. 'You can answer any question pretty much instantly. You can remember flawlessly. Your phone can remember videos [and] pictures perfectly. Your phone is already an extension of you. You're already a cyborg. Most people don't realize you're already a cyborg. It's just that the data rate ... it's slow, very slow. It's like a tiny straw of information flow between your biological self and your digital self. We need to make that tiny straw like a giant river, a huge, high-bandwidth interface.'"

At this point, the cyborg thing is explained a little bit better. The article trims it and changes the order of the interview a bit to make him look like a crackpot idiot, but this part is pretty true to form. It doesn't really give much context about the rest of the conversation in the interview that led up to it, the ideas explained before, that sort of thing. But a good paragraph for the article.


> "Musk, who spoke about Neuralink before he smoked pot on the podcast..."

We know he smoked pot.


> "...said this sort of technology could eventually allow humans to create a snapshot of themselves that can live on if our bodies die."

> "'If your biological self dies, you can upload into a new unit. Literally,' Musk said."

This was definitely mentioned by Elon as an aside, and as a possibility. He did actually explain how it would work. Also, it wasn't a snapshot: people who study this know there is a big difference between a transition and a snapshot, and Elon did not at all imply it was a snapshot; it was spoken of as a transition, which is key. But that's not really something the average person studies, so of course the article doesn't explain it.


> "Musk said he thinks this will give humans a better chance against artificial intelligence."

> "'The merge scenario with AI is the one that seems like probably the best. If you can't beat it, join it,' Musk said."

The article manages to reduce this, which is perhaps the most important section of the interview and a terribly important subject for humanity, to two short lines with no explanation, in a way that makes the person look like an idiot and ignores everything he otherwise explained.


> "Tesla's stock took a hit after the bizarre appearance and revelations Friday that two Tesla executives are leaving."

Tesla's stock did indeed take a hit. It's an extremely volatile stock with good and bad news constantly. I personally fail to see how it relates to this article, though-- much like a hit of pot and a glass of whiskey.

---

An actual explanation


Elon Musk started a company called Neuralink somewhat recently. It brought together a board consisting of doctors, engineers, scientists, and surgeons, and in particular people who are versed in several of those fields at once.

The end goal of Neuralink is to create a low-cost non-invasive brain machine interface (BMI), which would allow you to basically access the internet by thought. Notable is that you would both send and receive messages that your brain could then directly interpret.

With your phone, you can access most of the world's knowledge at your fingertips. The catch is that it's a tad slow. You have to pull your phone out, type out words with two thumbs, wait for pages to load slowly, that sort of thing. In this way, you can think of your phone as an extension of yourself, and of yourself as a sort of clumsy cyborg.
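
The "tiny straw" metaphor from the interview is easy to put rough numbers on; the typing and reading rates below are loose assumptions, just to show the orders of magnitude involved:

```python
# Rough bandwidth of the human-phone link. All rates are loose assumptions
# for illustration, not measurements.

TYPING_WPM = 40        # assumed phone typing speed
READING_WPM = 250      # assumed reading speed
BITS_PER_WORD = 5 * 8  # ~5 characters/word at ~8 bits/character

bits_out = TYPING_WPM * BITS_PER_WORD / 60   # human -> phone
bits_in = READING_WPM * BITS_PER_WORD / 60   # phone -> human

print(f"output to the phone:  ~{bits_out:.0f} bits/s")   # ~27
print(f"input from the phone: ~{bits_in:.0f} bits/s")    # ~167
# Any wired interconnect moves 10^9+ bits/s, so the human link is the
# bottleneck by roughly seven orders of magnitude -- Musk's "straw".
```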

The company isn't far along. I believe I read somewhere that its current goals center on medical uses. Elon mentioned in the interview that they might have something to announce (not even necessarily a product) in a few months. He also uses one of his favorite phrases: it will be an order of magnitude better than anything currently thought possible (by the general public). It will likely be medical in nature and impressive, but not revolutionary.

Actual success is a long, long way off, and nothing Elon said in the interview suggests otherwise.

So that's the gist of the article. As for the actual interview.

Joe Rogan interviewed Elon Musk on his podcast recently, where they discussed lots of things (The Boring Machine, AI, Neuralink, Tesla, SpaceX-- those sorts of things.)

They spent about three hours talking about things, Elon and Joe had a glass of whiskey, Elon had a hit from a blunt, Joe a few hits; the entire interview was a pretty casual thing. Not a product announcement, nothing like that.

Not at all like this particular technology editor made it out to be.

And that's about it. I have some links on actually interesting reading for this down below.

---

Some resources!


http://podcastnotes.org/2018/09/07/elon/ - Some notes about the interview, and a good summary.

https://www.youtube.com/watch?v=ycPr5-27vSI - The actual interview; a tad long. The AI stuff is the first topic and ends at roughly the 33-minute mark.

https://waitbutwhy.com/2017/04/neuralink.html - Article over Neuralink, explaining the company and goal from pretty simple beginnings. Easy to read, wonderfully explanatory.

https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742 - Superintelligence: Paths, Dangers, Strategies. Covers artificial general intelligence, why it is a threat, and ways to handle it. Pretty much the entire goal of Neuralink is based on this book, and it's a very reasonable, quality book.

u/[deleted] · 1 point · r/Futurology

Based on vague hints from a trusted person with clearance at DARPA, they already are and have been for quite some time. But wild speculation aside, what really hammered home the staggering gravity of the situation for me was this superb book by Nick Bostrom. If you're at all interested in this sort of thing, I'd highly recommend it.

u/InfinitysDice · 1 point · r/shittysuperpowers

Well, there are a lot of potential dangers to creating kittens with greater brainpower than we could imagine. It's essentially the superintelligent AI problem: it's tricky to create conditions that would allow us to create something more powerful than ourselves without running into a large host of problems where the AI slips into a mode that isn't value-aligned with us. Maybe with the right types of check-boxes it could be done, though this runs into a second problem:

I'm not at all sure that you can create superintelligent kittens and be at all sure that you can still call them kittens. Any noun is an idea with other ideas attached to it, and if you change any of those defining ideas enough, language, or human convention, tends to call that original noun by a different name.

If the superintelligent kittens would rightly be called something other than kittens, I suspect there would be no checkboxes that would point to them, or allow them to be designed or created.

Further, there are always ethical dilemmas surrounding intelligent species and the willy-nilly creation of them, especially with the intent of placing them into service, and especially if doing so would cause them to suffer.

Anyhow, thanks for the submission, I enjoyed playing with it. :D

u/djk1518 · 1 point · r/joinsquad

> my digitized brain being simulated in a massive quantum super computer

I see you've read Nick Bostrom's [book](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742)

u/Scrybblyr · 1 point · r/funny

Monster is a relative term. But if we manage to create an AI which figures out how to make itself more intelligent, and ends up thousands of times more intelligent than humans (which could theoretically happen in a nanosecond), our survival would be wholly contingent upon the decisions of the AI.

https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742

u/miriberkeley · 1 point · r/writing

The Machine Intelligence Research Institute is putting out a call for intelligent stories illustrating concepts related to (artificial or natural) intelligence. Guidelines are quite specific; read below.

• Pay rate: 8c/word, up to 5000 words

• Multiple submissions OK

• Simultaneous submissions OK

• Submissions window: until July 15

This call is intended to reward people who write thoughtful and compelling stories about artificial general intelligence, intelligence amplification, or the AI alignment problem. We're looking to appreciate and publicize authors who help readers understand intelligence in the sense of general problem-solving ability, as opposed to thinking of intelligence as a parlor trick for memorizing digits of pi, and who help readers intuit that non-human minds can have all sorts of different non-human preferences while still possessing instrumental intelligence.

The winning stories are intended to show (rather than tell) these ideas to an intellectually curious audience. Conscious attempts to signal that the ideas are weird, wonky, exotic, or of merely academic interest are minuses. We're looking for stories that just take these ideas as reality in the setting of the story and run with them. In all cases, the most important evaluation criterion will just be submissions' quality as works of fiction; accurately conveying important ideas is no excuse for bad art!

To get a good sense of what we're looking for (and how not to waste your time!), we strongly recommend you read some or all of the following:

• Superintelligence

• Smarter Than Us

• Waitbutwhy post 1, Waitbutwhy post 2 (with caveats)

Withdrawal policy:

After you submit a story, we prefer you don't withdraw it. If you withdraw a story, we won't consider any version of that story in the future. However, if you do need to withdraw a story (because, for example, you have sold exclusive rights elsewhere), please send an e-mail telling us that you need to withdraw ASAP.

Important notes:

MIRI is neither a publishing house nor a science fiction magazine and cannot directly publish you. However, MIRI will help link a large number of readers to your story.

We frankly do not know whether being selected by MIRI will qualify as a Professional Sale for purposes of membership in the SFWA. We suspect, through readership numbers and payscale, that it will, but we have not spoken to the SFWA to clarify this.

If you have a work of hypertext fiction you think might be a good fit for this call, please query us to discuss how to submit it.

How to contact us:

To contact us for any reason, write to [email protected] with the word QUERY: at the beginning of your subject line. Add a few words to the subject line to indicate what you're querying about.

(We've discontinued the previous, smaller monthly prize in favor of this more standard 'Publishing House Call' model.)

u/A11U45 · 1 point · r/SimulationTheory

Lots of people are scared of AI, like Elon Musk and Nick Bostrom, who even wrote a book about AI. Bostrom's AI work is separate from his simulation argument work, FYI.

u/Archadio · 1 point · r/booksuggestions

Fiction: Dante & His Search for Meaning

https://www.amazon.com/dp/B07VLN1GS1

Nonfiction: Superintelligence by Nick Bostrom

https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742

u/jchiu003 · 1 point · r/OkCupid

Depends on how old you are.

• Middle school: I really enjoyed this, this, and this, but I don't think I can read those books now (29) without cringing a little bit. Especially Getting Things Done, because I already know how to make a to-do list, but I still flip through all 3 books occasionally.

• High school: I really enjoyed this, this, and this, but if you're a well-adjusted human and responsible adult, then I don't think you'll find a lot of helpful advice in these 6 books so far, because it'll be pretty basic information.

• College: I really enjoyed this, this, and started reading Malcolm Gladwell books. The checklist book helped me get more organized, and So Good They Can't Ignore You was helpful in starting my career path.

• Graduate school: I really enjoyed this, this, and this. I had already stopped with most "self-help" books and was reading more about how to manage my money, or books that looked interesting, like Stiff.

• Currently: I'm working on this, this, and this. Now I'm reading mostly for fun, but all three of these books are way out of my league and I have no idea what they're talking about; they're just areas of interest for me: history and AI.

u/CastigatRidendoMores · 0 points · r/singularity

Bostrom's Superintelligence covers gene editing very well, but let me summarize:

The singularity isn't likely going to come through gene editing. The reason is that it's too difficult to improve on the brain. If you identify which genes are responsible for genius and activate them (which is difficult, to say the least), you could get everyone as intelligent as the smartest person yet. But where do you go from there? You'd have to understand the brain on a level far, far beyond what we do now.

Then, even if you did that, chances are you'd run into diminishing returns. It would be a lot of work to increase everyone's IQ by 5 points once, but far more work to figure out how to do it the 10th time. Rather than exponentially increasing gains in intelligence, you get logarithmic increases.
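
A quick sketch of the contrast being drawn here, with arbitrary numbers chosen only to show the two shapes:

```python
# Arbitrary numbers, purely to contrast the two growth regimes: recursive
# self-improvement compounds (each gain enables the next), while repeated
# gene edits, on this argument, buy less per round of effort.

import math

compounding = [100 * 1.5 ** n for n in range(6)]               # exponential
diminishing = [100 + 20 * math.log(n + 1) for n in range(6)]   # logarithmic

for n, (c, d) in enumerate(zip(compounding, diminishing)):
    print(f"round {n}: compounding {c:7.1f}   diminishing {d:6.1f}")
```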

Not to say I'm not a fan of gene editing. It's obviously fraught with controversy when used beyond curing disease, but compared to other transhumanist techniques it would leave us with a lot more of our humanity intact.