Reddit reviews What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence (Edge Question Series)

We found 2 Reddit comments about What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence (Edge Question Series). Here are the top ones, ranked by their Reddit score.

Published by Harper Perennial

2 Reddit comments about What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence (Edge Question Series):

u/PepperoniFire · 11 points in r/NeutralPolitics

Hm, I'm going to push back on this a bit. I work in regulatory law and risk management, so Musk's comments about regulating AI piqued my curiosity about possible future developments in both of these areas.

I began reading. I'm still in the early stages, so I won't purport to be a specialist in AI (I'm not in computer science or engineering) or deeply immersed in the more theoretical conversation. My exposure has mainly been through What to Think About Machines That Think, plus some exposure at work (a tech company flirting with more immediate applications of, at least, machine learning).

The book is a series of short essays on AI, machine learning, and the future. Quite a few people who are, at least in this book, presented as experts do think there is a possibility for machines to learn dangerous habits. One goes so far as to mention Roko's Basilisk, which posits that even a benevolent AI might have some imperative to harm humans who were obstacles to its creation, precisely because they delayed its benevolence.

I don't personally subscribe to this, but there is an overarching input/output concern: some regulation might be required to define the parameters of each iterated goal for an AI, to ensure it remains constrained enough to avoid incidental harms while pursuing broader positive goals.

Anyway, I think there are two approaches to this question: short-term and long-term. From the short-term point of view, there's little evidence that we should fear much from AI beyond unintentionally programming our current biases into it. In the long term (non-imminent), however, a not-insignificant number of prominent AI thinkers appear to be mulling over more negative future states.

u/Staberinde_Chair · 2 points in r/CGPGrey

AI - If anybody is having nightmares like Grey on the AI issue, I would strongly recommend the recent book What to Think About Machines That Think (edited by John Brockman): http://www.amazon.com/What-Think-About-Machines-That/dp/006242565X
It is a collection of short essays by 186 leading thinkers on the question, with gems by generalists such as Daniel C. Dennett, Susan Blackmore, Martin Rees, Matt Ridley, Steven Pinker, and many more specialists in the field. It presents a wide range of well-argued views, and two things seem clear: 1) We are much further from 'hard AI' than you might think, and 2) It is by no means clear that AI poses an existential threat. I particularly like the argument put forward by Martin Rees (former president of the Royal Society) that AI represents our best hope for the long-term survival of consciousness/thought/meaning, and that any AI would either be a product (descendant?) of humanity or an integration of the human mind with a non-organic substrate.