Reddit reviews: Machine Ethics

We found 2 Reddit comments about Machine Ethics. Here are the top ones, ranked by their Reddit score.


2 Reddit comments about Machine Ethics:

u/Mauss22 · 4 pointsr/askphilosophy

I'll pass along wokeupabug's typical recommendations:

>A good broad introduction is Lowe's An Introduction to the Philosophy of Mind (for a broader, philosophy and cognition sort of approach). For an introduction more focused on the mind-body problem, you have lots of options; Kim's Philosophy of Mind and Heil's Philosophy of Mind... are good choices. For a history anthology approach, the Chalmers' Philosophy of Mind... is a good choice; a little more accessible would be Morton's Historical Introduction to the Philosophy of Mind.

And the recommendation from the FAQ page:

>For philosophy of mind, Searle's Mind: A Brief Introduction.

I don't really know what you mean by a 'consideration of the future'. Do you mean issues that could crop up in the future germane to phil. mind (A.I., cog. enhancement, etc.)? If so, that's a tough one! Likely just the Cambridge Handbook. The introduction is available here if you'd like a preview. And this book on Machine Ethics is recommended on the PhilPapers bibliography.


u/UmamiTofu · 4 pointsr/askphilosophy

>It seems like all the research involving AI alignment seems to be done by computer scientists using machine learning.

Not exactly. Most research here doesn't use machine learning, and much of it looks at issues which are simply above and beyond the question of how an agent is going to learn a classifier function or approximate its value function. That being said, it is largely a matter of computer science in general.

>What role do philosophers have in this conversation?

If decision theorists count as philosophers then there is plenty of work to be done; see Stuart Armstrong's work on corrigibility, Jessica Taylor's work on quantilizers, and Nate Soares, Eliezer Yudkowsky and Wei Dai's work on Functional Decision Theory and its predecessors TDT and UDT. It's worth noting though that it seems better to approach this from a mathematical or computer science background rather than from philosophy if you are doing it for the purposes of advanced AI development.

You can get into more traditional philosophy territory by analyzing how a superintelligent agent will make decisions and act, as long as you don't get carried away from computational reality. The orthogonality thesis in particular is amenable to philosophical analysis. Here are a couple of relevant papers, one from a computer scientist and one from a philosopher.

https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf

https://philpapers.org/archive/PETSAS-12.pdf

Finally there is the basic ethical question of what ends advanced AI should achieve, which is clearly a philosophical question. You could technically call this machine ethics, but it is separate from other such work (described in the next part of this comment) in that it assumes very advanced systems. Here are examples of the kinds of ideas at stake:

https://intelligence.org/files/CEV-MachineEthics.pdf

https://intelligence.org/files/CEV.pdf

https://foundational-research.org/wp-content/uploads/2016/08/Suffering-focused-AI-safety.pdf

>Furthermore, what other subfields of AI ethics are there besides AI Alignment?

Machine ethics, which is the question of how AI agents should behave, under the premise that they have human-level or subhuman general intelligence. There is plenty of this in r/AIethics. Wallach and Allen's Moral Machines is a good book here; this also recently came out, but I haven't read it. There are more papers here as well. Actual implementation is usually a big part of these ideas.

Then there is the question of when and how AI interests should be given moral weight. Some interesting stuff here would be:

http://stevepetersen.net/petersen-designing-people.pdf

https://arxiv.org/pdf/1410.8233.pdf

Then, there are arguments about whether it is morally wrong to use artificially intelligent weapons. There is some philosophical literature on when data science and machine learning classifiers and recommenders are fair or unfair. And if you go into legal and political philosophy you could make judgements regarding the rules and policies for developing and using AI.