Reddit reviews of How to Create a Mind: The Secret of Human Thought Revealed

We found 6 Reddit comments about How to Create a Mind: The Secret of Human Thought Revealed. Here are the top ones, ranked by their Reddit score.

How to Create a Mind: The Secret of Human Thought Revealed
Ray Kurzweil is arguably today’s most influential—and often controversial—futurist. In How to Create a Mind, Kurzweil presents a provocative exploration of the most important project in human-machine civilization—reverse engineering the brain to understand precisely how it works and using that knowledge to create even more intelligent machines.

6 Reddit comments about How to Create a Mind: The Secret of Human Thought Revealed:

u/theekrat0s · 12 points in r/fo4

I could write a book of information and opinions with dozens of sources, but I am gonna keep it simple and link you to these two things:
https://www.youtube.com/watch?v=r-jMdJHv1Lk
and
http://www.amazon.de/How-Create-Mind-Thought-Revealed/dp/0670025291

The big thing you gotta look at is consciousness. I am gonna copy-paste some previous comments of mine from other threads that talk about that. Everything beyond this point is what I think; if you want, you can ignore it and just rely on those two links, because they are pretty much the starting point that leads to what I will say:

Someone asked this "Why don't people understand that gen3s are practically human?"

My answer/comment was:
Well, tbh, the answer is simply that we don't know. We IRL are not advanced enough to know if they can be considered human. The brain pretty much works like a computer, with electrical (and also chemical [PS: there is a supercomputer that uses those too, it's pretty cool]) signals. What makes humans different from machines is that we are conscious of ourselves and our surroundings.

Yes, synths are conscious too, BUT (this is a big but) they did not achieve that level of consciousness by themselves. It took humans hundreds of thousands of years of evolution to build this consciousness, and every newborn baby gets taught this consciousness passively by their parents. Growing up, we just start understanding these things because everyone does; we leech off each other. Synths do that too, but they never achieved this themselves: after they get created, the Institute indoctrinates them with what they need to know. Their consciousness is just as artificial and synthetic as their bodies. The way they think and act and feel is all based upon how they see themselves and their surroundings, and ALL of that has been implanted into them, kind of like programming the basis of an AI, and after a while they also learn. (IRL, AI is not as advanced and can only effectively learn simple tasks, but that is changing rapidly.)

To TRULY determine if synths can gain and create their own consciousness (biological programming, pretty much), they would have to be separated from any human contact. What happens if they grow up with different species, or with themselves, or alone? If they are able to build that up by themselves, then we could start calling them sentient beings. Without any of that data they are just fancy-pants machines. (Nick is an odd exception because he came from a human; for me he is more human than any Gen3.)
PS.: But hey, this is all just my opinion and this is the internet…so who cares.

The brain is grown, but not the information inside it. A synth's mind can be wiped and reshaped. They most likely have a basic "programming" in them to begin with, given the fact that they can walk the second they get made/"born".

And about that free and happy life thing: most are OK with being workers/slaves/tools. A minority of them escapes, and in the story they never give a reason as to why that movement got started. All synths are created equal, so why do some want to escape and some not? They have the same life since they were created, they came from the same source, and they got taught the same basic skills needed for their work. When did they start to "think" for themselves and become sentient and want things such as freedom? The only logical answer (given what we know from the game) is that it depended on which humans they got into contact with more. There are people in the Institute who think they are human, and most likely the synths working in that area are the ones getting those ideas of escaping.

Long story short, all of this leads to what I said: they are not creating these thoughts and their consciousness by themselves, they are grabbing it, leeching it passively from the humans they work with. It's very much like an AI learning things step by step. And the synths that NEVER try to escape are the coursers. Why? Because they spend a lot of time with the department (the SRB) that treats synths more as tools than as humans, more so than any other place in the Institute.
TLDR: Synths behave a LOT like how a futuristic AI would work and learn: never making their own decisions, but instead leeching ideas and learning from their surroundings.

The thing is that humans built that up throughout their entire evolution. We build up that consciousness, and every generation benefits from it and expands it; that's what a species does.
Yeah, you could consider a synth human because of that, but in my eyes it is just a very advanced AI being taught and given a consciousness that is not its own. The question is whether they can build something like that up by themselves, which is at this point in time simply impossible to answer.
Scientists are estimating that by 2023 they can rebuild how a human brain works with a computer; on top of that, there are supercomputers out right now that use both electrical and chemical signals to send information, just like the brain. If you combined those two and gave it the same ideas, beliefs, and skills that the Institute teaches the synths, would you consider that thing human?
I believe it is not the body that determines if they are human, not the flesh and bone, but what their mind is, and for synths that is just a fancy hard drive up there (for me, that is). Nick is an exception because his memories are DIRECTLY from a human, and after that he built his own personality and consciousness in combination with his former self and the people he ended up with after being kicked out of the Institute. Because he developed so much himself (not completely, though; even the SS helps him with his quest), I consider him more human than any Gen3 synth out there.

Hope this helps!

u/proggR · 10 points in r/IAmA

Hello Ben, thank you for doing an AMA. I apologize for the long-windedness in advance. I added the bolded text as headings just to break things up a little, so if you don't have time to read through my full post I'd be happy to just get suggested readings as per the AMA Question section.

AMA Question

I'm interested more and more in AI, but most of what I know has just been cobbled together from learning I've done in other subjects (psychology, sociology, programming, data modelling, etc.), with everything but programming being just hobby learning. AI interests me because it combines a number of subjects I've been interested in for years and tries to fit them all together. I have Society of Mind by Minsky and How to Create a Mind by Kurzweil at home but haven't started either yet. Do you have any follow-up reading you would recommend for someone just starting to learn about AI, which I could read once I've started/finished these books? I'm particularly interested in information/data modelling.

Feedback Request for Community AI Model

I had a number of long commutes to work when I was thinking about AI a lot, and started to think about the idea of starting not with a single AI, but with a community of AIs. Perhaps this is already how things are done and is nothing novel, but like I said, I haven't done a lot of reading on AI specifically, so I'm not sure of the exact approaches being used.

My thought process is that the earliest humans could only identify incredibly simple patterns. We would have had to learn what makes a plant different from an animal, what was a predator and what was prey, etc. The complex patterns we identify now, we're only able to identify because the community has retained these patterns and passed them on to us, so we don't have to go through the trouble of re-determining them. If I were isolated at birth and presented with various objects, teaching myself, with no feedback from peers, what patterns can be derived from them would be a horribly arduous, if not impossible, task. By brute-forcing a single complex AI, we're locking the AI in a room by itself rather than providing it access to peers and a searchable history of patterns.

This made me think about how I would model a community of AI that makes sharing information, for the purpose of bettering the global knowledge, core to their existence. I've been planning a proof of concept for how I imagine this community AI model, but this AMA gives me a great chance to get feedback long before I commit any development time to it. If you see anything that wouldn't work, or that would work better in another way, or know of projects or readings that are heading in the same direction, I would love any and all feedback.

The Model

Instead of creating a single complex intelligent agent, you spawn a community of simple agents, plus a special kind of agent I'm calling the zeitgeist agent, which acts as an intercessor for certain requests (more on that in a bit).
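To make that a little more concrete, here's a rough Python sketch of the layout I'm picturing; every class, attribute, and name here is made up purely for illustration, not a working design:

```python
# Toy sketch of the community layout: many simple agents plus one
# "zeitgeist" agent that intercedes for requests peers can't settle.

class ZeitgeistAgent:
    def __init__(self):
        self.dictionary = {}        # shared "word" -> definition mapping
        self.archived_agents = []   # states of agents that have died

    def define(self, word):
        return self.dictionary.get(word)


class SimpleAgent:
    def __init__(self, agent_id, zeitgeist):
        self.agent_id = agent_id
        self.zeitgeist = zeitgeist  # the intercessor for unresolved requests
        self.peers = {}             # other agents this one has interacted with
        self.beliefs = {}           # "word" -> what the agent holds to be true


class Community:
    def __init__(self, size):
        self.zeitgeist = ZeitgeistAgent()
        self.agents = {f"agent-{i}": SimpleAgent(f"agent-{i}", self.zeitgeist)
                       for i in range(size)}


community = Community(size=100)
```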

Each agent contains its own neural network that data is mapped to, and a reference to each piece of information is stored as metadata to which "trust" values can be assigned; these relate to how "sure" the agent is of something.

Agents also contain references to other agents they have interacted with, along with metadata about each of those agents, including a rating for how much they trust them as a whole based on previous interactions, and how much they trust them in specific information domains based on previous interactions. Domain trust will also slowly allow agents to become "experts" within certain domains as they become go-tos for other agents within those domains. This allows agents to learn broadly, but have proficiencies emerge as a byproduct of more attention being given to one subject over another, and this will vary from agent to agent depending on what they're exposed to and how their personal networks have evolved over time.
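Continuing the same made-up sketch, the per-belief and per-peer metadata might look something like this (the default trust numbers and the fallback rule are placeholders, not anything principled):

```python
from dataclasses import dataclass, field

@dataclass
class Belief:
    value: object              # the piece of information (or a reference to it)
    trust: float = 0.5         # how "sure" the agent is of it, in [0, 1]
    source: str = "unknown"    # which peer it came from, if any

@dataclass
class PeerRecord:
    overall_trust: float = 0.5                          # trust in the peer as a whole
    domain_trust: dict = field(default_factory=dict)    # domain name -> trust in that domain

def domain_trust_in(agent, peer_id, domain):
    """How much this agent trusts a given peer within one information domain.
    Falls back to overall trust when there is no domain-specific history yet."""
    record = agent.peers.get(peer_id, PeerRecord())
    return record.domain_trust.get(domain, record.overall_trust)
```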

As an agent receives information, a number of things take place: it takes into account who gave it the information, how much it trusts that agent, how much it trusts that agent in that domain, how much trust the sending agent has placed on that information, and whether conflicting information exists within its own neural network; the receiving agent then determines whether to blindly trust the information, blindly distrust it, or verify it with its peers.
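The kind of decision rule I have in mind when new information arrives might look roughly like this, building on the toy structures above (the 0.8/0.2 thresholds are arbitrary placeholders):

```python
def receive(agent, sender_id, domain, word, value, asserted_trust):
    """Blindly trust, blindly distrust, or flag for verification with peers."""
    sender = agent.peers.get(sender_id, PeerRecord())
    score = (sender.overall_trust
             + domain_trust_in(agent, sender_id, domain)
             + asserted_trust) / 3.0

    existing = agent.beliefs.get(word)
    conflicts = existing is not None and existing.value != value

    if score > 0.8 and not conflicts:
        agent.beliefs[word] = Belief(value, trust=score, source=sender_id)
        return "trusted"
    if score < 0.2:
        return "distrusted"
    return "verify"   # ask peers (and possibly the zeitgeist) before accepting
```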

Requests for verification are performed by finding peers who also know about this information, which is why a "language" will be needed to allow for this interaction. I'm envisioning the language simply being a unique hash that can be translated to the inputs received and used by the neural networks. Whenever a new piece of information is received, the zeitgeist provisions a new "word" for it and updates a dictionary it maintains that is common to all agents within the community. When a word is passed between agents, if the receiving agent doesn't know the word, it requests the definition from the zeitgeist agent and then moves on to judging the information associated with the word.
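For the "language", the word-provisioning step could be as dumb as hashing the raw inputs; again this is only a sketch, and the hash choice is arbitrary:

```python
import hashlib

def provision_word(zeitgeist, raw_inputs: bytes) -> str:
    """Mint a shared 'word' for a new piece of information and record its
    definition in the dictionary the whole community shares."""
    word = hashlib.sha256(raw_inputs).hexdigest()[:12]
    zeitgeist.dictionary.setdefault(word, raw_inputs)
    return word

def resolve_word(agent, word):
    """An agent that doesn't know a word asks the zeitgeist for its definition."""
    return agent.zeitgeist.define(word)
```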

When a verification request is made to peers, the same trust/distrust/verify evaluation is performed on the aggregate of responses, and if there is still doubt, but not enough doubt to dismiss the information entirely, the receiving agent can make a request to the zeitgeist. This is where I think the model gets interesting, but again, it may be commonplace.

As agents age and die, rather than lose all the information they've collected, their state gets committed to the zeitgeist agent. Normal agents and the zeitgeist agent could be modelled relatively similarly, with these dead agents just acting as a different type of peer in an array. When requests are made to the zeitgeist agent, it can inspect the states of all past agents to determine if there is a trustworthy answer to return. If, after going through the trust/distrust/verify process, it's still in doubt, I'm imagining a network of these communities (because the model is meant to be distributed in nature) that can have the same request passed on to the zeitgeist agent of another community in order to pull "knowledge" from other, perhaps more powerful, communities.
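Sketching the zeitgeist side of that, with archived agents treated as just another kind of peer and neighbouring communities consulted as a last resort (all of it made up for illustration):

```python
def commit_on_death(zeitgeist, agent):
    """Archive a dying agent's state instead of discarding what it learned."""
    zeitgeist.archived_agents.append(agent)

def zeitgeist_answer(zeitgeist, word, neighbour_zeitgeists=()):
    """Look for a trustworthy answer among archived agents; if none is found,
    pass the same request on to the zeitgeists of neighbouring communities."""
    candidates = [a.beliefs[word] for a in zeitgeist.archived_agents
                  if word in a.beliefs]
    if candidates:
        return max(candidates, key=lambda belief: belief.trust)
    for other in neighbour_zeitgeists:
        answer = zeitgeist_answer(other, word)
        if answer is not None:
            return answer
    return None
```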

Once the agent finally has its answer about how much trust to assign that information, if it conflicts with information received from other peers during this process, it can notify those peers that it has a different value for that information and inform them of the value, the trust it has assigned, and some way of mapping where that trust was derived from, so that the agent being corrected can perform its own trust/distrust/verify process on the corrected information. This correction process is meant to keep the system generally self-correcting, though bias can still present itself.
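The correction step could just reuse the same receive() rule sketched above, so the peer being corrected runs its own trust/distrust/verify pass on the new value (again, purely illustrative):

```python
def notify_correction(sender, peer, domain, word):
    """Tell a peer we hold a different value for a word, passing along our own
    trust in it; the peer treats the correction like any other incoming claim."""
    mine = sender.beliefs[word]
    return receive(peer, sender.agent_id, domain, word, mine.value, mine.trust)
```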

I'm picturing a cycle the agent goes through that includes phases of learning, teaching, reflecting, and procreating. Their lifespan and reproductive rates will be determined by certain values, including the amount of information they've acquired and verified, the amount of trust other agents have placed in them, and (this part I'm entirely unsure how to implement) how much information they've determined a priori, which is to say that, through some type of self-reflection, they will identify patterns within their neural network, posit a "truth" from those patterns, and pass it into the community to be verified by other agents. There would also exist the ability to reflect on inconsistencies within their "psyche", or, put differently, to evaluate the trust values and make corrections as needed by making requests against the community to update their data set with more up-to-date information.
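The cycle itself might be little more than a loop over those phases; the reflect step, the part I'm least sure how to implement, is left as an empty stub here, and every function name is invented for the sketch:

```python
def life_cycle(agent, community):
    """One pass through the phases: learn, teach, reflect (procreation is
    handled separately). Every phase below is a stub standing in for real behaviour."""
    learn(agent, community)        # absorb whatever peers shared this round
    teach(agent, community)        # pass the agent's highest-trust beliefs onward
    conjecture = reflect(agent)    # the hard "a priori" step: posit a new truth
    if conjecture is not None:
        broadcast_for_verification(agent, community, conjecture)

def learn(agent, community): ...
def teach(agent, community): ...
def broadcast_for_verification(agent, community, conjecture): ...

def reflect(agent):
    # Look for patterns across the agent's own beliefs and return a candidate
    # "truth" for the community to verify; returning None means nothing found.
    return None
```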

Agents would require a single mate to replicate. Agent replication habits are based on status within the community (as determined by the ability to reason and the aggregate trust of the community in that agent), peer-to-peer trust, relationships (meaning the array of peers determines whom the agent can approach to replicate with), and hereditary factors that reward or punish agents who are performing above or below par. The number of offspring an agent is able to create will be determined at birth, perhaps with a degree of flexibility depending on events within its life, and would be known to the agent so the agent can plan to have the most optimized offspring by selecting or accepting the best partners. There would likely also be a reward for sharing true information, to allow some branches to become pure conduits of information, moving it through the community. Because replication relies on trust and the ability to collect validated knowledge, as well as on finding the most optimal partner, lines of agents that are consistently wrong or unable to reflect and produce anything meaningful to the community will slowly die off as their pool of partners shrinks.
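Mate selection could then just rank the agent's own peer array by community standing plus personal trust; the scoring below is a placeholder, and how "standing" gets computed is hand-waved:

```python
def choose_mate(agent, all_agents, standing):
    """Pick a replication partner from the agent's own peers.

    `all_agents` maps agent id -> agent; `standing` maps agent id -> the
    community's aggregate trust in that agent (computed elsewhere)."""
    candidates = [pid for pid in agent.peers if pid in all_agents]
    if not candidates:
        return None   # lines nobody will pair with slowly die off
    best = max(candidates,
               key=lambda pid: standing.get(pid, 0.0)
                               + agent.peers[pid].overall_trust)
    return all_agents[best]
```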

The patterns at first would be incredibly simple, but by sharing information between peers, as well as between extended networks of peers, they could become more and more complex over time, with patterns being passed down from one generation of agents to the next via the zeitgeist agent, so the entire community would be learning from itself, much like how we have developed as a species.


Thanks again

I look forward to any feedback or reading you would recommend. I'm thinking of developing a basic proof of concept, so feedback that could correct anything or help fill in some of the blanks would be a huge help (especially for the section about self-reflection and determining new truths from patterns a priori). Thanks again for doing an AMA. AI really does have world-changing possibilities, and I'm excited to see the progress that's made on it over the next few decades and beyond.

u/xamomax · 7 points in r/Futurology

To understand what Google is likely to be doing, I highly recommend How to Create a Mind by Ray Kurzweil. Keep in mind that Kurzweil is now at Google, probably specifically for this project.

u/ItsAConspiracy · 2 points in r/Futurology

My suggestion is to open-source it under the GPL. That would mean people can use your GPL code in commercial enterprises, but they can't resell it as commercial software without paying for a license.

By open-sourcing it, people can verify your claims and help you improve the software. You don't have to worry about languishing as an unknown, or taking venture capital and perhaps ultimately losing control of your invention in a sale or IPO. Scientists can use it to help advance knowledge, without paying the large license fees that a commercial owner might charge. People will find all sorts of uses for it that you never imagined. Some of them will pay you substantial money to let them turn it into specialized commercial products; others will pay you large consulting fees to help them apply the GPL version to their own problems.

You could also write a book on how it all works, how you figured it out, the history of your company, etc. If you're not a writer you could team up with one. Kurzweil and Jeff Hawkins have both published some pretty popular books like this, and there are others about non-AGI software projects (e.g., Linux, Doom). If the system is successful enough to really make an impact, I bet you could get a bestseller.

Regarding friendliness, it's a hard problem that you're probably not going to solve on your own. Nor is any large commercial firm likely to solve it on their own; in fact, they'll probably ignore the whole problem and just pursue quarterly profits. So it's best to get it out in the open, so people can work on making it friendly while the hardware is still weak enough to limit the AGI's capabilities.

This would probably be the ideal situation from a human-survival point of view. If someone were to figure out AGI after the hardware is more powerful than the human brain, we'd face a hard-takeoff scenario with one unstoppable AGI that's not necessarily friendly. With the software in a lot of hands while we're still waiting for Moore's Law to catch up to the brain, we get a much more gradual approach, we can work together on getting there safely, and when AGI does get smarter than us there will be lots of them with lots of different motivations. None of them will be able to turn us all into paperclips, because doing that would interfere with the others and they won't allow it.

u/[deleted] · 1 point in r/mildlyinteresting

I've been reading more about the MegaHAL bot, and it seems like its use of Markov chains is considered a primitive method.
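For anyone following along, the Markov-chain idea MegaHAL builds on is roughly this; a toy second-order word model I put together as a sketch, nothing like MegaHAL's actual code:

```python
import random
from collections import defaultdict

def train(corpus_words, order=2):
    """Build a simple order-n word Markov model: each context of `order`
    words maps to the list of words observed to follow it."""
    model = defaultdict(list)
    for i in range(len(corpus_words) - order):
        context = tuple(corpus_words[i:i + order])
        model[context].append(corpus_words[i + order])
    return model

def generate(model, seed, length=20):
    """Walk the chain: repeatedly sample a next word given the current context."""
    out = list(seed)
    for _ in range(length):
        choices = model.get(tuple(out[-len(seed):]))
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

words = "the cat sat on the mat and the cat sat on the hat".split()
model = train(words)
print(generate(model, seed=("the", "cat")))
```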

How do you think your implementations would fare with a more robust natural-language decoder? Siri, for instance, uses Nuance Communications technology, which relies on hierarchical hidden Markov models (HHMMs), allowing higher-order patterns to send feedback back to lower-order recognizers. Excuse the upcoming anthropomorphism, but when Siri hears "APPL", she guesses the word is "apple" and tells the E recognizer to lower its threshold for acceptance. This allows for much greater flexibility and accuracy, especially when trying to decode audio signals.
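Something like the following toy sketch is how I picture that top-down feedback; it only illustrates the threshold idea and is not a real hierarchical hidden Markov model, and every name and number in it is invented:

```python
# A word-level "recognizer" that has seen "APPL" predicts E next and lowers
# the letter recognizer's acceptance threshold for E accordingly.

WORDS = ["APPLE", "APPLY", "APPAL"]

def predicted_letters(prefix):
    """Letters that could plausibly come next, given the known words."""
    return {w[len(prefix)] for w in WORDS
            if w.startswith(prefix) and len(w) > len(prefix)}

def accept_letter(letter, signal_strength, prefix, base_threshold=0.7, boost=0.3):
    """Accept a noisy letter hypothesis; expected letters need weaker evidence."""
    threshold = base_threshold
    if letter in predicted_letters(prefix):
        threshold -= boost   # top-down feedback: lower the bar for E after APPL
    return signal_strength >= threshold

# A weak, noisy "E" (strength 0.5) is rejected in isolation but accepted after APPL.
print(accept_letter("E", 0.5, prefix=""))      # False
print(accept_letter("E", 0.5, prefix="APPL"))  # True
```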

Sorry for all the questions; I just finished Ray Kurzweil's How to Create a Mind, so I've had this kind of thing on the mind, and you seem to have the sort of real-world experience with it that I lack.