Reddit reviews The Society of Mind

We found 12 Reddit comments about The Society of Mind. Here are the top ones, ranked by their Reddit score.

12 Reddit comments about The Society of Mind:

u/proggR · 10 pointsr/IAmA

Hello Ben, thank you for doing an AMA. I apologize for the long-windedness in advance. I added the bolded text as headings just to break things up a little, so if you don't have time to read through my full post I'd be happy to just get suggested readings as per the AMA Question section.

AMA Question

I'm interested more and more in AI, but most of what I know has just been cobbled together from learning I've done in other subjects (psychology, sociology, programming, data modelling, etc.), with everything but programming being just hobby learning. AI interests me because it combines a number of subjects I've been interested in for years and tries to fit them all together. I have Society of Mind by Minsky and How to Create a Mind by Kurzweil at home but haven't started either yet. Do you have any follow-up reading you would recommend for someone just starting to learn about AI, that I could read once I've started/finished these books? I'm particularly interested in information/data modelling.

Feedback Request for Community AI Model

I had a number of long commutes to work when I was thinking about AI a lot, and started to think about the idea of starting not with a single AI, but with a community of AIs. Perhaps this is already how things are done and is nothing novel, but like I said, I haven't done a lot of reading on AI specifically, so I'm not sure of the exact approaches being used.

My thought process is that the earliest humans could only identify incredibly simple patterns. We would have had to learn what makes a plant different from an animal, what was a predator and what was prey, etc. The complex patterns we identify now, we're only able to identify because the community has retained these patterns and passed them on to us, so we don't have to go through the trouble of re-determining them. If I were isolated at birth and presented with various objects, teaching myself with no feedback from peers what patterns can be derived from them would be a horribly arduous, if not impossible, task. By brute forcing a single complex AI, we're locking the AI in a room by itself rather than providing it access to peers and a searchable history of patterns.

This made me think about how I would model a community of AI for which sharing information to better the global knowledge is core to its existence. I've been planning a proof of concept for how I imagine this community AI model, but this AMA gives me a great chance to get feedback long before I commit any development time to it. If you see anything that wouldn't work, or that would work better in another way, or know of projects or readings that are heading in the same direction, I would love any and all feedback.

The Model

Instead of creating a single complex intelligent agent, you spawn a community of simple agents, plus a special kind of agent I'm calling the zeitgeist agent, which acts as an intercessor for certain requests (more on that in a bit).

Each agent contains its own neural network to which data is mapped. A reference to each piece of information is stored as metadata, to which "trust" values can be assigned reflecting how "sure" the agent is of something.

Agents also contain references to other agents they have interacted with, along with metadata about each of those agents, including a rating for how much they trust them as a whole based on previous interactions, and how much they trust them within specific information domains based on previous interactions. Domain trust will also slowly allow agents to become "experts" within certain domains as they become go-tos for other agents within those domains. This allows agents to learn broadly, but have proficiencies emerge as a byproduct of more attention being given to one subject over another; this will vary from agent to agent depending on what they're exposed to and how their personal networks have evolved over time.
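
To make this concrete, here's a minimal Python sketch of that agent state. Every name, default, and update rule here is a placeholder I'm making up for illustration, not anything from an existing framework:

```python
from collections import defaultdict

class Agent:
    """One community member: trust-weighted memory plus trust-rated peers."""

    def __init__(self, agent_id):
        self.agent_id = agent_id
        # word -> (value, trust in [0, 1]); stands in for the neural network
        self.beliefs = {}
        # peer_id -> overall trust in that peer (0.5 = no history yet)
        self.peer_trust = defaultdict(lambda: 0.5)
        # (peer_id, domain) -> trust in that peer for that domain
        self.domain_trust = defaultdict(lambda: 0.5)

    def record_interaction(self, peer_id, domain, was_correct, rate=0.1):
        """Nudge both trust ratings after a peer's claim checks out (or doesn't)."""
        delta = rate if was_correct else -rate
        self.peer_trust[peer_id] = min(1.0, max(0.0, self.peer_trust[peer_id] + delta))
        key = (peer_id, domain)
        self.domain_trust[key] = min(1.0, max(0.0, self.domain_trust[key] + delta))
```

The nice part is that "expertise" then emerges for free: an agent whose per-domain ratings keep climbing just becomes the go-to for that domain.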

As an agent receives information, a number of things take place: it takes into account who gave it the information, how much it trusts that agent overall, how much it trusts that agent in that domain, how much trust the sending agent has placed on the information, and whether conflicting information exists within its own neural network. The receiving agent then determines whether to blindly trust the information, blindly distrust it, or verify it with its peers.
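
Continuing the sketch above, the three-way decision might combine those factors multiplicatively; the scoring rule and both thresholds below are arbitrary choices of mine:

```python
def evaluate_claim(agent, sender_id, domain, word, value, sender_confidence,
                   accept_at=0.8, reject_at=0.2):
    """Return 'trust', 'distrust', or 'verify' for an incoming claim."""
    # How much do we trust the sender, overall and in this domain,
    # weighted by how sure the sender says it is?
    score = (agent.peer_trust[sender_id]
             * agent.domain_trust[(sender_id, domain)]
             * sender_confidence)

    # A conflicting prior belief drags the score down, in proportion
    # to how much we trusted the old value.
    prior = agent.beliefs.get(word)
    if prior is not None and prior[0] != value:
        score *= (1.0 - prior[1])

    if score >= accept_at:
        return "trust"       # blindly accept
    if score <= reject_at:
        return "distrust"    # blindly reject
    return "verify"          # ask peers, then possibly the zeitgeist
```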

Requests for verification are performed by finding peers who also know about this information, which is why a "language" will need to be used to allow for this interaction. I'm envisioning the language simply being a unique hash that can be translated to the inputs received that are used by the neural networks. Whenever a new piece of information is received, the zeitgeist provisions a new "word" for it and updates a dictionary it maintains that is common to all agents within the community. When a word is passed between agents, if the receiving agent doesn't know the word, it requests the definition from the zeitgeist agent and then moves on to judging the information associated with the word.
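
A rough sketch of how the zeitgeist could mint those "words", using a content hash; the hashing scheme and the truncation are just illustrative choices:

```python
import hashlib

class Zeitgeist:
    """Keeper of the shared dictionary and of retired agents' state."""

    def __init__(self):
        self.dictionary = {}   # word (hash) -> the raw input it stands for
        self.ancestors = []    # committed state of dead agents

    def provision_word(self, raw_input: bytes) -> str:
        """Mint (or reuse) the community-wide word for a piece of input."""
        word = hashlib.sha256(raw_input).hexdigest()[:16]
        self.dictionary.setdefault(word, raw_input)
        return word

    def define(self, word: str):
        """What an agent calls when it receives a word it doesn't know."""
        return self.dictionary.get(word)
```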

When a verification request is made to peers, the same trust/distrust/verify evaluation is performed on the aggregate of responses, and if doubt remains, but not enough doubt to dismiss the information entirely, the receiving agent can make a request to the zeitgeist. This is where I think the model gets interesting, but again, it may be commonplace.

As agents age and die, rather than lose all the information they've collected, their state gets committed to the zeitgeist agent. Normal agents and the zeitgeist agent could be modelled relatively similarly, with these dead agents just acting as a different type of peer in an array. When requests are made to the zeitgeist agent, it can inspect the states of all past agents to determine if there is a trustworthy answer to return. If, after going through the trust/distrust/verify process, it's still in doubt, I'm imagining a network of these communities (because the model is meant to be distributed in nature) where the same request can be passed on to the zeitgeist agent of another community in order to pull "knowledge" from other, perhaps more powerful, communities.
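
Building on the sketches above, the lookup could fall through from ancestor state to neighbouring communities. This ignores cycle protection and all the network plumbing; it's just the shape of the idea:

```python
def zeitgeist_lookup(zeitgeist, word, other_zeitgeists=(), min_trust=0.6):
    """Answer from dead agents' committed beliefs, else ask other communities."""
    # Poll the committed states of past agents for this word.
    answers = [a.beliefs[word] for a in zeitgeist.ancestors if word in a.beliefs]
    trusted = [(value, trust) for value, trust in answers if trust >= min_trust]
    if trusted:
        # Return the most-trusted ancestral answer.
        return max(trusted, key=lambda vt: vt[1])

    # Still in doubt: pass the same request to other communities' zeitgeists.
    for other in other_zeitgeists:
        answer = zeitgeist_lookup(other, word)
        if answer is not None:
            return answer
    return None
```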

Once the agent finally has its answer about how much trust to assign that information, if it conflicts with information received from other peers during this process, it can notify those peers that it has a different value for that information and inform them of the value, the trust it has assigned, and some way of mapping where this trust was derived from, so the agent being corrected can perform its own trust/distrust/verify process on the corrected information. This correction process is meant to keep the system generally self-correcting, though bias can still present itself.

I'm picturing a cycle the agent goes through that includes phases of learning, teaching, reflecting, and procreating. Lifespan and reproductive rate will be determined by certain values, including the amount of information the agent has acquired and verified, the amount of trust other agents have placed in it, and (this part I'm entirely unsure how to implement) how much information it has determined a priori, which is to say that, through some type of self-reflection, it will identify patterns within its neural network, posit a "truth" from those patterns, and pass it into the community to be verified by other agents. There would also exist the ability to reflect on inconsistencies within its "psyche", or, put differently, to evaluate the trust values and make corrections as needed by making requests against the community to update its data set with more current information.

Agents would require a single mate to replicate. Agent replication habits are based on status within the community (as determined by the ability to reason and the aggregate trust of the community in that agent), peer-to-peer trust, relationships (the array of peers determines whom the agent can approach for replication), and hereditary factors that reward or punish agents who are performing above or below par. The number of offspring an agent is able to create will be determined at birth, perhaps with a degree of flexibility depending on events within its life, and would be known to the agent so it can plan to have the most optimized offspring by selecting or accepting the best partners. There would likely also be a reward for sharing true information, to allow some branches to become pure conduits of information, moving it through the community. Because replication relies on trust and the ability to collect validated knowledge, as well as on finding the most optimal partner, lines of agents who are consistently wrong or unable to reflect and produce anything meaningful to the community will slowly die off as their pool of partners shrinks.
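
As a toy illustration of how those selective pressures might be scored (the weights and caps here are pulled out of thin air):

```python
def fitness(community_trust, verified_count, a_priori_count):
    """Toy fitness score in [0, 1] driving lifespan and mate appeal."""
    # Reward aggregate community trust, verified knowledge, and
    # original (a priori) findings, capped so no one term dominates.
    return (0.4 * community_trust
            + 0.4 * min(1.0, verified_count / 100)
            + 0.2 * min(1.0, a_priori_count / 10))

def offspring_quota(community_trust, verified_count, a_priori_count,
                    max_children=4):
    """Offspring count fixed at birth, scaled by fitness."""
    return round(max_children * fitness(community_trust,
                                        verified_count, a_priori_count))
```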

The patterns at first would be incredibly simple, but by sharing information between peers, as well as between extended networks of peers, they could become more and more complex over time, with patterns being passed down from one generation of agents to the next via the zeitgeist agent, so the entire community would be learning from itself, much like how we have developed as a species.


Thanks again

I look forward to any feedback or reading you would recommend. I'm thinking of developing a basic proof of concept, so any feedback that corrects errors or helps fill in some of the blanks would be a huge help (especially for the section about self-reflection and determining new truths from patterns a priori). Thanks again for doing an AMA. AI really does have world-changing possibilities, and I'm excited to see the progress that's made on it over the next few decades and longer.

u/PermianWestern · 4 pointsr/scifiwriting

>neural prosthetic, as it’s called in-universe

We're all "in-universe" here, dog.

If you have an opportunity, check your library or used book store for Marvin Minsky's The Society of Mind. His premise is that intelligence, sapience, is a product of the interaction of non-intelligent parts. According to Minsky's theory, the human mind is made up of parts which are not themselves sapient. But when you throw them together with the right sets of connections, you end up with a sapient mind.

u/sv0f · 3 pointsr/MachineLearning

That's an interpretation that some NN researchers believe.

In reality, the book proved theorems showing the limits of then-current architectures. This is its enduring contribution.

It is the extrapolation of their results -- by Minsky and Papert and others -- that led people to lose interest in NN for the next decade.

I would be wary of looking for protagonists and antagonists in this story. Researchers simply followed what seemed to be the more promising directions at the time, and this included symbolic approaches. The pendulum of what's popular has swung back and forth over the decades, and will continue to swing without the need to posit "good guys" and "bad guys".

It's also the case that Minsky's position is pretty misrepresented. His doctoral work at Princeton was a mechanical NN-like system (SNARC). So he had a hard-won sense of the limits of that approach. (Whether he was right is another question.) And he was always interested in parallel co-operative processing as a model of computation (see his student Danny Hillis's dissertation, which led to the supercomputer company Thinking Machines) and as a model of cognition (see his completely unique book Society of Mind).

u/legalpothead · 3 pointsr/trees

Marvin Minsky, in The Society of Mind, postulates that a conscious mind is composed of many parts which are not conscious themselves. The simplest parts he calls agents; an agent performs a single task. Agents are grouped together to form agencies, and agencies are grouped into bigger agencies. Finally, agencies interact to form a conscious mind.
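
A toy way to picture that layering in code; this is my own sketch of the idea, not anything from the book:

```python
class TaskAgent:
    """An agent does exactly one dumb thing."""
    def __init__(self, name, action):
        self.name = name
        self.action = action   # a function from world-state to world-state

    def run(self, world):
        return self.action(world)

class Agency:
    """An agency is just agents (or smaller agencies) wired together."""
    def __init__(self, name, members):
        self.name = name
        self.members = members

    def run(self, world):
        # No member is conscious; any "smarts" live in the arrangement.
        for member in self.members:
            world = member.run(world)
        return world

# e.g. a block-building agency made of mindless parts:
# builder = Agency("builder", [TaskAgent("find", find_block),
#                              TaskAgent("grasp", grasp_block),
#                              TaskAgent("place", place_block)])
```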

u/SamCarterX206 · 2 pointsr/Whatisthis

For context, you can see said symbol as it appears in the book by using the "Look Inside" function on the Amazon page: http://www.amazon.com/gp/product/0671657135?ie=UTF8&camp=1789&creativeASIN=0671657135&linkCode=xm2&tag=marvinminsky

It also appears on the back of the book, in the lower right, right next to
"A Touchstone Book
Published by Simon & Schuster
New York".

u/yoda17 · 2 pointsr/science
u/gargoyle_mayonnaise · 1 pointr/philosophy

I read this book when I was much younger and it gives a good schematic understanding of the mechanistic nature of consciousness, but not necessarily the biochemical version.

The fact is we have a pretty good understanding of the nervous system at this point. We understand nervous impulses and which regions of the brain process which sensory inputs and are responsible for coordination, conscious thought, and so on, and even though our understanding is never complete, we kind of "get it". We get the biochemistry behind nervous impulses, sodium gates and axons and neurotransmitters.

What eludes us is the "conscious" aspect of awareness, and my argument is simply that this is the output of such a complex system: a system that collates many sensory inputs and juggles decision making with learning, reflecting on past experiences, risk assessment, and so on. Our decision-tree response to stimuli manifests as a sort of internal monologue. Our never-ending attempt to learn from our history and then output that knowledge in an attempt to influence our environment manifests as our brains' constant chatter and spinning. We develop unique identities because every single person's perspective and experience (not to mention genetics) is slightly different, and unique.

And I think the real driver for advanced consciousness was our physical bodies. Our dexterous and capable bodies created the need for an advanced consciousness.

Consider cattle. Cattle are large mammals with "big" brains (relative to a lot of other animals) but they aren't highly intelligent. What are some of the indicators of intelligence? Complex vocal communication perhaps, or problem solving ability, tool usage, things like that?

How could a cow really use a tool? Its only means of manipulating its environment is to bump its head into things, or bite something, or kick or stomp on things.

Why would cows have complex vocal communication? That would actually work to their disadvantage . . . imagine how absurd a large herd of gibbering cows would be. Vital communication like an "alarm" or "distress" moo would be lost in the endless chatter of dozens or hundreds of cows who can't shut up.

Now birds appeared much earlier than mammals, but in many ways lots of birds exhibit signs of higher intelligence than lots of mammals. This makes sense, though: birds have a lot to think about. They can move around their environment in three dimensions. They're very nimble and dexterous, not as much as a primate, but their feet and beaks allow them to perform a lot of complicated tool-related tasks. Watch a parrot untie a knot or unscrew a bottle cap just for fun sometime. And they have the need for complex vocalizations based on their social structure, territory ranges and so on.

Primates, and ultimately hominids, first evolved that rare combination of all these things. Very dexterous appendages with which to manipulate their environment. All five senses reasonably acute. Large territories. Big brains with which to process all this information. The ability to move around in pseudo-3D (by climbing and also running and swimming; even though we can't fly by nature, we adapted to mechanical flight very well!). And probably most important, two of our appendages are not used primarily for locomotion, allowing us to dedicate them to tool usage pretty much full-time. And of course relatively long lifespans, allowing us to truly learn and capitalize on life experiences.

The reflective, or "self-aware" consciousness, probably did not appear until humans had sufficiently mastered their environment, long enough to form small societies, minimize their external sources of stress, and formalize some kind of language. The language facilitated reason, and reason facilitated everything else.

With language, idle time and reduced roles and responsibilities of individual humans, we had more time to basically sit around and think about things.

But all that appeared because (in my opinion) of our physical shapes themselves. As far as "by what mechanism did this occur", it is just the same mechanism that carries one nervous impulse to another, proliferated by billions or trillions: a complex network of always-firing neurons observing, responding to, regulating, storing and recalling information.

I dunno about nuclear reactors, but imagine some kind of massive animal that had an array of solar panels on its back. We dissect it and discover that it's just chlorophyll. Nothing complicated there, the same chlorophyll found in grass and trees. But we discovered some gargantuan beast with plates of shifting green armor on its back that can charge itself in the hot sun. Then we start asking these existential questions about it, and marvel in amazement, but nobody did this with the lowly blade of grass. It wasn't until we saw it on a massive scale, realized to its full potential, that it blew our minds.

So our consciousness is not qualitatively different from a rudimentary one; it is just much, much grander in scale.

u/asclepius22 · 1 pointr/MessiahComplex

I've found this book to be one of the best models for cognitive psychology.

I need to reread it. I think it could be a good tool for servitor creation.

u/jamey2 · 1 pointr/philosophy

Try out Marvin Minsky's work, especially The Society of Mind

u/investandr · 1 pointr/Futurology

That 'computer guy' wrote the book on how our mind works and pioneered cognitive psychology... in 1988. I suggest you read up on him more.

u/Cdresden · 1 pointr/suggestmeabook

The Society of Mind by Marvin Minsky.

The Emperor's New Mind by Roger Penrose.

The Age of Spiritual Machines by Ray Kurzweil.