Reddit reviews At Home in the Universe: The Search for the Laws of Self-Organization and Complexity

We found 5 Reddit comments about At Home in the Universe: The Search for the Laws of Self-Organization and Complexity. Here are the top ones, ranked by their Reddit score.

5 Reddit comments about At Home in the Universe: The Search for the Laws of Self-Organization and Complexity:

u/Maniacademic · 9 pointsr/iamverysmart

>Evolution is inherent in unparalleled chao

Okay, this person (or bot?) is obviously churning out fake-deep bullshit, but...Stuart Kauffman? Is that you?

u/naroays · 2 pointsr/videos

> Can we assume that abiogenesis is also non random? Life had not started yet so how could evolution be in effect.

Yeah, abiogenesis is not subject to natural selection, but I think it's much more likely for small, simple molecules to arise through possibly random physical interactions with the environment. Once this one-time event occurred naturally (or maybe it occurred multiple times in different parts of the early Earth?), evolution by natural selection is, in a sense, the machinery that gives rise to complexity.

Nobody knows (yet) exactly how abiogenesis occurred or how the first self-replicating molecule was formed, but if you look at certain types of autocatalytic reactions (or the famous Miller-Urey experiment, which showed how various amino acids, the building blocks of proteins, can arise naturally), I think such a one-time event is quite plausible. Indeed, much more plausible than the alternative hypothesis of a large living organism arising spontaneously!
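To make the "lots of chances make a rare event plausible" reasoning concrete, here's a toy calculation (a sketch of my own; the figures are purely illustrative, not from any source): even if a self-replicator arises with a minuscule probability p in any single "trial" (some pocket of prebiotic chemistry over some window of time), the chance of at least one success over n independent trials is 1 - (1 - p)^n, which climbs toward certainty surprisingly fast.

```haskell
-- Probability of at least one success in n independent trials,
-- each succeeding with probability p: 1 - (1 - p)^n.
pAtLeastOnce :: Double -> Double -> Double
pAtLeastOnce p n = 1 - (1 - p) ** n

main :: IO ()
main = do
  -- Hypothetical numbers, chosen only to show the shape of the curve:
  print (pAtLeastOnce 1e-15 1e12)  -- ~0.001: still a long shot
  print (pAtLeastOnce 1e-15 1e15)  -- ~0.63: better than even odds
  print (pAtLeastOnce 1e-15 1e18)  -- ~1.0: practically certain
```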

If you're interested, there's a really interesting book by the biologist Stuart Kauffman that explores these ideas in more detail.

u/pron98 · 1 pointr/haskell

>Why would biological organisational structures necessarily be best?

I don't think they necessarily are, and perhaps I'm reading Kay too charitably, but biological systems have found solutions to problems we are still struggling to solve with computers (resilience, maintenance). I don't agree with Kay (if that's what he means) that if a piece of software is different from a biological system then the software must be doing something wrong, but I do agree that it is worthwhile to consider how life solves problems that technology hasn't yet. Even then, I totally agree with you that life's solutions may well not be applicable to technology, because many of them rely on brute force or extreme redundancy, things that are, at least currently, materially prohibitive for technology.

Also, I think one of the things those who encourage looking to biology for inspiration often miss is that life's goals are different from a computer's. As Hamilton and Price taught us, as a computational system life is a single machine whose "goal" (what the algorithm optimizes for) is not the survival of the individual but of the gene (the so-called "selfish" gene). When we write software, it is very much the survival of the individual that is the goal. Although, maybe not? Maybe the goal is the survival of the cluster, which is made of individuals who share the same "genes", and so "selfish altruism" is a good inspiration?

As to the question of evolution and local minima, I think that's a problem in theory but not in practice. Some of life's solutions are well beyond what our technology is currently capable of achieving no matter what we try, and it will be some time before we can think of doing better. Evolution is not optimal, but it seems to have done many things better than we have so far managed to, although I guess that's partly because it has access to nano-manufacturing and nano-machines while we don't yet. When we do, maybe we'll be able to surpass evolution's designs.

BTW, completely tangential, but while we're on the subject I very much encourage you to read Stuart Kauffman's At Home in the Universe, which argues that natural selection is not the only force at play: interesting computational designs arise naturally under conditions that are quite common in the universe. He sees those designs as a sort of fourth law of thermodynamics. He's the one who, in 1969, proposed the study of a fascinating computational model called Boolean networks. Incidentally, synchrony turns out to have a profound impact on the behavior of Boolean networks, but that seems like a minor technical issue, and it seems we can assume synchrony as a mathematical abstraction even when the physical implementation isn't exactly synchronous.
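For readers who haven't seen one, here's a minimal sketch of a synchronously updated Boolean network (a toy of my own devising, not Kauffman's exact 1969 model): each node computes a fixed Boolean function of a couple of other nodes, and all nodes update in lockstep.

```haskell
import Data.Bits (xor)

-- One node: the indices of its two inputs plus a Boolean function of them.
data Node = Node Int Int (Bool -> Bool -> Bool)

-- A synchronous step: every node reads the *old* state vector, so all
-- nodes update in lockstep (the synchrony discussed above).
step :: [Node] -> [Bool] -> [Bool]
step nodes state = [ f (state !! i) (state !! j) | Node i j f <- nodes ]

-- A hypothetical 3-node network with K = 2 inputs per node.
network :: [Node]
network =
  [ Node 1 2 (&&)   -- node 0
  , Node 0 2 (||)   -- node 1
  , Node 0 1 xor    -- node 2
  ]

main :: IO ()
main = mapM_ print (take 8 (iterate (step network) [True, False, False]))
```

Even this tiny trajectory quickly falls into a short cycle (an attractor); Kauffman's striking finding was that large *random* networks with K = 2 also settle into a small number of short attractors -- order for free.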

> ML modules are just elaborate static machinery on existential types, which are abstract types.

I did not know that, nor do I know what existential types are...
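For what it's worth, here's a minimal Haskell sketch of the idea (my own example, not from the thread): an existential type packs a hidden representation together with the only operations allowed on it, which is exactly what makes it an abstract type.

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- 's' is existentially quantified: a Counter carries some state of an
-- unknown type together with the only operations permitted on it.
data Counter = forall s. Counter s (s -> s) (s -> Int)

-- Two implementations with different hidden representations.
intCounter :: Counter
intCounter = Counter (0 :: Int) (+ 1) id

listCounter :: Counter
listCounter = Counter [] (() :) length

-- Client code can't inspect 's'; it can only compose the packed operations.
readAfterTwo :: Counter -> Int
readAfterTwo (Counter s inc get) = get (inc (inc s))

main :: IO ()
main = mapM_ (print . readAfterTwo) [intCounter, listCounter]
```

The two counters store their state completely differently, but clients can't tell them apart; ML modules deliver the same data abstraction through their static machinery.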

> OOP has no monopoly on modular programming.

I didn't say it does, but I think it is very ungenerous to deny what I think is an indisputable fact: OOP has made programming significantly better, as evidenced by the fact that we've been able to write much more elaborate, more maintainable programs partly thanks to OOP. It is certainly possible that other approaches are even better -- perhaps much better -- but you can't take away OOP's actual achievements. Also, precisely because OOP (or some variation of it) has been used in practice more than approaches that claim superiority, we are simply more aware of where it falls short. We don't know as much about other approaches' shortcomings because they haven't been put to the test ("the test" being widespread industry use). It is possible that they have all of the advantages and none of the disadvantages of OOP, but it is also possible that they have other disadvantages.

As to "half-assing", well, I guess that any popular product is a "half-assed" realization of some pure concept, because there are big-picture concerns that often necessitate breaking the dream a bit. For example, James Gosling described why he designed Java the way he did by saying that, the way he saw it, what people really wanted and needed -- i.e., the things he believed would give the greatest bang for the buck -- were garbage collection and pervasive dynamic linking, but those were things that until then had only been found in languages that, he says, scared people away. So he decided to put those most important things in the VM and wrap it in a language that seemed familiar and unthreatening. He intentionally compromised on the language -- which is the UI to the VM, and what people see, and the UI is crucial to adoption -- in order to sell the things he thought were the most important. He called it a wolf in sheep's clothing. And this is why I think Java's design is nothing short of brilliant: it compromised in order to be successful. Whether a good product that nobody uses is really good is a philosophical question, but I think it's more than fair to see lack of adoption as at least some sort of design failure; a different design may not have ended up being so popular.

Now, it is true that the people behind Java were Lispers (Gosling and Steele) and Smalltalkers (Bracha), and had they been MLers perhaps the result would have been different or "better". But overall, I don't think Java and ML are so incredibly different (apart from immutability by default, which is huge, but I don't think you could have sold that to the masses in 1995; maybe not even now). In some important ways, I think ML is closer to Java than to Haskell.

> I disagree about the value of such things.

I commonly hear this, and all I can say is: read the requirements of an air-traffic control system and see how many of them you could discard to simplify the system. I used to think exactly the same as you until I started working on such systems. I think that a very significant portion of the total industry effort goes into software that solves problems with very high essential complexity. Unfortunately, that part of the industry (which may be the majority) isn't well represented in online forums -- certainly not Haskell forums -- or joint academia-industry conferences.

It is an empirical question (and one which is probably very easy to answer) whether most of the value (measured, say, economically) in software is in small software or in large software (not counting embedded). I am pretty certain that the value is overwhelmingly in large software.