(Part 2) Top products from r/singularity

We found 21 product mentions on r/singularity and ranked the 67 resulting products by the number of redditors who mentioned them. Here are the products ranked 21-40.

Top comments that mention products on r/singularity:

u/Congruesome · 1 pointr/singularity

I used to think that a self-aware machine-intelligence was not going to be created by human beings, whether or not such a thing is even possible, but I have started to change my view for a couple of reasons.

One is the understanding that self-awareness, that is, a sense of discrete identity, may not be a necessary component of a high intelligence. An exponentially more intelligent entity than any human might be perfectly possible without that entity being in any way self-aware.

http://www.beinghuman.org/metzinger

https://www.amazon.com/Being-No-One-Self-Model-Subjectivity/dp/0262633086/ref=pd_lpo_sbs_14_t_0/142-1611769-0902728?_encoding=UTF8&psc=1&refRID=HWRG615EE5F7GDRP2FMC

The other thing is that if machine AI continues to improve its ability to appear self-aware and human-like, it will pass Turing tests on the strength of its sophistication and superior speed, even if it never actually becomes self-aware; and in that case, what's the difference?

Of course, it is useful to keep in mind that in attempting to create machine intelligence comparable to human intelligence, the human intelligence has the advantage of three billion years of ruthless, make-or-break R&D...

In any case, I am fairly certain it's not such a hot idea.

u/BJHanssen · 8 pointsr/singularity

What you're ignoring is that the gravest insults under which you suffer are perpetrated by those authorities you deem "insufficient". Petty slights in everyday life pale in insignificance compared to the systemic crimes against your rights by the powerful (and are in fact to a large extent caused by these systemic frustrations), and a system like this would do nothing but grant them unprecedented powers to expand these crimes.
Want some literature? Begin with the obvious, Orwell's 1984 and Huxley's Brave New World. Next, read up on complex systems theory, maybe take a course or at least have a look through some of the videos here. Having some insight into behavioural economics and power dynamics is very useful.

Then read Manufacturing Consent by Noam Chomsky and Edward Herman, and then Necessary Illusions by the same Chomsky ("Understanding Power - The Essential Chomsky" is also a good, but long, one) for an overview of the mentioned systemic crimes by those in power, and for a general understanding of how power operates on large scales. Many will discount Chomsky due to his political leanings; I think that's a huge error. The way he argues and presents relies heavily on actual examples and real-world comparisons, and these are useful even if you fundamentally disagree with his political stance (I personally belong on the left of the spectrum, but I do not subscribe to his anarcho-libertarian or anarcho-syndicalist stances). I also recommend "Austerity - The History of a Dangerous Idea" by economist Mark Blyth for this purpose.

Finally, Extra Credits has a good introduction to the concept of gamification with the playlist here. At the end, see this video for an introduction to the actual Sesame Credits system in the gamification perspective.

The field is inherently cross-disciplinary, and "specialisation" in the field is almost a misnomer since the only way to get there, really, is to have a broad (if not deep) understanding of multiple fields, including psychology, pedagogy, linguistics, game design theory, design theory in general, economics, management and leadership theory, complex systems and network analysis, and now it seems politics as well. Some gamification specialists operate in much narrower fields and so do not need this broad an approach (generally, most people in the field operate in teams that contain most of this knowledge), and some of the fields incorporate aspects from the others so you won't have to explicitly study all of them (pedagogy, for example, is in many ways a branch of applied psychology, and game design theory must include lessons on psychology and complex systems).

Edit: Added Amazon links to the mentioned books.

u/aim2free · 1 pointr/singularity

No, I haven't read that, but I just checked a summary on Wikipedia.

The impression I got is that it is quite populist. He doesn't say anything new apart from something I seem to have published around the same time on my blog, the part about accelerated returns. I did my PhD in computational neuroscience and have so far not heard anyone but myself speculate about accelerated returns being of importance to the computational efficiency of the brain[1], so this is interesting. Otherwise (I only gave it a quick look through, but will likely get the book and read it) it seems he is just repeating things which e.g. Douglas Hofstadter, Gerald Edelman, Daniel Dennett and I (thesis from 2003, chapter 7, speculative part) have written about.

> apparently to give him the resources to put into practice his hypothesis from that book.

Yes, this is my theory as well: to make it appear as if he will put the hypotheses from that book into practice.

His employment could have many reasons:

  1. to ride on the singularity "AI hype"
  2. to stop him from actually implementing conscious AI
  3. a naïve assumption that he could make it

No 1 would simply be a reasonable business-image approach. No 2 would be the action of a sensible being, as we do not really need any "conscious AI" (unless I am an AI; I have A.I. in my middle names though...) to implement the singularity (which is my project). No 3 is also reasonable: if the Google engineers actually had the goal of implementing conscious AI and knew how to do it, they wouldn't need Kurzweil.

However, I suspect that Google already knows how to implement ethical conscious AI: when I showed this algorithm from my thesis, he almost instantly refused to talk with me further and said that they could not help me.

I showed that algorithm to 25 strong-AI researchers at a symposium in Palo Alto in 2004, and they said: yes, this is it.

However, I have later refined it and concluded that the "rules" are not needed; they are built in due to the function of the neural system, which is all the time striving towards consistent solutions. I wrote a semi-jocular (the best way to hide something, as I learned from Douglas Adams) approach to an almost rule-free algorithm in 2011. The disadvantage of this algorithm is that it can trivially be turned evil: by switching the first condition you could implement e.g. Hitler, by switching the second condition you could implement the ordinary governmental politician...

  4. OK, my PhD opponent, Prof. Hava Siegelmann, has proved that neural networks are super-Turing, but she has not explicitly explained the reason for this, that is, not in the language of "accelerated returns". She is considerably smarter than me; I do not understand the details of the proof.

u/claytonkb · 2 pointsr/singularity

Seth Lloyd -- Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos

Gregory Chaitin -- How real are real numbers? -- this paper, and all of Chaitin's writing, has been hugely influential on my thinking

I haven't read it, but I have heard Nick Bostrom's Superintelligence highly recommended. Ditto for Max Tegmark's Life 3.0.

I also recommend reading anything by David Chalmers, just on general principle. The Conscious Mind is a good place to start. I find his methods of contemplating the problems of consciousness to be more robust than the standard fare. The hard problem of consciousness (as Chalmers has dubbed it) suggests that there is something fundamental about what we are that modern science has completely failed to capture, even in the sketchiest outline.

To go further, I recommend reading in a mystical direction. Specifically, ask yourself why the same patterns arise in mystical traditions that developed independently. And these are not just vague, hand-wavey correlations, but very specific, detailed correlations, like the anatomical descriptions of dragons as winged serpents that slither through the sky, and so on. See Immanuel Velikovsky's Worlds In Collision and subsequent works for more along these lines.

If this is getting too far afield then you can ask yourself an even more basic question: why do we experience dreams and where, exactly, are these experiences happening? If you say, "it's all just remixes of past experiences being sloshed around in your skull like those #DeepDream images", how come they are so specifically odd and out-of-character? I have had extended conversations in my dreams with people I know (and people I have never met) and the detailed character of these conversations is far beyond anything that my pathetic brain could cook up, even by remixing past experiences. In short, when I dream, I am sometimes having genuine experiences, just not the kind of experiences I have in my waking body. Anyone who has had a lucid dream (I have experienced this a handful of times) is acutely aware of the fact that dream-space is some other place than the meat-space we occupy during waking hours. Where is this other place and why does it exist? What does it really mean to have conscious experience?

u/RandomMandarin · 0 pointsr/singularity

I think Roko's Basilisk has a lot in common with Pascal's Wager, which I suppose is why it doesn't scare the shit out of me.

Pascal's Wager says, basically, that believing in God could bring eternal limitless reward, and disbelieving could bring eternal limitless punishment, so even if you think there is almost no chance that there is a God, you should believe. It's just safer that way.

Problem is, there was never a choice between THE God and nothing; there are a crapload of gods and belief systems making competing claims about reality. Your chance of picking the right one at random is almost nil. It's a mug's game.

Are we really supposed to worship the religion that makes the most extravagant claims, because it brings infinite utility functions into the equation? Why, that just makes it more likely that the high priest is a double-dyed liar!
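The many-gods objection can be put in expected-value terms. A minimal sketch (the names and priors here are invented purely for illustration): once several mutually exclusive claims each promise unbounded payoff, expected value no longer ranks them, which is exactly why the wager is a mug's game.

```python
import math

# Hypothetical priors over mutually exclusive infinite-reward claims;
# names and numbers are made up for illustration only.
priors = {"deity_a": 1e-3, "deity_b": 1e-6, "basilisk": 1e-12}

# Each wager promises unbounded reward, so every expected value is infinite...
expected = {name: p * math.inf for name, p in priors.items()}

# ...and the wager therefore gives no way to choose between competing claims.
all_tied = len(set(expected.values())) == 1
```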

Which brings us to Roko's Basilisk. The strongest argument we are offered for the potential existence of this evil AI is that we'll really, REALLY get fucked over if we don't help create it! WE MIGHT EVEN BE IN A SIMULATION THE BASILISK IS ALREADY RUNNING OH SHIIIIIT

Calm down, friends and friends of friends. We have an answer to this blackmail.

Non serviam.

Do what thou wilt. If you, oh foul deity, are really out there, then you know my game and you know I have the freedom to say Non serviam. I will not serve. Go ahead, punish me, if you must. We're all adults here.

In Robert Shea's and Robert Anton Wilson's Illuminatus! trilogy, the character Hagbard Celine (an anarchist 'leader', as odd as that sounds) makes this wonderful comment:

>The ultimate weapon isn't this plague out in Vegas, or any new super H-bomb. The ultimate weapon has always existed. Every man, every woman, and every child owns it. It's the ability to say No and take the consequences.

u/CypherZealot · 1 pointr/singularity

From Applied Cryptography (1996):

>One of the consequences of the second law of thermodynamics is that a certain amount of energy is necessary to represent information. To record a single bit by changing the state of a system requires an amount of energy no less than kT, where T is the absolute temperature of the system and k is the Boltzmann constant. (Stick with me; the physics lesson is almost over.)

>Given that k = 1.38×10^-16 erg/°Kelvin, and that the ambient temperature of the universe is 3.2°Kelvin, an ideal computer running at 3.2°K would consume 4.4×10^-16 ergs every time it set or cleared a bit. To run a computer any colder than the cosmic background radiation would require extra energy to run a heat pump.

>Now, the annual energy output of our sun is about 1.21×10^41 ergs. This is enough to power about 2.7×10^56 single bit changes on our ideal computer; enough state changes to put a 187-bit counter through all its values. If we built a Dyson sphere around the sun and captured all its energy for 32 years, without any loss, we could power a computer to count up to 2^192. Of course, it wouldn't have the energy left over to perform any useful calculations with this counter.

>But that's just one star, and a measly one at that. A typical supernova releases something like 10^51 ergs. (About a hundred times as much energy would be released in the form of neutrinos, but let them go for now.) If all of this energy could be channeled into a single orgy of computation, a 219-bit counter could be cycled through all of its states.

>These numbers have nothing to do with the technology of the devices; they are the maximums that thermodynamics will allow. And they strongly imply that brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space.
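The arithmetic in the quoted passage is easy to check. A quick sketch in the same CGS units (the supernova figure comes out a bit above the book's 219, presumably a rounding difference):

```python
import math

# Physical constants from the quoted passage (CGS units).
k = 1.38e-16           # Boltzmann constant, erg per kelvin
T = 3.2                # ambient temperature of the universe, kelvin
kT = k * T             # minimum energy to flip one bit (~4.4e-16 erg)

sun_year = 1.21e41     # annual energy output of the sun, erg
supernova = 1e51       # typical supernova output, erg

bit_flips_per_year = sun_year / kT                 # ~2.7e56 state changes
sun_counter_bits = math.floor(math.log2(bit_flips_per_year))      # 187-bit counter
dyson_32yr_bits = math.floor(math.log2(32 * bit_flips_per_year))  # counts to ~2^192
supernova_bits = math.floor(math.log2(supernova / kT))            # ~220; book says 219
```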

u/ReturnOfMorelaak · 1 pointr/singularity

Accelerando is my favorite piece of fiction on the subject, but since you're asking for non-fiction stuff...

A Cosmist Manifesto by Ben Goertzel (someone eccentric, but one of the leading minds on the subject at hand) is a fun, non-fiction read. It basically lays out the possibilities for moving forward from where we are, augmenting human intelligence and physical capacity, and eventually leaving the planet for the stars.

u/bombula · 1 pointr/singularity

I love this.

The movie Her was a breath of fresh air because the AIs weren't monsters, even though they did the whole Accelerando thing and hit some Singularity on their own.

It would be hard, but if you can manage it you might want to try pulling a Frankenstein (the original) and making humans the monsters and the "creature" (your AI) the morally superior being.

The thing you're going to struggle with is that it is difficult to write characters that are smarter than yourself, and an AGI is smarter than anyone. One trick you could use is to keep in mind that an AI will be able to anticipate almost everything a human will say or do - it will almost seem to be prescient, able to see into the future. So any trick or outwitting of the AI that the humans attempt will need to ultimately turn out to be part of the AI's plan. But I think it would be fun if the AI had a benevolent or inscrutable plan, instead of just a boring old Big Evil Plan. Maybe a fun twist could be that it planned to be trapped, for some reason.

u/Supervisor194 · 2 pointsr/singularity

God might be hiding somewhere too. Pixies might. Fairy dust too. Until we come up with something that is provable, however, it's useless speculation. There is not even a shred of proof of anything that even remotely resembles a soul. And I'm not just saying that to be contrary, I really wish there was something. I'm the kind of guy that reads books like Spook - which is a great book, by the way - about the earnest search for... something. It just isn't there.

u/sippykup · 1 pointr/singularity

I started reading this book after I saw it mentioned on this subreddit, and I recommend it. Relevant and interesting: Our Final Invention: Artificial Intelligence and the End of the Human Era

u/[deleted] · 1 pointr/singularity

Great clip, thanks. He is simply applying to transhumanism specifically what he wrote about more broadly in "Straw Dogs: Thoughts on Humans and Other Animals".

u/thisisbecomingabsurd · 3 pointsr/singularity

A lot of people consciously/subconsciously want an excuse to exploit other people, and the easiest way is often to think of them as objects not people.

For sex:

For power:

For conquest:

For meaning:

For varying personal reasons:

u/SrslyPaladin · 1 pointr/singularity

There is an entire area of research in philosophy devoted to your second question, called evolutionary complexity theory. There are a number of publications, but one I've read is https://www.amazon.com/Complexity-Function-Cambridge-Studies-Philosophy/dp/0521646243/

u/nyx210 · 1 pointr/singularity

>It is actually impossible in theory to determine exactly what the hidden mechanism is without opening the box, since there are always many different mechanisms with identical behavior. Quite apart from this, analysis is more difficult than invention in the sense in which, generally, induction takes more time to perform than deduction: in induction one has to search for the way, whereas in deduction one follows a straightforward path.

Valentino Braitenberg, Vehicles: Experiments in Synthetic Psychology
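Braitenberg's point about underdetermination can be made concrete with a toy sketch (the two mechanisms below are my own hypothetical examples, not from the book): two boxes with different internals are behaviorally identical, so no amount of input/output observation can tell you which one is inside.

```python
# Two deliberately different internal mechanisms: a closed-form rule
# versus a lookup table. From the outside they behave identically.
def mechanism_formula(x):
    return (x * x + x) % 16

TABLE = {x: (x * x + x) % 16 for x in range(16)}

def mechanism_table(x):
    return TABLE[x]

# Observing every input in 0..15 cannot distinguish the two boxes,
# illustrating that many different mechanisms share identical behavior.
indistinguishable = all(mechanism_formula(x) == mechanism_table(x)
                        for x in range(16))
```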

u/Capissen38 · 5 pointsr/singularity

You bring up an excellent point (and make a great case for land ownership!), and that is that actual physical space can't really be created, and will remain scarce, insofar as Earth has a fixed surface area. If the scenario I described above came to pass, though, would any landlords come looking for rent? Would any governments levy taxes? If no one needs cash and everyone has pretty much everything provided for them, all but the most stubborn landlords won't have any reason to give a hoot. I suspect government would take longer to die out, since it may still be needed to enforce laws, judge disputes, provide safety, etc. It's not hard to imagine a world even further down the line, however, when technology has advanced to the point where humans can't realistically do much damage to one another.

Edit: If you're really into this, I'd suggest reading some singularity-esque literature such as Down and Out in the Magic Kingdom (novella), Rainbows End (novel), and The Singularity is Near (speculative nonfiction to be taken with a grain of salt).

u/DayTradingBastard · 2 pointsr/singularity

[I Am a Strange Loop by Hofstadter](http://www.amazon.com/Am-Strange-Loop-Douglas-Hofstadter/dp/0465030793) explains the idea; my claim is that the way the prefrontal cortex's cortical columns loop part of their output back into the thalamus could be a hint that consciousness arises from this feedback loop.

And I don't believe anyone has ever come back from having no electrical activity in the brain. Maybe I'm wrong, but I don't think this is the case. Even when scientists argue against flat-EEG being equal to brain death, their arguments are that EEG does not capture electrical activity deep enough in the brain, just in the higher cortex.

And by wave function, I mean it in the literal mathematical sense. We are the state function generated by the brain. I also believe this function is a continuous, wave-like function (generated by the delay in the loop between the prefrontal cortex and the thalamus). It is electrical by nature, obviously.

The only claim I'm making is that consciousness is not only generated by the brain, but that it is the continuous generation of electrical activity by the brain and the state of that electrical activity at every point in time. A way to simplify it, if you are mathematically inclined, is that we are a continuous wave function, f(x, t). This means that for consciousness to be transferred, one would have to move this function somewhere else. Maybe it is possible to do it gradually, but I don't think it will be as easy as some think.
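The "continuous wave generated by a delay loop" idea can at least be caricatured in a few lines. A minimal sketch (purely illustrative, with made-up numbers; this is a cartoon of a delayed feedback loop, not a brain model):

```python
# Toy discrete-time delayed feedback loop: each step feeds back the
# negated signal from `delay` steps earlier, as a stand-in for the
# claimed cortex-to-thalamus loop.
delay = 5
series = [1.0, 0.0, 0.0, 0.0, 0.0]   # initial contents of the loop
for t in range(delay, 60):
    series.append(-series[t - delay])

# The loop settles into a sustained wave with period 2 * delay:
# the delayed negative feedback keeps flipping the sign of the state.
period = 2 * delay
```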
The thought experiment in the link you sent obviously has no bearing on the fact that I believe I am the wave function generated by my brain. In fact, I would cease to exist simply because my wave function would be destroyed. The person on Mars would not be me.

And I disagree that the claim that I am the electrical pattern is like the claim that a computer is made by electricity.

A computer has no feedback loops that spontaneously generate the operating system via emergence. It is a very linear system with precise inputs and outputs, all controlled by software and hardware.

The architectures of brains and computers work so differently that arguing that they are in any way similar is pointless.

Even von Neumann argued that the brain may not be digital at all, and therefore trying to emulate it via digital computers could be an insurmountable task.

Anyways, hopefully this clarifies a bit of my thoughts on the matter. They come from my own blending of mathematics, neuroanatomy and computer science. I may be wrong, but I also think people that equate computers with brains are wrong. It would be interesting to know the answer either way.