(Part 3) Top products from r/singularity


We found 67 products mentioned on r/singularity and ranked them by the number of redditors who mentioned them. Here are the 20 products ranked 41-60.


Top comments that mention products on r/singularity:

u/Congruesome · 1 point · r/singularity

I used to think that a self-aware machine intelligence was not going to be created by human beings (whether or not such a thing is even possible), but I have started to change my view, for a couple of reasons.

One is the understanding that self-awareness, that is, a sense of discrete identity, may not be a necessary component of a high intelligence. An exponentially more intelligent entity than any human might be perfectly possible without that entity being in any way self-aware.

http://www.beinghuman.org/metzinger

https://www.amazon.com/Being-No-One-Self-Model-Subjectivity/dp/0262633086/

The other is that if machine AI continues to improve its ability to appear self-aware and human-like, it will pass Turing tests on the strength of its sophistication and superior speed, even if it never actually becomes self-aware. And in that case, what's the difference?

Of course, it is useful to keep in mind that in attempting to create machine intelligence comparable to human intelligence, the human intelligence has the advantage of three billion years of ruthless, make-or-break R&D...

In any case, I am fairly certain it's not such a hot idea.

u/aim2free · 1 point · r/singularity

No, I haven't read that, but I just checked a summary on Wikipedia.

The impression I got is that it is quite populist. He doesn't say anything new apart from something I seem to have published around the same time on my blog, the part about accelerated returns. I did my PhD in computational neuroscience and have so far heard no one but myself speculate that accelerated returns are important to the computational efficiency of the brain[1], so this is interesting. Otherwise (I only gave it a quick look through; I will likely get the book and read it) it seems he is just repeating things that e.g. Douglas Hofstadter, Gerald Edelman, Daniel Dennett, and I (thesis from 2003, chapter 7, speculative part) have written about.

> apparently to give him the resources to put into practice his hypothesis from that book.

Yes, this is my theory as well: to make it appear as if he will put into practice the hypotheses from that book.

His employment could have several motivations:

  1. to ride the singularity "AI hype"
  2. to stop him from actually implementing conscious AI
  3. the naïve assumption that he could make it

No. 1 would simply be a reasonable business-image approach. No. 2 would be the act of a sensible being, as we do not really need any "conscious AI" (unless I am an AI; I do have A.I. in my middle names, though...) to implement the singularity (which is my project). No. 3 is also reasonable: if the Google engineers actually had the goal of implementing conscious AI and knew how to do it, they wouldn't need Kurzweil.

However, I suspect that Google already knows how to implement ethical conscious AI, because when I showed this algorithm from my thesis, he almost instantly refused to talk with me further and said that they could not help me.

I showed that algorithm to 25 strong-AI researchers at a symposium in Palo Alto in 2004, and they said: yes, this is it.

However, I have since refined it and concluded that the "rules" are not needed; they are built in by the function of the neural system, which is constantly striving toward consistent solutions. I wrote up a semi-jocular (the best way to hide something, as I learned from Douglas Adams) approach to an almost rule-free algorithm in 2011. The disadvantage of this algorithm is that it can trivially be turned evil: by switching the first condition you could implement e.g. Hitler; by switching the second condition, the ordinary governmental politician...

  4. OK, my PhD opponent, Prof. Hava Siegelmann, has proved that neural networks are super-Turing, but she has not explicitly explained the reason they are, that is, not in the language of "accelerated returns". She is considerably smarter than me; I do not understand the details of the proof.

u/Mazzaroth · 14 points · r/singularity

You put your finger on a subject I've been entertaining for some time now. Here are some of the web resources I accumulated over time related to this very specific idea:

u/MasterFubar · 1 point · r/singularity

> "why am I doing this" only makes sense in relation to intermediate goals

If you think like that you aren't very good at solving problems.

This little book mentions an interesting problem they had at NASA during the Mariner 4 program in the 1960s. They were trying to develop a damper to retard the opening of the solar panels in space. Every solution they tried had some problem.

In the end, they found the perfect solution, and it worked flawlessly: don't do it. The solar panels didn't need any damping; they could open as fast as they liked.

This perfect solution was found only because they asked the "why am I doing this" question about the intermediate goal of developing a damper, and traced it back to the ultimate goal: getting the solar panels open.

Maybe, in the case of the paperclip-making machine, the perfect solution would be to print everything on a single page, or to scan the documents and work with digital copies. A good AI should be prepared to find this kind of solution.



u/RandomMandarin · 0 points · r/singularity

I think Roko's Basilisk has a lot in common with Pascal's Wager, which I suppose is why it doesn't scare the shit out of me.

Pascal's Wager says, basically, that believing in God could bring eternal limitless reward, and disbelieving could bring eternal limitless punishment, so even if you think there is almost no chance that there is a God, you should believe. It's just safer that way.

Problem is, there was never a choice between THE God and nothing; there are a crapload of gods and belief systems making competing claims about reality. Your chance of picking the right one at random is almost nil. It's a mug's game.

Are we really supposed to worship the religion that makes the most extravagant claims, because it brings infinite utility functions into the equation? Why, that just makes it more likely that the high priest is a double-dyed liar!
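The wager's arithmetic can be sketched as a toy expected-utility comparison. This is purely illustrative: the probabilities and payoffs are made up, with large finite numbers standing in for the "infinite" stakes, and the function name is mine, not from any decision-theory library.

```python
# Toy expected-utility sketch of Pascal's Wager (illustrative numbers only).

def expected_utility(p_god, reward, punishment, cost_of_belief):
    """Return (EU of believing, EU of disbelieving) for a two-row wager."""
    believe = p_god * reward - cost_of_belief      # pay a small cost either way
    disbelieve = p_god * punishment                # punishment is negative
    return believe, disbelieve

# Even at a 0.1% chance of God, a large enough payoff dominates the choice:
believe, disbelieve = expected_utility(
    p_god=0.001, reward=1_000_000, punishment=-1_000_000, cost_of_belief=10
)
assert believe > disbelieve  # 990.0 vs -1000.0
```

Of course, the objection above amounts to adding many mutually exclusive gods as extra rows in this table, at which point no single "believe" column dominates any more.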

Which brings us to Roko's Basilisk. The strongest argument we are offered for the potential existence of this evil AI is that we'll really, REALLY get fucked over if we don't help create it! WE MIGHT EVEN BE IN A SIMULATION THE BASILISK IS ALREADY RUNNING OH SHIIIIIT

Calm down, friends and friends of friends. We have an answer to this blackmail.

Non serviam.

Do what thou wilt. If you, oh foul deity, are really out there, then you know my game and you know I have the freedom to say Non serviam. I will not serve. Go ahead, punish me, if you must. We're all adults here.

In Robert Shea and Robert Anton Wilson's Illuminatus! trilogy, the character Hagbard Celine (an anarchist 'leader', as odd as that sounds) makes this wonderful comment:

>The ultimate weapon isn't this plague out in Vegas, or any new super H-bomb. The ultimate weapon has always existed. Every man, every woman, and every child owns it. It's the ability to say No and take the consequences.

u/kebwi · 1 point · r/singularity

I found your previous comment quite satisfying. May I ask what paper you read? I've written a book and several papers on the topic, but so have others. Michael Cerullo's paper is excellent (I suspect that is the one you are referring to).

If you're interested, my website has all my papers:

http://keithwiley.com/mindRamblings.shtml

u/bombula · 1 point · r/singularity

I love this.

The movie Her was a breath of fresh air because the AIs weren't monsters, even though they did the whole Accelerando thing and hit some Singularity on their own.

It would be hard, but if you can manage it you might want to try pulling a Frankenstein (the original) and making humans the monsters and the "creature" (your AI) the morally superior being.

The thing you're going to struggle with is that it is difficult to write characters who are smarter than you are, and an AGI is smarter than anyone. One trick you could use is to keep in mind that an AI will be able to anticipate almost everything a human will say or do; it will almost seem prescient, able to see into the future. So any trick or outwitting of the AI that the humans attempt will need to ultimately turn out to be part of the AI's plan. But I think it would be fun if the AI had a benevolent or inscrutable plan, instead of just a boring old Big Evil Plan. Maybe a fun twist could be that it planned to be trapped, for some reason.

u/Supervisor194 · 2 points · r/singularity

God might be hiding somewhere too. Pixies might. Fairy dust too. Until we come up with something that is provable, however, it's useless speculation. There is not even a shred of proof of anything that even remotely resembles a soul. And I'm not just saying that to be contrary, I really wish there was something. I'm the kind of guy that reads books like Spook - which is a great book, by the way - about the earnest search for... something. It just isn't there.

u/sippykup · 1 point · r/singularity

I started reading this book after I saw it mentioned on this subreddit, and I recommend it. Relevant and interesting: Our Final Invention: Artificial Intelligence and the End of the Human Era

u/Singular_Thought · 2 points · r/singularity

Sometimes I ponder the same idea. Ultimately we won't know until consciousness is better understood. The research is moving forward.

A great book on the matter is:

Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts
by Stanislas Dehaene (Author)


http://www.amazon.com/gp/product/0670025437/

u/[deleted] · 1 point · r/singularity

Great clip, thanks. He is simply applying to transhumanism specifically what he wrote about more broadly in "Straw Dogs: Thoughts on Humans and Other Animals".

u/thisisbecomingabsurd · 3 points · r/singularity

A lot of people consciously/subconsciously want an excuse to exploit other people, and the easiest way is often to think of them as objects not people.

For sex:

For power:

For conquest:

For meaning:

For varying personal reasons:

u/SrslyPaladin · 1 point · r/singularity

There is an entire area of research in philosophy devoted to your second question, called evolutionary complexity theory. There are a number of publications; one I've read is https://www.amazon.com/Complexity-Function-Cambridge-Studies-Philosophy/dp/0521646243/

u/ReturnOfMorelaak · 2 points · r/singularity

Not perfectly on-topic, but read Spin Control by Chris Moriarty.

For that matter, read all three, starting with this one. But Spin Control focuses heavily on future Middle East relations.

u/Dancing_Damaru · 1 point · r/singularity

In the context of Singularity I would explain TM from the perspective of the Yoga Vasistha.

"O Rama, there is no intellect, no consciousness, no mind and no individual soul (jiva). They are all imagined in Brahman."

"That consciousness which is the witness of the rise and fall of all beings – know that to be the immortal state of supreme bliss."

"The moon is one, but on agitated water it produces many reflections. Similarly, ultimate reality is one, yet it appears to be many in a mind agitated by thoughts."

https://en.wikipedia.org/wiki/Yoga_Vasistha

To buy it:

http://www.amazon.com/Vasisthas-Yoga-Special-Paper-27/dp/0791413640/

Also, you might like to check out John Hagelin's talk at Stanford.

http://www.youtube.com/watch?v=R9ucmRglCTQ

u/ajtrns · 1 point · r/singularity

It actually has affected me a lot in the last year, in a paralysing and negative way. But my way of looking at it is this:

  • There are many ways the singularity may fail, or ways in which we'll be left behind or become zoo pets. If that's our future, there's a lot we can do now to make that future better.

  • As for how it might not work out as expected, read everything by Vernor Vinge. He's been writing stories about failed singularities for a while. Especially the 3-part "Across Realtime" compilation.

  • You could go into any of the many sciences that will make the singularity real. You can be a small part of making it happen. Ride that wave.

  • There's really no telling whether the small things you do in your life will affect the future in a given way. If you're not a little worker bee in the sciences or technology, and you're not making speculative fiction that shapes the future, you are still probably contributing to the world. The world is weird, and relatively few of its underlying properties and mechanisms are known, despite what our seemingly enormous network of collective knowledge might make us think. There's a lot to be discovered, a lot that can go wrong, a lot that will be weird. Maybe being worried and paralysed by a "perceived" inevitable future is a good strategy for you. Probably not, though! Use this time to be awesome, or to practice for an awesome future.