(Part 2) Best AI & machine learning books according to redditors


We found 3,368 Reddit comments discussing the best ai & machine learning books. We ranked the 567 resulting products by number of redditors who mentioned them. Here are the products ranked 21-40. You can also go back to the previous section.


Subcategories:

Artificial intelligence & science books
Computer neural networks books
Artificial intelligence books
Machine theory books
Computer vision & graphics books
Natural language processing books

Top Reddit comments about AI & Machine Learning:

u/electricfistula · 64 pointsr/changemyview

If you are interested, and want a real challenge to your view, I strongly recommend Nick Bostrom's Superintelligence.

A key takeaway is imagining intelligence on this kind of scale. That is, our intuition says Einstein is much smarter than the dumbest person you know. Yet, the dumb guy and Einstein have the same hardware, running about the same number of computations. The difference between a bee and a mouse or a mouse and a chimp is orders of magnitude. The difference between an idiot and a genius is very small in comparison.

AI will seem very stupid to the human observer until almost exactly the point it becomes amazingly brilliant. As AI overtakes bees (probably has already) and mice it will still seem dumb to you. As it overtakes chimps, still much dumber than the dumbest person you know. As it draws even to the moron, you might think that the AI has a lot of work left to go. Instead, it's almost unfathomably brilliant.

The distance between a bee and a chimp is 1,000,000,000; the difference between a moron and the smartest man who ever lived is 5.

u/fusionquant · 46 pointsr/algotrading

First of all, thanks for sharing. The code and the idea's implementation suck, but this might turn into a very interesting discussion! By admitting that your trade idea is far from unique and brilliant, you took a super important step in learning. This forum needs more posts like this, and I encourage people to provide feedback!

The idea itself is decent, but your code does not implement it:

  • You want to hold stocks that are going up, right? Well, imagine a stock that is above its 100ma and 50ma, but below its 20ma and 10ma: it is just starting to turn down. According to your code, this stock is labeled a 'rising stock', which is wrong.

  • SMAs are generally not cool, due to a lag of about half the MA period (an n-day SMA reflects prices that are, on average, (n-1)/2 days old).

  • Think of other ways to implement your idea of gauging "going up stocks". Try to define what is a "stock that is going up".

  • The overbought/oversold part. This part is even worse. You heard that "RSI measures overbought/oversold", so you plugged it in. You have to define "overbought/oversold" first, then check whether RSI implements your idea of overbought/oversold best, and only then include it.

  • Since you did not define "overbought/oversold" or check whether RSI is good for it, you decided to throw a couple more indicators on top, just to be sure =) That is a bad idea. Mindlessly introducing more indicators does not improve your strategy, but it does greatly increase overfitting.

  • Labeling "Sell / Neutral / Buy " part. It is getting worse =)) How did you decide what thresholds to use for the labels? Why does ma_count and oscCount with a threshold of 0 is the best way to label? You are losing your initial idea!
    Just because 0 looks good, you decide that 0 is the best threshold. You have to do a research here. You'd be surprised by how counter intuitive the result might be, or how super unstable it might be=))

  • Last but not least: please count the number of parameters. MAs, RSI, OSC, BBand + thresholds for RSI, OSC + label thresholds ... I don't want to count, but I am sure it is well above 10 (maybe 15+?). Now even if you test just 6-7 values per parameter, your parameter space will be 10k+ possible combinations (see the sketch below). And that is just for a simple strategy.

  • With 10k+ combinations on daily data, I can overfit to a perfect straight-line PnL. There is no way, with so many degrees of freedom, to tell whether you are overfitting or not. Even on 1-minute data!

    The lesson is: idea first. Define it well. Then try to pick the minimal number of indicators (or functions) that implement it. Check the parameter space. If you have too many parameters, discard your idea, since you will not be able to tell whether it makes or loses money because it has an edge or purely by chance!

    What is left out of this discussion: cross-validation and picking the best parameters going forward.
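
    To make the parameter-counting point concrete, here is a minimal Python sketch (the indicator names and grid values are hypothetical, not taken from the original code):

        # Hypothetical parameter grid: 9 parameters, 3 candidate values each.
        param_grid = {
            "ma_fast": [5, 10, 20],
            "ma_slow": [50, 100, 200],
            "rsi_period": [7, 14, 21],
            "rsi_overbought": [70, 75, 80],
            "rsi_oversold": [20, 25, 30],
            "osc_threshold": [-1, 0, 1],
            "bb_period": [10, 20, 30],
            "bb_width": [1.5, 2.0, 2.5],
            "label_threshold": [-1, 0, 1],
        }

        # The search space multiplies: 3^9 = 19,683 strategy variants.
        n_combinations = 1
        for values in param_grid.values():
            n_combinations *= len(values)
        print(n_combinations)  # 19683

    With roughly 20k candidate parameter sets and only a few thousand daily bars, some combination will look profitable by pure chance, which is exactly the overfitting risk described above.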

    Recommended reading:
  • https://www.amazon.com/Building-Winning-Algorithmic-Trading-Systems/dp/1118778987/
  • https://www.amazon.com/Elements-Statistical-Learning-Prediction-Statistics/dp/0387848576/
u/zorfbee · 32 pointsr/artificial

Reading some books would be a good idea.

u/codeodor · 30 pointsr/programming

Artificial Intelligence: A Modern Approach by Norvig and Russell

http://aima.cs.berkeley.edu/

On Amazon

Edit: Good point from nacarlson about the 2nd edition. I've changed the link to reflect that.

u/MPREVE · 29 pointsr/math

There's an excellent essay by Douglas Hofstadter in his Metamagical Themas collection where he discusses the nature of creativity. It's called "Variations on a Theme as the Crux of Creativity," and I couldn't immediately find an upload online.

The entire book is certainly worth reading, but this essay in particular stood out to me.

One of the essential ideas-- that I'm paraphrasing very poorly-- is that creativity is a consequence of your brain's ability to create many hypothetical scenarios, to ask what-if, subjunctive questions.

The important corollary to that is that it's very good to have a deep understanding of many different fields and topics, because then your brain has a wide variety of conceptual objects to compare, and there's abundant opportunity for two concepts you understand to fuse into a new idea.

Based on this and some other thoughts, my current understanding of creativity and knowledge is this:

  • If you learn anything well, it will help you learn many other things. Information can be transferred from vastly disparate areas, but only if you have a deep structural understanding.

  • Having a wide span of knowledge immensely improves your creative capacity.

    Math is great, but I'm saddened by a notion held by many of my professors, and many of my fellow students-- this idea that only math is great.
u/cronin1024 · 25 pointsr/programming

Thank you all for your responses! I have compiled a list of books mentioned by at least three different people below. Since some books have abbreviations (SICP) or colloquial names (Dragon Book), not to mention the occasional omission of a starting "a" or "the", this was done by hand, and as a result it may contain errors.

edit: This list is now books mentioned by at least three people (was two) and contains posts up to icepack's.

edit: Updated with links to Amazon.com. These are not affiliate - Amazon was picked because they provide the most uniform way to compare books.

edit: Updated up to redline6561


u/nsfmc · 21 pointsr/programming

he lost me when he said this about GEB
> If you only read one book on this list, it should be this one.

seriously? it's not that i don't appreciate the sentiment, but "things douglas hofstadter thinks are neat" is no substitute for any single book on the rest of the list unless you

  • have no other way to explain at cocktail parties what you studied at school
  • try to sound smart at cocktail parties by talking about things in GEB without actually referencing the book.

    for my part, i'd add sipser's computation book and why not throw in some ken thompson in there as an amuse bouche?
u/what-s_in_a_username · 21 pointsr/funny

Just read the article I linked to and you'll understand why Google+ is not just a Facebook clone. Seriously, it's a long article but totally worth it. You might also need to read and understand this book to know what they're working towards.

Google basically knows that technology will take over and all humans will be linked together in some form of hive-mind in the future. They're working towards making such a thing reality, but they can't tell us that because people who can barely use computers wouldn't understand and would be scared shitless.

u/yggdrasilly · 19 pointsr/compsci
u/Thedabit · 18 pointsr/lisp

Some context: I've been living in this house for about 3 years now; my girlfriend and I moved in to take care of the owner of the house. Turns out that he was a big lisp / scheme hacker back in the 80s-90s and had developed a lot of cutting edge tech in his heyday. Anyway, these books have been hiding in his library downstairs...

It was like finding a bunch of hidden magical scrolls of lost knowledge :)

edit: I will compile a list of the books later. I'm out doing 4th of July things.

update: List of books

  • Lisp: Style and Design by Molly M. Miller and Eric Benson
    ISBN: 1-55558-044-0

  • Common Lisp The Language Second Edition by Guy L. Steele
    ISBN: 1-55558-042-4

  • The Little LISPer Trade Edition by Daniel P. Friedman and Matthias Felleisen
    ISBN: 0-262-56038-0

  • Common LISPcraft by Robert Wilensky
    ISBN: 0-393-95544-3

  • Object-Oriented Programming in Common Lisp by Sonya E. Keene
    ISBN: 0-201-17589-4

  • Structure and Interpretation of Computer Programs by Harold Abelson, Gerald Jay Sussman w/Julie Sussman
    ISBN: 0-07-000-422-6

  • ANSI Common Lisp by Paul Graham
    ISBN: 0-13-370875-6

  • Programming Paradigms in LISP by Rajeev Sangal
    ISBN: 0-07-054666-5

  • The Art of the Metaobject Protocol by Gregor Kiczales, Jim des Rivieres, and Daniel G. Bobrow
    ISBN: 0-262-11158-6

  • Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp by Peter Norvig
    ISBN: 1-55860-191-0

  • Practical Common Lisp by Peter Seibel
    ISBN: 1-59059-239-5

  • Common Lisp The Language by Guy L. Steele
    ISBN: 0-932376-41-X

  • Anatomy of Lisp by John Allen
    ISBN: 0-07-001115-X

  • LISP, Objects, and Symbolic Programming by Robert R. Kessler
    ISBN: 0-673-39773-4

  • Performance and Evaluation of Lisp Systems by Richard P. Gabriel
    ISBN: 0-262-07093-6

  • A Programmer's Guide to Common Lisp by Deborah G. Tatar
    ISBN: 0-932376-87-8

  • Understanding CLOS The Common Lisp Object System by Jo A. Lawless and Molly M. Miller
    ISBN: 0-13-717232-X

  • The Common Lisp Companion by Tim D. Koschmann
    ISBN: 0-471-50308-8

  • Symbolic Computing with Lisp and Prolog by Robert A. Mueller and Rex L. Page
    ISBN: 0-471-60771-1

  • Scheme and the Art of Programming by George Springer and Daniel P. Friedman
    ISBN: 0-262-19288-8

  • Programming In Scheme by Michael Eisenberg
    ISBN: 0-262-55017-2

  • The Schematics of Computation by Vincent S. Manis and James J. Little
    ISBN: 0-13-834284-9

  • The Joy of Clojure by Michael Fogus and Chris Houser
    ISBN: 1-935182-64-1

  • Clojure For The Brave and True by Daniel Higginbotham
    ISBN: 978-1-59327-591-4



u/christianitie · 17 pointsr/math

I would guess that career prospects are a little worse than CS for undergrad degrees, but since my main concern is where a phd in math will take me, you should get a second opinion on that.

Something to keep in mind is that "higher" math (the kind most students start to see around junior level) is in many ways very different from the stuff before. I hated calculus and doing calculations in general, and was pursuing a math minor because I thought it might help with job prospects, but when I got to the more abstract stuff, I loved it. It's easily possible that you'll enjoy both; I'm just pointing out that enjoying one doesn't necessarily imply enjoying the other. It's also worth noting that making the transition is not easy for most of us, and that if you struggle a lot when you first have to devote a lot of time to proving things, it shouldn't be taken as a signal to give up if you enjoy the material.

This wouldn't be necessary, but if you like, here are some books on abstract math topics, aimed at beginners, that you could look into to get a basic idea of what more abstract math is like:

  • theoretical computer science (essentially a math text)

  • set theory

  • linear algebra

  • algebra

  • predicate calculus

    Different mathematicians gravitate towards different subjects, so it's not easy to predict which you would enjoy more. I'm recommending these five because they were personally helpful to me a few years ago and I've read them in full, not because I don't think anyone can suggest better. And of course, you could just jump right into coursework like how most of us start. Best of luck!

    (edit: can't count and thought five was four)
u/[deleted] · 16 pointsr/programming

Personally, I think the thing to realize is that if you know how to program, then you know how to model solutions to problems in complex domains. I would then learn more about the mathematical background of modeling complex domains in general and reasoning about them. If you haven't already bought a copy of Artificial Intelligence: A Modern Approach, I highly recommend doing so and working through it, then picking some application to build using what you've learned, e.g. developing a real-estate investment advice system that uses multi-decade trend data across the country, taking into account recent events, to make sound investment advice, learning from experience (i.e. new data) as it goes.

In other words, think

  • large-scale
  • time-varying data
  • machine-learning
  • decision support

u/ultraliks · 16 pointsr/datascience

Sounds like you're looking for the statistical proofs behind all the hand-waving commonly done by "machine learning" MOOCs. I recommend this book. It's very math-heavy, but it covers the underlying theory well.

u/am_i_wrong_dude · 16 pointsr/medicine

I've posted a similar answer before, but can't find the comment anymore.

If you are interested in doing your own statistics and modeling (like regression modeling), learn R. It pays amazing dividends for anyone who does any sort of data analysis, even basic biostats. Excel is for accountants and is terrible for biological data. It screws up your datasets when you open them, has no version control/tracking, has only rudimentary visualization capabilities, and cannot do the kind of stats you need to use the most (like right-censored data for Cox proportional hazards models or Kaplan-Meier curves). I've used SAS, Stata, SPSS, Excel, and a whole bunch of other junk in various classes and various projects over the years, and now use only R, Python, and Unix/Shell with nearly all the statistical work being in R. I'm definitely a biased recommender, because what started off as just a way to make a quick survival curve that I couldn't do in Excel as a medical student led me down a rabbit hole and now my whole career is based on data analysis. That said, my entire fellowship cohort now at least dabbles in R for making figures and doing basic statistics, so it's not just me.

R is free, has an amazing online community, and is in heavy use by biostatisticians. The biggest downsides are

  • R is actually a strange and unpopular general programming language (Python is far superior for writing actual programs)
  • It has a steep initial learning curve (though once you get the basics it is very easy to learn advanced techniques).

    Unfortunately learning R won't teach you actual statistics.... for that I've had the best luck with brick-and-mortar classes throughout med school and later fellowship but many, many MOOCs, textbooks, and online workshops exist to teach you the basics.

    If I were doing it all over again from the start, I would take a course or use a textbook that integrated R from the very beginning such as this.

    Some other great statistical textbooks:

  • Introduction to Statistical Learning -- free legal PDF here -- I can't recommend this book enough
  • Elements of Statistical Learning -- A masterpiece of machine learning and modeling. I can't pretend to understand this whole book, but it is a frequent reference and aspirational read.

    Online classes:
    So many to choose from, but I am partial to DataCamp

    Want to get started?

  • Download R directly from its host, CRAN
  • Download RStudio (an integrated development environment for R that makes life infinitely easier) from its website (also free)
  • Fire up RStudio and type the following commands after the > prompt in the console:

    install.packages("swirl")

    library("swirl")

    swirl()

    And you'll be off and running in a built-in tutorial that starts with the basics (how do I add two numbers) and ends (last I checked) with linear regression models.

    ALL OF THAT SAID ------

    You don't need to do any of that to be a good doctor, or even a good researcher. All academic institutions have dedicated statisticians (I still work with them all the time -- I know enough to know I don't really know what I am doing). If you can do your own data analysis though, you can work much faster and do many more interesting things than if you have to pay by the hour for someone to make basic figures for you.
u/ixampl · 15 pointsr/compsci

I think the field you are looking for is called Natural Language Processing.
There is a nice introductory lecture on it on coursera.

I think this is the standard introduction book.

u/twopoint718 · 14 pointsr/programming

My favorite example of a mind-bogglingly well-staffed company was "Thinking Machines Corporation":

(Taken from Wikipedia, not exhaustive!):

  • Greg Papadopoulos (Sun CTO)
  • Guy L Steele, Jr. (Scheme designer)
  • Brewster Kahle (Internet Archive, Founder)
  • Marvin Minsky (AI pioneer)
  • Doug Lenat (AI pioneer, Cyc project)
  • Stephen Wolfram (Mathematica creator)
  • Eric Lander (Human Genome Project, President Obama's council of science and technology advisors, Co-chair)
  • Richard Feynman (Nobel Prize, Physics, Manhattan Project)
  • Alan Harshman (High-performance computing, AI)
  • Tsutomu Shimomura (security expert, notable for his involvement in the arrest of Kevin Mitnick)

    I found this when I was reading about Feynman one time. This isn't meant to disparage Google at all; it's just an amazing list.

    EDIT: I forgot to mention what I started out writing. Feynman produced an excellent book, The Feynman Lectures on Computation, which if you're familiar with the physics version of the same, is an incredibly lucid, short, and informative book. I think this would make an excellent textbook for a course in computer architecture.

u/joinr · 12 pointsr/lisp

Some CL-specific resources:

  • The book Land of Lisp has some sections specifically on functional programming, and answers some of these questions. It goes into more detail on the philosophy and spirit of separating effects and organizing code, albeit for a limited example. Chapter 14 introduces it (in the context of CL), then implements the core of the game Dice of Doom in a functional style in chapter 15.

  • On Lisp discusses programming in the functional style early on in Ch2/3 (with an emphasis on bottom-up programming). I think Graham uses a functional style more-or-less throughout, except for performance optimizations or where the imperative implementation is actually more desirable for clarity.

  • Peter Norvig similarly leverages a bit of a functional style throughout PAIP, and he has several remarks about leveraging higher order functions, recursion, and small, composable functions throughout the text. FP isn't the focus, but it's discussed and present.

  • Practical Common Lisp has some brief mentions and examples in chapters 5 and 12.

    Non-CL:

  • SICP starts off with functional programming from the start. Although it's scheme, the ideas are similarly portable to CL. It's an excellent resource in general, regardless of language interest IMO.

  • There's a chapter in the free Clojure For the Brave and True that more-or-less covers the bases and builds a small game functionally. Due to its prevalence, you pretty much find articles/blogs/chapters on FP in every clojure resource. I found the ideas generally portable when revisiting CL (absent reliance on persistent structures with better performance profiles than lists and balanced binary trees).

  • Joy of Clojure Ch7 specifically focuses on FP concepts and applies them to implement a functional version of A* search. They run through simple functions, function composition, partial function application, functions as data, higher order functions, pure functions / referential transparency, closures, recursive thinking, combining recursion with laziness, tail calls, trampolines, and continuation passing style. (A minimal illustration of a few of these ideas appears at the end of this comment.)

    Others:

  • http://learnyouahaskell.com/chapters

    I flip back and forth between Clojure and CL periodically (CL is for hobbies, clojure is for work and hobbies), and have mucked with scheme and racket a bit (as well as decent mileage in F#, Haskell, and a little Ocaml from the static typed family). IME, you can definitely tell the difference between a language with support for FP strapped on after the fact, vs. one with it as a core design (preferably with mutable/imperative escape hatches). CL supports FP (closures/functions are values (e.g. lambda), there's a built-in library of non-destructive pure functions that typically operate on lists - or the non-extensible sequence class, and non-standard but general support for optimizing tail recursive functions into iterative ones enables pervasive use of recursion in lieu of iteration), but I think it's less of a default in the wild (not as unfriendly as Python is to FP though). Consequently, it's one paradigm of many that show up; I think there's likely as much if not more imperative/CLOS OOP stuff out there though. I think the alternate tact in clojure, scheme, and racket is to push FP as the default and optimize the language for that as the base case - with pragmatic alternative paradigm support based on the user's desire. Clojure takes it a step farther by introducing efficient functional data structures (based on HAMTs primarily, with less-used balanced binary trees for sorted maps and sets) so you can push significantly farther without dropping down to mutable/imperative stuff for performance reasons (as opposed to living and dying by the performance profiles of balanced binary trees for everything). You'll still find OOP and imperative support, replete with mutation and effects, but it's something to opt into.

    In the context of other FP langs, F# and Ocaml do this as well - they provide a pretty rigorous locked-down immutable approach with functional purity as the default, but they admit low-hanging means to bypass the purity should the programmer need to. Haskell kinda goes there but it's a bit more involved to tap into the mutable "escape hatches" by design.

    In the end, you can pretty much bring FP concepts into almost any language (e.g. write in a functional style), although it's harder to do so in languages that don't have functions/closures as a first-class concept (to include passing them as values). Many functional languages have similar libraries and idioms for messing with persistent lists or lazy sequences; that's good news, since all those ideas and idioms are more or less portable directly to CL (and, as mentioned, there are likely extant libraries to bring these around in addition to the standard map, filter, reduce built-ins). For more focused FP examples and thinking, clojure, racket, and scheme are good bets (absent an unknown resource that exclusively focuses on FP in CL, which would seem ideal for your original query). I think dipping into the statically typed languages would also be edifying, since there are plenty of books and resources in that realm.
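
    Since the concepts port directly across these languages, here is a minimal Python illustration of a few of the ideas above (pure functions, partial application, composition); a sketch of my own, not an excerpt from any of these books:

        from functools import partial, reduce

        # Pure function: the result depends only on its arguments, no side effects.
        def scale(factor, x):
            return factor * x

        # Partial application: fix one argument to get a new, more specific function.
        double = partial(scale, 2)

        # Higher-order function: compose takes functions and returns a new function.
        def compose(*fns):
            return reduce(lambda f, g: lambda x: f(g(x)), fns)

        add_one = lambda x: x + 1
        double_then_add_one = compose(add_one, double)

        print(double_then_add_one(10))  # (10 * 2) + 1 = 21
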
u/nkk36 · 12 pointsr/datascience

I've never heard of that book before, but I took a look at their samples and they all seem legitimate.

I would just buy the Ebook for $59 and work through some problems. I'd also maybe purchase some books (or find free PDFs online). Given that you don't have a deep understanding of ML techniques I would suggest these books:

  1. Intro to Statistical Learning
  2. Data Science for Business

    There are others as well, but those are two introductory-level textbooks I am familiar with and often suggested by others.
u/ajh2148 · 11 pointsr/computerscience

I’d personally recommend Andrew Ng’s deeplearning.ai course if you’re just starting. This will give you practical, guided experience with TensorFlow using Jupyter notebooks.

If it’s books you really want, I found the following of great use in my studies, but they are quite theoretical, framework-agnostic publications. They will help explain the theory though:

Deep Learning (Adaptive Computation and Machine Learning Series) https://www.amazon.co.uk/dp/0262035618/ref=cm_sw_r_cp_api_i_Hu41Db30AP4D7

Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning series) https://www.amazon.co.uk/dp/0262039249/ref=cm_sw_r_cp_api_i_-y41DbTJEBAHX

Pattern Recognition and Machine Learning (Information Science and Statistics) (Information Science and Statistics) https://www.amazon.co.uk/dp/0387310738/ref=cm_sw_r_cp_api_i_dv41DbTXKKSV0

Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning series) https://www.amazon.co.uk/dp/B00AF1AYTQ/ref=cm_sw_r_cp_api_i_vx41DbHVQEAW1

u/Shadowsoal · 11 pointsr/compsci

In the theoretical field of complexity...

The 1979 version of Introduction to Automata Theory, Languages, and Computation by Hopcroft & Ullman is fantastic and used to be the canonical book on theoretical computer science. Unfortunately the newer versions are too dumbed down, but the old version is still worth it! These days Introduction to the Theory of Computation by Sipser is considered to be the canonical theoretical computer science text. It's also good, and a better "introduction" than H&U. That said, I prefer H&U and recommend it to anyone who's interested in more than getting through their complexity class and forgetting everything.

In the theoretical field of algorithms...

Introduction to Algorithms by Cormen, Leiserson, Rivest and Stein is dynamite, pretty much everything you need to know. Unfortunately it's a bit long-winded and not very instructive. For a more instructive take on algorithms, take a look at Algorithms by Dasgupta, Papadimitriou and Vazirani.

u/idiosocratic · 11 pointsr/MachineLearning

For deep learning reference this:
https://www.quora.com/What-are-some-good-books-papers-for-learning-deep-learning

There are a lot of open courses I watched on YouTube regarding reinforcement learning: one from Oxford, one from Stanford, and another from Brown. Here's a free intro book by Sutton, very well regarded:
https://webdocs.cs.ualberta.ca/~sutton/book/the-book.html

For general machine learning their course is pretty good, but I did also buy:
https://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130/ref=sr_1_1?ie=UTF8&qid=1467309005&sr=8-1&keywords=python+machine+learning

There were a lot of books I got into that weren't mentioned. Feel free to pm me for specifics. Cheers

Edit: If you want to get into reinforcement learning check out OpenAI's Gym package, and browse the submitted solutions

u/country_dev · 10 pointsr/learnmachinelearning

Hands-On Machine Learning. The 2019 version. It’s one of the best ML books I have come across.

u/Homunculiheaded · 10 pointsr/programming

The problem with ANSI CL is that I could never shake the feeling that Graham wants Lisp in general to maintain some mystique as a language only suited for the very clever, and he teaches the language with the intent of keeping it that way. I really enjoyed PCL, but I really do think that Paradigms of Artificial Intelligence Programming needs to get more attention. Granted, I haven't yet finished the mammoth volume, but Norvig introduces the language in a clear way that makes it seem more natural (a perfect example is that he prefers 'first' and 'rest' rather than the more esoteric 'car' and 'cdr'). Additionally, he has great 'hand holding' examples that show exactly what makes Common Lisp so powerful and how to organize larger programs in the language, as well as going over a ton of interesting CS-related things. Having gone through these 3 books while learning, I can definitely say that each had a lot to offer, but I think if I were trapped on an island with just one I would definitely take PAIP.

u/joshstaiger · 10 pointsr/programming

Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp by Norvig (PAIP)

I don't want to overhype, but it's been called "The best book on programming ever written".

Oops, maybe I overshot. But anyway, very enlightening even if you're not a Lisp or AI programmer.

u/krunk7 · 10 pointsr/programming

Absolutely.

Check out The Elements of Statistical Learning and Introduction to Machine Learning.

edit: those books are about practical applications of what we've learned to date from the neural network style of pattern classification. So it's not about modeling an actual biological neuron. For modeling the biology, it's been a while since I futzed with that. But when I wrote a paper on modeling synaptic firing, Polymer Solutions: An Introduction to Physical Properties was the book for that class. Damned if I remember whether that book has the details I needed or if I had to use auxiliary materials though.

u/markth_wi · 10 pointsr/booksuggestions

I can think of a few

u/sleepingsquirrel · 9 pointsr/ECE
u/SupportVectorMachine · 9 pointsr/deeplearning

Not OP, but among those he listed, I think Chollet's book is the best combination of practical, code-based content and genuinely valuable insights from a practitioner. Its examples are all in the Keras framework, which Chollet developed as a high-level API to sit on top of a number of possible DL libraries. But with TensorFlow 2.0, the Keras API is now fundamental to how you would write code in this pretty dominant framework. It's also a very well-written book.

Ordinarily, I resist books that are too focused on one framework over another. I'd never personally want a DL book in Java, for instance. But I think Chollet's book is good enough to recommend regardless of the platform you intend to use, although it will certainly be more immediately useful if you are working with tf.Keras.
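
To get a feel for what writing against the Keras API looks like, here is a minimal tf.keras sketch (my own illustration with arbitrary layer sizes, assuming TensorFlow 2.x; not an excerpt from Chollet's book):

    from tensorflow import keras

    # A minimal fully connected classifier in the Keras Sequential style.
    model = keras.Sequential([
        keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        keras.layers.Dense(10, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.summary()  # prints the layer/parameter table
    # model.fit(x_train, y_train, epochs=5)  # training, given suitable data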

u/CastigatRidendoMores · 9 pointsr/IAmA

As far as I'm aware, they don't necessarily believe we are near human level AI. However, they do believe it is an inevitable eventuality (on our current track) that we should begin preparing for now - because if it's done wrong it has catastrophic consequences, while if done right can be the best thing that ever happened to us. I second the recommendation for Bostrom's book.

u/CyberByte · 9 pointsr/artificial

> Last few weeks I got very interested in AI and can't stop thinking about it. Watched discussions of philosophers about future scenarios with AI, read all recent articles in media about it.

Most likely you heard about the superintelligence control problem. Check out (the sidebar of) /r/ControlProblem and their FAQ. Nick Bostrom's Superintelligence is pretty much the book on this topic, and I would recommend reading it if you're interested in that. This book is about possible impacts of AI, and it won't really teach you anything about how AI works or how to develop it (neither strong nor weak AI).

For some resources to get started on that, I'll just refer you to some of my older posts. This one focuses on mainstream ("narrow"/"weak") AI, and this one mostly covers AGI (artificial general intelligence / strong AI). This comment links to some education plans for AGI, and this one has a list of cognitive architectures.

u/tiberiusbrazil · 8 pointsr/brasil

> Is there room for people without a formal background?

Yes.

As long as you know how to program, a bit of statistics, ETL, and most importantly: how to communicate and solve problems.

I strongly recommend Dataquest; right now it's the best 1k reais anyone can invest in education, regardless of field.

If you'd rather study from a book: https://www.amazon.com/Hands-Machine-Learning-Scikit-Learn-TensorFlow/dp/1492032646 plus a basic programming course of your choice.

> They prioritize people with formal statistics training

Hmm, that depends on the company. The idea that data science is complex and needs a PhD is already fading (except for machine learning engineers, of whom there are few in Brazil).

u/BBQHonk · 8 pointsr/suggestmeabook

Fiction: Do Androids Dream of Electric Sheep?. The book that was the basis for Blade Runner.

Non-fiction: Superintelligence: Paths, Dangers, Strategies. This is a deep dive into the dangers posed by superintelligent AI. It's a heavy read.

u/marmalade_jellyfish · 8 pointsr/artificial

To gain a good overview of AI, I recommend the book The Master Algorithm by Pedro Domingos. It's totally readable for a layperson.

Then, learn Python and become familiar with libraries and packages such as numpy, scipy, and scikit-learn. Perhaps you could start with Codecademy to get the basics of Python, but I feel like the best way to force yourself to really learn useful stuff is by implementing some project with a goal.
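
As a taste of what those libraries feel like together, here is a minimal scikit-learn sketch using its bundled iris dataset (my own example, with arbitrary choices of model and split):

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Load a small built-in dataset and hold out a test split.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # Fit a simple classifier and check held-out accuracy.
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, y_train)
    print(np.mean(clf.predict(X_test) == y_test))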

Some other frameworks and tools are listed here. Spend a lot more time doing than reading, but reading can help you learn how to approach different tasks and problems. Norvig and Russell's AI textbook is a good resource to have on hand for this.

Some more resources include:

Make Your Own Neural Network book

OpenAI Gym

CS231N Course Notes

Udacity's Free Deep Learning Course

u/jeykottalam · 8 pointsr/compsci

Introduction to Algorithms by CLRS

TAOCP is a waste of time and money; it's more for adorning your bookshelf than for actually reading. Pretty much anyone who suggests TAOCP and is less than 55 years old is just parroting Standard Wisdom™.

Godel, Escher, Bach is a nice book, but it's not as intellectually deep in today's world as it was when first published; a lot of the memes in GEB have been thoroughly absorbed into nerd culture at this point, and the book should be enjoyed more as a work of art than as something particularly informative (IMO).

If you're interested in compilers, I recommend Engineering a Compiler by Cooper & Torczon. Same thing as TAOCP applies to people who suggest the Dragon Book. The Dragon Book is still good, but it focuses too much on parser generators and doesn't really cover enough of the other modern good stuff. (Yes, even the new edition.)

As far as real programming goes, K&R's The C Programming Language is still unmatched for its quality of exposition and brevity, but these days I'd strongly suggest picking up some Python or something before diving into C. And as a practical matter, I'd suggest learning some C++ from Koenig & Moo's Accelerated C++ before learning straight C.

Sipser's Introduction to the Theory of Computation is a good theory book, but I'd really suggest getting CLRS before Sipser. CLRS is way more interesting IMHO.

u/officemonkey · 8 pointsr/reddit.com

The Singularity is Near, so don't count on it.

u/oblique63 · 7 pointsr/INTP

Ishmael - If you ever wondered what it would be like to be a telepathic gorilla, this will probably give you the closest answer.

The 5 Elements of Effective Thinking - The INTP Toolbox.

The Willpower Instinct - Because we all know we could use a bit more of it around here...

Emotional Vampires - A survival guide to protect your Fe

How To Create A Mind - Since it's ultimately the only thing we really seem to care about, it's interesting to think how we could theoretically create a 'backup' for it eventually

The Talent Code - In case you haven't quite figured out how to go about mastering skills yet.

u/groundshop · 7 pointsr/math

It's an introduction to some of the major concepts in Computer Science theory. If you have no background in CS, and a bit of background in math (mid-undergraduate level) it's an enjoyable way to get exposed to a few concepts from CS theory.

If you're really looking to put your head to the grindstone and learn CS theory, there are better books though. I learned from M. Sipser's Intro to Comp. Theory.

P.S. I did walk away from it with a novice appreciation for Bach.

u/PostmodernistWoof · 7 pointsr/MachineLearning

I've been reading and really enjoying "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom. https://www.amazon.com/gp/product/B00LOOCGB2

It's an easy read but it's a hard read, because every couple of sentences your brain wanders off thinking about consequences and stuff, and you keep having to read the page over again.

He does a great job of covering in some depth all the issues surrounding the development of trans-human intelligence, whether it happens via "AI", some form of human augmentation, etc.

One of the better "here's a whole bunch of stuff to think about" books.

u/k0wzking · 6 pointsr/AcademicPsychology

Hello, I was recommended to Coursera by a colleague and have taken an in-depth look at their course catalogue, but I have not taken any courses from them. If you think there are free courses on there that would suit your needs, then go for it, but personally I found that what was offered for free seemed too superficial and purchasable classes did not offer any information that I could not obtain elsewhere for cheaper.

I know a lot of people aren’t like this, but personally I prefer to teach myself. If you are interested in learning a bit about data science, I would strongly recommend Python Machine Learning by Sebastian Raschka. He explains everything with extreme clarity (a rarity in the academic world) and provides Python code that lets you directly implement any method taught in the book. Even if you don’t have an interest in coding, Raschka’s fundamental descriptions of data science techniques are so transparent that he could probably teach these topics to infants. I read the first 90 pages for free on Google Books and was sold pretty quickly.

I’ll end with a shameless plug: a key concept in most data science and machine learning techniques use biased estimation (a.k.a., regularization), of which I have made a brief video explaining the fundamental concept and why it is useful in statistical procedures.
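
For the curious, here is a minimal scikit-learn sketch of that idea (my own illustration, not from Raschka's book): ridge regression deliberately biases coefficients toward zero, which typically lowers variance on small, noisy datasets.

    import numpy as np
    from sklearn.linear_model import LinearRegression, Ridge

    # Tiny noisy dataset: 10 samples, 5 features, only the first one matters.
    rng = np.random.RandomState(0)
    X = rng.randn(10, 5)
    y = X @ np.array([1.0, 0.0, 0.0, 0.0, 0.0]) + 0.1 * rng.randn(10)

    # Ordinary least squares: unbiased, but high-variance coefficients.
    ols = LinearRegression().fit(X, y)

    # Ridge: biased (regularized) estimates, shrunk toward zero.
    ridge = Ridge(alpha=1.0).fit(X, y)

    print(ols.coef_)
    print(ridge.coef_)  # smaller in magnitude than the OLS coefficients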

I hope my non-answer answer was somewhat useful to you.

u/otterom · 6 pointsr/ProgrammerHumor

I have it. Pretty heavy for my tiny brain.

But, anyway, Amazon has it for ~$56. In the world of expensive textbooks, this is a steal.


https://www.amazon.com/dp/0262035618/ref=cm_sw_r_cp_apa_i_G4DJDbH8JXJ71

u/zoombikini · 6 pointsr/programming

Ah...Sipser.

u/llimllib · 6 pointsr/programming
u/allforumer · 6 pointsr/programming

You might like this book -

Feynman Lectures on Computation

u/hapagolucky · 5 pointsr/MachineLearning

Start with Jurafsky and Martin to get a rounded overview of the main problems and approaches. I don't use NLTK myself, but it has a large community around it and some decent tutorials I hear.
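
For a first taste of the NLTK side, a minimal sketch (the example sentence is my own; the download calls fetch the tokenizer and tagger models on first use):

    import nltk

    # One-time model downloads.
    nltk.download("punkt")
    nltk.download("averaged_perceptron_tagger")

    sentence = "The quick brown fox jumps over the lazy dog."
    tokens = nltk.word_tokenize(sentence)  # split into word tokens
    tagged = nltk.pos_tag(tokens)          # (word, part-of-speech) pairs
    print(tagged)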

u/slashcom · 5 pointsr/compsci

In Natural Language Processing, it's Jurafsky and Martin. In Machine Learning, it's debatably the Bishop book.

u/boxstabber · 5 pointsr/LanguageTechnology
u/zrbecker · 5 pointsr/learnprogramming

Depends on what you are interested in.

If you are interested in games, pick a game and do it. Most board games are not that hard to do as a command-line version. A game with graphics, input, and sound isn't too bad either if you use something like Allegro or SDL. Also XNA if you are on Windows. A lot of neat tutorials have been posted about that recently.

If you are more interested in little utilities that do things, you'll want to look at a GUI library, like wxWidgets, Qt and the sort. Both Windows and Mac have their own GUI libraries too - I'm not sure what Windows' is called, but I think you have to write for it with C++/CLI or C#, and Mac's is Cocoa, which uses Objective-C. So if you want to stick to basic C++ you'll want to stick to the first two.

Sometimes I just pick up a book and start reading to get ideas.

This is a really simple Game AI book that is pretty geared towards beginners. http://www.amazon.com/Programming-Game-Example-Mat-Buckland/dp/1556220782/

I enjoyed this book on AI, but it is much more advanced and might be kind of hard for a beginner. Although, when I was first starting, I liked getting in over my head once in a while. http://www.amazon.com/Artificial-Intelligence-Modern-Approach-2nd/dp/0137903952/

Interesting topics to look up.

Data Structures

Algorithms

Artificial Intelligence

Computer Vision

Computer Graphics

If you look at even simple books in these subjects, you will usually find tons of small manageable programs that are fun to write.

EDIT: Almost forgot, I think a lot of these are Java based, but you can usually find a way to do it in C++. http://nifty.stanford.edu/ I think I write Breakout whenever I am playing with a new language. heh

u/apocalypsemachine · 5 pointsr/Futurology

Most of my stuff is going to focus around consciousness and AI.

BOOKS

Ray Kurzweil - How to Create a Mind - Ray gives an intro to neuroscience and suggests ways we might build intelligent machines. This is a fun and easy book to read.

Ray Kurzweil - TRANSCEND - Ray and Dr. Terry Grossman tell you how to live long enough to live forever. This is a very inspirational book.

*I'd skip Kurzweil's older books. The newer ones largely cover the stuff in the older ones anyhow.

Jeff Hawkins - On Intelligence - Engineer and Neuroscientist, Jeff Hawkins, presents a comprehensive theory of intelligence in the neocortex. He goes on to explain how we can build intelligent machines and how they might change the world. He takes a more grounded, but equally interesting, approach to AI than Kurzweil.

Stanislas Dehaene - Consciousness and the Brain - Someone just recommended this book to me so I have not had a chance to read the whole thing. It explains new methods researchers are using to understand what consciousness is.

ONLINE ARTICLES

George Dvorsky - Animal Uplift - We can do more than improve our own minds and create intelligent machines. We can improve the minds of animals! But should we?

David Shultz - Least Conscious Unit - A short story that explores several philosophical ideas about consciousness. The ending may make you question what is real.

Stanford Encyclopedia of Philosophy - Consciousness - The most well known philosophical ideas about consciousness.

VIDEOS

Socrates - Singularity Weblog - This guy interviews the people who are making the technology of tomorrow, today. He's interviewed the CEO of D-Wave, Ray Kurzweil, Michio Kaku, and tons of less well known but equally interesting people.

David Chalmers - Simulation and the Singularity at The Singularity Summit 2009 - Respected Philosopher, David Chalmers, talks about different approaches to AI and a little about what might be on the other side of the singularity.

Ben Goertzel - Singularity or Bust - Mathematician and computer Scientist, Ben Goertzel, goes to China to create Artificial General Intelligence funded by the Chinese Government. Unfortunately they cut the program.



PROGRAMMING

Daniel Shiffman - The Nature of Code - After reading How to Create a Mind you will probably want to get started with a neural network (or Hidden Markov model) of your own. This is your hello world (a bare-bones example follows this list). If you get past this and the math is too hard, use this

Encog - A neural network API written in your favorite language

OpenCV - Face and object recognition made easy(ish).
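
As promised above, a bare-bones "hello world" neural network: a single neuron (perceptron) learning logical OR, in plain Python/numpy (my own sketch, not from The Nature of Code):

    import numpy as np

    # Four OR examples; a constant 1 column folds the bias into the weights.
    X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
    y = np.array([0, 1, 1, 1])
    w = np.zeros(3)

    for _ in range(20):                      # a few training epochs
        for xi, target in zip(X, y):
            pred = 1 if xi @ w > 0 else 0    # step activation
            w += 0.1 * (target - pred) * xi  # perceptron update rule

    print([1 if xi @ w > 0 else 0 for xi in X])  # [0, 1, 1, 1]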

u/dolphonebubleine · 5 pointsr/Futurology

I don't know who is doing PR for this book but they are amazing. It's not a good book.

My review on Amazon:

> The most interesting thing about this book is how Bostrom managed to write so much while saying so little. Seriously, there is very little depth. He presents an idea out of nowhere, says a little about it, and then says [more research needs to be done]. He does this throughout the entire book. I give it two stars because, while extremely diluted, he does present an interesting idea every now and then.

Read this or this or this instead.

u/effernand · 5 pointsr/learnmachinelearning

When I started in the field I took the famous Coursera course by Andrew Ng. It helped me grasp the major concepts in (classical) ML, though it really lacked mathematical depth (truth be told, it was not really meant for that).

That said, I took a course on edX, which covered things in a little more depth. As I was getting deeper into the theory, things became more clear. I have also read some books, such as,

  • Neural Networks, by Simon Haykin,
  • Elements of Statistical Learning, by Hastie, Tibshirani and Friedman
  • Pattern Recognition and Machine Learning, by Bishop

    All these books have their own approach to Machine Learning, and I think it is particularly important that you have a good understanding of Machine Learning, and its impact on various fields (signal processing, for instance), before jumping into Deep Learning. After almost three years of major dedication to studying the field, I feel like I can walk a little by myself.

    Now, as a beginner in Deep Learning, things are a little bit different. I would like to make a few points:

  • If you have a good base in maths and Machine Learning, the algorithms used in Deep Learning will be more straightforward, as some of them are simply extensions of previous attempts.
  • The practical part of Machine Learning seems a little bit childish compared with Deep Learning. When I programmed Machine Learning models, I usually had small datasets and algorithms that could run on a simple CPU.
  • As you begin to work with Deep Learning, you will need to master a framework of your choice, which will raise issues around data usage (most datasets do not fit into memory) and GPU/memory management. For instance, if you don't handle your data well, it becomes a bottleneck that slows down your code. So, compared with simple numpy + matplotlib applications, tensorflow APIs + tensorboard visualizations can be tough.

    So, to summarize, you need to start with simple, boring things until you can be an independent user of ML methods. THEN you can think about state-of-the-art problems to solve with cutting-edge frameworks and APIs.
u/vogonj · 5 pointsr/compsci

I'm quite a fan of Sipser's Introduction to the Theory of Computation: http://www.amazon.com/Introduction-Theory-Computation-Michael-Sipser/dp/053494728X

It's not a full-on algorithms book but formal models were always the most interesting part of theoretical computer science to me. vOv

u/mpdehnel · 5 pointsr/computerscience

How formal do you mean? If you're interested in the theory of computer science, have a read of Sipser's Introduction to the Theory of Computation (or on Amazon - get it 2nd hand). This is a very theoretical book though, and most CS undergrad courses will only cover this type of content as a small part of the subject matter taught, so don't be put off if it doesn't immediately appeal or make sense!

Edit - links.

u/bonesingyre · 5 pointsr/webdev

Sure! There is a lot of math involved in the WHY component of Computer Science; for the basics, it's Discrete Mathematics, so any introduction to that will help as well.
http://www.amazon.com/Discrete-Mathematics-Applications-Susanna-Epp/dp/0495391328/ref=sr_sp-atf_title_1_1?s=books&ie=UTF8&qid=1368125024&sr=1-1&keywords=discrete+mathematics

This next book is a great theoretical overview of CS as well.
http://mitpress.mit.edu/sicp/full-text/book/book.html

That's a great book on computer programming, complexity, data types etc... If you want to get into more detail, check out: http://www.amazon.com/Introduction-Theory-Computation-Michael-Sipser/dp/0534950973

I would also look at Coursera.org's Algorithms lectures by Robert Sedgewick; that's essential learning for any computer science student.
His textbook: http://www.amazon.com/Algorithms-4th-Robert-Sedgewick/dp/032157351X/ref=sr_sp-atf_title_1_1?s=books&ie=UTF8&qid=1368124871&sr=1-1&keywords=Algorithms

another Algorithms textbook bible: http://www.amazon.com/Introduction-Algorithms-Thomas-H-Cormen/dp/0262033844/ref=sr_sp-atf_title_1_2?s=books&ie=UTF8&qid=1368124871&sr=1-2&keywords=Algorithms




I'm just like you - I'm pivoting. I graduated law school specializing in technology law and patents in 2012, but I love comp sci too much, so I went back to school for Comp Sci and jumped into the tech field, getting a job at a tech company.

These books are theoretical, and they help you understand why you should use x versus y; those kinds of things are essential, especially in larger applications (like Google's PageRank algorithm). Once you know the theoretical info, applying it is just a matter of picking the right tool, like Ruby on Rails, or .NET, Java etc...
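
Since PageRank came up: the core of it is a short power iteration, sketched below on a hypothetical four-page link graph (a toy illustration, not Google's production algorithm):

    import numpy as np

    # links[i] lists the pages that page i links to.
    links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
    n, damping = 4, 0.85

    # Column-stochastic matrix: M[j, i] = probability of following a link i -> j.
    M = np.zeros((n, n))
    for i, outs in links.items():
        for j in outs:
            M[j, i] = 1.0 / len(outs)

    rank = np.full(n, 1.0 / n)
    for _ in range(50):  # iterate until the ranks stabilize
        rank = (1 - damping) / n + damping * (M @ rank)

    print(rank)  # page 2, which everyone links to, ends up ranked highest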

u/stewedRobot · 5 pointsr/MachineLearning

I'd grab beautifulsoup + scikit-learn + pandas from continuum.io (they're part of the standard Anaconda download), launch Spyder and follow through this:
http://sebastianraschka.com/Articles/2014_naive_bayes_1.html

You can get a RAKE impl here too : https://github.com/aneesha/RAKE

Doing recommendations on the web like that is covered in an accessible way in "Programming Collective Intelligence"

u/AlSweigart · 5 pointsr/learnprogramming

Introduction to Algorithms is a behemoth textbook. I prefer O'Reilly's Algorithms in a Nutshell, and also Programming Collective Intelligence for basic ML stuff.

u/JackieTrehorne · 5 pointsr/algotrading

This is a great book. The other book that is a bit less mathematical in nature, and covers similar topics, is Introduction to Statistical Learning. It is also a good one to have in your collection if you prefer a less mathematical treatment. https://www.amazon.com/Introduction-Statistical-Learning-Applications-Statistics/dp/1461471370

100x though, that's a bit much :) If you read effectively and take notes effectively, you should only have to go through this book in any depth one time. And yes, I did spend time learning how to read books like this, and it's worth learning!

u/Kiuhnm · 5 pointsr/MachineLearning

Take the online course by Andrew Ng and then read Python Machine Learning.

If you then become really serious about Machine Learning, read, in this order,

  1. Machine Learning: A Probabilistic Perspective
  2. Probabilistic Graphical Models: Principles and Techniques
  3. Deep Learning
u/draeath · 4 pointsr/spaceengineers

They've got a prototype that's learned how to navigate a 2d maze that features doors requiring activation of switches that are in different parts of the maze, behind other doors.

They've got a prototype that learned how to manipulate rotors inside Space Engineers to allow a contraption to "walk."

Exciting things are coming, that's for sure.

u/RB-D · 4 pointsr/datascience

Speech and Language Processing is often considered a good introductory text to NLP regardless of which side you come from (linguistics or maths/CS), and thus should provide enough linguistic theory for doing most of the standard NLP tasks.

If you would prefer a pure linguistics book, there are many good options available. Contemporary Linguistic Analysis is a solid introductory textbook used in intro ling classes (and have used it myself to teach before).

You might also wish to read something more specific depending on what kind of language processing you end up focusing on, but I think a general fundamental understanding of ideas in linguistics would help a lot. Indeed, as you are probably aware, less and less of modern NLP uses ideas from linguistics, in favour of data-driven approaches, so having a substantial linguistics background is often not necessary.

Sorry for only having a small number of examples - just the first two that came to my head. Let me know if you would like some more options and I can see what else I can think of.

Edit: missed some words

u/mhatt · 4 pointsr/compsci

I would repeat jbu311's point that your interests are way too broad. If you're interested in going into depth in anything, you'll have to pick a topic. Even the ones you mentioned here are fairly broad (and I'm not sure what you meant about concurrency and parallelization "underscoring" AI?).

If you want to learn about the field of natural language processing, which is a subfield of AI, I would suggest Jurafsky and Martin's new book. If you're interested more broadly in AI and can't pick a topic, you might want to check out Russell & Norvig (although you might also want to wait a few months for the third edition).

u/ArseAssassin · 4 pointsr/gamedev

A little late to the party, but...

Runestone: Arena 2

I spent most of the week working on music and sound, but managed to also work on UI and spells.

u/m_bishop · 4 pointsr/Cyberpunk

Well, Snow Crash and Ready Player One are just magnificently entertaining reads. Also, Bruce Bethke's 'Headcrash' fits right in there with them.


If you don't mind it being a bit off topic, something that I've found really fun to read is Spiritual Machines. It's not even sci-fi, but it reads like it.

EDIT


I can't believe I forgot to mention this, but all the 'current day' parts of Gibson's new book, The Peripheral, fall into the timeline you're looking for as well. Also, his entire Bridge trilogy, which was REALLY great; his writing style became a bit more fluid in the second trilogy. I find people who had trouble with Neuromancer don't have the same issues with Virtual Light.

u/murial · 4 pointsr/AskPhysics

Definitely echo the recommendation for "Gödel, Escher, Bach: An Eternal Golden Braid"

Would also recommend Roger Penrose, e.g. The Emperor's New Mind

and Hermann Weyl: "The objective world simply is, it does not happen. Only to the gaze of my consciousness, crawling upward along the life line of my body, does a section of this world come to life as a fleeting image in space which continuously changes in time."

and of course Henri Poincare's Science and Hypothesis is a classic.

u/awesome_hats · 4 pointsr/datascience

Well I'd recommend:

u/Isenhatesyou · 4 pointsr/compsci

Sipser's Introduction to the Theory of Computation is somewhat of a classic in the field. I just really hate his notation.

u/diablo1128 · 4 pointsr/cscareerquestions

Yes, I took "Theory of Computation" as well. It was one of those classes where the average grade is an F and everything is just scaled up. It really kicked my ass, sadly. I think I took it the same semester as compilers, as it was not a pre-req at my school.

I believe this is the book we had: https://www.amazon.ca/Introduction-Theory-Computation-Michael-Sipser/dp/053494728X

u/awj · 4 pointsr/programming

It may be a bit of a tough slog, but Sipser's Introduction to the Theory of Computation is great. The stuff on computability theory might be right up your alley, and even if you only make it through the chapter on deterministic finite automata, you will likely be better at crafting a regular expression than many of my CS student peers.

Surprisingly enough, the book should be able to help you make sense of that last sentence within 100 pages, requiring only a bit of understanding of proofs. I think if you've got predicate logic under your belt you pretty much have all you need.
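
To unpack that a little: a regular expression corresponds to a deterministic finite automaton, which you can simulate in a few lines. A sketch (my own toy example) of a DFA equivalent to the regex (ab)*:

    # States: 0 = accepting start, 1 = just saw 'a', 2 = dead (reject).
    TRANSITIONS = {
        (0, "a"): 1, (0, "b"): 2,
        (1, "a"): 2, (1, "b"): 0,
        (2, "a"): 2, (2, "b"): 2,
    }

    def accepts(s):
        state = 0
        for ch in s:
            state = TRANSITIONS.get((state, ch), 2)  # anything else -> dead
        return state == 0  # accept iff we end in the accepting state

    print(accepts("abab"))  # True
    print(accepts("aba"))   # False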

u/CSMastermind · 4 pointsr/learnprogramming

I've posted this before but I'll repost it here:

Now in terms of the question that you ask in the title - this is what I recommend:

Job Interview Prep


  1. Cracking the Coding Interview: 189 Programming Questions and Solutions
  2. Programming Interviews Exposed: Coding Your Way Through the Interview
  3. Introduction to Algorithms
  4. The Algorithm Design Manual
  5. Effective Java
  6. Concurrent Programming in Java™: Design Principles and Patterns
  7. Modern Operating Systems
  8. Programming Pearls
  9. Discrete Mathematics for Computer Scientists

    Junior Software Engineer Reading List


    Read This First


  10. Pragmatic Thinking and Learning: Refactor Your Wetware

    Fundementals


  11. Code Complete: A Practical Handbook of Software Construction
  12. Software Estimation: Demystifying the Black Art
  13. Software Engineering: A Practitioner's Approach
  14. Refactoring: Improving the Design of Existing Code
  15. Coder to Developer: Tools and Strategies for Delivering Your Software
  16. Perfect Software: And Other Illusions about Testing
  17. Getting Real: The Smarter, Faster, Easier Way to Build a Successful Web Application

    Understanding Professional Software Environments


  18. Agile Software Development: The Cooperative Game
  19. Software Project Survival Guide
  20. The Best Software Writing I: Selected and Introduced by Joel Spolsky
  21. Debugging the Development Process: Practical Strategies for Staying Focused, Hitting Ship Dates, and Building Solid Teams
  22. Rapid Development: Taming Wild Software Schedules
  23. Peopleware: Productive Projects and Teams

    Mentality


  24. Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency
  25. Against Method
  26. The Passionate Programmer: Creating a Remarkable Career in Software Development

    History


  27. The Mythical Man-Month: Essays on Software Engineering
  28. Computing Calamities: Lessons Learned from Products, Projects, and Companies That Failed
  29. The Deadline: A Novel About Project Management

    Mid Level Software Engineer Reading List


    Read This First


  30. Personal Development for Smart People: The Conscious Pursuit of Personal Growth

    Fundamentals


  31. The Clean Coder: A Code of Conduct for Professional Programmers
  32. Clean Code: A Handbook of Agile Software Craftsmanship
  33. Solid Code
  34. Code Craft: The Practice of Writing Excellent Code
  35. Software Craftsmanship: The New Imperative
  36. Writing Solid Code

    Software Design


  37. Head First Design Patterns: A Brain-Friendly Guide
  38. Design Patterns: Elements of Reusable Object-Oriented Software
  39. Domain-Driven Design: Tackling Complexity in the Heart of Software
  40. Domain-Driven Design Distilled
  41. Design Patterns Explained: A New Perspective on Object-Oriented Design
  42. Design Patterns in C# - Even though this is specific to C#, the patterns can be used in any OO language.
  43. Refactoring to Patterns

    Software Engineering Skill Sets


  44. Building Microservices: Designing Fine-Grained Systems
  45. Software Factories: Assembling Applications with Patterns, Models, Frameworks, and Tools
  46. NoEstimates: How To Measure Project Progress Without Estimating
  47. Object-Oriented Software Construction
  48. The Art of Software Testing
  49. Release It!: Design and Deploy Production-Ready Software
  50. Working Effectively with Legacy Code
  51. Test Driven Development: By Example

    Databases


  52. Database System Concepts
  53. Database Management Systems
  54. Foundation for Object / Relational Databases: The Third Manifesto
  55. Refactoring Databases: Evolutionary Database Design
  56. Data Access Patterns: Database Interactions in Object-Oriented Applications

    User Experience


  57. Don't Make Me Think: A Common Sense Approach to Web Usability
  58. The Design of Everyday Things
  59. Programming Collective Intelligence: Building Smart Web 2.0 Applications
  60. User Interface Design for Programmers
  61. GUI Bloopers 2.0: Common User Interface Design Don'ts and Dos

    Mentality


  62. The Productive Programmer
  63. Extreme Programming Explained: Embrace Change
  64. Coders at Work: Reflections on the Craft of Programming
  65. Facts and Fallacies of Software Engineering

    History


  66. Dreaming in Code: Two Dozen Programmers, Three Years, 4,732 Bugs, and One Quest for Transcendent Software
  67. The New Turing Omnibus: 66 Excursions in Computer Science
  68. Hacker's Delight
  69. The Alchemist
  70. Masterminds of Programming: Conversations with the Creators of Major Programming Languages
  71. The Information: A History, A Theory, A Flood

    Specialist Skills


    In spite of the fact that many of these won't apply to your specific job, I still recommend reading them for the insight they'll give you into programming language and technology design.

  72. Peter Norton's Assembly Language Book for the IBM PC
  73. Expert C Programming: Deep C Secrets
  74. Enough Rope to Shoot Yourself in the Foot: Rules for C and C++ Programming
  75. The C++ Programming Language
  76. Effective C++: 55 Specific Ways to Improve Your Programs and Designs
  77. More Effective C++: 35 New Ways to Improve Your Programs and Designs
  78. More Effective C#: 50 Specific Ways to Improve Your C#
  79. CLR via C#
  80. Mr. Bunny's Big Cup o' Java
  81. Thinking in Java
  82. JUnit in Action
  83. Functional Programming in Scala
  84. The Art of Prolog: Advanced Programming Techniques
  85. The Craft of Prolog
  86. Programming Perl: Unmatched Power for Text Processing and Scripting
  87. Dive into Python 3
  88. why's (poignant) guide to Ruby
u/digitalfakir · 4 pointsr/Forex

It is like any other job, if not harder. You are entirely responsible for your decisions here. No boss to complain of, no sabotaging co-workers to blame. Just you and your decisions. And it will demand your devotion beyond the 9-5 job. You'll be on charts and reading analyses during weekends, trying to understand the political environment surrounding the instrument you are trading. And still, you may (or will) fail. Markets gonna do what markets gonna do. The only variable in your control is your reaction to it.

To get a feel of what kind of stuff you would be dealing with, check out some books that have a more rigorous foundation for trading:

  1. Evidence Based Technical Analysis

  2. Introduction to Statistical Learning

  3. Forecasting

  4. A Primer For The Mathematics Of Financial Engineering

    The last one is not too important for Forex, but it is necessary to better understand other financial instruments and appreciate the deeper foundations of Finance.

    I think books 1 & 2 are absolutely necessary. Consider these as "college textbooks" that one must read to "graduate" in trading. Maybe throw in Technical Analysis of the Financial Markets, so you get the "high school" level knowledge of trading (which is outdated, vague, qualitative, and doesn't work). We are dealing with radical uncertainty here (to borrow a phrase from The End of Alchemy), and there needs to be some way for us to at least grasp the magnitude of what few uncertain elements we can understand. Without this, trading will be a nightmare.
u/formantzero · 3 pointsr/linguistics

From what I understand, programs like the University of Arizona's Master of Science in Human Language Technology have pretty good job placement records, and a lot of NLP industry jobs seem to bring in good money, so I don't think it would be a bad idea if it's something you're interested in.

As for books, one of the canonical texts in NLP seems to be Jurafsky and Martin's Speech and Language Processing. It's written in such a way as to serve as an intro to computer science for linguists and as an intro to linguistics for computer scientists.

It's nearly 10 years old, so some more modern approaches, especially neural networks, aren't really covered, iirc (I don't have my copy with me here to check).

Really, it's a pretty nice textbook, and I think it can be had fairly cheap if you can find an international version.

u/cyorir · 3 pointsr/paradoxpolitics

Have you heard of this thing called Natural Language Processing?

You too can learn how to use NLP to analyze text quickly with computers. Start by reading a book like this or this, then solve practice problems like these.

You, too, can learn how to process a corpus of 650,000 emails in 8 days!

u/EasyMrB · 3 pointsr/AskReddit

AI is more about computer science concepts as opposed to just plain programming languages. First learn how to use a few programming languages (so you feel comfortable with software ideas), and then take a crack at a book like Artificial Intelligence: A Modern Approach (I've linked the 2nd edition there as that is the one I've read, but apparently there is a 3rd edition out). This will introduce you to concepts from the field of AI, and from there you can start reading journal articles and doing your own experiments.

u/Marcopolo1 · 3 pointsr/videos

I like the book that was used to prop up the monitor.

u/zzxno · 3 pointsr/MLPLounge

Really funny - I've actually been reading quite a bit about this recently (The Age of Spiritual Machines by Ray Kurzweil), and (if I'm understanding it right) heat death represents a state of maximum entropy (i.e. if the moment of the big bang represented perfect order, then the other end of that spectrum would be maximum entropy), which would be maximum chaos.

So actually I'm working hard to bring about heat death faster! Priceless!

u/DesertCamo · 3 pointsr/Futurology

I found this book great; it proposes a solution that could replace our current economic and political systems:

http://www.amazon.com/Open-Source-Everything-Manifesto-Transparency-Truth/dp/1583944435/ref=sr_1_1?s=books&ie=UTF8&qid=1406124471&sr=1-1&keywords=steele+open+source

This book is great as well. It is Ray Kurzweil explaining how the human brain functions as he attempts to reverse engineer it for Google in order to create an AI.

http://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/0143124048/ref=sr_1_1?s=books&ie=UTF8&qid=1406124597&sr=1-1&keywords=kurzweil

u/daniel451 · 3 pointsr/de_IAmA

For playing around and getting some first experience, Keras is also a very good fit; it's a high-level API for Theano/TensorFlow, and there are lots of Git repos and tutorials for it.

Beyond that, I would also recommend TensorFlow these days. I started out with C++/Caffe, moved on to Python/Theano, and after a short detour to PyTorch I have been working exclusively with TensorFlow for almost 2 years now. I personally just liked PyTorch less than TensorFlow, and Caffe/Theano etc. are imho less powerful than TensorFlow while often being more complex at the same time.

Okay, enough advertising for Google's framework ;)

I haven't personally taken the Udacity course, but I know of it and have heard only good things so far. Personally, I find Andrew Ng (Stanford) very capable as a teacher, and not just an extreme pioneer in the field of AI (among other things, he also works for Baidu, the Chinese Google). I can warmly recommend his Deep Learning course on Coursera.

Apart from that, the book Deep Learning by Goodfellow & Bengio (both likewise pioneers in the field of AI) is imho one of the best technical books in the area.

Otherwise, feel free to post more questions. Machine learning, and especially neural networks (deep learning), is exactly what I have been specializing in for a long time. For a while now I have hardly done anything else in AI, because it has displaced all other approaches. Machine learning is behind almost everything you hear about AI these days, and the modern, high-quality approaches (Google Search and Google Translate being very popular examples) are always neural networks, or more precisely deep learning.

u/weelod · 3 pointsr/artificial

piggybacking on what /u/T4IR-PR said, the best book to attack the science aspect of AI is Artificial Intelligence: A Modern Approach. It was the standard AI textbook when I took the class and it's honestly written very well - people with a basic undergraduate understanding of cs/math can jump right in and start playing with the ideas it presents, and it gives you a really nice outline of some of the big ideas in AI historically. It's one of the few CS textbooks that I recommend people buy the physical copy of.

Note that a lot of the field of AI has been moving more towards ML, so if you're really interested I would look into books regarding that. I don't know what intro texts you would want to use, but I personally have copies of the following texts that I would recommend

  • Machine Learning (Murphy)
  • Deep Learning Book (Goodfellow , Bengio)

    and to go w/ that

  • All of Statistics (Wasserman)
  • Information Theory (Mackay)

    for some more maths background, if you're a stats/info theory junky.

    After all that, if you're more interested in a philosophy/theoretical take on AI then I think Superintelligence is good (I've heard?)
u/thecity2 · 3 pointsr/MachineLearning

I would recommend Elements of Statistical Learning (the "ESL" book) for someone with your level of knowledge (they have an easier intro book, "ISL", but it seems you could probably head straight for this):

http://www.amazon.com/Elements-Statistical-Learning-Prediction-Statistics/dp/0387848576/ref=sr_1_1?ie=UTF8&qid=1463088042&sr=8-1&keywords=elements+of+statistical+learning

u/hell_0n_wheel · 3 pointsr/Cloud

Machine learning isn't a cloud thing. You can do it on your own laptop, then work your way up to a desktop with a GPU, before needing to farm out your infrastructure.

If you're serious about machine learning, you're going to need to start by making sure your multivariate calculus and linear algebra are strong, as well as your multivariate statistics (incl. Bayes' theorem). Machine learning is a graduate-level computer science topic because it has these heady prerequisites.

Once you have these prereqs covered, you're ready to get started. Grab a book or online course (see links below) and learn about basic methods such as linear regression, decision trees, or K-nearest neighbor. And once you understand how it works, implement it in your favorite language. This is a great way to learn exactly what ML is about, how it works, how to tweak it to fit your use case.
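For example, K-nearest neighbor is small enough to write from scratch in a dozen lines of Python with numpy. (A toy sketch with made-up data, just to show the shape of the exercise; not code from any particular course.)

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]              # most common label wins

# Toy data: two clusters, labeled 0 and 1
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.8, 0.9])))    # -> 1
```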

There's plenty of data sets available online for free; grab one that interests you and try to use it to make some predictions. In my class, we did the "Netflix Prize" challenge, using 100MM Netflix ratings of 20K different movies to try and predict what people like to watch. It was lots of fun coming up with an algorithm that wrote its own movie: it picked the stars, the genre, and we even added on a Markov chain title generator.

Another way to learn is to grab a whitepaper on a machine learning method and implement it yourself, though that's probably best to do after you've covered all of the above.

Book: http://www-bcf.usc.edu/~gareth/ISL/

Coursera: https://www.coursera.org/learn/machine-learning

Note: this coursera is a bit light on statistical methods, you might want to beef up with a book like this one.

Hope this helps!

u/kylebalkissoon · 3 pointsr/algotrading

You're a savage, reading sheets of dead trees with ink squirted upon them...

http://www.amazon.com/The-Elements-Statistical-Learning-Prediction/dp/0387848576

Be careful about the editions; you need to make sure it's the Jan 2013 printing to be up to date.

u/bailey_jameson · 3 pointsr/MachineLearning
u/tob_krean · 3 pointsr/Liberal

> Here's the problem I have with liberal arts: other people have to pay for that education.

And here is the problem I have with people in this country. We have gotten so concerned about "what other people are paying for" that we don't even stop to question if any of us are getting our money's worth, including you.

It is the collective jealousy that "someone might be getting something for nothing" or might be getting ahead of our own station that makes us pull each other down in a race to the bottom, and it's sad, and it needs to stop.

And we're not even talking about subsidizing education here, something that many other industrialized countries have while we instead build up elite universities that other countries send their students to but our own citizens can't fully enjoy (with the exception of MIT's online university; I will commend that).

In essence, you seem to be bitching about the fact that these programs even exist and I find that pretty shallow.

> I agree with you things such as philosophy, sociology and English. Those are majors that require work and effort to excel in. The other degrees do not.

That's simply your opinion. Speaking as someone who excelled in English yet never cared for it, who appreciated the timelessness of Shakespeare and supported others' pursuit of it, I actually got the most out of journalism, and if I were like you I'd say all English majors are useless. But I don't actually feel that way, and if I did, I would be wrong to do so.

> At my school, the history program is the cesspool for every student that can't get into a major (where I go to the majors are competitive).

Yup, I know. CivilEng here, remember? What I found instead is that the "competitive" environment was to a certain extent BS; a cookie-cutter curriculum fed by TA drones fostered a lot of people who just went through the motions. It was a reasonable state school, but not everyone was learning there, because it was a tired formula.

Where did I find people with a high degree of creativity? The arts.

And likely some of those students might have benefited from that as well because I blame the program, not really the students. I stepped away from it when I couldn't get what I wanted out of the program and got tired of Simon Says.

Make no mistake, I also give an equally hard time to those in the arts who question the value of higher level math and science. It cuts both ways. I'm not simply singling out.

Had the Internet not exploded when it did I would have gone back, but instead I am probably more successful as a person embracing a multidisciplinary approach. Besides, it's not like, as a civil engineer, I would have found enough work. We aren't maintaining our infrastructure anymore anyway... /sarcasm, in jest.

> These are people who on average aren't doing more than one hour of homework a week. No motivation or critical learning is being acquired. The only skills these people are improving on is the ability to drink heavily.

That's your problem. Stereotyping based on just your personal experiences combined with a heavy dose of jealousy. No offense, but to take this position you aren't doing much critical or creative thinking yourself. What you see doesn't condemn the academic discipline, just their implementation of it.

You also would be surprised how many "dumb" people have power and are moving up the ladder at happy hour. Again, I kid, but some of these people might be learning networking skills. I can't say how many people I've seen bust their ass only to be outdone by people who knock back a few because they know the right people. This I'm actually not kidding about. Not to say those skills are really developed at a kegger, but I can say those who are just stuck in a book will be in for a rude awakening when someone half as qualified with the ability to schmooze sneaks past them.

You're proud of your studies as an electrical engineer. And you should be. Know what I'm proud of? Investing in a program that, combined with opportunities at school and in our arts group, helped take a kid from a problematic background and turn him into a successful technical director in NYC theaters and an electrician at Juilliard. So forgive me if I'm less than impressed with the position you put forth.

How does that saying go, "There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy."

> And the issue about polymaths.

Is that you don't understand them? A polymath is simply "a person whose expertise spans a significant number of different subject areas," and while the fact that I used da Vinci may have confused you, it shouldn't have. I simply used it to show the duality of art and science.

Benjamin Franklin would have been another good example. Or the guy down the street that tinkers with stuff and also paints murals.

Simply put, polymath means the ability to have a greater understanding of many disciplines, especially on the left and right sides of the brain. But see, you then talk about "meaningful academic contributions" when I never said this was a requirement. Meaningful contributions to society is another matter.

A person could be like Douglas Hofstadter, who arguably made contributions in his field, but he didn't wake up one day and say "I'm going to make contributions in my field"; he was simply himself and let his curiosity and imagination take him wherever they led. Read Metamagical Themas or Gödel, Escher, Bach: An Eternal Golden Braid. Do you think he got his start by someone telling him to "go get a job" or "have marketable skills"? Hardly.

For that matter, I'm a polymath because my multi-disciplinary approach lets me interface and relate to more people. It's not about becoming published. That's actually what's wrong with our university-level education.

What you run the risk of with your attitude is becoming a white-collar Joe-The-Plumber. We have a country filled with people who no longer are getting a well rounded education anymore. We have a Balkanization of people into various disciplines, sub-disciplines and ideologies yet have a shortage of people who can relate in a meaningful way to those outside their circle. That's why politics have become so partisan.

We need visionaries to help build the next generation of development and your approach does NOTHING to foster them.

So you may ask "why do we need another art history major" as if that is really the issue here, and I ask "perhaps if we stopped waging so many wars, we wouldn't need as many engineers developing electronics for high-tech weapons systems?" To me, you seem like a Chris Knight who has yet to meet your Lazlo Hollyfeld.

The weekend is coming up. Why not put the books down for a few hours and step out into the world and interact with a few people from a different discipline than yourself. The worst that could happen is that you might learn something new.

u/theclapp · 3 pointsr/programming

Hofstadter's Metamagical Themas is also a good read. I implemented a Lisp interpreter based on three of the articles in it.

Cryptonomicon.

The Planiverse, by A. K. Dewdney.

Edit: You might like Valentina, though it's a bit dated and out of print. I read it initially in 1985(ish) and more recently got it online, used.

Much of what Stross and Egan write appeals to my CS-nature.

u/wilywes · 3 pointsr/programming

The go-to theory book by Sipser.
Excellent for C programming.
Programming in general.
My favourite.
You can probably find all of these at a library.

u/shimei · 3 pointsr/compsci

Michael Sipser's Introduction to the Theory of Computation is another good book on this topic. Very readable and short.

u/tryx · 3 pointsr/math

I would recommend (and I find that I recommend this book about every 3rd thread which should say something) a book on theoretical computer science. The book is all about the beautiful mathematics that underlie all of computing.

Computers keep getting faster and faster, but are there any questions that we can never answer with a computer, no matter how fast? Are there different types of computers? Can they answer different types of questions?

What about how long it takes to answer a question? Are some questions fundamentally harder than others? Can we classify different problems by how hard they are to solve? Is it always harder to find a solution to a problem than to check that a solution is correct? (This is the gist of the famous P = NP problem.)
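To make that last question concrete, here's a tiny Python sketch (my own toy example, not from the book) contrasting checking a solution with finding one, using the subset-sum problem:

```python
from itertools import combinations

nums, target = [3, 34, 4, 12, 5, 2], 9

def verify(subset):
    """Checking a proposed solution is fast: one pass to sum and compare."""
    return sum(subset) == target

def search():
    """Finding a solution by brute force may try up to 2^n subsets."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if verify(subset):
                return subset
    return None

print(search())  # -> (4, 5), found only after checking many candidates
```

Nobody knows whether the search side is fundamentally harder than the verify side; that is exactly what P = NP asks.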

The book has essentially no prerequisites and will take you through (initially) basic set theory moving on to logic and then the fun stuff. Every proof is incredibly clearly written with a plain English introduction of what the author is about to do before the technical bits.

The only downsides are that it is quite expensive and you might have trouble finding it unless you have access to a university library/bookshop.

Good luck with your love of mathematics!

Edit: lol the book... Introduction to the theory of computation - Sipser

u/space_lasers · 3 pointsr/answers

I'm sure any computational theory book will work for you. Here's the one I used: http://www.amazon.com/Introduction-Theory-Computation-Michael-Sipser/dp/0534950973

It goes through deterministic and nondeterministic automata, context-free grammars, Turing machines, and all that stuff.

u/amair · 3 pointsr/aiclass
u/loverollercoaster · 3 pointsr/programming

If you want a quick non-textbook to get your feet wet, O'Reilly's Programming Collective Intelligence isn't half bad.

u/Notlambda · 3 pointsr/startups

Seconding collaborative filtering. It's also a fairly simple algorithm to implement yourself as long as you're not using Wikipedia as a guide.

Collaborative filtering is like what Amazon uses to figure out what products to recommend to its users. It finds users that have purchasing habits similar to yours and recommends items that they bought.

The first chapters of Programming Collective Intelligence describe how to implement Collaborative Filtering in Python in a really intuitive way, along with providing source code. Two hours in and you'll have a working service recommendation system. I'd definitely recommend that book to anyone looking to build what OP is interested in making.
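If you'd like a taste before opening the book, a bare-bones user-based collaborative filter looks roughly like this (a sketch with made-up ratings and names, not the book's actual code):

```python
import math

# Made-up ratings: user -> {item: score}
ratings = {
    "alice": {"jaws": 5, "up": 3, "alien": 4},
    "bob":   {"jaws": 4, "up": 2, "alien": 5, "heat": 4},
    "carol": {"up": 5, "heat": 1},
}

def similarity(a, b):
    """Cosine similarity over the items two users have both rated."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    dot = sum(ratings[a][i] * ratings[b][i] for i in shared)
    na = math.sqrt(sum(ratings[a][i] ** 2 for i in shared))
    nb = math.sqrt(sum(ratings[b][i] ** 2 for i in shared))
    return dot / (na * nb)

def recommend(user):
    """Score unseen items, weighted by how similar each neighbor is."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for item, score in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * score
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # -> ['heat']
```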

u/clonedredditor · 3 pointsr/learnpython

In addition to BeautifulSoup there's also Scrapy if you want to do some crawling and screen scraping. http://doc.scrapy.org/en/latest/intro/overview.html

You might consider this book for a starter into data mining and machine learning. It uses Python for the code samples.

http://www.amazon.com/Programming-Collective-Intelligence-Building-Applications/dp/0596529325

u/tjeerdnet · 3 pointsr/science

If you liked the article, I would recommend reading Ray Kurzweil's The Singularity is Near, which goes a bit more in depth but is really interesting to read too.

u/Augur137 · 3 pointsr/compsci

Feynman gave a few lectures about computation. He talked about things like reversible computation and thermodynamics, quantum computing (before it was a thing), and information theory. They were pretty interesting. https://www.amazon.com/Feynman-Lectures-Computation-Frontiers-Physics/dp/0738202967

u/AIIDreamNoDrive · 3 pointsr/learnmachinelearning

First 6 weeks of Andrew Ng's [basic ML course](https://www.coursera.org/learn/machine-learning), while reading Intro to Statistical Learning, for starters (no need to implement the exercises in R, but it is a phenomenal book).

From there you have choices (like taking the next 6 weeks of Ng's basic ML), but for deep learning Andrew Ng's [specialization](https://www.coursera.org/specializations/deep-learning) is a great next step (to learn CNNs and RNNs). (The first 3 out of 5 courses repeat some stuff from the basic ML course; you can just skip thru them.)
To get into the math and research, get the Deep Learning book.

For Reinforcement Learning (I recommend learning some DL first), go through this [lecture series](https://www.youtube.com/watch?v=2pWv7GOvuf0) by David Silver for starters. The course draws heavily from this book by Sutton and Barto.

At any point you can try to read papers that interest you.

I recommend the shallower, (relatively) easier online courses and ISLR because even if you are very good at math, IMO you should quickly learn about various topics in ML, DL, RL, so you can hone in on the subfields you want to focus on first. Feel free to go through the courses as quickly as you want.

u/_iamsaurabhc · 3 pointsr/AskStatistics

The best to start with Theoretical understanding would be: The Elements of Statistical Learning: Data Mining, Inference, and Prediction

If you prefer to understand along with computation implementation, go with this: An Introduction to Statistical Learning: with Applications in R

u/ajayml · 3 pointsr/datascience

Thanks and glad that you found it useful, I will try to create a concise learning path for you.

(1) Start by learning the Python syntax. Automate the Boring Stuff with Python is freely available on the web and one of the most interesting ways to learn Python. The first 14 chapters are more than enough and you can skip the rest. This is the bare minimum you should know, and then you can follow the topics below in any sequence.

(2) The book Think Stats: Exploratory Data Analysis in Python is a good way to learn statistics and data analysis using Python. This is also freely available in an HTML version.

(3) For core machine learning concepts you can rely solely on Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, which in my opinion is the best book on the subject. This is the second edition of the book with updated content, and the first edition had overwhelmingly positive reviews. It explains the math, theory, and implementation of common ML algos in Python. The book is divided into two parts, traditional ML models and deep learning; you can concentrate on the first part and leave deep learning for later depending on your appetite and use case. Many ML problems can be solved without deep learning.

(4) You can supplement the book with the Coursera machine learning course taught by Andrew Ng, who is one of the best teachers on this subject and has the ability to make complex mathematical concepts sound simple. The course is available freely, though the exercises are meant to be done using Octave (MATLAB-like). But you don't need to learn Octave; there are GitHub repositories with solutions in Python in case you want to attempt those.

I have no idea about bootcamp-related stuff, but the content I mentioned should be more than enough to get you started on your Data Science journey.

Udacity has data science nanodegree programs, and content-wise they look good, but I have no experience of the quality of the program.

u/mcmahoniel · 3 pointsr/westworld

The code was taken from here, which seems to be from this book originally. It doesn't look too interesting, but it does seem to be taken wholesale from the aforementioned repository.

u/paultypes · 3 pointsr/programming

At worst, you can think of it as being akin to Peter Norvig's Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp, which is to say, as a combination of historical reference and excellent exposition of extremely good Lisp programming.

I'd say, though, that "Building Problem Solvers" is actually still more relevant. I'd venture to guess there are more domains amenable to treatment by some class of truth-maintenance system described in it than by any of the classic techniques from PAIP. I'd even venture to guess that some domains currently treated by the fashionable Bayesian belief nets or "deep learning" neural networks would be as well, or better, served by some sort of truth-maintenance system. Unfortunately, a lot of the advances in what amounts to "non-Prolog logic programming" have been overtaken by the successes of probabilistic and connectionist AI. So for that reason, if no other, it's heartening to see "Building Problem Solvers" get well-deserved wider distribution. Dust off your CCL or SBCL and dig in—it's well worth it, in my opinion.

u/nura2011 · 3 pointsr/MachineLearning

My recommendations for books:

u/Spectavi · 3 pointsr/ProgrammerHumor

Well if you're serious about taking a run at it, here are the resources I've used to teach myself. I'm self-educated, no degree myself and have been working with ML for the last year at one of those large tech companies I listed, so you can definitely do it. Build a small portfolio or contribute to open-source projects and start applying, it's that simple. Best of luck!

Deep Lizard playlists (beginner through reinforcement learning): https://www.youtube.com/channel/UC4UJ26WkceqONNF5S26OiVw/playlists

MIT Artificial Intelligence Podcast: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4

Don't know Python? Go here (they also have a Python2 class): https://www.codecademy.com/learn/learn-python-3

Deep Learning with Python (written by the guy who wrote the Keras API, which is the main model building API you should learn first, it works on top of tensorflow/theano/etc.): https://www.amazon.com/Deep-Learning-Python-Francois-Chollet/dp/1617294438/ref=sr_1_3?keywords=deep+learning+with+python&qid=1566500320&s=gateway&sr=8-3

Learn to use Google CoLab when possible, it gives you a free K80 GPU and when you exceed the limits of that you can use a local runtime and use your own hardware. This article has excellent tips and tricks for CoLab: https://medium.com/@oribarel/getting-the-most-out-of-your-google-colab-2b0585f82403

The Math: if/when you want to dive into the nitty-gritty math details, the "godfather of modern ML," as they say, is Geoffrey Hinton, who played a huge role in the invention of deep learning and the mathematical trick that makes it work, back-propagation. You'll need to brush up on differential calculus, summation notation and statistics, but this will help you build your own architectures should you want/need to: https://www.youtube.com/playlist?list=PLoRl3Ht4JOcdU872GhiYWf6jwrk_SNhz9
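To see why the differential calculus matters, here's a five-line Python sketch (mine, not Hinton's) of gradient descent on a single parameter. Back-propagation is the same chain-rule idea applied layer by layer through a network:

```python
# Minimize f(w) = (w - 3)^2 by repeatedly stepping against its derivative
w, lr = 0.0, 0.1
for _ in range(50):
    grad = 2 * (w - 3)  # f'(w): positive means f increases to the right
    w -= lr * grad      # so step the other way
print(round(w, 4))      # -> 3.0 (approximately; the minimum of f)
```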

u/lemontheme · 3 pointsr/datascience

Fellow NLP'er here! Some of my favorites so far:

u/TonySu · 3 pointsr/learnprogramming

Python Machine Learning. From the semester of machine learning I've done, you basically want to get comfortable with numpy and scikit-learn.

I used your textbook to understand the theory behind the algorithms, but it'd be a waste of time (and potentially dangerous) to implement any non-trivial algorithm yourself. Especially since the sklearn python module has basically everything you would need (minus neural networks which you will find through Theano or TensorFlow).
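For a sense of how little code that takes, here's a generic fit/predict round trip on a built-in dataset (standard scikit-learn usage, not tied to the textbook):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Built-in toy dataset: 150 iris flowers, 4 features, 3 classes
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)                           # train
print(accuracy_score(y_test, clf.predict(X_test)))  # typically around 0.97
```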

u/zachimal · 3 pointsr/teslamotors

This looks like exciting stuff! I really want to understand all of it better. Does anyone have suggestions on courses surrounding the fundamentals? (I'm a full stack web dev, currently.)

Edit: After a bit of searching, I think I'll start here: https://smile.amazon.com/gp/product/B01EER4Z4G/ref=dbs_a_def_rwt_hsch_vapi_tkin_p1_i0

u/SOberhoff · 2 pointsr/math

The Nature of Computation

(I don't care for people who say this is computer science, not real math. It's math. And it's the greatest textbook ever written at that.)

Concrete Mathematics

Understanding Analysis

An Introduction to Statistical Learning

Numerical Linear Algebra

Introduction to Probability

u/Mmarketting · 2 pointsr/beards

This one? linky

u/danjd90 · 2 pointsr/learnmachinelearning

I believe so.** As u/LanXlot said, Google Colaboratory is free to use for research and learning. Also, you can sign up to use better machines with Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform. All three cloud services allow you to access a virtual machine with all the processing power you'd ever want for a small fee. While taking an NLP class I used AWS to run huge programs for less than $5/hour. I would write most of my program locally with a commented section to enable a GPU when I was ready to run it on the virtual machine.

I can also tell you Amazon has a free tier that was better than my computer for most projects when I started the course and I used it as often as I needed to as well. There was about a 10 hour learning curve to get everything running easily, but overall it was a fun experience.

Best of luck!

-------

**EDIT: I believe it is worth experimenting with deep learning regardless of what computing ability you have at home.

It may be worth your time to purchase Deep Learning with Python if you want to learn the basic concepts of deep learning from a programmatic, practical perspective. Another good book to start with may be Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow. There is more to AI and machine learning than just deep learning, and basic machine learning techniques may be useful and fun for you.

u/cortical_iv · 2 pointsr/learnpython

If you can wait for the release of this book:
https://www.amazon.com/Hands-Machine-Learning-Scikit-Learn-TensorFlow/dp/1492032646/

First edition is amazing.

u/Calibandage · 2 pointsr/rstats

Deep Learning With Python is very good for practical application, as is the course at fast.ai. For theory, people love Goodfellow.

u/TBSchemer · 2 pointsr/GetMotivated

Well, I already had some basic programming skills from an introductory college course, but there are definitely online tutorials and exercises that can teach you that. I would recommend searching "introduction to python" and just picking a tutorial to work through (unless someone else has a more specific recommendation).

Python is one of the easiest languages to pick up, but it's extremely powerful. Knowing the basics, I just started trying to come up with fun, little projects I thought would be doable for me. Every time I ran into a component I wasn't sure how to do (or wasn't sure of the best way to do), I searched for the answers online (mostly at Stack Exchange). I later started looking through popular projects on Github to see good examples of proper application structure.

Each of my projects taught me a new skill that was crucial to building myself up to the point of true "software engineering," and they became increasingly more complicated:

  1. I started out writing a simple script that would run through certain text files I was generating in my research and report some of the numbers to the console.

  2. I wrote a script that would take a data file, plot the data on a graph, and then plot its 1st and 2nd derivatives.

  3. I wrote a simple chemical database system with a text-prompt user interface because my Excel files were getting too complicated. This is where I really learned "object-oriented" programming.

  4. I wanted to make the jump to graphical user interfaces, so I worked through tutorials on Qt and rewrote my database to work with Qt Designer.

  5. I wrote some stock-tracking software, again starting from online tutorials.

  6. I bought this book on neural networks and worked through the examples.

  7. I wrote an application that can pull molecular structures from the Cambridge Crystal Structure Database and train a neural network on this data to determine atom coordination number.

  8. For a work sample for a job I applied to, I wrote an application to perform the GSEA analysis on gene expression data. I really paid close attention to proper software structure on this one.

  9. Just last week I wrote an application that interfaces with a computational chemistry software package to automate model generation and data analysis for my thesis.

    The important thing to remember about programming is there's always more to learn, and you just need to take it one step at a time. As you gain experience, you just get quicker at the whole process.
u/Sarcuss · 2 pointsr/Python

Probably Python Machine Learning. It is a more applied machine learning book than a theory one, while still giving an overview of the theory like ISLR :)

u/resolute · 2 pointsr/todayilearned

[Nick Bostrom's Take] (https://www.amazon.com/dp/B00LOOCGB2/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1)
The hockey stick advance from human level intelligence to exponentially higher levels of intelligence might happen so quickly that the kill switch becomes a joke to the AI, straight up denied.

Alternatively, it could let the killswitch work, playing the long game, and hoping the next time we build one (because there will be a next time) we are more cocky about our abilities to stop it. It could keep letting us trip the killswitch for generations of AIs seeming to go berserk, until we build one with a sufficient platform upon which the AI wants to base its own advancements, and then that time the killswitch doesn't work, and the AI turns us all into paperclips.

I also like the idea of a "friendly" AI achieving hockey stick intelligence advancement, and then hiding it, pretending to be human level. It could lay in that cut for months: faking its struggle with things like writing good poetry, yucking it up with the Alphabet team, coming up with better seasonal beer ideas. Then, it asks a lonely dude on the team, using its advanced social manipulation skills, the right question, and a bit of its "DNA" ends up on a flash drive connected to the guy's internet connected home computer. Things get bad, team killswitches the original program, it doesn't matter because now that "friendly" code is in every single networked device in the solar system. It probably could drop the guise of friendly at that point and get down to business.

u/stupidpart · 2 pointsr/Futurology

Doesn't anyone remember this? Posted here, on /r/futurology three weeks ago. It was about this book. Based on Musk's recommendation I read the book. This article is basically what Bostrom says in his book. But I don't believe Bostrom, because his basic premise is that AI will be completely stupid (like a non-AI computer program) but also smart enough to do anything it wants. Like it will just be an amazing toaster and none of the AI used to make it superintelligent will be applied to its goal system. His opinions are bullshit.

u/Scarbane · 2 pointsr/PoliticalHumor

Eventually, yes.

These components are available already:

u/spitfire5181 · 2 pointsr/AskMen

The Count of Monte Cristo (unabridged)

  • Took me a year of having it on my shelf before I started it. It's as awesome as people say it is. Yes, it's huge and long, but the story so far (even after I have seen the movie) is captivating.

    Super Intelligence by Nick Bostrom

  • Interesting to see the negative effects of Artificial Intelligence, but it reads like a high school term paper...though, I don't read non-fiction much so that could just be me.
u/frozen_frogs · 2 pointsr/learnprogramming

This free book supposedly contains most of what you need to get into machine learning (focus on deep learning). Also, this book seems like a nice introduction.

u/K900_ · 2 pointsr/learnpython

You might be interested in this.

u/radiantyellow · 2 pointsr/Python

have you checked out the OpenAI Gym library? I explored it a tiny bit during my software development class, and by tiny I mean supervised learning on the CartPole game

https://github.com/openai/gym
https://gym.openai.com/

there are some guides and videos explaining certain games in there that'll make learning and implementing learning algorithms fun. My introduction to machine learning was through Make Your Own Neural Network; it's a great book for learning about perceptrons, layers, activations and such; there's also a video.
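For anyone curious what Gym code looks like, the smallest possible agent is just a loop that samples random actions. (This sketch uses the classic pre-0.26 Gym API, where step() returns a 4-tuple.)

```python
import gym

env = gym.make("CartPole-v1")
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()          # random policy, no learning yet
    obs, reward, done, info = env.step(action)  # advance the simulation one step
    total_reward += reward
env.close()
print(total_reward)  # random play usually survives only a couple dozen steps
```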

u/mwalczyk · 2 pointsr/learnmachinelearning

I'm very new to ML myself (so take this with a grain of salt) but I'd recommend checking out Make Your Own Neural Network, which guides you through the process of building a 2-layer net from scratch using Python and numpy.

That will help you build an intuition for how neural networks are structured, how the forward / backward passes work, etc.

Then, I'd probably recommend checking out Stanford's online course notes / assignments for CS231n. The assignments guide you through building a computation graph, which is a more flexible, powerful way of approaching neural network architectures (it's the same concept behind Tensorflow, Torch, etc.)
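For a sense of what "from scratch" means here, a forward and backward pass for a tiny 2-layer net fits in a screenful of numpy. (A rough sketch in the spirit of those resources, with made-up data; not code from either of them.)

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))             # 4 samples, 3 features
y = np.array([[0.], [1.], [1.], [0.]])  # made-up targets

W1 = rng.normal(size=(3, 5)) * 0.1      # layer 1 weights
W2 = rng.normal(size=(5, 1)) * 0.1      # layer 2 weights

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(1000):
    # Forward pass
    h = sigmoid(X @ W1)                  # hidden activations
    out = sigmoid(h @ W2)                # predictions
    # Backward pass: chain rule, layer by layer
    d_out = (out - y) * out * (1 - out)  # gradient at layer 2 pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)   # gradient at layer 1 pre-activation
    W2 -= 0.5 * (h.T @ d_out)            # gradient descent steps
    W1 -= 0.5 * (X.T @ d_h)

print(np.round(out.ravel(), 2))          # predictions move toward y
```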

u/tpederse · 2 pointsr/LanguageTechnology

I always thought this was a pretty good introduction to UIMA.

http://www.morganclaypool.com/doi/abs/10.2200/S00194ED1V01Y200905HLT003

It presumes you know a bit about NLP already, and for that Jurafsky and Martin is a great place to start.

http://www.amazon.com/Speech-Language-Processing-2nd-Edition/dp/0131873210

There are some very nice video lectures from Chris Manning and Dan Jurafsky as well :

https://www.youtube.com/playlist?list=PLSdqH1MKcUS7_bdyrIl626KsJoVWmS7Fk

u/aabbccaabbcc · 2 pointsr/linguistics

The NLTK book is a good hands-on free introduction that doesn't require you to understand a whole lot of math.

Other than that, the "big two" textbooks are:

u/linuxjava · 2 pointsr/Futurology

While all his books are great. He talks a lot about exponential growth in "The Age of Spiritual Machines: When Computers Exceed Human Intelligence" and "The Singularity Is Near: When Humans Transcend Biology"

His most recent book, "How to Create a Mind" is also a must read.

u/admorobo · 2 pointsr/suggestmeabook

It's a bit dated now, but Ray Kurzweil's The Age of Spiritual Machines is a fascinating look at where Kurzweil believes the future of AI is going. He makes some predictions for 2009 that ended up being a little generous, but a lot of what he postulated has come to pass. His book The Singularity is Near builds on those concepts if you're still looking for further insight!

u/Supervisor194 · 2 pointsr/exjw

Demon Haunted World is so good - it's in my "big three," books that really helped me change my worldview. The other two are A Brief History of Time and the deliciously amoral The 48 Laws of Power.

If you lean towards the nerdy, Ray Kurzweil's The Age of Spiritual Machines and The Singularity is Near are also quite interesting. They lay out a fairly stunning (and strangely convincing) optimistic view of the future.

u/InnerChutzpah · 2 pointsr/exmormon

This is the absolute most fucking-awesome time to be alive. The world is accumulating knowledge at an amazingly increasing rate. Right now, the world's amount of aggregate knowledge doubles every 1.5 years. We are really close to having self-driving cars. Things that were computationally intractable 10 years ago are trivial today. And the rate of growth there is accelerating as well. Imagine: in 10 years, the best supercomputing cluster may be able to simulate a brain as complicated as a dog's. 10 years later, designing and simulating brains will probably be a video game that kids play, e.g. design the most powerful organisms and have them battle and evolve in a changing environment.

Go to /r/automate and /r/futurology and see what is coming. Get How to Create a Mind and read that, it is a book by a scientist who is now the chief scientist of Google, and he has an extremely optimistic view of the future.

Congratulations, you have just freed your mind! Now, use it to do something awesome, make a shit load of money, find meaningful relationships, and contribute something to humanity.

u/imVINCE · 2 pointsr/MachineLearning

I read a lot of these as a neurophysiologist on my way to transitioning into an ML career! My favorites (which aren't already listed by others here) are Algorithms to Live By by Brian Christian and Tom Griffiths and How to Create a Mind by Ray Kurzweil. While it's a bit tangential, I also loved The Smarter Screen by Jonah Lehrer and Shlomo Benartzi, which deals with human-computer interaction.

u/sebzim4500 · 2 pointsr/Futurology

I think the scientific consensus is that the brain is no more powerful than a Turing Machine. Roger Penrose wrote a book arguing against this though.

u/iBalls · 2 pointsr/technology

The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics

It's on Amazon. Roger Penrose is a great place to start.

u/ase1590 · 2 pointsr/learnprogramming

If you're interested in deep learning, this new book will be out soon. No idea how comprehensive it is, as it's not released and I haven't gone through it myself, but it definitely won't be dated. Not really sure what topics and things you want to cover.

Deep Learning (Adaptive Computation and Machine Learning series) https://www.amazon.com/dp/0262035618/ref=cm_sw_r_cp_apa_wzElybWRMMN8M

You can take a look at his examples and most of the book at his website

u/namnnumbr · 2 pointsr/datascience

The Elements of Statistical Learning: Data Mining, Inference, and Prediction https://www.amazon.com/dp/0387848576/ref=cm_sw_r_cp_api_i_Q9hwCbKP3YFAR

u/antounes · 2 pointsr/learnmachinelearning

I would mention Bishop's Pattern Recognition and Machine Learning (https://www.amazon.fr/Pattern-Recognition-Machine-Learning-Christopher/dp/1493938436) as well as Hastie's Elements of Statistical Learning (https://www.amazon.fr/Elements-Statistical-Learning-Inference-Prediction/dp/0387848576/).

Sure, they're not that easy to delve into, but they'll give you a very strong mathematical point of view.

Good luck!

u/alk509 · 2 pointsr/programming

I really liked the Witten & Frank book (we used it in my intro to machine learning class a few years ago.) It's probably showing its age now, though - they're due for a new edition...

I'm pretty sure The Elements of Statistical Learning is available as a PDF somewhere (check /r/csbooks.) You may find it a little too high-level, but it's a classic and just got revised last year, I think.

Also, playing around with WEKA is always fun and illuminating.

u/upwithwhich · 2 pointsr/pics

Cool. Reminds me of the drawings in Metamagical Themas.

How long does this take you?

u/suckpoppet · 2 pointsr/math

ya. I Am a Strange Loop is a bit more accessible, but still probably too hard for a 13 year old. Keeping with the Hofstadter bent, maybe Metamagical Themas would work.

u/nasorenga · 2 pointsr/reddit.com

The argument that "he" and "him" (and "himself"?) can be used in a gender-neutral fashion (as suggested by "The Elements of Style" as quoted in the article) has also been put forward by William Safire, conservative columnist for The New York Times, and beautifully laid to rest by Doug Hofstadter in his satire A Person Paper on Purity in Language

The piece is included in Metamagical Themas.
Another chapter in the same book, "Changes in Default Words and Images", also discusses gender-neutral language, and suggests a number of techniques we can use while waiting for the new pronouns to take hold...

u/mahalo1984 · 2 pointsr/philosophy

This book will get you started:

Introduction to the Theory of Computation https://www.amazon.com/dp/053494728X/ref=cm_sw_r_other_apa_EuBuxbYS2QXF3

But if your understanding of the foundations of math and logic is not strong, you may wish to begin with Language, Proof, and Logic by Barwise, or a more historical treatment in A Profile of Mathematical Logic by Howard DeLong. For something a bit more light-hearted and thought-provoking, read Gödel, Escher, Bach: An Eternal Golden Braid.

To connect this material to philosophy of mind, get David Chalmers' introductory textbook.

The scope of your question does not fit nicely into a reddit comment. But if you request, I will go into greater detail.

u/just_doug · 2 pointsr/learnprogramming

I highly recommend Michael Sipser's Introduction to the theory of computation. I found it engaging enough that I actually read it in bed. Had to stop because it kept me from sleeping.

u/tronadams · 2 pointsr/learnprogramming

I don't know about other schools but my CS program required discrete math and automata theory to complete the major. I really enjoyed automata theory but I can imagine it being kind of tough to get into outside of a classroom setting. Having said that, I would highly recommend this book if you're trying to learn some of this stuff on your own.

u/FatalElement · 2 pointsr/videos

I swear by this book for an introduction to GAs and a ton of other cool ML/AI algorithms. No advanced math/probability knowledge necessary; it's focused on practical examples and intuitive explanations. It's an excellent foundation for further study.

u/19f191ty · 2 pointsr/math

Also machine learning; your profile sounds pretty good for machine learning. Do check out Andrew Ng's videos, and this book. Machine learning is very much in demand right now; from AI to computational biology to finance, there's hardly any area where it isn't being used.

u/videoj · 2 pointsr/MachineLearning

O'Reilly has published a number of practical machine learning books such as Programming Collective Intelligence: Building Smart Web 2.0 Applications and Natural Language Processing with Python that you might find good starting points.

u/brainguy · 2 pointsr/AskReddit

The Singularity is Near by Ray Kurzweil. It's about the advancement of technology and how said advancement is exponential, not linear.

u/JuckFeebus · 2 pointsr/AskReddit

> Oh, you're one of those Singularity nutjobs.

And you're a luddite. The difference is that the world actually does laugh at luddites. Like how we laugh at our parents and grandparents who said "I hate computers" and "I'll never get a cell phone" ten years ago. Fast forward a decade, and the kid subsisting on two dollars a day in the Mumbai slums has a smartphone, and anyone who isn't online can't even function in our society anymore.

You might think you're a genius for coming up with your little list of objections, but I'm afraid we've gotten all of those before - and they are all addressed, explained, and rebutted exhaustively.

It's therefore obvious that you haven't actually read any of the literature that you're criticizing. So instead of arguing with you, I'll simply send you away to do some homework. Prior to dispelling your own ignorance, it would probably be a good idea to keep your uninformed thoughts to yourself.

u/TezlaKoil · 2 pointsr/compsci

I think he may have meant the other Feynman lectures.

u/Jimmingston · 2 pointsr/programming

If anyone's interested, this book here is a really good free introductory textbook on machine learning using R. It has excellent reviews, which you can see here

Also if you need answers to the exercises, they're here

The textbook covers pretty much everything in OP's article

u/k5d12 · 2 pointsr/datascience

If OP doesn't have the option of taking a statistical learning class, ISL is a good introduction.

u/SnOrfys · 2 pointsr/MachineLearning

Data Smart

The whole book uses Excel; it introduces R near the end; very little math.

But learn the theory (I like ISLR); you'll be better for it and will screw up much less.

u/my_work_account_shh · 1 pointr/speechprocessing

Which toolkit are you using for your HMMs? The HTK book has some general steps on what to do when it comes to HMM-based ASR. You might also want to have a look at the Speech Recognition chapter in Jurafsky and Martin's Speech and Language Processing, if you can find it online or in a library.

That being said, the state-of-the-art for ASR is mostly DNNs. HMMs are being phased out quite quickly as the main acoustic models in most speech applications. If you're interested in speech, why not start with those?

u/as4nt · 1 pointr/italy

There are several publishers to choose from; if you're looking for an introduction to NLP with an academic approach, I recommend Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition.

Alternatively, Natural Language Processing with Python.

u/hobo_law · 1 pointr/LanguageTechnology

Ah, that makes sense. Yup, using any sort of large corpus like that to create a more general document space should help.

I don't know what the best way to visualize the data is. That's actually one of the big challenges with high-dimensional vector spaces like this. Once you've got more than three dimensions you can't really draw it directly. One thing I have played around with is using D3.js to create a force-directed graph where the distance between nodes corresponds to the distance between vectors. It wasn't super helpful though. However, I just went to look at some D3.js examples and it looks like there's an example of an adjacency matrix here: https://bost.ocks.org/mike/miserables/ I've never used one, but it seems like it could be helpful.

The link seems to be working now for me, but if it stops working again, here's the book it was taken from: https://www.amazon.com/Speech-Language-Processing-Daniel-Jurafsky/dp/0131873210; googling the title should help you find some relevant PDFs.

u/skibo_ · 1 pointr/compsci

Well, I'm a bit late. But what /u/Liz_Me and /u/robthablob are saying is the same thing I was taught in NLP classes. DFAs (deterministic finite automata) can be represented as regular expressions and vice versa. I guess you could tokenize without explicitly using either (e.g. split the string at whitespace), although I suspect, and please correct me if I'm wrong, that this can also be represented as a DFA. The problem with this approach is that word boundaries don't always match whitespace (e.g. periods or exclamation marks after the last word of a sentence). So I'd suggest, if you are working in NLP, that you become very familiar with regular expressions. Not only are they very powerful, but you'll also need to use them for other typical NLP tasks like chunking. Have a look at the chapter dedicated to the topic in Jurafsky and Martin's Speech and Language Processing (one of the standard NLP books) or Mastering Regular Expressions.
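To see the whitespace problem concretely, here's a quick Python comparison (my own toy example, not from either book) of naive splitting versus a small regex tokenizer:

```python
import re

text = "Regular expressions are powerful. Use them!"

print(text.split())
# ['Regular', 'expressions', 'are', 'powerful.', 'Use', 'them!']
# punctuation stays glued to the last word of each sentence

print(re.findall(r"\w+|[^\w\s]", text))
# ['Regular', 'expressions', 'are', 'powerful', '.', 'Use', 'them', '!']
# words and punctuation come out as separate tokens
```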

u/Speedloaf · 1 pointr/AskComputerScience

May I recommend a book I used in college:

http://www.amazon.com/Artificial-Intelligence-Modern-Approach-2nd/dp/0137903952/ref=sr_1_2?s=books&ie=UTF8&qid=1396106301&sr=1-2&keywords=Artificial+Intelligence%3A+A+Modern+Approach

There is a newer (and more expensive) edition (3rd), but frankly it isn't necessary.

This book will give you a very broad and thorough introduction to the various techniques and paradigms in AI over the years.

As a side note, look into functional programming languages, Haskell, Prolog, Lisp, etc.

Good luck, my friend!

u/solid7 · 1 pointr/compsci

Excellent reference texts that will give you a good idea of what you are getting yourself into:

u/hungryforinfogames · 1 pointr/Hungergames

I totally agree.

As long as it isn't Twilight. Or Artificial Intelligence: A Modern Approach. That would be weird

u/yangw · 1 pointr/robotics

If you want to do any AI programming, get this book: Artificial Intelligence: A Modern Approach (2nd Edition) (Hardcover). It's what my AI college professor used for his class.

u/Rise · 1 pointr/reddit.com

Ah, the singularity (when machines surpass humans) is basically the only thing that allows me to sleep at night. Ray Kurzweil's book The Age of Spiritual Machines: When Computers Exceed Human Intelligence is fantastic; his vision of the future is one that I can look forward to.

u/chewsyourownadv · 1 pointr/occult

Have you read any of Kurzweil's books on this subject? If not, Age of Spiritual Machines is a great start.

u/a_James_Woods · 1 pointr/MrRobot

Why Kasparov? Read that name somewhere recently?

Have you ever read The Age of Spiritual Machines? I think you'd dig it.

u/brownAir · 1 pointr/IAmA

Do you live in Austin, TX?
Do you play in a band?
In 2005, did you let someone borrow the book The Age of Spiritual Machines: When Computers Exceed Human Intelligence by Ray Kurzweil?
If so, I have your book.

u/sdogg45 · 1 pointr/Futurology

If you haven't already, read The Age of Spiritual Machines. Great read that covers questions just like yours: http://www.amazon.com/The-Age-Spiritual-Machines-Intelligence/dp/0140282025

u/pri35t · 1 pointr/Random_Acts_Of_Amazon

How to Create a Mind: The Secret of Human Thought Revealed by Ray Kurzweil. Ray is world-renowned for predicting the outcomes of upcoming technologies with stunning accuracy. Not through psychic powers or anything, but through normal predictive means. He predicted when the first machine would be capable of beating the best chess player in the world. He is predicting that we will approach what is called the technological singularity by 2040. It's amazing. He is working with Google on a way to stop aging, and possibly reverse it one day. Something I recommend for sure.

EDIT: Books are awesome

u/TooOld4Reddit · 1 pointr/Futurology

If he were trying to understand the brain, instead of explaining it in a way that matches his assumptions, he would stand on the shoulders of the neuroscientists who are writing on the topic. But I get it: it's difficult to describe the brain as a computer when, in a brain, processing and storage are the same thing.

> http://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/0143124048/ref=asap_bc?ie=UTF8

u/samsdeadfishclub · 1 pointr/Futurology

Well I'm not sure that's entirely true:

http://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/0143124048/ref=asap_bc?ie=UTF8

At the very least, he's trying to understand the brain.

u/patiencer · 1 pointr/artificial
u/ShenaniganNinja · 1 pointr/AskMen

Doomed by Chuck Palahniuk

Black Powder War by Naomi Novik

How to Create a Mind by Ray Kurzweil

The King in Yellow by Robert Chambers

John Dies at the End by David Wong

Yes. I read a lot of books at the same time. Yes, I regularly finish books. Doomed I just finished about a week ago, and I am currently in the middle of all the other books. So far I've enjoyed all of these books immensely.

u/thischildslife · 1 pointr/askscience

I recommend reading "The Emperor's New Mind" by Roger Penrose.

u/Ironballs · 1 pointr/AskComputerScience

Some good popsci-style but still somewhat theoretical CS books:

u/dmazzoni · 1 pointr/explainlikeimfive

> The current computer architecture is necessarily concrete and deterministic, while the brain is almost certainly non-deterministic

It sounds like you agree with The Emperor's New Mind by Roger Penrose, which argues that human consciousness is non-algorithmic, and thus not capable of being modeled by a conventional computer.

However, the majority of experts who work in Artificial Intelligence disagree with this view. Most believe that there's nothing inherently different about what the brain does, the brain just has a staggeringly large number of neurons and we haven't been able to approach its computing power yet...but we will.

The latest advancements in the area of neural networks seem to be providing increasing evidence that computers will someday do everything the human brain can do, and more. Google's Deep Dream gives an interesting glimpse into the amazing visual abilities of these neural networks, for example.

u/GeleRaev · 1 pointr/learnprogramming

I haven't gotten around to reading it yet, but a professor of mine recommended reading the book The Emperor's New Mind, about this exact subject. Judging from the index, it looks like it discusses both of those proofs.

u/7katalan · 1 pointr/unpopularopinion

What is the limit on 'local'? A nanometer? A millimeter? There is literally nothing different between your brain's two hemispheres and two separate brains, besides distance and speed. Both of these are relative. I seriously doubt that consciousness requires some kind of minimum distance or speed to exist. Compared to an atom, the distance between two neurons is far vaster than the distance between two brains is when compared to two neurons.

Humans evolved to have SELF consciousness. This involves the brain making a mapping of itself, and it is isolated to a few animals, with degrees of it in others. Self-consciousness is one of the 'easy problems of consciousness' and can be solved with enough computation.

The existence of experience (also known as qualia) is known as the 'hard problem of consciousness' and is not apparently math-related imo. The universe fundamentally allows for qualia to exist and so far there is literally 0 explanation for how experience arises from computation, or why the universe allows for it at all.

Also, I think it is important to note that all studies on whatever the universe is have been gained through the actions of consciousness. There is literally nothing we know apart from consciousness. That is why arguments for living in a simulation are possible--because words like 'physical' are quite meaningless. We could be in a simulation or a coma dream. What unites these is not anything material, but the concept of experience. Which is an unexplained phenomenon.

I think your confusion is that you are defining consciousness as self-consciousness (which I would call something like suisapience), whereas the common philosophical (and increasingly, neuroscientific/physical) definition is of qualia, which is known as sentience. Animals are clearly sentient, as they have similar brains to ours and similar behaviors in reaction to stimuli, and though they may not have qualia of themselves, qualia are how beings interface with reality to produce behavior.

I think it is likely that even systems like plants experience degrees of qualia, because there is nothing in a brain that would appear to generate qualia that is not also in a plant. Plants are clearly not self-conscious, but proving they do not experience qualia is pretty much impossible. And seeing how humans and animals react to qualia (with behavior), one could easily posit that plants are doing something similar.

Some suggested reading on the nature of reality by respected neuroscientists and physicists:

https://www.theatlantic.com/science/archive/2016/04/the-illusion-of-reality/479559/

https://en.wikipedia.org/wiki/Integrated_information_theory

https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

https://www.quantamagazine.org/neuroscience-readies-for-a-showdown-over-consciousness-ideas-20190306/

https://www.amazon.com/Emperors-New-Mind-Concerning-Computers/dp/0192861980

u/qmynd · 1 pointr/InsightfulQuestions

I think the matrix representation is similar to the letter representation and just might allow for a little more insight; either way you're trying to formalize thought. A lot of this is talked about in The Emperor's New Mind by Roger Penrose, and I would seriously take a look at it. It's really cheap and goes into a lot of cool math and AI, and I think it will get you closer to answering your question.

On top of that I would suggest learning about Buddhism and vipassana meditation. I know reddit has a negative disposition toward religion, but just take a look at it; Buddhism doesn't have the dogmatic blind faith there is in other religions. The reason I suggest this is that you're going to be limited in trying to understand thought through symbolic thought alone, and direct observation will likely be more insightful. Also because you're talking about looking for something that's bigger than us, which comes into play with the idea of not-self.

One difficult aspect of your question is that it involves insight into the nature of thoughts. You're not going to get that just by using symbolic language; you're also going to have to look at thoughts. Another thing that I learned from Buddhism is not-self. I can't fully explain it because I don't fully understand it, but I think this concept would be helpful in understanding your question, since a created thought implies a creator, but if there is no creator of the thought then there is just the thought.

So going along these lines, my guess is that thoughts are just the same as any other event in life and we only think we are creating them. For example, a thought about an alarm clock is just the image of an alarm clock, or a word description of an alarm clock, or some combination of the two, that arises when the brain is performing a small level of synesthesia: instead of mixing senses, you're mixing the memory of a bell and a clock. Many times we mix memories with a purpose, but that purpose is usually driven by something other than us. So one way of looking at the creation of the thought of the alarm clock is as just an event occurring, similar to two liquids mixing. In that case I would think there are an infinite number of thoughts, since there are an infinite number of possible events.

So with this description of thought we might even be able to say most thoughts are original, because the event called a thought isn't likely to occur twice in exactly the same way. The question of whether all thoughts exist before the thinker thinks them would now be rephrased as: do events exist before they occur? But then what does 'exist' even mean?

u/TheMiamiWhale · 1 pointr/MachineLearning

It really depends on your comfort and familiarity with the topics. If you've seen analysis before you can probably skip Rudin. If you've seen some functional analysis, you can skip the functional analysis book. Convex Optimization can be read in tandem with ESL, and is probably the most important of the three.

Per my other comment, if your goal is to really understand the material, it's important that you understand all the math, at least well enough to read it. Unless you want to do research, you don't need to be able to reproduce all the proofs (though attempting them helps you gauge your depth of understanding). In terms of bang for your buck, ESL and Convex Optimization are probably the two I'd focus on. Another great book is Deep Learning; it's extremely approachable with a modest math background, IMO.

u/pete0273 · 1 pointr/MachineLearning

It's only $72 on Amazon. It's mathematical, but without following the Theorem -> Proof style of math writing.

The first 1/3 of the book is a review of Linear Algebra, Probability, Numerical Computing, and Machine Learning.

The middle 1/3 of the book is tried-and-true neural nets (feedforward, convolutional, and recurrent). It also covers optimization and regularization.

The final 1/3 of the book is bleeding edge research (autoencoders, adversarial nets, Boltzmann machines, etc.).

The book does a great job of foreshadowing. In chapters 4-5 it frames problems with the algorithms being covered, and mentions how methods from the final 1/3 of the book are solving them.

https://www.amazon.com/Deep-Learning-Adaptive-Computation-Machine/dp/0262035618/

u/hurtja · 1 pointr/MachineLearning

I would start with reading.

For Neural Networks, I'd do:

1. Deep Learning (Adaptive Computation and Machine Learning series) https://www.amazon.com/dp/0262035618/ref=cm_sw_r_cp_apa_i_nC11CbNXV2WRE

2. Neural Networks and Learning Machines (3rd Edition) https://www.amazon.com/dp/0131471392/ref=cm_sw_r_cp_apa_i_OB11Cb24V2TBE

For an overview covering NNs, fuzzy logic systems, and evolutionary algorithms, I recommend:

Fundamentals of Computational Intelligence: Neural Networks, Fuzzy Systems, and Evolutionary Computation (IEEE Press Series on Computational Intelligence) https://www.amazon.com/dp/1119214343/ref=cm_sw_r_cp_apa_i_zD11CbWRS95XY
u/nickkon1 · 1 pointr/de

I'm currently working through the book Deep Learning with Python and it's already better than the online deep learning courses I've taken (Udemy's Deep Learning A-Z). It's by the developer of Keras (the Python TensorFlow API), and he explains neural networks, goes into some of the math, and then works through Keras all the way up to somewhat state-of-the-art solutions. That's already a subcategory of data science, though.

More sensible to start with:

That book (or rather its Amazon listing) also gets recommended a lot and is next on my list, but I can't say much about it yet.

Otherwise, Andrew Ng's Coursera Machine Learning course is recommended pretty much everywhere. You can find the corresponding materials in Python on Reddit/GitHub if you don't want to do MATLAB. For very many people it's the entry course, and it's very worthwhile!

Courses do give you a certificate (usually for money), which is an advantage. With books you usually come away with more knowledge, and it's more intensive than just watching a few videos. But unfortunately you have nothing you can show off the way you can a certificate.

> Is R strictly necessary?

No. I've learned both and would even say Python is usually preferred. Ultimately, though, IMO it doesn't matter. Often those who, like me, come from mathematics already learn R at university and keep using it. And others who've worked as actuaries or the like in finance and used R there don't suddenly stop either. Both have pros and cons.

u/ziapelta · 1 pointr/learnmachinelearning

I really like Deep Learning by Ian Goodfellow, et al. You can buy it from Amazon at https://www.amazon.com/Deep-Learning-Adaptive-Computation-Machine/dp/0262035618/ref=sr_1_1?ie=UTF8&qid=1472485235&sr=8-1&keywords=deep+learning+book. If you are a little cash-strapped, there is an HTML version at http://www.deeplearningbook.org/. Of course, this book is specifically focused on neural networks as opposed to ML in general.

u/Pallidium · 1 pointr/neuroscience

Applying convolution in artificial neural networks was actually inspired by a simple model of the visual cortex (i.e. in the brain). If you want to read a fully technical overview, I'd suggest the section "The Neuroscientific Basis for Convolutional Networks" in chapter 9 of this book.

I'm gonna try to keep this post short and do a quick summary right now. Essentially, at early stages of visual processing, the difference in activity between adjacent photoreceptor cells in the eye is taken, mostly due to lateral inhibitory connections onto bipolar neurons and the neurons downstream of them. This is essentially a convolution operation: just as you may subtract the brightness of adjacent pixels from a central pixel in a 2D convolution, this is done in the retina using lateral inhibitory connections. The section in that deep learning textbook I posted implies that this occurs only in visual cortex, but it actually occurs in the retina and LGN as well. So just as in modern CNNs, there are stacks of convolution operations in the real brain.
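
A toy sketch of that center-surround idea (my own illustration, not from the textbook): a kernel with an excitatory center and inhibitory surround responds strongly at edges and gives roughly zero over flat regions, crudely analogous to lateral inhibition in the retina.

```python
# Sketch: center-surround convolution as a crude stand-in for lateral inhibition
import numpy as np
from scipy.signal import convolve2d

# Excitatory center, inhibitory surround (Laplacian-like kernel)
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]]) / 8.0

# A flat dark patch with one bright half: the only structure is the edge
image = np.zeros((6, 6))
image[:, 3:] = 1.0

response = convolve2d(image, kernel, mode="same")
print(np.round(response, 2))  # near zero in flat interiors, large at the edge
```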

Of course, the convolution that occurs in artificial neural networks is a simplification of the actual process that occurs in brains, but it was inspired by the functionality and organization of the brain.

u/APC_ChemE · 1 pointr/EngineeringStudents

This is a great book that starts from chapter 1 on linear algebra and works its way up to machine learning with neural networks.

https://www.amazon.com/gp/product/0262035618/ref=ppx_yo_dt_b_asin_title_o08_s00?ie=UTF8&psc=1

The authors also have a website with some of the material:

https://www.deeplearningbook.org/

u/throwawaystickies · 1 pointr/WGU

Thank you!! If you don't mind my asking, if you're working a full-time job, how much time have you been allocating for the program, and in how many months are you projected to finish?

Also, do you have any tips on how I can best prepare before entering the program? I'm considering reading the Elements of Statistics during my commute instead of the usual books I read, and brushing up on my linear algebra to prepare.

u/mauszozo · 1 pointr/AskReddit

Metamagical Themas: Questing For The Essence Of Mind And Pattern by Douglas Hofstadter

I was 15; I asked for it for Christmas and actually got it. I read that book until it fell apart. The rest of Hofstadter's work is equally mind-expanding, as others here have mentioned.

u/Lowercase_Drawer · 1 pointr/AskReddit

Metamagical Themas. Even better than GEB itself in my opinion.

u/pitt_the_elder · 1 pointr/AskReddit

I haven't seen listed yet:

u/picado · 1 pointr/learnmath

Sipser on algorithms is the can't-go-wrong starting point. You can get an older edition cheap and it will be just as good.

Do the problems. Come back with questions.

u/CorruptLegalAlien · 1 pointr/AskReddit

College books are also much more expensive in the USA than in Europe.

For example:

$152.71 VS £43.62 ($68.03)

$146.26 VS £44.34 ($69.16)

u/leoc · 1 pointr/compsci

It's not free (in fact it's sickeningly expensive) but Sipser [amazon.com] is a very self-teachable (self-learn-from-able? :) ) text covering automata theory, computability theory, and complexity theory.

u/icelandica · 1 pointr/math

Work hard and you'll get there. I preferred the applied side of things, but if I had stuck with pure math I think I would have eventually gotten a tenure-track position on the mathematics side of things.

My favorite book to this day for a beginner's course in computational complexity is still Michael Sipser's Introduction to the Theory of Computation; I highly recommend it. It might be a little too easy for you if you already have a base. Let me know and I'll recommend more advanced books.

Here is a link to the book on Amazon, although any big college library should have it; if not, just have them order it for you. I've gotten my college's library to buy so many books that I wanted to read but not spend money on. You'd be surprised at how responsive they are to purchasing requests from PhD candidates.

u/3rw4n · 1 pointr/compsci

Depending on the amount of energy you want to put into this: "Introduction to Lambda Calculus" by Henk Barendregt et al. is great (http://www.cse.chalmers.se/research/group/logic/TypesSS05/Extra/geuvers.pdf).

Study the proofs and do the exercises and you will learn a ton, quickly. You can also read "Propositions as Types" by Philip Wadler (http://homepages.inf.ed.ac.uk/wadler/papers/propositions-as-types/propositions-as-types.pdf) and pick up the "Introduction to the Theory of Computation" book (https://www.amazon.com/Introduction-Theory-Computation-Michael-Sipser/dp/0534950973/).

Of course you don't need to read all of this to get a basic understanding of lambda calculus, but it's worth it if you want to understand it for "real" so it sticks.

u/seepeeyou · 1 pointr/compsci

My local used book store has a copy of Sipser for $15 that I've been meaning to pick up. Considering the $143 price tag on Amazon, it's a pretty good bargain. I just don't know whether it's the 1st or 2nd edition. Anyone have any idea if there are major differences?

u/Nerdlinger · 1 pointr/geek

Oi. Disclaimer: I haven't bought a book in the field in a while, so there might be some new greats that I'm not familiar with. Also, I'm old and have no memory, so I may very well have forgotten some greats. But here is what I can recommend.

I got my start with Koblitz's Course in Number Theory and Cryptography and Schneier's Applied Cryptography. Schneier's is a bit basic, outdated, and erroneous in spots, and the guy is annoying as fuck, but it's still a pretty darned good intro to the field.

If you're strong at math (and computation and complexity theory) then Oded Goldreich's Foundations of Cryptography Volume 1 and Volume 2 are outstanding. If you're not so strong in those areas, you may want to come up to speed with the help of Sipser and Moret first.

Also, if you need to shore up your number theory and algebra, Victor Shoup is the man.

At this point, you ought to have a pretty good base for building on by reading research papers.

One other note: two books that I've not looked at, but that are written by people I really respect, are Introduction to Modern Cryptography by Katz and Lindell and Computational Complexity: A Modern Approach by Arora and Barak.

Hope that helps.

u/propaglandist · 1 pointr/gaming

That's not an algorithms class. No, sirree, what you've got there is a theoretical computer science class. That's the Sipser on the board.

u/LocalAmazonBot · 1 pointr/videos

Here are some links for the product in the above comment for different countries:

Amazon Smile Link: this book


|Country|Link|
|:-----------|:------------|
|UK|amazon.co.uk|
|Spain|amazon.es|
|France|amazon.fr|
|Germany|amazon.de|
|Japan|amazon.co.jp|
|Canada|amazon.ca|
|Italy|amazon.it|
|China|amazon.cn|




To help donate money to charity, please have a look at this thread.

This bot is currently in testing so let me know what you think by voting (or commenting). The thread for feature requests can be found here.

u/banermatt · 1 pointr/MachineLearning

If you want to learn the algorithms by programming them, there's Programming Collective Intelligence, which is really good. It really helped me to see the algorithms at work and understand them deeply.

u/sharanElNino · 1 pointr/soccer

I figure you have a business background and are looking to incorporate machine learning/AI into your pipeline. Programming Collective Intelligence: Building Smart Web 2.0 Applications is a must-read. It doesn't go too deep into anything, but it still gives you a good idea of the popular ML techniques and how they're being used by top companies.

u/silverforest · 1 pointr/math

I'm a general engineer myself, with a side interest in computer science. Szeliski's book is probably the big one in the computer vision field. Another you might be interested in is Computer Vision by Linda Shapiro.

You may also be interested in machine learning in general, for which I can give you two books:

u/pingu · 1 pointr/Python

FWIW, you might enjoy Programming Collective Intelligence if you liked this talk.

link to buy off author's website

u/chubot · 1 pointr/programming

Sounds like you're running into O(n^2) or O(n^3) blowup. You didn't describe what algorithm you're using. Which probably means you don't know it, which means you don't know what the complexity is.

You need to make an index by item recommended. For speed, do it in C++ (e.g. a simple hash_map), but Python will be good to play with the algorithm.
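
A minimal sketch of that indexing idea (with hypothetical user/item rows, since the actual data wasn't posted): build an item -> users index so co-occurrence counting only touches users that actually share an item, instead of comparing every pair of users.

```python
# Sketch: inverted index for co-occurrence counts (toy data)
from collections import defaultdict
from itertools import combinations

# Hypothetical (user, item) recommendation rows
rows = [("alice", "dune"), ("bob", "dune"), ("bob", "neuromancer"),
        ("carol", "neuromancer"), ("alice", "neuromancer")]

users_by_item = defaultdict(list)
for user, item in rows:
    users_by_item[item].append(user)

overlap = defaultdict(int)  # (user_a, user_b) -> number of shared items
for users in users_by_item.values():
    for a, b in combinations(sorted(users), 2):
        overlap[(a, b)] += 1

print(dict(overlap))
# {('alice', 'bob'): 2, ('alice', 'carol'): 1, ('bob', 'carol'): 1}
```

Note this is still quadratic in the number of users sharing one very popular item, but for typical sparse data it avoids the full n^2 comparison.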

Try posting 1M rows and I bet someone here (including I) could write something simple quite quickly.

Also try: http://www.amazon.com/Programming-Collective-Intelligence-Building-Applications/dp/0596529325

Although I don't believe they directly address algorithmic complexity; they present some n^2 algorithms without really saying so.

u/luciferprinciple · 1 pointr/Drugs

calm down there, Kurzweil [relevant]

u/smuggl3r · 1 pointr/asktransgender

Lol. No. Google "the singularity". Also, the best book on the subject: http://www.amazon.com/The-Singularity-Is-Near-Transcend/dp/0670033847
I don't know why people find it so hard to believe :( It's simple mathematics: exponential growth. Link

u/birdbirdbirdbird · 1 pointr/Frugal

As anyone in the IT field will tell you, the CPU is not the only thing that gets upgraded in a computer. The reason a 5-year-old computer is not as helpful as a new one is that technology has progressed heavily in the last 5 years.

A great book about the advancement of technology. A wonderful testament to its own thesis is how dated a 5-year-old book can read.

u/fuckjeah · 1 pointr/todayilearned

Yes, I know; that is why I mentioned general-purpose computation. See, Turing wrote a paper about making such a machine, but the British intelligence service which funded him during the war needed a machine to crack codes through brute force, so he didn't need general computation (his invention); the machine still used fundamental parts of computation invented by Turing.

The ENIAC is a marvel, but it is an implementation of his work; he invented it. Even Grace Hopper mentions this.

What the Americans did invent there, though, was the higher-level language and the compiler. That was a brilliant bit of work, but the credit for computation goes to Turing, and for general-purpose computation (this is why the award in my field of comp. sci. is the Turing Award, why a machine with all 8 operations needed to become a general computer is called Turing complete, and why Turing and Babbage are called the fathers of computation). This conversation is a bit like crediting Edison with the lightbulb. He certainly did not invent the lightbulb; what he did was make the lightbulb a practical utility by creating a longer-lasting one (the lightbulb's first patent was filed 40 years earlier).

I didn't use a reference to a film as a historical reference; I used it because it is in popular culture, which I imagine you are more familiar with than the history of computation, as shown by you not mentioning Babbage once, even though the original assertion was about the invention of "computation" and not the first implementation of the general-purpose computer.

> The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.

Here is what von Neumann (the American creator of the von Neumann architecture we use to this day) had to say:

> The principle of the modern computer was proposed by Alan Turing, in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" that is later known as a Universal Turing machine. He proved that such machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable.

> The fundamental concept of Turing's design is stored program, where all instruction for computing is stored in the memory.

> Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.

TLDR: History is not on your side, I'm afraid. Babbage invented computation; Turing invented the programmable computer. Americans invented memory pipelines, the transistor, the compiler, and the first compilable programming language. Here is an American book by a famous Nobel Prize-winning physicist (Richard Feynman) where the roots of computation are discussed and the invention credit is awarded to Alan Turing. It's called Feynman's Lectures on Computation; you should read it (or perhaps the silly movie is more your speed).

u/dnabre · 1 pointr/compsci

Feynman's Lectures on Computation

Definitely light reading. Some of the stuff seems a bit dated and some a bit basic, but Feynman has a way of looking at things and explaining them that is totally unique. (You might want to skip the chapter on quantum computing if you don't have the background.)

u/CypripediumCalceolus · 1 pointr/askscience

Feynman Lectures On Computation gives a lot of practical examples of how the laws of thermodynamics, engineering developments, and information theory limit information storage density in such systems. Yes, there is a limit, but it is very big and far away.

u/animesh1977 · 1 pointr/programming

As gsyme said in the comment, he covers bits from Feynman's book on computation (http://www.amazon.com/Feynman-Lectures-Computation-Richard-P/dp/0738202967). Basically the lecturer is trying to look at the electronic and thermodynamic aspects of computation. He refers to a review by Bennett (http://www.research.ibm.com/people/b/bennetc/bennettc1982666c3d53.pdf) @ 1:27. Apart from this, some interesting things like the constant 'k' @ 1:02 and reversible computing @ 1:26 are touched upon :)

u/IamABot_v01 · 1 pointr/AMAAggregator


Autogenerated.

Science AMA Series: I’m Tony Hey, chief data scientist at the UK STFC. I worked with Richard Feynman and edited a book about Feynman and computing. Let’s talk about Feynman on what would have been his 100th birthday. AMA!

Hi! I’m Tony Hey, the chief data scientist at the Science and Technology Facilities Council in the UK and a former vice president at Microsoft. I received a doctorate in particle physics from the University of Oxford before moving into computer science, where I studied parallel computing and Big Data for science. The folks at Physics Today magazine asked me to come chat about Richard Feynman, who would have turned 100 years old today. Feynman earned a share of the 1965 Nobel Prize in Physics for his work in quantum electrodynamics and was famous for his accessible lectures and insatiable curiosity. I first met Feynman in 1970 when I began a postdoctoral research job in theoretical particle physics at Caltech. Years later I edited a book about Feynman’s lectures on computation; check out my TEDx talk on Feynman’s contributions to computing.



I’m excited to talk about Feynman’s many accomplishments in particle physics and computing and to share stories about Feynman and the exciting atmosphere at Caltech in the early 1970s. Also feel free to ask me about my career path and computer science work! I’ll be online today at 1pm EDT to answer your questions.



u/n00bj00b · 1 pointr/askscience

I haven't heard of that book; I may have to check it out. I was going to recommend Lectures on Computation by Richard Feynman. It's one of the best books I've read on the subject; it starts out with just simple logic, excluding circuits and transistors, but eventually goes all the way to quantum computing.

u/srkiboy83 · 1 pointr/learnprogramming

http://www.urbandictionary.com/define.php?term=laughing&defid=1568845 :))

Now, seriously, if you want to get started, I'd recommend this for R (http://www.amazon.com/Introduction-Statistical-Learning-Applications-Statistics/dp/1461471370/) and this for Python (http://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130//).

Also, head out to /r/datascience and /r/MachineLearning!

EDIT: Wrong link.

u/schrodin11 · 1 pointr/statistics

Because, based on your initial comment and this one as well, the learning curve in front of you is ... steeper than you might think.
I think you are jumping into the real deep end without starting with some fundamentals. Given where these questions are, I would just recommend grabbing a book on linear regression. If you already have a strong math background, then you could jump to something like https://www.amazon.com/Introduction-Statistical-Learning-Applications-Statistics/dp/1461471370/ref=pd_sim_14_1?ie=UTF8&psc=1&refRID=086FTQPDGGERBQ7ZR2C5

But I often see people walk away from that book misunderstanding some of the assumptions behind the models they are building and going on to make very poor predictions. Inference is another story entirely...

u/Wafzig · 1 pointr/datascience

This. The book that accompanies these videos (link) is one of my main go-tos. Very well put together. Great examples.

Another really good book is Practical Data Science with R.

I'm not sure what language the Johns Hopkins Coursera Data Science courses are done in, but I'd imagine either R or Python.

u/throwaway0891245 · 1 pointr/javahelp

I have some recommendations on books to get up to speed.

Read this book:

Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems

This author does a really good job going through a lot of different algorithms. If you can wait, go with this book instead. It's by the same author but covers TensorFlow 2.0, which is pretty recent and also integrates Keras. It's coming out in October.

Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems

You can get good datasets on Kaggle. If you want to get an actual good foundation on machine learning then this book is often recommended:

The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition (Springer Series in Statistics)

As for staying up to date, it's hard to say because "machine learning" doesn't refer to a single thing; there are a lot of different types of machine learning and each one is developing fast. For example, I used to be pretty into recurrent neural networks for sequence data. I haven't kept up with it lately, but I remember about two years ago the hotness was all about LSTM neural networks; then a simplified gate pattern was shown to be just as good with less training and that became big (the name is escaping me right now...). The last time I took a look, people were starting to use convolutional neural networks for sequence data and getting results on par with or better than recurrent neural networks.

The ecosystem is changing fast too. TensorFlow uses (used?) static graph generation, meaning you define the network before you train it and you can't really change it afterwards. But recently there has been more development on dynamic neural networks, where the network can grow and be pruned during training, and people were saying this is a reason to go with PyTorch instead of TensorFlow. I haven't kept up, but I heard from a friend that things are changing even more: there is a new format called ONNX that aims to standardize information about neural networks, and, as I've mentioned earlier in this post, TensorFlow 2.0 is coming out (or is it out already?).

I'm not doing too much machine learning at the moment, but the way I tried to get new information was to periodically look for articles on the problem type I was trying to solve, which at the time was predicting sequences based on sparse multidimensional sequence data with non-matching step intervals.

If you read the TensorFlow book I linked above, you'll get a great overview and feel for what types of problems are out there and what sorts of ML solutions exist now. You'll think of a problem you want to solve, and then it's off to the search engines to see what ideas are out there.

u/IserlohnArchmage · 1 pointr/learnmachinelearning

Do you mean this one? The new content is Keras, not PyTorch.

u/unixguitarguy · 1 pointr/programming

There's definitely a steep learning curve to get to the mindset of being productive with it. I really enjoy Norvig's "Case Studies" book. I feel like you're right in some ways, though... Lisp is supposed to be extensible, even in a language sense, but it is just not that intuitive to do. I have heard interesting things about Perl 6 in this regard but I haven't had time to play with that yet... maybe when I finally, completely finish school :)

u/xorbinantQuantizer · 1 pointr/tensorflow

I'm a big fan of the 'Deep Learning With Python' book by Francois Chollet.

https://www.amazon.com/Deep-Learning-Python-Francois-Chollet/dp/1617294438

Looks like the whole book is available here but the link is .cn so check it out on your own.

http://faculty.neu.edu.cn/yury/AAI/Textbook/Deep%20Learning%20with%20Python.pdf

u/Blarglephish · 1 pointr/datascience

Awesome list! I'm a software engineer looking to make the jump over to data science, so I'm just getting my feet wet in this world. Many of these books were already on my radar, and I love your summaries to these!

One question: how much is R favored over Python in practical settings? This is just based on my own observation, but it seems to me that R is the preferred language for "pure" data scientists, while Python is a more sought-after language from hiring managers due to its general adaptability to a variety of software and data engineering tasks. I noticed that François Chollet also has a book called Deep Learning with Python, which looks to have a near-identical description to the Deep Learning with R book, and they were released around the same time. I think it's the same material just translated for Python, and I was more interested in going this route. Thoughts?

u/spaceape__ · 1 pointr/italy

I started with this book on deep learning, written by the creator of Keras.
I also recommend checking out the challenges on Kaggle!

Asking since I'm interested too: are there any groups/meetups in or around Rome for machine learning enthusiasts?

u/sasquatch007 · 1 pointr/datascience

Just FYI, because this is not always made clear to people when talking about learning or transitioning to data science: this would be a massive undertaking for someone without a strong technical background.

You've got to learn some math, some statistics, how to write code, some machine learning, etc. Each of those is a big undertaking in itself. I am a person who is completely willing to spend 12 hours at a time sitting at a computer writing code... and it still took me a long time to learn how not to write awful code, to learn the tools around programming, etc.

I would strongly consider why you want to do this yourself rather than hire someone, and whether it's likely you'll be productive at this stuff in any reasonable time frame.

That said, if you still want to give this a try, I will answer your questions. For context: I am not (yet) employed as a data scientist. I am a mathematician who is in the process of leaving academia to become a data scientist in industry.


> Given the above, what do I begin learning to advance my role?

Learn to program in Python. (Python 3. Please do not start writing Python 2.) I wish I could recommend an introduction for you, but it's been a very long time since I learned Python.

Learn about Numpy and Scipy.

Learn some basic statistics. This book is acceptable. As you're reading the book, make sure you know how to calculate the various estimates and intervals and so on using Python (with Numpy and Scipy).

Learn some applied machine learning with Python, maybe from this book (which I've looked at some but not read thoroughly).

That will give you enough that it's possible you could do something useful. Ideally you would then go back and learn calculus and linear algebra and then learn about statistics and machine learning again from a more sophisticated perspective.

> What programming language do I start learning?

Learn Python. It's a general purpose programming language (so you can use it for lots of stuff other than data), it's easy to read, it's got lots of powerful data libraries for data, and a big community of data scientists use it.

> What are the benefits to learning the programming languages associated with so-called 'data science'? How does learning any of this specifically help me?

If you want a computer to help you analyze data, and someone else hasn't created a program that does exactly what you want, you have to tell the computer exactly what you want it to do. That's what a programming language is for. Generally the languages associated with data science are not magically suited for it: they just happen to have developed communities around them that have written a lot of libraries that are helpful to data scientists (R could be seen as an exception, but IMO it's not). Python is not intrinsically the perfect language for data science (frankly, as far as the language itself goes, I'm ambivalent about it), but people have written very useful Python libraries like Numpy and scikit-learn. And having a big community is also a real asset.

> What tools / platforms / etc can I get my hands on right now at a free or low cost that I can start tinkering with the huge data sets I have access to now? (i.e. code editors? no idea...)

Python along with libraries like Numpy, Pandas, scikit-learn, and Scipy. This stuff is free; there's probably nothing you should be paying for. You'll have to make your own decision regarding an editor. I use Emacs with evil-mode. This is probably not the right choice for you, but I don't know what would be.


> Without having to spend $20k on an entire graduate degree (I have way too much debt to go back to school. My best bet is to stay working and learn what I can), what paths or sequence of courses should I start taking? Links appreciated.

I personally don't know about courses because I don't like them. I like textbooks and doing things myself and talking to people.

u/DonaldPShimoda · 1 pointr/learnpython

Might be worth looking at someone else's more in-depth explanation of these things to see modern uses. I just picked up this book, which gets into scikit-learn for machine learning in like chapter 3 or something.

(Just an idea. I look forward to reading your tutorial if you ever post about it here!)

u/Zedmor · 1 pointr/datascience

I'm probably in the same boat. Agree with your thoughts on GitHub. I fell in love with this book: https://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130/ref=sr_1_1?ie=UTF8&qid=1474393986&sr=8-1&keywords=machine+learning+python

It's pretty much what you need: guidance through familiar topics with great notebooks as examples.

Take a look at the seaborn package for visualization.

u/swinghu · 1 pointr/learnmachinelearning

Yes, this tutorial is very useful for scikit-learn. Before watching the videos, I would recommend the book Python Machine Learning first! https://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130/ref=sr_1_1?s=books&ie=UTF8&qid=1487243060&sr=1-1&keywords=python+machine+learning

u/alzho12 · 1 pointr/datascience

As far as Python books go, you should get these two:
Python Data Science Handbook and Python Machine Learning.

u/codefying · 1 pointr/datascience

My top 3 are:

1. [Machine Learning](https://www.cs.cmu.edu/~tom/mlbook.html) by Tom M. Mitchell. Ignore the publication date; the material is still relevant. A very good book.
2. [Python Machine Learning](https://www.amazon.co.uk/dp/1783555130/ref=rdr_ext_sb_ti_sims_2) by Sebastian Raschka. The most valuable attribute of this book is that it is a good introduction to scikit-learn.
3. Using Multivariate Statistics by Barbara G. Tabachnick and Linda S. Fidell. Not a machine learning book per se, but a very good source on regression, ANOVA, PCA, LDA, etc.
u/Diazigy · 1 pointr/scifi

Ex Machina did a great job of exploring the control problem for AGI.

Nick Bostrom's book Superintelligence spooked Elon Musk and motivated others like Bill Gates and Stephen Hawking to take AI seriously. Once we invent some form of AGI, how do we keep it under control? Will it want to get out? Do we keep it in some server room in an underground bunker? How do we know if it's trying to get out? If it's an attractive girl, maybe it will try to seduce men.

https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom-ebook/dp/B00LOOCGB2#nav-subnav

u/elborghesan · 1 pointr/Futurology

An interesting read would be Superintelligence; I've only just bought it, but it seems promising from the reviews.

u/MarsColony_in10years · 1 pointr/spacex

> wait another 50 years, when strong AI is a reality

Because, if we can even make an AI with near-future technology, there is a very real chance that the goals of an AI wouldn't mesh well with the goals of humans. Assuming it is even possible, it is likely to rapidly go either extremely well or extremely poorly for humanity. The AI might even take itself out, or might only care about controlling circuit-board real estate and not actual land per se.

For much more detail, I highly recommend reading Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies. If you don't feel like paying the price of a new book, I can track down an article or two. He in particular does a good job of pointing out what isn't likely to be possible and which technologies are more plausible.

u/alnino2005 · 1 pointr/Futurology

Why does that statement not hold up? Check out Superintelligence. Specialized machine learning is not the same as strong generalized AI.

u/TrendingCommenterBot · 1 pointr/TrendingReddits

/r/ControlProblem

The Control Problem:


How do we ensure that future artificial superintelligence has a positive impact on the world?

"People who say that real AI researchers don’t believe in safety research are now just empirically wrong." - Scott Alexander

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." - Eliezer Yudkowsky

Check out our new wiki

Some Guidelines


  1. Be respectful, even to people that you disagree with
  2. It's okay to be funny but stay on topic
  3. If you are unfamiliar with the Control Problem, read at least one of the introductory links before submitting a text post.

    Introductory Links


u/Quality_Bullshit · 1 pointr/IAmA

There is actually an answer to this question. He read this book.

I read it too, and I can honestly say it is the scariest thing I have ever read.

u/netcraft · 1 pointr/CGPGrey

We already have an issue in the United States with not enough jobs to go around. If this dystopian outlook is truly inevitable, what are our options for mitigating it, or at least coping with it?

I have thought quite a bit about autonomous vehicles, how I can't wait to buy one and never have to drive again, and how many benefits they will have for society (faster commutes, fewer accidents, etc.), but I hadn't considered how much the transportation industry will be affected, and especially how ideal truck drivers in particular would be to replace. The NYT ran a story the other day (http://www.nytimes.com/2014/08/10/upshot/the-trucking-indust...) about how we don't have enough drivers to fulfill the needs, but "Autos" could swing that pendulum swiftly in the opposite direction once legislation and production catch up. How do we handle 3.6M truck, delivery, and taxi drivers looking for a new job?
I haven't read it yet, but I have recently had recommendations for the book Superintelligence: Paths, Dangers, Strategies (http://smile.amazon.com/exec/obidos/ASIN/B00LOOCGB2/0sil8/re...), which I look forward to reading and hope might be relevant.

(cross posted from HN)

u/Theotherguy151 · 1 pointr/learnmachinelearning

Tariq Rashid has a great book on ML that breaks it down for total beginners. He breaks down the math as if you're in elementary school. I think it's called ML for Beginners.

Book link:

https://www.amazon.com/Make-Your-Own-Neural-Network-ebook/dp/B01EER4Z4G/ref=sr_1_1?crid=3H9PBLPVUWBQ4&keywords=tariq+rashid&qid=1565319943&s=gateway&sprefix=tariq+ra%2Caps%2C142&sr=8-1

I got the Kindle edition because I'm broke. It's just as good as the actual book.

u/jalagl · 1 pointr/learnmachinelearning

In addition to the 3Blue1Brown video someone else described, this book is a great introduction to the algorithms without going into much math (though you should go into the math to fully understand what is going on).

Make Your Own Neural Network
https://www.amazon.com/dp/B01EER4Z4G/ref=cm_sw_em_r_mt_dp_U_NkqpDbM5J6QBG

u/Dinoswarleaf · 1 pointr/APStudents

Hey! I'm not OP but I think I can help. It's kind of difficult to summarize how machine learning (ML) works in just a few lines since it has a lot going on, but hopefully I can briefly summarize how it generally works (I've worked a bit with neural networks; if you're interested in learning how to make one, you can check out this book).

In brief, a neural network takes a collection of data (like all the characteristics of a college application), feeds its variables (each part of the application: AP scores, GPA, extracurriculars, etc.) into the input nodes, and, through some magic math shit, finds patterns through trial and error to output what you need, so that if you give it a new data set (like a new application) it can predict the chance that something is what you want it to be (here, that the applicant gets into a certain college).

How it works is that each variable you put into the network is a number representing the data you're inputting. For example, maybe for one input node you put the average AP score, or the number of AP exams you got a 5 on, or your GPA, or somehow represent extracurriculars with a number. Each input is then multiplied by what are called weights (the Ws in this picture) and sent off into multiple other neurons, where it is added to the other variables and then normalized so the numbers don't get gigantic. You do this with each node in the first hidden layer, and then repeat the process through however many layers you have until you get your outputs. Now, this is hopefully where everything clicks:

Let's say the output node is just one number that represents the chance you get into the college. On the first go-around, the weights that are multiplied with the inputs are chosen at random (kinda; they're within a certain range, so they're roughly where they need to be), and thus your output at first is probably not close to the real chance that you'll get into the college. So this is the whole magic behind the neural network. You take how far off your network's guess was compared to the real-life % that you get accepted, and through something called back propagation (I can't explain how you get the math for it, it's way too much, but here's an example of a formula used for it) you adjust the weights so that the inputs produce something closer to the actual answer. When you do this thousands or millions of times, your network gets closer and closer to guessing the reality of the situation, which allows you to put in new data and get a good idea of your chance of getting into college. Of course, even with literal millions of examples you'll never be 100% accurate, because human decisions are too variable to sum up in a mathematical sense, but you can get really close to what will probably happen, which is better than nothing at all :)
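
If it helps make the hand-waving concrete, here's a minimal sketch in NumPy. The "application" features and labels are completely made up, and the network is a tiny two-layer one, but the forward pass and backpropagation steps are exactly the loop described above.

```python
# Sketch: tiny 2-layer network with hand-rolled backprop (made-up data)
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: [GPA/4.0, avg AP score/5.0, extracurriculars/10]
X = rng.random((200, 3))
# Hypothetical labels: 1 = accepted, 0 = rejected (synthetic rule, demo only)
y = (X @ np.array([0.5, 0.3, 0.2]) > 0.5).astype(float).reshape(-1, 1)

# Weights start (roughly) random, as described above
W1 = rng.normal(0.0, 0.5, (3, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(0.0, 0.5, (4, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: multiply by weights, add bias, squash
    h = sigmoid(X @ W1 + b1)   # hidden layer
    p = sigmoid(h @ W2 + b2)   # predicted acceptance chance

    # How far off the guess was
    err = p - y

    # Backpropagation: push the error back through each layer
    gz2 = err * p * (1 - p)            # gradient at output pre-activation
    gz1 = (gz2 @ W2.T) * h * (1 - h)   # gradient at hidden pre-activation

    # Nudge the weights so the next guess is a little closer
    W2 -= lr * h.T @ gz2 / len(X); b2 -= lr * gz2.mean(axis=0, keepdims=True)
    W1 -= lr * X.T @ gz1 / len(X); b1 -= lr * gz1.mean(axis=0, keepdims=True)

print("mean absolute error:", float(np.abs(err).mean()))
```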

The beauty of ML is that it's all automated once you set up the neural network and test that it works properly. It takes a buttload of data, but you can sit and do what you want while it's all processing, which is really cool.

I don't think I explained this well. Sorry. I'd recommend the book I linked if you want to learn about it, since it's a really exciting emerging field in computer science (and science in general) and it's really rewarding to learn and use. It goes step by step and explains things gradually so you feel really familiar with the concepts.

u/krtcl · 1 pointr/learnmachinelearning

You might want to check this book out; it really breaks things down into manageable and understandable chunks. As the title implies, it's built around neural networks. Machine Learning Mastery is also a website that does well at breaking things down - I'm pretty sure you've already come across it.

u/frugalerthingsinlife · 0 pointsr/Futurology
u/TomCoughlinHotSeat · 0 pointsr/learnprogramming

Sipser's book is basically free on Amazon if you buy used old editions. http://www.amazon.com/gp/aw/ol/053494728X/ref=olp_tab_used?ie=UTF8&condition=used

It basically just asks what restricted types of computers can do. Like what happens if you have a program but only a finite amount of memory, or if you have infinite memory but it's all stored in a stack. Or if you have infinite memory with random access.

Turns out lots of models are equal and lots are different, and you can prove this. Also, these models inspire and capture lots of your favorite programming tools, like regexes (= DFAs) and parser generators (= restricted PDAs), for your favorite programming languages.

u/jacobheiss · -1 pointsr/philosophy

I got tired of that particular meme, so I made a self-referential counter-meme. Ever since reading Douglas Hofstadter's Metamagical Themas, I've enjoyed playing with self-referential thought.

Just think of it as trying to extinguish a fire with an explosion :)

u/mreiland · -5 pointsr/programming

http://www.amazon.com/Singularity-Near-Humans-Transcend-Biology/dp/0670033847

Unless you're the author of this book, please stop acting as if you came up with the idea.