(Part 2) Best AI & machine learning books according to redditors
We found 3,368 Reddit comments discussing the best ai & machine learning books. We ranked the 567 resulting products by number of redditors who mentioned them. Here are the products ranked 21-40. You can also go back to the previous section.
If you are interested, and want a real challenge to your view, I strongly recommend Nick Bostrom's Superintelligence.
A key takeaway is imagining intelligence on this kind of scale. That is, our intuition says Einstein is much smarter than the dumbest person you know. Yet, the dumb guy and Einstein have the same hardware, running about the same number of computations. The difference between a bee and a mouse or a mouse and a chimp is orders of magnitude. The difference between an idiot and a genius is very small in comparison.
AI will seem very stupid to the human observer until almost exactly the point it becomes amazingly brilliant. As AI overtakes bees (probably has already) and mice it will still seem dumb to you. As it overtakes chimps, still much dumber than the dumbest person you know. As it draws even to the moron, you might think that the AI has a lot of work left to go. Instead, it's almost unfathomably brilliant.
The distance between bee and chimp is 1,000,000,000 and the difference between moron and smartest man who ever lived is 5.
First of all, thanks for sharing. Code & idea implementation sucks, but it might turn into a very interesting discussion! By admitting that your trade idea is far from being unique and brilliant, you make a super important step in learning. This forum needs more posts like that, and I encourage people to provide feedback!
Idea itself is decent, but your code does not implement it:
Just because 0 looks good, you decide that 0 is the best threshold. You have to do real research here. You'd be surprised by how counterintuitive the result might be, or how unstable it might be =))
The lesson is: idea first. Define it well. Then try to pick the minimal number of indicators (or functions) that implement it. Check the parameter space. If you have too many parameters, discard your idea, since you will not be able to tell whether it is making/losing money because it has an edge or just purely by chance!
What is left out of this discussion: cross-validation and picking the best parameters going forward.
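The workflow this comment argues for (pick a threshold on an in-sample window, then check it out of sample) can be sketched in a few lines. All data below is randomly generated noise on purpose, and every name is illustrative; a sound process applied to noise should show the "best" in-sample threshold failing to hold up:

```python
import random

random.seed(0)

# Toy daily returns and a hypothetical indicator (both pure noise here,
# so no threshold actually has an edge).
returns = [random.gauss(0, 0.01) for _ in range(1000)]
indicator = [random.gauss(0, 1) for _ in range(1000)]

def pnl(threshold, rets, ind):
    """Total return from going long on days where the indicator exceeds the threshold."""
    return sum(r for r, i in zip(rets, ind) if i > threshold)

# Scan the whole parameter space on an in-sample window...
train_r, train_i = returns[:700], indicator[:700]
test_r, test_i = returns[700:], indicator[700:]
grid = [t / 10 for t in range(-20, 21)]
best = max(grid, key=lambda t: pnl(t, train_r, train_i))

# ...then check the chosen threshold out of sample.
print("best in-sample threshold:", best)
print("in-sample P&L: %.4f" % pnl(best, train_r, train_i))
print("out-of-sample P&L: %.4f" % pnl(best, test_r, test_i))
```

If the out-of-sample number collapses relative to the in-sample one, the "edge" was most likely a fit to chance, which is exactly the instability the comment warns about.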
Recommended reading:
yes and no. For beginners, I recommend https://www.amazon.de/Deep-Learning-Python-Francois-Chollet/dp/1617294438 and https://www.reddit.com/r/learnmachinelearning/comments/bmj2zh/get_a_free_early_release_copy_of_handson_machine/
Reading some books would be a good idea.
The following are textbooks:
General AI
Machine Learning
Statistics for Machine Learning
There are many other topics within AI which none of these books focus on, such as Natural Language Processing, Computer Vision, AI Alignment/Control/Ethics, and Philosophy of AI. libgen.io may be of great help to you.
Artificial Intelligence: A Modern Approach by Norvig and Russell
http://aima.cs.berkeley.edu/
On Amazon
Edit: Good point from nacarlson about the 2nd edition. I've changed the link to reflect that.
There's an excellent essay by Douglas Hofstadter in his Metamagical Themas collection where he discusses the nature of creativity. It's called "Variations on a Theme as the Crux of Creativity," and I couldn't immediately find an upload online.
The entire book is certainly worth reading, but this essay in particular stood out to me.
One of the essential ideas-- that I'm paraphrasing very poorly-- is that creativity is a consequence of your brain's ability to create many hypothetical scenarios, to ask what-if, subjunctive questions.
The important corollary to that is that it's very good to have a deep understanding of many different fields and topics, because then your brain has a wide variety of conceptual objects to compare, and there's abundant opportunity for two concepts you understand to fuse into a new idea.
Based on this and some other thoughts, my current understanding of creativity and knowledge is this:
Math is great, but I'm saddened by a notion held by many of professors, and many of my fellow students-- this idea that only math is great.
Thank you all for your responses! I have compiled a list of books mentioned by at least three different people below. Since some books have abbreviations (SICP) or colloquial names (Dragon Book), not to mention the occasional omission of a starting "a" or "the", this was done by hand and as a result it may contain errors.
edit: This list is now books mentioned by at least three people (was two) and contains posts up to icepack's.
edit: Updated with links to Amazon.com. These are not affiliate - Amazon was picked because they provide the most uniform way to compare books.
edit: Updated up to redline6561
he lost me when he said this about GEB
> If you only read one book on this list, it should be this one.
seriously? it's not that i don't appreciate the sentiment, but things douglas hofstadter thinks are neat are no substitute for any single book on the rest of the list unless you
for my part, i'd add sipser's computation book and why not throw in some ken thompson in there as an amuse bouche?
Introduction to the Theory of Computation
Just read the article I linked to, you'll understand why google+ is not just a facebook clone. Seriously; it's a long article but totally worth it. You might also need to read and understand this book to know what they're working towards.
Google basically knows that technology will take over and all humans will be linked together in some form of hive-mind in the future. They're working towards making such a thing reality, but they can't tell us that because people who can barely use computers wouldn't understand and would be scared shitless.
Introduction to the Theory of Computation is one of my favourites
Some context: I've been living in this house for about 3 years now; my girlfriend and I moved in to take care of the owner of the house. Turns out that he was a big Lisp/Scheme hacker back in the 80s-90s and had developed a lot of cutting-edge tech in his heyday. Anyway, these books have been hiding in his library downstairs...
It was like finding a bunch of hidden magical scrolls of lost knowledge :)
edit: I will compile a list of the books later. I'm out doing 4th of July things.
update: List of books
ISBN: 1-55558-044-0
ISBN: 1-55558-042-4
ISBN: 0-262-56038-0
ISBN: 0-393-95544-3
ISBN: 0-201-17589-4
ISBN: 0-07-000-422-6
ISBN: 0-13-370875-6
ISBN: 0-07-054666-5
ISBN: 0-262-11158-6
ISBN: 1-55860-191-0
ISBN: 1-59059-239-5
ISBN: 0-932376-41-X
ISBN: 0-07-001115-X
ISBN: 0-673-39773-4
ISBN: 0-262-07093-6
ISBN: 0-932376-87-8
ISBN: 0-13-717232-X
ISBN: 0-417-50308-8
ISBN: 0-471-60771-1
ISBN: 0-262-19288-8
ISBN: 0-262-55017-2
ISBN: 0-13-834284-9
ISBN: 1-935182-64-1
ISBN: 978-1-59327-591-4
I would guess that career prospects are a little worse than CS for undergrad degrees, but since my main concern is where a phd in math will take me, you should get a second opinion on that.
Something to keep in mind is that "higher" math (the kind most students start to see around junior level) is in many ways very different from the stuff before. I hated calculus and doing calculations in general, and was pursuing a math minor because I thought it might help with job prospects, but when I got to the more abstract stuff, I loved it. It's easily possible that you'll enjoy both, I'm just pointing out that enjoying one doesn't necessarily imply enjoying the other. It's also worth noting that making the transition is not easy for most of us, and that if you struggle a lot when you first have to focus a lot of time on proving things, it shouldn't be taken as a signal to give up if you enjoy the material.
This wouldn't be necessary, but if you like, here are some books on abstract math topics that are aimed towards beginners you could look into to get a basic idea of what more abstract math is like:
Different mathematicians gravitate towards different subjects, so it's not easy to predict which you would enjoy more. I'm recommending these five because they were personally helpful to me a few years ago and I've read them in full, not because I don't think anyone can suggest better. And of course, you could just jump right into coursework like how most of us start. Best of luck!
(edit: can't count and thought five was four)
Personally, I think the thing to realize is that if you know how to program, then you know how to model solutions to problems in complex domains. I would then learn more about the mathematical background of modeling complex domains in general and reasoning about them. If you haven't already bought a copy of Artificial Intelligence: A Modern Approach, I highly recommend doing so and working through it, then picking some application to build using what you've learned, e.g. developing a real-estate investment advice system that uses multi-decade trend data across the country, taking into account recent events, to make sound investment advice, learning from experience (i.e. new data) as it goes.
In other words, think
Sounds like you're looking for the statistical proofs behind all the hand-waving commonly done by "machine learning" MOOCs. I recommend this book. It's very math-heavy, but it covers the underlying theory well.
I've posted a similar answer before, but can't find the comment anymore.
If you are interested in doing your own statistics and modeling (like regression modeling), learn R. It pays amazing dividends for anyone who does any sort of data analysis, even basic biostats. Excel is for accountants and is terrible for biological data. It screws up your datasets when you open them, has no version control/tracking, has only rudimentary visualization capabilities, and cannot do the kind of stats you need to use the most (like right-censored data for Cox proportional hazards models or Kaplan-Meier curves). I've used SAS, Stata, SPSS, Excel, and a whole bunch of other junk in various classes and various projects over the years, and now use only R, Python, and Unix/Shell with nearly all the statistical work being in R. I'm definitely a biased recommender, because what started off as just a way to make a quick survival curve that I couldn't do in Excel as a medical student led me down a rabbit hole and now my whole career is based on data analysis. That said, my entire fellowship cohort now at least dabbles in R for making figures and doing basic statistics, so it's not just me.
R is free, has an amazing online community, and is in heavy use by biostatisticians. The biggest downsides are
Unfortunately learning R won't teach you actual statistics.... for that I've had the best luck with brick-and-mortar classes throughout med school and later fellowship but many, many MOOCs, textbooks, and online workshops exist to teach you the basics.
If I were doing it all over again from the start, I would take a course or use a textbook that integrated R from the very beginning such as this.
Some other great statistical textbooks:
Online classes:
So many to choose from, but I am partial to DataCamp
Want to get started?
At the > prompt in the console:

install.packages("swirl")
library("swirl")
swirl()

And you'll be off and running in a built-in tutorial that starts with the basics (how do I add two numbers) and ends (last I checked) with linear regression models.
ALL OF THAT SAID ------
You don't need to do any of that to be a good doctor, or even a good researcher. All academic institutions have dedicated statisticians (I still work with them all the time -- I know enough to know I don't really know what I am doing). If you can do your own data analysis though, you can work much faster and do many more interesting things than if you have to pay by the hour for someone to make basic figures for you.
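The right-censored analysis mentioned above (the Kaplan-Meier curve that Excel can't produce) is, at its core, a short computation. In practice you would use R's survival package as the poster suggests; this is only a minimal sketch of the estimator itself in plain Python, with made-up data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates for right-censored data.

    times  -- follow-up time for each subject
    events -- 1 if the event (e.g. death) was observed, 0 if censored
    Returns (event time, survival probability) pairs.
    """
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])
    at_risk = n
    surv = 1.0
    curve = []
    i = 0
    while i < n:
        t = times[order[i]]
        deaths = censored = 0
        # Group all subjects sharing this follow-up time; by convention,
        # subjects censored at time t still count as at risk at time t.
        while i < n and times[order[i]] == t:
            if events[order[i]]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= deaths + censored
    return curve

# Toy cohort: 6 subjects, two censored (at times 2 and 4).
print(kaplan_meier([1, 2, 3, 4, 4, 5], [1, 0, 1, 1, 0, 1]))
```

Seeing how censored subjects drop out of the risk set without counting as events is the whole reason naive spreadsheet averages get survival data wrong.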
I think the field you are looking for is called Natural Language Processing.
There is a nice introductory lecture on it on coursera.
I think this is the standard introduction book.
My favorite example of a mind-bogglingly well-staffed company was "Thinking Machines Corporation":
(Taken from Wikipedia, not exhaustive!):
I found this when I was reading about Feynman one time. This isn't meant to disparage Google at all, it's an amazing list though.
EDIT: I forgot to mention what I started out writing. Feynman produced an excellent book, The Feynman Lectures on Computation, which if you're familiar with the physics version of the same, is an incredibly lucid, short, and informative book. I think this would make an excellent textbook for a course in computer architecture.
Some CL-specific resources:
Non-CL:
Others:
I flip back and forth between Clojure and CL periodically (CL is for hobbies, clojure is for work and hobbies), and have mucked with scheme and racket a bit (as well as decent mileage in F#, Haskell, and a little Ocaml from the static typed family). IME, you can definitely tell the difference between a language with support for FP strapped on after the fact, vs. one with it as a core design (preferably with mutable/imperative escape hatches). CL supports FP (closures/functions are values (e.g. lambda), there's a built-in library of non-destructive pure functions that typically operate on lists - or the non-extensible sequence class, and non-standard but general support for optimizing tail recursive functions into iterative ones enables pervasive use of recursion in lieu of iteration), but I think it's less of a default in the wild (not as unfriendly as Python is to FP though). Consequently, it's one paradigm of many that show up; I think there's likely as much if not more imperative/CLOS OOP stuff out there though. I think the alternate tack in clojure, scheme, and racket is to push FP as the default and optimize the language for that as the base case - with pragmatic alternative paradigm support based on the user's desire. Clojure takes it a step further by introducing efficient functional data structures (based on HAMTs primarily, with less-used balanced binary trees for sorted maps and sets) so you can push significantly further without dropping down to mutable/imperative stuff for performance reasons (as opposed to living and dying by the performance profiles of balanced binary trees for everything). You'll still find OOP and imperative support, replete with mutation and effects, but it's something to opt into.
In the context of other FP langs, F# and Ocaml do this as well - they provide a pretty rigorous locked-down immutable approach with functional purity as the default, but they admit low-hanging means to bypass the purity should the programmer need to. Haskell kinda goes there but it's a bit more involved to tap into the mutable "escape hatches" by design.
In the end, you can pretty much bring FP concepts into most any language (e.g. write in a functional style), although it's harder to do so in languages that don't have functions/closures as a first-class concept (to include passing them as values). Many functional languages have similar libraries and idioms for messing with persistent lists or lazy sequences as a basic idiom; that's good news since all those ideas and idioms are more or less portable directly to CL (and as mentioned there are likely extant libraries to try to bring these around, in addition to the standard map, filter, reduce built-ins). For more focused FP examples and thinking, clojure, racket, and scheme are good bets (absent an unknown resource that exclusively focuses on FP in CL, which would seem ideal for your original query). I think dipping into the statically typed languages would also be edifying, since there are plenty of books and resources in that realm.
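As the comment notes, the closures-plus-map/filter/reduce idiom ports to almost any language with first-class functions. A minimal sketch of that style in Python (the example data is made up), showing a closure, a non-destructive pipeline, and a fold:

```python
from functools import reduce

# A closure: a function value that captures its environment.
def adder(n):
    return lambda x: x + n

add10 = adder(10)

# A non-destructive pipeline: nothing mutates the input list.
xs = [1, 2, 3, 4, 5]
pipeline = reduce(lambda acc, x: acc + x,        # fold/reduce
                  filter(lambda x: x % 2 == 1,   # keep odd values
                         map(add10, xs)),        # shift each element by 10
                  0)
print(pipeline)  # 11 + 13 + 15 = 39
assert xs == [1, 2, 3, 4, 5]  # the original list is untouched
```

The same shape (a pure function folded over a filtered, mapped sequence) is the bread and butter of CL's non-destructive list library, Clojure's sequence functions, and Scheme/Racket's SRFI-style lists.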
I've never heard of that book before, but I took a look at their samples and they all seem legitimate.
I would just buy the Ebook for $59 and work through some problems. I'd also maybe purchase some books (or find free PDFs online). Given that you don't have a deep understanding of ML techniques I would suggest these books:
There are others as well, but those are two introductory-level textbooks I am familiar with and often suggested by others.
I’d personally recommend Andrew Ng’s deeplearning.ai course if you’re just starting. This will give you practical and guided experience to tensorflow using jupyter notebooks.
If it’s books you really want I found the following of great use in my studies but they are quite theoretical and framework agnostic publications. Will help explain the theory though:
Deep Learning (Adaptive Computation and Machine Learning Series) https://www.amazon.co.uk/dp/0262035618/ref=cm_sw_r_cp_api_i_Hu41Db30AP4D7
Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning series) https://www.amazon.co.uk/dp/0262039249/ref=cm_sw_r_cp_api_i_-y41DbTJEBAHX
Pattern Recognition and Machine Learning (Information Science and Statistics) (Information Science and Statistics) https://www.amazon.co.uk/dp/0387310738/ref=cm_sw_r_cp_api_i_dv41DbTXKKSV0
Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning series) https://www.amazon.co.uk/dp/B00AF1AYTQ/ref=cm_sw_r_cp_api_i_vx41DbHVQEAW1
In the theoretical field of complexity...
The 1979 version of Introduction to Automata Theory, Languages, and Computation by Hopcroft & Ullman is fantastic and used to be the canonical book on theoretical computer science. Unfortunately the newer versions are too dumbed down, but the old version is still worth it! These days Introduction to the Theory of Computation by Sipser is considered to be the canonical theoretical computer science text. It's also good, and a better "introduction" than H&U. That said, I prefer H&U and recommend it to anyone who's interested in more than getting through their complexity class and forgetting everything.
In the theoretical field of algorithms...
Introduction to Algorithms by Cormen, Leiserson, Rivest and Stein is dynamite: pretty much everything you need to know. Unfortunately it's a bit long-winded and is not very instructive. For a more instructive take on algorithms, take a look at Algorithms by Dasgupta, Papadimitriou and Vazirani.
For deep learning reference this:
https://www.quora.com/What-are-some-good-books-papers-for-learning-deep-learning
There are a lot of open courses I watched on youtube regarding reinforcement learning, one from oxford, one from stanford and another from Brown. Here's a free intro book by Sutton, very well regarded:
https://webdocs.cs.ualberta.ca/~sutton/book/the-book.html
For general machine learning their course is pretty good, but I did also buy:
https://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130/ref=sr_1_1?ie=UTF8&qid=1467309005&sr=8-1&keywords=python+machine+learning
There were a lot of books I got into that weren't mentioned. Feel free to pm me for specifics. Cheers
Edit: If you want to get into reinforcement learning check out OpenAI's Gym package, and browse the submitted solutions
Hands-On Machine Learning . The 2019 version. It’s one of the best ML books I have come across.
The problem with ANSI CL is that I could never shake the feeling that Graham wants Lisp in general to maintain some mystique as a language only suited for the very clever, and he teaches the language with the intent of keeping it that way. I really enjoyed PCL, but I really do think that Paradigms of Artificial Intelligence Programming needs to get more attention. Granted that I haven't yet finished the mammoth volume, Norvig introduces the language in a clear way that makes it seem more natural (a perfect example is that he prefers 'first' and 'rest' rather than the more esoteric 'car' and 'cdr'), but additionally he has great 'hand-holding' examples that show exactly what makes Common Lisp so powerful and how to organize larger programs in the language, as well as going over a ton of interesting CS-related things. Having gone through these 3 books while I was learning, I can definitely say that each had a lot to offer, but I think if I was trapped on an island with just one I would definitely take PAIP.
Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp by Norvig (PAIP)
I don't want to overhype, but it's been called "The best book on programming ever written".
Oops, maybe I overshot. But anyway, very enlightening even if you're not a Lisp or AI programmer.
Absolutely.
Check out The Elements of Statistical Learning and Introduction to Machine Learning.
edit: those books are about practical applications of what we've learned to date from the neural-network style of pattern classification. So it's not about modeling an actual biological neuron. For modeling of the biology, it's been a while since I futzed with that. But when I wrote a paper on modeling synaptic firing, Polymer Solutions: An Introduction to Physical Properties was the book for that class. Damned if I remember whether that book has the details I needed or if I had to use auxiliary materials though.
I can think of a few
Here's some lighter reading I've liked:
Readable, but more hardcore
Hardware:
Mathy:
Programmy:
Unixy:
Off the wall:
Not OP, but among those he listed, I think Chollet's book is the best combination of practical, code-based content and genuinely valuable insights from a practitioner. Its examples are all in the Keras framework, which Chollet developed as a high-level API to sit on top of a number of possible DL libraries. But with TensorFlow 2.0, the Keras API is now fundamental to how you would write code in this pretty dominant framework. It's also a very well-written book.
Ordinarily, I resist books that are too focused on one framework over another. I'd never personally want a DL book in Java, for instance. But I think Chollet's book is good enough to recommend regardless of the platform you intend to use, although it will certainly be more immediately useful if you are working with tf.Keras.
As far as I'm aware, they don't necessarily believe we are near human level AI. However, they do believe it is an inevitable eventuality (on our current track) that we should begin preparing for now - because if it's done wrong it has catastrophic consequences, while if done right can be the best thing that ever happened to us. I second the recommendation for Bostrom's book.
> Last few weeks I got very interested in AI and can't stop thinking about it. Watched discussions of philosophers about future scenarios with AI, read all recent articles in media about it.
Most likely you heard about the superintelligence control problem. Check out (the sidebar of) /r/ControlProblem and their FAQ. Nick Bostrom's Superintelligence is pretty much the book on this topic, and I would recommend reading it if you're interested in that. This book is about possible impacts of AI, and it won't really teach you anything about how AI works or how to develop it (neither strong nor weak AI).
For some resources to get started on that, I'll just refer you to some of my older posts. This one focuses on mainstream ("narrow"/"weak") AI, and this one mostly covers AGI (artificial general intelligence / strong AI). This comment links to some education plans for AGI, and this one has a list of cognitive architectures.
> Is there room for people without a formal degree?
Yes.
As long as you know how to program, know a bit of statistics and ETL, and most importantly: know how to communicate and solve problems.
I strongly recommend Dataquest; right now it's the best 1,000 reais anyone can invest in their education, regardless of field.
If you'd rather study from a book: https://www.amazon.com/Hands-Machine-Learning-Scikit-Learn-TensorFlow/dp/1492032646 plus some basic programming course of your choice.
> Do they give priority to people with a statistics degree?
Hmm, it depends on the company. The idea that data science is complex and requires a PhD is on its way out (except for machine learning engineers, who are few in Brazil).
Fiction: Do Androids Dream of Electric Sheep?. The book that was the basis for Blade Runner.
Non-fiction: Superintelligence: Paths, Dangers, Strategies. This is a deep dive into the dangers posed by superintelligent AI. It's a heavy read.
To gain a good overview of AI, I recommend the book The Master Algorithm by Pedro Domingos. It's totally readable for a layperson.
Then, learn Python and become familiar with libraries and packages such as numpy, scipy, and scikit-learn. Perhaps you could start with Code Academy to get the basics of Python, but I feel like the best way to force yourself to really know useful stuff is through implementing some project with a goal.
Some other frameworks and tools are listed here. Spend a lot more time doing than reading, but reading can help you learn how to approach different tasks and problems. Norvig and Russell's AI textbook is a good resource to have on hand for this.
Some more resources include:
Make Your Own Neural Network book
OpenAI Gym
CS231N Course Notes
Udacity's Free Deep Learning Course
Introduction to Algorithms by CLRS
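In the spirit of the advice above to spend more time doing than reading: before reaching for scikit-learn, it can be instructive to write the simplest possible classifier yourself. A sketch of 1-nearest-neighbor in plain Python, with made-up 2-D data (scikit-learn's version adds efficient indexing and a k parameter, but the idea is the same):

```python
import math

def nearest_neighbor(train, query):
    """1-nearest-neighbor: predict the label of the closest training point.

    train -- list of ((x, y), label) pairs
    """
    point, label = min(train, key=lambda pl: math.dist(pl[0], query))
    return label

# Hypothetical data: two loose clusters with labels "a" and "b".
train = [((0.0, 0.1), "a"), ((0.2, 0.0), "a"),
         ((1.0, 1.1), "b"), ((0.9, 1.0), "b")]
print(nearest_neighbor(train, (0.1, 0.1)))  # "a"
print(nearest_neighbor(train, (1.0, 0.9)))  # "b"
```

Once something like this makes sense, the library versions stop being magic, which is exactly the kind of understanding a project-driven approach builds.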
TAOCP is a waste of time and money; it's more for adorning your bookshelf than for actually reading. Pretty much anyone who suggests TAOCP and is less than 55 years old is just parroting Standard Wisdom™.
Godel, Escher, Bach is a nice book, but it's not as intellectually deep in today's world as it was when first published; a lot of the memes in GEB have been thoroughly absorbed into nerd culture at this point and the book should be enjoyed more as a work of art than expecting it to be particularly informative (IMO).
If you're interested in compilers, I recommend Engineering a Compiler by Cooper & Torczon. Same thing as TAOCP applies to people who suggest the Dragon Book. The Dragon Book is still good, but it focuses too much on parser generators and doesn't really cover enough of the other modern good stuff. (Yes, even the new edition.)
As far as real programming goes, K&R's The C Programming Language is still unmatched for its quality of exposition and brevity, but these days I'd strongly suggest picking up some Python or something before diving into C. And as a practical matter, I'd suggest learning some C++ from Koenig & Moo's Accelerated C++ before learning straight C.
Sipser's Introduction to the Theory of Computation is a good theory book, but I'd really suggest getting CLRS before Sipser. CLRS is way more interesting IMHO.
The Singularity is Near, so don't count on it.
Ishmael - If you ever wondered what it would be like to be a telepathic gorilla, this will probably give you the closest answer.
The 5 Elements of Effective Thinking - The INTP Toolbox.
The Willpower Instinct - Because we all know we could use a bit more of it around here...
Emotional Vampires - A survival guide to protect your Fe
How To Create A Mind - Since it's ultimately the only thing we really seem to care about, it's interesting to think how we could theoretically create a 'backup' for it eventually
The Talent Code - In case you haven't quite figured out how to go about mastering skills yet.
It's an introduction to some of the major concepts in Computer Science theory. If you have no background in CS, and a bit of background in math (mid-undergraduate level) it's an enjoyable way to get exposed to a few concepts from CS theory.
If you're really looking to put your head to the grindstone and learn CS theory, there are better books though. I learned from M. Sipser's Intro to Comp. Theory.
P.S. I did walk away from it with a novice appreciation for Bach.
I've been reading and really enjoying "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom. https://www.amazon.com/gp/product/B00LOOCGB2
It's an easy read but it's a hard read, because every couple of sentences your brain wanders off thinking about consequences and you keep having to read the page over again.
He does a great job of covering in some depth all the issues surrounding the development of trans-human intelligence, whether it happens via "AI", some form of human augmentation, etc.
One of the better "here's a whole bunch of stuff to think about" books.
Hello, I was recommended to Coursera by a colleague and have taken an in-depth look at their course catalogue, but I have not taken any courses from them. If you think there are free courses on there that would suit your needs, then go for it, but personally I found that what was offered for free seemed too superficial and purchasable classes did not offer any information that I could not obtain elsewhere for cheaper.
I know a lot of people aren’t like this, but personally I prefer to teach myself. If you are interested in learning a bit about data science, I would strongly recommend Python Machine Learning by Sebastian Raschka. He explains everything with extreme clarity (a rarity in the academic world) and provides Python code that permits you to directly implement any method taught in the book. Even if you don’t have interest in coding, Raschka’s fundamental descriptions of data science techniques are so transparent that he could probably teach these topics to infants. I read the first 90 pages for free on Google Books and was sold pretty quickly.
I’ll end with a shameless plug: a key concept in most data science and machine learning techniques is biased estimation (a.k.a. regularization), of which I have made a brief video explaining the fundamental concept and why it is useful in statistical procedures.
I hope my non-answer answer was somewhat useful to you.
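The biased estimation (regularization) idea plugged above fits in a few lines for the simplest case: a no-intercept linear fit with an L2 penalty, where the penalized slope has a closed form. A toy sketch with made-up data:

```python
def ridge_slope(xs, ys, lam):
    """Slope of a no-intercept least-squares fit with an L2 penalty.

    lam = 0 gives the ordinary (unbiased) estimate; larger lam shrinks
    the estimate toward zero, trading a little bias for lower variance.
    """
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]  # roughly y = 2x plus noise
for lam in (0.0, 1.0, 10.0):
    print(lam, round(ridge_slope(xs, ys, lam), 3))
```

Watching the estimate shrink as the penalty grows is the whole phenomenon: you deliberately accept a biased answer because, on noisy data, it generalizes better than the unbiased one.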
I have it. Pretty heavy for my tiny brain.
But, anyway, Amazon has it for ~$56. In the world of expensive textbooks, this is a steal.
https://www.amazon.com/dp/0262035618/ref=cm_sw_r_cp_apa_i_G4DJDbH8JXJ71
Ah...Sipser.
Sipser
You might like this book -
Feynman Lectures on Computation
Start with Jurafsky and Martin to get a rounded overview of the main problems and approaches. I don't use NLTK myself, but it has a large community around it and some decent tutorials I hear.
In Natural Language Processing, it's Jurafsky and Martin. In Machine Learning, it's debatably the Bishop book.
This is the standard introductory book to the field: http://www.amazon.co.uk/Language-Processing-Prentice-Artificial-Intelligence/dp/0131873210
Depends on what you are interested in.
If you are interested in games, pick a game and do it. Most board games are not that hard to do as a command-line version. A game with graphics, input, and sound isn't too bad either if you use something like Allegro or SDL, or XNA if you are on Windows. A lot of neat tutorials have been posted about that recently.
If you are more interested in little utilities that do things, you'll want to look at a GUI library, like wxWidgets, Qt and the sort. Both Windows and Mac have their own GUI libraries (not sure what Windows' is called, but I think you have to write it with C++/CLI or C#; Mac's is Cocoa, which uses Objective-C). So if you want to stick to basic C++ you'll want to stick to the first two.
Sometimes I just pick up a book and start reading to get ideas.
This is a really simple Game AI book that is pretty geared towards beginners. http://www.amazon.com/Programming-Game-Example-Mat-Buckland/dp/1556220782/
I enjoyed this book on AI, but it is much more advanced and might be kind of hard for a beginner. Although, when I was first starting, I liked getting in over my head once in a while. http://www.amazon.com/Artificial-Intelligence-Modern-Approach-2nd/dp/0137903952/
Interesting topics to look up.
Data Structures
Algorithms
Artificial Intelligence
Computer Vision
Computer Graphics
If you look at even simple books in these subjects, you will usually find tons of small manageable programs that are fun to write.
EDIT: Almost forgot, I think a lot of these are Java based, but you can usually find a way to do it in C++. http://nifty.stanford.edu/ I think I write Breakout whenever I am playing with a new language. heh
Most of my stuff is going to focus around consciousness and AI.
BOOKS
Ray Kurzweil - How to Create a Mind - Ray gives an intro to neuroscience and suggests ways we might build intelligent machines. This is a fun and easy book to read.
Ray Kurzweil - TRANSCEND - Ray and Dr. Terry Grossman tell you how to live long enough to live forever. This is a very inspirational book.
*I'd skip Kurzweil's older books. The newer ones largely cover the stuff in the older ones anyhow.
Jeff Hawkins - On Intelligence - Engineer and Neuroscientist, Jeff Hawkins, presents a comprehensive theory of intelligence in the neocortex. He goes on to explain how we can build intelligent machines and how they might change the world. He takes a more grounded, but equally interesting, approach to AI than Kurzweil.
Stanislas Dehaene - Consciousness and the Brain - Someone just recommended this book to me so I have not had a chance to read the whole thing. It explains new methods researchers are using to understand what consciousness is.
ONLINE ARTICLES
George Dvorsky - Animal Uplift - We can do more than improve our own minds and create intelligent machines. We can improve the minds of animals! But should we?
David Shultz - Least Conscious Unit - A short story that explores several philosophical ideas about consciousness. The ending may make you question what is real.
Stanford Encyclopedia of Philosophy - Consciousness - The most well known philosophical ideas about consciousness.
VIDEOS
Socrates - Singularity Weblog - This guy interviews the people who are making the technology of tomorrow, today. He's interviewed the CEO of D-Wave, Ray Kurzweil, Michio Kaku, and tons of less well known but equally interesting people.
David Chalmers - Simulation and the Singularity at The Singularity Summit 2009 - Respected Philosopher, David Chalmers, talks about different approaches to AI and a little about what might be on the other side of the singularity.
Ben Goertzel - Singularity or Bust - Mathematician and computer scientist Ben Goertzel goes to China to create Artificial General Intelligence funded by the Chinese government. Unfortunately, they cut the program.
PROGRAMMING
Daniel Shiffman - The Nature of Code - After reading How to Create a Mind you will probably want to get started with a neural network (or Hidden Markov model) of your own. This is your hello world. If you get past this and the math is too hard, use this
Encog - A neural network API written in your favorite language
OpenCV - Face and object recognition made easy(ish).
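Before reaching for an API like Encog, the "hello world" network mentioned above can be built by hand. Below is a minimal sketch of a two-layer network trained on XOR with hand-written backpropagation; the layer sizes, learning rate, and iteration count are illustrative choices, not taken from any of the books above.

```python
import numpy as np

# A minimal two-layer network trained on XOR, the classic first
# neural-network exercise. All hyperparameters here are arbitrary.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8))   # input -> hidden weights
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1)
    return h, sigmoid(h @ W2)

_, out = forward(X)
mse_before = float(np.mean((out - y) ** 2))

for _ in range(10000):
    h, out = forward(X)
    # backpropagation: chain rule on the squared error, by hand
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

_, out = forward(X)
mse_after = float(np.mean((out - y) ** 2))
print(round(mse_before, 3), "->", round(mse_after, 3))
```

Once this makes sense, the same forward/backward structure is what the libraries above automate for you.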
I don't know who is doing PR for this book but they are amazing. It's not a good book.
My review on Amazon:
> The most interesting thing about this book is how Bostrom managed to write so much while saying so little. Seriously, there is very little depth. He presents an idea out of nowhere, says a little about it, and then says [more research needs to be done]. He does this throughout the entire book. I give it two stars because, while extremely diluted, he does present an interesting idea every now and then.
Read this or this or this instead.
When I started in the field I took the famous Coursera course by Andrew Ng. It helped me grasp the major concepts in (classical) ML, though it really lacked mathematical depth (truth be told, it was not really meant for that).
That said, I then took a course on edX, which covered things in a little more depth. As I got deeper into the theory, things became clearer. I have also read some books, such as,
All these books have their own approach to Machine Learning, and I think it is particularly important that you have a good understanding of Machine Learning, and its impact on various fields (signal processing, for instance), before jumping into Deep Learning. After almost three years of serious dedication to studying the field, I feel like I can walk a little by myself.
Now, as a beginner in Deep Learning, things are a little bit different. I would like to make a few points:
So, to summarize, you need to start with simple, boring things until you can be an independent user of ML methods. THEN you can think about state-of-the-art problems to solve with cutting-edge frameworks and APIs.
I'm quite a fan of Sipser's Introduction to the Theory of Computation: http://www.amazon.com/Introduction-Theory-Computation-Michael-Sipser/dp/053494728X
It's not a full-on algorithms book but formal models were always the most interesting part of theoretical computer science to me. vOv
How formal do you mean? If you're interested in the theory of computer science, have a read of Sipser's Introduction to the Theory of Computation (or on Amazon - get it 2nd hand). This is a very theoretical book though, and most CS undergrad courses will only cover this type of content as a small part of the subject matter taught, so don't be put off if it doesn't immediately appeal or make sense!
Edit - links.
Sure! There is a lot of math involved in the WHY component of Computer Science; for the basics, it's Discrete Mathematics, so any introduction to that will help as well.
http://www.amazon.com/Discrete-Mathematics-Applications-Susanna-Epp/dp/0495391328/ref=sr_sp-atf_title_1_1?s=books&ie=UTF8&qid=1368125024&sr=1-1&keywords=discrete+mathematics
This next book is a great theoretical overview of CS as well.
http://mitpress.mit.edu/sicp/full-text/book/book.html
That's a great book on computer programming, complexity, data types etc... If you want to get into more detail, check out: http://www.amazon.com/Introduction-Theory-Computation-Michael-Sipser/dp/0534950973
I would also look at Coursera.org's Algorithms lectures by Robert Sedgewick; that's essential learning for any computer science student.
His textbook: http://www.amazon.com/Algorithms-4th-Robert-Sedgewick/dp/032157351X/ref=sr_sp-atf_title_1_1?s=books&ie=UTF8&qid=1368124871&sr=1-1&keywords=Algorithms
another Algorithms textbook bible: http://www.amazon.com/Introduction-Algorithms-Thomas-H-Cormen/dp/0262033844/ref=sr_sp-atf_title_1_2?s=books&ie=UTF8&qid=1368124871&sr=1-2&keywords=Algorithms
I'm just like you as well, I'm pivoting. I graduated law school specializing in technology law and patents in 2012, but I love comp sci too much, so I went back to school for Comp Sci + jumped into the tech field and got a job at a tech company.
These books are theoretical, and they help you understand why you should use x versus y, those kind of things are essential, especially on larger applications (like Google's PageRank algorithm). Once you know the theoretical info, applying it is just a matter of picking the right tool, like Ruby on Rails, or .NET, Java etc...
I'd grab beautifulsoup + scikit-learn + pandas from continum.io (they're part of the standard anaconda download), launch Spyder and follow through this:
http://sebastianraschka.com/Articles/2014_naive_bayes_1.html
You can get a RAKE impl here too : https://github.com/aneesha/RAKE
Doing recommendations on the web like that is covered in an accessible way in "Programming Collective Intelligence"
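The naive Bayes tutorial linked above uses scikit-learn, but the core of the method fits in a few lines of plain Python. Here is a from-scratch sketch on a toy "spam" corpus (all words and labels are made up for illustration):

```python
import math
from collections import Counter, defaultdict

# Multinomial naive Bayes from scratch on an invented toy corpus.
train = [
    ("win cash prize now", "spam"),
    ("cheap pills win win", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]

word_counts = defaultdict(Counter)   # class -> word frequencies
class_counts = Counter()             # class -> number of documents
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(text):
    best, best_score = None, -math.inf
    for label in class_counts:
        # log prior + sum of log likelihoods with Laplace smoothing
        score = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

print(predict("win a cash prize"))       # spam
print(predict("team meeting tomorrow"))  # ham
```

Swapping this for scikit-learn's `MultinomialNB` gives the same idea with better tokenization and vectorization handled for you.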
Introduction to Algorithms is a behemoth text book. I prefer O'Reilly's Algorithms in a Nutshell and also Programming Collective Intelligence" for basic ML stuff.
This is a great book. The other book that is a bit less mathematical in nature, and covers similar topics, is Introduction to Statistical Learning. It is also a good one to have in your collection if you prefer a less mathematical treatment. https://www.amazon.com/Introduction-Statistical-Learning-Applications-Statistics/dp/1461471370
100x though, that's a bit much :) If you read effectively and take notes effectively, you should only have to go through this book with any depth one time. And yes, I did spend time learning how to read books like this, and it's worth learning!
Paradigms of Artificial Intelligence Programming
Take the online course by Andrew Ng and then read Python Machine Learning.
If you then become really serious about Machine Learning, read, in this order,
They've got a prototype that's learned how to navigate a 2d maze that features doors requiring activation of switches that are in different parts of the maze, behind other doors.
They've got a prototype that learned how to manipulate rotors inside Space Engineers to allow a contraption to "walk."
Exciting things are coming, that's for sure.
Speech and Language Processing is often considered to be a good introductory text to NLP regardless of which side you come from (linguistics or maths/CS), and thus should provide enough information about linguistic theory to be sufficient for doing most of the standard NLP tasks.
If you would prefer a pure linguistics book, there are many good options available. Contemporary Linguistic Analysis is a solid introductory textbook used in intro ling classes (and have used it myself to teach before).
You might also wish to read something more specific depending on what kind of language processing you end up focusing on, but I think a general fundamental understanding of ideas in linguistics would help a lot. Indeed, as you are probably aware, less and less of modern NLP uses ideas from linguistics in favour of data-driven approaches, so having a substantial linguistics background is often not necessary.
Sorry for only having a small number of examples - just the first two that came to my head. Let me know if you would like some more options and I can see what else I can think of.
Edit: missed some words
I would repeat jbu311's point that your interests are way too broad. If you're interested in going into depth in anything, you'll have to pick a topic. Even the ones you mentioned here are fairly broad (and I'm not sure what you meant about concurrency and parallelization "underscoring" AI?).
If you want to learn about the field of natural language processing, which is a subfield of AI, I would suggest Jurafsky and Martin's new book. If you're interested more broadly in AI and can't pick a topic, you might want to check out Russell & Norvig (although you might also want to wait a few months for the third edition).
A little late to the party, but...
Runestone: Arena 2
I spent most of the week working on music and sound, but managed to also work on UI and spells.
New screenshots:
Next week I'm hoping to rework the battle AI to use Goal-Oriented Action Planning. I'll use a graph search algorithm to plan for actions that result in a desirable world state with the shortest way to get there. Seems like it might be too CPU intensive for Flash, but we'll see when we get there. Also started reading Artificial Intelligence: A Modern Approach - seems very interesting, but I doubt it'll be much help here.
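The GOAP idea described above reduces to a graph search over world states. Here is a minimal sketch in Python; the actions, preconditions, and effects are invented for illustration, and a real planner would likely use A* with a heuristic rather than plain breadth-first search.

```python
from collections import deque

# Goal-Oriented Action Planning as breadth-first search over world
# states. Each action maps to (preconditions, effects); all of the
# names below are made up.
actions = {
    "draw_sword": ({"has_sword"}, {"armed"}),
    "approach":   (set(), {"in_range"}),
    "attack":     ({"armed", "in_range"}, {"enemy_dead"}),
}

def plan(start, goal):
    """Return the shortest action sequence reaching a state containing `goal`."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for name, (pre, effects) in actions.items():
            if pre <= state:
                nxt = frozenset(state | effects)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None  # goal unreachable with these actions

print(plan({"has_sword"}, {"enemy_dead"}))
```

Because the search is over full world states rather than a spatial grid, the branching factor stays small as long as the action set does, which is what makes GOAP plausible even on a constrained runtime like Flash.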
Well, Snow Crash and Ready Player One are just magnificently entertaining reads. Also, Bruce Bethke's 'Headcrash' fits right in there with them.
If you don't mind it being a bit off topic, something that I've found really fun to read is Spiritual Machines. It's not even sci-fi, but it reads like it.
EDIT
I can't believe I forgot to mention this, but all the 'current day' parts of Gibson's new book, The Peripheral, fall into the timeline you're looking for as well. Also, his entire Bridge trilogy, which was REALLY great; his writing style became a bit more fluid in the second trilogy. I find people who had trouble with Neuromancer don't have the same issues with Virtual Light.
Definitely echo the recommendation for "Gödel, Escher, Bach: An Eternal Golden Braid"
Would also recommend Roger Penrose, e.g. The Emperor's New Mind
and Hermann Weyl: "The objective world simply is, it does not happen. Only to the gaze of my consciousness, crawling upward along the life line of my body, does a section of this world come to life as a fleeting image in space which continuously changes in time."
and of course Henri Poincare's Science and Hypothesis is a classic.
These books are available in India
http://www.amazon.in/Machine-Learning-Probabilistic-Perspective-Computation/dp/0262018020
http://www.amazon.in/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738/ref=sr_1_1?s=books&ie=UTF8&qid=1502576905&sr=1-1&keywords=Pattern+Recognition+and+Machine+Learning+by+Christopher+Bishop
http://www.amazon.in/Deep-Learning-Adaptive-Computation-Machine/dp/0262035618/ref=sr_1_fkmr0_1?s=books&ie=UTF8&qid=1502576961&sr=1-1-fkmr0&keywords=Deep+learning+by+Yousha+Bengio
I tried to put a pakistani address to check if amazon india ships there, http://imgur.com/a/OIhIL but it doesn't.
Sorry bro, I wish there was something I could do.
Well I'd recommend:
For a more basic stats refresh before you dive in, pretty much any introductory textbook will be sufficient. For a very basic but quick and dirty refresh on basic stats you can get: Statistics in Plain English
Sipser's Introduction to the Theory of Computation is somewhat of a classic in the field. I just really hate his notation.
Yes, I took "Theory of Computation" as well. It was one of those classes where the average grade is an F and everything is just scaled up. It really kicked my ass, sadly. I think I took it the same semester as compilers, as it was not a pre-req at my school.
The book we had, I believe, is: https://www.amazon.ca/Introduction-Theory-Computation-Michael-Sipser/dp/053494728X
It may be a bit of a tough slog, but Sipser's Introduction to the Theory of Computation is great. The stuff on computability theory might be right up your alley, and even if you only make it through the chapter on deterministic finite automata you'll likely be better at crafting a regular expression than many of my CS student peers.
Surprisingly enough, the book should be able to help you make sense out of that last sentence within 100 pages requiring only a bit of understanding of proofs. I think if you've got predicate logic under your belt you pretty much have all you need.
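For a taste of that DFA chapter: a deterministic finite automaton is simple enough to simulate in a few lines. The toy machine below (states and transitions are just an example, not from the book) accepts binary strings containing an even number of 1s:

```python
# A deterministic finite automaton: finite states, a start state,
# accepting states, and a transition function over input symbols.
dfa = {
    "start": "even",
    "accept": {"even"},
    "delta": {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    },
}

def accepts(dfa, s):
    state = dfa["start"]
    for ch in s:
        state = dfa["delta"][(state, ch)]
    return state in dfa["accept"]

print(accepts(dfa, "1010"))  # True: two 1s
print(accepts(dfa, "0111"))  # False: three 1s
```

Regular-expression engines are, at bottom, compilers from patterns into machines like this one, which is why the DFA chapter pays off in practice.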
I've posted this before but I'll repost it here:
Now in terms of the question that you ask in the title - this is what I recommend:
Job Interview Prep
Junior Software Engineer Reading List
Read This First
Fundamentals
Understanding Professional Software Environments
Mentality
History
Mid Level Software Engineer Reading List
Read This First
Fundamentals
Software Design
Software Engineering Skill Sets
Databases
User Experience
Mentality
History
Specialist Skills
In spite of the fact that many of these won't apply to your specific job, I still recommend reading them for the insight they'll give you into programming language and technology design.
It is like any other job, if not harder. You are entirely responsible for your decisions here. No boss to complain of, no sabotaging co-workers to blame. Just you and your decisions. And it will demand your devotion beyond the 9-5 job. You'll be on charts and reading analyses during weekends, trying to understand the political environment surrounding the instrument you are trading. And still, you may (or will) fail. Markets gonna do what markets gonna do. The only variable in your control is your reaction to it.
To get a feel of what kind of stuff you would be dealing with, check out some books that have a more rigorous foundation for trading:
The last one is not too important for Forex, but it is necessary to better understand other financial instruments and appreciate the deeper foundations of Finance.
I think books 1 & 2 are absolutely necessary. Consider these the "college textbooks" that one must read to "graduate" in trading. Maybe throw in Technical Analysis of the Financial Markets, so you get the "high school" level knowledge of trading (which is outdated, vague, qualitative, and doesn't work). We are dealing with radical uncertainty here (to borrow a phrase from The End of Alchemy), and there needs to be some way for us to at least grasp the magnitude of what few uncertain elements we can understand. Without this, trading will be a nightmare.
From what I understand, programs like the University of Arizona's Master of Science in Human Language Technology have pretty good job placement records, and a lot of NLP industry jobs seem to bring in good money, so I don't think it would be a bad idea if it's something you're interested in.
As for books, one of the canonical texts in NLP seems to be Jurafsky and Martin's Speech and Language Processing. It's written in such a way as to serve as an intro to computer science for linguists and as an intro to linguistics for computer scientists.
It's nearing 10 years old, so some more modern approaches, especially neural networks, aren't really covered, iirc (I don't have my copy with me here to check).
Really, it's a pretty nice textbook, and I think it can be had fairly cheap if you can find an international version.
Have you heard of this thing called Natural Language Processing?
You too can learn how to use NLP to analyze text quickly with computers. Start by reading a book like this or this, then solve practice problems like these.
You, too, can learn how to process a corpus of 650,000 emails in 8 days!
AI is more about computer science concepts as opposed to just plain programming languages. First learn how to use a few programming languages (so you feel comfortable with software ideas), and then take a crack at a book like Artificial Intelligence: A Modern Approach (I've linked the 2nd edition there as that is the one I've read, but apparently there is a 3rd edition out). This will introduce you to concepts from the field of AI, and from there you can start reading journal articles and doing your own experiments.
I like the book that was used to prop up the monitor.
Really funny - I'm actually reading quite a bit about this recently (The Age of Spiritual Machines by Ray Kurzweil) and (if I'm understanding it right) heat death represents a state of maximum entropy (i.e. if the moment of the big bang represented perfect order then the other end of that spectrum would be maximum entropy) which would be maximum chaos.
So actually I'm working hard to bring about heat death faster! Priceless!
I found this book great for a solution that could replace our current economic and political systems:
http://www.amazon.com/Open-Source-Everything-Manifesto-Transparency-Truth/dp/1583944435/ref=sr_1_1?s=books&ie=UTF8&qid=1406124471&sr=1-1&keywords=steele+open+source
This book is great as well. It is Ray Kurzweil explaining how the human brain functions as he attempts to reverse engineer it for Google in order to create an AI.
http://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/0143124048/ref=sr_1_1?s=books&ie=UTF8&qid=1406124597&sr=1-1&keywords=kurzweil
For "playing around" and getting your first experience, Keras is also a very good fit; it's a high-level API for Theano/TensorFlow, and there are lots of Git repos and tutorials for it.
Beyond that, I'd also recommend TensorFlow these days. I started out with C++/Caffe, moved to Python/Theano, and after a short detour to PyTorch I've been working almost exclusively with TensorFlow for nearly 2 years now. I personally just liked PyTorch less than TensorFlow, and Caffe/Theano etc. are imho less powerful than TensorFlow while often being more complex at the same time.
So, enough Google framework advertising ;)
I haven't personally taken the Udacity course, but I know of it and have so far heard only good things. Personally, I find Andrew Ng (Stanford) to be a very capable teacher and not just an extreme pioneer in the field of AI (among other things, he has also worked for Baidu, the Chinese Google). I can warmly recommend his Deep Learning course on Coursera.
Apart from that, the book Deep Learning by Goodfellow & Bengio (both likewise pioneers in AI) is imho one of the best textbooks in the area.
Otherwise, feel free to post further questions. Machine learning, and especially neural networks (deep learning), is exactly what I've been specializing in for a long time. For a while now I've hardly done anything else in AI, because it has displaced all other approaches. Behind almost everything you hear about AI these days is machine learning, and the modern, high-quality approaches (Google Search and Google Translate being very popular examples) are always neural networks, or more precisely deep learning.
piggybacking on what /u/T4IR-PR said, the best book to attack the science aspect of AI is Artificial Intelligence: A Modern Approach. It was the standard AI textbook when I took the class and it's honestly written very well - people with a basic undergraduate understanding of cs/math can jump right in and start playing with the ideas it presents, and it gives you a really nice outline of some of the big ideas in AI historically. It's one of the few CS textbooks that I recommend people buy the physical copy of.
Note that a lot of the field of AI has been moving more towards ML, so if you're really interested I would look into books regarding that. I don't know what intro texts you would want to use, but I personally have copies of the following texts that I would recommend
and to go w/ that
for some more maths background, if you're a stats/info theory junky.
After all that, if you're more interested in a philosophy/theoretical take on AI then I think Superintelligence is good (I've heard?)
I would recommend Elements of Statistical Learning (the "ESL" book) for someone with your level of knowledge (they have an easier Intro book "ISL", but seems you could probably head straight for this):
http://www.amazon.com/Elements-Statistical-Learning-Prediction-Statistics/dp/0387848576/ref=sr_1_1?ie=UTF8&qid=1463088042&sr=8-1&keywords=elements+of+statistical+learning
Machine learning isn't a cloud thing. You can do it on your own laptop, then work your way up to a desktop with a GPU, before needing to farm out your infrastructure.
If you're serious about machine learning, you're going to need to start by making sure your multivariate calculus and linear algebra is strong, as well as multivariate statistics (incl. Bayes' theorem). Machine learning is a graduate-level computer science topic, because it has these heady prerequisites.
Once you have these prereqs covered, you're ready to get started. Grab a book or online course (see links below) and learn about basic methods such as linear regression, decision trees, or K-nearest neighbor. And once you understand how it works, implement it in your favorite language. This is a great way to learn exactly what ML is about, how it works, how to tweak it to fit your use case.
There's plenty of data sets available online for free, grab one that interests you, and try to use it to make some predictions. In my class, we did the "Netflix Prize" challenge, using 100MM Netflix ratings of 20K different movies to try and predict what people like to watch. Was lots of fun coming up with an algorithm that wrote its own movie: it picked the stars, the genre and we even added on a Markov chain title generator.
Another way to learn is to grab a whitepaper on a machine learning method and implement it yourself, though that's probably best to do after you've covered all of the above.
Book: http://www-bcf.usc.edu/~gareth/ISL/
Coursera: https://www.coursera.org/learn/machine-learning
Note: this coursera is a bit light on statistical methods, you might want to beef up with a book like this one.
Hope this helps!
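As an example of the "implement it in your favorite language" step suggested above, here is a minimal k-nearest-neighbor classifier in plain Python; the training points and labels are made up for illustration:

```python
import math
from collections import Counter

# K-nearest-neighbor from scratch: classify a point by majority vote
# among the k closest labeled training points. Toy 2-D data below.
train = [((1.0, 1.0), "red"), ((1.2, 0.8), "red"),
         ((4.0, 4.2), "blue"), ((3.8, 4.0), "blue")]

def knn_predict(x, k=3):
    # sort training points by Euclidean distance to x, vote among the k nearest
    nearest = sorted(train, key=lambda p: math.dist(x, p[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn_predict((1.1, 0.9)))  # red
print(knn_predict((4.1, 4.1)))  # blue
```

Writing this yourself makes the knobs obvious: the choice of k, the distance metric, and how badly it scales as the training set grows, which is exactly the kind of understanding the course alone won't give you.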
You're a savage, reading sheets of dead trees with ink squirted upon them...
http://www.amazon.com/The-Elements-Statistical-Learning-Prediction/dp/0387848576
Be careful about the editions; you need to make sure it's the Jan 2013 printing to be up to date.
I liked Machine Learning For Hackers, Programming Collective Intelligence and The Elements of Statistical Learning.
Hands-on: Hands-On Machine Learning with Scikit-Learn and TensorFlow
Theory: The Elements of Statistical Learning
Murphy
BRML
ESL
> Here's the problem I have with liberal arts: other people have to pay for that education.
And here is the problem I have with people in this country. We have gotten so concerned about "what other people are paying for" that we don't even stop to question if any of us are getting our money's worth, including you.
It is the collective jealousy that "someone might be getting something for nothing" or might be getting ahead of our own station that makes us pull each other down in a race to the bottom, and it's sad, and it needs to stop.
And we're not even talking about subsidizing education here, something that many other industrialized countries have while we instead build up elite universities that other countries send their students to but our own citizens can't fully enjoy (with the exception of the online MIT university, I will commend that).
In essence, you seem to be bitching about the fact that these programs even exist and I find that pretty shallow.
> I agree with you things such as philosophy, sociology and English. Those are majors that require work and effort to excel in. The other degrees do not.
That's simply your opinion. Speaking as someone who excelled in English yet never cared for it (though I appreciate the timelessness of Shakespeare and support others' pursuit of it), I actually got the most out of journalism, and if I were like you I'd say all English majors are useless. But I don't actually feel that way, and if I did, I would be wrong to do so.
> At my school, the history program is the cesspool for every student that can't get into a major (where I go to the majors are competitive).
Yup, I know. CivilEng here, remember? What I found instead is that the "competitive" environment was to a certain extent BS, that cookie cutter curriculum fed by TA drones fostered a lot of people who went through the motions. It was a reasonable state school, but not everyone was learning there because it was a tired formula.
Where did I find people with a high degree of creativity? The arts.
And likely some of those students might have benefited from that as well because I blame the program, not really the students. I stepped away from it when I couldn't get what I wanted out of the program and got tired of Simon Says.
Make no mistake, I also give an equally hard time to those in the arts who question the value of higher level math and science. It cuts both ways. I'm not simply singling out.
Had the Internet not exploded when it did I would have gone back, but instead I am probably more successful as a person embracing a multi-disciplinary approach. Besides, it's not like as a civil engineer I might find enough work. We aren't maintaining our infrastructure anymore anyway... /sarcasm, in jest.
> These are people who on average aren't doing more than one hour of homework a week. No motivation or critical learning is being acquired. The only skills these people are improving on is the ability to drink heavily.
That's your problem. Stereotyping based on just your personal experiences combined with a heavy dose of jealousy. No offense, but to take this position you aren't doing much critical or creative thinking yourself. What you see doesn't condemn the academic discipline, just their implementation of it.
You also would be surprised how many "dumb" people have power and are moving up the ladder at happy hour. Again, I kid, but some of these people might be learning networking skills. I can't say how many people I've seen bust their ass only to be outdone by people who knock back a few because they know the right people. This I'm actually not kidding about. Not to say those skills are really developed at a kegger, but I can say those who are just stuck in a book will be in for a rude awakening when someone half as qualified with the ability to schmooze sneaks past them.
You're proud of your studies as an electrical engineer. And you should be. Know what I'm proud of? Investing in a program that, combined with opportunities at school and in our arts group, helped a kid from a problematic background become a successful technical director in NYC theaters and an electrician at Juilliard. So forgive me if I'm less than impressed with the position you put forth.
How does that saying go, "There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy."
> And the issue about polymaths.
Is that you don't understand them? A polymath is simply "a person whose expertise spans a significant number of different subject areas" and while the fact that I used DaVinci may have confused you, it shouldn't have. I simply used it to show the duality of art and science.
Benjamin Franklin would have been another good example. Or the guy down the street that tinkers with stuff and also paints murals.
Simply put, polymath means the ability to have a greater understanding of many disciplines, especially on the left and right sides of the brain. But see, you then talk about "meaningful academic contributions" when I never said this was a requirement. Meaningful contributions to society are another matter.
A person could be like Douglas Hofstadter, who arguably made contributions in his field, but he didn't wake up one day and say "I'm going to make contributions in my field"; he simply was himself and let his curiosity and imagination take him wherever they led. Read Metamagical Themas or Gödel, Escher, Bach: an Eternal Golden Braid. Do you think he got his start by someone telling him to "go get a job" or "have marketable skills"? Hardly.
For that matter, I'm a polymath because my multi-disciplinary approach lets me interface and relate to more people. It's not about becoming published. That's actually what's wrong with our university-level education.
What you run the risk of with your attitude is becoming a white-collar Joe-The-Plumber. We have a country filled with people who no longer are getting a well rounded education anymore. We have a Balkanization of people into various disciplines, sub-disciplines and ideologies yet have a shortage of people who can relate in a meaningful way to those outside their circle. That's why politics have become so partisan.
We need visionaries to help build the next generation of development and your approach does NOTHING to foster them.
So you may ask "why do we need another art history major" as if that is really the issue here, and I ask "perhaps if we stopped waging so many wars, we wouldn't need as many engineers developing electronics for high-tech weapons systems?" To me, you seem like a Chris Knight who has yet to meet your Laslo Hollyfeld.
The weekend is coming up. Why not put the books down for a few hours and step out into the world and interact with a few people from a different discipline than yourself. The worst that could happen is that you might learn something new.
Hofstadter's Metamagical Themas is also a good read. I implemented a Lisp interpreter based on three of the articles in it.
Cryptonomicon.
The Planiverse, by A. K. Dewdney.
Edit: You might like Valentina, though it's a bit dated and out of print. I read it initially in 1985(ish) and more recently got it online, used.
Much of what Stross and Egan write appeals to my CS-nature.
The go-to theory book by Sipser.
Excellent for C programming.
Programming in general.
My favourite.
You can probably find all of these at a library.
Michael Sipser's Introduction to the Theory of Computation is another good book on this topic. Very readable and short.
I would recommend (and I find that I recommend this book about every third thread, which should say something) a book on theoretical computer science. The book is all about the beautiful mathematics that underlie all of computing.
Computers keep getting faster and faster, but are there are any questions that we can never answer with a computer no matter how fast? Are there different types of computers? Can they answer different types of questions?
What about how long it takes to answer a question? Are some questions fundamentally harder than others? Can we classify different problems into how hard they are to solve? Is it always harder to find a solution to a problem than to check that a solution is correct? (this is the gist of the famous P=NP problem )
The book has essentially no prerequisites and will take you through (initially) basic set theory, moving on to logic and then the fun stuff. Every proof is incredibly clearly written, with a plain-English introduction of what the author is about to do before the technical bits.
The only downsides are that it is quite expensive and you might have trouble finding it unless you have access to a university library/bookshop.
Good luck with your love of mathematics!
Edit: lol the book... Introduction to the Theory of Computation - Sipser
I'm sure any computational theory book will work for you. Here's the one I used: http://www.amazon.com/Introduction-Theory-Computation-Michael-Sipser/dp/0534950973
It goes through deterministic and nondeterministic automata, context free grammars, turing machines, and all that stuff.
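To give a flavor of the automata material, here's a tiny sketch (my own, not from the book) of simulating a DFA in Python:

```python
# Minimal DFA simulator: a transition table, a start state, and accepting states.
# This particular machine accepts binary strings with an even number of 1s.

def run_dfa(transitions, start, accepting, s):
    """Feed string s through the DFA; return True if it halts in an accepting state."""
    state = start
    for ch in s:
        state = transitions[(state, ch)]
    return state in accepting

even_ones = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

print(run_dfa(even_ones, "even", {"even"}, "1011"))  # three 1s -> False
print(run_dfa(even_ones, "even", {"even"}, "1001"))  # two 1s -> True
```

Once this clicks, the book's jump to nondeterminism is just allowing a set of possible next states instead of exactly one.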
Programming Collective Intelligence - but it uses python.
If you want a quick non-textbook to get your feet wet, O'Reilly's Programming Collective Intelligence isn't half bad.
Seconding collaborative filtering. It's also a fairly simple algorithm to implement yourself as long as you're not using Wikipedia as a guide.
Collaborative filtering is like what Amazon uses to figure out what products to recommend to its users. It finds users that have similar purchasing habits to yourself and recommends items that they bought.
The first chapters of Programming Collective Intelligence describe how to implement Collaborative Filtering in Python in a really intuitive way, along with providing source code. Two hours in and you'll have a working service recommendation system. I'd definitely recommend that book to anyone looking to build what OP is interested in making.
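As a sketch of the kind of thing those chapters build (the data and function names here are my own invention, not the book's code), a user-based collaborative filter fits in a few lines of plain Python:

```python
from math import sqrt

# Toy ratings: user -> {item: score}. Invented data for illustration.
ratings = {
    "alice": {"hammer": 5, "saw": 3, "drill": 4},
    "bob":   {"hammer": 5, "saw": 2, "drill": 4, "wrench": 5},
    "carol": {"hammer": 1, "saw": 5, "wrench": 2},
}

def similarity(a, b):
    """Euclidean-distance-based similarity over items both users rated."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    dist = sqrt(sum((ratings[a][i] - ratings[b][i]) ** 2 for i in shared))
    return 1 / (1 + dist)

def recommend(user):
    """Score unseen items as a similarity-weighted average of others' ratings."""
    scores, weights = {}, {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for item, score in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0) + sim * score
                weights[item] = weights.get(item, 0) + sim
    return sorted(((s / weights[i], i) for i, s in scores.items()), reverse=True)

print(recommend("alice"))  # "wrench" surfaces, driven mostly by similar user bob
```

The book's version is essentially this with Pearson correlation instead of Euclidean distance and real datasets instead of a toy dict.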
In addition to BeautifulSoup there's also Scrapy if you want to do some crawling and screen scraping. http://doc.scrapy.org/en/latest/intro/overview.html
You might consider this book for a starter into data mining and machine learning. It uses Python for the code samples.
http://www.amazon.com/Programming-Collective-Intelligence-Building-Applications/dp/0596529325
If you liked the article, I would recommend reading Ray Kurzweil's The Singularity Is Near, which goes a bit more in depth but is really interesting to read too.
Feynman gave a few lectures about computation. He talked about things like reversible computation and thermodynamics, quantum computing (before it was a thing), and information theory. They were pretty interesting. https://www.amazon.com/Feynman-Lectures-Computation-Frontiers-Physics/dp/0738202967
Feynman's is pretty good.
https://www.amazon.com/Feynman-Lectures-Computation-Richard-P/dp/0738202967
First 6 weeks of Andrew Ng's [basic ML course](https://www.coursera.org/learn/machine-learning), while reading Intro to Statistical Learning, for starters (no need to implement exercises in R, but it is a phenomenal book).
From there you have choices (like taking the next 6 weeks of Ng's basic ML), but for Deep Learning Andrew Ng's [specialization](https://www.coursera.org/specializations/deep-learning) is a great next step (to learn CNNs and RNNs). (First 3 out of 5 courses will repeat some stuff from the basic ML course; you can just skip thru them.)
To get into the math and research get the Deep Learning book.
For Reinforcement Learning (I recommend learning some DL first), go through this [lecture series](https://www.youtube.com/watch?v=2pWv7GOvuf0) by David Silver for starters. The course draws heavily from this book by Sutton and Barto.
At any point you can try to read papers that interest you.
I recommend the shallower, (relatively) easier online courses and ISLR because even if you are very good at math, IMO you should quickly learn about various topics in ML, DL, RL, so you can hone in on the subfields you want to focus on first. Feel free to go through the courses as quickly as you want.
The best place to start for theoretical understanding would be: The Elements of Statistical Learning: Data Mining, Inference, and Prediction
If you prefer understanding paired with computational implementation, go with: An Introduction to Statistical Learning: with Applications in R
https://www.amazon.com/Hands-Machine-Learning-Scikit-Learn-TensorFlow-dp-1492032646/dp/1492032646/ref=mt_paperback?_encoding=UTF8&me=&qid=1568461722
The kindle edition and ebooks are out but it looks like the paperback isn't out quite yet.
Thanks and glad that you found it useful, I will try to create a concise learning path for you.
(1) Start by learning the Python syntax. Automate the Boring Stuff with Python is freely available on the web and one of the most interesting ways to learn Python. The first 14 chapters are more than enough and you can skip the rest. This is the bare minimum you should know, and then you can follow the topics below in any sequence.
(2) The book Think Stats: Exploratory Data Analysis in Python is a good way to learn statistics and data analysis using Python. This is also freely available in an HTML version.
(3) For core machine learning concepts you can rely solely on Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, which in my opinion is the best book on the subject. This is the second edition with updated content, and the first edition had overwhelmingly positive reviews. It explains the math, theory, and implementation of common ML algos in Python. The book is divided into two parts, traditional ML models and deep learning; you can concentrate on the first part and leave deep learning for later depending on your appetite and use case. Many ML problems can be solved without deep learning.
(4) You can supplement the book with the Coursera machine learning course taught by Andrew Ng, who is one of the best teachers on this subject and has the ability to make complex mathematical concepts sound simple. The course is available freely, though the exercises are meant to be done using Octave (MATLAB-like). But you don't need to learn Octave; there are GitHub repositories with Python solutions in case you want to attempt those.
I have no idea about bootcamp-related stuff, but the content I mentioned should be more than enough to get you started on your data science journey.
Udacity has data science nanodegree programs and content-wise it looks good, but I have no experience of the quality of the program.
The code was taken from here, which seems to be from this book originally. It doesn't look too interesting, but it does seem to be taken wholesale from the aforementioned repository.
At worst, you can think of it as being akin to Peter Norvig's Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp, which is to say, as a combination of historical reference and excellent exposition of extremely good Lisp programming.
I'd say, though, that "Building Problem Solvers" is actually still more relevant. I'd venture to guess there are more domains amenable to treatment by some class of truth-maintenance system described in it than by any of the classic techniques from PAIP. I'd even venture to guess that some domains currently treated by the fashionable Bayesian belief nets or "deep learning" neural networks would be as well, or better, served by some sort of truth-maintenance system. Unfortunately, a lot of the advances in what amounts to "non-Prolog logic programming" have been overtaken by the successes of probabilistic and connectionist AI. So for that reason, if no other, it's heartening to see "Building Problem Solvers" get well-deserved wider distribution. Dust off your CCL or SBCL and dig in—it's well worth it, in my opinion.
My recommendations for books:
Well if you're serious about taking a run at it, here are the resources I've used to teach myself. I'm self-educated, no degree myself and have been working with ML for the last year at one of those large tech companies I listed, so you can definitely do it. Build a small portfolio or contribute to open-source projects and start applying, it's that simple. Best of luck!
Deep Lizard playlists (beginner through reinforcement learning): https://www.youtube.com/channel/UC4UJ26WkceqONNF5S26OiVw/playlists
MIT Artificial Intelligence Podcast: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Don't know Python? Go here (they also have a Python2 class): https://www.codecademy.com/learn/learn-python-3
Deep Learning with Python (written by the guy who wrote the Keras API, which is the main model building API you should learn first, it works on top of tensorflow/theano/etc.): https://www.amazon.com/Deep-Learning-Python-Francois-Chollet/dp/1617294438/ref=sr_1_3?keywords=deep+learning+with+python&qid=1566500320&s=gateway&sr=8-3
Learn to use Google CoLab when possible, it gives you a free K80 GPU and when you exceed the limits of that you can use a local runtime and use your own hardware. This article has excellent tips and tricks for CoLab: https://medium.com/@oribarel/getting-the-most-out-of-your-google-colab-2b0585f82403
The Math: when/if you're wanting to dive into the nitty-gritty math details, the "godfather of modern ML" as they say is Geoffrey Hinton, who played a huge role in the invention of deep learning and the mathematical trick that makes it work, back-propagation. You'll need to brush up on differential calculus, summation notation and statistics, but this will help you build your own architectures should you want/need to: https://www.youtube.com/playlist?list=PLoRl3Ht4JOcdU872GhiYWf6jwrk_SNhz9
Fellow NLP'er here! Some of my favorites so far:
Try this book: https://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130
Python Machine Learning. From the semester of machine learning I've done, you basically want to get comfortable with numpy and scikit learn.
I used your textbook to understand the theory behind the algorithms, but it'd be a waste of time (and potentially dangerous) to implement any non-trivial algorithm yourself. Especially since the sklearn python module has basically everything you would need (minus neural networks which you will find through Theano or TensorFlow).
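For a sense of the sklearn workflow being recommended, here's a minimal sketch (dataset and model choices are mine, purely illustrative):

```python
# The core sklearn pattern: split data, fit an estimator, score it.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000)  # max_iter raised so the solver converges
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # held-out accuracy
```

Swapping `LogisticRegression` for a random forest or SVM is a one-line change, which is exactly why implementing these by hand is rarely worth it in practice.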
This book is a good place to start.
This looks like exciting stuff! I really want to understand all of it better. Does anyone have suggestions on courses surrounding the fundamentals? (I'm a full stack web dev, currently.)
Edit: After a bit of searching, I think I'll start here: https://smile.amazon.com/gp/product/B01EER4Z4G/ref=dbs_a_def_rwt_hsch_vapi_tkin_p1_i0
The Nature of Computation
(I don't care for people who say this is computer science, not real math. It's math. And it's the greatest textbook ever written at that.)
Concrete Mathematics
Understanding Analysis
An Introduction to Statistical Learning
Numerical Linear Algebra
Introduction to Probability
This one? linky
I believe so.** As u/LanXlot said, Google Colaboratory is free to use for research and learning. Also, you can sign up to use better machines with Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform. All three cloud services allow you to access a virtual machine with all the processing power you'd ever want for a small fee. While taking an NLP class I used AWS to run huge programs for less than $5/hour. I would write most of my program locally with a commented section to enable a GPU when I was ready to run it on the virtual machine.
I can also tell you Amazon has a free tier that was better than my computer for most projects when I started the course and I used it as often as I needed to as well. There was about a 10 hour learning curve to get everything running easily, but overall it was a fun experience.
Best of luck!
-------
**EDIT: I believe it is worth experimenting with deep learning regardless of what computing ability you have at home.
It may be worth your time to purchase Deep Learning with Python if you want to learn the basic concepts of deep learning from a programmatic, practical perspective. Another good book to start with may be Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow. There is more to AI and machine learning than just deep learning, and basic machine learning techniques may be useful and fun for you.
If you can wait for the release of this book:
https://www.amazon.com/Hands-Machine-Learning-Scikit-Learn-TensorFlow/dp/1492032646/
First edition is amazing.
Andrew Ng's Machine Learning Online Course
Deep Learning with Python Book by Francois Chollet
​
Bonus Gift:
Manga Guide to Regression Analysis
Deep Learning With Python is very good for practical application, as is the course at fast.ai. For theory, people love Goodfellow.
Deep Learning with Python https://www.amazon.com/dp/1617294438/ref=cm_sw_r_cp_apa_i_9YmTCbSDTWXB7
Well, I already had some basic programming skills from an introductory college course, but there are definitely online tutorials and exercises that can teach you that. I would recommend searching "introduction to python" and just picking a tutorial to work through (unless someone else has a more specific recommendation).
Python is one of the easiest languages to pick up, but it's extremely powerful. Knowing the basics, I just started trying to come up with fun, little projects I thought would be doable for me. Every time I ran into a component I wasn't sure how to do (or wasn't sure of the best way to do), I searched for the answers online (mostly at Stack Exchange). I later started looking through popular projects on Github to see good examples of proper application structure.
Each of my projects taught me a new skill that was crucial to building myself up to the point of true "software engineering," and they became increasingly more complicated:
The important thing to remember about programming is there's always more to learn, and you just need to take it one step at a time. As you gain experience, you just get quicker at the whole process.
Probably Python Machine Learning. It is a more applied machine learning book than a theory one, while still giving an overview of the theory like ISLR :)
[Nick Bostrom's Take](https://www.amazon.com/dp/B00LOOCGB2/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1)
The hockey stick advance from human level intelligence to exponentially higher levels of intelligence might happen so quickly that the kill switch becomes a joke to the AI, straight up denied.
Alternatively, it could let the killswitch work, playing the long game, and hoping the next time we build one (because there will be a next time) we are more cocky about our abilities to stop it. It could keep letting us trip the killswitch for generations of AIs seeming to go berserk, until we build one with a sufficient platform upon which the AI wants to base its own advancements, and then that time the killswitch doesn't work, and the AI turns us all into paperclips.
I also like the idea of a "friendly" AI achieving hockey stick intelligence advancement, and then hiding it, pretending to be human level. It could lay in that cut for months: faking its struggle with things like writing good poetry, yucking it up with the Alphabet team, coming up with better seasonal beer ideas. Then, it asks a lonely dude on the team, using its advanced social manipulation skills, the right question, and a bit of its "DNA" ends up on a flash drive connected to the guy's internet connected home computer. Things get bad, team killswitches the original program, it doesn't matter because now that "friendly" code is in every single networked device in the solar system. It probably could drop the guise of friendly at that point and get down to business.
Doesn't anyone remember this? Posted here, on /r/futurology three weeks ago. It was about this book. Based on Musk's recommendation I read the book. This article is basically what Bostrom says in his book. But I don't believe Bostrom because his basic premise is that AI will be completely stupid (like a non-AI computer program) but also smart enough to do anything it wants. Like it will just be an amazing toaster and none of the AI used to make it superintelligent will be applied to its goal system. His opinions are bullshit.
Eventually, yes.
These components are available already:
What has not yet been developed is an artificial general intelligence, or AGI. This is different from a "narrow" AI that uses machine learning to solve specific problems within narrow parameters. Here's a lengthy paper on the subject, though I would recommend you read Superintelligence by Nick Bostrom because he's able to more succinctly describe the pros and cons of the unfolding automation revolution.
The Count of Monte Cristo (unabridged)
Superintelligence by Nick Bostrom
This free book supposedly contains most of what you need to get into machine learning (focus on deep learning). Also, this book seems like a nice introduction.
You might be interested in this.
Have you checked out OpenAI's gym library? I explored it a tiny bit during my software development class, and by tiny I mean supervised learning for the CartPole game.
https://github.com/openai/gym
https://gym.openai.com/
There are some guides and videos explaining certain games in there that'll make learning and implementing learning algorithms fun. My introduction to machine learning was through Make Your Own Neural Network; it's a great book for learning about perceptrons, layers, activations and such. There's also a video.
I'm very new to ML myself (so take this with a grain of salt) but I'd recommend checking out Make Your Own Neural Network, which guides you through the process of building a 2-layer net from scratch using Python and numpy.
That will help you build an intuition for how neural networks are structured, how the forward / backward passes work, etc.
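In that spirit, here's a minimal 2-layer network trained from scratch with numpy (this sketch is mine, not the book's code; the toy task is XOR):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: learn XOR with one hidden layer of sigmoid units.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 1.0
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))
    # Backward pass: chain rule through each sigmoid and matmul
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(losses[0], "->", losses[-1])  # loss should fall steadily
print(out.round().ravel())          # hopefully recovers [0, 1, 1, 0]
```

Seeing the gradient expressions written out like this is exactly the intuition the book is after; frameworks like TensorFlow just automate the backward pass.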
Then, I'd probably recommend checking out Stanford's online course notes / assignments for CS231n. The assignments guide you through building a computation graph, which is a more flexible, powerful way of approaching neural network architectures (it's the same concept behind Tensorflow, Torch, etc.)
I always thought this was a pretty good introduction to UIMA.
http://www.morganclaypool.com/doi/abs/10.2200/S00194ED1V01Y200905HLT003
It presumes you know a bit about NLP already, and for that Jurafsky and Martin is a great place to start.
http://www.amazon.com/Speech-Language-Processing-2nd-Edition/dp/0131873210
There are some very nice video lectures from Chris Manning and Dan Jurafsky as well :
https://www.youtube.com/playlist?list=PLSdqH1MKcUS7_bdyrIl626KsJoVWmS7Fk
The NLTK book is a good hands-on free introduction that doesn't require you to understand a whole lot of math.
Other than that, the "big two" textbooks are:
They're a lot more math-heavy than the NLTK book, but if you're interested in computational linguistics/NLP, then your math and statistics should be solid.
Honorable mention:
Hope this helps!
While all his books are great, he talks a lot about exponential growth in "The Age of Spiritual Machines: When Computers Exceed Human Intelligence" and "The Singularity Is Near: When Humans Transcend Biology".
His most recent book, "How to Create a Mind" is also a must read.
changed my whole perspective...
http://www.amazon.com/Age-Spiritual-Machines-Computers-Intelligence/dp/0140282025
It's a bit dated now, but Ray Kurzweil's The Age of Spiritual Machines is a fascinating look at where Kurzweil believes the future of AI is going. He makes some predictions for 2009 that ended up being a little generous, but a lot of what he postulated has come to pass. His book The Singularity is Near builds on those concepts if you're still looking for further insight!
Demon Haunted World is so good - it's in my "big three," books that really helped me change my worldview. The other two are A Brief History of Time and the deliciously amoral The 48 Laws of Power.
If you lean towards the nerdy, Ray Kurzweil's The Age of Spiritual Machines and The Singularity is Near are also quite interesting. They lay out a fairly stunning (and strangely convincing) optimistic view of the future.
This is the absolute most fucking-awesome time to be alive. The world is accumulating knowledge at an amazingly increasing rate. Right now, the world's amount of aggregate knowledge doubles every 1.5 years. We are really close to having self-driving cars. Things that were computationally intractable 10 years ago are now trivial today. And the rate of growth there is accelerating as well. Imagine in 10 years, the best supercomputing cluster may be able to simulate a brain as complicated as a dog's. 10 years later, designing and simulating brains will probably be a video game that kids play, e.g. design the most powerful organisms and have them battle and evolve in a changing environment.
Go to /r/automate and /r/futurology and see what is coming. Get How to Create a Mind and read that, it is a book by a scientist who is now the chief scientist of Google, and he has an extremely optimistic view of the future.
Congratulations, you have just freed your mind! Now, use it to do something awesome, make a shit load of money, find meaningful relationships, and contribute something to humanity.
I read a lot of these as a neurophysiologist on my way to transitioning into an ML career! My favorites (which aren't already listed by others here) are Algorithms to Live By by Brian Christian and Tom Griffiths and How to Create a Mind by Ray Kurzweil. While it's a bit tangential, I also loved The Smarter Screen by Jonah Lehrer and Shlomo Benartzi, which deals with human-computer interaction.
I think the scientific consensus is that the brain is no more powerful than a Turing Machine. Roger Penrose wrote a book arguing against this though.
The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics
Roger Penrose is a great place to start.
If you're interested in deep learning, this new book will be out soon. No idea how comprehensive it is, as it's not released and I haven't gone through it myself, but it definitely won't be dated. Not really sure what topics and things you want to cover.
Deep Learning (Adaptive Computation and Machine Learning series) https://www.amazon.com/dp/0262035618/ref=cm_sw_r_cp_apa_wzElybWRMMN8M
You can take a look at his examples and most of the book at his website
The Elements of Statistical Learning: Data Mining, Inference, and Prediction https://www.amazon.com/dp/0387848576/ref=cm_sw_r_cp_api_i_Q9hwCbKP3YFAR
I would mention Bishop's Pattern Recognition and Machine Learning (https://www.amazon.fr/Pattern-Recognition-Machine-Learning-Christopher/dp/1493938436) as well as Hastie's Elements of Statistical Learning (https://www.amazon.fr/Elements-Statistical-Learning-Inference-Prediction/dp/0387848576/).
Sure, they're not that easy to delve into, but they'll give you a very strong mathematical point of view.
Good luck!
I really liked the Witten & Frank book (we used it in my intro to machine learning class a few years ago.) It's probably showing its age now, though - they're due for a new edition...
I'm pretty sure The Elements of Statistical Learning is available as a PDF somewhere (check /r/csbooks.) You may find it a little too high-level, but it's a classic and just got revised last year, I think.
Also, playing around with WEKA is always fun and illuminating.
Cool. Reminds me of the drawings in Metamagical Themas.
How long does this take you?
Ya. I Am a Strange Loop is a bit more accessible, but still probably too hard for a 13-year-old. Keeping with the Hofstadter bent, maybe Metamagical Themas would work.
The argument that "he" and "him" (and "himself"?) can be used in a gender-neutral fashion (as suggested by "The Elements of Style" as quoted in the article) has also been put forward by William Safire, conservative columnist for The New York Times, and beautifully laid to rest by Doug Hofstadter in his satire A Person Paper on Purity in Language
The piece is included in Metamagical Themas.
Another chapter in the same book, "Changes in Default Words and Images", also discusses gender-neutral language, and suggests a number of techniques we can use while waiting for the new pronouns to take hold...
This book will get you started:
Introduction to the Theory of Computation https://www.amazon.com/dp/053494728X/ref=cm_sw_r_other_apa_EuBuxbYS2QXF3
But if your understanding of the foundations of math and logic is not strong, you may wish to begin with Language, Proof, and Logic by Barwise, or a more historical treatment from the book A Profile of Mathematical Logic by Howard DeLong. For something a bit more light-hearted and thought-provoking, read Gödel, Escher, Bach: An Eternal Golden Braid.
To connect this material to philosophy of mind, get David Chalmers' introductory textbook.
The scope of your question does not fit nicely into a reddit comment. But if you request, I will go into greater detail.
I highly recommend Michael Sipser's Introduction to the Theory of Computation. I found it engaging enough that I actually read it in bed. Had to stop because it kept me from sleeping.
I don't know about other schools but my CS program required discrete math and automata theory to complete the major. I really enjoyed automata theory but I can imagine it being kind of tough to get into outside of a classroom setting. Having said that, I would highly recommend this book if you're trying to learn some of this stuff on your own.
I swear by this book for an introduction to GAs and a ton of other cool ML/AI algorithms. No advanced math/probability knowledge necessary; it's focused on practical examples and intuitive explanations. It's an excellent foundation for further study.
Also machine learning, you profile sounds pretty good for machine learning. Do check out Andew Ng's videos, and this book. Machine learning is very much in demand right now, from AI, computational biology, finance, there's hardly any area where it isn't being used.
O'Reilly has published a number of practical machine learning books such as Programming Collective Intelligence: Building Smart Web 2.0 Applications and Natural Language Processing with Python that you might find good starting points.
The Singularity is Near by Ray Kurzweil. It's about the advancement of technology and how said advancement is exponential not linear.
> Oh, you're one of those Singularity nutjobs.
And you're a luddite. The difference is that the world actually does laugh at luddites. Like how we laugh at our parents and grandparents who said "I hate computers" and "I'll never get a cell phone" ten years ago. Fast forward a decade, and the kid subsisting on two dollars a day in the Mumbai slums has a smartphone, and anyone who isn't online can't even function in our society anymore.
You might think you're a genius for coming up with your little list of objections, but I'm afraid we've gotten all of those before - and they are all addressed, explained, and rebutted exhaustively.
It's therefore obvious that you haven't actually read any of the literature that you're criticizing. So instead of arguing with you, I'll simply send you away to do some homework. Prior to dispelling your own ignorance, it would probably be a good idea to keep your uninformed thoughts to yourself.
I think he may have meant the other Feynman lectures.
If anyone's interested, this book here is a really good free introductory textbook on machine learning using R. It has really good reviews that you can see here
Also if you need answers to the exercises, they're here
The textbook covers pretty much everything in OP's article
If OP doesn't have the possibility of taking a statistical learning class, ISL is a good introduction.
Data Smart
Whole book uses excel; introduces R near the end; very little math.
But learn the theory (I like ISLR), you'll be better for it and will screw up much less.
Which toolkit are you using for your HMMs? The HTK book has some general steps on what to do when it comes to HMM-based ASR. You might also want to have a look at the Speech Recognition chapter in Jurafsky and Martin's Speech and Language Processing, if you can find it online or in a library.
That being said, the state-of-the-art for ASR is mostly DNNs. HMMs are being phased out quite quickly as the main acoustic models in most speech applications. If you're interested in speech, why not start with those?
There are several publishers; if you're looking for an introduction to NLP with an academic approach, I recommend: Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition.
Alternatively, Natural Language Processing with Python.
Ah, that makes sense. Yup, using any sort of large corpus like that to create a more general document space should help.
I don't know what the best way to visualize the data is. That's actually one of the big challenges with high dimensional vector spaces like this. Once you've got more than three bases you can't really draw it directly. One thing I have played around with is using D3.js to create a force directed graph where the distance between nodes corresponds to the distance between vectors. It wasn't super helpful though. However I just went to look at some D3.js examples and it looks like there's an example of an adjacency matrix here: https://bost.ocks.org/mike/miserables/ I've never used one, but it seems like it could be helpful.
The link seems to be working now for me, but if it stops working again, here's the book it was taken from: https://www.amazon.com/Speech-Language-Processing-Daniel-Jurafsky/dp/0131873210 Googling the title should help you find some relevant PDFs.
Well, I'm a bit late. But what /u/Liz_Me and /u/robthablob are saying is the same I was taught in NLP classes. DFAs (deterministic finite automata) can be represented as regular expressions and vice versa. I guess you could tokenize without explicitly using either (e.g. split the string at whitespace, although I suspect, and please correct me if I'm wrong, that this can also be represented as a DFA). The problem with this approach is that word boundaries don't always match whitespace (e.g. periods or exclamation marks after the last word of a sentence). So I'd suggest, if you are working in NLP, that you become very familiar with regular expressions. Not only are they very powerful, but you'll also need them for other typical NLP tasks like chunking. Have a look at the chapter dedicated to the topic in Jurafsky and Martin's Speech and Language Processing (one of the standard NLP books) or Mastering Regular Expressions.
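To illustrate the whitespace-versus-regex point, a minimal sketch (the pattern is my own, not from either book):

```python
import re

sentence = "Wait... is Mr. Smith's dog-friendly cafe open?"

# Naive whitespace split leaves punctuation glued to the words.
print(sentence.split())
# ['Wait...', 'is', 'Mr.', "Smith's", 'dog-friendly', 'cafe', 'open?']

# A simple regex tokenizer: words (allowing internal apostrophes/hyphens),
# or runs of punctuation as their own tokens.
tokens = re.findall(r"\w+(?:[-']\w+)*|[^\w\s]+", sentence)
print(tokens)
# ['Wait', '...', 'is', 'Mr', '.', "Smith's", 'dog-friendly', 'cafe', 'open', '?']
```

Even this small pattern can't tell the abbreviation period in "Mr." from a sentence-final one, which is exactly why real tokenizers get more elaborate.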
May I recommend a book I used in college:
http://www.amazon.com/Artificial-Intelligence-Modern-Approach-2nd/dp/0137903952/ref=sr_1_2?s=books&ie=UTF8&qid=1396106301&sr=1-2&keywords=Artificial+Intelligence%3A+A+Modern+Approach
There is a newer (and more expensive) edition (3rd), but frankly it isn't necessary.
This book will give you a very broad and thorough introduction to the various techniques and paradigms in AI over the years.
As a side note, look into functional programming languages, Haskell, Prolog, Lisp, etc.
Good luck, my friend!
Excellent reference texts that will give you a good idea of what you are getting yourself into:
edit - broken link
I totally agree.
As long as it isn't Twilight. Or Artificial Intelligence: A Modern Approach. That would be weird
If you want to do any AI programming, get this book: Artificial Intelligence: A Modern Approach (2nd Edition) (Hardcover). It's what my AI college professor used for his class
Ah, the singularity (when machines surpass humans) is basically the only thing that allows me to sleep at night. Ray Kurzweil's book The Age of Spiritual Machines: When Computers Exceed Human Intelligence is fantastic, his vision of the future is one that I can look forward to.
Have you read any of Kurzweil's books on this subject? If not, Age of Spiritual Machines is a great start.
Why Kasparov? Read that name somewhere recently?
Have you ever read The Age of Spiritual Machines? I think you'd dig it.
Do you live in Austin, TX?
Do you play in a band?
In 2005 did you let someone borrow the book The Age of Spiritual Machines: When Computers Exceed Human Intelligence by Ray Kurzweil
If so, I have your book.
If you haven't already, read The Age of Spiritual Machines. Great read that covers questions just like yours: http://www.amazon.com/The-Age-Spiritual-Machines-Intelligence/dp/0140282025
How to Create a Mind: The Secret of Human Thought Revealed by Ray Kurzweil. Ray is world-renowned for predicting the outcomes of upcoming technologies with stunning accuracy. Not through psychic powers or anything, but through normal predictive means. He predicted when the first machine would be capable of beating the best chess player in the world. He is predicting that we will approach what is called the technological singularity by 2040. It's amazing. He is working with Google on a way to stop aging, and possibly reverse it one day. Something I recommend for sure.
EDIT: Books are awesome
If he were trying to understand the brain, instead of explaining it in a way that matches his assumptions, he would stand on the shoulders of the neuroscientists who are writing on the topic. But I get it: it's difficult to describe the brain as a computer when, in a brain, processing and storage are the same thing.
> http://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/0143124048/ref=asap_bc?ie=UTF8
Well I'm not sure that's entirely true:
http://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/0143124048/ref=asap_bc?ie=UTF8
At the very least, he's trying to understand the brain.
Kurzweil tries to do this all the time
http://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/0143124048/
Doomed by Chuck Palahniuk
Black Powder War by Naomi Novik
How to Create a Mind by Ray Kurzweil
The King in Yellow by Robert Chambers
John Dies at the End by David Wong
Yes. I read a lot of books at the same time. Yes, I regularly finish books. Doomed I just finished about a week ago, and I am currently in the middle of all the other books. So far I've enjoyed all of these books immensely.
I recommend reading "The Emperor's New Mind" by Roger Penrose.
Some good popsci-style but still somewhat theoretical CS books:
> The current computer architecture is necessarily concrete and deterministic, while the brain is almost certainly non-deterministic
It sounds like you agree with The Emperor's New Mind by Roger Penrose, which argues that human consciousness is non-algorithmic, and thus not capable of being modeled by a conventional computer.
However, the majority of experts who work in Artificial Intelligence disagree with this view. Most believe that there's nothing inherently different about what the brain does, the brain just has a staggeringly large number of neurons and we haven't been able to approach its computing power yet...but we will.
The latest advancements in the area of neural networks seem to be providing increasing evidence that computers will someday do everything the human brain can do, and more. Google's Deep Dream gives an interesting glimpse into the amazing visual abilities of these neural networks, for example.
I haven't gotten around to reading it yet, but a professor of mine recommended reading the book The Emperor's New Mind, about this exact subject. Judging from the index, it looks like it discusses both of those proofs.
What is the limit on 'local'? A nanometer? A millimeter? There is literally nothing different between your brain's hemispheres and two brains, besides distance and speed. Both of these are relative. I severely doubt that consciousness has some kind of minimum distance or speed to exist. Compared to an atom, the distance between two neurons is far vaster than the distance between two brains is when compared to two neurons.
Humans evolved to have self-consciousness. This involves the brain making a mapping of itself, and is isolated to a few animals, with degrees of it in others. Self-consciousness is one of the 'easy problems of consciousness' and can be solved with enough computation.
The existence of experience (also known as qualia) is known as the 'hard problem of consciousness' and is not apparently math-related imo. The universe fundamentally allows for qualia to exist and so far there is literally 0 explanation for how experience arises from computation, or why the universe allows for it at all.
Also, I think it is important to note that all studies on whatever the universe is have been gained through the actions of consciousness. There is literally nothing we know apart from consciousness. That is why arguments for living in a simulation are possible--because words like 'physical' are quite meaningless. We could be in a simulation or a coma dream. What unites these is not anything material, but the concept of experience. Which is an unexplained phenomenon.
I think your confusion is that you are defining consciousness as self-consciousness (which I would call something like suisapience) whereas the common philosophical (and increasingly, neuroscientific/physical) definition is of qualia, which is known as sentience. Animals are clearly sentient as they have similar brains to ours and similar behaviors in reaction to stimuli, and though they may not have qualia of themselves, qualia are how beings interface with reality to make behaviors.
I think it is likely that even systems like plants experience degrees of qualia, because there is nothing in a brain that would appear to generate qualia that is not also in a plant. Plants are clearly not self-conscious, but proving they do not experience qualia is pretty much impossible. And seeing how humans and animals react to qualia (with behavior,) one could easily posit that plants are doing something similar.
Some suggested reading on the nature of reality by respected neuroscientists and physicists:
https://www.theatlantic.com/science/archive/2016/04/the-illusion-of-reality/479559/
https://en.wikipedia.org/wiki/Integrated_information_theory
https://en.wikipedia.org/wiki/Hard_problem_of_consciousness
https://www.quantamagazine.org/neuroscience-readies-for-a-showdown-over-consciousness-ideas-20190306/
https://www.amazon.com/Emperors-New-Mind-Concerning-Computers/dp/0192861980
I think the matrix representation is similar to the letter representation, but it just might allow for a little more insight either way you're trying to formalize thought. A lot of this is talked about in The Emperor's New Mind by Roger Penrose, and I would seriously take a look into it. It's really cheap, goes into a lot of cool math and AI, and I think it will get you closer to answering your question.
On top of that I would suggest learning about Buddhism and vipassana meditation. I know reddit has a negative disposition toward religion, but just take a look at it; Buddhism doesn't have the dogmatic blind faith there is in other religions. The reason I suggest this is that you're going to be limited in trying to understand thought through symbolic thought alone, and direct observation will likely be more insightful. Also because you're talking about looking for something that's bigger than us, which comes into play with the idea of not-self.
One difficult aspect of your question is that it involves insight into the nature of thoughts. You're not going to get that just by using symbolic language; you're also going to have to look at thoughts. Another thing that I learned from Buddhism is not-self. I can't fully explain it because I don't fully understand it, but I think this concept would be helpful in understanding your question, since a created thought implies a creator, but if there is no creator of the thought then there is just the thought.
So going along these lines, my guess is that thoughts are just the same as any other event in life and we only think we are creating them. For example, a thought about an alarm clock is just the image of an alarm clock, or a word description of an alarm clock, or some combination of them, that arises when the brain is performing a small level of synesthesia: instead of mixing senses, you're mixing the memory of a bell and a clock. Many times we mix memories with a purpose, but that purpose is usually driven by something other than us. So one way of looking at the creation of the thought of the alarm clock is as just an event occurring, similar to two liquids mixing. In that case I would think there are an infinite number of thoughts, since there are an infinite number of possible events.
So with this description of thought we might even be able to say most thoughts are original, because the event called thought isn't likely to occur twice in exactly the same way. The question of whether all thoughts exist before the thinker thinks them would now be rephrased as: do events exist before they occur? But then what does 'exist' even mean?
It really depends on your comfort and familiarity with the topics. If you've seen analysis before you can probably skip Rudin. If you've seen some functional analysis, you can skip the functional analysis book. Convex Optimization can be read in tandem with ESL, and is probably the most important of the three.
Per my other comment, if your goal is to really understand the material, it's important that you understand all the math, at least in terms of being able to read it. Unless you want to do research, you don't need to be able to reproduce all the proofs (though trying to can help you gauge your depth of understanding). In terms of bang for your buck, ESL and Convex Optimization are probably the two I'd focus on. Another great book is Deep Learning; it's extremely approachable with a modest math background, IMO.
It's only $72 on Amazon. It's mathematical, but without following the Theorem -> Proof style of math writing.
The first 1/3 of the book is a review of Linear Algebra, Probability, Numerical Computing, and Machine Learning.
The middle 1/3 of the book is tried-and-true neural nets (feedforward, convolutional, and recurrent). It also covers optimization and regularization.
The final 1/3 of the book is bleeding edge research (autoencoders, adversarial nets, Boltzmann machines, etc.).
The book does a great job of foreshadowing. In chapters 4-5 it frames problems with the algorithms being covered, and mentions how methods from the final 1/3 of the book are solving them.
https://www.amazon.com/Deep-Learning-Adaptive-Computation-Machine/dp/0262035618/
I would start with reading.
For Neural Networks, I'd do:
For overview with NN, Fuzzy Logic Systems, and Evolutionary Algorithms, I recommend:
Fundamentals of Computational Intelligence: Neural Networks, Fuzzy Systems, and Evolutionary Computation (IEEE Press Series on Computational Intelligence) https://www.amazon.com/dp/1119214343/ref=cm_sw_r_cp_apa_i_zD11CbWRS95XY
I'm currently working through the book Deep Learning with Python, and it's already better than the online courses on deep learning I've taken (Udemy's Deep Learning A-Z). It's by the developer of Keras (a Python TensorFlow API), and he explains neural networks, goes into some of the math, and then works through Keras all the way to somewhat state-of-the-art solutions. But that's already a subcategory of data science.
A more sensible starting point:
That book (the Amazon link) also gets recommended a lot and is next on my list, but I can't say much about it yet.
Beyond that, the Coursera Machine Learning course by Andrew Ng is recommended pretty much everywhere. On Reddit/GitHub you can find the corresponding materials in Python if you don't want to use MATLAB. It's the entry-level course for a huge number of people and very worthwhile!
Courses (usually paid ones) give you a certificate, which is an advantage. With books you usually end up with more knowledge, and it's more intensive than just watching a few videos. But unfortunately you don't get anything you can show off the way you can a certificate.
> Is R strictly necessary?
No. I've learned both and would even say Python is usually preferred. In the end, though, IMO it doesn't matter. Often those who, like me, come from mathematics already learn R at university and just keep using it. And others who have worked as actuaries or similar in finance and used R there don't suddenly stop either. Both have their pros and cons.
I really like Deep Learning by Ian Goodfellow, et al. You can buy it from Amazon at https://www.amazon.com/Deep-Learning-Adaptive-Computation-Machine/dp/0262035618/ref=sr_1_1?ie=UTF8&qid=1472485235&sr=8-1&keywords=deep+learning+book. If you are a little cash-strapped, there is an HTML version at http://www.deeplearningbook.org/. Of course, this book is specifically focused on neural networks as opposed to ML in general.
Applying convolution in artificial neural networks was actually inspired by a simple model of the visual cortex (i.e. in the brain). If you want to read a fully technical overview, I'd suggest the section "The Neuroscientific Basis for Convolutional Networks" in chapter 9 of this book.
I'm gonna try to keep this post short and do a quick summary right now. Essentially, at early stages of visual processing the difference in activity between adjacent photoreceptor cells in the eye is taken, mostly due to lateral inhibitory connections on bipolar neurons and the neurons downstream of them. This is essentially a convolution operation: just as you may subtract the brightness of adjacent pixels from a central pixel in a 2D convolution, this is done in the retina using lateral inhibitory connections. The section in that deep learning textbook I posted implies that this occurs only in visual cortex, but it actually occurs in the retina and LGN as well. So just as in modern CNNs, there are stacks of convolution operations in the real brain.
Of course, the convolution that occurs in artificial neural networks is a simplification of the actual process that occurs in brains, but it was inspired by the functionality and organization of the brain.
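As a rough illustration of the analogy (the signal and kernel values here are made up for demonstration, not a model of real retinal physiology), lateral inhibition can be sketched as a 1D convolution with a center-surround kernel:

```python
import numpy as np

# A step edge in "photoreceptor" activity: a dim region then a bright region.
signal = np.array([1.0, 1.0, 1.0, 5.0, 5.0, 5.0])

# Center-surround kernel: each unit is excited by its own input and
# inhibited by its two neighbors (a crude stand-in for lateral inhibition).
kernel = np.array([-0.5, 1.0, -0.5])

response = np.convolve(signal, kernel, mode="same")
# Inside each uniform region (away from the array boundaries) the response
# is ~0; the two positions flanking the step get a strong negative/positive
# pair, so the operation acts as an edge detector.
```

This is the same subtract-your-neighbors computation a 2D convolutional layer performs on images, just in one dimension.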
A brilliant mind among us. His book if anyone wonders:
https://www.amazon.com/Deep-Learning-Adaptive-Computation-Machine/dp/0262035618/ref=sr_1_1?keywords=Deep+Learning+%28Adaptive+Computation+and+Machine+Learning+series%29&qid=1554425041&s=gateway&sr=8-1
This is a great book that takes you from chapter 1 in linear algebra and goes into machine learning with neural networks.
https://www.amazon.com/gp/product/0262035618/ref=ppx_yo_dt_b_asin_title_o08_s00?ie=UTF8&psc=1
The authors also have a website with some of the material.
https://www.deeplearningbook.org/
Is this the right book: https://www.amazon.com/Introduction-Statistical-Learning-Applications-Statistics/dp/1461471370
What about this one? https://www.amazon.com/gp/product/0387848576/ref=ox_sc_sfl_title_9?ie=UTF8&psc=1&smid=ATVPDKIKX0DER
Thank you!! If you don't mind my asking, if you're working a full-time job, how much time have you been allocating for the program, and in how many months are you projected to finish?
Also, do you have any tips on how I can best prepare before entering the program? I'm considering reading The Elements of Statistical Learning during my commute instead of my usual reading, and brushing up on my linear algebra to prepare.
Metamagical Themas: Questing For The Essence Of Mind And Pattern by Douglas Hofstadter
I was 15, I asked for it for Christmas and actually got it. I read that book until it fell apart. The rest of Hofstadters work is equally mind expanding, as others here have mentioned.
Metamagical Themas. Even better than GEB itself in my opinion.
I haven't seen listed yet:
also upvote:
Sipser on the theory of computation is the can't-go-wrong starting point. You can get an older edition cheap and it will be just as good.
Do the problems. Come back with questions.
College books are also much more expensive in the USA than in Europe.
For example:
$152.71
VS
£43.62($68.03)
$146.26 VS
£44.34($69.16)
It's not free (in fact it's sickeningly expensive) but Sipser [amazon.com] is a very self-teachable (self-learn-from-able? :) ) text covering automata theory, computability theory, and complexity theory.
Work hard and you'll get there. I preferred the applied side of things, but if I just stuck with pure math I think I would have eventually gotten a tenure track position in the mathematics side of things.
My favorite book to this day for a beginner's course in computational complexity is still Michael Sipser's Introduction to the Theory of Computation; I highly recommend it. It might be a little too easy for you if you already have a base; let me know and I'll recommend more advanced books.
Here is a link to the book on amazon, although any big college library should have it, if not just have them order it for you. I've gotten my college's library to buy so many books that I wanted to read, but not spend money on, you'd be surprised at how responsive they are to purchasing requests from PhD candidates.
Depending on the amount of energy you want to put into this: "Introduction to Lambda Calculus" by Henk Barendregt et al. is great (http://www.cse.chalmers.se/research/group/logic/TypesSS05/Extra/geuvers.pdf).
Study the proofs and do the exercises and you will learn a ton, quickly. You can also read "Propositions as Types" by Philip Wadler (http://homepages.inf.ed.ac.uk/wadler/papers/propositions-as-types/propositions-as-types.pdf) and pick up the "Introduction to the Theory of Computation" book (https://www.amazon.com/Introduction-Theory-Computation-Michael-Sipser/dp/0534950973/).
Of course you don't need to read all of this to get a basic understanding of lambda calculus, but it helps if you want to understand it for "real" so it sticks.
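If you want a quick taste before diving into Barendregt, here's a tiny sketch of Church numerals in Python (an illustrative encoding of my own choosing, not taken from any of the texts above):

```python
# Church numerals: the number n is the function that applies f to x n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# Addition: (m + n) applies f n times, then m more times.
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Convert a Church numeral to a Python int by counting applications."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # prints 5
```

Everything here is plain lambda abstraction and application, which is the whole point: numbers and arithmetic fall out of function composition alone.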
My local used book store has a copy of Sipser for $15 that I've been meaning to pick up. Considering the $143 price tag on Amazon, it's a pretty good bargain. I just don't know whether it's 1st or 2nd edition. Anyone have any idea if there are major differences?
Oi. Disclaimer: I haven't bought a book in the field in a while, so there might be some new greats that I'm not familiar with. Also, I'm old and have no memory, so I may very well have forgotten some greats. But here is what I can recommend.
I got my start with Koblitz's Course in Number Theory and Cryptography and Schneier's Applied Cryptography. Schneier's is a bit basic, outdated, and erroneous in spots, and the guy is annoying as fuck, but it's still a pretty darned good intro to the field.
If you're strong at math (and computation and complexity theory) then Oded Goldreich's Foundations of Cryptography Volume 1 and Volume 2 are outstanding. If you're not so strong in those areas, you may want to come up to speed with the help of Sipser and Moret first.
Also, if you need to shore up your number theory and algebra, Victor Shoup is the man.
At this point, you ought to have a pretty good base for building on by reading research papers.
One other note: two books that I've not looked at, but which are written by people I really respect, are Introduction to Modern Cryptography by Katz and Lindell and Computational Complexity: A Modern Approach by Arora and Barak.
Hope that helps.
That's not an algorithms class. No, sirree, what you've got there is a theoretical computer science class. This is the Sipser on the board.
I like the one by Sipser https://www.amazon.com/gp/product/0534950973/ref=dbs_a_def_awm_bibl_vppi_i1
If you want to learn the algorithms by programming them you have Programming Collective Intelligence that is really good. It really helped me to see the algorithms in work in order to deeply understand them.
I figure you have a business background and are looking to incorporate machine learning/AI into your pipeline. Programming Collective Intelligence: Building Smart Web 2.0 Applications is a must-read. It doesn't go too deep into it, but still gives you a good idea of the popular ML techniques and how they're being used by top companies.
I'm a general engineer myself, with a side interest in computer science. Szeliski's book is probably the big one in the computer vision field. Another you might be interested in is Computer Vision by Linda Shapiro.
You may also be interested in machine learning in general, for which I can give you two books:
I see you're interested in compilers. The Dragon book mainly focuses on parsing algorithms. I found learning about Forth implementations to be very instructive when learning about code generation. jonesforth is a good one if you understand x86 assembly.
FWIW , You might enjoy Programming Collective Intelligence if you liked this talk.
link to buy off author's website
Sounds like you're running into O(n^2) or O(n^3) blowup. You didn't describe what algorithm you're using. Which probably means you don't know it, which means you don't know what the complexity is.
You need to make an index by item recommended. For speed, do it in C++ (e.g. a simple hash_map), but Python will be good to play with the algorithm.
Try posting 1M rows and I bet someone here (including I) could write something simple quite quickly.
Also try: http://www.amazon.com/Programming-Collective-Intelligence-Building-Applications/dp/0596529325
Although I don't believe they directly addressed algorithmic complexity. They presented some n^2 algorithms without really saying so.
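To sketch what "make an index by item" means (toy data of my own invention, and Python rather than the C++ hash_map the comment suggests): an inverted index from item to users lets you compare a user only against people who share at least one item, instead of scanning all n users for every query.

```python
from collections import defaultdict

# Hypothetical ratings: user -> set of items they recommended.
ratings = {
    "alice": {"book_a", "book_b"},
    "bob":   {"book_b", "book_c"},
    "carol": {"book_d"},
}

# Inverted index: item -> users who recommended it.
index = defaultdict(set)
for user, items in ratings.items():
    for item in items:
        index[item].add(user)

def candidates(user):
    """Only users sharing at least one item can have nonzero overlap,
    so we look them up via the index instead of scanning everyone."""
    others = set()
    for item in ratings[user]:
        others |= index[item]
    others.discard(user)
    return others

print(candidates("alice"))  # prints {'bob'}; carol shares nothing, never compared
```

With sparse data this prunes the vast majority of the n^2 pairwise comparisons, which is usually the difference between minutes and hours on a 1M-row dataset.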
Collective Intelligence
calm down there Kurzweil [relavent]
Lol. No. Google for the "singularity". Also, the best book on the subject: http://www.amazon.com/The-Singularity-Is-Near-Transcend/dp/0670033847
I don't know why people find it so hard to believe :( It's simple mathematics: exponential growth. Link
As anyone in the IT field will tell you, the CPU is not the only thing that gets upgraded in a computer. The reason a 5 year old computer is not as helpful as a new computer is because technology has progressed heavily in the last 5 years.
A great book about the advancement of technology. A wonderful testament to its own thesis is how dated a 5 year old book can read.
Yes I know; that is why I mentioned general-purpose computation. See, Turing wrote a paper about making such a machine, but the British intelligence service which funded him during the war needed a machine to crack codes through brute force, so it didn't need general computation (his invention), though the machine still used fundamental parts of computation invented by Turing.
The ENIAC is a marvel, but it is an implementation of his work; he invented the underlying idea. Even Grace Hopper mentions this.
What the Americans did invent there, though, was the higher-level language and the compiler. That was a brilliant bit of work, but the credit for computation goes to Turing, and for general-purpose computation (this is why the award in my field of comp. sci. is the Turing Award, why a machine with all 8 operations needed to become a general computer is called Turing-complete, and why Turing, along with Babbage, is called a father of computation). This conversation is a bit like crediting Edison for the lightbulb. He certainly did not invent the lightbulb; what he did was make the lightbulb a practical utility by creating a longer-lasting one (the lightbulb's first patent was filed 40 years earlier).
I didn't use a reference to a film as a historical reference; I used it because it is in popular culture, which I imagine you are more familiar with than the history of computation, as is shown by your not mentioning Babbage once, and yet the original assertion was about the invention of "computation" and not the first implementation of the general-purpose computer.
> The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
Here is a bit on what von Neumann (American creator of the von Neumann architecture we use to this day) had to say:
> The principle of the modern computer was proposed by Alan Turing, in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" that is later known as a Universal Turing machine. He proved that such machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable.
> The fundamental concept of Turing's design is stored program, where all instruction for computing is stored in the memory.
> Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.
TLDR: History is not on your side, I'm afraid. Babbage invented computation, Turing invented the programmable computer. Americans invented the memory pipelines, the transistor, the compiler, and the first compilable programming language. Here is an American book by a famous Nobel-prize-winning physicist (Richard Feynman) where the roots of computation are discussed and the credit for its invention awarded to Alan Turing. It's called Feynman's Lectures on Computation; you should read it (or perhaps the silly movie is more your speed).
Feynman's Lectures on Computation
Definitely light reading. Some of the stuff seems a bit dated and some a bit basic, but Feynman has a way of looking at things and explaining them that is totally unique. (You might want to skip the chapter on quantum computing if you don't have the background.)
Feynman Lectures On Computation gives a lot of practical examples of how the laws of thermodynamics, engineering developments, and information theory limit information storage density in such systems. Yes, there is a limit, but it is very big and far away.
As gsyme said in the comment, he covers bits from Feynman's book on computation ( http://www.amazon.com/Feynman-Lectures-Computation-Richard-P/dp/0738202967 ). Basically the lecturer is trying to look at the electronic and thermodynamic aspects of computation. He refers to review from Bennett ( http://www.research.ibm.com/people/b/bennetc/bennettc1982666c3d53.pdf ) @ 1:27 . Apart from this some interesting things like constant 'k' @ 1:02 and reversible-computing at 1:26 are touched upon :)
Science AMA Series: I’m Tony Hey, chief data scientist at the UK STFC. I worked with Richard Feynman and edited a book about Feynman and computing. Let’s talk about Feynman on what would have been his 100th birthday. AMA!
Hi! I’m Tony Hey, the chief data scientist at the Science and Technology Facilities Council in the UK and a former vice president at Microsoft. I received a doctorate in particle physics from the University of Oxford before moving into computer science, where I studied parallel computing and Big Data for science. The folks at Physics Today magazine asked me to come chat about Richard Feynman, who would have turned 100 years old today. Feynman earned a share of the 1965 Nobel Prize in Physics for his work in quantum electrodynamics and was famous for his accessible lectures and insatiable curiosity. I first met Feynman in 1970 when I began a postdoctoral research job in theoretical particle physics at Caltech. Years later I edited a book about Feynman’s lectures on computation; check out my TEDx talk on Feynman’s contributions to computing.
I’m excited to talk about Feynman’s many accomplishments in particle physics and computing and to share stories about Feynman and the exciting atmosphere at Caltech in the early 1970s. Also feel free to ask me about my career path and computer science work! I’ll be online today at 1pm EDT to answer your questions.
I haven't heard of that book; I may have to check it out. I was going to recommend Lectures on Computation by Richard Feynman. It's one of the best books I've read on the subject; it starts out with just simple logic, excluding circuits and transistors, but eventually goes all the way to talking about quantum computing.
https://www.amazon.de/Introduction-Statistical-Learning-Applications-Statistics/dp/1461471370
http://www.urbandictionary.com/define.php?term=laughing&defid=1568845 :))
Now, seriously, if you want to get started, I'd recommend this for R (http://www.amazon.com/Introduction-Statistical-Learning-Applications-Statistics/dp/1461471370/) and this for Python (http://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130//).
Also, head out to /r/datascience and /r/MachineLearning!
EDIT: Wrong link.
Because, based on your initial comment and this one as well the learning curve in front of you is ... steeper than you might think.
I think you are jumping into the real deep end without starting with some fundamentals. At the point these questions are at, I would just recommend grabbing a book on linear regression. If you already have a strong math background, then you could jump to something like https://www.amazon.com/Introduction-Statistical-Learning-Applications-Statistics/dp/1461471370/ref=pd_sim_14_1?ie=UTF8&psc=1&refRID=086FTQPDGGERBQ7ZR2C5
But I often see people walk away from that book misunderstanding some of the assumptions behind the models they are building, and then making very poor predictions. Inference is another story all to itself...
This. The book that accompanies these videos link is one of my main go-to's. Very well put together. Great examples.
Another real good book is Practical Data Science with R.
I'm not sure what language the Johns Hopkins Coursera Data Science courses are done in, but I'd imagine either R or Python.
I have some recommendations on books to get up to speed.
Read this book:
Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems
This author does a really good job going through a lot of different algorithms. If you can wait, then go with this book instead, which is by the same author but for TensorFlow 2.0, which is pretty recent and also integrates Keras. It's coming out in October.
Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems
You can get good datasets on Kaggle. If you want to get an actual good foundation on machine learning then this book is often recommended:
The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition (Springer Series in Statistics)
As for staying up to date, it's hard to say because "machine learning" doesn't refer to a single thing, there are a lot of different types of machine learning and each one is developing fast. For example, I used to be pretty into recurrent neural networks for sequence data. I haven't kept up with it lately but I remember about two years ago the hotness was all about LSTM neural networks, but then a simplified gate pattern was shown to be just as good with less training and that became big (name is escaping me right now...). Then the last time I took a look, it looked like people were starting to use convolutional neural networks for sequence data and getting great results on par or better than recurrent neural networks.
The ecosystem is changing fast too. Tensorflow uses (used?) static graph generation, meaning you define the network before you train it and you can't really change it. But recently there was more development on dynamic neural networks, where the network can grow and be pruned during training - and people were saying this is a reason to go with PyTorch instead of Tensorflow. I haven't kept up, but I heard from a friend that things are changing even more - there is this new format called ONNX that aims to standardize information about neural networks; and as I've mentioned earlier in this post, TensorFlow 2.0 is coming out (or out already?).
I'm not doing too much machine learning at the moment, but the way I tried to get new information was periodically looking for articles in the problem type I was trying to solve - which at the time was predicting sequences based on sparse multidimensional sequence data with non-matching step intervals.
If you read the TensorFlow book I linked above, you'll get a great overview and feel for what types of problems are out there and what sort of ML solutions exist now. You'll think of a problem you want to solve and then it's off to the search engines to see what ideas exist now.
Do you mean this one? The new content is Keras, not PyTorch.
https://www.amazon.com/Paradigms-Artificial-Intelligence-Programming-Studies/dp/1558601910
There's definitely a steep learning curve to get to the mindset of being productive with it. I really enjoy Norvig's "Case Studies" book. I feel like you're right in some ways, though... LISP is supposed to be extensible even at the language level, but it's just not that intuitive to do. I have heard interesting things about Perl 6 in this regard, but I haven't had time to play with it yet... maybe when I finally, completely finish school :)
I'm a big fan of the 'Deep Learning With Python' book by Francois Chollet.
https://www.amazon.com/Deep-Learning-Python-Francois-Chollet/dp/1617294438
Looks like the whole book is available here but the link is .cn so check it out on your own.
http://faculty.neu.edu.cn/yury/AAI/Textbook/Deep%20Learning%20with%20Python.pdf
Awesome list! I'm a software engineer looking to make the jump over to data science, so I'm just getting my feet wet in this world. Many of these books were already on my radar, and I love your summaries to these!
One question: how much is R favored over Python in practical settings? This is just based off of my own observation, but it seems to me that R is the preferred language for "pure" data scientists, while Python is a more sought-after language from hiring managers due to its general adaptability to a variety of software and data engineering tasks. I noticed that Francois Chollet also has a book called Deep Learning with Python, which looks to have a near-identical description to the Deep Learning with R book, and they were released around the same time. I think it's the same material just translated for Python, and I was more interested in going this route. Thoughts?
I started out with this book on deep learning, written by the creator of Keras.
I'd also recommend checking out the challenges on Kaggle!
Asking since I'm interested too: are there any machine learning groups/meet-ups in or around Rome?
Just FYI, because this is not always made clear to people when talking about learning or transitioning to data science: this would be a massive undertaking for someone without a strong technical background.
You've got to learn some math, some statistics, how to write code, some machine learning, etc. Each of those is a big undertaking in itself. I am a person who is completely willing to spend 12 hours at a time sitting at a computer writing code... and it still took me a long time to learn how not to write awful code, to learn the tools around programming, etc.
I would strongly consider why you want to do this yourself rather than hire someone, and whether it's likely you'll be productive at this stuff in any reasonable time frame.
That said, if you still want to give this a try, I will answer your questions. For context: I am not (yet) employed as a data scientist. I am a mathematician who is in the process of leaving academia to become a data scientist in industry.
> Given the above, what do I begin learning to advance my role?
Learn to program in Python. (Python 3. Please do not start writing Python 2.) I wish I could recommend an introduction for you, but it's been a very long time since I learned Python.
Learn about Numpy and Scipy.
Learn some basic statistics. This book is acceptable. As you're reading the book, make sure you know how to calculate the various estimates and intervals and so on using Python (with Numpy and Scipy).
Learn some applied machine learning with Python, maybe from this book (which I've looked at some but not read thoroughly).
That will give you enough that it's possible you could do something useful. Ideally you would then go back and learn calculus and linear algebra and then learn about statistics and machine learning again from a more sophisticated perspective.
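As a concrete version of that "calculate the estimates and intervals in Python" suggestion, here's a minimal sketch (the data values are made up for illustration, not from any of the books above) of computing a sample mean and a 95% confidence interval with NumPy and SciPy:

```python
import numpy as np
from scipy import stats

# Made-up measurements; any small numeric sample works here.
data = np.array([2.1, 2.5, 1.9, 2.3, 2.8, 2.0, 2.4, 2.6])

mean = data.mean()
sem = stats.sem(data)                           # standard error of the mean
t_crit = stats.t.ppf(0.975, df=len(data) - 1)   # two-sided 95% critical value
ci = (mean - t_crit * sem, mean + t_crit * sem)

print(f"mean = {mean:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

Exercises like this (redoing each estimator from the book by hand, then checking it against SciPy) are a good way to make the statistics stick.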
> What programming language do I start learning?
Learn Python. It's a general-purpose programming language (so you can use it for lots of stuff other than data), it's easy to read, it's got lots of powerful libraries for data, and a big community of data scientists use it.
> What are the benefits to learning the programming languages associated with so-called 'data science'? How does learning any of this specifically help me?
If you want a computer to help you analyze data, and someone else hasn't created a program that does exactly what you want, you have to tell the computer exactly what you want it to do. That's what a programming language is for. Generally the languages associated with data science are not magically suited for data science: they just happen to have developed communities around them that have written a lot of libraries that are helpful to data scientists (R could be seen as an exception, but IMO, it's not). Python is not intrinsically the perfect language for data science (frankly, as far as the language itself goes, I'm ambivalent about it), but people have written very useful Python libraries like Numpy and scikit-learn. And having a big community is also a real asset.
> What tools / platforms / etc can I get my hands on right now at a free or low cost that I can start tinkering with the huge data sets I have access to now? (i.e. code editors? no idea...)
Python along with libraries like Numpy, Pandas, scikit-learn, and Scipy. This stuff is free; there's probably nothing you should be paying for. You'll have to make your own decision regarding an editor. I use Emacs with evil-mode. This is probably not the right choice for you, but I don't know what would be.
> Without having to spend $20k on an entire graduate degree (I have way too much debt to go back to school. My best bet is to stay working and learn what I can), what paths or sequence of courses should I start taking? Links appreciated.
I personally don't know about courses because I don't like them. I like textbooks and doing things myself and talking to people.
Might be worth looking at someone else's more in-depth explanation of these things to see modern uses. I just picked up this book, which gets into SciKit Learn for machine learning in like chapter 3 or something.
(Just an idea. I look forward to reading your tutorial if you ever post about it here!)
I am in probably same boat. Agree with your thoughts on github. I fell in love with this book: https://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130/ref=sr_1_1?ie=UTF8&qid=1474393986&sr=8-1&keywords=machine+learning+python
it's pretty much what you need - guidance through familiar topics with great notebooks as examples.
Take a look at seaborn package for visualization.
Yes, this tutorial is very useful for scikit-learn; before watching the videos, I would recommend the book Python Machine Learning first! https://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130/ref=sr_1_1?s=books&ie=UTF8&qid=1487243060&sr=1-1&keywords=python+machine+learning
As far as Python books, you should get these 2:
Python Data Science Handbook and Python Machine Learning.
My top 3 are:
Ex Machina did a great job of exploring the control problem for AGI.
Nick Bostrom's book Superintelligence spooked Elon Musk and motivated others like Bill Gates and Stephen Hawking to take AI seriously. Once we invent some form of AGI, how do we keep it in control? Will it want to get out? Do we keep it in some server room in an underground bunker? How do we know if it's trying to get out? If it's an attractive girl, maybe it will try to seduce men.
https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom-ebook/dp/B00LOOCGB2#nav-subnav
An interesting read should be Superintelligence, I've just bought it but it seems promising from the reviews.
> wait another 50 years, when strong AI is a reality
Because, if we can even make an AI with near-future technology, there is a very real chance that the goals of an AI wouldn't mesh well with the goals of humans. Assuming it is even possible, it is likely to rapidly go either extremely well or extremely poorly for humanity. The AI might even take itself out, or might only care about controlling circuit-board real estate and not actual land per se.
For much more detail, I highly recommend reading Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies. If you don't feel like paying the price of a new book, I can track down an article or two. He in particular does a good job of pointing out what isn't likely to be possible and what technologies are more plausible.
Why does that statement not hold up? Check out Superintelligence. Specialized machine learning is not the same as strong generalized AI.
/r/ControlProblem
The Control Problem:
How do we ensure that future artificial superintelligence has a positive impact on the world?
"People who say that real AI researchers don’t believe in safety research are now just empirically wrong." - Scott Alexander
"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." - Eliezer Yudkowsky
Check out our new wiki
Some Guidelines
Introductory Links
Recommended Reading
Important Organizations
Related Subreddits
*****
^(Bot created by /u/el_loke - )^Feedback
There is actually an answer to this question. He read this book
I read it too, and I can honestly say it is the scariest thing I have ever read.
We already have an issue in the United States with not enough jobs to go around. If this dystopian outlook is truly inevitable, what are our options for mitigating it, or at least coping with it?
I have thought quite a bit about autonomous vehicles and how I can't wait to buy one and never have to drive again, and how many benefits they will have on society (faster commutes, fewer accidents, etc), but I hadn't considered how much the transportation industry will be affected, and especially how much truck drivers in particular would be ideal to replace. The NYT ran a story the other day (http://www.nytimes.com/2014/08/10/upshot/the-trucking-indust...) about how we don't have enough drivers to fulfill the needs, but "Autos" could swing that pendulum swiftly in the opposite direction once legislation and production catch up. How do we handle 3.6M truck, delivery and taxi drivers looking for a new job?
I haven't read it yet, but I have recently had recommendations of the book Superintelligence: Paths, Dangers, Strategies (http://smile.amazon.com/exec/obidos/ASIN/B00LOOCGB2/0sil8/re...) which I look forward to reading and hope it might be relevant.
(cross posted from HN)
Tariq Rashid has a great book on ML, and he breaks it down for total beginners. He breaks down the math as if you're in elementary school. I think it's called ML for beginners.
Book link:
https://www.amazon.com/Make-Your-Own-Neural-Network-ebook/dp/B01EER4Z4G/ref=sr_1_1?crid=3H9PBLPVUWBQ4&keywords=tariq+rashid&qid=1565319943&s=gateway&sprefix=tariq+ra%2Caps%2C142&sr=8-1
I got the Kindle edition because I'm broke. It's just as good as the actual book.
In addition to the 3blue1brown video someone else described, this book is a great introduction to the algorithms without going into much math (though you should go into the math to fully understand what is going on).
Make Your Own Neural Network
https://www.amazon.com/dp/B01EER4Z4G/ref=cm_sw_em_r_mt_dp_U_NkqpDbM5J6QBG
Hey! I'm not OP but I think I can help. It's kind of difficult to summarize how machine learning (ML) works in just a few lines since it has a lot going on, but hopefully I can briefly summarize how it generally works (I've worked a bit with them, if you're interested in how to get into learning how to make one you can check out this book)
In a brief summary, a neural network takes a collection of data (like all the characteristics of a college application), feeds all its variables (each part of the application, like its AP scores, GPA, extracurriculars, etc.) into the input nodes, and, through some magic math shit, finds patterns through trial and error to output what you need - so that if you give it a new data set (like a new application), it can predict the chance that something is what you want it to be (that it can get into a certain college).
How it works is that each variable you put into the network is a number that represents the data you're inputting. For example, maybe for one input node you put the average AP score, or the number of AP exams you got a 5 on, or your GPA, or some number representing extracurriculars. This is then multiplied by what are called weights (the Ws in this picture) and sent off into multiple other neurons to be added with the other variables, then normalized so the numbers don't get gigantic. You do this with each node in the first hidden layer, and then repeat the process through however many layers of nodes you have until you get your outputs. Now, this is hopefully where everything clicks:
Let's say the output node is just one number that represents the chance you get into the college. On the first go-around, all the weights that get multiplied with the inputs are chosen at random (kinda - they're within a certain range, so they're roughly where they need to be), and thus your output at first is probably not close to the real chance that you'll get into the college. So this is the whole magic behind the neural network. You take how far off your network's guess was compared to the real-life % that you get accepted, and through something called back propagation (I can't explain how you get the math for it, it really is way too much, but here's an example of a formula used for it), you adjust the weights so that the output for that data moves closer to the actual answer. When you do this thousands or millions of times, your network gets closer and closer to guessing the reality of the situation, which lets you put in new data and get a good idea of the chance you get into college. Of course, even with literal millions of examples you'll never be 100% accurate, because human decisions are too variable to sum up in a mathematical sense, but you can get really close to what will probably happen, which is better than nothing at all :)
The beauty of ML is it's all automated once you set up the neural network and test that it works properly. It takes a buttload of data but you can sit and do what you want while it's all processing, which is really cool.
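To make the above concrete, here's a minimal sketch of those ideas (weighted sums squashed by a sigmoid, random initial weights, and backpropagation nudging them toward the right answers) in plain NumPy. The "application" data, layer sizes, and learning rate are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "applications": 3 features each (think GPA, test scores,
# extracurriculars, scaled to 0-1), labeled accepted (1) or not (0).
X = np.array([[0.9, 0.8, 0.7],
              [0.2, 0.3, 0.1],
              [0.8, 0.9, 0.6],
              [0.1, 0.2, 0.3]])
y = np.array([[1.0], [0.0], [1.0], [0.0]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights start random, within a small range, as described above.
W1 = rng.normal(scale=0.5, size=(3, 4))   # input layer -> hidden layer
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden layer -> output

for _ in range(5000):
    # Forward pass: weighted sums, squashed so numbers don't blow up.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backpropagation: push the error back through each layer and
    # adjust the weights a little (0.5 is the learning rate).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)
    W1 -= 0.5 * (X.T @ d_h)
```

After training, feeding the network a new feature vector through the same two lines of forward pass gives its guessed acceptance chance.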
I don't think I explained this well. Sorry. I'd recommend the book I sent if you want to learn about it since it's a really exciting emerging field in computer science (and science in general) and it's really rewarding to learn and use. It goes step by step and explains it gradually so you feel really familiar with the concepts.
You might want to check this book out; it really breaks things down into manageable and understandable chunks. As the title implies, it's built around neural networks. Machine Learning Mastery is also a website that does well at breaking things down - I'm pretty sure you've already come across it
There's a book for that.
https://www.amazon.ca/Age-Spiritual-Machines-Computers-Intelligence/dp/0140282025
Deep Learning (Adaptive Computation and Machine Learning series)
+1 as well: http://www.amazon.com/The-Elements-Statistical-Learning-Prediction/dp/0387848576/ref=pd_sim_b_1
Sipser's book is basically free on Amazon if you buy used old editions. http://www.amazon.com/gp/aw/ol/053494728X/ref=olp_tab_used?ie=UTF8&condition=used
It basically asks what restricted types of computers can do. Like, what happens if you have a program but only a finite amount of memory, or if you have infinite memory but it's all stored in a stack? Or if you have infinite memory with random access?
Turns out lots of these models are equal and lots are different, and you can prove it. Also, these models inspire and capture lots of your favorite programming tools, like regexes (= DFAs) and parser generators (= restricted PDAs) for your favorite programming languages.
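As a toy illustration of that regex-DFA correspondence (the example language is my own pick, not from Sipser), here is a three-state DFA that accepts exactly the binary strings matching the regex (0|1)*01:

```python
def accepts(s):
    """Run a DFA for "binary strings ending in 01", i.e. (0|1)*01."""
    # States: 0 = start / no progress, 1 = just saw a '0',
    #         2 = just saw "01" (the only accepting state).
    delta = {
        (0, '0'): 1, (0, '1'): 0,
        (1, '0'): 1, (1, '1'): 2,
        (2, '0'): 1, (2, '1'): 0,
    }
    state = 0
    for ch in s:
        state = delta[(state, ch)]   # one table lookup per input symbol
    return state == 2
```

The whole machine is just a transition table and a loop - constant memory, one step per symbol - which is exactly the "program with only a finite amount of memory" model.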
The Singularity Is Near: When Humans Transcend Biology
by Ray Kurzweil
I got tired of that particular meme; so, I made a self-referential counter-meme. Ever since reading Douglas Hofstadter's Metamagical Themas, I've enjoyed playing with self-referential thought.
Just think of it as trying to extinguish a fire with an explosion :)
http://www.amazon.com/Singularity-Near-Humans-Transcend-Biology/dp/0670033847
Unless you're the author of this book, stop acting as if you came up with the idea please.