Best computer neural networks books according to redditors

We found 103 Reddit comments discussing the best computer neural networks books. We ranked the 23 resulting products by number of redditors who mentioned them. Here are the top 20.


Top Reddit comments about Computer Neural Networks:

u/zorfbee · 32 pointsr/artificial

Reading some books would be a good idea.

u/pt2091 · 17 pointsr/datascience

http://neuralnetworksanddeeplearning.com/chap1.html
For neural networks and understanding the fundamentals behind backpropagation.
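
The core of backpropagation covered in that chapter is the chain rule applied repeatedly; for a single sigmoid neuron it fits in a few lines (a generic illustration, not the book's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy task: output 1 when the input exceeds 0.5
x = np.array([0.1, 0.4, 0.6, 0.9])
y = np.array([0.0, 0.0, 1.0, 1.0])

w, b, lr = 0.0, 0.0, 1.0              # weight, bias, learning rate

for _ in range(5000):
    out = sigmoid(w * x + b)          # forward pass
    # Backward pass: for a cross-entropy loss, dL/d(pre-activation) = out - y
    err = out - y
    w -= lr * np.mean(err * x)        # chain rule: multiply by dz/dw = x
    b -= lr * np.mean(err)            # dz/db = 1

print(sigmoid(w * np.array([0.2, 0.8]) + b))
```

The multi-layer case in the book is the same idea, with the error term propagated backwards through each layer in turn.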

http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Fourth%20Printing.pdf
(book is free, also an online course on it too called statistical learning)

http://www.amazon.com/Information-Theory-Inference-Learning-Algorithms/dp/0521642981
the author of this also has a set of lectures online:
http://videolectures.net/david_mackay/

Personally, I found it easier to learn linear algebra from the perspective of a physicist. I never really liked the pure theoretical approach. But here's a dude that likes that approach:
https://www.youtube.com/channel/UCr22xikWUK2yUW4YxOKXclQ/playlists

and you can't forget strang:
http://ocw.mit.edu/courses/mathematics/18-085-computational-science-and-engineering-i-fall-2008/video-lectures/

I think the best community for questions on any of the exercises in these books, or concepts in these lectures, is CrossValidated. I think it's doubly helpful to answer other people's questions as well.

u/hellodan_morris · 12 pointsr/artificial

The book is FREE in every country, not just on amazon.com (USA). You can try searching for the book title on your local country site or use one of the direct links below.

US - (link is in original post above)
UK - https://www.amazon.co.uk/dp/B075882XCP
India - https://www.amazon.in/dp/B075882XCP
Japan - https://www.amazon.co.jp/dp/B075882XCP

Australia - https://www.amazon.com.au/dp/B075882XCP
Brazil - https://www.amazon.com.br/dp/B075882XCP
Canada - https://www.amazon.ca/dp/B075882XCP
Germany - https://www.amazon.de/dp/B075882XCP
France - https://www.amazon.fr/dp/B075882XCP
Italy - https://www.amazon.it/dp/B075882XCP
Mexico - https://www.amazon.com.mx/dp/B075882XCP
Netherlands - https://www.amazon.nl/dp/B075882XCP
Spain - https://www.amazon.es/dp/B075882XCP

Please upvote if this was helpful so others can find it.

Thank you.

u/idiosocratic · 11 pointsr/MachineLearning

For deep learning reference this:
https://www.quora.com/What-are-some-good-books-papers-for-learning-deep-learning

There are a lot of open courses I watched on YouTube regarding reinforcement learning: one from Oxford, one from Stanford, and another from Brown. Here's a free intro book by Sutton, very well regarded:
https://webdocs.cs.ualberta.ca/~sutton/book/the-book.html

For general machine learning their course is pretty good, but I did also buy:
https://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130/ref=sr_1_1?ie=UTF8&qid=1467309005&sr=8-1&keywords=python+machine+learning

There were a lot of books I got into that weren't mentioned. Feel free to pm me for specifics. Cheers

Edit: If you want to get into reinforcement learning check out OpenAI's Gym package, and browse the submitted solutions

u/sleepingsquirrel · 9 pointsr/ECE

u/marmalade_jellyfish · 8 pointsr/artificial

To gain a good overview of AI, I recommend the book The Master Algorithm by Pedro Domingos. It's totally readable for a layperson.

Then, learn Python and become familiar with libraries and packages such as numpy, scipy, and scikit-learn. Perhaps you could start with Code Academy to get the basics of Python, but I feel like the best way to force yourself to really know useful stuff is through implementing some project with a goal.
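
As a tiny taste of what numpy and scipy add over plain Python (a made-up mini-example, not from the comment):

```python
import numpy as np
from scipy import stats

# Vectorized arithmetic: standardize a small sample without any loops
heights = np.array([160.0, 165.0, 170.0, 175.0, 180.0])
z_scores = (heights - heights.mean()) / heights.std()

# scipy builds on numpy with ready-made statistical routines
slope, intercept, r, p, se = stats.linregress(heights, np.arange(5))
print(z_scores.round(2), round(r, 2))
```

scikit-learn continues the pattern, wrapping whole learning algorithms behind a similarly compact interface.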

Some other frameworks and tools are listed here. Spend a lot more time doing than reading, but reading can help you learn how to approach different tasks and problems. Norvig and Russell's AI textbook is a good resource to have on hand for this.

Some more resources include:

Make Your Own Neural Network book

OpenAI Gym

CS231N Course Notes

Udacity's Free Deep Learning Course

u/intermerda1 · 7 pointsr/apachespark

I'd highly recommend against it. Spark 2.0 is very different from Spark 1.x. In my experience Spark is not super mature so books become outdated very fast. I mostly relied on their official documentation and API pages when I worked with it.

But if you want a book, just make sure that it's a recent one. I found that Spark in Action uses the 2.0 version, so that would be a good one.

u/kanak · 6 pointsr/compsci

I would start with Cover & Thomas' book, read concurrently with a serious probability book such as Resnick's or Feller's.

I would also take a look at Mackay's book later as it ties notions from Information theory and Inference together.

At this point, you have a grad-student level understanding of the field. I'm not sure what to do to go beyond this level.

For crypto, you should definitely take a look at Goldreich's books:

Foundations Vol 1

Foundations Vol 2

Modern Crypto

u/k0wzking · 6 pointsr/AcademicPsychology

Hello, Coursera was recommended to me by a colleague, and I have taken an in-depth look at their course catalogue, but I have not taken any courses from them. If you think there are free courses on there that would suit your needs, then go for it, but personally I found that what was offered for free seemed too superficial, and the paid classes did not offer any information that I could not obtain elsewhere for cheaper.

I know a lot of people aren't like this, but personally I prefer to teach myself. If you are interested in learning a bit about data science, I would strongly recommend Python Machine Learning by Sebastian Raschka. He explains everything with extreme clarity (a rarity in the academic world) and provides Python code that permits you to directly implement any method taught in the book. Even if you don't have an interest in coding, Raschka's fundamental descriptions of data science techniques are so transparent that he could probably teach these topics to infants. I read the first 90 pages for free on Google Books and was sold pretty quickly.

I'll end with a shameless plug: most data science and machine learning techniques rely on a key concept called biased estimation (a.k.a. regularization), of which I have made a brief video explaining the fundamental idea and why it is useful in statistical procedures.
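
That regularization idea can be seen directly in ridge regression, which deliberately biases the least-squares estimate in exchange for lower variance (a generic sketch, not taken from the video):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
true_w = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
y = X @ true_w + rng.normal(scale=0.5, size=20)

def ridge(X, y, lam):
    # Closed form: (X'X + lam*I)^-1 X'y ; lam = 0 gives ordinary least squares
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_ols = ridge(X, y, 0.0)
w_reg = ridge(X, y, 10.0)

# Shrinkage: the penalized coefficients have a smaller norm (more bias, less variance)
print(np.linalg.norm(w_ols), np.linalg.norm(w_reg))
```

The penalty `lam` controls the bias/variance trade-off: larger values shrink the coefficients harder.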

I hope my non-answer answer was somewhat useful to you.

u/stone11 · 6 pointsr/programming

And really, all of the cognitive scientists I know would yell at anyone who thinks computers can't (well, won't) do any of those things. Like Minsky said in an interview with Discover:

>What is the value in creating an artificial intelligence that thinks like a 3-year-old?

>The history of AI is sort of funny because the first real accomplishments were beautiful things, like a machine that could do proofs in logic or do well in a calculus course. But then we started to try to make machines that could answer questions about the simple kinds of stories that are in a first-grade reader book. There's no machine today that can do that. So AI researchers looked primarily at problems that people called hard, like playing chess, but they didn't get very far on problems people found easy. It's a sort of backwards evolution. I expect with our commonsense reasoning systems we'll start to make progress pretty soon if we can get funding for it. One problem is people are very skeptical about this kind of work.

For that matter, that whole interview was quite interesting, as was the book it was in reference to.

u/Kiuhnm · 5 pointsr/MachineLearning

Take the online course by Andrew Ng and then read Python Machine Learning.

If you then become really serious about Machine Learning, read, in this order,

  1. Machine Learning: A Probabilistic Perspective
  2. Probabilistic Graphical Models: Principles and Techniques
  3. Deep Learning

u/apocalypsemachine · 5 pointsr/Futurology

Most of my stuff is going to focus around consciousness and AI.

BOOKS

Ray Kurzweil - How to Create a Mind - Ray gives an intro to neuroscience and suggests ways we might build intelligent machines. This is a fun and easy book to read.

Ray Kurzweil - TRANSCEND - Ray and Dr. Terry Grossman tell you how to live long enough to live forever. This is a very inspirational book.

*I'd skip Kurzweil's older books. The newer ones largely cover the stuff in the older ones anyhow.

Jeff Hawkins - On Intelligence - Engineer and Neuroscientist, Jeff Hawkins, presents a comprehensive theory of intelligence in the neocortex. He goes on to explain how we can build intelligent machines and how they might change the world. He takes a more grounded, but equally interesting, approach to AI than Kurzweil.

Stanislas Dehaene - Consciousness and the Brain - Someone just recommended this book to me so I have not had a chance to read the whole thing. It explains new methods researchers are using to understand what consciousness is.

ONLINE ARTICLES

George Dvorsky - Animal Uplift - We can do more than improve our own minds and create intelligent machines. We can improve the minds of animals! But should we?

David Shultz - Least Conscious Unit - A short story that explores several philosophical ideas about consciousness. The ending may make you question what is real.

Stanford Encyclopedia of Philosophy - Consciousness - The most well known philosophical ideas about consciousness.

VIDEOS

Socrates - Singularity Weblog - This guy interviews the people who are making the technology of tomorrow, today. He's interviewed the CEO of D-Wave, Ray Kurzweil, Michio Kaku, and tons of less well known but equally interesting people.

David Chalmers - Simulation and the Singularity at The Singularity Summit 2009 - Respected Philosopher, David Chalmers, talks about different approaches to AI and a little about what might be on the other side of the singularity.

Ben Goertzel - Singularity or Bust - Mathematician and computer scientist Ben Goertzel goes to China to create Artificial General Intelligence funded by the Chinese government. Unfortunately they cut the program.



PROGRAMMING

Daniel Shiffman - The Nature of Code - After reading How to Create a Mind you will probably want to get started with a neural network (or hidden Markov model) of your own. This is your hello world. If you get past this and the math is too hard, use this

Encog - A neural network API written in your favorite language

OpenCV - Face and object recognition made easy(ish).

u/kailashahirwar12 · 4 pointsr/deeplearning

As far as I know, there is no MOOC course specifically designed for GANs. There are several books on Generative Adversarial Networks like "Learning Generative Adversarial Networks https://www.amazon.in/dp/1788396413/ref=cm_sw_r_cp_apa_i_iLdRCbMMMRD60"
and "Generative Adversarial Networks Cookbook: Over 100 recipes to build generative models using Python, TensorFlow, and Keras https://www.amazon.in/dp/1789139902/ref=cm_sw_r_cp_apa_i_SLdRCbK116413".
Recently, I released a book on Generative Adversarial Networks Projects. It is available at "Generative Adversarial Networks Projects: Build next-generation generative models using TensorFlow and Keras https://www.amazon.in/dp/B07F2MY1QH/ref=cm_sw_r_cp_apa_i_TMdRCb5X4375D". If you go through the book, let me know your feedback.

u/Roboserg · 4 pointsr/learnmachinelearning

I started with this book where you code a neural net with 1 hidden layer

u/disgr4ce · 4 pointsr/artificial

If you work hard enough at it, and spend the time necessary (years usually), you can learn anything. If you're interested, this is the book that originally got me into neural networks: https://www.amazon.com/Mind-within-Net-Learning-Thinking/dp/0262194066/ref=sr_1_1?ie=UTF8&qid=1499609914&sr=8-1&keywords=the+mind+within+the+net

It's written for a general audience and is, for once, not focused on the mathematical descriptions of ANNs (not that there's anything wrong with that), yet goes into extremely useful detail about basic NN architectures.

u/quotemycode · 4 pointsr/learnprogramming

On the contrary. The Emotion Machine

By Marvin Minsky - he believes that we can program human emotions. I tend to agree with him.

u/zachimal · 3 pointsr/teslamotors

This looks like exciting stuff! I really want to understand all of it better. Does anyone have suggestions on courses surrounding the fundamentals? (I'm a full stack web dev, currently.)

Edit: After a bit of searching, I think I'll start here: https://smile.amazon.com/gp/product/B01EER4Z4G/ref=dbs_a_def_rwt_hsch_vapi_tkin_p1_i0

u/monkeyunited · 3 pointsr/datascience

Data Science from Scratch

Python Machine Learning

DSFS covers the basics of Python. If you're comfortable with that and want to dive into implementing algorithms (using TensorFlow 2, for example), then PML is a great book for that.

u/afro_donkey · 3 pointsr/math

https://www.amazon.com/Introduction-Neural-Cognitive-Modeling-3rd-ebook-dp-B07K4CRV11/dp/B07K4CRV11/ref=mt_kindle?_encoding=UTF8&me=&qid=1550716032

This book talks about some foundational discoveries that were made in linking psychology to neuroscience in the past 50 years, and how brains give rise to minds.

u/TonySu · 3 pointsr/learnprogramming

Python Machine Learning. From the semester of machine learning I've done, you basically want to get comfortable with numpy and scikit learn.

I used your textbook to understand the theory behind the algorithms, but it'd be a waste of time (and potentially dangerous) to implement any non-trivial algorithm yourself. Especially since the sklearn python module has basically everything you would need (minus neural networks which you will find through Theano or TensorFlow).

u/timelick · 3 pointsr/Physics

I was glad when someone pointed out David MacKay's book to me. Now I can pass it along to you. I don't know if it is directly relevant to what you are pursuing in physics, but it is a wonderful, and FREE, book. Check out the amazon reviews and see if it would be worth your time.

u/Flofinator · 3 pointsr/learnprogramming

Well I do web-development right now but have been teaching myself Machine-Learning. I would eventually like to venture and do more theoretical work so I will tell you what classes I've taken and what I've found important to learn so far.

Andrew Ng's machine learning class is excellent. I don't know Matlab and started to fall short in the assignments because I didn't have the time to learn Matlab along with the theoretical material, but it will give you a good base of knowledge on how many of the deep learning techniques work. https://www.coursera.org/learn/machine-learning

I would also highly recommend http://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka-ebook/dp/B00YSILNL0 .

On to the things I find really important to learn

If you plan to do Deep Learning, having at least a basic understanding of Linear Algebra is a must. Linear Algebra I think was much easier to learn than Calculus though!

I think if you really want to start delving into modern AI you'll need to learn to read papers. My level is increasing, but a lot of the new papers are still pretty far out of my reach. This is a huge topic, and you could spend the rest of your life understanding everything in every paper nowadays, but some of the major areas to get good at are:

  1. Mathematical analysis, I think, is very important. You may not need it for a lot of deep learning papers unless they are heavy in probability, but if you ever want to learn about other types of AI it is pretty imperative.
  2. Probability is a must; every interesting paper I've read other than optimization papers is probability-heavy. Think of AlphaGo's win: it used Monte Carlo Tree Search with a Markov Decision Process. Both come from probability theory.
  3. Data structures are becoming increasingly important in a lot of non-deep-learning AI ideas, like Gödel machines, evolutionary algorithms, cellular automata, etc. The problem with a lot of these is that the data structures make the algorithms slow. One of my favorite researchers, Jürgen Schmidhuber, is working on making his Gödel machine run faster with something called Levin search. The problem is, the data structure is inherently slow, as the data structure is its own program.

u/K900_ · 2 pointsr/learnpython

You might be interested in this.

u/editorijsmi · 2 pointsr/rstats

you check the following book

Forecasting models – an overview with the help of R software : Time series - Past ,Present and Future

https://www.amazon.co.uk/dp/B07VFY53B1 (E-Book)

https://www.amazon.com/dp/1081552808 (Paperback)

ISBN: 9781081552800

u/Modatu · 2 pointsr/compmathneuro

I also liked the Dayan & Abbott book.

I also liked the Neuronal Dynamics book by Gerstner, which has accompanying online resources.

u/Sarcuss · 2 pointsr/Python

Probably Python Machine Learning. It is a more applied machine learning book than a theoretical one, while still giving an overview of the theory like ISLR :)

u/TBSchemer · 2 pointsr/GetMotivated

Well, I already had some basic programming skills from an introductory college course, but there are definitely online tutorials and exercises that can teach you that. I would recommend searching "introduction to python" and just picking a tutorial to work through (unless someone else has a more specific recommendation).

Python is one of the easiest languages to pick up, but it's extremely powerful. Knowing the basics, I just started trying to come up with fun, little projects I thought would be doable for me. Every time I ran into a component I wasn't sure how to do (or wasn't sure of the best way to do), I searched for the answers online (mostly at Stack Exchange). I later started looking through popular projects on Github to see good examples of proper application structure.

Each of my projects taught me a new skill that was crucial to building myself up to the point of true "software engineering," and they became increasingly more complicated:

  1. I started out writing a simple script that would run through certain text files I was generating in my research and report some of the numbers to the console.

  2. I wrote a script that would take a data file, plot the data on a graph, and then plot its 1st and 2nd derivatives.

  3. I wrote a simple chemical database system with a text-prompt user interface because my Excel files were getting too complicated. This is where I really learned "object-oriented" programming.

  4. I wanted to make the jump to graphical user interfaces, so I worked through tutorials on Qt and rewrote my database to work with Qt Designer.

  5. I wrote some stock-tracking software, again starting from online tutorials.

  6. I bought this book on neural networks and worked through the examples.

  7. I wrote an application that can pull molecular structures from the Cambridge Crystal Structure Database and train a neural network on this data to determine atom coordination number.

  8. For a work sample for a job I applied to, I wrote an application to perform the GSEA analysis on gene expression data. I really paid close attention to proper software structure on this one.

  9. Just last week I wrote an application that interfaces with a computational chemistry software package to automate model generation and data analysis for my thesis.

    The important thing to remember about programming is there's always more to learn, and you just need to take it one step at a time. As you gain experience, you just get quicker at the whole process.
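
A script like the one in step 2 above (reading a data file and plotting its 1st and 2nd derivatives) can be sketched with numpy's finite-difference helper (a hypothetical reconstruction, not the commenter's original code):

```python
import numpy as np

# Stand-in for data loaded from a file: a smooth signal sampled on a grid
x = np.linspace(0.0, 2.0 * np.pi, 1000)
f = np.sin(x)

df = np.gradient(f, x)       # numerical 1st derivative, ~ cos(x)
d2f = np.gradient(df, x)     # numerical 2nd derivative, ~ -sin(x)

# (Plotting step omitted here; e.g. matplotlib's plt.plot(x, f); plt.plot(x, df); ...)
print(float(np.max(np.abs(df - np.cos(x)))))   # small discretization error
```

`np.gradient` uses central differences in the interior, so the estimate tightens as the sampling grid gets finer.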

u/frozen_frogs · 2 pointsr/learnprogramming

This free book supposedly contains most of what you need to get into machine learning (focus on deep learning). Also, this book seems like a nice introduction.

u/kmyeRI · 2 pointsr/cogsci

I'm no expert, but I thought The Mind Within the Net: Models of Learning, Thinking, and Acting by Manfred Spitzer was a great introduction to neural nets and how they might relate to neurophysiology and cognition.

u/jayman39tx · 2 pointsr/learnpython

I'm not sure this is a cookbook, per se, but it claims to be a step-by-step (cookbook-like) guide with real-world examples.


https://www.amazon.com.au/Deep-Learning-Step-Step-Presenting/dp/1717374182/


u/mwalczyk · 2 pointsr/learnmachinelearning

I'm very new to ML myself (so take this with a grain of salt) but I'd recommend checking out Make Your Own Neural Network, which guides you through the process of building a 2-layer net from scratch using Python and numpy.

That will help you build an intuition for how neural networks are structured, how the forward / backward passes work, etc.
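
The structure that book walks you through looks roughly like this (a from-scratch sketch with one hidden layer and sigmoid activations, assuming a cross-entropy loss; not the book's actual code):

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR: the classic task a network without a hidden layer can't solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output

lr = 0.5
for _ in range(20000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (gradients for cross-entropy loss with a sigmoid output)
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)           # chain rule through the hidden layer
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.ravel())  # trained predictions for the four XOR inputs
```

The `d_h` line is the backward pass in miniature: the output error flows back through the weights and is scaled by the activation's derivative.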

Then, I'd probably recommend checking out Stanford's online course notes / assignments for CS231n. The assignments guide you through building a computation graph, which is a more flexible, powerful way of approaching neural network architectures (it's the same concept behind Tensorflow, Torch, etc.)

u/radiantyellow · 2 pointsr/Python

have you checked out the gym library from OpenAI? I explored it a tiny bit during my software development class, and by tiny I mean supervised learning for the CartPole game

https://github.com/openai/gym
https://gym.openai.com/

there are some guides and videos explaining certain games in there that'll make learning and implementing learning algorithms fun. My introduction to Machine Learning was through Make Your Own Neural Network; it's a great book for learning about perceptrons, layers, activations and such; there's also a video.

u/ummcal · 2 pointsr/findapath

I really liked this book called "Make your own Neural Network". It's for absolute beginners and only a bit over 200 pages.

u/adventuringraw · 2 pointsr/learnmachinelearning

I always like finding intuitive explanations to help grapple with the 'true' math. It's really hard to extract meaning sometimes from hard books, but at some point, the 'true' story and the kind of challenging practice that goes with it is something you still need. If you just want to see information theory from a high level, Khan Academy is probably a great place to start. But when you say 'deep learning research'... if you want to write an original white paper (or even read an information-theoretic paper on deep learning) you'll need to wade deep into the weeds and actually get your hands dirty. If you do want a good foundation in information theory for machine learning, I went through the first few chapters of David MacKay's information theory book and that's been great so far; I'm excited to go through it properly at some point soon. I've heard Cover and Thomas is considered more the 'bible' of the field for undergrad/early grad study, but it takes a communication-centric approach instead of a specifically machine-learning-based one.
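
A first concrete step into that material: Shannon entropy, the central quantity in both MacKay and Cover & Thomas, is only a few lines (a generic illustration, not from either book):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete probability distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                              # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)) + 0.0)  # + 0.0 avoids printing -0.0

print(entropy([0.5, 0.5]))   # fair coin: 1.0 bit
print(entropy([1.0]))        # certain outcome: no surprise, 0 bits
print(entropy([0.25] * 4))   # uniform over 4 outcomes: 2.0 bits
```

Each result matches the "number of fair-coin flips of surprise" intuition that both books build on.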

Um... though reading your comment again, do you also not know probability theory and statistics? Wasserman's All of Statistics is a good source for that, but you'll need a very high level of comfort with multivariate calculus and a reasonable level of comfort with proof-based mathematics to be able to weather that book.

Why don't you start looking at the kinds of papers you'd be interested in? Some research is more computational than theoretical. You've got a very long road ahead of you to get a proper foundation for original theoretical research, but getting very clear on what exactly you want to do might help you narrow down what you want to study? You really, really can't do wrong with starting with stats though, even if you do want to focus on a more computer science/practical implementation direction.

u/amazon-converter-bot · 1 pointr/FreeEBOOKS

Here are all the local Amazon links I could find:


amazon.co.uk

amazon.ca

amazon.com.au

amazon.in

amazon.com.mx

amazon.de

amazon.it

amazon.es

amazon.com.br

amazon.nl

amazon.co.jp

amazon.fr

Beep bloop. I'm a bot to convert Amazon ebook links to local Amazon sites.
I currently look here: amazon.com, amazon.co.uk, amazon.ca, amazon.com.au, amazon.in, amazon.com.mx, amazon.de, amazon.it, amazon.es, amazon.com.br, amazon.nl, amazon.co.jp, amazon.fr, if you would like your local version of Amazon adding please contact my creator.

u/kittttttens · 1 pointr/learnmachinelearning

re. question 2, to my knowledge, there's no comprehensive book or MOOC that covers the applications of machine learning in biology. there's this book, but it's almost 20 years out of date at this point (which is a huge amount of time in this field), so i wouldn't recommend it. it seems to focus mostly on analysis of genomic sequencing data.

it's probably a safer bet to read review papers that are more recent. this paper covers a lot of current applications in molecular biology and human genetics, and brendan frey is well known in the field. for deep learning, there's this collaboratively written review, which is probably the most comprehensive resource you'll find.

if you have a more specific subfield of biology that you're interested in, i can try to help you find more resources.

u/alzho12 · 1 pointr/datascience

As far as Python books, you should get these 2:
Python Data Science Handbook and Python Machine Learning.

u/sasquatch007 · 1 pointr/datascience

Just FYI, because this is not always made clear to people when talking about learning or transitioning to data science: this would be a massive undertaking for someone without a strong technical background.

You've got to learn some math, some statistics, how to write code, some machine learning, etc. Each of those is a big undertaking in itself. I am a person who is completely willing to spend 12 hours at a time sitting at a computer writing code... and it still took me a long time to learn how not to write awful code, to learn the tools around programming, etc.

I would strongly consider why you want to do this yourself rather than hire someone, and whether it's likely you'll be productive at this stuff in any reasonable time frame.

That said, if you still want to give this a try, I will answer your questions. For context: I am not (yet) employed as a data scientist. I am a mathematician who is in the process of leaving academia to become a data science in industry.


> Given the above, what do I begin learning to advance my role?

Learn to program in Python. (Python 3. Please do not start writing Python 2.) I wish I could recommend an introduction for you, but it's been a very long time since I learned Python.

Learn about Numpy and Scipy.

Learn some basic statistics. This book is acceptable. As you're reading the book, make sure you know how to calculate the various estimates and intervals and so on using Python (with Numpy and Scipy).
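
As an example of checking a textbook interval in Python, a 95% t-interval for a mean takes a couple of lines with scipy (made-up numbers):

```python
import numpy as np
from scipy import stats

sample = np.array([12.1, 11.8, 12.5, 12.0, 11.9, 12.3, 12.2, 11.7])
mean = sample.mean()
sem = stats.sem(sample)   # standard error of the mean (uses ddof=1)

# 95% confidence interval from the t distribution with n-1 degrees of freedom
lo, hi = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)
print(round(mean, 3), (round(lo, 3), round(hi, 3)))
```

Recomputing the same interval by hand from the t-quantile formula is a good exercise to confirm you understand what scipy is doing.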

Learn some applied machine learning with Python, maybe from this book (which I've looked at some but not read thoroughly).

That will give you enough that it's possible you could do something useful. Ideally you would then go back and learn calculus and linear algebra and then learn about statistics and machine learning again from a more sophisticated perspective.

> What programming language do I start learning?

Learn Python. It's a general purpose programming language (so you can use it for lots of stuff other than data), it's easy to read, it's got lots of powerful data libraries for data, and a big community of data scientists use it.

> What are the benefits to learning the programming languages associated with so-called 'data science'? How does learning any of this specifically help me?

If you want a computer to help you analyze data, and someone else hasn't created a program that does exactly what you want, you have to tell the computer exactly what you want it to do. That's what a programming language is for. Generally the languages associated with data science are not magically suited for data science: they just happen to have developed communities around them that have written a lot of libraries helpful to data scientists (R could be seen as an exception, but IMO, it's not). Python is not intrinsically the perfect language for data science (frankly, as far as the language itself goes, I'm ambivalent about it), but people have written very useful Python libraries like Numpy and scikit-learn. And having a big community is also a real asset.

> What tools / platforms / etc can I get my hands on right now at a free or low cost that I can start tinkering with the huge data sets I have access to now? (i.e. code editors? no idea...)

Python along with libraries like Numpy, Pandas, scikit-learn, and Scipy. This stuff is free; there's probably nothing you should be paying for. You'll have to make your own decision regarding an editor. I use Emacs with evil-mode. This is probably not the right choice for you, but I don't know what would be.


> Without having to spend $20k on an entire graduate degree (I have way too much debt to go back to school. My best bet is to stay working and learn what I can), what paths or sequence of courses should I start taking? Links appreciated.

I personally don't know about courses because I don't like them. I like textbooks and doing things myself and talking to people.

u/ClydeMachine · 1 pointr/learnmachinelearning

Yes! The one I've used with success is Raschka's Python Machine Learning. Very hands-on, many examples, great for getting familiar with the basics of data science work, in my experience.

u/Theotherguy151 · 1 pointr/learnmachinelearning

Tariq Rashid has a great book on ML, and he breaks it down for total beginners. He breaks down the math as if you're in elementary school. I think it's called ML for beginners.


Book link:

https://www.amazon.com/Make-Your-Own-Neural-Network-ebook/dp/B01EER4Z4G/ref=sr_1_1?crid=3H9PBLPVUWBQ4&keywords=tariq+rashid&qid=1565319943&s=gateway&sprefix=tariq+ra%2Caps%2C142&sr=8-1


I got the Kindle edition because I'm broke. It's just as good as the actual book.

u/Thistleknot · 1 pointr/philosophy

I think my idea was like mapping a bunch of x,y coordinates. Then fitting a line. Dropping the x,y as inputs, and using the predicted values of x (that lie along the regression line) as input. So the regression line changes (and the predicted values of x) with every new addition of an x,y variable (i.e. another set of inputs to add to the vector of x,y plots). The predicted values along the regression line merely become more fine-tuned with additional x,y plots [aka the vector gets bigger].
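
The scheme described here, refitting the line and re-predicting every time a new point arrives, can be sketched with numpy's least-squares polynomial fit (a loose interpretation of the comment, not the commenter's code):

```python
import numpy as np

points_x = [1.0, 2.0, 3.0]
points_y = [2.1, 3.9, 6.2]

def refit(xs, ys):
    # Fit a degree-1 line to all points seen so far and return the
    # predicted (smoothed) y value at each observed x.
    slope, intercept = np.polyfit(xs, ys, 1)
    return slope * np.array(xs) + intercept

smoothed = refit(points_x, points_y)

# Each new (x, y) pair changes the fitted line, refining every prediction
points_x.append(4.0); points_y.append(8.1)
smoothed = refit(points_x, points_y)
print(smoothed.round(2))
```

Every new observation nudges the slope and intercept, which is the "line keeps getting fine-tuned" behavior described above.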

What sucks is I never got an ANN coded up... I was hoping to model it in Excel, but the layers of neurons get too complex for me to do in Excel (multiple connections). It really does require C... and the backpropagation was confusing the f out of me; something to do with bastardized calculus. Oh yeah, [I think] it was the derivatives I got lost on, which in a way reminds me of regression lines.

http://www.amazon.com/Introduction-Math-Neural-Networks-Heaton-ebook/dp/B00845UQL6

u/codefying · 1 pointr/datascience

My top 3 are:

  1. [Machine Learning] (https://www.cs.cmu.edu/~tom/mlbook.html) by Tom M. Mitchell. Ignore the publication date, the material is still relevant. A very good book.
  2. [Python Machine Learning] (https://www.amazon.co.uk/dp/1783555130/ref=rdr_ext_sb_ti_sims_2) by Sebastian Raschka. The most valuable attribute of this book is that it is a good introduction to scikit-learn.
  3. Using Multivariate Statistics by Barbara G. Tabachnick and Linda S. Fidell. Not a machine learning book per se, but a very good source on regression, ANOVA, PCA, LDA, etc.
u/srkiboy83 · 1 pointr/learnprogramming

http://www.urbandictionary.com/define.php?term=laughing&defid=1568845 :))

Now, seriously, if you want to get started, I'd recommend this for R (http://www.amazon.com/Introduction-Statistical-Learning-Applications-Statistics/dp/1461471370/) and this for Python (http://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130//).

Also, head out to /r/datascience and /r/MachineLearning!

EDIT: Wrong link.

u/swinghu · 1 pointr/learnmachinelearning

Yes, this tutorial is very useful for scikit-learn. Before watching the videos, I would recommend the book Python Machine Learning first! https://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130/ref=sr_1_1?s=books&ie=UTF8&qid=1487243060&sr=1-1&keywords=python+machine+learning

u/krtcl · 1 pointr/learnmachinelearning

You might want to check this book out; it really breaks things down into manageable, understandable chunks. As the title implies, it's about neural networks. Machine Learning Mastery is also a website that does a good job of breaking things down; I'm pretty sure you've already come across it.

u/SmileAndDonate · 1 pointr/artificial


Info | Details
----|-------
Amazon Product | The Mind within the Net: Models of Learning, Thinking, and Acting
>Amazon donates 0.5% of the price of your eligible AmazonSmile purchases to the charitable organization of your choice. By using the link above you get to support a charity and help keep this bot running through affiliate programs, all at zero cost to you.

u/Dinoswarleaf · 1 pointr/APStudents

Hey! I'm not OP but I think I can help. It's kind of difficult to summarize how machine learning (ML) works in just a few lines since it has a lot going on, but hopefully I can briefly summarize how it generally works (I've worked a bit with them, if you're interested in how to get into learning how to make one you can check out this book)

In a brief summary, a neural network takes a collection of data (like all the characteristics of a college application), feeds all its variables (each part of the application: AP scores, GPA, extracurriculars, etc.) into the input nodes and, through some magic math shit, finds patterns through trial and error to produce the output you need, so that if you give it a new data set (like a new application) it can predict the chance that something is what you want it to be (that an applicant gets into a certain college).

How it works is that each variable you feed into the network is a number representing the data you're inputting. For example, one input node might hold the average AP score, or the number of AP exams you got a 5 on, or your GPA, or some number representing extracurriculars. Each input is then multiplied by what are called weights (the Ws in this picture) and sent off to multiple other neurons, where it is added to the other weighted variables and then normalized so the numbers don't get gigantic. You do this with each node in the first hidden layer, then repeat the process through however many layers you have until you get your outputs. Now, this is hopefully where everything clicks:

Let's say the output node is just one number representing the chance you get into the college. On the first pass, all the weights multiplied with the inputs are chosen at random (kinda; they're within a certain range so they're roughly where they need to be), so your output at first is probably not close to the real chance you'll get accepted. This is the whole magic behind the neural network: you take how far off the network's guess was from the real-life acceptance rate and, through something called backpropagation (the math behind it is way too much to explain here, but here's an example of a formula used for it), you adjust the weights so the output moves closer to the actual answer. Do this thousands or millions of times and your network gets closer and closer to matching reality, which lets you put in new data and get a good idea of your chance of getting into a college. Of course, even with literal millions of examples you'll never be 100% accurate, because human decisions are too variable to sum up in a mathematical sense, but you can get really close to what will probably happen, which is better than nothing at all :)
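The forward pass and weight-update loop described above fit in a few lines of NumPy. This is a toy sketch on fabricated data (random "application" features and a made-up admission rule), not the actual admissions model being discussed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake data: 50 "applications" with 3 numeric features each (e.g. GPA,
# AP count, activities score), labeled 1 if the feature sum is big enough.
X = rng.random((50, 3))
y = (X.sum(axis=1, keepdims=True) > 1.5).astype(float)

# One hidden layer of 4 neurons; weights start out (pseudo-)random.
W1 = rng.normal(0.0, 0.5, (3, 4))
W2 = rng.normal(0.0, 0.5, (4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: multiply by weights, then squash so numbers stay small.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backpropagation: push the error back through each layer (chain rule).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out) / len(X)
    W1 -= 0.5 * (X.T @ d_h) / len(X)

# Mean squared error after training (guessing 0.5 everywhere would give ~0.25).
loss = float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))
```

The "thousands or millions of times" part is literally that `for` loop; each iteration nudges every weight a little in the direction that shrinks the error.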

The beauty of ML is it's all automated once you set up the neural network and test that it works properly. It takes a buttload of data but you can sit and do what you want while it's all processing, which is really cool.

I don't think I explained this well. Sorry. I'd recommend the book I sent if you want to learn about it since it's a really exciting emerging field in computer science (and science in general) and it's really rewarding to learn and use. It goes step by step and explains it gradually so you feel really familiar with the concepts.

u/[deleted] · 1 pointr/cogsci

Perhaps you might find Minsky's The Emotion Machine a worthy read. Proposes some ideas for how things like that could work.

u/jalagl · 1 pointr/learnmachinelearning

In addition to the 3blue1brown video someone else mentioned, this book is a great introduction to the algorithms without going into much math (though you should dig into the math to fully understand what is going on).

Make Your Own Neural Network
https://www.amazon.com/dp/B01EER4Z4G/ref=cm_sw_em_r_mt_dp_U_NkqpDbM5J6QBG

u/Wootbears · 1 pointr/Udacity

No problem!

Right now they say you really only need a background in Python... however, even from the first week I've seen a lot of linear algebra (mostly matrix operations like dot products and knowing when to use transposed matrices, etc.). I assume that as the course progresses more math background will be needed, such as probability and calc 3.
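Those matrix operations (dot products, transposes) are exactly what NumPy provides; a quick illustration of the two ops mentioned, with made-up numbers:

```python
import numpy as np

W = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])   # 3x2 weight matrix
x = np.array([1.0, 0.5])     # 2-element input vector

h = W @ x        # dot product: (3x2) @ (2,) -> 3-element vector
back = W.T @ h   # the transpose shows up when pushing errors backward
```

That `W.T` pattern is the one that trips people up in backpropagation code: forward uses the weights, backward uses their transpose.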

I think there was even a quiz using Scikit-Learn in Python, though I might be thinking of the Machine Learning Engineer Nanodegree.

If you want to see what was needed for the first week of homework, it was almost exactly the same network that you build in this book: https://www.amazon.com/Make-Your-Own-Neural-Network/dp/1530826608/ref=sr_1_1?ie=UTF8&qid=1485965800&sr=8-1&keywords=python+neural+network

u/Zedmor · 1 pointr/datascience

I am probably in the same boat. Agree with your thoughts on GitHub. I fell in love with this book: https://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130/ref=sr_1_1?ie=UTF8&qid=1474393986&sr=8-1&keywords=machine+learning+python

It's pretty much what you need: guidance through familiar topics, with great notebooks as examples.

Take a look at seaborn package for visualization.

u/Mindrust · 1 pointr/singularity

>As for SL4, does being comfortable with it mean you think it's going to happen soon?

Not necessarily. The intelligence explosion (singularity) and a limited amount of mental revision might occur in our lifetimes. The intelligence explosion could happen at any time because we're not sure what's required for AGI. If AGI requires new insight into cognition, then it could happen any time between now and a hundred years from now. If the algorithms we currently have are good enough and all we need is hardware, then it could happen a couple of decades from now. Most AI researchers think the problem is software, not hardware, so it's hard to say when AGI might occur.

Limited forms of mental revision in the form of BCIs and other neuroprosthetics are quite possible by mid-century. See this book for more details on that.

The rest of the stuff on that list would take much more time.

> SL1 is about all we'll see in our lifetime.

SL1 is basically what we have today, minus the hydrogen economy (which will likely never happen) and genetic improvements.

We're already seeing hints of SL2 and SL3 (genetic engineering, nanotechnology). Technology is advancing exponentially, not logarithmically, so I think you'll turn out to be wrong on that one. But we'll see I guess.

The only thing I would rule out (from SL0-SL3) for the next few decades is interstellar exploration, since the problems are just too daunting to solve in under a century.

u/DonaldPShimoda · 1 pointr/learnpython

Might be worth looking at someone else's more in-depth explanation of these things to see modern uses. I just picked up this book, which gets into scikit-learn for machine learning in like chapter 3 or something.
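For a flavor of the scikit-learn workflow that kind of book builds toward, here's a generic fit/predict sketch on a built-in dataset (my own example, not an excerpt from the book):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# The standard scikit-learn pattern: split the data, fit a model, score it.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

Nearly every estimator in the library follows this same fit/score shape, which is why it makes such a gentle on-ramp.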

(Just an idea. I look forward to reading your tutorial if you ever post about it here!)

u/tedivm · 0 pointsr/programming

I love this book, and came in here to recommend it. After reading Programming Collective Intelligence there were a few things I was still fuzzy on, so I purchased Collective Intelligence in Action to get another perspective and it was really helpful. Amazon even bundles them together for a discount.