(Part 2) Top products from r/MachineLearning
We found 28 product mentions on r/MachineLearning. We ranked the 222 resulting products by number of redditors who mentioned them. Here are the products ranked 21-40. You can also go back to the previous section.
21. Foundations of Machine Learning (Adaptive Computation and Machine Learning series)
Sentiment score: 1
Number of reviews: 2
22. Bayesian Reasoning and Machine Learning
Sentiment score: 1
Number of reviews: 2
Cambridge University Press
23. Machine Learning for Hackers
Sentiment score: 2
Number of reviews: 2
O'Reilly Media
24. Data Mining: Practical Machine Learning Tools and Techniques (The Morgan Kaufmann Series in Data Management Systems)
Sentiment score: 1
Number of reviews: 2
25. Artificial Intelligence: A Modern Approach (3rd Edition)
Sentiment score: 2
Number of reviews: 2
26. Python Machine Learning, 1st Edition
Sentiment score: 1
Number of reviews: 2
27. Clean Code: A Handbook of Agile Software Craftsmanship
Sentiment score: 1
Number of reviews: 2
Prentice Hall
28. The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
Sentiment score: 1
Number of reviews: 2
29. Tales from Both Sides of the Brain: A Life in Neuroscience
Sentiment score: 1
Number of reviews: 1
30. Principles of Mathematical Analysis (International Series in Pure and Applied Mathematics)
Sentiment score: 1
Number of reviews: 1
McGraw-Hill Science/Engineering/Math
31. Principles of Neural Science, Fifth Edition (Principles of Neural Science (Kandel))
Sentiment score: 1
Number of reviews: 1
32. Fundamental Neuroscience (Squire)
Sentiment score: 1
Number of reviews: 1
33. Programming Massively Parallel Processors: A Hands-on Approach
Sentiment score: 1
Number of reviews: 1
34. Spoken Language Processing: A Guide to Theory, Algorithm and System Development
Sentiment score: -1
Number of reviews: 1
35. C Programming Language, 2nd Edition
Sentiment score: 0
Number of reviews: 1
Prentice Hall
36. Dell S Series LED-Lit Monitor 32" Black (S3219D), QHD 2560 x 1440, 60Hz, 99% sRGB, 16:9, AMD FreeSync, 2 x 5W Speakers, 2 x HDMI 1.4, DP 1.2, USB 3.0
Sentiment score: 1
Number of reviews: 1
37. How to Study as a Mathematics Major
Sentiment score: -1
Number of reviews: 1
Oxford University Press
38. Networks: An Introduction
Sentiment score: 0
Number of reviews: 1
Oxford University Press USA
> Or can someone shed some light on what they're discussing and what this paper is proposing?
2.5) Rejection of 'trivial information closure' helps a bit with bounding conditions. We can think of an aggregate of informationally closed systems as an informationally closed system, but we wouldn't think that a mere aggregate of potentially 'conscious' minds together has a single unitive consciousness. Since trivial information closure doesn't contribute to consciousness according to their hypothesis, adding independent closed systems to another system would not change the degree of consciousness of either. This may also have some relationship with the idea of integration in IIT (Integrated Information Theory).
I cannot personally vouch for the book, but Andy Clark is one of the 'big guys' in the field, so he can be a pretty reliable source.
_____
About background materials: it seemed pretty readable to me without much of a background. For the statements about neural activities, I am just taking their word for it, but the citations can be places to look. You can find more about phenomena like 'blindsight' by googling, if you weren't already aware of it. As opposed to the recommendations made by the other redditor, I don't think it has much to do with the hard problem of consciousness (Nagel's bat or Chalmers's zombie) at all, and you don't need to read those works for this paper. They can be interesting reads in their own right and can help you better understand the potential limitations, but they go in a more philosophical direction not quite related to the scope of the paper. The equations may have some relation to information theory (again, the citations may be the best bet for background). PP seems to be most closely related to the paper, with the idea of predictability at the center, so that may be something to explore for background. IIT can be another background material for this: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003588
https://www.iep.utm.edu/int-info/
Of course R is used for machine learning. It's probably the most popular language for interactive exploratory and predictive analytics right now. For instance, most winners of kaggle.com machine learning competitions use R at one point or another (e.g. packages such as randomForest, gbm, glmnet and of course ggplot2). There is also a recent book specifically teaching how to use R for machine learning: Machine Learning for Hackers.
Myself, I am more of a Python fan, so I would recommend Python + numpy + scipy + scikit-learn + pandas (for data massaging and plotting).
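To give a feel for that stack, here's a minimal sketch of a typical workflow with numpy and scikit-learn. The iris dataset and the random-forest model are just placeholders I picked for illustration, not anything the comment above specifically recommends:

```python
# Minimal sketch of the Python ML stack: numpy arrays in,
# a scikit-learn model trained and scored on held-out data.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# A random forest, roughly analogous to R's randomForest package.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

The same fit/predict interface works across nearly all scikit-learn models, which is a big part of why the stack is pleasant for exploratory work.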
Java is not bad either (e.g. using Mahout or Weka, or more specialized libraries like libsvm / liblinear for SVMs and OpenNLP / Stanford NLP for NLP).
I find working in C directly a bit tedious (esp. for data preparation and interactive analysis), so it's better to use it in combination with a scripting language that has good support for writing C bindings.
The classic Russell & Norvig textbook is definitely worth reading. It starts from the basics and goes to quite advanced topics:
http://www.amazon.com/Artificial-Intelligence-Modern-Approach-3rd/dp/0136042597/
Udacity has an AI class that follows some chapters of that book.
Murphy's textbook builds ML from the ground up, starting from basics of probability theory:
http://www.amazon.com/Machine-Learning-Probabilistic-Perspective-Computation/dp/0262018020/
(I see, it was already recommended)
Coursera has the whole machine learning specialization (Python) and a famous ML class by Andrew Ng (Matlab).
I hope it helps. Good luck!
There have already been a few books listed focusing on theory, so I'll add Machine Learning for Hackers to the list.
It doesn't cover much of the theory, but it's a nice start to getting the programming skills you need for machine learning. When you start using these techniques on real data, you'll quickly see that it's almost never a simple task to go from messy data to results. You need to learn how to program to clean your data and get it into a usable form to do machine learning. A lot of people use Matlab, but since they're free I do all of my programming in R and Python. There are a lot of good libraries/packages for these languages that will enable you to do a lot of cool stuff.
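To illustrate the "messy data to results" point, here's a toy cleaning step in plain Python. The raw values and the `clean_income` helper are invented for the example; real cleaning is rarely this tidy:

```python
# Toy example: raw survey answers for "income" rarely arrive as clean numbers.
raw_incomes = ["$52,000", " 48000 ", "N/A", "61,500", ""]

def clean_income(value):
    """Strip currency symbols, commas, whitespace; return None when missing."""
    value = value.strip().lstrip("$").replace(",", "")
    if not value or value.upper() == "N/A":
        return None
    return float(value)

cleaned = [clean_income(v) for v in raw_incomes]
# Drop the missing entries before modeling.
usable = [v for v in cleaned if v is not None]
print(usable)  # → [52000.0, 48000.0, 61500.0]
```

Multiply this by dozens of columns, each with its own quirks, and it becomes clear why programming skill matters as much as theory.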
That was a phenomenal article. Extremely long (just like every piece of writing associated with Hofstadter), but excellent nonetheless. I'm admittedly sympathetic to Hofstadter's ideas, not the least of which because of my combined math/cognitive science background.
There was a quote by Stuart Russell, who helped write the book on modern AI, that really stood out to me, and I think expresses a lot of my own issue with the current state of AI:
“A lot of the stuff going on is not very ambitious... In machine learning, one of the big steps that happened in the mid-’80s was to say, ‘Look, here’s some real data—can I get my program to predict accurately on parts of the data that I haven’t yet provided to it?’ What you see now in machine learning is that people see that as the only task.”
This is one of the reasons I've started becoming very interested in ontology engineering. The hyperspecialization of today's AI algorithms is what makes them so powerful, but it's also the biggest hindrance to making larger, more generalizable AI systems. What the field is going to need to get past its current "expert systems" phase is a more robust language through which to represent and share the information encoded in our countless disparate AI systems. \end rant
If you are interested enough in machine learning that you are going to work through ESL, you may benefit from reading up on some math first. For example:
Without developing some mathematical maturity, some of ESL may be lost on you. Good luck!
I wouldn't say it's about Neuroscience, but it covers ML/AI. The Master Algorithm is a really good book. It can also serve as an introduction to a ton of different AI algorithms, from clustering to neural networks. It's short and easy to read, I highly recommend it.
Maybe add this book to the list? It's modern and by the editor of the Machine Learning Journal:
http://www.amazon.com/Machine-Learning-Science-Algorithms-Sense/dp/1107422221
http://www.amazon.com/Machine-Learning-Tom-M-Mitchell/dp/0070428077/ref=sr_1_3?s=books&ie=UTF8&qid=1397051304&sr=1-3&keywords=machine+learning is old but a classic.
And http://www.amazon.com/Data-Mining-Practical-Techniques-Management/dp/0123748569/ref=sr_1_1?s=books&ie=UTF8&qid=1397051336&sr=1-1&keywords=data+mining is a good one by the authors of Weka as well.
There are of course a lot of books but I think these are good ones for beginners.
For deep learning reference this:
https://www.quora.com/What-are-some-good-books-papers-for-learning-deep-learning
There are a lot of open courses I watched on YouTube regarding reinforcement learning: one from Oxford, one from Stanford and another from Brown. Here's a free intro book by Sutton, very well regarded:
https://webdocs.cs.ualberta.ca/~sutton/book/the-book.html
For general machine learning their course is pretty good, but I did also buy:
https://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130/ref=sr_1_1?ie=UTF8&qid=1467309005&sr=8-1&keywords=python+machine+learning
There were a lot of books I got into that weren't mentioned. Feel free to pm me for specifics. Cheers
Edit: If you want to get into reinforcement learning check out OpenAI's Gym package, and browse the submitted solutions
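Before reaching for Gym, it can help to see what the core of reinforcement learning looks like in code. Here's a minimal tabular Q-learning sketch on a made-up three-state chain; the environment, rewards, and hyperparameters are all invented for illustration:

```python
import random
from collections import defaultdict

# Tabular Q-learning on a toy 3-state chain: moving "right" from
# state 0 eventually reaches the rewarding terminal state 2.
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
ACTIONS = ["left", "right"]
Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def step(state, action):
    """Toy environment: reward 1.0 only on entering state 2."""
    next_state = min(state + 1, 2) if action == "right" else max(state - 1, 0)
    reward = 1.0 if next_state == 2 else 0.0
    return next_state, reward, next_state == 2

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        if random.random() < EPSILON:            # explore
            action = random.choice(ACTIONS)
        else:                                    # exploit
            action = max(Q[state], key=Q[state].get)
        next_state, reward, done = step(state, action)
        # The core Q-learning update rule.
        target = reward + GAMMA * max(Q[next_state].values())
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = next_state

# After training, "right" should dominate in both non-terminal states.
print(Q[0]["right"] > Q[0]["left"], Q[1]["right"] > Q[1]["left"])
```

Sutton's book derives this update and why it converges; Gym then gives you ready-made environments to plug a loop like this into.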
Having done an MEng at Oxford where I dabbled in ML, the 3 key texts that came up as references in a lot of lectures were these:
Pattern Recognition and Machine Learning (Information Science and Statistics) https://www.amazon.co.uk/dp/0387310738/ref=cm_sw_r_cp_apa_i_TZGnDb24TFV9M
Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning Series) https://www.amazon.co.uk/dp/0262018020/ref=cm_sw_r_cp_apa_i_g1GnDb5VTRRP9
(Pretty sure Murphy was one of our lecturers actually?)
Bayesian Reasoning and Machine Learning https://www.amazon.co.uk/dp/0521518148/ref=cm_sw_r_cp_apa_i_81GnDbV7YQ2WJ
There were ofc others, and plenty of other sources and references too, but you can't go buying dozens of textbooks, not least cuz they would repeat the same things.
If you need some general maths reading too, then pretty much all the useful (non-specialist) maths we used for 4 years is in this:
Advanced Engineering Mathematics https://www.amazon.co.uk/dp/0470646136/ref=cm_sw_r_cp_apa_i_B5GnDbNST8HZR
I kind of see your point, but I don't completely agree. As I said already, I know something about active research in this field: enough, as a matter of fact, to be able to read these books
https://www.amazon.com/Understanding-Machine-Learning-Theory-Algorithms/dp/1107057132
https://www.amazon.com/Foundations-Machine-Learning-Adaptive-Computation/dp/0262039400/
https://www.amazon.com/High-Dimensional-Probability-Introduction-Applications-Probabilistic/dp/1108415199/
However, like most researchers, I mostly focus on my specific subfield of Machine Learning. Also, every now and then, I'd like to read something about my job that doesn't feel like work (even a professional football player may want to kick a ball for fun every now and then 😉). Thus, I was looking for a general overview of Machine Learning that wouldn't be too dumbed down according to experts (otherwise I wouldn't have fun reading it), but which at the same time wasn't a huge reference textbook. After all, this would be just a leisure read; it shouldn't become work after work.
That's why I asked here, rather than on r/LearnMachineLearning. However, if other users also feel I should ask there, I will reconsider.
A fun thing to consider: books or lectures about neuroscience. Looking at the meatware brain is a nice thing to do.
That book is nice. http://www.amazon.com/Tales-Both-Sides-Brain-Neuroscience/dp/0062228803
Coursera and other platforms have various "Intro to neuroscience" courses.
You may enjoy causal models; it's a quick, easy read.
I’ve tried using HD monitors (1900x1080 I think), and it was... not exactly an improvement over my laptop personally. This 2560x1440 monitor seems like a great deal (like half the price of competitors): https://www.amazon.com/Dell-LED-Lit-Monitor-Black-S3219D/dp/B07JVQ8M3Q/ref=mp_s_a_1_3?keywords=32+inch+monitor&qid=1571245395&sr=8-3
I went over that course a couple of years ago and I found it to be very useful.
After finishing the course, I went over this book:
https://www.amazon.com/Programming-Massively-Parallel-Processors-Second/dp/0124159923/
And it was totally worth it.
"Principles of neural science" (bit heavy) and "Fundamental Neuroscience" (heavier) are two standard textbooks. For computational neuroscience/modeling "Principles of Computational Modelling in Neuroscience" is a great intro.
I think this book recommendation might be appreciated on this thread:
Clean Code: A Handbook of Agile Software Craftsmanship https://www.amazon.com/dp/0132350882/ref=cm_sw_r_cp_api_mkZwzb0VN10HD
That's a $30 book
Cool!
I recommend this book: Clean Code
We gave it to every new data scientist we hired at my last company.
MacKay: Information Theory, Inference, and Learning Algorithms; and Newman: Networks for the latter.
K&R -> The C Programming Language by Brian Kernighan and Dennis Ritchie
SICP -> Structure and Interpretation of Computer Programs
The Pragmatic Programmer
Maybe try "The Master Algorithm" by Pedro Domingos: http://www.amazon.com/The-Master-Algorithm-Ultimate-Learning/dp/0465065708
Murphy
BRML
ESL
I'd have to vote for this one:
http://www.amazon.com/gp/aw/d/0123748569/ref=mp_sim_s_1?pi=SL500_SY125
Take the online course by Andrew Ng and then read Python Machine Learning.
If you then become really serious about Machine Learning, read, in this order,
What is your background? Did you study college-level math seriously or casually?
-------
You might want to look at these books (disclaimer: I've never read any of them, but I ordered Devlin and Houston today (from the library)):
http://www.amazon.com/Introduction-Mathematical-Thinking-Keith-Devlin/dp/0615653634/
http://www.amazon.com/How-Study-as-Mathematics-Major/dp/0199661316/
http://www.amazon.com/How-Think-Like-Mathematician-Undergraduate/dp/052171978X/
There are a million details as others have said. You don't know how much you're missing.
This is the book to read for traditional HMM-based ASR.
Ignore the discussion of Baum-Welch. The HMM isn't trained in the normal way, since 1. it's huge, and 2. there's limited data. The transition probabilities come from your language model. The usual HMM topology has three states per phone-in-context, plus a dictionary of pronunciation variants for each word.
Each state has a GMM to model the probabilities of the features. The features are the MFCCs of a frame plus deltas and double deltas computed from the MFCCs of neighboring frames. You'll probably use a diagonal covariance matrix.
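The per-state emission score from such a diagonal-covariance GMM can be sketched in a few lines. The component weights, means, and variances below are invented toy values; real acoustic models use many components over ~39-dimensional MFCC+delta vectors:

```python
import math

def diag_gmm_log_likelihood(x, weights, means, variances):
    """Log-likelihood of feature vector x under a diagonal-covariance GMM.

    weights: mixture weights summing to 1
    means, variances: one list per component, same length as x
    """
    component_logs = []
    for w, mu, var in zip(weights, means, variances):
        # Diagonal covariance => the Gaussian factorizes over dimensions.
        log_n = sum(
            -0.5 * (math.log(2 * math.pi * v) + (xi - mi) ** 2 / v)
            for xi, mi, v in zip(x, mu, var))
        component_logs.append(math.log(w) + log_n)
    # log-sum-exp over components, for numerical stability.
    m = max(component_logs)
    return m + math.log(sum(math.exp(c - m) for c in component_logs))

# Two-component toy GMM over a 2-D "feature vector".
ll = diag_gmm_log_likelihood(
    x=[0.0, 0.0],
    weights=[0.5, 0.5],
    means=[[0.0, 0.0], [3.0, 3.0]],
    variances=[[1.0, 1.0], [1.0, 1.0]])
print(ll)
```

The diagonal assumption is what makes the per-dimension sum possible; a full covariance would require a matrix inverse per component.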
Remember I said phone-in-context? That's because the actual pronunciation of a phoneme depends on the phonemes around it. You have to learn clusters of these since there are too many contexts to model separately.
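Decoding with such an HMM comes down to a Viterbi search over the states. Here is a toy version; the two states, all probabilities, and the "low"/"high" observations are invented, and real recognizers run this over enormous phone-in-context state spaces:

```python
import math

# Toy Viterbi decoder: most likely state sequence for an observation
# sequence, computed in log-space as real systems do.
states = ["sil", "speech"]
start_p = {"sil": 0.6, "speech": 0.4}
trans_p = {"sil": {"sil": 0.7, "speech": 0.3},
           "speech": {"sil": 0.2, "speech": 0.8}}
emit_p = {"sil": {"low": 0.9, "high": 0.1},
          "speech": {"low": 0.2, "high": 0.8}}

def viterbi(observations):
    # V[t][s] = best log-probability of any path ending in state s at time t.
    V = [{s: math.log(start_p[s] * emit_p[s][observations[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(observations)):
        V.append({})
        back.append({})
        for s in states:
            prev, score = max(
                ((p, V[t - 1][p]
                  + math.log(trans_p[p][s] * emit_p[s][observations[t]]))
                 for p in states),
                key=lambda pair: pair[1])
            V[t][s] = score
            back[t][s] = prev
    # Trace back from the best final state.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

print(viterbi(["low", "high", "high", "low"]))
```

In a real recognizer the emission scores come from the per-state GMMs and the transition scores from the language model, but the dynamic program is the same.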
Training data: to train, you need alignments of words and their pronunciations to audio frames. This pretty much requires using an existing recognizer to do labeling for you. You give it a restricted language model to force it to recognize what was said and use the resulting alignment as training data.
Extra considerations: how to model silence (voice activity detector), how to handle pauses and "ums" (voiced pauses). How to handle mapping non-verbatim transcripts to how they might have been spoken (how did he say 1024?). How to adapt to individual speakers. How to collapse states of the HMM into a lattice. How to handle backoff from long ngrams to short ones in your language model.
Needless to say, I don't recommend this for a master's thesis.