(Part 2) Top products from r/MachineLearning


We found 28 product mentions on r/MachineLearning. We ranked the 222 resulting products by number of redditors who mentioned them. Here are the products ranked 21-40. You can also go back to the previous section.


Top comments that mention products on r/MachineLearning:

u/Nameless1995 · 9 pointsr/MachineLearning

> Or can someone shed some light on what they're discussing and what this paper is proposing?


  1. Consciousness (at least, the kind(s) of consciousness we are familiar with) seems to occur at a certain scale. Conscious states don't seem to covary significantly with the noisy stochastic activities of individual cells; rather, they seem to covary with macro-level patterns and activities emerging from populations of neurons. We are not aware of how precisely we process information (like segmenting images, detecting faces, recognizing speech) or perform actions (like precise motor control); we are aware of things at a much higher scale. However, consciousness doesn't seem to exist at an overly macro-level scale either (we wouldn't think, for example, that the USA is conscious).


  2. The authors seem to think that consciousness exists at this scale because of the property of 'non-trivial information closure'. A system is informationally closed if the information flow from the environment to the system is 0. A trivial case of information closure is when the system and the environment are essentially independent. For the authors, the degree of consciousness is instead associated with the degree of closure in non-trivially closed informational systems. What is 'non-trivial information closure'? In this case, even though the environment at time t (E_t) plays a role in forming the system state at time t (Y_t), Y_t encodes enough information about itself, the environment, and the environment's influence on itself that the system can predict much of (though not necessarily all of) Y_{t+1} from Y_t alone, without accessing E_t.


    2.5) Rejecting 'trivial information closure' helps a bit with boundary conditions. We can think of an aggregate of informationally closed systems as itself an informationally closed system, but we wouldn't think that a mere aggregate of potentially 'conscious' minds together has a single unified consciousness. Since trivial information closure doesn't contribute to consciousness under their hypothesis, adding independent closed systems to another system does not change the degree of consciousness of either. This may also have some relationship with the idea of integration in IIT (Integrated Information Theory).


  3. (2) can explain why consciousness seems to be associated with a certain scale. It is difficult to make predictions by modeling all the noisy stochastic neural/cellular activity. Predictions are easier if the essential information about the environment (including causal structure) is modeled at a higher, 'coarse-grained' scale (see (1)): more at the level of the population than at the level of individual samples.


  4. You may now wonder: even if predictability from self-represented states exists at a certain scale that happens to be associated with consciousness, it's not clear why predictability is necessary for consciousness, nor is it very intuitive that our degree of consciousness depends on predictability. For that I don't have any clear answers. Intuitively, most of our conscious experiences do seem to be laden with immediate expectations and anticipations, even if we don't always explicitly notice them. The so-called 'specious present' may always represent the immediate past as retention and the immediate potential future as anticipation. Beyond that, the framework has other intuitive properties. For example, under this framework, high-level contentful consciousness must have much richer representations (of self and environmental information) within a more complex model of higher predictive prowess, which would need a more complex neural substrate; that affirms the intuition that 'higher consciousness' correlates with more complex structures. It can also explain differences between conscious and unconscious processing. For example, it can explain blindsight (where people report that they are blind, i.e. not conscious of visual information, yet behave in a manner showing they have some access to it) by saying that in this case the environmental visual information is associated more directly with actions; it is not internally represented in a rich state at a coarse-grained level offering predictability, and thus people with blindsight are not conscious of their 'sight'.


  5. 'Prediction' seems to be the central part of the paper, though the intuition for why is still lacking. However, there is a decent chunk of literature in cognitive science on the relationship between predictive processing and cognition. PP (Predictive Processing), Prediction Error Minimization, and the like are recent hot topics in cognitive science and philosophy, and these lines of work may or may not better support the paper. The paper is aware of them and discusses its close relationship with them. ICT seems to extend PP by distinguishing unconscious from conscious predictions, and by incorporating the idea of scale and the relationship between consciousness and coarse-graining. I don't have much background in PP, but the works of Andy Clark may be good introductory material, for example: https://www.amazon.com/Surfing-Uncertainty-Prediction-Action-Embodied/dp/0190933216/ref=sr_1_1?keywords=andy+clark&qid=1570248756&s=books&sr=1-1

    I cannot personally vouch for the book, but Andy Clark is one of the 'big guys' in the field, so he can be a pretty reliable source.



  6. ICT seems to work well with some other theories of consciousness too (Global Workspace Theory, IIT, PP), which the authors discuss in the paper. It seems to fill in some gaps in those theories. But I am not very qualified to judge that.
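The closure condition in (2) can be written in standard information-theoretic notation (my reconstruction from the description above; the paper's exact definitions and symbols may differ):

```latex
% Information flow from environment E to system Y (transfer entropy):
% how much E_t tells us about Y_{t+1} beyond what Y_t already predicts.
J(E \to Y) = I(Y_{t+1} ; E_t \mid Y_t)

% Information closure: the environment adds no predictive information
% beyond the system's own state.
J(E \to Y) = 0

% Trivial closure: system and environment are simply independent,
% I(Y_t ; E_t) = 0.
% Non-trivial closure: J(E \to Y) = 0 while I(Y_t ; E_t) > 0,
% i.e. Y_t still carries information about E_t.
```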


    _____


    About background materials: the paper seemed pretty readable to me without much background. For statements about neural activity, I am just taking their word for it, but the citations can be places to look. You can find more about phenomena like 'blindsight' by googling, if you weren't already aware of them. Contrary to the recommendations made by the other redditor, I don't think it has much to do with the hard problem of consciousness (Nagel's bat or Chalmers's zombie), and you don't need to read those for this paper; they can be interesting reads in their own right and can help in understanding the potential limitations, but they go in a more philosophical direction not quite within the scope of the paper. The equations have some relation to information theory (again, the citations may be the best bet for background). PP seems most closely related to the paper, with the idea of predictability at the center, so that may be something to explore for background. IIT can be another background material for this: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003588

    https://www.iep.utm.edu/int-info/

u/ogrisel · 3 pointsr/MachineLearning

Of course R is used for machine learning. It's probably the most popular language for interactive exploratory and predictive analytics right now. For instance, most winners of kaggle.com machine learning competitions use R at one point or another (e.g. packages such as randomForest, gbm, glmnet and, of course, ggplot2). There is also a recent book specifically teaching how to use R for machine learning: Machine Learning for Hackers.

Myself, I am more of a Python fan, so I would recommend Python + numpy + scipy + scikit-learn + pandas (for data massaging and plotting).

Java is not bad either (e.g. using Mahout or Weka, or more specialized libraries like libsvm / liblinear for SVMs and OpenNLP / Stanford NLP for NLP).

I find working in C directly a bit tedious (esp. for data preparation and interactive analysis), so it's better to use it in combination with a scripting language that has good support for writing C bindings.
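As a minimal sketch of the Python stack recommended above (the dataset, model choice, and parameters are my own illustration, using scikit-learn's standard API):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load a small built-in dataset and hold out a test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Fit a random forest (the same model family as R's randomForest package).
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Fraction of held-out samples classified correctly.
accuracy = clf.score(X_test, y_test)
```

pandas would enter before this step, for loading and cleaning raw data into the X/y arrays.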

u/andreyboytsov · 1 pointr/MachineLearning

The classic Russell & Norvig textbook is definitely worth reading. It starts from the basics and goes to quite advanced topics:
http://www.amazon.com/Artificial-Intelligence-Modern-Approach-3rd/dp/0136042597/
Udacity has an AI class that follows some chapters of that book.

Murphy's textbook builds ML from the ground up, starting from basics of probability theory:
http://www.amazon.com/Machine-Learning-Probabilistic-Perspective-Computation/dp/0262018020/
(I see it was already recommended.)

Coursera has the whole machine learning specialization (Python) and a famous ML class by Andrew Ng (Matlab).

I hope it helps. Good luck!

u/MicturitionSyncope · 1 pointr/MachineLearning

There have already been a few books listed focusing on theory, so I'll add Machine Learning for Hackers to the list.

It doesn't cover much of the theory, but it's a nice start to getting the programming skills you need for machine learning. When you start using these techniques on real data, you'll quickly see that it's almost never a simple task to go from messy data to results. You need to learn how to program to clean your data and get it into a usable form to do machine learning. A lot of people use Matlab, but since they're free I do all of my programming in R and Python. There are a lot of good libraries/packages for these languages that will enable you to do a lot of cool stuff.

u/blindConjecture · 3 pointsr/MachineLearning

That was a phenomenal article. Extremely long (just like every piece of writing associated with Hofstadter), but excellent nonetheless. I'm admittedly sympathetic to Hofstadter's ideas, not least because of my combined math/cognitive science background.

There was a quote by Stuart Russell, who helped write the book on modern AI, that really stood out to me, and I think expresses a lot of my own issue with the current state of AI:

“A lot of the stuff going on is not very ambitious... In machine learning, one of the big steps that happened in the mid-’80s was to say, ‘Look, here’s some real data—can I get my program to predict accurately on parts of the data that I haven’t yet provided to it?’ What you see now in machine learning is that people see that as the only task.”

This is one of the reasons I've started becoming very interested in ontology engineering. The hyperspecialization of today's AI algorithms is what makes them so powerful, but it's also the biggest hindrance to making larger, more generalizable AI systems. What the field is going to need to get past its current "expert systems" phase is a more robust language through which to represent and share the information encoded in our countless disparate AI systems. \end rant

u/TheMiamiWhale · 3 pointsr/MachineLearning
  1. Not sure what exactly the context is here, but usually it is the space from which the inputs are drawn. For example, if your inputs are d-dimensional, the input space may be R^d or a subspace of R^d.

  2. The curse of dimensionality is important because many machine learning algorithms rely on looking at data points near a given point to infer information about that point. With the curse of dimensionality, our data becomes sparser as the dimension increases, making it harder to find nearby data points.

  3. The size of the neighborhood depends on the function. A function that grows very quickly may require a smaller, tighter neighborhood than a function with less dramatic fluctuations.

    If you are interested enough in machine learning that you are going to work through ESL, you may benefit from reading up on some math first. For example:
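The sparsity effect in (2) is easy to demonstrate numerically (an illustrative sketch of my own, not from the comment): as the dimension grows, the nearest neighbor of a query point ends up barely closer than a typical point, so "nearby" loses its meaning.

```python
import numpy as np

rng = np.random.default_rng(0)

def min_to_mean_ratio(dim, n_points=2000):
    """Ratio of nearest-neighbor distance to mean distance from a
    random query point to uniform points in the unit hypercube.
    A ratio near 1 means distance contrast has vanished."""
    points = rng.random((n_points, dim))
    query = rng.random(dim)
    dists = np.linalg.norm(points - query, axis=1)
    return dists.min() / dists.mean()

low = min_to_mean_ratio(2)     # low dimension: nearest neighbor is much closer
high = min_to_mean_ratio(500)  # high dimension: nearly as far as the average point
```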

u/LazyAnt_ · 11 pointsr/MachineLearning

I wouldn't say it's about Neuroscience, but it covers ML/AI. The Master Algorithm is a really good book. It can also serve as an introduction to a ton of different AI algorithms, from clustering to neural networks. It's short and easy to read, I highly recommend it.

u/idiosocratic · 11 pointsr/MachineLearning

For deep learning reference this:
https://www.quora.com/What-are-some-good-books-papers-for-learning-deep-learning

There are a lot of open courses I watched on YouTube regarding reinforcement learning: one from Oxford, one from Stanford and another from Brown. Here's a free intro book by Sutton, very well regarded:
https://webdocs.cs.ualberta.ca/~sutton/book/the-book.html

For general machine learning their course is pretty good, but I did also buy:
https://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130/ref=sr_1_1?ie=UTF8&qid=1467309005&sr=8-1&keywords=python+machine+learning

There were a lot of books I got into that weren't mentioned. Feel free to pm me for specifics. Cheers

Edit: If you want to get into reinforcement learning check out OpenAI's Gym package, and browse the submitted solutions

u/majordyson · 29 pointsr/MachineLearning

Having done an MEng at Oxford where I dabbled in ML, the 3 key texts that came up as references in a lot of lectures were these:

Pattern Recognition and Machine Learning (Information Science and Statistics) https://www.amazon.co.uk/dp/0387310738/ref=cm_sw_r_cp_apa_i_TZGnDb24TFV9M

Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning Series) https://www.amazon.co.uk/dp/0262018020/ref=cm_sw_r_cp_apa_i_g1GnDb5VTRRP9

(Pretty sure Murphy was one of our lecturers actually?)

Bayesian Reasoning and Machine Learning https://www.amazon.co.uk/dp/0521518148/ref=cm_sw_r_cp_apa_i_81GnDbV7YQ2WJ

There were of course others, and plenty of other sources and references too, but you can't go buying dozens of textbooks, not least because they would repeat the same things.
If you need some general maths reading too, then pretty much all the useful (non-specialist) maths we used for 4 years is in this:
Advanced Engineering Mathematics https://www.amazon.co.uk/dp/0470646136/ref=cm_sw_r_cp_apa_i_B5GnDbNST8HZR

u/IborkedyourGPU · -2 pointsr/MachineLearning

I kind of see your point, but I don't completely agree. As I said already, I know something about active research in this field: enough, as a matter of fact, to be able to read these books

https://www.amazon.com/Understanding-Machine-Learning-Theory-Algorithms/dp/1107057132
https://www.amazon.com/Foundations-Machine-Learning-Adaptive-Computation/dp/0262039400/
https://www.amazon.com/High-Dimensional-Probability-Introduction-Applications-Probabilistic/dp/1108415199/

However, like most researchers, I mostly focus on my specific subfield of Machine Learning. Also, every now and then, I'd like to read something about my job which doesn't feel like work (even a professional football player may want to kick a ball for fun every now and then 😉). Thus, I was looking for a general overview of Machine Learning that wouldn't be too dumbed down according to experts (otherwise I wouldn't have fun reading it), but which at the same time isn't a huge reference textbook. After all, this would be just a leisure read; it shouldn't become work after work.

That's why I asked here, rather than on r/LearnMachineLearning. However, if other users also feel I should ask there, I will reconsider.

u/Schlagv · 0 pointsr/MachineLearning

A fun thing to consider: books or lectures about neuroscience. Looking at the meatware brain is a nice thing to do.

That book is nice. http://www.amazon.com/Tales-Both-Sides-Brain-Neuroscience/dp/0062228803

Coursera and other platforms have various "Intro to neuroscience" courses.

u/amair · 3 pointsr/MachineLearning

You may enjoy causal models; it's a quick, easy read.

u/ArielRoth · 1 pointr/MachineLearning

I’ve tried using HD monitors (1900x1080 I think), and it was... not exactly an improvement over my laptop personally. This 2560x1440 monitor seems like a great deal (like half the price of competitors): https://www.amazon.com/Dell-LED-Lit-Monitor-Black-S3219D/dp/B07JVQ8M3Q/ref=mp_s_a_1_3?keywords=32+inch+monitor&qid=1571245395&sr=8-3

u/MessyML · 6 pointsr/MachineLearning

I went over that course a couple of years ago and I found it to be very useful.

After finishing the course, I went over this book:

https://www.amazon.com/Programming-Massively-Parallel-Processors-Second/dp/0124159923/

And it was totally worth it.

u/frequenttimetraveler · 5 pointsr/MachineLearning

"Principles of neural science" (bit heavy) and "Fundamental Neuroscience" (heavier) are two standard textbooks. For computational neuroscience/modeling "Principles of Computational Modelling in Neuroscience" is a great intro.

u/garrypig · -1 pointsr/MachineLearning

I think this book recommendation might be appreciated on this thread:
Clean Code: A Handbook of Agile Software Craftsmanship https://www.amazon.com/dp/0132350882/ref=cm_sw_r_cp_api_mkZwzb0VN10HD

u/sanity · 13 pointsr/MachineLearning

I recommend this book: Clean Code

We gave it to every new data scientist we hired at my last company.

u/Kiuhnm · 5 pointsr/MachineLearning

Take the online course by Andrew Ng and then read Python Machine Learning.

If you then become really serious about Machine Learning, read, in this order,

  1. Machine Learning: A Probabilistic Perspective
  2. Probabilistic Graphical Models: Principles and Techniques
  3. Deep Learning
u/gtani · 1 pointr/MachineLearning

What is your background? Did you study college-level math seriously or casually?

-------

You might want to look at these books (disclaimer: I've never read any of them, but I ordered Devlin and Houston today from the library):

http://www.amazon.com/Introduction-Mathematical-Thinking-Keith-Devlin/dp/0615653634/

http://www.amazon.com/How-Study-as-Mathematics-Major/dp/0199661316/

http://www.amazon.com/How-Think-Like-Mathematician-Undergraduate/dp/052171978X/

u/Megatron_McLargeHuge · 1 pointr/MachineLearning

There are a million details as others have said. You don't know how much you're missing.

This is the book to read for traditional HMM-based ASR.

Ignore the discussion of Baum-Welch. The HMM isn't trained in the normal way since 1. it's huge, and 2. there's limited data. The transition probabilities come from your language model. The usual HMM topology has three states per phone-in-context, with a dictionary of pronunciation variants for each word.

Each state has a GMM to model the probabilities of the features. The features are MFCCs of a frame plus deltas and double deltas from the MFCCs of the previous frame. You'll probably use a diagonal covariance matrix.
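A sketch of the feature stacking described above (my simplification: deltas computed as plain frame-to-frame differences, whereas real front ends usually fit a regression over a window of neighboring frames):

```python
import numpy as np

def add_deltas(mfcc):
    """Stack MFCCs with delta and double-delta coefficients.

    mfcc: array of shape (n_frames, n_coeffs), e.g. 13 MFCCs per frame.
    Returns an array of shape (n_frames, 3 * n_coeffs).
    """
    # First difference across frames, padded so frame 0 gets zero deltas.
    delta = np.diff(mfcc, axis=0, prepend=mfcc[:1])
    # Double deltas are differences of the deltas.
    delta2 = np.diff(delta, axis=0, prepend=delta[:1])
    return np.concatenate([mfcc, delta, delta2], axis=1)

frames = np.random.default_rng(0).random((100, 13))  # dummy MFCC frames
features = add_deltas(frames)  # 13 MFCCs + 13 deltas + 13 double deltas per frame
```

Each of these stacked frame vectors is then scored by the state's diagonal-covariance GMM.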

Remember I said phone-in-context? That's because the actual pronunciation of a phoneme depends on the phonemes around it. You have to learn clusters of these since there are too many contexts to model separately.

Training data: to train, you need alignments of words and their pronunciations to audio frames. This pretty much requires using an existing recognizer to do labeling for you. You give it a restricted language model to force it to recognize what was said and use the resulting alignment as training data.

Extra considerations: how to model silence (voice activity detector), how to handle pauses and "ums" (voiced pauses). How to handle mapping non-verbatim transcripts to how they might have been spoken (how did he say 1024?). How to adapt to individual speakers. How to collapse states of the HMM into a lattice. How to handle backoff from long ngrams to short ones in your language model.
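For the backoff point, here is one simple scheme as an illustration ('stupid backoff' with its conventional 0.4 factor; classic ASR systems actually use Katz backoff with discounted counts, which is more involved):

```python
def backoff_score(ngram, counts, alpha=0.4):
    """Score an n-gram (a tuple of words), backing off to a shorter
    history whenever the full n-gram was never observed.
    counts maps word tuples to raw training counts."""
    if len(ngram) == 1:
        total = sum(c for k, c in counts.items() if len(k) == 1)
        return counts.get(ngram, 0) / total if total else 0.0
    history = ngram[:-1]
    if counts.get(ngram, 0) > 0 and counts.get(history, 0) > 0:
        return counts[ngram] / counts[history]
    # Unseen n-gram: drop the oldest word and penalize by alpha.
    return alpha * backoff_score(ngram[1:], counts, alpha)

counts = {
    ("the",): 3, ("cat",): 2, ("sat",): 1,
    ("the", "cat"): 2, ("cat", "sat"): 1,
}
# ("the", "cat", "sat") is unseen, so we back off to ("cat", "sat"):
score = backoff_score(("the", "cat", "sat"), counts)  # 0.4 * (1 / 2) = 0.2
```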

Needless to say, I don't recommend this for a master's thesis.