# Best probability & statistics books according to redditors

We found 1,070 Reddit comments discussing the best probability & statistics books. We ranked the 426 resulting products by number of redditors who mentioned them. Here are the top 20.

## Top Reddit comments about Probability & Statistics:

u/OlimarsOnion · 32 pointsr/politics

When Trump supporters say this, they might as well wear a neon sign that says "I lack a 5th grade education in probability and statistics!"

Read this, if you can:

https://www.amazon.com/Probability-Dummies-Deborah-J-Rumsey/dp/0471751413

u/krtcl · 24 pointsr/learnmachinelearning

I've wasted too much time trying to find the so-called "right" statistics book. I'm still early in my journey, going through calculus using Prof. Leonard's videos while working through a linear algebra book, all in prep for tackling a stats book. Here's a list of books that I've had a look at so far.

• Probability and Statistical Inference by Hogg, Tanis and Zimmerman
• Mathematical Statistics with Applications by Wackerly

These seem to be of a similar level with regards to rigour, which is to say not very rigorous. That's not to say you can get by without the calculus prereq, or even linear algebra.

The other two I've been looking at which seem to be a lot more complex are

• Introduction to Mathematical Statistics, also by Hogg. I'd think it's the more rigorous version of the book mentioned above by the same author
• All of Statistics by Wasserman which seems to require a lot of prior knowledge in statistics, but I think tackles just the perfect topics for machine learning

And then there's Casella and Berger's Statistical inference, which I looked at once and decided not to look at again until I can manage at least one of the aforementioned books. I think I'm leaning most to the first book listed. Whichever one you decide to use, good luck with your journey.

u/Metlover · 24 pointsr/Sabermetrics

I'm usually pretty optimistic for people when it comes to posts asking "how do I get started in sabermetrics," because I was in that position once as well and it's worked out okay for me. But I want to be a bit more realistic here, because I think there is a big red flag that you should recognize in yourself with respect to this.

There are a couple ways to get jobs in fields that require sabermetrics, but you should be aware: there are very few, they are highly competitive, and they require a good amount of work.

The traditional progression for doing sabermetric work is usually something like:

Stage|Level of Sabermetric Experience|Work you're qualified to do|
--:|:--|:--|
1|You look up stats online to form arguments about baseball|Personal blogging, entry-level analytics writing (FanSided, SBN, other sites)|
2|You put stats into a spreadsheet to visualize data or calculate something new to form an argument about baseball|Personal blogging, entry-level analytics writing (FanSided, SBN, other sites), heavier stuff if you're very lucky and a good writer (bigger sites like FanGraphs, Baseball Prospectus), general baseball coverage that isn’t heavily analytical|
3|You use code with baseball stats to visualize data or calculate something new to form an argument about baseball|Heavier analytics writing (SBN, FanGraphs, Baseball Prospectus, The Athletic), entry-level baseball operations work|
4|You use code to create your own models, predictions, and projections about baseball.|Extremely heavy analytics writing, baseball operations/team analytics work|

From your post, it sounds like you're somewhere between #1 and #2 right now. However: "after trying [coding] out I did not like it." You have a very large barrier keeping you from making the jump to stage 3.

If you actually want to go into a sabermetric field as a career, you need to know how to code. Not with Javascript, mind you, but other languages (Python, R, SQL, etc.). I would advise that you try out Python or R (Analyzing Baseball Data with R is an excellent introduction and gives you a lot of practical skills) and see if those really suck you in - and believe me, they need to suck you in. If you really don't like it, don't force yourself to do it and find some other career path, because you won't be able to succeed if you can't enjoy the work that you do.
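
If coding hasn't clicked yet, it may help to see how little code "stage 3" work actually takes. Below is a minimal, illustrative sketch in Python (the same idea works in R, which the book above teaches); the player names and stat lines are entirely made up:

```python
# A taste of "stage 3" sabermetrics: using code to compute derived stats
# from raw counting stats. All numbers below are invented for illustration.

players = [
    {"name": "A", "hits": 150, "at_bats": 500, "walks": 60},
    {"name": "B", "hits": 120, "at_bats": 400, "walks": 30},
]

results = {}
for p in players:
    avg = p["hits"] / p["at_bats"]  # batting average
    # simplified on-base percentage (ignoring HBP and sacrifice flies)
    obp = (p["hits"] + p["walks"]) / (p["at_bats"] + p["walks"])
    results[p["name"]] = (round(avg, 3), round(obp, 3))

print(results)  # {'A': (0.3, 0.375), 'B': (0.3, 0.349)}
```

The point isn't the arithmetic — it's that once the data lives in code, scaling from two invented players to every player-season in Retrosheet is the same loop.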

FanSided has very low barriers of entry and the compensation reflects that - you cannot make a career out of blogging for FanSided. Even if you get to where I am (stage 4), if you're lucky, you might land a contributing position at a site that pays decently for part-time work. There are extremely few people who are somewhere between #3 and #4 who can make a full-time living off of baseball work, and they do it because they like what they do - if you don't like coding and working with baseball data in that environment, you're not going to be able to beat out everybody else who's trying to get there.

Let's say that you work your rear end off, you get to stage three or stage four. What options are available to you? There's maybe a handful of people who work in the "public" sector - that is, writing for websites like FanGraphs, Baseball Prospectus, The Athletic - who make enough money to make sabermetrics their full-time job. It will take a hail fucking mary to land one of those jobs, regardless of how talented you are, and you'll basically need to work double-duty on both sabermetrics and whatever your main hustle is until one of those positions opens up, and even then, you're not guaranteed anything.

You could also work for a team! There are far more positions available, they pay better, you have more data to work with, better job security - this sounds great, right? Problem is, the market cap for analysts is about 20 per team, so there's something like 600 analyst positions that could be available in the future (I can't promise that MLB will ever have 600 analysts total at any given time, but that's an upper estimate). And almost half of those are already full! There's not a whole lot of brain drain from the industry, so it is still extremely hard to break in and you're still going to be competing with the absolute best people in the industry. You will have to love to code and do this work, because everybody you're competing with already does, and everybody else is willing to work twice as hard for it.

My advice to you is this: try out R or Python with baseball data. See if it's enough to get you addicted. See if it starts to occupy every ounce of free time you have, and you feel comfortable with it, and you're willing to put yourself out there and advertise your own work. I'm a full time student and basically every ounce of my free time is put towards working with this stuff, like it's a second full-time job for the past three years, and I'm still a bit of a ways away from making a living off of this. If you can't learn to love it, your time and energy are best spent elsewhere.

u/Pyromane_Wapusk · 14 pointsr/learnmath

There are definitely ways to visualize algebraic concepts, and many algebraic concepts crop up in geometry. Unfortunately, many books and classes won't emphasize visual intuition, so algebra may be harder for you. To some extent you just have to push through even if it isn't your cup of tea, but there are also resources for transferring visual/geometric intuition onto algebraic concepts.

After reading them myself, I recommend the books Visual Group Theory by Nathan Carter and Algebra: Abstract and Concrete by Frederick Goodman. The first focuses a lot on visual intuition for group theory, and a lot of the concepts in group theory generalize to abstract algebra in general. The second is a more traditional book with less focus on visual intuition; it uses the symmetry of geometrical objects and linear algebra for many of its examples.

u/RedsBaseballOfficial · 14 pointsr/Reds

Analyzing Baseball Data with R is the best book, I believe:

https://www.amazon.com/Analyzing-Baseball-Data-Chapman-Hall/dp/1466570229

I also would download PitchRX and Baseball on a Stick to round out your toolkit!

-Kyle

u/jacobolus · 13 pointsr/math

I highly recommend you read the relevant chapters of Nathan Carter’s book Visual Group Theory (site, amzn).

u/nikofeyn · 13 pointsr/math

i have three categories of suggestions.

these are essentially precursors to smooth manifold theory. you mention you have had calculus 3, but this is likely the modern multivariate calculus course.

• advanced calculus: a differential forms approach by harold edwards

• advanced calculus: a geometric view by james callahan

• vector calculus, linear algebra, and differential forms: a unified approach by john hubbard

out of these, if you were to choose one, i think the callahan book is probably your best bet to pull from. it is the most modern, in both approach and notation. it is a perfect setup for smooth manifolds (however, all of these books fit that bill). hubbard's book is very similar, but i don't particularly like its notation. however, it has some unique features and does attempt to unify the concepts, which is a nice approach. edwards book is just fantastic, albeit a bit nonstandard. at a minimum, i recommend reading the first three chapters and then the latter chapters and appendices, in particular chapter 8 on applications. the first three chapters cover the core material, where chapters 4-6 then go on to solidify the concepts presented in the first three chapters a bit more rigorously.

smooth manifolds

• an introduction to manifolds by loring tu

• introduction to smooth manifolds by john m. lee

• manifolds and differential geometry by jeffrey m. lee

• first steps in differential geometry: riemannian, contact, symplectic by andrew mcinerney

out of these books, i only have explicit experience with the first two. i learned the material in graduate school from john m. lee's book, which i later solidified by reading tu's book. tu's book actually covers the same core material as lee's book, but what makes it more approachable is that it doesn't emphasize, and thus doesn't require a lot of background in, the topological aspects of manifolds. it also does a better job of showing examples and techniques, and is better written in general than john m. lee's book. although, john m. lee's book is rather good.

so out of these, i would no doubt choose tu's book. i mention the latter two only to mention them because i know about them. i don't have any experience with them.

conceptual books

these books should be helpful as side notes to this material.

• div, grad, curl are dead by william burke [pdf]

• geometrical vectors by gabriel weinreich

• about vectors by banesh hoffmann

i highly recommend all of these because they're all rather short and easy reads. the first two get at the visual concepts and intuition behind vectors, covectors, etc. they are actually the only two out of all of these books (if i remember right) that even talk about and mention twisted forms.

there are also a ton of books for physicists, applied differential geometry by william burke, gauge fields, knots and gravity by john baez and javier muniain (despite its title, it's very approachable), variational principles of mechanics by cornelius lanczos, etc. that would all help with understanding the intuition and applications of this material.

conclusion

if you're really wanting to get right to the smooth manifolds material, i would start with tu's book and then supplement as needed from the callahan and hubbard books to pick up things like the implicit and inverse function theorems. i highly recommend reading edwards' book regardless. if you're long-gaming it, then i'd probably start with callahan's book, then move to tu's book, all the while reading edwards' book. :)

i have been out of graduate school for a few years now, leaving before finishing my ph.d. i am actually going back through callahan's book (didn't know about it at the time and/or it wasn't released) for fun and its solid expositions and approach. edwards' book remains one of my favorite books (not just math) to just pick up and read.

u/Croc600 · 12 pointsr/sociology

R for Data Science is great, especially because it teaches tidyverse.

Another good book is Learning Statistics with R: A tutorial for psychology students and other beginners, which also teaches the implementation of basic statistical techniques, like ANOVA or linear regression.

If you have some time spare, you can follow it by Data Analysis Using Regression and Multilevel/Hierarchical Models, which is also (mostly) based on R.

The Visual Display of Quantitative Information is a good book on the principles of data visualization. It’s theoretical, so no R examples.

Complex Surveys: A Guide to Analysis Using R is great if you work with survey data, especially if you work with complex designs (which nowadays is pretty much all the time).

Personally, I would also invest some time in learning methodology. Sadly, I can’t help you here, because I didn’t use a textbook for this, but people seem to like the books by Earl Babbie.

u/paultypes · 11 pointsr/programming

Of course efforts like this won't fly because there will be people who sincerely want to can them because it's "computerized racial profiling," completely missing the point that, if race does correlate with criminal behavior, you will see that conclusion from an unbiased system. What an unbiased system will also do is not overweight the extent to which race is a factor in the analysis.

Of course, the legitimate concern some have is about the construction of prior probabilities for these kinds of systems, and there seems to be a great deal of skepticism about the possibility of unbiased priors. But over the last decade or two, the means of constructing unbiased priors have become rather well understood, and form the central subject matter of Part II of E.T. Jaynes' Probability Theory: The Logic of Science, which I highly recommend.

u/MRItopMD · 11 pointsr/medicalschool

Sure! I have a lot of resources on this subject. Before I recommend it, let me very quickly explain why it is useful.

Bayes' rule basically means updating a hypothesis or belief in light of a novel event using prior hypotheses/data. So I am sure you can already see how useful it would be to think about in medicine. The rule (or technically, theorem) underpins an entire field of statistics and is one of the core parts of probability theory.

Bayes' rule explains why you shouldn't trust sensitivity and specificity as much as you think. It would take too long to explain fully here, but if you look up Bayes' theorem on Wikipedia, one of the first examples shows how, despite a drug test having 99% sensitivity and 99% specificity, a person who tests positive can still be more likely not to be taking the drug at all.
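
As a rough sketch of that Wikipedia-style calculation — the 99% figures from above, plus an assumed base rate of 0.5% of people actually using the drug (the base rate is the assumption doing all the work here):

```python
# Hypothetical numbers: a drug test with 99% sensitivity and 99% specificity,
# applied to a population where only 0.5% of people use the drug.

def posterior_given_positive(sensitivity, specificity, prevalence):
    """P(user | positive test), computed with Bayes' rule."""
    true_pos = sensitivity * prevalence                # users who test positive
    false_pos = (1 - specificity) * (1 - prevalence)   # non-users who test positive
    return true_pos / (true_pos + false_pos)

p = posterior_given_positive(0.99, 0.99, 0.005)
print(round(p, 3))  # 0.332 -- a positive test still leaves a <50% chance of actual use
```

Because non-users vastly outnumber users, even a 1% false-positive rate produces more false positives than true positives — which is exactly why the raw sensitivity/specificity numbers mislead.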

Ok, now book recommendations:

Basic: https://www.amazon.com/Bayes-Theorem-Examples-Introduction-Beginners-ebook/dp/B01LZ1T9IX/ref=sr_1_2?ie=UTF8&qid=1510402907&sr=8-2&keywords=bayesian+statistics

https://www.amazon.com/Bayes-Rule-Tutorial-Introduction-Bayesian/dp/0956372848/ref=sr_1_6?ie=UTF8&qid=1510402907&sr=8-6&keywords=bayesian+statistics

Intermediate/Advanced: Only read if you know calculus and linear algebra, otherwise not worth it. That said, these books are extremely good and are a thorough intro compared to the first ones.

https://www.amazon.com/Bayesian-Analysis-Chapman-Statistical-Science/dp/1439840954/ref=sr_1_1?ie=UTF8&qid=1510402907&sr=8-1&keywords=bayesian+statistics

https://www.amazon.com/Introduction-Probability-Chapman-Statistical-Science/dp/1466575573/ref=sr_1_12?s=books&ie=UTF8&qid=1510403749&sr=1-12&keywords=probability

u/analysis1837 · 10 pointsr/math

At the moment, psychology is largely ad hoc, and not a modicum of progress would have been made without the extensive use of statistical methods. Considering the human condition does not allow us to simply extrapolate from our severely limited experiences, if not from the biases of limited datasets, datasets whose various parameters we can't even be certain of.

For example, depending on the culture, the set of phenotypical traits that increases the sexual attractiveness of an organism may differ; to state this is meaningless and ad hoc, and we can only attempt to assess its validity with statistical methods. Still, along come social scientists who proclaim arbitrary sets of phenotypical features to be universal for all humans in all conditions simply because they were convinced by limited and biased datasets (e.g. making extreme generalizations based on the United States' demographics while ignoring the rest of the world).

In fact, the author of "Probability Theory: The Logic of Science" will let you know what he thinks of the shaky sciences of the 20th and 21st centuries, social science and econometrics included, sciences whose only justification is statistical methods.
_

With increasing mathematical depth, and an influx of high-quality applied mathematicians into such fields of science, we will gradually see a significant improvement in the validity of those fields. Currently, psychology, for example, holds no depth, but the field itself is very entertaining to me; that doesn't stop me from enjoying Michael's "Mind Field" series.

For mathematicians, physics itself lacks rigour, let alone psychology.
_

Note that the founder of "psychoanalysis", Sigmund Freud, was essentially a pseudo-scientist. Like many social scientists, he made the major error of extreme extrapolation from his very limited personal life experiences and from extremely limited, biased datasets. Sigmund Freud "proclaimed" a lot of truths about the human condition; for example, Sigmund Fraud is the genius responsible for the notion of "Penis Envy".

In the same century, Einstein would change the face of physics forever after having published the four papers in his miracle year before producing the masterpiece of General Relativity. And, in that same century, incredible progress such that of Gödel's Incompleteness Theorem, Quantum Electrodynamics, the discovery of various biological reaction pathways (e.g. citric acid cycle etc.), and so on and so on would be produced while Sigmund Fraud can be proud of his Penis Envy hypothesis.

u/CrazyStatistician · 10 pointsr/statistics

Bayesian Data Analysis and Hoff are both well-respected. The first is a much bigger book with lots of applications, the latter is more of an introduction to the theory and methods.

u/idroppedmyapple · 10 pointsr/datascience

So off the top of my head, I can’t think of any courses. Here are three books that include exercises, are relatively comprehensive, and explain their material well. They all touch upon basic methods that are good to know, and also how to do analyses with them.

• Hands-On Machine Learning with Scikit-Learn and TensorFlow. General machine learning and intro to deep learning (python) - link

• The Elements of Statistical Learning. Basic statistical modelling (R) link

• Statistical rethinking. Bayesian statistics (R) link - there are lectures to this book as well

But there are many many others.

Then there are plenty of tutorials to python, R or how to handle databases (probably the core programming languages, unless you want to go the GUI route).

u/COOLSerdash · 9 pointsr/statistics
u/statmama · 9 pointsr/statistics

Seconding /u/khanable_ -- most of statistical theory is built on matrix algebra, especially regression. Entry-level textbooks usually use simulations to explain concepts because it's really the only way to get around assuming your audience knows linear algebra.

My Ph.D. program uses Casella and Berger as the main text for all intro classes. It's incredibly thorough, beginning with probability and providing rigorous proofs throughout, but you would need to be comfortable with linear algebra and at least the basic principles of real analysis. That said, this is THE book that I refer to whenever I have a question about statistical theory-- it's always on my desk.

u/siddboots · 9 pointsr/statistics

It is hard to provide a "comprehensive" view, because there's so much disparate material in so many different fields that draw upon probability theory.

Feller is an approachable classic that covers all of the main results in traditional probability theory. It certainly feels a little dated, but it is full of the deep central limit insights that are rarely explained in full in other texts. Feller is rigorous, but keeps applications at the center of the discussion, and doesn't dwell too much on the measure-theoretical / axiomatic side of things. If you are more interested in the modern mathematical theory of probability, try Probability with Martingales.

On the other hand, if you don't care at all about abstract mathematical insights, and just want to be able to use probability theory directly for everyday applications, then I would skip both of the above and look into Bayesian probabilistic modelling. Try Gelman et al.

Of course, there's also machine learning. It draws on a lot of probability theory, but often teaches it in a very different way to a traditional probability class. For a start, there is much more emphasis on multivariate models, so linear algebra is much more central. (Bishop is a good text).

u/AllezCannes · 8 pointsr/statistics

I suggest you read John Kruschke's Doing Bayesian Data Analysis: http://www.amazon.com/Doing-Bayesian-Data-Analysis-Second/dp/0124058884

It's a very approachable read. I myself have very little math background, but you will learn all you need. It is a large book though.

u/tiii · 8 pointsr/econometrics

Both time series and regression are not strictly econometric methods per se, and there are a range of wonderful statistics textbooks that detail them. If you're looking for methods more closely aligned with econometrics (e.g. difference in difference, instrumental variables) then the recommendation for Angrist 'Mostly Harmless Econometrics' is a good one. Another oft-prescribed econometric text that goes beyond Angrist is Wooldridge 'Introductory Econometrics: A Modern Approach'.

For a very well considered and basic approach to statistics up to regression including an excellent treatment of probability theory and the basic assumptions of statistical methodology, Andy Field (and co's) books 'Discovering Statistics Using...' (SPSS/SAS/R) are excellent.

Two excellent all-rounders are Cohen and Cohen 'Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences' and Gelman and Hill 'Data Analysis Using Regression and Multilevel/Hierarchical Modelling' although I would suggest both are more advanced than I am guessing you need right now.

For time series I can recommend Rob Hyndman's book/s on forecasting (online copy freely available)

For longitudinal data analysis I really like Judith Singer's book 'Applied Longitudinal Data Analysis'.

It sounds however as if you're looking for a bit of a book to explain why you would want to use one method over another. In my experience I wanted to know this when I was just starting. It really comes down to your own research questions and the available data. For example I had to learn Longitudinal/fixed/random effects modelling because I had to do a project with a longitudinal survey. Only after I put it into practice (and completed my stats training) did I come to understand why the modelling I used was appropriate.

u/DiogenicOrder · 8 pointsr/statistics

How would you rather split beginner vs intermediate/advanced ?

My feeling was that Ben Lambert's book would be a good intro and that Bayesian Data Analysis would be a good next step?

u/Tallowo · 8 pointsr/Sabermetrics

Analyzing Baseball Data with R

https://www.amazon.com/Analyzing-Baseball-Data-Chapman-Hall/dp/1466570229
Walks you through learning the program using baseball stats as the foundation.

u/raubry · 8 pointsr/math

Practical Algebra: A Self-Teaching Guide
really helped me a couple of years ago when I had to get up to speed on algebra quickly.

Beyond that, you can hardly do better in the best-bang-for-the-buck department than the Humongous Books series. 1000 problems in each book, annotated and explained, and he has an entertaining style.

The Humongous Book of Algebra Problems: Translated for People Who Don't Speak Math

The Humongous Book of Geometry Problems: Translated for People Who Don't Speak Math

The Humongous Book of Calculus Problems: For People Who Don't Speak Math

u/crypto_ha · 7 pointsr/learnmachinelearning

Since you are already going to take Machine Learning and want to build a good statistical foundation, I highly recommend Mathematical Statistics with Applications by Wackerly et al.

u/Blizzarex · 7 pointsr/PhilosophyofScience

If you like logic and the scientific method, I recommend E. T. Jaynes' Probability Theory: The Logic of Science. You can buy it here:
http://www.amazon.com/Probability-Theory-The-Logic-Science/dp/0521592712/

or read a PDF here:
http://shawnslayton.com/open/Probability%2520book/book.pdf

u/unclesaamm · 7 pointsr/math

Your professors really aren't expecting you to reinvent groundbreaking proofs from scratch, given some basic axioms. It's much more likely that you're missing "hints" - exercises often build off previous proofs done in class, for example.

I appreciated Lara Alcock's writings on this, which helped me overcome my fear of studying math in general:
https://www.amazon.com/How-Study-as-Mathematics-Major/dp/0199661316/

https://www.amazon.com/dp/0198723539/ <-- even though you aren't in analysis, the way she writes about approaching math classes in general is helpful

If you really do struggle with the mechanics of proof, you should take some time to harden that skill on its own. I found this to be filled with helpful and gentle exercises, with answers: https://www.amazon.com/dp/0989472108/ref=rdr_ext_sb_ti_sims_2

And one more idea is that it can't hurt for you to supplement what you're learning in class with a more intuitive, chatty text. This book is filled with colorful examples that may help your leap into more abstract territory: https://www.amazon.com/Visual-Group-Theory-Problem-Book/dp/088385757X

u/de_shrike · 7 pointsr/india

It can't be helped; the price is almost the same for the first book on amazon.com, and I assume a similar trend for the others. Hardcovers in general are more expensive due to the higher cost of manufacturing. What you may have observed is that other books have lower-cost Asian editions that make them more affordable in nations with smaller economies, but these research books do not serve such a niche.

What is interesting, though, is that state-of-the-art machine learning is usually not in these books; lately it is simply published in papers and blog posts.

u/InfanticideAquifer · 7 pointsr/math

Anti-disclaimer: I do have personal experience with all the below books.

I really enjoyed Lee for Riemannian geometry, which is highly related to the Lorentzian geometry of GR. I've also heard good things about Do Carmo.

It might be advantageous to look at differential topology before differential geometry (though for your goal, it is probably not necessary). I really really liked Guillemin and Pollack. Another book by Lee is also very good.

If you really want to dig into the fundamentals, it might be worthwhile to look at a topology textbook too. Munkres is the standard. I also enjoyed Gamelin and Greene, a Dover book (cheap!). I thought the introduction to the topology of R^n at the beginning of Bartle was good to have gone through first.

I'm concerned that I don't see linear algebra in your course list. There's a saying "Linear algebra is what separates Mathematicians from everyone else" or something like that. Differential geometry is, in large part, about tensor fields on manifolds, and these are studied by looking at them as elements of a vector space, so I'd say that linear algebra is something you should get comfortable with before proceeding. (It's also great to study it before taking quantum.) I can't really recommend a great book from personal experience here; I learned from poor ones :( .

Also, there are physics GR books that contain semi-rigorous introductions to differential geometry, even if these sections are skipped over in the actual class. Carroll is such a book. If you read the introductory chapter and appendices, you'll know a lot. On the differential topology side of things, there's Schutz, which is a great book for breadth but is pretty material dense. Schwarz and Schwarz is a really good higher level intro to special relativity that introduces the mathematical machinery of GR, but sticks to flat spaces.

Finally, once you have reached the mountain top, there's Hawking and Ellis, the ultimate pinnacle of gravity textbooks. This one doesn't really fall under the anti-disclaimer from above; it sits on my shelf to impress people.

u/Froggerto · 7 pointsr/baseball

https://www.amazon.com/Analyzing-Baseball-Data-Chapman-Hall/dp/1466570229

This book covers everything related to how to get the data (Retrosheet, Lahman's, pitchf/x IIRC) and then how to do a lot of different stuff with R. It's a good place to start. You could probably find it cheaper than that Amazon link though.

u/Nezteb · 7 pointsr/rstats

I'm going to guess this one based on high reviews and a description that mentions R.

u/Robin_Banx · 7 pointsr/learnmath

"Had a very similar path. Decided to bite the bullet and take my school's remedial algebra course, as I cheated through middle and high school and thus knew nothing. Failed remedial algebra and retook it. Now I'm graduating with a math minor and am taking a calc-based probability theory course. Have hope!

1. Find something to motivate you. I was inspired partially by a friend explaining couple of high-level concepts to me. What little I understood sounded fascinating, and I wanted to know more. Part of the reason math can get tough is that there might be no "light at the end of the tunnel" that will reward your hard work.

2. While immersing yourself in cool stuff can be good to keep you motivated, remember to do the "boring parts" too. Unfortunately, not everything can be awesome serendipity. There is no going around the fact that you're going to have to spend some time just going through practice problems. Way past the point when it stops being fun. You need to develop intuitions about certain things in order for the profundity of later things to really sink, and there's no way to do that aside from doing a bunch of problems.

3. Khan Academy's great. Right now they have tons of practice problems too.
I highly recommend this book: http://www.amazon.com/Humongous-Book-Algebra-Problems-Translated/dp/1592577229 Lots of problems broken down step-by-step. Skipped steps are one of the hardest things to deal with when you don't yet have much mathematical knowledge, especially during self-study. Look for other books by the author, W. Michael Kelley.

4. This blog has a lot of useful general study advice: http://calnewport.com/blog/

An interesting take on math and math education, though a little bitter: http://www.maa.org/devlin/LockhartsLament.pdf
Godspeed, sir!"
u/MyMoon0hMyMoon · 7 pointsr/learnmath

Do not enroll in a precalculus class until you have a solid grasp on the foundations of precalculus. Precalculus is generally considered to be the fundamentals required for calculus and beyond (obviously), and a strong understanding of precalculus will serve you well, but in order to do well in precalculus you still need a solid understanding of what comes before, and there is quite a bit.

I do not mean to sound discouraging, but I was tutoring a guy in an adult learning program from about December 2017-July 2018...I helped him with his homework and answered any questions that he had, but when he asked me to really get into the meat of algebra (he needed it for chemistry to become a nurse) I found a precalculus book at the library and asked him to go over the prerequisite chapter and it went completely over his head. Perhaps this is my fault as a tutor, but I do not believe so.

What I am saying is that you need a good foundation in the absolute basics before doing precalculus and I do not believe that you should enroll in a precalculus course ASAP because you may end up being let down and then give up completely. I would recommend pairing Basic Mathematics by Serge Lang with The Humongous Book of Algebra Problems (though any book with emphasis on practice will suffice) and using websites like khanacademy for additional practice problems and instructions. Once you have a good handle on this, start looking at what math courses are offered at your nearest CC and then use your best judgment to decide which course(s) to take.

I do not know how old you are, but if you are anything like me, you probably feel like you are running out of time and need to rush. Take your time and practice as much as possible. Do practice problems until it hurts to hold the pencil.

u/jakemotata · 7 pointsr/OMSCS

If you have problems with probability take the MITx probability class on edX. That is as good as it can get as a EECS probability class. It teaches you tons of stuff but assumes nothing but multivariable calculus from you. If you have time, read Introduction to Probability by the class instructors.

Note the class alone is a huge time sink.

u/xNOM · 6 pointsr/MensRights

> what can I show her so she can be properly informed?

this

EDIT: or a bunch of leftist professors discussing it

u/MillennialScientist · 6 pointsr/politics

> Probability states that either major party candidate had a 50% chance of winning (since there are two running,

Wow. Really? So all of probability theory is wrong then.

I recommend this book:
https://www.amazon.ca/Probability-Dummies-Deborah-J-Rumsey/dp/0471751413

u/schmook · 6 pointsr/MachineLearning

Imagine you have a dataset without labels, but you want to solve a supervised problem with it, so you're going to try to collect labels. Let's say they are pictures of dogs and cats and you want to create labels to classify them.

One thing you could do is the following process:

1. Get a picture from your dataset.
2. Show it to a human and ask if it's a cat or a dog.
3. If the person says it's a cat or dog, mark it as a cat or dog.
4. Repeat.

(I'm ignoring problems like pictures that are difficult to classify or lazy or adversarial humans giving you noisy labels)

That's one way to do it, but is it the most efficient way? Imagine all your pictures are from only 10 cats and 10 dogs. Suppose they are sorted by individual. When you label the first picture, you get some information about the problem of classifying cats and dogs. When you label another picture of the same cat, you gain less information. When you label the 1238th picture from the same cat you probably get almost no information at all. So, to optimize your time, you should probably label pictures from other individuals before you get to the 1238th picture.

How do you learn to do that in a principled way?

Active Learning is a task where instead of first labeling the data and then learning a model, you do both simultaneously, and at each step you have a way to ask the model which example you should manually classify next for it to learn the most. You can then stop when you're satisfied with the results.

You could think of it as a reinforcement learning task where the reward is how much you'll learn for each label you acquire.

The reason why, as a Bayesian, I like active learning, is the fact that there's a very old literature in Bayesian inference about what they call Experiment Design.

Experiment Design is the following problem: suppose I have a physical model of some physical system, and I want to do some measurements to obtain information about the model's parameters. Those measurements typically have control variables that I must set, right? What are the settings for those controls that, if I take measurements at those settings, will give the most information about the parameters?

As an example: suppose I have an electric motor, and I know that its angular speed depends only on the electric tension applied on the terminals. And I happen to have a good model for it: it grows linearly up to a given value, and then it becomes constant. This model has two parameters: the slope of the linear growth and the point where it becomes constant. The first looks easy to determine, the second is a lot more difficult. I'm going to measure the angular speed at a bunch of different voltages to determine those two parameters. The set of voltages I'm going to measure at is my control variable. So, Experiment Design is a set of techniques to tell me what voltages I should measure at to learn the most about the value of the parameters.

I could do Bayesian Iterated Experiment Design. I have an initial prior distribution over the parameters, and use it to find the best voltage to measure at. I then use the measured angular velocity to update my distribution over the parameters, and use this new distribution to determine the next voltage to measure at, and so on.

How do I determine the next voltage to measure at? I have to have a loss function somehow. One possible loss function is the expected value of how much the accuracy of my physical model will increase if I measure the angular velocity at a voltage V, and use it as a new point to adjust the model. Another possible loss function is how much I expect the entropy of my distribution over parameters to decrease after measuring at V (the conditional mutual information between the parameters and the measurement at V).
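To make that iterated loop concrete, here is a minimal Python sketch of entropy-based experiment design for the motor example. Everything numeric in it is an illustrative assumption (the grid prior, the noise level, the "true" parameters, the candidate voltages); it's a toy, not a recipe.

```python
import numpy as np

rng = np.random.default_rng(1)
SIGMA = 0.5  # assumed measurement-noise standard deviation

def speed(volts, slope, knee):
    # Toy motor model: linear in voltage, then constant after the knee.
    return slope * np.minimum(volts, knee)

# Discrete grid standing in for the prior over the two parameters.
slopes = np.linspace(0.5, 2.0, 31)
knees = np.linspace(2.0, 8.0, 31)
S, K = np.meshgrid(slopes, knees)
posterior = np.full(S.shape, 1.0 / S.size)

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def update(p, volts, y):
    # Bayes rule with a Gaussian likelihood around the model prediction.
    post = p * np.exp(-0.5 * ((y - speed(volts, S, K)) / SIGMA) ** 2)
    return post / post.sum()

def expected_entropy(p, volts, n_sim=100):
    # Monte Carlo estimate of E_y[H(posterior | y)] at this voltage.
    flat = p.ravel()
    total = 0.0
    for _ in range(n_sim):
        i = rng.choice(flat.size, p=flat)
        y = speed(volts, S.ravel()[i], K.ravel()[i]) + rng.normal(0, SIGMA)
        total += entropy(update(p, volts, y))
    return total / n_sim

true_slope, true_knee = 1.2, 5.0  # invented ground truth
candidates = np.linspace(1.0, 10.0, 10)
h0 = entropy(posterior)

for _ in range(5):
    # Pick the voltage whose measurement is expected to shrink entropy most.
    best = min(candidates, key=lambda v: expected_entropy(posterior, v))
    y = speed(best, true_slope, true_knee) + rng.normal(0, SIGMA)
    posterior = update(posterior, best, y)
    print(f"measure at V={best:.1f} -> posterior entropy {entropy(posterior):.2f}")
```

Each iteration scores every candidate voltage by the expected entropy of the updated posterior and then measures at the best one; the printed entropy falls as measurements accumulate.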

Active Learning is just iterated experiment design for building datasets. The control variable is which example to label next and the loss function is the negative expected increase in the performance of the model.

So, now your procedure could be:

1. Start with a model to predict whether a picture is a cat or a dog. It's probably a shit model.
2. Take your dataset of unlabeled pictures.
3. Define a function that takes your model and a new unlabeled example and spits out an expected reward if you label this example.
4. Do:
   1. For each example in your current unlabeled set, calculate the reward.
   2. Choose the example that has the biggest reward and label it.
   3. Continue until you're happy with the performance.
5. ????
6. Profit
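That loop can be sketched in a few lines of Python. Here a toy nearest-centroid classifier stands in for the model, and "smallest margin between the two centroid distances" stands in for the reward; the blob data and every constant are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: pictures become 2-D points in two Gaussian blobs
# (class 0 = "cat", class 1 = "dog").
X = np.vstack([rng.normal(-2, 1, size=(100, 2)),
               rng.normal(+2, 1, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

def fit_centroids(X_lab, y_lab):
    return np.array([X_lab[y_lab == c].mean(axis=0) for c in (0, 1)])

def distances(centroids, pts):
    return np.linalg.norm(pts[:, None, :] - centroids[None, :, :], axis=2)

def predict(centroids, pts):
    return distances(centroids, pts).argmin(axis=1)

def reward(centroids, pts):
    # Proxy for expected information: highest when the point is nearly
    # equidistant from both centroids, i.e. most ambiguous.
    d = distances(centroids, pts)
    return -np.abs(d[:, 0] - d[:, 1])

labeled = [0, 100]  # start with one labeled example of each class
unlabeled = [i for i in range(len(X)) if i not in labeled]

for _ in range(20):
    centroids = fit_centroids(X[labeled], y[labeled])
    scores = reward(centroids, X[unlabeled])
    pick = unlabeled.pop(int(scores.argmax()))  # query most ambiguous point
    labeled.append(pick)                        # the human oracle labels it

centroids = fit_centroids(X[labeled], y[labeled])
acc = (predict(centroids, X) == y).mean()
print(f"labeled {len(labeled)}/{len(X)} examples, accuracy = {acc:.2f}")
```

After ~20 queries concentrated on ambiguous points near the boundary, the toy model classifies the full set well without anyone labeling all 200 pictures.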

Or you could be a lot more clever than that and use proper reinforcement learning algorithms. Or you could be even more clever and use "model-independent" (not really...) rewards like the mutual information, so that you don't over-optimize the resulting data set for a single choice of model.

I bet you have a lot of concerns about how to do this properly, how to avoid overfitting, how to have a proper train-validation-holdout sets for cross validation, etc, etc, and those are all valid concerns for which there are answers. But this is the gist of the procedure.

You could do Active Learning and iterated experiment design without ever hearing about bayesian inference. It's just that those problems are natural to frame if you use bayesian inference and information theory.

About the jargon, there's no way to understand it without studying bayesian inference and machine learning in this bayesian perspective. I suggest a few books:

• Information Theory, Inference, and Learning Algorithms, David Mackay - for which you can get a pdf or epub for free at this link.

It's a pretty good introduction to Information Theory and bayesian inference, and how it relates to machine learning. The Machine Learning part might be too introductory if you already know and use ML.

• Bayesian Reasoning and Machine Learning by David Barber - for which you can also get a free pdf here

Some people don't like this book, and I can see why, but if you want to learn how bayesians think about ML, it is the most comprehensive book I think.

• Probability Theory, the Logic of Science by E. T. Jaynes. Free pdf of the first few chapters here.

More of a philosophical book. This is a good book to understand what bayesians find so awesome about bayesian inference, and how they think about problems. It's not a book to take too seriously though. Jaynes was a very idiosyncratic thinker and the tone of some of the later chapters is very argumentative and defensive. Some would even say borderline crackpot. Read the chapter about plausible reasoning, and if that doesn't make you say "Oh, that's kind of interesting...", then never mind. You'll never be convinced of this bayesian crap.

u/M_Bus · 6 pointsr/statistics

Wellll I'm going to speak in some obscene generalities here.

There are some philosophical reasons and some practical reasons that being a "pure" Bayesian isn't really a thing as much as it used to be. But to get there, you first have to understand what a "pure" Bayesian is: you develop reasonable prior information based on your current state of knowledge about a parameter / research question. You codify that in terms of probability, and then you proceed with your analysis based on the data. When you look at the posterior distributions (or posterior predictive distribution), it should then correctly correspond to the rational "new" state of information about a problem because you've coded your prior information and the data, right?

WELL let's touch on the theoretical problems first: prior information. First off, it can be very tricky to code actual prior information into a true probability distribution. This is one of the big turn-offs for frequentists when it comes to Bayesian analysis. "Pure" Bayesian analysis sees prior information as necessarily coming before the data is ever seen. However, suppose you define a "prior" whereby a parameter must be greater than zero, but it turns out that your state of knowledge is wrong? What if you cannot codify your state of knowledge as a prior? What if your state of knowledge is correctly codified but makes up an "improper" prior distribution so that your posterior isn't defined?

Nowadays, Bayesians tend to think of the prior as having several purposes, but they also view it as part of your modeling assumptions - something that must be tested to determine if your conclusions are robust. So you might use a weakly regularizing prior for the purposes of getting a model to converge, or you might look at the effects of a strong prior based on other studies, or the effects of a non-informative prior to see what the data is telling you absent other information. By taking stock of the differences, you can come to a better understanding of what a good prediction might be based on the information available to you. But to a "pure" Bayesian, this is a big no-no because you are selecting the prior to fit together with the data and seeing what happens. The "prior" is called that because it's supposed to come before, not after. It's supposed to codify the current state of knowledge, but nowadays Bayesians see it as serving a more functional purpose.
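A tiny, self-contained illustration of that "treat the prior as a testable modeling assumption" idea, using the conjugate beta-binomial model (the data and the three priors are made up):

```python
# Conjugacy: a Beta(a, b) prior with s successes in n trials gives a
# Beta(a + s, b + n - s) posterior, whose mean is (a + s) / (a + b + n).
def post_mean(a, b, s, n):
    return (a + s) / (a + b + n)

s, n = 7, 10  # invented data: 7 successes in 10 trials
priors = {"flat Beta(1,1)":     (1, 1),
          "weak Beta(2,2)":     (2, 2),
          "strong Beta(20,20)": (20, 20)}
for name, (a, b) in priors.items():
    print(f"{name}: posterior mean = {post_mean(a, b, s, n):.3f}")
```

With only 10 trials, the strong Beta(20,20) prior drags the estimate toward 0.5 while the flat prior stays near the raw proportion; comparing fits like these is exactly the robustness check described above.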

Then there are some practical considerations. As I mentioned before, Bayesian analysis can be very computationally expensive when data sets are large. So in some instances, it's just not practical to go full Bayes. It may be preferable, but it's not practical. So you wind up with some shortcuts. I think that in this sense, modern Bayesians are still Bayesian - they review answers in reference to their theoretical understanding of what is going on with the distributions - but they can be somewhat restricted by the tools available to them.

As always with Bayesian statistics, Andrew Gelman has a lot to say about this. Example here and here and he has some papers that are worth looking into on the topic.

There are probably a lot of other answers. Like, you could get into how to even define a probability distribution and whether it has to be based on sigma algebras or what. Jaynes has some stuff to say about that.

If you want a good primer on Bayesian statistics that has a lot of talking and not that much math (although what math it does have is kind of challenging, I admit, though not unreachable), read this book. I promise it will only try to brainwash you a LITTLE.

u/AnEmptyInkwell · 6 pointsr/math

In all seriousness, the applications of analysis to geometry can be really interesting and insightful, but to get to them you would have to first have background in differential topology, which it seems you lack. That might be a good subject to start with. A good book would be John Lee's An Introduction to Smooth Manifolds.

u/SupportVectorMachine · 5 pointsr/statistics

A very user-friendly treatment that hits every criterion you mention is John Kruschke's Doing Bayesian Data Analysis, Second Edition.

u/cannonballism · 5 pointsr/math
u/clarinetist001 · 5 pointsr/statistics

If you are really good at calculus, learn some probability first. My personal favorite is Wackerly et al.'s Mathematical Statistics with Applications. This covers both the probability and mathematical stats background that you will see in college. The book is quite pricey, so I recommend buying it on half (dot) com.

You might notice that this text has a lot of negative reviews. This review of the above text explains the prerequisites quite well - this is not an AP-stats type of textbook:

> I believe that this book is designed to teach statistics to those who plan on actually using it professionally (and not just to pass a required course) while continuing to develop one's own mathematical maturity. While Wackerly is not as rigorous as Ross's Probability book, it is taught at a completely different level than the non-calculus-based statistics courses that are often taken by students who simply want to know which formula to use for the exam. I think of it as the ideal text for anyone in the sciences, engineering, or economics. The level of rigor is similar to the 2 Calculus courses online at MIT-Open Course Ware.
>
> [...]
>
> this book derives virtually every formula, allowing students to continue to develop their mathematical maturity which will be required for higher-level courses on bootstrapping, pattern recognition, statistical learning, etc. In order to follow these proofs (and also in order to solve problems from about 4 chapters) one must have a firm grasp of calculus. That not only means that one can integrate, differentiate, work with series, use L'Hôpital's rule and integration-by-parts, but also that one understands the concepts of calculus very well.
>
> The proofs are all broken down so as to not really skip many steps, but as someone away from math for over 25 years, I must write down each step myself and make sure that I understand it before moving on. If a few steps are skipped, I must connect the dots myself using plenty of scratch paper. My math background was the Calculus series, ordinary differential equations, and linear algebra. About 1/5 of all problems are proof-based.
>
> I will concede that you can't come at this book without an understanding of at least integral calculus (and since so many people get turned off by Algebra, well...), so I suspect a lot of the negative reviews here are written by people who jumped in the deep end of the pool without having a few swimming lessons. If you know the calculus and basic set theory, the book is exceedingly easy to follow.

Some of what you learned in AP Stats will transfer to calculus-based statistics, but a lot of what you learn in your undergrad will not be like anything you learned in AP Stats. Hence I'm recommending that you start from scratch on probability.

Generally speaking, I agree with /u/Akillees89 that you should get a head start in developing your math background. However, I don't agree that Strang or Axler are good for linear algebra for statistics. See my post here.

u/jonnydedwards · 5 pointsr/math

Bayes is the way to go: Ed Jaynes' text Probability Theory is fundamental and a great read. Free chapter samples are here. Slightly off topic, David Mackay's free text is also wonderfully engaging.

u/shujaa-g · 5 pointsr/statistics

You're pretty good when it comes to linear vs. generalized linear models--and the comparison is the same regardless of whether you use mixed models or not. I don't agree at all with your "Part 3".

My favorite reference on the subject is Gelman & Hill. That book prefers the terminology of "pooling", and considers models that have "no pooling", "complete pooling", or "partial pooling".

One of the introductory datasets is on Radon levels in houses in Minnesota. The response is the (log) Radon level, the main explanatory variable is the floor of the house the measurement was made on: 0 for basement, 1 for first floor, and there's also a grouping variable for the county.

Radon comes out of the ground, so, in general, we expect (and see in the data) basement measurements to have higher Radon levels than ground floor measurements, and based on varied soil conditions, different overall levels in different counties.

We could fit 2 fixed-effect linear models. Using R formula pseudocode, they are:

1. radon ~ floor
2. radon ~ floor + county (county as a fixed effect)

The first is the "complete pooling" model. Everything is grouped together into one big pool. You estimate two coefficients. The intercept is the mean value for all the basement measurements, and your "slope", the floor coefficient, is the difference between the ground floor mean and the basement mean. This model completely ignores the differences between the counties.

The second is the "no pooling" estimate, where each county is in its own little pool by itself. If there are k counties, you estimate k + 1 coefficients: one intercept--the mean value in your reference county, one "slope", and k - 1 county adjustments which are the differences between the mean basement measurements in each county and the reference county.

Neither of these models is great. The complete pooling model ignores any information conveyed by the county variable, which is wasteful. A big problem with the second model is that there's a lot of variation in how sure we are about individual counties. Some counties have a lot of measurements, and we feel pretty good about their levels, but some of the counties only have 2 or 3 data points (or even just 1). What we're doing in the "no pooling" model is taking the average of however many measurements there are in each county, even if there are only 2, and declaring that to be the radon level for that county. Maybe Lincoln County has only two measurements, and they both happen to be pretty high, say 1.5 to 2 standard deviations above the grand mean. Do you really think that this is good evidence that Lincoln County has exceptionally high Radon levels? Your model does; its fitted line goes straight between the two Lincoln County points, 1.75 standard deviations above the grand mean. But maybe you're thinking "that could just be a fluke. Flipping a coin twice and seeing two heads doesn't mean the coin isn't fair, and having only two measurements from Lincoln County and they're both on the high side doesn't mean Radon levels there are twice the state average."

Enter "partial pooling", aka mixed effects. We fit the model radon ~ floor + (1 | county). This means we'll keep the overall fixed effect for the floor difference, but we'll allow the intercept to vary with county as a random effect. We assume that the intercepts are normally distributed, with each county being a draw from that normal distribution. If a county is above the statewide mean and it has lots of data points, we're pretty confident that the county's Radon level is actually high, but if it's high and has only two data points, they won't have the weight to pull up the measurement. In this way, the random effects model is a lot like a Bayesian model, where our prior is the statewide distribution, and our data is each county.

The only parameters that are actually estimated are the floor coefficient, and then the mean and SD of the county-level intercept. Thus, unlike the complete pooling model, the partial pooling model takes the county info into account, but it is far more parsimonious than the no pooling model. If we really care about the effects of each county, this may not be the best model for us to use. But, if we care about general county-level variation, and we just want to control pretty well for county effects, then this is a great model!

Of course, random effects can be extended to more than just intercepts. We could fit models where the floor coefficient varies by county, etc.
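The shrinkage behavior described above can be sketched numerically. This is a simplified partial-pooling calculation with the within- and between-county variances assumed known (in lme4 or Stan they would be estimated); all the numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated radon-style data: three big counties, two tiny ones.
county_sizes = [50, 40, 30, 3, 2]
true_means = [1.0, 1.2, 0.8, 1.1, 0.9]
data = [rng.normal(m, 0.5, size=n) for m, n in zip(true_means, county_sizes)]

sigma2 = 0.25  # assumed within-county variance (known, for illustration)
tau2 = 0.05    # assumed between-county variance (known, for illustration)

grand = np.concatenate(data).mean()  # the "complete pooling" estimate
weights, no_pool, partial = [], [], []
for obs in data:
    n = len(obs)
    ybar = obs.mean()                # the "no pooling" estimate
    w = (n / sigma2) / (n / sigma2 + 1 / tau2)
    weights.append(w)
    no_pool.append(ybar)
    # Partial pooling: a precision-weighted compromise, shrinking the
    # county mean toward the grand mean.
    partial.append(w * ybar + (1 - w) * grand)

for n, a, b in zip(county_sizes, no_pool, partial):
    print(f"n={n:2d}  no-pool={a:5.2f}  partial={b:5.2f}  complete={grand:5.2f}")
```

The two tiny counties get small weights, so their estimates are pulled strongly toward the grand mean, while the counties with 30-50 observations keep estimates close to their own averages - exactly the "two measurements don't prove Lincoln County is special" intuition.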

Hope this helps! I strongly recommend checking out Gelman and Hill.
u/klaxion · 5 pointsr/statistics

Recommendation - don't learn statistics through "statistics for biology/ecology".

Go straight to statistics texts, the applied ones aren't that hard and they usually have fewer of the lost-in-translation errors (e.g. the abuse of p-values in all of biology).

Try Gelman and Hill -

http://www.amazon.com/Analysis-Regression-Multilevel-Hierarchical-Models/dp/052168689X/ref=sr_1_1?ie=UTF8&qid=1427768688&sr=8-1&keywords=gelman+hill

Faraway - Practical Regression and Anova using R (free)

http://cran.r-project.org/doc/contrib/Faraway-PRA.pdf

Categorical data analysis

http://www.amazon.com/Categorical-Data-Analysis-Alan-Agresti/dp/0470463635/ref=sr_1_1?ie=UTF8&qid=1427768746&sr=8-1&keywords=categorical+data+analysis

u/ffualo · 5 pointsr/askscience

For mathematical statistics: Statistical Inference.

Bioinformatics and Statistics: Statistical Methods in Bioinformatics.

R: R in a Nutshell.

Edit: The Elements of Statistical Learning (free PDF!!)

ESL is a great book, but it can get very difficult very quickly. You'll need a solid background in linear algebra to understand it. I find it delightfully more statistical than most machine learning books. And the effort in terms of examples and graphics is unparalleled.

u/mkat5 · 5 pointsr/math

In the lead-up to calc, the first thing you want to do is just make sure your algebra skills are pretty solid. A lot of people neglect it and then find the course to be harder than it needed to be, because you really use algebra throughout.

Beyond that, if you want an extra book to study with and get practice problems from, The Calculus Lifesaver is a big book of calculus you can use from now through a first-year college calculus course. If you do get it, don't worry about reading the whole thing from cover to cover, or doing all of the problems in it. It is a big book for a reason; it definitely covers more than you need to know for now, so don't get overwhelmed, it all comes with time.

Best of luck

u/kypronite · 5 pointsr/learnprogramming

I highly recommend this book for learning calculus.
I faced the same problem as yours with calculus and this book helped me a lot.

u/gopher9 · 5 pointsr/musictheory

Lewin's book is... interesting. It's a rare kind of music theory book which involves some actual math (with absolutely abhorrent typesetting). But it's actually quite straightforward if you know some basic group theory.

So I recommend taking a look at the Visual Group Theory lectures on youtube: https://www.youtube.com/watch?v=UwTQdOop-nU&list=PLwV-9DG53NDxU337smpTwm6sef4x-SCLv. By the way, there's also a book with the same name.

u/beanscad · 5 pointsr/learnmath

https://www.amazon.com/Visual-Group-Theory-Problem-Book/dp/088385757X

I'm not much of a visual person so I didn't learn much from skimming it, but many people stand by it and it's a beautiful exposition (of a topic I also think is beautiful) indeed!

u/skier_scott · 5 pointsr/math

As everyone else is saying, Gilbert Strang's book. He also has a [great course][1] on MIT's OCW.

[1]:http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/index.htm

u/timshoaf · 5 pointsr/education

> Elementary Statistics
http://ftp.cats.com.jo/Stat/Statistics/Elementary%20Statistics%20%20-%20Picturing%20the%20World%20(gnv64).pdf

Presuming you mean this book, I am still at an absolute loss to understand how you think this doesn't somehow require algebra as a prerequisite.

All the manipulations of Gaussian distributions, Student's t distributions, binomial distributions, etc., or even the bit on regression, right there on page 502: how is that not algebra? It literally makes reference to the general form of a line in 2-space. Are they just expected to memorize those outright with no regard to their derivation?

How do you treat topics like expected value? Because it seems that right there on page 194 they've given the general algebraic formula for a discrete, real-valued random variable.

They seem to elide the treatment of continuous random variables. So I presume they won't even be going through the exercise of the mean of a Poisson.

All of that granted, this book still heavily relies on the ability to perform algebraic manipulations. Right there on page 306 is the very z-score transform I explicitly mentioned earlier.
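For reference, the two formulas that argument leans on (the discrete expected value and the z-score transform, as paraphrased here) really are plain algebra:

```latex
\mathbb{E}[X] = \sum_i x_i \, p(x_i),
\qquad
z = \frac{x - \mu}{\sigma}
\;\Longleftrightarrow\;
x = \mu + z\sigma
```

Solving the z-score for x is exactly the kind of algebraic rearrangement at issue.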

As far as where I teach, I don't, excepting the odd lecture to clients or coworkers. Typically, however, our domain does not fit prettily into the packaged up parameterized distributions of baccalaureate statistics. We deal in a lot of probabilistic graphical models, in manifold learning, in non-parametrics, etc.

The books I recommend to my audience (which is quite different than those who haven't a basic grasp on algebra) are:

u/Banach-Tarski · 5 pointsr/math

Hey I'm a physics BSc turned mathematician.

I would suggest starting with topology and functional analysis. Functional analysis is the foundation of quantum mechanics, and topology is necessary to properly understand manifolds, which are the foundation of relativity.

I would suggest Kreyszig for functional analysis. It's probably the most gentle functional analysis book out there.

For topology, I would suggest John Lee. This topology text is unique because it teaches general topology with a view towards manifolds. This makes it ideal for a physicist. If you want to know about Lie algebras and Lie groups, the sequel to this text discusses them.

u/Bambo222 · 5 pointsr/OMSCS

I can offer my two cents. I’m a Googler who uses machine learning to detect abuse, where my work is somewhere between analyst and software engineer. I’m also 50% done through the OMSCS program. Here’s what I’ve observed:

Yes, Reinforcement Learning, Computer Vision, and Machine Learning are 100% relevant for a career in data science. But data science is vague; it means different things depending on the company and role. There are three types of data science tasks and each specific job may be weighted more heavily in one of these three directions: (1) data analytics, reporting, and business intelligence focused, (2) statistical theory and model prototyping focused and (3) software engineering focused by launching models into production, but with less emphasis on statistical theory.

I've had to do a bit of all three types of work. The two most important aspects are (1) defining your problem as a data science/machine learning problem, and (2) launching the thing in a distributed production environment.

If you already have features and labeled data, you should be able to get a sense of what model you want to use within 24 hours on your laptop based on a sample of the data (this can be much much harder when you can't actually sample the data before you build the prod job because the data is already distributed and hard to wrangle). Getting the data, ensuring it represents your problem, and ensuring you have processes in place to monitor, re-train, evaluate, and manage FPs/FNs will take a vast majority of your time. Read this paper too: https://papers.nips.cc/paper/5656-hidden-technical-debt-in-machine-learning-systems.pdf

Academic classes will not teach you how to do this in a work environment. Instead, expect them to give you a toolbox of ideas to use, and it’s up to you to match the tool with the problem. Remember that the algorithm will just spit out numbers. You'll need to really understand what's going on, and what assumptions you are making before you use each model (e.g. in real life few random variables are nicely gaussian).

I do use a good amount of deep learning at work. But try not to - if a logistic regression or gradient boosted tree works, then use it. Else, you will need to fiddle with hyper parameters, try multiple different neural architectures (e.g. with time series prediction, do you start with a CNN with attention? CNN for preprocessing then DNN? LSTM-Autoencoder? Or LSTM-AE + Deep Regressor, or classical VAR or SARIMAX models...what about missing values?), and rapidly evaluate performance before moving forward. You can also pick up a deep learning book or watch Stanford lectures on the side; first have the fundamentals down. There are many, many ways you can re-frame and tackle the same problem. The biggest risk is going down a rabbit hole before you can validate that your approach will work, and wasting a lot of time and resources. ML/Data Science project outcomes are very binary: it will work well or it won’t be prod ready and you have zero impact.

I do think the triple threat of academic knowledge for success in this area would be graduate level statistics, computer science, and economics. I am weakest in theoretical statistics and really need to brush up on bayesian stats (https://www.amazon.com/Statistical-Rethinking-Bayesian-Examples-Chapman/dp/1482253445). But 9/10 times a gradient boosted tree with good features (it's all about representation) will work, and getting it in prod plus getting in buy-in from a variety of teams will be your bottleneck. In abuse and fraud; the distributions shift all the time because the nature of the problem is adversarial, so every day is interesting.

u/intangiblemango · 5 pointsr/AcademicPsychology

One of the post-docs in my department did his dissertation with Bayesian stats and he essentially had to teach himself! He strongly recommended this as a place to start if you are interested in that topic -- https://www.amazon.com/gp/product/1482253445/ref=oh_aui_detailpage_o09_s00?ie=UTF8&psc=1 (I have not read it yet.)

One of our computer science profs teaches Bayes for the CS folks and said he would be willing to put together a class for psych folks in conjunction with some other people, so that's a place where I am hoping to develop some competency at some point. I strongly recommend reaching outside of your department, especially if you are at a larger university!

u/Tabuhli · 5 pointsr/learnmath

I really believe that W. Michael Kelley's "Humongous Book of" series are the best resources for getting through all math classes up to Calculus II. These books contain every single type of problem you will ever encounter in Algebra I & II, Geometry, Trig, and Calc I & II, all solved in great detail. They are like Schaum's Outlines but much more reliable.

https://www.amazon.com/Humongous-Basic-Pre-Algebra-Problems-Books/dp/1615640835

https://www.amazon.com/Humongous-Book-Algebra-Problems-Books/dp/1592577229

https://www.amazon.com/Humongous-Book-Geometry-Problems-Books/dp/1592578640

https://www.amazon.com/Humongous-Book-Trigonometry-Problems-Comprehensive/dp/1615641823

https://www.amazon.com/Humongous-Book-Calculus-Problems-Books/dp/1592575129

u/marmle · 4 pointsr/statistics

The short version is that in a bayesian model your likelihood is how you're choosing to model the data, aka P(x|\theta) encodes how you think your data was generated. If you think your data comes from a binomial, e.g. you have something representing a series of success/failure trials like coin flips, you'd model your data with a binomial likelihood. There's no right or wrong way to choose the likelihood; it's entirely based on how you, the statistician, think the data should be modeled. The prior, P(\theta), is just a way to specify what you think \theta might be beforehand, e.g. if you have no clue in the binomial example what your rate of success might be, you put a uniform prior over the unit interval. Then, assuming you understand Bayes' theorem, we find that we can estimate the parameter \theta given the data by calculating P(\theta|x)=P(x|\theta)P(\theta)/P(x). That is the entire bayesian model in a nutshell. The problem, and where mcmc comes in, is that given real data, the way to calculate P(x) is usually intractable, as it amounts to integrating or summing over P(x|\theta)P(\theta), which isn't easy when you have multiple data points (since P(x|\theta) becomes \prod_{i} P(x_i|\theta)). You use mcmc (and other approximate inference methods) to get around calculating P(x) exactly. I'm not sure where you've learned bayesian stats from before, but for gaining intuition (which it seems is what you need) I've heard good things about Statistical Rethinking (https://www.amazon.com/Statistical-Rethinking-Bayesian-Examples-Chapman/dp/1482253445); the author's website includes more resources, including his lectures. Doing Bayesian Data Analysis (https://www.amazon.com/Doing-Bayesian-Data-Analysis-Second/dp/0124058884/ref=pd_lpo_sbs_14_t_1?_encoding=UTF8&psc=1&refRID=58357AYY9N1EZRG0WAMY) also seems to be another beginner-friendly book.
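To see why MCMC sidesteps P(x), here is a minimal Metropolis sampler for the coin-flip example above (uniform prior, 7 successes in 10 trials; all numbers invented). It only ever evaluates P(x|\theta)P(\theta), never the normalizing constant, yet its samples recover the exact conjugate answer:

```python
import math
import random

random.seed(0)
s, n = 7, 10  # invented data: 7 successes in 10 trials

def log_post(theta):
    # log of P(x|theta) * P(theta) with a uniform prior -- the
    # intractable normalizer P(x) is deliberately never computed.
    if not 0 < theta < 1:
        return -math.inf
    return s * math.log(theta) + (n - s) * math.log(1 - theta)

theta, samples = 0.5, []
for _ in range(20000):
    prop = theta + random.gauss(0, 0.1)  # random-walk proposal
    accept_prob = math.exp(min(0.0, log_post(prop) - log_post(theta)))
    if random.random() < accept_prob:
        theta = prop
    samples.append(theta)

mcmc_mean = sum(samples[2000:]) / len(samples[2000:])  # drop burn-in
exact = (1 + s) / (2 + n)  # conjugate Beta(1+s, 1+n-s) posterior mean
print(f"MCMC mean = {mcmc_mean:.3f}, exact = {exact:.3f}")
```

Because the acceptance ratio divides one unnormalized posterior value by another, P(x) cancels out, which is the whole trick.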

u/NotherDayAnotherDoug · 4 pointsr/wholesomememes

What I love about David Foster Wallace is how profound his writing is across all topics. Whether he's writing about tennis, or exploring deep mathematical + philosophical concepts, it's always incredibly insightful. If anyone reading this enjoys the above quote and wants to read more DFW, but doesn't wish to attempt the intimidating tome that is Infinite Jest, I recommend wetting one's feet with his magazine article Consider the Lobster (footnotes on page 8, they're important).

u/PillarOfIce · 4 pointsr/Warframe

No the drop isn't broken. Here, this should help.

u/brmj · 4 pointsr/probabilitytheory

I'm only part way through it myself, but here's one I've been recomended in the past that I've been enjoying so far:

Probability Theory: The Logic of Science by E.T. Jaynes

http://www.amazon.com/Probability-Theory-The-Logic-Science/dp/0521592712

http://omega.albany.edu:8008/JaynesBook.html

The second link only appears to have the first three chapters in pdf (though it has everything as postscript files), but I would be shocked if you couldn't easily find a free pdf of the whole thing online with a quick search.

u/lykonjl · 4 pointsr/statistics

Jaynes: Probability Theory. Perhaps 'rigorous' is not the first word I'd choose to describe it, but it certainly gives you a thorough understanding of what Bayesian methods actually mean.

u/gmarceau · 4 pointsr/science
u/placemirror · 4 pointsr/statistics

Try the two:

https://www.amazon.com/Introduction-Mathematical-Statistics-Robert-Hogg/dp/0321795431

https://www.amazon.com/Statistical-Inference-George-Casella/dp/0534243126

introduction to mathematical statistics by craig and statistical inference by george casella.

u/RAPhisher · 4 pointsr/statistics

In addition to linear regression, do you need a reference for future use/other topics? Casella/Berger is a good one.

For linear regression, I really enjoyed A Modern Approach to Regression with R.

u/dustlesswalnut · 4 pointsr/todayilearned

Innumeracy

u/NeverACliche · 4 pointsr/math

&gt; I'm hoping for something like what Div, Grad, Curl and All That does for Vector Calculus.

Is that a math text? I am not really familiar with it, but from what I heard it sounds more like a physics/engineering text. Does it have any formal proofs in it?

You won't be able to get too far with a proofless(?) Abstract Algebra text if there exists one to begin with. Even Charles Pinter's A Book of Abstract Algebra presupposes some degree of mathematical maturity.

Anyway, try these and see if you like them:

Visual Group Theory by Nathan Carter

Learning Modern Algebra: From Early Attempts to Prove Fermat's Last Theorem by Al Cuoco, Joseph J. Rotman

u/Aeschylus_ · 4 pointsr/Physics

Your English is great.

I'd like to reemphasize /u/Plaetean's great suggestion of learning the math. That's so important and will make your later career much easier. Khan Academy seems to go all the way through differential equations. All of the more advanced topics they have (differential and integral calculus of a single variable, multivariable calculus, ordinary differential equations, and linear algebra) are very useful in physics.

As to textbooks that cover that material, I've heard Div, Grad, Curl for multivariable/vector calculus is good, as is Strang for linear algebra. Purcell, an introductory E&amp;M text, also has an excellent discussion of the curl.

As for introductory physics I love Purcell's E&amp;M. I'd recommend the third edition to you as although it uses SI units, which personally I dislike, it has far more problems than the second, and crucially has many solutions to them included, which makes it much better for self study. As for Mechanics there are a million possible textbooks, and online sources. I'll let someone else recommend that.

u/link2dapast · 4 pointsr/statistics

I’d recommend Blitzstein’s Intro to Probability book; it’s the book used for Harvard’s Stat 110, which has free lectures online as well.

https://www.amazon.com/Introduction-Probability-Chapman-Statistical-Science/dp/1466575573

u/jacobcvt12 · 3 pointsr/rstats

Both JAGS and BUGS use the same language and can perform very similar operations. JAGS is more portable across operating systems, so for that reason, I would suggest JAGS (BUGS is generally limited to Windows). However, documentation/blog posts/forum posts (which exist in abundance!) for both languages will generally work for either tool. If you are looking for a textbook, Doing Bayesian Data Analysis provides a nice introduction to both bayesian statistics as well as JAGS.

Outside of JAGS/BUGS, there exists another similar language for performing Bayesian statistics called Stan (also described in the above book). Stan is newer, and often times will "run faster" than JAGS, however it does not directly support as many types of analyses.

My advice would be to learn JAGS while simultaneously learning the basics of Bayesian methods. Once you understand the basics of JAGS, try exploring Stan!

u/benEggers · 3 pointsr/mathematics

My pleasure :\^) It's hard to say what a local community college would have, since courses seem to vary a lot from school to school. The best thing you could find would probably be a class on something like "Set Theory" or "Mathematical Thinking" (those usually tend to touch on subjects like this without being pathologically rigorous), but a course in Discrete Math could do the trick, since you often talk about counting which leads naturally to countable vs uncountable sets. If you really want to learn the hardcore math, a course in Real Analysis is what you want. And if you don't know where to begin or are too busy, I can't recommend this book enough: http://www.amazon.com/Everything-More-Compact-History-Infinity/dp/0393339289. It's DFW so you know it's good ;)

I'm actually an undergrad studying Computer Science and Math but yes, I plan to end up a teacher after some other sort of career. Feel free to PM me if you have any more questions.

u/vmsmith · 3 pointsr/statistics

I dove into this stuff almost two years ago with very little preparation or background. Now I'm in an MS program for Applied Statistics, and doing quite well. Here are some tips that worked for me:

• If you don't have time to back up and regroup, check out Khan Academy, and this guy's YouTube videos. These can help with specific concepts.

• If you have time to back up and regroup, check out Coursera, Udacity, EdX, and the other MOOCs. Coursera in particular has some very good courses dealing with statistics.

• Take a look at Statistics for Dummies and Naked Statistics.

• Use Reddit and StackOverflow. But use them wisely, and only after you've exhausted other means.

Good luck.
u/kuvir · 3 pointsr/worldnews

They are probably a bit advanced but should at least help you understand something about those fancy numbers.

https://www.amazon.com/Probability-Dummies-Deborah-J-Rumsey/dp/0471751413
https://www.amazon.com/Statistics-Dummies-Lifestyle-Deborah-Rumsey/dp/1119293529

u/belarius · 3 pointsr/statistics

Casella &amp; Berger is the go-to reference (as Smartless has already pointed out), but you may also enjoy Jaynes. I'm not sure I'd say it's quick but if gaps are your concern, it's pretty drum-tight.

u/MohKohn · 3 pointsr/math

it most certainly is! There's a whole approach to statistics based around this idea of updating priors. If you're feeling ambitious, the book Probability theory by Jaynes is pretty accessible.

u/vyaas · 3 pointsr/math

If you can find this at your library, I suggest you pore over it on the weekend. You will not regret it.

u/complexsystems · 3 pointsr/econometrics

The important part of this question is what do you know? By saying you're looking to learn "a little more about econometrics," does that imply you've already taken an undergraduate economics course? I'll take this as a given if you've found /r/econometrics. So this is a bit of a look into what a first year PhD section of econometrics looks like.

My 1st year PhD track has used
-Casella &amp; Berger for probability theory, understanding data generating processes, basic MLE, etc.

-Greene and Hayashi for Cross Sectional analysis.

-Enders and Hamilton for Time Series analysis.

These offer a more mathematical treatment of topics taught in, say, Stock &amp; Watson, or Wooldridge's Introductory Econometrics. C&amp;B will focus more on probability theory without bogging you down in measure theory, which will give you a working knowledge of probability theory sufficient for 99% of applied problems. Hayashi or Greene will mostly cover what you see in an undergraduate class (especially Greene, which is a go-to reference). Hayashi focuses a bit more on the generalized method of moments, but I find its exposition better than Greene's. And I honestly haven't looked at Enders or Hamilton yet, but they will cover forecasting, autoregressive moving average problems, and how to solve them with econometrics.

It might also be useful to download and practice with either R, a statistical programming language, or Python with the numpy library. Python is a very general programming language that's easy to work with, and numpy turns it into a powerful mathematical and statistical work horse similar to Matlab.
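To make that concrete before reaching for R or numpy, the workhorse of cross-sectional econometrics, OLS, fits in a few lines. A minimal sketch with made-up data, assuming a simple regression of y on a single regressor x:

```python
# Made-up cross-sectional data: hours studied (x) vs. exam score (y).
x = [1, 2, 3, 4, 5]
y = [52, 55, 61, 64, 68]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

# OLS estimates from the normal equations:
#   beta1 = sum((x_i - x_bar)(y_i - y_bar)) / sum((x_i - x_bar)^2)
#   beta0 = y_bar - beta1 * x_bar
s_xy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
s_xx = sum((xi - x_bar) ** 2 for xi in x)
beta1 = s_xy / s_xx
beta0 = y_bar - beta1 * x_bar

print(beta0, beta1)  # beta0 ≈ 47.7, beta1 = 4.1 for this toy data
```

Greene's matrix formulation, (X'X)^{-1}X'y, reduces to exactly this in the one-regressor case.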

u/jmcq · 3 pointsr/statistics

I was an Actuary (so I took the Financial Engineering exams) before I went back to get my PhD in Statistics. If you're familiar with:

• Real Analysis (limits, convergence, continuity etc)
• Basic Probability (Random variables, discrete vs. continuous, expectation, variance)
• Multivariate Calculus

You should be fine in a PhD stats program. It's easy enough to learn the statistics but harder to learn the math (specifically you're going to want strong analysis and calculus skills).

Check out Statistical Inference - Casella &amp; Berger it's a pretty standard 1st year theory text in Statistics, flip through the book and see how challenging the material looks to you. If it seems reasonable (don't expect to know it -- this is stuff you're going to learn!) then you ought to be fine.
u/PandaHuggers · 3 pointsr/AskStatistics

This is a classic. I took a grad level course with this textbook and every problem is nasty. But yes, it is really a classic.

Also, I just began Data Analysis Using Regression and Multilevel/Hierarchical Models by Andrew Gelman and Jennifer Hill. Love his interpretation of linear regression. Linear regression might sound like basics, but it lays the foundation for everything else, and from time to time I feel compelled to review it. This book gave me a new way to look at a familiar topic.

If you are familiar with any statistical programming language/packages, I would highly suggest you implement the learnings from any books you have.

u/BayesianPirate · 3 pointsr/AskStatistics

Beginner Resources: These are fantastic places to start for true beginners.

Introduction to Probability is an oldie but a goodie. This is a basic book about probability that is suited for the absolute beginner. It's written in an older style of English, but other than that it is a great place to start.

Bayes Rule is a really simple, really basic book that shows only the most basic ideas of bayesian stats. If you are completely unfamiliar with stats but have a basic understanding of probability, this book is pretty good.

A Modern Approach to Regression with R is a great first resource for someone who understands a little about probability but wants to learn more about the details of data analysis.


Advanced resources: These are comprehensive, quality, and what I used for a stats MS.

Statistical Inference by Casella and Berger (2nd ed) is a classic text on maximum likelihood, probability, sufficiency, large sample properties, etc. It's what I used for all of my graduate probability and inference classes. It's not really beginner-friendly and sometimes goes into too much detail, but it's a really high-quality resource.

Bayesian Data Analysis (3rd ed) is a really nice resource/reference for bayesian analysis. It isn't a "cuddle up by a fire" type of book since it is really detailed, but almost any topic in bayesian analysis will be there. Although its not needed, a good grasp on topics in the first book will greatly enhance the reading experience.

u/animalcrossing · 3 pointsr/cscareerquestions

You received A's in your math classes at a major public university, so I think you're in pretty good shape. That being said, have you done proof-based math? That may help tremendously in giving intuition because with proofs, you are giving rigor to all the logic/theorems/formulas, etc. that you've seen in your previous math classes.

Statistics will become very important in machine learning. So, a proof-based statistics book, that has been frequently recommended by /r/math and /r/statistics is Statistical Inference by Casella &amp; Berger: https://www.amazon.com/Statistical-Inference-George-Casella/dp/0534243126

I've never read it myself, but skimming through some of the beginning chapters, it seems pretty solid. That being said, you should have an intro to proof-course if you haven't had that. A good book for starting proofs is How to Prove It: https://www.amazon.com/How-Prove-Structured-Approach-2nd/dp/0521675995

u/trijazzguy · 3 pointsr/statistics

See Casella and Berger chapter 2, theorem 2.1.5

u/CodeNameSly · 3 pointsr/statistics

Casella and Berger is one of the go-to references. It is at the advanced undergraduate/first year graduate student level. It's more classical statistics than data science, though.

Good statistical texts for data science are Introduction to Statistical Learning and the more advanced Elements of Statistical Learning. Both of these have free pdfs available.

u/freireib · 3 pointsr/math

Disclaimer: I'm an engineer, not a mathematician, so take my advice with a grain of salt.

Early in my grad degree I wanted to master probability and improve my understanding of statistics. The books I used, and loved, are

DeGroot, Probability and Statistics

Rozanov, Probability Theory: A Concise Course

The first is organized very well, with ever-increasing difficulty and a good number of solved problems. I also appreciate that as things start to get complicated, he always bridges everything back to earlier concepts. The book also basically does everything Bayesian and frequentist side by side, so you get a really good idea of the comparison and the arbitrariness.

The second is a good cheap short book basically full of examples. It has just enough math flavor to be mathier, without proofing me to death.

Also, if you're really just jumping into the subject, I would recommend some pop culture math books too, e.g.,

Paulos, Innumeracy

Mlodinow, The Drunkards Walk

Have fun!

u/Galphanore · 3 pointsr/atheism

It is detailed. It just doesn't seem logical to me. His entire position is that since the odds against things being the way they are are so high, there must be a god that arranged them. It's a fundamental misunderstanding of probability. The chance of things being the way they are is actually 100%, because they are that way. We don't know how likely they were before they happened because we only have one planet and one solar system to examine. For all we know there could be life in most solar systems.

Even if that wasn't the case, even if we did have enough information to actually conclude that our existence here and now has a .00000000000000000000000000000001% chance of happening he then makes the even more absurd jump to saying "there being a God is the only thing that makes sense". God, especially the Christian God, is even less likely than the already unlikely chance of us existing at all. If it's extremely unlikely that we could evolve into what we are naturally, how is it less unlikely that an all-powerful, all-knowing, all-good being could exist for no discernible reason?

You should get him a copy of this book. It's great and will help him with these misconceptions. If you haven't read it, I highly suggest you do as well.

I think this book, Innumeracy, does a good job at explaining the odds of these kind of things happening to people.

u/magnanimous_xkcd · 3 pointsr/books

Innumeracy is pretty entertaining (and useful), even if you're not a math person. It's only about 150 pages, so it's a quick read.

u/pinxox · 3 pointsr/learnmath

As long as you have a solid foundation in algebra (and basic trig), you should be fine. However, you have to put in the study time. If you want supplementary material, I'd recommend The Calculus Lifesaver, which was a tremendous help for me, although it only covers single-variable calculus (i.e., Calc I and II). The cool thing about this book is that its author (a Princeton University professor) also has video lectures posted online.

u/Airballp · 3 pointsr/princeton

The single best resource for 103/104 is The Calculus Lifesaver by Adrian Banner. There's a book and a series of recorded review sessions. I stopped showing up to 104 lectures when I found these because they were so much more thorough than the classes. Banner also did review sessions for 201/202 when you reach that point, which are equally good.

u/sordidarray · 3 pointsr/math

Check out Adrian Banner's The Calculus Lifesaver for a companion to a typical undergrad introductory calculus sequence and the accompanying videos from Princeton.

u/rcochrane · 3 pointsr/math

FWIW I had no fun with mathematics in school and didn't start studying it til I was in my thirties. I'm no genius, but I now teach the subject and still self-study it. You don't need any mysterious talent to get very competent at university-level maths, just to be interested enough in it to put the hours in.

Self-study is hard and frustrating. Be prepared for that. Reading one page can take a day. You can stare at a definition or theorem for hours and not understand it. Looking things up in multiple books can really help with that -- there are some good resources online as well. Also, some things just take a while to "cook" in the brain; keep at it. Take lots and lots of notes, preferably with pictures. Do plenty of exercises. When you're really stumped, post here.

I'll echo what others have said: add to Spivak a couple of other books so you can change it up. A book on group theory and one on linear algebra would be a nice combination -- maybe one on discrete maths, probability or something similar as well if that interests you. For group theory I think this book is fantastic, though it's expensive.

If you really want to make it through Spivak, make a plan. Break the book down into, say, 50-page chunks and make 50 pages your target for each week (I have no idea whether this is too ambitious for you -- try it and see). Track your progress. Celebrate when you hit milestones.

Good luck!

[EDIT: Also, be aware that maths books aren't really designed to be read like novels. Skim a chapter first looking for the highlights and general ideas, then drill into some of the details. Skip things that seem difficult and see if they become important later, then go back (with more motivation) etc.]

u/jaaval · 3 pointsr/AskStatistics

This has been pretty much the standard textbook on Bayes
https://www.amazon.com/Bayesian-Analysis-Chapman-Statistical-Science/dp/1439840954/

u/WhataBeautifulPodunk · 3 pointsr/Physics

Study what you find the most interesting!

Does your linear algebra include the spectral theorem or Jordan canonical form? IMHO, a pure math subject that is relatively easy to learn and is useful no matter what you do is linear algebra.

Group theory (representation theory) has also served me well so far.

If you want to learn GR and Hamiltonian mechanics in depth, learning smooth manifolds would be a must. Smooth manifolds are basically spaces that locally look like Euclidean space and that we can do calculus on. GR is on a pseudo-Riemannian manifold with a changing metric (because of massive stuff). Hamiltonian mechanics is on a cotangent bundle, which is a symplectic manifold (whereas Lagrangian mechanics is on a tangent bundle). John Lee's book is a gentle starting point.

Edit: If you feel like the review of topology in the appendix is not enough, Lee also wrote a book on topological manifolds.

u/ATV360 · 3 pointsr/baseball

Here you go! It's very helpful and has a wide range of topics, so you can learn whatever you want. It uses Retrosheet, Lahman, and PITCHf/x data.

https://www.amazon.com/Analyzing-Baseball-Data-Chapman-Hall/dp/1466570229/ref=sr_1_1?ie=UTF8&amp;amp;qid=1494296330&amp;amp;sr=8-1&amp;amp;keywords=analyzing+baseball+data+with+r

u/GD1634 · 3 pointsr/math

Sure! I'll just assume knowledge of the more common stuff like OPS. I'll try to break it into learning resources v. interesting work to be read. Think my suggestions to OP might be structured a bit differently. I'll try to keep it moderately short.


Learning

The Book: Playing the Percentages in Baseball set the foundation for a lot of stuff seen today. Win expectancy, lineup optimization, "clutch" hitting, matchups, etc. A lot of it is common knowledge today, but probably because of this work. It's great to see them work through it.

This is a bit of a glossary to many of the more important stats, with links for further reading.

As well, not quite the same, but Analyzing Baseball Data With R is also a great introduction to learning R, which is probably preferable to Python for a lot of baseball-specific work (not to make a general statement on the two, at all).


A lot of good work is, somewhat annoyingly, scattered through the internet on blogs. I don't have time to dig up too much right now but I'll shamelessly plug some work a couple of friends did a few years ago that was rather successful. These are mostly just examples of the what these projects tend to look like.

• The Value of Draft Picks
• Projecting Prospects' Hitting Primes
• xxFIP p1 p2 p3

Much of the more current work will probably be found on FanGraphs' community submissions section, which I honestly haven't kept up with recently. I imagine a lot of the focus is on using all the new Statcast data.

There's also the MIT Sloan Sports Analytics Conference, where a lot of really cool work comes from. The awesome part about Sloan is that there seems to be a strong emphasis on sharing; I looked for the data/code for two papers I was interested in and ended up getting it for three! My favourite work might be (batter|pitcher)2vec. This is more machine-learning oriented, which I think is a good direction.


That's all I have time for rn, hope that helps!
u/dankney · 3 pointsr/Sabermetrics

https://www.amazon.com/Analyzing-Baseball-Data-Chapman-Hall/dp/1466570229/

It's an introduction to baseball data, statistical analysis, and the R programming language.

u/ginger_beer_m · 3 pointsr/science

May I recommend this book to you then, Statistical Rethinking: A Bayesian Course with Examples in R and Stan (Chapman &amp; Hall/CRC Texts in Statistical Science) https://www.amazon.co.uk/dp/1482253445/ref=cm_sw_r_cp_apa_Iu6.BbJE7EECQ

u/futrawo · 3 pointsr/BayesianProgramming

There seem to be a few options. I've had this and this on my reading list for a while, but haven't got further than that.

I'm also interested in recommendations.

u/mobcat40 · 3 pointsr/AskScienceDiscussion

Here's mine

To understand life, I'd highly recommend this textbook that we used at university http://www.amazon.com/Campbell-Biology-Edition-Jane-Reece/dp/0321558235/ That covers cell biology and basic biology, you'll understand how the cells in your body work, how nutrition works, how medicine works, how viruses work, where biotech is today, and every page will confront you with what we "don't yet" understand too with neat little excerpts of current science every chapter. It'll give you the foundation to start seeing how life is nothing special and just machinery (maybe you should do some basic chemistry/biology stuff on KhanAcademy first though to fully appreciate what you'll read).

For math I'd recommend doing KhanAcademy aswell https://www.khanacademy.org/ and maybe a good Algebra workbook like http://www.amazon.com/The-Humongous-Book-Algebra-Problems/dp/1592577229/ and after you're comfortable with Algebra/Trig then go for calc, I like this book http://www.amazon.com/Calculus-Ron-Larson/dp/0547167024/ Don't forget the 2 workbooks so you can dig yourself out when you get stuck http://www.amazon.com/Student-Solutions-Chapters-Edwards-Calculus/dp/0547213093/ http://www.amazon.com/Student-Solutions-Chapters-Edwards-Calculus/dp/0547213107/ That covers calc1 calc2 and calc3.

Once you're getting into calc, Physics is a must of course. Math can describe an infinite number of universes, but when you use it to describe our universe, now you have Physics. http://www.amazon.com/University-Physics-Modern-12th/dp/0321501217/ has workbooks too that you'll definitely need since you're learning on your own.

At this point you'll have your answers and a foundation to go into advanced topics in all technical fields, this is why every university student who does a technical degree must take courses in all those 3 disciplines.

If anything at least read that biology textbook, you really won't ever have a true appreciation for the living world and you can't believe how often you'll start noticing people around you spouting terrible science. If you could actually get through all the work I mentioned above, college would be a breeze for you.

u/kiwipete · 2 pointsr/statistics

An intermediate resource between the Downey book and the Gelman book is Doing Bayesian Analysis. It's a bit more grounded in mathematics and theory than the Downey, but a little less mathy than the Gelman.

u/beaverteeth92 · 2 pointsr/statistics

The absolute best book I've found for someone with a frequentist background and undergraduate-level math skills is Doing Bayesian Data Analysis by John Kruschke. It's a fantastic book that goes into mathematical depth only when it needs to while also building your intuition.

The second edition is new and I'd recommend it over the first because of its improved code. It uses JAGS and Stan instead of BUGS, which is Windows-only now.

u/Secret_Identity_ · 2 pointsr/AskReddit

It depends on what kind of math you want to learn. If you want to get up to speed on your basic math, Khan Academy is the way to go. However, I think that is probably a waste of your time. The math that you will see in high school and the first year or two of college has very little to do with what a mathematician might consider 'real math.' Frankly, I found it boring as hell, and I majored in math in undergrad and grad school.

If I were you, I would start with something interesting and if you end up really liking math, go back and pick up algebra and calculus. So check out the two books below:

This book will walk you through really high level stuff in an easy to understand way. As a grad student I would hang out in this class because it was rather fun.

This book is a history of math/pop math book. As an undergrad it put the field into perspective. Lots and lots of really useful information for anyone, especially someone who is interested in being well learned.

u/nulledit · 2 pointsr/AskReddit

I have to disagree a bit with your second pick, Everything and More by David Foster Wallace. His somewhat chaotic writing style [footnote footnote footnote] is more suited to essays and fiction than describing mathematical ideas. Clear and concise he is not.

That said, it is an interesting read. I just don't see it as a great introduction to someone who has had trouble with math in the past. It can be confusing.

u/PrurientLuxurient · 2 pointsr/philosophy

I'd just throw in that the late David Foster Wallace has a fun little book about this stuff. I'd recommend it as an accessible introduction; I am a complete layman when it comes to advanced mathematics and I found it to be an enjoyable read. I'm sure it isn't quite as comprehensive as what you'd get in a pure math course, but it might be useful nevertheless.

Edit: I should add that it is written in DFW's characteristic style--full of digressions, self-consciously and ironically pedantic footnotes, etc.--which plenty of people find unbelievably annoying. Be forewarned, then, that if you've never read anything by him, you may find the style irritating.

u/Chakix · 2 pointsr/Scrolls

I have never argued that draw is good for the game. If you read my posts around this subreddit, I have criticized Mojang for not implementing the hand limit properly and have argued on several occasions that it makes the game less tactical. As for the rest, try this:
http://www.amazon.co.uk/Statistics-For-Dummies-Deborah-Rumsey/dp/0470911085

u/kungfooe · 2 pointsr/matheducation

Honestly, if you're wanting an understanding of statistics, I'd recommend Statistics for Dummies. Don't be deceived by the title; you'll still have to do some real thinking on your own to grasp the ideas discussed. You might consider using textbooks or other online resources as secondary supports to your study.

I can also give you a basic breakdown of the topics you'd want to develop an understanding of in beginning to study statistics.

Descriptive Statistics

Descriptive statistics is all about just describing your sample. Major ideas in being able to describe the sample are measures of center (e.g., mean, median, mode), measures of variation (e.g., standard deviation, variance, range, interquartile range), and distributions (e.g., uniform, bell-curve/normally distributed, skewed left/right).
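As an illustrative sketch (with a made-up sample), Python's standard library already computes most of these descriptive measures:

```python
import statistics

# Made-up sample for illustration.
sample = [2, 4, 4, 5, 7, 9, 12, 12, 15]

# Measures of center
mean = statistics.mean(sample)      # 70/9 ≈ 7.78
median = statistics.median(sample)  # 7
mode = statistics.mode(sample)      # on ties, Python 3.8+ returns the first mode seen

# Measures of variation
stdev = statistics.stdev(sample)        # sample standard deviation (n-1 denominator)
variance = statistics.variance(sample)
data_range = max(sample) - min(sample)  # 13

# Interquartile range: the spread of the middle 50% of the data
q1, _, q3 = statistics.quantiles(sample, n=4)
iqr = q3 - q1

print(mean, median, data_range, iqr)
```

Eyeballing these alongside a histogram is usually enough to spot whether the distribution is roughly uniform, bell-shaped, or skewed.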

Inferential Statistics

There is a TON of stuff related to this. However, I would first recommend beginning with making sure you have some basic understanding of probability (e.g., events, independence, mutual exclusivity) and then studying sampling distributions. Because anything you make an inference about will depend upon the measures in your sample, you need to have a sense of what kinds of samples are possible (and most likely) when you gather data to form one. One of the most fundamental ideas of inferential statistics is based upon these ideas, The Central Limit Theorem. You'll want to make sure you understand what it means before progressing to making inferences.
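A quick way to build intuition for the CLT is simulation: sample means drawn from even a clearly non-normal population pile up in a bell curve around the population mean. A minimal sketch, using an arbitrary uniform population:

```python
import random
import statistics

# Illustrative sketch: draw many samples from a decidedly non-normal
# population (uniform on [0, 1]) and look at how the sample means distribute.
rng = random.Random(42)

def sample_mean(n):
    return sum(rng.random() for _ in range(n)) / n

# 5000 samples of size 30, keeping only each sample's mean.
means = [sample_mean(30) for _ in range(5000)]

print(statistics.mean(means))   # close to the population mean, 0.5
print(statistics.stdev(means))  # close to sigma/sqrt(n) = (1/sqrt(12))/sqrt(30) ≈ 0.053
```

A histogram of `means` would look approximately normal even though no individual observation is.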

With that background, you'll be ready to start studying different inferences (e.g., independent/dependent sample t-tests). Again, there are a lot of different kinds of inference tests out there, but I think the most important thing to emphasize with them is the importance of their assumptions. Various technologies will do all of the number crunching for you, but you have to be the one to determine if you're violating any assumptions of the test, as well as interpret what the results mean.
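As one concrete example of such a test, here is a sketch of Welch's two-sample t statistic computed by hand on made-up data (it drops the equal-variance assumption, but still assumes independent, roughly normal samples):

```python
import math
import statistics

# Two hypothetical independent samples (e.g., scores under two teaching methods).
a = [88, 92, 85, 91, 97, 90]
b = [84, 81, 89, 86, 80, 88]

mean_a, mean_b = statistics.mean(a), statistics.mean(b)
var_a, var_b = statistics.variance(a), statistics.variance(b)
na, nb = len(a), len(b)

# Welch's t statistic: difference in means over its estimated standard error.
t = (mean_a - mean_b) / math.sqrt(var_a / na + var_b / nb)
print(t)
```

Software would also hand you the degrees of freedom and a p-value, but checking the assumptions behind the statistic remains your job, not the computer's.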

As a whole, I would encourage you to focus on understanding the big ideas. There is a lot of computation involved with statistics, but thanks to modern technology, you don't have to get bogged down in it. As a whole, keep pushing towards understanding the ideas and not getting bogged down in the fine-grained details and processes first, and it will help you develop a firm grasp of much of the statistics out there.

u/RmooreChuckECheese · 2 pointsr/MagicArena
u/Sarcuss · 2 pointsr/learnmath

I personally think you should brush up on frequentist statistics as well as linear models before heading to Bayesian Statistics. A list of recommendations directed at your background:

u/urmyheartBeatStopR · 2 pointsr/rstats

&gt; I'd like to know, how did you learn to use R?

My batshit crazy lovable thesis advisor was teaching intro datascience in R.

He can't really lecture and he has high expectations. The class was for everybody, including people who don't know how to program. The class book was Advanced R http://adv-r.had.co.nz/... (red flag).

We only survived this class because I had a cs undergrad background and I gave the class a crash course once. Our whole class was more about how to implement his version of random forest.

I learned R because we had to implement a version of Random forest with Rpart package and then create a package for it.

Before this I dabbled in R for summer research. It was mostly cleaning data.

So my advice would be to have a project and use R.

> how did you learn statistics?

Master program using the wackerly book and chegg/slader. (https://www.amazon.com/Mathematical-Statistics-Applications-Dennis-Wackerly/dp/0495110817)

It's a real grind. You need to learn probability before even going into stats. Wackerly was the only book that really breaks down the three possible transformation techniques (pdf, cdf, mgf).
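For flavor, the cdf technique mentioned there goes like this (a generic textbook-style example, not one lifted from Wackerly): push the event through the transformation, then differentiate. For $Y = X^2$ with $X$ a nonnegative continuous random variable,

```latex
F_Y(y) = P(Y \le y) = P(X^2 \le y) = P\left(X \le \sqrt{y}\right) = F_X\left(\sqrt{y}\right),
\qquad
f_Y(y) = \frac{d}{dy}\,F_X\left(\sqrt{y}\right) = \frac{f_X\left(\sqrt{y}\right)}{2\sqrt{y}}, \qquad y > 0.
```

The pdf and mgf techniques reach the same answer by different routes; the comment's point is that Wackerly walks through all three side by side.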

u/sovietcableguy · 2 pointsr/learnmath

I learned from Wackerly which is decent, though I think Devore's presentation is better, but not as deep. Both have plenty of exercises to work with.

Casella and Berger is the modern classic, which is pretty much standard in most graduate stats programs, and I've heard good things about Stat Labs, which uses hands-on projects to illuminate the topics.

u/astrok0_0 · 2 pointsr/Physics

FYI, Jaynes actually wrote a whole probability textbook that essentially put together all his thoughts about probability theory. I haven't read it, but many people say it got some good stuff.

u/G-Brain · 2 pointsr/math

I'm really fond of Jaynes' Probability Theory: The Logic of Science and Rudin's Principles of Mathematical Analysis. Both are excellent, clearly written books in their own way.

u/bbsome · 2 pointsr/MachineLearning

Depends what your goal is. As you have a good background, I would not suggest any stats book or deep learning. First, read through Probability Theory: The Logic of Science and then go for Bishop's Pattern Recognition or Barber's Bayesian Reasoning and ML. If you understand the first and one of the second books, I think you are ready for anything.

u/mrdevlar · 2 pointsr/statistics

If you want a math book with that perspective, I'd recommend E.T. Jaynes' "Probability Theory: The Logic of Science"; he delves into quite a lot of discussion of that topic.

If you want a popular science book on the subject, try "The Theory That Would Not Die".

Bayesian statistics has, in my opinion, been the force that has attempted to reverse this particular historical trend. However, that viewpoint is unlikely to be shared by all in this area. So take my viewpoint with a grain of salt.

u/sleepingsquirrel · 2 pointsr/math
u/riraito · 2 pointsr/math
u/naasking · 2 pointsr/philosophy

Probability Theory: The Logic of Science. This is an online pdf, possibly of an older version of the book. Science covers knowledge of the natural world; mathematics and logic cover knowledge of formal systems.

u/dogdiarrhea · 2 pointsr/learnmath

I've heard good things about (but have not read) Probability, the logic of science.

A complete table of contents + the first 3 chapters are available here. This should tell you if it covers the appropriate material and if the explanations are to your satisfaction.

u/mshron · 2 pointsr/AskStatistics

It sounds like you want some kind of regression, especially to answer 2. In a GLM, you are not claiming that the data by itself has a Normal/Poisson/Negative Binomial/Binomial distribution, only that it has such a distribution when conditioned on a number of factors.

In a nutshell: you model the mean of the distribution as a linear combination of the inputs. Then you can read the weighting factors on each input to learn about the relationship.

In other words, it doesn't need to be that your data is Poisson or NB in order to do a Poisson or NB regression. It only has to be that the error, that is, the difference between the expected based on the mean function and the actual, follows such a distribution. In fact, there may be some simple transformations (like taking the log of the outcome) that lets you use a standard linear model, where you can reasonably assume that the error is Normal, even if the outcome is anything but.

If your variance is not dependent on any of your inputs, that's a great sign, since heteroskedasticity is a great annoyance when trying to do regressions.
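To illustrate the log trick mentioned above, here is a bare-bones sketch in plain Python with fabricated data: the outcome grows multiplicatively in x, so a straight line fits log(y) by ordinary least squares even though y itself is nothing like normal around any line.

```python
import math

# Hypothetical data where the outcome grows multiplicatively in x
# (each step roughly doubles y); invented numbers for illustration only.
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 4.3, 7.9, 16.5, 31.0, 65.2, 127.0, 260.1]

# log(y) is close to linear in x, so ordinary least squares applies.
log_y = [math.log(v) for v in y]

n = len(x)
mean_x = sum(x) / n
mean_ly = sum(log_y) / n
slope = sum((xi - mean_x) * (li - mean_ly) for xi, li in zip(x, log_y)) \
        / sum((xi - mean_x) ** 2 for xi in x)
intercept = mean_ly - slope * mean_x

# On the original scale the fitted model is y ~ exp(intercept) * exp(slope)^x:
# each unit increase in x multiplies the expected outcome by exp(slope).
print(round(math.exp(slope), 2))
```

Here `exp(slope)` comes out near 2, i.e. the fitted model says y roughly doubles per unit of x, which is exactly how the data were constructed.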

If you have time, the modern classic in this area is http://www.amazon.com/Analysis-Regression-Multilevel-Hierarchical-Models/dp/052168689X. It starts with a pretty gentle introduction to regression and works its way into the cutting edge by the end.

u/Here4TheCatPics · 2 pointsr/statistics

I've used a book by Gelman for self study. Great author, very good at using meaningful graphics -- which may be an effective way to convey ideas to students.

u/cokechan · 2 pointsr/rstats

https://www.amazon.com/Analysis-Regression-Multilevel-Hierarchical-Models/dp/052168689X is the definitive text on the subject. I highly recommend this book to understand the fundamentals of multilevel modeling.

u/geneusutwerk · 2 pointsr/sociology

So I am a political scientist (though my research crosses into sociology).

What I would recommend is starting by learning Generalized Linear Models (GLMs). Logistic regression is one type, but GLMs are just a way of approaching a bunch of other types of dependent variables.

Gelman and Hill's book is probably the best single text book that can cover it all. I think it provides examples in R so you could also work on picking up R. It covers GLMs and multi-level models which are also relatively common in sociology.

u/efrique · 2 pointsr/AskStatistics

> the first half of my degree was heavy on theoretical statistics,

Really? Wow, I'm impressed. Actual coverage of even basic theoretical stats is extremely rare in psych programs. Usually it's a bunch of pronouncements from on high, stated without proof, along with lists of commandments to follow (many of dubious value) and a collection of bogus rules of thumb.

What book(s) did you use? Wasserman? Casella and Berger? Cox and Hinkley? or (since you say it was heavy on theory) something more theoretical than standard theory texts?

I'd note that reaction times (conditionally on the IVs) are unlikely to be close to normal (they'll be right skew), and likely heteroskedastic. I'd be inclined toward generalized linear models (perhaps a gamma model, probably with log link if you have any continuous covariates, would suit reaction times?). And as COOLSerdash mentions, you may want a random effect on subject, which would then imply GLMMs.

u/flight_club · 2 pointsr/math

What is your background?

http://www.amazon.com/Statistical-Inference-George-Casella/dp/0534243126
Is a fairly standard first-year grad textbook which I quite enjoy. Gives you a mathematical statistics foundation.

http://www.amazon.com/All-Statistics-Concise-Statistical-Inference/dp/1441923225/ref=sr_1_1?ie=UTF8&s=books&qid=1278495200&sr=1-1
I've heard recommended as an approachable overview.

http://www.amazon.com/Modern-Applied-Statistics-W-Venables/dp/1441930086/ref=sr_1_1?ie=UTF8&s=books&qid=1278495315&sr=1-1
Is a standard 'advanced' applied statistics textbook.

http://www.amazon.com/Weighing-Odds-Course-Probability-Statistics/dp/052100618X
Is non-standard but as a mathematician turned probabilist turned statistician I really enjoyed it.

http://www.amazon.com/Statistical-Models-Practice-David-Freedman/dp/0521743850/ref=pd_sim_b_1
Is a book which covers classical statistical models. There's an emphasis on checking model assumptions and seeing what happens when they fail.

u/Econonerd · 2 pointsr/GradSchool

This book has a fairly good introduction to probability theory if you don't need it to be measure theoretic. Statistical Inference

u/gajeam · 2 pointsr/education

John Allen Paulos' Innumeracy goes into a similar subject. He says that logic puzzles, analytical and inductive skills, and more importantly probability and statistical analysis should be taught alongside regular mathematics.

It's a short read and the man is a genius.

u/jrandom · 2 pointsr/atheist

Argh. Numerology. It's like every logical fallacy for numbers rolled into one.

I highly recommend you pick up a copy of Innumeracy.

u/ltnately · 2 pointsr/math

Innumeracy by Paulos

A great read that deals in part with the general acceptability of math incompetence compared to other subjects. Also a fun book as a "math person," just in the way he speaks and confides in the reader.

http://www.amazon.com/Innumeracy-Mathematical-Illiteracy-Consequences-Vintage/dp/0679726012

u/Froost · 2 pointsr/programming

By the way, there's a book called Innumeracy which tackles the problem of the consequences of not knowing math, how it became OK, or even fashionable, to not know math in society, places where you can apply the knowledge, etc. You should give it a read; it's enjoyable and short. You'll notice that most of the arguments you are making are similar to those that were made against literacy ("I'm plantin' seeds all day and then weedin for some, what are books gonna do me for?")

u/bluestrike2 · 2 pointsr/politics

Completely off-topic, but stick with it. I had the same problem. Then everything will start to fit together. In the meantime, might I suggest Adrian Banner's The Calculus Lifesaver as a really approachable second textbook/help guide/reference?

/tangent

u/lamson12 · 2 pointsr/math

Here is an actual blog post that conveys the width of the text box better. Here is a Tufte-inspired LaTeX package that is nice for writing papers and displaying side-notes; it is not necessary for now but will be useful later on. To use it, create a tex file and type the following:

    \documentclass{article}
    \usepackage{tufte-latex}

    \begin{document}
    blah blah blah
    \end{document}

But don't worry about it too much; for now, just look at the Sample handout to get a sense for what good design looks like.

I mention AoPS because they have good problem-solving books and will deepen your understanding of the material, plus there is an emphasis on proof-writing when solving USA(J)MO and harder problems. Their community and resources tabs have many useful things, including a LaTeX tutorial.

Free intro to proofs books/course notes are a google search away and videos on youtube/etc too. You can also get a free library membership as a community member at a nearby university to check out books. Consider Aluffi's notes, Chartrand, Smith et al, etc.

You can also look into Analysis with intro to proof, a student-friendly approach to abstract algebra, an illustrated theory of numbers, visual group theory, and visual complex analysis to get some motivation. It is difficult to learn math on your own, but it is fulfilling once you get it. Read a proof, try to break it down into your own words, then connect it with what you already know.

Feel free to PM me v2 of your proof :)

u/cr3bits · 2 pointsr/math

You might also want to search for your question on MSE. One piece of advice that I recall is to consider Gilbert Strang's book Introduction to Linear Algebra along with his videos on OCW. He has an offbeat style, both in his book and in his videos, that might be unappealing to some people, but the reason is that he really tries to make his students understand rather than remember. Also note that his target audience is typically engineers, so proofs are present in his book but not the emphasis of his course.

u/IjonTichy85 · 2 pointsr/compsci

I think before you start you should ask yourself what you want to learn. If you're into programming or want to become a sysadmin you can learn everything you need without taking classes.

If you're interested in the theory of cs, here are a few starting points:

Introduction to Automata Theory, Languages, and Computation

The book you should buy

MIT: Introduction to Algorithms

The book you should buy

Computer Architecture <- The intro alone makes it worth watching!

The book you should buy

Linear Algebra

The book you should buy <- Only scratches the surface but is a good starting point. Also it's extremely informal for a math book. The MIT channel offers many more courses and is great for autodidactic studying.

Everything I've posted requires no or only minimal previous education.
You should think of this as a starting point. Maybe you'll find lessons or books you prefer. That's fine! Make your own choices. If you've understood everything in these lessons, you just need to take a programming class (or just learn it by doing), a class on formal logic, and some more advanced math classes, and you will have developed a good understanding of the basics of CS. The materials I've posted roughly cover the first year of studying CS. I wish I could tell you where you can find some more math/logic books, but I'm German and always used German books for math because they usually follow a more formal approach (which isn't necessarily a good thing).
I really recommend learning these things BEFORE starting to learn the 'useful' parts of CS like SQL, XML, design patterns etc.
Another great book that will broaden your understanding is this Bertrand Russell: Introduction to mathematical philosophy
If you've understood the theory, the rest will seem 'logical' and you'll know why some things are the way they are. Your working environment will keep changing, and 20 years from now we will be using different tools and different languages, but the theory won't change. If you've once made the effort to understand the basics, it will be a lot easier for you to switch to the next 'big thing' when you're required to do so.

One more thing: PLEASE, don't become one of those people who need to tell everyone how useless a university is and that they know everything they need just because they've been working with python for a year or two. Of course you won't need 95% of the basics unless you're planning on staying in academia and if you've worked instead of studying, you will have a head start, but if someone is proud of NOT having learned something, that always makes me want to leave this planet, you know...

EDIT: almost forgot about this: use Unix, use Unix, and I can't emphasize this enough: USE UNIX! Building your own Linux from scratch is something every computer scientist should have done at least once in his life. It's the only way to really learn how a modern operating system works. Also try to avoid Apple/Microsoft products, since they're usually closed source and don't give you the chance to learn how they work.

u/bobbyj_chard · 2 pointsr/MachineLearning

> $350 on amazon wat?

The newest edition (2009) of his undergrad book is available for about 70 dollars, which is probably a steal. http://www.amazon.com/Introduction-Linear-Algebra-Fourth-Gilbert/dp/0980232716/ref=sr_1_1?ie=UTF8&qid=1459450518&sr=8-1&keywords=gilbert+strang

u/user0183849184 · 2 pointsr/gamedev

I realized as I was writing this reply that I'm not sure if you're interested in a general linear algebra reference recommendation or more of a computer graphics math recommendation. My reply is all about general linear algebra, but I don't think matrix decompositions or eigensolvers are used in real-time computer graphics (but what do I know, lol), so probably just focusing on the transformations chapter in Mathematics for 3D Game Programming and Computer Graphics would be good. If it feels like you're just memorizing stuff, I think that's normal, but keep rereading the material and do examples by hand! If you really understand how projection matrices work, then the transformations should make more sense and seem less like magic.

I took Linear Algebra last semester and we used http://www.amazon.com/Introduction-Linear-Algebra-Fourth-Edition/dp/0980232716; I would highly recommend it. Along with that book, I would recommend watching these video lectures, http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/, given by the author of the book. I'd never watched MIT's video lectures until I watched these in preparation for an interview, because I always thought they would be dumb, but they're actually really great! I will say that I used the pause button furiously because the lectures are very dense and I had to think about what he was saying!

In my opinion, the most important topics to focus on would be the definition of a vector space, the four fundamental subspaces, how the four fundamental subspaces relate to the fundamental theorem of linear algebra, all the matrix decompositions in that book, and pivot variables and special solutions. I just realized I'm basically listing all of the chapters in the book, but I really do think they are all very important! The one thing you might not want to focus on is the chapter on incidence matrices. However, in my class we went over PageRank in detail and I think it was very interesting!

u/Second_Foundationeer · 2 pointsr/Physics

I think the book I used was by Gilbert Strang. He also has some video lectures, apparently. However, I think most of my real understanding of linear algebra (after being introduced to the formalism) came from some combination of upper-division classes (classical mechanics, mathematical methods, linear algebra in the math dept). Maybe quantum mechanics was when I just got used to it.

I'd suggest complex analysis if you've already been introduced to the basic formalism of linear algebra, because you have to use linear algebra a shit ton in quantum mechanics, so you'll get good at it just from sheer exposure, imo.

u/DavidJayHarris · 2 pointsr/statistics

This is very similar to the analysis featured on the cover of Bayesian Data Analysis (third edition). Here's a bigger picture of their decomposition into day-of-week effects, seasonal effects, long-term trends, holidays, etc. A bit more here, and lots more in the book.

u/gatherinfer · 2 pointsr/statistics

A lot of the recommendations in this thread are good. I'd like to add "Bayesian Data Analysis 3rd edition" by Gelman et al. Useful if you encounter Bayesian models, especially hierarchical/multilevel models.

u/ryry9379 · 2 pointsr/ProductManagement

Mostly because I wanted to analyze baseball stats, and at the time (4-5 years ago) that was mostly done in R.
If the last industry conference I went to is any indication, it still is; many of the presentations featured plots that were clearly ggplot2. There are also books like this one floating around: https://www.amazon.com/Analyzing-Baseball-Data-Chapman-Hall/dp/1466570229/ref=nodl_

u/MeloYelo · 2 pointsr/Rlanguage

I'm in a similar boat as you. I'm a biologist by trade, but want to delve deeper into statistical analysis with R programming to add a new skill to my career. I'm also a huge baseball fan and especially love it for the stats.

A friend of mine gave me this book as a birthday gift and I've been working my way through it, albeit very slowly. So far (I'm only at Chapter 3), it's been easy to follow and a nice guide through R. I'd suggest it.

The edX course that /u/sin7 suggested sounds interesting as well.

u/Aok1425 · 2 pointsr/AskStatistics

At least re: random variables, events, PDF, and CDF, I like the diagrams from Prof. Joe Blitzstein's textbook: http://i.imgur.com/aBkgHGC.jpg

u/themiro · 2 pointsr/learnmath

Blitzstein and one of his students published a probability textbook.

u/SOberhoff · 2 pointsr/math

The Nature of Computation (I don't care for people who say this is computer science, not real math. It's math. And it's the greatest textbook ever written at that.)

Concrete Mathematics

Understanding Analysis

An Introduction to Statistical Learning

Numerical Linear Algebra

Introduction to Probability

u/OrigamiDuck · 2 pointsr/artificial

This may vary by school, but it's been my experience that there aren't a lot of classes explicitly labeled as "artificial intelligence" (especially at the undergraduate level). However, AI is a very broad and interdisciplinary field, so one thing I would recommend is that you take courses from fields that form the foundation of AI (math, statistics, computer science, psychology, philosophy, neurobiology, etc.). In addition, take advantage of the resources you can find online!

Self-study the topics you're interested in and try to get some hands-on experience if possible: read blogs, read papers, browse subreddits, program a game-playing AI, etc. Given that you're specifically interested in reasoning:

* (From the sidebar) AITopics has a page on reasoning with some recommendations on where to start.
* I'm not an expert in this area, but from what I've been exposed to I believe many of the state-of-the-art approaches to reasoning rely on Bayesian statistics, so I would look into learning more about it. I've heard good things about this book, and the author also has some lectures available on YouTube.
* From what I understand, whether or not we should look to the human mind for inspiration in AI reasoning is a pretty controversial topic. However, you may find it interesting, and taking a brief survey of the psychology of reasoning may be a good way to understand the types of problems involved in AI reasoning, if you aren't very familiar with the topic.

As a disclaimer: I'm fairly new to this field of study myself. What I've shared with you is my best understanding, but given my lack of experience it may not be completely accurate. (Anyone, please feel free to correct me if I'm mistaken on any of these points.)

u/ST2K · 2 pointsr/IAmA

> I mean it's too late now to enroll...

Why wait? Pick up a few books on math and use your Google Fu to get yourself started.

I really like this book. And instead of studying geometry (which I doubt you'd be using in college), study Logic instead. The way problems are constructed is similar to geometry. In geometry you have theorems and postulates; in logic you make proofs. You start out with two or three opening statements, and by using different combinations of OR, AND & IF-THEN statements, you can prove the final statement. I'll give you this link about it, but I'm hesitant to because it has lots of scary symbols and letters. Here. But save that for later. If you want to get started, take a look at truth tables.

Logic is so much more interesting than geometry because it'll help your Google Fu get even better. You can make Boolean statements when you enter a Google query. It also gets you on the path to learning SQL (which your brother may also be able to help you with). SQL is all about sets: sets of records, how you can join them, select those that have certain values, etc. You may even find this book a nice, gentle introduction to logic that doesn't require much math.

Basically, what I'm saying to you is this: you live in the most incredible time to be alive ever. The Internet is a super-powerful tool you can use to educate yourself and you should make full use of it.

I also want you to know that if you don't have a specialized skill, you're going to be treated like a virtual slave for the rest of your life. Working at WalMart is not a good career choice. That's just choosing a life of victimhood. Make full use of the Internet, and your lack of a car will seem less problematic.

u/mrbarky · 2 pointsr/booksuggestions

I've been working my way through the Humongous Book of Algebra Problems. It's about a thousand math problems with complete (and very good) explanations. The only way to get good is to get out the paper and plow through problems. I supplement that with videos from Khan Academy (which has its own math quiz system that is also excellent). I try to do every problem, even if I hate it (looking at you, matrices!). And if I get it wrong, no matter the mistake, I re-do the whole problem. After I do that one, it's on to the Humongous book on Trig. Then calculus. All for the randy hell of it (I grew up with an interest in science and bad math teachers).

u/Josharooski · 2 pointsr/learnmath

http://www.amazon.com/The-Humongous-Book-Algebra-Problems/dp/1592577229 Maybe? I'm thinking about picking this up when I finish Khan Academy algebra.

u/PookaProtector · 2 pointsr/learnmath

No worries.
There's also a book called the Humongous Book of Algebra Problems.

u/XLordS · 2 pointsr/learnmath

This is the book I used when I was studying statistics and probability: https://www.amazon.com/Introduction-Probability-2nd-Dimitri-Bertsekas/dp/188652923X

"Math isn't a spectator sport", but you shouldn't make yourself hate math by doing hundreds of problems. Study what you find interesting.

u/Islamiyyah · 2 pointsr/datascience

It's pretty basic stuff, but the first three chapters of this book were a game-changer for me: https://www.amazon.com/Introduction-Probability-2nd-Dimitri-Bertsekas/dp/188652923X

My mind was blown when I finally understood the connection between random variables and the "basic" probability theory with events and sample spaces. For me they had always been two separate things.

The notation is also really nice. Having solid fundamentals makes it much easier to study advanced topics, so I would start here.

There's also a great edX course which is based on the book, but it's a complement and not a substitute. Get the book.

u/rrsmitto · 2 pointsr/matheducation

When you say everyday calculations I'm assuming you're talking about arithmetic, and if that's the case you're probably just better off using your phone if it's too complex to do in your head, though you may be interested in this book by Arthur Benjamin. I'm majoring in math and electrical engineering, so the math classes I take do help with my "everyday" calculations, but they have never really helped me with anything non-technical. That said, the more math you know, the more you can find it just about everywhere. I mean, you don't have to work at NASA to see the technical results of math; speech recognition applications like Siri or Ok Google on your phone are insanely complex and far from a "solved" problem. There's definitely a ton of math in the medical field.

MRIs and CT scanners use a lot of physics in combination with computational algorithms to create images, both of which require some pretty high-level math. There's actually an example in one of my probability books that shows how important statistics can be in testing patients. It turns out that even if a test has a really high accuracy, if the condition is extremely rare there is a very high probability that a positive result for the test is a false positive. The book states that ~80% of doctors who were presented this question answered incorrectly.

u/Alexanderdaawesome · 2 pointsr/math

I can recommend a very good book. I am using it and it is beautiful.

u/coffeecoffeecoffeee · 1 pointr/statistics

For applied Bayesian statistics, Kruschke's Doing Bayesian Data Analysis is fantastic. It's an intro book that goes into only as much technical detail as you need to grasp what's going on.

u/adcqds · 1 pointr/datascience

The pymc3 documentation is a good place to start if you enjoy reading through mini-tutorials: pymc3 docs

Also these books are pretty good: the first is a nice soft introduction to programming with pymc & Bayesian methods, and the second is quite nice too, albeit targeted at R/STAN.

u/noahpoah · 1 pointr/suggestmeabook

If you liked Consider the Lobster, then you will also very probably like A Supposedly Fun Thing I'll Never Do Again and Both Flesh and Not. Edited to add that Everything and More is also very good, though it's not a collection of essays.

u/RuttyRut · 1 pointr/INTP

It sounds like you would enjoy Everything and More: A Compact History of Infinity, also by DFW. Fascinating read, non-fiction, both somewhat technical and easily readable.

"The task Wallace has set himself is enormously challenging: without radically compromising the complexity of the philosophy, metaphysics, or mathematics that underlies the evolving concept of infinity, present the material to a lay audience in a manner that is entertaining."

https://www.amazon.com/Everything-More-Compact-History-Infinity/dp/0393339289

u/racketship · 1 pointr/davidfosterwallace

I would be interested in Everything and More: A Compact History of Infinity. Is that a possibility?

u/threepoint14ApplePi · 1 pointr/332e313431353932

Of note, DFW also wrote this.

u/Oldkingcole225 · 1 pointr/technicallythetruth

This is the basis of calculus. An infinitesimal (1/infinity) can both = 0 and > 0. When calculus was first presented to the math community, they saw this and called it a bunch of liberal hippie bullshit. It took ~100-150 years for calculus to be fully formalized and accepted within the math community, but it was immediately accepted in the engineering community because it worked. If you're interested, I highly recommend Everything and More: A Compact History of Infinity by David Foster Wallace.

u/CentralNervousPiston · 1 pointr/philosophy

I am a Strange Loop is about the theorem.

Another book I recommend is David Foster Wallace's Everything and More. It's a creative book all about infinity, which is a very important philosophical concept and relates to mind and machines, and even God. Infinity exists within all integers and within all points in space. It is another thing the human mind can't empirically experience but which yet bears axiomatic, essential reality.

How does the big bang give rise to such ordered structure? Is math invented or discovered? Well, if math doesn't change across time and culture, then it has essential existence in reality itself, and thus is discovered and is not a construct of the human mind. Again, how does logic come out of the big bang? How does such order and beauty emerge in a system of pure flux and chaos? In my view, logic itself presupposes the existence of God. A metaphysical analysis of reality seems to require that base reality is mind, and our ability to perceive and understand the world requires that base reality be the omniscient, omnipresent mind of God.
Anyway these books are both accessible. Maybe at some point you'd want to dive into Godel himself. It's best to listen to talks or read books about deep philosophical concepts first. Jay Dyer does a great job on that https://www.youtube.com/watch?v=c-L9EOTsb1c&amp;amp;t=11s u/dagbrown · 1 pointr/ContagiousLaughter I invite you to read a couple of books, both of which I really enjoyed. One, Two, Three...Infinity by George Gamow (that link is almost certainly an act of piracy, but I doubt the author would mind because he dedicated his life to spreading knowledge), and Everything and More by David Foster Wallace. That's a publication which is so recent that if you want to read it, you'll have to cough up money. Go ahead and do it: it's so interesting that it's worth the eight bucks for the e-book easily. Don't worry, it's not an affiliate link, I stand to gain nothing from your purchase. Both of those books talk about the mind-blowing idea that there are multiplie levels of infinity, with some infinities being much bigger than other infinities. The state of the art in thinking about infinities is so brain-hurting that David Foster Wallace's book was published 40 years after George Gamow's book, and includes a relatively small number of concepts that weren't in the older book (which isn't to say that they're insignificant--they're ideas about infinity so by necessity they're huge). One of the things I liked about David Foster Wallace's book is that it actually has a formula for quantifying how much bigger a higher-level infinity is than a lower-level infinity. Nobody in Gamow's day had come up with anything like that yet, they were just waving their arms talking about "huge" and "huger". u/smartfbrankings · 1 pointr/OhioStateFootball Only if you are too dumb to know how to use it. Knowing a median is actually quite useful when making future predictions. 
I would encourage you to read up on statistics, so you can focus on things that matter, rather than on the odds that Urban Meyer wins a game when there is snow within 100 miles and he is wearing khakis. http://www.amazon.com/Statistics-Dummies-Deborah-J-Rumsey/dp/0470911085 u/Kirkaine · 1 pointr/changemyview IQ tests are calibrated to return an average of 100. Absent any evidence to the contrary, we assume the null hypothesis and place all subgroups at the global average. Here's a good starting point if you're interested in learning more. Educational barriers for African Americans are well documented and muddy the relationship between intelligence and education. You'll note that the educational data I provided earlier was solely for whites, where the relationship is clear cut. I'm unaware of any reliable data for blacks. Now, since you're presumably white, and we do have that data, would you mind telling the audience a little about your education, and we'll see what inferences we can draw? u/sloppyzhou · 1 pointr/news Asking every single person is definitely not the only way to get accurate numbers. For starters you could give this read: http://www.amazon.com/Statistics-For-Dummies-Deborah-Rumsey/dp/0470911085/ref=sr_1_5?ie=UTF8&amp;amp;qid=1412872846&amp;amp;sr=8-5&amp;amp;keywords=intro+to+statistics But you're right about this little piece of click bait. I'm not sure why more people aren't commenting on the NBC/Survey Monkey Ad they were just tricked into reading. u/Lochen · 1 pointr/atheism [citation needed] http://www.amazon.ca/Statistics-For-Dummies-Deborah-Rumsey/dp/0470911085 Here is the lowest end dumbed down version. Should be perfect. u/TonySu · 1 pointr/learnmath Probability and Random Processes by Grimmett is a good introduction to probability. Mathematical Statistics by Wackerly is a comprehensive introduction to basic statistics. Probability and Statistical Inference by Nitis goes into the statistical theory from heavier probability background. 
The first two are fairly basic and the last is more involved, but probably contains very few applied techniques.

u/Jimmy_Goose · 1 pointr/math

By introductory, do you mean undergrad level, and by advanced, grad level? If that is the case: The most widely used undergrad book is Wackerly et al. I also taught out of Devore before and it is not bad. Wackerly covers more topics, but does so in a much more terse manner. Devore covers things better, but covers fewer things (some of which are pretty important). Grad: Casella and Berger. People might have their qualms with this book, but there is really no better book out there.

u/kenderpl · 1 pointr/learnmath

If you want to do statistics in a rigorous way you should start with calculus and linear algebra. For calculus I recommend Paul's notes -> http://tutorial.math.lamar.edu/Classes/CalcI/CalcI.aspx They are really clearly written with good examples and provide good intuition. As a supplement, go through 3blue1brown's Essence of Calculus. I think it's an excellent resource for providing the right intuition. For linear algebra - Linear Algebra Done Right, as already recommended. Additionally, again, 3blue1brown's series on linear algebra is a top-notch addition for providing visual intuition and understanding of what is going on and what it's all about. Finally, for statistics - I would recommend starting with probability theory; that way you'll be able to do mathematical statistics and will have a solid understanding of what is going on. Mathematical Statistics with Applications is self-contained, with probability theory included. https://www.amazon.com/Mathematical-Statistics-Applications-Dennis-Wackerly/dp/0495110817

u/LoKx · 1 pointr/actuary

Sadly the only university in my city lost their accreditation since they couldn't pay a competitive salary. I lucked out because my Statistics professor is insanely qualified.
(Ph.D. in Mathematics and Ph.D. in Statistics) So our Stats course covers MGFs and the derivations of all the theorems. Pretty much every question in this book: http://www.amazon.ca/Mathematical-Statistics-Applications-Dennis-Wackerly/dp/0495110817 Thanks a lot for the response. The thought of taking on something of this magnitude with no real-life mentorship is really daunting.

u/crossingtheabyss · 1 pointr/math

Just completed Probability this semester, and moving on to Statistical Inference next semester. Calc. B is a prerequisite, and I wound up seeing plenty of it along with a little Calc C (just double integrals). I'm an Applied Mathematics undergrad major btw and former Physics major from some years ago. I wound up enjoying it despite my bad attitude in the beginning. I keep hearing from fellow math majors that Statistical Inference is really difficult. Funny thing is I heard the same about Linear Algebra and didn't find it overwhelming. I shall soon find out. We used Wackerly's Mathematical Statistics with Applications. I liked the book more than most in my class. Some thought it was overly complicated and didn't explain the content well. Seems I'm always hearing some kind of complaint about textbooks every semester. Good luck.

u/bdubs91 · 1 pointr/badeconomics

This was mine.

u/Randy_Newman1502 · 1 pointr/badeconomics

This book comes to mind.

u/pgoetz · 1 pointr/statistics

I would try Mathematical Statistics and Data Analysis by Rice. The standard intro text for Mathematical Statistics (this is where you get the proofs) is Wackerly, Mendenhall, and Scheaffer, but I find this book to be a bit too dry and theoretical (and I'm in math). Calculus is less important than a thorough understanding of how random variables work. Rice has a couple of pretty good chapters on this, but it will require some mathematical maturity to read this book. Good luck!

u/keepitsalty · 1 pointr/AskStatistics

I enjoyed Introduction to Probability Theory by Hoel et al. Also, Probability Theory by Jaynes is essential. For probabilistic programming I would also look into Bayesian Methods for Hackers.

u/whitewhim · 1 pointr/Physics

I really love Probability Theory: The Logic of Science by Jaynes. While it is not a physics book, it was written by one. It is very well written, and is filled with common sense (which is a good thing). I really enjoy how probability theory is built up within it. It is also very interesting if you have read some of Jaynes' more famous works on applying maximum entropy to Statistical Mechanics.

u/Kijanoo · 1 pointr/technology

> Honestly, both of our arguments have become circular. This is because, as I have stressed, there is not enough data for it to be otherwise. Science is similar to law in that the burden of proof lies with the accuser. In this case there is no proof, only conjecture.

((Just in case it is relevant: which two arguments do you mean exactly? The circularity isn't obvious to me.)) In my opinion you can argue convincingly about future events where you are missing important data and where no definitive proof was given (like in the AI example), and I want to try to convince you :) I want to base my argument on subjective probabilities. Here is a nice book about it. It is the only book of advanced math that I worked through ^^ (pdf). My argument consists of multiple examples. I don't know where we will disagree, so I will start with a more agreeable one. Let's say there is a coin and you know that it may be biased. You have to guess the (subjective) probability that the first toss is head. You are missing very important data: the direction the coin is biased to, how much it is biased, the material, and so on. But you can argue the following way: "I have some hypotheses about how the coin behaves and the resulting probabilities and how plausible these hypotheses are.
But each hypothesis that claims a bias in favour of head is matched with an equally plausible hypothesis that points in the tail direction. Therefore the subjective probability that the first toss is head is 50%."

What exactly does "the subjective probability is 50%" mean? It means that if I have to bet money where head wins 50 cents and tail wins 50 cents, I could not prefer either side. (I'm using small monetary values in all examples, so that human biases like risk aversion and diminishing returns can be ignored.) If someone (that doesn't know more than me) claims the probability is 70% in favour of heads, then I will bet against him: we would always agree on any odds between 50:50 and 70:30. Let's say we agree on 60:40, which means I get 60 cents from him if the coin shows tail and he gets 40 cents from me if the coin shows head. Each of us agrees to it because each one claims to have a positive expected value. This is more or less what happened when I bet against Brexit with my roommate some days ago. I regularly bet with my friends. It is second nature for me. Why do I do it? I want to be better at quantifying how much I believe something. In the next examples I want to show you how I can use these quantifications.

What happens when I really don't know something? Let's say I have to guess my subjective probability that the Riemann hypothesis is true. So I read the Wikipedia article for the first time and didn't understand the details ^^. All I can use is my gut feeling. There seem to be some more arguments in favour of it being true, so I set it to 70%. I thought about using a higher value, but some arguments might be biased by arguing in favour of what some mathematicians want to be true (instead of what is true). So would I bet against someone who has odds that are different from mine (70:30) and doesn't know much more about that topic? Of course!
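The arithmetic behind that 60:40 bet can be made concrete. A minimal sketch (my own illustration, not part of the comment): each side plugs in their own subjective probability and sees a positive expected value.

```python
def ev(p_heads, payoff_heads, payoff_tails):
    """Expected payoff (in cents) under a subjective probability of heads."""
    return p_heads * payoff_heads + (1 - p_heads) * payoff_tails

# The commenter (believes 50%) gets +60 cents on tails and pays 40 on heads:
my_ev = ev(0.5, payoff_heads=-40, payoff_tails=60)

# The 70%-believer takes the other side: +40 on heads, -60 on tails:
his_ev = ev(0.7, payoff_heads=40, payoff_tails=-60)

# Both come out to +10 cents under each bettor's own beliefs, which is why
# both agree to the bet.
```

Any odds strictly between 50:50 and 70:30 give both sides a positive expected value, which is why the comment says they "would always agree" in that range.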
Now let's say in a hypothetical scenario an alien, a god, or anyone that I would take seriously and have no power over appears in front of me, randomly chooses a mathematical conjecture (here: the Riemann hypothesis), and speaks the following threat: "Tomorrow you will take a fair coin from your wallet and throw it. If the coin lands head you will be killed. But as an alternative scenario you may plant a tree. If you do this, your death will not be decided by a coin; instead, you will not be killed if and only if the Riemann hypothesis is true." Or in other words: if my subjective probability that the Riemann hypothesis is true is >50%, then I will prefer to plant a tree; otherwise, I will not. This example shows that you can compare probabilities that are more or less objective (e.g. from a coin) with subjective probabilities, and that you should even act on the result. The comforting thing with subjective probabilities is that you can use all the known rules from "normal" probabilities. This means that sometimes you can really try to calculate them from assumptions that are much more basic than a gut feeling. When I wrote this post I asked myself what the probability is that the Riemann hypothesis will be proven/disproven within the next 10 years. (I just wanted to show you this because the result was so simple, which made me happy, but you can skip it.)

• Assumption 1: I am given a single arbitrary mathematical statement I know nothing about. Let's say I consider only those with a given difficulty, meaning it is either easy to solve or difficult to solve from an objective point of view. Now I use the approximation that if it wasn't solved for n days, then the probability that it will be solved within the next day is like throwing a die - it is independent of n. This behaviour is described by an exponential function "exp(-r t)", where the result is the probability that it remains unsolved after t years, given a difficulty parameter r.
You could use better models of course, but given that I know nothing about that statement, it is OK for me to expect a distribution which looks like an exponential function.

• Assumption 2: Most mathematical problems and subproblems are solved rather fast/instantly, because they are simple. The outstanding problems are the difficult ones. This can be described by a probability distribution over the difficulty parameter where each possible parameter value has the same subjective probability. This is only one way to describe the observation, of course, but I also get this probability distribution if I use the principle of indifference, according to which the problem should be invariant with respect to the timescale (= nothing changes if I change the units from months to decades).

• Result: OK, I don't know how difficult the Riemann hypothesis is to prove, but integrating over all possible difficulties, weighting them by their subjective probability (= assumption 2) and by the plausibility of not being solved for the past p years, I can calculate the odds that it will be solved within the next t years. The solution: t:p. So given that it wasn't solved for 100 years, the odds that it is solved within the next 10 years are very small (10:100). And this result is useful for me. Would I bet on that ratio? Of course! Would I plant a tree in a similar alien example? No I wouldn't, because the probability is <50%. Again, it is possible to use subjective probabilities to find out what to do.

And here is the best part about using subjective probabilities. You said "Science is similar to law in that the burden of proof lies with the accuser. In this case there is no proof, only conjecture." But this rule is no longer needed. You can come to the conclusion that the probability is too low to be relevant for whatever argument and move on. The classic example of Bertrand Russell's teapot can be solved that way. Another example: you can calculate which types of supernatural gods are more or less probable.
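The t:p result above can be checked numerically. A sketch of my own (not from the comment), assuming the commenter's two ingredients: exponential survival exp(-r·t) and a flat prior on the difficulty rate r; the function name and grid settings are illustrative.

```python
import numpy as np

def prob_solved_within(t, p, r_max=1.0, n=200_001):
    """P(solved within the next t years | unsolved for the past p years),
    integrating a flat prior on the rate r against exp(-r*p) survival."""
    r = np.linspace(0.0, r_max, n)        # flat prior, truncated where exp(-r*p) ~ 0
    w = np.ones(n)
    w[0] = w[-1] = 0.5                    # trapezoid-rule weights
    survived = np.exp(-r * p)             # plausibility of p unsolved years
    solved_next = 1.0 - np.exp(-r * t)    # chance of a solution in the next t years
    return float(np.sum(w * survived * solved_next) / np.sum(w * survived))

# Unsolved for p=100 years: the probability of a proof within t=10 years
# should come out near t/(p+t) = 10/110, i.e. odds of 10:100 as in the comment.
est = prob_solved_within(t=10, p=100)
```

The closed form follows because the flat-prior integrals reduce to ∫exp(-rp)dr = 1/p, giving a conditional probability of t/(p+t) and hence odds of t:p.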
One just needs to collect all pro and contra arguments and translate them to likelihood ratios. I want to give you an example with one type of Christian god hypothesis vs. pure scientific reasoning:

• Evidence "The species on planet earth can be organized by their genes in a tree shape.": evolution predicts this (therefore p=1) and the Christian-god-intelligent-design hypothesis says "maybe yes, maybe something else" (p=1/2 at most). Therefore the likelihood ratio is 1:2 in favour of pure scientific reasoning.

• More arguments, contra: problem of evil, lawful universe and things that follow from that, ...

• More arguments, pro: fine-tuned universe problem, existence of consciousness, ...

In the end you just multiply all the ratios of all the arguments and then you know which of these two hypotheses to prefer. The derived mathematical formula is a bit more complicated, because it takes into account that the arguments might depend on each other, and there is an additional factor (the prior) which is used to indicate how much you privilege either of these two hypotheses over all the other hypotheses (e.g. because the hypothesis is the most simple one). I wanted to show you that you can construct useful arguments using subjective probabilities, come to a conclusion, and then act on the result. It is not necessary to have a definitive proof (or to argue about which side has the burden of proof). I can imagine two ways where my argument is flawed:

• Maybe there will be too much to worry about / too many things to do, if one uses this method consistently. But all extreme examples I can think of either have too low a probability (e.g. Pascal's Wager), or there is not much that can be done today (most asteroids are detected too late), or it is much easier to solve the problem when it arrives instead of today.

• Subjective probabilities are formalized and can be used consistently for environmental uncertainty. But there are problems if you try to reason under logical uncertainty.
This is not yet formalized. Assuming it never will be, then my argument cannot be used.

u/mryanbell · 1 pointr/probabilitytheory

Jaynes' Probability Theory is fantastic.

u/leoc · 1 pointr/programming

All gone now. (05:30 UMT 10 August) LiSP and Probability Theory: The Logic of Science are still in the top two slots, but amazon.ca appears to have sold out of new copies.

u/Bromskloss · 1 pointr/statistics

> There are some philosophical reasons and some practical reasons that being a "pure" Bayesian isn't really a thing as much as it used to be. But to get there, you first have to understand what a "pure" Bayesian is: you develop reasonable prior information based on your current state of knowledge about a parameter / research question. You codify that in terms of probability, and then you proceed with your analysis based on the data. When you look at the posterior distributions (or posterior predictive distribution), it should then correctly correspond to the rational "new" state of information about a problem because you've coded your prior information and the data, right?

Sounds good. I'm with you here.

> However, suppose you define a "prior" whereby a parameter must be greater than zero, but it turns out that your state of knowledge is wrong?

Isn't that prior then just an error like any other, like assuming that 2 + 2 = 5 and making calculations based on that?

> What if you cannot codify your state of knowledge as a prior?

Do you mean a state of knowledge that is impossible to encode as a prior, or one that we just don't know how to encode?

> What if your state of knowledge is correctly codified but makes up an "improper" prior distribution so that your posterior isn't defined?

Good question. Is it settled how one should construct the strictly correct priors? Do we know that the correct procedure ever leads to improper distributions?
Personally, I'm not sure I know how to create priors for any problem other than the one where the prior is spread evenly over a finite set of indistinguishable hypotheses. The thing about trying different priors, to see if it makes much of a difference, seems like a legitimate approximation technique that needn't shake any philosophical underpinnings. As far as I can see, it's akin to plugging in different values of an unknown parameter in a formula, to see if one needs to figure out the unknown parameter, or if the formula produces approximately the same result anyway.

> read this book. I promise it will only try to brainwash you a LITTLE.

I read it and I loved it so much for its uncompromising attitude. Jaynes made me a militant radical. ;-) I have an uncomfortable feeling that Gelman sometimes strays from the straight and narrow. Nevertheless, I looked forward to reading the page about Prior Choice Recommendations that he links to in one of the posts you mention. In it, though, I find the puzzling "Some principles we don't like: invariance, Jeffreys, entropy". Do you know why they write that?

u/TheLeaderIsGood · 1 pointr/statistics

This one? Damn, it's £40-ish. Any highlights, or is it just a case of "this book is the highlight"? It's on my wishlist anyway. Thanks.

u/fyl999 · 1 pointr/Christianity

> All I'm saying is that the origin of a claim contains zero evidence as to that claim's truth.

I had a look back through your other posts and found this, which explains a lot, for me anyway. Most people would put some more options in there - yes, no, I'm pretty sure, it's extremely unlikely, etc. Here's what I think is the problem, and why I think you need to change the way you are thinking: your whole concept of what is "logical" or what is "using reason" seems to be constrained to what is formally known as deductive logic. You seem to have a really thorough understanding of this type of logic and have really latched on to it. Deductive logic is just a subset of logic.
There is more to it than that. I was searching for something to show you on other forms of logic and came across this book - "Probability Theory - The Logic of Science" - which looks awesome; I'm going to read it myself, it gets great reviews. I've only skimmed the first chapter... but that seems to be a good summary of how science works - why it does not use just deductive logic. Science draws most of its conclusions from probability; deductive logic is only appropriate in specific cases. Conclusions based on probability - "I'm pretty sure", "this is likely/unlikely" - are extremely valid, and rational. You're forcing yourself to use deductive logic, and only deductive logic, where it's inappropriate.

> You have no way of knowing, and finding out that this person regularly hallucinates them tells you nothing about their actual existence.

Yeah, I think with the info you've said we have, it would be too little to draw a conclusion or even start to draw one. Agreed. It wouldn't take much more info for us to start having a conversation about probabilities though - say we had another person from the planet and he says it's actually the red-striped jagerwappas that are taking over - and that these two creatures are fundamentally incompatible, i.e. if x exists y can't, and vice-versa.

u/bayen · 1 pointr/RPI

I'd suggest MATP 4600, Probability Theory & Applications. Only prerequisite is Calc, if I remember right. Or if you're confident in your time management, maybe read this textbook on your own; it's pretty accessible: https://www.amazon.com/gp/aw/d/0521592712/ (Neither of these will teach you a bunch of statistical tests, but those are easy to abuse if you don't understand the fundamentals ... and very easy to look up if you do understand the fundamentals.)

u/chrispine · 1 pointr/atheism

> For one, you need a categorical definition by which to justify your "probability" with. What, does each time you tell a god to speak deduct 1%?
That's absurdly vague, stupid, and unheard of, so no wonder I never thought you'd actually be arguing this.

I don't happen to know the appropriate decibel-values to assign to E and not-E in this case. But I know the fucking SIGNS of the values. No, I don't know how many times god needs to appear for me to believe that I wasn't drugged or dreaming or just going crazy. But god appearing is evidence for the existence of god, and him not appearing is evidence against. Does it really matter if we are talking intervals of 5 seconds versus lifetimes? 3 pages, and you don't even have to go to a library! Check it out: http://www.amazon.com/reader/0521592712?_encoding=UTF8&ref_=sib%5Fdp%5Fpt#reader Click on "First Pages" to get to the front. You can lead a horse to water...

u/llama-lime · 1 pointr/reddit.com

"Bayesian" is a very, very vague term, and this article isn't talking about Bayesian networks (I prefer the more general term graphical models), or Bayesian spam filtering, but rather a mode of "logic" that people use in everyday thinking. Thus the better comparison would be not to neural nets, but to propositional logic, which I think we can agree doesn't happen very often in people unless they've had lots of training. My favorite text on Bayesian reasoning is the Jaynes book. Still, I'm less than convinced by the representation of the data in this article. Secondly, the article isn't even published yet to allow anyone to review it. Thirdly, I'm suspicious of any researcher that talks to the press before their data is published. So in short, the Economist really shouldn't have published this, and should have waited. Yet another example of atrocious science reporting.

u/PM_ME_YOUR_WOMBATS · 1 pointr/statistics

Somewhat facetiously, I'd say the probability that an individual who has voted in X/12 of the last elections will vote in the next election is (X+1)/14. That would be my guess if I had no other information.
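The (X+1)/14 guess is Laplace's rule of succession: (successes + 1) / (trials + 2), here with 12 past elections as the trials. A quick sketch (my own illustration, not from the comment):

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule: posterior mean of a Bernoulli rate under a uniform
    prior, after `successes` hits in `trials` attempts: (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

# A voter who turned out in 9 of the last 12 elections:
p_next = rule_of_succession(9, 12)   # (9 + 1) / (12 + 2) = Fraction(5, 7)
```

Note that a never-voter still gets probability 1/14 rather than 0, which matches the spirit of the comment: past absence is evidence against voting, not proof.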
As the proverb goes: it's difficult to make predictions, especially about the future. We don't have any votes from the next election to try to discern what relationship those votes have to any of the data at hand. Of course that isn't going to stop people who need to make decisions. I'm not well-versed in predictive modeling (being more acquainted with the "make inference about the population from the sample" sort of statistics), but I wonder what would happen if you did logistic regression with the most recent election results as the response and all the other information you have as predictors. See how well you can predict the recent past using the further past, and suppose those patterns will carry forward into the future. Perhaps someone else can propose a more sophisticated solution. I'm not sure how this data was collected, but keep in mind that a list of people who have voted before is not a complete list of people who might vote now, since there are some first-time voters in every election. If you want to get serious about data modeling in social science, you might check out this book by statistician/political scientist Andrew Gelman.

u/DS11012017 · 1 pointr/AskStatistics

I will second this. I used this book for my year of undergrad foundations of probability and stats. I also really like Casella and Berger's 'Statistical Inference.' https://www.amazon.com/Statistical-Inference-George-Casella/dp/0534243126

u/cherise605 · 1 pointr/AskStatistics

Since you are still in college, why not take a statistics class? Perhaps it can count as an elective for your major. You might also want to consider a statistics minor if you really enjoy it. If these are not options, then how about asking the professor if you can sit in on the lectures? It sounds like you will be able to grasp programming in R; may I suggest trying out SAS?
This book by Ron Cody is a good introduction to statistics with SAS programming examples. It does not emphasize theory though. For theory, I would recommend Casella & Berger; many consider this book to be a foundation for statisticians and it is usually taught at a grad level. Good luck!

u/mathwanker · 1 pointr/math

For probability I'd recommend Introduction to Probability Theory by Hoel, Port & Stone. It has the best explanations of any probability book I've seen, great examples, and answers to most of the problems are in the back (making it well-suited for self-study). I think it's still the best introductory book on the subject, despite its age. Amazon has used copies for cheap. For statistics, you have to be more precise as to what you mean by an "average undergraduate statistics" course. There's a difference between the typical "elementary statistics" course and the typical "mathematical statistics" course. The former requires no calculus, but goes into more detail about various statistical procedures and tests for practical uses, while the latter requires calculus and deals more with theory than practice. Learning both wouldn't be a bad idea. For elementary stats there are lots of badly written books, but there is one jewel: Statistics by Freedman, Pisani & Purves. For mathematical statistics, Introduction to Mathematical Statistics by Hogg & Craig is decent, though a bit dry. I don't think that Statistical Inference by Casella & Berger is really any better. Those are the two most-used textbooks on the subject.

u/gabbriel · 1 pointr/math

Maybe "too applied", depending on your fields, but there's always Casella and Berger, especially if you're in Economics.

u/El-Dopa · 1 pointr/statistics

If you are looking for something very calculus-based, this is the book I am familiar with that is most grounded in that. Though, you will need some serious probability knowledge as well.
If you are looking for something somewhat less theoretical but still mathematical, I have to suggest my favorite. Statistics by William L. Hays is great. Look at the top couple of reviews on Amazon; they characterize it well. (And yes, the price is heavy for both books.... I think that is the cost of admission for such things. However, considering the comparable cost of much more vapid texts, it might be worth springing for it.)

u/whyilaugh · 1 pointr/math

We use Casella and Berger. It glosses over the measure theory somewhat, but it appropriately develops the concept of "a probability". If you haven't had much background in proper math stats, then this is a good place to start (even if you've done the more applied courses).

u/lrnz13 · 1 pointr/statistics

I'm finishing up my stats degree this summer. For math, I took 5 courses: single-variable calculus, multivariable calculus, and linear algebra. My stat courses are divided into three blocks. First block: intro to probability, mathematical stats, and linear models. Second block: computational stats with R, computation & optimization with R, and Monte Carlo methods. Third block: intro to regression analysis, design and analysis of experiments, and regression and data mining. And two electives of my choice: survey sampling & statistical models in finance. Here's a book for intro to probability. There are also lectures available on YouTube: search "MIT intro to probability". For a first course in calculus, search on YouTube: "UCLA Math 31A". You should also search for Berkeley's calculus lectures; the professor is so good. Here's the calc book I used. For linear algebra, search "MIT linear algebra". Here's the book. The probability book I listed covers two courses in probability. You'll also want to check out this book. If you want to go deeper into stats, for example measure theory, you're going to have to take real analysis & a more advanced course on linear algebra.
u/determinot · 1 pointr/math

Since you're an applied math PhD, maybe the following are good. They are not applied, though. This is the book for first-year statistics grad students at OSU: http://www.amazon.com/Statistical-Inference-George-Casella/dp/0534243126/ref=sr_1_1?ie=UTF8&qid=1368662972&sr=8-1&keywords=casella+berger But I like Hogg/Craig much more: http://www.amazon.com/Introduction-Mathematical-Statistics-7th-Edition/dp/0321795431/ref=pd_sim_b_2 I believe each can be found in international editions, and for download on the interwebs.

u/ChaosCon · 1 pointr/AskReddit

Only tangentially relevant, but a really good read! Innumeracy

u/cruise02 · 1 pointr/math

Innumeracy: Mathematical Illiteracy and Its Consequences by John Allen Paulos, and its sequel Beyond Numeracy are two of my favorites.

u/santino314 · 1 pointr/math

Well it's not exactly statistics, rather a bunch of anecdotes on common mistakes and misconceptions about mathematics, but there is this book: "Innumeracy: Mathematical Illiteracy and Its Consequences" by John Allen Paulos (http://www.amazon.com/Innumeracy-Mathematical-Illiteracy-Consequences-Vintage/dp/0679726012) and its topic is vaguely related to OP's concern. I haven't read it all, but so far it was quite fun. Again it is more anecdotal than scientific, and the author might be a little condescending, but it's worth reading.

u/rottedtree · 1 pointr/science

Some great stuff here. However, these are MUST READS. First, for a good introduction to numbers, read Innumeracy: http://www.amazon.com/Innumeracy-Mathematical-Illiteracy-Consequences-Vintage/dp/0679726012 It explains how numbers work very, very well, in a non-technical fashion. Second, read The Structure of Scientific Revolutions by Thomas Kuhn. This excellent, easy-to-read book is simply THE BEST EXPLANATION OF HOW SCIENCE WORKS. Next, The Way Things Work by David Macaulay.
It is not a 'science' book, per se; more of an engineering book, but it is brilliantly written and beautifully illustrated. Then dive into Asimov. "Please Explain" is fantastic. Though dated, so is his Guide to Science. The great thing about these books is that they are all very short and aimed at people who are not technically educated. From there I am sure you will be able to start conquering more material. Honestly, Innumeracy and The Structure of Scientific Revolutions, alone, will fundamentally change the way you look at absolutely everything around you. Genuinely eye-opening.

u/juicyfizz · 1 pointr/learnmath

I took both precalc and calc 1 back to back (and we used Stewart's calc book for calc 1-3). To be honest, concepts like limits and continuity aren't even covered in precalculus, so it isn't like you've missed something huge by skipping precalc. My precalc class was a lot of higher-level college algebra review and then lots and lots and lots of trig. I honestly don't see how you'd need much else aside from PatrickJMT and lots of example problems. It may be worthwhile for you to pick up "The Calculus Lifesaver" by Adrian Banner. It's a really great book that breaks down the calc 1 concepts pretty well. Master limits, because soon you'll move on to differentiation and then everything builds from that. Precalc was my trig review that I was thankful for when I got to calc 2, however, so if you find yourself needing calculus 2, please review as much trig as you can. If you need some resources for trig review, PM me. I tutored college algebra, precalc, and calc for 3 years. Good luck!

u/truckbot101 · 1 pointr/math

Hello! It's been a while since I last suggested a resource for calculus - so far, I've been finding the following two books extremely helpful and thought it would be good to share them: 1.
The Calculus Lifesaver: http://www.amazon.com/The-Calculus-Lifesaver-Tools-Princeton/dp/0691130884/ref=sr_1_1?ie=UTF8&qid=1398747841&sr=8-1&keywords=the+calculus+lifesaver I have mostly been using this as my main source of calculus lessons. You can find the corresponding lectures on YouTube - the ones on his site do not work for whatever reason. The material is quite good, but still slightly challenging to ingest (though still much better than other courses out there!). 2. How to Ace Calculus: The Street-Wise Guide. When I first saw this book, I thought it was going to be dumb, but I've been finding it extremely helpful. This is the book I'm using to understand some of the concepts in calculus that are taken for granted (but that I need explained in more detail). It is actually somewhat entertaining while doing an excellent job of teaching calculus. The previous website I recommended to you is quite good at giving you an alternative perspective on calculus, but is not enough to actually teach you how to differentiate or integrate functions on your own. Hope your journey in math is going well!

u/maxximillian · 1 pointr/EngineeringStudents

These are the two things that saved my ass in calc 2: this book, the Calculus Lifesaver, and this guy, Mr. McKeague from MathTV.

u/SoundTheUrethras · 1 pointr/AdviceAnimals

Well the good news is that we have more resources available now than even 5 years ago. :) I'm in calc 1 right now, and was having trouble putting the pieces together into a whole that made sense. A few of my resources are classroom-specific, but many would be great for anyone not currently in a class.

Free:

www.khanacademy.org - free video lectures and practice problems on all manner of topics, starting with elementary algebra. You can start at the beginning and work your way through, or just start wherever.

http://ocw.mit.edu/index.htm - free online courses and lessons from MIT (!!) where you can watch lectures on a subject, do practice problems, etc.
Use just for review or treat it like a course, it's up to you. Cheap$$http://www.amazon.com/How-Ace-Calculus-Streetwise-Guide/dp/0716731606/ref=sr_1_1?ie=UTF8&amp;amp;qid=1331675661&amp;amp;sr=8-1$10ish shipped for a book that translates calculus from math-professor to plain english, and is funny too.

http://www.amazon.com/Calculus-Lifesaver-Tools-Excel-Princeton/dp/0691130884/ref=pd_cp_b_1

$15 for a book that is 2-3x as thick as the previous one, a bit drier, but still very readable. And it covers Calc 1-3. u/legogirl · 1 pointr/learnmath This book and his videos: https://www.amazon.com/Calculus-Lifesaver-Tools-Princeton-Guides/dp/0691130884 I was good at calculus, but this book made anything I struggled to fully understand much easier. He does a good job of looking back at how previous work supports and and talks about how this relates to future topics. u/jctapp · 1 pointr/learnmath The best way to learn is take the class and find your deficiencies. Khan Academy is also great to get a base line of where you are. If you need help with calc. And precal, calculus lifesaver book is good. lifesaver calculus amazon u/Natsfan3754 · 1 pointr/learnmath u/Oh_How_Absurd · 1 pointr/math Eh. Not every abstract algebra/group theory textbook even tries to link its topics to the intuitive modes of thinking that evolution had to work with. https://www.amazon.com/Visual-Group-Theory-Problem-Book/dp/088385757X This book might be worth a shot? ..You're probably beyond this level though. u/ood_lambda · 1 pointr/AskEngineers I don't, but I'm in the minority of the field. It definitely required a lot of catch-up in my first couple years. If you want to try and break in I can make some suggestions for self-teaching. Linear Algebra is the backbone of all numerical modeling. I can make two suggestions to start with: • I was very impressed with Jim Hefferon's book. It's part of an open courseware project so is available for free here (along with full solutions) but for$13 used I'd rather just have the book.

• The Gilbert Strang course on MIT Open Courseware is very good as well. I didn't like his book as well, but the video lectures are excellent as supplemental material for when I had questions from Hefferon.

As for the actual FEA/CFD implementations:

• Numerical Heat Transfer and Fluid Flow ($22, used) seems to the standard reference for fluid flow. I'm relatively new to CFD so can't comment on it, but it seems to pop up constantly in any discussion of models or development. • Finite Element Procedures, ($28, used) and the associated Open Courseware site. The solid mechanics (FEA) is very well done, again, haven't looked much at the fluids side.

• 12 steps to Navier Stokes. If you're interested in Fluids, start here. It's an excellent introduction and you can have a basic 2D Navier Stokes solver implemented in 48 hours.

Note that none of these will actually teach you the the software side, but most commercial packages have very good tutorials available. These all teach the math behind what the solver is doing. You don't need to be an expert in it but should have a basic idea of what is going on.

Also, OpenFoam is a surprisingly good open source CFD package with a strong community. I'd try and use it to supplement your existing work if possible, which will give you experience and make future positions easier. Play with this while you're learning the theory, don't approach it as "read books for two years, then try and run a simulation".
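For a taste of what that "12 steps to Navier Stokes" course starts with, here is a minimal sketch of its first step, 1D linear convection, advanced with a forward-time/backward-space (upwind) finite difference. The grid and step sizes below are illustrative choices, not taken from the course itself.

```python
nx, nt = 41, 25               # spatial points, time steps (illustrative)
dx = 2.0 / (nx - 1)           # domain is x in [0, 2]
c, dt = 1.0, 0.02             # wave speed, time step; CFL = c*dt/dx = 0.4

# Initial condition: a "hat" function, u = 2 on 0.5 <= x <= 1, else u = 1
u = [2.0 if 0.5 <= i * dx <= 1.0 else 1.0 for i in range(nx)]

# Forward-time, backward-space (upwind) update of u_t + c*u_x = 0
for _ in range(nt):
    un = u[:]                 # snapshot of the previous time level
    for i in range(1, nx):
        u[i] = un[i] - c * dt / dx * (un[i] - un[i - 1])

# The hat should have been carried right by roughly c * nt * dt = 0.5
mass = sum(v - 1.0 for v in u)
centroid = sum(i * dx * (u[i] - 1.0) for i in range(nx)) / mass
assert centroid > 1.0         # the initial centroid was at x = 0.75
```

The exact solution just translates the hat to the right; the upwind scheme also smears it out, which is the kind of numerical diffusion the later steps of the course teach you to manage.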
u/roninsysop · 1 pointr/learnmath

I find Gilbert Strang's Introduction to Linear Algebra quite accessible, and seems to be aimed towards the practical (numerical) side of things. His video lectures are also quite good, IMHO.

u/ekg123 · 1 pointr/learnmath

&gt; To be honest, I do still think that step 2 is a bit suspect. The inverse of [;AA;] is [;(AA)^{-1};] . Saying that it's [;A^{-1}A^{-1};] seems to be skipping over something.

I realized how right you are when you say this after I reread the chapter on Inverse Matrices in my book. I am using Introduction to Linear Algebra by Gilbert Strang btw. I'm following his course on MIT OCW.

The book says: If [;A;] and [;B;] are invertible, then so is [;AB;]. The inverse of a product [;AB;] is [;(AB)^{-1}=B^{-1}A^{-1};].

So, before I went through with step two, I would have to have proved that [;A;] is indeed invertible.

&gt;Their proof is basically complete. You could add the step from A2B to (AA)B which is equivalent to A(AB) due to the associativity from matrix multiplication and then refer to the definition of invertibility to say that A(AB) = I means that AB is the inverse of A. So you can make it a bit more wordy (and perhaps more clear), but the basic ingredients are all there.

I will write up the new proof right here, in its entirety. Please let me know what you think and what I need to fix and/or add.

Theorem: if [;B;] is the inverse of [;A^2;], then [;AB;] is the inverse of A.

Proof: Assume [;B;] is the inverse of [;A^2;]

1. Since [;B;] is the inverse of [;A^2;], we can say that [;A^2B=I;]

2. We can write [;A^2B=I;] as [;(AA)B=I;]

3. We can rewrite [;(AA)B=I;] as [;A(AB)=I;] because of the associative property of matrix multiplication.

4. Therefore, by the definition of matrix invertibility, since [;A(AB)=I;], [;AB;] is indeed the inverse of [;A;].

Q.E.D.
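The proof can also be sanity-checked numerically; here is a minimal sketch in plain Python, using an arbitrary invertible 2x2 matrix chosen purely for illustration:

```python
# If B is the inverse of A^2, then A(AB) = I, so AB is the inverse of A.

def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    """Invert a 2x2 matrix [[a, b], [c, d]] via the adjugate formula."""
    a, b = X[0]
    c, d = X[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2.0, 1.0], [1.0, 1.0]]       # arbitrary invertible example: det = 1
A2 = matmul(A, A)
B = inv2(A2)                       # B = (A^2)^{-1}
AB = matmul(A, B)
product = matmul(A, AB)            # should be the identity matrix

for i in range(2):
    for j in range(2):
        expected = 1.0 if i == j else 0.0
        assert abs(product[i][j] - expected) < 1e-9
```

This checks only one example, of course; the proof above is what shows it holds for every invertible A.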

Do I have to include anything about the proof being correct for a right-inverse and a left-inverse?

&gt; That's a great initiative! Probably means you're already ahead of the curve. Even if you get a step (arguably) wrong, you're still practicing with writing up proofs, which is good. Your write-up looks good to me, except for the questionability of step 2. In step 3 (and possibly others) you might also want to mention what you are doing exactly. You say "therefore", but it might be slightly clearer if you explicitly mention that you're using your assumption. You can also number everything (including the assumption), and then put "combining statement 0 and 2" to the right (where you can also go into a bit more detail: e.g. "using associativity of multiplication on statement 4").

I haven't begun my studies at university yet, but I sure am glad that I exposed myself to proofs before taking an actual discrete math class. I think that very few people get exposed to proof writing in the U.S. public school system. I've completed all of the Khan Academy math courses, and the MIT OCW Math for CS course is still very difficult. I basically want to develop a very strong foundation in proof writing and in all the core courses I will take as a CS major, and then I will hopefully have an easier time with my schoolwork once I begin in the fall. Hopefully this prior knowledge will keep my GPA high too. I really appreciate all the constructive criticism about my proof. I will try to make my proofs as detailed as possible from now on.
u/GhostOfDonar · 1 pointr/math

I recommend the math video lectures at MIT [1]. Single variable calculus is 39 lectures at about 50 minutes each [2]. Go through the first ones and you'll have not only a refresher but also a head start. While you are at it, don't miss Prof. Gilbert Strang's video lectures on Linear Algebra [3]; that man is phenomenal (he teaches based on one of his own books [4]).

Resources:
1,
2,
3,
4.

u/acetv · 1 pointr/math

Differential geometry track. I'll try to link to where a preview is available. Books are listed in something like an order of perceived difficulty. Check Amazon for reviews.

Calculus

Thompson, Calculus Made Easy. Probably a good first text, well suited for self-study but doesn't cover as much as the next two and the problems are generally much simpler. Legally available for free online.

Stewart, Calculus. Really common in college courses, a great book overall. I should also note that there is a "Stewart lite" called Calculus: Early Transcendentals, but you're better off with regular Stewart. Huh, it looks like there's a new series called Calculus: Concepts and Contexts which may be a good substitute for regular Stewart. Dunno.

Spivak, Calculus. More difficult, probably better than Stewart in some sense.

Linear Algebra

Poole, Linear Algebra. I haven't read this one but it has great reviews so I might as well include it.

Strang, Introduction to Linear Algebra. I think the Amazon reviews summarize how I feel about this book. Good for self-study.

Differential Geometry

Pressley, Elementary Differential Geometry. Great text covering curves and surfaces. Used this one in my undergrad course.

Do Carmo, Differential Geometry of Curves and Surfaces. Probably better left for a second course, but this one is the standard (for good reason).

Lee, Riemannian Manifolds: An Introduction to Curvature. After you've got a grasp on two and three dimensions, take a look at this. A great text on differential geometry on manifolds of arbitrary dimension.

------

Start with calculus, studying all the single-variable stuff. After that, you can either switch to linear algebra before doing multivariable calculus or do multivariable calculus before doing linear algebra. I'd probably stick with calculus. Pay attention to what you learn about vectors along the way. When you're ready, jump into differential geometry.

Hopefully someone can give you a good track for the other geometric subjects.

u/ProceduralDeath · 1 pointr/mathbooks

Strang's book looks nice, and I noticed he has accompanying lectures which is good. I found this version, which is more or less in my price range but appears a bit outdated. https://www.amazon.com/Introduction-Linear-Algebra-Fourth-Gilbert/dp/0980232716

u/skytomorrownow · 1 pointr/compsci

I think for a rigorous treatment of linear algebra you'd want something like Strang's class book:

http://www.amazon.com/Introduction-Linear-Algebra-Fourth-Edition/dp/0980232716

For me, what was great about this book was that it approached linear algebra via practical applications, and those applications were more relevant to computer science than pure mathematics, or electrical engineering like you find in older books. It's more about modern applications of LA. It's great for after you've studied the topic at a basic level. It's a great synthesis of the material.

It's a little loose, so if you have some basic chops, it's fantastic.

u/BallsJunior · 1 pointr/learnmath

To piggy back off of danielsmw's answer...

&gt; Fourier analysis is used in pretty much every single branch of physics ever, seriously.

I would phrase this as, "partial differential equations (PDE) are used in pretty much every single branch of physics," and Fourier analysis helps solve and analyze PDEs. For instance, it explains how the heat equation works by damping higher frequencies more quickly than the lower frequencies in the temperature profile. In fact Fourier invented his techniques for exactly this reason. It also explains the uncertainty principle in quantum mechanics. I would say that the subject is most developed in this area (but maybe that's because I know most about this area). Any basic PDE book will describe how to use Fourier analysis to solve linear constant coefficient problems on the real line or an interval. In fact many calculus textbooks have a chapter on this topic. Or you could Google "fourier analysis PDE". An undergraduate level PDE course may use Strauss' textbook whereas for an introductory graduate course I used Folland's book which covers Sobolev spaces.

If you wanted to study Fourier analysis without applying it to PDEs, I would suggest Stein and Shakarchi or Grafakos' two volume set. Stein's book is approachable, though you may want to read his real analysis text simultaneously. The second book is more heavy-duty. Stein shows a lot of the connections to complex analysis, i.e. the Paley-Wiener theorems.

A field not covered by danielsmw is that of electrical engineering/signal processing. Whereas in PDEs we're attempting to solve an equation using Fourier analysis, here the focus is on modifying a signal. Think about the equalizer on a stereo. How does your computer take the stream of numbers representing the sound and remove or dampen high frequencies? Digital signal processing tells us how to decompose the sound using Fourier analysis, modify the frequencies and re-synthesize the result. These techniques can be applied to images or, with a change of perspective, can be used in data analysis. We're on a computer so we want to do things quickly which leads to the Fast Fourier Transform. You can understand this topic without knowing any calculus/analysis but simply through linear algebra. You can find an approachable treatment in Strang's textbook.
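That decompose-modify-resynthesize idea can be sketched with a plain O(N²) DFT rather than the fast transform; the signal and the frequency bins chosen below are purely illustrative:

```python
import cmath
import math

def dft(x):
    """Plain O(N^2) discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse transform: re-synthesize the signal from its frequencies."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

N = 8
# Signal = a low-frequency cosine plus a high-frequency cosine
x = [math.cos(2 * math.pi * 1 * n / N) + math.cos(2 * math.pi * 3 * n / N)
     for n in range(N)]

X = dft(x)
X[3] = X[5] = 0           # damp the high frequency (bin 3 and its mirror, bin 5)
y = [v.real for v in idft(X)]

# What survives is the low-frequency component alone
low = [math.cos(2 * math.pi * 1 * n / N) for n in range(N)]
assert all(abs(a - b) < 1e-9 for a, b in zip(y, low))
```

The FFT computes exactly the same transform, just in O(N log N) time, which is what makes real-time equalizers practical.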

If you know some abstract algebra, topology and analysis, you can study Pontryagin duality as danielsmw notes. Sometimes this field is called abstract harmonic analysis, where the word abstract means we're no longer discussing the real line or an interval but any locally compact abelian group. An introductory reference here would be Katznelson. If you drop the word abelian, this leads to representation theory. To understand this, you really need to learn your abstract/linear algebra.

Random links which may spark your interest:

u/dp01n0m1903 · 1 pointr/math

Yes, -5/9 is a typo, just as you say.

By the way, the lecturer in the MIT video is Gilbert Strang, and his textbook, Introduction to Linear Algebra is the text that he uses for the course. I'm not really familiar with that book, but I believe that it has a pretty good reputation. See for example, this recent reddit thread, where Strang is mentioned several times.

u/crowsmen · 1 pointr/learnmath

&gt; don't think that there is a logical progression to approaching mathematics

Well, this might be true of the field as a whole, but def not true when it comes to learning basic undergrad level math after calc 1, as the OP asked about. There are optimized paths to gaining mathematical maturity and sufficient background knowledge to read papers and more advanced texts.

&gt; Go to the mathematics section of a library, yank any book off the shelf, and go to town.

I would definitely NOT do this, unless you have a lot of time to kill. I would, based on recommendations, pick good texts on linear algebra and differential equations and focus on those. I mean focus because it is easy in mathematics to gloss over difficulties.

My recommendation, since you are self-studying, is to pick up Gil Strang's linear algebra book (go for an older edition) and look up his video lectures on linear algebra. That's a solid place to start. I'd say that course could be done, with hard work, in a summer. For a differential equations book, I'm not exactly sure. I would seek out something with some solid applications in it, like maybe this: http://amzn.com/0387978941

That is more than a summer's worth of work.

Sorry, agelobear, to be such a contrarian.

u/You_meddling_kids · 1 pointr/MagicArena
u/Radeh · 1 pointr/pics

If you think that sample size is enough, you really need this.

Like I said, you are clueless about statistics...and just fyi, my username's from a damn scifi show you dimwit.

As for Munich, I used to live there so get the fuck out with your nonsense :D

u/Fabul0usLumin0us · 1 pointr/france

&gt; That works out well, since the experiment was replicated 10 times.

lol, at that level of stupidity there's nothing I can do for you, sorry

EDIT: no, come on, I'll actually help you a bit. Go read this: https://www.amazon.com/Statistics-Dummies-Math-Science/dp/1119293529

u/tuga2 · 1 pointr/CrazyIdeas

Here is some recommended reading. It's absurd to attempt a 1-to-1 comparison when whites are between 62% and 77% (depending on whether you count Hispanics as white) of the total population of the US. It makes much more sense to make a per capita comparison, i.e. total mass shootings per 100,000 people.

u/tactics · 1 pointr/learnmath

I suggest either Tu or (easy) Lee.

u/HigherMathHelp · 1 pointr/math

Both Lee's and Tu's books are on my reading list. They both seem excellent.

However, my vote is for Professor Tu's book, mainly because it manages to get to some of the big results more quickly, and he evidently does so without a loss of clarity. In the preface to the first edition, he writes "I discuss only the irreducible minimum of manifold theory that I think every mathematician should know. I hope that the modesty of the scope allows the central ideas to emerge more clearly." Consequently, his book is roughly half the length of Lee's.

I'd rather hit the most essential points first, and then if I want a more expansive view, I'd pick up Lee.

Disclaimer: I may not participate very frequently, as I have some other irons in the fire, so you might want to weigh my vote accordingly. If your sub sticks around for a while, I'd definitely like to join in when I can.

u/grandzooby · 1 pointr/rstats
u/deepaksuresh · 1 pointr/MachineLearning

I found Prof. Joseph Blitzstein's course on statistics, at Harvard, engaging. First I watched his lectures and worked through the problem sets. This was extremely rewarding, so I went on to work through his book on probability. To me, what separates him from other professors is that he takes a lot of effort to build intuition about statistical concepts.
Stat110 is the course website. You can find his book here.

u/josquindesprez · 1 pointr/statistics

If you want an extremely practical book to complement BDA3, try Statistical Rethinking.

It's got some of the clearest writing I've seen in a stats book, and there are some good R and STAN code examples.

u/lickorish_twist · 1 pointr/learnmath

This may be good for example:
http://www.amazon.com/The-Humongous-Book-Algebra-Problems/dp/1592577229/ref=pd_sim_b_5?ie=UTF8&refRID=0H1GD8HDQZB58PWTY0F5
You could take a look and see if it suits you.

But don't trust me on this. Others on /r/learnmath or /r/matheducation may be more knowledgeable than me about good algebra workbooks.

u/GOD_Over_Djinn · 1 pointr/math

I'm late to this party, but as a lot of other people have said, missing a negative sign somewhere is not an indication that you're bad at math. What is important in math is understanding why things are the way that they are. If you can look at the spot where you missed a negative sign and understand exactly why there should have been a negative sign there, then you're doing fine. Being good at math isn't so much about performing the calculations—I mean, computers can find the roots of a quadratic polynomial pretty reliably, so probably no one's going to hire you to do that by hand—but it's following the chain of reasoning that takes you from problem to solution and understanding it completely.

That said, there are things you can do to make yourself better at performing the calculations. Go back to basics, and I mean wayyyy back to like grade 5. A lot of students are seriously lacking skills that they should have mastered around grade 5, and that will really screw up your ability to do algebra well. For instance, know your times tables. Know, and I mean really know and understand, how arithmetic involving fractions works: how and why and when do we put two fractions over a common denominator, what does it mean to multiply and divide by a fraction, and so on. It's elementary stuff, but if you can't do it with numbers then you'll have an even harder time doing it with x's and y's. Make sure you understand the rules of exponents: Do you know how to simplify (a^(2)b^(3))^(2)? How about (a^(3)b)/(ab^(5))? How about √(3^(4))? What does it mean to raise a number to a negative power? What about a fractional power? These things need to be drilled into you so that you don't even think twice about them, and the only way to make it that way is to go through some examples really carefully and then do as many problems as you can. Try to prove the things to yourself: why do exponents behave the way that they do? Go out and get yourself something like this and just work through it and make sure you understand exactly why everything is the way that it is.
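Those exponent rules are easy to spot-check numerically; a quick sketch, where the test values are arbitrary nonzero numbers chosen just for illustration:

```python
a, b = 2.0, 3.0   # arbitrary nonzero test values

# (a^2 * b^3)^2 = a^4 * b^6  (a power of a product, then a power of a power)
assert abs((a**2 * b**3)**2 - a**4 * b**6) < 1e-9

# (a^3 * b) / (a * b^5) = a^2 * b^(-4)  (subtract exponents when dividing)
assert abs((a**3 * b) / (a * b**5) - a**2 * b**-4) < 1e-9

# sqrt(3^4) = 3^(4/2) = 3^2 = 9  (a square root is the 1/2 power)
assert (3**4) ** 0.5 == 9.0

# A negative power is a reciprocal; a fractional power is a root
assert 2**-3 == 1 / 2**3
assert abs(8 ** (1 / 3) - 2.0) < 1e-9
```

A numeric check like this won't replace understanding why the rules hold, but it's a fast way to catch a misremembered rule before it fossilizes.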

Feel free to PM me if you are stuck on specific stuff.

u/senseofdecay · 1 pointr/math

This is one of the best for self-teaching. The examples are very clear, so you don't get tripped up by them jumping steps. You will need to get more problems from somewhere, like a more formal textbook, but this will help you get the idea of what to do instead of fuming at an impasse.

http://www.amazon.com/The-Humongous-Book-Calculus-Problems/dp/1592575129

there's also trig and precalc versions if he needs the review.

http://www.amazon.com/The-Humongous-Book-Algebra-Problems/dp/1592577229/ref=pd_bxgy_b_text_z

http://www.amazon.com/The-Humongous-Book-Trigonometry-Problems/dp/1615641823/ref=pd_bxgy_b_text_y

u/bryanrabbit · 1 pointr/learnmath

It's a lot of work but with this book I lost my math anxiety and actually started to enjoy math. The author's philosophy is the only way to get better at algebra is to just do a lot of algebra, it starts out with the most basic fundamentals you need to know too, like if you have trouble with negative numbers or fractions (as I did). It's possible you just need a recap on the foundational stuff you forgot in grade school + more practice. By the end of the book you'll be working with functions and logarithms and you'll understand it.

u/AtomPhys · 0 pointsr/northernireland

A lot of people on this thread could do with

1. Reading the article and not just the headline
2. Reading this book
Statistics For Dummies, 2E https://www.amazon.co.uk/dp/0470911085/ref=cm_sw_r_cp_apa_w4xUAbNGYEP37
u/wolf_387465 · 0 pointsr/sto

&gt; The same way that buying a lotto ticket this week doesn't increase my odds of winning next weeks draw if I don't win.

no, but buying multiple tickets this week increases your chance to win this week

&gt; You would be correct if you could use multiple keys on one box to increase your odds of getting a ship, but thats not how it work.

yes, i am also correct if i can use multiple keys on multiple boxes

&gt; You only get to buy 1 ticket to this weeks lotto, one for next weeks etc etc

not really, you can buy as many tickets as you can afford
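The arithmetic behind this is the complement rule: with n independent tries at win probability p each, the chance of at least one success is 1 - (1 - p)^n, and it grows with n whether the tries are in one draw or spread across draws. A tiny sketch, with p chosen purely for illustration:

```python
def p_at_least_one(n, p):
    """Probability of at least one success in n independent tries."""
    return 1 - (1 - p) ** n

p = 0.01                       # illustrative chance a single key/ticket wins

one = p_at_least_one(1, p)     # one try: exactly p
ten = p_at_least_one(10, p)    # ten independent tries

assert abs(one - p) < 1e-12
assert ten > one               # more independent tries -> better overall odds
assert abs(ten - (1 - 0.99**10)) < 1e-12
```

Note this is the chance of at least one win overall, not an improvement in any single box's odds, which is where the two posters were talking past each other.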

i suggest https://www.amazon.com/Probability-Dummies-Deborah-J-Rumsey/dp/0471751413/ unless you want to make a fool out of yourself some more...

u/INTEGRVL · 0 pointsr/matheducation

Introduction to Linear Algebra by Serge Lang.

http://www.amazon.com/Introduction-Linear-Algebra-Serge-Lang/dp/3540780602

Or Introduction to Linear Algebra by Gilbert Strang

http://www.amazon.com/Introduction-Linear-Algebra-Fourth-Edition/dp/0980232716

I have not used the Strang book, but I hear it is all right for non-mathematicians.

u/SymmetricBrightTiger · 0 pointsr/politics

That's a start. Now read this to refresh your memory.

https://www.amazon.com/Statistics-Dummies-Math-Science/dp/1119293529

u/donmcronald · -1 pointsr/canada

Try this.