Reddit reviews An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements

We found 25 Reddit comments about An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements. Here are the top ones, ranked by their Reddit score.


25 Reddit comments about An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements:

u/casaubon · 4 pointsr/funny

This image was used for the cover of a famous text on error analysis.

u/nanokelvin · 3 pointsr/askscience

An Introduction to Error Analysis by John R. Taylor is the text that undergrads at UC Berkeley use. It's pretty decent.

As an aside, I think that the undergraduate sequence at most schools does a terrible job of teaching about uncertainty and error analysis. I'm a PhD candidate at Berkeley (graduating in December!), and my dissertation involves high-precision measurements that test the Standard Model, so analyzing sources of uncertainty is my bread and butter. I really appreciate how approximations, models, and measurement precision are interrelated.

I'm really curious to see what resources other people put here.

u/PrincessZig · 3 pointsr/CatastrophicFailure

It’s the cover of one of my favorite books I used in college. I still keep it on my desk. Error Analysis by John Taylor

u/craklyn · 3 pointsr/starcraft

In fact, it is possible to give error bars from one exact measurement. For example, let's say I count how many rain drops hit my hand in 5 seconds and the result is 25. The number of rain drops striking my hand in a given length of time will form a Poisson distribution. One can argue that based on my one measurement, the best estimate I can make of the true rate of rain striking my hand each 5 seconds is 25 +/- sqrt(25) = 25 +/- 5.

As you might intuit, the uncertainty of the mean number of drops striking my hand will decrease as more measurements are taken. This tends to drop like 1/sqrt(N), where N is the number of 5-second raindrop measurements I make.
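As a rough illustration of the counting statistics described above (my own sketch, not part of the original comment; the rate of 25 drops per interval is just the example number), a short Python simulation shows the single-count estimate N ± sqrt(N) and how the uncertainty on the mean falls off like 1/sqrt(N) as more intervals are counted:

```python
# Hypothetical sketch: Poisson counting statistics for raindrops in
# 5-second intervals, assuming a true rate of 25 drops per interval.
import numpy as np

rng = np.random.default_rng(0)
true_rate = 25.0

# A single measurement: the best estimate is the count itself,
# with uncertainty ~ sqrt(count).
single = rng.poisson(true_rate)
print(f"single count: {single} +/- {np.sqrt(single):.1f}")

# Repeating the 5-second count N times shrinks the uncertainty on the
# mean roughly as sqrt(mean)/sqrt(N), i.e. like 1/sqrt(N).
for n in (1, 4, 16, 64, 256):
    counts = rng.poisson(true_rate, size=n)
    err = np.sqrt(counts.mean()) / np.sqrt(n)
    print(f"N = {n:3d}: mean = {counts.mean():6.2f} +/- {err:.2f}")
```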

This style of problem is very standard in any introductory statistics textbook, but I can give you a particular book if you'd like to look into it further:

An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements by John R. Taylor

These plots are "distributions" in the sense that I meant the word. A distribution is simply a collection of values placed side by side; when you arrange each month's data point side by side, that's a distribution.

u/Thaufas · 3 pointsr/chemhelp

Are you interested in systematic errors, random errors, or both? Ignoring systematic errors, with the information that you've given, here are the obvious things to consider:

  1. What is the purity of the solute you will be weighing and the solvent you will be diluting it with?

  2. What is the uncertainty in the balance that you will be using to weigh the solvent?

  3. What is the uncertainty in the volumetric flask that you will be using to measure the volume of the final solution?

  4. What is the uncertainty of the DSC instrumentation you will be using to measure the transition temperatures? Note that the uncertainty in most instrumental measurements varies as a function of the value being measured.

For each of the items above, you can determine the uncertainties with a simple design of experiments. For validated instrumentation, the uncertainties will be specified as part of the IQ/OQ/PQ process, but even so, you should still verify them yourself.

Once you have these values, calculating how each of them contributes to the final error is relatively straightforward using the principles of error propagation; there are many books and websites devoted to the topic. I have a copy of John Taylor's book, which I like. It does contain a fair number of errors (I know, ironic), simply because it has so many equations worked out in detail, but the principles of error propagation are taught very well and the minor math errors are easy to spot.
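To make the propagation step concrete, here is a minimal sketch (the masses, tolerances, and molar mass are hypothetical placeholders, not values from the original post) applying the standard quadrature rule to a concentration c = m·p/(M·V) prepared from a weighed solute in a volumetric flask:

```python
# A minimal sketch with hypothetical numbers: propagate balance, purity,
# and flask uncertainties into a concentration c = m * p / (M * V) using
# the quadrature rule for products and quotients:
# (dc/c)^2 = (dm/m)^2 + (dp/p)^2 + (dV/V)^2
import math

m, dm = 0.2500, 0.0002      # g, weighed mass and balance uncertainty
p, dp = 0.995, 0.002        # purity (fraction) and its uncertainty
V, dV = 0.1000, 0.00008     # L, flask volume and its tolerance
M = 180.16                  # g/mol, molar mass (treated as exact here)

c = m * p / (M * V)
rel = math.sqrt((dm / m) ** 2 + (dp / p) ** 2 + (dV / V) ** 2)
print(f"c = {c:.5f} +/- {c * rel:.5f} mol/L  ({100 * rel:.2f}%)")
```

Because the relative uncertainties add in quadrature, the largest single term dominates the result, which is usually where the design-of-experiments effort is best spent.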
u/Loco_Mosquito · 3 pointsr/AskPhysics
u/listens_to_galaxies · 3 pointsr/AskPhysics

The idea of significant figures is a simplification of error analysis. It doesn't produce perfect results, as you've found in your example. It's useful as a simple rule of thumb, especially for students, but any proper analysis would use real error analysis. Your approach of looking at the range of possible values is good, and is basically the next level of complexity after sig figs.

The problem with error analysis is that it's a bit of a bottomless rabbit hole in terms of complexity: you can make things very complicated very quickly if you try to do things as accurately as possible. For example, the extreme values in your range of possible times are less likely than the central values, and since you're using the inverse of the time, that produces a non-uniform distribution in the velocities. Computing the actual probability distribution is a proper pain in the ass.
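To illustrate that last point, here is a quick Monte Carlo sketch of my own (the distance and time values are made up, not from the original question): a symmetric uncertainty on a measured time turns into a skewed, non-uniform distribution for v = d/t.

```python
# Hypothetical example: symmetric uncertainty in t gives a skewed
# distribution in v = d/t.
import numpy as np

rng = np.random.default_rng(1)
d = 1.00                      # m, treated as exact for simplicity
t0, dt = 0.50, 0.05           # s, central time and its uncertainty

t = rng.normal(t0, dt, size=100_000)
v = d / t

print(f"naive range: {d/(t0+dt):.3f} .. {d/(t0-dt):.3f} m/s")
print(f"median v:    {np.median(v):.3f} m/s (central value {d/t0:.3f})")
print(f"mean v:      {v.mean():.3f} m/s  <- pulled above d/t0 by the skew")
```

The endpoint range is not symmetric about d/t0, and the mean of v is pulled slightly above it, which is exactly the sort of subtlety that sig figs can't capture.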

My advice is this: if you're a high school student or a non-physics university student, stick to sig figs: it's not perfect, but it's good enough for the sorts of problems you'll be working with. If you're a physics major, you should learn some basic error analysis in your lab courses. If you're really interested in learning to do it properly, I think the most common textbook is the 'Introduction to Error Analysis' by Taylor.

u/sheseeksthestars · 2 pointsr/learnmath

This book about error analysis is really good

I think the rule about sig figs is that you want your uncertainty rounded to the same decimal place as the last significant figure of your result. So your numbers would be 5.77 ± 0.31.
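A tiny helper along those lines (my own sketch; quoting the uncertainty to two significant figures is a convention I've assumed, not something from the comment):

```python
# Round the uncertainty to a couple of significant figures, then round
# the value to the same decimal place.
from math import floor, log10

def round_with_uncertainty(value, err, err_sig_figs=2):
    # decimal place of the last kept digit of the uncertainty
    place = int(floor(log10(abs(err)))) - (err_sig_figs - 1)
    return round(value, -place), round(err, -place)

print(round_with_uncertainty(5.7712, 0.3082))   # -> (5.77, 0.31)
```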

u/ln2ar · 2 pointsr/MapPorn

>When's the last time someone flew a train into a building?

It's happened before.

u/afarnsworth · 2 pointsr/CatastrophicFailure
u/ZeMoose · 2 pointsr/Physics

I'm potentially interested in picking up a textbook on error analysis. How do we feel about John R. Taylor's book?

u/youaremacunganow · 2 pointsr/OkCupid

I took a Stats for Sci & Eng class (it had this book). All I learned was that stats is really hard and you have to use way more calculus than I initially thought.

u/dotrichtextformat_ · 1 pointr/ThatLookedExpensive
u/kaushik_93 · 1 pointr/Physics

Refer to this book; it will most likely have the answer for you. See pages 60-62, which I think are what you're looking for; if not, that chapter should have it.

u/Doctor_Anger · 1 pointr/CatastrophicFailure

This image is used on the cover of one of my all-time favorite textbooks: Introduction to Error Analysis

https://www.amazon.com/Introduction-Error-Analysis-Uncertainties-Measurements/dp/093570275X

u/omgdonerkebab · 1 pointr/Physics

If you're looking to apply basic error analysis, I recommend Taylor's book:

http://www.amazon.com/Introduction-Error-Analysis-Uncertainties-Measurements/dp/093570275X

It's pretty common to find this book on physics grad students' shelves. You may have already seen it, though, and you may be asking for something deeper.

u/OldLabRat · 1 pointr/chemistry

You need this book.

Until then: the general formula for the propagated uncertainty of a function q(x, y, z, ...) with independent uncertainties δx, δy, δz, ... is δq = sqrt( (δx ∂q/∂x)^2 + (δy ∂q/∂y)^2 + ... ).

For your simple case where q = log10(x), δq = δx/(x*ln(10)).
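As a quick numerical cross-check of that formula (my own sketch; the propagate helper and the values 42 ± 1.5 are hypothetical), the quadrature sum evaluated with finite-difference partial derivatives reproduces the analytic result for q = log10(x):

```python
# Numerical error propagation: delta_q = sqrt( sum_i (dq/dx_i * delta_x_i)^2 )
# with central-difference partial derivatives.
import math

def propagate(q, values, errs, h=1e-6):
    total = 0.0
    for i, (x, dx) in enumerate(zip(values, errs)):
        up = list(values); up[i] = x + h * abs(x)
        dn = list(values); dn[i] = x - h * abs(x)
        dqdx = (q(*up) - q(*dn)) / (2 * h * abs(x))
        total += (dqdx * dx) ** 2
    return math.sqrt(total)

x, dx = 42.0, 1.5
numeric = propagate(lambda x: math.log10(x), [x], [dx])
analytic = dx / (x * math.log(10))
print(numeric, analytic)   # the two should agree closely
```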

Hope this helps.

u/mjanmohammad · 1 pointr/AskPhysics

We used this book in my intro level physics lab for error analysis.

u/erath_droid · 1 pointr/worldnews

I would agree that people can get lost in the illusion of what science can and cannot reasonably do. My course of study was very careful to make sure that people were not indoctrinated. The undergraduate courses were of course devoted to learning basic terminology and principles that have been around for decades if not centuries, but the upper-division courses never presented you with "this is the answer, spit it back" types of material. It was all about teaching us how to design experiments and how to think critically. For example, one of my favorite courses was Advanced Molecular Genetics, where our professor (who had a Nobel Prize and was just teaching for the hell of it, and because he loved tormenting students) would present us with published papers, point us to the "further questions" section, and say "design an experiment that would determine what is actually going on." We were judged on the experiments we designed, and we actually had the equipment to run them, which we did. It would have made a very good reality TV show called "So You Think You're a Scientist." He was brutal. Imagine Gordon Ramsay as a scientist. He would tear you a new one if your experiment was shit, and he had nothing to lose. That class was awesome. You had to have balls to show up every day, because he'd shit all over everything you did unless you had solid facts to back you up.

Come to think of it- I'd watch the shit out of that show.

But yeah, this book was required reading for all of us. It explicitly lays out what science can do and (more importantly) what science can't do.

Relating to our conversation- people severely overestimate what science can and cannot do. GMOs (or any other technology for that matter) can potentially help or potentially harm. What we have to weigh is the potential harm of the new technology versus the actual harm of the current technology.

Here's an example for a thought experiment: horses vs. automobiles. Automobiles emit greenhouse gases and require mining of minerals to make, among other things, catalytic converters. There are risks to using automobiles, but compare them to the hazards of using horses: piles of manure attracting rats and spreading disease, millions of acres of cropland being grown to provide fuel for the horses, and so on.

Old vs new. Neither is perfect. If we wait for something perfect we'll never do anything and become stagnant.

But thanks for the conversation. And just so you know, I have rather thick skin, so your insults didn't faze me at all. Glad we could get to the point where we're having civil discourse.

u/sc_q_jayce · 1 pointr/Reformed
u/desperate_coder · 0 pointsr/pics

I think OkCupid data is all well and good. It represents a certain set of people: those who are comfortable doing online dating on one particular website. Now if we look at the whole population, including those who do not feel the need to use a dating site, we might see a different result.

If you take into account a proper bell curve, you'll see that the fraction of men who are taller than a 5'10 woman is actually only about half, so that majority, if there is one at all, is a small one. There are two separate bell curves: one for men and one for women. So is it really a scarcity of suitors? If not, what could it be?

It could be something like men on a certain dating website being intimidated because of previous social interactions, which skews the data, or a smaller population of tall women on said site. All in all, it is more often than not a person making the decision to rule out a specific group.

So, why don't we blame shallow people instead of people with a certain physical trait?

Here's a nice book on errors

TL;DR: you don't have data, you have a graph. Shallow people are to blame.

u/nutso_muzz · 0 pointsr/Velo

John Taylor did a great job writing this book; I suggest it as a good read. It is still used in physics classes to this day: https://www.amazon.com/Introduction-Error-Analysis-Uncertainties-Measurements/dp/093570275X

It boils down to this: Uncertainty in percent is effectively an error.

So if your left leg does 200 watts (as measured by some mythical leg powermeter that is 100% accurate), you will get a measurement of +/-2% from the Stages unit, meaning the reading drifts from 196 to 204 W (left leg only, remember that). Now if you double that (as Stages does), you get a reading of 392-408 W. That is a total spread of 4% of the doubled value (assuming left-leg power = right-leg power).
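A tiny sketch of that arithmetic (the 200 W figure and the left-equals-right assumption are the comment's hypotheticals, not measured data):

```python
# Left-leg-only meter with +/-2% uncertainty, doubled to estimate total
# power under the assumption that left power equals right power.
true_left = 200.0        # W, from the hypothetical perfectly accurate meter
rel_err = 0.02           # +/-2% claimed accuracy of the left-leg reading

lo, hi = true_left * (1 - rel_err), true_left * (1 + rel_err)
print(f"left-leg reading:  {lo:.0f} .. {hi:.0f} W")
print(f"doubled estimate:  {2*lo:.0f} .. {2*hi:.0f} W "
      f"(total spread {2*hi - 2*lo:.0f} W on 400 W)")
```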

As for your other question: the claim of accuracy needs to be made about their measurement, not their calculated value. The calculated value (as you have pointed out) is based on an assumption that both legs put out the same power. You can't account for that in marketing.

u/Credulous7 · -6 pointsr/neoliberal

> This is objectively untrue. How in the world did you come away with that conclusion?

You literally just Google Scholar'd "economics quantifying uncertainty" and assumed things would be there. There isn't anything, did you look? It's all about the psychological concept of uncertainty, like "ohhh, I don't know what the Fed does next," instead of measurement uncertainty. Do you know what that is? John Taylor wrote a good book on it.

> The statistic only applies to experimental economics

What? "Economics is only non-replicable when we actually try it out in real life." I'm actually starting to feel bad for stressing you out with this argument.

> I assume you wouldn't tell me that psychology is all fake. Then again, maybe you would.

Nearly all of psychology is fake.

> 5) Let's summarize. One study of 18 papers in experimental economics found that 60% were replicated, 40% were not, and you're ready to throw out the entire field?

Let me be very clear. Experimental economics is the shining star of economics. It is the upper bound, 1000x as credible as macroeconomics. And it still fails to replicate 40% of the time.

> Even studies in medicine often fail replication

Medicine has a bad replication problem as well, owing to the small sample sizes forced by high experiment costs, and to p-hacking driven by the pressure to publish.