Reddit reviews Superforecasting: The Art and Science of Prediction

We found 14 Reddit comments about Superforecasting: The Art and Science of Prediction. Here are the top ones, ranked by their Reddit score.


14 Reddit comments about Superforecasting: The Art and Science of Prediction:

u/Swordsmanus · 38 pointsr/geopolitics

This is what the book Superforecasting is about. It's not just about geopolitical events; it's about general estimation and prediction ability. Here's a review and excerpts. There's also a Freakonomics podcast episode on the book.

TL;DR: Use Bayes' Theorem.
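
To make that TL;DR concrete, here is a minimal sketch of a Bayesian update in Python; the base rate and likelihoods are invented purely for illustration, not taken from the book:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' theorem."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Base rate, assumed for illustration: ~5% of such crises end in a coup.
prior = 0.05
# New evidence: troops massing near the capital. Likelihoods also invented.
posterior = bayes_update(prior, p_e_given_h=0.7, p_e_given_not_h=0.1)
print(f"{posterior:.1%}")  # ~26.9%: a big update that still respects the base rate
```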

u/Borror0 · 15 pointsr/Habs

I wouldn't call it fluff. If you read it all, it's pretty informative. He looked at Suzuki's comparables among players who produced at close to his pace. He separated his analysis into various groups to draw several comparisons:

  • Players who scored 1.5 PPG by their third year in the CHL versus those who didn't. Suzuki belongs in the first group, and that's quite a star-studded lineup.

  • The five players who scored at a similar pace to Suzuki in the CHL and played four years there. The list includes a few second-line players (e.g., Danault and Kadri), but all of them played some time in the NHL.

  • The players who scored at a similar pace to Suzuki but played only three years in the CHL (e.g., Draisaitl, Couturier, Meier). That list is of a different level, which makes sense since all of them were rushed to the NHL, but all of them tracked Suzuki eerily closely during those three years.

  • The players who played either three or four years in the CHL while tracking Suzuki closely (a mix of the previous players, plus Huberdeau). It's a very solid list. If you ignore Etem's presence, Scott Laughton is arguably the second-worst player on that list of seven.

    Obviously it gives no definitive prediction. There is much uncertainty about prospects, so with the rare exception of top-end talent, it's hard to predict whether a player is NHL-ready. All of us were wrong about Kotkaniemi's NHL-readiness last year, for example. That being said, it gives us a better-informed idea of his odds of making the NHL.

    If you look at the list of four-year CHL players, it looks like Suzuki will take some time to settle in and would benefit from a year or two in the AHL. On the other hand, his first three years and his OHL playoff performance suggest a different kind of player. In either case, there's a solid chance we have a second-line or better player. Suzuki's closest comparable is actually Kadri, which isn't bad at all. While it's possible he'll flop (e.g., Hodgson and Etem), he's more likely than not going to play in the NHL. He isn't overhyped.

    People tend to look down on uncertain conclusions (Truman famously demanded a one-handed economist), but studies show that analysts who make these hedged predictions are by far the most accurate. Good forecasting requires being aware of the uncertainty, and acknowledging it is a signal of competence. I'm more wary of those who use sports analytics to arrive at very confident conclusions.

u/CalvaireEtLutin · 8 pointsr/france

The problem is that no expert is capable of predicting how a political system will evolve. Here's an excerpt from the book Thinking, Fast and Slow by D. Kahneman (Nobel laureate in economics for his work on the psychology of decision-making):

"Tetlock interviewed 284 people who made their living “commenting or offering advice on political and economic trends.” He asked them to assess the probabilities that certain events would occur in the not too distant future, both in areas of the world in which they specialized and in regions about which they had less knowledge. Would Gorbachev be ousted in a coup? Would the United States go to war in the Persian Gulf? Which country would become the next big emerging market? In all, Tetlock gathered more than 80,000 predictions. He also asked the experts how they reached their conclusions, how they reacted when proved wrong, and how they evaluated evidence that did not support their positions. Respondents were asked to rate the probabilities of three alternative outcomes in every case: the persistence of the status quo, more of something such as political freedom or economic growth, or less of that thing.

The results were devastating. The experts performed worse than they would have if they had simply assigned equal probabilities to each of the three potential outcomes. In other words, people who spend their time, and earn their living, studying a particular topic produce poorer predictions than dart-throwing monkeys who would have distributed their choices evenly over the options. Even in the region they knew best, experts were not significantly better than nonspecialists. Those who know more forecast very slightly better than those who know less. But those with the most knowledge are often less reliable. The reason is that the person who acquires more knowledge develops an enhanced illusion of her skill and becomes unrealistically overconfident. (...) Tetlock also found that experts resisted admitting that they had been wrong, and when they were compelled to admit error, they had a large collection of excuses: they had been wrong only in their timing, an unforeseeable event had intervened, or they had been wrong but for the right reasons."

The expert is incapable of predicting what the system will become: he can do diagnosis (identifying a problem once it has occurred), but not prognosis (predicting the problem). And as Tetlock's work shows, he won't admit this incompetence. Since the expert knows no more about the future than the average citizen, distrust of experts is easy to explain.

As for whether the average citizen's reasoning is any sounder, that's another question...

Edit: apparently, Tetlock recently put out a book on this very topic: https://www.amazon.com/Superforecasting-Science-Prediction-Philip-Tetlock/dp/0804136718

Worth reading?

u/dmpk2k · 6 pointsr/geopolitics

Perhaps you're referring to Philip Tetlock's work. Those people were far from mundane though; it took a specific set of habits and personality traits.

He wrote a mainstream book about it.

u/sasha_says · 5 pointsr/booksuggestions

If you haven’t read Malcolm Gladwell’s books, those are good; he reads his own audiobooks, and I like his speaking style. He also has a podcast called Revisionist History that I really like.

Tetlock’s Superforecasting is a bit long-winded but good; it’s a layperson’s book on his research for IARPA (intelligence research) to improve intelligence assessments. His intro mentions Kahneman and Duckworth’s Grit. I haven’t read it yet, but Nate Silver’s The Signal and the Noise is in a similar vein to Tetlock’s book and is also recommended by IARPA.

Jonathan Haidt’s The Righteous Mind was really eye-opening for me in understanding the differences in how liberals and conservatives (in both the political and cultural sense) view the world around them and how that affects social cohesion. He has a few TED talks if you’d like to get an idea of his research. Relatedly, if you’re interested in an application of Kahneman’s research to politics, The Rationalizing Voter was a good book.

As a “be a better person” book, I really liked The 7 Habits of Highly Effective People by Stephen Covey (I recommend it on audiobook). Unlike other business-style self-help about positive thinking and manipulating people, this book really makes you examine your core values and what’s truly important to you, and it gives you some tools to help refocus your efforts in those directions. Though, as I’m typing this, I’m thinking about the time I’m spending on reddit and not reading the book I’ve been meaning to read all night =p

u/criticalcontext · 5 pointsr/samharris

u/[deleted] · 4 pointsr/france

Oh! An expert prediction! Funny, I'm actually in the middle of reading Superforecasting: The Art and Science of Prediction. The book is fascinating, and it shows just how much faith you can put in this kind of statement.

The author -- Tetlock -- took political experts of this sort in the 1980s and had them make predictions on a whole range of questions -- will the Berlin Wall fall? A world war between the USSR and the USA? Quebec independence? A war in the Middle East? And so on. The verdict, come the 2000s: on average, these guys did no better than picking answers at random.

That said, there are exceptions. But they are not characterized by this kind of statement...

u/DoUHearThePeopleSing · 3 pointsr/Augur

Let me explain it another way.

If you roll a die, I'll give you a prediction: an 83% chance that you won't roll a 6.

If you roll a 6, my prediction was still valid.

There is no "making up for it". You simply cannot tell from just one outcome.
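
A quick simulation (a sketch of my own, not from the book) shows why: a single roll says almost nothing about an 83% forecast, but the long-run frequency does.

```python
import random

random.seed(1)

# The forecast: 5/6 ≈ 83% chance of not rolling a 6.
forecast = 5 / 6

# One roll tells you almost nothing about whether the forecast was good...
print(random.randint(1, 6) != 6)  # True or False, either way

# ...but over many rolls, the observed frequency should approach 5/6.
n = 100_000
freq = sum(random.randint(1, 6) != 6 for _ in range(n)) / n
print(f"observed: {freq:.3f}, forecast: {forecast:.3f}")  # both ≈ 0.833
```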

I highly recommend this book to read up more on the subject:

https://www.amazon.com/Superforecasting-Science-Prediction-Philip-Tetlock/dp/0804136718

Oh, and btw, so that nobody thinks I'm biased. I personally hate Augur :)

u/Zuslash · 2 pointsr/naturalbodybuilding

You should read Superforecasting. They specifically talk about people who make percentage guesses down to the decimal point like this lol.

u/vmsmith · 2 pointsr/statistics

First of all, it's a great question. I've been wrestling with it for quite a while now.

Might I suggest a few things...

Read Philip Tetlock's book, "Superforecasting: The Art and Science of Prediction".

This is real-world stuff that has Bayesian thinking at its heart.

If you are intrigued, consider getting involved with Tetlock's Good Judgment Project to get some actual hands-on experience with it and to start developing a network of peers.

You can read about it here. I recently took the one-day workshop when I was in Washington DC, but that's not really necessary to get started.

You can also participate in Good Judgment Open, and try your hand at actual forecasting using Bayesian methods.

Another book I would highly recommend is Annie Duke's "Thinking in Bets: Making Smarter Decisions When You Don't Have All The Facts". She actually references Tetlock a lot.

I will caution you that the first time I read "Thinking in Bets" I thought it was lame, and put it down before I finished. But then I heard her on a podcast and realized she's top-notch. Not only did I go back and read the book in full, but I read it twice (with extensive marking).

If you like Annie Duke, consider signing up for her weekly newsletter.

Finally, the first step -- in my opinion -- in internalizing Bayesian thinking and such is to know, internalize, and practice Cromwell's Rule.

I don't recall either Philip Tetlock or Annie Duke referring to it explicitly, but it is the foundation upon which all they discuss is built.
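
For anyone unfamiliar with it: Cromwell's Rule says never to assign a probability of exactly 0 or 1 to anything that isn't a logical truth, because Bayes' theorem can never move you off such a prior. A minimal sketch (mine, not from either book):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' theorem."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# With any prior strictly between 0 and 1, evidence moves you:
print(bayes_update(0.5, 0.9, 0.1))  # 0.9
# With a dogmatic prior of exactly 0 (or 1), no evidence ever can:
print(bayes_update(0.0, 0.9, 0.1))  # 0.0, forever
print(bayes_update(1.0, 0.9, 0.1))  # 1.0, forever
```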

Good luck!

u/iwantedthisusername · 1 pointr/elonmusk

The machine learning model. Remember that in Cognicism, the actual authors' claims are not shown to users. The ML outputs an aggregate view of a collective of people trying to find a common truth together. The idea of "centralized arbiters of truth" really doesn't have legs in my mind.

There are many attempts at making a "scoring algorithm" for truth. We talk about most of them in the manifesto.

Truthcoin (now Hivemind) is basically just a cryptocurrency based on prediction markets. It's simpler, but I think it can be corrupted. [Metaculus](http://www.metaculus.com/help/scoring/) also focuses on prediction, and they concede that there are infinitely many scoring functions.

----------

From my perspective, the key is the ML model and the FourThought API, which is a constraint on how truths are evaluated. You don't just rate true or false; you rate on a spectrum from 0% likely to be true to 100% likely.

The ML model uses the raw text of the thoughts as well as the collective score. It's always trying to predict, on its own, which accounts are making the predictions (or statements) that end up being logged to the chain.

The models seem to favor accounts that fall within the basic constraints laid out in this book. They use Brier scores for evaluation, like PredictionBook. In their case, they find that scorers who make more nuanced predictions and update their scores more often are more accurate. The ML models are meant to learn similar patterns in accounts.
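
For reference, a Brier score is just the mean squared error between probabilistic forecasts and binary outcomes, so lower is better: 0 is perfect and a constant 50% forecast scores 0.25. A minimal sketch with invented numbers:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A nuanced forecaster versus a hedger who always says 50%:
outcomes  = [1, 0, 1, 1, 0]
nuanced   = [0.8, 0.2, 0.7, 0.9, 0.3]
always_50 = [0.5] * 5
print(brier_score(nuanced, outcomes))    # 0.054 (better)
print(brier_score(always_50, outcomes))  # 0.25
```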

The models are constantly learning, becoming richer with knowledge over time and more resistant to corruption by trolls.

Early models, with not a lot of training time, are pretty dumb and susceptible.

u/viking_ · 1 pointr/science

Have you read Superforecasting? It's all about predicting the future, and how some people do it well. Interestingly, they tend to use very little actual math.

u/hxcloud99 · 1 pointr/Philippines

Duuuuuuude have you read Superforecasting? This skill is literally an important part of the modern rationality movement.

Should we make a subreddit for this?

u/Econ_artist · 1 pointr/AskEconomics

So I usually tell my MBA students to just read books, not textbooks. Here are a few of my general suggestions:

Nudge, Thaler and Sunstein

Misbehaving, Thaler

Superforecasting, Tetlock and Gardner

Zombie Economics, Quiggin

If you need more suggestions or want to discuss any of the ideas in these books, don't hesitate to ask.