Good predictions here will consist of a sum of several spikes at round numbers, each of which will have to be adjusted individually. I don't know if that's avoidable.

@Skyt3ch Please note that we're trying to estimate the probability that Trump will be reelected, not the fraction of the presidency that Trump will win in the most probable future.

@Tamay There's nothing wrong with reality falling at the 31st percentile of one's distribution. The outcome here is not significant Bayesian evidence against the community's ability to predict: the probability density is quite close to the peak.

@Tamay Thank you!

Quick clarification: the probabilities quoted here are conditional on "no transformative AI by 2050", which I consider <50% likely.

— edited by steven0461

@Anthony In my opinion, five years could still easily be enough for an intelligence explosion and tons of optimization.

Wikipedia's list of nuclear close calls has 1 out of 15 (counting 1991 as non-summer, and counting only one of the two Cuban missile crisis dates) in these months, compared to an expected 5. The only one in June-September is the Petrov incident. Probably just a coincidence, but enough to make me go a bit below 1/3. (edit: according to the binomial calculator, the probability of 0-1 successes in 15 attempts with probability 1/3 is about 0.0...)
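The binomial figure can be reproduced directly; this is a sketch using the comment's own numbers (15 trials, success probability 1/3, at most 1 success):

```python
from math import comb

# Probability of at most 1 "summer" close call out of 15, assuming each
# incident independently had a 1/3 chance of falling in June-September.
n, p = 15, 1 / 3
prob = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(2))
print(f"{prob:.4f}")  # 0.0194
```

So the truncated "about 0.0..." above works out to roughly 2%, consistent with treating the observed count as mildly surprising under the uniform-seasons hypothesis.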

The current coalition won 32 instead of 30 seats, but they needed 38, so it's still a negative resolution.

@Jgalt The Metaculus prediction has been doing worse than the community prediction over the past couple of years (at least on the questions that I've predicted on and that have resolved, which is a fairly large sample) and is probably still doing worse unless something changed.

— edited by steven0461

Berkeley Earth is now giving 6% for hottest on record.

@Tamay The benefits are major enough that many EAs already consider attending worth their time and money. With a median of 161 cases in San Francisco by 1 Apr (as a naive guess, 161^(2/3) = 30 at conference time), I don't see how COVID-19 infections are much more likely than if people lived their normal lives at home or traveled elsewhere. Deaths would be very rare given that EAs tend to be young and healthy. I would also expect EAs to be good at things like putting hand sanitizer everywhere and adopting no-handshake norms. *— edited by steven0461*

Still uncertain whether it will make landfall in the US at Category 4 or 5 intensity.

But it was Category 4 when it made landfall in the Bahamas. I think there's some ambiguity in the question about whether a hurricane counts if it's Category 4 when it first makes landfall outside the US and then later makes landfall in the US at below Category 4. The most plausible reading, though, looks to me like such a hurricane wouldn't count.

— edited by steven0461

@jzima Attention has been shifting from unrealistically high emissions scenarios like RCP 8.5 to lower estimates like those I've been linking in this thread. At the same time, there's been wider acceptance (e.g. by the IPCC) that multiple lines of evidence allow us to constrain climate sensitivity to a narrower range, leading to less probability in the tails. See this EA forum post.

> The date of resolution of when the first AGI is built is determined by this question.

Does that mean this question's clock only starts ticking when the first AGI becomes *publicly known*? That may well imply resolution at a negative number of months. (If not negative, then maybe close to zero.) After all, a superintelligence could be developed out of, or by, an AGI that isn't publicly known. *— edited by steven0461*

@DanielFilan That sounds like one of the few possible ways to create a real incentive for people to press the button (after predicting 1%).

@DanielFilan I don't think so. About 1% of the world has been confirmed infected. "Confirmed infected" is a subset of "estimated infected" but a superset of "confirmed infected and would post test result to Metaculus". Metaculus users differ from average world inhabitants in important ways, but "1% of world population" is at least close to the region in parameter space where no major bad luck is required for the question to resolve negative.

— edited by steven0461

If the question had a requirement like "any jurisdiction governing at least one million people", I think that would make it more about the future of humanity and less about small jurisdiction trivia, and this would be good.

I'm copying my answer from the AR6 question, but adding some mass in the 3-4 range, because confidence intervals should usually narrow with more information, making the upper bound more likely to go down than up.

I had to remind myself not to treat the scale as linear: on the logarithmic axis, if the years 2030 and 2040 are equally likely, the displayed density at 2040 should be twice as high, since a fixed one-year interval spans half as much of the log axis at twice the distance from the present.
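The factor of two comes from the change-of-variables Jacobian. A minimal sketch, assuming the axis is log(years from now) with "now" taken as 2020 (that reference year is my assumption, not stated in the comment):

```python
# If x = log(t), t = years from now, then a per-year density f_t
# corresponds to a per-log-unit density f_x(x) = f_t(t) * t,
# since the Jacobian dt/dx equals t.
NOW = 2020  # assumed reference year

def log_axis_density(year, per_year_prob):
    t = year - NOW
    return per_year_prob * t

# Equal per-year probability at 2030 and 2040:
ratio = log_axis_density(2040, 1.0) / log_axis_density(2030, 1.0)
print(ratio)  # 2.0
```

2040 sits twice as many years from the reference point as 2030, so equal per-year probability doubles the density drawn on the log scale.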