@Jgalt taking an outside view, the correct past predictions of this model aren't very impressive since the model was selected for having such a trait. this is similar to p-value hacking.
since this theory contradicts quantum mechanics, i'm assigning a probability of 1%. waiting for the day metaculus will go to basis points.
@Skyt3ch you should only claim 99% certainty for statements where you can make a hundred similar statements and be wrong roughly once. if you're not well calibrated, you risk large losses.
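to illustrate the "large losses" point: a quick sketch of the expected per-question log score when someone always states 99% but their true long-run accuracy is lower (the specific rates below are illustrative, not from the thread):

```python
import math

def expected_log_loss(stated, true_rate):
    # expected per-question log loss when you always state probability
    # `stated` but your real long-run hit rate is `true_rate`
    return -(true_rate * math.log(stated) + (1 - true_rate) * math.log(1 - stated))

well_calibrated = expected_log_loss(0.99, 0.99)  # ~0.056
overconfident = expected_log_loss(0.99, 0.90)    # ~0.470
```

being right 90% of the time while claiming 99% costs roughly eight times as much per question as genuine 99% accuracy, which is the sense in which poor calibration at the extremes risks large losses.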
in this context, 90% is considered extremely confident. if you're at 99%, you're either a superintelligence, or you have private information.
— edited by elspeth
The current distribution makes it look like this question will resolve to the end or beginning of a specific year. Is that the case?
@AABoyles those are not uncorrelated.
@godshatter i'd like to note that there are significant problems with predictit that prevent it from being efficient. for example, each trader can only invest 850 dollars in a question, and there is a limit on the number of traders that can participate in a given market.
@JavierSouto again, this is selection bias.
this is an interesting question, but the resolution criteria set a lower bar than the title suggests. criterion four in particular seems relatively easy to achieve, and the disjunction as a whole raises the probability further.
— edited by elspeth
doing a simple logarithmic extrapolation based on the data for the last 6 years from this site yields a median of 2531.
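the extrapolation above can be sketched like this. the data, reference year, and threshold below are all hypothetical stand-ins (the comment's source numbers aren't reproduced); the method is a least-squares fit of value against log-time, solved for the year the fitted curve crosses the threshold:

```python
import math

# hypothetical yearly measurements standing in for the site's data
years = [2018, 2019, 2020, 2021, 2022, 2023]
values = [10.0, 12.0, 13.4, 14.5, 15.4, 16.1]
t0 = 2015  # hypothetical reference year for the log curve

# least-squares fit of value = a + b * ln(year - t0)
xs = [math.log(t - t0) for t in years]
n = len(xs)
mx = sum(xs) / n
my = sum(values) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, values)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

target = 40.0  # hypothetical resolution threshold
year_hit = t0 + math.exp((target - a) / b)
```

because logarithmic growth flattens out, even a modest threshold pushes the crossing date centuries out, which is how a fit over a few recent years can yield a median in the 2500s.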
how does this resolve if the planet mars ceases to exist?
@Walthari b. a positive resolution implies ubi in at least one eu country.
@Charles What if the United States dissolves before 2120?
this question should have been logarithmic.
i'm curious to know why the estimate for this is several months later than the estimate for 10M.
@sbares i'm getting 0.141 based on 47 events since 11812-12-08: 1 - e^(-47*245/75750) ≈ 0.141.
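for anyone checking the arithmetic: this is the standard poisson estimate of at least one event over the horizon, taking 75750 as the days in the observation window and 245 as the forecast horizon, per the numbers in the comment:

```python
import math

events = 47
window_days = 75750   # days in the observation window (per the comment)
horizon_days = 245    # forecast horizon in days (per the comment)

# estimated daily rate, then P(at least one event) under a poisson model
rate = events / window_days
p_at_least_one = 1 - math.exp(-rate * horizon_days)  # ~0.141
```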
i dislike the misuse of the word positive. maybe change it to beneficial?
it seems to me that the dictator scenario is a bad example. if this actually happened, it would probably still lead to a paretotopia. in contrast, the situations we're most concerned about are those where the agent has a completely non-human utility function, so that none of our preferences are satisfied.
— edited by elspeth
@Sylvain it seems to me that such a question is necessary. the only problem is that there's no reason to include candidates who don't win the nomination.
i'm surprised to see that this question is linear, especially given the exponential nature of the distribution.
too many "will"s in the title.