@(Glossy) I am skeptical that raising alarms about a full-scale Russian invasion of Ukraine is a means to the end of reconquering the Donbass, especially if it's Washington that wants this to happen and Zelensky and co. who are reluctant. "Yo Zelensky" "Yes Washington?" "We want you to attack Donbass. Now is the time! Reclaim your lost territory!" "Why now of all times?" "Because the Russians are massing troops on your border! Perfect, right? They could invade any minute now!" "What?!?" Seriously, isn't launching an offensive into Donbass...

@EvanHarper Source please? How many nations did this, and how coordinated was it (was it all literally the same night? after not having given any similar warnings in the past?)

@jmason It would be helpful to know how often the Do Not Travel warning is followed by a lack of invasion. I wouldn't be surprised if it's "90%+ of the time." This would be a more useful piece of evidence than the anecdote about Azerbaijan.
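To see why that base rate matters, here's a minimal Bayes-update sketch with entirely made-up numbers (the prior and both likelihoods are assumptions, not data):

```python
# Illustrative Bayes update: all three parameters are made-up assumptions.
prior = 0.05                     # assumed prior P(invasion)
p_warn_given_invasion = 0.90     # assumed: warnings usually precede real invasions
p_warn_given_no_invasion = 0.40  # assumed: warnings often fire with no invasion

p_warning = (p_warn_given_invasion * prior
             + p_warn_given_no_invasion * (1 - prior))
posterior = p_warn_given_invasion * prior / p_warning

print(f"P(no invasion | warning) = {1 - posterior:.2f}")  # ~0.89
print(f"P(invasion | warning)    = {posterior:.2f}")      # ~0.11, up from 0.05
```

With these numbers the warning is followed by no invasion ~90% of the time, yet it still roughly doubles your credence in invasion. What matters is the likelihood ratio, not the false-alarm rate alone.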

God dammit guys we had a sweet gig going! Just keep voting 1% and we all make points!

(This is my protest against this resolution method.)

@tbyoln IMO, when the heads of the AGI labs start publicly calling for slowdowns and caution, that is evidence that AGI is very near, not evidence that it's far away.

@(tbyoln) Sometimes updates happen not because of events, but rather because of thinking through the arguments more carefully and forming better models. Even this kind of update, however, often happens around the same time as splashy events, because the splashy events cause people to revisit their timelines, discuss timelines more with each other, etc. (Speaking as someone who hasn't updated as much on recent events due to having already had short timelines, but who hadn't forecasted on this question for almost a year (EDIT: two years!) and then revisi...
@(andreferretti) I'll give a different take than tbyoln: the people surveyed by AI Impacts haven't thought much about AGI timelines & have terrible takes as a result. Their job is to incrementally advance the current state of the art in the field of ML, not to forecast anything. And most of them barely think about AGI at all, much less seriously attempt to forecast it. If you read the surveys in detail, you'll notice a lot of inconsistencies in how survey respondents answer -- e.g. different phrasings of the same question lead to significantly different ...
It is my great pleasure to find myself for the first time in three years arguing for *longer* timelines on this website! (The community median here is Nov 2025, whereas mine is Sep 2026.) I think people here may be underestimating how difficult the Silver Turing Test is. I'm not sure how difficult it is myself; I'm having trouble finding information about how long it is (apparently it started out at a mere 2.5 minutes but grew longer after that), but anyhow: Suppose the test lasts N minutes. Then if there is any text-based task that a typical human can reliably do, t...

In general I think people here massively overestimate how long it'll take to go from "weakly general" to "general" (I'm referring to this question and its more popular weaker variant.)

I expect that if an AI that can pass the Turing test exists by 2029, the Turing test will never be run, nor will the Long Now Foundation be around to announce the results. How should this influence my forecast? Would such a case resolve the question positively, negatively, or ambiguously?

I imagine that if we build unaligned AGI it will be because the people who built it think it is aligned. Then, those people + the AGI itself will work together to convince the rest of the world that it is aligned. Then it will take over, do bad stuff, etc. But the point is that even if we build unaligned AGI there will probably be a brief period where lots of high-status experts are saying that the AGI is aligned. I think we should clarify that such a situation doesn't count.

@krmchoudhary92 It's honestly quite fiddly to avoid having some probability mass in the past, unless you have really long timelines. These distributions are all bunched up on the left hand side now. I fiddled with it for a bit and then gave up, keeping more than 10% of my probability mass in the past. :(

I just did a shallow investigation into whether or not this sort of thing has historical precedents: https://docs.google.com/document/d/1TBY1wzAJUiw7yd8mcVVzmHdbdENtW1H0BP-8cS-kpgU/edit?usp=sharing Comments welcome! TL;DR: A similar Metaculus question prior to the Agricultural Revolution would have resolved negative, but that's it: a similar Metaculus question prior to any other event (including the Industrial Revolution) would have resolved positive. (Edited because I embarrassingly switched positive and negative in the original version. The worst pos...
I think the most likely way for this to resolve positively is for experts to wrongly decide that the AI is safe/aligned/controlled/etc., analogous to how experts wrongly decided that the lab leak hypothesis was a crazy racist conspiracy theory. I think this is likely enough that the current community median of 9% seems too low to me. But maybe I'm being too pessimistic about who counts as "experts"? The fine print doesn't say. I am not going to predict on this question until I get a better sense of the criteria and thus how likely it is to get a "fals...

@BrendanFinan Please don't. There are much better ways to draw attention to this topic without delegitimizing Metaculus as a signal of what people actually think. I say this as someone who thinks there's a 10% chance we have less than one year left.

People need to update their forecasts on this one I think...

@akarlin You mean this trend? https://aiimpacts.org/discontinuous-progress-…

Seems like it would be perfectly on trend to me. Also, discontinuities in trends happen fairly often, so you shouldn't be down at 5% credence on that basis alone.

@Jgalt Can you explain why? Aliens vs. civil war seems like a pretty strange comparison to me; civil war should be several orders of magnitude more likely I'd say.

I don't like how I am now incentivised to guess 1%, since that will almost surely get me some quick points, even though my true credence is more like 20%. Were I to guess 20%, the community prediction would probably stay below 3% anyway since I'm so outnumbered, and I would just lose points even if I'm actually right.
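To illustrate the incentive problem, a minimal sketch, assuming a simple log scoring rule and that resolution ends up tracking the low community prediction rather than the real outcome (all numbers made up):

```python
import math

# Minimal sketch of the incentive problem, under two assumptions:
# (1) a plain log scoring rule, and (2) resolution ends up tracking the
# final community prediction (~3% -> resolves NO ~97% of the time),
# not the real-world probability I actually believe (20%).
def log_score(report: float, resolved_yes: bool) -> float:
    return math.log(report if resolved_yes else 1 - report)

p_resolves_yes = 0.03  # assumed: the crowd stays ~3%, so NO is near-certain

for report in (0.01, 0.20):
    expected = (p_resolves_yes * log_score(report, True)
                + (1 - p_resolves_yes) * log_score(report, False))
    print(f"report {report:.0%}: expected log score = {expected:.3f}")

# report 1%:  expected ≈ -0.148
# report 20%: expected ≈ -0.265
# Reporting my true 20% loses in expectation once resolution follows
# the crowd instead of the world.
```

Under these assumptions, honesty strictly loses points, which is exactly the complaint: the resolution method rewards matching the herd, not forecasting reality.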