I expanded my range to put more weight on the US' share falling below its current level (28%). I know that Frontier is set to change things, but I don't think the possibility of delays is properly accounted for in the community's forecasts. Systems like these take a long time to build, let alone bring online.
Beyond the exascale projects, does anyone have any links that suggest the US' relative investment is expected to increase from previous years?
@Sergio Thanks, that makes a lot more sense — skim-reading meant I attributed it to the un-augmented model.
I'm fairly new to using Metaculus (despite creating an account a while ago). The thing I'd most like is a prediction slider that works on mobile! I realise that's an irritating request (and I'm fine without a fully mobile-friendly website), but I would like to be able to make or update predictions on the go.
Also @AABoyles' point on Dec 27, 2019 about discrete/category-based questions is something that I have noticed as well.
I expect there'll be an error rate of around 5% to 6%; there are going to be diminishing returns for a bunch of these metrics, so I expect big jumps from a 7% error to be unlikely (even if possible). I also imagine the practical maximum scores are <100% for some of these metrics (though I haven't looked into that at all).
— edited by qassiov
With exaflop computers coming soon, I expect an increase of a couple of orders of magnitude, as that's the sort of change we've seen historically over a decade. It's not quite like Moore's law, as rates have slowed over time, but perhaps more incentives for building supercomputers will reappear.
I'm guessing we'll see 30x-200x growth from the first exaflop computers - exciting!
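As a rough back-of-envelope check (assuming approximately exponential growth and taking "a couple of orders of magnitude per decade" at face value), the implied per-year growth factors look like this:

```python
def annual_factor(total_growth: float, years: int = 10) -> float:
    """Implied per-year growth factor if performance grows by
    `total_growth`x over `years` years, assuming exponential growth."""
    return total_growth ** (1 / years)

# ~2 orders of magnitude per decade -> ~1.58x per year
print(round(annual_factor(100), 2))

# The guessed 30x-200x decade range -> roughly 1.41x-1.70x per year
print(round(annual_factor(30), 2), round(annual_factor(200), 2))
```

So even the low end of that range still implies ~40% year-on-year growth, which is why slowing rates matter so much over a decade.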
My median guess assumes an improvement of 3 percentage points, with 1.5pp and 5pp falling within my 75% bounds. This is just based on the observation that this value has seen annual improvements ranging from 3.5pp to 6.5pp, with growth slowing in recent years (presumably due to diminishing returns on some of the benchmarks).
However, I've got a reasonably long tail on this, since I think a >6pp improvement (~140 here) is more likely than a <0.5pp improvement (~118).
@admins I calculate this to be 117.9, using the data from PapersWithCode. Weirdly, though, the Stanford Cars SOTA decreased on PapersWithCode (because the original SOTA got replaced with a pretrained version of the model, I think). Also, many of the models in the spreadsheet for CUB200 were missing — I'm guessing this is because PapersWithCode might not be the most reliable source?
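For anyone wanting to sanity-check numbers like this, here's a minimal sketch of one way such an index could be computed — assuming a normalized-mean construction (each benchmark's SOTA divided by its base-year SOTA, averaged, with the base year at 100). The exact index definition isn't spelled out in this thread, and the benchmark names and scores below are illustrative placeholders, not the real PapersWithCode values:

```python
# Hypothetical sketch of a normalized-mean SOTA index (base year = 100).
# Benchmark names and scores are placeholders, not real SOTA values.
def sota_index(current: dict[str, float], base: dict[str, float]) -> float:
    """Mean of each benchmark's current SOTA relative to its
    base-year SOTA, scaled so the base year equals 100."""
    ratios = [100 * current[name] / base[name] for name in base]
    return sum(ratios) / len(ratios)

base = {"StanfordCars": 93.0, "CUB200": 88.0}     # placeholder base-year SOTAs
current = {"StanfordCars": 96.0, "CUB200": 91.5}  # placeholder current SOTAs
print(round(sota_index(current, base), 1))
```

Note that under a construction like this, a benchmark whose listed SOTA *decreases* (as happened with Stanford Cars) drags the whole index down, which is why source reliability matters here.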