@(Tamay) https://ai.googleblog.com/2022/07/ml-enhanced-code-completion-improves.html "With 10k+ Google-internal developers using the completion setup in their IDE, we measured a user acceptance rate of 25-34%. We determined that the transformer-based hybrid semantic ML code completion completes >3% of code, while reducing the coding iteration time for Googlers by 6% (at a 90% confidence level). The size of the shift corresponds to typical effects observed for transformational features (e.g., key framework) that typically affect only a subpopulation, wher...
@(isinlor) Thanks! If there's a way for me to edit my post, I don't see it. Here are my first-pass answers to your question: At a high level, this question aims to capture (or at least be a proxy for) the important thing people often debate about AI timelines: "Whether the scaling will continue, or plateau." There are various different scaling trends and various theories about why they are the way they are; I just picked this one because it's perhaps a particularly solid and prominent one. From the abstract: "We identify empirical scaling laws for the...

This investor report summarizes the state of AI in 2020 and makes a few predictions about the next year (slide 172). I think it would be interesting to put some of their predictions on Metaculus:

--Attention-based neural nets achieve multiple SOTA results in computer vision
--Beefier version: ALL SOTA results in computer vision come from attention-based neural nets by end of 2021
--An AI-based drug discovery startup IPOs or is bought for $1B+
--Chinese and European AI-based defense startups raise $100M+ between them

@Jgalt This makes me wonder if part of the reason why experts are saying vaccines are far away is that they want to scare people into taking containment measures now. Or, to put it another way, they are worried about people dragging their feet due to lazy hope in a vaccine. It's a perfectly reasonable utility-maximizing PR policy, I think.

@metani Nice. In a scenario where the virus actually did escape from the lab, and her tests showed as much, would she have been able to say so publicly? Would the government have pressured her to say that the tests came back negative, such that by mid-March an article like this could be written? (Seems to me the answers are no and yes, respectively.)

Can someone explain to me why it is likely that there will be ~10M cases? If this thing is contained, won't it probably be contained before then? (It's really hard to contain a disease once it has infected 10M people around the world!) If this thing is not contained, won't it infect substantially more than 10M people?

Several AI-related predictions were made in [this Tesla video](https://www.youtube.com/watch?v=Ucp0TTmvqOE).

1:46: It will take at least 3 years for a competitor to make (deploy?) a chip that is as good as Tesla's current neural-net-optimized chip, and by then Tesla will have the next generation, which will be 3x better. This bears on the more general question of how quickly new hardware (esp. AI-optimized hardware) can be built and improved, and how quickly competitors can catch front-runners.

2:46: Tesla's autopilot will be feature complete by end of...

Suppose in 2025 things like Replika are not popular, but something like AI Dungeon is super popular, hundreds of millions of people have ongoing fantasies collaboratively written with the AI... and a significant portion of these fantasies are sexual and/or romantic. I think this should count, but I'm not sure, so I'm asking.

@(Tamay) If you have models I'd be interested to see them! The ones I've seen seem to have speedup happening by the 2030s or so, possibly sooner. Where are you getting the <0.01% figure from? Anecdotally, the people I talk to at DeepMind and OpenAI seem to answer something like 5% on average to "how much faster do you work thanks to Copilot," and even if we assume that's off by an order of magnitude due to sampling bias, exaggeration, and Copilot not helping at all with other important factors of production, we still get 0.5%, 50x higher than your upper...
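To make the Fermi estimate above explicit: the inputs are the anecdotal numbers I quoted (not measurements), and the comparison point is the <0.01% figure under discussion.

```python
# Back-of-envelope check of the Copilot speedup estimate.
# All inputs are anecdotal / assumed, not measured.
anecdotal_speedup = 0.05     # ~5% self-reported speedup from Copilot
discount_factor = 10         # assume bias/exaggeration inflate it 10x
adjusted_speedup = anecdotal_speedup / discount_factor  # 0.5%

claimed_upper_bound = 0.0001  # the <0.01% figure under discussion
ratio = adjusted_speedup / claimed_upper_bound

print(f"adjusted speedup: {adjusted_speedup:.3%}")
print(f"ratio vs 0.01% bound: {ratio:.0f}x")
```

Even after a 10x discount, the anecdotal estimate sits roughly 50x above that upper bound.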
Boston Dynamics succeeded without deep learning, right? And now videos like this show people succeeding with deep learning: https://www.youtube.com/watch?v=zXbb6KQ0xV8 So... it seems like Tesla can totally build a humanoid robot that walks around and picks up objects and follows simple voice commands and stuff like that. (See: DeepMind's research on simulated command-followers) That seems like a straightforward implementation of existing technology; a solved problem. What about expense? Spot costs $75,000. A Tesla bot is larger than Spot and so natur...
"Transformative AI is said to be developed on the date when growth rate of GWP reaches 25% due to a computer program or collection of computer programs. If 25% growth rate is reached without such a software system through nanotechnology, WBE or some other means then the question resolves as ambiguous." Just want to point out that this is importantly different from the definition of TAI. TAI is the computer program or programs that cause the growth; thus, TAI is developed when TAI is developed, which will presumably be years before the growth actually ha...

@Sylvain But, like, why isn't that all priced into the market by now? Heck, for all we know the market is overreacting, with tons of people panic-selling their stocks, and it's going up from here...

Or not. IDK. But surely the mere fact that coronavirus will get worse is basically no evidence at all; what matters is whether the market will be pleasantly or unpleasantly surprised.

"build me a general-purpose programming system that can write from scratch a deep-learning system capable of transcribing human speech." So to make sure I understand: We feed this prompt to a model, such as AlphaCode. The model then produces some code which, when run, *writes additional code,* which, when run, trains a deep-learning system capable of transcribing human speech. That seems to have one unnecessary step in the middle. Are you sure it isn't: We feed this prompt to a model, such as AlphaCode. The model then produces some code which, when run...

Why exactly has this not already resolved?

What do people think about the hypothesis that summer will stop it? As far as I can tell, warm-weather countries really do seem to be handling this virus pretty well; even if some of them have good healthcare systems, plenty of them don't, and it's been long enough now that I am starting to doubt the "cases are just going unreported" counterargument.

Maybe the thought is, summer will slow it down but not stop it (since half the world will be in winter) and it'll be really big by then anyway?

Really cool stuff, thanks! Correct me if I'm mistaken: it sounds like you are extrapolating based on date? I.e., looking at the average improvement in performance per year and extrapolating until we get a year when performance equals the estimated entropy of English text (which is the maximum possible performance, right?). If this is so, (1) isn't that a calculation for superhuman performance, not human level? And (2) it would be really cool to see the calculations done by parameter count or training-compute expenditure rather than by year. Like, you could take ...
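For concreteness, here is a toy version of the date-based extrapolation I'm describing. Every number in it (the current loss, the per-year improvement, the entropy floor) is made up for illustration; the point is only the shape of the calculation.

```python
# Toy sketch of "extrapolate the yearly trend until it hits the
# estimated entropy of English text." All numbers are hypothetical.
current_year = 2022
current_loss = 1.20        # hypothetical LM loss (bits/char)
annual_improvement = 0.05  # hypothetical avg. loss reduction per year
entropy_floor = 0.70       # hypothetical entropy of English text

# Years until a linear trend reaches the floor (the "maximum
# possible performance" the post refers to).
years_needed = (current_loss - entropy_floor) / annual_improvement
crossing_year = current_year + years_needed
print(int(crossing_year))
```

The same template works with parameter count or training compute on the x-axis instead of calendar year, which is the version I'd most like to see.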

I'd be very interested in a version of this question for 2030 or 2035 instead of 2050. Failing that, anyone here care to comment with an answer? Should I assume it is something like 20%, extrapolating from the current 2050 aggregate answer?

Interesting! The community median here is higher than I expected. Anyone care to explain why they think the scaling trend will break down?