Tim Beiko announced a preliminary terminal total difficulty (TTD) of 58750000000000000000000. When the network reaches it depends on the hash rate, but early September looks likely.
Yandex open-sourced YaLM-100B today, licensed Apache 2.0. The repository says:
> The model is published under the Apache 2.0 license that permits both research and commercial use, Megatron-LM is licensed under the Megatron-LM license.
From a quick glance, YaLM-100B appears to be an instance of Megatron-LM. (So the network architecture is Megatron-LM, and the weights are provided by Yandex.)
Spoiler alert, this comment refers to the plot of No Time To Die.
At the end of every Bond movie, typically after the credits, a title card reads “James Bond will return”. Given the events in No Time To Die, I half expected to see “007 will return” instead, which would at least have kept the options open. I stayed until after the credits, but it said “James Bond will return”.
Yesterday was the last Monday of 2021, and the Mantic Monday post on that day did not mention this question.
(Edit: I was wrong about this! Scott did mention the title, see above.)
— edited by oumeen
@optimaloption The Goerli merge was just completed!
The community prediction odds on the linked question went from 1.5 (60%) in December 2022 to 4 (80%) currently (February 2023). It would have to go to 82% this month to reach the tripling threshold.
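The odds arithmetic above can be sketched as follows (a minimal illustration of the probability-to-odds conversion; the function names are mine, and the numbers are the ones quoted in this comment):

```python
def prob_to_odds(p):
    """Convert a probability to odds in favor (p against 1 - p)."""
    return p / (1 - p)

def odds_to_prob(o):
    """Convert odds in favor back to a probability."""
    return o / (1 + o)

# The community prediction moved from 60% to 80%:
december_odds = prob_to_odds(0.60)  # 1.5
current_odds = prob_to_odds(0.80)   # 4.0

# "Tripling" the December odds of 1.5 gives 4.5,
# which corresponds to roughly 82% probability.
tripled = odds_to_prob(3 * december_odds)
print(round(december_odds, 2), round(current_odds, 2), round(tripled, 3))
```

So a move from 60% to 80% is a rise from 1.5 to 4.0 in odds terms, and full tripling of the odds is only reached at about 82%.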
Maybe Scott will avoid mentioning this question, but discuss https://www.metaculus.com/questions/7976/acx-… instead, carefully worded to avoid mentioning this question itself.
OpenAI just announced Codex, a GPT model fine-tuned on code, with up to 12B parameters. https://arxiv.org/abs/2107.03374
The community prediction on Date of Artificial General Intelligence now assigns a 60% probability to AGI before 2040, and I think a system that passes the resolution criteria of that question would also pass the test in this question. I wrote a more elaborate comment on the meta-question about this question.
https://paperswithcode.com/sota/language-mode… hasn’t been updated for almost three years, but there has been a lot of progress on language models since. I expect that if you ran LaMDA or ChatGPT against the Penn Treebank, they would score better than GPT-3; it’s just that nobody has reported the test-set perplexity yet.
There is a market about the same topic on Polymarket, which at the time of writing has a price of $0.93 for “no” (vs $0.07 for “yes”) to “Will Ethereum switch to Proof-of-Stake (EIP-3675) by February 22, 2022?”