Paper: https://arxiv.org/abs/2306.02519
For this market to resolve as YES, three things have to happen.
1) A paper disproving this is posted.
2) The paper has to be generally credited more than the above paper.
3) This has to happen before Dec 31, 2024.
Otherwise this market resolves to NO.
Re 1) Do you require preprints on arxiv, or are posts on LessWrong/the Alignmentforum okay?
Re 1) How do you determine whether it disproves the original paper? Does it need to directly address the paper's arguments, or is addressing the conclusion, e.g. by using a different methodology, okay?
Re 2) How do you measure how much a paper is credited?
Re 3) Just to make sure, do papers from 2023 also count?
Betting No to 40% as I think it is relatively likely no one goes to the effort of disproving this.
I've given the executive summary a very quick read and here are my arrogant thoughts:
I think you could apply their methodology to any of the technological booms of the past one to two hundred years and get similarly low probabilities.
There's huge uncertainty in all their assumptions.
To do a Bayesian analysis like that you need to put a distribution on each of your assumptions. I think if they do this they'll find it hard to keep their probabilities so low.
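As a minimal sketch of what I mean (the Beta parameters below are made up for illustration, not taken from the paper), you could sample each assumption from a distribution and look at the spread of the resulting product:

```python
# Toy version: replace each point estimate with a Beta distribution and
# propagate the uncertainty through the product of the cascade.
# The (alpha, beta) pairs are hypothetical, NOT the paper's numbers.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
priors = [(6, 4), (4, 6), (2, 8), (6, 4), (5, 5), (7, 3), (9, 1)]

product = np.ones(n)
for a, b in priors:
    product *= rng.beta(a, b, size=n)

print("median:", np.median(product))
print("5th-95th percentile:", np.percentile(product, [5, 95]))
```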
I reckon:
2 is the same as 1. So 2 can become 100%.
3 is like 50% (don't see it taking more than the brain - brains are not efficient. And think it's easy to get 10x improvements in computing power. Programmable chips are an easy 10x that has already happened.) (Also think $25/hr is too low, AGI could add huge value even at like $1000/hr)
4 is nonsense - robots are irrelevant. 100%.
5 is the same as 3. So 100%.
7 should be like 95+%, covid didn't slow us down.
Putting that into their calculator gives like 15%.
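Rough arithmetic for that, with my revised numbers plugged in. Steps 1 and 6 below are placeholder guesses of mine (not the paper's published values), chosen so the product lands near that ~15% ballpark:

```python
from math import prod

# Revised step probabilities per the reasoning above: steps 2, 4 and 5 get
# no extra discount, step 3 goes to 0.5, step 7 to 0.95. Steps 1 and 6 are
# placeholder guesses, NOT the paper's numbers.
steps = {
    1: 0.45,  # placeholder guess
    2: 1.00,  # same event as step 1
    3: 0.50,  # compute/cost step, revised upward
    4: 1.00,  # robots treated as irrelevant
    5: 1.00,  # same as step 3
    6: 0.70,  # placeholder guess
    7: 0.95,  # covid-style derailment unlikely to slow things down
}

p = prod(steps.values())
print(f"P(success) ~ {p:.0%}")  # -> ~15%
```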
I'm pretty sure if I read the paper properly I would find I have made several misinterpretations of what they have said. But I don't have time for that.
@Daniel_MC Why do you think 2 is the same as 1? Isn't inventing such algorithms different from making them train quickly? (I may also have misread it.)
@Heliscone Realistically, when you develop an algorithm you do both at the same time. I think you need that real-time feedback. We won't know we've done 1 unless we've done 2 as well.
@harfe yes. If a paper argues for anything more than 1% and is more accepted by the scientific community, this resolves YES.