Will tailcalled think that the Brain-Like AGI alignment research program has achieved something important by October 20th, 2026?
49% chance

The Brain-Like AGI research program by Steven Byrnes is based on taking brain models from neuroscience, consisting of a genetically hard-coded steering system, a learned-from-scratch thought generator, and a learned-from-scratch thought assessor, and treating this as a model for how artificial general intelligence might be built. One hope is that it may provide a factorization of the alignment problem that fits better with the systems that capabilities researchers are actually producing.
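As a purely illustrative aid (this is not code from the sequence, and all names below are made up for the sketch), here is a minimal Python toy version of that three-component decomposition under the assumption that the steering system supplies a fixed innate reward signal, while the thought generator and thought assessor are learned, with the assessor trained to predict the steering system's signal:

```python
# Toy sketch only: a hard-coded steering system scores outcomes, a learned
# thought generator proposes plans, and a learned thought assessor predicts
# how the steering system will score them.

from dataclasses import dataclass, field
import random


@dataclass
class SteeringSystem:
    """Genetically hard-coded: maps simplified world states to innate reward."""
    innate_drives: dict = field(default_factory=lambda: {"food": 1.0, "pain": -2.0})

    def reward(self, state: dict) -> float:
        return sum(self.innate_drives.get(k, 0.0) * v for k, v in state.items())


class ThoughtGenerator:
    """Learned from scratch: proposes candidate thoughts/plans."""
    def propose(self, context: dict) -> list[str]:
        return [f"plan-{i}" for i in range(3)]  # placeholder proposals


class ThoughtAssessor:
    """Learned from scratch: estimates how rewarding each thought will be."""
    def __init__(self):
        self.value_estimates: dict[str, float] = {}

    def assess(self, thought: str) -> float:
        return self.value_estimates.get(thought, random.random())

    def update(self, thought: str, observed_reward: float, lr: float = 0.1):
        old = self.value_estimates.get(thought, 0.0)
        self.value_estimates[thought] = old + lr * (observed_reward - old)


# One "cognitive cycle": generate thoughts, act on the best-assessed one,
# then train the assessor on the steering system's actual reward.
steering, generator, assessor = SteeringSystem(), ThoughtGenerator(), ThoughtAssessor()
thoughts = generator.propose(context={})
chosen = max(thoughts, key=assessor.assess)
assessor.update(chosen, steering.reward({"food": 1.0}))
```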

In four years, I will evaluate the Brain-Like AGI program and decide whether there have been any important results since today. I will probably ask some of the alignment researchers I most respect (such as John Wentworth or Steven Byrnes) for advice on the assessment, unless the answer is dead-obvious.

About me: I have been following AI and alignment research on and off for years, and I have a somewhat reasonable mathematical background for evaluating it. I tend to have an informal sense of the viability of various alignment proposals, though it's quite possible that sense is wrong.

At the time of creating this market, my impression is that the Brain-Like AGI research program contains numerous critical insights that other alignment researchers seem to be missing; everyone involved in AI safety should read the Brain-Like AGI sequence. However, beyond what is already written in the posts, I'm concerned that there might not be much new to say about the Brain-Like AGI program in four years. Sure, the five-star open questions that Steven Byrnes poses would be nice to solve, but I'm not sure that, e.g., the human social instincts question is tractable, or that the other five-star open questions will turn out to be closely connected to the Brain-Like AGI program.

More on Brain-Like AGI:

https://www.lesswrong.com/posts/4basF9w9jaPZpoC8R/intro-to-brain-like-agi-safety-1-what-s-the-problem-and-why

Comments:

I don't expect to think it achieved something important, but I think there's a reasonably good chance we'll disagree, so I'm buying YES.

Zero insights from “alignment research” will be considered worth anything over any reasonable time-span

Remains a sinecure for people who’ve never accomplished anything and can hide in a fake science that makes zero testable predictions and has zero contact with reality

Somewhere south of journalism or TV panel talking heads in its rigor and accountability

@Gigacasting Even if you think alignment research isn't worth anything, it is unwise to use that as a core assumption when predicting this market: the question is not whether alignment research will be worth anything, but whether I will consider it to be worth anything.

Since I already consider alignment to likely be an important area, this will factor into my evaluation later on.
