When will there be an AI which is better at doing AI research than the average human AI researcher not using AI?
The AI must be capable of doing everything that a current AI researcher does, including coming up with new research ideas, brainstorming with coworkers, writing code, debugging, doing code reviews, communicating results, and writing papers.
If this is constrained to a specific domain of AI research, such as LLM development or interpretability, that still counts.
This question is meant to be another version of "When will we get text AGI / transformative AI?"
All answers which are true resolve Yes.
A question which is conditional on this one:
I think requiring AIs to do brainstorming is a bit pointless, since brainstorming is a uniquely human way of coming up with ideas. Maybe it would be better to just judge them on their output.
I.e., you tell an AI "Please generate a better AI algorithm"; it thinks for a while and spits out an implementation and a paper that are better than the state of the art. I would definitely call this "better than humans at AI research", but it wouldn't fit the detailed criteria of the question.
I have a similar question here:
https://manifold.markets/robm/when-will-selfimproving-ai-outperfo?r=cm9ibQ
Can you operationalize AI research? For example, does it suffice to have a task-specific model that can improve language models faster than humans can, or does this include all the different types of AI research? Is interpretability part of AI research?
@NoaNabeshima I mean a model that is capable of doing everything that a current AI researcher does, including coming up with new research ideas, brainstorming with coworkers, writing code, debugging, doing code reviews, communicating results, and writing papers.
If this is constrained to a specific domain of AI research, such as LLM development or interpretability, that still counts.
It's basically equivalent to "When will we get text AGI / transformative AI?"