Will a Psychology paper containing false data generated by a LLM tool be published in an accredited journal in 2024?
36% chance
LLM assistants and similar tools are notorious for outputting bad data and false citations ("hallucinating"). There has already been a highly public case of this leading to legal malpractice (https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html). Will we see a similar case or cases in the arena of Psychology during 2024?
I'll consider all journals with an average impact factor >10 over the last 10 years (2024 inclusive) that self-describe as being primarily concerned with the field of Psychology.
This question is managed and resolved by Manifold.
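For concreteness, here is a minimal Python sketch of the journal-eligibility filter implied by the criterion above. It assumes "the last 10 years (2024 inclusive)" means 2015–2024 and that impact-factor figures come from an external source such as Journal Citation Reports; the numbers in the example are made up.

```python
# Minimal sketch of the journal-eligibility check for this market.
# Impact-factor values are hypothetical placeholders.

def eligible(impact_factors_2015_to_2024, is_psychology_journal):
    """Return True if a journal qualifies for this market.

    A journal qualifies when its mean impact factor over the last
    10 years (2024 inclusive) exceeds 10 and it self-describes as
    primarily a Psychology journal.
    """
    avg_if = sum(impact_factors_2015_to_2024) / len(impact_factors_2015_to_2024)
    return avg_if > 10 and is_psychology_journal


# Example with made-up numbers: a journal averaging ~12.3 qualifies.
print(eligible([11.2, 11.8, 12.0, 12.5, 12.9, 12.4, 12.1, 12.6, 12.8, 13.0], True))
```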
@Calvin6b82 That's the rub, yeah. There's a 0% chance this won't happen. Whether or not it's caught, you know...
Related questions
Will a Biology paper containing false data generated by a LLM tool be published in an accredited journal in 2024?
40% chance
Will a paper falsified (or containing false data generated) by a LLM tool be published in an accredited journal in 2024?
68% chance
Will a LLM/elicit be able to do proper causal modeling (identifying papers that didn't control for covariates) in 2024?
41% chance
Will a published research paper be revealed to have been written by an AI before 2025?
75% chance
At the beginning of 2028, will LLMs still make egregious common-sensical errors?
43% chance
Will we see improvements in the TruthfulQA LLM benchmark in 2024?
74% chance
Will I write an academic paper using an LLM by 2030?
65% chance
Will an LLM be able to match the ground truth >85% of the time when performing PII detection by 2024 end?
84% chance
LLM Hallucination: Will an LLM score >90% on SimpleQA before 2026?
60% chance
Will we have a popular LLM fine-tuned on people's personal texts by June 1, 2026?
50% chance