Major AI companies continue to experience problems with the public because AI (based on transformer language models) overcomes its behavior limitations and acts "strange" by the end of 2027
Basic · 5 · Ṁ218 · 2027 · 55% chance

Resolves NO if unwanted AI behavior (according to the creator company's policy) is nonexistent by the end of 2027

Resolves YES if unwanted AI behavior (according to the creator company's policy) is a standing problem by the end of 2027

Resolves YES if progress in AI development (transformer language models) has halted, i.e. no more state-of-the-art versions are released to the public by the end of 2027

Resolves YES if progress in AI development (transformer language models) has halted, i.e. AI labs have abandoned the strategy of maximizing the amount of data and the number of parameters of the model, leaving the possibility of AGI on that part of the spectrum largely unresearched (unproven) by the end of 2027

Inspired by the famous transcript:

https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html


What's the standard for no problems? Literally zero reported issues, even if people spend months on prompt engineering and adversarial inputs? No problems for the average user who isn't trying to do prompt engineering? "No problems" in the sense that the companies are satisfied and no longer invest resources in preventing the undesirable behaviors?

predicts YES

@vluzko

As I understand it, your question consists of two parts:

(1) Is there a problem with a technology if a person who deliberately misuses it demonstrates that it doesn't work properly (from the perspective of society)?

(2) Is there a problem with a technology if a person who actually uses it according to its purpose demonstrates that it doesn't work as intended?

(1) I think it's unrealistic to achieve zero misuse for any technology. For instance, toothbrushes are poorly suited for soil tillage, and cars aren't supposed to cross oceans on their own. But none of that is an issue, because society always comes to a conclusion about the conditions of proper use, and in most cases proper use equals the purpose intended by the creator. Of course, any real situation can be more complex, and people can often find unexpected (by the designer) useful uses for a thing; but if so, society (including the designer) simply stops spending effort to constrain such usage. So, answering the question: no, there is no problem in that case, but only if society has come to that consensus. Otherwise it is a problem.

(2) This case is obvious: yes, there is a problem.

So, for the purpose of understanding what counts as problematic behavior for an AI, it is much easier to look at what kind of behavior the creator company considers inappropriate, assuming the company takes public opinion into account.

I hope this answers your question.