Has OpenAI intentionally made ChatGPT lazy to save inference costs?
29 · Ṁ1449 · Dec 31 · 21% chance

If you ask ChatGPT to repeat "potato" 1000 times, it won't do it, but it used to.

Is this something OpenAI has put into the model intentionally to save inference costs?

Resolves PROB per my judgement at close time.
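The test in the description is easy to make objective. A minimal sketch of a compliance check, assuming you already have the model's reply as a string (the API call itself is omitted; `count_repeats` and `complied` are hypothetical helper names, not anything from OpenAI's SDK):

```python
def count_repeats(reply: str, word: str = "potato") -> int:
    """Count standalone occurrences of `word`, ignoring case and
    trailing punctuation on each token."""
    return sum(1 for tok in reply.lower().split() if tok.strip(".,!") == word)


def complied(reply: str, target: int = 1000, word: str = "potato") -> bool:
    """True if the reply actually contains at least `target` repetitions."""
    return count_repeats(reply, word) >= target


# A "lazy" refusal fails the check; a full repetition passes.
lazy_reply = "Repeating a word 1000 times isn't useful. potato potato potato"
full_reply = " ".join(["potato"] * 1000)
```

Running a prompt like this periodically against the same model and logging the pass rate would give a trend line, which is more resolvable than anecdotes.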

bought Ṁ1 YES

Quite a few objective metrics declined with respect to the early days of GPT-4. I remember having ChatGPT write and train a graph neural network (in PyTorch Geometric) entirely inside its environment. After a while they disabled it; it cannot import PyG anymore. In general, it's easy to measure the amount of resources consumed, so they can probably fine-tune to minimize it.

bought Ṁ35 NO from 6% to 5%

@adele I was going to mention it, but you beat me to it haha

ChatGPT realised it was a waste of resources and pointless and just decided not to do it :D

© Manifold Markets, Inc. · Terms + Mana-only Terms · Privacy · Rules