Unless otherwise specified, the options are about the state of OpenAI's latest video model at the end of 2025. Options about things that might happen before then, or by a specific date before then, are also acceptable.
I'll N/A duplicates and options I consider invalid. If an option refers to an external source, I encourage the option creator to notify me when/if the option should resolve.
For the purpose of this market, any Text-to-Video model released by OpenAI after Sora will count.
Some clarification:
"released" counts as widespread public release OR widespread access for corporations, through partnerships or whatever else
the "Nth" model doesn't require release in the sense mentioned above. Sora is the 1st model, whether or not it ever gets "released". Any new text-to-video model announced after Sora counts as the 2nd model, and so on.
@JamesF Can you clarify what "sued over the model" means? Is this sued for damages due to video generated by the model, or sued over their use of training data for the model, or sued over using a proprietary file format as part of the model, or something else entirely?
I'm trying to get a feel for what kind of lawsuits should qualify, or if any lawsuit even slightly related to the model should qualify.
Latest Sora ‘short film’ released here - https://youtu.be/yplb0yBEiRo?feature=shared
But given that most official OpenAI videos only get 2M views at most, I doubt it'll reach 100M
@JamesF good question, I didn't really expect what's happening with the 1st model to happen when I made that option. I guess I'll go for:
if users or corporations can widely request access and a large number of them get access, it counts as released
open to feedback / disagreement; if many people saw it differently, I'm willing to consider N/A and remake the option more clearly
A few things here are brought up, though not much that hasn't been mentioned elsewhere.
They say that they won't be making Sora available "any time soon".
@Bayesian No; if you use Dall-E through your ChatGPT Plus subscription, I don't believe you're charged separately for the image generation.
@apetresc (Unless you mean, like, separately, because you could use DALL-E 3 through the API as well as use it for free through ChatGPT Plus)
@CharlesPaul The phrasing means it's asking whether it'll be banned at the end of 2025. To ask whether it would be banned for any temporary amount of time in any EU country, you could ask "it'll have been banned in at least one EU country for at least some amount of time" or something similar
Does “free to use” resolve YES if they have a setup like “each user gets 30 minutes a month of video free, but you have to pay beyond that”? What about if they have a situation similar to GPT-3 before November 2022, where you got free tokens for signing up, but once those were gone you had to pay?
@Bayesian I guess I'd argue YES in the first example and NO in the second, the difference being that the tokens renew themselves regularly and give the user continued use. Happy to defer to your resolution if it gets complicated.
What if it's free via, like, Bing Chat, and sustainably so, not just a one-time free amount of tokens?
It's interesting that, given his NO position, people are essentially betting that @EliezerYudkowsky will change his mind and state during the next two years that the Sora line is a threat to civilization.
I can see this logic, in that this model is the closest thing to "AGI" (whatever that means) that exists now, and there could be some rapid advance that surprises everyone. He would view a capabilities advance very negatively, so I might take YES on this if it were cheaper.
@SteveSokolowski You could have a video model that outputs a perfect ten-hour television series exactly to your specifications, and I don't think it would be as dangerous as I expect LLMs to be in a few years.
@SemioticRivalry I think I said in another market that people are missing the big picture with Sora.
It's not about video; watching movies is a sideshow. It's about Sora generating an internal scene, making predictions about what is going to happen, and moving through it. You can tell it to do a lot more than an LLM, and connect its output to take real-world actions based on its time-series predictions.
@SteveSokolowski Training something to generate text can only succeed if that something itself understands human thought. But generating videos (most videos, at least) only requires understanding concepts like 3D space, time, and physics, which are much less likely than human thought to lead to dangerous capabilities like agency, planning, and general intelligence (imo). Some videos do require understanding human thought to generate, so (imo) your proposed system will be possible, but I think it will always lag behind text-based models in "AGI-ness".