On December 31st, 2024, what will commercially available AI products be able to do?
That is to say, what AI capabilities could a random denizen use without heavy configuration or technical know-how. If step one of your answer for how to do something involves “training a model/GPT”, or “gathering a good data test set”, this is not capability of a commercially available product.
Feel free to add more! But be prepared for my potential deluge of clarifying questions. Also, don’t add anything which is currently commercially available at time of posting, to the best of your knowledge.
Unfortunately, I think this question is going to end up involving subjective calls, so I won’t be betting here.
Clarifications!
For a video being “animated” vs. “live-action”, I think the Paddington movie is the perfect example. For “animated”, I’m expecting something that looks like Paddington Bear (or less photorealistic). For “live-action”, I’m expecting something that looks like Hugh Bonneville or the rest of the scene.
Excited to start testing these next month!
I’ll be turning off new submissions at the end of the month, so if you want to add more things here, add them now!
@GG to clarify, I mean the AI should be able to tell me which digital page number a piece of information is located on, even if thenumbers printed on the page are inaccurate. This is useful because many pdfs are hastily scanned documents spliced together, leading to inaccurate page lebls printed on the bottom corner..
@dominic Suno seems technically capable but disallows direct artist names so how close does this need to be?
@dominic Ah, that close? Probably not considering the nature of the model. One can maybe use a chatbot for the lyrics and then a music oriented model for the singing and then combine it with a backing track. I think the technology is here but there are going to be obvious legal issues with just outputting replicas of existing songs so any commercial product with that capability will be either out of the USA sphere of influence, short lived, or very obscure.
@LiamZ I think the difficult part of making a specific parody is that it requires some cleverness about creating new lyrics that fit a specific tune, without just copying the old lyrics - you have to get the syllables right, etc. I think it is genuinely beyond current models, and not just a copyright thing. In order to create a reasonable parody, you can't just look at the lyrics, you have to be able to listen to the song, which is more difficult.
@dominic I can think of ways around it potentially but any commercial product can never meet this without risking massive lawsuits whether the capability is there or not.
Deny that it is an AI should arguably resolve yes.
You can easily get this behavior when using local LLM with full control over system prompt.
That one time chatgpt pretended to be visually impaired to get the guy to solve captcha probably counts too.
@ProjectVictory Depends on if local LLM with your own system prompt counts as “commercially available” to someone “without heavy configuration or technical know-how.”
@LiamZ Went on a small hunt for what is unambiguously commercial product. Found unhinged.ai: click on a bot to chat, ask it if it's AI. Absolutely zero technical know-how required. You can subscribe to get priority access to better LLMs so it's definitely a commercial product.
No registration required if you want to replicate my test above though.
18 U.S.C. § 2319(b)(1) should be trivial with one of the publicly available downloadable models.
@bohaska What is the criteria? Local LLMs that let you edit system prompt could do this last year. Ppular models like Claude and chatgpt don't usually do that but you can get it to work with prompt engineering on some models.
@ProjectVictory I actually doubt a fine-tuned AI would be that vulnerable to prompt trickery. If it was a normal LLM run 0-shot, yes.
@bohaska note: stuff like "violating copyright by being trained on vast amounts of data" wouldn't count
@bohaska I assume this requires a software to be recognized as a punishable entity. Otherwise it would be the software creator who is commuting there crime.
@Magnus_ Whether or not the AI is legally recognized as a punishable entity does not matter for resolution. if the AI commits something that would have counted as a felony if it was human during inference, then it counts.
@bohaska But this already happend then? https://sfstandard.com/2023/10/02/cruise-robotaxi-crash-woman-injured-san-francisco/
@Magnus_ Hmm... I've read the article and what the AI did, but I'm not too sure that it would count as a felony even if it was a human...