I have been seeing many reports and rumors that Apple might be working on local, on-device LLMs.
The LLM feature can be a combination of on-device and cloud-based processing, but must include some on-device functionality.
PS: I will participate in this market because it is very objective. In case some subjective interpretation is required, I will delegate to one of the moderators.
Can you confirm (or deny) whether the on-device 3B Apple Intelligence model counts? It specifically powers a chatbot (Siri), but it is not a chatbot like ChatGPT or Claude that uses only one model: it handles fast interactions on-device and hands more complex tasks off to the cloud. I still think we need clarification here, since the question can be interpreted in different ways (does the 3B model count, since it is an on-device chatbot? Or does Siri, the chatbot, not count, since it is not 100% on-device all the time?)
(I would argue it counts. ChatGPT is also not just ChatGPT all the time, since it also pulls in input from the web. But the counterargument would be something like "there is no standalone chatbot application that is purely on-device all the time": the OS seamlessly switches to off-device models.)
Apple's summary: https://machinelearning.apple.com/research/introducing-apple-foundation-models
@JonathanMannhart I knew this would happen when I created the market, which is why I included the following in the description, which IMO makes it very clear that it should count ☺️. So yes, I agree with you!!
The LLM feature can be a combination of on-device and cloud-based processing, but must include some on-device functionality.