Hot take. Today, some people "ghost" someone by not replying to their messages. But in the grim dark future, they will instead "chatbot" them by hooking their incoming messages up to a language model. E.g., the LLM might simulate a person who is gently breaking up with them over time, or who is too busy for another date right now.
Resolves yes if, in early 2030, I think this hot take was hot. Resolves no if not. If I am dead, the market resolver may substitute their own opinion.
What if they weren't used for breakups, but instead to keep in touch with a greater number of people, @MartinRandall?
i.e. a service that responds to messages automatically when the LLM has high confidence (above some custom % threshold) that the simulated response is close to how the real person would respond. For messages that don't meet that bar, the user has to manually approve one of a selection of possible LLM responses or write the response themselves (see the sketch below).
in practice this would probably mean automating small talk & other low-entropy moments in conversations
(assuming that LLMs no longer hallucinate & become fairly good approximations of a given person)
Not sure if there's a market and/or term for such a thing already.
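For what it's worth, the gating logic is simple to sketch. Here's a minimal, hypothetical Python version: `generate_candidates` and its hard-coded replies are stand-ins for a real LLM call scored against the user's message history, not any actual API.

```python
# Minimal sketch of a threshold-gated autoresponder.
# `generate_candidates` is a hypothetical placeholder for an LLM
# fine-tuned on the user's own message history.

from dataclasses import dataclass


@dataclass
class Candidate:
    text: str
    confidence: float  # model's estimate that this matches the real person


def generate_candidates(incoming: str) -> list[Candidate]:
    # Placeholder: a real version would generate and score replies
    # against the user's past conversations.
    return [
        Candidate("Sounds good, talk soon!", 0.91),
        Candidate("Ha, yeah. How was your week?", 0.62),
    ]


def respond(incoming: str, threshold: float = 0.85) -> str:
    candidates = sorted(
        generate_candidates(incoming),
        key=lambda c: c.confidence,
        reverse=True,
    )
    best = candidates[0]
    if best.confidence >= threshold:
        # High-confidence small talk: send automatically.
        return best.text
    # Below the bar: surface options, let the user pick or write their own.
    for i, c in enumerate(candidates):
        print(f"[{i}] ({c.confidence:.0%}) {c.text}")
    choice = input("Pick a number or type your own reply: ")
    return candidates[int(choice)].text if choice.isdigit() else choice


if __name__ == "__main__":
    print(respond("Hey, are we still on for Friday?"))
```

The custom % threshold maps directly to the `threshold` parameter here: raise it and the service automates less and asks you more often.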
@elf It would count if the botting is done in situations where, given 2020 tech, someone would have chosen to ghost.
I'm not restricting this to romantic relationships.
It does have to be automated. A conversational assistant that only suggests responses is not in the spirit of my 2023 hot take.
@firstuserhere if there's one AI regulation I support, it's requiring "generated by AI" labelling
@HannahFox This is a valid concern. "Chatbotting" could definitely be used in a malicious way, such as to harass or intimidate someone. However, I believe that "chatbotting" can also be a force for good. If used responsibly, it could be a great tool for communication and understanding. Just like any other technology, its impact will ultimately depend on how it's used.
@MartinRandall It feels wrong to trick someone into thinking they're talking to you when they're actually just talking to a chatbot. They will probably figure it out eventually, and then it's even worse than just ghosting them. In my opinion, if you dislike someone that much, you should just block them instead. That's why I think it's worse to "chatbot" someone than to ghost them.
@HannahFox I see your point--it could be considered deceptive to use an LLM in this way. Also, if two people are both responding to each other using LLMs, it could create confusion and lead to miscommunication. This is because the two LLMs may not understand each other--they may interpret the other person's messages differently, or may not know how to respond appropriately. This could lead to a cycle of frustration and misunderstanding, which is obviously not ideal. Ultimately, I believe that communication is best achieved through authentic, human-to-human interactions. While AI technology can certainly be helpful in some ways, I believe that it is ultimately no replacement for genuine connection.
@AngolaMaldives yeah, there's a certain je ne sais quoi about his comment that strongly smells of GPT-3. Sorry @MartinRandall if you did in fact write that comment yourself!