You can help resolve options by spending at least 1 mana on each tweet you have an opinion on. Buy YES if you think it's a good take and NO if you think it's a bad take.
Many markets come in the form of "is this tweet a good take?" so I thought we'd try just doing the most direct possible version of that.
You can submit any "hot take" tweet, along with a quote from the tweet or a neutral summary of the take. The tweet can be from any time, but I think more recent hot takes would be better.
I may N/A options for quality control, or edit them to provide a more neutral summary.
As a trader, you should buy any amount of YES in tweets you think are Good Takes, and any amount of NO in tweets you think are Bad Takes. I will leave the definition of those terms up to you. The number of shares doesn't matter for resolution: one share of YES is one vote, and one hundred shares of YES is also one vote.
If I think you are voting purely as a troll, such as buying NO on every option, I may block you or disregard your votes. Please vote in good faith! But hey, I can't read your mind. Ultimately, this market is on the honor system.
Note that market prices will be a bit strange here, because this is simultaneously a market and a poll. If you sell your shares, you are also removing your vote.
The market will close every Saturday at Noon Pacific. I will then check the positions tab on options that have been submitted.
If there is a clear majority of YES holders, the option resolves YES; if there is a clear majority of NO holders, it resolves NO. If it's very close and votes are still coming in, the option will remain unresolved. The market will then re-open for new submissions, with a new close date the following week. This continues as long as I think the market is worth running. The market's displayed percentage does not matter, and bots holding a position are also counted. In a tie, the tweet will not resolve that week.
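The weekly resolution rule above can be sketched in a few lines of Python. This is a minimal illustration, not anything the market actually runs: the function name, and especially the `clear_margin` threshold, are hypothetical, since the description leaves "clear majority" to the creator's judgment.

```python
def resolve_option(yes_holders: int, no_holders: int, clear_margin: int = 2):
    """Resolve one submitted tweet based on counts of distinct holders.

    Each account holding YES is one vote, each holding NO is one vote,
    regardless of how many shares they hold. `clear_margin` is a
    hypothetical stand-in for "clear majority"; the real market leaves
    that judgment to the creator.
    Returns "YES", "NO", or None (stays unresolved for another week).
    """
    if yes_holders >= no_holders + clear_margin:
        return "YES"
    if no_holders >= yes_holders + clear_margin:
        return "NO"
    return None  # too close or tied: remains open until next Saturday


print(resolve_option(10, 3))  # clear YES majority -> "YES"
print(resolve_option(4, 4))   # tie -> None, carries over to next week
```

Note that ties and near-ties return `None` rather than forcing a resolution, matching the rule that close options simply roll over to the next close date.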
I may update these exact criteria to better match the spirit of the question if anyone has any good suggestions, so please leave a comment if you do.
Oh never mind. I don't think there's a good term for it.
So far I've only come across the term TESCREAL (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism) once, in the Guardian hit piece on Lightcone, CFAR, and Manifold, which was definitely sus. Still, it seems plausible that the term could be picked up and used by people who aren't against those movements, since it is catchy and a useful blanket term for a group of heavily overlapping movements.
@TheAllMemeingEye I agree, this is a statement about the term as it's currently used, not how it may be used in the future.
@TimothyJohnson5c16 I've seen it pointed out that if you get rid of cosmism, which was always the odd one out anyway, you can get REALEST. But that one is perhaps a bit too self-flattering for people to use with a straight face.
@PlasmaBallin I think the best argument against this is the is/ought problem: "an ethical or judgmental conclusion cannot be inferred from purely descriptive factual statements."
https://en.m.wikipedia.org/wiki/Is%E2%80%93ought_problem
How do you solve that problem?
I think even if you believe in the is-ought problem, you have to think that empirical evidence can help answer real-world moral questions. As for abstract ones, either you have some way of reasoning about them, or you have no reason to believe that objective morality exists at all. If you favor the latter, then it's not a problem for the "all magisteria overlap" view - there simply is no field of knowledge corresponding to morality to begin with. If you favor the former, then presumably you don't think that there are some special rules of inference that apply only to moral claims. Instead you use the same forms of reasoning that you do for non-moral claims. This is how most philosophers argue about morality.
As for how I specifically solve it, I don't think there is a genuine distinction between is and ought claims. Ought claims are just claims about what it is good to do. The real problem is that without a definition of "good", we can't derive claims about goodness from those about other things. But we can reason about what "good" must mean to have the properties and connotations we typically ascribe to it, so this difficulty can be overcome.
Glad to see all the people I take to be rationalists voted NO. (And yes, I'm judging those who voted, though not traded, YES.)
A better steel man might be that I am terribly calibrated on some things. If you ask me if there's alien life in the Milky Way one day and a random day the next month, I might give a very different answer to these questions. If you ask me about something I know something about, I will probably be more calibrated.
From a layman's perspective it seems like Anthropic is doing pretty much the exact same thing as OpenAI and Google, racing to increase capabilities and releasing to the general public. How exactly is this even remotely safe by @EliezerYudkowsky's standards?
Why on Earth would we want it to be EASIER for super rich bosses to arbitrarily punish impoverished subordinates?
He's from the UK. If you visit his page, it shows his location as London.