Good Tweet or Bad Tweet? Which controversial posts will Manifold think are a "Good Take" this week?
Basic
334
294k
Jul 6
1.7% | Roko: There's actually no such thing as AGI, merely increasingly cheap and capable models https://x.com/RokoMijic/status/1793970779572633679
98.9% | @IMAO_: "The way Trump is remaining completely silent while the Democrats self-destruct is kinda scary; it's like that scene in Jurassic Park where the velociraptor uses the door handle." https://x.com/IMAO_/status/1808478852953329800
2% | Theophite: Debate was awful for biden. laying in a prediction that his poll numbers will rise between 1.5 and 2 points over the next two weeks. maybe in the immediate aftermath. https://twitter.com/revhowardarson/status/1806511818652934222
6% | Panfilo: [Better for leftists to be culture war AI stoppists than culture war AI utopians] https://x.com/davepanfilo/status/1805625430558355815?s=46&t=ysuToRHdvIMAo_KadMtpHA
96% | Plasma Ballin': All magisteria overlap. https://x.com/PlasmaBallin/status/1805315713030082974
4% | Plasma Ballin': The COVID lab leak theory is a good example of why reversed stupidity is not intelligence. https://x.com/PlasmaBallin/status/1805310117606310061
13% | Francois Chollet: LLMs bypass the need for intelligence by leveraging memorization instead. The ARC benchmark shows that they aren't on the path to AGI. https://x.com/fchollet/status/1800577851873493024

You can help us in resolving options by spending at least 1 mana on each tweet you have an opinion on. Buy YES if you think it's a good take and NO if you think it's a bad take.

Many markets come in the form of "is this tweet a good take?" so I thought we'd try just doing the most direct possible version of that.

You can submit any "hot take" tweet, along with a quote from the tweet or a neutral summary of the take. The tweet can be from any time, but I think more recent hot takes would be better.

I may N/A options for quality control, or edit them to provide a more neutral summary.


As a trader, you should buy any amount of YES in tweets you think are Good Takes, and any amount of NO in tweets you think are Bad Takes. I will leave the definition of those terms up to you. The number of shares doesn't matter for resolution: one share of YES is one vote, and one hundred shares of YES is also one vote.

If I think you are voting purely as a troll, such as buying NO on every option, I may block you or disregard your votes. Please vote in good faith! But hey, I can't read your mind. Ultimately this market is on the honor system.

Note that market prices will be a bit strange here, because this is simultaneously a market and a poll. If you sell your shares, you are also removing your vote.

The market will close every Saturday at Noon Pacific. I will then check the positions tab on options that have been submitted.

If there is a clear majority of YES holders, the option resolves YES. If there is a clear majority of NO holders, the option resolves NO. If it's very close and votes are still coming in, the option will remain unresolved. The market will then re-open for new submissions, with a new close date the next week. This continues as long as I think the market is worth running. It does not matter what % the market is at, and bots holding a position are also counted. In a tie, the tweet will not resolve that week.
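The weekly resolution rule above can be sketched as a small function. This is purely a hypothetical illustration, not Manifold's actual code: the function name, the `margin` parameter (how many votes count as a "clear majority"), and the example counts are all made up for the sketch.

```python
def resolve_option(yes_holders: int, no_holders: int, margin: int = 2):
    """Resolve a tweet option by counting YES vs NO holders.

    Share sizes are ignored: each holder counts as exactly one vote,
    whether they hold one share or one hundred. A "clear majority" is
    modeled here (hypothetically) as winning by at least `margin`
    votes; anything closer, including an exact tie, stays unresolved
    and rolls over to the next week.
    """
    if yes_holders - no_holders >= margin:
        return "YES"
    if no_holders - yes_holders >= margin:
        return "NO"
    return None  # too close or tied: remains unresolved this week


# Example: a 16-16 tie stays unresolved; a lopsided count resolves.
print(resolve_option(16, 16))  # None
print(resolve_option(20, 5))   # YES
```

Note the design choice the market description implies: positions are binarized per holder before counting, so buying more shares changes the market price but never your voting weight.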

I may update these exact criteria to better match the spirit of the question if anyone has any good suggestions, so please leave a comment if you do.

Roko: There's actually no such thing as AGI, merely increasingly cheap and capable models https://x.com/RokoMijic/status/1793970779572633679
bought Ṁ50 Roko: There's actual... NO

I wonder what adjective we should use to describe an AI that is so capable that it can do pretty much anything we can do, generally speaking.

Superhuman artificial intelligence?

Oh never mind. I don't think there's a good term for it.

We've had superhuman narrow artificial intelligence for decades (Stockfish in chess, for example). What we really need is an adjective that means the opposite of narrow.

bought Ṁ50 Theophite: Debate wa... NO

Are LLMs on the path to AGI? Currently tied at 16-16.

has this market run its course? 🫡

Oh whoops, was so busy with the biden drop out drama I forgot to resolve and re-open. Doubtlessly there are lots of good and bad tweets about the biden drop out drama though!

Plasma Ballin': Pieces that use the word "tescreal" are not credible. They are six-degrees-of-Kevin-Bacon-style webs of guilt by association. https://x.com/PlasmaBallin/status/1802823369337029049

So far I've only come across the term TESCREAL (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism) once, in the Guardian hit piece on Lightcone, CFAR, and Manifold, which was definitely sus. It seems plausible, though, that the term could be picked up and used by people who aren't against those movements, since it's catchy and a useful blanket term for a group of heavily overlapping movements.

There's probably a more appealing way to arrange the letters. For example, TESCALER brings connotations of AI scaling.

@TheAllMemeingEye I agree, this is a statement about the term as it's currently used, not how it may be used in the future.

@TimothyJohnson5c16 I've seen it pointed out that if you get rid of cosmism, which was always the odd one out anyway, you can get REALEST. But that one is perhaps a bit too self-flattering for people to use with a straight face.

I've seen others rearrange it to REALSECT lol

Plasma Ballin': All magisteria overlap. https://x.com/PlasmaBallin/status/1805315713030082974

@PlasmaBallin I think the best argument against this is the is/ought problem: "an ethical or judgmental conclusion cannot be inferred from purely descriptive factual statements."

https://en.m.wikipedia.org/wiki/Is%E2%80%93ought_problem

How do you solve that problem?

I think even if you believe in the is-ought problem, you have to think that empirical evidence can help answer real-world moral questions. As for abstract ones, either you have some way of reasoning about them, or you have no reason to believe that objective morality exists at all. If you favor the latter, then it's not a problem for the "all magisteria overlap" view - there simply is no field of knowledge corresponding to morality to begin with. If you favor the former, then presumably you don't think that there are some special rules of inference that apply only to moral claims. Instead you use the same forms of reasoning that you do for non-moral claims. This is how most philosophers argue about morality.

As for how I specifically solve it, I don't think there is a genuine distinction between is and ought claims. Ought claims are just claims about what it is good to do. The real problem is that without a definition of "good", we can't derive claims about goodness from those about other things. But we can reason about what "good" must mean to have the properties and connotations we typically ascribe to it, so this difficulty can be overcome.

Plasma Ballin': All magisteria overlap. https://x.com/PlasmaBallin/status/1805315713030082974
bought Ṁ9 Plasma Ballin': All ... NO

I think this is technically true but not practically true in a way that makes it a good take

It's plainly incorrect. Some "domains" only use certain words, and they cannot say anything about statements that use other words not used by them.

YOU can use something in one domain to comment on the other, but you are already fully outside the first domain when you do that.

Itai Sher: God grant me the serenity to accept there are things whose probability I cannot estimate, Courage to assign probabilities to the things to which I can, and Wisdom to know the difference https://tinyurl.com/5n6vjx3k

This resolved to yes? On a prediction markets platform? 😂😂😂

Glad to see all the people I take as rationalists voted no. (and yes I'm judging those who voted, though not traded, yes)

A better steel man might be that I am terribly calibrated on some things. If you ask me if there's alien life in the Milky Way one day and a random day the next month, I might give a very different answer to these questions. If you ask me about something I know something about, I will probably be more calibrated.

That's not the same thing

Fair

@aidan_mclau: "i fucking love anthropic so much it actually hurts. they have the mandate of heaven. just a bunch of pals making agi safe and fun. no god-complex hypocrisy." https://twitter.com/aidan_mclau/status/1801672936945815565

From a layman's perspective it seems like Anthropic is doing pretty much the exact same thing as OpenAI and Google, racing to increase capabilities and releasing to the general public. How exactly is this even remotely safe by @EliezerYudkowsky standards?

Please don't randomly ping people like that, but I do generally agree. I think Anthropic is obviously doing the unsafe thing in a more safety-conscious way than OAI or Google, but they are still doing the unsafe thing.

@NathanpmYoung: "Probably it should be easier to fire people in general, without cause." https://twitter.com/NathanpmYoung/status/1801916466096205961
bought Ṁ5 @NathanpmYoung: "Pro... NO

Why on Earth would we want it to be EASIER for super rich bosses to arbitrarily punish impoverished subordinates?

@NathanpmYoung: "Probably it should be easier to fire people in general, without cause." https://twitter.com/NathanpmYoung/status/1801916466096205961

Isn't it already pretty easy to fire people for most jobs in the US?

He's from the UK. If you visit his page it shows his location as London.

Ah, that makes more sense then, thanks! It sounds like what he's proposing is similar to how things work in the US already.