If Tetraspace does X will she think it was a great idea in retrospect?
27 · Ṁ3133 · Jan 1

Gets a job in the US · 90%
Bikes around her city · 88%
Tries to get any old software job in the US · 73%
Hang out in VR (with cyborgisms) · 71%
Tries to get an alignment-related job in the US · 66%
Plays a full song on the piano · 64%
Meditates/practices intentionality · 51%
Takes MDMA alone · 48%
speak American English · 47%
parler français · 39%
Attempts to slip the word "rationalussy" into a conversation with someone who has never heard of Manifold · 14%
Stays awake for 72 hours · 7%
Hangs out in real life with other rats/postrats/etc · Resolved YES
Dyes hair white · Resolved YES
Quits their job with no plan · Resolved YES
Stays awake for 24 hours · Resolved YES
Attend/host a Christmas celebration with friends (on the actual day or eve) · Resolved N/A
End of year trip · Resolved N/A

You should propose more values of X!

Joins a dating app that doesn't have prediction markets

@Tetraspace Resolving NO; I joined Duolicious and despite how based many of the people seem there I ended up not talking with anyone.

Hang out in VR (with cyborgisms)

funny how i've done this a lot in real life but still not in VR

Tries to get an alignment-related job in the US

The only reason to do this over a generic software job or even an AI job is if you don't think you can survey the field yourself.

A lot of people use others as a sort of distributed memory. You don't have to remember everything if you know someone who's keeping up-to-date for you.

I never trust people and I always do everything myself. So I would never need this. I mean, I still learn things from people, but given 6 months' lead time I don't get surprised. And I don't think that "working on alignment as a job" is going to help directly. Not unless you're scared of a specific project (maybe Q* is dangerous, right? So you work on alignment just to get placed on Q* so you can keep watch over it). It's the indirect effects of knowing when to jump on something that are helpful.

So I would never do this, but you might if you're the type to use people as distributed memory.

Do consider a generic AI job too. If you improve "AI capabilities" but it's like a voice cloner or a generic chatbot for Adobe or a diffusion model, who cares? Keeps you up-to-date and doesn't actually hurt anything. Designing algorithms to speed up LLM pretraining, or helping on the very biggest most expensive models, maybe you're actually contributing to the problem. But don't be scared of most AI jobs. You get more value in expectation helping with the problem than you're contributing to the problem, if you feel comfortable jumping on an important problem later.

Gets a job in the US

This is a very good option actually. From your recent tweets, I think you want to do alignment research. But you have at least 4 years before AI will be any potential threat at all, and I don't get the impression that you would be immediately effective. So there's no rush in the sense of "action is urgently needed and action now has high impact".

If you don't believe that you have at least 4 years to live, I can suggest a project that would basically prove it with 3-6 months of work. There's also lots of other projects that would allow you to precisely bound what current models can do. I'm doing some of them too, but not for alignment.

If you do believe that you have at least 4 years, you're probably not that scared of just transformers, because those can be scaled with money and data in predictable ways, so if you thought transformers alone were dangerous you couldn't be confident in that timeline. So you're waiting for some new architecture. And waiting is actually a good idea, because general agents are extremely hard to model. Like "research effort setting up a mathematical field for 20 years" level, not "I want to complete a project and get some results to improve my x-risk chances" level. But if there's any hint of a specific architecture, you would jump on that, get precise bounds on what it can do and where any dangers are, and write them up, and I guarantee you we can get OpenAI or DeepMind to read those results and take precautions.

There's a huge efficiency boost in having an actual specific type of AI algorithm that you're trying to align or study misalignment on. So waiting is not necessarily a problem if you actually want to reduce x-risk: if 5 years from now you need to jump on something, you want to have a work history and savings so you can give it 100% attention.

So if you just make money for 2 years and watch for upcoming architectures (big AGI labs are not actually that secretive; details and data are, but usually not algorithms), you'll be better off than if you split your attention over the next couple of years. And US jobs are the most efficient way to stockpile money.

I don't know exactly what your projects are. Do you have a specific idea in mind? Or is it "I have a cause and I need to think of a way to do it"? If it's the latter, get a job and make an effort to let ideas stew for a year or two. Parallelize it.

I should say: There's benefit in doing alignment stuff now, not because it gets you anywhere, but because it develops habits that will be useful later. If you never do any research, then when it would actually be useful, you won't be able to contribute. So I think you want to keep yourself sharp. Watch for stuff that could be useful, don't necessarily deep-dive, but watch it just enough that you know you could contribute if you felt it valuable.

And if you want to do something and don't know what, you can always do surveys. Write a survey of every optimization algorithm. Every architecture. Every transformer trick. Every dataset. Every search augmentation on transformers. Every Q* rumor. Keep watch over everything that could possibly be a threat. Because DeepMind and OpenAI and everyone, they're not going to come up with something new that surprises you if you know all the literature. If you do a survey of everything potentially related, even a shallow one, then you can come up with ideas anytime you have downtime.

Maybe you're at work, writing a unit test, waiting for it to run, and you're thinking about how you might bound a search on an LLM sequence search or something. You don't have to be "working on it" to come up with ideas.

BUT - once you do see something valuable, and there's a specific project that you think has a shot at defining bounds on capabilities, or a way of demonstrating a dangerous behavior in a controlled fashion, then you can jump on it. Because you know the whole field. Nothing takes you by surprise. You know approximately what the big companies are going to do, so you might be 6 months behind but you still know how to be relevant. You jump on your project, you do the experiment, you write it up, you demonstrate something specific, people read it, they take some precautions because OpenAI and DeepMind and Facebook all have people that would be willing to make minor adjustments if you document it.

This doesn't help unless you have 6 months to a year of slack time. If you're worried about an AI becoming a super-AI in like 30 minutes on a supercomputer, and it's fully general so it's discovering completely new optimization processes that you haven't researched, then this won't work. This won't prepare you.

But if you think, "at some point in the next 10 years, something dangerous could happen. Maybe we'll have 6 months, 2 years, 5 years of warning. I'll have approximately an idea of what it is. And I can mitigate it if I'm watching closely". I think you can usefully help there.

@Mira

If you don't believe that you have at least 4 years to live, I can suggest a project that would basically prove it with 3-6 months of work. There's also lots of other projects that would allow you to precisely bound what current models can do. I'm doing some of them too, but not for alignment.

What 3-6 month project are you referring to here?

bought Ṁ10 of "Gets a job in the US" YES

probably having a job somewhat closer to your friends would be good

how would the hair bleaching thing work, would you do it yourself/get it done at an okay salon/get it done at a high-end salon?

@goblinodds current plan is okay salon this Saturday!

@Tetraspace !!! godspeed, i hope i lose mana on this bet then

@goblinodds omg i just remembered to check. glad the hair bleaching went well!!!

Resolving the Christmas / NY ones N/A because they didn't happen; feel free to make new markets for 2025.

@Tetraspace 2024 even

Buys and wears white cat ears

😔 There are probably better cat ears out there that I should find, but the two pairs I have now are uncomfortable and interfere with hairflow

Quits their job with no plan

If you dislike your job and have 6 months of savings, IMO it's not actually that risky.

If you don't have any savings, it's better to stay at the job and reduce spending until you reach a specific amount of savings. (phrasing it as a concrete goal might help motivate you)

Hangs out in real life with other rats/postrats/etc

Went to the Schelling house party on Saturday and it was great, so many cool people around

biking is great

Biking and piano are my two longstanding hobbies from high school, and I've also dabbled in meditation for a while. All three make for generally good pastimes (which is why I suggested them). However, all three also have a learning curve to some extent, which could make it a little hard if you are going to do any one of these for the very first time.

On the other hand, if she does this, it would probably be after having a good plan... This market is implicitly conditional on Tetraspace choosing to do the thing, which distorts it.

Quit job with no plan will be a better plan in a little bit imo
