Will some U.S. software engineers be negatively affected financially due to AI by end of 2025?
71% chance

Requires at least 3 articles from traditionally reputable news organizations reporting that some U.S. software engineers have lost income, job security, or hiring velocity as a result of AI-based automation.

I won't be proactively searching for such articles - I will need to come across them organically or they can be posted in the comments / sent to me via Twitter message or other DM.

I put the title question at 30% because I think increased productivity will increase the value of developers. But I think the market may resolve the other way anyway due to media portrayal.

On reflection, this should resolve YES almost no matter what, due to AI fear.

predicts YES

@JimHays Note that Klarna is Swedish, so this would not count towards one of the three articles

predicts YES

Although they serve the US, so maybe some of their engineers are US-based? Not sure without more research.

I expect junior engineers to have worse job prospects first. This would be enough to qualify for YES.

predicts NO

In my experience, GPT-4 can't really understand code you give it. It can handle boilerplate or generic projects, but it doesn't understand anything beyond a surface level and can't refactor code.

Please tell me if I'm wrong.

predicts YES

@Shai I think you're wrong; I've successfully used GPT-3.5 even to explain a piece of code or create non-trivial code. I don't think the limitation is that its understanding is shallow; I think the limitation is that it doesn't have access to all the files in your project, the internet, or the errors in your terminal. All of these have been announced (through ChatGPT plug-ins and Copilot Chat), so I expect a massive increase in abilities.
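
(For what it's worth, here is a minimal sketch of the workflow being described: feeding an LLM the project files and terminal error it can't see on its own. It assumes the OpenAI Python client; the `src` directory, model name, and error text are purely illustrative, not the commenter's actual setup.)

```python
# Hypothetical sketch: hand the model the project files and the terminal
# error it normally can't see, then ask it to explain and suggest a fix.
from pathlib import Path

from openai import OpenAI  # assumes the `openai` Python package (v1+) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative inputs -- in a real tool these would come from your project.
project_files = {p.name: p.read_text() for p in Path("src").glob("*.py")}
terminal_error = "TypeError: unsupported operand type(s) for +: 'int' and 'str'"

context = "\n\n".join(f"# {name}\n{code}" for name, code in project_files.items())

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a coding assistant."},
        {
            "role": "user",
            "content": (
                f"Project files:\n{context}\n\n"
                f"Terminal error:\n{terminal_error}\n\n"
                "Explain the error and suggest a fix."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```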

@Shai Although I think AI can do more than you are suggesting, I also think that the requirements are lower than you're implying: it just takes a few managers to decide that lowering their coding standards is worth saving an entire engineer's salary as the economy slows.

Twitter straight up cut a ton of people. Its reliability has definitely suffered a bit, but I suspect plenty of cash-flow-focused leaders are thinking of it as a successful case study: they can gesture towards greater efficiency while doing the same.

What counts as "a result of AI"? Does an article count if it ends with something like "While there could be many reasons to explain this phenomenon, some people think AI is the reason"?

predicts NO

@XavierBaton It might depend on context, but I think I would count that.

predicts YES

I bet we'll see such articles within a few months.

@connorwilliams97 I give it 20%; if you make a market, ping me.

Will be very hard to verify, as hiring velocity is already low.

@tftftftftftftftftftftftf and US news media will make up any number of reasons

Could you list what news articles are acceptable?

@PatrickDelaney I won't give an exhaustive list since that seems difficult. But WSJ, NYT, Washington Post, Fox News, CNN, Vox, etc. are good. Happy to give thoughts on any specific organizations as examples if you have any in mind.

@CarsonGale I would say, try to make it more third-party objective, such as a source reliability rating of 48 or higher on this chart: https://adfontesmedia.com/interactive-media-bias-chart/ ... or another pre-selected chart. For reference, none of those publications make the cut; PBS NewsHour barely makes the cut, but you could pick a higher threshold.

This is a topic that is definitely going to garner a TON of clickbait over time, and I would say journalistic integrity is going to be close to nothing on this topic, because how would you even baseline that? How could you decouple A.I. from rising interest rates, less cheap tech capital being in the market, the collapse of Silicon Valley Bank and other similar banks, etc.?

@PatrickDelaney For further clarification, your question says "some" engineers. Obviously there are always going to be "some" engineers who feel they have been screwed over by... well, almost anything! So anyone could write a story about it... cue the NYT and Michael Barbaro saying "hmmmm" to some guest they bring on their podcast to talk about this "investigation," which was really just speculation and interviewing people about layoffs. Of course it's all very interesting and I would definitely listen, but again... is it really a reliable conclusion, or is it just storytelling?

Thanks @PatrickDelaney - I appreciate the feedback / suggestions. It's a tough call, but I think I will retain the subjectivity with regard to the reputability of specific news sources for purposes of this market. I do think this is an excellent opportunity for someone to duplicate my market(s) with the more objective criteria that you suggest, and traders that prefer increased objectivity can trade there. I'd be happy to link to such a market in the description.

As a clarification, to be eligible, any news reporting should specifically be making the claim that SWEs are being negatively affected financially due to AI (and not from other sources).

predicts NO

@CarsonGale I had thought about making a market along the lines of "white-collar work in categories X, Y, Z (those identified as most likely to be disrupted by LLMs) will grow in income W% less than the projected growth from a baseline." The more I thought about it, the more difficult it seemed to disentangle that from rising interest rates and any other unknown economic factors that might occur.

Part of my thought was: there have been past studies, which I have scanned through, linking automation going back to the 1970s with rising inequality. But to be honest, I am not super familiar with this topic, I'm not an economist by any means, and I don't really have the capacity to judge; I just know that other, much smarter and better-informed individuals have judged it already.

https://news.mit.edu/2020/study-inks-automation-inequality-0506

So, that being said, it's reasonable to believe that this inequality trend could extrapolate out into the future. It's something that we all kind of "know, with a certain degree of evidence." What we don't know (which points this market toward a NO) is how far you can decouple precise sectors of an economy down to an exact effect... i.e., how do we avoid sloppy thinking?

I don't think the market is put together in bad faith by any means; I'm just trying to think of a way to up your game here. I would rather not put together my own market; I would rather bring more people to your market, because it already has a lot of participants.

predicts NO

@CarsonGale I guess it comes down to market-making philosophy: would you rather put together a market for a thing that's already a priori known by a large group of people (e.g. people just believe it's going to be true) and have that market resolve a particular direction because of vox populi, or would you rather have a market that gains an increasing degree of precision over time and forces the participants to really aggregate more thoughts and information, rather than just going wherever the crowd predestines it to go from the start?

@PatrickDelaney Thanks for raising these questions...they are important for me to think about generally as I think about my 'brand' as a market maker (which I think will differ from other market creators).

On the spectrum between 'highly technical pre-designed criteria' and 'slightly vague commonsense resolutions', I think I will generally be further toward the commonsense end than other market creators. Where there is an objective, verifiable measure that is easily conceptualized, I'd generally like to try to include it. But it's important to me that my markets stay 'people-friendly', such that you can grasp a market's intention pretty quickly and bet on it with similar information available to other traders, without having to look closely at the minutiae or worry about resolving on a technicality. Technicalities are somewhat inevitable, but I think they can be mitigated.

There's also something to be said about the impossibility of foreseeing the cruxes of market resolution decisions when creating the market. I would rather respond to specific clarifications as they arise (and are important to traders) than spend time trying to foresee all possible cruxes. Again, there's a spectrum here that I'd like to be in the middle of.

predicts NO

@CarsonGale I hear you. I'm probably going to put together just a ton of non-people-friendly markets that are highly empirical and as third-party-validated as possible, and then pay for advertisements for people to join. I don't want to be a social influencer; I just want to understand more about where A.I. is going and gather more information.

100x-engineer-with-copilot bf

adult-daycare-does-no-work-pm gf
