January 2025 updates
All these updates are ultimately just speculation. Especially when it comes to anything related to the future of entire societies and governmental change, it’s really hard to say what’s coming in terms of wealth, work, power, and money. Paul Graham replied in a post on X this week: “Hard to say if it will be this extreme, but if you average this together with the people saying that all the programming jobs will disappear, you have a reasonable estimate of what we know for sure about the future of programming as a career: nothing.”
It’s true: we really don’t know much about the future of programming, or about much else coming in the rest of the 2020s. But I wanted to give an update on how I’m feeling and why I changed my mind on a lot of things. These updates are also meant to serve as a reference as my ideas continue evolving, and to explain to others why I am no longer pessimistic about or offended by the thought of an AI-heavy future.
A lot has changed since June of 2024. I have been thinking about the questions below (under ‘My initial questions’) over the past six months while building heavily with LLMs, and I think I now have an answer for each of them. They’re pretty much all opinions, certainly with some blind spots, and written with an optimistic outlook.
Overall, my perception has changed a lot after thinking about a few different topics:
- realizing that a lot of what we still do manually is pointless
- asking myself a lot about my own predictability
  - what do I do in my day-to-day that is completely unpredictable?
  - how much real creation do I do in my life?
  - why isn’t that 100% of what I do, and why can’t I spend all of my time doing what I really love?
And then I started thinking about everyone in general. The truth is, we don’t need to be doing a lot of this labor anymore.
A big change was that I stopped seeing AI models as ‘theft.’ Sure, training on private data without permission could be called theft, but my view changed when I stopped focusing on protecting our current way of life, where everyone does the same repetitive tasks every day. Instead, I started thinking about a future where we don’t waste time doing things manually that machines could handle. Basically, I stopped feeling offended by the idea of job loss.
I realized that inevitably, many people worldwide will soon ‘lose their jobs.’ This can mean different things to different people; many point out that historically, humans have always found something to do and created new types of work. The idea of losing a job worries people and puts them in a protective mindset: maybe the solution is to become someone who doesn’t lose their job in their field, to be one of the last standing. I believe we’re approaching the point where being protective and anti-advancement for the sake of preserving what we have right now isn’t worth it.
What I’ve accepted is that people whose work is highly repetitive and predictable, and who provide value mainly through the amount they know, may be in trouble, though possibly just in the short term. I think that with safeguards in place, this is a good thing. I see it this way: the more people impacted by job displacement, the better in the long term. As time goes on, less predictable roles are affected, until a majority is no longer in traditional ‘employment.’ I now believe it’s going to hurt at first for a few, then for many, and then for none. With proper planning to build these safeguards into society, the painful phases don’t have to be nearly as scary as a lot of people imagine.
The hard push toward AGI is really like nothing we’ve seen before in terms of speed and investment, and I don’t think its desired outcomes can be compared to an industrial revolution. The possible results of these efforts take us beyond the idea of work as we know it.
One complication is that different fields will likely be revolutionized at different times, and it’s hard to say how fast each will get there. For example, there are still bottlenecks in areas like agriculture, construction, and patient care that need specialized, accurate robotics.
Optimism says that adoption across fields will happen all at once after a solid AGI contender is released, but there’s also the possibility of slow, painful change that creates a lot of new problems at first. For example, if only half of society is obsoleted rather than everyone, it raises interesting questions about what’s left of work on the path to post-scarcity. I personally feel optimistic about this slower-adoption scenario, where each field is replaced independently on its own timeline. I mentioned safeguards above, and I think they will look different across societies depending on the number of displaced workers at a given point in time.
The question of safeguards is complex. What’s certain to me is that they are needed: without a plan, we might see large segments of the workforce become quickly obsolete without any backup options. This relates partially to AI alignment, but I am more interested in the current period, where we effectively have prediction engines strong enough to replace a significant portion of work once they’re implemented as ‘agents’ that can adaptively handle daily tasks. At this point, my concern isn’t about AI going rogue, but rather that governments are not prepared for potentially Great Depression-level unemployment appearing seemingly overnight with no real replacement gigs.
2025 is being called the year of the ‘AI agent’, which to me describes products that autonomously take action for you to reach some desired goal. We already have the technology to make years of progress in this space without any more model advancements, but model advancements are coming too. With each improvement, the existing agents just get stronger: cheaper, faster, and overall better at what they do. And this focuses solely on LLM technology, which may be augmented further by new findings, and even new types of AI, that improve performance in many areas.
Right now, the limiting factors of agents are in multiple areas: price to run, speed to complete tasks, and accuracy of results (and reasoning/understanding, though that’s a different issue). All of these limitations are being addressed at an incredible pace. As a result, we are seeing computer-use become a major focus (e.g., Anthropic and Dia by Browser Company), and this could be a huge source of efficiency gains and a turning point for knowledge work.
The agents are being built on principles that will only make them stronger over time as their underlying models improve. These agents don’t really ‘break’ anymore the way a scraper thrown together to consume APIs or page elements in rigid ways used to. Agents are being made to act as a human would, essentially spoofing a human’s usage with limited access to the suite of tools their owner has access to. So we are basically almost ready to send 24/7 active AI employees out into the wild, as they quickly drop in price while becoming more accurate and rarely breaking.
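To make that contrast concrete, here’s a minimal sketch of the difference between a rigid scraper and an agent loop. This is only an illustration, assuming a hypothetical `llm()` completion callable and made-up tool names; real agent frameworks differ in the details.

```python
# A rigid scraper: breaks the moment the page layout changes.
def scrape_price(page_html: str) -> str:
    marker = '<td class="price">'                  # hard-coded selector
    start = page_html.index(marker) + len(marker)  # raises if the layout shifts
    return page_html[start:page_html.index("</td>", start)]

# An agent loop: re-decides its next action from whatever it currently
# observes, so a layout change is just new input, not a crash.
def agent_loop(goal: str, tools: dict, llm, max_steps: int = 10) -> list:
    """`tools` maps names to callables the owner has granted, e.g.
    {"open_url": ..., "click": ..., "read_page": ...} (hypothetical names).
    `llm` is any text-completion callable."""
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # The model sees everything so far and picks the next action.
        action = llm("\n".join(history) + "\nNext action ('tool arg' or DONE):")
        if action.strip() == "DONE":
            break
        name, _, arg = action.partition(" ")
        if name not in tools:
            history.append(f"error: unknown tool {name!r}")  # recoverable
            continue
        history.append(f"{action} -> {tools[name](arg)}")
    return history
```

The important property lives in the loop: failures become observations the model can react to on the next step, instead of exceptions that end the run.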
So I’ve decided there’s no use in gatekeeping knowledge, getting offended, or trying to regulate the change away. Change is here, and for now, productivity is going to soar. In the first stages we’ll see the two-person unicorn company, then massive job displacement and pain, but eventually, I really believe, abundance. At least, I expect an abundance of time. This isn’t like the industrial revolution or the advent of the internet; it’s more like throwing away more than 30% of work. Thirty percent of people won’t just go live on the street, so what’s the path forward? To me, the likely reality is everyone working far less. ‘But they said that about the computer!’ Yes, but in my opinion this is very different. This time around, what are the new tasks? I guess we never know beforehand, but remember, the innovation doesn’t stop with these prediction engines: they continue becoming more generally useful until they can look at most scenarios and solve them.
The questions we’re all asking now are how much pain there will be, and what this displacement is going to look like. My answer is that there will be a couple of multi-year waves: the first not strong enough to force meaningful government reform, the second too large to ignore. Globally, we haven’t really started planning for what’s coming, and there’s certainly going to be some pain. Honestly, possibly a lot of pain, and it might be a tsunami for those affected. My hope is that I’m not naive in thinking that better use of our days is coming once it affects almost everybody. Alternatively, superintelligence might arrive much more slowly than anticipated and hit only work behind a computer hard while robotics improvements lag behind.
The question of how to mitigate the pain resulting from the coming phases of disruption is a very difficult one. I think it’s a deeply philosophical and governmental issue that everyone should consider and discuss with their families. I know that a simple UBI won’t solve all of these problems, and I acknowledge that I don’t have all the answers.
Anyways, here are my rough answers to the questions I asked myself 6 months ago.
- What really is the current social contract of the open web? Are we here to contribute anymore, or just recycle what’s been released?
  - Yes, we are here to contribute, for the eventual benefit of everyone.
- What is the benefit of creating and releasing to the general public? This includes open source projects, issue reports and bug fixes, technical and artistic essays, and any art from photography to painting.
  - Ask what the benefit would be if money had never been involved in the first place. Think past our current standard of monetary value; that is the benefit.
- Are we allowed to ignore robots.txt or other explicit requests to not crawl?
  - I’d still have to say no, simply because it’s rude. But I’m no longer radical about that stance: to create true superintelligence, we probably want the maximum amount of human context.
- Is subscription culture the end-game for 90% of consumer spending?
  - Nope. Maybe for a little while, but a fundamental shift from SaaS to personal software is coming.
- How will societal contributions change due to lowered incentives?
  - This is short-term thinking again. We could see lowered incentives for a little while, but there’s a fundamental shift in how we use our time coming, and it will actually increase incentives.
- Where does “creativity for everyone” go once creatives no longer release publicly? Do trends in design, programming, writing, photography, and many more fields stay stagnant or go private?
  - Same type of answer as the second question: what’s the point of anything then? Trends will keep coming and going, with the barriers to producing quality content continually lowered.
- How is art going to have its sources cited at this point? Dataset transparency doesn’t seem like enough.
  - This is a tough one, and it also depends a lot on the context. For example, realtime computer-use (which I think is going to be very big) can easily cite its current sources; citing all relevant sources is more complex.
- Much further out: regardless of how advanced our systems of the future become, we all need to eat. Is this planned to look like UBI and a breadline?
  - This will also change in ‘waves.’ For a while, yes, I imagine UBI will be important. Eventually, it’s probably the only way; I can’t think of anything else yet, at least. Because if traditional ‘work’ doesn’t exist at some point, there isn’t really consumer ‘money’ in the same way anymore.
  - The big thing, though, is that in this context, UBI and a ‘breadline’ are not the same things they are today. It has nothing to do with wealth redistribution, because in this context, what even is ‘wealth’ anymore? What is wealth once we move beyond traditional jobs and payment based on replaceability, in a world where everyone is replaceable?
There are some other pieces to all of this thinking. It’s really quite radical, and many would consider it naïve. After all, ‘LLMs don’t even get hands right!’ But to me, it’s all about predictability. We have these incredible prediction engines trending toward $0, at ‘PhD level’, and the world is working to build the infrastructure to sustain them. If we are already so predictable, why would we not predict all of the random work we waste time doing?
Of course, another missing piece of this puzzle is ‘reasoning’. We see some pretty awesome results from the existing (and quite new) LLM-focused approaches to ‘agency’. But that’s probably not enough on its own to create all of the change I’m talking about; it will need a separate breakthrough in actually understanding the universe, rather than just throwing everything at the tech we have right now. I can see this breakthrough also happening, whether related or in parallel, which is part of why I hold these new opinions.
Also, even without a reasoning breakthrough, we have already seen plenty of capabilities that will change the world in work productivity and social life. Most people I know over 30 still don’t use LLMs in their daily lives, so there’s still a lack of adoption in 2025.
My initial questions
- What really is the current social contract of the open web? Are we here to contribute anymore, or just recycle what’s been released?
- What is the benefit of creating and releasing to the general public? This includes open source projects, issue reports and bug fixes, technical and artistic essays, and any art from photography to painting.
- Are we allowed to ignore robots.txt or other explicit requests to not crawl?
- Is subscription culture the end-game for 90% of consumer spending?
- How will societal contributions change due to lowered incentives?
- Where does “creativity for everyone” go once creatives no longer release publicly? Do trends in design, programming, writing, photography, and many more fields stay stagnant or go private?
- How is art going to have its sources cited at this point? Dataset transparency doesn’t seem like enough.
- Much further out: regardless of how advanced our systems of the future become, we all need to eat. Is this planned to look like UBI and a breadline?
Context
Before the release of large-scale consumer LLM interfaces in 2022, artists, authors, and researchers released content, contributions, and discoveries to the public for free without much concern. During this age of sharing, I’d say the big worries for producers were plagiarism through direct or indirect copying, for which there are a few solutions. There were various stages of development: starting with visibility benefits, then money through ads, then even greater monetary benefit with the attention economy, which took ads to the next level. At this point, post-AI-boom, I’m really questioning the future of these online economies and any new ones. The concerns around plagiarism are now in an undefined state, where public content isn’t guaranteed to be used only with permission.
The ad attention economy reached a point where many contributions ended up being little more than untruthful schemes. However, important, useful content was still released publicly, such as research and technical patterns or discoveries on personal blogs. Posting in a personally owned space had multiple benefits: attracting eyes that fed into a creator’s attention economy, growing a following, and earning recognition. Now, however, this seems to be changing. With the public increasingly using text-generating interfaces and searching less, posting publicly makes less sense. An LLM will have the content crawled, stripped, and indexed without guaranteed, verifiable sourcing and crediting.
Even with an explicit request not to be crawled by these bots, pages are still being crawled by industry leaders. So, why post for free? No credit, no money, no eyes, and no attention is gained. Until this is solved with adherence to no-crawl requests and highly precise source-citing features, I imagine a lot of good text and art content is going to get paywalled or simply not released for the foreseeable future. I’m sure there are great researchers covering most of these concerns right now, and I’m hopeful that ethical considerations can be incorporated to keep incentivizing people to innovate in public.
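For what it’s worth, honoring a no-crawl request is technically trivial; the mechanism has existed for decades. Here’s a minimal sketch using Python’s standard-library robotparser (the site URL and bot name are made up for illustration):

```python
from urllib import robotparser

# A compliant crawler checks robots.txt before fetching anything.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's crawl rules

page = "https://example.com/blog/some-post"
if rp.can_fetch("ExampleAIBot", page):
    print("allowed to crawl", page)
else:
    print("site asked not to be crawled; a polite bot stops here")
```

The complaint here isn’t that compliance is hard; it’s that it’s voluntary. robots.txt is a request, not an enforcement mechanism.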