These are a few quotes you’ll commonly see right now:
- “I’ve not written any code in months now”
- “I don’t even read the code anymore. It’s that good”
- “We’re a tiny team that now ships full stack apps end to end for our clients instead of just designing them”
- “I’m a PM and feel more empowered than ever. I made { some big project } in just 48 hours without writing a line of code”
- “I built an entire { massive project } in 48 hours. The future is here”
In many cases, it doesn’t matter. At this point, most of these demos remain prototypes running locally, semi-functional and built just for the demo.
However, there’s at least one case I found through independent research, deployed to the web, that left me wondering about software’s future. In 2025, I discovered that a web app hosting provider had two unauthenticated WebSocket endpoints for their AI chat feature, meaning anyone who found them could connect and read the messages flowing back and forth with the assistant. Not as critical as things could get, but this was built by a real development team, not just non-technical folks. I found it because the developers bragged about no longer needing to read or write code in this “new world of software.”
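I don’t know that provider’s actual stack, but the failure class is simple to sketch. Here’s a minimal Node/TypeScript example using the ws package, assuming a hypothetical token-in-query-string session scheme (every name here is illustrative, not the provider’s code); the entire bug is skipping the session check during the HTTP upgrade.

```ts
import { createServer } from "http";
import { WebSocketServer } from "ws";

const server = createServer();
const wss = new WebSocketServer({ noServer: true });

// Hypothetical session check; a real one would verify against a session
// store or a signed token. The vulnerable shape simply omits this step.
function isValidSession(token: string | null): boolean {
  return token !== null && token.length > 0; // placeholder logic
}

server.on("upgrade", (req, socket, head) => {
  const url = new URL(req.url ?? "/", `http://${req.headers.host}`);
  const token = url.searchParams.get("token");

  // Without this check, any client that discovers the endpoint can
  // complete the upgrade and read the chat traffic.
  if (url.pathname !== "/chat" || !isValidSession(token)) {
    socket.write("HTTP/1.1 401 Unauthorized\r\n\r\n");
    socket.destroy();
    return;
  }

  wss.handleUpgrade(req, socket, head, (ws) => {
    ws.send("authenticated chat stream"); // stand-in for the real relay
  });
});

server.listen(8080);
```

The vulnerable version is this same code minus the check: every upgrade completes, and the endpoint streams chat traffic to whoever asks.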
Recently, OpenClaw also took off as a way to give an AI assistant access to “everything.” In just the past month, multiple issues have been uncovered: thousands of agents found publicly accessible on the internet on the default port, and a one-click RCE where clicking a malicious link could exfiltrate a user’s gateway token and hand an attacker full access. After setting up the agent, many people like to talk about it online: how amazing it is, how they set it up, what they do with it. Of course, an experienced developer or security-conscious user can configure things properly, but what we’ve seen is that average users frequently don’t. The average consumer has rarely, if ever, been in the driver’s seat. And now they’re announcing that they’ve given an LLM access to possibly everything, and left it reachable by the whole world on port 18789.
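The exposure side of that is often a one-line configuration difference. This isn’t OpenClaw’s actual code, just a sketch of the general class: a local agent gateway bound to all interfaces instead of loopback.

```ts
import { createServer } from "http";

// Stand-in for an agent gateway's request handler.
const gateway = createServer((req, res) => {
  res.end("agent gateway\n");
});

// Exposed shape: 0.0.0.0 listens on every interface, so the agent is
// reachable from any network that can route to this machine.
// gateway.listen(18789, "0.0.0.0");

// Safer shape: loopback only. Remote access should go through a tunnel
// or an authenticating reverse proxy instead.
gateway.listen(18789, "127.0.0.1");
```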
By publicly announcing anything like the quotes above, someone is effectively disclosing several things at once to the entire world, not just to their audience but to every bad actor too:
- My product is probably missing fundamental secure-design principles: maybe no logging at all, poor secrets management, and so on.
- If this thing gets pwned, I might never know. For all I know, someone might already be sitting on the box, ingesting every new signup.
- I have no idea how this thing works, and if I have to figure it out, finding or fixing anything will be a case-by-case scramble. Customer service is therefore hit or miss.
- I’m at the mercy of my provider(s), trusting them to take care of everything I didn’t do. For example, if they don’t provide a really good built-in WAF, logging, or code scanning, my users’ data or PII might already be at risk, and I wouldn’t know it.
- My product probably suffers from LLM failure modes: shortcuts that adversaries know LLMs like to take. These are still being catalogued, but think client-side secrets, non-parameterized queries, and IDOR (see the sketch after this list).
- Maybe I don’t care about any of those things? Use at your own risk?
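For a concrete sense of those shortcuts, here’s an illustrative Express + pg sketch (not taken from any real generated project) showing a string-built query next to an IDOR. Note that parameterizing the query fixes injection, but without the ownership check any authenticated user could still read any record; the `res.locals.userId` value is assumed to come from auth middleware not shown here.

```ts
import express from "express";
import { Pool } from "pg";

const app = express();
const pool = new Pool(); // connection settings via PG* env vars

// Shortcut 1: string-built SQL. req.query.userId flows straight into
// the statement, so a crafted value can rewrite the query (injection).
app.get("/orders-insecure", async (req, res) => {
  const result = await pool.query(
    `SELECT * FROM orders WHERE user_id = '${req.query.userId}'`
  );
  res.json(result.rows);
});

// Parameterized, plus an ownership check. Without "AND user_id = $2"
// this is an IDOR: any logged-in user could fetch any order by id.
app.get("/orders/:id", async (req, res) => {
  const userId = res.locals.userId; // assumed set by auth middleware
  const result = await pool.query(
    "SELECT * FROM orders WHERE id = $1 AND user_id = $2",
    [req.params.id, userId]
  );
  if (result.rows.length === 0) return res.status(404).end();
  res.json(result.rows[0]);
});

app.listen(3000);
```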
Software sits in a unique position: low regulation, high margins and value, and accessibility. It’s now seen as more accessible than ever, and I agree that it is. But that cuts in every direction, unfortunately. It’s easier than ever to deploy a fully featured web application to the world in a day, and that comes with little to no responsibility.
I’d love to say that the fix is for developers and non-developers alike to really take the time to understand the OWASP Top 10, and beyond that, to build a base in how the internet and applications work and carry that knowledge into everything they build.
This is, of course, not how the world commonly works. So it seems likely that providers will grow in scope and do the heavy lifting for deployment, security, privacy, and user education. We’re already seeing this, with Vercel’s OSS bounty program and their payouts for React2Shell. I appreciate this, because it acts in part as taking responsibility for their v0 tool, which deploys vibe-coded projects to the world.
Regardless, no platform provider is perfect, and by shifting all of this scope onto them, rather than expecting it from a project’s development team, we set ourselves up for disasters. In the end, the human in charge is supposed to understand their responsibility for the safety of their users and their data. But in practice, that responsibility has never been enforced in any meaningful way.
With just one 2017 breach of Equifax, 147.9 million Americans had private records compromised. With a tiny speech-generation model like Qwen3-TTS that runs on a phone, someone’s voice can be convincingly cloned in a few minutes. Video cloning and spoofing are getting more convincing every few months too. Each new insecure application deployed to the web doesn’t exist in a vacuum; it feeds an environment where the material for exploitation is abundant and the tools to use it get cheaper by the month.
However, like I said, getting hacked may never surface or carry consequences at all. The vibe coder asking “who cares” isn’t right, but they may go unpunished. We’re in a moment where the past 20 years of monetary incentives, accessibility, and current practices have intersected to give us vibe coding to save time and money, where the only real cost of deploying insecure code is a hit to your integrity. Maybe some fines, if you’re big enough. The incentive structure simply doesn’t punish shipping insecure software at the scale it’s happening.
Going forward, does this increased attack surface actually matter? As a developer focused on security and privacy, I think it does. More applications are being built with less scrutiny, deployed faster, by people who can’t audit what they’ve shipped. At the same time, exploitation is getting easier and cheaper. Developers are expected to do more with less, faster, and with “agentic AI,” a combination that points toward more frequent breaches.
I think the unique combination of factors that made the software industry boom could also be the catalyst that changes how it looks: fewer people, and more frequent security issues that aren’t taken very seriously. Radically changing the regulatory side could kill the industry’s accessibility, while leaving it as it is might leave it increasingly insecure. This new attack surface matters, but what I’m not sure about is whether the industry has a mechanism to make anyone care, or what it would take to create one.