
AI, Addiction, Burnout, and the Time Value of Money

Three years ago I, like many others, predicted what would happen as AI gained capabilities. ChatGPT 3.5 was less than two months old when I wrote the following:

There will be winners and losers as AI continues encroaching on new markets and industries. Those best positioned to win in a “ubiquitous talent” future are today’s skilled individuals who choose to embrace AI as a tool to accelerate their creativity.

By now you’ve probably read Matt Shumer’s blog, “Something Big is Happening”. Based on my own recent experiments with AI, I think Matt is directionally correct with his warnings. If you haven’t read his post yet, it’s still worth reading. I’ll warn you that it left me feeling a bit like I had just watched a trailer for Avengers: Infinity War.

It would seem that destiny has arrived.

An image of Thanos wearing the Infinity Gauntlet with the purple and blue stones. A quote to his right states: "In time, you will know what it's like to lose. To feel so desperately that you're right, yet to fail all the same. Dread it. Run from it. Destiny still arrives." —Thanos

What changed in Nov. 2025

When Google and Anthropic released new models in November, the entire AI coding paradigm changed. Shortly after, developers found themselves caught up in a Claude Code-ing frenzy during the final week of December thanks to Anthropic’s Holiday Usage Promotion 🫨 What they discovered was that Claude could now build robust, full-featured applications entirely through prompting and agentic workflows. AI-first software development became a reality.

Then Opus 4.6 came out on February 5th (2026), and other tech sectors began taking notice. When it launched, Anthropic announced they had discovered 500 high-severity vulnerabilities using the model; no custom tooling instructions required.

Just last week Nathan Sportsman, CEO at the security consultancy Praetorian, used $6.01 in tokens to assess OpenSSL for vulnerabilities. Within 8 minutes he uncovered 4 previously undisclosed CVEs. Eugene Lim, a security researcher and author of From Day Zero to Zero Day, created Vulnerability Spoiler Alert, an LLM-powered tool that detects CVEs before they’re published.

The list goes on.

If you’re envisioning Ralph Wiggum on a bus, you’re in good company 🚌 To better understand all the hype I was reading, I decided to dig in and do some prompting of my own.

Ralph Wiggum from the Simpsons sitting on a bus chuckling and saying "I'm in danger"

Experimenting for myself

Before going to college, I seriously considered pursuing a career in finance and investing. Thankfully I’ve built a successful career in a field I’m passionate about (application security), but my fascination with investing stayed with me. At the recommendation of my former colleague (thanks Maciej) I picked up Nassim Taleb’s The Black Swan to accompany my reading of Benjamin Graham’s The Intelligent Investor and Daniel Kahneman’s Thinking, Fast and Slow.

Reading this collection of books sparked an idea for how I could validate the latest Claude Code-ing experience for myself. I decided to design an Electron app that would perform value investment research using the Massive.com “Stocks Advanced” API ($1,920 / year) 💸

I personally have zero experience building Electron apps, and my knowledge of JavaScript/TypeScript only goes as far as using it for exploitation. Even so, the app itself was designed to take local user input and retrieve data from the Massive.com API directly, so it felt pretty low-risk if it ended up being wildly vulnerable. Moreover, committing nearly $2,000 to the idea incentivized me to design an app that actually works, and that I would actually use.
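To give a sense of what "value investment research" boils down to computationally, here is a minimal sketch of the time-value-of-money math that underpins a discounted cash flow (DCF) estimate of intrinsic value. This is not code from my app — the function names, inputs, and the simple DCF model are illustrative assumptions only:

```typescript
// Discount a single future amount back to today's dollars.
// PV = FV / (1 + r)^n
function presentValue(futureValue: number, rate: number, years: number): number {
  return futureValue / Math.pow(1 + rate, years);
}

// Estimate intrinsic value per share by discounting a series of
// projected free cash flows (years 1..n) plus a terminal value.
function intrinsicValue(
  cashFlows: number[],   // projected free cash flow per share for each year
  discountRate: number,  // required annual rate of return, e.g. 0.10
  terminalValue: number  // estimated value at the end of the projection
): number {
  const n = cashFlows.length;
  const discountedFlows = cashFlows.reduce(
    (sum, cf, i) => sum + presentValue(cf, discountRate, i + 1),
    0
  );
  return discountedFlows + presentValue(terminalValue, discountRate, n);
}
```

In practice, the projected cash flows would come from fundamentals retrieved via a market-data API, and a value investor would compare the resulting estimate against the current share price to look for a margin of safety.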

Time Value of Money

What I discovered during my experiment was an understanding of why so many of my close friends use the Claude Pro Max plan ($100-$200 / month) 💰 It took several days of hitting my Claude Pro plan ($20 / month) token usage limits multiple times a day before I had a functional app I could use. And then a handful more days of regularly hitting my token usage limit before I felt comfortable releasing it as an open source app.

Aside from designing a fully functional Electron app for investment research with just a bit of prompting, which itself feels like dark magic, I came away with a deeper understanding of why people like Daniel Miessler, Matt Shumer, Gergely Orosz, and others are so hyped about the state of AI right now.

Today, anyone using AI coding tools with a modest amount of technical skill can create nearly any piece of software they can think of. What’s more, for people making “tech money”, $100-$200 a month is a meager investment in exchange for feeling like you have superpowers. To quote OpenAI’s marketing campaign, “you can just build things”—which contributes to the addictive nature of current AI technologies.

Hitting the dopamine loop

At the end of Chloe Potsklan and Megan Kaczanowski’s talk “Vibe Check: the dark side of vibe coding” last year at BSidesLV, I asked how they thought the dopamine loop of vibe coding might encourage people to use AI coding tools more often. I even went so far as to describe the experience as “one more pull at the slot machine”, which felt very apropos given the setting.

Then last month Malte Ubl, CTO at Vercel, told the Wall Street Journal that his use of Claude Code gives him an “endorphin rush akin to playing a Vegas slot machine” 🎰 We’re also seeing a growing number of people express feelings of “Token Anxiety” regarding their use of AI: a deeply rooted feeling that they need to get back to prompting their agents.

From behaviors I have personally witnessed in others, as well as my own first-hand experience, I’m telling you now: this technology is dangerously addictive. If it weren’t for the token usage limits I was hitting when designing the app, I might still be prompting away at new features instead of writing this blog post.

The burnout that follows

It feels practically inevitable that we will start witnessing people crash and burn as a result of extended Claude Code-ing sessions. Harvard researchers have already begun studying this problem:

For workers, the cumulative effect is fatigue, burnout, and a growing sense that work is harder to step away from, especially as organizational expectations for speed and responsiveness rise.

In their research thus far, the adoption of AI in the workplace has led to task expansion, fewer breaks at work, extended working hours, and more multi-tasking (read: more context switching). These are the classic work experiences that typically lead to burnout 🫠

Even Steve Yegge, creator of the multi-agent orchestration system for Claude Code called “Gas Town”, has called AI a vampire that is starting to kill us all. It’s beginning to look like some of my predictions for 2028 may happen a lot sooner than I anticipated.

Anticipating what comes next

Since writing my predictions for 2028 just ten months ago, a few things have happened. First, new “gadgets” have been released in the form of Claude Skills, which definitely helps with novel vulnerability discovery. We’ve also seen Claude Opus 4.6 allegedly identify high-severity vulnerabilities without custom tool instructions 🤖 I’m still waiting to see what the true positive rates look like in “closed source” codebases, but there are signs that I might be wrong.

As for CVEs resulting from AI-generated code, we need only look through commits from Claude on GitHub for answers to that prediction (h/t to Kevin Beaumont for this one). And then there’s the burnout problem discussed above, which I believe is a slow-moving train wreck still in progress.

So what comes next? Well, here’s what we’ve seen in just the last 12 months:

- Feb 24, 2025: Claude 3.7 Sonnet
- May 22, 2025: Claude 4
- Aug 5, 2025: Claude Opus 4.1
- Nov 24, 2025: Claude Opus 4.5
- Feb 5, 2026: Claude Opus 4.6

At this rate, I think we can expect at least three more model releases from Anthropic before the end of 2026. These models are likely to get even more powerful when it comes to software development and related fields. As a recent example, we just saw Anthropic take a swing at the Software Security industry with their release of Claude Code Security.

As for me, I’m abandoning my pursuit of the OSWE in favor of working through the reading and lab materials in “From Day Zero to Zero Day” by Eugene Lim. It now seems like a profound waste of time to study for an exam that only allows for manual, human-driven efforts in a world that is becoming augmented by increasingly capable AI models. Like many, I too am concerned that the AI-pocalypse is slowly encroaching on my livelihood.

Dreading the AI-pocalypse

I also want to share that I have empathy for people who feel an extreme aversion to AI. I know many people who feel violated by the ongoing use and expansion of this technology 😣

For starters, there’s the wanton theft of intellectual property used in training data, which amounts to widespread violation of copyright law. We are also seeing sweeping job losses—and may continue seeing even more job losses—among knowledge workers as model performance improves. Not to mention the non-consensual sexualized imagery and child sexual abuse material being generated using current models.

And then there’s the dramatic increase in greenhouse gas emissions from datacenter operations, along with community impacts from water consumption used to cool said datacenters. The anticipated acceleration of climate change from the continued building and operation of datacenters is, quite frankly, terrifying.

I acknowledge these violations, and do not want to be perceived as discounting, downplaying, or disregarding them in any way. They’re real. They are having—and will continue to have—an impact on people’s lives for the foreseeable future. Unfortunately, even if the financial side of the AI bubble collapses and the pace of these impacts slows, the technology is here to stay.

Pandora’s box has been opened.

It is human to grieve for the multitude of futures that might have existed; the futures we all had hopes for. At some point, it becomes necessary to get back to living your life. Unless you’re extremely wealthy or leveraging Banksy-levels of creativity in your work, your life will likely require using AI in some capacity in order to survive the AI-pocalypse.

I am trying to find a way to live through it, and I am deeply sorry for our collective loss ❤️‍🩹 But, while I don’t believe the theft of intellectual property used as training data will ever be fully reconciled, I still believe there is some room for hope in the future.

Open to Hope

Nvidia published a research paper in September (2025) titled “Small Language Models are the Future of Agentic AI”. Similarly, OpenAI released research that same month titled “Why Language Models Hallucinate”. In the OpenAI paper, they admit that Small Language Models (SLMs) can be easily built such that they never hallucinate.

When we stop to consider what a transition to SLMs would look like, I am filled with hope that we might avoid some of the AI-pocalypse doom and gloom I’ve outlined above. For starters, Small Language Models are far less resource intensive in terms of the volume of training data needed, the energy consumed, and the water required for cooling. What’s more, they can be built such that they never hallucinate, making their outputs far more reliable.

There’s also the fact that Small Language Models can run locally on commercially available hardware. Not only does this add a layer of privacy and locality, but it also means that a single engineer could scale their own infrastructure without also scaling hosting and data storage costs.

All that being said, I don’t believe there is a future where AI will be completely eradicated from our daily lives. But I do believe there is a future where Language Models are ethically built and operated in ways that bring about more good than harm.

Only time will tell ⏳


Thank you to my friends Natalie Somersall and Thereasa Roy for providing feedback and suggestions during multiple draft revisions of this post. If you found this post interesting or useful, I invite you to support my content through Patreon — and thank you once again for stopping by. While I consider my next blog post, you can git checkout my human-centric newsletter, or other (usually off-topic) content I’m reading over at Instapaper.

Cheers,

Keith

This post is licensed under CC BY 4.0 by the author.