Are We Ready for AI Enshittification? What Happens When the Systems You Trust Suddenly Stop Working
Key Takeaways
- Platforms like Facebook show how user-first systems can decay over time. Cory Doctorow calls this ‘enshittification,’ a cycle where platforms shift from serving user interests to serving stakeholder goals.
- AI may follow the same path. High compute costs and financial losses are pushing companies toward ads, paywalls, and ‘freemium’ features, weakening the free user experience.
- Declining model quality, limited free tiers, embedded ads in chatbots, and opaque ranking systems are early signs of AI enshittification, raising concerns that AI could follow the same path of decline as older tech platforms like Facebook.
There was a time when Facebook felt warm and personal. You opened the app, and you would see real updates from friends and family.
However, Facebook feeds have changed over the years. Now, the feed shows irrelevant posts from pages you liked years ago, ads for products you don’t care about, and updates from people the platform thinks you should care about.
According to writer and journalist Cory Doctorow, it’s not just Facebook; many of our beloved tech platforms, like Amazon and Uber, have gone bad on the path to higher profits. He coined the term ‘enshittification‘ for this intentional decay of tech platforms.
With AI penetrating every facet of our lives, it’s natural to ask: will AI go down the same path? Even worse, are we ready for that? But let’s first answer a more important question: how exactly does enshittification happen?
How Platforms Rot on Purpose
Cory Doctorow describes enshittification as happening in three stages:
1. Attracting Users
In the first phase, tech platforms attract an audience through good user experience and valuable features. Then, they lock users in with a consumer-first approach, making it harder for them to leave the ecosystem.
Think about the good old days of Facebook, when the platform didn’t try to shove ads down your throat every second and mostly showed you updates from your closest friends.
That’s what got users fully invested. How? You uploaded photos, created groups, and ran Facebook pages. All this made leaving difficult because a big part of your social life was on Facebook.
2. Abusing Users for Business Gains
When a platform has attracted a large number of users, stakeholder and business interests take on a more pivotal role. Pressure to grow profits becomes the norm, shareholders start demanding better balance sheets, and tech companies must answer.
That’s when enshittification rears its ugly head. At this stage, the platform shifts its focus to the interests of its business customers.
In Facebook’s case, the company started tracking your activity to push targeted ads. It also throttled the reach of posts, nudging you to pay for boosts.
On top of that, viral pages and constant suggestions flooded your feed, hiding the real updates of people you cared about.
3. Abusing Everyone for Stakeholder Profits
In the final stage, tech platforms extract as much value as possible for their owners and shareholders at the expense of both end users and business customers.
Facebook followed the same path.
Users now see a steady stream of sponsored posts, marketplace ads, and unrelated pages. Advertisers, as Doctorow noted, faced rising costs, lower returns, and growing ad fraud. It’s no surprise that one of the world’s biggest advertisers, P&G, once cut its digital ad spending because so much of it was being wasted.
And remember when Yahoo turned down an offer to buy Google for $1 million because they thought it would mean fewer ads? Same story.
According to Cory Doctorow, enshittification has already set in at leading tech companies, such as Amazon, Uber, and Microsoft.
So, it’s fair to ask if AI is heading the same way.
Why There’s a Real Chance of AI Enshittification
Spoiler alert: yes, AI can slide into enshittification, because its incentives push it in that direction.
AI companies require substantial funding to build and continually run data centers. Electricity consumption is a big problem: Deloitte estimates that data centers could easily consume over 1,000 terawatt-hours by 2030.
And compute hardware costs are already skyrocketing, with reports that OpenAI is laying the groundwork for an IPO that could value the company at up to $1 trillion.
So, only a handful of AI companies will survive and thrive in the market. Fewer players mean less competition, giving companies room to lock features behind paywalls or to implement ads.
Besides, most leading AI companies are still operating at a loss. OpenAI, the maker of the world’s most popular AI chatbot, ChatGPT, is still losing money. Another AI player, Anthropic, is also not profitable.
This bubbling pressure may force AI companies to search for alternate ways to make more money. And that’s the classic start of a trip down enshittification lane, where the end user’s interests get sidelined in the pursuit of profit.
Furthermore, there are indications of an AI bubble. So, if the bubble pops, AI companies are likely to rush to squeeze more value from users, which can speed up enshittification.
Because LLMs are black boxes, we don’t know what happens inside them or why a chatbot produces the results it does. So, Doctorow believes it’s easy for AI companies to hide their enshittification techniques.
They have an ability to disguise their enshittifying in a way that would allow them to get away with an awful lot…I think they’ll try every sweaty gambit you can imagine as the economics circle the drain.
– Cory Doctorow, Wired
What Happens When AI Stops Working the Way You Expect
AI is not just a productivity tool; it’s now part of our daily lives. People use chatbots to write emails, summarize documents, and make purchase decisions. Some even use them to plan their weddings, make important life decisions, and… cheat on science exams.
According to OpenAI’s own data, ChatGPT has over 700 million weekly users. Of these, 49% use it for ‘Asking’ activities, including moments when people want clarity on a complicated topic or need advice.
Now imagine if OpenAI monetizes and enshittifies ChatGPT for an audience who’s using it for anything and everything.
Answers may feel weaker, slower, or biased; daily tasks like writing or summarizing may take longer, and buying advice will skew toward paid recommendations.
And the impact doesn’t stop with everyday users. Companies that depend on AI feel the strain too, often even harder.
According to a McKinsey report, 88% of companies regularly use AI for at least one business function. For these companies, enshittification of AI could kill productivity.
It can slow workflows and weaken output quality, and force teams to spend more time on tasks they once handled in minutes.
The end result: profits may take a hit. And profits are the bottom line for most businesses.
Real-World Signs of AI Decay
AI companies are actively exploring advertising. Users can now buy products right inside ChatGPT, and Walmart is already working with OpenAI to show product listings directly in the chat.
Perplexity is also experimenting with ads, though it paused new advertising deals.
Could these be signs of AI enshittification? Well, it’s debatable. But one thing is for sure: when a platform blends answers with ads, the line between help and revenue gets blurry.
The risk grows because users trust AI to act like a neutral guide. But once ads sit inside the same space as organic answers, you can’t easily tell which part serves you and which part serves the platform.
And when the system doesn’t disclose the parameters it uses to rank or select results, you no longer know whether a paid placement has replaced an organic result that was a better fit for your query.
Furthermore, evidence of AI quality decline is already surfacing. Stanford and UC Berkeley researchers found that ChatGPT’s response quality on several tasks declined noticeably over time.
Besides, AI platforms are now shrinking their free tiers with message caps and weaker fallback models. For example, a free ChatGPT user can send up to 10 messages every 5 hours with OpenAI’s latest model, GPT-5.1.
These limited free tiers push users toward paid plans, indicating the early pattern of enshittification, where helpful services slowly trade user value for revenue pressure.
Where Does This Leave Us Now?
AI as a whole can offer real value, helping millions of people work faster and live more fulfilling lives. But the risk of enshittification is very much there, whether you like it or not, and it’s shaped by high costs and pressure to grow.
We’ve seen this pattern play out before with tech companies like Facebook. So, the question isn’t whether AI can fall into the same abyss. Rather, it’s when and how that will happen.
It’s worth noting that companies like Google stand in a different position, with steady streams of income from other sources. So, they can wait for AI to make money before extracting value from users.
However, OpenAI, the leading company in the AI space, doesn’t have that kind of luxury. It must bring more people in and keep adding value.
This creates a constant dance between offering real help and slipping toward enshittification. Besides, the pressure of cash burn can also lead to rushed decisions that squeeze users.
That said, the path ahead isn’t fixed. The choices these AI companies make now will shape how much we can trust AI tomorrow.
Sandeep Babu is a cybersecurity writer with over four years of hands-on experience. He has reviewed password managers, VPNs, cloud storage services, antivirus software, and other security tools that people use every day. He follows a strict testing process—installing each tool on his system and using it extensively for at least seven days before writing about it. His reviews are always based on real-world testing, not assumptions. Sandeep’s work has appeared on well-known tech platforms like Geekflare, MakeUseOf, Cloudwards, PrivacyJournal, and more. He holds an MA in English Literature from Jamia Millia Islamia, New Delhi. He has also earned industry-recognized credentials like the Google Cybersecurity Professional Certificate and ISC2’s Certified in Cybersecurity. When he’s not writing, he’s usually testing security tools or rewatching comedy shows like Cheers, Seinfeld, Still Game, or The Big Bang Theory.
