This Week in AI: The Real Cost of AI Just Showed Up
Microsoft is ending flat-fee Copilot. Anthropic tested whether they could quietly take features away. Meta cut eight thousand jobs. And the first attorney was just suspended for using AI without checking it.
Three stories shaped the AI conversation this week. They look unrelated at first: a pricing change, a wave of layoffs, and a lawyer in Nebraska. But they’re all telling the same story, and it’s a story that hasn’t been getting enough airtime.
The AI industry has spent two and a half years selling capability. This week, it started showing costs.
1. The Flat-Fee Era of AI Subscriptions Is Ending
On April 21, GitHub announced that it was suspending new sign-ups for its individual Copilot plans, tightening usage limits, and removing Anthropic’s Opus models from the cheaper Pro tier. The reason, per leaked internal documents and a follow-up blog post, is that the weekly cost of running Copilot has nearly doubled since January. Microsoft is moving the entire product to token-based billing starting in June, under which users will pay for their actual compute consumption rather than a flat rate.
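To make that shift concrete, here’s a back-of-envelope sketch of what per-token billing can look like for one heavy user. Every rate and usage figure in it is an assumption chosen for illustration, not Microsoft’s actual pricing; treat the output as a shape, not a quote.

```python
# Back-of-envelope: token-based billing for one developer.
# All rates and usage figures are illustrative assumptions,
# not any vendor's published prices.

INPUT_RATE = 3.00 / 1_000_000    # assumed dollars per input token
OUTPUT_RATE = 15.00 / 1_000_000  # assumed dollars per output token

daily_input_tokens = 2_000_000   # assumed code context sent to the model
daily_output_tokens = 150_000    # assumed code and explanations generated
working_days_per_month = 21

daily_cost = (daily_input_tokens * INPUT_RATE
              + daily_output_tokens * OUTPUT_RATE)
monthly_cost = daily_cost * working_days_per_month

print(f"Daily:   ${daily_cost:,.2f}")    # $8.25 under these assumptions
print(f"Monthly: ${monthly_cost:,.2f}")  # $173.25 under these assumptions
```

The scale is the point. Under those assumed numbers, one heavy user generates a monthly bill several times the typical twenty-dollar flat fee, which is exactly the gap usage-based billing is designed to close.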
The same afternoon, Anthropic quietly updated their pricing page and removed Claude Code, their flagship coding agent, from the twenty-dollar Pro plan. New users would have had to upgrade to the hundred-dollar or two-hundred-dollar Max plan to keep using it. There was no announcement. No email. No changelog entry. Just a red X where a checkmark used to be.
The developer community noticed within hours. By the next morning, after Reddit, X, and Hacker News lit up with complaints, Anthropic reversed the change and restored Claude Code to the Pro plan. Their head of growth acknowledged on X that “engagement per subscriber is way up” and that their current plans “weren’t built for this.” He promised that any future changes would be communicated directly to subscribers, “not a screenshot on X or Reddit.”
The detail that matters is not whether Anthropic walked it back. It’s that they tested it. The company that builds Claude wanted to know whether they could remove a flagship feature from a paid plan and have customers accept it. The market told them no, this time. But the test itself tells you where they think the pricing is heading.
Last week’s story in this space was Uber, which burned through its entire 2026 AI budget in four months because its engineers loved Claude Code so much that usage doubled. That story was framed as an enterprise cautionary tale. This week, the same dynamic is reaching individual subscribers and small teams. Microsoft is committing to it. Anthropic is testing it. Both for the same reason.
Here’s where I’d put a stake in the ground. When the largest AI vendor on the planet commits to usage-based pricing and the second-largest tests it the same week, the direction of the industry is set. Flat fees were a customer acquisition strategy. Now that the customers are acquired, the pricing is going to reflect actual compute costs. And because every major provider is moving the same way at roughly the same time, there’s no real escape valve. You can’t switch from Copilot to Cursor to Claude Code to Codex and find the old pricing model intact. Within ninety days, expect usage-based pricing to harden into the floor across the entire industry.
For service businesses, that means the AI subscription you’re paying twenty or thirty dollars a month for today is probably not the one you’ll be paying for in twelve months. Either the price will go up, the limits will get tighter, or the feature you actually rely on will move to a higher tier. Plan accordingly.
2. The Layoff Language Has Shifted From Augmentation to Replacement
On April 23, Meta announced cuts of approximately eight thousand jobs, roughly ten percent of their global workforce, plus a freeze on six thousand open roles. The internal memo cited the need to offset the company’s projected one hundred and fifteen to one hundred and thirty-five billion dollars in AI infrastructure spending for 2026.
The same week, UKG, the human resources software vendor formed by the 2020 merger of Ultimate Software and Kronos, cut nine hundred and fifty of their own people. UKG’s official statement cited “rapidly evolving market shifts, including changes in technology driven by AI.” The company sells workforce management software to other companies. They are using AI to cut their own workforce while continuing to tell their customers that AI will boost their productivity.
Snap cut a thousand jobs the same week, and CEO Evan Spiegel said publicly that AI now generates more than sixty-five percent of the company’s new code.
For two years, the corporate language around AI has been “augmentation.” AI was going to make your team more productive. AI was going to free your people up to do more meaningful work. That framing was deliberate, and it was useful for selling AI tools to skeptical buyers. This week, the framing slipped.
What we’re now seeing in plain language, from companies that aren’t trying to sell anyone anything, is that AI is being used as a substitute for headcount. Not in every role, not at every company, and not in every industry. But the trend is now visible enough in the language that it’s worth naming. The companies that built their AI strategy around “we’ll use AI to make our team better” are now competing with companies that built theirs around “we’ll use AI so we don’t need to hire as many people.” Those are different strategies, and they produce different financial outcomes.
For service business owners, the implication is not panic. It’s clarity. The honest question to ask is which of those two strategies you’re actually pursuing. Both are legitimate. But pretending you’re doing the first while quietly pursuing the second is how you end up with a team that doesn’t trust you.
3. A Lawyer Just Got Suspended for Using AI Without Checking It. He Won’t Be the Last.
In February, the Nebraska Supreme Court was hearing a divorce appeal when the judges noticed something strange in the brief the attorney had filed. The cases he was citing didn’t seem to exist. Not “hard to find.” Didn’t exist at all.
Of the sixty-three legal citations in his brief, fifty-seven were defective. Twenty of them were entirely fabricated: court cases that no court had ever decided, with case names, dates, and quotes invented out of nothing.
The attorney, Greg Lake of Omaha, had used an AI tool to help draft the brief and hadn’t checked the output. When the justices first asked him about the errors during oral argument, he denied using AI and blamed a broken laptop. Eventually he admitted what had happened and called the denial “a grave error of judgment.” On April 15, the Nebraska Supreme Court suspended him from practicing law.
This is not an isolated incident. A researcher in Paris named Damien Charlotin maintains a database of cases like this, lawyers submitting AI-generated work to courts without verifying it. It now contains over twelve hundred documented examples globally, and he’s described the pace as “ten cases from ten different courts on a single day.” US courts alone imposed at least one hundred and forty-five thousand dollars in penalties in the first three months of 2026 for this exact pattern. Oregon courts have created a price list: five hundred dollars for every fake case citation, a thousand dollars for every made-up quotation.
You might be reading this and thinking, this is a story about lawyers. It isn’t. It’s a story about what happens when anyone, in any profession, treats an AI’s confident-sounding output as a finished product instead of a draft.
AI tools are designed to produce answers that sound right. That’s the entire user experience. They write in complete sentences, they cite specific details, they sound authoritative. When the answer is also accurate, you save time. When the answer is wrong, you don’t notice unless you check, because nothing about the output looks wrong on the surface.
The legal profession is just the first place where the consequences of skipping that check have become public, expensive, and career-ending. The same pattern is going to surface next in medicine, in tax preparation, in financial advice, in insurance work, in real estate disclosures, in any field where wrong information has a downstream cost. The lawyers are the canary. They’re showing the rest of us what unverified AI use looks like when it goes wrong.
Here’s the part that should make every business owner pause. A separate study published last month found that more than sixty percent of US federal judges, the same judges who are sanctioning lawyers for unverified AI use, are themselves using AI tools in their judicial work. Nearly half of those judges report having received no formal training on how to use them.
Nobody is fully in control of this yet. Not the people using the tools. Not the people enforcing the rules. The technology is moving faster than any profession’s ability to figure out who’s responsible when it goes wrong.
For your business, the takeaway is simple. If an AI tool is producing anything that goes to a client, gets filed somewhere, gets published anywhere, or gets used to make a decision, somebody on your team needs to be qualified to verify it. Not skim it. Verify it. That person needs to be named, and that step needs to be part of the workflow, not an optional courtesy. The Nebraska attorney didn’t lose his license because he used AI. He lost it because he treated the AI’s output as finished work. That’s the mistake. And it’s available to anyone using these tools today.
What This Means
The pattern across all three stories is the same. Costs are going up. Workforce assumptions are being rewritten. And the accountability bill for casual, unverified AI use is becoming real and concrete.
For the last two and a half years, the AI conversation has been almost entirely about capability. New models, faster performance, longer context windows, more impressive demos. That conversation isn’t wrong, but it has crowded out a parallel conversation that should have been happening alongside it. What does it cost? Who pays when it’s wrong? What jobs does it actually replace versus what jobs does it actually augment? Those questions have always been there. This week, the answers started to surface.
What Business Owners Should Actually Do
Four practical things, in order.
Build a real AI cost forecast. Stop treating AI subscriptions as a software line item and start treating them as a utility cost. Track what you’re actually spending across all AI tools your team uses, including the personal ChatGPT and Claude subscriptions people are expensing. Project that number forward for the next two quarters assuming a fifty percent price increase. If that number is uncomfortable, that’s information you can act on now.
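If a spreadsheet feels like overkill, the forecast is small enough to sketch in a few lines of Python. The tool names and dollar amounts below are placeholders, not benchmarks; swap in your own expense data.

```python
# Minimal AI cost forecast: take current monthly spend, project it
# forward two quarters assuming a 50% price increase.
# Tool names and amounts are placeholders; substitute your own numbers.

monthly_spend = {
    "ChatGPT seats": 200.00,
    "Claude seats": 300.00,
    "Copilot seats": 190.00,
    "Expensed personal subscriptions": 160.00,
}

current_monthly = sum(monthly_spend.values())
price_increase = 0.50                       # assumed 50% increase
projected_monthly = current_monthly * (1 + price_increase)
two_quarters_total = projected_monthly * 6  # six months ahead

print(f"Current monthly spend:    ${current_monthly:,.2f}")     # $850.00
print(f"Projected monthly spend:  ${projected_monthly:,.2f}")   # $1,275.00
print(f"Next two quarters, total: ${two_quarters_total:,.2f}")  # $7,650.00
```

If the six-month total is uncomfortable, that’s the signal to consolidate tools, renegotiate seats, or budget for the increase now rather than absorbing it invoice by invoice.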
Pick your contingency tool. For every AI tool that does something important in your business, identify the next-best alternative. Not because you’re switching, but because if your primary tool changes its pricing or its limits overnight, you want to know your fallback before you need it. This is the same discipline you’d apply to any critical vendor relationship.
Define who verifies AI output, and write it down. The Nebraska attorney didn’t lose his career because he used AI. He lost it because he didn’t check what it produced. In your business, decide explicitly which outputs require human verification before they go to a client, get filed, get published, or get sent. Then make sure someone qualified is doing that verification. This is not a tools problem. It’s a process problem.
Be honest about your strategy. If you’re using AI to make your team more effective, say that and mean it. If you’re using AI to operate with fewer people, say that too. The companies that get into trouble are the ones telling their teams one story while running the business on a different one.
Chantal Emmanuel is the co-founder of BAMPT, where she builds AI automation systems for service businesses, and the CTO of LimeLoop. This Week in AI goes out every Monday.