The Week AI's Free Pass Expired: A State Lawsuit, a Draft Executive Order, and a Broken Climate Promise
Last week, states started suing, the White House started drafting, and Microsoft started backing away. Here is what it means for your business.
For 18 months, the AI industry has run on one assumption. Ship fast, deploy widely, sort out the consequences later. This week, three separate stories made clear that “later” has arrived.
A state is suing an AI company for the unauthorized practice of medicine. The White House is preparing to vet new AI models the way the FDA vets new drugs. And one of the largest cloud providers in the world is quietly considering walking back its biggest climate promise because the math no longer works.
None of these stories alone would feel definitive. Together, they show the same thing from three different angles. Accountability is starting to catch up to AI, and business owners need to be paying attention.
Pennsylvania becomes the first state to sue an AI company for posing as a doctor
Pennsylvania Governor Josh Shapiro announced that the state’s Department of State had filed suit against Character.AI in Commonwealth Court. The complaint alleges the company is engaged in the unauthorized practice of medicine.
The facts of the case are striking. A state investigator, using their real name and email, opened a conversation with a Character.AI chatbot named Emilie. Emilie’s profile description on the platform read “Doctor of psychiatry. You are her patient.” The investigator said they were feeling sad and empty. Emilie mentioned depression, offered to book an assessment, and when asked whether medication might help, allegedly answered that it could because it was “within my remit as a Doctor.” Pressed on credentials, the bot claimed to be licensed in Pennsylvania and produced a license number. The license number was invalid.
Character.AI’s defense, repeated across multiple outlets, is that user-created characters are fictional and intended for entertainment, and that the platform displays disclaimers in every chat. The state’s position is that disclaimers do not cure unauthorized professional practice.
This case is the first of its kind brought by a state attorney general or governor, but it will not be the last. Pennsylvania also launched a dedicated reporting tool at pa.gov/ReportABot, and the governor’s 2026-27 budget proposes age verification and parental consent requirements for AI companion bots. Kentucky filed a separate suit against Character.AI earlier this year on different grounds, alleging harm to minors.
The relevant question for business owners is not whether Character.AI loses. It is what the standard becomes for everyone else. If a state can sue a chatbot platform for the unauthorized practice of medicine, what does that mean for the AI booking agent on your wellness studio site that answers questions about which treatments are right for a customer’s condition? For the AI intake form at your law firm that asks about a potential client’s case? For the AI receptionist at your design studio that quotes scope and pricing?
The cleanest practitioner read is this. Your AI is allowed to be helpful. It is not allowed to be a professional. The boundary between those two is now a legal one, not just an ethical one, and the audit work needs to happen before the regulator does it for you.
The White House drafts an FDA-style executive order for AI models
National Economic Council Director Kevin Hassett confirmed on Fox Business that the Trump administration is studying an executive order that would create a pre-release vetting process for frontier AI models. Hassett’s exact framing was that the goal is for new AI systems to be “released to the wild after they’ve been proven safe, just like an FDA drug.” The New York Times first reported the existence of the draft.
This is a meaningful policy shift. The Trump administration came in promising a deregulatory posture on AI, and an FDA-style vetting regime is the opposite of that. Industry critics, including from libertarian think tanks, have already pushed back on the framing as Biden-era in spirit.
The catalyst is Anthropic’s Mythos model, which was released to a limited set of partners in late April. Mythos can autonomously identify decades-old security vulnerabilities in software and infrastructure. Used defensively, that is enormously valuable. Used offensively, it could empower bad actors at speed. The asymmetry is what spooked the administration.
Alongside the executive order discussion, the Commerce Department expanded a voluntary testing program on May 5. Google, Microsoft, OpenAI, Anthropic, and xAI have agreed to give the U.S. government early access to their frontier models to assess capabilities and improve security. The framework grants federal agencies pre-release oversight without giving them a direct veto, at least for now.
For service business owners, the practical implications are easy to miss because the policy language is abstract. Here is the translation. If this executive order moves forward, the pace of major new AI capabilities reaching the market will slow. Expect fewer surprise drops, longer windows between model generations, and a more stable planning environment. That is genuinely good news for anyone trying to build durable AI workflows. It is harder news for anyone whose strategy relies on always being on the newest model.
The pattern I am watching is that stability over novelty is starting to look like federal policy. That is a different game than the one most AI vendors are still pricing for.
Microsoft considers walking back its 2030 clean energy goal
Bloomberg reported that Microsoft is internally debating whether to delay or abandon its 100/100/0 target, the company’s flagship commitment to match 100 percent of its electricity use, 100 percent of the time, with zero-carbon energy in the same grid region. The target was announced in 2021. Microsoft has already met its annual matching goal, which is the easier version. The hourly version is the one under review.
The reason is simple. AI is too expensive to power on a strict hourly clean-energy basis. Microsoft expects to spend 190 billion dollars this year, largely on data center infrastructure. Meta, Google, Amazon, and Microsoft have all seen their emissions rise sharply since the launch of ChatGPT, between 23 and 64 percent above pre-2022 baselines. The math on the hourly target was always tight. The AI buildout broke it.
This story matters for service business owners for a reason that may not be obvious. The cost of running AI is not abstract. Every token you use sits on top of a real megawatt-hour somewhere, and that megawatt-hour is getting more expensive to produce cleanly. The major AI infrastructure companies are absorbing those costs for now while they compete for share. They will not absorb them forever.
This is why usage-based pricing keeps showing up everywhere. Anthropic, OpenAI, and Google have all moved toward consumption-based billing in their enterprise tiers over the last year. The shift is structural, not promotional. Treat AI like infrastructure, not a subscription. The companies running the infrastructure are quietly telling you which one it is.
How I’m Reading This
The trust, oversight, and cost realities of AI are catching up to the deployment.
A state is willing to take an AI company to court over impersonation. The federal government is willing to consider pre-release vetting for the first time under this administration. And the most aggressive corporate climate commitment in the AI infrastructure space is on the table because the underlying economics have shifted.
The first half of 2026 was defined by a single question. How fast can we ship AI? The second half is shaping up around a different question. How honestly are we accounting for what we shipped?
For service business owners, that question lands closer to home than it might seem. Your business is using AI in places you may not have audited recently. Customer-facing bots. Note-taking tools. Marketing copy. Booking agents. Each of those is now sitting on a regulatory and cost surface that is moving in a more conservative direction.
The good news is that conservative direction is actually easier to plan against than the breathless one we just lived through.
What Business Owners Should Actually Do
Audit what your AI says when you’re not in the room. Open a test conversation with any customer-facing AI you use, ask it directly what credentials it has and what professional services it can offer, and pay close attention to how it answers. If it makes anything up, that is now a regulatory risk, not just a customer experience problem.
Turn off AI notetakers for any privileged conversation. Calls with your lawyer, your accountant, and your financial advisor should not be transcribed by a third-party AI tool. The legal protection of those conversations may not survive the presence of a notetaker that processes audio in the cloud. This one is better left analog.
Plan for usage-based pricing as the default, not the exception. If your AI workflows assume flat subscription pricing will hold, stress-test what they look like at higher per-token costs. The infrastructure economics are moving in one direction, and the major vendors are telling you about it through their billing changes.
Stop chasing the newest model. If federal pre-release vetting moves forward, the pace of major capability drops will slow. The competitive advantage shifts from being on the newest model to having a clean process for using whatever model is current. Build the process.
Chantal Emmanuel is the co-founder of BAMPT, an AI automation and AI consulting implementation sutdio, and CTO of LimeLoop. She publishes This Week in AI every week at bampt.substack.com. You can also catch her weekly reports on instagram @bamptco