AI Just Moved Into Your Infrastructure Layer. Here's What That Actually Costs You.

A Colorado man can't drive his truck, Microsoft lost its grip on OpenAI, and a new attack vector just went mainstream. The throughline matters more than any single story.

A Colorado Man Cannot Drive His Truck Because Of An AI Camera

Kyle Dausman lives in Cherry Hills Village, Colorado. He keeps getting pulled over by police because Flock Safety’s automated license plate readers flag his truck as connected to an active warrant. He has no warrant. The mistake traces back to a court clerk in Gilpin County who entered the wrong license plate into the Colorado Crime Information Center, the state database that feeds Flock’s “hotlist” of vehicles to watch for. Colorado plates use both the letter O and the number zero, and the clerk entered the suspect’s plate in both formats. Now every Flock camera in the state alerts police when Dausman’s plate passes by.

Cherry Hills Village police have stopped him twice. They’ve confirmed he’s not the person they’re looking for and suppressed the alert in their own system. They’ve also confirmed they have no power to remove him from the wider statewide system. To clear his name, Dausman was told he needs to provide the name of the actual suspect, which is protected information tied to an ongoing criminal investigation. He cannot get it.

The point of this story is not that the AI failed. The point is that a small data error became a large life problem because nobody designed an exit. The cameras worked exactly as built. The database was wrong. The escalation path for an innocent person was missing. That gap is the story.

This is what AI looks like when it gets into the infrastructure of public life faster than the rules around it. And it is a preview of what happens when small business owners deploy AI tools without thinking through what happens when those tools are wrong.

BOOK AN AI SESSION

The Microsoft-OpenAI Era Of “Exclusive” AI Just Ended

On April 27, Microsoft and OpenAI announced a restructured partnership that fundamentally changes how AI gets sold. Azure exclusivity is over. OpenAI can now offer its products on any cloud provider, starting with Amazon Web Services and likely Google Cloud later this year. Microsoft will no longer pay OpenAI a revenue share for products accessed through Azure. OpenAI continues to pay Microsoft a capped 20 percent revenue share through 2030. The controversial AGI clause that had tied Microsoft’s commercial rights to OpenAI achieving artificial general intelligence is dead.

The deal arrived against the backdrop of an FTC inquiry into whether the original arrangement effectively amounted to an unregistered merger. The European Union and the UK Competition and Markets Authority had also opened informal probes. The restructure gives both companies more freedom and addresses the central regulatory concern.

For business owners, this matters in a way that isn’t always obvious. The single-vendor AI era is ending. The tools you’re buying or planning to buy are about to be available through more providers, at more price points, with more competitive pressure on margins. That’s good news. It also means the boring vendor diligence work you’ve been able to skip (”there’s only one place to buy this”) starts to actually matter.

A note on the broader context: the same week, Avoca raised 125 million dollars at a billion dollar valuation building AI voice agents for HVAC, plumbing, and other trades businesses. The company is on track to book a billion dollars in jobs for its customers this year. The Microsoft-OpenAI restructure is the macro story. Avoca is the micro story. Both point in the same direction: AI is moving into the operational layer of every business, and the vendor landscape is fragmenting fast.

Hidden Attacks On AI Agents Just Jumped 32 Percent

Google researchers published findings this week documenting a 32 percent increase between November 2025 and February 2026 in what’s called “indirect prompt injection.” The short version: attackers are hiding instructions inside ordinary web pages, designed to be invisible to humans but fully readable by AI agents. The techniques include shrinking text to a single pixel, using transparent fonts, burying commands in HTML comments, and embedding instructions in page metadata.

When an AI agent visits one of these pages (to summarize an article, look up information, or carry out a task), it reads the hidden instructions and treats them as legitimate commands. Google found examples that included fully specified PayPal transaction instructions waiting for an agent with payment access to walk by. Another example used a meta tag injection combined with a persuasion keyword to route AI-driven financial actions toward a Stripe donation link.

The disturbing part: traditional security tools see nothing wrong, because the AI agent is using its real, approved credentials to do real damage. The agent is doing exactly what it was told to do. The problem is who told it.

This matters specifically for any business owner who has connected an AI assistant to email, calendar, internal documents, payment systems, or anything that pulls information from the open internet. Voice agents that look up customer questions. Scheduling agents that read appointment requests. “AI assistants” that can browse the web on your behalf. All of these are exposed.

What This Means

The pattern across all three stories is the same: AI is moving into the infrastructure layer of your business faster than the systems around it (legal accountability, vendor relationships, security practices, escalation paths) are adapting.

When AI was a subscription, it was a tool you turned on or off. When it becomes infrastructure, it touches your customers, your data, your money, and your operations in ways that are not always visible. Infrastructure is not something you outsource the thinking on. You inspect it. You know what’s connected to what. You have a plan for when it breaks.

This is an extension of last week’s framing, not a contradiction. The point of treating AI like infrastructure was always cost discipline and durability. This week sharpens what that actually requires. It requires knowing what your tools touch, what they can do without you, where their information comes from, who can fix them when they’re wrong, and what your off-ramp looks like when a vendor disappears.

The businesses that win the next year are the ones that treat AI like the wiring in their building, not the apps on their phone. Wiring decisions are slower, more deliberate, and harder to reverse. They’re also the ones that hold the building up.

What Business Owners Should Actually Do

  1. Run a 5-minute audit on every AI tool in your business this week. For each one, write down what it has access to (email, calendar, customer records, payments), what it can do without you, where its information comes from, who can fix it when it’s wrong, and what your off-ramp is if the vendor disappears.

  2. If any AI tool you use can read information from outside your business (emails from non-employees, customer-uploaded documents, the open web), treat that as a security boundary. Anything coming through that boundary may contain instructions you did not write.

  3. For any AI tool that takes action on your behalf (sending messages, booking appointments, charging cards), build the human checkpoint before something goes wrong, not after. Decide which actions require approval and which don’t.

  4. Reopen vendor conversations you’ve been avoiding. With OpenAI now available across multiple clouds and a wave of vertical AI startups raising money fast, you have more leverage than you did six months ago. Use it.

  5. Stop thinking of AI tools as subscriptions you can swap out next month. Some of them are. Many of them are quietly becoming infrastructure. Make the decisions deliberately.

Chantal Emmanuel is the co-founder of BAMPT, an AI automation systems company for service businesses, and the CTO of LimeLoop. This Week in AI is published every Monday.

Next
Next

How to use AI to learn from your Squarespace email campaigns.