💧 Big Tech’s water-efficient AI still leaves supply risks

What happened
Tech giants are cutting data-center water use with warmer cooling systems and closed-loop designs, but the savings are partial. U.S. data centers used ~66B liters directly in 2023, plus ~800B liters indirectly through electricity generation, and many facilities sit in water-stressed regions.

Why it matters
AI’s compute boom makes water a hidden but critical constraint; indirect power-related use far outweighs cooling gains, raising regional supply and investor-risk concerns.

What’s next
Expect tighter disclosure rules, more recycling and renewable-powered cooling—and fiercer competition for sites with reliable water and energy.

🧑‍💻 Agentic coding shows AI doing entire jobs

What happened
OthersideAI CEO Matt Shumer wrote in Fortune that by early 2026 he could describe a product in plain English and let AI handle the rest—writing, testing, and iterating tens of thousands of lines of code before delivering a finished app. He says rapid model gains in 2025 made him feel redundant on technical tasks—and that other professions are next.

Why it matters
This is AI moving from copilot to operator. If models can autonomously build software end-to-end, displacement won’t stop at engineers—and markets are already reacting, with major tech stocks swinging as investors reassess future labor needs.

What’s next
Expect more autonomous dev stacks, tighter guardrails, and bigger legal questions around liability and IP when AI builds the product itself.

🎬 Generative video backlash: Seedance sparks deepfake drama

What happened
ByteDance quietly rolled out Seedance 2.0, a high-powered video generator—and the internet quickly used it to create a fake Brad Pitt vs. Tom Cruise clip. Hollywood responded with cease-and-desist letters. ByteDance apologized and promised guardrails, while TechCrunch’s Equity crew warned of a coming flood of AI video “slop.”

Why it matters
Studios just drew a bright red line on unlicensed deepfakes. As AI video gets cheaper and easier, discovery gets harder—and authenticity becomes the new premium.

What’s next
Expect watermarking, provenance rules, and tighter platform controls. Creators will lean into hybrid workflows: AI for speed, humans for trust.

🧠 Boards are told to get smart on AI risks

What happened
In a Harvard Law School governance memo, Deloitte’s Beena Ammanath warned boards to treat AI as a core risk issue—not a side project. She urged directors to boost AI literacy, bring in operational AI expertise, form dedicated oversight groups, and apply formal risk frameworks as agentic and physical systems scale.

Why it matters
AI now shapes products, hiring, compliance, and strategy—raising real liability exposure. Boards that don’t understand the tech can’t govern it.

What’s next
Expect more AI-savvy directors, formal AI governance charters, and sharper scrutiny from regulators and shareholders alike.

🏛️ U.S. states accelerate chatbot safety bills

What happened
Statehouses are busy. Chatbot safety bills advanced or crossed chambers in Oregon, Utah, Virginia, and Washington, with more moving in Idaho, Iowa, Oklahoma, and Hawaii—and new proposals popping up in California, Colorado, and Georgia. Several measures require bots to disclose they’re not human, add safeguards around minors, and address mental-health risks.

Why it matters
Companion chatbots are now squarely in regulators’ sights. The result: a fast-growing patchwork of disclosure, age, and safety rules companies can’t ignore.

What’s next
Expect more states to pile on—and mounting pressure for a federal baseline to prevent 50 different AI rulebooks.

🎨 Tripo AI brings enterprise-grade generative 3D to spatial computing

What happened
Tripo AI expanded its Tripo 3.0 platform into the U.S., touting a 200B+ parameter model that can generate production-ready 3D models in ~20 seconds. The company says it now serves 6.5M creators, 40K developers, and 700+ enterprise clients across gaming, design, and e-commerce.

Why it matters
Spatial computing runs on high-quality 3D assets—and manual modeling doesn’t scale. If generative 3D can deliver clean, usable geometry fast, it turns immersive content from bottleneck to pipeline.

What’s next
Expect deeper integrations into game engines and commerce platforms, plus a race toward text-to-3D + physics + auto-rigging—and eventually, rules around 3D asset provenance and IP.

🏭 CES 2026: Industrial AI and robotics go mainstream

What happened
CES signaled a shift from chatbot demos to real-world deployment. Siemens rolled out industrial digital twins, Caterpillar launched an AI assistant for heavy equipment, Lenovo debuted its Qira platform—and NVIDIA locked in a 10-GW data-center deal with OpenAI while hyperscalers poured $305B into capex, much of it on GPUs. NVIDIA’s data-center revenue jumped 66% YoY, reinforcing its grip on the stack.

Why it matters
AI is moving off the screen and into factories, machines, and wearables. But the boom runs on chips—and GPU supply and capital intensity are now the industry’s main constraints.

What’s next
Watch whether NVIDIA’s next-gen chips ease bottlenecks—or whether rivals gain ground. If industrial pilots turn into productivity gains, physical AI could move from showcase to standard.

💡 Bottom line

We’re witnessing agentic and physical AI moving from hype to infrastructure—spanning water-strained data centers, chatbot regulation, autonomous app-building, generative video backlash, and factory-floor robotics. The focus is shifting to real-world constraints—power, water, governance, labor—and the competitive race to deploy AI at scale.