
🔊 OpenAI prepares a family of physical devices
What happened
A Reuters report revealed that OpenAI has more than 200 people working on a family of AI‑powered hardware devices, including a smart speaker, smart glasses and a smart lamp. The smart speaker — the first product slated for release — is expected to cost $200–$300 and will include a camera to capture information about users and their surroundings. The device isn’t expected to ship until 2027, while smart glasses would likely follow in 2028. Meta’s Ray‑Ban smart glasses already have strong sales, and Apple and Google reportedly have competing products in development.
Why it matters
OpenAI’s hardware push signals a broader trend toward physical AI, where conversational agents live in dedicated consumer devices. By bundling a camera and multimodal processing into a $200–$300 speaker, OpenAI could collect contextual data and offer more personalized services, though an always‑on camera also raises privacy questions. Competition from Meta, Apple and Google suggests that AI hardware may become as strategically important as smartphones and smartwatches.
What’s next
Expect official product announcements as OpenAI finalizes designs. Watch for regulatory filings, partnerships with hardware manufacturers and debates over on‑device privacy protections.
💰 Nvidia’s $30 billion bet on OpenAI
What happened
The Guardian reported that Nvidia plans to invest $30 billion in OpenAI’s next funding round after a previous $100 billion “circular” deal collapsed. The new financing round is expected to value OpenAI at roughly $730 billion, nearly twice Anthropic’s valuation, and could include investments from Amazon, SoftBank and Microsoft. The prior $100 billion deal would have required OpenAI to buy Nvidia chips; the new arrangement simply trades capital for equity. OpenAI is also striking chip‑supply deals with AMD and Broadcom to diversify its hardware sources.
Why it matters
The investment underscores how expensive generative AI has become — hardware companies like Nvidia can not only sell chips but also fund the companies that use them. A $730 billion valuation would make OpenAI one of the world’s most valuable private firms, yet the company’s market share is slipping (ChatGPT’s share reportedly fell from 86.7% to 64.5%). Diversifying suppliers suggests OpenAI wants leverage against Nvidia, while Nvidia’s funding hints at a future where chipmakers act as financiers for AI firms.
What’s next
The funding round is expected to close later this year. Keep an eye on official confirmation of investors, final valuation and whether OpenAI pursues an IPO. Also monitor how new chip‑supply deals with AMD and Broadcom reshape the competitive landscape.
⚠️ Wall Street warns AI could cannibalize profits
What happened
In a note to clients, Bank of America strategists said the AI revolution could become a “double‑edged sword”, warning that the industry’s rapid growth might cannibalize profits for software companies and lead to oversupply. They pointed to signs of an AI bubble, including surging capital expenditures and the “SaaSpocalypse” selloff in software stocks. The analysts also noted that labor market fragility and over‑investment could lead to disappointing returns.
Why it matters
The comments highlight growing skepticism among financiers that AI will deliver sustainable profits. While venture capital continues to pour into AI, Wall Street’s caution could slow new offerings or spur consolidation. If AI spending cannibalizes existing software revenue, companies may need to rethink pricing and differentiate beyond simply adding AI features.
What’s next
Investors will be watching upcoming earnings from AI‑heavy firms for signs of revenue cannibalization. Macro indicators and valuation trends over the next quarter will reveal whether the bubble narrative gains traction or recedes.
🛡️ NIST launches AI Agent Standards Initiative
What happened
The U.S. National Institute of Standards and Technology (NIST) announced a Center for AI Standards and Innovation initiative to develop technical standards for AI agents. The initiative aims to define how autonomous agents authenticate themselves, handle sensitive data and interoperate, and it invites stakeholders to provide feedback by March 9. Officials emphasized security concerns, noting that highly capable agents that can write code, manage emails, or shop online require robust identity management and permission controls.
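NIST has not published any schema or API yet, so purely as a hypothetical illustration of the kind of identity‑and‑permission control the initiative describes, an agent action gate might look like this (all names and scopes below are invented for the sketch):

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: NIST has not defined these structures.
@dataclass
class AgentCredential:
    agent_id: str
    # Explicit, auditable permissions, e.g. {"email:read", "shop:browse"}
    granted_scopes: set = field(default_factory=set)

def authorize(cred: AgentCredential, action: str) -> bool:
    """Deny by default: allow an action only if the agent holds
    an explicit scope for it."""
    return action in cred.granted_scopes

cred = AgentCredential("agent-42", {"email:read", "shop:browse"})
print(authorize(cred, "email:read"))   # granted scope is allowed
print(authorize(cred, "email:send"))   # unlisted action is denied
```

The deny‑by‑default pattern is one plausible answer to the "rogue agent" concern: an agent that can write code or shop online never gains a capability it was not explicitly granted.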
Why it matters
As AI systems gain more autonomy, regulators are looking to ensure that agents behave predictably and safely. Standardizing agent identity and authorization could enable interoperability across platforms and reduce the risk of rogue agents acting without proper oversight. NIST’s timeline signals that government bodies are moving quickly to draft guardrails before mass deployment.
What’s next
Stakeholder comments are due by March 9, after which NIST is expected to release draft guidelines. Follow the initiative to see how industry feedback shapes the standards and whether other governments adopt similar frameworks.
🤖 China’s Unitree plans to ship 20,000 humanoid robots
What happened
According to eWeek, Chinese robotics firm Unitree plans to ship up to 20,000 humanoid robots in 2026, a major increase from about 5,500 units the previous year. The announcement came after Unitree’s robots dazzled audiences at China’s Spring Festival gala, performing martial arts and trampoline stunts. Analysts noted that most of the units will likely be used in demonstration environments because real‑world deployment remains challenging.
Why it matters
This scale‑up shows how quickly robotics manufacturers are moving toward mass production. If Unitree hits its target, it will outship Western competitors like Tesla’s Optimus. However, the analysts’ caveat about limited real‑world deployment underscores the gap between eye‑catching demonstrations and useful service robots.
What’s next
Watch whether Unitree meets its production goals and how Western firms respond with their own humanoid rollouts. Pilot deployments in logistics, entertainment and service industries will reveal whether demand matches the hype.
🪙 OpenAI’s GPT‑5.3‑Codex exploits smart contracts
What happened
A research update from The Neuron highlighted that GPT‑5.3‑Codex, an experimental OpenAI model, achieved a 72.2% success rate at exploiting vulnerable smart contracts on the new EVMbench benchmark, outperforming earlier models. OpenAI and crypto firm Paradigm released EVMbench to evaluate AI models on detecting, patching and exploiting Ethereum smart contracts. The article noted that GPT‑5.3‑Codex’s ability to autonomously perform flash‑loan attacks demonstrates how AI agents can be better at attacking than defending, prompting OpenAI to launch a security agent called Aardvark and offer researchers $10 million in API credits.
Why it matters
AI models that can autonomously exploit blockchain contracts could enable new forms of cybercrime if left unchecked. By releasing EVMbench and funding security research, OpenAI hopes to spur development of defensive agents and encourage responsible disclosure. The episode underscores the dual‑use nature of advanced coding models and the urgency of building AI that can detect and fix vulnerabilities as effectively as it can exploit them.
What’s next
OpenAI and Paradigm plan to expand EVMbench and release more defensive tools. Expect research on automated exploit detection and patching, and watch for how blockchain platforms incorporate AI‑driven auditing.
🏛️ Big Tech lobbying hits record levels
What happened
DeepLearning.AI’s The Batch newsletter reported that major technology companies — Meta, Amazon, Alphabet, Microsoft and Nvidia — spent over $100 million on lobbying in 2025, the first time they collectively crossed that threshold. Meta led the pack with $26.29 million in spending, while Amazon, Alphabet, Microsoft and Nvidia each spent between $15 million and $22 million. The lobbying focused on influencing data‑center policy, AI export controls and state‑level legislation.
Why it matters
As AI regulations proliferate, tech giants are deploying record resources to shape the rules in their favor. High lobbying spending suggests that companies see upcoming legislation as existential to their business models. The trend raises concerns about unequal influence and underscores the need for transparent policy‑making so that smaller innovators and the public are not crowded out.
What’s next
Lobbying is likely to intensify as the 2026 U.S. midterm elections approach. Monitor how proposed AI legislation evolves and whether public interest groups or smaller companies mount counter‑campaigns.
🧠 Z.ai releases GLM‑5, a 744B‑parameter open‑weights model
What happened
VentureBeat reported that Z.ai (formerly Zhipu AI) released GLM‑5, an open‑weights Mixture‑of‑Experts model with 744 billion parameters, claiming it achieves record‑low hallucination rates. GLM‑5’s “Agent Mode” can convert prompts into ready‑to‑use .docx, .pdf and .xlsx files and uses a new reinforcement learning framework called slime to scale training. The model is priced at $0.80–$1.00 per million input tokens and $2.56–$3.20 per million output tokens, which is considered aggressive for a model of its size.
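At those rates, API cost scales linearly with token volume. A minimal back‑of‑the‑envelope sketch (the default rates below take the upper end of the quoted ranges; the function and the example workload are illustrative, not Z.ai’s billing logic):

```python
def glm5_cost(input_tokens: int, output_tokens: int,
              input_rate: float = 1.00, output_rate: float = 3.20) -> float:
    """Estimate API cost in USD given per-million-token rates.

    Defaults use the upper end of the quoted GLM-5 pricing
    ($1.00/M input, $3.20/M output); actual billing may differ.
    """
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Example: a 50k-token context producing a 5k-token report
print(f"${glm5_cost(50_000, 5_000):.3f}")  # roughly $0.066 per call
```

Even at the upper bound, a long‑context document‑generation call lands well under ten cents, which is why the pricing is described as aggressive for a 744B‑parameter model.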
Why it matters
GLM‑5 adds a powerful competitor to the open‑weight LLM ecosystem, particularly for agentic workflows that require generating complex documents. Its low hallucination rate and built‑in file generation could make it attractive for enterprises seeking reliable, high‑throughput models. The aggressive pricing may pressure other providers to lower costs, accelerating democratization of advanced AI capabilities.
What’s next
Over the coming months, expect benchmark results and adoption data as users stress‑test GLM‑5. Watch for derivative models, updates to the slime training framework and pricing responses from competing providers.
💡 The Bottom Line
Capital, hardware, and governance are converging on autonomous AI systems: billion‑dollar funding rounds, humanoid scale‑ups, agent standards, record lobbying, and dual‑use exploit research. The race is no longer just to build smarter models; it is to operationalize, secure, and control them at scale.
