Agentic AI

🤝 Okta rolls out a blueprint for the secure agentic enterprise

What happened
Identity‑management provider Okta unveiled a framework aimed at bringing human‑grade security to AI agents. Its “secure agentic enterprise” blueprint registers autonomous agents as non‑human identities, maps the systems they access, and logs their activity. A universal logout mechanism can revoke an agent’s privileges across multiple platforms at once. Okta plans to make “Okta for AI Agents” generally available on April 30.

Why it matters
The proliferation of autonomous agents raises the stakes for identity and access management. By treating agents as first‑class identities, Okta extends familiar controls—least‑privilege access, credential rotation and audit trails—to code‑based workers. This move signals that enterprise security stacks are adapting rapidly to accommodate non‑human participants.
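As a purely illustrative sketch (not Okta’s actual API — every name here is hypothetical), treating an agent as a first‑class identity with least‑privilege scopes, an audit trail and a universal‑logout style revocation might look like:

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """A non-human identity: scoped access, audit trail, one-shot revocation."""
    agent_id: str
    scopes: set = field(default_factory=set)       # systems the agent may touch
    audit_log: list = field(default_factory=list)  # every access decision is recorded
    revoked: bool = False

    def can_access(self, system: str) -> bool:
        """Least privilege: allow only in-scope systems, and log the decision."""
        allowed = not self.revoked and system in self.scopes
        self.audit_log.append((self.agent_id, system, "allow" if allowed else "deny"))
        return allowed

    def universal_logout(self) -> None:
        """Revoke the agent's privileges across every registered system at once."""
        self.revoked = True


agent = AgentIdentity("quote-bot", scopes={"crm", "email"})
agent.can_access("crm")      # allowed: within scope
agent.can_access("payroll")  # denied: least privilege
agent.universal_logout()
agent.can_access("crm")      # denied: revoked everywhere
```

The point of the sketch is the shape of the controls — scoped grants, a log entry per decision, and a single revocation switch — not any particular vendor’s implementation.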

What’s next
Expect major agent platforms to integrate with Okta’s framework as organizations seek to centralize governance. As agents evolve beyond simple chatbots into task‑orchestrating entities, pressure will mount for vendors to offer granular policy enforcement and real‑time revocation.

🧠 Siemens Fuse EDA AI Agent targets chip‑design workflows

What happened
Siemens announced Fuse™, an autonomous AI agent for electronic‑design automation (EDA). The agent orchestrates multi‑tool workflows across Siemens’ entire chip‑design portfolio, from schematic through manufacturing sign‑off. It uses retrieval‑augmented generation and multimodal data to sequence tasks, and it integrates with NVIDIA’s Agent Toolkit and Nemotron models.

Why it matters
Modern semiconductors are created through dozens of specialized tools and lengthy back‑and‑forth between design teams. Fuse promises to collapse these silos by letting an agent call tools in sequence, accelerating verification and catching errors earlier. Siemens argues that end‑to‑end orchestration will boost productivity and design quality.

What’s next
Early adopters in the semiconductor and PCB industries will test whether agentic orchestration delivers on its promise of faster design cycles. Given the open architecture, customers may plug in their own models or competitor tools—potentially turning Fuse into a hub for heterogeneous EDA workflows.

🔒 Nvidia’s NemoClaw: an enterprise‑grade OS for AI agents

What happened
At its GTC conference, Nvidia introduced NemoClaw, a commercial distribution of the OpenClaw agent platform. The stack installs with a single command, adding privacy routing and security so agents can run locally or access cloud models when needed. Jensen Huang called OpenClaw “the operating system for personal AI,” and noted that NemoClaw allows agents to run on RTX laptops, DGX workstations or cloud hardware.

Why it matters
OpenClaw has become a de facto standard for agentic software, but enterprises have been wary of running sensitive workflows on an open‑source platform. NemoClaw addresses those concerns by integrating privacy controls and enterprise support, potentially accelerating adoption. The ability to mix local Nemotron models with frontier models in the cloud gives organizations flexibility over cost and performance.

What’s next
As more companies deploy agents to automate internal processes, the competition between agentic operating systems will intensify. Nvidia hopes to leverage its hardware stack to make NemoClaw the preferred option for tasks from code generation to business‑process automation. Watch for integrations with productivity apps and vertical‑specific models in the coming months.

📩 Handle raises $6M to build AI agents for insurance brokers

What happened
Enterprise‑software startup Handle announced a $6 million seed round led by Andreessen Horowitz. The company is developing AI agents that automate operational workflows for insurance brokers. Handle’s real‑time engine, called Signal, connects across emails, messaging apps and CRMs, allowing agents to process quotes, reconcile data and handle claims.

Why it matters
Insurance brokerage is document‑heavy and riddled with manual steps. By building agents that can ingest data from disparate tools and take actions, Handle aims to free brokers from repetitive tasks and reduce errors. The funding round shows investor appetite for vertical‑specific agentic startups.

What’s next
Handle plans to expand into Mexico and sign more brokerages while refining its agents. Success here could spur similar agentic platforms in other regulated industries such as real estate and logistics.

Enterprise and Generative AI

🎬 D‑ID debuts expressive visual agents with sub‑0.5‑second latency

What happened
D‑ID launched its V4 Expressive Visual Agents, a diffusion‑based model that generates high‑fidelity AI avatars for enterprises. The system produces speech‑synced video in under half a second, at roughly one‑seventieth the cost of Google’s Veo 3 Fast model. It supports mixed‑resolution diffusion for realistic motion, offers an optional camera layer for sentiment awareness and can embed interactive UI elements in the video.

Why it matters
Ultra‑low‑latency, cost‑effective generative video unlocks new use cases—from dynamic customer support avatars to real‑time training simulations. By offering a privacy‑friendly on‑premises deployment and customizing appearances for corporate personalities, D‑ID is targeting businesses that need scalable but controllable visual agents.

What’s next
D‑ID plans to offer the system to its 1,500 enterprise customers and expand language support. As generative video quality improves, competition will intensify among avatar vendors seeking to integrate voice, gestures and interactivity.

⚖️ Britannica sues OpenAI for allegedly copying its encyclopedia

What happened
Encyclopedia Britannica and its sister dictionary Merriam‑Webster filed a lawsuit accusing OpenAI of training ChatGPT on nearly 100,000 copyrighted articles and reproducing near‑verbatim passages. The complaint alleges that OpenAI’s outputs cannibalized Britannica’s web traffic and even replicated trademarks like “What is Biology?,” while OpenAI maintains that its training used publicly available data and constitutes fair use.

Why it matters
The case adds to a growing wave of copyright suits against generative‑AI companies. Courts will have to decide whether ingesting copyrighted reference works for training falls under fair use and whether generative systems can mislead users by mimicking trademarked material. A ruling against OpenAI could reshape how AI firms acquire data and negotiate licensing.

What’s next
Legal experts expect prolonged litigation that could set important precedents. A settlement might involve licensing deals or content filtering; alternatively, a court could provide clarity on the boundaries of AI training under U.S. copyright law.

💊 AI revival lifts healthtech, cybersecurity and SaaS funding

What happened
PitchBook data shows venture investors are returning to AI‑enabled sectors after a lull. In the fourth quarter of 2025, healthtech deals doubled to $678 million with companies like Function Health and Radial Health raising large rounds; cybersecurity deals reached a record $643 million; and funding flowed to biotech and enterprise SaaS. Fortune reports that AI is breathing life into previously neglected industries.

Why it matters
The renewed investment underscores how generative and analytic AI are being applied beyond hype cycles to address real-world problems—from medical diagnostics to zero‑trust security. Robust funding also signals confidence that AI‑native startups can deliver enterprise value rather than just consumer chatbots.

What’s next
Investors will watch whether the surge in healthtech and cybersecurity translates into scalable businesses. As valuations rise, startups will need to demonstrate regulatory compliance and defendable intellectual property to sustain momentum.

Physical AI

🤖 Skild AI’s general‑purpose robot brain comes to Foxconn assembly lines

What happened
Skild AI’s general‑purpose “robot brain” will control industrial robots on Foxconn lines that assemble Nvidia’s new Blackwell GPU servers. Backed by Nvidia and SoftBank, Skild’s model allows robots to handle diverse tasks rather than repetitive motions. Skild will also partner with ABB Robotics and Universal Robots to embed its brain across a variety of industrial machines. Nvidia’s Deepu Talla said building $500 billion of AI infrastructure in the next few years requires autonomous factories.

Why it matters
General‑purpose robot intelligence could transform manufacturing by enabling flexible assembly lines that adapt quickly to new products. If Skild’s brain proves reliable, factories could reduce downtime and labor costs while increasing throughput. Partnerships with multiple robot makers suggest industry‑wide adoption is possible.

What’s next
Skild must demonstrate that its brain can scale across different robots and tasks without compromising safety. Foxconn’s deployment will be a high‑profile test; success could accelerate adoption in electronics, automotive and other industries pursuing lights‑out manufacturing.

🦾 European chipmakers team up with Nvidia for humanoid robots

What happened
NXP, Infineon and STMicroelectronics announced partnerships with Nvidia to supply sensors, motion‑control and communications chips for humanoid robots. The companies expect more than 50,000 humanoid robots to be sold this year, with high‑end models costing around $170,000 and low‑end models about $16,000. Analysts noted that many car‑oriented chips are well‑suited for robotics.

Why it matters
The collaboration underscores Europe’s bid to remain competitive in the emerging humanoid‑robot market. By leveraging automotive chip expertise, these firms can offer reliable components for sensing and actuation. The projected sales figures suggest humanoid robots are moving from research labs to commercial deployment.

What’s next
Hardware integration with Nvidia’s AI stack will be key to delivering safe and capable humanoids. Expect a wave of announcements from robot manufacturers incorporating these chips, and watch whether regulatory frameworks evolve to address safety and labor‑market impacts.

🌐 Nvidia and robotics leaders bring physical AI to the real world

What happened
Nvidia unveiled partnerships with robotics companies—including ABB Robotics, Agility Robotics, FANUC, KUKA, Medtronic, Yaskawa and others—to deploy its physical‑AI stack. The initiative combines the Cosmos world‑model with the Isaac simulation framework and the GR00T foundation models, enabling robots to learn in digital twins before operating in the real world. Jensen Huang said that every industrial company will become a robotics company.

Why it matters
By providing both the “brains” (models) and the “bodies” (robotics partners), Nvidia is positioning itself as the platform for physical AI. Digital‑twin simulation promises to shorten deployment cycles and improve safety by letting robots practice virtually. If successful, the initiative could accelerate adoption across manufacturing, healthcare and logistics.

What’s next
The breadth of partners suggests a pipeline of pilot projects. Watch for demonstrations of robots trained in the Cosmos environment performing complex tasks in warehouses, hospitals and factories. The ecosystem will need to address interoperability and ethical considerations as physical AI moves into human spaces.

🚗 SoundHound unveils on‑device multimodal agentic AI for vehicles

What happened
SoundHound introduced a multimodal agentic platform that runs entirely on NVIDIA’s DRIVE AGX Orin hardware. The system integrates voice, vision and reasoning capabilities, enabling vehicles to respond to spoken commands, interpret camera input and control navigation without cloud connectivity. It supports multiple agent‑interoperability protocols (MCP and A2A) and, because it runs fully offline, promises uninterrupted availability and keeps data inside the vehicle.

Why it matters
Automotive AI assistants typically rely on cloud services, limiting responsiveness and raising privacy concerns. By running agents on‑device, SoundHound provides real‑time interaction and ensures data stays within the vehicle. The platform could serve as a template for other edge‑based agentic systems in consumer electronics and robotics.

What’s next
Automakers will evaluate how seamlessly SoundHound’s agents integrate with existing infotainment and safety systems. Future iterations may add features like personalized navigation or vehicle‑to‑infrastructure communication. The approach may inspire other device manufacturers to embed agentic intelligence at the edge.

💡 Bottom Line

Agents are officially becoming first-class citizens in the enterprise—complete with identities, permissions, and audit trails. The stack is evolving fast: security, orchestration, and infrastructure are locking in at the same time. The companies that manage agents best won’t just be safer—they’ll move faster.

⚙️ Try It Yourself

What would it take to run an AI agent locally?

1/ Think of one task you’d want an agent to handle—like organizing files or summarizing documents.

2/ Now ask: what data would you not want leaving your machine?

3/ Try running a small local model (like via llama.cpp or an on-device tool) and compare it to a cloud-based assistant.

You’ll notice the tradeoff immediately:

• Local = more privacy, less power
• Cloud = more capability, less control
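That tradeoff can be sketched as a tiny routing rule — keep prompts that mention sensitive data on the local model, send everything else to the more capable cloud model. The keyword list and backend labels below are purely illustrative:

```python
# Illustrative privacy router: sensitive prompts stay on-device,
# everything else goes to a more capable cloud model.
SENSITIVE_MARKERS = {"password", "ssn", "medical", "salary", "api key"}

def route(prompt: str) -> str:
    """Return which backend ("local" or "cloud") should handle this prompt."""
    text = prompt.lower()
    if any(marker in text for marker in SENSITIVE_MARKERS):
        return "local"   # more privacy, less power
    return "cloud"       # more capability, less control

route("Summarize my medical records")     # stays local
route("Draft a blog post about robots")   # goes to the cloud
```

Real systems use classifiers and policy engines rather than keyword lists, but the decision they make is the same one you just weighed by hand.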

Platforms like Nvidia’s NemoClaw are trying to bridge that gap.

The future may not be local or cloud—but both working together.

Keep reading