Agentic AI

🕸️ AI Agents Are Becoming the Internet’s New Users

What happened
Box CEO Aaron Levie argued that the next generation of software will increasingly be designed for AI agents rather than human users. In interviews this week, he described a near-term future where autonomous agents browse the web, analyze databases, and even spend money to access services or data on behalf of humans. Levie believes early adoption will come from businesses deploying agents to handle research, procurement, and operational workflows.

Why it matters
This represents a subtle but important shift in how software gets built. Instead of optimizing interfaces for humans clicking buttons, companies may soon optimize APIs and services for machine-to-machine interactions, where AI agents call tools, negotiate transactions, and orchestrate workflows autonomously. In effect, the internet could evolve from a human-centric network to an agent-centric economy.
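To make the idea concrete, here is a minimal sketch of what an "agent-native" service might look like: instead of a web UI, it publishes machine-readable tool schemas (with per-call pricing) that an agent can discover and invoke. All names, endpoints, and prices below are illustrative assumptions, not any real company's API.

```python
# Hypothetical sketch of an agent-facing API surface. A service advertises
# its capabilities as structured tool schemas, and an agent calls them
# programmatically instead of clicking through a UI.
import json

TOOL_CATALOG = {
    "search_products": {
        "description": "Search the product database by keyword.",
        "params": {"query": "string", "max_results": "integer"},
        "price_per_call_usd": 0.002,  # micropayment-style metering
    },
    "create_order": {
        "description": "Place an order for a product id.",
        "params": {"product_id": "string", "quantity": "integer"},
        "price_per_call_usd": 0.01,
    },
}

def describe_tools() -> str:
    """Return the catalog as JSON, the way an agent would fetch it."""
    return json.dumps(TOOL_CATALOG, indent=2)

def call_tool(name: str, **kwargs):
    """Dispatch a tool call after validating it against the schema."""
    schema = TOOL_CATALOG.get(name)
    if schema is None:
        raise ValueError(f"unknown tool: {name}")
    missing = set(schema["params"]) - set(kwargs)
    if missing:
        raise ValueError(f"missing params: {missing}")
    # A real service would execute the call and bill the agent's account;
    # here we just echo a structured result.
    return {"tool": name, "args": kwargs, "cost_usd": schema["price_per_call_usd"]}
```

An agent would fetch `describe_tools()`, decide which capability matches its task, then issue something like `call_tool("search_products", query="usb-c cable", max_results=5)` — no human interface involved.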

What’s next
Expect more platforms to introduce agent-native APIs, micropayment systems, and permission layers so autonomous systems can safely transact online. Early movers in developer tools, data access, and enterprise software may benefit first.

💡 Gumloop Raises $50M to Democratize Enterprise Agent Building

What happened
Gumloop, a no-code platform for building and sharing AI agents, secured a $50 million Series B led by Benchmark. The platform enables employees at companies like Shopify, Ramp, and Instacart to create and deploy autonomous agents for complex, multi-step tasks—no engineering required. Gumloop’s model-agnostic approach lets enterprises use credits from OpenAI, Gemini, and Anthropic, and share agents internally to accelerate automation.

Why it matters
This funding signals a shift toward empowering every employee—not just developers—to build and deploy agentic workflows. By lowering the technical barrier and supporting multiple AI models, Gumloop could drive a new wave of enterprise automation and internal innovation.

What’s next
Expect rapid expansion of Gumloop’s customer base, more integrations with enterprise tools, and increased competition with established automation platforms and specialized agent builders.

🚀 Rox AI Hits $1.2B Valuation with Autonomous Sales Agents

What happened
Rox AI, a startup developing autonomous agents for sales productivity, reached a $1.2 billion valuation. Rox’s platform plugs into existing enterprise software (Salesforce, Zendesk, etc.) and deploys hundreds of AI agents to monitor accounts, research prospects, and update CRM systems. The company closed 2025 with $8 million in ARR and is positioning itself as an intelligent revenue operating system.

Why it matters
Rox’s approach streamlines fragmented sales tools into a unified, agent-driven workflow, promising higher productivity and proactive customer management. The unicorn valuation reflects strong investor confidence in agentic automation for revenue operations.

What’s next
Look for Rox to expand its agent capabilities, face off with established revenue intelligence and AI CRM competitors, and drive broader adoption of agentic sales workflows.

Enterprise and Generative AI

💰 AI Model Economics Start to Shift

What happened
Several major AI developers are adjusting pricing for generative AI models as demand surges and compute costs remain high. Model providers including OpenAI and leading Chinese AI labs such as Zhipu AI and Tencent have experimented with token-based pricing changes and tiered access models as usage across coding, content generation, and enterprise automation continues to grow.

Why it matters
For years, the generative AI race was about capability. Now economics are entering the equation. Rising prices suggest demand is outpacing available compute while the infrastructure required to run large models remains extremely expensive. As a result, companies are shifting focus toward sustainable business models—prioritizing efficiency, smaller models, and specialized architectures that deliver strong performance with far less compute.
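The economics are easy to see in miniature. The sketch below computes per-request cost from per-million-token rates; the specific prices are invented for illustration and do not reflect any provider's actual rates.

```python
# Token-based pricing math with made-up tier prices. Real provider rates
# differ and change frequently.
def request_cost(prompt_tokens: int, completion_tokens: int,
                 price_in_per_m: float, price_out_per_m: float) -> float:
    """Cost in USD of one request, given per-million-token rates."""
    return (prompt_tokens * price_in_per_m +
            completion_tokens * price_out_per_m) / 1_000_000

# Same workload on a hypothetical frontier model vs. a smaller tier —
# a 20x gap like this is why providers push efficient, specialized models.
large = request_cost(2_000, 500, price_in_per_m=10.0, price_out_per_m=30.0)
small = request_cost(2_000, 500, price_in_per_m=0.5, price_out_per_m=1.5)
```

At scale (millions of requests per day), that per-request difference is what turns model efficiency into a business-model question rather than a benchmark question.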

What’s next
Expect continued experimentation with tiered pricing, open-source alternatives, and optimized small models that reduce costs while maintaining strong performance. The companies that win may not just build the smartest models — but the most economically sustainable ones.

🛠️ Gemini Brings AI Task Automation to Smartphones

What happened
Google rolled out new Gemini-powered task automation features on the Samsung S26 and Google Pixel 10. The update allows users to automate multi-step workflows directly from their phones — things like booking appointments, managing reminders, or coordinating everyday tasks — all powered by Gemini’s large language models.

Why it matters
This is a major step toward bringing agent-like automation into mainstream consumer devices. By embedding LLM-driven task orchestration into smartphones, Google is effectively turning the phone into a personal AI operator capable of executing workflows instead of just answering questions.

What’s next
Expect fast adoption as users experiment with AI-driven automation on their phones. Google will likely expand the capability to more devices while opening Gemini automation APIs to developers. That could trigger a wave of new apps — and push competitors to ship their own on-device AI task agents.

🌧️ Google Uses LLMs to Predict Flash Floods

What happened
Google introduced a new AI system that uses large language models to convert historical news reports into structured data for predicting flash floods. By extracting quantitative signals from qualitative accounts of past floods, the system can help fill critical data gaps in regions where sensor coverage and historical measurements are limited.

Why it matters
Much of the world’s environmental history exists in unstructured text — news reports, government documents, and local records. LLMs make it possible to transform that narrative data into usable signals, improving forecasting in places where traditional monitoring infrastructure is scarce.
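The core pattern — prompt an LLM for structured JSON, then validate it into typed records — can be sketched in a few lines. The schema, field names, and example output below are assumptions for illustration, not Google's actual pipeline, and a hand-written string stands in for the model response so the example runs offline.

```python
# Minimal sketch of LLM-powered extraction: turn a narrative flood report
# into a structured record. The schema here is hypothetical.
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class FloodRecord:
    location: str
    date: str                 # ISO date from the article, if stated
    depth_m: Optional[float]  # reported water depth, if any

EXTRACTION_PROMPT = (
    "From the news article below, return JSON with keys "
    "'location', 'date', and 'depth_m' (null if not reported).\n\n{article}"
)

def parse_llm_output(raw: str) -> FloodRecord:
    """Validate the model's JSON response into a typed record."""
    data = json.loads(raw)
    depth = data.get("depth_m")
    return FloodRecord(
        location=str(data["location"]),
        date=str(data["date"]),
        depth_m=float(depth) if depth is not None else None,
    )

# What a model response might look like for an archived report:
example = '{"location": "Kisumu, Kenya", "date": "2019-04-12", "depth_m": 1.2}'
record = parse_llm_output(example)
```

The validation step matters: downstream forecasting models need clean numeric fields, so malformed or missing values are caught here rather than silently polluting the dataset.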

What’s next
Expect more climate and disaster-response tools built on LLM-powered data extraction. Similar techniques could soon be used to improve predictions for hurricanes, wildfires, and droughts — turning decades of archived text into actionable forecasting data.

Physical AI

🏭 AI Robotics Startup Raises $500M to Automate Factories

What happened
Mind Robotics, a startup founded by Rivian CEO RJ Scaringe, raised $500 million in new funding, valuing the company at about $2 billion. The firm is developing AI-powered robots designed to perform practical factory tasks like assembling components and managing wiring systems. The robots are trained using camera data collected from real manufacturing environments.

Why it matters
Robotics investment is accelerating as AI models become capable of interpreting visual data and controlling machines in real time. Instead of flashy humanoid demos, companies are focusing on high-value industrial applications where automation delivers immediate ROI. Investors have already poured tens of billions into AI robotics in 2026, reflecting confidence that embodied AI will reshape manufacturing.

What’s next
Mind Robotics plans to deploy robots inside Rivian factories before expanding to other industries. More broadly, expect a surge in AI-powered industrial robots as better perception models, simulation training, and reinforcement learning close the gap between software intelligence and physical execution.

🦾 Sunday Raises $165M to Build Household Humanoid Robots

What happened
Sunday, a robotics startup founded by Tony Zhao and Cheng Chi, raised $165 million in a Series B round led by Coatue Management, with participation from Tiger Global, Benchmark, and Bain Capital Ventures. The round values the company at $1.15 billion. Sunday is developing Memo, a household humanoid robot designed to assist with everyday tasks like doing laundry, clearing the table, and helping around the home.

Why it matters
Home robotics has long been one of the hardest problems in automation. Unlike factories, homes are unpredictable environments filled with delicate objects, clutter, and constantly changing tasks.

Recent advances in AI perception, manipulation, and training data are making it possible to build robots capable of handling more general-purpose household work — bringing the long-promised “Jetsons” vision of home robotics closer to reality.

What’s next
Sunday plans to use the funding to accelerate Memo’s development and move toward commercial deployment. The company is focused on solving core challenges in object manipulation, reliability, and cost, with the broader goal of making humanoid robots a practical part of everyday life.

🧹 Dyson Launches AI Robot Vacuum That Detects Stains

What happened
Dyson introduced the Spot+Scrub AI Robot, a new robotic vacuum and mop that uses green lasers and AI-powered perception to detect stains and ensure they are fully cleaned. The device scans floors in real time, identifying dirt and spills before automatically adjusting its cleaning behavior. The robot is now available for consumer purchase.

Why it matters
Consumer robotics is becoming far more sophisticated as AI perception improves. Instead of blindly vacuuming in patterns, robots can now identify specific messes and respond intelligently. This marks a shift toward smarter, context-aware home robots that adapt to real-world environments rather than simply following preprogrammed routines.

What’s next
Expect competition to intensify across the smart home robotics market as manufacturers add more AI-driven capabilities. As perception improves, consumers will increasingly expect robots that can detect, decide, and clean autonomously.

🚕 Lucid Unveils Steering Wheel-Free Robotaxi Concept

What happened
Lucid revealed a two-seater robotaxi concept with no steering wheel or pedals, signaling its entry into the autonomous ride-hailing market. The design targets fully driverless transportation and positions Lucid as a competitor to Tesla’s Cybercab.
At the same investor event, the company also introduced new self-driving technology subscriptions and outlined a broader roadmap for its autonomous vehicle platform.

Why it matters
Removing the steering wheel entirely is a strong signal that automakers are moving toward fully autonomous vehicle designs, rather than incremental driver-assist systems. Lucid’s push into robotaxis intensifies competition among automakers and tech companies racing to define the future of autonomous urban mobility.

What’s next
Lucid plans to continue advancing its self-driving technology while working toward regulatory approval for driverless vehicles. The company’s roadmap points to pilot deployments followed by potential commercial robotaxi services in the coming years.

💡 Bottom Line

Agents are moving from tools to operators. As companies build software for machines instead of humans, LLMs get embedded into everyday devices, and robots gain real-world capability, the next phase of AI is becoming clear: autonomous systems executing work across the digital and physical world.

⚙️ Try It Yourself

Build your first simple AI agent workflow.

Pick a repetitive task you do every week — researching prospects, summarizing industry news, updating a CRM, planning travel, connecting internal system processes, or organizing notes. Then use a tool like ChatGPT, Gumloop, or Workato to design an agent that performs the workflow step-by-step.

Start simple:
1. Define the goal of the agent.
2. List the tools or data sources it should use.
3. Write the instructions for how it should complete the task.
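The three steps above can be sketched as a toy agent in a few lines of Python. Every tool, name, and the hard-coded plan below are illustrative stand-ins; a real agent would ask an LLM to choose each tool call, but the sequence is scripted here so the example runs without an API key.

```python
# 1. Define the goal of the agent.
GOAL = "Summarize this week's industry news and file it in my notes."

# 2. List the tools or data sources it should use.
def fetch_news(topic: str) -> list:
    # Stand-in for a real news API or web-search tool.
    return [f"Headline about {topic} #1", f"Headline about {topic} #2"]

def save_note(title: str, body: str) -> str:
    # Stand-in for a notes-app integration.
    return f"Saved note '{title}' ({len(body)} chars)"

TOOLS = {"fetch_news": fetch_news, "save_note": save_note}

# 3. Write the instructions for how it should complete the task.
# Here the plan is scripted; an LLM planner would generate these calls.
def run_agent() -> str:
    headlines = TOOLS["fetch_news"](topic="AI agents")   # gather
    body = "\n".join(headlines)                          # summarize
    return TOOLS["save_note"](title="Weekly AI digest", body=body)  # file

result = run_agent()
```

Swapping the scripted plan for an LLM that picks the next tool call is exactly what platforms like Gumloop or Workato handle for you behind a no-code interface.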

The real insight comes when you begin thinking like a workflow architect instead of a user. Once you break work into repeatable steps, you’ll start to see dozens of processes that an AI agent could run for you.
