
Nvidia’s eye‑popping OpenAI investment is on ice
The Wall Street Journal reported that chip maker Nvidia had been considering investing up to US$100 billion in OpenAI. Reuters later confirmed that the talks have stalled and that both sides are re‑thinking the numbers. People familiar with the matter said the companies are now discussing a smaller equity investment, potentially “tens of billions,” after Nvidia CEO Jensen Huang questioned OpenAI’s business discipline and said the original term sheet was non‑binding and not final (reuters). Huang later told reporters in Taipei that Nvidia still plans a “huge” investment – the largest in its history – but that it won’t be US$100 billion (reuters). Bloomberg reported that Amazon is also mulling a multi‑billion‑dollar stake in OpenAI.
Why it matters
The pause underscores the unusual scale and volatility of funding in generative AI. A US$100 billion deal would be by far the biggest corporate investment in an AI start‑up, equivalent to more than a tenth of OpenAI’s rumoured US$830 billion valuation. Nvidia’s hesitation reflects concerns that OpenAI’s breakneck expansion has outpaced its ability to monetize models and that rivals like Google’s DeepMind and Anthropic are narrowing the gap (reuters). The episode also shows that chip suppliers want governance rights and business discipline, not just orders for high‑margin processors.
What’s next
OpenAI is still seeking to raise around US$100 billion and will likely court sovereign wealth funds and strategic partners. Nvidia’s eventual investment will probably be substantial but lower than originally mooted. Amazon, which operates the AWS cloud that powers OpenAI, is reportedly in talks to invest up to US$50 billion, signaling competition among hyperscalers. Expect more scrutiny of OpenAI’s burn‑rate and a renewed focus on converting demand into revenue.
Google’s ‘Project Genie’ conjures playable worlds and spooks the gaming industry
Google quietly rolled out a new AI model, internally called “Project Genie,” that can generate interactive 3D game worlds from simple text or image prompts. Reuters reported that the model simulates physics, rendering and real‑time interactions and can condense development cycles dramatically (reuters). During a private demo, Google showed how users could upload hand‑drawn level designs or describe scenarios and Genie would build a playable environment on the fly. The news spooked investors: shares of major game‑engine makers and publishers – including Unity, Roblox and Take‑Two – fell between 10 % and 21 % on the day (reuters).
Why it matters
If Project Genie works as described, it could upend the economics of game development. Today even small titles require teams of designers, programmers and artists working for months. Genie promises to automate much of that work, raising the prospect of “one‑person studios” and a flood of user‑generated games. It also threatens incumbents: existing game engines charge license fees and provide tooling; Genie could render them obsolete or force them to integrate similar capabilities. The model highlights how generative AI is expanding beyond text and images into physics‑based simulation.
What’s next
Google has not said when or how Genie will be released, but developers expect it to become part of the company’s gaming and AR/VR offerings. Regulators may take interest if the model ingests copyrighted assets, and unions will watch for its impact on jobs. For gamers, the promise is more imaginative experiences created by anyone with a good idea.
Starlink to users: your data will train our AI (unless you opt out)
SpaceX updated the privacy policy for its Starlink satellite‑internet service on Jan. 15, and Reuters revealed the change on Jan. 30. The new policy allows SpaceX to use customers’ personal data to train AI models unless they explicitly opt out (reuters). Starlink collects a wide array of information, including users’ location data, payment details, device identifiers, browsing history and potentially the content of communications. The policy says data may be shared with “third‑party collaborators” for AI training and product improvement. Privacy advocates worry that the move could expand surveillance and erode trust ahead of a potential Starlink IPO and rumored merger with Elon Musk’s AI startup xAI.
Why it matters
SpaceX operates one of the world’s largest constellations of low‑Earth‑orbit satellites, giving it unique access to global user data. Letting that data feed machine‑learning models raises serious privacy and regulatory questions. Unlike Meta or Google, Starlink also handles user traffic, which could include sensitive communications. A merger with xAI would give the AI company direct access to this dataset and a powerful training pipeline, potentially accelerating xAI’s competitiveness but also heightening concerns about cross‑company data use.
What’s next
Expect regulatory scrutiny from data‑protection authorities in the U.S., EU and other jurisdictions. Users can opt out now, but consumer‑rights groups are calling for stricter consent requirements. The rumored xAI merger is still pending; if it proceeds, watch for lawsuits and calls for oversight. Starlink may face pressure to adopt more transparent data‑governance practices.
SpaceX’s plan: a million satellites to power AI data centers
In an FCC filing made public on Jan. 31, SpaceX outlined an audacious plan to launch a constellation of up to one million satellites to create solar‑powered data centers in space (reuters). Each satellite would harvest sunlight and beam energy to data‑processing modules, enabling AI workloads to run above the atmosphere with near‑constant solar power. The project hinges on cost reductions from SpaceX’s fully reusable Starship rocket, which has yet to complete an orbital mission but promises to slash launch costs (reuters).
Why it matters
Data‑center power consumption is a major bottleneck for AI expansion. Earth‑based facilities are constrained by grid capacity, rising energy prices and environmental concerns. By moving compute into orbit and tapping continuous sunlight, SpaceX hopes to lower operating costs and reduce carbon footprint. The proposed constellation would dwarf existing satellite fleets and could cement SpaceX as both an internet provider and an AI infrastructure giant. However, the scale – a million satellites – is unprecedented; SpaceX acknowledges it is seeking a high number for flexibility rather than a literal rollout (reuters).
What’s next
The FCC will review the filing, and environmental agencies are likely to weigh in on space‑debris implications. Much depends on Starship’s success; if reusable launch becomes routine, orbital data centers could become economically feasible. Competitors such as Amazon’s Kuiper and China’s GuoWang will watch closely.
Waymo seeks US$16 billion to keep its robotaxis rolling
Bloomberg reported that Waymo, Alphabet’s self‑driving car unit, aims to raise about US$16 billion in fresh financing. Reuters confirmed the story on Jan. 31, noting that Alphabet would provide US$13 billion, with investors such as Sequoia Capital, DST Global and Dragoneer expected to supply the rest (reuters). The funding round would value Waymo at roughly US$110 billion, approaching the market cap of General Motors. Waymo runs the only paid robotaxi service in the U.S. without a safety driver, operating a fleet of 2,500 vehicles across Phoenix and parts of California.
Why it matters
Robotaxi deployment is capital‑intensive; Waymo burns cash on vehicle manufacturing, mapping and remote‑operator staffing while revenue remains modest. The new round would give it a war chest to expand services ahead of rivals like Cruise (GM) and Tesla’s planned robotaxis. However, regulators are scrutinizing safety: the National Highway Traffic Safety Administration (NHTSA) opened a probe after a Waymo vehicle struck a child near a school (reuters). Raising money amid regulatory heat signals Alphabet’s commitment but also underscores the operational and legal risks of autonomous vehicles.
What’s next
Investors will watch whether Waymo can scale operations without increasing incidents. The NHTSA investigation could lead to additional rules for all robotaxi operators. If the round closes at the targeted valuation, it will reinforce investor confidence in autonomous driving just as some competitors have pulled back.
Judge signals she’ll toss xAI’s trade‑secret suit against OpenAI
In a Jan. 30 hearing, U.S. District Judge Rita Lin indicated she is inclined to dismiss xAI’s lawsuit accusing OpenAI of stealing trade secrets related to the Grok chatbot. Lin said xAI’s complaint failed to plausibly allege that OpenAI acquired or used trade secrets and characterized the claim of unfair competition as centered on employee poaching rather than misappropriation (reuters). She signaled she may allow xAI to amend its complaint, but the case is likely to be narrowed. The lawsuit is part of a broader legal battle in which Elon Musk’s xAI is seeking US$134.5 billion in damages and claims OpenAI undermined its ability to compete.
Why it matters
The case underscores the high stakes of the AI talent war. xAI alleged that OpenAI hired its employees to obtain confidential information about Grok, a chatbot competing with ChatGPT. A dismissal would be a major victory for OpenAI, reducing legal risk as it chases another huge funding round. It also sets a precedent that poaching alone may not constitute trade‑secret theft, potentially cooling similar lawsuits in the future.
What’s next
If the complaint is dismissed with leave to amend, xAI could refile with more specific allegations. Regardless of the lawsuit’s outcome, the two companies will continue to vie for top researchers and compute resources. The trial date of April 27 remains on the calendar, but may be cancelled if the case is tossed.
China clears DeepSeek to buy Nvidia’s H200 chips—under conditions
Chinese regulators have granted AI startup DeepSeek conditional approval to purchase Nvidia’s H200 accelerator chips, Reuters reported on Jan. 30. The National Development and Reform Commission is still finalizing restrictions, but the move mirrors earlier approvals for ByteDance, Alibaba and Tencent to buy more than 400,000 H200 chips (reuters). U.S. officials previously cleared the sale despite concerns that H200, Nvidia’s second‑most‑powerful AI chip, could be repurposed for military use. DeepSeek plans to launch its Kimi V4 large‑language model with advanced coding capabilities in February (reuters).
Why it matters
Access to high‑end AI chips has become a geopolitical flashpoint. Washington restricts exports of Nvidia’s top‑tier H100 and H200 chips to Chinese entities over national‑security concerns, while Beijing wants to cultivate domestic AI champions. DeepSeek’s conditional approval suggests China is balancing the need to fuel innovation against the risk of U.S. sanctions and military misuse. For Nvidia, sales to Chinese customers remain a lucrative albeit politically sensitive revenue stream.
What’s next
The conditions on DeepSeek’s purchase could include restrictions on model size, training data and export controls. U.S. lawmakers have criticized Nvidia for selling to Chinese firms; additional congressional scrutiny is likely. DeepSeek’s Kimi V4 release will be closely watched to see whether access to H200 chips propels Chinese models closer to Western peers.
Moonshot AI’s Kimi K2.5 pushes agentic models forward
Chinese AI firm Moonshot AI released Kimi K2.5, an open‑weight multimodal model that expands on its K2 series. According to HPCwire, K2.5 was trained on 15 trillion text and image tokens and introduces an “Agent Swarm” feature that can spawn up to 100 sub‑agents to parallelize tasks (hpcwire). These sub‑agents collaborate to divide and conquer complex workflows, cutting latency three‑ to four‑and‑a‑half‑fold compared with a single agent. K2.5 also improves “vision‑to‑code” capabilities, allowing the model to generate software from images or video (hpcwire).
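Moonshot has not published Agent Swarm’s internals, but the behaviour described maps onto a familiar fan‑out/fan‑in orchestration pattern: a coordinator splits a task across parallel sub‑agents and then merges their outputs, so wall‑clock time approaches one sub‑agent’s latency rather than the sum. The Python sketch below illustrates only that general pattern; the run_subagent stub, the task decomposition and the agent count are assumptions for illustration, not Moonshot’s actual API.

```python
# Illustrative sketch of a fan-out/fan-in "agent swarm"; the sub-agent call is a
# stub standing in for whatever model API a real system would invoke.
import asyncio
from dataclasses import dataclass


@dataclass
class SubTask:
    task_id: int
    prompt: str


async def run_subagent(task: SubTask) -> str:
    """Hypothetical sub-agent call; here it just simulates model latency."""
    await asyncio.sleep(0.1)  # stand-in for an actual model round trip
    return f"[agent {task.task_id}] result for: {task.prompt}"


async def agent_swarm(goal: str, n_agents: int = 8) -> str:
    # Coordinator decomposes the goal into sub-tasks (trivially, for illustration).
    subtasks = [SubTask(i, f"{goal} (part {i + 1}/{n_agents})") for i in range(n_agents)]
    # Fan out: all sub-agents run concurrently, which is where the claimed
    # latency reduction over a single sequential agent comes from.
    results = await asyncio.gather(*(run_subagent(t) for t in subtasks))
    # Fan in: merge partial results into a single answer.
    return "\n".join(results)


if __name__ == "__main__":
    print(asyncio.run(agent_swarm("summarize the incident reports", n_agents=4)))
```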
Why it matters
While many frontier models remain closed, Moonshot AI continues to release its weights, letting developers fine‑tune and deploy the model without sending data to U.S. servers. The Agent Swarm architecture showcases how agentic AI—models that can orchestrate multiple sub‑processes autonomously—is advancing. The combination of native multimodality and open weights positions K2.5 as a platform for research and enterprise use, especially in regions sensitive to data sovereignty.
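To make the data‑sovereignty point concrete, open weights can be downloaded and served entirely on hardware the developer controls, so prompts and outputs never leave the local environment. The snippet below is a generic Hugging Face transformers sketch under that assumption; the checkpoint identifier is a placeholder rather than a confirmed K2.5 repository name, and a model of this scale would in practice need multi‑GPU or quantized serving.

```python
# Generic local-inference sketch for an open-weight model. The repo id is a
# placeholder, not a confirmed Kimi K2.5 checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "moonshotai/Kimi-K2.5"  # placeholder identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",  # spread layers across whatever local devices are available
    trust_remote_code=True,
)

# Tokenization, generation and decoding all run locally; no data is sent to an
# external inference API.
inputs = tokenizer("Explain what an agent swarm is.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```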
What’s next
Moonshot AI plans to launch Kimi V4 in February with even stronger coding and reasoning capabilities. Western labs will watch how the Agent Swarm approach performs relative to closed systems like OpenAI’s GPT‑4. If successful, K2.5 could accelerate a shift toward decentralized, agentic AI systems.
Apple to unveil a Gemini‑powered Siri—yes, that Gemini
Technology site TechEdt (techedt) reported that Apple plans to unveil a significantly upgraded Siri that runs on Google’s Gemini large‑language model. According to leaked details, Apple will preview the new assistant in February before rolling it out in the iOS 26.4 beta a few weeks later. The integration marks the biggest architectural change to Siri since its 2011 launch and will bring Siri closer to modern chatbots, enabling it to understand complex queries, maintain context and perform multi‑step tasks.
Why it matters
Apple has lagged rivals in generative‑AI capabilities, and its in‑house Apple Intelligence framework remains limited. By licensing Gemini—a model often benchmarked alongside GPT‑4 and Anthropic’s Claude—Apple can leapfrog ahead without waiting for its own models to catch up. The move also signals a shift toward cross‑ecosystem partnerships in AI; rather than building everything themselves, tech giants may mix and match models to offer the best experience.
What’s next
Beta testers will scrutinize how well the Gemini‑powered Siri preserves privacy and whether Apple continues to process requests on‑device. Developers at WWDC will learn how to integrate the new assistant into their apps. If the partnership succeeds, expect deeper collaboration between Apple and Google in other product areas—and pressure on Amazon’s Alexa and Microsoft’s Copilot to keep up.
Amazon cuts 16,000 jobs to bankroll AI—and hints at more to come
Amazon announced on Jan. 28 that it will eliminate about 16,000 corporate jobs, its second major round of layoffs in recent months. A company blog post explained that the cuts are part of an effort to reduce management layers, increase ownership and remove bureaucracy while freeing resources to invest heavily in artificial intelligence (cnbc). The layoffs follow an earlier reduction of 14,000 roles in October and bring total cuts since then to around 30,000, roughly 10 % of Amazon’s corporate workforce (cnbc).
Why it matters
Amazon’s willingness to continually trim staff underscores the capital‑intensive nature of AI. CEO Andy Jassy has said that efficiency gains from AI will lead to fewer people performing some existing jobs (cnbc). The company is also building massive new data centers, with 2026 capital expenditures expected to reach US$125 billion, the highest among the mega‑caps (cnbc). By cutting overhead, Amazon aims to redirect billions toward AI research, semiconductor procurement and cloud infrastructure, a shift that may reshape its internal culture.
What’s next
Beth Galetti, Amazon’s head of people experience, cautioned that more layoffs could occur as each team evaluates its ownership and speed (cnbc). Watch for Amazon to announce new AI products and services that justify the spending, including updates to its Bedrock foundation‑model platform and AI‑enhanced Alexa devices. The layoffs may also prompt questions about employee morale and Amazon’s ability to recruit specialized talent.
