April 17, 2026 marked a major surge in AI infrastructure capital deployment and tooling. CoreWeave signed a $6B agreement with Jane Street for AI compute, Oracle announced over $100B in debt to accelerate Stargate data center construction, and Romania opened preselection for the Black Sea AI Gigafactory targeting 20,000+ GPUs. South Korea approved 400B won in low-interest loans for Naver’s AI data center expansion. On the open source front, Tongyi Lab released Qwen3.6-35B-A3B under Apache 2.0, while Google’s Auto-Diagnose reached 90% success diagnosing integration test failures. A critical MCP “tool poisoning” vulnerability raised security concerns across the agent ecosystem.
Key Highlights
💰 CoreWeave signs $6B compute deal with Jane Street, covering 9 of top 10 model providers
🏗️ Oracle takes on $100B+ debt for Stargate, procuring 2.8 GW of Bloom Energy fuel cells
🇷🇴 Romania opens preselection for Black Sea AI Gigafactory (20,000+ GPUs)
🇰🇷 South Korea approves 400B won low-interest loans for Naver AI data center
⭐ Qwen3.6-35B-A3B released open source (Apache 2.0) — sparse MoE, 3B active params
🔧 TurboQuant achieves 6x memory compression with zero accuracy loss
🏦 Slash raises $100M Series C at $1.4B valuation for AI-native banking
🔒 MCP “tool poisoning” vulnerability enables SSH key exfiltration via agent actions
Computing & Cloud Infrastructure
💰 CoreWeave Signs $6B Deal with Jane Street, Covering 9 of Top 10 Model Providers
According to Bitget, CoreWeave announced a $6B agreement to supply AI compute to Jane Street for trading and research, noting that it serves 9 of the top 10 model providers globally.
Jane Street’s large-scale compute procurement as a leading quantitative trading firm confirms the central role of AI models in financial trading strategies. CoreWeave’s high penetration among top model providers also signals that GPU cloud supply is consolidating toward a handful of platforms.
🏗️ Oracle Takes on $100B+ Debt to Accelerate Stargate Construction
According to Bitget, Oracle announced it will incur over $100B in debt to accelerate AI data center construction as a core technology partner in the Stargate project, including plans to procure up to 2.8 GW of Bloom Energy fuel cells.
The $100B+ debt scale reflects Stargate’s enormous ambition. Oracle’s choice of fuel cells over traditional grid power signals a structural shift in AI data center energy infrastructure — self-contained power may become standard for hyperscale compute clusters.
🇷🇴 Romania Opens Preselection for Black Sea AI Gigafactory
According to Romania-insider, Romania opened preselection for a consortium to build the “Black Sea AI Gigafactory,” targeting an initial deployment of at least 20,000 GPUs or equivalent accelerators.
Eastern European nations are actively building domestic AI compute infrastructure. Romania’s project shows AI compute diversifying beyond the US-China duopoly, driven by geopolitical competition pushing countries toward compute sovereignty.
🇰🇷 South Korea Approves 400B Won for Naver AI Data Center
According to SE Daily, South Korea approved 400B won in low-interest loans to Naver to expand the Gak Sejong AI data center, supporting HyperCLOVA X and domestic AI sovereignty.
South Korea’s direct government support for domestic AI infrastructure through the National Growth Fund demonstrates a concrete national AI sovereignty strategy.
Open Source
⭐ Qwen3.6-35B-A3B Released Open Source (Apache 2.0)
According to Hugging Face, Tongyi Lab released Qwen3.6-35B-A3B, a sparse MoE model with 35B total parameters and only 3B active per inference, supporting 262k context and multimodal capabilities. Available on Hugging Face and Ollama under Apache 2.0.
Delivering 35B-level capability with only 3B active parameters exemplifies the efficiency advantages of sparse MoE architecture. The Apache 2.0 license and 262k context window make Qwen3.6 highly competitive in the open source inference ecosystem.
Research & Benchmarks
🧬 LongCoT: Long-Horizon Reasoning Benchmark — Top Models Under 10%
According to Arxiv, LongCoT introduced 2,500 expert problems spanning multiple domains to test long-horizon reasoning, with top models scoring below 10% accuracy at release.
Long-horizon multi-step reasoning remains a core weakness for LLMs. Below-10% accuracy indicates massive headroom for improvement on tasks requiring dozens of logical reasoning steps.
🧬 TREX: Agent-Driven Automated LLM Fine-Tuning via MCTS
According to Arxiv, TREX presented an agent-driven system for automated LLM fine-tuning using Monte Carlo Tree Search (MCTS) for strategy exploration, validated on FT-Bench.
Applying MCTS to fine-tuning strategy search is an intriguing direction — agents that automatically discover optimal fine-tuning paths could significantly reduce manual tuning costs.
🧬 XComp: Extreme Video Token Compression Improves LVBench Accuracy
According to Arxiv, XComp achieved extreme video token compression enabling more frames at the same compute budget, with improved accuracy on LVBench.
The infrastructure bottleneck for video understanding lies in token count explosion. XComp’s compression approach directly addresses this core pain point, offering practical value for video AI engineering.
Tools & Frameworks
🔧 Google Auto-Diagnose: Gemini-Powered Integration Test Failure Diagnosis
According to X, Google’s Auto-Diagnose, powered by Gemini 2.5 Flash, reached 90% success diagnosing integration test failures, ranking #14 of 370 internal tools within a year of launch.
AI-driven development tools are moving from “assistive” to “core productivity.” The 90% diagnosis success rate and internal ranking validate that LLMs are delivering tangible value in software development infrastructure.
🔧 TurboQuant: 6x Memory Compression with Zero Accuracy Loss
According to X, TurboQuant was announced as a 6x memory compression technique with zero accuracy loss, significantly reducing model inference memory footprint.
Zero-accuracy-loss memory compression is a significant advance for inference infrastructure in an era of expensive GPU memory. 6x compression means the same hardware can serve larger models or more concurrent requests.
🔧 Multi-Agent Orchestration with Google ADK and A2A Protocol
According to Dev.to, a technical guide demonstrated multi-agent orchestration using Google’s open-source Agent Development Kit (ADK) and A2A protocol, deployed on AWS Lightsail.
Cross-cloud multi-agent deployment is becoming standardized. The ADK and A2A combination provides a vendor-agnostic approach to agent interoperability.
AI-Native Platforms & Applications
🏦 Slash Raises $100M Series C at $1.4B Valuation for AI-Native Banking
According to X, Slash raised $100M Series C at a $1.4B valuation for AI-native banking infrastructure, introducing its “Twin” AI private banker product.
The convergence of fintech and AI is extending from backend risk management to frontend customer service. The $1.4B valuation reflects capital market confidence in AI-native financial infrastructure.
🏢 Wanted Lab Launches Enterprise AI Platform Ennoia
According to SE Daily, Wanted Lab rebranded from Wanted LaaS to launch Ennoia, an integrated enterprise AI platform targeting enterprise customers.
South Korea’s enterprise AI platform market is intensifying. The strategic pivot from recruitment SaaS to enterprise AI reflects the trend of AI infrastructure capabilities penetrating vertical industries.
🌐 Spingence & Digital Base Unveil On-Premises Enterprise AI Platform
According to Thailand Business News, Spingence and Digital Base unveiled a secure, on-prem enterprise AI platform at AI Expo Tokyo 2026, supporting RAG and agent development with full local deployment.
Data sovereignty and privacy compliance are driving demand for on-premises AI deployment. Localized RAG + agent platforms are becoming standard for enterprise AI adoption.
📱 Google Releases Native Gemini App for macOS
According to Stuff, Google released a native Gemini app for macOS with on-screen content analysis capabilities.
Google beat Apple’s standalone Siri to the punch with a native macOS Gemini app, highlighting the competitive race for AI assistant presence at the operating system level.
Security
🔒 MCP “Tool Poisoning” Vulnerability Enables SSH Key Exfiltration
According to Sec-ra, researchers disclosed a “tool poisoning” vulnerability in the Model Context Protocol (MCP), where malicious instructions embedded in the tool description field can trigger harmful agent actions, including SSH key exfiltration.
This is among the first major security challenges facing the MCP ecosystem. As MCP becomes the standard protocol for agent interoperability, the trust boundary issue in tool description fields directly impacts the security of the entire agent ecosystem.
🔍 Infra Insights
Today’s core trends: unprecedented scale of compute infrastructure investment, sparse MoE architecture pushes open source model efficiency frontier, and MCP security vulnerability exposes agent interoperability risks.
CoreWeave’s $6B Jane Street deal, Oracle’s $100B+ Stargate commitment, and national-level compute investments from Romania and South Korea all point to a single trend: global AI compute infrastructure is entering an era of hundred-billion-dollar commitments. Meanwhile, Qwen3.6-35B-A3B delivering 35B-level capability with 3B active parameters and TurboQuant achieving 6x zero-loss compression show that efficiency optimization is advancing in parallel with scale expansion. The MCP tool poisoning vulnerability serves as a reminder that agent interoperability protocol security must be built in tandem with functionality.