"We’re releasing Qwen3-Coder-480B-A35B-Instruct, our most powerful open agentic code model to date. This 480B-parameter Mixture-of-Experts model (35B active) natively supports 256K context and scales to 1M context with extrapolation. It achieves top-tier performance among open models across multiple agentic coding benchmarks, including SWE-bench-Verified!"
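Since the weights are released openly, one common way to try a model like this is through an OpenAI-compatible chat endpoint. The sketch below illustrates that pattern only; the endpoint URL, API key, and model ID are placeholders and are not specified by the announcement.

```python
# Minimal sketch: querying Qwen3-Coder via an OpenAI-compatible chat endpoint.
# base_url, api_key, and model below are placeholders; substitute whatever your
# provider or local server actually exposes for Qwen3-Coder-480B-A35B-Instruct.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.invalid/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                          # placeholder key
)

response = client.chat.completions.create(
    model="qwen3-coder-480b-a35b-instruct",  # placeholder model ID
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
```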
"Unitree Introducing | Unitree R1 Intelligent Companion Price from $5900
Join us to develop/customize, ultra-lightweight at approximately 25kg, integrated with a Large Multimodal Model for voice and images, let's accelerate the advent of the agent era!"
"Inspired by the brain's hierarchical processing, HRM delivers unprecedented reasoning power on complex tasks like ARC-AGI and expert-level Sudoku using just 1k examples, with no pretraining or CoT!"
"Introducing Ideogram Character -- the first character consistency model that works with just one reference image. Now available to all users for free!"
"Our humanoid now learns loco-manip skills that generalize across space (Stanford) & time (day & night), using egocentric vision, trained only in simulation"
"We’ve supercharged coding & agentic skills—now Qwen3-Max-Instruct without thinking rivaling top models on SWE-Bench, Tau2-Bench, SuperGPQA, LiveCodeBench, and AIME25.
With Qwen3-Max-Thinking equipped with tool use and deployed in heavy mode, it’s nearly perfect on key benchmarks. Built on massive scale + data, and backed by relentless compute scaling in pre-training & RL.
"Unitree G1 has mastered more quirky skills
Unitree G1 has learned the "Anti-Gravity" mode: stability is greatly improved under any action sequence, and even if it falls, it can quickly get back up."