AI video generation news. Sora, Runway, Pika updates. Video AI model releases, capabilities, and industry news.
We cap off our World Models coverage with one of the most exciting new approaches: long-running, multiplayer, interactive world models built with agents bootstrapped from game engines!
Google is adding a way to customize and instruct avatars for video creation in the Vids app.
GLM-5V-Turbo is Z.ai’s first native multimodal agent foundation model, built for vision-based coding and agent-driven tasks. It natively handles image, video, and text inputs; excels at long-horizon planning, complex coding, and task execution; and works seamlessly with agents to complete the full loop of “perceive → plan → execute”.
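The “perceive → plan → execute” loop described above can be sketched in a few lines. This is a hypothetical illustration of the general agent pattern, not the GLM-5V-Turbo API: all class and method names here are invented, and the planner and executor are stubbed out.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    text: str  # stand-in for a multimodal input (image, video, or text)

@dataclass
class Agent:
    history: list = field(default_factory=list)

    def perceive(self, obs: Observation) -> str:
        # Ingest the observation into working context.
        self.history.append(obs.text)
        return obs.text

    def plan(self, goal: str) -> list[str]:
        # Decompose the goal into steps (stubbed as a semicolon split;
        # a real model would do long-horizon planning here).
        return [step.strip() for step in goal.split(";") if step.strip()]

    def execute(self, steps: list[str]) -> list[str]:
        # Carry out each planned step and record the result.
        return [f"done: {s}" for s in steps]

agent = Agent()
agent.perceive(Observation("screenshot of a failing build"))
steps = agent.plan("read error; patch file; rerun tests")
results = agent.execute(steps)
print(results)  # ['done: read error', 'done: patch file', 'done: rerun tests']
```

The point of the loop is that each stage feeds the next: perception updates context, planning consumes the goal plus context, and execution closes the loop by producing results the agent can perceive again.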
Build with Veo 3.1 Lite
OpenAI's decision last week to shut down Sora, its AI video-generation tool, just six months after releasing it to the public raised immediate suspicions. The app had invited users to upload their own faces — so was this some kind of elaborate data grab?
When an 82-year-old Kentucky woman was offered $26 million from an AI company that wanted to build a data center on her land, she said no. Sure, that same company can try to rezone 2,000 acres nearby anyway, but as AI infrastructure stretches further into the real world, the real world is starting to push back. That tension is everywhere […]
A quiet day lets us reflect on the growing trend of CLIs for ~everything~ agents.
Typically, AI requires massive amounts of training data to understand complex human actions. However, in real-world scenarios, it is often difficult to secure sufficient video data for specific actions. A research team led by Jae-Pil Heo, Professor in the Department of Software at Sungkyunkwan University, has developed an AI technology that can accurately recognize new actions from only a small number of example videos. The research team focused on few-shot action recognition, which enables AI to learn and distinguish the characteristics of new actions from only a few examples.
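Few-shot action recognition, as described above, can be illustrated with a toy nearest-prototype classifier: each new action class is summarized by the mean embedding (“prototype”) of its few support clips, and a query clip is assigned to the closest prototype. This is a generic sketch of the few-shot setup, not the Sungkyunkwan team’s method; the embeddings below are random stand-ins for real video features.

```python
import numpy as np

rng = np.random.default_rng(0)

def prototypes(support: dict) -> dict:
    """Average the few support-clip embeddings per class into one prototype."""
    return {label: clips.mean(axis=0) for label, clips in support.items()}

def classify(query: np.ndarray, protos: dict) -> str:
    """Assign the query clip to the class with the nearest prototype."""
    return min(protos, key=lambda label: np.linalg.norm(query - protos[label]))

# Two novel actions with only three example clips each (5-dim toy embeddings,
# drawn around well-separated means so the toy example is unambiguous).
support = {
    "wave": rng.normal(loc=2.0, size=(3, 5)),
    "jump": rng.normal(loc=-2.0, size=(3, 5)),
}
protos = prototypes(support)

query = rng.normal(loc=2.0, size=5)  # a clip resembling "wave"
print(classify(query, protos))  # "wave"
```

The appeal of this family of approaches is exactly what the blurb highlights: adding a new action class requires only a handful of labeled clips to form a prototype, rather than retraining on massive datasets.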
The new model in CapCut will have built-in protections for making video from real faces or unauthorized intellectual property.
Chinese artificial intelligence powerhouse and TikTok owner ByteDance has quietly rolled out its latest video generator, Seedance 2.0, worldwide, while its US rival OpenAI called time on a similar product.
In a major step toward more adaptable and intuitive machines, Kempner Institute Investigator Yilun Du and his collaborators have unveiled a new kind of artificial intelligence system that lets robots envision their actions before carrying them out. The system, which uses video to help robots imagine what might happen next, could transform how robots navigate and interact with the physical world.