AI regulation and policy news. EU AI Act, US legislation, compliance updates, and government oversight of artificial intelligence.
France will make a fresh attempt to protect children from excessive screen time, proposing a ban on social media access for children under 15 by next September, according to a draft law seen by AFP.
Use a loyalty card at a drug store, browse the web, post on social media, get married or do anything else most people do, and chances are companies called data brokers know about it—along with your email address, your phone number, where you live and virtually everywhere you go.
Italy has ordered Meta to suspend its policy that bans companies from using WhatsApp's business tools to offer their own AI chatbots on the popular chat app.
Italian regulators ordered Meta on Wednesday to open its WhatsApp chat platform to rival AI chatbots, as they and EU authorities pursue a probe into whether the US tech giant is abusing its dominant market position.
A federal judge on Tuesday blocked a Texas law that would have required age verification and parental consent for minors downloading mobile apps, ruling the measure likely violates free speech protections.
These authors rejected Anthropic's class action settlement, arguing that "LLM companies should not be able to so easily extinguish thousands upon thousands of high-value claims at bargain-basement rates."
As generative AI systems become more deeply woven into the fabric of modern life—drafting text, generating images, summarizing news—debates over who should profit from the technology are intensifying.
The bill will require large AI developers to publish information about their safety protocols and report safety incidents to the state within 72 hours.
Nearly every time you’ve heard a borderline outlandish idea of what AI will be capable of, it has turned out that Sam Altman was, if not the first to articulate it, at least its most persuasive and influential advocate. For more than a decade he has been known in Silicon Valley as a world-class…
It’s a weird time to be an AI doomer. This small but influential community of researchers, scientists, and policy experts believes, in the simplest terms, that AI could get so good it could be bad—very, very bad—for humanity. Though many of these people would be more likely to describe themselves as advocates for AI safety…
Do you believe the singularity is nigh?
Is AI balkanization measurable?
At what point will AI change your daily life?
Discover OpenAI’s Teen Safety Blueprint—a roadmap for building AI responsibly with safeguards, age-appropriate design, and collaboration to protect and empower young people online.
Meeting the demands of the Intelligence Age will require strategic investment in energy and infrastructure. OpenAI’s submission to the White House details how expanding capacity and workforce readiness can sustain U.S. leadership in AI and economic growth.
We’re strengthening the Frontier Safety Framework (FSF) to help identify and mitigate severe risks from advanced AI models.
OpenAI's Korea Economic Blueprint outlines how South Korea can scale trusted AI through sovereign capabilities and strategic partnerships to drive growth.
We are growing machines we do not understand.
OpenAI and Allied for Startups release the Hacktivate AI report with 20 actionable policy ideas to accelerate AI adoption in Europe, boost competitiveness, and empower innovators.
OpenAI and Japan’s Digital Agency partner to advance generative AI in public services, support international AI governance, and promote safe, trustworthy AI adoption worldwide.
© 2026 68 Ventures, LLC. All rights reserved.