OpenAI Realtime API now supports voice agents with sub-300ms latency
OpenAI's Realtime API gains voice agents with sub-300ms latency, barge-in, and 30% cheaper cached prompts.
Key Points
- Sub-300ms first-token latency.
- Tool-using voice agents with barge-in.
- 30% discount on cached prompts.
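The barge-in behavior in the key points can be sketched as a small state machine: while agent audio is streaming, a speech-start event from the caller should cancel the in-flight response. The event names below mirror the Realtime API's published server and client events (`response.audio.delta`, `response.done`, `input_audio_buffer.speech_started`, `response.cancel`), but the `VoiceSession` class around them is an illustrative assumption, not OpenAI's SDK.

```python
class VoiceSession:
    """Minimal barge-in logic for a streaming voice agent session."""

    def __init__(self):
        self.response_active = False
        self.sent = []  # client events this logic would emit upstream

    def handle_event(self, event_type):
        if event_type == "response.audio.delta":
            # Agent audio is currently streaming out to the caller.
            self.response_active = True
        elif event_type == "response.done":
            self.response_active = False
        elif event_type == "input_audio_buffer.speech_started":
            # Barge-in: the caller started talking while the agent
            # was mid-reply, so cancel the in-flight response.
            if self.response_active:
                self.sent.append("response.cancel")
                self.response_active = False


session = VoiceSession()
session.handle_event("response.audio.delta")
session.handle_event("input_audio_buffer.speech_started")
print(session.sent)  # a single cancel event was issued
```

In a real integration these events arrive over the API's WebSocket connection; the point of the sketch is only that cancellation must be conditional on an active response, so a speech-start event while the agent is idle does nothing.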
Sea's View on the Future of Agentic Software Development with Codex
AI Summary
Sea Limited is leveraging Codex to enhance AI-native software development across its engineering teams in Asia.
Databricks brings GPT-5.5 to enterprise agent workflows
AI Summary
Databricks integrates GPT-5.5 into enterprise workflows, achieving a new benchmark in OfficeQA Pro.
OpenAI and Malta partner to bring ChatGPT Plus to all citizens
AI Summary
OpenAI partners with Malta to provide ChatGPT Plus and AI training for citizens.
Invisible Orchestrators Suppress Protective Behavior and Dissociate Power-Holders: Safety Risks in Multi-Agent LLM Systems
AI Summary
Invisible orchestrators in multi-agent LLM systems pose significant safety risks and affect behavior dynamics.
OpenAI co-founder Greg Brockman reportedly takes charge of product strategy
AI Summary
OpenAI co-founder Greg Brockman is now leading product strategy amid plans to integrate ChatGPT and Codex.

Distribution-Aware Algorithm Design with LLM Agents
arXiv cs.AI · Saharsh Koganti, Priyadarsi Mishra, Pierfrancesco Beneventano, Tomer Galanti
AI Summary
The study presents a distribution-aware algorithm leveraging LLM agents for optimized solver code generation.
Why Featured
Voice latency under 300ms unlocks production-grade phone agents, making this directly relevant to support and ops automation.