
Quoting Luke Curley

simonwillison.net · 2026-05-09 (excerpt)

The post quotes Luke Curley on WebRTC, pointing out that WebRTC is designed to deliberately drop audio packets under poor network conditions in order to keep latency low. The author finds this surprising: for real-time applications like voice calls, users would often rather wait for complete packets to preserve audio quality than accept the distortion caused by packet loss.

Simon Willison

9th May 2026

WebRTC is designed to degrade and drop my prompt during poor network conditions. wtf my dude

WebRTC aggressively drops audio packets to keep latency low. If you’ve ever heard distorted audio on a conference call, that’s WebRTC baybee. The idea is that conference calls depend on rapid back-and-forth, so pausing to wait for audio is unacceptable.

…but as a user, I would much rather wait an extra 200ms for my slow/expensive prompt to be accurate. After all, I’m paying good money to boil the ocean, and a garbage prompt means a garbage response. It’s not like LLMs are particularly responsive anyway.

But I’m not allowed to wait. It’s impossible to even retransmit a WebRTC audio packet within a browser; we tried at Discord. The implementation is hard-coded for real-time latency or else.

— Luke Curley, OpenAI’s WebRTC Problem, in response to How OpenAI delivers low-latency voice AI at scale
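The tradeoff Curley describes can be sketched as a playout-deadline policy. This is a minimal illustrative simulation, not WebRTC's actual jitter-buffer code: the frame duration, packet timings, and the `playout` helper are all assumptions made up for the example.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int          # sequence number
    arrival_ms: int   # when the packet actually arrived

def playout(packets, deadline_ms):
    """Simulate a fixed playout deadline per audio packet.

    Each packet is due at seq * 20 ms (assuming 20 ms audio frames).
    A packet that arrives more than `deadline_ms` after it is due is
    dropped, trading completeness for bounded latency -- the policy
    the quote attributes to WebRTC. A large deadline behaves like the
    "wait an extra 200ms" alternative the author would prefer.
    """
    played, dropped = [], []
    for p in packets:
        due_ms = p.seq * 20
        if p.arrival_ms <= due_ms + deadline_ms:
            played.append(p.seq)
        else:
            dropped.append(p.seq)
    return played, dropped

# Packet 2 is delayed by a network hiccup: due at 40 ms, arrives at 190 ms.
packets = [Packet(0, 5), Packet(1, 22), Packet(2, 190), Packet(3, 66)]

# Real-time policy: tolerate only 30 ms of extra delay -> packet 2 is lost.
print(playout(packets, deadline_ms=30))   # ([0, 1, 3], [2])

# Patient policy: tolerate 200 ms -> every packet is played, just later.
print(playout(packets, deadline_ms=200))  # ([0, 1, 2, 3], [])
```

The point of the sketch is that the deadline is a policy choice; Curley's complaint is that in browser WebRTC this knob is effectively hard-coded toward the real-time end.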
