Comment by Sean-Der

11 hours ago

Responding to some technical points first; after that, I do see a future that isn't WebRTC, though I don't think the author's approach matches where WebTransport + WebCodecs etc. are going.

> …but as a user, I would much rather wait an extra 200ms for my slow/expensive prompt to be accurate

This is the opposite of the feedback I get. Users want instant responses. If you have delay in generating responses or handling interruptions, it kills the magic. You also don't want to send faster than real-time: if the user interrupts the model, you've just wasted a bunch of bandwidth sending 3 minutes of audio (but only played 10 seconds).

> TTS is faster than real-time

https://research.nvidia.com/labs/adlr/personaplex/ The latest/aspirational voice AI is moving away from what the author describes: audio is trickled in/out in 20 ms frames.

> We really hope the user’s source IP/port never changes, because we broke that functionality.

That is supported: when traffic for an existing ufrag comes in from a new IP, the connection follows it.

> It takes a minimum of 8* round trips (RTT)

That's wrong. https://datatracker.ietf.org/doc/draft-hancke-webrtc-sped/

> I’d just stream audio over WebSockets

You lose stuff like AEC. You also push complexity onto clients. The simplicity of WebRTC (createOffer -> setRemoteDescription) is what lets people onboard easily. Lots of developers struggled with the Realtime API + WebSockets (lots of code, and having to do everything by hand).
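
For reference, the entire client-side flow being defended is roughly this (a minimal sketch; the `/session` signaling endpoint is a placeholder for whatever channel carries the SDP):

```ts
// Minimal browser-side WebRTC offer/answer flow.
// POST /session is a hypothetical signaling endpoint; any channel
// that can carry SDP text works.
const pc = new RTCPeerConnection();

// Send the microphone to the server.
const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
mic.getTracks().forEach((track) => pc.addTrack(track, mic));

// Play whatever audio track the server sends back.
pc.ontrack = (event) => {
  const audio = new Audio();
  audio.srcObject = event.streams[0];
  audio.play();
};

// The createOffer -> setRemoteDescription dance.
const offer = await pc.createOffer();
await pc.setLocalDescription(offer);
const answerSdp = await fetch("/session", {
  method: "POST",
  headers: { "Content-Type": "application/sdp" },
  body: offer.sdp,
}).then((r) => r.text());
await pc.setRemoteDescription({ type: "answer", sdp: answerSdp });
```

Everything else (ICE, DTLS, SRTP, AEC, jitter buffering) happens behind those few calls.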

----

I think if I had my choice I would pick the Offer/Answer model and then do QUIC instead of DTLS+SCTP. Maybe do RTP over QUIC? I personally don't feel strongly about the protocol itself. I don't know how to ship code to multiple clients (and customers' clients) with a much larger code footprint.

HELLO MR SEAN,

1. Of course users want lower latency, but they also want fewer instances where the LLM "misheard" them. It would be amazing to run A/B experiments on the latency/quality trade-off, but WebRTC makes that knob difficult to turn.

2. I'm obviously not a TTS expert, but what benefit is there to trickling out the result? The silicon doesn't care how quickly the time number increments.

3. Yeah, sometimes the client is aware that its IP changed and can do an ICE renegotiation. But often it isn't aware; normally it would rely on the server detecting the change, and that's not possible with your LB setup. It's not a big deal, just unfortunate given how many hoops you have to jump through already.

4. Okay, so that draft means 7 RTTs instead of 8? Again, some can be pipelined, so the real number is a bit lower. But the real issue is the mandatory signaling server, which causes a double TLS handshake just in case P2P is being used.

5. Of course WebRTC is easier for a new developer because it's a black box conferencing app. But for a large company like OpenAI, that black box starts to cause problems that really could be fixed with lower level primitives.

I absolutely think you should mess around with RTP over QUIC and would love to help. If you're worried about code size, the browser (and one day the OS) provides the QUIC library. And if you switch to something closer to MoQ, QUIC handles fragmentation, retransmissions, congestion control, etc. Your application ends up being surprisingly small.
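
If you do experiment, the browser's WebTransport API already covers the transport half. A rough sketch of pushing audio frames over QUIC (the endpoint URL is a placeholder, and frame production is elided):

```ts
// Sketch: one encoded audio frame per QUIC unidirectional stream via
// WebTransport. QUIC handles retransmission and congestion control;
// the endpoint URL is a placeholder.
const wt = new WebTransport("https://example.com/audio");
await wt.ready;

async function sendFrame(frame: Uint8Array): Promise<void> {
  const stream = await wt.createUnidirectionalStream();
  const writer = stream.getWriter();
  await writer.write(frame);
  await writer.close(); // the stream boundary doubles as the frame boundary
}
```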

The main shortcoming with RoQ/MoQ is that we can't implement GCC, because QUIC is already congestion controlled (including datagrams). We're stuck with CUBIC/BBR when sending from the browser, for now.

  • Latency versus reliability is a false dichotomy anyway. The alternative to WebRTC isn't to wait for the user to finish speaking before you send any of the audio. Open a WebSocket and send the encoded audio packets as they're generated. You're still sending audio immediately, but if a packet is dropped, TCP retransmits it until it gets through. If the connection is really slow, packets queue up and the user has to wait, but it still works. You get the low latency in the best case and the robustness in the worst case.
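
    Concretely, something like this sketch (encodeFrame() is a stand-in for whatever codec you use; a real client would use an AudioWorklet rather than the deprecated ScriptProcessorNode):

    ```ts
    // Stream encoded microphone audio over a WebSocket as it is produced.
    // TCP retransmits lost packets, so a frame can arrive late but never
    // goes missing. encodeFrame() is a hypothetical codec hook.
    declare function encodeFrame(pcm: Float32Array): Uint8Array;

    const ws = new WebSocket("wss://example.com/audio");
    ws.binaryType = "arraybuffer";

    const ctx = new AudioContext({ sampleRate: 48000 });
    const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
    const source = ctx.createMediaStreamSource(mic);

    // Deprecated but keeps the sketch short.
    const proc = ctx.createScriptProcessor(1024, 1, 1);
    proc.onaudioprocess = (e) => {
      const frame = encodeFrame(e.inputBuffer.getChannelData(0));
      if (ws.readyState === WebSocket.OPEN) ws.send(frame);
    };
    source.connect(proc);
    proc.connect(ctx.destination);
    ```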

    • You ultimately still need a jitter buffer large enough to absorb retransmissions; otherwise you've got stuttering audio. And dynamically adjusting that jitter buffer is hard.
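
      For a sense of what "dynamically adjusting" involves: playout delay is usually driven by a running jitter estimate along the lines of RFC 3550, roughly like this sketch (the x4 multiplier is a conventional tuning knob, not a standard):

      ```ts
      // Running interarrival-jitter estimate in the style of RFC 3550
      // §6.4.1; all timestamps in milliseconds.
      class JitterEstimator {
        private jitter = 0;
        private lastTransit: number | null = null;

        // Call once per received packet.
        update(sendTs: number, arrivalTs: number): void {
          const transit = arrivalTs - sendTs;
          if (this.lastTransit !== null) {
            const d = Math.abs(transit - this.lastTransit);
            this.jitter += (d - this.jitter) / 16; // EMA, gain 1/16
          }
          this.lastTransit = transit;
        }

        // Buffer a few multiples of the estimate, floored at one frame.
        targetDelayMs(frameMs = 20): number {
          return Math.max(frameMs, 4 * this.jitter);
        }
      }
      ```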

      3 replies →

  • Human spoken conversation doesn’t really work like file buffering.

    People can tolerate missing words surprisingly well. If a phrase is slightly clipped, masked by noise, or dropped, the listener can often infer it from context. That happens constantly in real speech.

    But pauses and stalls are much more damaging. A sudden freeze in the middle of speech breaks turn-taking, timing, and attention. It feels like the speaker stopped thinking, the connection died, or the system got stuck.

    For voice UX, a tiny omission is often less harmful than a perfectly complete sentence that freezes halfway.

> > …but as a user, I would much rather wait an extra 200ms for my slow/expensive prompt to be accurate

> This is the opposite of the feedback I get. Users want instant responses.

I am skeptical that you are getting feedback that users prefer instant wrong results to 200ms-lag correct results.

Deeply skeptical!

  • Oh, I can absolutely believe it. Humans are deeply irrational, especially about things that mess about in time frames too short for our conscious thought processes to kick in. Instant but confident sounding (and confident sounding because it's instant) will beat slower every time. You don't know which is correct until a long time after you've made a decision to trust it, or whether you like it.

    • > Instant but confident sounding (and confident sounding because it's instant) will beat slower every time.

      Sure, but I am skeptical that users are actually saying "I prefer wrong answers over lag", which is what the post I responded to implied.

      This is different from users saying "I prefer quick answers to laggy answers", which is what I presume they may have said.

      To actually settle this, the feedback must answer the question "Do you want wrong answers quickly or correct answers with an added 0.2 second delay?" because, well, those are the only two options right now.

      1 reply →

  • 100% agree. Sounds like they're either asking the wrong questions, or quoting answers selectively to suit this argument.

  • I would be punching my phone if the stupid network caused a wrong prompt and the LLM sent me unrelated answers. Correctness should be foundational no matter what; then improve the latency as best as possible. We all understand that if the network is bad, latency can't be guaranteed, but correctness should be.

  • Especially since 200ms is the rule of thumb for things still feeling "instant" in UX terms. That's a rounding error compared to the actual minutes I regularly wait for an LLM to finish its bloody thinking, refreshing through several "we're experiencing heavy load" errors.

> You also push complexity onto clients. The simplicity of WebRTC (createOffer -> setRemoteDescription) is what lets people onboard easily.

WebRTC is complex, even if it's a library (even if it's a library built into the browser they're already using). For a client/server voice interaction, I don't see why you would willingly use it. Ship voice samples over something else; maybe borrow some jitter buffer logic for playback.

My job currently involves voice and video conferencing and 1:1 calls, and WebRTC is so much complexity... it got our product going quickly, but when it does unreasonable things, it's a challenge to fix, even though we maintain our own fork for our clients.

I could write an enormous rant about TURN [1]. But all of the webrtc protocol suite is designed for an internet that doesn't exist.

[1] TURN should allocate a rendezvous id rather than an ephemeral port when the TURN client requests an allocation. Then their peer would connect to the TURN server on the service port and request a connection to the rendezvous id, without needing the client to know the peer's address and add a permission. It would require less communication to get to an end-to-end relayed connection. Advanced clusters could encode routing hints in the id so the client and peer could each contact a TURN server local to them and the servers could hook things up; less advanced clusters would need to share the TURN server IP and service port(s) along with the id.
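
To make that flow concrete, the message shapes might look something like this (entirely hypothetical; nothing here is in RFC 8656, it just illustrates the proposal above):

```ts
// Hypothetical wire shapes for the proposed rendezvous-id TURN flow.

// 1. Client asks its TURN server for an allocation and receives an
//    opaque rendezvous id instead of an ephemeral relayed port.
interface AllocateResponse {
  rendezvousId: string; // advanced clusters could encode locality here
}

// 2. The id travels to the peer over signaling. The peer connects to
//    the TURN service port and asks to be joined to that id; no peer
//    address or CreatePermission round trip is needed.
interface ConnectRequest {
  rendezvousId: string;
}

// 3. The server (or cooperating servers) splices the two connections
//    into one relayed path.
interface ConnectResponse {
  ok: boolean;
}
```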

> You also don't want to send faster than real-time. If the user interrupts the model you just wasted a bunch of bandwidth sending 3 minutes of audio (but only played 10 seconds)

You only need to send ~1 second at a time. There's no reason to send 20ms or 10 min at a time. Both are stupid.
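
A sketch of that pacing policy (playedMs() is a hypothetical feedback signal reporting the client's playback position):

```ts
// Pace TTS output: stay at most ~1 s ahead of client playback.
// playedMs() is a hypothetical client-feedback signal; frames carry
// 20 ms of encoded audio each.
const MAX_AHEAD_MS = 1000;
const FRAME_MS = 20;

async function paceSend(
  frames: AsyncIterable<Uint8Array>,
  send: (f: Uint8Array) => void,
  playedMs: () => number,
): Promise<void> {
  let sentMs = 0;
  for await (const frame of frames) {
    // Block while the client already has a full second buffered.
    while (sentMs - playedMs() >= MAX_AHEAD_MS) {
      await new Promise((r) => setTimeout(r, FRAME_MS));
    }
    send(frame);
    sentMs += FRAME_MS;
  }
}
```

If the user interrupts, you've buffered at most a second of wasted audio instead of three minutes.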

> …but as a user, I would much rather wait an extra 200ms for my slow/expensive prompt to be accurate

I disagree with this SO strongly. I find the conversational voice mode to be a game changer because you can actually have an almost normal conversation with it. I'd be thrilled if they could shave off another 50-100ms of latency, and I might stop using it if they added 200ms. If I want deep research I'll use text and carefully compose my prompt; when I'm out and about I want to have a conversation with the Star Trek computer.

Interestingly I'm involved with a related effort at a different tech company and when I voiced this opinion it was clear that there was plenty of disagreement. This still surprises me since it seems so obvious to me that conversational fluidity is the number one most important feature.

  • > when I'm out and about I want to have a conversation with the Star Trek computer.

    But you’re not. And you won’t. You’ll never have a conversation with the Star Trek computer while you continue to place anything else above accuracy. Every time I see someone comparing LLMs to the Star Trek computers, it seems to be someone who doesn’t understand that correctness was their most important feature. I’m starting to get the feeling people making that comparison never actually watched or understood Star Trek.

    A computer which gives you constant bullshit is something only the lowest of the Ferengi would try to sell.

    > This still surprises me since it seems so obvious to me that conversational fluidity is the number one most important feature.

    It’s not. It absolutely is not and will never be. Not unless all you’re looking for is affirmation, companionship, titillation. I suggest looking for that outside chat bots.

  • To clarify, I meant waiting an extra 200ms if the alternative was dropping part of the prompt. During periods of zero congestion, the latency would be the same.

  • It is very important, the low latency.

    I prompt orchestrations most of the day, and am very particular about the fidelity of my context stack.

    Yet I’ve used advanced voice mode on ChatGPT via the iOS app a lot. And I have not had a problem with it understanding my requests or my side of the conversation.

    I have looked at the dictation of my side and seen that it has blatant mistakes, but I think the models have overcome that the same way they do with conference-audio STT transcripts.

    I have had times where the ~sandbox of those conversations, and their far more limited ability to build a useful corpus of context via web searches or by accessing prior conversation content, has been a problem.

    The biggest problem I have had with adv voice was when I accidentally set the personality to some kind of non-emotional setting. (The current config seems much more nuanced.)

    The AI who normally speaks with relative warmth and easy going nature turned into an emotionless and detached entity.

    It was unable to explain why it was acting this way. I suspect the low latency did a disservice there, because when it was paired with something that adversarial, it was deeply troubling.

Delivery of first phoneme and delivery of the important information don't have to be coupled. Politicians on TV get very good at this particular trick, they've got a set of stock phrases which basically fill time while their brain gets in gear. We just need something to fill the gap so our System 1 doesn't lose confidence in the interaction.

> This is the opposite of the feedback I get. Users want instant responses.

Did they really say they prefer a fast response over an accurate response?