Comment by maxbond
7 months ago
It's the term used in frontend dev. It is actually a little worse than you're imagining, because we're not sampling, we're receiving callbacks (so more analogous to interrupts than sampling in a loop), e.g. the oninput callback. I've used it to implement auto save without making a localStorage call on every key press, for example.
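A minimal trailing-edge sketch of that autosave case (the element id, storage key, and 250 ms delay are all made up for illustration):

```ts
// Trailing-edge debounce: every new event cancels the pending call,
// so fn only fires once events stop arriving for delayMs.
function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  delayMs: number
): (...args: Args) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Hypothetical autosave: write to localStorage only when typing pauses.
const saveDraft = debounce((text: string) => {
  localStorage.setItem("draft", text);
}, 250);

const editor = document.querySelector<HTMLTextAreaElement>("#editor")!;
editor.addEventListener("input", () => saveDraft(editor.value));
```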
I think it makes sense if you view it from a control theory perspective rather than an embedded perspective. The mechanics of the UI (be that a physical button or text input) create a noisy signal. Naively updating the UI on that signal would create jank. So we apply some hysteresis to obtain a clean signal. In the same way that acting 50 times on a single button press is incorrect behavior, saving (or searching or what have you) 50 times while typing a single sentence isn't correct (or at least is undesired).
The example of 10ms is way too low though; anything less than 250ms seems needlessly aggressive to me. 250ms is still going to feel very snappy. If you're typing at 40-50wpm you'll average roughly 200-300ms between characters, and even fast bursts rarely get below 100ms, so 10ms is hardly debouncing anything.
Additionally, regardless of naming, debouncing is an accessibility feature for a surprisingly large portion of the population. Many users who grew up with double-click continue to attempt to provide this input on web forms, because it mostly works. Many more with motor control issues may struggle to issue just a single click reliably, especially on a touchscreen.
Holy moly, for years I've had in the back of my head this thought about why, earlier in my career, I'd see random duplicate form submissions on certain projects. Same form code and processing as other sites, and legitimate submissions too. Eventually we added more spam filtering and restrictions unrelated to these legitimate ones, but it was probably the double-click users causing those errant submissions. I'd never even have thought of those users. Fascinating
Users of GUI operating systems had been trained to double-click on icons representing applications or files in order to launch them.
If you make a web UI in which a button is styled via an icon image (or otherwise) to look like a launchable application or file, those users will double click on it.
If you make it look like a button, they won't; they were certainly not trained to double-click on [OK] or [Cancel] in an OK/Cancel dialog box, for instance!
Double clicking to launch an action on a file makes sense because you need single click for selecting it. There are things you can do with it other than launch, like dragging it to another location.
TL;DR: don't make buttons look like elements that can be selected and dragged somewhere?
--
Another reason: I've sometimes multiply clicked on some button-like thing in a web UI because, despite working fine, it gave no indication that it had received the click!
It was styled to look like a button in the unclicked state ... and that one image was all it had.
To detect that the action launched you have to look for clues in the browser status areas showing that something is being loaded. Those are often made unobtrusive these days. Gone are the days of Netscape's spinning planet.
When the user sees that a button doesn't change state upon being clicked, yet the normal cursor is seen (pointer, not hourglass or spinning thing or whatever) they assume that the application has become unresponsive due to a bug or performance problem, or that somehow the click event was dropped; maybe their button is physically not working or whatever.
Yes, it's something pretty much all UI frameworks end up implementing. The easiest way to do it is to simply disable the button on first click until the request is complete. This, of course, also prevents double submissions in cases where the user doesn't get enough feedback and clicks again to make sure something actually happened.
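Something like this sketch (the form, element ids, and endpoint are made up for illustration):

```ts
// Disable-until-done: the button ignores further clicks while the
// request is pending.
const form = document.querySelector<HTMLFormElement>("#signup")!;
const submit = document.querySelector<HTMLButtonElement>("#submit")!;

form.addEventListener("submit", async (event) => {
  event.preventDefault();
  submit.disabled = true; // double clicks now do nothing
  try {
    await fetch("/api/signup", { method: "POST", body: new FormData(form) });
  } finally {
    submit.disabled = false; // re-enable on success or failure
  }
});
```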
I've found that with a sensitive mouse, I unintentionally double click maybe 1 out of 100 times. It's infrequent enough that it rarely causes problems, but every once in a while Murphy's Law gets me. Definitely appreciate UIs that disable the submit button right after first click until it's done processing :-)
The thing about debouncing is that trying to debounce interrupts from input pins connected directly to e.g. switches is universally a bad idea.
Interrupt pins are weird. If the level changes too slowly, you might never see the interrupt trigger, or you'll see it trigger more than once. If the switch bounces back and forth many times at high frequency before settling, you might see interrupts for every bounce, or you might get nothing at all. (In summary, directly connecting a noisy switch to an interrupt pin means you can lose level transition events.)
To make this work reliably you either need to debounce in hardware before the signal reaches your interrupt pin, or forgo interrupts entirely in favour of polling the input pin at a fixed rate and keeping a state history.
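For illustration, the classic shift-register form of that polling approach, sketched in TypeScript for consistency with the rest of the thread (readPin() is a noisy stub standing in for a GPIO read; real firmware would run this off a timer interrupt):

```ts
// Shift-register debounce: poll at a fixed rate, keep the last 8 raw
// samples as a bit history, and only change the reported level once the
// signal has been stable for the whole window.
const readPin = (): 0 | 1 => (Math.random() < 0.1 ? 0 : 1); // noisy switch stub

let history = 0; // last 8 raw samples, newest in the low bit
let level = 0;   // the debounced level the rest of the code acts on

setInterval(() => {
  history = ((history << 1) | readPin()) & 0xff;
  if (history === 0xff) level = 1;      // 8 consecutive highs: settled high
  else if (history === 0x00) level = 0; // 8 consecutive lows: settled low
}, 1); // 1 ms poll; browsers clamp timers, real firmware uses a hardware timer
```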
So the term as it originates in electronics is now both misused and, if you translate it back to electronics, considered a terrible idea.
> 250ms is still going to feel very snappy
WTF no it won't.
For the kind of behaviors they are describing it would. An extra 250ms waiting for an app to load is a lot, but for something like the described autosave behavior, waiting for a 250ms pause in typing before autosaving or making a fetch call is pretty snappy.
What value would you recommend?
An office keyboard's own debouncing could delay a key press by 30 ms, and then the OS, software and graphics/monitor hardware would delay it by about as much again before the user could see the character on screen. So, indeed, 10 ms is much too low.
The delay between key press and sound starts to become noticeable at around 10ms when you play an electronic (musical) keyboard instrument.
At 20-30ms or more, it starts to make playing unpleasant (but I guess for text input it's still reasonable).
50ms+ and it starts becoming unusable or extremely unpleasant, even for low expectations.
I'm not sure how much the perception of delay (and the brain's processing lag) differs between audio and visual stimuli.
But that's just about the perceived snappiness for immediate interactions like characters appearing on screen.
For events that trigger some more complex visual reaction, I'd say everything below 25ms (or more, depending on context) feels almost instant.
Above 50ms you get into the territory where you have to think about optimistic feedback.
The point that most seem to miss here is that debouncing in FE is often about asynchronous and/or heavy work, e.g. fetching search suggestions or filtering a large, visible list.
Good UIs do a lot of work to provide immediate feedback while debouncing expensive work.
A typical example: when you type and your input grows with the same prefix, comboboxes don't always need to fetch; they can filter locally if the result set was already smallish.
If your combobox is more complex and more like a real search (and adding characters might add new results), this makes no sense – except as an optimistic update.
_Not_ debouncing expensive work can lead to jank though.
Type-ahead with an offline list of 1000+ search results can already be enough, especially when the suggestions are not just rows of text.
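To make that concrete, a sketch of the split between instant local filtering and debounced fetching (the element id, endpoint, and 250 ms delay are all illustrative):

```ts
// Instant local feedback, debounced expensive work.
const input = document.querySelector<HTMLInputElement>("#search")!;
let cached: string[] = []; // last suggestion set fetched from the server

function render(items: string[]): void {
  console.log(items); // stand-in for updating the dropdown
}

let timer: ReturnType<typeof setTimeout> | undefined;

input.addEventListener("input", () => {
  const q = input.value.toLowerCase();

  // Immediate: narrow the cached results on every keystroke.
  render(cached.filter((s) => s.toLowerCase().includes(q)));

  // Debounced: hit the network only once typing pauses.
  clearTimeout(timer);
  timer = setTimeout(async () => {
    const res = await fetch(`/api/suggest?q=${encodeURIComponent(q)}`);
    cached = await res.json();
    render(cached.filter((s) => s.toLowerCase().includes(q)));
  }, 250);
});
```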
> 10ms when you play an electronic (musical) keyboard instrument.
Sound travels 3.4 meters in that time; if your speakers are that far away in a live situation, there is your extra 10 ms.
No, correct debouncing of a hardware button should not add any delay to a single press. It's not wait-then-act, but rather act-then-wait-to-act-again. You're probably thinking of a polling interval (often exacerbated by having key switches wired in a matrix rather than one per pin).
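In software terms that's a leading-edge debounce, something like this sketch (windowMs would be tuned to the switch's settle time):

```ts
// Leading-edge debounce: fire on the first event immediately, then ignore
// repeats until events have stopped for windowMs. No latency is added to
// the first press.
function debounceLeading<Args extends unknown[]>(
  fn: (...args: Args) => void,
  windowMs: number
): (...args: Args) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    if (timer === undefined) fn(...args); // act on the first event at once
    clearTimeout(timer); // each repeat (bounce) extends the quiet window
    timer = setTimeout(() => (timer = undefined), windowMs);
  };
}
```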
The bouncing behavior itself adds delay. If it takes 30 ms for the switch to settle into a state where it is considered closed according to the debouncing algorithm's parameters, then that's what it is. The algorithm might be "sample every millisecond until 9 out of 10 samples show 'closed'". That imposes a minimum delay of 10 ms, and the maximum is whatever it takes for that criterion to be reached.
They describe that approach in another comment [1], so I take it to be a descriptive statement about low-end keyboards. Perhaps the engineers designing these keyboards view 30ms as an acceptable latency to prevent spurious key presses.
[1] https://news.ycombinator.com/item?id=44822183
Perhaps the people at MDN are 10x typists, with competition-grade gaming keyboards.
Why would e.g. saving after each keypress be janky from the UI perspective? These days disks can complete a write in 20 us. If you're typing at 0.1 seconds/character, you're going 5,000 times slower than the computer is capable of. If you have a 60 Hz monitor, it can block saving your work every frame and still be 99.9% idle. Even if you're making network requests each time, if the request finishes in 20 ms, you're still done 80 ms before the user presses the next button.
Local storage is a poor example because it updates in the background and wouldn’t necessarily change your UI much. But if a design calls for a search to be made while a user types that would get janky fast.
React in particular is data driven, so in the above example, if you make the API call on each keypress and save the result into state or whatever, the UI will update automatically. I can type 70 words per minute. Nobody wants the search results to update that fast. (Should we be building searches that work this way? Often you have no choice.) A slow network + a short search string + a not-top-of-the-line device like a cheap phone means a really janky experience. And even if it's not janky, it's a waste of your users' bandwidth (not everybody has unlimited) and an unnecessary drain on your server resources.
Even though we say "update as the user types", people type in bursts. There's no reason not to debounce it, and if you can make the debounce function composable, you can reuse it all over the place. It's a courtesy to the users and a good practice.
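One composable shape for that in React (useDebouncedValue isn't a built-in, just a common hand-rolled hook):

```ts
// A hypothetical composable hook: components keep rendering on every
// keystroke, but effects keyed on the debounced value only run after a pause.
import { useEffect, useState } from "react";

function useDebouncedValue<T>(value: T, delayMs: number): T {
  const [debounced, setDebounced] = useState(value);
  useEffect(() => {
    const timer = setTimeout(() => setDebounced(value), delayMs);
    return () => clearTimeout(timer); // restart the timer on every change
  }, [value, delayMs]);
  return debounced;
}

// Usage sketch:
//   const [query, setQuery] = useState("");
//   const debouncedQuery = useDebouncedValue(query, 250);
//   useEffect(() => { /* fetch results for debouncedQuery */ }, [debouncedQuery]);
```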
I type around 90-100 wpm and I appreciate that the Clementine player, for example, doesn't seem to have any delay except apparently when querying radio stations (which is described as "to be polite"). When searching your own library, it updates immediately (I'd be very surprised if the query weren't faster than my monitor refresh). My only perception is that it's fast. I don't know why anyone would see it as janky.
So it seems to me that it's entirely about not wasting resources/to be polite. In the limit where you have a computer that can do its work faster than your display refreshes, letting it do so seems to clearly make everything feel snappier.
No one's typing quickly on a phone anyway, and most people probably go a word at a time, and that word will come in slower than your debounce, so again there is no point in delaying it.
One thing another user pointed out is that your search does need to be stable. An exact substring match shouldn't randomly get bumped down as you type more.
A separate issue you might encounter with slow queries is that requests can get blocked behind each other, such that your new query is queued before the previous one has even been sent. In that case it makes sense to cancel the unsent one (and, if very expensive, perhaps even the sent ones), but I don't know that web browsers can tell you whether a request is queued or in-flight, or give you meaningful lifecycle hooks. Obviously normal programs have a lot more flexibility in how to handle this. But this is also not the case posited originally, which is low latency/fast processing.
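That said, AbortController covers most of this from the page's side, along the lines of this sketch (/api/search is a made-up endpoint):

```ts
// Browsers won't expose queued-vs-in-flight, but AbortController cancels a
// stale fetch either way. An aborted fetch rejects with an AbortError,
// which the caller should catch and ignore.
let controller: AbortController | undefined;

async function search(query: string): Promise<string[]> {
  controller?.abort(); // cancel the previous request, sent or not
  controller = new AbortController();
  const res = await fetch(`/api/search?q=${encodeURIComponent(query)}`, {
    signal: controller.signal,
  });
  return res.json();
}
```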
Introducing auto save into the discussion may have been confusing; you're both right that it wouldn't generally cause jank. Debouncing an auto save is more about not using resources unnecessarily, and it may help provide a better edit history, depending on how you've written the application.