
Comment by kazinator

6 months ago

Debouncing refers to cleaning up the signal from an opening and closing switch contact so that the cleaned signal matches the intended semantics of the switch action (e.g. one simple press of a button, not fifty pulses).

The analogy here is poor; reducing thrashing in those obnoxious search completion interfaces isn't like debouncing.

Sure, if we ignore everything about it that is not like debouncing, and we still have something left after that, then whatever is left is like debouncing.

One important difference is that if you have unlimited amounts of low latency and processing power, you can do a full search for each keystroke, filter it down to half a dozen results and display the completions. In other words, the more power you have, the less important it is to do any "debouncing".

Switch debouncing is not like this. The faster your processor is at sampling the switch, the more bounces it sees and consequently the more crap it has to clean up. Debouncing certainly does not go away with a faster microcontroller.

It's the term used in frontend dev. It is actually a little worse than you're imagining, because we're not sampling, we're receiving callbacks (so more analogous to interrupts than sampling in a loop), e.g. the oninput callback. I've used it for implementing auto-save without making a localStorage call on every key press, for example.

I think it makes sense if you view it from a control theory perspective rather than an embedded perspective. The mechanics of the UI (be that a physical button or text input) create a noisy signal. Naively updating the UI on that signal would create jank. So we apply some hysteresis to obtain a clean signal. In the same way that acting 50 times on a single button press is incorrect behavior, saving (or searching or what have you) 50 times while typing a single sentence isn't correct (or is at least undesired).

The example of 10ms is way too low though, anything less than 250ms seems needlessly aggressive to me. 250ms is still going to feel very snappy. I think if you're typing at 40-50wpm you'll probably have an interval of 100-150ms between characters, so 10ms is hardly debouncing anything.
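
In code, the frontend version of this is usually just a few lines. A minimal sketch (the 250 ms default is simply the delay suggested above, and `runSearch` is a hypothetical callback):

```javascript
// Trailing-edge debounce: fn runs only after delayMs of silence.
function debounce(fn, delayMs = 250) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);                              // a new event cancels the pending call
    timer = setTimeout(() => fn.apply(this, args), delayMs);
  };
}

// Usage sketch: only act once typing has paused.
// input.addEventListener('input', debounce(e => runSearch(e.target.value)));
```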

  • Additionally, regardless of naming, debouncing is an accessibility feature for a surprisingly large portion of the population. Many users who grew up with double-click continue to attempt to provide this input on web forms, because it mostly works. Many more with motor control issues may struggle to issue just a single click reliably, especially on a touchscreen.

    • Holy moly, for years I've had in the back of my head this question of why, earlier in my career, I'd see random duplicate form submissions on certain projects. Same form code and processing as other sites, and legitimate submissions too. Eventually we added more spam filtering and restrictions unrelated to these, but it was probably the double-click users causing those errant submissions. I'd never even have thought of those users. Fascinating


    • I've found that with a sensitive mouse, I unintentionally double click maybe 1 out of 100 times. It's infrequent enough that it rarely causes problems, but every once in a while Murphy's Law gets me. Definitely appreciate UIs that disable the submit button right after first click until it's done processing :-)

  • The thing about debouncing is that trying to debounce via interrupts from input pins connected directly to e.g. switches is universally a bad idea.

    Interrupt pins are weird. Does the level change too slowly? You might never see the interrupt trigger, or you'll see it trigger more than once. Does the signal switch back and forth many times at a high frequency before settling? Again, you might see interrupts for every transition or you might get nothing at all. (In summary, directly connecting a noisy switch to an interrupt pin means you can lose level-transition events.)

    To make this work reliably you either need to debounce in hardware before it reaches your interrupt pin or forego interrupts entirely in favour of polling the input pin at a specific speed and keeping a state history.

    So the term as it originates in electronics is now misused, and if you do translate it back to electronics, the technique it would describe is considered a terrible idea.
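
The polling approach described above can be sketched generically (in JavaScript rather than firmware C, purely for illustration; the history length of 8 samples is an arbitrary assumption):

```javascript
// Poll the raw input at a fixed rate and keep a sliding history; the debounced
// state only flips once historyLen consecutive samples agree, so bounces never
// make it through.
function createPollDebouncer(historyLen = 8) {
  const history = [];
  let state = 0;
  return function poll(rawLevel) {
    history.push(rawLevel);
    if (history.length > historyLen) history.shift();  // keep a sliding window
    if (history.length === historyLen && history.every(s => s === history[0])) {
      state = history[0];                              // stable: accept the new level
    }
    return state;
  };
}
```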

  • > 250ms is still going to feel very snappy

    WTF no it won't.

    • For the kind of behaviors they are describing it would. An extra 250ms waiting for an app to load is a lot, but for something like the described autosave behavior, waiting for a 250ms pause in typing before autosaving or making a fetch call is pretty snappy.

  • An office keyboard's own debouncing could delay a key press by 30 ms, and then the OS, software and graphics/monitor hardware delay it just as much before the user can see the character on screen. So, indeed, 10 ms is much too low.

    • The delay between key press and sound starts to become noticeable at around 10ms when you play an electronic (musical) keyboard instrument.

      At 20-30ms or more, it starts to make playing unpleasant (but I guess for text input it's still reasonable).

      50ms+ and it starts becoming unusable or extremely unpleasant, even for low expectations.

      I'm not sure how much the perception of delay and the brain lag differ between audio and visual stimuli.

      But that's just about the perceived snappiness for immediate interactions like characters appearing on screen.

      For events that trigger some more complex visual reaction, I'd say everything below 25ms (or more, depending on context) feels almost instant.

      Above 50ms you get into the territory where you have to think about optimistic feedback.

      The point that most seem to miss here is that debouncing in FE is often about asynchronous and/or heavy work, e.g. fetching search suggestions or filtering a large, visible list.

      Good UIs do a lot of work to provide immediate feedback while debouncing expensive work.

      A typical example: when you type and your input becomes longer with the same prefix, comboboxes don't always need to fetch; they can filter locally if the result set was already smallish.

      If your combobox is more complex and more like a real search (and adding characters might add new results), this makes no sense – except as an optimistic update.

      _Not_ debouncing expensive work can lead to jank though.

      Type-ahead with an offline list of 1000+ search results can already be enough, especially when the suggestions are not just rows of text.
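
The prefix-filtering idea a few paragraphs up can be sketched as follows (a hypothetical helper; the injected `fetchAll`, the 200-item cutoff, and substring matching are all assumptions):

```javascript
// Reuse the previous result set when the new query merely extends the old one
// and the set is small enough to filter locally; otherwise fall back to a fetch.
function createSuggester(fetchAll, maxLocal = 200) {
  let lastQuery = null;
  let lastResults = null;
  return async function suggest(query) {
    const canFilterLocally = lastQuery !== null &&
      query.startsWith(lastQuery) &&
      lastResults.length <= maxLocal;
    if (canFilterLocally) {
      lastResults = lastResults.filter(item => item.includes(query));
    } else {
      lastResults = await fetchAll(query);   // the expensive path
    }
    lastQuery = query;
    return lastResults;
  };
}
```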


    • No, correct debouncing of a hardware button should not add any delay to a single press. It's not wait-then-act, but rather act-then-wait-to-act-again. You're probably thinking of a polling interval (often exacerbated by having key switches wired in a matrix rather than one per pin).


  • Why would e.g. saving after each keypress be janky from the UI perspective? These days disks can complete a write in 20 us. If you're typing at 0.1 seconds/character, you're going 5,000 times slower than the computer is capable of. If you have a 60 Hz monitor, it can block saving your work every frame and still be 99.9% idle. Even if you're making network requests each time, if the request finishes in 20 ms, you're still done 80 ms before the user presses the next button.

    • Local storage is a poor example because it updates in the background and wouldn’t necessarily change your UI much. But if a design calls for a search to be made while a user types that would get janky fast.

      React in particular is data driven, so in the above example, if you make the API call on each keypress and save it into state or whatever, the UI will update automatically. I can type 70 words per minute. Nobody wants the search results to update that fast. (Should we be building searches that work this way? Often you have no choice.) A slow network + a short search string + a not-top-of-the-line device like a cheap phone means a really janky experience. And even if it's not janky, it's a waste of your users' bandwidth (not everybody has unlimited) and an unnecessary drain on your server resources.

      Even though we say “update as the user types” people type in bursts. There’s no reason not to debounce it, and if you can make the debounce function composable, you can reuse it all over the place. It’s a courtesy to the users and a good practice.
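
Combining the debounce with cancellation of stale in-flight requests is a common pattern for this; a sketch under assumptions (the `/api/search` endpoint and 250 ms delay are placeholders, and `fetchFn` is injected so nothing here is tied to the browser's `fetch`):

```javascript
// Debounce the search, and abort any request a newer query has superseded,
// so slow responses can never overwrite fresher results.
function createSearch(fetchFn, delayMs = 250) {
  let timer = null;
  let controller = null;
  return function search(query, onResults) {
    clearTimeout(timer);
    timer = setTimeout(async () => {
      if (controller) controller.abort();          // cancel the stale request
      controller = new AbortController();
      try {
        const res = await fetchFn(
          `/api/search?q=${encodeURIComponent(query)}`,
          { signal: controller.signal }
        );
        onResults(await res.json());
      } catch (err) {
        if (err.name !== 'AbortError') throw err;  // aborts are expected
      }
    }, delayMs);
  };
}
```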


I agree that this is a bad analogy.

I've programmed my own keyboards, mice and game controllers. If you want the fastest response time, then you'd make the debouncing asymmetric: report the press ("Make") on the first leading edge, and don't report the release ("Break") until the signal has been stable for n ms after a trailing edge. That is the opposite of what's done in the blog article.

Having a delay on the leading edge is for electrically noisy environments, such as among electric motors and a long wire from the switch to the MCU, where you could potentially get spurious signals that are not from a key press. Debouncing could also be done in hardware without delay, if you have a three-pole switch and an electronic latch.
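
The asymmetric scheme described above can be sketched as a small state machine. Sample levels and timestamps are passed in explicitly to keep it testable, and the 5 ms hold-off is an arbitrary assumption:

```javascript
// Report "make" on the first rising edge immediately; report "break" only once
// the line has stayed low for holdMs, so release bounce is absorbed.
function createKeyDebouncer(holdMs = 5) {
  let pressed = false;
  let lowSince = null;
  return function sample(level, tMs) {
    if (level === 1) {
      lowSince = null;                      // any high reading cancels the release timer
      if (!pressed) { pressed = true; return 'make'; }
      return null;
    }
    if (!pressed) return null;
    if (lowSince === null) { lowSince = tMs; return null; }
    if (tMs - lowSince >= holdMs) { pressed = false; return 'break'; }
    return null;
  };
}
```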

A better analogy would perhaps be "Event Compression": coalescing multiple consecutive events into one, used when producer and consumer are asynchronous. Better but not perfect.

Debouncing is a term of art in UI development and has been for a long time. It is analogous to, but of course not exactly the same as, debouncing in electronics.

It's also worth mentioning that real debouncing doesn't always have to depend on time when you have an analog signal. Instead you could have different thresholds for going from state A to B vs. going from B to A, with enough distance between those thresholds that you won't switch back and forth during an event. This can even be implemented physically in the switch itself by having separate ON and OFF contacts.
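
That threshold scheme is essentially a Schmitt trigger, and it sketches out in a few lines (the 0.3/0.7 thresholds here are arbitrary assumptions):

```javascript
// Hysteresis: switch A -> B only above the high threshold and B -> A only below
// the low one; noise inside the dead band can never toggle the state.
function createSchmitt(low = 0.3, high = 0.7) {
  let state = 'A';
  return function update(value) {
    if (state === 'A' && value > high) state = 'B';
    else if (state === 'B' && value < low) state = 'A';
    return state;
  };
}
```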

Actually I think it's pretty similar to your example. The "intended semantics" of the search action in that sort of field are to search for the text you enter – not to act on the side-effects of every in-progress partial completion.

Yes, it's not an exact comparison (hence analogy) – but it's not anything worth getting into a fight about.

  • Yeah, I don't get how this thread is at the top.

    You debounce a physical switch because it makes contact multiple times before settling in the contacted position, e.g. you might wait until it's settled before acting, or you act upon the first signal and ignore the ones that quickly follow.

    And that closely resembles the goal and even implementation of UI debouncing.

    It also makes sense in a search box because there you have the distinction between intermediate vs. settled state. Do you act on what might be intermediate states, or do you try to assume settled state?

    Just because it might have a more varied or more abstract meaning in another industry doesn't mean it's a bad analogy, even though Javascript is involved, sheesh.

  • The user intent is usually to get to what they are looking for as quickly as possible. If you intentionally introduce delays by forcing them to enter the complete query or pause to receive intermediate results then you are slowing that down.

    • > The user intent is usually to get to what they are looking for as quickly as possible.

      Yes, and returning 30,000 results matching the "a" they just typed is not going to do that. "Getting the desired result fastest" probably requires somewhere between 2 and 10 characters, context-dependent.

> One important difference is that if you have unlimited amounts of low latency and processing power, you can do a full search for each keystroke,

But you don't want that, as it's useless. Until the user has actually finished typing, they're going to have more results than they can meaningfully use – especially since the majority will be irrelevant and just get in the way of real results.

The signal in between is actually, really not useful – at least not on the first try, when the user is not aware of what's in the data source and how they can hack the search query to get their results with minimal input.

  • No one wants to see results for the letter "a", no one wants their database processing that search, and updating the UI while you're typing can be really distracting.

      > No one wants to see results for the letter "a"

      Don't make assumptions about what the user may or may not want to search for.

      E.g. in my music collection I have albums from both !!! [1] and Ø [2]. I've encountered software that "helpfully" prevented me from searching for these artists, because the developers thought that surely no one would search for such terms.

      _______

      [1] https://www.discogs.com/artist/207714-!!! ← See? The HN link highlighter also thinks that URLs cannot end with !!!.

      [2] https://www.discogs.com/artist/31887-Ø


    • I don't care if there are results for the letter "a", if they are instant.

      Don't become unresponsive after one key press just to search for results. If the search impacts responsiveness, you need a hold-off time before kicking it off, so that a longer prefix/infix can be gathered, which will reduce the search space and improve its relevance.

  • > as it's useless

    Be that as it may, the performance side of it becomes irrelevant. The UI responds to the user's keystrokes instantly, and when they type what they had intended to type, the search suggestions are there.

    Switch debouncing does not become irrelevant with unlimited computing power.

In electronics, I think we'd use a latch, so it switches high, and stays high despite input change.

That doesn't really apply to a search box, where it's more of a delayed event: fire only if no new event arrives during a specific time window, keeping only the last event.

  • Switches usually open after closing, so your latch arrangement has to figure out how to unlatch.

    At which point you are doing debouncing: distinguishing an intentional switch opening from the bounce that continued after you latched. You need some hold-off time or something.

    Also, switch contacts bounce when opening!

    A latch could be great for some kind of panic button which indicates a state change that continues to be asserted when the switch opens (and is reset in some other way).

  • > In electronics, I think we'd use a latch, so it switches high, and stays high despite input change.

    RC circuits are more typical: you want to filter out high-frequency pulses (indicative of bouncing) and only keep the settled/steady-state signal. A latch would be too eager, I think.

Thank you for this comment! Suddenly 'bouncing' makes total sense as a mental image when before it only vaguely tracked in some abstract way about tons of tiny events bouncing around and triggering things excitedly until you contain them with debounce() :-)

Come to think of it, throttle is the much easier-to-understand analogy.

  • Throttling is a different thing though. Debouncing is waiting until the input has stopped occurring so it can run on the final result; throttling is running immediately on the first input and blocking further input for a short duration.

    • thanks - I know :-)

      Ambiguous phrasing! Just saying that of the two, throttling is the one with the more well-known physical analogy.
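
For contrast with the debounce discussed above, a minimal leading-edge throttle looks like this (a sketch; the interval is an arbitrary assumption):

```javascript
// Throttle: run immediately on the first call, then ignore further calls until
// intervalMs has elapsed. Compare debounce, which instead waits for silence.
function throttle(fn, intervalMs) {
  let last = -Infinity;
  return function (...args) {
    const now = Date.now();
    if (now - last >= intervalMs) {
      last = now;
      fn.apply(this, args);
    }
  };
}
```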

> if you have unlimited amounts of low latency and processing power

And battery, or at least enough air conditioning to cool down the desktop because of those extraneous operations, right?

It's a word borrowed for a similar concept. This is so common in software that it is basically the norm. There are hundreds of such analogy-based terms in software.

I like that you said obnoxious... it is assumed this behaviour is what people want, rather than just pressing a button or hitting enter when ready.

Search is a bad example there; a better one would have been clicking a button to add an item to a list, or pressing a shortcut key to do so, where you want to submit that item only once even if someone frantically clicks the button because they're feeling impatient.

  • No, you should not filter user input like this. Keep user interfaces simple and predictable.

    If it really only makes sense to perform the action once, then disable/remove the button on the first click. If it makes sense to click the button multiple times, then there should be no limit to how fast you can do that. It's really infuriating when crappy software drops user input because it's too slow to process one input before the next. There's a reason why input these days comes in events that are queued, and we aren't still checking whether the key is up or down in a loop.

    • Removing the button from the DOM after click is maybe the worst advice I’ve ever heard for web UX

[flagged]

  • What a bitter lens with which to view the world.

    The reality is that language evolves all the time through specialized use and adoption, web development is no different. Every profession and craft builds a pattern language from both borrowed and new terms.

    You can explain the phenomenon without patronizing and insulting anyone who works on frontend code.

  • You seem to have no idea of how language works, and has worked for as long as there has been language.