USB Cheat Sheet

4 years ago (fabiensanglard.net)

Some of the entries seem incorrect: "USB 3.2" (USB 3.2 Gen 2x2) and "USB 4" (USB4 Gen 2×2) should have the same nominal data rate of 2500 MB/s; they're 2 lanes (x2) of 10 Gb/s. Though they are apparently coded differently electrically, so they're distinct protocols.

The tables would benefit from mentioning the coding (8/10 or 128/132) as IMO it's one of the most confusing bits when you see the effective data rates:

* USB 3.2 Gen 1x2 has a nominal data rate of 10G (2 lanes at 5G) with a raw throughput of 1GB/s (effective data rates topping out around 900MB/s)

* USB 3.2 Gen 2x1 has the same nominal data rate of 10G (1 lane at 10G) but a raw throughput of 1.2GB/s (and effective data rates topping out around 1.1GB/s)

The difference is that Gen 1x uses the "legacy" 8b/10b encoding, while Gen 2x uses the newer 128b/132b encoding, and thus has a much lower overhead (around 3%, versus 20%).
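The coding math is easy to sanity-check; here's a quick Python sketch (my own numbers, just plugging in the lane rates and codings named above):

```python
# Raw throughput after line coding, in decimal MB/s.
def raw_throughput_MBps(lanes, gbps_per_lane, data_bits, coded_bits):
    line_rate_gbps = lanes * gbps_per_lane
    return line_rate_gbps * (data_bits / coded_bits) * 1000 / 8

# USB 3.2 Gen 1x2: 2 lanes at 5 Gb/s, "legacy" 8b/10b coding (20% overhead)
print(raw_throughput_MBps(2, 5, 8, 10))      # 1000.0 -> the ~1 GB/s above
# USB 3.2 Gen 2x1: 1 lane at 10 Gb/s, 128b/132b coding (~3% overhead)
print(raw_throughput_MBps(1, 10, 128, 132))  # ~1212 -> the ~1.2 GB/s above
```

Effective rates are lower still once protocol overhead is subtracted, which is where the ~900 MB/s and ~1.1 GB/s figures come from.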

  • Thank you for noticing these issues, I have updated the table.

    I would be happy to improve it and add encoding. I am surprised by some of the summary entries on Wikipedia (https://en.wikipedia.org/wiki/USB4). Looks like USB4 "reverted" to 128b/132b. Is that accurate?

    • 128b/132b is the more efficient coding. The closer to 1 the fraction is, the less coding overhead it has, and 128/132 is larger than 8/10.

      2 replies →

    • FYI, the last two columns in table 2 are a bit confusing: footnote c says "real life sequential speed", but then the last column title is "real life", so it's unclear what the difference is.

  • He goes off the rails earlier than that, by saying that USB 2.0 is "also known as" Hi-speed. HS is only one data rate supported by the USB 2.0 standard; it incorporates both full speed from the earlier standard and low speed, which isn't mentioned at all.

    • That's more of an approximation matching how, frankly, most people think of the specs: yes, USB 2.0 supersedes 1.1 entirely, but everyone will think of "full speed" and "low speed" as USB 1, which USB 2.0 supports for backwards compatibility.

      That's also why USB 3.1 and 3.2's rebranding of previous versions is so confusing and a pain in the ass to keep straight: USB 3.2 Gen 1x1 is USB 3.1 Gen 1 is USB 3.0 (ignoring the USB 2.0 backwards compatibility).

    • Right, per his chart "Full Speed" should be known as USB 1.1 Full Speed and USB 2.0 Full Speed.

  • Also should be:

    12 Mbps -> 1.43 MiB/s -> 1.5 MB/s

    480 Mbps -> 57 MiB/s -> 60 MB/s

    5000 Mbps (5 Gbps) -> 596 MiB/s -> 625 MB/s

    10000 Mbps (10 Gbps) -> 1192 MiB/s -> 1250 MB/s

    20000 Mbps (20 Gbps) -> 2384 MiB/s -> 2500 MB/s

    40000 Mbps (40 Gbps) -> 4768 MiB/s -> 5000 MB/s
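    The list above is plain unit conversion at the nominal signaling rate, with no line coding or protocol overhead factored in; as a sketch:

```python
# Convert a nominal rate in Mb/s to decimal MB/s and binary MiB/s.
def mbps_to_mb_and_mib(mbps):
    mb_per_s = mbps / 8                  # decimal megabytes per second
    mib_per_s = mbps * 1e6 / 8 / 2**20   # binary mebibytes per second
    return mb_per_s, mib_per_s

for mbps in (12, 480, 5000, 10000, 20000, 40000):
    mb, mib = mbps_to_mb_and_mib(mbps)
    print(f"{mbps} Mbps -> {mib:.2f} MiB/s -> {mb:g} MB/s")
```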

    • No, some of your rates are wrong.

      The so-called 5 Gb/s USB has a data rate of 4 Gb/s.

      The marketing data rates for Ethernet are true, i.e. 1 Gb/s Ethernet has a 1 Gb/s data rate, but a 1.25 Gb/s encoded bit rate over the cable.

      The marketing data rates for the first 2 generations of PCIe, for all 3 generations of SATA, and for USB 3.0 a.k.a. "Gen 1" of later standards, are false, being advertised as 25% larger (because 8 data bits are encoded into 10 bits sent over the wire, which does not matter for the user).

      All these misleading marketing data rates were introduced by Intel, which did not follow the rules used in vendor-neutral standards, like Ethernet.

      So PCIe 1 is 2 Gb/s, PCIe 2 & USB 3.0 are 4 Gb/s and SATA 3 is 4.8 Gb/s.

      So USB "5 Gbps" => 500 MB/s (not 625 MB/s), and after accounting for protocols like "USB Attached SCSI Protocol", the maximum speed that one can see for a USB SSD on a "5 Gbps" port is between 400 MB/s and 450 MB/s.

      The same applies for a USB Type C with 2 x 5 Gb/s links.

      As other posters have already mentioned, USB 3.1 a.k.a. the "Gen 2" of later standards has introduced a more efficient encoding, so its speed is approximately 10 Gb/s.

      The "10 Gbps" USB is not twice as fast as the "5 Gbps" USB, it is about 2.5 times faster, and this is important to know.
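      The "2.5 times" figure follows directly from the two encodings mentioned above; a quick sanity check:

```python
# Usable data rate after line coding, in Gb/s.
five_gbps_data = 5 * 8 / 10      # "5 Gbps" with 8b/10b: 4.0 Gb/s of data
ten_gbps_data = 10 * 128 / 132   # "10 Gbps" with 128b/132b: ~9.7 Gb/s of data

print(ten_gbps_data / five_gbps_data)  # ~2.42, i.e. roughly 2.5x, not 2x
```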

      7 replies →

No USB On-The-Go (https://en.wikipedia.org/wiki/USB_On-The-Go) or Wireless USB (https://en.wikipedia.org/wiki/Wireless_USB)?

USB is a triumph of marketeers over engineers. All these things are called USB because USB sells (see also: Bluetooth).

  • I don't know anything about wireless USB but USB OTG is called USB because it is USB. It's not some totally unrelated protocol.

    • I thought OTG was just changing up where the host controller is sitting in the USB relationship? So you can have a device that acts like a client when hooked to a computer, or a master when hooked to a thumb drive/webcam/etc...?

      8 replies →

> May 05, 2025

The article is dated May 5, 2025. I've long been wondering about the future of USB.

  • USB 4.2 (later renamed to USB 3.2 gen 2 Mk. 1) comes with built in time traveling. They just keep adding features to the protocol and making it complicated.

  • It's a form of SEO. Google promotes "fresh" content, so if it sees a date less than a year ago it often assumes the content is better. Normally you will see this abused by crappy content mills using a plugin that constantly updates the date on their garbage.

    Putting a static date from 3 years in the future seems like a quick and dirty hack to do the same thing.

Fun fact: USB 2.0 webcams have existed for over 10 years. USB 2.0 tops out at 60 MB/s.

A pixel of an image is 3 bytes. A 1920x1080 FullHD image is 6.2 MB. At 30 frames per second, one second of FullHD video is 186 MB. How did they do that?

Answer: frames are transferred as JPEG files. Even a cheap $15 webcam is a tiny computer (with a CPU, RAM, etc), which runs a JPEG encoder program.
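The arithmetic above can be redone in a few lines (the 60 MB/s figure is the nominal USB 2.0 budget from the signaling rate, not a measured rate):

```python
# Uncompressed 1080p30 RGB vs. the nominal USB 2.0 budget.
width, height, bytes_per_pixel, fps = 1920, 1080, 3, 30

frame_bytes = width * height * bytes_per_pixel  # 6,220,800 B ~= 6.2 MB
stream_bytes = frame_bytes * fps                # ~186.6 MB/s of raw video
usb2_bus = 60e6                                 # 480 Mbps = 60 MB/s nominal

print(stream_bytes / usb2_bus)  # ~3.1x over budget -> hence on-camera JPEG
```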

  • Most webcams, especially 10 years ago, are not 1080p, or even 60fps. Many aren't even 720p. 1280 x 720 x 3 bytes x 30 fps = ~80MB/second. 480p @ 30 fps = ~26MB/s. That is how many webcams can get by without hardware JPEG/H264 encoding.

    4K @ 60fps = 1.4GB/sec. USB 3, even with 2 lanes, will have trouble with that.

  • The cheap ones are using hardware JPEG encoders. The associated micro isn't powerful enough to do it in firmware alone.

    • Surprised they don't use a hardware video encoder, is it because the well and efficiently supported formats are all MPEG, and thus have fairly high licensing cost on top of the hardware? Or because even efficient HVEs use more resources than webcams can afford? Or because inter-frame coding requires more storage, which (again) means higher costs, which (again) eats into the margin, which cheap webcam manufacturers consider not worth the investment?

      4 replies →

  • Hm. But then wouldn't it make more sense to just stream the raw sensor data, which is 1 byte per pixel (or up to 12 bits if you want to get fancy), and then demosaic it on the host? Full HD at 30 fps would be 59.33 MB/s, barely but still fitting into that limit.

    But then also I think some webcams use H264? I remember reading that somewhere.

    • The pixel density doesn't generally refer to the density of the Bayer pattern, which can be even denser. Generally a cluster of four Bayer pixels makes up one pixel (RG/GB), but like most things in computing, the cognitive complexity is borderline fractal and this is a massive simplification.

    • > Full HD at 30 fps would be 59.33 MB/s, barely but still fitting into that limit.

      It's not fitting into anything, I fear; best case scenario, the effective bulk transfer rate of USB2 is 53MB/s.

      60 is the signaling rate, but that doesn't account for the framing or the packet overhead.

    • It would need a funny driver, and since that stuff is big parallel image processing, it's easy in HW; but if someone has a netbook or cheap/old Celeron crap, doing the demosaic and color correction would peg their CPU.

    • > Full HD at 30 fps would be 59.33 MB/s, barely but still fitting into that limit.

      That limit is too high even as a theoretical max.

      You could do raw 720p.

    • I don't know where you get "1 byte per pixel" from. At minimum, raw 4:2:0 video would be two bytes per pixel, and RGB would be three bytes per pixel with 8-bit color depth.

      4 replies →

  • It needs a uC with some special hardware anyways to do demosaic or else it would require special drivers that would peg some people's crappy laptop CPUs.

    Also the raw YUV 4:2:0 is 1.5 bytes per pixel so that's doing half of the "compression" work for you.
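    The bytes-per-pixel figures thrown around in this subthread, and the raw 1080p30 bandwidth each implies (decimal MB/s, no USB overhead):

```python
# Bytes per pixel for common uncompressed camera formats.
formats = {
    "raw Bayer, 8-bit": 1.0,  # one color sample per sensor photosite
    "YUV 4:2:0": 1.5,         # chroma subsampled, as mentioned above
    "RGB24": 3.0,             # the 3-bytes-per-pixel case
}

for name, bpp in formats.items():
    mb_per_s = 1920 * 1080 * bpp * 30 / 1e6
    print(f"{name}: {mb_per_s:.1f} MB/s")
```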

Just how much do you have to hate consumers to come up with a scheme like this? Increment revisions as you add more features, add something to the end to say how fast it goes. The 3.2 renaming is idiotic.

USB 4 AKA USB 4 Gen2x2

USB 4 (opt) AKA USB 4 Gen3x2

They had a chance to fix their colossal fuckup and they decided not to.

  • In marketing and on cables they've chosen to use the terms USB4 20Gbps and USB4 40Gbps, so at least that's explicit. There are also official ways to mark cables as being 100W or 240W capable.

  • Their issue was not the naming for the consumer or tech user; their issue was "how do we allow any random laptop to claim the latest USB despite not actually supporting it".

    It was super obvious with usb 3 and its sub versions, and it gets even worse with 4.

    • Yes. The "IF" in "USB-IF" stands for implementers forum, it is a consortium of hardware companies who make devices. It's preferable to them if they can slap "USB 3.2 support!" on the box without having to redo their boards with a new, expensive component.

      In other words, the incentives here are for USB-IF to promote customer confusion, not to reduce it, because that confusion can sell devices and push profit margins.

      It's absolutely terrible that the EU is giving this group a legal monopoly on the ability to create and proliferate new standards. Their incentives fundamentally run against the consumer and they have repeatedly acted against the interests of the consumer. Unlike HDMI, there is no VESA to counterbalance them, it is USB or nothing, so you'll have to deal with these crappy standards going forward.

      --

      HDMI is doing something similar now too - "HDMI 2.1" is a completely hollow standard where every single feature and signaling mode added since HDMI 2.0 is completely optional. You can take HDMI 2.0 hardware and get it recertified as HDMI 2.1 without any changes - actually you must do this, since HDMI Forum is not issuing HDMI 2.0 certifications any more, only HDMI 2.1 going forward; the new standard "supersedes" the old one entirely.

      So - "HDMI 2.1" on the box doesn't mean 4K120 support, it doesn't mean VRR support, it doesn't mean HDR support. It could actually just literally be HDMI 2.0 hardware inside. You need to look for specific feature keywords if that is your goal.

      https://arstechnica.com/gadgets/2021/12/the-hdmi-forum-follo...

      https://www.youtube.com/watch?v=qo9Y7AMPn00

USB versioning is such a clusterfuck.

  • There was a really short timeframe when I was really positive about USB, but that has been long lost since.

    They should've never allowed cables to only provide some capabilities and still get the branding. Having capabilities for connectors was fine IMO, but also accepting them for cables was bad, because you cannot really find out what a cable supports or where the issue originates if something goes wrong.

    • It’s why I always buy TB3 (or now TB4) cables rather than a cheaper USB-C to USB-C. Due to the strict requirements on TB cables, you can pretty much guarantee it’ll support any use case (alt modes, PD, etc). Sometimes overspending is worth the headache prevention.

      14 replies →

So on the next versions of USB, the cable length will get shorter and shorter until the max gets to 5cm?

While I get the technical reasoning about high frequency/attenuation etc that limits cable length as speeds go higher, there are obviously some practical limits to how short cables can be.

How that would be solved, I don't know.

  • I'm confused what that section is supposed to represent. E.g. Apple has a 3 meter USB 4 3x2 (40 Gbps) cable but the "cable" value for that section is listed as 0.8m. The only hit I'm getting in the USB 4 spec for "0.8" is on page 59 referring to maximum receiver insertion loss in dB for a gen 3 connection including a 0.8m passive cable but that in itself isn't a hard limitation on cable length.

  • Not my area of expertise, but maybe some (unrealistic) options include using fiber optics for the data lines, or adding more data lines.

    • There already exist some fiber-optic USB cables that come in lengths >50m with support for USB 3.1, so it doesn't seem like a very unrealistic option.

      8 replies →

    • I guess at some point optical will be the only way forward.

      Having more data lines in a serial bus is interesting, as the whole reasoning to go from parallel lines (e.g. Centronics, ATA/SCSI or ISA/PCI buses) to serial (SATA/SAS, PCIe, USB) was that coordinating multiple data lines got impossible due to physical limitations (e.g. minimal differences in cable lengths started to matter).

      3 replies →

Suggestion: maybe include all the USB-C-plug Thunderbolt versions too. My personal policy these days is to just buy reputable Thunderbolt cables for all my USB-C needs. Maybe I'm doing the wrong thing?

Also, I think there's a difference between active and passive USB-C cables, or something like that.

  • > Maybe I'm doing the wrong thing?

    If you're happy with it then probably not.

    The main possible issues are that it's more expensive and you get shorter and thicker (less flexible) cables; a passive non-optical TB (or USB4) cable will top out around 1m.

    Less capable cables can be longer and thinner which is convenient for e.g. mice and such small devices. But otherwise may not matter overly much.

    • I've been pretty happy with my less flexible cables. I don't need to snake them around tight corners anywhere. Being less flexible seems to keep them from auto-tangling.

Ah USB. In the old days it was different cables for different things, nowadays it's 1 connector for everything but beware, the cable might physically plug into the socket, but whether you'll get the functionality you want?

  • Seems it is going backward to me too.

    At one point I remember hooking up a computer being like one of those shape puzzles we give children. If you can match them they'll work. No two of my devices used the same cable or port, but if it fit it'd work.

    Keyboard switched to PS/2 so those and PS/2 mice were confusing, but eventually they standardized on colours.

    USB came out and you could just plug it in wherever. This was great.

    And now? 20 combinations of cable features with the same socket but all do something else. I can only imagine what the return rate will be for stuff like this.

  • Just how many devices do you meet that regularly hit those edge cases? Outside 4K+ multimonitor connections?

    (It's really popular and easy to bash on USB on this forum, but it turns out that in real life your USB-C device will "just work" for pretty much all setups outside really fringe high performance ones. And even those will usually just negotiate lower rate.)

There's something about the naming of USB that is great. I love how there are now something like a dozen 'universal' standards, and how the serial bus now has multiple lanes.

In the "USB-A/B" section, they're all labelled "Type-A"; the 3rd and 4th should be labelled "Type-B".

It's also missing:

* mini-B 4 wire (older phones, etc)

* micro-B 4 wire (most electronics prior to Type-C)

* micro-B 8 wire (mostly found on external 2.5" HDDs)

There were also a bunch of other connectors (mini-A, mini-AB, etc.) but they are very rare.

I'm always flabbergasted at how difficult and user-hostile it is to discern between the various USB standards.

Hello Fabien! I saw on Twitter that you had built a gaming setup. Can you write an article about it on your blog, as you did for your silent PC?

So where exactly does USB-C fall into?

I have 2 different generations of USB-C hosts, and they behave quite differently when approaching max cap, especially with high-quality low-latency audio (USB-C was supposed to be de-facto replacement for FireWire).

  • USB-C is a connector type, like USB-A (usually known as the classic USB plug) and USB-B (usually the other side of said plug, a square kind of connector). USB-B had other offspring like miniUSB and microUSB (note that in these cases on the other side of the cable you usually have a USB-A plug).

    USB-C is the first time cables have the same connector on both sides, so it obsoletes USB-A and USB-B. But what is sent over USB-C? It can be USB 3, with which it is often conflated because they came around the same time, but it can also be USB 2, so it is a bit hard to tell. But USB 3 can use old-style USB-A as well (the blue plugs with the same shape as the classic USB plugs) and USB 3 Micro-B (the microUSB plugs with an extension off to the side).

    • > Can be USB 3 [...] USB 2, so it is a bit hard to tell.

      ...or Thunderbolt, USB 4, DisplayPort (through Alt-mode or encapsulated in Thunderbolt), or HDMI (Alt-mode), or MHL (Alt-mode), USB Power Delivery...

      Unfortunately, not every cable with USB-C connectors can carry all of these. E.g. there are USB-C cables that can only carry USB 2. Or cables that can carry USB 3, but not Thunderbolt. Also, not all cables can carry the same wattage for power delivery.

      It's a mess.

      9 replies →

    • >note that in these cases on the other side of the cable you usually have a USB-A plug

      Usually a full-size USB-A, you mean, because what we commonly know as mini-USB and micro-USB are actually mini-B and micro-B, which have corresponding (but now rarely used) mini-A and micro-A ports. Before USB-OTG, USB used to be an explicitly directional protocol, with a master and a slave device.

      https://upload.wikimedia.org/wikipedia/commons/8/82/USB_2.0_...

      https://en.wikipedia.org/wiki/USB_hardware

  • It's orthogonal.

    USB-A is a host-side connector, USB-B (normal/mini/micro) is a client-side connector, and USB-C is a two-way connector.

    Each of them can be implemented for each USB version, except USB-C came later and makes no sense before USB 3.

    Then USB versions added features, signalling conventions and wires. But the USB-A and USB-B connectors are backward compatible all the way to USB 1.0 1.5Mbit/s.

  • Good question. I bought a RaidSonic Icy Box IB-1121-C31 USB 3.1 (10Gbit) S-ATA dock recently (with a USB Type C connector) that came with a USB C cable, and had to buy a special "USB-A - USB type C cable" to achieve 10Gbit/s with the 10GBit/s USB A connector of my mainboard.

    The "USB A - USB type C cables" that I already had only worked up to 480MBit/s.

I like the simple site design, with one-page info about USB.

Someone please make a similar one-pager with tables about PCI Express, Ethernet, HDMI...

If only it included a guide to the different USB connectors, but that might make TFA too long to publish.

I've definitely used 5m+ extensions on USB1 (and 2, IIRC) before. I guess it'd be sketchy running something that requires decent throughput without b0rking on ECC/FEC/whatever it uses, but for the temperature sensors I was using, it was fine.

  • A long time ago, I was using a USB 1 or 2 Wifi adapter through a USB extension cord, I'm pretty sure the total cable length was more than 5 meters. It "worked", but even just flicking a light switch caused the network connection to reset. So yeah, it may "work", for certain values of "work".

This is wrong. For example, Full Speed isn't a name for USB 1; it's the name for a speed which is supported by USB 1 and 2 (not sure about 3). Most USB microcontrollers are Full Speed USB 2.

Why didn't they focus more on cable length? I'm not sure how much latency longer cables would add, since the signal still travels at close to the speed of light.

Maybe there's someone in the world wondering if it's possible to emulate MarioKart from his office PC to the living room with a 10m HDMI and USB3 cable... Just guessing :)

USB 4 (opt) is ... optical? Or optional?

  • According to the wiki table (https://en.wikipedia.org/wiki/USB4#Support_of_data_transfer_...), it's "optional":

    * "USB4 20 Gbit/s Transport" (= USB4 20Gbps = USB4 Gen 2x2) is required for host to support

    * "USB4 40 Gbit/s Transport" (= USB4 40Gbps = USB4 Gen 3x2) is not

    Also USB4 apparently only requires support for tunneling "SuperSpeed USB 10Gbps" (USB 3.2 Gen 2×1), "SuperSpeed USB 20Gbps" (USB 3.2 Gen 2x2) is optional.

  • It would have a longer max length if the data lanes were optical.

    • You can actually get optical usb3 and thunderbolt (all generations) cables. Thunderbolt was originally called light peak and shown off by Intel and Apple in demos as optical, and Sony had a line of laptops with optical light peak connectors to connect to external GPUs. But ultimately the default became non-optical because it can carry power too.

      5 replies →

Just curious: can USB3 work without D+ and D-?

  • USB 3 and previous standards are completely separate connections and software stacks - the USB 1/2 D+/D- pair does not interact at all with SSRX/SSTX. You should be able to literally cut the D+/D- wires in a USB 3 cable and it should still work as a USB 3 cable.