Comment by jonplackett
1 day ago
It would help if computers / phones had an easy way to just identify a cable when you plug it in. Is this hard to do or just something normal people never care about?
I guess you need control over both cable endings. You can buy dedicated cable testers like https://treedix.com/products/treedix-usb-cable-tester-usb-c-...
I have enjoyed my Treedix - now almost every cable I have has coloured labels for what it supports and what ends it has (handy when you're in a rush.)
On the downside, it has highlighted what a cowboy industry USB-C cable manufacturing is.
The cable can report what it "thinks" it is, and in fact modern USB-C cables do this: they have "e-Marker chips" inside the plugs which communicate with whatever they're plugged into and report what they believe their capabilities to be. The thing is, manufacturers can program the e-Marker chips to spew lies, or a cable that used to support 80Gbps may have been slightly damaged after six months of use and now only reliably transmits at 10Gbps.
Power capacity is relatively easy to measure ad hoc via the voltage drop from one end to the other. USB-PD controllers already do this and can even fine-tune the voltage: if the device receiving (sinking) power needs 20V, they'll send 20.4V or 20.9V to compensate for the drop so that the charging device still gets 20V on its end.
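The compensation trick is just Ohm's law applied in reverse. A toy sketch (the resistance and contract numbers here are illustrative assumptions, not values from the USB-PD spec):

```python
def compensated_source_voltage(v_target, current_a, cable_resistance_ohm):
    """Voltage the source must output so the sink still sees v_target.

    The cable drops V = I * R along the way, so the source adds that
    drop back on top of the target voltage.
    """
    v_drop = current_a * cable_resistance_ohm
    return v_target + v_drop

# A 20V / 3A contract over a cable with ~0.15 ohm round-trip resistance:
print(compensated_source_voltage(20.0, 3.0, 0.15))  # -> 20.45
```

Real controllers do this iteratively based on the sink's feedback rather than from a known resistance, but the arithmetic is the same.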
But actual maximum data throughput is hard to know. The only way to really "know" how much data can flow through a cable is with an expensive oscilloscope or cable tester. 80Gbps cables run at ~13GHz, so at minimum you need a 26GHz scope (Nyquist–Shannon sampling theorem), or more practically a 52GHz scope. And it turns out it's really expensive to measure electrical signals 52 billion times per second. The necessary devices start at $15,000 on the very low end (a cable signal integrity tester) [0], and that only works for USB 3.2 cables up to 10Gbps; for 80Gbps USB4 cables you're past $270,000 (a proper 60GHz oscilloscope) [1].
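The bandwidth figures follow directly from the sampling theorem, taking the ~13GHz fundamental from above as given:

```python
# Back-of-envelope for the scope bandwidth numbers above.
fundamental_ghz = 13.0
nyquist_minimum = 2 * fundamental_ghz  # 26 GHz: the bare Nyquist floor
practical = 4 * fundamental_ghz        # 52 GHz: headroom for edges/harmonics
print(nyquist_minimum, practical)      # -> 26.0 52.0
```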
On the high end, each signal integrity test setup can cost $1-2 million [2]: the base unit starts at $670,000, plus additional money for hardware-accelerated analysis, specialized active probes, and the specific PAM-3 / USB4 compliance software packages.
0: https://www.totalphase.com/products/advanced-cable-tester-v2...
1: https://www.edn.com/12-bit-oscilloscope-operates-up-to-65-gh...
2: https://www.eevblog.com/forum/testgear/uxr1104a-infiniium-ux...
This is overthinking it a bit. You mostly only need that stuff to tell you why it isn't working. If you want to know if it's up to the job, you can just measure the error rate, which just means sending a lot of data across and counting the errors. There might be some faults which only occur when the cable is in a particular position, but you can at least detect it when it happens.
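The "send a lot of data and count the errors" approach is simple enough to sketch. This is a toy bit-error-rate calculation, not how a real PHY does BERT; in practice `received` would be read back over the link under test:

```python
import os

def bit_errors(sent: bytes, received: bytes) -> int:
    """Count differing bits between what was sent and what came back."""
    return sum(bin(a ^ b).count("1") for a, b in zip(sent, received))

def bit_error_rate(sent: bytes, received: bytes) -> float:
    return bit_errors(sent, received) / (len(sent) * 8)

payload = os.urandom(1_000_000)  # known data to push through the cable
# Simulate a single flipped bit coming back:
received = bytearray(payload)
received[1234] ^= 0x01
print(bit_error_rate(payload, bytes(received)))  # 1 error in 8,000,000 bits
```

A pass/fail verdict then just compares the measured rate against the spec's allowed BER for that speed grade.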
The interface IC almost certainly also estimates signal quality, but it's likely hard to get that information out of it.
> modern USB-C cables [...] have "e-Marker chips" inside the plugs
If only they all did. I have a significant percentage in my pile with no e-Marker chip. They'll be the first to be culled once I get around to that, mind.
I get that to properly test a cable, you need that level of accuracy, but for home use, couldn’t you get away with a source and a receiver that are far cheaper?
If a USB4 device can output a USB4 stream and the receiver can check that stream for errors, isn’t that sufficient?
At some point you end up testing the peripheral and/or host rather than the cable. For example, cables often state that they can handle up to 240W ... but no 240W USB-PD chip has ever gone into production -- you won't even find one at the hottest USB-PD trade shows[0] in China.
It could be reasonable for computers to be able to trigger a data throughput test where the peripheral states "I support up to 40Gbps of receiving/sending" and then sends a simple pattern that can be generated on the fly. But a lot of devices can't receive/send that 80Gbps of data for long enough to perform a decent test: the storage, RAM, buffers, etc. get exhausted or act as bottlenecks.
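"A pattern generated on the fly" is typically a pseudo-random bit sequence from an LFSR: both ends regenerate the same stream from a shared seed, so neither side needs storage or big buffers to compare received against expected. A minimal sketch (a maximal-length 7-bit LFSR, PRBS-7-style; real link tests use longer sequences like PRBS-31):

```python
def prbs_bits(n_bits, seed=0x7F):
    """Yield a maximal-length 7-bit LFSR bit stream (period 127).

    Deterministic given the seed, so it costs nothing to store:
    the receiver just runs the same generator and compares.
    """
    state = seed  # any nonzero 7-bit value
    for _ in range(n_bits):
        yield state & 1
        new_bit = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | new_bit) & 0x7F

def count_errors(received_bits, seed=0x7F):
    """Receiver side: regenerate the stream and count mismatches."""
    expected = prbs_bits(len(received_bits), seed)
    return sum(r != e for r, e in zip(received_bits, expected))

sent = list(prbs_bits(1000))
corrupted = sent.copy()
corrupted[42] ^= 1
print(count_errors(sent), count_errors(corrupted))  # -> 0 1
```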
If you know enough to accurately interpret the measurements you get from that, you know enough to write your own computer program to try to send 80Gbps from one computer to another and use DMA to process it in real-time without hitting storage (which a lot of peripherals likely don't have the CPU to accomplish).
If you don't know enough to write those test applications, you probably don't know enough to interpret the results of a built-in test function and the measurements would confuse and frustrate a lot of well-meaning, nerdy, but under-educated consumers who make assumptions about why they're not actually getting the rated speed.
Idk, my opinion doesn't go one way or the other here. Perhaps I myself don't quite know enough to be a good judge of that concept.
0: https://asiachargingexpo.com
Super helpful -- integrated this into the guide. Thank you.
> https://github.com/darrylmorley/whatcable
This was on Show HN only yesterday.
Probably can't tell you anything about the other end of the cable though.
> Is this hard to do or just something normal people never care about?
If I believed in conspiracies, I'd say the USB consortium or mafia or whatever it's called is pressuring software developers not to display that info. Otherwise they'd have "normal people" with torches and pitchforks at their door.
It violates every product person's wish to be "simple".
There’s a reason that Windows barely shows any errors until the system fully halts.
Windows will throw up warnings when disk space is nearly gone, when it detects driver instability, when RAM is full and page files can't keep up, when a specific application is draining your battery, when your files aren't backing up right, and all kinds of others.
The problem with most of those is that either users don't care until it's too late ("I need to get this done now, I'll delete files later"), third party applications are the cause and Windows can't/shouldn't interfere (did a program memory leak or is the user pushing the boundaries of what the system can handle?), or because there's not much the user can do about it ("your GPU driver crashed", well gee, my drivers are up to date, let me spend half a month's wages on a new GPU then, shall we?).
The only "too late" errors I've seen on Windows are when something very important has crashed and the system needs to shut down for data integrity (csrss.exe crashing on school computers comes to mind, though I doubt that was Microsoft's fault), or when something unpredictable went wrong, like a file ending up corrupt because of a failing hard drive or a flipped bit in memory.
Microsoft actually created a dedicated screen to monitor errors and failures of all kinds (https://www.elevenforum.com/t/view-reliability-history-in-wi...) that's been around since Vista. It used to open automatically if you clicked a popup after certain errors, but Microsoft appears to have eventually stopped doing that. Going by how many "today I learned" posts I find when I look up the feature, I'm guessing hardly anyone who would actually understand what the screen shows ever used it.
They now have the option to silently add this kind of detail to logs and have Clippy's successors find answers to "why is my computer acting odd/slow" only when asked. For a long time I felt like companies leaving product decisions to the Occamist (or the closely related lazy programmer) was a superpower for competing against larger organizations that usually don't, but we may get a run for our money from emulated simplicity.
This is the right idea.