Yes, because at that time, a modem didn't actually talk to another modem over an end-to-end switched analog line. Instead, line cards digitized the analog phone signal, the digital stream was routed through the telecom network, and then converted back to analog at the far end. So the analog path was really two short segments. The line cards sampled at 8 kHz (enough for ~4 kHz of analog bandwidth), using a logarithmic mapping (μ-law or A-law, depending on region), and they managed to get 7 bits reliably through the two conversions.
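That logarithmic mapping can be sketched in a few lines. This is a minimal illustration assuming the standard μ-law companding curve with μ = 255; real line cards pack the result into 8 bits with a specific segment encoding, which is omitted here:

```python
import math

MU = 255  # North American mu-law companding constant

def mulaw_compress(x: float) -> float:
    """Map a linear sample in [-1, 1] through the logarithmic mu-law curve."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mulaw_expand(y: float) -> float:
    """Inverse mapping: recover the linear sample from the companded value."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# Quiet samples get proportionally finer quantization than loud ones,
# which is why ~8 log-spaced bits cover the dynamic range that would
# otherwise need ~13 linear bits.
for x in (0.01, 0.1, 1.0):
    y = mulaw_compress(x)
    print(f"{x:5.2f} -> companded {y:.3f} -> restored {mulaw_expand(y):.4f}")
```

The compress/expand pair round-trips exactly before quantization; the bits are lost only when the companded value is rounded to the 8-bit code.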
ISDN essentially moved that line card into the consumer's premises. So ISDN "modems" talked digitally end to end, and got to 64 kbit/s.
An ISDN BRI (basic rate, over ordinary copper) actually had two 64 kbps B channels. For POTS dialup, as an ISP you typically had a PRI with 23 B channels and 1 D channel.
56k only allowed one A/D-D/A conversion from provider to customer.
When I was troubleshooting clients, the problem was almost always on the customer side of the demarc, with old two-line wiring or messy star junctions being the primary source.
You didn't even get 33k on analog switches, but at least US West and GTE had ISDN-capable switches backed by at least DS# by the time the commercial internet took off. LATA tariffs in the US killed BRIs for the most part.
T1 CAS was still around, but in-channel CID etc. didn't really work for their needs.
33.6k still depended on DS# backhaul, but you could be POTS on both sides; 56k depended on there being only one analog conversion.
In case anyone else is curious, since this is something I was always confused about until I looked it up just now:
"Baud rate" refers to the symbol rate, that is, the number of pulses of the analog signal per second. A signal that has two voltage states conveys one bit of information per symbol.
"Bit rate" refers to the amount of digital data conveyed per second. If there are two states per symbol, the baud rate and bit rate are equal. 56K modems packed 7 bits into each symbol (128 distinguishable states), so the bit rate was 7x the baud rate.
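The relationship is just arithmetic: a symbol that distinguishes N states carries log2(N) bits, so bit rate = baud × log2(states). A minimal sketch:

```python
import math

def bits_per_symbol(states: int) -> int:
    """A symbol distinguishing `states` levels carries log2(states) bits."""
    return int(math.log2(states))

def bit_rate(baud: int, states: int) -> int:
    """Bit rate = symbol rate (baud) times bits carried per symbol."""
    return baud * bits_per_symbol(states)

# Two states per symbol: baud and bit rate coincide.
print(bit_rate(2400, 2))    # -> 2400
# A 56k-style modem: 8000 symbols/s, 128 levels = 7 bits each.
print(bit_rate(8000, 128))  # -> 56000
```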
Not sure about your last point, but in serial comms there are start and stop bits, and sometimes parity. We generally used 8 data bits with no parity, so in effect there are 10 bits per character including the start and stop bits. That pretty much matched the file transfer speeds achieved with one of the good protocols that used sliding windows to hide latency. To calculate expected speed, just divide baud by 10 to convert from bits per second to characters per second; then there is a little efficiency loss from protocol overhead. This is for a direct connection without modems; once you introduce those, the speed could be variable.
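The divide-by-ten rule above follows directly from the framing: each character costs its data bits plus the start, stop, and optional parity bits. A small sketch of that calculation (parameter names are illustrative):

```python
def effective_cps(bps: int, data_bits: int = 8, start_bits: int = 1,
                  stop_bits: int = 1, parity_bits: int = 0) -> float:
    """Characters per second on an async serial line: the line rate divided
    by the total bits spent per character, framing included."""
    frame = start_bits + data_bits + parity_bits + stop_bits
    return bps / frame

# 8N1 framing: 1 start + 8 data + 1 stop = 10 bits per character.
print(effective_cps(9600))   # -> 960.0
# 7E1 framing (7 data bits, even parity) also costs 10 bits per character.
print(effective_cps(9600, data_bits=7, parity_bits=1))  # -> 960.0
```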
Yes, except that modern infra goes much further, e.g. WiFi 6 uses 1024-QAM, which is to say there are 1024 states per symbol, so you can transfer up to 10 bits per symbol.
As someone who started with 300/300 and went via 1200/75 to 9600 etc., I don't believe conflating signalling changes with bps is an indication of physical or temporal proximity.
56k relied on the transmitting (ISP-side) modem being digitally wired to the DAC that fed the analog segment of the line.
Yeah, I got baud and bit rates confused. I don't recall any Hayes commands anymore either...
Confusing baud and bit rates is consistent with actually being there, though.