
Comment by jwr

18 hours ago

Incidentally, this client isolation thing can be extremely annoying in practice in networks you do not control. Hardware device makers just assume that everything is on One Big Wi-Fi Network and all devices can talk to all other devices and sing Kum-Ba-Yah by the fire.

Then comes network isolation and you can no longer turn on your Elgato Wi-Fi controlled light, talk to your Bose speaker, or use a Chromecast.

That seems less annoying than a hotel full of people who can play whatever they want with my Chromecast. No malice is required for this to happen; it is completely possible to do by mistake.

Exchanges like "I've been trying to use the Chromecast!" "The Living Room Chromecast?" "Yes! It says it's playing, but I don't see anything on the TV screen!" "You hit the play button, right?" "Yeah, and then it keeps stopping on its own!" "Are you sure you plugged it in?" "What in the world is wrong with this dumb thing?" drift between one partner and another in some far corner of the hotel as they innocently trample my efforts to watch old episodes of How It's Made.

For all of these reasons, I tend to travel with a network that I control. That's usually in the form of some manner of very small router -- with a strong preference towards something that runs (or can run) OpenWRT. There's a ton of such "travel routers" on the market, centered around $60 or so, that don't take up much space at all.

I use this to slurp up whatever free wifi or ethernet I can get, or my phone tethering/hotspot, and I don't worry at all about how someone else's network might decide to treat me today. Whatever stuff I bring with me all works about as well as it does at home.
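For the curious, joining an upstream hotspot from an OpenWrt travel router comes down to a few `uci` commands. This is a minimal sketch; `radio0`, the SSID, and the key are placeholders for your own setup:

```shell
# Define a "wwan" interface that gets its address from the hotel network
uci set network.wwan=interface
uci set network.wwan.proto='dhcp'

# Join the upstream AP as a client (STA); SSID and key are placeholders
uci set wireless.wwan=wifi-iface
uci set wireless.wwan.device='radio0'
uci set wireless.wwan.mode='sta'
uci set wireless.wwan.network='wwan'
uci set wireless.wwan.ssid='HotelWiFi'
uci set wireless.wwan.key='correct-horse-battery'
uci set wireless.wwan.encryption='psk2'

uci commit
wifi
```

Your own gadgets stay on the router's LAN SSID and see each other normally, regardless of what the upstream network does.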

Even when not using client isolation, I've run into similar problems simply from having a computer connected over Ethernet instead of WiFi, and whatever broadcast method a gadget uses for discovery didn't get bridged between wired and wireless. (Side note: broadcast traffic on WiFi can be disproportionately problematic because it needs to be transmitted at a lowest common denominator speed to ensure all clients can receive it. IIRC, that usually means 6Mbps.)
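A quick back-of-the-envelope calculation illustrates why that hurts; the 6 Mbps basic rate and the 300 Mbps unicast rate here are illustrative assumptions:

```python
# Back-of-the-envelope airtime cost of a broadcast frame on Wi-Fi.
# Broadcast/multicast frames are sent at a low "basic" rate so every
# associated client can decode them; 6 Mbps is a common choice on
# 802.11a/g/n networks. These numbers ignore preambles and MAC
# overhead, so they understate the real cost.

frame_bits = 1500 * 8               # a full-size Ethernet payload

broadcast_rate_bps = 6_000_000      # typical mandatory basic rate
unicast_rate_bps = 300_000_000      # e.g. a 2x2 802.11n link

broadcast_airtime_us = frame_bits * 1_000_000 / broadcast_rate_bps
unicast_airtime_us = frame_bits * 1_000_000 / unicast_rate_bps

print(f"broadcast: {broadcast_airtime_us:.0f} us per frame")  # 2000 us
print(f"unicast:   {unicast_airtime_us:.0f} us per frame")    # 40 us
print(f"one broadcast costs as much airtime as "
      f"{broadcast_airtime_us / unicast_airtime_us:.0f} unicast frames")
```

So one chatty discovery protocol can eat as much airtime as fifty full-rate data frames, which is why broadcast-heavy gadgets are disproportionately painful on busy networks.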

I mean, yeah, isn't that the main purpose of client isolation? It sucks when you're on something like a locked-down university dormitory network, but it also stops (or at least inhibits) other people from randomly turning on your lightbulb or, worse, deploying exploits against your poorly engineered IoT device and lighting you up with malware.

Adding exceptions for certain protocols or IP ranges (maybe even multicast) is certainly a way around this, but I imagine that with every hole you poke to allow something, you are also opening a hole for data to leak.

  • Client isolation is done at L2. You can't add exceptions for IP ranges, protocols, etc. this way, because those live further up the stack. Even if devices can learn about each other in other ways, isolation gets in the way of direct communication between them.

    • The paper makes the point that you need to consider L3 in client isolation too; they call this the gateway bouncing attack. If traffic between clients can be hairpinned at L3, it doesn't matter what protections you have at L2.
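To make the two layers concrete, here is a sketch of covering both on a Linux-based AP/gateway. `ap_isolate` is a real hostapd option; the `br-lan` interface name is an assumption standing in for your LAN bridge:

```shell
# L2: in hostapd.conf, stop the AP from relaying frames between
# its own associated clients:
#
#   ap_isolate=1

# L3: on the gateway, drop packets that would be routed back out
# the same LAN interface they arrived on (the "gateway bounce"
# path). br-lan is a placeholder for your LAN bridge.
iptables -A FORWARD -i br-lan -o br-lan -j DROP
```

The L2 setting alone leaves the hairpin path open, which is exactly the gap the gateway bouncing attack exploits.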