Comment by ryao

9 days ago

Did they also set IP_TTL to set the TTL value to match the platform being impersonated?

If not, then fingerprinting could still be done to some extent at the IP layer. If the TTL value in the IP header is below 64, it is obvious the sender is either not running modern Windows or is running a modern Windows machine whose default TTL has been changed: by default, packets on modern Windows start with a TTL of 128, while most other platforms start at 64. Since those other platforms have no trouble communicating over the internet, IP packets from modern Windows will effectively always be seen by the remote end with a TTL at or above 64 (likely just above).
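For reference, overriding the outgoing TTL is a single socket option on most stacks. A minimal Python sketch, not anything the tool in question is confirmed to do; 128 is just the usual modern Windows default, and the host is a placeholder:

    import socket

    # Sketch: make outgoing packets start with TTL 128, matching modern
    # Windows' default, instead of the host OS's own default (often 64).
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 128)
    sock.connect(("example.com", 443))  # placeholder host; this connection's packets now leave with TTL 128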

That said, it would be difficult to fingerprint at the IP layer, although it is not impossible.

>That said, it would be difficult to fingerprint at the IP layer, although it is not impossible.

Only if you're using PaaS/IaaS providers that don't give you low-level access to the TCP/IP stack. If you're running your own servers, it's trivial to fingerprint all manner of TCP/IP properties.

https://en.wikipedia.org/wiki/TCP/IP_stack_fingerprinting
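To make that concrete: on a server you control, a few header bytes per packet are enough to recover several of the classic passive-fingerprinting signals. A rough Python sketch, Linux only and needing root (or CAP_NET_RAW), purely illustrative rather than taken from any particular tool:

    import socket, struct

    # Sketch: passively read fingerprintable fields straight from the headers of
    # incoming TCP packets (an AF_INET raw socket delivers the IP header too).
    raw = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)
    while True:
        pkt, (src, _) = raw.recvfrom(65535)
        ttl = pkt[8]                                   # IPv4 TTL
        df = bool(pkt[6] & 0x40)                       # IPv4 "don't fragment" flag
        ihl = (pkt[0] & 0x0F) * 4                      # IPv4 header length in bytes
        window = struct.unpack("!H", pkt[ihl + 14:ihl + 16])[0]  # TCP window size
        print(f"{src}: ttl={ttl} df={df} win={window}")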

  • I meant it is difficult relative to fingerprinting TLS and HTTP. The information is not exported by the Berkeley socket API unless you use raw sockets and implement your own userland TCP stack.

    • Couldn’t you just monitor the inbound traffic and associate the packets to the connections? Doing your own TCP seems silly. (A rough sketch of this follows below.)

      1 reply →
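      That monitoring approach might look roughly like the following hypothetical Python sketch (Linux only, needs root): sniff the packets, and key what you learn by the client's address and port so an ordinary TCP server can look it up via getpeername(). No userland TCP stack required.

          import socket, struct

          # Sketch: remember the observed TTL per (client ip, client port) so the
          # serving application can later look up its peer's TTL by that key.
          ttl_by_peer = {}
          raw = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)
          while True:
              pkt, (src, _) = raw.recvfrom(65535)
              ihl = (pkt[0] & 0x0F) * 4                         # IPv4 header length
              src_port = struct.unpack("!H", pkt[ihl:ihl + 2])[0]  # TCP source port
              ttl_by_peer[(src, src_port)] = pkt[8]             # byte 8 is the IPv4 TTL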

Wouldn’t the TTL value of received packets depend on network conditions? Can you recover the client’s value from the server?

  • The argument is that if many (maybe the majority of) systems are sending packets with a TTL of 64 and they don't experience problems on the internet, then it stands to reason that almost everywhere on the internet is reachable in fewer than 64 hops (personally, I'd be amazed if any routes are actually as long as 32 hops).

    If everywhere is reachable in under 64 hops, then packets sent from systems that use a TTL of 128 will arrive at the destination with a TTL still over 64 (or else they'd have been discarded for all the other systems already). The recovery step is sketched just after this sub-thread.

    • Windows 9x used a TTL of 32. I vaguely recall hearing that it caused problems in extremely exotic cases, but that could have been misinformation. I imagine that >99.999% of the time, 32 is enough. This makes it viable to fingerprint via TTL, distinguishing systems that set it to 32, 64, 128, or 255 (OpenSolaris and derivatives). That said, almost nobody uses Windows 9x or OpenSolaris derivatives on the internet these days, so I used values from systems people actually do use for my argument that fingerprinting via TTL is possible.
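    To put the recovery step in code, a minimal sketch of the inference above, using only the common default initial TTLs mentioned in this thread:

        # Sketch: the sender's initial TTL is almost certainly the smallest common
        # default at or above the observed value, since real internet paths are
        # well under 32-64 hops; the difference is then the apparent hop count.
        COMMON_INITIAL_TTLS = (32, 64, 128, 255)

        def guess_initial_ttl(observed: int) -> int:
            return min(t for t in COMMON_INITIAL_TTLS if t >= observed)

        print(guess_initial_ttl(117))  # -> 128 (likely Windows, about 11 hops away)
        print(guess_initial_ttl(57))   # -> 64  (likely Linux/BSD/macOS, about 7 hops away)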

What is the reasoning behind TTL counting down instead of up, anyway? Wouldn't we generally expect those routing the traffic to determine if and how to do so?

  • To allow the sender to set the TTL, right? Without adding another field to the packet header.

    If you count up from zero, then you'd also have to include in every packet how high it can go, so that a router has enough info to decide if the packet is still live. Otherwise every connection in the network would have to share the same fixed TTL, or obey the TTL set in whatever random routers it goes through. If you count down, you're always checking against zero (sketched at the end of this thread).

  • The primary purpose of TTL is to prevent packets from looping endlessly during routing. If a packet gets stuck in a loop, its TTL will eventually reach zero, and then it will be dropped.

    • That doesn't answer my question. If it counted up then it would be up to each hop to set its own policy. Things wouldn't loop endlessly in that scenario either.

      3 replies →

  • If your doctor says you have only 128 days to live, you count down, not up. TTL is time to live, which is the same thing.

    • Although, more accurately, it's like "transmissions to live", since it doesn't have anything to do with time, regardless of its original naming.
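    To make the per-hop check concrete, a minimal Python sketch of the count-down logic discussed above (not any particular router's code):

        # Sketch: counting down means every hop performs the same cheap check
        # against zero; counting up would force every packet to also carry the
        # limit it may not exceed, so routers would know when to stop it.
        def forward_ttl(ttl: int) -> int | None:
            ttl -= 1                  # each hop decrements
            if ttl <= 0:
                return None           # drop (real routers also send ICMP Time Exceeded)
            return ttl                # still live; forward with the decremented value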