We have ipinfo at home or how to geolocate IPs in your CLI using latency

18 hours ago (blog.globalping.io)

This is a little project exploring the feasibility of using a service such as Globalping for geolocation needs.

I had fun making it but please note that the current implementation is just a demo and far from a proper production tool.

If you really want to use it, then for the best possible results you need at least 500 probes per phase.

It could be optimized fairly easily, but not without going over the anonymous user limit, which I tried to avoid.

  • I wonder if you could optimize for reducing the total probe count (at the expense of possibly longer total time, though it may be faster in some cases) by using some sort of "gradient descent".

    Start by doing the multi-continent probe, say 3x each. Drop the longest-time probes, add probes near the shortest ones, and probe once more. Repeat this pattern: probe, assess, drop, and add probes closer to the target.

    You accumulate all data in your orchestrator, so in theory you don't need to deliberately issue multiple probes each round (except for the first) to get statistical power. I would expect this to "chase" the real location continuously instead of in 5 discrete phases (rough sketch at the end of this comment).

    I just watched the Veritasium video on potentials and vector fields - the latency is a scalar potential field of sorts, and you could use it to derive a latency gradient.
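
    A rough, runnable sketch of that loop (toy model: probes and the target are (lat, lon) points and "latency" is distance plus noise; in reality measure() would be a Globalping ping and nearby_probes() a lookup of probes near a city):

        import math, random

        # Toy stand-ins for real Globalping calls: "latency" is just the
        # distance to the target plus routing noise, and recruiting means
        # jittering new probes around an existing one.
        def measure(probe, target):
            return math.dist(probe, target) + random.uniform(0, 5)

        def nearby_probes(probe, n=3, spread=5.0):
            return [(probe[0] + random.uniform(-spread, spread),
                     probe[1] + random.uniform(-spread, spread))
                    for _ in range(n)]

        def locate(target, probes, rounds=5, keep=3):
            for _ in range(rounds):
                probes.sort(key=lambda p: measure(p, target))   # probe + assess
                best = probes[:keep]                            # drop the slowest
                probes = best + [q for p in best for q in nearby_probes(p)]
            return min(probes, key=lambda p: measure(p, target))

        target = (48.8, 2.3)                  # unknown to the algorithm
        seeds = [(40.7, -74.0), (51.5, -0.1), (35.7, 139.7), (-33.9, 151.2)]
        print(locate(target, seeds))          # should land near the target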

    • Yes, most likely there are multiple algorithms that could be used to get better results with fewer probes, but I'm not smart enough to do the math and implement them.

  • Isn't 3 theoretically enough?

    • Time of flight from three points gets you two candidate positions with GPS, but GPS signals propagate directly in free space. At least mostly; reflections happen.

      Internet signals generally travel by cable, and the selected route may or may not be the shortest distance.

      It's quite possible for traffic between neighboring countries to transit through another continent, sometimes two. And asymmetric routing is also common.

      Since this is using traceroute anyway, if you characterize the source nodes, you could probably use a lot fewer nodes and get similar results with something like the following (rough sketch at the end of this comment):

      a) probe from a few nodes on different continents (aiming to catch anycast nodes)

      b) assuming the end of the trace is similar from all probes, choose probe nodes that are on similar networks, and some other nodes that are geolocated nearby those nodes.

      c) declare that the target is closest to the node with the lowest measured latency (after offsetting by each node's characterized first-hop latency)

      You'll usually get the lowest ping times if you can ping from a nearby customer of the same ISP as the target. Narrowing down to that faster is possible if you know about your nodes.
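
      A rough, runnable sketch of (a)-(c), with all probe data invented (the ASNs, first-hop latencies, and RTTs are made up; in practice these would come from traceroutes and prior node characterization):

          # Stage 2 of the idea: restrict to probes on the network seen at
          # the end of the traces, then take the lowest latency after
          # subtracting each node's pre-characterized first-hop latency.
          probes = [
              {"id": "us-east", "asn": 7922, "first_hop_ms": 8.0},
              {"id": "de-fra",  "asn": 3320, "first_hop_ms": 2.0},
              {"id": "de-ber",  "asn": 3320, "first_hop_ms": 6.0},
              {"id": "jp-tyo",  "asn": 2516, "first_hop_ms": 3.0},
          ]
          rtts = {"us-east": 95.0, "de-fra": 14.0, "de-ber": 19.0, "jp-tyo": 260.0}

          target_asn = 3320              # ASN observed at the end of the traces
          candidates = [p for p in probes if p["asn"] == target_asn] or probes
          best = min(candidates, key=lambda p: rtts[p["id"]] - p["first_hop_ms"])
          print(best["id"])              # -> de-fra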

Congrats on doing it without AI! Just reading your crappy one-word commit messages makes me happy.

  • Some code may be AI-generated, because it uses "══════" to separate terminal output. In my experience, Claude really likes using that character for exactly this.

    • >Claude really likes

      Plenty of developers really like it too though, because that's where Claude learned to use it.

    • Maybe, but at least OP typed in the commit message by himself. That places you in the top percentile these days.

How feasible would it be for the host under measurement to introduce additional artificial latency to ping responses, varying based on source IP, in order to spoof its measured location?

  • Traceroutes are already notoriously hard to interpret correctly[1] and yes, they can be trivially spoofed. Remember the stunt[2] pulled by tpb to move to North Korea? If you are an AS you can also prepend fake AS to your BGP announcements and make the spoofed traceroute even more legitimate.

    I wonder if this thing will start a cat and mouse game with VPNs.

    [1]: https://news.ycombinator.com/item?id=5319419

  • Courtesy of Xfinity and Charter overprovisioning most neighborhoods' circuits, we already have that today for a significant subset of U.S. Internet users due to the resulting Bufferbloat (up to 2500 ms on a 1000/30 connection!)

    • You probably meant to say oversubscribing, not overprovisioning.

      Oversubscription is expected to a certain degree (this is fundamentally the same concept as "statistical multiplexing"). But even oversubscription in itself is not guaranteed to result in bufferbloat -- appropriate traffic shaping (especially to "encourage" congestion control algorithms to back off sooner) can mitigate a lot of those issues. And, it can be hard to differentiate between bufferbloat at the last mile vs within the ISP's backbone.

  • >varying based on source IP,

    Aha, that's what you would think, but what if I fake the source IP used to do the geolocation ping instead!

Nice work! I presented similar research at DEFCON 31 - 'You Can't Cheat Time: Finding foes and yourself with latency trilateration' (https://youtu.be/_iAffzWxexA) - though with some key differences that address the limitations mentioned in the thread. The main issue with pure ping-based geolocation is that:

- IPs are already geolocated in databases (as you note)
- Routing asymmetries break the distance model
- Anycast/CDNs make single IPs appear in multiple locations
- ICMP can be blocked or deprioritized

My approach used HTTP(S) latency measurements (not ping) with an ML model (SVR) trained on ~39k datapoints to handle internet routing non-linearity, then performed trilateration via optimization. Accuracy was ~600 km for targets behind CloudFront - not precise, but enough to narrow attribution from "anywhere" to "probably Europe" for C2 servers.

The real value isn't precision but rather:

- Detecting sandboxes via physically impossible latency patterns
- Enabling geo-fenced malware
- Providing any location signal when traditional IP geolocation fails
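
A toy version of that pipeline, for the curious (sklearn's SVR plus scipy's optimizer; the training data and measurements below are fabricated, so treat this as the shape of the approach rather than the talk's actual model):

    import numpy as np
    from scipy.optimize import minimize
    from sklearn.svm import SVR

    # Learn latency -> distance with an SVR (fabricated data; the real
    # model was trained on ~39k datapoints), then trilaterate by
    # minimizing disagreement with the predicted distances.
    rng = np.random.default_rng(0)
    lat_ms = rng.uniform(1, 200, 500)
    km = lat_ms * 70 + rng.normal(0, 300, 500)         # fake latency/distance law
    model = SVR(C=1e6).fit(lat_ms.reshape(-1, 1), km)  # large C: toy data is clean

    vantages = np.array([[0, 0], [5000, 0], [0, 5000]])  # probe x, y in km
    measured = np.array([30.0, 60.0, 45.0])              # HTTP RTTs in ms
    dists = model.predict(measured.reshape(-1, 1))

    def err(pos):
        return np.sum((np.linalg.norm(vantages - pos, axis=1) - dists) ** 2)

    print(minimize(err, x0=[1000, 1000]).x)   # estimated position (km)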

Bit surprised this works. Latency variability is huge and sometimes quite disconnected from geolocation. I recall talking to someone in NL and realising I had better latency to NL content from the UK than he did. Presumably better peering etc.

  • Could just be local-loop latency; on VDSL or DOCSIS you can get 5-15 ms of latency in just your first 1 km. London (e.g. Telehouse) > Amsterdam is only about 7 ms.

  • Wouldn't you just be closer to the nearest PoP and requesting mostly cached content? With how connected Amsterdam is, they couldn't have been around there. Also, depending on when this was: up until 7-8 years ago there was no fiber in most places in NL, even in major city centers. Now it's mostly covered.

    • Was a while back so I'm a bit fuzzy on what precisely we were measuring, but no, it wasn't something cached/CDN'd. Maybe a VPS or something, not sure.

      I was on a better connection (gigabit FTTC) and in a better peered location (central London).

      >amsterdam

      Don't know where precisely in NL they were or what connection type they had. I'd certainly expect a like-for-like Amsterdam wired connection to win, so this was probably something more pedestrian and rural.

  • > Latency variability is huge ...

    Yup. For example, from my city to one of my dedicated servers, whose location is well known (in France), I know there are 250 kilometers as the crow flies. Yet if I ping that server and draw a circle around my place (assuming the ping travels as fast as light in a vacuum, which we know ain't happening, but hey, it's something), I get a radius of 2000 kilometers. About 8x the distance. I can prove that my IP ain't in the US, but that's still not very precise.
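
    For reference, that circle is just r = c · RTT/2; the 13.3 ms below is back-derived from the 2000 km figure:

        # Upper bound on distance from an RTT, assuming propagation at c in
        # a vacuum (fibre is ~0.67c, plus queueing and detours, so the real
        # bound is much looser still).
        C_KM_PER_MS = 299_792.458 / 1000        # ~300 km per ms

        def max_radius_km(rtt_ms):
            return C_KM_PER_MS * rtt_ms / 2     # half the RTT is one way

        print(max_radius_km(13.3))              # ~2000 km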

    And indeed many servers in the UK, which is 2x the distance of my server, consistently give me a lower ping.

    TFA's approach, especially using traceroute instead of plain ping, is nice.

Great post and a great little tool. Some of my experience using these techniques in production:

1. Trilateration mostly doesn't work with internet routing, unlike GPS. Other commenters have covered this in more detail. So the approach described here - to take the closest single measurement - is often the best you can do without prior data. This means you need a crazy high distribution of nodes across cities to get useful data at scale. We run our own servers and also sponsor Globalping and use RIPE Atlas for some measurements (I work for a geo data provider), yet even with thousands of available probes, we can only accurately infer latency-based location for IPs very close to those probes.

2. As such, latency/traceroute measurements are most useful for verifying existing location data. That means for the vast majority of IP space, we rely on having something to compare against.

3. Traceroute hops are good; the caveat being that you're geolocating a router. RIPE IPmap already locates most public routers with good precision.

4. Overall these techniques work quite well for infrastructure and server IP addresses but less so for eyeball networks.

https://ping.sx is also a nice comparison tool

If I understood the post, the author just takes the location of the smallest ping as the winner. This seems like a very rudimentary approach. Why not do trilateration? If you take each ping time as a measurement of distance between two points, you should be able to ping from a random selection of IPs and calculate the location from there.
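
A minimal sketch of that idea (classic linearized multilateration; the anchor coordinates, RTTs, and the crude ~100 km/ms latency-to-distance conversion are all assumptions, and as other comments note, real routing breaks them):

    import numpy as np

    # With anchor points p_i and distance estimates d_i, subtracting the
    # first sphere equation |x - p_0|^2 = d_0^2 from the others gives a
    # linear system A x = b solvable by least squares.
    anchors = np.array([[0.0, 0.0], [4000.0, 0.0], [0.0, 4000.0]])  # km
    rtt_ms = np.array([30.0, 58.8, 66.5])
    d = rtt_ms / 2 * 100              # one-way ms -> km, crude rule of thumb

    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0]**2 - d[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2))

    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(pos)                        # -> roughly (1200, 900)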

  • I talk a little about it in the article, but the main goal was to build something simple that works as a proof of concept.

    This brute force approach works much better than I expected as long as you have enough probes and a bit of luck.

    But of course there are much better and smarter approaches to this, no doubt!

    • How do you know how well these results work?

      You mention the quality several times in the article, but it's not clear how this is verified. Do you have a set of known-location IP addresses around the world (apart from your home)? Or are we just assuming that latency is a good indicator?

  • Packets don't travel in straight lines.

    • This is/was also my take. I’m skeptical that a probe-based network can be granular enough to reliably pinpoint a city, especially when some paths are much better connected than others (fewer hops, uncongested fiber, no throttling).

      However, ipinfo still appears to rely on active probing to triangulate geolocation data, which suggests they believe these routing asymmetries can be modeled or averaged out in practice.

      https://ipinfo.io/blog/ipinfos-probe-network

    • Yeah, when I used to live in New England and had more time to be interested in transit, I was always piqued by how Comcast would route. No matter how far south I seemed to get, I'd always need to travel to Boston's peering point first to make it to NYC, even in New Haven. If you then simply switched ISPs, even at the same address, Verizon would send you south immediately.

      So there's funky overlap wherein on one ISP you appear closer to city A, and on ISP 2 closer to city B, but it's the same physical address.

      Continental classification, I'd think, would be good, as continents are coalesced endpoints separated by vast oceans.

You can extend this by looking at the IP route for the reverse path. I've found it's usually accurate at least to the state on the last hop before the destination - with the added benefit that there's usually an airport or city code in the FQDN of that hop.
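
Hostname conventions vary wildly per network, so extracting those codes is heuristic at best; a sketch with a made-up hostname and a stub code table:

    import re

    # Token-based guess at the hop's city; the code table and hostname
    # are invented, and real router naming schemes differ per ISP.
    AIRPORTS = {"lga": "New York", "ord": "Chicago", "ams": "Amsterdam"}

    def guess_city(hop_fqdn):
        for token in re.split(r"[.-]", hop_fqdn.lower()):
            code = token[:3] if token[3:].isdigit() else token  # lga5 -> lga
            if code in AIRPORTS:
                return AIRPORTS[code]
        return None

    print(guess_city("ae-1.er1.lga5.us.example.net"))   # -> New York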

It'd be clever to integrate this into the TCP stack so it tells you immediately what the lower bound is on the distance to the counterparty, based on the time between data sent and the corresponding acknowledgements. I can see some immediate applications for that.

  • You can get the TCP-measured round-trip time from tcp_info with

       #include <netinet/tcp.h>  /* struct tcp_info, TCP_INFO (Linux) */

       struct tcp_info info;
       socklen_t len = sizeof(info);
       getsockopt(sock, IPPROTO_TCP, TCP_INFO, &info, &len);
       /* info.tcpi_rtt holds the smoothed RTT in microseconds */
    

    tcp_info varies by OS and version, but I think tcpi_rtt is well supported.

> Globalping is an open-source, community-powered project that allows users to self-host container-based probes. These probes then become part of our public network, which allows anyone to use them to run network testing tools such as ping and traceroute.

How's this different from RIPE Atlas?

  • Atlas is great but it is focused more on academic research and professional use.

    Globalping offers real-time result streaming and a simpler user experience with a focus on integrations: https://globalping.io/integrations

    For example you can use the CLI as if you were running a traceroute locally, without even having to register.

    And if you need more credits you can simply donate via GitHub Sponsors starting from $1

    They are similar, with an overlapping audience, yet have different goals.

> Group and sort the results; the country with the lowest latency should be the correct one

Sometimes the residential ISPs that host the probes may have bad routing due to many factors; how does the algorithm take that into account?
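
One common way to damp a few badly routed probes would be to rank countries by median latency rather than trusting the single lowest result; a rough sketch with invented numbers:

    from collections import defaultdict
    from statistics import median

    # Rank countries by median probe latency so a lone badly (or luckily)
    # routed probe can't flip the answer on its own.
    results = [("DE", 14.2), ("DE", 16.0), ("DE", 15.1),
               ("FR", 9.0),                 # one oddly routed probe
               ("FR", 41.0), ("FR", 44.5)]

    by_country = defaultdict(list)
    for country, ms in results:
        by_country[country].append(ms)

    ranked = sorted(by_country, key=lambda c: median(by_country[c]))
    print(ranked[0])                        # -> DE, despite FR's 9 ms outlier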

Tried with an IP allocated to a major wireless network operator. It was far off, but I also ran out of credits when trying higher limits on subsequent attempts.

Seems the tool relies on ICMP results from various probes. So wouldn't this project become useless if the target device disables ICMP?

I wonder if you can "fake" results by having your gateway/device respond with fake ICMP replies.

  • I talk about it a bit in the article. The easiest solution is to use the last available hop. In most cases it's close enough to properly detect the country even if the target blocks ICMP.
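
    A tiny sketch of that fallback (the hop format here is invented, not Globalping's actual output):

        # Fall back to the RTT of the last hop that answered when the
        # target itself drops ICMP.
        hops = [
            {"ttl": 1, "ip": "10.0.0.1",     "rtt_ms": 1.2},
            {"ttl": 2, "ip": "198.51.100.9", "rtt_ms": 11.8},
            {"ttl": 3, "ip": None,           "rtt_ms": None},  # * * * (filtered)
        ]

        last = next(h for h in reversed(hops) if h["rtt_ms"] is not None)
        print(last["ip"], last["rtt_ms"])    # geolocate against this hop instead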

    Email me if you would like to get some additional credits to test it out, dakulovgr gmail.