17 Nov

Measuring latency to endpoints with blocked ICMP

And a blog post after a while! The last few months have been busy with RPKI. After my last post about RPKI and the fact that India was lagging a bit on the RPKI ROA front, a set of like-minded folks started a major push. The signed portion of the Indian table has jumped from 12% in August to 32% now in October. Detailed graphs and other data can be found here on the public Grafana instance.

In terms of absolute numbers, India now has the highest count of signed prefixes in this region. 13972 Indian prefixes have a valid ROA, and the nearest to that is Taiwan at 6824. Though 13972 is just 32% of the Indian table, while 6824 is 91% of the Taiwanese table. So a long way to go for us.

If you are a network operator in India and reading this, consider joining our RPKI webinar, which is planned for 3 pm (IST) on 18th Nov 2020. You can register for the event here. Or buzz me to talk about RPKI!

Catching Covid-19

Besides the RPKI push, I also caught Covid-19 along with family members. Luckily for us, it went fine and wasn’t that painful. The impact was mild and everyone has recovered. Phew!
I hope readers of this blog post are well.

TraceroutePing in Smokeping

Coming to the topic of today’s blog post. I recently came across this excellent Smokeping plugin which solves a very interesting problem. There are often nodes in a traceroute/MTR which are either not routed or simply drop ICMP/TCP/UDP packets addressed to them. This includes routers with a pretty harsh firewall dropping everything addressed to them, as well as cases where an IX or some other non-routed IP shows up in the traceroute. It becomes tricky to measure latency to those. The plugin uses the same simple idea of incremental TTLs as traceroute: trigger a “TTL time exceeded” reply from these middle nodes and use that reply to measure and plot latency.
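The core of the trick is that the plugin effectively runs traceroute pinned to a single hop and records the RTTs from whichever router answers with “TTL exceeded”. A minimal sketch of that parsing step (not the actual plugin code; the hostname and IP in the sample line are placeholders using a documentation address):

```python
import re

# Match one traceroute output line: "  5  host (ip)  1.2 ms  3.4 ms  5.6 ms"
HOP_RE = re.compile(r"^\s*(\d+)\s+(\S+)\s+\(([^)]*)\)((?:\s+[\d.]+ ms)+)")

def hop_rtts(traceroute_output: str, ttl: int):
    """Return (router, [RTTs in ms]) for the hop probed at the given TTL."""
    for line in traceroute_output.splitlines():
        m = HOP_RE.match(line)
        if m and int(m.group(1)) == ttl:
            rtts = [float(x) for x in re.findall(r"([\d.]+) ms", m.group(4))]
            return m.group(2), rtts
    return None, []  # hop did not answer ("* * *")

sample = " 5  as10029.del.extreme-ix.net (192.0.2.5)  10.628 ms  8.802 ms  9.609 ms"
print(hop_rtts(sample, 5))
# -> ('as10029.del.extreme-ix.net', [10.628, 8.802, 9.609])
```

Smokeping then feeds those per-probe RTTs into its usual smoke graphs, exactly as it would for a normal ping target.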

Let’s look at a real-world case: one of the ISPs serving my home is IAXN AS134316, and they peer with my ex-employer’s network, Spectra AS10029, at Extreme IX in Delhi. Let’s see how a traceroute to Spectra’s anycast DNS looks from my home.

traceroute -P icmp
traceroute to (, 64 hops max, 72 byte packets
 1  router01.rtk.anuragbhatia.com (  2.818 ms  1.876 ms  4.274 ms
 2 (  4.258 ms  4.301 ms  5.953 ms
 3 (  5.490 ms  5.916 ms  5.257 ms
 4 (  11.349 ms  9.246 ms  9.430 ms
 5  as10029.del.extreme-ix.net (  10.628 ms  8.802 ms  9.609 ms
 6  resolver1.anycast.spectranet.in (  8.446 ms  9.113 ms  10.699 ms

Now, hop 5 here is likely the interface on Spectra’s Delhi router which has the Extreme IX IP – Let’s see what we get when we ping it.

ping -c 5
PING ( 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
Request timeout for icmp_seq 3

--- ping statistics ---
5 packets transmitted, 0 packets received, 100.0% packet loss

I cannot ping it. Let’s look at the trace to it to see where it drops.

traceroute -P icmp
traceroute to (, 64 hops max, 72 byte packets
 1  router01.rtk.anuragbhatia.com (  3.196 ms  1.790 ms  4.421 ms
 2 (  5.514 ms  3.624 ms  5.323 ms
 3 (  5.252 ms  4.043 ms  3.671 ms
 4  * * *
 5  * * *
 6  nsg-static- (  14.221 ms  10.963 ms  11.574 ms
 7 (  146.531 ms  147.899 ms  146.065 ms
 8  * * *
 9  * * *
10  * * *

Now, this is an interesting though not very unexpected result. Basically, my ISP – IAXN AS134316 – does not have any route in its routing table for this IP and hence passes it along the default route towards its upstream, Airtel. BGP-wise, IAXN is not supposed to have any route for IX peering IPs anyway, and that’s expected. Likely the router which peers with Extreme IX is different from the router which serves me, and connected routes are possibly not being shared via the IGP, hence the unexpected path. As soon as the traffic hits an Airtel router with a full routing table & no default route, it gets dropped.

In this setup, I cannot reach Spectra’s interface connected to the Extreme IX ( directly if I try to send packets to it. But I do know from the first trace that it sits in the middle of the path when I send packets to So packets can be sent towards the destination with an incremental TTL, and latency to that middle node can be measured and even graphed. This concept works even if private IPs appear before the destination.

So this goes into my Probes config:

+ TraceroutePing

binary = /usr/bin/traceroute # mandatory
binaryv6 = /usr/bin/traceroute6
forks = 5
offset = 50%
step = 300
timeout = 15

and this goes into my Targets config:

probe = TraceroutePing
menu = Spectra via Extreme IX
title = Spectra via Extreme IX
host =
desthost =
maxttl = 15
minttl = 5
pings = 5
wait = 3

How does it work?

A quick reminder on how traceroute works!

Remember the concept of TTL in IP routing. TTL is “time to live”: whenever a router forwards a packet, it decreases the TTL by 1, and when the TTL reaches 0, the router just drops the packet. This ensures loops aren’t as dangerous in layer 3 as they are in layer 2. When a router drops a packet with TTL 0, it replies to the source saying “TTL exceeded”, and the reply packet carries the router’s own source IP address. That way traceroute can send the 1st packet with TTL 1; the 1st router in the chain gets it, reduces the TTL by 1 and (now that the TTL is 0) drops it, replying from its own IP. Next, another packet is sent with TTL 2, and so on.
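The mechanism above can be sketched as a toy model (hostnames are made up; this simulates the TTL bookkeeping, it sends no packets):

```python
# Toy model of traceroute's TTL trick: each router decrements the TTL, and
# whichever router decrements it to 0 drops the packet and answers
# "TTL exceeded" from its own address.
def probe(path, ttl):
    """Return the address that replies to a probe sent with this TTL.
    `path` lists the hops in order; the last element is the destination."""
    for hop, router in enumerate(path, start=1):
        ttl -= 1                      # each router decrements TTL by 1
        if ttl == 0 and hop < len(path):
            return router             # dropped here: "TTL exceeded" reply
    return path[-1]                   # TTL survived: the destination replies

path = ["r1.example", "r2.example", "ix-peer.example", "dst.example"]
print([probe(path, t) for t in range(1, 5)])
# -> ['r1.example', 'r2.example', 'ix-peer.example', 'dst.example']
```

Each incremental TTL elicits a reply from one hop deeper into the path, which is exactly what TraceroutePing exploits: pick the TTL of the silent middle node and time its “TTL exceeded” replies.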

Note: Thanks to the networking folks from OVH Cloud who replied to me with this probe on Twitter. It wasn’t what I was looking for, but quite fascinating and useful!
Time to go back into the routing world! 🙂

22 Nov

My home network…

This is a common discussion topic when I tell friends at Indian network operators that I work from home. As soon as I say that, they ask me – “How good is the connectivity at your home?” And of course, like all answers in engineering – it depends. 🙂

So I have two links at my home: IAXN and Siti broadband. IAXN is an FTTH connection with 50Mbps down and 25Mbps up, while Siti broadband is a DOCSIS connection with ~60Mbps down and 25Mbps up.

Both have reasonable but not 100% uptime. So to get close to 100% uptime, I use both together. These are consumer-grade connections with no BGP. These days many routing platforms support running multiple WAN links for redundancy. I use a Ubiquiti EdgeRouter Lite which my good friend Nat Morris gifted me a while ago. Both links are defined in a “load balancing” group with multiple routing tables, where one link acts as primary and the other is for failover only. Next, policy-based routing on the LAN VLAN sub-interface takes care of routing packets as needed. This documentation covers the setup in detail. For Wi-Fi I use an Asus device which runs purely as an access point in bridged mode with no routing.
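For flavour, a rough sketch of what such an EdgeOS configuration looks like. Interface and names here are placeholders (eth0/eth1 as the two WANs, eth2 vif 10 as the LAN VLAN, group/ruleset names invented), not my actual config – consult the linked documentation for the real procedure:

```
configure
# one primary link, one failover-only link in a load-balance group
set load-balance group WAN_FO interface eth0
set load-balance group WAN_FO interface eth1 failover-only
# policy-based routing: mark LAN traffic to use the load-balance group
set firewall modify PBR rule 10 action modify
set firewall modify PBR rule 10 modify lb-group WAN_FO
# apply the policy on the LAN VLAN sub-interface
set interfaces ethernet eth2 vif 10 firewall in modify PBR
commit ; save
```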

Some other things in use at home network:

  • A Raspberry Pi 3 stays on a dedicated VLAN & runs multiple site-to-site WireGuard VPN tunnels (over multiple WAN links) to several of my remote locations.
  • It also runs OSPF over FRR to ensure the routing table changes dynamically whenever a link changes. I can switch traffic over by adjusting the OSPF cost.
  • My server in Munich runs an NGINX proxy &, apart from doing various tasks, it also hosts a test URL which reverse-proxies via the Raspberry Pi at my home over Siti broadband (only). UptimeRobot monitors that URL for availability, and that’s how I monitor my Siti broadband link, which has no public IP and sits entirely behind CGNAT.
  • Site-to-site VPNs over multiple links, with OSPF dynamically moving traffic, also take care of things like SNMP monitoring of home devices. I use LibreNMS, which is hosted remotely & keeps an eye on the home network.
  • The Raspberry Pi at home also runs Smokeping, where certain predefined targets are forced out of each WAN link to plot latency. That helps in keeping an eye on latency to each ISP’s core, as well as upstream telco cores, via each link.
  • I also host a node for the Galmon project to keep an eye on American (GPS), European, Chinese & Russian navigation satellites. The wonderful map here shows the receivers. Lately the project is getting good coverage for its stats (reference here).
  • I run a DNS resolver at home (again on the Raspberry Pi).
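The CGNAT-monitoring trick in the list above can be sketched as an NGINX server block on the remote server. All names, addresses and ports here are hypothetical (a documentation-style hostname and a made-up WireGuard peer address), just to show the shape of the idea:

```
# Sketch: the Munich server proxies a test URL to the Raspberry Pi at home
# over the WireGuard tunnel that rides the Siti broadband link. If that
# link (or the tunnel over it) is down, the URL fails and UptimeRobot alerts.
server {
    listen 80;
    server_name sitibb-check.example.com;

    location / {
        # 10.99.0.2 = hypothetical WireGuard peer address of the home Pi
        proxy_pass http://10.99.0.2:8080;
        proxy_connect_timeout 5s;
    }
}
```

The nice property is that the monitored link never needs a public IP: reachability is proven end-to-end through the tunnel.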

While there’s automatic switching in case of failure or packet loss beyond a certain rate on the primary WAN link, I also have an Ansible playbook which can be used to tweak the primary/secondary choice, & the playbook is available via the Semaphore web UI so that my family can switch links if they need to.
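A hypothetical sketch of such a playbook, assuming the EdgeOS load-balance setup described earlier (the group name `WAN_FO`, interface names and the use of the `community.network.edgeos_config` module are all assumptions, not my actual playbook):

```yaml
# Swap which WAN link is primary by rewriting the load-balance group.
- hosts: edgerouter
  gather_facts: false
  vars:
    primary: eth0            # change to eth1 and re-run to switch links
    secondary: "{{ 'eth1' if primary == 'eth0' else 'eth0' }}"
  tasks:
    - name: Point the load-balance group at the chosen primary link
      community.network.edgeos_config:
        lines:
          - delete load-balance group WAN_FO
          - set load-balance group WAN_FO interface {{ primary }}
          - set load-balance group WAN_FO interface {{ secondary }} failover-only
```

Exposed through Semaphore, switching links becomes a one-click job rather than an SSH session.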

So the end result is close to 100% uptime (a ~30-second outage if the primary fails), no irritating Wi-Fi switching, and push notifications on my phone about an outage on either link (via UptimeRobot). Usually there’s an outage once in 30 days, not because of the WAN links but because I have shut things down to clean up the dust.

05 Apr

Tata – Airtel domestic peering IRR filtering and OpenDNS latency!

Last month I noticed quite high latency to Cisco’s OpenDNS from my home fibre connection. The provider at home is IAXN (AS134316), which peers with content folks in Delhi besides taking transit from Airtel.

ping -c 5
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=51 time=103 ms
64 bytes from icmp_seq=2 ttl=51 time=103 ms
64 bytes from icmp_seq=3 ttl=51 time=103 ms
64 bytes from icmp_seq=4 ttl=51 time=103 ms
64 bytes from icmp_seq=5 ttl=51 time=103 ms
--- ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 103.377/103.593/103.992/0.418 ms

This is a bit on the higher side for Haryana to Mumbai (OpenDNS locations list here). My ISP backhauls from Faridabad, which is probably 6-8ms away from my city, plus 2-3ms further to Delhi, and from there around 30ms to Mumbai. Thus latency should be around ~40-45ms.

Here’s how the forward trace looked:

traceroute to (, 30 hops max, 60 byte packets
 1 (  0.730 ms  0.692 ms  0.809 ms
 2  axntech-dynamic- (  4.904 ms  4.314 ms  4.731 ms
 3 (  6.000 ms  6.414 ms  6.326 ms
 4 (  6.836 ms  7.135 ms  7.047 ms
 5  nsg-static- (  9.344 ms  9.416 ms  9.330 ms
 6 (  62.274 ms (  66.874 ms (  61.297 ms
 7 (  85.789 ms  82.250 ms  79.591 ms
 8 (  110.049 ms (  114.350 ms  113.673 ms
 9 (  112.598 ms (  114.889 ms (  113.415 ms
10 (  125.770 ms  125.056 ms  123.779 ms
11  resolver1.opendns.com (  113.648 ms  115.044 ms  106.066 ms

The forward trace looks fine, except that latency jumps as soon as we hit the Tata AS4755 backbone. OpenDNS connects with Tata AS4755 inside India and announces their anycast prefixes to them. If the forward trace is logically correct but has high latency, it often reflects a bad return path. Thus I requested friends at OpenDNS to share the return path towards me. As expected, it was via Tata AS6453 Singapore.

Here’s what Tata AS4755 Mumbai router had for IAXN prefix:

BGP routing table entry for
Paths: (1 available, best #1, table Default-IP-Routing-Table)
Not advertised to any peer
6453 9498 134316 134316 134316 134316 134316 134316 134316 134316 134316 134316 from (
Origin IGP, localpref 62, valid, internal, best
Community: 4755:44 4755:97 4755:888 4755:2000 4755:3000 4755:47552 6453:50 6453:3000 6453:3400 6453:3402
Originator:, Cluster list:
Last update: Mon Mar 25 15:26:36 2019

Thus what was happening is this:

Forward path: IAXN (AS134316) > Airtel (AS9498) > Tata (AS4755) > OpenDNS (AS36692)

Return path: OpenDNS (AS36692) > Tata (AS4755) > Tata (AS6453) > Airtel (AS9498) > IAXN (AS134316)

While this may seem like a Tata – Airtel routing issue, it wasn’t. I could see some of the prefixes with a direct path as well. Here’s a trace from the Tata AS4755 Mumbai PoP to an IP from a different pool of IAXN:

traceroute to (, 15 hops max, 60 byte packets
1 * * *
2 ( 0.911 ms 0.968 ms 0.643 ms
3 ( 1.233 ms 0.821 ms 0.810 ms
4 ( 23.540 ms 23.454 ms 23.367 ms
5 ( 49.175 ms 48.832 ms 49.107 ms
6 ( 48.777 ms ( 49.043 ms ( 54.879 ms
7 ( 60.865 ms 60.540 ms 60.644 ms

This clearly was fine. So why was Tata treating one prefix differently from the other? The reason lies in the following:

  • Airtel (AS9498) very likely peers with Tata (AS4755). They do interconnect for sure, as we see in the traceroutes, and my understanding is that it’s based on settlement-free peering for Indian traffic.
  • Airtel (AS9498) buys IP transit from Tata (AS6453) (besides a few others). Tata AS6453 carries the routing announcements to other networks in the transit-free zone, which confirms that Airtel (at least technically) is in a downstream customer relationship here.
  • Tata (AS4755) has IRR-based filters on peering, but Tata (AS6453) does not for its downstreams. Hence while Tata rejected the route in India, they did accept it at the Singapore PoP.
  • My IP was from a prefix with no valid route object at any of the key IRRs like ALTDB, APNIC or RADB. But the other prefix did have a valid route object in APNIC.
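The asymmetry described above boils down to a simple predicate. A toy illustration (the prefixes are documentation addresses, not the real ones; the filter logic is reduced to its essence):

```python
# Route objects registered in an IRR (APNIC, ALTDB, RADB, ...), keyed as
# (prefix, origin ASN). Only one of the two prefixes has one.
ROUTE_OBJECTS = {("198.51.100.0/24", 134316)}

def accept(prefix: str, origin_asn: int, irr_filtered: bool) -> bool:
    """Would this edge accept the announcement?"""
    if not irr_filtered:
        return True                         # unfiltered transit: anything goes
    return (prefix, origin_asn) in ROUTE_OBJECTS

# No route object: rejected on the IRR-filtered peering in India (AS4755),
# accepted on the unfiltered transit in Singapore (AS6453) -> long return path.
print(accept("203.0.113.0/24", 134316, irr_filtered=True))   # False
print(accept("203.0.113.0/24", 134316, irr_filtered=False))  # True
# Valid route object: accepted on the direct peering too -> short return path.
print(accept("198.51.100.0/24", 134316, irr_filtered=True))  # True
```

So the route with a valid object took the short domestic path, while the one without it could only return via the Singapore transit edge.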

Now, almost 10 days later, my ISP has changed the BGP announcement and is announcing (which does have a valid route object in APNIC). This fixes the routing problem and gives me pretty decent latency to OpenDNS:

ping -c 5
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=55 time=52.552 ms
64 bytes from icmp_seq=1 ttl=55 time=53.835 ms
64 bytes from icmp_seq=2 ttl=55 time=53.330 ms
64 bytes from icmp_seq=3 ttl=55 time=52.700 ms
64 bytes from icmp_seq=4 ttl=55 time=52.504 ms
--- ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 52.504/52.984/53.835/0.518 ms

So if you are a network operator originating prefixes, please do document them in one of the IRRs. You can do that via the IRR of your RIR (APNIC, ARIN etc.) or a free IRR like ALTDB. If you have downstreams, make sure to create an AS-SET, add your downstream ASNs to it, and also list that AS-SET on PeeringDB for the world to see!
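For reference, a route object and an AS-SET in RPSL look roughly like this (all values here are placeholders – documentation prefix, private-range ASNs, made-up maintainer – substitute your own):

```
route:      198.51.100.0/24
descr:      Example Networks - customer pool
origin:     AS64500
mnt-by:     MAINT-EXAMPLE
source:     ALTDB

as-set:     AS-EXAMPLE
descr:      Example Networks and downstream customers
members:    AS64500, AS64501
mnt-by:     MAINT-EXAMPLE
source:     ALTDB
```

Once the route object exists, IRR-filtering networks (like the Tata AS4755 peering edge in this story) can build prefix filters from it and will accept the announcement.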

Misc Notes

  • Posted strictly in my personal capacity; this has nothing to do with my employer.
  • Thanks to folks from Cisco/OpenDNS for quick replies with relevant data which helped in troubleshooting. 🙂