25 Oct

NIXI root DNS servers and updates

It has been a while since I checked the status of the root servers hosted at NIXI. As per their official member list, the line-up stays the same, i.e. i root in Mumbai, K root in Noida and F root in Chennai.

i root seems to be up!

show ip bgp neighbors 218.100.48.75 received-routes
       There are 5 received routes from neighbor 218.100.48.75
Searching for matching routes, use ^C to quit...
Status A:AGGREGATE B:BEST b:NOT-INSTALLED-BEST C:CONFED_EBGP D:DAMPED
       E:EBGP H:HISTORY I:IBGP L:LOCAL M:MULTIPATH m:NOT-INSTALLED-MULTIPATH
       S:SUPPRESSED F:FILTERED s:STALE
       Prefix             Next Hop        MED        LocPrf     Weight Status
1      192.36.148.0/24    218.100.48.75   0          100        0      BE    
         AS_PATH: 8674 29216
2      194.58.198.0/24    218.100.48.75   0          100        0      BE    
         AS_PATH: 8674 56908
3      194.58.199.0/24    218.100.48.75   0          100        0      BE    
         AS_PATH: 8674 56908
4      194.146.106.0/24   218.100.48.75   0          100        0      BE    
         AS_PATH: 8674
5      194.146.107.0/24   218.100.48.75   0          100        0      BE    
         AS_PATH: 8674

K root seems to be down!

Router: NIXI Delhi (Noida)

Command: show ip bgp neighbors 218.100.48.6 received-routes


show ip bgp neighbors 218.100.48.6 received-routes
Inbound soft reconfiguration not enabled for neighbor 218.100.48.6

F root seems to be up!

show ip bgp neighbors 218.100.48.135 received-routes
       There are 1 received routes from neighbor 218.100.48.135
Searching for matching routes, use ^C to quit...
Status A:AGGREGATE B:BEST b:NOT-INSTALLED-BEST C:CONFED_EBGP D:DAMPED
       E:EBGP H:HISTORY I:IBGP L:LOCAL M:MULTIPATH m:NOT-INSTALLED-MULTIPATH
       S:SUPPRESSED F:FILTERED s:STALE
       Prefix             Next Hop        MED        LocPrf     Weight Status
1      192.5.5.0/24       218.100.48.135  10         100        0      ME    
         AS_PATH: 24049 3557 3557

At least 2 out of 3 root servers seem to be up, but for some reason my connection in Haryana isn't hitting the i root local instance. For F root, it is definitely taking me to the local instance.

i root latency check from my home: 

ping -c 5 i.root-servers.net.
PING i.root-servers.net (192.36.148.17) 56(84) bytes of data.
64 bytes from i.root-servers.net (192.36.148.17): icmp_seq=1 ttl=52 time=156 ms
64 bytes from i.root-servers.net (192.36.148.17): icmp_seq=2 ttl=52 time=155 ms
64 bytes from i.root-servers.net (192.36.148.17): icmp_seq=3 ttl=52 time=155 ms
64 bytes from i.root-servers.net (192.36.148.17): icmp_seq=4 ttl=52 time=156 ms
64 bytes from i.root-servers.net (192.36.148.17): icmp_seq=5 ttl=52 time=155 ms

--- i.root-servers.net ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4004ms
rtt min/avg/max/mdev = 155.817/156.034/156.481/0.551 ms

That latency is clearly too high; latency from my location to Mumbai is typically 30-40ms. Let's trace to i root.

traceroute i.root-servers.net.
traceroute to i.root-servers.net. (192.36.148.17), 30 hops max, 60 byte packets
 1  172.16.0.1 (172.16.0.1)  0.590 ms  0.664 ms  0.774 ms
 2  103.201.140.218 (103.201.140.218)  3.500 ms  3.413 ms  3.495 ms
 3  10.10.26.1 (10.10.26.1)  6.182 ms  6.304 ms  6.012 ms
 4  10.10.26.9 (10.10.26.9)  6.318 ms  6.047 ms  5.964 ms
 5  nsg-static-77.249.75.182-airtel.com (182.75.249.77)  45.636 ms  44.225 ms  44.144 ms
 6  182.79.191.89 (182.79.191.89)  56.411 ms 182.79.181.218 (182.79.181.218)  59.690 ms 182.79.153.86 (182.79.153.86)  66.090 ms
 7  182.79.149.95 (182.79.149.95)  207.319 ms 182.79.217.94 (182.79.217.94)  57.670 ms 182.79.149.95 (182.79.149.95)  207.285 ms
 8  182.79.177.101 (182.79.177.101)  187.845 ms 182.79.224.134 (182.79.224.134)  183.999 ms 182.79.224.124 (182.79.224.124)  180.657 ms
 9  182.79.146.218 (182.79.146.218)  211.258 ms 182.79.154.2 (182.79.154.2)  187.929 ms 182.79.154.10 (182.79.154.10)  192.907 ms
10  182.79.149.103 (182.79.149.103)  183.405 ms  181.645 ms  181.540 ms
11  peering.r1.lnx.dnsnode.net (195.66.225.151)  157.300 ms  157.214 ms  157.293 ms
12  i.root-servers.net (192.36.148.17)  157.364 ms  156.423 ms *

Thus Airtel is taking me all the way to London. (LNX is actually the airport code for Smolensk Airport in Russia, but the route clearly shows traffic being exchanged at LINX. Someone at Netnod got into the habit of writing LINX as LNX, which is confusing.)

I see the same by querying id.server and hostname.bind in CHAOS class.

dig chaos @192.36.148.17 id.server txt  +short
"s1.lnx"

dig chaos @192.36.148.17 hostname.bind  txt  +short
"s1.lnx"

So, for now, Airtel is preferring the route learnt via LINX peering over the route learnt at NIXI. In a check across Indian RIPE Atlas probes, I see that out of 50 probes, 23 are hitting s1.mum in Mumbai, 19 are hitting LINX London (s1.lnx) and 1 (which is hosted on NKN) is hitting s1.amx in Amsterdam (json data here).

Why does this happen?

It's often a lack of peering and/or a case of preferred routes. For smaller networks, it's simply missing peering. For larger networks, it's about which routes they prefer and which they don't. Here's a view of networks, with their ASNs, sorted by latency wherever a RIPE Atlas probe was present (measurement link here).
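
One way to get an outside view of who prefers which route is to look at the AS paths that RIPE RIS route collectors see for i root's prefix. A rough sketch using the RIPEstat looking-glass data call (the endpoint is real, but treat the exact JSON layout as an assumption; the recursive jq filter simply pulls out any as_path field it finds and counts distinct paths):

curl -s "https://stat.ripe.net/data/looking-glass/data.json?resource=192.36.148.0/24" \
  | jq -r '.. | .as_path? // empty' | sort | uniq -c | sort -rn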

So what can be done about it? 

NIXI needs to be more attractive to various (smaller) networks, which it clearly is not, since it does not have a single content player connected to it due to policy issues. Furthermore, Airtel customers need to buzz Airtel and request a better route to i root's local instance.

Comments & thoughts expressed in the post are personal and have nothing to do with my employer. I am also volunteering to support the tech platform for BharatIX to facilitate peering.

26 Oct

K root route leak by AS49505 – Selectel, Russia

There seems to be an ongoing route leak by AS49505 (Selectel, Russia) for the K root server.

K root server’s IP: 193.0.14.129
Origin Network: AS25152
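
A quick sanity check of the legitimate origin is RIPE's RIS whois service; since this is a path leak rather than an origin hijack, it should still show AS25152 as the origin for 193.0.14.0/24 (a sketch, output omitted):

whois -h riswhois.ripe.net 193.0.14.129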

 

Here's a trace from Airtel's looking glass, Delhi PoP:

Mon Oct 26 16:21:18 GMT+05:30 2015
traceroute 193.0.14.129

Mon Oct 26 16:21:22.053 IST

Type escape sequence to abort.
Tracing the route to 193.0.14.129

 1   * 
    203.101.95.146 19 msec  4 msec 
 2  182.79.224.73 14 msec  3 msec  1 msec 
 3  14.141.116.89.static-Delhi.vsnl.net.in (14.141.116.89) 7 msec  3 msec  2 msec 
 4  172.23.183.134 26 msec  45 msec  26 msec 
 5  ix-0-100.tcore1.MLV-Mumbai.as6453.net (180.87.38.5) 151 msec  153 msec  152 msec 
 6  if-9-5.tcore1.WYN-Marseille.as6453.net (80.231.217.17) [MPLS: Label 383489 Exp 0] 160 msec  163 msec  155 msec 
 7  if-2-2.tcore2.WYN-Marseille.as6453.net (80.231.217.2) [MPLS: Label 595426 Exp 0] 161 msec  162 msec  162 msec 
 8  if-7-2.tcore2.FNM-Frankfurt.as6453.net (80.231.200.78) [MPLS: Label 399436 Exp 0] 149 msec  151 msec  155 msec 
 9  if-12-2.tcore1.FNM-Frankfurt.as6453.net (195.219.87.2) 164 msec  163 msec  159 msec 
 10 195.219.156.146 153 msec  151 msec  160 msec 
 11 spb03.transtelecom.net (188.43.1.226) 190 msec  192 msec  189 msec 
 12 Selectel-gw.transtelecom.net (188.43.1.225) 185 msec  185 msec  185 msec 
 13 k.root-servers.net (193.0.14.129) 183 msec  204 msec  196 msec 
RP/0/8/CPU0:DEL-ISP-MPL-ACC-RTR-9#

 

The routing information (show route 193.0.14.129 output) from their looking glass doesn't seem useful, since it shows that the router is learning the K root Noida route via NIXI. This is likely because the routing information differs from the actual forwarding information on that device.

So the trace looks extremely weird. Traffic to K root, which does have an anycast instance in Noida, is landing in Russia!

 

Why is that happening?

Let's look at what Tata Communications' (AS6453) routing table has for K root's prefix. I am looking at the feed AS6453 sends to the RIPE RIS RRC03 collector.

anurag@server7:~/temp$ awk -F '|' '$5==6453' rrc03-table-26-Oct-2015.txt|grep 193.0.14.0/24
TABLE_DUMP_V2|10/26/15 08:00:03|A|80.249.209.167|6453|193.0.14.0/24|6453 20485 49505 25152|IGP
anurag@server7:~/temp$
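
For anyone who wants to reproduce this, the pipe-separated table file above can be produced from the raw RRC03 MRT dump with bgpdump. A sketch (the exact filename and path under data.ris.ripe.net are assumptions based on the usual RIS naming scheme):

# 08:00 UTC table dump from RRC03 (AMS-IX, Amsterdam) for 26 Oct 2015
wget http://data.ris.ripe.net/rrc03/2015.10/bview.20151026.0800.gz
# -m = one-line, pipe-separated "machine readable" output (TABLE_DUMP_V2|...)
bgpdump -m bview.20151026.0800.gz > rrc03-table-26-Oct-2015.txt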

 

Let’s analyse this AS_PATH

  1. AS25152 originates the prefix and announces it to AS49505 (Selectel, Russia)
  2. AS49505 is "leaking" the route to its upstream AS20485 (TransTeleCom, Russia)
  3. AS20485 is further propagating the route to Tata Communications AS6453, making it visible globally via the Tata Communications IP backbone (a quick check of how widely the leaked path was accepted follows below)

 

What is the impact of it?

The impact is much higher latency to K root from India. Here's how RIPE Atlas probe 170111, hosted at my home, sees latency to K root:

[Graph: K root performance (latency) as seen from the home probe]

 

As per the change in the graph, the leak started on 24th Oct at 9am UTC and resulted in a latency jump of over 180ms.
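
The timing can also be cross-checked against BGP updates for the prefix, e.g. via the RIPEstat bgp-updates data call (again a sketch: the parameter names are the standard RIPEstat ones, but the JSON layout is an assumption, so the jq filter just digs out any "path" values and counts those containing the leaking ASN):

curl -s "https://stat.ripe.net/data/bgp-updates/data.json?resource=193.0.14.0/24&starttime=2015-10-24T06:00&endtime=2015-10-24T12:00" \
  | jq -r '.. | .path? // empty | tostring' | grep -cw 49505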

 

Disclaimer: Post, comments, thoughts and analysis are in a personal capacity and in no way linked to my employer.

06 Oct

K root server – Noida anycast and updates

K root in Noida seems not to be getting enough traffic for quite some time, and connectivity does seem a bit broken. This is a blog post following up on Dyn's excellent and detailed post about how TIC leaked the world-famous 193.0.14.0/24 address space used by AS25152. It was also good to read this post from RIPE NCC written by my friend Emile (and thanks to him for crediting me for flagging that traffic was hitting instances outside!)

 

The route leak…

TIC AS48159 was supposed to keep the route within its IGP, but it leaked it to Omantel AS8529 – a large international backbone which propagated the leak further into the global table. It was a mistake by both players, primarily by TIC for leaking the route.

 

If we look at the IPv4 route propagation graph of Omantel AS8529 on the Hurricane Electric BGP toolkit, it shows two important ASNs:

 

[Graph: Omantel AS8529 IPv4 route propagation, Hurricane Electric BGP toolkit]

 

 

This shows AS9498 (Bharti Airtel) and AS6453 (Tata Communications). Both of these are extremely important networks and two of the large international and domestic IP transit providers in India. Very likely Omantel is a customer of Bharti Airtel, and we can check the IRR record of Airtel as published in their PeeringDB record: AS9498:AS-BHARTI-IN

 

Anurags-MacBook-Pro:~ anurag$ whois -h whois.apnic.net AS9498:AS-BHARTI-IN |grep -w AS8529
members: AS38476,AS45219,AS45264,AS45283,AS45514,AS45451,AS37662,AS45491,AS7642,AS45517,AS45514:AS-TELEMEDIA-SMB,AS45609,AS38740,As131210,AS45335,AS23937,AS132045,AS8529,AS132486,AS8164,AS133967,AS37048
Anurags-MacBook-Pro:~ anurag$

 

This also confirms the same. Airtel did pick this route, and since it was a customer route, it had a higher local preference than the peering route Airtel learnt at NIXI Noida from K root. For now the route leak is fixed, and Airtel seems to have good routing with the K root anycast instance in Noida.

 

Current status

Tata Communications is still not picking up the announcement of the K root anycast instance from Noida, since their peering session at NIXI Noida has been down for a long time. NIXI moved from STPI to Netmagic, Sector 63, Noida in August (see the heavy drop of traffic in the NIXI Noida graphs here). From that time onwards, Tata's domestic backbone AS4755's peering session seems to be down.

NIXI Looking Glass - show ip bgp summary

Router: NIXI Delhi (Noida)

Command: show ip bgp summary


BGP router identifier 218.100.48.1, local AS number 24029
BGP table version is 541676, main routing table version 541676
10616 network entries using 1528704 bytes of memory
13657 path entries using 1092560 bytes of memory
1546/1197 BGP path/bestpath attribute entries using 210256 bytes of memory
1275 BGP AS-PATH entries using 40472 bytes of memory
566 BGP community entries using 22196 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 2894188 total bytes of memory
BGP activity 523875/512278 prefixes, 1016379/1001610 paths, scan interval 60 secs

Neighbor        V           AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
  218.100.48.6    4        25152   35502  102431   541675    0    0 3w3d              1
  218.100.48.10   4        10029    8285   15774   541675    0    0 2d16h           194
  218.100.48.12   4         9583    4750    9899   541675    0    0 2d16h          1969
  218.100.48.13   4        17439  109297  191050   541675    0    0 9w5d             32
  218.100.48.15   4         9829     713    2669   541675    0    0 11:04:52        857
  218.100.48.17   4        17426    1205    3995   541675    0    0 19:57:29         17
  218.100.48.20   4         9498  190999  159646   541675    0    0 3w3d           7254
  218.100.48.21   4         4637   63761  141723   541675    0    0 6w2d              5
  218.100.48.23   4        63829   30808   80566   541675    0    0 2w5d              5
  218.100.48.25   4        17754   20071   50107   541675    0    0 1w5d            102
  218.100.48.26   4        18101   14641   29277   541675    0    0 5d00h           190
  218.100.48.27   4        17488   22887   58026   541675    0    0 2w0d            354
  218.100.48.28   4        55410   58592  107852   541675    0    0 2w2d           2637
  218.100.48.29   4        10201       0       0        1    0    0 2d08h    Active
  218.100.48.31   4        55836    9164   23591   541675    0    0 6d08h             7
  218.100.48.34   4        45528   38354  107593   541675    0    0 3w5d             18
  218.100.48.36   4       132215   27000   56646   541675    0    0 1w2d             15
  218.100.48.40   4       132453       0       0        1    0    0 2d07h    Idle

 

As per NIXI's connected parties page, Tata Comm's IP is 218.100.48.30. From NIXI's looking glass, there seems to be no peer on that IP!

NIXI Looking Glass - show ip bgp neighbors 218.100.48.30 routes

Router: NIXI Delhi (Noida)

Command: show ip bgp neighbors 218.100.48.30 routes


% No such neighbor or address family

 

Hence for now Tata Comm isn't getting the route at all from the Noida instance, and that explains the bad outbound path.

 

Example of trace from Tata Comm to K root:

## AS4755/TATACOMM-AS - TATA Communications formerly VSNL is Leading ISP (2.7% of browser users in IN)
#prb:15840 dst:193.0.14.129
1 () 192.168.34.1 [0.344, 0.426, 17.445]
2 err:{u'x': u'*'}
3 (AS4755) 115.114.137.158.static-pune.vsnl.net.in [2.73, 2.916, 2.921] |Pune,Maharashtra,IN|
4 () 172.29.250.33 [5.659, 5.789, 6.274]
5 (AS6453) ix-0-100.tcore1.mlv-mumbai.as6453.net [5.143, 5.168, 5.755]
6 (AS6453) if-9-5.tcore1.wyn-marseille.as6453.net [125.474, 125.554, 125.596] |Marseille,Provence-Alpes-Côte d'Azur,FR|
7 (AS6453) if-2-2.tcore2.wyn-marseille.as6453.net [125.723, 125.739, 126.525] |Marseille,Provence-Alpes-Côte d'Azur,FR|
8 (AS6453) if-7-2.tcore2.fnm-frankfurt.as6453.net [126.535, 126.788, 127.22]
9 (AS6453) if-12-2.tcore1.fnm-frankfurt.as6453.net [125.75, 125.828, 125.871]
10 (AS6453) 195.219.156.146 [262.957, 265.3, 266.39]
11 (AS20485) spb03.transtelecom.net [297.919, 297.954, 302.452] |Saint-Petersburg,St.-Petersburg,RU|
12 (AS20485) selectel-gw.transtelecom.net [288.789, 296.574, 298.442]
13 (AS25152) k.root-servers.net [296.981, 297.042, 297.118]

 

The same holds for its downstream customers whose outbound is via TCL:

## AS45528/TDN - Tikona Digital Networks Pvt Ltd. (1.4% of browser users in IN)
#prb:22793 dst:193.0.14.129
1 () 10.135.150.254 [0.521, 0.539, 0.814]
2 (AS45528) 1.22.55.185 [5.774, 7.721, 8.195]
3 (AS4755) 115.113.133.125.static-mumbai.vsnl.net.in [7.282, 14.754, 48.013] |Mumbai,Maharashtra,IN|
4 (AS6453) if-2-590.tcore2.l78-london.as6453.net [121.089, 122.755, 124.416] |London,England,GB|
5 (AS6453) if-2-2.tcore1.l78-london.as6453.net [121.828, 122.077, 123.869] |London,England,GB|
6 (AS6453) if-17-2.tcore1.ldn-london.as6453.net [120.716, 122.008, 122.768] |London,England,GB|
7 (AS6453) 195.219.83.10 [122.039, 123.532, 125.424]
8 (AS8468) te2-2.interxion.core.enta.net [125.262, 126.587, 127.04]
9 (AS8468) 188-39-11-66.static.enta.net [122.424, 123.028, 123.163]
10 (AS5459) ge0-1-101.tr1.linx.net [121.656, 124.826, 125.182] |London,England,GB|
11 (AS5459) fe3-0.tr4.linx.net [120.654, 120.721, 138.858] |London,England,GB|
12 (AS5459) g00.router.linx.k.ripe.net [123.306, 123.536, 125.486] |London,England,GB|
13 (AS25152) k.root-servers.net [121.285, 122.653, 122.942]

 

 

Another issue causing serious trouble around K root is what appears to be a broken IP transit pipe for the K root Noida node. Due to the way NIXI works, K root must have an IP transit pipe. I pointed out long back the broken connectivity of root DNS servers due to return path problems. After that, both K root and i root got transit, but it seems that after NIXI moved over, IP transit has been broken for the current setup in Netmagic.

 

Why does a “local node” of a root server need IP transit?

It needs transit because:

    1. NIXI has a weird “x-y” pricing model where the requester pays, and this leads to quite a high settlement amount for a network with high inbound traffic (an eyeball network) – even a few times more than transit (paying Rs 5/GB!). This leads to a scenario where networks do “partial prefix announcement” to keep their traffic balanced (or slightly in the outbound direction) to avoid the high settlement cost. Hence most such eyeball networks announce their regional routes but not all routes, while they still learn K root's route and inject it into their IGP. This results in a case where K root's 193.0.14.0/24 is learnt by networks in West and South India, so there's a forward path from customers >>> K root Noida node. But since these networks aren't announcing their West or South Indian routes at NIXI Noida, there is no return path for the packets. Thus, for the root DNS to stay operationally stable (which it should, since it is critical), it must have a transit / default route to return packets, as a last resort, to IPs which aren't visible via peering (a quick way to spot this kind of breakage is sketched after this list).
    2. A similar case arises with other random leaked routes, e.g. if a large ISP decides to learn the K root route and announce it into a customer's table, leading to a Customer > Large network > K root Noida forward path, while that customer's route is not announced at NIXI, resulting in no return path.
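
As mentioned in the list above, a rough way to spot this kind of breakage from an affected network is to compare the forward path with whether answers actually come back: if the trace heads towards the Noida node but the query times out, replies are most likely being dropped on the return path (a sketch):

# forward path towards K root
traceroute -n 193.0.14.129
# does an answer make it back? short timeout, single try
dig @193.0.14.129 hostname.bind CH TXT +short +time=2 +tries=1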

 

 

So in short – it does need transit, but just for outbound traffic, not for announcing routes over that transit.

I have informed RIPE NCC of the broken connectivity issue and their team is actively working on a fix. Hopefully it will be fixed very soon!

 

With the hope that your DNS is not getting resolved from the other side of the world, good night! 🙂

 

Disclaimer: As usual – thoughts & comments are completely personal.