Just back from APRICOT 2018. As I mentioned in my previous blog post, APNIC had its first hackathon and it was fun (APNIC's blog post here). One of the projects was about ranking CDNs using RIPE Atlas data. To achieve this, the team was trying to find strings/hostnames which they could trace to in order to figure out the nearby CDN. As part of that, I suggested they look into www.facebook.com and carefully note the sources from which page elements get loaded. It's quite common that Facebook.com (or Google.com, by the same logic) is hosted on a server at a large PoP, while FNA (or GGC) nodes serve only specific static content. FNA nodes, of course, sit on the IPs of the ISP hosting them.

In the source list we found scontent.fktm1-1.fna.fbcdn.net, which suggests FNA hostnames follow the pattern scontent.fxxx1-1.fna.fbcdn.net, where xxx is the IATA airport code. The 1-1 probably means 1st PoP at the 1st ISP over there (a strong guess!); if there are more FNA nodes in a given area, the numbers go further up. The team used this, and for now the project is done. But while I was on the way back to India, it struck me that this could yield very interesting data if we pulled the full picture by querying hostnames built from all possible IATA airport codes. This logic can be used for two things:
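As a minimal sketch of that enumeration idea (assuming the hostname pattern above holds; the IATA list here is a tiny hypothetical sample, and a real run would iterate over the full set of codes):

```python
import socket

# Hypothetical sample of IATA airport codes; a real run would use the
# full IATA dataset (a few thousand codes).
iata_codes = ["ktm", "del", "bom", "maa", "ccu", "sin", "hkg"]

for code in iata_codes:
    # Assumed pattern: scontent.f<iata>N-M.fna.fbcdn.net
    hostname = "scontent.f{}1-1.fna.fbcdn.net".format(code)
    try:
        ip = socket.gethostbyname(hostname)
        print("{} -> {}".format(hostname, ip))
    except socket.gaierror:
        # No FNA node resolvable for this code/index combination
        pass
```

The resolved IPs can then be mapped to the hosting ISP's ASN to see which networks run an FNA node.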
Writing this post from my hotel room in Kathmandu. I found that many of the servers appear to be DNS resolvers, which is unusual.
Have a look at these weird DNS replies:
APNIC and RIPE NCC are doing a hackathon at APRICOT 2018. It just started today, after some light interaction with various participating members yesterday. The theme of the hackathon is IPv6. Many cool projects were suggested yesterday, and teams started working today on shortlisted projects like:
- **A tool for ranking CDNs** - A tool based on RIPE Atlas data to rank CDNs by latency across different regions.
- **An IPv6 fun word game** - Anyone with a member account can suggest a word and compete with other members who share more IPv6 addresses. It may include showcasing creative use of hexadecimal strings in an IPv6 address, as Facebook famously does with face:b00c in its IPv6 pools (see the sketch after this list).
- **IPv4 and IPv6 network security** - A study of attacks and overall security in IPv6. It would involve studying, and possibly reporting on, various attack vectors in the IPv6 domain.
- **A countrywide report on IPv6 deployment** - I have yet to see how this differs from other existing reports.
- **IPv6 tunnel detection** - Figuring out where tunnels are used, finding the IPv4 addresses of the endpoints via a JavaScript plugin, and possibly comparing IPv4 vs IPv6 performance.
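As a toy illustration of the word-game idea (my own sketch, not the team's code; the substitution table is an assumption about what the game might allow):

```python
# Letters natively available in hexadecimal, plus common look-alike
# digit substitutions (an assumption about the game's rules).
HEX_LETTERS = set("abcdef")
SUBSTITUTIONS = {"o": "0", "l": "1", "i": "1", "s": "5", "t": "7", "g": "9"}

def to_hex_word(word):
    """Return the hex spelling of a word, or None if it can't be spelled."""
    out = []
    for ch in word.lower():
        if ch in HEX_LETTERS or ch.isdigit():
            out.append(ch)
        elif ch in SUBSTITUTIONS:
            out.append(SUBSTITUTIONS[ch])
        else:
            return None
    return "".join(out)

for word in ["decade", "coffee", "dead", "beef", "cable"]:
    print(word, "->", to_hex_word(word))
```

Groups of up to four such hex digits can then be dropped into an IPv6 address, face:b00c style.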
Let's see how things go in the next 12 hours. Super fun. Things should show up on GitHub in the next few hours. :)
And here goes the first blog post of 2018. The last few months were busy with some major changes in my personal life. :)
I looked into Amazon India's connectivity with various ASNs tonight. Here's how it looks. (Note: jump to the bottom to skip the traces and look at the summary data.)
Traceroutes
Amazon India to Vodafone India
traceroute to 118.185.107.1 (118.185.107.1), 30 hops max, 60 byte packets
1 ec2-52-66-0-128.ap-south-1.compute.amazonaws.com (52.66.0.128) 21.861 ms ec2-52-66-0-134.ap-south-1.compute.amazonaws.com (52.66.0.134) 19.244 ms 19.233 ms
2 100.64.2.200 (100.64.2.200) 14.789 ms 100.64.0.200 (100.64.0.200) 20.731 ms 100.64.3.12 (100.64.3.12) 13.187 ms
3 100.64.0.193 (100.64.0.193) 14.418 ms 100.64.3.69 (100.64.3.69) 15.469 ms 100.64.3.67 (100.64.3.67) 15.946 ms
4 100.64.16.67 (100.64.16.67) 0.343 ms 100.64.17.165 (100.64.17.165) 0.312 ms 100.64.17.199 (100.64.17.199) 0.313 ms
5 52.95.67.213 (52.95.67.213) 1.942 ms 52.95.67.209 (52.95.67.209) 1.967 ms 52.95.67.213 (52.95.67.213) 1.935 ms
6 52.95.66.218 (52.95.66.218) 4.998 ms 4.694 ms 52.95.66.130 (52.95.66.130) 4.650 ms
7 52.95.66.67 (52.95.66.67) 1.752 ms 52.95.66.89 (52.95.66.89) 1.850 ms 1.806 ms
**8 52.95.217.183 (52.95.217.183) 3.111 ms 3.102 ms 3.088 ms <- Amazon India**
**9 182.19.106.204 (182.19.106.204) 3.426 ms 4.547 ms 4.537 ms <- Vodafone India**
10 118.185.107.1 (118.185.107.1) 2.035 ms 2.059 ms 2.039 ms
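To turn traces like this into the summary data (i.e. which AS each hop sits in), one handy option is Team Cymru's DNS-based IP-to-ASN mapping. A minimal sketch, assuming the dnspython package (2.x); the hop IPs below are taken from the trace above:

```python
import dns.exception
import dns.resolver  # pip install dnspython


def asn_for_ip(ip):
    """Look up the origin ASN of an IPv4 address via Team Cymru's
    origin.asn.cymru.com DNS zone."""
    qname = ".".join(reversed(ip.split("."))) + ".origin.asn.cymru.com"
    try:
        answer = dns.resolver.resolve(qname, "TXT")
    except dns.exception.DNSException:
        return None
    # TXT record format: "ASN | prefix | country | registry | date"
    return answer[0].to_text().strip('"').split("|")[0].strip()


# Hops 8-10 from the trace above. Note that the 100.64.0.0/10 hops are
# shared address space (RFC 6598) and won't map to any ASN.
for hop in ["52.95.217.183", "182.19.106.204", "118.185.107.1"]:
    print(hop, "->", asn_for_ip(hop))
```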
A few weeks back an Indian ISP contacted me via the contact form on my blog. The ISP had been struggling with a targeted DDoS attack. For reasons of privacy as well as the stability of their network, I will not name them or their AS number. The attack was much larger than their bandwidth capacity. Their upstream did not share the exact volume of the attack, but from the screenshots they shared I could tell it was a distributed volumetric attack choking their upstream bandwidth.

I suggested the ISP get the blackholing option from their upstream (the preferred way), or buy a cheap server/VM somewhere outside India with BGP (and BGP blackholing) and manually blackhole traffic when an attack comes. They were able to get blackholing enabled by their upstream, and it worked: they started blackholing traffic, which helped them manually drop traffic going towards the IPs under attack. BGP blackholing is important because it lets a network signal its upstream ISP about the pools which are under attack so that traffic towards them is dropped. ISPs further signal the same to their upstreams, and larger networks typically drop that traffic on all their edge routers, i.e. closer to the point where the attack enters.

The next problem that hit the ISP was that it was a pain to manually find the IPs under attack and quickly drop them. I suggested they try FastNetMon, which I first heard of in Job Snijders' presentation at NLNOG (slides here & video here). FastNetMon is developed and maintained by Pavel Odintsov (who works at Cloudflare, AS13335).
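For the detection-to-blackholing wiring, FastNetMon can run a notify script whenever it bans or unbans an IP. Purely as a sketch: the argument order below is an assumption based on the community edition's conventions, and exabgpcli (ExaBGP 4.x) with the RFC 7999 BLACKHOLE community is one possible signalling path, not necessarily what this ISP deployed (FastNetMon also ships its own BGP integrations):

```python
#!/usr/bin/env python3
# Sketch of a FastNetMon notify script. Assumed argument order:
# client IP, data direction, packets per second, action (ban/unban).
import subprocess
import sys

ip, direction, pps, action = sys.argv[1:5]

NEXT_HOP = "192.0.2.1"   # placeholder next-hop; adjust for your setup
BLACKHOLE = "65535:666"  # RFC 7999 well-known BLACKHOLE community

if action == "ban":
    # Tell a local ExaBGP 4.x daemon to announce a /32 blackhole route
    subprocess.run(["exabgpcli", "announce", "route", ip + "/32",
                    "next-hop", NEXT_HOP, "community", "[" + BLACKHOLE + "]"],
                   check=True)
elif action == "unban":
    subprocess.run(["exabgpcli", "withdraw", "route", ip + "/32",
                    "next-hop", NEXT_HOP], check=True)
```

The upstream then has to honour the blackhole community for this to actually drop traffic at their edge.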
It has been some time since I started pushing the Indian community to host RIPE Atlas probes. These probes are small devices designed to be hosted on an end user's connection and run pre-defined as well as user-defined measurements. Measurements include ping, traceroute, DNS lookup, SSL checks, etc. Currently there are 61 active RIPE Atlas probes in India, give or take the 7-8 probes which go offline and come back online when I ask their hosts to check.
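The current count is easy to verify against the RIPE Atlas v2 API (a quick sketch assuming the requests package; status=1 means "connected"):

```python
import requests

# List connected RIPE Atlas probes located in India
resp = requests.get(
    "https://atlas.ripe.net/api/v2/probes/",
    params={"country_code": "IN", "status": 1},
)
resp.raise_for_status()
data = resp.json()

print("Connected probes in India:", data["count"])
for probe in data["results"]:
    print(probe["id"], probe.get("asn_v4"), probe.get("address_v4"))
```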
And breaking the silence over here: the last few months were quite busy. I travelled to Cambodia, Singapore, Taiwan and Hong Kong recently for various tasks, from APNIC's IXP workshop in Cambodia to SGNOG in Singapore, APNIC 44 in Taiwan and so on. Apart from this, I got a solar power setup installed at home here in Haryana, since the grid has been quite unstable (especially this year). I think the overall grid is OK-ish, but it's quite bad in my city due to the construction of an overhead road where a high-voltage line is crossing. That has led to regular long outages in the area. In terms of formal load shedding things are getting better, and I am certain the Indian Govt. will be able to reach its target of 24x7 supply without any shedding before 2022. That will be a key part as we go for electric vehicles in India. But still, I think we won't get a good, stable grid for at least the next 6-7 years. Check out the Vidyut Pravah website, which gives an idea of load and demand across India. And here is the data for Haryana.
Last week I noticed that F-root was showing poor connectivity from Indian RIPE Atlas probes. The graph looked really terrible.
I traced to it from one of the RIPE Atlas probes and saw this trace:
Probe #6107
1 2401:7500:fff0:1::1 0.838 ms 0.747 ms 0.632 ms
2 2400:5200:1c00:d::1 1.755 ms 1.745 ms 1.726 ms
3 2403:0:100::2be 2.089 ms 2.054 ms 2.049 ms
4 2404:a800:2a00::13d 45.589 ms 26.274 ms 33.64 ms
5 2404:a800::178 26.376 ms 25.406 ms 25.276 ms
6 2001:de8:1:2::3 25.363 ms 25.232 ms 25.223 ms
7 * * * *
8 * * * *
9 * * * *
10 * * * *
11 * * * *
Here the last hop before the timeouts, i.e. hop 6, is on the NIXI Chennai peering subnet 2001:de8:1:2::/64. As soon as I saw it, it reminded me of an older issue which broke IPv4 connectivity to the root DNS servers. I blogged about it here, here and here. The underlying problem remains that NIXI's economics are broken due to its charging policy based on the difference between inbound and outbound traffic. This leads to networks accepting routes at all the NIXI exchanges while not announcing their own routes; thus the return path is broken and traffic is essentially blackholed. Earlier this issue was fixed by adding IP transit to these root DNS servers so that a default route stays in place in case everything else fails. It seems the same is missing in the IPv6 world and routes are not being announced. While looking into this, I saw two BGP sessions at NIXI Chennai for F-root:
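As a side note, RIPEstat's looking-glass API gives a quick view of how F-root's covering prefix is seen from RIS route collectors worldwide. A minimal sketch, assuming the requests package and that 2001:500:2f::/48 is still the prefix covering f.root-servers.net's IPv6 address:

```python
import requests

PREFIX = "2001:500:2f::/48"  # covers f.root-servers.net (2001:500:2f::f)

resp = requests.get(
    "https://stat.ripe.net/data/looking-glass/data.json",
    params={"resource": PREFIX},
)
resp.raise_for_status()

# Print a few AS paths per RIS route collector
for rrc in resp.json()["data"]["rrcs"]:
    print(rrc["rrc"], "-", rrc["location"])
    for peer in rrc["peers"][:3]:
        print("   AS path:", peer["as_path"])
```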
A few weeks back I got in touch with Marc from Meghalaya. He offered to host a RIPE Atlas probe at Shillong, and that's an excellent location which isn't on the RIPE Atlas coverage map yet. It took around 5 days for the probe to reach Shillong from Haryana. This is probably the probe at the most beautiful place in India. :) Now that the probe is connected, I thought I'd look into the routing, which is super interesting for far-off places like Shillong. Marc has a BSNL FTTH connection and mentioned not-so-good latency. Let's trace to the 1st IP of the /24 pool on which the probe is hosted:
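For reference, such one-off traceroutes from a specific probe can be scheduled via the RIPE Atlas v2 API. A sketch with placeholder values (the probe ID and target below are hypothetical, and an API key with measurement credits is assumed):

```python
import requests

API_KEY = "YOUR_ATLAS_API_KEY"  # placeholder: needs measurement permission
PROBE_ID = 12345                # hypothetical probe ID
TARGET = "192.0.2.1"            # hypothetical: 1st IP of the probe's /24

measurement = {
    "definitions": [{
        "type": "traceroute",
        "af": 4,
        "target": TARGET,
        "protocol": "ICMP",
        "description": "One-off trace towards Shillong probe's pool",
    }],
    "probes": [{"type": "probes", "value": str(PROBE_ID), "requested": 1}],
    "is_oneoff": True,
}

resp = requests.post(
    "https://atlas.ripe.net/api/v2/measurements/",
    params={"key": API_KEY},
    json=measurement,
)
resp.raise_for_status()
print("Measurement ID:", resp.json()["measurements"][0])
```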