Why does Indian internet traffic route outside of India?

After my last post about home networking, I am jumping back into global routing, more specifically how Indian traffic ends up crossing the globe when it does not need to. This is an old discussion among senior management folks in telcos, policymakers, and more. It boils down to "Does Indian internet traffic route outside of India?" and, if the answer is yes, "Why?" and "How much?"

It became a hot topic, especially after the Snowden leaks. There was even an advisory back in 2018 from the Deputy National Security Advisor to ensure Indian internet traffic stays local (news here). Over time this has come up a few dozen times in my discussions with senior members of the Indian ISP community, individuals, and even latency-sensitive gamers. So I am going to document some of it here. I will stick to whatever can be verified publicly and avoid sharing any private discussions I have had with friends in the respective networks. The data, especially the traceroutes, will carry measurement IDs from RIPE Atlas so they can be independently verified by other network engineers.
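For anyone who wants to repeat those checks, here is a minimal sketch of pulling traceroute results straight from the RIPE Atlas REST API. The measurement ID below is a placeholder, not one of the actual measurements referenced in this post.

```python
import json
import urllib.request

# Placeholder measurement ID; substitute the IDs quoted in the post.
MEASUREMENT_ID = 12345678

URL = f"https://atlas.ripe.net/api/v2/measurements/{MEASUREMENT_ID}/results/?format=json"

with urllib.request.urlopen(URL) as resp:
    results = json.load(resp)

# Each entry is one traceroute run by a single probe.
for result in results:
    print(f"Probe {result['prb_id']} -> {result['dst_addr']}")
    for hop in result.get("result", []):
        # Each hop carries up to three reply packets; print the first
        # reply's source address and RTT if the hop answered at all.
        replies = [p for p in hop.get("result", []) if "from" in p]
        if replies:
            print(f"  hop {hop.get('hop', '?'):>2}: {replies[0]['from']}  {replies[0].get('rtt', '?')} ms")
        else:
            print(f"  hop {hop.get('hop', '?'):>2}: *")
```

Mapping each hop's address back to an AS or country (via whois or RIPEstat) is then enough to see whether a path between two Indian endpoints leaves the country.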

Designing a high-capacity home network

For the last few months, we have been working on setting up a dedicated room for my home office. This gives me the opportunity to plan for changes in the home network.


Key design ideas

  1. Cabling should support needs for now as well as the future, but active hardware stays strictly at what I need right now, since hardware can be upgraded easily. Changing cabling later is not impossible because we have ducts (0.75" ducts everywhere and 1.5" where the run heads towards my office for aggregation), but it is still a few hours' task to plan, pull new cables, etc. Plus, if cables are changed, they have to be changed together; any partial change would be very hard as the 0.75" ducts get filled with cables. Cables are in a star topology but the ducts are daisy-chained (an older POTS design).
     
  2. This may not hold in the Western world, where manpower is very expensive but hardware is relatively cheap; here in India good managed switches, good routers with gigabit ports, policy-based routing, etc. can be quite expensive compared to the cost of cabling. I prefer to aggregate circuits centrally instead of putting switches here & there. So wherever I needed 1 port, I provisioned 2 ports and the associated cabling from that point up to the aggregation.
     
  3. Single-point aggregation has another advantage - it makes it easy to have one single "fat" wall plate with lots of ports and just 1, 2, or 3 port faceplates on the rest of the wall plates. Most of the old telephone boxes are 3 x 3 and can take a maximum of 2 cat6 ports. One might be able to squeeze in 4 ports with the right faceplate, but it would be very hard to manage because there's barely any space to tuck the extra cable into a 3 x 3 box.
     
  4. All cables should be terminated on a faceplate. For the last 8 years I ran network cable between rooms without faceplates, and it gets messy over time: the area gets dirty and the bare ends become a fragile point when disconnecting, cleaning up, etc. So this time I decided that every cable must terminate on a faceplate instead of hanging out of the wall.
     
  5. For cabling, cat6 made the most sense to me. Cat6 cost 7000 INR for a 300m box. I used around half of the cable, so it came to 3500 INR for 150m ($48 USD for 492 feet), and I have the option to return the remaining unused cable (the cost math is sketched after this list).
    I couldn't find cat6a locally, but pricing hints suggested it was way more expensive than cat6. I can run gigabit right away on cat6 and can easily do a 2 x 1Gbps LACP LAG if needed between key locations (the room where the uplinks land and the office where aggregation will happen). Furthermore, I can run 2.5Gbps & 5Gbps on each leg at some point when I upgrade the switch. I might even be able to run 10G over copper, though the longest leg is around the 50m mark including patch cords. I doubt I will need more than 1Gbps to the individual wall jacks where bandwidth is being "consumed" for at least the next 6-7 years, but I might very well need more than 1Gbps between the uplink room (location of the core router and GEPON optical units) and the office (aggregation switch). Thus I also got a two-strand single-mode fiber pulled in. It costs around 5.5 Rs/meter and is a drop cable, so I can add connectors to it directly without splicing (though I would still need a cleaver for a sharp 90-degree cut). I might light it up after a few years, especially once both home uplinks are 1Gbps or above; for now they are 100Mbps (primary) and 50Mbps (secondary). Because the fiber is already in the wall, I can in theory run 100Gbps in the far-off future if I need to.
     
  6. The price of electronics changes very quickly. It makes sense to over-provision on the cable side, but I would always avoid over-provisioning on the router/switch side. The cost of a 10G switch today is probably 10-20x more than what it will be after 6-7 years, when I might actually need that kind of capacity.
     
  7. The highest traffic segment in the home is around the home server (an Intel NUC) with a 4TB HDD attached. It does various monitoring, home NAS, site-to-site VPN & more. There are times when traffic touches 1Gbps between the Intel NUC, core switch, and core router. Its port is 1Gbps, so there is no upgrade path other than replacing the unit in the future (newer NUCs come with a 2.5Gbps port). As cloud computing becomes super cheap, I doubt I will be putting very high compute at home in the near future; as of now, the Intel NUC hosts only what has to be hosted locally.
     
  8. For the faceplates, a friend helped me get Hi-Fi modular plates - 2 x 8 ports each and thus 16-port aggregation. These are basically electrical plates with cat6 modules fitted in, and so far they are working well. Though I have terminated cat6 countless times, it was a hard job on these due to excess cabling. The challenge: if I leave excess cable, 8 "excess cables" per plate make it hard to close the faceplate, while if I leave minimal cable it becomes hard to tweak/fix/make changes in the future. I ended up not cutting the cat6 tearing string and instead tied it along the cable with electrical tape; that way I can remove the tape anytime and pull the string to peel off more of the outer jacket. For the faceplates in the rest of the home, I found Anchor faceplates locally. These are also electrical plates with the option to add a cat6 jack in place of a single "button". For patch cables, I prefer cat5e over cat6 because cat5e is less rigid; I prefer the same in datacenter environments as well. 1Gbps runs just fine on cat5e and cable management is a lot easier. All or some of these can be upgraded to cat6 at some point when I have to run higher speeds (2.5G/5G/10G). I think I made the mistake of not having an extra "box" to hold the slack cable; since it was a fresh design, that could have been done easily during the build. In total this was 13 cat6 cables, 2 ends, 8 conductors each = 13 x 2 x 8 = 208 punch-downs. In reality it was probably closer to 250, as I had to redo a few terminations due to the limited space in the wall box (this count, along with the cable cost from point 5, is worked out in the short sketch below).
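Since points 5 and 8 above mostly come down to simple arithmetic, here is a tiny back-of-the-envelope sketch using only the figures quoted in this post; nothing else is assumed.

```python
# Back-of-the-envelope numbers for the cabling part of this build.
# All figures come straight from the notes above.

CAT6_BOX_PRICE_INR = 7000     # 300m box of cat6
CAT6_BOX_LENGTH_M = 300
CAT6_USED_M = 150             # roughly half the box was used

CABLE_RUNS = 13               # cat6 runs pulled through the ducts
ENDS_PER_RUN = 2              # both ends terminate on a faceplate
CONDUCTORS_PER_END = 8        # 4 pairs per cat6 cable

cat6_cost = CAT6_BOX_PRICE_INR * CAT6_USED_M / CAT6_BOX_LENGTH_M
punch_downs = CABLE_RUNS * ENDS_PER_RUN * CONDUCTORS_PER_END

print(f"cat6 used: {CAT6_USED_M} m costing {cat6_cost:.0f} INR")
print(f"punch-downs: {CABLE_RUNS} x {ENDS_PER_RUN} x {CONDUCTORS_PER_END} = {punch_downs}")
# Prints: cat6 used: 150 m costing 3500 INR
#         punch-downs: 13 x 2 x 8 = 208
```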

 

Remembering M Henri Day & Google Apps forum

Blog post dedicated to my friend M Henri Day from Stockholm, Sweden. Today I learnt that he is no more; he passed away in the first week of December last year. He was one of my few good friends from my college days. We both were so-called "power posters" or "top contributors", as Google named us in its various forums. I was one of the top contributors in Google Apps (G Suite / Google Workspace) and he was… well, to be honest, I don't even recall now, after 11 years, which specific Google product he was active on. I think it was Google Bookmarks, Picasa, and a few other things. We were super active in those forums for no specific reason other than that it was just fun helping people out. Plus that was the time I learnt how DNS works and was very excited to talk about it with everyone. I was out of school, hadn't performed well, and got into a college which was just ok. Truth be told, college was not much fun and life in Radaur was harsh, but somehow I developed a taste for the life there. I documented part of that life in some old posts here and here.

Facebook FNA updates - April 2021

Over the last couple of years I have posted updates on Facebook caching node (FNA) deployments across the world. If you would like to read the logic I use to pull the data, you can check the original post here. While the data is about Facebook FNA, it's highly likely that these networks also host Google GGC nodes and, a bit less often, Akamai caches alongside.

My last post about it was back in Nov 2019 and it seems about time for a fresh check. So here we go…
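The exact pull logic lives in the original post linked above; what follows is only a minimal sketch of the general idea, not necessarily that method. FNA nodes sit inside ISP networks behind hostnames under fna.fbcdn.net, so resolving such a hostname and checking which AS announces the resulting address shows whose network a cache lives in. The hostname below is made up for illustration.

```python
import json
import socket
import urllib.request

# Hypothetical FNA hostname for illustration; real nodes follow the
# <something>.fna.fbcdn.net pattern, but this exact name is made up.
HOSTNAME = "example1-1.fna.fbcdn.net"

ip = socket.gethostbyname(HOSTNAME)

# RIPEstat's prefix-overview call returns the origin AS(es) of the prefix
# covering this IP, which is normally the ISP hosting the cache node.
url = f"https://stat.ripe.net/data/prefix-overview/data.json?resource={ip}"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)["data"]

for origin in data.get("asns", []):
    print(f"{HOSTNAME} -> {ip} announced by AS{origin['asn']} ({origin['holder']})")
```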

How technology loses out in companies...

Just came across this brilliant talk by my friend Bert Hubert. It covers so nicely the mad rush to just outsource everything and how innovation gets lost along the way. While he mentions EU telcos in his examples, unfortunately the situation isn't much different in this part of the world either; operators in South Asia very much suffer from the same problem.

Slides of this presentation are here.