28 Sep

Goodbye BSNL (AS9829) | New link at home!

A blog post dedicated to BSNL AS9829. It has tried so hard to become as irrelevant as it can to everyone's life (and that no longer excludes me).


So what really is BSNL btw?

  • A Govt of India telco sitting on an extensive fiber network of over 600,000 km across the country (which just sits unused and unavailable for anyone's use!)
  • A telco with an extensive last-mile copper network (which is very poorly maintained and barely works!)
  • A backbone with over 200 Gbps of IP transit capacity (which completely sucks due to rotten routing)
  • An integrated telecom provider offering services from landline to DSL broadband, from leased lines to datacenter services! (all of which fail miserably, from the product line down to ground-level technical operations)
  • An extensive workforce (which is terribly arrogant, and from top management to ground-level staff hardly anyone works!)
  • Even as the telecom industry boomed, it went from a 10,000 crore profit in 2004 to an 8,000 crore loss in 2015. And politics still swirls around it!
  • While the private sector was busy focusing on 4G LTE deployment, BSNL's market share dropped below 10% in 2014
  • While private-sector firms like Sterlite and Radius Infratel focused on FTTH rollouts, BSNL rolled out FTTH plans at 4,000 INR/month with a 50 GB cap and a post-FUP speed of (an amazing) 512 Kbps, to ensure no one uses it
  • While Reliance Jio is about to launch, Airtel is rolling out 4G LTE extensively, and companies like ACT are attracting more investment, BSNL is putting 6,000 crore into public WiFi infrastructure to give away a few minutes of free WiFi in the hope that users will pay for it afterwards. (Wow?!)


All of the above shows nothing but the ways in which BSNL is 100% screwed up right now. I don't expect it to ever pick up again. Politically, technically, and fundamentally it's a mess.

I became a BSNL broadband user in 2008, and it has been over 7 years of (painful and terrible) experience with them. A company which put so much infrastructure into connecting India worked extremely hard to do as many stupid things as possible. The trouble for me was that in my city they were the only wired telecom provider for retail services.


Last month I got a long-haul circuit from Airtel (provisioned on fiber) between my city and a friend's ISP PoP, for 10 Mbps of bandwidth. The circuit is delivered at an Airtel BTS site (slightly away from my home), and I have installed MikroTik SXT Lite 5 radios shooting a link from there to my home (around a 1 km link with clear LoS). This is the usual long-range fixed wireless RF link over the unlicensed 5.8 GHz band. (Thank you, Govt. of India, for delicensing it in 2007 and making it available for public use.) Thanks also to companies like MikroTik and Ubiquiti for opening up the world of good fixed wireless radios and antennas, which really work great and are available at quite good prices. I got a pair of SXT Lite 5s from Amazon.in at 7,700 INR (~$116).

Fortunately the BTS site has a private WISP tower, and the owner of the tower agreed to let me use it for my radio for a reasonable price.



Some statistics about my new link


Airtel BTS site





LoS of tower (from home)




Radio at my rooftop




(The water tank pipes were tall enough that I didn't have to mount any pole; I just used those pipes.)


Closer look

Radio at home


Link quality checks

Radio link stats


I am getting end-to-end bandwidth of around 35 Mbps between the radios (while the provisioned bandwidth on the backend is 10 Mbps). I am using 5 MHz of channel bandwidth with the 802.11 protocol, and the usual WPA2-PSK provides encryption between the radios.

End-to-end latency from a Raspberry Pi (wired to my home router) to the radio at the other end:
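For reference, here is a minimal sketch of how such a latency check can be scripted from the Pi; the radio's IP address below is a hypothetical placeholder, not my actual addressing.

```python
# Minimal sketch: measure round-trip latency to the far-end radio using the
# system ping utility on Linux. The IP address is a hypothetical placeholder.
import subprocess

FAR_END_RADIO = "192.168.88.2"  # placeholder management IP of the remote radio

result = subprocess.run(
    ["ping", "-c", "10", FAR_END_RADIO],
    capture_output=True,
    text=True,
    check=False,
)

# The final line of Linux ping output summarises min/avg/max/mdev RTT.
print(result.stdout.strip().splitlines()[-1])
```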


And lastly, a speedtest against a server far away from here:



(Note: I have hidden the ISP's name to spare them the unneeded DDoS attacks that have been hitting my blog for a few weeks.)



Some thoughts on fixed wireless links

  1. These links work great as long as LoS and free channels are available. India does have a serious problem of very little unlicensed spectrum being permitted for outdoor use.
  2. Capacity is hard to predict for a large country like India – it may work in one place and not in another.
  3. WISPs stupidly use 20 MHz, and even HT channels of 40 MHz, when 5 MHz can do the job for many of their links. (More "bandwidth" usage = fewer channels left for others + more potential interference.)
  4. Links work well provided the 1st Fresnel zone is clear (see the sketch right after this list). Special thanks to my friend Brough Turner for pointing this out. He runs an ISP based on this technology in Boston & surrounding areas. (Check out netBlazr.)
  5. Fixed wireless is NOT mobile wireless (understand the difference!).
  6. Some other successful ISPs using this technology – MonkeyBrains in San Francisco (on unlicensed spectrum) and Webpass (using microwave links).
  7. Tikona in India used it a bit, but with mesh to increase coverage, and eventually ended up with a network plagued by latency & packet loss issues. Wireless links work well for point-to-point and, to a limited extent, point-to-multipoint. They are not a good choice for a large network with wireless nodes acting as transport in between. The Indian media, as usual, stupidly took the technology as a Swiss-army-knife solution to broadband issues. (Check out the NDTV review of Tikona.)
  8. The tech and NOG communities across India have to push for more unlicensed spectrum in India. (Excellent article on this here.)
  9. I was largely motivated by the excellent paper "America's Broadband Heroes", which gives a very detailed understanding of the technology and its limitations.
  10. Overall I am happy with the 2.5x increase in download speed, and a whopping 20x increase in upload speed. Fixed wireless has a good edge over DSL when it comes to upload speeds.
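Since Fresnel zone clearance (point 4 above) decides whether a link like this works at all, here is a minimal sketch of the standard first-Fresnel-zone radius formula. The 1 km distance and 5.8 GHz frequency match my link; everything else is plain math.

```python
# Minimal sketch: radius of the first Fresnel zone at the midpoint of a link.
# r = sqrt(wavelength * d1 * d2 / (d1 + d2)), which is largest when d1 == d2.
import math

SPEED_OF_LIGHT = 3e8  # m/s

def fresnel_radius(distance_m: float, freq_hz: float) -> float:
    """Worst-case (midpoint) radius of the first Fresnel zone, in metres."""
    wavelength = SPEED_OF_LIGHT / freq_hz
    d1 = d2 = distance_m / 2
    return math.sqrt(wavelength * d1 * d2 / (d1 + d2))

# My link: ~1 km path on the 5.8 GHz band.
radius = fresnel_radius(1000, 5.8e9)
print(f"First Fresnel zone midpoint radius: {radius:.2f} m")
# Roughly 3.6 m -- so the path needs a few metres of clearance above rooftops
# and trees at mid-hop, not just bare LoS. A common rule of thumb is to keep
# at least 60% of this radius free of obstructions.
```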


Ending this blog post with a Cacti graph of my home broadband connection for the last one month. There's a high amount of systematic transfer of routing table data and some other stuff, since I keep a Raspberry Pi running all the time as a home server. 🙂


Home Broadband Graph


05 Mar

Different CDN technologies: DNS Vs Anycast Routing

And I am back from Malaysia after attending APRICOT 2014. It was a slightly slow event this time, as fewer people turned up due to the change of location from Thailand to Malaysia. But I do enjoy having APRICOT at the start of the year. 🙂

It has been quite some time since I blogged. After joining Spectranet I got relatively busier, along with a bit of travelling to Delhi NCR, which has been taking a lot of time. I hope to blog more over time.

Recently I got the chance to understand in detail how CDNs work from the delivery point of view, and that brings me to this post, where I will describe in detail how the popular CDN networks operate, and where they depend on DNS recursors versus anycast routing.


Understanding CDN

CDNs, as we know, are Content Delivery Networks: specialized networks designed to deliver content to edge networks by serving it from as close a location as possible. The location of servers and the type of connectivity depend heavily on each CDN provider and its business model. E.g. Google maintains its own delivery network consisting of a large number of GGC (Google Global Cache) nodes placed inside ISPs' networks, which help serve Google's static content, while other large networks like Akamai (whose core business is cache delivery) put their servers in a large number of edge networks, where they sit as small disconnected islands. Newer entrants in the industry like Limelight and Cloudflare follow a deployment model of putting nodes in major datacenters with direct connections to major networks via peering at IXPs.


The key features of almost all these CDNs are:
  1. Low-latency delivery of content, giving very fast throughput.
  2. Making networks more efficient by caching near the point of serving rather than consuming long-haul international bandwidth.
  3. Ensuring that content is delivered with optimum performance, with as little dependency as possible on middle networks/backbones.
  4. Ensuring that there is no single point of distribution, so traffic serving can be optimized during high load.


Technical side of “edge cache serving”

In order to make the “edge delivery” concept work, CDN providers have multiple options, and it gets slightly tricky here. The challenge is to ensure that all users go to their nearest CDN node and get served from there, rather than from a node far away from them.


Here we have ISP A with Cache A deployed very near it, ISP B with Cache B just next to it, and likewise ISP C with Cache C right next to it. Assume end users visit a website which uses the CDN provider's services. The end user gets a URL like “http://cdn.website.com/images/image1.jpg”, where cdn.website.com is supposed to resolve to the “nearest node”. Thus we expect that when users on ISP A try to reach cdn.website.com, they hit Cache A, users on ISP B hit Cache B, and so on (under normal circumstances).


Two fundamental ways to achieve that:

  1. Have DNS do the magic, i.e. when users from ISP A's network look up cdn.website.com, they get the unicast IP address of Cache A in return; similarly, users coming from ISP B's network get Cache B's unicast IP.
  2. Have routing reach the nearest cache node based on the “anycast routing” concept. Here Cache A, Cache B, and Cache C all use the same IP address, and routing takes care of reaching the closest one.


Both of these approaches have their own advantages as well as challenges. Some very large CDN providers like Akamai and Amazon CloudFront rely on DNS, while some newer entrants like Cloudflare rely heavily on anycast routing. I have discussed DNS and its importance in CDN node selection in some previous posts, but I will go through it quickly in this one.


Making use of DNS for CDN

DNS is a pretty basic protocol. Its role is simply “hostname to IP resolution” (and vice versa). What makes it powerful is that, based on certain logic, we can influence this hostname-to-IP resolution and do many cool things like load balancing, high availability, and more. However, there are two key challenges in doing all of that. First, the result of a DNS change is usually not instant, since there is a lot of caching by the “recursive DNS servers”. Second, since it is the recursive DNS servers that contact the authoritative DNS servers, the authoritative servers (by default protocol design) don't really know about the end users. They only know which DNS recursor they are talking to (based on the source IP of the recursor), which often correlates with the end users, since it is primarily ISPs that run the recursive DNS servers. But in the modern world of large open DNS recursors like OpenDNS and Google Public DNS, that correlation fades away.


Here’s how DNS based CDN services work




Here we have users on ISP A requesting the IP address of “cdn.website.com”. The request goes to the ISP's DNS recursor, which in turn hits the CDN provider's authoritative DNS servers via the DNS hierarchy. The green lines here show the flow of DNS information. Eventually, based on the IP of the requesting DNS recursor, the authoritative DNS server replies with the IP address of a cache node close to network A.
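To make that concrete, here is a minimal sketch of the kind of selection logic such an authoritative server could apply, assuming a simple recursor-prefix-to-cache table; the prefixes, cache IPs, and mapping below are made-up placeholders, not any real CDN's data.

```python
# Minimal sketch: pick a cache node based on the recursor's source IP.
# The prefixes, cache IPs, and names below are hypothetical placeholders.
import ipaddress

CACHE_BY_RECURSOR_PREFIX = {
    ipaddress.ip_network("203.0.113.0/24"): "198.51.100.10",  # Cache A (near ISP A)
    ipaddress.ip_network("198.18.0.0/15"): "198.51.100.20",   # Cache B (near ISP B)
}
DEFAULT_CACHE = "198.51.100.30"  # fall back to Cache C / a central cluster

def answer_for(recursor_ip: str) -> str:
    """Return the cache IP to hand back for a query from this recursor."""
    ip = ipaddress.ip_address(recursor_ip)
    for prefix, cache_ip in CACHE_BY_RECURSOR_PREFIX.items():
        if ip in prefix:
            return cache_ip
    return DEFAULT_CACHE

print(answer_for("203.0.113.53"))  # -> 198.51.100.10, i.e. Cache A
```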


Some key features of this approach:
  1. The optimization logic lives almost entirely with the authoritative DNS server, which can change the returned IP to hand out a location that can serve the request optimally. If one of the edge servers is down, the algorithm can take care of it by serving another location.
  2. In most such deployments cdn.domain.com points to cdnxx.cdn-provider.com via a CNAME record, and thus the actual resolution logic stays within the cdn-provider.com domain. Records like cdnxx.cdn-provider.com carry a very low TTL (less than a minute) to make changes reflect almost instantly; you can see this CNAME chain and TTL in the lookup sketch after this list.
  3. This approach fails significantly if end users do not use their ISP's DNS recursors, since the reply depends heavily on the location/GeoIP of the recursor's source IP.
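For illustration, a minimal lookup sketch using the dnspython library; “cdn.website.com” is just the placeholder hostname from the diagram above, so substitute any real CDN-served hostname to see its provider's CNAME and short TTL.

```python
# Minimal sketch: inspect the CNAME chain and TTL behind a CDN-served hostname.
# Requires the dnspython package; "cdn.website.com" is a placeholder hostname.
import dns.resolver

answer = dns.resolver.resolve("cdn.website.com", "A")

# canonical_name shows where the CNAME chain ends up (e.g. cdnxx.cdn-provider.com).
print("Resolves via:", answer.canonical_name)

# The TTL on the final record set is typically very low for DNS-steered CDNs.
print("TTL (seconds):", answer.rrset.ttl)

for record in answer:
    print("Cache node IP:", record.address)
```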


Some newer CDN networks have come up with a fully anycast-based setup with very little dependency on DNS, e.g. Cloudflare.


Here’s how anycast routing based CDN providers work




Here we have User 1 & User 2 on ISP A connected to ISP A's router, User 3 & User 4 on ISP B connected to ISP B's router, and finally User 5 & User 6 on ISP C connected to ISP C's router. All of these routers have CDN provider caches nearby and receive multiple routes. So, e.g. for ISP A's router, CDN server A is 1 hop away, while CDN server B is 2 hops away and CDN server C is 3 hops away. If all servers announce the same IP, then ISP A will prefer going to CDN server A, ISP B to CDN server B, and likewise for ISP C.
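Here is a minimal sketch of that selection from ISP A's point of view, using made-up AS numbers and a documentation prefix; real BGP best-path selection considers more attributes, but AS path length alone captures the idea.

```python
# Minimal sketch: an ISP router picks the shortest AS path towards the single
# anycast prefix announced from every cache site. AS numbers and the prefix
# are made-up placeholders.
ANYCAST_PREFIX = "192.0.2.0/24"

# Routes as seen by ISP A's router: one AS path per cache site.
routes_seen_by_isp_a = {
    "CDN server A": ["AS64500"],                        # 1 AS hop away
    "CDN server B": ["AS64496", "AS64500"],             # 2 AS hops away
    "CDN server C": ["AS64497", "AS64496", "AS64500"],  # 3 AS hops away
}

def best_site(routes: dict) -> str:
    """Pick the site reachable via the shortest AS path (ties broken arbitrarily)."""
    return min(routes, key=lambda site: len(routes[site]))

print(f"Traffic to {ANYCAST_PREFIX} from ISP A lands on:", best_site(routes_seen_by_isp_a))
# -> CDN server A, with no DNS trickery involved.
```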


Some key features of this approach:

  1. Optimization is based on BGP routing and announcements, with little role for DNS.
  2. This setup is very hard to build and scale, since for anycast to work well at a global level one needs lots and lots of peering and consistent transit providers at each location. If any peer leaks a route to an upstream or to other peers, a given cluster can see a lot of unexpected traffic because the anycast catchment breaks.
  3. This setup has no dependency on the DNS recursor, so Google Public DNS or OpenDNS works just fine.
  4. It also saves a significant number of IP addresses, since the same pools are used at multiple locations.



With that being said, I hope you are being served the static content of my blog from the nearest cache (since I use Amazon CloudFront for static content). 🙂


Disclaimer: This is my personal blog and does not necessarily reflect thoughts of my employer.