Different CDN technologies: DNS vs Anycast Routing

And I am back from Malaysia after attending APRICOT 2014. It was a slightly slow event this time as fewer people came, owing to the change of location from Thailand to Malaysia. But I do enjoy having APRICOT at the start of the year. :)

It has been quite some time since I last blogged. After joining Spectranet I have been relatively busy, along with a bit of travelling to Delhi NCR, which has been taking a lot of time. I hope to blog more over time.

Recently I got a chance to understand in detail how CDNs work from the delivery point of view, and that brings me to this post, where I will look in detail at how the popular CDN networks work, where they depend on DNS recursors, and where on anycast routing.

Understanding CDN

CDNs, as we know, are Content Delivery Networks: specialized networks designed to deliver content to edge networks by serving it from as close a location as possible. The location of servers and the type of connectivity depend heavily on each CDN provider and its business model. E.g. Google maintains its own delivery network consisting of a large number of GGC (Google Global Cache) nodes placed on ISPs’ networks, which help in serving Google’s static content, while other large networks like Akamai (whose core business is cache delivery) put their servers on a large number of edge networks, where they stay as disconnected small islands. Newer entrants in the industry like Limelight and Cloudflare build their deployment model around putting nodes in major datacenters with direct connections to major networks via peering at IXPs.

The key features of almost all these CDNs are:
  1. Low latency delivery of content, giving very fast throughput.
  2. Making networks more efficient by caching near the point of serving rather than consuming long-haul international bandwidth.
  3. Ensuring that content is delivered with optimum performance, with as little dependency as possible on middle networks/backbones.
  4. Ensuring that there is no single point of distribution, so that traffic serving can be optimized during high load.

Technical side of “edge cache serving”

In order to make the “edge delivery” concept work, CDN providers have multiple options, and it gets slightly tricky here. The challenge is to ensure that all users go to their nearest CDN node and get served from there, rather than from a node far away from them.

[Figure CDN1: ISP A, ISP B and ISP C, each with a cache node (Cache A, Cache B, Cache C) deployed right next to it]

Here we have ISP A with Cache A deployed very near to it, ISP B with Cache B deployed just next to it, and likewise ISP C with Cache C right next to it. Assume that end users visit a website which uses the CDN provider’s services. The end user will get a URL like “http://cdn.website.com/images/image1.jpg”, where cdn.website.com is supposed to go to the “nearest node”. Thus we expect that when users try to reach cdn.website.com on ISP A, they should hit Cache A, on ISP B Cache B, and so on (under normal circumstances).
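
For a quick sanity check from your own vantage point, a minimal Python sketch like this (standard library only; cdn.website.com stays the hypothetical hostname from the example) prints which edge IP the hostname resolves to:

    # Which edge IP does this CDN hostname resolve to from here?
    # "cdn.website.com" is the hypothetical hostname from the example above;
    # substitute any real CDN-backed hostname to try it.
    import socket

    hostname = "cdn.website.com"
    try:
        addrs = {info[4][0] for info in
                 socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)}
        print(hostname, "resolves to:", ", ".join(sorted(addrs)))
    except socket.gaierror as exc:
        print("Lookup failed:", exc)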

Two fundamental ways to achieve that:

  1. Have DNS do the magic, i.e. when users from ISP A’s network look up cdn.website.com, they should get the unicast IP address of Cache A in return; similarly, for users coming from ISP B’s network, Cache B’s unicast IP should be returned.
  2. Have routing reach the nearest cache node based on the “anycast routing” concept. Here Cache A, Cache B and Cache C all use the same IP address, and routing takes care of reaching the closest one.

Both of these approaches have their own advantages as well as challenges. Some of the very large CDN providers, like Akamai and Amazon CloudFront, rely on DNS, while some of the newer entrants, like Cloudflare, rely very much on anycast routing. I have discussed DNS and its importance in CDN node selection in some previous posts, but will go through it quickly in this one.

Making use of DNS for CDN

DNS is a pretty basic protocol. Its role is simply “hostname to IP resolution” (and vice versa). What makes it powerful is that, based on certain logic, we can influence this “hostname to IP resolution” and do many cool things like load balancing, high availability, and more. However, there are two key challenges in doing all that. First, the result of DNS changes is usually not instant, since there is a lot of caching by the “recursive DNS servers”. Second, since recursive DNS servers contact authoritative DNS servers, the authoritative DNS servers (by default protocol design) don’t really know about end users. They only know which DNS recursor they are talking to (based on the source IP of the DNS recursor), which often correlates with the end users, since primarily ISPs run the recursive DNS servers. But in the modern world of large open DNS recursors like OpenDNS and Google Public DNS, that correlation fades out.
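
To see this effect in practice, a small sketch like the one below queries the same hostname via two different public recursors and compares the answers; it assumes the dnspython library is installed (pip install dnspython) and again uses the hypothetical hostname:

    # With a DNS-based CDN, recursors in different places often get
    # different edge IPs back for the same hostname. Requires dnspython.
    import dns.resolver

    def resolve_via(recursor_ip, hostname):
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [recursor_ip]
        answer = resolver.resolve(hostname, "A")
        return sorted(rr.address for rr in answer)

    hostname = "cdn.website.com"  # hypothetical example hostname
    for recursor in ("8.8.8.8", "208.67.222.222"):  # Google Public DNS, OpenDNS
        print(recursor, "->", resolve_via(recursor, hostname))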

Here’s how DNS based CDN services work

[Figure cdn2: DNS lookup flow from end users on ISP A through the ISP’s DNS recursor up to the CDN provider’s authoritative DNS]

Here we have users on ISP A requesting the IP address of “cdn.website.com”. Requests go to the DNS recursor of the ISP, which in turn hits the authoritative DNS servers of the CDN provider via the DNS hierarchy. The green lines here show the flow of DNS information. Eventually, based on the IP of the requesting DNS recursor, the authoritative DNS replies with the IP address of the cache node close to network A.
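
As a toy illustration of the authoritative-side logic (not any real CDN’s code), the sketch below maps the recursor’s source IP to the nearest cache; the prefixes and cache IPs are invented documentation addresses:

    # Pick a cache node based on the *recursor's* source IP.
    # All prefixes and cache IPs below are made up for illustration.
    import ipaddress

    GEO_TABLE = {
        ipaddress.ip_network("203.0.113.0/24"): "198.51.100.10",  # "ISP A" recursor -> Cache A
        ipaddress.ip_network("198.18.0.0/15"): "198.51.100.20",   # "ISP B" recursor -> Cache B
    }
    DEFAULT_CACHE = "198.51.100.30"  # fallback, think of it as Cache C

    def answer_for(recursor_src_ip):
        ip = ipaddress.ip_address(recursor_src_ip)
        for prefix, cache_ip in GEO_TABLE.items():
            if ip in prefix:
                return cache_ip
        return DEFAULT_CACHE

    print(answer_for("203.0.113.53"))  # returns Cache A's IP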

Some of the key features of this approach:
  1. The optimization logic sits almost entirely with the authoritative DNS server, which can change the returned IP to give a location that can serve the request in an optimum manner. If one of the edge servers is down, the algorithm can take care of it by serving another location.
  2. In most such deployments cdn.domain.com points to cdnxx.cdn-provider.com via a CNAME record, and thus the actual resolution logic stays within the domain of cdn-provider.com. Records like cdnxx.cdn-provider.com have very low TTLs (less than a minute) to make changes reflect almost instantly (see the sketch after this list).
  3. This approach fails significantly if end users do not use the DNS recursors of their ISP, since the reply depends very much on the location/GeoIP parameters of the source IP of the DNS recursor.
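
To observe such a CNAME chain and its TTLs yourself, a small dnspython-based sketch can walk the answer section of the response (the hostname remains the hypothetical example):

    # Print each RRset in the answer: the CNAME hops and the final A record,
    # with their TTLs. With a DNS-based CDN the CDN-side records typically
    # carry TTLs well under a minute. Requires dnspython.
    import dns.resolver

    def show_chain(hostname):
        answer = dns.resolver.resolve(hostname, "A")
        for rrset in answer.response.answer:
            for rr in rrset:
                print(rrset.name, "TTL=%d" % rrset.ttl, "->", rr)

    show_chain("cdn.website.com")  # hypothetical example hostname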

Some of the newer CDN networks have come up with a fully anycast-based setup with very little dependency on DNS, e.g. Cloudflare.

Here’s how anycast routing based CDN providers work

[Figure cdn3: Users on ISP A, ISP B and ISP C routers, each router with a nearby CDN cache announcing the same anycast IP]

Here we have User 1 & User 2 on ISP A connected to ISP A’s router, User 3 & User 4 on ISP B connected to ISP B’s router, and finally User 5 & User 6 on ISP C connected to ISP C’s router. All of these routers have CDN provider caches nearby and get multiple routes to them. So e.g. for ISP A’s router, CDN server A is 1 hop away, while CDN server B is 2 hops away and CDN server C is 3 hops away. If all servers use the same IP, then ISP A will prefer going to CDN server A, ISP B will go to CDN server B, and likewise ISP C to CDN server C.
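
As a toy model of that selection, the sketch below mimics each ISP router picking the announcement with the shortest path for one anycast prefix; the prefix and hop counts are made up to mirror the figure:

    # Three caches announce the same prefix; each router simply prefers the
    # shortest path (one of BGP's main tie-breakers). Numbers mirror the figure.
    ANYCAST_PREFIX = "198.51.100.0/24"  # invented example prefix

    paths = {
        "ISP A": {"CDN server A": 1, "CDN server B": 2, "CDN server C": 3},
        "ISP B": {"CDN server A": 2, "CDN server B": 1, "CDN server C": 2},
        "ISP C": {"CDN server A": 3, "CDN server B": 2, "CDN server C": 1},
    }

    for isp, candidates in paths.items():
        best = min(candidates, key=candidates.get)
        print("%s routes %s via %s (%d hop(s))"
              % (isp, ANYCAST_PREFIX, best, candidates[best]))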

Some of the key features of this approach:

  1. Optimization is based on BGP routing and announcements, with little role for DNS.
  2. This setup is very hard to build and scale, since for anycast to work perfectly at a global level one needs lots and lots of peering and consistent transit providers at each location. If any of the peers leaks a route to an upstream or to other peers, there can be a lot of unexpected traffic on a given cluster due to the broken anycast.
  3. This setup has no dependency on the DNS recursor, and hence Google Public DNS or OpenDNS works just fine.
  4. This saves a significant number of IP addresses, since the same pools are used at multiple locations.

With that being said, I hope you are getting served from the nearest cache for the static content of my blog (since I use Amazon CloudFront for static content). :)

Disclaimer: This is my personal blog and does not necessarily reflect thoughts of my employer.