18 May

Joining the board of E2E

This week I worked on the paperwork to get on the board of E2E Networks Ltd as an Independent Director. It was quite an interesting process, as this is the first time I am joining the board of directors of an organisation.

About E2E

Don’t be misled by the “Networks” in the name: E2E is in the business of selling high-powered, low-cost compute (virtual machines) hosted in India and targeted towards Indian organisations. It’s one of the very few organisations of its kind to be listed on the National Stock Exchange of India (NSE), where it has been listed since 15th May 2018. E2E was started by Tarun Dua and Mohammed Imran. I have known Tarun for a really long time (if I remember correctly, probably since 2010), and it has been good to see the organisation grow from a very small team to where it stands now. While AWS, Azure and now Google Cloud are helping to grow the market for cloud computing, there’s still a gap for providers who can offer much more competitive pricing on monster machines, with less of the overbuild that the large cloud providers carry.

Quick links about E2E

  • List of other board members here
  • List of E2E’s team members here
  • E2E has its own ASN but relies on Netmagic’s AS17439 for originating its address pools. HE’s BGP toolkit view of E2E’s prefixes is here
  • News on listing here

Thoughts on DSC, DIN and OS X!

Directors of an organisation (private or listed) in India have to maintain a DIN, i.e. a Director Identification Number. Now, to get a DIN, one needs a DSC, which is a Digital Signature Certificate. A DSC uses RSA keys, and the Government of India has recognised around 9 Certifying Authorities for issuing them (listed here). So one gets the key loaded onto a physical device. As soon as there is a physical device in the picture which Mac OS X cannot recognise, it’s a problem. Most of the driver packages offered by the vendors are unsigned, and I personally don’t like installing those packages on the base machine I use. Besides those drivers, one also needs the emSigner package to associate the DSC with the DIN, which again happens to be yet another package needed to make the whole stack work.
Thus, to make it all work securely, I ended up putting a Windows VM on VirtualBox locally, an isolated instance used for such tasks only.

My involvement…

I will be on the board to contribute as an independent director. It’s a non-executive role, and thus I won’t have any involvement in day-to-day matters, just a macro-level picture of things and involvement in board meetings. I continue with my full-time job at Hurricane Electric. It will be great to learn corporate governance by watching how things work at the board of a company.

Well, so that’s all for now. Time to get back to work! 🙂

06 Sep

CDN Caching Panel discussion at APNIC 46

I am in Noumea in New Caledonia in the Pacific Islands. Next week we have the APNIC 46 conference, and I will be moderating an exciting panel discussion with friends from Akamai, Cloudflare, Facebook and more about the inner workings of CDNs.

If attending APNIC 46, please come & join this session – https://conference.apnic.net/46/program/schedule/#/day/7/panel-cdn-caching

If you are interested in connecting to Hurricane Electric (AS6939) in this region, please do drop me a message. (List of our PoPs in the region here)

11 May

Building redundancy on home network

I have posted about the home network in multiple other posts in the past. Recently I switched from the MikroTik SXT Lite 5 to the PowerBeam PBE-M5-400. This gave me a jump from 16 dBi to 25 dBi, which gives a much sharper beam. I also got a harness & climbed the BTS myself this time (after getting permission from the manager) to swap the gear. I think I can do a better job than wasting time finding guys from local WISPs to do it. 🙂


Also, Essel Group launched Siti broadband in my home area, and they are using DOCSIS. The network is overall fine, though initially it faced many outages due to fibre cuts here & there. As of now, the connection is reasonably stable. I am paying Rs 860/month (~ $14) for an uncapped 10 Mbps link, which gives me 10 Mbps down and 1.5 Mbps up. From a price point of view, it’s an excellent connection to have for redundancy reasons, and the connection is now stable enough to explore auto-failover. For the last few months I have taken both the primary link as well as the backup link to the router in the form of tagged VLANs and pushed specific traffic over one or the other based on source IP (a device at home) or a destination IP/port combination using policy-based routing.
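For a rough idea of what that policy-based routing looked like on the router, here is a sketch in EdgeOS CLI form. The VLAN sub-interfaces (eth2.10 / eth2.20) are the ones I refer to below, but the LAN interface name (eth1), the rule-set name (PBR) and every address here are placeholders rather than my actual config:

    # The two upstream links arrive as tagged VLANs on eth2
    # (addresses on each vif omitted here)
    set interfaces ethernet eth2 vif 10 description 'Primary WAN (wireless)'
    set interfaces ethernet eth2 vif 20 description 'Secondary WAN (Siti DOCSIS)'

    # The secondary link gets its own routing table with a default route
    set protocols static table 2 route 0.0.0.0/0 next-hop 198.51.100.1

    # Steer one device at home (by source IP) via the secondary link
    set firewall modify PBR rule 10 source address 192.168.1.50/32
    set firewall modify PBR rule 10 modify table 2

    # Steer a destination IP/port combination via the secondary link
    set firewall modify PBR rule 20 protocol tcp
    set firewall modify PBR rule 20 destination address 203.0.113.10/32
    set firewall modify PBR rule 20 destination port 443
    set firewall modify PBR rule 20 modify table 2

    # Apply the policy to traffic coming in from the LAN
    set interfaces ethernet eth1 firewall in modify PBR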


Here both links drop on the TP-Link router, which I use as a layer 2 switch. I tag both links on different VLANs and carry them to my room over a single cable. The TP-Link 1043ND is flashed with OpenWRT, which allows me to do simple layer 2 aggregation and maintain a 1 Gig link with the other switch placed in my room.

It’s tricky to do auto-failover in such a static setup where I am not using BGP, and hence the WAN IP changes when the connection is switched. I use a Ubiquiti EdgeRouter as the core router at home, and it comes with a “load balancing” feature where one can either load balance across links or simply put a secondary interface in failover mode.


Here’s how the config looks now:

(Note: VLAN 10 / routing table 1 – primary link, and VLAN 20 / routing table 2 – secondary link)
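In CLI terms, this part is just a default route in each of the two extra tables. A minimal sketch, with placeholder gateway addresses standing in for the real ones:

    # Two extra routing tables, one per upstream VLAN (placeholder gateways)
    set protocols static table 1 route 0.0.0.0/0 next-hop 192.0.2.1
    set protocols static table 2 route 0.0.0.0/0 next-hop 198.51.100.1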


So this simply puts two different routing tables in the router besides the main table, which is known as “main”. Next is the load-balancing config:
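A sketch of how that load-balance group looks in CLI form; the group name (WAN_FAILOVER) and the ping target are placeholders, and the timers correspond to the behaviour described below:

    # eth2.10 is the normal path, eth2.20 joins the group as failover-only
    set load-balance group WAN_FAILOVER interface eth2.10
    set load-balance group WAN_FAILOVER interface eth2.10 route-test type ping target 192.0.2.100
    set load-balance group WAN_FAILOVER interface eth2.10 route-test interval 5
    set load-balance group WAN_FAILOVER interface eth2.10 route-test count failure 6
    set load-balance group WAN_FAILOVER interface eth2.10 route-test count success 12
    set load-balance group WAN_FAILOVER interface eth2.20 failover-only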


So here I have eth2.20 defined for failover only, and it uses routing table 2, while the primary link is eth2.10, which uses the main table. The router basically sends a ping every 5 seconds, and if 6/6 pings fail during a 30-second-long outage, the primary link is considered dead and traffic moves to the secondary link. The router then keeps trying to ping the defined IP, and once there are 12 successful pings (one every 5 seconds) within a 1-minute period, the primary is assumed live again. New sessions switch over to the primary while existing ones stick with the secondary to avoid an outage on them.


Next, the load-balance group is called from a firewall modify instance, and this “SOURCE_ROUTE” rule set is then applied on the LAN-facing interface so the policy takes effect on traffic coming in from the LAN.
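A minimal sketch of those two steps in CLI form (eth1 as the LAN-facing interface is a placeholder, and WAN_FAILOVER is the group name assumed above):

    # Send LAN-originated traffic through the load-balance group
    set firewall modify SOURCE_ROUTE rule 10 modify lb-group WAN_FAILOVER

    # Attach the modify policy to traffic coming in on the LAN interface
    set interfaces ethernet eth1 firewall in modify SOURCE_ROUTE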


And that’s about it. It ensures that regular internet usage (though not established SSH sessions), streaming, Chromecast, etc. can all stay live with a maximum impact of 30 seconds in case of an issue on the primary link.


Some misc notes:

  1. If the primary link goes down, IPv6 would still be broken; I have yet to put in a script to disable IPv6 on the LAN in the case of an outage on that link (a possible hook for this is sketched after this list).
  2. I noticed the EdgeRouter doesn’t behave well in terms of failover if I do not specify an IPv4 test address. It tends to use a default test hostname pointed at the Amazon CDN (which is fine btw), but as the primary link fails, DNS resolution also fails, and it just seems to keep re-trying DNS resolution instead of assuming failure instantly.
  3. I focused on testing the primary link against an IP far away in Europe. Testing the secondary link does not really matter: either it is simply not being used, or, when it is being used, it is the only option anyway. Hence extensive testing makes no sense on the secondary link.
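On the first point, EdgeOS can call a transition script when a load-balance group changes state, which could then toggle router advertisements / IPv6 on the LAN side. I haven’t deployed this yet, so treat the option name and the script path purely as a sketch of the idea:

    # Hypothetical hook: run a script on load-balance state transitions,
    # which would disable/enable IPv6 on the LAN accordingly
    set load-balance group WAN_FAILOVER transition-script /config/scripts/wan-transition.sh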


Finally, the router itself shows the current state of this load-balancing setup:
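From what I recall of EdgeOS, these are the operational-mode commands that expose that state; take the exact names as a sketch and verify them on your firmware version:

    # Which interface is active and whether failover has kicked in
    show load-balance status

    # Per-interface ping test counters used by the watchdog
    show load-balance watchdog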


Sidenote: I am in Bangalore for Rootconf 2017. I will be presenting on eyeball routing measurement using RIPE Atlas. If you are around in Bangalore, drop me a message; it would be great to meet!