Designing a high-capacity home network
For the last few months, we have been working on setting up a dedicated room for my home office. This gives me the opportunity to plan for changes in the home network.
Key design ideas
- Cabling should support needs for now as well as the future, but active hardware stays strictly at what I need now, since hardware can be upgraded easily. Changing cabling later is not impossible since we have ducts (0.75" everywhere, and 1.5" where they run towards my office for aggregation), but it is still a few hours' task to plan, pull new cables, etc. Plus, if cables are changed, they have to change together; any partial change will be very hard once the 0.75" ducts fill up with cables. Cables run in a star topology, but the ducts are daisy-chained (an older POTS design).
- This may not hold in the Western world, where manpower is very expensive and hardware relatively cheap, but here in India good managed switches, good routers with gigabit ports, policy-based routing, etc. can be quite expensive compared to the cost of cabling. I prefer to aggregate circuits centrally instead of putting switches here and there. Thus wherever I needed 1 port, I provisioned 2 ports and the associated cabling from that point back to the aggregation.
- Single-place aggregation has another advantage: it makes it easy to have one single "fat" wall plate with lots of ports and just 1-, 2-, or 3-port faceplates on the rest of the wall plates. Most of the old telephone boxes are 3 x 3 and can take a maximum of 2 cat6 ports. One might be able to fit 4 ports if a suitable faceplate were found, but it would be very hard to manage because there is barely space for the extra cable in a 3 x 3 box.
- All cables should be terminated on a faceplate. In the past, I ran a network between rooms without faceplates for 8 years, and it gets messy as time goes on: the area gets dirty, and the bare ends become fragile points to disconnect, clean up, etc. So this time I decided that all cables must terminate on a faceplate instead of hanging out of the wall.
- For cabling, cat6 made the most sense to me. Cat6 cost 7000 INR for a 300m box. I used around half of the box, so roughly 3500 INR for 150m (about $48 USD for 492 feet), and I have the option to return the remaining unused cable.
I couldn’t find cat6a locally, and pricing hints suggested it would be way more expensive than cat6. I can run gigabit right away on cat6 and can easily do a 2 x 1Gbps LACP LAG if needed between key locations (the room where the uplinks come in and the office where aggregation will happen). Furthermore, I can run 2.5Gbps and 5Gbps on each leg at some point when I upgrade the switch. I might even be able to run 10Gbps on copper, though the longest leg, at around 50m including patch cords, would be marginal. I doubt I will need more than 1Gbps to the individual wall jacks where bandwidth is being “consumed” for at least the next 6-7 years, but I might very well need more than 1Gbps between the uplink room (location of the core router and GEPON optical units) and the office (aggregation switch). Thus I also had a two-strand single-mode fiber pulled in, at around 5.5 INR/meter. It is a drop cable, so I can add connectors to it directly without splicing (though I would still need a cleaver for a sharp 90-degree cut). I might light that up after a few years, especially once both home uplinks are 1Gbps or above; for now they are 100Mbps (primary) and 50Mbps (secondary). Because the fiber is in the wall, I can in theory run 100Gbps in the far-off future if needed.
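As a rough illustration of the 2 x 1Gbps option, here is a minimal Linux bonding sketch for one side of that link, assuming a Linux box with two NICs (the interface names eth0/eth1 and the address are placeholders I made up; the switch at the other end must be configured for LACP as well):

```shell
# Create an 802.3ad (LACP) bond from two 1 Gbps links.
# Run as root; eth0/eth1 and the address below are placeholders.
ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast

# Interfaces must be down before they can be enslaved.
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0

ip link set bond0 up
ip addr add 192.168.1.2/24 dev bond0

# Check LACP negotiation state:
cat /proc/net/bonding/bond0
```

Worth remembering that LACP balances per flow, so a single TCP stream still tops out at 1Gbps; the aggregate mainly helps with multiple parallel transfers.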
- The price of electronics changes very quickly. It makes sense to over-provision on the cabling, but I would always avoid over-provisioning on the router/switch part. A 10G switch today probably costs 10-20x what it will after 6-7 years, when I might actually need that kind of capacity.
- The highest-traffic segment in the home is around the home server (an Intel NUC) with a 4TB HDD attached. It does various monitoring, home NAS, site-to-site VPN and more, and there are times when traffic touches 1Gbps between the NUC, core switch, and core router. The NUC's port is 1Gbps, so there is no upgrade path other than replacing it in the future (newer NUCs have a 2.5Gbps port). As cloud computing becomes super cheap, I doubt I will be putting very heavy compute at home in the near future. As of now, the NUC hosts only what has to be hosted locally.
- For the aggregation faceplates, my friend helped me get Hi-Fi modular plates, 2 x 8 ports each, giving 16 ports of aggregation. These are basically electrical plates with cat6 modules fitted in, and so far they are working well. Though I have terminated cat6 countless times, it was a hard job on these due to the excess cabling. The challenge: if I leave excess cable, 8 "excess cables" per plate make it hard to close the faceplate, while if I leave minimal cable, it becomes hard to tweak/fix/make changes in the future. I ended up not cutting the cat6 rip cord and tied it along the cable with electrical tape; that way I can open up the tape anytime and pull the cord to peel off more of the outer jacket. For the faceplates in the rest of the home, I found Anchor faceplates locally; again, these are electrical plates with the option to fit a cat6 jack in place of a single "button". For patch cables, I prefer cat5e over cat6 because cat5e is less rigid; I prefer the same in datacenter environments as well. 1Gbps runs just fine on it, and cable management is a lot easier. All of these (or a few) can be upgraded to cat6 at some point when I have to run higher speeds (2.5G/5G/10G). I think I made the mistake of not having an extra "box" to hold the spare cable; since it was a fresh design, that could have been done easily during the build. In total this was 13 cat6 cables, 2 ends, 8 conductors each = 13 x 2 x 8 = 208 punch-downs. In reality it was probably 250, as I had to re-do a few things twice due to the limited space in the wall box.
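The punch-down and cable-cost figures above can be sanity-checked with a bit of shell arithmetic (all numbers are from the text; the per-metre cost is kept in paise to stay in integers):

```shell
# 13 cat6 runs, terminated at 2 ends, 8 conductors punched down per end.
runs=13; ends=2; conductors=8
echo "punch-downs: $((runs * ends * conductors))"    # 13 x 2 x 8 = 208

# INR 7000 for a 300 m box; per-metre cost in paise (1 INR = 100 paise).
cost_inr=7000; box_m=300
echo "per metre: $((cost_inr * 100 / box_m)) paise"  # ~INR 23.3/m, so ~INR 3500 for 150 m
```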
Cables are patched together for now, since the place isn't ready to move into yet, so the new cabling is carrying the older connectivity design. Woodwork is still ongoing, as visible in the picture :)