Category: IP routing
I’d be interested in seeing an updated/modernized DC internet edge design session, including some of the following topics:
- Carrier path selection and failover (AS-path based, performance/quality based, etc.)
- Path visibility (Tools like ThousandEyes, et al.)
- Public IP mobility and failover between DCs (BGP strategy, iBGP topology and transport)
- Designing and building backbone connectivity between INET edges at both DCs (shared vs. dedicated transport, routed vs. stretched segments, etc.)
- DDoS mitigation (BGP, DNS-based techniques, etc.)
- Impacts of IPv6 on all above topics (Design, hardware resources, platform selection, etc.)
How would you connect enterprise offices and DCs today? What are some options, physical and logical, and how would you design the IGP? Would you use OSPF, and if so, how would you go about areas? All branches have local upstreams, for example.
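As a baseline for discussion, a classic OSPF answer (a sketch in Cisco IOS syntax; process number, addressing, and area layout are all made-up examples) puts the DC/core in area 0 and each branch in its own NSSA, so a branch with a local upstream can still originate externals:

```
! Hub/DC ABR: backbone plus one NSSA per branch
router ospf 1
 router-id 10.0.0.1
 network 10.0.0.0 0.0.255.255 area 0
 network 10.1.0.0 0.0.255.255 area 1
 area 1 nssa default-information-originate
!
! Branch router in area 1
router ospf 1
 router-id 10.1.0.1
 network 10.1.0.0 0.0.255.255 area 1
 area 1 nssa
```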
Or is it now the norm that everyone runs private ASNs and BGP between sites? What if your edge routers already share the same ASN and run BGP toward the upstream: how would you do BGP between sites, and would you be forced into a full mesh or an RR setup?
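One common way out of the full-mesh problem is to make the two DC edge routers route reflectors and peer every site router with both; a sketch in Cisco IOS syntax (ASN 65000 and the loopback addresses are assumptions):

```
! DC edge router acting as route reflector
router bgp 65000
 neighbor 10.255.0.11 remote-as 65000
 neighbor 10.255.0.11 update-source Loopback0
 neighbor 10.255.0.11 route-reflector-client
 neighbor 10.255.0.12 remote-as 65000
 neighbor 10.255.0.12 update-source Loopback0
 neighbor 10.255.0.12 route-reflector-client
!
! Site router: only two iBGP sessions, one per RR
router bgp 65000
 neighbor 10.255.0.1 remote-as 65000
 neighbor 10.255.0.1 update-source Loopback0
 neighbor 10.255.0.2 remote-as 65000
 neighbor 10.255.0.2 update-source Loopback0
```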
When designing for a new (non-MPLS) internal WAN which will be dual stack sooner or later, what approach should one take regarding IGP selection and design? Is it generally better to have one IGP process which supports both address families? Which design choices should be made at the onset of the project to minimize design changes if v6 is added later?
It still seems the “default” IGP these days is OSPFv2, which only supports the IPv4 address family. Even if only v4 is used on day one, should one consider OSPFv3, IS-IS, or MP-BGP, which have multi-address-family support?
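If OSPF remains the choice, OSPFv3 with address families (RFC 5838) can carry both IPv4 and IPv6 in a single process; a rough sketch in Cisco IOS syntax (process number, router-id, and interface name are assumptions):

```
router ospfv3 1
 router-id 192.0.2.1
 address-family ipv4 unicast
 exit-address-family
 address-family ipv6 unicast
 exit-address-family
!
interface GigabitEthernet0/0
 ospfv3 1 ipv4 area 0
 ospfv3 1 ipv6 area 0
```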
I think a subject for the design clinic could be subnet design (sizing/population). I’ve written a blog post on the pros and cons of big/small and heterogeneous subnets:
This is to argue for application swimlanes and to prevent L2 stretching. Maybe the blog can serve as a starting point. I might have missed some points, or maybe it would be better to split it into physical/virtual/cloud.
10 or more years ago I used to do some very simple DDoS prevention with AS:666 communities propagated upstream.
Not a complete solution, but it took the traffic off our upstream/peering links, to everybody else’s benefit.
It would be useful to get an overview of those techniques and implementation approaches.
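For reference, the classic remotely triggered blackhole (RTBH) trigger configuration looks roughly like this in Cisco IOS syntax (ASN 64500, community 64500:666, and victim address 203.0.113.99 are all made-up examples; RFC 7999 later defined the well-known BLACKHOLE community 65535:666):

```
! Static route to Null0 for the attacked host, tagged for the route-map
ip route 203.0.113.99 255.255.255.255 Null0 tag 666
!
route-map RTBH permit 10
 match tag 666
 set community 64500:666
 set origin igp
!
router bgp 64500
 redistribute static route-map RTBH
```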
I see that “DDoS mitigation with blackholing” is in the list, but I think that BGP Flowspec would be a more interesting and modern topic.
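Unlike blackholing, Flowspec can drop or rate-limit just the offending flows instead of all traffic to the victim. A hedged sketch of what that might look like on IOS XR (class/policy names, addresses, and the choice to drop rather than police are all assumptions):

```
class-map type traffic match-all NTP-REFLECTION
 match protocol udp
 match source-port 123
 match destination-address ipv4 203.0.113.99 255.255.255.255
 end-class-map
!
policy-map type pbr DDOS-FLOWSPEC
 class type traffic NTP-REFLECTION
  drop
 !
 class type traffic class-default
 !
 end-policy-map
!
flowspec
 address-family ipv4
  service-policy type pbr DDOS-FLOWSPEC
```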
I wonder if you could include a discussion on how CloudFlare operates as well, and perhaps Arbor (now part of NetScout) for scrubbing.
Why don’t we use leaf-and-spine designs for service provider transport networks? Is the Clos design only for the DC? Why can’t we use it outside the DC?
It would be interesting to hear about leaf and spine architectures in an enterprise/campus network. I assume that the service provider case is very similar, but a large enterprise or campus has different operational and security models. Possibly a place to discuss how Charles Clos has taken over the networking world now that switches are non-blocking. Bell Labs and the carriers strike back! :-)
It would be interesting to go over adapting leaf and spine for ISPs, or probably IXPs. Most of the currently available material focuses on data centers with compute and storage nodes, not on ISPs and service-based fabrics. What do you think?
How do we design an IP leaf-and-spine fabric without EVPN/VXLAN when we have lots of VRFs segmented by firewalls?
We would like to use several dual-stacked VRFs between adjacent routers in a hub-spoke configuration.
What is the tipping point (primarily in terms of operational complexity) between running multiple instances of VRF-Lite versus something MPLS-ish? Are there other options?
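For a handful of VRFs, VRF-Lite with one sub-interface per VRF per link stays manageable; the per-link configuration grows linearly with the number of VRFs, which is exactly where the operational tipping point shows up. A Cisco IOS-style sketch (VRF name, VLAN, and addressing are assumptions):

```
vrf definition PROD
 address-family ipv4
 address-family ipv6
!
interface GigabitEthernet0/1.10
 encapsulation dot1Q 10
 vrf forwarding PROD
 ip address 10.10.0.1 255.255.255.252
 ipv6 address 2001:db8:10::1/64
!
! ...repeat a sub-interface (plus a routing adjacency) per VRF per link;
! with MPLS/SR you would run one session per link and carry VRFs in VPN labels.
```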
At a small company I recently joined, they have an MPLS transport network with a dozen POPs running on low-end IOS-XR gear, offering L2VPN services.
The director wants to offer IP transit services (full tables) using low-end IOS-XE gear (ASR1000s) over that MPLS transport using xconnects/pseudowires. I suggested buying higher-end SP gear and providing both L2VPN and L3VPN services with the same set of equipment, thus putting the IP transit in a VRF. I was told this isn’t possible, but considering how beefy even middle-of-the-road SP gear is when it comes to the number of routes in the FIB, I feel this is a no-brainer. Am I missing something?
It would be great to continue your considerations about BGP add-path versus Internet in a VRF.
I did a bit of analysis a while ago, and the memory requirements went through the roof.
Assuming that’s not a problem you should be reasonably safe.
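As a purely illustrative back-of-envelope (per-path memory cost varies a lot by platform and software release; every number here is an assumption, not a measurement):

```
~950k IPv4 prefixes × 1 extra path per prefix × ~150 B per BGP path entry
≈ 140 MB of additional BGP/RIB memory per extra advertised path
```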
When is it good to explicitly design Ethernet WANs for multi-access versus point-to-point?
I’ve seen some point-to-point Ethernet networks use SVIs, which run spanning tree (family ethernet), versus sub-interfaces, which do not (family inet). Are these equivalent, or not?
Is multi-access the original sin of Ethernet? ;)
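To make the difference concrete, a Cisco-style sketch (mixing switch and router syntax purely for illustration; VLAN IDs and addresses are made up). The SVI variant keeps the link in a bridging domain, so spanning tree runs on it, while the routed sub-interface is pure L3 and STP never sees it:

```
! Option 1: L2 trunk + SVI — VLAN 100 participates in spanning tree
interface GigabitEthernet0/1
 switchport mode trunk
!
interface Vlan100
 ip address 192.0.2.1 255.255.255.252
!
! Option 2: routed sub-interface — no bridging, no spanning tree
interface GigabitEthernet0/2.100
 encapsulation dot1Q 100
 ip address 192.0.2.5 255.255.255.252
```

Forwarding-wise the two are equivalent for a true point-to-point link; the control-plane behavior (STP participation, failure detection) is what differs.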