Cisco Live 2023 Days Four and Five

Thursday

My favourite non-lab session was on day four: Network Engineers Blueprint for ACI Forwarding. Joe Young was an awesome presenter; some of the grey areas were coloured in and the gaps filled. A lot was covered in this intense session, so I’ll summarise it below, but I plan to do another post that goes into the session fully.  

One of the most interesting sessions was Journey to Next Gen Service Provider Architecture; the discussion of (RON) Routed Optical Networking was great and linked in well with the 400Gbps DCO session I’d done on day two.  

The World of Solutions was also a good visit; we had a chat with Megaport and I beat Stephen on the cycling challenge with a 1,057W power output on the bike. It seemed like a great competition to get everyone in the door, but I couldn’t tell you the cycle shop’s name.  

I can’t forget the closing keynote, although it was a bit niche, one for the football fans. Watching Pierluigi Collina break down free kicks wasn’t a highlight for me, but all the same he’s an incredible person who confirmed that sport and technology will continue their relationship. The after party, wow, what an event, although it was short-lived for us as the queues for a beer or food were huge!  

08:30: BRKSPG-2343: Internet for the Future: Journey to Next Gen Service Provider Architecture and Operating Model

  • Session was presented by Rob Piasecki.  
  • Rob talked about the fourth industrial revolution driven by data.  
  • Use cases such as telemedicine, hybrid work, virtual education and connecting the unconnected were discussed.  
  • A great story, “The emperor, the inventor and the game of chess”, was used to explain that we’re moving into the second half of the chessboard in this fourth industrial revolution.  
  • Silicon One and its development were discussed; the Cisco 8K and Catalyst 9K use Silicon One. Silicon One can be configured for multiple uses, which drives cost down and accelerates development whilst reducing power consumption and maintaining a 1RU form factor.  
  • Refinement of optics is one of the key parts of the next generation service provider. The reduction in ASIC cost-per-bit has outpaced optic economics, and silicon and optics together are essential for scaling to 400Gbps and beyond. This section included an overview of QSFP-DD and showed that moving from traditional 100Gbps transponders to pluggables means each 400Gbps QSFP-DD removes the need for four transponders and eight grey optics.  
  • Traditional service provider networks have the following transport network building blocks. 
    • Routers. 
    • OTN switches. 
    • Transponders. 
    • ROADMs. 
  • These traditional building blocks are the reason for large CAPEX in service providers; converging them would ultimately reduce complexity and cost. There are three key tools that allow this convergence and lead into (RON) Routed Optical Networking.  
    • Convergence with IP / MPLS (Segment Routing).  
    • Simplify the architecture with hop-by-hop IP forwarding.  
    • Finally (DCO) Digital Coherent Optics.  
  • RON is a combination of Silicon One, DCO Optics, Software (SR and EVPN) and Systems.  
  • In a traditional service provider there are separate packet, OTN and DWDM control planes. With RON there is a single automation and management plane and a single control plane: IP / MPLS Segment Routing (including circuit-style SR) and EVPN.  
  • RON brings simplicity by reducing protocols and physical footprint.  
  • RON is applicable to (DCI) Data Centre Interconnect, peering, and the access and aggregation layers.  
  • RON delivers ~45% (TCO) Total Cost of Ownership savings according to ACG Research. The savings come in the following areas: 
    • Equipment cost.   
    • Power.  
    • Cooling.   
    • Space.  
    • Operations (Personnel and Automation).  
  • The journey to RON doesn’t happen overnight unless it’s greenfield. The following steps get us to the end goal of a RON network.  
    • Integrate the transponder function into the router using DCO optics and automate both the OLS and the IP layer.  
    • Converge services into a single layer. Private line services are transported on the IP layer thanks to private line emulation and circuit-style (SR) Segment Routing, emulating OTN switching capability without dedicated equipment. 
    • Simplify the network with a converged IP and Optical Architecture using a single control plane.    
  • RON isn’t just for greenfield: the NCS 2000 can be integrated, and existing Ciena and Infinera DWDM is compatible with Cisco optics, so it is third-party compatible.  
  • A converged SDN IP transport architecture provides SR-PCE end-to-end path optimisation with SLAs. This carries services such as BGP L3VPN / L2VPN (EVPN), transported over SR-MPLS or SRv6.  
    • Interestingly, Cisco confirmed that some public sector organisations and large enterprises are utilising SRv6, because it isn’t reliant on MPLS and can be built purely on IP connectivity.  
    • This connectivity stitches together the SDN Metro, SDN Core and SDN DC domains by connecting them all to an SDN controller.  
  • The ultimate level of simplification is the use of SRv6, which takes advantage of the inherent scalability of IPv6. 
  • The following example was used to show the reduction in protocols required when moving from a legacy service provider to next generation.  
    • Traditional SP Unified-MPLS uses the following services / transport: 
      • Services: MP-BGP, LDP.  
      • Transport: BGP-LU, RSVP, LDP, IGP, MPLS.  
    • Next generation using MPLS SR with Controller uses the following services / transport:  
      • Services: MP-BGP. 
      • Transport: IGP / SR, MPLS.  
    • Next generation using SRv6 uses the following services / transport:  
      • Services: MP-BGP. 
      • Transport: IGP / SRv6.  
  • To finish: Segment Routing is the future, and this is industry wide, not just Cisco. SRv6 doesn’t need MPLS; if you’re IP enabled you can move to SRv6 (see the sketch below).  
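
To make the “SRv6 is just IPv6” point more concrete, here is a minimal sketch of my own (not from the session) using the Scapy library. It builds an SRv6 packet to show that the segment list is nothing more than a list of IPv6 addresses carried in a Segment Routing extension header, with no MPLS label stack anywhere; all addresses are made-up documentation-range examples.

    from scapy.all import IPv6, IPv6ExtHdrSegmentRouting, UDP

    # Hypothetical segment list in SRH order: segments are stored in reverse,
    # so the first hop to visit (2001:db8:0:1::1) is the last entry.
    srh_segments = ["2001:db8:0:3::100", "2001:db8:0:2::1", "2001:db8:0:1::1"]

    pkt = (
        IPv6(src="2001:db8:0:a::1", dst=srh_segments[-1])  # dst = active segment
        / IPv6ExtHdrSegmentRouting(addresses=srh_segments, segleft=2)
        / UDP()
    )

    pkt.show()  # layers: IPv6 -> IPv6ExtHdrSegmentRouting -> UDP, no MPLS

Each SRv6-capable hop simply decrements Segments Left and rewrites the destination address to the next segment, which is why plain IPv6 reachability is all that is needed underneath.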

10:00: Introduction to Containers and Cloud Native

  • Session was presented by Jock Read. 
  • Containers, what are they?  
    • Application wrapper.  
    • Linux process and isolation.  
    • Sometimes looks like a (VM) Virtual Machine, but it’s not a VM as there are no hardware drivers.  
    • They’re fast, consistent, and portable.  
  • Containers have existed for a very long time; however, Docker has revolutionised their use. They allow us to get more done.  
  • Learning Linux is the key to containers; all the tooling and troubleshooting is Linux.  
  • All of this was followed up by a short demonstration of the installation and use of Docker and Rancher containers. In the demonstration VS Code was used and a Python application was containerised. The demonstration can be found at the link below.  
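
For context, the snippet below is my own minimal sketch of the sort of Python application that gets containerised in a demo like this; the port and message are assumptions, not details from the session. It uses only the standard library, so the image needs nothing beyond a Python base layer.

    # app.py - a tiny, dependency-free web app of the sort you'd containerise.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Hello(BaseHTTPRequestHandler):
        def do_GET(self):
            # Return the same plain-text response to every request.
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"Hello from inside a container\n")

    if __name__ == "__main__":
        # Bind to all interfaces so a published container port can reach it.
        HTTPServer(("0.0.0.0", 8080), Hello).serve_forever()

A container built from this is just that one Linux process with its own isolated filesystem and network namespaces, which is exactly the “looks like a VM but isn’t” point above: there is no guest kernel and there are no hardware drivers.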

10:45: BRKDCN-3900: Network Engineers Blueprint for ACI Forwarding

All in all, this was a heavy session with an extremely deep dive into ACI forwarding, all the way down to the ASIC types and flows. It was amazing that so much information could be crammed in. I am planning on doing a post on this session separately to really share all the ACI troubleshooting tips and tricks that I learnt, but I will summarise the tools here in the meantime.  

  • Session was presented by Joseph Young.  
  • ACI Tools:  
    • (EP) EndPoint Tracker (UI).  
      • Where is the EP learnt? 
      • Have there been any state transitions?  
      • Is the EP behind a L3out?  
    • Atomic Counters.  
      • Shows the drops on the Overlay.  
      • Shows latency.  
      • Used for debugging buffer drops.  
    • Tenant Visibility.  
      • Can be found in the UI “Operational / Packets / * Drops”  
    • Port Counters.  
      • Never neglect the port counters.  
      • CRC errors indicate a bad FCS.  
      • A stomped CRC indicates the frame was already corrupt and was forwarded anyway because the switch is cut-through; the error originated upstream.  
      • Buffer Drops show congestion and potential broadcast storms.  
      • Use moquery to check port counters fabric wide (see the sketch after this list).  
    • ELAM.  
      • This is the most powerful tool in ACI.  
      • It’s a trip wire in hardware.  
      • The ASIC needs to be referenced for ELAM.  
      • The ereport command makes the ELAM output human readable.  
      • The ereport output can be grepped for specific information.  
      • ELAM is always correct, software captures aren’t as accurate as ELAM.  
    • Traceroute.  
      • Doesn’t work great; you can use itraceroute, but you should rely on devices external to ACI for traces.  
    • FTRIAGE.  
      • Orchestrates end-to-end ELAMs.  
      • It can be slow, so it’s not done all the time; if you’re running FTRIAGE, go and grab a coffee.  
      • You need a consistent flow for FTRIAGE to work.  
    • SPAN / ERSPAN.   
      • Very useful in ACI.  
      • Always have an L3 endpoint ready for SPAN for fault finding.  
    • External Tools.  
      • Netflow.  
      • Flow Telemetry.  
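
On the “check port counters fabric wide” tip: moquery is essentially a CLI front end to the APIC object model, so the same counters can be pulled for the whole fabric in one REST call. The sketch below is my own illustration rather than anything shown in the session; the APIC address and credentials are placeholders, and I’m assuming the rmonEtherStats class is the right one for the CRC/FCS counters mentioned above.

    import requests

    APIC = "https://apic.example.com"   # placeholder APIC address
    LOGIN = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

    s = requests.Session()
    # Authenticate once; the returned cookie is reused for the class query.
    s.post(f"{APIC}/api/aaaLogin.json", json=LOGIN, verify=False)

    # A single class query returns the RMON port counters for every switch,
    # roughly what "moquery -c rmonEtherStats" gives you on one node.
    resp = s.get(f"{APIC}/api/class/rmonEtherStats.json", verify=False)

    for obj in resp.json()["imdata"]:
        attrs = obj["rmonEtherStats"]["attributes"]
        crc = int(attrs.get("cRCAlignErrors", 0))
        if crc:
            # The dn identifies which node and port the counter belongs to.
            print(f'{attrs["dn"]}: CRC/alignment errors = {crc}')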

15:00: DEVNET-2409: Automate Migration from Cisco or 3rd Party to ACI

  • Session presented by Vladamir Joshevski and Bilgehan Oz.  
  • Automated Migration (AUTOMIG) is powered by Cisco NSO.  
  • Cisco NSO supports 200+ device types, so it can easily pull configuration from a plethora of vendors and hardware.  
  • NSO is used to discover, map, and deploy (see the sketch after this list).  
  • The NSO UI was demonstrated during the session, VLANs were migrated from Catalyst 9Ks to ACI.  
  • I did note during the demonstration that no policy configuration was taken into account, and there was a lack of information around contracts and preferred group configuration, although I did see that the default was to include the preferred group. I stayed behind to ask about contracts, and the team confirmed that anything available in ACI can be configured from NSO before deployment. There is a lab available which I didn’t manage to get onto; hopefully I can track it down in future.  
  • In summary, NSO looks like a great tool and something I would use in future migrations. If it works as demonstrated, the rollback feature looks very easy to use, although I’m unaware of the costs involved.  
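
As a rough idea of what the discover step has to work with, NSO can expose its device inventory over RESTCONF. The sketch below is my own, not part of the demo, assuming RESTCONF is enabled and using placeholder host and credentials; it simply lists the devices NSO is managing and could pull configuration from before a migration.

    import requests

    NSO = "http://nso.example.com:8080"   # placeholder NSO instance
    HEADERS = {"Accept": "application/yang-data+json"}

    # The managed-device inventory lives under the tailf-ncs:devices tree.
    resp = requests.get(
        f"{NSO}/restconf/data/tailf-ncs:devices/device",
        auth=("admin", "admin"),          # placeholder credentials
        headers=HEADERS,
    )
    resp.raise_for_status()

    # Print each managed device's name and address from the returned list.
    for device in resp.json().get("tailf-ncs:device", []):
        print(device["name"], device.get("address"))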

The Hub and World of Solutions

After my final session of the day, and before the closing keynote, we took a walk around The World of Solutions and had a go at the cycling competition to see if we could beat the highest power output. Stephen and I failed to beat the record but had a good laugh having a go. 

Later we chatted with the team at Megaport, a great-looking product that is well worth a look if you want connectivity to cloud services from your DC location, between your on-premises sites, or into IXs. 

Following on from this we all made sure we completed our session surveys and the event survey so we could go and collect our Cisco Live Amsterdam 2023 t-shirt!  

17:30: Closing Keynote

  • Session presented by Pierluigi Collina.  
  • A great presenter who focused on the technicalities of football and the pressures of refereeing top-level matches. However, this was quite niche; I’m not much of a football fan myself, so I wasn’t fully on board until VAR and the new offside technology were discussed.  
  • Overall, a fantastic speaker and leader in his field, a very interesting non-Cisco speaker to wind down from a heavy week.  

Friday

After an incredible week jam-packed with non-stop technology, I was ready for an extra couple of hours’ sleep on the Friday morning, followed by a walk around Amsterdam and a mini holiday, some Albert Heijn sushi, and the Eurostar home with a few hours to write up my week.  

In summary, I highly recommend Cisco Live. It was a fantastic experience where I learnt more than I could write in these posts; I met some incredible people and had a great time. It felt great to be around like-minded engineers who deal with the real technical work on a day-to-day basis and, like me, have a vested interest in improving the industry as we move forward.  

If I could change anything from the week, I would have spoken to more of the sponsors in The World of Solutions and sat in on more of the walk-in technical sessions and labs that were available in The Hub. I found that most of my sessions were based around Service Provider and Data Centre technologies, which all linked in nicely; however, this was by chance. I should have looked at the sessions I was booking prior to the event with a wider view of what I was trying to achieve, as I could easily have ended up jumping between network technologies and getting just a high-level overview of everything.