
Blink: You're Too Slow

December 11, 2012

When playing in the high speed switching game, timing is everything.  Timing 'sets the pace' for visibility to establish the 'where and when,' for correlation across a broad computing environment, and for compliance and digital forensics with precision time stamps.  Every element of the data center requires accurate timing at a level that leaves no room for error.

Speed is the other, more celebrated, if more obvious, requirement of the high speed switching game.  Speed that is measured in increments requiring some new additions to my vocabulary.

When looking at the ways we measure speed and regulate time throughout the network, I was of course familiar with NTP, or Network Time Protocol.  NTP provides millisecond timing...which, crazy as it sounds, is WAY TOO SLOW for this high speed market.  Now, being from the South, I may blink a little slower than other people, but I read that the average time it takes to blink an eye is 300 to 400 milliseconds!  A millisecond is a thousandth of a second.  That is considered slow?

Turns out 'microsecond' level detail is our next consideration.  A microsecond is equal to one millionth (10⁻⁶, or 1/1,000,000) of a second.  One microsecond is to one second as one second is to 11.57 days.  To keep our blinking example alive: a blink is roughly 350,000 microseconds.  Still too slow.

Next unit of measure?  The nanosecond.  A nanosecond is one billionth of a second.  One nanosecond is to one second as one second is to 31.7 years.  The time it takes to blink is just silly at this point.
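For the curious, here is a tiny Python sketch of the unit arithmetic above (the 350,000-microsecond figure is simply the midpoint of the 300 to 400 millisecond blink):

# Scale check: an average eye blink (~350 ms) expressed in each unit,
# plus the "one X is to one second as one second is to..." analogies.

BLINK_SECONDS = 0.350  # midpoint of the 300-400 millisecond range

print(f"Blink in milliseconds: {BLINK_SECONDS * 1e3:,.0f} ms")
print(f"Blink in microseconds: {BLINK_SECONDS * 1e6:,.0f} us")
print(f"Blink in nanoseconds:  {BLINK_SECONDS * 1e9:,.0f} ns")

SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY

# 1 second contains 1e6 microseconds, so the analogy is 1e6 seconds in days.
print(f"1e6 seconds = {1e6 / SECONDS_PER_DAY:.2f} days")
# 1 second contains 1e9 nanoseconds, so the analogy is 1e9 seconds in years.
print(f"1e9 seconds = {1e9 / SECONDS_PER_YEAR:.1f} years")

Running it prints 11.57 days and 31.7 years, matching the analogies above.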

At one point I thought higher speeds were attainable simply through more bandwidth, which may be why the idea of 'low latency' seems so counter-intuitive at first.  As you hopefully understand by now, there are limits to how fast data can move, and real gains in this area can only be achieved through gains in efficiency - in other words, the elimination (as much as possible) of latency.

For Ethernet, speed really is about latency.  Ethernet switch latency is defined as the time it takes a switch to forward a packet from its ingress port to its egress port.  The lower the latency, the faster the device can move packets toward their final destination.  Also important within this 'need for speed' is avoiding packet loss.  The magic is in the balancing act: speed and accuracy at levels that challenge our traditional understanding of physics.
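To put nanoseconds in context, here is a rough, back-of-the-envelope calculation (illustrative numbers only): at 10 Gb/s, simply serializing a minimum-size 64-byte Ethernet frame onto the wire, including preamble and inter-frame gap, takes about 67 nanoseconds - the same order of magnitude as the 190-nanosecond switch latency discussed below.

# Rough, illustrative numbers only: serialization delay of a minimum-size
# Ethernet frame at 10 Gb/s, compared with a ~190 ns switch forwarding latency.

LINK_RATE_BPS = 10e9        # 10 Gigabit Ethernet
FRAME_BYTES = 64            # minimum Ethernet frame size
OVERHEAD_BYTES = 8 + 12     # preamble/SFD plus inter-frame gap

wire_bits = (FRAME_BYTES + OVERHEAD_BYTES) * 8
serialization_ns = wire_bits / LINK_RATE_BPS * 1e9

SWITCH_LATENCY_NS = 190     # figure cited for the Nexus 3548 in this post

print(f"Serialization of a 64-byte frame at 10G: {serialization_ns:.1f} ns")
print(f"Switch forwarding latency:               {SWITCH_LATENCY_NS} ns")
print(f"Blink of an eye (~350 ms):               {350e6:,.0f} ns")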

Cisco's latest entrant in the world of high speed trading is the Nexus 3548: a slim, 48-port, line-rate switch with latency as low as 190 nanoseconds.  It includes a Warp switch port analyzer (SPAN) feature that facilitates delivery of stock market data to financial trading servers in as little as 50 nanoseconds, plus multiple other tweaks we uncover in this one-hour deep dive into the fastest switch on the market.  It is the first member of the 2nd-generation Nexus 3000 family.  (We featured the first-generation Nexus 3000 series in April 2011.)

This is a great show - it moves fast!

https://www.youtube.com/watch?v=gHPxrd3hdkU

Segment Outline:

  •  Robb & Jimmy Ray with Keys to the Show
  •  Berna Devrim introduces us to Cisco Algo Boost and the Nexus 3548
  •  Will Ochandarena gives us a hardware show and tell
  •  Jacob Rapp walks us through a few live simulations
  •  Chih-Tsung, ASIC designer, walks us through the custom silicon

 

Further Reading:

Nexus 3548 Press Release

 

Cisco Blogs

Jacob Rapp:  Benchmarking at Ultra-Low Latency

Gabriel Dixon: The Algo Boost Series

  • Part 1: Nexus 3548 Latency Innovations
  • Part 2: Customer Perspectives on the Nexus 3548
  • Part 3: Commitment to Innovation

Dave Malik: Cisco Innovation provides Competitive Advantage

Tags: Data Center, TechWiseTV, ASIC, Nexus 3548, Algo Boost, ultra-low latency, HFT
