Page 50 - Market Analysis Report of Optical Communications Field in China & Global market 2022
3. Simple and clear migration

The technology road map of the major switch and transceiver vendors shows a very clear and simple migration path for customers who deploy parallel optics. I mentioned earlier that the majority of the tech companies have followed this route, so when the optics are available and they migrate from 100G to either 200G or 400G, their fiber infrastructure remains in place with zero upgrades required. Those companies who decide to stay with a duplex, 2-fiber infrastructure may find themselves wanting to upgrade beyond 100G, only to discover that the WDM optics are not available within the time frame of their migration plans.

Impact on data center design

From a connectivity perspective, these networks are heavily meshed fiber infrastructures that ensure no server is more than two network hops from any other. But the bandwidth demand is such that even the traditional 3:1 oversubscription ratio from the spine switch to the leaf switch is no longer sufficient; that ratio is now more typically used for the distributed-computing links from the super-spines between the different data halls.

Thanks to the significant increase in switch IO speeds, network operators are striving for better utilization, higher efficiencies, and the ultra-low latency mentioned earlier by designing their systems with a 1:1 subscription ratio from spine to leaf, an expensive but necessary requirement in today's AI-crunching environment.

Additionally, there is another shift away from traditional data center design following the recent announcement from Google of its latest AI hardware, a customized ASIC called the Tensor Processing Unit (TPU 3.0), which, in its giant pod design, will be eight times more powerful than last year's TPUs, delivering over 100 petaflops. But packing even more computing power into the silicon also increases the energy needed to drive it, and therefore the heat it generates, which is why the same announcement said Google is shifting to liquid cooling at the chip: the heat generated by TPU 3.0 has exceeded the limits of its previous data center cooling solutions.

In conclusion

AI is the next wave of business innovation. The advantages it brings, from operational cost savings, additional revenue streams, and simplified customer interaction to much more efficient, data-driven ways of working, are just too attractive, not only to your CFO and shareholders but also to your customers. This was confirmed in a recent panel discussion, where the moderator, talking about websites that use chatbots, claimed that if the experience was not efficient and customer-focused enough he would drop the conversation, and that company would never receive his business again.

So we have to embrace the technology and use it to our advantage, which also means adopting a different way of thinking about data center design and implementation. Thanks to the significant increase in ASIC performance, we will ultimately see IO speeds rise, driving connectivity even deeper. Your data centers will need to be super-efficient, highly fiber-meshed, ultra-low-latency, East-West spine-and-leaf networks that accommodate your day-to-day production traffic while supporting ML training in parallel, which conveniently brings me to wrap this up.

We have seen how the major tech companies have embraced AI and how deploying parallel single-mode has helped them achieve greater capital and operational cost savings than traditional duplex methods, which promise lower costs from day one. But operating a data center starts at day two and continues to evolve as our habits and ways of interacting, personally and professionally, continue to change, increase in speed, and add further complexity. Installing the right cabling infrastructure solution now will enable your business to reap greater financial benefits from the outset, retain and attract more customers, and give your facility the flexibility to flourish no matter what demands are placed on it.
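
The migration argument above comes down to fiber counts per link: parallel optics keep the same multi-fiber plant from 100G to 400G, while duplex optics depend on new WDM transceivers becoming available. A minimal sketch of that comparison (the transceiver names and fiber counts below are the commonly published values for these optic families, included here as illustrative assumptions rather than figures from this article):

```python
# Fibers per link for common transceiver families. Parallel optics run over
# multi-fiber MPO links; duplex optics use 2 fibers and rely on WDM to scale
# the data rate over the same pair.
FIBERS_PER_LINK = {
    "100G-PSM4 (parallel)": 8,
    "400G-DR4 (parallel)": 8,   # same 8-fiber plant -> zero cabling upgrades
    "100G-CWDM4 (duplex)": 2,
    "400G-FR4 (duplex)": 2,     # requires 400G WDM optics to be available
}

def cabling_upgrade_needed(current: str, target: str) -> bool:
    """True if moving between optic types changes the fiber count per link,
    i.e. the installed cabling infrastructure would have to change."""
    return FIBERS_PER_LINK[current] != FIBERS_PER_LINK[target]

# Parallel 100G -> parallel 400G reuses the installed 8-fiber links.
print(cabling_upgrade_needed("100G-PSM4 (parallel)", "400G-DR4 (parallel)"))  # False

# A duplex shop moving to parallel 400G must re-cable.
print(cabling_upgrade_needed("100G-CWDM4 (duplex)", "400G-DR4 (parallel)"))   # True
```

This is the "zero upgrades required" point in code form: the fiber count, not the data rate, determines whether the cabling plant survives the migration.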
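
The 3:1 versus 1:1 subscription ratios discussed in the design section are simple arithmetic on leaf-switch port bandwidth. A short sketch, where the port counts and speeds are illustrative assumptions chosen to produce the two ratios named in the text:

```python
def oversubscription(downlink_ports: int, downlink_gbps: int,
                     uplink_ports: int, uplink_gbps: int) -> float:
    """Ratio of server-facing (downlink) bandwidth to spine-facing (uplink)
    bandwidth on a leaf switch. 1.0 means a non-blocking (1:1) design."""
    return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

# A traditional leaf: 48 x 100G to servers, 4 x 400G up to the spine -> 3:1
print(oversubscription(48, 100, 4, 400))   # 3.0

# A non-blocking leaf for AI/ML traffic: 32 x 400G down, 32 x 400G up -> 1:1
print(oversubscription(32, 400, 32, 400))  # 1.0
```

The expense of the 1:1 design is visible in the second case: uplink bandwidth (and therefore spine ports and fiber) must equal the full server-facing capacity instead of a third of it.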