Page 48 - Market Analysis Report of Optical Communications Field in China & Global market 2022
Artificial Intelligence and the Impact on Our Data Centers
Tony Robinson, Global Marketing Applications Manager
Corning Optical Communications
It never ceases to amaze how filmmakers are able to
introduce concepts that at the time seem so far from
reality, but in time those concepts make it into our daily
lives. In 1990, the Arnold Schwarzenegger movie “Total
Recall” showed us the “Johnny Cab,” a driverless vehicle
that took passengers anywhere they wanted to go. Now, most
major car companies are investing millions into bringing
this technology to the masses. And thanks to “Back to
the Future II,” where Marty McFly evaded the thugs on a
hoverboard, our kids are now crashing into the furniture
(and each other) on something similar to what we saw back
in 1989.
It was way back in 1968 (which some of us can still
remember) when we were introduced to Artificial
Intelligence (AI) with HAL 9000, a sentient computer
on board the Discovery One spaceship in “2001: A
Space Odyssey.” HAL was capable of speech and facial
recognition, natural language processing, lip reading, art
appreciation, interpreting emotional behaviors, automated
reasoning, and, of course, Hollywood’s favorite trick for
computers, playing chess.

Thoughtful servers

So how does AI impact the data center? Well, back in 2014
Google deployed DeepMind AI (using machine learning,
an application of AI) in one of their facilities. The result?
They were able to consistently achieve a 40 percent
reduction in the amount of energy used for cooling, which
equated to a 15 percent reduction in overall PUE overhead
after accounting for electrical losses and other non-cooling
inefficiencies. It also produced the lowest PUE the site
had ever seen. Based on these significant savings, Google
looked to deploy the technology across their other sites and
suggested other companies will do the same.

Facebook’s mission is to “give people the power to
build community and bring the world closer together,”
as outlined in their white paper Applied Machine Learning
at Facebook: A Datacenter Infrastructure Perspective,
which describes the hardware and software infrastructure
that supports machine learning at a global scale. To give
you an idea of how much computing power AI and ML
need, Andrew Ng, chief scientist at Baidu’s Silicon Valley
Lab, said training one of Baidu’s Chinese speech recognition
models requires not only four terabytes of training data
but also 20 exaFLOPS of compute, or 20 billion billion
math operations, across the entire training cycle.

But what about our data center infrastructure? How
does AI impact the design and deployment of all of the
different-sized and -shaped facilities that we are looking to
build, rent, or refresh to accommodate this innovative, cost-
saving, and life-saving technology?

Machine learning (ML) can be run on a single machine
but, given the incredible amount of data throughput, is
typically run across multiple machines, all interlinked to
ensure continuous communication during the training and
data processing phases, with low latency and absolutely
no interruption to the service at our fingertips, screens, or
audio devices. As a human race, our desire for more and
more data is driving exponential growth in the amount of
bandwidth required to satisfy our most simple of whims.

This bandwidth needs to be distributed within and across
multiple facilities using more complex architecture
designs, where spine-and-leaf networks no longer cut it;
we are talking about super-spine and super-leaf networks
to provide a highway for all of the complex algorithmic
computing to flow between different devices and ultimately
back to our receptors.
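One reason flat spine-and-leaf designs run out of headroom is simple port arithmetic: east-west ML traffic is limited by the oversubscription ratio at each leaf switch, and holding that ratio down eventually demands more uplink capacity than a single spine tier can supply, which is where a super-spine tier comes in. A minimal sketch of that arithmetic, using assumed port counts and speeds rather than any particular vendor's design:

```python
# Sketch of leaf-switch oversubscription arithmetic. All port counts
# and speeds below are illustrative assumptions, not a reference design.

def oversubscription(server_ports, server_gbps, uplink_ports, uplink_gbps):
    """Ratio of southbound (server-facing) to northbound (spine-facing) capacity."""
    return (server_ports * server_gbps) / (uplink_ports * uplink_gbps)

# A leaf with 48 x 25G server ports and 8 x 100G spine uplinks (assumed):
ratio = oversubscription(48, 25, 8, 100)
print(f"Oversubscription: {ratio:.1f}:1")  # 1200G down vs. 800G up -> 1.5:1
```

Pushing the ratio toward 1:1 for bandwidth-hungry training traffic means adding uplinks and spine ports, and past a certain fabric size that only scales by stacking another switching tier on top.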
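Google's reported result, a 40 percent cut in cooling energy yielding a 15 percent cut in PUE overhead, follows from the definition of PUE as total facility power divided by IT power: cooling is only one slice of the non-IT overhead, so a large cooling saving shrinks the overhead by a smaller fraction. A minimal sketch of the arithmetic, with assumed load figures (not Google's actual numbers, which is why the overhead reduction below differs from their 15 percent):

```python
# Illustrative PUE arithmetic. The load figures below are assumptions
# chosen for round numbers, not Google's actual facility data.

def pue(it_kw, cooling_kw, other_overhead_kw):
    """PUE = total facility power / IT equipment power."""
    return (it_kw + cooling_kw + other_overhead_kw) / it_kw

it_load = 1000.0  # kW drawn by servers, storage, and network gear (assumed)
cooling = 250.0   # kW for chillers, CRAC units, and fans (assumed)
other = 150.0     # kW of electrical losses, lighting, etc. (assumed)

before = pue(it_load, cooling, other)              # (1000+250+150)/1000 = 1.40
after = pue(it_load, cooling * (1 - 0.40), other)  # 40% cooling cut -> 1.30

# Fraction of the PUE overhead (everything above 1.0) that was removed:
overhead_cut = (before - after) / (before - 1)
print(f"PUE before: {before:.2f}, after: {after:.2f}")
print(f"Overhead reduction: {overhead_cut:.0%}")
```

The exact overhead reduction depends on how large cooling is relative to the rest of the non-IT load, which is why Google's figure reflects their particular mix of electrical losses and other inefficiencies.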