From cars that autonomously navigate dark and icy roads, to MRI scanners trained to spot brain abnormalities, to warehouses managed by sensors, drones, and robots, machine learning is already transforming industries in profound ways.
These applications are emerging even as Moore’s Law falters, having run up against the limits of semiconductor physics. For four decades, we could count on computing power doubling roughly every two years. Now, traditional processors deliver only about 10% more performance over the same period. The steady gains that carried information technology through the PC, mobile, and cloud eras can no longer be relied on to propel the promise of machine learning.
Instead, graphics processing units (GPUs) – chips that evolved from those powering image-intensive video games and professional visualization applications – will provide the computational muscle needed to drive the machine learning revolution. A new computing model, called accelerated computing, exploits the GPU’s massively parallel architecture to train the complex algorithms behind machine learning software.
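To make the idea concrete, here is a minimal sketch of what "moving training onto the GPU" means in practice. It uses PyTorch purely as an illustration; the framework choice, the tiny model, and the synthetic data are assumptions for the example, not part of any particular vendor's offering.

```python
import torch
import torch.nn as nn

# Use the GPU when one is present; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small stand-in model; real machine learning workloads are far larger.
model = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Synthetic batch standing in for real training data.
inputs = torch.randn(64, 1024, device=device)
labels = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()        # backpropagation: the compute-heavy step GPUs parallelize
optimizer.step()
print(f"trained one step on {device}, loss = {loss.item():.4f}")
```

The training step itself is identical on either processor; the economics change because the GPU executes the underlying matrix arithmetic across thousands of cores at once.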
However, most companies’ data centers, where this training must take place, run on servers built around traditional CPUs. That is hardly surprising, given that machine learning has only recently moved toward mainstream business operations. An enterprise that intends to transform itself with machine learning will need to invest in the right combination of hardware and software to tap the vast promise of AI.
The power behind the algorithms
Machine learning is poised to change the way business is done across a range of industries. Consider the following examples.
Transportation. Automakers, at the forefront of AI’s transformation of the $10 trillion transportation industry, are racing to show how AI can differentiate their brands. Safety will be high on the list: each year there are tens of millions of traffic accidents worldwide and more than a million fatalities. Companies around the world are putting compact, GPU-powered supercomputers into vehicles to guide autonomous cars.
The same holds true for truck manufacturers and logistics businesses. GPU-powered servers in the data center are being used to virtually train autonomous trucks and other vehicles to drive millions of miles on high-definition mapped roads under a broad range of weather, road, and traffic conditions. Paired with such simulated driving, the algorithms that run autonomous vehicles can keep learning from data collected in actual driving and make decisions in real time.
Healthcare. Medical imaging alone is estimated to become a $49 billion market worldwide by 2020, making it the biggest source of data in healthcare. Radiology, a prime area for machine learning advances, accounts for a large share of those images. According to Academic Radiology, the average radiologist must interpret a CT or MRI image every three to four seconds to meet workload demands; over an eight-hour workday, that adds up to roughly 8,000 images per radiologist.
AI algorithms can be trained to spot abnormalities using real and simulated medical images. This makes devices such as MRI scanners the first line of defense in spotting disease. These and similar devices can speed diagnosis, greatly improve accuracy, and allow doctors to concentrate their energies on the most difficult cases.
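As a rough illustration of that triage idea, the sketch below shows how a GPU-resident classifier could score incoming scans and flag the most suspicious ones for a radiologist to review first. The tiny model, the 0.5 threshold, and the synthetic "scans" are hypothetical placeholders, not a clinical system.

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder CNN; in practice this model would be trained on labeled images.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 2),   # 64x64 input -> 16x16 feature map after pooling
).to(device).eval()

# Synthetic single-channel 64x64 images standing in for MRI slices.
scans = torch.randn(10, 1, 64, 64, device=device)

with torch.no_grad():
    probs = torch.softmax(model(scans), dim=1)[:, 1]   # estimated P(abnormal) per scan

flagged = (probs > 0.5).nonzero(as_tuple=True)[0]
print("scans flagged for priority review:", flagged.tolist())
```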
Manufacturing and agriculture. Advances in image recognition are creating a range of industrial Internet of Things (IoT) opportunities. For example, IoT is becoming central to warehouses and fulfillment centers, where machine learning – fueled by image recognition, sensors, and data – steers robots safely among human workers.
Manufacturing companies are using connected machines such as drones and robots to inspect industrial equipment, which can save companies tens of millions of dollars annually. Industrial farming won’t be left behind: images captured by drones and satellites will be analyzed with machine learning to boost crop yields. Farming companies can use those images and algorithms to monitor soil conditions and overall crop health, while analytics track and predict weather changes that could affect yields.
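One simple, widely used building block for that kind of crop-health analysis is the normalized difference vegetation index (NDVI), computed from a satellite image’s red and near-infrared bands. The sketch below uses synthetic arrays as stand-ins for real band rasters; the 0.3 stress threshold is an assumption for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.3, size=(512, 512))   # red-band reflectance
nir = rng.uniform(0.2, 0.6, size=(512, 512))    # near-infrared reflectance

# NDVI ranges from -1 to 1; higher values indicate healthier vegetation.
ndvi = (nir - red) / (nir + red + 1e-9)

# Flag the share of the field whose vegetation index falls below a chosen threshold.
stressed_fraction = float((ndvi < 0.3).mean())
print(f"share of field showing possible crop stress: {stressed_fraction:.1%}")
```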
An infrastructure for machine learning
All told, the nascent business opportunities enabled by massive data collection and algorithm-driven decision-making will require rethinking the data center. Without investment in enterprise IT infrastructure, machine learning can’t deliver on its promise.
A critical step toward business transformation is making sure the organization’s data center can support compute-intensive workloads. GPU-accelerated computing redefines the economics of data center computing, replacing racks of CPU-based servers with a fraction of the hardware, installation effort, power, and cost. For example, a company could potentially replace 300 CPU-based servers with one or two GPU-accelerated servers, for a cost savings of more than 85%.
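As a back-of-envelope illustration of that consolidation math, the unit prices below are hypothetical assumptions chosen only to show the arithmetic; they are not vendor quotes, and real figures will vary by workload.

```python
# Hypothetical unit costs, in dollars, for illustration only.
CPU_SERVERS, CPU_SERVER_COST = 300, 9_000
GPU_SERVERS, GPU_SERVER_COST = 2, 150_000

cpu_total = CPU_SERVERS * CPU_SERVER_COST
gpu_total = GPU_SERVERS * GPU_SERVER_COST
savings = 1 - gpu_total / cpu_total

print(f"CPU cluster: ${cpu_total:,}  GPU servers: ${gpu_total:,}  savings: {savings:.0%}")
```

Under these assumed prices, two GPU-accelerated servers cost roughly 89% less than the 300-server CPU cluster they replace, in line with the savings figure cited above.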
Those managing a company’s data center infrastructure need to ensure they have enough accelerated computing power and storage to handle the data their machine learning workloads demand. That means evaluating the whole picture to understand the considerable savings that can come from modernizing the architecture for the AI era.
Business leaders who perform due diligence to ensure their hardware is a match for their company’s machine learning ambitions will quickly understand the value of GPU computing.
To learn more about the technology requirements for deep learning, check out this webcast on May 24, 2018 and this white paper.