Going Deep
Issue: Volume 40, Issue 3 (May/Jun 2017)

The main topic of the 2016 Nvidia GPU Technology Conference was virtual reality, with a smaller emphasis on deep learning, particularly its application to self-driving vehicles. What a difference a year makes.

At the 2017 conference earlier this month, the main focal point was deep learning, with continued, though far less, attention given to virtual, augmented, and mixed reality than in the previous year. In 2016, Jensen Huang, founder and CEO of Nvidia, had addressed the rise of autonomous vehicles in his keynote. That was just the tip of a much larger iceberg.

For anyone attending this year's conference, it would have been difficult not to notice the importance placed on deep learning across the varied industries represented. It was the topic of many sessions and was heavily stressed by Huang in his latest keynote.

What is deep learning? It is a class of machine learning in which software attempts to “think” using a deep neural network, which learns to recognize patterns in digital representations of images and other data. This may sound like the stuff of far-off science fiction, but in reality, it is the science of today. In fact, the so-called big bang of AI has already occurred, resulting in what Huang calls “one of the most amazing progresses in computer science.”

There are a few main ingredients required for deep learning: algorithms, enormous amounts of data, and equally enormous computational capability for training and developing a model.

For many years, the industry had been governed by Moore’s law, the widely accepted maxim of microprocessor development that processing power doubles roughly every 18 months to two years. That is now changing. Engineers at Nvidia began to forge a different path forward with CUDA, a parallel computing platform that enables dramatic increases in computing performance by harnessing GPUs for extraordinarily fast parallel processing. The past few years have also given rise to massive databases loaded with information – databases that can be used to “train” computers. Algorithms began enabling computers to recognize information: to “look” at an image, for instance, determine what is important about it, and extract from the raw data what the image shows – a dog, a cat, a sunrise, and so forth. The end result is a deep neural network – computer technology that teaches computers to learn by themselves, as opposed to being programmed by human engineers.
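To make the idea concrete, here is a minimal sketch – not Nvidia’s software, and purely illustrative – of what “training” looks like in practice. It assumes the PyTorch library and uses a synthetic stand-in for a labeled image dataset; the three classes (dog, cat, sunrise) simply echo the example above.

```python
# Illustrative sketch only: training a tiny image classifier with PyTorch
# on synthetic data. A real system would use a large labeled dataset and
# train on GPUs via CUDA, but the principle is the same: the network adjusts
# its own weights from data, rather than being programmed with rules.
import torch
import torch.nn as nn

# A tiny convolutional network that learns features from raw pixels.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 3),  # 3 illustrative classes: dog, cat, sunrise
)

device = "cuda" if torch.cuda.is_available() else "cpu"  # use a GPU if present
model.to(device)

# Synthetic stand-in for labeled training images: 64 random 32x32 RGB images.
images = torch.randn(64, 3, 32, 32, device=device)
labels = torch.randint(0, 3, (64,), device=device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training loop: measure the error, then nudge the weights to reduce it.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

With real data in place of the random tensors, the same loop is what “teaches” the network to tell a dog from a cat from a sunrise.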

Huang provided some interesting examples of deep learning in action: a computer that can restyle a photograph as a Monet or Picasso, or one that can learn a style from a single photo and apply it to another by understanding its structure. A robot that learned to play hockey. Self-driving vehicles, which are becoming more common. And language translation software that turns its user into an instant multilinguist.

Indeed, deep learning is revolutionizing transportation. But it is also having a profound effect on other markets, including medicine and health care, retail, finance, security, manufacturing, architecture… even visual effects! And Nvidia has positioned itself in the thick of this development, thanks to a number of new advancements (see related announcements in the news section on CGW.com).

With the advent of deep learning, a new era of computing has begun – one involving artificial intelligence. Think about the possibilities.

Karen Moltenbrey, Editor-in-Chief
karen@CGW.com