Authored by Dan Skinner, Vice President of Strategy & Business Development
The latest installment of the Supercomputing conference (SC17), one of the most widely attended conferences for high-performance computing in the world, was held in Denver earlier this month. With 350 exhibitors representing 27 countries, it’s the place to be if you are a supplier or user of high-performance gear. It’s also a great place to look for the innovations that will someday change the world.
After long hours of walking the exhibition floor and talking to numerous people representing hundreds of products, I was left wondering: “So what?” That sentiment was soon followed by another: “Is there anything truly revolutionary here?”
Everything was bigger, better, faster than last year. But with the end of Moore’s law looming, what will drive bigger, better, faster in the years to come? Is massive scaling the last resort, our last hope, for increasing our compute capability? If so, at what cost? More specifically, if I double the size of my data center, how close will I come to getting 2x the compute capability?
At another conference a few years ago, I joined an expert panel on increasing compute infrastructure without breaking the power bank. I was struck that some of the biggest names in processors and computer architectures all had the same idea – build better, more economical cooling systems for the data centers.
That’s the answer to increasingly power-hungry datacenters – build better cooling systems?
Some suggested geothermal heat sinks, others talked about placing datacenters in locales like Iceland where glaciers could be used for chilling. But none talked about what I thought was an obvious solution – reduce the energy needed to compute a given problem.
If scale-out and better cooling are the primary strategies for building the bigger, better, faster compute systems of tomorrow, then we’ll pay the price in power and cooling, storage, floor space, and overall cost. And then, guess what? For every doubling of scale, I’ll at best approach 2x in increased capability. That’s a significant investment for modest gains. More thoughts on this in a minute.
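To put a rough number on that intuition, Amdahl’s law relates the speedup of a system to the fraction of the workload that actually parallelizes. The sketch below uses a hypothetical workload that is 95% parallelizable – an illustrative figure, not a measurement – to show how little a doubling of scale can buy:

```python
def amdahl_speedup(n_nodes, parallel_fraction):
    """Amdahl's law: overall speedup on n_nodes when only
    parallel_fraction of the workload can be spread across them."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_nodes)

# Doubling a 1000-node system on a hypothetical 95%-parallel workload
# (the 95% figure is illustrative, not measured):
before = amdahl_speedup(1000, 0.95)   # ~19.6x over a single node
after = amdahl_speedup(2000, 0.95)    # ~19.8x over a single node
gain = after / before                 # ~1.01x -- nowhere near 2x
```

The serial fraction dominates long before the node count does, which is exactly why doubling a data center does not come close to doubling its capability.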
Best of SC17
First, a fan favorite: the virtual reality booths. Attendees were quickly immersed in a virtual world, moving through three-dimensional space and manipulating objects with hand and body motions. A headset delivers video and audio to the user, while sensors feed the user’s movements back to the program, which updates the environment in response. VR is still mainly the domain of gamers, but there are interesting offshoots of this technology with potential industrial, medical, and non-gaming home uses. One example (not at SC17) is Black Box VR, a company in Boise, Idaho, that makes a virtual reality fitness experience.
The next most common eye-catchers were the image classification systems. These systems were mainly classifying vehicles for autonomous driving applications, or live-streaming images of people passing by the booth and identifying them as, alas, people. One booth was doing facial recognition, but could only identify workers at the booth whose facial features were previously “trained” into the system.
Despite the constant news of large tech companies testing self-driving vehicles, the classification systems I saw have a long way to go before we can nap behind the wheel on the way to Grandma’s house. At one leading tech company’s booth, I watched numerous vehicles on the road in front of the camera go unrecognized, even vehicles that were braking! I realize collision avoidance systems don’t rely on video imaging, but many other tasks in a self-driving car do. Failing to “see” a stop sign or a pedestrian under all lighting conditions, rotational angles, and levels of obscurity could have deadly results.
The first of my two favorite things at SC17 was ARM, designer of processor cores and IP widely used in mobile and embedded applications around the world. ARM is pushing a distributed compute model, building on its success in mobile phones and IoT applications, where AI takes the form of voice recognition, predictive text, and computational photography. ARM believes that network bandwidth limitations will continue to restrict the compute density of datacenters and that distributed processing at the edge will become increasingly necessary. I agree with their analysis, except that I believe there are huge gains to be made in the method of data processing itself, both in the core of the datacenter and at the edge of the network.
My other favorite at SC17 was a company called Data Vortex, which makes a congestion-free network switch based on concentric rings of data pathways with buses connecting the rings. From what I could learn, it’s a low-latency, low-overhead switch for small data packets with a nearly flat latency response regardless of switch loading. The Data Vortex design hides the control processing time inside the data transfer and far outperforms crossbar switches for small packets. Their technology is truly innovative – not just bigger, better, faster.
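To see why a flat latency curve matters, here is a toy queueing model – my own illustration, not anything from Data Vortex. A conventional switch port that queues under contention behaves roughly like an M/M/1 queue, whose mean latency balloons as load approaches capacity; an idealized congestion-free path stays near its unloaded latency at any utilization. The 100 ns service time is an arbitrary illustrative number:

```python
def mm1_latency(service_time_ns, utilization):
    """Mean time in an M/M/1 queue, T = S / (1 - rho) -- a crude
    stand-in for a switch port that queues packets under load."""
    assert 0.0 <= utilization < 1.0
    return service_time_ns / (1.0 - utilization)

PORT_SERVICE_NS = 100.0  # illustrative, not measured from any real switch

# At 10% load the queued port is close to the flat ideal; at 90% load
# its latency is ten times worse, while the ideal stays put.
latencies = {load: mm1_latency(PORT_SERVICE_NS, load)
             for load in (0.1, 0.5, 0.9)}
```

The gap between the two curves at high load is the congestion penalty that a design like Data Vortex claims to avoid.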
Back to the Question at Hand…
I left Denver wondering whether there is a better solution than massive scale-out of traditional computer architectures – one that improves performance, power, and cost simultaneously. At Natural Intelligence Semiconductor, we believe there is. That’s exactly why we’re bringing the Natural Neural Processor (NNP) to market. We don’t believe that increasing the number of cores in an x86-based or GPU-based system is the best solution.
Today’s computing architecture, first described by John von Neumann in 1945, revolutionized the world. It brought us cell phones, home computers, the internet, and the infrastructure to support society as we know it. However, it wasn’t designed to efficiently handle the difficult problems we face today in the fields of artificial intelligence, machine learning, and unstructured data analytics. It also doesn’t incorporate one of the most sophisticated and efficient computing models we rely on every day.
Your brain can almost instantaneously and correctly identify pictures of chihuahuas and blueberry muffins. In fact, when you read the words “chihuahua” and “blueberry muffin,” your mind “saw” those pictures, right? Likewise, your brain helped you navigate to wherever you are sitting right now, correctly identifying everything in your environment, and did so with very little energy. The human brain uses pattern matching to identify your surroundings and help you navigate.
The NNP from Natural Intelligence uses some of these same pattern matching principles to find patterns in data very efficiently and with less energy than traditional compute systems. Rather than massive scale-out, Natural Intelligence believes that pattern matching processors such as the Natural Neural Processor will provide better performance at lower energy and cost for many of today’s most difficult big data problems.
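The NNP’s internals aren’t described here, so as a purely generic software illustration of the pattern-matching idea, the sketch below scans a data stream once while advancing every candidate match in parallel – the kind of work an automaton-style pattern processor performs natively in hardware rather than one comparison at a time:

```python
def one_pass_match(stream, patterns):
    """Find every occurrence of several patterns in a single pass,
    keeping all partially matched candidates 'live' simultaneously --
    a software analogue of automaton-style pattern matching."""
    active = set()   # (pattern_index, next_char_position) pairs in flight
    hits = []        # (pattern, start_index) for each completed match
    for i, ch in enumerate(stream):
        # every pattern may also start a fresh candidate at this character
        candidates = active | {(p, 0) for p in range(len(patterns))}
        active = set()
        for p, pos in candidates:
            if patterns[p][pos] == ch:
                if pos + 1 == len(patterns[p]):
                    hits.append((patterns[p], i - pos))  # full match found
                else:
                    active.add((p, pos + 1))             # keep it in flight
    return hits

# One sweep over the data finds all matches of all patterns:
found = one_pass_match("abracadabra", ["abra", "cad"])
# → [('abra', 0), ('cad', 4), ('abra', 7)]
```

The key property is that the data is touched exactly once no matter how many patterns are in play – the per-character work tracks the number of live candidates, not the size of the dataset, which is where the energy savings of a dedicated pattern-matching architecture are claimed to come from.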
At a future Supercomputing conference, we hope to remove the “so what” moments from your experience as you learn how our innovative approach to data processing can help improve performance, power, and cost for your applications. In the meantime, watch for more announcements about the Natural Neural Processor and Natural Intelligence Semiconductor in the coming months.