As you explore and learn more about Natural Intelligence, we hope you begin to share our belief that we are on the cusp of a major advancement in how complex information can be processed. In many ways, we believe our position today is similar to that of Intel when it introduced its first microprocessor in 1971. Now, nearly five decades later, the architecture that Intel introduced to the world is itself becoming the limiting factor in our ability to analyze the complex, unstructured data streams that are now so important to virtually every part of society. These limitations exist not only in the ubiquitous x86 architecture, but in any architecture based on the von Neumann model of computing.
The architectural concepts first conceived by John von Neumann have served the industry well and have powered the information age for decades. Over this period, the von Neumann architecture provided the framework, and Moore's Law provided the means, by which the computer industry made incredible advancements. Now we face an unprecedented confluence of circumstances that threatens the predictable pace of technological progress: the slowing of Moore's Law and the inadequacy of existing architectures to meet the new data processing challenges before us.
Taken together, these two factors mandate that we rethink how information is processed. The old familiar methods are straining under the expectation that each successive generation of technology should be faster, cheaper, and more energy efficient. The potential negative implications are immeasurable, both for the high-tech industry and for the users who have come to depend on these regular advancements.
The only way out of this potentially devastating circumstance is to rethink the very foundation on which the computing industry is built. This leads us to a profound question: if the von Neumann architecture is not the answer for the next generation of computing, then what is? Fortunately, we have the answer. It lies in something each of us is familiar with, uses extensively, and relies upon every day: our brain.
The incredible processing power of the human neocortex provides a working example of how information can be processed efficiently by an architecture that has virtually nothing in common with the von Neumann architecture in such wide use today. The trick, then, is to capture the spatial/temporal pattern-matching capabilities of the human neocortex in a practical and cost-effective semiconductor architecture. While this research began more than 10 years ago in the Architecture Development Group at Micron Technology, it has now advanced to the point where it can provide the basis for a brand-new fabless semiconductor startup.
Natural Intelligence Semiconductor is committed to moving beyond 'artificial' intelligence by leveraging and advancing the immense power of nature's most efficient data processing machine. In doing so, we expect to begin and lead a revolution that will power the computing industry for the next 50 years and beyond. We have not only proven that this is possible but also demonstrated that it can be achieved in a practical way, one to which computer scientists and developers can relate.
We hope you are inspired by what Natural Intelligence Semiconductor has achieved so far and, also, by the great opportunity we have in front of us.