How many years has it been since Andrew Ng started teaching the world, via Coursera, about the power of machine learning and deep learning? Although I was an early adopter of GPUs and ML technologies more than 10 years ago, I found his course outstanding and highly motivating. Andrew’s course explained the value of low-cost, GPU-based, high-performance computing to executives and engineers better than anything else I knew at the time, way back in the days before “Deep Learning” became a universally used term.
The convergence of newly available SIMD hardware accelerators with theoretical and practical advances in deep learning algorithms by Yoshua Bengio, Geoffrey Hinton, Yann LeCun, Jürgen Schmidhuber, and many others produced a robust AI Spring that spawned billions of dollars of new business and tremendous optimism about the future of AI among laymen and academics alike.
While it is clear that the pattern recognition abilities of deep learning algorithms and state-of-the-art hardware accelerators are a major step forward, there is a growing realization that perhaps “deep learning” is a misnomer and that achieving artificial intelligence will require far more than current statistical learning techniques.
Adversarial examples and saliency map analyses were early signs that perhaps there was nothing very deep about the knowledge representation capabilities of the unfortunately named “deep learning” movement. From a technical point of view, the name is of course appropriate, as it refers to the large number of layers often used in deep learning models.
The unmitigated success of deep learning in a wide range of applications, from object recognition and natural language processing to anomaly detection and more, along with the billions of dollars of exponentially increasing value deep learning brings to the global economy, limits the duration and depth of the Deep Learning Winter predicted by many, if such a winter occurs at all.
When a pioneer of deep learning like Hinton says “We need to start over” and that he is “deeply suspicious of back-propagation,” it may be time to think differently about artificial intelligence. There exists a societal need and market demand for AI technologies that offer orders-of-magnitude improvement in the ability to learn and generalize.
One promising solution we have developed at Cisco is a multi-modal neuro-symbolic framework based on a model of knowledge. In this model, knowledge is defined as a hierarchical structure with explicit levels of abstraction. The backbone of this model consists of anti-symmetric relations, on top of which symmetric relations provide additional refinement.
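To make the idea concrete, here is a minimal illustrative sketch (not Cisco’s actual framework; all names are hypothetical) of a knowledge model whose backbone is an anti-symmetric relation (`is_a`, which induces explicit levels of abstraction) refined by a symmetric relation (`similar_to`):

```python
class KnowledgeModel:
    """Toy knowledge model: an anti-symmetric hierarchical backbone
    plus a symmetric refinement relation. Illustrative only."""

    def __init__(self):
        self.parents = {}   # anti-symmetric backbone: child -> set of parents
        self.similar = {}   # symmetric refinement: node -> set of similar nodes

    def add_is_a(self, child, parent):
        """Anti-symmetric: if A is_a B holds, B is_a A is forbidden."""
        if child == parent or child in self.ancestors(parent):
            raise ValueError(f"cycle: {parent} already is_a {child}")
        self.parents.setdefault(child, set()).add(parent)

    def add_similar(self, a, b):
        """Symmetric: stored in both directions."""
        self.similar.setdefault(a, set()).add(b)
        self.similar.setdefault(b, set()).add(a)

    def ancestors(self, node):
        """All more-abstract concepts above `node` in the hierarchy."""
        seen, stack = set(), [node]
        while stack:
            for p in self.parents.get(stack.pop(), set()):
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen

    def level(self, node):
        """Level of abstraction: depth below the most abstract root(s)."""
        ps = self.parents.get(node, set())
        return 0 if not ps else 1 + max(self.level(p) for p in ps)


km = KnowledgeModel()
km.add_is_a("dog", "mammal")
km.add_is_a("mammal", "animal")
km.add_similar("dog", "wolf")
print(km.ancestors("dog"))   # the abstraction chain above "dog"
print(km.level("dog"))       # its level in the hierarchy
```

The anti-symmetry check is what keeps the backbone a strict hierarchy: the model refuses any `is_a` edge that would let two concepts sit above each other, while symmetric `similar_to` links can connect concepts at any level without affecting the abstraction structure.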
The applicability of this approach is broad. We have developed several proofs of concept spanning open conservation, retail, smart cities, improving the signal-to-noise ratio of ML anomaly detection, unsupervised analysis of tens of thousands of time series, and natural language understanding. We cover the retail example in more detail in the paper.
While our approach leverages the outstanding developments from the Artificial General Intelligence (AGI) community, our focus is near-term, high-value impact, both societal and economic. Our goal is natural intelligence augmentation that improves both productivity and jobs.
It’s still too early to know if our specific approach to advancing state-of-the-art AI beyond deep learning is “the way.” We plan to open up as much of our research and code as possible while continuing to support researchers such as Pei Wang and his OpenNARS team at Temple University, Kristinn Thórisson and his AERA team at Reykjavík University, and Ben Goertzel’s SingularityNET team in Hong Kong. You can find further updates and details on our metamodel-based approach to applied AGI here: https://www.researchgate.net/project/A-Metamodel-and-Framework-For-AGI.