Who can deny the chilly breeze blowing through some quarters of the AI world? While many continue to bask in the glorious summertime ushered in by the ascendency of deep learning, some are sensing autumnal winds which carry with them cautionary words we have all heard many times, such as “black box”, “poor generalization”, “brittle”, “lacking reasoning”, “biased”, “no common sense”, and “unsustainable”. Whether or not we are truly headed for a new AI winter, artificial intelligence certainly has a long way to go to take on human intelligence.
And yet, human intelligence is not a particularly new topic of research. It has long been studied by many of mankind’s most piercing intellects, going back at least 2300 years to Aristotle, the “father of logic” and the “father of psychology”. Through the six works comprising his Organon, as well as a few others such as his Metaphysics and On the Soul, Aristotle laid the foundations for our understanding of logic, reasoning, and knowledge. His treatment was so thorough, in fact, that 2000 years later Kant wrote, “Since Aristotle…logic has not been able to advance a single step.”
While there have certainly been more recent advances, Aristotle’s logic still stands strong, poignantly describing the building blocks of human reasoning. Yet many of the key ingredients described by Aristotle are conspicuously absent from modern AI, especially in deep learning.
My colleague and I have recently proposed1 how to redesign deep learning, developing a new framework for training deep neural networks that is no longer reliant on crude gradient-based statistical optimization. Instead, it is consistent with a wide range of ideas from the cognitive sciences, including Aristotle’s and other philosophers’ theories of human reasoning. By doing this, many of deep learning’s notorious limitations disappear, most notably the infamous black box. These new deep neural networks are now, among other things, fully interpretable and explainable, capable of generalizing out-of-distribution to novel tasks, and more robust to adversarial attacks.
In this essay I will outline some key points of Aristotelian logic and epistemology to show how their absence in traditional deep learning is responsible for deep learning’s well-known limitations. Our recent results have demonstrated that by redesigning deep learning to include these points we do indeed overcome these limitations.
Inductive reasoning: the missing science
Aristotle divides human reasoning into two types: inductive and deductive. Through inductive reasoning, the mind learns generalized principles from individual examples. The goal of inductive reasoning is to abstract away details, find commonalities and differences, and discover the essences of things. It serves as the basis of human learning, scientific discovery, and statistical inference. On the other hand, deductive reasoning is the process by which we reason from already-known truths to uncover new truths. Its purpose is to discover the implications of knowledge we already have.
Perhaps the biggest problem with traditional deep learning is that it skips inductive reasoning. Many may find this surprising, as deep learning is meant to discover patterns from example data. However, inductive reasoning is a process distinct from deductive reasoning, and traditional deep learning does not respect this separation. A neural network mimics deductive reasoning whenever it produces an output from a set of inputs. The network is trained by repeatedly testing its deductive capabilities on training data, followed by small weight adjustments computed by backpropagation and gradient descent to improve those deductions. Inductive reasoning for the explicit, dedicated purpose of learning general principles is not actually present.
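To see what I mean, consider a minimal sketch of the standard training procedure (written here in plain NumPy with made-up toy data): every pass through the loop is a “deduction” from inputs to outputs, and learning consists of nothing more than nudging the weights so that those deductions score better on the training set.

```python
import numpy as np

# Toy training data: 2D points with binary labels (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# A single sigmoid neuron: weights and bias initialized randomly.
w, b = rng.normal(size=2), 0.0

def forward(X):
    """The 'deductive' step: produce outputs from inputs with the current weights."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

for step in range(1000):
    p = forward(X)                    # test the current deductions on the training data
    grad_w = X.T @ (p - y) / len(X)   # backpropagated gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w                 # small corrective updates (gradient descent)
    b -= 0.5 * grad_b

# Nowhere above is there a step that explicitly abstracts general principles
# from the examples; the loop only adjusts weights to improve the deductions.
```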
This might seem like splitting hairs. After all, the overall purpose of deep learning is to learn a model that generalizes from particular examples to new data. Doesn’t that mean deep learning is at least implicitly performing induction, even if induction is never directly implemented?
No, and we know this intuitively. One of the great surprises of deep learning has been that it works at all, because our intuition tells us that training an overparameterized, opaque model to minimize a loss function on some training data does not rise to the level of human intelligence. Of course, deep learning really only works when test data come from the same distribution as the training samples. It does not perform adequately when we try to transfer knowledge out-of-distribution from the training set to other test sets. This flaw is not simply an overfitting problem producing poor within-distribution interpolation; it is a failure to extrapolate out-of-distribution, which requires inductively learned generalizations.
This leads us to the final nail in the coffin for deep learning’s masquerade as inductive reasoning, which is the existence of adversarial examples. When test samples are perturbed in unnoticeable, presumably inconsequential ways, deep neural networks invariably fall apart because they did not actually learn general rules in the first place. Indeed, it has been admitted that deep neural networks “only work on a very small amount of all the many possible inputs they might encounter.” We can therefore see that inductive learning is indeed absent from traditional deep learning, notwithstanding our desire to make post hoc claims to the contrary.
Because traditional deep learning does not learn generalizations, it is simply a sophisticated pattern-matching technology, not a true learning method. As Judea Pearl put it, “all the impressive achievements of deep learning amount to just fitting a curve to data.” Curve-fitting is not the same thing as inductive reasoning. Consider a monkey that is painstakingly taught how to impressively play some pieces on the piano. Does it have the same understanding of music as a classically trained pianist? Of course not. The former has learned how to approximate a copy of its training, while the latter has used prior experiences to develop the deep and general understanding necessary to play new music, to improvise, and to appreciate the music played by others.
So how should inductive reasoning work? We will consider Aristotle’s admittedly limited description of inductive reasoning by examining his account of how the mind knows something’s essence. Aristotle equated essence with definition, which must contain two parts. The first is the genus (plural, genera), the class of similar concepts to which our concept belongs. The second is the set of differentiae (singular, differentia), the attributes necessary to differentiate our concept from other members of the same genus. Aristotle’s classic example of an essential definition is that of human, which he gives as “rational animal”. Humans belong to the genus of animals and are differentiated from all other animals by the differentia of rationality. In a similar way, inductive reasoning is a process of identifying essential similarities and differences between particular examples in order to draw more generalized conclusions. This allows our minds to abstract away unimportant details, compress knowledge, and discover essential qualities.
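As a purely illustrative aside (my own toy encoding, not anything Aristotle or our paper specifies), an essential definition can be written down as a simple genus-plus-differentiae structure:

```python
from dataclasses import dataclass, field

@dataclass
class Definition:
    genus: str                                              # the broader class the concept belongs to
    differentiae: list[str] = field(default_factory=list)   # what sets it apart within that class

# Aristotle's classic example: human = "rational animal".
human = Definition(genus="animal", differentiae=["rational"])
print(human)
```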
Genera: the missing structure
Another vital piece of Aristotle’s logic is the hierarchy of concepts. This was famously depicted by the 3rd-century philosopher Porphyry in his “Porphyrian tree” to illustrate Aristotle’s description of the hierarchical organization of concepts. As we have seen, each concept is a member of a higher genus, but this genus is itself a concept that is a member of an even higher genus, and so on up (until we reach the so-called “categories”). For example, Socrates is a human, humans are animals, animals are living things, living things are bodies, bodies are substances.
Sometimes we can skip steps going up the conceptual hierarchy, as we can recognize people as living things without first having to recognize them as animals. However, there are times when we don't skip steps. Consider the following characters: “1”, “n”, “9”, “H”, “3”, “q”. Some are numbers and others letters, but there is nothing inherent in their shape that makes them so. In considering them, our mind first recognizes each individual character as a specific letter or a specific number and then, and only then, does it recognize each character as a member of a genus (e.g. “number”). This inability to make hierarchical shortcuts happens, for instance, with things that are solely understandable extensionally. In addition to human-created characters, another example would be recognizing both tadpoles and adult frogs as the same animal species.
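A toy sketch may help (the hierarchy and character assignments below are my own illustrative choices): each concept points to its immediate genus, and a character such as “9” can only reach the genus “number” by first being recognized as the specific numeral “9”.

```python
# A toy Porphyrian-style hierarchy: each concept maps to its immediate genus.
GENUS = {
    "Socrates": "human", "human": "animal", "animal": "living thing",
    "living thing": "body", "body": "substance",
    "9": "number", "3": "number", "n": "letter", "q": "letter",
}

def ancestors(concept):
    """Walk up the hierarchy from a concept to its highest genus."""
    chain = [concept]
    while chain[-1] in GENUS:
        chain.append(GENUS[chain[-1]])
    return chain

# For characters there is no shortcut: "9" must first be recognized as the
# specific numeral "9" before it can be classified as a "number".
print(ancestors("9"))         # ['9', 'number']
print(ancestors("Socrates"))  # ['Socrates', 'human', 'animal', 'living thing', 'body', 'substance']
```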
Deep learning does not typically learn hierarchies of concepts. People often describe the consecutive layers of neurons as building hierarchical representations, but these hierarchies have no discernible meaning at the intermediate levels; if they did, deep neural networks would not be the black boxes that they are. Likewise, while convolutional neural networks are often described as discovering hierarchical features in data, the exact nature of each feature at each level is generally unclear. These higher-level features appear to be diffusely distributed representations of features in the real world.
Deep learning must stop treating intermediate neuron layers as “hidden layers”. Instead, each layer should be meaningful and form part of an interpretable hierarchy.
Making deep learning reason through conceptual hierarchies would have the added benefit of making AI more composable, since hierarchies of concepts permit us to use components at each level modularly and for different purposes. To do so, deep learning should take advantage of unsupervised learning methods to first discover the underlying conceptual structure of the data and then incorporate that structure into the architecture of the neural network.
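As a hedged sketch of what this could look like in practice (using k-means purely as a stand-in for whatever unsupervised method one prefers), the discovered clusters can dictate how many concept-level units the network contains and how they are initialized:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))            # unlabeled training data (toy)

# Step 1: discover candidate concepts with an unsupervised method.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
concepts = [X[kmeans.labels_ == k] for k in range(5)]

# Step 2: let the discovered structure shape the architecture, e.g. one
# concept unit per cluster, initialized from that cluster's prototype.
prototypes = np.stack([c.mean(axis=0) for c in concepts])   # shape (5, 16)
W_concept_layer = prototypes.copy()                         # one unit per discovered concept
```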
Propositions: the missing building blocks
Aristotle described how all reasoning, deductive or inductive, is composed of propositions. A proposition is the act of saying something about something else (i.e. the act of predicating something of a subject). A single proposition can be either an affirmation or a denial. Human language often combines these propositions in complex or poetic ways, but at its core reasoning proceeds by propositions that build upon one another.
Deep learning also does not respect this basic building block of human reasoning in its neural information processing. The absence of meaningful propositions, more than anything else, is why traditional deep learning is a black box.
Human reasoning prefers that complexity be captured by multiple simple propositions integrated in complex hierarchies. Think about fully interpretable AI systems, such as rule-based systems or decision trees. Each one of these is composed of individual parts that represent simple, understandable statements. It is the way in which they are arranged structurally that allows the whole system to handle complexity.
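A toy rule-based example (entirely made up for illustration) shows the pattern: each condition is a simple proposition that is affirmed or denied, and the complexity lives in how the propositions are arranged.

```python
def approve_loan(income, debt, years_employed):
    # Each condition is a simple, human-readable proposition that is either
    # affirmed or denied; the system's complexity lies in their arrangement.
    if income > 50_000:                  # "the applicant has sufficient income"
        if debt / income < 0.4:          # "the debt burden is manageable"
            return "approve"
        return "review manually"
    if years_employed > 10:              # "the applicant has a long work history"
        return "review manually"
    return "deny"

print(approve_loan(income=80_000, debt=10_000, years_employed=3))  # "approve"
```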
How can we generalize the idea of propositions when building AI systems? Because propositions are learned from training data, at test time they amount to analogies with the specific training samples seen before: they assert that a test sample looks more like some training samples than others. The propositions learned during training (i.e. induction) must therefore distinguish between the relevant samples in order to produce these helpful analogies during testing (i.e. deduction).
The purpose of a proposition is therefore to make a distinction, to differentiate ideas from one another. There is a famous aphorism, “Rarely affirm, often deny, always distinguish,” that emphasizes the centrality of distinctions in understanding things. Distinctions allow us to bridge inductive and deductive reasoning. Inductive reasoning is about uncovering the similarities and differences that allow us to distinguish between particular examples and arrive at general principles. These same distinctions can then be used on new examples in order to understand their relation to these discovered generalizations.
Enter the artificial neuron. It receives some set of inputs and produces an output. This forms a proposition. When the neuron has high activity, it is affirming something of the inputs, and when it has low activity it is denying it. For the proposition to be meaningful, this affirmation or denial must itself be meaningful, which means the specific training samples the neuron is designed to distinguish must form a meaningful group.
However, our own research has found that, in traditional deep learning, artificial neurons typically learn to make uninterpretable propositions that loosely distinguish between large swaths of unrelated training samples. This lack of meaningful and specialized propositions is why deep learning has proven to be a black box.
This is worth dwelling on. In traditional deep learning, decisions are distributed across neural populations, with each neuron responsible for a little bit of this and a little bit of that, but doing none of those things well by itself. This is not a robust design. Consider a healthcare system in which no physician specializes in one specific type of medicine (e.g. neurology, cardiology, obstetrics) but instead picks and chooses little bits of medicine to clumsily handle (e.g. a doctor who only knows a little about managing seizures, congenital heart defects, and preterm labor). A hospital could conceivably assemble a workforce of such physicians that adequately covers all typical medical conditions. But the results would be terribly inefficient, as large teams would be needed for a single patient. The system would also be extremely brittle when dealing with rare or atypical medical cases, and it would not be very adaptable, as physician vacations and new medical advances would be difficult to accommodate. This hospital is, in effect, what traditional deep learning has proven to be.
An Aristotelian redesign of deep learning
So far I have argued for what is needed to redesign deep learning to be more like human intelligence: inductive reasoning rather than statistical optimization, concept hierarchies rather than hidden layers, and meaningful propositions rather than diffusely distributed representations.
In our own work, we have shown how to integrate these three ingredients of human intelligence with deep neural networks. In order for neurons to make good propositions, each neuron is designed during the inductive training phase to distinguish between well-defined groups of training samples. Like inductive reasoning, we group together similar samples into a concept and then try to figure out what distinguishes this concept from other concepts. A concept can be either pre-defined via training labels, discovered unsupervised within or across labeled concepts, or learned via a mixed semi-supervised learning process. We can understand a discovered concept in the same way that humans generally understand a concept: by exemplars (e.g. enumeration of its training samples), by prototype (e.g. average of its training samples), by decision boundaries (e.g. edge cases used to distinguish concepts), by definition (e.g. how the concept is described by propositions), etc. The utility of the network’s concepts can be assessed by inductively training the model and then testing its performance.
To design propositions, consider a neuron that distinguishes some specific concept A of training samples from all of the other training samples. The proposition or analogy it makes after training is either “the input is like concept A” or “the input is not like concept A”. Let’s call this a concept neuron, since it either affirms or denies a specific concept depending on the inputs. Now consider a neuron designed to distinguish concept A from a different concept B of training samples. The proposition is now either “the input is more like concept A than B” or “the input is more like concept B than A”. Let’s call this a differentia neuron, since it either affirms or denies qualities of the inputs that help differentiate between A and B.
These propositions are integrated to encode conceptual hierarchies. For example, multiple differentia neurons that all point to concept A will feed into a downstream concept neuron for concept A. Concept neurons can further feed into each other to move up or down the conceptual hierarchy.
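To make the wiring concrete, here is a deliberately simplified sketch (my own prototype-based illustration, not the exact construction we use in our networks): each differentia neuron affirms or denies that an input is more like concept A than some other concept, and a downstream concept neuron for A fires only when the differentia neurons pointing to A agree.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training samples for three concepts (A, B, C), each a cluster in 2D.
A = rng.normal(loc=[2, 2], size=(50, 2))
B = rng.normal(loc=[-2, 2], size=(50, 2))
C = rng.normal(loc=[0, -3], size=(50, 2))
prototypes = {name: s.mean(axis=0) for name, s in {"A": A, "B": B, "C": C}.items()}

def differentia_neuron(x, p_a, p_b):
    """Affirms (>0) 'the input is more like A than B', denies (<0) otherwise.
    Implemented here as the difference of squared distances to the prototypes."""
    return np.sum((x - p_b) ** 2) - np.sum((x - p_a) ** 2)

def concept_neuron_A(x):
    """Affirms concept A only if every differentia neuron pointing to A agrees."""
    votes = [differentia_neuron(x, prototypes["A"], prototypes[other]) > 0
             for other in ("B", "C")]
    return all(votes)

print(concept_neuron_A(np.array([2.1, 1.8])))   # True: the input is affirmed as concept A
print(concept_neuron_A(np.array([-2.0, 2.0])))  # False: denied (it is more like concept B)
```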
Because the training method for these new neural networks is similar to Aristotle’s description of understanding essences, we have called them essence neural networks (ENNs). The training method is quite general and permits a wide variety of neural network architectures and learning modifications. We have already demonstrated several architecture prototypes that can match or surpass the performance of traditionally trained deep neural networks.
Training deep neural networks according to this framework makes them inherently interpretable and explainable. In fact, in some cases trained ENNs can even be directly translated into rule-based systems such as computer code. This intrinsic explainability has already opened up new opportunities for deep learning, such as temporary post-training modifications, rational neural architecture design, and thorough error analysis. Furthermore, the explicit inclusion of both inductive reasoning and meaningful propositions makes well-designed ENNs capable of out-of-distribution generalization, even to inputs far more complex than the training set. Our work has shown not only that it is possible to train deep neural networks according to Aristotelian principles of human intelligence, but that doing so provides new capabilities and opportunities for deep learning.
***
One of the unfortunate ironies of artificial intelligence research has been its divorce from a rich intellectual history of insight into human intelligence. Logic, epistemology, and psychology have just as much—if not more—to offer AI as do statistics and neuroscience. Future developments in AI must reflect this priceless patrimony. It is my belief that philosophy, psychology, neuroscience, and artificial intelligence can work together synergistically to make technological progress and to further our understanding of the human mind.
1This paper is also available at the journal's website here.
Writer Profile
Paul Blazek is an MD/PhD student at UT Southwestern whose dissertation research bridged the cognitive sciences to merge deep learning with symbolic reasoning.
Citation
For attribution in academic contexts or books, please cite this work as
Paul Blazek, "How Aristotle is Fixing Deep Learning's Flaws", The Gradient, 2022.
BibTeX citation:
@article{blazek2022aristotle,
author = {Blazek, Paul},
title = {How Aristotle is Fixing Deep Learning's Flaws},
journal = {The Gradient},
year = {2022},
howpublished = {\url{https://thegradient.pub/how-aristotle-is-fixing-deep-learnings-flaws} },
}