During my recent visit to the Whitney Museum of American Art in New York, I was introduced to the pioneering work of Harold Cohen, an artist who turned to programming and created AARON, a machine artist. Remarkably, Cohen began developing the software in the late 1960s at the University of California, San Diego, and gave it the name AARON in the early 1970s.
Cohen’s meticulous design process for AARON included defining the proportions of the human hand and introducing the concept of compositionality. His rule-based system enabled AARON to decide how to draw subjects in various spatial relationships, a challenge that modern AI systems still wrestle with; the toy sketch below gives a flavor of the approach.
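To make this concrete, here is a small Python sketch of what rule-based drawing knowledge can look like. It is my own toy illustration, not Cohen's actual code, and the proportion ratios are invented for the example; the point is that proportions and spatial relationships are declared as explicit symbols and rules rather than learned from pixels.

```python
# Toy illustration of symbolic drawing knowledge (not Cohen's actual code).

# Proportion rules: each part's length is declared relative to the palm.
# The ratios below are invented for illustration only.
HAND_PROPORTIONS = {
    "palm": 1.0,
    "middle_finger": 0.9,
    "index_finger": 0.8,
    "thumb": 0.6,
}

def part_length(part: str, palm_length: float) -> float:
    """Derive a part's length from the palm via an explicit rule."""
    return HAND_PROPORTIONS[part] * palm_length

# Compositional rule: subjects further away are drawn first so that
# nearer subjects occlude them. "Behind" is a symbol, not a pixel pattern.
def draw_order(subjects: list[dict]) -> list[dict]:
    return sorted(subjects, key=lambda s: s["depth"], reverse=True)

scene = [{"name": "figure", "depth": 1}, {"name": "plant", "depth": 2}]
print([s["name"] for s in draw_order(scene)])  # ['plant', 'figure']
```

Because every decision traces back to a named rule, the system's output is fully explainable, which is precisely what distinguishes this lineage from today's statistical image generators.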
The Machine as Artist by Harold Cohen is a magnificent manifesto for Symbolic AI.
This visit came back to mind while I was reading a research paper published in Nature on June 19, 2024, by Evelina Fedorenko, Steven T. Piantadosi, and Edward A. F. Gibson.
The paper argues that language is primarily a tool for communication rather than a mechanism for thought and reasoning. This perspective has important implications for the development of AI systems, particularly in light of recent research on large language models (LLMs).
What is Symbolic AI?
Symbolic AI is a branch of artificial intelligence that uses symbols and rules to represent knowledge and reasoning processes. Unlike deep learning, which relies on statistical patterns, symbolic AI works with explicit representations of problems, logic, and rules to derive conclusions. This approach allows for greater interpretability and precision in specific tasks, making it ideal for applications that require transparent reasoning and decision-making. The synergy between the continuous information processing that characterizes deep learning and large language models and the discrete knowledge extraction and reasoning of symbolic AI is what we call neurosymbolic AI.
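As a minimal sketch of what "explicit rules" means in practice, the Python snippet below implements forward chaining: facts are symbolic triples, and a single rule derives new facts until no more conclusions follow. Every conclusion can be traced back to the rule and the facts that produced it, which is exactly the interpretability described above.

```python
# Facts are symbolic triples; nothing is learned, everything is declared.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def forward_chain(facts: set) -> set:
    """Apply the rule parent(X, Y) & parent(Y, Z) => grandparent(X, Z)
    repeatedly until no new facts can be derived (a fixed point)."""
    derived = set(facts)
    while True:
        new = {
            ("grandparent", x, z)
            for (r1, x, y1) in derived if r1 == "parent"
            for (r2, y2, z) in derived if r2 == "parent" and y1 == y2
        }
        if new <= derived:  # fixed point: nothing new to add
            return derived
        derived |= new

print(("grandparent", "alice", "carol") in forward_chain(facts))  # True
```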
Integrating Symbolic Reasoning in AI
The research conducted by Fedorenko and colleagues provides compelling evidence that the brain's language network and its reasoning machinery are distinct: processing language does not engage the regions responsible for symbolic reasoning.
The study demonstrates that the language network in the brain is activated during language comprehension and production tasks, such as understanding or producing sentences, lists of words, and even nonwords. However, when individuals engage in tasks that require thinking and reasoning, such as executive-function tasks, novel problem solving, mathematics, or understanding computer code, an entirely different brain system is activated: the multiple demand network.
This dissociation seems to indicate that language is not necessary for thought. The brain regions involved in high-level cognitive tasks do not overlap with those used for language, suggesting that language serves primarily as a communication code. Language facilitates the transfer of information rather than being the substrate for complex thought processes. The study remarks, “Many individuals with acquired brain damage exhibit difficulties in reasoning and problem-solving but appear to have full command of their linguistic abilities,” reinforcing that undamaged language abilities do not imply intact thought.
Understanding that language is a means of communication, it becomes clear that AI systems, particularly LLMs, need to incorporate logic middleware for symbolic reasoning. This middleware can bridge the gap between pattern recognition and logical reasoning, enabling AI systems to perform more complex tasks accurately. An example of this integration is the flourishing implementation of Graph Retrieval-Augmented Generation (Graph RAG) solutions, such as those provided by WordLift. These solutions demonstrate how combining LLMs with symbolic reasoning can enhance AI's capabilities, making them more effective in tasks that require both pattern recognition and logical inference; the sketch below illustrates the basic pattern. For a deeper dive into this approach, I recommend the seminal work by Microsoft Research: GraphRAG: Unlocking LLM Discovery on Narrative Private Data.
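Here is a rough sketch of the Graph RAG pattern in Python. It is a hypothetical toy, not WordLift's product code or Microsoft's implementation: the idea is simply to retrieve exact triples from a knowledge graph first, then hand them to an LLM as grounded context.

```python
# Toy Graph RAG sketch (hypothetical; not an actual product implementation).

# A tiny in-memory knowledge graph: entity -> list of (relation, object).
KNOWLEDGE_GRAPH = {
    "AARON": [("created_by", "Harold Cohen"), ("type", "rule-based drawing program")],
    "Harold Cohen": [("born", "1928"), ("field", "computer art")],
}

def retrieve_subgraph(entity: str) -> list[str]:
    """Symbolic retrieval: exact triples about an entity, no fuzzy matching."""
    return [f"{entity} --{rel}--> {obj}" for rel, obj in KNOWLEDGE_GRAPH.get(entity, [])]

def build_prompt(question: str, entity: str) -> str:
    """Ground the LLM by constraining it to facts retrieved from the graph."""
    context = "\n".join(retrieve_subgraph(entity))
    return f"Answer using only these facts:\n{context}\n\nQuestion: {question}"

# The resulting prompt is sent to any LLM of choice; the graph supplies
# the logical grounding, while the model supplies the fluent language.
print(build_prompt("Who created AARON?", "AARON"))
```

The retrieval step is symbolic (exact entities and relations), while the generation step remains statistical; the combination is what makes the output both fluent and verifiable.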
Harold Cohen’s work with AARON provides an interesting historical parallel. Over 50 years ago, Cohen used a symbolic, rule-based system to teach AARON how to draw elements like the human hand and how to handle spatial relationships. His approach shows how symbolic reasoning can help generative systems produce accurate depictions, a capability modern generative models still struggle with. As seen recently with Stability AI’s release of Stable Diffusion 3 Medium, the latest AI image-synthesis model was heavily criticized online for generating anatomically incorrect images. Despite advances in AI, these visual abominations underscore the ongoing challenge of accurately depicting human forms, a problem Cohen’s symbolic approach addressed over half a century ago.
By integrating symbolic reasoning into AI, we build on the legacy of brilliant minds like Harold Cohen and push the boundaries of what AI systems can achieve. As we continue researching and developing LLMs, adding symbolic logic middleware represents a significant step forward, enhancing their ability to reason, plan, and understand the world more comprehensively.
This trend has been recently highlighted by Yann LeCun, Chief AI Scientist at Meta and professor at NYU. He pointed out that LLMs, while helpful, lack critical characteristics of intelligent behavior, such as understanding the physical world, persistent memory, reasoning, and planning. He explained that:
“LLMs can do none of those or they can only do them in a very primitive way and they don’t really understand the physical world. They don’t really have persistent memory. They can’t really reason and they certainly can’t plan.”
Yann LeCun – Chief AI Scientist at Meta
LeCun also emphasized that LLMs are trained on vast amounts of text, an approach that oversimplifies how the human brain works. He noted that a significant portion of human learning, especially in early life, comes from sensory input and interaction with the real world rather than from language alone:
“Through sensory input, we see a lot more information than we do through language, and that despite our intuition, most of what we learn and most of our knowledge is through our observation and interaction with the real world, not through language. Everything that we learn in the first few years of life, and certainly everything that animals learn has nothing to do with language.”
Conclusions
Integrating symbolic AI with modern machine learning techniques offers a promising path forward. This approach is particularly relevant for SEO and content marketing, where understanding and reasoning about the context of information is crucial. By leveraging symbolic reasoning, AI can enhance content discovery, improve relevance, and deliver more accurate and meaningful results, ultimately driving better engagement and conversions.
Once we understand language as the brain’s communication code, it becomes clear that training LLMs on language alone, however effective, oversimplifies the brain’s complexity. To move AI toward genuine intelligence, incorporating symbolic reasoning and addressing the need for persistent memory are crucial.
This is precisely what we are building at WordLift for enterprise companies using semantic ontologies, bridging the gap between pattern recognition and logical reasoning.
Excited to learn more? Drop us a line!
References:
- Fedorenko, E., Piantadosi, S.T. & Gibson, E.A.F. Language is primarily a tool for communication rather than thought. Nature 630, 575–586 (2024).
- Harold Cohen: AARON (Whitney Museum of American Art, New York, February 3–May 19, 2024)
- Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI | Lex Fridman Podcast #416
- The thread on X by @danbri that first sparked this piece.