Thinking Fast and Slow in the Neuro-symbolic AI World
On March 27th, the world lost a towering intellect in Daniel Kahneman, a Nobel
laureate renowned for his groundbreaking work in psychology and behavioral
economics. In a notable moment in December 2019, Yoshua Bengio, a pioneering
figure in deep learning, spotlighted Kahneman's dual-process theory of
cognition during a
keynote at NeurIPS 2019. This
theory, popularized in Kahneman's book
"Thinking, Fast and Slow,"
distinguishes between two systems of thinking.
"System 1" thinking is rapid and intuitive, drawing upon immediate
memory recall and statistical pattern recognition to make quick judgments or
decisions. In contrast, "System 2" thinking is slower and more
methodical, and requires deliberate effort and logical reasoning. It is
associated with complex problem-solving and decision-making processes that
involve extended chains of reasoning and adhere to the principles of symbolic
logic. For example, a simple arithmetic problem like "2+2" typically
entails the fast, automatic response of System 1. On the other hand, proving the
Pythagorean theorem activates System 2, as it demands a more thoughtful and
analytical approach.
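The contrast can be caricatured in code. The sketch below is an illustrative analogy only, not an implementation of Kahneman's theory: "System 1" is a fast memoized lookup of familiar answers, while "System 2" is a slower, rule-by-rule symbolic evaluation that is invoked only when recall fails. The `FAST_FACTS` table and the expression examples are invented for illustration.

```python
import ast
import operator

# "System 1": memorized answers, retrieved instantly.
FAST_FACTS = {"2+2": 4, "3*3": 9}

def system2_eval(expr: str) -> int:
    """Deliberate, rule-based evaluation: parse the expression and
    reduce it step by step, like working a problem out on paper."""
    ops = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}
    def walk(node):
        if isinstance(node, ast.BinOp):
            return ops[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def answer(expr: str) -> int:
    if expr in FAST_FACTS:       # System 1: instant recall
        return FAST_FACTS[expr]
    return system2_eval(expr)    # System 2: slow, effortful reasoning

print(answer("2+2"))      # recalled instantly -> 4
print(answer("17*23+5"))  # requires deliberate evaluation -> 396
```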
Interestingly, Large Language Models (LLMs) are proving adept at System 1
reasoning, efficiently handling straightforward queries.
Researchers are pushing the boundaries,
challenging LLMs with tasks requiring complex thought, often with notable
success. However, these models are not infallible: mistakes happen, particularly
with complex reasoning. This is where integrating System 2 thinking (a symbolic
reasoner) could help rectify the mistakes of System 1 reasoners like LLMs.
Take the Blocksworld problem as an
example, where one must rearrange a stack of blocks into a new configuration.
This planning task calls for System 2 thinking. We first experimented with LLMs
suggesting actions at each step, then enhanced the setup with a formal system (like
Lean or Coq) to check each
action's validity.
In our experiments, we used
RelationalAI’s query language as the formal symbolic system. As a result,
incorrect actions were corrected, illustrating how symbolic logic systems can
guide LLMs when dealing with simple logical rules.
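A minimal sketch of this propose-and-check loop, in plain Python rather than RelationalAI's query language: the checker encodes two Blocksworld rules (only clear blocks can be moved; only blocks on the table can be picked up) and rejects any proposed action that violates them. `llm_propose` is a hypothetical stand-in for a real model call.

```python
# State: A sits on B; B and C rest on the table.
state = {"on": {("A", "B")}, "table": {"B", "C"}}

def is_clear(state, block):
    """A block is clear if nothing is stacked on top of it."""
    return all(under != block for _, under in state["on"])

def valid(state, action):
    """Symbolic check of Blocksworld preconditions for a proposed action."""
    kind, block = action
    if kind == "pickup":   # only a clear block resting on the table
        return block in state["table"] and is_clear(state, block)
    if kind == "unstack":  # only a clear block resting on another block
        return is_clear(state, block) and any(t == block for t, _ in state["on"])
    return False

def llm_propose(state):
    # Hypothetical LLM: it first tries to pick up B (invalid, since A is
    # on top of B), then issues the corrected action after feedback.
    yield ("pickup", "B")
    yield ("unstack", "A")

plan = []
for action in llm_propose(state):
    if valid(state, action):
        plan.append(action)
    else:
        print(f"rejected {action}: violates Blocksworld rules")

print(plan)  # [('unstack', 'A')]
```

In the real experiment the checker runs as queries against RelationalAI's system, which returns the feedback the LLM uses to revise its action.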
In the Blocksworld domain figure above, we can unstack blocks from on top of each other and only pick up blocks that are on the table. The LLM is understandably mistaken about this detail, but RelationalAI’s system is able to provide real-time feedback so that the LLM can correct its action and ultimately generate a valid plan.
However, real-world problems often encompass logical constraints and complex
data relationships. This is where Knowledge Graphs, such as RelationalAI's
knowledge graph coprocessor for data clouds, become invaluable, providing the
data management and relational capabilities needed to handle more sophisticated
tasks simply and at scale. The synergy between LLMs and KGs has been applied to
more complex problems, such as mathematical proofs. This year's NeurIPS
covered the topic extensively, with a
tutorial and a
workshop, and you can also read our
analysis
of the topic.
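To make the relational angle concrete, the same Blocksworld facts can be stated as relations, the kind of data model a knowledge graph makes first-class, with rules that derive new relations from stored ones. This is a sketch using plain Python set comprehensions as a stand-in for a real relational or Datalog-style engine such as RelationalAI's.

```python
# Base relations (facts).
blocks = {"A", "B", "C"}
on = {("A", "B")}          # on(A, B): A sits on B
table = {"B", "C"}         # table(B), table(C)

# Derived relation: clear(x) :- block(x), not on(_, x)
clear = {b for b in blocks if all(under != b for _, under in on)}

# Derived relation: movable(x) :- clear(x), (table(x) or on(x, _))
movable = {b for b in clear if b in table or any(top == b for top, _ in on)}

print(sorted(clear))    # ['A', 'C']
print(sorted(movable))  # ['A', 'C']
```

Because the rules are declarative, adding a fact (say, stacking C on A) automatically updates every derived relation; that is the property that lets a symbolic layer keep pace with an LLM's proposals.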
In the upcoming post, we’ll explore how an LLM collaborates with the RelationalAI
SDK for Python to tackle complex optimization challenges.
Daniel Kahneman, the brilliant psychologist and Nobel
Prize-winning economist, profoundly shaped our understanding of
decision-making. His research continues to resonate, leaving an enduring legacy
that we honor today in work like the Blocksworld experiment above.
About the Authors
Giancarlo Fissore is a Data Scientist at RelationalAI focusing on
Natural Language Processing and Large Language Models. He earned his PhD in
machine learning on topics related to generative models, bringing a unique
blend of self-taught programming skills and formal education to his current
role. His work aims to enhance the capabilities of generative AI systems in
understanding and processing human language with the help of Knowledge Graphs.
Nikolaos Vasiloglou is the VP of Research-ML at RelationalAI. He has
spent his career building ML software and leading data science projects in
Retail, Online Advertising and Security. He is a member of the
ICLR/ICML/NeurIPS/UAI/MLconf/KGC/IEEE S&P community, having served as an
author, reviewer, and organizer of workshops and the main conference. Nikolaos
is leading the research and strategic initiatives at the intersection of Large
Language Models and Knowledge Graphs for RelationalAI.
About RelationalAI
RelationalAI is the industry's first AI coprocessor for data clouds and language
models. Its groundbreaking relational knowledge graph system expands data clouds
with integrated support for graph analytics, business rules, optimization, and
other composite AI workloads, powering better business decisions. RelationalAI
is cloud-native and built with the proven and trusted relational paradigm. These
characteristics enable RelationalAI to seamlessly extend data clouds and empower
you to implement intelligent applications with semantic layers on a data-centric
foundation.