AlphaProof, AlphaGeometry, ChatGPT, and why the future of AI is neurosymbolic

2024-07-28 Gary Marcus

My strong intuition, having studied neural networks for over 30 years (they were part of my dissertation) and LLMs since 2019, is that LLMs are simply never going to work reliably, at least not in the general form that so many people last year seemed to be hoping for. Perhaps the deepest problem is that LLMs literally can’t sanity-check their own work.

Since LLMs inevitably hallucinate and are constitutionally incapable of checking their own work, there are really only two possibilities: we abandon them, or we use them as components in larger systems that can reason and plan better, much as grownups and older children use times tables as part of a solution for multiplication, but not the whole solution.

The idea is to take the best of both worlds, combining neural networks, which are good at quick intuition from familiar examples (akin to Kahneman’s System I), with explicit symbolic systems that use formal logic and other reasoning tools (akin to Kahneman’s System II).

https://garymarcus.substack.com/p/alphaproof-alphageometry-chatgpt
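The division of labor Marcus describes — a fast, fallible neural proposer paired with a symbolic checker — can be sketched in a few lines. This is only an illustrative toy, not Marcus's or DeepMind's actual architecture: the "neural" part is simulated with a deliberately noisy guesser, and the "symbolic" part is exact arithmetic, echoing the times-table analogy above.

```python
import random

def neural_propose(a, b):
    """Simulated 'System I': fast but occasionally hallucinating guess at a * b.
    (A stand-in for an LLM; purely hypothetical.)"""
    guess = a * b
    if random.random() < 0.3:           # sometimes confidently wrong
        guess += random.choice([-1, 1])
    return guess

def symbolic_check(a, b, guess):
    """Simulated 'System II': exact, explainable verification."""
    return guess == a * b

def neurosymbolic_multiply(a, b, max_tries=10):
    """Accept a neural proposal only after it passes the symbolic check."""
    for _ in range(max_tries):
        guess = neural_propose(a, b)
        if symbolic_check(a, b, guess):
            return guess
    return a * b                        # fall back to the symbolic engine

print(neurosymbolic_multiply(7, 8))     # always 56: the checker filters hallucinations
```

The point of the sketch is that the overall system is reliable even though its neural component is not, because every answer must survive an external, formal check — roughly the shape of AlphaProof's LLM-plus-Lean loop.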

What Are Dreams For?

2023-08-31 Amanda Gefter

In a series of papers, Blumberg articulated his theory that the brain uses REM sleep to “learn” the body. You wouldn’t think that the body is something a brain needs to learn, but we aren’t born with maps of our bodies.

In 2013, Blumberg published a paper in Current Biology titled “Twitching in Sensorimotor Development from Sleeping Rats to Robots.” In it, he asked, “Can twitching, as a special form of self-generated movement, contribute to a robot’s knowledge about its body and how it works?”

https://www.newyorker.com/science/elements/what-are-dreams-for

A Thought on the Lovelace Test

The Lovelace test demands of a computing machine that it not only produce an artifact that is by conventional standards amazing, but it leaves everyone looking at it stupefied as to how it does what it does: including in this stupefaction the creators and designers of the machine.
– Selmer Bringsjord [1]


An artificial agent, designed by a human, passes the [Lovelace] test only if it originates a “program” that it was not engineered to produce. The outputting of the new program—it could be an idea, a novel, a piece of music, anything—can’t be a hardware fluke, and it must be the result of processes the artificial agent can reproduce. Now here’s the kicker: The agent’s designers must not be able to explain how their original code led to this new program. [2]


My thought: if God made humans, could a human use creativity to make an artifact such that God can’t explain how it was made?


[1]
Artificial Intelligence: Will Machines Take Over? (Science Uprising, Ep. 10)
2022-09-21 YouTube channel: “Discovery Science”
https://youtu.be/suuxAZbDCYE?t=362

[2]
Forget Turing, the Lovelace Test Has a Better Shot at Spotting AI
2014-07-08 Jordan Pearson
https://www.vice.com/en/article/pgaany/forget-turing-the-lovelace-test-has-a-better-shot-at-spotting-ai