
AlphaProof, AlphaGeometry, ChatGPT, and why the future of AI is neurosymbolic

2024-07-28 Gary Marcus

My strong intuition, having studied neural networks for over 30 years (they were part of my dissertation) and LLMs since 2019, is that LLMs are simply never going to work reliably, at least not in the general form that so many people last year seemed to be hoping for. Perhaps the deepest problem is that LLMs literally can’t sanity-check their own work.

Since LLMs inevitably hallucinate and are constitutionally incapable of checking their own work, there are really only two possibilities: we abandon them, or we use them as components in larger systems that can reason and plan better, much as grownups and older children use times tables as part of a solution for multiplication, but not the whole solution.

The idea is to take the best of both worlds, combining neural networks, which are good at quick intuition from familiar examples (a la Kahneman’s System I), with explicit symbolic systems that use formal logic and other reasoning tools (a la Kahneman’s System II).

https://garymarcus.substack.com/p/alphaproof-alphageometry-chatgpt
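As a rough illustration of the hybrid pattern Marcus describes, the sketch below has a neural "System I" propose an answer and a symbolic "System II" verify it before anything is returned. The llm_propose stub and the arithmetic domain are hypothetical stand-ins, not anything from the article; in a real system the symbolic side might be a theorem prover, solver, or planner.

```python
# Minimal sketch of a neurosymbolic loop: a neural model proposes an answer,
# a symbolic checker verifies it, and unverified answers are never returned.
# `llm_propose` is a hypothetical stand-in for an LLM call; exact arithmetic
# stands in for a heavier symbolic component such as a theorem prover.

import re

def llm_propose(question: str) -> str:
    """Placeholder for a neural 'System I' guess (e.g., an LLM completion)."""
    return "13 * 17 = 221"   # a real system would call a language model here

def symbolic_check(claim: str) -> bool:
    """'System II': verify the proposed arithmetic claim exactly."""
    m = re.fullmatch(r"\s*(\d+)\s*\*\s*(\d+)\s*=\s*(\d+)\s*", claim)
    if not m:
        return False                 # malformed claims never pass
    a, b, c = map(int, m.groups())
    return a * b == c                # exact, rule-based verification

def answer(question: str, retries: int = 3) -> str | None:
    for _ in range(retries):
        claim = llm_propose(question)
        if symbolic_check(claim):    # only verified output is returned
            return claim
    return None                      # abstain rather than hallucinate

print(answer("What is 13 times 17?"))   # -> "13 * 17 = 221"
```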

Geoffrey Hinton’s Forward-Forward Algorithm Charts a New Path for Neural Networks

2022-12-08 by Synced

There is increasing interest in whether the biological brain follows backpropagation or, as Hinton asks, whether it has some other way of getting the gradients needed to adjust the weights on its connections. In this regard, Hinton proposes the FF [Forward-Forward] algorithm as an alternative to backpropagation for neural network learning.

It aims to replace the forward and backward passes of backpropagation with two forward passes: a positive pass that operates on real data and adjusts weights “to improve the goodness in every hidden layer,” and a negative pass that operates on externally supplied or model-generated “negative data” and adjusts weights to deteriorate the goodness.

https://syncedreview.com/2022/12/08/geoffrey-hintons-forward-forward-algorithm-charts-a-new-path-for-neural-networks/

The Forward-Forward Algorithm: Some Preliminary Investigations, by Geoffrey Hinton
https://www.cs.toronto.edu/~hinton/FFA13.pdf

Using the Forward-Forward Algorithm for Image Classification
Includes sample Python code using the Keras Python library.
https://keras.io/examples/vision/forwardforward/
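The Keras example above gives a full implementation; the toy NumPy sketch below only illustrates the local, two-pass update described in the Synced summary. The layer sizes, goodness threshold, learning rate, and stand-in "positive" and "negative" data are illustrative assumptions, not values from Hinton's paper.

```python
# Toy NumPy sketch of the Forward-Forward idea: each layer is trained locally
# so that its "goodness" (sum of squared activities) is high on real
# ("positive") inputs and low on "negative" inputs. No backward pass is used.

import numpy as np

rng = np.random.default_rng(0)

class FFLayer:
    def __init__(self, n_in, n_out, theta=2.0, lr=0.03):
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_out, n_in))
        self.b = np.zeros(n_out)
        self.theta, self.lr = theta, lr

    def forward(self, x):
        return np.maximum(self.W @ x + self.b, 0.0)        # ReLU activities

    def local_update(self, x, sign):
        """sign=+1: positive pass (raise goodness); sign=-1: negative pass (lower it)."""
        h = self.forward(x)
        goodness = np.sum(h ** 2)
        # gradient of softplus(-sign * (goodness - theta)) with respect to goodness
        z = np.clip(sign * (goodness - self.theta), -50.0, 50.0)
        d_goodness = -sign / (1.0 + np.exp(z))
        d_h = d_goodness * 2.0 * h                          # zero where ReLU is off
        self.W -= self.lr * np.outer(d_h, x)                # purely local weight update
        self.b -= self.lr * d_h
        # normalize activities before handing them to the next layer, so later
        # layers cannot rely on the previous layer's goodness alone
        return h / (np.linalg.norm(h) + 1e-8)

# two stacked layers, each trained only from its own local goodness signal
layers = [FFLayer(20, 32), FFLayer(32, 32)]
for step in range(500):
    pos = rng.normal(+1.0, 0.5, 20)     # stand-in for real data
    neg = rng.normal(-1.0, 0.5, 20)     # stand-in for "negative data"
    for x, sign in ((pos, +1), (neg, -1)):
        for layer in layers:
            x = layer.local_update(x, sign)   # a positive pass, then a negative pass
```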

Training Computer Vision Models on Random Noise Instead of Real Images

2021-12-09 Martin Anderson

Researchers from the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL) have experimented with using random noise images, rather than real photographs, to train computer vision models, and have found that instead of producing garbage, the method is surprisingly effective.

https://www.unite.ai/training-computer-vision-models-on-random-noise-instead-of-real-images/
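The article does not spell out the training recipe, so the PyTorch snippet below is only a rough sketch of the general idea: pretrain a small encoder on procedurally generated noise images with a simple contrastive objective, then fine-tune it on real labeled data later. The noise generator, encoder architecture, augmentations, and loss are illustrative assumptions, not the CSAIL authors' method.

```python
# Sketch: self-supervised pretraining of an image encoder on pure random noise.

import torch
import torch.nn as nn
import torch.nn.functional as F

def random_noise_batch(n, size=32):
    """Generate a batch of synthetic 'images' that are pure random noise."""
    return torch.rand(n, 3, size, size)

def two_views(x):
    """Two cheap augmentations (flip + jitter) of the same noise images."""
    a = torch.flip(x, dims=[3]) + 0.05 * torch.randn_like(x)
    b = x + 0.05 * torch.randn_like(x)
    return a, b

encoder = nn.Sequential(                       # tiny CNN encoder, illustrative only
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64),
)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for step in range(100):                        # short demo loop
    x = random_noise_batch(64)
    a, b = two_views(x)
    za = F.normalize(encoder(a), dim=1)
    zb = F.normalize(encoder(b), dim=1)
    logits = za @ zb.T / 0.1                   # similarity of every pair of views
    labels = torch.arange(len(x))              # matching views are the positives
    loss = F.cross_entropy(logits, labels)     # InfoNCE-style contrastive loss
    opt.zero_grad()
    loss.backward()
    opt.step()
# `encoder` can then be fine-tuned on a (much smaller) labeled set of real images.
```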

Memristor Breakthrough: First Single Device To Act Like a Neuron

2020-09-01 Samuel K. Moore

It combines resistance, capacitance, and what’s called a Mott memristor all in the same device. Memristors are devices that hold a memory, in the form of resistance, of the current that has flowed through them. Mott memristors have an added ability in that they can also reflect a temperature-driven change in resistance. Materials in a Mott transition go between insulating and conducting according to their temperature. It’s a property seen since the 1960s, but only recently explored in nanoscale devices.

The transition happens in a nanoscale sliver of niobium oxide in the memristor. Here, when a DC voltage is applied, the NbO2 heats up slightly, causing it to transition from insulating to conducting. Once that switch happens, the charge built up in the capacitance pours through. Then the device cools just enough to trigger the transition back to insulating. The result is a spike of current that resembles a neuron’s action potential.

https://spectrum.ieee.org/nanoclast/semiconductors/devices/memristor-first-single-device-to-act-like-a-neuron

Also: https://www.nature.com/articles/s41586-020-2735-5.epdf?sharing_token=B11PDbIH67ccrQscLpqM19RgN0jAjWel9jnR3ZoTv0OdeNphDinnZf2DfBr6sMtOQnlA9ClIX5PlqiQovl5PS67A1_SeUDz_GOTcpm9U8FJOwFmzPM8n_1wR_XcVzo9nasoynqgc04XmOkuXv1UxU95v5wjS-eNBbDS0aEI6zvz9aX0jlTRX9soTeiiWwoHX-JFpZUeYiamNdcA3x8Vr8eOQFWRjS7vQ0Ji-WYiQAvIhdiylBLMCTx5sY6HEBVNO2EAlUzWxg8JW4JFhkFf9Fd_P8V18BwKJ_k_eJ2TofXNsyjmPTa-r98OT104dU21Eev4zf-LFX6_7z34scRoUTA%3D%3D&tracking_referrer=spectrum.ieee.org
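As a back-of-the-envelope illustration of the feedback loop described above, the toy simulation below couples a capacitor, charged through a series resistor, to a threshold-switching element whose resistance drops once Joule heating pushes it past a transition temperature; the result is a train of current spikes. All component values are made-up assumptions chosen so the toy model oscillates, not parameters from the Nature paper.

```python
# Toy relaxation-oscillator model of a Mott-memristor "neuron": heat-driven
# switching between an insulating and a conducting state produces current spikes.

import numpy as np

# electrical parameters (assumed)
V_DC, R_S, C = 2.0, 1.0e4, 1.0e-9      # supply [V], series resistor [ohm], capacitor [F]
R_INS, R_COND = 1.0e4, 1.0e2           # device resistance below / above the transition
# thermal parameters (assumed)
T_AMB, T_C = 300.0, 340.0              # ambient and transition temperature [K]
C_TH, R_TH = 1.0e-12, 1.0e6            # heat capacity [J/K], thermal resistance [K/W]

dt, steps = 5e-9, 20000                # 5 ns steps, 100 us of simulated time
v, T = 0.0, T_AMB                      # capacitor voltage, device temperature
current = np.zeros(steps)

for k in range(steps):
    R_m = R_COND if T > T_C else R_INS          # Mott transition as a hard threshold
    i_dev = v / R_m                             # current through the device
    # capacitor: charged by the supply, discharged through the device
    v += dt * ((V_DC - v) / R_S - i_dev) / C
    # temperature: Joule heating versus Newton cooling toward the ambient
    T += dt * (v * i_dev - (T - T_AMB) / R_TH) / C_TH
    current[k] = i_dev                          # spike train of device current

n_spikes = int(np.sum((current[1:] > 1e-3) & (current[:-1] <= 1e-3)))
print(f"peak device current: {current.max()*1e3:.2f} mA, spikes: {n_spikes}")
```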

Artificial brains may need sleep too

2020-06-08 James Riordon

States that resemble sleep-like cycles in simulated neural networks quell the instability that comes with uninterrupted self-learning in artificial analogs of brains.

Watkins and her research team found that the network simulations became unstable after continuous periods of unsupervised learning. When they exposed the networks to states that are analogous to the waves that living brains experience during sleep, stability was restored. “It was as though we were giving the neural networks the equivalent of a good night’s rest,” said Watkins.

https://www.lanl.gov/discover/news-release-archive/2020/June/0608-artificial-brains.php
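The LANL release does not give the update rules, so the toy sketch below only illustrates the general pattern it reports: continuous unsupervised (Hebbian-style) learning drifts toward instability, while periodic "sleep" phases, during which the inputs are pure Gaussian noise and a homeostatic scaling acts, keep the weights bounded. The Hebbian rule, the noise statistics, and the homeostatic step are illustrative assumptions, not the authors' method.

```python
# Toy demo: unconstrained Hebbian learning blows up; interleaved noise-driven
# "sleep" phases with homeostatic rescaling keep the weights bounded.

import numpy as np

rng = np.random.default_rng(1)
TARGET = 1.0                              # desired average activity during sleep

def wake_step(W, x, lr=0.001):
    """Plain Hebbian update on structured input: tends to grow without bound."""
    y = W @ x
    return W + lr * np.outer(y, x)

def sleep_phase(W, n_steps=50):
    """Drive the network with Gaussian noise and rescale each unit's weights so
    its average response returns to a target level (homeostatic assumption)."""
    acts = np.zeros(W.shape[0])
    for _ in range(n_steps):
        x = rng.normal(0, 1, W.shape[1])  # sleep-like noise input
        acts += np.abs(W @ x) / n_steps
    return W * (TARGET / (acts + 1e-8))[:, None]

def run(total_steps=2000, with_sleep=True):
    W = rng.normal(0, 0.1, (10, 50))      # 10 units, 50 inputs
    for step in range(total_steps):
        # structured "waking" input: noise plus one of a few fixed patterns
        x = rng.normal(0, 1, 50) + 2.0 * np.sin(np.arange(50) * (step % 5 + 1))
        W = wake_step(W, x)
        if with_sleep and step % 50 == 49:
            W = sleep_phase(W)            # periodic sleep restores stability
    return np.abs(W).max()

print("max |W| with sleep phases:   ", run(with_sleep=True))
print("max |W| without sleep phases:", run(with_sleep=False))
```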