Geoffrey Hinton’s Forward-Forward Algorithm Charts a New Path for Neural Networks

2022-12-08 by Synced

There is increasing interest in whether the biological brain uses backpropagation or, as Hinton asks, whether it has some other way of getting the gradients needed to adjust the weights on its connections. In this regard, Hinton proposes the FF (Forward-Forward) algorithm as an alternative to backpropagation for neural network learning.

It aims to replace the forward and backward passes of backpropagation with two forward passes: a positive pass that operates on real data and adjusts weights “to improve the goodness in every hidden layer,” and a negative pass that operates on externally supplied or model-generated “negative data” and adjusts weights to deteriorate the goodness.
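The two passes described above can be sketched in a few lines. This is a minimal illustration, assuming (per Hinton's paper) that a layer's "goodness" is the sum of its squared activities and that each pass applies a local logistic loss around a threshold; the layer sizes, learning rate, threshold, and random stand-in data below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, lr = 1.0, 0.02                      # goodness threshold and step size (illustrative)
W = rng.normal(scale=0.1, size=(20, 50))   # a single hidden layer's weights

def goodness(x):
    h = np.maximum(x @ W, 0.0)             # ReLU activities
    return h, (h ** 2).sum(axis=1)         # per-example goodness = sum of squared activities

def ff_step(x, positive):
    """One local update: raise goodness on real data, lower it on negative data."""
    global W
    h, g = goodness(x)
    sign = 1.0 if positive else -1.0
    p = 1.0 / (1.0 + np.exp(-sign * (g - theta)))   # logistic "this pass is correct" probability
    grad_g = -sign * (1.0 - p)                      # d(-log p)/dg
    grad_pre = 2.0 * h * grad_g[:, None] * (h > 0)  # chain rule through h^2 and the ReLU
    W -= lr * (x.T @ grad_pre)                      # local gradient step; no backward pass

x_pos = rng.normal(size=(16, 20))   # stand-in for real ("positive") data
x_neg = rng.normal(size=(16, 20))   # stand-in for "negative" data
for _ in range(200):
    ff_step(x_pos, positive=True)   # positive forward pass
    ff_step(x_neg, positive=False)  # negative forward pass
```

After training, the layer's goodness should be higher on the positive batch than on the negative one. The point of the scheme is that each layer has its own local objective, so layers can be stacked and trained without any backward pass between them.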

https://syncedreview.com/2022/12/08/geoffrey-hintons-forward-forward-algorithm-charts-a-new-path-for-neural-networks/

The Forward-Forward Algorithm: Some Preliminary Investigations, by Geoffrey Hinton
https://www.cs.toronto.edu/~hinton/FFA13.pdf

A Thought on the Lovelace Test

The Lovelace test demands of a computing machine that it not only produce an artifact that is by conventional standards amazing, but it leaves everyone looking at it stupefied as to how it does what it does: including in this stupefaction the creators and designers of the machine.
– Selmer Bringsjord [1]


An artificial agent, designed by a human, passes the [Lovelace] test only if it originates a “program” that it was not engineered to produce. The outputting of the new program—it could be an idea, a novel, a piece of music, anything—can’t be a hardware fluke, and it must be the result of processes the artificial agent can reproduce. Now here’s the kicker: The agent’s designers must not be able to explain how their original code led to this new program. [2]


My thought: if God made humans, can a human use creativity to make an artifact such that God can't explain how it was made?


[1]
Artificial Intelligence: Will Machines Take Over? (Science Uprising, Ep. 10)
2022-09-21 YouTube channel: “Discovery Science”
https://youtu.be/suuxAZbDCYE?t=362

[2]
Forget Turing, the Lovelace Test Has a Better Shot at Spotting AI
2014-07-08 Jordan Pearson
https://www.vice.com/en/article/pgaany/forget-turing-the-lovelace-test-has-a-better-shot-at-spotting-ai

Training Computer Vision Models on Random Noise Instead of Real Images

2021-12-09 Martin Anderson

Researchers from the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL) have experimented with using random-noise images in place of real images to train computer vision models, and have found that, instead of producing garbage, the method is surprisingly effective.

https://www.unite.ai/training-computer-vision-models-on-random-noise-instead-of-real-images/