Geoffrey Hinton’s Forward-Forward Algorithm Charts a New Path for Neural Networks

2022-12-08 by Synced

There is increasing interest in whether the biological brain learns via backpropagation or, as Hinton asks, whether it has some other way of getting the gradients needed to adjust the weights on its connections. In this regard, Hinton proposes the FF [Forward-Forward] algorithm as an alternative to backpropagation for neural network learning.

It aims to replace the forward and backward passes of backpropagation with two forward passes: a positive pass that operates on real data and adjusts weights “to improve the goodness in every hidden layer,” and a negative pass that operates on externally supplied or model-generated “negative data” and adjusts weights to decrease the goodness.
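The two-pass idea above can be sketched in code. The following is a minimal, illustrative NumPy sketch, not Hinton's implementation: it uses the sum of squared activities as the layer's "goodness" (one choice the paper discusses), treats sigmoid(goodness − threshold) as the probability that the input is real, and updates each layer locally, with no backward pass between layers. The layer sizes, learning rate, and threshold are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def goodness(h):
    # One choice of "goodness": the sum of squared activities in the layer
    return np.sum(h ** 2, axis=-1)

class FFLayer:
    """A single hidden layer trained with two forward passes (positive/negative).
    Hyperparameters here are illustrative assumptions, not values from the paper."""
    def __init__(self, n_in, n_out, lr=0.03, theta=2.0):
        self.W = rng.normal(0.0, 0.1, (n_in, n_out))
        self.lr, self.theta = lr, theta

    def _normalize(self, x):
        # Length-normalize the input so a layer cannot simply pass its own
        # goodness downstream; only the orientation of the activity is forwarded.
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

    def forward(self, x):
        return np.maximum(self._normalize(x) @ self.W, 0.0)  # ReLU activities

    def update(self, x, positive):
        xn = self._normalize(x)
        h = np.maximum(xn @ self.W, 0.0)
        g = goodness(h)
        # P(input is real) = sigmoid(goodness - threshold)
        p = 1.0 / (1.0 + np.exp(-(g - self.theta)))
        # Logistic-loss gradient w.r.t. goodness: the positive pass raises
        # goodness, the negative pass lowers it; the update is local to this layer
        dL_dg = -(1.0 - p) if positive else p
        # Chain rule through goodness (2h) and ReLU (h is already 0 where inactive)
        self.W -= self.lr * xn.T @ (dL_dg[:, None] * 2.0 * h)
        return g.mean()
```

As a usage sketch, one can feed the layer "real" samples clustered near a fixed direction on the positive pass and random noise on the negative pass; after some steps, its goodness is higher on real-like inputs than on noise, which is the separation the two passes are designed to produce.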

The Forward-Forward Algorithm: Some Preliminary Investigations by Geoffrey Hinton

Ryan Reynolds reveals the No. 1 skill that’s helped him succeed: ‘It really changed my life’

2022-10-13 Morgan Smith

“We live in a world that’s increasingly gamified, and I think we have an instinct to win, crush and kill,” he said. “But if you can disengage or disarm that instinct for a second and replace it with seeking to learn about somebody instead, that, as a leadership quality, for me, has quite literally changed every aspect of my life.”

A Thought on the Lovelace Test

The Lovelace test demands of a computing machine that it not only produce an artifact that is by conventional standards amazing, but it leaves everyone looking at it stupefied as to how it does what it does: including in this stupefaction the creators and designers of the machine.
– Selmer Bringsjord [1]


An artificial agent, designed by a human, passes the [Lovelace] test only if it originates a “program” that it was not engineered to produce. The outputting of the new program—it could be an idea, a novel, a piece of music, anything—can’t be a hardware fluke, and it must be the result of processes the artificial agent can reproduce. Now here’s the kicker: The agent’s designers must not be able to explain how their original code led to this new program. [2]


My thought: if God made humans, can a human use creativity to make an artifact such that God can’t explain how it was made?


Artificial Intelligence: Will Machines Take Over? (Science Uprising, Ep. 10)
2022-09-21 YouTube channel: “Discovery Science”

Forget Turing, the Lovelace Test Has a Better Shot at Spotting AI
2014-07-08 Jordan Pearson

Vladimir Putin Has Fallen Into the Dictator Trap

2022-03-16 Brian Klaas

For those of us living in liberal democracies, criticizing the boss is risky, but we’re not going to be shipped off to a gulag or watch our family get tortured. In authoritarian regimes, those all-too-real risks have a way of focusing the mind. Is it ever worthwhile for authoritarian advisers to speak truth to power?

As a result, despots rarely get told that their stupid ideas are stupid, or that their ill-conceived wars are likely to be catastrophic. Offering honest criticism is a deadly game and most advisers avoid doing so. Those who dare to gamble eventually lose and are purged. So over time, the advisers who remain are usually yes-men who act like bobbleheads, nodding along when the despot outlines some crackpot scheme.

When despots screw up, they need to watch their own backs. Here, again, they can become victims of the dictator trap. To crush prospective enemies, they must demand loyalty and crack down on criticism. But the more they do so, the lower the quality of information they receive, and the less they can trust the people who purport to serve them. As a result, even when government officials learn about plots to overthrow an autocrat, they may not share that knowledge. This is known as the “vacuum effect”—and it means that authoritarian presidents might learn of coup attempts and putsches only when it’s too late. This raises a question that should keep Putin awake at night: If the oligarchs were to eventually make a move against him, would anyone warn him?