2024-05-05 Miguel Grinberg
https://blog.miguelgrinberg.com/post/how-llms-work-explained-without-math
Visualizations of low-dimensional artificial neural networks transforming the input into a representation that can be separated by a line, with the different classes of data on each side.
Neural Networks, Manifolds, and Topology
2014-04-06 Christopher Olah
https://colah.github.io/posts/2014-03-NN-Manifolds-Topology/
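To make the idea concrete, here is a small sketch (not from the article): a network trained on 2-D data whose classes are not linearly separable learns a hidden representation that a plain linear classifier can separate. The dataset, layer size, and scikit-learn models are illustrative choices.

```python
# Toy illustration: a small network "untangles" non-linearly-separable 2-D data
# so that a straight line (a linear classifier) can separate the classes in the
# learned hidden representation.
import numpy as np
from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_circles(n_samples=500, factor=0.3, noise=0.05, random_state=0)

# A linear classifier on the raw 2-D inputs cannot separate the two circles.
linear_raw = LogisticRegression().fit(X, y)
print("linear accuracy on raw inputs:", linear_raw.score(X, y))

# A small network with one hidden layer transforms the data.
mlp = MLPClassifier(hidden_layer_sizes=(8,), activation="relu",
                    max_iter=5000, random_state=0).fit(X, y)

# Recompute the hidden-layer activations by hand from the learned weights.
H = np.maximum(0.0, X @ mlp.coefs_[0] + mlp.intercepts_[0])

# In the learned representation a plain linear classifier now succeeds,
# i.e. a line (hyperplane) separates the classes.
linear_hidden = LogisticRegression().fit(H, y)
print("linear accuracy on hidden representation:", linear_hidden.score(H, y))
```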
2022-12-08 by Synced
there is increasing interest in whether the biological brain follows backpropagation or, as Hinton asks, whether it has some other way of getting the gradients needed to adjust the weights on its connections. In this regard, Hinton proposes the FF [Forward-Forward] algorithm as an alternative to backpropagation for neural network learning.
…
It aims to replace the forward and backward passes of backpropagation with two forward passes: a positive pass that operates on real data and adjusts weights “to improve the goodness in every hidden layer,” and a negative pass that operates on externally supplied or model-generated “negative data” and adjusts weights to deteriorate the goodness.
The Forward-Forward Algorithm: Some Preliminary Investigations, by Geoffrey Hinton
https://www.cs.toronto.edu/~hinton/FFA13.pdf
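A minimal sketch (not Hinton's code) of the per-layer update described above, assuming the paper's definition of "goodness" as the sum of squared activities in a layer. The layer sizes, threshold, and learning rate are placeholder values; a complete implementation, such as the Keras example linked next, also normalizes activities between layers and supplies its own negative data.

```python
# Toy sketch of a single Forward-Forward layer, assuming goodness = sum of
# squared activities.  Sizes, threshold and learning rate are placeholders.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class FFLayer:
    def __init__(self, in_dim, out_dim, lr=0.03, theta=2.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, size=(out_dim, in_dim))
        self.b = np.zeros(out_dim)
        self.lr, self.theta = lr, theta

    def update(self, x, is_positive):
        """One local update: raise goodness on positive data, lower it on negative."""
        z = self.W @ x + self.b
        h = np.maximum(0.0, z)                       # ReLU activities
        goodness = np.sum(h ** 2)
        # Treat sigmoid(goodness - theta) as "probability the input is real"
        # and nudge it toward 1 for positive data, 0 for negative data.
        grad_g = sigmoid(goodness - self.theta) - (1.0 if is_positive else 0.0)
        grad_z = grad_g * 2.0 * h * (z > 0)          # purely local gradient
        self.W -= self.lr * np.outer(grad_z, x)
        self.b -= self.lr * grad_z
        return h                                     # activities fed to the next layer

# Usage: a positive pass on real data and a negative pass on "negative" data.
layer = FFLayer(in_dim=20, out_dim=50)
rng = np.random.default_rng(1)
real, fake = rng.normal(size=20), rng.normal(size=20)
layer.update(real, is_positive=True)    # positive pass: improve goodness
layer.update(fake, is_positive=False)   # negative pass: deteriorate goodness
```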
Using the Forward-Forward Algorithm for Image Classification
Includes sample Python code using the Keras Python library.
https://keras.io/examples/vision/forwardforward/
2021-12-13 Anil Ananthaswamy
2021-12-09 Martin Anderson
Researchers from MIT Computer Science & Artificial Intelligence Laboratory (CSAIL) have experimented with using random noise images in computer vision datasets to train computer vision models, and have found that instead of producing garbage, the method is surprisingly effective.
https://www.unite.ai/training-computer-vision-models-on-random-noise-instead-of-real-images/
It combines resistance, capacitance, and what’s called a Mott memristor all in the same device. Memristors are devices that hold a memory, in the form of resistance, of the current that has flowed through them. Mott memristors have an added ability in that they can also reflect a temperature-driven change in resistance. Materials in a Mott transition go between insulating and conducting according to their temperature. It’s a property seen since the 1960s, but only recently explored in nanoscale devices.
The transition happens in a nanoscale sliver of niobium oxide in the memristor. Here when a DC voltage is applied, the NbO2 heats up slightly, causing it to transition from insulating to conducting. Once that switch happens, the charge built up in the capacitance pours through. Then the device cools just enough to trigger the transition back to insulating. The result is a spike of current that resembles a neuron’s action potential.
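A toy relaxation-oscillator sketch of that cycle: the DC bias heats the insulating NbO2, the device switches to conducting at an upper temperature threshold and releases the stored charge as a current spike, then cools and switches back. All constants are invented for illustration; this is not a physical model of the actual device.

```python
# Toy sketch of the NbO2 spiking mechanism described above (constants invented).
import numpy as np

T_hot, T_cold = 1.0, 0.2  # normalized switching temperatures
heat_rate = 0.02          # heating per step while insulating (DC bias applied)
cool_rate = 0.05          # cooling per step while conducting
spike_current = 1.0       # normalized current while the stored charge discharges

temperature, conducting = 0.0, False
current_trace = []

for step in range(2000):
    if not conducting:
        temperature += heat_rate          # DC bias slowly heats the NbO2
        if temperature >= T_hot:
            conducting = True             # Mott transition: insulating -> conducting
    else:
        temperature -= cool_rate          # charge pours through, device cools
        if temperature <= T_cold:
            conducting = False            # transition back to insulating
    current_trace.append(spike_current if conducting else 0.0)

# current_trace now contains periodic current spikes that loosely resemble
# a neuron's action potentials.
print("number of spikes:", int(np.sum(np.diff(current_trace) > 0)))
```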
2020-09-27
The story of how neural nets evolved from the earliest days of AI to now.
States that resemble sleep-like cycles in simulated neural networks quell the instability that comes with uninterrupted self-learning in artificial analogs of brains
…
Watkins and her research team found that the network simulations became unstable after continuous periods of unsupervised learning. When they exposed the networks to states that are analogous to the waves that living brains experience during sleep, stability was restored. “It was as though we were giving the neural networks the equivalent of a good night’s rest,” said Watkins.
https://www.lanl.gov/discover/news-release-archive/2020/June/0608-artificial-brains.php
2020-04-30 Kim Martineau
They showed that a deep neural network could perform with only one-tenth the number of connections if the right subnetwork was found early in training.
Train the model, prune its weakest connections, retrain the model at its fast, early training rate, and repeat, until the model is as tiny as you want.
https://news.mit.edu/2020/foolproof-way-shrink-deep-learning-models-0430
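A toy sketch of that loop on a tiny linear model, so it runs with numpy alone. The data, the 20% pruning fraction, and the learning-rate schedule are invented for illustration and not taken from the paper.

```python
# Toy sketch of "train, prune the weakest connections, retrain at the fast
# early learning rate, repeat" on a tiny linear model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
true_w = rng.normal(size=50) * (rng.random(50) < 0.2)     # sparse ground truth
y = X @ true_w + 0.01 * rng.normal(size=200)

def train(w, mask, lr_schedule):
    """Gradient descent; pruned connections (mask == 0) stay at zero."""
    for lr in lr_schedule:
        grad = X.T @ (X @ (w * mask) - y) / len(y)
        w = w - lr * grad * mask
    return w

schedule = np.geomspace(0.05, 0.001, 400)   # fast early rate, decaying later
mask = np.ones(50)                          # 1 = connection kept, 0 = pruned
w = train(rng.normal(size=50) * 0.1, mask, schedule)

for round_ in range(5):
    kept = np.flatnonzero(mask)
    n_prune = max(1, int(0.2 * len(kept)))                  # drop 20% weakest
    weakest = kept[np.argsort(np.abs(w[kept]))[:n_prune]]
    mask[weakest] = 0.0
    # Retrain the now-smaller model, rewinding to the fast, early learning rate.
    w = train(w, mask, schedule)
    loss = np.mean((X @ (w * mask) - y) ** 2)
    print(f"round {round_}: {int(mask.sum())} connections kept, loss {loss:.4f}")
```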
2020-02-17 Martijn van Wezel
SNNs are bio-inspired neural networks that differ from conventional neural networks in how they communicate: conventional networks pass numbers, while SNNs communicate through spikes. … Multiple spikes arriving within a short period can stimulate the neuron to fire; however, if the gaps between spikes are too long, the neuron loses interest and goes to sleep again.
… one major benefit of spiking neural networks is power consumption. A ‘normal’ neural network runs on big GPUs or CPUs that draw hundreds of watts of power, while an SNN of the same network size uses just a few nanowatts.
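A minimal leaky integrate-and-fire sketch of the behaviour the excerpt describes: closely spaced input spikes drive the neuron over threshold, while long gaps let its potential leak away ("loses interest"). The constants are illustrative, not from the article.

```python
# Minimal leaky integrate-and-fire neuron sketch (constants are illustrative).
decay = 0.9          # leak per time step: potential fades between spikes
threshold = 2.5      # membrane potential needed to fire
spike_weight = 1.0   # contribution of each incoming spike

def run(input_spikes):
    """input_spikes: sequence of 0/1 per time step.  Returns the output spike train."""
    potential, output = 0.0, []
    for s in input_spikes:
        potential = decay * potential + spike_weight * s
        if potential >= threshold:
            output.append(1)
            potential = 0.0          # reset after firing
        else:
            output.append(0)
    return output

# Several spikes in quick succession make the neuron fire...
print(run([1, 1, 1, 1, 0, 0, 0, 0]))
# ...but the same number of spikes spread far apart does not.
print(run([1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]))
```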