2021-12-13 Anil Ananthaswamy
2021-12-09 Martin Anderson
Researchers from MIT Computer Science & Artificial Intelligence Laboratory (CSAIL) have experimented with training computer vision models on datasets of random noise images, and have found that instead of producing garbage, the method is surprisingly effective.
2020-09-01 Samuel K. Moore
It combines resistance, capacitance, and what’s called a Mott memristor all in the same device. Memristors are devices that hold a memory, in the form of resistance, of the current that has flowed through them. Mott memristors have an added ability in that they can also reflect a temperature-driven change in resistance. Materials in a Mott transition go between insulating and conducting according to their temperature. It’s a property seen since the 1960s, but only recently explored in nanoscale devices.
The transition happens in a nanoscale sliver of niobium oxide in the memristor. Here, when a DC voltage is applied, the NbO2 heats up slightly, causing it to transition from insulating to conducting. Once that switch happens, the charge built up in the capacitance pours through. Then the device cools just enough to trigger the transition back to insulating. The result is a spike of current that resembles a neuron’s action potential.
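The charge–heat–discharge–cool cycle described above is a relaxation oscillator, and it can be sketched in a few lines of simulation. Everything below is illustrative: the resistances, capacitance, and switching thresholds are assumed placeholder values, not measured NbO2 device parameters, and the thermal Mott transition is reduced to two voltage thresholds.

```python
# Minimal sketch of the Mott-memristor relaxation oscillator: a capacitor
# charges from a DC source through a load resistor; when the voltage
# crosses an (assumed) upper threshold, the device flips to its conducting
# state and the stored charge pours through; cooling below a lower
# threshold flips it back to insulating. All values are placeholders.

V_DC = 1.0                 # applied DC voltage (V)
R_LOAD = 1e3               # series resistance charging the capacitor (ohm)
C = 1e-6                   # device capacitance (F)
R_INS, R_COND = 1e6, 1e2   # insulating vs conducting memristor resistance
V_HI, V_LO = 0.6, 0.2      # hypothetical switching thresholds (V)
DT = 1e-6                  # simulation time step (s)

v, conducting, n_spikes = 0.0, False, 0
current = []
for _ in range(20000):
    r_mem = R_COND if conducting else R_INS
    # current flows in through the load and out through the memristor
    v += ((V_DC - v) / R_LOAD - v / r_mem) / C * DT
    if not conducting and v > V_HI:
        conducting = True          # "heats" past the Mott transition
        n_spikes += 1
    elif conducting and v < V_LO:
        conducting = False         # "cools" back to insulating
    current.append(v / r_mem)      # current through the device

print(f"{n_spikes} spikes, peak current {max(current):.1e} A")
```

Each threshold crossing produces a sharp pulse of current through the device followed by a quiet recharge period, which is the action-potential-like spike the article describes.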
States that resemble sleep-like cycles in simulated neural networks quell the instability that comes with uninterrupted self-learning in artificial analogs of brains
Watkins and her research team found that the network simulations became unstable after continuous periods of unsupervised learning. When they exposed the networks to states that are analogous to the waves that living brains experience during sleep, stability was restored. “It was as though we were giving the neural networks the equivalent of a good night’s rest,” said Watkins.
2020-04-30 Kim Martineau
They showed that a deep neural network could perform just as well with only one-tenth the number of connections if the right subnetwork was found early in training.
Train the model, prune its weakest connections, retrain the model at its fast, early training rate, and repeat, until the model is as tiny as you want.
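The train–prune–rewind loop above can be sketched on a toy model. This is a hedged illustration, not the paper's setup: the "network" is a NumPy logistic regression, and the sizes, prune rate, and learning rate are all invented for the demo.

```python
import numpy as np

# Sketch of iterative magnitude pruning with rewinding, on a toy
# logistic-regression "network". Illustrative hyperparameters only.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
true_w = rng.normal(size=20)
y = (X @ true_w > 0).astype(float)       # synthetic labels

def train(w, mask, steps=200, lr=0.1):
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ (w * mask))))
        grad = X.T @ (p - y) / len(y)
        w -= lr * grad * mask            # pruned weights stay at zero
    return w

w_init = rng.normal(size=20) * 0.1       # "early training" weights to rewind to
mask = np.ones(20)

for _ in range(4):                       # each round prunes 20% of what's left
    w = train(w_init.copy(), mask)
    alive = np.flatnonzero(mask)
    k = max(1, int(0.2 * len(alive)))
    weakest = alive[np.argsort(np.abs(w[alive]))[:k]]
    mask[weakest] = 0.0                  # prune the smallest-magnitude weights
    # "rewind": the next round restarts from w_init, not from the trained w

w = train(w_init.copy(), mask)           # final retrain of the surviving subnet
acc = ((1 / (1 + np.exp(-(X @ (w * mask)))) > 0.5) == y).mean()
print(f"kept {int(mask.sum())}/20 weights, accuracy {acc:.2f}")
```

The key design choice is the rewind step: instead of fine-tuning the already-trained weights after pruning, each round restarts the surviving connections from their early-training values, so the subnetwork retrains at its "fast, early training rate."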
2020-02-17 Martijn van Wezel
SNNs are bio-inspired neural networks that differ from conventional neural networks in that conventional networks communicate with numbers. Instead, SNNs communicate through spikes. … Having multiple spikes in a short period can stimulate the neuron to fire. However, if the time periods between spikes are too big, the neuron loses interest and goes to sleep again.
… one major benefit of a Spiking Neural Network is its power consumption. A ‘normal’ neural network runs on big GPUs or CPUs that draw hundreds of watts of power. For the same network size, an SNN uses just a few nanowatts.
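The fire-or-lose-interest behavior in the excerpt is the classic leaky integrate-and-fire (LIF) neuron, which can be sketched in a few lines. The time constant, threshold, and input weight below are illustrative placeholders, not values from the article.

```python
# Minimal leaky integrate-and-fire sketch: closely spaced input spikes
# push the membrane potential over threshold and the neuron fires;
# widely spaced spikes leak away first (the neuron "loses interest").
# All parameter values are illustrative.

TAU = 20.0        # membrane time constant (ms)
V_TH = 1.0        # firing threshold
W = 0.4           # contribution of each incoming spike
DT = 1.0          # time step (ms)

def run(input_times, t_end=200):
    v, fired = 0.0, []
    for t in range(t_end):
        v -= (v / TAU) * DT            # leak toward rest
        if t in input_times:
            v += W                     # incoming spike
        if v >= V_TH:
            fired.append(t)            # output spike
            v = 0.0                    # reset after firing
    return fired

burst = run({10, 12, 14})     # three spikes 2 ms apart -> neuron fires
sparse = run({10, 60, 110})   # three spikes 50 ms apart -> stays silent
print(burst, sparse)
```

The energy argument in the excerpt follows from this event-driven style: between spikes nothing needs to be computed, whereas a conventional network multiplies dense numbers on every forward pass.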
2020-03-09 Katherine Harmon Courage
In the February 20 issue of Cell, one team of scientists announced that they — and a powerful deep learning algorithm — had found a totally new antibiotic, one with an unconventional mechanism of action that allows it to fight infections that are resistant to multiple drugs. The compound was hiding in plain sight (as a possible diabetes treatment) because humans didn’t know what to look for. …
Collins, Barzilay and their team trained their network to look for any compound that would inhibit the growth of the bacterium Escherichia coli. They did so by presenting the system with a database of more than 2,300 chemical compounds that had known molecular structures and were classified as “hits” or “non-hits” on tests of their ability to inhibit the growth of E. coli. From that data, the neural net learned what atom arrangements and bond structures were common to the molecules that counted as hits. …
The researchers … also trained the algorithm to predict the toxicity of compounds and to weed out candidate molecules on that basis. …
They then turned the trained network loose on the Drug Repurposing Hub, a library of more than 6,000 compounds that are already being vetted for use in humans for a wide variety of conditions.
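The pipeline in the excerpt (train on labeled hits/non-hits, filter for toxicity, then rank a repurposing library) can be sketched with a stand-in model. To be clear about what is assumed: the actual paper used a directed message-passing graph neural network over molecular structures; here a plain logistic classifier over synthetic binary "fingerprint" vectors merely illustrates the train-then-rank flow, and all of the data is made up (only the dataset sizes echo the article's numbers).

```python
import numpy as np

# Stand-in sketch of the screening workflow: fit a hit/non-hit
# classifier on a labeled training set, then rank an unlabeled
# "repurposing library" by predicted activity. Synthetic data only.

rng = np.random.default_rng(1)
N_TRAIN, N_LIB, D = 2300, 6000, 64    # sizes echo the article's figures

X = rng.integers(0, 2, size=(N_TRAIN, D)).astype(float)   # fake fingerprints
signal = rng.normal(size=D)
y = (X @ signal + rng.normal(size=N_TRAIN) > 0).astype(float)  # hit / non-hit

w = np.zeros(D)
for _ in range(500):                  # plain full-batch logistic regression
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / N_TRAIN

acc = ((1 / (1 + np.exp(-(X @ w))) > 0.5) == y).mean()

library = rng.integers(0, 2, size=(N_LIB, D)).astype(float)
scores = 1 / (1 + np.exp(-(library @ w)))
top = np.argsort(scores)[::-1][:5]    # highest predicted growth inhibition
print(f"train accuracy {acc:.2f}, top candidates: {top}")
```

In the real study the same ranking step is what surfaced the repurposed diabetes-candidate compound: the model scores every library molecule for predicted E. coli growth inhibition, and the highest-scoring candidates go on to lab validation.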
2020-02-13 Tommy Thompson
One of the biggest headlines in AI research for 2019 was the unveiling of AlphaStar – Google DeepMind’s project to create the world’s best player of Blizzard’s real-time strategy game StarCraft II. After shocking the world in January as the system defeated two high-ranking players in closed competition, an updated version was revealed in November that had achieved grandmaster status: ranking among the top 0.15% of Europe’s 90,000 active players. So let’s look at how AlphaStar works, the underpinning technology and theory that drives it, the truth behind the media sensationalism, and how it achieved grandmaster rank in online multiplayer.