Artificial brains may need sleep too

2020-06-08 James Riordon

Sleep-like states in simulated neural networks quell the instability that comes with uninterrupted self-learning in artificial analogs of brains

Watkins and her research team found that the network simulations became unstable after continuous periods of unsupervised learning. When they exposed the networks to states that are analogous to the waves that living brains experience during sleep, stability was restored. “It was as though we were giving the neural networks the equivalent of a good night’s rest,” said Watkins.

Spiking Neural Networks

2020-02-17 Martijn van Wezel

SNNs are bio-inspired neural networks that differ from conventional neural networks: conventional networks communicate with numbers, whereas SNNs communicate through spikes. … Having multiple spikes in a short period can stimulate the neuron to fire. However, if the time between spikes is too long, the neuron loses interest and goes back to sleep.

… one major benefit of a spiking neural network is its power consumption. A ‘normal’ neural network uses big GPUs or CPUs that draw hundreds of watts of power. For the same network size, an SNN uses just a few nanowatts.
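The spike-timing behavior described above can be sketched with a leaky integrate-and-fire neuron, the classic SNN unit model. This is a minimal sketch, not code from the article; the time constant, threshold, and weight values are illustrative:

```python
import numpy as np

def lif_neuron(spike_times, t_max=100, dt=1.0, tau=10.0, threshold=1.5, w=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential decays with
    time constant tau, each input spike adds weight w, and the neuron fires
    when the potential crosses the threshold, then resets."""
    v = 0.0
    fired_at = []
    spike_set = set(spike_times)
    for step in range(int(t_max / dt)):
        t = step * dt
        v *= np.exp(-dt / tau)   # leak: potential decays between spikes
        if t in spike_set:
            v += w               # each incoming spike nudges the potential up
        if v >= threshold:
            fired_at.append(t)
            v = 0.0              # reset after firing
    return fired_at

# Two spikes close together sum up and push the neuron over threshold...
print(lif_neuron([10, 12]))   # → [12.0]
# ...but the same two spikes far apart leak away before they can sum.
print(lif_neuron([10, 60]))   # → []
```

The decay term is what makes spike *timing* matter: identical inputs produce a firing or a silent neuron depending only on how far apart they arrive.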

Machine Learning Takes On Antibiotic Resistance

2020-03-09 Katherine Harmon Courage

In the February 20 issue of Cell, one team of scientists announced that they — and a powerful deep learning algorithm — had found a totally new antibiotic, one with an unconventional mechanism of action that allows it to fight infections that are resistant to multiple drugs. The compound was hiding in plain sight (as a possible diabetes treatment) because humans didn’t know what to look for. …

Collins, Barzilay and their team trained their network to look for any compound that would inhibit the growth of the bacterium Escherichia coli. They did so by presenting the system with a database of more than 2,300 chemical compounds that had known molecular structures and were classified as “hits” or “non-hits” on tests of their ability to inhibit the growth of E. coli. From that data, the neural net learned what atom arrangements and bond structures were common to the molecules that counted as hits. …
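The training setup described above can be sketched as a binary classifier over molecular feature vectors. Everything below is a stand-in: the team's actual model was a deep neural network operating on molecular structures, while this sketch uses made-up binary "fingerprint" features, synthetic labels, and plain logistic regression to illustrate the hit/non-hit learning step and the subsequent library screening:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the screening data: each compound is a binary
# "fingerprint" marking which substructures (atom arrangements, bond motifs)
# it contains; the label records whether it inhibited E. coli growth ("hit").
n_compounds, n_features = 2300, 64
X = rng.integers(0, 2, size=(n_compounds, n_features)).astype(float)
true_w = rng.normal(size=n_features)
y = (X @ true_w > 0).astype(float)            # synthetic hit / non-hit labels

# Logistic regression by gradient descent: learns which substructure
# patterns are common to the hits.
w = np.zeros(n_features)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))        # predicted hit probability
    w -= 0.1 * X.T @ (p - y) / n_compounds    # gradient step on the log-loss

# Screen a "new library" of unseen compounds and rank by predicted activity.
library = rng.integers(0, 2, size=(6000, n_features)).astype(float)
scores = 1.0 / (1.0 + np.exp(-(library @ w)))
top = np.argsort(scores)[::-1][:5]            # top-ranked candidates
```

The key design point the article describes survives even in this toy version: the model never sees a rule for what makes an antibiotic, only structures labeled by outcome, so it can rank compounds that were catalogued for entirely different purposes.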

The researchers … also trained the algorithm to predict the toxicity of compounds and to weed out candidate molecules on that basis. …

They then turned the trained network loose on the Drug Repurposing Hub, a library of more than 6,000 compounds that are already being vetted for use in humans for a wide variety of conditions.

How AlphaStar Became a StarCraft Grandmaster

2020-02-13 Tommy Thompson

One of the biggest headlines in AI research for 2019 was the unveiling of AlphaStar – Google DeepMind’s project to create the world’s best player of Blizzard’s real-time strategy game StarCraft II. After shocking the world in January as the system defeated two high-ranking players in closed competition, an updated version was revealed in November that had achieved grandmaster status: ranking among the top 0.15% of Europe’s 90,000 active players. So let’s look at how AlphaStar works, the underpinning technology and theory that drives it, the truth behind the media sensationalism, and how it achieved grandmaster rank in online multiplayer.

Google Colaboratory Notebook and Repository Gallery

2019-11-25 firmai / Derek Snow

“A curated list of repositories with fully functional click-and-run colab notebooks with data, code and description. The code in these repositories is in Python unless otherwise stated.”

Some of the Google Colaboratory Notebooks listed use artificial neural networks. Google Colaboratory Notebooks are Jupyter notebooks that run Python or other code on Google’s cloud for free.

Exploring Weight Agnostic Neural Networks

When training a neural network to accomplish a given task, be it image classification or reinforcement learning, one typically refines a set of weights associated with each connection within the network. Another approach to creating successful neural networks that has shown substantial progress is neural architecture search, which constructs neural network architectures out of hand-engineered components such as convolutional network components or transformer blocks. It has been shown that neural network architectures built with these components, such as deep convolutional networks, have strong inductive biases for image processing tasks, and can even perform them when their weights are randomly initialized.