Spiking Neural Networks

2020-02-17 Martijn van Wezel

SNNs are bio-inspired neural networks that differ from conventional neural networks: where conventional networks communicate with numbers, SNNs communicate through spikes. … Receiving multiple spikes in a short period can stimulate a neuron to fire. However, if the time between spikes is too long, the neuron loses interest and goes back to sleep.
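That fire-or-forget behavior is usually modeled as a leaky integrate-and-fire (LIF) neuron. A minimal sketch, with illustrative parameter values (threshold, leak rate, charge per spike) that are not from the article:

```python
# Leaky integrate-and-fire neuron: incoming spikes add charge, the
# membrane potential leaks away over time, and the neuron fires when
# the potential crosses a threshold.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold  # potential needed to fire
        self.leak = leak            # fraction of potential kept each step
        self.potential = 0.0

    def step(self, spike_in):
        """Advance one time step; return True if the neuron fires."""
        self.potential *= self.leak       # charge decays ("loses interest")
        if spike_in:
            self.potential += 0.4         # each incoming spike adds charge
        if self.potential >= self.threshold:
            self.potential = 0.0          # reset after firing
            return True
        return False

neuron = LIFNeuron()
# Spikes arriving close together push the potential over the threshold...
burst = [neuron.step(True) for _ in range(5)]

# ...while widely spaced spikes leak away before the threshold is reached.
neuron.potential = 0.0
sparse = []
for _ in range(5):
    sparse.append(neuron.step(True))
    for _ in range(20):                   # long quiet gap between spikes
        sparse.append(neuron.step(False))
```

With these toy numbers the burst makes the neuron fire, while the sparse train never does — the same "short period vs. too big a gap" distinction described above.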

… one major benefit of a spiking neural network is power consumption. A ‘normal’ neural network runs on big GPUs or CPUs that draw hundreds of watts. For the same network size, an SNN uses just a few nanowatts.


Machine Learning Takes On Antibiotic Resistance

2020-03-09 Katherine Harmon Courage

In the February 20 issue of Cell, one team of scientists announced that they — and a powerful deep learning algorithm — had found a totally new antibiotic, one with an unconventional mechanism of action that allows it to fight infections that are resistant to multiple drugs. The compound was hiding in plain sight (as a possible diabetes treatment) because humans didn’t know what to look for. …

Collins, Barzilay and their team trained their network to look for any compound that would inhibit the growth of the bacterium Escherichia coli. They did so by presenting the system with a database of more than 2,300 chemical compounds that had known molecular structures and were classified as “hits” or “non-hits” on tests of their ability to inhibit the growth of E. coli. From that data, the neural net learned what atom arrangements and bond structures were common to the molecules that counted as hits. …

The researchers … also trained the algorithm to predict the toxicity of compounds and to weed out candidate molecules on that basis. …

They then turned the trained network loose on the Drug Repurposing Hub, a library of more than 6,000 compounds that are already being vetted for use in humans for a wide variety of conditions.
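The two-stage idea — train a binary hit/non-hit classifier on labeled compounds, then score an unlabeled library and keep the top-ranked candidates — can be sketched in miniature. The real system was a graph neural network trained on ~2,300 compounds; the perceptron, the 4-bit "fingerprints," and the candidate names below are all made up for illustration:

```python
# Stage 1: train a tiny perceptron on labeled (fingerprint, hit/non-hit)
# pairs. Stage 2: rank an unlabeled library by predicted score.

def train_perceptron(data, epochs=20, lr=0.1):
    """data: list of (fingerprint bit-vector, label 0/1)."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def score(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Made-up training set: bit 0 stands in for the substructure that the
# network learns is common to growth-inhibiting "hits".
labeled = [
    ([1, 0, 1, 0], 1), ([1, 1, 0, 0], 1),
    ([0, 0, 1, 1], 0), ([0, 1, 0, 1], 0),
]
w, b = train_perceptron(labeled)

# "Screening": rank an unlabeled library of candidate compounds.
library = {"cand_A": [1, 1, 1, 0], "cand_B": [0, 0, 1, 1]}
ranked = sorted(library, key=lambda name: score(w, b, library[name]),
                reverse=True)
```

The screening step is the same in spirit as turning the trained network loose on the Drug Repurposing Hub: no new lab experiments, just scoring existing structures and surfacing the most promising ones.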


Sonic’s very fast redesign

When the redesign [of the Sonic the Hedgehog movie] was commissioned, artist Tyson Hesse, who had worked on previous Sonic the Hedgehog media, was brought on to lead it. “But a lot of the work that the team had already done as far as the look of Sonic’s fur, should he have eyelashes and what they look like…how we should handle the eyes…all that kind of stuff was actually directly translatable over to the new design,” comments Wright. “I think the redesign took only seven or eight weeks, which is a record at MPC for the design of a 3D character.”

The process was helped by Hesse having a strong relationship with Sega previously. The team flew to London, “where we all sat with the guy that was concept sculpting and literally just did it in real-time.” Wright felt that because there was a clear remit for Hesse to take the redesign lead, it allowed everything to fall into place very simply. “Actually, funnily enough, the redesign was pretty painless. In some respects, a more exaggerated character is a simpler thing for the team to execute,” he comments.


Real-time, in-camera background compositing in The Mandalorian

2020-02-06 Jay Holben

For decades, green- and bluescreen compositing was the go-to solution for bringing fantastic environments and actors together on the screen. . . . However, when characters are wearing highly reflective costumes, as is the case with Mando (Pedro Pascal), the title character of The Mandalorian, the reflection of green- and bluescreen in the wardrobe causes costly problems in post-production. . . .

The solution was what might be described as the heir to rear projection — a dynamic, real-time, photo-real background played back on a massive LED video wall and ceiling, which not only provided the pixel-accurate representation of exotic background content, but was also rendered with correct camera positional data.
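The core geometric trick behind rendering "with correct camera positional data" is that a virtual point behind the wall must be drawn where the line from the tracked camera to that point crosses the wall surface. A toy sketch, assuming a flat wall on the plane z = 0 and made-up coordinates (the production pipeline is far more elaborate):

```python
# For each virtual scene point, intersect the ray from the tracked
# camera through that point with the wall plane z = wall_z. Drawing the
# point there makes it line up correctly from the camera's viewpoint.

def draw_position_on_wall(camera, virtual_point, wall_z=0.0):
    """Return the (x, y) wall position where virtual_point should appear."""
    cx, cy, cz = camera
    px, py, pz = virtual_point
    t = (wall_z - cz) / (pz - cz)   # ray parameter at the wall plane
    return (cx + t * (px - cx), cy + t * (py - cy))

# A mountain 100 m "behind" the wall, camera 5 m in front of it:
mountain = (10.0, 20.0, -100.0)
cam_a = (0.0, 2.0, 5.0)
a = draw_position_on_wall(cam_a, mountain)

# Move the camera sideways: the drawn position shifts, producing the
# parallax that rear projection and greenscreen cannot give in-camera.
cam_b = (2.0, 2.0, 5.0)
b = draw_position_on_wall(cam_b, mountain)
```

Because the wall content is re-rendered per camera position, distant background elements slide correctly as the camera moves — which is why the result reads as photo-real rather than as a static backdrop.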

The Mandalorian: This Is the Way in American Cinematographer

Also see The Virtual Production of The Mandalorian, Season One at YouTube.

Discovered from: https://news.ycombinator.com/item?id=22378679

How AlphaStar Became a StarCraft Grandmaster

2020-02-13 Tommy Thompson

One of the biggest headlines in AI research for 2019 was the unveiling of AlphaStar – Google DeepMind’s project to create the world’s best player of Blizzard’s real-time strategy game StarCraft II. After shocking the world in January as the system defeated two high-ranking players in closed competition, an updated version was revealed in November that had achieved grandmaster status: ranking among the top 0.15% of Europe’s 90,000 active players. So let’s look at how AlphaStar works, the underpinning technology and theory that drives it, the truth behind the media sensationalism, and how it achieved grandmaster rank in online multiplayer.


Google Colaboratory Notebook and Repository Gallery

2019-11-25 firmai / Derek Snow

“A curated list of repositories with fully functional click-and-run colab notebooks with data, code and description. The code in these repositories are in Python unless otherwise stated.”

Some of the Google Colaboratory Notebooks listed use artificial neural networks. Google Colaboratory Notebooks are Jupyter notebooks that run Python or other code on Google’s cloud for free.