BU LISP: Research

On the Web

How randomness can protect neural networks against adversarial attacks.

By Ben Dickson

As deep learning and neural networks take on more and more important tasks, there is growing concern over how they might be compromised for malicious purposes. It's one thing for an attacker to hack your Netflix content recommendation algorithm, but a totally different problem when your self-driving car is fooled into ignoring a stop sign or failing to detect a pedestrian. As we continue to learn about the unique security threats that deep learning algorithms entail, one area of focus is adversarial attacks: perturbations in input data that cause artificial intelligence algorithms to behave in unexpected (and perhaps dangerous) ways. Read more on BdTechTalks.com
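To make the two ideas in the headline concrete, here is a minimal, hypothetical sketch, not code from the research the article covers: it crafts a perturbation using the well-known fast gradient sign method (FGSM) against a toy PyTorch model, then adds randomness at inference time by averaging predictions over noisy copies of the input, in the spirit of randomized-smoothing defenses. The model, epsilon, and sigma values are all illustrative assumptions.

```python
# Hypothetical sketch: (1) an adversarial perturbation (FGSM) and
# (2) a randomness-based defense (averaging over noisy inputs).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy classifier standing in for any image model (assumed, not the paper's).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

x = torch.rand(1, 1, 28, 28)   # a "clean" input image
y = torch.tensor([3])          # its true label

# --- (1) Craft an adversarial example with FGSM ---
x_adv = x.clone().requires_grad_(True)
loss = F.cross_entropy(model(x_adv), y)
loss.backward()
epsilon = 0.1                  # perturbation budget (illustrative)
x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

# --- (2) Randomized defense: average logits over noisy copies ---
def randomized_predict(model, x, sigma=0.1, n=32):
    """Add Gaussian noise to the input n times and average the logits,
    in the spirit of randomized-smoothing defenses."""
    noisy = x.repeat(n, 1, 1, 1) + sigma * torch.randn(n, *x.shape[1:])
    with torch.no_grad():
        return model(noisy).mean(dim=0, keepdim=True)

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
print("randomized prediction: ", randomized_predict(model, x_adv).argmax(dim=1).item())
```

The intuition behind the defense is that an adversarial perturbation is carefully tuned to one exact input; averaging predictions over many randomly perturbed copies makes the final decision depend less on any single, precisely crafted change.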

Could artificial intelligence make life harder for hackers?

By Art Jahnke, Boston University

As the volume of digital information in corporate networks continues to grow, so do the number of cyberattacks and their cost. One cybersecurity vendor, Juniper Networks, estimates that the cost of data breaches worldwide will reach $2.1 trillion in 2019, roughly four times the cost of breaches in 2015. Now, two Boston University computer scientists, working with researchers at Draper, a not-for-profit engineering solutions company located in Cambridge, have developed a tool that could make it harder for hackers to find their way into networks where they don't belong. Read more on phys.org