BU LISP : Research

Current Projects

Applying Manifold Learning Techniques to Design Recurrent Architectures for Low-Dimensional Classification

Deep Neural Networks (DNNs) can achieve very high performance on visual recognition tasks but are prone to noise and adversarial attacks. One main problem in training a DNN is that the input often lies in a very high-dimensional space, which leads to a large number of parameters to train. This raises the question of how to reduce the dimensionality of the dataset. Given a high-dimensional dataset such as a visual dataset, how can we find a lower-dimensional representation that keeps the essential information of the images? With a low-dimensional representation, we can hopefully use a shallower, simpler architecture that still classifies high-dimensional datasets decently.
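
As a toy illustration of this pipeline (a sketch under assumptions, not our architecture), the snippet below embeds 64-dimensional digit images into 8 dimensions with scikit-learn's Isomap and classifies in the embedded space with a k-NN model; the manifold learner, classifier, and target dimension are all placeholders.

    # Sketch: embed a high-dimensional dataset with a manifold learner,
    # then classify in the low-dimensional space with a shallow model.
    # Isomap and k-NN are stand-ins for whatever learner/classifier is used.
    from sklearn.datasets import load_digits
    from sklearn.manifold import Isomap
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_digits(return_X_y=True)          # 64-dimensional images
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    embed = Isomap(n_components=8)               # 64 -> 8 dimensions
    Z_train = embed.fit_transform(X_train)
    Z_test = embed.transform(X_test)

    clf = KNeighborsClassifier(n_neighbors=5).fit(Z_train, y_train)
    print("accuracy in the 8-D embedding:", clf.score(Z_test, y_test))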

Complex-Valued Neural Networks

Current neural network models that deal with data on the spectral plane (magnitude and phase) take only the magnitude as input and do not incorporate the phase information in a meaningful way. Research has shown that the output of a biological neuron is affected by the phase of its inputs. To bridge this gap between artificial and biological neurons, I am experimenting with the implementation and effectiveness of complex-valued neural networks, which integrate the phase information meaningfully. In particular, to take advantage of the popular neural network framework PyTorch, I am working to simulate complex-valued neural network operations through real-valued ones. By implementing complex-valued neural networks in this framework, I hope to make them easy for other researchers to use and experiment with.
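
A minimal sketch of the core trick, assuming a bias-free linear layer as the running example: a complex matrix product is expanded into four real ones, so two ordinary PyTorch Linear modules carry the real and imaginary parts of the weights.

    # Simulating a complex-valued linear layer with real-valued PyTorch modules:
    # (x_re + i*x_im)(W_re + i*W_im) expands into four real matrix products.
    # Layer sizes below are placeholders.
    import torch
    import torch.nn as nn

    class ComplexLinear(nn.Module):
        def __init__(self, in_features, out_features):
            super().__init__()
            self.re = nn.Linear(in_features, out_features, bias=False)  # W_re
            self.im = nn.Linear(in_features, out_features, bias=False)  # W_im

        def forward(self, x_re, x_im):
            # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
            out_re = self.re(x_re) - self.im(x_im)
            out_im = self.im(x_re) + self.re(x_im)
            return out_re, out_im

    # Usage on a toy batch: magnitude/phase converted to real/imaginary parts.
    mag, phase = torch.rand(4, 16), torch.rand(4, 16)
    layer = ComplexLinear(16, 8)
    y_re, y_im = layer(mag * torch.cos(phase), mag * torch.sin(phase))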

Efficient Neural Networks: Reducing Network Architecture Size

Neural networks are an immensely powerful tool for many difficult problems, but they often require computational power beyond that of small devices such as embedded systems, Internet of Things devices, and mobile phones. In this project we aim to create computationally efficient neural networks: networks with smaller memory and compute footprints that lose none of their functionality on the objective task.
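
As one concrete (and hedged) example of shrinking a footprint, the sketch below applies magnitude pruning with PyTorch's built-in utilities to zero out half the weights of a toy model; the architecture and the 50% sparsity target are placeholders, not our method.

    # Magnitude pruning with PyTorch's pruning utilities: one common way to
    # reduce a network's effective size. Model and sparsity are placeholders.
    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.5)  # zero 50%
            prune.remove(module, "weight")  # make the sparsity permanent

    zeros = sum((p == 0).sum().item() for p in model.parameters())
    total = sum(p.numel() for p in model.parameters())
    print(f"sparsity: {zeros / total:.1%}")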

Finding Dimensionality in Large Data

The "intrinsic" dimensionality of a dataset is a quantity of great interest in the machine learning community. There are many techniques aimed at estimating this "intrinsic" dimensionality, but there are currently none which have demonstrated scalability to large, complex datasets. In this project we aim to find the intrinsic dimension of both small and large, simple and complex datasets.

Information Propagation in Multilayer Networks

With the emergence of social media, information and influence propagation in online networks has become an active field of research over the last decade. Individuals often participate in multiple social networks, which lets information spread faster and makes its propagation more complex. We would like to understand the patterns of such propagation in multilayer networks using game-theoretic tools.
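
To make "propagation in multilayer networks" concrete, the toy simulation below runs an independent-cascade spread over two random graph layers that share one node set; it is a stand-in illustration, not the game-theoretic model we study.

    # Toy independent-cascade spread on a two-layer network: a node that is
    # activated can transmit along its edges in every layer it belongs to.
    import random
    import networkx as nx

    layers = [nx.erdos_renyi_graph(100, 0.05, seed=s) for s in (0, 1)]  # shared nodes
    p = 0.1                                   # per-edge transmission probability

    active, frontier = {0}, {0}               # seed the cascade at node 0
    while frontier:
        new = set()
        for layer in layers:
            for u in frontier:
                for v in layer.neighbors(u):
                    if v not in active and random.random() < p:
                        new.add(v)
        active |= new
        frontier = new

    print("final cascade size:", len(active))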

Information Propagation Through Graph Neural Networks and Relation to the Brain

A popular theory of intelligence argues that intelligence arises from the connections between primitive computing units rather than the computing units themselves. Interestingly, neurons in the brain form topological structures for processing different types of information. We are exploring the relationship between graph topology and model performance using Graph Neural Networks, and comparing our findings to known phenomena in the brain.
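
As a minimal sketch of how topology enters the computation, the layer below implements one graph convolution in plain PyTorch (Kipf and Welling's normalization, used here as a representative choice); swapping in different adjacency matrices over the same node features illustrates how topology alone changes the output.

    # One graph convolution layer: features are mixed according to the
    # (self-loop-augmented, degree-normalized) adjacency matrix A.
    import torch
    import torch.nn as nn

    class GCNLayer(nn.Module):
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.lin = nn.Linear(in_dim, out_dim)

        def forward(self, X, A):
            A_hat = A + torch.eye(A.size(0))          # add self-loops
            D_inv_sqrt = torch.diag(A_hat.sum(dim=1).pow(-0.5))
            return torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ self.lin(X))

    # A 4-node ring topology; replacing A changes the computation, not the layer.
    A_ring = torch.tensor([[0, 1, 0, 1], [1, 0, 1, 0],
                           [0, 1, 0, 1], [1, 0, 1, 0]], dtype=torch.float)
    X = torch.rand(4, 8)
    print(GCNLayer(8, 4)(X, A_ring).shape)  # torch.Size([4, 4])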

nFlip: Deep Reinforcement Learning in Multiplayer FlipIt¹

Reinforcement learning has shown much success in games such as chess, backgammon, and Go. However, in most of these games agents have full knowledge of the environment at all times. We describe a deep learning model that successfully maximizes its score using reinforcement learning in a game with incomplete and imperfect information. We apply our model to FlipIt, a two-player game in which both players, the attacker and the defender, compete for ownership of a shared resource and receive information on the current state only upon making a move. Our model is a deep neural network combined with Q-learning, trained to maximize the defender's time of ownership of the resource. We extend FlipIt to a game with a larger action space by introducing a new lower-cost move, and we generalize the model to multiplayer FlipIt.

¹ van Dijk, M., Juels, A., Oprea, A., Rivest, R.L. FlipIt: The Game of “Stealthy Takeover”. Journal of Cryptology 26, 655–713 (2013).
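
The sketch below shows the Q-learning update at the heart of such a model, with a small network standing in for the defender's Q-function; the state encoding, action set, and hyperparameters are placeholders rather than the trained nFlip agent.

    # DQN-style update: a network maps a (partially observed) state to
    # Q-values over moves and is regressed toward the Bellman target.
    import torch
    import torch.nn as nn

    n_state, n_actions, gamma = 8, 3, 0.99   # e.g. {wait, flip, lower-cost move}
    q_net = nn.Sequential(nn.Linear(n_state, 64), nn.ReLU(), nn.Linear(64, n_actions))
    opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

    def dqn_step(s, a, r, s_next):
        # Bellman target r + gamma * max_a' Q(s', a'), held fixed for the regression.
        with torch.no_grad():
            target = r + gamma * q_net(s_next).max()
        loss = (q_net(s)[a] - target) ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    s, s_next = torch.rand(n_state), torch.rand(n_state)
    dqn_step(s, a=1, r=0.5, s_next=s_next)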

Referring Expression Problem

The referring expression problem is a more domain-specific form of image captioning whose goal is to describe a sub-region of a given image. The Rational Speech Act (RSA) framework is a probabilistic reasoning approach that generates sentences based on a game-theoretic speaker-listener system. The advantage of RSA is its explainability: it answers the question of why a speaking agent chooses a specific word or phrase over another. Can RSA be applied to the referring expression problem to generate better, more explainable descriptions?
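
A worked toy example of the RSA speaker, with an invented three-word lexicon over three referents: the pragmatic speaker scores each word by how well a literal listener would recover the intended referent, which is exactly the "why this word" explanation RSA offers.

    # Toy RSA computation. lexicon[w, r] = 1 if word w literally applies to
    # referent r; the words and referents are invented for illustration.
    import numpy as np

    words = ["hat", "glasses", "person"]
    lexicon = np.array([[1, 1, 0],     # "hat" fits referents 0 and 1
                        [0, 1, 1],     # "glasses" fits referents 1 and 2
                        [1, 1, 1]],    # "person" fits everyone
                       dtype=float)

    # Literal listener: P_L0(r | w) proportional to lexicon[w, r] (uniform prior).
    L0 = lexicon / lexicon.sum(axis=1, keepdims=True)

    # Pragmatic speaker: P_S1(w | r) proportional to exp(alpha * log P_L0(r | w)).
    alpha = 1.0
    with np.errstate(divide="ignore"):
        S1 = np.exp(alpha * np.log(L0))
    S1 = S1 / S1.sum(axis=0, keepdims=True)

    # For referent 0, "hat" (which rules out referent 2) beats the vaguer "person".
    print(dict(zip(words, S1[:, 0].round(2))))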

Using Game Theory and Reinforcement Learning to Predict the Future

Baseball is a well-known, repeated, finite, adversarial, stochastic game with a massive amount of available data. Reinforcement Learning (RL) models, on the other hand, take significant time and resources to train. By fusing game theory and RL, we are answering interesting questions such as "given a video of a pitch, can we compute the utility of the pitch from its desired location, resulting location, and setting?"
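
As a purely hypothetical sketch of such a utility (every name and weight below is invented for illustration), one could penalize the miss distance between the desired and resulting locations, weighted by the count:

    # Hypothetical pitch-utility signal: distance between desired and resulting
    # location in the strike-zone plane, scaled by the count ("setting").
    import math

    def pitch_utility(desired, resulting, balls, strikes):
        miss = math.dist(desired, resulting)            # feet, in-plane
        pressure = 1.0 + 0.25 * balls - 0.1 * strikes   # misses hurt more when behind
        return -pressure * miss

    print(pitch_utility(desired=(0.0, 2.5), resulting=(0.3, 2.1), balls=3, strikes=0))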

Using Machine Learning for Side Channel Analysis

Side channel analysis uses externally recorded signals from a device (such as electromagnetic radiation or power consumption) to determine what the device is performing. Our current work uses this avenue of data in conjunction with machine learning techniques to accomplish two tasks. The first is anomaly detection, in which a model takes a side channel signal as input and determines whether the device is running software that the model has been trained on or "anomalous" software. The second task is instruction-level tracking, where a model is trained to recognize "jump" instructions within code and to mark where those jumps occur in a recording.
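
For the anomaly detection task, a plausible (but placeholder) model shape is a small 1-D convolutional network mapping a window of trace samples to known-versus-anomalous logits, as sketched below; the window length and architecture are assumptions, not our trained model.

    # Sketch of the anomaly-detection side: a 1-D CNN over a window of
    # side-channel samples, producing two logits {known software, anomalous}.
    import torch
    import torch.nn as nn

    window = 1024                                    # samples per trace segment
    model = nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=15, stride=4), nn.ReLU(),
        nn.Conv1d(16, 32, kernel_size=7, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        nn.Linear(32, 2),                            # known vs. anomalous logits
    )

    trace = torch.randn(8, 1, window)                # batch of EM/power traces
    print(model(trace).shape)                        # torch.Size([8, 2])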