I am a research assistant and Ph.D. student at Boston University. I conduct my research in the computer systems group, where I am advised by Prof. Azer Bestavros. I am mainly interested in computer systems, including distributed systems and operating systems. More specifically, my current research focuses on designing and building computer systems for big data.


Publications

(2019) Yawn: A CPU Idle-state Governor for Datacenter Applications
E Sharafzadeh, S Sanaee, E Asyabi, M Sharifi, Proceedings of the 10th ACM SIGOPS Asia-Pacific Workshop on Systems, 2019

(2019) CTS: An operating system CPU scheduler to mitigate tail latency for latency-sensitive multi-threaded applications
E Asyabi, E Sharafzadeh, S Sanaee, M Sharifi, Journal of Parallel and Distributed Computing, 2019

(2018) TerrierTail: Mitigating Tail Latency of Cloud Virtual Machines
E Asyabi, S Sanaee, M Sharifi, A Bestavros, IEEE Transactions on Parallel and Distributed Systems, 2018

(2018) ppXen: A hypervisor CPU scheduler for mitigating performance variability in virtualized clouds
E Asyabi, M Sharifi, A Bestavros, Future Generation Computer Systems, 2018

(2016) Kani: A QoS-Aware Hypervisor Level Scheduler for Cloud Computing Environments
E Asyabi, A Azhdari, M Dehsangi, M Gokan, M Sharifi, S Azhari, Future Generation Computer Systems, 2016

(2015) cCluster: A Core Clustering Mechanism for Workload-Aware Virtual Machine Scheduling
M Dehsangi, E Asyabi, M Sharifi, S Azhari, The 3rd International Conference on Future Internet of Things and Cloud, Rome, Italy, 2015

(2015) A New Approach for Dynamic Virtual Machine Consolidation in Cloud Data Centers
E Asyabi, M Sharifi, International Journal of Modern Education and Computer Science, 2015

Research Projects


Key-Value Stores (Current Project) In-memory Key-Value (KV) stores are non-persistent storage backbones for an ever-growing number of large-scale applications. KV stores, therefore, notably contribute to the cost and energy consumption of data centers. In this project, we demonstrate that existing power-saving mechanisms that leverage dynamic voltage and frequency scaling (DVFS) and idle-state governors cannot keep up with the short service times and high arrival rates of KV stores, yielding negligible power savings. We are currently designing and developing an in-memory event-driven KV store that saves power while offering microsecond-scale tail latency.
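A back-of-the-envelope model illustrates why idle-state governors struggle at KV-store timescales: an idle state only saves power if the CPU stays in it for at least its target residency, and at microsecond service times most idle gaps are too short. The state table and mean idle gap below are illustrative numbers, not measurements from this project.

```python
import math

# Hypothetical per-state target residencies (us) and an assumed mean idle
# gap between KV requests at high load; all numbers are illustrative.
MEAN_IDLE_US = 5.0
TARGET_RESIDENCY_US = {"C1": 2.0, "C6": 600.0}

def profitable_gap_fraction(mean_idle_us, residency_us):
    # P(gap >= residency) for exponentially distributed idle gaps.
    return math.exp(-residency_us / mean_idle_us)

for state, residency in TARGET_RESIDENCY_US.items():
    frac = profitable_gap_fraction(MEAN_IDLE_US, residency)
    print(f"{state}: {frac:.2%} of idle gaps long enough to amortize entry/exit")
```

Under these assumptions a deep state like C6 is essentially never profitable, which is why DVFS and residency-based governors leave little power to save for this workload class.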


IO-Bound Workloads in Clouds Large-scale online services parallelize sub-operations of a user's request across a large number of physical machines (service components) to enhance responsiveness. Even a temporary spike in the latency of any service component can notably inflate the end-to-end delay; therefore, the tail of the latency distribution of service components has become a subject of intensive research. The key characteristics of clouds, such as elasticity and on-demand resource provisioning, have made clouds attractive for hosting large-scale online services wherein VMs are the building blocks of services. However, adherence to traditional hypervisor scheduling policies has led to unpredictable CPU access latencies for virtual CPUs (vCPUs) that are responsible for performing network IO. This has resulted in poor and unpredictable network IO performance, exacerbating VMs' long tail latencies and discouraging the hosting of large-scale parallel web services on virtualized clouds. We have designed several scheduling policies and built their prototypes in the Xen hypervisor. Our prototypes substantially outperform the existing schedulers in the quality of service delivered to IO-bound workloads in virtualized clouds. This research project has led to several publications, including TerrierTail, ppXen, Kani, and cCluster.
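The tail-amplification effect described above follows from a one-line probability model: if a request fans out to many components and waits for all of them, even rare per-component slowdowns dominate end-to-end latency. The slowdown probability and fan-out sizes below are illustrative, and independence between components is assumed.

```python
def p_request_hits_tail(p_component_slow, fanout):
    # Probability that at least one of `fanout` parallel components is slow,
    # assuming independent components; the request waits for the slowest one.
    return 1.0 - (1.0 - p_component_slow) ** fanout

# A component that is slow on only 1% of calls:
for fanout in (1, 10, 100):
    print(f"fan-out {fanout:3d}: {p_request_hits_tail(0.01, fanout):.1%} "
          f"of requests see at least one slow component")
```

At a fan-out of 100, a 1% per-component tail affects the majority of requests, which is why taming component-level tail latency is worthwhile.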


Multithreaded Workloads in Data Centers A large number of latency-sensitive applications hosted on individual servers use a thread-driven concurrency model wherein a thread is spawned for each user connection. Threaded applications rely on the operating system CPU scheduler to determine the order of thread execution. Our experiments show that idiosyncrasies of the default Linux scheduler (CFS) result in LCFS (Last Come First Served) scheduling of threads belonging to the same application. On the other hand, studies have shown that FCFS (First Come First Served) scheduling yields the lowest response time variability and tail latency, making the default Linux scheduler a source of long tail latency for multi-threaded applications. In this project, we design CTS, an operating system CPU scheduler that trims the tail of the latency distribution for latency-sensitive multi-threaded applications while maintaining the key characteristics of the default Linux scheduler (e.g., fairness). CTS promptly tracks threads belonging to an application and schedules them in FCFS order, reducing tail latency. To keep the existing features of the default Linux scheduler intact, CTS leaves CFS responsible for system-wide load balancing and core-level process scheduling; CTS merely schedules the threads of the process chosen by CFS in FCFS order, ensuring tail latency mitigation without sacrificing the default Linux scheduler's properties. Experiments with a prototype implementation of CTS in the Linux kernel demonstrate that CTS significantly outperforms the default Linux scheduler. This project has led to a publication (CTS).
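A toy single-server queue simulation (not the CTS implementation) illustrates the FCFS-vs-LCFS point: with the same arrivals and service times, non-preemptive LCFS ordering lets older tasks starve under load, fattening the tail of the response-time distribution. The arrival rate, service time, and task count below are made-up parameters for the sketch.

```python
import random

def simulate(discipline, n_tasks=20000, seed=1, arrival_rate=0.9, mean_service=1.0):
    # Toy single-server queue with exponential arrivals and service times.
    # discipline: "fcfs" serves the oldest waiting task, "lcfs" the newest.
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    for _ in range(n_tasks):
        t += rng.expovariate(arrival_rate)
        arrivals.append(t)
    queue, sojourn, clock, i = [], [], 0.0, 0
    while len(sojourn) < n_tasks:
        if not queue and arrivals[i] > clock:
            clock = arrivals[i]              # fast-forward to the next arrival
        while i < n_tasks and arrivals[i] <= clock:
            queue.append(arrivals[i]); i += 1
        arrived = queue.pop(0) if discipline == "fcfs" else queue.pop()
        clock += rng.expovariate(1.0 / mean_service)
        sojourn.append(clock - arrived)      # queueing delay + service time
    return sorted(sojourn)

for d in ("fcfs", "lcfs"):
    s = simulate(d)
    print(f"{d}: p99 response time = {s[int(0.99 * len(s))]:.1f} time units")
```

Mean response time is roughly the same under both disciplines; the difference shows up at the 99th percentile, which is exactly the metric tail-sensitive services care about.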


Power Consumption of Data Centers Idle-state governors partially turn off idle CPUs, allowing them to enter states known as idle-states to save power. Exiting these idle-states, however, delays the execution of tasks and aggravates tail latency. Menu, the default idle-state governor of Linux, predicts periods of idleness based on historical data and disk I/O information to choose proper idle-states. Our experiments show that Menu can save power, but at the cost of sacrificing tail latency, making Menu an inappropriate governor for data centers that host latency-sensitive applications. In this project, we design and implement Yawn, an idle-state governor that aims to mitigate tail latency without sacrificing power. Yawn leverages online machine learning techniques to predict idle periods based on information gathered from all parameters affecting idleness, including network I/O, resulting in more accurate predictions, which in turn lead to reduced response times. Benchmarking results demonstrate that Yawn significantly outperforms Menu in terms of tail latency while saving a comparable amount of power. This project has led to a publication (Yawn).
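The trade-off a governor like Yawn navigates can be sketched with two fixed policies: always choosing a shallow state wastes idle power, while always choosing a deep state taxes every wakeup with its exit latency. The C-state power and latency figures below are illustrative values, not real hardware numbers or results from the Yawn paper.

```python
import random

# Hypothetical C-state table: idle power draw (W) and exit latency (us).
STATES = {"C1": (2.0, 2.0), "C6": (0.5, 100.0)}

def always_pick(state, idle_gaps_us):
    # Cost of a governor that chooses `state` for every idle period:
    # energy burned while idle (uJ = W * us) and exit latency per wakeup.
    idle_power, exit_latency = STATES[state]
    energy_uj = sum(g * idle_power for g in idle_gaps_us)
    return energy_uj, exit_latency

rng = random.Random(0)
gaps = [rng.expovariate(1 / 50.0) for _ in range(10_000)]  # mean 50 us idle

for state in STATES:
    energy, latency = always_pick(state, gaps)
    print(f"{state}: idle energy ~{energy / 1e6:.2f} J, "
          f"+{latency:.0f} us added to each request that wakes the CPU")
```

An accurate idle-period predictor lets the governor take the deep state only when the gap is long enough to amortize it, getting close to the energy of the deep policy at close to the latency of the shallow one.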


Teaching

Teaching Fellow for Fundamentals of Computing Systems (Boston University - Spring 2019, Fall 2019)

Teaching Fellow for Advanced Software Systems (Boston University - Fall 2017)





111 Cummington Mall #140D, Boston, MA 02215 - Office number: 117F