
The foreground/background scheduling model for real-time and cloud systems

In real-time systems, the most basic task model assumes every real-time task (process) is either periodic or can be modeled as a periodic one (e.g., via sporadic servers). A periodic task is generally characterized by three properties: period (T), worst-case execution time (WCET, or C), and deadline (D). In every period T, the task must complete its job before the deadline D. In the worst case, with all sorts of execution interference (e.g., interrupts, synchronization, cache evictions, memory bus contention), finishing that job takes at most C time.
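
As a minimal sketch (not code from the post), the three parameters can be captured in a small C struct, together with a sanity check that a task description is consistent in the common constrained-deadline case, where the job must fit within its deadline and the deadline must not exceed the period:

```c
#include <stdbool.h>
#include <stdio.h>

/* Basic periodic task model: period T, WCET C, relative deadline D.
 * Time units are arbitrary but must be consistent (e.g., milliseconds). */
struct periodic_task {
    double T;   /* period */
    double C;   /* worst-case execution time */
    double D;   /* relative deadline */
};

/* Sanity check for the constrained-deadline case (0 < C <= D <= T). */
static bool task_is_well_formed(const struct periodic_task *t)
{
    return t->C > 0 && t->C <= t->D && t->D <= t->T;
}

int main(void)
{
    /* Example: a task released every 10 ms that needs at most 2 ms of
     * CPU time and must finish within 5 ms of its release. */
    struct periodic_task t = { .T = 10.0, .C = 2.0, .D = 5.0 };
    printf("well-formed: %s, utilization C/T = %.2f\n",
           task_is_well_formed(&t) ? "yes" : "no", t.C / t.T);
    return 0;
}
```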

Read More

Why you should consider using memory latency metric instead of memory bandwidth

On multicore platforms with shared memory, the performance of the memory bus is critical to the overall performance of the system. Heavy contention on the memory bus leads to unpredictable task completion times in real-time systems and throughput fluctuations in server systems. Currently, people monitor memory bus contention by looking at the whole system's memory bandwidth (e.g., bytes/sec). When the observed bandwidth usage exceeds a certain threshold (which is hardware dependent), the system is considered to be under memory bus contention. While the memory bandwidth metric has been widely used, both in academia and industry, I would argue that it is not accurate enough to identify contention.
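
To make the heuristic concrete, here is a hedged sketch (not from the post) of the bandwidth-threshold check: take two readings of a cumulative system-wide byte counter, however your platform exposes it, derive bytes/sec, and flag contention once the rate crosses a hardware-dependent threshold. The 10 GB/s figure below is purely illustrative:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hardware-dependent threshold in bytes/sec; 10 GB/s is only an
 * illustrative number, not a recommendation. */
#define CONTENTION_THRESHOLD_BPS 10e9

/* Bandwidth-threshold heuristic: given two readings of a cumulative
 * memory-traffic byte counter taken interval_sec apart, report
 * contention when the derived bandwidth exceeds the threshold. */
static bool memory_bus_contended(uint64_t bytes_before, uint64_t bytes_after,
                                 double interval_sec)
{
    double bandwidth_bps = (double)(bytes_after - bytes_before) / interval_sec;
    return bandwidth_bps > CONTENTION_THRESHOLD_BPS;
}

int main(void)
{
    /* Example readings: 12 GB transferred over a 1-second window. */
    uint64_t before = 0, after = 12ULL * 1000 * 1000 * 1000;
    printf("contended: %s\n",
           memory_bus_contended(before, after, 1.0) ? "yes" : "no");
    return 0;
}
```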

Read More

Embedded virtualization is NOT server virtualization

Back in the days when virtualization was first invented by IBM, it was intended to solve the problem of single-user operating systems (OSes). By creating virtual machines (VMs), multiple users could log in to their dedicated VMs and perform tasks simultaneously. Today, though, single-user OSes have become history and multi-user OSes are the norm: many people can log in to the same machine without the help of virtualization. Luckily, virtualization technology has found new roles as well.

Read More