Real-time Systems: Full Importance of Latency, Scheduling and Memory.

Real-time systems are programs whose correctness depends not only on the logical results of the computation but also on the time at which those results are produced. In this section we will go through, sub-section by sub-section, the primary concepts that define a real-time system, what impact they have, and how they affect a hard real-time system, where meeting the time constraints is a critical criterion.

Latency and real-time

It is a common misconception that computers generate results instantly. Any regular gamer or heavy web user will tell you otherwise: even if the delay is barely noticeable to humans, computers take time to complete tasks, and the more tasks a computer is running, the more likely its processes are to slow down. When we say a computer is running a system in real time, we mean it is doing so with very little delay; a computer never produces results truly immediately. The time between a process starting and its output appearing is called latency. For the delay to feel non-existent, the latency has to be kept low, and that is what creates "real-time systems".

For most tasks low latency is desirable, but consistent latency is usually more important, and this is exactly how a hard real-time system behaves. You are better off with a consistent latency than an unpredictable one that can ultimately cause the system to fail. Humans typically cannot detect the difference between a delay of 8 ms and one of 16 ms, so it is better to run at the larger value if that keeps your latency consistent.
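To make the idea of consistent latency concrete, here is a minimal sketch in C (assuming a POSIX system with clock_gettime and nanosleep; the 10 ms period and iteration count are arbitrary) that measures how far a nominally periodic loop drifts from its period, i.e. its jitter:

```c
#include <stdio.h>
#include <time.h>

/* Measure the jitter of a loop that is meant to run every 10 ms.
 * A hard real-time system cares less about the average delay than
 * about how far individual iterations stray from the nominal period. */
int main(void)
{
    const long period_ns = 10L * 1000 * 1000;   /* nominal 10 ms period */
    struct timespec prev, now;
    long worst_jitter_ns = 0;

    clock_gettime(CLOCK_MONOTONIC, &prev);
    for (int i = 0; i < 100; i++) {
        struct timespec sleep_for = { 0, period_ns };
        nanosleep(&sleep_for, NULL);

        clock_gettime(CLOCK_MONOTONIC, &now);
        long elapsed_ns = (now.tv_sec - prev.tv_sec) * 1000000000L
                        + (now.tv_nsec - prev.tv_nsec);
        long jitter_ns  = elapsed_ns - period_ns;   /* deviation from nominal */
        if (jitter_ns < 0) jitter_ns = -jitter_ns;
        if (jitter_ns > worst_jitter_ns) worst_jitter_ns = jitter_ns;
        prev = now;
    }

    printf("worst-case jitter: %ld us\n", worst_jitter_ns / 1000);
    return 0;
}
```

On a desktop operating system the worst-case jitter of such a loop can vary wildly with load; a hard real-time system is designed so that this worst case stays within a known bound.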

Task Scheduling

In essence, real-time task scheduling simply means determining the order in which the various tasks are taken up for execution by the operating system. Every operating system needs something like a task scheduler to decide the order in which its processes run.

Task schedulers differ according to the algorithm they use, and a large number of algorithms have been developed for real-time systems; real-time task scheduling on uniprocessors has been studied since the early 1970s.

There are quite a few classification schemes for real-time task scheduling algorithms. A popular scheme classifies them by how the scheduling points are defined. According to this scheme, the three main types of schedulers are the following (a small sketch of the first kind follows the list):

  • Clock-driven
  • Event-driven
  • Hybrid
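As a rough illustration of the clock-driven category, the sketch below implements a tiny cyclic executive in C: scheduling decisions are made only at fixed frame boundaries, following a table worked out ahead of time. The task names, frame length and table contents are invented for the example.

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical periodic tasks, for illustration only. */
static void read_sensor(void) { puts("read sensor"); }
static void run_control(void) { puts("run control loop"); }
static void log_status(void)  { puts("log status"); }

typedef void (*task_fn)(void);

/* Clock-driven (cyclic executive) sketch: each frame is 100 ms long and
 * the whole schedule repeats every 4 frames. The table is fixed offline,
 * so at run time the scheduler only dispatches and waits. */
static task_fn schedule_table[4] = {
    read_sensor,   /* frame 0 */
    run_control,   /* frame 1 */
    read_sensor,   /* frame 2 */
    log_status,    /* frame 3 */
};

int main(void)
{
    const struct timespec frame = { 0, 100L * 1000 * 1000 };  /* 100 ms frame */

    for (unsigned cycle = 0; cycle < 3; cycle++) {     /* run 3 major cycles */
        for (unsigned f = 0; f < 4; f++) {
            schedule_table[f]();        /* dispatch the task bound to this frame */
            nanosleep(&frame, NULL);    /* wait out the rest of the frame */
        }
    }
    return 0;
}
```

An event-driven scheduler, by contrast, makes a scheduling decision whenever a task arrives or completes, and a hybrid scheduler uses both kinds of scheduling points.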

For example, if you open your computer's task manager you will see many processes running or waiting to run. Some of these are waiting for the processor to become free, while others wait on the outcome of processes that are already running. This is largely why new computers come with multiple "cores": dual-, quad- and octa-core processors can run that many processes at the same time. Even so, processes are still left in the queue, and this is where the task scheduler comes in, deciding which tasks run on which core according to whichever algorithm fits the system.

An Example in Real-time Systems

In Fedora Linux, as in any Linux system, the core of the operating system decides which process runs and which does not; that core is known as the kernel, and the part of it that makes this decision for the processor is called the process scheduler. The kernel also manages memory access, hard drive access, and so on. Again, the algorithm a scheduler should use depends on user needs as well as system requirements. With a hard drive, for instance, the disk scheduler considers the physical location of data on the disk before deciding which request is served in what order.
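As a concrete example of interacting with the Linux kernel's scheduler, the sketch below uses the POSIX sched_setscheduler call to request the SCHED_FIFO real-time policy for the calling process. The priority value 50 is arbitrary, and on most systems the call only succeeds with root privileges or the CAP_SYS_NICE capability.

```c
#include <sched.h>
#include <stdio.h>

/* Ask the kernel scheduler to treat this process as a fixed-priority
 * real-time task (SCHED_FIFO). The priority 50 is chosen arbitrarily;
 * valid values lie between sched_get_priority_min(SCHED_FIFO) and
 * sched_get_priority_max(SCHED_FIFO). */
int main(void)
{
    struct sched_param sp = { .sched_priority = 50 };

    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {  /* 0 = this process */
        perror("sched_setscheduler");   /* typically requires root */
        return 1;
    }

    puts("now running under SCHED_FIFO");
    /* ... time-critical work would go here ... */
    return 0;
}
```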

Memory management

Memory management is one of the most critical aspects of any operating system, and it is even more vital in a real-time system than in a standard one. First, in a real-time system the speed at which memory is allocated is critically important. A typical memory allocation scheme scans what is essentially a linked list of indeterminate length to find a free memory block; in a real-time system, this process must instead complete in a fixed, bounded time.

Second, memory runs the risk of becoming fragmented as free regions open up and are separated by regions that are still in use; a program can then stall for lack of a large enough free block even though, in total, enough memory is theoretically available. Memory allocation algorithms that slowly accumulate fragmentation may work perfectly well for desktop machines that are rebooted every day or so, but they are unacceptable for embedded systems that often run for months without rebooting.
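A common way to get both constant-time allocation and freedom from fragmentation is a pool of fixed-size blocks. The sketch below is a deliberately simplified, hypothetical pool allocator in C (block size and count are arbitrary), not the allocator of any particular real-time operating system.

```c
#include <stddef.h>
#include <stdio.h>

/* Fixed-size block pool: every allocation hands out a block of the same
 * size from a free list, so allocation and release are both O(1) and the
 * pool cannot fragment. Sizes below are arbitrary for the example. */
#define BLOCK_SIZE  64
#define BLOCK_COUNT 32

typedef union block {
    union block  *next;              /* link while the block is free */
    unsigned char data[BLOCK_SIZE];  /* payload while it is in use   */
} block_t;

static block_t  pool[BLOCK_COUNT];
static block_t *free_list;

static void pool_init(void)
{
    for (size_t i = 0; i + 1 < BLOCK_COUNT; i++)
        pool[i].next = &pool[i + 1];
    pool[BLOCK_COUNT - 1].next = NULL;
    free_list = &pool[0];
}

static void *pool_alloc(void)   /* O(1): pop the head of the free list */
{
    block_t *b = free_list;
    if (b != NULL)
        free_list = b->next;
    return b;                   /* NULL when the pool is exhausted */
}

static void pool_free(void *p)  /* O(1): push the block back onto the list */
{
    block_t *b = p;
    b->next = free_list;
    free_list = b;
}

int main(void)
{
    pool_init();
    void *a = pool_alloc();
    void *b = pool_alloc();
    printf("allocated %p and %p\n", a, b);
    pool_free(b);
    pool_free(a);
    return 0;
}
```

Because every block is the same size, the allocator's worst-case time is known in advance, which is exactly the guarantee a hard real-time system needs.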

For more related content, visit DataFifty.
