Configuration Is Key to Success with Embedded Virtualization
By Chris Ault, Wind River
Delivering new functionality for graphics or networking
In the “old days,” configuring an embedded system was simple. A processor had a single central-processing-unit (CPU) core executing an operating system (OS). Depending on the product’s needs, a general-purpose OS or a real-time operating system (RTOS) would be chosen. If both were needed, the design would require two processors. Today’s powerful single-core and multicore processors, however, can be set up in many different configurations.
Figure 1: Shown are several CPU-core configuration options
A multicore processor can be managed by a single symmetric-multiprocessing (SMP) operating system, which manages all of the cores (see Figure 1). Alternatively, each core can be given to a separate OS in an asymmetric-multiprocessing (AMP) configuration. SMP and AMP each have their challenges and advantages. For example, SMP doesn’t always scale well, depending on the workload. For its part, AMP can be difficult to configure with regard to which OS gets access to which device. Operating systems assume that they have full control over the hardware devices that they detect, which often creates conflicts in the AMP case.
A technology that facilitates the configuration of these multicore processors is embedded virtualization. Embedded virtualization introduces a thin, real-time virtual-machine monitor (or hypervisor) directly on top of the hardware. This hypervisor then creates virtual boards (partitions) that contain the operating systems. As a result, system designers can utilize a wide variety of configurations—including mixes of AMP, SMP, and core virtualization—to build their next-generation embedded systems. The hypervisor manages the hardware and partitions within which OSs execute. Each partition is given access to resources (processing cores, memory, and devices), can host an operating system (guest OS), and is protected from the other partitions. Simply put, an embedded hypervisor provides technology to partition or virtualize processing cores, memory, and devices between the multiple OSs that are used to build a system.
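To make the idea of a partitioned system concrete, the sketch below models how a hypervisor’s boot-time partition table might look. It is a minimal, hypothetical C model: the `virtual_board` type, partition names, core masks, and addresses are all invented for illustration and do not reflect any vendor’s actual API. It includes a check that two partitions are truly isolated from each other.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of a hypervisor's boot-time partition table.
 * All field names, core masks, and addresses are illustrative. */
typedef struct {
    const char *name;      /* partition label                   */
    const char *guest_os;  /* guest OS hosted in the partition  */
    uint32_t    core_mask; /* physical cores assigned (bitmask) */
    uintptr_t   ram_base;  /* start of the partition's RAM      */
    size_t      ram_size;  /* bytes of private RAM              */
} virtual_board;

static const virtual_board boards[] = {
    { "control", "rtos",  0x1u, 0x10000000u,  64u * 1024 * 1024 },
    { "ui",      "linux", 0xEu, 0x20000000u, 448u * 1024 * 1024 },
};

/* Returns 1 if two virtual boards share no cores and no RAM. */
static int boards_isolated(const virtual_board *a, const virtual_board *b) {
    int cores_disjoint = (a->core_mask & b->core_mask) == 0;
    int ram_disjoint   = (a->ram_base + a->ram_size <= b->ram_base) ||
                         (b->ram_base + b->ram_size <= a->ram_base);
    return cores_disjoint && ram_disjoint;
}
```

On a four-core part, this example would give the RTOS core 0 and the general-purpose OS cores 1–3, with non-overlapping RAM windows, mirroring the AMP-style split described above.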
In the information-technology (IT) industry, virtualization is well-known, well-understood, and well-embraced. In embedded devices, however, virtualization is nascent. It shares some of the benefits offered to the IT industry while providing other advantages that are unique to the embedded industry.
For example, virtualization in the IT industry focuses on hardware abstraction to virtualize access to all devices on the host server. In doing so, it maximizes guest-OS consolidation and provides homogeneous host environments. The resulting compute platforms appear identical to all guest OSs, regardless of the physical host and its hardware. Virtualization in the embedded industry, however, focuses on a different set of benefits. The OSs in an embedded product need to collaborate to deliver complete device functionality. Each OS uses its own set of hardware devices, memory, and processing cores. In addition, it needs to communicate with the other OSs in the device. At the same time, the OS usually has to operate within tight memory limitations and adhere to strict timing requirements. Sometimes, it also needs to be certified to certain safety standards.
Embedded virtualization can be adopted to utilize a mixture of different OSs to build an embedded device (see Figure 2). For example, it may make more sense to manage and control sensors and actuators with an RTOS, while the graphics and networking aspects of the product may be better served by a general-purpose OS that offers richer graphics support and connectivity.
Figure 2: Specific operating systems are suitable for specific purposes
Challenges like the following can occur when a complete solution has one subset of hardware devices controlled by one OS and another controlled by a different OS:
- Configuring and directing device access among multiple OSs
- Partitioning or virtualizing a subset of devices among multiple OSs
- Maintaining deterministic timing behavior when moving to a virtualized environment
- Migrating existing applications to new multicore CPUs without making widespread software changes
Figure 3: Shown is an example of an embedded hypervisor configuration
With an embedded hypervisor that can be configured to present system-level definitions and hardware-device mappings to the virtual boards and guest OSs, the developer gains a mechanism to describe all device configurations in one location (see Figure 3). Such device assignments include partitioning physical-memory ranges and local physical hardware devices, assigning interrupts, and allocating CPU cores to guest OSs. Contrast this with virtualization in the IT industry, where most hardware devices are virtualized and visible to all guest OSs for maximal virtualization. Embedded virtualization puts the system developer in control; only the developer can ensure that the system is partitioned in such a way that the final system behaves in the desired fashion.
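As a sketch of what “all device configurations in one location” might look like, the hypothetical table below assigns each physical device to exactly one virtual board. The device names, register addresses, interrupt numbers, and owner labels are invented for illustration, not taken from any real product.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical system-level device map: one row per physical device,
 * each owned by exactly one virtual board. All values are illustrative. */
typedef struct {
    const char *device;    /* device name            */
    uintptr_t   mmio_base; /* register-block address */
    int         irq;       /* interrupt line         */
    const char *owner;     /* owning virtual board   */
} device_map;

static const device_map devmap[] = {
    { "uart0", 0x4000A000u, 21, "control" }, /* RTOS owns the serial port      */
    { "can0",  0x4000B000u, 35, "control" }, /* RTOS owns the CAN bus          */
    { "eth0",  0x40010000u, 48, "ui"      }, /* general-purpose OS: networking */
    { "gpu0",  0x40020000u, 52, "ui"      }, /* general-purpose OS: graphics   */
};

/* Returns the owning virtual board for an IRQ, or NULL if unassigned. */
static const char *irq_owner(int irq) {
    for (size_t i = 0; i < sizeof devmap / sizeof devmap[0]; i++)
        if (devmap[i].irq == irq)
            return devmap[i].owner;
    return NULL;
}
```

Because every assignment lives in one table, a conflict such as two OSs claiming the same interrupt can be caught at configuration time rather than surfacing as an AMP runtime clash.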
Figure 4: This hypervisor configuration shows the specific configuration of devices
The system-level configuration used by an embedded hypervisor partitions the system into multiple virtual boards (see Figure 4). Each virtual board executes a real-time or general-purpose guest OS and is managed by the hypervisor. Based on the configuration presented to it at boot time, the hypervisor controls the cores on which the virtual board executes, the memory range it occupies, and the devices that can be accessed by the guest OS. Memory, PCI attributes, and interrupts can be directly mapped into individual guests. The hypervisor isn’t involved in the datapath to or from the devices, so the resulting performance is equal to native, non-virtualized performance.
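Because the hypervisor stays out of the datapath, a guest drives its directly mapped devices with ordinary memory-mapped I/O. The fragment below sketches that idea; the UART name and register layout are invented, and for illustration the “register” is backed by an ordinary byte so the sketch runs on a host. On real hardware, the pointer would be set to the MMIO address granted to this guest by the hypervisor’s configuration.

```c
#include <stdint.h>

/* Stand-in for a device register so the sketch runs on a host.
 * On target, uart_tx would point at the UART's real MMIO address,
 * mapped directly into this guest by the hypervisor's configuration. */
static uint8_t fake_tx_reg;
static volatile uint8_t *uart_tx = &fake_tx_reg;

/* Transmit one byte: a plain register write, with no hypervisor
 * call or trap in the datapath. */
static void uart_putc(char c) {
    *uart_tx = (uint8_t)c;
}
```

The key point is that the write is a single store instruction; nothing in the path gives the hypervisor an opportunity to add latency or jitter.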
Figure 5: Achieve innovation by migrating to multicore and extending functionality
With the ability to explicitly describe the devices detected by the OSs in the virtual boards, a system developer can port legacy applications—hosted by older OSs running on single-core CPUs—onto new multicore CPUs and hardware (see Figure 5). This can be done by hosting the legacy application in a virtual board, which is presented with the same devices (and address ranges, interrupts, etc.) as detected on the old hardware. An embedded hypervisor that supports unmodified guest OSs facilitates this task without requiring software changes to the trusted legacy application stack. Device vendors are able to port their software assets—intact—to new hardware. They can therefore leverage multicore CPUs that offer increased performance/power ratios.
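One way a hypervisor can present the old hardware’s addresses to an unmodified legacy guest is a small guest-physical-to-host-physical translation table. The sketch below is hypothetical: the address ranges are invented, and a real hypervisor would program the MMU or IOMMU with such mappings rather than translate each access in software.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical map preserving the addresses a legacy OS expects.
 * RAM is identity-mapped; the old board's UART address is redirected
 * to the new SoC's UART. All values are illustrative. */
typedef struct {
    uintptr_t guest_pa; /* address the legacy guest expects     */
    uintptr_t host_pa;  /* corresponding address on the new SoC */
    size_t    len;      /* length of the mapped range in bytes  */
} mem_map;

static const mem_map legacy_map[] = {
    { 0x80000000u, 0x80000000u, 0x04000000u }, /* RAM: identity-mapped    */
    { 0xFFF00000u, 0x4000A000u, 0x00001000u }, /* old UART address, moved */
};

/* Translate a guest-physical address; returns 0 if unmapped. */
static uintptr_t translate(uintptr_t gpa) {
    for (size_t i = 0; i < sizeof legacy_map / sizeof legacy_map[0]; i++) {
        const mem_map *m = &legacy_map[i];
        if (gpa >= m->guest_pa && gpa < m->guest_pa + m->len)
            return m->host_pa + (gpa - m->guest_pa);
    }
    return 0;
}
```

The legacy guest keeps reading and writing the addresses it has always used; only the configuration table changes when the application moves to new silicon.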
In this way, migration can be leveraged to add new functionality to an embedded product while isolating the existing application from the new feature extensions. Migrating a legacy application from a single-core CPU to one of the cores on a multicore CPU frees extra compute power, which can be used for this new functionality. For example, new features could exploit the enhanced graphics or networking capabilities of a general-purpose OS, such as Linux or Windows.
In summary, multicore CPUs and embedded virtualization offer many opportunities in embedded products, such as power savings and improved compute performance. With these opportunities come challenges in partitioning and assigning the hardware devices of the embedded board to the various CPU cores and virtualized OSs. An embedded hypervisor makes it possible to maintain complete control of the embedded system’s hardware devices: it manages and partitions those devices based on the configuration presented to it at boot time. By enforcing this partitioning through configuration, the hypervisor preserves the real-time requirements of the embedded system while the benefits of embedded virtualization and multicore CPUs are realized.
With the ability to explicitly describe the devices detected by an OS within a virtual board, legacy applications can be easily migrated from old to new hardware, thereby offering increased performance/power ratios. With increased CPU performance, new features can be delivered by the development teams. With the isolation enforced by the hypervisor and virtual boards, new functionality can be delivered using general-purpose OSs with improved libraries for graphics or networking.
Chris Ault is a senior product manager with Wind River focusing on virtualization solutions. Prior to joining Wind River, Ault worked in various roles ranging from software engineering to product management at Mitel, Nortel, Ciena, AppZero, and Liquid Computing. His focus was on application and server virtualization products, technologies, and sales. Ault holds electronics, computer science, and economics degrees from Carleton University and Algonquin College. He resides in Ottawa, Canada.