Virtualize Real-Time Operating Systems over Streaming Networks

Software-streaming platforms are growing beyond the PC space to bring advantages to embedded developers.

By Patrick Farenga

Recent breakthroughs in Ethernet and broadband technologies are changing the speed, architecture, and purpose of network communications. For system managers, the headaches are inextricably linked to device management and software distribution on conventional network architectures, which require extensive background monitoring and maintenance while target devices execute. In addition, licensing and security management can be complex, and peripheral operations often require special drivers and attention. The fragile nature of conventional networks--for example, the way the network slows as clients share the server's memory and processing time--has forced many manufacturers to adopt decentralized system management and costly proprietary hardware in order to maintain control and performance during manufacturing.

Software-streaming technology eliminates these common headaches for system managers. At the same time, it benefits original equipment manufacturers (OEMs) and other companies with x86-based embedded platforms. They gain tremendous opportunities to save time and money by avoiding proprietary implementations for their embedded systems.

Software streaming is especially cost effective because it fits easily into most existing network architectures and integrates with off-the-shelf servers, networks, storage, and applications. Software streaming builds on the centralized-management attributes of thin clients. But it represents a major technical advance by delivering the speed and functionality of a full PC to each client, including real-time performance. In addition, it can continue to execute even if the network goes down (see the Figure). In short, software streaming is managed centrally, but the applications process locally.

To enable client PCs to run without hard drives, software streaming creates a virtual disk on the server that sits on the network. The entire operating system and all applications can be accessed (with permission) by the clients. Each client takes only the parts it needs, loads them into RAM, and begins to execute. The client machines can then function with full utilization of their own processors, memory, and peripherals. The network provides software delivery to each client without requiring special installation packages or software-distribution technologies.
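The on-demand delivery just described can be modeled in a few lines of code. The sketch below is illustrative only; the class and method names are hypothetical and do not reflect Ardence's actual implementation. The key idea is that a client fetches individual disk blocks only when it first touches them, caching them in local RAM.

```python
# Illustrative model of on-demand virtual-disk streaming. All names here
# are hypothetical; they do not reflect any actual streaming product's API.

BLOCK_SIZE = 4096  # bytes per streamed disk block

class VirtualDiskServer:
    """Holds the master disk image and serves individual blocks on request."""
    def __init__(self, image: bytes):
        self.image = image

    def read_block(self, n: int) -> bytes:
        start = n * BLOCK_SIZE
        return self.image[start:start + BLOCK_SIZE]

class StreamingClient:
    """Fetches only the blocks it actually touches, caching them in RAM."""
    def __init__(self, server: VirtualDiskServer):
        self.server = server
        self.cache = {}  # block number -> block bytes held in local RAM

    def read(self, offset: int, length: int) -> bytes:
        first = offset // BLOCK_SIZE
        last = (offset + length - 1) // BLOCK_SIZE
        data = bytearray()
        for n in range(first, last + 1):
            if n not in self.cache:          # stream the block on first use
                self.cache[n] = self.server.read_block(n)
            data += self.cache[n]
        start = offset % BLOCK_SIZE          # offset within the first block
        return bytes(data[start:start + length])
```

A client that reads only a few megabytes of a multi-gigabyte image never transfers the rest, which is why devices can begin executing almost immediately after boot.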

Software streaming also can eliminate the burdens of hands-on visits to distributed devices, as it delivers the operating system (OS) and necessary applications to each device. Rather than downloading the entire OS and applications to every device, the software-streaming approach sends just enough of the software so that each device can begin to execute. As the system requires more resources, those elements are delivered as needed. Client devices therefore continue to execute even if the network goes down--with no hard drives!

Neither Thin Client nor RAM Disk
The thin-client model doesn’t require a hard drive on the client computers either. But it does require the server to run several instances of an application within a single operating system. This demand can be problematic for companies that use consumer off-the-shelf applications that aren’t coded to run in a server-based environment. In a thin-client model, multimedia and local devices in particular can suffer from choppy and unreliable performance. After all, client devices share memory and processing time on the server. Note that if one client’s application generates a shutdown error, that error can cause problems for all of the other machines running the same application on that server.

In thin-client networks, each peripheral device needs to be installed on the server. In addition, each requires a server driver. All of the drivers for each user then need to be installed on a single server. Such a solution can become very costly if the network needs to be scaled out, as there are so many variables and resources required for each application on the server. While thin client may have its place for enterprise situations, it isn’t suitable for OEMs using embedded systems that require performance, flexibility, and guaranteed reliability.

With software streaming, on the other hand, application-compatibility issues disappear because each client accesses a master image of the operating system and application (the virtual disk or “V-disk”). Plus, each client runs all applications without the need to validate or modify any code. If an application has some code that prevents it from working in a server-based environment, software streaming can provide a Windows 2000 or Windows XP operating system for it. The application can then run as it was intended. Streaming won’t reduce or eliminate the need to license software. Yet it does allow companies to license each desktop using traditional desktop-licensing standards.

In thin-client architectures, all of the network servers are upgraded one at a time. With software streaming, an upgrade is done simply by creating a second version of the virtual disk while the client devices continue to execute. In the background, the upgrade is applied to the virtual disk once--just as if an installation were running in Windows. There are no special modes to enter or special installation scripts to generate.

After testing the installation, the new version of the virtual disk is distributed--again, without touching the client devices. Once the image is in place, the client devices automatically upgrade upon a reboot. When a client device restarts, it is directed to the new version of the virtual disk and runs the latest versions of the OS and its applications. Hundreds of client devices--each with different functions--can be upgraded with a single installation program. If a problem is found in the installation, clients can be painlessly rolled back to the original virtual disk. Just by rebooting, the clients will again be running stable software.
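This upgrade-and-rollback workflow amounts to maintaining a version pointer per virtual disk: publish a new version, promote it, and let clients pick it up on reboot. The following is a hedged sketch under assumed names, not a real administrative API.

```python
# Illustrative model of virtual-disk versioning with promote/roll-back.
# The class and method names are invented for this sketch.

class VDiskCatalog:
    def __init__(self):
        self.versions = {}  # disk name -> list of published image IDs
        self.active = {}    # disk name -> index of the version clients boot

    def publish(self, name, image_id):
        """Add a new image version without disturbing running clients."""
        self.versions.setdefault(name, []).append(image_id)

    def promote(self, name):
        """Point clients at the newest version; takes effect on next reboot."""
        self.active[name] = len(self.versions[name]) - 1

    def roll_back(self, name):
        """Return clients to the previous known-good version."""
        if self.active.get(name, 0) > 0:
            self.active[name] -= 1

    def boot_image(self, name):
        """The image a client receives when it reboots."""
        return self.versions[name][self.active.get(name, 0)]
```

Because the pointer is resolved only at boot time, running clients are never interrupted, and rolling back is as cheap as promoting.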

Another approach, called RAM disk, also allows for diskless delivery of the OS and applications over a network. But it comes at the price of continual and costly hardware upgrades at the client device, effectively decentralizing management and upgrades. With RAM disk, the entire OS and application image must be downloaded to RAM upon every boot sequence.

Often, significant hardware upgrades are required to accommodate new applications or operating systems. Software streaming requires Gigabit Ethernet at the server in order to scale. But it uses the client's existing RAM and hardware to their maximum capabilities. This software-only solution works with existing hardware and networks. It also extends the life of PC-based clients, because operating-system changes affect the client PCs far less.

Virtualizing the Operating System
A comprehensive software-streaming platform, such as Ardence's Device Edition, can create a central repository of “golden” operating-system and application images. Those images contain the configuration required to support various business or manufacturing functions, such as a “robot-arm-control OS/application image” or a “blood-gas-analysis OS/application image.” When a device is powered up and its boot sequence is initiated, the platform streams the required image to it from a central pool of images or a single master image. That image is pre-assigned using an administrative dashboard.

Each device then starts running immediately in its assigned configuration because there's no need to download the entire OS to each client or device. A given device could take on a personality, such as a self-service information kiosk in a retail store. Upon reboot, it could be assigned a different configuration image--say, point-of-sale checkout--and be used during peak periods. Similar re-purposing can be easily accomplished in other embedded environments, such as manufacturing, simulation, or medical devices--just to name a few.
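The assignment-and-repurposing model above reduces to a table mapping each device to a golden image, resolved at boot time. Here is a minimal sketch; the image names, device identifiers, and functions below are invented for illustration, not drawn from an actual administrative dashboard.

```python
# Hypothetical per-device image assignment. Changing a device's assignment
# repurposes it on its next reboot; nothing is installed on the device.

GOLDEN_IMAGES = {
    "kiosk":     "info-kiosk-os-app.vdisk",
    "pos":       "point-of-sale-os-app.vdisk",
    "robot-arm": "robot-arm-control-os-app.vdisk",
}

assignments = {}  # device MAC address -> golden-image key

def assign(mac, role):
    """Administrative action: bind a device to a golden image."""
    assignments[mac] = role

def image_for_boot(mac):
    """Resolved when the device network-boots."""
    return GOLDEN_IMAGES[assignments[mac]]
```

Re-purposing a kiosk as a point-of-sale terminal is then a one-line change to the assignment table followed by a reboot of the device.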

Another advantage of a comprehensive software-streaming platform is its use in harsh environments, where shock, vibration, and airborne particulates are abundant and machinery otherwise requires specialized hardware to protect the hard drive. Software streaming increases reliability and manageability by eliminating the need for local hard-disk drives, which are the most common point of system failure. At the same time, it utilizes the processing power of the local devices to maintain total system performance--inclusive of real-time applications and regardless of network size.

The dynamic abilities of software streaming allow incredible flexibility for repurposing devices. When coupled with real-time extensions for Windows like Ardence RTX, the automated processes and high-speed networks of major manufacturers become accessible to small and medium-sized businesses and researchers. For instance, a lab can use RTX for real-time data acquisition and multi-modal stimulus control for neurophysiological experiments on vision. Windows XP and a consumer-model desktop computer can be used to control everything. With a suitable software-streaming platform, they also can add a communication link between the PC running data acquisition and stimulus software and the video PCs that are using modern high-performance video cards. Plus, multi-core PCs can be added that will enable the researchers to perform “on-the-fly” analysis of data as it is acquired.

A comprehensive software-streaming platform is a software-only solution. It fits easily into existing hardware infrastructures, is scalable, and is inexpensive to integrate. OEMs and embedded-systems managers can therefore protect their existing investments while relying on commercial-off-the-shelf (COTS) solutions to expand their operations. To ramp up production by adding new devices to a production line, one can perform a simple “plug-and-play” operation with software streaming: new devices plug into the network and start streaming immediately. If the server goes down, target devices continue to process applications locally. And because the client devices control the behavior of their peripherals directly, there's no reliance on the network for peripheral control.

Patrick Farenga published books and a bi-monthly magazine for 16 years as Publisher for Holt Associates Inc. He is now a technical writer and editor with 25 years of experience in marketing and consulting. Farenga earned a BA from Boston College in English and an MA from the University of Wisconsin-Madison in English. He can be reached at pfarenga@comcast.net.