Headlines

Power and Performance in Architectural Migration

By John Blyler, Editorial Director

It is no secret that today’s market favors electronic products that use less power while providing ever greater feature sets at higher levels of performance. These conflicting requirements have caused many embedded hardware and software developers to consider competing processor architectures for their next design iteration. Architecture migrations are tricky, since they involve taking software designed to run on one hardware platform and porting it to execute on an entirely different system.

Migrating software to run on a new processing platform can be risky and time-consuming. Many factors must be considered, such as the choice of an operating system. Common concerns center on the best way to optimize both power and performance during the migration. Other issues involve the tools available for debugging in the new environment.

These questions and many others are addressed in a new book by Lori M. Matassa and Max Domeika, published by Intel Press and titled “Break Away with Intel® Atom™ Processors: A Guide to Architecture Migration.” The book naturally focuses on migration strategies from competing platforms to embedded Intel Atom processors. Still, an interview with one of the authors reveals a variety of useful development tips that apply to architectural migration in general.

EIS: What motivated you to write a book on migration strategies?
Domeika: Lori and I saw a need to help embedded software developers and engineers migrate to the Intel Atom processor. Over the last several years, our customers have asked many questions about what they need to know to make such a migration successful. Lori and I wanted to document all of those questions and answers in one place to benefit a broad spectrum of people, from managers considering a migration to the engineers who have to do the work.

EIS: Which competing processor platforms are covered in the book? Also, will the migration be to a single-core or the new dual-core version of the Intel Atom processor?
Domeika: We primarily cover migration strategies from the two big architectures, PowerPC and ARM. Many customers have experience on those embedded architectures but now want to explore a move to the Intel Atom processor. Some developers want to know the low-level architecture details, such as the special features of x86 assembly language. Other details we cover include the pros and cons of an in-order processor like the Intel Atom processor versus the out-of-order designs of our other, bigger processors.

Many questions center on porting existing software to a new platform. One common issue we see from customers is byte order: how do you migrate from a big-endian architecture like PowerPC to a little-endian one like x86?

The multicore question is a challenging one. Once the software is migrated to the Intel Atom processor, it is easier to take advantage of new multicore platforms. In general, embedded developers are still learning the advantages of multicore systems. One of the challenges is that no single roadmap exists for customers in the multicore space. We still have customers who are struggling with the same multicore issues that we were talking about two to three years ago.
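
As a concrete illustration of the byte-order issue Domeika mentions (a generic C sketch, not an example taken from the book), the fragment below assembles a 32-bit value stored in big-endian order, as a PowerPC system would write it, so that the result is correct on a little-endian x86 target as well:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Assemble a 32-bit value stored in big-endian byte order (the order a
 * PowerPC system would write it in) into host byte order. Building the
 * value from individual bytes avoids any assumption about the host's
 * endianness. */
static uint32_t read_be32(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

int main(void)
{
    /* Four bytes as a big-endian system would store the value 0x12345678. */
    const uint8_t buf[4] = { 0x12, 0x34, 0x56, 0x78 };
    uint32_t naive, portable;

    /* Copying the raw bytes takes on the host's byte order; on a
     * little-endian x86 this yields 0x78563412, not the intended value. */
    memcpy(&naive, buf, sizeof naive);
    portable = read_be32(buf);

    printf("naive: 0x%08" PRIx32 "  portable: 0x%08" PRIx32 "\n",
           naive, portable);
    return 0;
}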

EIS: The cover of the book has pictures of tablets, nettops and smartphones. Do these different development targets have different migration strategies? Or are the differences minimal, confined to hardware-specific issues like display screen resolution and memory?
Domeika: Every migration has both common and unique aspects, which made writing the book a difficult task. You don’t want to be so specific that the material applies to only one person, but you also don’t want to be so general that it applies to nobody. Lori and I have done our best to generalize and discuss the key topic areas. As I mentioned, some folks want to know about the low-level details. But many don’t need them. These developers don’t need to know the details of assembly language or the Intel Atom processor architecture, especially if they are application developers coding in a higher-level language like C++.

Operating system (OS) issues are common to most customers. Some use commercial off-the-shelf OSes that make certain tasks easier but other tasks harder. Other folks are bound by a proprietary OS that they need to port. Proprietary OSes bring in other system-level and assembly-language issues, particularly around device drivers. So it really depends. We try to be general enough to suit the needs of many folks, but provide enough detail that it is of some value.
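
Device drivers are often where these porting details surface, so one common tactic is to keep all hardware access behind a thin, board-specific layer. The sketch below is a hypothetical illustration (the register address, bit layout and helper names are invented, not taken from the book): with access concentrated in a couple of helpers, a migration only has to revisit those helpers rather than every driver that touches the hardware.

#include <stdint.h>

/* Hypothetical memory-mapped UART status register; in a real driver the
 * address and bit layout would come from the target board's documentation. */
#define UART_STATUS_REG 0xFE001000u

/* All register access funnels through these helpers, so assumptions about
 * access width, byte order and memory ordering live in one place when the
 * driver is ported to a different architecture. */
static inline uint32_t reg_read32(uintptr_t addr)
{
    return *(volatile uint32_t *)addr;
}

static inline void reg_write32(uintptr_t addr, uint32_t value)
{
    *(volatile uint32_t *)addr = value;
}

/* Example use: check a (hypothetical) transmit-ready bit. */
static inline int uart_tx_ready(void)
{
    return (reg_read32(UART_STATUS_REG) & 0x1u) != 0;
}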

EIS: Let’s talk about available migration tools.
Domeika: Historically, Intel’s tool focus has been on getting the best, most optimized performance. Our compiler engineers sit closely with the Intel Atom processor architects, so we are able to design compilers that know the internals and create very fast code. Similarly, our profiling tools are tuned to watch for events that have more or less impact on the processor. One of the big embedded tool areas is power optimization: how do you optimize the processor for power? There are tools available now and more coming later. One commonly used open-source tool available today is called PowerTop.

There were many demonstrations at the last Embedded Systems Conference (ESC) that relied on external electrical devices to measure power. These devices had external probes that would monitor the power on a chip or board. PowerTop is different. It is a software tool that monitors the power states of a processor, specifically the C and P states, while the software is running. Idle processors use less power. PowerTop monitors the processor as it transitions between its C states and P states. Knowing the transition timing allows the designer to figure out what part of the software is causing the processor to wake up. Too many interrupts may increase the system’s power consumption. The software developer can use this information to determine whether all of those interrupts are actually needed; perhaps fewer processor interrupts can be used. Sometimes the culprit is something simple, like an application that insistently polls the processor. One solution may be changing the polling behavior or even moving to a different processing architecture.
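
The polling point can be made concrete with a small sketch (generic pthreads code, not an example from the book). The first wait loop below is the kind of behavior PowerTop exposes, since it keeps the core from reaching its deeper C states; the blocking version lets the processor idle until an event actually arrives.

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  data_ready = PTHREAD_COND_INITIALIZER;
static volatile bool have_data = false;

/* Wasteful: the thread never sleeps, so the core stays busy re-checking
 * a flag and cannot drop into a low-power idle state. */
void wait_by_polling(void)
{
    while (!have_data)
        ;  /* spin */
}

/* Event-driven: the thread blocks until it is signalled, so the
 * processor can idle between events. */
void wait_by_blocking(void)
{
    pthread_mutex_lock(&lock);
    while (!have_data)
        pthread_cond_wait(&data_ready, &lock);
    pthread_mutex_unlock(&lock);
}

/* Producer side: publish the data and wake the waiting thread once. */
void signal_data(void)
{
    pthread_mutex_lock(&lock);
    have_data = true;
    pthread_cond_signal(&data_ready);
    pthread_mutex_unlock(&lock);
}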

EIS: How about chip power management systems that are based on RTOS software control, e.g., turning off specific sections of the chip as needed?
Domeika: Those low-power techniques are certainly useful. However, my focus has been on the software development side. Many developers don’t want to go to that deep a level of detail. This has been an eye-opener for me, a realization that has caused me to think in a new way.

While there are ways to micromanage the chip’s power usage, it is usually more efficient to simply let the chip manage power at that level of detail. A great many power decisions are controlled by the processor itself. We’ve found that application developers don’t have a big desire to manually tell the processor which sleep state to enter or when to wake up. Their interest is at a higher level, such as deciding how often to interrupt the processor.
This is analogous to threading issues in multicore processing. Explicit threading is often called the assembly language of multicore programming. Here, too, the question arises as to whether it is better to have libraries that handle the low-level power and performance issues so developers can focus exclusively on their software applications. Not surprisingly, mainstream developers want things to be easier.
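
To ground the “assembly language” analogy (a generic pthreads sketch, not code from the book), the fragment below shows the bookkeeping that raw threading demands: splitting the work, creating the threads, and joining them by hand. Higher-level libraries exist precisely so application developers can express the same parallel loop without managing any of this themselves.

#include <pthread.h>
#include <stdio.h>

#define N        1000000
#define NTHREADS 4

static float data[N];

struct slice { int begin; int end; };

/* Each worker processes its assigned slice of the array. */
static void *worker(void *arg)
{
    const struct slice *s = arg;
    for (int i = s->begin; i < s->end; i++)
        data[i] = data[i] * 2.0f + 1.0f;
    return NULL;
}

int main(void)
{
    pthread_t threads[NTHREADS];
    struct slice slices[NTHREADS];
    const int chunk = N / NTHREADS;

    /* Manual work splitting, thread creation and joining: the level of
     * detail that threading libraries are meant to hide. */
    for (int t = 0; t < NTHREADS; t++) {
        slices[t].begin = t * chunk;
        slices[t].end   = (t == NTHREADS - 1) ? N : (t + 1) * chunk;
        pthread_create(&threads[t], NULL, worker, &slices[t]);
    }
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(threads[t], NULL);

    printf("data[42] = %f\n", data[42]);
    return 0;
}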

EIS: Are there any third-party tools that can be used for multicore design?
Domeika: The book also covers some third-party tools. One such tool is Prism, by CriticalBlue. This tool supports multicore programming on embedded processors by allowing users to play “what if” performance scenarios. For example, what if you were able to make a section of the code run in parallel? How much faster would the code run across four cores? What are some of the potential issues that you’d have to worry about if you are going to make something run in parallel? Common issues include shared variables, concurrency concerns and ensuring that the parallelized code still runs correctly.
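
The shared-variable concern is easy to see in a loop that looks trivially parallel but is not (a generic sketch, not a Prism workflow). Splitting the accumulation below across cores without further thought would let every thread race on the sum; OpenMP’s reduction clause, used here purely as one example remedy, gives each thread a private partial sum and combines them at the end.

#include <stdio.h>

#define N 1000000

int main(void)
{
    static double samples[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        samples[i] = 1.0;

    /* Naively parallelizing this loop would let every thread read and
     * write 'sum' at once. The reduction clause keeps a private partial
     * sum per thread and merges them when the loop finishes. (Compile
     * with -fopenmp; without it the pragma is ignored and the loop
     * simply runs serially.) */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += samples[i];

    printf("sum = %.0f (expected %d)\n", sum, N);
    return 0;
}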

 

John Blyler can be reached at: jblyler@extensionmedia.com

Max Domeika is a tools architect in the Developer Products Division at Intel, creating tools targeting the Intel Architecture market. He earned a BS in Computer Science from the University of Puget Sound, an MS in Computer Science from Clemson University, and an MS in Management in Science & Technology from the Oregon Graduate Institute. Max is the author of “Software Development for Embedded Multi-core Systems” from Elsevier and “Break Away with Intel Atom Processors” from Intel Press. In 2008, Max was awarded an Intel Achievement Award for innovative compiler technology that aids in architecture migrations.

 

Lori Matassa is a Staff Technical Marketing Engineer in Intel’s Embedded and Communications Division and holds a BS in Information Technology. She has over 20 years of engineering experience developing software for embedded systems. In recent years at Intel she has contributed to Carrier Grade Linux, as well as the software enablement of multicore adoption and architecture migration for embedded and communication applications.