The Future of the Intel® Atom™ Processor
The next challenge will be to strike a balance between BIOS, middleware, applications, and OS in the silicon itself.

By Ed Sperling, Contributing Editor
Ed Sperling sat down with Jonathan Luse, director of marketing for Intel’s Low-Power Embedded Products Division. What follows are excerpts of that conversation.
EIS: Will Intel® Atom™ microarchitecture use the new Tri-Gate technology?
Luse: We have no announcements at this point because Tri-Gate is a 22nm technology. We do plan to use that in the future, though.
EIS: What node is the Intel® Atom™ processor at?
Luse: We’re at 32nm. So Tri-Gate is the next-generation process technology. You can expect to see the entire Intel® product line use this at 22nm.
EIS: Where will the Intel Atom processor be targeted? Is it geared toward smartphones, or toward other devices that require low power?
Luse: It’s both. The Intel Atom processor has found success so far in ‘tethered’ devices—things that are plugged into the wall but still need low power. For example, think about industrial HMIs, controllers, and medical equipment like ultrasound. Even though they’re looking at bandwidth and low power, most of the traction has been in a scaled-down version of the traditional (Intel) business model. Going forward, we’re at a tipping point with the new Intel® Atom™ processor Z6xx series, which reduces power by about 30% from the previous generation. We’re at 3.3 watts, and the average power is about one-third to one-half of that. That opens up a tremendous market for untethered devices: things like portable retail devices or industrial handhelds for factory automation and medical. We’re really getting to the point of battery-powered portable devices. This is the next big wave: battery life, standby power, and performance per watt per cubic inch.
EIS: Is the future of the Intel Atom processor as a single processor or would it be in a chip set with other processors, as well?
Luse: That depends on the customer application. We provide the CPU and the graphics processors, and we have a couple of Intel® architecture (IA)-based acceleration processors. Then customers can add their own acceleration processors or graphics cards. If you get into print imaging, the Intel architecture would do print control and motor control, but color correction and image control and rotation might be done in a proprietary ASIC. A lot of that is the customer’s secret sauce for performance, and doing a dedicated process like halftoning or color balancing is going to be faster on a dedicated chip than doing it in software on a general-purpose processor. It makes sense to put a dedicated-function device right next to the processor.
EIS: Where do your acquisitions like McAfee and Wind River fit in?
Luse: McAfee and Wind River have a broad customer base. Part of our acquisition strategy was to leave them isolated from the core Intel® silicon business because they need to operate autonomously. In my world, it’s all about IA.
EIS: But a general-purpose operating system running on IA isn’t always the most efficient. Are you seeing an uptake beyond just the Intel-Microsoft alliance?
Luse: The breadth and depth of operating systems required to service the market is increasing. We had about a dozen OSes for the first generation of Intel Atom processors. We now support about 30, with full board support packages optimized for those OSes. It’s definitely increasing, and there’s a bit of specialization. When you get into industrial applications, real-time operating systems come into play; VxWorks and Green Hills Software become very attractive. Multiply that by the number of segments we pursue and there’s a big market.
EIS: Is virtualization being used in portable devices?
Luse: I’ve seen a lot of investigation in the market, but I haven’t seen it in widespread use yet. But when multicore first came out eight or nine years ago, there was a lot of interest and it still took time. We’re seeing a lot of interest for a lot of different applications. Some people are using it to isolate systems. So for an industrial application you might see the machine controller on one side of the virtualization and the safety management system on the other, so you can monitor the safety aspects of the system and shut it down if necessary. Today that has to go through a bunch of regulatory approvals. Sometimes it takes a while to prove that’s robust enough, but the benefits are definitely there.
EIS: This is the same approach as separating work and home environments on the same device, right?
Luse: Yes, and I’ve had multiple conversations around the country and the world about those usage models. Security becomes more important as malware begins hitting embedded devices. There’s a range of how you use those devices. Some people would maximize performance and minimize security. Others would maximize security and minimize performance. You have to build an architecture in between. But from a usage perspective, there is a huge demand for better and better experiences. That could be productivity or it could be simplifying your life. How do you apply technology in a way where it’s invisible?
EIS: Is there enough real estate now, with the process shrink, that you can do more with a piece of silicon?
Luse: Yes, and we’re surprised with the type of precision and compute performance you can put into a device. If you look at the long term instead of just one generation, 10 years ago the most powerful Intel processor was the Intel® Pentium® II processor. It had a SPECint of 18, ran at 40 watts and cost $1,000. Today the average Intel Atom processor is less than $50, has a SPECint of 56 and runs at 2 watts. That type of progression allows people to do a lot of things that were not practical even with the last generation. We’re now into our third generation of Intel Atom processor.
EIS: So what is all this extra silicon being used for?
Luse: It’s allowing customers to add features they couldn’t add in a given power target. Now that you have performance, you can add another application to a device without affecting performance.
EIS: Are the applications being written to take advantage of more than one core?
Luse: That’s part of the phenomenon that Bill Gates predicted back in 2003. It’s going to take 7 to 10 years for the software to catch up to multicore. There’s a lot of legacy code that isn’t built for multithreading and multicore. Multicore is the way to get performance increases within a power target.
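The catch-up Luse describes is about restructuring serial legacy code so independent work can run on separate cores. As a minimal illustrative sketch (not anything Intel-specific), here is how a CPU-bound job might be split across cores in Python; `checksum` is a hypothetical stand-in for real per-chunk work:

```python
import concurrent.futures
import os

def checksum(chunk):
    """CPU-bound stand-in for real per-chunk work (e.g. image filtering)."""
    total = 0
    for value in chunk:
        total = (total + value * value) % 65521
    return total

def process_serial(chunks):
    # The legacy single-threaded path: one chunk after another.
    return [checksum(c) for c in chunks]

def process_parallel(chunks):
    # Spread independent chunks across all available cores.
    with concurrent.futures.ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        return list(pool.map(checksum, chunks))

if __name__ == "__main__":
    data = [list(range(i, i + 10_000)) for i in range(0, 40_000, 10_000)]
    # Both paths must agree; only the parallel one scales with core count.
    assert process_serial(data) == process_parallel(data)
```

The point of the sketch is that the speedup only appears once the work is expressed as independent chunks, which is exactly the restructuring that legacy serial code needs.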
EIS: You can thread to two or maybe even four cores, but can you really use more cores for some of these applications?
Luse: That’s where you use system-level partitioning. You may have a core dedicated to intrusion detection while another does Skype and a third does Photoshop.
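One OS-level analogue of the partitioning Luse describes is CPU affinity: pinning a workload to a dedicated core so it doesn’t contend with others. A minimal sketch using Linux’s `sched_setaffinity` (the helper name `pin_to_cores` is illustrative, not an Intel mechanism):

```python
import os

def pin_to_cores(cores):
    """Restrict the calling process to the given set of CPU cores.

    Returns the effective affinity set, or None on platforms
    (e.g. Windows, macOS) that lack sched_setaffinity.
    """
    if not hasattr(os, "sched_setaffinity"):
        return None
    available = os.sched_getaffinity(0)  # cores this process may use now
    wanted = set(cores) & available
    if not wanted:
        return None
    os.sched_setaffinity(0, wanted)      # pid 0 = the current process
    return os.sched_getaffinity(0)

if __name__ == "__main__":
    # Example: dedicate this process to core 0, leaving the remaining
    # cores free for, say, a malware scan or a monitoring task.
    print(pin_to_cores({0}))
```

Pinning each function (intrusion detection, communications, an application) to its own core is a crude form of the system-level partitioning described above; hypervisor-based partitioning enforces the same separation more strictly.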
EIS: Then it’s by function instead of application?
Luse: It can be either way. When my laptop is scanning my hard drive for malware, for instance, I see a performance degradation. If it were partitioned properly at the OS level, that wouldn’t happen.
EIS: Is there any shift in terms of how Intel looks at building processors? Is software now a consideration rather than just hardware performance?
Luse: Absolutely. The discussion is much more platform-centric now, versus 10 years ago when it was silicon-centric. Software was seen as a second stage of the conversation: once the silicon was architected, the software came into play. Now software is part of the solution from day one. Going forward, it will definitely be a platform discussion.
EIS: This is an architectural discussion?
Luse: Yes. BIOS, middleware, applications and OS have their own attributes. How do you take advantage of the platform of choice and still support more than one? That’s the challenge—figuring out how to strike a balance in the silicon itself.
EIS: Intel has been talking for years about moving into the SoC market. Does the Intel Atom processor become a base platform for an SoC, possibly in a 3D stack?
Luse: That’s probably a couple of steps away from where we are right now. Because of the fragmentation, you need a lot of SoCs. There is differentiation on BOM (bill of materials) cost and power. If you have a general-purpose CPU, it’s going to be larger and more expensive than the market requires. You have to figure out how to design a catalog of them, and each one is more of an application-specific rather than a generalized approach. That’s the challenge we’re looking at. There’s a need for general-purpose chips, but there’s a need for market-specific ones as well.
Ed Sperling is Contributing Editor for Embedded Intel® Solutions and the Editor-in-Chief of the “System Level Design” portal. Ed has received numerous awards for technical journalism.