The Hottest Technology: Touch

What is needed to drive today’s robust touchscreens, including large sizes?

By Binay Bajaj, Atmel Corporation

The explosion of touchscreens began only four years ago, in 2007. Touchscreen technology has been around for nearly 20 years, but it never reached the masses. So why is it only now hitting the mainstream and being adopted across all walks of life? The answer is simple. Touchscreen technologies with multi-touch gestures, flicking and light touches are being developed as intuitive, easy-to-use, intelligent interfaces for the everyday user, whether they are 3 or 90 years old. Apple® certainly helped this mainstream adoption with the launch of the (fourth-generation) iPod® in 2004, the iPhone® in 2007 and the iPad in 2010.

To understand the success of these devices, one must understand the basics of touchscreen technologies and the features device manufacturers look for in order to make a device that is truly intuitive to the user. Let’s explore the basics required to make a touchscreen successful.

The basics
Large or small, the success of any touchscreen device is a function of the technology choices made in designing it, the most important being projected-capacitance technology, sensor design and driver chip.

Projected-capacitance technology
Today’s devices overwhelmingly use capacitive touchscreens, which operate by measuring small changes in capacitance—the ability to hold an electrical charge—when an object (such as a finger) approaches or touches the surface of the screen. However, not all capacitive touchscreens are created equal. Choices in the capacitance-to-digital conversion (CDC) technique and the spatial arrangement of the electrodes that collect the charge determine the overall performance and functionality the device can achieve.

Device manufacturers have two basic options for arranging and measuring capacitance changes in a touchscreen: self-capacitance and mutual-capacitance. Most early capacitive touchscreens relied on self-capacitance, which measures an entire row or column of electrodes for capacitive change. This approach is fine for one-touch or simple two-touch interactions. But it presents serious limitations for more advanced applications, because it introduces positional ambiguity when the user touches down in two places. Effectively, the system detects touches at two (x) coordinates and two (y) coordinates, but has no way to know which (x) goes with which (y). This leads to “ghost” positions when interpreting the touch points, reducing accuracy and performance.

Alternatively, mutual-capacitance touchscreens use transmit and receive electrodes arranged as an orthogonal matrix, allowing them to measure the point where a row and column of electrodes intersect. In this way, they detect each touch as a specific pair of (x,y) coordinates. For example, a mutual-capacitance system will detect two touches as (x1,y3) and (x2,y0), whereas a self-capacitance system will detect simply (x1,x2,y0,y3). (See Figure 1.)
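
To make the ambiguity concrete, the following C sketch (purely illustrative; the 4x4 node grid and the touch positions are hypothetical, not a real sensor layout) enumerates the candidate positions a self-capacitance measurement yields for the two touches described above and contrasts them with the unambiguous readings of a mutual-capacitance matrix.

/*
 * Illustrative sketch (not production firmware): why self-capacitance
 * sensing produces "ghost" points for two simultaneous touches, while a
 * mutual-capacitance node matrix does not. The 4x4 grid and the touch
 * positions (x1,y3) and (x2,y0) are hypothetical.
 */
#include <stdio.h>

#define ROWS 4
#define COLS 4

int main(void)
{
    /* Mutual-capacitance view: two real touches at nodes (x1,y3) and (x2,y0). */
    int touch_matrix[ROWS][COLS] = {{0}};
    touch_matrix[3][1] = 1;   /* row y3, column x1 */
    touch_matrix[0][2] = 1;   /* row y0, column x2 */

    /* Self-capacitance only sees which whole rows and columns are active. */
    int row_active[ROWS] = {0}, col_active[COLS] = {0};
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            if (touch_matrix[r][c]) { row_active[r] = 1; col_active[c] = 1; }

    /* Every active (row, column) pairing is a candidate touch:
     * 2 active rows x 2 active columns = 4 candidates, 2 of them ghosts. */
    printf("Self-capacitance candidates:\n");
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            if (row_active[r] && col_active[c])
                printf("  (x%d, y%d)%s\n", c, r,
                       touch_matrix[r][c] ? "  <- real" : "  <- ghost");

    /* Mutual capacitance measures each row/column intersection directly. */
    printf("Mutual-capacitance touches:\n");
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            if (touch_matrix[r][c])
                printf("  (x%d, y%d)\n", c, r);
    return 0;
}

Two active rows and two active columns produce four self-capacitance candidates, two of which are ghosts; the mutual-capacitance matrix reports only the two real intersections.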

The underlying CDC technique also affects performance. The receive lines are held at zero potential during the charge-acquisition process, and only the charge between the specific transmitter X and receiver Y electrodes touched by the user is transferred. Other techniques are available, but the key advantage of the CDC approach is its immunity to noise and parasitic effects. This immunity allows for additional system design flexibility; for example, the sensor IC can be placed either on the FPC immediately adjacent to the sensor or farther away on the main circuit board.

Sensor design
Electrode pitch, a key parameter in sensor design, is the spacing between adjacent electrodes—and therefore the density of (x,y) “nodes”—on the touchscreen. To a large extent it determines the touchscreen’s resolution, accuracy and finger separation. Naturally, different applications have different resolution requirements. But today’s multi-touch applications, which need to interpret fine-scale touch movements such as stretching and pinching fingertips, require high resolutions to uniquely identify several adjacent touches.

Typically, touchscreens need a row and column electrode pitch of approximately five millimeters or less (derived from measuring the tip-to-tip distance between the thumb and forefinger when pinched together). This allows the device to properly track fingertip movements, support stylus input and, with proper firmware algorithms, reject unintended touches. When the electrode pitch is between three and five millimeters, the touchscreen becomes capable of supporting input with a stylus that has a finer tip—a boost in accuracy that allows the device to support a broader range of applications.
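
As a rough illustration of how a five-millimeter pitch can still yield sub-millimeter reported positions, the short C sketch below interpolates a touch coordinate from the signal deltas on three adjacent electrodes. The delta values and node indices are invented for the example and do not describe any specific controller’s algorithm.

/*
 * Minimal sketch (invented values, not a specific controller algorithm):
 * interpolating a touch position between adjacent nodes. With a 5 mm
 * electrode pitch, weighting neighboring node signals reports a position
 * far finer than the pitch itself.
 */
#include <stdio.h>

#define PITCH_MM 5.0f

int main(void)
{
    /* Hypothetical signal deltas (counts above baseline) on three adjacent
     * column electrodes, nodes 2, 3 and 4, as a fingertip sits between
     * nodes 3 and 4. */
    float delta[3] = { 40.0f, 180.0f, 130.0f };
    int first_node = 2;

    /* Centroid (signal-weighted average) of the node positions. */
    float num = 0.0f, den = 0.0f;
    for (int i = 0; i < 3; i++) {
        num += (first_node + i) * PITCH_MM * delta[i];
        den += delta[i];
    }
    printf("Interpolated position: %.2f mm\n", num / den);  /* ~16.29 mm */
    return 0;
}

The printed position (about 16.3 mm) lands between the 15 mm and 20 mm nodes, showing how interpolation lets a sensor report positions far finer than the electrode pitch itself.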

Touchscreen driver chip
At the core of any successful touch-sensor system is the underlying chip and software technology. As with any other chip design, the touchscreen driver chip should have high integration, minimal footprint and close to zero power consumption along with the flexibility to support a broad range of sensor designs and implementation scenarios. Any driver chip will be measured by the balance of speed, power and flexibility it achieves.

 

Supersizing the touchscreen
The considerations described above apply to any size touchscreen device. But what are the specific considerations for moving to large-format devices? Manufacturers will find that the key requirements for modern touchscreen technologies—multi-touch support, performance, flexibility and efficiency—become even more critical when users adopt larger screens and the more complex touch applications they enable.

Delivering true multi-touch
Users of the Apple iPhone and other contemporary devices will be familiar with today’s multi-touch gestures, typically the pinching or stretching of two fingers. With a larger screen, however, it becomes possible to envision much more complex multi-touch gestures. Imagine painting and music applications for young students, for example, that involve gesturing with all 10 fingers and thumbs. Or new tablet-based games that pit two or more users against each other on the same screen. However large-format touch computing evolves, application developers will want the flexibility to take full advantage of new kinds of touchscreen interactions. Device manufacturers don’t want to stand in their way—and certainly don’t want to build a device that can’t support the next hugely popular touch application.

As large-format touch applications begin using four, five and 10 touches, it’s important to consider not just how new applications might exploit these capabilities, but also how the controller chip will use this richer information to create a better user experience. For example, the ability to track incidental touches around the edge of a screen and classify them as “suppressed” is even more important on a large-format device than on a small one.

Just as a mobile phone’s touchscreen needs to be able to recognize when a user is holding the phone or resting the screen against her cheek, so a larger-format system must account for the different ways that users will hold and use the device—for example, resting the edge of the hand on the screen when using a stylus, or resting both palms when using a virtual keyboard. And it’s not enough to simply identify and suppress incidental touches; the device must track them so that they remain suppressed even if they stray into the active region. The more touches that a controller can unambiguously resolve, classify and track at once, the more intuitive and accurate the user experience can be.
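
A minimal C sketch of this “classify once, keep tracking” behavior appears below. The edge-band width, screen dimensions and classification rule are assumptions made for illustration; they are not the logic of any shipping controller firmware.

/*
 * Illustrative sketch of "classify once, keep tracking": a touch that first
 * appears in a screen-edge band is treated as incidental and stays
 * suppressed even if it later drifts into the active region. The band
 * width and screen dimensions are assumed values.
 */
#include <stdbool.h>
#include <stdio.h>

#define EDGE_BAND_MM 8.0f
#define SCREEN_W_MM  200.0f
#define SCREEN_H_MM  130.0f

typedef struct {
    int   id;
    float x_mm, y_mm;
    bool  classified;
    bool  suppressed;   /* sticky: decided on first contact, kept while tracked */
} touch_t;

static void update_touch(touch_t *t, float x, float y)
{
    if (!t->classified) {
        /* A contact that first appears in the edge band is treated as
         * incidental (a gripping thumb, a resting palm edge, etc.). */
        bool in_edge = x < EDGE_BAND_MM || x > SCREEN_W_MM - EDGE_BAND_MM ||
                       y < EDGE_BAND_MM || y > SCREEN_H_MM - EDGE_BAND_MM;
        t->suppressed = in_edge;
        t->classified = true;
    }
    /* Keep tracking the contact so the decision sticks as it moves. */
    t->x_mm = x;
    t->y_mm = y;
}

int main(void)
{
    touch_t grip = { .id = 1 };
    update_touch(&grip, 2.0f, 60.0f);    /* first seen at the screen edge   */
    update_touch(&grip, 25.0f, 60.0f);   /* drifts inward: still suppressed */
    printf("Touch %d at (%.0f, %.0f) mm, suppressed = %d\n",
           grip.id, grip.x_mm, grip.y_mm, (int)grip.suppressed);
    return 0;
}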

Achieving high performance
Touchscreen performance is a function of six basic factors:

  • Accuracy means the fidelity with which the touchscreen reports the actual location of the user’s finger or stylus. An accurate touchscreen should report touch position to within +/- 1 millimeter.
  • Linearity measures how “straight” a line drawn across the screen is. Linearity depends on sound screen-pattern design, and should also be within +/- 1 millimeter.
  • Finger separation describes how closely the user can bring two fingers together before the device recognizes them as a single touch.
  • Response time measures how long it takes the device to register a touch and respond. For basic touch gestures such as tapping, the device should register the input and provide feedback to the user in less than 100 milliseconds. Factoring in various system latencies, that typically means that touchscreens need to report a first qualified touch position in less than 15 milliseconds. Applications such as handwriting recognition require even faster response.
  • Resolution is the smallest detectable amount of finger or stylus motion. It is important to push resolution down to the fraction-of-a-millimeter level for a number of reasons, chief among them enabling stylus-based handwriting and drawing applications.
  • Signal-to-noise ratio (SNR) refers to the touchscreen’s ability to discriminate between the capacitive signal arising from real touches and the capacitive signal arising from unwanted electrical noise. Capacitive-touchscreen controllers measure very small changes in the row-to-column coupling capacitance, and the way those measurements are performed has a strong influence on the controller’s susceptibility to external noise. Large-format touchscreens are especially challenging in this regard, as one of the most significant noise generators is the LCD itself. (A rough way to estimate this figure is sketched after this list.)
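
As a rough illustration of the SNR figure mentioned in the last bullet, the C sketch below compares the signal delta of a touched node against the spread of idle (no-touch) readings on the same node. All sample values are invented for the example.

/*
 * Rough sketch of estimating a per-node signal-to-noise figure: compare the
 * signal delta produced by a touch against the spread of idle (no-touch)
 * readings on the same node. All sample values are invented.
 */
#include <math.h>
#include <stdio.h>

#define SAMPLES 8

int main(void)
{
    /* Idle readings on one node (raw counts), e.g. with the LCD running. */
    float idle[SAMPLES] = { 1001, 998, 1003, 997, 1002, 999, 1004, 996 };
    float touch_reading = 1160.0f;   /* same node while touched */

    /* Baseline (mean) and noise (standard deviation) of the idle samples. */
    float mean = 0.0f;
    for (int i = 0; i < SAMPLES; i++) mean += idle[i];
    mean /= SAMPLES;

    float var = 0.0f;
    for (int i = 0; i < SAMPLES; i++) var += (idle[i] - mean) * (idle[i] - mean);
    float noise = sqrtf(var / SAMPLES);

    printf("Signal = %.1f counts, noise = %.1f counts, SNR = %.0f : 1\n",
           touch_reading - mean, noise, (touch_reading - mean) / noise);
    return 0;
}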

As touchscreens get larger and support more simultaneous touches—and more complex interactive content—achieving top performance in all of these categories becomes even more important.

Touchscreen flexibility
Most of today’s small touchscreens are designed to support a specific device, and often, specific software and applications. Emerging large-format touchscreens, however, will need to be much more versatile. For example, a tablet device is a natural fit for handwritten input using a stylus. But to support that, the touchscreen needs a higher resolution than one intended for fingertip gestures on a five-inch screen.

The majority of mobile handsets and tablet devices have been shipping with the Android™ operating system (OS). However, as Microsoft® comes closer to delivering the Windows® 8 (Win8) OS, Windows Hardware Quality Labs (WHQL) certification will be required. As the Win8 OS becomes available, users will have more choices in the OS they select for their tablet devices. The big application processor hardware vendors that play in this space currently include NVIDIA®, Qualcomm®, Intel® and Texas Instruments®, among others. Another application for the Atmel® large-screen solution is the e-reader, with players such as the Kindle from Amazon, the Nook from Barnes and Noble and the Novel e-reader from Pandigital.

Power efficiency
While they won’t be used like a mobile phone, tablet devices will still need to be light enough to be comfortably used in a variety of positions and locations. That means they need small, lightweight batteries—which means highly efficient controller chips with minimal power consumption. Balancing this requirement against the need for greater accuracy and responsiveness (after all, a larger screen inherently means higher resolutions, more nodes to scan and a far greater processing load) is one of the biggest challenges in designing an effective large-format touchscreen.
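
The back-of-the-envelope C sketch below shows how quickly the node count, and with it the per-frame processing load, grows with screen area at a fixed five-millimeter pitch. The screen dimensions are illustrative and do not describe specific products.

/*
 * Back-of-the-envelope sketch: node count per scan at a fixed 5 mm pitch.
 * Screen dimensions are illustrative, not measurements of real products.
 */
#include <stdio.h>

int main(void)
{
    const float pitch_mm = 5.0f;
    const struct { const char *name; float w_mm, h_mm; } screens[] = {
        { "~4-inch phone screen",   50.0f,  90.0f },
        { "~10-inch tablet screen", 135.0f, 217.0f },
    };

    for (int i = 0; i < 2; i++) {
        int cols = (int)(screens[i].w_mm / pitch_mm);
        int rows = (int)(screens[i].h_mm / pitch_mm);
        printf("%s: %d x %d = %d nodes per scan\n",
               screens[i].name, cols, rows, cols * rows);
    }
    return 0;
}

At a fixed pitch the node count grows with screen area, so a roughly ten-inch panel has several times as many nodes to scan and process per frame as a four-inch one.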

In general, device manufacturers look for the features referenced above. Touch controller companies that can provide the best capabilities will win a design. This will ultimately enable users to enjoy more talk time thanks to lower power consumption, intuitive and accurate tracking of every touch after the first, better response times and an overall seamless experience with their touchscreen devices.

 



Binay Bajaj is a director of marketing for touch technologies at Atmel Corporation, focusing on touch products, new product development and introductions, and strategic partner alliances with diverse OEMs and vendors in the mobile, consumer and PC industries. His previous work experience includes engineering and leadership positions at Synaptics, MIPS, Intel and National Semiconductor.