By Jean Renard Ward

“Haptics”, “force sensing”, and “proximity sensing”, especially on personal devices like smartphones, may be at issue in intellectual property litigation. In such cases, a force-sensing and haptics expert witness may be required.

If you look up a dictionary definition for “haptic”, it just means “relating to the sense of touch” (i). But for computer devices, the term gets applied more casually and broadly to any kind of sensing (especially “force sensing”) or physical-like feedback that relates to how a user does the touching. (We will talk about this with touchscreens as a familiar example, but touch input and haptics can be used to make a tangible user interface with any kind of object.) For example:

• Did the user just touch, or press hard and hold?
• Does the user change the pressure while they scroll or paint?
• Was a tap a sharp tap, or a soft tap?

• Does the device make a physical reaction (e.g., a “thunk” vibration that feels like a key click) when the user taps hard? Perhaps a quick click sound instead? Either way, the user is fooled into thinking they felt an actual click.
• “Bouncing” what’s on the screen when the user presses hard during a scroll – again, to fool the user into thinking they felt something.

Let’s not stop there.
Haptic input like “force sensing” has also been used in many other kinds of applications, such as:

• Biometric authentication “on the fly”
The pattern of how someone types on an on-screen keyboard (ii), or the contact shape of their hands when they hold the device (iii), or the profile of their fingertip (iv), can be used to check whether they are an impostor or the correct user.

• Adding “touch” to virtual environments and augmented reality
When a user presses harder, a virtual paintbrush may paint a wider line (as the virtual paintbrush bristles flare out) (v). Perhaps the screen also vibrates ultrasonically to feel more or less slick. A virtual piano keyboard plays louder or softer, like a real piano (vi). When the user hits a virtual ball harder, it goes higher.
You find this also in tabletop computing and interactive surfaces – these are large horizontal touchscreen displays, such as the Mitsubishi DiamondTouch (vii).

Of course, it takes special software—which can be very complex—to make these biometric and augmented-reality applications happen. Those are rich topics, and would need whole separate articles by themselves. So for now, let’s just talk about force sensing that has been used to sense taps, hard and soft touching, or extra-firm dragging and pushing on a touch screen.

One way this has been added to touchscreens is to put one or more force sensors behind the touchscreen: for example, mounting force-sensing transducers at the four corners. Engineers might use force-sensitive resistors (when squeezed, the resistance changes (viii)), piezoelectric transducers (compressing one creates a voltage), flexible or compressible capacitors (capacitance changes as two small electrodes are pushed closer together), and many other kinds of sensors. What engineers have used for force sensors can be quite creative, for example mounting a touchpad on a (very thin) spring mechanism or cushion, with a mechanical switch that closes when you press the touch surface down (ix).
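
As a toy illustration of how one of these transducers might be read out, here is a minimal Python sketch for a force-sensitive resistor wired into a simple voltage divider. Every constant here (supply voltage, divider resistor, calibration factor) is a made-up value for illustration, not from any particular device or datasheet.

    # Minimal sketch: estimating force from a force-sensitive resistor (FSR)
    # read through a voltage divider. All constants are illustrative.

    V_CC = 3.3        # supply voltage in volts (assumed)
    R_FIXED = 10_000  # fixed divider resistor in ohms (assumed)

    def fsr_resistance(v_out: float) -> float:
        """Solve the divider equation V_out = V_CC * R_FIXED / (R_FSR + R_FIXED)
        for the FSR's resistance, given the measured divider voltage."""
        return R_FIXED * (V_CC - v_out) / v_out

    def estimate_force(v_out: float) -> float:
        """Map resistance to a force estimate. An FSR's resistance falls
        roughly in inverse proportion to applied force; K is a made-up
        calibration constant (real FSRs need per-device calibration)."""
        K = 5.0e5
        return K / fsr_resistance(v_out)

    print(estimate_force(1.2))  # an ADC reading, already converted to volts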

There is also conductive rubber: common kinds of rubber are insulators, but engineers have added tiny metal particles that start touching and conducting when the rubber is pressed and pushes the particles together (x).

If you are using a stylus (and not just fingers), engineers have put a force sensor (of one kind or another) in the tip of the stylus—or perhaps on the top, at the back end of the ink cartridge in a writing stylus (a.k.a. “a pen”) (xi).

These are just a few examples of what has been added to touchscreens to sense hard and soft touches. There are many kinds of touch and touchscreen input technologies: they can work by resistance (or with a combination of resistive and conductive sheets (xii)), by capacitance or projected capacitance, by surface acoustic waves, by active or passive “sonar”, by infrared light beams, by FTIR (“frustrated total internal reflection”) inside glass (xiii), by computer vision, or by electro-optics; they can be integrated with a display (“in-cell”) or sit on top of it; and there are many other approaches (xiv).

Engineers have also used force sensors at the four corners, all by themselves, to make touchscreens, triangulating the forces to figure out where the user is pressing (xv).
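
Here is a minimal sketch of the idea, assuming a rigid screen resting on four corner sensors at known positions: balancing moments means the touch point is simply the force-weighted average of the corner positions. The coordinates and readings below are illustrative.

    # Minimal sketch: locating a single touch from four corner force sensors.

    def locate_touch(forces, corners):
        """forces: one reading per corner sensor; corners: (x, y) of each.
        Returns the force-weighted centroid (the estimated touch point)
        plus the total applied force."""
        total = sum(forces)
        if total == 0:
            return None  # no touch
        x = sum(f * cx for f, (cx, cy) in zip(forces, corners)) / total
        y = sum(f * cy for f, (cx, cy) in zip(forces, corners)) / total
        return (x, y, total)

    # A screen with corners at (0,0), (100,0), (0,60), and (100,60):
    corners = [(0, 0), (100, 0), (0, 60), (100, 60)]
    print(locate_touch([1.0, 3.0, 0.5, 1.5], corners))  # -> (75.0, 20.0, 6.0)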

(Side note: There is also confusion about force sensing and pressure sensing. Pressure is force divided by the contact area. Just a few pounds of force on a hard stylus tip could generate enough focused pressure to shatter a touchscreen, but the same few pounds of force through the palm of your hand would produce low, spread-out pressure. Force sensing and pressure sensing often get mixed up, but they are not the same, nor necessarily interchangeable.)
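
To put rough numbers on that side note, here is a tiny Python calculation. The contact areas are loose assumptions, chosen only to show the scale of the difference:

    # Same force, very different pressure. Pressure = force / contact area.

    FORCE_N = 10.0              # about 2.2 pounds of force, in newtons
    stylus_tip_area = 0.5e-6    # ~0.5 mm^2 hard stylus tip, in m^2 (assumed)
    palm_area = 5.0e-3          # ~50 cm^2 palm contact, in m^2 (assumed)

    print(FORCE_N / stylus_tip_area)  # ~2e7 Pa concentrated on the stylus tip
    print(FORCE_N / palm_area)        # ~2e3 Pa spread out under the palm

That is a factor of ten thousand between the two pressures, from the very same applied force.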

People usually have more than one fingertip. For that matter, most have more than one hand, and on those “tabletop” systems there can be more than one user. In the world of “real” reality you might swipe something clear with the sides of a hand (or both hands), not just a fingertip.

Nowadays, just about everyone has heard about multi-touch input: think “pinch-to-zoom”. There are many kinds of multi-touch / full-touch technologies.

Engineers have combined multi-touch sensing with some of the force-sensing techniques above. But a single overall force measurement doesn’t really tell you how hard you are pressing with each finger (or whatever). It would be hard to tell whether the user was applying a twisting pressure, or a see-saw pressing with different fingers, or to tell which fingers/hands/users were pressing harder (much less, just how hard).

So what have engineers done for multi-touch and full-touch input?

One technique is to make a whole touchscreen out of a grid of force sensors.  There are various techniques for making the grid transparent: one is simply to make the sensors small, and to space them out so you can see between them. This is called “screen-door transparency”.

If you measure the force at every sensor in the grid, you can make a “heat map” of all the values. You can see where the touches are, as well as how hard the touch is in each area, and you can use the center or peak of each area as the location of the fingertip (xvi).
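
Here is a minimal Python sketch of that processing step, under simple assumptions (a small example grid, a hand-picked threshold, blobs grown from 4-connected neighbors): threshold the heat map, group neighboring above-threshold cells into blobs, and report each blob’s force-weighted centroid and peak.

    # Minimal sketch: turning a grid of force readings into touch locations.

    def find_touches(heatmap, threshold=0.2):
        rows, cols = len(heatmap), len(heatmap[0])
        seen = [[False] * cols for _ in range(rows)]
        touches = []
        for r0 in range(rows):
            for c0 in range(cols):
                if heatmap[r0][c0] < threshold or seen[r0][c0]:
                    continue
                # flood-fill one connected blob of above-threshold cells
                stack, cells = [(r0, c0)], []
                seen[r0][c0] = True
                while stack:
                    r, c = stack.pop()
                    cells.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and not seen[nr][nc]
                                and heatmap[nr][nc] >= threshold):
                            seen[nr][nc] = True
                            stack.append((nr, nc))
                total = sum(heatmap[r][c] for r, c in cells)
                cx = sum(c * heatmap[r][c] for r, c in cells) / total
                cy = sum(r * heatmap[r][c] for r, c in cells) / total
                peak = max(heatmap[r][c] for r, c in cells)
                touches.append({"x": cx, "y": cy, "force": total, "peak": peak})
        return touches

    grid = [[0.0, 0.1, 0.0, 0.0],
            [0.1, 0.9, 0.3, 0.0],
            [0.0, 0.3, 0.0, 0.7],
            [0.0, 0.0, 0.0, 0.6]]
    print(find_touches(grid))  # two touches: one strong, one weaker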


These kinds of force-sensing grids have been known for many years. For example:
• robotic fingertip sensors with a sense of touch (xvii) (one from 1985 is pictured below) or even whole robotic “skins” (xviii)
• pressure mats in hospitals and industry (xix)
• checking a person’s “bite” in a dental exam (xx)
• measuring how someone walks, for orthopedic shoes or for sports medicine
• and (of course), in touchscreens and touch sensors (xxi).

“Heat map” techniques are also useful with other kinds of full-touch sensing.

(Robot touch sensor from 1985)

Touchscreens in most smartphones today use “projected capacitance” sensing.

The touchscreen has a grid of tiny capacitors—either single-electrode (“self-capacitance”) or two-electrode (“mutual capacitance”)—just below the surface. When something conductive like a finger or part of a hand is on the touchscreen, it changes the capacitance simply by being very close to the electrodes.
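
As a rough illustration of how such a grid can be scanned, here is a minimal Python sketch of a mutual-capacitance scan: drive each row electrode in turn and measure every column, giving one sample per row/column crossing. The drive_row and read_column functions are hypothetical stand-ins for a real touch controller’s analog front end, and the simulated finger and noise are made up.

    import random

    def scan_mutual_capacitance(n_rows, n_cols, drive_row, read_column, baseline):
        """Return a heat map of capacitance *drops* relative to an untouched
        baseline: a nearby finger diverts field lines and lowers the mutual
        capacitance at the crossings beneath it."""
        heatmap = []
        for r in range(n_rows):
            drive_row(r)  # energize one row electrode at a time
            heatmap.append([baseline[r][c] - read_column(c) for c in range(n_cols)])
        return heatmap

    # Fake "hardware" for demonstration: a finger near row 2, column 3.
    N_ROWS, N_COLS = 4, 6
    baseline = [[100.0] * N_COLS for _ in range(N_ROWS)]
    _active_row = 0

    def drive_row(r):
        global _active_row
        _active_row = r

    def read_column(c):
        drop = 8.0 if (_active_row, c) == (2, 3) else 0.0
        return baseline[_active_row][c] - drop + random.uniform(-0.2, 0.2)

    for row in scan_mutual_capacitance(N_ROWS, N_COLS, drive_row, read_column, baseline):
        print(["%5.1f" % v for v in row])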

If you check the capacitance at every grid sensor and make “heat maps”, you can see the touch areas and take their centers or peaks to get the locations, just as with the force-sensing grids above (xxii).

It doesn’t really give you the force: it gives you proximity (xxiii) (something conductive is very close).
But there are some tricks:

• A human fingertip (and most of the front of the hand) is compliant—it flexes a bit like rubber. When you press harder and mash the fingertip down a bit, it spreads out. With the right software (which can be complex), you can use that spreading (or perhaps, how fast the spreading happens) to get an indication of how hard the user is pressing. It’s not direct force sensing, but it can be close enough, depending on what you want to try in your software. (A sketch of this trick appears below.)

There are also “touchscreen pens” you can buy that have a special conductive rubbery tip: the rubbery tip spreads out much like a fingertip (xxiv).
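
Here is a minimal Python sketch of that spreading trick, under stated assumptions: contact area is counted in sensor cells, the area at initial touch-down serves as the light-touch baseline, and the gain is a made-up calibration factor (real systems would need per-user tuning).

    # Minimal sketch: a force *proxy* from fingertip spreading.

    class AreaForceEstimator:
        def __init__(self, gain=0.5):
            self.base_area = None  # contact area at initial touch-down
            self.gain = gain       # hypothetical calibration factor

        def update(self, contact_area_cells: int) -> float:
            """Feed in the touch's contact area each frame; growth beyond
            the touch-down size is treated as pressing harder."""
            if self.base_area is None:
                self.base_area = contact_area_cells
            spread = max(0, contact_area_cells - self.base_area)
            return self.gain * spread  # 0.0 = light touch; larger = firmer

    est = AreaForceEstimator()
    for area in (6, 6, 8, 11, 14):  # a fingertip mashing down over time
        print(est.update(area))     # 0.0, 0.0, 1.0, 2.5, 4.0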

The tricks don’t stop there.

You see not just the size of the contact area, but also its shape. With the right software tricks, you can use the shape (especially as it changes) to figure out not just how hard the user is touching, but whether they are twisting a finger around, or rolling it to one side. You can figure out whether you are seeing a fingertip, or the wide image of a thumb, or a finger touching “flat” instead of just with the tip, or the side of the hand. From the entire hand pattern, you can recognize right and left hands, hand postures or gestures, and even multiple users. (And recognize things besides hands, too (xxv, xxvi).)
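
One common way to summarize a blob’s shape is with image moments. The minimal Python sketch below computes the elongation and orientation of a blob’s equivalent ellipse, which can help separate a roundish fingertip from a long, narrow side-of-hand contact; the features and thresholds any real system uses would be application-specific.

    import math

    def shape_features(cells):
        """cells: list of (row, col, value) for one blob from the heat map.
        Returns the blob's equivalent-ellipse elongation and orientation."""
        total = sum(v for _, _, v in cells)
        cy = sum(r * v for r, _, v in cells) / total
        cx = sum(c * v for _, c, v in cells) / total
        # second central moments of the value-weighted blob
        mu20 = sum(v * (c - cx) ** 2 for _, c, v in cells) / total
        mu02 = sum(v * (r - cy) ** 2 for r, _, v in cells) / total
        mu11 = sum(v * (r - cy) * (c - cx) for r, c, v in cells) / total
        # principal axes and angle of the equivalent ellipse
        common = math.sqrt((mu20 - mu02) ** 2 + 4 * mu11 ** 2)
        major = math.sqrt((mu20 + mu02 + common) / 2)
        minor = math.sqrt(max(0.0, (mu20 + mu02 - common) / 2))
        angle = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
        elongation = major / minor if minor > 0 else float("inf")
        return {"elongation": elongation, "angle": angle}

    # A short horizontal streak (like the side of a hand) is very elongated:
    streak = [(5, c, 1.0) for c in range(10)]
    print(shape_features(streak))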

So, with so much haptic and touch sensing having been done before (going back 40 years and more), and all the many applications that have been demonstrated (gestures, biometrics, and augmented reality are just a few), just what is really new about any of this today?
Or more particularly, what could be novel enough to be patentable today?

As patent practitioners know, it all depends on exactly what a patent’s claims say, and how that compares with the details of prior art. A qualified force sensing and haptics expert witness who is knowledgeable in these matters and their long history may be helpful in understanding the claims and knowing what the prior art might be.

About the Author: Jean Renard Ward is a highly experienced, MIT-educated expert witness in patent litigation. Mr. Ward’s areas of design and development expertise include multi-touch/touchscreen and tablet hardware, capacitive touch and proximity sensors, styli/electronic pens, haptics, gestures, user interfaces (UIs), touchscreen graphics, and accessibility user interfaces (blind/visually-impaired); digital rights management (DRM), digital encryption and authentication (PKI), and malware detection; programming/coding (C/C++/Java, other systems), source-code analysis and reverse-engineering, and firmware. Clients include Google, Samsung, Ericsson, Lenovo, Motorola, Nokia, and Lucent Technologies. Mr. Ward has been granted multiple US patents. He received his degree in Computer Science and Electrical Engineering from MIT. Mr. Ward can be contacted at Rueters-Ward Services; Website: www.ruetersward.com; Phone: (617) 600-4095; Cell: (781) 267-0156; Email: jrward@alum.mit.edu

Copyright (c) 2017 Jean Renard Ward
___________________________________________________

i     (See https://en.wiktionary.org/wiki/haptic)
ii     (Jerome Saltzer, “The Protection of Information in Computer Systems”, 4th ACM Symposium on Operating Systems Principles, October 1973; US Patent 4,646,351, 1987)
iii    (Ivan Poupyrev, “Touché: Touch and Gesture Sensing for the Real World”, UbiComp 2012)
iv    (US Patent 5,325,442; 1994; US Patent 4,353,067, 1982)
v    (See Margaret Minsky below)
vi    (Robert Moog: “A Multiply Touch-sensitive Clavier for Computer Music Systems”, Proc. Intl. Computer Music Conference, 1982)
vii   (Mitsubishi “DiamondTouch”, www.merl.com, 2004; “Entertaible concept: combination of electronic gaming and traditional board games”, Philips Research www.research.philips.com, 2006; see also “SmartSkin” below)
viii  (US Patent 5,184,120, 1993; See also https://en.wikipedia.org/wiki/Force-sensing_resistor)
ix   (MicroTouch “UnMouse” touchpad, 1989)
x    (S. Jin, “Optically Transparent, Electrically Conductive Composite Medium”, Science, Jan. 24, 1992; I. Rosenberg, “IMPAD: An Inexpensive Multi-Touch Pressure Acquisition Device”, CHI 2009)
xi   (Wacom SD-420 Cordless Digitizer, 1989; Nuno Boscaglia, “A Low Cost Prototype for an Optical and Haptic Pen”, Proc. Biosignals and Biorobotics Conf., January 2011)
xii  (“Touch screens diversify”, Electronic Products, November 1985; “Generation of X and Y Coordinate Information”, IBM Technical Disclosure Bulletin, April 1, 1959)
xiii (Jeff Han, “Low-Cost Multi-Touch Sensing through Frustrated Total Internal Reflection”, Proc. 18th ACM Symposium on User Interface Software and Technology, 2005)
xiv     (See https://en.wikipedia.org/wiki/Touchscreen)
xv  (Margaret Minsky, “Manipulating Simulated Objects with Real-world Gestures using a Force and Position Sensitive Screen”, Computer Graphics, July 1984)
xvi  (Wayne Westerman, “Hand Tracking, Finger Identification, and Chordic Manipulation on a Multi-Touch Surface”, PhD Thesis, Univ. of Delaware, 1999)
xvii  (“Robotic Tactile Sensing”, BYTE Magazine, January 1986)
xviii  (US Patent 5,010,772, 1991)
xix  (“A Thin, Flexible, Matrix-Based Pressure Sensor”, Sensors Magazine, September 1998)
xx  (US Patent 4,644,801, 1987)
xxi  (“Editing Pad Using Force Sensitive Resistors”, Xerox Disclosure Journal, July/August 1992)
xxii  (US Patent 7,663,607, 2010)
xxiii  (Capacitive proximity sensors are used for much more than just computer user interfaces. For a quick partial list, see https://en.wikipedia.org/wiki/Proximity_sensor)
xxiv  (“Drawing on the iPad: 12 touchscreen styluses reviewed”, MacWorld, May 6, 2011)
xxv  (J. Rekimoto, “SmartSkin: An Infrastructure for Freehand Manipulation on Interactive Surfaces”, CHI 2002)
xxvi  (US Patent 6,570,078, 2003)