Nowadays, everyone is familiar with gestures on touchscreens from using a smartphone or tablet: slide, pinch-to-zoom, swipe (or flick), twist, and press-and-hold (or long press).
Smartphones with touchscreens and gestures started taking off with the Apple iPod Touch and iPhone in 2007. But the technical art of gestures is much older, going back even before the days of the now-ubiquitous WIMP (Windows + Icons + Menus + Pointer-device) user interfaces that have dominated personal computing for the last 30 years.
(Source: Apple iPod Touch User Guide 2.0, 2008)
Gestures were already used on portable tablet computers since at least the early 1990s. For example, the PenPoint tablet computers made extensive use of gestures(i), and gestures (called Command-Stroke-Equivalents) were used in the Palm PDAs(ii).
But how far back do gestures on touchscreens actually go?
As an engineer with extensive experience in touchscreen user interfaces and hardware, serving as a user-interface expert witness, I have often been called on to deal with the history of graphical user interfaces both with and without gestures. Let’s first discuss the history without gestures.
The Xerox Alto (and later the Xerox Star) personal computer, from about 1974(iii), is often cited as having the first WIMP user interface on a personal computer:
The basic user-interface interaction was point-and-click: point at an icon or text (e.g. with a mouse, or with a stylus/tablet or touchpad), and click on it. It was much simpler than typing commands in a command-line interface, and much more flexible than having physical knobs and dials with a keyboard. But because it was only point-and-click, some kinds of interactions were definitely awkward.
(Xerox Alto, circa 1974. Source: J. Johnson, “The Xerox Star – A Retrospective”, 1983)
The Xerox Star system also required a number of special function keys: “move”, “copy”, “delete”, “make same”, “properties”, and more. To move something, the user had to point-and-click, look away from their work, reach over to the keyboard, tap the “Move” key, and then look back to point-and-click again.
To get around this sort of awkwardness, user-interface designers back then (and still now) put extra buttons on the mouse (or puck) instead: a sixteen-button mouse/puck was not unusual. This meant you at least didn’t have to look away from your work to the keyboard. But it was both less flexible and harder to learn to use.
(Source: GTCO, DigiPad 5 product)
Apple’s Macintosh from 1984(iv) is often cited as having the first “gesture” in a WIMP interface: drag. The Apple Macintosh’s mouse famously had only one button. But the Mac also had features like dragging and pressing, not just point-and-click. A user could press on an icon, drag the cursor up and down (or even zig-zag) to the desired menu command, and release the mouse button to perform the command.
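To make the mechanics concrete, here is a minimal sketch (in Python, purely for illustration; this is not Apple’s actual code) of how a one-button interface can distinguish a click from a drag using nothing but press, move, and release events. The class name, the handler names, and the DRAG_THRESHOLD value are all assumptions for the example.

    # Hypothetical sketch: telling a click from a drag with a one-button pointer.
    # Not Apple's code; the names and the threshold are illustrative assumptions.

    DRAG_THRESHOLD = 4  # pixels of motion before a press counts as a drag

    class OneButtonGestureTracker:
        def __init__(self):
            self.pressed = False
            self.start = (0, 0)
            self.dragging = False

        def on_press(self, x, y):
            # Button goes down: remember where, but don't decide yet.
            self.pressed, self.start, self.dragging = True, (x, y), False

        def on_move(self, x, y):
            # Enough motion while pressed turns the press into a drag
            # (e.g. tracking a pull-down menu under the cursor).
            if self.pressed and not self.dragging:
                dx, dy = x - self.start[0], y - self.start[1]
                if dx * dx + dy * dy > DRAG_THRESHOLD ** 2:
                    self.dragging = True

        def on_release(self, x, y):
            # Button up: the completed interaction is either a click or a drag.
            self.pressed = False
            return "drag" if self.dragging else "click"

The design point is that one button is enough: the gesture is carried by the motion between press and release, not by extra buttons or function keys.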
Basically, a gesture was anything beyond just point-and-click.
As a graphical user interface (GUI) expert witness, I can say definitively that the 1984 Mac wasn’t really the first invention of the “gesture” in a GUI.
There were actually quite a few notable predecessors: graphical applications that used multiple gestures with different shapes(v). Here are a few highlights:
In the early 1970s, one of the first really successful CAD/CAM(vi) companies was Applicon(vii). The user interface used an electronic tablet: a large touchpad with a special pen or stylus. The system had gesture commands with easy-to-draw shapes like circles, triangles, angles, and caret marks. A user could also define new gesture shapes and commands simply by drawing an example to “train” the system. The system could even be trained to recognize complete alphabets and numerals. Gesture-recognition algorithms like those used by Applicon were published as an appendix in computer-graphics textbooks(viii).
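To give a flavor of how such trainable recognizers worked, here is a minimal sketch (in Python, purely illustrative; it is not Applicon’s actual algorithm, only a simple nearest-neighbor stand-in in the same spirit as the textbook recognizers mentioned above). All names, and the choice of 16 sample points, are assumptions for the example.

    # Hypothetical sketch of a trainable single-stroke gesture recognizer.
    # Not Applicon's algorithm; a simple 1-nearest-neighbor stand-in.

    import math

    def normalize(points, n=16):
        # Scale the stroke into a unit box and resample to exactly n points,
        # so strokes drawn at different sizes and speeds become comparable.
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        w = (max(xs) - min(xs)) or 1.0
        h = (max(ys) - min(ys)) or 1.0
        scaled = [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in points]
        step = max(1, len(scaled) // n)
        pts = scaled[::step][:n]
        while len(pts) < n:          # pad short strokes out to n points
            pts.append(pts[-1])
        return pts

    class TrainableRecognizer:
        def __init__(self):
            self.templates = {}      # command name -> one normalized example

        def train(self, name, example_points):
            # "Training" is just storing one drawn example per command.
            self.templates[name] = normalize(example_points)

        def recognize(self, points):
            # Return the command whose stored example is closest point-by-point.
            stroke = normalize(points)
            def distance(template):
                return sum(math.hypot(px - tx, py - ty)
                           for (px, py), (tx, ty) in zip(stroke, template))
            return min(self.templates, key=lambda name: distance(self.templates[name]))

For example, a user could define an “insert” command by drawing a single caret mark:

    r = TrainableRecognizer()
    r.train("insert", [(0, 1), (0.5, 0), (1, 1)])                       # caret mark
    r.train("delete", [(0, 0), (1, 1), (0.5, 0.5), (0, 1), (1, 0)])     # X-like scribble
    print(r.recognize([(0, 1), (0.4, 0.1), (1, 1)]))                    # -> "insert"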
There were also quite a number of research systems that used gestures. In 1969, gestures were used for editing text in a system at the U.S. Army War College(ix). It used gestures like familiar proofreaders’ marks (lasso, paste, transpose, etc.) to edit text right on the display.
Before that, around 1967, there was the GRAIL system(x). It had electronic ink, handwriting recognition, and gestures, including gestures for zooming in and out. The GRAIL system was used for modeling, and for creating and using digital maps in real time. Its zoomable maps are interesting to compare with much later services like MapQuest and Google Maps.
Handwriting recognition and interactive gestures were also used on what many people cite as the first “tablet” computer device(xi): the RAND Tablet(xii).
But even that might not be the earliest: one system I find interesting is U.S. Patent 1,117,184, “Controller”, granted to H.E. Goldberg in 1914. It used gestures (number shapes), recognized by a completely electro-mechanical computing system, to control manufacturing equipment. This figure from the patent gives you an idea of how it worked:
So, with all that prior history, what’s really new about gestures on tablets today? Or more to the point, what’s left that could be patentable about gestures?
As patent practitioners know, it all depends on exactly what the patent claims say. A qualified expert in touchscreen-gesture user interfaces may be of assistance in resolving the relevant claims.
About the Author: Jean Renard Ward is a highly experienced, MIT-educated expert witness in patent litigation. Mr. Ward’s areas of design and development expertise include multi-touch/touchscreen and tablet hardware, capacitive touch and proximity sensors, styli/electronic pens, haptics, gestures, user interfaces (UIs), touchscreen graphics, and accessibility user interfaces (blind/visually-impaired); digital rights management (DRM), digital encryption and authentication (PKI), and malware detection; and programming/coding (C/C++/Java, other systems), source-code analysis and reverse-engineering, and firmware. Clients include Google, Samsung, Ericsson, Lenovo, Motorola, Nokia, and Lucent Technologies. Mr. Ward has been granted multiple US patents. He received his degree in Computer Science and Electrical Engineering from MIT. Mr. Ward can be contacted at Rueters-Ward Services; Phone: (617) 600-4095; Cell: (781) 267-0156; Email: jrward@alum.mit.edu; Website: www.ruetersward.com
__________________________________________________________________________________________
i (Robert Carr, “The Power of PenPoint”, 1991)
ii (3Com, “Palm Pilot Handbook”, 1997)
iii (Xerox Corporation, “Alto User’s Handbook”, 1976)
iv (“Apple Macintosh User’s Handbook”, 1984)
v (A “drag” or a “pinch-to-zoom” gesture, on the other hand, does not involve tracing any particular shape. But it is hard to have a lot of different gestures without using something like different shapes.)
vi (CAD/CAM: Computer-Aided Design / Computer-Aided Manufacturing, such as electronic drafting)
vii (“Applicon CAD System with Trainable Hand-Drawn Symbol Recognition”, youtube.com)
viii (Newman and Sproull, “Principles of Interactive Computer Graphics”, Appendix VIII, 1973)
ix (M.L. Coleman, “Text editing on a graphic display device using hand-drawn proofreaders’ symbols”, 1969)
x (J.P. Haverty, “GRAIL/GPSS: Graphic On-line Modeling”, 1968)
xi (Electronic tablets actually go much further back than that, to at least 1884(!). But that is a topic for a different article.)
xii (M.R. Davis and T.O. Ellis, “The RAND Tablet: A Man-Machine Graphical Communication Device”, 1964)