ISO/IEC 30113-11:2017
Information technology — Gesture-based interfaces across devices and methods — Part 11: Single-point gestures for common system actions
Standards Content (Sample)
INTERNATIONAL STANDARD ISO/IEC 30113-11
First edition
2017-08

Information technology — Gesture-based interfaces across devices and methods —
Part 11: Single-point gestures for common system actions

Technologies de l'information — Interfaces gestuelles entre dispositifs et méthodes —
Partie 11: Gestes à point unique pour actions courantes du système

Reference number: ISO/IEC 30113-11:2017(E)
© ISO/IEC 2017
COPYRIGHT PROTECTED DOCUMENT
© ISO/IEC 2017, Published in Switzerland
All rights reserved. Unless otherwise specified, no part of this publication may be reproduced or utilized otherwise in any form or by any means, electronic or mechanical, including photocopying, or posting on the internet or an intranet, without prior written permission. Permission can be requested from either ISO at the address below or ISO’s member body in the country of the requester.
ISO copyright office
Ch. de Blandonnet 8 • CP 401
CH-1214 Vernier, Geneva, Switzerland
Tel. +41 22 749 01 11
Fax +41 22 749 09 47
copyright@iso.org
www.iso.org
Contents
Foreword
Introduction
1 Scope
2 Normative references
3 Terms and definitions
4 General
5 Types of gestures
5.1 Mediated gesture
5.2 Direct touch gesture
5.3 Non-contact gesture
5.4 Descriptions of gestures
6 Gesture adaptation
7 Movement metaphor
7.1 General
7.2 Consideration of the movement metaphor
8 Descriptions of the single-point gestures
8.1 General
8.2 Directional gestures
8.2.1 General
8.2.2 “Left” gesture
8.2.3 “Right” gesture
8.2.4 “Up” gesture
8.2.5 “Down” gesture
8.2.6 “Forward” gesture
8.2.7 “Backward” gesture
8.2.8 “Clockwise rotation” gesture
8.2.9 “Counter-clockwise rotation” gesture
8.3 Command gestures
8.3.1 General
8.3.2 “Tap” gesture
8.3.3 “Displace” gesture
8.3.4 “Double-tap” gesture
8.3.5 “Tap and hold” gesture
8.3.6 “Horizontal wave” gesture
8.3.7 “Vertical wave” gesture
9 System level functions
10 Common functions across applications
Foreword
ISO (the International Organization for Standardization) and IEC (the International Electrotechnical
Commission) form the specialized system for worldwide standardization. National bodies that are
members of ISO or IEC participate in the development of International Standards through technical
committees established by the respective organization to deal with particular fields of technical
activity. ISO and IEC technical committees collaborate in fields of mutual interest. Other international
organizations, governmental and non-governmental, in liaison with ISO and IEC, also take part in the
work. In the field of information technology, ISO and IEC have established a joint technical committee,
ISO/IEC JTC 1.
The procedures used to develop this document and those intended for its further maintenance are
described in the ISO/IEC Directives, Part 1. In particular the different approval criteria needed for
the different types of document should be noted. This document was drafted in accordance with the
editorial rules of the ISO/IEC Directives, Part 2 (see www.iso.org/directives).
Attention is drawn to the possibility that some of the elements of this document may be the subject
of patent rights. ISO and IEC shall not be held responsible for identifying any or all such patent
rights. Details of any patent rights identified during the development of the document will be in the
Introduction and/or on the ISO list of patent declarations received (see www.iso.org/patents).
Any trade name used in this document is information given for the convenience of users and does not
constitute an endorsement.
For an explanation on the voluntary nature of standards, the meaning of ISO specific terms and
expressions related to conformity assessment, as well as information about ISO's adherence to the
World Trade Organization (WTO) principles in the Technical Barriers to Trade (TBT) see the following
URL: www.iso.org/iso/foreword.html.
This document was prepared by Joint Technical Committee ISO/IEC JTC 1, Information technology,
Subcommittee SC 35, User interfaces.
A list of all parts in the ISO/IEC 30113 series can be found on the ISO website.
Introduction
A single-point gesture is generated by a user with a pointer [an input device (e.g. a mouse or a stylus) or a body part (e.g. a fingertip or a hand)] providing gesture actions using a single point of interest (POI). The gesture can be recognized and interpreted as a command that is understood by platforms, systems or applications running on information and communications technology (ICT) systems. If a gesture feels natural, convenient and comfortable to perform, the user can employ it in applications on ICT systems without spending an excessive amount of time learning and practising it.
Single-point gestures are used in several commercially available ICT systems, including personal computers, smart televisions, smartphones and video game consoles. An International Standard defining single-point gestures is needed so that users are not confused when the same gesture is associated with different commands in different applications. Variety and inconsistency among single-point gestures could cause serious usability problems in applications on ICT systems.
This document presents descriptions of single-point gestures and their corresponding gesture
commands for system level functions and common functions across applications on ICT systems. The
system level functions include functions for selecting, initiating, terminating, resuming and restarting
applications which are handled by operating systems or platforms. The common functions across
applications are also identified. These functions include navigating menus, opening and closing objects,
“help”, “undo”, “redo”, and so on.
The single-point gestures will harmonize with gestures that use multiple POIs. It is expected that users can execute applications more easily and without confusion by utilizing the single-point gestures. The gestures are to be performed by users regardless of the specific recognition technique, interaction method or device. The focus of this document is on the users and their cognitive aspects.
INTERNATIONAL STANDARD ISO/IEC 30113-11:2017(E)
Information technology — Gesture-based interfaces across
devices and methods —
Part 11:
Single-point gestures for common system actions
1 Scope
This document defines single-point gestures for common system actions used in information and
communication technology (ICT) systems. It specifies movements for clear and classified gestures
recognized by the systems and applications. The single-point gestures are performed using an input
device (e.g. a mouse, a stylus, etc.) or a body part (e.g. a fingertip, a hand, etc.). These single-point
gestures are intended to operate in a consistent manner regardless of the system, platform, application
or device.
NOTE ICT systems include, but are not limited to, digital televisions, set-top boxes, video game consoles,
communication devices, Internet devices, entertainment devices and personal computers.
2 Normative references
The following documents are referred to in the text in such a way that some or all of their content
constitutes requirements of this document. For dated references, only the edition cited applies. For
undated references, the latest edition of the referenced document (including any amendments) applies.
ISO/IEC 30113-1:2015, Information technology — User interface — Gesture-based interfaces across
devices and methods — Part 1: Framework
3 Terms and definitions
For the purposes of this document, the following terms and definitions apply.
ISO and IEC maintain terminological databases for use in standardization at the following addresses:
— IEC Electropedia: available at http://www.electropedia.org/
— ISO Online browsing platform: available at http://www.iso.org/obp
3.1
point of interest
POI
specific point location that a user utilizes to formulate a gesture
3.2
single-point gesture
sequence of actions of moving a POI (3.1) by a user which are clearly recognized by the system
3.3
multi-point gesture
sequence of actions of moving multiple POIs (3.1) by a user which are clearly recognized by the system
EXAMPLE A gesture with two hands or a gesture with multiple fingers.
4 General
A user interface conforming to this document shall be able to recognize single-point gestures within a
pre-defined space where a user performs the gestures. Once a single-point gesture is recognized by a
user interface, its corresponding gesture command shall be invoked and executed.
The gestures in this document are discussed from a user’s viewpoint. The gestures are defined regardless of recognition techniques, interaction methods, input devices and so on. This document focuses on the users who perform the gestures.
The accessibility of gestures is important to meet the needs of people with disabilities and elderly people.
EXAMPLE Gestural interactions can make use of eye-tracking and/or recognition of facial expressions for
some physically disabled users.
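As a purely illustrative reading of this clause, the sketch below shows one way a user interface might bind recognized single-point gestures to their gesture commands, so that the same gesture invokes the same command everywhere. The class, function and command names are hypothetical; only the “G11-n” gesture labels come from Clause 8 of this document.

```typescript
// Non-normative sketch: dispatching a recognized single-point gesture to
// its corresponding command, as required by Clause 4.

type GestureId = string; // e.g. "G11-1" (the "Left" gesture in Clause 8)

type CommandHandler = () => void;

class GestureCommandMap {
  private handlers = new Map<GestureId, CommandHandler>();

  // Associate a recognized gesture with its command.
  register(id: GestureId, handler: CommandHandler): void {
    this.handlers.set(id, handler);
  }

  // Called by the recognizer once a gesture reaches its final state;
  // the corresponding command is then invoked and executed.
  invoke(id: GestureId): void {
    const handler = this.handlers.get(id);
    if (handler) handler();
  }
}

// Hypothetical binding: the "Left" gesture triggers a "move left" command.
const commands = new GestureCommandMap();
commands.register("G11-1", () => console.log("move left"));
commands.invoke("G11-1");
```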
5 Types of gestures
5.1 Mediated gesture
Mediated gestures consist of a sequence of movements made with a single-point input device, such as a mouse, a joystick or a trackball, that is physically connected to and provides input data to a system. In order to distinguish a gesture input from other movements of the input device, such as mouse pointing, it is important for the user to know how to activate the gesture input.
EXAMPLE The basic actions of mediated gestures using a mouse include “clicking”, “double-clicking”, “dragging”, and so on. In the case of a mouse with two buttons, the right button can be used for activating a gesture. The gesture then enters the initial state.
In order to minimize errors from misinterpreting a gesture input, a system ignores any unintentional movement of a pointing device while gestures are being generated. For a mediated gesture using a mouse, the minimum movement of the mouse pointer shall exceed a specific number of pixels (e.g. 30 pixels) along the horizontal, vertical or diagonal direction, as sketched below.
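A minimal sketch of this minimum-movement rule, assuming a 30-pixel threshold as in the example above; the type and function names are illustrative only.

```typescript
// Pointer displacements below the threshold are treated as unintentional
// and ignored when generating gestures.
const MIN_MOVEMENT_PX = 30; // system-specific threshold (example value)

interface PointerSample { x: number; y: number; }

// Returns the displacement only if it is large enough to count as a stroke.
function intentionalDelta(
  start: PointerSample,
  current: PointerSample
): { dx: number; dy: number } | null {
  const dx = current.x - start.x;
  const dy = current.y - start.y;
  // The Euclidean distance covers horizontal, vertical and diagonal strokes.
  return Math.hypot(dx, dy) >= MIN_MOVEMENT_PX ? { dx, dy } : null;
}
```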
After a gesture input is activated, the system enters its intermediate states. A visual trail (which presents the pointer’s movements and guides) should be displayed on an output device, such as a screen, in order to provide feedback and/or feed-forward to the user. This can improve the usability of the system or the platform. However, for a gesture command that involves continuous manipulation of a POI (e.g. the “Displace” gesture), no visual trail should be displayed because it may interfere with a pointer or a cursor.
5.2 Direct touch gesture
A direct touch gesture is a sequence of movements (e.g. pinching, swiping, etc.) made by a user with a body part (e.g. a finger) or a physical object (e.g. a stylus) on an input device (e.g. a touch pad or a touch screen). As soon as the user touches the pad or the screen, the initial state of a gesture starts. Its intermediate states then begin as the movements of the touch (e.g. swiping, pressing, holding, etc.) proceed.
There are several elementary actions of direct touch gestures that are important to a range of gestures; a classification sketch follows the list.
— “Tapping”: The tapping action is performed by tapping a pad or a screen with a body part or a physical object manipulated by a user. It can be defined as a brief contact between the user and the target device: it starts with a contact and is rapidly followed by a lack of contact. The input device might sense the pressure from the body part or the physical object. This action can be used as a selector of an object. A “double-tapping” action is performed as consecutive tapping actions and can be used as an activator or initiator of an application.
— “Holding”: The holding action starts with a direct contact (i.e. “Tapping”) on a pad or a screen by a user. The position and the area of the tapping body part or physical object then do not change over a specific time period. The difference between the holding action and the tapping action is the duration of contact with the pad or the screen.
— “Pressing”: The pressing action is performed as a combination of the tapping action and the holding action on a pad or a screen with a body part or a physical object. The input device might sense the pressure of the touch or changes in the touching area involved in the touch.
— “Swiping”: The swiping actions are similar to the four directional gestures of Table 1. The major difference is the speed at which the gestures are performed: the swiping action must be performed quickly. The specific speed of the swiping action shall be defined by the corresponding input device.
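The sketch below illustrates how these elementary actions might be classified from a touch’s duration, travel and speed. All thresholds are hypothetical; a conforming device defines its own values (e.g. the swiping speed).

```typescript
// Non-normative classifier for the elementary direct touch actions.
// "Pressing" would additionally use pressure sensing, and "double-tapping"
// a maximum interval between two taps; both are omitted for brevity.

interface TouchSummary {
  durationMs: number;   // time between contact and release
  travelPx: number;     // distance moved while in contact
  speedPxPerMs: number; // average movement speed
}

type TouchAction = "tapping" | "holding" | "swiping" | "unknown";

function classifyTouch(t: TouchSummary): TouchAction {
  const TAP_MAX_MS = 200;      // brief contact, rapidly released
  const HOLD_MIN_MS = 600;     // position unchanged over a period of time
  const MOVE_TOLERANCE_PX = 10;
  const SWIPE_MIN_SPEED = 0.5; // px/ms; device-defined in practice

  if (t.travelPx <= MOVE_TOLERANCE_PX) {
    if (t.durationMs <= TAP_MAX_MS) return "tapping";
    if (t.durationMs >= HOLD_MIN_MS) return "holding";
  } else if (t.speedPxPerMs >= SWIPE_MIN_SPEED) {
    return "swiping"; // a fast directional stroke
  }
  return "unknown";
}
```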
5.3 Non-contact gesture
A non-contact gesture involves a sequence of movements of a body part or a physical object (one that is not physically connected to the system, or is connected wirelessly) by a user to invoke a command. It is assumed that the gesture does not touch an input screen or an input surface of the system. The non-contact gesture is generally performed in a two-dimensional or three-dimensional (2D or 3D) space in which the gesture can be clearly recognized by the sensors or cameras of the system.
5.4 Descriptions of gestures
The representations of the gestures in this document shall conform to the following rules.
— If a position of a body part is involved, the gesture representation involves its starting position
and ending position. A user is expected to make a clear distinction between a starting point and an
ending point of each non-contact gesture for the clear recognition of the gesture.
— This document defines the starting point of the gesture as an initial state and the ending point of the
gesture as a final state.
— The starting point is visually represented by a single circle with a solid line (which represents a POI
as shown in Figure 1), while the ending point is represented by a circle with a dotted line as shown
in Figure 2.
Figure 1 — Single circle represents a starting point of a gesture
Figure 2 — Ending point of a gesture represented by a circle with a dotted line
— An arrow is used for representing a single stroke with a specific direction. The following example presents a gesture of moving a POI from its current position (the starting point) to its left until it arrives at the ending point.
EXAMPLE 1 The “Left” gesture (G11-1 in Clause 8).
— The number of arrows represents the speed of performing a gesture. A higher speed is expressed by adding one more arrow, as shown in Figure 3. The two arrows express a “faster” stroke of the “Left” gesture; this describes a “swiping” direct touch gesture, which is a variation of the “Left” gesture.
Figure 3 — Variation of the "Left" gesture, i.e. “Left swiping”
© ISO/IEC 2017 – All rights reserved 3
---------------------- Page: 8 ----------------------
ISO/IEC 30113-11:2017(E)
— A combination of lines and arrows is used for expressing repetitive strokes of waving. If the number of repetitive strokes to be recognized is predefined, it should be specified using “×n”, where “n” denotes the number of repetitions, as shown below.
EXAMPLE 2 The “Horizontal wave” gesture (G11-13 in Clause 8).
— An action of holding a POI is illustrated as the starting point icon and an icon of an hourglass. The
hourglass icon symbolizes a certain period of time for holding.
EXAMPLE 3 The “Tap and hold” gesture (G11-12 in Clause 8).
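One possible (non-normative) way to encode the notation above as data is sketched below: a gesture description records the directional strokes, their relative speed (the extra arrow), a repetition count (the “×n”) and a hold duration (the hourglass icon). All field names and values are illustrative.

```typescript
// Hypothetical data model for the gesture-description notation of 5.4.

type Direction = "left" | "right" | "up" | "down" | "cw" | "ccw";

interface Stroke {
  direction: Direction;
  fast: boolean; // two arrows = a "faster" stroke (e.g. "Left swiping")
}

interface GestureDescription {
  id: string;           // e.g. "G11-12" for the "Tap and hold" gesture
  strokes: Stroke[];    // directional strokes between start and end points
  repetitions?: number; // "n" in "×n" for repetitive waving strokes
  holdMs?: number;      // hourglass icon: holding a POI for a period of time
}

// Example: "Tap and hold" has no directional stroke, only a holding period.
const tapAndHold: GestureDescription = {
  id: "G11-12",
  strokes: [],
  holdMs: 800, // hypothetical holding period
};
```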
6 Gesture adaptation
A 3D non-contact gesture can be cognitively complex to perform because it requires the user and the device to combine several dimensions. Users therefore often execute a 3D non-contact gesture in a two-dimensional (2D) space.
EXAMPLE When a user has to produce a 3D non-contact gesture of drawing a sphere, he/she usually
produces a circle or a set of circles on a virtual 2D temporary plane.
A user naturally optimizes his/her gestures from a kinematic point of view. This is another reason why users express 3D non-contact gestures with their body parts in a 2D plane: it costs less energy. Muscles, blood vessels, bone articulations and so on favour kinematically optimized movements.
A gesture-based interface should be able to recognize and interpret such simplified and/or optimized gestures. The gesture-based interface should be adapted so as to support the user in performing more convenient, comfortable and natural gestures.
Experts or advanced users tend to optimize their gesture performances. They tend to move less and to
execute shorter strokes. Therefore, it is important for an interface to offer and understand predefined
gestures for the expert or advanced users.
Users with disabilities or users engaged in simultaneous multiple tasks can be limited in performing gestures: some users might be able to move only a single hand, some might be able to move their hand but not their fingers, some might be able to move only their eyes, and so on. Such limitations might be due to a physiological or cognitive disability or to another parallel task. An interface shall offer a way of adapting gestures (such as “eye-tracking”, “recognition of facial expressions”, and so on) so that it provides interaction opportunities equivalent to those of non-disabled users.
Gestures may carry different meanings depending on the cultural and linguistic background of their users. A gesture-based interface should therefore offer a way of adapting gestures based on personal preferences, ability, and cultural and linguistic adaptability.
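As an illustration of such adaptation, the sketch below models a hypothetical per-user profile that a gesture-based interface might consult when recognizing gestures; every field is an assumption for illustration, not a requirement of this document.

```typescript
// Hypothetical gesture-adaptation profile suggested by Clause 6.

interface GestureAdaptationProfile {
  // Alternative input channel for users who cannot use a pointer.
  inputChannel: "pointer" | "eye-tracking" | "facial-expression";
  allowSimplifiedStrokes: boolean; // accept 2D projections of 3D gestures
  expertMode: boolean;             // accept shorter, faster strokes
  locale: string;                  // cultural/linguistic adaptation
}

const profile: GestureAdaptationProfile = {
  inputChannel: "eye-tracking",
  allowSimplifiedStrokes: true,
  expertMode: false,
  locale: "en-GB",
};
```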
7 Movement metaphor
7.1 General
A user views digital content such as a document or an image through a viewing window of a screen on
an ICT system. When the size of the viewing window is smaller than the size of the content, either the
window or the content should be moved. There are two metaphors for moving a POI for viewing the
content:
— moving the viewing window over the content;
— moving the content with the fixed viewing window.
The first metaphor is “movement of a viewing window”, which allows the user to navigate the content
by moving the viewing window.
As illustrated in Figure 4, the user displaces the viewing window to a specific position to view a part of the content by moving a POI. In Figure 4, the user views the parts of the content “F”, “G”, “2” and “3” in the viewing window. If the user moves the viewing window to a new position along the indicated direction, the user will view the parts of the content “C”, “D”, “G” and “H”.
Key
1 viewing window
a Moving the viewing window by the “move” command from a user.
b Content to be viewed.
c Viewing window in a new position.
Figure 4 — Movement of a viewing window
The second metaphor is “movement of content”, which allows the user to navigate the content by moving
a POI. Figure 5 illustrates moving the content by the user. If the user initially views “F”, “G”, “2”, and “3”
through a viewing window, the user will view “1”, “2”, “5”, and “6” after moving a POI along the direction
under the “movement of content” metaphor.
Key
1 viewing window
a Moving the content by the “move” command from a user.
b Content to be viewed.
c New content presented through the viewing window.
Figure 5 — Movement of content
7.2 Consideration of the movement metaphor
The movement metaphor is used in various ICT systems and applications, which utilize one metaphor or the other depending on their features. When a user manipulates a specific object, the “movement of content” metaphor is applied to that content. On the other hand, when the user navigates the whole content, the “movement of a viewing window” metaphor is applied. In order to help the user understand which type of movement metaphor is in use, the system should indicate what is currently manipulated (using a focus or other audio/visual effects).
It follows that a gesture of moving a POI produces different results according to the movement metaphor in use, as the sketch below illustrates. This should be considered when the gestures and their associated functions are discussed in this document.
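The sketch below makes the difference concrete: the same POI movement yields opposite visible results under the two metaphors of 7.1. The coordinate convention and names are illustrative, not part of this document.

```typescript
// Non-normative sketch contrasting the two movement metaphors of Clause 7.

interface Viewport { x: number; y: number; } // window position over content

type Metaphor = "window" | "content";

// Apply a POI movement (dx, dy) under the chosen metaphor and return the
// new position of the viewing window relative to the content.
function applyMove(v: Viewport, dx: number, dy: number, m: Metaphor): Viewport {
  return m === "window"
    ? { x: v.x + dx, y: v.y + dy }  // the window moves over fixed content
    : { x: v.x - dx, y: v.y - dy }; // the content moves under a fixed window
}

// Moving a POI to the left (dx = -1) shifts the view in opposite directions:
console.log(applyMove({ x: 5, y: 5 }, -1, 0, "window"));  // { x: 4, y: 5 }
console.log(applyMove({ x: 5, y: 5 }, -1, 0, "content")); // { x: 6, y: 5 }
```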
8 Descriptions of the single-point gestures
8.1 General
Two types of gestures are defined in this document: “directional” gestures and “command” gestures. The directional gestures are formulated using directional movements of a POI. The movements convey the directional information of the gestures, such as “moving an object to the left”. The command gestures convey specific command meanings, such as “initiation of an application”, “termination of an application” and so on.
For each gesture in this document, an initial state, an intermediate state and a final state shall be
defined. A starting point of a gesture (where a user's POI activates the gesture and it is recognized
by an ICT system or an application) denotes an initial state. An ending point of a gesture (where the
gesture terminates and is recognized) represents a final state. There shall be a clear distinction
between a starting point and an ending point of each single-point gesture for the clear recognition of
gestures. An intermediate state is expressed by movements between a starting point and an ending
point. Therefore, a gesture-based interface shall recognize and classify the user’s movements as an
initial state, an intermediate state, and a final state of a gesture.
Key
1 initial state
2 intermediate state
3 final state
Figure 6 — State transition diagram of the gestures
The gestures in this document are expressed using the template defined in ISO/IEC 30113-1:2015, A.3.5. The state transition diagram in Figure 6 is referred to by the descriptions of the gestures in this document.
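A minimal, non-normative sketch of the state transitions of Figure 6 follows; the event names and recognition criteria are assumptions, as they depend on the device and recognition technique.

```typescript
// A gesture passes from an initial state through intermediate states to a
// final state (Figure 6). "idle" represents the absence of a gesture.

type GestureState = "idle" | "initial" | "intermediate" | "final";

type GestureEvent = "activate" | "move" | "complete" | "cancel";

function nextState(state: GestureState, event: GestureEvent): GestureState {
  switch (state) {
    case "idle":
      // The starting point is recognized: the gesture is activated.
      return event === "activate" ? "initial" : "idle";
    case "initial":
    case "intermediate":
      if (event === "move") return "intermediate"; // movements between points
      if (event === "complete") return "final";    // ending point recognized
      return "idle"; // cancelled: movements not classified as a gesture
    case "final":
      return "idle"; // command invoked; ready for the next gesture
  }
}
```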
8.2 Directional gestures
8.2.1 General
The most primitive single-point gestures are the directional gestures. As summarized in Table 1, the gestures denoting the four directional movements of a POI (“left”, “right”, “up” and “down”) are the simplest directional gestures.
If a user performs a mediated gesture using a mouse, for example, he/she presses a right button of the
mouse to start the gesture. Then, a POI should be moved to one of the four directions using the mouse. If
the right bu
...