Tangibles
@@ -119,16 +119,16 @@ As presence, \SoE in \AR is a recent topic and little is known about its percept
\subsection{Direct Hand Manipulation in AR}
Both \AR/\VR and haptic systems are able to render virtual objects and environments as sensations displayed to the user.
However, as presented in \figref[introduction]{interaction-loop}, the user must be able to manipulate the virtual objects and environments to complete the loop through a \UI, \eg using a hand-held controller, a tangible object, or even directly with the hands.
An \emph{interaction technique} is then required to map these user inputs to actions on the \VE~\cite{laviola20173d}.

\subsubsection{User Interfaces and Interaction Techniques}

For a user to interact with a computer system, they first perceive the state of the system and then act on it with inputs through a \UI.
An input \UI can be either \emph{active sensing}, a physically held or worn device such as a mouse, a touchscreen, or a hand-held controller, or \emph{passive sensing}, requiring no physical contact, such as eye tracking, voice recognition, or hand tracking.
The sensor information gathered by the \UI is then translated into actions within the computer system by an interaction technique.
For example, a cursor on a screen can be moved either with a mouse or with the arrow keys of a keyboard, and a two-finger swipe on a touchscreen can either scroll or zoom an image.
Choosing useful and efficient \UIs and interaction techniques is crucial for the user experience and the tasks that can be performed within the system~\cite{laviola20173d}.

\fig[0.5]{interaction-technique}{An interaction technique maps user inputs to actions within a computer system. Adapted from \textcite{billinghurst2005designing}.}
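
As a minimal illustration (hypothetical Python, not taken from the cited works), the following sketch shows how an interaction technique translates raw \UI events from two different input devices into the same cursor-move action:
\begin{verbatim}
# Hypothetical sketch: an interaction technique maps raw UI events
# to the same system action (moving a cursor), whatever the device.

ARROW_STEP = 10  # pixels per arrow-key press (assumed constant)

def move_cursor(cursor, dx, dy):
    # System action: translate the cursor by (dx, dy) pixels.
    cursor["x"] += dx
    cursor["y"] += dy

def on_mouse_delta(cursor, dx, dy):
    # Active-sensing UI: relative mouse motion maps one-to-one.
    move_cursor(cursor, dx, dy)

def on_arrow_key(cursor, key):
    # Same action, different UI: key presses become fixed steps.
    steps = {"left": (-ARROW_STEP, 0), "right": (ARROW_STEP, 0),
             "up": (0, -ARROW_STEP), "down": (0, ARROW_STEP)}
    move_cursor(cursor, *steps[key])

cursor = {"x": 0, "y": 0}
on_mouse_delta(cursor, 4, -2)  # the mouse moves the cursor
on_arrow_key(cursor, "right")  # an arrow key moves the same cursor
\end{verbatim}
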
@@ -178,9 +178,18 @@ Such \emph{reality based interaction}~\cite{jacob2008realitybased} in immersive
\paragraph{Manipulating with Tangibles}
As \AR integrates visual virtual content into the user's perception of the \RE, it can involve surrounding real objects as a \UI: to visually augment them, \eg by superimposing a visual texture~\cite{gupta2020replicate}, and to use them as physical proxies supporting the interaction with \VOs~\cite{ishii1997tangible}.
According to \textcite{billinghurst2005designing}, each \VO is coupled with a tangible object and physically manipulated through it, providing direct, efficient, and seamless interactions with both the real and virtual content.
This technique is similar to mapping the movements of a physical mouse to a virtual cursor on a screen.

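As a rough sketch of this coupling (hypothetical Python, all names assumed), the paired \VO simply mirrors the tracked pose of its tangible every frame:
\begin{verbatim}
# Hypothetical sketch: a virtual object coupled with a tangible
# mirrors the tangible's tracked 6-DoF pose every frame, much as a
# virtual cursor mirrors a physical mouse. All names are assumed.

from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple     # (x, y, z) in metres
    orientation: tuple  # quaternion (x, y, z, w)

@dataclass
class VirtualObject:
    pose: Pose

def update(vo: VirtualObject, tracked_tangible_pose: Pose) -> None:
    # One-to-one coupling: the VO follows the tangible directly.
    vo.pose = tracked_tangible_pose

vo = VirtualObject(Pose((0.0, 0.0, 0.0), (0, 0, 0, 1)))
update(vo, Pose((0.1, 0.0, 0.3), (0, 0, 0, 1)))  # tangible moved
\end{verbatim}
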
Methods have been developed to automatically pair the \VOs to render with available tangibles of similar shape and size, adapting the virtual content to the physical props~\cite{simeone2015substitutional,hettiarachchi2016annexing}.
The issue with these \enquote{space-multiplexed} interfaces is the high number and diversity of tangibles required.
An alternative is to use a single \enquote{universal} tangible object, such as a cube~\cite{issartel2016tangible} or a sphere~\cite{englmeier2020tangible}, used like a hand-held controller.
Such \enquote{time-multiplexed} interfaces require interaction techniques allowing the user to pair the tangible with any \VO, \eg by placing the tangible inside the \VO and pressing it with the fingers~\cite{issartel2016tangible}, similarly to a real grasp (\secref{grasp_types}).
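
A hypothetical sketch (Python; the scoring and threshold are assumptions, not the methods of the cited works) of how such automatic pairing could rank available tangibles against a \VO by size and shape similarity:
\begin{verbatim}
# Hypothetical sketch: rank available tangibles against a virtual
# object by size and shape similarity, then pick the best match.
# Scoring and threshold are assumptions, not the cited methods.

def dissimilarity(vo, tangible):
    # Relative size difference (0 means identical size).
    size_diff = abs(vo["size"] - tangible["size"]) / vo["size"]
    # Crude shape term: 0 if the shape categories match, 1 otherwise.
    shape_diff = 0.0 if vo["shape"] == tangible["shape"] else 1.0
    return size_diff + shape_diff

def pair(vo, tangibles, max_cost=0.5):
    best = min(tangibles, key=lambda t: dissimilarity(vo, t))
    # Reject pairings too dissimilar to feel plausible in the hand.
    return best if dissimilarity(vo, best) <= max_cost else None

vo = {"shape": "sphere", "size": 0.10}       # a 10 cm virtual ball
props = [{"shape": "sphere", "size": 0.09},  # a 9 cm real sphere
         {"shape": "cube", "size": 0.10}]    # a 10 cm real cube
print(pair(vo, props))  # -> the 9 cm sphere
\end{verbatim}
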
Still, the visual rendering of the \VO and the haptic sensations of the tangible can be inconsistent.
When performing a precision grasp (\secref{grasp_types}) in \VR, a relative difference between the tangible and the \VO becomes noticeable only beyond a certain threshold: \percent{6} for the object width, \percent{44} for the surface orientation, and \percent{67} for the surface curvature~\cite{detinguy2019how}.
Similarly, such discrepancies between tangibles and \VOs have been investigated in immersive \OST-\AR~\cite{kahl2021investigation,kahl2022influence,kahl2023using}.
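As a worked example (assuming the threshold is relative to the \VO width), a \VO of width $w_v = 10\,\mathrm{cm}$ could thus be rendered with a tangible of width $w_t$ without the mismatch being noticed as long as
\[
\frac{\lvert w_t - w_v \rvert}{w_v} \le 0.06
\quad\Longrightarrow\quad
9.4\,\mathrm{cm} \le w_t \le 10.6\,\mathrm{cm}.
\]
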
Tangible interfaces thus raise a threefold problem: one tangible is needed per \VO, the automatic pairing does not always succeed~\cite{hettiarachchi2016annexing}, and the number of tangibles to keep available grows accordingly.

@@ -59,6 +59,8 @@
\acronym{RE}{real environment}
\acronym{RV}{reality-virtuality}
\acronym{SoE}{sense of embodiment}
\acronym{TUI}{tangible user interface}
\acronym{UI}{user interface}
\acronym{v}{visual}
\acronym{VCA}{voice-coil actuator}
\acronym{VE}{virtual environment}