Cite pages of books
@@ -58,7 +58,7 @@ Yet, the user experience in \AR is still highly dependent on the display used.
\label{ar_displays}

To experience virtual content combined and registered with the \RE, an output \UI that displays the \VE to the user is necessary.
-There is a large variety of \AR displays, with different methods of combining the real and virtual content and different locations on the \RE or the user \cite{billinghurst2015survey}.
+There is a large variety of \AR displays, with different methods of combining the real and virtual content and different locations on the \RE or the user \cite[p.126]{billinghurst2015survey}.

In \emph{\VST-\AR}, the virtual images are superimposed on images of the \RE captured by a camera \cite{marchand2016pose}, and the combined real-virtual image is displayed on a screen to the user, as illustrated in \figref{itoh2022indistinguishable_vst}, \eg \figref{hartl2013mobile}.
This augmented view through the camera has the advantage of complete control over the real-virtual combination, such as mutual occlusion between real and virtual objects \cite{macedo2023occlusion}, coherent lighting, and the absence of delay between the real and virtual images \cite{kruijff2010perceptual}.
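
The video see-through combination can be sketched in a few lines (a minimal illustration of the principle, not an implementation from the cited works; the function name and the mask-based renderer output are assumptions): the renderer produces a virtual image and a coverage mask, and the virtual pixels simply replace the corresponding camera pixels. Calibration, lens-distortion correction, and depth-based mutual occlusion are omitted.

\begin{verbatim}
import numpy as np

def composite_vst(camera_frame, virtual_rgb, virtual_mask):
    # camera_frame: (H, W, 3) uint8 image of the real environment
    # virtual_rgb:  (H, W, 3) uint8 rendering of the virtual environment
    # virtual_mask: (H, W) bool, True where a virtual pixel was drawn
    out = camera_frame.copy()
    out[virtual_mask] = virtual_rgb[virtual_mask]  # virtual replaces real
    return out
\end{verbatim}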
@@ -68,7 +68,7 @@ An \emph{\OST-\AR} directly combines the virtual images with the real world view
These displays feature a direct, preserved view of the \RE, at the cost of a more difficult registration (spatial misalignment or temporal latency between the real and virtual content) \cite{grubert2018survey} and of mutual real-virtual occlusion \cite{macedo2023occlusion}.

Finally, \emph{projection-based \AR} overlays the virtual images on the real world using a projector, as illustrated in \figref{roo2017one_2}, \eg \figref{roo2017inner}.
-It does not require the user to wear the display, but it needs a real surface to project the virtual content on and is vulnerable to shadows cast by the user or by real objects \cite{billinghurst2015survey}.
+It does not require the user to wear the display, but it needs a real surface to project the virtual content on and is vulnerable to shadows cast by the user or by real objects \cite[p.137]{billinghurst2015survey}.

\begin{subfigs}{ar_displays}{Simplified operating diagram of \AR display methods. }[][
\item \VST-\AR \cite{itoh2022indistinguishable}.
@@ -83,7 +83,7 @@ It doesn't require the user to wear the display, but requires a real surface to

Regardless of the \AR display, it can be placed at different locations \cite{bimber2005spatial}, as shown in \figref{roo2017one_1}.
\emph{Spatial \AR} usually consists of projection-based displays placed at a fixed location (\figref{roo2017inner}), but it can also be an \OST or \VST \emph{fixed window} (\figref{lee2013spacetop}).
-Alternatively, \AR displays can be \emph{hand-held}, like a \VST smartphone (\figref{hartl2013mobile}), or body-attached, like a micro-projector used as a flashlight \cite{billinghurst2015survey}.
+Alternatively, \AR displays can be \emph{hand-held}, like a \VST smartphone (\figref{hartl2013mobile}), or body-attached, like a micro-projector used as a flashlight \cite[p.141]{billinghurst2015survey}.
Finally, \AR displays can be head-worn, like \VR \emph{headsets} or glasses, providing a highly immersive and portable experience.
%Smartphones, shipped with sensors, computing resources, and algorithms, are the most common \AR displays today, but research and development promise more immersive and interactive \AR with headset displays \cite{billinghurst2021grand}.
@@ -145,7 +145,7 @@ In all examples of \AR applications shown in \secref{ar_applications}, the user
\label{interaction_techniques}

For a user to interact with a computer system (desktop, mobile, \AR, etc.), they first perceive the state of the system and then act upon it through an input \UI.
-Input \UIs can rely either on \emph{active sensing}, with a held or worn device such as a mouse, a touch screen, or a hand-held controller, or on \emph{passive sensing}, which does not require any contact, such as eye trackers, voice recognition, or hand tracking \cite{laviolajr20173d}.
+Input \UIs can rely either on \emph{active sensing}, with a held or worn device such as a mouse, a touch screen, or a hand-held controller, or on \emph{passive sensing}, which does not require any contact, such as eye trackers, voice recognition, or hand tracking \cite[p.294]{laviolajr20173d}.
The information gathered from the sensors by the \UI is then translated into actions within the computer system by an \emph{interaction technique} (\figref{interaction-technique}).
For example, a cursor on a screen can be moved either with a mouse or with the arrow keys on a keyboard, and a two-finger swipe on a touchscreen can be used to scroll or zoom an image.
Choosing useful and efficient \UIs and interaction techniques is crucial for the user experience and the tasks that can be performed within the system.
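
How an interaction technique decouples the input \UI from the resulting action can be shown with a small sketch (a hypothetical example; all names and parameters are ours, not from the cited works): two techniques translate different sensor data, a mouse displacement and an arrow-key press, into the same cursor-movement action.

\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Cursor:
    x: float = 0.0
    y: float = 0.0

def mouse_technique(cursor, dx, dy, gain=1.5):
    # Relative mapping: sensed displacement scaled by a transfer function.
    cursor.x += gain * dx
    cursor.y += gain * dy

def arrow_key_technique(cursor, key, step=10.0):
    # Discrete mapping: one key press moves the cursor by a fixed step.
    moves = {"left": (-step, 0), "right": (step, 0),
             "up": (0, -step), "down": (0, step)}
    dx, dy = moves[key]
    cursor.x += dx
    cursor.y += dy

cursor = Cursor()
mouse_technique(cursor, dx=4.0, dy=-2.0)  # same action...
arrow_key_technique(cursor, "right")      # ...through two techniques
\end{verbatim}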
@@ -155,7 +155,7 @@ Choosing useful and efficient \UIs and interaction techniques is crucial for the
\subsubsection{Tasks with Virtual Environments}
\label{ve_tasks}

-\textcite{laviolajr20173d} classify interaction techniques into three categories based on the tasks they enable users to perform: manipulation, navigation, and system control.
+\textcite[p.385]{laviolajr20173d} classify interaction techniques into three categories based on the tasks they enable users to perform: manipulation, navigation, and system control.
\textcite{hertel2021taxonomy} propose a taxonomy of interaction techniques specifically for immersive \AR.

The \emph{manipulation tasks} are the most fundamental tasks in \AR and \VR systems, and the building blocks for more complex interactions.
@@ -196,7 +196,7 @@ As of today, an immersive \AR system tracks itself with the user in \ThreeD, usi
It enables the \VE to be registered with the \RE, and the user simply moves to navigate within the virtual content.
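This registration can be summarised with the standard pinhole projection model used in pose-estimation surveys such as \cite{marchand2016pose}: with $\mathbf{K}$ the camera intrinsics, $(\mathbf{R}, \mathbf{t})$ the tracked pose, and $\mathbf{p}_{world}$ a point of the \RE in homogeneous coordinates,
\[
\mathbf{p}_{image} \sim \mathbf{K} \, [\mathbf{R} \mid \mathbf{t}] \, \mathbf{p}_{world} ,
\]
so virtual content rendered with the same model remains anchored to the \RE as the user moves (a textbook formulation, given here for reference rather than taken from the specific works cited above).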
%This tracking and mapping of the user and \RE into the \VE is named the \enquote{extent of world knowledge} by \textcite{skarbez2021revisiting}, \ie to what extent the \AR system knows about the \RE and is able to respond to changes in it.
However, direct hand manipulation of virtual content is a challenge that requires specific interaction techniques \cite{billinghurst2021grand}.
-It is often achieved using two interaction techniques: \emph{tangible objects} and \emph{virtual hands} \cite{billinghurst2015survey,hertel2021taxonomy}.
+It is often achieved using two interaction techniques: \emph{tangible objects} and \emph{virtual hands} \cite[p.165]{billinghurst2015survey,hertel2021taxonomy}.

\subsubsection{Manipulating with Tangibles}
\label{ar_tangibles}
@@ -232,16 +232,17 @@ Similarly, in \secref{tactile_rendering} we described how a material property (\
\subsubsection{Manipulating with Virtual Hands}
\label{ar_virtual_hands}

-Natural \UIs allow the user to use their body movements directly as inputs to the \VE \cite{billinghurst2015survey}.
-Our hands allow us to manipulate real everyday objects with both strength and precision (\secref{grasp_types}), so virtual hand interaction techniques seem to be the most natural way to manipulate virtual objects \cite{laviolajr20173d}.
+Natural \UIs allow the user to use their body movements directly as inputs to the \VE, as defined by \textcite[p.172]{billinghurst2015survey}.
+In daily life, our hands allow us to manipulate real objects with both strength and precision (\secref{grasp_types}), so virtual hand interaction techniques seem to be the most natural way to manipulate virtual objects \cite[p.400]{laviolajr20173d}.
+This approach is also called mid-air interaction.
Initially tracked by active sensing devices such as gloves or controllers, hands can now be tracked in real time using cameras and computer vision algorithms natively integrated into \AR/\VR headsets \cite{tong2023survey}.

-The user's hand is therefore tracked and reconstructed as a \emph{virtual hand} model in the \VE \cite{billinghurst2015survey,laviolajr20173d}.
+The user's hand is therefore tracked and reconstructed as a \emph{virtual hand} model in the \VE \cite[p.405]{laviolajr20173d}.
The simplest models represent the hand as a rigid \ThreeD object that follows the movements of the real hand with \qty{6}{\DoF} (position and orientation in space) \cite{talvas2012novel}.
An alternative is to model only the fingertips (\figref{lee2007handy}) or the whole hand (\figref{hilliges2012holodesk_1}) as points.
The most common technique is to reconstruct all the phalanges of the hand as an articulated kinematic model (\secref{hand_anatomy}) \cite{borst2006spring}.
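
A deliberately simplified illustration of such an articulated model (a planar sketch of ours; real hand models like \cite{borst2006spring} are \ThreeD and coupled to a physics simulation): each phalanx adds a rotation relative to its parent joint, and the fingertip position follows by forward kinematics.

\begin{verbatim}
import math

def fingertip_position(phalanx_lengths, joint_angles, base=(0.0, 0.0)):
    # Planar forward kinematics along one finger: lengths in metres,
    # flexion angles in radians, each relative to the parent phalanx.
    x, y = base
    angle = 0.0
    for length, joint in zip(phalanx_lengths, joint_angles):
        angle += joint
        x += length * math.cos(angle)
        y += length * math.sin(angle)
    return x, y

# Rough index finger: proximal, middle, distal phalanges, slightly flexed.
print(fingertip_position([0.040, 0.025, 0.020], [0.2, 0.3, 0.2]))
\end{verbatim}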
-The contacts between the virtual hand model and the \VOs are then simulated using heuristic or physics-based techniques \cite{laviolajr20173d}.
+The contacts between the virtual hand model and the \VOs are then simulated using heuristic or physics-based techniques \cite[p.405]{laviolajr20173d}.
Heuristic techniques use rules to determine the selection, manipulation, and release of a \VO (\figref{piumsomboon2013userdefined_1}).
However, they produce unrealistic behaviour and are limited to the cases predicted by the rules.
Physics-based techniques simulate forces at the points of contact between the virtual hand and the \VO.
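
As an example of the heuristic family (a hypothetical pinch rule; thresholds and names are ours, not a specific published technique): a \VO is selected when the thumb and index fingertips pinch within its bounds, follows the pinch midpoint while the pinch is held, and is released when the fingers open.

\begin{verbatim}
import numpy as np

PINCH_ON, PINCH_OFF = 0.02, 0.04  # metres; assumed hysteresis thresholds

class HeuristicGrab:
    def __init__(self):
        self.held = False

    def update(self, thumb, index, obj_center, obj_radius):
        # thumb, index, obj_center: (3,) arrays of tracked positions.
        pinch = np.linalg.norm(thumb - index)
        midpoint = (thumb + index) / 2
        if not self.held:
            # Selection rule: pinch closes while the fingers are on the object.
            if pinch < PINCH_ON and np.linalg.norm(midpoint - obj_center) < obj_radius:
                self.held = True
        elif pinch > PINCH_OFF:
            self.held = False  # release rule: the fingers open
        # Manipulation rule: while held, the object follows the pinch midpoint.
        return midpoint if self.held else obj_center
\end{verbatim}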
@@ -144,7 +144,7 @@ Participants signed an informed consent, including the declaration of having no
\subsection{Collected Data}
\label{metrics}

-Inspired by \textcite{laviolajr20173d}, we collected the following metrics during the experiment:
+Inspired by \textcite[p.674]{laviolajr20173d}, we collected the following metrics during the experiment:
\begin{itemize}
\item \response{Completion Time}, defined as the time elapsed between the first contact with the virtual cube and its correct placement inside the target volume; as subjects were asked to complete the tasks as fast as possible, lower completion times mean better performance.
\item \response{Contacts}, defined as the number of separate times the user's hand makes contact with the virtual cube; in both tasks, a lower number of contacts means a smoother, continuous interaction with the object (see the logging sketch below).
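
For illustration, both metrics can be derived from a stream of logged samples as in the sketch below (a hypothetical logging scheme; the sample format is an assumption, as the actual apparatus is not detailed in this excerpt): completion time runs from the first contact to the placement event, and contacts counts the rising edges of the hand-cube contact state.

\begin{verbatim}
# Each sample: (timestamp in s, hand_touches_cube, cube_in_target).
def compute_metrics(samples):
    first_contact_t = None
    contacts = 0
    touching = False
    for t, touches, in_target in samples:
        if touches and not touching:
            contacts += 1               # rising edge: a new, separate contact
            if first_contact_t is None:
                first_contact_t = t     # Completion Time starts here
        touching = touches
        if in_target and first_contact_t is not None:
            return {"completion_time": t - first_contact_t,
                    "contacts": contacts}
    return None  # task not completed

trial = [(0.0, False, False), (0.5, True, False), (1.0, False, False),
         (1.2, True, False), (2.0, True, True)]
print(compute_metrics(trial))  # {'completion_time': 1.5, 'contacts': 2}
\end{verbatim}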