Typo
@@ -88,7 +88,8 @@ In particular, we will investigate the effect of the visual feedback of the virt
 \paragraph{Presence}
 \label{ar_presence}

-\AR and \VR are both essentially illusions as the virtual content does not physically exist but is just digitally simulated and rendered to the user's senses through display devices.
+\AR and \VR are both essentially illusions.
+The virtual content does not actually exist physically, but is only digitally simulated and rendered to the user's senses through display devices.
 Such suspension of disbelief in \VR is what is called \emph{presence}, and it can be decomposed into two dimensions: place illusion and plausibility \cite{slater2009place,slater2022separate}.
 \emph{Place illusion} is the user's sense of \enquote{being there} in the \VE (\figref{presence-vr}).
 It emerges from the real-time rendering of the \VE from the user's perspective: being able to move around inside the \VE and to look at it from different points of view.
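A minimal way to make this rendering loop explicit (an illustrative formulation, not from the source text): if the tracking system reports the user's head pose in world coordinates as a rigid transform $\mathbf{M}_{\mathrm{head}}$, the \VE is rendered every frame with the view transform
\[
\mathbf{V} = \mathbf{M}_{\mathrm{head}}^{-1},
\]
so that every head movement immediately changes the rendered viewpoint; place illusion relies on this loop running with low and stable latency.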
@@ -131,9 +132,9 @@ In all examples of \AR applications shown in \secref{ar_applications}, the user
 For a user to interact with a computer system (desktop, mobile, \AR, etc.), they first perceive the state of the system and then act upon it through an input device \cite[p.145]{laviolajr20173d}.
 Such input devices form an input \emph{\UI} that captures and translates the user's actions to the computer.
-Similarly, an output \UI render and display the state of the system to the user (such as a \AR/\VR display, \secref{ar_displays}, or an haptic actuator, \secref{wearable_haptic_devices}).
+Similarly, an output \UI renders and displays the state of the system to the user (such as an \AR/\VR display, \secref{ar_displays}, or a haptic actuator, \secref{wearable_haptic_devices}).

-Inputs \UI can be either an \emph{active sensing}, a held or worn device, such as a mouse, a touch screen, or a hand-held controller, or a \emph{passive sensing}, that does not require a contact, such as eye trackers, voice recognition, or hand tracking \cite[p.294]{laviolajr20173d}.
+Input \UIs can rely either on \emph{active sensing}, \ie a held or worn device such as a mouse, a touch screen, or a hand-held controller, or on \emph{passive sensing}, which does not require contact, such as eye trackers, voice recognition, or hand tracking \cite[p.294]{laviolajr20173d}.
 The captured information from the sensors is then translated into actions within the computer system by an \emph{interaction technique}.
 For example, a cursor on a screen can be moved either with a mouse or with the arrow keys on a keyboard, and a two-finger swipe on a touchscreen can be used to scroll or zoom an image.
 Choosing useful and efficient \UIs and interaction techniques is crucial for the user experience and the tasks that can be performed within the system.
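To make the notion of an interaction technique concrete, here is a minimal sketch (illustrative only; the names and mappings are assumptions, not from the source text) in which the same cursor state is driven by two different techniques, one mapping relative mouse deltas and one mapping discrete arrow-key presses:

from dataclasses import dataclass

@dataclass
class Cursor:
    x: float = 0.0
    y: float = 0.0

def mouse_technique(cursor: Cursor, dx: float, dy: float, gain: float = 1.0) -> None:
    # Relative mapping: mouse deltas are scaled by a gain and added to the cursor.
    cursor.x += gain * dx
    cursor.y += gain * dy

ARROW_STEPS = {"left": (-1, 0), "right": (1, 0), "up": (0, -1), "down": (0, 1)}

def keyboard_technique(cursor: Cursor, key: str, step: float = 5.0) -> None:
    # Discrete mapping: each arrow-key press moves the cursor by a fixed step.
    dx, dy = ARROW_STEPS[key]
    cursor.x += step * dx
    cursor.y += step * dy

cursor = Cursor()
mouse_technique(cursor, dx=3.0, dy=-2.0)  # same state, two techniques
keyboard_technique(cursor, "right")
print(cursor)  # Cursor(x=8.0, y=-2.0)

The point of the sketch is that the input \UI (mouse or keyboard) and the interaction technique (relative or discrete mapping) can be chosen independently for the same task.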
@@ -101,7 +101,7 @@ The overall workload (mean NASA-TLX score) was low (\num{21 \pm 14}), with no st
 Overall, the textures were also found to be very much caused by the finger movements (\response{Texture Agency}, \num{4.5 \pm 1.0}), with a very low perceived latency (\response{Texture Latency}, \num{1.6 \pm 0.8}), and to be quite realistic (\response{Texture Realism}, \num{3.6 \pm 0.9}) and quite plausible (\response{Texture Plausibility}, \num{3.6 \pm 1.0}).
 The vibrations were overall felt to be slightly weak (\response{Vibration Strength}, \num{4.2 \pm 1.1}), and the vibrotactile device was perceived as neither distracting (\response{Device Distraction}, \num{1.2 \pm 0.4}) nor uncomfortable (\response{Device Discomfort}, \num{1.3 \pm 0.6}).
-Participants were mixed between feeling the vibrations on the surface or on the top of their finger (\response{Vibration Location}, \num{3.9 \pm 1.7}); the distribution of scores was split between the two poles of the scale with \level{Real} and \level{Mixed} renderings (\percent{42.5} more on surface or on finger top, \percent{15} neutral), but there was a trend towards the top of the finger in VR renderings (\percent{65} \vs \percent{25} more on surface and \percent{10} neutral), but this difference was not statistically significant neither.
+Participants were mixed between feeling the vibrations on the surface or on the top of their finger (\response{Vibration Location}, \num{3.9 \pm 1.7}); the distribution of scores was split between the two poles of the scale with the \level{Real} and \level{Mixed} renderings (\percent{42.5} more on surface or on finger top, \percent{15} neutral), whereas there was a trend towards the top of the finger in \VR renderings (\percent{65} \vs \percent{25} more on surface and \percent{10} neutral), although this difference was not statistically significant either.

 \begin{tab}{questions2}
 {NASA-TLX questions asked to participants after each \factor{Visual Rendering} block of trials.}
@@ -11,11 +11,11 @@ Our results showed that the visual virtuality of the hand (real or virtual) and
 The textures were on average perceived as \enquote{rougher} when touched with the real hand alone than with a virtual hand in \AR, with \VR in between.
 Similarly, the sensitivity to differences in roughness was best with the real hand alone, worst in \AR, and in between in \VR.
 Exploration behaviour was also slower in \VR than with the real hand alone, although the subjective evaluation of the texture was not affected.
-We hypothesised that this difference in perception was due to the \emph{perceived latency} between the finger movements and the different visual, haptic and proprioceptive feedbacks, which were the same in all visual renderings, but were more noticeable in \AR and \VR than without visual augmentation.
+We hypothesized that this difference in perception was due to the \emph{perceived latency} between the finger movements and the visual, haptic, and proprioceptive feedbacks: the latencies were the same in all visual renderings, but were more noticeable in \AR and \VR than without visual augmentation.

 %We can outline recommendations for future \AR/\VR studies or applications using wearable haptics.
 This study suggests that attention should be paid to the respective latencies of the visual and haptic sensory feedbacks inherent in such systems and, more importantly, to \emph{the perception of their possible asynchrony}.
-Latencies should be measured \cite{friston2014measuring}, minimized to an acceptable level for users and kept synchronised with each other \cite{diluca2019perceptual}.
+Latencies should be measured \cite{friston2014measuring}, minimized to an acceptable level for users, and kept synchronized with each other \cite{diluca2019perceptual}.
 It also seems that the visual appearance of the hand or of the environment in itself has little effect on the perception of haptic feedback, but the degree of visual virtuality can affect the perceived asynchrony of the latencies, even when the latencies remain identical.
 When designing wearable haptics or integrating them into \AR/\VR, it seems important to test their perception in real, augmented, and virtual environments.
 %With a better understanding of how visual factors influence the perception of haptically augmented real objects, the many wearable haptic systems that already exist but have not yet been fully explored with \AR can be better applied and new visuo-haptic renderings adapted to \AR can be designed.
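As a concrete illustration of measuring such latencies, here is a minimal sketch (an assumed setup, not the method of the cited works): the relative delay between two feedback channels is estimated by cross-correlating two synchronously sampled signals, \eg a photodiode on the display and an accelerometer on the vibrotactile actuator, both driven by the same finger movement:

import numpy as np

def relative_latency_ms(sig_a, sig_b, rate_hz):
    # Returns the delay of sig_b relative to sig_a, in milliseconds,
    # as the lag that maximizes their cross-correlation.
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    corr = np.correlate(b, a, mode="full")
    lag = corr.argmax() - (len(a) - 1)  # lag in samples
    return 1000.0 * lag / rate_hz

# Synthetic check: sig_b is sig_a delayed by 12 samples (12 ms at 1 kHz).
rng = np.random.default_rng(0)
sig_a = rng.normal(size=2000)
sig_b = np.roll(sig_a, 12)
print(relative_latency_ms(sig_a, sig_b, rate_hz=1000.0))  # ~12.0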
@@ -2,7 +2,6 @@
 \label{intro}

 Touching, grasping and manipulating virtual objects are fundamental interactions in \AR (\secref[related_work]{ve_tasks}) and essential for many of its applications (\secref[related_work]{ar_applications}).
-%The most common current \AR systems, in the form of portable and immersive \OST-\AR headsets \cite{hertel2021taxonomy}, allow real-time hand tracking and direct interaction with virtual objects with bare hands (\secref[related_work]{real_virtual_gap}).
 Manipulation of virtual objects is achieved using a virtual hand interaction technique that represents the user's hand in the \VE and simulates interaction with virtual objects (\secref[related_work]{ar_virtual_hands}).
 The visual feedback of the virtual hand is a key element for interacting with and manipulating virtual objects in \VR \cite{prachyabrued2014visual,grubert2018effects}.
 Some work has also investigated the visual feedback of the virtual hand in \AR, but either not in a context of virtual object manipulation \cite{al-kalbani2016analysis,yoon2020evaluating} or limited to a single visual hand augmentation \cite{piumsomboon2014graspshell,blaga2017usability,maisto2017evaluation}.
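For illustration, the core of such a virtual hand technique can be reduced to the following sketch (hypothetical names and thresholds, not the system used in this work): a proxy hand follows the tracked hand pose every frame, and a pinch gesture grabs a virtual object while the proxy is in contact with it:

import numpy as np

class VirtualHand:
    def __init__(self, grab_radius=0.05):  # contact radius in meters (assumed)
        self.position = np.zeros(3)
        self.held_object = None
        self.grab_radius = grab_radius

    def update(self, tracked_pos, pinching, objects):
        # Called once per frame with the tracked hand position.
        self.position = tracked_pos
        if self.held_object is not None:
            if pinching:
                self.held_object["position"] = tracked_pos.copy()  # carry the object
            else:
                self.held_object = None  # release when the pinch ends
        elif pinching:
            for obj in objects:  # grab the first object within reach
                if np.linalg.norm(obj["position"] - tracked_pos) < self.grab_radius:
                    self.held_object = obj
                    break

cube = {"position": np.array([0.0, 0.0, 0.0])}
hand = VirtualHand()
hand.update(np.array([0.0, 0.0, 0.02]), pinching=True, objects=[cube])  # grab
hand.update(np.array([0.1, 0.2, 0.0]), pinching=True, objects=[cube])   # move
print(cube["position"])  # the cube followed the hand: [0.1 0.2 0. ]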