Fix acronyms
@@ -110,7 +110,7 @@ It doesn't mean that the virtual events are realistic, but that they are plausib

%The \AR presence is far less defined and studied than for \VR \cite{tran2024survey}
For \AR, \textcite{slater2022separate} proposed to invert place illusion into what we can call \enquote{object illusion}, \ie the sense that the \VO \enquote{feels here} in the \RE (\figref{presence-ar}).
As with \VR, \VOs must be able to be seen from different angles by moving the head but also, and this is more difficult, be consistent with the \RE, \eg occlude or be occluded by real objects \cite{macedo2023occlusion}, cast shadows, or reflect light.
Plausibility can be applied to \AR as is, but the \VOs must additionally have knowledge of the \RE and react to it accordingly.
%\textcite{skarbez2021revisiting} also named place illusion for \AR as \enquote{immersion} and plausibility as \enquote{coherence}, and these terms will be used in the remainder of this thesis.
%One main issue with presence is how to measure it both in \VR \cite{slater2022separate} and \AR \cite{tran2024survey}.

@@ -96,7 +96,7 @@ For example, in a fixed \VST-\AR screen (\secref{ar_displays}), by visually defo
%In all of these studies, the visual expectations of participants influenced their haptic perception.
%In particular, in \AR and \VR, the perception of a haptic rendering or augmentation can be influenced by the visual rendering of the \VO.

\subsubsection{Perception of Visuo-Haptic Rendering in \AR and \VR}
\label{AR_vs_VR}

Some studies have investigated the visuo-haptic perception of \VOs rendered with force-feedback and vibrotactile feedback in \AR and \VR.
@@ -248,7 +248,7 @@ A user study was conducted in \VR to compare the perception of visuo-haptic stif
% \subfig{pezent2019tasbi_4}
%\end{subfigs}

% \cite{sarac2022perceived,palmer2022haptic} not in \AR but studies on relocating to the wrist the haptic feedback of the fingertip-object contacts.

%\subsection{Conclusion}
%\label{visuo_haptic_conclusion}
@@ -18,16 +18,16 @@ Worn on the finger, but not directly on the fingertip to keep it free to interac
%
However, the use of wearable haptic devices has been little explored in Augmented Reality (AR), where visual virtual content is integrated into the real-world environment, especially for augmenting texture sensations \cite{punpongsanon2015softar,maisto2017evaluation,meli2018combining,chan2021hasti,teng2021touch,fradin2023humans}.
%
A key difference in \AR compared to \VR is that the user can still see the real-world surroundings, including their hands, the augmented tangible objects and the worn haptic devices.
%
An additional issue of current \AR systems is their visual display limitations, which can make virtual content appear inconsistent with the real world \cite{kim2018revisiting,macedo2023occlusion}.
%
These two factors have been shown to influence the perception of haptic stiffness rendering \cite{knorlein2009influence,gaffary2017ar}.
%
It remains to be investigated whether simultaneous and co-localized visual and haptic texture augmentation of tangible surfaces in \AR can be perceived in a coherent and realistic manner, and to what extent each sensory modality would contribute to the overall perception of the augmented texture.
%
Being able to coherently substitute the visuo-haptic texture of an everyday surface directly touched by a finger is an important step towards new \AR applications capable of visually and haptically augmenting the real environment of a user in a plausible way.

In this paper, we investigate how users perceive a tangible surface touched with the index finger when it is augmented with a visuo-haptic roughness texture using immersive optical see-through \AR (OST-AR) and wearable vibrotactile stimuli provided on the index finger.
%
In a user study, twenty participants freely explored and evaluated the coherence, realism and roughness of nine representative pairs of visuo-haptic texture augmentations (\figref{setup}, left) from the HaTT database \cite{culbertson2014one}.

@@ -5,9 +5,9 @@
\item The nine visuo-haptic textures used in the user study, selected from the HaTT database \cite{culbertson2014one}.
The texture names were never shown, so as to prevent the use of the user's visual or haptic memory of the textures.
\item Experimental setup.
Participants sat in front of the tangible surfaces, which were augmented with visual textures displayed by the HoloLens~2 \AR headset and haptic roughness textures rendered by the vibrotactile haptic device placed on the middle index phalanx.
A webcam above the surfaces tracked the finger movements.
\item First-person view of the user study, as seen through the immersive \AR headset HoloLens~2.
The visual texture overlays are statically displayed on the surfaces, allowing the user to move around to view them from different angles.
The haptic roughness texture is generated based on HaTT data-driven texture models and finger speed, and it is rendered on the middle index phalanx as the finger slides on the considered surface.
]
@@ -16,7 +16,7 @@
\subfig[0.32]{experiment/view}
\end{subfigs}

The user study aimed at analyzing the user perception of tangible surfaces when augmented through a visuo-haptic texture using \AR and vibrotactile haptic feedback provided on the finger touching the surfaces.
%
Nine representative visuo-haptic texture pairs from the HaTT database \cite{culbertson2014one} were investigated in two tasks:
%
@@ -27,7 +27,7 @@ Our objective is to assess which haptic textures were associated with which visu
\subsection{The Textures}
\label{textures}

The 100 visuo-haptic texture pairs of the HaTT database \cite{culbertson2014one} were preliminarily tested and compared using \AR and vibrotactile haptic feedback on the finger on a tangible surface.
%
These texture models were chosen as they are visuo-haptic representations of a wide range of real textures that are publicly available online.
%
@@ -69,7 +69,7 @@ The user study was held in a quiet room with no windows, with one light source o

Participants were first given written instructions about the experimental setup, the tasks, and the procedure of the user study.
%
Then, after having signed an informed consent form, they were asked to sit in front of the table with the experimental setup and to wear the HoloLens~2 \AR headset. The experimenter firmly attached the plastic shell encasing the vibrotactile actuator to the middle index phalanx of their dominant hand.
%
As the haptic device generated no audible noise, participants did not wear any noise reduction headphones.
%
@@ -119,9 +119,9 @@ One participant was left-handed, all others were right-handed; they all performe
%
All participants had normal or corrected-to-normal vision and none of them had a known hand or finger impairment.
%
They rated their experience with haptics, \AR, and \VR (\enquote{I use it every month or more}); 10 were experienced with haptics, 2 with \AR, and 10 with \VR.
%
Experiences were correlated between haptics and \AR (\spearman{0.53}), haptics and \VR (\spearman{0.61}), and \AR and \VR (\spearman{0.74}); but not with age (\spearman{-0.06} to \spearman{-0.05}) or gender (\spearman{0.10} to \spearman{0.27}).
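The pairwise rank correlations reported above can be reproduced with a standard routine; a minimal sketch using SciPy, where the rating values are made-up illustrative numbers, not the study's data:

```python
# Hedged sketch: Spearman rank correlation between self-reported
# experience ratings, as in the participant analysis above.
# The ratings below are invented illustrative values, NOT the study's data.
from scipy.stats import spearmanr

haptics_experience = [1, 3, 2, 5, 4, 2, 1, 5]  # e.g. 5-point self-ratings
ar_experience      = [2, 3, 2, 4, 5, 1, 1, 4]

rho, p_value = spearmanr(haptics_experience, ar_experience)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

`spearmanr` handles tied ratings via average ranks, which matters for coarse Likert-style experience scales like these.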
%
Participants were recruited at the university on a voluntary basis.
%

@@ -12,7 +12,7 @@
The number in a cell is the proportion of times the corresponding haptic texture was selected in response to the presentation of the corresponding visual texture.
The diagonal represents the expected correct answers.
Holm-Bonferroni adjusted binomial test results are marked in bold when the proportion is higher than chance (\ie more than 11~\%, \pinf{0.05}).
\item Means with bootstrap 95~\% \CI of the three rankings of the haptic textures alone, the visual textures alone, and the visuo-haptic texture pairs.
A lower rank means that the texture was considered rougher, a higher rank that it was considered smoother.
]
\subfig[0.58]{results/matching_confusion_matrix}%
@@ -50,7 +50,7 @@ To verify that the difficulty with all the visual textures was the same on the m
%
As the \textit{Completion Time} results were Gamma distributed, they were log-transformed to approximate a normal distribution.
%
A \LMM on the log \textit{Completion Time} with the \textit{Visual Texture} as fixed effect and the \textit{Participant} as random intercept was performed.
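The described model can be sketched with the `mixedlm` formula interface of statsmodels; the data below are synthetic, and the column names are assumptions rather than the study's actual files:

```python
# Hedged sketch of the analysis described above: a linear mixed model on
# log Completion Time with Visual Texture as fixed effect and Participant
# as random intercept. Synthetic Gamma-distributed data for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for pid in range(20):                    # 20 participants
    offset = rng.normal(0.0, 0.1)        # per-participant random intercept
    for tex in "ABCDEFGHI":              # 9 visual textures
        ct = rng.gamma(shape=4.0, scale=2.0) * np.exp(offset)
        rows.append({"participant": pid, "texture": tex, "log_ct": np.log(ct)})
df = pd.DataFrame(rows)

# Fixed effect: texture; random intercept: participant.
fit = smf.mixedlm("log_ct ~ C(texture)", df, groups=df["participant"]).fit()
print(fit.summary())
```

A QQ-plot of `fit.resid` would then serve the normality check mentioned in the text.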
%
Normality was verified with a QQ-plot of the model residuals.
%
@@ -169,7 +169,7 @@ This shows that the participants consistently identified the roughness of each v

\figref{results_questions} presents the questionnaire results of the matching and ranking tasks.
%
A non-parametric \ANOVA on an \ART model was used on the \textit{Difficulty} and \textit{Realism} question results, while the other question results were analyzed using Wilcoxon signed-rank tests.
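The paired, non-parametric comparison used for the other questions can be sketched with SciPy's Wilcoxon signed-rank test; the Likert-style ratings below are invented for illustration:

```python
# Hedged sketch: Wilcoxon signed-rank test on paired questionnaire
# ratings, as used above for the non-ART question results.
# The 7-point ratings are made-up values, NOT the study's data.
from scipy.stats import wilcoxon

ratings_task_a = [5, 6, 4, 5, 7, 6, 5, 4, 6, 5]
ratings_task_b = [4, 5, 4, 3, 6, 5, 4, 4, 5, 4]

stat, p_value = wilcoxon(ratings_task_a, ratings_task_b)
print(f"W = {stat}, p = {p_value:.3f}")
```

By default `wilcoxon` discards zero differences (tied pairs), which is common with discrete questionnaire scales.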
%
On \textit{Difficulty}, there were statistically significant effects of \textit{Task} (\anova{1}{57}{13}, \pinf{0.001}) and of \textit{Modality} (\anova{1}{57}{8}, \p{0.007}), but no interaction effect \textit{Task} \x \textit{Modality} (\anova{1}{57}{2}, \ns).
%

@@ -1,7 +1,7 @@
\section{Discussion}
\label{discussion}

In this study, we investigated the perception of visuo-haptic texture augmentation of tangible surfaces touched directly with the index fingertip, using visual texture overlays in \AR and haptic roughness textures generated by a vibrotactile device worn on the middle index phalanx.
%
The nine evaluated pairs of visuo-haptic textures, taken from the HaTT database \cite{culbertson2014one}, are models of real texture captures.
%
@@ -41,7 +41,7 @@ The last visuo-haptic roughness ranking (\figref{results_matching_ranking}, righ
%
Several strategies were reported: some participants first classified visually and then corrected with haptics, while others classified haptically and then integrated visuals.
%
While visual sensation did influence perception, as observed in previous haptic \AR studies \cite{punpongsanon2015softar,gaffary2017ar,fradin2023humans}, haptic sensation dominated here.
%
This indicates that participants were more confident in and relied more on the haptic roughness perception than on the visual roughness perception when integrating both into one coherent perception.
%
@@ -65,9 +65,9 @@ Another limitation that may have affected the perception of haptic textures is t
%
Finally, the visual textures used were also simple color captures not meant to be used in an immersive virtual environment.
%
However, our objective was not to accurately reproduce real textures, but to alter the perception of simultaneous visual and haptic roughness augmentation of a real surface directly touched by the finger in \AR.
%
Beyond these limitations, both visual and haptic texture models should be improved by integrating the rendering of spatially localized breaks, edges or patterns, like real textures \cite{richardson2022learning}, and by being adaptable to individual sensitivities, as personalized haptics is a promising approach \cite{malvezzi2021design,young2020compensating}.
%
More generally, a wide range of haptic feedback should be integrated to form rich and complete haptic augmentations in \AR \cite{maisto2017evaluation,detinguy2018enhancing,salazar2020altering}.

@@ -2,8 +2,8 @@
\label{conclusion}

\fig[0.6]{experiment/use_case}{%
Illustration of the texture augmentation in \AR through an interior design scenario. %
A user wearing an \AR headset and a vibrotactile haptic device on their index finger is applying different virtual visuo-haptic textures to a real wall to compare them visually and by touch.
}

We investigated how users perceived visuo-haptic roughness texture augmentations on tangible surfaces seen in immersive OST-AR and touched directly with the index finger.
@@ -18,8 +18,8 @@ The texture rankings did indeed show that participants perceived the roughness o
%
There are still many improvements to be made to the respective renderings of the haptic and visual textures used in this work to make them more realistic for finger perception and immersive virtual environment contexts.
%
However, these results suggest that \AR visual textures that augment tangible surfaces can be enhanced with a set of data-driven vibrotactile haptic textures in a coherent and realistic manner.
%
This paves the way for new \AR applications capable of augmenting a real environment with virtual visuo-haptic textures, such as visuo-haptic painting in artistic, object design or interior design contexts.
%
The latter is illustrated in \figref{experiment/use_case}, where a user applies different visuo-haptic textures to a wall to compare them visually and by touch.

@@ -1,4 +1,4 @@
% Even before manipulating a visual representation to induce a haptic sensation, shifts and latencies between user input and co-localised visuo-haptic feedback can be experienced differently in \AR and \VR, which we aim to investigate in this work.

%Imagine you're an archaeologist or in a museum, and you want to examine an ancient object.
%
@@ -10,7 +10,7 @@
%
%Such tactile augmentation is made possible by wearable haptic devices, which are worn directly on the finger or hand and can provide a variety of sensations on the skin, while being small, light and discreet \cite{pacchierotti2017wearable}.
%
Wearable haptic devices, worn directly on the finger or hand, have been used to render a variety of tactile sensations to virtual objects seen in \VR \cite{choi2018claw,detinguy2018enhancing,pezent2019tasbi} or \AR \cite{maisto2017evaluation,meli2018combining,teng2021touch}.
%
They have also been used to alter the perceived roughness, stiffness, friction, and local shape of real tangible objects \cite{asano2015vibrotactile,detinguy2018enhancing,salazar2020altering}.
%
@@ -18,42 +18,42 @@ Such techniques place the actuator \emph{close} to the point of contact with the
%
This combined use of wearable haptics with tangible objects enables a haptic \emph{augmented} reality (HAR) \cite{bhatia2024augmenting} that can provide rich and varied haptic feedback.

The degree of reality/virtuality in both visual and haptic sensory modalities can be varied independently, but wearable haptic \AR has been little explored with \VR and (visual) \AR \cite{choi2021augmenting}.
%
Although \AR and \VR are closely related, they have significant differences that can affect the user experience \cite{genay2021virtual,macedo2023occlusion}.
%
%By integrating visual virtual content into the real environment, \AR keeps the hand of the user, the haptic devices worn and the tangibles touched visible, unlike \VR where they are hidden by immersing the user into a visual virtual environment.
%
%Current \AR systems also suffer from display and rendering limitations not present in \VR, affecting the user experience with virtual content that may be less realistic or inconsistent with the real augmented environment \cite{kim2018revisiting,macedo2023occlusion}.
%
It therefore seems necessary to investigate and understand the potential effect of these differences in visual rendering on the perception of haptically augmented tangible objects.
%
Previous works have shown, for example, that the stiffness of a virtual piston rendered with a force-feedback haptic system seen in \AR is perceived as less rigid than in \VR \cite{gaffary2017ar} or when the visual rendering is ahead of the haptic rendering \cite{diluca2011effects,knorlein2009influence}.
%
%Taking our example from the beginning of this introduction, you now want to learn more about the context of the discovery of the ancient object or its use at the time of its creation by immersing yourself in a virtual environment in \VR.
%
%But how different is the perception of the haptic augmentation in \AR compared to \VR, with a virtual hand instead of the real hand?

The goal of this paper is to study the role of the visual rendering of the hand (real or virtual) and its environment (\AR or \VR) on the perception of a tangible surface whose texture is augmented with a wearable vibrotactile device worn on the finger.
%
We focus on the perception of roughness, one of the main tactile sensations of materials \cite{baumgartner2013visual,hollins1993perceptual,okamoto2013psychophysical} and one of the most studied haptic augmentations \cite{asano2015vibrotactile,culbertson2014modeling,friesen2024perceived,strohmeier2017generating,ujitoko2019modulating}.
%
By understanding how these visual factors influence the perception of haptically augmented tangible objects, the many wearable haptic systems that already exist but have not yet been fully explored with \AR can be better applied and new visuo-haptic renderings adapted to \AR can be designed.

Our contributions are:
%
\begin{itemize}
\item A system for rendering virtual vibrotactile roughness textures in real time on a tangible surface touched directly with the finger, integrated with an immersive visual AR/VR headset to provide a coherent multimodal visuo-haptic augmentation of the real environment.
\item A psychophysical study with 20 participants to evaluate the perception of these virtual roughness textures in three visual rendering conditions: without visual augmentation, with a realistic virtual hand rendering in \AR, and with the same virtual hand in \VR.
\end{itemize}
%First, we present a system for rendering virtual vibrotactile textures in real time without constraints on hand movements and integrated with an immersive visual AR/VR headset to provide a coherent multimodal visuo-haptic augmentation of the real environment.
%
%An experimental setup is then presented to compare haptic roughness augmentation with an optical \AR headset (Microsoft HoloLens~2) that can be transformed into a \VR headset using a cardboard mask.
%
%We then conduct a psychophysical study with 20 participants, where various virtual haptic textures on a tangible surface directly touched with the finger are compared in a two-alternative forced choice (2AFC) task in three visual rendering conditions: (1) without visual augmentation, (2) with a realistic virtual hand rendering in \AR, and (3) with the same virtual hand in \VR.

%\fig[1]{teaser/teaser2}{%
% Vibrotactile textures were rendered in real time on a real surface using a wearable vibrotactile device worn on the finger.
% %
% Participants explored this haptic roughness augmentation with (Real) their real hand alone, (Mixed) a realistic virtual hand overlay in \AR, and (Virtual) the same virtual hand in \VR.
%}

@@ -2,7 +2,7 @@
%
In this section, we describe a system for rendering vibrotactile roughness textures in real time, on any tangible surface, touched directly with the index fingertip, with no constraints on hand movement and using a simple camera to track the finger pose.
%
We also describe how to pair this tactile rendering with an immersive \AR or \VR headset visual display to provide a coherent, multimodal visuo-haptic augmentation of the real environment.

\section{Principle}
\label{principle}
@@ -36,7 +36,7 @@ The system is composed of three main components: the pose estimation of the trac

\begin{subfigs}{setup}{Visuo-haptic texture rendering system setup. }[][
\item HapCoil-One voice-coil actuator with a fiducial marker on top, attached to a participant's right index finger.
\item HoloLens~2 \AR headset, the two cardboard masks used to switch between the real and virtual environments with the same field of view, and the 3D-printed piece for attaching the masks to the headset.
\item User exploring a virtual vibrotactile texture on a tangible sheet of paper.
]
\subfig[0.325]{device}
@@ -70,7 +70,7 @@ In addition, the pose and size of the virtual textures are defined on the virtua
%
During the experiment, the system uses marker pose estimates to align the virtual models with their real-world counterparts. %, according to the condition being tested.
%
This makes it possible to detect whether a finger touches a virtual texture using a collision detection algorithm (Nvidia PhysX), and to show the virtual elements and textures in real time, aligned with the real environment (\figref{renderings}), using the considered \AR or \VR headset.

In our implementation, the virtual hand and environment are designed with Unity and the Mixed Reality Toolkit (MRTK).
%
@@ -80,7 +80,7 @@ It was chosen over VST-AR because OST-AR only adds virtual content to the real e
%
Indeed, one of our objectives (\secref{experiment}) is to directly compare a virtual environment that replicates a real one. %, rather than a video feed that introduces many supplementary visual limitations.
%
To simulate a \VR headset, a cardboard mask (with holes for sensors) is attached to the headset to block the view of the real environment (\figref{headset}).

\section{Vibrotactile Signal Generation and Rendering}
\label{texture_generation}
@@ -139,7 +139,7 @@ The tactile texture is described and rendered in this work as a one dimensional
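The exact one-dimensional signal model is elided from this excerpt. As an illustrative sketch only, a sinusoidal grating can be rendered position-based, so that the instantaneous vibration frequency equals finger speed divided by the spatial period (the amplitude, period, speed, and sampling rate below are assumptions, not the paper's parameters):

```python
import math

def grating_sample(finger_pos_m, amplitude, spatial_period_m):
    """Position-based rendering: one output sample of a sinusoidal grating.

    Because the phase is driven by finger position, the instantaneous
    frequency is finger speed / spatial period, as with a real grating.
    """
    return amplitude * math.sin(2.0 * math.pi * finger_pos_m / spatial_period_m)

# Finger moving at a constant 5 cm/s over a 2 mm grating, 10 ms at 48 kHz:
fs = 48_000
speed = 0.05    # m/s
period = 0.002  # m  -> expected vibration frequency = 0.05 / 0.002 = 25 Hz
signal = [grating_sample(speed * n / fs, 1.0, period) for n in range(480)]
```

Scaling the amplitude term is what produces the amplitude-difference levels compared later in the study; the real signal path additionally passes through the sound card and voice-coil actuator.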

%As shown in \figref{diagram} and described above, the system includes various haptic and visual sensors and rendering devices linked by software processes for image processing, 3D rendering and audio generation.
%
Because the chosen AR headset is a standalone device (like most current AR/VR headsets) and cannot directly control the sound card and haptic actuator, the image capture, pose estimation and audio signal generation steps are performed on an external computer.
Because the chosen \AR headset is a standalone device (like most current AR/VR headsets) and cannot directly control the sound card and haptic actuator, the image capture, pose estimation and audio signal generation steps are performed on an external computer.
%
All computation steps run in separate threads to parallelise them and reduce latency, and are synchronised with the headset via a local network and the ZeroMQ library.
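The streaming pattern can be sketched as follows, with a plain stdlib TCP socket standing in for ZeroMQ and a hypothetical pose message (the real system's message format and socket types are not shown in this excerpt):

```python
import json
import socket
import threading

def headset_listener(server_sock, received):
    """Headset side: accept one connection and read one pose update."""
    conn, _ = server_sock.accept()
    with conn:
        received.append(json.loads(conn.recv(1024).decode()))

# The external computer streams pose estimates to the headset over the
# local network (ZeroMQ in the real system; a bare TCP socket here).
server = socket.socket()
server.bind(("127.0.0.1", 0))  # ephemeral port
server.listen(1)
received = []
t = threading.Thread(target=headset_listener, args=(server, received))
t.start()

with socket.create_connection(server.getsockname()) as client:
    client.sendall(json.dumps({"marker": 3, "pos": [0.10, 0.02, 0.31]}).encode())
t.join()
server.close()
print(received[0]["marker"])  # -> 3
```

ZeroMQ wraps this kind of socket handling in higher-level messaging patterns (e.g. publish/subscribe), which is why it suits a multi-threaded pipeline like the one described.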
%
@@ -157,7 +157,7 @@ The haptic loop also includes the voice-coil latency \qty{15}{\ms} (as specified
%
The total haptic latency is below the \qty{60}{\ms} threshold for detecting delays in vibrotactile feedback \cite{okamoto2009detectability}.
%
The total visual latency can be considered slightly high, yet it is typical for an AR rendering involving vision-based tracking \cite{knorlein2009influence}.
The total visual latency can be considered slightly high, yet it is typical for an \AR rendering involving vision-based tracking \cite{knorlein2009influence}.

The two filters also introduce a constant lag between the finger movement and the estimated position and velocity, measured at \qty{160 +- 30}{\ms}.
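This excerpt does not name the filters, but the origin of such a constant lag can be illustrated with a causal moving-average filter: for a window of N samples at rate fs, the group delay is (N - 1) / (2 fs). The window size and tracking rate below are illustrative assumptions, not the system's values:

```python
def moving_average(samples, window):
    """Causal moving average: output n averages samples n-window+1 .. n."""
    out = []
    for n in range(len(samples)):
        lo = max(0, n - window + 1)
        out.append(sum(samples[lo:n + 1]) / (n + 1 - lo))
    return out

# On a linear ramp (finger moving at constant speed), a causal moving
# average reproduces the input delayed by exactly (window - 1) / 2 samples.
fs = 60.0                # tracking rate in Hz (illustrative)
window = 19
ramp = list(range(100))  # position samples
smoothed = moving_average(ramp, window)
delay_samples = (window - 1) / 2
print(smoothed[50], ramp[50] - delay_samples)  # 41.0 41.0
print(1000 * delay_samples / fs)               # lag in ms: 150.0
```

Larger windows smooth noisy marker estimates better but increase this constant lag, which is the trade-off behind the measured \qty{160 +- 30}{\ms}.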
%

@@ -4,4 +4,4 @@
%Summary of the research problem, method, main findings, and implications.

We designed and implemented a system for rendering virtual haptic grating textures on a real tangible surface touched directly with the fingertip, using a wearable vibrotactile voice-coil device mounted on the middle phalanx of the finger. %, and allowing free explorative movements of the hand on the surface.
This tactile feedback was integrated with an immersive visual virtual environment, using an OST-AR headset, to provide users with a coherent multimodal visuo-haptic augmentation of the real environment that can be switched between an AR and a VR view.
This tactile feedback was integrated with an immersive visual virtual environment, using an OST-AR headset, to provide users with a coherent multimodal visuo-haptic augmentation of the real environment that can be switched between an \AR and a \VR view.

@@ -1,6 +1,6 @@
% Insist on the advantage of wearable : augment any surface see bau2012revel

% Even before manipulating a visual representation to induce a haptic sensation, shifts and latencies between user input and co-localised visuo-haptic feedback can be experienced differently in AR and VR, which we aim to investigate in this work.
% Even before manipulating a visual representation to induce a haptic sensation, shifts and latencies between user input and co-localised visuo-haptic feedback can be experienced differently in \AR and \VR, which we aim to investigate in this work.

%Imagine you're an archaeologist or in a museum, and you want to examine an ancient object.
%
@@ -12,7 +12,7 @@
%
%Such tactile augmentation is made possible by wearable haptic devices, which are worn directly on the finger or hand and can provide a variety of sensations on the skin, while being small, light and discreet \cite{pacchierotti2017wearable}.
%
Wearable haptic devices, worn directly on the finger or hand, have been used to render a variety of tactile sensations to virtual objects seen in VR \cite{choi2018claw,detinguy2018enhancing,pezent2019tasbi} or AR \cite{maisto2017evaluation,meli2018combining,teng2021touch}.
Wearable haptic devices, worn directly on the finger or hand, have been used to render a variety of tactile sensations to virtual objects seen in \VR \cite{choi2018claw,detinguy2018enhancing,pezent2019tasbi} or \AR \cite{maisto2017evaluation,meli2018combining,teng2021touch}.
%
They have also been used to alter the perceived roughness, stiffness, friction, and local shape of real tangible objects \cite{asano2015vibrotactile,detinguy2018enhancing,salazar2020altering}.
%
@@ -20,42 +20,42 @@ Such techniques place the actuator \emph{close} to the point of contact with the
%
This combined use of wearable haptics with tangible objects enables a haptic \emph{augmented} reality (HAR) \cite{bhatia2024augmenting} that can provide rich and varied haptic feedback.

The degree of reality/virtuality in both visual and haptic sensory modalities can be varied independently, but wearable haptic AR has been little explored with VR and (visual) AR \cite{choi2021augmenting}.
The degree of reality/virtuality in both visual and haptic sensory modalities can be varied independently, but wearable haptic \AR has been little explored with \VR and (visual) \AR \cite{choi2021augmenting}.
%
Although AR and VR are closely related, they have significant differences that can affect the user experience \cite{genay2021virtual,macedo2023occlusion}.
Although \AR and \VR are closely related, they have significant differences that can affect the user experience \cite{genay2021virtual,macedo2023occlusion}.
%
%By integrating visual virtual content into the real environment, AR keeps the hand of the user, the haptic devices worn and the tangibles touched visible, unlike VR where they are hidden by immersing the user into a visual virtual environment.
%By integrating visual virtual content into the real environment, \AR keeps the hand of the user, the haptic devices worn and the tangibles touched visible, unlike \VR where they are hidden by immersing the user into a visual virtual environment.
%
%Current AR systems also suffer from display and rendering limitations not present in VR, affecting the user experience with virtual content that may be less realistic or inconsistent with the real augmented environment \cite{kim2018revisiting,macedo2023occlusion}.
%Current \AR systems also suffer from display and rendering limitations not present in \VR, affecting the user experience with virtual content that may be less realistic or inconsistent with the real augmented environment \cite{kim2018revisiting,macedo2023occlusion}.
%
It therefore seems necessary to investigate and understand the potential effect of these differences in visual rendering on the perception of haptically augmented tangible objects.
%
Previous works have shown, for example, that a virtual piston rendered with a force-feedback haptic system is perceived as less stiff in AR than in VR \cite{gaffary2017ar}, or when the visual rendering is ahead of the haptic rendering \cite{diluca2011effects,knorlein2009influence}.
Previous works have shown, for example, that a virtual piston rendered with a force-feedback haptic system is perceived as less stiff in \AR than in \VR \cite{gaffary2017ar}, or when the visual rendering is ahead of the haptic rendering \cite{diluca2011effects,knorlein2009influence}.
%
%Taking our example from the beginning of this introduction, you now want to learn more about the context of the discovery of the ancient object or its use at the time of its creation by immersing yourself in a virtual environment in VR.
%Taking our example from the beginning of this introduction, you now want to learn more about the context of the discovery of the ancient object or its use at the time of its creation by immersing yourself in a virtual environment in \VR.
%
%But how different is the perception of the haptic augmentation in AR compared to VR, with a virtual hand instead of the real hand?
%But how different is the perception of the haptic augmentation in \AR compared to \VR, with a virtual hand instead of the real hand?

The goal of this paper is to study the role of the visual rendering of the hand (real or virtual) and its environment (AR or VR) on the perception of a tangible surface whose texture is augmented with a wearable vibrotactile device worn on the finger.
The goal of this paper is to study the role of the visual rendering of the hand (real or virtual) and its environment (\AR or \VR) on the perception of a tangible surface whose texture is augmented with a wearable vibrotactile device worn on the finger.
%
We focus on the perception of roughness, one of the main tactile sensations of materials \cite{baumgartner2013visual,hollins1993perceptual,okamoto2013psychophysical} and one of the most studied haptic augmentations \cite{asano2015vibrotactile,culbertson2014modeling,friesen2024perceived,strohmeier2017generating,ujitoko2019modulating}.
%
By understanding how these visual factors influence the perception of haptically augmented tangible objects, the many wearable haptic systems that already exist but have not yet been fully explored with AR can be better applied, and new visuo-haptic renderings adapted to AR can be designed.
By understanding how these visual factors influence the perception of haptically augmented tangible objects, the many wearable haptic systems that already exist but have not yet been fully explored with \AR can be better applied, and new visuo-haptic renderings adapted to \AR can be designed.

Our contributions are:
%
\begin{itemize}
\item A system for rendering virtual vibrotactile roughness textures in real time on a tangible surface touched directly with the finger, integrated with an immersive visual AR/VR headset to provide a coherent multimodal visuo-haptic augmentation of the real environment.
\item A psychophysical study with 20 participants to evaluate the perception of these virtual roughness textures in three visual rendering conditions: without visual augmentation, with a realistic virtual hand rendering in AR, and with the same virtual hand in VR.
\item A psychophysical study with 20 participants to evaluate the perception of these virtual roughness textures in three visual rendering conditions: without visual augmentation, with a realistic virtual hand rendering in \AR, and with the same virtual hand in \VR.
\end{itemize}
%First, we present a system for rendering virtual vibrotactile textures in real time without constraints on hand movements and integrated with an immersive visual AR/VR headset to provide a coherent multimodal visuo-haptic augmentation of the real environment.
%
%An experimental setup is then presented to compare haptic roughness augmentation with an optical AR headset (Microsoft HoloLens~2) that can be transformed into a VR headset using a cardboard mask.
%An experimental setup is then presented to compare haptic roughness augmentation with an optical \AR headset (Microsoft HoloLens~2) that can be transformed into a \VR headset using a cardboard mask.
%
%We then conduct a psychophysical study with 20 participants, where various virtual haptic textures on a tangible surface directly touched with the finger are compared in a two-alternative forced choice (2AFC) task in three visual rendering conditions: (1) without visual augmentation, (2) with a realistic virtual hand rendering in AR, and (3) with the same virtual hand in VR.
%We then conduct a psychophysical study with 20 participants, where various virtual haptic textures on a tangible surface directly touched with the finger are compared in a two-alternative forced choice (2AFC) task in three visual rendering conditions: (1) without visual augmentation, (2) with a realistic virtual hand rendering in \AR, and (3) with the same virtual hand in \VR.

\fig[1]{teaser/teaser2}{%
Vibrotactile textures were rendered in real time on a real surface using a wearable vibrotactile device worn on the finger.
%
Participants explored this haptic roughness augmentation with (Real) their real hand alone, (Mixed) a realistic virtual hand overlay in AR, and (Virtual) the same virtual hand in VR.
Participants explored this haptic roughness augmentation with (Real) their real hand alone, (Mixed) a realistic virtual hand overlay in \AR, and (Virtual) the same virtual hand in \VR.
}

@@ -2,7 +2,7 @@
\label{experiment}

\begin{subfigs}{renderings}{
The three visual rendering conditions and the experimental procedure of the two-alternative forced choice (2AFC) psychophysical study.
The three visual rendering conditions and the experimental procedure of the \TIFC psychophysical study.
}[
During a trial, two tactile textures were rendered on the augmented area of the paper sheet (black rectangle) for \qty{3}{\s} each, one after the other, then the participant chose which one was rougher.
The visual rendering stayed the same during the trial.
@@ -17,11 +17,11 @@
\subfig[0.32]{experiment/virtual}
\end{subfigs}

Our visuo-haptic rendering system, described in \secref{method}, allows free exploration of virtual vibrotactile textures on tangible surfaces directly touched with the bare finger to simulate roughness augmentation, while the visual rendering of the hand and environment can be controlled to be in AR or VR.
Our visuo-haptic rendering system, described in \secref{method}, allows free exploration of virtual vibrotactile textures on tangible surfaces directly touched with the bare finger to simulate roughness augmentation, while the visual rendering of the hand and environment can be controlled to be in \AR or \VR.
%
The user study aimed to investigate the effect of visual hand rendering in AR or VR on the perception of roughness texture augmentation. % of a touched tangible surface.
The user study aimed to investigate the effect of visual hand rendering in \AR or \VR on the perception of roughness texture augmentation. % of a touched tangible surface.
%
In a two-alternative forced choice (2AFC) task, participants compared the roughness of different tactile texture augmentations in three visual rendering conditions: without any visual augmentation (\figref{renderings}, \level{Real}), in AR with a realistic virtual hand superimposed on the real hand (\figref{renderings}, \level{Mixed}), and in VR with the same virtual hand as an avatar (\figref{renderings}, \level{Virtual}).
In a \TIFC task, participants compared the roughness of different tactile texture augmentations in three visual rendering conditions: without any visual augmentation (\figref{renderings}, \level{Real}), in \AR with a realistic virtual hand superimposed on the real hand (\figref{renderings}, \level{Mixed}), and in \VR with the same virtual hand as an avatar (\figref{renderings}, \level{Virtual}).
%
Since vision is an important source of information and influence for texture perception \cite{bergmanntiest2007haptic,yanagisawa2015effects,vardar2019fingertip}, the touched surface was kept a uniform white so as not to influence the perception; thus only the visual aspect of the hand and the surrounding environment was changed.

@@ -34,9 +34,9 @@ All participants had normal or corrected-to-normal vision, none of them had a kn
%
One was left-handed while the rest were right-handed; they all performed the task with their right index finger.
%
In rating their experience with haptics, AR and VR (\enquote{I use it several times a year}), 12 were experienced with haptics, 5 with AR, and 10 with VR.
In rating their experience with haptics, \AR and \VR (\enquote{I use it several times a year}), 12 were experienced with haptics, 5 with \AR, and 10 with \VR.
%
Experiences were correlated between haptics and VR (\pearson{0.59}), and AR and VR (\pearson{0.67}) but not haptics and AR (\pearson{0.20}) nor haptics, AR, or VR with age (\pearson{0.05} to \pearson{0.12}).
Experiences were correlated between haptics and \VR (\pearson{0.59}), and \AR and \VR (\pearson{0.67}) but not haptics and \AR (\pearson{0.20}) nor haptics, \AR, or \VR with age (\pearson{0.05} to \pearson{0.12}).
%
Participants were recruited at the university on a voluntary basis.
%
@@ -45,7 +45,7 @@ They all signed an informed consent form before the user study and were unaware
\subsection{Apparatus}
\label{apparatus}

An experimental environment similar to that of \textcite{gaffary2017ar} was created to ensure a similar visual rendering in AR and VR (\figref{renderings}).
An experimental environment similar to that of \textcite{gaffary2017ar} was created to ensure a similar visual rendering in \AR and \VR (\figref{renderings}).
%
It consisted of a \qtyproduct{300 x 210 x 400}{\mm} medium-density fibreboard (MDF) box with a paper sheet glued inside, and a \qtyproduct{15 x 5}{\mm} rectangle printed on the sheet to delimit the area where the tactile textures were rendered.
%
@@ -65,11 +65,11 @@ Its size was adjusted to match the real hand of the participants before the expe
%
The visual rendering of the virtual hand and environment is described in \secref{virtual_real_alignment}.
%
%In the \level{Virtual} rendering, a cardboard mask (with holes for sensors) was attached to the headset to block the view of the real environment and simulate a VR headset (\figref{method/headset}).
%In the \level{Virtual} rendering, a cardboard mask (with holes for sensors) was attached to the headset to block the view of the real environment and simulate a \VR headset (\figref{method/headset}).
%
To ensure the same FoV in all \factor{Visual Rendering} conditions, a cardboard mask was attached to the AR headset (\figref{method/headset}).
To ensure the same FoV in all \factor{Visual Rendering} conditions, a cardboard mask was attached to the \AR headset (\figref{method/headset}).
%
In the \level{Virtual} rendering, the mask had holes only for the sensors, blocking the view of the real environment to simulate a VR headset.
In the \level{Virtual} rendering, the mask had holes only for the sensors, blocking the view of the real environment to simulate a \VR headset.
%
In the \level{Mixed} and \level{Real} conditions, the mask had two additional holes for the eyes that matched the FoV of the HoloLens~2 (\figref{method/headset}).
%
@@ -142,7 +142,7 @@ The user study was a within-subjects design with two factors:
\item \factor{Amplitude Difference}, consisting of the difference in amplitude between the comparison and the reference textures, with 6 levels: \qtylist{0; +-12.5; +-25.0; +-37.5}{\%}.
\end{itemize}

A trial consisted of a two-alternative forced choice (2AFC) task where a participant had to touch two virtual vibrotactile textures one after the other and decide which one was rougher.
A trial consisted of a \TIFC task where a participant had to touch two virtual vibrotactile textures one after the other and decide which one was rougher.
%
To avoid any order effect, the order of \factor{Visual Rendering} conditions was counterbalanced between participants using a balanced Latin square design.
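The counterbalancing above can be sketched with the standard Williams construction of a balanced Latin square (an assumption; the exact square used is not shown here). For an odd number of conditions, such as the three visual renderings, the square plus its mirror is needed to balance carryover, giving six participant orderings:

```python
def balanced_latin_square(n):
    """Williams design: row r = r, r+1, r-1, r+2, r-2, ... (mod n).

    For odd n, the reversed rows are appended so that every condition also
    follows every other condition equally often (first-order carryover).
    """
    rows = []
    for r in range(n):
        row, k = [r], 1
        while len(row) < n:
            row.append((r + k) % n)
            if len(row) < n:
                row.append((r - k) % n)
            k += 1
        rows.append(row)
    if n % 2 == 1:
        rows += [list(reversed(row)) for row in rows]
    return rows

orders = balanced_latin_square(3)  # 6 orderings, repeated over 20 participants
```

Each of the three conditions appears equally often in each serial position, and each condition precedes each other condition equally often.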
%

@@ -4,42 +4,42 @@
\subsection{Trial Measures}
\label{results_trials}

All measures from trials were analysed using linear mixed models (LMM) or generalised linear mixed models (GLMM) with \factor{Visual Rendering}, \factor{Amplitude Difference} and their interaction as within-participant factors, and by-participant random intercepts.
All measures from trials were analysed using \LMM or \GLMM with \factor{Visual Rendering}, \factor{Amplitude Difference} and their interaction as within-participant factors, and by-participant random intercepts.
%
Depending on the data, different random effect structures were tested.
%
Only the best converging models are reported, \ie those with the lowest Akaike Information Criterion (AIC) values.
%
Post-hoc pairwise comparisons were performed using Tukey's Honest Significant Difference (HSD) test.
Post-hoc pairwise comparisons were performed using Tukey's \HSD test.
%
Each estimate is reported with its 95\% confidence interval (CI) as follows: \ci{\textrm{lower limit}}{\textrm{upper limit}}.
Each estimate is reported with its 95\% \CI as follows: \ci{\textrm{lower limit}}{\textrm{upper limit}}.

\subsubsection{Discrimination Accuracy}
\label{discrimination_accuracy}

A GLMM was adjusted to the \response{Texture Choice} in the 2AFC vibrotactile texture roughness discrimination task, with by-participant random intercepts but no random slopes, and a probit link function (\figref{results/trial_predictions}).
A \GLMM was adjusted to the \response{Texture Choice} in the \TIFC vibrotactile texture roughness discrimination task, with by-participant random intercepts but no random slopes, and a probit link function (\figref{results/trial_predictions}).
%
The points of subjective equality (PSEs, see \figref{results/trial_pses}) and just-noticeable differences (JNDs, see \figref{results/trial_jnds}) for each visual rendering and their respective differences were estimated from the model, along with their corresponding 95\% CI, using a non-parametric bootstrap procedure (1000 samples).
The \PSEs (\figref{results/trial_pses}) and \JNDs (\figref{results/trial_jnds}) for each visual rendering and their respective differences were estimated from the model, along with their corresponding 95\% \CI, using a non-parametric bootstrap procedure (1000 samples).
%
The PSE represents the estimated amplitude difference at which the comparison texture was perceived as rougher than the reference texture 50\% of the time. %, \ie it is the accuracy of participants in discriminating vibrotactile roughness.
The \PSE represents the estimated amplitude difference at which the comparison texture was perceived as rougher than the reference texture 50\% of the time. %, \ie it is the accuracy of participants in discriminating vibrotactile roughness.
%
The \level{Real} rendering had the highest PSE (\percent{7.9} \ci{1.2}{4.1}) and was statistically significantly different from the \level{Mixed} rendering (\percent{1.9} \ci{-2.4}{6.1}) and from the \level{Virtual} rendering (\percent{5.1} \ci{2.4}{7.6}).
The \level{Real} rendering had the highest \PSE (\percent{7.9} \ci{1.2}{4.1}) and was statistically significantly different from the \level{Mixed} rendering (\percent{1.9} \ci{-2.4}{6.1}) and from the \level{Virtual} rendering (\percent{5.1} \ci{2.4}{7.6}).
%
The JND represents the estimated minimum amplitude difference between the comparison and reference textures that participants could perceive,
The \JND represents the estimated minimum amplitude difference between the comparison and reference textures that participants could perceive,
% \ie the sensitivity to vibrotactile roughness differences,
calculated at the 84th percentile of the predictions of the GLMM (\ie one standard deviation of the normal distribution) \cite{ernst2002humans}.
calculated at the 84th percentile of the predictions of the \GLMM (\ie one standard deviation of the normal distribution) \cite{ernst2002humans}.
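The geometry of these two definitions can be made concrete with a small sketch: fit a cumulative-normal (probit) psychometric function to choice proportions, read the PSE off the 50\% point, and take the JND as the distance from the PSE to the 84\% point, which equals the fitted standard deviation. This is a simplified per-condition fit on synthetic data, not the paper's GLMM with by-participant intercepts and bootstrap CIs:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, pse, sigma):
    """Cumulative-normal (probit) psychometric function."""
    return norm.cdf(x, loc=pse, scale=sigma)

# Amplitude-difference levels (%) and synthetic response proportions
# generated from a known ground truth, standing in for one rendering's data.
x = np.array([-37.5, -25.0, -12.5, 0.0, 12.5, 25.0, 37.5])
p = psychometric(x, 5.0, 30.0)

(pse, sigma), _ = curve_fit(psychometric, x, p, p0=(0.0, 20.0))
jnd = sigma  # the 84th-percentile point lies exactly one SD above the PSE
```

In the study these quantities were instead estimated from the mixed model's predictions; the sketch only illustrates why the 84th percentile corresponds to one standard deviation.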
%
The \level{Real} rendering had the lowest JND (\percent{26} \ci{23}{29}), the \level{Mixed} rendering had the highest (\percent{33} \ci{30}{37}), and the \level{Virtual} rendering was in between (\percent{30} \ci{28}{32}).
The \level{Real} rendering had the lowest \JND (\percent{26} \ci{23}{29}), the \level{Mixed} rendering had the highest (\percent{33} \ci{30}{37}), and the \level{Virtual} rendering was in between (\percent{30} \ci{28}{32}).
%
All pairwise differences were statistically significant.

\begin{subfigs}{discrimination_accuracy}{Results of the vibrotactile texture roughness discrimination task. }[
Curves represent predictions from the GLMM (probit link function), and points are estimated marginal means with non-parametric bootstrap 95\% confidence intervals.
Curves represent predictions from the \GLMM (probit link function), and points are estimated marginal means with non-parametric bootstrap 95\% confidence intervals.
][
\item Proportion of trials in which the comparison texture was perceived as rougher than the reference texture, as a function of the amplitude difference between the two textures and the visual rendering.
\item Estimated points of subjective equality (PSE) of each visual rendering.
\item Estimated \PSE of each visual rendering.
%, defined as the amplitude difference at which both reference and comparison textures are perceived to be equivalent, \ie the accuracy in discriminating vibrotactile roughness.
\item Estimated just-noticeable difference (JND) of each visual rendering.
\item Estimated \JND of each visual rendering.
%, defined as the minimum perceptual amplitude difference, \ie the sensitivity to vibrotactile roughness differences.
]
\subfig[0.85]{results/trial_predictions}\\
@@ -50,7 +50,7 @@ All pairwise differences were statistically significant.
\subsubsection{Response Time}
\label{response_time}

An LMM analysis of variance (AOV) with by-participant random slopes for \factor{Visual Rendering}, and a log transformation (as \response{Response Time} measures were gamma distributed), indicated a statistically significant effect of \factor{Visual Rendering} on \response{Response Time} (\anova{2}{18}{6.2}, \p{0.009}, see \figref{results/trial_response_times}).
An \LMM \ANOVA with by-participant random slopes for \factor{Visual Rendering}, and a log transformation (as \response{Response Time} measures were gamma distributed), indicated a statistically significant effect of \factor{Visual Rendering} on \response{Response Time} (\anova{2}{18}{6.2}, \p{0.009}, see \figref{results/trial_response_times}).
%
Participants took longer on average to respond with the \level{Virtual} rendering (\geomean{1.65}{s} \ci{1.59}{1.72}) than with the \level{Real} rendering (\geomean{1.38}{s} \ci{1.32}{1.43}), which is the only statistically significant difference (\ttest{19}{0.3}, \p{0.005}).
%
@@ -61,20 +61,20 @@ The \level{Mixed} rendering was in between (\geomean{1.56}{s} \ci{1.49}{1.63}).

The frames analysed were those in which the participants actively touched the comparison textures with a finger speed greater than \SI{1}{\mm\per\second}.
%
An LMM AOV with by-participant random slopes for \factor{Visual Rendering} indicated only one statistically significant effect, of \factor{Visual Rendering}, on the total distance travelled by the finger in a trial (\anova{2}{18}{3.9}, \p{0.04}, see \figref{results/trial_distances}).
An \LMM \ANOVA with by-participant random slopes for \factor{Visual Rendering} indicated only one statistically significant effect, of \factor{Visual Rendering}, on the total distance travelled by the finger in a trial (\anova{2}{18}{3.9}, \p{0.04}, see \figref{results/trial_distances}).
%
On average, participants explored a larger distance with the \level{Real} rendering (\geomean{20.0}{\cm} \ci{19.4}{20.7}) than with the \level{Virtual} rendering (\geomean{16.5}{\cm} \ci{15.8}{17.1}), which is the only statistically significant difference (\ttest{19}{1.2}, \p{0.03}), with the \level{Mixed} rendering (\geomean{17.4}{\cm} \ci{16.8}{18.0}) in between.
%
Another LMM AOV with by-trial and by-participant random intercepts but no random slopes indicated only one statistically significant effect, of \factor{Visual Rendering}, on \response{Finger Speed} (\anova{2}{2142}{2.0}, \pinf{0.001}, see \figref{results/trial_speeds}).
Another \LMM \ANOVA with by-trial and by-participant random intercepts but no random slopes indicated only one statistically significant effect, of \factor{Visual Rendering}, on \response{Finger Speed} (\anova{2}{2142}{2.0}, \pinf{0.001}, see \figref{results/trial_speeds}).
%
On average, the textures were explored with the highest speed with the \level{Real} rendering (\geomean{5.12}{\cm\per\second} \ci{5.08}{5.17}), the lowest with the \level{Virtual} rendering (\geomean{4.40}{\cm\per\second} \ci{4.35}{4.45}), and the \level{Mixed} rendering (\geomean{4.67}{\cm\per\second} \ci{4.63}{4.71}) in between.
%
All pairwise differences were statistically significant: \level{Real} \vs \level{Virtual} (\ttest{19}{1.17}, \pinf{0.001}), \level{Real} \vs \level{Mixed} (\ttest{19}{1.10}, \pinf{0.001}), and \level{Mixed} \vs \level{Virtual} (\ttest{19}{1.07}, \p{0.02}).
%
%This means that within the same time window on the same surface, participants explored the comparison texture on average at a greater distance and at a higher speed when in the real environment without visual representation of the hand (\level{Real} condition) than when in VR (\level{Virtual} condition).
%This means that within the same time window on the same surface, participants explored the comparison texture on average at a greater distance and at a higher speed when in the real environment without visual representation of the hand (\level{Real} condition) than when in \VR (\level{Virtual} condition).

\begin{subfigs}{results_finger}{Results of the performance metrics for the rendering condition. }[
Boxplots and geometric means with bootstrap 95\% confidence interval, with pairwise Tukey's HSD tests: * is \pinf{0.05}, ** is \pinf{0.01} and *** is \pinf{0.001}.
Boxplots and geometric means with bootstrap 95\% \CI, with Tukey's \HSD pairwise comparisons: * is \pinf{0.05}, ** is \pinf{0.01} and *** is \pinf{0.001}.
][
\item Response time at the end of a trial.
\item Distance travelled by the finger in a trial.
@@ -105,7 +105,7 @@ Overall, participants' sense of control over the virtual hand was very high (\re
%
The textures were also overall found to be very much caused by the finger movements (\response{Texture Agency}, \num{4.5 +- 1.0}) with a very low perceived latency (\response{Texture Latency}, \num{1.6 +- 0.8}), and to be quite realistic (\response{Texture Realism}, \num{3.6 +- 0.9}) and quite plausible (\response{Texture Plausibility}, \num{3.6 +- 1.0}).
%
Participants were mixed between feeling the vibrations on the surface or on the top of their finger (\response{Vibration Location}, \num{3.9 +- 1.7}); the distribution of scores was split between the two poles of the scale with the \level{Real} and \level{Mixed} renderings (42.5\% more on surface or on finger top, 15\% neutral), whereas there was a trend towards the top of the finger with the VR rendering (65\% \vs 25\% more on surface and 10\% neutral), although this difference was not statistically significant either.
Participants were mixed between feeling the vibrations on the surface or on the top of their finger (\response{Vibration Location}, \num{3.9 +- 1.7}); the distribution of scores was split between the two poles of the scale with the \level{Real} and \level{Mixed} renderings (42.5\% more on surface or on finger top, 15\% neutral), whereas there was a trend towards the top of the finger with the \VR rendering (65\% \vs 25\% more on surface and 10\% neutral), although this difference was not statistically significant either.
%
The vibrations were felt as slightly weak overall (\response{Vibration Strength}, \num{4.2 +- 1.1}), and the vibrotactile device was perceived as neither distracting (\response{Device Distraction}, \num{1.2 +- 0.4}) nor uncomfortable (\response{Device Discomfort}, \num{1.3 +- 0.6}).
%

@@ -3,40 +3,40 @@

%Interpret the findings in results, answer to the problem asked in the introduction, contrast with previous articles, draw possible implications. Give limitations of the study.

% But how different is the perception of the haptic augmentation in AR compared to VR, with a virtual hand instead of the real hand?
% The goal of this paper is to study the visual rendering of the hand (real or virtual) and its environment (AR or VR) on the perception of a tangible surface whose texture is augmented with a wearable vibrotactile device mounted on the finger.
% But how different is the perception of the haptic augmentation in \AR compared to \VR, with a virtual hand instead of the real hand?
% The goal of this paper is to study the visual rendering of the hand (real or virtual) and its environment (AR or \VR) on the perception of a tangible surface whose texture is augmented with a wearable vibrotactile device mounted on the finger.

The results showed a difference in vibrotactile roughness perception between the three visual rendering conditions.
%
Given the estimated point of subjective equality (PSE), the textures in the \level{Real} rendering were on average perceived as \enquote{rougher} than in the \level{Virtual} (\percent{-2.8}) and \level{Mixed} (\percent{-6.0}) renderings (\figref{results/trial_pses}).
Given the estimated \PSE, the textures in the \level{Real} rendering were on average perceived as \enquote{rougher} than in the \level{Virtual} (\percent{-2.8}) and \level{Mixed} (\percent{-6.0}) renderings (\figref{results/trial_pses}).
%
\textcite{gaffary2017ar} found a PSE difference in the same range between AR and VR for perceived stiffness, with the VR perceived as \enquote{stiffer} and the AR as \enquote{softer}.
\textcite{gaffary2017ar} found a \PSE difference in the same range between \AR and \VR for perceived stiffness, with the \VR perceived as \enquote{stiffer} and the \AR as \enquote{softer}.
%
%However, the difference between the \level{Virtual} and \level{Mixed} conditions was not significant.
%
Surprisingly, the PSE of the \level{Real} rendering was shifted to the right (to be "rougher", \percent{7.9}) compared to the reference texture, whereas the PSEs of the \level{Virtual} (\percent{5.1}) and \level{Mixed} (\percent{1.9}) renderings were closer to the reference texture, being perceived as \enquote{smoother} (\figref{results/trial_predictions}).
Surprisingly, the \PSE of the \level{Real} rendering was shifted to the right (to be "rougher", \percent{7.9}) compared to the reference texture, whereas the \PSEs of the \level{Virtual} (\percent{5.1}) and \level{Mixed} (\percent{1.9}) renderings were closer to the reference texture, being perceived as \enquote{smoother} (\figref{results/trial_predictions}).
%
The sensitivity of participants to roughness differences (just-noticeable differences, JND) also varied between all the visual renderings, with the \level{Real} rendering having the best JND (\percent{26}), followed by the \level{Virtual} (\percent{30}) and \level{Virtual} (\percent{33}) renderings (\figref{results/trial_jnds}).
The sensitivity of participants to roughness \JND also varied between all the visual renderings, with the \level{Real} rendering having the best \JND (\percent{26}), followed by the \level{Virtual} (\percent{30}) and \level{Virtual} (\percent{33}) renderings (\figref{results/trial_jnds}).
%
These JND values are in line with and at the upper end of the range of previous studies \cite{choi2013vibrotactile}, which may be due to the location of the actuator on the top of the middle phalanx of the finger, being less sensitive to vibration than the fingertip.
These \JNDs are in line with and at the upper end of the range of previous studies \cite{choi2013vibrotactile}, which may be due to the location of the actuator on the top of the middle phalanx of the finger, being less sensitive to vibration than the fingertip.
%
Thus, compared to no visual rendering (\level{Real}), the addition of a visual rendering of the hand or environment reduced the roughness sensitivity (JND) and the average roughness perception (PSE), as if the virtual haptic textures felt \enquote{smoother}.
Thus, compared to no visual rendering (\level{Real}), the addition of a visual rendering of the hand or environment reduced the roughness sensitivity (\JND) and the average roughness perception (\PSE), as if the virtual haptic textures felt \enquote{smoother}.

Differences in user behaviour were also observed between the visual renderings (but not between the haptic textures).
%
On average, participants responded faster (\percent{-16}), explored textures at a greater distance (\percent{+21}) and at a higher speed (\percent{+16}) without visual augmentation (\level{Real} rendering) than in VR (\level{Virtual} rendering) (\figref{results_finger}).
On average, participants responded faster (\percent{-16}), explored textures at a greater distance (\percent{+21}) and at a higher speed (\percent{+16}) without visual augmentation (\level{Real} rendering) than in \VR (\level{Virtual} rendering) (\figref{results_finger}).
%
The \level{Mixed} rendering, displaying both the real and virtual hands, was always in between, with no significant difference from the other two renderings.
%
This suggests that touching a virtual vibrotactile texture on a tangible surface with a virtual hand in VR is different from touching it with one's own hand: users were more cautious or less confident in their exploration in VR.
This suggests that touching a virtual vibrotactile texture on a tangible surface with a virtual hand in \VR is different from touching it with one's own hand: users were more cautious or less confident in their exploration in \VR.
%
This seems not due to the realism of the virtual hand or environment, nor the control of the virtual hand, that were all rated high to very high by the participants (\secref{questions}) in both the \level{Mixed} and \level{Virtual} renderings.
%
Very interestingly, the evaluation of the vibrotactile device and textures was also the same between the visual rendering, with a very high sensation of control, a good realism and a very low perceived latency of the textures (\secref{questions}).
%
However, the perceived latency of the virtual hand (\response{Hand Latency} question) seems to be related to the perceived roughness of the textures (with the PSEs).
However, the perceived latency of the virtual hand (\response{Hand Latency} question) seems to be related to the perceived roughness of the textures (with the \PSEs).
%
The \level{Mixed} rendering had the lowest PSE and highest perceived latency, the \level{Virtual} rendering had a higher PSE and lower perceived latency, and the \level{Real} rendering had the highest PSE and no virtual hand latency (as it was not displayed).
The \level{Mixed} rendering had the lowest \PSE and highest perceived latency, the \level{Virtual} rendering had a higher \PSE and lower perceived latency, and the \level{Real} rendering had the highest \PSE and no virtual hand latency (as it was not displayed).

Our visuo-haptic augmentation system aimed to provide a coherent multimodal virtual rendering integrated with the real environment.
%
@@ -58,10 +58,10 @@ The main limitation of our study is, of course, the absence of a visual represen
%
This is indeed a source of information as important as haptic sensations for perception for both real textures \cite{baumgartner2013visual,bergmanntiest2007haptic,vardar2019fingertip} and virtual textures \cite{degraen2019enhancing,gunther2022smooth}.
%
%Specifically, it remains to be investigated how to visually represent vibrotactile textures in an immersive AR or VR context, as the visuo-haptic coupling of such grating textures is not trivial \cite{unger2011roughness} even with real textures \cite{klatzky2003feeling}.
%Specifically, it remains to be investigated how to visually represent vibrotactile textures in an immersive \AR or \VR context, as the visuo-haptic coupling of such grating textures is not trivial \cite{unger2011roughness} even with real textures \cite{klatzky2003feeling}.
%
The interactions between the visual and haptic sensory modalities is complex and deserves further investigations, in particular in the context of visuo-haptic AR.
The interactions between the visual and haptic sensory modalities is complex and deserves further investigations, in particular in the context of visuo-haptic \AR.
%
Also, our study was conducted with an OST-AR headset, but the results may be different with a VST-AR headset.
%
More generally, we focused on the perception of roughness sensations using wearable haptics in AR \vs VR, but many other haptic feedbacks could be investigated using the same system and methodology, such as stiffness, friction, local deformations, or temperature.
More generally, we focused on the perception of roughness sensations using wearable haptics in \AR \vs \VR, but many other haptic feedbacks could be investigated using the same system and methodology, such as stiffness, friction, local deformations, or temperature.

@@ -5,9 +5,9 @@

We designed and implemented a system for rendering virtual haptic grating textures on a real tangible surface touched directly with the fingertip, using a wearable vibrotactile voice-coil device mounted on the middle phalanx of the finger. %, and allowing free explorative movements of the hand on the surface.
%
This tactile feedback was integrated with an immersive visual virtual environment, using an OST-AR headset, to provide users with a coherent multimodal visuo-haptic augmentation of the real environment, that can be switched between an AR and a VR view.
This tactile feedback was integrated with an immersive visual virtual environment, using an OST-AR headset, to provide users with a coherent multimodal visuo-haptic augmentation of the real environment, that can be switched between an \AR and a \VR view.
%
We investigated then with a psychophysical user study the effect of visual rendering of the hand and its environment on the roughness perception of the designed tactile texture augmentations: without visual augmentation (\level{Real} rendering), in AR with a realistic virtual hand superimposed on the real hand (\level{Mixed} rendering), and in VR with the same virtual hand as an avatar (\level{Virtual} rendering).
We investigated then with a psychophysical user study the effect of visual rendering of the hand and its environment on the roughness perception of the designed tactile texture augmentations: without visual augmentation (\level{Real} rendering), in \AR with a realistic virtual hand superimposed on the real hand (\level{Mixed} rendering), and in \VR with the same virtual hand as an avatar (\level{Virtual} rendering).
%
%Only the amplitude $A$ varied between the reference and comparison textures to create the different levels of roughness.
%

@@ -1,10 +1,10 @@
Augmented reality (AR) integrates virtual content into our real-world surroundings, giving the illusion of one unique environment and promising natural and seamless interactions with real and virtual objects.
%
Virtual object manipulation is particularly critical for useful and effective AR usage, such as in medical applications, training, or entertainment \cite{laviolajr20173d, kim2018revisiting}.
Virtual object manipulation is particularly critical for useful and effective \AR usage, such as in medical applications, training, or entertainment \cite{laviolajr20173d, kim2018revisiting}.
%
Hand tracking technologies \cite{xiao2018mrtouch}, grasping techniques \cite{holl2018efficient}, and real-time physics engines permit users to directly manipulate virtual objects with their bare hands as if they were real \cite{piumsomboon2014graspshell}, without requiring controllers \cite{krichenbauer2018augmented}, gloves \cite{prachyabrued2014visual}, or predefined gesture techniques \cite{piumsomboon2013userdefined, ha2014wearhand}.
%
Optical see-through AR (OST-AR) head-mounted displays (HMDs), such as the Microsoft HoloLens 2 or the Magic Leap, are particularly suited for this type of direct hand interaction \cite{kim2018revisiting}.
Optical see-through \AR (OST-AR) head-mounted displays (HMDs), such as the Microsoft HoloLens 2 or the Magic Leap, are particularly suited for this type of direct hand interaction \cite{kim2018revisiting}.

However, there are still several haptic and visual limitations that affect manipulation in OST-AR, degrading the user experience.
%
@@ -14,23 +14,23 @@ Similarly, it is challenging to ensure confident and realistic contact with a vi
%
These limitations also make it difficult to confidently move a grasped object towards a target \cite{maisto2017evaluation, meli2018combining}.

To address these haptic and visual limitations, we investigate two types of sensory feedback that are known to improve virtual interactions with hands, but have not been studied together in an AR context: visual hand rendering and delocalized haptic rendering.
To address these haptic and visual limitations, we investigate two types of sensory feedback that are known to improve virtual interactions with hands, but have not been studied together in an \AR context: visual hand rendering and delocalized haptic rendering.
%
A few works explored the effect of a visual hand rendering on interactions in AR by simulating mutual occlusion between the real hand and virtual objects \cite{ha2014wearhand, piumsomboon2014graspshell, al-kalbani2016analysis}, or displaying a 3D virtual hand model, semi-transparent \cite{ha2014wearhand, piumsomboon2014graspshell} or opaque \cite{blaga2017usability, yoon2020evaluating, saito2021contact}.
A few works explored the effect of a visual hand rendering on interactions in \AR by simulating mutual occlusion between the real hand and virtual objects \cite{ha2014wearhand, piumsomboon2014graspshell, al-kalbani2016analysis}, or displaying a 3D virtual hand model, semi-transparent \cite{ha2014wearhand, piumsomboon2014graspshell} or opaque \cite{blaga2017usability, yoon2020evaluating, saito2021contact}.
%
Indeed, some visual hand renderings are known to improve interactions or user experience in virtual reality (VR), where the real hand is not visible \cite{prachyabrued2014visual, argelaguet2016role, grubert2018effects, schwind2018touch, vanveldhuizen2021effect}.
%
However, the role of a visual hand rendering superimposed and seen above the real tracked hand has not yet been investigated in AR.
However, the role of a visual hand rendering superimposed and seen above the real tracked hand has not yet been investigated in \AR.
%
Conjointly, several studies have demonstrated that wearable haptics can significantly improve interactions performance and user experience in AR \cite{maisto2017evaluation, meli2018combining, sarac2022perceived}.
Conjointly, several studies have demonstrated that wearable haptics can significantly improve interactions performance and user experience in \AR \cite{maisto2017evaluation, meli2018combining, sarac2022perceived}.
%
But haptic rendering for AR remains a challenge as it is difficult to provide rich and realistic haptic sensations while limiting their negative impact on hand tracking \cite{pacchierotti2016hring} and keeping the fingertips and palm free to interact with the real environment \cite{lopes2018adding, teng2021touch, sarac2022perceived, palmer2022haptic}.
But haptic rendering for \AR remains a challenge as it is difficult to provide rich and realistic haptic sensations while limiting their negative impact on hand tracking \cite{pacchierotti2016hring} and keeping the fingertips and palm free to interact with the real environment \cite{lopes2018adding, teng2021touch, sarac2022perceived, palmer2022haptic}.
%
Therefore, the haptic feedback of the fingertip contact with the virtual environment needs to be rendered elsewhere on the hand, it is unclear which positioning should be preferred or which type of haptic feedback is best suited for manipulating virtual objects in AR.
Therefore, the haptic feedback of the fingertip contact with the virtual environment needs to be rendered elsewhere on the hand, it is unclear which positioning should be preferred or which type of haptic feedback is best suited for manipulating virtual objects in \AR.
%
A final question is whether one or the other of these (haptic or visual) hand renderings should be preferred \cite{maisto2017evaluation, meli2018combining}, or whether a combined visuo-haptic rendering is beneficial for users.
%
In fact, both hand renderings can provide sufficient sensory cues for efficient manipulation of virtual objects in AR, or conversely, they can be shown to be complementary.
In fact, both hand renderings can provide sufficient sensory cues for efficient manipulation of virtual objects in \AR, or conversely, they can be shown to be complementary.

In this paper, we investigate the role of the visuo-haptic rendering of the hand during 3D manipulation of virtual objects in OST-AR.
%
@@ -43,7 +43,7 @@ The main contributions of this work are:
\end{itemize}

\begin{subfigs}{hands}{The six visual hand renderings}[
Depicted as seen by the user through the AR headset during the two-finger grasping of a virtual cube.
Depicted as seen by the user through the \AR headset during the two-finger grasping of a virtual cube.
][
\item No visual rendering \emph{(None)}.
\item Cropped virtual content to enable hand-cube occlusion \emph{(Occlusion, Occl)}.

@@ -1,7 +1,7 @@
\section{User Study}
\label{method}

This first experiment aims to analyze whether the chosen visual hand rendering affects the performance and user experience of manipulating virtual objects with bare hands in AR.
This first experiment aims to analyze whether the chosen visual hand rendering affects the performance and user experience of manipulating virtual objects with bare hands in \AR.

\subsection{Visual Hand Renderings}
\label{hands}
@@ -19,7 +19,7 @@ However, while the real hand can of course penetrate virtual objects, the visual
\subsubsection{None~(\figref{method/hands-none})}
\label{hands_none}

As a reference, we considered no visual hand rendering, as is common in AR \cite{hettiarachchi2016annexing, blaga2017usability, xiao2018mrtouch, teng2021touch}.
As a reference, we considered no visual hand rendering, as is common in \AR \cite{hettiarachchi2016annexing, blaga2017usability, xiao2018mrtouch, teng2021touch}.
%
Users have no information about hand tracking and no feedback about contact with the virtual objects, other than their movement when touched.
%
@@ -55,12 +55,12 @@ This rendering schematically renders the joints and phalanges of the fingers wit
%
It can be seen as an extension of the Tips rendering to include the complete fingers articulations.
%
It is widely used in VR \cite{argelaguet2016role, schwind2018touch, chessa2019grasping} and AR \cite{blaga2017usability, yoon2020evaluating}, as it is considered simple yet rich and comprehensive.
It is widely used in \VR \cite{argelaguet2016role, schwind2018touch, chessa2019grasping} and \AR \cite{blaga2017usability, yoon2020evaluating}, as it is considered simple yet rich and comprehensive.

\subsubsection{Mesh (\figref{method/hands-mesh})}
\label{hands_mesh}

This rendering is a 3D semi-transparent ($a=0.2$) hand model, which is common in VR \cite{prachyabrued2014visual, argelaguet2016role, schwind2018touch, chessa2019grasping, yoon2020evaluating, vanveldhuizen2021effect}.
This rendering is a 3D semi-transparent ($a=0.2$) hand model, which is common in \VR \cite{prachyabrued2014visual, argelaguet2016role, schwind2018touch, chessa2019grasping, yoon2020evaluating, vanveldhuizen2021effect}.
%
It can be seen as a filled version of the Contour hand rendering, thus partially covering the view of the real hand.

@@ -163,7 +163,7 @@ This setup enabled a good and consistent tracking of the user's fingers.

First, participants were given a consent form that briefed them about the tasks and the protocol of the experiment.
%
Then, participants were asked to comfortably sit in front of a table and wear the HoloLens~2 headset as shown in~\figref{tasks}, perform the calibration of the visual hand size as described in~\secref{apparatus}, and complete a 2-minutes training to familiarize with the AR rendering and the two considered tasks.
Then, participants were asked to comfortably sit in front of a table and wear the HoloLens~2 headset as shown in~\figref{tasks}, perform the calibration of the visual hand size as described in~\secref{apparatus}, and complete a 2-minutes training to familiarize with the \AR rendering and the two considered tasks.
%
During this training, we did not use any of the six hand renderings we want to test, but rather a fully-opaque white hand rendering that completely occluded the real hand of the user.

@@ -182,9 +182,9 @@ None of the participants reported any deficiencies in their visual perception ab
%
Two subjects were left-handed, while the twenty-two other were right-handed; they all used their dominant hand during the trials.
%
Ten subjects had significant experience with VR (\enquote{I use it every week}), while the fourteen other reported little to no experience with VR.
Ten subjects had significant experience with \VR (\enquote{I use it every week}), while the fourteen other reported little to no experience with \VR.
%
Two subjects had significant experience with AR (\enquote{I use it every week}), while the twenty-two other reported little to no experience with AR.
Two subjects had significant experience with \AR (\enquote{I use it every week}), while the twenty-two other reported little to no experience with \AR.
%
Participants signed an informed consent, including the declaration of having no conflict of interest.

@@ -2,8 +2,8 @@
\label{results}

\begin{subfigs}{push_results}{Results of the push task performance metrics for each visual hand rendering. }[
Geometric means with bootstrap 95~\% confidence interval
and Tukey's HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
Geometric means with bootstrap 95~\% \CI
and Tukey's \HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
][
\item Time to complete a trial.
\item Number of contacts with the cube.
@@ -15,8 +15,8 @@
\end{subfigs}

\begin{subfigs}{grasp_results}{Results of the grasp task performance metrics for each visual hand rendering. }[
Geometric means with bootstrap 95~\% confidence interval
and Tukey's HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
Geometric means with bootstrap 95~\% \CI
and Tukey's \HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
][
\item Time to complete a trial.
\item Number of contacts with the cube.
@@ -29,11 +29,11 @@
\subfig[0.24]{results/Grasp-GripAperture-Hand-Overall-Means}
\end{subfigs}

Results of each trials measure were analyzed with a linear mixed model (LMM), with the order of the two manipulation tasks and the six visual hand renderings (Order), the visual hand renderings (Hand), the target volume position (Target), and their interactions as fixed effects and the Participant as random intercept.
Results of each trials measure were analyzed with a \LMM, with the order of the two manipulation tasks and the six visual hand renderings (Order), the visual hand renderings (Hand), the target volume position (Target), and their interactions as fixed effects and the Participant as random intercept.
%
For every LMM, residuals were tested with a Q-Q plot to confirm normality.
For every \LMM, residuals were tested with a Q-Q plot to confirm normality.
%
On statistically significant effects, estimated marginal means of the LMM were compared pairwise using Tukey's HSD test.
On statistically significant effects, estimated marginal means of the \LMM were compared pairwise using Tukey's \HSD test.
%
Only significant results were reported.
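As an editorial aside: the analysis described in the hunk above (fixed effects for Order, Hand, and Target plus their interactions, with Participant as a random intercept) can be written out explicitly. A sketch of the model, with notation assumed rather than taken from the thesis, for a response $y$ of participant $p$ in trial $t$:

```latex
% Sketch of the LMM described above (notation assumed, not from the thesis).
\[
y_{pt} = \beta_0
       + \beta_{\mathrm{Order}} + \beta_{\mathrm{Hand}} + \beta_{\mathrm{Target}}
       + (\text{interaction terms})
       + u_p + \varepsilon_{pt},
\qquad u_p \sim \mathcal{N}(0, \sigma_u^2),
\quad \varepsilon_{pt} \sim \mathcal{N}(0, \sigma^2).
\]
```

The Q-Q plot check mentioned in the text concerns the normality of the residuals $\varepsilon_{pt}$, and Tukey's HSD is then run pairwise on the estimated marginal means of the significant fixed effects.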
@@ -1,7 +1,7 @@
\section{Discussion}
\label{discussion}

We evaluated six visual hand renderings, as described in \secref{hands}, displayed on top of the real hand, in two virtual object manipulation tasks in AR.
We evaluated six visual hand renderings, as described in \secref{hands}, displayed on top of the real hand, in two virtual object manipulation tasks in \AR.

During the Push task, the Skeleton hand rendering was the fastest (\figref{results/Push-CompletionTime-Hand-Overall-Means}), as participants employed fewer and longer contacts to adjust the cube inside the target volume (\figref{results/Push-ContactsCount-Hand-Overall-Means} and \figref{results/Push-MeanContactTime-Hand-Overall-Means}).
%
@@ -11,9 +11,9 @@ However, during the Grasp task, despite no difference in completion time, provid
%
Indeed, participants found the None and Occlusion renderings less effective (\figref{results/Ranks-Grasp}) and less precise (\figref{questions}).
%
To understand whether the participants' previous experience might have played a role, we also carried out an additional statistical analysis considering VR experience as an additional between-subjects factor, \ie VR novices vs. VR experts (\enquote{I use it every week}, see \secref{participants}).
To understand whether the participants' previous experience might have played a role, we also carried out an additional statistical analysis considering \VR experience as an additional between-subjects factor, \ie \VR novices vs. \VR experts (\enquote{I use it every week}, see \secref{participants}).
%
We found no statistically significant differences when comparing the considered metrics between VR novices and experts.
We found no statistically significant differences when comparing the considered metrics between \VR novices and experts.

Interestingly, all visual hand renderings showed grip apertures very close to the size of the virtual cube, except for the None rendering (\figref{results/Grasp-GripAperture-Hand-Overall-Means}), with which participants applied stronger grasps, \ie less distance between the fingertips.
%
@@ -35,17 +35,17 @@ while others found that it gave them a better sense of the contact points and im
%
This result are consistent with \textcite{saito2021contact}, who found that displaying the points of contacts was beneficial for grasping a virtual object over an opaque visual hand overlay.

To summarize, when employing a visual hand rendering overlaying the real hand, participants were more performant and confident in manipulating virtual objects with bare hands in AR.
To summarize, when employing a visual hand rendering overlaying the real hand, participants were more performant and confident in manipulating virtual objects with bare hands in \AR.
%
These results contrast with similar manipulation studies, but in non-immersive, on-screen AR, where the presence of a visual hand rendering was found by participants to improve the usability of the interaction, but not their performance \cite{blaga2017usability,maisto2017evaluation,meli2018combining}.
These results contrast with similar manipulation studies, but in non-immersive, on-screen \AR, where the presence of a visual hand rendering was found by participants to improve the usability of the interaction, but not their performance \cite{blaga2017usability,maisto2017evaluation,meli2018combining}.
%
Our results show the most effective visual hand rendering to be the Skeleton one{. Participants appreciated that} it provided a detailed and precise view of the tracking of the real hand{, without} hiding or masking it.
%
Although the Contour and Mesh hand renderings were also highly rated, some participants felt that they were too visible and masked the real hand.
%
This result is in line with the results of virtual object manipulation in VR of \textcite{prachyabrued2014visual}, who found that the most effective visual hand rendering was a double representation of both the real tracked hand and a visual hand physically constrained by the virtual environment.
This result is in line with the results of virtual object manipulation in \VR of \textcite{prachyabrued2014visual}, who found that the most effective visual hand rendering was a double representation of both the real tracked hand and a visual hand physically constrained by the virtual environment.
%
This type of Skeleton rendering was also the one that provided the best sense of agency (control) in VR \cite{argelaguet2016role, schwind2018touch}.
This type of Skeleton rendering was also the one that provided the best sense of agency (control) in \VR \cite{argelaguet2016role, schwind2018touch}.

These results have of course some limitations as they only address limited types of manipulation tasks and visual hand characteristics, evaluated in a specific OST-AR setup.
|
||||
%
|
||||
@@ -55,4 +55,4 @@ Testing a wider range of virtual objects and more ecological tasks \eg stacking,
|
||||
%
|
||||
Similarly, a broader experimental study might shed light on the role of gender and age, as our subject pool was not sufficiently diverse in this respect.
|
||||
%
|
||||
However, we believe that the results presented here provide a rather interesting overview of the most promising approaches in AR manipulation.
However, we believe that the results presented here provide a rather interesting overview of the most promising approaches in \AR manipulation.

@@ -3,7 +3,7 @@

This paper presented two human subject studies aimed at better understanding the role of visuo-haptic rendering of the hand during virtual object manipulation in OST-AR.
%
The first experiment compared six visual hand renderings in two representative manipulation tasks in AR, \ie push-and-slide and grasp-and-place of a virtual object.
The first experiment compared six visual hand renderings in two representative manipulation tasks in \AR, \ie push-and-slide and grasp-and-place of a virtual object.
%
Results show that a visual hand rendering improved the performance, perceived effectiveness, and user confidence.
%

@@ -1,13 +1,13 @@
\section{User Study}
\label{method}

Providing haptic feedback during free-hand manipulation in AR is not trivial, as wearing haptic devices on the hand might affect the tracking capabilities of the system.
Providing haptic feedback during free-hand manipulation in \AR is not trivial, as wearing haptic devices on the hand might affect the tracking capabilities of the system.
%
Moreover, it is important to leave the user capable of interacting with both virtual and real objects, avoiding the use of haptic interfaces that cover the fingertips or palm.
%
For this reason, it is often considered beneficial to move the point of application of the haptic rendering elsewhere on the hand.% (\secref{haptics}).

This second experiment aims to evaluate whether a visuo-haptic hand rendering affects the performance and user experience of manipulation of virtual objects with bare hands in AR.
This second experiment aims to evaluate whether a visuo-haptic hand rendering affects the performance and user experience of manipulation of virtual objects with bare hands in \AR.
%
The chosen visuo-haptic hand renderings are the combination of the two most representative visual hand renderings established in the first experiment, \ie Skeleton and None, described in \secref[visual_hand]{hands}, with two contact vibration techniques provided at four delocalized positions on the hand.

@@ -80,8 +80,8 @@ Similarly, we designed the distance vibration technique (Dist) so that interpene
\end{subfigs}

\begin{subfigs}{push_results}{Results of the push task performance metrics. }[
Geometric means with bootstrap 95~\% confidence interval for each vibrotactile positioning (a, b and c) or visual hand rendering (d)
and Tukey's HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
Geometric means with bootstrap 95~\% \CI for each vibrotactile positioning (a, b and c) or visual hand rendering (d)
and Tukey's \HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
][
\item Time to complete a trial.
\item Number of contacts with the cube.
@@ -189,10 +189,10 @@ They all had a normal or corrected-to-normal vision.
%
Thirteen subjects participated also in the previous experiment.

Participants rated their expertise (\enquote{I use it more than once a year}) with VR, AR, and haptics in a pre-experiment questionnaire.
Participants rated their expertise (\enquote{I use it more than once a year}) with \VR, \AR, and haptics in a pre-experiment questionnaire.
%
There were twelve experienced with VR, eight experienced with AR, and ten experienced with haptics.
There were twelve experienced with \VR, eight experienced with \AR, and ten experienced with haptics.
%
VR and haptics expertise were highly correlated (\pearson{0.9}), as well as AR and haptics expertise (\pearson{0.6}).
\VR and haptics expertise were highly correlated (\pearson{0.9}), as well as \AR and haptics expertise (\pearson{0.6}).
%
Other expertise correlations were low ($r<0.35$).
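As a side note, the reported \pearson values are plain Pearson correlation coefficients between the per-participant expertise ratings. A minimal sketch, with hypothetical 1–5 ratings (the actual questionnaire data are not part of this document):

```python
import numpy as np

# Hypothetical 1-5 expertise ratings for twelve participants;
# the real questionnaire data are not reproduced here.
vr_expertise = np.array([5, 4, 5, 2, 1, 4, 5, 3, 2, 5, 4, 1])
haptics_expertise = np.array([5, 4, 4, 2, 1, 5, 5, 3, 2, 4, 4, 1])

# Pearson correlation coefficient between the two rating vectors
r = np.corrcoef(vr_expertise, haptics_expertise)[0, 1]
```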

@@ -31,11 +31,11 @@ Although the Distance technique provided additional feedback on the interpenetra

\figref{results_questions} shows the questionnaire results for each vibrotactile positioning.
%
Questionnaire results were analyzed using Aligned Rank Transform (ART) non-parametric analysis of variance (\secref{metrics}).
Questionnaire results were analyzed using \ART non-parametric \ANOVA (\secref{metrics}).
%
Statistically significant effects were further analyzed with post-hoc pairwise comparisons using Holm-Bonferroni adjustment.
%
Wilcoxon signed-rank tests were used for main effects and the ART contrasts procedure for interaction effects.
Wilcoxon signed-rank tests were used for main effects and the \ART contrasts procedure for interaction effects.
%
Only significant results are reported.
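The post-hoc procedure described above (paired Wilcoxon signed-rank tests, then Holm-Bonferroni adjustment over the family of comparisons) could be sketched as follows; the data and the extra p-values are placeholders, not the study's measurements:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Hypothetical paired questionnaire scores for two conditions (n = 20)
scores_a = rng.normal(5.0, 1.0, size=20)
scores_b = scores_a + rng.normal(0.8, 0.5, size=20)  # condition B rated higher

# Paired Wilcoxon signed-rank test for one main-effect comparison
stat, p = wilcoxon(scores_a, scores_b)

def holm_adjust(p_values):
    """Holm-Bonferroni step-down adjustment, returned in input order."""
    p_values = np.asarray(p_values, dtype=float)
    order = np.argsort(p_values)
    m = len(p_values)
    adjusted = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        # Multiply the k-th smallest p-value by (m - k) remaining tests,
        # enforcing monotonicity with the running maximum.
        running_max = max(running_max, (m - rank) * p_values[idx])
        adjusted[idx] = min(running_max, 1.0)
    return adjusted

# Adjust the whole family of pairwise p-values together
adjusted = holm_adjust([p, 0.04, 0.20])
```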

@@ -2,7 +2,7 @@
\label{results}

\begin{subfigs}{grasp_results}{Results of the grasp task performance metrics for each vibrotactile positioning. }[
Geometric means with bootstrap 95~\% confidence intervals and Tukey's HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
Geometric means with bootstrap 95~\% confidence intervals and Tukey's \HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
][
\item Time to complete a trial.
\item Number of contacts with the cube.
@@ -17,5 +17,5 @@

Results were analyzed in the same way as for the first experiment (\secref{results}).
%
The LMM were fitted with the order of the five vibrotactile positionings (Order), the vibrotactile positionings (Positioning), the visual hand rendering (Hand), the contact vibration techniques (Technique), and the target volume position (Target), and their interactions as fixed effects and Participant as random intercept.
The \LMM were fitted with the order of the five vibrotactile positionings (Order), the vibrotactile positionings (Positioning), the visual hand rendering (Hand), the contact vibration techniques (Technique), and the target volume position (Target), and their interactions as fixed effects and Participant as random intercept.

@@ -1,7 +1,7 @@
\section{Discussion}
\label{discussion}

We evaluated sixteen visuo-haptic renderings of the hand, in the same two virtual object manipulation tasks in AR as in the first experiment, as the combination of two vibrotactile contact techniques provided at four delocalized positions on the hand with the two most representative visual hand renderings established in the first experiment.
We evaluated sixteen visuo-haptic renderings of the hand, in the same two virtual object manipulation tasks in \AR as in the first experiment, as the combination of two vibrotactile contact techniques provided at four delocalized positions on the hand with the two most representative visual hand renderings established in the first experiment.

In the Push task, vibrotactile haptic hand rendering has been proven beneficial with the Proximal positioning, which registered a low completion time, but detrimental with the Fingertips positioning, which performed worse (\figref{results/Push-CompletionTime-Location-Overall-Means}) than the Proximal and Opposite (on the contralateral hand) positionings.
%
@@ -59,11 +59,11 @@ It is also worth noting that the improved hand tracking and grasp helper improve
%
This improvement could also be the reason for the smaller differences between the Skeleton and the None visual hand renderings in this second experiment.

In summary, the positioning of the vibrotactile haptic rendering of the hand affected the performance and experience of users manipulating virtual objects with their bare hands in AR.
In summary, the positioning of the vibrotactile haptic rendering of the hand affected the performance and experience of users manipulating virtual objects with their bare hands in \AR.
%
The closer the vibrotactile hand rendering was to the point of contact, the better it was perceived in terms of effectiveness, usefulness, and realism.
%
These subjective appreciations of wearable haptic hand rendering for manipulating virtual objects in AR were also observed by \textcite{maisto2017evaluation} and \textcite{meli2018combining}.
These subjective appreciations of wearable haptic hand rendering for manipulating virtual objects in \AR were also observed by \textcite{maisto2017evaluation} and \textcite{meli2018combining}.
%
However, the best performance was obtained with the farthest positioning on the contralateral hand, which is somewhat surprising.
%

@@ -11,6 +11,6 @@ However, the farthest positioning on the contralateral hand gave the best perfor
%
The visual hand rendering was perceived as less necessary than the vibrotactile haptic hand rendering, but still provided useful feedback on the hand tracking.

Future work will focus on including richer types of haptic feedback, such as pressure and skin stretch, analyzing the best compromise between well-rounded haptic feedback and wearability of the system with respect to AR constraints.
Future work will focus on including richer types of haptic feedback, such as pressure and skin stretch, analyzing the best compromise between well-rounded haptic feedback and wearability of the system with respect to \AR constraints.
%
As delocalizing haptic feedback seems to be a simple but very promising approach for haptic-enabled AR, we will keep including this dimension in our future studies, even when considering other types of haptic sensations.
As delocalizing haptic feedback seems to be a simple but very promising approach for haptic-enabled \AR, we will keep including this dimension in our future studies, even when considering other types of haptic sensations.

@@ -42,12 +42,18 @@
\acronym[ThreeD]{3D}{three-dimensional}
\acronym{AE}{augmented environment}
\acronym{AC}{alternating current}
\acronym{ANOVA}{analysis of variance}
\acronym{ART}{aligned rank transform}
\acronym{AR}{augmented reality}
\acronym{CI}{confidence interval}
\acronym{DC}{direct current}
\acronym{DoF}{degree of freedom}
\acronym{ERM}{eccentric rotating mass}
\acronym{GLMM}{generalized linear mixed models}
\acronym{HSD}{honest significant difference}
\acronym{JND}{just noticeable difference}
\acronym{LRA}{linear resonant actuator}
\acronym{LMM}{linear mixed models}
\acronym{MLE}{maximum-likelihood estimation}
\acronym{MR}{mixed reality}
\acronym{OST}{optical see-through}