Fix acronyms
 Augmented reality (AR) integrates virtual content into our real-world surroundings, giving the illusion of a single, unified environment and promising natural and seamless interactions with real and virtual objects.
 %
-Virtual object manipulation is particularly critical for useful and effective AR usage, such as in medical applications, training, or entertainment \cite{laviolajr20173d, kim2018revisiting}.
+Virtual object manipulation is particularly critical for useful and effective \AR usage, such as in medical applications, training, or entertainment \cite{laviolajr20173d, kim2018revisiting}.
 %
 Hand tracking technologies \cite{xiao2018mrtouch}, grasping techniques \cite{holl2018efficient}, and real-time physics engines permit users to directly manipulate virtual objects with their bare hands as if they were real \cite{piumsomboon2014graspshell}, without requiring controllers \cite{krichenbauer2018augmented}, gloves \cite{prachyabrued2014visual}, or predefined gesture techniques \cite{piumsomboon2013userdefined, ha2014wearhand}.
 %
-Optical see-through AR (OST-AR) head-mounted displays (HMDs), such as the Microsoft HoloLens 2 or the Magic Leap, are particularly suited for this type of direct hand interaction \cite{kim2018revisiting}.
+Optical see-through \AR (OST-AR) head-mounted displays (HMDs), such as the Microsoft HoloLens 2 or the Magic Leap, are particularly suited for this type of direct hand interaction \cite{kim2018revisiting}.

 However, there are still several haptic and visual limitations that affect manipulation in OST-AR, degrading the user experience.
 %
@@ -14,23 +14,23 @@ Similarly, it is challenging to ensure confident and realistic contact with a vi
 %
 These limitations also make it difficult to confidently move a grasped object towards a target \cite{maisto2017evaluation, meli2018combining}.

-To address these haptic and visual limitations, we investigate two types of sensory feedback that are known to improve virtual interactions with hands, but have not been studied together in an AR context: visual hand rendering and delocalized haptic rendering.
+To address these haptic and visual limitations, we investigate two types of sensory feedback that are known to improve virtual interactions with hands, but have not been studied together in an \AR context: visual hand rendering and delocalized haptic rendering.
 %
-A few works have explored the effect of a visual hand rendering on interactions in AR by simulating mutual occlusion between the real hand and virtual objects \cite{ha2014wearhand, piumsomboon2014graspshell, al-kalbani2016analysis}, or displaying a 3D virtual hand model, semi-transparent \cite{ha2014wearhand, piumsomboon2014graspshell} or opaque \cite{blaga2017usability, yoon2020evaluating, saito2021contact}.
+A few works have explored the effect of a visual hand rendering on interactions in \AR by simulating mutual occlusion between the real hand and virtual objects \cite{ha2014wearhand, piumsomboon2014graspshell, al-kalbani2016analysis}, or displaying a 3D virtual hand model, semi-transparent \cite{ha2014wearhand, piumsomboon2014graspshell} or opaque \cite{blaga2017usability, yoon2020evaluating, saito2021contact}.
 %
 Indeed, some visual hand renderings are known to improve interactions or user experience in virtual reality (VR), where the real hand is not visible \cite{prachyabrued2014visual, argelaguet2016role, grubert2018effects, schwind2018touch, vanveldhuizen2021effect}.
 %
-However, the role of a visual hand rendering superimposed and seen above the real tracked hand has not yet been investigated in AR.
+However, the role of a visual hand rendering superimposed and seen above the real tracked hand has not yet been investigated in \AR.
 %
-Conjointly, several studies have demonstrated that wearable haptics can significantly improve interaction performance and user experience in AR \cite{maisto2017evaluation, meli2018combining, sarac2022perceived}.
+Conjointly, several studies have demonstrated that wearable haptics can significantly improve interaction performance and user experience in \AR \cite{maisto2017evaluation, meli2018combining, sarac2022perceived}.
 %
-But haptic rendering for AR remains a challenge as it is difficult to provide rich and realistic haptic sensations while limiting their negative impact on hand tracking \cite{pacchierotti2016hring} and keeping the fingertips and palm free to interact with the real environment \cite{lopes2018adding, teng2021touch, sarac2022perceived, palmer2022haptic}.
+But haptic rendering for \AR remains a challenge as it is difficult to provide rich and realistic haptic sensations while limiting their negative impact on hand tracking \cite{pacchierotti2016hring} and keeping the fingertips and palm free to interact with the real environment \cite{lopes2018adding, teng2021touch, sarac2022perceived, palmer2022haptic}.
 %
-Therefore, the haptic feedback of the fingertip contact with the virtual environment needs to be rendered elsewhere on the hand, yet it is unclear which positioning should be preferred or which type of haptic feedback is best suited for manipulating virtual objects in AR.
+Therefore, the haptic feedback of the fingertip contact with the virtual environment needs to be rendered elsewhere on the hand, yet it is unclear which positioning should be preferred or which type of haptic feedback is best suited for manipulating virtual objects in \AR.
 %
 A final question is whether one or the other of these (haptic or visual) hand renderings should be preferred \cite{maisto2017evaluation, meli2018combining}, or whether a combined visuo-haptic rendering is beneficial for users.
 %
-In fact, both hand renderings can provide sufficient sensory cues for efficient manipulation of virtual objects in AR, or, conversely, they may prove complementary.
+In fact, both hand renderings can provide sufficient sensory cues for efficient manipulation of virtual objects in \AR, or, conversely, they may prove complementary.

 In this paper, we investigate the role of the visuo-haptic rendering of the hand during 3D manipulation of virtual objects in OST-AR.
 %
@@ -43,7 +43,7 @@ The main contributions of this work are:
 \end{itemize}

 \begin{subfigs}{hands}{The six visual hand renderings}[
-Depicted as seen by the user through the AR headset during the two-finger grasping of a virtual cube.
+Depicted as seen by the user through the \AR headset during the two-finger grasping of a virtual cube.
 ][
 \item No visual rendering \emph{(None)}.
 \item Cropped virtual content to enable hand-cube occlusion \emph{(Occlusion, Occl)}.
@@ -1,7 +1,7 @@
 \section{User Study}
 \label{method}

-This first experiment aims to analyze whether the chosen visual hand rendering affects the performance and user experience of manipulating virtual objects with bare hands in AR.
+This first experiment aims to analyze whether the chosen visual hand rendering affects the performance and user experience of manipulating virtual objects with bare hands in \AR.

 \subsection{Visual Hand Renderings}
 \label{hands}
@@ -19,7 +19,7 @@ However, while the real hand can of course penetrate virtual objects, the visual
 \subsubsection{None~(\figref{method/hands-none})}
 \label{hands_none}

-As a reference, we considered no visual hand rendering, as is common in AR \cite{hettiarachchi2016annexing, blaga2017usability, xiao2018mrtouch, teng2021touch}.
+As a reference, we considered no visual hand rendering, as is common in \AR \cite{hettiarachchi2016annexing, blaga2017usability, xiao2018mrtouch, teng2021touch}.
 %
 Users have no information about hand tracking and no feedback about contact with the virtual objects, other than their movement when touched.
 %
@@ -55,12 +55,12 @@ This rendering schematically renders the joints and phalanges of the fingers wit
 %
 It can be seen as an extension of the Tips rendering to include the complete finger articulations.
 %
-It is widely used in VR \cite{argelaguet2016role, schwind2018touch, chessa2019grasping} and AR \cite{blaga2017usability, yoon2020evaluating}, as it is considered simple yet rich and comprehensive.
+It is widely used in \VR \cite{argelaguet2016role, schwind2018touch, chessa2019grasping} and \AR \cite{blaga2017usability, yoon2020evaluating}, as it is considered simple yet rich and comprehensive.

 \subsubsection{Mesh (\figref{method/hands-mesh})}
 \label{hands_mesh}

-This rendering is a 3D semi-transparent ($\alpha=0.2$) hand model, which is common in VR \cite{prachyabrued2014visual, argelaguet2016role, schwind2018touch, chessa2019grasping, yoon2020evaluating, vanveldhuizen2021effect}.
+This rendering is a 3D semi-transparent ($\alpha=0.2$) hand model, which is common in \VR \cite{prachyabrued2014visual, argelaguet2016role, schwind2018touch, chessa2019grasping, yoon2020evaluating, vanveldhuizen2021effect}.
 %
 It can be seen as a filled version of the Contour hand rendering, thus partially covering the view of the real hand.
@@ -163,7 +163,7 @@ This setup enabled a good and consistent tracking of the user's fingers.

 First, participants were given a consent form that briefed them about the tasks and the protocol of the experiment.
 %
-Then, participants were asked to comfortably sit in front of a table and wear the HoloLens~2 headset as shown in~\figref{tasks}, perform the calibration of the visual hand size as described in~\secref{apparatus}, and complete a 2-minute training to familiarize themselves with the AR rendering and the two considered tasks.
+Then, participants were asked to comfortably sit in front of a table and wear the HoloLens~2 headset as shown in~\figref{tasks}, perform the calibration of the visual hand size as described in~\secref{apparatus}, and complete a 2-minute training to familiarize themselves with the \AR rendering and the two considered tasks.
 %
 During this training, we did not use any of the six hand renderings we wanted to test, but rather a fully opaque white hand rendering that completely occluded the real hand of the user.
@@ -182,9 +182,9 @@ None of the participants reported any deficiencies in their visual perception ab
 %
 Two subjects were left-handed, while the twenty-two others were right-handed; they all used their dominant hand during the trials.
 %
-Ten subjects had significant experience with VR (\enquote{I use it every week}), while the fourteen others reported little to no experience with VR.
+Ten subjects had significant experience with \VR (\enquote{I use it every week}), while the fourteen others reported little to no experience with \VR.
 %
-Two subjects had significant experience with AR (\enquote{I use it every week}), while the twenty-two others reported little to no experience with AR.
+Two subjects had significant experience with \AR (\enquote{I use it every week}), while the twenty-two others reported little to no experience with \AR.
 %
 Participants signed an informed consent form, including the declaration of having no conflict of interest.
@@ -2,8 +2,8 @@
 \label{results}

 \begin{subfigs}{push_results}{Results of the push task performance metrics for each visual hand rendering. }[
-Geometric means with bootstrap 95~\% confidence interval
-and Tukey's HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
+Geometric means with bootstrap 95~\% \CI
+and Tukey's \HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
 ][
 \item Time to complete a trial.
 \item Number of contacts with the cube.
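
For reference, a minimal sketch of the "geometric mean with bootstrap 95 % CI" summary these captions describe, assuming a hypothetical array of per-trial completion times (Python/SciPy; not the paper's actual analysis code):

```python
# Geometric mean with a bootstrap 95% confidence interval, as reported
# in the figures. The data values below are hypothetical placeholders.
import numpy as np
from scipy.stats import gmean, bootstrap

times = np.array([2.1, 3.4, 2.8, 5.0, 3.1, 2.6])  # hypothetical trial times (s)

res = bootstrap((times,), gmean, confidence_level=0.95,
                n_resamples=9999, method="BCa")
ci = res.confidence_interval
print(f"gmean = {gmean(times):.2f} s, 95% CI = [{ci.low:.2f}, {ci.high:.2f}]")
```
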
@@ -15,8 +15,8 @@
 \end{subfigs}

 \begin{subfigs}{grasp_results}{Results of the grasp task performance metrics for each visual hand rendering. }[
-Geometric means with bootstrap 95~\% confidence interval
-and Tukey's HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
+Geometric means with bootstrap 95~\% \CI
+and Tukey's \HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
 ][
 \item Time to complete a trial.
 \item Number of contacts with the cube.
@@ -29,11 +29,11 @@
 \subfig[0.24]{results/Grasp-GripAperture-Hand-Overall-Means}
 \end{subfigs}

-Results of each trial measure were analyzed with a linear mixed model (LMM), with the order of the two manipulation tasks and the six visual hand renderings (Order), the visual hand renderings (Hand), the target volume position (Target), and their interactions as fixed effects and the Participant as a random intercept.
+Results of each trial measure were analyzed with a \LMM, with the order of the two manipulation tasks and the six visual hand renderings (Order), the visual hand renderings (Hand), the target volume position (Target), and their interactions as fixed effects and the Participant as a random intercept.
 %
-For every LMM, residuals were tested with a Q-Q plot to confirm normality.
+For every \LMM, residuals were tested with a Q-Q plot to confirm normality.
 %
-On statistically significant effects, estimated marginal means of the LMM were compared pairwise using Tukey's HSD test.
+On statistically significant effects, estimated marginal means of the \LMM were compared pairwise using Tukey's \HSD test.
 %
 Only significant results were reported.
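
A minimal sketch of this pipeline, assuming a hypothetical long-format table of per-trial measures (column names invented here); Python's statsmodels stands in for the authors' unspecified tooling, and the Tukey step below compares raw group means rather than true estimated marginal means:

```python
# LMM with Order, Hand, Target (and interactions) as fixed effects and a
# per-participant random intercept; Q-Q plot of residuals; pairwise
# comparisons. All column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("trials.csv")  # hypothetical: Time, Order, Hand, Target, Participant

fit = smf.mixedlm("Time ~ Order * Hand * Target", data=df,
                  groups=df["Participant"]).fit()
print(fit.summary())

sm.qqplot(fit.resid, line="45")  # visual normality check of the residuals

# Crude stand-in for Tukey's HSD on estimated marginal means (statsmodels
# has no direct emmeans equivalent; this compares raw group means).
print(pairwise_tukeyhsd(df["Time"], df["Hand"]))
```
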
@@ -1,7 +1,7 @@
 \section{Discussion}
 \label{discussion}

-We evaluated six visual hand renderings, as described in \secref{hands}, displayed on top of the real hand, in two virtual object manipulation tasks in AR.
+We evaluated six visual hand renderings, as described in \secref{hands}, displayed on top of the real hand, in two virtual object manipulation tasks in \AR.

 During the Push task, the Skeleton hand rendering was the fastest (\figref{results/Push-CompletionTime-Hand-Overall-Means}), as participants employed fewer and longer contacts to adjust the cube inside the target volume (\figref{results/Push-ContactsCount-Hand-Overall-Means} and \figref{results/Push-MeanContactTime-Hand-Overall-Means}).
 %
@@ -11,9 +11,9 @@ However, during the Grasp task, despite no difference in completion time, provid
 %
 Indeed, participants found the None and Occlusion renderings less effective (\figref{results/Ranks-Grasp}) and less precise (\figref{questions}).
 %
-To understand whether the participants' previous experience might have played a role, we also carried out an additional statistical analysis considering VR experience as an additional between-subjects factor, \ie VR novices vs. VR experts (\enquote{I use it every week}, see \secref{participants}).
+To understand whether the participants' previous experience might have played a role, we also carried out an additional statistical analysis considering \VR experience as an additional between-subjects factor, \ie \VR novices vs. \VR experts (\enquote{I use it every week}, see \secref{participants}).
 %
-We found no statistically significant differences when comparing the considered metrics between VR novices and experts.
+We found no statistically significant differences when comparing the considered metrics between \VR novices and experts.

 Interestingly, all visual hand renderings showed grip apertures very close to the size of the virtual cube, except for the None rendering (\figref{results/Grasp-GripAperture-Hand-Overall-Means}), with which participants applied stronger grasps, \ie less distance between the fingertips.
 %
@@ -35,17 +35,17 @@ while others found that it gave them a better sense of the contact points and im
 %
 This result is consistent with \textcite{saito2021contact}, who found that displaying the points of contact was beneficial for grasping a virtual object over an opaque visual hand overlay.

-To summarize, when employing a visual hand rendering overlaying the real hand, participants performed better and were more confident in manipulating virtual objects with bare hands in AR.
+To summarize, when employing a visual hand rendering overlaying the real hand, participants performed better and were more confident in manipulating virtual objects with bare hands in \AR.
 %
-These results contrast with similar manipulation studies conducted in non-immersive, on-screen AR, where the presence of a visual hand rendering was found by participants to improve the usability of the interaction, but not their performance \cite{blaga2017usability,maisto2017evaluation,meli2018combining}.
+These results contrast with similar manipulation studies conducted in non-immersive, on-screen \AR, where the presence of a visual hand rendering was found by participants to improve the usability of the interaction, but not their performance \cite{blaga2017usability,maisto2017evaluation,meli2018combining}.
 %
 Our results show the most effective visual hand rendering to be the Skeleton one. Participants appreciated that it provided a detailed and precise view of the tracking of the real hand, without hiding or masking it.
 %
 Although the Contour and Mesh hand renderings were also highly rated, some participants felt that they were too visible and masked the real hand.
 %
-This result is in line with the results of virtual object manipulation in VR of \textcite{prachyabrued2014visual}, who found that the most effective visual hand rendering was a double representation of both the real tracked hand and a visual hand physically constrained by the virtual environment.
+This result is in line with the results of virtual object manipulation in \VR of \textcite{prachyabrued2014visual}, who found that the most effective visual hand rendering was a double representation of both the real tracked hand and a visual hand physically constrained by the virtual environment.
 %
-This type of Skeleton rendering was also the one that provided the best sense of agency (control) in VR \cite{argelaguet2016role, schwind2018touch}.
+This type of Skeleton rendering was also the one that provided the best sense of agency (control) in \VR \cite{argelaguet2016role, schwind2018touch}.

 These results have, of course, some limitations, as they only address limited types of manipulation tasks and visual hand characteristics, evaluated in a specific OST-AR setup.
 %
@@ -55,4 +55,4 @@ Testing a wider range of virtual objects and more ecological tasks \eg stacking,
 %
 Similarly, a broader experimental study might shed light on the role of gender and age, as our subject pool was not sufficiently diverse in this respect.
 %
-However, we believe that the results presented here provide a rather interesting overview of the most promising approaches in AR manipulation.
+However, we believe that the results presented here provide a rather interesting overview of the most promising approaches in \AR manipulation.
@@ -3,7 +3,7 @@

 This paper presented two human-subject studies aimed at better understanding the role of visuo-haptic rendering of the hand during virtual object manipulation in OST-AR.
 %
-The first experiment compared six visual hand renderings in two representative manipulation tasks in AR, \ie push-and-slide and grasp-and-place of a virtual object.
+The first experiment compared six visual hand renderings in two representative manipulation tasks in \AR, \ie push-and-slide and grasp-and-place of a virtual object.
 %
 Results show that a visual hand rendering improved performance, perceived effectiveness, and user confidence.
 %
@@ -1,13 +1,13 @@
 \section{User Study}
 \label{method}

-Providing haptic feedback during free-hand manipulation in AR is not trivial, as wearing haptic devices on the hand might affect the tracking capabilities of the system.
+Providing haptic feedback during free-hand manipulation in \AR is not trivial, as wearing haptic devices on the hand might affect the tracking capabilities of the system.
 %
 Moreover, it is important to leave the user capable of interacting with both virtual and real objects, avoiding the use of haptic interfaces that cover the fingertips or palm.
 %
 For this reason, it is often considered beneficial to move the point of application of the haptic rendering elsewhere on the hand.% (\secref{haptics}).

-This second experiment aims to evaluate whether a visuo-haptic hand rendering affects the performance and user experience of manipulating virtual objects with bare hands in AR.
+This second experiment aims to evaluate whether a visuo-haptic hand rendering affects the performance and user experience of manipulating virtual objects with bare hands in \AR.
 %
 The chosen visuo-haptic hand renderings are the combination of the two most representative visual hand renderings established in the first experiment, \ie Skeleton and None, described in \secref[visual_hand]{hands}, with two contact vibration techniques provided at four delocalized positions on the hand.
@@ -80,8 +80,8 @@ Similarly, we designed the distance vibration technique (Dist) so that interpene
 \end{subfigs}

 \begin{subfigs}{push_results}{Results of the grasp task performance metrics. }[
-Geometric means with bootstrap 95~\% confidence interval for each vibrotactile positioning (a, b and c) or visual hand rendering (d)
-and Tukey's HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
+Geometric means with bootstrap 95~\% \CI for each vibrotactile positioning (a, b and c) or visual hand rendering (d)
+and Tukey's \HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
 ][
 \item Time to complete a trial.
 \item Number of contacts with the cube.
@@ -189,10 +189,10 @@ They all had a normal or corrected-to-normal vision.
 %
 Thirteen subjects also participated in the previous experiment.

-Participants rated their expertise (\enquote{I use it more than once a year}) with VR, AR, and haptics in a pre-experiment questionnaire.
+Participants rated their expertise (\enquote{I use it more than once a year}) with \VR, \AR, and haptics in a pre-experiment questionnaire.
 %
-Twelve subjects were experienced with VR, eight with AR, and ten with haptics.
+Twelve subjects were experienced with \VR, eight with \AR, and ten with haptics.
 %
-VR and haptics expertise were highly correlated (\pearson{0.9}), as were AR and haptics expertise (\pearson{0.6}).
+\VR and haptics expertise were highly correlated (\pearson{0.9}), as were \AR and haptics expertise (\pearson{0.6}).
 %
 Other expertise correlations were low ($r<0.35$).
@@ -31,11 +31,11 @@ Although the Distance technique provided additional feedback on the interpenetra

 \figref{results_questions} shows the questionnaire results for each vibrotactile positioning.
 %
-Questionnaire results were analyzed using Aligned Rank Transform (ART) non-parametric analysis of variance (\secref{metrics}).
+Questionnaire results were analyzed using \ART non-parametric \ANOVA (\secref{metrics}).
 %
 Statistically significant effects were further analyzed with post-hoc pairwise comparisons with Holm-Bonferroni adjustment.
 %
-Wilcoxon signed-rank tests were used for main effects and the ART contrasts procedure for interaction effects.
+Wilcoxon signed-rank tests were used for main effects and the \ART contrasts procedure for interaction effects.
 %
 Only significant results are reported.
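
A minimal sketch of the post-hoc step only, with hypothetical per-participant ratings and partly assumed condition names (the ART ANOVA itself is typically run with a dedicated package such as ARTool and is not reproduced here):

```python
# Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni correction.
# The rating vectors are hypothetical placeholders: one value per
# participant and condition, paired by participant.
from itertools import combinations
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

ratings = {  # hypothetical 7-point questionnaire scores
    "Fingertips": [6, 5, 7, 6, 5, 6, 7, 5],
    "Proximal":   [5, 5, 6, 6, 4, 5, 6, 5],
    "Wrist":      [4, 5, 5, 4, 4, 5, 5, 4],  # fourth positioning name assumed
    "Opposite":   [3, 4, 4, 3, 4, 3, 4, 3],
}

pairs = list(combinations(ratings, 2))
pvals = [wilcoxon(ratings[a], ratings[b]).pvalue for a, b in pairs]
reject, p_adj, _, _ = multipletests(pvals, method="holm")
for (a, b), p, sig in zip(pairs, p_adj, reject):
    print(f"{a} vs {b}: adjusted p = {p:.3f}{' *' if sig else ''}")
```
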
@@ -2,7 +2,7 @@
 \label{results}

 \begin{subfigs}{grasp_results}{Results of the grasp task performance metrics for each vibrotactile positioning. }[
-Geometric means with bootstrap 95~\% confidence intervals and Tukey's HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
+Geometric means with bootstrap 95~\% confidence intervals and Tukey's \HSD pairwise comparisons: *** is \pinf{0.001}, ** is \pinf{0.01}, and * is \pinf{0.05}.
 ][
 \item Time to complete a trial.
 \item Number of contacts with the cube.
@@ -17,5 +17,5 @@

 Results were analyzed similarly to the first experiment (\secref{results}).
 %
-Each LMM was fitted with the order of the five vibrotactile positionings (Order), the vibrotactile positionings (Positioning), the visual hand rendering (Hand), the contact vibration techniques (Technique), and the target volume position (Target), and their interactions as fixed effects and Participant as a random intercept.
+Each \LMM was fitted with the order of the five vibrotactile positionings (Order), the vibrotactile positionings (Positioning), the visual hand rendering (Hand), the contact vibration techniques (Technique), and the target volume position (Target), and their interactions as fixed effects and Participant as a random intercept.
@@ -1,7 +1,7 @@
 \section{Discussion}
 \label{discussion}

-We evaluated sixteen visuo-haptic renderings of the hand, in the same two virtual object manipulation tasks in AR as in the first experiment, as the combination of two vibrotactile contact techniques provided at four delocalized positions on the hand with the two most representative visual hand renderings established in the first experiment.
+We evaluated sixteen visuo-haptic renderings of the hand, in the same two virtual object manipulation tasks in \AR as in the first experiment, as the combination of two vibrotactile contact techniques provided at four delocalized positions on the hand with the two most representative visual hand renderings established in the first experiment.

 In the Push task, vibrotactile haptic hand rendering proved beneficial with the Proximal positioning, which registered a low completion time, but detrimental with the Fingertips positioning, which performed worse (\figref{results/Push-CompletionTime-Location-Overall-Means}) than the Proximal and Opposite (on the contralateral hand) positionings.
 %
@@ -59,11 +59,11 @@ It is also worth noting that the improved hand tracking and grasp helper improve
 %
 This improvement could also be the reason for the smaller differences between the Skeleton and the None visual hand renderings in this second experiment.

-In summary, the positioning of the vibrotactile haptic rendering of the hand affected the performance and experience of users manipulating virtual objects with their bare hands in AR.
+In summary, the positioning of the vibrotactile haptic rendering of the hand affected the performance and experience of users manipulating virtual objects with their bare hands in \AR.
 %
 The closer the vibrotactile hand rendering was to the point of contact, the better it was perceived in terms of effectiveness, usefulness, and realism.
 %
-These subjective appreciations of wearable haptic hand rendering for manipulating virtual objects in AR were also observed by \textcite{maisto2017evaluation} and \textcite{meli2018combining}.
+These subjective appreciations of wearable haptic hand rendering for manipulating virtual objects in \AR were also observed by \textcite{maisto2017evaluation} and \textcite{meli2018combining}.
 %
 However, the best performance was obtained with the farthest positioning on the contralateral hand, which is somewhat surprising.
 %
@@ -11,6 +11,6 @@ However, the farthest positioning on the contralateral hand gave the best perfor
 %
 The visual hand rendering was perceived as less necessary than the vibrotactile haptic hand rendering, but still provided useful feedback on the hand tracking.

-Future work will focus on including richer types of haptic feedback, such as pressure and skin stretch, analyzing the best compromise between well-rounded haptic feedback and wearability of the system with respect to AR constraints.
+Future work will focus on including richer types of haptic feedback, such as pressure and skin stretch, analyzing the best compromise between well-rounded haptic feedback and wearability of the system with respect to \AR constraints.
 %
-As delocalizing haptic feedback seems to be a simple but very promising approach for haptic-enabled AR, we will keep including this dimension in our future studies, even when considering other types of haptic sensations.
+As delocalizing haptic feedback seems to be a simple but very promising approach for haptic-enabled \AR, we will keep including this dimension in our future studies, even when considering other types of haptic sensations.