Augmented reality (AR) integrates virtual content into our real-world surroundings, giving the illusion of a single, unified environment and promising natural and seamless interactions with real and virtual objects.
%
Virtual object manipulation is particularly critical for useful and effective AR usage, such as in medical applications, training, or entertainment~\cite{laviolajr20173d, kim2018revisiting}.
%
Hand tracking technologies~\cite{xiao2018mrtouch}, grasping techniques~\cite{holl2018efficient}, and real-time physics engines permit users to directly manipulate virtual objects with their bare hands as if they were real~\cite{piumsomboon2014graspshell}, without requiring controllers~\cite{krichenbauer2018augmented}, gloves~\cite{prachyabrued2014visual}, or predefined gesture techniques~\cite{piumsomboon2013userdefined, ha2014wearhand}.
%
Optical see-through AR (OST-AR) head-mounted displays (HMDs), such as the Microsoft HoloLens 2 or the Magic Leap, are particularly suited for this type of direct hand interaction~\cite{kim2018revisiting}.

However, there are still several haptic and visual limitations that affect manipulation in OST-AR, degrading the user experience.
%
For example, it is difficult to estimate the position of one's hand in relation to virtual content because mutual occlusion between the hand and the virtual object is often lacking~\cite{macedo2023occlusion}, the depth of virtual content is underestimated~\cite{diaz2017designing, peillard2019studying}, and hand tracking still suffers from noticeable latency~\cite{xiao2018mrtouch}.
%
Similarly, it is challenging to ensure confident and realistic contact with a virtual object due to the lack of haptic feedback and the intangibility of the virtual environment, which of course cannot apply physical constraints to the hand~\cite{maisto2017evaluation, meli2018combining, lopes2018adding, teng2021touch}.
%
These limitations also make it difficult to confidently move a grasped object towards a target~\cite{maisto2017evaluation, meli2018combining}.

To address these haptic and visual limitations, we investigate two types of sensory feedback that are known to improve virtual interactions with hands, but have not been studied together in an AR context: visual hand rendering and delocalized haptic rendering.
%
A few works explored the effect of a visual hand rendering on interactions in AR by simulating mutual occlusion between the real hand and virtual objects~\cite{ha2014wearhand, piumsomboon2014graspshell, al-kalbani2016analysis}, or displaying a 3D virtual hand model, semi-transparent~\cite{ha2014wearhand, piumsomboon2014graspshell} or opaque~\cite{blaga2017usability, yoon2020evaluating, saito2021contact}.
%
Indeed, some visual hand renderings are known to improve interactions or user experience in virtual reality (VR), where the real hand is not visible~\cite{prachyabrued2014visual, argelaguet2016role, grubert2018effects, schwind2018touch, vanveldhuizen2021effect}.
%
However, the role of a visual hand rendering superimposed on, and seen on top of, the real tracked hand has not yet been investigated in AR.
%
Conjointly, several studies have demonstrated that wearable haptics can significantly improve interaction performance and user experience in AR~\cite{maisto2017evaluation, meli2018combining, sarac2022perceived}.
%
However, haptic rendering for AR remains a challenge, as it is difficult to provide rich and realistic haptic sensations while limiting their negative impact on hand tracking~\cite{pacchierotti2016hring} and keeping the fingertips and palm free to interact with the real environment~\cite{lopes2018adding, teng2021touch, sarac2022perceived, palmer2022haptic}.
%
Therefore, the haptic feedback of the fingertip contact with the virtual environment needs to be rendered elsewhere on the hand; yet, it is unclear which positioning should be preferred or which type of haptic feedback is best suited for manipulating virtual objects in AR.
%
A final question is whether one or the other of these (haptic or visual) hand renderings should be preferred~\cite{maisto2017evaluation, meli2018combining}, or whether a combined visuo-haptic rendering is beneficial for users.
%
In fact, either hand rendering alone might provide sufficient sensory cues for efficient manipulation of virtual objects in AR, or, conversely, the two might prove complementary.

We compared a set of the most popular visual hand renderings.
%
Since we address hand-centered manipulation tasks, we only considered renderings including the fingertips.
%
Moreover, to keep the focus on the hand rendering itself, we used neutral semi-transparent grey meshes, consistent with the choices made in~\cite{yoon2020evaluating, vanveldhuizen2021effect}.
%
All considered hand renderings are drawn following the tracked pose of the user's real hand.
%
However, while the real hand can of course penetrate virtual objects, the visual hand rendering is constrained to their surfaces.

\subsubsection{None (\figref{method/hands-none})}
\label{hands_none}

As a reference, we considered no visual hand rendering, as is common in AR~\cite{hettiarachchi2016annexing, blaga2017usability, xiao2018mrtouch, teng2021touch}.
%
Users have no information about hand tracking and no feedback about contact with the virtual objects, other than the objects' movement when touched.
%
As virtual content is rendered on top of the real environment, the hand of the user can be unexpectedly occluded by the virtual objects.

\subsubsection{Occlusion (Occl,~\figref{method/hands-occlusion})}
\label{hands_occlusion}

To avoid the abovementioned undesired occlusions due to the virtual content being rendered on top of the real environment, we can carefully crop the former whenever it hides real content that should be visible~\cite{macedo2023occlusion}, \eg the thumb of the user in \figref{method/hands-occlusion}.
%
This approach is frequent in works using video see-through AR (VST-AR) headsets~\cite{knorlein2009influence, ha2014wearhand, piumsomboon2014graspshell, suzuki2014grasping, al-kalbani2016analysis}.
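%
For illustration, this cropping amounts to a per-pixel depth test between the tracked hand and the virtual content. The following NumPy sketch is our own illustrative reconstruction, not the implementation of the cited works; on an additive OST display, the cropped (black) pixels appear transparent.
\begin{verbatim}
# Hedged sketch: per-pixel occlusion cropping (illustrative names/values).
import numpy as np

def crop_occluded(virtual_rgb, virtual_depth, hand_depth):
    """Crop virtual pixels that lie behind the real hand.

    virtual_rgb:   (H, W, 3) rendered virtual content
    virtual_depth: (H, W) depth of the virtual content, in meters
    hand_depth:    (H, W) depth of the tracked hand mesh,
                   np.inf where no hand pixel was detected
    """
    visible = virtual_depth <= hand_depth    # virtual content in front?
    return virtual_rgb * visible[..., None]  # cropped pixels become black

# Example: a virtual object at 0.5 m, a hand patch at 0.3 m.
rgb = np.ones((4, 4, 3))
obj_depth = np.full((4, 4), 0.5)
hand_depth = np.full((4, 4), np.inf)
hand_depth[1:3, 1:3] = 0.3
print(crop_occluded(rgb, obj_depth, hand_depth)[:, :, 0])
\end{verbatim}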

\subsubsection{Tips (\figref{method/hands-tips})}

This rendering shows small visual rings around the fingertips of the user, highlighting the most important parts of the hand and contact with virtual objects during fine manipulation.
%
Unlike work using small spheres~\cite{maisto2017evaluation, meli2014wearable, grubert2018effects, normand2018enlarging, schwind2018touch}, this ring rendering also provides information about the orientation of the fingertips.

\subsubsection{Contour (Cont,~\figref{method/hands-contour})}

This rendering is a {1-mm-thick} outline contouring the user's hands, providing a precise view of the tracked hand pose while leaving the real hand visible.
%
Unlike the other renderings, it is not occluded by the virtual objects, as shown in \figref{method/hands-contour}.
%
This rendering is less common in the literature than the previous ones~\cite{kang2020comparative}.

\subsubsection{Skeleton (Skel,~\figref{method/hands-skeleton})}

This rendering schematically renders the joints and phalanges of the fingers with simple geometric primitives.
%
It can be seen as an extension of the Tips rendering to include the complete finger articulations.
%
It is widely used in VR~\cite{argelaguet2016role, schwind2018touch, chessa2019grasping} and AR~\cite{blaga2017usability, yoon2020evaluating}, as it is considered simple yet rich and comprehensive.

\subsubsection{Mesh (\figref{method/hands-mesh})}
\label{hands_mesh}

This rendering is a 3D semi-transparent ($\alpha = 0.2$) hand model, which is common in VR~\cite{prachyabrued2014visual, argelaguet2016role, schwind2018touch, chessa2019grasping, yoon2020evaluating, vanveldhuizen2021effect}.
%
It can be seen as a filled version of the Contour hand rendering, thus partially covering the view of the real hand.
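%
As a minimal sketch of what this semi-transparency amounts to, the mesh can be composited with the standard ``over'' operator; the grey value and image sizes below are illustrative assumptions, with $\alpha = 0.2$ as above.
\begin{verbatim}
# Hedged sketch: alpha compositing of the semi-transparent hand mesh.
import numpy as np

ALPHA = 0.2                            # transparency value used above

def blend_hand(frame, hand_rgb, hand_mask):
    """Composite the hand mesh over the frame ("over" operator)."""
    a = ALPHA * hand_mask[..., None]   # per-pixel alpha, 0 off the hand
    return (1.0 - a) * frame + a * hand_rgb

frame = np.zeros((2, 2, 3))            # scene behind the hand
grey = np.full((2, 2, 3), 0.5)         # neutral grey mesh color
mask = np.array([[1.0, 0.0],
                 [0.0, 0.0]])          # hand covers the top-left pixel
print(blend_hand(frame, grey, mask))
\end{verbatim}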

% Figure truncated in the diff (remaining visible panel: method/task-grasp).

Following the guidelines of \textcite{bergstrom2021how} for designing object manipulation tasks, we considered two variations of a 3D pick-and-place task, commonly found in interaction and manipulation studies~\cite{prachyabrued2014visual, maisto2017evaluation, meli2018combining, blaga2017usability, vanveldhuizen2021effect}.

\subsubsection{Push Task}

During this training, we did not use any of the six hand renderings we wanted to test.

Participants were asked to carry out the two tasks as naturally and as fast as possible.
%
Similarly to~\cite{prachyabrued2014visual, maisto2017evaluation, blaga2017usability, vanveldhuizen2021effect}, we only allowed the use of the dominant hand.
%
The experiment took around 1 hour and 20 minutes to complete.

Finally, (iii) the mean \emph{Time per Contact}, defined as the total time any part of the hand was in contact with the cube, divided by the number of contacts.
%
Solely for the grasp-and-place task, we also measured the (iv) \emph{Grip Aperture}, defined as the average distance between the thumb's fingertip and the other fingertips during the grasping of the cube;
%
lower values indicate greater finger interpenetration with the cube, which results in a greater discrepancy between the real hand and the visual hand rendering constrained to the cube surfaces, and reflects how confident users are in their grasp~\cite{prachyabrued2014visual, al-kalbani2016analysis, blaga2017usability, chessa2019grasping}.
%
Taken together, these measures provide an overview of the performance and usability of each of the visual hand renderings tested, as we hypothesized that they should influence the behavior and effectiveness of the participants.
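%
To make the last two measures concrete, the sketch below computes them from logged data; the input layout is our own assumption, not the exact logging format of the experiment.
\begin{verbatim}
# Hedged sketch: Time per Contact and Grip Aperture (assumed data layout).
import numpy as np

def mean_time_per_contact(intervals):
    """Mean duration of a list of (start, end) contact times, in seconds."""
    return sum(end - start for start, end in intervals) / len(intervals)

def grip_aperture(thumb_tip, other_tips):
    """Mean thumb-to-fingertip distance (meters) for one tracked sample."""
    return float(np.linalg.norm(other_tips - thumb_tip, axis=1).mean())

print(mean_time_per_contact([(0.0, 0.8), (1.5, 1.9)]))  # 0.6 s
thumb = np.array([0.0, 0.0, 0.0])
tips = np.array([[0.06, 0.0, 0.0],
                 [0.08, 0.0, 0.0]])
print(grip_aperture(thumb, tips))                        # 0.07 m
\end{verbatim}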

This result is consistent with \textcite{saito2021contact}.

To summarize, when employing a visual hand rendering overlaying the real hand, participants were more effective and confident in manipulating virtual objects with their bare hands in AR.
%
These results contrast with those of similar manipulation studies in non-immersive, on-screen AR, where participants found that a visual hand rendering improved the usability of the interaction, but not their performance~\cite{blaga2017usability, maisto2017evaluation, meli2018combining}.
%
Our results show the most effective visual hand rendering to be the Skeleton one. Participants appreciated that it provided a detailed and precise view of the tracking of the real hand, without hiding or masking it.
%
Although the Contour and Mesh hand renderings were also highly rated, some participants reported that they partially hid or masked the real hand.
%
This result is in line with the virtual object manipulation study in VR of \textcite{prachyabrued2014visual}, which found that the most effective visual hand rendering was a double representation of both the real tracked hand and a visual hand physically constrained by the virtual environment.
%
This type of Skeleton rendering was also the one that provided the best sense of agency (control) in VR~\cite{argelaguet2016role, schwind2018touch}.

Of course, these results have some limitations, as they only address a limited set of manipulation tasks and visual hand characteristics, evaluated in a specific OST-AR setup.
%

We evaluated both the delocalized positioning and the contact vibration techniques (a routing sketch follows the list below).

\begin{itemize}
\item \textit{Fingertips (Tips):} Vibrating actuators were placed right above the nails, similarly to~\cite{ando2007fingernailmounted}. This is the positioning closest to the fingertips.
%
\item \textit{Proximal Phalanges (Prox):} Vibrating actuators were placed on the dorsal side of the proximal phalanges, similarly to~\cite{maisto2017evaluation, meli2018combining, chinello2020modular}.
%
\item \textit{Wrist (Wris):} Vibrating actuators rendering contacts for the index finger and thumb were placed on the ulnar and radial sides of the wrist, similarly to~\cite{pezent2019tasbi, palmer2022haptic, sarac2022perceived}.
%
\item \textit{Opposite fingertips (Oppo):} Vibrating actuators were placed on the fingertips of the contralateral hand, also above the nails, similarly to~\cite{prattichizzo2012cutaneous, detinguy2018enhancing}.
%
\item \textit{Nowhere (Nowh):} As a reference, we also considered the case where we provided no vibrotactile rendering.
\end{itemize}
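%
As announced above, the sketch below illustrates how a fingertip contact event could be routed to the actuator of each positioning condition; the channel numbers and the \texttt{activate} callback are our own illustrative assumptions, not the actual driver of our setup.
\begin{verbatim}
# Hedged sketch: routing contact events to delocalized actuators.
CHANNELS = {
    "Tips": {"thumb": 0, "index": 1},  # above the nails
    "Prox": {"thumb": 2, "index": 3},  # dorsal proximal phalanges
    "Wris": {"thumb": 4, "index": 5},  # radial / ulnar sides of the wrist
    "Oppo": {"thumb": 6, "index": 7},  # contralateral fingertips
    "Nowh": {},                        # reference: no vibrotactile rendering
}

def on_contact(condition, finger, activate):
    """Trigger the actuator mapped to this finger, if the condition has one."""
    channel = CHANNELS[condition].get(finger)
    if channel is not None:
        activate(channel)

# Example: an index fingertip contact under the Wrist condition.
on_contact("Wris", "index", lambda ch: print("vibrate channel", ch))
\end{verbatim}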

Apparatus and protocol were very similar to those of the first experiment, as described in the previous sections.
%
We report here only the differences.

We employed the same vibrotactile device used in~\cite{devigne2020power}.
%
It is composed of two encapsulated Eccentric Rotating Mass (ERM) vibration motors (Pico-Vibe 304-116, Precision Microdrives, UK).
%

This apparent paradox could be explained in two ways.
%
On the one hand, participants behaved differently when the haptic rendering was provided on the fingers, close to the contact point, with shorter pushes and larger grip apertures.
%
This behavior likely gave them a better experience of the tasks and more confidence in their actions, as well as leading to a lower interpenetration and force applied to the cube~\cite{pacchierotti2015cutaneous}.
%
On the other hand, the unfamiliarity of the contralateral hand positioning caused participants to spend more time understanding the haptic stimuli, which might have made them more focused on performing the task.
%
Finally, it was interesting to note that the visual hand renderings were appreciated by the participants.

As already discussed in \secref[visual_hand]{discussion}, these results have some limitations, as they address a limited set of visuo-haptic renderings, and manipulations were restricted to the thumb and index fingertips.
%
While the simpler vibration technique (Impact technique) was sufficient to confirm contacts with the cube, richer vibrotactile renderings may be required for more complex interactions, such as collision or friction rendering between objects~\cite{kuchenbecker2006improving, pacchierotti2015cutaneous} or texture rendering~\cite{culbertson2014one, asano2015vibrotactile}.
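%
As an illustration of such an event-based rendering, the sketch below generates a decaying sinusoidal burst whose amplitude scales with impact velocity, in the spirit of~\cite{kuchenbecker2006improving}; the frequency, decay, and gain values are illustrative assumptions, and reproducing such a waveform would call for a voice-coil or LRA actuator rather than the ERM motors used here.
\begin{verbatim}
# Hedged sketch: event-based "impact" transient (illustrative parameters).
import numpy as np

def impact_burst(impact_velocity, freq_hz=150.0, decay_s=0.04,
                 duration_s=0.12, rate_hz=1000):
    """Decaying sinusoid whose amplitude scales with impact velocity."""
    t = np.arange(0.0, duration_s, 1.0 / rate_hz)
    amplitude = min(abs(impact_velocity), 1.0)  # saturate the gain
    return amplitude * np.exp(-t / decay_s) * np.sin(2 * np.pi * freq_hz * t)

signal = impact_burst(impact_velocity=0.35)     # a 0.35 m/s contact
print(signal[:5])
\end{verbatim}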
%
More generally, a broader range of haptic sensations should be considered, such as pressure or stretching of the skin~\cite{maisto2017evaluation, teng2021touch}.
%
However, moving the point of application of the sensation away may be challenging for some types of haptic rendering.
%
Similarly, the interactions were limited to the thumb and index fingertips.
%
Also, given that some users found the vibration rendering too strong, adapting and personalizing the haptic feedback to one's preferences (and body positioning) might be a promising approach.
%
Indeed, personalized haptics has recently been gaining interest in the community~\cite{malvezzi2021design, umair2021exploring}.