WIP xr-perception
@@ -126,12 +126,12 @@ Because the visuo-haptic \VE is displayed in real time, colocalized and aligned

In this context, we identify two main research challenges that we address in this thesis:
\begin{enumerate*}[label=(\Roman*)]
\item \textbf{providing plausible and coherent visuo-haptic augmentations}, and
\item \textbf{enabling effective manipulation of the augmented environment}.
\end{enumerate*}
Each of these challenges also raises numerous design, technical, and human issues specific to each of the two types of feedback, wearable haptics and immersive \AR, as well as multimodal rendering and user experience issues in integrating these two sensorimotor feedback modalities into a coherent and seamless visuo-haptic \AE.

\subsectionstarbookmark{Challenge I: Provide Plausible and Coherent Visuo-Haptic Augmentations}

Many haptic devices have been designed and evaluated specifically for use in \VR, providing realistic and varied kinesthetic and tactile feedback for \VOs.
Although closely related, (visual) \AR and \VR have key differences in their respective renderings that can affect user perception.
@@ -148,7 +148,7 @@ So far, \AR can only add visual and haptic sensations to the user's overall perc
These added virtual sensations can therefore be perceived as out of sync or even inconsistent with the sensations of the \RE, for example with a lower rendering quality, a temporal latency, a spatial shift, or a combination of these.
It is therefore unclear to what extent the real and virtual visuo-haptic sensations will be perceived as realistic or plausible, and to what extent they will conflict or complement each other in the perception of the \AE.

\subsectionstarbookmark{Challenge II: Enable Effective Manipulation of the Augmented Environment}

Touching, grasping and manipulating \VOs are fundamental interactions for \AR \cite{kim2018revisiting}, \VR \cite{bergstrom2021how} and \VEs in general \cite{laviolajr20173d}.
Because the hand is neither occupied nor covered by a haptic device, so as not to impair interaction with the \RE (as described in the previous section), one can expect seamless and direct manipulation of the virtual content with the hand, as if it were real.
@@ -177,8 +177,8 @@ Our approach is to

We consider two main axes of research, each addressing one of the research challenges identified above:
\begin{enumerate*}[label=(\Roman*)]
\item \textbf{modifying the texture perception of tangible surfaces}, and % with visuo-haptic texture augmentations, and
\item \textbf{improving the manipulation of virtual objects}.% with visuo-haptic augmentations of the hand-object interaction.
\end{enumerate*}
Our contributions in these two axes are summarized in \figref{contributions}.

@@ -188,33 +188,33 @@ Our contributions in these two axes are summarized in \figref{contributions}.
The second axis focuses on \textbf{(II)} improving the manipulation of \VOs with the bare hand using visuo-haptic augmentations of the hand as interaction feedback.
]

\subsectionstarbookmark{Axis I: Modifying the Texture Perception of Tangible Surfaces}

Wearable haptic devices have proven to be effective in modifying the perception of a touched tangible surface, without modifying the tangible or covering the fingertip, thus forming a haptic \AE \cite{bau2012revel,detinguy2018enhancing,salazar2020altering}.
%It is achieved by placing the haptic actuator close to the fingertip, to let it free to touch the surface, and rendering tactile stimuli timely synchronised with the finger movement.
%It enables rich haptic feedback as the combination of kinesthetic sensation from the tangible and cutaneous sensation from the actuator.
However, wearable haptic \AR has been little explored in combination with visual \AR, and so has the visuo-haptic augmentation of textures.
Texture is indeed one of the main tactile sensations of a surface material \cite{hollins1993perceptual,okamoto2013psychophysical}, perceived equally well by both sight and touch \cite{bergmanntiest2007haptic,baumgartner2013visual}, and one of the most studied haptic renderings (haptic only, without visuals) \cite{unger2011roughness,culbertson2014modeling,strohmeier2017generating}.
For this first axis of research, we propose to \textbf{design and evaluate the perception of virtual visuo-haptic textures augmenting tangible surfaces}. %, using an immersive \AR headset and a wearable vibrotactile device.
To this end, we (1) design a system for rendering virtual visuo-haptic texture augmentations, to (2) evaluate how the perception of these textures is affected by the visual virtuality of the hand and the environment (\AR \vs \VR), and (3) investigate the perception of co-localized visuo-haptic texture augmentations in \AR.

First, an effective approach to rendering haptic textures is to generate a vibrotactile signal that represents the finger-texture interaction \cite{culbertson2014modeling,asano2015vibrotactile}.
Yet, achieving natural interaction with the hand and coherent visuo-haptic feedback requires real-time rendering of the textures, no constraints on hand movements, and good synchronization between the visual and haptic feedback.
Thus, our first objective is to \textbf{design an immersive, real-time system that allows free exploration with the bare hand of visuo-haptic texture augmentations on tangible surfaces}.

Second, many works have investigated the haptic augmentation of textures, but none have integrated them with \AR and \VR, nor considered the influence of the visual rendering on their perception.
Still, it is known that visual feedback can alter the perception of real and virtual haptic sensations \cite{schwind2018touch,choi2021augmenting}, but also that the force feedback perception of grounded haptic devices is not the same in \AR and \VR \cite{diluca2011effects,gaffary2017ar}.
Hence, our second objective is to \textbf{evaluate how the perception of haptic texture augmentation is affected by the visual virtuality of the hand and the environment}.

Finally, some visuo-haptic texture databases have been modeled from real texture captures \cite{culbertson2014penn,balasubramanian2024sens3}, to be rendered as virtual textures with graspable haptics that are perceived as similar to real textures \cite{culbertson2015should,friesen2024perceived}.
However, the rendering of these textures in an immersive and natural visuo-haptic \AR using wearable haptics remains to be investigated.
Our third objective is to \textbf{evaluate the perception of simultaneous and co-localized visuo-haptic texture augmentation of tangible surfaces in \AR}, directly touched by the hand, and to understand to what extent each sensory modality contributes to the overall perception of the augmented texture.

\subsectionstarbookmark{Axis II: Improving the Manipulation of Virtual Objects}

In immersive and wearable visuo-haptic \AR, the hand is free to touch and interact seamlessly with real, augmented, and virtual objects, and one can expect natural and direct contact and manipulation of \VOs with the bare hand.
However, the intangibility of the visual \VE, the display limitations of current visual \OST-\AR systems, and the inherent spatial and temporal discrepancies between the user's hand actions and the visual feedback in the \VE can make the interaction with \VOs particularly challenging.
%However, the intangibility of the virtual visual environment, the lack of kinesthetic feedback of wearable haptics, the visual rendering limitations of current \AR systems, as well as the spatial and temporal discrepancies between the \RE, the visual feedback, and the haptic feedback, can make the interaction with \VOs with bare hands particularly challenging.
Two particular sensory feedbacks are known to improve such direct \VO manipulation, but they have not been properly investigated in immersive \AR: visual rendering of the hand \cite{piumsomboon2014graspshell,prachyabrued2014visual} and delocalized haptic rendering \cite{lopes2018adding,teng2021touch}.
For this second axis of research, we propose to design and evaluate \textbf{the role of visuo-haptic augmentations of the hand as interaction feedback with \VOs in immersive \OST-\AR}.
We consider the effect on user performance and experience of (1) the visual rendering of the hand as an augmentation and (2) the combination of different visuo-haptic renderings of the hand manipulating \VOs.
@@ -227,7 +227,7 @@ Thus, our fourth objective is to \textbf{investigate the visual rendering as han
Second, as described above, wearable haptics for visual \AR rely on moving the haptic actuator away from the fingertips to not impair the hand movements, sensations, and interactions with the \RE.
Previous works have shown that wearable haptics that provide feedback on the hand manipulation with \VOs in \AR can significantly improve the user performance and experience \cite{maisto2017evaluation,meli2018combining}.
However, it is unclear which positioning of the actuator is most beneficial, nor how a haptic augmentation of the hand compares with or complements a visual augmentation of the hand.
Our last objective is to \textbf{investigate the visuo-haptic rendering of hand manipulation with \VOs} in \OST-\AR using wearable vibrotactile haptics.

\section{Thesis Overview}
\label{thesis_overview}
@@ -237,35 +237,35 @@ In \textbf{\partref{background}}, we describe the context and background of our

In \textbf{\chapref{related_work}}, we then review previous work on the perception and manipulation of virtual and augmented objects, directly with the hand, using either wearable haptics, \AR, or their combination.
First, we overview how the hand perceives and manipulates real everyday objects.
Second, we present wearable haptics and haptic augmentations of texture and hardness of real objects.
Third, we introduce \AR, and how \VOs can be manipulated directly with the hand.
Finally, we describe how multimodal visual and haptic feedback have been combined in \AR to enhance perception and interaction with the hand.

We then address each of our two research axes in a dedicated part.

\noindentskip
In \textbf{\partref{perception}}, we describe our contributions to the first axis of research: modifying the visuo-haptic texture perception of tangible surfaces.
We evaluate how the visual rendering of the hand (real or virtual), the environment (\AR or \VR) and the textures (coherent, different or not shown) affect the perception of virtual vibrotactile textures rendered on real surfaces and touched directly with the index finger.

In \textbf{\chapref{vhar_system}}, we detail a system for rendering visuo-haptic virtual textures that augment tangible surfaces using an immersive \AR/\VR headset and a wearable vibrotactile device.
The haptic textures are rendered as a real-time vibrotactile signal representing a grating texture, which is provided to the middle phalanx of the index finger touching the texture using a voice-coil actuator.
The tracking of the real hand and environment is done using a marker-based technique, and the visual rendering of their virtual counterparts is done using the immersive \OST-\AR headset Microsoft HoloLens~2.

In \textbf{\chapref{xr_perception}}, we investigate, in a user study, how different the perception of haptic texture augmentations is in \AR \vs \VR and when touched by a virtual hand \vs one's own hand.
We use psychophysical methods to measure the user's perception, and extensive questionnaires to understand how this perception is affected by the visual rendering of the hand and the environment.

In \textbf{\chapref{ar_textures}}, we evaluate the perception of visuo-haptic texture augmentations, touched directly with one's own hand in \AR.
The virtual textures are paired visual and tactile models of real surfaces \cite{culbertson2014one} that we render as visual and haptic overlays on the touched augmented surfaces.
Our objective is to assess the perceived realism, coherence and roughness of the combination of nine representative visuo-haptic texture pairs.

\noindentskip
In \textbf{\partref{manipulation}}, we describe our contributions to the second axis of research: improving the free and direct hand manipulation of \VOs using visuo-haptic augmentations of the hand as interaction feedback in immersive \OST-\AR.

In \textbf{\chapref{visual_hand}}, we investigate in a user study six visual renderings as hand augmentations, selected among the most popular hand renderings in the \AR literature.
Using the \OST-\AR headset Microsoft HoloLens~2, we evaluate their effect on the user performance and experience in two representative manipulation tasks: push-and-slide and grasp-and-place a \VO directly with the hand.

In \textbf{\chapref{visuo_haptic_hand}}, we evaluate in a user study the visuo-haptic rendering of the hand manipulation with \VOs, using two vibrotactile contact techniques provided at four different positionings on the user's hand.
They are compared to the two most representative visual hand renderings from the previous chapter, resulting in sixteen visuo-haptic hand renderings that are evaluated within the same experimental setup and design.

\noindentskip
In \textbf{\chapref{conclusion}}, we conclude this thesis and discuss short-term future work and long-term perspectives for each of our contributions and research axes.
@@ -135,7 +135,7 @@ Several types of vibrotactile actuators are used in haptics, with different trad

An \ERM is a direct current (DC) motor that rotates an off-center mass when a voltage or current is applied (\figref{precisionmicrodrives_erm}). \ERMs are easy to control, inexpensive, and can be encapsulated in a cylinder or coin form factor of a few millimeters. However, they have only one \DoF because both the frequency and amplitude of the vibration are coupled to the speed of the rotation, \eg low (high) frequencies output at low (high) amplitudes, as shown in \figref{precisionmicrodrives_erm_performances}.
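To make this coupling concrete, here is a minimal Python sketch of an idealized \ERM in which the rotation speed is proportional to the drive voltage; the mass, eccentricity, and speed constant are hypothetical values rather than those of any specific motor.

```python
import math

# Idealized ERM model: the rotation speed is proportional to the drive
# voltage, so the vibration frequency and amplitude rise together and
# cannot be controlled independently (a single DoF).
# All constants are hypothetical, not taken from a motor datasheet.
MASS = 0.5e-3    # eccentric mass (kg)
RADIUS = 1.0e-3  # eccentricity of the mass (m)
KV = 3000.0      # speed constant (rpm per volt)

def erm_output(voltage):
    """Return (vibration frequency in Hz, force amplitude in N)."""
    omega = KV * voltage * 2 * math.pi / 60  # angular speed (rad/s)
    frequency = omega / (2 * math.pi)        # one vibration per turn
    amplitude = MASS * RADIUS * omega ** 2   # centripetal force of the mass
    return frequency, amplitude

for volts in (1.0, 2.0, 3.0):
    f, a = erm_output(volts)
    print(f"{volts:.0f} V -> {f:.0f} Hz at {a:.2f} N")
```

Doubling the voltage doubles the vibration frequency but quadruples the force amplitude, which is why low frequencies can only be output at low amplitudes.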

\begin{subfigs}{erm}{Diagram and performance of an \ERM. }[][
\item Diagram of a cylindrical encapsulated \ERM. From Precision Microdrives~\footnotemark.
\item Amplitude and frequency output of an \ERM as a function of the input voltage.
]
@@ -357,9 +357,9 @@ We describe them in the \secref{vhar_haptics}.
Haptic systems aim to provide virtual interactions and sensations similar to those with real objects.
The complexity of the haptic sense has led to the design of numerous haptic devices and renderings.
While many haptic devices can be worn on the hand, only a few are compact and portable enough to be considered wearable, and these are limited to cutaneous feedback.
If the haptic rendering of the device is synchronized in time with the user's touch actions on a real object, the perceived haptic properties of the object can be modified.
Several haptic augmentation methods have been developed to modify the perceived roughness and hardness, mostly using vibrotactile feedback and, to a lesser extent, pressure feedback.
However, not all of these haptic augmentations have yet been transposed to wearable haptics, and the use of wearable haptic augmentations has not yet been studied in the context of \AR.

%, unlike most previous actuators that are designed specifically for fingertips and would require mechanical adaptation to be placed on other parts of the hand.
% thanks to the vibration propagation and the sensory capabilities distributed throughout the skin, they can be placed without adaption and on any part of the hand
@@ -6,7 +6,7 @@ Immersive systems such as headsets leave the hands free to interact with \VOs, p

%\begin{subfigs}{sutherland1968headmounted}{Photos of the first \AR system \cite{sutherland1968headmounted}. }[
% \item The \AR headset.
% \item Wireframe \ThreeD \VOs were displayed registered in the \RE (as if they were part of it).
% ]
% \subfigsheight{45mm}
% \subfig{sutherland1970computer3}
@@ -237,7 +237,7 @@ Our hands allow us to manipulate real everyday objects with both strength and pr
Hands were initially tracked by active sensing devices such as gloves or controllers; it is now possible to track them in real time using cameras and computer vision algorithms natively integrated into \AR/\VR headsets \cite{tong2023survey}.

The user's hand is therefore tracked and reconstructed as a \emph{virtual hand} model in the \VE \cite{billinghurst2015survey,laviolajr20173d}.
The simplest models represent the hand as a rigid \ThreeD object that follows the movements of the real hand with \qty{6}{\DoF} (position and orientation in space) \cite{talvas2012novel}.
An alternative is to model only the fingertips (\figref{lee2007handy}) or the whole hand (\figref{hilliges2012holodesk_1}) as points.
The most common technique is to reconstruct all the phalanges of the hand in an articulated kinematic model (\secref{hand_anatomy}) \cite{borst2006spring}.

@@ -246,7 +246,7 @@ Heuristic techniques use rules to determine the selection, manipulation and rele
However, they produce unrealistic behaviour and are limited to the cases predicted by the rules.
Physics-based techniques simulate forces at the points of contact between the virtual hand and the \VO.
In particular, \textcite{borst2006spring} have proposed an articulated kinematic model in which each phalanx is a rigid body simulated with the god-object method \cite{zilles1995constraintbased}:
The virtual phalanx follows the movements of the real phalanx, but remains constrained to the surface of the \VOs during contact.
The forces acting on the object are calculated as a function of the distance between the real and virtual hands (\figref{borst2006spring}).
More advanced techniques simulate the friction phenomena \cite{talvas2013godfinger} and finger deformations \cite{talvas2015aggregate}, allowing highly accurate and realistic interactions, but which can be difficult to compute in real time.
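As an illustration of this coupling, here is a minimal Python sketch of a spring-damper link between a tracked phalanx and its simulated counterpart, in the spirit of this physics-based approach; the stiffness, damping, mass, and time step are hypothetical values.

```python
import numpy as np

# Spring-damper coupling of a physics-based virtual phalanx to its
# tracked (real) pose. While a contact constrains the virtual phalanx
# to an object's surface, this same coupling force is what is applied
# to the virtual object. Gains, mass, and time step are hypothetical.
KP, KD = 200.0, 5.0      # spring stiffness (N/m) and damping (N.s/m)
MASS, DT = 0.01, 1 / 90  # phalanx mass (kg) and simulation step (s)

def coupling_force(x_real, x_virtual, v_virtual):
    """Force pulling the virtual phalanx toward the tracked pose."""
    return KP * (x_real - x_virtual) - KD * v_virtual

def step(x_real, x_virtual, v_virtual):
    """One explicit integration step for a free (non-contacting) phalanx."""
    f = coupling_force(x_real, x_virtual, v_virtual)
    v_virtual = v_virtual + (f / MASS) * DT
    x_virtual = x_virtual + v_virtual * DT
    return x_virtual, v_virtual

# The real fingertip has moved 1 cm past a virtual surface while the
# virtual phalanx stayed on it: the object is pushed with about 2 N.
x_real, x_virtual, v_virtual = np.array([0.01, 0.0, 0.0]), np.zeros(3), np.zeros(3)
print(coupling_force(x_real, x_virtual, v_virtual))
```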

@@ -264,7 +264,7 @@ More advanced techniques simulate the friction phenomena \cite{talvas2013godfing
\end{subfigs}

However, the lack of physical constraints on the user's hand movements makes manipulation actions tiring \cite{hincapie-ramos2014consumed}.
While the user's fingers traverse the \VO, a physics-based virtual hand remains in contact with the object, a discrepancy that may degrade the user's performance in \VR \cite{prachyabrued2012virtual}.
Finally, in the absence of haptic feedback on each finger, it is difficult to estimate the contact and forces exerted by the fingers on the object during grasping and manipulation \cite{maisto2017evaluation,meli2018combining}.
While a visual rendering of the virtual hand in \VR can compensate for these issues \cite{prachyabrued2014visual}, the visual and haptic rendering of the virtual hand, or their combination, in \AR is under-researched.

@@ -303,7 +303,7 @@ In a collaborative task in immersive \OST-\AR \vs \VR, \textcite{yoon2020evaluat
Finally, \textcite{maisto2017evaluation} and \textcite{meli2018combining} compared the visual and haptic rendering of the hand in \VST-\AR, as detailed in the next section (\secref{vhar_rings}).
Taken together, these results suggest that a visual rendering of the hand in \AR could improve usability and performance in direct hand manipulation tasks, but the best rendering has yet to be determined.
%\cite{chan2010touching} : cues for touching (selection) \VOs.
%\textcite{saito2021contact} found that masking the real hand with a textured \ThreeD opaque virtual hand did not improve performance in a reach-to-grasp task but displaying the points of contact on the \VO did.
%To the best of our knowledge, evaluating the role of a visual rendering of the hand displayed \enquote{and seen} directly above real tracked hands in immersive OST-AR has not been explored, particularly in the context of \VO manipulation.

\begin{subfigs}{visual-hands}{Visual hand renderings in \AR. }[][
@@ -97,7 +97,7 @@ For example, in a fixed \VST-\AR screen (\secref{ar_displays}), by visually defo
%In particular, in \AR and \VR, the perception of a haptic rendering or augmentation can be influenced by the visual rendering of the \VO.

\subsubsection{Perception of Visuo-Haptic Rendering in \AR and \VR}
\label{ar_vr_haptic}

Some studies have investigated the visuo-haptic perception of \VOs rendered with force-feedback and vibrotactile feedback in \AR and \VR.

@@ -1,20 +1,17 @@
% Even before manipulating a visual representation to induce a haptic sensation, shifts and latencies between user input and co-localised visuo-haptic feedback can be experienced differently in \AR and \VR, which we aim to investigate in this work.

Wearable haptic devices have proven to be effective in modifying the perception of a touched tangible surface, without modifying the tangible or covering the fingertip, thus forming a haptic \AE \cite{bau2012revel,detinguy2018enhancing,salazar2020altering}.
Second, many works have investigated the haptic augmentation of textures, but few have integrated them with immersive \VEs or considered the influence of the visual rendering on their perception.
Still, it is known that visual feedback can alter the perception of real and virtual haptic sensations \cite{schwind2018touch,choi2021augmenting}, but also that the force feedback perception of grounded haptic devices is not the same in \AR and \VR \cite{diluca2011effects,gaffary2017ar}.
% Insist on the advantage of wearable : augment any surface see bau2012revel
Wearable haptic devices, worn directly on the finger or hand, have been used to render a variety of tactile sensations to \VOs seen in \VR \cite{choi2018claw,detinguy2018enhancing,pezent2019tasbi} or \AR \cite{maisto2017evaluation,meli2018combining,teng2021touch}.
%
They have also been used to alter the perception of roughness, stiffness, friction, and local shape of real tangible objects \cite{asano2015vibrotactile,detinguy2018enhancing,salazar2020altering}.
%
Such techniques place the actuator \emph{close} to the point of contact with the \RE, leaving the user free to directly touch the tangible.
%
This combined use of wearable haptics with tangible objects enables a haptic \emph{augmented} reality (HAR) \cite{bhatia2024augmenting} that can provide rich and varied haptic feedback.

@@ -22,7 +19,7 @@ The degree of reality/virtuality in both visual and haptic sensory modalities ca
%
Although \AR and \VR are closely related, they have significant differences that can affect the user experience \cite{genay2021virtual,macedo2023occlusion}.
%
%By integrating visual virtual content into the \RE, \AR keeps the hand of the user, the haptic devices worn and the tangibles touched visible, unlike \VR where they are hidden by immersing the user into a visual virtual environment.
%
%Current \AR systems also suffer from display and rendering limitations not present in \VR, affecting the user experience with virtual content that may be less realistic or inconsistent with the real augmented environment \cite{kim2018revisiting,macedo2023occlusion}.
%
@@ -40,13 +37,15 @@ We focus on the perception of roughness, one of the main tactile sensations of m
%
By understanding how these visual factors influence the perception of haptically augmented tangible objects, the many wearable haptic systems that already exist but have not yet been fully explored with \AR can be better applied, and new visuo-haptic renderings adapted to \AR can be designed.

\noindentskip The contributions of this chapter are:
\begin{itemize}
\item The rendering of virtual vibrotactile roughness textures in real time, using a webcam to track the finger touching the surface.
\item A system providing coherent multimodal visuo-haptic texture augmentations of the \RE in a direct-touch context, using an immersive visual AR/VR headset and wearable haptics.
\end{itemize}
\noindentskip In the remainder of this chapter, we describe the principles of the system, how the real and virtual environments are registered, the generation of the vibrotactile textures, and measures of visual and haptic rendering latencies.

%First, we present a system for rendering virtual vibrotactile textures in real time without constraints on hand movements and integrated with an immersive visual AR/VR headset to provide a coherent multimodal visuo-haptic augmentation of the \RE.
%
%An experimental setup is then presented to compare haptic roughness augmentation with an optical \AR headset (Microsoft HoloLens~2) that can be transformed into a \VR headset using a cardboard mask.
%
@@ -2,7 +2,7 @@
%
In this section, we describe a system for rendering vibrotactile roughness textures in real time, on any tangible surface, touched directly with the index fingertip, with no constraints on hand movement and using a simple camera to track the finger pose.
%
We also describe how to pair this tactile rendering with an immersive \AR or \VR headset visual display to provide a coherent, multimodal visuo-haptic augmentation of the \RE.

\section{Principle}
\label{principle}
@@ -11,19 +11,19 @@ The visuo-haptic texture rendering system is based on
%
\begin{enumerate*}[label=(\arabic*)]
\item a real-time interaction loop between the finger movements and a coherent visuo-haptic feedback simulating the sensation of a touched texture,
\item a precise alignment of the \VE with its real counterpart, and
\item a modulation of the signal frequency by the estimated finger speed with phase matching.
\end{enumerate*}
%
\figref{diagram} shows the interaction loop diagram and \eqref{signal} the definition of the vibrotactile signal.
%
The system consists of three main components: the pose estimation of the tracked real elements, the visual rendering of the \VE, and the vibrotactile signal generation and rendering.

\figwide[1]{diagram}{Diagram of the visuo-haptic texture rendering system. }[
Fiducial markers attached to the voice-coil actuator and to tangible surfaces to track are captured by a camera.
The positions and rotations (the poses) ${}^c\mathbf{T}_i$, $i=1..n$ of the $n$ defined markers in the camera frame $\mathcal{F}_c$ are estimated, then filtered with an adaptive low-pass filter.
%These poses are transformed to the \AR/\VR headset frame $\mathcal{F}_h$ and applied to the virtual model replicas to display them superimposed and aligned with the \RE.
These poses are used to move and display the virtual model replicas aligned with the \RE.
A collision detection algorithm detects a contact of the virtual hand with the virtual textures.
If so, the velocity of the finger marker ${}^c\dot{\mathbf{X}}_f$ is estimated using a discrete derivative of the position and adaptive low-pass filtering, then transformed onto the texture frame $\mathcal{F}_t$.
The vibrotactile signal $s_k$ is generated by modulating the (scalar) finger velocity ${}^t\hat{\dot{X}}_f$ in the texture direction with the texture period $\lambda$ (\eqref{signal}).
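To illustrate the signal generation just described, here is a minimal Python sketch assuming a sinusoidal grating whose instantaneous frequency is the finger speed divided by the spatial period $\lambda$, with the phase carried over between blocks to implement the phase matching; the sample rate, period, and amplitude are placeholder values, and the exact signal is the one defined in \eqref{signal}.

```python
import numpy as np

# Sketch of a grating vibrotactile signal: the instantaneous frequency
# is the finger speed divided by the texture spatial period lambda, and
# the phase is carried over between blocks ("phase matching") so that
# speed changes do not produce discontinuities in the output signal.
# Sample rate, spatial period, and amplitude are placeholder values.
RATE = 48000     # output sample rate (Hz)
LAMBDA = 2e-3    # texture spatial period (m)
AMPLITUDE = 0.5  # normalized signal amplitude

def grating_block(speed, n_samples, phase):
    """Generate n_samples for a constant finger speed (m/s), starting at
    the given phase; returns the block and the phase to carry over."""
    freq = speed / LAMBDA  # temporal frequency (Hz)
    t = np.arange(1, n_samples + 1) / RATE
    phases = phase + 2 * np.pi * freq * t
    return AMPLITUDE * np.sin(phases), phases[-1] % (2 * np.pi)

# Finger accelerating from 5 to 15 cm/s over consecutive blocks:
phase, blocks = 0.0, []
for speed in (0.05, 0.10, 0.15):
    block, phase = grating_block(speed, 800, phase)
    blocks.append(block)
signal = np.concatenate(blocks)  # continuous despite the speed steps
```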
@@ -36,13 +36,13 @@ The system consists of three main components: the pose estimation of the tracked

\begin{subfigs}{setup}{Visuo-haptic texture rendering system setup. }[][
\item HapCoil-One voice-coil actuator with a fiducial marker on top attached to a participant's right index finger.
\item HoloLens~2 \AR headset, the two cardboard masks to switch the real or virtual environments with the same \FoV, and the \ThreeD-printed piece for attaching the masks to the headset.
\item User exploring a virtual vibrotactile texture on a tangible sheet of paper.
]
\subfig[0.325]{device}
%\subfig[0.65]{headset}
%\par\vspace{2.5pt}
%\subfig[0.992]{apparatus}
\end{subfigs}

A fiducial marker (AprilTag) is glued to the top of the actuator (\figref{device}) to track the finger pose with a camera (StreamCam, Logitech) placed above the experimental setup and capturing \qtyproduct{1280 x 720}{px} images at \qty{60}{\hertz} (\figref{apparatus}).
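A sketch of this tracking step, using OpenCV and the pupil-apriltags detector as one possible implementation, is shown below; the camera intrinsics and tag size are hypothetical values that would come from a calibration step.

```python
import cv2
from pupil_apriltags import Detector

# Sketch of marker-based finger tracking with a webcam and AprilTags.
# The camera intrinsics (fx, fy, cx, cy) and the printed tag size are
# hypothetical values obtained from a prior calibration.
CAMERA_PARAMS = (900.0, 900.0, 640.0, 360.0)
TAG_SIZE = 0.02  # tag edge length (m)

detector = Detector(families="tag36h11")
capture = cv2.VideoCapture(0)  # the webcam above the setup

while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for tag in detector.detect(gray, estimate_tag_pose=True,
                               camera_params=CAMERA_PARAMS,
                               tag_size=TAG_SIZE):
        # tag.pose_R (3x3) and tag.pose_t (3x1) give the pose of the
        # marker in the camera frame Fc, before any filtering.
        print(tag.tag_id, tag.pose_t.ravel())
```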
@@ -63,8 +63,8 @@ The optimal filter parameters were determined using the method of \textcite{casi
|
|||||||
%
|
%
|
||||||
The velocity (without angular velocity) of the marker, denoted as ${}^c\dot{\mathbf{X}}_i$, is estimated using the discrete derivative of the position and an other 1€ filter with the same parameters.
|
The velocity (without angular velocity) of the marker, denoted as ${}^c\dot{\mathbf{X}}_i$, is estimated using the discrete derivative of the position and an other 1€ filter with the same parameters.
|
||||||
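For completeness, a compact Python implementation of the 1€ filter following its published formulation (the parameter values are illustrative defaults, not the optimal ones determined for this system):
\begin{verbatim}
import math

class OneEuroFilter:
    """Adaptive first-order low-pass filter whose cutoff rises with speed,
    trading jitter reduction at low speed against lag at high speed."""

    def __init__(self, min_cutoff=1.0, beta=0.01, d_cutoff=1.0):
        self.min_cutoff, self.beta, self.d_cutoff = min_cutoff, beta, d_cutoff
        self.x_prev = None
        self.dx_prev = 0.0

    @staticmethod
    def _alpha(cutoff, dt):
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau / dt)

    def __call__(self, x, dt):
        if self.x_prev is None:
            self.x_prev = x
            return x
        # Derivative of the signal, low-pass filtered at a fixed cutoff.
        dx = (x - self.x_prev) / dt
        a_d = self._alpha(self.d_cutoff, dt)
        dx_hat = a_d * dx + (1 - a_d) * self.dx_prev
        # Cutoff adapted to speed: more smoothing when slow, less lag when fast.
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        a = self._alpha(cutoff, dt)
        x_hat = a * x + (1 - a) * self.x_prev
        self.x_prev, self.dx_prev = x_hat, dx_hat
        return x_hat
\end{verbatim}
One such filter is applied per coordinate of the marker pose, and another to its discrete-derivative velocity.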
|
|
||||||
To be able to compare virtual and augmented realities, we then create a virtual environment that closely replicate the real one.
|
To be able to compare virtual and augmented realities, we then create a \VE that closely replicates the real one.
|
||||||
%Before a user interacts with the system, it is necessary to design a virtual environment that will be registered with the real environment during the experiment.
|
%Before a user interacts with the system, it is necessary to design a virtual environment that will be registered with the \RE during the experiment.
|
||||||
%
|
%
|
||||||
Each real element tracked by a marker is modelled virtually, \ie the hand and the augmented tangible surface (\figref{renderings}).
|
Each real element tracked by a marker is modelled virtually, \ie the hand and the augmented tangible surface (\figref{renderings}).
|
||||||
%
|
%
|
||||||
@@ -72,24 +72,24 @@ In addition, the pose and size of the virtual textures are defined on the virtua
|
|||||||
%
|
%
|
||||||
During the experiment, the system uses marker pose estimates to align the virtual models with their real-world counterparts. %, according to the condition being tested.
|
During the experiment, the system uses marker pose estimates to align the virtual models with their real-world counterparts. %, according to the condition being tested.
|
||||||
%
|
%
|
||||||
This allows to detect if a finger touches a virtual texture using a collision detection algorithm (Nvidia PhysX), and to show the virtual elements and textures in real-time, aligned with the real environment (\figref{renderings}), using the considered \AR or \VR headset.
|
This allows the system to detect whether a finger touches a virtual texture using a collision detection algorithm (Nvidia PhysX), and to show the virtual elements and textures in real time, aligned with the \RE (\figref{renderings}), using the considered \AR or \VR headset.
|
||||||
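In our implementation this contact test is delegated to the physics engine; as a minimal stand-in illustrating the principle, the fingertip can be expressed in the frame of the textured patch and checked against its bounds (the names and the contact tolerance are hypothetical; the half-extents correspond to the \qtyproduct{50 x 15}{\mm} augmented area used later in the experiment):
\begin{verbatim}
import numpy as np

def touches_patch(p_finger_cam, R_patch_cam, t_patch_cam,
                  half_extents=(0.025, 0.0075), z_tol=0.005):
    """Fingertip-vs-patch test for a rectangular texture on a plane.

    p_finger_cam: fingertip position in the camera frame (3-vector, m)
    R_patch_cam, t_patch_cam: pose of the patch frame in the camera frame
    """
    # Express the fingertip in the patch frame (inverse rigid transform).
    p_local = R_patch_cam.T @ (np.asarray(p_finger_cam) - t_patch_cam)
    hx, hy = half_extents
    return (abs(p_local[0]) <= hx and abs(p_local[1]) <= hy
            and abs(p_local[2]) <= z_tol)
\end{verbatim}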
|
|
||||||
In our implementation, the virtual hand and environment are designed with Unity and the Mixed Reality Toolkit (MRTK).
|
In our implementation, the virtual hand and environment are designed with Unity and the Mixed Reality Toolkit (MRTK).
|
||||||
%
|
%
|
||||||
The visual rendering is achieved using the Microsoft HoloLens~2, an \OST-\AR headset with a \qtyproduct{43 x 29}{\degree} \FoV, a \qty{60}{\Hz} refresh rate, and self-localisation capabilities.
|
The visual rendering is achieved using the Microsoft HoloLens~2, an \OST-\AR headset with a \qtyproduct{43 x 29}{\degree} \FoV, a \qty{60}{\Hz} refresh rate, and self-localisation capabilities.
|
||||||
%
|
%
|
||||||
It was chosen over \VST-\AR because \OST-\AR only adds virtual content to the real environment, while \VST-\AR streams a real-time video capture of the real environment \cite{macedo2023occlusion}.
|
It was chosen over \VST-\AR because \OST-\AR only adds virtual content to the \RE, while \VST-\AR streams a real-time video capture of the \RE \cite{macedo2023occlusion}.
|
||||||
%
|
%
|
||||||
Indeed, one of our objectives (\secref{experiment}) is to directly compare a virtual environment that replicates a real one, rather than a video feed that introduces many supplementary visual limitations \cite{kim2018revisiting,macedo2023occlusion}.
|
Indeed, one of our objectives (\secref{experiment}) is to directly compare the \RE with a \VE that replicates it, rather than with a video feed that introduces many supplementary visual limitations \cite{kim2018revisiting,macedo2023occlusion}.
|
||||||
%
|
%
|
||||||
To simulate a \VR headset, a cardboard mask (with holes for sensors) is attached to the headset to block the view of the real environment (\figref{headset}).
|
To simulate a \VR headset, a cardboard mask (with holes for sensors) is attached to the headset to block the view of the \RE (\figref{headset}).
|
||||||
|
|
||||||
\section{Vibrotactile Signal Generation and Rendering}
|
\section{Vibrotactile Signal Generation and Rendering}
|
||||||
\label{texture_generation}
|
\label{texture_generation}
|
||||||
|
|
||||||
A voice-coil actuator (HapCoil-One, Actronika) is used to display the vibrotactile signal, as it allows the frequency and amplitude of the signal to be controlled independently over time, covers a wide frequency range (\qtyrange{10}{1000}{\Hz}), and outputs the signal accurately with relatively low acceleration distortion\footnote{HapCoil-One specific characteristics are described in its data sheet: \url{https://web.archive.org/web/20240228161416/https://tactilelabs.com/wp-content/uploads/2023/11/HapCoil_One_datasheet.pdf}}.
|
A voice-coil actuator (HapCoil-One, Actronika) is used to display the vibrotactile signal, as it allows the frequency and amplitude of the signal to be controlled independently over time, covers a wide frequency range (\qtyrange{10}{1000}{\Hz}), and outputs the signal accurately with relatively low acceleration distortion\footnote{HapCoil-One specific characteristics are described in its data sheet: \url{https://web.archive.org/web/20240228161416/https://tactilelabs.com/wp-content/uploads/2023/11/HapCoil_One_datasheet.pdf}}.
|
||||||
%
|
%
|
||||||
The voice-coil actuator is encased in a 3D printed plastic shell and firmly attached to the middle phalanx of the user's index finger with a Velcro strap, to enable the fingertip to directly touch the environment (\figref{device}).
|
The voice-coil actuator is encased in a \ThreeD-printed plastic shell and firmly attached to the middle phalanx of the user's index finger with a Velcro strap, to enable the fingertip to directly touch the environment (\figref{device}).
|
||||||
%
|
%
|
||||||
The actuator is driven by a class D audio amplifier (XY-502 / TPA3116D2, Texas Instruments). %, which has proven to be an effective type of amplifier for driving moving-coil \cite{mcmahan2014dynamic}.
|
The actuator is driven by a class D audio amplifier (XY-502 / TPA3116D2, Texas Instruments). %, which has proven to be an effective type of amplifier for driving moving-coil \cite{mcmahan2014dynamic}.
|
||||||
%
|
%
|
||||||
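To close the loop from signal to actuator, here is a minimal playback sketch reusing the texture_block function sketched earlier (this assumes the python-sounddevice library; the system itself generates the audio with NAudio and the WASAPI driver on the external computer, so this is only illustrative):
\begin{verbatim}
import numpy as np
import sounddevice as sd

stream = sd.OutputStream(samplerate=48000, channels=1, dtype="float32")
stream.start()

phase = 0.0
v_example = np.array([0.075, 0.0, 0.0])  # 75 mm/s along the texture direction
R_identity = np.eye(3)
for _ in range(60):  # one second of signal at 60 tracking frames per second
    samples, phase = texture_block(v_example, R_identity, phase=phase)
    stream.write(samples.reshape(-1, 1).astype(np.float32))
\end{verbatim}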
@@ -154,7 +154,7 @@ The tactile texture is described and rendered in this work as a one dimensional
|
|||||||
\section{System Latency}
|
\section{System Latency}
|
||||||
\label{latency}
|
\label{latency}
|
||||||
|
|
||||||
%As shown in \figref{diagram} and described above, the system includes various haptic and visual sensors and rendering devices linked by software processes for image processing, 3D rendering and audio generation.
|
%As shown in \figref{diagram} and described above, the system includes various haptic and visual sensors and rendering devices linked by software processes for image processing, \ThreeD rendering and audio generation.
|
||||||
%
|
%
|
||||||
Because the chosen \AR headset is a standalone device (like most current \AR/\VR headsets) and cannot directly control the sound card and haptic actuator, the image capture, pose estimation and audio signal generation steps are performed on an external computer.
|
Because the chosen \AR headset is a standalone device (like most current \AR/\VR headsets) and cannot directly control the sound card and haptic actuator, the image capture, pose estimation and audio signal generation steps are performed on an external computer.
|
||||||
%
|
%
|
||||||
@@ -166,20 +166,20 @@ The rendering system provides a user with two interaction loops between the move
|
|||||||
%
|
%
|
||||||
Measures are shown as mean $\pm$ standard deviation (when known).
|
Measures are shown as mean $\pm$ standard deviation (when known).
|
||||||
%
|
%
|
||||||
The end-to-end latency from finger movement to feedback is measured at \qty{36 +- 4}{\ms} in the haptic loop and \qty{43 +- 9}{\ms} in the visual loop.
|
The end-to-end latency from finger movement to feedback is measured at \qty{36 \pm 4}{\ms} in the haptic loop and \qty{43 \pm 9}{\ms} in the visual loop.
|
||||||
%
|
%
|
||||||
Both are the result of latency in image capture \qty{16 +- 1}{\ms}, markers tracking \qty{2 +- 1}{\ms} and network communication \qty{4 +- 1}{\ms}.
|
Both are the result of latency in image capture \qty{16 \pm 1}{\ms}, marker tracking \qty{2 \pm 1}{\ms} and network communication \qty{4 \pm 1}{\ms}.
|
||||||
%
|
%
|
||||||
The haptic loop also includes the voice-coil latency \qty{15}{\ms} (as specified by the manufacturer\footnotemark[1]), whereas the visual loop includes the latency in 3D rendering \qty{16 +- 5}{\ms} (60 frames per second) and display \qty{5}{\ms}.
|
The haptic loop also includes the voice-coil latency \qty{15}{\ms} (as specified by the manufacturer\footnotemark[1]), whereas the visual loop includes the latency in \ThreeD rendering \qty{16 \pm 5}{\ms} (60 frames per second) and display \qty{5}{\ms}.
|
||||||
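As a sanity check, summing the nominal stage latencies reproduces the measured loop totals:
\begin{verbatim}
common = {"image capture": 16, "marker tracking": 2, "network": 4}  # ms
haptic = sum(common.values()) + 15       # + voice-coil actuator
visual = sum(common.values()) + 16 + 5   # + 3D rendering + display
print(haptic, visual)  # 37 43 (ms), vs. measured 36 +/- 4 and 43 +/- 9
\end{verbatim}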
%
|
%
|
||||||
The total haptic latency is below the \qty{60}{\ms} detection threshold in vibrotactile feedback \cite{okamoto2009detectability}.
|
The total haptic latency is below the \qty{60}{\ms} detection threshold in vibrotactile feedback \cite{okamoto2009detectability}.
|
||||||
%
|
%
|
||||||
The total visual latency can be considered slightly high, yet it is typical for an \AR rendering involving vision-based tracking \cite{knorlein2009influence}.
|
The total visual latency can be considered slightly high, yet it is typical for an \AR rendering involving vision-based tracking \cite{knorlein2009influence}.
|
||||||
|
|
||||||
The two filters also introduce a constant lag between the finger movement and the estimated position and velocity, measured at \qty{160 +- 30}{\ms}.
|
The two filters also introduce a constant lag between the finger movement and the estimated position and velocity, measured at \qty{160 \pm 30}{\ms}.
|
||||||
%
|
%
|
||||||
This lag causes a distance error between the real hand position and the displayed virtual hand position, and thus a delay in the triggering of the vibrotactile signal.
|
This lag causes a distance error between the real hand position and the displayed virtual hand position, and thus a delay in the triggering of the vibrotactile signal.
|
||||||
%
|
%
|
||||||
This is proportional to the speed of the finger, \eg distance error is \qty{12 +- 2.3}{\mm} when the finger moves at \qty{75}{\mm\per\second}.
|
This error is proportional to the speed of the finger, \eg the distance error is \qty{12 \pm 2.3}{\mm} when the finger moves at \qty{75}{\mm\per\second}.
|
||||||
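Explicitly, this position error is the product of the filter lag and the finger speed, consistent with the values above:
\[
e \,=\, \tau_{\text{lag}} \, \dot{x}_f \,\approx\, \qty{0.160}{\s} \times \qty{75}{\mm\per\second} \,=\, \qty{12}{\mm} .
\]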
%
|
%
|
||||||
%and of the vibrotactile signal frequency with respect to the finger speed.%, that is proportional to the speed of the finger.
|
%and of the vibrotactile signal frequency with respect to the finger speed.%, that is proportional to the speed of the finger.
|
||||||
|
|||||||
@@ -4,4 +4,4 @@
|
|||||||
%Summary of the research problem, method, main findings, and implications.
|
%Summary of the research problem, method, main findings, and implications.
|
||||||
|
|
||||||
We designed and implemented a system for rendering virtual haptic grating textures on a real tangible surface touched directly with the fingertip, using a wearable vibrotactile voice-coil device mounted on the middle phalanx of the finger. %, and allowing free explorative movements of the hand on the surface.
|
We designed and implemented a system for rendering virtual haptic grating textures on a real tangible surface touched directly with the fingertip, using a wearable vibrotactile voice-coil device mounted on the middle phalanx of the finger. %, and allowing free explorative movements of the hand on the surface.
|
||||||
This tactile feedback was integrated with an immersive visual virtual environment, using an OST-AR headset, to provide users with a coherent multimodal visuo-haptic augmentation of the real environment, that can be switched between an \AR and a \VR view.
|
This tactile feedback was integrated with an immersive visual \VE, using an \OST-\AR headset, to provide users with a coherent multimodal visuo-haptic augmentation of the \RE that can be switched between an \AR and a \VR view.
|
||||||
|
|||||||
@@ -1,69 +1,27 @@
|
|||||||
% Delivers the motivation for your paper. It explains why you did the work you did.
|
% Delivers the motivation for your paper. It explains why you did the work you did.
|
||||||
|
|
||||||
% Insist on the advantage of wearable : augment any surface see bau2012revel
|
Most haptic augmentations of tangible surfaces using wearable haptic devices, including roughness textures, have been studied without visual feedback, and none have considered the influence of the visual rendering on their perception or integrated them in \AR and \VR (\secref[related_work]{texture_rendering}).
|
||||||
|
Still, it is known that the visual rendering of a tangible can influence the perception of its haptic properties (\secref[related_work]{visual_haptic_influence}), and that the perception of the same haptic force-feedback or vibrotactile rendering can differ between \AR and \VR, probably due to differences in perceived simultaneity between the visual and haptic stimuli (\secref[related_work]{ar_vr_haptic}).
|
||||||
|
Indeed, while in \AR the user can see their own hand touching, the worn haptic device and the \RE, in \VR these are all hidden by the \VE.
|
||||||
|
|
||||||
%Imagine you're an archaeologist or in a museum, and you want to examine an ancient object.
|
In this chapter, we investigate the role of the visual rendering of the hand (real or virtual) and its environment (\AR or \VR) on the perception of a \textbf{tangible surface whose haptic roughness is augmented with a wearable vibrotactile} device worn on the finger.
|
||||||
%
|
To do so, we employed the visuo-haptic system presented in \chapref{vhar_system} to render the texture augmentations of tangibles that can be directly touched with the bare finger.
|
||||||
%But it is too fragile to touch directly.
|
We evaluated, in a \textbf{user study with psychophysical methods and an extensive questionnaire}, the perceived roughness augmentation in three visual rendering conditions: \textbf{(1) without visual augmentation}, \textbf{(2) in \OST-\AR with a realistic virtual hand rendering}, and \textbf{(3) in \VR with the same virtual hand}.
|
||||||
%
|
To control for the influence of the visual rendering, the tangible surface was not visually augmented.
|
||||||
%What if you could still grasp it and manipulate it through a tangible object in your hand, whose visual appearance has been modified using Augmented Reality (AR)?
|
|
||||||
%
|
|
||||||
%And what if you could also feel its shape or texture?
|
|
||||||
%
|
|
||||||
%Such tactile augmentation is made possible by wearable haptic devices, which are worn directly on the finger or hand and can provide a variety of sensations on the skin, while being small, light and discreet \cite{pacchierotti2017wearable}.
|
|
||||||
%
|
|
||||||
Wearable haptic devices, worn directly on the finger or hand, have been used to render a variety of tactile sensations to virtual objects in \VR \cite{detinguy2018enhancing,pezent2019tasbi} and \AR \cite{maisto2017evaluation,teng2021touch}.
|
|
||||||
%
|
|
||||||
They have also been used to alter the perception of roughness, stiffness, friction, and local shape perception of real tangible objects \cite{asano2015vibrotactile,detinguy2018enhancing,salazar2020altering}.
|
|
||||||
%
|
|
||||||
Such techniques place the actuator \emph{close} to the point of contact with the real environment, leaving the user free to directly touch the tangible.
|
|
||||||
%
|
|
||||||
This combined use of wearable haptics with tangible objects enables a haptic \emph{augmented} reality (HAR) \cite{bhatia2024augmenting} that can provide a rich and varied haptic feedback.
|
|
||||||
|
|
||||||
The degree of reality/virtuality in both visual and haptic sensory modalities can be varied independently, but wearable haptic \AR has been little explored with \VR and (visual) \AR \cite{choi2021augmenting}.
|
\noindentskip The contribution of this chapter is a psychophysical user study with 20 participants to evaluate the effect of visual hand rendering in \OST-\AR or \VR on the perception of haptic roughness texture augmentations, using wearable vibrotactile haptics.
|
||||||
%
|
|
||||||
Although \AR and \VR are closely related, they have significant differences that can affect the user experience \cite{genay2021virtual,macedo2023occlusion}.
|
|
||||||
%
|
|
||||||
%By integrating visual virtual content into the real environment, \AR keeps the hand of the user, the haptic devices worn and the tangibles touched visible, unlike \VR where they are hidden by immersing the user into a visual virtual environment.
|
|
||||||
%
|
|
||||||
%Current \AR systems also suffer from display and rendering limitations not present in \VR, affecting the user experience with virtual content that may be less realistic or inconsistent with the real augmented environment \cite{kim2018revisiting,macedo2023occlusion}.
|
|
||||||
%
|
|
||||||
Therefore, it seems necessary to investigate and understand the potential effect of these differences in visual rendering on the HAR perception.
|
|
||||||
%
|
|
||||||
For example, previous works have shown that the stiffness of a virtual piston rendered with a force feedback haptic system seen in \AR is perceived as less rigid than in \VR \cite{gaffary2017ar}, or when the visual rendering is ahead of the haptic rendering \cite{diluca2011effects,knorlein2009influence}.
|
|
||||||
%
|
|
||||||
%Taking our example from the beginning of this introduction, you now want to learn more about the context of the discovery of the ancient object or its use at the time of its creation by immersing yourself in a virtual environment in \VR.
|
|
||||||
%
|
|
||||||
%But how different is the perception of the haptic augmentation in \AR compared to \VR, with a virtual hand instead of the real hand?
|
|
||||||
|
|
||||||
The goal of this paper is to study the role of the visual rendering of the hand (real or virtual) and its environment (AR or \VR) on the perception of a tangible surface whose texture is augmented with a wearable vibrotactile device worn on the finger.
|
\noindentskip In the remainder of this chapter, we first describe the experimental design and apparatus of the user study.
|
||||||
%
|
We then present the results obtained and discuss them before concluding.
|
||||||
We focus on the perception of roughness, one of the main tactile sensations of materials \cite{baumgartner2013visual,hollins1993perceptual,okamoto2013psychophysical} and one of the most studied haptic augmentations \cite{asano2015vibrotactile,culbertson2014modeling,friesen2024perceived,strohmeier2017generating,ujitoko2019modulating}.
|
|
||||||
|
|
||||||
Our contributions are:
|
%First, we present a system for rendering virtual vibrotactile textures in real time without constraints on hand movements and integrated with an immersive visual \AR/\VR headset to provide a coherent multimodal visuo-haptic augmentation of the \RE.
|
||||||
%
|
|
||||||
\begin{itemize}
|
|
||||||
\item A system for rendering virtual vibrotactile roughness textures in real time on a tangible surface touched directly with the finger, integrated with an immersive visual \AR/\VR headset to provide a coherent multimodal visuo-haptic augmentation of the real environment; and %It is presented in \secref{method}.
|
|
||||||
\item A psychophysical study with 20 participants to evaluate the perception of these virtual roughness textures in three visual rendering conditions: without visual augmentation, with a realistic virtual hand rendering in \AR, and with the same virtual hand in \VR. %It is described in \secref{experiment} and those results are detailed in \secref{discussion}.
|
|
||||||
\end{itemize}
|
|
||||||
|
|
||||||
%In the remainder of this paper, we first present related work on wearable haptic texture augmentations and the haptic perception in \AR and \VR in \secref{related_work}.
|
|
||||||
%
|
|
||||||
%We then describe the visuo-haptic texture rendering system in \secref{method}.
|
|
||||||
%
|
|
||||||
%We present the experimental protocol and apparatus of the user study in \secref{experiment}, and the results obtained in \secref{results}.
|
|
||||||
%
|
|
||||||
%We discuss these results in \secref{discussion}, and conclude in \secref{conclusion}.
|
|
||||||
|
|
||||||
%In the remainder of this paper, we first present related work on perception in \VR and \AR in Section 2. Then, in Section 3, we describe the protocol and apparatus of our experimental study. The results obtained are presented in Section 4, followed by a discussion in Section 5. The paper ends with a general conclusion in Section 6.
|
|
||||||
%First, we present a system for rendering virtual vibrotactile textures in real time without constraints on hand movements and integrated with an immersive visual \AR/\VR headset to provide a coherent multimodal visuo-haptic augmentation of the real environment.
|
|
||||||
%
|
|
||||||
%An experimental setup is then presented to compare haptic roughness augmentation with an optical \AR headset (Microsoft HoloLens~2) that can be transformed into a \VR headset using a cardboard mask.
|
%An experimental setup is then presented to compare haptic roughness augmentation with an optical \AR headset (Microsoft HoloLens~2) that can be transformed into a \VR headset using a cardboard mask.
|
||||||
%
|
|
||||||
%We then conduct a psychophysical study with 20 participants, where various virtual haptic textures on a tangible surface directly touched with the finger are compared in a two-alternative forced choice (2AFC) task in three visual rendering conditions: (1) without visual augmentation, (2) with a realistic virtual hand rendering in \AR, and (3) with the same virtual hand in \VR.
|
%We then conduct a psychophysical study with 20 participants, where various virtual haptic textures on a tangible surface directly touched with the finger are compared in a two-alternative forced choice (2AFC) task in three visual rendering conditions: (1) without visual augmentation, (2) with a realistic virtual hand rendering in \AR, and (3) with the same virtual hand in \VR.
|
||||||
|
|
||||||
\fig[1]{teaser/teaser2}{
|
\bigskip
|
||||||
|
|
||||||
|
\fig[0.9]{teaser/teaser2}{
|
||||||
Vibrotactile textures were rendered in real time on a real surface using a wearable vibrotactile device worn on the finger.
|
Vibrotactile textures were rendered in real time on a real surface using a wearable vibrotactile device worn on the finger.
|
||||||
}[
|
}[%
|
||||||
Participants explored this haptic roughness augmentation with (Real) their real hand alone, (Mixed) a realistic virtual hand overlay in \AR, and (Virtual) the same virtual hand in \VR.
|
Participants explored this haptic roughness augmentation with (\level{Real}) their real hand alone, (\level{Mixed}) a realistic virtual hand overlay in \AR, and (\level{Virtual}) the same virtual hand in \VR.
|
||||||
]
|
]
|
||||||
|
|||||||
@@ -1,169 +1,132 @@
|
|||||||
\section{User Study}
|
\section{User Study}
|
||||||
\label{experiment}
|
\label{experiment}
|
||||||
|
|
||||||
The visuo-haptic rendering system, described in \secref[vhar_system]{method}, allows free exploration of virtual vibrotactile textures on tangible surfaces directly touched with the bare finger to simulate roughness augmentation, while the visual rendering of the hand and environment can be controlled to be in \AR or \VR.
|
%The visuo-haptic rendering system, described in \secref[vhar_system]{method}, allows free exploration of virtual vibrotactile textures on tangible surfaces directly touched with the bare finger to simulate roughness augmentation, while the visual rendering of the hand and environment can be controlled to be in \AR or \VR.
|
||||||
%
|
%
|
||||||
The user study aimed to investigate the effect of visual hand rendering in \AR or \VR on the perception of roughness texture augmentation. % of a touched tangible surface.
|
The user study aimed to investigate the effect of visual hand rendering in \AR or \VR on the perception of roughness texture augmentation of a touched tangible surface.
|
||||||
%
|
%
|
||||||
In a \TIFC task, participants compared the roughness of different tactile texture augmentations in three visual rendering conditions: without any visual augmentation (\figref{renderings}, \level{Real}), in \AR with a realistic virtual hand superimposed on the real hand (\figref{renderings}, \level{Mixed}), and in \VR with the same virtual hand as an avatar (\figref{renderings}, \level{Virtual}).
|
In a \TIFC task (\secref[related_work]{sensations_perception}), participants compared the roughness of different tactile texture augmentations in three visual rendering conditions: without any visual augmentation (\level{Real}, \figref{renderings}), in \AR with a realistic virtual hand superimposed on the real hand (\level{Mixed}, \figref{renderings}), and in \VR with the same virtual hand as an avatar (\level{Virtual}, \figref{renderings}).
|
||||||
%
|
%
|
||||||
Since vision is an important source of information and influence for the perception of texture \cite{bergmanntiest2007haptic,yanagisawa2015effects,vardar2019fingertip}, the touched surface was kept a uniform white so as not to bias this perception; thus, only the visual aspect of the hand and the surrounding environment was changed.
|
Since vision is an important source of information and influence for the perception of texture \cite{bergmanntiest2007haptic,yanagisawa2015effects,vardar2019fingertip}, the touched surface was kept a uniform white so as not to bias this perception; thus, only the visual aspect of the hand and the surrounding environment was changed.
|
||||||
|
|
||||||
\subsection{Participants}
|
\begin{subfigs}{renderings}{
|
||||||
\label{participants}
|
The three visual rendering conditions and the experimental procedure of the \TIFC psychophysical study.
|
||||||
|
}[%
|
||||||
Twenty participants were recruited for the study (16 males, 3 females, 1 preferred not to say), aged between 18 and 61 years (\median{26}{}, \iqr{6.8}{}).
|
During a trial, two tactile textures were rendered on the augmented area of the paper sheet (black rectangle) for \qty{3}{\s} each, one after the other, then the participant chose which one felt rougher.
|
||||||
%
|
The visual rendering stayed the same during the trial.
|
||||||
All participants had normal or corrected-to-normal vision, and none had a known hand or finger impairment.
|
The pictures were captured directly from the Microsoft HoloLens~2 headset.
|
||||||
%
|
][
|
||||||
One was left-handed and the rest were right-handed; they all performed the task with their right index.
|
\item The real environment and real hand view without any visual augmentation.
|
||||||
%
|
\item The real environment and hand view with the superimposed virtual hand.
|
||||||
When rating their experience with haptics, \AR and \VR (\enquote{I use it several times a year}), 12 were experienced with haptics, 5 with \AR, and 10 with \VR.
|
\item The virtual environment with the virtual hand.
|
||||||
%
|
]
|
||||||
Experience was correlated between haptics and \VR (\pearson{0.59}), and \AR and \VR (\pearson{0.67}), but not haptics and \AR (\pearson{0.20}), nor haptics, \AR, or \VR with age (\pearson{0.05} to \pearson{0.12}).
|
\subfig[0.326]{experiment/real}
|
||||||
%
|
\subfig[0.326]{experiment/mixed}
|
||||||
Participants were recruited at the university on a voluntary basis.
|
\subfig[0.326]{experiment/virtual}
|
||||||
%
|
\end{subfigs}
|
||||||
They all signed an informed consent form before the user study and were unaware of its purpose.
|
|
||||||
|
|
||||||
\subsection{Apparatus}
|
\subsection{Apparatus}
|
||||||
\label{apparatus}
|
\label{apparatus}
|
||||||
|
|
||||||
An experimental environment was created to ensure a similar visual rendering in \AR and \VR (\figref{renderings}).
|
An experimental environment was created to ensure a similar visual rendering in \AR and \VR (\figref{renderings}).
|
||||||
%
|
|
||||||
It consisted of a \qtyproduct{300 x 210 x 400}{\mm} medium-density fibreboard (MDF) box with a paper sheet glued inside and a \qtyproduct{50 x 15}{\mm} rectangle printed on the sheet to delimit the area where the tactile textures were rendered.
|
It consisted of a \qtyproduct{300 x 210 x 400}{\mm} medium-density fibreboard (MDF) box with a paper sheet glued inside and a \qtyproduct{50 x 15}{\mm} rectangle printed on the sheet to delimit the area where the tactile textures were rendered.
|
||||||
%
|
|
||||||
A single light source of \qty{800}{\lumen} placed \qty{70}{\cm} above the table fully illuminated the inside of the box.
|
A single light source of \qty{800}{\lumen} placed \qty{70}{\cm} above the table fully illuminated the inside of the box.
|
||||||
%
|
|
||||||
Participants rated the roughness of the paper (without any texture augmentation) before the experiment on a 7-point Likert scale (1~=~Extremely smooth, 7~=~Extremely rough) as quite smooth (\mean{2.5}, \sd{1.3}).
|
Participants rated the roughness of the paper (without any texture augmentation) before the experiment on a 7-point Likert scale (1~=~Extremely smooth, 7~=~Extremely rough) as quite smooth (\mean{2.5}, \sd{1.3}).
|
||||||
|
|
||||||
%The visual rendering of the virtual hand and environment was achieved using the Microsoft HoloLens~2, an OST-AR headset with a \qtyproduct{43 x 29}{\degree} field of view (FoV) and a \qty{60}{\Hz} refresh rate, running a custom application made with Unity 2021.1.0f1 and Mixed Reality Toolkit (MRTK) 2.7.2.
|
The visual rendering of the virtual hand and environment was achieved using the \OST-\AR headset Microsoft HoloLens~2 (\secref[vhar_system]{virtual_real_alignment}), running a custom application made with Unity 2021.1 and the Mixed Reality Toolkit (MRTK) 2.7.2\footnoteurl{https://learn.microsoft.com/windows/mixed-reality/mrtk-unity} at \qty{60}{FPS}.
|
||||||
%f
|
An \OST-\AR headset was chosen over a \VST-\AR headset because the former only adds virtual content to the \RE, while the latter streams a real-time video capture of the \RE. Indeed, one of our objectives was to directly compare the \RE with a \VE replicating it, rather than with a video feed that introduces many other visual limitations (\secref[related_work]{ar_displays}).
|
||||||
The virtual environment carefully reproduced the real environment, including the geometry of the box, textures, lighting, and shadows (\figref{renderings}, \level{Virtual}).
|
The \VE carefully reproduced the \RE, including the geometry of the box, textures, lighting, and shadows (\figref{renderings}, \level{Virtual}).
|
||||||
%
|
|
||||||
The virtual hand model was a gender-neutral human right hand with realistic skin texture, similar to that used by \textcite{schwind2017these}.
|
The virtual hand model was a gender-neutral human right hand with realistic skin texture, similar to that used by \textcite{schwind2017these}.
|
||||||
%
|
|
||||||
Its size was adjusted to match the real hand of the participants before the experiment.
|
Its size was adjusted to match the real hand of the participants before the experiment.
|
||||||
%
|
The visual rendering of the virtual hand and environment is described in \secref[vhar_system]{virtual_real_alignment}.
|
||||||
%An OST-AR headset (Microsoft HoloLens~2) was chosen over a VST-AR headset because the former only adds virtual content to the real environment, while the latter streams a real-time video capture of the real environment, and one of our objectives was to directly compare a virtual environment replicating a real one, not to a video feed that introduces many other visual limitations \cite{macedo2023occlusion}.
|
|
||||||
%
|
To ensure the same \FoV in all \factor{Visual Rendering} conditions, a cardboard mask was attached to the \AR headset (\figref{experiment/headset}).
|
||||||
The visual rendering of the virtual hand and environment is described in \secref{virtual_real_alignment}.
|
In the \level{Virtual} rendering, the mask had holes only for the sensors, blocking the view of the \RE to simulate a \VR headset.
|
||||||
%
|
In the \level{Mixed} and \level{Real} conditions, the mask had two additional holes for the eyes that matched the \FoV of the HoloLens~2 (\figref{experiment/headset}).
|
||||||
%In the \level{Virtual} rendering, a cardboard mask (with holes for sensors) was attached to the headset to block the view of the real environment and simulate a \VR headset (\figref{method/headset}).
|
|
||||||
%
|
|
||||||
To ensure the same \FoV in all \factor{Visual Rendering} condition, a cardboard mask was attached to the \AR headset (\figref{method/headset}).
|
|
||||||
%
|
|
||||||
In the \level{Virtual} rendering, the mask only had holes for sensors to block the view of the real environment and simulate a \VR headset.
|
|
||||||
%
|
|
||||||
In the \level{Mixed} and \level{Real} conditions, the mask had two additional holes for the eyes that matched the \FoV of the HoloLens~2 (\figref{method/headset}).
|
|
||||||
%
|
|
||||||
\figref{renderings} shows the resulting views in the three considered \factor{Visual Rendering} conditions.
|
\figref{renderings} shows the resulting views in the three considered \factor{Visual Rendering} conditions.
|
||||||
|
|
||||||
%A vibrotactile voice-coil device (HapCoil-One, Actronika), incased in a 3D-printed plastic shell, was firmly attached to the right index finger of the participants using a Velcro strap (\figref{method/device}), was used to render the textures
|
Participants sat comfortably in front of the box at a distance of \qty{30}{\cm}, wearing the HoloLens~2 with a cardboard mask attached, so that only the inside of the box was visible, as shown in \figref{experiment/apparatus}.
|
||||||
%
|
The vibrotactile voice-coil actuator (HapCoil-One, Actronika) was firmly attached to the middle phalanx of the right index finger of the participants using a Velcro strap.
|
||||||
%This voice-coil was chosen for its wide frequency range (\qtyrange{10}{1000}{\Hz}) and its relatively low acceleration distortion, as specified by the manufacturer\footnotemark[1].
|
The generation of the virtual texture is described in \secref[vhar_system]{texture_generation}.
|
||||||
%
|
They also wore headphones playing brown noise to mask the sound of the voice-coil.
|
||||||
%It was driven by an audio amplifier (XY-502, not branded) connected to a computer that generated the audio signal of the textures as described in \secref{method}, using the NAudio library and the WASAPI driver in exclusive mode.
|
The user study was held in a quiet room with no windows, and took on average one hour to complete.
|
||||||
%
|
|
||||||
%The position of the finger relative to the sheet was estimated using a webcam placed on top of the box (StreamCam, Logitech) and the OpenCV library by tracking a \qty{2}{\cm} square fiducial marker (AprilTag) glued to top of the vibrotactile actuator.
|
|
||||||
%
|
|
||||||
%The total texture latency was measured to \qty{36 \pm 4}{\ms}, as a result of latency in image acquisition \qty{16 \pm 1}{\ms}, fiducial marker detection \qty{2 \pm 1}{\ms}, audio sampling \qty{3 \pm 1}{\ms}, and the vibrotactile actuator latency (\qty{15}{\ms}, as specified by the manufacturer\footnotemark[1]), and was below the \qty{60}{\ms} threshold for vibrotactile feedback \cite{okamoto2009detectability}.
|
|
||||||
%
|
|
||||||
%The virtual hand followed the position of the fiducial marker with a slightly higher latency due to the network synchronization \qty{4 \pm 1}{\ms} between the computer and the HoloLens~2.
|
|
||||||
|
|
||||||
Participants sat comfortably in front of the box at a distance of \qty{30}{\cm}, wearing the HoloLens~2 with a cardboard mask attached, so that only the inside of the box was visible, as shown in \figref{method/apparatus}.
|
\begin{subfigs}{setup}{Visuo-haptic texture rendering system setup. }[][
|
||||||
%
|
\item HoloLens~2 \OST-\AR headset, the two cardboard masks used to switch between the real and virtual environments with the same \FoV, and the \ThreeD-printed piece for attaching the masks to the headset.
|
||||||
%A vibrotactile voice-coil actuator (HapCoil-One, Actronika) was encased in a 3D printed plastic shell with a \qty{2}{\cm} AprilTag glued to top, and firmly attached to the middle phalanx of the right index finger of the participants using a Velcro strap.
|
\item User exploring a virtual vibrotactile texture on a tangible sheet of paper.
|
||||||
%
|
]
|
||||||
The generation of the virtual texture and the control of the virtual hand are described in \secref{method}.
|
\subfigsheight{48.5mm}
|
||||||
%
|
\subfig{experiment/headset}
|
||||||
They also wore headphones with a pink noise masking the sound of the voice-coil.
|
\subfig{experiment/apparatus}
|
||||||
%
|
\end{subfigs}
|
||||||
The experimental setup was held in a quiet room with no windows.
|
|
||||||
%
|
|
||||||
The user study took on average one hour to complete.
|
|
||||||
|
|
||||||
\subsection{Procedure}
|
\subsection{Procedure}
|
||||||
\label{procedure}
|
\label{procedure}
|
||||||
|
|
||||||
Participants were first given written instructions about the experimental setup and procedure, the informed consent form to sign, and a demographic questionnaire.
|
Participants were first given written instructions about the experimental setup and procedure, the informed consent form to sign, and a demographic questionnaire.
|
||||||
%
|
A calibration was then performed to adjust the HoloLens~2 to the participant's interpupillary distance (IPD), the virtual hand to the real hand size, and the fiducial marker to the finger position.
|
||||||
%They were then asked to sit in front of the box and wear the HoloLens~2 and headphones while the experimenter firmly attached the vibrotactile device to the middle phalanx of their right index finger (\figref{method/apparatus}).
|
They familiarized themselves with the task by completing four training trials with the most different pair of textures.
|
||||||
%
|
|
||||||
A calibration was then performed to adjust the HoloLens~2 to the participant's interpupillary distance, the virtual hand to the real hand size, and the fiducial marker to the finger position.
|
|
||||||
%
|
|
||||||
They familiarised themselves with the task by completing four training trials with the most different pair of textures.
|
|
||||||
%
|
|
||||||
The trials were divided into three blocks, one for each \factor{Visual Rendering} condition, with a break and questionnaire between each block.
|
The trials were divided into three blocks, one for each \factor{Visual Rendering} condition, with a break and questionnaire between each block.
|
||||||
%
|
Before each block, the experimenter ensured that the \VE and the virtual hand were correctly aligned with their real equivalents and that the haptic device was in place, then attached the cardboard mask corresponding to the next \factor{Visual Rendering} condition to the headset.
|
||||||
Before each block, the experimenter ensured that the virtual environment and the virtual hand were correctly aligned with their real equivalents, that the haptic device was in place, and attached the cardboard mask corresponding to the next \factor{Visual Rendering} condition to the headset.
|
|
||||||
|
|
||||||
The participant started the trial by clicking the middle button of a mouse with the left hand.
|
The participant started the trial by clicking the middle button of a mouse with the left hand.
|
||||||
%
|
|
||||||
The first texture was then rendered on the augmented area of the paper sheet for \qty{3}{\s} and, after a \qty{1}{\s} pause, the second texture was also rendered for \qty{3}{\s}.
|
The first texture was then rendered on the augmented area of the paper sheet for \qty{3}{\s} and, after a \qty{1}{\s} pause, the second texture was also rendered for \qty{3}{\s}.
|
||||||
%
|
|
||||||
The participant then had to decide which texture felt rougher by clicking the left (for the first texture) or right (for the second texture) button of the mouse and confirming their choice by clicking the middle button again.
|
The participant then had to decide which texture felt rougher by clicking the left (for the first texture) or right (for the second texture) button of the mouse and confirming their choice by clicking the middle button again.
|
||||||
%
|
|
||||||
If the participant moved their finger away from the texture area, the texture timer was paused until they returned.
|
If the participant moved their finger away from the texture area, the texture timer was paused until they returned.
|
||||||
%
|
|
||||||
Participants were asked to explore the textures as they would in real life by moving their finger back and forth over the texture area at different speeds.
|
Participants were asked to explore the textures as they would in real life by moving their finger back and forth over the texture area at different speeds.
|
||||||
|
|
||||||
One of the textures in the tested pair was always the reference texture, while the other was the comparison texture.
|
One of the textures in the tested pair was always the reference texture, while the other was the comparison texture.
|
||||||
%
|
|
||||||
Participants were not told that there was a reference and a comparison texture.
|
Participants were not told that there was a reference and a comparison texture.
|
||||||
%
|
The order of presentation was randomized and not revealed to the participants.
|
||||||
The order of presentation was randomised and not revealed to the participants.
|
All textures were rendered as described in \secref[vhar_system]{texture_generation} with a period $\lambda$ of \qty{2}{\mm}, but with different amplitudes $A$ to create different levels of roughness.
|
||||||
%
|
|
||||||
All textures were rendered as described in \secref{texture_generation} with period $\lambda$ of \qty{2}{\mm}, but with different amplitudes $A$ to create different levels of roughness.
|
|
||||||
%
|
|
||||||
Preliminary studies allowed us to determine a range of amplitudes that could be felt by the participants and were not too uncomfortable.
|
Preliminary studies allowed us to determine a range of amplitudes that could be felt by the participants and were not too uncomfortable.
|
||||||
%
|
|
||||||
The reference texture was chosen to be the one with the middle amplitude to compare it with lower and higher roughness levels and to determine key perceptual variables such as the \PSE and the \JND of each \factor{Visual Rendering} condition.
|
The reference texture was chosen to be the one with the middle amplitude to compare it with lower and higher roughness levels and to determine key perceptual variables such as the \PSE and the \JND of each \factor{Visual Rendering} condition.
|
||||||
%
|
The chosen \TIFC task is a common psychophysical method used in haptics to determine \PSE and \JND by testing comparison stimuli against a fixed reference stimulus and by fitting a psychometric function to the participant's responses (\secref[related_work]{sensations_perception}).
|
||||||
The chosen \TIFC task is a common psychophysical method used in haptics to determine \PSE and \JND by testing comparison stimuli against a fixed reference stimulus and by fitting a psychometric function to the participant's responses \cite{jones2013application}.
|
|
||||||
|
|
||||||
\subsection{Experimental Design}
|
\subsection{Experimental Design}
|
||||||
\label{experimental_design}
|
\label{experimental_design}
|
||||||
|
|
||||||
The user study was a within-subjects design with two factors:
|
The user study was a within-subjects design with two factors:
|
||||||
%
|
|
||||||
\begin{itemize}
|
\begin{itemize}
|
||||||
\item \factor{Visual Rendering} consists of the augmented or virtual view of the environment, the hand and the wearable haptic device, with 3 levels: real environment and real hand view without any visual augmentation (\figref{renderings}, \level{Real}), real environment and hand view with the virtual hand (\figref{renderings}, \level{Mixed}) and virtual environment with the virtual hand (\figref{renderings}, \level{Virtual}).
|
\item \factor{Visual Rendering} consists of the augmented or virtual view of the environment, the hand and the wearable haptic device, with 3 levels: real environment and hand view without any visual augmentation (\figref{renderings}, \level{Real}), real environment and hand view with the superimposed virtual hand (\figref{renderings}, \level{Mixed}) and virtual environment with the virtual hand (\figref{renderings}, \level{Virtual}).
|
||||||
\item \factor{Amplitude Difference} consists of the difference in amplitude of the comparison texture with the reference texture (which is identical for all visual renderings), with 6 levels: \qtylist{+-12.5; +-25.0; +-37.5}{\%}.
|
\item \factor{Amplitude Difference} consists of the difference in amplitude of the comparison texture with the reference texture (which is identical for all visual renderings), with 6 levels: \qtylist{\pm 12.5; \pm 25.0; \pm 37.5}{\%}.
|
||||||
\end{itemize}
|
\end{itemize}
|
||||||
|
|
||||||
A trial consisted of a \TIFC task in which the participant touched two virtual vibrotactile textures one after the other and decided which one felt rougher.
|
A trial consisted of a \TIFC task in which the participant touched two virtual vibrotactile textures one after the other and decided which one felt rougher.
|
||||||
%
|
|
||||||
To avoid any order effect, the order of \factor{Visual Rendering} conditions was counterbalanced between participants using a balanced Latin square design.
|
To avoid any order effect, the order of \factor{Visual Rendering} conditions was counterbalanced between participants using a balanced Latin square design.
|
||||||
%
|
|
||||||
Within each condition, the presentation order of the reference and comparison textures was also counterbalanced, and all possible texture pairs were presented in random order and repeated three times.
|
Within each condition, the presentation order of the reference and comparison textures was also counterbalanced, and all possible texture pairs were presented in random order and repeated three times.
|
||||||
%
|
|
||||||
A total of 3 visual renderings \x 6 amplitude differences \x 2 texture presentation orders \x 3 repetitions = 108 trials were performed by each participant.
|
A total of 3 visual renderings \x 6 amplitude differences \x 2 texture presentation orders \x 3 repetitions = 108 trials were performed by each participant.
|
||||||
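As an illustration of this counterbalancing, a trial list satisfying the design can be generated as follows (a Python sketch; the Williams construction of the balanced Latin square and all helper names are illustrative, not the study's actual code):
\begin{verbatim}
import random
from itertools import product

def williams_rows(n):
    """Rows of a balanced Latin square (Williams design); for an odd n,
    the mirrored rows are appended to balance first-order carry-over."""
    seq = [0, 1]
    lo, hi = 2, n - 1
    for i in range(n - 2):
        seq.append(hi if i % 2 == 0 else lo)
        if i % 2 == 0:
            hi -= 1
        else:
            lo += 1
    rows = [[(c + i) % n for c in seq] for i in range(n)]
    return rows + [r[::-1] for r in rows] if n % 2 == 1 else rows

CONDITIONS = ["Real", "Mixed", "Virtual"]
AMP_DIFFS = [-37.5, -25.0, -12.5, +12.5, +25.0, +37.5]  # percent

def trials_for(participant):
    order = williams_rows(len(CONDITIONS))[participant % 6]
    trials = []
    for c in order:  # one block per Visual Rendering condition
        block = [(CONDITIONS[c], diff, ref_first, rep)
                 for diff, ref_first, rep
                 in product(AMP_DIFFS, (True, False), range(3))]
        random.shuffle(block)  # random pair order within the block
        trials.extend(block)
    return trials

assert len(trials_for(0)) == 108  # 3 x 6 x 2 x 3
\end{verbatim}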
|
|
||||||
|
\subsection{Participants}
|
||||||
|
\label{participants}
|
||||||
|
|
||||||
|
Twenty participants were recruited for the study (16 males, 3 females, 1 preferred not to say), aged between 18 and 61 years (\median{26}{}, \iqr{6.8}{}).
|
||||||
|
All participants had normal or corrected-to-normal vision, and none had a known hand or finger impairment.
|
||||||
|
One was left-handed and the rest were right-handed; they all performed the task with their right index finger.
|
||||||
|
When rating their experience with haptics, \AR and \VR, 12 participants reported being experienced (\enquote{I use it several times a year}) with haptics, 5 with \AR, and 10 with \VR.
|
||||||
|
Experience was correlated between haptics and \VR (\pearson{0.59}), and \AR and \VR (\pearson{0.67}), but not haptics and \AR (\pearson{0.20}), nor haptics, \AR, or \VR with age (\pearson{0.05} to \pearson{0.12}).
|
||||||
|
Participants were recruited at the university on a voluntary basis.
|
||||||
|
They all signed an informed consent form before the user study and were unaware of its purpose.
|
||||||
|
|
||||||
\subsection{Collected Data}
|
\subsection{Collected Data}
|
||||||
\label{collected_data}
|
\label{collected_data}
|
||||||
|
|
||||||
For each trial, the \response{Texture Choice} by the participant as the rougher of the pair was recorded.
|
For each trial, the \response{Texture Choice} by the participant as the rougher of the pair was recorded.
|
||||||
%
|
|
||||||
The \response{Response Time} between the end of the trial and the choice of the participant was also measured as an indicator of the difficulty of the task.
|
The \response{Response Time} between the end of the trial and the choice of the participant was also measured as an indicator of the difficulty of the task.
|
||||||
%
|
|
||||||
At each frame, the \response{Finger Position} and \response{Finger Speed} were recorded to control for possible differences in texture exploration behaviour.
|
At each frame, the \response{Finger Position} and \response{Finger Speed} were recorded to control for possible differences in texture exploration behaviour.
|
||||||
%
|
|
||||||
Participants also rated their experience after each \factor{Visual Rendering} block of trials using the questions shown in \tabref{questions}.
|
Participants also rated their experience after each \factor{Visual Rendering} block of trials using the questions shown in \tabref{questions}.
|
||||||
%After each \factor{Visual Rendering} block of trials, participants rated their experience with the vibrotactile textures (all blocks), the vibrotactile device (all blocks), the virtual hand rendering (all except \level{Mixed} block) and the virtual environment (\level{Virtual} block) using the questions shown in \tabref{questions}.
|
Specifically, after each block they rated the vibrotactile textures (all blocks), the vibrotactile device (all blocks), the virtual hand rendering (all except the \level{Mixed} block) and the \VE (\level{Virtual} block).
|
||||||
%
|
|
||||||
They also assessed their workload with the NASA Task Load Index (\response{NASA-TLX}) questionnaire after each block of trials.
|
They also assessed their workload with the NASA Task Load Index (\response{NASA-TLX}) questionnaire after each block of trials.
|
||||||
%
|
|
||||||
For all questions, participants were shown only labels (\eg \enquote{Not at all} or \enquote{Extremely}) and not the actual scale values (\eg 1 or 5) \cite{muller2014survey}.
|
For all questions, participants were shown only labels (\eg \enquote{Not at all} or \enquote{Extremely}) and not the actual scale values (\eg 1 or 5) \cite{muller2014survey}.
|
||||||
|
|
||||||
\newcommand{\scalegroup}[2]{\multirow{#1}{1\linewidth}{#2}}
|
\newcommand{\scalegroup}[2]{\multirow{#1}{1\linewidth}{#2}}
|
||||||
\begin{tabwide}{questions}
|
\begin{tabwide}{questions}
|
||||||
{Questions asked to participants after each \factor{Visual Rendering} block of trials.}
|
{Questions asked to participants after each \factor{Visual Rendering} block of trials.}
|
||||||
[
|
[
|
||||||
Unipolar scale questions were 5-point Likert scales (1 = Not at all, 2 = Slightly, 3 = Moderately, 4 = Very and 5 = Extremely).
|
Unipolar scale questions were 5-point Likert scales (1~=~Not at all, 2~=~Slightly, 3~=~Moderately, 4~=~Very and 5~=~Extremely).
|
||||||
Bipolar scale questions were 7-point Likert scales (1 = Extremely A, 2 = Moderately A, 3 = Slightly A, 4 = Neither A nor B, 5 = Slightly B, 6 = Moderately B, 7 = Extremely B),
|
Bipolar scale questions were 7-point Likert scales (1~=~Extremely A, 2~=~Moderately A, 3~=~Slightly A, 4~=~Neither A nor B, 5~=~Slightly B, 6~=~Moderately B, 7~=~Extremely B),
|
||||||
where A and B are the two poles of the scale (indicated in parentheses in the Scale column of the questions).
|
where A and B are the two poles of the scale (indicated in parentheses in the Scale column of the questions).
|
||||||
NASA TLX questions were bipolar 100-points scales (0 = Very Low and 100 = Very High, except for Performance where 0 = Perfect and 100 = Failure).
|
NASA-TLX questions were bipolar 100-point scales (0~=~Very Low and 100~=~Very High, except for Performance where 0~=~Perfect and 100~=~Failure).
|
||||||
Participants were shown only the labels for all questions.
|
Participants were shown only the labels for all questions.
|
||||||
]
|
]
|
||||||
\begin{tabularx}{\linewidth}{l X p{0.2\linewidth}}
|
\begin{tabularx}{\linewidth}{l X p{0.2\linewidth}}
|
||||||
@@ -175,7 +138,7 @@ For all questions, participants were shown only labels (\eg \enquote{Not at all}
|
|||||||
Texture Plausibility & Did you feel like you were actually touching textures? & \\
|
Texture Plausibility & Did you feel like you were actually touching textures? & \\
|
||||||
Texture Latency & Did the sensations of texture seem to lag behind your movements? & \\
|
Texture Latency & Did the sensations of texture seem to lag behind your movements? & \\
|
||||||
\midrule
|
\midrule
|
||||||
Vibration Location & Did the vibrations seem to come from the surface you were touching or did you feel them on the top of your finger? & Bipolar (1=surface, 7=top of finger) \\
|
Vibration Location & Did the vibrations seem to come from the surface you were touching or did you feel them on the top of your finger? & Bipolar (1=surface, 7=finger) \\
|
||||||
Vibration Strength & Overall, how weak or strong were the vibrations? & Bipolar (1=weak, 7=strong) \\
|
Vibration Strength & Overall, how weak or strong were the vibrations? & Bipolar (1=weak, 7=strong) \\
|
||||||
Device Distraction & To what extent did the vibrotactile device distract you from the task? & \scalegroup{2}{Unipolar (1-5)} \\
|
Device Distraction & To what extent did the vibrotactile device distract you from the task? & \scalegroup{2}{Unipolar (1-5)} \\
|
||||||
Device Discomfort & How uncomfortable was it to use the vibrotactile device? & \\
|
Device Discomfort & How uncomfortable was it to use the vibrotactile device? & \\
|
||||||
@@ -185,7 +148,7 @@ For all questions, participants were shown only labels (\eg \enquote{Not at all}
|
|||||||
Hand Ownership & Did you feel the virtual hand was your own hand? & \\
|
Hand Ownership & Did you feel the virtual hand was your own hand? & \\
|
||||||
Hand Latency & Did the virtual hand seem to lag behind your movements? & \\
|
Hand Latency & Did the virtual hand seem to lag behind your movements? & \\
|
||||||
Hand Distraction & To what extent did the virtual hand distract you from the task? & \\
|
Hand Distraction & To what extent did the virtual hand distract you from the task? & \\
|
||||||
Hand Reference & Overall, did you focus on your own hand or the virtual hand to complete the task? & Bipolar (1=own hand, 7=virtual hand) \\
|
Hand Reference & Overall, did you focus on your own hand or the virtual hand to complete the task? & Bipolar (1=own, 7=virtual) \\
|
||||||
\midrule
|
\midrule
|
||||||
Virtual Realism & How realistic was the virtual environment? & \scalegroup{2}{Unipolar (1-5)} \\
|
Virtual Realism & How realistic was the virtual environment? & \scalegroup{2}{Unipolar (1-5)} \\
|
||||||
Virtual Similarity & How similar was the virtual environment to the real one? & \\
|
Virtual Similarity & How similar was the virtual environment to the real one? & \\
|
||||||
|
|||||||
@@ -4,76 +4,58 @@
|
|||||||
\subsection{Trial Measures}
|
\subsection{Trial Measures}
|
||||||
\label{results_trials}
|
\label{results_trials}
|
||||||
|
|
||||||
All measures from trials were analysed using \LMM or \GLMM with \factor{Visual Rendering}, \factor{Amplitude Difference} and their interaction as within-participant factors, and by-participant random intercepts.
|
All measures from trials were analyzed using \LMM or \GLMM with \factor{Visual Rendering}, \factor{Amplitude Difference} and their interaction as within-participant factors, and by-participant random intercepts.
|
||||||
%
|
|
||||||
Depending on the data, different random effect structures were tested.
|
Depending on the data, different random effect structures were tested.
|
||||||
%
|
|
||||||
Only the best converging models are reported, with the lowest Akaike Information Criterion (AIC) values.
|
Only the best converging models are reported, with the lowest Akaike Information Criterion (AIC) values.
|
||||||
%
|
|
||||||
Post-hoc pairwise comparisons were performed using Tukey's \HSD test.
|
Post-hoc pairwise comparisons were performed using Tukey's \HSD test.
|
||||||
%
|
|
||||||
Each estimate is reported with its \percent{95} \CI as follows: \ci{\textrm{lower limit}}{\textrm{upper limit}}.
|
Each estimate is reported with its \percent{95} \CI as follows: \ci{\textrm{lower limit}}{\textrm{upper limit}}.
|
||||||
|
|
||||||
\subsubsection{Discrimination Accuracy}
|
\subsubsection{Discrimination Accuracy}
|
||||||
\label{discrimination_accuracy}
|
\label{discrimination_accuracy}
|
||||||
|
|
||||||
A \GLMM was fitted to the \response{Texture Choice} in the \TIFC vibrotactile texture roughness discrimination task, with by-participant random intercepts but no random slopes, and a probit link function (\figref{results/trial_predictions}).
|
A \GLMM was fitted to the \response{Texture Choice} in the \TIFC vibrotactile texture roughness discrimination task, with by-participant random intercepts but no random slopes, and a probit link function (\figref{results/trial_predictions}).
|
||||||
%
|
|
||||||
The \PSEs (\figref{results/trial_pses}) and \JNDs (\figref{results/trial_jnds}) for each visual rendering and their respective differences were estimated from the model, along with their corresponding \percent{95} \CI, using a non-parametric bootstrap procedure (1000 samples).
|
The \PSEs (\figref{results/trial_pses}) and \JNDs (\figref{results/trial_jnds}) for each visual rendering and their respective differences were estimated from the model, along with their corresponding \percent{95} \CI, using a non-parametric bootstrap procedure (1000 samples).
|
||||||
%
|
The \PSE represents the estimated amplitude difference at which the comparison texture was perceived as rougher than the reference texture \percent{50} of the time, \ie it is the accuracy of participants in discriminating vibrotactile roughness.
|
||||||
The \PSE represents the estimated amplitude difference at which the comparison texture was perceived as rougher than the reference texture \percent{50} of the time. %, \ie it is the accuracy of participants in discriminating vibrotactile roughness.
|
|
||||||
%
|
|
||||||
The \level{Real} rendering had the highest \PSE (\percent{7.9} \ci{1.2}{4.1}) and was statistically significantly different from the \level{Mixed} rendering (\percent{1.9} \ci{-2.4}{6.1}) and from the \level{Virtual} rendering (\percent{5.1} \ci{2.4}{7.6}).
|
The \level{Real} rendering had the highest \PSE (\percent{7.9} \ci{1.2}{4.1}) and was statistically significantly different from the \level{Mixed} rendering (\percent{1.9} \ci{-2.4}{6.1}) and from the \level{Virtual} rendering (\percent{5.1} \ci{2.4}{7.6}).
|
||||||
%
|
The \JND represents the estimated minimum amplitude difference between the comparison and reference textures that participants could perceive, \ie the sensitivity to vibrotactile roughness differences,
|
||||||
The \JND represents the estimated minimum amplitude difference between the comparison and reference textures that participants could perceive,
|
|
||||||
% \ie the sensitivity to vibrotactile roughness differences,
|
|
||||||
calculated at the 84th percentile of the predictions of the \GLMM (\ie one standard deviation of the normal distribution) \cite{ernst2002humans}.
|
calculated at the 84th percentile of the predictions of the \GLMM (\ie one standard deviation of the normal distribution) \cite{ernst2002humans}.
|
||||||
%
|
|
||||||
The \level{Real} rendering had the lowest \JND (\percent{26} \ci{23}{29}), the \level{Mixed} rendering had the highest (\percent{33} \ci{30}{37}), and the \level{Virtual} rendering was in between (\percent{30} \ci{28}{32}).
|
The \level{Real} rendering had the lowest \JND (\percent{26} \ci{23}{29}), the \level{Mixed} rendering had the highest (\percent{33} \ci{30}{37}), and the \level{Virtual} rendering was in between (\percent{30} \ci{28}{32}).
|
||||||
%
|
|
||||||
All pairwise differences were statistically significant.
|
All pairwise differences were statistically significant.
|
||||||
|
|
||||||
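
As an illustration, a simplified Python sketch of this estimation follows: a probit psychometric function is fitted to the binary \TIFC responses, and the \PSE and \JND are derived from its coefficients with bootstrapped \CIs; for brevity it omits the by-participant random intercepts of the actual \GLMM, and the column names are hypothetical.
\begin{verbatim}
# Simplified sketch of the PSE/JND estimation with a probit fit and a
# non-parametric bootstrap. Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

def pse_jnd(df):
    X = sm.add_constant(df["amp_diff"])  # linear predictor a + b * x
    fam = sm.families.Binomial(link=sm.families.links.Probit())
    res = sm.GLM(df["chose_comparison"], X, family=fam).fit()
    a, b = res.params
    pse = -a / b              # Phi(a + b * x) = 0.5  =>  x = -a / b
    jnd = norm.ppf(0.84) / b  # 84th percentile point, minus the PSE
    return pse, jnd

trials = pd.read_csv("trials.csv")
rng = np.random.default_rng(0)
boot = np.array([pse_jnd(trials.sample(frac=1, replace=True,
                                       random_state=rng))
                 for _ in range(1000)])
# (Resampling whole participants would respect the repeated measures.)
(pse_lo, jnd_lo), (pse_hi, jnd_hi) = np.percentile(boot, [2.5, 97.5],
                                                   axis=0)
\end{verbatim}
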
\begin{subfigs}{discrimination_accuracy}{Results of the vibrotactile texture roughness discrimination task. }[
Curves represent predictions from the \GLMM (probit link function), and points are estimated marginal means with non-parametric bootstrap \percent{95} \CIs.
][
\item Proportion of trials in which the comparison texture was perceived as rougher than the reference texture, as a function of the amplitude difference between the two textures and the visual rendering.
\item Estimated \PSE of each visual rendering, defined as the amplitude difference at which both reference and comparison textures are perceived to be equivalent.
\item Estimated \JND of each visual rendering, defined as the minimum perceptual amplitude difference, \ie the sensitivity to vibrotactile roughness differences.
]
\subfig[0.85]{results/trial_predictions}\\
\subfig[0.45]{results/trial_pses}
\subfig[0.45]{results/trial_jnds}
\end{subfigs}

\subsubsection{Response Time}
\label{response_time}

A \LMM \ANOVA with by-participant random slopes for \factor{Visual Rendering}, and a log transformation (as the \response{Response Time} measures were gamma distributed), indicated a statistically significant effect of \factor{Visual Rendering} on \response{Response Time} (\anova{2}{18}{6.2}, \p{0.009}, \figref{results/trial_response_times}).
%
Reported response times are \GM.
%
Participants took longer on average to respond with the \level{Virtual} rendering (\geomean{1.65}{\s} \ci{1.59}{1.72}) than with the \level{Real} rendering (\geomean{1.38}{\s} \ci{1.32}{1.43}), which is the only statistically significant difference (\ttest{19}{0.3}, \p{0.005}).
%
The \level{Mixed} rendering was in between (\geomean{1.56}{\s} \ci{1.49}{1.63}).

\subsubsection{Finger Position and Speed}
\label{finger_position_speed}

The frames analyzed were those in which the participants actively touched the comparison textures with a finger speed greater than \SI{1}{\mm\per\second}.
%
A \LMM \ANOVA with by-participant random slopes for \factor{Visual Rendering} indicated only one statistically significant effect, that of \factor{Visual Rendering}, on the total distance traveled by the finger in a trial (\anova{2}{18}{3.9}, \p{0.04}, \figref{results/trial_distances}).
%
On average, participants explored a larger distance with the \level{Real} rendering (\geomean{20.0}{\cm} \ci{19.4}{20.7}) than with the \level{Virtual} rendering (\geomean{16.5}{\cm} \ci{15.8}{17.1}), which is the only statistically significant difference (\ttest{19}{1.2}, \p{0.03}), with the \level{Mixed} rendering (\geomean{17.4}{\cm} \ci{16.8}{18.0}) in between.
%
Another \LMM \ANOVA with by-trial and by-participant random intercepts but no random slopes indicated only one statistically significant effect, that of \factor{Visual Rendering}, on \response{Finger Speed} (\anova{2}{2142}{2.0}, \pinf{0.001}, \figref{results/trial_speeds}).
%
On average, the textures were explored with the highest speed with the \level{Real} rendering (\geomean{5.12}{\cm\per\second} \ci{5.08}{5.17}), the lowest with the \level{Virtual} rendering (\geomean{4.40}{\cm\per\second} \ci{4.35}{4.45}), and the \level{Mixed} rendering (\geomean{4.67}{\cm\per\second} \ci{4.63}{4.71}) in between.
%
All pairwise differences were statistically significant: \level{Real} \vs \level{Virtual} (\ttest{19}{1.17}, \pinf{0.001}), \level{Real} \vs \level{Mixed} (\ttest{19}{1.10}, \pinf{0.001}), and \level{Mixed} \vs \level{Virtual} (\ttest{19}{1.07}, \p{0.02}).
%
This means that, within the same time window on the same surface, participants explored the comparison texture on average over a greater distance and at a higher speed when in the \RE without a visual representation of the hand (\level{Real} condition) than when in \VR (\level{Virtual} condition).

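
As an illustration, a minimal sketch of how these kinematic measures can be computed from the tracked fingertip positions (variable names are illustrative):
\begin{verbatim}
# Sketch: distance and speed over the frames in which the finger moves
# faster than 1 mm/s. positions are in metres, t in seconds.
import numpy as np

def finger_metrics(t, positions):
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    speed = steps / np.diff(t)          # metres per second
    active = speed > 1e-3               # threshold: 1 mm/s
    return steps[active].sum(), speed[active].mean()
\end{verbatim}
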
\begin{subfigs}{results_finger}{Results of the performance metrics for the rendering condition.}[
Boxplots and geometric means with bootstrap \percent{95} \CI, with Tukey's \HSD pairwise comparisons: * is \pinf{0.05}, ** is \pinf{0.01} and *** is \pinf{0.001}.
@@ -82,9 +64,9 @@ All pairwise differences were statistically significant: \level{Real} \vs \level
\item Distance traveled by the finger in a trial.
\item Speed of the finger in a trial.
]
\subfig[0.25]{results/trial_response_times}
\subfig[0.25]{results/trial_distances}
\subfig[0.25]{results/trial_speeds}
\end{subfigs}

\subsection{Questionnaires}
@@ -93,25 +75,22 @@ All pairwise differences were statistically significant: \level{Real} \vs \level
Friedman tests were employed to compare the ratings to the questions (\tabref{questions}), with post-hoc Wilcoxon signed-rank tests and Holm-Bonferroni adjustment, except for the questions regarding the virtual hand, which were directly compared with Wilcoxon signed-rank tests.
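
As an illustration, a minimal Python sketch of this non-parametric procedure with \texttt{scipy} (the ratings array and its file name are hypothetical):
\begin{verbatim}
# Sketch: Friedman test across the renderings, then pairwise Wilcoxon
# signed-rank tests with Holm-Bonferroni adjustment.
from itertools import combinations
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

ratings = np.loadtxt("ratings.csv", delimiter=",")  # participants x 3

stat, p = friedmanchisquare(*ratings.T)
if p < 0.05:
    pairs = list(combinations(range(ratings.shape[1]), 2))
    pvals = np.array([wilcoxon(ratings[:, i], ratings[:, j]).pvalue
                      for i, j in pairs])
    # Holm-Bonferroni: sort ascending, scale by m, m-1, ..., then keep
    # the adjusted values monotone with a running maximum.
    order = np.argsort(pvals)
    factors = len(pvals) - np.arange(len(pvals))
    adjusted = np.minimum(np.maximum.accumulate(pvals[order] * factors), 1)
    for (i, j), q in zip([pairs[k] for k in order], adjusted):
        print(f"renderings {i} vs {j}: adjusted p = {q:.3f}")
\end{verbatim}
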
%
\figref{results_questions} shows these ratings for the questions where statistically significant differences were found (results are shown as mean $\pm$ standard deviation):
\begin{itemize}
\item \response{Hand Ownership}: participants felt the virtual hand to be slightly their own with the \level{Mixed} rendering (\num{2.3 \pm 1.0}) but quite so with the \level{Virtual} rendering (\num{3.5 \pm 0.9}, \pinf{0.001}).
\item \response{Hand Latency}: the virtual hand was found to have a moderate latency with the \level{Mixed} rendering (\num{2.8 \pm 1.2}) but a low one with the \level{Virtual} rendering (\num{1.9 \pm 0.7}, \pinf{0.001}).
\item \response{Hand Reference}: participants focused slightly more on their own hand with the \level{Mixed} rendering (\num{3.2 \pm 2.0}) but slightly more on the virtual hand with the \level{Virtual} rendering (\num{5.3 \pm 2.1}, \pinf{0.001}).
\item \response{Hand Distraction}: the virtual hand was slightly distracting with the \level{Mixed} rendering (\num{2.1 \pm 1.1}) but not at all with the \level{Virtual} rendering (\num{1.2 \pm 0.4}, \p{0.004}).
\end{itemize}
%
Overall, participants reported a very high sense of control over the virtual hand (\response{Hand Agency}, \num{4.4 \pm 0.6}), felt that the virtual hand was quite similar to their own (\response{Hand Similarity}, \num{3.5 \pm 0.9}), and found the \VE very realistic (\response{Virtual Realism}, \num{4.2 \pm 0.7}) and very similar to the real one (\response{Virtual Similarity}, \num{4.5 \pm 0.7}).
%
The overall workload (mean NASA-TLX score) was low (\num{21 \pm 14}), with no statistically significant differences found between the visual renderings for any of the subscales or the overall score.
%
The textures were also overall found to be very much caused by the finger movements (\response{Texture Agency}, \num{4.5 \pm 1.0}), with a very low perceived latency (\response{Texture Latency}, \num{1.6 \pm 0.8}), and to be quite realistic (\response{Texture Realism}, \num{3.6 \pm 0.9}) and quite plausible (\response{Texture Plausibility}, \num{3.6 \pm 1.0}).
%
The vibrations were felt to be slightly weak overall (\response{Vibration Strength}, \num{4.2 \pm 1.1}), and the vibrotactile device was perceived as neither distracting (\response{Device Distraction}, \num{1.2 \pm 0.4}) nor uncomfortable (\response{Device Discomfort}, \num{1.3 \pm 0.6}).
%
Participants were mixed between feeling the vibrations on the surface or on the top of their finger (\response{Vibration Location}, \num{3.9 \pm 1.7}): the distribution of scores was split between the two poles of the scale with the \level{Real} and \level{Mixed} renderings (\percent{42.5} more on the surface or on the finger top, \percent{15} neutral), whereas there was a trend towards the top of the finger in \VR (\percent{65} \vs \percent{25} more on the surface, \percent{10} neutral), although this difference was not statistically significant either.

@@ -132,8 +111,8 @@ Finally, the overall workload (mean NASA-TLX score) was low (\num{21 +- 14}), wi
\item Hand reference.
\item Hand distraction.
]
\subfig[0.18]{results/questions_hand_ownership}
\subfig[0.18]{results/questions_hand_latency}
\subfig[0.18]{results/questions_hand_reference}
\subfig[0.18]{results/questions_hand_distraction}
\end{subfigs}
@@ -1,84 +1,33 @@
\section{Discussion}
\label{discussion}

The results showed a difference in vibrotactile roughness perception between the three visual rendering conditions.
%
Given the estimated \PSEs, the textures were on average perceived as \enquote{rougher} in the \level{Real} rendering than in the \level{Virtual} (\percent{-2.8}) and \level{Mixed} (\percent{-6.0}) renderings (\figref{results/trial_pses}).
%
A \PSE difference in the same range was found for perceived stiffness, with the \VR rendering perceived as \enquote{stiffer} and the \AR one as \enquote{softer} \cite{gaffary2017ar}.
%
Surprisingly, the \PSE of the \level{Real} rendering was shifted to the right (\ie \enquote{rougher}, \percent{7.9}) compared to the reference texture, whereas the \PSEs of the \level{Virtual} (\percent{5.1}) and \level{Mixed} (\percent{1.9}) renderings were perceived as \enquote{smoother} and closer to the reference texture (\figref{results/trial_predictions}).
%
The sensitivity of participants to roughness differences also varied, with the \level{Real} rendering having the best \JND (\percent{26}), followed by the \level{Virtual} (\percent{30}) and \level{Mixed} (\percent{33}) renderings (\figref{results/trial_jnds}).
%
These \JND values are in line with, though at the upper end of, the range of previous studies \cite{choi2013vibrotactile}, which may be due to the location of the actuator on the top of the finger middle phalanx, which is less sensitive to vibration than the fingertip.
%
Thus, compared to no visual rendering (\level{Real}), the addition of a visual rendering of the hand or environment reduced the roughness sensitivity (\JND) and the roughness perception (\PSE), as if the virtual vibrotactile textures felt \enquote{smoother}.

Differences in user behavior were also observed between the visual renderings (but not between the haptic textures).
%
On average, participants responded faster (\percent{-16}), explored textures over a greater distance (\percent{+21}) and at a higher speed (\percent{+16}) without visual augmentation (\level{Real} rendering) than in \VR (\level{Virtual} rendering) (\figref{results_finger}).
%
The \level{Mixed} rendering was always in between, with no significant difference from the other two.
%
This suggests that touching a virtual vibrotactile texture on a tangible surface with a virtual hand in \VR is different from touching it with one's own hand: users were more cautious or less confident in their exploration in \VR.
%
This does not seem to be due to the realism of the virtual hand or the environment, nor to the control of the virtual hand, all of which were rated high to very high by the participants (\secref{questions}) in both the \level{Mixed} and \level{Virtual} renderings.
%
Very interestingly, the evaluation of the vibrotactile device and the textures was also the same across the visual renderings, with a very high sense of control, a good realism and a very low perceived latency of the textures (\secref{questions}).
%
Conversely, the perceived latency of the virtual hand (\response{Hand Latency} question) seemed to be related to the perceived roughness of the textures (the \PSEs).
%
The \level{Mixed} rendering had the lowest \PSE and the highest perceived latency, the \level{Virtual} rendering had a higher \PSE and a lower perceived latency, and the \level{Real} rendering had the highest \PSE and no virtual hand latency (as the virtual hand was not displayed).

Our visuo-haptic augmentation system, described in \chapref{vhar_system}, aimed to provide a coherent visuo-haptic augmentation integrated with the \RE.
%
Yet, it involves different sensory interaction loops between the user's movements and the visuo-haptic feedback (\figref{method/diagram} and \figref[introduction]{interaction_loop}), which may not feel synchronized with each other or with proprioception.
%
Thereby, we hypothesize that the differences in the perception of vibrotactile roughness are due less to the visual rendering of the hand or the environment and their associated differences in exploration behavior than to the difference in the \emph{perceived} latency between one's own hand (visual and proprioceptive) and the virtual hand (visual and haptic).
%
The perceived delay was the most important in \AR, where the virtual hand visually lags significantly behind the real one, but less so in \VR, where only the proprioceptive sense can help detect the lag.
%
This delay was not perceived when touching the virtual haptic textures without visual augmentation, because only the finger velocity was used to render them, and, despite the varied finger movements and velocities while exploring the textures, the participants did not perceive any latency in the vibrotactile rendering (\secref{questions}).
%
\textcite{diluca2011effects} similarly demonstrated, in a \VST-\AR setup, how visual latency relative to proprioception increased the perceived stiffness of a virtual piston, while haptic latency decreased it (\secref[related_work]{ar_vr_haptic}).
%
Another complementary explanation could be a pseudo-haptic effect (\secref[related_work]{visual_haptic_influence}) of the displacement of the virtual hand, as already observed with this vibrotactile texture rendering, but seen on a screen in a non-immersive context \cite{ujitoko2019modulating}.
%
Such hypotheses could be tested by manipulating the latency and tracking accuracy of the virtual hand or the vibrotactile feedback, to observe their effects on the roughness perception of the virtual textures.

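
To make the velocity-driven rendering mentioned above concrete, a minimal sketch follows (with illustrative parameter values, not those of our device): the finger speed over a virtual grating of a given spatial period sets the frequency of the square wave, so the haptic signal depends only on the finger motion, not on the visual rendering of the hand.
\begin{verbatim}
# Sketch: sliding at speed v (m/s) over a grating of spatial period
# lambda (m) yields a square wave of frequency v / lambda.
def render_step(phase, speed, dt, spatial_period=0.002, amplitude=0.8):
    phase += speed * dt / spatial_period           # cycles traversed
    level = 1.0 if (phase % 1.0) < 0.5 else -1.0   # square wave
    return phase, amplitude * level
\end{verbatim}
Accumulating the phase across frames keeps the wave continuous even when the finger speed varies.
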
@@ -1,7 +1,14 @@
\section{Conclusion}
\label{conclusion}

In this chapter, we studied how the perception of wearable haptic augmented textures is affected by the visual virtuality of the hand and the environment, \ie how such textures are perceived when touched with a virtual hand instead of one's own hand, and when the hand and its environment are visually rendered in \AR or \VR.
%
Using the visuo-haptic augmentation system presented in \chapref{vhar_system}, we investigated virtual vibrotactile textures that modify the roughness perception of real, tangible surfaces, rendered with a wearable vibrotactile device worn on the finger.
%
@@ -10,7 +17,7 @@ We investigated virtual textures that modify the roughness perception of real, t
To this end, we first designed and implemented a visuo-haptic texture rendering system that allows free exploration of the augmented surface using a visual AR/VR headset.
%
We then conducted a psychophysical user study with 20 participants to assess the roughness perception of these virtual texture augmentations directly touched with the finger (1) without visual augmentation, (2) with a realistic virtual hand rendering in AR, and (3) with the same virtual hand in VR.
%
@@ -21,3 +28,18 @@ The textures were on average perceived as \enquote{rougher} and with a higher se
We hypothesized that this difference in perception was due to the \emph{perceived latency} between the finger movements and the different visual, haptic and proprioceptive feedbacks, which were the same in all visual renderings but were more noticeable in AR and VR than without visual augmentation.
%
With a better understanding of how visual factors influence the perception of haptically augmented tangible objects, the many wearable haptic systems that already exist but have not yet been fully explored with AR can be better applied, and new visuo-haptic renderings adapted to AR can be designed.

We can outline recommendations for future \AR/\VR studies or applications using wearable haptics.
%
Attention should be paid to the respective latencies of the visual and haptic sensory feedbacks inherent in such systems and, more importantly, to \emph{the perception of their possible asynchrony}.
%
Latencies should be measured \cite{friston2014measuring}, minimized to an acceptable level for users, and kept synchronized with each other \cite{diluca2019perceptual}.
%
It seems that the visual aspect of the hand or the environment has in itself little effect on the perception of haptic feedback, but the degree of visual reality-virtuality can affect the sensation of asynchrony between latencies, even when they remain identical.
%
Therefore, when designing wearable haptics or integrating them into \AR/\VR, it seems important to test their perception in real, augmented and virtual environments.
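
As an illustration of the latency measurement recommended above, one simple approach is to log two signals sampled at a common rate (\eg the tracked finger speed and the rendered output level) and locate the peak of their cross-correlation; this is only a sketch, not the method of the cited works.
\begin{verbatim}
# Sketch: estimate the delay (in seconds) by which signal b trails
# signal a, both sampled every dt seconds.
import numpy as np

def estimate_latency(a, b, dt):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    corr = np.correlate(b, a, mode="full")
    lag = int(corr.argmax()) - (len(a) - 1)
    return lag * dt
\end{verbatim}
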
@@ -24,11 +24,45 @@ The results showed that providing vibrotactile feedback \textbf{improved the per
The wearable visuo-haptic augmentations of perception and manipulation we presented and the user studies we conducted in this thesis have, of course, some limitations.
In this section, we present future work for each chapter that could address them.

\subsection*{Augmenting the Visuo-haptic Texture Perception of Tangible Surfaces}

\paragraph{Other Augmented Object Properties}

We focused on the visuo-haptic augmentation of roughness using vibrotactile feedback, because it is one of the most salient properties of surfaces (\secref[related_work]{object_properties}), one of the most studied in haptic perception (\secref[related_work]{texture_rendering}), and equally perceived by sight and touch (\secref[related_work]{visual_haptic_influence}).
However, many other wearable augmentations of object properties could be considered, such as hardness, friction, temperature, or local deformations.
Such an integration of haptic augmentations of a tangible surface has almost been achieved with the hand-held devices of \citeauthor{culbertson2017ungrounded} \cite{culbertson2017importance,culbertson2017ungrounded}, but it remains to be explored with wearable haptic devices.
In addition, combinations with pseudo-haptic rendering techniques \cite{ujitoko2021survey} should be systematically investigated to expand the range of possible wearable haptic augmentations.

\paragraph{Fully Integrated Tracking}

In our system, we registered the real and virtual environments (\secref[related_work]{ar_definition}) using fiducial markers and a webcam external to the \AR headset.
This only allowed us to track the index finger and the surface to be augmented with the haptic texture, but the tracking was reliable and accurate enough for our needs.
In fact, preliminary tests we conducted showed that the built-in tracking capabilities of the Microsoft HoloLens~2 were not able to track hands wearing a voice-coil actuator.
A more robust hand tracking system would support wearing haptic devices on the hand, as well as holding real objects.
A complementary solution would be to embed tracking sensors in the wearable haptic devices themselves, such as an inertial measurement unit (IMU) or cameras, as done by \textcite{preechayasomboon2021haplets}.
This would allow a completely portable and wearable visuo-haptic system to be used in more ecological applications.
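
As an illustration of the fiducial-marker registration mentioned above, a minimal sketch with OpenCV's ArUco module (assuming the OpenCV 4.7+ API); the marker size and calibration values are placeholders, and our actual pipeline differs.
\begin{verbatim}
# Sketch: detect a fiducial marker in a webcam frame and estimate its
# pose in the camera frame. Calibration values are placeholders.
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary,
                                   cv2.aruco.DetectorParameters())

frame = cv2.imread("frame.png")
corners, ids, _ = detector.detectMarkers(frame)
if ids is not None:
    s = 0.03 / 2  # half-side of a hypothetical 30 mm marker
    obj = np.array([[-s, s, 0], [s, s, 0], [s, -s, 0], [-s, -s, 0]],
                   dtype=np.float32)
    K = np.eye(3)        # placeholder: calibrated intrinsics
    dist = np.zeros(5)   # placeholder: calibrated distortion
    ok, rvec, tvec = cv2.solvePnP(
        obj, corners[0].reshape(4, 2).astype(np.float32), K, dist)
    # rvec/tvec locate the marker in the camera frame; chaining with
    # the headset pose registers real and virtual content.
\end{verbatim}
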

\subsection*{Perception of Haptic Texture Augmentation in Augmented and Virtual Reality}

\paragraph{Visual Representation of the Virtual Texture}

The main limitation of our study is the absence of a visual representation of the virtual texture.
This is indeed a source of information as important as the haptic sensations for the perception of both real textures \cite{baumgartner2013visual,bergmanntiest2007haptic,vardar2019fingertip} and virtual textures \cite{degraen2019enhancing,gunther2022smooth}, and their interaction in the overall perception is complex.
Specifically, it remains to be investigated how to visually represent vibrotactile textures in an immersive \AR or \VR context, as the visuo-haptic coupling of such grating textures is not trivial \cite{unger2011roughness} even with real textures \cite{klatzky2003feeling}.

\paragraph{Broader Visuo-Haptic Conditions}

Our study was conducted with an \OST-\AR headset, but the results may differ with a \VST-\AR headset.
In addition, we focused on the perception of roughness sensations using wearable haptics in \AR \vs \VR with a square wave vibrotactile signal, but different haptic texture rendering methods should be considered.
More generally, many other haptic feedbacks could be investigated in \AR \vs \VR using the same system and methodology, such as stiffness, friction, local deformations, or temperature.

\subsection*{Perception of Visual and Haptic Texture Augmentations in Augmented Reality}

\subsection*{Visual Rendering of the Hand for Manipulating Virtual Objects in AR}

\paragraph{Other AR Displays}

The visual hand renderings we evaluated were displayed on the Microsoft HoloLens~2, which is a common \OST-\AR headset \cite{hertel2021taxonomy}.
We purposely chose this type of display, as it is with \OST-\AR that the lack of mutual occlusion between the hand and the \VO is the most challenging to solve \cite{macedo2023occlusion}.
We thus hypothesized that a visual hand rendering would be more beneficial to users with this type of display.
However, the user's visual perception and experience are very different with other types of displays, such as \VST-\AR, where the \RE view is seen through a screen (\secref[related_work]{ar_displays}).
@@ -41,7 +75,7 @@ While these tasks are fundamental building blocks for more complex manipulation
Similarly, a broader experimental study might shed light on the role of gender and age, as our subject pool was not sufficiently diverse in this regard.
Finally, all visual hand renderings received both low and high rankings from different participants, suggesting that users should be able to choose and personalize some aspects of the visual hand rendering according to their preferences or needs, and this should also be evaluated.

\subsection*{Visuo-Haptic Rendering of Hand Manipulation With Virtual Objects in AR}

\paragraph{Richer Haptic Feedback}
