Remove "see" before section or figure reference
@@ -33,4 +33,4 @@ Being able to coherently substitute the visuo-haptic texture of an everyday surf
 In this paper, we investigate how users perceive a tangible surface touched with the index finger when it is augmented with a visuo-haptic roughness texture using immersive optical see-through AR (OST-AR) and wearable vibrotactile stimuli provided on the index finger.
 %
-In a user study, twenty participants freely explored and evaluated the coherence, realism and roughness of the combination of nine representative pairs of visuo-haptic texture augmentations (see \figref{setup}, left) from the HaTT database~\cite{culbertson2014one}.
+In a user study, twenty participants freely explored and evaluated the coherence, realism and roughness of the combination of nine representative pairs of visuo-haptic texture augmentations (\figref{setup}, left) from the HaTT database~\cite{culbertson2014one}.
@@ -34,7 +34,7 @@ The 100 visuo-haptic texture pairs of the HaTT database~\cite{culbertson2014one}
 %
 These texture models were chosen as they are visuo-haptic representations of a wide range of real textures that are publicly available online.
 %
-Nine texture pairs were selected (see \figref{setup}, left) to cover various perceived roughness, from rough to smooth, as listed: Metal Mesh, Sandpaper~100, Brick~2, Cork, Sandpaper~320, Velcro Hooks, Plastic Mesh~1, Terra Cotta, Coffee Filter.
+Nine texture pairs were selected (\figref{setup}, left) to cover a range of perceived roughness, from rough to smooth, as listed: Metal Mesh, Sandpaper~100, Brick~2, Cork, Sandpaper~320, Velcro Hooks, Plastic Mesh~1, Terra Cotta, Coffee Filter.
 %
 All these visual and haptic textures are isotropic: their rendering (appearance or roughness) is the same whatever the direction of movement on the surface, \ie there are no local deformations (holes, bumps, or breaks).
@@ -52,7 +52,7 @@ Similarly, a 2-cm-square fiducial marker was glued on top of the vibrotactile ac
 %
 Positioned \qty{20}{\cm} above the surfaces, a webcam (StreamCam, Logitech) filmed the markers to track finger movements relative to the surfaces.
 %
-The visual textures were displayed on the tangible surfaces using the HoloLens~2 OST-AR headset (see \figref{setup}, middle and right) within a \qtyproduct{43 x 29}{\degree} field of view at \qty{60}{\Hz}; a set of empirical tests enabled us to choose the best rendering characteristics in terms of transparency and brightness for the visual textures, that were used throughout the user study.
+The visual textures were displayed on the tangible surfaces using the HoloLens~2 OST-AR headset (\figref{setup}, middle and right) within a \qtyproduct{43 x 29}{\degree} field of view at \qty{60}{\Hz}; a set of empirical tests enabled us to choose the best rendering characteristics in terms of transparency and brightness for the visual textures, which were used throughout the user study.
 %
 When a haptic texture was touched, a \qty{48}{\kHz} audio signal was generated using the corresponding HaTT haptic texture model and the measured tangential speed of the finger, using the rendering procedure described by Culbertson \etal~\cite{culbertson2014modeling}.
 %
@@ -86,11 +86,11 @@ These results indicate, with \figref{results_matching_ranking} (right), that the
 \label{results_similarity}

 \begin{subfigs}{results_similarity}{%
-(Left) Correspondence analysis of the matching task confusion matrix (see \figref{results_matching_ranking}, left).
+(Left) Correspondence analysis of the matching task confusion matrix (\figref{results_matching_ranking}, left).
 The visual textures are represented as blue squares, the haptic textures as red circles. %
 The closer the textures are, the more similar they were judged. %
 The first dimension (horizontal axis) explains 60~\% of the variance, the second dimension (vertical axis) explains 30~\% of the variance.
-(Right) Dendrograms of the hierarchical clusterings of the haptic textures (left) and visual textures (right) of the matching task confusion matrix (see \figref{results_matching_ranking}, left), using Euclidian distance and Ward's method. %
+(Right) Dendrograms of the hierarchical clusterings of the haptic textures (left) and visual textures (right) of the matching task confusion matrix (\figref{results_matching_ranking}, left), using Euclidean distance and Ward's method. %
 The height of the dendrograms represents the distance between the clusters. %
 }
 \begin{minipage}[c]{0.50\linewidth}%
@@ -105,15 +105,15 @@ These results indicate, with \figref{results_matching_ranking} (right), that the
 \end{minipage}%
 \end{subfigs}

-The high level of agreement between participants on the three haptic, visual and visuo-haptic rankings (see \secref{results_ranking}), as well as the similarity of the within-participant rankings, suggests that participants perceived the roughness of the textures similarly, but differed in their strategies for matching the haptic and visual textures in the matching task (see \secref{results_matching}).
+The high level of agreement between participants on the three haptic, visual and visuo-haptic rankings (\secref{results_ranking}), as well as the similarity of the within-participant rankings, suggests that participants perceived the roughness of the textures similarly, but differed in their strategies for matching the haptic and visual textures in the matching task (\secref{results_matching}).
 %
-To further investigate the perceived similarity of the haptic and visual textures and to identify groups of textures that were perceived as similar on the matching task, a correspondence analysis and a hierarchical clustering were performed on the matching task confusion matrix (see \figref{results_matching_ranking}, left).
+To further investigate the perceived similarity of the haptic and visual textures and to identify groups of textures that were perceived as similar in the matching task, a correspondence analysis and a hierarchical clustering were performed on the matching task confusion matrix (\figref{results_matching_ranking}, left).

 The correspondence analysis captured 60~\% and 29~\% of the variance in the first and second dimensions, respectively, with each remaining dimension accounting for less than 5~\%.
 %
 \figref{results_similarity} (left) shows the first two dimensions with the 18 haptic and visual textures.
 %
-The first dimension was similar to the rankings (see \figref{results_matching_ranking}, right), distributing the textures according to their perceived roughness.
+The first dimension was similar to the rankings (\figref{results_matching_ranking}, right), distributing the textures according to their perceived roughness.
 %
 It seems that the second dimension opposed textures that were perceived as hard to those perceived as softer, as also reported by participants.
 %
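The correspondence analysis of the confusion matrix described above can be sketched with a classical SVD-based implementation. This is a generic illustration, not the paper's actual analysis pipeline; the confusion counts below are random stand-ins for the real matching data.

```python
import numpy as np

# Hypothetical 9x9 confusion matrix N (rows: visual textures, columns:
# haptic textures); any non-negative count matrix works here.
rng = np.random.default_rng(0)
N = rng.integers(0, 20, size=(9, 9)).astype(float)

def correspondence_analysis(N):
    """Classical CA: SVD of the matrix of standardized residuals."""
    P = N / N.sum()                       # correspondence matrix
    r = P.sum(axis=1)                     # row masses
    c = P.sum(axis=0)                     # column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    inertia = s ** 2
    explained = inertia / inertia.sum()   # share of variance per dimension
    row_coords = (U * s) / np.sqrt(r)[:, None]     # principal row coordinates
    col_coords = (Vt.T * s) / np.sqrt(c)[:, None]  # principal column coordinates
    return explained, row_coords, col_coords

explained, rows, cols = correspondence_analysis(N)
```

Plotting the first two columns of `rows` and `cols` on one biplot gives the kind of map shown in the figure, where `explained[0]` and `explained[1]` correspond to the reported 60~\% and 29~\% of variance.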
@@ -121,13 +121,13 @@ Stiffness is indeed an important perceptual dimension of a material~\cite{okamot

 \figref{results_similarity} (right) shows the dendrograms of the two hierarchical clusterings of the haptic and visual textures, constructed using the Euclidean distance and Ward's method on squared distances.
 %
-The four identified haptic texture clusters were: "Roughest" \{Metal Mesh, Sandpaper~100, Brick~2, Cork\}; "Rougher" \{Sandpaper~320, Velcro Hooks\}; "Smoother" \{Plastic Mesh~1, Terra Cotta\}; "Smoothest" \{Coffee Filter\} (see \figref{results_similarity}, top-right).
+The four identified haptic texture clusters were: "Roughest" \{Metal Mesh, Sandpaper~100, Brick~2, Cork\}; "Rougher" \{Sandpaper~320, Velcro Hooks\}; "Smoother" \{Plastic Mesh~1, Terra Cotta\}; "Smoothest" \{Coffee Filter\} (\figref{results_similarity}, top-right).
 %
-Similar to the haptic ranks (see \figref{results_matching_ranking}, right), the clusters could have been named according to their perceived roughness.
+Similar to the haptic ranks (\figref{results_matching_ranking}, right), the clusters could have been named according to their perceived roughness.
 %
 It also shows that the participants compared and ranked the haptic textures during the matching task to select the one that best matched the given visual texture.
 %
-The five identified visual texture clusters were: "Roughest" \{Metal Mesh\}; "Rougher" \{Sandpaper~100, Brick~2, Velcro Hooks\}; "Medium" \{Cork, Plastic Mesh~1\}; "Smoother" \{Sandpaper~320, Terra Cotta\}; "Smoothest" \{Coffee Filter\} (see \figref{results_similarity}, bottom-right).
+The five identified visual texture clusters were: "Roughest" \{Metal Mesh\}; "Rougher" \{Sandpaper~100, Brick~2, Velcro Hooks\}; "Medium" \{Cork, Plastic Mesh~1\}; "Smoother" \{Sandpaper~320, Terra Cotta\}; "Smoothest" \{Coffee Filter\} (\figref{results_similarity}, bottom-right).
 %
 They are also easily identifiable in the visual ranking results, which made it possible to name them.
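The hierarchical clustering described above (Ward's method on Euclidean distances, cut into the reported number of clusters) can be sketched with SciPy. The texture names match the paper, but the row profiles here are made-up stand-ins for the real confusion-matrix rows.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Texture names as in the paper; the data below are illustrative only.
names = ["Metal Mesh", "Sandpaper 100", "Brick 2", "Cork", "Sandpaper 320",
         "Velcro Hooks", "Plastic Mesh 1", "Terra Cotta", "Coffee Filter"]
rng = np.random.default_rng(1)
profiles = rng.random((9, 9))  # stand-in for confusion-matrix row profiles

# Ward's method on Euclidean distances, as in the paper.
Z = linkage(profiles, method="ward", metric="euclidean")

# Cut the dendrogram into (at most) four clusters, the number found
# for the haptic textures.
labels = fcluster(Z, t=4, criterion="maxclust")
clusters = {k: [n for n, l in zip(names, labels) if l == k]
            for k in set(labels.tolist())}
```

The dendrogram heights in `Z[:, 2]` correspond to the inter-cluster distances shown in the figure.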
@@ -11,7 +11,7 @@ The visual textures were displayed statically on the tangible surface, while the
 %
 In addition, the interaction with the textures was designed to be as natural as possible, without imposing a specific speed of finger movement, as in similar studies~\cite{asano2015vibrotactile,friesen2024perceived}.

-In the matching task, participants were not able to effectively match the original visual and haptic texture pairs (see \figref{results_matching_ranking}, left), except for the Coffee Filter texture, which was the smoothest both visually and haptically.
+In the matching task, participants were not able to effectively match the original visual and haptic texture pairs (\figref{results_matching_ranking}, left), except for the Coffee Filter texture, which was the smoothest both visually and haptically.
 %
 However, almost all visual textures, except Sandpaper~100, were matched with at least one haptic texture at a level above chance.
 %
@@ -23,13 +23,13 @@ Indeed, the majority of users explained that, based on the roughness, granularit
 %
 Several strategies were used, as some participants reported using vibration frequency and/or amplitude to match a haptic texture.
 %
-It should be noted that the task was rather difficult (see \figref{results_questions}), as participants had no prior knowledge of the textures, there were no additional visual cues such as the shape of an object, and the term \enquote{roughness} had not been used by the experimenter prior to the ranking task.
+It should be noted that the task was rather difficult (\figref{results_questions}), as participants had no prior knowledge of the textures, there were no additional visual cues such as the shape of an object, and the term \enquote{roughness} had not been used by the experimenter prior to the ranking task.

-The correspondence analysis (see \figref{results_similarity}, left) highlighted that participants did indeed match visual and haptic textures primarily on the basis of their perceived roughness (60\% of variance), which is in line with previous perception studies on real~\cite{baumgartner2013visual} and virtual~\cite{culbertson2014modeling} textures.
+The correspondence analysis (\figref{results_similarity}, left) highlighted that participants did indeed match visual and haptic textures primarily on the basis of their perceived roughness (60\% of the variance), which is in line with previous perception studies on real~\cite{baumgartner2013visual} and virtual~\cite{culbertson2014modeling} textures.
 %
-The rankings (see \figref{results_matching_ranking}, right) confirmed that the participants all perceived the roughness of haptic textures very similarly, but that there was less consensus for visual textures, which is also in line with roughness rankings for real haptic and visual textures~\cite{bergmanntiest2007haptic}.
+The rankings (\figref{results_matching_ranking}, right) confirmed that the participants all perceived the roughness of haptic textures very similarly, but that there was less consensus for visual textures, which is also in line with roughness rankings for real haptic and visual textures~\cite{bergmanntiest2007haptic}.
 %
-These results made it possible to identify and name groups of textures in the form of clusters, and to construct confusion matrices between these clusters and between visual texture ranks with haptic clusters, showing that participants consistently identified and matched haptic and visual textures (see \figref{results_clusters}).
+These results made it possible to identify and name groups of textures in the form of clusters, and to construct confusion matrices between these clusters and between visual texture ranks and haptic clusters, showing that participants consistently identified and matched haptic and visual textures (\figref{results_clusters}).
 %
 Interestingly, 30\% of the matching variance was captured by a second dimension, opposing the roughest textures (Metal Mesh, Sandpaper~100), and to a lesser extent the smoothest (Coffee Filter, Sandpaper~320), to all other textures.
 %
@@ -37,7 +37,7 @@ One hypothesis is that this dimension could be the perceived stiffness of the te
 %
 Stiffness is, along with roughness, one of the main characteristics perceived by vision and touch of real materials~\cite{baumgartner2013visual,vardar2019fingertip}, but also of virtual haptic textures~\cite{culbertson2014modeling,degraen2019enhancing}.
 %
-The last visuo-haptic roughness ranking (see \figref{results_matching_ranking}, right) showed that both haptic and visual sensory information were well integrated as the resulting roughness ranking was being in between the two individual haptic and visual rankings.
+The last visuo-haptic roughness ranking (\figref{results_matching_ranking}, right) showed that both haptic and visual sensory information were well integrated, as the resulting roughness ranking lay in between the two individual haptic and visual rankings.
 %
 Several strategies were reported: some participants first classified visually and then corrected with haptics, others classified haptically and then integrated visuals.
 %
@@ -51,7 +51,7 @@ A few participants even reported that they clearly sensed patterns on haptic tex
 %
 However, the visual and haptic textures used were isotropic and homogeneous models of real texture captures, \ie their rendered roughness was constant and did not depend on the direction of movement but only on the speed of the finger.
 %
-Overall, the haptic device was judged to be comfortable, and the visual and haptic textures were judged to be fairly realistic and to work well together (see \figref{results_questions}).
+Overall, the haptic device was judged to be comfortable, and the visual and haptic textures were judged to be fairly realistic and to work well together (\figref{results_questions}).

 These results of course have some limitations, as they addressed a small set of visuo-haptic textures augmenting the perception of smooth white tangible surfaces.
 %
@@ -15,7 +15,7 @@
 %
 If so, the velocity of the finger marker ${}^c\dot{\mathbf{X}}_f$ is estimated using a discrete derivative of the position and adaptive low-pass filtering, then transformed into the texture frame $\mathcal{F}_t$.
 %
-The vibrotactile signal $s_k$ is generated by modulating the finger velocity ${}^t\hat{\dot{X}}_f$ in the texture direction with the texture period $\lambda$ (see \eqref{signal}).
+The vibrotactile signal $s_k$ is generated by modulating the finger velocity ${}^t\hat{\dot{X}}_f$ in the texture direction with the texture period $\lambda$ (\eqref{signal}).
 %
 The signal is sampled at \qty{48}{\kHz} and sent to the voice-coil actuator via an audio amplifier.
 %
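The velocity-estimation step described above (discrete derivative of the marker position followed by low-pass filtering) can be sketched as follows. This is a minimal stand-in: it uses a fixed-cutoff first-order filter rather than the adaptive filter of the paper, and the cutoff value is an assumption for illustration.

```python
import numpy as np

def estimate_velocity(positions, timestamps, cutoff_hz=5.0):
    """Finite-difference velocity with a first-order low-pass filter.

    `positions` is an (N, 2) array of marker positions (m) in the texture
    frame, `timestamps` the corresponding times (s). `cutoff_hz` is an
    illustrative value, not the one used in the paper.
    """
    v_filt = np.zeros_like(positions)
    for i in range(1, len(positions)):
        dt = timestamps[i] - timestamps[i - 1]
        v_raw = (positions[i] - positions[i - 1]) / dt        # discrete derivative
        alpha = dt / (dt + 1.0 / (2.0 * np.pi * cutoff_hz))   # RC smoothing factor
        v_filt[i] = v_filt[i - 1] + alpha * (v_raw - v_filt[i - 1])
    return v_filt
```

With \qty{60}{\hertz} camera frames, a constant finger speed converges to its true value after a few filter time constants.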
@@ -56,9 +56,9 @@ The system is composed of three main components: the pose estimation of the trac
 \subfig[0.992]{method/apparatus}
 \end{subfigs}

-A fiducial marker (AprilTag) is glued to the top of the actuator (see \figref{method/device}) to track the finger pose with a camera (StreamCam, Logitech) which is placed above the experimental setup and capturing \qtyproduct{1280 x 720}{px} images at \qty{60}{\hertz} (see \figref{method/apparatus}).
+A fiducial marker (AprilTag) is glued to the top of the actuator (\figref{method/device}) to track the finger pose with a camera (StreamCam, Logitech), which is placed above the experimental setup and captures \qtyproduct{1280 x 720}{px} images at \qty{60}{\hertz} (\figref{method/apparatus}).
 %
-Other markers are placed on the tangible surfaces to augment to estimate the relative position of the finger with respect to the surfaces (see \figref{setup}).
+Other markers are placed on the tangible surfaces to estimate the relative position of the finger with respect to the surfaces (\figref{setup}).
 %
 Contrary to similar work, which either constrained the hand to a constant speed to keep the signal frequency constant~\cite{asano2015vibrotactile,friesen2024perceived} or used mechanical sensors attached to the hand~\cite{friesen2024perceived,strohmeier2017generating}, vision-based tracking both frees the hand movements and allows any tangible surface to be augmented.
 %
@@ -75,13 +75,13 @@ The velocity of the marker is estimated using the discrete derivative of the pos
 To be able to compare virtual and augmented realities, we then create a virtual environment that closely replicates the real one.
 %Before a user interacts with the system, it is necessary to design a virtual environment that will be registered with the real environment during the experiment.
 %
-Each real element tracked by a marker is modelled virtually, \ie the hand and the augmented tangible surface (see \figref{renderings}).
+Each real element tracked by a marker is modelled virtually, \ie the hand and the augmented tangible surface (\figref{renderings}).
 %
 In addition, the pose and size of the virtual textures are defined on the virtual replicas.
 %
 During the experiment, the system uses marker pose estimates to align the virtual models with their real-world counterparts. %, according to the condition being tested.
 %
-This allows to detect if a finger touches a virtual texture using a collision detection algorithm (Nvidia PhysX), and to show the virtual elements and textures in real-time, aligned with the real environment (see \figref{renderings}), using the considered AR or VR headset.
+This allows the system to detect whether a finger touches a virtual texture using a collision detection algorithm (Nvidia PhysX), and to show the virtual elements and textures in real time, aligned with the real environment (\figref{renderings}), using the considered AR or VR headset.

 In our implementation, the virtual hand and environment are designed with Unity and the Mixed Reality Toolkit (MRTK).
 %
@@ -89,9 +89,9 @@ The visual rendering is achieved using the Microsoft HoloLens~2, an OST-AR heads
 %
 It was chosen over VST-AR because OST-AR only adds virtual content to the real environment, while VST-AR streams a real-time video capture of the real environment~\cite{macedo2023occlusion}.
 %
-Indeed, one of our objectives (see \secref{experiment}) is to directly compare a virtual environment that replicates a real one. %, rather than a video feed that introduces many supplementary visual limitations.
+Indeed, one of our objectives (\secref{experiment}) is to directly compare a virtual environment with the real one it replicates. %, rather than a video feed that introduces many supplementary visual limitations.
 %
-To simulate a VR headset, a cardboard mask (with holes for sensors) is attached to the headset to block the view of the real environment (see \figref{method/headset}).
+To simulate a VR headset, a cardboard mask (with holes for sensors) is attached to the headset to block the view of the real environment (\figref{method/headset}).


 \subsection{Vibrotactile Signal Generation and Rendering}
@@ -99,7 +99,7 @@ To simulate a VR headset, a cardboard mask (with holes for sensors) is attached

 A voice-coil actuator (HapCoil-One, Actronika) is used to display the vibrotactile signal, as it allows the frequency and amplitude of the signal to be controlled independently over time, covers a wide frequency range (\qtyrange{10}{1000}{\Hz}), and outputs the signal accurately with relatively low acceleration distortion\footnote{HapCoil-One specific characteristics are described in its data sheet: \url{https://web.archive.org/web/20240228161416/https://tactilelabs.com/wp-content/uploads/2023/11/HapCoil_One_datasheet.pdf}}.
 %
-The voice-coil actuator is encased in a 3D printed plastic shell and firmly attached to the middle phalanx of the user's index finger with a Velcro strap, to enable the fingertip to directly touch the environment (see \figref{method/device}).
+The voice-coil actuator is encased in a 3D-printed plastic shell and firmly attached to the middle phalanx of the user's index finger with a Velcro strap, to enable the fingertip to directly touch the environment (\figref{method/device}).
 %
 The actuator is driven by a Class D audio amplifier (XY-502 / TPA3116D2, Texas Instruments). %, which has proven to be an effective type of amplifier for driving moving-coil actuators~\cite{mcmahan2014dynamic}.
 %
@@ -131,7 +131,7 @@ Note that the finger position and velocity are transformed from the camera frame
 %
 However, when a new finger position is estimated at time $t_j$, the phase $\phi_j$ needs to be adjusted along with the frequency to ensure continuity of the signal, as described in \eqref{signal}.
 %
-This approach avoids sudden changes in the actuator movement thus affecting the texture perception in an uncontrolled way (see \figref{method/phase_adjustment}) and, contrary to previous work~\cite{asano2015vibrotactile,friesen2024perceived}, it enables no constraints a free exploration of the texture by the user with no constraints on the finger speed.
+This approach avoids sudden changes in the actuator movement that would affect the texture perception in an uncontrolled way (\figref{method/phase_adjustment}) and, contrary to previous work~\cite{asano2015vibrotactile,friesen2024perceived}, it enables free exploration of the texture by the user with no constraints on the finger speed.
 %
 Finally, following \textcite{ujitoko2019modulating}, a square wave is chosen over a sine wave to get a rendering closer to a real grating texture, with the sensation of crossing edges, and because the roughness perception of sine wave textures has been shown not to reproduce that of real grating textures~\cite{unger2011roughness}.
 %
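The square-wave generation with phase continuity described above can be sketched in a block-based form: the instantaneous frequency is the finger speed divided by the texture period $\lambda$, and the running phase is carried across blocks so that a speed change never produces a jump in the waveform. The sampling rate matches the paper; the wavelength and amplitude values are illustrative assumptions.

```python
import numpy as np

FS = 48_000          # audio sampling rate (Hz), as in the paper
WAVELENGTH = 0.002   # texture period lambda (m); illustrative value

def render_block(speed, phase, n_samples, amplitude=1.0):
    """Render one block of the vibrotactile signal for a finger-speed estimate.

    Frequency is speed / lambda; the returned phase is the value to pass to
    the next block so the waveform stays continuous when the speed changes.
    """
    freq = speed / WAVELENGTH
    t = np.arange(n_samples) / FS
    arg = 2.0 * np.pi * freq * t + phase
    block = amplitude * np.sign(np.sin(arg))  # square wave: crossing-edges feel
    new_phase = (phase + 2.0 * np.pi * freq * n_samples / FS) % (2.0 * np.pi)
    return block, new_phase

# Two consecutive blocks at different finger speeds, phase carried over.
b1, ph = render_block(speed=0.05, phase=0.0, n_samples=800)
b2, ph = render_block(speed=0.10, phase=ph, n_samples=800)
```

Concatenating `b1` and `b2` yields a signal whose frequency doubles without any discontinuity in the underlying phase.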
@@ -26,7 +26,7 @@ Our visuo-haptic rendering system, described in \secref{method}, allows free exp
 %
 The user study aimed to investigate the effect of visual hand rendering in AR or VR on the perception of roughness texture augmentation. % of a touched tangible surface.
 %
-In a two-alternative forced choice (2AFC) task, participants compared the roughness of different tactile texture augmentations in three visual rendering conditions: without any visual augmentation (see \figref{renderings}, \level{Real}), in AR with a realistic virtual hand superimposed on the real hand (see \figref{renderings}, \level{Mixed}), and in VR with the same virtual hand as an avatar (see \figref{renderings}, \level{Virtual}).
+In a two-alternative forced choice (2AFC) task, participants compared the roughness of different tactile texture augmentations in three visual rendering conditions: without any visual augmentation (\figref{renderings}, \level{Real}), in AR with a realistic virtual hand superimposed on the real hand (\figref{renderings}, \level{Mixed}), and in VR with the same virtual hand as an avatar (\figref{renderings}, \level{Virtual}).
 %
 In order not to influence the perception, as vision is an important source of information for the perception of texture~\cite{bergmanntiest2007haptic,yanagisawa2015effects,normand2024augmenting,vardar2019fingertip}, the touched surface was visually a uniform white; thus, only the visual aspect of the hand and the surrounding environment was changed.
@@ -52,7 +52,7 @@ They all signed an informed consent form before the user study and were unaware
 \subsection{Apparatus}
 \label{apparatus}

-An experimental environment similar as \textcite{gaffary2017ar} was created to ensure a similar visual rendering in AR and VR (see \figref{renderings}).
+An experimental environment similar to that of \textcite{gaffary2017ar} was created to ensure a similar visual rendering in AR and VR (\figref{renderings}).
 %
 It consisted of a \qtyproduct{300 x 210 x 400}{\mm} medium-density fibreboard (MDF) box with a paper sheet glued inside, and a \qtyproduct{15 x 5}{\mm} rectangle printed on the sheet to delimit the area where the tactile textures were rendered.
 %
@@ -62,7 +62,7 @@ Participants rated the roughness of the paper (without any texture augmentation)

 %The visual rendering of the virtual hand and environment was achieved using the Microsoft HoloLens~2, an OST-AR headset with a \qtyproduct{43 x 29}{\degree} field of view (FoV) and a \qty{60}{\Hz} refresh rate, running a custom application made with Unity 2021.1.0f1 and Mixed Reality Toolkit (MRTK) 2.7.2.
-The virtual environment was carefully reproducing the real environment including the geometry of the box, the textures, the lighting, and the shadows (see \figref{renderings}, \level{Virtual}).
+The virtual environment carefully reproduced the real environment, including the geometry of the box, the textures, the lighting, and the shadows (\figref{renderings}, \level{Virtual}).
 %
 The virtual hand model was a gender-neutral human right hand with a realistic skin texture, similar to the one used by \textcite{schwind2017these}.
 %
@@ -72,17 +72,17 @@ Its size was adjusted to match the real hand of the participants before the expe
 %
 The visual rendering of the virtual hand and environment is described in \secref{virtual_real_alignment}.
 %
-%In the \level{Virtual} rendering, a cardboard mask (with holes for sensors) was attached to the headset to block the view of the real environment and simulate a VR headset (see \figref{method/headset}).
+%In the \level{Virtual} rendering, a cardboard mask (with holes for sensors) was attached to the headset to block the view of the real environment and simulate a VR headset (\figref{method/headset}).
 %
-To ensure for the same FoV in all \factor{Visual Rendering} condition, a cardboard mask was attached to the AR headset (see \figref{method/headset}).
+To ensure the same FoV in all \factor{Visual Rendering} conditions, a cardboard mask was attached to the AR headset (\figref{method/headset}).
 %
 In the \level{Virtual} rendering, the mask had only holes for the sensors, to block the view of the real environment and simulate a VR headset.
 %
-In the \level{Mixed} and \level{Real} conditions, the mask had two additional holes for the eyes that matched the FoV of the HoloLens~2 (see \figref{method/headset}).
+In the \level{Mixed} and \level{Real} conditions, the mask had two additional holes for the eyes that matched the FoV of the HoloLens~2 (\figref{method/headset}).
 %
 \figref{renderings} shows the resulting views in the three considered \factor{Visual Rendering} conditions.

-%A vibrotactile voice-coil device (HapCoil-One, Actronika), incased in a 3D-printed plastic shell, was firmly attached to the right index finger of the participants using a Velcro strap (see \figref{method/device}), was used to render the textures
+%A vibrotactile voice-coil device (HapCoil-One, Actronika), encased in a 3D-printed plastic shell and firmly attached to the right index finger of the participants using a Velcro strap (\figref{method/device}), was used to render the textures
 %
 %This voice-coil was chosen for its wide frequency range (\qtyrange{10}{1000}{\Hz}) and its relatively low acceleration distortion, as specified by the manufacturer\footnotemark[1].
 %
@@ -110,7 +110,7 @@ The user study was held in a quiet room with no windows.

 Participants were first given written instructions about the experimental setup and procedure, the informed consent form to sign, and a demographic questionnaire.
 %
-%They were then asked to sit in front of the box and wear the HoloLens~2 and headphones while the experimenter firmly attached the vibrotactile device to the middle phalanx of their right index finger (see \figref{method/apparatus}).
+%They were then asked to sit in front of the box and wear the HoloLens~2 and headphones while the experimenter firmly attached the vibrotactile device to the middle phalanx of their right index finger (\figref{method/apparatus}).
 %
 A calibration was then performed to adjust the HoloLens~2 to the participant's interpupillary distance, the virtual hand to the real hand size, and the fiducial marker to the finger position.
 %
@@ -147,7 +147,7 @@ Preliminary studies allowed us to determine a range of amplitudes that could be
 The user study was a within-subjects design with two factors:
 %
 \begin{itemize}
-\item \factor{Visual Rendering}, consisting of the augmented or virtual view of the environment, the hand and the wearable haptic device, with 3 levels: real environment and real hand view without any visual augmentation (see \figref{renderings}, \level{Real}), real environment and hand view with the virtual hand (see \figref{renderings}, \level{Mixed}) and virtual environment with the virtual hand (see \figref{renderings}, \level{Virtual}).
+\item \factor{Visual Rendering}, consisting of the augmented or virtual view of the environment, the hand and the wearable haptic device, with 3 levels: real environment and real hand view without any visual augmentation (\figref{renderings}, \level{Real}), real environment and hand view with the virtual hand (\figref{renderings}, \level{Mixed}), and virtual environment with the virtual hand (\figref{renderings}, \level{Virtual}).
 \item \factor{Amplitude Difference}, consisting of the difference in amplitude between the comparison and the reference textures, with 6 levels: \qtylist{0; +-12.5; +-25.0; +-37.5}{\%}.
 \end{itemize}
@@ -18,7 +18,7 @@ Each estimate is reported with its 95\% confidence interval (CI) as follows: \ci
|
||||
\subsubsection{Discrimination Accuracy}
|
||||
\label{discrimination_accuracy}
|
||||
|
||||
A GLMM was fitted to the \response{Texture Choice} in the 2AFC vibrotactile texture roughness discrimination task, with by-participant random intercepts but no random slopes, and a probit link function (see \figref{results/trial_predictions}).
A GLMM was fitted to the \response{Texture Choice} in the 2AFC vibrotactile texture roughness discrimination task, with by-participant random intercepts but no random slopes, and a probit link function (\figref{results/trial_predictions}).
%
The points of subjective equality (PSEs, see \figref{results/trial_pses}) and just-noticeable differences (JNDs, see \figref{results/trial_jnds}) for each visual rendering and their respective differences were estimated from the model, along with their corresponding 95\% CI, using a non-parametric bootstrap procedure (1000 samples).
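As a reminder of how these two quantities follow from a probit fit (the symbols below are generic sketch notation, not taken from the paper's model, which also includes by-participant random intercepts):

```latex
% Sketch: probit psychometric function underlying the GLMM fixed effects,
% with generic intercept \beta_0 and slope \beta_1 on the amplitude
% difference \Delta a (illustrative notation, not the paper's).
P(\text{comparison judged rougher} \mid \Delta a)
    = \Phi\!\left(\beta_0 + \beta_1 \, \Delta a\right)
% PSE: the amplitude difference at which P = 0.5
\mathrm{PSE} = -\beta_0 / \beta_1
% JND: distance from the PSE to the 75\% point of the curve
\mathrm{JND} = \Phi^{-1}(0.75) / \beta_1 \approx 0.6745 / \beta_1
```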
%
@@ -95,7 +95,7 @@ All pairwise differences were statistically significant: \level{Real} \vs \level
%\figref{results/question_heatmaps} shows the median and interquartile range (IQR) ratings to the questions in \tabref{questions} and to the NASA-TLX questionnaire.
%
Friedman tests were employed to compare the ratings to the questions (see \tabref{questions}), with post-hoc Wilcoxon signed-rank tests and Holm-Bonferroni adjustment, except for the questions regarding the virtual hand that were directly compared with Wilcoxon signed-rank tests.
Friedman tests were employed to compare the ratings to the questions (\tabref{questions}), with post-hoc Wilcoxon signed-rank tests and Holm-Bonferroni adjustment, except for the questions regarding the virtual hand that were directly compared with Wilcoxon signed-rank tests.
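As a rough illustration (not the authors' code, and with made-up ratings in place of the study data), this omnibus-plus-post-hoc pipeline can be sketched as:

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Hypothetical 7-point ratings: 20 participants x 3 renderings (Real, Mixed, Virtual).
# The study's data are not reproduced here; this matrix only illustrates the pipeline.
rng = np.random.default_rng(42)
ratings = rng.integers(1, 8, size=(20, 3)).astype(float)
ratings[:, 2] += 1.0  # shift one condition so the tests have something to detect

# Omnibus Friedman test across the three within-subject conditions
stat, p_friedman = friedmanchisquare(*ratings.T)

# Post-hoc pairwise Wilcoxon signed-rank tests
pairs = [(0, 1), (0, 2), (1, 2)]
p_raw = np.array([wilcoxon(ratings[:, i], ratings[:, j]).pvalue for i, j in pairs])

# Holm-Bonferroni step-down adjustment: multiply the k-th smallest p-value
# by (m - k), enforce monotonicity, and cap at 1
order = np.argsort(p_raw)
p_adj = np.empty_like(p_raw)
running = 0.0
for rank, idx in enumerate(order):
    running = max(running, (len(p_raw) - rank) * p_raw[idx])
    p_adj[idx] = min(1.0, running)
```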
%
\figref{question_plots} shows these ratings for questions where statistically significant differences were found (results are shown as mean $\pm$ standard deviation):
%
@@ -8,15 +8,15 @@
The results showed a difference in vibrotactile roughness perception between the three visual rendering conditions.
%
Given the estimated point of subjective equality (PSE), the textures in the \level{Real} rendering were on average perceived as \enquote{rougher} than in the \level{Virtual} (\percent{-2.8}) and \level{Mixed} (\percent{-6.0}) renderings (see \figref{results/trial_pses}).
Given the estimated point of subjective equality (PSE), the textures in the \level{Real} rendering were on average perceived as \enquote{rougher} than in the \level{Virtual} (\percent{-2.8}) and \level{Mixed} (\percent{-6.0}) renderings (\figref{results/trial_pses}).
%
\textcite{gaffary2017ar} found a PSE difference in the same range between AR and VR for perceived stiffness, with the VR perceived as \enquote{stiffer} and the AR as \enquote{softer}.
%
%However, the difference between the \level{Virtual} and \level{Mixed} conditions was not significant.
%
Surprisingly, the PSE of the \level{Real} rendering was shifted to the right (to be \enquote{rougher}, \percent{7.9}) compared to the reference texture, whereas the PSEs of the \level{Virtual} (\percent{5.1}) and \level{Mixed} (\percent{1.9}) renderings were closer to the reference texture, being perceived as \enquote{smoother} (see \figref{results/trial_predictions}).
Surprisingly, the PSE of the \level{Real} rendering was shifted to the right (to be \enquote{rougher}, \percent{7.9}) compared to the reference texture, whereas the PSEs of the \level{Virtual} (\percent{5.1}) and \level{Mixed} (\percent{1.9}) renderings were closer to the reference texture, being perceived as \enquote{smoother} (\figref{results/trial_predictions}).
%
The sensitivity of participants to roughness differences (just-noticeable differences, JND) also varied between all the visual renderings, with the \level{Real} rendering having the best JND (\percent{26}), followed by the \level{Virtual} (\percent{30}) and \level{Mixed} (\percent{33}) renderings (see \figref{results/trial_jnds}).
The sensitivity of participants to roughness differences (just-noticeable differences, JND) also varied between all the visual renderings, with the \level{Real} rendering having the best JND (\percent{26}), followed by the \level{Virtual} (\percent{30}) and \level{Mixed} (\percent{33}) renderings (\figref{results/trial_jnds}).
%
These JND values are in line with, and at the upper end of, the range reported in previous studies~\cite{choi2013vibrotactile}, which may be due to the location of the actuator on top of the middle phalanx of the finger, which is less sensitive to vibration than the fingertip.
%
@@ -24,15 +24,15 @@ Thus, compared to no visual rendering (\level{Real}), the addition of a visual r
Differences in user behaviour were also observed between the visual renderings (but not between the haptic textures).
%
On average, participants responded faster (\percent{-16}), explored textures at a greater distance (\percent{+21}) and at a higher speed (\percent{+16}) without visual augmentation (\level{Real} rendering) than in VR (\level{Virtual} rendering) (see \figref{results_finger}).
On average, participants responded faster (\percent{-16}), explored textures at a greater distance (\percent{+21}) and at a higher speed (\percent{+16}) without visual augmentation (\level{Real} rendering) than in VR (\level{Virtual} rendering) (\figref{results_finger}).
%
The \level{Mixed} rendering, displaying both the real and virtual hands, was always in between, with no significant difference from the other two renderings.
%
This suggests that touching a virtual vibrotactile texture on a tangible surface with a virtual hand in VR is different from touching it with one's own hand: users were more cautious or less confident in their exploration in VR.
%
This does not seem to be due to the realism of the virtual hand or environment, nor to the control of the virtual hand, which were all rated high to very high by the participants (see \secref{questions}) in both the \level{Mixed} and \level{Virtual} renderings.
This does not seem to be due to the realism of the virtual hand or environment, nor to the control of the virtual hand, which were all rated high to very high by the participants (\secref{questions}) in both the \level{Mixed} and \level{Virtual} renderings.
%
Very interestingly, the evaluation of the vibrotactile device and textures was also the same across the visual renderings, with a very high sensation of control, good realism and very low perceived latency of the textures (see \secref{questions}).
Very interestingly, the evaluation of the vibrotactile device and textures was also the same across the visual renderings, with a very high sensation of control, good realism and very low perceived latency of the textures (\secref{questions}).
%
However, the perceived latency of the virtual hand (\response{Hand Latency} question) seems to be related to the perceived roughness of the textures (as reflected in the PSEs).
%
@@ -40,7 +40,7 @@ The \level{Mixed} rendering had the lowest PSE and highest perceived latency, th
Our visuo-haptic augmentation system aimed to provide a coherent multimodal virtual rendering integrated with the real environment.
%
Yet, it involves different sensory interaction loops between the user's movements and the visuo-haptic feedback (see \figref{method/diagram}), which are subject to different latencies and may not be synchronised with each other, or may even be inconsistent with other sensory modalities such as proprioception.
Yet, it involves different sensory interaction loops between the user's movements and the visuo-haptic feedback (\figref{method/diagram}), which are subject to different latencies and may not be synchronised with each other, or may even be inconsistent with other sensory modalities such as proprioception.
%
When a user runs their finger over a vibrotactile virtual texture, the haptic sensations and, if displayed, the virtual hand lag behind the visual displacement and proprioceptive sensations of the real hand.
%