From 3a8f68c094550537642c77770cd4e58d6a029326 Mon Sep 17 00:00:00 2001 From: Erwan Normand Date: Sun, 29 Sep 2024 23:13:19 +0200 Subject: [PATCH] WIP vhar_textures --- 1-background/introduction/introduction.tex | 33 +-- .../related-work/2-wearable-haptics.tex | 2 +- 2-perception/vhar-system/1-introduction.tex | 15 ++ 2-perception/vhar-system/2-method.tex | 2 +- 2-perception/vhar-textures/1-introduction.tex | 53 ++-- 2-perception/vhar-textures/2-experiment.tex | 184 ++++++-------- 2-perception/vhar-textures/3-results.tex | 232 ++++++++---------- 2-perception/vhar-textures/4-discussion.tex | 74 ++---- 2-perception/vhar-textures/5-conclusion.tex | 2 +- 2-perception/xr-perception/1-introduction.tex | 16 +- 2-perception/xr-perception/3-experiment.tex | 13 +- 4-conclusion/conclusion.tex | 54 ++-- config/acronyms.tex | 1 + 13 files changed, 313 insertions(+), 368 deletions(-) diff --git a/1-background/introduction/introduction.tex b/1-background/introduction/introduction.tex index 788ca96..bd750b6 100644 --- a/1-background/introduction/introduction.tex +++ b/1-background/introduction/introduction.tex @@ -3,7 +3,7 @@ \chaptertoc -This thesis, entitled \enquote{\ThesisTitle}, shows that wearable haptics, worn on the outside of the hand, improve direct bare-hand interaction in immersive \AR by augmenting the perception and manipulation of the virtual. +This thesis, entitled \enquote{\ThesisTitle}, shows how wearable haptics, worn on the outside of the hand, can improve direct bare-hand interaction in immersive \AR by augmenting the perception and manipulation of the virtual. \section{Visual and Haptic Object Augmentations} \label{visuo_haptic_augmentations} @@ -12,7 +12,7 @@ This thesis, entitled \enquote{\ThesisTitle}, shows that wearable haptics, worn In daily life, we simultaneously look and touch the everyday objects around us without even thinking about it. 
Many of these object properties can be perceived in a complementary way through both vision and touch, such as their shape, size or texture \cite{baumgartner2013visual}. -But vision often precedes touch, enabling us to expect the tactile sensations we will feel when touching the object, \eg stiffness or texture, and even to predict properties that we cannot see, \eg weight or temperature. +But vision often precedes touch, enabling us to expect the tactile sensations we will feel when touching the object, \eg hardness or texture, and even to predict properties that we cannot see, \eg weight or temperature. Information from different sensory sources may be complementary, redundant or contradictory \cite{ernst2004merging}. This is why we sometimes want to touch an object to check one of its properties that we have seen and to compare or confront our visual and tactile sensations. We then instinctively construct a unified perception of the object we are exploring and manipulating from these visual and tactile sensory modalities, as well as from the movement of our hand and fingers on the object \cite{ernst2002humans}. @@ -27,14 +27,14 @@ This rich and complex variety of actions and sensations makes it particularly di Haptic devices can be categorized according to how they interface with a user: graspable, touchable and wearable, as illustrated in \figref{haptic-categories}. Graspable interfaces are the traditional haptic devices that are held in the hand. They are either grounded devices that provide kinesthetic (force) feedback, \eg a robotic arm or a joystick, or ungrounded tactile devices, \eg a game controller or a smartphone. -Touchable interfaces are actuated devices that are directly touched and that can dynamically change their shape or surface property, such as stiffness or friction, providing simultaneous kinesthetic and cutaneous feedback. 
+Touchable interfaces are actuated devices that are directly touched and that can dynamically change their shape or surface property, such as hardness or friction, providing simultaneous kinesthetic and cutaneous feedback. However, graspable interfaces occupy the hand, preventing interaction with other objects, and touchable interfaces often involve cumbersome mechanisms and are by definition limited to their own working surface. Instead, wearable interfaces are directly mounted on the body to provide cutaneous sensations on the skin in a portable way and without restricting the user's movements \cite{pacchierotti2017wearable}. \begin{subfigs}{haptic-categories}{ Haptic devices can be classified into three categories according to their interface with the user: }[][ - \item graspable, + \item graspable (hand-held), \item touchable, and \item wearable. Adapted from \textcite{culbertson2018haptics}. ] @@ -54,7 +54,7 @@ But their use in combination with \AR has been little explored so far. \item Wolverine, a wearable exoskeleton that simulate contact and grasping of virtual objects with force feedback on the fingers \cite{choi2016wolverine}. \item Touch\&Fold, a wearable haptic device mounted on the nail that fold on demand to render contact, normal force and vibrations to the fingertip \cite{teng2021touch}. \item The hRing, a wearable haptic ring mounted on the proximal phalanx able to render normal and shear forces to the finger \cite{pacchierotti2016hring}. - \item Tasbi, a haptic bracelet capable of rendering squeeze and vibrotactile feedback to the wrist \cite{pezent2022design}. + \item Tasbi, a haptic bracelet capable of providing pressure and vibrotactile feedback to the wrist \cite{pezent2022design}. 
] \subfigsheight{28mm} \subfig{choi2016wolverine} @@ -89,7 +89,7 @@ For example, a visual \AE that uses a tangible (touchable) object as a proxy to Haptic \AR is then the combination of real and virtual haptic stimuli \cite{bhatia2024augmenting} (middle row in \figref{visuo-haptic-rv-continuum3}). In particular, it has been implemented by enhancing the haptic perception of tangible objects by providing timely tactile stimuli using wearable haptics. \figref{salazar2020altering} shows an example of modifying the perceived stiffness of a tangible object in \VR using simultaneous pressure feedback on the finger (left middle cell in \figref{visuo-haptic-rv-continuum3}). -\figref{bau2012revel} shows another example of visuo-haptic \AR rendering of virtual texture when running the finger on a tangible surface (middle cell in the two axes in \figref{visuo-haptic-rv-continuum3}). +\figref{bau2012revel} shows another example of visuo-haptic augmentation of virtual texture when running the finger on a tangible surface (middle cell in the two axes in \figref{visuo-haptic-rv-continuum3}). Current \AR systems often lack haptic feedback, creating a deceptive and incomplete user experience when reaching the \VE with the hand. All visual \VOs are inherently intangible and cannot physically constrain a user's hand, making it difficult to perceive their properties congruently and interact with them with confidence and efficiency. @@ -100,7 +100,7 @@ The integration of wearable haptics with \AR seems to be one of the most promisi \item \AR environment with a real, tangible haptic object used as a proxy to manipulate a \VO \cite{kahl2023using}. \item \AR environment with a wearable haptic device that provides virtual, synthetic feedback from contact with a \VO \cite{meli2018combining}. \item A tangible object seen in a visual \VR environment whose haptic perception of stiffness is augmented with the hRing haptic device \cite{salazar2020altering}. 
- \item Visuo-haptic rendering of texture on a touched tangible object with a \AR display and haptic electrovibration feedback \cite{bau2012revel}. + \item Visuo-haptic texture augmentation of a tangible object being touched, using a hand-held \AR display and haptic electrovibration feedback \cite{bau2012revel}. ] \subfigsheight{31mm} \subfig{kahl2023using} @@ -138,17 +138,18 @@ Each of these challenges also raises numerous design, technical and human issues Many haptic devices have been designed and evaluated specifically for use in \VR, providing realistic and varied kinesthetic and tactile feedback to \VOs. Although closely related, (visual) \AR and \VR have key differences in their respective renderings that can affect user perception. -First, the user's hand and \RE are visible in \AR, unlike \VR where there is total control over the visual rendering of the hand and \VE. +In \AR, the user can still see the real-world surroundings, including their hands, the augmented tangible objects and the worn haptic devices, unlike \VR where there is total control over the visual rendering of the hand and \VE. As such, in \VR, visual sensations are particularly dominant in perception, and conflicts with haptic sensations are also specifically created to influence the user's perception, for example to create pseudo-haptic \cite{ujitoko2021survey} or haptic retargeting \cite{azmandian2016haptic} effects. Moreover, many wearable haptic devices take the form of controllers, gloves or exoskeletons, all of which cover the fingertips and are therefore not suitable for \AR. The user's hand must be indeed free to touch and interact with the \RE while wearing a wearable haptic device. 
It is possible instead to place the haptic actuator close to the point of contact with the \RE, as described above to implement haptic augmentations, \eg providing haptic feedback on the nail \cite{ando2007fingernailmounted,teng2021touch}, another phalanx \cite{asano2015vibrotactile,salazar2020altering} or the wrist \cite{pezent2022design,sarac2022perceived} for rendering fingertip contacts with virtual content. Therefore, when touching a virtual or augmented object, the real and virtual visual sensations are seen as co-localized, but the virtual haptic feedback is not. -It remains to be investigated how such potential discrepancies affect the overall perception to design visuo-haptic renderings adapted to \AR. +It remains to be investigated how such potential discrepancies affect the overall perception, in order to design visuo-haptic augmentations adapted to \AR. -So far, \AR can only add visual and haptic sensations to the user's overall perception of the environment, but conversely it is more difficult to remove sensations. +So far, \AR studies and applications often only add visual and haptic sensations to the user's overall perception of the environment, but conversely it is more difficult to remove sensations. These added virtual sensations can therefore be perceived as out of sync or even inconsistent with the sensations of the \RE, for example with a lower rendering quality, a temporal latency, a spatial shift, or a combination of these. It is therefore unclear to what extent the real and virtual visuo-haptic sensations will be perceived as realistic or plausible, and to what extent they will conflict or complement each other in the perception of the \AE. +A better understanding of how visual factors influence the perception of haptically augmented tangible objects would allow the many existing wearable haptic systems, not yet fully explored with \AR, to be better applied, and new visuo-haptic augmentations adapted to \AR to be designed. 
\subsectionstarbookmark{Challenge II: Enable Effective Manipulation of the Augmented Environment} @@ -196,22 +197,22 @@ Wearable haptic devices have proven to be effective in modifying the perception %It is achieved by placing the haptic actuator close to the fingertip, to let it free to touch the surface, and rendering tactile stimuli timely synchronised with the finger movement. %It enables rich haptic feedback as the combination of kinesthetic sensation from the tangible and cutaneous sensation from the actuator. However, wearable haptic augmentations have been little explored with \AR, as well as the visuo-haptic augmentation of textures. -Texture is indeed one of the main tactile sensation of a surface material \cite{hollins1993perceptual,okamoto2013psychophysical}, perceived equally well by both sight and touch \cite{bergmanntiest2007haptic,baumgartner2013visual}, and one of the most studied haptic (only, without visual) rendering \cite{unger2011roughness,culbertson2014modeling,strohmeier2017generating}. -%With a better understanding of how visual factors influence the perception of haptically augmented tangible objects, the many wearable haptic systems that already exist but have not yet been fully explored with \AR can be better applied and new visuo-haptic renderings adapted to \AR can be designed. +Texture is indeed one of the fundamental perceived properties of a surface's material \cite{hollins1993perceptual,okamoto2013psychophysical}, perceived equally well by both sight and touch \cite{bergmanntiest2007haptic,baumgartner2013visual}, and one of the most studied haptic (only, without visual) augmentations \cite{unger2011roughness,culbertson2014modeling,strohmeier2017generating}. +Being able to coherently substitute the visuo-haptic texture of an everyday surface directly touched by a finger is an important step towards \AR systems capable of visually and haptically augmenting the \RE of a user in a plausible way. 
For this first axis of research, we propose to design and evaluate the \textbf{perception of wearable virtual visuo-haptic textures augmenting tangible surfaces}. %, using an immersive \AR headset and a wearable vibrotactile device. To this end, we (1) design a system for rendering wearable visuo-haptic texture augmentations, to (2) evaluate how the perception of haptic texture augmentations is affected by the visual virtuality of the hand and the environment (real, augmented, or virtual), and (3) investigate the perception of co-localized visuo-haptic texture augmentations. -First, an effective approach to rendering haptic textures is to generate a vibrotactile signal that represents the finger-texture interaction \cite{culbertson2014modeling,asano2015vibrotactile}. +First, an effective approach for rendering haptic textures is to generate a vibrotactile signal that represents the finger-texture interaction \cite{culbertson2014modeling,asano2015vibrotactile}. Yet, to achieve the natural interaction with the hand and a coherent visuo-haptic feedback, it requires a real time rendering of the textures, no constraints on the hand movements, and a good synchronization between the visual and haptic feedback. Thus, our first objective is to \textbf{design an immersive, real time system} that allows free exploration with the bare hand of \textbf{wearable visuo-haptic texture augmentations} on tangible surfaces. -Second, many works have investigated the haptic augmentation of textures, but none have integrated them with \AR and \VR, or have considered the influence of the visual rendering on their perception. +Second, many works have investigated the haptic augmentation of textures, but none have integrated them with \AR and \VR, or have considered the influence of the degree of visual virtuality on their perception. 
Still, it is known that the visual feedback can alter the perception of real and virtual haptic sensations \cite{schwind2018touch,choi2021augmenting} but also that the force feedback perception of grounded haptic devices is not the same in \AR and \VR \cite{diluca2011effects,gaffary2017ar}. Hence, our second objective is to \textbf{evaluate how the perception of wearable haptic texture augmentation is affected by the visual virtuality of the hand and the environment} (real, augmented, or virtual). -Finally, some visuo-haptic texture databases have been modeled from real texture captures \cite{culbertson2014penn,balasubramanian2024sens3}, to be rendered as virtual textures with graspable haptics that are perceived as similar to real textures \cite{culbertson2015should,friesen2024perceived}. +Finally, some visuo-haptic texture databases have been modeled from real texture captures \cite{culbertson2014penn,balasubramanian2024sens3}, to be rendered as virtual textures with hand-held haptic devices that are perceived as similar to real textures \cite{culbertson2015should,friesen2024perceived}. However, the rendering of these textures in an immersive and natural visuo-haptic \AR using wearable haptics remains to be investigated. -Our third objective is to \textbf{evaluate the perception of simultaneous and co-localized wearable visuo-haptic texture augmentations} of tangible surfaces in \AR, directly touched by the hand, and to understand to what extent each sensory modality contributes to the overall perception of the augmented texture. +Our third objective is to \textbf{evaluate the perception of simultaneous and co-localized wearable visuo-haptic texture augmentations} of tangible surfaces in \AR, and to understand to what extent each sensory modality contributes to the overall perception of the augmented texture. 
\subsectionstarbookmark{Axis II: Improving the Manipulation of Virtual Objects} diff --git a/1-background/related-work/2-wearable-haptics.tex b/1-background/related-work/2-wearable-haptics.tex index 2b923eb..45b7820 100644 --- a/1-background/related-work/2-wearable-haptics.tex +++ b/1-background/related-work/2-wearable-haptics.tex @@ -228,7 +228,7 @@ Participants matched the virtual textures to the real ones, with \qty{0.25}{\mm} Other models have been then developed to capture everyday textures (such as sandpaper) \cite{guruswamy2011iir} with many force and velocity measures \cite{romano2012creating,culbertson2014modeling}. Such data-based models are capable of interpolating from the user's measures of velocity and force as inputs to generate a virtual texture in real time (\secref{vibrotactile_actuators}). -This led to the release of the Penn Haptic Texture Toolkit (HaTT) database, a public set of stylus recordings and models of 100 haptic textures \cite{culbertson2014one}. +This led to the release of the \HaTT database, a public set of stylus recordings and models of 100 haptic textures \cite{culbertson2014one}. A similar database, but captured from a direct touch context with the fingertip, has recently been released \cite{balasubramanian2024sens3}. A limitation of these data-driven models is that they can only render \emph{isotropic} textures: their record does not depend on the position of the measure, and the rendering is the same regardless of the direction of the movement. Alternative models have been proposed to both render both isotropic and patterned textures \cite{chan2021hasti}. 
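The interpolation step described in the related work above (generating a virtual texture in real time from the user's measured velocity and force) can be illustrated with a minimal sketch. The regular grid, the parameter values, and the function name below are hypothetical simplifications — the actual HaTT models interpolate autoregressive filter coefficients over a Delaunay triangulation of the recorded force-speed space, not a regular grid:

```python
import numpy as np

def interp_texture_param(forces, speeds, grid, f, s):
    """Bilinearly interpolate one texture-model parameter stored on a
    regular (force, speed) grid. Illustrative stand-in for the
    Delaunay-based interpolation used by actual data-driven models."""
    # Clamp the query into the measured range, as a renderer must cope
    # with forces and speeds outside the captured data.
    f = float(np.clip(f, forces[0], forces[-1]))
    s = float(np.clip(s, speeds[0], speeds[-1]))
    # Locate the enclosing grid cell.
    i = int(np.clip(np.searchsorted(forces, f) - 1, 0, len(forces) - 2))
    j = int(np.clip(np.searchsorted(speeds, s) - 1, 0, len(speeds) - 2))
    tf = (f - forces[i]) / (forces[i + 1] - forces[i])
    ts = (s - speeds[j]) / (speeds[j + 1] - speeds[j])
    return ((1 - tf) * (1 - ts) * grid[i, j]
            + tf * (1 - ts) * grid[i + 1, j]
            + (1 - tf) * ts * grid[i, j + 1]
            + tf * ts * grid[i + 1, j + 1])
```

In a real-time renderer, such a lookup would be queried once per output sample with the live force and speed estimates.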
diff --git a/2-perception/vhar-system/1-introduction.tex b/2-perception/vhar-system/1-introduction.tex index 0d13a15..ad0d336 100644 --- a/2-perception/vhar-system/1-introduction.tex +++ b/2-perception/vhar-system/1-introduction.tex @@ -1,4 +1,19 @@ % Even before manipulating a visual representation to induce a haptic sensation, shifts and latencies between user input and co-localised visuo-haptic feedback can be experienced differently in \AR and \VR, which we aim to investigate in this work. +% Database of virtual visual and haptic (roughness) textures have been developed as captures and models of real everyday surfaces \cite{culbertson2014penn,balasubramanian2024sens3} + +When we look at the surface of an everyday object, we then touch it to confirm or contrast our initial visual impression and to estimate the properties of the object \cite{ernst2002humans}. +% +One of the main characteristics of a textured surface is its roughness, \ie the micro-geometry of the material \cite{klatzky2003feeling}, which is perceived equally well and similarly by both sight and touch \cite{bergmanntiest2007haptic,baumgartner2013visual,vardar2019fingertip}. +% +Many haptic devices and rendering methods have been used to generate realistic virtual rough textures \cite{culbertson2018haptics}. +% +One of the most common approaches is to reproduce the vibrations that occur when running across a surface, using a vibrotactile device attached to a hand-held tool \cite{culbertson2014modeling,culbertson2015should} or worn on the finger \cite{asano2015vibrotactile,friesen2024perceived}. +% +By providing timely vibrations synchronized with the movement of the tool or the finger moving on a real object, the perceived roughness of the surface can be augmented \cite{culbertson2015should,asano2015vibrotactile}. +% +In that sense, data-driven haptic textures have been developed as captures and models of real surfaces, resulting in the \HaTT database \cite{culbertson2014one}. 
+% +While these virtual haptic textures are perceived as similar to real textures \cite{culbertson2015should}, they have been evaluated using hand-held tools and not yet in a context of direct finger contact with the surface, in particular combined with visual textures in an immersive \VE. Wearable haptic devices have proven to be effective in modifying the perception of a touched tangible surface, without modifying the tangible, nor covering the fingertip, forming a haptic \AE \cite{bau2012revel,detinguy2018enhancing,salazar2020altering}. diff --git a/2-perception/vhar-system/2-method.tex b/2-perception/vhar-system/2-method.tex index 2eb9dfe..171c5eb 100644 --- a/2-perception/vhar-system/2-method.tex +++ b/2-perception/vhar-system/2-method.tex @@ -45,7 +45,7 @@ The system consists of three main components: the pose estimation of the tracked %\subfig[0.992]{apparatus} \end{subfigs} -A fiducial marker (AprilTag) is glued to the top of the actuator (\figref{device}) to track the finger pose with a camera (StreamCam, Logitech) which is placed above the experimental setup and capturing \qtyproduct{1280 x 720}{px} images at \qty{60}{\hertz} (\figref{apparatus}). +A \qty{2}{\cm} AprilTag fiducial marker \cite{wang2016apriltag} is glued to the top of the actuator (\figref{device}) to track the finger pose with a camera (StreamCam, Logitech) placed above the experimental setup, capturing \qtyproduct{1280 x 720}{px} images at \qty{60}{\hertz} (\figref{apparatus}). % Other markers are placed on the tangible surfaces to augment (\figref{setup}). 
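The marker poses tracked at \qty{60}{\hertz} ultimately drive the haptic rendering through the finger's tangential speed. As a hedged sketch of that estimation step — the smoothing constant and function name are assumptions, not taken from the thesis system:

```python
import math

def tangential_speed(positions, dt=1.0 / 60.0, alpha=0.3):
    """Estimate the finger's tangential speed (m/s) on the surface
    plane from successive marker positions (x, y) in metres, sampled
    at the 60 Hz camera rate. An exponential moving average (alpha is
    an assumed constant) smooths frame-to-frame detection jitter."""
    smoothed = 0.0
    speeds = []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        raw = math.hypot(x1 - x0, y1 - y0) / dt  # finite-difference speed
        smoothed = alpha * raw + (1.0 - alpha) * smoothed
        speeds.append(smoothed)
    return speeds
```

For instance, a marker advancing \qty{1}{\mm} per frame corresponds to a steady speed of \qty{0.06}{\m\per\s}, which the filter converges to after a few frames.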
% to estimate the relative position of the finger with respect to the surfaces % diff --git a/2-perception/vhar-textures/1-introduction.tex b/2-perception/vhar-textures/1-introduction.tex index 8e44dc9..f617bcc 100644 --- a/2-perception/vhar-textures/1-introduction.tex +++ b/2-perception/vhar-textures/1-introduction.tex @@ -1,33 +1,24 @@ -When we look at the surface of an everyday object, we then touch it to confirm or contrast our initial visual impression and to estimate the properties of the object \cite{ernst2002humans}. -% -One of the main characteristics of a textured surface is its roughness, \ie the micro-geometry of the material \cite{klatzky2003feeling}, which is perceived equally well and similarly by both sight and touch \cite{bergmanntiest2007haptic,baumgartner2013visual,vardar2019fingertip}. -% -Many haptic devices and rendering methods have been used to generate realistic virtual rough textures \cite{culbertson2018haptics}. -% -One of the most common approaches is to reproduce the vibrations that occur when running across a surface, using a vibrotactile device attached to a hand-held tool \cite{culbertson2014modeling,culbertson2015should} or worn on the finger \cite{asano2015vibrotactile,friesen2024perceived}. -% -By providing timely vibrations synchronized with the movement of the tool or the finger moving on a real object, the perceived roughness of the surface can be augmented \cite{culbertson2015should,asano2015vibrotactile}. -% -In that sense, data-driven haptic textures have been developed as captures and models of real surfaces, resulting in the Penn Haptic Texture Toolkit (HaTT) database \cite{culbertson2014one}. -% -While these virtual haptic textures are perceived as similar to real textures \cite{culbertson2015should}, they have been evaluated using hand-held tools and not yet in a direct finger contact with the surface context, in particular combined with visual textures in an immersive \VE. 
+\noindent When we look at the surface of an everyday object, we then touch it to confirm or contrast our initial visual impression and to estimate the properties of the object, particularly its texture \secref[related_work]{visual_haptic_influence}. +Among the various haptic texture augmentations, data-driven methods make it possible to capture, model and reproduce the roughness perception of real surfaces when touched by a hand-held stylus \secref[related_work]{texture_rendering}. +Databases of visuo-haptic textures have been developed in this way \cite{culbertson2014one,balasubramanian2024sens3}, but they have not yet been explored in an immersive and direct touch context with \AR and wearable haptics. -Combined with virtual reality (VR), where the user is immersed in a visual \VE, wearable haptic devices have also proven to be effective in modifying the visuo-haptic perception of tangible objects touched with the finger, without needing to modify the object \cite{asano2012vibrotactile,asano2015vibrotactile,salazar2020altering}. -% -Worn on the finger, but not directly on the fingertip to keep it free to interact with tangible objects, they have been used to alter perceived stiffness, softness, friction and local deformations \cite{detinguy2018enhancing,salazar2020altering}. -% -However, the use of wearable haptic devices has been little explored in Augmented Reality (AR), where visual virtual content is integrated into the real-world environment, especially for augmenting texture sensations \cite{punpongsanon2015softar,maisto2017evaluation,meli2018combining,chan2021hasti,teng2021touch,fradin2023humans}. -% -A key difference in \AR compared to \VR is that the user can still see the real-world surroundings, including their hands, the augmented tangible objects and the worn haptic devices. 
-% -One additional issue of current \AR systems is their visual display limitations, or virtual content that may not be seen as consistent with the real world \cite{kim2018revisiting,macedo2023occlusion}. -% -These two factors have been shown to influence the perception of haptic stiffness rendering \cite{knorlein2009influence,gaffary2017ar}. -% -It remains to be investigated whether simultaneous and co-localized visual and haptic texture augmentation of tangible surfaces in \AR can be perceived in a coherent and realistic manner, and to what extent each sensory modality would contribute to the overall perception of the augmented texture. -% -Being able to coherently substitute the visuo-haptic texture of an everyday surface directly touched by a finger is an important step towards new \AR applications capable of visually and haptically augmenting the \RE of a user in a plausible way. +In this chapter, we investigate whether simultaneous and \textbf{co-localized visual and wearable haptic texture augmentation of tangible surfaces} in \AR can be perceived in a coherent and realistic manner, and to what extent each sensory modality would contribute to the overall perception of the augmented texture. +We used nine pairs of \textbf{data-driven visuo-haptic textures} from the \HaTT database \cite{culbertson2014one}, which we rendered using the visuo-haptic system presented in \chapref{vhar_system}, an \OST-\AR headset, and a wearable voice-coil device worn on the finger. +In a \textbf{user study}, 20 participants freely explored combinations of the visual and haptic textures to rate their coherence, realism and perceived roughness. +We aimed to assess \textbf{which haptic textures were matched with which visual textures}, how the roughness of the visual and haptic textures was perceived, and whether \textbf{the perceived roughness} could explain the matches made between them. 
- -In this paper, we investigate how users perceive a tangible surface touched with the index finger when it is augmented with a visuo-haptic roughness texture using immersive optical see-through \AR (OST-AR) and wearable vibrotactile stimuli provided on the index. -% -In a user study, twenty participants freely explored and evaluated the coherence, realism and roughness of the combination of nine representative pairs of visuo-haptic texture augmentations (\figref{setup}, left) from the HaTT database \cite{culbertson2014one}. +\noindentskip The contributions of this chapter are: +\begin{itemize} + \item The transposition of data-driven visuo-haptic textures to augment tangible objects in a direct touch context in immersive \AR. + \item A user study with 20 participants evaluating the coherence, realism and perceived roughness of nine pairs of these visuo-haptic texture augmentations. +\end{itemize} + +\noindentskip In the next sections, we first describe the apparatus and experimental design of the user study, including the two tasks performed. We then present the results obtained and discuss them before concluding. + +\bigskip + +\fig[0.7]{experiment/view}{First person view of the user study. }[ + As seen through the immersive \AR headset Microsoft HoloLens~2. + The visual texture overlays were statically displayed on the surfaces, allowing the user to move around to view them from different angles. + The haptic texture augmentations were generated based on \HaTT data-driven texture models and finger speed, and were rendered on the middle index phalanx as the finger slid on the touched surface. +][] diff --git a/2-perception/vhar-textures/2-experiment.tex b/2-perception/vhar-textures/2-experiment.tex index d1cd294..8d8b413 100644 --- a/2-perception/vhar-textures/2-experiment.tex +++ b/2-perception/vhar-textures/2-experiment.tex @@ -1,145 +1,123 @@ \section{User Study} \label{experiment} -\begin{subfigs}{setup}{User Study. 
}[][ - \item The nine visuo-haptic textures used in the user study, selected from the HaTT database \cite{culbertson2014one}. - The texture names were never shown, to prevent the use of the user's visual or haptic memory of the textures. - \item Experimental setup. - Participant sat in front of the tangible surfaces, which were augmented with visual textures displayed by the HoloLens~2 \AR headset and haptic roughness textures rendered by the vibrotactile haptic device placed on the middle index phalanx. - A webcam above the surfaces tracked the finger movements. - \item First person view of the user study, as seen through the immersive \AR headset HoloLens~2. - The visual texture overlays are statically displayed on the surfaces, allowing the user to move around to view them from different angles. - The haptic roughness texture is generated based on HaTT data-driven texture models and finger speed, and it is rendered on the middle index phalanx as it slides on the considered surface. - ] - \subfig[0.32]{experiment/textures} - \subfig[0.32]{experiment/setup} - \subfig[0.32]{experiment/view} -\end{subfigs} - -The user study aimed at analyzing the user perception of tangible surfaces when augmented through a visuo-haptic texture using \AR and vibrotactile haptic feedback provided on the finger touching the surfaces. -% -Nine representative visuo-haptic texture pairs from the HaTT database \cite{culbertson2014one} were investigated in two tasks: -% -(1) a matching task, where participants had to find the haptic texture that best matched a given visual texture; and (2) a ranking task, where participants had to rank only the haptic textures, only the visual textures, and the visuo-haptic texture pairs according to their perceived roughness. 
-% -Our objective is to assess which haptic textures were associated with which visual textures, how the roughness of the visual and haptic textures are perceived, and whether the perceived roughness can explain the matches made between them. +%The user study aimed at analyzing the user perception of tangible surfaces when augmented through a visuo-haptic texture using \AR and vibrotactile haptic feedback provided on the finger touching the surfaces. +%Nine representative visuo-haptic texture pairs from the \HaTT database \cite{culbertson2014one} were investigated in two tasks: +%\begin{enumerate} +% \item \level{Matching} task: participants had to find the haptic texture that best matched a given visual texture; and +% \item \level{Ranking} task: participants had to rank the haptic textures, the visual textures, and the visuo-haptic texture pairs according to their perceived roughness. +%\end{enumerate} +%Our objective is to assess which haptic textures were associated with which visual textures, how the roughness of the visual and haptic textures are perceived, and whether the perceived roughness can explain the matches made between them. \subsection{The textures} \label{textures} -The 100 visuo-haptic texture pairs of the HaTT database \cite{culbertson2014one} were preliminary tested and compared using \AR and vibrotactile haptic feedback on the finger on a tangible surface. -% +The 100 visuo-haptic texture pairs of the \HaTT database \cite{culbertson2014one} were preliminarily tested and compared using the apparatus described in \secref{apparatus} to select the most representative textures for the user study. +% visuo-haptic system presented in \chapref{vhar_system}, and with the vibrotactile haptic feedback provided on the middle-phalanx of the finger touching a tangible surface. on the finger on a tangible surface These texture models were chosen as they are visuo-haptic representations of a wide range of real textures that are publicly available online. 
-% -Nine texture pairs were selected (\figref{setup}, left) to cover various perceived roughness, from rough to smooth, as listed: Metal Mesh, Sandpaper~100, Brick~2, Cork, Sandpaper~320, Velcro Hooks, Plastic Mesh~1, Terra Cotta, Coffee Filter. -% +Nine texture pairs were selected (\figref{experiment/textures}) to cover a range of perceived roughness levels, from rough to smooth, as named in the database: \level{Metal Mesh}, \level{Sandpaper~100}, \level{Brick~2}, \level{Cork}, \level{Sandpaper~320}, \level{Velcro Hooks}, \level{Plastic Mesh~1}, \level{Terra Cotta}, \level{Coffee Filter}. All these visual and haptic textures are isotropic: their rendering (appearance or roughness) is the same regardless of the direction of movement on the surface, \ie there are no local deformations (holes, bumps, or breaks). \subsection{Apparatus} \label{apparatus} -\figref{setup} shows the experimental setup (middle) and the first person view (right) of the user study. -% -Nine \qty{5}{\cm} square cardboards with smooth, white melamine surface, arranged in a \numproduct{3 x 3} grid, were used as real tangible surfaces to augment. -% -Their poses were estimated with three \qty{2}{\cm} square AprilTag fiducial markers glued on the surfaces grid. -% -Similarly, a \qty{2}{\cm} square fiducial marker was glued on top of the vibrotactile actuator to detect the finger pose. -% -Positioned \qty{20}{\cm} above the surfaces, a webcam (StreamCam, Logitech) filmed the markers to track finger movements relative to the surfaces. -% -The visual textures were displayed on the tangible surfaces using the HoloLens~2 OST-AR headset (\figref{setup}, middle and right) within a \qtyproduct{43 x 29}{\degree} field of view at \qty{60}{\Hz}; a set of empirical tests enabled us to choose the best rendering characteristics in terms of transparency and brightness for the visual textures, that were used throughout the user study. 
-% -When a haptic texture was touched, a \qty{48}{kHz} audio signal was generated using the corresponding HaTT haptic texture model and the measured tangential speed of the finger, using the rendering procedure described in Culbertson \etal \cite{culbertson2014modeling}. -% -The normal force on the texture was assumed to be constant at \qty{1.2}{\N} to generate the audio signal from the model, as Culbertson \etal \cite{culbertson2015should}, who found that the HaTT textures can be rendered using only the speed as input without decreasing their perceived realism. -% -An amplifier (XY-502, not branded) converted this audio signal to a current transmitted to the vibrotactile voice-coil actuator (HapCoil-One, Actronika), that was encased in a \ThreeD-printed plastic shell firmly attached to the middle index phalanx of the participant's dominant hand, similarly to previous studies \cite{asano2015vibrotactile,friesen2024perceived}. -% -This voice-coil actuator was chosen for its wide frequency range (\qtyrange{10}{1000}{\Hz}) and its relatively low acceleration distortion, as specified by the manufacturer\footnoteurl{https://www.actronika.com/haptic-solutions}. -% -Overall latency was measured to \qty{46 \pm 6}{\ms}, as a result of latency in image acquisition \qty{16 \pm 1}{\ms}, fiducial marker detection \qty{8 \pm 3}{\ms}, network synchronization \qty{4 \pm 1}{\ms}, audio sampling \qty{3 \pm 1}{\ms}, and the vibrotactile actuator latency (\qty{15}{\ms}, as specified by the manufacturer\footnotemark[5]). -% -This latency was below the \qty{60}{\ms} threshold for vibrotactile feedback \cite{okamoto2009detectability} and was not noticed by the participants. -% +\figref{experiment/setup} shows the experimental setup, and \figref{experiment/view} the first person view of participants during the user study. The user study was held in a quiet room with no windows, with one light source of \qty{800}{\lumen} placed \qty{70}{\cm} above the table. 
+Nine \qty{5}{\cm} square cardboards with a smooth, white melamine surface, arranged in a \numproduct{3 x 3} grid, were used as real tangible surfaces to augment. +Their poses were estimated with three \qty{2}{\cm} AprilTag fiducial markers glued on the surface grid. +Similarly, a \qty{2}{\cm} fiducial marker was glued on top of the vibrotactile actuator to detect the finger pose. +Positioned \qty{20}{\cm} above the surfaces, a webcam (StreamCam, Logitech) filmed the markers to track finger movements relative to the surfaces, as described in \secref[vhar_system]{virtual_real_alignment}. +The visual textures were displayed on the tangible surfaces using the \OST-\AR headset Microsoft HoloLens~2, running at \qty{60}{FPS} a custom application made with Unity 2021.1 and Mixed Reality Toolkit (MRTK) 2.7.2\footnoteurl{https://learn.microsoft.com/windows/mixed-reality/mrtk-unity}. +A set of empirical tests enabled us to choose the best rendering characteristics in terms of transparency and brightness for the visual textures, which were used throughout the user study. + +When a virtual haptic texture was touched, a \qty{48}{kHz} audio signal was generated using the rendering procedure described in \cite{culbertson2014modeling} from the corresponding \HaTT haptic texture model and the measured tangential speed of the finger (\secref[vhar_system]{texture_generation}). +The normal force on the texture was assumed to be constant at \qty{1.2}{\N} to generate the audio signal from the model, following \textcite{culbertson2015should}, who found that the \HaTT textures can be rendered using only the speed as input without decreasing their perceived realism. 
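As a concrete illustration of this speed-driven rendering, the following is a minimal sketch of autoregressive (AR) texture synthesis in the spirit of the \HaTT procedure; the fixed \texttt{ar\_coeffs} and \texttt{noise\_std} values are placeholders for the parameters that the actual renderer interpolates from the recorded texture model, and the constant-force assumption above is kept.

```python
import numpy as np

def synthesize_vibration(speeds, ar_coeffs, noise_std, seed=0):
    """Toy AR-model texture synthesis, one output sample per speed value:
    each sample is a linear combination of past samples plus white noise
    whose amplitude is scaled by the finger speed.
    (The real HaTT renderer interpolates ar_coeffs and noise_std from the
    recorded model as speed varies; they are fixed here for illustration.)"""
    rng = np.random.default_rng(seed)
    p = len(ar_coeffs)
    out = np.zeros(len(speeds))
    for n in range(len(speeds)):
        history = out[max(0, n - p):n][::-1]  # most recent sample first
        prediction = float(ar_coeffs[:len(history)] @ history)
        out[n] = prediction + rng.normal(0.0, noise_std * speeds[n])
    return out
```

At the study's \qty{48}{kHz} output rate, \texttt{speeds} would hold the per-sample tangential finger speed; a zero speed yields a silent signal.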
+The vibrotactile voice-coil actuator (HapCoil-One, Actronika) was firmly attached to the middle index phalanx of the participant's dominant hand using a Velcro strap, similarly to previous studies \cite{asano2015vibrotactile,friesen2024perceived}. +%An amplifier (XY-502, not branded) converted this audio signal to a current transmitted to the vibrotactile voice-coil actuator (HapCoil-One, Actronika), that was encased in a \ThreeD-printed plastic shell firmly attached to the middle index phalanx of the participant's dominant hand, similarly to previous studies \cite{asano2015vibrotactile,friesen2024perceived}. +%This voice-coil actuator was chosen for its wide frequency range (\qtyrange{10}{1000}{\Hz}) and its relatively low acceleration distortion, as specified by the manufacturer\footnoteurl{https://www.actronika.com/haptic-solutions}. +%Overall latency was measured to \qty{46 \pm 6}{\ms}, as a result of latency in image acquisition \qty{16 \pm 1}{\ms}, fiducial marker detection \qty{8 \pm 3}{\ms}, network synchronization \qty{4 \pm 1}{\ms}, audio sampling \qty{3 \pm 1}{\ms}, and the vibrotactile actuator latency (\qty{15}{\ms}, as specified by the manufacturer\footnotemark[5]). +%This latency was below the \qty{60}{\ms} threshold for vibrotactile feedback \cite{okamoto2009detectability} and was not noticed by the participants. + +\begin{subfigs}{setup}{Textures used and experimental setup of the user study. }[][ + \item The nine visuo-haptic textures used in the user study, selected from the \HaTT database \cite{culbertson2014one}. + The texture names were never shown, to prevent the use of the user's visual or haptic memory of the textures. + \item Experimental setup. + The participant sat in front of the tangible surfaces, which were augmented with visual textures displayed by the Microsoft HoloLens~2 \AR headset and haptic roughness textures rendered by the vibrotactile haptic device placed on the middle index phalanx. 
+ A webcam above the surfaces tracked the finger movements. + ] + \subfig[0.49]{experiment/textures} + \subfig[0.49]{experiment/setup} +\end{subfigs} + \subsection{Procedure and Collected Data} \label{procedure} Participants were first given written instructions about the experimental setup, the tasks, and the procedure of the user study. -% -Then, after having signed an informed consent form, they were asked to seat in front of the table with the experimental setup and to wear the HoloLens~2 \AR headset. The experimenter firmly attached the plastic shell encasing the vibrotactile actuator to the middle index phalanx of their dominant hand. -% -As the haptic device generated no audible noise, participants did not wear any noise reduction headphones. -% +Then, after having signed an informed consent form, they were asked to sit in front of the table with the experimental setup and to wear the \AR headset. +%The experimenter firmly attached the plastic shell encasing the vibrotactile actuator to the middle index phalanx of their dominant hand. +As the haptic textures generated no audible noise, participants did not wear any noise reduction headphones. A calibration of both the HoloLens~2 and the hand tracking was performed to ensure the correct alignment of the visual and haptic textures on the tangible surfaces. -% -Finally, participants familiarized with the augmented surface in a 2-min training session with textures different from the ones used in the user study. +Finally, participants familiarized themselves with the augmented surface in a \qty{2}{min} training session with textures different from the ones used in the user study. -Participants started with the \emph{matching task}. -% +Participants started with the \level{Matching} task. They were informed that the user study involved nine pairs of corresponding visual and haptic textures that were separated and shuffled. 
-% On each trial, the same visual texture was displayed on the nine tangible surfaces, while the nine haptic textures were rendered on only one of the surfaces at a time, \ie all surfaces were augmented by the same visual texture, but each surface was augmented by a different haptic texture. -% The placement of the haptic textures was randomized before each trial. -% Participants were instructed to look closely at the details of the visual textures and explore the haptic textures with a constant pressure and various speeds to find the haptic texture that best matched the visual texture, \ie choose the surface with the most coherent visual-haptic texture pair. -% -The texture names were never given or shown to prevent the use of visual or haptic memory of the textures, nor a definition of what roughness is was given, to let participants complete the task as naturally as possible, similarly to Bergmann Tiest \etal \cite{bergmanntiest2007haptic}. +The texture names were never given or shown, to prevent participants from relying on their visual or haptic memory of the textures; nor was a definition of roughness given, so that participants could complete the task as naturally as possible, similarly to \textcite{bergmanntiest2007haptic}. -Then, participants performed the \emph{ranking task}, employing the same setup as the matching task and the same 9 textures. -% +Then, participants performed the \level{Ranking} task, employing the same setup as the \level{Matching} task and the same 9 textures. In this case, participants were asked to rank the textures according to their perceived roughness. -% -First, they ranked all the haptic textures (without any visual augmentation given), then all the visual textures (without any haptic augmentation given), and finally all the visuo-haptic texture pairs together, being informed that they were the correct matches as per the original HaTT database. 
-% +First, they ranked all the haptic textures (without any visual augmentation given), then all the visual textures (without any haptic augmentation given), and finally all the visuo-haptic texture pairs together, being informed that they were the correct matches as per the original \HaTT database. The placement of the textures was also randomized before each trial. -After each task, participants answered to the following 7-item Likert scale questions (1=Not at all, 7=Extremely): -% -(\textit{Haptic Difficulty}) How difficult was it to differentiate the tactile textures? -(\textit{Visual Difficulty}) How difficult was it to differentiate the visual textures? -(\textit{Textures Match}) For the visual-tactile pairs you have chosen, how coherent were the tactile textures with the corresponding visual textures? -(\textit{Haptic Realism}) How realistic were the tactile textures? -(\textit{Visual Realism}) How realistic were the visual textures? -(\textit{Uncomfort}) How uncomfortable was to use the haptic device? -% -In an open question, participants commented also on their strategy for completing the matching task (How did you associate the tactile textures with the visual textures?) and the ranking task (How did you rank the textures?). -% The user study took on average 1 hour to complete. +\subsection{Experimental Design} +\label{design} + +The user study was a within-subjects design with two tasks: \level{Matching} and \level{Ranking}. + +In the \level{Matching} task, participants had to find the haptic texture that best matched a given visual texture. +It had one within-subjects factor, \factor{Visual Texture} with the following levels: +\level{Metal Mesh}, \level{Sandpaper~100}, \level{Brick~2}, \level{Cork}, \level{Sandpaper~320}, \level{Velcro Hooks}, \level{Plastic Mesh~1}, \level{Terra Cotta}, \level{Coffee Filter}. 
+To account for learning and fatigue effects, the order of \factor{Visual Texture} was counterbalanced using a balanced \numproduct{18 x 18} Latin square design. +A total of 9 textures \x 3 repetitions = 27 matching trials were performed per participant. + +In the \level{Ranking} task, participants had to rank the haptic textures, the visual textures, and the visuo-haptic texture pairs according to their perceived roughness. +It had one within-subjects factor, \factor{Modality} with the following levels: \level{Visual}, \level{Haptic}, \level{Visuo-Haptic}. +Each modality level was ranked once per participant following the fixed order listed above (\secref{procedure}). +%The order of \level{Modality} was fixed as listed above. +%A total of 3 modalities = 60 ranking trials were collected. + \subsection{Participants} \label{participants} -Twenty participants took part to the user study (12 males, 7 females, 1 preferred not to say), aged between 20 and 60 years old (M=29.1, SD=9.4). -% +Twenty participants took part in the user study (12 males, 7 females, 1 preferred not to say), aged between 20 and 60 years (\mean{29.1}, \sd{9.4}). One participant was left-handed and all others were right-handed; they all performed the user study with their dominant hand. -% All participants had normal or corrected-to-normal vision and none of them had a known hand or finger impairment. -% They rated their experience with haptics, \AR, and \VR (\enquote{I use it every month or more}); 10 were experienced with haptics, 2 with \AR, and 10 with \VR. -% Experiences were correlated between haptics and \AR (\spearman{0.53}), haptics and \VR (\spearman{0.61}), and \AR and \VR (\spearman{0.74}); but not with age (\spearman{-0.06} to \spearman{-0.05}) or gender (\spearman{0.10} to \spearman{0.27}). -% Participants were recruited at the university on a voluntary basis. -% They all signed an informed consent form before the user study. 
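For illustration, a balanced Latin square of the kind used for the \level{Matching} task can be generated with the standard first-row offset pattern $0, 1, n-1, 2, n-2, \dots$, mirroring the rows when the number of conditions is odd; this sketch (not the authors' code) yields the 18 orderings used for the 9 textures.

```python
def balanced_latin_square(n):
    """Orders conditions 0..n-1 so that, across rows, each condition
    precedes every other equally often (Bradley's construction).
    For odd n the rows are mirrored, giving 2n orderings,
    e.g. 18 rows for 9 conditions."""
    offsets = [0]
    lo, hi = 1, n - 1
    for k in range(1, n):
        if k % 2 == 1:
            offsets.append(lo)
            lo += 1
        else:
            offsets.append(hi)
            hi -= 1
    rows = [[(r + o) % n for o in offsets] for r in range(n)]
    if n % 2 == 1:
        rows += [row[::-1] for row in rows]  # mirrored rows restore balance
    return rows
```

Each participant would then be assigned one row of the square as their presentation order.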
-\subsection{Design} -\label{design} +\subsection{Collected Data} +\label{collected_data} -The matching task was a single-factor within-subjects design, \textit{Visual Texture}, with the following levels: -% -Metal Mesh, Sandpaper~100, Brick~2, Cork, Sandpaper~320, Velcro Hooks, Plastic Mesh~1, Terra Cotta, Coffee Filter. -% -To account for learning and fatigue effects, the order of \textit{Visual Texture} was counterbalanced using a balanced \numproduct{18 x 18} Latin square design. -% -A total of 20 participants \x 9 textures \x 3 repetitions = 540 matching trials were collected. -% -The ranking task was a single-factor within-subjects design, \textit{Modality}, with the following levels: Visual, Haptic, Visuo-Haptic. -% -The order of \textit{Modality} was fixed as listed above. -% -A total of 20 participants \x 3 modalities = 60 ranking trials were collected. +For each trial of the \level{Matching} task, the \response{Haptic Texture} chosen for the displayed \factor{Visual Texture} was recorded. +The \response{Completion Time} was also measured as the time between the visual texture display and the haptic texture selection. +For each modality of the \level{Ranking} task, the \response{Rank} given to each of the presented visual textures, haptic textures, or visuo-haptic texture pairs was recorded. + +\noindentskip After each of the two tasks, participants answered the following 7-item Likert scale questions (1=Not at all, 7=Extremely): +\begin{itemize} + \item \response{Haptic Difficulty}: How difficult was it to differentiate the tactile textures? + \item \response{Visual Difficulty}: How difficult was it to differentiate the visual textures? + \item \response{Textures Match}: For the visual-tactile pairs you have chosen, how coherent were the tactile textures with the corresponding visual textures? + \item \response{Haptic Realism}: How realistic were the tactile textures? + \item \response{Visual Realism}: How realistic were the visual textures? 
+ \item \response{Uncomfort}: How uncomfortable was it to use the haptic device? +\end{itemize} + +\noindentskip In an open question, participants also commented on their strategy for completing the \level{Matching} task (\enquote{How did you associate the tactile textures with the visual textures?}) and the \level{Ranking} task (\enquote{How did you rank the textures?}). diff --git a/2-perception/vhar-textures/3-results.tex b/2-perception/vhar-textures/3-results.tex index 8005c33..d1d6563 100644 --- a/2-perception/vhar-textures/3-results.tex +++ b/2-perception/vhar-textures/3-results.tex @@ -4,158 +4,145 @@ \subsection{Textures Matching} \label{results_matching} -\subsubsection{Confusion Matrix} +\paragraph{Confusion Matrix} \label{results_matching_confusion_matrix} 
- ] - \subfig[0.58]{results/matching_confusion_matrix}% - \subfig[0.41]{results/ranking_mean_ci}% -\end{subfigs} +\figref{results/matching_confusion_matrix} shows the confusion matrix of the \level{Matching} task, \ie the proportion of times each haptic texture was selected in response to the presentation of each visual texture. +A two-sample Pearson Chi-Squared test (\chisqr{64}{540}{420}, \pinf{0.001}) and Holm-Bonferroni adjusted binomial tests indicated that the following (\factor{Visual Texture}, \response{Haptic Texture}) pairs were selected statistically significantly more often than chance (\ie \percent{11} each): +\begin{itemize} + \item (\level{Sandpaper~320}, \level{Coffee Filter}), (\level{Terra Cotta}, \level{Coffee Filter}), and (\level{Coffee Filter}, \level{Coffee Filter}) (\pinf{0.001} each); + \item (\level{Cork}, \level{Sandpaper~320}), (\level{Brick~2}, \level{Plastic Mesh~1}), (\level{Brick~2}, \level{Sandpaper~320}), (\level{Plastic Mesh~1}, \level{Sandpaper~320}), and (\level{Sandpaper~320}, \level{Plastic Mesh~1}) (\pinf{0.01} each); and + \item (\level{Metal Mesh}, \level{Cork}), (\level{Cork}, \level{Velcro Hooks}), (\level{Velcro Hooks}, \level{Plastic Mesh~1}), (\level{Velcro Hooks}, \level{Sandpaper~320}), and (\level{Coffee Filter}, \level{Terra Cotta}) (\pinf{0.05} each). +\end{itemize} -\figref{results_matching_ranking} (left) shows the confusion matrix of the matching task with the visual textures and the proportion of haptic texture selected in response, \ie the proportion of times the corresponding haptic texture was selected in response to the presentation of the corresponding visual texture. 
-% -A two-sample Pearson Chi-Squared test (\chisqr{64}{540}{420}, \pinf{0.001}) and Holm-Bonferroni adjusted binomial tests indicated that the following (Visual Texture, Haptic Texture) pairs have proportion selections statistically significantly higher than chance (\ie 11~\% each): % -% -(Sandpaper~320, Coffee Filter), (Terra Cotta, Coffee Filter), and (Coffee Filter, Coffee Filter) (\pinf{0.001} each); % -(Cork, Sandpaper~320), (Brick~2, Plastic Mesh~1), (Brick~2, Sandpaper~320), (Plastic Mesh~1, Sandpaper~320), and (Sandpaper~320, Plastic Mesh~1) (\pinf{0.01}); % -and (Metal Mesh, Cork), (Cork, Velcro Hooks), (Velcro Hooks, Plastic Mesh~1), (Velcro Hooks, Sandpaper~320), and (Coffee Filter, Terra Cotta) (\pinf{0.05} each). -% +Except for one visual texture (\level{Sandpaper~100}) and four haptic textures (\level{Metal Mesh}, \level{Sandpaper~100}, \level{Brick~2}, and \level{Terra Cotta}), every visual and haptic texture was matched statistically significantly more often than chance with at least one haptic and visual texture, respectively. 
+However, many mistakes were made: the expected haptic texture was selected on average only \percent{20} of the time for five of the visual textures, and even around \percent{5} for (visual) \level{Sandpaper~100}, \level{Brick~2}, and \level{Sandpaper~320}. +Only haptic \level{Coffee Filter} was correctly selected \percent{59} of the time, and was also frequently matched with the visual \level{Sandpaper~320} and \level{Terra Cotta} (around \percent{45} each). +Similarly, the haptic textures \level{Sandpaper~320} and \level{Plastic Mesh~1} were also selected for four and three visual textures, respectively (around \percent{25} each). Additionally, the Spearman correlations between the trials were computed for each participant and only 21 out of 60 were statistically significant (\pinf{0.05}), with a mean \spearman{0.52} (\ci{0.43}{0.59}). +\fig[0.82]{results/matching_confusion_matrix}{Confusion matrix of the \level{Matching} task.}[ + The presented visual textures are shown as columns and the proportions of selected haptic textures as rows. + The number in a cell is the proportion of times the corresponding haptic texture was selected in response to the presentation of the corresponding visual texture. + The diagonal represents the expected correct answers. + Holm-Bonferroni adjusted binomial test results are marked in bold when the proportion is higher than chance (\ie more than \percent{11}, \pinf{0.05}). +] + These results indicate that the participants hesitated between several haptic textures for a given visual texture, as also reported in several comments; some haptic textures were favored while others were almost never selected. -% Another explanation could be that the participants had difficulty estimating the roughness of the visual textures. 
-% Indeed, many participants explained that they tried to identify or imagine the roughness of a given visual texture and then to select the most plausible haptic texture, in terms of frequency and/or amplitude of vibrations. -\subsubsection{Completion Time} -\label{results_matching_time} +\paragraph{Completion Time} -To verify that the difficulty with all the visual textures was the same on the matching task, the \textit{Completion Time} of a trial, \ie the time between the visual texture display and the haptic texture selection, was analyzed. -% -As the \textit{Completion Time} results were Gamma distributed, they were transformed with a log to approximate a normal distribution. -% -A \LMM on the log \textit{Completion Time} with the \textit{Visual Texture} as fixed effect and the \textit{Participant} as random intercept was performed. -% +To verify that the difficulty was the same for all the visual textures in the \level{Matching} task, the \response{Completion Time} of a trial was analyzed. +As the \response{Completion Time} results were Gamma distributed, they were log-transformed to approximate a normal distribution. +A \LMM on the log \response{Completion Time} with the \factor{Visual Texture} as fixed effect and the participant as random intercept was performed. Normality was verified with a QQ-plot of the model residuals. -% -No statistical significant effect of \textit{Visual Texture} was found (\anova{8}{512}{1.9}, \p{0.06}) on \textit{Completion Time} (\geomean{44}{\s}, \ci{42}{46}), indicating an equal difficulty and participant behaviour for all the visual textures. +No statistically significant effect of \factor{Visual Texture} was found (\anova{8}{512}{1.9}, \p{0.06}) on \response{Completion Time} (\geomean{44}{\s}, \ci{42}{46}), indicating similar difficulty and participant behaviour across all the visual textures. 
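The per-cell chance-level comparisons in this subsection combine exact binomial tests with a Holm-Bonferroni step-down correction; a minimal standard-library sketch of that procedure (illustrative only, not the analysis script used for the study) is:

```python
from math import comb

def binom_sf(k, n, p):
    """One-sided binomial p-value: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def holm(pvals, alpha=0.05):
    """Holm-Bonferroni step-down: visit p-values in increasing order and
    reject while p <= alpha / (m - rank); stop at the first failure."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m = len(pvals)
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break
    return reject
```

For a confusion-matrix cell, \texttt{binom\_sf(count, trials, 1/9)} would test selection above the \percent{11} chance level before the Holm correction is applied across all cells.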
\subsection{Textures Ranking} \label{results_ranking} -\figref{results_matching_ranking} (right) presents the results of the three rankings of the haptic textures alone, the visual textures alone, and the visuo-haptic texture pairs. -% -Almost all the texture pairs in the \textit{Haptic Textures Ranking} results were statistically significantly different (\chisqr{8}{20}{146}, \pinf{0.001}; \pinf{0.05} for each comparison), except between (Metal Mesh, Sandpaper~100), (Cork, Brick~2), (Cork, Sandpaper~320) (Plastic Mesh~1, Velcro Hooks), and (Plastic Mesh~1, Terra Cotta). -% +\figref{results/ranking_mean_ci} presents the results of the three rankings of the haptic textures alone, the visual textures alone, and the visuo-haptic texture pairs. + +\paragraph{Haptic Textures Ranking} + +Almost all the texture pairs in the haptic textures ranking results were statistically significantly different (\chisqr{8}{20}{146}, \pinf{0.001}; \pinf{0.05} for each comparison), except between (\level{Metal Mesh}, \level{Sandpaper~100}), (\level{Cork}, \level{Brick~2}), (\level{Cork}, \level{Sandpaper~320}), (\level{Plastic Mesh~1}, \level{Velcro Hooks}), and (\level{Plastic Mesh~1}, \level{Terra Cotta}). Average Kendall's Tau correlations between the participants indicated a high consensus (\kendall{0.82}, \ci{0.81}{0.84}), showing that participants perceived the roughness of the haptic textures similarly. -% -Most of the texture pairs in the \textit{Visual Textures Ranking} results were also statistically significantly different (\chisqr{8}{20}{119}, \pinf{0.001}; \pinf{0.05} for each comparison), except for the following groups: \{Metal Mesh, Cork, Plastic Mesh~1\}; \{Sandpaper~100, Brick~2, Plastic Mesh~1, Velcro Hooks\}; \{Cork, Velcro Hooks\}; \{Sandpaper~320, Terra Cotta\}; and \{Sandpaper~320, Coffee Filter\}. 
-% -Even though the consensus was high (\kendall{0.61}, \ci{0.58}{0.64}), the roughness of the visual textures were more difficult to estimate, in particular for Plastic Mesh~1 and Velcro Hooks. -% -Also, almost all the texture pairs in the \textit{Visuo-Haptic Textures Ranking} results were statistically significantly different (\chisqr{8}{20}{140}, \pinf{0.001}; \pinf{0.05} for each comparison), except for the following groups: \{Sandpaper~100, Cork\}; \{Cork, Brick~2\}; and \{Plastic Mesh~1, Velcro Hooks, Sandpaper~320\}. -% -The consensus between the participants was also high \kendall{0.77}, \ci{0.74}{0.79}. -% + +\paragraph{Visual Textures Ranking} + +Most of the texture pairs in the visual textures ranking results were also statistically significantly different (\chisqr{8}{20}{119}, \pinf{0.001}; \pinf{0.05} for each comparison), except for the following groups: \{\level{Metal Mesh}, \level{Cork}, \level{Plastic Mesh~1}\}; \{\level{Sandpaper~100}, \level{Brick~2}, \level{Plastic Mesh~1}, \level{Velcro Hooks}\}; \{\level{Cork}, \level{Velcro Hooks}\}; \{\level{Sandpaper~320}, \level{Terra Cotta}\}; and \{\level{Sandpaper~320}, \level{Coffee Filter}\}. +Even though the consensus was high (\kendall{0.61}, \ci{0.58}{0.64}), the roughness of the visual textures was more difficult to estimate, in particular for \level{Plastic Mesh~1} and \level{Velcro Hooks}. + +\paragraph{Visuo-Haptic Textures Ranking} + +Almost all the texture pairs in the visuo-haptic textures ranking results were also statistically significantly different (\chisqr{8}{20}{140}, \pinf{0.001}; \pinf{0.05} for each comparison), except for the following groups: \{\level{Sandpaper~100}, \level{Cork}\}; \{\level{Cork}, \level{Brick~2}\}; and \{\level{Plastic Mesh~1}, \level{Velcro Hooks}, \level{Sandpaper~320}\}. +The consensus between the participants was also high (\kendall{0.77}, \ci{0.74}{0.79}). 
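The consensus values reported in this subsection are average pairwise Kendall's Tau correlations between participants' rankings; for two tie-free rankings of the same items, a minimal sketch (illustrative, not the analysis code) is:

```python
def kendall_tau(a, b):
    """Kendall's tau-a between two rankings of the same items:
    (concordant pairs - discordant pairs) / total pairs, assuming no ties."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

A value of 1 indicates identical orderings, $-1$ fully reversed ones; the reported consensus averages this statistic over all participant pairs.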
Finally, calculating the similarity of the three rankings of each participant, the \textit{Visuo-Haptic Textures Ranking} was on average highly similar to the \textit{Haptic Textures Ranking} (\kendall{0.79}, \ci{0.72}{0.86}) and moderately to the \textit{Visual Textures Ranking} (\kendall{0.48}, \ci{0.39}{0.56}). -% A Wilcoxon signed-rank test indicated that this difference was statistically significant (\wilcoxon{190}, \p{0.002}). -% -These results indicate, with \figref{results_matching_ranking} (right), that the two haptic and visual modalities were integrated together, the resulting roughness ranking being between the two rankings of the modalities alone, but with haptics predominating. +These results indicate that the two haptic and visual modalities were integrated, with the resulting roughness ranking lying between the two rankings of the modalities alone, but with haptics predominating. + +\fig[0.6]{results/ranking_mean_ci}{Means with bootstrap \percent{95} \CI of the three rankings of the haptic textures alone, the visual textures alone, and the visuo-haptic texture pairs. }[ + A lower rank means that the texture was considered rougher, a higher rank means smoother. +] \subsection{Perceived Similarity of Visual and Haptic Textures} -\label{results_similarity} +\label{results_clusters} -\begin{subfigs}{results_similarity}{% - (Left) Correspondence analysis of the matching task confusion matrix (\figref{results_matching_ranking}, left). - The visual textures are represented as blue squares, the haptic textures as red circles. % - The closer the textures are, the more similar they were judged. % - The first dimension (horizontal axis) explains 60~\% of the variance, the second dimension (vertical axis) explains 30~\% of the variance. 
- (Right) Dendrograms of the hierarchical clusterings of the haptic textures (left) and visual textures (right) of the matching task confusion matrix (\figref{results_matching_ranking}, left), using Euclidian distance and Ward's method. % - The height of the dendrograms represents the distance between the clusters. % - } - \begin{minipage}[c]{0.50\linewidth}% - \centering% - \subfig[1.0]{results/matching_correspondence_analysis}% - \end{minipage}% - \begin{minipage}[c]{0.50\linewidth}% - \centering% - \subfig[0.66]{results/clusters_haptic}% - \par% - \subfig[0.66]{results/clusters_visual}% - \end{minipage}% -\end{subfigs} +The high level of agreement between participants on the three haptic, visual and visuo-haptic rankings in the \level{Ranking} task (\secref{results_ranking}), as well as the similarity of the within-participant rankings, suggests that participants perceived the roughness of the textures similarly, but differed in their strategies for matching the haptic and visual textures in the \level{Matching} task (\secref{results_matching}). -The high level of agreement between participants on the three haptic, visual and visuo-haptic rankings (\secref{results_ranking}), as well as the similarity of the within-participant rankings, suggests that participants perceived the roughness of the textures similarly, but differed in their strategies for matching the haptic and visual textures in the matching task (\secref{results_matching}). -% -To further investigate the perceived similarity of the haptic and visual textures and to identify groups of textures that were perceived as similar on the matching task, a correspondence analysis and a hierarchical clustering were performed on the matching task confusion matrix (\figref{results_matching_ranking}, left). 
+To further investigate the perceived similarity of the haptic and visual textures and to identify groups of textures that were perceived as similar on the \level{Matching} task, a correspondence analysis and a hierarchical clustering were performed on the matching task confusion matrix (\figref{results/matching_confusion_matrix}).

-The correspondence analysis captured 60~\% and 29~\% of the variance in the first and second dimensions, respectively, with the remaining dimensions each accounting for less than 5~\% each.
-%
-\figref{results_similarity} (left) shows the first two dimensions with the 18 haptic and visual textures.
-%
-The first dimension was similar to the rankings (\figref{results_matching_ranking}, right), distributing the textures according to their perceived roughness.
-%
+\paragraph{Correspondence Analysis}
+
+The correspondence analysis captured \percent{60} and \percent{29} of the variance in the first and second dimensions, respectively, with the remaining dimensions each accounting for less than \percent{5}.
+\figref{results/matching_correspondence_analysis} shows the first two dimensions with the 18 haptic and visual textures.
+The first dimension was similar to the rankings (\figref{results/ranking_mean_ci}), distributing the textures according to their perceived roughness.
 It seems that the second dimension opposed textures that were perceived as hard with those perceived as softer, as also reported by participants.
-%
-Stiffness is indeed an important perceptual dimension of a material \cite{okamoto2013psychophysical,culbertson2014modeling}.
+Stiffness is indeed an important perceptual dimension of a material (\secref[related_work]{hardness}).% \cite{okamoto2013psychophysical,culbertson2014modeling}.

-\figref{results_similarity} (right) shows the dendrograms of the two hierarchical clusterings of the haptic and visual textures, constructed using the Euclidean distance and the Ward's method on squared distance.
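The correspondence analysis can be sketched with NumPy as the SVD of the standardized residuals of the contingency table, where the squared singular values give the share of variance (inertia) per dimension; the matrix below is randomly generated stand-in data, not the study's confusion matrix.

```python
import numpy as np

# Random stand-in for the 9x9 matching confusion matrix (visual x haptic);
# not the study's actual data.
rng = np.random.default_rng(1)
N = rng.integers(1, 20, size=(9, 9)).astype(float)

P = N / N.sum()                      # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)  # row and column masses
# Standardized residuals; their SVD gives the principal dimensions,
# and the squared singular values the inertia (variance) per dimension.
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
sv = np.linalg.svd(S, compute_uv=False)
inertia = sv**2 / (sv**2).sum()
print([round(x, 2) for x in inertia[:2]])  # share of the first two dimensions
```

Plotting the row and column coordinates of the first two dimensions would reproduce a map like the one described, with similar haptic and visual textures close together.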
-%
-The four identified haptic texture clusters were: "Roughest" \{Metal Mesh, Sandpaper~100, Brick~2, Cork\}; "Rougher" \{Sandpaper~320, Velcro Hooks\}; "Smoother" \{Plastic Mesh~1, Terra Cotta\}; "Smoothest" \{Coffee Filter\} (\figref{results_similarity}, top-right).
-%
-Similar to the haptic ranks (\figref{results_matching_ranking}, right), the clusters could have been named according to their perceived roughness.
-%
-It also shows that the participants compared and ranked the haptic textures during the matching task to select the one that best matched the given visual texture.
-%
-The five identified visual texture clusters were: "Roughest" \{Metal Mesh\}; "Rougher" \{Sandpaper~100, Brick~2, Velcro Hooks\}; "Medium" \{Cork, Plastic Mesh~1\}; "Smoother" \{Sandpaper~320, Terra Cotta\}; "Smoothest" \{Coffee Filter\} (\figref{results_similarity}, bottom-right).
-%
+\fig[0.6]{results/matching_correspondence_analysis}{
+	Correspondence analysis of the \level{Matching} task confusion matrix (\figref{results/matching_confusion_matrix}).
+}[
+	%The visual textures are represented as green squares, the haptic textures as red circles. %
+	The closer the haptic and visual textures are, the more similar they were judged. %
+	The first dimension (horizontal axis) explains \percent{60} of the variance, the second dimension (vertical axis) explains \percent{30} of the variance.
+]
+
+\paragraph{Hierarchical Clustering}
+
+\figref{results_clusters} shows the dendrograms of the two hierarchical clusterings of the haptic and visual textures, constructed using Euclidean distance and Ward's method on squared distance.
+
+The four identified haptic texture clusters were: "Roughest" \{\level{Metal Mesh}, \level{Sandpaper~100}, \level{Brick~2}, \level{Cork}\}; "Rougher" \{\level{Sandpaper~320}, \level{Velcro Hooks}\}; "Smoother" \{\level{Plastic Mesh~1}, \level{Terra Cotta}\}; "Smoothest" \{\level{Coffee Filter}\} (\figref{results/clusters_haptic}).
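The hierarchical clustering described above (Euclidean distance, Ward's method) can be sketched with SciPy; the confusion-matrix rows below are random stand-ins for one texture profile each, and cutting the dendrogram into four clusters mirrors the haptic-texture grouping.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Random stand-in for the rows of the matching confusion matrix
# (one row per haptic texture); not the study's data.
rng = np.random.default_rng(0)
profiles = rng.random((9, 9))

# Agglomerative clustering with Euclidean distance and Ward's method,
# as used for the dendrograms of the haptic and visual textures.
Z = linkage(profiles, method="ward", metric="euclidean")

# Cut the dendrogram into 4 clusters (the haptic grouping in the text)
labels = fcluster(Z, t=4, criterion="maxclust")
print(sorted(set(labels)))
```

Passing `Z` to `scipy.cluster.hierarchy.dendrogram` would draw the tree, with merge heights corresponding to cluster distances as in the figures.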
+Similar to the haptic ranks (\figref{results/ranking_mean_ci}), the clusters could have been named according to their perceived roughness.
+This also suggests that the participants compared and ranked the haptic textures during the \level{Matching} task to select the one that best matched the given visual texture.
+
+The five identified visual texture clusters were: "Roughest" \{\level{Metal Mesh}\}; "Rougher" \{\level{Sandpaper~100}, \level{Brick~2}, \level{Velcro Hooks}\}; "Medium" \{\level{Cork}, \level{Plastic Mesh~1}\}; "Smoother" \{\level{Sandpaper~320}, \level{Terra Cotta}\}; "Smoothest" \{\level{Coffee Filter}\} (\figref{results/clusters_visual}).
 They are also easily identifiable on the visual ranking results, which also made it possible to name them.

-\begin{subfigs}{results_clusters}{
-	Confusion matrices of the visual textures with the corresponding haptic texture clusters selected in proportion.
-	}[
-	Holm-Bonferroni adjusted binomial test results are marked in bold when the proportion is higher than chance (\ie more than 20~\%, \pinf{0.05}).
-	][
-	\item Confusion matrix of the visual texture clusters.
-	\item Confusion matrix of the visual texture ranks.
+\begin{subfigs}{results_clusters}{Dendrograms of the hierarchical clusterings of the \level{Matching} task confusion matrix.}[
+	Computed with Euclidean distance and Ward's method on squared distance.
+	The height of the dendrograms represents the distance between the clusters.
+	][%
+	\item For the haptic textures.
+	\item For the visual textures.
 ]
-	\subfig[1]{results/haptic_visual_clusters_confusion_matrices}%
+	\subfig[0.45]{results/clusters_haptic}
+	\subfig[0.45]{results/clusters_visual}
 \end{subfigs}

+\paragraph{Confusion Matrices of Clusters}
+
 Based on these results, two alternative confusion matrices were constructed.
-% -\figref{results_clusters} (left) shows the confusion matrix of the matching task with visual texture clusters and the proportion of haptic texture clusters selected in response. -% -A two-sample Pearson Chi-Squared test (\chisqr{16}{540}{353}, \pinf{0.001}) and Holm-Bonferroni adjusted binomial tests indicated that the following (Visual Cluster, Haptic Cluster) pairs have proportion selections statistically significantly higher than chance (\ie 20~\% each): % + +\figref{results/haptic_visual_clusters_confusion_matrices} (left) shows the confusion matrix of the \level{Matching} task with visual texture clusters and the proportion of haptic texture clusters selected in response. +A two-sample Pearson Chi-Squared test (\chisqr{16}{540}{353}, \pinf{0.001}) and Holm-Bonferroni adjusted binomial tests indicated that the following (Visual Cluster, Haptic Cluster) pairs have proportion selections statistically significantly higher than chance (\ie \percent{20} each): % (Roughest, Roughest), (Rougher, Rougher), (Medium, Rougher), (Medium, Smoother), (Smoother, Smoother), (Smoother, Smoothest), and (Smoothest, Smoothest) (\pinf{0.005} each). -% -\figref{results_clusters} (right) shows the confusion matrix of the matching task with visual texture ranks and the proportion of haptic texture clusters selected in response. -% + +\figref{results/haptic_visual_clusters_confusion_matrices} (right) shows the confusion matrix of the \level{Matching} task with visual texture ranks and the proportion of haptic texture clusters selected in response. A two-sample Pearson Chi-Squared test (\chisqr{24}{540}{342}, \pinf{0.001}) and Holm-Bonferroni adjusted binomial tests indicated that the following (Visual Texture Rank, Haptic Cluster) pairs have proportion selections statistically significantly higher than chance: % (0, Roughest); (1, Rougher); (2, Rougher); (3, Rougher); (4, Rougher); (5, Smoother); (6, Smoother); (7, Smoothest); and (8, Smoothest) (\pinf{0.05} each). 
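The chance-level comparison above (one-sided binomial tests against a \percent{20} chance level, Holm-Bonferroni adjusted) can be sketched in pure Python; the counts below are hypothetical, not the study's selection data.

```python
from math import comb

def binom_sf(k, n, p):
    """One-sided binomial test: P(X >= k) under chance level p."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

def holm(pvals):
    """Holm-Bonferroni step-down adjustment of a list of p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted, running = [0.0] * m, 0.0
    for rank, i in enumerate(order):
        running = max(running, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running)
    return adjusted

# Hypothetical selection counts out of 60 trials per pair (chance = 20 %);
# not the study's data.
counts, n_trials = [25, 30, 12, 40, 11], 60
raw = [binom_sf(k, n_trials, 0.2) for k in counts]
adjusted = holm(raw)
print([p < 0.05 for p in adjusted])  # [True, True, False, True, False]
```

Only pairs selected clearly more often than the 20 % chance level survive the adjustment, which is the criterion used for the bold entries of the confusion matrices.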
-%
 This shows that the participants consistently identified the roughness of each visual texture and selected the corresponding haptic texture cluster.

+\fig{results/haptic_visual_clusters_confusion_matrices}{
+	Confusion matrices of the visual texture (left) or rank (right) with the corresponding haptic texture clusters selected in proportion.
+}[
+	Holm-Bonferroni adjusted binomial test results are marked in bold when the proportion is higher than chance (\ie more than \percent{20}, \pinf{0.05}).
+]
+
 \subsection{Questionnaire}
 \label{results_questions}

+\figref{results_questions} presents the questionnaire results of the \level{Matching} and \level{Ranking} tasks.
+A non-parametric \ANOVA on an \ART model was used on the \response{Difficulty} and \response{Realism} question results, while the other question results were analyzed using Wilcoxon signed-rank tests.
+
+On \response{Difficulty}, there were statistically significant effects of \factor{Task} (\anova{1}{57}{13}, \pinf{0.001}) and of \factor{Modality} (\anova{1}{57}{8}, \p{0.007}), but no interaction effect \factor{Task} \x \factor{Modality} (\anova{1}{57}{2}, \ns).
+The \level{Ranking} task was found easier (\mean{2.9}, \sd{1.2}) than the \level{Matching} task (\mean{3.9}, \sd{1.5}), and the Haptic textures were found easier to discriminate (\mean{3.0}, \sd{1.3}) than the Visual ones (\mean{3.8}, \sd{1.5}).
+Both haptic and visual textures were judged moderately realistic for both tasks (\mean{4.2}, \sd{1.3}), with no statistically significant effect of \factor{Task}, \factor{Modality} or their interaction on \response{Realism}.
+No statistically significant effects of \factor{Task} on \response{Textures Match} and \response{Uncomfort} were found either.
+The coherence of the texture pairs was considered moderate (\mean{4.6}, \sd{1.2}) and the haptic device was not felt to be uncomfortable (\mean{2.4}, \sd{1.4}).
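A minimal pure-Python sketch of the Wilcoxon signed-rank test (normal approximation) used for the non-\ART questionnaire items; the paired ratings below are hypothetical, not the study's responses.

```python
from math import sqrt
from statistics import NormalDist

def wilcoxon_signed_rank(x, y):
    """Wilcoxon signed-rank test with normal approximation:
    drops zero differences, averages ranks of tied magnitudes,
    returns (W+, two-sided p-value)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(diffs):
        j = i
        while j + 1 < len(diffs) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        for k in range(i, j + 1):  # average rank for tied magnitudes
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    n = len(diffs)
    mu, sigma = n * (n + 1) / 4, sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    return w_plus, 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical paired 7-point difficulty ratings for the two tasks
matching = [5, 6, 5, 7, 6, 5, 6, 7]
ranking = [2, 3, 2, 3, 2, 3, 2, 3]
w, p = wilcoxon_signed_rank(matching, ranking)
print(w, round(p, 3))
```

For the small samples of a real analysis, the exact distribution of W+ (as used by statistics packages) is preferable to this normal approximation.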
+ \begin{subfigs}{results_questions}{Boxplots of the questionnaire results for each visual hand rendering.}[ Pairwise Wilcoxon signed-rank tests with Holm-Bonferroni adjustment: * is \pinf{0.05}, ** is \pinf{0.01} and *** is \pinf{0.001}. Lower is better for Difficulty and Uncomfortable; higher is better for Realism and Textures Match.% @@ -163,20 +150,7 @@ This shows that the participants consistently identified the roughness of each v \item By modality. \item By task. ] - \subfig[0.32]{results/questions_modalities}% - \subfig[0.49]{results/questions_tasks}% + \subfigsheight{70mm} + \subfig{results/questions_modalities}% + \subfig{results/questions_tasks}% \end{subfigs} - -\figref{results_questions} presents the questionnaire results of the matching and ranking tasks. -% -A non-parametric \ANOVA on an \ART model was used on the \textit{Difficulty} and \textit{Realism} question results, while the other question results were analyzed using Wilcoxon signed-rank tests. -% -On \textit{Difficulty}, there were statistically significant effects of \textit{Task} (\anova{1}{57}{13}, \pinf{0.001}) and of \textit{Modality} (\anova{1}{57}{8}, \p{0.007}), but no interaction effect \textit{Task} \x \textit{Modality} (\anova{1}{57}{2}, \ns). -% -The Ranking task was found easier (\mean{2.9}, \sd{1.2}) than the Matching task (\mean{3.9}, \sd{1.5}), and the Haptic textures were found easier to discrimate (\mean{3.0}, \sd{1.3}) than the Visual ones (\mean{3.8}, \sd{1.5}). -% -Both haptic and visual textures were judged moderately realistic for both tasks (\mean{4.2}, \sd{1.3}), with no statistically significant effect of \textit{Task}, \textit{Modality} or their interaction on \textit{Realism}. -% -No statistically significant effects of \textit{Task} on \textit{Textures Match} and \textit{Uncomfort} were found either. -% -The coherence of the texture pairs was considered moderate (\mean{4.6}, \sd{1.2}) and the haptic device was not felt uncomfortable (\mean{2.4}, \sd{1.4}). 
diff --git a/2-perception/vhar-textures/4-discussion.tex b/2-perception/vhar-textures/4-discussion.tex index 3a732d6..879475a 100644 --- a/2-perception/vhar-textures/4-discussion.tex +++ b/2-perception/vhar-textures/4-discussion.tex @@ -2,72 +2,34 @@ \label{discussion} In this study, we investigated the perception of visuo-haptic texture augmentation of tangible surfaces touched directly with the index fingertip, using visual texture overlays in \AR and haptic roughness textures generated by a vibrotactile device worn on the middle index phalanx. -% -The nine evaluated pairs of visuo-haptic textures, taken from the HaTT database \cite{culbertson2014one}, are models of real texture captures. -% -Their perception was evaluated in a two-task user study in which participants chose the most coherent combinations of visual and haptic textures (matching task), and ranked all textures according to their perceived roughness (ranking task). -% +The nine evaluated pairs of visuo-haptic textures, taken from the \HaTT database \cite{culbertson2014one}, are models of real texture captures (\secref[related_work]{texture_rendering}). + +Their perception was evaluated in a two-task user study in which participants chose the most coherent combinations of visual and haptic textures (\level{Matching} task), and ranked all textures according to their perceived roughness (\level{Ranking} task). The visual textures were displayed statically on the tangible surface, while the haptic textures adapted in real time to the speed of the finger on the surface, giving the impression that the visuo-haptic textures were integrated into the tangible surface. -% In addition, the interaction with the textures was designed to be as natural as possible, without imposing a specific speed of finger movement, as in similar studies \cite{asano2015vibrotactile,friesen2024perceived}. 
-In the matching task, participants were not able to effectively match the original visual and haptic texture pairs (\figref{results_matching_ranking}, left), except for the Coffee Filter texture, which was the smoothest both visually and haptically. -% -However, almost all visual textures, except Sandpaper~100, were matched with at least one haptic texture at a level above chance. -% +In the \level{Matching} task, participants were not able to effectively match the original visual and haptic texture pairs (\figref{results/matching_confusion_matrix}), except for the \level{Coffee Filter} texture, which was the smoothest both visually and haptically. +However, almost all visual textures, except \level{Sandpaper~100}, were matched with at least one haptic texture at a level above chance. Similarly, five haptic textures were favored over the others to be matched with the visual textures. -% Thus, it seems that not all participants perceived visual textures in the same way and that they also hesitated between several haptic textures for a given visual texture. -% Indeed, the majority of users explained that, based on the roughness, granularity, or imperfections of the visual texture, they matched the haptic texture that seemed most similar or natural to what they imagined. -% Several strategies were used, as some participants reported using vibration frequency and/or amplitude to match a haptic texture. -% -It should be noted that the task was rather difficult (\figref{results_questions}), as participants had no prior knowledge of the textures, there were no additional visual cues such as the shape of an object, and the term \enquote{roughness} had not been used by the experimenter prior to the ranking task. 
+It should be noted that the task was rather difficult (\figref{results_questions}), as participants had no prior knowledge of the textures, there were no additional visual cues such as the shape of an object, and the term \enquote{roughness} had not been used by the experimenter prior to the \level{Ranking} task. -The correspondence analysis (\figref{results_similarity}, left) highlighted that participants did indeed match visual and haptic textures primarily on the basis of their perceived roughness (60\% of variance), which is in line with previous perception studies on real \cite{baumgartner2013visual} and virtual \cite{culbertson2014modeling} textures. -% -The rankings (\figref{results_matching_ranking}, right) confirmed that the participants all perceived the roughness of haptic textures very similarly, but that there was less consensus for visual textures, which is also in line with roughness rankings for real haptic and visual textures \cite{bergmanntiest2007haptic}. -% -These results made it possible to identify and name groups of textures in the form of clusters, and to construct confusion matrices between these clusters and between visual texture ranks with haptic clusters, showing that participants consistently identified and matched haptic and visual textures (\figref{results_clusters}). -% -30\% of the matching variance was also captured with a second dimension, opposing the roughest textures (Metal Mesh, Sandpaper~100), and to a lesser extent the smoothest (Coffee Filter, Sandpaper~320), with all other textures. -% -One hypothesis is that this dimension could be the perceived stiffness of the textures, with Metal Mesh and smooth textures appearing stiffer than the other textures, whose granularity could have been perceived as bumps on the surface that could deform under finger pressure. 
-%
-Stiffness is, with roughness, one of the main characteristics perceived by the vision and touch of real materials \cite{baumgartner2013visual,vardar2019fingertip}, but also on virtual haptic textures \cite{culbertson2014modeling,degraen2019enhancing}.
-%
-The last visuo-haptic roughness ranking (\figref{results_matching_ranking}, right) showed that both haptic and visual sensory information were well integrated as the resulting roughness ranking was being in between the two individual haptic and visual rankings.
-%
+The correspondence analysis (\figref{results/matching_correspondence_analysis}) highlighted that participants did indeed match visual and haptic textures primarily on the basis of their perceived roughness (\percent{60} of variance), which is in line with previous perception studies on real \cite{baumgartner2013visual} and virtual \cite{culbertson2014modeling} textures.
+The rankings (\figref{results/ranking_mean_ci}) confirmed that the participants all perceived the roughness of haptic textures very similarly, but that there was less consensus for visual textures, which is also in line with roughness rankings for real haptic and visual textures \cite{bergmanntiest2007haptic}.
+These results made it possible to identify and name groups of textures in the form of clusters (\figref{results_clusters}), and to construct confusion matrices between these clusters and between visual texture ranks and haptic clusters (\figref{results/haptic_visual_clusters_confusion_matrices}), showing that participants consistently identified and matched haptic and visual textures.
+A second dimension of the correspondence analysis also captured \percent{30} of the matching variance, opposing the roughest textures (\level{Metal Mesh}, \level{Sandpaper~100}), and to a lesser extent the smoothest (\level{Coffee Filter}, \level{Sandpaper~320}), to all other textures (\figref{results/matching_correspondence_analysis}).
+
+One hypothesis is that this dimension could be the perceived hardness (\secref[related_work]{hardness}) of the virtual materials, with \level{Metal Mesh} and smooth textures appearing harder than the other textures, whose granularity could have been perceived as bumps on the surface that could deform under finger pressure.
+Hardness is, with roughness, one of the main characteristics perceived through vision and touch of real materials \cite{baumgartner2013visual,vardar2019fingertip}, but also of virtual haptic renderings \cite{culbertson2014modeling,degraen2019enhancing}.
+
+The last visuo-haptic roughness ranking (\figref{results/ranking_mean_ci}) showed that both haptic and visual sensory information were well integrated, as the resulting roughness ranking fell between the two individual haptic and visual rankings.
 Several strategies were reported: some participants first classified visually and then corrected with haptics, others classified haptically and then integrated visuals.
-%
 While visual sensation did influence perception, as observed in previous haptic \AR studies \cite{punpongsanon2015softar,gaffary2017ar,fradin2023humans}, haptic sensation dominated here.
-%
 This indicates that participants were more confident and relied more on the haptic roughness perception than on the visual roughness perception when integrating both in one coherent perception.
-%
-Several participants also described attempting to identify visual and haptic textures using spatial breaks, edges or patterns, that were not observed when these textures were displayed in non-immersive \VEs with a screen \cite{culbertson2014modeling,culbertson2015should}.
-%
+
+Several participants also described attempting to identify visual and haptic textures using spatial breaks, edges or patterns, which were not reported when these textures were displayed in non-immersive \VEs with a screen \cite{culbertson2014modeling,culbertson2015should}.
A few participants even reported that they clearly sensed patterns on haptic textures. -% -However, the visual and haptic textures used were isotropic and homogeneous models of real texture captures, \ie their rendered roughness was constant and did not depend on the direction of movement but only on the speed of the finger. -% +However, the visual and haptic textures used were isotropic and homogeneous models of real texture captures, \ie their rendered roughness was constant and did not depend on the direction of movement but only on the speed of the finger (\secref[related_work]{texture_rendering}). Overall, the haptic device was judged to be comfortable, and the visual and haptic textures were judged to be fairly realistic and to work well together (\figref{results_questions}). - -These results have of course some limitations as they addressed a small set of visuo-haptic textures augmenting the perception of smooth white tangible surfaces. -% -Indeed, the increase in visuo-haptic texture perception may be limited on surfaces that already have strong visual or haptic patterns \cite{asano2012vibrotactile}, or on objects with complex shapes. -% -In addition, the haptic textures used were modelled from the vibrations of a probe sliding over the captured surfaces. -% -The perception of surface roughness with the finger is actually more complex because it involves both the perception of vibrations and the spatial deformation of the skin \cite{klatzky2003feeling}, but also because the sensations generated when exploring a surface depend on factors other than the speed of the finger alone, such as the force of contact, the angle, the posture or the surface of the contact \cite{schafer2017transfer}, and the integration of these sensory information into one unified perception is not yet fully understood \cite{richardson2022learning}. 
-% -Another limitation that may have affected the perception of haptic textures is the lack of compensation for the frequency response of the actuator and amplifier \cite{asano2012vibrotactile,culbertson2014modeling,friesen2024perceived}. -% -Finally, the visual textures used were also simple color captures not meant to be used in an immersive \VE. -% -However, our objective was not to accurately reproduce real textures, but to alter the perception of simultaneous visual and haptic roughness augmentation of a real surface directly touched by the finger in \AR. -% -In addition of these limitations, both visual and haptic texture models should be improved by integrating the rendering of spatially localized breaks, edges or patterns, like real textures \cite{richardson2022learning}, and by being adaptable to individual sensitivities, as personalized haptics is a promising approach \cite{malvezzi2021design,young2020compensating}. -% -More generally, a wide range of haptic feedbacks should be integrated to form rich and complete haptic augmentations in \AR \cite{maisto2017evaluation,detinguy2018enhancing,salazar2020altering}. - diff --git a/2-perception/vhar-textures/5-conclusion.tex b/2-perception/vhar-textures/5-conclusion.tex index def271e..a63850b 100644 --- a/2-perception/vhar-textures/5-conclusion.tex +++ b/2-perception/vhar-textures/5-conclusion.tex @@ -9,7 +9,7 @@ We investigated how users perceived visuo-haptic roughness texture augmentations on tangible surfaces seen in immersive OST-AR and touched directly with the index finger. % -The haptic roughness texture was rendered using a wearable vibrotactile haptic device worn on the middle phalanx, based on HaTT data-driven models and finger speed. +The haptic roughness texture was rendered using a wearable vibrotactile haptic device worn on the middle phalanx, based on \HaTT data-driven models and finger speed. 
%
Participants rated the coherence, realism and roughness of the combination of nine representative visuo-haptic texture pairs.
%
diff --git a/2-perception/xr-perception/1-introduction.tex b/2-perception/xr-perception/1-introduction.tex
index 1b97cac..d64662f 100644
--- a/2-perception/xr-perception/1-introduction.tex
+++ b/2-perception/xr-perception/1-introduction.tex
@@ -1,18 +1,22 @@
 % Delivers the motivation for your paper. It explains why you did the work you did.
-Most of the haptic augmentations of tangible surfaces using with wearable haptic devices, including roughness textures (\secref[related_work]{texture_rendering}), have been studied without a visual feedback, and none have considered the influence of the visual rendering on their perception or integrated them in \AR and \VR (\secref[related_work]{texture_rendering}).
+\noindent Most of the haptic augmentations of tangible surfaces with wearable haptic devices, including roughness textures (\secref[related_work]{texture_rendering}), have been studied without visual feedback, and none have considered the influence of the visual rendering on their perception or integrated them in \AR and \VR.

 Still, it is known that the visual rendering of a tangible can influence the perception of its haptic properties (\secref[related_work]{visual_haptic_influence}), and that the perception of same haptic force-feedback or vibrotactile rendering can differ between \AR and \VR, probably due to difference in perceived simultaneity between visual and haptic stimuli (\secref[related_work]{ar_vr_haptic}).
-Indeed, while in \AR, the user can see their own hand touching, the haptic device worn and the \RE, in \VR they are hidden by the \VE while.
+Indeed, in \AR, the user can see their own hand touching, the haptic device worn and the \RE, while in \VR they are hidden by the \VE.
-In this chapter, we investigate the role of the visual virtuality of the hand (real or virtual) and its environment (\AR or \VR) on the perception of a \textbf{tangible surface whose haptic roughness is augmented with a wearable vibrotactile} device worn on the finger. +In this chapter, we investigate the \textbf{role of the visual virtuality} of the hand (real or virtual) and its environment (\AR or \VR) on the perception of a \textbf{tangible surface whose haptic roughness is augmented} with a wearable voice-coil device worn on the finger. To do so, we used the visuo-haptic system presented in \chapref{vhar_system} to render virtual vibrotactile patterned textures (\secref[related_work]{texture_rendering}) to augment the tangible surface being touched.% touched by the finger.% that can be directly touched with the bare finger. We evaluated, in \textbf{user study with psychophysical methods and extensive questionnaire}, the perceived roughness augmentation in three visual rendering conditions: \textbf{(1) without visual augmentation}, in \textbf{(2) \OST-\AR with a realistic virtual hand} rendering, and in \textbf{(3) \VR with the same virtual hand}. -To control for the influence of the visual rendering, the tangible surface was not visually augmented. +To control for the influence of the visual rendering, the tangible surface was not visually augmented and stayed the same in all conditions. -\noindentskip The contributions of this chapter is: A psychophysical user study with 20 participants to evaluate the effect of visual hand rendering in \OST-\AR or \VR on the perception of haptic roughness texture augmentations, using wearable vibrotactile haptics. +\noindentskip The contributions of this chapter are: +\begin{itemize} + \item A psychophysical user study with 20 participants to evaluate the effect of visual hand rendering in \OST-\AR or \VR on the perception of haptic roughness texture augmentations, using wearable vibrotactile haptics. 
+ \item A discussion and recommendations on the integration of wearable haptic augmentations in direct touch contexts with \AR and \VR.
+\end{itemize}

 \noindentskip In the remainder of this chapter, we first describe the experimental design and apparatus of the user study.
-We then present the results obtained and discuss them before concluding.
+We then present the results obtained, discuss them, and outline recommendations for future \AR/\VR work using wearable haptic augmentations.

%First, we present a system for rendering virtual vibrotactile textures in real time without constraints on hand movements and integrated with an immersive visual \AR/\VR headset to provide a coherent multimodal visuo-haptic augmentation of the \RE.
%An experimental setup is then presented to compare haptic roughness augmentation with an optical \AR headset (Microsoft HoloLens~2) that can be transformed into a \VR headset using a cardboard mask.
diff --git a/2-perception/xr-perception/3-experiment.tex b/2-perception/xr-perception/3-experiment.tex
index 3f8e4d9..a85e2b8 100644
--- a/2-perception/xr-perception/3-experiment.tex
+++ b/2-perception/xr-perception/3-experiment.tex
@@ -3,10 +3,8 @@
 %The visuo-haptic rendering system, described in \secref[vhar_system]{method}, allows free exploration of virtual vibrotactile textures on tangible surfaces directly touched with the bare finger to simulate roughness augmentation, while the visual rendering of the hand and environment can be controlled to be in \AR or \VR.
 %
-The user study aimed to investigate the effect of visual hand rendering in \AR or \VR on the perception of roughness texture augmentation of a touched tangible surface.
-% -In a \TIFC task (\secref[related_work]{sensations_perception}), participants compared the roughness of different tactile texture augmentations in three visual rendering conditions: without any visual augmentation (\level{Real}, \figref{renderings}), in \AR with a realistic virtual hand superimposed on the real hand (\level{Mixed}, \figref{renderings}), and in \VR with the same virtual hand as an avatar (\level{Virtual}, \figref{renderings}). -% +%The user study aimed to investigate the effect of visual hand rendering in \AR or \VR on the perception of roughness texture augmentation of a touched tangible surface. +In a \TIFC task (\secref[related_work]{sensations_perception}), participants compared the roughness of different tactile texture augmentations in three visual rendering conditions: without any visual augmentation (\level{Real}, \figref{experiment/real}), in \AR with a realistic virtual hand superimposed on the real hand (\level{Mixed}, \figref{experiment/mixed}), and in \VR with the same virtual hand as an avatar (\level{Virtual}, \figref{experiment/virtual}). In order not to influence the perception, as vision is an important source of information and influence for the perception of texture \cite{bergmanntiest2007haptic,yanagisawa2015effects,vardar2019fingertip}, the touched surface was visually a uniform white; thus only the visual aspect of the hand and the surrounding environment is changed. \begin{subfigs}{renderings}{ @@ -46,12 +44,12 @@ In the \level{Mixed} and \level{Real} conditions, the mask had two additional ho \figref{renderings} shows the resulting views in the three considered \factor{Visual Rendering} conditions. Participants sat comfortably in front of the box at a distance of \qty{30}{\cm}, wearing the HoloLens~2 with a cardboard mask attached, so that only the inside of the box was visible, as shown in \figref{experiment/apparatus}. 
-The vibrotactile voice-coil actuator (HapCoil-One, Actronika) was firmly attached to the middle phalanx of the right index finger of the participants using a Velcro strap.
+The vibrotactile voice-coil actuator (HapCoil-One, Actronika) was firmly attached to the middle phalanx of the right index finger of the participants using a Velcro strap, similarly to previous studies \cite{asano2015vibrotactile,friesen2024perceived}.
The generation of the virtual texture is described in \secref[vhar_system]{texture_generation}.
They also wore headphones with a brown noise masking the sound of the voice-coil.
-The user study was held in a quiet room with no windows, and took on average one hour to complete.
+The user study was held in a quiet room with no windows.

-\begin{subfigs}{setup}{Visuo-haptic texture rendering system setup. }[][
+\begin{subfigs}{setup}{Visuo-haptic texture rendering setup. }[][
\item HoloLens~2 \OST-\AR headset, the two cardboard masks to switch the real or virtual environments with the same field of view, and the \ThreeD-printed piece for attaching the masks to the headset.
\item User exploring a virtual vibrotactile texture on a tangible sheet of paper.
]
@@ -82,6 +80,7 @@ All textures were rendered as described in \secref[vhar_system]{texture_generati
Preliminary studies allowed us to determine a range of amplitudes that could be felt by the participants and were not too uncomfortable.
The reference texture was chosen to be the one with the middle amplitude to compare it with lower and higher roughness levels and to determine key perceptual variables such as the \PSE and the \JND of each \factor{Visual Rendering} condition.
The chosen \TIFC task is a common psychophysical method used in haptics to determine \PSE and \JND by testing comparison stimuli against a fixed reference stimulus and by fitting a psychometric function to the participant's responses (\secref[related_work]{sensations_perception}).
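The \PSE/\JND estimation described above can be sketched as follows. This is a minimal illustrative example, not the study's analysis code: the response proportions are hypothetical, and a cumulative Gaussian is assumed as the psychometric function.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, pse, sigma):
    """Cumulative Gaussian: P(comparison judged rougher than reference)."""
    return norm.cdf(x, loc=pse, scale=sigma)

# Hypothetical 2IFC data: comparison amplitudes (reference = 1.0) and the
# proportion of "comparison felt rougher" responses at each level.
amplitudes = np.array([0.50, 0.75, 1.00, 1.25, 1.50])
p_rougher = np.array([0.10, 0.30, 0.55, 0.80, 0.95])

# Fit the psychometric function to the aggregated responses.
(pse, sigma), _ = curve_fit(psychometric, amplitudes, p_rougher, p0=[1.0, 0.3])

# PSE: amplitude judged rougher than the reference 50% of the time.
# JND: half the distance between the 25% and 75% points of the fitted curve.
jnd = sigma * norm.ppf(0.75)
```

In practice the fit would be done per participant and per condition (here, per \factor{Visual Rendering} level), often with a maximum-likelihood fit on raw binary responses rather than a least-squares fit on proportions.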
+The user study took on average one hour to complete.

\subsection{Experimental Design}
\label{experimental_design}

diff --git a/4-conclusion/conclusion.tex b/4-conclusion/conclusion.tex
index bc75edd..7417229 100644
--- a/4-conclusion/conclusion.tex
+++ b/4-conclusion/conclusion.tex
@@ -5,28 +5,32 @@

\section{Summary}

-% we presented our research on direct hand interaction with real and virtual everyday objects, visually and haptically augmented using immersive \AR and wearable haptic devices.
-In this thesis, entitled \enquote{\textbf{\ThesisTitle}}, we presented our research towards a coherent, natural and seamless visuo-haptic augmented reality that enables perception and interaction with everyday objects directly with the hand.
-Worn on the hand, wearable haptics can provide a rich tactile feedback on \VOs and augment the perception of real objects, while preserving the freedom of movement and interaction with the \RE.
+In this thesis, entitled \enquote{\textbf{\ThesisTitle}}, we showed how wearable haptics, worn on the outside of the hand, can improve direct hand interaction in immersive \AR by augmenting the perception and manipulation of the virtual.
+Wearable haptics can provide rich tactile feedback on \VOs and augment the perception of real objects, both directly touched with the hand, while preserving the freedom of movement and interaction with the \RE.
However, their integration with \AR is still in its infancy, and presents many design, technical and human challenges.
-We focused our research on two: \textbf{(I) modifying the texture perception of tangible surfaces}, and \textbf{(II) improving the manipulation of \VOs}.
+We structured our research along two axes: \textbf{(I) modifying the texture perception of tangible surfaces}, and \textbf{(II) improving the manipulation of \VOs}.

\noindentskip In \partref{perception} we focused on modifying the perception of wearable virtual visuo-haptic textures that augments tangible surfaces.
+Texture is a fundamental property of an object, perceived equally by sight and touch.
+It is also one of the best-known haptic augmentations, but it had not yet been integrated with \AR or \VR.
+%However, haptic texture augmentation had not yet been integrated with \AR or \VR.
%We designed wearable visuo-haptic texture augmentations and evaluated how the degree of virtuality and the rendering of the visuals influenced the perception of the haptic textures.
-We (1) proposed a \textbf{wearable visuo-haptic texture augmentation system}, (2) evaluated how the perception of haptic textures is \textbf{affected by the visual virtuality of the hand} and the environment (real, augmented, or virtual), and (3) investigated the \textbf{perception of co-localized visuo-haptic texture augmentations}.
+We \textbf{(1)} proposed a \textbf{wearable visuo-haptic texture augmentation system}, \textbf{(2)} evaluated how the perception of haptic textures is \textbf{affected by the visual virtuality of the hand} and the environment (real, augmented, or virtual), and \textbf{(3)} investigated the \textbf{perception of co-localized visuo-haptic texture augmentations}.

\noindentskip In \chapref{vhar_system},

-\noindentskip In \chapref{xr_perception}, we explored how the perception of wearable haptic augmented textures is affected by the visual virtuality of the hand and the environment, being either real, augmented or virtual.
-We augmented the perceived roughness of the tangible surface by rendering virtual vibrotactile patterned textures on the voice-coil, and rendered the visual conditions with an immersive \OST-\AR headset that could be switched to a \VR only view.
+\noindentskip In \chapref{xr_perception} we explored how the perception of wearable haptic augmented textures is affected by the visual virtuality of the hand and the environment, whether real, augmented or virtual.
+We augmented the perceived roughness of the tangible surface with virtual vibrotactile patterned textures, and rendered the visual conditions by switching the \OST-\AR headset to a \VR-only view.
We then conducted a psychophysical user study with 20 participants and extensive questionnaires to evaluate the perceived roughness augmentation in these three visual conditions.
-The textures were perceived as \textbf{rougher when touched with the real hand alone} compared to a virtual hand in either \AR or \VR, possibly due to the \textbf{perceived latency} between finger movements and different visual, haptic, and proprioceptive feedbacks.
+The textures were perceived as \textbf{rougher when touched with the real hand alone compared to a virtual hand} in either \AR or \VR, possibly due to the \textbf{perceived latency} between finger movements and the different visual, haptic, and proprioceptive feedback.

-\noindentskip In \chapref{vhar_textures},
+\noindentskip In \chapref{vhar_textures}, we investigated the perception of co-localized visual and wearable haptic texture augmentations on tangible surfaces.
+We transposed the \textbf{data-driven visuo-haptic textures} from the \HaTT database to the system presented in \chapref{vhar_system} and conducted a user study with 20 participants to rate the coherence, realism, and perceived roughness of nine visuo-haptic texture pairs.
+Participants integrated roughness sensations from both visual and haptic modalities well, with \textbf{haptics dominating the perception}, and consistently identified and matched \textbf{clusters of visual and haptic textures with similar perceived roughness}.

\noindentskip In \partref{manipulation} we focused on improving the manipulation of \VOs directly with the hand in immersive \OST-\AR.
-Our approach was to design, based on the literature, and evaluate in user studies the effect of visual rendering of the hand and the delocalized haptic rendering.
-We first considered (1) \textbf{the visual rendering as hand augmentation} and then the (2) combination of different visuo-haptic \textbf{rendering of the hand manipulation with \VOs}.
+Our approach was to design visual renderings of the hand and delocalized haptic rendering, based on the literature, and to evaluate them in user studies.
+We first considered \textbf{(1) the visual rendering as hand augmentation} and then \textbf{(2)} the combination of different visuo-haptic \textbf{renderings of the hand manipulating \VOs}.

\noindentskip In \chapref{visual_hand}, we investigated the visual rendering as hand augmentation.
Seen as an \textbf{overlay on the user's hand}, such visual hand rendering provide feedback on the hand tracking and the interaction with \VOs.
@@ -65,18 +69,34 @@ This would allow a complete portable and wearable visuo-haptic system to be used

\paragraph{Visual Representation of the Virtual Texture}

-The main limitation of our study is the absence of a visual representation of the virtual texture.
+The main limitation of this user study was the absence of a visual representation of the virtual texture.
This is indeed a source of information as important as haptic sensations for the perception of both real textures \cite{baumgartner2013visual,bergmanntiest2007haptic,vardar2019fingertip} and virtual textures \cite{degraen2019enhancing,gunther2022smooth}, and their interaction in the overall perception is complex.
-Specifically, it remains to be investigated how to visually represent vibrotactile textures in an immersive \AR or \VR context, as the visuo-haptic coupling of such grating textures is not trivial \cite{unger2011roughness} even with real textures \cite{klatzky2003feeling}.
+Specifically, it remains to be investigated how to visually represent the vibrotactile patterned textures used in an immersive \AR or \VR context, as the visuo-haptic coupling of such patterned textures is not trivial \cite{unger2011roughness} even with real textures \cite{klatzky2003feeling}.

\paragraph{Broader Visuo-Haptic Conditions}

-Also, our study was conducted with an \OST-\AR headset, but the results may be different with a \VST-\AR headset.
-Finally, we focused on the perception of roughness sensations using wearable haptics in \AR \vs \VR using a square wave vibrotactile signal, but different haptic texture rendering methods should be considered.
-More generally, many other haptic feedbacks could be investigated in \AR \vs \VR using the same system and methodology, such as stiffness, friction, local deformations, or temperature.
+Our study was conducted with an \OST-\AR headset, but the results may be different with a \VST-\AR headset, where the \RE is seen through cameras and screens, and the perceived simultaneity between visual and haptic stimuli, real or virtual, is different.
+We also focused on the perception of roughness augmentation using wearable vibrotactile haptics and a square-wave signal to simulate a patterned texture: our objective was not to accurately reproduce real textures, but to induce various roughness percepts on the same tangible surface with well-controlled parameters.
+Yet, more accurate models to simulate interaction with virtual textures should be transposed to wearable haptic augmentations, such as in \textcite{unger2011roughness}.
+Another limitation that may have affected the perception of the haptic texture augmentations is the lack of compensation for the frequency response of the actuator and amplifier \cite{asano2012vibrotactile,culbertson2014modeling,friesen2024perceived}.
+The dynamic response of the finger should also be considered.
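The square-wave rendering principle mentioned above can be sketched in a few lines. This is a hypothetical minimal example (sample rate, spatial period, and amplitude values are illustrative, not the thesis implementation): the signal is driven by finger position, so its temporal frequency scales with finger speed over a fixed spatial grating.

```python
import numpy as np

def square_wave_texture(position_mm, amplitude=1.0, period_mm=2.0):
    """Position-based square wave: the output flips sign every half
    spatial period traversed by the finger, simulating a patterned
    grating of fixed spatial period."""
    phase = 2.0 * np.pi * position_mm / period_mm
    return amplitude * np.sign(np.sin(phase))

# Simulated finger trace: constant 50 mm/s speed, sampled at 1 kHz.
t = np.arange(0.0, 1.0, 0.001)                      # 1 s of samples
position = 50.0 * t                                 # finger position in mm
signal = square_wave_texture(position, amplitude=0.5, period_mm=2.0)
# At 50 mm/s over a 2 mm spatial period, the square wave completes
# 25 cycles per second, i.e. its perceived frequency tracks finger speed.
```

A real-time renderer would instead update the phase incrementally from the tracked fingertip velocity at each audio/haptic frame before sending the signal to the voice-coil amplifier.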
\subsection*{Perception of Visual and Haptic Texture Augmentations in Augmented Reality}

+\paragraph{Adapt to the Specificities of Direct Touch}
+
+As with the previous chapter, our objective was not to accurately reproduce real textures, but to alter the perception of simultaneous visual and haptic roughness augmentation of a tangible surface directly touched by the finger in \AR.
+Yet, the haptic textures were modelled from the vibrations of a hand-held probe sliding over the captured real surfaces.
+We generated the vibrotactile textures based only on the finger speed, but the perceived roughness of real textures depends on other factors, such as the contact force, angle, posture or contact surface \cite{schafer2017transfer}, and their respective importance in the perception is not yet fully understood \cite{richardson2022learning}.
+%Comparison from
+
+These results have, of course, some limitations, as they addressed a small set of visuo-haptic textures augmenting the perception of smooth white tangible surfaces.
+Indeed, the increase in visuo-haptic texture perception may be limited on surfaces that already have strong visual or haptic patterns \cite{asano2012vibrotactile}, or on objects with complex shapes.
+Finally, the visual textures used were simple color captures not meant to be used in an immersive \VE.
+In addition to these limitations, both visual and haptic texture models should be improved by integrating the rendering of spatially localized breaks, edges or patterns, like real textures \cite{richardson2022learning}, and by being adaptable to individual sensitivities, as personalized haptics is a promising approach \cite{malvezzi2021design,young2020compensating}.
+
\subsection*{Visual Rendering of the Hand for Manipulating \VOs in AR}

\paragraph{Other AR Displays}

@@ -84,7 +104,7 @@ More generally, many other haptic feedbacks could be investigated in \AR \vs \VR
The visual hand renderings we evaluated were displayed on the Microsoft HoloLens~2, which is a common \OST-\AR headset \cite{hertel2021taxonomy}.
We purposely chose this type of display as it is with \OST-\AR that the lack of mutual occlusion between the hand and the \VO is the most challenging to solve \cite{macedo2023occlusion}.
We thus hypothesized that a visual hand rendering would be more beneficial to users with this type of display.
-However, the user's visual perception and experience is different with other types of displays, such as \VST-\AR, where the \RE view is seen through a screen (\secref[related_work]{ar_displays}).
+However, the user's visual perception and experience are different with other types of displays, such as \VST-\AR, where the \RE view is seen through cameras and screens (\secref[related_work]{ar_displays}).
While the mutual occlusion problem and the hand tracking latency can be overcome with \VST-\AR, the visual hand rendering could still be beneficial to users as it provides depth cues and feedback on the hand tracking, and should be evaluated as such.

\paragraph{More Ecological Conditions}

diff --git a/config/acronyms.tex b/config/acronyms.tex
index 240be26..cd49a76 100644
--- a/config/acronyms.tex
+++ b/config/acronyms.tex
@@ -50,6 +50,7 @@
\acronym{FoV}{field of view}
\acronym{GLMM}{generalized linear mixed models}
\acronym{GM}{geometric mean}
+\acronym{HaTT}{Penn Haptic Texture Toolkit}
\acronym{HSD}{honest significant difference}
\acronym{JND}{just noticeable difference}
\acronym{LRA}{linear resonant actuator}