Corrections
@@ -68,7 +68,7 @@ The \MLE model implies that when seeing and touching a \VO in \AR, the combinati
Thus, a visuo-haptic percept of an object's property is robust to some discrepancy between the two sensory modalities, as long as the observer can attribute both sensations to the same property.
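For context, the \MLE model referenced above is conventionally formulated as a reliability-weighted average of the unimodal estimates; the following is the textbook form of that rule (a sketch of the standard formulation, with symbols $s_V, s_H, \sigma_V, \sigma_H$ chosen here for illustration rather than taken from this paper):
\[
\hat{s} \;=\; w_V\, s_V + w_H\, s_H,
\qquad
w_i \;=\; \frac{1/\sigma_i^2}{1/\sigma_V^2 + 1/\sigma_H^2},
\quad i \in \{V, H\},
\]
where $s_V$ and $s_H$ are the visual and haptic estimates of the property, $\sigma_V^2$ and $\sigma_H^2$ their variances, and the combined estimate $\hat{s}$ has variance $\sigma^2 = \sigma_V^2 \sigma_H^2 / (\sigma_V^2 + \sigma_H^2)$, lower than either unimodal variance.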
In particular, the texture perception of objects is known to be constructed from both vision and touch \cite{klatzky2010multisensory}.
More precisely, when surfaces are evaluated by vision or touch alone, both senses discriminate their materials mainly by the same properties of roughness, hardness, and friction, and with similar performance \cite{bergmanntiest2007haptic,baumgartner2013visual,vardar2019fingertip}.
The overall percept can then be modified by altering the input to one of the sensory modalities.
\textcite{yanagisawa2015effects} altered the perceived roughness, stiffness, and friction of real materials explored with the finger by superimposing different real visual textures on them through a half-mirror.