\section{Introduction} \label{intro}
Touching, grasping and manipulating virtual objects are fundamental interactions in \AR (\secref[related_work]{ve_tasks}) and essential for many of its applications (\secref[related_work]{ar_applications}). The most common current \AR systems, in the form of portable and immersive \OST-\AR headsets \cite{hertel2021taxonomy}, allow real-time hand tracking and direct bare-hand interaction with virtual objects (\secref[related_work]{real_virtual_gap}). Manipulation of virtual objects is achieved using a virtual hand interaction technique that represents the user's hand in the \VE and simulates its interaction with virtual objects (\secref[related_work]{ar_virtual_hands}). However, direct hand manipulation remains challenging due to the intangibility of the \VE, the lack of mutual occlusion between the hand and the virtual object in \OST-\AR (\secref[related_work]{ar_displays}), and the inherent delays between the user's hand motion and the result of the interaction simulation (\secref[related_work]{ar_virtual_hands}).

In this chapter, we investigate \textbf{visual rendering as hand augmentation} for the direct manipulation of virtual objects in \OST-\AR. To this end, we selected and compared the most popular visual hand renderings from the literature used to interact with virtual objects in \AR. With these visual renderings, the virtual hand is \textbf{displayed superimposed} on the user's hand, providing \textbf{feedback on the tracking} of the real hand, as shown in \figref{hands}. The movement of the virtual hand is also \textbf{constrained to the surface} of the virtual object, providing additional \textbf{feedback on the interaction} with the virtual object. We \textbf{evaluate in a user study}, using the \OST-\AR headset Microsoft HoloLens~2, the effect of six visual hand renderings on user performance and experience in two representative manipulation tasks: pushing-and-sliding and grasping-and-placing a virtual object directly with the hand.
\noindentskip The main contributions of this chapter are:
\begin{itemize}
\item A comparison of the six most common visual hand renderings from the literature used to interact with virtual objects in \AR.
\item A user study with 24 participants evaluating the performance and user experience of the six visual hand renderings as augmentations of the real hand during free and direct hand manipulation of virtual objects in \OST-\AR.
\end{itemize}
\noindentskip In the next sections, we first present the six visual hand renderings gathered from the literature. We then describe the experimental setup and design, the two manipulation tasks, and the metrics used. Finally, we present the results of the user study and discuss their implications for the direct hand manipulation of virtual objects in \AR.
\bigskip
\begin{subfigs}{hands}{The six visual hand renderings as augmentations of the real hand.}[
As seen by the user through the \AR headset during the two-finger grasping of a virtual cube.
][
\item No visual rendering \level{(None)}.
\item Cropped virtual content to enable hand-cube occlusion \level{(Occlusion, Occl)}.
\item Rings on the fingertips \level{(Tips)}.
\item Thin outline of the hand \level{(Contour, Cont)}.
\item Fingers' joints and phalanges \level{(Skeleton, Skel)}.
\item Semi-transparent \ThreeD hand model \level{(Mesh)}.
] \subfig[0.22]{method/hands-none} \subfig[0.22]{method/hands-occlusion} \subfig[0.22]{method/hands-tips} \par \subfig[0.22]{method/hands-contour} \subfig[0.22]{method/hands-skeleton} \subfig[0.22]{method/hands-mesh} \end{subfigs}