The IBV is developing a second generation of “ultra-realistic human avatars” for Extended Reality applications
The exponential growth of virtual content, driven by recent improvements in virtual and augmented reality glasses and the expansion of the metaverse, has created a need for ever more accurate and realistic avatars capable of hybridising the virtual and real worlds. This challenge has been taken up by the AVATARES project, in which the Instituto de Biomecánica (IBV) has been working to create a new generation of “avatars”, or “digital human models”, for extended reality applications in sectors such as entertainment, audiovisual and multimedia applications, meta-commerce and the metaverse.
This initiative was funded by the Directorate General for Innovation of the Department of Innovation, Industry, Commerce and Tourism of the Autonomous Government of Valencia. The aim of AVATARES was to develop a new generation of high-quality avatars with advanced features that improve their resemblance to their real-world counterparts. Improved textures and precise details of the skin, hair, clothes and other features have made it possible to achieve ultra-realistic, visually appealing avatars, primarily for extended reality applications.
Using its dynamic body scanning technology, MOVE4D, which captures the surface of the human body in movement and processes the images with a homologous mesh, the IBV has carried out this research to improve the ultra-realism of first-generation avatars in terms of texture and movement, to make content creation processes more efficient, and to capture moving avatars in real time.
2nd generation of digital human models or avatars
This new generation of avatars combines accuracy and realism so that they can be used not only in traditional fields such as biomechanical simulation, ergonomics or healthcare, but also in emerging sectors such as entertainment or the metaverse. These sectors have very different requirements in terms of photorealism (both in the appearance and movements of the avatars and in their interaction with people and objects in 3D environments), advanced personalisation options, real-time generation and interoperability, so that the avatars can be used in multiple applications.
Carmelo Lizarraga, Director of Innovation at the IBV’s Technology Department, explains that “we have focused on improving the ultra-realism of the avatars by faithfully capturing dynamic movements and by improving the textures – not only when the images are captured but also when they are processed – improving the detail and resolution of the faces and hands, visual fidelity and digital identity”.
Reducing occlusion during capture is another aspect the IBV has been working on. This has required additional sensors and the design of capture modes that manage exposure times in order to capture surfaces with high light absorption, such as hair and certain fabrics.
“We have also optimised the algorithms we use to transmit point clouds, to allow us to move towards a real-time capture mode and the creation of different multi-resolution capture modes”, adds Lizarraga.
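The IBV's own algorithms are not described in detail here. As an illustration of one common idea behind multi-resolution point-cloud capture and transmission, the sketch below downsamples a cloud with a voxel grid so that coarser (lighter) or finer (heavier) versions of the same frame can be sent; the function name, parameters and example data are hypothetical, not the project's actual pipeline.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Reduce a point cloud by keeping one centroid per occupied voxel.

    points: (N, 3) float array of XYZ coordinates.
    voxel_size: voxel edge length in the cloud's units (e.g. metres).
    A larger voxel_size yields a coarser, smaller cloud, so one capture
    can be streamed at several resolutions.
    """
    # Assign each point to an integer voxel index.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel; average each group into a single centroid.
    _, inverse, counts = np.unique(
        idx, axis=0, return_inverse=True, return_counts=True
    )
    sums = np.zeros((len(counts), 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# Example: a synthetic cloud in a roughly body-sized volume,
# downsampled at three resolutions (5 cm, 2 cm, 1 cm voxels).
rng = np.random.default_rng(0)
cloud = rng.uniform([0, 0, 0], [0.6, 0.4, 1.8], size=(100_000, 3))
for size in (0.05, 0.02, 0.01):
    print(f"voxel {size} m -> {len(voxel_downsample(cloud, size))} points")
```

Production systems typically combine this kind of spatial decimation with compression and incremental transmission, but the trade-off is the same: resolution against bandwidth and latency.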
Lastly, work was carried out to ensure the interoperability of this new generation of avatars and to improve the efficiency of the IBV's content creation processes, by adapting the avatars to mesh and skeleton topologies optimised for animation and by optimising the data export formats.
This research has benefited from the collaboration of companies such as Brainstorm Multimedia, Innoarea Projects, Play and Go Experience and Aumenta Vilanova, which have helped ensure that the results are transferred to the market through use cases adapted to the needs of the different sectors.
Finally, the AVATARES project is funded by the Department of Innovation, Industry, Commerce and Tourism through the nominative line S0832 “Aid for Technology Institutions for innovation projects in collaboration with companies involved in intelligent specialisation” for the financial year 2024 (CONV24/DGINN/19).