In a bid to make the virtual reality (VR) experience highly believable, researchers from Facebook Reality Labs (FRL) have developed a system called "Codec Avatars" that gives VR users the ability to interact with others while representing themselves with lifelike avatars precisely animated in real time.
“Our work demonstrates that it is possible to precisely animate photorealistic avatars from cameras closely mounted on a VR headset,” said the study’s lead author Shih-En Wei, a research scientist at Facebook.
The researchers have configured a headset with a minimal set of sensors for facial capture, and their system enables two-way, authentic social interaction in VR.
The team said the VR system can animate avatar heads with a highly detailed personal likeness by precisely tracking users' real-time facial expressions with a minimal set of headset-mounted cameras (HMCs).
"By comparing these converted images, using every pixel and not just sparse facial features, with renderings of the 3D avatar, we can precisely map between the images from the tracking headset and the status of the 3D avatar through differentiable rendering," Wei noted. "After the mapping is established, we train a neural network to predict face parameters from a minimal set of camera images in real time."
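The quote describes a two-stage pipeline: first recover avatar parameters for captured images by backpropagating a per-pixel loss through a differentiable renderer, then train a fast regressor on the resulting pairs. The sketch below is a minimal illustration of that idea in PyTorch, assuming hypothetical stand-ins throughout; DifferentiableAvatarRenderer, RealtimeRegressor, the parameter dimension, and the image size are illustrative inventions, not FRL's actual code or architecture.

```python
# Hypothetical sketch of the two stages described above, not FRL's implementation:
# 1) fit avatar parameters to an image via a differentiable renderer and a
#    per-pixel loss, 2) train a small network to predict parameters in real time.
import torch
import torch.nn as nn

class DifferentiableAvatarRenderer(nn.Module):
    """Stand-in for a differentiable renderer: maps avatar face parameters
    to an image so gradients can flow back through the rendering step."""
    def __init__(self, param_dim=256, image_size=64):
        super().__init__()
        self.decode = nn.Sequential(
            nn.Linear(param_dim, image_size * image_size),
            nn.Sigmoid(),
        )
        self.image_size = image_size

    def forward(self, params):
        img = self.decode(params)
        return img.view(-1, 1, self.image_size, self.image_size)

def fit_parameters(renderer, target_image, param_dim=256, steps=200):
    """Stage 1: recover avatar parameters for one target image by
    minimizing a per-pixel loss through the differentiable renderer."""
    params = torch.zeros(1, param_dim, requires_grad=True)
    opt = torch.optim.Adam([params], lr=0.05)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(renderer(params), target_image)
        loss.backward()
        opt.step()
    return params.detach()

class RealtimeRegressor(nn.Module):
    """Stage 2: a small CNN that predicts face parameters directly from
    camera images, cheap enough to run per frame."""
    def __init__(self, param_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(param_dim),
        )

    def forward(self, images):
        return self.net(images)

# Usage sketch: fit parameters for one synthetic target; the regressor would
# then be trained on (image, parameter) pairs produced this way.
renderer = DifferentiableAvatarRenderer()
target = torch.rand(1, 1, 64, 64)
fitted = fit_parameters(renderer, target)
```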
Besides animating the avatars in VR, Facebook’s team is also building systems that may enable people to quickly and easily create their avatars from just a few images or videos.
The Facebook team will demonstrate their real-time VR facial animation system at SIGGRAPH 2019, which will be held in Los Angeles from July 28 to August 1.
The researchers will also present an artificial intelligence (AI) technique, based on Generative Adversarial Networks (GANs), that performs consistent multi-view image style translation, automatically converting HMC infrared images into images that look like a rendered avatar while preserving the facial expression of the VR user.
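To make the style-translation idea concrete, here is a minimal GAN sketch in PyTorch: a generator maps an infrared HMC image into the "rendered avatar" image domain while a discriminator tries to distinguish translated images from genuine avatar renders. All names, layer sizes, and the training loop are hypothetical illustrations, and the multi-view consistency term that is central to the researchers' technique is omitted for brevity.

```python
# Hypothetical sketch of GAN-based image style translation, not FRL's code.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Translates a 1-channel infrared image into a 3-channel render-style image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, ir_image):
        return self.net(ir_image)

class Discriminator(nn.Module):
    """Scores whether an image looks like a genuine avatar render."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.LazyLinear(1),
        )

    def forward(self, image):
        return self.net(image)

def train_step(gen, disc, ir_batch, render_batch, g_opt, d_opt):
    bce = nn.functional.binary_cross_entropy_with_logits
    fake = gen(ir_batch)
    # Discriminator: label real renders 1, translated images 0.
    d_loss = (bce(disc(render_batch), torch.ones(len(render_batch), 1))
              + bce(disc(fake.detach()), torch.zeros(len(ir_batch), 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator: try to make translations score as real.
    g_loss = bce(disc(fake), torch.ones(len(ir_batch), 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return g_loss.item(), d_loss.item()

# Usage sketch with random stand-in batches for the two image domains.
gen, disc = Generator(), Discriminator()
g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)
ir = torch.rand(4, 1, 64, 64)       # stand-in infrared HMC batch
renders = torch.rand(4, 3, 64, 64)  # stand-in avatar-render batch
train_step(gen, disc, ir, renders, g_opt, d_opt)
```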