Normally, creating characters for 3D games is a complicated process that requires many hours in front of the computer building 3D models, applying textures, and adapting movement dynamics to achieve an effect as realistic as possible. With the new technology demonstrated by Facebook, however, all of these steps could be skipped: everything can be done simply by analyzing a series of videos in which a real person performs activities similar to those expected from the 3D characters.
The analysis is performed by two neural networks, called Pose2Pose and Pose2Frame. The Pose2Pose network converts real-life activities, such as dancing, into virtual poses. The Pose2Frame system then recreates the person in 3D, including held objects and shadows. The resulting 3D character can be inserted into any virtual scene and controlled with a gamepad or keyboard. Alternatively, the game's A.I. can learn the movements and replay them as many times as needed.
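The two-stage pipeline described above can be sketched in simplified form. The code below is a conceptual illustration only, not Facebook's actual implementation: the `pose2pose` and `pose2frame` functions are hypothetical stand-ins for the trained networks, reduced here to simple transformations so the control loop is visible.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    # Simplified 2D skeleton: one (x, y) coordinate per joint.
    joints: list

def pose2pose(pose: Pose, control: tuple) -> Pose:
    """Stand-in for the Pose2Pose stage: advance the current pose
    according to a (dx, dy) control signal, e.g. from a gamepad."""
    dx, dy = control
    return Pose([(x + dx, y + dy) for x, y in pose.joints])

def pose2frame(pose: Pose, background: str) -> dict:
    """Stand-in for the Pose2Frame stage: composite the posed
    character (with its objects and shadow) onto any chosen scene."""
    return {"scene": background, "character": pose.joints}

# One step of the control loop: the player pushes right,
# the pose updates, and the frame is rendered into a virtual scene.
start = Pose([(0, 0), (0, 1)])
next_pose = pose2pose(start, control=(1, 0))
frame = pose2frame(next_pose, background="stadium")
```

In the real system both stages are learned networks operating on video frames; the point of the sketch is only the separation of concerns, where one stage handles motion and the other handles appearance.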
The main advantage of Facebook's A.I. system is that it can replicate a character's look and movements from just a few short videos showing the target activities, ignoring other people in the background and compensating for different filming angles.
This makes it possible to use the A.I. system even outside purpose-built studios, for example to generate virtual replicas of football players, complete with their unique reactions during a match.
While the A.I. system is very promising, Facebook still has work to do, because it is not perfect. The system needs to better understand how to anchor people in a virtual environment so that movements also account for the surface the character stands on. The dynamics of the movements are somewhat limited, but the result still conveys a sense of realism.
Once the system is finalized, it may make gaming a more personal activity, allowing players to insert their own characters into the game world.