Generating AAA Face Assets from user input
The aim of the project is to build a reliable, efficient, privacy-preserving, and highly customisable generative model that produces new high-quality 3D face models (assets) for video games from a limited set of training data. We will leverage recent developments in generative modeling to automatically generate face assets that can then be easily improved and modified by artists. We will focus on generating 2D facial images with a multi-layered texture representation, which will then be converted into 3D models.
The 2D images will be generated by an efficient diffusion model. Although these models have shown great flexibility and power, in this project we will investigate how to produce high-resolution, photorealistic images from a limited set of training data, while ensuring that the original training data is protected. Additionally, we will explore different ways of allowing Ubisoft artists to condition the final generation in order to produce the expected result.
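To illustrate the diffusion framework the project builds on, here is a minimal sketch of the closed-form forward (noising) process used by DDPM-style models; the schedule values, step count, and image shape are placeholder assumptions for illustration, not the project's actual configuration:

```python
import numpy as np

T = 1000                               # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)     # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)        # cumulative product, \bar{alpha}_t

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps, eps ~ N(0, I)."""
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise
    return xt, noise

rng = np.random.default_rng(0)
x0 = rng.standard_normal((64, 64, 3))  # stand-in for a face texture image
xt, eps = q_sample(x0, t=500, rng=rng)
```

A trained denoising network would then predict `eps` from `(xt, t)` (optionally with an artist-supplied conditioning signal) so that the reverse process can iteratively recover a clean image.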
Required knowledge
- Academic record in computer science, applied mathematics, or electrical engineering, with expertise in computer vision and deep learning. Previous experience with generative modelling and diffusion models is a plus.
- Good programming skills in Python and scientific programming. Experience with deep learning libraries such as PyTorch is a plus.