Meta has announced Meta 3D Gen, a text-to-3D generator capable of creating high-quality 3D objects from simple text descriptions in under a minute. While AI video generators have dominated recent headlines, the spotlight now appears to be shifting to 3D generation technology.
Meta 3D Gen not only creates new 3D models but can also retexture existing models using new text prompts supplied by users.
"Meta 3D Gen significantly improves key metrics for production-quality 3D assets, particularly for complex textual prompts." — Meta 3D Gen research paper
The tool uses physically based rendering (PBR), so generated assets interact with light realistically and can be relit accurately in downstream applications. Further details are available in a research paper published by the Meta research team.
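To make the PBR idea concrete, here is a minimal sketch of what a PBR material looks like and how one of its channels drives shading. The class and function names are hypothetical placeholders for illustration, not Meta's API; real assets store these values as per-texel texture maps rather than single numbers.

```python
from dataclasses import dataclass

@dataclass
class PBRMaterial:
    # Hypothetical, simplified material: real PBR assets store these
    # channels as texture maps covering the whole surface.
    albedo: tuple      # base color (R, G, B), each in [0, 1]
    metallic: float    # 0 = dielectric, 1 = metal
    roughness: float   # 0 = mirror-like, 1 = fully diffuse

def lambert_diffuse(material, normal, light_dir):
    """Lambertian diffuse term: the base color scaled by the cosine of
    the angle between the (unit) surface normal and light direction."""
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(c * n_dot_l for c in material.albedo)

# A light hitting the surface head-on returns the full base color;
# a light behind the surface contributes nothing.
mat = PBRMaterial(albedo=(0.8, 0.2, 0.2), metallic=0.0, roughness=0.5)
print(lambert_diffuse(mat, (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))
```

Because lighting is computed from material properties rather than baked into the texture, the same asset looks correct under any lighting setup, which is what makes PBR output usable in games and film pipelines.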
This new tool combines two key components: Meta 3D AssetGen and Meta 3D TextureGen. Here is how it works:
- 3D Asset Generation:
- First of all, Meta 3D AssetGen creates an initial 3D model with basic shape and texture in about 30 seconds.
- It then generates multiple views of the object and reconstructs a 3D version.
- Texture Refinement:
- Meta 3D TextureGen enhances the initial model’s textures and material properties in about 20 seconds.
- This stage can also apply new textures to existing 3D models based on different text prompts provided by users.
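The two-stage flow above can be sketched as a simple pipeline. This is purely illustrative pseudocode under the structure described in the paper; the function names and return values are invented for this sketch and do not correspond to any released Meta API.

```python
def asset_gen(prompt):
    """Stage 1 (Meta 3D AssetGen, ~30 s in the paper): produce an
    initial 3D model with basic shape and texture from a text prompt.
    Placeholder implementation for illustration only."""
    return {"prompt": prompt, "mesh": "initial", "texture": "basic"}

def texture_gen(model, texture_prompt=None):
    """Stage 2 (Meta 3D TextureGen, ~20 s in the paper): refine the
    model's textures and materials. Given a separate texture prompt,
    it can also retexture an existing model."""
    model = dict(model)
    model["texture"] = texture_prompt or f"refined:{model['prompt']}"
    return model

def meta_3d_gen(prompt):
    """Full text-to-3D pipeline: generation followed by refinement."""
    return texture_gen(asset_gen(prompt))

asset = meta_3d_gen("a bronze statue of a cat")
print(asset["texture"])
```

Splitting generation and texturing into separate stages is what enables the retexturing use case: stage 2 can be run on its own against an existing model with a fresh prompt.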
Meta claims 3D Gen outperforms other text-to-3D generators in both speed and quality. It shows particular strength in handling complex prompts and creating detailed characters and compositions.
This technology has potential applications in various fields, including:
- Video game development
- Movie visual effects
- Virtual reality experiences
- 3D printing
- Architecture and design visualization
Meta 3D Gen could significantly speed up 3D content creation, with Meta estimating it is 3 to 60 times faster than traditional methods. This efficiency could empower users to bring their ideas to life in 3D more quickly and easily.