Meta announces audio2photoreal, an AI framework that generates photorealistic character dialogue scenes from voice-over audio files
Bit News: Meta recently announced an AI framework called audio2photoreal that can generate a series of realistic NPC character models and, driven by an existing voice-over file, automatically lip-sync and pose those models.
According to the accompanying research report, after receiving the voice-over file, the audio2photoreal framework first generates a set of NPC models, then generates their motion using a combination of quantization and a diffusion model: the quantization stage supplies coarse motion samples as a reference, and the diffusion model refines the quality of the character motion the framework produces.
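The two-stage motion pipeline described above can be sketched in simplified form. This is an illustrative toy, not Meta's actual implementation: the codebook, feature shapes, and the smoothing stand-in for the learned diffusion denoiser are all assumptions made for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook of coarse pose templates (the quantization stage):
# 8 codes, each a 6-dimensional pose vector.
CODEBOOK = rng.normal(size=(8, 6))

def vq_propose(audio_features: np.ndarray) -> np.ndarray:
    """Stage 1: pick the nearest codebook pose for each audio frame,
    giving a coarse motion sample as a reference."""
    # Distance from every frame's feature vector to every code.
    dists = np.linalg.norm(
        audio_features[:, None, :] - CODEBOOK[None, :, :], axis=-1
    )
    return CODEBOOK[np.argmin(dists, axis=1)]

def _smooth(x: np.ndarray) -> np.ndarray:
    """Moving average over time, used here as a toy proxy for the
    learned denoiser a real diffusion model would provide."""
    padded = np.pad(x, ((1, 1), (0, 0)), mode="edge")
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

def diffusion_refine(coarse_poses: np.ndarray, steps: int = 10) -> np.ndarray:
    """Stage 2: start from a noisy version of the coarse poses and
    iteratively denoise toward a temporally smoothed target."""
    x = coarse_poses + rng.normal(scale=0.5, size=coarse_poses.shape)
    target = _smooth(coarse_poses)
    for _ in range(steps):
        x = x + 0.5 * (target - x)  # one toy denoising step
    return x
```

Usage would look like `diffusion_refine(vq_propose(audio_features))`, mirroring the report's description: quantization proposes the motion, diffusion improves it.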
Forty-three percent of evaluators in a controlled experiment reported being "strongly satisfied" with the dialogue scenes the framework generated, leading the researchers to conclude that audio2photoreal produces "more dynamic and expressive" motion than competing systems in the industry. The team has reportedly released the code and dataset on GitHub.