Google DeepMind’s new language model can generate soundtracks, dialogues for videos — what it is, other details

Google’s DeepMind research lab has unveiled a new AI model called V2A (Video-to-Audio) that can breathe life into silent videos by generating soundtracks and even dialogue. While video generation technology is advancing rapidly, most current systems can only produce videos without sound. The new video-to-audio (V2A) technology enables synchronised audiovisual creation by combining video pixels with text prompts.
