Advancements in Music Synthesis Using Parallel Iterative Decoding and Acoustic Token Modeling

Discrete acoustic token modeling has driven significant advances in the autoregressive generation of speech and music. In image generation, non-autoregressive parallel iterative decoding has emerged as an alternative; because it conditions on both past and future parts of a sequence, it is better suited to infilling tasks. This study combines acoustic token modeling with parallel iterative decoding for music audio synthesis, and is the first to apply parallel iterative decoding to neural music generation.
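To make the decoding idea concrete, here is a minimal toy sketch of MaskGIT-style parallel iterative decoding, which the paper builds on: at each step, every masked token is predicted in parallel, only the most confident predictions are committed, and the rest are re-masked for the next step. The `toy_model` function is a hypothetical stand-in for VampNet's trained masked-token transformer, not the actual model.

```python
import random

MASK = -1  # sentinel for a masked (not-yet-generated) token

def toy_model(tokens, vocab_size=8):
    """Stand-in for a trained masked-token predictor: for each masked
    position, return a (token, confidence) guess. Here we guess at
    random; a real model would score candidates with a transformer."""
    return {i: (random.randrange(vocab_size), random.random())
            for i, t in enumerate(tokens) if t == MASK}

def parallel_iterative_decode(tokens, steps=4):
    """Predict all masked tokens at once each step, commit only the
    most confident fraction, and re-mask the rest."""
    tokens = list(tokens)
    for step in range(steps, 0, -1):
        preds = toy_model(tokens)
        if not preds:
            break
        # commit roughly 1/step of the remaining masked positions,
        # highest-confidence predictions first
        n_keep = max(1, len(preds) // step)
        best = sorted(preds.items(), key=lambda kv: -kv[1][1])[:n_keep]
        for i, (tok, _) in best:
            tokens[i] = tok
    return tokens

# prompt tokens stay fixed; MASK positions are filled over a few steps
seq = [3, MASK, MASK, 5, MASK, MASK, MASK, 2]
print(parallel_iterative_decode(seq))
```

Because every masked position is predicted in parallel, the number of model calls is the (small, fixed) number of decoding steps rather than the sequence length, which is what makes this approach much faster than token-by-token autoregressive generation.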

The researchers use token-based prompting to adapt their model, VampNet, to a wide range of applications. By masking portions of music token sequences, they demonstrate the model’s ability to fill in the missing parts. Depending on the prompt, the model can serve as a high-quality audio compressor or generate variations of the input music in style, genre, beat, and instrumentation while preserving aspects of its timbre and rhythm. Unlike autoregressive music models, their method allows prompts to be placed anywhere in the sequence, enabling more flexible music continuations.
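The key point is that "prompting" here just means choosing which tokens to mask. A minimal sketch of an infilling prompt (the function name and sentinel are illustrative, not from the paper): keep tokens at both ends of the sequence as context and mask the middle, so generation is conditioned on past and future alike.

```python
MASK = -1  # sentinel for a masked (to-be-generated) token

def apply_inpaint_mask(tokens, keep_edges):
    """Infilling prompt: keep `keep_edges` tokens at each end as
    context and mask everything in between."""
    n = len(tokens)
    return [t if i < keep_edges or i >= n - keep_edges else MASK
            for i, t in enumerate(tokens)]

print(apply_inpaint_mask([10, 11, 12, 13, 14, 15], keep_edges=2))
# -> [10, 11, -1, -1, 14, 15]: the middle two tokens will be generated
```

A purely autoregressive model cannot condition on the trailing tokens `14, 15`; a masked parallel decoder fills the gap using both sides.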

Various prompt designs were investigated, including periodic, compression, and beat-inspired masking. The researchers found that VampNet excels at creating loops and variations, which inspired its name (after the musical practice of vamping). They have released the code and encourage readers to listen to the audio samples. VampNet, developed by researchers from Descript Inc. and Northwestern University, generates music via masked acoustic token modeling, and its different prompting schemes make it well suited to producing variations on a piece of music.
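The periodic and beat-inspired masking schemes mentioned above can be sketched in a few lines. This is an illustrative reconstruction, not VampNet's code: a periodic prompt keeps every P-th token as a sparse skeleton, while a beat-inspired prompt keeps a small window of tokens around each beat position (assumed here to come from an external beat tracker).

```python
MASK = -1  # sentinel for a masked token

def periodic_mask(tokens, period):
    """Periodic prompt: keep every `period`-th token, mask the rest,
    leaving a sparse skeleton for the model to elaborate into a variation."""
    return [t if i % period == 0 else MASK for i, t in enumerate(tokens)]

def beat_mask(tokens, beat_positions, width=1):
    """Beat-inspired prompt: keep a window of `width` tokens on each
    side of every beat position, so rhythmic anchors survive masking."""
    keep = {p + d for p in beat_positions for d in range(-width, width + 1)}
    return [t if i in keep else MASK for i, t in enumerate(tokens)]

print(periodic_mask([1, 2, 3, 4, 5, 6], period=3))
print(beat_mask(list(range(8)), beat_positions=[3], width=1))
```

Sparser masks give the model more freedom (looser variations); denser masks preserve more of the original (closer to compression).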

Musicians can use VampNet to record a short loop and let the system generate musical variations every time the loop is repeated. The researchers plan to further explore VampNet’s potential for interactive music co-creation and its representation learning capabilities in future work.

For more detailed information, you can check out the research paper.

About the author:
Aneesh Tickoo is a consulting intern at MarktechPost. He is pursuing his undergraduate degree in Data Science and Artificial Intelligence at the Indian Institute of Technology (IIT), Bhilai. Aneesh is passionate about machine learning and image processing, and loves connecting with people and collaborating on interesting projects.

