Adobe introduced Project Music GenAI Control, a platform that can generate audio from text descriptions (e.g., “happy dance,” “sad jazz”) or a reference tune, and allows users to customize the results within a single workflow.
Using Project Music GenAI Control, users can adjust things like tempo, intensity, repeating patterns, and structure. Or they can take a track and extend it to an arbitrary length, remixing the music or creating an endless loop.
Adobe developed the tool in collaboration with researchers at the University of California and Carnegie Mellon University. It is currently a research project, though it may become publicly available later; for now, the platform does not even have a user interface.
“So to be really clear: the AI creates the music, with you as the director, and you can do a lot with it,” said Gautham Mysore, Adobe’s head of audio and video AI research. “[The tool] creates the music, but [it also] gives you different forms of control so you can explore. You don’t have to be a composer to express your musical ideas.”
Mysore said Adobe typically builds its generative AI tools on licensed or public-domain data to avoid potential intellectual-property issues. He added that Adobe is working on watermarking technology to help identify audio created with Project Music GenAI Control, but acknowledged that it is still a work in progress.