
Music ControlNet, democratizing music, & the power of SLMs

šŸ TheTechOasis šŸ

Breaking down the most advanced AI systems in the world to prepare you for your future.

5-minute weekly reads.

TLDR:

  • AI Research of the Week: Music ControlNet, Democratizing Music Creation

  • Leaders: SLMs, The Key to Deploying AI Everywhere in Our Lives

🤯 AI Research of the Week 🤯

If there's one ubiquitous element across recent AI solutions, it's that almost all of them are based on text prompting.

  • ChatGPT for Text-to-text

  • Stable Diffusion for Text-to-image

  • Runway for Text-to-video

And so on.

Basically, AI right now is all about requesting something through text and getting whatever that model specializes in. And that includes music.

In fact, models like MusicGen have become extremely popular, as you can literally type a prompt like "Classic reggae track with an electronic guitar solo" and get back something that sounds good.

However, the fun pretty much ends there. You have no control over the generated melody, so its practical value is close to nonexistent.

It's a fun toy and nothing more.

But now, researchers from Adobe Research and Carnegie Mellon University have changed the game by applying to music synthesis the same discovery that revolutionized image models, creating the first model that hands you the power to create on your own terms.

For the first time, the AI musician is under control, and the potential is unbounded.

It's all a Transformation

If you asked me to define AI in one word, it would be 'transformation'.

In a nutshell, what we describe as AI is nothing but a black box with a statistical algorithm inside that takes data in one form as input and outputs data in another form, usually a prediction.

In their purest form, especially if we think about neural networks, these models are simply huge functions that model a certain task.

For instance:

  • ChatGPT is simply a huge function that has learned, upon receiving a sequence of words, to predict the next one.

  • DALL-E, upon receiving a text description, predicts the noise in an image and removes it to uncover the new image hidden behind it.

And what makes neural networks so great is that, with enough data and some clever tweaking in the structure of this function, they can learn to perform any data transformation.

From words to music, and everything in between.

Heck, in last week's newsletter, we saw how they turned brain signals from imagined movements into real robotic movements!

This capacity to model 'anything' is why neural networks are described as 'universal function approximators'.
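To make that intuition concrete, here's a minimal sketch in PyTorch of a neural network as nothing more than a parameterized function that transforms data from one form into another (the sizes here are illustrative and not tied to any particular model):

```python
import torch
import torch.nn as nn

# A neural network is just a parameterized function f(x) -> y.
# Sizes are illustrative: 64 input features in, 32 output features out.
f = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 32),
)

x = torch.randn(8, 64)   # a batch of 8 inputs in "some form"
y = f(x)                 # transformed into "another form"
print(y.shape)           # torch.Size([8, 32])
```

With enough data, adjusting those weights lets the same recipe approximate wildly different mappings, from words to pixels to spectrograms.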

But why am I telling you this? Simple: because this is the intuition we need to understand how incredible Music ControlNet is.

Applying Control

Current text-to-music models like MusicGen by Meta or MusicLM by Google are already quite impressive, but lack any sense of control.

A common problem in the space

In short, they have learned very powerful global attributes of a multitude of genres and instruments, the appropriate moods of every music style, and of course the correct tempos.

Consequently, if you ask them to generate a reggae song, they will generate something that undoubtedly sounds like it.

But reggae, just like any other genre, encompasses a multitude of different songs with varying melodies, rhythms, and dynamics that you can't control with those models.

Thus, to understand how this has been solved for music, we need first to understand how it was solved for image synthesis.

What if, for instance, I wanted to generate several images in different scenarios while always depicting the same woman?

You couldn't… until you could.

Harnessing control

To solve the image problem, Stanford University developed ControlNet, an image synthesis model that allows you to apply additional conditions besides the text prompt, like Canny edges or human poses.

This results in consistent generations that stay true to the text prompt while respecting the newly set conditions.

But how does ControlNet work?

In standard image generation models, the variables, or weights, have specialized in one specific task: generating an image that matches the given text prompt.

But if you want to be able to provide a sketch so that the model generates the image based on it, these weights canā€™t do that.

You need new ones.

You can try to fine-tune the original weights, but that comes at the cost of potentially hurting performance on the first objective, which is to generate an image that is semantically relevant to the prompt.

Bottom line, this line of action is almost never an option.

Therefore, the researchers applied a new set of weights at varying levels of the standard image generation model (they used Stable Diffusion, but the method can be applied to any other model).

For training, they froze the weights of the standard model and trained only the new weights to ensure that the delivered image also respected the new condition, such as the ones mentioned earlier (Canny edges, human poses, and so on).

The 'zero convolution' part means that the new weights are initially set to zero to prevent them from affecting the first generations too much.

A weight being zero doesn't mean it has no 'slope', i.e., its derivative can still be non-zero, so the model still learns.

Therefore, once trained, when combining both sets of weights to generate a new image:

  • The original frozen weights ensure the new image respects the text condition set by the user

  • The new weights ensure the structure of the new image follows the new condition

Et voilà.
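For the curious, here's a heavily simplified PyTorch sketch of that frozen-weights-plus-zero-convolution pattern. It is not the exact ControlNet architecture (which clones Stable Diffusion's encoder blocks and injects them into the decoder); every name and shape here is made up purely for illustration:

```python
import copy
import torch
import torch.nn as nn

def zero_conv(channels):
    # 1x1 convolution initialized to zero: it contributes nothing at first,
    # but its gradients are non-zero, so it can still learn during training.
    conv = nn.Conv2d(channels, channels, kernel_size=1)
    nn.init.zeros_(conv.weight)
    nn.init.zeros_(conv.bias)
    return conv

class ControlledBlock(nn.Module):
    def __init__(self, pretrained_block, channels):
        super().__init__()
        self.trainable = copy.deepcopy(pretrained_block)  # new weights to train
        self.frozen = pretrained_block
        for p in self.frozen.parameters():
            p.requires_grad = False                       # original weights stay untouched
        self.ctrl_in = zero_conv(channels)
        self.ctrl_out = zero_conv(channels)

    def forward(self, x, control):
        base = self.frozen(x)                             # respects the text prompt
        branch = self.trainable(x + self.ctrl_in(control))
        return base + self.ctrl_out(branch)               # adds the new condition

# Usage: at initialization, the output equals the frozen block's output exactly.
block = ControlledBlock(nn.Conv2d(64, 64, 3, padding=1), channels=64)
x = torch.randn(1, 64, 32, 32)        # intermediate image features
control = torch.randn(1, 64, 32, 32)  # encoded condition, e.g. an edge map
out = block(x, control)
```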

And now, this seminal architecture has been brought into music… and the results are amazing.

A New Era

Just like Image ControlNet allows you to fix certain image conditions, Music ControlNet allows you to fix melody, rhythm, and dynamics.

The concept is very similar; they add three control blocks to the model, each a new set of weights specialized in one of the new conditions.

And just like in image synthesis, these weights encode the new conditions and add new information to the model during the decoding process, so that the new music not only respects global style but also applies the new conditions.

But here, researchers add a twist.

Modeling raw sound waves is really hard, as audio is typically sampled at 16 to 48 kHz, meaning the model would have to learn to predict at least 16,000 data points per second.

Luckily, with sound, we have mel spectrograms, visual representations of the spectrum of frequencies of a signal as they vary with time.

The Mel scale is designed to mimic the non-linear human ear perception of sound, meaning it gives a more perceptually-relevant representation of audio.

A spectrogram is like a screenshot of the loudness and pitch of a sound at any given time, retaining most of the useful information from the real sound wave while being far easier to model, at only 50 to 100 frames per second.
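To put numbers on that, here's roughly how you would compute a mel spectrogram with librosa; the file name and parameter values are just examples, not the ones used in the paper:

```python
import librosa
import numpy as np

# Raw audio: at a 22,050 Hz sample rate, that's 22,050 values per second.
audio, sr = librosa.load("song.wav", sr=22050)

# Mel spectrogram: 128 frequency bands, one frame every 256 samples.
mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_fft=2048,
                                     hop_length=256, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)  # log scale, closer to perceived loudness

print(mel_db.shape)   # (128, n_frames)
print(sr / 256)       # ~86 frames per second instead of 22,050 samples per second
```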

Finally, as we already have neural networks that map spectrograms to actual sound, called vocoders, researchers here simply use a pre-trained one.

But there's one more thing: the model also allows you to partially condition your generations, meaning you can set the conditions for only part of the piece and let the model improvise the rest, making Music ControlNet the first AI solution that truly sets the stage for the disruption of the music industry.
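As a rough sketch of what partial conditioning could look like under the hood, you can think of it as masking the control signal over time; the shapes and names below are hypothetical, not the paper's actual interface:

```python
import numpy as np

n_frames = 860                                   # ~10 seconds at ~86 spectrogram frames/s
melody_control = np.random.rand(128, n_frames)   # stand-in for an encoded melody condition

mask = np.zeros(n_frames)
mask[: n_frames // 2] = 1.0                      # condition only the first half of the piece

partial_control = melody_control * mask          # zeroed where the model is free to improvise
# partial_control (and the mask itself) would then be fed to the corresponding control branch
```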

Democratizing Music

As you can hear in the published examples, Music ControlNet represents a great leap in music synthesis.

Now, anyone can create their own music without having to play a single instrument.

The ethical implications of these types of technologies have yet to be understood, but what we can guarantee is that AI is leaving no stone unturned.

🫡 Key contributions 🫡

  • Music ControlNet allows the user to create music in a given genre and style while giving them full control over melody, dynamics, and rhythm

  • It also lets you partially condition the model, leaving it to improvise parts of the piece to enhance your musical creations

🔮 Practical implications 🔮

  • Everyone gets to be a musician in the AI-run world

  • For trained artists, music editing and production will be taken to a completely new level

👾 Best news of the week 👾

šŸ˜ Amazing intro to LLMs by the man himself, Andrej Karpathy

😟 Here's why Sam Altman got ousted, according to Reuters

🥸 ChatGPT turns 1 year old. What have we learned?

😎 Take a look at the amazing text-to-video model, Stable Video

🥇 Leaders 🥇

SLMs, The Key to Deploying AI Everywhere in Our Lives

With NLP foundation models like ChatGPT, we have finally broken down the language barrier with machines.

This, added to their core reasoning capabilities, makes them the ideal technology to be deployed, literally, everywhere.

But with current models that's not possible, for two reasons: huge computing requirements and huge costs.

Luckily, the rapid developments in the world of Small Language Models (SLMs), which have gotten Microsoft's CEO, Satya Nadella, "very excited", not only solve these issues; their quality/price ratio is so good that they could soon shift the world's interest away from LLMs, the current kings and queens of the AI industry, toward these small guys. Undeniably, SLMs represent the key to unlocking AI's impact across the world.

"The best things come in small packages," as they say, and AI could soon set another example of this.
