SALMONN, the First Model that Hears like Humans do
TheTechOasis
Breaking down the most advanced AI systems in the world to prepare you for your future.
5-minute weekly reads.
TLDR:
AI Research of the Week: SALMONN, the First-Ever Audio-Language Multimodal LLM
Leaders: Why Biden's AI Executive Order May Change the World Forever
AI Research of the Week
People often underestimate the importance of hearing to function correctly in our world and, more importantly, as an essential tool for learning.
As the famed Helen Keller once said, "Blindness cuts us off from things, but deafness cuts us off from people," and let's not forget that this woman was blind and deaf.
Therefore, it's only natural to see hearing as an indispensable requirement for AI to become the sought-after superior "being" that some people predict it will become.
Sadly, current AI systems suck at hearing.
Yes, OpenAI's Whisper understands speech pretty well, and other models capture audio events very efficiently.
But hearing is much more than that. Humans understand speech, encode random noises, and enjoy music, making "generic hearing" one of the last features that AI couldn't replicate from humans.
Now, a new model created by the company behind TikTok, ByteDance, challenges this vision.
SALMONN is the first-ever multimodal audio-language AI system for generic hearing, a model that can process random audio signals from the three main sound types: speech, audio events, and music.
What's more, as we'll see shortly, it showcases truly unique, never-seen-before capabilities like audio storytelling or audio-speech co-reasoning.
And today, we are breaking down how it works.
Building True Multimodality
To build models that process data in more ways than text, we usually connect an encoder and a Large Language Model (LLM) through some sort of adaptor or cross-attention mechanism, with cases like the recently-covered LLaVa.
Image-language multimodality is very common, but audio, on the other hand, is much less visible today.
But in both cases, a latent problem persists.
Most of these models are great at standalone vision, audio, or language tasks, but fail miserably when facing cross-modal tasks, especially when talking about audio models.
For instance, if we send the model a sound file where someone asks "Why am I shouting?", it should give completely different answers depending on the speaker's situation.
In one recording, the background might include gunshots or explosions; in another, loud music, singing, and so on.
Thus, both situations require different answers, but current audio models can't handle such complex tasks.
Amazingly, SALMONN can.
A Tale of Three Steps
Although at first glance SALMONN looks similar to how other Multimodal LLMs are trained, it has some important distinctions.
Bridging data types
LLMs like ChatGPT take a text sequence and complete it by predicting the "next word".
To do so, they have been trained by seeing millions upon millions of text passages.
But if we send them an image or an audio clip, it won't work because they won't understand it, just like you won't get an answer back if you ask a Japanese person "Where's the Shibuya crossing?" while sightseeing in Tokyo, unless that person speaks English.
One thing you can do is show that person an image of the crossing, and he/she will naturally understand you want directions without having to speak English.
This works because a question about the Shibuya crossing and an image of the crossing itself refer to the same salient concept: the world-famous landmark.
We humans are multimodal, so the concept remains the same no matter the way it is presented to us, via speech or images.
But to achieve this in AI, we have to create a "joint embedding space".
The embedding space is a compressed representation of our world that the model builds during training, where similar things have similar vectors, and dissimilar things have widely different vectors.
Machines only understand numbers. Hence, real-life concepts like "cat" or "dog" are represented in the form of vectors so that machines can process them. This is called an embedding.
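To make this concrete, here is a minimal sketch (using made-up toy vectors, not real learned embeddings) of how "similar things have similar vectors" can be checked with cosine similarity:

```python
import numpy as np

# Toy 4-dimensional embeddings (invented for illustration; real models use
# hundreds or thousands of dimensions learned during training).
cat = np.array([0.9, 0.1, 0.3, 0.0])
dog = np.array([0.8, 0.2, 0.4, 0.1])
car = np.array([0.0, 0.9, 0.1, 0.8])

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(cat, dog))  # high: related concepts
print(cosine_similarity(cat, car))  # low: unrelated concepts
```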
LLMs work only with text, so for them to be able to comprehend that an image and a text describing the "Shibuya crossing" refer to the same thing, we need the output vectors of the image and the text to be very similar.
In other words, as the sound encoders process the music, audio, or speech events and transform them into compressed representations of those audio signals (represent them in vector form), we apply a transformation that "maps" those outputs into the LLM's vector space.
This way, much like you can use a language translator to translate your English question into Japanese to ask for directions, the LLM will see the outputs of the audio and speech encoders in a form it understands.
This is what SALMONN's Q-Former module does, allowing the LLM to process both modalities and answer back with information about both.
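Conceptually, you can picture the Q-Former as a small module whose learnable "query" vectors cross-attend to the audio encoder's outputs and emit a handful of tokens in the LLM's embedding width. The sketch below is a heavily simplified stand-in (a single cross-attention layer with invented dimensions), not SALMONN's actual implementation:

```python
import torch
import torch.nn as nn

class TinyQFormer(nn.Module):
    """Simplified Q-Former-style adaptor: learnable queries cross-attend to
    audio features and get projected into the LLM's embedding space."""
    def __init__(self, audio_dim=1024, llm_dim=4096, num_queries=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, audio_dim))
        self.cross_attn = nn.MultiheadAttention(audio_dim, num_heads=8, batch_first=True)
        self.proj = nn.Linear(audio_dim, llm_dim)  # map into the LLM's vector space

    def forward(self, audio_feats):               # (batch, time, audio_dim)
        b = audio_feats.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        out, _ = self.cross_attn(q, audio_feats, audio_feats)
        return self.proj(out)                      # (batch, num_queries, llm_dim)

audio_feats = torch.randn(1, 100, 1024)            # e.g. 100 frames of encoder output
audio_tokens = TinyQFormer()(audio_feats)
print(audio_tokens.shape)                          # torch.Size([1, 8, 4096])
```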
Let's see this with an example:
The audio input is sent to the sound encoders, which process it into a sequence of audio embeddings.
The Q-Former then takes those outputs and projects them into a vector space that the LLM can process.
Using the previous example, you can think of this as the Q-Former taking an audio clip of a person describing what the Shibuya crossing looks like, and transforming it into a series of text tokens describing the Shibuya crossing.
In the meantime, the text prompt is processed through the LLM's embedding look-up matrix, transforming text data into text embeddings.
LLMs like ChatGPT don't have a text encoder but a "look-up" matrix.
For instance, if the input sequence has the word "table", the model searches for the equivalent embedding for "table" in the matrix, instead of calculating it with an encoder.
Consequently, this matrix has as many rows as there are tokens in the LLM's vocabulary.
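A minimal sketch of that look-up (vocabulary size and embedding width are invented for illustration):

```python
import torch
import torch.nn as nn

vocab_size, embed_dim = 32000, 4096           # invented sizes for illustration
lookup = nn.Embedding(vocab_size, embed_dim)  # one row of the matrix per token

token_ids = torch.tensor([[17, 204, 9981]])   # e.g. the tokenized prompt
text_embeddings = lookup(token_ids)           # rows are fetched, not computed
print(text_embeddings.shape)                  # torch.Size([1, 3, 4096])
```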
The LLM's decoder then processes the projected audio embeddings and text embeddings as a single sequence of data, performing the "word prediction" exercise that any LLM performs.
As audio tokens are transformed into text tokens with the Q-Former, this exercise becomes just like any other sequence-to-sequence word prediction.
As SALMONN can process generic audio, it is capable of separating the speaker's speech from the background noises to process what he is saying, while successfully grasping the context that those background noises provide.
Finally, the embeddings are sent to the LLM's decoder, which uses them to predict the next word.
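Putting the walkthrough together, the core trick is simply concatenating the projected audio tokens with the text embeddings and letting the decoder treat them as one sequence. Here is a hedged, minimal sketch with dummy tensors standing in for the real encoder, Q-Former, and LLM outputs:

```python
import torch

# Dummy stand-ins for the real components (shapes are invented for illustration).
audio_tokens    = torch.randn(1, 8, 4096)    # Q-Former output, already in the LLM's space
text_embeddings = torch.randn(1, 12, 4096)   # prompt embeddings from the look-up matrix

# The LLM's decoder sees one combined sequence: [audio tokens | text tokens]
inputs_embeds = torch.cat([audio_tokens, text_embeddings], dim=1)
print(inputs_embeds.shape)                   # torch.Size([1, 20, 4096])

# With Hugging Face-style decoder-only LLMs, such a sequence can typically be
# passed via the `inputs_embeds` argument instead of `input_ids`, and the model
# then predicts the next word as usual (the exact API depends on the model).
```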
This seemingly simple activity is what we call audio-language co-reasoning, a remarkable new feature for AI that is far from trivial because:
The model has to disentangle different types of sound in a blended audio signal.
Audio events are usually processed in spectrogram form because they don't require temporal correlations; speech, on the other hand, is sequential, so it does. Thus, they require different types of encoders.
The model needs to showcase cross-modal capabilities, understanding the context around the speech to provide an accurate answer.
And the model also needs to embed audio and language into a common embedding space to be able to answer the question.
Additionally, you can ask the model to create stories based on audio files, an absolute first in AI too.
ByteDance bites first
SALMONN is, to the best of current knowledge, the first MLLM for generic audio and language, pulling us closer to creating models that "hear" as we do.
And just like that, think about the potential applications:
better hearing-aid products,
more sensitive and alert smart home assistants that call emergency services when someone falls, or alert you when your baby starts crying,
or educational products that give you cues on how to improve your music playing, and many more.
Anyone can agree that with AI things can get "fishy" in terms of the negative impact of its misuse, but with models like SALMONN, it's fishy in a good way.
Click here to access the GitHub repository and real examples.
Key contributions
ByteDance presents SALMONN, the first-ever MLLM that processes generic sounds and text
It's capable of performing new, unprecedented tasks like audio storytelling or audio-speech co-reasoning
Practical implications
Improving hearing-aid products
Next generation of Smart Home systems
Healthcare monitoring to track speech from patients and identify emotional states, or even health conditions
Best news of the week
Elon Musk's xAI released its first chatbot yesterday
Runway's new video model is literally mind-blowing
Instagram might be building a customizable AI friend
Microsoft presents its ground-breaking Phi 1.5 model
Leaders
Why Biden's AI Executive Order May Change The World Forever
Things just got real.
In case you were still wondering if countries were taking AI risks seriously, Joe Biden has presented the AI Executive Order, the first real effort to regulate AI globally.
But wait, wasn't Europe much closer to passing an AI Act?
Yes, by a landslide, and indeed the US Congress was nowhere near reaching an agreement.
But then, how?
Simple: a Cold War-era law from the 1950s has been invoked.
The Defense Production Act, a piece of legislation that grants the POTUS the power to bypass Congress when there is a real risk to the national security of the United States of America, has been used to enact this Executive Order.
In other words, in the eyes of the White House, AI now represents a real risk to national security at the level of nuclear, biological, or chemical weapons.
Yes, the seemingly harmless chatbot you talk to every day to speed up your essays could very well be the precursor to weapons of mass destruction.
As expected, this executive order has been met with a good amount of praise... and also its fair share of heavy criticism.
Today we are diving deep into what this AI Executive Order really means to you, what could be the intentions behind it, and why it can change the course of history.
The War on Foundation Models
The whole Executive Order's argument regarding AI risks is that now everyone, including malicious actors, has access to "dual-use foundation models".
But what is a dual-use foundation model?
When generalization became a bad thing
The critical discovery underpinning the success of AI during the last year is that we have finally figured out how to break what I call the "1:1 model-to-task ratio".
Historically, AI models have always been designed to do one task, and that's all.
But when the seminal AI architecture known as the Transformer was released in 2017, researchers realized they could train a model in a self-supervised manner at scale.
This meant that the supervisory signal, the way we tell a model whether its predictions are correct, comes from the data itself, so we don't have to label it manually; thus, huge amounts of text could be used to train the models.
Exposing models to such quantities of data allowed for the creation of the first general-purpose models, models that could generalize to multiple seen and unseen tasks.
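In code, "the supervisory signal comes from the data itself" usually boils down to using the same text, shifted by one token, as the label; a minimal sketch (toy token IDs, random logits standing in for a real model):

```python
import torch
import torch.nn.functional as F

# Toy token IDs for one sentence; in practice these come from a tokenizer.
tokens = torch.tensor([[5, 42, 7, 19, 3]])

inputs  = tokens[:, :-1]   # the model sees everything except the last token
targets = tokens[:, 1:]    # ...and must predict each next token: no human labels needed

# Pretend model output: one score per vocabulary entry at each position.
vocab_size = 100
logits = torch.randn(1, inputs.size(1), vocab_size)

loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
print(loss.item())
```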
This is what we define as "foundation models", models that can be used as the "foundation" to leverage AI for a myriad of tasks, ranging from asking it to give you ideas for an essay... to building domestic bombs.
This concept, known as generalization (the model learns to predict well on data not seen during training), was once AI's holy grail, but is now also considered a huge problem.
FLOPs and large models, enemies of the state
The biggest surprise of all has been the decision that models trained above a certain number of floating-point operations (FLOPs) will be heavily scrutinized, meaning that the companies training them must consistently report progress and, more importantly, invest heavily in red-teaming efforts.
Red teaming is an adversarial testing process where people try to induce the model into giving harmful responses, such as instructions for building bombs.
The number?
10^26 operations, or 100 trillion trillion operations.
That may seem like an insurmountable number, but it is far from it.
It is rumored that GPT-4 required around 20 trillion trillion operations (2×10^25), so it's fair to say we are approaching that threshold.
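A quick back-of-the-envelope comparison of those two figures (both are the numbers cited above, not official disclosures):

```python
threshold_flops = 1e26   # the Executive Order's reporting threshold
gpt4_flops      = 2e25   # rumored training compute for GPT-4 (unconfirmed)

print(f"GPT-4 is rumored to sit at {gpt4_flops / threshold_flops:.0%} of the threshold")
print(f"Roughly a {threshold_flops / gpt4_flops:.0f}x scale-up would cross it")
# -> GPT-4 is rumored to sit at 20% of the threshold
# -> Roughly a 5x scale-up would cross it
```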
The rationale behind this idea is purely based on scaling laws, the belief that as models get larger they will become more and more capable and, thus, represent an increasing risk to humanity.
But, are they really?
Firstly, there is no proof that this is the case; the only thing we know is that perplexity, the main metric for measuring an LLM's capacity to model language (that is, to predict the next word in a sequence), has shown no signs of saturation, so the natural inclination of researchers is to make models bigger.
Perplexity measures how "sure" a model is about the next word. In mathematical terms, training is about maximizing the probability that the chosen word is the expected one, and the more confident the model is in the correct words, the lower its perplexity.
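Here is a minimal sketch of how perplexity falls out of the probabilities a model assigns to the correct next words (toy numbers, not a real model):

```python
import math

# Probabilities a hypothetical model assigned to the *correct* next word
# at each position in a sentence (toy values for illustration).
p_correct = [0.50, 0.25, 0.80, 0.10]

# Perplexity = exp(average negative log-likelihood); lower is better.
nll = [-math.log(p) for p in p_correct]
perplexity = math.exp(sum(nll) / len(nll))
print(round(perplexity, 2))  # ~3.16 here; a perfect model would score 1.0
```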
And to make things worse, the requirements to improve models demand exponential increases in computing: according to MIT, a simple error reduction from 5% to 3% on ImageNet requires two orders of magnitude more computation and the equivalent of New York City's CO2 emissions for a month.
But the greatest fear that keeps AI doomsayers like Joe Biden up at night when thinking about making models bigger is what we describe as emergent capabilities.
One of the most fascinating mysteries of current frontier AI models, emergence has fueled the creation of the Frontier Model Forum between Google, Microsoft, OpenAI, and Anthropic, and it is one of the main reasons some of the brightest minds of our time, like Ilya Sutskever or Yoshua Bengio, have heavily shifted their research focus toward AI as an existential threat to humanity.