Automatic speech recognition (ASR), the conversion of speech to text, is a very important and thriving area of machine learning. ASR algorithms run on practically every smartphone, and are becoming increasingly embedded in professional workflows, such as digital assistants for nurses and doctors. Because ASR algorithms are designed to be used directly by customers and end users, it is important to validate that they behave as expected when confronted with a wide variety of speech patterns (different accents, pitches, and background audio conditions).
Using `gradio`, you can easily build a demo of your ASR model and share it with a testing team, or test it yourself by speaking through the microphone on your device.
This tutorial will show how to take a pretrained speech-to-text model and deploy it with a Gradio interface. We will start with a full-context model, in which the user speaks the entire audio before the prediction runs. Then we will adapt the demo to make it streaming, meaning that the model will transcribe audio as you speak.
Make sure you have the `gradio` Python package already installed. You will also need a pretrained speech recognition model. In this tutorial, we will build our demos using the `transformers` library (for this, run `pip install torch transformers torchaudio`). Make sure you have it installed so that you can follow along. You will also need `ffmpeg` installed on your system, if you do not already have it, to process files from the microphone.
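For reference, a minimal setup might look like this (the exact ffmpeg install command depends on your platform):

```bash
pip install gradio torch transformers torchaudio

# ffmpeg is a system dependency, not a Python package; install it with your
# platform's package manager, for example:
#   apt install ffmpeg    # Debian/Ubuntu
#   brew install ffmpeg   # macOS
```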
Here's how to build a real-time speech recognition (ASR) app:
First, you will need an ASR model that you have either trained yourself, or you will need to download a pretrained model. In this tutorial, we will start with a pretrained ASR model from Hugging Face, `whisper`.

Here is the code to load `whisper` from Hugging Face `transformers`:
```python
from transformers import pipeline

p = pipeline("automatic-speech-recognition", model="openai/whisper-base.en")
```
That's it!
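If you want to sanity-check the pipeline before wiring it into a demo, you can call it directly on an audio file; `transformers` will decode it with `ffmpeg`. The filename below is a placeholder for any audio file you have locally:

```python
# "sample.wav" is a hypothetical local file; any WAV/MP3 on disk works.
print(p("sample.wav")["text"])
```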
We will start by creating a full-context ASR demo, in which the user speaks the full audio before using the ASR model to run inference. This is very easy with Gradio -- we simply create a function around the `pipeline` object above.

We will use `gradio`'s built-in `Audio` component, configured to take input from the user's microphone and return the recorded audio as a tuple of (sample rate, numpy audio array). The output component will be a plain `Textbox`.
```python
import gradio as gr
from transformers import pipeline
import numpy as np

transcriber = pipeline("automatic-speech-recognition", model="openai/whisper-base.en")

def transcribe(audio):
    sr, y = audio

    # Convert to mono if stereo
    if y.ndim > 1:
        y = y.mean(axis=1)

    # The pipeline expects float32 audio normalized to [-1.0, 1.0]
    y = y.astype(np.float32)
    peak = np.max(np.abs(y))
    if peak > 0:  # guard against division by zero on silent input
        y /= peak

    return transcriber({"sampling_rate": sr, "raw": y})["text"]

demo = gr.Interface(
    transcribe,
    gr.Audio(sources="microphone"),
    "text",
)

demo.launch()
```
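If you want to share the demo with a testing team, as mentioned in the intro, Gradio can create a temporary public link; just replace the launch call above with:

```python
demo.launch(share=True)  # generates a temporary public URL for the demo
```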
The `transcribe` function takes a single parameter, `audio`, which is a tuple of the sample rate and a numpy array of the audio the user recorded. The `pipeline` object expects the audio in float32 format, so we first convert the array to float32 and normalize it, and then extract the transcribed text.
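To make the preprocessing concrete, here is a minimal sketch on synthetic data (the tone, sample rate, and values are made up purely for illustration):

```python
import numpy as np

# Fake "recording": one second of a 440 Hz tone as int16,
# the integer format a microphone recording typically arrives in.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
y = (np.sin(2 * np.pi * 440 * t) * 32767).astype(np.int16)

# Same preprocessing as in transcribe():
y = y.astype(np.float32)
y /= np.max(np.abs(y))  # values now normalized to [-1.0, 1.0]
print(y.dtype, y.min(), y.max())
```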
To make this a streaming demo, we need to make these changes:

1. Set `streaming=True` in the `Audio` component
2. Set `live=True` in the `Interface`
3. Add a `state` to the interface to store the recorded audio of a user

Take a look below.
```python
import gradio as gr
from transformers import pipeline
import numpy as np

transcriber = pipeline("automatic-speech-recognition", model="openai/whisper-base.en")

def transcribe(stream, new_chunk):
    sr, y = new_chunk

    # Convert to mono if stereo
    if y.ndim > 1:
        y = y.mean(axis=1)

    # The pipeline expects float32 audio normalized to [-1.0, 1.0]
    y = y.astype(np.float32)
    peak = np.max(np.abs(y))
    if peak > 0:  # guard against division by zero on silent chunks
        y /= peak

    # Append the new chunk to the audio received so far
    if stream is not None:
        stream = np.concatenate([stream, y])
    else:
        stream = y

    return stream, transcriber({"sampling_rate": sr, "raw": stream})["text"]

demo = gr.Interface(
    transcribe,
    ["state", gr.Audio(sources=["microphone"], streaming=True)],
    ["state", "text"],
    live=True,
)

demo.launch()
```
Notice that we now have a state variable, because we need to track the full audio history. `transcribe` gets called whenever a new small chunk of audio arrives, but we also need to keep track of all the audio spoken so far in the state. As the interface runs, the `transcribe` function gets called with a record of all the previously spoken audio in `stream` and the new chunk of audio as `new_chunk`. We return the new full audio so that it is stored back in the state, and we also return the transcription. Here, we naively concatenate the audio and call the `transcriber` object on the entire audio so far. You can imagine more efficient ways of handling this, such as re-processing only the last 5 seconds of audio whenever a new chunk is received (a sketch of this appears below).
Now the ASR model will run inference as you speak!
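As a concrete illustration of the windowing idea mentioned above, here is a minimal sketch that reuses `transcriber` and `np` from the streaming demo. The window length is an arbitrary choice, and note the simplification: only the retained window gets transcribed, so earlier text is dropped from the output.

```python
# Keep only the most recent audio so each transcriber call stays bounded.
# WINDOW_SECONDS is an illustrative value, not a recommendation.
WINDOW_SECONDS = 5

def transcribe(stream, new_chunk):
    sr, y = new_chunk
    if y.ndim > 1:
        y = y.mean(axis=1)
    y = y.astype(np.float32)
    peak = np.max(np.abs(y))
    if peak > 0:
        y /= peak
    stream = y if stream is None else np.concatenate([stream, y])
    stream = stream[-WINDOW_SECONDS * sr:]  # drop audio older than the window
    return stream, transcriber({"sampling_rate": sr, "raw": stream})["text"]
```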