---
license: other
license_name: health-ai-developer-foundations
license_link: https://developers.google.com/health-ai-developer-foundations/terms
language:
- en
tags:
- medical
- medical-embeddings
- audio
- health-acoustic
extra_gated_heading: Access HeAR on Hugging Face
extra_gated_prompt: >-
  To access HeAR on Hugging Face, you're required to review and agree to [Health
  AI Developer Foundation's terms of
  use](https://developers.google.com/health-ai-developer-foundations/terms). To
  do this, please ensure you're logged in to Hugging Face and click below.
  Requests are processed immediately.
extra_gated_button_content: Acknowledge license
library_name: transformers
---
# HeAR model card

**Model documentation:** [HeAR](https://developers.google.com/health-ai-developer-foundations/hear)

**Resources**:

*   Model on Google Cloud Model Garden: [HeAR](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/hear)

*   Model on Hugging Face (PyTorch): [google/hear-pytorch](https://huggingface.co/google/hear-pytorch)

*   Model on Hugging Face (Tensorflow): [google/hear](https://huggingface.co/google/hear)

*   GitHub repository (supporting code, Colab notebooks, discussions, and
    issues): [HeAR](https://github.com/google-health/hear)

*   Quick start notebook (PyTorch): [notebooks/quick\_start\_pytorch](https://github.com/google-health/hear/blob/master/notebooks/quick_start_with_hugging_face_pytorch.ipynb)

*   Quick start notebook (Tensorflow): [notebooks/quick\_start](https://github.com/google-health/hear/blob/master/notebooks/quick_start_with_hugging_face.ipynb)

*   Support: See
    [Contact](https://developers.google.com/health-ai-developer-foundations/hear/get-started.md#contact).

Terms of use: [Health AI Developer Foundations terms of
use](https://developers.google.com/health-ai-developer-foundations/terms)

**Author**: Google

## Model information

This section describes the HeAR model and how to use it. HeAR was originally
released as a Tensorflow SavedModel at https://huggingface.co/google/hear.
This is an equivalent PyTorch implementation.

### Description

Health-related acoustic cues originating from the respiratory system's airflow,
such as coughs and breathing patterns, can be harnessed for health monitoring.
These sounds can be collected with ambient sensing technologies on ubiquitous
devices such as mobile phones, which may augment screening capabilities and
inform clinical decision making. Health acoustics, specifically non-semantic
respiratory sounds, also have potential as biomarkers for detecting and
monitoring various health conditions, for example, identifying disease status
from cough sounds or measuring lung function from exhalation sounds made during
spirometry.

Health Acoustic Representations (HeAR) is a health acoustic foundation model
pre-trained to efficiently represent these non-semantic respiratory sounds,
accelerating the research and development of AI models that use them as inputs
to make predictions. HeAR is trained in an unsupervised manner on a large and
diverse unlabelled corpus, which may help it generalize better than
non-pretrained models to unseen distributions and new tasks.

Key Features

*   Generates health-optimized embeddings for biological sounds such as coughs
    and breaths

*   Versatility: Exhibits strong performance across diverse health acoustic
    tasks.

*   Data Efficiency: Demonstrates high performance even with limited labeled
    training data for downstream tasks.

*   Microphone robustness: Downstream models trained using HeAR generalize
    well to sounds recorded from unseen devices.

Potential Applications

HeAR can be a useful tool for AI research geared towards
discovery of novel acoustic biomarkers in the following areas:

*   Aid screening & monitoring for respiratory diseases like COVID-19,
    tuberculosis, and COPD from cough and breath sounds.

*   Low-resource settings: Can potentially augment healthcare services in
    settings with limited resources by offering accessible screening and
    monitoring tools.

### How to use

Below are some example code snippets to help you quickly get started running the
model locally. If you want to use the model to run inference on a large amount
of audio, we recommend that you create a production version using [the Vertex
Model
Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/hear).

#### Audio representation with embeddings

```python
import torch

# HeAR config, model, classification head, and feature-extractor classes.
from fcv_detector.models.hear import (
    HearConfig,
    HearModel,
    HearForAudioClassification,
    HearFeatureExtractor,
)

from transformers import (
    AutoConfig,
    AutoModel,
    AutoModelForAudioClassification,
    AutoFeatureExtractor,
)

# Register the custom HeAR classes with the transformers Auto* factories.
AutoConfig.register("hear", HearConfig)
AutoModel.register(HearConfig, HearModel)
AutoModelForAudioClassification.register(HearConfig, HearForAudioClassification)
AutoFeatureExtractor.register(HearConfig, HearFeatureExtractor)

# Log in if no Hugging Face token is cached (the model is gated).
from huggingface_hub.utils import HfFolder
from huggingface_hub import notebook_login

if HfFolder.get_token() is None:
    notebook_login()

model_id = "google/hear-pytorch"
model = AutoModel.from_pretrained(model_id)
fe = AutoFeatureExtractor.from_pretrained(model_id)

# Four random two-second clips at 16 kHz (2 s x 16,000 Hz = 32,000 samples).
raw_audio_batch = torch.rand((4, 32000), dtype=torch.float32)
inputs = fe(raw_audio_batch, return_tensors="pt")
output = model(**inputs)
print(output)
```

```
You are using a model of type vit to instantiate a model of type hear. This is not supported for all configurations of models and can yield errors.
BaseModelOutputWithPooling(last_hidden_state=tensor([[[ 0.1638,  0.0311, -0.3071,  ..., -0.1555, -0.0380, -0.3294],
         [ 0.1879,  0.9123,  0.3434,  ...,  2.1157, -0.2212, -0.5031],
         [ 0.4474, -0.3095,  0.1068,  ..., -1.1577, -0.1871, -1.1114],
         ...,
         [-1.1620, -0.6956,  0.0340,  ...,  0.2741, -0.3230, -0.7366],
         [-0.2818, -0.1758, -0.1667,  ...,  0.3051, -0.3197, -0.6817],
         [-0.5189, -0.3460,  0.0631,  ...,  0.2027, -0.5678, -0.2382]],

        [[ 0.1788,  0.0652, -0.2803,  ..., -0.1490, -0.0312, -0.2837],
         [-0.1547,  0.6340,  0.0806,  ...,  2.1374, -0.3951, -0.5316],
         [-0.2770,  0.7531,  0.4323,  ...,  0.9180, -0.3570, -0.1897],
         ...,
         [-1.3322, -0.0332, -0.2455,  ...,  0.4821, -0.0645, -0.9346],
         [-1.3276, -0.6403, -0.0455,  ...,  0.6166, -0.4472, -0.4335],
         [-1.0610,  0.2751, -0.2439,  ...,  0.7873, -0.1567, -0.4248]],

        [[ 0.1755,  0.1288, -0.2913,  ..., -0.1226, -0.0644, -0.3382],
         [ 0.1055,  1.1124, -0.2281,  ...,  3.2376, -0.3979, -0.5840],
         [-0.6490, -0.3893,  0.4327,  ...,  2.4446, -0.2480, -0.9221],
         ...,
         [-1.5817, -0.0733, -0.7567,  ...,  1.0221, -0.4246, -0.9694],
         [ 0.1373, -0.0258,  0.2139,  ...,  1.2905, -0.2469, -0.8213],
         [-1.2737,  0.2838, -0.1167,  ...,  0.8610, -0.2919, -0.8152]],

        [[ 0.1398,  0.1110, -0.2897,  ..., -0.1562, -0.0699, -0.3052],
         [-0.1940,  0.1297,  0.1607,  ...,  3.2720, -0.0289, -1.0005],
         [-0.3104,  0.6009, -0.1392,  ...,  2.7523, -0.0829, -0.6996],
         ...,
         [-0.9739, -0.4732,  0.0499,  ...,  1.8665, -0.2438, -0.7332],
         [-0.3944,  0.1800, -0.0829,  ...,  1.2693, -0.6084, -0.7625],
         [-1.5253,  0.4868, -0.3012,  ...,  1.5606, -0.0050, -0.4669]]],
       grad_fn=<NativeLayerNormBackward0>), pooler_output=tensor([[-0.3807,  0.9901,  0.5437,  ...,  1.0000,  0.5777,  0.9752],
        [-0.4004,  0.9932,  0.7021,  ...,  1.0000,  0.7681,  0.9804],
        [-0.3874,  0.9964,  0.5076,  ...,  1.0000,  0.8015,  0.9823],
        [-0.3838,  0.9970,  0.5793,  ...,  1.0000,  0.8024,  0.9895]],
       grad_fn=<TanhBackward0>), hidden_states=None, attentions=None)
```
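
The pooled output above can serve as a per-clip embedding for downstream models.
The short sketch below is an illustration rather than part of the official API:
it assumes `output.pooler_output` is the clip-level embedding you want to reuse.

```python
import numpy as np

# Re-run the forward pass without tracking gradients; only embeddings are needed.
with torch.no_grad():
    output = model(**inputs)

# Assumption: the pooled output is taken as the per-clip embedding.
embeddings = output.pooler_output.cpu().numpy()
print(embeddings.shape)  # (4, embedding_dim) for the four random clips above
```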

#### Audio classification

```python
import torch

# HeAR config, model, classification head, and feature-extractor classes.
from fcv_detector.models.hear import (
    HearConfig,
    HearModel,
    HearForAudioClassification,
    HearFeatureExtractor,
)

from transformers import (
    AutoConfig,
    AutoModel,
    AutoModelForAudioClassification,
    AutoFeatureExtractor,
)

# Register the custom HeAR classes with the transformers Auto* factories.
AutoConfig.register("hear", HearConfig)
AutoModel.register(HearConfig, HearModel)
AutoModelForAudioClassification.register(HearConfig, HearForAudioClassification)
AutoFeatureExtractor.register(HearConfig, HearFeatureExtractor)

# Log in if no Hugging Face token is cached (the model is gated).
from huggingface_hub.utils import HfFolder
from huggingface_hub import notebook_login

if HfFolder.get_token() is None:
    notebook_login()

model_id = "google/hear-pytorch"
classifier = AutoModelForAudioClassification.from_pretrained(model_id)
fe = AutoFeatureExtractor.from_pretrained(model_id)

# Four random two-second clips at 16 kHz (2 s x 16,000 Hz = 32,000 samples).
raw_audio_batch = torch.rand((4, 32000), dtype=torch.float32)
inputs = fe(raw_audio_batch, return_tensors="pt")
cls_output = classifier(**inputs)
print(cls_output)
```

```
You are using a model of type vit to instantiate a model of type hear. This is not supported for all configurations of models and can yield errors.
Some weights of HearForAudioClassification were not initialized from the model checkpoint at google/hear-pytorch and are newly initialized: ['classifier.layernorm.bias', 'classifier.dense.weight', 'classifier.dense.bias', 'classifier.layernorm.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
SequenceClassifierOutput(loss=None, logits=tensor([[-0.0135,  0.0895],
        [-0.0071,  0.1055],
        [-0.0082,  0.0801],
        [ 0.0145,  0.1028]], grad_fn=<AddmmBackward0>), hidden_states=None, attentions=None)
```
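
As the warning above notes, the classification head is freshly initialized and
must be trained before its logits are meaningful. A minimal fine-tuning sketch
follows; the labels are hypothetical dummy values, and it assumes the classifier
follows the standard `transformers` convention of returning a loss when
`labels` is passed.

```python
# Continues from the snippet above (reuses `classifier` and `inputs`).
labels = torch.randint(0, 2, (4,))  # hypothetical binary labels for the 4 clips
optimizer = torch.optim.AdamW(classifier.parameters(), lr=1e-4)

classifier.train()
out = classifier(**inputs, labels=labels)  # assumes loss is returned with labels
out.loss.backward()
optimizer.step()
optimizer.zero_grad()
print(out.loss.item())
```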

### Examples

See the following Colab notebooks for examples of how to use HeAR:

*   To give the model a quick try, running it locally with weights from Hugging
    Face, see [Quick start notebook in
    Colab](https://colab.research.google.com/github/google-health/hear/blob/master/notebooks/quick_start_with_hugging_face_pytorch.ipynb).


### Model architecture overview

HeAR is a [Masked Autoencoder (MAE)](https://arxiv.org/abs/2111.06377), a
[transformer-based](https://arxiv.org/abs/1706.03762) neural
network.

*   It was trained with a self-supervised masked auto-encoding objective on a
    massive dataset (\~174k hours) of two-second health-related audio clips. At
    training time, it tries to reconstruct masked spectrogram patches from the
    visible patches (see the masking sketch after this list).

*   After it is trained, its encoder can generate low-dimensional
    representations of two-second audio clips, optimized for capturing the most
    salient health-related information in sounds like coughs and breaths.

*   These representations, or embeddings, can be used as inputs to other
    models trained for a variety of supervised tasks related to health.

*   The HeAR model was developed based on a [ViT-L architecture](https://arxiv.org/abs/2010.11929).

*   Rather than relying on CNNs, the architecture applies a pure transformer
    directly to sequences of spectrogram patches, treated like image patches.
    This Vision Transformer (ViT) approach attains excellent results compared
    to state-of-the-art convolutional networks while requiring substantially
    fewer computational resources to train.

*   The training process for HeAR comprised three main components:
  *   A data curation step (including a health acoustic event detector);
  *   A general purpose training step to develop an audio encoder (embedding
      model), and
  *   A task-specific evaluation step that adopts the trained embedding model
      for various downstream tasks.

*   The system is designed to encode two-second long audio clips and
      generate audio embeddings for use in downstream tasks.
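
To make the masked auto-encoding step concrete, the sketch below randomly hides
a fraction of flattened spectrogram patches, MAE-style. The patch count, patch
size, and mask ratio are illustrative values, not HeAR's actual hyperparameters.

```python
import torch

def random_mask_patches(patches: torch.Tensor, mask_ratio: float = 0.75):
    """Randomly hide a fraction of patches, MAE-style.

    patches: (batch, num_patches, patch_dim) flattened spectrogram patches.
    Returns the visible patches and the indices of the masked patches.
    """
    batch, num_patches, patch_dim = patches.shape
    num_keep = int(num_patches * (1.0 - mask_ratio))

    # Random permutation of patch positions for each example in the batch.
    shuffle = torch.rand(batch, num_patches).argsort(dim=1)
    keep_idx, masked_idx = shuffle[:, :num_keep], shuffle[:, num_keep:]

    # Gather only the visible patches; the encoder would see just these.
    visible = torch.gather(
        patches, 1, keep_idx.unsqueeze(-1).expand(-1, -1, patch_dim)
    )
    return visible, masked_idx

# Illustrative shapes: 4 clips, 196 patches per spectrogram, 256 values per patch.
visible, masked_idx = random_mask_patches(torch.rand(4, 196, 256))
print(visible.shape, masked_idx.shape)  # torch.Size([4, 49, 256]) torch.Size([4, 147])
```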

### Technical Specifications

*   Model type: [ViT (vision transformer)](https://arxiv.org/abs/2010.11929)

*   Key publication: [https://arxiv.org/abs/2403.02522](https://arxiv.org/abs/2403.02522)

*   Model created: 2023-12-04

*   Model Version: 1.0.0

### Performance & Validation

HeAR's performance has been validated by linear probing of its frozen embeddings
on a benchmark of 33 health acoustic tasks across 6 datasets.

The benchmark spans 13 health acoustic event detection tasks, 14 cough inference
tasks, and 6 spirometry inference tasks across 6 datasets, and simple linear
classifiers trained on top of HeAR's representations performed as well as or
better than many similar leading models.
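
Linear probing here means training a simple linear classifier on frozen HeAR
embeddings. The sketch below illustrates the idea with scikit-learn on randomly
generated stand-in arrays; in real use you would substitute embeddings computed
with the model and labels from your own dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Stand-in data: 200 training and 50 test clips with 512-dim embeddings and
# binary labels. Replace with real HeAR embeddings and task labels.
rng = np.random.default_rng(0)
train_emb = rng.normal(size=(200, 512))
train_labels = rng.integers(0, 2, size=200)
test_emb = rng.normal(size=(50, 512))
test_labels = rng.integers(0, 2, size=50)

# "Linear probing": the embeddings stay frozen; only this linear model is fit.
probe = LogisticRegression(max_iter=1000)
probe.fit(train_emb, train_labels)
print("AUC:", roc_auc_score(test_labels, probe.predict_proba(test_emb)[:, 1]))
```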

### Key performance metrics

*   HeAR achieved high performance on **diverse health-relevant tasks**:
    inference of medical conditions (TB, COVID) and medically-relevant
    quantities (lung function, smoking status) from recordings of coughs or
    exhalations, including a task on predicting chest X-ray findings (pleural
    effusion, opacities etc.).

*   HeAR had **superior device generalizability** compared to other models
    (MRR=0.745 versus second-best being CLAP with MRR=0.497), which is
    crucially important for real-world applications.

*   HeAR is more **data efficient** than baseline models, sometimes reaching
    the same level of performance when trained on as little as 6.25% of the
    amount of training data.

### Inputs and outputs

**Input:** Two-second, 16 kHz mono audio clips. Inputs can be batched, so you
can pass n=10 clips as an array of shape (10, 32000) or a single clip as
(1, 32000).

**Output:** An embedding array of floating point values with shape (n, 512) for
n two-second clips, i.e. one embedding of length 512 per two-second input clip.
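
Because the model expects fixed two-second, 16 kHz mono clips, longer recordings
need to be split before embedding. The helper below is a minimal sketch of one
way to do that (non-overlapping windows, dropping the remainder); the windowing
strategy is an assumption, only the input format comes from this model card.

```python
import torch

SAMPLE_RATE = 16_000            # HeAR expects 16 kHz mono audio
CLIP_SAMPLES = 2 * SAMPLE_RATE  # 32,000 samples per two-second clip

def to_two_second_clips(waveform: torch.Tensor) -> torch.Tensor:
    """Split a 1-D 16 kHz waveform into a (n, 32000) batch of clips,
    dropping any trailing remainder shorter than two seconds."""
    num_clips = waveform.shape[0] // CLIP_SAMPLES
    return waveform[: num_clips * CLIP_SAMPLES].reshape(num_clips, CLIP_SAMPLES)

clips = to_two_second_clips(torch.rand(7 * SAMPLE_RATE))  # 7 s of audio
print(clips.shape)  # torch.Size([3, 32000])
```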

### Dataset details

### Training dataset

For training, a dataset called YT-NS (YouTube Non-Semantic) was curated. It
consists of two-second audio clips extracted from three billion public,
non-copyrighted YouTube videos using a health acoustic event detector, totalling
313.3 million two-second clips, or roughly 174k hours of audio. A two-second
window was chosen because most events of interest are shorter than that. The
HeAR audio encoder is trained solely on this dataset.

### Evaluation dataset

Six datasets were used for evaluation:

* [FSD50K](https://zenodo.org/records/4060432)
* [Flusense](https://github.com/Forsad/FluSense-data)
* [CoughVID](https://zenodo.org/records/4048312)
* [Coswara](https://zenodo.org/records/7188627)
* [CIDRZ](https://www.kaggle.com/datasets/googlehealthai/google-health-ai)
* [SpiroSmart](https://dl.acm.org/doi/10.1145/2370216.2370261)

## License

The use of HeAR is governed by the [Health AI Developer Foundations terms of
use](https://developers.google.com/health-ai-developer-foundations/terms).

### Implementation information

Details about the model internals.

### Software

Training was done using [JAX](https://github.com/jax-ml/jax).

JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.

## Use and limitations

### Intended use

*   Research and development of health-related acoustic biomarkers.

*   Exploration of novel applications in disease detection and health
    monitoring.

### Benefits

HeAR embeddings can be used for efficient training of AI models for
health acoustics tasks with significantly less data and compute than training
neural networks initialised randomly or from checkpoints trained on generic
datasets. This allows quick prototyping to see if health acoustics signals can
be used by themselves or combined with other signals to make predictions of
interest.

### Limitations

*   Limited Sequence Length: Primarily trained on 2-second audio clips.

*   Model Size: Current model size is too large for on-device deployment.

*   Bias Considerations: Potential for biases based on demographics and
    recording device quality, necessitating further investigation and
    mitigation strategies.

*   HeAR was trained using two-second audio clips of health-related sounds from
    a public, non-copyrighted subset of YouTube. These clips come from a
    variety of sources but may be noisy or low-quality.

*   The model only generates embeddings of the user-owned dataset. It does not
    produce any predictions or diagnoses on its own.

*   As with any research, developers should ensure that any downstream
    application is validated to understand performance using data that is
    appropriately representative of the intended use setting for the
    specific application (e.g., age, sex, gender, recording device,
    background noise, etc.).