[ICLR 2025] SSLAM: Enhancing Self-Supervised Models with Audio Mixtures for Polyphonic Soundscapes
SSLAM is a self-supervised learning framework designed to improve audio representation quality for both polyphonic (multiple overlapping sounds) and monophonic soundscapes. Unlike traditional SSL models that focus on monophonic data, SSLAM introduces a novel source retention loss and audio mixture training, significantly improving performance on real-world polyphonic audio.
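For intuition, audio mixture training forms polyphonic inputs by overlapping independent clips. The sketch below is illustrative only; the length matching and SNR-based scaling are assumptions for clarity, not the paper's exact mixing recipe.

import torch

def mix_audio(wav_a: torch.Tensor, wav_b: torch.Tensor, snr_db: float = 0.0) -> torch.Tensor:
    # Illustrative mixing of two mono waveforms at a target SNR (assumed recipe, not the paper's exact one).
    n = min(wav_a.shape[-1], wav_b.shape[-1])      # truncate both clips to the shorter length
    wav_a, wav_b = wav_a[..., :n], wav_b[..., :n]
    power_a = wav_a.pow(2).mean()
    power_b = wav_b.pow(2).mean().clamp(min=1e-8)
    scale = torch.sqrt(power_a / (power_b * 10 ** (snr_db / 10)))
    return wav_a + scale * wav_b                   # polyphonic mixture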
Paper | ICLR 2025 Poster: Video & Slides | OpenReview | 🤗 Models
SSLAM Pretrain (ViT Base, 15 epochs)
This repository provides an SSLAM checkpoint formatted for use with Hugging Face Transformers. It is intended for feature extraction in audio LLMs, sound event detection, and general-purpose audio representation learning. The implementation follows the EAT code path while swapping in the SSLAM pretrained weights.
Usage
You can load and use the model for feature extraction directly via Hugging Face Transformers:
import torchaudio
import torch
import soundfile as sf
import numpy as np
from transformers import AutoModel
model_id = "ta012/SSLAM_pretrain"
model = AutoModel.from_pretrained(model_id, trust_remote_code=True).eval().cuda()
source_file = "/path/to/input.wav"
target_length = 1024 # Recommended: 1024 for 10s audio
norm_mean = -4.268
norm_std = 4.569
# Load and resample audio
wav, sr = sf.read(source_file)
waveform = torch.tensor(wav).float().cuda()
if waveform.dim() > 1:  # convert multi-channel audio to mono
    waveform = waveform.mean(dim=-1)
if sr != 16000:
    waveform = torchaudio.functional.resample(waveform, sr, 16000)
# Normalize and convert to mel-spectrogram
waveform = waveform - waveform.mean()
mel = torchaudio.compliance.kaldi.fbank(
    waveform.unsqueeze(0),
    htk_compat=True,
    sample_frequency=16000,
    use_energy=False,
    window_type='hanning',
    num_mel_bins=128,
    dither=0.0,
    frame_shift=10
).unsqueeze(0)
# Pad or truncate
n_frames = mel.shape[1]
if n_frames < target_length:
    mel = torch.nn.ZeroPad2d((0, 0, 0, target_length - n_frames))(mel)
else:
    mel = mel[:, :target_length, :]
# Normalize
mel = (mel - norm_mean) / (norm_std * 2)
mel = mel.unsqueeze(0).cuda() # shape: [1, 1, T, F]
# Extract features
with torch.no_grad():
    feat = model.extract_features(mel)
feat = feat.squeeze(0).cpu().numpy()
print(f"Feature shape: {feat.shape}")
Notes
See the feature extraction guide for more instructions.
Acknowledgments
This repository builds on the EAT implementation for Hugging Face models; we remap the SSLAM weights to that interface. We are not affiliated with the EAT authors, and all credit for the original implementation belongs to them.
Citation
If you find our work useful, please cite it as:
@inproceedings{alex2025sslam,
  title={{SSLAM}: Enhancing Self-Supervised Models with Audio Mixtures for Polyphonic Soundscapes},
  author={Tony Alex and Sara Atito and Armin Mustafa and Muhammad Awais and Philip J B Jackson},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=odU59TxdiB}
}
Please also cite EAT:
@article{chen2024eat,
  title={EAT: Self-supervised pre-training with efficient audio transformer},
  author={Chen, Wenxi and Liang, Yuzhe and Ma, Ziyang and Zheng, Zhisheng and Chen, Xie},
  journal={arXiv preprint arXiv:2401.03497},
  year={2024}
}