
An efficient encoder-decoder architecture with top-down attention for speech separation


This repository is the official implementation of the ICLR 2023 paper *An efficient encoder-decoder architecture with top-down attention for speech separation* (TDANet).

If you use this model, please cite:

@inproceedings{tdanet2023iclr,
  title={An efficient encoder-decoder architecture with top-down attention for speech separation},
  author={Li, Kai and Yang, Runxuan and Hu, Xiaolin},
  booktitle={ICLR},
  year={2023}
}

Training Dataset

  • LRS2-2Mix

Config

    enc_kernel_size: 4
    in_channels: 512
    num_blocks: 16
    num_sources: 2
    out_channels: 128
    upsampling_depth: 5
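
For reference, below is a minimal Python sketch that parses this config. The per-field comments are best-effort readings of the paper, and the `TDANet` name in the trailing comment is a hypothetical stand-in for the model constructor, not a published API; consult the official implementation for the exact usage.

```python
import yaml

# Hyperparameters exactly as listed in the Config section above.
CONFIG_YAML = """
enc_kernel_size: 4    # encoder kernel size
in_channels: 512      # encoder feature channels
num_blocks: 16        # number of stacked separation blocks
num_sources: 2        # speakers to separate
out_channels: 128     # bottleneck channels
upsampling_depth: 5   # depth of the top-down multi-scale pyramid
"""

config = yaml.safe_load(CONFIG_YAML)  # requires PyYAML
assert config["num_sources"] == 2

# Hypothetical instantiation -- class name and signature are assumptions;
# see the official repository for the real API:
# model = TDANet(**config)
```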