|
## How to use the data sets |
|
|
|
This dataset contains about 80,000 unique pairs of protein sequences and ligand SMILES, together with the coordinates of their complexes from the PDB. Only ligands with a molecular weight >= 100 Da are included.
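The weight cutoff can be illustrated with a small stand-alone helper (a sketch only; the `molecular_weight` function and its mass table are illustrative, not part of the dataset's actual preprocessing, which would normally use a cheminformatics library):

```python
import re

# Average atomic masses for a few common elements (illustrative subset).
ATOMIC_MASS = {"H": 1.008, "C": 12.011, "N": 14.007, "O": 15.999,
               "P": 30.974, "S": 32.06}

def molecular_weight(formula: str) -> float:
    """Sum average atomic masses over a simple formula like 'C9H8O4'."""
    total = 0.0
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        total += ATOMIC_MASS[element] * (int(count) if count else 1)
    return total

# Aspirin (C9H8O4, ~180 Da) passes the >= 100 Da cutoff; water does not.
print(molecular_weight("C9H8O4") >= 100.0)  # True
print(molecular_weight("H2O") >= 100.0)     # False
```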
|
|
|
SMILES strings are assumed to be tokenized with the regular expression from P. Schwaller.
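For reference, this is the SMILES tokenization pattern introduced by Schwaller et al. for the Molecular Transformer; a minimal sketch of how it splits a string:

```python
import re

# SMILES tokenization regex from Schwaller et al. (Molecular Transformer).
SMILES_REGEX = re.compile(
    r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\."
    r"|=|#|-|\+|\\|\/|:|~|@|\?|>|\*|\$|\%[0-9]{2}|[0-9])"
)

def tokenize(smiles: str) -> list:
    """Split a SMILES string into tokens; multi-character tokens
    (Br, Cl, bracket atoms, %nn ring closures) stay intact."""
    return SMILES_REGEX.findall(smiles)

tokens = tokenize("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
print(tokens[:7])  # ['C', 'C', '(', '=', 'O', ')', 'O']
print("".join(tokens) == "CC(=O)Oc1ccccc1C(=O)O")  # True
```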
|
|
|
Every (x, y, z) ligand coordinate maps onto a SMILES token; the coordinate is *nan* if the token does not represent an atom.
|
|
|
Every receptor coordinate maps onto the C-alpha coordinate of the corresponding residue.
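The alignment convention for ligands can be sketched with dummy data (the lists below are made up for illustration, and the actual dataset's column names may differ):

```python
import math

# Pair each SMILES token with its (x, y, z) coordinate and keep only
# real atoms; non-atom tokens like '(', '=', ')' carry nan coordinates.
nan = float("nan")
tokens     = ["C", "C", "(", "=", "O", ")", "O"]
ligand_xyz = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0),
              (nan, nan, nan), (nan, nan, nan),
              (2.1, 1.2, 0.0), (nan, nan, nan), (2.1, -1.2, 0.0)]

atom_coords = [(tok, xyz) for tok, xyz in zip(tokens, ligand_xyz)
               if not math.isnan(xyz[0])]
print(len(atom_coords))  # 4 (only the two carbons and two oxygens)
```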
|
|
|
The dataset can be used to fine-tune a language model. All data comes from PDBbind-cn.
|
|
|
### Use the already preprocessed data |
|
|
|
Load a train/validation split using
|
|
|
```
from datasets import load_dataset

train = load_dataset("jglaser/pdb_protein_ligand_complexes", split='train[:90%]')
validation = load_dataset("jglaser/pdb_protein_ligand_complexes", split='train[90%:]')
```
|
|
|
### Manual update from PDB |
|
|
|
```
# download the PDB archive into folder pdb/
sh rsync.sh 24 # number of parallel download processes

# extract sequences and coordinates in parallel
sbatch pdb.slurm
# or
mpirun -n 42 parse_complexes.py # desired number of tasks
```
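The MPI script presumably divides the PDB entries across ranks so each task parses a disjoint subset. A minimal sketch of the usual round-robin scheme, in plain Python (`rank` and `size` stand in for `MPI.COMM_WORLD.Get_rank()` / `Get_size()`; the helper name is hypothetical):

```python
def my_entries(entries, rank, size):
    """Return the subset of PDB entries this rank should parse."""
    return entries[rank::size]

entries = [f"entry_{i:04d}" for i in range(10)]
size = 4  # e.g. mpirun -n 4
shards = [my_entries(entries, r, size) for r in range(size)]

# Every entry is handled exactly once across all ranks.
print(sorted(sum(shards, [])) == sorted(entries))  # True
```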
|
|