---
license: cc-by-nc-sa-4.0
library_name: transformers
tags:
- chemistry
- selfies
---

# chemfie-gpt-experiment-1

Ongoing training (2/4)

## Model Details

- **Model Type**: GPT-2
- **Architecture**: L8, A6, H384 (8 layers, 6 attention heads, hidden size 384)
- **Task**: Generation of SELFIES strings
- **Language**: N/A (chemical representation)

## Personal Intended Use

- Hands-on learning, research, and experimentation in molecular generation
- Baseline for ablation studies and comparisons with more advanced models

## Use

Since this model doesn't use a proper GPT-2-format tokenizer, the special tokens still need to be set up manually (the next experiment will, of course, use a proper one):

```python
from transformers import PreTrainedTokenizerFast, AutoModelForCausalLM
import torch

tokenizer = PreTrainedTokenizerFast(
    tokenizer_file="gpt2_tokenizer.json",
    model_max_length=512,
    # These strings must match the special tokens defined in gpt2_tokenizer.json;
    # the conventional values are shown here.
    unk_token="<unk>",
    pad_token="<pad>",
    eos_token="</s>",
    bos_token="<s>",
    mask_token="<mask>",
)

model = AutoModelForCausalLM.from_pretrained("gbyuvd/chemfie-gpt-experiment-1")

# Generate some sample outputs
def generate_molecules(model, tokenizer, num_samples=5, max_length=100):
    model.eval()
    generated = []
    for _ in range(num_samples):
        # Seed each sequence with the BOS token, then sample a continuation
        input_ids = torch.tensor([[tokenizer.bos_token_id]]).to(model.device)
        output = model.generate(
            input_ids,
            max_length=max_length,
            num_return_sequences=1,
            do_sample=True,
            pad_token_id=tokenizer.pad_token_id,
        )
        generated.append(tokenizer.decode(output[0], skip_special_tokens=True))
    return generated

sample_molecules = generate_molecules(model, tokenizer)
print("Sample generated molecules:")
for i, mol in enumerate(sample_molecules, 1):
    print(f"{i}. {mol}")
```

Tokenized SELFIES to SMILES:

```python
import selfies as sf

test = "[C] [Branch1] [O] [=C] [C] [C] [C] [C] [C] [C] [C] [=Branch1] [=O] [O] [=C] [C] [C] [C] [Ring1]"
test = test.replace(' ', '')  # remove the whitespace between tokens before decoding
print(sf.decoder(test))

"""
C(CCCCCCCCO)=CCC=C
"""
```

## Training Data

- **Source**: Curated and merged from the COCONUTDB (Sorokina et al., 2021), ChEMBL34 (Zdrazil et al., 2023), and SuperNatural3 (Gallo et al., 2023) databases
- **Total**: 2,933,355 samples
- **Total Train**: 2,346,680 samples
- **Validation**: 293,336 samples
- **Per chunk** (the data is split into four chunks): 586,670 train, 73,334 validation, 73,334 test
- **Random seed for split**: 42

## Training Procedure

- **Batch Size**: 64
- **Learning Rate**: 1.5e-5
- **Optimizer**: Ranger21 (MADGRAD-Lookahead-AdaBelief with gradient centralization, gradient clipping, and weight decay); a minimal sketch of this setup is shown below
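The training script itself is not included in this card; the following is a minimal, hypothetical sketch of the configuration above, assuming the `ranger21` package and reusing `model` and `tokenizer` from the usage snippet. The dummy tensor stands in for a tokenized SELFIES chunk and is illustrative only.

```python
# Hypothetical sketch, not the actual training script.
import torch
from torch.utils.data import DataLoader, TensorDataset
from ranger21 import Ranger21  # assumes: pip install ranger21

BATCH_SIZE = 64
LEARNING_RATE = 1.5e-5

# Dummy stand-in for one tokenized chunk (586,670 real training samples)
dummy_ids = torch.randint(0, tokenizer.vocab_size, (256, 64))
train_loader = DataLoader(TensorDataset(dummy_ids), batch_size=BATCH_SIZE, shuffle=True)

# Ranger21 bundles Lookahead, gradient centralization, gradient clipping,
# and decoupled weight decay; it needs the schedule length up front
# for its built-in warmup/warmdown.
optimizer = Ranger21(
    model.parameters(),
    lr=LEARNING_RATE,
    num_epochs=1,  # assumption: one pass over the chunk
    num_batches_per_epoch=len(train_loader),
)

model.train()
for (input_ids,) in train_loader:
    input_ids = input_ids.to(model.device)
    outputs = model(input_ids=input_ids, labels=input_ids)  # causal-LM loss
    optimizer.zero_grad()
    outputs.loss.backward()
    optimizer.step()
```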
## Training Logs

| Chunk | Training Loss | Validation Loss | Status    |
| :---: | :-----------: | :-------------: | :-------: |
| I     | 1.346400      | 1.065180        | Done      |
| II    | 1.123500      | 0.993118        | Done      |
| III   |               |                 | Ongoing   |
| IV    |               |                 | Scheduled |

## Evaluation Results

[To be filled after model evaluation]

## Limitations and Biases

- May generate unrealistic or synthetically inaccessible molecules
- Performance on complex, branched, and ringed molecules has yet to be evaluated

## Ethical Considerations

- Potential misuse for generating harmful or illegal substances
- May produce biased results based on training data composition
- The information and model provided are for academic purposes only. They are intended for educational and research use, and should not be used for any commercial or legal purposes. The author does not guarantee the accuracy, completeness, or reliability of the information.

## Additional Information

- Part of the experimental chemfie-gpt/T5 project
- Serves as a baseline for future experiments with further curated datasets, training, and architectural modifications

## Citation

### BibTeX

#### COCONUTDB

```bibtex
@article{sorokina2021coconut,
  title={COCONUT online: Collection of Open Natural Products database},
  author={Sorokina, Maria and Merseburger, Peter and Rajan, Kohulan and Yirik, Mehmet Aziz and Steinbeck, Christoph},
  journal={Journal of Cheminformatics},
  volume={13},
  number={1},
  pages={2},
  year={2021},
  doi={10.1186/s13321-020-00478-9}
}
```

#### ChEMBL34

```bibtex
@article{zdrazil2023chembl,
  title={The ChEMBL Database in 2023: a drug discovery platform spanning multiple bioactivity data types and time periods},
  author={Zdrazil, Barbara and Felix, Eloy and Hunter, Fiona and Manners, Emma J and Blackshaw, James and Corbett, Sybilla and de Veij, Marleen and Ioannidis, Harris and Lopez, David Mendez and Mosquera, Juan F and Magarinos, Maria Paula and Bosc, Nicolas and Arcila, Ricardo and Kizil{\"o}ren, Tevfik and Gaulton, Anna and Bento, A Patr{\'i}cia and Adasme, Melissa F and Monecke, Peter and Landrum, Gregory A and Leach, Andrew R},
  journal={Nucleic Acids Research},
  year={2023},
  pages={gkad1004},
  doi={10.1093/nar/gkad1004}
}

@misc{chembl34,
  title={ChEMBL34},
  year={2023},
  doi={10.6019/CHEMBL.database.34}
}
```

#### SuperNatural3

```bibtex
@article{Gallo2023,
  author={Gallo, K and Kemmler, E and Goede, A and Becker, F and Dunkel, M and Preissner, R and Banerjee, P},
  title={{SuperNatural 3.0--a database of natural products and natural product-based derivatives}},
  journal={Nucleic Acids Research},
  year={2023},
  month=jan,
  day={6},
  volume={51},
  number={D1},
  pages={D654-D659},
  doi={10.1093/nar/gkac1008}
}
```