TourSynbio: A Multi-Modal Large Model and Agent Framework to Bridge Text and Protein Sequences for Protein Engineering

Toursun Synbio, Shanghai, China
Department of Computer Science, Johns Hopkins University, Baltimore, USA
Department of Computer Science and Technology, University of Cambridge, Cambridge, UK
Shanghai AI Laboratory, Shanghai, China
Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
Department of Computer Science, City University of Hong Kong, Hong Kong, China
2025 IEEE International Conference on Bioinformatics and Biomedicine

*Indicates Equal Contribution

The above illustrates the process of calling an agent. Through keyword matching followed by validation with the large model, the user's desired agent can be accurately invoked, demonstrating the effectiveness of the process and the robustness of the framework.
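The two-stage dispatch described above can be sketched as follows. This is a minimal illustration only: the agent names, keyword tables, and validation stub are hypothetical, and the actual TourSynbio-Agent implementation (which prompts TourSynbio-7B for the validation step) may differ substantially.

```python
# Hypothetical sketch of two-stage agent dispatch:
# (1) keyword matching shortlists candidate agents,
# (2) a large-model validation step confirms the intended agent.
# All names below are illustrative, not the real TourSynbio-Agent API.

AGENT_KEYWORDS = {
    "mutation_agent": {"mutation", "mutate", "variant"},
    "folding_agent": {"fold", "structure prediction"},
    "inverse_folding_agent": {"inverse folding", "sequence design"},
    "visualization_agent": {"visualize", "render"},
}

def keyword_match(query):
    """Stage 1: shortlist agents whose keywords appear in the query."""
    q = query.lower()
    return [agent for agent, kws in AGENT_KEYWORDS.items()
            if any(kw in q for kw in kws)]

def validate_with_llm(query, candidates):
    """Stage 2: stand-in for the large-model validation call that
    selects the single intended agent from the shortlist."""
    # A real implementation would prompt the model with the query and
    # candidate descriptions; here we simply take the first candidate.
    return candidates[0] if candidates else None

def dispatch(query):
    return validate_with_llm(query, keyword_match(query))
```

Keyword matching alone is cheap but ambiguous when a query touches several agents; the validation pass resolves that ambiguity, which is why the framework layers the two stages rather than relying on either one alone.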

Abstract

The structural similarities between protein sequences and natural languages have led to parallel advancements in deep learning across both domains. While large language models (LLMs) have made substantial progress in natural language processing, their potential in protein engineering remains largely unexplored. Previous approaches have equipped LLMs with protein understanding capabilities by incorporating external protein encoders, but this design fails to fully leverage the inherent similarities between protein sequences and natural languages, resulting in sub-optimal performance and increased model complexity. To address this gap, we present TourSynbio-7B, the first multi-modal large model specifically designed for protein engineering tasks without external protein encoders. TourSynbio-7B demonstrates that LLMs can inherently learn to understand proteins as language. The model is post-trained and instruction fine-tuned on InternLM2-7B using ProteinLMDataset, a dataset comprising 17.46 billion tokens of text and protein sequences for self-supervised pretraining and 893K instructions for supervised fine-tuning. TourSynbio-7B outperforms GPT-4 on ProteinLMBench, a benchmark of 944 manually verified multiple-choice questions, achieving 62.18% accuracy. Leveraging TourSynbio-7B's enhanced protein sequence understanding capability, we introduce TourSynbio-Agent, an innovative framework capable of performing various protein engineering tasks, including mutation analysis, inverse folding, protein folding, and visualization. TourSynbio-Agent integrates previously disconnected deep learning models in the protein engineering domain, offering a unified conversational user interface for improved usability. Finally, we demonstrate the efficacy of TourSynbio-7B and TourSynbio-Agent through two wet lab case studies on vanilla key enzyme modification and steroid compound catalysis.
Our results show that this combination facilitates protein engineering tasks in wet labs, leading to higher positive rates, improved mutations, shorter delivery times, and increased automation. The model weights are available at https://huggingface.co/tsynbio/Toursynbio and the code at https://github.com/tsynbio/TourSynbio.

Chinese version of Agent usage video.

Manuscript

BibTeX

@article{shen2024toursynbio,
  title={TourSynbio: A Multi-Modal Large Model and Agent Framework to Bridge Text and Protein Sequences for Protein Engineering},
  author={Shen, Yiqing and Chen, Zan and Mamalakis, Michail and Liu, Yungeng and Li, Tianbin and Su, Yanzhou and He, Junjun and Li{\`o}, Pietro and Wang, Yu Guang},
  journal={arXiv preprint arXiv:2408.15299},
  year={2024}
}