Use my search suite to scan PubMed, PubMed Central, journal hosts and journal archives for full text.
Send your search term to multiple engines now.
A dictionary aggregated from review articles in nephrology, medicine and the life sciences.
Your one-stop pathway from a keyword straight to the PDF of peer-reviewed, on-topic knowledge.

Abstract from NCBI
DOI: 10.1093/bfgp/elab025
http://scihub22266oqcxt.onion/10.1093/bfgp/elab025
PDF from Google Scholar
PMID 34050350 · PMCID PMC8194843
Free PDF from Europe PMC (34050350)
PDF from PMC (free)
HTML from PMC (free)
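The "abstract from NCBI" link above can be reproduced programmatically. A minimal sketch using NCBI's public E-utilities `efetch` endpoint (the endpoint and parameter names are the documented E-utilities API; the plain-text return mode is one of several options):

```python
# Build the E-utilities URL that returns the abstract for PMID 34050350
# as plain text. Fetch it with urllib.request.urlopen(url) when online.
from urllib.parse import urlencode

EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def efetch_abstract_url(pmid: str) -> str:
    """Return the efetch URL for a PubMed abstract in plain-text mode."""
    params = {"db": "pubmed", "id": pmid, "rettype": "abstract", "retmode": "text"}
    return f"{EFETCH}?{urlencode(params)}"

url = efetch_abstract_url("34050350")
```

Swapping `retmode` to `xml` yields the full structured record (authors, MeSH terms, journal metadata) instead of plain text.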

PMID 34050350 · Brief Funct Genomics 2021; 20(3): 181-195

  • Pretraining model for biological sequence data
  • Song B; Li Z; Lin X; Wang J; Wang T; Fu X
  • Brief Funct Genomics 2021 [Jun]; 20(3): 181-195. PMID: 34050350
  • With the development of high-throughput sequencing technology, biological sequence data reflecting life information become increasingly accessible. Particularly against the background of the COVID-19 pandemic, biological sequence data play an important role in detecting diseases, analyzing mechanisms and discovering specific drugs. In recent years, pretraining models that emerged in natural language processing have attracted widespread attention in many research fields, not only to decrease training cost but also to improve performance on downstream tasks. Pretraining models are used for embedding biological sequences and extracting features from large biological sequence corpora to comprehensively understand the biological sequence data. In this survey, we provide a broad review of pretraining models for biological sequence data. We first introduce biological sequences and the corresponding datasets, including brief descriptions and accessible links. Subsequently, we systematically summarize popular pretraining models for biological sequences in four categories: CNN, word2vec, LSTM and Transformer. Then, we present some applications of the proposed pretraining models on downstream tasks to explain their role. Next, we provide a novel pretraining scheme for protein sequences and a multitask benchmark for protein pretraining models. Finally, we discuss the challenges and future directions in pretraining models for biological sequences.
  • |*Algorithms[MESH]
  • |*Natural Language Processing[MESH]
  • |*Software[MESH]
  • |Computational Biology/*methods[MESH]
  • |Data Mining/*methods[MESH]
  • |Datasets as Topic[MESH]
  • |Deep Learning[MESH]
  • |High-Throughput Nucleotide Sequencing/*methods[MESH]
  • |Humans[MESH]
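The abstract describes embedding biological sequences for word2vec/LSTM/Transformer-style pretraining. The standard first step is treating a sequence as a sentence of overlapping k-mer "words"; a minimal sketch of that preprocessing (the k=3 choice and the toy DNA string are illustrative assumptions, not taken from the paper):

```python
# Tokenize a biological sequence into overlapping k-mers (stride 1) and
# map each distinct k-mer to an integer id -- the usual input format for
# pretraining an embedding model over sequence data.
from typing import Dict, List

def kmer_tokenize(seq: str, k: int = 3) -> List[str]:
    """Slide a window of width k over the sequence with stride 1."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def build_vocab(tokens: List[str]) -> Dict[str, int]:
    """Assign each distinct k-mer an id in order of first appearance."""
    vocab: Dict[str, int] = {}
    for t in tokens:
        vocab.setdefault(t, len(vocab))
    return vocab

tokens = kmer_tokenize("ATGCGA", k=3)          # ['ATG', 'TGC', 'GCG', 'CGA']
ids = [build_vocab(tokens)[t] for t in tokens]  # [0, 1, 2, 3]
```

The resulting id sequences feed directly into any of the four model families the survey covers; only the downstream embedding layer differs.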


Linkout box:
  • DeepDyve
  • Pubget (overpriced)
  • Abstract from NCBI