Use my search web suite to scan PubMed, PubMed Central, journal hosts, and journal archives for full text.
Kick your search term to multiple engines and run your query now!
A dictionary built from aggregated review articles in nephrology, medicine, and the life sciences.
Your one-stop pathway from a search term to the immediate PDF of peer-reviewed, on-topic knowledge.
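Under the hood, the first step from word to article can be a PubMed search through NCBI's public E-utilities. A minimal sketch of that lookup (the search term, variable names, and flow are illustrative assumptions, not this site's actual code):

<?php
// Sketch: resolve a search term to matching PMIDs via NCBI esearch.
// Assumes PHP with allow_url_fopen enabled; the term is just an example.
$term = urlencode("optical mark recognition COVID-19");
$url  = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
      . "?db=pubmed&term={$term}&retmode=json&retmax=20";
$json  = json_decode(file_get_contents($url), true);
$pmids = $json["esearchresult"]["idlist"] ?? [];   // PMIDs matching the term
print_r($pmids);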

Fetch abstract from NCBI

DOI: 10.1055/s-0041-1723024
Sci-Hub: http://scihub22266oqcxt.onion/10.1055/s-0041-1723024
Fetch PDF from Google Scholar
PMID 33694142 · PMCID PMC7946598
Unlimited free PDF from Europe PMC (PMID 33694142): free
PDF from PMC: free
HTML from PMC: free
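The "abstract from NCBI" step above maps to NCBI's efetch endpoint. A minimal sketch, assuming a plain file_get_contents call with one retry; NCBI throttles anonymous clients to roughly 3 requests per second and answers excess traffic with HTTP 429 Too Many Requests:

<?php
// Sketch: fetch the plain-text abstract for this record via NCBI efetch.
$pmid = 33694142;
$url  = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
      . "?db=pubmed&id={$pmid}&rettype=abstract&retmode=text";
$abstract = @file_get_contents($url);
if ($abstract === false) {
    sleep(1);                  // back off once, e.g. after an HTTP 429
    $abstract = file_get_contents($url);
}
echo $abstract;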

PMID 33694142 · Appl Clin Inform 2021; 12(1): 170-178


  • Extracting Medical Information from Paper COVID-19 Assessment Forms
  • White-Dzuro CG; Schultz JD; Ye C; Coco JR; Myers JM; Shackelford C; Rosenbloom ST; Fabbri D
  • Appl Clin Inform 2021 Jan; 12(1): 170-178. PMID: 33694142
  • OBJECTIVE: This study examines the validity of optical mark recognition, a novel user interface, and crowdsourced data validation to rapidly digitize and extract data from paper COVID-19 assessment forms at a large medical center.
  • METHODS: An optical mark recognition/optical character recognition (OMR/OCR) system was developed to identify fields selected on 2,814 paper assessment forms, each with 141 fields used to assess potential COVID-19 infections. A novel user interface (UI) displayed mirrored forms, with the scanned assessment form and superimposed OMR results on the left and an editable web form on the right, to ease data validation. Crowdsourced participants validated the results of the OMR system. The overall error rate and the time taken to validate were calculated, and a subset of forms was validated by multiple participants to measure agreement between them.
  • RESULTS: The OMR/OCR tools correctly extracted data from scanned form fields with an average accuracy of 70% and a median accuracy of 78% when compared with the crowd-validated results. Scanned forms were crowd-validated at a mean rate of 157 seconds per document and a volume of approximately 108 documents per day. A randomly selected subset of documents reviewed by multiple participants produced an interobserver agreement of 97% when narrative-text fields were included and 98% when only Boolean and multiple-choice fields were considered (a toy percent-agreement calculation is sketched after this list).
  • CONCLUSION: Due to the COVID-19 pandemic, it may be challenging for health care workers wearing personal protective equipment to interact with electronic health records. The combination of OMR/OCR technology, a novel UI, and crowdsourced data validation allowed for the efficient extraction of a large volume of paper medical documents produced during the COVID-19 pandemic.
  • Health Information Exchange [MeSH major topic]
  • Information Storage and Retrieval [MeSH major topic]
  • COVID-19/diagnosis [MeSH major topic]
  • Crowdsourcing [MeSH]
  • Humans [MeSH]
  • Physicians [MeSH]
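As a toy illustration of the agreement figure reported above, and assuming it is computed as simple percent agreement (the share of fields on which two validators entered the same value), here is a sketch with invented field names and two hypothetical reviewers:

<?php
// Sketch: percent agreement between two hypothetical crowd validators
// over the Boolean/multiple-choice fields of one form.
$reviewerA = ["fever" => true, "cough" => true,  "exposure" => "travel"];
$reviewerB = ["fever" => true, "cough" => false, "exposure" => "travel"];
$matches = 0;
foreach ($reviewerA as $field => $value) {
    if ($reviewerB[$field] === $value) {
        $matches++;                          // identical entry for this field
    }
}
$agreement = $matches / count($reviewerA);   // 2 of 3 fields agree here
printf("Percent agreement: %.0f%%\n", $agreement * 100);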


Linkout box
  • DeepDyve
  • Pubget
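A sketch of how a linkout box like this can be populated: NCBI's elink endpoint with cmd=llinks lists the LinkOut providers (DeepDyve and others) registered for a PMID. The parsing below assumes the standard llinks XML layout:

<?php
// Sketch: list LinkOut provider names and URLs for this PMID via elink.
$pmid = 33694142;
$url  = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi"
      . "?dbfrom=pubmed&id={$pmid}&cmd=llinks";
$xml = simplexml_load_string(file_get_contents($url));
foreach ($xml->LinkSet->IdUrlList->IdUrlSet->ObjUrl as $objUrl) {
    echo $objUrl->Provider->Name, ": ", $objUrl->Url, "\n";
}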