
A Systematic Literature Review of Automated Clinical Coding and Classification Systems


J Am Med Inform Assoc. 2010 Nov-Dec; 17(6): 646–651.

A systematic literature review of automated clinical coding and classification systems

Mary H Stanfill

1 American Health Information Management Association, Chicago, Illinois, USA

Margaret Williams

1 American Health Information Management Association, Chicago, Illinois, USA

Susan H Fenton

2 Health Information Management, Texas State University, Texas, USA

Robert A Jenders

3 Department of Medicine, University of California, Los Angeles, California, USA

William R Hersh

4 Department of Medical Informatics & Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon, USA

Received 2009 Aug 28; Accepted 2010 Sep 1.

Supplementary Materials

Web Only Data


Abstract

Clinical coding and classification processes transform natural language descriptions in clinical text into data that can later be used for clinical care, research, and other purposes. This systematic literature review examined studies that evaluated all types of automated coding and classification systems to determine the performance of such systems. Studies indexed in Medline or other relevant databases prior to March 2009 were considered. The 113 studies included in this review show that automated tools exist for a variety of coding and classification purposes, focus on various healthcare specialties, and handle a wide variety of clinical document types. Automated coding and classification systems themselves are not generalizable, nor are the results of the studies evaluating them. Published research shows these systems hold promise, but these data must be considered in context, with performance relative to the complexity of the task and the desired outcome.

Keywords: Medical records systems, computerized and natural language processing, forms and records control/methods, medical records/classification, automatic data processing, international classification of diseases

Introduction

Automated coding and classification technologies encompass a variety of computer-based approaches that transform narrative text in clinical records into structured text, which may include assignment of codes from standard terminologies, without human interaction. Despite a great amount of research evaluating systems that perform coding and classification, it is not clear whether these automated systems perform as well as manual coding or classification. We want to know if computer applications can code or classify as well as or better than people. To begin to explore this question, we undertook a systematic literature review to identify and analyze the existing evidence on the performance of automated coding and classification systems.

According to Mulrow and Cook,1 systematic reviews are concise summaries of the best available evidence that address sharply defined clinical questions. Furthermore, systematic reviews employ explicit and rigorous methods to identify, critically appraise, and synthesize relevant studies. They seek to assemble and examine all the available high-quality evidence that bears on the question at hand.

To our knowledge, there are no systematic reviews on automated clinical coding and classification systems. Meystre et al2 conducted a narrative review to examine published research on the extraction of information from textual documents in the electronic health record. In that review, natural language processing techniques were examined, but few of the studies dealt specifically with automated coding and classification software. The authors focused on the performance of information extraction systems, a much broader concept than automated clinical coding and classification. Coding and classification studies that Meystre et al reviewed were narrowly focused and did not reflect the full range of automated coding and classification systems. Thus we undertook a systematic literature review to identify all published studies evaluating the performance of automated coding and classification systems. This paper presents the results of our systematic review.

Background

Automated coding and classification systems are an emerging engineering. Researchers are edifice and evaluating such systems. It is important to explore what is known concerning the operation of automated coding and classification systems to determine how applicable these systems are to the industry-wide coding process currently used to gather healthcare data.

Correct coding and reporting of healthcare diagnoses and services has become increasingly critical as healthcare data needs have evolved. The use of structured data in coded form continues to grow as the healthcare industry explores value-based purchasing and seeks overall improvement in the quality of care. The data used for these purposes are typically encoded via a manual coding process. This process involves human review of clinical documentation to identify applicable codes. When applying a complex coding scheme, the process may be assisted by the use of code books, picking from abbreviated lists, or employing software applications that facilitate alphabetic searches and provide edits and tips. Code assignment may be carried out by physicians, but is often performed by other personnel, such as coding professionals.

An American Health Information Management Association (AHIMA) workgroup, convened to explore computer-assisted coding, reported that this manual coding workflow is expensive and inefficient in an industry where data needs have never been greater: 'The industry needs automated solutions to allow the coding process to become more productive, efficient, accurate, and consistent.'3 Computer applications for automating this process are available but currently not widely used, most likely because the systems are still in development and their performance in production unproven. This systematic literature review was undertaken to identify all published studies of automated coding and classification systems to determine if any system can perform the coding process currently used industry-wide to gather healthcare data. Recognizing that a great deal of research has been carried out in this area, with only a small portion focused on administrative coding classification systems, we determined to review all types of automated coding and classification evaluation studies. As such, this systematic literature review included published research on any computer application designed to automatically generate any type of clinical code or classification from free-text clinical documents.

Methods

A search strategy was designed to identify all potentially relevant publications about the performance of automated coding and classification systems. It was used to search PubMed, the Cumulative Index to Nursing and Allied Health Literature, the Association for Computing Machinery and Inspec databases, and Science Citation Index Expanded. See appendix A, available as an online data supplement at www.jamia.org, for search parameters and the details of the search statements used in searching the various databases. This review includes all studies published (or pre-published online) and, where applicable, indexed to MeSH terms prior to March 2009.

In addition to searching these databases, all articles in AHIMA's Body of Knowledge indexed to the subject 'computer-assisted coding' were added for consideration. References in the 'FasterCures' report, 'Think Research: Using Medical Records to Bridge Patient Care and Research', were checked for relevance. We also used the 'snowball' method (pursuing references of references) and sought input from a core group of researchers in the field to identify additional studies.

A principal criterion for inclusion in this systematic literature review was that the article had to address the results of an original study involving research on the use of a computer application to automatically generate clinical codes and/or assign classes from free-text clinical documents. In addition, the research had to have been carried out with documents produced in the process of clinical care where both the documents and the computer application were in the English language. The study also must have evaluated the performance of the computer application in assigning clinical codes or other classification schema.

The type of coding or classification schema applied in the study did not affect inclusion. Recognizing the existence of multiple coding and classification schemas, including standardized classification systems, such as the International Classification of Diseases (ICD) or Current Procedural Terminology (CPT), and use-case-specific, non-standardized schemas, such as the presence or absence of a given condition, this review was left open to include any and all types of clinical codes or classes.

Studies were excluded if the automated application was not evaluated for performance of the code assignment. For example, instances where the study focused on evaluating content coverage of a classification or vocabulary were excluded. The difference is subtle, but significant. Evaluating whether a terminology or classification is suitable or robust enough for a given purpose is different from evaluating whether an automated system is accurate enough to replace humans. The latter was aligned with our research question, the former was not. Thus, studies testing the breadth of SNOMED CT, for example,4–6 were excluded.

Studies were also excluded if no defined coding or classification system was applied. As a result, some information retrieval, information extraction, and/or indexing studies were included and some were not. It can be difficult to discern the difference between indexing and applying clinical codes, since codes are often used for the purpose of indexing or retrieving data. Where indexing was performed using a coding or classification schema—for example, the application of MeSH terms—the study was included. Where indexing involved parsing or indexing documents with no specific code output to evaluate, the study was excluded.

All potentially relevant studies identified were reviewed for inclusion. Each title and abstract retrieved was reviewed by two independent reviewers. When inclusion could not be determined from a title or abstract, the full text of the article was reviewed. When the two initial reviewers reached different conclusions applying the inclusion criteria, a third reviewer adjudicated to produce a final decision. Summary data was extracted from all studies satisfying the inclusion criteria.

The systematic literature search yielded 2322 potentially relevant references. There were 2209 articles eliminated as not meeting all of the inclusion criteria, leaving a total of 113 studies for analysis in this systematic literature review. The 113 included studies are listed in online appendix B (available at www.jamia.org). Meta-analysis of these studies was not possible, given the variety of research purposes and study methodologies. Instead, the 113 studies were closely reviewed, and key data elements, such as the following, were abstracted.

  • The classification system applied by the automated system and associated healthcare domain (eg, SNOMED for diagnoses on chest radiographs)

  • Objective of the study (eg, to determine if an automated system can replace manual chart review to identify cases for a clinical trial)

  • The study methodology (including sample size, sample selection, statistical analysis used, and who built the system versus who conducted the evaluation)

  • The reference standard for performance

  • System performance

  • The purpose or use of the automated system

  • Conclusions from the study

This abstracted information was examined and key observations are reported here.

Results

The earliest study in the included corpus was published in 1973. Another was published in 1976, and then none until 1990. All but four of the studies (96%) were published after 1994. Online figure 1 (available at www.jamia.org) shows the distribution of the studies over time.

The studies in this review focused on various conditions or healthcare specialties and a wide variety of document types. Online table 1 (available at www.jamia.org) provides details on the conditions and document types specified in the included studies. Pneumonia was the condition most frequently addressed by these systems, including community-acquired pneumonia, acute bacterial pneumonia, and early detection of pneumonia in neonates. Interestingly, 37 of the studies that specified a particular condition focused on a respiratory condition, which correlates with the most frequently studied documents, chest radiology reports. In general, diagnostic reports were studied more frequently than other report types, with 54 of the specified document types representing a diagnostic examination.

The studies evaluated the performance of various computer applications, many of which were identified by name. Online table 2 (available at www.jamia.org) provides details on these systems. There were 46 different systems named and 21 not named. Of the named systems, Columbia University's MedLEE was the system studied most often, followed in frequency by SymText, MMTx, and NegEx. These four systems together represent 91% of the named systems studied and 37% of the total corpus.
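
One of the named tools, NegEx, is built around a comparatively simple idea: look for negation trigger phrases within a small window of tokens around a concept mention. The fragment below is a heavily simplified, hypothetical illustration of that idea, not the published NegEx algorithm; the trigger list and window size are our own assumptions, chosen only to show the flavor of the rule-based processing these systems perform.

```python
import re

# Assumed, abbreviated trigger list; the real NegEx uses a much larger,
# curated set of pre- and post-negation phrases.
NEGATION_TRIGGERS = ["no", "denies", "denied", "without", "ruled out", "negative for"]
WINDOW = 5  # assumed maximum token distance between a trigger and the concept


def is_negated(sentence: str, concept: str) -> bool:
    """Return True if `concept` appears within WINDOW tokens after a negation trigger."""
    tokens = re.findall(r"[a-z0-9']+", sentence.lower())
    concept_tokens = concept.lower().split()
    n = len(concept_tokens)
    # Positions where the concept mention starts.
    concept_starts = [i for i in range(len(tokens) - n + 1)
                      if tokens[i:i + n] == concept_tokens]
    for trigger in NEGATION_TRIGGERS:
        t_tokens = trigger.split()
        m = len(t_tokens)
        trigger_ends = [i + m for i in range(len(tokens) - m + 1)
                        if tokens[i:i + m] == t_tokens]
        for end in trigger_ends:
            if any(end <= start <= end + WINDOW for start in concept_starts):
                return True
    return False


if __name__ == "__main__":
    print(is_negated("The patient denies any chest pain.", "chest pain"))      # True
    print(is_negated("Findings are consistent with pneumonia.", "pneumonia"))  # False
```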

Study methodologies varied widely across the included corpus. One distinction was the mechanism used to create a reference standard against which the automated systems were evaluated. We found that reference standards fell into one of the following general methodologies.

  • Gold standard: multiple, two or more, independent reviewers with adjudication of disagreements to establish consensus in some manner—for example, by majority vote or review/discussion to obtain agreement (a minimal sketch of this adjudication step follows the list below)

  • Trained standard: one expert reviewer classifies the majority of the training set, but the validity of the reviewer's assignment is verified and training is provided to improve the reviewer's performance/consistency

  • Regular practice: one human reviewer, as in the usual manual process; often an existing database reflecting the normal or usual practice was used
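
As a minimal sketch of the gold-standard approach, the fragment below derives a reference label from multiple independent reviewers by majority vote, sending ties to adjudication. The data structures and labels are assumptions made for illustration only; they are not drawn from any of the reviewed studies.

```python
from collections import Counter
from typing import Optional, Sequence


def gold_standard_label(reviewer_labels: Sequence[str]) -> Optional[str]:
    """Return the majority label among independent reviewers, or None if tied.

    A None result signals that the document needs review/discussion
    (adjudication) before it can enter the reference standard.
    """
    if not reviewer_labels:
        return None
    counts = Counter(reviewer_labels).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # tie: send to adjudication
    return counts[0][0]


# Hypothetical example: three reviewers classify a chest x-ray report
# for the presence or absence of pneumonia.
print(gold_standard_label(["pneumonia", "pneumonia", "no pneumonia"]))  # pneumonia
print(gold_standard_label(["pneumonia", "no pneumonia"]))               # None -> adjudicate
```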

Table 3 applies this schema to the included corpus. About 43% of the studies used a gold standard as defined above, a more rigorous, but costly approach. Approximately 51% of the studies compared the automated process with the usual manual process, using regular practice as the standard for comparison.

Table 3

Reference standards

Reference standard methodology No of studies
Regular practice 58
Gold standard 49
Trained standard 5
Unknown (process for determining correctness not specified in the paper) 1

Statistical methods also varied across the system evaluations. Some studies reported simple accuracy rates. A handful of studies utilized more rigorous statistics, such as κ scores, F measures, and receiver operating characteristic curve analysis. Many studies reported more than one measure—for example, sensitivity and specificity, or recall and precision. Table 4 shows the most commonly reported statistics, with the most common measure being recall (or 'sensitivity').

Table 4

Statistical methods

Statistical method reported No of studies
Recall or sensitivity 78
PPV or precision 52
Specificity 49
Accuracy 28
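
Because these measures recur throughout the corpus, it may help to recall how they relate: recall and sensitivity are the same quantity, as are precision and positive predictive value (PPV). The sketch below computes them from the cells of a 2×2 confusion matrix; the function and variable names are ours, not taken from any of the reviewed studies.

```python
def evaluation_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute the measures most often reported in the reviewed studies
    from true/false positive and negative counts."""
    recall = tp / (tp + fn) if (tp + fn) else 0.0       # aka sensitivity
    precision = tp / (tp + fp) if (tp + fp) else 0.0    # aka positive predictive value
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return {"recall/sensitivity": recall, "precision/PPV": precision,
            "specificity": specificity, "accuracy": accuracy, "F1": f1}


# Hypothetical counts for an automated coder judged against a reference standard.
print(evaluation_metrics(tp=80, fp=10, fn=20, tn=90))
```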

The type of coding or classification scheme applied by the system also varied widely. We found the types of coding fell into two primary groups: (1) those that used an existing classification, vocabulary, or terminology system; (2) those that used a clinical guideline or clinical coding scheme, frequently developed specifically for the study. A total of 42 studies fell into group 1, with the remaining 71 studies in group 2. Examples of coding classification systems applied by studies in group 1 include:

  • CPT

  • ICD-8

  • ICD-9-CM

  • ICF

  • UMLS

  • MeSH terms

  • MedLEE's controlled vocabulary (MED)

  • HICDA (Mayo modification of ICD-8)

  • RxNorm

  • SNOMED (multiple versions: 3.5, RT, III, CT)

  • SNOP

The studies in group 2 were subdivided as follows (table 5), reflecting the complexity of the coding and classification scheme applied:

  • Binary: a two-factor scheme, such as follow-up or no follow-up, presence or absence of a particular condition, or positive/negative finding

  • Multiple binary: application of multiple two-factor schemes, such as the presence/absence of more than one condition

  • 3–4 point scale: application of a limited set of factors, such as yes/no/maybe, present/absent/uncertain, or three to four different elements identified

  • Plenary: application of a much more complex coding and classification scheme with multiple conditions or codes. Some examples include: asthma management checklist, 1–5 risk classes for severity, 56 respiratory conditions, Gleason tumor score, and the 5 A's of smoking cessation (Ask, Advise, Assess, Assist, Arrange).

Table 5

Subdivision of coding and classification schemes in the studies in group 2

Subdivision of group 2 studies No of studies
Plenary 33
Binary 16
Multiple binary 12
3–4 point scale 10

The wide variety of coding and classification schemas and study methodologies among the studies in this review made them difficult to compare and contrast. This heterogeneity prevented us from performing a meta-analysis. In addition, sensitivity without specificity cannot be interpreted as a statistical measure. Therefore, no statistical analysis was performed. Instead, we examined the study results as reported, and we observed the study results over time for obvious patterns. Online figures 2, 3, 4 and 5 (available at www.jamia.org) reflect scatter plots for the most commonly reported results.

As shown in online figures 1 to 5 (available at www.jamia.org), the results were wide and varied with no obvious trends and, surprisingly, no obvious improvement in performance over time. Sensitivity scatter plots, shown in figures 6 and 7, dividing the studies by type as identified above, likewise showed little significant pattern.

Figure 6. Scatter plot of sensitivity or recall results reported for group 1 studies. Note: group 1 studies included those that used an existing classification, vocabulary, or terminology system.

Figure 7. Scatter plot of sensitivity or recall results reported for group 2 studies. Note: group 2 included studies that used a clinical guideline or clinical coding scheme, often developed specifically for the study.

Further analysis is required to determine if the results did indeed remain static over time, or if this simply reflects attempts at more and more difficult tasks by the automated systems being evaluated. The more difficult tasks are those involving multiple parameters requiring multiple and complex computer algorithms. Thus, the most difficult coding and classification tasks for the computer applications studied here were those that fell into either group 1 or the plenary subdivision of group 2. Figure 8 shows that nearly all the group 2 plenary coding and classification studies were conducted since 2000, with most in 2005, 2006 and 2008. We did not attempt to correlate the complexity of the tasks undertaken with the evaluation results, but our review indicates that more difficult tasks have been undertaken by automated coding and classification systems in recent years.

Figure 8. Group 2 subdivisions of coding and classification tasks. Note: group 2 included studies that used a clinical guideline or clinical coding scheme, often developed specifically for the study.

Given that these studies did not lend themselves to a meta-analysis, we focused on examining the study elements and results themselves for evidence on how the systems performed the tasks of coding or classification. We examined the corpus to determine if automated coding and classification systems were being used to solve practical real-world problems, and found they have been developed for a number of different purposes, from clinical support to biosurveillance to reporting quality measures. Table 6 applies a schema to these purposes.

Table 6

Purposes of the automated systems studied

Purpose of the system Count Time span of studies
Structured text for clinical decision support/patient care 35 1996–2008
Facilitate retrieval of cases (eg, for research) 21 1994–2005
Testing techniques (eg, NLP methodologies) 17 1998–2008
Biosurveillance 13 1997–2008
Collect specific data 8 2000–2008
Administrative coding process 7 1973–2007
Automate problem lists 5 2005–2007
Apply clinical guidelines 4 1996–2003
Reporting quality measures 3 2007–2008

Discussion

It is clear from the time span these studies cover that researchers have been trying for years to solve the problem of time-consuming chart review using automated methods. For example, attempts to identify subjects automatically for controlled trials, or to apply clinical guidelines and structure text for clinical decision support, have been studied since the mid-1990s. The timing of the development of automated techniques for biosurveillance appears to be related to environmental factors, given that the earliest system studied was piloted at the 1996 Atlanta Olympics, with the anthrax exposures of 2001 and the Salt Lake City Olympics in 2002 spurring additional work. The application of automated systems to reporting quality measures and automating problem lists has only recently been studied, perhaps reflecting the current dual priorities of improving healthcare quality while reducing healthcare expenditures.

There are varying degrees of complexity associated with the coding or classification tasks studied, and more work is needed to correlate purpose and related complexity with evaluation results. Clearly, computers can automatically assign codes and classes to unstructured data, but how well do they actually perform? The researchers who conducted the evaluations had much to say about this. Chapman and Haug7 asserted as early as 1999 that the five algorithms tested in their evaluation performed better than lay persons and at least equal to physicians in a simple binary task of identifying acute bacterial pneumonia on chest x-ray reports. They observed that computerized techniques were more consistent than humans, but that human intuition applied to the task made it difficult to compare humans and computers. In 2000, Elkins et al8 found that, when multiple parameters were involved (ie, not a binary task), computers were not as accurate as humans, but also noted that manual and automated coding each introduced separate errors. Chapman et al9 concluded in 2003 that 'text processing systems are becoming accurate enough to be applied to real-world medical problems.' However, as late as 2006, Kukafka et al10 observed that 'coding tasks involving complex reasoning, such as those in which disparate pieces of information must be connected, are a difficult challenge for current NLP systems.' Of the 113 studies included in our review, 26 specifically asserted that the automated system performed better than, or as well as, humans, while only four explicitly stated that humans outperformed the automated system. A recurring theme was that automated coding and classification system performance was relative to the complexity of the task and the desired outcome.

Clearly, some systems perform well on specific tasks. The difficulty is recognizing what sort of problems automated systems tackle well. This is especially challenging as medical natural language processing tools, commonly used in these tasks, are hard to adapt, generalize, and re-use.11 Turchin et al12 reported that an obvious limitation in these tools was the lack of generalizability: '…a new set of regular expressions has to be developed and validated for each particular task.'
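
Turchin et al's point about generalizability is easy to see in a concrete case. The fragment below is a hypothetical, much-simplified regular expression for pulling blood pressure readings out of note text, written in the spirit of (but not taken from) their work. A pattern like this is tightly coupled to local documentation habits, so a different extraction task, or even a different institution's notes, would typically require a new pattern and a new round of validation.

```python
import re

# Assumed pattern: matches readings such as "BP 120/80" or "blood pressure: 135/85 mmHg".
BP_PATTERN = re.compile(
    r"(?:\bBP\b|\bblood pressure\b)[:\s]*(\d{2,3})\s*/\s*(\d{2,3})",
    re.IGNORECASE,
)


def extract_blood_pressures(note: str) -> list[tuple[int, int]]:
    """Return (systolic, diastolic) pairs found in the note text."""
    return [(int(s), int(d)) for s, d in BP_PATTERN.findall(note)]


note = "Visit note: blood pressure: 135/85 mmHg today, BP 120/80 at last visit."
print(extract_blood_pressures(note))  # [(135, 85), (120, 80)]
```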

To assess whether automated systems currently available for administrative coding purposes perform as well as human coders, we looked more closely at the seven studies conducted to evaluate automation of the administrative coding process. The study elements outlined in online table 7 (available at www.jamia.org) underscore the variability in methodology and focus of the studies included in this administrative coding subset. A number of different systems were tested, applying diverse classification schemes to various document types. Four studies created a gold standard for comparison, while three relied on regular practice as the reference standard.

Online table 8 (available at www.jamia.org) provides summary level data on the results of the studies in the administrative coding subset. Dinwoodie and Howell13 and Warner14 evaluated the systems only on cases where the system was able to code with confidence. Eliminating cases that the system was unable, or uncertain how, to code introduced significant bias into their results. Findings by Morris et al15 were promising, but rather than showing how well computer systems performed, they simply underscored how difficult it was to apply evaluation and management (E/M) code levels (a particularly difficult subset of codes) with any consistency. Results from Lussier et al,16 while pointing to opportunities for improvement, do not appear sufficient for production, while subsequent results from Kukafka et al10 and Goldstein et al17 do not necessarily show the improvement one would hope to see and merely evoke cautious optimism. Findings by Pakhomov et al18 were the most encouraging, with Type A results reaching 98% and Type B results from 90% to 95%. These authors also presented a possibility for partially using automated coding systems in conjunction with human oversight via tiered system outputs.

The 113 studies evaluating automated coding and classification systems included in this systematic literature review show that automated tools are available for a variety of purposes, are focused on various healthcare specialties, and are applicable to a wide variety of clinical document types. Differing research methodologies made it difficult to compare system performance. Two important distinctions that made it particularly difficult to evaluate performance were the mechanism used to create a reference standard against which the automated systems were evaluated and the statistical methods used to evaluate system performance. The complexity of the coding and classification schema used also varied widely, adding to this difficulty.

The types of coding and classification schemas applied by the systems studied fell into two primary groups: those that applied an existing classification system and those that applied a clinical coding scheme, perhaps developed specifically for the study. Further analysis is needed to correlate the complexity of the coding and classification task undertaken with the study results achieved.

This systematic literature review of automated coding and classification systems underscores that automating clinical coding is a difficult task, made even more difficult by the clinical texts that must be processed. Barrows et al19 stated, 'As if NLU (natural language understanding) of narrative text documents by computer systems is not hard enough, the understanding of notational text documents is perhaps even more difficult due to lack of punctuation and grammar, and frequent use of terse abbreviations and symbols.'

Conclusion

We conclude from this systematic literature review that automated clinical coding and classification system performance is relative to the complexity of the task and the desired outcome. Automated coding and classification systems themselves are not generalizable, and neither are the evaluation results in the studies. More work to correlate the purpose and related complexity of these studies with evaluation results could be informative, as would further analysis to determine if performance of automated systems has remained static over time or if the lack of obvious statistical improvement is a reflection of more and more difficult tasks being attempted by the automated systems under evaluation.

The published research examined in this review shows that automated coding and classification systems hold promise, but the application of automated coding must be considered in context. An additional issue requiring further study is what level of performance is required in order for these systems to perform useful real-world clinical tasks, such as providing input to an automated decision-support system, a clinical research study, or a quality-measurement analysis.20 Further development of these systems and a better understanding of the tasks for which they will be used are needed before we can conclude that automated coding and classification systems meet performance standards acceptable for use in complex clinical coding processes and are capable of applying appropriate guidelines for reporting these data.

Supplementary Material

Footnotes

Competing interests: None.

Provenance and peer review: Not commissioned; externally peer reviewed.

References

1. Mulrow C, Cook D, eds. Systematic Reviews: Synthesis of Best Evidence for Health Care Decisions. Philadelphia, PA: American College of Physicians, 1998 [Google Scholar]

2. Meystre SM, Savova GK, Kipper-Schuler KC, et al. Extracting information from textual documents in the electronic health record: a review of recent research. Yearb Med Inform 2008:128–44 [PubMed] [Google Scholar]

3. AHIMA computer-assisted coding e-HIM work group. Delving into computer-assisted coding. J AHIMA 2004;75:48A–48H [PubMed] [Google Scholar]

4. Campbell JR, Carpenter P, Sneiderman C, et al. Phase II evaluation of clinical coding schemes: completeness, taxonomy, mapping, definitions, and clarity. CPRI Work Group on Codes and Structures. J Am Med Inform Assoc 1997;4:238–51 [PMC free article] [PubMed] [Google Scholar]

5. Chute CG, Cohn SP, Campbell KE, et al. The content coverage of clinical classifications. For The Computer-Based Patient Record Institute's Work Group on Codes & Structures. J Am Med Inform Assoc 1996;3:224–33 [PMC free article] [PubMed] [Google Scholar]

6. Wasserman H, Wang J. An applied evaluation of SNOMED CT as a clinical vocabulary for the computerized diagnosis and problem list. AMIA Annu Symp Proc 2003:699–703 [PMC free article] [PubMed] [Google Scholar]

7. Chapman WW, Haug PJ. Comparing expert systems for identifying chest x-ray reports that support pneumonia. Proc AMIA Symp 1999:216–20 [PMC free article] [PubMed] [Google Scholar]

8. Elkins JS, Friedman C, Boden-Albala B, et al. Coding neuroradiology reports for the Northern Manhattan Stroke Study: a comparison of natural language processing and manual review. Comput Biomed Res 2000;33:1–10 [PubMed] [Google Scholar]

9. Chapman WW, Cooper GF, Hanbury P, et al. Creating a text classifier to detect radiology reports describing mediastinal findings associated with inhalational anthrax and other disorders. J Am Med Inform Assoc 2003;10:494–503 [PMC free article] [PubMed] [Google Scholar]

10. Kukafka R, Bales ME, Burkhardt A, et al. Human and automated coding of rehabilitation discharge summaries according to the International Classification of Functioning, Disability, and Health. J Am Med Inform Assoc 2006;13:508–15 [PMC free article] [PubMed] [Google Scholar]

11. Zeng QT, Goryachev S, Weiss S, et al. Extracting principal diagnosis, co-morbidity and smoking status for asthma research: evaluation of a natural language processing system. BMC Med Inform Decis Mak 2006;6:30 [PMC free article] [PubMed] [Google Scholar]

12. Turchin A, Kolatkar NS, Grant RW, et al. Using regular expressions to abstract blood pressure and treatment intensification information from the text of physician notes. J Am Med Inform Assoc 2006;13:691–5 [PMC free article] [PubMed] [Google Scholar]

13. Dinwoodie HP, Howell RW. Automatic disease coding: the 'fruit-machine' method in general practice. Br J Prev Soc Med 1973;27:59 [PMC free article] [PubMed] [Google Scholar]

14. Warner HR, Jr. Can natural language processing help outpatient coders? J AHIMA 2000;71:78–81; quiz 83–4 [PubMed] [Google Scholar]

15. Morris WC, Heinze DT, Warner HR, Jr, et al. Assessing the accuracy of an automated coding system in emergency medicine. Proc AMIA Symp 2000:595–9 [PMC free article] [PubMed] [Google Scholar]

16. Lussier YA, Shagina L, Friedman C. Automating ICD-9-CM encoding using medical language processing: a feasibility study. J Am Med Inform Assoc 2000:1072–2 [Google Scholar]

17. Goldstein I, Arzrumtsyan A, Uzuner O. Three approaches to automatic assignment of ICD-9-CM codes to radiology reports. AMIA Annu Symp Proc 2007:279–83 [PMC free article] [PubMed] [Google Scholar]

18. Pakhomov SV, Buntrock JD, Chute CG. Automating the assignment of diagnosis codes to patient encounters using example-based and machine learning techniques. J Am Med Inform Assoc 2006;13:516–25 [PMC free article] [PubMed] [Google Scholar]

19. Barrows RC, Jr, Busuioc M, Friedman C. Limited parsing of notational text visit notes: ad-hoc vs. NLP approaches. Proc AMIA Symp 2000:51–5 [PMC free article] [PubMed] [Google Scholar]

20. Hersh W. Evaluation of biomedical text-mining systems: lessons learned from information retrieval. Brief Bioinform 2005;6:344–56 [PubMed] [Google Scholar]


Articles from Journal of the American Medical Informatics Association: JAMIA are provided here courtesy of Oxford University Press

