Katya Hill, Ph.D., CCC-SLP
Edinboro University of Pennsylvania
Barry Romich, P.E.
Prentke Romich Company


Figure 1. A Model for AAC Evidence-Based Practice

AAC (augmentative and alternative communication) service delivery has been making a rapid shift from the art form of the past to the science of today. By virtue of the important language component, AAC service delivery is the domain of the speech-language pathologist (SLP), often working on a team with other professionals and stakeholders. The American Speech-Language-Hearing Association (ASHA) has recognized and encouraged the shift toward scientific methods through the revised ASHA Scope of Practice, the very definition of the profession of speech-language pathology in the United States (ASHA 2001). That document now articulates the expectation of data collection, outcomes measurement, and the provision of services in accordance with the principles of evidence-based practice.

Achieving the goal of AAC, the most effective communication possible for the individual, can only be expected through evidence-based practice. While the notion of evidence-based practice is accepted and the methods and tools are available to support data collection and analysis, many practicing SLPs have not received formal training on how to take this approach. Many university SLP training programs still do not offer a course on AAC, although over two-thirds of SLPs claim to include AAC in their practice (ASHA 1999). Many AAC courses have allocated time and resources to teach only the basics, leaving students without course content on performance evidence or experience with the performance differences among various AAC language representation methods. Training in evidence-based practice requires instruction in systematic approaches to accessing, evaluating, interpreting, and applying the research literature.

In addition, consumers and other stakeholders are frequently uninformed of the evidence related to clinical decision-making or are unaware of questions to ask regarding performance measurement and comparisons of communication performance. The purpose of this paper is to present an AAC evidence-based practice flow chart to serve as a systems model for clinical service delivery. The foundation for the model was established on the principles of evidence-based medicine; however, it is not within the scope of this paper to present a thorough review of the tenets of and approaches to evidence-based practice.

The principles of AAC evidence-based practice (EBP) are derived from evidence-based medicine. Evidence-based medicine requires conscientious and judicious use of current best evidence in making decisions about the care of individuals (Sackett, Rosenberg, Gray, Haynes, & Richardson 1996). Evidence-based practice is an approach that promotes the collection, interpretation, and integration of valid, important, and applicable patient-reported, clinician-observed, and research-derived evidence (McKibbon, Wilczynski, Hayward, Walker-Dilks, & Haynes 1995). In addition to the collection, evaluation, and interpretation of data, EBP emphasizes the dissemination of information so that the evidence can reach clinical practice. Therefore, well thought-out and careful clinical planning involves both individual clinical expertise and the best available external clinical evidence in addition to data collected from the individual.

The judgment and competencies for achieving clinical expertise in AAC are acquired through clinical experience and clinical practice. Recognition of clinical AAC expertise has not been formalized nationally or internationally for consumer protection as of this writing. However, DeCoste (1997) has indicated that a clinician should be involved in a substantial number of AAC evaluations before establishing a basic comfort level (proficiency). Suggested guidelines for AAC clinical competence have been developed by ASHA (2002). The various knowledge and skills identified in the report complement EBP, and include the following:

Knowledge and skill in using systematic observation.
Knowledge and skill in identifying and measuring outcomes.
Skill in preparing, monitoring, documenting, and analyzing goals, objectives, procedures, and progress.
Knowledge of performance ratings for AAC systems.

AAC external clinical evidence is obtained from clinically relevant and systematic research. Systematic research is identified in terms of levels of evidence. Various levels of evidence have been established as guidelines for reviewing the strength of the evidence from research studies. The Centre for Evidence Based Medicine has a hierarchy of ten levels of evidence (Phillips, Ball, Sackett, Badenoch, Straus, Haynes, & Dawes 1998). The American Academy for Cerebral Palsy and Developmental Medicine (AACPDM) has a classification system based on five levels of evidence (Butler 2001). The AACPDM Levels of Evidence provide a hierarchy for identifying the highest level of evidence for the research design used in a clinical study. In addition, the AACPDM has established a database to provide a description of the current state of evidence about various interventions used in the management of developmental disabilities. The AAC Institute has adopted the five Levels of Evidence of the AACPDM database as the standard for reporting evidence related to AAC clinical practice. In addition, categories of evidence distinguish between research based on individuals who rely on AAC and research that is not. The results of research based on non-AAC subjects may not be valid in supporting evidence-based practice, and even research based on AAC subjects may not apply to a particular individual. This growing database of evidence to support AAC practice is available at the web site of the AAC Institute.

Levels of Evidence take on significance when making clinical decisions. Clinicians evaluate the strength of the research evidence along with evidence collected from the individual receiving services to support decisions and advance practices. For example, the best available AAC clinical evidence can be used to invalidate previously accepted assessment and intervention practices. In addition, current best evidence replaces old assumptions and beliefs about AAC approaches with more powerful, accurate, and effective service delivery strategies and techniques.


Historically, the best available external clinical evidence has been collected using traditional or manual methods of observation and analysis. The AAC assessment has been defined as a process whereby data are collected and information is gathered to make clinical decisions (Lloyd, Fuller, & Arvidson 1997). Traditional methods of collecting language or performance samples for clinical practice or research generally have included personal observation or video and audio recording with subsequent transcription and analysis. These tools and methods represent a significant investment in human resources and time, as well as delays between data collection and the availability of analyzed results. Nevertheless, language sampling is considered the best evidence of an augmented communicator's performance (Light & Binger 1998).

Clinical research assessing the performance specific to individuals who rely on AAC systems has been limited. A systematic review of the research evidence identified a limited number of studies reporting results based on the performance of augmented communicators. Examples of frequently cited studies include identifying vocabulary use with alphabet-based systems (Beukelman, Yorkston, Poblete, & Naranjo 1984), studying the efficacy of various rate-enhancement strategies (Koester & Levine 1994), and investigating the vocabulary development of cognitively challenged children and adolescents (Romski & Sevcik 1996). In addition, the lack of agreement on reported summary measures makes comparisons across studies difficult.

Currently, automated performance monitoring is providing methods and tools that report quantitative data based on units of measurement to support clinical decisions. A language activity monitor (LAM) was developed originally as a device to be added to existing AAC assistive technology systems (Romich & Hill 1999; Hill & Romich 2002). Several modern high-performance AAC systems now have the data logging function as a built-in standard feature. Methods have been developed to use the LAM data to generate a summary measure report of communication performance (Hill & Romich 2001). Software applications such as the Augmentative Communication Quantitative Analysis (ACQUA) (Lesher, Rinkus, Moulton, & Higginbotham 2000) and others are forthcoming for automating the analysis process. Clinical research has reported evidence on summary measures that include number of utterances, percent complete, percent spontaneous, mean length of utterance, total number of words, number of word roots, average and peak communication rates, selection rate, rate index, use of and communication rate with different language representation methods, selection errors, and spelling errors. These units of measurement provide a systematic and operationalized approach to reporting evidence on communication performance.
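The derivation of such summary measures from logged data can be illustrated with a minimal sketch. The simplified log format below (timestamped utterances) and the function are illustrative assumptions for this paper, not the actual LAM data format or an existing analysis tool:

```python
from datetime import datetime

# Hypothetical simplified log: (timestamp, utterance) pairs. Real logged
# data is richer (per-selection events, language representation method
# codes); this sketch only shows how measures such as total words, mean
# length of utterance (MLU), and average communication rate fall out of
# timestamped records.
sample = [
    ("10:00:00", "i want more juice"),
    ("10:00:20", "thank you"),
    ("10:01:05", "we go outside now"),
]

def summarize(entries):
    times = [datetime.strptime(t, "%H:%M:%S") for t, _ in entries]
    words_per_utterance = [len(text.split()) for _, text in entries]
    total_words = sum(words_per_utterance)
    # Elapsed time from first to last utterance, in minutes.
    elapsed_min = (times[-1] - times[0]).total_seconds() / 60 or 1
    return {
        "utterances": len(entries),
        "total_words": total_words,
        "mlu_words": total_words / len(entries),
        "words_per_minute": total_words / elapsed_min,
    }

report = summarize(sample)
```

A report generated this way gives the clinician operationalized numbers to compare across sessions rather than impressions from observation alone.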

Table 2 provides an example of how automated performance measurement can be used to systematically collect and report results to answer clinical questions following the principles of evidence-based practice (Hill 2001). Similar units of measurement may be identified using traditional methods of observation and recording when considering other aspects of communication.

Table 2. Collecting and reporting evidence based on performance measurement

A language sample library containing analyzed performance data can be a powerful resource to support evidence-based practice. Such a library is being built and maintained at the web site of the AAC Institute. Entries can be retrieved using different methods, including the profile characteristics of the individual. Therefore, by entering the characteristics of the individual being served, analyzed language samples of others with a similar profile can be accessed for comparative purposes. (Contributions of samples representing best practice are welcome.)
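Profile-based retrieval of this kind amounts to filtering analyzed samples on matching characteristics. The sketch below is a hypothetical illustration; the field names (diagnosis, age_group, words_per_minute) and values are assumptions, not the AAC Institute's actual schema or data:

```python
# Stand-in records for library entries of analyzed language samples.
library = [
    {"diagnosis": "cerebral palsy", "age_group": "child", "words_per_minute": 8.2},
    {"diagnosis": "cerebral palsy", "age_group": "adult", "words_per_minute": 12.5},
    {"diagnosis": "ALS", "age_group": "adult", "words_per_minute": 10.1},
]

def matching_samples(library, **profile):
    """Return analyzed samples whose fields match the individual's profile."""
    return [record for record in library
            if all(record.get(key) == value for key, value in profile.items())]

# Retrieve samples from others with a similar profile for comparison.
peers = matching_samples(library, diagnosis="cerebral palsy", age_group="child")
```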

An AAC evidence-based practice flow chart (Figure 1) serves as a systems model for clinical service delivery. The model reflects use of the recently developed methods and tools and the growing body of evidence of communication performance that can be achieved by various populations of individuals who rely on AAC.

The process starts with characterizing the individual. The primary purpose of this step is to permit the retrieval of the communication performance that others with similar profiles have been able to achieve. Then the communication performance of the subject individual is measured. This may be done using automated language activity monitoring to collect a language sample and then analyzing the LAM data to create a performance report. The performance is compared to the previous performance of the individual and also to that of others, with consideration of relevant research. This evidence, combined with stakeholder input, environmental considerations, field knowledge, and clinical experience, drives the therapy process. Response to therapy is determined by reiteration of this process, starting with performance measurement. When the individual's characterization changes, such as with aging, diagnosis, or education, the characterization should be updated so that comparisons draw on the performance of an appropriate peer group and on the most current research. Documentation of progress and outcomes measurement is inherent in the process.
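The comparison step of this cycle can be sketched as a simple computation over summary measures. The function and measure names below are illustrative assumptions, not a published clinical formula:

```python
def compare_performance(current, previous, peer_reference):
    """For each summary measure, report the change from the individual's
    previous sample and the remaining gap to a peer-group reference value."""
    comparison = {}
    for measure, value in current.items():
        comparison[measure] = {
            "change_vs_previous": value - previous.get(measure, value),
            "gap_to_peers": peer_reference.get(measure, value) - value,
        }
    return comparison

# Hypothetical summary measures from two sessions and a peer reference.
current = {"words_per_minute": 9.0, "mlu_words": 3.1}
previous = {"words_per_minute": 7.5, "mlu_words": 3.0}
peers = {"words_per_minute": 10.0, "mlu_words": 3.5}
result = compare_performance(current, previous, peers)
```

Positive change values document response to therapy; the peer gap indicates how far the individual's measured performance sits from what others with a similar profile have achieved.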

In "How to Get Started" (Oxman, Sackett, & Guyatt, 1993), clinicians are provided with tools to evaluate the literature. Three basic questions should be asked to evaluate the strength of the evidence for clinical practice: 1) Are the results of the study valid? 2) What are the results of the study? 3) Will the results help me in caring for my patients (clients)? Once these basic questions are answered, the following questions may be used to help evaluate the evidence and support clinical decisions that will lead to the most effective communication for individuals who rely on AAC systems:

1. Was the evidence collected using individuals who rely on AAC systems or able-bodied participants?
2. What were the methods of utterance generation used during communication?
3. What were the language representation methods available to the individuals for vocabulary selection?
4. Were the units of measurement clearly described and reported?

As in good medicine, good AAC practitioners use both individual clinical expertise and the best available external evidence to support practices. This model blends clinical expertise with data to provide the services that result in the most effective communication for individuals who rely on AAC systems. As appropriate to the situation, the frequency of use of the model may range from weekly with every therapy session to quarterly. The growing library of analyzed language samples available at the AAC Institute web site can be accessed for the performance of other individuals. Practical use of this model should satisfy in part the IDEA (Individuals with Disabilities Education Act) requirements of outcomes measurement for every student on an IEP (Individualized Education Plan). Inclusion of this model and related performance reports can substantially strengthen requests for funding for AAC assistive technology.

American Speech-Language-Hearing Association (ASHA). (2001). Scope of Practice. Rockville, Maryland.

ASHA (1999). Omnibus Survey. Rockville, Maryland.

ASHA (2002). Augmentative and alternative communication: Knowledge and skills for service delivery. Rockville, Maryland.

Beukelman, D.R., Yorkston, K.M., Poblete, M., & Naranjo, C. (1984). Frequency of word occurrence in communication samples produced by adult communication aid users. Journal of Speech & Hearing Disorders, 49, 360-367.

Butler, C. (2001). The AACPDM Database of Evidence Reports. American Academy for Cerebral Palsy and Developmental Medicine News, Vol. 51, No. 2, pp. 3-4.

DeCoste, D.C. (1997). Augmentative and alternative communication assessment strategies: Motor access and visual considerations. In Handbook of augmentative and alternative communication. S.L. Glennen & D.C. DeCoste. San Diego: Singular Publishing Group, Inc. 243-282.

Hill, K.J. (2001). The development of a model for automated performance measurement and the establishment of performance indices for augmented communicators under two sampling conditions. Unpublished dissertation. University of Pittsburgh, Pittsburgh, Pennsylvania.

Hill, K. & Romich, B. (2002, in press). A Language Activity Monitor to support AAC evidence-based clinical practice. Assistive Technology, 13(1).

Hill, K.J. & Romich, B.A. (2001). A summary measure clinical report for characterizing AAC performance. In Proceedings of the RESNA Conference, Arlington, VA: RESNA Press. 55-57.

Koester, H.H., & Levine, S.P. (1994). Modeling the speed of text entry with a word prediction interface. IEEE Transactions on Rehabilitation Engineering, 2(3), 177-187.

Lesher, G., Rinkus, G., Moulton, B.J. & Higginbotham, D.J. (2000). Logging and analysis of augmentative communication. In Proceedings of the RESNA Conference. Arlington, VA: RESNA Press. 82-84.

Light, J., & Binger, C. (1998). Building communicative competence with individuals who use augmentative and alternative communication. Baltimore: Paul H. Brookes Publishing Co.

Lloyd, L.L., Fuller, D.R., & Arvidson, H.H. (1997). Augmentative and alternative communication: A handbook of principles and practices. Boston: Allyn and Bacon.

McKibbon, K.A., Wilczynski, N., Hayward, R.S., Walker-Dilks, C., & Haynes, R.B. (1995). The medical literature as a resource for evidence based care. Working Paper from the Health Information Research Unit, McMaster University, Ontario, Canada.

Oxman, A.D., Sackett, D.L., & Guyatt, G.H. (1993). How to Get Started. In Users' Guides to Evidence-based Medicine. Journal of the American Medical Association. 270(17) 2093-2095.

Phillips, B., Ball, C., Sackett, D., Badenoch, D., Straus, S., Haynes, B., & Dawes, M. (1998). Levels of evidence and grades of recommendations. Centre for Evidence Based Medicine. At http://cebm.jr2.ox.ac.uk/docs/levels.html.

Romich, B.A. & Hill, K.J. (1999). A language activity monitor for AAC and writing systems: Clinical intervention, outcomes measurement, and research. In Proceedings of the RESNA Conference. Arlington, VA: RESNA Press. 19-21.

Romich, B.A. & Hill, K.J. (2000). AAC communication rate measurement: Tools and methods for clinical use. In Proceedings of the RESNA '99 Annual Conference. Arlington, VA: RESNA Press. 58-60.

Romich, B.A., Hill, K.J., & Spaeth, D.M. (2001). AAC selection rate measurement: A method for clinical use based on spelling. In Proceedings of the RESNA Conference. Arlington, VA: RESNA Press. 52-54.

Romski, M., & Sevcik, R.A. (1996). Breaking the speech barrier: Language development through augmented means. Baltimore: Paul H. Brookes Publishing Co., Inc.

Sackett, D.L., Rosenberg, W.M.C., Gray, J.M., Haynes, R.B., & Richardson, W.S. (1996). Evidence-based medicine: What it is and what it isn't. British Medical Journal, 312, 71-72.

The development of AAC performance measurement methods and tools has been supported in part by grants from the National Institute for Deafness and other Communication Disorders of NIH awarded to Prentke Romich Company. The authors gratefully acknowledge the review of this paper by Lyle Lloyd, Ph.D., Purdue University.

The AAC Institute, established in 2000, is a resource for all who are interested in enhancing the communication of people who rely on AAC. Organized as a 501c3 not-for-profit charitable organization, the AAC Institute offers information and provides services worldwide. AAC Institute promotes the goal of AAC, the AAC Rules of Commitment, and evidence-based AAC clinical practice. This mission is accomplished through service delivery, research, information dissemination, and education. The AAC Institute Press publishes peer-reviewed materials to support AAC evidence-based practice and advance the field of AAC clinical service delivery.


Katya Hill, Ph.D., CCC-SLP is an assistant professor at Edinboro University of Pennsylvania where she teaches classes in AAC and Assistive Technology and supervises the AAC clinical activity. Dr. Hill is the Executive Director of the AAC Institute. She has over 20 years of AAC clinical experience and her research has been in the area of AAC language activity monitoring, performance measurement, and evidence-based practice. Dr. Hill can be contacted by phone at 814-392-6625 or by email at katyaaac@aol.com.


Barry Romich, P.E. is the co-founder of the Prentke Romich Company. He holds an adjunct faculty position at the University of Pittsburgh. Mr. Romich has been the primary investigator on the research developing tools and methods to support AAC evidence-based practice. He serves as the treasurer for the AAC Institute. Mr. Romich can be contacted by phone at 330-262-1984 ext. 211 or by email at bromich@aol.com.


Reprints of this paper are available.
Contact the AAC Institute for quantity prices. It is also available at the AAC Institute web site www.aacinstitute.org.

AAC Institute
338 Meadville Street
Edinboro, Pennsylvania 16412
Tel: 814-392-6625
Website: www.aacinstitute.org