The ANAM Program
A critical review of supporting documentation
Prepared by the staff of the Neurocognitive Assessment Branch, Proponency Office for Rehabilitation and Reintegration, Office of the Surgeon General
August 2010

OFFICE OF THE SURGEON GENERAL
DEPARTMENT OF THE ARMY
5109 LEESBURG PIKE
FALLS CHURCH, VA 22041-3258

MCL-HO-PTBI                                                            07 September 2010

SUBJECT: Automated Neuropsychological Assessment Metrics (ANAM) Program White Paper

1. Purpose: Provide a detailed review of the Neurocognitive Assessment Tool (NCAT) program to Congress.

2. Facts: Congress has requested all available documentation on the ANAM Program. In response, a large number of relevant documents have been collected. Tabbed articles, presentations, and internal communications are summarized to make the entire book more easily understandable. Throughout this book, the terms ANAM and NCAT are interchangeable.

a. The ANAM program's history is more troubled than has been commonly understood. Although the ANAM was a pioneering test, developed in the early days of personal computers, ANAM never achieved popularity or widespread clinical use. Prior to the selection of the ANAM for use, critical head-to-head studies ranked ANAM toward the bottom of the computerized measures available (Tab 2).

b. Although it has been stated that a 2007 scientific advisory panel chose the ANAM as the best available technology for our soldiers, this was not the case. The distinguished Scientific Advisory Panel (Tab 3) expressed concerns related to the ANAM, and in fact cautioned against ANAM's use. This report was never published, and the panel, which was to meet quarterly, was never convened again.

c. The Scientific Advisory Panel recommendations were overruled (Tab 3a). The rationale was based on several critical assumptions, which turned out to be untrue:

1. The ANAM was free to the Army (in fact, it had been turned over to a vendor and was a commercial product).
2. The ANAM would only be a very short-term instrument (for an estimated 8 months), until a worthy successor test could be named by the DVBIC Head-to-Head Study to be conducted in 2007, with results due in 2008. Unfortunately, the 2007 DVBIC study on which the selection of the permanent test was to be based has not begun collecting data as of late 2010, and its results are now not due until 2013. It has been essentially forgotten that ANAM was only ever to be a temporary, or interim, tool for use until something better could be found, and that something better was promised years ago.

d. Many things other than TBI are known to affect ANAM performance. Proctor (Tab 4) demonstrated that the stress of deployment alone, such as to a peacekeeping mission in Bosnia, where no blast exposure exists, produces changes in ANAM performance. Russell (Tab 10a) found that 25% of Soldiers with no TBI exposure failed the ANAM while deployed to a combat role in Iraq. A similar percentage, 23.8%, of an entire Airborne division had similarly depressed scores when they returned to Fort Campbell after deployment (Tab 13). Vasterling found similar post-deployment depression of test scores in Germany (Tab 11) and attributed these changes to the general hazards of deployment to a combat zone, not TBI exposure.

e. Study after study showed that lowered ANAM scores do not appear to be related to TBI exposure (Ivins, Tab 8a). This is unacceptable if the ANAM is to be considered a "TBI Test."

f. The ANAM fails in its reliability at a basic level (Graver, Tab 6). The benchmark for a clinically reliable test is 0.9, and the published reliability in the ANAM manual never reaches the 0.9 mark on any subtest at any test-retest interval (including 24-hour retest).
By comparison, the lowest reliability coefficient for the ANAM is 0.24 (Tab 6a), while the lowest one for its more commonly used commercial competitor, the ImPACT, is 0.65, a much better score.

g. The most comprehensive studies of the effectiveness of the ANAM to date (Tab 10a and Tab 18) find mixed results, with some promisingly sensitive subtests but also major problems with the current formulation of the tests, particularly the poor instructions, suggesting the need for revision of the ANAM.

h. The ANAM, while not a good diagnostic instrument, is showing promise far forward in tracking recovery from TBI and making return-to-duty decisions (Tab 10a). There is a paucity of screening tools available in Iraq and Afghanistan, and the ANAM appears to have a role in making return-to-duty decisions (Tab 7a). All of the weaknesses of the instrument, as spelled out above, actually make it a useful tool in this sense. It is sensitive to almost anything, be it neurological, psychological, or fatigue related; if the person has recovered from all of these things, his performance will improve on the ANAM and he can be safely returned to duty. Unfortunately, the ANAM is not at all specific to TBI. It appears we may be doing our soldiers a disservice with the present baseline program.

i. The ANAM has serious practice-effect problems (Tab 17), which have long been understood by the experimental community but not acknowledged by the test vendor. The changes between the first time anyone takes the ANAM and the second time are significant, due simply to familiarity with the test (Tab 5, pg 41, Table 4). Kamimori concludes that the whole effort is fundamentally flawed by this problem. After reviewing all available evidence for the ANAM, the military's Special Operations Command (Tab 19) rejected its use, and instead chose the more modern and commercially accepted ImPACT test, developed by the University of Pittsburgh Medical School.
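The reliability coefficients discussed above have a direct, computable consequence for individual decision making. A standard way to see it is the classical reliable change index (RCI), which scales an observed score change by the standard error of the difference; that error grows as test-retest reliability falls. The sketch below uses purely illustrative numbers (a hypothetical 10-point drop on a scale with a standard deviation of 10), not values from any ANAM or ImPACT manual:

```python
import math

def reliable_change_index(baseline, retest, sd_baseline, reliability):
    """Classical reliable change index.

    The standard error of measurement (SEM) and the standard error of the
    difference both grow as test-retest reliability shrinks, so a
    low-reliability test needs a much larger score change before that
    change can be called real rather than measurement noise.
    """
    sem = sd_baseline * math.sqrt(1.0 - reliability)   # standard error of measurement
    se_diff = math.sqrt(2.0) * sem                     # standard error of the difference
    return (retest - baseline) / se_diff

# Hypothetical example: a 10-point drop (100 -> 90) on a scale with SD = 10,
# evaluated at the reliability figures cited in the text.
for r in (0.90, 0.65, 0.24):
    rci = reliable_change_index(100, 90, 10, r)
    flag = "significant" if abs(rci) > 1.96 else "not significant"
    print(f"reliability={r:.2f}  RCI={rci:+.2f}  ({flag} at the conventional 1.96 cutoff)")
```

Under these assumed numbers, the same 10-point drop clears the conventional 1.96 cutoff only at a reliability of 0.90; at 0.24 the change cannot be distinguished from retest noise, which is why subtest reliabilities in that range undermine individual classification.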
Special Operations Command finds the ImPACT significantly superior to the ANAM.

j. The selection of ANAM was nepotistic, and the long delay in examining alternative instruments is baffling. Efforts must immediately begin to compare other computerized tests, in a fair and unbiased manner, and the best instrument selected. The Army is independently seeking to compare the ImPACT to the ANAM (Tab 21). Barring replacement of the ANAM, serious effort needs to be devoted to repairing known defects in the ANAM, which may be beyond the licensed vendor's internal abilities. If one had to select a test today, the consensus of the Army's neuropsychological community is clear that the ImPACT is a superior test, and the Special Operations Command made a proper decision in using it rather than the ANAM.

Michael Russell, LTC, MS
Chief, Neurocognitive Assessment Branch
Proponency for Rehabilitation and Reintegration
Office of the Army Surgeon General

The ANAM Program: A critical review of supporting documentation, August 2010

- August 2010 - Letter from Congress requesting a meeting to discuss screening tools and processes to detect TBI
- September 2005 - Is Neuropsychological Testing Useful in the Management of Sport-Related Concussion? - Christopher Randolph, Michael McCrea, and William Barr, 2005, Journal of Athletic Training
- October 2007 - Scientific Advisory Panel on the Use of the Automated Neuropsychological Assessment Metrics
  a. October 2007 - Email from COL Jaffee
- Prospective Assessment of Neuropsychological Functioning and Mood in US Army National Guard Personnel Deployed as Peacekeepers - Susan Proctor, Kristin Heaton, Dos Santos, Erik Rosenman, and Timothy Heeren, 2009, Scandinavian Journal of Work, Environment & Health
- ANAM4 TBI Battery - User's Manual
- September 2008 - Military Medicine: Letter to the Editor - Christopher J. Graver
  a. Reference Data from the Automated Neuropsychological Assessment Metrics for Use in Traumatic Brain Injury in an Active Duty Military Sample - Andrea Vincent, Joseph Bleiberg, Sam Yan, Brian Ivins, Dennis Reeves, and Karen Schwab, 2008, Military Medicine
- February 2009 - Letter addressing pre-deployment assessments using ANAM and studies focused on the administration of ANAM immediately post-concussion and validity of the ANAM as a post-deployment screening tool
  a. Neurocognitive Assessment Program - Office of the Deputy Assistant Secretary of Defense for Force Health Protection and Readiness Programs, PowerPoint presentation
- April 2010 - Letter addressing the need for further development and testing of neurocognitive tools for objective evaluation of TBI-related cognitive dysfunction
  a. Performance on the Automated Neuropsychological Assessment Metrics in a Nonclinical Sample of Soldiers Screened for Mild TBI After Returning From Iraq and Afghanistan: A Descriptive Analysis - Brian Ivins, Robert Kane, and Karen Schwab, 2009, Journal of Head Trauma Rehabilitation
  b. June 2009 Memorandum - To: COL Jaffin, COL Springer, LTC Russell. From: COL Jaffee and Dr. Kane. Reference: information paper for ERMC
  c. June 2009 Memorandum - From: Dr. Kane. Subject: Report of Conversation
  d. November 2009 - Information Paper - LTC Russell - Limitations
  e. March 2010 - Ms. Katherine Helmick, Traumatic Brain Injury
  f. May 2010 - Neurocognitive Assessment Program - Ms. Elizabeth Fudge, PowerPoint presentation
- Copies of questions from Representative Pascrell pertaining to ANAM
- May 2010 - Letter addressing post-deployment assessments using ANAM
  a. December 2009 - LTC Michael Russell, Acute Concussion in a Combat Environment, PowerPoint presentation
- PTSD Symptom Increases in Iraq-Deployed Soldiers: Comparison with Non-deployed Soldiers and Associations with Baseline Symptoms, Deployment Experiences, and Post-deployment Stress - Vasterling et al.,
2010, Journal of Traumatic Stress
- June 2010 - Information Paper, LTC Michael Russell, ANAM Program Review
- June 2010 - Executive Summary, LTC Michael Russell, Response to ANAM Questions from VCSA
- June 2010 - Information Paper, LTC Michael Russell, ANAM Program
- June 2010 - Q and A between Senator Inhofe and General Chiarelli - Pre- and Post-Deployment Cognitive Assessments
  a. ANAM4 Pre-Deployment/Post-Deployment Assessment Study (data from Ft. Campbell, KY) - Andrea Vincent
- August 2010 - Acute Effects and Recovery After Sport-Related Concussion: A Neurocognitive and Quantitative Brain Electrical Activity Study - Michael McCrea, Leslie Prichep, Matthew Powell, Robert Chabot, and William Barr, 2010, Journal of Head Trauma Rehabilitation
- August 2010 - Automated Neuropsychological Assessment Metrics (ANAM4) Repeated Assessment with Two Military Samples - Stephanie Eonta, Walter Carr, Joseph McArdle, Jason Kain, Charmaine Tate, Nancy Wesensten, Jacob Norris, Thomas Balkin, and Gary Kamimori
- August 2010 - Compromised Cognitive Processes Following Combat-Related Acute Concussion - CPT Michael Dretsch, COL Rodney Coldren, MAJ Robert Parish, and LTC Michael Russell
- August 2010 - Letter from LTC Mark Baggett to LTC Michael Russell - ANAM position paper
- DVBIC ANAM Articles
  a. DVBIC - Prediction of Concussion Status Using a Computerized Neurocognitive Measure: A Logistic Regression Analysis - Alison Cernich, Joseph Bleiberg, Tresa Roebuck-Spencer, Brian Ivins, Karen Schwab, Dennis Reeves, Fred Brown, and Deborah Warden
  b. DVBIC - Computerized Neurocognitive Testing: Comparison of Individual and Group Administration - Porter, Frisch, Roebuck-Spencer, and Bleiberg
  c. DVBIC - Impact of Remote Brain Injury on Reaction Time-Based Computer Tests - Tresa Roebuck-Spencer, Karen Schwab, Dennis Reeves, Joseph Bleiberg, Brian Ivins, Fred Brown, Daniel Frisch, and Deborah Warden
  d.
DVBIC - Computerized Concussion Assessment without Baseline: Effect of Interval and Severity - Alison Cernich, Joseph Bleiberg, Tresa Roebuck-Spencer, Dennis Reeves, Brian Ivins, Karen Schwab, Fred Brown, and Deborah Warden
  e. DVBIC - Influence of Age, Sex, and Education on the Automated Neuropsychological Assessment Metrics (ANAM) - Tresa Roebuck-Spencer, Joseph Bleiberg, Alison Cernich, Brian Ivins, Karen Schwab, Wenyu Sun, Dennis Reeves, Fred Brown, and Deborah Warden
- August 2010 - Memorandum for LTC Russell. Subject: Approval of Recent Research Protocol Submission, "Comparative Study: ImPACT MIL versus ANAM4 TBI MIL for Acute Concussion"
- The Convoy - Treating 'invisible wounds' - August 2010

Congress of the United States
Washington, DC 20515
August 11, 2010

LTG Eric B. Schoomaker
Surgeon General/Commander
U.S. Army Medical Command
5109 Leesburg Pike, Skyline 6, Suite 672
Falls Church, VA 22041

Lt. Gen. (Dr.) Charles B. Green
Surgeon General
U.S. Air Force HQ
1780 Air Force Pentagon
Washington, DC 20330-4780

Vice Adm. Adam M. Robinson, Surgeon General
U.S. Navy Bureau of Medicine and Surgery
2300 E Street, NW
Washington, DC 20372-6300

Robert A. Petzel, M.D.
Under Secretary for Health
Department of Veterans Affairs
810 Vermont Avenue, NW
Washington, DC 20420

Dear General Schoomaker, Vice Admiral Robinson, Lieutenant General Green, and Dr. Petzel,

As you know, the issue of pre-deployment and post-deployment screenings for traumatic brain injury (TBI) has been raised at hearings held by the House Armed Services Committee, Senate Veterans' Affairs Committee, and the Senate Armed Services Committee this year.
Recognizing that language was passed as part of the FY2008 Defense Authorization Act, Public Law 110-181, requiring the Department of Defense to establish a "system of pre-deployment and post-deployment screenings of cognitive ability in members for the detection of cognitive impairment," we are therefore writing to request a meeting on the status of post-deployment screenings for TBI.

At a June 22nd Senate Armed Services Committee hearing, Department of Defense representatives reported that the ANAM test was being used post-deployment for soldiers displaying symptoms of TBI, yet less than 1% of the over 550,000 soldiers have received a follow-up ANAM test. Furthermore, at the same hearing, the Vice Chiefs testifying echoed their concerns over "false positives." While Public Law 110-181 does not require ANAM, it does however require that the system be able to detect cognitive impairment.

We are requesting a meeting with your office to discuss screening tools and processes to detect TBIs in our soldiers returning home. As part of this meeting, we would like to request all documentation and data that substantiates the Department of Defense and Veterans Affairs claims that ANAM is an insufficient tool. This must include, at a minimum, corroborating test results and analysis that concisely indicates the false positives, and should include all factors and variables of the supporting study. Additionally, we would also request the results of the DOD 2008 "Assessment of Alternatives" in which ANAM was selected, as well as results of studies looking at the effectiveness of the MACE for post-injury and post-deployment cognitive assessment.
Finally, we request substantiating documentation ensuring that the Department of Defense and the Department of Veterans Affairs have established the means to digitize all medical documentation for TBI, PTSD, and/or any other related injuries in the service member's medical records; have implemented a data management system or process that allows access for all Agencies providing health care; and have ensured this system or process allows for the transfer of this medical information to both Agencies, private medical care, and other interested parties for all Active, Reserve, and National Guard service members.

Considering this issue has garnered much attention in the media this year, we expect a number of our colleagues to join us, including members from the Congressional Traumatic Brain Injury Task Force and from the House Armed Services Committee. We look forward to meeting with you and hope that our conversation will be fruitful in strengthening the services provided to our troops.

Sincerely,

Bill Pascrell, Jr.
Member of Congress
Co-Chair, Brain Injury Task Force

Todd Russell Platts
Member of Congress
Co-Chair, Brain Injury Task Force

Tom Cole
Member of Congress

Summary of Article: Is Neuropsychological Testing Useful in the Management of Sport-Related Concussion?
Authors: Christopher Randolph, Michael McCrea, and William B. Barr (September 2005, Journal of Athletic Training)

The purpose of this meta-analysis was to outline the criteria that should be met in order to establish the utility of neuropsychological instruments as a tool in the management of concussion and to review the degree to which existing tests have met these criteria. The data sources for this study consisted of a review of literature from 1990-2004 on neuropsychological testing in sport-related concussion. At the time of this study, most acute concussion data was collected from sport-related injuries.
This meta-analysis compared standard paper-and-pencil neurocognitive tests to four different computerized neurocognitive tests proposed for baseline batteries: ANAM, CogSport, HeadMinder CRI, and ImPACT. There are five steps outlined in this article that are necessary for the validation of a neurocognitive battery used for the management of concussion. They include: establish test-retest reliability (stability), establish sensitivity, establish validity, establish reliable change scores and an algorithm for classifying impairment, and determine clinical utility (detection of impairment in the absence of symptoms).

When attempting to validate the ANAM using the criteria set forth to establish the utility of instruments for the management of concussion, the ANAM fell far short of meeting the basic requirements. In short, in 2005 there was no published data on the reliability of the ANAM, and practice effects were reported to occur on multiple measures within the battery. ANAM generally lacks sensitivity, and its utility for detecting individual impairment after concussion is therefore questionable. No investigations have demonstrated that ANAM is sensitive to the effects of concussion once subjective symptoms have resolved.

Journal of Athletic Training, published by the National Athletic Trainers' Association, Inc

Is Neuropsychological Testing Useful in the Management of Sport-Related Concussion?

Christopher Randolph*; Michael McCrea†; William B. Barr‡
*Loyola University Medical Center, Maywood, IL; †Waukesha Memorial Hospital, Waukesha, WI, and Medical College of Wisconsin, Milwaukee, WI; ‡New York University School of Medicine, New York, NY

Christopher Randolph contributed to conception and design and drafting, critical revision, and final approval of the article. Michael McCrea and William B. Barr contributed to conception and design and critical revision and final approval of the article. Address correspondence to Christopher Randolph, 1 East Erie, Suite 355, Chicago, IL.
Objective: Neuropsychological (NP) testing has been used for several years as a way of detecting the effects of sport-related concussion in order to aid in return-to-play determinations. In addition to standard pencil-and-paper tests, computerized NP tests are being commercially marketed for this purpose to professional, collegiate, high school, and elementary school programs. However, a number of important questions regarding the clinical validity and utility of these tests remain unanswered, and these questions present serious challenges to the applicability of NP testing for the management of sport-related concussion. Our purpose is to outline the criteria that should be met in order to establish the utility of NP instruments as a tool in the management of sport-related concussion and to review the degree to which existing tests have met these criteria.

Data Sources: A comprehensive literature review of MEDLINE from 1990 to 2004, including all prospective, controlled studies of NP testing in sport-related concussion.

Data Synthesis: The effects of concussion on NP test performance are so subtle, even during the acute phase of injury (1-3 days postinjury), that they often fail to reach statistical significance in group studies. Thus, this method may lack utility in individual decision making because of a lack of sensitivity. In addition, most of these tests fail to meet other criteria (eg, adequate reliability) necessary for this purpose. Finally, it is unclear that NP testing can detect impairment in players once concussion-related symptoms (eg, headache) have resolved. Because no current guideline for the management of sport-related concussion allows a symptomatic player to return to sport, the incremental utility of NP testing remains questionable.
Conclusions/Recommendations: Despite the theoretic rationale for the use of NP testing in the management of sport-related concussion, no NP tests have met the necessary criteria to support a clinical application at this time. Additional research is necessary to establish the utility of these tests before they can be considered part of a routine standard of care, and concussion recovery should be monitored via the standard clinical examination and subjective symptom checklists until NP testing or other methods are proven effective for this purpose.

Key Words: neurocognitive function, traumatic brain injury, athletic injury

The medical management of sport-related traumatic brain injury can be conceptualized as having 2 distinct components. The first involves the acute care management of the injured athlete at the time of injury; its primary purpose is to identify and treat any potential neurosurgical emergencies (eg, cerebral hemorrhage). This type of management is quite rarely necessary, as most sport-related brain injuries involve uncomplicated concussions that do not constitute acute neurologic emergencies.1-3 The second component of sport-related concussion management is much more commonly required of team medical personnel. This involves monitoring the symptoms of concussion over time for the purposes of tracking recovery and making return-to-play decisions. In the case of very mild concussions, often referred to as "ding" injuries, recovery may be complete within a few minutes, allowing for return to play that day under most proposed guidelines and (often) obviating any further workup. Many sport-related concussions, however, produce a number of subjective symptoms (eg, headaches, wooziness, changes in balance/coordination, memory impairment) that may last for days after the injury.
Although more than a dozen concussion rating scales compete, with separate return-to-play guidelines, they are all in agreement that athletes should be symptom free before returning to play. The various guidelines differ only in the factors involved in rating the severity of a concussion and in how long a player should be symptom free before returning to competition. A review of the relative merits of these various rating scales is beyond the scope of this paper, as is an extensive discussion of the rationale for ensuring a symptom-free waiting period before returning to competition. The primary concern, however, is that players may be at an elevated risk for repeat concussion during the postconcussive period. Some evidence suggests that such a period of vulnerability exists and that recovery after a second concussion may be more prolonged. A less well-validated concern is the risk of second-impact syndrome, or catastrophic brain swelling, hypothesized to be due to cerebrovascular congestion. This can be a fatal condition, but it is extremely rare, and the causative mechanism remains unclear. It has not been established that closely spaced concussions are necessary to produce this syndrome, and the name may, therefore, be a misnomer; in fact, this syndrome may be attributable to a genetic abnormality related to familial hemiplegic migraine.

Although there is general agreement to date that athletes who suffer a concussion should be symptom free before returning to play, the risks of return to play remain poorly defined, and little consensus exists on exactly how to measure concussion-related symptoms or impairments. Various approaches have been employed to date, including the use of concussion symptom checklists that rely upon player self-report information,8 the use of brief neurocognitive testing developed for sideline evaluations, postural stability measurements,14 and
more extensive neuropsychological (NP) testing.15-17 The latter form of testing, which typically involves a 20- to 30-minute battery of tests measuring attention, memory, and other cognitive functions, is the focus of this paper. This type of NP testing, first used in studies of college athletes, has been routinely employed as part of sport-related concussion management programs in the National Football League and National Hockey League for several years, as well as in a large number of collegiate programs. The use of these batteries has proliferated rapidly. Testing was initially limited to pencil-and-paper batteries. Several computerized test batteries have been developed and are now being commercially marketed to athletic programs in high schools and colleges, including ImPACT (University of Pittsburgh, Pittsburgh, PA), CogSport (CogState Ltd, Victoria, Australia), and the HeadMinder Concussion Resolution Index (HeadMinder, Inc, New York, NY). Athletic trainers and other sports medicine clinicians typically lack a sufficient background in psychometrics to make an informed decision about the utility of such instruments, and no peer-reviewed guidelines exist for the selection of these instruments. Although neuropsychologists are trained in identifying criteria necessary for the implementation of a given instrument for the purpose of clinical assessment, athletic trainers may lack the services of an NP consultant to aid in such decision making. In addition, the application of neurocognitive testing to determine recovery from concussion has some unique characteristics, further underscoring the need for a review of the factors involved in decision making regarding instruments marketed for this purpose.
This need was the impetus for this review, and the objectives of this paper are to (1) briefly review the literature regarding the potential utility of NP testing in concussion management, (2) acquaint athletic trainers and other medical staff with the existing tests used for this purpose, (3) review criteria necessary to establish clinical validity and utility for any test battery proposed or marketed for this purpose, and (4) determine the degree to which existing tests have met these criteria.

BACKGROUND, RATIONALE, AND TERMINOLOGY

In a clinical setting, NP assessment involves the administration of various tests of cognitive functioning (eg, memory, attention, language, visuospatial skills, etc), tests of psychological functioning (eg, personality inventories, symptom scales), and some limited testing of sensory and motor functioning. The results of these tests are combined with information from other sources (clinical history, neuroimaging, laboratory data) to allow neuropsychologists to reach diagnostic conclusions regarding the presence and nature of various developmental and acquired disorders of the central nervous system. The NP assessment is an established method for detecting and quantifying residual cognitive or behavioral deficits that may ensue from traumatic brain injury (TBI).22 In all forms of TBI, cognitive impairments are typically most severe and easily detected in the acute/subacute postinjury phase of recovery, with continued improvement over time and eventual stabilization.
Mild, uncomplicated (eg, no associated bleeding or swelling) TBIs, including concussions, are generally not expected to produce any detectable permanent impairments of cognitive functioning. Even in mild concussion, however, transient impairments of cognition are measurable in the immediate postinjury phase of recovery. The types of cognitive impairments that are most consistently reported after concussion (and TBI in general) include deficits in memory, cognitive processing speed, and certain types of executive functions (typically, measures of verbal fluency or response inhibition). Recently published data from sport concussion studies suggest that these impairments may be detectable in group studies for up to 5 to 7 days. Although clinical NP assessments, as noted above, involve instruments to screen for conditions such as depression or somatoform tendencies that could be diagnostically important, these instruments are not typically employed (or indicated) in the management of sport-related concussion. This is because most athletes are motivated to return to play, and factors with the potential to affect subjective symptom report or actual test performance are generally uncommon in this setting. Therefore, most NP protocols for this purpose consist of only neurocognitive measures (ie, specific tests of memory, attention, and other cognitive domains). For the purpose of this article, however, the terms neurocognitive and neuropsychological are used interchangeably.

Sideline and Baseline Testing

Standardized neurocognitive tests have been employed in the management of sport-related concussion in two ways.
The first approach is through the use of a brief measurement tool, designed for the sideline assessment of players after a concussion, for the purposes of quantifying the severity of impairment during the acute phase and, in conjunction with other clinical information, determining eligibility to return to play in the same game or practice session.37-39 The most completely studied and well validated of these is the Standardized Assessment of Concussion (SAC) (CNS Inc, Waukesha, WI), which takes approximately 5 minutes to administer. Normative, reliability, and change-score analyses of SAC data have been reasonably well documented. The SAC is a relatively cursory neurocognitive screening tool, however, with ceiling effects that potentially limit its usefulness in detecting subtle changes in neurocognitive functions due to concussive brain injury. The primary role of this type of instrument is as one component in decision making regarding same-day return to play. On the other hand, standard clinical NP evaluations typically require several hours to complete, and the length and nature of these evaluations are inappropriate for application in a sports medicine context. Finally, the relatively subtle nature of the cognitive impairments associated with concussion suggests that the best method for detecting these is to compare an injured athlete's performance with his or her preinjury baseline test scores. This approach is important as a means to control for individual baseline variation in cognitive abilities. In sport settings, the most common approach has been to employ a focused, 20- to 40-minute NP battery that is administered to obtain preseason baseline scores against which to compare postinjury performance. The widespread use of this method has resulted in the use of the term baseline battery to identify groups of tests used for this purpose and to distinguish this approach from sideline testing or traditional NP evaluations.15

Table 1. Traditional Pencil-and-Paper Neurocognitive Tests Used in Baseline Batteries (Test, Domain Measured, Description)

Hopkins Verbal Learning Test (Memory): This test consists of a 12-word list that is repeated over 4 learning trials for immediate recall. After a delay, free recall and recognition of the words are tested.

Brief Visuospatial Memory Test (Memory): This is a visual learning test with 3 learning trials and a delayed free-recall trial. Subjects have to learn and reproduce 6 abstract designs in a 2-by-3 matrix.

Digit Symbol subtest (Processing speed): This subtest is a measure of cognitive processing speed. Subjects are asked to fill in a long series of boxes underneath numbers with symbols, using a key to identify which symbol goes with each number. The objective is to correctly fill in as many boxes as possible within the (2-minute) time limit.

Symbol Digit Modalities Test (Processing speed): This test is similar to the Digit Symbol subtest described above. The only significant difference between the tests is that the location of the numbers and symbols is reversed.

Trail Making Test (Processing speed, executive): This test requires subjects to search an array of circles that each contain a letter or number and to connect the circles in order by drawing lines between them. The score is the time required to complete the task.

Controlled Oral Word Association Test (Processing speed, executive): Subjects are given a letter of the alphabet and asked to generate as many words as they can that start with that letter over a 60-second period. The test typically involves 3 separate trials with different letters.

Stroop Color Word Test (Executive): This is a measure of response inhibition. Subjects are given a page with columns of color names (red, green, blue). The names are printed in various colors, and subjects are asked to say the name of the color in which each word is printed, inhibiting the natural tendency to read the words themselves. The test is timed and scored for the total number correct (typically within 45 seconds).

Digit Span Test (Working memory): A subtest of the Wechsler Adult Intelligence Scale, this test involves repeating strings of numbers of increasing string length. This includes both forward span (exact repetition) and backward span (repeating the strings in reverse order).

Letter-Number Sequencing Test (Working memory): This test is also from the Wechsler Adult Intelligence Scale and is somewhat more demanding than simple digit span. Subjects are given strings of numbers and letters in random order and must rearrange them mentally, repeating the numbers in order and then the letters in order. The string lengths increase until the failure of all strings at a given length.

Paced Auditory Serial Addition Test (Working memory, speed of processing): Subjects perform a series of mental arithmetic computations, with the stimuli delivered via audiotape.

Repeatable Battery for the Assessment of Neuropsychological Status (Neurocognitive battery): A brief (20- to 25-minute) neurocognitive test battery with alternate forms, designed as a neurocognitive screening tool or stand-alone battery for the evaluation of dementia. It includes measures in 5 neurocognitive domains, including some domains not believed to be routinely affected by concussion (ie, language and visuospatial skills).

Existing Tests Available for Baseline Test Protocols

A variety of neurocognitive tests have been borrowed or adapted from the clinical armamentarium for the purpose of baseline testing.
Table 1 lists the most common such tests, with a brief description of each measure. These tests all measure various aspects of memory (new learning), cognitive processing speed, working memory, or executive functions. The rationale for choosing tests from these domains is that these are the functions typically affected by traumatic brain injury, as opposed to functions such as language or visuospatial skills, which are more resistant to the effects of brain injury. The first large-scale study employing a set of pencil-and-paper tests was completed in the early 1980s by Macciocchi et al at the University of Virginia. This method was adopted by Lovell et al in creating a battery for baseline testing of selected players with the Pittsburgh Steelers in 1993, although the battery was somewhat modified and expanded. Our group began baseline testing the entire roster of the Chicago Bears and members of the New York Jets in 1995 and 1996, using a battery largely overlapping the battery selected by Lovell et al. The National Hockey League adopted a league-wide baseline testing program using a somewhat different pencil-and-paper battery in the late 1990s, although no data from this program have ever been published. Most of the National Football League teams have adopted some type of baseline testing program over the last several years as well, although this is not a uniform endeavor, and the composition of the test batteries differs somewhat from team to team. To date, there has been no systematic exploration of the relative sensitivity of any of the constituent subtests of these batteries, nor has there been an attempt to create a uniform scaling or composite battery score. In addition, at least 4 computerized tests have been adapted for this purpose, 3 of which are commercially available (CogSport, HeadMinder CRI, and ImPACT) (Table 2). The potential benefits of computerized testing are that trained test administrators might not be needed, multiple subjects could potentially be tested simultaneously (if multiple computers were available), and reaction time data could be recorded.

Journal of Athletic Training 141

Table 2. Computerized Neurocognitive Tests Proposed for Baseline Batteries

ANAM: This computerized battery originally arose from a Department of Defense program focused on determining the cognitive effects of countermeasures to chemical weapons, environmental stressors, and medications. Seven subtests are included, 6 of which have been employed in the investigation of concussions: Simple Reaction Time, Matching to Sample (visual working memory), Continuous Performance Test (sustained attention), Math Processing (processing speed and working memory), Spatial Processing (visual matching), and Sternberg Memory (verbal working memory). It takes 15 to 20 minutes to administer, depending on how many subtests are used, and employs a pseudorandomization procedure to minimize practice effects on repeat testing. There is no overall index score.

CogSport: This battery reportedly requires 15 to 20 minutes to complete. It contains measures of speed, accuracy, and consistency for responses within domains described as Decision Making, Problem Solving, and Memory, although there is some inconsistency in the test descriptors used in the Web site versus publications. There appear to be 8 scores produced by the test and no overall index score. CogSport is one application from a group of computerized neurocognitive tests developed by the company CogState Ltd.

HeadMinder CRI: The Concussion Resolution Index, developed by HeadMinder, Inc, is a relatively new computerized test that utilizes a Web-based administration and scoring system. It can be administered from any computer with an Internet connection and takes 20 to 25 minutes to complete. There are 6 subtests; one of these (Animal Decoding) is similar to the Digit Symbol and Symbol Digit tasks, in that subjects have to type in numbers, keyed to specific animals being presented. Symbol Scanning is a second processing speed subtest, similar to the Symbol Search subtest. There are also 2 reaction time subtests and 2 memory subtests. The latter 2 tests use a continuous-recognition type of format, in which subjects are presented with a series of pictures, some of which are repeated. They are instructed to press the spacebar whenever they recognize a previously presented picture. Three index scores are derived from the 6 subtests: Processing Speed, Simple Reaction Time, and Complex Reaction Time. There is no single overall index score.

ImPACT: This computerized test battery was created by investigators who were active in pencil-and-paper testing in the National Football League and National Hockey League. It requires 20 to 25 minutes to complete. There are 6 subtests, each with multiple associated scores: Word Memory, Design Memory, X's and O's, Symbol Match, Color Match, and Three Letters. Five composite scores are reported: memory composite (verbal), memory composite (visual), visual motor speed composite, reaction time composite, and impulse control composite. Five composite scores are calculated for version 2 of the test, as opposed to 3 composite scores in version 1. There is no overall index score. A self-report symptom scale is also included in this program.

Steps to Validating a Sport Concussion Neurocognitive Battery

As noted above, several steps are necessary to validate a neurocognitive battery for use in the management of sport-related concussion. They include the following: (1) establish test-retest reliability; (2) establish sensitivity; (3) establish validity; (4) establish reliable change scores and an algorithm for classifying impairment; and (5) determine clinical utility (eg, detection of impairment in the absence of symptoms). Each of these 5 steps will be briefly explained.
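The reliability criterion (step 1) can be made concrete with a short sketch. The code below is illustrative only: the scores are hypothetical, not data from any battery discussed here. It computes the two quantities a candidate battery should report separately for each retest interval, the test-retest correlation and the mean practice effect.

```python
# Illustrative sketch only: hypothetical scores, not data from any battery
# reviewed in this article.
from math import sqrt

def pearson_r(x, y):
    """Test-retest correlation between baseline and retest scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def practice_effect(baseline, retest):
    """Mean score gain on retest attributable to prior exposure to the test."""
    return sum(r - b for r, b in zip(retest, baseline)) / len(baseline)

# Hypothetical baseline and retest scores for 8 athletes.
baseline = [50, 47, 53, 44, 49, 55, 41, 52]
retest = [52, 49, 54, 47, 50, 58, 44, 53]

r = pearson_r(baseline, retest)           # stability of the measure
gain = practice_effect(baseline, retest)  # mean practice effect
```

Both numbers must be established for every retest interval that will occur in practice (days, months, more than a year), because a coefficient obtained over days says nothing about stability across a full season.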
Athletic trainers or team physicians reviewing candidate batteries for use in their programs should ensure that each of these requirements has been met by any proposed battery before investing in it for routine clinical use.

Reliability. Different types of reliability are considered in the characterization of a test, but in this context, we are most interested in establishing test-retest reliability, or the extent to which scores on the battery remain stable over time. This includes the exploration of any practice effects, or improvement in performance associated with repeat testing. In the management of sport-related concussion, one would ideally prefer a measure with high test-retest correlation across the different time intervals that are likely to be involved in the practical application of these measures, with minimal or no associated practice effects. Because the time interval between preseason baseline testing and the occurrence of a concussion may be several months in duration, merely demonstrating test-retest reliability over a very short interval (eg, days) is not adequate to establish reliability for this application. In addition, unless new baseline data for each player are obtained on an annual basis, data regarding very long-term (ie, greater than 1 year) reliability should be established. Unless reliability is quite high, a test is unlikely to be useful for the purpose of individual decision making. [...] Concussed and control subjects were tested at 0-23 hours, 1-2 days, 3-7 days, and 8-14 days postinjury. Six dependent measures were included, and the data were analyzed using a mixed-model repeated-measures analysis of variance for each dependent variable separately. Only 1 of the 6 dependent variables yielded a significant group-by-time interaction (Spatial Processing subtest). On post hoc testing, group differences were identified on this subtest only for the first 2 postinjury test sessions (0-23 hours and 1-2 days postinjury).
Apart from a significant group difference on 1 other subtest (Mathematical Processing) on the first postinjury test session, no other post hoc comparison (total number of comparisons = 24) reached significance. Overall alpha was not controlled. These data suggest that ANAM is generally lacking in sensitivity and that the utility for detecting individual impairment after concussion using this instrument is, therefore, questionable.

Validity. Two peer-reviewed articles64,65 have explored the relationship between ANAM subtests and standard NP tests of cognitive processing speed, executive functions, and working memory. The results support the construct validity of the ANAM in the measurement of these cognitive domains.64,65 A number of researchers have demonstrated the sensitivity of ANAM to environmental, pharmacologic, and toxic stressors, as well as other diseases of the central nervous system, providing additional evidence of clinical validity in the application of this battery for the detection of diffuse mild encephalopathic conditions.

Change Scores/Classification Rates. No reports have been published to date regarding the derivation of a global score from the various subtests or any type of change scores.

Clinical Utility. No investigators have demonstrated that ANAM is sensitive to the effects of concussion once subjective symptoms have resolved. The length of the test is appropriate for this application.

CogSport

Reliability. In a peer-reviewed article, one group reported test-retest stability for CogSport over short time intervals (1 hour and 1 week). This was calculated using intraclass correlation coefficients, which are typically employed for subjective rating scales. The coefficients for 1-week retest stability for the dependent variables (4 tests, each with measures of speed and accuracy) ranged from .33 to .82 (median .6).
This is relatively poor reliability, failing to meet suggested standards for individual decision making (see above). To our knowledge, no authors have reported reliability across different computer platforms.

Sensitivity. No published prospective controlled studies have demonstrated that the CogSport computerized test battery is sensitive to the effects of concussion.

Validity. In the same publication referenced in the section above, the authors also reported data correlating the 8 dependent variables with scores on the Digit Symbol Test and the Trail Making Test Part B. They again employed intraclass correlation coefficients, with minimal to modest correlations between Trail Making and the speed measures from the CogSport battery (.23-.44) and modest to relatively strong correlations between the Digit Symbol and the CogSport speed measures (.42-.86). None of the CogSport accuracy measures were correlated with either traditional test. There was no attempt to explore divergent validity measures.

Change Scores/Classification Rates. There has been no attempt in any published work to derive a composite score from the CogSport battery, nor are there any published reports validating change-score methods with this battery. The authors of the battery have published on the statistical issues involved in deriving change scores for detecting the effects of concussion in general,48 but to our knowledge, they have not provided an algorithm (or the data necessary to derive such an algorithm) for doing so with the CogSport battery.
They have not presented data on the magnitude of practice effects for the CogSport battery or on test-retest reliability for retest intervals greater than 1 week.

[A full-page table appears here in the original; it is illegible in this scan.]

Volume 40 Number 3 September 2005

Clinical Utility. No authors have demonstrated that CogSport is sensitive to the effects of concussion once subjective symptoms have resolved. The length of the test appears to be appropriate.

Concussion Resolution Index

Reliability. Two-week test-retest stability of the 3 index scores (see below) was reported to be .82 for the processing speed (PS) index, .70 for the simple reaction time (SRT) index, and .68 for the complex reaction time (CRT) index. These are somewhat below suggested levels for individual decision making but higher than for most individual pencil-and-paper tests.
Significant practice effects were observed only on the PS index for this retest interval. No longer-term test-retest reliability data have been reported. To our knowledge, reliability across different computer platforms has not been explored or reported.

Sensitivity. No prospective, controlled studies have demonstrated that the HeadMinder CRI is sensitive to the effects of concussion. Sensitivity has been explored by using test-retest data from normal subjects (n = 175) tested 2 weeks apart and a subsample (n = 117) tested for a third time 1 to 2 days after the second test session. The HeadMinder CRI has 6 subtests, several of which have both accuracy and speed measures (15 dependent variables in all). The data from these subtests were subjected to a factor analysis, and 4 factors were derived. The information from the factor analysis was used to help compose 3 index scores, which were employed for subsequent analyses. These include a PS index, an SRT index, and a CRT index. Two statistical models for deriving significant change scores were developed (RCI and regression-based change scores) from the normative retest data and applied to data from 26 athletes, primarily at the high school and college levels, who sustained a concussion during a single school year. The duration between baseline and postinjury testing for the concussed group was not reported. On average, concussed players underwent their initial postinjury testing less than 48 hours postinjury and a second postinjury testing approximately 6 days postinjury. The PS index did not appear to be sensitive to concussion at either time point, with classification rates similar to expected rates due to chance, regardless of the methods employed. The SRT index classified 27% to 31% of concussed players (depending on the statistical model) as impaired on the first postinjury testing (10% false-positive rate expected), and the CRT index classified 50% to 53% of concussed players as impaired on the first postinjury testing.
Only 12% to 15% of concussed players were identified as impaired on the second postinjury test session. These data appear to have been separately analyzed for an earlier publication, in which error scores and subjective symptoms were also reported.71 At the first assessment point, it seems that 89% of their subjects were endorsing symptoms. Only 2 players were found to have impairment on neurocognitive testing in the absence of symptoms (not significantly different from chance). The interpretation of these data is complicated by the fact that they were not collected as part of a prospective controlled study, and the test-retest intervals between the control and injured subjects were obviously quite different. The data suggest that components of the HeadMinder CRI might in fact be sensitive to concussion, but this is difficult to determine from the existing data.

Validity. The authors referenced above also reported concordant and divergent validity data for the 3 CRI index scores with 6 standard NP tests of working memory, processing speed, fine motor skill, and response inhibition. These correlational analyses generally support the construct validity of the CRI index scores. We are unaware of any additional clinical validity data.

Change Scores/Classification Rates. The authors have carefully explored 2 common methods for calculating change scores and applied these to an adequately sized normative sample. This occurred for a relatively short test-retest interval, however (maximum of 2 weeks), which is not adequate for routine clinical use, particularly because a practice effect was identified on 1 of the 3 index scores (see "Reliability" section above for details). No attempt has been made to derive a single composite score, which might increase reliability and eliminate the problem of controlling overall alpha.

Clinical Utility.
No researchers have demonstrated that the HeadMinder CRI can detect the effects of concussion in a sample of players once subjective symptoms have resolved. The length of the test appears to be appropriate.

ImPACT

Reliability. We were unable to identify any peer-reviewed paper reporting reliability data on ImPACT. The authors have posted reliability data for short retest intervals (maximum of 2 weeks) from a normative sample of 49 high school and collegiate athletes on their Web site. This was from version 1 of the test (the only version with sensitivity data from a published prospective controlled study). These coefficients were reported to be .54 for the Memory Composite, .76 for the Processing Speed Composite, and .63 for the Reaction Time Composite (for the 2-week interval). These stability coefficients are below suggested levels for individual decision making (see above) and generally commensurate with typical stability coefficients for individual paper-and-pencil tests.

Sensitivity. Only 1 peer-reviewed article involving a prospective controlled study with ImPACT has been published. This involved 64 high school athletes who had suffered concussion, compared with 24 controls. Only data from the Memory Composite score were reported (3-5 composite scores make up the total, depending on the version of the test), and the 2 groups differed at baseline, with the concussion group performing below controls. Although the concussion group's Memory Composite score was reported to be significantly below their baseline at 1.5, 4, and 7 days postinjury, there were no direct postinjury comparisons between the control and concussed groups. The fact that the authors chose to report only a portion of the data available from this battery hampers interpretation of these data, as does the lack of comparable baseline performance between controls and injured players and the lack of any direct postinjury group comparisons.

Validity.
We were unable to identify any reported data on concurrent or divergent validity studies with standardized NP tests or any reported clinical validity data with other patient groups.

Change Scores/Classification Rates. The authors have posted a preliminary analysis from the reliability data described above on their Web site. As would be predicted given the low reliability of these scores, the 90% confidence intervals for the composite scores are quite large. For example, a 90% confidence interval for the Memory Composite score would require a drop of nearly 13 points to reach a criterion of impaired. In reviewing the group data from the 1 controlled study that has been published on ImPACT,73 the concussed group dropped only 8.3 points on average at the first postinjury test session (36 hours postinjury) on this measure; no other measures were reported. No attempt to derive a single global composite score to improve reliability has been reported. The authors also recently published an article examining reliable change scores on version 2.0 of the test. These change-score intervals were derived from 56 young high school and college nonconcussed students, tested on average 6 days apart (range, 1-43 days), and applied to a sample of 41 high school and college athletes tested preseason and then retested within 72 hours of concussion. Using somewhat less conservative confidence intervals of 80%, the authors reported data for 4 of 5 composite scores (data for the Impulse Control Composite score were not reported). A practice effect was identified only on the Processing Speed Composite score. Between 41% and 51% of athletes were classified as impaired across the 4 composite scores (expected false-positive rate of 10%). A standardized concussion symptom scale was also administered, and 54% of concussed athletes were classified as impaired on the basis of this scale at the time of the postinjury testing.

Clinical Utility.
No studies have demonstrated that ImPACT is sensitive to the effects of concussion once subjective symptoms have resolved. The length of the test appears to be appropriate for this application.

SUMMARY

Our objective was to focus on the use of NP test instruments in detecting and tracking concussion-related cognitive impairments as an aid in the management of athletes with concussion. The criteria necessary to justify the routine use of any such instrument were reviewed, along with the degree to which currently available instruments have been demonstrated to meet these criteria. Although some of these issues have been discussed in at least one prior review, a number of additional batteries have become available in the interim, and we undertook the present article as a comprehensive update on the state of the art of NP testing for the management of sport-related concussion.

Unfortunately, no existing conventional or computerized NP batteries proposed for use in the assessment and management of sport-related concussion have met all of the criteria necessary to warrant routine clinical application. Therefore, important questions regarding the validity, reliability, and clinical utility of these instruments remain unanswered. As a result, test-retest data from any of these instruments are difficult to interpret, and any such interpretation must rely far more heavily upon clinical judgment than statistical algorithms. Given these facts, additional research is clearly necessary before NP testing can be considered a component of the routine standard of care in the management of sport-related concussion, particularly as the risks of return to play remain poorly defined. However, NP testing is a reliable, objective method for evaluating the effects of central nervous system injury and disease, including mild TBI. A substantial amount of additional research will be required in order for any of the proposed batteries to meet the necessary criteria for this purpose.
This should include the following: (1) Establishing test-retest reliability over time intervals that are practical for this clinical purpose. Because baseline testing is likely to precede postinjury testing by a period of weeks to months (or even years), test-retest reliability should be established for all applicable retest time periods. (2) Demonstrating, through a prospective controlled study, that the battery is sensitive in detecting the effects of concussion. (3) Establishing validity for any novel test battery, through standard procedures employed to determine which neurocognitive abilities a new NP test is measuring. (4) Deriving reliable change scores, with a classification algorithm for deciding that a decline of a certain magnitude is attributable to the effects of concussion, rather than random test variability. In addition, for tests producing multiple scores, probability values should be adjusted appropriately for the number of scores generated. (5) Demonstrating that the proposed battery is capable of detecting cognitive impairment once subjective symptoms have resolved. This should occur through a controlled prospective study, tracking symptoms through the use of a detailed symptom checklist, with NP testing implemented once symptoms have resolved. Unless an NP battery is capable of detecting impairment after subjective symptom resolution, it cannot alter clinical decision making under any of the current management guidelines. Meeting this criterion would also satisfy the criterion of sensitivity.

Until additional research is completed in order to satisfy these criteria, NP testing for the purpose of managing sport-related concussion should be applied conservatively, given the limitations of the existing data, or be limited to research purposes.
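The reliable-change criterion above can be sketched concretely. The fragment below is a generic, Jacobson-Truax-style reliable change index with a family-wise false-positive calculation; all numbers are hypothetical, and this is not the published algorithm of any battery reviewed here. It shows both the change-score logic and why overall alpha must be adjusted for batteries producing multiple scores.

```python
# Illustrative sketch: a generic RCI, not the published method of any
# battery discussed in this review; all numbers are hypothetical.
from math import sqrt

def reliable_change_index(baseline, retest, sd_baseline, r_test_retest,
                          practice_effect=0.0):
    """Change from baseline in standard-error units, corrected for the
    mean practice effect."""
    sem = sd_baseline * sqrt(1.0 - r_test_retest)  # standard error of measurement
    se_diff = sqrt(2.0) * sem                      # standard error of a difference
    return (retest - baseline - practice_effect) / se_diff

def familywise_false_positive_rate(alpha, n_scores):
    """Probability of at least 1 spurious 'impaired' flag when n scores
    are each tested at level alpha."""
    return 1.0 - (1.0 - alpha) ** n_scores

# A 9-point drop on a scale with SD = 10, r = .90, and a 2-point practice effect:
rci = reliable_change_index(baseline=50, retest=41, sd_baseline=10,
                            r_test_retest=0.90, practice_effect=2.0)
declined = rci < -1.645  # exceeds a one-tailed 95% cutoff

# Five scores each tested at alpha = .10 give roughly a 41% chance of at
# least one false-positive flag, which is why a single composite score
# (or an adjusted per-score alpha) is preferable.
overall = familywise_false_positive_rate(0.10, 5)
```

Note how quickly uncorrected multiple scores inflate the false-positive rate: the classification algorithm, not just the individual change scores, must be validated.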
Athletic trainers are urged to exercise caution in the implementation of any NP testing protocol in their approach to the management of sport-related concussion until this method is more firmly established through the necessary empirical research. A number of authors have reported data indicating that subjective symptoms, documented through the use of a standardized symptom checklist, are evident for a period of time postinjury as long as (or longer than) detectable NP impairment. Given these data, the use of a standardized symptom checklist in addition to routine clinical examination is suggested as a reasonable approach to monitoring recovery from sport-related concussion until the utility of NP testing (or other methods) can be established.

REFERENCES

1. Mueller FO. Catastrophic head injuries in high school and collegiate sports. J Athl Train.
2. Powell JW, Barber-Foss KD. Traumatic brain injury in high school athletes. JAMA. 1999;282:958-963.
3. Piland SG, Motl RW, Ferrara MS, Peterson CL. Evidence for the factorial and construct validity of a self-report concussion scale. J Athl Train.
4. LeBlanc KE. Concussion in sport: diagnosis, return to competition. Compr Ther. 1999;25:39-44.
5. Cantu RC. Head injuries in sports. Br J Sports Med. 1996;30:289-296.
6. Warren WL Jr, Bailes JE. On the field evaluation of athletic head injuries. Clin Sports Med. 1998;17:13-26.
7. Sturmi JE, Smith C, Lombardo JA. Mild brain trauma in sports: diagnosis and treatment guidelines. Sports Med.
8. Concussion in Sport Group. Summary and agreement statement of the First International Conference on Concussion in Sport, Vienna 2001. Phys Sportsmed. 2002;30(2):57-63.
9. Guskiewicz KM, McCrea M, Marshall SW, et al. Cumulative effects associated with recurrent concussion in collegiate football players: the NCAA Concussion Study. JAMA. 2003;290:2549-2555.
10. Cantu RC. Second-impact syndrome. Clin Sports Med. 1998;17:37-44.
11. McCrory PR, Berkovic SF. Second impact syndrome. Neurology. 1998;50:677-683.
12. Kors EE, Terwindt GM, Vermeulen FL, et al. Delayed cerebral edema and fatal coma after minor head trauma: role of the CACNA1A calcium channel subunit gene and relationship with familial hemiplegic migraine. Ann Neurol. 2001;49:753-760.
13. McCrea M, Kelly JP, Kluge J, Ackley B, Randolph C. Standardized assessment of concussion in football players.
14. Guskiewicz KM. Postural stability assessment following concussion: one piece of the puzzle. Clin J Sport Med. 2001;11:182-189.
15. Randolph C. Implementation of neuropsychological testing models for the high school, collegiate, and professional sport settings. J Athl Train. 2001;36:288-296.
16. Lovell MR, Collins MW. Neuropsychological assessment of the college football player. J Head Trauma Rehabil.
17. Echemendia RJ, Julian LJ. Mild traumatic brain injury in sports: neuropsychology's contribution to a developing field. Neuropsychol Rev. 2001;11:69-88.
18. Macciocchi SN, Barth JT, Alves W, Rimel RW, Jane JA. Neuropsychological functioning and recovery after mild head injury in collegiate athletes.
19. Available at: [...]. Accessed February 2004.
20. CogSport. Available at: [...]. Accessed February 2004.
21. Concussion Resolution Index [computer program]. New York, NY: HeadMinder, Inc; 1999.
22. Levin HS. A guide to clinical neuropsychological testing. Arch Neurol. 1994;51:854-859.
23. Dikmen S, Machamer J, Temkin N. Mild head injury: facts and artifacts. J Clin Exp Neuropsychol.
24. Ponsford J, Willmott C, Rothwell A, et al. Factors influencing outcome following mild traumatic brain injury in adults. J Int Neuropsychol Soc.
25. Barth JT, Alves WM, Ryan TV, et al. Mild head injury in sports: neuropsychological sequelae and recovery of function. In: Levin HS, Eisenberg HM, Benton AL, eds. Mild Head Injury. New York, NY: Oxford University Press.
26. Maddocks DL, Saling MM. Neuropsychological deficits following concussion. Brain Inj.
27. McCrea M, Guskiewicz KM, Marshall SW, et al. Acute effects and recovery time following concussion in collegiate football players: the NCAA Concussion Study. JAMA.
28. Shapiro AM, Benedict RH, Schretlen D, Brandt J. Construct and concurrent validity of the Hopkins Verbal Learning Test-Revised. Clin Neuropsychol. 1999;13:348-358.
29. Benedict RH. Brief Visuospatial Memory Test-Revised. Odessa, FL: Psychological Assessment Resources; 1997.
30. Wechsler D. Wechsler Adult Intelligence Scale. 3rd ed. San Antonio, TX: The Psychological Corporation; 1997.
31. Smith A. Symbol Digit Modalities Test. Los Angeles, CA: Western Psychological Services; 1991.
32. Reitan RM, Wolfson D. The Halstead-Reitan Neuropsychological Test Battery. Tucson, AZ: Neuropsychology Press.
33. Benton AL, Hamsher K, Sivan AB. Multilingual Aphasia Examination. Iowa City, IA: AJA Associates; 1983.
34. Golden JC. Stroop Color and Word Test. Chicago, IL: Stoelting Co; 1978.
35. Gronwall DM. Paced auditory serial-addition task: a measure of recovery from concussion. Percept Mot Skills.
36. Randolph C. Repeatable Battery for the Assessment of Neuropsychological Status. [...]

[...] [The authors review] paper-and-pencil tests in a reasonably comprehensive manner, which allows readers to appreciate the instruments currently available. The authors also provide a nice introduction to the relationship between test reliability and sensitivity but offer readers only one reference for reliability of commonly used paper-and-pencil tests. The authors criticize existing research in several respects: most notably, investigators' historical failure to calculate reliable change indices (RCIs) for tests used in concussion research and investigators' failure to develop coherent test composite indexes. Although concerns about paper-and-pencil test psychometrics are factually correct, the authors seem to expect much of some investigators, who were most likely examining the effects of concussion in an exploratory fashion rather than attempting to construct sound methods for making return-to-play decisions. The authors also review currently used computerized tests. Some of these instruments were reported to have limited research support, whereas others have been more extensively examined. The authors summarize the strengths and weaknesses of the computerized test batteries based on their criteria (see Table 4). According to the authors, none of the instruments reviewed has met the criteria necessary to "warrant routine clinical application."
Test developers are likely to take umbrage at such assertions, but issues of standard error of measurement and RCI are very important for clinicians using these tests. The authors also provide several reasonable recommendations regarding general test development but stipulate that test batteries must be "capable of detecting cognitive impairment once subjective symptoms have resolved," a requirement that may be unrealistic given the variability in clinical response to concussive injury.

The authors' review of neuropsychological tests is valuable in a number of respects, which are directly or indirectly alluded to in the article. First, discussing psychometric considerations and test limitations should help to educate persons unfamiliar with these issues. Second, the authors have identified a number of primary research needs, which are important for developing instruments used in making return-to-play decisions as well as for investigators seeking a better understanding of the consequences of concussive injuries. Third, the authors have begun a process of critical evaluation, which can be difficult in our current, competitive scientific environment.

Although the authors should be applauded for their critical appraisal and recommendations, several issues not discussed by the authors merit mention. First, the authors fail to distinguish between studies seeking to validate a commercially developed battery and exploratory studies with completely different purposes. Both types of studies deserve empirical scrutiny, just not using the same criteria. Second, the authors conclude that tests used in concussion assessment are unreliable and questionably valid and then opine that neuropsychological testing is a reliable, objective method for evaluating the effects of central nervous system injury and disease, including mild TBI. Despite the authors' attempt to casually extol the virtues of neuropsychological testing, the same concerns that apply to concussion testing are also concerns for neuropsychological testing in general.
Unfortunately, the authors do not address what makes concussion assessment different from assessing other forms of "central nervous system injury." Reliability and SEM are always a concern and vary depending on the test in question. Detecting very mild, transitory deficits in select neurocognitive networks may make concussion testing somewhat different than other clinical assessments, but many of the problems discussed by the authors are operative in all examinations. Third, based on the data presented, the authors recommend interpreting tests conservatively. This recommendation is prudent, but the authors fail to give readers any practical guidelines for test interpretation, especially important because these tests seem to be widely applied in concussion management. Fourth, the authors recommend relying on symptom checklists but fail to mention that in some cases, athletes minimize or intentionally deny symptoms when they are, in fact, symptomatic. Finally, as previously mentioned, the authors conclude that cognitive deficits resolve before self-reported symptoms resolve, which is the most common recovery pattern identified by group studies but is by no means observed in all athletes.

152 Volume 40 Number 3 September 2005

From a clinical point of view, athletic trainers are faced with concussion management decisions that are rarely based on a single test or decision point. The athletic trainer must consider history, physical and neurologic examinations, imaging studies (if relevant), neuropsychological test findings, and self-reported symptoms, as well as the athlete's motivation and response to injury. In most cases, athletes do not evidence protracted cognitive and physical symptoms. Nonetheless, sometimes players have no cognitive deficits but report significant postconcussive symptoms (eg, headaches, dizziness, and memory complaints). In other cases, players deny symptoms but evidence impaired test performance, even by "conservative" interpretive standards.
The authors recommend using symptom checklists as a primary decision-making tool in part because return-to-play algorithms require symptom resolution. Despite the management value of symptom checklists, an athlete's denial of symptoms may be credible or implausible, but assuming that self-reported symptom resolution is valid in every case is problematic. When setting thresholds for decision-making errors (false positive versus false negative), athletic trainers should consider the problems with RCI noted by the authors, but in all cases, athletic trainers should evaluate the entire set of clinical, historical, and test data available and not rely on any single indicator for return-to-play decisions. Although the problems discussed by the authors merit serious attention, the use of neuropsychological data may help clinical decision making in some cases but not in others. Given the stated need for additional research, completely avoiding the use of neuropsychological tests in clinical practice may have the effect of preventing exploration of the very concerns identified in the current article.

REFERENCES

1. Barr WB. Neuropsychological testing of high school athletes: preliminary norms and test-retest indices. Arch Clin Neuropsychol.
2. Erlanger D, Saliba E, Barth JT, Almquist J, Webright W, Freeman J. Monitoring resolution of postconcussion symptoms in athletes: preliminary results of a Web-based neuropsychological test protocol. J Athl Train. 2001;36:286-287.
3. Erlanger DM, Feldman DJ, Kutner K, et al. Development and validation of a web-based neuropsychological test protocol for sports-related return-to-play decision-making. Arch Clin Neuropsychol.
4. Iverson GL, Lovell MR, Collins MW. Interpreting change scores on the ImPACT following sport concussion. Clin Neuropsychol. 467.
5. Lovell MR, Collins MW, Iverson GL, et al. Recovery from mild concussion in high school athletes. J Neurosurg. 2003;98:296-301.
6. Kabat MH, Kane RL, Jefferson AL, DiPino RK. Construct validity of selected Automated Neuropsychological Assessment Metrics (ANAM) battery measures. Clin Neuropsychol.
7. Hinton-Bayre AD, Geffen GM, McFarland KA.
Mild head injury and speed of information processing: a prospective study of professional rugby league players. J Clin Exp Neuropsychol.
8. Macciocchi SN, Barth JT, Alves W, Rimel RW, Jane JA. Neuropsychological functioning and recovery after mild head injury in collegiate athletes. Neurosurgery. 1996;39.
9. McCrea M, Guskiewicz KM, Marshall SW, et al. Acute effects and recovery time following concussion in collegiate football players: the NCAA Concussion Study.
10. Ponsford J, Willmott C, Rothwell A, et al. Factors influencing outcome following mild traumatic brain injury in adults. J Int Neuropsychol Soc.
11. Macciocchi SN, Barth JT. Methodological concerns in traumatic brain injury. In: Lovell MR, Echemendia RJ, Barth JT, Collins MW, eds. Traumatic Brain Injury in Sports: An International Neuropsychological Perspective. Lisse, The Netherlands: Swets and Zeitlinger.

RESPONSE

We appreciate the commentator's comments and the concerns he raises regarding our article. He suggests that we should have distinguished between exploratory studies of the use of neuropsychological testing in sport-related concussion and studies attempting to validate computerized batteries intended for commercial distribution. He also questions how concerns regarding the application of neuropsychological testing in the management of sport-related concussion can be distinguished from similar concerns in routine clinical assessment. He notes that we "fail to mention that, in some cases, athletes minimize or intentionally deny" symptoms and suggests that it might not be appropriate to rely upon checklists as a result. He also suggests that requiring tests to be capable of detecting impairment in players once they are otherwise asymptomatic may be "unrealistic" and worries that "completely avoiding the use of neuropsychological tests in clinical practice may have the effect of preventing exploration of the very concerns identified in the current article." We have been concerned with these issues as well, and we have devoted a good deal of time and research to addressing these issues over the years.
We have the following responses. First, distinguishing between "exploratory studies" and commercially driven validation studies of neuropsychological tests: in our opinion, this is an unnecessary distinction. The issue at hand is whether or not a test battery can reliably identify "impairment" in concussed players. The ultimate requirements of any such battery are the same, regardless of the provenance of the battery. The fact that some of the exploratory studies were not specifically designed to allow for the derivation of change scores does not preclude a review of their findings with respect to sensitivity and reliability. We see no reason to adopt different standards for our review of these studies. Second, assessment of sport-related concussion versus clinical neuropsychological assessment: we do not think these are comparable endeavors, for the most part. There are a limited number of situations in which neuropsychologists must address change scores in clinical practice. Neuropsychologists are typically engaged in performing much more comprehensive evaluations than the testing that occurs in the sport setting, without the benefit of preinjury baseline testing. These clinical evaluations are carried out for various types of differential diagnostic and treatment planning purposes. With regard to the importance of change scores in clinical practice (eg, detecting postsurgical change, tracking decline in dementia), studies over the past decade or so have detailed the very issues we bring up here and provided change-score information for commonly used tests (eg, WAIS-III, RBANS). In contrast, the potential role for neurocognitive testing in the management of sport-related concussion is constrained simply to the detection of the residual effects of the injury, to ensure complete recovery before return to play. This boils down to a simple mathematical algorithm, and clinical judgment should not be required if the tests meet the necessary criteria for this purpose.
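The "simple mathematical algorithm" the response refers to, and the reliable change index (RCI) debated throughout this exchange, rests on a short calculation: a test's reliability determines its standard error of measurement, and a baseline-to-retest change is judged against the standard error of the difference. A minimal sketch of the classic Jacobson-Truax form, with purely illustrative numbers (the function name, the scores, and the SD and reliability values are hypothetical and not drawn from any test manual cited here):

```python
import math

def reliable_change_index(baseline, retest, sd, reliability):
    """Reliable change index (Jacobson-Truax form).

    sd          -- standard deviation of the measure in the reference sample
    reliability -- test-retest reliability coefficient (r)
    """
    sem = sd * math.sqrt(1 - reliability)   # standard error of measurement
    se_diff = math.sqrt(2 * sem ** 2)       # standard error of the difference
    return (retest - baseline) / se_diff

# With modest reliability (r = 0.70), a 5-point drop falls well inside the
# +/-1.96 band and therefore does not count as reliable change:
rci = reliable_change_index(baseline=50, retest=45, sd=10, reliability=0.70)
print(round(rci, 2))  # -0.65
```

The example illustrates why the commentator and the authors both stress reliability: the lower the test-retest coefficient, the larger the score drop must be before it can be distinguished from measurement error.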
Third, the problem of athletes minimizing symptoms: this is a concern that is often raised to champion the use of objective measures. We have pointed out, however, that when subjective symptom reporting is systematically obtained in prospective studies, players typically report symptoms for at least as long as impairment can be detected using neurocognitive tests (or other objective measures such as balance testing).

Journal of Athletic Training 153

This does not preclude the possibility that any given athlete will deny or minimize symptoms, but it does suggest that a systematic inventory of symptoms is an appropriate first step in monitoring concussion recovery. Symptom checklists are also the fastest, simplest, and most economical way of monitoring recovery. Any more time-consuming, complicated, or expensive method (eg, neurocognitive testing) should demonstrate that it provides demonstrable value-added information in terms of detecting impairment once players are symptom free. Fourth, waiting until symptoms resolve before testing is "unrealistic": we do not understand this concern. If a player is symptomatic, as documented by a symptom checklist, no current guideline would permit return to play. Therefore, testing players while symptomatic can add nothing to clinical decision making and is likely to confound the interpretation of subsequent testing due to uncontrolled practice effects. A recent consensus paper included a statement to this effect as well, advocating that neurocognitive testing be deferred until players are symptom free.5 Fifth, avoiding the use of neurocognitive testing may prevent further study: we clearly called for additional prospective research to establish the use of neurocognitive testing in the management of sport-related concussion.
As neuropsychologists, we would obviously like to be able to recommend this as routine clinical practice, but the data do not allow us to make this recommendation. Finally, the real risks involved in "premature" return to play have never been clearly defined, and no assessment technique or management intervention has ever been demonstrated to result in risk modification. We do not believe it is appropriate for athletic teams to expend significant resources in order to implement an unproven method (neurocognitive testing) in an attempt to modify an unknown risk. Until these questions are answered, we believe that athletic trainers and team medical personnel are best advised to rely on their clinical judgment, augmented by the systematic use of symptom checklists and by consultation with neuropsychologists when appropriate.

REFERENCES

1. McCrea M, Guskiewicz KM, Marshall SW, et al. Acute effects and recovery time following concussion in collegiate football players: the NCAA Concussion Study. JAMA. 2003;290:2556-2563.
2. Nassiri ED, Daniel JC, Wilckens J, Land BC. The implementation and use of the Standardized Assessment of Concussion at the U.S. Naval Academy. Mil Med.
3. Daniel JC, Olesniewicz MH, Reeves DL, et al. Repeated measures of cognitive processing efficiency in adolescent athletes: implications for monitoring recovery from concussion. Neuropsychiatry Neuropsychol Behav Neurol. 1999;12:167-169.
4. Bleiberg J, Kane RL, Reeves DL, Garmoe WS, Halpern E. Factor analysis of computerized and traditional tests used in mild brain injury research. Clin Neuropsychol. 2000.
5. McCrory P, Johnston K, Meeuwisse W, et al. Summary and agreement statement of the 2nd International Conference on Concussion in Sport, Prague 2004. Br J Sports Med.

Summary of 2007 Scientific Advisory Panel on Use of the Automated Neuropsychological Assessment Metrics

The issue of neurocognitive assessment and its role in TBI evaluation has become increasingly important.
Subsequently, a panel of health care providers and researchers with expertise in TBI and neurocognitive assessment was convened on October 2-3, 2007, to provide recommendations regarding the process of TBI-specific neurocognitive assessment. Issues with the ANAM discussed at that time included the use of the ANAM as a universal baseline assessment with a 10% preliminary false positive rate, use of ANAM data to guide clinical referral and return to duty determinations, lack of algorithms to guide command decisions pertaining to fitness for duty decisions, and the need to include measures of response inhibition and effort. Panel recommendations included an ongoing evaluative process, including head-to-head studies of the ANAM and related devices. This was recommended to ensure integrity of the assessment process. Periodic assessment and reassessment of the instrument was also recommended, particularly test-retest reliability, discriminant and convergent validity, and predictive/ecological validity.

Scientific Advisory Panel on Use of the Automated Neuropsychological Assessment Metrics
October 2-3, 2007

Panel Members:
COL Robert J. Labutta - Co-chair
CAPT Morgan Sammons - Co-chair
COL Bruce Crow
COL Greg Gahm
Dr. Louis French
COL Karl Friedl
Dr. Pamela Mishler
LtCol Michael Jaffee
CDR Russell Shilling

Invited Subject Matter Experts in Attendance:
Joseph Bleiberg
David Cox
Kirby Gilliland
Robert Kane
William Perry
Katharine Winter
George Zitnay

Traumatic Brain Injury (TBI) is one of the "signature wounds" from the current conflicts in Iraq and Afghanistan.
In their final report, the Independent Review Group (IRG) examined conditions at Walter Reed Army Medical Center and elsewhere in military medicine and recommended that "The Assistant Secretary of Defense (Health Affairs), in conjunction with the Services, should develop and implement functional and cognitive measurements upon entry to military service for all recruits; the Assistant Secretary of Defense (Health Affairs) should include functional and cognitive screening on the post-deployment health assessment and reassessment; the Assistant Secretary of Defense (Health Affairs) should develop and issue a policy requiring 'exposures to blasts' be noted in a patient's medical record; and the Assistant Secretary of Defense (Health Affairs) should develop comprehensive and universal clinical practice guidelines for blast injuries and traumatic brain injury with post traumatic stress disorder overlay, and disseminate Military Health Systemwide." (Independent Review Group Report on Rehabilitative Care and Administrative Processes at Walter Reed Army Medical Center and National Naval Medical Center, April 2007). Section 723 of the National Defense Authorization Act for fiscal year 2006 directed the Secretary of Defense to "establish within the Department of Defense a task force to examine matters relating to mental health and the Armed Forces." The Task Force on Mental Health was established and assigned to assess the military mental health system and make recommendations for improving the efficacy of mental health services provided to members of the Armed Forces. Additionally, the President's Commission on Care for Wounded Warriors recommended a fundamental reconstruction of how services to wounded combatants are provided.
They recommended that DoD and Veterans Affairs (VA) must rapidly improve prevention, diagnosis, and treatment of both Post Traumatic Stress Disorder (PTSD) and traumatic brain injury (TBI). They further recommended that DoD should establish a network of public and private-sector expertise in TBI and partner with the Veterans Affairs (VA) on an expanded network for PTSD, so that prevention, diagnosis, and treatment of these two conditions stay current with the changing science base. Specifically, it should: "conduct comprehensive training programs in PTSD and TBI for military leaders, VA and DoD medical personnel, family members, and caregivers; disseminate existing TBI and PTSD clinical practice guidelines to all involved providers; where no guidelines exist, DoD and VA should work with other national experts to develop them." In support of these recommendations, the Office of the Deputy Assistant Secretary of Defense (Force Health Protection and Readiness) was given the lead for strategic TBI and PTSD planning. As part of this plan, the issue of neurocognitive assessment and its role in TBI evaluation became increasingly important. Subsequently, a panel of health care providers and researchers with expertise in TBI and neurocognitive assessment was convened on October 2-3, 2007, to provide recommendations regarding the process of TBI-specific neurocognitive assessment. This panel's task was to examine the tool currently being fielded by the US Army, the Automated Neuropsychological Assessment Metrics (ANAM), and make, on the basis of currently available science, recommendations regarding: 1) deployment of a single standardized battery, 2) interpretation of results, 3) required quality assurance, 4) education and communication plan, and 5) necessary research. Subject matter experts from outside the Department of Defense attended the October 2-3 meeting and individually provided the panel with technical assistance and comments.
Rationale

Universal neurocognitive assessment in the US military is a daunting and historic challenge. Even with neurocognitive assessment scaled down to a brief automated battery designed primarily to detect the effects of TBI, such large scale testing has never been previously attempted. While there are many immediate and long-term benefits that may accrue from measuring some of the cognitive effects of TBI, determining the optimal policies, procedures, and safeguards is an absolute necessity. A brief neurocognitive assessment at the time of service entry or basic training may serve as a baseline against which to compare any post-TBI effects. It is extremely doubtful that the results of such an assessment will ever be used to "screen out" servicemembers on the basis of test scores or to assign them to Military Occupational Specialties (MOS) that they otherwise would not have selected. We must have confidence in the predictive ability of any neurocognitive assessment well before it is used to make personnel decisions of any kind. We must also have confidence that assessment devices have both the sensitivity and specificity required to assist in medical decision making, and to appropriately categorize and identify for follow-up those servicemembers who have suffered, or who may be at differential risk to experience, a traumatic brain injury (e.g., explosive ordnance disposal personnel). A further consideration involves data collection, security, and accessibility for research purposes. Data from neurocognitive assessments have both operational and medical uses and must be securely stored so they are accessible by both operational and medical decision makers.

Automated Neuropsychological Assessment Metrics (ANAM)

The ANAM represents three decades of jointly sponsored computer-based test development for assessing cognition and human performance. The library of test modules in the ANAM is used in research, test, and clinical settings.
Among the validated batteries constructed from the library of test modules is the Traumatic Brain Injury (TBI) battery. The TBI battery can be completed in approximately 15-20 minutes and tests the domains most affected: simple reaction time, code substitution, matching to sample, procedural reaction time, and mathematical processing. The TBI battery also collects demographic information, a sleepiness scale, and a mood scale. The instrument runs on a variety of platforms (desktop, laptop, PDA, Web, and U3) and uses only a mouse and simple key responses, thus requiring no additional hardware (trackball optional). A major advantage of the TBI battery is the incorporation of a performance report writing tool, which provides a user-friendly but rigorous summary of test results as well as a comparison of results to norms derived from a large (approximately 5,500) military population. In addition, the system's pseudo-randomization design can create multiple forms of item sets, minimize learning effects, and facilitate repeated-measures precision testing. The ANAM is being implemented on a large scale by the US Army, with testing of the 101st Airborne Division at Ft. Campbell, KY prior to deployment and with a testing cell standing up in Kuwait for units soon to enter Iraq.

Issues

1. Use of the ANAM as a universal baseline assessment

The ANAM was not designed as a tool for universal neurocognitive assessment; therefore, an optimally valid configuration of the test battery is not completely known, nor has it even been empirically demonstrated that the individual subtests presently employed are valid for this purpose. While ANAM subtests appear promising for this purpose, as yet there is no normative data that can guide the development of cutoff scores for this particular use of the tool.
Preliminary results from the ongoing rollout indicate that less than 10% of those taking the battery score in a range requiring reassessment (indicating a "spoiled baseline"), and that upon reassessment essentially all participants achieve recommended minimum scores. While these data are reassuring in answering concerns about the device's ability to identify false positive baselines, the ability to identify true positives using the current configuration and cutoff scores remains to be assessed. In a related vein, the stability of ANAM scores across various time intervals has not been determined, nor has the influence of specific historical variables (with the exception of TBI) on assessment results in the context of pre- and post-deployment screening. Some data regarding the effects of deployment on ANAM test performance exist; however, the influence of various environmental factors on cognition in general, and ANAM performance in particular, requires further research.

2. Use of the ANAM in deployed environments

Appropriately trained personnel will be required to administer the ANAM in deployed environments. Administration of the ANAM may be sensitive to environmental variables; it remains unknown if scores obtained from large scale screening in deployed environments approximate those obtained in garrison settings. Only those who have demonstrated competence in test interpretation should be authorized to report findings of ANAM examinations. These individuals are trained specialists, of whom there are unlikely to be sufficient numbers in-theatre. However, the results of an ANAM TBI battery may be used by any medical personnel in-theatre or at any other location as a decision support tool for possible referral to the next echelon of care (for interpretation or context) or for return to duty.

3.
Use of ANAM data to guide clinical referral and return to duty determinations

Front line clinicians will probably not have sufficient expertise to interpret the numerous battery subscores. The availability of trained interpreters of the test results is critical. A composite score or indicator that will permit front line providers to make immediate recommendations regarding referral or return to duty is required.

4. Use of the ANAM to guide command decisions, particularly fitness for duty decisions

No algorithms currently exist to guide commanders in decision making. Commanders must understand that the ANAM cannot, by itself, be considered sufficient to make command decisions about individual functional capability.

5. Use of the ANAM in injured personnel

The purpose of the ANAM is to assist in the measurement of the effects of TBI after the diagnosis of TBI in a military population. It should only be administered in-theatre to those who have sustained an actual or high-probability TBI.

6. Composition of the battery

The inclusion of measures of response inhibition and of effort is required. If these measures have no correlate in the existing test battery, they should be added.

Recommendations

1. The panel supports the use of the ANAM as a neurocognitive assessment device, with appropriate provisos governing its use and exact composition. The ANAM should be administered within 6 months prior to deployment. The ideal composition of the ANAM measures included has yet to be fully agreed upon. Configuration of a standardized ANAM battery should receive highest priority. An ongoing evaluative process, including head-to-head studies of the ANAM and related devices, is required to ensure integrity of the assessment process.
Initial implementation of this battery should use cut points of 2 SDs below the mean on two subtests, or 3 SDs below the mean on one subtest, to recommend further evaluation. Until in-theatre norms have been established, the panel recommends caution about using pre-deployment baseline ANAM data for in-theatre assessments/comparisons. Research regarding the implementation of change scores will further inform clinical and operational decision making.

2. In the deployed environment, it is not recommended that the ANAM be used at the Level I echelon of care. The Defense and Veterans Brain Injury Center (DVBIC) developed clinical management guidelines for TBI at the Level I echelon of care. The DVBIC recommends the Military Acute Concussion Evaluation (MACE) be administered for diagnostic and functional evaluation post-injury. It may be possible to correlate a first responder MACE with ANAM at a later date. Clinical management guidance using the MACE has already been established, and the panel supports this practice. This practice should continue along with the continued use of the DVBIC clinical management guidelines for TBI at the Level I echelon of care. The ANAM will best be employed at the Level II and Level III echelons of care, where, if needed, more ready access to specialized expertise exists. Uneven resources in-theatre may interfere with universal application of guidelines utilizing ANAM at Level II. Any servicemember being considered for return to duty after a blast injury who is being evaluated at the Level II or Level III echelon should receive an assessment with the ANAM. The ANAM may be used as a clinical tool at the Level IV echelon of care in keeping with currently established clinical practices.

3. The panel recommends the development of a comprehensive educational plan to address use of the ANAM and resulting data. This educational plan must address multiple audiences: servicemembers, their families, commanders, and medical providers.
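The panel's initial cut-point rule (2 SDs below the mean on two subtests, or 3 SDs below the mean on one subtest) is a simple threshold test over norm-referenced scores. A minimal sketch of that rule, assuming subtest scores have already been expressed as z-scores against the reference norms (the function name and the example values are illustrative, not taken from any ANAM configuration):

```python
def needs_further_evaluation(z_scores):
    """Flag for further evaluation per the panel's initial cut-point rule:
    two or more subtests at or below -2 SD, or any one subtest at or
    below -3 SD. (Assumed interpretation: thresholds are inclusive.)

    z_scores -- per-subtest scores expressed as z-scores against norms
    """
    below_2sd = sum(1 for z in z_scores if z <= -2.0)
    below_3sd = any(z <= -3.0 for z in z_scores)
    return below_2sd >= 2 or below_3sd

# Hypothetical z-scores for a five-subtest battery:
print(needs_further_evaluation([-0.5, -2.1, -1.0, -2.3, 0.2]))  # True: two subtests below -2 SD
print(needs_further_evaluation([-0.5, -1.1, -1.0, -1.3, 0.2]))  # False: no threshold crossed
```

Whether the thresholds are inclusive or exclusive, and which norm group supplies the mean and SD, are exactly the configuration details the panel says had yet to be agreed upon.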
Command education should emphasize that the ANAM is but one data point of many that must be incorporated into decisions regarding fitness for duty. The potential misuse of TBI data, including those emanating from ANAM administration, must be addressed.

4. Results of assessment with the ANAM must be clearly specified in a format that is usable by commanders and which conveys appropriate information to medical providers and decision makers. It is strongly recommended that any platform housing ANAM data be a joint service tool and have interface capabilities with the Armed Forces Health Longitudinal Technology Application (AHLTA) and DoD's Clinical Data Repository (CDR). Identification of "flags" to guide clinicians and commanders must be accomplished.

5. The use of telehealth technology to allow for consultation and, if needed, appropriate interpretation of assessment results in-theatre should be explored.

6. The DoD, in concert with the Defense Health Board (DHB), should establish an external advisory body as a continuing DHB sub-committee to provide recommendations and to develop a plan of continuous process improvement to guide the implementation of the ANAM. It is recommended that the DHB TBI External Advisory Subcommittee meet no less frequently than quarterly. In specific, the Subcommittee should be charged with systematic review of ANAM results and recommending changes to the assessment device and/or process on this basis. The effects of universal baseline assessment upon force readiness should also be periodically assessed.

7. A broad based educational platform should be developed with target audiences of servicemembers, their families, line leadership, and military and civilian healthcare providers regarding the appropriate use of the ANAM and interpretation of results. Use of multimedia and other consumer accessible platforms for dissemination of information will be required.

8.
The ANAM battery may need to incorporate measures of effort and response inhibition in the context of sustained attention. The psychometric properties of the instrument, particularly test-retest reliability, discriminant and convergent validity, and predictive/ecological validity, must be periodically reassessed.

Research

Universal neurocognitive assessment provides us with a heretofore unavailable opportunity to answer questions about the neurocognitive functioning of servicemembers at baseline and throughout the deployment cycle, and about the use of specific tools.

Research and Quality Assurance Questions

1. Is the Automated Neuropsychological Assessment Metrics (ANAM) useable as a decision support tool (baseline, after a traumatic event, and post-deployment) for neurocognitive deficits?
2. Do hand-held or other abbreviated versions of the ANAM have good convergent validity with other versions and measures (ANAM, other computerized batteries such as CogSport/CogState, Headminder, etc., traditional neuropsychological tests, real-life functional measures)?
3. What characteristics of the test environment or delivery platform affect test results? Are particular ANAM tests differentially susceptible to environmentally mediated performance?
4. On what cognitive domains does the factor structure of the ANAM permit reliable assessment?
5. What is the ecological validity of the ANAM as it pertains to military tasks in the deployed environment or any other tasks such as Activities of Daily Living (ADLs)?
6. Do tests of response inhibition or delayed reaction time translate to individual performance differences on the battlefield (probably as assessed by Virtual Reality (VR) and other simulations, but very useful if they actually could be related to combat survival/performance)?
7. How do characteristics of the environment in which the test is administered affect performance on the measure? Are scores obtained in the deployed environment comparable to those obtained in a garrison administration setting?
8.
Can we correlate scores on the ANAM with results obtained from physical measures of potential trauma, such as helmet accelerometers or "blast exposure detectors"?
9. Health risk communication: How do servicemembers and family members acquire knowledge regarding traumatic brain injury and the sequelae of operational exposure? How are servicemembers and families informed about the role of testing in TBI evaluation? What valence do servicemembers and their families attach to information received from traditional (e.g., physicians, mental health providers, military authorities) vice non-traditional (e.g., internet, word-of-mouth, YouTube) sources of information?
10. What is the utility of the ANAM in the remote or post-deployment assessment of servicemembers who have been exposed to blast with resulting alterations of consciousness? Does the ANAM have sufficient sensitivity to detect alterations in neurocognitive functioning that may persist for weeks or months post-event, using different neurocognitive and imaging methods as markers?
11. Does the ANAM allow the assessment of attempts to enhance cognitive performance in clinical trials? Is the ANAM sensitive to differences in processing speed or attention that may accrue from the administration of stimulant medications or other agents? (The answer from single-subject placebo-crossover double-blind published studies is that it does; larger scale replications are needed.)
12. Is the ANAM sensitive to longer-term changes in higher level executive functions that may result from brain injury (e.g., computational ability, decision making, affective modulation)?
13. What is the role of the ANAM in translational research? Do investigations of response inhibition in animals inform use of the ANAM? Does the ANAM have a role in the development of computational models of neurological function in brain injury and recovery?
14. What is the stability of change scores over time?
Is it possible to develop a change score normative set where the fixed battery of interest is given at various time intervals to reflect anticipated operational use?
15. What other sensory correlates of Traumatic Brain Injury (TBI) can be reliably assessed (e.g., olfactory sensitivity, voice recognition)? How do ANAM scores correlate with other neurobiological correlates of TBI?
16. What non-traditional mechanisms of assessment can be applied to investigating potential sequelae of brain injury in servicemembers? Can game formats be adapted to maintain motivation while assessing cognitive performance? Can voice recognition be used as an interface to assess individuals with polytrauma who cannot use devices such as the mouse or keyboard? Can we adopt non-traditional measures, such as game formats, to assess stress (e.g., analysis of speech patterns/articulation) and/or cognitive factors (e.g., attention deficits, reaction time)? Can game formats be used to assess progress in rehabilitation?
17. What is the relationship between Post Traumatic Stress Disorder (PTSD) and other sequelae of brain injury and TBI? What is the overlap, and how can the ANAM or other neurocognitive assessment best differentiate between those deficits that are stress related and those resulting from TBI or toxic exposure? What type of samples would be required to distinguish between stress-mediated and neurobiologically mediated deficits?
18. Can the ANAM or other device be used to identify outcomes in a longitudinal cohort study of servicemembers deployed to theatre vice those who have not deployed? How can results from the ANAM or other neurocognitive assessment be integrated into extant longitudinal studies such as the Millennium Cohort Study?
19. What is the utility of the ANAM as a management decision making tool? (Similar to 1.) Does baseline data add to the clinical utility of the ANAM for deployed warriors compared to having only normative data based on group performances?
20.
What is the reliability of baseline data derived from the ANAM? How many assessment points are required to establish a stable baseline? What is the cost/benefit of a second baseline, in terms of improving test-retest reliability and reducing practice effects, both of which could produce a more clinically useful measure? 21. How do ANAM data correlate with other measures of disability, including non-cognitive measures? Do ANAM data correlate with disability evaluation ratings? Is a larger ANAM battery necessary for this correlation? 22. How does performance on the Mood Scale impact scores on the various cognitive measures in the ANAM battery? 23. Can the mood scale of the ANAM be used to predict stability of mood over time? Can we make future projections of mood or dysfunction on the basis of a mood scale cutoff point? 24. What, if any, corrections are needed when assessing ethnic minorities and individuals for whom English is a second language? 25. Can the clinical use of ANAM be increased by integrating it into existing screening software? 26. What are the current clinician beliefs about ANAM, and does a briefing change attitudes/use patterns? 27. What is the relationship between ANAM scores and answers? 28. The neurobiological understanding of suicide behaviors is in its infancy. Is there any difference in ANAM scores between Soldiers with and without a recent suicide attempt? Does depression account for all the variance in any relationship between suicide and ANAM scores, or is there any unique predictive value for suicide behaviors? 29. What is the utility of the ANAM for documenting improvements in cognitive functioning associated with treatment? 30. Is the ANAM sensitive to feigned or malingered cognitive deficits? 31.
What are the differences in ANAM performances among those attempting to exaggerate or fake cognitive deficits relative to populations with confirmed deficits?

Email communication between LT COL Michael Jaffee and Panel Members and Subject Matter Experts from the Oct. 2007 Scientific Advisory Panel

Contrary to statements subsequently implied, the 2007 Scientific Advisory Panel expressed concerns related to the selection of the ANAM as a baseline tool for TBI and noted the ANAM was not among the top-ranked tests available. The ANAM was chosen by LT COL Jaffee, who overruled the panel and selected the test at that time because it was said to be 'free' to administer (which later turned out not to be the case, costing roughly thirty dollars per administration) and had military norms. The ANAM was also chosen as an interim test and was always supposed to be replaced with a better clinical instrument following the completion of a head-to-head study. The ANAM was only going to be used for 18-24 months, during which time a head-to-head study was to be completed and a more suitable test was to be chosen.

Subject: RE: ANAM DRAFT Report

Dr. Mishler: Thank you very much for compiling this summary report. Below are some preliminary comments: Bottom of p. 2 (preamble): it was not my understanding that this panel was charged w/ evaluating available tools. It was my understanding that a head-to-head study being organized by DVBIC was to be facilitated by NAN. Since the head-to-head study would not be completed for 18-24 months, and the ANAM was the only tool available free of charge as well as having available military norms, it was decided to field the ANAM. No. 2 -- Last sentence: consider adding to the last sentence the "necessity of referring to next echelon of care if needed". Recommendations No.
2 -- There was some discussion of specifying what resources would be needed at Level II to field the ANAM within appropriate parameters, recognizing that there is a variety of capabilities amongst the Level IIs, and whether a decision support tool or further interpretation or context may be able to meet the need for specialized expertise. I would welcome the panel's clarification on this point. (The theater CPGs attempted to express this idea by using such terms.) Thanks, Michael S. Jaffee, M.D., Col, USAF, MC, FS, National Director, Defense and Veterans Brain Injury Center

From: William Perry Sent: Friday, October 12, 2007 8:37 PM To: drcox@abpp.org; Jaffee, Michael LTC; dreeves@clinvest.com; rlkane@comcast.net; Morgan CAPT Sammons; Robert COL Labutta; robert.kane@med.va.gov; joseph.bleiberg@medstar.net; French, Louis Dr WRAMC-Wash; kathy.winter@navy.mil; kirby@ou.edu; Friedl@Tatrc.org; Pamela CIV Mishler; Russell CDR Shilling; Crow, Bruce COL Sam Houston; Gahm, Gregory A COL; Friedl, Karl COL USAMRMC Subject: Re: ANAM DRAFT Report

Pam, excellent report. I would recommend the following edit: 7. The ANAM battery may need to incorporate measures of effort and response inhibition in the context of sustained attention. The psychometric properties of the instrument, particularly test-retest reliability, discriminant and convergent validity, and predictive/ecological validity, must be continually reassessed. Thank you, Bill Perry

William Perry, Professor of Psychiatry, Associate Director of Neuropsychiatry and Behavioral Medicine, University of California, San Diego, 9500 Gilman Drive, San Diego, CA 92093-8218. Fed Ex address: 359 Dickinson Street, San Diego, CA
92103. Voice: (619) 543-2827 Fax: (619) 543-3738. President, National Academy of Neuropsychology; Fellow, National Academy of Neuropsychology; Fellow, Society for Personality Assessment

From: Mishler, Pamela, CIV, 10/12/2007 3:08 PM: Enclosed please find the ANAM Draft Report. We would like comments by Friday (October 19) and no later than Monday (October 22). Thank you. Pamela J. Mishler, Red Cell 3, DVA Representative, Force Health Protection and Readiness, Skyline Four, Suite 901, 5113 Leesburg Pike, Falls Church, VA 22041

From: Joseph Bleiberg Date: Fri, May 21, 2010 at 4:17 PM To: Russell, Michael LTC MIL USA MEDCOM, Department of Neurology

Joseph Bleiberg, ABPP-CN, Clinical Associate Professor, Department of Neurology, Georgetown University School of Medicine, jb454@georgetown.edu (Dictated with Dragon -- please ignore typos) This message and any attachments may contain confidential or privileged information and are only for the use of the intended recipient of this message. If you are not the intended recipient, please notify the sender by return email, and delete or destroy this and all copies of this message and all attachments. Any unauthorized disclosure, use, distribution, or reproduction of this message or any attachments is prohibited and may be unlawful.

Original article, Scand J Work Environ Health: Prospective assessment of neuropsychological functioning and mood in US Army National Guard personnel deployed as peacekeepers, by Susan Proctor, Kristin Heaton, Dos Santos, Erik Rosenman, Timothy Heeren. Proctor SP, Heaton KJ, Dos Santos KD,
Rosenman ES, Heeren T. Prospective assessment of neuropsychological functioning and mood in US Army National Guard personnel deployed as peacekeepers. Scand J Work Environ Health 2009;35(5). Objectives: The study examined the impact of peacekeeping deployment on neuropsychological functioning and mood among Army National Guard personnel. We hypothesized that deployment on a peacekeeping mission, compared to non-deployment, would result in reduced proficiencies in neuropsychological performance and negative mood changes, and that such changes would relate to working in a high-strain job (high demands/low control), in accordance with the job strain model. Methods: Male soldier participants were examined prior to and following the Bosnia operational rotation; 52 non-deployed soldiers were assessed twice over a comparable period. Results: Unit-level multivariate analyses found that deployed soldiers, compared to their non-deployed counterparts, showed reduced proficiency in tasks of motor speed [Finger Tapping coefficient (B) -3.84, 95% confidence interval (95% CI) -5.55 to ...; dominant and non-dominant hands] and sustained attention, along with decreased vigor (B -2.33). Deployed soldiers also showed improved proficiency in a working-memory task, with less depression. Work stress levels increased over time in both groups, but the deployment effects remained significant. Conclusions: The shifts in performance associated with peacekeeping deployment (slowed processing and motor speed and reported decreased vigor, together with improved proficiency in a working-memory task) may reflect an adaptive response to mission occupational stressors. This pattern does not appear to be explained by working in a high-strain job; further study is required to examine whether these results are transient or persistent. Key terms: ...

In a recent prospective study of US soldiers deployed as part of Operation Iraqi Freedom, war-zone deployment was associated with reduced performance within domains of emotion, memory, and mood, along with improvements in reaction time, a pattern suggestive of a biologic response to traumatic stress (1).
However, deployment as a peacekeeper presents a different stressor profile. The job strain model distinguishes four job types: high-strain (high demands/low control), low-strain (low demands/high control), active (high demands/high control), and passive (low demands/low control). The strain hypothesis of the model predicts that working in a high-strain job presents the highest risk for adverse physical and mental health outcomes. Work stress characterized by higher demands (ie, overtime) and diminished control (ie, assembly-line work) in a civilian industrial environment has been associated with less proficient performance within the domains of attention and executive function, and with poorer current mood (19). It has been recognized that dimensions of job strain impact general health among military cohorts (20-25). But, to our knowledge, there is limited understanding of the influence that occupational stressors present during deployment operations, and more specifically during peacekeeping missions (eg, boredom, work overload, or job ambiguity), may have on neuropsychological functioning and mood. Knowledge about whether and how job stress can impact performances in a military work environment provides an important step towards better understanding broader post-deployment health and readiness, and identifies an additional focal point for training and protective strategies. The aim of this prospective cohort study was to assess the impact of peacekeeping deployment on neuropsychological functioning and mood in Army National Guard personnel.
Based on the conceptualization that neuropsychological changes reflect CNS performance responses when confronted with occupational stress, we hypothesized that deployed personnel (hereafter "deployers") would perform more poorly than their non-deployed counterparts (hereafter "non-deployers"), particularly within domains involving attention and cognitive processing, and would report more negative mood. Additionally, we hypothesized that, compared to non-deployment, deployment would result in increased work stress (higher demands together with reduced job control), and that working in a high-strain job (high demands, low control) would account for the a priori hypothesized deployment effects. Therefore, the analyses addressed two core questions: (i) Are there changes in neuropsychological functioning and mood associated with serving on a peacekeeping deployment? If yes, (ii) are the observed deployment-related effects associated with work stress (eg, working a high-strain job) among the deployed group relative to the non-deployed, and thus supportive of the strain hypothesis?

The NES3 (Neurobehavioral Evaluation System, version 3) consists of tests that have been validated in occupational and clinical settings (29, 30); the specific tests administered in the study were Finger Tapping; Sequences A and B (response time and number of errors); Digit Symbol (response time); and a Continuous Performance Test involving letters (response time and errors). A description of the NES3 can be found in an earlier publication (33). To assess current mood status, all participants completed the Profile of Mood States (POMS; Educational and Industrial Testing Service, San Diego, CA), a 65-item adjective rating scale.
Participants were presented with a series of mood adjectives and asked to rate the degree to which each adjective described their mood state over the preceding seven days, including the day of testing. Ratings were made on a five-point scale (0=not at all, 1=a little, 2=moderately, 3=quite a bit, 4=extremely). We computed scores by summing the items; higher scores indicated more negative mood feelings for tension, depression, anger, confusion, and fatigue, and more positive feelings of vigor or activity. In addition, we computed a total mood disturbance measure by subtracting the positive factor (vigor) from the sum of the negative factor scores and adding a constant of 100 to eliminate negative values. We assessed ARNG work stress levels at both Time 1 and Time 2 using the Job Content Questionnaire (18, 32-34) and the scoring procedures presented in (33) that permit comparison to earlier US Quality of Employment Surveys. The scale provides a measure of the degree of decision latitude or job control (eg, ability to participate in decision-making processes and learning, and job autonomy; score range 12-48) and of psychological job demands, which quantify the quantity of work and the degree of time and work pressure (5 items; coefficient alpha at Time 1=0.38). Also, we recomputed the job demands scale using those three items we expected would better characterize job demands among deployed personnel ("requires working very fast", "requires working very hard", "have enough time to get job done"); this yielded coefficient alphas of ... and 0.56 at Time 1 and Time 2, respectively. We computed a continuous measure of work stress by dividing job demands by job control (quotient term). For hypothesis testing, we determined a simplified measure of job strain, based on the quadrant term and median split described by Karasek (35) and colleagues (33). Persons who scored above the median for job demands and below the median for job control were defined as working in a high-strain job.
We used the Time 1 median levels for the overall group to perform this categorization scheme at both Time 1 and Time 2. Persons in the other three quadrants of exposure were combined and defined as working in a non-high-strain job. Information on current age, education level, military service characteristics, history of prior head injury, recent number of hours of sleep (mean number per day in the past week), and caffeine use (mean number of drinks per day) was collected. Multivariate models examined the neuropsychological performance and mood state outcomes. To account for baseline levels, we entered the Time 1 value for the Time 2 outcome measure of interest as a covariate in each model, creating a residualized index of longitudinal change (49). By including the Time 1 value in the core model set, we were able to examine the effects on the residual change for each outcome of interest. Significance levels were adjusted via Bonferroni corrections to limit Type I error. We considered eight task outcomes involving objectively measured cognitive and motor abilities and six subjective mood state outcomes, resulting in adjusted significance levels of 0.00625 (0.05/8) for objective performances and 0.0083 (0.05/6) for subjective mood outcomes. Our secondary hypothesis, that higher levels of work stress influence the association between deployment status and task performance and mood outcomes, was examined in several analytic steps. First, to examine whether the observed significant relationships resulting from the primary hypothesis testing were explained by work stress, we entered the work stress measures at Time 2 into the models (comparing results with the 5-item job demands scale).
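The scoring and adjustment rules described above (the POMS total mood disturbance measure, the job-strain quotient, the Karasek-style median split, and the Bonferroni-adjusted alpha levels) can be sketched as follows. This is a minimal illustration with assumed inputs, not the authors' analysis code:

```python
from statistics import median

def total_mood_disturbance(tension, depression, anger, confusion, fatigue, vigor):
    # Sum of the negative factor scores minus vigor, plus a constant of 100
    # to eliminate negative values.
    return tension + depression + anger + confusion + fatigue - vigor + 100

def strain_quotient(demands, control):
    # Continuous work-stress measure: job demands divided by job control.
    return demands / control

def classify_high_strain(subjects):
    # subjects: list of (demands, control) score pairs at Time 1.
    # High strain = above the overall median for demands AND below it for
    # control; the other three quadrants are pooled as non-high-strain.
    med_d = median(d for d, _ in subjects)
    med_c = median(c for _, c in subjects)
    return [d > med_d and c < med_c for d, c in subjects]

# Bonferroni-adjusted significance levels used in the analyses:
alpha_objective = 0.05 / 8   # eight objective task outcomes -> 0.00625
alpha_mood = 0.05 / 6        # six subjective mood outcomes  -> ~0.0083
```

With the group medians computed once at Time 1, the same cut points classify both assessment waves, matching the categorization scheme the authors describe.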
Second, to examine whether working in a high-strain job might modify the deployment effects found in the primary hypothesis testing, we entered into the models the following computed interaction terms: [deployment (yes/no)] x [Time 2 high-strain job (yes/no)] and [activated (yes/no)] x [Time 2 high-strain job (yes/no)]. Using the adjusted difference in scores divided by the unadjusted standard deviation, we computed estimates of the effect sizes for the significant results. We performed sensitivity-type analyses to examine whether additional factors that have been shown to influence neuropsychological performance (PTSD severity, history of head injury, sleep, caffeine use, fatigue level, or job engagement) influenced the deployment effects observed when individually entered into the core statistical models. As alcohol use was not permitted during deployment, we did not examine its potential role as a factor in the sensitivity analyses. Since levels of unit cohesion have been observed to influence the relationship between work strain and symptomatology, we further examined post hoc whether the interaction between deployment status and unit cohesion (as assessed at Time 2) affected the significant mood outcomes observed. Also, we re-ran the analyses for our secondary hypothesis utilizing the 3-item job demands scale. Results: Participating soldiers came from intelligence occupational specialties and from infantry, police, or field artillery units. Table 1 presents the means, standard deviations, and rates of the descriptive characteristics of the personnel groups at Time 1. The mean time interval between the Time 1 and Time 2 assessments was 7.5 months [deployers 7.3 months; non-deployers 7.7 months]. Table 2 presents the means and standard deviations for each neuropsychological performance and mood outcome and the work stress variables, by time and group.
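The effect-size estimate described above (the adjusted group difference scaled by the unadjusted standard deviation, a Cohen's-d-style index) amounts to a one-line computation. The illustrative values below are taken from tables 2 and 3 of the article, not recomputed from the study data:

```python
def effect_size(adjusted_difference, unadjusted_sd):
    # Cohen's-d-style index: adjusted mean difference between groups,
    # scaled by the unadjusted standard deviation of the outcome.
    return adjusted_difference / unadjusted_sd

# e.g., a dominant-hand tapping difference of B = -3.88 taps, with a
# Time 1 SD of about 8.2 taps, yields a moderate effect of about -0.47.
d = effect_size(-3.88, 8.2)
```

This matches the paper's report of moderate effect sizes for the significant objective-task and mood findings.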
Job stress (quotient term) increased over time [Time 1: 1.03 (SD 0.22); Time 2: 1.13], as did job demands [Time 1: 30.9; Time 2: 32.4]; job control did not [Time 1: 30.8; Time 2: ...]. There were no significant differences over time between the deployed and non-deployed groups. The percentage of persons working in a high-strain job increased over time (Time 1: 19.5%; Time 2: ...), with Time 2 rates higher in the non-deployed groups compared to the deployed group.

Table 1. Descriptive characteristics of study participants, Army National Guard (ARNG) soldiers at Time 1. (SD standard deviation; WRAT3 Wide Range Achievement Test, version 3; TOMM Test of Memory Malingering; LOC loss of consciousness; PCS physical component summary; MCS mental component summary; VR-12 Veterans Rand 12-item Health Survey; PTSD post-traumatic stress disorder.) Values are mean (SD) or percentage, deployed / non-deployed (N=52); cells marked "..." are not legible in this copy.
Age (years): 28.4 (8.3) / ...
Percentage of enlisted: 86.6 / 96.2
Education (percentage with education after high school): 76.1 / 57.7
Education (percentage of college or above): 17.9 / 13.5
WRAT3 Reading standard score: 102.9 / 102.2
TOMM score: 48.3 (1.4) / 48.3
Percentage of Caucasian: ... / ...
Percentage of married: 22.4 / 29.4
Percentage of ... years ARNG service: 22.7 / 16.3
Percentage with history of prior head injury with reported LOC: 9.2 / 3.8
Percentage with history of prior overseas deployment: ... / 19.6
Familiarity with computers (percentage very familiar versus moderately, somewhat, or not at all): 29.9 / 38.5
Physical functioning (PCS from VR-12): 54.2 (5.1) / 52.5
Mental functioning (MCS from VR-12): 54.5 (12.5) / 52.1
PTSD symptom severity, summary score: 26.8 (11.0) / 21.8 (10.1)
Percentage of presumptive PTSD: 6.1 / 3.8
Unit cohesion: 51.4 (9.6) / 42.3 (8.0)
Among deployers, the most prevalent stressors reported, aside from separation from family or loved ones, were: (i) lack of opportunities to further education, (ii) long duty days, (iii) an uncertain redeployment date, and (iv) boring or repetitive work. The most frequently reported experiences rated as having a negative impact were: (i) seeing children victimized by war, (ii) seeing physical devastation, (iii) contact with traumatized civilians, (iv) receiving hostile reactions from civilians they were trying to help, (v) having to exercise self-restraint while patrolling, and (vi) working in areas where there were land mines or unexploded ordnance.

Table 2. Neuropsychological performance and mood among deployed and non-deployed groups at Time 1 and Time 2. (SD standard deviation; NES3 Neurobehavioral Evaluation System, version 3; CPT Continuous Performance Test; POMS Profile of Mood States.) Values are mean (SD) for the deployed and non-deployed groups at Time 1; the Time 2 columns (deployed; non-deployed activated, N=33; non-deployed non-activated, N=19) are not reliably legible in this copy. Time 1 deployed / non-deployed (N=52):
Motor speed, NES3 Finger Tapping, dominant hand (number of taps): 62.10 (8.2) / 60.58 (6.0)
Non-dominant hand (number of taps): 54.50 (7.8) / 54.32 (4.5)
Simple attention, NES3 Sequences A, time to complete (seconds): 2.82 (0.35) / 2.78 (0.25)
Executive function/working memory, NES3 Sequences B, number of errors: 0.35 (0.51) / 0.27 (0.45)
NES3 Sequences B, time to complete (seconds): 3.42 (0.37) / 3.37 (0.34)
Visuo-scanning/processing speed, NES3 Digit Symbol, time to complete (seconds): 4.65 (0.15) / 4.54 (0.15)
Sustained attention, NES3 CPT, response time: ... / 5.96 (0.11)
NES3 CPT, number of errors: ... (0.57) / 0.62 (0.62)
POMS tension: 9.04 / 9.30 (6.1)
POMS depression: 7.05 (9.1) / ...
POMS anger: 9.16 (8.3) / 10.26 (6.0)
POMS confusion: 5.55 (6.6) / 6.39 (3.2)
POMS fatigue: 5.18 (5.0) / 5.86 (5.3)
POMS vigor: 20.1 (5.7) / 17.3
POMS total score: 116.84 (32.2) / 123.85 (26.3)
Work stress, job demands: 32.28 (5.2) / 30.42 (3.4)
Job control: 31.76 (4.8) / 28.33 (5.0)
Quotient score (demands/control): 1.04 (0.20) / 1.12 (0.29)
Percentage of high-strain job: 22.4 / 26.3
Note: Higher, more positive scores reflect better performance outcomes where so marked; otherwise, higher, more positive scores reflect poorer outcomes. A constant was added prior to transformation.

Primary hypothesis. We observed deployment effects indicating reduced proficiency on tasks involving motor skills and sustained attention (ie, the number of taps made with the dominant and non-dominant hands on the Finger Tapping Test, and the log of the mean response time on the Continuous Performance Test) (table 3). The deployed group was also more proficient on a task involving working memory (the log of the time to complete Sequences B) compared to the non-activated group. Deployment to Bosnia was not associated with changes in overall mood [total score: -3.39, 95% confidence interval (95% CI) ...]; however, when examining the distinct mood subscales, deployers reported lower levels of vigor but also less depression symptomatology compared to non-deployers. Moderate effect sizes were found for the objective tasks and for depression and vigor.

Influence of work stress: secondary hypothesis. When we entered "working in a high-strain job in the period preceding Time 2" into the models, the deployment
effects remained significant for the four performance and two mood outcomes in the primary hypothesis testing described above.

Table 3. Modeling the effects of deployment status on neuropsychological functioning and mood. (B unstandardized parameter estimate for the deployed or activated group variables, representing the absolute difference in adjusted mean outcome scores compared to the non-deployed, non-activated comparison group; 95% CI 95% confidence interval; NES3 Neurobehavioral Evaluation System, version 3; CPT Continuous Performance Test; POMS Profile of Mood States.) Many cells are not reliably legible in this copy; the significant effects were:
NES3 Finger Tapping, dominant hand (mean number of taps): deployed B=-3.88, P=0.002, less proficient; activated B=-3.61, P=0.028, less proficient.
NES3 Finger Tapping, non-dominant hand (mean number of taps): deployed B=-3.84 (95% CI -5.55 to ...), P<0.001, less proficient; activated (95% CI -4.53 to -0.82), P=0.005, less proficient.
NES3 Sequences B, time to complete: deployed B=-0.098 (95% CI -0.136 to -0.060), more proficient.
NES3 CPT, response time: deployed B=0.031, P=0.005, less proficient.
POMS depression: deployed B=-4.87 (95% CI -7.93 to -1.80), P=0.002, less depression.
POMS vigor: deployed (95% CI -3.53 to -1.78), P<0.001, decreased vigor.
Effects for Sequences A, Sequences B errors, Digit Symbol, CPT errors, and the POMS tension, anger, confusion, and fatigue subscales did not reach the adjusted significance levels.
Model: independent variables were deployment status (y/n) and activated status (y/n); covariates were age, education level (any post-high school education versus none), and the Time 1 measure of the outcome. Comparison group: non-deployed, non-activated (N=19); non-deployed, activated (N=33). Higher, more positive coefficients reflect better performance outcomes or more positive mood; otherwise, higher, more positive coefficients reflect poorer outcomes. Response-time outcomes were log-transformed. For mood outcomes, Time 2 unit cohesion was included.
Working in a high-strain job was significantly and independently related to reduced proficiency with the dominant hand on the Finger Tapping task (B=-3.67). There was no evidence of a significant interaction effect between deployment status and working in a high-strain job.

Influence of other factors on primary outcomes: sensitivity analyses. The pattern of deployment effects revealed with the core model primary hypothesis testing was not altered when we added, individually and post hoc, the following to the primary models: rank, history of head injury, hours of sleep, caffeine use, PTSD severity, fatigue, or job engagement. There was no evidence of a significant interaction effect between deployment status and unit cohesion for either of the significant mood findings described above. Also, the observed results were not altered when we re-ran the analyses for the secondary hypothesis utilizing job demands computed with 3 items.

Discussion. To our knowledge, this is the first study to examine neuropsychological functioning prospectively over a peacekeeping mission. It represents one of the few deployment health outcome studies to include pre-deployment examinations, objective performance measures, a comparable non-deployed group, and timely post-deployment assessments. The findings indicate that deployment to Bosnia as a peacekeeper is associated with, at least in the short term, shifts in objective cognitive and motor performances, specifically characterized by reduced proficiency in tasks of motor speed and sustained attention, and with reduced vigor levels (primary hypothesis).
Findings also associated with deployment include greater proficiency in a task involving working memory as well as reduced depression symptomatology. However, the deployment effects observed were not associated differentially with working in a high-strain job among the deployed (secondary hypothesis). Interpretation: Reduced proficiency of neuropsychological functioning associated with the Bosnia deployment (ie, fewer taps on the Finger Tapping Test and longer response times on the Continuous Performance Test) suggests a performance pattern explained in part by a reduction in the rate of cognitive processing (15, 16, 50). The findings cannot be attributed to baseline functional levels, as we controlled for pre-deployment functioning. Also, the pattern of findings does not appear to be related to other work- or lifestyle-related dimensions that we were able to examine. For example, the results were not impacted when we took into account aspects of occupational and traumatic stress, unit cohesion, job engagement, fatigue, sleep, or recent caffeine use. As anticipated, the Operation Joint Guard deployment rotation involved minimal traumatic or combat experiences. The types of potentially traumatic events and negative experiences (eg, uncertain redeployment date, long duty days, boring and repetitive work, and concerns about mines and unexploded ordnance) were similar to, and reported at similar or lower prevalence rates than, those in prior peacekeeping missions in Bosnia and Kosovo (27, 48, 51). Previous peacekeeping missions to the Sinai and Lebanon involving the monitoring of a ceasefire (similar to Bosnia) have been identified as environments where soldiers are prone to boredom, as the nature of the work is characterized as tedious, with brief and rare moments of peak alertness (2, 6, 52). In experimental studies,
reduced vigilance, as reflected by slowed response times on tasks involving sustained attention, has been observed under conditions of prolonged work on the same repetitive task in simulated air traffic control tasks (53) and sentry work (54), but the cognitive model of boredom has not been fully characterized to date. Within the current study design, we were unable to directly examine whether the observed neuropsychological performance shift was related in a dose-effect manner to a specific experience or scenario involving repetitive work inherent in the setting (such as sentry or routine patrol duties). However, endorsement of "boring or repetitive work" and "long duty days" were the more prevalent negative deployment job stressors described by the deployed group, post-deployment. Findings suggest that, compared to non-activated status, peacekeeping deployment is associated with more proficient performance in a task involving working memory (that is, response time on the Sequences B task). In the context of the above pattern of results, improved performance in this task, which requires more complex attention, may reflect the arousal and effort needed for task completion (50, 58, 59), skills which are required and emphasized during this type of deployment scenario. We note the finding of reduced depression symptomatology at Time 2; there was little actual change in the mean level of depression reported over time among the deployers, but, by comparison, depression levels increased from Time 1 to Time 2 among non-deployers (see table 2). Other prospective deployment studies have documented a home-coming effect, characterized by improved mood and other symptomatology when assessed most proximal to re-deployment (60). Although over 90% of the deployed group was assessed within 7-8 days of their return from Bosnia, no widespread evidence of a significant home-coming effect on mood and symptomatology patterns was observed in this study.
Subjective reports of boredom suggest the deployed group may have encountered aspects of both an under-load ("monotony") and stress ("overload") situation, which in turn is reflected by the observed neuropsychological performance pattern and reduced vigor. Indeed, in sentry studies, tasks involving prolonged periods of repetitive, sustained attention with brief episodes requiring peak alertness are viewed as extremely fatiguing and stressful (54). It is important to note that while we did not find support for the hypothesis that working in a high-strain job "explains" the deployment-related neuropsychological performances and mood, working in a high-strain job did independently impact performance and mood. Study strengths and limitations: It is noteworthy that the Bosnia deployment mission occurred in the immediate time period following the events of September 11, 2001. As such, the non-deployed group in this study experienced higher levels of operational tempo or pace than anticipated when the study was designed and initiated. In this regard, the non-deployed group (including both the activated and non-activated subsamples) was perhaps a better comparison group relative to the deployed group in terms of operational tempo levels than might have been the case if the comparison group had been of the more traditional ARNG model, with training only occurring one weekend per month. However, the a priori assumption that the deployed group would encounter greater strain over the deployment mission in comparison to the non-deployed groups was not observed. In addition, it is possible the 5-item job demands scale does not fully provide an assessment of military job demands, limiting the ability to examine the hypothesis. Nonetheless, the study includes a number of important methodological strengths enhancing our knowledge and ability to examine performance patterns related to a peacekeeping mission.
Specifically, we conducted the prospective assessment of a military cohort both before and after deployment, together with a non-deployed comparison group, and included objective performance measures along with subjective measures of psychological health.

Generalizability

The generalizability of the results to other military population groups comprised of active duty, female, or non-US peacekeepers may be limited. The neuropsychological and mood pattern observed among Bosnia peacekeepers does contrast with that found in a recent prospective study of US Army soldiers deployed as part of Operation Iraqi Freedom, where deployment was associated with reduced performance within functional domains involving sustained attention, learning, and memory, but better reaction time, suggestive of a biologic response to traumatic stress. Together, the findings from these two studies provide evidence for the intuitive observation that deployment missions differ in terms of the types or severity of stressors, which in turn may differentially impact post-deployment health, mood, and performance.

Concluding remarks

The results of this study provide evidence suggestive of changes in neuropsychological performance associated with a peacekeeping deployment, that is, the slowing of cognitive processing and reduced motor speed coupled with improvement in a task involving more complex attention and working memory. However, the observed deployment effects are not associated with high job strain over deployment. What is not known at this point is whether deployment-related neuropsychological differences reflect transient or more permanent changes in functioning and mood and, by extension, occupational performance. The group differences observed do not appear to approach clinical thresholds indicative of psychological disease states (30). However, even small group differences in the ability to sustain attention and slowed motor speed may result in risk for problems in daily life.
Awareness of potential patterns of neuropsychological functioning following deployment provides an opportunity to tailor training, protective, and intervention strategies to be more effective in mitigating risks.

Acknowledgements

Funding for this project was provided by the US Army Medical Research and Materiel Command in a grant awarded to Boston University. We thank the soldiers, their sponsoring units, and the state-level Army National Guard Command for their time and support. The investigators have adhered to the policies for protection of human subjects as prescribed in Army regulations, and the research was conducted in adherence with the provisions of 32 CFR Part 219 (HSRRB Log No. ...; Boston University #2001-164). Human subjects participated in the study after giving their free and informed voluntary consent. Job strain and deployment aspects of the study were presented in poster form at the 2003 and 2008 Work, Stress and Health conferences in Toronto, Canada, and Washington, DC, USA, respectively. Please note that the opinions or assertions herein are the private views of the authors and are not to be construed as official or as reflecting the views of the US Army or the Department of Defense.

References

Vasterling JJ, Proctor SP, Amoroso P, Kane R, Heeren T, White RF. Neuropsychological outcomes of Army personnel following deployment to the Iraq war. JAMA. 2006;296:519-29.
Bartone PT, Adler AB, Vaitkus MA. Dimensions of psychological stress in peacekeeping operations. Mil Med. 1998;163:587-93.
Harris JJ, Segal DR. Observations from the Sinai: the boredom factor. Armed Forces Soc. 1985;11:235-48.
Hotopf M, David AS, Hull L, Ismail K, Unwin C, Wessely S. The health effects of peacekeeping in the UK Armed Forces: a cross-sectional study comparison with nondeployed military personnel. Mil Med. 2003.
... Kelly V, et al. Serving in Bosnia made me appreciate living in Bristol: experiences ... and needs of ... the United Kingdom. Mil Med.
Thomas JL, Castro CA. Organizational behavior and the US peacekeeper. In: Britt TW, Adler AB, editors. The psychology of the peacekeeper: lessons from the field. Westport (CT): Praeger Press; 2003. p. 127-46.
White RF, Proctor SP, Heeren T, et al. Neuropsychological function in Gulf War veterans: relationships to self-reported toxicant exposures. Am J Ind Med. 2001.
Proctor SP, White RF, et al. Neuropsychological
functioning in Danish Gulf War veterans. J ... Assess.
... impact of the 1991 Gulf War on ... Philos Trans R Soc Lond B Biol Sci. 2006;361:593-604.
Bartone PT. The nature of stressors ... In: Britt TW, Adler AB, editors. The psychology of the peacekeeper: lessons from the field. Westport (CT): Praeger Press; 2003. p. 149-68.
Adler AB, Castro CA. ...
The Iowa Persian Gulf Study Group. Self-reported illness and health status among Gulf War veterans: a population-based study. JAMA. 1997;277:238-45.
Steele L. Prevalence and patterns of Gulf War illness in Kansas veterans: association of symptoms with characteristics of person, place, and time of military service. Am J Epidemiol. 2000;152:992-1002.
... New York (NY): McGraw-Hill; 1970.
... as a ... of stress ... In: Boff KR, Kaufman L, Thomas JP, editors. Handbook of perception and human performance. New York (NY): John Wiley & Sons; 1986.
Hockey GRJ, Hamilton P. The cognitive patterning of stress states. In: Hockey GRJ, editor. Stress and fatigue in human performance. New York (NY): John Wiley & Sons; 1983.
Karasek RA. Job demands, job decision latitude, and mental strain: implications for job redesign. Adm Sci Q. 1979;24:285-308.
Karasek R, Brisson C, Kawakami N, Houtman I, Bongers P, Amick B. The Job Content Questionnaire (JCQ): an instrument for internationally comparative assessments of psychosocial job characteristics. J Occup Health Psychol. 1998;3:322-55.
Proctor SP, White RF, Robins TG, Echeverria D, Rocskay AZ. Effect of overtime work on cognitive function in automotive workers. Scand J Work Environ Health. 1996;22:124-32.
Britt TW, Adler AB. Stress and health during medical humanitarian assistance missions. Mil Med. 1999.
Griffith J, Vaitkus M. Relating cohesion to stress, strain, disintegration, and performance: an organizing framework. Mil Psychol. 1999;11:27-55.
Bliese PD, Castro CA. Role clarity, work overload, and organizational support. Work Stress. 2000.
Bliese PD, Castro CA. The Soldier Adaptation Model (SAM): applications to peacekeeping research. In: Britt TW, Adler AB, editors. The psychology of the peacekeeper: lessons from the field. Westport (CT): Praeger Press; 2003.
Ippolito J, Adler AB, Thomas JL, Litz BT, Hölzl R. Extending and applying the demand-control model: the role of soldier's coping on a peacekeeping deployment. J Occup Health Psychol. 2005;10:452-64.
... RR, Mohr CD, ... A temporal investigation of the direct, interactive, and reverse relations between demand and control and affective strain. Work Stress. 2008;22:81-95.
Ritzer DR, Valentine JN, Gifford RK.
Operation Joint Guard, Bosnia: stress and adaptive coping of soldiers. Washington (DC): Walter Reed Army Institute of Research; 1998. Technical report, 19990318 045.
Ritzer DR, et al. Human dimensions research during Operation Joint Guard, Bosnia. US Army Med Dep J. 1999:5-16.
Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale (NJ): Lawrence Erlbaum Associates; 1988.
Proctor SP, Letz R, White RF. Validity of a ... test in ...
White RF, James KE, et al. ... cognitive impairment ... tasks. Mil ...
Karasek RA. Job Content Questionnaire and user's guide. 1985.
... Pickering T, et al. ... ambulatory blood pressure and ... job strain. Scand J Work Environ Health.
Schnall PL, Landsbergis PA, Baker D. Job strain and cardiovascular disease. Annu Rev Public Health. 1994;15:381-411.
Bosma H, Peter R, Siegrist J, Marmot M. Two alternative job stress models and the risk of coronary heart disease. Am J Public Health. 1998;88:68-74.
... Engel L, Kichler J, Black F. ... memory trial 1 as ... Clin ...
... Wilson N, Skinner K, Lee A, Rogers W, Ren XS, et al. Health status and outcomes of veterans: 1998 National Survey of ... Patients. Center for Quality ... p. 140-51.
Weathers FW, Litz BT, Herman DS, Huska JA, Keane TM. The PTSD Checklist: reliability, validity, and diagnostic utility. In: Annual Meeting of the International Society for Traumatic Stress Studies; October 1993; San Antonio (TX).
Ruggiero KJ, Del Ben K, Scotti JR, Rabalais AE. Psychometric properties of the PTSD Checklist - Civilian Version. J Trauma Stress. 2003.
Hoge CW, Castro CA, Messer SC, McGurk D, Cotting DI, Koffman RL. Combat duty in Iraq and Afghanistan, mental health problems, and barriers to care. N Engl J Med. 2004;351:13-22.
Beurskens AJ, Bültmann U, Kant I, Vercoulen JH, Bleijenberg G, Swaen GM. Fatigue among working people: validity of a questionnaire measure. Occup Environ Med. 2000;57:353-7.
Cronbach LJ, Furby L. How we should measure "change" - or should we? Psychol Bull. 1970.
Lezak MD, Howieson DB, Loring DW, editors. Neuropsychological assessment. 4th ed. New York (NY): Oxford University Press; p. 349-51.
Castro CA, Huffman AH, Adler AB. ... dimensions and readiness in US Army Forces deployed to Bosnia. Rev Int Serv Sante Forces Armees.
... Otto U. ... of Swedish UN soldiers in South Lebanon in 1988. Stress Med.
Thackray RI, Bailey JP, Touchstone RM. Physiological, subjective, and performance correlates of reported boredom and monotony while performing a simulated radar control task. In: Mackie RR, editor. Vigilance: theory, operational performance, and physiological correlates. New York (NY): Plenum Press; 1977.
Johnson RF, ... DJ. Target ... and mood during three ... of simulated sentry duty. Proc ... Soc.

Contents

Responsible Use of ANAM4™ TBI
Preface
Key Features and Advantages of ANAM4™
Credits
Additional Resources
  Software Installation and Licensing
  Support and Services
Referencing ANAM4™
  Bibliographic citation for ANAM4™
  Bibliographic citation for this manual
Typographical Conventions
1 Introduction
  1.1 Brief History of ANAM4™
  1.2 About the ANAM4™ TBI (and TBI-MIL) Battery
2 Installing ANAM4™ TBI
  2.1 Hardware and System Requirements
  2.2 Installing ANAM4™ TBI
3 ANAM4™ TBI Test Administration
  3.1 General Test Administration Guidelines
    3.1.1 Test Administrators
    3.1.2 Testing Environment
    3.1.3 Testing Procedures
  3.2 Running ANAM4™ TBI
    3.2.1 Starting ANAM4™ TBI
      Starting ANAM4™ TBI via the Launch Pad
      Starting ANAM4™ TBI Directly
    3.2.2 Battery Selection
      Selecting a Test Battery
      Changing the Primary and Individual Data Directories
      Confirming Date/Time and Session Number
    3.2.3 Test Settings
    3.2.4 Exiting a Battery
    3.2.5 Exiting a Test
    3.2.6 Restart/Recovery Options
      Restarting a Previously Cancelled Battery
    3.2.7 Administering a Retest
      Selecting a Specific Test or Subset of Tests
  3.3 Shortcuts
4 ANAM4™ TBI Tests
  4.1 ANAM4™ TBI Test Descriptions
    4.1.1 Demographics Module
    4.1.2 TBI Questionnaire
    4.1.3 Sleepiness Scale
    4.1.4 Mood Scale II - Revised
    4.1.5 Simple Reaction Time
    4.1.6 Code Substitution - Learning
    4.1.7 Procedural Reaction Time
    4.1.8 Matching to Sample
    4.1.9 Mathematical Processing
    4.1.10 Code Substitution - Delayed
    4.1.11 Simple Reaction Time (Repeated)
5 ANAM4™ TBI Test Output
  5.1 ANAM4™ Output
    5.1.1 Filename Format
  5.2 Data Storage
  5.3 Compiling ... Data
6 The ANAM4™ Performance Report (APR™)
  6.1 Installing and Running
  6.2 Creating a Report
    6.2.1 Selecting a Data Folder
    6.2.2 Selecting a User ID
    6.2.3 Selecting a Session
    6.2.4 Selecting Archive Sessions
    6.2.5 Archive Settings
      Suppressing Archive Plots
      Setting Default Archive Sessions
  6.3 Report Options
    6.3.1 Sample Report
    6.3.2 Selecting a Comparison Group Graph Style
    6.3.3 Archive Plots
  6.4 Saving and Printing a Report
    6.4.1 Saving to PDF
    6.4.2 Printing a Report
    6.4.3 ... a Report
  6.5 Report Details
    6.5.1 User Demographics
    6.5.2 Summary Indicator
    6.5.3 History and Provider Observations
    6.5.4 Comparison Group
    6.5.5 Performance at a Glance: Comparisons to Baseline and Reference Group Scores
    6.5.6 Performance Detail
    6.5.7 Archive Plots
    6.5.8 Reference Groups
    6.5.9 Disclaimer
7 Understanding Test Results
  7.1 General Considerations
  7.2 TBI Test Administration for Test Interpretation
    7.2.1 Fundamentals of Administering the ANAM4™ TBI Battery
    7.2.2 What Does the ANAM4™ TBI Test Battery Provide?
    7.2.3 What Is the Basic ANAM4™ TBI Testing Paradigm?
    7.2.4 When Should the Test Battery Be Administered?
    7.2.5 How Often Should ANAM4™ TBI Testing Be Administered?
  7.3 Test Scores
    7.3.1 Levels of Interpretation
    7.3.2 Comparisons Across Levels of Interpretation
    7.3.3 Do Fairly Fast Reaction Times and High Accuracy Always Suggest the Absence of Brain Injury?
8 Psychometric Properties
  8.1 Selection of Variables
  8.2 Reliability and Stability of Test Scores
  8.3 Sensitivity and Specificity
    8.3.1 Mild Traumatic Brain Injury
  8.4 ...
  8.5 Reference Groups
    8.5.1 Military Reference Group
    8.5.2 College Reference Group
9 ANAM4™ Data Extraction and Presentation Tool (ADEPT™)
  9.1 Installing and Running
    9.1.1 Creating a Summary View
      Selecting a Data Folder
      Selecting User IDs, Tests, and Sessions
        Select User IDs
        Select ANAM4™ Tests
        Select Sessions
      Selecting Runs within Sessions and Variables
        Select Runs
        Select Variables
    9.1.2 Saving a View
    9.1.3 Exporting Data
      Exporting an Entire Summary View
      Exporting a Summary View by Test
  9.2 ADEPT™ Options
    9.2.1 General Options
      Standard Deviation Highlighting
      Standard Deviation Multiplier
      Minimum/Maximum Value Highlighting
      Show Summary Data
      Use Abbreviations
      Auto Size Columns
      Use Variables List
    9.2.2 Variable Abbreviations
    9.2.3 Variables List
    9.2.4 View and Export Format
  9.3 Summary Data Files
References
Appendices
  A1. List of ANAM4™ TBI Tests, Module Names, and ... (in Order Presented)
  A2. Datafile Formats
    Code Substitution - Standard or Delayed Version (.cds or .cdd)
    Demographics (.sub)
    Matching to Sample (.m2s)
    Math Processing (.mth)
    Procedural Reaction Time (.pro)
    Simple Reaction Time
    Sleepiness Scale (.slp)
    TBI Questionnaire (.tbq)
  A3. Military Reference Data
    A3.1 Military Reference Data by Test
    A3.2 Military Reference Data by Gender and Test
    A3.3 Military Reference Data by Age, Gender, and Test
  A4. College Reference Data
    A4.1 College Reference Data by Test
    A4.2 College Reference Data by Gender and Test

Responsible Use of ANAM4™ TBI

ANAM4™ TBI is the Automated Neuropsychological Assessment Metrics (Version 4) Traumatic Brain Injury Battery.
As such, ANAM4™ TBI is classified as a psychological test and is regulated and distributed in accordance with professional standards such as those articulated by the Ethical Principles of Psychologists and Code of Conduct of the American Psychological Association (APA, 2002).

Who is qualified to administer ANAM4™ TBI?

ANAM4™ TBI can be administered by qualified professionals who have training in testing principles and test administration procedures. This might include primary care providers trained in test administration to assist in initial triage, assessment, or clinical guideline decision making, but this level of assessment activity would not include clinical interpretation.

Who is qualified to interpret scores from ANAM4™ TBI?

The ANAM4™ TBI battery is a psychological test, and clinical interpretation of the test should be conducted by qualified medical professionals who have training in testing principles, test administration procedures, and clinical test interpretation, and who possess some degree of knowledge and experience in head injury assessment and treatment. Further, ANAM4™ TBI should not be used alone to diagnose medical or mental diseases. It is not meant to replace more comprehensive tests or assessment procedures. It is important to use the test results in conjunction with other tests and information about the test taker before making any diagnostic or prescriptive decisions.

Disclaimer

The use of ANAM4™ does not constitute the practice of medicine or the provision of professional health care advice. The information provided by ANAM4™ is of a general nature and does not represent medical advice, a diagnosis, or a prescription for treatment. Only qualified medical professionals should interpret test results. C-SHOP and the University of Oklahoma are not responsible for any decisions made based on ANAM4™ test results. A qualified medical professional has the sole responsibility for establishing a diagnosis and suggesting appropriate treatment.
Welcome

Welcome to the Automated Neuropsychological Assessment Metrics version 4 (ANAM4™). ANAM4™ is the latest in an evolutionary line of computer test batteries sponsored by the Department of Defense originating in the late 1970s. This long and distinguished history provides ANAM4™ with a rich foundation in classical laboratory-based human performance assessment technology as well as modern clinical assessment methods and techniques. The result is a computer test battery with remarkable versatility and flexibility to meet a wide range of assessment needs. The ANAM4™ test system consists of a library of computer-based tests designed for a broad spectrum of clinical and research applications. This library was constructed to meet the need for precise measurement of cognitive processing efficiency in a variety of assessment contexts that include readiness to perform, neurotoxicology, pharmacology, and human factors research.

Key Features and Advantages of ANAM4™

ANAM4™ is automated, has simple instructions, and is largely self-administered, which allows efficient, rapid, and broad domain/ability testing. ANAM4™ tests are relatively circumscribed tests of specific cognitive or sensory-motor domains, thus providing more distinct measures of those domains with less confounded measurement as compared to many other computer-based testing systems. ANAM4™ can be delivered with common contemporary hardware and software operating systems and pointing devices (mouse) across many platforms (e.g., desktop, laptop, PDA), without the need for special and often expensive response pads, light pens, touch screens, etc. The response modality used with ANAM4™ allows simple and minimally demanding motor responses (primarily mouse button clicking). The ANAM4™ test stimuli can be varied session-to-session to provide an almost infinite number of alternate forms for repeated-measures testing sessions.
ANAM4™ tests are designed to require the very minimum amount of learning for test performance mastery, thereby leading to baseline data with minimized concomitant learning effects. ANAM4™ timing accuracy and large-capacity data collection provide measurement accuracy and reliability that is greatly improved over most face-to-face and pencil-and-paper tests. ANAM4™ tests provide a high level of precision measurement compared to many traditional tests, thereby providing greater measurement sensitivity to detect neurocognitive or performance changes/deficits. ANAM4™ tests meet common professional standards of test construction with scientifically verified psychometric properties. ANAM4™ has a long history of direct application in research and clinical practice in all branches of the United States armed services. ANAM4™ tests are being used by other governmental agencies (e.g., FAA, NASA), as well as major government contractors, pharmaceutical companies, and many university researchers, thereby adding to the versatility and comparability of ANAM4™ test results.

Credits

Many individuals have contributed to computer test development efforts that have directly culminated in the ANAM4™ system. Noteworthy among them is Dennis Reeves (then LCDR USN, now retired), who creatively managed the transition from early generation computer test batteries through one of the first PC-based systems. The original ANAM (version 1.0) development team included: Fred Hegge, Dennis Reeves, Kathy Winter, Kathy Raynsford, Sam LaCour, Gary Kay, and Tim Elsmore. This effort successfully migrated many tests of core human performance from varied early-generation computer hardware platforms to the IBM PC/Windows-based platform while also implementing some of the first international design specifications (the NATO AGARD Standardized Tests for Research with Environmental Stressors, NATO AGARD Working Group 12) for computer-based tests. The U.S.
Army Medical Research and Materiel Command, Fort Detrick, MD, has been the primary organization that has supported the development of ANAM4™ and many of its predecessor test batteries. Continuing Dr. Fred Hegge's visionary support of applying computer-based testing technology to broad-ranging military applications, many others at USAMRMC have been instrumental in continuing the legacy of ANAM4™, including COL Karl Friedl, Dr. Stephen Grate, and COL Brian Lukey. The original software development efforts for ANAM were managed by Kathy Winter of the U.S. Navy Space and Naval Warfare Systems Command (SPAWAR), NAS Pensacola, Florida. Key past and present members of the programming team (in alphabetical order) include Kerry Culligan, Michael Flanagan, Timothy Howard, Samuel LaCour, Phillip Muldoon, and Kathy Raynsford. Over the years, many other individuals have contributed to the development of ANAM. Notable among them are Dr. Robert Kane, Dr. Joseph Bleiberg, Dr. Alan Lewandowski, and Dr. Jack Spector. Assuredly, there have been many others who have contributed to the evolution and development of ANAM, and their contributions are gratefully appreciated. The Center for the Study of Human Operator Performance (C-SHOP) at the University of Oklahoma is now responsible for the future management, development, enhancement, and distribution of ANAM4™, as well as future ancillary products. C-SHOP is a multi-disciplinary research center dedicated to the development and application of computerized test batteries through the simultaneous coordination of high-level research, test development, quality assurance assessment, and clearinghouse/coordination activities related to computer-based testing technologies.

Additional Resources

Software Installation and Licensing

For complete software installation instructions, see Section 2.2, Installing ANAM4™ TBI. For information regarding licensing or End User License Agreements for ANAM4™ software, please contact the Center for the Study of Human Operator Performance (C-SHOP).
Contact information is listed below.

Support and Services

C-SHOP is committed to helping you get the most out of your ANAM4™ software. Visit the C-SHOP website, where you can download software documentation and learn more about our research center. If you experience problems with your software installation or operation, please contact C-SHOP. Our offices are open Monday through Friday, 8 a.m. to 5 p.m. US Central Standard Time. You can contact C-SHOP Technical Support via phone or email:

Email: ...
Phone: (405) 325-?444 (United States)

Referencing ANAM4™

Bibliographic citation for ANAM4™

Automated Neuropsychological Assessment Metrics (Version 4) [Computer software]. (2007). Norman, OK: Center for the Study of Human Operator Performance, University of Oklahoma.

Bibliographic citation for this manual

C-SHOP (2007). ANAM4™ TBI: User Manual. Center for the Study of Human Operator Performance, University of Oklahoma, Norman, OK.

Typographical Conventions

This manual uses the following typographical conventions to aid the user in understanding references to specific program objects and other types of information.

Formatting Convention: Type of Information
Bold: Items you must select, such as menu options, command buttons, or items in a list.
Emphasis: Used to emphasize the importance of a point or to reference screen names.
CAPITALS: Names of keys on the keyboard. Example: SHIFT, CTRL, or ALT.
KEY+KEY: Key combinations for which the user must press and hold down one key and then press another, for example, CTRL+V.

Center for the Study of Human Operator Performance
University of Oklahoma
3200 Marshall Ave, Suite 260
Norman, OK 73072 USA
www.c-shop.ou.edu

Copyright 2008 Center for the Study of Human Operator Performance. All rights reserved. ANAM4, the ANAM4 logo, and other words and logos identified as trademarks and/or service marks are marks of C-SHOP.
All other product or company names, unless noted otherwise, are the trademarks and service marks of their respective holders. ANAM products are protected under numerous U.S. and foreign patents and pending applications, mask work rights, and copyrights.

Introduction

1.1 Brief History of ANAM4™

ANAM4™ is the culmination of a long line of computer-based test systems developed by the Department of Defense and evolved principally from the Unified Tri-Service Cognitive Performance Assessment Battery (UTC-PAB; Englund, Reeves, et al., 1987). The UTC-PAB specifications developed from a set of military batteries including: the U.S. Army Walter Reed Performance Assessment Battery (WRPAB; Thorne, Genser, et al., 1985), the Air Force Criterion Task Set (CTS; Shingledecker, 1984), the U.S. Navy Performance Evaluation Tests for Environmental Research (PETER; Bittner, Carter, et al., 1986), and the NATO Advisory Group for Aerospace Research and Development Standardized Tests for Research with Environmental Stressors (AGARD-STRES; Reeves, Winter, et al., 1991). WRPAB's original purpose was performance assessment in continuous performance paradigms and determination of the efficacy of performance degradation countermeasures. Development of the PETER battery began in 1977 with the objective of identifying traditional cognitive measures suitable for test/retest administration. In a subsequent effort, Navy personnel developed the Naval Medical Research Institute Performance Assessment Battery (NMRI-PAB) in an effort to standardize assessment of operational environment effects on military performance. The Harry G. Armstrong Aerospace Medical Research Laboratory developed the CTS to assess mental workload.

In the mid-1980s, available tests were evaluated by the Tri-Service Joint Working Group on Drug Dependent Degradation in Military Performance with the intent of sponsoring development of neuropsychological performance tests, with a major goal of assessing the effect of operational pharmaceuticals on military performance. Available procedures were found to lack standardization and appropriateness for the repeated-measures designs necessary for evaluation within operational constraints. It was particularly demonstrated that a number of the most sensitive neuropsychological measures were not suitable due to practice effects or were not designed for extended baseline studies. This working group transitioned into the Office of Military Performance Assessment Technology (OMPAT). OMPAT led the standardization of computerized operational performance measures that subsequently developed into the UTC-PAB and, later, ANAM. Prior research studies supported selected modules in the UTC-PAB library because of their construct validity, reliability, and sensitivity. The UTC-PAB proved to be a flexible system and formed the basis for other test batteries, including the NATO AGARD-STRES battery.

ANAM began with the technology first developed by OMPAT for the AGARD-STRES battery and the UTC-PAB and was improved by program innovations that permitted extraordinary timing accuracy. ANAM also integrated a wider range of performance tests. Further development, including transition from MS-DOS to the MS Windows platform, was directed through the Military Operational Medicine Research Program at the United States Army Medical Research and Materiel Command (USAMRMC). The early versions of ANAM were developed for the MS-DOS operating system, with the Windows version of the library beginning in 1995.

The success of ANAM in research and clinical applications led to increased pressure for accessibility and usability, as well as a more comprehensive method for developing, managing, distributing, and sustaining ANAM. In 2006, ANAM was licensed to the Center for the Study of Human Operator Performance (C-SHOP) at the University of Oklahoma, which has, during the entire developmental period of the batteries beginning in 1984, provided basic research, quality assurance, and human factors engineering support related to computer-based test system development. After receiving the exclusive license for ANAM, C-SHOP researchers and staff surveyed ANAM users, initiated a quality assurance assessment of the existing ANAM software, and then set about making improvements and innovations in order to produce an enhanced suite of ANAM software products that would provide greater uniformity, capability, and usability. C-SHOP released an improved version of the ANAM test modules (version 4.0, or ANAM4™) in the Fall of 2006. C-SHOP also released a significantly enhanced software tool for ANAM data aggregation and management (the Data Extraction and Presentation Tool) and a completely new software tool, the Performance Report (APR), for producing reports of individualized ANAM test performance, including comparisons to normative/reference groups and available baseline and/or prior test sessions.

1.2 About the ANAM4™ TBI (and TBI-MIL) Battery

The ANAM4™ Traumatic Brain Injury (TBI) Battery is a selection of tests from the ANAM4™ library designed to aid in the assessment of general cognitive function following a head injury.
The origins of the ANAM4™ TBI Battery began with early studies applying ANAM testing technology to the assessment of TBI. For example, Levinson and Reeves (1997) conducted a study in which a battery of ANAM tests was able to correctly classify brain-injured patients with 91% accuracy, a better level of accuracy than alternative tests or staff ratings. Bleiberg et al. (1997) administered ANAM tests and traditional tests to a small group of patients with mild TBI. While only a few of the traditional tests seemed to differentiate between patients with mild TBI and controls, four of five ANAM tests yielded significant differences. Based on a factor analytic study of ANAM and many traditional measures, Bleiberg et al. (2000) developed an ANAM battery for use in sports-related TBI. This battery has been effective in assessing concussion and the influence of previous concussion on current concussion in a study examining West Point cadets injured during boxing (Bleiberg et al., 2004). Some of the most useful research on ANAM and TBI has evolved from an ongoing project conducted by the Defense and Veterans Brain Injury Center (DVBIC). DVBIC has extensive databases on selected ANAM tests and has been using the precursor to the ANAM4™ TBI Battery for a number of years. One normative study of TBI Battery test modules with over 2,000 paratrooper recruits was recently published (see Reeves et al., 2006), and another study based on over 5,000 recruits is in final analysis. These studies have provided some of the largest and finest assessment databases for military personnel available and attest to the cost-effective leveraged value of DoD-sponsored ANAM test development and application. Recently, customized modifications to the ANAM4™ TBI Test Battery and the APR made by C-SHOP for the U.S. Army have resulted in the ANAM4™ TBI-MIL Battery. The ANAM4™ TBI and ANAM4™ TBI-MIL Batteries do not differ with regard to the actual ANAM4™ test modules presented or the order of test module presentation.
The differences between these test batteries reside in customized demographic features and characteristics of the ANAM4™ Performance Report, which provide relevance and ease of integration with unique medical records systems and clinical applications. The ANAM4™ TBI-MIL Battery provides precise, objective, automated measures of fundamental neurocognitive functions including response speed, attention/concentration, immediate and delayed memory, spatial processing, and decision processing speed and efficiency. Importantly, these qualities of the ANAM4™ TBI-MIL Battery are consistent with past applications of computer-based testing and TBI, with normative work conducted by DVBIC, and with the Clinical Practice Guidelines and Recommendations published by the Defense and Veterans Brain Injury Center Working Group on the Acute Management of Mild Traumatic Brain Injury in Military Operational Settings (Helmick, 2006). This manual was specially constructed to provide information regarding the typical features of the ANAM4™ TBI Test Battery and the unique features of the ANAM4™ TBI-MIL Battery. Tests in the ANAM4™ TBI-MIL Battery include the following. Acknowledgement and appreciation is extended to the Defense and Veterans Brain Injury Center (DVBIC) for graciously sharing ANAM data independently collected by DVBIC at Fort Bragg, for an invitation to collaborate with them in the development of ANAM military reference group data, and for permission to incorporate those reference group data within the ANAM4™ APR.
Test: Domain/Function
Demographics: User Profile
TBI Questionnaire: TBI History
Sleepiness Scale: Fatigue
Mood Scale: Mood State
Simple Reaction Time: Basic neural processing (speed/efficiency; emphasis on motor activity)
Code Substitution - Learning: Associative learning (speed/efficiency)
Procedural Reaction Time: Processing speed (choice RT/rule adherence)
Mathematical Processing: Working memory
Matching to Sample: Visual-spatial memory
Code Substitution - Delayed: Memory (delayed)
Simple Reaction Time (Repeated): Basic neural processing (speed/efficiency)

Special Note: The results of ANAM4™ TBI testing must always be considered within the broader framework of information regarding the test taker, including: premorbid status, clinical history, the nature of the injury, immediate effects of the injury, post-injury symptoms, and other possible sources of change in cognition. Data from ANAM4™ TBI should be used in conjunction with military clinical guidelines for concussion and traumatic brain injury assessment and management.

Installing ANAM4™ TBI

This chapter provides the basic information necessary to install and run the ANAM4™ TBI software.

SPECIAL NOTE: These instructions assume CD-based installation procedures with the computer user having full "Administrator privileges." Because many government computer systems do not provide complete "Administrator privileges" to users, alternative installation procedures may be provided.

2.1 Hardware and System Requirements

Platforms

The core ANAM4™ software has been designed for use on IBM-compatible computer systems. Windows 95/98, NT4, XP, and Vista are supported by ANAM4™. The ancillary support products (ADEPT™ and APR™) require Windows 98 or higher, as they utilize the Microsoft .NET Framework v2.0.

Processor Speed and RAM

Most desktop and laptop machines sold since 2000, and running Windows 2000 or later, should be sufficiently equipped to run ANAM4™ TBI.
When using older hardware and running older operating systems (Windows 95/98 or NT4), the following minimum requirements apply:
- Pentium 90MHz microprocessor
- 32MB RAM

Disk Space
The core software requires approximately 25MB of disk space. However, due to data storage requirements and to ensure optimal performance, we highly recommend having at least 150MB of free space. A full installation including ancillary modules requires approximately 50MB of space (80MB if the .NET Framework v2.0 is not already present). Due to data storage requirements and to ensure optimal performance, we highly recommend having at least 300MB of free space.

Input Devices
Most standard input devices are supported, including USB mice and keyboards, PS/2 mice and keyboards, and serial mice. Microsoft or Logitech input devices are recommended. When using laptop computers, most internal keyboards and pointing devices will be sufficient for most ANAM4™ test modules, but the use of external input devices is highly recommended where practical. Wireless input devices (mouse) are NOT recommended and should be avoided.

2.2 INSTALLING ANAM4™ TBI
The ANAM4™ Software installation program consists of a series of easy-to-follow dialogs that lead you through the installation procedure. To install ANAM4™ TBI and the support software products:

1. Insert the ANAM4™ Software CD in your CD drive and wait for the installation program to start. If the installation program does not start automatically, click Start > Run on the task bar. Type your CD drive letter followed by \Setup (e.g., D:\Setup or E:\Setup). Finally, click OK to proceed with the installation.

Microsoft .NET Framework Installation
The Microsoft .NET Framework version 2.0 is required for the ADEPT™ and APR™ programs. If version 2.0 or later is not installed on your machine, it will automatically be installed as part of the installation process.
Click OK in the .NET message window. A series of dialogs will guide you through the installation process. The installation of the Microsoft .NET Framework could take up to 5 minutes; please be patient and allow the installation to run to completion. Once the installation of the .NET Framework is complete, the installation of the ANAM4™ software will automatically proceed.

2. Click Next in the Installation dialog.

3. Read the ANAM4™ license agreement and, if you accept the agreement, click I Agree. If you decline, the software will not install.

4. Click Next. All software packages are selected for installation (checked) by default. If you do not want to install a software package, click on the check box next to it to uncheck it.

5. Click Next. The default installation directory is under C:\Program Files.

6. Click Install. The application begins to install.
7. Click Finish. After the ANAM4™ files are copied to your computer, the setup is complete. Desktop icons for ANAM4™, ADEPT™, and APR™ will be created.

3 ANAM4™ TBI Test Administration

3.1 GENERAL TEST ADMINISTRATION GUIDELINES

3.1.1 Test Administrators
ANAM4™ TBI is to be administered by professionals who have been trained in the proper administration procedures for testing. Furthermore, training of ANAM4™ TBI test administrators in standardized procedures helps to reduce variability between test administrations and ensure consistency of test data across administrations. It is important that test administrators
- are informed about the standard testing procedures, including information about the purposes of the testing, the kinds of tasks involved, the method of administration, and the scoring and reporting;
- have sufficient practice experiences prior to the test, to include practice, as needed, on how to operate equipment and practice in responding to tasks;
- have been sufficiently trained in their responsibilities and the administration procedures for the test;
- have a chance to review test materials and administration sites and procedures prior to the time for testing to ensure standardized conditions and appropriate responses to any irregularities that occur;
- arrange for appropriate modifications of testing materials and procedures in order to accommodate test takers with special needs; and
- have a clear understanding of test taker rights and responsibilities.

3.1.2 Testing Environment
It is important that testing facilities and conditions be reasonably uniform for all test takers. Extraneous factors can affect the reliability and validity of test results.
It is important that the test environment
- is both physically and psychologically conducive to eliciting the best possible performance of the test taker;
- is well-lit and well-ventilated, with a comfortable room temperature;
- has a comfortable chair and work surface configuration that allows good visibility of the computer display and comfortable access to the keyboard and mouse;
- is free of excessive noise, traffic, and other interruptions; and
- allows privacy and reasonable separation between test takers.

3.1.3 Testing Procedures
Uniform testing procedures help to ensure that the test results minimally reflect differences in test administration conditions. To maintain the integrity of test results, administrators need to be alert to test takers' activities throughout the administration. For example, some individuals may experience difficulty in understanding the instructions. Others may proceed through the battery randomly responding to items. Others may be uncomfortable using a computer, while others may attempt to misuse the computer or attempt to engage their fellow test takers in conversation, competitive activity, or amusement. An alert administrator will be able to correct these situations quickly before they invalidate the test takers' responses. If the administrator believes the integrity of testing has been compromised, the battery should be terminated and the test taker should be quietly removed from the testing location. A test proctor log book can be used to record standard information about each test session as well as any abnormalities that may have occurred during the test session. In general, test administrators should offer aid at any point in the testing when it becomes clear that a test taker is having difficulty understanding the test. Most ANAM4™ tests provide practice trials that help ensure that the test taker understands what is required before the actual test begins.
Test administrators should be familiar with potential questions and/or problems that might be encountered during administration and be advised of standard procedures for handling such situations. During test administration it is important that
- test administrators ensure that all test takers are able to understand the test instructions;
- test administrators ensure that test takers who prefer to use their right hand place the index finger of their right hand on the left mouse button and the middle finger on the right mouse button (test takers who prefer to use their left hand should place the index finger of their left hand on the right mouse button and the middle finger on the left mouse button);
- test administrators be vigilant that test takers understand the correct way to respond to each ANAM4™ test;
- sufficiently trained personnel establish and maintain uniform conditions and observe the conduct of test takers when large groups of individuals are tested;
- personnel are alert to problems individuals may have in taking the test (e.g., some test takers may have forgotten to bring their eyeglasses; others may have temporary illnesses or injuries);
- test administrators be vigilant for test takers who may exhibit signs of excessive fatigue or sleep loss, or who may be taking the test under undesirable circumstances (such as inordinately early or late in the day);
- a systematic and objective procedure is in place for observing and recording environmental, health, emotional, or other factors that may invalidate test performance and results, as well as deviations from prescribed test administration procedures, including information on test accommodations for individuals with special needs; and
- the security of test materials and testing software is protected, ensuring that only individuals with a legitimate need for access to the materials/software are able to obtain such access and that steps to eliminate the possibility of breaches in test security and copyright
protection are respected.

After test administration it is important to
- record notes on any problems, irregularities, and accommodations in the test records; and
- answer questions of test takers in a manner that is forthright, accurate, and conveys appropriate concern for their privacy.

Test results should be kept in a secure location. Results should only be released to qualified personnel. Test results are confidential and should not be disclosed to other individuals or outside organizations without the informed consent of the test taker. Only qualified personnel should be involved in interpreting ANAM4™ test results. Responsible interpretation of test scores requires knowledge about and experience with the tests, the scores, and the decisions to be made. Interpretation of scores from ANAM4™ TBI should not be made without this knowledge and experience and a thorough understanding of the limitations. For more information on ANAM4™ TBI interpretation guidelines, see Understanding and Interpreting ANAM4™ Test Scores.

3.2 RUNNING ANAM4™ TBI
Once you have familiarized yourself with the test administration guidelines, you are ready to run ANAM4™ TBI.

3.2.1 Starting ANAM4™ TBI

Starting ANAM4™ TBI via the Launch Pad
1. Double-click the ANAM4™ Launch icon on the desktop. Depending on your version of the Launch Pad, the screen at the right may or may not appear.
2. If the screen at the right appears and the Username and Password fields are pre-loaded, click Continue or press Enter. You do not need to modify the Username and Password fields on the ANAM4™ Launch Pad. If the fields are not pre-loaded, enter the supplied Username and Password.
3. Select the mouse hand. If you have a test taker who uses the mouse with the left hand and you wish to obtain responses using the left hand, set the Mouse Hand to Left.
4. Enter the test taker's Social Security Number.
5. Re-enter the Social Security Number (for verification).
6. Click Start Test.
After a brief period during which you may see a Command Prompt window, the ANAM4™ Introduction Screen at the right will appear.

Starting ANAM4™ TBI Directly
1. Double-click the ANAM4™ icon on the desktop or select the ANAM4™ program listed in Start > All Programs. The Introduction Screen at the right will appear.

3.2.2 Battery Selection
After a brief introductory screen, the Battery Selection Screen will appear. The Battery Selection Screen allows the user to choose a battery to run, specify an ID number, and specify data storage directories. This information may already be pre-loaded if you started ANAM4™ via the Launch Pad (figure at right).

Selecting a Test Battery (if you started ANAM4™ via the Launch Pad):
1. Use the up/down cursor arrows or mouse if you wish to select a demonstration battery. The Demo battery options are shortened versions of the battery (fewer items per test). There are two versions of the Demo battery: 1) one that collects/outputs data and 2) one that does not collect/output data.
2. Enter a user ID (usually the Social Security Number). It is VITAL that the number be entered accurately! If a test ID is entered that has never been used, you will be asked to verify that you are creating a new participant ID. If this is correct, click Yes; this will create a new participant ID. If the session is a repeat administration and the participant ID has been used previously, you will not receive this prompt.

Changing the Primary and Individual Data Directories
Data from completed ANAM tests will be stored in the directories specified on the Battery Selection Screen. The default Primary Data Directory is C:\anamdata. All data files will be stored in this directory unless otherwise specified in the Primary Data Directory field. By default, the Individual Data Directory field is blank.
This means that all data collected will be stored together in the Primary Data Directory. If an Individual Data Directory is specified, a subfolder will be created in the Primary Data Directory folder and all data will be stored in the Individual Data Directory.

To change the Primary or Individual Data Directories
1. Press ALT+F1. This will unlock the Primary Data Directory and Individual Data Directory paths for modification.
2. Type the path location of the directory for data storage or click Browse. If you select Browse, navigate to the directory where you would like to store the ANAM data files.
3. After confirming all information, press Enter or click on Next to continue.

Confirming Date/Time and Session Number
A Confirmation Screen appears prompting you to verify the Date, Time, ID, and Session.
1. Confirm that the Date and Time are accurate. If not, click No (you may need to press a key to reveal the Yes/No option), close the Battery Selection Screen that reappears by clicking on the red close button at the upper right corner, correct the Date/Time setting on your computer, and restart the Launch Pad.
2. Confirm that the correct Session number is about to be run. If you are certain that it needs to be changed, press ALT+F1 to unlock the field and enter the desired session number.
3. Click OK or Yes to continue.
Verifying the ID, date, time, and session is particularly important so that performance can be tracked over time. To unlock the session field for modification, press ALT+F1.

3.2.3 Test Settings
The Test Settings Screen allows the user to customize the ANAM4™ TBI test session. For most test takers this information is already pre-loaded. To unlock the File Extension field and/or Mouse Hand field for modification, press ALT+F1.

3.2.4 Exiting a Battery
To exit ANAM4™ TBI from the Battery Selection Screen or the Test Settings Screen, click on the X in the upper right corner of the window.
To exit from the Test List Screen, click on the Exit button. To exit ANAM4™ TBI during a test, press ALT+F1 at any time following the instructions screen. The exit function works ONLY after the display of test instructions is complete, the test has begun, and a response is required. After the test aborts, click Yes to cancel the battery. At the conclusion of the battery, you should see a "Thank You" message informing you that the Test Battery is complete.

3.2.5 Exiting a Test
To abort any test (end the test without collecting data), press ALT+F1 at any time following the instructions screen. The exit function works ONLY after the display of test instructions is complete, the test has begun, and a response is required. After the test aborts, if you wish to cancel the rest of the battery, click Yes. If you wish to continue with the remaining tests, click No. The next test in the sequence will begin. This will end the current test and NO DATA WILL BE RECORDED for that test.

3.2.6 Restart/Recovery Options
Data for each test within a battery is saved separately immediately following a test's administration. Therefore, in most cases, data will not be lost for tests completed prior to test discontinuation, system failure, or other unintended interruptions. Data from a partially completed test will not be saved. When the battery is restarted, you will see the Restart Battery Screen asking if you wish to Start from the First Test or Continue from the Last Test Completed. If you choose to Start from the First Test, any tests administered prior to battery cancellation will be repeated. The data will be appended to the test's data file. Once you have selected the desired option, click on Next to continue.

Administering a Retest
A retest on the ANAM4™ battery is generally required due to poor performance resulting from a failure to understand the instructions for a given test in the battery.
A minimum score of 56% correct is used to identify those individuals who may not have properly understood the instructions, and a retest is administered. A retest may be required on only one test in the battery or on multiple tests in the battery.

Selecting a Specific Test or Subset of Tests
To select a specific test or subset of tests to administer:
1. Press ALT+F2 on the Test Settings Screen.
2. Click on Select under Type of Run.
3. Press Enter or click on Next to continue. The list of tests within the selected battery will appear on the Test List Screen. The Test List Screen contains a complete listing of the tests included in the selected ANAM4™ Battery.
4. Use your mouse to select the test(s) that you would like to administer. Use SHIFT to select consecutive multiple tests. Use CTRL to select non-consecutive multiple tests.
5. Press Enter or click on Next to continue.

Tests will proceed in sequence. If instructions are turned on, the typical sequence for each test is one or more pages of instructions, a screen with the test name, the test itself, and (if selected from the Test Settings Screen) a feedback screen summarizing individual test results.

3.3 SHORTCUTS
The shortcut keys for ANAM functions are the following:

If you need to...                               On this screen      Press these keys
Exit a test                                     Any Test Screen*    ALT F1
Exit the battery                                Any Test Screen*    ALT F1
Interrupt a battery                             Any Test Screen*    ALT F1
Unlock primary & individual data directories    Battery Selection   ALT F1
Unlock Session field                            Confirmation        ALT F1
Unlock File Extension & Mouse Hand fields       Test Settings       ALT F1
Expand Test Settings Screen                     Test Settings       ALT F2
Continue test after criterion failure           Criterion notice    ALT F3

*Must be an actual test screen. Shortcuts will not work during the instruction screens.
4 ANAM4™ Tests
An ANAM4™ battery is a collection of several tests, selected by the test administrator, that run in a sequential manner. The specific tests assess different basic functions (or domains) of cognition such as attention, reaction time, memory, and concentration. The ANAM4™ TBI battery can be self-administered by the user and takes approximately 15-30 minutes to complete.

4.1 Test Descriptions
Descriptions of the individual tests follow in the order of administration.

4.1.1 Demographics Module
TEST DESCRIPTION
The demographics module allows users to enter a wide variety of information including name, age, gender, ethnicity, medical diagnosis, medications, and additional comments that the researcher or clinician finds useful.

4.1.2 TBI Questionnaire
TEST DESCRIPTION
The TBI Questionnaire is designed to assess injury history and related information.

4.1.3 Sleepiness Scale
The ANAM sleepiness scale has been designed to provide a state and/or trait assessment of energy/fatigue level.
TEST DESCRIPTION
This test permits self-assessment of the user's sleep/fatigue state (and/or trait). The user is presented with seven different statements of alertness/sleepiness, ranging from "Feeling very alert, wide awake, and energetic" to "Very sleepy and cannot stay awake much longer." The user is instructed to select the one statement that best matches the current state.
4.1.4 Mood Scale II - Revised
COGNITIVE DOMAIN
The Moodscale2-R is designed to assess either mood state or trait in participants in six subcategories: Vigor (high energy level), Happiness (positive disposition), Depression, Anger (negative disposition), Fatigue (low energy level), and Anxiety (anxiety level).
TEST DESCRIPTION
This test permits self-assessment of the user's mood state in seven categories: Vigor (high energy level), Happiness (positive disposition), Depression, Anger (negative disposition), Fatigue (low energy level), Anxiety (anxiety level), and a new subcategory of Restlessness (motor agitation). The user is presented with a scale of numbered blocks ranging from 0 to 6, with 0 having the verbal anchor "Not at all," the midpoint labeled "Somewhat," and 6 labeled "Very Much." The user is presented a series of adjectives, each adjective contributing to one of the mood categories, and is instructed to select the box/number that best represents the current state with respect to the presented adjective.

4.1.5 Simple Reaction Time
COGNITIVE DOMAIN
Results of this test are used as an index of visuo-motor response timing.
TEST DESCRIPTION
This test measures simple reaction time by presenting the user with a series of symbols on the display. The user is instructed to respond as quickly as possible by pressing a button each time the stimulus appears.

4.1.6 Code Substitution - Learning
COGNITIVE DOMAIN
Results of this test are used as an index of visual search, sustained attention, and encoding.
TEST DESCRIPTION
In this test the user must compare a displayed digit-symbol pair with a set of defined digit-symbol pairs, or the key. The user presses designated buttons to indicate whether the pair in question represents a correct or incorrect mapping relative to the key. In the Learning Phase (simultaneous presentation mode),
the defined pairs are presented on the screen along with the digit-symbol pair in question. In the Delayed Memory test (to follow later in the battery) the comparison stimuli are again presented individually, without the key.

4.1.7 Procedural Reaction Time
COGNITIVE DOMAIN
This test measures the reaction time and processing efficiency associated with following a simple set of mapping rules.
TEST DESCRIPTION
There are three possible blocks of trials for this test. In the Basic Block, the user is presented with a number constructed on the display using a large dot matrix (either a 2, 3, 4, or 5). The user is instructed to press one designated button for a "low" number (2 or 3) and another designated button for a "high" number (4 or 5).

4.1.8 Matching to Sample
COGNITIVE DOMAIN
Results of this test are used as an index of spatial processing and visuo-spatial working memory.
TEST DESCRIPTION
During this test the user views a pattern produced by eight shaded cells in a 4x4 sample grid. The sample is then removed and two comparison patterns are displayed side by side. One grid is identical to the sample grid and the other grid differs by one shaded cell. The user is instructed to press a designated button to select the grid that matches the sample.

4.1.9 Mathematical Processing
COGNITIVE DOMAIN
Results of this test are used as an index of basic computational skills, concentration, and working memory.
TEST DESCRIPTION
During this task, an arithmetic problem involving three single-digit numbers and two operators is displayed (e.g., "5 - 2 + 3"). The user presses buttons to indicate whether the answer to the problem is less than five or greater than five.

4.1.10 Code Substitution - Delayed
COGNITIVE DOMAIN
Results of this test are used as an index of delayed memory.
TEST DESCRIPTION
In this test the user must compare a displayed digit-symbol pair with the set of defined digit-symbol pairs, or key, presented during the Code Substitution - Learning test. The user presses designated buttons to indicate whether the pair in question represents a correct or incorrect mapping relative to the key.

4.1.11 Simple Reaction Time (R)
COGNITIVE DOMAIN
Results of this test are used as an index of visuo-motor response timing.
TEST DESCRIPTION
This is a repeat of the Simple Reaction Time test presented earlier in the battery. This test measures simple reaction time by presenting the user with a series of symbols on the display. The user is instructed to respond as quickly as possible by pressing a button each time the stimulus appears.

5 ANAM4™ Test Data
Data from ANAM4™ is saved at the conclusion of each test. If a test is not completed (due to interruption, system failure, or other intentional or unintentional termination), data will not be saved for the incomplete test. However, data will be saved for all tests in the battery completed up to the point of termination or interruption.

5.1 ANAM4™ Data Output
Upon test administration, data files are generated in two different formats through the executive program.
These are:
- Comma-separated value (CSV) files:
  Raw Data - individual item/trial information
  Summary Data - summary statistics computed across all items/trials
- Extensible Markup Language (XML) files:
  Raw Data - individual item/trial information
  Summary Data - summary statistics computed across all items/trials

The CSV files do not include variable labels. For further information on variable position, labels, order, and calculation, see Appendix A2.

5.1.1 Filename Format
Four different data files are generated from each run of an ANAM4™ test by the executive program. The filenames are comprised of five components:
1. File type identifier - The four file types are identified by a one-letter code. This code will occupy the first character in the data file name:
   a. raw data in CSV format
   b. raw data in XML format
   c. summary data in CSV format
   d. summary data in XML format
2. ID - Corresponds to the ID provided on the Battery Selection Screen. The ID component is variable in length and can be any alphanumeric character string.
3. Type of Administration - The ID is followed by a P or a T, designating a Practice or Test session.
4. Session number - Two digits representing the session number which was administered.
5. File extension - A three-letter extension is attached to the file which serves as an abbreviation code for test identification. For a list of these abbreviations, see Appendix A1.

Example: a summary data file in CSV format for subject 32545 taking Test Session number 1 of the Simple Reaction Time test.

5.2 ANAM4™ Data Directories
The Primary Data Directory is displayed on the Battery Selection Screen. Shown below are the two fields used to enter a Primary Data Directory and an Individual Data Directory. Data from completed ANAM tests will be stored in the directories specified in these fields. The default Primary Data Directory in this example is C:\anamdata. The data from all completed tests will be saved in this directory or folder.
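The five-part filename convention in section 5.1.1 can be sketched as a small parser. This is a sketch only: the one-letter file type codes and three-letter test abbreviations are listed in the manual's appendices and are treated as opaque here, and the example filename is hypothetical.

```python
import re

# Sketch of the five-part filename convention from section 5.1.1:
# <type code><ID><P|T><2-digit session>.<3-letter test abbreviation>
# The actual one-letter type codes and test abbreviations come from the
# manual's appendices and are not reproduced here.

FILENAME_RE = re.compile(
    r"^(?P<type>[A-Za-z])"       # one-letter file type code
    r"(?P<id>\w+?)"              # variable-length alphanumeric ID (lazy)
    r"(?P<admin>[PT])"           # P = Practice session, T = Test session
    r"(?P<session>\d{2})"        # two-digit session number
    r"\.(?P<ext>[A-Za-z]{3})$")  # three-letter test abbreviation

def parse_anam_filename(name):
    """Split a data filename into its five components, or None."""
    m = FILENAME_RE.match(name)
    return m.groupdict() if m else None

# Hypothetical example: subject 32545, Test session 01, with made-up
# type code "S" and test abbreviation "SRT".
print(parse_anam_filename("S32545T01.SRT"))
# → {'type': 'S', 'id': '32545', 'admin': 'T', 'session': '01', 'ext': 'SRT'}
```

Because the ID is variable-length and may itself contain a P or T, the lazy quantifier takes the first split that satisfies the remaining fixed-width components.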
The default for the Individual Data Directory is blank. If the Individual Data Directory is not changed, all data collected will be stored together in the Primary Data Directory. If an Individual Data Directory is specified, a subfolder will be created in the Primary Data Directory folder and all data will be stored in the Individual Data Directory. By default, the Primary and Individual Data Directory fields are locked. To modify these fields, press ALT+F1, which will unlock the fields and allow you to type the desired data directory or navigate to the desired directory by pressing the Browse button.

5.3 COLLECTING AND VIEWING DATA
The ANAM4™ Data Extraction and Presentation Tool, or ADEPT™, is designed to give users of the ANAM battery of tests a tool for viewing and managing their XML data output. This tool is useful for viewing data from multiple individuals or creating a database for further analysis. For more information on the ADEPT™ program, see Chapter 9, ADEPT.

The ANAM4™ Performance Report, or APR™, provides a summary snapshot of a single ANAM4™ test session. The APR™ is designed to aid clinical assessment, confirm that subject/patient scores are within a normal range with respect to a reference group, and examine performance history. For more information on the APR™ program, see Chapter 6, APR.

6 ANAM4™ Performance Report (APR™)
The ANAM4™ Performance Report, or APR™, provides a summary snapshot of a single ANAM4™ test session. The APR™ is designed to aid clinical assessment, confirm that subject/patient scores are within a normal range with respect to a reference group, and examine performance history.

6.1 INSTALLING AND RUNNING APR™
If you have received this program along with ANAM4™, APR™ will be automatically installed along with the ANAM4™ software. The default installation directory is under C:\Program Files. Upon installation, a desktop icon for APR™ will be created.
To run APR™, double-click on the APR™ icon located on your desktop, the APR™ file located in the installation directory under \Program Files, or the APR™ program listed in Start > All Programs > APR. The software will launch and the File selection dialog box will be displayed.

6.2 CREATING A REPORT

6.2.1 Selecting a Data Folder
The first step in creating a performance report is to locate the data you would like to include in the report. To open a File selection dialog box:
1. Open the File menu.
2. Select Open. Or, press the Open file button in the toolbar.

Upon opening, APR™ will default to searching C:\anamdata for valid ANAM4™ data files. If this is the location of your ANAM4™ data files, you can proceed with creating a report. To change the folder:
1. Click the Browse button located in the Directory Settings section of the File selection dialog box.
2. Navigate to the folder where your data files are located.
3. Click OK.

If the directory you select has no valid data files, a message will appear on the screen: "No data files found. Please select another directory." If you get this message, click OK and make sure you select a directory that contains valid XML ANAM data files. The XML summary data files start with the one-letter code for XML summary data (see Section 5.1.1). If you would like the selected directory to be set as the default directory for future sessions, click Make this my future default in the Directory Settings section of the File selection dialog box.
If you have selected a directory with valid data files, the user ID labels for all XML summary data files located in the specified directory will populate the ID box of the Selection Form. The next step in creating a report is selecting the desired user ID and Session. APR allows you to choose a single user ID and Session for constructing the performance report.

6.2.2 Selecting a User ID

Select a single user ID in the ID selection box by clicking on the ID label. The Session selection box will be updated with all sessions corresponding to the selected user ID. The session number and the last modification date/time will be displayed for each session. The date/time stamp will correspond to the last time data were appended to the file.

6.2.3 Selecting a Session

Select a single Session by clicking on the Session label, and click the View button. Alternatively, double-click the Session label to go directly to viewing the report.

6.2.4 Selecting Archive Sessions

When archived ANAM data (e.g., baseline testing) are available for the selected user ID, the APR can plot the current session along with archived sessions for historical comparison. The default sessions that are plotted are determined by the Archive Settings. The initial default is to plot the selected session plus the previous three sessions (if available). The default Archive Settings provide a starting point. In addition, sessions can be selected or deselected by clicking on the individual session labels in the right-hand archive sessions panel. To exclude the archiving feature from the current report, deselect the Use archive checkbox.

6.2.5 Archive Settings

To modify the default settings, click the Archive Settings button to open the Archive Preferences dialog box.
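The initial archive default described above (the selected session plus up to the three previous sessions) is a simple windowing rule over the session history. A hypothetical sketch, assuming sessions are held in chronological order; the function name and list representation are illustrative, not APR's internals:

```python
def default_archive_selection(sessions, selected, n_previous=3):
    """Sessions plotted by default: the selected session plus up to
    `n_previous` immediately preceding ones, oldest first.

    `sessions` is a chronologically ordered list of session labels;
    `selected` must be one of them.
    """
    i = sessions.index(selected)
    return sessions[max(0, i - n_previous):i + 1]
```

So for a user with five sessions, selecting the fifth would plot sessions two through five; a user with only two sessions would see both, since fewer than three previous sessions are available.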
Suppressing archive plots
To suppress the plotting of archive data by default, deselect the Include archive sessions box in the Archive Preferences dialog box.

Setting default archive sessions
To alter the sessions that are included in the archive plot by default, select the desired setting in the Sessions included by default selection box. A description of each option is given below.
1. Selected only - The user must select all archive sessions manually each time a report is generated.
2. Selected plus some previous - The selected session plus a number of previous sessions will be included in the report. The number of previous sessions is specified in the additional sessions box in the lower portion of the Archive Preferences dialog box.
3. Selected plus some subsequent - The selected session plus a number of subsequent sessions will be included in the report. The number of subsequent sessions is specified in the additional sessions box in the lower portion of the Archive Preferences dialog box.
4. Selected plus all previous - The selected session plus all previous sessions will be included in the report.
5. Selected plus all subsequent - The selected session plus all subsequent sessions will be included in the report.
6. Selected plus first - The selected session plus the first available archive session will be included in the report.
7. Selected plus last - The selected session plus the most recent available archive session will be included in the report.
8. All - The selected session plus all available archive sessions will be included in the report.
Default: Selected plus some previous

After all desired changes are made to the default archive settings, click the Save Preferences button. The selected default options regarding the use of archival data will be implemented in subsequent reports generated by the APR until the defaults are changed.

6.3 Report

6.3.1 Sample Report
[Sample APR report: summary scores for a single session compared against a reference group, performance history, observations, and demographics, with a note that the report does not represent a medical diagnosis or a prescription for treatment and that providers should use these results in combination with a clinical examination.]

6.3.2 Selecting a Comparison Group

By default, APR uses sex and age information from the Demographics (sub)test to determine the best match for the chosen Comparison Group. However, it is often desirable to change the comparison group or to select a particular subset of the comparison group (e.g., collapsed across both sexes rather than comparing with the specific sex of the selected user ID).

To change the Comparison Group
1. Once you have created a report for a user ID and session, click the Comparison Group button on the report to open the Comparison Group Selection dialog box.
2. Using the drop-down menus, select from the available comparison options.
3. Click