This study examines the accuracy of the Test of Memory Malingering (TOMM), a frequently administered measure for evaluating effort during neurocognitive testing. In recent years, several authors have suggested that the initial recognition trial of the TOMM (Trial 1) might be a more useful index for detecting feigned or exaggerated impairment than Trial 2, which is the source for inference recommended by the original instruction manual (Tombaugh, 1996). We used latent class modeling (LCM) implemented in a Bayesian framework to evaluate archival Trial 1 and Trial 2 data collected from 1198 adults who had undergone outpatient forensic evaluations. All subjects were tested with two other performance validity tests (the Word Memory Test and the Computerized Assessment of Response Bias), and for 70% of the subjects, data from the California Verbal Learning Test–Second Edition Forced Choice trial were also available. Our results suggest that not even a perfect score on Trial 1 or Trial 2 justifies saying that an evaluee is definitely responding genuinely, although such scores imply a lower-than-base-rate probability of feigning. If one uses a Trial 2 cut-off higher than the manual's recommendation, Trial 2 does better than Trial 1 at identifying individuals who are almost certainly feigning while maintaining a negligible false positive rate. Using scores from both trials, one can identify a group of definitely feigning and very likely feigning subjects who comprise about two-thirds of all feigners; only 1 percent of the members of this group would not be feigning.
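The abstract's claim that even a perfect score only lowers the probability of feigning below the base rate, rather than ruling feigning out, follows directly from Bayes' theorem. A minimal sketch of that reasoning (the base rate, sensitivity, and specificity values below are hypothetical placeholders, not estimates from the study):

```python
# Posterior probability of feigning given a passing score, via Bayes'
# theorem. All numeric inputs are illustrative, not the study's data.

def posterior_feigning_given_pass(base_rate, sensitivity, specificity):
    """P(feigning | pass), where 'pass' means scoring above the cut-off.
    sensitivity = P(fail | feigning); specificity = P(pass | genuine)."""
    p_pass_given_feigning = 1.0 - sensitivity  # feigners who still pass
    p_pass = (p_pass_given_feigning * base_rate
              + specificity * (1.0 - base_rate))
    return p_pass_given_feigning * base_rate / p_pass

# Hypothetical values: 40% base rate of feigning, a cut-off that
# catches 75% of feigners and passes 98% of genuine responders.
post = posterior_feigning_given_pass(0.40, 0.75, 0.98)
print(round(post, 3))  # prints 0.145 -- well below the 0.40 base rate
```

As long as some feigners can produce a passing score (sensitivity below 1.0), the posterior never reaches zero, which is why a perfect score cannot establish that an evaluee is definitely responding genuinely.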
Mental health professionals often use structured assessment tools to help detect individuals who are feigning or exaggerating symptoms. Yet estimating the accuracy of these tools is problematic because no "gold standard" establishes whether someone is malingering or not. Several investigators have recommended using mixed-group validation (MGV) to estimate the accuracy of malingering measures, but simulation studies show that typical implementations of MGV may yield vague, biased, or logically impossible results. This article describes a Bayesian approach to MGV that addresses and avoids these limitations. After explaining the concepts that underlie our approach, we use previously published data on the Test of Memory Malingering (TOMM; Tombaugh, 1996) to illustrate how our method works. Our findings concerning the TOMM's accuracy, which include insights about covariates such as study population and litigation status, are consistent with results that appear in previous publications. Unlike most investigations of the TOMM's accuracy, this article's findings neither rely on possibly flawed assumptions about subjects' intentions nor assume that experimental simulators can duplicate the behavior of real-world evaluees. Our conceptual approach may prove helpful in evaluating the accuracy of many assessment tools used in clinical contexts and psycholegal determinations.
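The core idea behind mixed-group validation is that if two groups are assumed to contain different proportions of feigners, their observed test-failure rates overdetermine the test's sensitivity and specificity, so the two can be recovered algebraically. A simple deterministic sketch of that logic (the group base rates and failure rates are hypothetical, not data from the article):

```python
# Mixed-group validation (MGV) in its simplest two-group form.
# Each group's observed failure rate satisfies
#     p_i = br_i * Se + (1 - br_i) * (1 - Sp)
# where br_i is that group's assumed feigning base rate. Two groups
# give two linear equations in the unknowns Se and Sp.
# All inputs below are illustrative placeholders.

def mgv_estimate(br1, p1, br2, p2):
    """Solve for (sensitivity, specificity) from two groups'
    assumed base rates (br) and observed failure rates (p)."""
    # Rewriting p_i = br_i * (Se - fpr) + fpr, with fpr = 1 - Sp,
    # and subtracting the two equations eliminates fpr:
    se_minus_fpr = (p1 - p2) / (br1 - br2)
    fpr = p1 - br1 * se_minus_fpr          # false positive rate, 1 - Sp
    return se_minus_fpr + fpr, 1.0 - fpr   # (Se, Sp)

# Hypothetical groups: litigants (60% assumed base rate) fail 50% of
# the time; non-litigants (10% assumed base rate) fail 10% of the time.
se, sp = mgv_estimate(0.60, 0.50, 0.10, 0.10)
print(round(se, 2), round(sp, 2))  # prints 0.82 0.98
```

This deterministic solve also shows where naive MGV can go wrong: if the assumed base rates are mis-specified, the algebra can return logically impossible values (a sensitivity above 1 or below 0), which is one motivation the abstract gives for embedding MGV in a Bayesian framework that constrains the estimates and reports their uncertainty.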
Papers by Roger Gervais