
Brief Montreal-Toulouse Language Assessment Battery: adaptation and content validity

  • The Correction to this article has been published in Psicologia: Reflexão e Crítica 2020 33:23

Abstract

Background

Evaluating patients in the acute phase of brain damage allows for the early detection of cognitive and linguistic impairments and the implementation of more effective interventions. However, few cross-cultural instruments are available for the bedside assessment of language abilities. The aim of this study was to develop a brief assessment instrument and evaluate its content validity.

Methods

Stimuli for the new assessment instrument were selected from the M1-Alpha and MTL-BR batteries (Stage 1). Sixty-five images were redesigned and analyzed by non-expert judges (Stage 2). This was followed by the analysis of expert judges (Stage 3), where nine speech pathologists with doctoral training and experience in aphasiology and/or linguistics evaluated the images, words, nonwords, and phrases for inclusion in the instrument. Two pilot studies (Stage 4) were then conducted in order to identify any remaining errors in the instrument and scoring instructions.

Results

Sixty of the 65 figures examined by the judges achieved inter-rater agreement rates of at least 80%. Modifications were suggested to 22 images, which were therefore reanalyzed by the judges, who reached high levels of inter-rater agreement (AC1 = 0.98 [CI = 0.96–1]). New types of stimuli such as nonwords and irregular words were also inserted in the Brief Battery and favorably evaluated by the expert judges. Optional tasks were also developed for specific diagnostic situations. After the correction of errors detected in Stage 4, the final version of the instrument was obtained.

Conclusion

This study confirmed the content validity of the Brief MTL-BR Battery. The method used in this investigation was effective and can be used in future studies to develop brief instruments based on preexisting assessment batteries.

Introduction

Acquired brain injury due to stroke, traumatic brain injury (TBI), infections, and brain tumors can cause motor impairment, swallowing difficulties and language disorders such as aphasia. Aphasia is most commonly caused by stroke (Martinez, Saborit, Carbonell, & Contreras, 2014; Santiago & Gárate, 2016; Shipley & MacAfee, 2016) and affects approximately one third of stroke sufferers (Koyuncu et al., 2016; Raju & Krishnan, 2015).

Aphasia can be described as a language disorder in which there is a loss or impairment of the ability to perceive, interpret, and structure linguistic elements (Maranhão, Souza, Costa, & Vieira, 2018; Ardila & Rubio-Bruno, 2017). The affected individual may present changes in comprehension and/or oral and/or written expression, as well as difficulty remembering words during a conversation or naming objects (anomia), while the sensory and motor systems (phonoarticulatory apparatus) may remain intact (Benson, 1993). Anomia, neologisms, paraphasias, and agrammatism are some of the linguistic alterations that can be found in aphasic presentations (Ortiz, 2010).

An accurate diagnosis of aphasia is crucial to initiate interventions and improve patient outcomes (Godecke, Hird, Laylor, Rai, & Phillips, 2012; Rohde, Worrall, Godecke, O’Halloran, Farrell, & Massey, 2018). However, it is important to note that language recovery is not a linear process, especially after stroke, where the time since injury is a major contributor to recovery (Kiran & Thompson, 2019).

Many clinical conditions show spontaneous recovery over time, especially in the first 3 months after focal neurological damage (Azuar et al., 2013; El Hachioui et al., 2017). This is attributable to the rapid regeneration of affected tissues, as well as the neurological and functional recovery processes in the subacute phase of stroke (Kiran & Thompson, 2019). This period may see the formation of new synaptic connections and regeneration of damaged pathways (Kiran & Thompson, 2019). Neurogenesis in damaged brain regions may then lead to neural changes which contribute to spontaneous recovery (Kiran & Thompson, 2019). Even though some degree of spontaneous recovery may be expected, it is important to diagnose aphasia while still in hospital in order to allow for the adequate treatment of linguistic and communicative aspects (Johnson et al., 1988; Rohde et al., 2018) and the early implementation of more effective and targeted interventions. As such, language should be evaluated as early as possible in order to guide the rehabilitation process and contribute to prognosis (Nursi et al., 2018).

Extensive aphasia assessment batteries may be too tiring for patients with complex clinical conditions (Casarin et al., 2020). After an acute stroke, for instance, many patients are unable to undergo prolonged evaluations (Marshall & Wright, 2007). In these cases, long assessments may actually constitute a waste of time (El Hachioui et al., 2017). Instruments such as screening tools and brief test batteries which are simpler and easier to administer may be more helpful in diagnosing and mapping the extent of aphasic impairment in inpatient settings (El Hachioui et al., 2017).

Screening instruments can provide information about the language disorder in terms of its impact on comprehension and expression. The results of such an assessment contribute to early rehabilitation interventions, which may lead to greater gains and improve language recovery (Nursi et al., 2018).

Several instruments are currently available to evaluate language in the acute phase of brain injury (Rohde et al., 2018). Internationally available screening instruments originally developed in the English language to screen for language impairment include the following: Acute Aphasia Screening Protocol, AASP (Crary, Haak, & Malinsky, 1989); Aphasia Diagnostic Profiles, ADP (Helm-Estabrooks, 1992); Sheffield Screening Test for Acquired Language Disorders, SST (Syder et al., 1993); Frenchay Aphasia Screening Test, FAST (Enderby & Crow, 1996); Mississippi Aphasia Screening Test, MAST (Nakase-Thompson et al., 2005); and Bedside Evaluation Screening Test (2nd edition), BEST-2 (West, Sands, & Ross-Swain, 1998). Other instruments include the Aachen Aphasia Bedside Test, AABT (Biniek, Huber, Glindemann, Willmes, & Klumm, 1992), originally developed in German; the Ullevaal Aphasia Screening Test, UAS (Thommessen, Thoresen, Bautz-Holter, & Laake, 1999), developed in Norwegian; the ScreeLing (Doesborgh et al., 2003), developed in Dutch; and the Language Screening Test, LAST (Flamand-Roze et al., 2011), in French. The BEST-2 and LAST have been translated and adapted to Brazilian Portuguese but are not commercially available.

The majority of English language screening instruments, including the MAST (Nakase-Thompson et al., 2005) and FAST (Enderby & Crow, 1996), detect aphasia using measures of oral and written comprehension, spontaneous speech, repetition, naming, reading, and writing. The same approach is used by the M1-Alpha (Nespoulous et al., 1986), which was the focus of the present study.

All previously mentioned screening instruments are also available in expanded form, as is the M1-Alpha. The latter was developed based on the Montreal-Toulouse Protocol for the Linguistic Assessment of Aphasia, or MT-86, developed by Nespoulous, Joanette, and Lecours (1986). This instrument was translated to Brazilian Portuguese and used for research purposes in the 1980s and 1990s. However, it was never published and was therefore available only to researchers and some clinical practitioners for identifying linguistic behaviors. It is an important instrument in the diagnosis of aphasias, especially as a screening tool in hospital environments, which require faster procedures (Ortiz, 1991; Ortiz & Costa, 2011). However, Brazilian studies involving the M1-Alpha identified the need to revise its scoring criteria and revealed issues with some of its linguistic and pictorial stimuli, including the low recognizability of some pictures and the absence of tasks considered crucial for assessment and diagnosis (Lecours et al., 1985; Ortiz, Osborn, & Chiari, 1993). In light of these issues, it was suggested that the instrument be revised and updated.

In addition to the M1-Alpha, there is an extended version of the MT-86 protocol known as the MT-86β, which was adapted to Brazilian Portuguese and named Bateria Montreal-Toulouse de Avaliação da Linguagem (MTL-BR) (Montreal-Toulouse Language Assessment Battery) (Parente et al., 2016). The MTL-BR has proved fully applicable to the Brazilian population, prompting researchers to consider developing a brief version of the battery similar to the format of the M1-Alpha (Pagliarin et al., 2014, 2015; Parente et al., 2016).

The adaptation of neuropsychological instruments is a complex process, and the resulting instrument must have adequate psychometric properties (Gauer, Gomes, & Haase, 2010; Ivanova & Hallowell, 2013; Mcleod & Verdon, 2014; Kirk & Vigeland, 2015; Pacico & Hutz, 2015; Pasquali, 1999). The International Test Commission (ITC, 2017) recommends providing evidence of reliability and validity for the adapted instrument. Accordingly, the adaptation process of a neuropsychological instrument involves four fundamental steps: translation, analysis by non-expert judges, analysis by expert judges, and a pilot study (Fonseca et al., 2011).

Content validity is one component of the validation process of an instrument, verifying how representative the selected content is of what the instrument intends to measure (Fachel & Camey, 2000; Pasquali, 2009; Oliveira, Sousa, & Maia, 2017). To this end, analysis by expert judges, semantic analysis, and studies with the target population are used. All of these raters analyze each item of the instrument with respect to its relevance and clarity (Rubio, Berg-Weger, Tebb, Lee, & Rauch, 2003; Pasquali, 2010; Zamanzadeh et al., 2015; Pernambuco, Espelt, Magalhães, & Lima, 2017).

Few instruments are available for the bedside assessment of linguistic abilities. As such, there is a need for screening tools which can be used to assess patients before hospital discharge so that interventions can be implemented as early as possible. The delayed implementation of speech-language interventions may lead to limited therapeutic progress (Landenberger, Rinaldi, Frison, & Salles, 2017).

In light of the need for earlier detection of language impairments and accurate speech-language diagnoses in patients with acquired brain damage, the aim of this study was to develop a brief instrument for the bedside assessment of linguistic skills in patients with aphasia, based on the MTL-BR Battery and the M1-Alpha. This study also aimed to collect evidence of the content validity of this novel assessment instrument.

Methods

Participants and procedures

The study was conducted in four stages, each involving a different set of participants. These included speech pathologists (Stage 1), non-expert judges (Stage 2), expert judges (Stage 3), and a pilot sample (Stage 4). The description and selection criteria for all samples are shown in Table 1.

Table 1 Description and selection criteria for participants in each stage of the adaptation and validation of the Brief MTL-BR

The four stages in the adaptation process and validity study are described below.

Stage 1. Instrument evaluation and stimulus selection

This stage involved three speech pathologists, including two aphasiologists and one master’s student in Human Communication Disorders. The brief screening battery developed in the present study was based on the adaptation performed by Scliar-Cabral (1983) (M1-Alpha) and the MTL-BR (Parente et al., 2016). The researchers first examined the M1-Alpha (available only to researchers and some clinical practitioners), the MTL-BR, and MTL-BR version B (an alternate form of the MTL-BR developed for psychometric evaluation but not commercially available) in order to select items for the brief assessment instrument and determine whether any of these would need to be redesigned or updated. New stimuli, such as nonwords, were also developed for inclusion in the assessment battery.

Throughout the process, the researchers attempted to ensure that the word structure and sentence length of items in the Brief MTL-BR were similar to those in the M1-Alpha. The administration and scoring instructions, including recommendations for the qualitative analysis of each task, were obtained from the MTL-BR application manual (Parente et al., 2016).

The protocol developed in the present study was named the Brief Montreal-Toulouse Language Assessment Battery (Brief MTL-BR). Since the Brazilian Portuguese equivalent of the MT-86β is referred to as the MTL-BR, the equivalent of the alpha version was named Brief MTL-BR, with the consent of the authors of the original instrument.

Stage 2. Assessment by non-expert judges

This stage involved 28 neurologically healthy participants of both genders (21 female and seven male) aged 19 to 61 (M = 29.25, SD = 12.39). Two participants had 5 to 8 years of formal education (7.14%), three had 9 to 11 years (10.71%), and 23 had completed at least 12 years of formal education (82.14%). Participants were recruited from university settings and community centers. To screen for inclusion criteria, the non-expert judges answered the Sociodemographic and Health Questionnaire (Fonseca et al., 2012) and completed the Geriatric Depression Scale (GDS-15) (Yesavage et al., 1983).

In order to verify the representativeness of pictorial stimuli, all 65 images were presented to non-expert judges for analysis. The images were inserted into PowerPoint slides and projected on a screen for each participant. Non-expert judges were asked to perform two tasks: name each image, then match it to one of several descriptive phrases on a sheet of paper. Each participant performed the tasks individually and was asked to suggest any improvements they deemed necessary.

Stage 3. Evaluation by expert judges

The sample in this stage consisted of nine speech pathologists with doctoral training and experience in aphasiology and/or linguistics. Participants were selected for the study based on their clinical and/or research experience and invited to participate via an email which also contained a description of the study and of the analysis they would be asked to perform. Five of the judges analyzed 66 drawings in order to classify them as adequate or inadequate considering the representativeness of the target and distractor images. The other four judges evaluated the words, nonwords, and phrases, classifying them as adequate or inadequate considering psycholinguistic characteristics, such as length, frequency, and mental representativeness. All participants were asked to suggest modifications to the stimuli whenever they deemed necessary. Stimuli classified as inadequate were modified and reevaluated by the same set of judges. After implementation of all suggested improvements, a preliminary version of the Brief MTL-BR was obtained.

Stage 4. Pilot study

The pilot study was divided into two parts. In Pilot Study 1, the preliminary version of the instrument was administered to seven participants of both genders (four female and three male), aged 38 to 56 years (M = 45.71, SD = 6.47). Two participants had 5 to 8 years of formal education (29%), four had 9 to 11 years (57%), and one had over 12 years of formal education (14%). The aim of Pilot Study 1 was to identify any problems with the instrument and to estimate the time of administration of the Brief MTL-BR in neurologically healthy participants.

Subsequently, after the implementation of necessary changes, a final version of the Brief MTL-BR was developed. Pilot Study 2 had a similar aim to Pilot Study 1 and involved 65 individuals of both genders (44 female and 21 male) aged 19 to 75 years (M = 42.55, SD = 15.18). A total of 16 individuals had 5 to 8 years of formal education (24.6%), 22 had 9 to 11 years (33.8%), and 27 had at least 12 years of formal education (41.5%). The age of participants in Pilot Study 2 corresponds to the age range of the Brief MTL-BR, though participants were not matched for other characteristics.

Participants in Pilot Studies 1 and 2 were recruited from university settings and community centers. All participants were neurologically healthy, right-handed Brazilian Portuguese speakers, with no current or prior history of psychoactive substance use, and no signs of depression and/or psychiatric or sensory disorders.

In order to screen for exclusion criteria and select participants for Stage 4, prior to completing the Brief MTL-BR, subjects were administered a Sociodemographic and Health Questionnaire (Fonseca et al., 2012) which investigates cultural and communicative experiences, demographic characteristics, handedness, medical history, neurological and motor impairments, as well as the frequency of social interaction and reading and writing activities. Participants also completed the GDS-15 (Yesavage et al., 1983), which was originally developed to screen for signs of depression in elderly populations but is applicable to adults aged 17 or older (Lezak, Howieson, & Loring, 2004). These procedures were conducted in order to screen for exclusion criteria and ensure that participants did not have any health conditions which could interfere with the results of the study. Those selected for participation were then administered the Brief MTL-BR.

Data analysis

Each stage of the study involved a different set of statistical procedures. Data from Stages 1 and 4 were analyzed using descriptive methods. Stage 2 involved the calculation of simple percent agreement between raters. Items with a minimum of 80% agreement were maintained in the instrument (Fagundes, 1985).
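As a minimal sketch of this retention criterion, the percent agreement for a pictorial stimulus can be computed as the share of judges who give the modal (most frequent) response. The function name, the sample responses, and the tabulation method below are illustrative assumptions, not the authors' actual procedure; only the 80% cutoff comes from the text.

```python
from collections import Counter

def percent_agreement(responses):
    """Share of raters giving the modal (most frequent) response to one item."""
    counts = Counter(responses)
    return counts.most_common(1)[0][1] / len(responses)

# Hypothetical naming responses from five non-expert judges for one image:
names = ["cat", "cat", "cat", "dog", "cat"]
agreement = percent_agreement(names)  # 4 of 5 judges agree -> 0.8
keep_item = agreement >= 0.80         # meets the 80% retention threshold
```

An item on which only three of five judges converged (60%) would fall below the threshold and be modified or replaced.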

In Stage 3, the Content Validity Ratio (CVR) (Lawshe, 1975) of each item was analyzed. The CVR was obtained using the following formula: CVR = (ne − N/2)/(N/2), where ne corresponds to the number of positive ratings for a given item, and N represents the total number of raters. The minimum acceptable CVR depends on the number of raters; for a study with five raters, the minimum acceptable value is 0.99 (Pacico & Hutz, 2015). After the CVR analysis, inter-rater agreement was evaluated using Gwet’s first-order agreement coefficient (AC1) (Gwet, 2008).
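The two indices above can be sketched in a few lines of Python. The CVR follows Lawshe's formula, and the AC1 sketch assumes binary adequate/inadequate ratings with the same number of raters per item; this is an illustrative reimplementation under those assumptions, not the code used in the study.

```python
def cvr(n_essential, n_raters):
    """Lawshe's Content Validity Ratio: (ne - N/2) / (N/2)."""
    half = n_raters / 2
    return (n_essential - half) / half

def gwet_ac1(ratings):
    """Gwet's first-order agreement coefficient for binary (0/1) ratings.

    `ratings` is a list of items; each item is a list of 0/1 ratings,
    one per rater (all items rated by the same number of raters).
    """
    r = len(ratings[0])  # raters per item
    # Observed agreement: mean pairwise agreement within each item.
    pa = sum(
        (item.count(0) * (item.count(0) - 1) +
         item.count(1) * (item.count(1) - 1)) / (r * (r - 1))
        for item in ratings
    ) / len(ratings)
    # Chance agreement from the mean prevalence of category 1.
    pi1 = sum(item.count(1) / r for item in ratings) / len(ratings)
    pe = 2 * pi1 * (1 - pi1)
    return (pa - pe) / (1 - pe)

# A CVR of 0.6 corresponds to one dissenting rater out of five:
print(cvr(4, 5))  # 0.6
print(cvr(5, 5))  # 1.0
```

With five raters, a single dissent yields CVR = 0.6, matching the values reported for the reexamined items in the Results section.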

Ethical aspects

This study was conducted as part of a research project approved by the Federal University of Santa Maria Ethics Board under protocol number 2.170.519. All participants provided written informed consent prior to entering the study, as recommended by National Health Council Resolution 466/12. Authorization for this study was also obtained from the authors of the original M1-Alpha, as provided by the ITC guidelines (2017).

Results

In Stage 1, the analysis of items from the extended MTL-BR, the M1-Alpha, and the MTL-BR version B revealed the need to substitute or adapt some of the stimuli from these instruments. A total of 120 images were selected for the Brief MTL-BR, including 25 from the M1-Alpha, 52 from the MTL-BR version B, and 43 newly developed items. Pictorial stimuli were drawn as fine-line black-and-white images. A total of 14 words (11 from the M1-Alpha and 3 newly developed) were also selected, as were five nonwords and five sentences (one from the M1-Alpha and four newly developed). Two tasks which are not in the M1-Alpha protocol were also added to the Brief MTL-BR. These were the automatic speech and nonverbal praxis tasks, both from the MTL-BR Battery.

Sixty-five images were redrawn and analyzed by the non-expert judges. This procedure revealed 100% agreement on 33 images, 96% agreement on 19 images, 93% agreement on six images, and 89% agreement on two images. Five images failed to achieve the 80% agreement threshold and were therefore modified. These results led to the substitution of six images and the addition of one image to two response cards.

After these changes were made, the five expert judges were given 66 images for analysis (Stage 3). A third of these items had unsatisfactory ratings, with a CVR of −0.20 to 0.60, and were therefore redesigned. The remaining 44 items had a CVR = 1 and remained in the instrument. The 22 redesigned items were then reexamined by the expert judges. Three items obtained a CVR of 0.6 while the remaining 19 had a CVR = 1. After careful analysis by the authors of the present study, it was decided that the three items with a low CVR would remain in the instrument, but as distractors rather than target stimuli. Additionally, Gwet’s agreement coefficient suggested near perfect inter-rater reliability (AC1 = 0.98; CI = 0.96–1).

The four remaining expert judges analyzed 14 words, five nonwords, and four sentences for inclusion in the Brief MTL-BR (Stage 3). Eight words had 100% inter-rater agreement, while six had 75%. Four nonwords had 100% agreement while one had 75%. Lastly, two of the sentences had 100% agreement, one reached 75%, and the other, 50% agreement. After analyzing the suggestions made by expert judges, the authors of this study opted to make no modifications to the items, since this would have detracted from the purpose of the instrument. One suggestion, for instance, pertained to the sentence “The apples are green” in the dictation task. The judges suggested the word “apples” (in Portuguese, “maçãs”) be replaced by a regularly spelled word. However, assessing the spelling of irregular words is one of the main goals of the task. Additionally, Gwet’s agreement coefficient suggested satisfactory inter-rater reliability for this item (AC1 = 0.74, CI = 0.57–0.90).

These procedures were followed by Stage 4, which began with Pilot Study 1. Seven participants were administered the preliminary version of the Brief MTL-BR. The mean duration of administration was 8 min (SD = 1.25). During the pilot study, some errors were detected in the instrument, including punctuation errors in two tasks (guided interview and oral comprehension) and an inadequate stimulus in the written comprehension task. After the stimulus in question was redesigned and the punctuation errors were corrected, the final version of the Brief MTL-BR was obtained. This instrument did not require any additional modifications and was therefore used in Pilot Study 2. The mean time of administration of the final version of the Brief MTL-BR in healthy adults was 11 min (SD = 4.00).

The Brief MTL-BR was therefore composed of the following tasks: directed interview, oral comprehension, written comprehension, copy, writing to dictation, reading aloud, oral naming, automatic speech, and non-verbal praxis. The directed interview task in the Brief MTL-BR is identical to the corresponding task in the M1-Alpha, save for minor modifications, such as the inclusion of the terms “in treatment/sick” in the question, “How long have you been in the hospital?” In the oral comprehension task (words, simple, and complex phrases), four of the stimuli obtained from the M1-Alpha were updated, and five new response cards were developed. Two response cards from the MTL-BR version B were also included in the Brief MTL-BR. The written comprehension task (words, simple, and complex phrases) was also modified. Three new cards were developed, while seven were reused from the MTL-BR version B. Only one response card from the M1-Alpha was updated and reused.

The copying task involved the same sentence as the M1-Alpha. In the writing to dictation task, all words were modified, and nonwords were included. The sentence in this task was also changed but retained the same structure and number of words as the sentence in the corresponding task of the M1-Alpha. The copy and writing to dictation tasks were marked as optional in the Brief MTL-BR, since the instrument is intended for bedside assessment, and some patients in this situation may be unable to complete these items.

The repetition test retained five of the eight words in the M1-Alpha protocol. Of the remaining three words, one underwent phonological modification (e.g., “cat”/“gato” was replaced with “duck”/“pato”), and two were replaced with pseudowords. A new sentence was also created for this task. In the reading aloud task, two words were replaced with pseudowords. A new phrase was also developed.

In the oral naming task, four stimuli (one noun and three verbs) were reused from the MTL-BR version B, three nouns were selected from the M1-Alpha, and five new stimuli were developed. One of the newly developed items was in the same semantic category as an item in the M1-Alpha (e.g., “nose” was replaced with “ear”). Automatic speech and nonverbal praxis tasks were also included in the Brief MTL-BR. Table 2 shows the tasks, objectives, and number of items used in the instrument.

Table 2 Description of Brief MTL-BR tasks

Discussion

Language assessment is essential to detect linguistic changes resulting from neurological damage (Kalbe, Reinhold, Brand, Markowitsch, & Kessler, 2005). When carried out at the bedside, it provides an initial overview of the patient’s deficits and favors the early initiation of speech therapy intervention (Kiran & Thompson, 2019; Salter, Jutai, Foley, Hellings, & Teasell, 2006; Sampaio & Moreira, 2016). For assessments to occur during hospitalization, the test must be quick and effective to administer (Seniów, Litwin, & Leśniak, 2009).

Given the limited number of language tests for aphasia that present psychometric evidence, it is necessary to adapt instruments to fill this gap in clinical practice and research. It is also important to ensure that test materials are culturally and linguistically sensitive (Ivanova & Hallowell, 2013).

When developing an instrument based on an existing assessment tool, it is important to ensure that it remains similar in content to the original instrument, even after implementing any necessary modifications (Borsa & Seize, 2017). Additionally, the items should be approved by the authors of the original instrument (Astepe & Köleli, 2019), as was the case in the present study. The stimuli in the Brief MTL-BR were selected based on existing instruments such as the MTL-BR and the M1-Alpha, as well as the MTL-BR version B. The resulting instrument fulfilled the same purpose as the M1-Alpha protocol, using updated and redesigned stimuli.

Additionally, brief measures of automatic behavior were included in the Brief MTL-BR. A similar procedure was followed in the MTL-BR, since these abilities are often preserved in severe aphasia and must therefore be examined (Vendrell, 2001). The Brief MTL-BR evaluates automatic verbal behavior through counting and singing “Happy Birthday.” In addition to these items, the MTL-BR also requests that patients name the days of the week. The importance of evaluating automatic speech was made evident in studies involving instruments such as the MAST (Nakase-Thompson et al., 2005), LAST (Flamand-Roze et al., 2011), and MTL-BR (Parente et al., 2016).

Items were also added to the instrument in order to evaluate nonverbal praxis, including the ability to perform isolated gestures and sequences of tongue and face movements (Parente et al., 2016), which may provide an indication of impaired nonverbal motor planning in addition to language disorders (Bonini & Radanovic, 2015; Rouse, 2020). The number of sentences in the repetition and reading aloud tasks was also reduced to facilitate screening. Instruments such as the Mini Mental State Examination (Chaves & Izquierdo, 1992), which are also used for screening purposes, are composed of short tasks, which are easy to administer and provide a cognitive profile of the patient. Additionally, pseudowords were included in the reading aloud, writing to dictation, and repetition tasks, in order to evaluate the perilexical or phonological reading route. These items follow the same structure as real words in Brazilian Portuguese. Irregular words were also included in the reading and dictation tasks in order to evaluate the lexical reading route. This type of stimulus serves a similar purpose in other instruments such as the MTL-BR (Parente et al., 2016) and the Boston Diagnostic Aphasia Examination (BDAE) (Goodglass & Kaplan, 1983).

Content validity is crucial for the adaptation and development of assessment instruments, as it provides information about the relevance, clarity, and representativeness of each item (Sireci, 1998; Oliveira, Sousa, & Maia, 2017). Therefore, after items are selected for a particular instrument, it is important for their semantic properties to be assessed by the target population as well as expert judges in order to collect evidence of content validity (Pasquali, 2010; Borsa & Seize, 2017). In the present study, the first of these procedures was referred to as “evaluation by non-expert judges” (Stage 2) and confirmed the intelligibility of the images in the instrument.

As instructed by the ITC’s guidelines (2017), expert judges must consider linguistic, cultural, and psychological differences in the intended population of an instrument. In this study, the analysis by expert judges (Stage 3) showed that the stimuli selected or developed for the instrument were adequate. The expert judges were asked to determine whether the stimuli were representative and consistent with the goals of the instrument (Mohajan, 2017; Zamanzadeh et al., 2015). The CVR was used to measure inter-rater agreement and determine the extent to which an item was considered essential for the test, as has been done in previous studies (Al-Thalaya et al., 2017; Bonini & Keske-Soares, 2018). Previous instrument adaptation studies which used Gwet’s first-order agreement coefficient (AC1) found that values over 0.70, such as those obtained in the present study, are indicative of satisfactory agreement between raters (Bukenya et al., 2017; Erivan et al., 2019). The main suggestion offered by judges who analyzed the words, pseudowords, and phrases for the Brief MTL-BR was the replacement of irregular words with regular ones. However, the authors of the present study chose not to accept this suggestion, since they believed irregular words should remain in the instrument to allow for an assessment of lexical strategies for reading and writing, both of which are important aspects of linguistic processing (Pinheiro & Rothe-Neves, 2001).

The pilot study (Stage 4) was crucial to provide an estimate of the time of administration and detect any errors in the instrument. According to the literature, using an instrument in a realistic situation often allows for the identification of issues which may have gone unnoticed in other steps of the study (Salles et al., 2011; Bailer, Tomitch, & D’Ely, 2011). Additionally, the pilot study helps evaluate the comprehensibility of test items and instructions, detect insufficiently sensitive tasks, and familiarize the examiners with scoring methods (Salles et al., 2011). According to the ITC guidelines (2017), confirmatory evidence about the psychometric quality of the adapted instrument should be obtained. The pilot study is a way of verifying the functioning of the test items and instructions and revising them when necessary.

The duration of administration of the Brief MTL-BR (11 min) is similar to that of instruments such as the UAS (Thommessen et al., 1999). As such, the Brief MTL-BR can be considered a brief assessment instrument suitable for inpatient screening, like the M1-Alpha (Ortiz, Osborn, & Chiari, 1993). However, this does not mean that administration to patients with acute aphasia will take the same amount of time; further studies will be conducted to investigate this.

Like screening instruments in the English language such as the MAST (Nakase-Thompson et al., 2005), which is internationally recognized as a major screening tool for aphasia (Salter et al., 2006), the Brief MTL-BR includes measures of both receptive and expressive language. However, in addition to evaluating linguistic abilities, the Brief MTL-BR also allows for an assessment of non-verbal praxis, unlike the MAST or other language screening tools.

Conclusion

The present study provided a detailed account of the adaptation of the Brief MTL-BR, which confirmed its content validity and applicability to adult and elderly individuals. The tasks selected for this instrument from the M1-Alpha and MTL-BR (Parente et al., 2016) capture the most significant language impairments in patients with left hemisphere damage. However, there is still a need for further studies involving the clinical population for which this assessment battery is intended. Other sources of psychometric evidence should therefore also be investigated, such as test-criterion relationships, convergent validity, factor analysis, and the consequences of testing.

Given the scarcity of language screening instruments for patients with left hemisphere damage in Brazilian Portuguese, the Brief MTL-BR constitutes an important contribution to the current literature. The present findings provided strong evidence of the content validity of the Brief MTL-BR. Further studies are required to investigate its reliability, as well as its construct and criterion validity.

Availability of data and materials

All data generated and analyzed during this study will be treated with total confidentiality. The dataset supporting the conclusions of this article is available from the authors on request.

Change history

  • 30 September 2020

    An amendment to this paper has been published and can be accessed via the original article.

Abbreviations

AABT:

Aachen Aphasia Bedside Test

AASP:

Acute Aphasia Screening Protocol

ADP:

Aphasia Diagnostic Profiles

BEST:

Bedside Evaluation Screening Test

CVR:

Content Validity Ratio

FAST:

Frenchay Aphasia Screening Test

GDS-15:

Geriatric Depression Scale

ITC:

International Test Commission

LAST:

Language Screening Test

MAST:

Mississippi Aphasia Screening Test

MTL-BR:

Montreal-Toulouse Language Assessment Battery

SST:

Sheffield Screening Test for Acquired Language Disorders

TBI:

Traumatic brain injury

UAS:

Ullevaal Aphasia Screening Test

References

  1. Al-Thalaya, Z., Nilipour, R., Ghoreyshi, Z. S., Pourshahbaz, A., Nassar, Z., & Younes, M. (2017). Reliability and validity of bedside version of Arabic Diagnostic Aphasia Battery (A-DAB-1) for Lebanese individuals. Aphasiology, 32, 323–339. https://doi.org/10.1080/02687038.2017.1338661

  2. Ardila, A., & Rubio-Bruno, S. (2017). Aphasia from the inside: The cognitive world of the aphasic patient. Applied Neuropsychology: Adult, 25(5), 434–440. https://doi.org/10.1080/23279095.2017.1323753

  3. Astepe, B. S., & Köleli, I. (2019). Translation, cultural adaptation, and validation of Australian pelvic floor questionnaire in a Turkish population. European Journal of Obstetrics Gynecology and Reproductive Biology, 234, 71–74. https://doi.org/10.1016/j.ejogrb.2019.01.004

  4. Azuar, C., Leger, A., Arbizu, C., Henry-Amar, F., Chomel-Guillaume, S., & Samson, Y. (2013). The Aphasia Rapid Test: An NIHSS-like aphasia test. Journal of Neurology, 260(8), 2110–2117. https://doi.org/10.1007/s00415-013-6943-x

  5. Bailer, C., Tomitch, L. M. B., & D’Ely, R. C. S. F. (2011). Planejamento como processo dinâmico: a importância do estudo piloto para uma pesquisa experimental em linguística aplicada. Revista Intercâmbio, 24, 129–146.

  6. Benson, D. F. (1993). Aphasia. In K. M. Heilman, & E. Valenstein (Eds.), Clinical neuropsychology (pp. 17–36). New York: Oxford University Press.

  7. Biniek, R., Huber, W., Glindemann, R., Willmes, K., & Klumm, H. (1992). The Aachen Aphasia Bedside Test-criteria for validity of psychologic tests. Nervenarzt, 63(8), 473–479.

  8. Bonini, J. B., & Keske-Soares, M. (2018). Pseudopalavras para terapia fonológica validadas por juízes especialistas. CoDAS, 30(5), 1–6. https://doi.org/10.1590/2317-1782/20182017013

  9. Bonini, M. V., & Radanovic, M. (2015). Cognitive deficits in post-stroke aphasia. Arquivos de Neuro-Psiquiatria, 73(10), 840–847. https://doi.org/10.1590/0004-282X20150133

  10. Borsa, J. C., & Seize, M. de M. (2017). Construção e adaptação de instrumentos psicológicos: dois caminhos possíveis. In B. F. Damásio & J. C. Borsa (Eds.), Manual do desenvolvimento de instrumentos psicológicos (1st ed.). São Paulo, SP: Vetor.

  11. Bukenya, R., Ahmed, A., Andrade, J. M., Grigsby-Toussaint, D. S., Muyonga, J., & Andrade, J. E. (2017). Validity and reliability of general nutrition knowledge questionnaire for adults in Uganda. Nutrients, 9(2), 1–11. https://doi.org/10.3390/nu9020172

  12. Casarin, F. S., Pagliarin, K. C., Altmann, R. F., Parente, M. A. M. P., Ferré, H., Cotê, H., … Fonseca, R. P. (2020). Bateria Montreal de Avaliação da Comunicação Breve-MAC B: fidedignidade e validade. CoDAS, 32(1), 1–7. https://doi.org/10.1590/2317-1782/20192018306

  13. Chaves, M. L., & Izquierdo, Y. (1992). Differential diagnosis between dementia and depression: A study of efficiency increment. Acta Neurologica Scandinavica, 85(6), 378–382. https://doi.org/10.1111/j.1600-0404.1992.tb06032.x

  14. Crary, M. A., Haak, N. J., & Malinsky, A. E. (1989). Preliminary psychometric evaluation of an acute aphasia screening protocol. Aphasiology, 3(7), 611–618.

  15. Oliveira, M. A. M. de, Sousa, W. P. da S., & Maia, E. M. C. (2017). Adaptação e validade de conteúdo da versão brasileira da Cambridge Worry Scale. Revista de Enfermagem UFPE On Line, 11, 2083–2089. https://doi.org/10.5205/reuol.9302-81402-1-RV.1105sup201713

  16. de Salles, J. F., Fonseca, R. P., Cruz-Rodrigues, C., Mello, C. B., Barbosa, T., & Miranda, M. C. (2011). Desenvolvimento do Instrumento de Avaliação Neuropsicológica Breve Infantil NEUPSILIN-INF. Psico-USF, 16(3), 297–305. https://doi.org/10.1590/S1413-82712011000300006

  17. Doesborgh, S. J., van de Sandt-Koenderman, W. M., Dippel, D. W., van Harskamp, F., Koudstaal, P. J., & Visch-Brink, E. G. (2003). Linguistic deficits in the acute phase of stroke. Journal of Neurology, 250(8), 977–982. https://doi.org/10.1007/s00415-003-1134-9

  18. El Hachioui, H., Visch-Brink, E. G., de Lau, L. M. L., van de Sandt-Koenderman, M. W. M. E., Nouwens, F., Koudstaal, P. J., & Dippel, D. W. J. (2017). Screening tests for aphasia in patients with stroke: A systematic review. Journal of Neurology, 264(2), 211–220. https://doi.org/10.1007/s00415-016-8170-8

  19. Enderby, P., & Crow, E. (1996). Frenchay aphasia screening test: Validity and comparability. Disability and Rehabilitation, 18(5), 238–240. https://doi.org/10.3109/09638289609166307

  20. Erivan, R., Villatte, G., Chaput, T., Mulliez, A., Ollivier, M., Descamps, S., & Boisgard, S. (2019). French translation and cultural adaptation of a questionnaire for patients with hip or knee prosthesis. Orthopaedics and Traumatology: Surgery and Research, 105(3), 435–440. https://doi.org/10.1016/j.otsr.2019.01.011

  21. Fachel, J. M. G., & Camey, S. (2000). Avaliação psicométrica: a qualidade das medidas e o entendimento dos dados. In J. Cunha (Ed.), Psicodiagnóstico (pp. 158–170). Porto Alegre, RS: Artes Médicas.

  22. Fagundes, A. J. F. M. (1985). Descrição, definição e registro de comportamento (7th ed.). São Paulo: Edicon.

  23. Flamand-Roze, C., Falissard, B., Roze, E., Maintigneux, L., Beziz, J., Chacon, A., & Denier, C. (2011). Validation of a new language screening tool for patients with acute stroke: The Language Screening Test (LAST). Stroke, 42(5), 1224–1229. https://doi.org/10.1161/STROKEAHA.110.609503

  24. Fonseca, R. P., Casarin, F. S., de Oliveira, C. R., Gindri, G., Ishigaki, E. C. S. S., Ortiz, K. Z., … Scherer, L. C. (2011). Adaptação de Instrumentos Neuropsicológicos Verbais: Um fluxograma de Procedimentos para Além da Tradução. Interação em Psicologia, 15, 59–69. https://doi.org/10.5380/psi.v15i0.25374

  25. Gauer, G., Gomes, C. M. A., & Haase, V. G. (2010). Neuropsicometria: Modelo clássico e análise de Rasch. In Avaliação neuropsicológica (pp. 22–30). Porto Alegre, RS: Artmed.

  26. Godecke, E., Hird, K., Laylor, E. E., Rai, T., & Phillips, M. R. (2012). Very early poststroke aphasia therapy: A pilot randomized controlled efficacy trial. Int J Stroke, 7, 635–644. https://doi.org/10.1111/j.1747-4949.2011.00631.x

  27. Goodglass, H., & Kaplan, E. (1983). The assessment of aphasia and related disorders (2nd ed.). Philadelphia, PA, USA: Lea & Febiger.

  28. Gwet, K. L. (2008). Computing inter-rater reliability and its variance in the presence of high agreement. Br J Math Stat Psychol, 61, 29–48. https://doi.org/10.1348/000711006X126600

  29. Helm-Estabrooks, N. (1992). Aphasia diagnostic profiles. Austin, TX: Pro Ed Inc.

  30. International Test Commission. (2017). The ITC guidelines for translating and adapting tests (2nd ed.). www.InTestCom.org

  31. Ivanova, M. V., & Hallowell, B. (2013). A tutorial on aphasia test development in any language: Key substantive and psychometric considerations. Aphasiology, 27(8), 891–920. https://doi.org/10.1080/02687038.2013.805728

  32. Johnson, A. F., Valachovic, A. M., & George, K. P. (1988). Speech-language pathology practice in the acute care setting: A consultative approach. In A. F. Johnson & B. H. Jacobson (Eds.), Medical speech-language pathology: A practitioner’s guide (pp. 96–130). New York: Thieme.

  33. Kalbe, E., Reinhold, N., Brand, M., Markowitsch, H. J., & Kessler, J. (2005). A new test battery to assess aphasic disturbances and associated cognitive dysfunctions - German normative data on the Aphasia Check List. Journal of Clinical and Experimental Neuropsychology, 27(7), 779–794. https://doi.org/10.1080/13803390490918273

  34. Kiran, S., & Thompson, C. K. (2019). Neuroplasticity of language networks in aphasia: Advances, updates, and future challenges. Frontiers in Neurology, 10, 1–14. https://doi.org/10.3389/fneur.2019.00295

  35. Kirk, C., & Vigeland, L. (2015). Content coverage of single-word tests used to assess common phonological error patterns. Language, Speech, and Hearing Services in Schools, 46, 14–29. https://doi.org/10.1044/2014_LSHSS-13-0054

  36. Koyuncu, E., Çam, P., Altınok, N., Çallı, D. E., Duman, T. Y., & Özgirgin, N. (2016). Speech and language therapy for aphasia following subacute stroke. Neural Regeneration Research, 11(10), 1591–1594. https://doi.org/10.4103/1673-5374.193237

  37. Landenberger, T., Rinaldi, J., Frison, T. B., & De Salles, J. F. (2017). Reabilitação neuropsicológica em um caso de traumatismo cranioencefálico em fase crônica. Neuropsicologia Latinoamericana, 9(1), 9–18. https://doi.org/10.5579/rnl.2016.0322

  38. Lawshe, C. H. (1975). A quantitative approach to content validity. Personnel Psychology, 28(4), 563–575. https://doi.org/10.1111/j.1744-6570.1975.tb01393.x

  39. Lecours, A. R., et al. (1985). Illiteracy and brain damage: Aphasia testing in culturally contrasted population (control subjects). Montreal: Centre de Recherche du Centre Hospitalier Côte-des-Neiges.

  40. Lezak, M. D., Howieson, D. B., & Loring, D. W. (2004). Neuropsychological assessment. New York: Oxford University Press.

  41. Maranhão, D. K. M., Souza, M. L. P. de, Costa, M. L. G. da, & Vieira, A. C. de C. (2018). Caracterização das afasias na hemorragia subaracnóidea aneurismática. CoDAS, 30(1), 1–6. https://doi.org/10.1590/2317-1782/20182016225

  42. Marshall, R. C., & Wright, H. H. (2007). Developing a clinician-friendly aphasia test. Am J Speech Lang Pathol, 16, 295–315. https://doi.org/10.1044/1058-0360(2007/035)

  43. Martinez, E. O., Saborit, A. R., Carbonell, L. B. T., & Contreras, R. M. D. (2014). Epidemiología de la afasia en Santiago de Cuba. Neurología Argentina, 6(2), 77–82. https://doi.org/10.1016/j.neuarg.2013.12.002

  44. McLeod, S., & Verdon, S. (2014). A review of 30 speech assessments in 19 languages other than English. American Journal of Speech-Language Pathology, 23(4), 708–723. https://doi.org/10.1044/2014_AJSLP-13-0066

  45. Mohajan, H. K. (2017). Two criteria for good measurements in research: Validity and reliability. Annals of Spiru Haret University. Economic Series, 17(4), 59–82. doi:10.26458/1746

  46. Nakase-Thompson, R., Manning, E., Sherer, M., Yablon, S. A., Gontkovsky, S. L. T., & Vickery, C. (2005). Brief assessment of severe language impairments: Initial validation of the Mississippi aphasia screening test. Brain Injury, 19(9), 685–691. https://doi.org/10.1080/02699050400025331

  47. Nespoulous, J. L., Lecours, A. R., Lafond, D., Lemay, A., Joanette, Y., & Cot, F. (1986). Protocolo Montreal-Toulouse de exame lingüístico de afasia MT-86. Montreal: Laboratoire Théophile-Alajouanine.

  48. Nursi, A., Padrik, M., Nursi, L., Pähkel, M., Virkunen, L., Küttim-Rips, A., & Taba, P. (2018). Adaption and validation of the Mississippi Aphasia Screening Test to Estonian speakers with aphasia. Brain and Behavior, 9(1), 1–8. https://doi.org/10.1002/brb3.1188

  49. Ortiz, K. Z. (1991). Aplicação do teste M1-Alpha em 35 sujeitos: descrição e questionamentos. São Paulo: Escola Paulista de Medicina.

  50. Ortiz, K. Z. (2010). Afasia. In K. Z. Ortiz (Ed.), Distúrbios Neurológicos Adquiridos: Linguagem e Cognição. Barueri, SP: Manole.

  51. Ortiz, K. Z., & da Costa, F. P. (2011). M1-Alpha test in normal subjects with low educational level: A pilot study. Jornal da Sociedade Brasileira de Fonoaudiologia, 23(3), 220–226. https://doi.org/10.1590/S2179-64912011000300007

  52. Ortiz, K. Z., Osborn, E., & Chiari, B. M. (1993). O teste M1-Alpha como instrumento de avaliação da afasia. Pró-Fono, 5(1), 23–29.

  53. Pacico, J. C., & Hutz, C. S. (2015). Validade. In C. S. Hutz, D. R. Bandeira, & C. M. Trentini (Eds.), Psicometria. Porto Alegre, RS: Artmed.

  54. Pagliarin, K. C., Gindri, G., Ortiz, K. Z., Parente, M. A. M. P., Joanette, Y., Nespoulous, J. L., & Fonseca, R. P. (2015). Relationship between the Brazilian version of the Montreal-Toulouse language assessment battery and education, age and reading and writing characteristics. A cross-sectional study. São Paulo Medical Journal, 133(4), 298–306. https://doi.org/10.1590/1516-3180.2014.8461610

  55. Pagliarin, K. C., Ortiz, K. Z., dos Santos Barreto, S., Parente, M. A. D. M. P., Nespoulous, J. L., Joanette, Y., & Fonseca, R. P. (2015). Montreal–Toulouse Language Assessment Battery: Evidence of criterion validity from patients with aphasia. Journal of the Neurological Sciences, 357(1-2), 246–251. https://doi.org/10.1016/j.jns.2015.07.045

  56. Pagliarin, K. C., Ortiz, K. Z., Parente, M. A. D. M. P., Arteche, A., Joanette, Y., Nespoulous, J. L., & Fonseca, R. P. (2014). Montreal-Toulouse language assessment battery for aphasia: Validity and reliability evidence. NeuroRehabilitation, 34(3), 463–471. https://doi.org/10.3233/NRE-141057

  57. Pagliarin, K. C., Ortiz, K. Z., Parente, M. A. M. P., Nespoulous, J. L., Joanette, Y., & Fonseca, R. P. (2014). Individual and sociocultural influences on language processing as assessed by the MTL-BR Battery. Aphasiology, 28(10), 1244–1257. https://doi.org/10.1080/02687038.2014.918573

  58. Parente, M. A. de M. P., Fonseca, R. P., Pagliarin, K. C., Barreto, S. dos S., Soares-Ishigaki, E. C. S., Hübner, L. C., & Ortiz, K. Z. (2016). Bateria Montreal-Toulouse de Avaliação da Linguagem – Bateria MTL-Brasil. São Paulo: Vetor.

  59. Pasquali, L. (1999). Instrumentos psicológicos: manual prático de elaboração. Brasília: LabPAM/IBAPP.

  60. Pasquali, L. (2009). Psicometria. Rev Esc Enferm USP, 43, 992–999.

  61. Pasquali, L. (2010). Instrumentação psicológica: Fundamentos e práticas. Porto Alegre: Artmed.

  62. Pernambuco, L., Espelt, A., Magalhães Junior, H. V., & de Lima, K. C. (2017). Recomendações para elaboração, tradução, adaptação transcultural e processo de validação de testes em Fonoaudiologia. CoDAS, 29(3), 5–8. https://doi.org/10.1590/2317-1782/20172016217

  63. Pinheiro, A. M. V., & Rothe-Neves, R. (2001). Avaliação Cognitiva de Leitura e Escrita: As Tarefas de Leitura em Voz Alta e Ditado. Psicologia: Reflexão e Crítica, 14(2), 399–408. https://doi.org/10.1590/S0102-79722001000200014

  64. Raju, R., & Krishnan, G. (2015). Adaptation and validation of stroke-aphasia quality of life (SAQOL-39) scale to Malayalam. Annals of Indian Academy of Neurology, 18(4), 441–444. https://doi.org/10.4103/0972-2327.160068

  65. Rohde, A., Worrall, L., Godecke, E., O’Halloran, R., Farrell, A., & Massey, M. (2018). Diagnosis of aphasia in stroke populations: A systematic review of language tests. PLoS ONE, 13(3), e0194143. https://doi.org/10.1371/journal.pone.0194143

  66. Rouse, M. H. (2020). Neuroanatomy for speech-language pathology and audiology (2nd ed.). Burlington, MA: Jones & Bartlett Learning.

  67. Rubio, D. M., Berg-Weger, M., Tebb, S. S., Lee, S., & Rauch, S. (2003). Objectifying content validity: Conducting a content validity study in social work research. Social Work Research, 27(2), 94–104. https://doi.org/10.1093/swr/27.2.94

  68. Salter, K., Jutai, J., Foley, N., Hellings, C., & Teasell, R. (2006). Identification of aphasia post stroke: A review of screening assessment tools. Brain Injury, 20, 559–568. https://doi.org/10.1080/02699050600744087

  69. Sampaio, G. R., & Moreira, E. (2016). Caracterização dos distúrbios comunicativos em indivíduos pós AVCI por meio da aplicação adaptada da bateria MAC. Distúrbios Comun, 28(3), 452–461.

  70. Santiago, G. S., & Gárate, P. R. (2016). Epidemiologia, rehabilitación y pronóstico de las afasias. Rev Hered Rehab, 1, 11–20. https://doi.org/10.20453/rhr.v1i1.2891

  71. Seniów, J., Litwin, M., & Leśniak, M. (2009). The relationship between non-linguistic cognitive deficits and language recovery in patients with aphasia. Journal of the Neurological Sciences, 283(1–2), 91–94. https://doi.org/10.1016/j.jns.2009.02.315

  72. Shipley, K. G., & McAfee, J. G. (2016). Assessment in speech-language pathology: A resource manual (5th ed.). Cengage Learning.

  73. Sireci, S. G. (1998). The construct of content validity. Social Indicators Research, 45, 83–117. https://doi.org/10.1023/A:1006985528729

  74. Syder, D., et al. (1993). Sheffield screening test for acquired language disorders: Manual. Windsor, UK: Nfer-Nelson.

  75. Thommessen, B., Thoresen, G. E., Bautz-Holter, E., & Laake, K. (1999). Screening by nurses for aphasia in stroke – the Ullevaal Aphasia Screening (UAS) test. Disabil Rehabil, 21, 110–115. https://doi.org/10.1080/096382899297846

  76. Vendrell, J. M. (2001). Las afasias: Semiología y tipos clínicos. Revista de Neurologia, 32(10), 980–986. https://doi.org/10.33588/rn.3210.2000183

  77. West, J. F., Sands, E. S., & Ross-Swain, D. (1998). Bedside Evaluation Screening Test (2nd ed.). Austin, TX: Pro-Ed.

  78. Yesavage, J., et al. (1983). Development and validation of a geriatric depression screening scale: a preliminary report. J Psychiat Res, 17(1), 37–49.

  79. Zamanzadeh, V., Ghahramanian, A., Rassouli, M., Abbaszadeh, A., Alavi-Majd, H., & Nikanfar, A. R. (2015). Design and implementation content validity study: Development of an instrument for measuring patient-centered communication. Journal of Caring Sciences, 4(2), 165–178. https://doi.org/10.15171/jcs.2015.017

Acknowledgements

The authors thank CAPES for the first author’s scholarship and FAPERGS (edital 01/2017–ARD, number 17/2551-0000837-5) for financing this project.

Funding

This research had the financial support of FAPERGS (edital 01/2017–ARD, number 17/2551-0000837-5) and CAPES for the first author scholarship.

Author information

Contributions

RFA collected the data, analyzed the results, and performed the writing of the manuscript. KZO participated in the elaboration of the research and writing of the manuscript. TRB contributed to the data collection and writing of the manuscript. EPO contributed to the data collection and writing of the manuscript. KCP was responsible for the project, study design, general orientation of the stages of execution and preparation of the manuscript, and writing of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Raira Fernanda Altmann.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Altmann, R.F., Ortiz, K.Z., Benfica, T.R. et al. Brief Montreal-Toulouse Language Assessment Battery: adaptation and content validity. Psicol. Refl. Crít. 33, 18 (2020). https://doi.org/10.1186/s41155-020-00157-6

Keywords

  • Assessment
  • Language
  • Aphasia
  • Adult
  • Elderly