- Open Access
A short version of the Metamemory in Adulthood Questionnaire (MIA) in Portuguese
Psicologia: Reflexão e Crítica, volume 29, Article number: 37 (2016)
The aim of this study was to develop a short version of the Metamemory in Adulthood Questionnaire (MIA) in Brazilian Portuguese. The original MIA is an English-language instrument composed of 108 items, divided into seven dimensions of metamemory (Strategy, Task, Capacity, Change, Anxiety, Achievement and Locus). Despite being widely used, the length of the instrument makes its application impractical in many contexts, reinforcing the need for a short version. A total of 472 participants answered the original full version of the MIA. First, Item Response Theory (IRT) analyses revealed that nine items could be excluded due to poor infit and outfit values. After exploratory factor analyses, the 99 remaining items were judged by five experts, who chose the most appropriate items following previously established criteria (factor loading, repetitiveness, bad writing, and temporal/cultural inadequacy). A 39-item version (MIAr) was obtained, with the same factorial structure as the original MIA. The MIAr demonstrated satisfactory internal consistency indices, as well as evidence of convergent validity and of validity based on the response process. The results revealed that the MIAr achieved good psychometric properties, serving as a more parsimonious and practical option for metamemory assessment.
Metamemory consists of the overall knowledge and understanding that someone has about memory in general, and about one's own memory specifically (Schraw 2008). The concept was initially used by Flavell (1979) in the context of developmental psychology research, and was then systematised by Nelson and Narens (1990), who described a cognitive model of memory monitoring processes. The Metamemory in Adulthood Questionnaire (MIA; Dixon and Hultsch 1983a, 1983b) is one of the most widely used instruments to measure individual traits of metamemory, and has an adapted version for the Brazilian context (Yassuda et al. 2005).
The MIA is composed of 108 statements that are answered on a five-point Likert scale, evaluating seven factors of metamemory (Hultsch et al. 1987): 1) Strategy: knowledge and use of information about skills that can improve memory; 2) Task: knowledge of basic memory processes; 3) Capacity: perception of memory capacities in different tasks; 4) Change: perception of memory stability over the years; 5) Anxiety: knowledge of emotional influence on memory performance; 6) Achievement: perceived relevance of having an accurate memory and good memory performance; and 7) Locus: perceived personal control over memory abilities.
A good understanding of metamemory functions has important practical implications. Metamemory is closely related to learning processes (Bjork et al. 2013), and is an important predictor of academic achievement (Rawson et al. 2002; Schraw and Gutierrez 2015). It also affects cognitive aging, being associated with changes in executive functioning (Bender and Raz 2012; Palmer et al. 2014). It must be considered that metamemory assessment using questionnaires has some limitations, such as being less predictive of memory performance than online metamemory tasks (Veenman et al. 2006; Zortea et al. 2014). Nevertheless, psychometric instruments allow for an indirect evaluation of beliefs and perceptions related to metamemory. The MIA, in particular, provides information about the use of strategies, knowledge of basic memory processes, the relevance of having a trained memory, and perceived changes in memory capacities, among other elements of metamemory described previously.
The MIA stands out among other instruments that also evaluate metamemory, like the Memory Function Questionnaire (MFQ; Gilewski et al. 1990), the Everyday Memory Questionnaire (EMQ-r; Royle and Lincoln 2008) and the Memory Compensation Questionnaire (MCQ; de Frias and Dixon 2005). These instruments do not yet have a Brazilian Portuguese version. Particularities of the MIA include the evaluation of both knowledge about and self-efficacy of one's memory, as well as the measurement of dimensions that go beyond memory problems, which predominate in the MFQ and the EMQ-r. Furthermore, the MIA shows considerable evidence of psychometric validity, such as convergent validity with other instruments (Hertzog et al. 1989), discriminant validity with respect to other constructs such as personality, depression, self-efficacy and locus of control (Hertzog et al. 1990a), and predictive validity with self-efficacy and memory performance (Hertzog et al. 1990b).
The MIA is also widely used in many countries, accumulating evidence of validity in several languages and demonstrating adequate cultural adaptability. The original MIA was developed in studies in the USA and Canada (Hertzog et al. 1990a). Since then, adaptations and reduced versions have been elaborated in other countries, such as the Netherlands (Ponds and Jolles 1996), Jordan (Alquraan and Aljarah 2011), Japan (Kinjo et al. 2013) and South Africa (Van Ede 1995).
In Brazil, the complete instrument was adapted by Yassuda et al. (2005). This Portuguese version showed satisfactory reliability indices in six of its seven factors (α values between 0.79 and 0.87), with the exception of the Locus subscale (α = 0.66). The instrument also showed adequate temporal stability (Spearman correlations between 0.57 and 0.83, p < 0.05), except for the Locus subscale (ρ = 0.43). Differences were also found between young and older adults in the metamemory subscales, such that older adults showed greater memory knowledge in general and young adults had more favourable perceptions and feelings about memory.
Despite the satisfactory psychometric properties of the Brazilian Portuguese MIA version, its length remains a methodological barrier in many research contexts, such as studies with elderly samples, with individuals with cognitive dysfunctions, or with extensive experimental designs. In clinical contexts, for example, the higher demand on attentional and cognitive resources in elderly people can impair the use of the instrument. Moreover, the administration time can make it impractical to include the full version of the MIA in research with large samples. It would be very beneficial in such contexts to have a short MIA version that briefly evaluates the same factors as the original instrument.
Studies such as those of Ponds and Jolles (1996) and of Alquraan and Aljarah (2011) have already produced reduced versions of the MIA without compromising its psychometric properties. These studies used multivariate analyses, such as factor analysis and IRT models, to establish safe procedures for instrument reduction while keeping the original dimensions. The present study aimed to develop a short version of the MIA in Brazilian Portuguese and to present evidence of its validity, using IRT, exploratory factor analysis, judges' evaluation, and tests of convergent validity.
A total of five samples of participants took part in this study. The first consisted of 185 university students, recruited in person at two universities in the central region of Brazil and one in the south. The second consisted of 192 participants who took part in an online survey advertised on social media and email lists. The last three samples consisted of participants from the southern region of Brazil: 19 elementary school teachers, 27 adults, and a clinical sample of 20 patients with stroke. Table 1 shows demographic characteristics of each sample. In the total sample (n = 472), participants were 29 years old on average (SD = 18.5), and 69 % were women. Most were from the central region of Brazil (53.8 %), followed by the southern (22.9 %), southeastern (10.2 %), northeastern (7.8 %) and northern (4.4 %) regions. Most of the participants were university students (57 %).
Metamemory in Adulthood Questionnaire (MIA; Dixon and Hultsch 1983a, 1983b; Dixon et al. 1988). We used a Brazilian Portuguese adaptation of the original MIA (Yassuda et al. 2005). This version has the same factorial structure as the original one, with 108 items divided into seven subscales: Strategy (α = 0.81, n = 18, “Do you write appointments on a calendar to help you remember them?”), Task (α = 0.76, n = 16, “For most people, facts that are interesting are easier to remember than facts that are not”), Capacity (α = 0.82, n = 17, “I am good at remembering names”), Change (α = 0.86, n = 18, “The older I get the harder it is to remember things clearly”), Anxiety (α = 0.82, n = 14, “I find it harder to remember things when I'm upset”), Achievement (α = 0.77, n = 16, “It is important that I am very accurate when remembering names of people”), and Locus (α = 0.67, n = 9, “Even if I work on it my memory ability will go downhill”). Responses to some items are given on a five-point scale ranging from 1 (totally agree) to 5 (totally disagree), while the remaining items are answered on a five-point scale ranging from 1 (never) to 5 (always).
Prospective and Retrospective Memory Questionnaire (PRMQ; Crawford et al. 2003). We used an adapted Brazilian Portuguese version of the PRMQ (Benites and Gomes 2007) in order to investigate the convergent validity of the short version of the MIA. In this instrument, participants self-evaluate the quality of their own prospective and retrospective memory. This version has 10 statements, five related to prospective memory (e.g. “Do you decide to do something in a few minutes' time and then forget to do it?”, α = 0.80), and five related to retrospective memory (e.g. “Do you forget something that you were told a few minutes before?”, α = 0.71). The statements are judged on a five-point scale that ranges from 1 (never) to 5 (very often). The Brazilian Portuguese version has demonstrated good validity evidence, which was further confirmed in later studies (Piauilino et al. 2010).
Four of the five samples answered the instruments in person (n = 280), and one sample participated in an online survey (n = 192). Participants in the survey were invited through announcements published in email lists and on social media, and only this sample answered the PRMQ; the other samples responded only to the MIA. Participants in the online sample took about 25 min to complete the survey, while the other samples took about 15 min to complete the procedure. This project was approved by the local Research Ethics Committee (numbers 988.985, 2009028 and 21717), and all participants signed an Informed Consent Form.
The main analyses of the study aimed to: (1) reduce the number of items of the original MIA, and (2) obtain evidence of validity for the short version. Initially, we used the software Winsteps 3.72.2 to evaluate the fit of the items of the seven subscales to the Rasch Rating Scale model (Rasch 1960), and to estimate the item difficulty parameters (δi). In the Rasch model, endorsing an item is a function of the individual's trait level (θn) and of the difficulty of the item (δi). These measures are represented on a logarithmic scale of the odds (log odds units, logits), ranging from negative to positive infinity but generally covering values from −3.00 to +3.00 (Bond and Fox 2007). The difficulty of an item (δ) indicates the probability of endorsing it as a function of the level of the latent trait. In the current analysis the item difficulties were free to vary, while their mean was fixed at zero. Values below the mean indicate that an item is easier relative to the item pool, while a positive logit indicates a more difficult item (Bond and Fox 2007).
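The relation between trait level and item difficulty described above can be illustrated with a minimal sketch of the dichotomous Rasch model (the Rating Scale model used in this study extends it with category thresholds for the five response options); the function name is ours, for illustration only:

```python
import math

def rasch_probability(theta, delta):
    """Probability of endorsing an item under the dichotomous Rasch model.

    theta: person trait level in logits; delta: item difficulty in logits.
    The probability depends only on the difference theta - delta.
    """
    return 1.0 / (1.0 + math.exp(-(theta - delta)))

# When the trait level equals the item difficulty, the probability is 0.50;
# items with negative delta are "easier" (more likely to be endorsed).
p_equal = rasch_probability(0.0, 0.0)   # 0.50
p_easy = rasch_probability(0.0, -1.0)   # ~0.73
p_hard = rasch_probability(0.0, 1.0)    # ~0.27
```

This makes concrete why logits around −3.00 to +3.00 cover almost the full probability range.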
The fit of individual items to the Rasch model was evaluated using the infit and outfit indices, which quantify the items' residuals in relation to the model. The ideal fit value is 1.00, but values between 0.50 and 1.50 are acceptable. The unidimensionality of the instrument was evaluated through the analysis of model residuals, investigating possible salient dimensions in the data. Correlations between residuals above 0.32 were found, which indicates local dependency and possible violations of unidimensionality.
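As a sketch of how these two fit statistics are defined (following the standard mean-square formulations; the function names are ours): outfit is the unweighted mean of squared standardized residuals, while infit weights each squared residual by its model variance, making it less sensitive to occasional outlying responses.

```python
def outfit_mnsq(residuals, variances):
    """Outfit mean-square: unweighted average of squared standardized residuals."""
    z_sq = [(r * r) / v for r, v in zip(residuals, variances)]
    return sum(z_sq) / len(z_sq)

def infit_mnsq(residuals, variances):
    """Infit mean-square: squared residuals weighted by their model variances."""
    return sum(r * r for r in residuals) / sum(variances)

# A value of 1.00 indicates ideal fit; the 0.50-1.50 band is the acceptable
# range adopted in this study.
```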
The items retained after the Rasch analysis were evaluated by Exploratory Factor Analysis (EFA), using principal axis factoring and orthogonal rotation, in order to verify whether the obtained factorial structure was similar to the original one. The resulting structure was then submitted to a judges' evaluation. Five specialists in memory participated in this phase, independently choosing, by means of an evaluation form, which items should remain in the reduced version. The judges were instructed to evaluate each item in terms of repetitiveness (“Did the content of the item already appear before?”), bad writing (“Is the item difficult to understand because it is ambiguous, extensive, or has too many inversions/negations?”), and cultural or temporal inadequacy (“Is the item outdated temporally or culturally in the Brazilian context?”). Judges were also instructed to consider the factor loading of the items. After evaluating each of these criteria, the judges had to choose five items per factor to remain in the reduced version. Items chosen by at least three judges were included in the final version of the reduced instrument. New exploratory factor analyses were conducted in order to verify evidence of validity of the short MIA version (MIAr), checking whether the original structure was retained. The MIAr was then submitted to a new evaluation using the Rasch model, in order to verify evidence of validity based on the item response process (Primi et al. 2009). The analyses up to this point were conducted on the total sample. Correlation tests were conducted between the MIAr and the PRMQ for the online survey subsample (n = 192), in order to test the convergent validity of the short version. Finally, MIAr scores were compared between non-clinical and clinical groups, in order to check for further construct validity of the reduced version.
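The retention rule applied to the judges' choices (inclusion when at least three of the five judges selected an item) amounts to a simple majority filter, sketched here with hypothetical item labels:

```python
def select_items(votes, min_judges=3):
    """Keep items chosen by at least `min_judges` judges.

    votes: mapping from item label to the number of judges (out of five)
    who selected it for the reduced version.
    """
    return [item for item, n in votes.items() if n >= min_judges]

# Hypothetical tallies for three items of one subscale: item_a and item_b
# reach the three-judge threshold; item_c does not.
kept = select_items({"item_a": 5, "item_b": 3, "item_c": 2})
```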
Analyses of items to obtain a reduced version of the MIA (MIAr)
An initial evaluation using IRT was conducted in order to reduce the number of items of the MIA. This analysis indicated that nine items could be excluded from the item pool due to poor infit and outfit values. The remaining 99 items were then submitted to an exploratory factor analysis (KMO = 0.80), with principal axis factoring and orthogonal rotation, given that we found only very small correlations between the original factors. The eigenvalue criterion (greater than 1), the scree plot, and parallel analyses suggested the existence of 11 factors in the instrument. However, the theoretical interpretation of structures with more than seven factors was compromised. The seven-factor composition was more compatible with the original structure and more adequate for use in subsequent analyses.
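Of the three retention criteria mentioned, parallel analysis is the least familiar; a minimal sketch of Horn's procedure, which compares the observed eigenvalues with the mean eigenvalues of random data of the same dimensions, could look like the following. This is a generic illustration, not the exact routine used in the study:

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Return the number of factors whose observed eigenvalues exceed
    the mean eigenvalues obtained from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the observed correlation matrix, in descending order.
    observed = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Average eigenvalues over n_iter random datasets of the same shape.
    random_mean = np.zeros(p)
    for _ in range(n_iter):
        random_data = rng.standard_normal((n, p))
        random_mean += np.linalg.eigvalsh(np.corrcoef(random_data, rowvar=False))[::-1]
    random_mean /= n_iter
    return int(np.sum(observed > random_mean))
```

Factors are retained only while the data's eigenvalues stay above what pure noise would produce, which guards against the over-extraction that the eigenvalue-greater-than-1 rule alone can cause.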
The judges' evaluation was then conducted, aiming to reduce the 99 remaining items to a version with 35 items, following the criteria of factor loading, repetitiveness (“Did the content of the item appear before?”), bad writing (“Is the item difficult to understand because it is ambiguous, extensive, or has too many inversions/negations?”) and cultural or temporal inadequacy (“Is the item outdated temporally or culturally in the Brazilian context?”). The judges then selected the five items that, in their evaluation, best represented each subscale. The most common issues were related to repetitiveness (e.g. the items "I get anxious when someone asks me to memorise something" and "When someone I do not know so well asks me to memorise something, I get anxious"), followed by bad writing (e.g. "I do not have any difficulty to remember where I put my things", for the excessive negations) and cultural/temporal inadequacy (e.g. "I cannot expect to have a good memory for ZIP Codes, at my age"). Items selected by at least three of the five judges were included in the reduced version, which occurred for 80 % of the items (n = 29), indicating a high rate of consensus among the judges.
Items that did not reach majority approval were debated by the authors, who decided which items should remain in the instrument, following the same evaluation criteria. After this procedure, only the factor Strategy remained with six items, due to the distinctiveness of the sixth item in relation to the others of the same factor. All the other factors remained with five items each, resulting in an initial reduced version with 36 items.
Analyses of internal consistency revealed that the inclusion or removal of some items would increase the reliability of three factors (verified by item-total correlations). Accordingly, three items were added to the factor Achievement (alpha increased from 0.62 to 0.73), one item was removed from Change (alpha increased from 0.52 to 0.82), and one item was added to Locus (alpha increased from 0.55 to 0.57). The final version of the MIAr therefore comprised 39 items. Table 2 shows each subscale of the MIAr, with its respective items and reliability values (Cronbach's alpha), along with factor loadings and communalities for each item.
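Cronbach's alpha, the reliability coefficient reported throughout, can be computed directly from an item-score matrix; a minimal sketch (the function name is ours, for illustration):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    # Sum of per-item variances versus variance of the total score.
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)
```

Dropping an item and recomputing alpha on the remaining columns is the "alpha if item deleted" check that, together with item-total correlations, motivated the adjustments described above.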
Validity evidences of the MIAr
Rasch analysis was applied to each subscale to investigate the psychometric parameters of the MIAr. The mean of the subjects' trait was fixed at zero for the comparison of difficulty values between the original and reduced versions. Infit and outfit values were smaller in the scales of the short version than in the original scales, indicating a better fit of the items to the measurement models (with smaller dispersion around unity). Also, the mean and standard deviation values were similar in the two versions of the instrument, which suggests that there was no significant loss of information. In some of the subscales (e.g., Strategy), a reduction in the range of the trait covered and a loss in reliability were observed, though both remained within acceptable parameters. Finally, the estimated standard theta values for participants on the two sets of items were correlated, showing the following values: Anxiety = 0.91, Capacity = 0.90, Locus = 0.94, Strategy = 0.80, Achievement = 0.90, Change = 0.88, and Task = 0.89. The difficulty values of the reduced version are shown in Table 2.
Correlation tests were conducted between the subscales of the MIAr and the PRMQ, in order to test the convergent validity of the reduced instrument. In this analysis (n = 192), most subscales of the MIAr showed significant correlations with the factors related to prospective and retrospective memory capacities. The values found, of small to moderate magnitude, are shown in Table 3.
Finally, we conducted comparative analyses between the clinical (stroke patients) and non-clinical samples. The results showed that patients with stroke had higher scores on the subscales Strategy (t(467) = 2.07; p = 0.04; d = 0.46), Capacity (t(467) = 0.03; p = 0.03; d = 0.64), Change (t(467) = 4.43; p < 0.001; d = 0.97), and Anxiety (t(467) = 2.32; p = 0.02; d = 0.52) than the non-clinical sample. No significant differences were found for the other subscales (p > 0.12).
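The effect sizes reported above are Cohen's d values; under the usual pooled-standard-deviation definition, they can be sketched as follows (function name ours, for illustration):

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d with pooled standard deviation for two independent groups."""
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# By common convention, |d| near 0.5 is a medium effect and near 0.8 a large
# one, so the Change difference reported above (d = 0.97) is a large effect.
```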
This study aimed to obtain a short version of the Metamemory in Adulthood Questionnaire (MIAr) and to evaluate evidence of its validity. The factorial structure of the short version is very similar to the original composition (Dixon and Hultsch 1983a, 1983b), and its internal consistency was similar to that of the Brazilian full version (Yassuda et al. 2005). This study also contributed to the investigation of the psychometric properties of the instrument, presenting new information obtained from both classical test theory and item response theory (IRT). Considering the evidence of validity presented, the MIAr seems adequate for metamemory evaluation.
The MIAr presented satisfactory psychometric properties, including evidence related to its internal structure and convergent validity. Most of the subscales showed good internal consistency, the only exception being the Locus factor. This pattern has already been reported in previous studies comparing the reliability of Locus with that of the other subscales (Yassuda et al. 2005). Therefore, users of the MIAr should be cautious when evaluating Locus, due to the problems associated with less reliable factors (Henson 2001). Future studies should further explore this subscale, eventually reformulating it to improve its psychometric properties.
The IRT analyses enabled the selection of items based on their infit and outfit fit measures. The first selection step eliminated items based on their discrepancy from the unidimensional measurement model. Besides contributing to the reduction process, this information reinforces the assumption that the full version of the MIA has an excessive number of items. Exploratory factor analyses (EFAs) and IRT analyses (with the Rasch model) were used as tools to develop the MIAr. The EFAs aimed to select the sets of items with the greatest linear association with the latent score, while the IRT analyses estimated the fit parameters and item difficulties; both provided evidence of validity regarding internal structure. Small variations were observed in the difficulty parameters, due to the retention of discriminative items in the factor analysis (i.e. the items most informative for classifying subjects). Furthermore, the indices showed that the MIAr items fit the unidimensional models of each subscale, presenting less residual variability than the 108-item version, which indicates the enhanced internal cohesion of the subscales.
The MIAr also showed adequate evidence of convergent validity, revealed by the significant correlations between most MIAr subscales and the Prospective and Retrospective Memory Questionnaire (Piauilino et al. 2010). The variability in the magnitude of these correlations also suggests that the MIAr constructs are sufficiently distinct from each other, despite all being related to metamemory. Future studies using the MIAr should consider such distinctions, as it is possible to establish specific hypotheses relating each of the seven subscales to other variables (Tarricone 2011).
Comparisons between clinical and non-clinical samples aimed to explore the sensitivity of the MIAr to different profiles of participants. The sample of patients with stroke scored higher on Strategy, Capacity, Change and Anxiety, possibly because this condition is often associated with impairments in executive functions (Aben et al. 2008; Aben et al. 2009). However, the replicability of these results should be addressed with larger sample sizes and with other clinical samples.
As a possible limitation of the present study, it should be considered that the number of different samples could have biased the MIA results, since differences in metamemory characteristics are expected among those different respondents. On the other hand, the use of different samples allows for broader generalisation, given the larger population representativity, and all statistical assumptions for the parametric analyses used were met, which suggests that the use of different samples did not impair the reduction of the MIA. Also, the variability in item difficulty indicates that the instrument captures an adequate extent of the construct; the heterogeneity of the sample may have contributed to this psychometric property. Our results comparing clinical and non-clinical samples are also limited by the small number of participants in the clinical group. Future studies should further explore this relationship.
The investigation of the MIAr's properties would benefit greatly from new studies that increase the amount of evidence supporting its use, with modifications if necessary. Larger sample sizes and greater representativity of the Brazilian population should be pursued, helping to improve the reliability of our results, the robustness of the factorial structure, and the body of validity evidence.
The objective of elaborating a short version of the MIA was achieved. The MIAr seems to be a more practical instrument than the complete MIA, keeping the same factorial structure as the full version and demonstrating satisfactory psychometric indices. Metamemory assessment in the Brazilian context is now facilitated, and the MIAr is expected to provide a better understanding of cognitive and mnemonic processes, dementia, and other cognitive deficits.
Aben, L, Busschbach, JJ, Ponds, RW, & Ribbers, G. M. (2008). Memory self-efficacy and psychosocial factors in stroke. Journal of Rehabilitation Medicine, 40(8), 681–683.
Aben, L, Kessel, MAV, Duivenvoorden, HJ, Busschbach, JJV, Eling, PATM, Bogert, MA, & Ribbers, GM. (2009). Metamemory and memory test performance in stroke patients. Neuropsychological Rehabilitation, 19(5), 742–753.
Alquraan, M, & Aljarah, AA. (2011). Psychometric revision of a Jordanian version of the metamemory in adulthood questionnaire (MIA): Rasch model, confirmatory factor analysis, and classical test theory analyses. Education, Business and Society: Contemporary Middle Eastern Issues, 4, 292–302. doi:10.1108/17537981111190079.
Bender, AR, & Raz, N. (2012). Age-related differences in recognition memory for items and associations: Contribution of individual differences in working memory and metamemory. Psychology and Aging, 27(3), 691–700. doi:10.1037/a0026714.
Benites, D, & Gomes, WB. (2007). Tradução, adaptação e validação preliminar do Prospective and Retrospective Memory Questionnaire (PRMQ). Psico-Usf, 12(1), 45–54.
Bjork, RA, Dunlosky, J, & Kornell, N. (2013). Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, 417–444. doi:10.1146/annurev-psych-113011-143823.
Bond, TG, & Fox, CM. (2007). Applying the Rasch Model. Fundamental measurement in the human sciences. New Jersey: Lawrence Erlbaum Associates Publishers.
de Frias, CM, & Dixon, RA. (2005). Confirmatory factor structure and measurement invariance of the Memory Compensation Questionnaire. Psychological Assessment, 17(2), 168–178. doi:10.1037/1040-3590.17.2.168.
Crawford, J, Smith, G, Maylor, E, Della Sala, S, & Logie, R. (2003). The Prospective and Retrospective Memory Questionnaire (PRMQ): Normative data and latent structure in a large non-clinical sample. Memory, 11(3), 261–275.
Dixon, RA, & Hultsch, DF. (1983a). Metamemory and memory for text relationships in adulthood: A cross-validation study. Journal of Gerontology, 38, 689–694.
Dixon, RA, & Hultsch, DF. (1983b). Structure and development of metamemory in adulthood. Journal of Gerontology, 38, 682–688.
Dixon, RA, Hultsch, DF, & Hertzog, C. (1988). The metamemory in adulthood (MIA) questionnaire. Psychopharmacology Bulletin, 24, 671–688.
Flavell, JH. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34(10), 906–911. doi:10.1037/0003-066X.34.10.906.
Gilewski, MJ, Zelinski, EM, & Schaie, KW. (1990). The memory functioning questionnaire for assessment of memory complaints in adulthood and old age. Psychology and Aging, 5(4), 482–490. doi:10.1037/0882-7974.5.4.482.
Henson, RK. (2001). Understanding internal consistency reliability estimates: A conceptual primer on coefficient alpha. Measurement and Evaluation in Counseling and Development, 34(3), 177–189.
Hertzog, C, Dixon, RA, & Hultsch, DF. (1990a). Metamemory in adulthood: Differentiating knowledge, belief, and behavior. Advances in Psychology, 71, 161–212.
Hertzog, C, Dixon, RA, & Hultsch, DF. (1990b). Relationships between metamemory, memory predictions, and memory task performance in adults. Psychology and Aging, 5(2), 215–227.
Hertzog, C, Hultsch, DF, & Dixon, RA. (1989). Evidence for the convergent validity of two self-report metamemory questionnaires. Developmental Psychology, 25(5), 687–700. doi:10.1037/0012-1649.25.5.687.
Hultsch, DF, Hertzog, C, & Dixon, RA. (1987). Age differences in metamemory: Resolving the inconsistencies. Canadian Journal of Psychology, 41(2), 193–208.
Kinjo, H, Ide, S, & Ishihara, O. (2013). Structure of the Japanese metamemory in adulthood (MIA) questionnaire and development of its abridged version. The Japanese Journal of Cognitive Psychology, 11(1), 31–41.
Nelson, TO, & Narens, L. (1990). Metamemory: A theoretical framework and new findings. In G. H. Bower (Ed.), The Psychology of Learning and Motivation (Vol. 26, pp. 125–173). New York: Academic.
Palmer, EC, David, AS, & Fleming, SM. (2014). Effects of age on metacognitive efficiency. Consciousness and Cognition, 28, 151–160. doi:10.1016/j.concog.2014.06.007.
Piauilino, DC, Bueno, OFA, Tufik, S, Bittencourt, LR, Santos-Silva, R, Hachul, H, & Pompéia, S. (2010). The prospective and retrospective memory questionnaire: a population-based random sampling study. Memory, 18(4), 413–426. doi:10.1080/09658211003742672.
Ponds, RW, & Jolles, J. (1996). The abridged Dutch metamemory in adulthood (MIA) questionnaire: structure and effects of age, sex, and education. Psychology and Aging, 11(2), 324–332. doi:10.1037/0882-7974.11.2.324.
Primi, R, Muniz, M, & Nunes, CHS. (2009). Definições contemporâneas de validade de Testes Psicológicos. In C. S. Hutz (Ed.), Avanços e polêmicas em avaliação psicológica: Em homenagem a Jurema Alcides Cunha (pp. 223–265). São Paulo: Casa do Psicólogo.
Rasch, G. (1960, 1980). Probabilistic models for some intelligence and attainment tests. Chicago, IL: MESA Press.
Rawson, K, Dunlosky, J, & McDonald, S. (2002). Influences of metamemory on performance predictions for text. The Quarterly Journal of Experimental Psychology, 55A, 505–524. doi:10.1080/02724980143000352.
Royle, J, & Lincoln, NB. (2008). The everyday memory questionnaire-revised: development of a 13-item scale. Disability and Rehabilitation, 30(2), 114–121. doi:10.1080/09638280701223876.
Schraw, G. (2008). A conceptual analysis of five measures of metacognitive monitoring. Metacognition and Learning, 4(1), 33–45. doi:10.1007/s11409-008-9031-3.
Schraw, G, & Gutierrez, AP. (2015). Metacognitive strategy instruction that highlights the role of monitoring and control processes. Metacognition: Fundaments, Applications, and Trends, 76, 3–16.
Tarricone, P. (2011). The taxonomy of metacognition. Hove, UK: Psychology Press.
Van Ede, DM. (1995). Adapting the metamemory in adulthood (MIA) questionnaire for cross-cultural application in South Africa. South African Journal of Psychology, 25(2), 74–80.
Veenman, MVJ, Hout-Wolters, BHAM., & Afflerbach, P. (2006). Metacognition and learning: conceptual and methodological considerations. Metacognition and Learning, 1(1), 3–14. doi:10.1007/s11409-006-6893-0.
Yassuda, MS, Lasca, VB, & Neri, AL. (2005). Meta-memória e auto-eficácia: Um estudo de validação de instrumentos de pesquisa sobre memória e envelhecimento. Psicologia: Reflexão e Crítica, 18(1), 78–90.
Zortea, M, Jou, GI, & Salles, JF. (2014). Tarefa experimental de metamemória para avaliar monitoramento e controle de memória. Psico USF, 19(2), 329–344.
This research was supported by CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior), and by FAPERGS (Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul, 006/2010).
GC contributed to the conception of the study, the elaboration and analysis of the judge’s evaluation procedure, construction of the online survey, data collection, exploratory and confirmatory factor analysis, writing and revising. MZ was involved in the conception, data collection, IRT analysis and writing. RS contributed to the conception of the study, the elaboration and analysis of the judge’s evaluation procedure, construction of the online survey, data collection, exploratory and confirmatory factor analysis, writing and revising. WM was involved in IRT data analysis, interpretation and writing. JBS was involved in the conception, data collection, IRT analysis and writing. JDS was involved in the conception, data collection, IRT analysis and writing. CG contributed to the conception of the study, the elaboration and analysis of the judge’s evaluation procedure, construction of the online survey, data collection, exploratory and confirmatory factor analysis. GMC and JFS contributed to the conception of the study, writing, revising, and supervision.
No organization stands to gain or lose financially from the publication of this manuscript; none of the authors hold stocks or shares in any organization that could benefit, and there is no intention to apply for any patents relating to the content of the manuscript. In addition, there are no non-financial competing interests between the authors or with any external organization.