
Psychology: Research and Review

Cross-cultural invariance of the Spanish version of the COVID-19 Assessment Scorecard to measure the perception of government actions against COVID-19 in Latin America

Abstract

Objectives

The present study aimed to evaluate the measurement invariance of a general measure of the perception of governmental responses to COVID-19 (COVID-SCORE-10) in the general population of 13 Latin American countries.

Methods

A total of 5780 individuals from 13 Latin American and Caribbean countries selected by non-probabilistic snowball sampling participated. A confirmatory factor analysis was performed and the alignment method was used to evaluate invariance. Additionally, a graded response model was used for the assessment of item characteristics.

Results

The results indicate approximate measurement invariance of the COVID-SCORE-10 across the participating countries. Furthermore, the IRT results suggest that the COVID-SCORE-10 measures a broad spectrum of the assessed construct with good psychometric precision, especially around average trait levels. Comparison of COVID-SCORE-10 scores indicated that participants from Cuba, Uruguay and El Salvador had the most positive perceptions of government actions to address the pandemic. Thus, the underlying construct of perception of government actions was equivalent across all countries.

Conclusion

The results show the importance of first establishing the fundamental measurement properties and measurement invariance (MI) of an instrument before inferring the cross-cultural universality of the construct it measures.

Introduction

The COVID-19 pandemic has generated negative social, economic, educational and health consequences worldwide (Lazarus et al., 2020b). Since its emergence in China and until April 1, 2022, more than 480 million diagnosed cases of COVID-19 and more than 6 million deaths from the disease have been reported worldwide. The emergence and rapid spread of new SARS-CoV-2 variants, such as Delta and Omicron, means that COVID-19 will remain a public health emergency (Haque et al., 2022) and will continue to exert pressure on governments around the world (White et al., 2021).

Since the beginning of the pandemic, governments around the world have implemented actions to contain the spread of the disease among their populations (Lazarus et al., 2020b). These actions included quarantine, restrictions on the movement of people, and the closure of schools, places of worship, stores and industrial activities (Sebastiani et al., 2020). In addition, preventive behaviors such as mask wearing, hand washing, use of hand sanitizer, physical distancing and vaccination against COVID-19 have been promoted (Olapegba et al., 2021). It has even been recommended that preventive behaviors be maintained after full vaccination against COVID-19 (Aschwanden et al., 2021). However, compliance with these measures has varied across settings (Sibley et al., 2020). It is possible that lack of confidence in government (Sibley et al., 2020) and the confusion generated by unclear and contradictory information from some governmental sources (Gollust et al., 2020; Garrett, 2020) have contributed to limited compliance or non-compliance with these measures. Likewise, the pandemic may change people's trust in government and institutions: those facing an external threat may place greater confidence in government and institutions because they have few other options (Bavel et al., 2020).

During this pandemic, the leadership role of government is important, especially in a context of uncertainty about the effectiveness of the control measures in place (White et al., 2021). Greater trust in government authorities makes people more likely to comply with the recommended protective practices for dealing with the disease (Seale et al., 2020). This relationship was previously observed during the H1N1 pandemic (Freimuth et al., 2014) and the Ebola epidemic (Blair et al., 2017). Degrees of trust in government may be influenced by certain individual characteristics (Kavanagh et al., 2020). Thus, it is important to better understand people's perceptions of government responses to the COVID-19 pandemic. People's opinions concern not only the effectiveness of the measures implemented by governments, but also other more specific actions, such as support for the most vulnerable groups (Lazarus et al., 2020b).

However, relatively few studies have assessed people's perceptions of the role of government during the pandemic, particularly in Latin America. The governments of Latin American countries faced the pandemic amid their own structural crises, such as social inequality and poverty, which have led to social and political polarization, in addition to a high prevalence of chronic diseases and a response constrained by limited health resources (Ramírez de la Cruz et al., 2020). This has made Latin America and the Caribbean one of the regions of the world most affected by the pandemic (Anaya-Covarrubias et al., 2022). A recent study indicated that Latin America, along with Europe, is one of the regions whose populations perceived their governments' responses to COVID-19 as inadequate (Lazarus et al., 2020b). This negative perception leads to harsh criticism of the measures taken by governments (Paterlini, 2020).

A better understanding of perceptions of government responses to COVID-19 requires validated measures. To this end, the COVID-SCORE-10 (Lazarus et al., 2020a, b) was recently developed as a general measure of perceptions of government responses to COVID-19. Specifically, the COVID-SCORE-10 assesses people's perceptions of socioeconomic support, continuity of health services, communication, and disease control measures. The COVID-SCORE-10 was developed from a longer, 20-item version, the COVID-SCORE-20 (Lazarus et al., 2020a). The items for the COVID-SCORE-10 were chosen by a panel of experts after a review of information on government responses to pandemics and other natural disasters (Lazarus et al., 2020b). Initially, the COVID-SCORE-10 was developed in English and translated into different languages (such as Portuguese, Mandarin Chinese, French, German, Italian, Polish, Russian, Korean, Swedish, among others) under the assumption that it can obtain information from different countries and is sensitive to cultural differences. However, to our knowledge, no previous study had evaluated its measurement invariance (MI) across different countries and/or cultures.

Only the initial study reported that the measure was reliable (Cronbach's alpha = 0.92) and unidimensional (Lazarus et al., 2020a, b). The latter conclusion was based on a principal component analysis (PCA), part of the set of procedures known as Little Jiffy (Kaiser, 1960), which is the least recommended approach for assessing the internal structure of a measurement test (Lara & Soto, 2016). PCA is a method for reducing observed variables or items, not a factor analysis (Lloret-Segura et al., 2014): it operates on total variance (common and unique variance, in addition to error variance), which tends to overestimate factor loadings and distort the variance attributable to the construct (Ferrando & Anguiano-Carrasco, 2010).

Conducting a cross-cultural study, similar to Lazarus et al. (2020a, b), should effectively address cultural influences on measure performance across countries (Ryan et al., 1999). However, most research comparing groups (including countries) does not assess the equivalence of the factor structure of the instruments between groups (Steinmetz et al., 2009). For this, the MI procedure makes it possible to evaluate whether an instrument works in the same way in all groups. Failure to establish MI would mean that the results of the comparison between groups may be erroneous and not replicable, since the differences between groups may not reflect true differences, but rather a different functioning of the instrument between the groups evaluated. This would mean that the theoretical and practical implications of cross-cultural studies may be limited or spurious (Nimon & Reio, 2011). In view of this, MI should be evaluated before drawing conclusions based on a group comparison (Jeong & Lee, 2019).

Although the COVID-SCORE-10 has been used in different countries, this does not mean that it can be used across settings without certainty that the concept and measurement of the perception of governmental actions against COVID-19 are equivalent. In view of this situation, the present study aimed to evaluate the MI of the COVID-SCORE-10 in the general population of 13 Latin American countries. This will provide additional evidence for a general tool that can measure the perception of government actions on COVID-19 in various countries and that is sensitive to cultural differences, which would benefit researchers and public health policy makers. A measure that is invariant across countries will allow researchers to understand how people perceive their governments' response to the COVID-19 pandemic and to plan and adapt public health interventions accordingly. This topic is considered a research priority (World Health Organization [WHO], 2020) and important for decision support for governments in Latin America and the Caribbean.

Additionally, once MI was tested, the performance and parameter estimates of individual COVID-SCORE-10 items were evaluated based on Item Response Theory (IRT). IRT models allow a better understanding of the relationship between an individual's responses to the COVID-SCORE-10 items and the underlying latent trait, in this case the perception of government responses to COVID-19 (Embretson & Reise, 2000). IRT analysis has been suggested as an effective method for developing and optimizing the sensitivity of measurement instruments, and to our knowledge, no previous studies have conducted item-level analyses of the COVID-SCORE-10 using IRT models.

Method

Participants

Participants were 5780 individuals from 13 countries in Latin America and the Caribbean (Argentina, Bolivia, Chile, Colombia, Cuba, Ecuador, El Salvador, Guatemala, Mexico, Paraguay, Peru, Uruguay, and Venezuela), who were selected by non-probability snowball sampling. Snowball sampling has been a common strategy in studies during the COVID-19 pandemic due to restrictions on in-person interaction (Leighton et al., 2021). We planned to recruit a minimum of 200 individuals in each country, which is considered an adequate sample size for psychometric studies (Wilson Von Voorhis & Morgan, 2007). In addition, the number of participants in each country was in line with the recommendations for confirmatory factor analysis and IRT models, which require minimum samples of 300 and 375, respectively (De Ayala, 1994; Tabachnick & Fidell, 2007). The number of participants per country ranged from 322 (Peru) to 747 (El Salvador). To be included in the study, participants had to be of legal age and give informed consent.

The sample included more women (n = 4093) than men (n = 1687). The mean (M) age of the participants was 33.53 years (Mdn = 29, IQR = 23–42), with Mexico having the youngest participants (M = 24.96, Mdn = 21, IQR = 20–27) and Guatemala the oldest (M = 44.04 years, Mdn = 42, IQR = 33–57). Most of the participants were single (61.23%) and had completed university studies (47.08%). In addition, just over half (52.56%) reported not having been diagnosed with COVID-19. Table 1 shows the sociodemographic characteristics of each country in greater detail.

Table 1 Sociodemographic information of the study sample

Instruments

Sociodemographic survey

The sociodemographic questionnaire was prepared for the purposes of this study and included questions about the participants’ sex, age, educational level, and having been diagnosed with COVID-19.

Global survey to assess public perceptions of government responses to COVID-19 (COVID-SCORE-10; Lazarus et al., 2020b)

The COVID-SCORE-10 comprises 10 items and aims to measure people's perceptions of their government's COVID-19 response actions. Each of the 10 items has five response options ranging from "strongly disagree = 1" to "strongly agree = 5". For scoring, a min–max transformation is applied to the sum of the items and the result is multiplied by 100, so final scores range from 0 to 100. The study used the Spanish version of White et al. (2021). The COVID-SCORE-10 items are listed below (with the Spanish translation of each item in parentheses and italics):

  1.

    The government helped me and my family meet our daily needs during the COVID-19 epidemic in terms of income, food, and shelter (El gobierno nos ayudó a mí y a mi familia a satisfacer nuestras necesidades diarias durante la epidemia de la COVID-19 en términos de ingresos, alimentos y vivienda)

  2.

    The government communicated clearly to ensure that everyone had the information they needed to protect themselves and others from COVID-19, regardless of socioeconomic level, migrant status, ethnicity or language (El gobierno se comunicó claramente para garantizar que todos tuvieran la información que necesitaban para protegerse a sí mismos y a otros de la COVID-19, independientemente de su nivel socioeconómico, estatus migratorio, origen étnico o idioma)

  3.

    I trusted the government’s reports on the spread of the epidemic and the statistics on the number of COVID-19 cases and deaths (Confié en los informes del gobierno sobre la propagación de la epidemia y las estadísticas sobre el número de casos y muertes por COVID-19)

  4.

    The government had a strong pandemic preparedness team that included public health and medical experts to manage our national response to the COVID-19 epidemic (El gobierno contaba con un sólido equipo de preparación para una pandemia que incluía expertos médicos y de salud pública para gestionar nuestra respuesta nacional a la epidemia de COVID-19)

  5.

    The government provided everyone with access to free, reliable COVID-19 testing if they had symptoms (El gobierno brindó a todos acceso a pruebas de COVID-19 gratuitas y confiables si tenían síntomas)

  6.

    The government made sure we always had full access to the healthcare services we needed during the epidemic (El gobierno se aseguró de que siempre tuviéramos pleno acceso a los servicios de atención médica que necesitábamos durante la epidemia)

  7.

    The government provided special protections to vulnerable groups at higher risk such as the elderly, the poor, migrants, prisoners and the homeless during the COVID-19 epidemic (El gobierno brindó protecciones especiales a los grupos vulnerables con mayor riesgo, como los ancianos, los pobres, los migrantes, los prisioneros y las personas sin hogar, durante la epidemia de COVID-19).

  8.

    The government made sure that healthcare workers had the personal protective equipment they needed to protect them from COVID-19 at all times (El gobierno se aseguró de que los trabajadores de la salud tuvieran el equipo de protección personal que necesitaban para protegerse del COVID-19 en todo momento).

  9.

    The government provided mental health services to help people suffering from loneliness, depression and anxiety caused by the COVID-19 epidemic (El gobierno brindó servicios de salud mental para ayudar a las personas que sufren de soledad, depresión y ansiedad causadas por la epidemia de COVID-19)

  10.

    The government cooperated with other countries and international partners such as the World Health Organization (WHO) to fight the COVID-19 pandemic (El gobierno cooperó con otros países y socios internacionales como la Organización Mundial de la Salud (OMS) para combatir la pandemia de COVID-19).
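The scoring rule described above (a min–max transformation of the item sum, rescaled to 0–100) can be sketched as follows. This is an illustrative implementation, not the authors' scoring script; the function name is ours:

```python
def covid_score(responses):
    """Transform 10 item responses (each scored 1-5) into the 0-100
    COVID-SCORE-10 scale via the min-max transformation described in the
    text: the raw sum (range 10-50) is rescaled to the 0-100 range."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 10 responses, each between 1 and 5")
    total = sum(responses)
    return (total - 10) / (50 - 10) * 100

# All "strongly disagree" maps to 0; all "strongly agree" maps to 100
print(covid_score([1] * 10))  # 0.0
print(covid_score([5] * 10))  # 100.0
```

Under this rule, a respondent choosing the middle option on every item scores 50, the center of the transformed scale.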

Procedure

The project was approved by the Institutional Committee for the Protection of Human Subjects in Research (CIPSHI) of the University of Puerto Rico (No. 2223-006), and informed consent to participate in this study was provided by the participants. In addition, the study followed the ethical guidelines of the American Psychological Association (APA, 2010) and the Declaration of Helsinki. All methods were carried out in accordance with relevant guidelines and regulations.

The study was conducted between September 15 and October 25, 2021, during the COVID-19 pandemic. An online survey was developed using Google Forms, which contained instructions for answering the survey, the study objectives, informed consent, and the COVID-SCORE-10 questions. The survey was distributed via social networks (Facebook, Instagram, and LinkedIn) and email. Participants were asked to disseminate the survey link to their personal contacts. This procedure was identical and carried out simultaneously in the 13 participating Latin American and Caribbean countries. Responding to the survey posed no risk to participants. Participation was completely voluntary and could be discontinued at any time. All participants gave informed consent to be part of the study and were informed that their responses were completely anonymous and confidential. Responding to the survey took approximately 10 min on average. In addition, to complete and submit the online survey, participants could not leave any questions unanswered.

Data analysis

We began by analyzing descriptive statistics at the item level; specifically, the mean, standard deviation, skewness and kurtosis were calculated. Next, a confirmatory factor analysis (CFA) was performed using the robust maximum likelihood method (MLR; Yuan & Bentler, 2000). Because the instrument has five response options, it is plausible to use MLR instead of estimators designed for ordinal data, such as weighted least squares means and variance adjusted (WLSMV; Rhemtulla et al., 2012). The reason for selecting MLR over WLSMV is that the alignment method used for the invariance analyses is based on the former. Model fit was judged with the following indices: comparative fit index (CFI), Tucker-Lewis index (TLI), root-mean-square error of approximation (RMSEA) and standardized root-mean-square residual (SRMR). To assess the fit of the model to the data, the following guidelines were considered: CFI > 0.95, TLI > 0.95, RMSEA < 0.06 and SRMR < 0.08 (Hu & Bentler, 1999).
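As a hedged illustration of how these cut-offs operate, the sketch below computes the standard ML-based RMSEA point estimate from a model's chi-square, degrees of freedom, and sample size, and applies the Hu and Bentler (1999) guidelines quoted above. The actual estimation in this study was done with lavaan in R; these helper functions and their names are ours:

```python
import math

def rmsea(chi2, df, n):
    """Point estimate of the root-mean-square error of approximation,
    from the conventional ML-based formula:
    RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def acceptable_fit(cfi, tli, rmsea_val, srmr):
    """Apply the Hu & Bentler (1999) guidelines quoted in the text."""
    return cfi > 0.95 and tli > 0.95 and rmsea_val < 0.06 and srmr < 0.08

# A model whose chi-square equals its df has RMSEA = 0 (perfect fit)
print(rmsea(35, 35, 500))  # 0.0
```

For instance, a model with chi-square = 105 on 35 degrees of freedom in a sample of 301 yields an RMSEA of about 0.082, above the 0.06 guideline.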

For the evaluation of invariance, the alignment method was used, which is recommended when evaluating a large number of groups, as in the present case (Asparouhov & Muthén, 2014). The objective of this method is to reduce the lack of invariance as much as possible in order to perform an unbiased comparison of latent means. This methodology requires establishing a priori tolerance values for the parameters examined (factor loadings and intercepts). Following previous recommendations, conservative values were selected for both factor loadings (λ = 0.40) and intercepts (ν = 0.20) (Fischer & Karl, 2019). In addition, R² values were calculated for each parameter; values close to 1 suggest compliance with invariance. Finally, the total percentage of non-invariant parameters was also examined; values greater than 25% would indicate lack of invariance (Muthén & Asparouhov, 2014).
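The decision rule on the share of non-invariant parameters can be illustrated with a simplified sketch. The real alignment optimization minimizes a component loss function (here done with the sirt package); below we merely flag group estimates that deviate from the cross-group mean by more than the a priori tolerance, with hypothetical numbers:

```python
def flag_noninvariant(estimates, tolerance):
    """Flag group-specific parameter estimates whose deviation from the
    cross-group mean exceeds the a priori tolerance. Illustrative only;
    see Asparouhov & Muthen (2014) for the actual alignment criteria."""
    center = sum(estimates) / len(estimates)
    return [abs(e - center) > tolerance for e in estimates]

# Hypothetical intercepts for one item across four groups, using the
# intercept tolerance adopted in the study (nu = 0.20)
intercepts = [1.00, 1.05, 0.95, 1.60]
flags = flag_noninvariant(intercepts, tolerance=0.20)
pct_noninvariant = 100 * sum(flags) / len(flags)
print(pct_noninvariant)  # 25.0
```

In this toy case only the fourth group's intercept is flagged, giving 25% non-invariant parameters, exactly at the decision limit described above.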

It should be noted that the aim of the alignment procedure is to estimate latent mean differences, and it was developed as an alternative to multi-group confirmatory factor analysis (MGCFA; Asparouhov & Muthén, 2014). Indeed, we only seek approximate (not exact) measurement invariance when applying the alignment method. Thus, lack of invariance under MGCFA is not incompatible with approximate alignment invariance. On the other hand, it is true that most applications of the alignment optimization use variations of the maximum likelihood estimator, and thus assume that the variables are continuous in nature (e.g. Marsh et al., 2018). While evidence suggests that it is safe to treat Likert-type items as continuous in single-group CFA (given that there are at least five response options; Rhemtulla et al., 2012), this may not hold for MGCFA (Temme, 2006). Furthermore, to the best of the authors’ knowledge, the robustness of this approach when using the alignment method has not been examined in simulation studies. Given the above, we decided to also conduct a MGCFA using the WLSMV estimator. Following recommendations for ordinal MGCFA, we first examined thresholds’ invariance, followed by the addition of factor loadings’ invariance (Temme, 2006; Wu & Estabrook, 2016). As expected, the results showed lack of (exact) measurement invariance (Supplementary Material 1). For transparency, we also make our dataset available for anyone interested in reproducing or improving our analyses. The database can be seen at the following link: https://osf.io/8ms6n.

Once approximate invariance was verified, we proceeded with a graded response model (GRM) applied to the total sample. This model is part of item response theory and estimates two kinds of parameters (discrimination and difficulty) for polytomous items (Samejima, 2016). Specifically, one discrimination parameter (a) and k − 1 difficulty parameters (b) are estimated for each item, where k is the number of response options. Discrimination refers to the ability of the item to distinguish between persons with high and low levels of the construct (θ). Difficulty parameters refer to the level of the construct (θ) at which an individual has a 50% probability of responding in a category above the one indicated by the parameter (Edelen & Reeve, 2007). With the information from both parameters, information curves were constructed for each item, allowing us to graphically examine the psychometric quality of the items in terms of reliability (Furr, 2018).
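Under the logistic GRM just described (one discrimination and k − 1 difficulty parameters per item), the category probabilities for a five-option item can be sketched as differences of adjacent cumulative logistic curves. The parameter values below are hypothetical, chosen only to illustrate the model's mechanics:

```python
import math

def grm_probs(theta, a, bs):
    """Category probabilities under Samejima's graded response model:
    the cumulative probability of responding at or above each boundary is
    P(X >= k | theta) = 1 / (1 + exp(-a * (theta - b_k))), and each
    category probability is the difference of adjacent cumulative curves."""
    cum = [1.0] + [1 / (1 + math.exp(-a * (theta - b))) for b in sorted(bs)] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(bs) + 1)]

# Hypothetical 5-option item: a = 1.8, four boundary difficulties
probs = grm_probs(theta=0.0, a=1.8, bs=[-1.5, -0.5, 0.6, 1.4])
print(sum(probs))  # category probabilities always sum to 1.0
```

At θ equal to the first difficulty parameter, the cumulative curve for that boundary is exactly 0.5, which is the "50% probability" interpretation given in the text.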

As mentioned, the alignment method used in the invariance analysis allows an unbiased comparison of the latent means between countries. In a complementary manner, this comparison was also made with the observed means. Although this procedure is methodologically inferior to the alignment method, it was applied to facilitate a simpler interpretation of the mean comparisons. Specifically, standardized mean differences were calculated with Cohen’s d index, which were interpreted considering the classic guide of 0.20, 0.50 and 0.80 as cut-off points for small, medium and large differences, respectively (Cohen, 1992).
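The standardized mean difference and the Cohen (1992) cut-offs applied above can be sketched as follows. This uses the conventional pooled-standard-deviation formula for Cohen's d; the function names are ours:

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference between two groups using the
    pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

def effect_label(d):
    """Classify |d| with the classic Cohen (1992) cut-offs used in the text:
    0.20 (small), 0.50 (medium), 0.80 (large)."""
    d = abs(d)
    if d < 0.20:
        return "negligible"
    if d < 0.50:
        return "small"
    if d < 0.80:
        return "medium"
    return "large"

# Two hypothetical country samples on the 0-100 transformed scale
d = cohens_d(75, 10, 300, 70, 10, 300)
print(d, effect_label(d))  # 0.5 medium
```

With equal standard deviations of 10, a five-point difference on the transformed scale corresponds to d = 0.50, the boundary of a medium effect.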

The analyses were implemented in the R 4.0.3 program. For the CFA, the package lavaan 0.8–8 was used. For the alignment method, the sirt 3.9–4 package was used. Finally, the GRM was performed with mirt 1.33.2. The scripts used in this study can be seen at: https://osf.io/r5274.

Results

Preliminary analyses

Table 2 presents the descriptive statistics for each item of the COVID-SCORE. In general, people tended to show the least acceptance of item 1 (The government helped me and my family meet our daily needs during the COVID-19 epidemic in terms of income, food, and shelter), while item 10 had the greatest acceptance (The government cooperated with other countries and international partners such as the World Health Organization (WHO) to fight the COVID-19 pandemic). As for the skewness and kurtosis values, most are within the range between −1 and +1, or only very slightly outside it.

Table 2 Item-level descriptive statistics of the COVID-SCORE

When proceeding with the CFA, an acceptable fit was observed in almost all countries (Table 3). The most notable exception was Uruguay, especially in relation to the TLI and RMSEA. In examining possible modifications, no conceptually defensible respecification was identified. Therefore, it was decided to continue with the initial model, even with the suboptimal fit for Uruguay. Table 3 also presents the factor loadings and internal consistency reliability. For the latter, values between 0.86 (Mexico) and 0.93 (Ecuador) were found, indicating adequate reliability.

Table 3 CFA’s fit indices, factor loadings and internal consistency reliability of the COVID-SCORE

Approximate measurement invariance

Table 4 presents the results of the approximate invariance analysis with the alignment method. As can be seen, there are no marked deviations concerning factor loadings in any case. On the other hand, intercept invariance is not satisfied in 23.1% of cases, a value just below our a priori criterion (25%). We therefore conclude that invariance across countries is approximately satisfied, although this result should be interpreted with caution given how close the observed percentage of non-invariant intercepts is to the pre-established maximum limit.

Table 4 Approximate measurement invariance of the COVID-SCORE using the alignment method

By maximizing the invariance in the data, the alignment method allows for the comparison of scores, which can be seen in the last row of Table 4.

Graded response model

Next, a GRM was applied to the COVID-SCORE items. As shown in Table 5, the items with the least discrimination were 1 and 3, while the most informative was item 6. Regarding the difficulty parameters, item 1 was the most "difficult": even average levels of the construct (θ ≈ 0) were associated with a 50% probability of endorsing the lowest option, whereas levels 2 SD above the average were required for a 50% probability of endorsing the highest option. The remaining items covered varying regions of the construct's spectrum.

Table 5 Graded response model parameter estimates for the COVID-SCORE

Based on the GRM parameters, information curves were constructed for each item of the COVID-SCORE (Fig. 1). These curves allow us to identify that items 6 and 7 are the most informative, especially at θ values close to the average. Taken together, the information curves demonstrate that the COVID-SCORE scores are more reliable at values close to the average of the latent variable.

Fig. 1
figure 1

Item information curves of the COVID-SCORE

Mean comparison across countries

When examining invariance with the alignment method, a comparison was already made between the latent means of perceived government actions. However, we also performed, in a complementary manner, a comparison of the transformed COVID-SCORE-10 scores. In doing so, participants from Cuba, Uruguay and El Salvador showed the most positive perceptions of government actions to address the pandemic (the differences among the three countries were small to negligible, ds < 0.50). On the other hand, participants from Venezuela, Guatemala and Bolivia showed the least positive perceptions (also ds < 0.50 among the three). Figure 2 shows boxplots that graphically represent these differences.

Fig. 2
figure 2

Boxplots comparing observed scores of the COVID-SCORE

Discussion

Due to the cross-cultural use of the COVID-SCORE-10 in different populations, the study aimed to assess whether the results are comparable between different countries by evaluating the MI of the scale. In particular, the cross-cultural replicability of the COVID-SCORE-10 was tested in 13 Spanish-speaking countries in Latin America and the Caribbean.

First, the evaluation of the factor structure of the COVID-SCORE-10 indicated a unidimensional model that fits the data well. In most countries, the CFI and TLI fit indices are above the cut-off value of 0.90, and the RMSEA and SRMR values are below 0.08, indicating an acceptable fit. It should be remembered that the original COVID-SCORE-10 study evaluated its factor structure only by means of an exploratory analysis based on PCA, which, as noted above, is inadequate because it is not a factor analysis method (Lloret-Segura et al., 2014). Only in Uruguay were some fit indices, such as the TLI (0.89) and RMSEA (0.10), slightly outside the range considered acceptable. Allowing correlated errors could have improved model fit in all countries, including Uruguay; however, it has been suggested that this procedure can overestimate or underestimate reliability due to the presence of variance unrelated to the construct, and thus bias the interpretation of COVID-SCORE-10 accuracy (Yang & Green, 2010).

On the other hand, the findings showed that the COVID-SCORE-10 is highly reliable in the 13 participating countries. In this sense, the COVID-SCORE-10 is likely to have balanced and easy-to-understand questions, resulting in consistent responses from individuals and generating good reliability.

Furthermore, although the COVID-SCORE-10 could be successfully replicated in each country independently, this is the first study to analyze its cross-national MI. Identifying the generalizability of COVID-SCORE-10 scores is important for comparing groups internationally (Odell et al., 2021). The findings from the approximate invariance alignment approach indicate that the non-invariance for factor loadings (0%) and intercepts (23.1%) was within the recommended 25% limit (Muthén & Asparouhov, 2014), which lends greater confidence to the invariance results. In this sense, the alignment method indicated that no factor loadings challenged invariance.

Had the non-invariance exceeded the 25% limit, a Monte Carlo simulation study would have been needed to specifically identify the sources of non-invariance (Muthén & Asparouhov, 2014). The acceptable approximate invariance of the COVID-SCORE-10 thus supports its use for unbiased comparison of average levels of perceived government actions against COVID-19 among the 13 countries assessed. It can therefore be suggested that the underlying construct of perception of government actions, as measured by the COVID-SCORE-10, was equivalent across countries. However, this result should be interpreted with caution given how close the observed percentage of non-invariant intercepts is to the pre-established maximum limit.

The findings on invariance also provide further evidence to consider the alignment method as a suitable strategy for testing MI when the number of groups is large, which is difficult to achieve with the traditional approach based on CFA. Also, the alignment method allows estimating and comparing latent means despite partially invariant measurements (Cieciuch et al., 2018), which, in turn, automates and simplifies comparative analyses. Due to the ability of the alignment method to work with several groups, it is possible to test MI in different subpopulations within countries (Munck et al., 2018).

After testing for invariance, COVID-SCORE-10 scores were compared. Participants from Cuba, Uruguay and El Salvador had the most positive perceptions of government actions to address the pandemic. In the case of Cuba, one month before the first case of COVID-19 was detected in its territory, the government created the Cuban Scientific Group for the Confrontation of COVID-19, which has been important in making decisions to control the pandemic (Castellanos-Serra, 2020; Díaz-Canel Bermúdez & Núñez Jover, 2020). This allowed immunological strategies to be applied during the outbreak of COVID-19 in Cuba, such as the development and application of an antibody detection test, the application of immunotherapeutics developed in Cuba to patients with COVID-19 (Pereda et al., 2020; Venegas Rodríguez et al., 2020), and the implementation of preventive strategies for vulnerable populations with products developed in Cuba for the immune system (Castellanos-Serra, 2020).

Uruguay is considered one of the most successful cases in the region in containing COVID-19, without implementing a general suppression strategy (González-Bustamante, 2021). The success of the initial containment of the pandemic by the Uruguayan government would lie, in part, in the rapid declaration of a state of health emergency throughout the country upon detection of the first cases and the closure of borders, schools and other activities that caused crowding. In addition, the government appealed to personal responsibility to control the spread of the virus through voluntary self-confinement, but without mandatory lockdown (e.g., no restrictions on meetings or public transportation, and only a very lax quarantine), and sought support from its scientific community to increase testing capacity (Moreno et al., 2020).

Regarding this last point, it has been suggested that the evidence-based policies adopted by the Uruguayan government, together with a strong public health system and scientific innovations, were among the main factors of success. In developing evidence-based policies, scientific, medical-epidemiological, economic and educational aspects were considered by a scientific advisory group made up of important figures in the government, which provided recommendations on the different responses to the pandemic and the economic reactivation of Uruguay (Pittaluga & Deana, 2020). In this regard, the government of Uruguay developed a balanced strategy that allowed it to contain the social consequences of the pandemic while maintaining some degree of economic activity (Azerrat et al., 2021). In El Salvador, a few days after the first case of COVID-19 was detected, a strict containment was implemented, closing public transportation, schools and all stores except those selling essential foodstuffs.

To mitigate the economic impact, the Salvadoran government made cash transfers of US$300 to workers in the informal sector; in addition, utility and loan payments were frozen, and millions of food baskets were distributed (Lagarde et al., 2020). El Salvador has also been one of the Central American countries reaching a high proportion of direct beneficiaries, because it has the three main components of social protection information systems (a social registry, a single registry of beneficiaries and interoperability) (Cejudo et al., 2020). The actions taken in Cuba, Uruguay and El Salvador may have positively affected citizens' perceptions of their governments' actions to address COVID-19. In addition, countries such as Uruguay and Cuba are among those with lower income inequalities, smaller poverty gaps, higher per capita spending and higher public spending on health, which leads to better health outcomes (Giovanella et al., 2020).

However, it has also been reported that Venezuela, Guatemala and Bolivia presented the lowest positive perception of government actions. In the case of Venezuela, COVID-19 has been a serious threat that adds to people's daily struggle to obtain basic foodstuffs in the midst of a political and economic crisis (Cooper, 2020). In this sense, the Venezuelan government's response cannot be isolated from the country's political situation. The country has had to face the pandemic while, in parallel, consolidating its control over a contested political life (Østebø, 2020). Although official reports indicated that the country had some of the lowest incidence rates of COVID-19 in Latin America, it is possible that these figures are underestimates and the true numbers higher; in addition, the country did not have a standardized treatment process and relied on diagnostic tests with high false-negative rates, which were also scarce (Bates et al., 2021). Furthermore, because the Venezuelan government of President Maduro is not recognized by several countries, including the United States and the European Union, there have been obstacles to the arrival of COVID-19 vaccines, leaving the country dependent on Russia and China for vaccine distribution (Andrade, 2021).

In Guatemala, it has been suggested that the measures that were effective in controlling the pandemic in other countries exacerbated the inequities present in the country and exposed the absence of social protection for citizens (Caridad et al., 2020). This, coupled with a weak and precarious health system for addressing the health crisis, lack of water, low hospital coverage and malfunctioning hospitals, and lack of medicines, has led Guatemalans to perceive that their government had problems dealing with the COVID-19 pandemic (Guillén & Pérez, 2021).

As in other countries, in Bolivia the high percentage of informality and the persistent inequity in health benefits amplified the impact of the pandemic and explain the poor results in containing it (Hummel et al., 2021). Likewise, the political crisis over the legitimacy of the government meant that Bolivia did not have the conditions to carry out a coordinated response, with different results throughout the country (Velasco-Guachalla et al., 2021). Moreover, in Bolivia, citizen support for governmental measures has been lower due to weaknesses in leadership and in the commitment to protect people's life and health, with a tendency to favor the economic interests of the elites (Giovanella et al., 2020). These adverse conditions did not contribute to a better perception of governmental actions to address the pandemic.

In general, in all Latin American countries, heterogeneity in the development of the epidemic, state capacity and pressure on health systems are significant factors for the rapid implementation of pandemic control strategies (Gallegos et al., 2020). Latin American countries have been challenged to respond to COVID-19 despite low budgetary support. Thus, the appropriate formulation of public health policies together with effective national diagnostic and vaccination strategies have been and will be key to the management of the pandemic, the reactivation of the economy and the alleviation of the poverty generated by the pandemic.

Additionally, the performance of the COVID-SCORE-10 items was evaluated using IRT. Item 6, which refers to ensuring access to health services, was observed to be one of the most informative items on the perception of government actions. As mentioned earlier, wide inequalities in effective access to health services are common in Latin America (Garcia et al., 2020), so it is not surprising that item 6 is among the items providing the most information on the actions of Latin American governments to address the pandemic. Likewise, item 7, referring to the protection of the most vulnerable groups, is another item that provides substantial information on the perception of government actions. In this sense, many Latin American governments have seen the need to mitigate the effects of the pandemic on vulnerable groups through social pension programs and economic transfers to families, which function mainly as redistributive and social investment measures (Barrientos, 2020). However, these programs have been insufficient to compensate for the lack of work and income among the poorest segments of the population and among informal workers at high risk of poverty (Busso et al., 2021).

In general, the impact on the most vulnerable groups of the population and limited access to social welfare and health services are the main concerns of the population in Latin America and the Caribbean (Benítez et al., 2020). Overall, the IRT results suggest that the COVID-SCORE-10 measures a broad spectrum of the construct with good psychometric precision, especially around average levels. That is, the instrument reliably measures the perception of government actions for most people.
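The idea that item information peaks around average trait levels can be illustrated with a minimal implementation of Samejima's graded response model. The discrimination and threshold values below are hypothetical, chosen only to show the shape of the information function, not the study's estimates:

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Category probabilities under Samejima's graded response model.

    a: discrimination; thresholds: ordered category boundary locations.
    """
    # P*_k: probability of responding in category k or higher (boundary curves),
    # padded with 1 (lowest category or higher) and 0 (beyond highest category)
    p_star = [1.0] + [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in thresholds] + [0.0]
    # A category's probability is the difference of adjacent boundary curves
    return [p_star[k] - p_star[k + 1] for k in range(len(thresholds) + 1)]

def grm_item_information(theta, a, thresholds):
    """Fisher information of a GRM item at trait level theta."""
    p_star = [1.0] + [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in thresholds] + [0.0]
    info = 0.0
    for k in range(len(thresholds) + 1):
        p_k = p_star[k] - p_star[k + 1]
        if p_k > 1e-12:
            # d P_k / d theta = a * (P*_k (1 - P*_k) - P*_{k+1} (1 - P*_{k+1}))
            d_k = a * (p_star[k] * (1 - p_star[k]) - p_star[k + 1] * (1 - p_star[k + 1]))
            info += d_k * d_k / p_k
    return info

# Hypothetical 4-category item with thresholds centered at theta = 0:
# information is highest near the middle of the trait range and drops at the extremes.
for theta in (-4.0, -2.0, 0.0, 2.0, 4.0):
    print(f"theta = {theta:+.1f}  info = {grm_item_information(theta, 1.8, [-1.0, 0.0, 1.0]):.3f}")
```

With thresholds spread around zero, the information function is largest near average trait levels, which is the pattern the IRT results describe for the COVID-SCORE-10.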

Despite the diversity of countries and the consistency of the results, the present study has limitations that should be taken into account when interpreting the findings. First, due to government regulations to prevent and control the COVID-19 pandemic, snowball sampling was adopted. This may have introduced bias, and the samples may not be representative of each country. In addition, snowball sampling did not allow for adequate gender balance, with the majority of participants being female. Similarly, most had completed university education. This may have generated another bias, as less educated participants probably had less access to the Internet, which was required given the online nature of the study. Likewise, although the aim was to include as many countries as possible, most participants came from South American countries, so there may be some regional bias in the findings. In view of the above, further studies should use more balanced sampling methods to allow greater generalization of the findings. Another limitation was that the data were collected using self-report methods.

Therefore, it is possible that the responses were affected by recall bias or social desirability. On the other hand, although COVID-SCORE-10 scores were compared across countries, the study cannot by itself explore how the economic and sociodemographic conditions of the participating countries affect individuals' perceptions of government actions to address COVID-19. Future studies should address this issue more directly by evaluating, for example, countries with very different socioeconomic characteristics. In addition, a longitudinal study design could be more informative about the evolution of individuals' perceptions of government actions to address COVID-19 over time, beyond a single comparison.

Conclusion

In conclusion, the study contributes to the knowledge of the factor structure of the COVID-SCORE-10 in different populations by presenting the MI results and characteristics of the scale items in 13 Latin American and Caribbean countries. The results show the importance of initially establishing the fundamental measurement properties and MI before inferring the cross-cultural universality of the construct to be measured. In sum, the findings provide evidence to test the external validity and cross-cultural applicability of the conceptualization and operationalization of the perception of government actions vis-à-vis COVID-19. Generalizability is an important characteristic when evaluating any measurement instrument. They also provide scholars and practitioners with strong evidence of cross-cultural variations in perceptions of how different governments have dealt with the pandemic. There is relatively little research on the perception of government actions vis-à-vis COVID-19 in the LAC region. In this sense, the availability of a psychometrically sound and invariant measure of the perception of government actions could motivate researchers to include the LAC region in cross-cultural research on the topic. However, future studies should provide more complete data on the evidence for the validity of COVID-SCORE-10 as well as an assessment of the impact of culture on perceptions of government actions to address the pandemic.

Availability of data and materials

The data presented in this study are available on request from the corresponding author.

Abbreviations

MLR: Robust maximum likelihood method

WLSMV: Weighted least squares means and variance adjusted

CFI: Comparative fit index

TLI: Tucker–Lewis index

RMSEA: Root-mean-square error of approximation

SRMR: Standardized root-mean-square residual

GRM: Graded response model

CFA: Confirmatory factor analysis

MI: Measurement invariance

PCA: Principal component analysis

IRT: Item response theory

WHO: World Health Organization


Acknowledgements

None.

Informed consent statement

Informed consent was provided by all participants.

Additional information

No additional information is available for this paper.

Permission of the original creators of the instrument

No, permission was not necessary.

Funding

No funding was received to support the writing of this research paper.

Author information

Authors and Affiliations

Authors

Contributions

TC-R provided initial conception, organization, and main writing of the text. PDV analyzed the data and prepared all figures and tables. PDV; JV-L, CC-L, LWV, MR-B, MD-C, DEY-L, RP-A; CR-J, MG, MC, PM, RP-C, DAP, RM-H, AS-P, MELR, ABF, DXP-C, IEC-R, RC, WLAG, OP, AC, JT, JAMB, PG, VS-C, WYMR, DF-B, PC-V, AM-d-C-T, JP, CB-V, AMEFL, IV, DV, NAB-A, MKSh, HTUR and AELL were involved in data collection and acted as consultants and contributors to research design, data analysis, and text writing. The first draft of the manuscript was written by TC-R, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Tomás Caycho-Rodríguez.

Ethics declarations

Ethics approval and consent to participate

The study received ethical approval from the Institutional Committee for the Protection of Human Subjects in Research (CIPSHI) of the University of Puerto Rico (No. 2223-006).

Competing interests

The authors have no competing or conflicting interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Multi-group Confirmatory Factor Analysis of the COVID-SCORE-10 using the WLSMV Estimator.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Caycho-Rodríguez, T., Valencia, P.D., Ventura-León, J. et al. Cross-cultural invariance of the Spanish version of the COVID-19 Assessment Scorecard to measure the perception of government actions against COVID-19 in Latin America. Psicol. Refl. Crít. 36, 34 (2023). https://doi.org/10.1186/s41155-023-00277-9


Keywords