Open Access
Psychometric examination of Runco Ideational Behavior Scale: Thai adaptation
Psicologia: Reflexão e Crítica volume 34, Article number: 4 (2021)
Creativity is a multidimensional construct. Several different approaches have been developed to measure creativity, including psychometric scales. The Runco Ideational Behavior Scale (RIBS) is one such measure of creative ideation. The primary purpose of this paper was to assess the 23 items of the RIBS in the context of the Thai language and examine scale reliability and validity. Participants were 508 undergraduate students recruited from five Thai public universities through a convenience sampling approach; data were analyzed with both exploratory and confirmatory factor analysis. Results suggested that the Thai version of the RIBS presented a valid measure to a certain extent. Factor analysis of the empirical data indicated a two-dimensional structure. Confirmatory factor analysis (CFA) results confirmed that the two-factor construct demonstrated a better fit with improved psychometric characteristics. Six items were eliminated from the Thai RIBS inventory: five during exploratory factor analysis (EFA) and one during the CFA process. Results will contribute to ascertaining that the Thai version of the RIBS can be used as a self-assessment tool for measuring students’ creative ideation. Implications and limitations of this research are discussed with suggestions for future studies.
One of the four competencies needed in the twenty-first century is creative thinking (P21 2019). Creative thinking is considered a high priority on both academic and policy-maker agendas (Ritter and Mostert 2017) and has attracted research interest from many different perspectives (Tep et al. 2018). Creativity is recognized as a complicated construct that plays a vague role in educational policies (Kupers et al. 2018) but is now receiving increased attention in scholastic settings (Diakidoy and Constantinou 2001). Creativity is a multidimensional construct, a claim first introduced by Guilford (1956) in his model of the structure of intellect. Supporting this notion, Plucker et al. (2004) conducted a content analysis of 90 articles from high-impact-factor journals to define the term creativity. They posited that “Creativity is the interaction among aptitude, process, and the environment by which an individual or group produces a perceptible product that is both novel and useful as defined within a social context” (p. 90). Undoubtedly, creativity involves many different aspects, and it seems implausible to expect that a single all-purpose instrument can adequately assess a person’s creativity (Treffinger 2009). Several different approaches have been developed to measure creativity, including divergent thinking tests such as the Torrance Tests of Creative Thinking (TTCT; Torrance 1962, 1974, 1990) and the Consensual Assessment Technique (CAT; Amabile 1982). However, these assessments are time-consuming, and the CAT is also costly, requiring the recruitment of experts (Baer 2016a).
Another approach to assessing creativity uses questionnaire instruments that include self-report indexes of creativity. However, this avenue has received considerable criticism, with the validity of the scales questioned (Baer 2016b). Many researchers have measured other wide-ranging aspects of creativity using questionnaires as the simplest and most feasible method (for a list of creativity assessment questionnaires covering various aspects, see Long and Plucker 2015, p. 321). Notably, to assess creative behavioral aspects, researchers believe that creative individuals’ behavior and past experiences determine their later creativity (Colangelo et al. 1992). Conforming to this perception, many psychometric scales were developed, including the Creative Behavior Inventory (Hocevar 1979), the Creative Achievement Questionnaire (Carson et al. 2005), and the Kaufman Domains of Creativity Scale (Kaufman 2012). Among these instruments, the self-report Runco Ideational Behavior Scale (RIBS; Runco et al. 2001) was constructed to assess individual ideational behavior, an interchangeable term for thinking disposition or creative ideation. As mentioned above, creativity is often measured by tests based on fluency, originality, and flexibility of ideas (Kim 2011). Idea generation plays a crucial role in assessing creativity, relying on the central notion that “ideas can be treated as the products of original, divergent, and even creative thinking” (Runco et al. 2001, p. 394). Seen through the lens of models of creativity, the RIBS can be viewed through the Four Cs model proposed by Kaufman and Beghetto (2009). This model incorporates four levels at which creativity can be measured, viz., mini-c, little-c, pro-c, and big-C. At the mini-c level, creativity is measured by self-assessments, whereas at the little-c level it is measured by additional assessments beyond self-ratings.
At the pro-c level, a professional’s accomplishments might be employed, and major prizes and honors are used at the big-C level (Kaufman and Beghetto 2009). Runco et al. (2001) developed the RIBS with the intention that it could be used to assess everyday creativity, given that everyone is able to produce their own ideas. Therefore, the RIBS tends to fall into the mini-c aspect. Correspondingly, Kaufman and Beghetto (2009) indicated that “The primary purpose for assessment at the mini-c level would be to support creative ideation and nurture student creativity” (p. 8). In sum, the RIBS strongly emphasizes ideas as a product and can be used by all people in their daily lives.
Over the past two decades, the RIBS has been extensively employed in numerous studies to measure different constructs, e.g., everyday creativity (Cohen and Ferrari 2010; Benedek et al. 2012b; Benedek et al. 2012a), creative ideation (Pannells and Claxton 2008; Hao et al. 2016; Tyagi et al. 2017), creative behavior (An et al. 2016; Smith et al. 2016), and ideational behavior (Batey et al. 2010; Paek and Runco 2018). It has also been applied to different age groups, including children, adolescents, and elders. Sen (2016) explored the latent class structure of the RIBS with 765 Turkish middle school students and identified three classes: regular ideators, idea-producers, and idea-averters. Anderson et al. (2017) examined the creative ideational behaviors of US 6th-grade students in relation to student engagement by using the RIBS-C (RIBS for students). They found that flexibility in creative ideation was highly correlated with relational support (e.g., peers and teachers) and educational aspiration. Liu et al. (2017) assessed the mediating role of creative self-efficacy (CSE) in the relationship between active procrastination and creative ideation among 853 undergraduate students and found that active procrastination, CSE, and creative ideation were positively associated with each other. Pertaining to adolescent and adult age groups, the RIBS was utilized, along with other measurements, in a study by Benedek et al. (2013). The authors investigated how the reliability and validity of originality and fluency scores depend on the subjective top-scoring method among 105 participants aged 18 to 51 years. Their results indicated that the “subjective top-scoring method avoids the confounding of originality scores with fluency” (p. 346).
Simultaneously, many studies have investigated the underlying factor structure of the RIBS, with results suggesting various solutions: one factor (Runco et al. 2001), two factors (Rojas and Tyler 2018), and three factors (von Stumm et al. 2011). Nevertheless, these factor structures conflict across previous studies, and the number of studies assessing the RIBS psychometrically remains limited. Table 1 summarizes the published literature on RIBS psychometric assessment in different languages around the world and highlights the contribution of this research: adapting the RIBS to the Thai language in a new location, Southeast Asia, with a large sample size compared to previous studies.
The RIBS has been translated into many different languages. Kālis and Roķe (2011) adapted the RIBS-09 version (49 items) to the Latvian language and administered it to 107 Master’s degree psychology students and teachers. Results showed that the adapted version presented high internal consistency. Tsai (2015) adapted the original 23-item version to the Chinese language and administered the test to 107 children in Taiwan, with results suggesting good internal consistency among all items. Tsai further conducted a confirmatory factor analysis, and results suggested that a two-factor model gave a reasonable fit for the adapted Chinese RIBS instrument. Recently, López-Fernández et al. (2019) adapted the original 23-item RIBS by translating its items into Spanish. They administered the test to 116 Spanish-speaking individuals from different cities. Their results also demonstrated that two latent constructs were the best fit for the adapted Spanish RIBS. The present study differs from those reported in the literature review in that it was conducted in a different language with a larger sample. The primary purpose of this paper was to assess the 23 items of the original RIBS in the context of the Thai language and examine scale reliability and validity (Appendix).
To gain the maximum benefit from professional networks and ensure a high response rate, this study used a convenience sampling method in which participants were volunteers. Data were collected from undergraduate students studying in the departments of Accounting, Management, Marketing, Animal Science, Information Technology, and Educational Communication and Technology at five public universities in Thailand. A total of 524 responses were completed. Sixteen responses were excluded from the data analysis as outliers (using a critical point of 49.73 with the Mahalanobis distance method). The final sample comprised 508 students: 406 (80%) females and 102 (20%) males, with a mean age of 20.64 years (SD = 1.27). The number of females exceeded that of males, reflecting Thailand’s feminine society. In the study sample, 14.5% of the participants were freshmen, 44.1% sophomores, 21.2% juniors, and 20.2% seniors.
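The outlier screening above can be sketched as follows. This is an illustrative reconstruction, not the authors’ code: the study applied the method to the 23 items, where the chi-square critical value at p = .001 with df = 23 is approximately 49.73; for brevity, this toy example uses two variables and invented data so the covariance matrix can be inverted by hand.

```python
# Illustrative sketch (not the authors' code): screening multivariate
# outliers with the squared Mahalanobis distance
#   D^2 = (x - mu)^T S^-1 (x - mu).
# Toy bivariate data; the 2x2 covariance matrix is inverted directly.

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def mahalanobis_sq(point, data):
    """Squared Mahalanobis distance of `point` from bivariate `data`."""
    xs = [p[0] for p in data]
    ys = [p[1] for p in data]
    mx, my = mean(xs), mean(ys)
    sxx, syy, sxy = cov(xs, xs), cov(ys, ys), cov(xs, ys)
    det = sxx * syy - sxy * sxy                       # 2x2 determinant
    ixx, iyy, ixy = syy / det, sxx / det, -sxy / det  # inverse covariance
    dx, dy = point[0] - mx, point[1] - my
    return dx * dx * ixx + 2 * dx * dy * ixy + dy * dy * iyy

data = [(1, 2), (2, 3), (3, 3), (4, 5), (5, 5), (2, 2), (3, 4), (4, 4)]
d2 = mahalanobis_sq((5, 1), data)   # a point that breaks the x-y trend
# Cases whose D^2 exceeds the chi-square critical value for the chosen
# alpha and df are flagged as outliers, as the 16 responses were here.
```

With 23 variables the same formula applies; in practice the inverse covariance matrix would be computed with a linear algebra library rather than by hand.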
Before adapting the RIBS, the authors sought and received permission from Runco et al. (2001). The questionnaire comprised two parts. The first part collected demographic information, including gender, age, year level, and subject area. The second part contained the original 23 self-report RIBS items, which examine individuals’ idea usage, idea appreciation, and idea-generating skills as reflected in their usual behavior rather than covert activities or actions. Participants responded on a 5-point Likert-type scale ranging from 1 (never) to 5 (very often). The internal consistency of the RIBS items was highly satisfactory, with Cronbach’s α = .94.
Approval to carry out the study was granted by the University Institute Research Board (IRB), and informed consent was obtained from participants. Data collection was in accordance with human subjects’ guidelines and principles. A paper-and-pencil survey questionnaire was distributed to the participants. Before completing the questionnaire, participants were informed that their answers would remain anonymous and confidential. Participation was voluntary; participants did not gain any educational benefit (e.g., extra credit or course requirement fulfillment). This information was also written on the survey. The questionnaire was distributed to the participants and handed back in blank envelopes. At their discretion, participants either completed the questionnaire or returned a blank or partially completed question sheet. The questionnaire items were first translated into Thai and then back-translated into English, adopting the translation-back-translation procedure of Brislin (1980). The results were verified and validated.
Data analysis was conducted in three steps. In the first step, data underwent preliminary analysis to assess missing data and normality assumptions (univariate and multivariate). Notably, neither missing data nor violations of normality assumptions occurred. Skewness and kurtosis ranged from −.54 to .31, and Pearson’s correlations between items varied from .11 to .66, all below .80 (see Table 2). In the second step, data were analyzed using exploratory factor analysis (EFA) to determine a plausible factor structure for the RIBS instrument. Principal axis factoring (PAF) was performed using an oblique rotation method (Promax). We employed PAF to address the number and nature of the underlying factors based on participants’ responses (Hatcher 1994) and used oblique rotation to accommodate the theoretically expected factor correlation. The Kaiser-Meyer-Olkin (KMO) measure confirmed that the sample was appropriate for EFA and CFA: overall KMO = .95, and the KMO value of each item was > .89, above the .5 threshold specified as satisfactory. Bartlett’s test of sphericity yielded a chi-square of 6,034.87 with 253 degrees of freedom, p < .001, suggesting that correlations between items were substantial enough to run PAF. A parallel analysis and scree plot were run to ascertain the number of factors to extract, and factors with eigenvalues greater than 1 were retained (Kaiser 1960). Following Stevens (2002), the statistical significance of a loading is of minor importance compared with its size; accordingly, only factor loadings with values higher than .4 were interpreted. Cronbach’s alpha (α) was calculated separately for each subscale to ensure measurement scale reliability.
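The subscale reliability computation can be sketched in a few lines. This is an illustrative reconstruction of the standard Cronbach’s alpha formula, not the study’s code, and the toy responses are invented for demonstration.

```python
# Illustrative sketch (not the authors' code): Cronbach's alpha,
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """`items` is a list of item-score lists, one list per item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # per-person total score
    item_var_sum = sum(variance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Four hypothetical Likert items (1-5) rated by six respondents.
items = [
    [4, 3, 5, 2, 4, 3],
    [4, 2, 5, 2, 5, 3],
    [3, 3, 4, 1, 4, 2],
    [5, 3, 5, 2, 4, 3],
]
alpha = cronbach_alpha(items)   # high inter-item agreement yields alpha > .9
```

Values near the .91 and .82 reported below for the two subscales would be computed the same way, one call per subscale’s item set.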
In the final step, the sample data were examined by confirmatory factor analysis (CFA) to check whether they fitted the model identified by the EFA. Model fit assessment relied on several indices: the chi-square divided by its degrees of freedom (χ²/df), the comparative fit index (CFI), the Tucker-Lewis index (TLI), the standardized root mean square residual (SRMR), and the root mean square error of approximation (RMSEA). Hair et al. (2019) suggested that for models with between 12 and 30 observed variables (as in this study) and a sample larger than 250, χ²/df < 3, CFI or TLI > .94, SRMR of .08 or less (with CFI above .94), and RMSEA < .07 (with CFI of .94 or higher) could be considered a good model fit. They further suggested that using three to four indices is sufficient to evaluate model fit.
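The cutoffs quoted from Hair et al. (2019) can be encoded as a simple check. The helper below is our own illustration, not from the paper; it uses CFI as the incremental index since that is the value reported for the final model.

```python
# Illustrative helper (not from the paper): the Hair et al. (2019) cutoffs
# quoted in the text for models with 12-30 observed variables and n > 250.
def good_fit(chi2_df, cfi, srmr, rmsea):
    """True if all four quoted cutoffs are met."""
    return (chi2_df < 3        # relative chi-square
            and cfi > .94      # comparative fit index
            and srmr <= .08    # standardized root mean square residual
            and rmsea < .07)   # root mean square error of approximation

# The values reported later for the modified two-factor model:
fits = good_fit(chi2_df=2.82, cfi=.95, srmr=.04, rmsea=.06)  # True
```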
Exploratory factor analysis (EFA)
Results from the parallel analysis and scree plot suggested that a two-factor solution was ideal. Five items (IB11, 12, 13, 20, and 21; e.g., “I would take a university course which was based on original ideas”) were eliminated from the Thai RIBS inventory because they fell below the cutoff point (factor loadings below .4) or cross-loaded. All remaining items in each factor showed loading values over .4 (see Table 3). The two factors accounted for 48% of the variance: factor 1 (13 items, e.g., “I come up with a lot of ideas or solutions to problems”) and factor 2 (5 items, e.g., “I often have trouble sleeping at night, because so many ideas keep popping into my head”). Cronbach’s alpha values for the first and second factor subscales were .91 and .82, respectively. Kline (2015) suggested that good measurement models should demonstrate factor correlations no higher than .85; the Pearson correlation between the RIBS subscales was r = .65, p < .001. The two-factor model was tested further with CFA.
Confirmatory factor analysis (CFA)
Model diagnostics indicated no concerns with influential cases or assumption testing (e.g., multivariate normality). According to Runco et al. (2001), choosing a factor solution was controversial: statistically, their data fitted a two-factor model, but the lack of theoretical justification for interpreting two factors suggested that a one-factor solution was more recommendable and interpretable. Therefore, the one-factor and two-factor constructs were compared. As shown in Table 4, the two-construct model fitted the sample dataset better than the single-construct model. Consequently, further modifications of the two-factor model were carried out. The modified model provided a good fit to the sample data; however, one item (IB10, “I enjoy having leeway in the things I do and room to make up my own mind.”) was eliminated due to a nonsignificant loading (standardized factor loading below .5). The standardized factor loadings and reliability are shown in Table 5. The model fit indexes yielded satisfactory results and suggested that the factor structure was plausible: χ²/df = 2.82 < 3, CFI = .95 > .94, SRMR = .04 < .08, and RMSEA = .06 < .07. The path diagram of standardized estimates for the modified two-factor model is illustrated in Fig. 1.
Moreover, the two constructs of the modified model were further assessed by examining composite reliability and convergent and discriminant validity. CFA statistics showed composite reliability (CR) of .91 for factor 1 and .81 for factor 2. The average variance extracted (AVE), indexing convergent validity, was .46 for factor 1 and .47 for factor 2. Because the CR score of each factor was higher than its AVE score, the analysis demonstrated acceptable convergent validity (Hair et al. 2019). To check whether the two constructs were distinct from each other, the square root of the AVE of each factor was calculated. The square roots of AVE were .67 for factor 1 and .68 for factor 2, both smaller than the factor correlation of .77; therefore, the two constructs were related (Hair et al. 2019).
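The CR and AVE figures above follow the standard formulas computed from standardized factor loadings. The sketch below is illustrative only; the loadings are hypothetical, not the values from Table 5.

```python
# Illustrative sketch (not the authors' code): composite reliability (CR)
# and average variance extracted (AVE) from standardized loadings l_i,
#   CR  = (sum l)^2 / ((sum l)^2 + sum (1 - l^2))
#   AVE = sum(l^2) / k

def composite_reliability(loadings):
    s = sum(loadings)
    error = sum(1 - l * l for l in loadings)   # residual variance per item
    return s * s / (s * s + error)

def average_variance_extracted(loadings):
    return sum(l * l for l in loadings) / len(loadings)

loadings = [.72, .68, .65, .70, .66]   # hypothetical standardized loadings
cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)

# Fornell-Larcker check: sqrt(AVE) should exceed the factor correlation
# for discriminant validity to hold (.77 is the correlation reported here).
discriminant_ok = ave ** 0.5 > .77
```

With these hypothetical loadings the pattern mirrors the paper’s result: AVE falls just below .5, CR comfortably exceeds AVE, and the square root of AVE is smaller than the .77 factor correlation, so the discriminant check fails.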
The original 23 items of the RIBS were adapted in the context of the Thai language to assess the structure and psychometric properties of the scale. Overall, based on the evaluation results, the Thai version presented a valid measure to a certain extent. Factor analysis suggested a two-dimensional structure. In contrast, Tsai (2015), working with children, conducted an EFA whose results suggested four dimensions; this was likely due to the maturity and cross-cultural differences of the participants. The Thai version of the RIBS proved to have satisfactory reliability. A one-dimensional structure was suggested by Runco et al. (2001), but this single-factor solution was called into question by Tsai (2015) and Kālis and Roķe (2011), since no theoretically supported CFA had compared the fit of the single-factor model against a proposed two-factor model. Confirmatory factor analysis results confirmed that the two-factor construct demonstrated a better fit with improved psychometric characteristics. The two-factor model was therefore further modified for the purpose of construct validity. The revised model provided a suitable description of the data as an adequate and valid measure of ideational behavior. This result is consistent with Runco et al. (2001), Kālis and Roķe (2011), Tsai (2015), and López-Fernández et al. (2019), whose data all fitted a two-factor construct better. von Stumm et al. (2011) conducted an EFA on the RIBS and found three factors, namely quantity of ideas, absorption, and originality, but did not further confirm this factor structure through CFA.
This study found two factors, a result similar to Tsai (2015); however, the items loading onto the two constructs were different. Tsai’s second factor comprised items (1) “I have many wild ideas,” (6) “I like to play around with ideas for the fun of it,” (7) “It is important to be able to think of bizarre and wild possibilities,” (11) “My ideas are often considered ‘impractical’ or even ‘wild,’” and (18) “Some people might think me scatterbrained or absentminded because I think about a variety of things at once,” whereas in this study the second factor consisted of items (14) “Sometimes I get so interested in a new idea that I forget about other things that I should be doing,” (15) “I often have trouble sleeping at night, because so many ideas keep popping into my head,” (16) “When writing papers or talking to people, I often have trouble staying with one topic because I think of so many things to write or say,” (17) “I often find that one of my ideas has led me to other ideas that have led me to other ideas, and I end up with an idea and do not know where it came from,” and (18) “Some people might think me scatterbrained or absentminded because I think about a variety of things at once.” Tsai (2015) argued that such differences were due to differing perceptions of ideational behavior and divergent thinking between Eastern and Western cultures. Nonetheless, this study showed discrepancies with Tsai and supported the notion that there is no difference in individuals’ innate nature of creativity, as proposed by Lim and Plucker (2001).
Compared to the original study of Runco et al. (2001), the findings of this study presented the same item loadings on both factors, except that six items were eliminated from the Thai RIBS instrument due to conceptual and empirical justifications: IB10 (“I enjoy having leeway in the things I do and room to make up my own mind”), IB12 (“I would take a university course which was based on original ideas”), IB13 (“I am able to think about things intensely for many hours”), IB20 (“I am able to think up answers to problems that haven’t already been figured out”), and IB21 (“I am good at combining ideas in ways that others have not tried”) from construct 1, and one item, IB11 (“My ideas are often considered ‘impractical’ or even ‘wild’”), from construct 2. For empirical reasons, the items were dropped to improve model validity. For conceptual reasons, all dropped items were likely related to autonomy and originality of ideas, e.g., producing new, wild ideas and combining ideas. Buasuwan (2018) suggested that Thai students’ display of original-idea behavior is likely obstructed by norms and traditions, a cultural factor that pays great respect to seniority, educators, and higher authority. All item loadings in factor 1 reflected the number of ideas one possesses, while all items in factor 2 reflected barriers interrupting one’s thinking process. This study could not definitively label these two factors; more studies with theoretical interpretation are needed.
Given that the single-factor solution of Runco et al. (2001) had been called into question by Kālis and Roķe (2011) and Tsai (2015), this study provides clear and valid evidence that a two-factor solution can be precisely interpreted, based on the constructs’ convergent and discriminant validity. The Pearson correlation between the RIBS subscales was also robust and smaller than .85 (Kline 2015). These findings concur with López-Fernández et al. (2019), who likewise identified two explicit constructs measuring ideational behavior along with the independence and number of items in each construct. The current findings also support the conclusion that two distinct RIBS constructs are the more reasonable solution.
The study results are limited by several considerations. First, data collection relied on self-rating. Although the surveys were anonymous, students likely wanted to appear more creative than they are. In this regard, students may not have provided accurate, honest answers reflecting what they really feel, which may have resulted in inaccurate reports. Second, females greatly outnumbered males because of the feminine culture of Thailand. Third, the convenience sample was drawn from only five universities in a single country (Thailand); thus, the findings are limited to the collectivistic Thai culture, which prevents broader inferences. Future research should be conducted in individualistic countries or more masculine societies.
In terms of practical implications for education, the Thai version of the RIBS can be of benefit in most schools, and educators can gather a wide range of useful information regarding students’ levels of idea-generation skill. Information pertaining to the barriers perceived by students might offer instructors intervention strategies to reduce this burden. Teachers could also address low levels of creative ideation in their students by applying enhancement strategies. Fostering creative thinking in children may differ from doing so in late adolescence: children’s creativity manifests in the form of imaginative play or self-expression, whereas the creativity of late adolescents may lead to products or solutions to problems (Mark and Nur 2012). Therefore, educators must consider this fact before employing different teaching strategies or techniques for different age levels of students to maximize their thinking skills. In the context of contemporary higher education, educators may consider implementing problem-based learning (PBL; Boud and Feletti 1998) along with collaborative learning (CL; Bruffee 1998) in the classroom. As a learning model, PBL encourages students to think creatively and actively; by confronting real problems, PBL may help students generate many ideas (Hmelo-Silver 2004). When students face barriers that interrupt their thinking process, CL can help mediate. By its nature, CL does not allow students who have problems to remain alone. Through collaboration, students learn from and listen to each other. This allows students, together with their instructors, to overcome the problems and obstacles placed in their path (Barkley et al. 2014).
In sum, this study added theoretical knowledge regarding the characteristics of ideational behavior. Individuals typically exhibit a characteristic frequency of generating ideas; however, they sometimes also perceive barriers in finding solutions. In addition, the results suggest that the Thai version of the RIBS can be used as an additional self-assessment tool for measuring students’ creative ideation: a feasible, quick, and simple method to identify students’ ideation skills and discover whether they perceive any barriers that might be affecting their thinking process.
Finally, though the evidence reported in this study suggests that the Thai version of the RIBS is valid and useful for assessing ideational behavior, some researchers still doubt the usefulness of self-reported creativity assessment (Baer 2016b). Further studies should evaluate the properties of this instrument in order to add weight to its validity and reliability.
Availability of data and materials
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Abbreviations
AVE: Average variance extracted
CAT: Consensual Assessment Technique
CFA: Confirmatory factor analysis
CFI: Comparative fit index
df: Degrees of freedom
EFA: Exploratory factor analysis
IRB: Institute Research Board
PAF: Principal axis factoring
RIBS: Runco Ideational Behavior Scale
RIBS-C: Runco Ideational Behavior Scale for Students
RMSEA: Root mean square error of approximation
SRMR: Standardized root mean square residual
TTCT: Torrance Tests of Creative Thinking
Amabile, T. M. (1982). Social psychology of creativity: A consensual assessment technique. Journal of Personality and Social Psychology, 43(5), 997–1013 https://doi.org/10.1037/0022-3514.43.5.997.
An, D., Song, Y., & Carr, M. (2016). A comparison of two models of creativity: Divergent thinking and creative expert performance. Personality and Individual Differences, 90, 78–84 https://doi.org/10.1016/j.paid.2015.10.040.
Anderson, R. C., Pitts, C., & Smolkowski, K. (2017). Creative ideation meets relational support: Measuring links between these factors in early adolescence. Creativity Research Journal, 29(3), 244–256 https://doi.org/10.1080/10400419.2017.1360057.
Baer, J. (2016a). Chapter 1 - Domain specificity: Introduction and overview. In J. Baer (Ed.), Domain Specificity of Creativity, (pp. 1–16). San Diego: Academic Press.
Baer, J. (2016b). Chapter 2 - Research evidence for domain specificity. In J. Baer (Ed.), Domain Specificity of Creativity, (pp. 17–54). San Diego: Academic Press.
Barkley, E. F., Major, C. H., & Cross, K. P. (2014). Collaborative Learning Techniques: A Handbook for College Faculty. San Francisco: Jossey-Bass.
Batey, M., Chamorro Premuzic, T., & Furnham, A. (2010). Individual differences in ideational behavior: Can the big five and psychometric intelligence predict creativity scores? Creativity Research Journal, 22(1), 90–97 https://doi.org/10.1080/10400410903579627.
Benedek, M., Franz, F., Heene, M., & Neubauer, A. C. (2012a). Differential effects of cognitive inhibition and intelligence on creativity. Personality and Individual Differences, 53(4), 480–485 https://doi.org/10.1016/j.paid.2012.04.014.
Benedek, M., Könen, T., & Neubauer, A. C. (2012b). Associative abilities underlying creativity. Psychology of Aesthetics, Creativity, and the Arts, 6(3), 273–281 https://doi.org/10.1037/a0027059.
Benedek, M., Mühlmann, C., Jauk, E., & Neubauer, A. C. (2013). Assessment of divergent thinking by means of the subjective top-scoring method: Effects of the number of top-ideas and time-on-task on reliability and validity. Psychology of Aesthetics, Creativity, and the Arts, 7(4), 341–349 https://doi.org/10.1037/a0033644.
Boud, D., & Feletti, G. (1998). The challenge of problem based learning. Routledge.
Brislin, R. W. (1980). Cross-cultural research methods. In I. Altman, A. Rapoport, & J. F. Wohlwill (Eds.), Environment and Culture, (pp. 47–82). Boston: Springer US.
Bruffee, K. A. (1998). Collaborative learning: Higher education, interdependence, and the authority of knowledge. Baltimore: The Johns Hopkins University Press.
Buasuwan, P. (2018). Rethinking Thai higher education for Thailand 4.0. Asian Education and Development Studies, 7(2), 157–173 https://doi.org/10.1108/AEDS-07-2017-0072.
Carson, S. H., Peterson, J. B., & Higgins, D. M. (2005). Reliability, validity, and factor structure of the creative achievement questionnaire. Creativity Research Journal, 17(1), 37–50 https://doi.org/10.1207/s15326934crj1701_4.
Cohen, J. R., & Ferrari, J. R. (2010). Take some time to think this over: The relation between rumination, indecision, and creativity. Creativity Research Journal, 22(1), 68–73 https://doi.org/10.1080/10400410903579601.
Colangelo, N., Kerr, B., Hallowell, K., Huesman, R., & Gaeth, J. (1992). The Iowa inventiveness inventory: Toward a measure of mechanical inventiveness. Creativity Research Journal, 5(2), 157–163 https://doi.org/10.1080/10400419209534429.
Diakidoy, I. N., & Constantinou, C. P. (2001). Creativity in physics: Response fluency and task specificity. Creativity Research Journal, 13(3–4), 401–410 https://doi.org/10.1207/S15326934CRJ1334_17.
Guilford, J. P. (1956). The structure of intellect. Psychological Bulletin, 53(4), 267–293 https://doi.org/10.1037/h0040755.
Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2019). Multivariate data analysis. Hampshire: Cengage Learning EMEA.
Hao, N., Tang, M., Yang, J., Wang, Q., & Runco, M. A. (2016). A new tool to measure malevolent creativity: The malevolent creativity behavior scale. Frontiers in Psychology, 7, 682 https://doi.org/10.3389/fpsyg.2016.00682.
Hatcher, L. (1994). A step-by-step approach to using SAS for factor analysis and structural equation modeling. Cary: SAS Institute.
Hmelo-Silver, C. E. (2004). Problem-based learning: What and how do students learn? Educational Psychology Review, 16(3), 235–266 https://doi.org/10.1023/B:EDPR.0000034022.16470.f3.
Hocevar, D. (1979). The development of the Creative Behavior Inventory. Paper presented at the annual meeting of the Rocky Mountain Psychological Association. (ERIC Document Reproduction Service No. ED 170 350.)
Kaiser, H. F. (1960). The application of electronic computers to factor analysis. Educational and Psychological Measurement, 20(1), 141–151 https://doi.org/10.1177/001316446002000116.
Kālis, E., & Roķe, L. (2011). Adaptation of Runco Ideational Behavior Scale in Latvia. Journal of Pedagogy and Psychology Signum Temporis, 4(1), 36–45 https://doi.org/10.2478/v10195-011-0043-4.
Kaufman, J. C. (2012). Counting the muses: Development of the Kaufman Domains of Creativity Scale (K-DOCS). Psychology of Aesthetics, Creativity, and the Arts, 6(4), 298–308 https://doi.org/10.1037/a0029751.
Kaufman, J. C., & Beghetto, R. A. (2009). Beyond big and little: The four C model of creativity. Review of General Psychology, 13(1), 1–12 https://doi.org/10.1037/a0013688.
Kim, K. H. (2011). The creativity crisis: The decrease in creative thinking scores on the torrance tests of creative thinking. Creativity Research Journal, 23(4), 285–295 https://doi.org/10.1080/10400419.2011.627805.
Kline, R. (2015). Principles and practice of structural equation modeling (4th ed.). New York: Guilford Press.
Kupers, E., Lehmann-Wermser, A., McPherson, G., & van Geert, P. (2018). Children’s creativity: A theoretical framework and systematic review. Review of Educational Research, 89(1), 93–124 https://doi.org/10.3102/0034654318815707.
Lim, W., & Plucker, J. A. (2001). Creativity through a lens of social responsibility: implicit theories of creativity with Korean samples. The Journal of Creative Behavior, 35(2), 115–130 https://doi.org/10.1002/j.2162-6057.2001.tb01225.x.
Liu, W., Pan, Y., Luo, X., Wang, L., & Pang, W. (2017). Active procrastination and creative ideation: The mediating role of creative self-efficacy. Personality and Individual Differences, 119, 227–229 https://doi.org/10.1016/j.paid.2017.07.033.
Long, H., & Plucker, J. A. (2015). Assessing creative thinking: Practical applications. In R. Wegerif, L. Li, & J. C. Kaufman (Eds.), The Routledge International Handbook of Research on Teaching Thinking, (pp. 315–329). London: Routledge.
López-Fernández, V., Merino-Soto, C., Maldonado Fruto, M. L., & Orozco Garavito, C. A. (2019). Analysis of the descriptive and psychometric characteristics of the internal structure of the RIBS in Spanish. Creativity Research Journal, 31(2), 229–235 https://doi.org/10.1080/10400419.2019.1577123.
Runco, M. A., & Cayirdag, N. (2012). The development of children’s creativity. In O. N. Saracho, & B. Spodek (Eds.), Handbook of Research on the Education of Young Children (3rd ed.). New York: Routledge.
P21: Partnership for 21st Century Learning. (2019). P21 framework for 21st century learning definitions. Battelle for Kids. http://static.battelleforkids.org/documents/p21/P21_Framework_DefinitionsBFK.pdf.
Paek, S. H., & Runco, M. A. (2018). A latent profile analysis of the criterion-related validity of a divergent thinking test. Creativity Research Journal, 30(2), 212–223 https://doi.org/10.1080/10400419.2018.1446751.
Pannells, T. C., & Claxton, A. F. (2008). Happiness, creative ideation, and locus of control. Creativity Research Journal, 20(1), 67–71 https://doi.org/10.1080/10400410701842029.
Plucker, J., Beghetto, R. A., & Dow, G. T. (2004). Why isn’t creativity more important to educational psychologists? Potentials, pitfalls, and future directions in creativity research. Educational Psychologist, 39(2), 83–96 https://doi.org/10.1207/s15326985ep3902_1.
Ritter, S. M., & Mostert, N. (2017). Enhancement of creative thinking skills using a cognitive-based creativity training. Journal of Cognitive Enhancement, 1, 243–253 https://doi.org/10.1007/s41465-016-0002-3.
Rojas, J. P., & Tyler, K. M. (2018). Measuring the creative process: A psychometric examination of creative ideation and grit. Creativity Research Journal, 30(1), 29–40 https://doi.org/10.1080/10400419.2018.1411546.
Runco, M. A., Plucker, J. A., & Lim, W. (2001). Development and psychometric integrity of a measure of ideational behavior. Creativity Research Journal, 13(3–4), 393–400 https://doi.org/10.1207/S15326934CRJ1334_16.
Sen, S. (2016). Applying the mixed Rasch model to the Runco Ideational Behavior Scale. Creativity Research Journal, 28(4), 426–434 https://doi.org/10.1080/10400419.2016.1229985.
Smith, R. M., Sardeshmukh, S. R., & Combs, G. M. (2016). Understanding gender, creativity, and entrepreneurial intentions. Education + Training, 58(3), 263–282 https://doi.org/10.1108/ET-06-2015-0044.
Stevens, J. (2002). Applied multivariate statistics for the social sciences (4th ed.). Hillsdale: Erlbaum Associates.
Tep, P., Maneewan, S., Chuathong, S., & Easter, M. A. (2018). A review of influential factors affecting undergraduate students’ creative thinking. Paper presented at the 11th International RAIS Conference on Social Sciences, Johns Hopkins University, in Montgomery County Campus, Rockville, MD, USA. https://doi.org/10.5281/zenodo.1569253.
Torrance, E. P. (1962). Thinking creatively with pictures – Figural booklet A. Bensenville: Scholastic Testing Service.
Torrance, E. P. (1974). Torrance Tests of Creative Thinking: Norms and technical manual. Bensenville: Scholastic Testing Service.
Torrance, E. P. (1990). The Torrance Tests of Creative Thinking: Norms – technical manual figural (streamlined) forms A and B. Bensenville: Scholastic Testing Service.
Treffinger, D. J. (2009). Myth 5: Creativity is too difficult to measure. Gifted Child Quarterly, 53(4), 245–247 https://doi.org/10.1177/0016986209346829.
Tsai, K. C. (2015). Assessing a Chinese version of the Runco Ideational Behavior Scale. Social Behavior and Personality: An International Journal, 43(7), 1111–1122 https://doi.org/10.2224/sbp.2015.43.7.1111.
Tyagi, V., Hanoch, Y., Hall, S. D., Runco, M., & Denham, S. L. (2017). The risky side of creativity: Domain specific risk taking in creative individuals. Frontiers in Psychology, 8, 145 https://doi.org/10.3389/fpsyg.2017.00145.
von Stumm, S., Chung, A., & Furnham, A. (2011). Creative ability, creative ideation and latent classes of creative achievement: What is the role of personality? Psychology of Aesthetics, Creativity, and the Arts, 5(2), 107–114 https://doi.org/10.1037/a0020499.
Acknowledgements
The authors acknowledge the financial support provided by the Petchra Pra Jom Klao Ph.D. Scholarship Fund, King Mongkut’s University of Technology Thonburi, Thailand. The authors also wish to thank Jessica Clayton and Sinath Tep for their helpful comments on earlier versions of this manuscript.
Ethics approval and consent to participate
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Consent for publication
Informed consent was obtained from all individual participants included in the study.
Competing interests
The authors declare that they have no conflict of interest.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Tep, P., Maneewan, S. & Chuathong, S. Psychometric examination of Runco Ideational Behavior Scale: Thai adaptation. Psicol. Refl. Crít. 34, 4 (2021). https://doi.org/10.1186/s41155-020-00170-9
Keywords
- Creativity assessment
- Ideational behavior
- Psychometric examination
- RIBS Thai version
- Runco Ideational Behavior Scale
- Undergraduate student