What the Critical Thinking Data Tells Us: College Level Learners

In the past 30 years, critical thinking has moved from a theoretical academic discussion to a global concern for employers, educators, and those seeking to advance truth-seeking for the sake of democracy and our common good. This report is a longitudinal look at assessment data collected at colleges and universities in the US and globally. It offers some insight into the strength of critical thinking in our future leaders.

What have we learned about college students' critical thinking skills?

Data described in these representative samples were collected using leading measures of critical thinking skills (the CCTST) and critical thinking mindset (the CCTDI). Because the assessments have been kept culturally relevant, and psychometrically valid, reliable, and equivalent over time, we can ask some questions about what has been happening in higher education.[1] These assessments are calibrated to the college level population. They can capture change within an individual and also differences between groups.

We have also included a review of the change in documented skill levels in students pursuing professional education credentials in Business and the Health Sciences during the COVID years. Here the data are drawn from undergraduate and graduate level programs using the BCTST and the HSRT. In each case, one can see the effects of cognitive fatigue, illness, and lack of focus on reasoning skills scores. Mindset metrics confirmed these effects.

College students vary widely in the strength of their critical thinking skills.

The CCTST is calibrated for the college level student. The representative USA sample of CCTST scores shown in Figure 1 was modeled in 2019 using an algorithm to include students from all demographic, geographic, cultural, economic, and college program foci. It is typical of the relatively normal distribution of scores on the CCTST in the overall US four-year undergraduate population up to that year.

College level students range quite broadly in their demonstration of critical thinking skills. Some have considerable strength when they enter college; others have weaknesses that challenge their ability to successfully engage their program of study. Those who score in the Strong (green bars) or Superior (blue bars) ranges have well-practiced critical thinking skills and can apply them to presented scenarios. Those with scores in the Weak (orange bars) or Not Manifested (red bars) ranges commonly make reasoning errors and inaccurate judgments, and they often have great difficulty interpreting, analyzing, inferring, explaining, and evaluating new information and new situations.

Humans vary along a continuum, depending on how well we have developed our ability to accurately identify and interpret new information or analyze emerging problems. Even strong critical thinkers may have more difficulty when the reasoning involves numbers. Some of us lack the foresight to analyze a situation well enough to manage risks. Some of us are poor at evaluating information, and some are poor at critiquing the quality of our previous decisions. In other words, we are all vulnerable to common reasoning errors. To be reliably strong in critical thinking, we need to practice our critical thinking skills and develop our thinking mindset.

A common question: Is college level education effectively training critical thinking?

To explore this question, we examined CCTST critical thinking skills scores in a randomized, proportionate 4-year college sample at institutions of varying types to see how scores from 2012 compared to scores from 2019. We chose this time frame because before 2012, although there was much talk of the importance of assessing critical thinking, not much actual assessment was being done at many institutions. We chose 2019 because it marks the end of the period during which college education was not yet influenced by the pandemic and its associated hardships. We discuss these effects below.

In this Figure 2 sample there are 2,000 cases for each time period. To be clear, this is not a comparison of scores for the same individuals seven years apart. These are two samples of 2,000 students randomly selected using variables to model the college student population at each of the two time periods.

Randomization helps ensure that our analysis sample accurately represents the population of college students in each time frame, and controls the many variables involved as much as possible. To see this, consider the many reasons that critical thinking is assessed, and the effects that these reasons may have on our analysis sample. For instance, the CCTST may have been used to assess students at risk, to select students for impacted programs, to study the effectiveness of an educational approach, to respond to state funding mandates, or for an accreditation self-study of learning outcomes (six common uses of the assessment). Some institutions may have had well-designed quality improvement programs in place during this period, while others were collecting data only to minimally satisfy externally imposed obligations. Randomization is the best way to effectively control the many variables that could introduce measurement bias.
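To make the idea concrete, here is a minimal sketch of proportionate stratified sampling, assuming the records live in a pandas DataFrame. The stratification columns shown are hypothetical placeholders; the actual modeling algorithm and variables are proprietary, as noted later in this report.

```python
# Illustrative sketch only: draw a fixed-size random sample from a pool of
# assessment records while preserving each stratum's share of the pool.
# "institution_type" and "region" are hypothetical placeholder columns.
import pandas as pd

def draw_proportionate_sample(pool: pd.DataFrame, strata: list,
                              n_total: int, seed: int = 42) -> pd.DataFrame:
    """Sample so each stratum keeps roughly its proportion of the pool."""
    frac = n_total / len(pool)  # assumes n_total <= len(pool)
    return (pool.groupby(strata, group_keys=False)
                .apply(lambda g: g.sample(frac=frac, random_state=seed)))

# e.g., a 2,000-case sample balanced on two hypothetical variables:
# sample_2019 = draw_proportionate_sample(pool_2019, ["institution_type", "region"], 2000)
```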

OVERALL scores in both the 2012 and 2019 randomized samples were normally distributed. The change in CCTST OVERALL score, from 15.33 to 16.73, demonstrates an average gain of 1.4 points, which is statistically significant (t = 9.10, p < .001). More important, this change is educationally significant, demonstrating that those in the 2019 sample were better able to reason to an accurate response and not fall prey to common human reasoning errors. This supports the assertion that the educational emphasis on training reasoning skills is paying off. Figure 2 illustrates these findings.

Figure 2. CCTST overall scores for 2012 (dashed line) and 2019 (solid line).

Going deeper than the CCTST OVERALL score, our analysis of these two samples found statistically significant growth in all of the individual cognitive skill metrics assessed: Analysis (t = 7.84, p < .001), Inference (t = 7.96, p < .001), Evaluation (t = 5.66, p < .001), Induction (t = 11.78, p < .001), and Deduction (t = 5.55, p < .001).
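For readers who want to run this style of cohort comparison on their own assessment data, the sketch below applies Welch's two-sample t-test to each metric. The DataFrame layout and column names are assumptions for illustration; the report does not state which t-test variant produced the statistics above.

```python
# Hedged sketch: compare two independent cohorts on each CCTST metric.
# Assumes one DataFrame per cohort with a column per metric and a row per
# student; Welch's t-test (unequal variances) is one reasonable choice.
import pandas as pd
from scipy import stats

METRICS = ["OVERALL", "Analysis", "Inference", "Evaluation", "Induction", "Deduction"]

def compare_cohorts(df_2012: pd.DataFrame, df_2019: pd.DataFrame) -> pd.DataFrame:
    """Mean gain and Welch's t-test result for each metric, 2019 vs. 2012."""
    rows = []
    for m in METRICS:
        t, p = stats.ttest_ind(df_2019[m], df_2012[m], equal_var=False)
        rows.append({"metric": m,
                     "mean_gain": df_2019[m].mean() - df_2012[m].mean(),
                     "t": round(t, 2),
                     "p": p})
    return pd.DataFrame(rows)
```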

The publications of independent researchers and doctoral dissertation scholars have documented growth in the critical thinking skills of groups of students as a result of their training efforts (see Research – Training Critical Thinking and Leadership for a list of some of these citations). Publications that describe effective ways to teach and train critical thinking in different disciplines have grown in both quality and quantity. Peer-reviewed research reports on the effectiveness of case-based learning, the use of human simulators, reflective journaling, concept mapping, and various other learning approaches.[2] Meta-analyses, like those conducted by Abrami and colleagues[3] (2008 and 2015), further document that students gain strength in critical thinking after an effective training program. So yes, we can confidently say that college education effectively trains critical thinking. Our discussions with researchers, dissertation students, and employee development professionals lead us to conclude that initiatives to improve critical thinking are occurring globally, in at least 50 countries.

Is there evidence that all types of degree program students are becoming better critical thinkers over time?

Year   Associate Degree Students   Baccalaureate Students   Graduate Students
2005   13.3 (71.0)                 15.0 (73.4)              19.0 (79.0)
2012   13.7 (71.3)                 16.5 (75.5)              19.0 (79.0)
2019   14.0 (72.0)                 16.3 (75.3)              20.0 (80.4)

The mean scores for OVERALL critical thinking skills in our population comparison samples have been increasing over time in all types of educational programs (2-year, 4-year, and graduate level programs). Our cross-sectional data, although uncontrolled for the many changes that have occurred in higher education over the 2005 to 2019 time frame, suggest that educational efforts to help students focus on their reasoning process are having a positive effect on their critical thinking skills.

Figure 3: Significant cross-sectional improvements at all program levels.

This evidence is very modest (Figure 3). And yet it is encouraging when we consider the changes in college level learning over this time frame. Today, educational institutions enroll a higher percentage of high school graduates, and they offer a wider variety of programs to an ever more diverse student population than in 2005 (e.g., traditional vs. re-entry students, full-time vs. part-time, working vs. not, on-site vs. on-line). There have also been significant shifts in the proportion of public versus private educational programs, an increase in for-profit educational organizations, and an increasing focus on technology programs. Graduate level programs, always highly selective, have changed in their proportion of globally admitted students. So although the news about potential gains in critical thinking skills in the college student population is positive, we need to continue to examine this issue.

Which cognitive skills are the strongest in the college level learner?

Early assessments of critical thinking skills provided a breakdown of skills for the cognitive dimensions of Analysis, Inference, Evaluation, Inductive Reasoning, and Deductive Reasoning. More recently they have been expanded to include Interpretation, which assesses how well the individual can distinguish relevant from irrelevant information and so get the problem right. Also added are sub-scores for Explanation (can the individual clearly describe what is relevant and why?) and Numeracy (can the individual apply their reasoning skills in quantitative contexts?).

In the combined population of college level learners, stronger scores are more typically seen for Analysis, Inference, Explanation, and Induction. Weaker scores are more common for Interpretation (many students have difficulty identifying the critical details of the problem), Evaluation (many cannot judge the quality of an analysis, inference, judgment, etc.), Deduction (many have difficulty reasoning in logically precise contexts), and Numeracy (many find it difficult to reason in contexts that involve numbers, proportions, probability, flow rates and other quantitative conditions).

What about college students’ critical thinking mindset?

Our own studies and a search of publications using CCTDI data show that some of the mindset attributes derived from the APA Delphi conceptual definition of the ideal critical thinker are more strongly characteristic of college level learners than of the population at large. But these attributes are malleable, and the assumption that a thinking mindset is only made stronger by exposure to college level learning was not supported in early longitudinal research. While most students grow stronger in their thinking mindset (becoming more truth-seeking, systematic, open-minded, foresightful, inquisitive, and mature in their judgment process), others become less likely to engage problems, explore new ideas, or revise a judgment in light of new information.

Strong reasoning skills are most commonly associated with strength in thinking mindset, but it is possible to solve problems using analysis, evaluation, inference, and the other skills while remaining closed-minded about ideas that differ from one's own. It is also possible to have a mindset that values strong reasoning yet to possess relatively poor reasoning skills. This happens when there is a lack of reflection on, and self-evaluation of, one's thinking process, and a lack of training and effort in practicing problem analysis and reasoning skills.

Can thinking mindset be improved in college level learners?

Evidence for the improvement of the attributes associated with a positive disposition toward critical thinking is plentiful in specific cross-sectional and matched-pairs samples, collected under controlled conditions. Over several decades, many baccalaureate institutions have collected thinking mindset data with the CCTDI as a component of their outcomes assessment projects (perhaps for a SACS regional QEP accreditation, or an AACSB business undergraduate accreditation self-study), focusing on improving students' thinking mindset and using the CCTDI to assess progress.

Following our previous plan of comparing data from two random but comparable samples, we created a randomized sample of 450 undergraduate students at SACS institutions whose CCTDI data were gathered before 2012, and compared these data with another randomized sample of 450 students assessed in 2017-2019, to see whether there were significant gains on any of the seven thinking mindset attributes.

Scores above 40 on the CCTDI scales indicate consistent affirmation and positive endorsement of the specific mindset attribute. Scores under 30 show consistent disinclination and rejection of the attribute. Scores in the 30s indicate ambivalence about the attribute. Figure 4 shows the mean scores for three of the seven mindset attributes (Truth-seeking, Open-mindedness, and Maturity of Judgment) at both time frames. Although some students in these samples did not demonstrate these mindset characteristics, the mean scores for all three attributes sat well above the rejection range in both time periods, with Open-mindedness reaching the positive endorsement range by 2019. Mean scores for Truth-seeking and Open-mindedness also increased significantly by 2019; the gain for Maturity of Judgment was smaller and only marginally significant.

Mindset Attribute      Year   N     Mean    SD    Mean Gain   t-value   p-value
Truth-seeking          2012   450   32.79   6.8
Truth-seeking          2019   450   34.80   6.9   2.01        4.33      p < .001
Open-mindedness        2012   450   39.67   6.1
Open-mindedness        2019   450   41.31   6.3   1.64        4.01      p < .001
Maturity of Judgment   2012   450   37.94   8.4
Maturity of Judgment   2019   450   38.87   7.5   0.93        1.72      p < .09

Figure 4. Growth in three thinking mindset attributes over time.
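The interpretation bands described above reduce to a simple lookup. Here is a minimal sketch, using exactly the thresholds stated earlier (above 40, below 30, and the 30s):

```python
# Minimal sketch of the CCTDI interpretation bands described in the text:
# above 40 = consistent positive endorsement of the attribute,
# below 30 = consistent disinclination/rejection, the 30s = ambivalence.
def interpret_cctdi_scale(score: float) -> str:
    if score > 40:
        return "positive endorsement"
    if score < 30:
        return "disinclination"
    return "ambivalent"

# e.g., the 2019 Truth-seeking mean of 34.80 falls in the ambivalent band,
# while the 2019 Open-mindedness mean of 41.31 reaches positive endorsement.
```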

Scores for Analyticity (Foresight), Inquisitiveness, and Confidence in Reasoning did not improve significantly over time in these randomly selected samples, but all three averaged above 40 (positive endorsement) at both time periods. Keeping in mind that a person may express strong confidence in reasoning and yet not demonstrate the skill to warrant that confidence, this scale is most meaningful when examined at the individual level alongside a reasoning skills score. A change in Confidence in Reasoning scores after training in reasoning skills may represent a more accurate self-assessment of reasoning skill. Scores for Maturity of Judgment and Systematicity were least strong at pretest and did not gain significantly at posttest, making these mindset attributes natural foci for curriculum improvement.

Much the same picture emerges when we carry out this same analysis using randomized samples from associate level degree programs and graduate level programs. Graduate level samples generally tend to have stronger mindset attribute scores than undergraduate level samples. This is to be expected because of the selectivity and the added educational experience and commitment of the graduate samples.

Selective admissions

Student selectivity can explain much of the difference in sample means. For example, the whole baccalaureate student population (the population sample we have been discussing in this report), which includes top-tier institutions, had a mean CCTST OVERALL score of 16.3 (75.3) in 2019. Yet the mean at regional, open-enrollment universities was 14.8 (73.2). Regional universities are more likely to have admission policies that accept students who have weaker critical thinking skills and perhaps other educational gaps; these students begin study with a vision of bridging those gaps. In contrast, the mean CCTST OVERALL score for baccalaureate students at Research I universities was 18.0 (77.6), a result of admissions criteria that select students who arrive with stronger critical thinking skills on average.

Institutions of different kinds find it meaningful to benchmark their scores against their peers. For this purpose, several sets of up-to-date comparison percentiles for the CCTST OVERALL score are maintained (a brief benchmarking sketch follows this list):

  • Research I Universities — Baccalaureate Students
  • US Regional Open-Admissions Universities — Baccalaureate Students
  • Globally Ranked Universities — Baccalaureate Students
  • Globally Ranked Universities — Graduate Students
  • Health Sciences Undergraduate Students
  • Health Sciences Graduate Students
  • STEM Undergraduate Students
  • Graduate Students and Professionals
  • Undergraduate Students at Four-Year Colleges and Universities
  • Undergraduate Students at Two-Year Colleges
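As a rough illustration of how such benchmarking works, the sketch below computes an individual's percentile rank within a comparison group. The score distribution here is randomly generated for the example; real benchmarking would use the maintained percentile sets listed above.

```python
# Illustrative benchmarking sketch. The comparison-group scores are invented
# placeholders (a clipped normal distribution), not published norms.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
comparison_scores = rng.normal(loc=16.3, scale=4.0, size=2000).clip(0, 34)

def percentile_rank(score: float) -> float:
    """Percentile rank of one CCTST OVERALL score within the comparison group."""
    return stats.percentileofscore(comparison_scores, score)

print(f"A score of 18.0 sits near the {percentile_rank(18.0):.0f}th percentile here.")
```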

Selectivity of admission is only one variable to consider when examining data for possible evidence of improvement in students’ critical thinking. The actual algorithm for modeling student populations for our comparison samples is proprietary, but many of the variables that we have used can be anticipated: geographic location, size of student body, rural/urban, public/private, online/residential, and socioeconomic descriptors are among those used to model these populations.

Critical Thinking Skills Assessment in the Pandemic Years of 2020-2022

The data show that a growing proportion of students have been committing common reasoning errors since 2020. Historically, there have always been college applicants, and even college graduates, who fail to demonstrate competence in critical thinking skills (refer to Figure 1 for the usual distribution of CCTST OVERALL scores in the population sample).

Figure 5: Critical Thinking Skills – 4 YR Undergraduates 2020-2022

Since 2020, the range of scores for this national aggregate sample has not changed, but the distribution of scores has shifted in a weaker direction: the proportion of students with scores in the moderate and weak ranges has increased. Weaker scores on the CCTST occur when individuals commit common human reasoning errors, get the problem wrong, and struggle to draw a warranted inference when they should be able to do so with confidence. After years of stability, such a significant change in CCTST OVERALL scores for the college level learner requires an explanation.
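One way to quantify a shift like this is to bucket scores into performance bands and test whether band membership depends on cohort. The sketch below uses assumed cut points, since the official CCTST band boundaries are not reproduced in this report.

```python
# Sketch of a distribution-shift check: bucket OVERALL scores into performance
# bands and run a chi-square test of independence across cohorts. The cut
# points are illustrative assumptions, not the official CCTST score ranges.
import numpy as np
from scipy.stats import chi2_contingency

BAND_EDGES = [0, 8, 13, 19, 24, 35]   # assumed edges for five bands
BAND_NAMES = ["not manifested", "weak", "moderate", "strong", "superior"]

def band_counts(scores) -> np.ndarray:
    counts, _ = np.histogram(scores, bins=BAND_EDGES)
    return counts

def distribution_shifted(pre_scores, post_scores, alpha=0.05) -> bool:
    """True if band proportions differ significantly between the two cohorts."""
    table = np.vstack([band_counts(pre_scores), band_counts(post_scores)])
    chi2, p, dof, expected = chi2_contingency(table)
    return p < alpha
```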

Why suspect COVID as a cause?

To obtain valid and reliable assessments of cognitive skills, an individual should complete the assessment in a language in which they are proficient, should be rested and in an environment free of distractions, and should not be uncharacteristically ill or distressed. Although these assessments are given online and many accommodations can be provided to optimize the assessment environment, during the pandemic it has been uncharacteristically difficult, or even impossible, to maintain conditions that are optimal for a cognitively demanding assessment.

There is also evidence that some students may be additionally affected by COVID's sequelae. Many COVID sufferers who have been hospitalized have developed severe cognitive issues that persist at discharge and beyond. It is not too surprising to see cognitive impairment while a person is suffering from an acute viral infection, particularly one known to cause fever and severe headache. More surprising are the reports of cognitive symptoms in cases where the infection itself is described as 'mild', continuing for months after the infection has resolved.

Nearly everyone who describes themselves as having COVID symptoms over several weeks mentions a disturbance in the ability to think well, difficulty with concentration, and in some cases short-term memory loss; some use the term 'brain fog'. As we have had the opportunity to follow these people in the clinic, many deny experiencing mental health issues. Rather, they describe a difficulty focusing that interferes with their ability to work. They say that they become confused, or they feel unexpectedly disorganized, when they try to focus on work or handle problems. Putting their thoughts into words is sometimes more difficult.

Clinical studies have documented forgetfulness, low efficiency, and error-prone task execution.[4] These cognitive issues appear independent of the severity of symptoms at the time of infection, and they persist for many months or longer in a yet-to-be-determined percentage of individuals. Most recently, concern about the long-term effects of COVID was raised by a population study from the UK: in a group of 95,969 non-hospitalized young people (ages 18-29), a symptom cluster including brain fog was the most common manifestation of long COVID.[5] Recurrent infection within a relatively short time frame is now a complication that will increase the impact of COVID on reasoning capability and learning.

Given the impact on learning and work performance, a more systematic investigation seems warranted. In July, the CDC warned that its electronic health data show long COVID symptoms in one in five adult COVID-19 survivors.[6] It is vital to understand and quantify COVID's neurological footprint on work performance and educational achievement.

Our assessments focus on scale scores to permit educators and trainers to determine how best to design training for an observed gap in a particular individual or group. We see dips in scores across the various cognitive assessment areas: analysis, interpretation, inference, evaluation, explanation, induction, deduction, and numeracy.

Figure 6: Interval Plot of Evaluation Scores 2019 through 2022
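For readers reproducing an interval summary like Figure 6 from their own data, a minimal sketch follows. The DataFrame layout (a "year" column plus one column per scale score) is an assumption for illustration, not the actual dataset.

```python
# Sketch of the summary statistics behind an interval plot: per-year mean and
# 95% confidence interval for one scale score (here, "Evaluation").
import pandas as pd
from scipy import stats

def interval_summary(df: pd.DataFrame, scale: str = "Evaluation") -> pd.DataFrame:
    rows = []
    for year, grp in df.groupby("year"):
        mean = grp[scale].mean()
        sem = stats.sem(grp[scale])  # standard error of the mean
        lo, hi = stats.t.interval(0.95, len(grp) - 1, loc=mean, scale=sem)
        rows.append({"year": year, "mean": mean, "ci_low": lo, "ci_high": hi})
    return pd.DataFrame(rows)
```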

Hopeful Signs

Only in the past few months have we begun to see improved scores in several discipline groups. Most notably, scores on the HSRT for the health science disciplines have risen significantly in the past eight months. This is a hopeful sign that the changes we have observed are a fluctuation related to the pandemic and its influence on our collective ability to focus and think well, rather than some other unknown variable in the population.

What we can be sure of is that a reduced ability to identify problems and to develop and monitor needed solutions is a major societal concern related to COVID infection. Medical professionals, first responders, parents, … none of these can afford an impaired cognitive response to impending and existing problems, or the failed management of emerging threats.

While neurologists have been leading the investigation into the prevalence of post-viral cognitive issues, they are not the group of professionals who will need to analyze the personal and societal impact of this problem. This work will fall on teachers, trainers, educators, and staff developers of all kinds.

The need to support students who demonstrate focal weaknesses in reasoning skills is not new to student success professionals. Monitoring the effectiveness of our trainees and working personnel is a familiar concern. Now one potential reason for observed failure to meet standards may be post-COVID cognitive issues that need to be acknowledged and addressed.

The impact of COVID-19 on the workforce continues to be discussed as a major disruption.[7] We will continue to monitor group performance and provide information about national student population groups to aid decision-making for organizations that assess critical thinking as a component of admissions, training effectiveness, or hiring potential.

Summary thoughts

Anything that is valued is measured. Employees typically attend to the metrics their supervisors use to evaluate their workplace performance: 'If it is something my company values enough to measure, then I had better be doing it well.' Over the past three decades, building critical thinking skills and a thinking mindset has moved from a theoretical academic discussion to a global concern for employers, educators, and those seeking to advance fair-minded truth-seeking for the sake of democracy and the reasoned pursuit of our common good.

Published articles increasingly document the growing number of critical thinking focused educational reports, research studies, corporate projects, and initiatives worldwide.[8] Given the ever-increasing focus on assessing and training critical thinking, and the growing global energy around it, we can draw several confident conclusions. First, strong thinkers are valued. Second, measuring critical thinking yields actionable results. We also hope two workplace responses will follow: those who display strength in critical thinking skills and mindset will be rewarded, and those whose deficits are objectively identified will be offered training.

When leaders in government, the health professions, business, and the military call for greater attention to critical thinking, their focus is practical. They rightly believe that strength in critical thinking results in better decision-making and better problem-solving, and that means better government, business, healthcare, and military outcomes. This very practical, bottom-line attention to critical thinking creates demand for effective employee development programs, including accurate empirical assessments. In reporting these general observations, we have mentioned government projects whose descriptions are available to the public. We have also presented other general information with consideration of client privacy. Many of these client organizations are societal leaders in calling for a focus on developing a culture of critical thinking in the workplace and in our government agencies. Our hope is to inform agency leaders of the potential for growth in leadership strength and overall employee performance when more emphasis is placed on assessing the reasoning skills and thinking mindset of prospective agency leaders and staff.

This report is one of a series of white papers prepared by our research team to inform researchers and trainers of critical thinking skills and mindset. By P. Facione, N. Facione, C. Gittens

More Background

We have been discussing data collected with a group of critical thinking assessments that have been scientifically developed and tested over a period of more than 30 years. Here are some of the key considerations in this process.

Four key considerations for developing and maintaining valid and reliable critical thinking assessments:

  1. The valid and reliable assessment of the reasoning skills and thinking mindset of working people in varying sectors requires tailored instruments calibrated to varying levels of decision responsibility. Today the INSIGHT Series includes assessments for executives, professionals, and two levels of support staff.
  2. Valuable input from working professionals and acknowledged leaders in each sector make it possible to create and validate instruments tailored specifically for people who work in Business, Health Care, Defense, Science and Engineering, Law, and as Educators and First Responders.
  3. Over thirty years, collaborative projects with scholars and professionals globally have extended the availability of culturally relevant language translations.
  4. Independent research and assessment projects have produced comparative data on the distribution of critical thinking skills and mindset attributes for employees from support staff to top level leadership.
Notes and References

[1] More discussion of the assessments themselves and how they are designed and maintained is included at the end of this document.

[2] Allaire, J. L. (2015). Assessing critical thinking outcomes of dental hygiene students utilizing virtual patient simulation: A mixed methods study. Journal of Dental Education, 79, 1082-1092. Kaddoura, M. (2011). Critical thinking skills of nursing students in lecture-based teaching and case-based learning. International Journal for the Scholarship of Teaching and Learning, 5(2), article 20. https://doi.org/10.20429/ijsotl.2011.050220. Michaels, N. (2017). The efficacy of a skill-building workshop for reflective critical thinking with graduate students: Effect-size differences based on race. Journal of Interdisciplinary Education, 15(1), 198-215. Quitadamo, I. J., & Kurtz, M. (2007). Learning to improve: Using writing to increase critical thinking performance in general education biology. CBE Life Sciences Education, 6(2), 140-154. doi:10.1187/cbe.06-11-0203. Wood, R., et al. (2012). Measuring critical thinking dispositions of novice nursing students using human patient simulators. Journal of Nursing Education, 51, 349-352.

[3] Abrami, P. C., Bernard, R. M., Borokhovski, E., Wade, A., Surkes, M., Tamim, R., & Zhang, D. A. (2008). Instructional interventions affecting critical thinking skills and dispositions: A stage one meta-analysis. Review of Educational Research, 78, 1102-1134. Abrami, P. C., Bernard, R., Borokhovski, E., Waddington, D., Wade, C. A., & Persson, T. (2015). Strategies for teaching students to think critically: A meta-analysis. Review of Educational Research, 85, 275-314.

[4] Hellmuth, J., Barnett, T. A., Asken, B. M., et al. (2021). Persistent COVID-19-associated neurocognitive symptoms in non-hospitalized patients. Journal of NeuroVirology, 27, 191-195. https://doi.org/10.1007/s13365-021-00954-4

[5] Subramanian, A., Nirantharakumar, K., Hughes, S., et al. (2022). Symptoms and risk factors for long COVID in non-hospitalized adults. Nature Medicine. https://doi.org/10.1038/s41591-022-01909-w

[6] CDC (2022). Post–COVID Conditions Among Adult COVID-19 Survivors Aged 18–64 and ≥65 Years — United States, March 2020–November 2021. MMWR.

[7] COVID-19 did a number on the workforce – and the workplace. U.S. News & World Report (2022). https://www.usnews.com/news/economy/articles/2022-03-17/covid-19-did-a-number-on-the-workforce-and-the-workplace

[8] Aghababaein, P., Moghaddam, S. A. H., Nateghi, F., & Faghihi, A. (2017). Investigating changing in social studies textbooks of public review (basic fourth and fifth) based on the emphasis on critical thinking skills Facione [sic] in the last three decades. International Education Studies, 10(3), 108-115. Alkharusi, H. A., Sulamani, H. A., & Neisler, O. (2019). Predicting critical thinking ability in Sultan Qaboos University students. International Journal of Instruction, 12, 491-504. Huang, Y. C., et al. (2012). Case studies combined with or without concept maps improve critical thinking in hospital-based nurses: a randomized-controlled trial. International Journal of Nursing Studies, 49, 747-754. Jacob, S. M. (2012). Analyzing critical thinking skills using online discussion forums and CCTST. Procedia – Social and Behavioral Sciences, 31, 805-809. Paans, W., Sermeus, W., Niewsg, R., & van der Schans, C. (2010). Determinants of the accuracy of nursing diagnoses: Influence of ready knowledge, knowledge sources, disposition toward critical thinking and reasoning skills. Journal of Professional Nursing, 26(4), 232-241. Suliman, W. A. (2008). Critical thinking and learning styles of students in conventional and accelerated programmes. International Nursing Review, 53(1), 73-79. Tiwari, A., Lai, P., So, M., & Yuen, K. (2006). A comparison of the effects of problem-based learning and lecturing on the development of students' critical thinking. Medical Education, 40, 547-554. Yuan, H., et al. (2008). Improvement of nursing students' critical thinking skills through problem-based learning in the People's Republic of China: a quasi-experimental study. Nursing & Health Sciences, 10(1), 70-76.