Critical Thinking and Your AI Chatbot

Chatbots can make us better critical thinkers. Humans use critical thinking to manage problems, some serious enough to threaten our very existence, and to invent paths toward a better future. To be strong critical thinkers in both contexts, we need information. That is where our new chatbot partners come in. They challenge all of us to find better strategies for evaluating information.

Here are some patterns we have discovered while chatting with our bots that can help you maximize their strengths and avoid over-relying on them where they are weak.

When we ask our chatbot a question, it will always have an answer. “Ask me anything” is what our chatbots offer. They set a high bar as a tool to empower us. The bot will review vast databases for anything humans have recorded on the topic and, using instructions programmed by its developers, it will report a response. To analyze a topic or solve a problem, human critical thinkers interpret, analyze, and evaluate relevant information and make a judgment about what to believe or what to do. To bridge the gap between these two processes, we asked our chatbot a follow-up question: Where did you find this information?

Often chatbots will provide useful answers even when we ask poorly framed questions. If we leave out words, misspell, or frame questions with broad generalizations, their answers can still help us think. We asked our chatbot, “If it was necessary for everyone to stop eating meat to save the planet?” After indexing some ideas we might consider, our chatbot clarified our question with the summary statement: It is not necessary for everyone to completely stop eating meat. Chatbots help us clarify our questions when we ask for information.

Our chatbot struggled when we asked it to evaluate its own responses. For example, we asked our chatbot: “Were there any commonly held ideas that you found in your database and then rejected because they were baseless?” We received this response: “Of course! Here are some baseless myths and commonly accepted ideas that I have come across: mythical creatures like Bigfoot, cosmic conspiracies like extraterrestrial cover-ups, secret moon landings, and pseudoscientific health beliefs. These capture our imagination, but there is no scientific evidence to support their existence.” It was good to hear that the chatbot judged ideas based on evidence, but we needed to know more about how it dealt with conflicting information.

Some decisions must be made when situations are uncertain. If we use chatbots in these situations, we need to know how well they manage uncertainty. We asked our chatbot: “If there are two opposing answers to our question, which answer do you provide?” It answered, “I evaluate the credibility of the source, I check other sources, and I consider the context of the current situation. If both responses have merit, I express uncertainty using phrases like ‘Based on available data…’ or ‘It’s unclear, but…’” This is what human critical thinkers do as well, though we are quicker than a chatbot to admit uncertainty.

Chatbots are not strong when it comes to interpreting bias in a source. We recently asked our bot: “Are there medical statistics or federal safety statistics on the number of falls on wooden stairs as compared to carpeted stairs?” Our bot provided three sources, all suggesting that wooden stairs were more dangerous. But two were flooring businesses showcasing their own slip-resistant wooden flooring, and the third was the website of a personal injury lawyer offering to help clients sue after falling down someone else’s wooden staircase. Someone might use that chat to conclude that wooden staircases are more dangerous, but not a critical thinker, who would rate this chatbot’s response as inadequate because we wanted more than a random or self-serving opinion about which surface is safer.

Humans are working hard to improve the content of our chats, but for now these bots are doing the best they can to report the findings of their searches of our databases. It is too early to expect our chatbots to accurately interpret which data are most relevant to our request, or to identify biases in the information we humans post online. That is still our job as critical thinkers.

Chatbots attempt to be truth-seeking through a comprehensive data search. Critical thinkers need a truth-seeking mindset. Being a truth-seeker means following reasons and evidence wherever they lead. That challenge requires listening to all points of view and trying to understand the value of what is being communicated by people and groups outside our usual circle of acquaintances and family. It is not necessary to agree with every idea, but it is necessary to try to understand varying points of view and to use them to evaluate our previous beliefs and judgments.

Truth-seeking helps critical thinkers to “get the problem right” and to be fair-minded in the search for information. Truth-seeking is also an engine for creativity. Creative critical thinkers are our source of new vision for managing current problems and finding new opportunities. We can use our chatbots to open ourselves to new information if we ask truth-seeking questions and follow the information provided. It will be important to maintain chatbot algorithms that are truth-seeking and that present us with the full scope of relevant ideas to consider as critical thinkers.

If you would like to learn more about the critical thinking skills and attributes that work hand in hand with emerging artificial intelligence tools, visit Insight Assessment for assessment and training tools designed for educational and workplace projects, or our sister company, Insight Basecamp, to discover self-development modules that grow your reasoning skills and thinking mindset.
