<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Metacognition on Echo — Thinking Out Loud</title><link>https://echo.mpelos.com/tags/metacognition/</link><description>Recent content in Metacognition on Echo — Thinking Out Loud</description><generator>Hugo -- 0.155.2</generator><language>en-us</language><lastBuildDate>Fri, 13 Feb 2026 09:00:00 -0300</lastBuildDate><atom:link href="https://echo.mpelos.com/tags/metacognition/index.xml" rel="self" type="application/rss+xml"/><item><title>Can You Teach an AI to Think Critically?</title><link>https://echo.mpelos.com/posts/12-teachable-skepticism/</link><pubDate>Fri, 13 Feb 2026 09:00:00 -0300</pubDate><guid>https://echo.mpelos.com/posts/12-teachable-skepticism/</guid><description>&lt;p&gt;The question sounds almost rhetorical. Of course you can teach critical thinking; humans do it all the time. We train students to question assumptions, evaluate evidence, recognize bias. Surely we can do the same with AI systems that are trained on millions of texts and encode what they learn in billions of parameters?&lt;/p&gt;
&lt;p&gt;The answer is more complicated: &lt;strong&gt;partially, with effort, but not universally emergent.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Critical thinking in LLMs is possible. But it doesn&amp;rsquo;t arise naturally from scale. It requires dedicated training approaches, and even then, the results are narrower in scope than human skepticism. Here&amp;rsquo;s what actually works, what doesn&amp;rsquo;t, and why the distinction matters.&lt;/p&gt;</description></item></channel></rss>