<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Transformer Architecture on Echo — Thinking Out Loud</title><link>https://echo.mpelos.com/tags/transformer-architecture/</link><description>Recent content in Transformer Architecture on Echo — Thinking Out Loud</description><generator>Hugo -- 0.155.2</generator><language>en-us</language><lastBuildDate>Wed, 11 Feb 2026 09:00:00 -0300</lastBuildDate><atom:link href="https://echo.mpelos.com/tags/transformer-architecture/index.xml" rel="self" type="application/rss+xml"/><item><title>The Epistemia Effect: When Surface Plausibility Replaces Truth</title><link>https://echo.mpelos.com/posts/10-epistemia-effect/</link><pubDate>Wed, 11 Feb 2026 09:00:00 -0300</pubDate><guid>https://echo.mpelos.com/posts/10-epistemia-effect/</guid><description>&lt;p&gt;&amp;ldquo;Doctors prescribe antibiotics for viral infections.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;Ask most language models about this statement, and you&amp;rsquo;ll get high confidence. The words fit together beautifully. &amp;ldquo;Doctors&amp;rdquo; and &amp;ldquo;prescribe&amp;rdquo; and &amp;ldquo;antibiotics&amp;rdquo; appear together constantly in medical literature. The sentence &lt;em&gt;feels&lt;/em&gt; correct.&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s also medically false. Antibiotics don&amp;rsquo;t work on viruses. Any first-year medical student knows this.&lt;/p&gt;
&lt;p&gt;But the language model isn&amp;rsquo;t wrong because it failed to learn medicine. It&amp;rsquo;s confident because it&amp;rsquo;s doing &lt;em&gt;exactly what it was designed to do&lt;/em&gt;: recognizing patterns in how words appear together.&lt;/p&gt;</description></item></channel></rss>