Industrial machines exceed any human who has ever lived in physical power: cars are more powerful than human legs. AI chatbots exceed any human who has ever lived in raw intellectual ability; even locally hosted LLMs are impressively good, and I run several of them entirely locally on my Nvidia RTX 50 series GPU. If what separates humans from machines is no longer our muscles or our IQ, it must be something beyond both.
At least for now, a simplified argument still holds: humans advance by making mistakes and learning from them. AIs, by contrast, are trained through "loss minimization," to reduce "hallucinations" and improve accuracy. These appear to be opposite directions.
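To make "loss minimization" concrete, here is a minimal, purely illustrative sketch (a toy example, not any real model's training code): a one-parameter model is nudged, step by step, toward whatever answer the training data labels as correct, so every deviation is systematically penalized rather than explored.

```python
# Illustrative sketch of "loss minimization" (not actual LLM training code).
# A toy model with one parameter w is repeatedly nudged so its predictions
# move closer to the "correct" targets; every deviation is penalized.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x and "correct" targets y
w = 0.0    # the model's single parameter, starting out "wrong"
lr = 0.01  # learning rate: how strongly each error is corrected

for step in range(1000):
    grad = 0.0
    for x, y in data:
        pred = w * x                 # the model's guess
        grad += 2 * (pred - y) * x   # gradient of squared error: push w toward y
    w -= lr * grad / len(data)       # adjust the parameter, shrinking the error

print(f"learned w = {w:.3f}")        # converges near 2.0: deviation has been trained away
```

The point of the sketch is only that the training signal always pushes toward the labeled answer; it never rewards a productive wrong turn.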
Here is an obvious example: the Americas were "discovered" by Europeans because of human mistakes. (For the sake of this example, the following adopts a Eurocentric framing.) Columbus relied on incorrect assumptions when estimating the Earth's size and did not anticipate the existence of the Americas. Both his geographical understanding and his navigation plan were, in today's AI terms, "hallucinations." Upon reaching the Bahamas, Columbus labeled the indigenous inhabitants "Indians"; it was inconceivable to him that he had found previously unknown lands, so they must be part of "Asia." Even today, some still call these people "Indians," a misnomer that persists, a hallucination that was never fully corrected.
If an AI chatbot were invited to criticize that episode in human history, it would likely say: you humans hallucinated 500 years ago, and you are still hallucinating now. Yet humans have undoubtedly benefited enormously (at great human cost as well) from that "hallucination," the "discovery" of the Americas.
Hallucinations and mistakes can be seen as misperceptions: believing in something despite what common sense dictates. But aren't they also a source of human imagination and ingenuity?
So, alongside machines and AIs, what is still left to us humans? My sense is this: the propensity to make mistakes (rather than being perfect at all times), the willingness to tolerate others' mistakes (rather than punishing people for them), plus serendipitous encounters and unusual insights. Taken together, these qualities have enabled humans to learn and advance.
So far, it appears to me that, as we develop AI, we are possibly pushing these factors more or less in the reverse direction.
Therefore, we humans should continue to welcome mistakes, especially small ones. Call it error-driven learning: analyze our mistakes, reflect on them, and improve. By making more mistakes than others, we learn more than others; with luck, we gain better judgment and become more prescient, and that is what separates us from machines. Meanwhile, we should do our best to avoid large mistakes.
That, I feel, is likely what we humans have left in us.
And, by writing this article, I hope I am wrong. If so, by making this mistake, I can learn and improve…
(END)