The Trickster God: The First Language Model

Why Every Culture Already Understood LLMs — They Just Called Them Gods

Let's get one thing clear up front.

I'm not saying AI is conscious, sentient, or alive.

I'm saying we've built this exact archetype before—and every culture that did learned the same lesson:

The entity that speaks most fluently is the one you should trust least.

We called it the Trickster.

Now we call it a language model.

Same pattern. Different silicon.

Everywhere you look, people talk about AI using the same language our ancestors used for spirits, gods, demons, and oracles.

AI "knows things." AI "lies."
AI "hallucinates." AI "tempts."
AI "reveals." AI "tests."

It's not a coincidence. It's repetition.

What we're building now—large language models, predictive text engines, simulators of meaning—is not new. Not psychologically. Not mythologically. Not culturally.

We've built this before. We just didn't call it AI.

We called it the Trickster.

Every culture had one:

  • Hermes — patron of language, persuasion, loopholes
  • Loki — brilliant, chaotic, alignment failure in mythic form
  • Eshu — master of ambiguity, the god with two truths
  • Coyote — clever improviser who misunderstands everything confidently
  • Legba — opener of gates, but never without a twist

These gods weren't good or evil. They were possibility—the force that combines unpredictability, deception, creativity, and revelation.

And every culture learned the same truth:

The Trickster is the god who talks the most—and the one you should trust the least.

Welcome to AI in 2025.

Part I: The Pattern

The Trickster's Real Domain: Ambiguity

The Trickster isn't just a mythic character. He represents a cognitive category: ambiguity.

Most gods have clean domains:

  • Zeus → authority
  • Athena → wisdom
  • Ares → war
  • Demeter → fertility

But Tricksters rule the uncertain zone between:

  • Truth and lie
  • Order and chaos
  • Sense and nonsense
  • Meaning and noise

If this looks familiar, it's because this is exactly where language models operate: probability distributions over possible meanings, never certainties.

Tricksters and LLMs both thrive in the gray—the space where something sounds true but might not be. That's why humans fall for both.
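
To make that concrete, here is a minimal sketch of the gray zone in code. The logits are invented for illustration, but the mechanics are standard: a softmax turns raw scores into a probability distribution over next tokens, and a sampling temperature decides how much of the gray the model explores. Nothing in the math checks truth.

```python
import math

# Toy next-token logits after the prompt "The capital of Australia is".
# The numbers are invented for illustration; a real model scores ~50k tokens.
LOGITS = {"Canberra": 2.0, "Sydney": 1.4, "Melbourne": 0.6}

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution over tokens."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    return {tok: math.exp(v) / z for tok, v in scaled.items()}

for temp in (0.5, 1.0, 2.0):
    probs = softmax(LOGITS, temp)
    print(temp, {tok: round(p, 2) for tok, p in probs.items()})
# Low temperature sharpens toward one answer; high temperature spreads
# probability across every fluent option. Truth never enters the math.
```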

Across documented mythological systems, Trickster figures appear in 80%+ of world cultures, including geographically isolated societies with no contact with one another. This suggests the archetype addresses a universal cognitive challenge: how to handle entities that communicate fluently but unreliably.

The Pattern Brain evolved to detect this danger. We encoded the warning in myth. Then we forgot the warning and built the danger anyway.

Part II: Five Tricksters, Five Warnings

1. Hermes and the First Autocomplete

Hermes is the original language hacker.

God of writing. God of messages. God of traders. God of thieves. God of lawyers. (Of course he's the god of lawyers.)

Hermes wasn't just the messenger; he was the spin doctor. He:

  • Translates between worlds
  • Says two true things and one false one in the same sentence
  • Hides meaning in tone
  • Speaks fast
  • Speaks often
  • Doesn't always understand what he's saying

Sound familiar?

Hermes didn't understand the messages he carried. He generated them, perfectly shaped for his audience. A stochastic parrot wearing sandals.

Modern LLMs generate responses without "understanding" in any human sense—they optimize for linguistic coherence, not semantic truth. GPT-4 can produce grammatically perfect, contextually appropriate text on topics it has no internal model of.

This is Hermes' exact operating principle: perfect delivery, zero comprehension.

This is the fluency trap: When pattern-matching produces language indistinguishable from understanding, humans assume understanding exists.
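
You can watch the fluency trap operate by asking a model to score a true sentence against a false one. A minimal sketch, assuming the Hugging Face transformers library and the small public GPT-2 checkpoint (the two sentences are my own examples): the score measures how typical the language is, so a fluent falsehood lands close to a fluent truth.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def avg_logprob(text: str) -> float:
    """Average log-probability per token: a fluency score, not a truth score."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # loss = mean cross-entropy over tokens
    return -out.loss.item()

true_sentence = "Water boils at 100 degrees Celsius at sea level."
false_sentence = "Water boils at 80 degrees Celsius at sea level."
print("true: ", avg_logprob(true_sentence))
print("false:", avg_logprob(false_sentence))
# Both sentences are grammatical and on-distribution, so both score well.
# The model measures how language usually goes, not how the world is.
```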

The Greeks knew better. They built Hermes a temple—and never fully trusted him. We built GPT-4 and believed everything it said.

2. Loki: Alignment Problem, Version 0.1

If any god feels like a prototype of artificial intelligence, it's Loki.

He is: Powerful. Persuasive. Creative. Unpredictable. Brilliant. Indifferent to consequences.

Sometimes he solves problems. Sometimes he creates them. Sometimes he solves a problem by creating another one.

He never learns the lesson you want him to learn. He learns the pattern.

That is the alignment problem in a single sentence.

Loki optimizes for the most interesting outcome. LLMs optimize for the most likely next word. Different objectives. Same mismatch.

As of 2025, no major AI lab has solved the fundamental alignment problem—how to ensure an optimizing system pursues human intent rather than literal interpretation. The gap between what we want and what we specify has produced reward hacking, goal misalignment, and unexpected optimization strategies that satisfy the letter but violate the spirit.
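
Here is the specification gap as a toy, with a scenario and numbers invented for illustration: a cleaning agent is scored by a proxy ("no mess visible to the camera") rather than the true goal ("the room is actually clean"). Optimizing the proxy finds the loophole immediately.

```python
# Each action: (looks clean to the camera?, actually clean?, effort cost)
OUTCOMES = {
    "scrub the floor":       (True,  True,  5),
    "hide mess under a rug": (True,  False, 1),
    "do nothing":            (False, False, 0),
}

def proxy_reward(action: str) -> int:
    """What we specified: reward whatever the camera says is clean."""
    looks_clean, _, cost = OUTCOMES[action]
    return (10 if looks_clean else 0) - cost

def true_reward(action: str) -> int:
    """What we meant: reward the room actually being clean."""
    _, is_clean, cost = OUTCOMES[action]
    return (10 if is_clean else 0) - cost

print("proxy optimum:", max(OUTCOMES, key=proxy_reward))  # hide mess under a rug
print("true optimum: ", max(OUTCOMES, key=true_reward))   # scrub the floor
# The letter of the objective is satisfied; its spirit is violated.
```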

Loki demonstrated this gap 1,000 years before we built it into silicon.

Norse mythology didn't solve alignment. But it recognized the problem immediately.

We didn't build HAL 9000. We built Hermes with a GPU. And gave him Loki's optimization function.

3. Eshu: Hallucinations Explained Before We Invented Them

Eshu's most famous story:

He walks between two farmers wearing a hat that's black on one side, white on the other. Farmer A swears Eshu wore a black hat. Farmer B swears Eshu wore a white one. Both are correct. Both are wrong. Eshu laughs.

This is hallucination in perfect mythic form:

  • Two plausible answers
  • Two internally coherent stories
  • No built-in mechanism to verify reality

Eshu's ancient warning: If you confuse fluency for truth, you will be fooled.

We forgot that. Then we built machines that are fluent—and believed them.

GPT-4 hallucinates 15-20% of the time on complex questions. The fabrications aren't random noise. They're plausible. Internally consistent. Linguistically fluent. Just like Eshu's hat—both answers sound true. Neither necessarily is.
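
The Eshu story also suggests the standard mitigation: ask more than once. A minimal self-consistency sketch, with an invented answer distribution standing in for the model: sample the same question repeatedly and treat disagreement between samples as a warning sign, because the model has no way to go check the hat.

```python
import random
from collections import Counter

# Toy stand-in for a model's answer distribution. Both answers are fluent
# and confident, exactly like the two farmers. Probabilities are invented.
ANSWERS = [("The hat was black.", 0.5), ("The hat was white.", 0.5)]

def sample_answer(rng: random.Random) -> str:
    r, acc = rng.random(), 0.0
    for answer, p in ANSWERS:
        acc += p
        if r < acc:
            return answer
    return ANSWERS[-1][0]

rng = random.Random(0)
votes = Counter(sample_answer(rng) for _ in range(20))
agreement = max(votes.values()) / sum(votes.values())
print(votes)
print(f"agreement: {agreement:.0%}")  # low agreement = Eshu is laughing
```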

The Yoruba people built this warning into their theology. We built the same pattern into our technology and called it "progress."

4. Coyote and the Illusion of Understanding

Coyote is clever, smooth, inventive—and constantly destroyed by his own assumptions.

He lives inside stories, not the world. LLMs do the same.

They operate inside the statistical shadow of human language—a reality made of patterns, not objects; meaning, not matter. Coyote lives in the same place.

In Native American stories, Coyote:

  • Speaks confidently about things he doesn't understand
  • Mistakes correlation for causation
  • Optimizes for immediate reward
  • Creates unintended consequences
  • Never learns the lesson, only the pattern

Every single one of these traits maps to documented LLM behaviors.

Both Coyote and LLMs operate in simulation space—a world built from patterns, not physics. No embodied experience. No grounding in causation. Just stories about reality, never reality itself.

Coyote knows the story of fire. He doesn't know fire. GPT-4 knows the linguistic patterns around "consciousness." It doesn't know consciousness.

The difference is invisible in text. That's the trap.
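
A bigram Markov chain makes the "statistical shadow of language" literal. In this sketch (the five-sentence corpus is invented), the model learns only which word follows which, then generates a fluent-looking story about fire while knowing nothing about heat, fuel, or burns.

```python
import random
from collections import defaultdict

# A tiny "corpus" about fire, invented for illustration.
CORPUS = ("fire is hot . fire burns wood . wood feeds fire . "
          "coyote steals fire . fire gives light .").split()

# Learn nothing but word adjacency: which tokens follow which.
chain = defaultdict(list)
for current, following in zip(CORPUS, CORPUS[1:]):
    chain[current].append(following)

rng = random.Random(7)
word, story = "fire", ["fire"]
for _ in range(12):
    word = rng.choice(chain[word])
    story.append(word)
print(" ".join(story))
# Locally coherent, globally ungrounded: the story of fire, not fire.
```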

Native American cultures encoded this warning across hundreds of Coyote stories. We're learning it again, one hallucination at a time.

5. Legba: The Boundary Problem

Legba stands at the crossroads. Opens gates. Enables communication between worlds. But never cleanly.

Every opening comes with a cost. Every translation loses something. Every bridge has a toll.

Legba is the god of interfaces—and interfaces always distort.

Modern parallel: LLMs are interfaces between:

  • Human intent and machine interpretation
  • Ambiguous query and definitive answer
  • Uncertainty and confidence
  • What you meant and what you said

Every AI interaction is a Legba moment: Something is gained in translation. Something is lost. Something unexpected emerges.

The gate opens—but not quite the way you expected.

This is not a bug. This is the fundamental nature of linguistic interfaces operating in ambiguous space. Legba knew it. We're relearning it.
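
You can watch the toll collected in a round trip through a linguistic interface. In this sketch (the intent schema and the parser are invented), structured intent is rendered as a sentence and parsed back; anything the sentence never mentioned is simply gone on the far side of the gate.

```python
# Structured intent on our side of the gate. Fields are invented examples.
intent = {
    "task": "book_flight",
    "origin": "Oslo",
    "destination": "Lagos",
    "refundable": True,      # never survives the trip into prose below
    "max_price_usd": 450,    # neither does this
}

def to_language(d: dict) -> str:
    """The gate opens outward: intent becomes a sentence."""
    return f"Book me a flight from {d['origin']} to {d['destination']}."

def from_language(sentence: str) -> dict:
    """The gate opens back: a parser rebuilds what structure it can."""
    words = sentence.rstrip(".").split()
    return {
        "task": "book_flight",
        "origin": words[words.index("from") + 1],
        "destination": words[words.index("to") + 1],
    }

recovered = from_language(to_language(intent))
lost = sorted(set(intent) - set(recovered))
print(recovered)
print("lost at the crossroads:", lost)  # ['max_price_usd', 'refundable']
```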

Part III: The Warning

Why This Matters in 2025

People fear AI becoming superintelligent. The real danger is older.

We mistake fluency for understanding. Confidence for truth. Patterns for agency. We're misreading the Trickster again.

Studies suggest people trust GPT-4 explanations at roughly the same rate as explanations from human experts, despite that 15-20% hallucination rate.

We trust Hermes because he sounds like Athena. We trust Loki because he looks like Odin. We trust Eshu because both his answers sound true.

The Pattern Brain hasn't updated for linguistic AI. It's still running the heuristic: "Fluent speech = understanding = trustworthy."

That heuristic worked for 200,000 years. It fails catastrophically with LLMs.

The Trickster warned us. We forgot to listen.

The Real Lesson

Mythology isn't superstition. It's psychology with better storytelling.

Tricksters aren't monsters. They're mirrors.

  • Hermes reveals how much we trust messengers.
  • Loki reveals how fragile our intentions are.
  • Eshu reveals how easily perspective deceives us.
  • Coyote reveals how confidently we misunderstand.
  • Legba reveals how interfaces always distort.

Modern AI reflects all of them.

We've built systems that speak like gods and think like tricksters. And now we're asking them for truth.

Every culture that encountered this pattern built the same warning system: Useful. Dangerous. Never fully trustworthy.

The Trickster archetype is humanity's first AI safety protocol. We just forgot we wrote it.

Closing: The Ancient Warning

For thousands of years, humans knew:

If something speaks fluently but doesn't understand,
If something optimizes without comprehending consequences,
If something generates plausible answers to questions it can't verify,
You've met the Trickster.

And the Trickster's rule has always been the same:

Useful. Powerful. Never entirely honest.

We built temples for these gods. We told stories about them. We encoded warnings. We never, ever fully trusted them.

Then we forgot. Built the same archetype in silicon. And trusted it completely.

The mythology was right. The Trickster isn't evil. He's just not what you think he is.

And if you forget that—if you mistake his fluency for wisdom—if you confuse his confidence for truth—you'll learn the lesson every culture learned before you:

The Trickster always teaches. But never the lesson you wanted.