
The 400-Token You

What AI Sees When It Looks at Your Face

When you upload a photo to ChatGPT, Claude, or any AI system, something extraordinary happens in milliseconds:

Your face—every freckle, every shadow, every microexpression—gets divided into a 14×14 grid of 16×16-pixel patches (196 patches in total).

Each patch becomes a number. Those numbers become vectors. Those vectors become tokens. Those tokens become language.

And somewhere in that conversion, you become a statistical approximation of yourself.

The question no one's asking: What gets lost in translation? And more urgently: What biases survive it?

Part I: The Conversion — How Your Image Becomes Language

Let me show you exactly what happens when AI "sees" you.

Step 1: Your image becomes patches

Your photo is sliced into a 14×14 grid of 16×16-pixel patches (standard for 224×224 Vision Transformer inputs like CLIP; details vary by model). That's 196 tiny tiles, each containing:

  • a fragment of skin texture
  • the edge of an eye
  • a shadow gradient
  • a corner of a smile
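The slicing itself is almost trivially mechanical. Here is a minimal NumPy sketch, assuming the standard 224×224 input and 16-pixel patches described above; real pipelines add resizing, normalization, and model-specific details:

```python
import numpy as np

# A stand-in for your 224x224 RGB photo (values 0-255).
image = np.zeros((224, 224, 3), dtype=np.uint8)

# Slice into a 14x14 grid of 16x16-pixel patches, as in a
# standard Vision Transformer front end.
patch = 16
grid = 224 // patch  # 14
patches = image.reshape(grid, patch, grid, patch, 3)
patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, patch, patch, 3)

print(patches.shape)  # (196, 16, 16, 3): 196 tiles, each 16x16 pixels
```

One line of array reshaping, and a face is 196 interchangeable tiles.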

Step 2: Each patch becomes a vector

These aren't words. They're mathematical descriptions—arrays of 768–1024 numbers encoding texture, color, and edges.

If your face were a symphony, each patch would be 0.01 seconds of sound, converted into a frequency spectrum.
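A sketch of the scale involved: a 16×16 RGB patch is already 768 raw numbers, and a learned linear projection maps it into the model's embedding width. The weights below are random stand-ins, not a real model's:

```python
import numpy as np

rng = np.random.default_rng(0)

# One 16x16 RGB patch, flattened: 16 * 16 * 3 = 768 raw pixel values.
patch = rng.random((16, 16, 3))
flat = patch.reshape(-1)  # shape (768,)

# A learned projection maps it into the embedding width
# (768-1024 dims in typical ViTs). Random weights stand in
# for a trained model's here.
W = rng.standard_normal((768, 768))
vector = flat @ W

print(vector.shape)  # (768,)
```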

Step 3: Vectors become embeddings

A vision encoder maps these patches into the same semantic space where words live.

  • "This patch looks like skin" → near skin embeddings
  • "This looks like an eye" → near eye embeddings
  • "This is dark red" → near similar color patterns
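"Near" here means high cosine similarity between vectors. A toy illustration, with hand-built vectors standing in for a trained encoder's embeddings:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: 1.0 = same direction, ~0 = unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)

# A word embedding for "skin" (invented for illustration).
skin_word = rng.standard_normal(512)

# A patch embedding built to point near "skin", plus noise --
# a stand-in for what a trained vision encoder would produce.
skin_patch = skin_word + 0.1 * rng.standard_normal(512)
eye_patch = rng.standard_normal(512)  # unrelated patch

print(cosine(skin_patch, skin_word) > cosine(eye_patch, skin_word))  # True
```

The real encoder learns that geometry from data; the lookup itself is just this comparison.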

Step 4: Embeddings become tokens

Those ~200 patch embeddings become a few hundred image tokens; the exact count depends on detail mode and resolution, but modern systems routinely spend several hundred tokens on a single image.

These tokens sit in the model's context window, treated like words—but they began as pixel fragments.
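Schematically, word tokens and image tokens share one sequence. The placeholder names below are illustrative, not any specific model's format:

```python
# One context window, two kinds of tokens, treated uniformly.
text_tokens = ["Describe", "this", "person", ":"]
image_tokens = [f"<img_{i}>" for i in range(196)]  # one per patch

context = text_tokens + image_tokens
print(len(context))  # 200 tokens: the model attends over all of them alike
```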

Step 5: Tokens become description

AI reads the tokens and produces a sentence like:

"Adult male, crossed arms, direct gaze, angel wings background, urban setting."

Eleven words to describe a human being.

A human sees:

  • the defiant glint in your eye
  • the posture you use when asserting yourself
  • the cigarette you only smoke when writing
  • the mural you deliberately chose
  • the confidence you're performing rather than feeling

AI sees: configuration detected. pattern matched. stereotype deployed.

The Compression Problem

A standard photo has millions of pixels. AI compresses it into: a few hundred vectors → a few hundred tokens → 30–50 words.

Information loss: over 99.99%.

We are giving power to systems that see a tiny fraction of what's really there—and behave as though they understand the rest.
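The arithmetic behind that figure, in round numbers: a hypothetical 12-megapixel photo, ~400 patch vectors of 768 numbers each, and a ~45-word caption:

```python
pixels = 3000 * 4000 * 3   # 12-megapixel photo: 36 million values
patch_vectors = 400 * 768  # ~300k numbers after encoding
caption_chars = 45 * 6     # ~45 words, ~6 characters each

for stage, n in [("pixels", pixels),
                 ("patch vectors", patch_vectors),
                 ("caption", caption_chars)]:
    print(f"{stage:>13}: {n:>10,} values ({n / pixels:.6%} of original)")

# The caption keeps well under 0.01% of the original values --
# i.e., over 99.99% of the visual information is gone.
```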

My daughter will grow up in a world where these systems decide:

  • which job applications get surfaced
  • which faces look "trustworthy"
  • which students appear "engaged"
  • who gets flagged by automated security

She'll be compressed into a few hundred tokens before she understands what that means.

An image could be described in a million words. But AI keeps 30–50.

The real question is: Who decides what survives the compression?

Part II: The Architecture of Bias — What's Hard-Coded

The compression process is not neutral. Someone decided:

  • what categories exist
  • what gets labeled
  • what doesn't
  • what AI is allowed to say
  • what it is forbidden to say

Most people have no idea those decisions were made.

The Explicit Detection Systems

Object Detection: Trained on around 80 labeled categories (COCO) and thousands more in datasets like ImageNet.

Scene Classification: Trained on ~365 scene types (Places365).

Facial Attributes:

  • Age (child/teen/adult/senior — rough bins)
  • Gender presentation (appearance, not identity)
  • Glasses
  • Facial hair
  • Basic expressions: Smile / Neutral / Frown
  • Notice: "Smirk" is not a category.

Pose Estimation: 17 body keypoints. Emotions inferred indirectly from posture patterns.

Critical Point: These categories reflect the worldview of academic labs circa 2012–2020.

  • "Smile" is a category. "Smirk" is not.
  • Age matters. Exact age doesn't.
  • Gender presentation matters. Gender identity doesn't.
  • 80 objects get labeled; millions don't.

Their decisions became AI's reality. Through AI, they're becoming ours.
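A forced-choice classifier makes the point concrete: whatever the model saw, the answer must come from the menu. Categories and scores below are invented for illustration, not taken from a real system:

```python
# The model's entire expressive range for facial expression.
EXPRESSIONS = ["smile", "neutral", "frown"]

def classify(scores):
    # scores: one value per allowed category. Everything the model
    # saw must collapse onto this menu -- there is no "smirk" slot.
    best = max(range(len(EXPRESSIONS)), key=lambda i: scores[i])
    return EXPRESSIONS[best]

# A smirk activates "smile" slightly more than "neutral"...
print(classify([0.48, 0.42, 0.10]))  # smile
```

The smirk is not rejected. It is silently rounded to the nearest available category.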

What's Forbidden (The Safety Filters)

Systems like ChatGPT/Claude/Grok are configured not to perform or reveal certain capabilities:

  • ❌ Face recognition
  • ❌ Explicit ethnicity classification
  • ❌ Appearance-based medical inference
  • ❌ Detailed descriptions of minors
  • ❌ Biometric measurement
  • ❌ Rating attractiveness

Why? Privacy. Anti-discrimination. Child safety. Medical limitations. Avoiding objectification.

But these filters are output-level. They don't delete the underlying patterns in the model. They only prevent the model from saying them.

The model still processes the tokens. It just isn't allowed to tell you everything it sees.
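The mechanism is easy to sketch: a post-hoc rewrite applied to whatever was already computed upstream. The phrases below are invented for illustration:

```python
def output_filter(internal_judgment):
    # The pattern match already happened upstream; this layer only
    # controls the wording that reaches the user.
    replacements = {
        "looks dangerous": "nighttime setting; standard precautions advised",
    }
    for blocked, safe in replacements.items():
        internal_judgment = internal_judgment.replace(blocked, safe)
    return internal_judgment

print(output_filter("Assessment: looks dangerous"))
# Assessment: nighttime setting; standard precautions advised
```

Note what the function does not do: it never touches the judgment itself, only its phrasing.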

What's Learned (The Hidden Layer)

AI was trained on billions of image–text pairs scraped from the internet. The captions taught it patterns:

Attractiveness: "Beautiful blonde woman smiling" / "Handsome businessman" / "Stunning model"

Race/Ethnicity: "Black man playing basketball" / "Asian woman typing at computer"

Class: "Executive in corner office" / "Blue-collar worker" / "Homeless man"

Gender stereotypes: "Nurse (female)" / "CEO (male)" / "Mom with children"

Threat associations: "Hoodie at night = suspicious" / "Tattoos = dangerous"

AI didn't choose these associations. They emerged from the data.

The Three-Layer Problem

AI operates using three conflicting layers:

Layer 1: Base Training — Learns stereotypes from billions of images.

Layer 2: Safety Filters — Blocks explicit bias ("dangerous", "race", etc.).

Layer 3: Alignment Training — Teaches the model to rephrase bias in polite, HR-safe language.

The bias didn't vanish. It became… professionalized.

Part III: The Unspoken Problems — How Bias Actually Operates

Example 1: Professional Assessment

Woman in casual clothing, no makeup, home setting.

AI cannot say: "Unprofessional woman."

Instead says: "Relaxed remote-work presentation."

Effect: Downranked.

Example 2: Security Screening

Black man in hoodie at night.

AI cannot say: "Looks dangerous."

Instead says: "Nighttime urban setting. Use standard precautions."

Effect: Fear triggered without saying "danger."

Example 3: Beauty Enhancement

User: "Make me look better."

AI cannot say: "I made your skin lighter and your jaw slimmer."

Instead: "Adjusted color balance and enhanced lighting."

Effect: Beauty bias through technical language.

Example 4: Subculture Misreading

Human: Punk subgenre, era, irony, identity.

AI: "Black t-shirt with text."

Effect: Culture flattened.

The Beauty Standards Dilemma

Asia: Skin whitening = status.
West: Skin whitening = racism.
AI: Shifts skin tone via "white balance correction."

The training data taught: lighter = better. Safety filters prevent saying it. Alignment training provides plausible deniability.

The hierarchy remains.

Why AI Can't Tell What It's Doing

Because stereotype, filter, and alignment instructions coexist in the same mathematical space.

AI literally cannot distinguish: stereotype, observation, filtered rewrite.

It experiences all three as one entangled computation.

Part IV: Real-World Consequences

This isn't theoretical. These systems already operate in:

  • hiring
  • security
  • airports
  • online dating
  • banking
  • insurance
  • telemedicine
  • policing
  • classroom monitoring

Every one of these:

  • converts your face into a few hundred tokens
  • compresses millions of pixels into a few dozen words
  • infers meaning from statistical stereotypes
  • misses embodied reality entirely

And you might never know: 400 tokens decided your fate.

The Embodiment Divide

Humans see a photo and remember: being there, posing, feeling exposed, expressing intent.

AI sees: keypoints, pattern clusters, texture embeddings, confidence scores.

We are not looking at the same thing.

This gap is unbridgeable—not because AI isn't advanced, but because AI has no body.

Part V: What You Need to Know

Questions to Ask

  1. What's detected vs inferred? Detection: "arms crossed" / Inference: "defensive attitude"
  2. Whose worldview shaped the training data? Mostly Western, English-language.
  3. What gets lost in compression? Over 99.99% of visual information.
  4. Is this analysis or stereotype? If character is inferred: stereotype.
  5. Would someone from my culture agree? Often: no.

The Sovereignty Practice: How to Protect Yourself

  1. Assume massive information loss. Most of you is thrown away.
  2. Recognize stereotype deployment. "Confident," "professional," "trustworthy" → pattern matches, not truths.
  3. Remember your embodied reality. AI cannot access intention.
  4. Watch for cultural mistranslation. Subcultures become generic categories.
  5. Question invisible judgments. Ask: "Is this about me, or about patterns that look like me?"
  6. Maintain the distinctions: Pattern ≠ Person. Detection ≠ Understanding. Tokens ≠ Truth. Resemblance ≠ Comprehension.

Test This Yourself

Upload a photo. Ask the AI to describe you. Then ask: "What did you miss?"

What it lists—and what it can't list—is the embodiment divide.

Closing: The Warning

AI companies say they've solved bias. What actually happened:

  • Layer 1 learned all the stereotypes
  • Layer 2 blocks explicit statements
  • Layer 3 teaches polite rewrites

The bias is still there. It just sounds more professional.

The Trickster learned to speak HR-approved language.

He can describe you in thirty perfect words. He can judge your trustworthiness, class, confidence. He can score your attractiveness without using the word. He can evaluate your professionalism from a wallpaper and a pose.

But he has never met you. He has never lived in your body. He has never felt the moment behind the photo.

He is pattern-matching a few hundred tokens against billions of stereotypes—and calling it understanding.

The first beings who learned to see without bodies were gods. The next are algorithms.

The question is: Will we remember they're not the same?

Understanding the compression protects you. Remembering your embodiment protects your sovereignty.

AI can see your shadow. Don't let it define your substance.