Deepfake technology can swap faces, but getting every detail right is another matter. Eyes that look dead, noses that change shape, mouths that don't close properly—these problems persist even in high-quality deepfakes. This guide breaks down facial detail problems region by region.
At a Glance: Where Deepfakes Fail Most
| Facial Region | Common Problems | Difficulty to Fix |
|---|---|---|
| Eyes | Wrong gaze, dead stare, unnatural reflections | High |
| Mouth & Teeth | Blurred teeth, lips don't close, sync issues | High |
| Nose | Shape changes between frames, nostril distortion | Medium |
| Skin | Too smooth, wrong texture, color mismatch | Medium |
| Eyebrows | Position drift, thickness changes | Low-Medium |
| Ears | Disappearing, wrong shape, position mismatch | Low |
| Jawline | Edge artifacts, shape inconsistency | Medium |
Eyes: The Hardest Part to Get Right
Eyes make or break a deepfake. Humans are extremely sensitive to eye appearance and behavior—we evolved to read emotions and intentions from eyes. Even small errors are immediately noticeable.
Problem: The Dead Stare
What it looks like: Eyes appear lifeless, empty, or "hollow." The person looks like they're staring through you rather than at something.
Why it happens:
- The source face and target face had different gaze directions
- The model doesn't properly transfer the "life" in eyes: the subtle variations in moisture and reflection, and the constant micro-movements
- Pupil dilation doesn't respond naturally to lighting changes
What users say:
"Everything else looked fine, but the eyes were creepy. Like talking to someone who wasn't really there."
"My friend said it looked like the person was dead inside. That's when I knew the deepfake failed."
Problem: Wrong Reflections
What it looks like: The tiny reflections in the eyes (catchlights) don't match the scene's lighting. Or the reflections are missing entirely.
Why it happens:
- Reflections in real eyes mirror the actual surroundings, which are specific to each scene
- The source video was recorded in different lighting
- Most deepfake models don't attempt to recreate accurate eye reflections
Detection tip: Look for window shapes or light sources reflected in the eyes. Do they match what should be in the scene?
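If you want to go beyond eyeballing the frame, the sketch below (Python with OpenCV, using the stock Haar eye cascade, an arbitrary brightness threshold, and a placeholder file name) crops each detected eye and counts bright specular blobs. Mismatched or missing catchlights are a prompt for closer manual inspection, not proof of manipulation.

```python
import cv2

# Sketch of the tip above: count bright specular blobs (catchlights) inside
# each detected eye and compare the two eyes. The Haar eye cascade, the 230
# brightness threshold, and "frame.png" are placeholder assumptions.

img = cv2.imread("frame.png")                       # hypothetical input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

spot_counts = []
for (x, y, w, h) in eyes[:2]:                       # at most the first two detections
    roi = gray[y:y + h, x:x + w]
    _, bright = cv2.threshold(roi, 230, 255, cv2.THRESH_BINARY)
    n_labels, _ = cv2.connectedComponents(bright)
    spot_counts.append(n_labels - 1)                # ignore the background label
    print(f"eye at ({x},{y}): {n_labels - 1} bright spot(s)")

if len(spot_counts) == 2 and spot_counts[0] != spot_counts[1]:
    print("Catchlights differ between the two eyes -- worth a closer look")
```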
Problem: Gaze Direction Mismatch
What it looks like: The person appears to be looking in the wrong direction—slightly off from where they should be looking.
Why it happens:
- Eye gaze from the source face is transferred, but the target was looking elsewhere
- The model can't accurately redirect gaze without creating artifacts
Mouth and Teeth: Where Audio Sync Lives and Dies
The mouth presents unique challenges because it moves rapidly, has internal structure (teeth, tongue), and must match audio precisely.
Problem: Blurred or Malformed Teeth
What it looks like: Teeth appear smeared, have unusual shapes, or blend together into a white blur. Individual teeth aren't distinguishable.
Why it happens:
- Teeth are small, detailed structures that require high resolution to render properly
- The source and target may have different dental structures
- Fast mouth movements cause motion blur that compounds the problem
What users say:
"Pause the video when they smile. The teeth look like they were drawn by someone who's never seen teeth before."
"It's like they have a mouthguard made of marshmallow instead of actual teeth."
Problem: Lips That Don't Touch
What it looks like: When saying sounds that require lip closure (B, M, P), the lips don't fully come together.
Why it happens:
- Audio and video are processed separately
- The model may not know which sounds require which mouth shapes
- Low frame rates miss the brief moments of lip closure
Detection tip: Watch for words like "baby" or "problem." Real speakers must fully close their lips for B, M, and P sounds.
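The same check can be run over a whole clip with a landmark tracker. The sketch below assumes MediaPipe Face Mesh and the commonly cited inner-lip landmarks 13 and 14 (with 10 and 152 for a rough face height); on real speech the recorded gap should dip to roughly zero on every B, M, and P.

```python
import cv2
import mediapipe as mp

# Sketch: track the inner-lip gap across a clip and report the smallest value.
# A real speaker's gap hits roughly zero on B, M and P; a synthetic mouth that
# never closes shows a floor well above that. Landmark indices 13/14 (inner
# lips) and 10/152 (forehead/chin, used for scale), plus "speech_clip.mp4",
# are assumptions to verify.

cap = cv2.VideoCapture("speech_clip.mp4")           # hypothetical input path
gaps = []

with mp.solutions.face_mesh.FaceMesh(static_image_mode=False) as mesh:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        res = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not res.multi_face_landmarks:
            continue
        lm = res.multi_face_landmarks[0].landmark
        face_h = abs(lm[152].y - lm[10].y)          # chin to forehead, for normalization
        gaps.append(abs(lm[13].y - lm[14].y) / face_h)
cap.release()

if gaps:
    print(f"smallest lip gap in the clip: {min(gaps):.4f} (expect ~0 for B, M, P)")
```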
Problem: Interior Mouth Artifacts
What it looks like: The inside of the mouth looks wrong—tongue in impossible positions, dark holes where teeth should be, or flickering shadows.
Why it happens:
- The mouth interior is rarely visible in training data
- Complex shadows and soft tissue are hard to model
- Teeth, tongue, and soft palate all move independently
Nose: The Shapeshifter
Noses seem simple—they mostly stay still—but they cause consistent problems in deepfakes.
Problem: Shape Instability
What it looks like: The nose subtly changes shape from frame to frame or when the head moves. It might get longer, wider, or shift position slightly.
Why it happens:
- Nose shape varies significantly between people
- The model must map one nose structure onto another
- Different viewing angles reveal different nose shapes, and the model may switch between representations
What users say:
"His nose was different in every shot. Sometimes pointed, sometimes rounded. Like it couldn't decide what it wanted to be."
Problem: Nostril Distortion
What it looks like: Nostrils appear asymmetrical, too large, too small, or strangely shaped. They may flare incorrectly or not at all.
Why it happens:
- Nostrils are small dark areas that are hard to process accurately
- Breathing and facial expressions cause nostril movement that may not transfer correctly
- Shadow patterns in nostrils are easily distorted
Skin: Too Perfect or Too Wrong
Skin problems are among the most common deepfake tells.
Problem: Plastic Skin
What it looks like: Skin appears too smooth, like plastic or wax. Pores, fine lines, and natural texture are missing. The face looks airbrushed.
Why it happens:
- Models often smooth out details to reduce noise
- Training on compressed video loses fine texture
- High-frequency details (pores, small wrinkles) are the hardest to generate
What users say:
"She looked like she'd had $50,000 worth of plastic surgery. Too smooth, too perfect. Nobody's skin actually looks like that."
Problem: Color Mismatch
What it looks like: The face is a different color than the neck, hands, or ears. Undertones (pink, yellow, olive) don't match. Blush or redness patterns are wrong.
Why it happens:
- Skin tone varies by lighting, camera, and recording conditions
- The source face may have a different complexion than the target
- Color matching algorithms aren't perfect, especially at boundaries
Detection tip: Compare the face color to visible neck and ear skin. Significant differences suggest manipulation.
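To put the tip above into numbers, the sketch below compares the average colour of a cheek patch and a neck patch in OpenCV's Lab encoding; the patch coordinates and file name are placeholders, and lighting alone can shift the result, so treat a large gap as a reason to look closer rather than a verdict.

```python
import cv2
import numpy as np

# Sketch of the tip above: compare the average colour of a cheek patch and a
# neck patch in OpenCV's Lab encoding, which tracks perceived colour difference
# better than raw RGB. Patch coordinates and "frame.png" are placeholders.

img = cv2.imread("frame.png")                       # hypothetical input frame
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.float32)

cheek = lab[300:360, 400:460].reshape(-1, 3).mean(axis=0)   # placeholder regions
neck = lab[520:580, 400:460].reshape(-1, 3).mean(axis=0)

print(f"face-vs-neck colour difference: {np.linalg.norm(cheek - neck):.1f}")
```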
Problem: Missing Skin Details
What it looks like: Moles, freckles, birthmarks, scars, or other distinctive skin features are missing, in the wrong place, or inconsistent between frames.
Why it happens:
- These features are specific to individuals and don't transfer between faces
- The model may interpret them as noise and remove them
- Position may drift as the face is regenerated each frame
Eyebrows: Subtle But Telling
Eyebrows seem minor, but they're critical for expressions and often cause problems.
Problem: Position Drift
What it looks like: Eyebrows slowly move up or down over the course of a video, or their position doesn't match the expression.
Why it happens:
- Eyebrow position is tied to facial expressions
- Source and target expressions may not match
- The model may average eyebrow position, losing emotional nuance
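Drift like this is easy to chart if you can track landmarks. The sketch below assumes MediaPipe Face Mesh and records the brow-to-eye gap over a clip; the landmark indices are taken from commonly shared Face Mesh diagrams and should be double-checked before you rely on them.

```python
import cv2
import mediapipe as mp

# Sketch: track the vertical gap between one eyebrow and the eye below it,
# normalized by face height, across the clip. Slow drift unrelated to any
# expression change matches the problem above. Indices 105 (left eyebrow),
# 159 (left upper eyelid), and 10/152 (forehead/chin), plus "clip.mp4",
# are assumptions to verify against a Face Mesh landmark diagram.

cap = cv2.VideoCapture("clip.mp4")                  # hypothetical input path
gaps = []

with mp.solutions.face_mesh.FaceMesh(static_image_mode=False) as mesh:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        res = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not res.multi_face_landmarks:
            continue
        lm = res.multi_face_landmarks[0].landmark
        face_h = abs(lm[152].y - lm[10].y)
        gaps.append((lm[159].y - lm[105].y) / face_h)
cap.release()

if gaps:
    print(f"brow-to-eye gap drifts from {min(gaps):.3f} to {max(gaps):.3f} over the clip")
```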
Problem: Thickness and Shape Changes
What it looks like: Eyebrows change thickness, arch differently, or appear bushier/thinner than they should.
Why it happens:
- Source and target faces have different eyebrow structures
- Fine hair is difficult to render accurately
- The model may interpolate between different eyebrow styles
Ears: Often Ignored, Sometimes Catastrophic
Ears are frequently the weakest point in deepfakes because algorithms focus on the face, not the periphery.
Problem: Disappearing Ears
What it looks like: Ears partially fade into the background, flicker in and out of visibility, or become transparent.
Why it happens:
- Ears are often partially covered by hair
- Face-swapping algorithms may not include ears in their processing region
- The boundary between ear and background is ambiguous
Problem: Shape Mismatch
What it looks like: The ears don't match the face—wrong size, wrong shape, or positioned incorrectly on the head.
Why it happens:
- Ears are highly individual and distinctive
- Most deepfake systems swap faces only, not ears
- When ears are processed, they may be warped incorrectly
Jawline and Face Boundary: Where the Mask Ends
The edge of the face—where the swapped region meets the original—is a consistent weak point.
Problem: Visible Seams
What it looks like: A line or color difference visible at the jaw, under the chin, or at the hairline. The face looks "pasted on."
Why it happens:
- The swapped region and original must be blended
- Color, lighting, and texture differences create visible boundaries
- Motion makes blending harder to maintain
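A simple colour scan across the boundary can make a seam obvious. The sketch below uses OpenCV's Haar face detector only to guess where the jawline is, then looks for an abrupt colour step in a thin strip crossing it; both the chin estimate and the file name are assumptions.

```python
import cv2
import numpy as np

# Sketch: scan a thin vertical strip crossing the jawline and look for an
# abrupt step in average colour -- the signature a blend seam tends to leave.
# The bottom edge of a Haar face detection is only a rough stand-in for the
# chin, and "frame.png" is a placeholder.

img = cv2.imread("frame.png")                       # hypothetical input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces):
    x, y, w, h = faces[0]
    cx, jaw_y = x + w // 2, y + h                   # column through the face centre, rough chin row
    y0, y1 = max(jaw_y - 40, 0), min(jaw_y + 40, img.shape[0])
    strip = img[y0:y1, cx - 5:cx + 5].astype(np.float32)
    rows = strip.reshape(strip.shape[0], -1, 3).mean(axis=1)   # mean colour per row
    steps = np.linalg.norm(np.diff(rows, axis=0), axis=1)
    if steps.size:
        print(f"largest row-to-row colour step near the jaw: {steps.max():.1f}")
        # One sharp spike suggests a seam; gradual change is normal shading.
```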
Problem: Jaw Shape Changes
What it looks like: The jawline changes shape when the head turns, or doesn't match the rest of the face structure.
Why it happens:
- Jaw structure varies between individuals
- Profile and front views show jawline differently
- The blend boundary may cut across the jaw at different angles
Over-Deformation: When Faces Stretch and Warp
Sometimes the problems go beyond details—the entire face structure distorts.
What Over-Deformation Looks Like
- Face stretches vertically or horizontally during movement
- Features compress together or spread apart unnaturally
- The face appears to "melt" during rapid motion
- Proportions (eye spacing, nose length, face width) change throughout the video
Why It Happens
Expression mismatch: The source face had a very different expression than the target, forcing extreme warping to match.
Pose mismatch: Head angle differences require the model to significantly transform the face.
Fast motion: Rapid head turns exceed the model's ability to track and render accurately.
Insufficient training data: The model hasn't seen enough examples of this face at these angles and expressions.
What Users Say
"When he turned his head, his face stretched like taffy. It snapped back when he faced forward again, but that moment was horrifying."
"Her eyes got closer together and then farther apart as she talked. Like her skull was made of rubber."
Why Some Faces Are Harder Than Others
Not all faces are equally difficult to deepfake. Some factors make accuracy much harder:
Distinctive features: Unusual nose shapes, very thin or thick lips, and prominent moles or scars require more precise rendering.
Strong expressions: Extreme smiles, frowns, or surprise create more opportunity for errors.
Facial hair: Beards, mustaches, and stubble add complexity and often render poorly.
Makeup: Heavy or distinctive makeup is hard to transfer accurately.
Glasses: Frames and reflections create occlusion and additional challenges.
Age differences: Young faces mapped to old bodies (or vice versa) show inconsistencies.
Summary
Facial detail problems in deepfakes follow predictable patterns. Eyes lack life and proper reflections. Mouths blur teeth and fail to sync properly. Noses change shape. Skin becomes plastic. Boundaries show seams. And under stress—fast motion, extreme expressions, difficult angles—faces warp and stretch unnaturally.
Knowing what to look for helps both detection and realistic expectations for deepfake quality. The technology continues to improve, but these fundamental challenges persist because faces are complex, expressive, and observed carefully by other humans.
Related Topics
- Why Do Deepfakes Still Look Wrong? Common Failure Modes – Complete guide to deepfake artifacts
- Why Do Deepfake Expressions Look Wrong? – Expression and motion challenges
- Which Facial Features Break Deepfakes? – Scenario-based guide
- How Much Computing Power Does a Good Deepfake Need? – Quality vs. resources trade-off
- What Can't Deepfakes Do Yet? – The real limits of current technology

