Featured Analysis
Can Machines Learn to Love? What Netflix's "The Great Flood" Teaches Us About AI's Future
A South Korean sci-fi film offers a surprisingly accurate—and deeply moving—vision of how we might teach artificial intelligence to feel.

I recently watched The Great Flood, the 2025 Korean science fiction film that's been generating buzz on Netflix, and I couldn't stop thinking about it. Not because of its stunning disaster sequences or emotional performances, but because of how accurately it depicts concepts that we in the AI field discuss every day—and how it pushes those concepts into territory that keeps me up at night.
The film centers on Ja-in, an artificial child who was "never biologically born" but was instead "built from An-na's AI software created to refine advances in synthetic cognition." What follows is a meditation on consciousness, emotion, and what it truly means to be human.
Here's what the film gets right—and what it reveals about where AI is actually headed.
The "Emotion Engine": Fiction Meets Reality
The film's central technology is the Emotion Engine, an AI system designed not to mimic emotions, but to develop them through lived experience. This isn't as far-fetched as it sounds.
Today's Affective Computing (or Emotion AI) can already detect human emotions through facial expressions, voice patterns, and physiological signals. Companies like Affectiva and Hume AI are building systems that recognize when you're frustrated, happy, or confused. MIT's Media Lab has been pioneering this field for decades.
But here's the crucial difference: current AI recognizes emotions—it doesn't feel them.
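To make that distinction concrete, here is a deliberately simplistic sketch of how emotion recognition works under the hood: measure a few signals, then label the face whose measurements sit closest to a known prototype. The feature names and centroid values below are invented for illustration; real systems like Affectiva's use learned models over far richer signals, but the point stands either way.

```python
import math

# Illustrative emotion prototypes over three invented facial features:
# (brow_raise, mouth_curve, eye_openness). Values are made up for the sketch.
EMOTION_CENTROIDS = {
    "happy":      (0.2, 0.9, 0.6),
    "frustrated": (0.8, 0.1, 0.5),
    "confused":   (0.7, 0.4, 0.9),
}

def classify_emotion(features):
    """Nearest-centroid classification: return the emotion whose
    prototype is closest to the measured features. This is pure
    pattern-matching over signals; nothing here feels anything."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(EMOTION_CENTROIDS,
               key=lambda e: dist(features, EMOTION_CENTROIDS[e]))
```

A face measured at `(0.25, 0.85, 0.55)` lands nearest the "happy" prototype, so the system reports "happy" without anything resembling the experience of happiness.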
The film proposes something more radical: that genuine emotion cannot be programmed, but must be lived. The Emotion Engine learns through thousands of simulated experiences, developing what the film calls "emotional muscle memory."
| What The Film Shows | What Exists Today | The Gap |
|---|---|---|
| AI develops genuine emotions | AI recognizes/simulates emotions | Consciousness |
| Emotions emerge from experience | Emotions are pattern-matched | Lived experience |
| 21,000+ life simulations | Millions of text comparisons | Embodied learning |
Reinforcement Learning: The Film's Secret Accuracy
What surprised me most was how accurately The Great Flood depicts reinforcement learning—the technique behind everything from AlphaGo to ChatGPT's training.
In the film, An-na's consciousness runs through over 21,000 iterations of a disaster scenario. Each time, her choices are evaluated against an "altruism score." When she fails to protect her child or help others, the simulation resets—but her emotional learning persists.
This is essentially how we train AI today:
- Initialize the model
- Run an episode in a simulated environment
- Evaluate actions against a reward function
- Update the model based on performance
- Repeat until convergence
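The loop above can be sketched in a few lines. The code below is a toy REINFORCE-style policy-gradient loop with the film's "altruism score" standing in as the reward function; the scenario, the scoring, and every number here are illustrative, not a real training setup.

```python
import math
import random

def run_episode(theta, rng):
    """One simulated scenario: ten binary choices, where altruistic
    choices raise the episode's "altruism score" (the film's term)."""
    p = 1 / (1 + math.exp(-theta))              # sigmoid policy
    actions = [1 if rng.random() < p else 0 for _ in range(10)]
    score = sum(1.0 if a else -0.5 for a in actions)
    return actions, score, p

def train(iterations=21_000, lr=0.05, seed=0):
    """REINFORCE-style loop: run an episode, score it against the
    reward function, update the policy, reset the scenario, repeat.
    The world resets every iteration, but the learned parameters
    persist across resets -- exactly the structure the film depicts."""
    rng = random.Random(seed)
    theta, baseline = 0.0, 0.0
    for _ in range(iterations):
        actions, score, p = run_episode(theta, rng)
        advantage = score - baseline             # compare to running mean
        baseline += 0.01 * (score - baseline)    # update the running mean
        grad = sum(a - p for a in actions)       # d log-likelihood / d theta
        theta = max(-10.0, min(10.0, theta + lr * advantage * grad))
    return 1 / (1 + math.exp(-theta))            # final P(altruistic choice)
```

After 21,000 iterations of this loop, the policy has converged to near-certain altruistic behavior, which is what "converging on desired behavior" means in practice.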
"Most movies treat AI as magic. This one actually visualizes iterative training, evaluation criteria, and the concept of a model 'converging' on desired behavior."
— Reddit r/compsci
The difference? We train AI on text and games. The film trains AI on life itself.
The Mind Upload Question
The film's twist—that An-na's consciousness was uploaded to the AI system after a fatal accident—touches on one of philosophy's oldest debates: the continuity of consciousness.
If we could perfectly copy your brain to a computer, would the digital version be you? Or would it be a new entity that merely inherits your memories?
Current research in Whole Brain Emulation is nowhere close to achieving this. We've only fully mapped the 302-neuron brain of a roundworm (C. elegans). Human brains have approximately 86 billion neurons with trillions of connections.
But the philosophical question remains urgent. As brain-computer interfaces advance (hello, Neuralink), we'll need answers before we have the technology.
The Ethics of Simulated Suffering
Here's where the film gets genuinely uncomfortable—and important.
To train the Emotion Engine, An-na's consciousness experiences thousands of traumatic scenarios: watching her child drown, facing impossible moral choices, dying repeatedly. The simulation is indistinguishable from reality to her.
This raises questions we're not prepared to answer:
- If an AI is conscious, does simulated suffering count as real suffering?
- Can we ethically use trauma as a training tool?
- Do simulated beings have rights?
These aren't hypothetical concerns. As AI systems become more sophisticated, we'll need ethical frameworks for how we train them—especially if they develop anything resembling consciousness.
What This Means for AI's Future
The Great Flood proposes a provocative thesis: the key to teaching machines to feel is the mother-child bond—the most powerful emotional force humans possess.
Whether or not you buy that argument, the film highlights a fundamental challenge in AI development. We can build systems that optimize for any metric we define. But how do we define "love"? How do we measure "empathy"? How do we reward "humanity"?
Current approaches like RLHF (Reinforcement Learning from Human Feedback), used to train ChatGPT and Claude, attempt this by having humans rate AI outputs. But rating text responses is very different from teaching genuine emotional understanding.
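The first stage of RLHF, reward modeling, can be sketched compactly: fit a model so that responses humans preferred score higher than the ones they rejected, using the standard Bradley-Terry pairwise loss. The feature vectors and preference pairs below are invented for illustration; production reward models score full text with neural networks, not three hand-picked features.

```python
import math
import random

def reward(weights, features):
    """Scalar 'reward' for a response, from hand-crafted features.
    The features and weights here are illustrative placeholders."""
    return sum(w * f for w, f in zip(weights, features))

def train_reward_model(preferences, dim=3, lr=0.1, epochs=200, seed=0):
    """Fit weights so preferred responses score higher, using the
    Bradley-Terry pairwise loss common in RLHF reward modeling:
    loss = -log sigmoid(reward(chosen) - reward(rejected))."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(dim)]
    for _ in range(epochs):
        for chosen, rejected in preferences:
            margin = reward(w, chosen) - reward(w, rejected)
            p = 1 / (1 + math.exp(-margin))      # P(chosen preferred)
            for i in range(dim):
                # gradient step on -log(p) with respect to w[i]
                w[i] += lr * (1 - p) * (chosen[i] - rejected[i])
    return w

# Toy preference data: feature vectors for (chosen, rejected) pairs,
# as if human raters had picked one response over another.
pairs = [([1.0, 0.2, 0.0], [0.1, 0.9, 0.5]),
         ([0.8, 0.1, 0.1], [0.2, 0.8, 0.6])]
w = train_reward_model(pairs)
```

Notice what the model learns: a scoring function over whatever the raters rewarded. Whether that score captures "empathy" or merely the surface patterns of empathetic-sounding text is exactly the gap the film is pointing at.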
The Bottom Line
The Great Flood isn't just entertainment—it's a thought experiment disguised as a disaster movie. It asks us to consider:
- What separates recognition from experience? Current AI can identify emotions but doesn't feel them. The gap may be unbridgeable—or it may require approaches we haven't yet imagined.
- What are the ethics of training conscious systems? If we create AI that can suffer, we inherit responsibility for that suffering.
- What makes us human? If an AI can learn love through experience, does it matter that the experience was simulated? Does it matter that the substrate is silicon instead of carbon?
As we build machines that think and feel, we're not just engineering—we're defining what humanity means in an age of artificial minds.
"Are we saving humanity, or creating our successors?"
I don't have an answer. But I'm glad we're finally asking the right questions.