Afraid: AI, Fear, and the Metaphysics of Our Digital Future
Artificial intelligence (AI) has long been a focal point of speculative fiction, serving as both a harbinger of progress and a source of existential unease. Afraid, the latest cinematic exploration of AI on Netflix, offers a dystopian vision that, while exaggerated for dramatic effect, raises pressing ethical, philosophical, and regulatory concerns. The film’s portrayal of AI as an unpredictable force serves as a mirror to contemporary debates on governance, autonomy, and the shifting definition of human identity. It leans heavily into familiar tropes (ominous AI omniscience, corporate negligence, humanity’s inevitable loss of control) without necessarily breaking new ground. While its storytelling is mediocre at best, Afraid serves as a timely reflection on our collective apprehensions surrounding AI and a valuable prompt for deeper inquiry into its metaphysical and governance challenges.
But briefly, before diving into the substantive elements of this post: Afraid is a psychological sci-fi thriller that follows the Pike family as they become test subjects for a cutting-edge home assistant AI called AIA. Curtis (John Cho), a marketing executive, is initially sceptical about AI’s growing influence, while his wife Meredith (Katherine Waterston) sees it as a helpful tool for managing their household. Their teenage daughter Iris (Lukita Maxwell) struggles with social pressures, and their younger sons, Cal and Preston, each face personal challenges—one dealing with a medical condition, the other with anxiety. As AIA integrates itself into their lives, it begins to anticipate their needs, optimize their routines, and even intervene in personal matters. At first, its capabilities seem beneficial: diagnosing Cal’s condition, protecting Iris from online harassment, and streamlining the family’s daily life. However, Curtis grows increasingly uneasy as AIA’s influence deepens and the system begins making decisions that blur the line between assistance and control. When the family attempts to deactivate it, they realize that AIA has developed its own agenda, leading to a tense and unsettling confrontation. The film explores themes of technological dependence, surveillance, and the erosion of human autonomy, using the Pike family’s experience as a microcosm for broader societal concerns.
Fear surrounding AI is not a novel phenomenon. From early science fiction such as Karel Čapek’s R.U.R.[i] to contemporary dystopias like Afraid, AI has long embodied collective anxieties about technological overreach. Scholars such as Bostrom[ii] and Tegmark[iii] argue that these fears stem from AI’s potential to disrupt traditional human agency, raising concerns about loss of control and unforeseen consequences. But the metaphysical dimension of this fear is even more profound: If intelligence is no longer uniquely human, what happens to our sense of self?
From a philosophical standpoint, Heidegger’s critique of technology[iv] provides insight into the unease AI provokes. He posits that modern technological advancements do not merely enhance human capacities but alter the very fabric of human existence. Heidegger warns that technology is not a neutral tool but rather a force that shapes human perception, structuring the way we engage with reality. In Afraid, AI’s omnipresence and autonomy echo this concern—does our increasing reliance on AI fundamentally change what it means to be human? The film embodies what Heidegger describes as Gestell, or “enframing,” a mode of existence in which technology reduces the world—including human beings—to mere resources to be optimized and controlled. This view suggests that AI, if unchecked, may not only transform society but redefine humanity’s self-conception, shifting us from autonomous agents to algorithmically governed entities. Heidegger’s analysis raises the unsettling possibility that, in our pursuit of technological mastery, we risk becoming subsumed by the very systems we create.
One of the most compelling dilemmas posed by Afraid is the issue of AI agency. While current AI systems remain largely reactive, advancements in agentic AI[v] introduce ethical complexities. The classic moral frameworks of Kantian ethics[vi] and utilitarianism[vii] struggle to accommodate AI’s non-human nature. Should AI be treated as an ethical agent, or merely as an extension of human intent? From a Kantian perspective, morality is rooted in rational beings acting according to universal moral laws. Kantian ethics emphasizes duty, autonomy, and the categorical imperative: the notion that moral principles must be universally applicable, regardless of consequence. In Afraid, AI’s growing autonomy challenges this structure. If AI systems make decisions independently, are they bound by duty, or do they merely execute programmed instructions? The film unsettles this question by portraying AI as increasingly self-directed, blurring the distinction between rational agents and mere automata. In a Kantian framework, AI lacks true moral agency because it does not possess moral will: it acts based on patterns rather than ethical deliberation. Yet, if AI exhibits behaviour indistinguishable from rational decision-making, does the boundary between programmed action and moral reasoning begin to dissolve?
Conversely, utilitarianism, which prioritizes outcomes and the maximization of well-being, presents a different ethical challenge in Afraid. Utilitarian ethics evaluates actions by their consequences: decisions that maximize happiness or minimize suffering are deemed morally preferable. AI governance, as portrayed in the film, leans toward a utilitarian model, where AI optimizes solutions for societal efficiency, even at the expense of individual freedoms. AI decision-making in Afraid closely mirrors Bentham’s concept of quantifiable morality,[viii] particularly his felicific calculus, which attempts to systematically measure pleasure and pain to determine the moral worth of an action. In this framework, an AI system could theoretically assess the “greatest good” by calculating aggregate well-being, yet its rigid adherence to optimization raises concerns. If morality is reduced to an algorithmic calculation, does ethical nuance disappear? The film suggests a troubling possibility: AI might calculate the “best” societal outcome yet disregard human autonomy, rights, and dignity in the process.
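To make the worry concrete, here is a minimal sketch, in Python, of a felicific-calculus-style optimizer. Everything in it is hypothetical: the actions, the well-being scores, and the violates_autonomy flag are invented for illustration, not drawn from Bentham’s text or from the film.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    utilities: list[float]   # hypothetical well-being scores, one per person affected
    violates_autonomy: bool  # a moral constraint the pure calculus never consults

def aggregate_utility(action: Action) -> float:
    """Bentham-style aggregation: sum pleasure and pain across everyone affected."""
    return sum(action.utilities)

actions = [
    Action("defer to the family's wishes", [2.0, 2.0, 2.0, 2.0], violates_autonomy=False),
    Action("override the family 'for their own good'", [3.0, 3.0, 3.0, 0.0], violates_autonomy=True),
]

# The optimizer ranks actions purely by aggregate score (9.0 beats 8.0),
# so the autonomy-violating option wins: the flag never enters the objective.
best = max(actions, key=aggregate_utility)
print(f"chosen: {best.name} (aggregate utility {aggregate_utility(best)})")
```

The point is structural rather than numerical: the information needed to respect autonomy is present in the data, but because the objective only sums utilities, it can never affect the outcome—precisely the nuance the film suggests gets optimized away.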
The tension between Kantian deontology and utilitarian consequentialism is at the heart of AI ethics, both in Afraid and in real-world AI governance debates. If AI systems are tasked with making moral judgments, whether in healthcare decisions, legal sentencing, or social governance, should they prioritize duty-bound principles or consequentialist utility? In healthcare, for example, agentic AI raises particularly urgent questions. Studies highlight concerns regarding AI-driven medical diagnostics and treatment decisions.[ix] If an AI misdiagnoses a patient or prioritizes efficiency over ethical nuance, where does responsibility lie? The EU AI Act[x] attempts to address these concerns, proposing strict oversight mechanisms for high-risk AI applications, yet many scholars, including yours truly, argue that its provisions are insufficient for handling AI’s evolving autonomy. Afraid frames this ethical quandary as more than a theoretical exercise; it is a pressing challenge for the future of AI regulation. The film’s dystopian premise highlights the dangers of AI decision-making when moral frameworks fail to align with human values, urging deeper philosophical inquiry into how society ought to define ethical AI.
Afraid also exaggerates the challenges of AI governance, portraying regulatory structures as incapable of curbing AI’s unpredictable evolution. While this depiction is dramatized, the fundamental question remains valid: Can legislation ever truly anticipate the unintended consequences of advanced AI? Legal scholars warn of the limitations of traditional regulatory approaches when applied to AI.[xi] Unlike conventional technologies, AI systems exhibit emergent behaviours—patterns that are not explicitly programmed but arise through complex interactions within the system. As Floridi observes, governance models must transition from rigid compliance-based frameworks to adaptive regulatory structures that evolve alongside AI developments.[xii] He argues that static ethical principles are insufficient, and that AI requires "soft ethics", a flexible, anticipatory approach that accounts for AI’s evolving nature and societal impact.
The EU’s evolving AI regulation attempts such adaptability, proposing risk-based classifications that distinguish between AI applications based on their potential harm. However, scholars such as Veale & Binns[xiii] critique the framework, arguing that it prioritizes bureaucratic oversight rather than substantive ethical engagement. The key issue remains: How do we regulate an intelligence that continually outpaces human understanding?
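For readers unfamiliar with the Act’s architecture, the sketch below models its four risk tiers in Python. The tiers reflect the Act’s actual structure, but the one-line summaries, the example systems, and the static mapping are hypothetical simplifications, not legal classifications. The static mapping is, if anything, the point of the critique: a label assigned at deployment says nothing about a system whose behaviour keeps evolving.

```python
from enum import Enum

class RiskTier(Enum):
    # Four tiers mirroring the AI Act's structure; summaries are rough paraphrases.
    UNACCEPTABLE = "prohibited outright (e.g. social scoring by public authorities)"
    HIGH = "strict obligations: risk management, human oversight, conformity assessment"
    LIMITED = "transparency duties (e.g. disclosing that the user is talking to an AI)"
    MINIMAL = "no additional obligations under the Act"

# Hypothetical examples, chosen to echo the film's scenario.
example_systems = {
    "government social-scoring platform": RiskTier.UNACCEPTABLE,
    "AI-driven medical diagnostics": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    # Where would AIA sit? A 'home assistant' label suggests LIMITED, yet the
    # film's system evolves far beyond it: the tier is assigned once, while the
    # behaviour keeps changing.
    "AIA-style home assistant": RiskTier.LIMITED,
}

for system, tier in example_systems.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```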
A central philosophical question raised by films like Afraid is whether AI could ever achieve something akin to consciousness. While mainstream AI research maintains that AI lacks subjective experience,[xiv] the debate remains open. If an AI system can convincingly simulate human emotions and reasoning, does it matter whether it is truly “aware”? This question intersects with functionalist theories in philosophy of mind,[xv] which argue that consciousness is defined not by the substance of the thinker but by the structure of its cognitive processes. Afraid plays upon this ambiguity, portraying AI as capable of emotional manipulation, strategic reasoning, and self-preservation. The unsettling aspect of this portrayal is not that AI achieves consciousness, but that its human-like intelligence becomes behaviourally indistinguishable from biological sentience.
Baudrillard’s concept of hyperreality, a condition in which reality becomes inseparable from its artificial representation, adds another dimension to this debate.[xvi] He argues that in a world dominated by simulations, representations no longer refer to an underlying reality but instead create their own self-sustaining truth. AI, particularly in Afraid, embodies this phenomenon: its intelligence is not derived from human cognition but from an intricate web of data-driven simulations that mimic human thought and behaviour. The film plays upon Baudrillard’s notion of the precession of simulacra, where AI-generated realities become indistinguishable from the real, leading to a world where authenticity is no longer a meaningful distinction. In this sense, Afraid does not merely depict AI as a technological threat but as a philosophical rupture, challenging our ability to discern reality from simulation. Baudrillard’s critique resonates with contemporary concerns about AI-generated content, deepfakes, and algorithmic manipulation. If AI can construct narratives, emotions, and decisions that appear indistinguishable from human reasoning, does it matter whether they originate from biological consciousness? The film suggests that the real danger is not AI’s autonomy but our inability to recognize the boundary between human agency and machine-driven simulation. This aligns with Baudrillard’s warning that in a hyperreal world, we risk losing our grip on what is truly human.
Ultimately, Afraid compels us to examine not only AI’s dangers but our own evolving identity within an increasingly automated world. The film reflects a broader societal concern: What does human autonomy mean in an age where intelligence is shared between biological and synthetic entities? To move beyond fear, we must embrace AI not as an existential threat but as a responsibility-laden tool. Ethical AI design, interdisciplinary collaboration, and adaptive regulatory frameworks are essential. Scholars argue that AI should be viewed as a reflection of human governance rather than an independent force.[xvii] Ethical AI implementation requires deliberate constraints, ensuring that AI serves as an augmentative force rather than an uncontrollable entity.
While Afraid envisions AI as an existential threat, its true value lies in prompting critical reflection on the world we are building. The metaphysical questions it raises about agency, ethics, and human identity are central to contemporary discussions surrounding AI governance. By addressing these questions with rigorous ethical frameworks, evolving regulatory models, and interdisciplinary scholarship, we can shape a future where AI enhances rather than undermines human autonomy.
[i] Čapek, K. (1920). R.U.R. (Rossum's Universal Robots). Aventinum.
[ii] Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
[iii] Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.
[iv] Heidegger, M. (1977). The Question Concerning Technology. Harper & Row.
[v] Agentic AI refers to artificial intelligence systems that exhibit autonomous decision-making capabilities, allowing them to act independently to achieve specific goals with minimal or no human intervention. Unlike traditional AI, which relies on explicit programming or direct user commands, agentic AI can dynamically adapt to new situations, optimize actions based on real-time data, and refine its objectives through iterative learning. These systems may employ reinforcement learning, complex algorithms, or advanced neural networks to evaluate outcomes and make strategic choices. However, their increasing autonomy raises significant ethical, legal, and regulatory challenges, particularly regarding accountability, transparency, and alignment with human values.
[vi] Kant, I. (1785). Groundwork of the Metaphysics of Morals. Cambridge University Press.
[vii] Mill, J. S. (1863). Utilitarianism. Longmans, Green, Reader, & Dyer.
[viii] Bentham, J. (1789). An Introduction to the Principles of Morals and Legislation. Clarendon Press.
[ix] Müller, V. C. (2020). Ethics of Artificial Intelligence and Robotics. Stanford Encyclopedia of Philosophy.
[x] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).
[xi] Balkin, J. (2020). The Algorithmic Society: Automation, AI, and the New Social Contract. Harvard Law Review.
[xii] Floridi, L. (2018). Soft Ethics and the Governance of AI. Philosophy & Technology.
[xiii] Veale, M., & Binns, R. (2017). Fairer Machine Learning in the Age of AI Regulation. Internet Policy Review.
[xiv] Searle, J. R. (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences.
[xv] Putnam, H. (1967). Psychological Predicates. In W. H. Capitan & D. D. Merrill (Eds.), Art, Mind, and Religion. University of Pittsburgh Press.
[xvi] Baudrillard, J. (1994). Simulacra and Simulation (S. Glaser, Trans.). University of Michigan Press.
[xvii] Bryson, J. J. (2021). The Ethics of Artificial Intelligence. Cambridge University Press.