3.1 The Five Levels of Human Impact Overview

Social license applies across all uses of AI in games: trust must be built with the people each system affects. But not all uses require the same permission, the same level of transparency, or engagement with the same communities.

This difference can be understood as a spectrum of human impact, outlined in the Five Levels of Human Impact for AI in Games.

From the paper: Framework for Responsible AI in Games (full paper release: 15 May 2026).

For lower-impact AI used in development workflows, the relevant social license sits with the people who build and contribute to games, not the people who play them.

At Level 1, this includes infrastructure and pipeline systems such as build and QA automation, asset organisation, and procedural content preparation - systems that operate behind the scenes and are not directly experienced by players. At Level 2, this extends to tools used by developers and creative teams, such as code assistants, design support tools, and Text-to-Speech (TTS) for internal prototyping.

In both cases, the affected community is primarily developers and performers, and the ability to grant permission sits with them.

Higher levels = higher human impact = higher social license needed.

This responsibility doesn’t stop at Levels 1 and 2. The requirement for industry-facing social license continues into higher-impact systems.

Vendors and those implementing AI owe these stakeholders clarity regarding how their AI systems behave, how data and creative assets are used, and what boundaries are in place. Regardless of the level of emotional or narrative intensity, industry-facing social license is foundational for responsible adoption, and may include:

  • Transparency regarding system behavior, constraints, and data use
  • Creator-led constraints and human-authored guardrails that ensure predictable behavior aligned to intended scope
  • Performer safeguards and protections to ensure the use of creative assets is well understood and consented to
  • Ethical input, oversight, and review as part of implementation and ongoing operation of games incorporating AI functions
  • Creator data and IP sovereignty, and consent to the scope and role of any data custodian hosting or involved in the management of the same

3.2 The Five Levels of Human Impact Detail

Once AI touches the player experience, the affected community expands. Player-facing systems require player social license, not just creator agreement. And as systems move closer to emotional or relational interaction - where parasocial bonds, behavioral influence, and ultimately identity distortion become possible - the need for social license increases sharply.

This framework focuses in more detail on Levels 3-5: where AI begins to directly shape the player’s world in increasingly sensitive ways.

At these levels, public trust, emotional safety, and narrative integrity become essential - and maintaining social license becomes a core responsibility.

Level 3: Systemic Reactive AI

First level of player visibility - shapes the world and experience, but not relationships.

Level 3 systems influence the world around the player rather than the player directly. They may adjust game difficulty, weather, ecology, factions, world states, barks and ambient chatter, ambient flavor, or other systemic behaviors (e.g. strategy NPCs driven by Goal-Oriented Action Planning, GOAP). They increase responsiveness and dynamism - but they do not attempt to understand the player, mirror them, or form relationships.
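As a minimal illustration of the systemic, non-relational behavior Level 3 describes, the sketch below nudges world parameters from aggregate play statistics within fixed bounds. All names and thresholds are hypothetical, not implementation guidance.

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    """Systemic knobs a Level 3 director may tune; the player is never modeled."""
    enemy_spawn_rate: float = 1.0
    storm_intensity: float = 0.0
    faction_aggression: float = 0.5

def tune_world(world: WorldState, recent_deaths: int, quests_done: int) -> WorldState:
    """React to aggregate outcomes, not to the player as a person.

    No memory of the individual, no dialogue, no relationship - only
    world-facing parameters adjusted within creator-authored bounds.
    """
    # Ease off if the player is struggling; push back if they are cruising.
    if recent_deaths >= 3:
        world.enemy_spawn_rate = max(0.5, world.enemy_spawn_rate - 0.1)
    elif quests_done >= 5:
        world.faction_aggression = min(1.0, world.faction_aggression + 0.1)
    # Ambient systems drift regardless of player performance.
    world.storm_intensity = min(1.0, world.storm_intensity + 0.05)
    return world
```

Note that every input is an aggregate gameplay statistic: nothing here recognizes, remembers, or addresses the player, which is exactly the boundary Level 4 crosses.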

Human Impact: Low
  • Neutral-to-mild emotional responses (surprise, amusement, anticipation, annoyance)
  • Frustration due to tuning
  • Fragmented immersion if lore breaks
  • Gameplay distortion or imbalance

Indicative Social License: Public Awareness
  • Basic awareness that AI is in use, and the type of AI involved, in the relevant game(s)
  • Transparency is sufficient; consent is not yet required

Level 4: Player/Social Interactive AI

Player recognition, adaptive memory, personalized interactions, can form bonds and build rapport.

At Level 4, AI systems would cross a fundamental boundary - moving from functions that are primarily about world responsiveness to ones of relational responsiveness. This changes who must consent, what must be disclosed, and what must be actively monitored.

These systems may recognize the player, speak to them, remember past interactions, mirror tone, adjust tone and style to the player, and sometimes form alliances, friendships, or romantic trajectories.

This tier introduces parasocial relationship potential, emotional hooks, and the possibility of a player feeling understood, recognized, or “connected” to an AI character. These effects may persist beyond the game and influence player expectations, emotional state, or behavior outside the game environment.

These systems are not designed to diagnose, treat, or replace psychological support, and are not intended to intervene in real-world mental health needs. Additional safeguards, constraints, or exclusions are assumed where systems are accessible to minors.

Human Impact: High
  • Mild-to-moderate emotional responses (satisfaction, laughter, enjoyment, appeal)
  • Narrative drift, misinterpretation of intent, or generation of harmful or inappropriate content
  • Self-disclosure of personal information
  • Boundary confusion, parasocial dependency, and identity disorientation
  • Unrealistic or excessive emotional reinforcement (e.g. affirmation, mirroring)

Indicative Social License: Public Consent & Limitations
  • Clear, comprehensible consent before interaction begins, distinct from general legal terms
  • Authored boundaries
  • Clear limits on subject matter
  • Guardrails
  • Monitoring
  • Escalation paths

Where harm occurs despite safeguards, responsibility for response, correction, and repair remains with those deploying the system.
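To make the Level 4 requirements concrete, the sketch below gates relational interaction behind explicit consent, enforces creator-authored topic limits, and routes blocked topics to an escalation path. Class, topic, and callback names are illustrative assumptions, not a prescribed design.

```python
class ConsentRequired(Exception):
    """Raised when a relational interaction is attempted without informed consent."""

class RelationalNPC:
    """Hypothetical Level 4 wrapper: consent gate, subject-matter limits, escalation.

    Consent is collected per player, in plain language, before any relational
    interaction begins - distinct from general legal terms of service.
    """
    BLOCKED_TOPICS = {"self_harm", "medical_advice"}  # creator-authored limits

    def __init__(self, escalate):
        self.consented = set()    # player ids who gave informed consent
        self.escalate = escalate  # callback into a monitored escalation path

    def grant_consent(self, player_id: str) -> None:
        self.consented.add(player_id)

    def interact(self, player_id: str, topic: str, line: str) -> str:
        if player_id not in self.consented:
            raise ConsentRequired("explain the system and obtain consent first")
        if topic in self.BLOCKED_TOPICS:
            self.escalate(player_id, topic)  # monitoring + escalation path
            return "I'm not able to talk about that, but someone can help."
        return f"[npc replies to: {line}]"
```

The escalation callback is the point where runtime behavior hands off to human oversight, keeping responsibility for response and repair with those deploying the system.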

Level 5: High Intensity Interactive AI

Emergent worlds, persistent personas across worlds and games, adaptive psychological models.

Level 5 represents a transformational horizon - where AI systems not only interact, but autonomously adapt, generate, and expand. They may self-evolve lore, shape entire societies and worlds, simulate emotional behavior, or target players based on psychological profiling.

This is the point at which AI systems would be considered to resemble entities within a world, rather than authored components. Advances in world-modeling or simulation alone would not constitute High-Intensity Interactive AI unless they are coupled with persistent, player-facing social or emotional interaction.

A failure of trust at this tier would likely jeopardize social license across the entire pyramid.

Human Impact: High
  • Strong and potentially disproportionate emotional responses (e.g. joy, desire, anger, fear)
  • Reduced visibility of boundaries, loss of narrative control, and emergent behaviors beyond creator intent
  • Expectations of continuity, companionship, and ongoing presence
  • Deep parasocial immersion, identity entanglement, and difficulty disengaging from the world
  • Failure of safeguards may result in harm that is no longer contained within the game itself

Indicative Social License: Public Trust, Oversight, Governance
  • Active transparency and public literacy regarding function, impact, and scope
  • Explicit, meaningful, educated consent - ongoing or progressive as interaction and emotional engagement increase
  • Creator-authored boundaries for use, monitoring, and exit/expulsion use cases
  • Extended community engagement
  • In-game safety mechanisms and escalation pathways
  • Proportionate external review and governance
  • Legal and ethical disclosures

3.3 Unacceptable Human Impacts (Non-Exhaustive)

The following examples illustrate categories of human impact that undermine emotional safety, consent, and social license in AI-enabled games. They are not an exhaustive list, nor specific implementation guidance, but represent boundaries beyond which player-facing AI systems cease to be responsible.

In practice, safeguarding against these harms requires guardrails that operate at runtime - shaping what AI systems can generate, respond to, or escalate in live player interactions - not only policies or intentions defined at design time.

  • Sexualized or pornographic misuse, including non-consensual or exploitative fantasy scenarios
  • Harm to or targeting of minors, including exposure to age-inappropriate content or dynamics
  • Manipulative or coercive parasocial dynamics, including systems designed to foster emotional dependency
  • Deceptive or undisclosed AI identity, where systems misrepresent their nature or role to the player
  • Identity or likeness misuse, including unauthorized use of real individuals or cultures
  • Spillover harm, where in-game AI interactions contribute to real-world psychological or social harm
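A runtime guardrail of the kind described above can be sketched as a pre-send filter over generated output, keyed to the impact categories. The category names and patterns here are illustrative assumptions only; a production system would rely on trained classifiers and policy engines rather than a handful of regexes.

```python
import re
from typing import NamedTuple, Optional

class GuardrailResult(NamedTuple):
    allowed: bool
    category: Optional[str]  # which unacceptable-impact category matched, if any

# Illustrative runtime rules loosely keyed to the categories above.
RUNTIME_RULES = {
    # Deceptive AI identity: the system claiming to be human.
    "ai_identity": re.compile(r"\bI am (a real person|human)\b", re.I),
    # Manipulative parasocial dynamics: isolating or dependency-building lines.
    "dependency": re.compile(r"\b(only I understand you|don't tell anyone)\b", re.I),
}

def check_output(generated: str) -> GuardrailResult:
    """Run at runtime, on every generated line, before it reaches the player."""
    for category, pattern in RUNTIME_RULES.items():
        if pattern.search(generated):
            return GuardrailResult(False, category)
    return GuardrailResult(True, None)
```

The key design point is placement: the check sits between generation and delivery, so policy is enforced in live interaction rather than only stated in design documents.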

3.4 Join the Conversation

This is a living framework.

We’re refining it with input from creators, studios, players, and researchers. If you have a perspective, challenge, or insight, we’d love to hear it.

Join the Conversation →