1. The FIVE + FIVE Framework
This framework presents a practical design and reference model for Responsible AI in Games.
1.1 About the FIVE + FIVE Framework
This model combines two complementary ideas:
- The Five Levels of Human Impact for AI in Games, which describe how different AI systems affect players and therefore require different approaches to social license, and
- The Five Principles of Responsible AI in Games, which define the core requirements for responsible design and implementation.
Together, the Five + Five offer a shared language for discussing human impact and responsible implementation without prescribing uniform solutions, technology choices, or gameplay patterns. The framework forms pages 1–5 of our Responsible AI work as a shared model for the industry, with Responsible AI Practice (pages 6–10) showing how the model is applied in practice through Helix and Of Moss & Moonlight.
The framework is not a mandate or a standard, and we do not share it to claim authority. It is intended as guidance for responsible design - not legal advice or a guarantee of specific outcomes.
Our aim is to support safer living world AI, adapted to genre, audience, art direction, community norms, and creative goals, and to evolve the framework through ongoing contribution and critique.
1.2 State of the Industry
Negative publicity regarding the increasing use of Artificial Intelligence is greater than ever before, amplified by predictions of an “AI bust” from market commentators.
We believe AI technology is at serious risk of outpacing its social license - generally regarded as the informal but critical “permission” granted by society and communities for its responsible use.1
Gaining social license often requires a high level of transparency, trust, and legitimacy, which in turn sustain consent, emotional safety, and perceived benefit. In practice, however, AI use is often opaque - sometimes unconsented - and the benefit for communities (rather than corporations) is unclear.
Long-term success and sustained adoption of AI technologies depend not only on technical capability, but on responsible behavior and stakeholder trust.
Few sectors have secured and maintained a robust social license for the use of AI; the resulting trust gap is still holding back adoption and societal acceptance. Privacy regulation designed for the internet era was not built for this class of technology and is not yet fully aligned to it.
Specifically in the gaming industry, reactive systems and AI-enabled narrative tools are becoming more central to how games behave. Stakeholders in this space include:
- Players
- Creators
- Studios
- Platforms
- Investors
- Regulators
- Performers
Each stakeholder group brings different, and sometimes conflicting, risk tolerances, incentives, and expectations around trust, consent, and accountability. As a result, stakeholders are being increasingly vocal about concerns around consent, authorship, labor impact, emotional safety, and trust - often without clear or shared answers.2
The industry is wrestling with real questions about AI in games, and any responsible approach needs to acknowledge that reality. Games present unique opportunities for AI, and therefore unique challenges - including deep emotional engagement, questions of identity, parasocial interaction, and consequence.
There are clear parallels between how social license has been established in other sectors and how emerging narrative technologies will need to earn and maintain player trust.
The opportunity is to apply those lessons early - before trust, safety, and legitimacy are tested in public.
1.3 Social License: Trust, Legitimacy, and Community Permission
Social License is a well-established concept in many sectors, broadly described as the permission a group grants to operate in a certain way. It exists alongside formal regulation, but not always in sync with it: social license can run both ahead of and behind regulation on the same operation.
In the deployment of AI systems, social license can be understood as the formal or informal agreement by a technology’s stakeholders to its use as set out by the vendors or implementing organizations.
It is, essentially, the community’s permission for the AI to operate in the way that has been communicated - and this permission must be earned, not assumed.
Figure: How trust translates into behaviour4
Social license does not imply consensus:
- Stakeholders may disagree on acceptable risk, boundaries, and trade-offs
- Responsible practice requires those tensions to be surfaced rather than smoothed over
- Forces beyond any single actor’s control, including cultural shifts, media scrutiny, and public responses to adjacent uses of AI, also shape expectations
The lack of conversation about social license in gaming should be considered a growing risk to trust - and when public attention shifts toward AI in games, that tension will surface quickly. In games, loss of social license will show up as player backlash, performer withdrawal, platform scrutiny, and downstream impacts on investment, partnerships, and public-facing endorsement - or the quiet abandonment of features teams are no longer willing to defend.
Social license is maintained through continual review and adaptation, clarity of intent, explicit boundaries, and openness to critique. That work should not be done in isolation.
This gap between legal authority and community consent is not hypothetical; it has played out in real implementations.
In some cases, systems have been legally permitted but socially rejected - not due to non-compliance, but due to a lack of trust, transparency, and community consent.3
In games, this gap becomes particularly visible. Living worlds and Non-Player Characters (NPCs) can blur emotional boundaries, create relationships, and build attachments with players. They influence behavior. They introduce human impact.
Not all AI in gaming is created equal. Different systems have different levels of visibility, consequence, and player-facing influence. Some operate deep in infrastructure pipelines; others speak, respond, remember, and build relationships. These differences determine the level of social license required.
A major, public failure at the most sensitive end of that range would likely undermine trust in all AI tools used in the industry, and could easily trigger hostility, regulation, or public backlash towards even the safest, most low-impact uses.
Higher human impact doesn’t just increase risk - it changes who must be part of the conversation.
Because of this, it’s important to clearly differentiate the types of AI in use today, understand their human impact, and articulate the corresponding social license for their use. This isn’t about constraining AI development - it’s about matching the level of transparency and trust to the level of impact on the people who experience it, as sketched below.
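To make “matching transparency to impact” concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration: the tier names, policy fields, and thresholds are our own placeholders, not the framework’s Five Levels of Human Impact, which are defined in the next section.

```python
from dataclasses import dataclass
from enum import IntEnum


class ImpactTier(IntEnum):
    """Illustrative tiers of player-facing influence, lowest to highest.
    Placeholder names only - not the framework's Five Levels."""
    PIPELINE = 1    # back-end tooling, invisible to players
    SYSTEMIC = 2    # shapes game behaviour without direct dialogue
    RESPONSIVE = 3  # speaks or reacts to the player in the moment
    RELATIONAL = 4  # remembers the player across sessions
    ATTACHED = 5    # sustains ongoing emotional relationships


@dataclass
class DisclosurePolicy:
    label_in_game: bool       # is the system identified to players as AI?
    consent_required: bool    # does interaction require explicit opt-in?
    review_cadence_days: int  # how often the social-license question is revisited


def required_policy(tier: ImpactTier) -> DisclosurePolicy:
    """Map an impact tier to transparency obligations (illustrative values only)."""
    return DisclosurePolicy(
        label_in_game=tier >= ImpactTier.RESPONSIVE,
        consent_required=tier >= ImpactTier.RELATIONAL,
        # Higher impact means more frequent review, floored at roughly monthly.
        review_cadence_days=max(30, 360 // int(tier)),
    )


if __name__ == "__main__":
    for tier in ImpactTier:
        print(f"{tier.name}: {required_policy(tier)}")
```

The point of the sketch is its shape, not its values: obligations ratchet up monotonically with impact, and a team adopting the framework would substitute its own levels, policies, and thresholds.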
This spectrum forms the basis of the Five Levels of Human Impact for AI in Games →
1.4 Intended Audience & Orientation
This framework is proposed primarily for creators, studios, and teams building or considering player-facing AI systems in games - particularly those working with adaptive, relational, or emotionally responsive mechanics. This includes developers, designers, producers, performers, creatives, and platform teams directly involved in shaping living game worlds.
It may also be useful to players, researchers, ethicists, regulators, and platform stakeholders who are affected by, studying, or engaging with these systems.
The Framework is intended to be used as a practical reference during design and decision-making, helping teams reason about human impact and responsibility as systems are built.
1.5 Join the Conversation
This is a living framework.
We’re refining it with input from creators, studios, players, and researchers. If you have a perspective, challenge, or insight, we’d love to hear it.
1.6 References
1. Moffat, K., et al. The social license to operate: a critical review, 2016
2. Melhart, D., et al. The ethics of AI in games, 2023
3. Xu, J., et al. Implementation of e-health record systems in Australia, 2013
4. Thiebes, S., et al. Trustworthy artificial intelligence, 2020 (adapted from McKnight et al., 2002, 2011)

