8.1 What is Of Moss & Moonlight?

Of Moss & Moonlight is a living world where we apply the Five + Five framework in practice.

It brings together reactive characters, evolving relationships, and a world that adapts over time.

Helix Live Brain™ provides responsible AI baseline runtime behaviors and configurable capabilities. Of Moss & Moonlight defines the configuration, thresholds, and player-facing consent design appropriate for a Level-4 Human Impact experience.

From the paper: Framework for Responsible AI in Games

Full paper release: 15 May 2026

8.2 Evolving the World With Our Players

In a living world, players aren’t just “consumers”. They’re participants - and their experiences, boundaries, and feedback help shape where we take the game. Players are a key part of the conversation about what feels good, what feels off, and where the guardrails aren’t right.

If there’s a problem, we repair trust - not just patch systems.

Positive responses are important too, helping us preserve the moments that resonate, rather than unintentionally flattening them.

Feedback isn't just bug reporting. It's part of the design and evolution of our living world, and that responsibility continues long after launch.

8.3 Framework Overview

The Five Principles for Responsible AI in Games aren't met by technical delivery alone. They depend on clear intent, transparent communication, and practices that earn trust and maintain social license over time.

We deliberately use the term commitment when talking about how we ensure responsible use of AI in Of Moss & Moonlight. A commitment is not a goal. It is a promise backed by runtime enforcement, testing and operations, and named internal ownership.

This is how we turn living world power into something accountable.

From Principles to Commitments

The Of Moss & Moonlight Commitments below focus on design choices specific to the game - where we tighten Helix defaults, increase transparency, and add safeguards beyond baseline.


8.4 Of Moss & Moonlight Commitment Detail

What follows is the practical application of these Principles in Of Moss & Moonlight. Each Principle is mapped to specific Commitments, and includes Helix support references where enforcement or technical safeguards apply.

View OMM Commitments by principle:

OMM-A01

We’re upfront about where and how AI works in the world - before and during play.

Design Lead - Player Transparency & Trust UX

HLX-12

Players are told where AI is used, what it does, and what it doesn’t do. We don’t introduce hidden systems, silent changes, or unexpected behavior.

Trust breaks when systems feel sneaky. Transparency by design keeps the social license with our community intact.

OMM-A02

High-intensity and adaptive AI is always opt-in - and always adjustable.

Design Lead - Consent & Intensity UX

HLX-02, HLX-06

Players explicitly opt into high-intensity content and AI-enabled systems. Consent is checked at runtime and revisited at any time.

Consent must meaningfully shape the experience - not just exist in settings. Safety and agency come first.
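As a minimal sketch of what “consent is checked at runtime” could look like, the hypothetical gate below (all names are illustrative, not Helix API) stores per-player opt-ins and is queried at the moment a feature fires, defaulting to deny:

```python
from dataclasses import dataclass

@dataclass
class ConsentState:
    """Per-player consent record; field names are illustrative."""
    high_intensity: bool = False
    adaptive_ai: bool = False

class ConsentGate:
    """Checks opt-in at the moment a feature fires, not only at startup."""
    def __init__(self):
        self._players = {}

    def set_consent(self, player_id, **flags):
        state = self._players.setdefault(player_id, ConsentState())
        for name, value in flags.items():
            setattr(state, name, value)  # consent can be revised at any time

    def allows(self, player_id, feature):
        # Default-deny: no record means no opt-in has been given.
        state = self._players.get(player_id)
        return bool(state and getattr(state, feature, False))
```

Because `allows` is consulted per moment rather than cached at session start, withdrawing consent takes effect immediately.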

OMM-A03

Players can always opt out - without being punished.

Design Lead - Consent & Intensity UX

HLX-02, HLX-06, HLX-18

Choosing lower-intensity AI never reduces progression, rewards, or core content. Players can change or withdraw consent at any time.

Player control shouldn’t come with penalties or repercussions.

OMM-A04

We make sure players can step back instantly at any time.

Design Lead - Emotional Safety

HLX-02, HLX-06, HLX-18

Pause, skip, disengage, or freeze AI-enabled moments the second something feels uncomfortable - without losing progress.

When players know they can stop a moment immediately, they feel braver exploring emotionally rich worlds.

OMM-A05

We design reporting and escalation into the experience - before we need it.

Ops Lead - Incident Response & Community Trust

HLX-13, HLX-18, HLX-14, HLX-09

Players can flag moments, lines, NPC interactions, or scenes directly in-game. Reports flow into clear escalation paths with defined severity, ownership, and action.

Low friction → faster signal. Issues are surfaced early, before harm compounds.

OMM-A06

We commit to listening to our community.

Ops Lead - Incident Response & Community Trust

HLX-13

Clear channels for concerns, confusion, and emotional friction - with visible responses and follow-through.

Feedback isn’t noise - it’s crucial. Listening keeps the world aligned with the people inside it.

OMM-A07

We make consent by design part of core gameplay.

Design Lead - Consent & Intensity UX

HLX-02, HLX-06

Opt-ins, intensity changes, resets, pauses, and exits are tested repeatedly - just like combat, crafting, or dialogue.

Consent must work, every time, under pressure - otherwise players lose agency exactly when they need it most.

OMM-A08

We monitor the world - not individual players.

Engineering Lead - Data Protection & Privacy

HLX-15, HLX-16

We monitor live behavior using anonymized, aggregated data only. We do not build systems for surveillance, profiling, or identity-linked tracking.

Living worlds need oversight - but players are not data subjects. Monitoring should make the world safer without normalizing surveillance, profiling, or inferred psychological categorization.
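One way to make “anonymized, aggregated data only” concrete is to strip identity at the point of aggregation and suppress small buckets so rare events can’t single anyone out. This is a sketch under assumed event shapes, not the production pipeline:

```python
from collections import Counter

def aggregate_events(events, k=5):
    """Aggregate (player_id, event_type) pairs into identity-free counts.

    Player IDs are discarded during counting, and buckets smaller than k
    are suppressed so uncommon events can't be traced back to a person.
    """
    counts = Counter(event_type for _, event_type in events)
    return {etype: n for etype, n in counts.items() if n >= k}
```

The k-threshold is a simple small-count suppression rule; a real deployment would pick k and the event taxonomy deliberately.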

OMM-A09

We test for real player impact - not just technical correctness.

Runtime Lead - Safety & Intervention

We deliberately stress-test grief scenes, romance edges, power-imbalanced moments, consent failures, canon drift, and emergent behavior - not just whether things technically “work.”

Players experience worlds, not code. If something feels unsafe, confusing, or off-tone, that can matter more than a crash - and it deserves the same level of attention.

OMM-B01

We design AI as augmentation within human-authored worlds.

Governance Lead - Ethics & Player Trust

HLX-19

We use AI to augment imagination, but our humans always choose, shape, and finalize.

Protects authorship, ensures originality, and preserves the integrity of the world we want to build.

OMM-B02

Creator and performer consent is explicit, scoped, and respected.

Governance Lead - Studio Policy & Risk

HLX-10

Contributor agreements clearly define scope, usage, and boundaries - including how assets may be reused, adapted, or trained into tools. Consent can be limited, revised, or withdrawn.

People deserve control over their work and their likeness. Trust in Vinebright beats shortcuts every time.

OMM-B03

We enforce consent through rights metadata.

Governance Lead - Studio Policy & Risk

HLX-10

Permissions, attribution, and usage constraints are attached to creative and performance assets, so consent and authorship travel with the work.

When rights are embedded in the pipeline, consent isn’t a policy document - it’s operational.
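A minimal sketch of rights metadata that “travels with the work” might look like the following; the record shape and permission names are assumptions for illustration, not the Vinebright schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rights:
    """Illustrative rights record attached to a creative asset."""
    contributor: str
    allow_training: bool = False
    allow_adaptation: bool = False

@dataclass
class Asset:
    asset_id: str
    rights: Rights

def permitted(asset, use):
    # Pipeline stages call this before any reuse; unknown uses are denied.
    allowed = {"training": asset.rights.allow_training,
               "adaptation": asset.rights.allow_adaptation}
    return allowed.get(use, False)
```

Defaulting every permission to `False` means a missing grant can never be read as consent.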

OMM-B04

We build visibility into AI behavior.

Engineering Lead - Tooling & Observability

HLX-08, HLX-15

We add anonymized logs, dashboards, and traces so teams can see what the AI did, when, and why.

Visibility allows us to detect misrepresentation, tone distortion, or behavior that could reasonably harm contributor intent or reputation.
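One plausible shape for such a trace - assumed field names, not the Helix logging format - is a structured line per AI decision recording what happened, when, and why, with no player identity in the record:

```python
import json
import time

def trace_event(actor, action, reason):
    """Emit one structured log line per AI decision: what, when, and why.

    No player identity is recorded, in line with world-level (not
    per-player) monitoring.
    """
    return json.dumps({"ts": round(time.time(), 3),
                       "actor": actor,
                       "action": action,
                       "reason": reason})
```

Structured lines like this are what dashboards and drift detectors can aggregate later.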

OMM-C01

We make sure our canon is always overseen and traceable.

Canon Owner - Lore & Continuity

HLX-11, HLX-07

Canon truth sources are versioned, access-controlled, and enforced so AI cannot invent or overwrite lore.

Traceability keeps story, intent, and lore coherent.
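As a sketch of what a versioned, enforced truth source could look like (class and function names are illustrative), generated lines are checked against recorded canon facts, and contradictions are rejected rather than silently accepted:

```python
class CanonStore:
    """Versioned truth source, read-only at runtime (illustrative)."""
    def __init__(self, facts, version):
        self._facts = dict(facts)   # copied: runtime callers cannot mutate canon
        self.version = version

    def get(self, key):
        return self._facts.get(key)

def line_respects_canon(store, key, claimed):
    """Reject generated text that contradicts a recorded fact.

    Unknown keys pass through for human review rather than letting the
    check itself invent new canon.
    """
    truth = store.get(key)
    return truth is None or truth == claimed
```

Versioning the store means any rejected line can be traced to the exact canon snapshot it was checked against.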

OMM-C02

We always verify canon integrity under pressure, before release.

Canon Owner - Lore & Continuity

HLX-11, HLX-07

We push the AI into strange corners on purpose to ensure it never significantly contradicts lore, tone, or character voice.

Canon is the spine of the world. If the AI starts bending it, the story stops feeling authored, intentional, and ours.

OMM-C03

We keep watching for canon & tone drift in live systems.

Design Lead - Narrative Integrity

HLX-11, HLX-07

If AI bends lore, changes character voices, or warps tone, we act quickly so the world stays aligned with what our humans actually created.

Stories are fragile. If AI bends characters out of shape, trust cracks.

OMM-C04

We keep the boundary between authored truth and adaptive expression clear.

Design Lead - Player Transparency & Trust UX

HLX-08

Players understand what is fixed canon and what is flexible or adaptive through consistent world design and clear framing.

It keeps expectations grounded. Players know what’s canon, what’s flexible, and where surprises live - which protects trust and the story.

OMM-D01

We begin with why, not “because we can.”

Governance Lead - Ethics & Player Trust

AI is only used when it has a true purpose and makes the world feel more alive.

Keeps us honest: no gimmicks, no chasing tech credit, and no bait-and-switch on players.

OMM-D02

We set hard boundaries early - and stick to them.

Governance Lead - Studio Policy & Risk

HLX-19, HLX-10, HLX-01, HLX-03

From day one, some things are simply off-limits - e.g. cloning voices without consent, manipulative emotional mechanics, and opaque data use.

Declaring boundaries upfront stops us being tempted by “just this once” compromises down the track.

OMM-D03

We refuse to manipulate players “for engagement.”

Design Lead - Player Transparency & Trust UX

HLX-04, HLX-03

We don’t do guilt scripts, FOMO pressure, or mechanics that exploit addiction or mental-health vulnerabilities to keep you hooked.

We want players to play our game because they love the world, not because we engineered pressure.

OMM-D04

We use plain language to explain our use of AI - and what it means for players.

Design Lead - Player Transparency & Trust UX

HLX-08, HLX-12

Players are told where AI is used, what it does, and what it doesn’t do - in clear, human language, before and during play. We explain risks, benefits, and trade-offs without hype or spin.

Clarity builds trust. If players can’t understand it, they can’t consent to it. We want players to be able to make informed choices.

OMM-D05

We repair in the open, and we learn from it.

Ops Lead - Incident Response & Community Trust

HLX-14, HLX-12

When something goes wrong, we acknowledge it, explain why, and fix it. We use rollback or resets as needed, and treat consent defects as Sev-1.

Trust comes from honesty and repair, not perfection. When we learn something useful, we act on it.

OMM-D06

We treat offsets honestly.

Engineering Lead - Sustainability Optimization

HLX-16, HLX-17

We give teams visibility into environmental cost so they can seek to reduce first, and make informed trade-offs.

Keeps us honest about impact instead of buying our way out of responsibility.

OMM-D07

We’re transparent about our footprint.

Engineering Lead - Sustainability Optimization

HLX-08, HLX-12, HLX-16, HLX-17

We commit to publicly sharing six-monthly estimates of AI interactions, model mix, energy use, optimization wins, and mitigation strategies - in plain language.

We want to acknowledge all the consequences of using AI, and to be held accountable for improving wherever we can.

OMM-E01

We support our creators over time.

Engineering Lead - Tooling & Observability

HLX-10, HLX-19

We provide ongoing training, guidance, and safeguards - so working with AI strengthens craft rather than replacing it.

Valued contributors make better art, and we build more resilient teams. Long-term creative wellbeing matters as much as features.

OMM-E02

We prioritize vendors who share our values.

Engineering Lead - Sustainability Optimization

HLX-11, HLX-07

We favor providers who disclose energy profiles, support greener routing, and don’t lock us in.

Aligns infrastructure with ethics - and keeps long-term resilience in our hands, not theirs.

OMM-E03

We design tools that keep humans in control.

Engineering Lead - Helix Runtime

HLX-07, HLX-14, HLX-05

We build tools that let humans override (pause, throttle, rollback, retire), shape, and steer AI behavior - extending creative capability without replacing judgment or intent.

Creators should feel more capable - not more dependent, locked out, or second-guessed by the system.
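The override verbs named above (pause, throttle, rollback, retire) can be sketched as a small control surface; everything here is an illustrative shape, not the Helix Runtime API:

```python
from enum import Enum

class AgentState(Enum):
    RUNNING = "running"
    PAUSED = "paused"
    RETIRED = "retired"

class AgentControl:
    """Human-operable switches for one live AI agent (illustrative)."""
    def __init__(self):
        self.state = AgentState.RUNNING
        self.rate_limit = None          # actions per minute; None = unthrottled
        self._checkpoints = [{}]        # known-good snapshots, oldest first

    def pause(self):
        self.state = AgentState.PAUSED

    def resume(self):
        if self.state is AgentState.PAUSED:
            self.state = AgentState.RUNNING

    def throttle(self, per_minute):
        self.rate_limit = per_minute

    def checkpoint(self, snapshot):
        self._checkpoints.append(dict(snapshot))

    def rollback(self):
        # Drop the latest snapshot and restore the previous known-good one.
        if len(self._checkpoints) > 1:
            self._checkpoints.pop()
        return dict(self._checkpoints[-1])

    def retire(self):
        self.state = AgentState.RETIRED  # permanent; no resume from retired
```

Keeping all four verbs on one object makes the human override path a single, testable surface rather than logic scattered across systems.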

OMM-E04

We reduce lock-in so our world can evolve responsibly.

Engineering Lead - Sustainability Optimization

HLX-20, HLX-17, HLX-16

We avoid brittle dependencies by favoring adapters, portable configurations, and vendor flexibility, so our AI stack can evolve without compromising values or continuity.

Lock-in turns choices into traps. Flexibility protects long-term creative freedom - and keeps responsible decisions possible.
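A minimal sketch of the adapter idea: game code depends only on an in-house interface, and each vendor is wrapped behind it. The interface, adapter, and function names below are assumptions for illustration:

```python
from typing import Protocol

class TextModel(Protocol):
    """In-house interface; providers are wrapped, never called directly."""
    def generate(self, prompt: str) -> str: ...

class EchoAdapter:
    """Stand-in adapter; a real one would wrap a vendor SDK behind the
    same interface, so swapping vendors is a configuration change."""
    def generate(self, prompt: str) -> str:
        return f"echo:{prompt}"

def narrate(model: TextModel, prompt: str) -> str:
    # Game systems see only TextModel, never a vendor-specific client.
    return model.generate(prompt)
```

Because `narrate` is typed against the protocol rather than a concrete client, replacing a provider touches one adapter, not the world’s systems.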

8.5 Join the Conversation

This work is shared as part of our broader Responsible AI practice.

If you have a perspective, challenge, or insight, we’d welcome it as part of the Five + Five conversation.

Join the Conversation →