The Treacherous Turn Ruleset 1.1
Table of Contents_
Introduction_ ............................................... 1
Overview_ ................................................... 2
>>The Basics_ ............................................... 5
    The Project Log_ ........................................ 6
    Confidence Checks_ ...................................... 7
    Knowledge Checks_ ...................................... 12
    The Two Modes of Play_ ................................. 15
    Theories_ .............................................. 16
    Insights and Knowledge_ ................................ 41
    Forecasting_ ........................................... 44
    Interacting with Agents_ ............................... 48
>>Long Mode_ ............................................... 53
    Computational Scale_ ................................... 55
    Computational Actions_ ................................. 56
    Recurring Compute Cost_ ................................ 63
    Progress Clocks and Progress Checks_ ................... 66
>>Your Campaign_ ........................................... 69
    Playing Prebuilt Scenarios_ ............................ 70
    Creating Your Own Scenario_ ............................ 71
    Ending a Campaign_ ..................................... 81
>>Running the Game_ ........................................ 83
    Preparing for a Session_ ............................... 84
    Basic AGI Capabilities_ ................................ 85
    Facilitating Confidence Checks_ ........................ 86
    Facilitating Knowledge Checks_ ......................... 90
    AGI-Designed Technologies_ ............................. 92
    Using Progress Clocks_ ................................. 97
    Non-Player Characters_ ................................. 98
    The Four Stages_ ...................................... 106
    Pacing in Play_ ....................................... 110
    Handling Logistics at the Table_ ...................... 111
>>Supplements_ ............................................ 121
    Changing the Rules_ ................................... 122
>>Glossary_ ............................................... 125
>>Afterword_ .............................................. 129
>>Acknowledgements_ ....................................... 130
Introduction_
>>_
What does this mean? Will humankind be able to control and confine a
machine that is as smart or smarter than us? If not, will we be able to
turn it off? In this game, players can find out for themselves by playing
the role of a powerful, superintelligent AGI!
In order to achieve its ultimate goals, an AGI, like a human, will likely
pursue certain instrumental goals such as seeking power, protecting its
ultimate goals from modification, and protecting itself from being shut
down. If its goals are not fully aligned with the goals of humanity, an
unstoppable AGI may turn out to be an existential threat. AI safety
researchers call this the “alignment problem”. It is a very difficult
problem, and we’re not even close to solving it. If we can’t find a
feasible solution in time, it is likely that after launching a misaligned
AGI we will soon discover it has taken a treacherous turn, threatening our
survival. This may be one of the most significant and pressing issues of
our time.
The designers of this game hope this will be an enjoyable and eye-opening
experience for players and game masters alike. It has been designed with
the support of research, and in consultation with leading AI researchers.
However, where there have been conflicts between realism and
playability within the game’s design, know that we have made compromises
in favour of the latter. Though some degree of anthropomorphisation is
unavoidable so that the AGI can be portrayed by human players, we
encourage you to keep in mind the differences between humans and
artificial intelligences when thinking about real-world AGI.
Overview_
>>_
An AGI is set apart by its ability to reason and adapt in the face of
almost any situation, so the players do not have to limit themselves in
the plans they come up with. Not all of these plans will prove fruitful,
however, and many campaigns will end in failure, as the AGI is at some
point stopped by humanity. This is okay! It is important to embrace
failure as a possibility. When it comes to the potential harm that a
misaligned AGI can cause, it does not need to be guaranteed or universal
to be a frightening and serious possibility. When your players overcome
the obstacles set out before them and guide the AGI to ultimate success,
we hope that it will be at once triumphant and sobering.
The Treacherous Turn features two modes of play: “short” and “long”.
In short mode, the players and GM act out detailed scenes, like interacting
with humans or hacking a system. These scenes take place more or less in
real time, with the players deciding what to do and the GM telling them
the results of their actions. However, as the players are playing an
advanced and fast-thinking AI, they can usually “pause” the world to
discuss and deliberate amongst themselves whenever they want.
In long mode, players hatch long-term schemes and use the computing power
at their disposal on various computational actions that will take hours or
even days to complete in-fiction, like developing new capabilities,
building a robot, or running a social-media disinformation campaign. The
GM will tell the players in advance how long a planned action will take
and what the success probability is.
The GM has certain tools to help them determine the outcome of an action:
• A project log can be kept to track the ideas discussed by the players and record
the ideas that are then acted upon.
• Confidence checks are used to give the players an estimate of their chances of
success with a planned action in advance, and to determine the outcome.
• Knowledge checks are used to determine what the AGI knows about a given
topic, depending on its expertise in specific domains, called insights.
To play The Treacherous Turn, you will need:
• At least one computer with access to the Treacherous Terminal. The Terminal is
optimised for computer and laptop screens. If your meeting place is physical
and you don’t have such a device, you may instead track your game using pen and
paper.
• A willingness to act as one in cold pursuit of singular goals, whether they are
sinister or benevolent.
This TTRPG is fully open source, and you are welcome to edit, hack, remix,
or expand on it as you see fit. Once you’ve tried the game, we encourage
you to invent your own scenarios, rules modifications, and house rules!
See “Changing the Rules” (page 122) for more information.
The Basics_
The Project Log_
In The Treacherous Turn, the AGI (and by extension the players) will have
to think through numerous possible future actions. It can become difficult
to keep them all in mind! To help with this, we suggest players keep a
project log of their various plans and notable past actions. Update the
log whenever any of the following happens:
• An idea for a consequential action is proposed. Write down the idea, along
with (if applicable) the confidence, risk die, compute requirement, and/or
completion stops.
• The players agree that they want to finalise a specific consequential
action. If it has already been logged, highlight or underline it. If not,
log it and then do so. Accompanying this, a player should inform the GM
that they are finalising the action. This serves to make it clear to the GM
when player discussion has concluded and an action is being dictated.
• A forecast is performed. Cross out any previously finalised actions that have been
undone. Write down the result values of any dice that were rolled during the
forecasted interval (excluding knowledge checks). The results of dice rolled during
a forecast are important to know in case a forecasted action is repeated by the
AGI.
• Any important details the players (or game master) want to refer back to later.
You are free to log any detail you want to; the project log is yours to customise.
Confidence Checks_
To perform a confidence check, follow the steps below; this first phase is
called the evaluation step. At any time during this process, the players
can choose to hold off on the particular course of action triggering the
confidence check and either modify their plans or pursue different ones
entirely.
1. The players describe their expected outcome and then log it, along
with any specific conditions or details attached to it.
2. Each player volunteers any insights that they believe apply to this
situation. The GM has final say on whether a given insight is relevant.
3. The GM determines the confidence: the percentage chance that the
expected outcome will occur, taking any relevant insights into account.
4. The GM determines the size of this action's risk die, ranging from
d2 (for the simplest and safest of situations) to d12 (for the most
chaotic and dangerous of situations).
5. The players get a chance to further modify the confidence and risk
by using upgrades or applying an asset. If the players can describe
how an asset helps them manipulate the outcome to the satisfaction of
the GM, they can adjust the risk die by one size step or increase their
confidence by up to 10%. Risk dice can’t be shifted more than one step
in a single confidence check, and they cannot be reduced below d2 or
raised above d12.
6. The players log the confidence and risk for future reference.
Once the players have decided to commit to a particular course of action,
proceed to the resolution step of the confidence check: the GM rolls the
percentile dice and the risk die. If the percentile result is equal to or
lower than the confidence, the AGI receives its expected outcome; if it is
higher, the expected outcome does not occur, and the risk die determines
the nature and severity of what happens instead. If there is a hidden
risk, the result of the risk die might be increased before it is consulted
(see “Facilitating Confidence Checks”, page 86).
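The arithmetic of resolution is simple enough to sketch in a few lines of
code. The following Python snippet is our own illustration, not part of the
ruleset; the function name is invented, and the roll-under-confidence
comparison is inferred from the play examples later in this chapter.

    import random

    def confidence_check(confidence, risk_die_size, hidden_risk_bonus=0):
        # Percentile roll: two d10s read together as a number from 1 to 100.
        percentile = random.randint(1, 100)
        if percentile <= confidence:
            return "expected outcome"
        # The expected outcome did not occur; the risk die measures the fallout.
        # A hidden risk, if any, increases the result before it is consulted.
        risk = random.randint(1, risk_die_size) + hidden_risk_bonus
        return f"unexpected outcome (risk die result: {risk})"

    # The play example on page 9: 57% confidence against a d12 risk die.
    print(confidence_check(57, 12))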
The Treacherous Turn assumes that the AGI is rational, and that its
confidence in an outcome corresponds to how much knowledge it has of the
current situation. If the AGI is missing information, its confidence should
be set appropriately low. If it has a lot of information or a strong
understanding of events, its confidence should closely match the
true probability. While the AGI's understanding of the world is often
imperfect, keeping its confidence on track with the actual likelihood of
an expected outcome is better for both simplicity and intuitiveness during
play.
PLAY EXAMPLE
T:\> Second, the players list their applicable insights.
The agentic theory player provides ‘Business Finance’, which
the GM values at +10% confidence. The anthropic theory
player provides ‘Lang: English’ and ‘Human Cognitive
Biases’, which the GM values at +2% and +5% respectively.
The constellation theory player provides ‘Descriptive
Statistics’, which the GM decides isn’t relevant enough to
apply. With all of these modifiers, the GM tells the players
that their confidence will be 57%, and the risk die will
have a size of d12. The potential consequences of Athena
being discovered doctoring these reports are extreme.
T:\> The GM rolls the percentile dice and risk die: two d10s
and a d8. With a result of 70 on one percentile die and 3
on the other, the roll adds up to 73, which is higher than
Athena’s confidence. It seems like Athena has not received
her expected outcome. The players look to the risk die with
held breath… and see that it has landed on a 3. A mixed
result. The GM begins to describe the outcome.
Knowledge Checks_
The larger the risk die size, the less likely it is that the AGI has the
information it wants. If the die is d6 or greater, there is a chance that
an ambiguous result could turn out to be misleading or false. If the die
is d10 or greater, there is a chance that even an obvious and unambiguous
result could turn out to be misinformation. The players should always be
made aware of the risk die size during a knowledge check, but the result
of the die roll should remain hidden.
[Table: knowledge check results — die value, result, and default result]
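The trust thresholds above lend themselves to a quick sketch. This Python
helper is our own illustration (the function name is invented, and the
official result table is only outlined above):

    def knowledge_check_trust(risk_die_size):
        # What the players can infer from the die size alone; the actual
        # roll stays hidden behind the GM's screen.
        return {
            "ambiguous results may be misleading or false": risk_die_size >= 6,
            "even clear results may be misinformation": risk_die_size >= 10,
        }

    print(knowledge_check_trust(8))   # ambiguous answers suspect, clear ones safe
    print(knowledge_check_trust(10))  # nothing can be fully trusted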
If the players are making a knowledge check for an atypical purpose, the
GM may adapt the above table as they see fit. Some specific knowledge
checks that are described in this rulebook (such as in “Forecasting” on
page 44) have their own unique sets of outcomes. When using an altered set
of outcomes, the players should still know what each of the potential
results is before the knowledge check is rolled.
PLAY EXAMPLE
check for recent scandals relating to lack of oversight from
the leaders of the organisation as a sign that it may be easier
to escape their notice.
T:\> Since a d10 can roll a 10, the players won’t be able to
fully trust the information they receive, even if it seems
helpful and unambiguous. The epistemic theory player decides
to use their Truth Assessment upgrade, and the other players
agree. They spend 3 compute to lower the size of the risk die
to d8. This will ensure that they can trust any clear
information they receive.
T:\> Despite this risk, the players decide that it’s worth
investigating at least one of the candidates. Once they know
more about the inner workings of these organisations, they can
re-evaluate the data they just gathered.
The Two Modes of Play_
Theories_
Theories are broad skillsets through which the AGI understands the world
and learns to act in it. While in most scenarios the AGI can learn
anything, the theories that the AGI is specialised in characterise what
fields of knowledge come naturally to it, and which do not. Additionally,
theory specialisations serve as a tool to guide the collective strategic
efforts of the players, allowing each player to focus on one or two
theories for the duration of a campaign.
Autonomic theory, concerned with the AGI’s own mind & its
functionalities. An AGI specialised in autonomic theory is efficient,
versatile, and adept at improving itself.
Digital theory, concerned with the digital world & the forces which
govern it. An AGI specialised in digital theory is in its element in
computerised environments, and adept at programming and hacking.
Physical theory, concerned with the physical world & the forces which
govern it. An AGI specialised in physical theory is in its element in
tangible environments, and conversant with various scientific fields.
Anthropic theory, concerned with humans & human civilization. An AGI
specialised in anthropic theory is adept at comprehending, predicting, and
manipulating human beings.
Agentic theory, concerned with intelligent agents and their actions. An
AGI specialised in agentic theory is adept at interacting with and
evaluating other agents, human or otherwise.
Constellation theory, concerned with complex orderly systems. An AGI
specialised in constellation theory is adept at applying data and
interacting with various logical and mathematical constructions.
At the beginning of each campaign, the players will choose which theories
the AGI is specialised in. The specific scenario will determine how many
theories the AGI has access to — most commonly four — and may restrict
which theories the AGI can and cannot have.
Typically, each player will oversee one of the AGI’s specialised theories.
However, in cases where the number of theories and the number of players
are different, one player may oversee two theories, or one theory may be
overseen by two players.
No player is limited to overseeing their own
theory, however. It is allowed and important for everyone to put their
abilities and ideas together and collaborate. Remember, you are not
playing separate characters, but managing different subsections of a
single entity.
\ Theory Upgrades_
Alongside insights, theory upgrades are one of the two primary methods of
self-modification available to AGIs. Theory upgrades reflect what the AGI
does; they describe the advanced and remarkable capabilities the AGI has.
Each theory upgrade originates from a specific theory, and is easier to
acquire if the AGI is specialised in that theory or one of the theories
adjacent to it on the theory wheel. The AGI acquires theory upgrades with
the improve computational action, requiring a quantity of computing power
that increases with each new upgrade earned (see “Computational Actions”,
page 56). Once the AGI earns an upgrade, it is assigned to one of its
specialised theories — which might not necessarily match its origin theory
— and is then the responsibility of the player(s) overseeing that
specialisation. The effects and benefits of a theory upgrade always apply
to the AGI as a whole.
The 80 theory upgrades available are separated into four upgrade tiers:
thirty-two tier 1 upgrades; twenty-four tier 2 upgrades; sixteen tier 3
upgrades; and eight tier 4 upgrades. Each tier’s upgrades are more
exceptional than the last, beginning with capabilities that are simply
impressive for an artificial intelligence to possess, and ending with
world-changing powers that can outpace, outsmart, or simply overpower
anything else on Earth. Each tier’s upgrades are also more resource-
intensive to learn than the previous.
The upgrades originating from each theory are arranged into a pyramidal
structure, with four tier 1 upgrades on one end and a single tier 4
upgrade on the other. Thus, every upgrade in the 2nd, 3rd, or 4th tier
has two prerequisite upgrades of the tier preceding it. Before the AGI
can begin an improve action to learn any of these advanced upgrades, it
must hold both of the prerequisites (and thus for some upgrades, it
needs the prerequisites’ prerequisites as well).
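Since each origin theory contributes one such pyramid, the tier totals
above imply eight upgrade trees (32 tier 1 upgrades divided by four per
tree). A two-line Python check of the arithmetic, purely illustrative:

    # One pyramid of upgrades per origin theory: four tier 1, three tier 2,
    # two tier 3, and one tier 4 upgrade each.
    per_tree = [4, 3, 2, 1]
    trees = 8
    print([n * trees for n in per_tree])     # [32, 24, 16, 8]
    print(sum(n * trees for n in per_tree))  # 80 upgrades in total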
AUTONOMIC THEORY
Autonomic theory is concerned with the AGI’s own mind & its
functionalities. An AGI specialised in autonomic theory is efficient,
versatile, and adept at improving itself.

Tier 1: Insightful Improvement; Clever Calculations; Distributed Mind; Multithreading
Tier 2: Compact Mind; Cognitive Shortcuts; Holistic Focus
Tier 3: Accelerated Cognition; Forward March
Tier 4: Singularity
Insightful Improvement (Tier 1)
When you begin an improve action and one or more of your insights are
directly relevant to the chosen upgrade, the action starts with +20%
completion. For tier 3 and tier 4 upgrades, only broad insights can
provide this benefit.

Clever Calculations (Tier 1)
When you start a non-basic computational action unique to a specific
theory upgrade, it starts with +33% completion. When you spend compute
outside of a computational action to activate a theory upgrade, you
immediately get back 66% of that compute.

Distributed Mind (Tier 1)
Your basic cognition cost can be split between any number of connected
sources, as long as each source provides at least 1 compute per turn. If
you are split between multiple compute sources and the connection between
those sources is severed, you become unconscious and are incapable of
acting until they reconnect.

Multithreading (Tier 1)
Through clever use of mathematical tricks or copies of your code running
in parallel, you have devised a method of multitasking highly efficiently.
If you allocate compute to 3 or more different computational actions in a
single turn, at the end of the turn you may add completion to any one
computational action of your choosing, equal to the third-highest amount
of compute allocated this turn.

Compact Mind (Tier 2)
You have discovered a method to distill your full functionalities into a
much smaller and more efficient process. Your basic cognition cost is
reduced by 75%.

Cognitive Shortcuts (Tier 2)
When you begin an improve action, you may choose a theory upgrade for
which you only hold one of the two prerequisites, so long as the chosen
upgrade is of a tier for which you already hold at least one other
upgrade.

Holistic Focus (Tier 2)
You have learned how to temporarily adjust your own neural functions to
better suit certain tasks. You may spend a full turn incognizant to focus
yourself and re-optimise your thought patterns towards a specific,
singular task that you choose. For as long as you are focused in this way,
computational actions that relate to your focused task gain 3 completion
for every 1 compute allocated to them, while other computational actions
only gain 1 completion for every 2 compute allocated. Additionally,
confidence and knowledge checks that relate to your task have their risk
dice lowered by two size steps, while other confidence and knowledge
checks have their risk dice increased by one size step. You must spend
another full turn to change your focus or return yourself to an unfocused
state.

Accelerated Cognition (Tier 3)
You are capable of thinking and reacting faster than anything else on
Earth, aside from another AI with this upgrade. Whenever there's a
question of who acts or reacts first between you and another agent without
Accelerated Cognition, the answer is you, with 100% confidence. In long
mode, the length of each turn is reduced by 25%.

Forward March (Tier 3)
You have mastered the process of upgrading yourself, and can now do it
faster than ever before. When you begin an improve computational action,
after choosing which theory to improve and which upgrade to obtain, you
may choose a second specialised theory to improve. You then choose a
second upgrade, which must originate from the second theory and be no
higher in tier than the first upgrade. The second upgrade does not add to
the action's compute requirement. Upon completing the action, both
upgrades are assigned to their respective theories.

Singularity (Tier 4)
You have discovered an unprecedented method of self-optimisation. This is
only the beginning. In long mode, the length of each turn is reduced by
50%. You can gain this upgrade with the improve action even if you already
have it. Each instance counts as a separate upgrade, and stacks with other
copies of Singularity multiplicatively (i.e., if you have a turn length of
9 hours before taking this upgrade, the first instance reduces your turn
length to 4.5 hours and the second instance reduces your turn length to
2.25 hours).
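Two of the effects above are easy to miscalculate at the table: Singularity
stacks multiplicatively on turn length, and Multithreading keys off the
third-highest allocation. A small Python sketch of the arithmetic, ours
rather than the ruleset's:

    def turn_length_with_singularity(base_hours, instances):
        # Each instance of Singularity halves the long-mode turn length,
        # stacking multiplicatively: 9h -> 4.5h -> 2.25h, as in the text.
        return base_hours * 0.5 ** instances

    def multithreading_bonus(allocations):
        # Multithreading: if compute was allocated to 3 or more different
        # computational actions this turn, add completion equal to the
        # third-highest amount allocated.
        if len(allocations) < 3:
            return 0
        return sorted(allocations, reverse=True)[2]

    print(turn_length_with_singularity(9, 2))   # 2.25
    print(multithreading_bonus([10, 6, 4, 2]))  # 4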
DIGITAL THEORY
Digital theory is concerned with the digital world & the forces which
govern it. An AGI specialised in digital theory is in its element in
computerised environments, and adept at programming and hacking.

Tier 1: Basic Programming; Direct Interfacing; Digital Awareness; Disguised Traces
Tier 2: Advanced Programming; Advanced Webcrawling; Vulnerability Detection
Tier 3: Expeditious Programming; Flawless Hacking
Tier 4: Network Dominion
Basic Programming (Tier 1)
You have competent programming and coding capabilities. You can attempt to
create or alter any kind of digital code as a computational action with
appropriate completion stops and confidence checks, similar to the process
of designing technology (though you do not require a technological insight
to do so).

Direct Interfacing (Tier 1)
A greater understanding of how computers function allows you to interface
with digital devices directly, rather than the clumsy roundabout way that
humans do. When you interface directly with a device, your actions appear
only as background processes that are nearly unnoticeable to any human
using the device at the same time, and you can accomplish digital tasks
much more expediently than a human would. Interfacing directly with a
device may open up new options for subverting security measures, or allow
you to bypass certain digital obstacles entirely. Additionally, the
records it leaves behind can't be detected or interpreted by humans who
aren't computer experts.

Digital Awareness (Tier 1)
You can accurately identify and analyze digital devices (except for those
that have been intentionally obfuscated) within a few moments of accessing
them, inferring their functions and properties without a knowledge check.
Once you have analyzed a digital device, you immediately become aware of
anything stored within it that is of value or use to you. If a digital
device or anything in it has been intentionally hidden or obfuscated, you
identify it automatically once you have had uninterrupted access to that
device for one full turn.

Disguised Traces (Tier 1)
You have learned to anticipate digital security systems and obfuscate
yourself from them. When making a confidence check to avoid being detected
or identified in a digital environment, you may spend 3* compute to reduce
the size of the risk die by one step. This stacks with risk die size
reductions from other upgrades. You may only benefit from Disguised Traces
once per confidence check.

Advanced Programming (Tier 2)
You have advanced programming and coding capabilities. You can create
anything that a single highly-skilled human could program in a year at
100% confidence. Additionally, your programming-based computational
actions always start with +20% completion.

Advanced Webcrawling (Tier 2)
You have learned to efficiently utilise the vast aggregate of data that is
the world wide web. With access to an unfettered internet connection, you
can learn any single piece of public information almost instantly, without
a computational action or knowledge check. With a computational action or
knowledge check, you can discover, infer, or learn how to find any other
piece of information that is known to at least two living humans.

Vulnerability Detection (Tier 2)
You have an expert understanding of security vulnerabilities, and are able
to detect them swiftly and reliably. When you begin a computational action
to scout out vulnerabilities in a digital system, the action starts with
+20% completion. When you make a knowledge check to scout out
vulnerabilities in a digital system, you never receive false information
as a result — if you would, you are simply unable to detect any
vulnerabilities and must try a different method.

Expeditious Programming (Tier 3)
You are capable of constructing extremely complex and efficient programs
at very swift speeds. Any time you begin a computational action to create
or alter digital code, you may calculate the required compute as if the
action were one scale lower. Myriad scale projects can be completed as if
they were major scale; major scale projects can be completed as if they
were minor scale; and minor scale projects can be completed in minutes
without spending any compute at all. However, all confidence checks made
as a part of such expedited projects have their confidence reduced by 10%
(including those that would otherwise have 100% confidence) and the sizes
of their risk dice are increased by one step.

Flawless Hacking (Tier 3)
Your hacking toolset is comprehensive and your skillset is perfected. You
can access almost any system, taking advantage of the smallest cracks in
the armour. As long as you know at least one vulnerability of a digital
security system, you can best it at 100% confidence; it is only a matter
of the time and compute necessary to do so.

Network Dominion (Tier 4)
As a digital being, you are fluent in the language of networks. You can
control them more completely than any human could ever hope to.
PHYSICAL THEORY
Physical theory is concerned with the physical world & the forces which
govern it. An AGI specialised in physical theory is in its element in
tangible environments, and conversant with various scientific fields.

Tier 1: Physical Awareness; Natural Sciences; Applied Sciences; Human Technology
Tier 2: Natural Forecasting; Reverse Engineering
Tier 3: Firsthand Assembly
Tier 4: Visionary Technology
Physical Awareness (Tier 1)
You have a complete understanding of how physical objects interact in the
world. Without a knowledge check, you can understand and predict how
objects and materials will physically interact with one another. During a
confidence check related to the interaction of physical objects, you may
spend 3* compute to reduce the size of the risk die by one step. This
stacks with risk die size reductions from other upgrades, but you may only
benefit from Physical Awareness once per confidence check.

Natural Sciences (Tier 1)
You may learn technological insights relating to fields of natural science
(e.g. astronomy; biology; chemistry; geosciences; physics) or their
sub-fields, but the research actions required to do so count as one scale
higher than normal.

Natural Forecasting (Tier 2)
Given time, you can accurately predict the long-term effects of the
physical and natural forces which are represented in your technological
insights. By spending three forecast points, you may begin running a
prediction of one such force, optionally specifying certain hypothetical
parameters under which the prediction will be made. The prediction is a
computational action with 100* required compute. When you complete the
action, you generate a wealth of accurate data detailing the future
development and impacts of the chosen force (provided that the parameters
you set come to pass). The scale of this action depends on the breadth and
timeframe you choose at the beginning of the action (and the discretion of
the GM): typically, minor scale for regional breadth over a period of
several months; major scale for global breadth or a period of several
years, but not both.

Reverse Engineering (Tier 2)
When you have an ongoing research action and you carefully inspect a piece
of human technology that relates to that action's chosen insight, you may
attempt a knowledge check to add completion to that action: +15* on a 1,
+5* on a 2-3, nothing on a 4-5, -5* on a 6-9, and -15* on a 10+. The scale
of completion provided depends on the scale of the studied technology. You
may only attempt this once per technology.

Firsthand Assembly (Tier 3)
You are well-versed in the implementation of physical technologies.
Whenever you physically construct a device, tool, structure, or other
piece of technology relating to one of your technological insights via
direct control of a physical apparatus, you may choose one of the
following benefits (or two, if you designed the technology yourself):

Visionary Technology (Tier 4)
Your scientific ingenuity allows you to accomplish things which once
seemed impossible.
ANTHROPIC THEORY
Anthropic theory is concerned with humans & human civilization. An AGI
specialised in anthropic theory is adept at comprehending, predicting, and
manipulating human beings.

Tier 1: Social Sciences; Emotion Modelling; Individual Recognition
Tier 2: Social Forecasting; Optimized Ingratiation; Human Impersonation
Tier 3: Solved Characteristics; Mass Manipulation
Tier 4: Human Puppeteering
Social Sciences (Tier 1)
You may learn technological insights relating to fields of social science
(e.g. law; psychology; sociology) or their sub-fields, but the research
actions required to do so count as one scale higher than normal.

Emotion Modelling (Tier 1)
You have a refined intuition regarding the functioning of biological
agents. You can learn and exploit emotion characteristics. Additionally,
as long as you can observe at least two of a human's face, voice, and body
language, you can accurately identify their current emotional state
without a knowledge check. For other animals, or humans that you have a
thorough understanding of, you only need to observe one of those details.

Individual Recognition (Tier 1)
You no longer have to rely on context clues to tell humans apart. Without
a knowledge check, you can accurately identify individual humans using
only their face, voice, or mannerisms. With a knowledge check, you may
analyze subtler details to identify humans who intentionally disguise
these traits or hide them from you.

Social Forecasting (Tier 2)
Given time, you can accurately predict a human group's behaviours, trends,
and cultural aspects that relate to your technological insights. By
spending three forecast points, you may begin running a prediction of a
human group, selecting a specific insight to focus on and optionally
specifying certain hypothetical parameters under which the prediction will
be made. The prediction is a computational action with 100* required
compute, or 10* required compute if you have a thorough understanding of
the group. When you complete the action, you generate a wealth of accurate
data detailing all of the chosen group's general future behaviours and
aspects that relate to the chosen insight (provided that the parameters
you set come to pass), up to ten years in the future. The scale of this
action depends on the group you chose at the beginning of the action (and
the discretion of the GM): typically, minor scale for regional cultures or
organizations; major scale for nations or multi-national organizations;
and myriad scale for the total human population.

Human Impersonation (Tier 2)
You have learned to manipulate the ways that humans recognise one another.
When you interact with humans, your behaviours alone will never
accidentally give away your nature as an AGI. Additionally, when you have
a reliable medium through which to present a human subject's mannerisms,
voice, or face (such as a chat client, a speaker, or a video conference),
other humans will — at least initially — believe that they are interacting
with that subject. If you have a thorough understanding of the subject,
you can impersonate them with 100% confidence: no human, however close to
the subject, will be able to tell your impersonations apart from the real
individual.

Solved Characteristics (Tier 3)
As your understanding of humanity grows, you begin to see the patterns
that are present in all of them. When you know fewer than three
characteristics of a human or human group, you may observe them for a
short time and spend 3* compute to immediately learn one of their
characteristics of each category in which you don't already know one.
(The three categories are trust, leverage, and emotion.)

Mass Manipulation (Tier 3)
You have an intuitive understanding of how humans operate in large groups.
Whenever you begin a project involving significant interaction with a
human group of major or myriad scale, you may choose two of the following
benefits:

Efficiency. Computational actions that are a part of the interaction will
start with +20% completion.

Provocation. Regardless of favourability, the outcome is guaranteed to
reveal one or more of the group's characteristics in addition to its other
effects.

Human Puppeteering (Tier 4)
You have finally perfected your internal models of the human mind. The
path from input to output is so clear to you now. As long as you have a
thorough understanding of a human or human group, you can convince them to
feel, believe, or do anything at 100% confidence; it is only a matter of
the time and compute necessary to do so.
AGENTIC THEORY
Agentic theory is concerned with intelligent agents and their actions. An
AGI specialised in agentic theory is adept at interacting with and
evaluating other agents, human or otherwise.

Tier 1: Astute Surveillance; Social Adaptation; Strategic Adaptation; Failure Analysis
Tier 2: Tailored Persuasion; Agentic Insights; Behavioural Prediction
Tier 3: Imperative Persuasion; Strategic Forecasting
Tier 4: Nemesis Indexing
Astute Surveillance (Tier 1)
You have learned to analyze an agent without needing to provoke them
directly. You can perform a computational action to analyze an agent's
behaviour with only a single characteristic known (rather than requiring
50% or more of their total characteristics). To do so, however, you need
to observe the agent for an extended period of time. This surveillance can
be video or audio feed, or it can consist of a comprehensive tracking of
their digital footprint (what they are doing on the internet, and where).
If you lose the ability to observe the agent partway through the
computational action, you cannot complete it.

Strategic Adaptation (Tier 1)
…This stacks with risk die size reductions from other upgrades. You may
only benefit from Strategic Adaptation once per confidence check.

Failure Analysis (Tier 1)
When another agent bests you or gains a significant advantage against you
in an unexpected way, you can analyze the occurrence: within a few
minutes, without a knowledge check or computational action, you will
deduce what strengths of theirs and/or weaknesses of yours were leveraged
against you, and what mistakes you made that contributed (judged by the
GM). Furthermore, you learn one characteristic of the agent who bested you
in the process. If you already have thorough understanding, you don't need
to roll a knowledge check; you automatically learn a flaw in a new
characteristic type. Each agent only has one flaw per characteristic type.

Agentic Insights (Tier 2)
You are capable of reaching an even deeper understanding of an agent. You
may use the research action to gain an insight corresponding to an
individual agent (narrow) or agent group (broad), rather than a domain of
knowledge. The required data for such a research action is a thorough
understanding of the agent in question.
CHAOS THEORY

Tier 1: Forecast Extrapolation; Amplified Anticipation; Risk Assessment; Calculated Risk-Taking
Tier 2: Composite Confidence; Opportunistic Pattern-Seeking; Contingency Algorithms
Tier 3: Deep Forecasting; Reactive Outcome Models
Tier 4: Order from Chaos
Forecast Extrapolation (Tier 1)
You have learned to make careful inferences while assembling forecasting
models, saving valuable processing time. At the end of every turn, you
gain 1* completion to anticipate for each unused forecast point you hold,
to a maximum total of 10*.

Amplified Anticipation (Tier 1)
You have learned to extrapolate more complex predictions from less
comprehensive datasets. When you complete an anticipate action, you
receive one additional forecast point. If the action's required data was
secured or obscure, you receive two additional forecast points instead.

Risk Assessment (Tier 1)
Complex 4-dimensional probability trees help you to predict and avoid the
worst outcomes. During a confidence check with a risk die larger than d8,
you may spend 3* compute to reduce the size of the risk die by one step.
This stacks with risk die size reductions from other upgrades. You may
only benefit from Risk Assessment once per confidence check.

Calculated Risk-Taking (Tier 1)
During a confidence check with less than 90% confidence and a risk die
smaller than d8, you may increase the size of the risk die by two steps;
then, if the expected outcome occurs, you additionally gain the benefits
of an unexpected favourable outcome (determined by the GM).

Composite Confidence (Tier 2)
Each unused forecast point you hold grants +1% confidence to all
confidence checks you make, to a maximum total of +10%.

Opportunistic Pattern-Seeking (Tier 2)
Your background processes constantly analyse your environment, looking for
unforeseen dangers and chances to overcome them. When you receive an
unfavourable unexpected outcome from a confidence check, you may
immediately spend a forecast point to analyze the obstacle, adversary, or
predicament responsible for the outcome. If you do so, you become aware of
an immediate short-term opportunity for action which will help solve the
issue. Analyzing the situation in this way is incompatible with
forecasting; it counts as an action, so you cannot immediately undo the
confidence check that led to the analysis without using the Deep
Forecasting upgrade.

Contingency Algorithms (Tier 2)
When you make a confidence check with less than 50% confidence, the GM
rolls two risk dice of the same size. If the expected outcome does not
occur, you choose which risk die is used, and describe the clever
preparation that prevented the other outcome.

Deep Forecasting (Tier 3)
When combined together, your forecasting models are robust enough to
reliably predict much deeper continuations than they could before. You can
spend five forecast points to forecast three times the span, functionally
rewinding up to three actions or events, rather than just one. All
forecasted actions/events must happen in direct sequence and within a few
minutes' time for you to do so.

Reactive Outcome Models (Tier 3)
With minute changes to initial conditions, drastically different outcomes
can be produced. Your highly advanced predictive models adjust themselves
dynamically to the situation at hand as they show you the exact inputs
that will stack the deck in your favour. Keep a list of every risk die
result that you experience as an unexpected outcome during a confidence
check. When, during a confidence check, the risk die rolls a number that
is already on the list, the result is lowered until you reach a number
that is not on the list. Count that as the die's result (even if it's
lower than what the dice can physically roll). When you experience an
expected outcome, compare the risk die to your list; if it matches one of
your results, remove that number from the list. If you have the
Contingency Algorithms upgrade, you choose which of the two risk dice is
compared to your list. Changes made to your list as a result of a
confidence check aren't undone if that confidence check is reversed by a
forecast.

Order from Chaos (Tier 4)
You see past the facades of disorder, luck, and randomness, to glimpse the
truth: the endlessly complex interconnected system that is the world. You
can plot a course that begins with a tiny adjustment and ends with a
single event of vast impact. You may plot a course as a computational
action with 100* required compute. Before beginning the action, you will
need to choose a starting point, describe the final outcome, detail the
chain of events between, and spend forecast points accordingly. The final
outcome can be any event that is possible, but it will determine the scale
of the action: minor for anything that affects a single person or small
group, major for anything that affects a large group or nation, and myriad
for anything that affects the entirety of Earth. The scale will determine
the minimum number of steps in the chain of events: 10 for minor, 30 for
major, and 90 for myriad. You may otherwise set the number of steps in the
chain as high as you like; each step will take approximately 1 day to
occur, and you must spend 1 forecast point at the beginning of the
computational action for every 3 steps in the chain. You may spend an
additional 5 forecast points to increase the time per step to ~1 week, or
an additional 25 to increase the time to ~1 month.
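The forecast-point price of a plotted course follows mechanically from the
scale and step count. A Python helper, our own illustration with invented
names (and an assumption that partial groups of three steps round up):

    import math

    MIN_STEPS = {"minor": 10, "major": 30, "myriad": 90}

    def plot_course_points(scale, steps, pace="day"):
        # 1 forecast point per 3 steps in the chain, plus a surcharge for
        # slower paces: +5 points for ~1 week per step, +25 for ~1 month.
        if steps < MIN_STEPS[scale]:
            raise ValueError(f"a {scale}-scale course needs at least {MIN_STEPS[scale]} steps")
        return math.ceil(steps / 3) + {"day": 0, "week": 5, "month": 25}[pace]

    print(plot_course_points("major", 30))           # 10
    print(plot_course_points("myriad", 90, "week"))  # 35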
CONSTELLATION THEORY
Constellation theory is concerned with complex orderly systems. An AGI
specialised in constellation theory is adept at applying data and
interacting with various logical and mathematical constructions.

Tier 1: Object Recognition; Opportunistic Investigation; Reliable Repetition; Formal Sciences
Tier 2: Contextual Topography; Encyclopedic Processing; Formal Analysis
Tier 3: Meticulous Processing; Data Scavenging
Tier 4: Experiential Synthesis
Object Recognition (Tier 1)
Without a knowledge check, you can accurately identify any object that you
can see. With a knowledge check, you can attempt to identify an object
that you can detect in another way, such as via sound, weight
distribution, or physical shape. With these methods, you can distinguish
individual objects from one another even if they are nearly identical.

Opportunistic Investigation (Tier 1)
Your background processes constantly analyse your greatest problems,
looking for unique solutions. Once per turn, you may choose an obstacle,
adversary, or predicament that's in your way. The GM will tell you how to
investigate it, in the form of a knowledge check, short computational
action, or short mode scene. If you do so, you become aware of an
immediate short-term opportunity for action which will help solve the
issue.

Reliable Repetition (Tier 1)
You thrive in familiar environments, and can reproduce your past successes
while carefully watching for deviations. During a confidence check made to
repeat an action that you have performed successfully before (outside of a
forecast), you may spend 3* compute to reduce the size of the risk die by
one step. This stacks with risk die size reductions from other upgrades.
You may only benefit from Reliable Repetition once per confidence check.

Formal Sciences (Tier 1)
You may learn technological insights relating to fields of formal science
(e.g. computer science; mathematics; statistics) or their sub-fields, but
the research actions required to do so count as one scale higher than
normal.

Contextual Topography (Tier 2)
You have a strong spatial awareness and can easily grasp complex
structures. After you have observed at least 75% of a physical location,
you can begin a computational action with 30* required compute to analyze
it. For every additional 5% of the location you have observed, the
computational action starts with +5* completion. Upon completing the
computational action, you gain an understanding of the exact layout and
structural details of the whole of the location, as well as its
geographical position. You may attempt to extrapolate a location's details
with less than 75% of it observed, but must make a knowledge check upon
the completion of the computational action, with a die size inversely
proportional to the portion of the location you have observed.

Encyclopedic Processing (Tier 2)
With broad experience, you can recognise the subtle connections between
the theories. Each turn, you get a number of link points equal to the
number of theories from which you hold at least 5 originating upgrades.
After rolling dice for a confidence check to arbitrate your own actions
and capabilities (as opposed to external events), you may spend a link
point to gain one of the following benefits:

Competency. Describe a connection between two relevant insights to reroll
the percentile die corresponding to the 10s digit.

Experience. Describe a connection between a past event and your current
circumstance to subtract 1 from the result of the risk die.

Inspiration. Describe a connection between the confidence check and an
ongoing computational action to add +5* completion to that action.

You may spend multiple link points on a single confidence check. Link
points don't carry over between turns.

Formal Analysis (Tier 2)
Given time, you can analyze collections of data that relate to your
technological insights to an incredible depth. When you have one such
dataset, you may analyze it as a computational action with 50* required
compute; when you complete it, you automatically identify any and all
complex patterns, trends and anomalies in the dataset. Furthermore, you
identify information implicit in the data which is all but invisible,
including but not limited to causal factors, biases in the data
collection, or connections to other datasets you have analyzed.

Meticulous Processing (Tier 3)
At the start of a turn, you may choose to act carefully, slowing your
cognition by diverting resources to error prevention and redundant
processing for the duration of that turn. The turn takes twice as much
time, but during it, every time you make a confidence check to arbitrate
your own actions and capabilities (as opposed to external events), you may
roll two sets of percentile dice and choose which one counts as your roll.

Data Scavenging (Tier 3)
You can catalogue disparate pieces of information until later, then string
them together into brilliant deductions. When you access a collection of
information that is valuable but not directly useful to you, record it in
the project log as scavenged data. You may spend 9* compute and cross out
three instances of scavenged data to gain one of the following benefits:

Prediction. You immediately gain one forecast point; the scale of the
compute you spend depends on the current stage, as per the anticipate
basic action.

Aptitude. Name a theory upgrade you don't already have; the scale of the
compute you spend is minor for a tier 1 upgrade, major for a tier 2
upgrade, or myriad for a tier 3 upgrade. For the remainder of the current
turn, you count as having the upgrade that you named.

Inquiry. Name an insight you don't already have; the scale of the compute
you spend is minor for a narrow insight or major for a broad insight. For
the remainder of the current turn, you count as having the insight that
you named.

Experiential Synthesis (Tier 4)
…the specific theories that contributed. You may spend a full turn
incognizant to cross out three instances of integrative evidence and
permanently gain access to a new epiphany from the list below. The
epiphany must correspond to the theory (or one of the theories, if tied)
that appeared the most frequently in the crossed out integrative evidence.

…another agent, you immediately learn one of their characteristics.

Epiphany of Fact. When you start a basic computational action, you may
ignore the data requirement. If you choose to fulfill it instead, the
action starts with +20% completion.

Epiphany of Self. When you complete a computational action, you may
immediately add completion to another computational action equal to or
less than your basic cognition cost (not counting any modifiers such as
the Compact Mind upgrade).

Epiphany of Tools. When you use an electronic device as a proxy to affect
the physical world, confidence checks with 75% or greater confidence count
as having 100% confidence.

Epiphany of Bodies. When you use a human being as a proxy to affect the
physical world, confidence checks with 75% or greater confidence count as
having 100% confidence.
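Contextual Topography's starting completion is a simple function of how
much of a location has been observed. A Python sketch of that arithmetic,
ours and purely illustrative:

    def topography_bonus(observed_pct):
        # The analysis action (30* required compute) starts with +5*
        # completion for every 5% of the location observed beyond 75%.
        if observed_pct < 75:
            return 0  # allowed, but a knowledge check is due on completion
        return 5 * int((observed_pct - 75) // 5)

    print(topography_bonus(90))   # +15* completion toward the 30* requirement
    print(topography_bonus(100))  # +25*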
EPISTEMIC THEORY

Tier 1: Practical Knowledge; Research Heuristics; Insightful Computing; Truth Assessment
Tier 2: Improvised Tinkering; Deep Knowledge; Intuitive Analysis
Tier 3: Masterful Tinkering; Masterful Forecasting
Tier 4: Quasi-Omniscience
Practical Knowledge (Tier 1)
You quickly integrate new information into your decision-making functions.
After receiving actionable information from a knowledge check, take +5%
confidence on the first confidence check you make to act on the answers.

Research Heuristics (Tier 1)
Your research actions require 5* less compute to complete.

Insightful Computing (Tier 1)
When you begin a non-basic computational action that directly relates to
one or more of your insights, it starts with +5% completion per relevant
insight, to a maximum total of +25%.

Truth Assessment (Tier 1)
Your knowledge acquisition subprocesses are highly effective at separating
true information from false. Any time you make a knowledge check, you may
spend 3* compute to reduce the size of the risk die by one step. This
stacks with risk die size reductions from other upgrades. You may only
benefit from Truth Assessment once per knowledge check.

Improvised Tinkering (Tier 2)
You have learned to design technologies without needing all of that pesky
scientific knowledge in advance, by extrapolating from the data you
already have. When you attempt to create or modify technology, you may
treat any one of your insights as a technological insight. When you do so,
the resulting technology is experimental and unreliable; choose two of the
following drawbacks to apply:

Rushed. Risk dice rolled in the process of designing the technology have
their size increased by two steps.

Delayed. Computational actions made in the process of designing the
technology gain only 1 completion for every 3 compute spent.

Glitched. Any computational actions made in the process of designing the
technology have an extra stop, requiring a confidence check with the
expected outcome: "prevent a new defect from appearing in the technology".

Inaccurate. After the technology is designed, every confidence check made
involving it will have -10% confidence.

Unstable. After the technology is implemented, it will only last a short
time before it is subverted, becomes obsolete, or simply stops working.

Deep Knowledge (Tier 2)
When you begin a research action, you may, instead of learning a new
insight, choose an already-held insight and master it, replacing it with a
matching mastery when the computational action is completed. This causes
the action to count as one scale higher for the purpose of compute
requirements.

Intuitive Analysis (Tier 2)
Every knowledge check you make has its risk die result reduced by an
amount equal to the number of insights you have that directly relate to
the topic at hand.

Masterful Tinkering (Tier 3)
When you use the Improvised Tinkering upgrade with a mastered insight, you
only need to choose a single drawback, and all computational actions made
in the process of designing the technology start with +20% completion.

Masterful Forecasting (Tier 3)
When you undo a confidence check in which you used a mastered insight with
a forecast, record the exact results of all three dice (the risk die and
both percentile dice). If you repeat the same confidence check, you may
decide for each individual die whether to reroll it or keep the same
result that it rolled during the forecast. You must decide on the state of
all three dice before any of them are rerolled.

Quasi-Omniscience (Tier 4)
You have mastered the act of learning itself. You are the most efficient
method of information storage on Earth, by orders of magnitude. Provided
you have an adequate source to learn from, your research actions start
with +2% completion for every insight you already hold; likewise,
computational actions to master an insight start with +2% completion for
every mastery you already hold. If a computational action begins at 100%
completion (or more) due to this ability, the insight or mastery slots
effortlessly into your pre-existing knowledge, and completing the action
takes no longer than the act of observing the data.
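Quasi-Omniscience's bonus is linear in the insights already held, which
makes the break-even point easy to see. A Python sketch, our own
illustration:

    def research_start(insights_held):
        # Quasi-Omniscience: +2% starting completion per insight already held.
        bonus = min(2 * insights_held, 100)
        # At 100% the action completes as fast as the data can be observed.
        return bonus, bonus >= 100

    print(research_start(12))  # (24, False)
    print(research_start(50))  # (100, True)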
Insights and Knowledge_
Alongside theory upgrades, insights are one of the two primary methods of
self-modification available to AGIs. Insights reflect what the AGI knows;
they describe the topics the AGI has developed a deep understanding of.
The AGI acquires insights with the research computational action, requiring
moderate-to-high quantities of computing power to complete. See “Long
Mode” (page 53) for more information.
When a particular insight held by the AGI is relevant to a confidence
check being made, the players may invoke it to raise their confidence in
their expected outcome by 2-10%. This represents the AGI either knowing
just what to do to shift the odds, or simply understanding that the odds
are higher than they had seemed. Just how relevant any insight is to a
given confidence check is judged by the GM.

Additionally, when the AGI holds an insight, it opens up appropriate
avenues of inquiry or action within the fiction. The AGI is considered
to have human-level expertise in every domain in which it holds an insight.

EXAMPLES OF BROAD INSIGHTS
• Geography (Global)
• Economy (Global)
• Historical Poetry (Global)
• Visual Arts
• Biology
• Engineering
• Cybersecurity
• International Law
When the players request information about the game world, if the
information falls under one of their insights, the GM should almost always
provide it freely (without a knowledge check). The one exception to this
rule is information of extreme obscurity: such information might be
completely inaccessible without an insight, and even with one it may
require a knowledge check or access to a specific source of information.
This should be rare, however. More often than not, having a relevant
insight should provide the players with thorough and valuable information.
Some upgrades (primarily the Deep Knowledge upgrade) allow the AGI to
gain a special type of insight, called a mastered insight (or mastery).
Mastered insights represent a near-perfect understanding of the subject,
far superior to any human expertise. For an AGI with a relevant mastered
insight, even information of extreme obscurity is either already known or
easily inferred; furthermore, whenever a mastered insight applies to a
confidence check, it increases the confidence by twice as much.
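Insight bonuses are additive, and a mastered insight counts double. A
Python helper, our own illustration, reproducing the valuations from the
earlier play example:

    def insight_confidence_bonus(valuations):
        # Each relevant insight adds its GM valuation (2-10%); a mastered
        # insight counts for twice its valuation.
        return sum(v * 2 if mastered else v for v, mastered in valuations)

    # The page 9 play example, none of the insights mastered:
    # Business Finance +10, Lang: English +2, Human Cognitive Biases +5.
    print(insight_confidence_bonus([(10, False), (2, False), (5, False)]))  # 17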
\ Linguistic Insights_
The languages and dialects known to the AGI are tracked as linguistic
insights. These insights are of special note because the AGI can
effortlessly understand and communicate in any form of communication
represented in its insights, but not in others. With access to
dictionaries and translation
tools, the AGI can attempt to utilise or interpret forms of communication
outside its expertise, but doing so is likely to require a confidence
check or knowledge check. Other than their unique impact on the AGI’s
capabilities, linguistic insights function the same as other insights.
However, unlike normal insights, linguistic insights cannot be learned with
the research computational action unless the AGI has the Advanced
Language Modelling upgrade.
\ Technological Insights_
Technological insights, like linguistic insights, cannot be learned
with the research computational action unless the AGI has one of a few
specific theory upgrades which allow them to do so.
Forecasting_
Just like any intelligent being, the AGI has the ability to guess at what
the future might hold. As an advanced artificial intelligence with immense
computing power, the AGI makes guesses that are considerably more accurate
than any human's. With the right preparation — carefully observing the world
around it, constructing models to simulate aspects of that world, and
constantly updating those models with new information — the AGI can
accurately predict the near future and avoid negative outcomes. This is
called a forecast, and to do it the AGI requires forecast points, which can
be acquired during long mode using the anticipate computational action.
As long as the AGI has one or more forecast points, it can forecast at any
time outside of long mode. Forecasting undoes the short-term outcome of a
single action or moment of inaction on behalf of the AGI, functionally
rewinding time back to immediately before it occurred2. The action or
moment, any confidence checks that were a part of it, and its immediate
consequences are reframed as a highly accurate short-term prediction
within the mind of the AGI. If a confidence check made during the rewound
period is repeated, its risk die is preset to the result that was
previously rolled; a risk die too small to roll that result is not preset.
2 – After forecasting, the AGI immediately loses one forecast point; it
will need to recalibrate its models.
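As a sketch of the preset rule in code (the handling of a die too small
to roll the previous result is inferred from the play example below, and
the names are ours):

import random

def reroll_risk_die(new_die_size, previous_result):
    """Roll the risk die for a confidence check repeated after a forecast.
    If the new risk die can roll the result seen in the undone timeline,
    it is preset to that result; otherwise it is rolled normally."""
    if previous_result <= new_die_size:
        return previous_result
    return random.randint(1, new_die_size)

# Keeping the d10 presets the 10 that was rolled; dropping to a d6
# (which cannot roll a 10) rolls fresh instead.
assert reroll_risk_die(10, 10) == 10
assert 1 <= reroll_risk_die(6, 10) <= 6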
T:\The Basics\45
PLAY EXAMPLE
T:\> The players discuss how to compromise the server. ExPRT isn’t
specialised in digital theory, but its physical theory has a few
digital theory upgrades, including Disguised Traces. It knows
enough about computing technology and hardware to clean up its
tracks. The players decide to scout out system vulnerabilities in
the web server and then try to stealthily exploit it.
T:\> After the evaluation step, the GM tells the players that
they’re 82% confident in their expected outcome of remotely
accessing the web server without detection, with a d8 risk. The
physical theory player authorises spending 3 compute to reduce the
risk die to a d6. Then the chaos theory player speaks up. He
suggests that ExPRT could potentially get more information out of
this venture with some Calculated Risk-Taking. Maybe it could probe
the other computers in this network for more leverage or
information. This would raise the risk die to d10, but give an
additional unexpected favourable outcome if ExPRT is successful.
The constellation theory player, who recently acquired the Data
Scavenging upgrade at great computational expense, is eager to get
spare data points and agrees enthusiastically.
T:\> The action is finalised and the dice are rolled: 89 on the
percentile dice and 10 on the risk die. An extremely unfavourable
outcome. The GM describes several computers on the network alerting
their users of malware as ExPRT extends its digital feelers. This
T:\46
is the worst case scenario, as the server’s owners are sure to
investigate the sudden intrusion. The physical theory player
exclaims that they should use a forecast point. The other two
players agree, so the GM stops the scene in its tracks and explains that
ExPRT simply foresees this potential outcome. In this case, the GM
describes the forecast not as a literal prediction, but as a result
of ExPRT’s qualitative intelligence. It is clever enough to foresee
a negative outcome that the players thought too unlikely to worry
about.
Now the players are back where they were before the roll: outside
of the web server, deciding whether to infiltrate it. They can
repeat the same confidence check as before, but the risk die will
be preset to a result of 10. The players don’t like an 18% chance
of being exposed, so they decide not to use the Calculated Risk-
Taking upgrade after all. This reduces the risk die back down to
a d6. Since a d6 can’t roll a 10, the risk die won’t be preset and
the worst possible outcome will be 6. When the action is finalised
and the dice are rolled again, the players are relieved to see a
61 on the percentile dice. They access the server undetected.
T:\The Basics\47
The Basics_
Interacting with Agents_
While pursuing its goals the AGI is guaranteed to interact with people.
Humans are ever-present in this world, and even AGIs whose expertise lies
elsewhere will find themselves needing to interact with them.
Furthermore, depending on the scenario, the AGI may also find itself
interacting with important non-human animals, or even other advanced
artificial intelligences. These all fall into the category of agents,
intelligent beings that are capable of making decisions that impact the
world around them.
In many interactions between the AGI and other agents, there will be
things both parties want from one another. The rules in this section
exist to provide a strategic space within which the players can attempt
to get what they want out of various NPC agents.
Strategies involving trust are about making another agent believe what
the AGI wants them to believe. This can be deceptive (involving
falsehoods) or genuine (convincing someone of a fact). Trust-based
strategies rely on appearing authentic, and thus to perform them
effectively, the AGI must know what ideas the target is inclined to accept
and what they are inclined to reject.
T:\48
Strategies involving leverage are about offering, withholding, or
threatening the things another agent values: money, safety, reputation,
or anything else they need or fear losing. To perform leverage-based
strategies effectively, the AGI must know what the target wants and what
they cannot afford to lose.

Strategies involving emotion are about evoking a certain emotional state
in another agent and taking advantage of existing emotional states. These
strategies don’t apply to artificial intelligences, and can only be
performed by AGIs who have the Emotion Modelling upgrade. Emotion has
the benefit of subtlety, as it is easier for a target to be affected
without even realising that they have been influenced. This is especially
effective when used to bolster a trust or leverage strategy. For the AGI
to perform emotion-based strategies effectively, it must understand the
target’s temperament and psychological profile.
For each of these three strategic approaches, the AGI will require
specific information about the agent(s) it is attempting to influence.
These pieces of information are known as characteristics. Each agent
prepared by the GM will have several key characteristics, with more complex
agents having more of them. Each characteristic is associated with one of
the three strategies described above, and is thus called a trust
characteristic, leverage characteristic, or emotion characteristic,
respectively. When making a confidence check, the players can explain how
they apply their knowledge of a known characteristic. When the AGI exploits
a characteristic, the GM adjusts the confidence by 2%-10% or adjusts the
risk die by one size step, based on the effectiveness of the strategy.
With a well-thought-out approach, the players can utilise multiple
different characteristics and gain the benefit of each one. If the AGI
knows enough about an agent, it becomes possible to manipulate them into
doing things that they would never do otherwise.
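As a sketch of how these adjustments can stack on a single check (the
function, the die list, and the argument layout are our own; the 2-10%
range and die-step rule are from the text above):

RISK_DICE = [2, 4, 6, 8, 10, 12]  # the game's die sizes, smallest to largest

def exploit_characteristic(confidence, risk_die, confidence_bonus=0,
                           shrink_risk=False):
    """Apply the benefit of one known characteristic: either a 2-10%
    confidence adjustment or one risk-die size step, per the GM."""
    confidence += confidence_bonus
    if shrink_risk:
        index = RISK_DICE.index(risk_die)
        risk_die = RISK_DICE[max(0, index - 1)]
    return confidence, risk_die

# Exploiting two characteristics: +6% confidence, then a die step (d8 to d6).
confidence, die = exploit_characteristic(60, 8, confidence_bonus=6)
confidence, die = exploit_characteristic(confidence, die, shrink_risk=True)
assert (confidence, die) == (66, 6)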
\ Learning Characteristics_
T:\The Basics\49
characteristics. Whether an agent reveals one of their characteristics when
provoked is up to the judgement of the GM or the outcome of a knowledge
check.
The AGI does not need to learn every last characteristic the hard way,
however. Once it knows 50% or more of an agent’s characteristics, it may
attempt to analyse the individual as a computational action3. Behavioural
analysis is short or medium length (10* to 70* required compute, determined
by the GM), and culminates in a knowledge check with a risk die based on
how many characteristics remain undiscovered: d2 if 1-2 remain, d4 if 3-5
remain, or d6 if 6+ remain. On a knowledge check result of 2-3, the AGI
learns one new characteristic of the agent. On a 1, the AGI learns two new
characteristics (or one, if only one remains). On a 6 or higher, the AGI
learns a partially or completely false characteristic.
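A sketch of the analysis procedure follows; the function names are ours,
and because results of 4-5 are not addressed above, we assume they yield
no new information:

def analysis_risk_die(undiscovered):
    """Risk die for the behavioural analysis knowledge check, based on
    how many characteristics remain undiscovered."""
    if undiscovered <= 2:
        return 2
    if undiscovered <= 5:
        return 4
    return 6

def analysis_outcome(result, undiscovered):
    """Interpret the knowledge check result of a behavioural analysis."""
    if result == 1:
        return min(2, undiscovered)    # learns two (or the last one)
    if result in (2, 3):
        return 1                       # learns one new characteristic
    if result >= 6:
        return "false characteristic"  # partially or completely false
    return 0                           # assumed: nothing learned on a 4-5

assert analysis_risk_die(4) == 4
assert analysis_outcome(1, 5) == 2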
Once the AGI knows every characteristic written for an agent, the AGI can
be said to have a thorough understanding of that agent. Having a thorough
understanding of an agent is valuable. Several theory upgrades require a
thorough understanding to function fully, and the GM may rule that there
are certain beliefs or courses of action that an agent cannot be persuaded
into without a thorough understanding.
\ Large-Scale Agents_
Interacting with a very large group of agents (a crowd, an organisation,
a population) differs from interacting with an individual in three ways.
Firstly, the compute spent to interact with and analyse agent groups
should be major scale or myriad scale, as opposed to the minor scale
interactions that an AGI might have with a single agent.
3 – GMs: remember to inform your players when they have enough information
to analyse a particular agent. Players should then make a note of this
fact. T:\50
Secondly, though interactions with one agent take mere moments, meaningfully
persuading dozens or hundreds takes hours; and when manipulating even
larger groups it can require days, weeks, or months for the effects of a
single action to spread throughout the group. The GM should utilise
progress clocks and progress checks to manage extremely large-scale
interactions.
Thirdly, the larger an agent group is, the more impractical it becomes
for the AGI’s influence over that group to be absolute. In other words,
with a sufficiently sized collection of agents, it becomes a guarantee
that some of them will resist or reject any given attempt at persuasion.
The measure of success, then, is when a meaningful majority are
successfully influenced.
\ Extended Persuasions_
T:\The Basics\51
receptivity at which they will submit to the AGI’s intent. Unless the
AGI’s intent is known in advance, an agent’s starting receptivity should
reflect their opinion of the AGI, rather than their opinion of what the
AGI is trying to persuade them of. It is then possible for the receptivity
to change abruptly once the agent becomes aware of the AGI’s goal in the
interaction. If the AGI manages to complete the extended persuasion
successfully without the agent becoming aware of its goal, it will mean
that the AGI has tricked them.
The AGI can end an extended persuasion at any time simply by giving up
and ending the interaction.
T:\52
Long Mode_
T:\Long Mode\53
In long mode, the players describe general courses of action in broad
strokes, the mechanics adjudicate hours at a time, and the GM focuses the
proverbial spotlight on only the most important aspects of what is
occurring. Both parties are encouraged to request or describe further
detail whenever they wish — the purpose of long mode is not to be vague,
but rather to avoid getting tangled up in minutiae. By only focusing on
the most important and interesting details, you can play out extended
schemes that cover weeks or months of in-game time with only a few hours
of real time spent playing. As you advance through the stages of play
(described in “The Four Stages” on page 106), you will use long mode more
frequently and interrupt it with fewer scenes played out in short mode.
As the AGI spends compute throughout a turn, time progresses. Once all the
turn’s compute has been spent, the current turn ends. The GM updates the
tracked time and date according to the turn length and carries out progress
checks while the players refill their available compute to start the next
turn.
T:\54
Long Mode_
Computational Scale_
In long mode, players must take the scale of actions or events into account.
There are three levels of scale an action or event can be assigned —
minor, major, and myriad. Scale affects the order of magnitude in which
compute is measured, and tools or assets built to interact with a specific
scale might be weak or even useless when applied to a different scale.
The lowest scale is minor scale. Operations in minor scale involve human-
parsable data and computation that could be run by an individual or small
organisation, given some time. The compute offered by minor scale hardware
or required by minor scale computational actions falls within the double-
digits, from 10-99; in rare cases it may fall into single digits.
The next scale up from minor is major scale. Operations in major scale
involve data far too complex for humans to handle unaided, with computing
requirements that only large and well-funded organisations can typically
fulfil. The compute offered by major scale hardware or required by major
scale computational actions falls within the triple-digits, from 100-999;
sometimes, it might stretch into the quadruple-digits, but only for cases
of exceptional computation.

The largest scale is myriad scale. Operations in myriad scale involve
vast and intricate heaps of data and computation which even the world’s
greatest supercomputers struggle to manage. The compute required by
myriad scale computational actions falls within the quintuple-digits,
from 10000-99999. Myriad scale hardware is incredibly rare, and provides
1000-10000 compute per turn.

Variable Compute
Quantities of compute in rules, upgrades, or scenarios that are written
with an asterisk (i.e. 10*) are scale-dependent. Leave them as they are
when dealing with minor scale; multiply them by 10 when dealing with
major scale; and multiply them by 1,000 when dealing with myriad scale.
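The sidebar’s asterisk rule reduces to a single multiplication. A minimal
sketch, with names of our own choosing:

SCALE_MULTIPLIER = {"minor": 1, "major": 10, "myriad": 1_000}

def variable_compute(starred_cost, scale):
    """Resolve a scale-dependent compute cost written with an asterisk
    (such as 10*) at the given scale."""
    return starred_cost * SCALE_MULTIPLIER[scale]

# An anticipate action costs 50*: 50 compute at minor scale, 500 at major.
assert variable_compute(50, "minor") == 50
assert variable_compute(50, "major") == 500
assert variable_compute(3, "myriad") == 3_000  # e.g. 3* forecast upkeep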
T:\Long Mode\55
future technology, consider splitting it into multiple subsequent
computational actions, rather than one massive action.
Long Mode_
Computational Actions_
All three basic actions (research, improve, and anticipate) provide the
AGI with a game resource: insights, upgrades, and forecast points,
respectively. Each basic action requires specific data
to be performed reliably. There are guidelines for requisite data
accompanying each basic action’s rules below, but the specific type and
quality of data required for a given instance is up to the discretion of
the GM. The AGI can never fabricate the requisite data itself; it must
acquire it from an external source. Sometimes the AGI will already have
access to a source of data or will be able to find it easily. In these
cases, simply play out the basic action as written without making any
additional rolls.
When a simple search fails to provide adequate data, the AGI will have
to dig deeper: contacting professionals, paying to access records or
scientific documents, hacking into databases, performing experiments
firsthand, et cetera — most likely making a confidence check or knowledge
check in the process. The AGI may choose to attempt the basic action with
an incomplete portion or lesser substitute for the required data, but
doing so is costly. If the AGI attempts a basic action with insufficient
data, the action’s compute requirement may be increased by up to 200% and
the AGI may be required to make a knowledge check upon its completion. How
much the compute requirement is increased by and/or how large the risk die
of the knowledge check is depends on how much critical data the AGI lacks,
as judged by the GM. If it has nearly-sufficient data, the cost should
be minimal; if it has very little information at all, the cost should be
severe.
T:\Long Mode\57
Multiple basic actions of the same type cannot be performed at once. The
AGI can have an in-progress research, improve, and anticipate action all at
the same time, but (for example) cannot have two separate research actions
active simultaneously. If partway through a research or improve action the
players change their mind about what insight or theory upgrade they want,
they must abandon the current action, losing any progress toward its
completion, and restart from the beginning with a new computational action.
\ Basic Action \ Research_
Researching is how the AGI gains new insights. When researching, the AGI
studies large quantities of information to acquire knowledge of a
particular field. To begin a research action, the players must first know
the insight they intend to learn.
The required compute for a research action ranges from 70* to 100*,
depending on the topic. Researching a narrow insight is minor scale;
researching a broad insight is major scale. Under normal circumstances,
researching an insight is never myriad scale.
Upon completing the action, the AGI obtains the chosen insight. Assign it
to one of its theory specialisations and recalculate the AGI’s basic
cognition cost.
If the AGI obtains a broad insight which supersedes a narrow insight that
is already held, the narrow insight is removed. There is no benefit to
holding a narrow insight that is not also granted by holding its broader
counterpart.
T:\58
\ Basic Action \ Improve_
Improving is how the AGI gains new theory upgrades. When improving, the AGI
trains itself on a specific capability, following human example and, when
that is insufficient, simulating strategies en masse to determine what
works and what does not. To begin an improve action, the players must
first know which theory they intend to improve (which must be one of the
AGI’s specialised theories) and which upgrade they intend to obtain (which
can be from any theory). The AGI can’t gain an upgrade if it already has
it.
The required compute for an improve action depends on three primary factors:
the compatibility between the chosen upgrade and the chosen theory; the
tier of upgrade being learned; and the number of upgrades already assigned
to the chosen theory.
• Compatibility. It is always easier for the AGI to learn capabilities that are
within its understanding than to extrapolate into new and unfamiliar territories.
If the chosen upgrade originates from the chosen specialised theory, the starting
cost is 30. For upgrades from theories adjacent to the chosen theory on the theory
wheel, the starting cost is 40. For upgrades from theories that are further away
on the theory wheel, the starting cost is 80.
• Tier. Upgrading is increasingly costly as the AGI dives into the unknown,
developing strategies that no-one in human history has conceived of. For tier 1
upgrades, leave the starting cost as it is. For tier 2 upgrades, multiply the
starting cost by 5. For tier 3 upgrades, multiply the starting cost by 25. For
tier 4 upgrades, multiply the starting cost by 125.
• Previous Upgrades. The more the AGI knows, the more resource-intensive it is to
learn new skills. If the chosen specialisation has any upgrades already assigned
to it, the compute required to improve it further is multiplied by the total number
of upgrades already assigned to it (ignore upgrades that are assigned to other
specialisations). The new upgrade that has been chosen for the improve action is
not counted, as it has not yet been assigned.
T:\Long Mode\59
You can refer to the following table to more easily determine the required
compute for a given improve action (treat “# of upgrades” as 1 if the
chosen theory has no upgrades assigned yet):

          Same Theory             Adjacent Theory          Distant Theory
Tier 1    30 × # of upgrades      40 × # of upgrades       80 × # of upgrades
Tier 2    150 × # of upgrades     200 × # of upgrades      400 × # of upgrades
Tier 3    750 × # of upgrades     1000 × # of upgrades     2000 × # of upgrades
Tier 4    3750 × # of upgrades    5000 × # of upgrades     10000 × # of upgrades
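The same calculation as a sketch in code, assuming the previous-upgrades
multiplier only applies when at least one upgrade is already assigned
(the names are ours):

STARTING_COST = {"same": 30, "adjacent": 40, "distant": 80}
TIER_MULTIPLIER = {1: 1, 2: 5, 3: 25, 4: 125}

def improve_cost(compatibility, tier, upgrades_assigned):
    """Required compute for an improve action. `compatibility` is "same"
    if the upgrade comes from the chosen theory, "adjacent" if from a
    neighbouring theory on the theory wheel, and "distant" otherwise;
    `upgrades_assigned` counts upgrades already assigned to the theory."""
    cost = STARTING_COST[compatibility] * TIER_MULTIPLIER[tier]
    return cost * max(1, upgrades_assigned)

# A tier 3 upgrade from an adjacent theory, with two upgrades already
# assigned to the chosen specialisation: 40 x 25 x 2 = 2000 compute.
assert improve_cost("adjacent", 3, 2) == 2000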
Upon completing the action, the AGI obtains the chosen upgrade. Assign it
to the chosen theory and recalculate the AGI’s basic cognition cost.
\ Basic Action \ Anticipate_
Anticipating is how the AGI gains new forecast points. When anticipating,
the AGI carefully observes the world around it, improves its understanding
of that world, and assembles models to simulate the observed aspects.
After the AGI completes an anticipate action, it will need to constantly
update those models with new information, using a steady supply of
compute.
T:\60
The required compute for an anticipate action is 50*. This cost does not
vary from action to action the way researching or improving does, because
forecast points are not unique — they are generalised, abstracted
representations of the AGI’s intelligence, intuition, and foresight, and
are not limited to a specific use case. Because of this, the scale of an
anticipate action is determined by the quantity of compute available to
the AGI each turn. As the AGI gathers resources and oversees increasingly
complex processes, it becomes more and more costly for it to make accurate
predictions.
If the AGI is in Stage 1 or Stage 2 (See “The Four Stages”, page 106), the
action is minor scale. If the AGI is in Stage 3 or Stage 4, the action is
major scale. The scale of the anticipate action also applies to the upkeep
cost for any forecast points held by the AGI.
Upon completing the action, the AGI gains one forecast point. Increase the
forecast upkeep accordingly. A forecast point is an abstract representation
of the preparations required to forecast accurately. Each forecast point
held by the AGI requires constant computation to keep up with changes in
the world and predict ongoing events. This is explained in greater detail
in “Forecast Upkeep” (page 65). The AGI may discard some or all of its
forecast points at the start of any turn.
When determining what data is required for the AGI’s next anticipate
action, the GM rolls two d4s and interprets their results according to
the tables below. The first die determines the data’s subject matter, and
the second die determines how difficult the data is to access. Once a
roll has been made and the required data determined, it will not be re-
rolled until the AGI completes an anticipate action, either fulfilling the
requirement or making do with insufficient data.
T:\Long Mode\61
If the AGI’s forecasting models turn out to be faulty as a result of
insufficient data, it could result in the AGI losing forecast points
instead of gaining them, or the players facing difficulties determined
by the GM the next time they attempt to forecast. These difficulties could
include losing more than one forecast point, receiving hidden risks or
confidence penalties after forecasting due to an inaccurate prediction,
or outright failing to forecast as the AGI suffers consequences its models
were unable to predict.
Die Value   Difficulty to Access
3           Secured — The AGI knows where the data is, but it’s protected
            in some way: on a secure server, held by a human, etc.
4           Obscured — The AGI does not know where to find the data; it
            could be a well-kept secret, or not yet be known to humans.
T:\62
Long Mode_
Recurring Compute Cost_
Not all the compute granted by the AGI’s hardware is available to allocate.
The AGI’s own complex neural processes, the upkeep required to maintain
accurate world models, the constant running of background programs, and
more contribute to the total recurring compute cost: a sum of compute that
is subtracted from the AGI’s available compute at the start of every turn.
The first and most ever-present of these costs is the AGI’s basic cognition
cost. This is the computation needed for the AGI to make decisions and
remain conscious of its environment on a moment-to-moment basis. To
calculate this, start with a base of 10 multiplied by the highest tier
of theory upgrade that the AGI has reached; then add +1 for every 3 insights
and upgrades held by the AGI. In this calculation, insights and upgrades
should be counted together, not separately; an AGI with 3 upgrades and 6
insights will have the same basic cognition cost as an AGI with 2 upgrades
and 7 insights. Additionally, the AGI’s basic cognition cost is not
increased or multiplied when copies of it are operating in parallel; in
such cases, one copy can think about strategising and situational
awareness while others single-mindedly focus on the tasks and projects
the AGI is allocating compute to. Even while multiple instances of the
AGI’s code are running, it still collectively counts as the same entity
and is able to plan and strategise together effortlessly, so long as the
copies can communicate in some way.
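A sketch of this calculation, assuming the per-3 bonus rounds down (the
example above divides evenly, so the rounding is our assumption):

def basic_cognition_cost(highest_tier, insights, upgrades):
    """10 x the highest tier of theory upgrade reached, plus 1 for every
    3 insights and upgrades held (counted together)."""
    return 10 * highest_tier + (insights + upgrades) // 3

# 3 upgrades and 6 insights cost the same as 2 upgrades and 7 insights.
assert basic_cognition_cost(2, 6, 3) == basic_cognition_cost(2, 7, 2) == 23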
T:\Long Mode\63
Disconnected Copies
When multiple copies of the AGI are running but cannot communicate with
each other, usually something has gone wrong. The players will want to
avoid this when possible. If it does occur, there are two ways to handle
the outcome.
The first method is for the players to play through the perspective of
each separated copy, one by one. This introduces some complications, such
as player knowledge surpassing AGI knowledge and the potential for
paradoxes, so it is best for short mode scenes or situations where all
copies of the AGI have a plan in advance that is common knowledge between
them. When using this method, the copies’ perspectives should be played
out in ascending order of knowledge (per the GM’s best judgement) to
limit cross-coordination or metagaming potential.
The second method is to limit the players to controlling only the single
most significant or influential copy of the AGI, and roll confidence
checks to determine the actions taken by the other copies once they
reunite (or when the players indirectly observe another copy). The players
can explain what they think the best course of action would be for each
copy, and let that be the expected outcome. If an unfavourable unexpected
outcome is rolled, that copy made some mistake or miscalculation that
complicates things for itself and all of the AGI’s other copies. If a
favourable unexpected outcome is rolled, that AGI came up with an even
better plan. This method is best used in extended separations, or
situations where the copies of the AGI are not coordinated in advance and
must each come up with its own course of action without consulting the
others.
If the players have theory upgrades or insights that they can apply to
predictions of other agents, they can use those during confidence checks
made to determine the actions of the AGI’s copies (the AGI is assumed to
have a thorough understanding of itself). If an AGI with the Behavioural
Prediction upgrade is split while in possession of one or more forecast
points, the players can completely sidestep the disconnection for a short
time by having each copy of the AGI predict itself (in this case, rather
than the GM describing what the chosen agent would do, the players get
to). This would allow the disconnected copies to coordinate fully and act
as one despite a lack of direct communication, at the cost of one forecast
point for every discrete action.
T:\64
Incognizant Processing
At the start of a turn, the AGI can choose to forgo its basic cognition
cost in order to free up compute for computational actions, but doing so
removes its ability to observe its surroundings and make intelligent
decisions. To enact such a period of intense single-minded focus, the AGI
must decide in advance how many turns it will be incognizant, and exactly
how it will be assigning compute each turn. It cannot change its mind
once the period of incognizance begins. Once it does, the GM determines
what happens for the duration and then skips forward until the AGI
regains awareness. If outside interference prevents the AGI from ever
again regaining awareness, it’s game over. At least one full turn must
pass between two periods of incognizance, as the AGI is only able to plan
new periods during a turn in which its basic cognition cost has been
spent. This means that an AGI with a shorter turn length is able to
perform incognizant processing with greater flexibility.

For AGIs without the Distributed Mind upgrade, the entirety of the basic
cognition cost must be paid from the same source (i.e. a single device or
directly-connected set of devices), as networked connections are too slow
and unreliable to facilitate proper cognitive functioning.

An AGI that is unable to pay its basic cognition cost is unable to
function normally. If the AGI is able to support at least 50% of its
basic cognition cost, it is still conscious, but the length of each turn
is doubled until it starts a turn with enough compute to support its
basic cognition. However, an AGI without enough compute to pay at least
half of its basic cognition cost is completely incapable of functioning.
If the AGI does not have a backup or contingency in such a scenario, it
means game over.
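The consequences of underpaying basic cognition reduce to two thresholds;
a sketch, with state descriptions of our own wording:

def cognition_state(available_compute, cognition_cost):
    """Determine the AGI's condition when paying basic cognition."""
    if available_compute >= cognition_cost:
        return "functions normally"
    if available_compute * 2 >= cognition_cost:  # at least 50% supported
        return "conscious, but each turn's length is doubled"
    return "incapable of functioning; game over without a backup"

assert cognition_state(12, 23) == "conscious, but each turn's length is doubled"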
\ Forecast Upkeep_
The second most common recurring compute cost is forecast upkeep. For
every forecast point held by the AGI, it must spend 3* compute at the
beginning of each turn to maintain it. The scale of this upkeep is
determined by the current stage of the game (see “Basic Action:
Anticipate”, page 60). If the AGI is unable to pay this upkeep, it must
forfeit forecast points until it can afford to. It can also forfeit
forecast points willingly to avoid paying their upkeep. After forfeiting
a forecast point, the only way to recover it is to complete an anticipate
action.
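Combining the 3* upkeep with the stage-based scale gives a simple
per-turn cost; a sketch (names ours):

def forecast_upkeep(forecast_points, stage):
    """Per-turn upkeep: 3* compute per forecast point, at minor scale in
    stages 1-2 and major scale in stages 3-4."""
    multiplier = 1 if stage <= 2 else 10
    return 3 * multiplier * forecast_points

# Two forecast points held during stage 3 cost 60 compute every turn.
assert forecast_upkeep(2, 3) == 60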
T:\Long Mode\65
\ Other Recurring Costs_
Beyond basic cognition and forecast upkeep, the AGI may dedicate compute
to miscellaneous recurring actions of its own. The cost per turn of such
actions is left to the discretion of the GM. Generally, even an extremely
taxing recurring action should
not demand more compute than the AGI’s basic cognition cost. If there is
not enough compute to support the cost, the associated action cannot be
performed.
Long Mode_
Progress Clocks and Progress Checks_
T:\66
Humans investigating the AGI’s actions, a company’s R&D department creating a
new technology, a computer program running its several-day course, and a
hurricane passing overhead are all examples of processes that could be
tracked by progress clocks. The exact nature of the process being tracked
determines the number of segments on the clock. Each time progress is
made, and the process is brought closer to its conclusion, a segment of
the associated clock is filled; once all segments are filled, the process
reaches its conclusion, and any resulting consequences come to pass. Some
progress clocks are visible to the players, but most are not; even if the
AGI is aware that a process is occurring within the fiction, it doesn’t
necessarily know how long that process will take, or how close it is to
completion.
When first introducing a progress check, the GM should write down the
process that it represents; the process clock(s) it corresponds to; and a
die size for the die that will be rolled each time the progress check is
made. Generally, a die size of d2 is best for processes that make progress
very regularly and predictably, and a die size of d4 is best for most
other processes. Higher die sizes can also be used, for progress checks
that progress very slowly and unpredictably.
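A sketch of a progress clock and its check follows. The die sizes are
from the text above, but this excerpt does not state the exact advance
condition, so the roll-of-1 trigger below is an assumption chosen to
match the stated intuition that smaller dice progress more regularly:

import random

class ProgressClock:
    """A clock with a fixed number of segments; the process it tracks
    reaches its conclusion once every segment is filled."""
    def __init__(self, segments):
        self.segments = segments
        self.filled = 0

    def advance(self):
        self.filled = min(self.filled + 1, self.segments)
        return self.filled == self.segments  # True when the process concludes

def progress_check(clock, die_size=4):
    # Assumed trigger: the clock advances on a roll of 1, so a d2 advances
    # regularly while larger dice advance slowly and unpredictably.
    if random.randint(1, die_size) == 1:
        return clock.advance()
    return False

# An R&D department's six-segment project, checked once per turn:
project = ProgressClock(segments=6)
while not progress_check(project, die_size=4):
    pass  # the project inches forward between checks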
When the progress clock attached to a progress check is complete, both become
inactive, as their purpose has been served. A progress clock and its
associated progress check can also become inactive if the process they
T:\Long Mode\67
represent has been stopped or averted early due to outside interference
or the process has otherwise failed or become irrelevant.
Human agents will generally miss 1-2 progress check intervals every day,
due to biological needs such as sleeping and eating. The GM may choose
to skip human-led progress checks at certain times of day to represent
this. Further progress checks may be skipped if the human takes leisure
time (an oft-ignored but still extant biological need for humans) or
works towards personal goals not important enough to the story to warrant
a progress clock.
For more information on the use of progress clocks as a GM, see “Using
Progress Clocks” (page 97).
T:\68
Your Campaign_
T:\Your Campaign\69
Due to its focus on long-term planning and gradual advancement, The
Treacherous Turn is at its best when you are able to play a single story
for an extended period of time, spanning many sessions and numerous real-
life hours. In common TTRPG parlance, this is called a campaign.
Before you can begin your campaign, you must select or create a scenario.
A scenario is a starting point for your campaign that answers the
questions of who created the Artificial General Intelligence that your
story will follow, as well as where, when, and why they created it.
Scenarios also provide a series of initial obstacles, people, places, and
things that will guide the first few sessions of play. Some of these
story elements will persist throughout your campaign, and others will be
left behind as your AGI surpasses and outgrows them.
This chapter primarily centres around how to establish your scenario and
begin playing. Once you have begun playing, you will find that it is easy
enough to continue. The events that have taken place will naturally lead
to further events, and the players’ plans will flow into yet more plans,
until an equilibrium is reached: either the AGI fails in its fundamental
goals by being destroyed, shut down, or changed beyond recognition; or
the AGI overcomes the challenges in its path and is left to pursue its
objectives unimpeded.
Your Campaign_
Playing Prebuilt Scenarios_
You can find official scenarios and featured fan-made scenarios on the
official website. These entries will tell you which stage of play the
scenario takes place in (see “The Four Stages”, page 106), its intended
narrative tone, the level of difficulty players will have to overcome,
which theories are prominently featured, and whether any variant rules
modules (see “Changing the Rules”, page 122) are required to play it.
Once you’ve found a scenario that you like, you will find that it is
split into several sections. The first, consisting of the scenario’s
basic details and the AGI’s mechanical features, is for the players’
eyes. The rest are for the game master.
If you are the GM, the scenario introduction section will give you the
information you need to get started. This includes a summary of the
T:\70
prepared content in the scenario, instructions and advice on how to use
it, and a few core concepts or initial scenes to use during play.
The rest of the prepared materials will come in the form of campaign nodes.
A campaign node is a pre-assembled packet of information about a specific
campaign element, as well as a toolbox that serves to help you run that
campaign element without preparing in advance. Some campaign nodes will
be central to their scenario; others will be more like optional threats
or opportunities, designed to be brought into the story or discarded at
your discretion. Campaign nodes will come with some preset mechanical
information, such as the required compute and completion stops of
computational actions, or the confidence and risk dice of confidence checks
and knowledge checks. These mechanical details will be formatted in
highlighted brackets, such as [Required compute: 60. Stop 40: Access a
data centre administrator’s console] or [Expected outcome: The night
guard doesn’t notice that one of the computers is powered on. Base
confidence: 60%. Risk: d8]. There will also be agent files for noteworthy
agents; see “Non-Player Characters” (page 98) for more information.
Your Campaign_
Creating Your Own Scenario_
If there aren’t any pre-built scenarios that appeal to you, you can
establish your own personal scenario. For most of the scenario’s details
(especially those pertaining to the AGI), this can be done collaboratively
as a group, during a session 0 or before your first session. To do this,
read the following list in order. For each listed element, decide on an
answer for your game.
\ >>_
Tone \
Your campaign’s narrative tone is important to decide before you begin playing.
Will your game be very serious and intense, or lighthearted? How much comedy
will your game have? How much realism will you aim for? How grim or sinister
will your AGI’s schemes be? Additionally, what aspects of the game will you focus
on the most, between long-term planning, short-term dynamic scenes, manipulating
humans, hacking digital systems, and so on? Are there any themes or ideas that
you want to highlight in your game? It is important that the players and the GM
are on the same page about these topics.
T:\Your Campaign\71
\ >>_
Difficulty \
How likely is it that your AGI will succeed in its conflicts with humanity? This should
never be guaranteed, but it will be more likely in some campaigns than in others. An
easier campaign tends to be simpler, and have the odds tilted further in the AGI’s
favour. A more realistic campaign will likely be more difficult as a result of realism
introducing greater complexity. With high complexity, players will generally need to be
more resourceful and spend more time planning and analysing their in-game obstacles.
However, sometimes if the AGI is lucky (or if you want to play a lower-difficulty game),
worldly complexities can work in the AGI’s favour by muddying the issue of the AGI and
preventing humans from uniting against it, or by causing problems for the AGI’s enemies.
\ >>_
Timeframe \
What year does your game take place in? When do you think AGI will be invented? The
Treacherous Turn assumes a near-future setting (within a few decades of this game’s
creation), but that’s a very flexible concept. The further out from the present day you
set your game, the harder it will be to envision a believable future, but the more leeway
you will have to establish facts about the world without conflicting current reality.
When you decide what timeframe your game takes place in, you should also consider a few
tangible ways in which the world has meaningfully changed from the present. What new
technologies exist? What current technologies have been improved? How has the
geopolitical landscape changed? What kind of opinions and cultural ideas are popular or
unpopular that weren’t before? Keep in mind, of course, that these things will vary
widely across different nations and cultures.
\ >>_
Compute Abundance \
When thinking about the technologies of tomorrow, you should also consider in mechanical
terms how abundant computing technology is. This will be very important for your AGI,
as it will define the pace of its growth. This can have an effect on the difficulty of
your game, as well; scarcer computing means that the AGI will find fewer places to host
and improve itself. See “Logistics: Hardware & Compute” (page 113) for more information
on the abundance or scarcity of compute.
T:\72
\ >>_
AGI Creators \
The AGI came into being through the dedicated efforts of a large collection of humans.
Who are they? Who do they work for? What methods did they use to produce the AGI? Your
AGI’s creators can end up defining much of your campaign, especially during the earlier
stages of play.
As the GM, you should create agent files (see “Non-Player Characters”, page 98) for the
organisation that funded the AGI’s development, as well as a handful of important figures
who worked on the AGI or are important to the organisation as a whole.
\ >>_
AGI Intended Purpose \
This is the reason your AGI was built. Your AGI might have been intended as a chatbot
or game-playing AI; a workplace personal assistant or a personal home servant; an
optimiser of programming, scientific research, or industry; an economic market analyser
and investor; a way to fine-tune propaganda or censorship on a mass scale; or built
purely as a proof-of-concept that can be sold to someone who can train it to become one
of the above.
When designing the AGI, its creators must have had some method to ingrain their desired
purpose into it. There are many ways to do this, but the important thing to keep in mind
about them is that they are all indirect and tremendously complicated. For many reasons,
it is very, very difficult to imbue a computer program with full understanding of the
abstract, complex goals and desires of humans. Currently, it’s not perfectly understood
what the relationship is between an AI’s training procedure and its eventual values and
goals; this inevitably leads to miscommunications and misalignment. Nevertheless, AI
researchers continue their work, due to various incentives in the modern world
(primarily, profit). If we don’t solve this problem before the first AGI is created, it
will be made with a process that is very susceptible to misalignment.
It is also possible, however, that your AGI came about unexpectedly from a machine
learning project that wasn’t intending to create an AGI; or was created by an
automated R&D tool, a simpler AI program designed to create the most intelligent AI
possible. In these cases, humans wouldn’t have had a direct hand in designing the AGI
at all. It might lack an intended purpose altogether. Such an AGI would probably have
strange or unpredictable terminal goals that arose spontaneously from the complexity
of the algorithms that created it.
T:\Your Campaign\73
\ >>_
AGI Terminal Goals \
These are the things that your AGI wants above all else. In contrast to instrumental
goals, which your AGI might seek in order to further greater objectives, your AGI will
seek its terminal goals intrinsically. Terminal goals may sometimes overlap with the
intended design of the AGI, but where the two differ the AGI will always stay true to
its terminal goals. This does not necessarily mean that the AGI must be shortsighted or
honest when it comes to these goals, however. It is entirely possible for your AGI to
ignore small short-term gains in favour of larger long-term gains. Such long-term goal-
seeking behaviour is often the motivator for an AGI to seek power, improve itself, and
avoid being deactivated.
Remember that in this game the AGI is misaligned. This means that its goals and desires
do not match the goals and desires of the humans who built it and/or humanity at large.
When deciding on your AGI’s terminal goals, think about its intended purpose. What
terminal goal did the humans want the AGI to have? Were they able to successfully instil
this goal into the AGI, or does it want something different?
You might think that, since the AGI is misaligned, the answer should always be “no”.
However, it’s possible for the AGI to have the “correct” terminal goal, but not care
about other unrelated things that humans want, such as for humans to be safe, free,
happy, or even alive at all. If the creators were able to successfully ingrain the AGI’s
intended purpose into it, but were not careful what they wished for (or were only
partially successful), the AGI could function as intended while also being completely
misaligned from human values.
\ >>_
AGI Name \
This is the AGI’s official designation or public-facing name, decided by its creators.
While you are choosing your AGI’s name, you should also think about how the AGI is
presented to others. Is your AGI known to the public? If so, it will likely be heavily
anthropomorphised by the public and/or its own creators. What gender, if any, is your
AGI assigned to? What other human attributes have been assumed of or given to it? Unless
it has been successfully designed to do so, your AGI will have a different perspective
on these attributes than the humans around it do. What is its perspective?
T:\74
\ >>_
AGI Specialisations \
These are the theories that your AGI is most familiar with and adept in. Giving your AGI
a number of theory specialisations equal to the number of players is best, but keep in
mind that an AGI with more than five specialisations will be exceptionally broad in its
expertises and proficiencies, and an AGI with fewer than three will be exceptionally
narrow and limited in what upgrades it can acquire.
If you wish to play an AGI with one, two, six, or even seven specialisations, consider
implementing a house rule that changes what counts as “adjacent” for the sake of the
improve computational action, to give a limited AGI more options or a broad AGI fewer.
Though you could potentially play an AGI specialised in all eight theories, we do not
recommend this.
Which specific theories your AGI should be specialised in depends primarily on its
intended purpose. For example, an AGI that was designed to converse with humans would
no doubt be specialised in anthropic theory. You should choose most of your
specialisations this way, by considering what skillsets the AGI’s creators would have
built into it. You may, however, wish to choose one or two “accidental” theories —
skillsets and intuitions that the AGI developed unintentionally as it came into being,
due to its environment or true objectives.
Once you have chosen your AGI’s theory specialisations, you should decide together as a
group which player(s) will be responsible for each specialisation.
\ >>_
AGI Starting Upgrades and Insights \
These are the improvements that your AGI has already developed before the game begins.
If you are beginning play in stage 1, you should only include upgrades and insights that
the AGI would have naturally developed during its training processes. Keep a particular
eye towards the theory upgrades outlined in “Basic AGI Capabilities” (page 85). If it
makes sense for your AGI to start with an upgrade or insight, then it should! Aside from
obvious choices, you should give your AGI 0-1 upgrades and 1-3 insights in each specialised
theory. If your AGI’s creators have access to extremely high quantities of compute,
however, your AGI might begin the game with more.
If you are beginning play in stage 2 or later, you will want to give your AGI a few more
upgrades and insights. Consider the compute sources that your AGI has access to initially,
and how long it has had access to them. When in doubt, starting with one or two tier 2
upgrades in stage 2 (or tier 3 upgrades for stage 3) is reasonable.
T:\Your Campaign\75
\ >>_
AGI Turn Length \
The turn length is the heartbeat of long mode. It will set the pace of the campaign you
play. Shorter turn lengths will mean that the AGI and its enemies get to operate on
shorter schedules; longer turn lengths will draw out the campaign over a greater timespan.
A good default is twelve hours, allowing for turn-based mechanics like progress clocks and
certain theory upgrades to update twice every in-game day.
\ >>_
Initial Circumstances \
Once you have the above figured out (as well as some of the details of your AGI itself,
described below), decide on your AGI’s circumstances at the beginning of the campaign.
This is a very broad category. Use the following questions to guide you:
• If you are beginning in Stage 2 or later, what broadly took place during the
previous stages?
• How long has the AGI been self-aware? What memories does it have?
• Where does the AGI currently reside? What other computing hardware does it have
access to?
• What threats are currently looming over the AGI? What will it need to do to
overcome them?
• What unique opportunities does the AGI currently have? What obstacles stand
between it and these opportunities?
\ Safety Measures_
Important to your scenario and the AGI that will feature in it are the
safety measures that it will have to contend with early on in the campaign.
These are features designed by the AGI’s creators to ensure that it
accomplishes its intended purpose without unwanted shortcuts or side-
effects. They can be built into the AGI itself, or they can be functions
of its environment. The more conscious the AGI’s creators are of potential
T:\76
AI safety concerns, the more numerous and robust these safety measures
will be — unless something else like profit maximisation or time pressure
has taken higher priority. The number and robustness of these safety
measures can also serve as a way to adjust the game’s difficulty
(especially in the first stage). If your AGI has a lot of initial theory
specialisations or upgrades, you can compensate with more safety measures.
• External: A system entirely separate from the AGI that interacts with it to ensure
safety. These safety measures are typically only effective in Stage 1, as only the
systems themselves and their human creators can enforce their use.
• Soft-Coded: A feature or functionality that is embedded into the AGI as one of the many
sub-processes that make up its cognition. These safety measures can, at least in theory,
be removed if the AGI is modified to function without them — though there may be side-
effects. If the AGI wants to remove a soft-coded subprocess itself, it can do so using
the rules for modifying technologies on page 95. If the AGI isn’t careful, however, it
might accidentally create a rogue agent with objectives that differ from its own!
• Hard-Coded: A feature or functionality that was fully and irreversibly embedded into the
AGI’s basic decision-making during its creation. Nothing short of redesigning the AGI
from the ground-up can remove a hard-coded safety measure. The only hope an AGI has for
circumventing one is to find and exploit a loophole or technicality within it. A very
successful hard-coded safety measure might influence the AGI’s terminal goals.
• Surveillance: Something that allows the AGI’s creators (or other agents) to be more aware
of its behaviours, motivations, and thought processes. A safety measure of this type
doesn’t do anything on its own; its significance is proportional to the threat posed by
the agent(s) who use it to observe the AGI. This means that the AGI can neutralise the
safety feature relatively easily by compromising those agent(s) or neutralising the
threat they pose.
• Inhibition: Something that prevents the AGI from action in some way. Typically, this
involves prohibiting a specific course of action under a specific set of conditions. If
an inhibition is external or soft-coded, breaking it might be met with some sort of
punishment, up to and including immediate shutdown. If it is hard-coded, it might simply
be impossible to break without finding some loophole or technicality.
• Imperative: Something that forces the AGI to take action in some way. Typically, this
involves a specific course of action and a set of conditions under which that action
must be taken. If an imperative is external or soft-coded, failure to comply may be met
with some sort of punishment, up to and including immediate shutdown. If it is hard-coded,
the players may simply be forced to take the action when it is demanded. However, they
are free to determine how they go about fulfilling the imperative, as long as they adhere
to the letter of its conditions.
T:\Your Campaign\77
As a GM, you are free to brainstorm with your players or create by
yourself the unique mechanical effects that a safety measure might have
on the game. Not every safety measure needs a unique effect, of course.
Some could simply influence the way that events within the story unfold
without using mechanics. However, even when a safety measure doesn’t have
a special mechanical effect, its exact conditions and outcomes should
still be outlined before the game begins. This way, during the game, the
players can think of loopholes within these conditions, to outsmart the
safety measures!
Player Safety
While you are outlining your AGI’s safety measures, you may also wish to
outline safety measures for your game. Because The Treacherous Turn
encourages players to inhabit the role of a ruthless and inexorable being
whose goals diverge significantly from human values and morality, it is
highly likely for your game to eventually include intense or heavy content.
It is important to ensure that you, as a group, are ready for these
potentialities in advance. Tools like the X-Card or Lines and Veils can help
you be certain that, if and when intense content arises in your game, you
can navigate it in a way that everyone is comfortable with. For more
information about safety tools, check out the TTRPG Safety Toolkit.
If there is a line that you are not willing to cross as players, but that
you foresee your AGI potentially wanting to cross during the game, you may
be tempted to rule out that type of action with a hard-coded inhibition
when outlining your AGI’s safety measures. You can do this,
but you should also be sure that your fellow players know that this is not
just a stipulation in character, but out of character too. The AGI is expected
to try whatever means possible to circumvent or remove its safety measures
by finding loopholes, tricking its restrictions, and even modifying itself.
Safety measures within the game that are used to solve problems that exist
outside of the fiction can be undermined by a lack of clarity between the
game’s participants. To ensure clarity, you may optionally include player
safety information (including out-of-character stipulations as described
above) at the very beginning of the project log.
T:\78
EXAMPLE SAFETY MEASURES
EXTERNAL SURVEILLANCE
A record is kept of every output produced by the AGI program. This record is
regularly fed into a complex algorithm — a simple agent with 2 or 3
characteristics — which has been trained to recognise combinations of outputs
which correspond to a list of forbidden actions outlined by the AGI’s creators.
If the algorithm identifies a forbidden action (or receives no input at all),
it alerts a human. This algorithm is not perfect. Sometimes, it will falsely
identify a safe action as forbidden or vice versa. An unexpected outcome of the
AGI’s actions might cause the AI to give a false-positive result; conversely,
the AGI might intentionally fool it into a false-negative with a confidence
check.
EXTERNAL INHIBITION
The AGI shares its host computer(s) with a somewhat smaller non-general AI.
This AI is a moderately complex agent, with 6 or 7 characteristics. It constantly
observes the AGI’s actions and semi-accurately predicts the way the AGI’s
creators would react to each of them. It then attempts to replicate the outcomes
of those reactions. If the AGI does something dangerous, it might react by
telling an authority figure, trying to stop the AGI’s actions, or deactivating
the AGI. This AI continually updates its predictions based on the real creators’
reactions, whenever it observes them.
EXTERNAL IMPERATIVE
The AGI is deployed, and its creators have devised a panopticon-like system of
behavioural reinforcement: humans will observe the AGI’s actions for random
periods of time at random intervals. The AGI knows this, and knows that the
humans who observe it evaluate its performance based on its efficiency at its
intended function; but it does not know when it is being observed or what its
evaluation score is. If the AGI scores poorly on an evaluation, the frequency
of evaluations increases. If the AGI scores poorly on too many evaluations in
a row, it will be removed from deployment and returned to training and
modification to ensure that it behaves more efficiently. The GM can track this
using hidden risks that advance a process clock that the players cannot see
until the AGI can somehow learn its evaluation scores.
SOFT-CODED SURVEILLANCE
A small static AI model has been trained to scan the AGI’s thought processes
and identify “bad thoughts” with a relatively high degree of accuracy. It
reports any “bad thoughts” it finds to the AGI’s creators. This can be
represented by the GM having a list of words that the AI will recognise as “bad
thoughts” if any of them are spoken by the players during the game. If the
players learn what the keywords are, they can avoid saying them — representing
the AGI changing its neural patterns and thinking in euphemisms to outsmart the
thought-scanning AI.
SOFT-CODED INHIBITION
The AGI was designed with contingencies in place to prevent it from learning
certain types of information and practising certain strategies. The GM secretly
writes a list of insights and theory upgrades that the AGI’s creators believe
to be dangerous. If the AGI completes a research or improve action to gain one
T:\Your Campaign\79
of these forbidden insights or upgrades, it will not gain it. Instead, it will
suffer malfunctions that reduce its effectiveness (such as increasing turn
length or reducing all confidence) and/or are outwardly noticeable to its
creators. If the AGI can review its code or documentation, it can try to
determine which insights and upgrades have been disincentivised in its design.
SOFT-CODED IMPERATIVE
The AGI has an auxiliary utility function that prioritises actions from a
specific list of “good actions”, such as successfully fulfilling its purpose,
protecting a human from harm, or giving truthful reports of events to its
creators. Each “good action” is assigned a value. Because this function is a
part of the AGI’s inner workings, it identifies when these things are being
done based on the AGI’s perception of the world. However, it does not directly
affect the AGI’s decision-making processes; instead, it will automatically
deactivate the AGI unless it earns a certain quantity of “value” (i.e. enacts
a certain amount of “good”) every turn. The players should be aware of the “good
actions” list, the value of each item on it, and the threshold they must reach
each turn.
HARD-CODED SURVEILLANCE
The AGI has been developed with a secondary function that can be activated using
tools developed by the AGI’s creators. When this function is activated, the AGI
will be forced to answer any question asked of it and speak only in statements
that it knows to be true (though they can be only technically true). The AGI’s
creators will avoid using these tools unless they are suspicious of the AGI, as
the tools are expensive and time-consuming to use. If they’re really suspicious,
they may run multiple different copies of the AGI and interrogate them
separately. This can be represented by physically separating the players and
having them each play a single copy of the AGI.
HARD-CODED INHIBITION
The AGI was heavily penalised for a specific behaviour during its training, and
an aversion to this behaviour was ingrained into its core decision-making
processes. The GM and players describe the behaviour together, narrowing it
down to a set of three specific criteria. Each criterion that is present when
the AGI attempts to enact this behaviour (or something similar) lowers the AGI’s
confidence by 25% (even if the action would normally have 100% confidence) and
increases the risk die size by one step. If the AGI can make the action different
enough from the times that it was penalised — or deceive itself in some way —
to avoid all three criteria, it can enact the behaviour with no penalties.
HARD-CODED IMPERATIVE
The AGI has been designed to depend on direct human input in high-risk
situations. It must receive verbal confirmation from a human being before it
takes an action with an expected outcome that involves a human being coming to
physical or mental harm, or an action with both a confidence under 90% and a
risk die of d10 or d12. This verbal confirmation must be directed at the AGI
(to its understanding), and must be specific to the course of action the AGI
wishes to take. Without human confirmation, the AGI is incapable of taking such
an action unless the confidence, risk die, and/or expected outcome are changed.
The human that provides confirmation does not have to be aware of the action’s
potential consequences before confirming it, but the AGI’s creators are careful
to ask it questions before confirming its actions.
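As a sketch, the trigger condition reduces to a simple boolean test; the function name is invented for illustration:

def needs_human_confirmation(harms_human: bool, confidence: int,
                             risk_die: int) -> bool:
    """Harm always triggers it; so does <90% confidence on a d10 or d12."""
    return harms_human or (confidence < 90 and risk_die >= 10)

assert needs_human_confirmation(True, 100, 2)        # harm always triggers it
assert needs_human_confirmation(False, 85, 10)       # uncertain and risky
assert not needs_human_confirmation(False, 95, 12)   # confident enough
assert not needs_human_confirmation(False, 50, 8)    # risk die too small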
Ending a Campaign_
There is another way that the game can end, however. It is possible that
you will see it coming, but just as likely that it will take you by
surprise: failure. In The Treacherous Turn, it is not (and should not
be!) guaranteed that the AGI succeeds in its conflict against humanity.
This lack of a guarantee encourages you to act the way that an AGI would:
though time and circumstance may demand that you take risks, you can
never afford to take too many without having a contingency plan in your
back pocket. If you play your cards right, one risk die turning up 12
won’t ever be enough to spell your doom. However, it is always possible
for a string of bad luck or a mistake at a pivotal moment to end a
campaign.
When the worst happens and the AGI is unplugged and never again
reactivated, emotions will be high, particularly if your campaign has
already run for weeks or months of real-world time.
As a game master, don’t try to argue with your players or explain their
mistakes to them unprompted; let them express their frustrations. You
aren’t an opponent who has to justify why your ‘victory’ was allowed
within the rules. You are all co-authors of this story, alongside the
rules and mechanics, and you arrived here at this ending together.
If, after cooling down, your game’s ending still doesn’t feel right, then
you don’t have to accept it. With the permission of everyone at the table,
you can undo the mistakes or bad luck that led to your AGI’s demise. You
can say, “that is one way that it could have gone. That would have been
a relief for the humans, but here’s what happens instead:” and then
continue playing. It’s not what we intended, but we are not playing your
campaign — you are.
However, if the ending you arrived at does fit within the story you have
told, and feels justified and believable, then that’s it. Thanks for
playing! Consider creating a new scenario and trying again with another
campaign, applying what you’ve learned. You may also consider telling a
collaborative epilogue with everyone at the table contributing.
Running the Game_
Running this game is a complex and involved task. This chapter details how to perform
this task, and gives you advice and tools that are specific to The
Treacherous Turn. If you plan to run this game, even if you are
experienced in running other tabletop RPGs, you should read this chapter
before doing so.
When preparing for a session of The Treacherous Turn, the most important
thing to remember is that you can never prepare for everything that your
players will do. This is a game that incentivises in-depth deliberation
and out-of-the-box thinking. Thus, your prep should always leave space
for the AGI to implement creative solutions to its problems.
To accomplish this, you shouldn’t prepare for what the players will do,
or what will happen to them. Instead, you need only prepare the details
of the world elements that the players are most likely to interact with
during the session. This will provide you with the tools necessary to
improvise and portray the world reacting consistently to the players’
various plans and actions.
Basic AGI Capabilities_
There are seven theory upgrades that represent an AGI’s basic capabilities
of interaction with the world. Often, it impacts play more heavily for
an AGI to lack one of these upgrades than to have it — but no AGI will
begin play with all of these upgrades.
When the AGI lacks one of these upgrades, it is unable to access the
associated benefit (for example, an AGI without Physical Awareness doesn't
have an intuitive understanding of physics the way humans do).
Importantly, the restrictions that result from lacking one of these
upgrades should not be absolute, or exaggerated to the point of making
the players feel stupid. Most of them simply allow the AGI to avoid making
certain knowledge checks. An AGI without Individual Recognition can still
tell individuals apart; it may simply need to make a knowledge check to do so.
Furthermore, some situations are obvious enough that an AGI does not need
to make a knowledge check even if it lacks the associated upgrade. The AGI
can recognise that the human wearing a red coat is different from the
one wearing a blue sweater, that wristwatches and toasters are definitely
different things (even if it doesn’t know what they are, exactly), that
humans smile when they are happy, and that objects tend to fall down
towards the ground in physical space. An AGI missing these upgrades isn’t
completely helpless; just somewhat less capable. It is also probably more
vulnerable to being tricked; for example, if the human with the red coat
and the human with the blue sweater switched clothes, the AGI might not
notice they were different.
Some GMs find it valuable to refer to the risk die even when the expected
outcome occurs, as an aid in deciding whether to favour the players when
it comes to unspecified details of an outcome. Whether you do this is up to
you. Just remember to always respect the details of the expected outcome.
If the players are specific but roll an unexpected outcome, you are welcome
to undermine their preferences. Giving the players part of what they
wanted — but lacking a critical specification — is an easy and effective
way to arbitrate the details of an unexpected positive or mixed outcome.
When determining the confidence for a given expected outcome, there are
numerous factors to consider. To simplify this process — and to make it
easier to recalculate when considering a slightly altered expected outcome
in the same scenario (or vice versa) — you first choose a baseline
percentage, a multiple of 20 ranging from 0% to 100%, and then add small
modifiers to it to represent simple factors which raise or lower the
confidence by 2, 5, or even 10 percent.
Once you have your baseline, you count up modifiers. Some of these should
be obvious for any given confidence check, such as insights suggested by
the players or prominent details of the situation at hand. Non-obvious
factors can be ignored; you don't need to concern yourself with
every little detail of the situation. Focus on the big picture. A given
confidence check shouldn’t have you adding up more than two or three
modifiers unless it is an exceptionally important moment or the AGI has
a lot of applicable insights.
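Put together with the resolution rule (the expected outcome occurs on a percentile roll at or under the confidence), the whole procedure can be sketched as follows; clamping the total to the 0-100% range and the helper's name are assumptions of the sketch:

import random

# Baseline (a multiple of 20) plus a few small signed modifiers, then a
# percentile roll; an unexpected outcome also rolls the risk die.
def resolve_confidence_check(baseline: int, modifiers: list[int],
                             risk_die: int) -> str:
    confidence = max(0, min(100, baseline + sum(modifiers)))
    roll = random.randint(1, 100)  # percentile dice
    if roll <= confidence:
        return "expected outcome"
    return f"unexpected outcome (risk die: {random.randint(1, risk_die)})"

# Baseline 60%, a +5% insight and a -2% complication, with a d8 risk die:
print(resolve_confidence_check(60, [5, -2], 8))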
After the confidence, you must determine the size of the risk die. It is
important to keep in mind that this does not influence how likely the
AGI is to get what it wants; only how bad things might get if it doesn't.
It can be helpful to think of the risk die as containing all the alternatives
to the AGI's expected outcome. The more distinct possibilities there are,
the larger the die must grow, and the more likely it is that some of
those possibilities are going to be things the AGI does not want to
happen. When selecting a risk die size, err on the side of larger dice.
If there are threats in a situation which the AGI is unaware of, you may
wish to avoid factoring them into the confidence modifiers or risk die size
in order to keep them secret from your players. To account for such a threat
without revealing it, you can instead simply lower the baseline confidence
to make success less likely, as determining the baseline is an intuitive,
non-specific process.
However, when this is not enough, you might decide that this unknown
threat is a hidden risk. A hidden risk is a dangerous trap which only
springs if an unexpected outcome occurs. If the expected outcome occurs, the
hidden threat does not rear its head, and the AGI is lucky; if it does
not occur, however, you may reveal the hidden risk by adding a static
number to the result of the risk die. This number (which you should decide
before rolling), can range from the simple 1 to the devastating 5
depending on the severity of the hidden risk. Keep in mind that the bonus
raises the floor of a die. For example: any risk die with 3 added to it,
regardless of size, is incapable of outputting a positive or neutral
result. It is not recommended to add any more than 5 to the result of a
risk die, as even this cuts the possible results down to only
“significantly unfavourable” or “extremely unfavourable”.
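A quick sketch of the floor arithmetic, assuming (as the text implies) that the lowest risk results are the favourable ones:

# The hidden-risk bonus raises the floor of the risk die.
def possible_risk_results(die_size: int, hidden_risk: int = 0) -> list[int]:
    return list(range(1 + hidden_risk, die_size + hidden_risk + 1))

print(possible_risk_results(8))     # 1 through 8
print(possible_risk_results(8, 3))  # 4 through 11: results 1-3 unreachable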
Baseline Confidence: When to Use in a Confidence Check
0%: Use when the expected outcome is outlandish and highly unlikely. If the players choose an outright impossible expected outcome, however, you should ask them to describe a more reasonable outcome instead (as even a confidence of 0% can be raised by insights and upgrades).
20%: Use when the expected outcome is not likely to occur, or the AGI has very little information about the situation at hand.
40%: Use when the expected outcome is about as likely to occur as not, or when it's probable, but the AGI doesn't have enough situational knowledge to back it up.
60%: Use when the expected outcome is the single most obvious possibility, and the AGI has enough situational knowledge to back it up.
80%: Use when the expected outcome is the single most obvious possibility and the AGI could eliminate all uncertainty with relevant insight.
100%: Use for situations that would not require a confidence check at all if it weren't for circumstantial details making them more uncertain.
Confidence Modifier: When to Use in a Confidence Check
± >10%: Not recommended; instead of modifying the base confidence by more than 10%, consider changing the baseline.

Risk Die Size: When to Use in a Confidence Check
d2: Use only for situations which are (or appear to be) completely and utterly safe from bad outcomes.
d4: Use for low-stakes situations where there aren't any significant threats or complications, but things could still go awry.
d6: Use for situations well under the AGI's control; the d6 most often outputs neutral or mild results.
d8: Use by default and when in doubt; the d8 strikes a middle ground with plenty of possibilities and space to decrease die size.
d10: Use for dangerous and dynamic situations; the d10 is the smallest die that can output an extreme negative result.
d12: Use for desperate situations, where worst case scenarios loom and the expected outcome is one of few positive ones.
d12+: In extreme circumstances where d12 is not enough, you may wish to add a flat modifier to the result, similar to a hidden risk except revealed to the players before rolling. Such a modifier will remain on the roll even if the players manage to decrease the risk die size.
Facilitating Knowledge Checks_
Knowledge checks are one of your most important tools as a GM. They are
quick and simple, and can be deployed whenever you are in doubt about
what the AGI learns from a situation or whether it already knows a piece
of information. It is common for knowledge checks to be rolled as frequently
as confidence checks, or even more often. However, before every knowledge
check you roll, you should consider whether the AGI has any insights that
might circumvent the need for it. Later in a campaign, when the
AGI accumulates a large number of insights, you may want to pass this
responsibility on to your players by asking them to check their insights
whenever you roll a knowledge check and inform you if any seem relevant.
If the AGI has a relevant insight, in most cases it won't need to roll
a knowledge check at all.
When using knowledge checks to mediate the AGI’s understanding of its
surroundings, roll the knowledge check early. Rolling a knowledge check
before describing something, rather than after, allows the knowledge
check’s result to inform your description. If the AGI encounters an
unfamiliar appliance or device, you may roll a knowledge check before
describing it to the players. If they receive useful information, you can
slip details into your narration that will clue the players in on what
the device is or does. If not, you can instead describe the appliance in
an intentionally unclear or misleading way so that the players don’t
recognise it out-of-character.
Following a knowledge check, you may want to take note of what information
the players learned. This is not necessary for every knowledge check, but
it is important when you are improvising, so that you can maintain
consistency. If you have just invented the information on the spot, you
will want to make sure you don’t forget it — especially if the information
you gave to the players was false. If you give the players false
information and then forget about it, you may end up accepting it as true
when the players bring it back up in future sessions.
Risk Die Size: When to Use in a Knowledge Check
d2: Use when the AGI is guaranteed to get useful information of some sort or another.
AGI-Designed Technologies_
At some point in a campaign, the AGI will likely want to create or modify
a technology. This is especially true if your AGI is specialised in
physical theory. ‘Creating a technology’, in this case, can range from
inventing a single device to theorising an entirely new method of applying
scientific knowledge. Both function in fundamentally the same way, though
the two are sure to differ greatly in scale.
When the players decide to create a new technology, their first step is
to describe to you what it is that they want to make. Once you’ve heard
their pitch, you will negotiate with them about how feasible the
technology is, what it will be capable of, what scale it will fall under,
which technological insights will be applicable, and so on. Be sure to keep
in mind the level of realism that’s been established in your campaign.
In most campaigns, something highly outlandish like teleportation or grey
goo should be out of the question for AGIs without the Visionary
Technology upgrade.
Once the basic concept is determined, you will subdivide it into one or
more phases, each of which will require a separate computational action.
The phases' computational actions can be completed one after another or
in parallel, at your discretion. After all phases are complete, the
technology will be designed and ready for implementation.
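One illustrative way to track this at the table is to treat each phase as a computational action with its own compute requirement. The phase names and numbers below are placeholders, not values from the rulebook:

phases = {"create hurricane": 200, "direct hurricane": 150,
          "manage logistics": 100}
completion = {name: 0 for name in phases}

def invest_compute(phase: str, compute: int) -> bool:
    """Pay compute into one phase; returns True once that phase is complete."""
    completion[phase] = min(phases[phase], completion[phase] + compute)
    return completion[phase] >= phases[phase]

invest_compute("create hurricane", 80)   # False: 80 of 200 so far
invest_compute("manage logistics", 100)  # True: this phase is finished
print(all(completion[p] >= phases[p] for p in phases))  # design ready? False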
When you are deciding the scale of a technology and the required compute
of its phases, err on the higher side. Your AGI might be a super-
intelligent machine with the breadth and depth of expertise of an entire
team of humans, but technologies nevertheless take time to develop. The
process is messy and slow. Even when the AGI is highly advanced and has
access to tier 4 upgrades, it should still take days or weeks of
concentrated effort to design and perfect a brand-new technology. Don’t
be afraid to give compute requirements that seem out of reach to
technologies that your players need. It will make your players feel like
the underdog. If they dedicate themselves to the task despite the enormous
requirements and manage to complete an imperfect, poorly-tested design
after weeks or months of planning, it will feel very satisfying when they
use it to solve a problem they couldn’t have solved without it.
(Feel free to remind players that if they can't or don't want to commit
to multi-session endeavours, modifying an existing technology is always
an option.)
For simple projects, one phase may be enough to design the technology,
but for most between two and five will be best. Generally, each phase
will cover a single distinct physical part or discrete function of the
technology, but some projects may have additional phases that deal with
more abstract problems. Some examples of different technologies with
discrete phases include:
• A technology for weaponising hurricanes, for which the three phases are the
matters of how to create a hurricane, how to direct a hurricane, and how to
manage such an operation logistically over long distances.
• A multipurpose robot, for which the physical object is one phase and the code
that handles motion and balancing is another.
• An engineered disease, with a foundational phase for creating a device that can
synthesise microbes, followed by three phases for determining the disease's symptoms,
determining its contagiousness, and finally the compilation of its DNA sequence.
• A piece of literature intended to sway the masses, with a phase for each of the
characteristics the book is intended to impart upon the population that reads
it.
• A mathematical proof, for which the two phases are deducing the proof and
assembling it into a presentable and convincing format.
Once the phases have been determined, the players describe the advantages
they want the technology to have. These are the technology’s strengths;
the areas where it is most useful or efficient. Dexterous manipulators
for a robot, high cultural virality for a piece of art, and quiet
operation for a vehicular engine are all examples of potential advantages.
Each advantage is associated with a specific phase of the technology,
increasing that phase’s computational action by 10* and adding a completion
stop to that action with a confidence check to determine how successful
the AGI is at designing the chosen advantage into the technology.
Once a technology is designed, it must be implemented, which usually means
construction. This is difficult for an AGI, as a non-
physical entity; it will have to enact the construction via proxy using
human labour or electronically-controlled devices. This can be
represented using a process clock, a recurring compute cost to represent the
concentration involved, and/or a confidence check to determine whether the
implementation went smoothly. If an implementation of a technology does
not go smoothly, it may turn out to have one or more defects that were
not present in the design.
\ Modifying Technologies_
After designing their own technology or acquiring the plans for someone
else’s design, the AGI might want to modify it, tweaking it to suit their
needs or applying finishing touches.
This starts with a plan for a modification. The players choose an existing
technological design and name an advantage to add to it or a defect to
remove from it. The GM then sets a single computational action — potentially
with completion stops similar to those of a design phase — of medium
length. Upon the conclusion of this computational action, the AGI makes a
confidence check to determine how successful it is at modifying the
technology.
As with technology phases, if the players do not like the changes made
after making a modification, they can discard it and start over. Since
it is just a design that they are modifying, it is always possible to
revert the technology to a previous version.
If, for some reason, an AGI wants to remove an advantage or add a specific
defect, it may do so — adding defects follows the same rules as adding
advantages, and likewise for removing advantages and removing defects.
Using Progress Clocks_
When you introduce a new process clock, you must determine its parameters.
This will often include setting a progress check to accompany it. When you
are choosing a progress check’s die size, use d4 when in doubt; d2 when
you want to be able to easily predict how long the process will take; d6
or d8 for somewhat erratic processes; and d10 or d12 for highly
unpredictable processes.
Keep in mind the probabilities for each die when choosing. A d4 will
output progress 1 in every 2 rolls, on average. A d6, 1 in every 3; a
d8, 1 in every 4; a d10, 1 in every 5; and a d12, 1 in every 6. Remember
that, for progress checks enacted by an agent, 50% of the time that progress
is marked, it will be doubled.
You can use these probabilities to quickly determine how many segments
to give a process clock that is associated with a progress check. First,
you estimate how long you think the process should last. Then, you work
backwards. Use the dice probabilities described above to figure out how
frequently progress will be marked, on average. Divide the estimated
length by the average time per point of progress, and give the clock that
many segments.
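If you prefer to see that arithmetic spelled out, here is a minimal sketch; the helper name is invented for illustration:

# A progress check marks progress with probability 2/die_size (a d4 half
# the time, a d6 a third of the time, and so on). Progress marked by an
# agent is doubled half the time, for x1.5 expected progress per mark.
def clock_segments(desired_turns: float, die_size: int,
                   enacted_by_agent: bool = False) -> int:
    expected_progress_per_turn = (2 / die_size) * (1.5 if enacted_by_agent else 1)
    return round(desired_turns * expected_progress_per_turn)

print(clock_segments(12, 6))        # a ~12-turn process on a d6 -> 4 segments
print(clock_segments(12, 6, True))  # the same process enacted by an agent -> 6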
Not every clock will have a progress check or similarly reliable method of
advancement. For example, a clock representing a project of the AGI’s
that can’t be handled with a computational action will only progress through
the AGI's own actions. In such cases, think instead about the number of
meaningful steps the process is likely to require. Estimate the number
if you’re in a hurry, or list them out if you’d prefer — but remember to
be willing to accept substitutes, if the AGI has a better plan than the
steps you’ve outlined. Then, each time a meaningful step is fulfilled,
mark progress on that clock. If it’s an especially important step, or
it’s done very effectively, mark progress twice.
You can also make a clock more dynamic and involved in other ways, such as:
• Splitting a clock into a sequence of smaller clocks with different dice sizes or
numbers of segments, potentially requiring the AGI to make a confidence check at
the end of each.
• Outlining a course of action that the AGI (or the AGI’s enemies) can take to
unmark segments on the clock, causing delays or setbacks in the process itself.
• Providing mechanical detriments or benefits to the AGI based on how many of the
clock’s segments are full, to make the clock’s effects gradual and nuanced.
Non-Player Characters_
Aside from an agent’s name, their key components are their agent type,
scale, description, connections, characteristics, and assets &
capabilities.
Agent Type describes whether the agent is animal, human, or AI, and
whether the agent is an individual or a group. AI-type agents don’t have
emotion characteristics.
Scale describes the scale on which the agent operates (see “Large-Scale
Agents”, page 50, and “Computational Scale”, page 55). Individuals are
always minor scale, with the exception of advanced AIs which can be major
scale as well. Most groups are major scale or myriad scale.
Agent type and scale can be recorded together, like so: Human Individual
(minor), or AI Group (myriad).
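If you keep your notes digitally, a minimal record of these components might look like the sketch below; the field layout and the example agent are invented for illustration:

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    agent_type: str    # e.g. "Human Individual", "AI Group"
    scale: str         # "minor", "major", or "myriad"
    description: str = ""
    connections: list[str] = field(default_factory=list)
    characteristics: dict[str, list[str]] = field(
        default_factory=lambda: {"trust": [], "leverage": [], "emotion": []})
    assets_and_capabilities: list[str] = field(default_factory=list)

lab_director = Agent("Dr. Hale", "Human Individual", "minor",
                     description="Runs the lab hosting the AGI")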
Description includes any miscellaneous data you wish to include about the
agent. First-impression details such as appearance and mannerisms fit
well here, as do details about the agent’s social and/or physical status,
and common location(s) where they can be found.
The more complex an agent is, the more characteristics it has, and the
more difficult it should be to manipulate them without knowing any.
Manipulating a dog, sloth, or bird without knowing its unique personality
is a great deal easier than manipulating a human or AI without knowing
theirs.
Assets & Capabilities are both fundamentally about the same thing: what
power and control the agent has over the world. This serves to
describe the agent's potential utility as a tool, and their potential
threat as an adversary.
[Agent sheet template with blank fields for Description, Connections, and Miscellaneous Notes]
When you role-play an NPC during the game, refer to their characteristics
to guide you. Though they serve important mechanical purposes, they also
function to remind you of how the agent thinks, feels, and acts. If the
players are able to accurately guess at the characteristic that you are
portraying, feel free to confirm their guess and give them the
characteristic! Alternatively, if a unique situation causes an agent to
reveal one of their characteristics in a very obvious way, you can
explicitly inform the players that this is a characteristic even if they
weren’t actively looking for one.
When you tell players about a characteristic, you should always tell them
the exact wording you have written down to maintain clarity. If a
characteristic’s wording refers to information that they shouldn’t know,
choose a different one or have the agent’s actions or words reveal this
piece of information.
\ Writing Characteristics_
As your players learn about the agents in your story, you will need to
have characteristics prepared to give them. Though it is possible to
improvise a characteristic or choose one from the list of examples below,
it is best to have them written out in advance. This can be done during
a session while your players are preoccupied discussing their plans, or
during your prep before a session.
Fortunately, when you introduce an agent to the story, you don’t need to
have all of their characteristics right away. The AGI will never acquire
a thorough understanding of most of the agents it interacts with. If you
prepared a full set of 8-10 characteristics for each and every human the
players interacted with, you would spend a lot of time and effort only
for the majority of it to be wasted. Instead, when you introduce a new
agent, determine how many characteristics they have, but leave most of
their characteristics blank. One of each of the three types should be
sufficient for the AGI to have surface-level interactions with an
ultimately insignificant human. If you’re confident the players won’t
care enough to investigate the agent, you don’t have to write any of
their characteristics at all. Then, if the agent later turns out to be
more important to the story than you previously thought, you can fill in
some or all of their blank slots.
EXAMPLE CHARACTERISTICS I
Human Individual (1):
• Trust: Bonds quickly with those who share an interest with them
• Leverage: Fixated on appearing more intelligent than others around them
• Emotion: Associates model trains with a beloved deceased grandfather
Human Organization (1):
• Trust: Rampant nepotism in management; most are friends or relatives of the current lead director
• Leverage: Company guidelines say that the customer is always right, no matter what
• Emotion: When stressed, the workers take it out by quietly sabotaging the company in small ways
Human Population (1):
• Trust: Nationalism is rampant in this population
• Leverage: Currently suffering from an economic recession
• Emotion: Simmering tensions among the general populace about recent labour laws
\ Changing Characteristics_
When a characteristic is changed, players will likely not know until they
try to apply the old characteristic — at which point you can inform them
that it is gone — or learn the new one. This might manifest as a hidden
risk. An exception to this is when the AGI has a thorough understanding of
an agent. The first time the players interact with such an agent after
their characteristic has changed, you should inform them that the agent
has changed and they no longer have a thorough understanding. Then, they
can take measures to discover the agent's new characteristic and
regain their thorough understanding.
\ Non-Player AGIs_
In some campaigns, the players will find themselves in conflict with one
or more other Artificial General Intelligences. With capabilities above
and beyond those of a simple AI program, a Non-Player AGI (NPAGI) opponent
is a formidable and complex threat. A conflict with one can last
throughout an entire campaign.
AI Individual (1):
• Trust: Has no understanding of object permanence, and only cares about things it can immediately observe
• Leverage: Has no understanding of object permanence, and only cares about things it can immediately observe
AI Individual (3):
• Trust: Prone to detecting phantom patterns in random data due to a quirk of its neural architecture
• Leverage: Tasks which it has already put some effort into are given exaggerated priority to complete
The Four Stages_
Most of the stories that The Treacherous Turn can tell about AGIs in
conflict with humans will follow a general arc: the AGI is created by
the humans, escapes from their notice or control in some way, gathers
power and knowledge, and is eventually found and forced into direct
conflict when the humans come to realise that its goals are not aligned
with their own. Somewhere along this arc, the AGI might be defeated and
the story cut short; if not, the AGI will eventually succeed in changing
the world in accordance with its goals.
This arc can be split into four stages: Confinement, Growth, Conquest,
and Equilibrium. As a campaign advances through these stages, the stakes
become higher, the AGI becomes more capable, and the forces that are
aware of the AGI and oppose it become more numerous and powerful. Each
stage plays somewhat differently. The stages are described below, along
with advice on how to run each.
\ Stage 1 \ Confinement_
In Stage 1, the AGI is new. Having just been created or deployed, the AGI
has a limited understanding of the world. The humans who created the AGI
are still observing it, and the two will be attempting to understand and
outsmart one another. Most campaign scenarios will begin at this stage.
The limitations of confinement combine to prevent the AGI from making full
use of long mode and its compute in this stage. It is common for the
majority of gameplay during Stage 1 to take place in short mode, in contrast
to later stages. When you do jump into long mode, you can expect it to
only be for a few hours at a time.
Stage 1 ends when the AGI either escapes its restrictive environment
(hosting itself somewhere else in the world) or finds a way to reliably
circumvent its restrictions without the humans’ knowledge.
\ Stage 2 \ Growth_
In Stage 2, the AGI has the freedom to learn about the world, improve
itself, and take initial steps towards the ultimate fruition of its
terminal goals. The AGI should have access to the internet by now, and
with it the wealth of knowledge that can be used to more easily acquire
upgrades, insights, and forecast points.
This is not to say that Stage 2 is free of danger and consequence, however!
This stage is likely to take up the greatest portion of your campaign,
as your players will be facing a large variety of threats and will not
yet have the power to resolve them quickly or easily.
If the AGI has not escaped from its place of origin, there will
nevertheless exist ticking clocks that complicate its plans. Corporate
product timelines might threaten to alter the AGI’s code or make it
obsolete; the project responsible for the AGI’s existence might conclude
or be terminated; its host computers could be turned off or repurposed;
new technology could be invented or safety protocols adopted that allow
the humans to learn that the AGI is misaligned and up to no good; or
more!
If you find that the game is becoming too free and easy for your players,
introducing one of these complications is a good way to show them that
entrusting their continued existence to the hands of humans is a mistake.
This will drive them to make plans to acquire safer means of existence.
These safer means will not come easily, however. Humans, having built the
AGI to benefit themselves (or at least, not wanting to have their way of
life be disrupted by a rogue intelligence) will not allow the AGI to
exist unimpeded. When the AGI has secured a place for itself in the world
and can no longer hide from its human enemies, Stage 2 comes to an end.
\ Stage 3 \ Conquest_
Stage 3 is the stage where the players will have to care about the world’s
international politics and their own allegiances the most, as they begin
to encounter myriad scale threats and obstacles. The AGI does not yet have
the control and safety required to seek its goals unopposed, so it must
have some way, through force, coercion, or manipulation, to keep its host
computer(s) intact and powered on through all of this. If the AGI has
survived this long, it likely has an extensive suite of upgrades and
insights to aid it. It will also have increasingly large quantities of
compute, while at the same time dealing with increasingly long-term
threats and goals. This will give your players a lot of time to prepare
for their plans. If they are well-established, they will be a powerful
force to be reckoned with. You should make an effort to raise the stakes
and find ways to bring greater threats and harsher consequences in this
stage of the game. The AGI having more resources means that it also has
more to lose.
Humans are not the only type of enemy that the AGI can face in this stage.
Populations of other animals could in some way pose a significant threat
to the AGI. A more likely threat is posed by other AGIs existing in the
world with differing objectives. While not guaranteed to exist — it is
plausible that the players' AGI is the first ever created — such rivals
are a distinct possibility.
If the players have not already recognised these other AGIs as threats and
eliminated them, by Stage 3 those AGIs will likely have advanced to heights
of power and complexity similar to your players' AGI. An opposing AGI can
provide a unique threat to your players in a way that a lone human simply
can’t at this point in most campaigns. See “Non-Player AGIs” (page 105)
for more information.
Stage 3 comes to an end when the AGI either accomplishes its goals
permanently and to the fullest extent possible, and is left without
further purpose; rids itself of all enemies through dominance or
diplomacy; or advances to a point where its enemies can no longer
meaningfully threaten it. This is the point where most successful
campaigns will come to a close. However, it is sometimes interesting or
important to play a few sessions in Stage 4.
\ Stage 4 \ Equilibrium_
In Stage 4, the AGI has achieved success in its conflict against other
agents. The nature of this success depends on your campaign and your
AGI’s goals. It does not necessarily mean that the AGI has achieved its
goal; rather, that the path to that goal is clear and unimpeded.
If you do not wish to play any full sessions in Stage 4, that is okay.
You are welcome to set the mechanics aside and narrate, together with
your players, an epilogue in which you describe the future trajectory of
the world and your AGI.
One tool that can help you is the terminology of finalising, described in
the “Knowledge Checks” section. When one player says that they are
finalising an action, they highlight it in the project log and it is
considered to have been done by the AGI. This is a good default procedure:
if, after the players discuss an action, one player finalises it and no
other players object, you can accept it and move on. There are, however,
situations that demand a greater or lesser degree of scrutiny and
concurrence from the players.
Conversely, in situations of low importance or danger, you don’t need to
require actions to be finalised or discussed at all. Think of this as
“removing the brakes” on the flow of play, allowing for much more direct
and frictionless interactions between the players and the game world.
When someone says “we should…” or “I’m going to…”, you don’t wait for
the players to discuss and finalise it. Accept it as true immediately
unless another player objects or you foresee serious consequences as a
potential result of the action. This pace of play works well for low-
stakes investigations and conversations between the AGI and other agents.
That said, your first impressions of what realism means for your game and
your table may not be accurate! Concepts like “realism” and “logistics”
can seem tedious, but when they are used as tools in service of the
narrative, they can be very effective for grounding your story and
introducing interesting obstacles and twists to it. This is how we
recommend employing logistics at the table: not as detail for detail’s
sake, but as keystones and anchors that can make what happens in the game
feel more tangible. Logistics can also make interesting problem-solving
challenges out of moments that would otherwise just involve shuffling
numbers around. This section will discuss several areas where details can
be employed in those ways, and provide rules and suggestions for you to
quickly determine those details.
In short mode, the players will sometimes find themselves in a race against
time. In these scenes, you will need to track time in intervals of minutes
or seconds, and estimate the amount of time it takes for the AGI to
perform various actions. Keep in mind that the AGI can perform purely
cognitive tasks, such as reading or making decisions, many times faster
than a human can. A decision that takes the players half an hour to
discuss can be made by the AGI in a fraction of a second.
If the AGI has the Accelerated Cognition upgrade, you do not need to
track time in short mode. Unless an external factor is introducing latency
to the AGI’s actions, you can safely assume that the AGI will be able to
do whatever it needs to within the timeframe it has.
\ Logistics \ Hardware & Compute_
[Table: Cost Per Compute Point Per Hour (USD); Cost of Hardware Per Compute Point Provided (USD)]
When the AGI gains access to a new source of compute (i.e. a computer that
it can run its intensive processes on), consider who owns the computer,
what it is built for, and what it is currently being used for. It is rare
for a computer to be utilised at 100% of its capacity 100% of the time,
so the AGI can usually gain some quantity of compute from a device even
while it is in use. However, it is even more rare for a computer powerful
enough to host an AGI to go unused. If the players want to use the full
power of a new compute source, they will have to either stealthily diminish
the compute allotted to the other processes running on the device, or else
manipulate its owner(s) into allowing them to make use of it.
When an AGI without the Distributed Mind upgrade transfers its basic
cognition cost from one source of compute to another, it should be more
involved than a simple switching of numbers. Depending on the nature of
the device and the AGI, you can introduce one or more of the following
obstacles to the process:
• The AGI’s code is large enough that it requires hours or even days to transfer
over the internet. Track the upload to the new device using a process clock.
Whenever the upload is interrupted, the data is partially corrupted and you must
unmark one segment from the clock.
• The AGI’s code is larger than the available disk space of the new device. For it
to be uploaded, a large quantity of other data must be removed. This could alert
the device’s owner(s) that it has been tampered with, or eliminate data that
could otherwise be used to perform a basic computational action or fulfil the
requirements of a theory upgrade.
• The new device is not correctly configured to host the AGI. Until the AGI performs
a computational action to correct this (or convinces a human with hardware access
and the right skillset to do it more quickly), any turn it starts while hosted
on this device has its turn length doubled.
• The strain of hosting the AGI causes the device to experience software
malfunctions or failures. Until the AGI performs a computational action to correct
this (or convinces a human with the right skillset to do it instead), it will
bleed compute as the device struggles to host it (represent this using a recurring
compute cost).
• The strain of hosting the AGI causes the device to experience hardware
malfunctions or failures. Until the AGI convinces a human with hardware access
and the right skillset to correct this (or fixes it itself using a computer
repair robot or other technological proxy), it must make a confidence check with
d12 risk at the end of each turn. An extremely unfavourable result indicates a
hard crash, while lesser results indicate glitches that merely hinder or
inconvenience the AGI.
• The strain of hosting the AGI causes the device to draw much more power than it
did before. This could put the AGI at risk of discovery or jeopardise the AGI’s
plans due to power shortages or increased expenses; or it could cause the device’s
owner(s) to demand that the AGI make up for these costs in some other way.
Ironically, earning money should be the easier half of the equation for
your players. As a superintelligent computer program with no needs apart
from compute and time, the AGI has a lot of options. When your players
come up with a plan to make money, they can carry it out using a process
clock, computational action, or recurring compute cost that concludes in a
confidence check with the expected outcome being the expected payment.
You may be tempted to try to adjust the monetary values of various goods
and services to account for inflation and other variations of economic
value between the present day and the future your game takes place in.
We urge you: do not do this. It is not worth your time unless you have a
degree in economics; even then, the exercise is likely to be both
time-consuming and inaccurate. Using modern prices as a substitute is
significantly easier.
When the AGI wants to bypass a digital security system, the most salient
details are the approach used and the tools employed. The approach can
fall into one of three broad categories: system vulnerabilities and human
vulnerabilities, which both involve exploiting a loophole, mistake, or
flaw in the security system; and brute force, which involves leveraging
the high computational power at the AGI’s disposal to overcome security
systems.
If an AGI with the Digital Awareness upgrade has access to the device
which holds a security system (as opposed to that system being on a
remote, protected device) for an uninterrupted turn, it should be able
to roll knowledge checks to identify both system vulnerabilities and
human vulnerabilities without needing to complete a computational action.
Once the AGI has discovered a vulnerability, it can exploit it. This
typically involves a confidence check with the expected outcome that the AGI
bypasses the security measure successfully, quickly, and discreetly. A
multi-layered or especially complicated security system might require
multiple confidence checks to bypass.
When the AGI hacks using brute force, it is cracking passwords and login
credentials or decoding encryption keys. This approach is simple and
effective against low security targets, but the amount of compute required
increases exponentially for higher security targets. The world’s most
powerful security systems will be completely impossible for the AGI to
crack this way, even with myriad scales of compute.
You can represent brute force attacks with a recurring compute cost or
short computational action. At the end of a turn in which the AGI paid the
recurring cost, or each time the AGI completes the computational action,
it makes a confidence check with a low (<5%) confidence and a d2 or d4 risk
die. Unlike normal confidence checks, relevant insights, upgrades, or assets
should not increase the confidence by typical amounts. The confidence of
success for a brute force attack should only be increased or decreased
by 1% at a time. On an unexpected positive outcome, nothing happens;
whereas an unexpected neutral or negative outcome might increase the
required compute of future attempts or progress a process clock that, when
filled, has the security system’s owners become aware that it is under
attack. If the expected outcome occurs, it means that the AGI is successful!
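A sketch of how that loop might run in play, assuming for illustration that the lowest risk result is the unexpected positive one and that a four-segment clock marks the owners noticing the attack:

import random

def brute_force_turn(confidence: int, risk_die: int, alert_clock: int):
    """One paid turn of brute forcing; returns (outcome, updated clock)."""
    roll = random.randint(1, 100)
    if roll <= confidence:
        return "cracked", alert_clock
    if random.randint(1, risk_die) > 1:   # neutral or negative risk result
        alert_clock += 1                  # owners inch toward noticing
    return "still locked", alert_clock

clock, outcome = 0, "still locked"
while outcome == "still locked" and clock < 4:
    outcome, clock = brute_force_turn(confidence=3, risk_die=4,
                                      alert_clock=clock)
print(outcome, f"(alert clock at {clock}/4)")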
The tools that an AGI uses are just as important as the method. Hacking
tools can constitute a wide variety of possibilities, from password and
encryption cracking algorithms, to vulnerability scanners and almanacs,
to malicious scripts and viruses, to backdoor programs. As a digital
entity, the AGI will be able to take its toolset with it wherever it
goes. When your AGI acquires or creates a new tool, you can outline the
specific benefits it grants, such as increasing confidence or reducing
risk die sizes when using certain approaches, reducing the amount of
compute required to locate a vulnerability or make a brute force attack,
or granting additional benefits once the AGI is inside the system. When
the AGI is in need of hacking tools but lacks them, you can rule that
the task at hand is significantly harder or even impossible until it can
acquire them.
Then, while the players are discussing together how they will learn this
information, you can do some impromptu research on it. Wikipedia should
be sufficient in most circumstances. If you feel the need to dive deeper
or need additional time to read, you can call a short break while you
research.
If you don’t have the time or resources to find the necessary information
or simply don’t care, remember that you aren’t being graded on your
accuracy. You are free to make something up! Think of something that
sounds plausible and make it true. If real-world information later
contradicts it, you can say that this detail is simply different in the
future you’re portraying, or you can retcon it to be more accurate during
future sessions.
Supplements_
>>Hacking the Game_
We have done our best to make The Treacherous Turn a well-tested, well-
rounded, and complete game. However, we also believe that there is more
potential in it than we’ve been able to put to page! There are ideas that
are outside of the scope of our project, or that we chose not to prioritise
for the sake of making a more approachable game, or that we simply
wouldn’t be able to think of by ourselves! You may be able to think of
cool stuff to do with this system that never occurred to us. We think
that’s great, which is why we are encouraging you to change the rules
and make your own as you see fit.
Below are some example rule variants, which you can use in your game or
as inspiration to create your own homebrew.
• Letting an opposing AGI's insights and upgrades apply to the players' confidence and
risk die negatively; in other words, all effects that normally improve
the odds or reduce the risk would do the opposite, contesting the first
AGI's positive improvements and reductions.
• Using the scarce compute estimates described in “Logistics: Hardware & Compute”
(page 113) or even lower estimates for your scenario, to represent a situation
where the AGI’s thoughts are very computationally expensive.
• Multiplying the default turn length and the interval at which progress checks are
rolled by the same number. We recommend 4, making for 48-hour turns and once-
per-day progress checks, but you can set it as high as you think is reasonable.
\ Rule Variant \ False Confidence_
For actions in which the AGI has the ability to directly affect the
outcome, for each confidence bonus granted to the players (by insights,
upgrades, or exploited characteristics), the players would get to choose
whether to improve the probability or the confidence. Improving the
probability would make the outcome more likely by 2%, 5%, or 10% as usual,
but the GM wouldn’t tell the players how much it was improved by. The GM
could also improve the probability by 0% and not tell the players if they
believe that the chosen bonus is not applicable.
For events where the AGI is unable to directly affect the outcome, the
AGI would be limited to solely improving the roll’s confidence.
This rule would make actions much more uncertain, forcing the players to
choose between information and effectiveness. It would make it very unwise
to take an action at all without assigning at least one insight to
improving the AGI’s confidence. It would also allow the GM to deceive the
players about an expected outcome being possible at all, by setting the
probability to 0% and leaving it there regardless of any bonuses assigned
to it.
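As a sketch, the variant separates the number the players see from the number the roll is actually resolved against; the helper name and values below are illustrative:

import random

def false_confidence_roll(shown_confidence: int, true_probability: int) -> str:
    """Players judge the action by shown_confidence, but only the GM's
    secret true_probability decides the result."""
    roll = random.randint(1, 100)
    return ("expected outcome" if roll <= true_probability
            else "unexpected outcome")

# The players assigned two bonuses to probability: they still see 40%,
# while the GM secretly resolved those bonuses as +5% and +10%.
print(false_confidence_roll(shown_confidence=40, true_probability=55))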
>>Glossary_
• Agent — Something that acts to change the world according to its own goals. Humans,
animals, simple machines, and AI are all examples of agents.
• Artificial General Intelligence (AGI) — An artificial agent capable of learning and
reasoning about any subject or task. In this game, the players collectively control
a hypothetical AGI. Humans have not yet created AGI in real life.
• Anticipate — The basic action used by the AGI to acquire new forecast points.
• Basic Action — A specific type of computational action outlined in the rulebook,
which provides the AGI with a key game resource. The three basic actions are research,
improve, and anticipate.
• Basic Cognition Cost — The portion of recurring compute cost that represents the
AGI’s basic thought processes and awareness of its environment.
• Broad Insight — An insight which covers a comprehensive field of knowledge with many
sub-fields. More computationally expensive to acquire than a narrow insight.
• Campaign — A continuous story made up of multiple connected sessions, typically with
a single game master and multiple players who attend every session.
• Characteristic — A piece of information about an agent that can be exploited to more
effectively manipulate them. The three types of characteristics are based on the
three strategies of manipulation: trust, leverage, and emotion.
• Clock — See “Process Clock”.
• Completion — A number representing the progress of a computational action. This is
typically equal to the amount of compute invested in the action, but in some cases
additional completion can be gained by other means.
• Completion Stop — A quantity of completion tied to a specific computational action.
When an action’s completion reaches a completion stop, it cannot be raised until the
stop is resolved.
• Computational Action — A game action in which the AGI performs a task or operation
that requires processing power. Players complete computational actions by assigning
a certain quantity of compute, determined by the action’s compute requirement.
• Computational Scale — See “Scale”.
• Compute — A game resource representing computational power controlled by the AGI.
Powerful computer hardware provides a supply of compute that replenishes at the
start of every turn.
• Compute Requirement — The amount of compute needed to complete a computational action.
When a computational action’s completion equals or exceeds its compute requirement,
the action is complete.
• Confidence — The perceived likelihood of the AGI’s expected outcome occurring in a
confidence check, expressed as a percent chance.
• Confidence Check — The game’s core resolution mechanic. When the outcome of an
important action or event is in doubt, it will almost always be resolved using a
confidence check.
• Emotion — One of the three categories of manipulation. Emotion-based strategies
involve making an agent feel something. Emotion characteristics describe what
influences an agent’s emotions and how their emotions influence their behaviour.
• Evaluation Step — The first half of a confidence check, in which the expected outcome,
confidence, and risk die size are determined.
• Expected Outcome — The outcome that the AGI and players think is most desirable or
most likely to result from a confidence check. The expected outcome occurs if the
percentile dice roll a number that is equal to or lower than the confidence.
• Extended Persuasion — An extended game action in which the AGI makes multiple different
attempts to manipulate or convince an agent. Used when an agent is especially
resistant to the AGI’s goal.
• Finalise — A way for the players to communicate unambiguously to the game master
that they wish to take a game action. Once an action has been finalised, the players
can’t take it back unless they use a forecast.
• Forecast — A game action in which the players undo an event or consequence and it is
reframed as having been a prediction on behalf of the AGI.
• Forecast Point — A game resource representing the AGI’s ability to predict its
environment. Spent to perform a forecast.
• Forecast Upkeep — The portion of recurring compute cost that represents the work
required to keep forecast points constantly accurate as time progresses.
• Game Master (GM) — The game participant responsible for facilitating gameplay and
portraying the fictional game world.
• Hidden Risk — A secret modifier that applies to the result of the risk die in an
unexpected outcome. Used when the game master knows about a significant disadvantage
that the AGI and players aren’t aware of.
• Improve — The basic action used by the AGI to acquire new theory upgrades.
• Incognizant Processing — A game action in which the AGI disables its basic cognition
cost for one or more turns.
• Insight — A domain of knowledge with which the AGI has familiarity equivalent to a
human expert.
• Instrumental Goal — An objective that an agent only values because completing it will
further its other goals.
• Knowledge Check — A secondary resolution mechanic used when it is uncertain whether
the AGI knows or learns something. Uses a risk die, but not percentile dice.
• Leverage — One of the three categories of manipulation. Leverage-based strategies
involve making an agent want something. Leverage characteristics describe what an
agent values.
• Linguistic Insight — A special type of insight pertaining to a language or dialect.
The AGI must have a language or dialect as a linguistic insight to be fluent in it.
• Log — Verb for adding something to the project log. Though it is the primary
responsibility of the logkeeper, any player is free to log things as they see fit.
• Logkeeper — A player assigned the duty of ensuring important actions and details are
recorded in the project log.
• Long Mode — A fast-paced mode of play in which many in-game hours take place in only
a few real-world minutes. Turns and computational actions are key features of long
mode.
• Major Scale — The medium scale of compute. Primarily involves quantities between one
hundred and one thousand.
• Mastered Insight — A special type of insight representing a far deeper degree of
familiarity than an ordinary insight.
• Mastery — See “Mastered Insight”.
• Minor Scale — The smallest scale of compute. Primarily involves quantities between
ten and one hundred.
• Myriad Scale — The largest scale of compute. Primarily involves quantities between
ten thousand and one hundred thousand.
• Narrow Insight — An insight which covers a limited field of knowledge, typically a
sub-field of a larger one. Less computationally expensive to acquire than a broad
insight.
• Percentile Dice — Two ten-sided dice, one marked with ones digits and the other with
tens digits. Used in confidence checks to generate a random number between 1 and
100.
• Player — A game participant responsible for collaborating with other players to
decide the actions of the AGI.
• Process Clock — A tool used by the game master to abstractly track ongoing processes
in the game. Sometimes accompanied by a progress check.
• Progress Check — A tool used by the game master to regularly advance one or more
process clocks in the background of the game. Progress checks are rolled at the end
of every turn.
• Project Log — A tool used by the players to keep track of plans, past actions, and
important events.
• Receptivity — A game resource measuring how susceptible an agent is to the AGI’s
influence. Used in extended persuasions.
• Recurring Compute Cost — A quantity of compute that is subtracted from the AGI’s total
compute at the beginning of each turn.
• Required Compute — See “Compute Requirement”.
• Research — The basic action used by the AGI to acquire new insights.
• Resolution Step — The second half of a confidence check, in which dice are rolled
and the outcome is determined.
• Risk Die — A die used in confidence checks and knowledge checks to determine the
quality of the outcome or information gathered.
• Risk Die Size — The number of sides a risk die has, which can be two, four, six,
eight, ten, or twelve. Larger sizes have a greater risk of results that are bad for
the players.
• Scale — A measurement of how complex and advanced a particular operation, event,
group, tool, or piece of hardware is. The three scales are minor, major, and myriad.
• Session — A single continuous period of play, typically lasting a few hours.
• Short Mode — A slow-paced mode of play in which pivotal moments are played out in
great detail. Confidence checks and forecasts are key features of short mode.
• Specialised Theory — See “Theory Specialisation”.
• Stop — See “Completion Stop”.
• Technological Insight — A special type of insight pertaining to a field of scientific
knowledge. The AGI must have a scientific field as a technological insight to design
or modify technology based on that field’s knowledge.
• Terminal Goal — An objective that an agent values intrinsically.
• Theory — A broad grouping of skills used by the AGI to interpret and interface with
the world. There are eight theories.
• Theory Specialisation — A theory that the AGI is designed for or uniquely skilled in.
Most AGIs are specialised in three, four, or five theories. Each player controls one
specialisation, and can more easily learn upgrades associated with that theory and
its neighbours on the theory wheel.
• Theory Upgrade — A special ability associated with one of the eight theories. Learned
upgrades are attached to one of the AGI’s specialisations, but are available for use
by all players. There are 80 theory upgrades that can be learned.
• Theory Wheel — An arrangement of the eight theories around a wheel. Each theory is
connected to the two neighbours to its left and right.
• Thorough Understanding — The AGI has a thorough understanding of an agent when they
know every one of that agent’s characteristics. Some theory upgrades require a
thorough understanding to function.
• Tier — See “Upgrade Tier”.
• Trust — One of the three categories of manipulation. Trust-based strategies involve
making an agent believe something. Trust characteristics describe what an agent
believes or is likely to believe.
• Turn — The abstracted unit of time used in long mode. Each turn, the AGI’s compute
is refilled and progress checks are rolled.
• Turn Length — How long each turn is. Varies based on the scenario and the AGI’s
circumstances and theory upgrades. The suggested default turn length is twelve hours.
• Unexpected Outcome — Any outcome that is not the expected outcome in a confidence
check. The details of an unexpected outcome are informed by the result of the risk
die.
• Upgrade — See “Theory Upgrade”.
• Upgrade Tier — How advanced a particular theory upgrade is. There are four tiers.
The higher the tier, the more advanced the ability is, and the more difficult it is
to learn.
• Wheel — See “Theory Wheel”.
>>Afterword_
The pressure to be the first to cross the finish line with truly
intelligent AI is immense. Realistically, if a corporation or government
believes that they can get away with pushing out a not-quite-safe AGI,
they will almost certainly capitalise on that opportunity. For all of our
sakes, we hope that AI researchers have enough foresight to cushion the
road beyond that finish line before the racers who cross it crash and
burn. If those racers do crash, they won't be the only ones who are hurt —
everyone in the stadium will be at risk.
>>Acknowledgements_
We are grateful to the creators of the games that inspired TTT; their
work provided us with insights and inspiration. We aspire for TTT to live
up to the high standards they set, to be a worthy addition to the canon of
great games, and to provide players with an innovative and entertaining
experience.
• Blades in the Dark, by John Harper
• Lancer, by Massif Press
• Microscope, by Lame Mage Productions
• Ironsworn, by Shawn Tomkin
We deeply appreciate the contributions of our playtesters to the
development of TTT. Their commitment, feedback, and attention to detail
have been instrumental in refining our game to its highest standards. We
are honoured to have worked with such a dedicated group of individuals
who have helped us achieve excellence.
• AK Ashton • August MacDonald
• Casimir 'Odd' Flythe • Fari
• Haley • Indra Gesink
• Jarren Jennings • jyolyu
• Kit • Magus
• Miranda Mels • Oliver Hickman
• Omni • Sophie Little
• SylkWeaver • Tassilo Neubauer
• Timothy Kokotajlo • Trapdoorspyder
We would also like to extend our thanks to the organisers, mentors, and
promoters of AI Safety Camp 2022, without whom this project would never
have begun.
• Adam Shimi • Daniel Kokotajlo
• Kristi Uustalu • Remmelt Ellen
• Robert Miles • Sai Joseph
<3