1 Introduction

Since the first decade of the 21st century, hardware development has outpaced the needs of videogames, going far beyond support for high resolutions and refresh rates above 60 Hz. It has reached the point where new releases offer little innovation in interaction, merely repeating the successful features of known franchises.

Balancing in videogames exists to control the difficulty the player encounters, preventing the player from giving up while trying to keep their interest for as long as possible [13]. This control is achieved by changing various elements of the game, such as scenarios, music, artificial intelligence behaviours and parameters such as the character's resistance to damage, among several other items.

Currently, with the high processing power of graphics cards, videogames have graphics increasingly close to reality. They can have narratives that, in some cases, surpass the most complex films; even so, a title sometimes turns out to be just another game among many and does not offer the entertainment players expect.

Another point is that, given the large amount of information people have at their disposal, creating media that can hold people's attention is becoming an arduous task for videogame developers. Besides competing among themselves to expand their market share, videogame studios also compete with the internet, whose videos and social networks vie with games for people's time. These companies therefore need to find ways to develop videogames capable of offering entertainment and immersion to the largest possible number of people without dividing them into groups, such as professional and casual players; that is, to make a videogame that can be played by anyone while leaving each player with the feeling of having received a custom-made videogame.

Videogames are beginning to incorporate adaptive game mechanisms. In the traditional model, the game is linear and is always presented in the same way to all players. Adaptive games evolve the game design according to the player's choices, thereby offering a unique gaming experience to each person [9]. Currently, in some videogames, the player has the false feeling of choosing the future of the narrative when deciding which path to take. This can increase involvement and immersion, yet it does not guarantee that the player never feels bored at certain points of the game. The game may be poorly balanced: at some moments it is very difficult to get through a level, while at others there is little to do besides walking around the scenery without any resistance. When launching a new videogame, companies have success in mind. But what guarantees the success of a videogame? According to Schell, it rests on four basic elements: Mechanics, Story, Aesthetics and Technology [22].

Despite the precise indications of Schell (2018) [22], many videogames that presented significant advances in each of the highlighted areas, or that maintained proven formulas, some even with recognition and prizes, still failed in sales and consumer adhesion. This demonstrates that following technical criteria alone is no guarantee of success.

Another very common problem in videogame development is balancing. Nowadays, big companies invest millions of dollars to balance their games passively and, even so, there is no guarantee that the game will be pleasant for every player. With that in mind, two guiding problems were formulated for this study:

  • P1 - Videogames do not have an automatic mechanism to help keep players motivated and emotionally involved.

  • P2 - Videogames are not able to perform personalized balancing that presents a unique experience to each player.

With these problems in mind, knowing that there is a hidden ingredient in the success of videogames and that their balancing needs something more to be efficient, some questions arise: what is behind all this? How can people be made to want to play a videogame, and how can an ideal balance be created for each person so as to offer a unique experience?

For the development of this study, it is considered that the fundamental ingredient in the success of videogames is related to individual emotions [4]. When a videogame is able to produce different emotions, and those emotions are the ones people expect to have while playing, the videogame will certainly be successful.

Throughout the history of videogames, many examples show that technical criteria alone are not the key factor in a successful videogame. How to explain the case of Minecraft? It is a videogame without an in-depth narrative or aesthetic sophistication, and without any technological advancement, since it can be played on machines with modest hardware configurations; its mechanics are common to several games. Meanwhile, some studios invest heavily in graphic quality to create games that look like playable movies, and are unsuccessful. One possible explanation for the Minecraft case lies in the way each person can imagine their own world and do whatever they want within the game universe. Each game elicits a different set of emotions in each player who, consequently, has a unique experience in the game. But how can this be reproduced in videogames with a defined narrative, where the player must follow a path established by the game designer? It is already known from the literature [4] that personalized emotions are the only way to guarantee the success of an entertainment medium, especially in videogames, the most immersive form of multimedia. An alternative is to provide game designers with a model in which the players' emotions can influence the game design, making the game self-adaptable.

This study analyses how the emotions experienced by players can be used to evolve game design. The focus is on how to measure physiological signals and identify emotional patterns so that they can serve as input to a decision-making algorithm when advancing the game. It is important to note that a game can be adaptive using only AI (Artificial Intelligence), learning from the player with Machine Learning techniques [14] and making changes without needing to know the player's emotions; the focus here, however, is to add emotion to the equation. With the wide variety of biosensors currently available on the market at relatively low prices, finding a way to use them in videogames may be a solution to the problems of balance and player motivation. Seeking to offer players even more immersion, so that they can fully and uniquely enjoy each videogame, the following research question was raised:

  • Is it possible to increase the involvement and attention of the players by making the game design adapt to the emotions of those who are playing?

The following research hypothesis was created after the research question was established:

  • Assuming that it is possible to identify in real-time the emotions experienced by players, it will be possible to change the game design, manipulating narrative and gameplay elements through AI algorithms, and thereby increase the player's involvement.

Thus, if the player is on a mission where, for example, they must keep collecting items around the scene and meanwhile nothing generates action, the player may end up getting bored and may even close the game. If the videogame can identify this, it can insert some enemies via AI to make the mission more challenging. Conversely, if the player is facing a very strong opponent and the game identifies that the player is starting to feel angry about being unable to win, it can decrease the opponent character's resistance so that the game stays balanced and remains interesting.
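As a minimal illustration of this idea, consider the sketch below in Python. The names (`EmotionState`, `spawn_enemies`, `opponent.resistance`) are hypothetical and stand in for whatever the game engine and the emotion-recognition layer actually expose; it is a sketch of the concept, not an implementation of the model presented later.

```python
# Minimal sketch of emotion-driven difficulty adaptation.
# `EmotionState`, `spawn_enemies` and `opponent` are hypothetical
# names used only to illustrate the idea.
from enum import Enum

class EmotionState(Enum):
    BORED = "bored"
    ANGRY = "angry"
    ENGAGED = "engaged"

def adapt_difficulty(state: EmotionState, game) -> None:
    if state is EmotionState.BORED:
        # Nothing is challenging the player: raise the pressure.
        game.spawn_enemies(count=3)
    elif state is EmotionState.ANGRY:
        # The player is frustrated by a strong opponent: soften it.
        game.opponent.resistance *= 0.8
    # When the player is engaged, leave the balance untouched.
```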

2 Emotional Envelope in the Gameplay

Currently, measuring and interpreting emotions is not a simple task. Measuring emotional reactions through biofeedback is not yet consensual, despite having evolved considerably in the last 13 years [7, 18, 20]. Measuring the right emotions is still complex and inaccurate: complex because, given the sensitivity of the biosensors, the physical conditions of the environment must be tightly controlled for an accurate reading.

Regarding interpretation, there are also conflicts of opinion between authors, since the terms emotion and feeling are sometimes confused [24]. And inaccurate because an emotion considered negative on emotion maps, such as anguish or fear, can be good in a videogame, as long as that is the game designer's goal. One option offered as a solution is the creation of a model that involves obtaining physiological reactions from the players, performing a subjective assessment and automatically feeding the results back into the gameplay.

2.1 How to See Emotions?

Before discussing how emotions can be represented, it is first necessary to define what an emotion is. An emotion can be defined as an episode that causes interconnected and synchronized changes in the states of all or most of the five subsystems of the organism, in response to the evaluation of an external or internal stimulus event as relevant to the main concerns of the organism [23].

To represent emotions visually, researchers have over time created graphic models, each defining a set of emotions and how they interpolate between them. Among the main models are:

  1. Russell's Circumplex Model of Affect: a person's affective state can be represented in two dimensions, excitement (arousal) and valence, where excitement represents the response to stimuli and the person's alertness level, and valence represents the level of feeling on a scale from pleasant to unpleasant [21]. (A minimal sketch of this mapping appears after this list.)

  2. Plutchik's Wheel of Emotions: in this model, emotions are represented by the three-dimensional shape of a cone. There are eight basic emotions, arranged in a circle at the base of the cone and organized into four pairs of opposites according to their similarities [19].
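As a simple illustration of Russell's model, the sketch below maps a (valence, arousal) pair to a quadrant of the circumplex. The quadrant labels are common illustrative choices, not part of the model's original wording.

```python
def circumplex_quadrant(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) pair, each in [-1, 1], to a quadrant
    of Russell's Circumplex Model of Affect."""
    if arousal >= 0:
        return "alert/excited" if valence >= 0 else "tense/angry"
    return "calm/relaxed" if valence >= 0 else "sad/bored"
```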

2.2 How to Measure Emotions?

When trying to measure and interpret people's emotions in an experiment, it is necessary to use a tool with which the volunteers can identify or describe their emotions. For this research, two tools were considered: the Self-Assessment Manikin and PrEmo.

Self-Assessment Manikin (SAM): a non-verbal pictorial assessment technique that directly measures the pleasure, arousal and dominance associated with a person's affective reaction to a wide variety of stimuli [15]. For readability, in this document the Self-Assessment Manikin technique will be referred to simply as the SAM Scale. The technique consists of a questionnaire developed to measure the three characteristics of an emotional response identified as the main ones for measuring emotions: valence, arousal and dominance [15].

PrEmo is a tool that aims to indicate, through images and animations, a person's emotional state in a given situation or on a given subject. It is a non-verbal instrument in which people self-assess their emotions using images. PrEmo measures 14 emotions that are usually provoked by the design of a product [6]. The commercial version of the tool lets respondents report their emotions using expressive cartoon animations: each of the 14 measured emotions is portrayed by an animation with dynamic facial, body and vocal expressions, so there is no need to rely on words [16].
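To make the two instruments concrete, the sketch below models one response to each. The 9-point scale is the one commonly used with SAM, and all field names are our own.

```python
from dataclasses import dataclass

@dataclass
class SamResponse:
    # SAM is commonly administered on 9-point pictorial scales.
    pleasure: int   # 1 (unpleasant) .. 9 (pleasant)
    arousal: int    # 1 (calm) .. 9 (excited)
    dominance: int  # 1 (controlled) .. 9 (in control)

@dataclass
class PremoResponse:
    # PrEmo rates 14 emotions; one rating per emotion label.
    ratings: dict[str, int]
```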

3 Biofeedback

Biofeedback is a technique that provides biological information in real-time [8]. Its use can therefore help in understanding and interpreting people's emotions with computer algorithms. Precisely defining emotions using biofeedback, however, remains an open problem. Even with advances in research and attempts to create automatic emotion-recognition systems, it is still very difficult to identify emotions from biological signals [10, 12]. Some studies show this difficulty: there are many interindividual differences when interpreting emotions using biosensors, since the same emotion can lead to different patterns of physiological reaction, which makes it hard to find patterns in the data and thereby train classifiers [12].

In another study based on biosensor data [11], the objective was to classify users' emotional experience when interacting with embodied conversational agents, that is, virtual characters. The study was carried out in two phases: the first aimed to train the classifiers and the second to apply them to human-agent interactions. In the first phase, images with known valence values were shown to participants, who evaluated them using the SAM Scale [3]; at the same time, biosensor data was collected to train the classifiers. In the second phase, the participants interacted with a virtual agent while their physiological data was collected, and the classifiers trained in the first phase were applied to that data.

One difficulty pointed out by the authors is the limitation of using subjective ratings to train classifiers for physiological signals. Since these ratings were acquired from satisfaction questionnaires applied at the end of the interaction session, people may not remember exactly what their emotional experience was during the experiment. Moreover, the questionnaires provide measures only at the end of the interaction, generating a single measure for the entire session [1, 17]. The conclusion was that automatically recognizing emotions by distinguishing them with respect to valence is a difficult task. The study obtained good results for the image evaluations, on average above 90% accuracy, but for the tests with virtual agents the results were not satisfactory.

3.1 Biosensors

A biosensor is an analytical device that measures biological or chemical reactions and converts them into an electrical signal. Currently, biosensors are present mainly in biomedical diagnosis. They are used in applications such as disease monitoring and drug discovery, and in the detection of pollutants, disease-causing microorganisms and disease markers in body fluids such as blood, urine, saliva and sweat [2].

The term biosensor is often used to refer to sensors that determine the concentration of substances and other parameters of biological interest, even when they do not directly use a biological system [5]. Such biosensors are now accessible for research, mainly because low-cost equipment such as BITalino can be used. BITalino is a board slightly larger than a credit card that enables the acquisition of physiological data. It is very versatile hardware, designed so that anyone, from students to professional application developers, can create projects and applications using physiological sensors [25].
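As an illustration of how accessible such acquisition has become, the sketch below reads one second of data, assuming the official `bitalino` Python package; the device address is a placeholder and the channel selection is illustrative.

```python
# Minimal acquisition sketch assuming the official `bitalino`
# Python package; the MAC address below is a placeholder.
from bitalino import BITalino

device = BITalino("00:00:00:00:00:00")   # placeholder address
device.start(1000, [0, 1, 2, 3])         # 1000 Hz, four analog channels
try:
    samples = device.read(1000)          # one second of raw samples
finally:
    device.stop()
    device.close()
```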

However, to study emotions in videogames it is not enough to have the equipment. It is necessary to create experimental paradigms to study players' behaviour, integrating the synchronized reading of physiological signals with filtering and data-treatment algorithms, so that this information can be used to identify the emotional patterns generated while the games are played. For this purpose, an online platform accessible at brainanswer.pt was used. With this platform it is possible to design experimental test paradigms while measuring various physiological signals and collecting forms and self-assessment scales from the players. The platform provides tools that allow the replay of the game while the physiological signals are visualized, so that comments or markings can be made at the moments of greatest emotional involvement. It is also possible to cross the players' responses with the recorded signals, making it possible to investigate changes in physiological signals at key moments of the games.

The BrainAnswer platform works like an online laboratory for studying emotions. However, the ease with which changes in some physiological signals can be identified through the platform should not be taken at face value; interpretation by experts is normally required. Changes can occur due to several factors: poor placement of the sensors, an inappropriate collection environment, involuntary movement of the participant, electromagnetic noise in the room, external interference, great variability of signals between individuals, the stress associated with the monitoring experience and even the existence of some pathology. All of these factors escape inexperienced researchers and determine the success or failure of a study of emotions. Additionally, it is necessary to develop an API (Application Programming Interface) to integrate the game variables with the emotional parameters measured on the BrainAnswer platform.

4 Emotional Gameplay Through Application Programming Interface - API

One way to address the problems seen so far in interpreting emotions in videogames is to use an API that receives data from three sources: biosensors, gameplay telemetry and the emotion patterns expected for that game. All three sources matter. From the biosensor data it is possible to find the players' physiological variation in response to the game's stimuli at a given moment or event, but this information alone is not enough: to interpret emotions more accurately, the game data at the moments when these variations occur is also needed. These data can be acquired from the game's telemetry, which must be implemented during the game's development so that the values of the variables of interest can be recovered, along with the events the game designer considers important for generating emotions in players. The third source is the set of emotional patterns expected for the game's events: by correlating the expected emotion with the player's emotion for a given game event, one can check that the emotions obtained are the expected ones and evaluate them correctly. In this way the API can make more accurate decisions when configuring the next level.
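A minimal sketch of how these three inputs could be grouped is given below; all names are illustrative assumptions, and a shared clock between signals and telemetry is presumed.

```python
from dataclasses import dataclass

@dataclass
class BiosensorWindow:
    timestamps: list[float]      # seconds, shared clock with telemetry
    eda: list[float]             # electrodermal activity samples
    heart_rate: list[float]      # derived from ECG/BVP

@dataclass
class GameEvent:
    timestamp: float             # same clock as the biosensor data
    name: str                    # e.g. "bonus", "death", "level_up"
    variables: dict[str, float]  # telemetry values at the event

@dataclass
class EmotionalGameApiInput:
    signals: BiosensorWindow
    telemetry: list[GameEvent]
    expected_emotions: dict[str, str]  # event name -> expected emotion
```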

4.1 API Training Model

To realize an application that can perform these tasks, a model called the Emotional Game API was developed and divided into two stages. The first is responsible for training the API so that it can recognize emotional patterns; the second is the application of the API to create configurations for a videogame based on the input data. Figure 1 shows the steps for training the API.

Fig. 1. Activity flow for API training.

  1. Player plays the game level with manual settings: the first step in training the API is to prepare the game settings manually; that is, all the variables that define the difficulty of the level must be set by the game designer so that the level is playable by anyone, even without previous knowledge of the game.

  2. System collects biofeedback and gameplay data: the player's biofeedback data, such as Electrodermal Activity (EDA), Blood Volume Pulse (BVP), Electrocardiogram (ECG) and respiration, must be collected in synchrony with the game's telemetry data (a minimal logging sketch appears after this list). This facilitates the interpretation of the data when it is placed in Big Data systems.

  3. Apply a survey regarding the player's emotions: at this stage, it is important to survey the player so that it is possible to understand what their emotions were during the gameplay. This step matters because evaluating only against maps of emotions can lead to wrong conclusions: an emotional pattern may be positive or negative for the player regardless of whether it is positive or negative on the map of emotions. In a game interaction, the player may like, and even hope for, fear or anguish, yet these emotions are considered negative on the scales of emotions. Therefore, the game designer needs to know which emotions are expected for the game's events.

  4. Apply emotional maps (PrEmo/SAM): using the two emotion maps, PrEmo [16] and SAM [15], together combines the strengths of both. With PrEmo it is possible to identify the player's emotional state during the gameplay, and with the SAM map it is possible to measure the intensity of that state, as well as the levels of Pleasure, Arousal and Dominance, from the player's point of view. These maps help in the API validation process because, based on the players' inputs, it is possible to know whether the changes made by the API produced good results.

  5. Search for patterns using Big Data: as the acquisition process generates a very large volume of data to be analysed, the best approach is to apply Big Data systems to extract information. At this stage, it is important to analyse recurring events in the game, for example when the character receives a bonus, levels up, dies or is under pressure. These events can be defined in the game's telemetry or, if the analysed game already exists and offers the option of collecting telemetry, they must be defined manually. For this stage it is also important to have recordings of the gameplay sessions, which help to identify patterns in the player's behaviour and show what the player was doing before and after each event. These patterns can be simplified to positive and negative: positive when the event pleases the player and negative when it displeases the player.

  6. Training the API with the found patterns: after these patterns have been found, their values can be used to train an Artificial Intelligence (AI) model using Machine Learning (a minimal training sketch also appears after this list). That way, when the API is in a production environment, it will only be necessary to input the biofeedback data, game telemetry and expected emotional patterns, and the API should be able to interpret these values and create configurations for the next level of the game. Each new level of the game may then present a different configuration based on the player's emotional state, increasing the player's engagement with the game.
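As a minimal illustration of step 2, the sketch below records game events and biosensor samples against a single monotonic clock so they can be aligned later; the class and method names are our own, not part of any specific platform.

```python
import time

class SynchronizedLogger:
    """Sketch of step 2: record game events and biosensor samples
    against the same monotonic clock so they can be aligned later."""

    def __init__(self):
        self.t0 = time.monotonic()
        self.events = []     # (t, name, variables)
        self.samples = []    # (t, channel, value)

    def log_event(self, name, **variables):
        self.events.append((time.monotonic() - self.t0, name, variables))

    def log_sample(self, channel, value):
        self.samples.append((time.monotonic() - self.t0, channel, value))
```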
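And as a minimal illustration of step 6, the sketch below trains a classifier on labelled event windows. It assumes scikit-learn, and that the pattern-search stage has already reduced each analysed event window to a feature vector labelled positive (1) or negative (0); the choice of a random forest is ours, not the model's specification.

```python
# Sketch of step 6, assuming scikit-learn and pre-labelled windows.
from sklearn.ensemble import RandomForestClassifier

def train_emotion_classifier(feature_vectors, labels):
    """feature_vectors: one row per event window;
    labels: 1 = event pleased the player, 0 = it displeased them."""
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(feature_vectors, labels)
    return model
```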

An important point during the evaluation of the players' emotional state is knowing what the player was doing before and after each analysed event. This information may not be found by Machine Learning systems, which can only observe that there were changes in the biosensor data without knowing the reason for the change. Players can experience physiological changes while mentally devising a strategy, and such changes also occur when players see that they will shortly be in danger. Therefore, in the initial evaluations, it is important to watch the gameplay recordings to filter out these confounding moments.

4.2 API Application Model

The API usage model is similar to the training model. This model is intended to be used as a continuation of the AI training because, with it, the player plays the same level several times with different configurations suggested by the API. In this model, the player still starts the first level with a manual configuration defined by the game designer; the procedure for collecting the biofeedback and game telemetry data remains the same, as does the application of the emotion maps at the end of each level played. The flow of activities for using the API can be seen in Fig. 2, and the added steps are presented below.

Fig. 2. Activity flow for using the API.

  1. Game designer defines the expected emotions: as stated earlier, the interpretation of an emotion can yield incorrect values in prediction systems if it is evaluated only against maps of emotions. That is why the game designer needs to define the emotional pattern to be met for each game event. This data must be sent to the API to be correlated with the biosensor data and the game's telemetry data. This matters because, for example, if the game designer expects the player to feel angry at a certain point in the game and then start a battle with a desire for revenge, this emotional pattern will be interpreted as positive by the API, contrary to what an analysis based only on emotion maps would conclude.

  2. API receives the biofeedback data, gameplay data and expected emotions: each of these three data sources is a fundamental part of a successful API result. The biofeedback data contains the players' physiological changes; the gameplay data contains all the game's telemetry with the relevant game data, such as the character's position, health level, the distance between the character and the enemies, and anything else the game designer considers important; and, finally, there are the emotional patterns the game designer hopes to meet for the game's events.

  3. API defines a new configuration for the game: with all three data sources ready and the AI previously trained, the API should be able to predict configurations in line with the players' expectations. For each event considered important by the game designer, the API must take a window of biosensor data, synchronized with the game's telemetry, and look for the patterns defined in the training process; this result must then be correlated with the emotional pattern established by the game designer, making it possible to identify whether or not the player is involved with the game (a sketch combining this step with the next appears after this list).

  4. Player plays the game level with API settings: this step tests whether the API was efficient in creating a game configuration that pleases the player. It is a feedback process that also supports the AI training: if the settings did not satisfy the player, the API should be able to identify whether the game was too easy or too difficult, causing the player's disinterest, and adapt the next level accordingly. If the result was positive and the player liked the difficulty settings, the API can be configured to offer the player more challenge in the next level.
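To make steps 3 and 4 concrete, the sketch below shows one possible, deliberately simplified decision rule. The single scalar difficulty parameter, the labels and the step sizes are illustrative assumptions, not part of the model's specification.

```python
def next_level_config(reaction: str, expected: str, difficulty: float) -> float:
    """Sketch of steps 3-4: compare the player's classified reaction
    with the emotion the game designer expected for the event, and
    nudge a single hypothetical difficulty parameter in [0, 1]."""
    if reaction == expected:
        # The level matched expectations: offer a little more challenge.
        return min(1.0, difficulty + 0.1)
    if reaction == "bored":
        return min(1.0, difficulty + 0.2)   # too easy: raise difficulty
    return max(0.0, difficulty - 0.2)       # frustrated: lower difficulty
```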

With this model, the game designer can define how many levels the player will play in an experiment. For this reason, Fig. 2 includes a decision component called Last Level, which directs the experiment: the player repeats the level with the settings suggested by the API until the defined number of levels is reached, and then the experiment is finished. By the "emotional pattern established by the game designer" it is understood that, for a given game event, a particular emotion is expected. It should be highlighted that, when the API training process starts, there is still no pattern in the data; all that needs to be known is the expected emotion. A pattern in the data is something that will be found only after training.

5 Conclusion

The model presented in this paper is being applied in a field study, now in its final stage, with a sample of players (n = 40). The API's behaviour has shown an affinity with the players' emotional envelopes, and the emotional game experience could be increased between matches. The model has managed to combine the biofeedback elements with the predefined gameplay characteristics, and the first results indicate that the model is viable.

A point that must be observed in research intending to use the model presented here is the use of an authorial videogame, in which specific events can be tested without the influence of others. Games in general, even simple ones, have very fast transitions between events, so it can be difficult to know the exact reason for an emotional trigger. By testing separately events that cause joy, sadness, anger, fear and other emotions, it becomes clearer whether a person had an emotional response because they were in danger or because they received a reward. Thus, by creating a videogame in which all people go through the same events, with an interval between events to calm down, it will probably be easier to identify patterns and thereby train the AI.

It is recommended that the self-assessment forms presented to players on completing a level include a question inviting the player to describe their experience of that game in writing. In the initial stages of creating the API, when the pattern-search process requires manual intervention and analysis, this makes it easier to remove misinterpretations. It is also one of the reasons for testing events separately: when a player plays a full level and only evaluates the experience as a whole at the end, they may consider only isolated cases that occurred during the gameplay, hampering the search for emotional patterns.

As seen in the literature, finding emotional patterns in the data is still difficult, but the task can be simplified if the analysis is limited to verifying whether or not the event caused an activation in the player. This can be found by correlating the game telemetry for a given event with the biosensor data, mainly the player's EDA and heart-rate curves (a minimal sketch is given below). This does not guarantee knowing the player's emotion, but it does indicate whether that event provoked a reaction in the player. This topic is under study and will be discussed in more detail in future research.
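A minimal sketch of this activation check, assuming EDA samples on the same clock as the telemetry; the window length and threshold are illustrative assumptions.

```python
import statistics

def event_caused_activation(eda, timestamps, event_time,
                            window=5.0, threshold=0.05):
    """Compare mean EDA in a short window after a telemetry event
    against the window before it; a rise above the threshold is
    taken as an activation."""
    before = [v for t, v in zip(timestamps, eda)
              if event_time - window <= t < event_time]
    after = [v for t, v in zip(timestamps, eda)
             if event_time <= t < event_time + window]
    if not before or not after:
        return False
    return statistics.mean(after) - statistics.mean(before) > threshold
```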

Finally, the model presented in this research aims to be a starting point for the search for emotional patterns of players in commercial games. As it involves Deep Learning and Machine Learning processes, it is recommended that the model first be used in simple games, with levels that can be replayed, because in such games it is easier to search for emotional patterns than in complex games with several forms of player activation.