1 Introduction

Understanding behaviour is a key aspect of biology, psychology and social science, e.g. for studying the effects of treatments (Alboni et al., 2017), the impact of social factors (Langrock et al., 2014) or the link with genetics (Bains et al., 2017). Biologists often turn to model organisms as stand-ins, of which mice are a popular example, on account of their similarity to humans in genetics, anatomy and physiology (Van Meer & Raber, 2005). Traditionally, biological studies on mice have taken place in carefully controlled experimental conditions (Van Meer & Raber, 2005), in which individuals are removed from their home-cage, introduced into a specific arena and their response to stimuli (e.g. other mice) investigated: see e.g. the work of Arakawa et al. (2014), Casarrubea et al. (2014), Jiang et al. (2019), Qiao et al. (2018), Schank (2008), Tufail et al. (2015), Wiltschko et al. (2015). This is attractive because: (a) it presents a controlled stimulus–response scenario that can be readily quantified (Bućan & Abel, 2002), and (b) it lends itself more easily to automated means of behaviour quantification, e.g. through top-mounted cameras in a clutter-free environment (Qiao et al., 2018; Schank, 2008; Tufail et al., 2015; Wiltschko et al., 2015).

The downside of such ‘sterile’ environments is that they fail to capture the full nuance of the animals’ behaviour (Gomez-Marin & Ghazanfar, 2019). Such stimulus–response scenarios presume a simple forward process of perception–action, which is an over-simplification of the animals’ agency (Gomez-Marin & Ghazanfar, 2019). Moreover, mice are highly social creatures, and isolating them for specific experiments is stressful and may confound the analysis (Bains et al., 2016; Crawley, 2007). For these reasons, research groups, such as the International Mouse Phenotype Consortium (Brown & Moore, 2012) and the TEATIME COST Action amongst others, are advocating for the long-term analysis of rodent behaviour in the home-cage. This is aided by the proliferation of home-cage monitoring systems, but is hampered by the shortage of automated means of analysis.

In this work, we tackle the problem of studying mice in the home-cage, giving biologists tools to analyse the temporal aspect of an individual’s behaviour and model the interaction between cage-mates—while minimising disruption due to human intervention. Our contributions are: (a) a novel Global Behaviour Model (GBM) for detecting patterns of behaviour in a group setting across cages, (b) the Activity Labelling Module (ALM), an automated pipeline for inferring mouse behaviours in the home-cage from video, and (c) two datasets, ABODe for automated activity classification and IMADGE for analysis of mouse behaviours, both of which we make publicly available.

In this paper, we first introduce the reader to the relevant literature in Sect. 2. Section 3 describes the nature of our data, including the curation of two publicly available datasets: this allows us to motivate the methods, which are detailed in Sect. 4. We continue by describing the experiments during model fitting and evaluation in Sect. 5 and conclude with a discussion of future work (Sect. 6).

2 Related Work

2.1 Experimental Setups

Animal behaviour has typically been studied over short periods in specially designated arenas—see e.g. Arakawa et al. (2014), Casarrubea et al. (2014), Schank (2008)—and under specific stimulus–response conditions (Qiao et al., 2018). This simplifies data collection, but may impact behaviour (Bailoo et al., 2018) and is not suited to the kind of long-term studies in which we are interested. Instead, newer research uses either an enriched cage (Jiang et al., 2019; Le & Murari, 2019; Nado, 2016; Sourioux et al., 2018) or, as in our case, the home-cage itself (Bains et al., 2016; de Chaumont et al., 2019). The significance of the home-cage cannot be overstated. It allows a richer range of nuanced behaviours to be captured with minimal intervention and disruption to the animals, but it also presents greater challenges for automating the analysis: indeed, none of the systems we surveyed perform automated behaviour classification for individual mice in a group-housed setting.

Concerning the number of observed individuals, single-mouse experiments are often preferred as they are easier to phenotype and control (Jiang et al., 2019; Nado, 2016; Sourioux et al., 2018; Wiltschko et al., 2015). However, mice are highly social creatures and isolating them affects their behaviour (Crawley, 2007), as does handling (often requiring lengthy adjustment periods). Obviously, when modelling social dynamics, the observations must include multiple individuals. Despite this, there are no automated systems that consider the behaviour of each individual in the home-cage as we do. Most research is interested in the behaviour of the group as a whole (Arakawa et al., 2014; Burgos-Artizzu et al., 2012; Jiang et al., 2021; Lorbach et al., 2019), which circumvents the need to identify the individuals. Carola et al. (2011) do model a group setting, but focus only on the mother and how she relates to her litter: similarly, the social interaction test (Arakawa et al., 2014; Segalin et al., 2021) looks at the social dynamics, but only from the point of view of a resident/intruder and in a controlled setting. While Giancardo et al. (2013), de Chaumont et al. (2019) and de Chaumont et al. (2012) do model interactions, their setup is considerably different in that they (a) use specially-built arenas (not the home-cage), (b) use a top-mounted camera (which is not possible in the home-cage) and (c) classify positional interactions (e.g. Nose-to-Nose, Head-to-Tail, etc., based on fixed proximity/pose heuristics) rather than the type of individual activity (e.g. Feeding, Drinking, Grooming).

2.2 Automated Behaviour Classification

Automated classification of animal behaviour has lagged behind that of humans, with even recent work relying on manual labels (Carola et al., 2011; Casarrubea et al., 2014; Loos et al., 2015). Automated methods often require heavy data engineering (Arakawa et al., 2014; Dollár et al., 2005; Jiang et al., 2019, 2021). Animal behaviour inference tends to be harder because human actions are more recognisable (Le & Murari, 2019), videos are usually less cluttered (Jiang et al., 2019) and most challenges in the human domain focus on classifying short videos rather than long-running recordings as in animal observation (Jiang et al., 2021). Another factor is the limited number of publicly available animal observation datasets that target the home-cage. Most—RatSI (Lorbach et al., 2018), MouseAcademy (Qiao et al., 2018), CRIM13 (Burgos-Artizzu et al., 2012), MARS (Segalin et al., 2021), PDMB (Jiang et al., 2021), CalMS21 (Sun et al., 2021) and MABe22 (Sun et al., 2023)—use a top-mounted camera in an open-field environment: in contrast, our side-view recording of the home-cage represents a much more difficult viewpoint with significantly more clutter and occlusion. Moreover, PDMB only considers pose information, while CRIM13, MARS and CalMS21 deal exclusively with a resident–intruder setup, focusing on global interactions between the two mice rather than individual actions. We aim, by releasing ABODe (Sect. 3.3), to fill this gap.

2.3 Modelling Mouse Behaviour

The most common form of behaviour analysis involves reporting summary statistics: e.g. of the activity levels (Geuther et al., 2019), the total duration in each behaviour (de Chaumont et al., 2019) or the number of bouts (Segalin et al., 2021), effectively throwing away the temporal information. Even where temporal models are used, as by Arakawa et al. (2014), they serve purely as an aid to behaviour classification, with statistics still reported in terms of total duration in each state (behaviour). This approach provides an incomplete picture, and one that may miss subtle differences (Rapp, 2007) between individuals/groups. Some research output does report ethograms of the activities/behaviours through time (Bains et al., 2017; Ohayon et al., 2013; Segalin et al., 2021)—and Bains et al. (2016) in particular model this through sinusoidal functions—but none of the works we surveyed consider the temporal co-occurrence of behaviours between individuals in the cage as we do. For example, in MABe22, although up to three mice are present, the four behaviour labels form a multi-label setup, indicating whether each action is evidenced at each point in time, but not which of the mice is the actor. This limits the nature of the analysis, as it cannot capture inter-individual dynamics, which is where our analysis comes in.

An interesting problem that emerges in biological communities is determining whether there is evidence of different behavioural characteristics among individuals/groups (Loos et al., 2015; Carola et al., 2011; Van Meer & Raber, 2005) or across experimental conditions (Bains et al., 2016; Rapp, 2007). Within the statistics and machine learning communities, this is typically the domain of anomaly detection, for which Chandola et al. (2009) provide an exhaustive review; in biological studies, it usually takes the form of hypothesis testing for significance (Carola et al., 2011). The limiting factor is often the nature of the observations employed, with most studies based on the frequency (time spent or counts) of specific behaviours (Crawley, 2007; Geuther et al., 2021). The analysis by Carola et al. (2011) uses a more holistic temporal viewpoint, albeit only on individual mice, while our models consider multiple individuals. Wiltschko et al. (2015) employ Hidden Markov Models (HMMs) to identify prototypical behaviour (which they compare across environmental and genetic conditions) but only consider pose features—body shape and velocity—and do so only for individual mice. To our knowledge, we are the first to use a global temporal model inferred across cages to flag ‘abnormalities’ in another demographic.

3 Datasets

Fig. 1 An example video frame from our data, showing the raw video (left) and an enhanced visual (right) using CLAHE (Zuiderveld, 1994). In the latter, the hopper is marked in yellow and the water spout in purple, while the (RFID) mouse positions are projected into image space and overlaid as red, green and blue dots

A key novelty of this work relates to the use of continuous recordings of group-housed mice in the home-cage. In line with the Reduction strategy of the 3Rs (Russell & Burch, 1959), we reuse existing data already recorded at the Mary Lyon Centre at MRC Harwell, Oxfordshire (MLC at MRC Harwell). In what follows, we describe the modalities of the data (Sect. 3.1), documenting the opportunities and challenges they present, as well as our efforts in curating and releasing two datasets to serve the behaviour modelling (Sect. 3.2) and classification (Sect. 3.3) tasks.

3.1 Data Sources

We use continuous three-day video and position recordings—captured using the home-cage analysis system of Bains et al. (2016)—of group-housed mice of the same sex (male) and strain (C57BL/6NTac).

3.1.1 Husbandry

The mice are housed in groups of three, which remain together as a unique cage throughout their lifetime. To reduce the possibility of impacting social behaviour (Gomez-Marin & Ghazanfar, 2019), the mice have no distinguishing external visual markings: instead, they are microchipped with unique Radio-Frequency Identification (RFID) tags placed in the lower part of their abdomen. All recordings happen in the group’s own home-cage, thus minimising disruption to their life-cycle. Apart from the mice, the cage contains a food and drink hopper, bedding and a movable tunnel (enrichment structure), as shown in Fig. 1. For each cage (group of three mice), three- to four-day continuous recordings are performed when the mice are 3 months, 7 months, 1 year and 18 months old. During monitoring, the mice are kept on a standard 12-hour light/dark cycle with lights-on at 07:00 and lights-off at 19:00.

3.1.2 Modalities

The recordings (video and position) are split into 30-minute segments to be more manageable. Experiments are thus uniquely identified by the cage-id to which they pertain, the age group at which they are recorded and the segment number.

A single-channel infra-red camera captures video at 25 frames per second from a side-mounted viewpoint in \(1280 \times 720\) resolution. The hopper is opaque, which impacts the lighting (and the ability to resolve objects) in the lower right quadrant. As regards cage elements, the hopper is static, and the mice can feed from either the left or right entry-points. The water-spout is on the left of the hopper, towards the back of the cage from the provided viewpoint. The bedding consists of shavings and is highly dynamic, with the mice occasionally burrowing underneath it. Similarly, the cardboard tunnel can be moved around or chewed and varies in appearance throughout the recordings. This clutter, together with the close confines of the cage, leads to severe occlusion, even between the mice themselves.

With no visual markings, the mice are only identifiable through the implanted RFID tag, which is picked up by a \(3\times 6\) antenna-array below the cage. For visualisation purposes (and ease of reference), mice within the same cage are sorted in ascending order by their identifier and denoted Red, Green and Blue. The antennas are successively scanned in numerical order to test for the presence of a mouse: the baseplate performs on average 2.5 full scans per second, but this is synchronised to the video frame rate. The RFID pickup itself suffers from occasional dropout, especially when the mice are above the ground (e.g. climbing or standing on the tunnel) or in close proximity (i.e. during huddling).

3.1.3 Identifying the Mice

A key challenge in the use of the home-cage data is the correct tracking and identification of each individual in the group. This is necessary to relate the behaviour to the individual and also to connect statistics across recordings (including different age-groups). However, the close confines of the cage and the lack of visible markers make this a very challenging problem. Indeed, standard methods, including the popular DeepLabCut framework of Lauer et al. (2022), do not work on the kind of data that we use.

Our solution lies in the use of the Tracking and Identification Module (TIM), documented in Camilleri et al. (2023). We leverage Bounding Boxes (BBoxes) output by a neural-network mouse detector, which are assigned to the weak (RFID) location information by solving a custom covering problem. The assignment is based on a probabilistic weight model of visibility, which considers the probability of occlusion. The TIM yields per-frame identified BBoxes for the visible mice, and an explicit indication otherwise when a mouse is not visible.
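To make the assignment step concrete, the following is a minimal sketch of matching detections to RFID-derived identities for a single frame. It is an illustrative assumption only: the actual TIM solves a richer covering problem with an explicit visibility/occlusion model, whereas here we simply score each (detection, identity) pair with a Gaussian weight on pixel distance and solve the resulting bipartite matching.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_identities(bbox_centres, rfid_positions, sigma=80.0):
    """Match BBox detections (D x 2 centres) to RFID identities (K x 2
    projected positions) for one frame. A simplified stand-in for the TIM:
    the Gaussian weight (and sigma) are assumptions for illustration."""
    d2 = ((bbox_centres[:, None, :] - rfid_positions[None, :, :]) ** 2).sum(-1)
    cost = d2 / (2 * sigma ** 2)                      # negative log of a Gaussian weight
    det_idx, mouse_idx = linear_sum_assignment(cost)  # Hungarian matching
    return list(zip(det_idx, mouse_idx))              # (detection, identity) pairs
```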

3.2 IMADGE: A dataset for Behaviour Analysis

The Individual Mouse Activity Dataset for Group Environments (IMADGE) is our curated selection of data, with the aim of providing a general dataset for analysing mouse behaviour in group settings. It includes automatically-generated localisation and behaviour labels for the mice in the cage, and is available at https://github.com/michael-camilleri/IMADGE for research use. The dataset also forms the basis for the ABODe dataset (Sect. 3.3).

3.2.1 Data Selection

IMADGE contains recordings of mice from 15 cages from the Adult (1-year) and 10 cages from the Young (3-month) age-groups: nine of the cages exist in both subsets and thus are useful for comparing behaviour dynamics longitudinally. All mice are male of the C57BL/6NTac strain. Since this strain of mice is crepuscular (mostly active at dawn/dusk), we provide segments that overlap to any extent with the morning (06:00–08:00) and evening (18:00–20:00) periods (at which lights are switched on or off respectively), resulting in generally \(2\nicefrac {1}{2}\) hour recording runs. This is particularly relevant, because changes in the onset/offset of activity around these times can be very good early predictors of e.g.  neurodegenerative conditions (Bains et al., 2023). The runs are collected over the three-day recording period, yielding six runs per-cage, equivalent to 90 segments for the Adult and 61 segments for the Young age-groups.

3.2.2 Data Features

IMADGE exposes the raw video for each of the segments. The basic unit of processing for all other features is the Behaviour Time Interval (BTI), which is one second in duration (25 video frames). This was chosen to balance the expressivity of the behaviours (reducing the probability that a BTI spans multiple behaviours) against imposing an excessive annotation effort for training behaviour classifiers.

The main modality is the per-mouse behaviour, obtained automatically by our ALM. The observability of each mouse in each BTI is first determined: behaviour classification is then carried out on samples deemed Observable. The behaviour is one of seven labels: Immobile, Feeding, Drinking, Self-Grooming, Allo-Grooming, Locomotion and Other. Behaviours are mutually exclusive within the BTI, but we retain the full probability score over all labels rather than a single class label.

The RFID-based mouse position per-BTI is summarised in two fields: the mode of the pickups within the BTI and the absolute number of antenna cross-overs. The BBoxes for each mouse are generated per-frame using our own TIM (Camilleri et al., 2023), running on each segment in turn. The per-BTI BBox is obtained by averaging the top-left/bottom-right coordinates throughout the BTI (for each mouse).
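As an illustration of this aggregation, here is a minimal sketch (with hypothetical field names) of how the per-frame signals could be summarised into one BTI: the mode of the RFID pickups, the number of antenna cross-overs and the averaged BBox corners.

```python
import numpy as np
from collections import Counter

def summarise_bti(rfid_pickups, bboxes):
    """Aggregate per-frame signals into a single BTI (25 frames).

    rfid_pickups: sequence of antenna indices picked up within the BTI.
    bboxes:       (F, 4) array of per-frame (x1, y1, x2, y2) coordinates,
                  with NaN rows where the mouse was not detected.
    """
    mode_antenna = Counter(rfid_pickups).most_common(1)[0][0]
    # Cross-overs: count changes between consecutive pickups
    crossovers = sum(a != b for a, b in zip(rfid_pickups, rfid_pickups[1:]))
    mean_bbox = np.nanmean(bboxes, axis=0)  # average top-left/bottom-right corners
    return {"position": mode_antenna, "crossovers": crossovers, "bbox": mean_bbox}
```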

3.3 ABODe: A dataset for Behaviour Classification

Our analysis pipeline required a mouse behaviour dataset that could be used to train models to automatically classify behaviours of interest, thus allowing us to scale behaviour analysis to larger datasets. Our answer to this need is the Annotated Behaviour and Observability Dataset (ABODe). The dataset, available at https://github.com/michael-camilleri/ABODe, consists of video, per-mouse locations in the frame and per-second behaviour labels for each of the mice.

3.3.1 Data Selection

For ABODe we used a subset of data from the IMADGE dataset. We randomly selected 200 two-minute snippets from the Adult age-group, with 100 for Training, 40 for Validation and 60 for Testing. These were selected such that data from a cage appears exclusively in one of the splits (training/validation/test), ensuring a better estimate of generalisation performance. The data was subsequently annotated by a trained phenotyper (see appendix Sect. B).

Fig. 2 The ALM for classifying observability and behaviour per mouse. The input signal comes from three modalities: (i) coarse position (RFID), (ii) identified BBoxes (using the TIM as implemented in Camilleri et al. (2023)) and (iii) video frames. An OC (iv) determines whether the mouse is observable and its behaviour can be classified. If this is the case, then the BC (v) is activated to generate a probability distribution over behaviours for the mouse. Further architectural details appear in the text

3.3.2 Data Features

As in IMADGE, ABODe contains the raw video (as two-minute snippets). We also provide the per-frame per-mouse RFID-based position reading and BBox location within the image (generated using TIM).

The dataset consists of 200 two-minute snippets, split into Training, Validation and Test sets as described above (see Table 9). To simplify our classification and analysis, the behaviour of each mouse is defined at regular BTIs, and is either Not Observable or one of seven mutually exclusive labels: Immobile, Feeding, Drinking, Self-Grooming, Allo-Grooming, Locomotion or Other. We enforce that each BTI for each mouse is characterised by exactly one behaviour: this implies both exhaustiveness and mutual exclusivity of the behaviours. The behaviour of each mouse is annotated by a trained phenotyper according to a more extensive labelling schema, which takes into account tentative labellings and unclear annotations: this is documented in the appendix Sect. B.1. Note that unlike some other group-behaviour projects, we focus on individual actions (as above) rather than explicitly positional interactions (e.g. Nose-to-Nose, Chasing, etc.)—this is a conscious decision, driven by the biological processes under study as informed through years of research experience at the MLC at MRC Harwell.

4 Methods

The main contribution of this work relates to the GBM which is described in Sect. 4.2. However, obtaining behaviour labels for each of the mice necessitated development of the ALM, described in Sect. 4.1.

4.1 Classifying Behaviour: The ALM

Analysing behaviour dynamics in social settings requires knowledge of the individual behaviour throughout the observation period. Our goal is thus to label the activity of each mouse, or flag that it is Not Observable, at discrete BTIs—in our case every second. A strong point of our analysis is the volume of data we have access to: this allows our observations to carry more weight and be more relevant to the biologists, as they are drawn from hours (rather than minutes) of observations. However, this scale of data is also challenging, making manual labelling infeasible.

As already argued in Sects. 2.1 and 2.2, existing setups do not consider the side-view home-cage environment that we deal with. It was thus necessary to develop our own ALM (Fig. 2) to automatically determine whether each mouse is observable in the video, and if so, infer a probability distribution over which behaviour it is exhibiting. Using discrete time-points simplifies the problem by framing it as a pure classification task, making it easier to model (Sect. 4.2). We explicitly use a hierarchical label space (observability v. behaviour, see Fig. 2(vi)), since (a) it allows us to break down the problem using an Observability Classifier (OC) followed by a Behaviour Classifier (BC) in cascade, and (b) we prefer to handle Not Observable explicitly as missing data, rather than having the BC infer unreliable classifications which can in turn bias the modelling. It is also semantically inconsistent to treat Not Observable as a label that is mutually exclusive with the rest of the behaviours: specifically, if the mouse is Not Observable, we know it is doing exactly one of the other behaviours (even if we cannot be sure which).

In the next subsections we describe in turn the OC and BC sub-modules: note that we postpone detailed training and experimental evidence for the choice of the architectures to our Experiments Sect. 5.

4.1.1 Determining Observability

For the OC (iv in Fig. 2) we use as features: the position of the mouse (RFID), the fraction of frames (within the BTI) in which a BBox for the mouse appears, the average area of such BBoxes and, finally, the first 30 Principal Component Analysis (PCA) components of the feature-vector obtained by applying the Long-term Feature Bank (LFB) model (Wu et al., 2018) to the video. These are fed to a logistic-regression classifier trained using the binary cross-entropy loss (Bishop, 2006, p. 206) with \(l_2\) regularisation, weighted by inverse class frequency (to address class imbalance). We judiciously choose the operating point to balance the errors the system makes: further details regarding the choice and training of the classifier appear in Sect. 5.1.2.
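A minimal sketch of such a classifier is shown below, assuming the features described above have already been assembled into a matrix (with the PCA reduction of the LFB features applied upstream); scikit-learn’s class_weight='balanced' option implements the inverse-class-frequency weighting, and the \(l_2\) penalty is the default.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: per-BTI feature matrix (RFID position, BBox fraction/area, 30 PCA
#    components of the LFB features); y: 1 = Observable, 0 = Not Observable.
# Feature assembly is assumed to happen upstream of this sketch.
oc = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l2", class_weight="balanced", max_iter=1000),
)
# oc.fit(X_train, y_train)
# scores = oc.predict_proba(X_val)[:, 1]  # thresholded at a tuned operating point
```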

4.1.2 Probability Over Behaviours

The BC (v in Fig. 2) operates only on samples deemed Observable by the OC, outputting a probability distribution over the seven behaviour labels (Sect. 3.3). The core component of the BC is the LFB architecture of Wu et al. (2018), which serves as the backbone activity classifier. For each BTI, the centre frame and five others on either side at a stride of eight (11 frames in total, see Sect. 5.1.3) are combined with the first detection of the mouse in the same period and fed to the LFB classifier. The logit outputs of the LFB are then calibrated using temperature scaling (Guo et al., 2017), yielding a seven-way probability vector. In instances where there is no detection for the BTI, a default distribution is output instead. All components of the BC (including the choice of backbone architecture) were finetuned on our data, as discussed in Sect. 5.1.3.
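Temperature scaling fits a single scalar \(T > 0\) on held-out validation logits so that the softmax outputs are better calibrated. A standard sketch follows (in PyTorch); it is not the exact training code used for the BC.

```python
import torch
import torch.nn.functional as F

def calibrate_temperature(logits, labels):
    """Fit the temperature T (Guo et al., 2017) by minimising the NLL of
    the scaled logits on a validation set. logits: (N, 7), labels: (N,)."""
    log_t = torch.zeros(1, requires_grad=True)  # optimise log T so T stays positive
    opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    opt.step(closure)
    return log_t.exp().item()

# T = calibrate_temperature(val_logits, val_labels)
# probs = torch.softmax(test_logits / T, dim=-1)  # calibrated 7-way distribution
```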

Although the identification of key-points on a mouse is a sensible way to extract pose information in a clean environment with a top-mounted camera, it is much more problematic in our cluttered home-cage environment with a side-mounted camera. Indeed, attempts to use the popular DeepLabCut framework (Lauer et al., 2022) failed because of the lack of reproducible key-points (see previous work in Camilleri, Zhang, Bains, Zisserman, and Williams 2023, sec. 5.2). Hence we predict the behaviour with the BC directly from the RFID data, BBoxes and frames (as illustrated in Fig. 2), without this intermediate step.

Fig. 3 Graphical representation of our GBM. ‘\(\times \)’ refers to standard matrix multiplication. To reduce clutter, the model is not shown unrolled in time

4.2 Modelling Behaviour Dynamics

In modelling behaviour, we seek to: (a) capture the temporal aspect of the individual’s behaviour, and (b) model the interaction and interdependence between cage-mates. These goals can be met by fitting an HMM on a per-cage basis, in which the behaviour of each mouse is represented by factorised categorical emissions contingent on a latent ‘regime’ (which couples them together). However, this yields a separate model for every cage, making it hard to analyse and compare observations across cages.

To address this, we seek to fit one GBM across cages. The key problem is that the assignment of mouse identities in a cage (denoted R, G, B) is arbitrary. As an example, if R represents a dominant mouse in one cage, this role may be taken by e.g. mouse G in another cage. Forcing the same emission probabilities across mice avoids this problem, but is too restrictive of the dynamics that can be modelled. Instead, we introduce a permutation matrix to match the mice in any given cage to the GBM, as shown in Fig. 3. This formulation is broadly applicable to scenarios in which one seeks to uncover shared behaviour dynamics across different entities (e.g. in the analysis of sports plays).

As in an HMM, there is a latent state Z, indexed by cage m, recording-run n and time t, forming a Markov chain (over t), which represents the state of the cage as a whole. This ‘regime’ is parametrised by \(\mathbf {\pi }\) at the first time-point (initial probability) as well as \(\Omega \) (transition probabilities), and models dependence both in time and between individuals. We then use \(\widetilde{X}\) to denote the behaviour of each mouse: this is a vector of variables, one for each mouse \(\tilde{k} \in \lbrace 1, \dots , K\rbrace \), in which the order follows a ‘canonical’ assignment. Note that each mouse is represented by a complete categorical probability distribution (as output from the ALM), rather than a hard label, and is conditioned on Z through the emission probabilities \(\Psi \). This allows us to propagate uncertainty in the outputs of the ALM, with the error implicitly captured through \(\Psi \).

Algorithm 1 EM inference for the GBM (detailed in the appendix)

For each cage m, the random variable \(Q^{[m]}\) governs which mouse k (R/G/B) is assigned to which index \({\tilde{k}}\) in the canonical representation \({\widetilde{X}}\), and is fixed for all samples n, t and behaviours x. The sample space of Q consists of all possible permutation matrices of size \(K\times K\), i.e. matrices whose entries are 0/1 such that there is only one ‘on’ cell per row/column. Q can therefore take on one of K! distinct values (permutations). This permutation matrix setup has been used previously, e.g. for problems of data association in multi-target tracking (see e.g. Murphy 2023, sec. 29.9.3) and in static matching problems (see e.g. Mena et al. (2020), Powell and Smith (2020) and Nazabal et al. (2023)). In those cases the focus is on approximate matching due to a combinatorial explosion, but here we are able to use exact inference due to the low dimensionality (in our case \(|Q| = 3! = 6\)). This is because the mouse identities have already been established through time by the TIM, and it is only the permutation of roles between cages that needs to be considered. The GBM is a novel combination of an HMM to model the interdependence between cage-mates, and the use of the permutation matrix to handle the mapping between the model’s canonical representation \(\tilde{X}\) and the observed X.

Note that fixing Q and X determines \({\widetilde{X}}\) completely by simple linear algebra. This allows us to write out the complete data likelihood as:

$$\begin{aligned} P_\Theta \left( \mathcal {D}\right)&= \prod _{m, n} \Biggl ( P_{\mathbf {\xi }}\left( Q^{[m]}\right) \, P_{\mathbf {\pi }}\left( Z^{[.,1]}\right) \nonumber \\&\quad \times \prod _{t=2}^{T^n} P_\Omega \left( Z^{[.,t]} \mid Z^{[.,t-1]}\right) \nonumber \\&\quad \times \prod _{t=1}^{T^n} P_\Psi \left( X^{[.,t]} \mid Z^{[.,t]}, Q^{[m]}\right) \Biggr ) . \end{aligned}$$
(1)

The parameters \(\Theta = \left\{ \mathbf {\pi }, \Omega , \mathbf {\xi }, \Psi \right\} \) of the model are inferred through the EM algorithm (McLachlan & Krishnan, 2008), as shown in Algorithm 1 and detailed in the appendix. We seed the parameter set using a model fit to one cage, and subsequently iterate between optimising Q (per cage) and optimising the remaining parameters using standard EM on the data from all cages. This procedure is carried out using models initialised by fitting to each cage in turn, and the final model is then selected based on the highest likelihood score (much like with multiple random restarts). Furthermore, we use the fact that the posterior over Q is highly peaked to replace the expectation over Q by its maximum (a point estimate), thereby greatly reducing the computational complexity.
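The outer loop of this procedure might look as follows. This is a structural sketch only: hmm_fit, log_lik and permute are hypothetical helpers wrapping standard categorical-emission EM, likelihood evaluation and the application of a permutation matrix to a cage’s data.

```python
import itertools
import numpy as np

# All 3! = 6 permutation matrices for K = 3 mice
PERMS = [np.eye(3)[list(p)] for p in itertools.permutations(range(3))]

def fit_gbm(cages, seed_cage, n_iters=20):
    """Alternate between a point estimate of Q per cage and standard EM on
    the pooled, canonically-ordered data (a sketch of Algorithm 1)."""
    theta = hmm_fit([cages[seed_cage]])          # seed from a single cage
    for _ in range(n_iters):
        # Point estimate of Q: the posterior is highly peaked, so we take
        # the single best permutation for each cage
        Q = {m: max(PERMS, key=lambda P: log_lik(theta, permute(data, P)))
             for m, data in cages.items()}
        # Standard EM over all cages, with mice mapped to canonical slots
        theta = hmm_fit([permute(cages[m], Q[m]) for m in cages])
    return theta, Q
```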

5 Experiments

We report two sets of experiments. We begin in Sect. 5.1 by describing the optimisation of the various modules that make up the ALM, and subsequently, describe the analysis of the group behaviour in Sect. 5.2. The code to produce these results is available at https://github.com/michael-camilleri/Mice-N-Mates.

5.1 Fine-tuning the ALM

The ALM was fit and evaluated on the ABODe dataset.

5.1.1 Metrics

For both the observability and behaviour components of the ALM we report accuracy and the F\(_{1}\) score (see e.g. Murphy, 2012, Sec. 5.7.2.3). We use the macro-averaged F\(_{1}\) to better account for the class imbalance. This is particularly severe for the observability classification, in which only about 7% of samples are Not Observable, but it is paramount to flag these correctly. Recall that the Observable samples will be used to infer behaviour (Sect. 4.1.2), which is in turn used to characterise the dynamics of the mice (Sect. 4.2). Hence, it is more detrimental to give False Positive (FP) outputs, which result in Unreliable behaviour classifications (i.e. when the sample is Not Observable but the OC deems it Observable, which can throw the statistics awry), than to miss some Observable periods through False Negatives (FNs) (which, though Wasteful of data, can generally be smoothed out by the temporal model). This construct is formalised in Table 1, where we use the terms Unreliable and Wasteful as they better illustrate the repercussions of the errors. In our evaluation, we report the number of Unreliable and Wasteful samples to take this imbalance into account. For the BC, we also report the normalised (per-sample) log-likelihood score, \(\widehat{\mathcal {L}}\), given that we use it as a probabilistic classifier.
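In code, these metrics could be computed as in the sketch below (assuming integer-encoded labels); the Unreliable/Wasteful counts simply re-name the off-diagonal cells of the binary observability confusion matrix, as per Table 1.

```python
import numpy as np
from sklearn.metrics import f1_score

def observability_outcomes(y_true, y_pred):
    """y = 1 for Observable, 0 for Not Observable (per Table 1).
    Unreliable: Not Observable predicted as Observable (an FP);
    Wasteful:   Observable predicted as Not Observable (an FN)."""
    unreliable = int(np.sum((y_true == 0) & (y_pred == 1)))
    wasteful = int(np.sum((y_true == 1) & (y_pred == 0)))
    return unreliable, wasteful

# macro_f1 = f1_score(y_true, y_pred, average="macro")
# Normalised log-likelihood of a probabilistic classifier with outputs
# `probs` (N x C) and true labels `y` (N,):
# l_hat = np.mean(np.log(probs[np.arange(len(y)), y]))
```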

Table 1 Definition of classification outcomes for the Observability problem

5.1.2 Observability

The challenge in classifying observability was to handle the severe class imbalance, which implied judicious feature selection and classifier tuning. Although the observability sample count is high within ABODe, the skewed nature of the data (with only 7% Not Observable) makes classifiers prone to overfitting. Features were selected based on their correlation with the observability flag, and narrowed down to the subset already listed (Sect. 4.1.1). As for classifiers, we explored Logistic Regression (LgR), Naïve Bayes (NB), Random Forests (RF), Support-Vector Machines (SVM) and feed-forward Neural Networks (NN). Visualising the ROC curves (see e.g. Murphy, 2012, Sect. 5.7.2.1) in Fig. 4 brings out two clear candidate models: at most operating points the LgR model is the best classifier, except for some ranges where the NB is better (higher). These were subsequently compared in terms of the number of Unreliable and Wasteful samples at two thresholds: one at the point at which the number of Wasteful samples is on par with the true number of Not Observable samples in the data (i.e. 8%), and the other at which the number of predicted Not Observable samples equals the statistic in the ground-truth data. The results appear in Table 2: the LgR outperforms the NB in almost all cases, and hence we chose the LgR classifier operating at the Wasteful = 8% point.
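The first of these operating points can be located as in the following sketch, which scans candidate thresholds on the predicted probability of Observable until the fraction of Wasteful samples reaches the target; the function name and interface are illustrative.

```python
import numpy as np

def threshold_at_wasteful(scores, y_true, target=0.08):
    """Pick the threshold on P(Observable) at which the fraction of
    Wasteful samples (Observable flagged as Not Observable) is ~target."""
    candidates = np.unique(scores)
    wasteful_rate = [(np.mean((scores < t) & (y_true == 1)), t) for t in candidates]
    return min(wasteful_rate, key=lambda wt: abs(wt[0] - target))[1]
```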

Fig. 4 ROC curves for various architectures of the OC, evaluated on the Validation split. Each coloured line shows the TP rate against the FP rate for various operating thresholds: the ‘default’ 0.5 threshold in each case is marked with a cross ‘\(\times \)’. The baseline (worst-case) model is shown as a dotted line

Table 2 Comparison of LgR and NB as observability classifiers (on the validation set) at different operating points

5.1.3 Behaviour

We explored two architectures as backbones (feature extractors) for the BC, the Spatio-Temporal Localisation Transformer (STLT) of Radevski et al. (2021) and the LFB of Wu et al. (2018), on the basis that they are the most applicable to the spatio-temporal action-localisation problem (Carreira et al., 2019). In both cases, we: (a) used pre-trained models and fine-tuned them on our data, (b) standardised the pixel intensities (over all the images) to zero mean and unit standard deviation (fit on samples from the tuning split), and (c) investigated lighting enhancement techniques (Li et al., 2022), although this did not improve results in any of our experiments. We provide an overview of the training below, but the interested reader is directed to Camilleri (2023) for further details.

The STLT uses the relative geometry of BBoxes in a scene (layout branch), as well as the video frames (visual branch), to classify behaviour (Radevski et al. 2021). The visual branch extracts per-frame features using a ResNet-50. The layout branch is a transformer architecture which considers the temporal and spatial arrangement of detections of the subject animal and contextual objects—the other cage-mates and the hopper. In order to encode invariance to the absolute mouse identities, the assignment of the cage-mates to the slots ‘cagemate1’ and ‘cagemate2’ was randomly permuted during training. The signals from the two branches are fused at the last stages of the classifier. The base architecture was extended to use information from outwith the BTI, drawing on temporal context from surrounding time-points. We ran experiments using the layout branch only and the combined layout/visual branches, each utilising different numbers of frames and strides. We also experimented with focusing the visual field on the detected mouse alone. Training was done using Adam (Kingma & Ba, 2014), with random-crop and colour-jitter augmentations, and we explored various batch-sizes and learning rates: validation-set evaluation during training directed us to pick the dual-branch model with 25 frames (12 on either side of the centre frame) at a stride of 2 as the best performer.

Table 3 Evaluation of the baseline (prior-probability), STLT and LFB models on the Training and Validation sets in terms of Accuracy, macro-F\(_{1}\)  and normalised log-likelihood (\(\widehat{\mathcal {L}}\))

The LFB (Wu et al., 2018), on the other hand, is a dedicated spatio-temporal action-localisation architecture which combines BTI-specific features with a long-term feature bank extracted from the entire video, joined together through an attention mechanism before being passed to a linear classification layer. Each of the two branches (feature extractors) uses a FastRCNN network with a ResNet-50 backbone. We used the pre-trained feature-bank generator and fine-tuned the short-term branch, attention mechanism and classification layer end-to-end on our data. Training was done using Stochastic Gradient Descent (SGD) with a fixed-step learning-rate scheduler: we used batches of 16 samples and trained for 50 epochs, varying the learning rate, warm-up period and, crucially, the frame-sampling procedure, again in terms of the total number of frames and the stride. We explored two augmentation procedures (as suggested by Wu et al. 2018): (a) random rescaling and cropping, and (b) colour jitter (based only on brightness and contrast). Again, validation-set scores allowed us to choose the 11-frame model with a stride of 8 and with resize-and-crop and colour-jitter augmentations (at the best-performing epoch) as our LFB contender.

Table 4 Test performance of the ALM and baseline model, in terms of observability and behaviour

To choose our backbone architecture, the best performer in each case was evaluated in terms of Accuracy, F\(_{1}\) and log-likelihood on both the Training and Validation sets (Table 3). Given the validation-set scores, the LFB model, with an F\(_{1}\) of 0.61 (compared to 0.36 for the STLT), was chosen as the BC. This was achieved despite the coarser layout information available to the LFB and the changes to the STLT architecture: we hypothesise that this is due to the ability of the LFB to draw on longer-term context from the whole video (as opposed to the few seconds available to the STLT).

The LFB (and even the STLT) model can only operate on samples for which there is a BBox for the mouse. However, we need to contend with instances in which the mouse is not identified by the TIM but the OC reports that it should be Observable. In this case, we output a fixed categorical distribution, fit to the samples which exhibited this ‘error’ in the training data (i.e. Observable but no BBox).
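Fitting this fallback distribution amounts to normalised label counts over the affected training BTIs, as in the following sketch (the boolean arrays and function name are hypothetical):

```python
import numpy as np

BEHAVIOURS = ["Immobile", "Feeding", "Drinking", "Self-Grooming",
              "Allo-Grooming", "Locomotion", "Other"]

def fit_default_distribution(labels, observable, has_bbox):
    """Categorical over the seven behaviours, fit on training BTIs that
    are Observable but have no BBox (integer labels index BEHAVIOURS)."""
    mask = observable & ~has_bbox
    counts = np.bincount(labels[mask], minlength=len(BEHAVIOURS))
    return counts / counts.sum()
```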

5.1.4 End to End Performance

In Table 4 we show the performance of the ALM on the held-out test set. Since there is no other system that works on our data to compare against, we report the performance of a baseline classifier which returns the prior probabilities. In terms of observability, the ALM achieves slightly lower accuracy but a much higher F\(_{1}\) score, as it seeks to balance the types of errors (cutting the Unreliable samples by 34%). In terms of behaviour, when considering only Observable classifications, the system achieves 68% accuracy and 0.54 F\(_{1}\), despite the high class imbalance. The main culprits for the low score are the grooming behaviours, which, as shown in Fig. 5, are often confused for Immobile. Within the supplementary material, we provide a demo video (Online Resource 1) showing the output from the ALM for a sample one-minute clip. In the clip, the mice exhibit a range of behaviours, including Feeding, Locomotion, Self-Grooming and Immobile.

Fig. 5 Behaviour confusion matrix of the end-to-end model as a Hinton plot. The area of each square represents the numerical value, and each row is normalised to sum to 1

5.2 Group Behaviour Analysis

The IMADGE dataset is used for our behaviour analysis, focusing on the adult demographic and comparing with the young one later.

5.2.1 Overall Statistics

It is instructive to look at the overall statistics of behaviours. In Table 5 we report the percentage of time mice exhibit a particular behaviour, averaged over cages and split by age-group. The Immobile  behaviour clearly stands out as the most prevalent, but there is a marked increase as the mice get older (from 30% to 45%)—this is balanced by a decrease in Other , with most other behaviours exhibiting much the same statistics.

Table 5 Distribution of Behaviours across cages per age-group

5.2.2 Metrics

Evaluating an unsupervised model like the GBM is not straightforward, since there is no objective ground-truth for the latent states. Instead, we compare models using the normalised log-likelihood \(\widehat{\mathcal {L}}\). When reporting relative changes in \(\widehat{\mathcal {L}}\), we use a baseline model to set an artificial zero (otherwise the log-likelihood is not bounded from below). Let \({\widehat{\mathcal {L}}_{BL}}\) represent the normalised log-likelihood of a baseline model: an independent distribution per mouse per frame. Subsequently, we use \({\widehat{\mathcal {L}}_{\Theta }}\) for the likelihood under the global model (parametrised by \(\Theta \)) and \({\widehat{\mathcal {L}}_{\Theta ^*}}\) for the likelihood under the per-cage model (\(\Theta ^*\)). We can then define the RDL (a relative difference in likelihood) between the two models as:

$$\begin{aligned} \text {RDL}\left( \Theta ; \Theta ^*\right) = \frac{\widehat{\mathcal {L}}_{\Theta } - \widehat{\mathcal {L}}_{\Theta ^*}}{\widehat{\mathcal {L}}_{\Theta } - \widehat{\mathcal {L}}_{BL}} \times 100\% . \end{aligned}$$
(2)
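In code, Eq. (2) is a one-liner; note that since the per-cage model typically scores higher on its own data, the RDL is negative there, and its magnitude is the ‘drop’ reported in Sect. 5.2.5.

```python
def rdl(l_global, l_percage, l_baseline):
    """Relative difference in likelihood, Eq. (2), in percent.
    Negative values indicate the global model scores worse (a 'drop')."""
    return 100.0 * (l_global - l_percage) / (l_global - l_baseline)
```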

In evaluating on held-out data, we face a further complexity due to the temporal nature of the process. Specifically, samples cannot be considered independent of their neighbours. Instead, we treat each run (\(2\nicefrac {1}{2}\) hours) as a single fold, and evaluate models using leave-one-out cross-validation: i.e. we train on five of the folds and evaluate on the held-out fold, in turn.
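Schematically (with fit and score as hypothetical wrappers around EM training and normalised log-likelihood evaluation):

```python
import numpy as np

def loo_likelihood(runs, fit, score):
    """Leave-one-run-out CV: each 2.5-hour run is one fold, since BTIs
    within a run are temporally dependent and cannot be split at random."""
    held_out = []
    for i in range(len(runs)):
        theta = fit([r for j, r in enumerate(runs) if j != i])
        held_out.append(score(theta, runs[i]))
    return float(np.mean(held_out))
```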

5.2.3 Size of Z

The number of latent states |Z| in the GBM governs the expressivity of the model: too small and it is unable to capture all the dynamics; too large and it becomes harder to interpret. To this end, we fit a per-cage model (i.e. without the Q construct) to the adult mice data for varying \(|Z| \in \left\{ 2, \ldots , 13 \right\} \), and computed \(\widehat{\mathcal {L}}\) on held-out data (using the aforementioned cross-validation approach). As shown in Fig. 6, the likelihood increased gradually, but the gains slowed beyond \(|Z|=7\): we thus use \(|Z|=7\) in our analysis.

Fig. 6 Normalised log-likelihood (\(\widehat{\mathcal {L}}\)) of the GBM for various dimensionalities of the latent state, over all cages

5.2.4 Peaked Posterior Over Q

Our Algorithm 1 assumes that the posterior over Q is sufficiently peaked. To verify this, we computed the posterior over all permutations for all cages, given each per-cage model. To two decimal places, the posterior is deterministic, as shown in Fig. 7 for the cages in the Adult demographic using the model trained on cage L. The Young demographic exhibited the same phenomenon.
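Since \(|Q| = 6\), this posterior can be computed exactly by enumeration, along the lines of the following sketch (log_lik is again a hypothetical helper returning the HMM log-likelihood of a cage’s data under a given permutation):

```python
import itertools
import numpy as np

def q_posterior(theta, cage, log_lik):
    """Exact posterior over the 3! = 6 mouse-to-canonical-slot permutations
    for one cage, assuming a uniform prior xi over permutations."""
    perms = list(itertools.permutations(range(3)))
    ll = np.array([log_lik(theta, cage, p) for p in perms])
    post = np.exp(ll - ll.max())     # subtract max for numerical stability
    return post / post.sum()         # in practice one-hot to 2 d.p. (Fig. 7)
```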

Fig. 7 Posterior probability of the GBM over Q for all cages (\(|Z|=7\), model trained on cage L)

5.2.5 Quality of Fit

During training of the per-cage models, we noticed extremely similar dynamics across the cages (see Camilleri 2023). This is not unexpected, given that the mice are of the same strain and sex, and recorded at the same age-point. Nonetheless, we wished to investigate the penalty paid for using a global rather than a per-cage model. To this end, in Table 6, we report the \(\widehat{\mathcal {L}}\) and RDL (Eq. (2)) of the GBM evaluated on data from each cage in turn. Although each per-cage model is understandably better than the GBM on its own data, the average drop is just 4.8%, which is a reasonable penalty to pay in exchange for a global model. Plotting the same values on the number line (Fig. 8) shows two cages, D and F, that stand out from the rest due to a relatively higher drop. This led us to investigate the two cages further as potential outliers in our analysis, see Sect. 5.2.6.

Table 6 Evaluation of the GBM model (\(|Z|=7\)) on data from each cage (columns) in terms of the Normalised log-likelihood (\(\widehat{\mathcal {L}}\)) and RDL
Fig. 8 RDLs from Table 6, printed on the number line: the lowest-scoring cages are marked

5.2.6 Latent Space Analysis

Figure 9 shows the parameters of the trained GBM. Most regimes have long dwell times, as indicated by values close to 1 on the diagonal of \(\Omega \). For the emission matrices \(\Psi _1, \ldots , \Psi _3\), note that regime F captures the Immobile behaviour for all mice, and is the most prevalent (0.26 steady-state probability). The purity of this regime indicates that the mice are often Immobile at the same time, providing evidence for the hitherto anecdotal phenomenon that mice tend to huddle together to sleep: it is interesting that this was picked up by the model without any a priori bias. This is further evidenced in the ethogram visualisation in Fig. 10, which also points out regime C as indicating when any two mice are Immobile, and D for any two mice exhibiting Self-Grooming. Similarly, regime A is most closely associated with the Other label, although it is less pure.

A point of interest is the set of regimes associated with the Feeding behaviour, which differ across mice—B, E and G for mice 1, 2 and 3 respectively. This is surprising, given that more than one mouse can feed at a time (the design of the hopper is such that there is no need to compete for feeding resources). Being a global phenomenon, it could be indicative of a pecking order in the cage. Another aspect that emerges is the co-occurrence of Self-Grooming with Immobile or Other behaviours: note how in regime D (which has the highest probability of Self-Grooming) these are the most prevalent.

Fig. 9 Parameters for the GBM with \(|Z|=7\) trained on Adult mice. For \(\Omega \) (leftmost panel) we show the transition probabilities: underneath the \(Z^{[t+1]}\) labels, we also report the steady-state probabilities (first row) and the expected dwell times (in BTIs, second row). The other three panels show the emission probabilities \(\Psi _k\) for each mouse as Hinton plots. We omit zeros before the decimal point and suppress values close to 0 (at the chosen precision)

Fig. 10 Ethogram for the regime (GBM \(|Z|=7\)) and individual behaviour probabilities for a run from cage B. In all but the light status, darker colours signify higher probability: the hue is purple for Z and matches the assignment of mice to variables \(X_k\) otherwise. The light status is indicated by white for lights-on and black for lights-off. Missing data is indicated by grey bars

Our quality-of-fit analysis (Sect. 5.2.5) highlighted cages D and F as exhibiting deviations in the behaviour dynamics compared to the rest of the cages. To investigate these further, we plotted the parameters of the per-cage models for these two cases in Fig. 11, and compare them against an ‘inlier’ cage, L. Note that both the latent states and the mouse identities are only identifiable up to a permutation (this was indeed the motivation for the GBM construct). To make comparison easier, we optimised the permutations of both the latent states and the ordering of mice to maximise (on a per-cage basis) the agreement with the global model. The emission dynamics (\(\Psi \)) for the ‘outliers’ are markedly different from the global model. Note for example how the feeding dynamics for cages D and F do not exhibit the same pattern as in the GBM and cage L: in both these cases, the same feeding regime G is shared by mice 1 and 3. In the case of cage D, there is also evidence that a 6-regime model suffices to explain the dynamics (note how regimes B and E are duplicates with a high switching frequency between the two).
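The state alignment uses the Hungarian algorithm (as noted in the caption of Fig. 11); a sketch follows, where we score the agreement between emission matrices by their element-wise product (one of several reasonable similarity choices, assumed here for illustration).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_states(psi_cage, psi_global):
    """Permute per-cage latent states to best match the global model's.
    psi arrays have shape (|Z|, K, B): states x mice x behaviours."""
    sim = np.einsum('zkb,wkb->zw', psi_cage, psi_global)  # state-pair agreement
    _, mapping = linear_sum_assignment(-sim)              # maximise total agreement
    return mapping  # mapping[z] = global state matched to per-cage state z
```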

Fig. 11 Parameters for the per-cage models (\(|Z|=7\)) for cages D, F and L. The order of the latent states is permuted to maximise the similarity with the global model (using the Hungarian algorithm) for easier comparison. The plot follows the arrangement in Fig. 9

5.2.7 Anomaly Detection

We used the model trained on our ‘normal’ demographic to analyse data from ‘other’ cages: i.e. anomaly detection. This capacity is useful e.g. to identify unhealthy mice, strain-related differences or, as in our proof of concept, the evolution of behaviour with age. In Fig. 12 we show the trained GBM evaluated on data from both the adult (blue) and young (orange) demographics in IMADGE. Apart from two instances, the \(\widehat{\mathcal {L}}\) is consistently lower in the younger group than in the adult demographic: moreover, for all cages where we have data in both age groups, \(\widehat{\mathcal {L}}\) is always lower for the young mice. Indeed, an optimised binary threshold achieves 90% accuracy, and a t-test on the two subsets indicates significant differences (p-value \(=1.1\times 10^{-4}\)). Given that we used mice from the same strain (indeed, even from the same cages), the video recordings are very similar: consequently, we expect the ALM to have similar performance on the younger demographic, suggesting that the differences arise from the behaviour dynamics.
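This comparison can be reproduced along the following lines, given per-cage \(\widehat{\mathcal {L}}\) scores under the adult-trained GBM (the array names are hypothetical):

```python
import numpy as np
from scipy.stats import ttest_ind

def separability(lhat_adult, lhat_young):
    """Two-sample t-test on the per-cage L_hat scores, plus the accuracy of
    the best binary threshold (adults score higher, per Fig. 12)."""
    _, p_value = ttest_ind(lhat_adult, lhat_young)
    thresholds = np.unique(np.concatenate([lhat_adult, lhat_young]))
    accuracy = max(
        np.mean(np.concatenate([lhat_adult >= t, lhat_young < t]))
        for t in thresholds
    )
    return p_value, accuracy
```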

Fig. 12 \(\widehat{\mathcal {L}}\) scores (x-axis) of the GBM on each cage (y-axis, left) in the adult/young age groups, together with the accuracy of a binary threshold on the \(\widehat{\mathcal {L}}\) (scale on the right)

5.2.8 Analysis of Young Mice

Training the model from scratch on the young demographic brings up interestingly different patterns. Firstly, a clear plateau emerged at \(|Z|=6\) this time, as shown in Fig. 13. Figure 14 shows the parameters for the GBM with \(|Z|=6\) after optimisation on the young subset. It is noteworthy that the Immobile state is less pronounced (in regime D), which is consistent with the younger mice being more active. Interestingly, while there is a regime associated with Feeding, it is the same for all mice and also much less pronounced: recall that for the adults, the probability of feeding was 0.7 in each of the Feeding regimes. This could indicate that the pecking order, at least at the level of feeding, develops with age.

Fig. 13 \(\widehat{\mathcal {L}}\) as a function of \(|Z| \in \{2, 3, \ldots , 7\}\), with each cage as initialiser. The average (per |Z|) is shown as a blue cross

Fig. 14 GBM parameters on the Young mice data for \(|Z|=6\). The arrangement is as in Fig. 9

6 Discussion

In this paper we have provided a set of tools for biologists to analyse the individual behaviours of group-housed mice over extended periods of time. Our main contribution was the novel GBM—an HMM equipped with a permutation matrix for identity matching—to analyse the joint behaviour dynamics across different cages. This evidenced interesting dominance relationships, and also flagged significant deviations in an alternative, young age group. In support of the above, we released two datasets: ABODe for training behaviour classifiers and IMADGE for modelling group dynamics (upon which our modelling is based). ABODe was used to develop and evaluate our proposed ALM, which automatically classifies seven behaviours despite clutter and occlusion.

Since our end-goal was a working pipeline to allow us to model the mouse behaviour, the tuning of the ALM leaves room for further exploration, especially as regards architectures for the BC. In future work we would like to analyse other mouse demographics. Much of the pipeline should work “out of the box”, but to handle mice of different colours to those in the current dataset, it may be necessary to annotate more data for the ALM and for the TIM.

Some contemporary behavioural systems use pose information to determine behaviour. Note that the STLT explicitly uses the BBox geometry within the attention architecture of the layout branch: nonetheless, the model was inferior to the LFB. When it comes to limb poses, Camilleri, Zhang, Bains, Zisserman, and Williams (2023, sec. 5.2) showed that it is very difficult to obtain reliable pose information in our cage setup due to the level of occlusion. If, however, in future work, such pose estimation can be made reliable enough in the cluttered environment of the home-cage, it could aid in improving the classification of some behaviours, such as Self-Grooming.