Enabling Social Robots to Perceive and Join Socially Interacting Groups Using F-formation: A Comprehensive Overview
Abstract
1 Introduction

Existing Surveys and Reviews | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) | (10) | (11) |
---|---|---|---|---|---|---|---|---|---|---|---|
Adriana et al. [157] | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ |
Francesco et al. [146] | ✗ | ✗ | ✓✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ |
Sai Krishna et al. [117] | ✗ | ✓ | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ |
This survey | ✓✓ | ✓✓ | ✓✓ | ✓✓ | ✓✓ | ✓✓ | ✓✓ | ✓✓ | ✓✓ | ✓✓ | ✓✓ |
2 Social Spaces in Group Interaction
2.1 What Is F-formation?
2.2 Different Formations (Including F-formations) Possible during Interaction





2.3 Best Position for a Robot to Join in a Group
No. | Formation (Before) | No. of People | Formation (After joining) |
---|---|---|---|
a | Side-by-side | 2ᵃ | Side-by-side, side-by-side with one outsider, Triangle |
b | Vis-a-vis | 2 | Triangle |
c | L-shaped | 2 | Triangle, reverse semi-circular |
d | Reversed L-shaped | 2 | Semi-circular |
e | Wide V shaped | 2ᵃ | Triangle, semi-circular |
f | Spooning | 2 | Side-by-side with headliner, side-by-side with outsider |
g | Z-shaped | 2 | Triangle |
h | Line | 3 | Line |
i | Column | 3 | Column |
j | Diagonal | 3 | Diagonal, V-shaped |
k | S-by-s with one headliner | 3ᵃ | S-by-s with one headliner |
l | S-by-s with outsider | 3ᵃ | Semi-circular |
m | V-shaped | 7 | Triangle |
n | Horseshoe | 5ᵃ | Pentagon |
o | Semi-circular | 4 | Circular |
p | Semi-circular with one leader in the middle | 5 | Circle |
q | Square | 4 | Circular, horseshoe |
r | Triangle | 3 | Circle, semi-circular |
s | Circle | 6 | Circle |
t | Circle with outsider | 8 | Circle with outsider |
u | Geese | 2ᵃ | S-by-s with outsider |
v | Lone wolf | 1 | Vis-a-vis, S-by-s, L-shaped |
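The geometric intuition behind rows such as (b), where a robot turns a vis-a-vis pair into a triangle, can be sketched in code. The helper below is purely illustrative (its name, the midpoint approximation of the o-space centre, and the choice of the perpendicular join point are assumptions, not a cited system): it places the robot on the o-space circle of a facing pair, equidistant from both people, oriented toward the shared centre.

```python
import math

def join_position(p1, p2):
    """Candidate standing point for a robot joining a vis-a-vis pair
    so that the resulting arrangement is a triangle (row b above).
    Assumes the two interactants face each other, so the o-space
    centre is approximately their midpoint."""
    cx, cy = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2  # o-space centre
    r = math.dist(p1, p2) / 2                           # o-space radius
    # Unit vector along p1 -> p2, then its perpendicular: the robot
    # stands on the o-space circle, equidistant from both people.
    ux, uy = (p2[0] - p1[0]) / (2 * r), (p2[1] - p1[1]) / (2 * r)
    pos = (cx - uy * r, cy + ux * r)
    # Once in place, the robot orients toward the o-space centre.
    heading = math.atan2(cy - pos[1], cx - pos[0])
    return pos, heading
```

For a pair standing at (0, 0) and (2, 0) facing each other, this suggests joining at (1, 1), facing the centre (1, 0).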
3 Research Chronology

Year of Publication | Studies, Methods, and Techniques | Total |
---|---|---|
2004 | Geometric reasoning on sonar data [12]. | 1 |
2006 | Wizard-of-Oz (WoZ) study on spatial distances related to f-formations [74]. | 1 |
2009 | Clustering trajectories tracked by static laser range finders [80], trajectory classification by SVM [141]. | 2 |
2010 | Probabilistic generative model on IR tracking data [62], WoZ study of robot's body movement [90], SVM classification using kinematic features [185]. | 3 |
2011 | Analysis of different f-formations for information seeking [96], Hough-transform-based voting [43], graph clustering [72], a study on transitions between f-formations on interaction cues [100], a computational model of interaction space for virtual humans extending f-formation theory [107], a study of physical distancing from a robot [103], utilizing geometric properties of a simulated environment [104], a study to relate f-formations with conversation initiation [148], Gaussian clustering on camera-tracked trajectories [44], risk-based robot navigation [132]. | 10 |
2012 | Application of f-formations in collaborative cooking [111], Kinect-based tracking with rules [95], WoZ study on social interaction in nursing facilities [87], a study of robot gaze behaviors in group conversations [184], velocity models (while walking) [102], SVM with motion features [135], HMM [56]. | 7 |
2013 | Spatial geometric analysis on Kinect data [51, 53], analysis of f-formation in blended reality [47], a comparison of [43] and [72] [144], exemplar-based approach [93], multi-scale detection [145], Bag-of-Visual-Words-based classifier [162], Inter-Relation Pattern Matrix [27], HMM classifiers [99], O-space based path planning [60], Multi-hypothesis social grouping and tracking for mobile robots [94]. | 11 |
2014 | Hough Voting (HVFF), graph-cut f-formation (GCFF) [122], game theory based approach [166], correlation clustering algorithm [11], reasoning on proximity and visual orientation data [48], effects of cultural differences [78], HMM to classify accelerometer data [71], iterative augmentation algorithm [36], adaptive weights learning methods [126], estimating lower-body pose from head pose and facial orientation [183], search-based method [52], study on group-approaching behavior [82], spatial activity analysis in a multiplayer game [79]. | 12 |
2015 | Robust Tracking Algorithm using Tracking learning detection (TLD) [10], GCFF-based approach [146], Correlation Clustering algorithm [10], multimodal data fusion [9], spatial analysis in collaborative cooking [110], Group Interaction Zone (GIZ) detection method [35], study on influencing formations by a tour guide robot [81], joint inference of pose and f-formations [152], participation state model [149], SALSA dataset for evaluating social behavior [8], multi-level tracking based algorithm [170], Structural SVM using Dynamic Time Warping loss [150], Long-Short Term Memory (LSTM) network [3], influence of approach behavior on comfort [15], Sensor-based control task for joining [105]. | 15 |
2016 | F-formation applied to mobile collaborative activities [161], subjective annotations of f-formation [186], game-theoretic clustering [167], study of display angles in museum [75], mobile co-location analysis using f-formation [143], proxemics analysis algorithm [129], review of human group detection approaches [158], LSTM based detection in ego-view [4], Tracking people through data fusion for inferring social situation [164], Detecting group formations using iBeacon technology [83], [163]. | 11 |
2017 | Haar cascade face detector based algorithm [88, 115], weakly-supervised learning [165], temporal segmentation of social activities [41], omnidirectional mobility in f-formations [182], review of multimodal social scene analysis [7], 3D group motion prediction from video [77], survey on social navigation of robots [33], a study on robot's approaching behavior [16], heuristic calculation of robot's stopping distance [136], a study on human perception of robot's gaze [168], computational models of spatial orientation in virtual reality (VR) [118], Socially acceptable robot navigation over groups of people [171]. | 13 |
2018 | Optical-flow based algorithm in ego-view [131], meta-classifier learning using accelerometer data [57], human-friendly approach planner [138], discussion on improved teleoperation using f-formation [113], effect of spatial arrangement in conversation workload [97], study of f-formation dynamics in a vast area [46]. | 6 |
2019 | Study on teleoperators following f-formations [117], analysis on conversation floors prediction using f-formation [127], empirical comparison of data-driven approaches [67], LSTM networks applied on multimodal data [134], robot's optimal pose estimation in social groups [116], review of robot and human group interaction [181], Staged Social Behavior Learning (SSBL) [54], Euclidean distance based calculation after 2D pose estimation [114], Robot-Centric Group Estimation Model (RoboGEM) [159], DoT-GNN: Domain-Transferred Graph Neural Network for Group Re-identification [70], Audio-based framework for group activity sensing [156]. | 11 |
2020 | Difference in spatial group configurations between physically and virtually present agents [68], Conditional Random Field with SVM for jointly detecting group membership, f-formation and approach angle [18, 21], Group Re-Identification With Group Context Graph Neural Networks [188], Improving Social Awareness Through DANTE: Deep Affinity Network for Clustering Conversational Interactants [153]. | 4 |
2021 | Conversational Group Detection with Graph Neural Networks [160] | 1 |
2022 | Graph-based group modelling for backchannel detection [147], Pose generation for social robots in conversational group formations [169], Conversation group detection with spatio-temporal context [154] | 3 |
4 Review Methodology
4.1 Generic Survey Framework with Possible Concern Areas

4.2 Search Strategy
4.3 Inclusion and Exclusion Criteria
5 Cameras and Sensors for Scene Capture
5.1 Camera Views

5.2 Other Sensors

Classification | Application Areas/Details | References |
---|---|---|
Ego-centric (first person view or robot view) [Section 5.1] | – Robotics and HRI – Robot vision in telepresence – Drone/robot surveillance | [68], 1 [114], 1 [54], 1 [159], 1 [117], 2 Hamlet cameras and 1 robot camera [116], 1 [138], 1 [113], 1 [131], 1 [88], 1 [115], multi [77], 1 [4], 2 [75], 1 [158], 1 [161], 1 [3], 1 [149], 1 [81], 1 [10], multi [110], multi [52], multi [183], 1 [36], [82], multi [48], 1 [11], 1 [58], depth camera, RGB camera [56], an omni directional camera [184], multi [107, 111], 3 [90], 4 [62, 67], robot camera [74], 2 [12], [10], 2 [80], 1 [18, 21], 1 [94] |
Exo-centric (global view) [Section 5.1] | – Social scene monitoring – Covid-19 social distancing monitoring – Human interaction detection and analysis | 1 [117], multi [57], multi [113], 1 [16], [182], multi [7], multi [77], 1 [33], 4 [168], multi [165], 8 [136], 1 [75], multi [158], multi [167], 4 [129], 2 [186], multi [150], [170], multi [8], multi [152], [146], 1 [35], multi [146], multi [9], multi [110], [79], 1 [55], 4 [126], a single monocular camera [166], 3 overhead fish eye camera used for training classifier [71], multi [183], multi [52], 1 [36], 1 [27], multi [51], 1 [99], 4 [145], [93], 1 [162], multi [144], 7 [53], 4 [87], 1 [135], 2 [95], 1 [34], multi [111], 1 [44], 1 [100], multi [43], 1 [103], 1 [72], multi [185], 4+2 [62], 4 webcams [74], an omnidirectional camera [12], [122], [187] |
Using other sensors [Section 5.2] | Audio, sociometric badges, Blind sensor, prime sensor, Wi-Fi-based tracking, laser-based tracking, depth sensor, band radios, touch receptors, RFID sensors, smartphones, UWB beacon | [8], [48], Kinect depth sensor [51], [58], [56], [100], [161], speakers [127], wearable sensors [134], [33], [136], UWB localization beacons, Kinect [168], [80], [141], [143], RFID tag [75], [89], [51], [102], [107], Asus Xtion Pro sensor [114], ZED sensors [131], single worn accelerometer [57], Kinect sensor [138], Microsoft Speech SDK [118], speaker, Asus Xtion Pro live RGB-D sensor [16], Kinect [77], motion tracker [136], sociometric badges [7], RGB-D sensor [182], tablets [161], tablets [143], mobile sensors [158], microphone, IR beam and detector, Bluetooth detector, accelerometer [8], touch sensor [107], range sensor [184], laser sensors [102], Wi-Fi-based tracking, laser-based tracking [58], PrimeSensor, Microsoft Kinect, microphone [99], RFID sensors [47], blind sensor, location beacon [48], single worn accelerometer [71], [97], gaze animation controller [118], [148], grid world environment [104], ethnography method [96], iBeacon [83], 2D range sensing [94], audio [156] |
Other relevant literature | – | [46, 47, 60, 63, 78, 84, 112, 163] |
6 Categorization of Methods/Techniques

6.1 Rule-Based Method

6.1.1 Fixed Rules Based
6.1.2 Geometric Reasoning Based
6.2 ML-Based Method

6.2.1 Supervised Approaches
6.2.2 Unsupervised Approaches
6.2.3 Mixed Approaches
Classification | References |
---|---|
Classical rule-based AI methods [Fixed model based learning and prediction method based on certain geometric assumptions and/or reasoning (Section 6.1).] | Approach behavior [141], sociologically principled method [43], proposed model [148], The Compensation (or Equilibrium) Model, The Reciprocity Model, The Attraction-Mediation Model, The Attraction-Transformation Model [103], rapid ethnography method [96], digital ethnography [111], GroupTogether system [95], museum guide robot system [184], extended f-formation system [53], Multi-scale Hough voting approach [145], Hough for f-Formations, Dominant-sets for f-Formations, [144], PolySocial Reality-F-formation [47], two-Gaussian mixture model, O-space model [60], Wi-fi based tracking, laser-based tracking, vision-based tracking [58], heat map based f-formation representation [51, 166], group tracking and behavior recognition [55], search-based method [52], Estimating positions and the orientations of lower bodies [183], Kendon's diagramming practice [110], GROUP [170], GCFF [146], [81], HBPE [152], Link Method, Interpersonal Synchrony Method [129], Frustum of attention modeling [167], f-formation as dominant set model [186], HRI motion planning system [182], footing behavior models (Spatial-Reorientation Model, Eye-Gaze Model) [118], matrix completion for head and body pose estimation (MC-HBPE) method [7], [136], Haar cascade face detector algorithm [88], Haar cascade face detector algorithm [115], [168], Approaching method [138], Measuring Workload Method [97, 116, 117], conversation floors estimation using f-formation [127], f-formation as dominant set model [187], geometry based [169] |
ML-based AI methods [Data-driven models for learning and prediction using supervised, semi-supervised, unsupervised and reinforcement learning (ML/DL) or any such techniques (Section 6.2).] | [12], IR tracking techniques [62], SVM classifier [185], Grid World Scenario [104], graph-based clustering method [34, 44, 72], Hidden Markov Model (HMM) [56], proposed method with o-space and without o-space (SVM) [135], Region-based approach with level set method [87], IRPM [27], graph-based clustering algorithm [162], voting based approach [93], HMMs [99], SVM [36], Transfer Learning approaches [126], method with HMM [71], head pose estimation technique [11], [10], MC-HBPE [9], GIZ detection [35], [8], Supervised Correlation Clustering through Structured Learning [150], LSTM, Hough-Voting (HVFF) [3], LSTM [4], 3D skeleton reconstruction using patch trajectory [77], Human aware motion planner [33], [41], Learning Methods for HPE and BPE [165], Group bAsed Meta-classifier learning Using local neighborhood Training (GAMUT) [57], Group detection method [131], [67], LSTM network [134], Robot-Centric Group Estimation Model (RoboGEM) [159], Multi-Person 2D pose estimation [114], SSBL [54], multi-class SVM classifier [18], GNN based [70, 147, 160, 188], DANTE [153], Based on Spatio-Temporal Context [154], Geometry based and data-driven methods [169] |
Other studies | WoZ [74], WoZ paradigm [15], WoZ [90] |
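Among the rule-based methods above, the Hough-voting family (HVFF [43]) has a particularly compact core idea: each person casts a vote for an o-space centre a fixed stride ahead along their body orientation, and votes accumulating in the same cell define a group. The toy sketch below illustrates that idea only; the `stride` and `cell` parameters and single-vote scheme are simplifications, not the multi-Gaussian voting of the cited papers.

```python
import math
from collections import defaultdict

def hough_groups(people, stride=0.8, cell=1.0):
    """Toy Hough-voting f-formation detector in the spirit of HVFF [43].

    people: list of (x, y, theta), theta = body orientation in radians.
    Each person votes for an o-space centre `stride` metres ahead;
    votes landing in the same accumulator cell form one group.
    Returns a list of groups, each a list of indices into `people`.
    """
    bins = defaultdict(list)
    for i, (x, y, th) in enumerate(people):
        vx = x + stride * math.cos(th)   # voted o-space centre
        vy = y + stride * math.sin(th)
        bins[(round(vx / cell), round(vy / cell))].append(i)
    # Keep only cells where at least two people agree on a centre.
    return [members for members in bins.values() if len(members) > 1]
```

Two people 1.6 m apart facing each other both vote near the same centre and are grouped; an isolated bystander votes alone and is discarded.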
7 Datasets
Dataset | View (Ego/Exo) | Single/Multiple group(s) | Indoor/Outdoor [area] | Availability (Public/Private) |
---|---|---|---|---|
TUD Stadtmitte [13] | Ego | Multi-gp | outdoor [public] | private |
HumanEva II [13] | Ego | Multi-gp | indoor | private |
SALSA [174] | Exo | Multi-gp | indoor | public |
BEHAVE database [109] | Exo | Multi-gp | outdoor [public] | public |
TUD Multiview Pedestrians [13] | Exo | Multi-gp | outdoor [public] | private |
CHILL [34] | Exo | Multi-gp | — | — |
Benfold [28] | — | — | — | — |
MetroStation [34] | Exo | Multi-gp | indoor [public] | private |
TownCentre [29] | Exo | Multi-gp | outdoor [public] | private |
Indoor [34] | Exo | Multi-gp | indoor | private |
SI (Social Interactions) [101] | Exo | Multi-gp | outdoor [public] | public |
Coffee-room scenario [27] | Exo | Multi-gp | indoor | private |
CoffeeBreak [42] | Exo | Multi-gp | outdoor [private] | public |
Collective Activity [14] | Ego | Multi-gp | outdoor/indoor | private |
PETS 2007 (S07 dataset) [27] | Exo | Multi-gp | indoor [public] | private |
Structured Group Dataset [172] | Exo | Multi-gp | indoor/outdoor [public] | public |
EGO-GROUP [5] | Ego | Multi-gp | indoor/outdoor | public |
EGO-HPE [6] | Ego | Multi-gp | indoor/outdoor | public |
Mingling [71] | Exo | Multi-gp | indoor | private |
MatchNMingle [32] | Exo | Multi-gp | indoor | public |
CLEAR [151] | Exo | Single-gp | indoor | private |
Greece [126] | Exo | Multi-gp | indoor | private |
DPOSE [125] | Exo | Multi-gp | indoor | private |
BIWI Walking Pedestrians [119] | Exo | Multi-gp | outdoor [public] | private |
Crowds-By-Examples [92] | Exo | Multi-gp | outdoor [public] | private |
Vittorio Emanuele II Gallery [17] | Exo | Multi-gp | indoor [public] | private |
UoL-3D Social Interaction [40] | Ego | Single-gp | indoor | public |
Cocktail Party [173] | Exo | Multi-gp | indoor | public |
Social Interaction [39] | — | Single-gp | indoor | public |
GDet [23] | Exo (two monocular cameras on opposite corners of a room) | — | indoor | public |
IPD [76] | Exo | Multi-gp | outdoor | public |
Classroom Interaction Database [93] | Exo | Multi-gp | indoor | private |
Caltech Resident-Intruder Mouse [31] | — | — | — | — |
UT-Interaction [137] | Exo | Multi-gp | outdoor | private |
PosterData [72] | Exo | Multi-gp | outdoor | private |
Friends Meet [24, 26] | Exo | Multi-gp | outdoor | public |
Discovering Groups of People in Images (DGPI) [37] | Exo | Multi-gp | indoor | private |
Prima head pose image [61] | Ego | Single-gp | indoor | private |
NUS-HGA [35] | — | Single-gp | indoor | private |
[62] | Exo | Single-gp | indoor | private |
[185] | Exo | Single-gp | indoor | private |
[72] | Exo | Multi-gp | indoor | private |
[44] | Exo | Multi-gp | outdoor [public] | private |
[104] | Exo | — | — | — |
[56] | Ego | Single-gp | indoor | private |
Dataset using Narrative camera [106] | Ego | Single-gp | indoor | private |
[4] | Ego | Single-gp | indoor/outdoor [public] | private |
[57] | Exo | Multi-gp | indoor | private |
[131] | Ego | Multi-gp | outdoor | private |
Two laboratory-based datasets with distance measures at three predefined distances, plus a dataset with distance measurements collected in a crowded open space [114] | — | — | — | private |
RGB-D pedestrian dataset [159] | Ego | Multi-gp | outdoor [public] | private |
[74] | — | — | indoor | private |
[141] | — | — | indoor | private |
[148] | — | — | indoor | private |
[96] | — | — | indoor | private |
[103] | Ego | Single-gp | indoor | private |
Youtube videos [110, 111] | — | — | indoor | private |
[95] | Exo | Single-gp | indoor | private |
[53] | Exo | — | indoor | private |
In shopping mall [58] | Ego | — | — | private |
[51] | Exo | Single-gp | indoor | private |
[25] | Ego | — | — | private |
[15] | Ego | Single-gp | indoor | private |
DGPI dataset [167] | — | — | — | — |
[182] | Exo | Single-gp | indoor | private |
[136] | Ego, Exo | Single-gp | indoor | private |
[168] | Ego | Single-gp | indoor | private |
[88] | Ego | Single-gp | indoor | private |
[138] | Ego | Single-gp | indoor | private |
[97] | Ego | — | — | private |
Babble [65, 66] | Exo | Single-gp | indoor [public] | public |

8 Categorization of Detection Capabilities and Scale

8.1 Detection Capability

Classification | References |
---|---|
Static scene detection | [36, 51, 54, 57, 82, 113, 114, 122, 126, 129, 136, 144, 146, 166, 167, 186] |
Dynamic scene detection | [3, 4, 7, 8, 9, 10, 11, 12, 15, 16, 18, 21, 27, 34, 35, 41, 43, 44, 46, 47, 52, 53, 55, 56, 57, 58, 62, 68, 71, 72, 74, 75, 77, 79, 80, 81, 87, 88, 90, 93, 95, 96, 97, 100, 102, 103, 110, 111, 113, 115, 116, 127, 129, 131, 134, 135, 138, 141, 143, 145, 146, 148, 149, 150, 152, 159, 161, 162, 165, 166, 168, 170, 181, 182, 183, 184, 185, 187] |
8.2 Detection Scale

Classification | References |
---|---|
Single group detection | [3, 4, 12, 15, 16, 18, 21, 41, 46, 47, 48, 51, 52, 53, 54, 56, 67, 68, 74, 75, 77, 78, 79, 80, 81, 82, 88, 89, 90, 95, 96, 97, 100, 102, 103, 104, 107, 110, 111, 113, 115, 116, 118, 122, 135, 136, 138, 141, 143, 146, 148, 149, 161, 168, 170, 181, 184, 185] |
Multi-group detection | [7, 8, 9, 10, 11, 27, 34, 35, 36, 43, 44, 55, 57, 62, 71, 72, 87, 93, 99, 113, 114, 117, 126, 127, 129, 131, 134, 144, 145, 146, 150, 152, 159, 162, 165, 166, 167, 182, 183, 186, 187] |
9 Categorization of Evaluation Methods

Classification | References |
---|---|
Simulation-based evaluation (robotic simulators/virtual environment) | 2D grid environment simulated in Greenfoot [104], simulated the process of deformation of contours using P-spaces represented by Contours of the Level Set Method [87], [52], Robot Operating System implementation of PedSim [129], a simulated avatar embodied confederate [118], Gazebo [117], a simulator using Unity 3D game engine [54] |
Human experience study-based evaluation (with real robots) | [12, 15, 18, 21, 33, 56, 58, 67, 74, 88, 90, 97, 99, 100, 103, 114, 115, 116, 131, 136, 138, 148, 159, 162, 168, 182, 184] |
Accuracy/efficiency evaluation (without robot, only computation) | [4, 7, 9, 11, 27, 34, 35, 36, 41, 43, 44, 51, 52, 53, 55, 57, 62, 71, 72, 77, 87, 93, 95, 96, 110, 111, 126, 127, 129, 134, 135, 144, 145, 146, 152, 165, 166, 167, 170, 183, 185, 186, 187] |
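The accuracy-style evaluations in the last row typically report precision, recall, and F1 over detected groups using the tolerant matching criterion common in the f-formation literature: a detected group counts as correct if at least ⌈T·|G|⌉ members of a ground-truth group G are found and no more than ⌊(1−T)·|G|⌋ spurious members are included, usually with T = 2/3. The sketch below illustrates that criterion; it is not the exact scoring code of any cited paper.

```python
import math

def group_match(det, gt, tol=2/3):
    """True if detected group `det` matches ground-truth group `gt`
    under the tolerant criterion (T = tol) described above."""
    det, gt = set(det), set(gt)
    hits = len(det & gt)          # correctly found members
    spurious = len(det - gt)      # members not in the GT group
    return (hits >= math.ceil(tol * len(gt))
            and spurious <= math.floor((1 - tol) * len(gt)))

def precision_recall_f1(detections, ground_truth, tol=2/3):
    """Group-level precision/recall/F1; each detection may match at
    most one ground-truth group."""
    matched, tp = set(), 0
    for det in detections:
        for j, gt in enumerate(ground_truth):
            if j not in matched and group_match(det, gt, tol):
                matched.add(j)
                tp += 1
                break
    prec = tp / len(detections) if detections else 0.0
    rec = tp / len(ground_truth) if ground_truth else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```

With T = 2/3, a detection {1, 2, 3} matches a ground-truth group {1, 2, 4} (two of three members found, one spurious), so one correct detection out of two yields precision = recall = F1 = 0.5.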
10 Application Areas

Application Area | References |
---|---|
Drone/robotic vision | [3, 7, 9, 11, 18, 21, 27, 34, 35, 36, 43, 47, 53, 55, 68, 70, 72, 77, 90, 93, 95, 122, 126, 129, 143, 144, 145, 146, 150, 152, 158, 161, 165, 166, 167, 170, 186, 187] |
HRI (i.e., assistive robots) | [12, 15, 16, 41, 54, 56, 57, 58, 60, 67, 70, 74, 75, 78, 79, 80, 81, 82, 89, 90, 97, 99, 100, 102, 103, 104, 107, 113, 114, 131, 136, 138, 141, 147, 148, 149, 159, 160, 168, 181, 182, 183, 184, 188] |
Telepresence/teleoperation technologies | [70, 88, 113, 115, 117, 160, 188] |
Indoor/outdoor scene monitoring and surveillance | [58, 68, 90, 100, 118, 120, 185] |
Human behavior/activity and interaction analysis | [4, 8, 10, 44, 46, 48, 51, 52, 62, 70, 71, 74, 87, 96, 100, 110, 111, 127, 129, 134, 135, 147, 158, 160, 162, 165, 185, 188] |
Covid-19 and social distancing | Scope of future research |
11 Limitations, Challenges, and Future Directions: A Discussion


12 Conclusions
Acknowledgments
Footnotes
Supplemental Material
References
Publisher
Association for Computing Machinery
New York, NY, United States