This work investigates the Journeys of June of 2013 in Brazil, with a special focus on the distributed and transient leadership of the movement, the ideological spectra it represented, and the technopolitical nature of such events. We present a longitudinal analysis based on data from surveys (n=579) and 252 Facebook activist pages (n=684,361 posts) from the Right, the Traditional Left, and the Post-June Left networks, covering a five-year timeframe. We also propose a multi-method approach for longitudinal studies of network-movements, in which we cross the survey results with Facebook data using data mining and information extraction techniques, particularly automated topic modeling with the LDA algorithm. We further propose “engagement per topic” as a useful metric for determining pages’ influence from a non-trivial perspective on the data, moving beyond analyses based merely on the relevance of single posts and making the social data more robust for scientific research. We consider the June movement a technopolitical event, one that used hybrid spaces (simultaneously online and offline) to foster political participation across the ideological spectrum. This multi-method perspective was therefore crucial to uncovering the movement’s multi-layering practices. We have learned that different leftist movements fostered the Journeys and motivated the transient leading roles, which ultimately opened room for the right-leaning agenda during June 2013.
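The “engagement per topic” metric can be sketched in a few lines: fit a topic model, then attribute each post’s engagement to topics in proportion to its topic mixture. The snippet below is a minimal illustration with invented posts and engagement counts, using scikit-learn’s LDA as a stand-in for the paper’s actual pipeline.

```python
# Hypothetical data; sklearn's LDA stands in for the paper's topic model.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "protest transport fare increase street march",
    "election candidate campaign vote congress",
    "fare bus metro protest city hall",
    "vote election poll candidate debate",
]
engagement = np.array([120, 45, 80, 60])  # likes + shares + comments per post

X = CountVectorizer().fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)          # one row per post; rows sum to 1

# Attribute each post's engagement to topics in proportion to its topic mix,
# so a topic's score aggregates many posts rather than single-post relevance.
engagement_per_topic = doc_topic.T @ engagement
print(engagement_per_topic)
```

Because each post's topic mixture sums to one, total engagement is conserved across topics; the metric redistributes it rather than re-counting it.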
The Roman economy has been defined as an agrarian regime in which wheat cultivation was combined with livestock farming and intensive cash crops such as wine and olive oil. Possibilities for economic growth in a winegrowing area such as the Laetanian region in Hispania Citerior depended upon changes in agrarian productivity but were subject to agro-ecological and agro-economic endowments that could affect settlement patterns, fluctuations in population, the forms of production related to vineyard crop capacities, the spread of new techniques of cultivation and processing, and the adoption of new technological advances. The combination of these factors explains how comparative advantages over other winegrowing territories arose, achieved through intensification and specialization processes that generated a winemaking surplus capable of being traded in different overseas markets.
Spatial humanities are a sub-discipline of digital humanities based on geographic information systems (GIS) and timelines, providing an effective integrating and contextualizing function for geo-cultural attributes. As information systems drawing on multiple sources and formats, they create visual indexes for diverse cultural data. Spatiotemporal interfaces provide new methods of integrating primary source materials into web-based interactive and 3D visualizations. Using GIS gazetteer-style spreadsheets for collecting and curating datasets, we are able to chart the extent of specific traits of cultural information via maps.
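A gazetteer-style spreadsheet can feed a map layer directly. The sketch below, with hypothetical place names and column headings, converts a small gazetteer table into GeoJSON, a format most web mapping libraries consume.

```python
# Minimal sketch: gazetteer-style CSV -> GeoJSON map layer.
# Column names and entries are illustrative, not from the text.
import csv
import io
import json

gazetteer_csv = """name,lat,lon,period
Rome,41.89,12.48,Imperial
Tarraco,41.12,1.25,Imperial
"""

features = []
for row in csv.DictReader(io.StringIO(gazetteer_csv)):
    features.append({
        "type": "Feature",
        # GeoJSON orders coordinates as [longitude, latitude].
        "geometry": {"type": "Point",
                     "coordinates": [float(row["lon"]), float(row["lat"])]},
        "properties": {"name": row["name"], "period": row["period"]},
    })

geojson = {"type": "FeatureCollection", "features": features}
print(json.dumps(geojson)[:60])
```

The `period` property is where a timeline interface would hook in, filtering features by the temporal attribute curated in the spreadsheet.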
Election forecasting has traditionally been dominated by subjective surveys and polls, or by methods centered upon them. We have developed a novel platform for forecasting elections based on agent-based modeling (ABM) that is entirely independent of surveys and polls. The platform uses statistical results from objective data, along with simulation models, to capture how voters have voted in past elections and how they are likely to vote in an upcoming one. We then screen for models that reproduce results very close to the actual results of past elections, and use these models to forecast an upcoming election by combining extrapolated historical demographic data with updated data on economic growth, employment, crime rates, and other factors. Here, we report the results of two experiments in live electoral forecasting over the past year: the 2020 general election in Taiwan and six states in the 2020 general election in the United States. Our mostly objective method may transform how elections are forecasted and studied.
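The screen-then-forecast loop can be illustrated with a deliberately tiny model: simulate a past election under many candidate parameter values, retain the parameters that reproduce the known result, and rerun the retained models with updated covariates. All numbers below are invented, and the single-parameter voter rule is a stand-in for the platform's much richer demographic and economic inputs.

```python
# Toy sketch of the screen-then-forecast loop described above.
import numpy as np

rng = np.random.default_rng(0)
past_actual_share = 0.52                 # incumbent share in the past election
past_econ, future_econ = 0.8, 0.6        # normalised economic-growth index

def simulate(beta, econ, n_voters=10_000):
    # Each voter backs the incumbent with probability sigmoid(beta * econ).
    p = 1 / (1 + np.exp(-beta * econ))
    return rng.binomial(n_voters, p) / n_voters

# Screen: retain betas whose simulated past result lands within 1 point
# of the actual past result.
candidates = np.linspace(-1, 1, 201)
kept = [b for b in candidates
        if abs(simulate(b, past_econ) - past_actual_share) < 0.01]

# Forecast: average the retained models' predictions under updated data.
forecast = np.mean([simulate(b, future_econ) for b in kept])
print(round(float(forecast), 3))
```

The essential point is that no poll enters the loop: calibration comes entirely from past results and objective covariates.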
This chapter discusses the challenges and opportunities of incorporating computational social science (CSS) into political science. Using an original dataset of quantitative methods courses offered at the top-40 schools, this study shows that CSS courses are currently underrepresented, but their number is likely to grow because of competition among schools. Yet this trend is likely to stay confined to North American and Western European schools. Collaboration in CSS also requires reconciling research priorities in engineering and physics (prediction) with those in political science (explanation). Regarding research, big data does not mitigate concerns about causation and representativeness, but simulations allow the study of rare phenomena and of phenomena for which experiments are not appropriate. By running thousands of experiments at various combinations of explanatory variables, simulations explore counterfactuals, assess the explanatory power of competing theories, and make forecasts.
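The experiment-sweep idea can be sketched as follows: a toy opinion-dynamics model (not from the chapter) is run at each combination of two explanatory variables, and the resulting outcome grid supports counterfactual comparisons such as "what if social influence were stronger?".

```python
# Sketch of exploring counterfactuals by sweeping explanatory variables.
# The model itself is a stand-in; the point is the experiment grid.
import itertools
import random
import statistics

def toy_model(influence, noise, seed):
    # 100 agents pull toward the group mean; return final opinion spread.
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(100)]
    for _ in range(50):
        mean = sum(opinions) / len(opinions)
        opinions = [o + influence * (mean - o) + rng.gauss(0, noise)
                    for o in opinions]
    return statistics.pstdev(opinions)    # low spread = consensus

grid = itertools.product([0.1, 0.5, 0.9], [0.0, 0.05])
results = {(inf, nz): statistics.mean(toy_model(inf, nz, s) for s in range(10))
           for inf, nz in grid}

# Counterfactual reading: stronger influence yields tighter consensus.
print(results[(0.9, 0.0)] < results[(0.1, 0.0)])
```

Averaging each cell over ten seeds separates the systematic effect of the explanatory variables from run-to-run randomness, which is the logic behind running thousands of such experiments at scale.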
There are still many sociologists who are skeptical of the findings of big-data-based analysis of social data, questioning the potential of this knowledge production and its contribution to the scientific discourse of sociology. The chapter shows that this tension can be addressed through a redefinition of sociology's methodological basis: the organic incorporation of data science know-how into its methods; the combined application of qualitative and quantitative analysis; and the use of knowledge-driven science instead of a data-driven approach. The theoretical, methodological, and topical pathways between traditional and computational sociology emerge gradually along the chapter, which also includes plenty of illustrative examples of research situated at the interplay between sociology and data science. As our overview shows, there are new possibilities for sociological research, which are, in some sense, by-products of information science. We introduce re...
The collective behaviour of people adopting an innovation, product or online service is commonly interpreted as a spreading phenomenon throughout the fabric of society. This process is arguably driven by social influence, social learning, and external effects such as the media. Observations of such processes date back to the seminal studies by Rogers and Bass, and their mathematical modelling has taken two directions: one paradigm, called simple contagion, identifies adoption spreading with an epidemic process; the other, named complex contagion, is concerned with behavioural thresholds and successfully explains the emergence of large cascades of adoption resulting in the rapid spreading often seen in empirical data. The observation of real-world adoption processes has lately become easier due to the availability of large digital social network and behavioural datasets. This has allowed the simultaneous study of network structures and dynamics of online service adoption, shedding light on ...
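The two paradigms can be contrasted on the same toy network. In the sketch below (a ring lattice with invented parameters), simple contagion transmits independently per adopted contact, while complex contagion requires a threshold number of adopted neighbours.

```python
# Toy contrast between simple contagion (per-contact probability) and
# complex contagion (adoption above a neighbour threshold) on a ring lattice.
import random

N = 100
# Each node is linked to its two nearest neighbours on either side.
neighbours = {i: [(i - 1) % N, (i + 1) % N, (i - 2) % N, (i + 2) % N]
              for i in range(N)}

def spread(rule, steps=50, seed=1):
    rng = random.Random(seed)
    adopted = {0, 1}                      # small seed cluster
    for _ in range(steps):
        new = {node for node in range(N)
               if node not in adopted and rule(node, adopted, rng)}
        adopted |= new
    return len(adopted)

# Simple contagion: each adopted neighbour independently transmits w.p. 0.2.
simple = spread(lambda n, a, rng: any(
    v in a and rng.random() < 0.2 for v in neighbours[n]))

# Complex contagion: adopt only once at least 2 of 4 neighbours adopted.
complex_ = spread(lambda n, a, rng: sum(v in a for v in neighbours[n]) >= 2)

print(simple, complex_)
```

On this clustered lattice the threshold rule sustains a steady deterministic cascade from the seed cluster, while the probabilistic rule advances stochastically, illustrating why the two paradigms fit different empirical adoption curves.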
Agent-based simulation models often lack a direct relation to their target systems in the real world; instead they describe artificial societies, using stylised facts as a basis for modelling. The individuals of these artificial societies are usually endowed with very few capabilities, such that their resemblance to human beings is poor. Nevertheless, they offer insights into the emergence of phenomena such as segregation, opinion formation and norm innovation, to name a few, that can often be observed in real-world societies. More often than not, computational social scientists are satisfied with generating emergence effects that can also be observed in real-world scenarios, and believe that models of artificial societies explain mechanisms that are mostly unobservable in real-world settings. On the other hand, it is desirable to describe the real-world mechanisms in more detail before starting the modelling enterprise, i.e., to endow the software agents of a computational model with more of the capabilities of human beings than is usually done in stylised-fact models, since only such a strategy allows for structural validity, where the macro effects are generated in a manner more similar to reality. Proceeding this way, one must also take into account the problems that arise from measuring opinions and attitudes in empirical settings.
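A classic stylised-fact model of the kind described is Schelling-style segregation: agents endowed with only one capability, a similarity preference, still generate macro-level segregation. A minimal sketch follows (toy grid size, density and threshold, not tied to any particular study):

```python
# Schelling-style segregation sketch: unhappy agents relocate at random,
# and mean neighbour-similarity rises anyway. All parameters are invented.
import random

rng = random.Random(42)
SIZE, THRESHOLD = 20, 0.5
# Two agent types (0, 1) plus empty cells (None) on a torus grid.
grid = [[rng.choice([0, 1, None]) for _ in range(SIZE)] for _ in range(SIZE)]

def neighbours(x, y):
    return [grid[(x + dx) % SIZE][(y + dy) % SIZE]
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def similarity(x, y):
    nbrs = [n for n in neighbours(x, y) if n is not None]
    return sum(n == grid[x][y] for n in nbrs) / len(nbrs) if nbrs else None

def mean_similarity():
    sims = [similarity(x, y) for x in range(SIZE) for y in range(SIZE)
            if grid[x][y] is not None and similarity(x, y) is not None]
    return sum(sims) / len(sims)

before = mean_similarity()
for _ in range(20_000):
    x, y = rng.randrange(SIZE), rng.randrange(SIZE)
    sim = similarity(x, y) if grid[x][y] is not None else None
    if sim is not None and sim < THRESHOLD:      # unhappy agent moves
        empties = [(i, j) for i in range(SIZE) for j in range(SIZE)
                   if grid[i][j] is None]
        i, j = rng.choice(empties)
        grid[i][j], grid[x][y] = grid[x][y], None
after = mean_similarity()
print(round(after - before, 3))
```

The agents here resemble humans only minimally, which is exactly the point of the passage: the macro effect emerges, but whether the micro mechanism matches reality is a separate, structural-validity question.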
Computational models of human societies are usually designed according to protocols such as the ODD protocol, or supported by ontologies. An alternative to these approaches is to base simulation models on a structuralist reconstruction of the theory underlying the simulation model. This makes it necessary to identify all the entities the theory is about, all the relations between those entities, and the axioms or rules the theory postulates. This defines a "full model" of a theory in the sense given to this term in the non-statement view of structuralism, and this definition can be translated into a simulation model, preferably using a declarative simulation language, though a translation into a procedural language, as used more often in the widespread simulation toolboxes, is also possible. The paper uses examples of such conversions between simulation models and structuralist reconstructions of mid-range social theories to describe the potential of this approach, which can also be used to estimate theoretical parameters of simulation models when simulation output can be compared to empirical data taken from intended applications of the underlying theory.
This paper analyses rural settlement patterns in the Lower Rhine frontier zone to elucidate the role of forts in the rural economy. Von Thünen's model of rural marketing suggests that market centres attract intensive cultivation, making them identifiable through spatial analysis of rural settlements. Environmental factors that influenced production capacity, however, can also be expected to exert a strong influence on settlement location, so a multivariate method of spatial analysis is necessary. Using a process of comparative modelling with logistic regression analysis, I test the hypotheses that rural settlements responded to the location of market centres, both civilian and military. I use univariate analysis of settlement territories to identify influential local environmental factors and combine these into a logistic regression model. Then I add a market potential (MP) variable that quantifies the accessibility of marketing opportunities from any location within a market system...
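The comparative-modelling step can be sketched as fitting a baseline environmental model and a model augmented with the MP variable, then comparing their log-likelihoods. The data below are synthetic stand-ins for settlement locations, not the paper's dataset, and scikit-learn's logistic regression stands in for whatever implementation the analysis used.

```python
# Sketch: does adding a market-potential covariate improve a
# settlement-presence model? Synthetic data, invented coefficients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
soil = rng.normal(size=n)                  # environmental covariate
mp = rng.normal(size=n)                    # market potential: access to markets
# Synthetic "settlement present" outcome driven by both covariates.
logit = 0.8 * soil + 1.2 * mp
settled = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

def log_likelihood(model, X):
    p = model.predict_proba(X)[:, 1]
    return np.sum(settled * np.log(p) + (1 - settled) * np.log(1 - p))

base = LogisticRegression().fit(soil.reshape(-1, 1), settled)
full = LogisticRegression().fit(np.column_stack([soil, mp]), settled)

# A higher likelihood for the MP-augmented model suggests market access
# helps explain settlement siting beyond environment alone.
print(log_likelihood(full, np.column_stack([soil, mp])) >
      log_likelihood(base, soil.reshape(-1, 1)))
```

In a real comparison the likelihood gain would be penalised for the extra parameter (e.g. via AIC) before concluding that the MP variable earns its place in the model.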