2011 IEEE Third Int'l Conference on Privacy, Security, Risk and Trust and 2011 IEEE Third Int'l Conference on Social Computing, 2011
Although community detection has drawn a tremendous amount of attention across the sciences in the past decades, no formal consensus has been reached on the very nature of what qualifies a community as such. In this article we take an orthogonal approach by introducing a novel point of view to the problem of overlapping communities. Instead of quantifying the quality of a set of communities, we choose to focus on the intrinsic community-ness of one given set of nodes. To do so, we propose a general metric on graphs, the cohesion, based on counting triangles and inspired by well-established sociological considerations. The model has been validated through a large-scale online experiment called Fellows, in which users were able to compute their social groups on Facebook and rate the quality of the obtained groups. By observing those ratings in relation to the cohesion, we establish that the cohesion is a strong indicator of users' subjective perception of the community-ness of a set of people.
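The abstract does not spell out the cohesion formula. As a hedged illustration of a triangle-based cohesion score in the spirit described above, the sketch below combines a triangle-density term with an isolation term that penalizes triangles leaking outside the group; the exact combination is an assumption, not necessarily the paper's definition.

```python
from itertools import combinations
from math import comb

def cohesion(adj, group):
    """Illustrative triangle-based cohesion of a node set `group`.

    `adj` maps each node to the set of its neighbours. Triangles fully
    inside the group raise the score; triangles with exactly one vertex
    outside ("outbound") lower it. The combination below is an
    assumption inspired by the abstract, not the paper's formula.
    """
    group = set(group)
    inner = 0      # triangles with all three vertices in the group
    outbound = 0   # triangles with exactly two vertices in the group
    for u, v in combinations(group, 2):
        if v not in adj[u]:
            continue
        common = adj[u] & adj[v]
        inner += len(common & group)
        outbound += len(common - group)
    inner //= 3  # each inner triangle was counted once per inner edge
    n = len(group)
    if n < 3:
        return 0.0
    density_term = inner / comb(n, 3)            # how triangle-dense the group is
    isolation_term = inner / (inner + outbound) if inner + outbound else 0.0
    return density_term * isolation_term

# Toy example: a triangle {a, b, c} with one outside neighbour d of a and b.
adj = {"a": {"b", "c", "d"}, "b": {"a", "c", "d"}, "c": {"a", "b"}, "d": {"a", "b"}}
print(cohesion(adj, {"a", "b", "c"}))
```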
2012 International Conference on Privacy, Security, Risk and Trust and 2012 International Conference on Social Computing, 2012
Social Network Analysis has often focused on the structure of the network without taking into account the characteristics of the individuals involved. In this work, we aim at identifying how individual differences in psychological traits affect the community structure of social networks. Instead of studying only either the structural or the psychological properties of an individual, our aim is to exhibit how the psychological attributes of interacting individuals impact the social network topology. Using psychological data from the myPersonality application and social data from Facebook, we confront the personality traits of the subjects with metrics obtained after applying the C3 community detection algorithm to the social neighborhood of the subjects. We observe that introverts tend to have fewer communities and hide within large communities, whereas extroverts tend to act as bridges between more communities, which are on average smaller and of varying cohesion.
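The C3 algorithm itself is not reproduced here. As a rough sketch of the kind of per-subject metrics the study relates to personality traits (number and sizes of communities in a subject's social neighborhood), the snippet below substitutes networkx's label propagation as a stand-in community detection method; the choice of algorithm and the ego-network construction are assumptions for illustration.

```python
import networkx as nx
from networkx.algorithms.community import label_propagation_communities

def ego_community_metrics(G, ego):
    """Metrics of the social neighbourhood of `ego`, computed on the
    ego network with the ego itself removed, so that only friend-to-friend
    links shape the communities. Label propagation is only a stand-in
    for the C3 algorithm used in the paper."""
    ego_net = nx.ego_graph(G, ego)
    ego_net.remove_node(ego)
    communities = list(label_propagation_communities(ego_net))
    sizes = sorted((len(c) for c in communities), reverse=True)
    return {"n_communities": len(communities), "sizes": sizes}

# Toy usage on a small random graph.
G = nx.erdos_renyi_graph(30, 0.15, seed=1)
print(ego_community_metrics(G, 0))
```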
During the last decade, the study of large-scale complex networks has attracted a substantial amount of attention and work from several domains: sociology, biology, computer science, epidemiology. Most such complex networks are inherently dynamic, with new vertices and links appearing while some old ones disappear. Until recently, the dynamics of these networks had been less studied, and there is a strong need for dynamic network models in order to sustain protocol performance evaluations and fundamental analyses in all the research domains listed above.
In large-scale multihop wireless networks, flat architectures are not scalable. In order to overcome this major drawback, clusterization is introduced to support self-organization and to enable hierarchical routing. When dealing with multihop wireless networks, robustness is a major issue due to the dynamicity of such networks. Several algorithms have been designed for the clustering process, but as far as we know, very few studies check the robustness of their clustering protocols.
Network measurement is essential for assessing performance, and for identifying and locating problems. Two common strategies are the passive approach, which attaches specific devices to links in order to monitor the traffic that passes through the network, and the active approach, which injects explicit control packets into the network for measurement purposes. One of the key issues in this domain is to minimize the overhead in terms of hardware, software, maintenance cost and additional traffic.
The advent of large-scale multi-hop wireless networks highlights problems of fault tolerance and scale in distributed systems, motivating designs that autonomously recover from transient faults and spontaneous reconfigurations. Self-stabilization provides an elegant solution for recovering from such faults. We present a complexity analysis for a family of self-stabilizing vertex coloring algorithms in the context of multi-hop wireless networks.
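The specific family of algorithms analyzed in the paper is not reproduced here; the sketch below illustrates the generic self-stabilizing coloring rule such algorithms build on (a node in conflict with a neighbor recolors itself with the smallest free color), simulated under a randomized central daemon, which is an assumption made purely for illustration.

```python
import random

def stabilize_coloring(adj, colors, max_steps=10_000, seed=0):
    """Simple self-stabilizing greedy colouring under a randomised
    central daemon: at each step one enabled node (a node sharing its
    colour with a neighbour) recolours itself with the smallest colour
    not used in its neighbourhood. Starting from *any* colouring, the
    system converges to a proper colouring. The scheduling model is an
    illustrative assumption, not the exact model of the paper."""
    rng = random.Random(seed)
    colors = dict(colors)  # work on a copy of the (possibly corrupted) state
    for _ in range(max_steps):
        enabled = [u for u in adj if any(colors[u] == colors[v] for v in adj[u])]
        if not enabled:
            return colors  # legitimate configuration reached
        u = rng.choice(enabled)
        used = {colors[v] for v in adj[u]}
        colors[u] = min(c for c in range(len(adj[u]) + 1) if c not in used)
    raise RuntimeError("did not stabilise within max_steps")

# 4-cycle starting from an all-zero (conflicting) colouring.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(stabilize_coloring(adj, {u: 0 for u in adj}))
```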
Flat ad hoc architectures are not scalable. In order to overcome this major drawback, hierarchical routing is introduced, since it is found to be more effective. The main challenge in hierarchical routing is to group nodes into clusters, each cluster being represented by one cluster head. Conventional methods use either the connectivity (degree) or the node ID to perform the cluster head election. Such parameters are not very robust and can induce undesirable side effects. In this paper we introduce a novel measure that both forms clusters and performs the cluster head election. Analytical models and simulation results show that this new measure for cluster head election induces fewer cluster head changes than classical methods.
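The abstract does not define the measure it introduces. As an illustrative stand-in, the sketch below elects cluster heads from a neighborhood-density criterion (links inside a node's closed 1-neighborhood divided by its degree); both the criterion and the attachment rule are assumptions, not the paper's construction.

```python
from itertools import combinations

def neighborhood_density(adj, u):
    """Illustrative 'density' criterion: the number of links inside the
    closed 1-neighbourhood of u (links from u to its neighbours plus
    links among the neighbours themselves) divided by u's degree. This
    particular definition is an assumption; the abstract does not spell
    out the measure it introduces."""
    neigh = adj[u]
    if not neigh:
        return 0.0
    links = len(neigh)  # links u -- neighbour
    links += sum(1 for v, w in combinations(neigh, 2) if w in adj[v])
    return links / len(neigh)

def elect_cluster_heads(adj):
    """Each node attaches to the neighbour (or itself) with the highest
    density, ties broken by smaller node id; nodes that pick themselves
    act as cluster heads."""
    density = {u: neighborhood_density(adj, u) for u in adj}
    head = {}
    for u in adj:
        candidates = {u} | adj[u]
        head[u] = max(candidates, key=lambda v: (density[v], -v))
    return head

adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2, 4}, 4: {3}}
print(elect_cluster_heads(adj))
```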
Wireless routing protocols in MANETs are all flat routing protocols and are thus not suitable for large-scale or very dense networks because of the bandwidth and processing overheads they generate. A common solution to this scalability problem is to gather terminals into clusters and then to apply hierarchical routing, which means, in most of the literature, using a proactive routing protocol inside the clusters and a reactive one between the clusters. We previously introduced a cluster organization to allow hierarchical routing and scalability, which has shown very good properties. Nevertheless, it provides a constant number of clusters when the node intensity increases. Therefore we apply a reactive routing protocol inside the clusters and a proactive routing protocol between the clusters. In this way, each cluster has O(1) routes to maintain toward other ones. When applying such a routing policy, a node u also needs to locate its correspondent v in order to proactively route toward the cluster owning v. In this paper, we describe our localization scheme, based on Distributed Hash Tables and Interval Routing, which takes advantage of the underlying clustering structure. It requires only O(1) memory space on each node.
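The paper's combination of Distributed Hash Tables and Interval Routing is not reproduced here; the sketch below only illustrates the hashing half of such a location service, with a hypothetical publish/lookup API and a single global table standing in for the per-cluster state.

```python
import hashlib

def home_cluster(node_id, cluster_ids):
    """Hypothetical location service in the spirit of the abstract:
    hash a node id onto the sorted list of cluster ids to decide which
    cluster stores that node's current location. The concrete hashing
    scheme and the interval-routing part are not reproduced here."""
    h = int(hashlib.sha1(str(node_id).encode()).hexdigest(), 16)
    clusters = sorted(cluster_ids)
    return clusters[h % len(clusters)]

# A node u that wants to reach v first asks v's home cluster where v
# currently lives, then routes proactively toward that cluster.
location_table = {}            # modelled here as one global dict keyed by (home cluster, node)
clusters = ["C1", "C2", "C3"]

def publish(node_id, current_cluster):
    location_table[(home_cluster(node_id, clusters), node_id)] = current_cluster

def lookup(node_id):
    return location_table[(home_cluster(node_id, clusters), node_id)]

publish("v", "C3")
print(home_cluster("v", clusters), lookup("v"))  # where v's binding is stored, and v's cluster
```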
This paper presents the architecture and the algorithms used in DIET (Distributed Interactive Engineering Toolbox), a hierarchical set of components to build Network Enabled Server applications in a Grid environment. This environment is built on top of different tools which are able to locate an appropriate server depending on the client's request, the data location (which can be anywhere on the system, because of previous computations) and the dynamic performance characteristics of the system.
In this paper, we present the developments carried out in the OURAGAN project around the parallelization of a MATLAB-like tool called SCILAB. These developments use high-performance numerical libraries and different approaches based either on the duplication of SCILAB processes or on computational servers. This tool, SCILAB//, allows users to perform high-level operations on distributed matrices in a metacomputing environment. We also present performance results on different architectures.
We propose solutions to several multicast core management problems, including automatic core selection, core failure handling, and core migration, for use in networks based on link-state routing. The proposed approach uses a central server, called the core binding server (CBS), to manage core-group bindings, accompanied by a network-level leader election protocol in order to achieve robustness. By modeling the selection of the CBS as a leader election problem, this approach can handle any combination of network component failures, including those that partition the network. Further, our simulation results reveal that the central server can sustain extremely high workloads, and demonstrate the effectiveness of our core selection and core migration methods.
A core-based forwarding multicast protocol uses a core router as a traffic transit center: all multicast packets are first sent to the core, then distributed to destinations on a multicast tree rooted at the core. The purpose of this paper is to evaluate, via simulation, the effect of various core selection methods on multicast performance. The main contribution of this work is the discovery of a simple yet effective core selection heuristic that can be implemented in a wide range of networks. Specifically, our results show that the tree center heuristic (using the center of the existing multicast tree as the new core node) significantly outperforms heuristics based on random selection, and performs as well as heuristics that are more computationally expensive.
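The tree center heuristic lends itself to a short sketch: the center of a tree can be found by repeatedly peeling off leaves until at most two nodes remain. The adjacency-dict representation of the multicast tree below is an assumption for illustration.

```python
def tree_center(adj):
    """Return the center node(s) of a tree given as an adjacency dict,
    by repeatedly removing leaves until at most two nodes remain. This
    illustrates the 'tree center' core-selection heuristic: pick one of
    the returned nodes as the new core."""
    degree = {u: len(vs) for u, vs in adj.items()}
    remaining = set(adj)
    leaves = [u for u in remaining if degree[u] <= 1]
    while len(remaining) > 2:
        next_leaves = []
        for leaf in leaves:
            remaining.discard(leaf)
            for v in adj[leaf]:
                if v in remaining:
                    degree[v] -= 1
                    if degree[v] == 1:
                        next_leaves.append(v)
        leaves = next_leaves
    return remaining

# Path 0-1-2-3-4 with a branch 2-5: the center is node 2.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3, 5}, 3: {2, 4}, 4: {3}, 5: {2}}
print(tree_center(adj))
```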