
Learning by Gossip: A Principled Information Exchange Model in Social Networks

Cognitive Computation

Abstract

We address the key step of bootstrap methods: generating a possibly infinite sequence of random data that preserves the properties of the distribution law, starting from a primary sample actually drawn from this distribution. We solve this task in a cooperative way within a community of generators, each of which improves its performance by analyzing the other partners' production. Since this analysis is rooted in an a priori distrust of the other partners' production, we call the partner ensemble a gossip community and the statistical procedure learning by gossip. We prove that this procedure is highly efficient when applied to the elementary problem of reproducing a Bernoulli distribution, provided the distrust rate is properly moderated when the absence of a long-term memory requires an online estimation of the bootstrap generator parameters. This makes the procedure viable as a basic template of an efficient interaction scheme among social network agents.


Notes

  1. Akin to the routine rand() we may call in C, RandomReal[] in Mathematica, etc.


Author information

Corresponding author

Correspondence to B. Apolloni.

Appendix

Hereafter we outline the computations leading to formulas (12), (13), (14), (16), and (17).

Formula (12)

  • Starting statements:

    $$ \begin{aligned} A_k(t)&=(1-\varepsilon)A_k(t-1)+\varepsilon_0X(t)+\varepsilon_1 Y_{[k]}(t)+\varepsilon_2\sum_{j\ne k}Y_{[j]}(t); \quad \varepsilon=\varepsilon_0+\varepsilon_1+(n-1)\varepsilon_2;\\ \widetilde A(t)&=\frac{1}{n}\sum_{k=1}^n A_k(t); \quad Y_{[k]}(t)=\hbox{Integer}[A_k(t-1)+U_k];\quad U_k \hbox{ uniform in } [0,1]; \quad\widetilde Y(t)=\frac{1}{n}\sum_{k=1}^n Y_{[k]}(t). \end{aligned} $$
  • Recurrence relations (the identity in point 2 is checked numerically in the sketch closing this derivation):

    1.
      $$ \begin{aligned} \widetilde A(t)&=(1-\varepsilon)\widetilde A(t-1)+\varepsilon_0X(t)+[\varepsilon_1+(n-1)\varepsilon_2] \widetilde Y(t);\\ E[\widetilde A(t)]&=(1-\varepsilon_0) E[\widetilde A(t-1)]+\varepsilon_0\theta \end{aligned} $$
    2.
      $$ \begin{aligned} Cov[\widetilde A(t-1),\widetilde Y(t)]&=E[\widetilde A(t-1)\widetilde Y(t)]-E[\widetilde A(t-1)]E[\widetilde Y(t)]\\ &=\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n\left(E[A_i(t-1)Y_{[j]}(t)]-E[A_i(t-1)]^2\right)=\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n \left(E[A_i(t-1)A_{j}(t-1)]-E[A_i(t-1)]^2\right)\\ &=\frac{1}{n^2}\sum_{i=1}^n Var[A_i(t-1)]+\frac{1}{n^2}\sum_{i=1}^n\sum_{j\ne i,j=1}^n Cov[A_i(t-1),A_{j}(t-1)] =Var[\widetilde A(t-1)] \end{aligned} $$
    3.
      $$ \begin{aligned} Var[ \widetilde A(t)] &= (1-\varepsilon)^2Var[ \widetilde A(t-1)] + \varepsilon_0^2\theta(1-\theta)+[\varepsilon_1+(n-1)\varepsilon_2]^2Var[\widetilde Y(t)]\\ &\quad+ 2(1-\varepsilon)[\varepsilon_1+(n-1)\varepsilon_2]Cov[\widetilde A(t-1),\widetilde Y(t)]\\ &= (1-\varepsilon_0)^2Var[ \widetilde A(t-1)] + \varepsilon_0^2\theta(1-\theta)+[\varepsilon_1+(n-1)\varepsilon_2]^2\{Var[\widetilde Y(t)]-Var[\widetilde A(t-1)]\} \end{aligned} $$
  • Final equation

    $$ \begin{aligned} E[(\widetilde A(t)-\theta)^2] &= E\left[\left(\widetilde A(t)-E[\widetilde A(t)] +E[\widetilde A(t)] -\theta\right)^2\right]\\ &= (1-\varepsilon_0)^2 E[(\widetilde A(t-1)-\theta)^2] +\varepsilon_0^2\theta(1-\theta)+[\varepsilon_1+(n-1)\varepsilon_2]^2 \{E[(\widetilde Y(t)-\theta)^2]-E[(\widetilde A(t-1)-\theta)^2]\} \end{aligned} $$
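
The hinge of this derivation is the identity of point 2, Cov[A~(t-1), Y~(t)] = Var[A~(t-1)]. The Python sketch below is ours, not the authors' code: it simulates the update rule of the starting statements with illustrative parameter values (n, theta and the epsilons are free choices) and realizes Integer[A_k(t-1) + U_k] as a Bernoulli draw of parameter A_k(t-1); the two printed numbers should agree up to Monte Carlo error.

    import numpy as np

    rng = np.random.default_rng(0)
    n, theta = 10, 0.3                        # illustrative community size and Bernoulli parameter
    eps0, eps1, eps2 = 0.05, 0.02, 0.001      # illustrative trust/distrust rates
    eps = eps0 + eps1 + (n - 1) * eps2

    runs, T = 50_000, 50
    A = np.full((runs, n), 0.5)               # arbitrary common initial state
    for t in range(T):
        X = rng.binomial(1, theta, size=(runs, 1))   # primary sample bit, shared by all agents
        Y = rng.binomial(1, A)                       # Y_[k](t): 1 with probability A_k(t-1)
        S = Y.sum(axis=1, keepdims=True)
        A_tilde_prev = A.mean(axis=1)                # A~(t-1), kept for the covariance check
        A = (1 - eps) * A + eps0 * X + eps1 * Y + eps2 * (S - Y)

    Y_tilde = Y.mean(axis=1)                         # Y~(T), generated from A~(T-1)
    print("Cov[A~(t-1), Y~(t)] ~", np.cov(A_tilde_prev, Y_tilde)[0, 1])
    print("Var[A~(t-1)]        ~", A_tilde_prev.var())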

Formula (13)

  • Starting statements:

    $$ E[(A_k(\infty)-\theta)^2]=E[A_k(\infty)^2]-\theta^2 $$
  • Recurrence relation

    $$ \begin{aligned} E[A_i(t)A_j(t)]&= E\left [\left((1-\varepsilon)A_i(t-1)+\varepsilon_0X(t)+\varepsilon_1 Y_{[i]}(t)+\varepsilon_2\sum_{k\ne i,k=1}^nY_{[k]}(t)\right)\right.\\ &\quad\left. \left((1-\varepsilon)A_j(t-1)+\varepsilon_0X(t)+\varepsilon_1 Y_{[j]}(t)+\varepsilon_2\sum_{k\ne j,k=1}^nY_{[k]}(t)\right) \right] \end{aligned} $$
  • Fixed point equation

    $$ E[A_i(\infty)^2]-E[A_i(\infty)A_j(\infty)]=(1-\varepsilon_0-n \varepsilon_2)^2\{E[A_i(\infty)^2]-E[A_i(\infty)A_j(\infty)]\}- (\varepsilon_1-\varepsilon_2)^2\{E[A_i(\infty)^2]-\theta\} $$
  • Final equation (checked numerically in the sketch at the end of this subsection)

    $$ \hbox{E}\left[ \left( A_k(\infty) - \theta \right)^2 \right] = \theta (1-\theta) \left\{ 1- \frac{2 \varepsilon_0(1-\varepsilon_0)[1-(1-\varepsilon_0-n\varepsilon_2)^2]}{\delta_a-\delta_b} \right\} $$

    where

    $$ \begin{aligned} \delta_a&=\{1-[1-\varepsilon_0-(n-1)\varepsilon_2]^2+\varepsilon_1^2\}\{1-(1-\varepsilon_0)^2+\varepsilon_2[2(1-\varepsilon_0)-n\varepsilon_2]\}\\ \delta_b&=-2(n-1)\varepsilon_2^2 [1-\varepsilon_0-(n-1)\varepsilon_2-\varepsilon_1][2(1-\varepsilon_0)-n\varepsilon_2] \end{aligned} $$
  • Starting statements:

    $$ Var[\widetilde A(t)]=\frac{1}{n^2}\left(\sum_{k=1}^n Var[A_k(t)]+\sum_{i=1}^n\sum_{j\ne i,j=1}^nCov[A_i(t),A_j(t)]\right) $$
  • Fixed point equation

    $$ Cov[A_i(\infty),A_j(\infty)]=\frac{1}{(1-\varepsilon_0-n\varepsilon_2)^2} \left\{[(1-\varepsilon_0-n\varepsilon_2)^2+(\varepsilon_1-\varepsilon_2)^2] E[A_k(\infty)^2]-(\varepsilon_1-\varepsilon_2)^2\theta(1-\theta)\right\} $$
  • Final equation:

    $$ \begin{aligned} E\left[\left(\widetilde A(\infty)-\theta\right)^2\right] & =\frac{1}{n^2}\left\{nVar[A_i(\infty)]+n(n-1)Cov[A_i(\infty),A_j(\infty)]\right\}\\ &=\frac{1}{n}\frac{\{[1-(1-\varepsilon_0-n\varepsilon_2)^2]+(n-1) (\varepsilon_1-\varepsilon_2)^2\}E[(A_k(\infty)-\theta)^2]-(n-1) (\varepsilon_1-\varepsilon_2)^2\theta(1-\theta)}{1- (1-\varepsilon_0-n\varepsilon_2)^2} \end{aligned} $$
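
A numerical sketch of the closed form. The code below (ours; parameter values are illustrative, and a single long run replaces the ensemble average under an ergodicity assumption) evaluates formula (13) through delta_a and delta_b as transcribed above and compares it with the time-averaged squared error of a simulated community; if the transcription is faithful, the two printed numbers should agree up to simulation error.

    import numpy as np

    rng = np.random.default_rng(1)
    n, theta = 5, 0.3
    eps0, eps1, eps2 = 0.05, 0.02, 0.005      # illustrative rates
    eps = eps0 + eps1 + (n - 1) * eps2

    # closed form of formula (13), with delta_a and delta_b as defined above
    delta_a = (1 - (1 - eps0 - (n - 1) * eps2) ** 2 + eps1 ** 2) * \
              (1 - (1 - eps0) ** 2 + eps2 * (2 * (1 - eps0) - n * eps2))
    delta_b = -2 * (n - 1) * eps2 ** 2 * (1 - eps0 - (n - 1) * eps2 - eps1) * \
              (2 * (1 - eps0) - n * eps2)
    closed = theta * (1 - theta) * (1 - 2 * eps0 * (1 - eps0)
                                    * (1 - (1 - eps0 - n * eps2) ** 2) / (delta_a - delta_b))

    # empirical long-run per-agent mean squared error
    A = np.full(n, 0.5)
    burn, T, acc = 10_000, 200_000, 0.0
    for t in range(burn + T):
        X = rng.binomial(1, theta)            # primary sample bit
        Y = rng.binomial(1, A)                # agents' emitted bits
        A = (1 - eps) * A + eps0 * X + eps1 * Y + eps2 * (Y.sum() - Y)
        if t >= burn:
            acc += ((A - theta) ** 2).mean()
    print(closed, acc / T)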

Formula (14)

  • Starting statement

    $$ A(t)=(1-\varepsilon)A(t-1)+\varepsilon X(t);\quad Y_k(t)=\hbox{Integer}[A(t-1)+U_k];\quad S_n(t)=\frac{1}{n}\sum_{k=1}^n Y_k(t);\quad U_k \hbox{ uniform in } [0,1]. $$
  • Recurrence relations:

    1.
      $$ Cov[Y_i(t),Y_j(t)]=E[Y_i(t)Y_j(t)]-E[Y_i(t)]E[Y_j(t)] =E[A(t-1)^2]-E[A(t-1)]^2=Var[A(t-1)] $$
    2.
      $$ Var[Y_i(t)]=E[Y_i(t)^2]-E[Y_i(t)]^2=E[A(t-1)](1-E[A(t-1)]) $$
    3.
      $$ Var[A(t)]=(1-\varepsilon)^2Var[A(t-1)]+\varepsilon^2\theta(1-\theta) $$
    4.
      $$ Var[S_n(t)]=\frac{1}{n}E[A(t-1)](1-E[A(t-1)])+\frac{n-1}{n}Var[A(t-1)] $$
  • Final equation (checked numerically in the sketch below):

    $$ \begin{aligned} MSE[S_n(t)]&=Var[S_n(t)]+(E[S_n(t)]-\theta)^2\\ &=\frac{1}{n}E[A(t-1)](1-E[A(t-1)])+\frac{n-1}{n}Var[A(t-1)]+ \left(E[A(t-1)]-\theta\right)^2\\ &=(1-\varepsilon)^2MSE[S_n(t-1)]+\frac{1}{n}\varepsilon(1-\varepsilon) (1-2\theta)E[S_n(t-1)]+\frac{\varepsilon\theta}{n}[1-\varepsilon\theta+(n-1)\varepsilon(1-\theta)]. \end{aligned} $$
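
The last recurrence can be checked by direct iteration. In the sketch below (ours; parameter values are illustrative, and the chain is seeded with the deterministic state A(0) = 0.5, so that E[S_n(1)] = 0.5 and MSE[S_n(1)] = 0.25/n + (0.5 - theta)^2), the iterated recurrence and a Monte Carlo simulation of the memoryless generator deliver two estimates of MSE[S_n(T)] that should coincide up to sampling error.

    import numpy as np

    rng = np.random.default_rng(2)
    theta, eps, n = 0.3, 0.05, 10             # illustrative parameters
    runs, T = 200_000, 100

    # Monte Carlo: A(t) = (1-eps) A(t-1) + eps X(t); S_n(t) averages n bits drawn from A(t-1)
    A = np.full(runs, 0.5)
    for t in range(T):
        S = rng.binomial(n, A) / n            # S_n(t)
        A = (1 - eps) * A + eps * rng.binomial(1, theta, size=runs)
    mc = ((S - theta) ** 2).mean()

    # recurrence of formula (14), seeded at t = 1
    m, mse = 0.5, 0.5 * 0.5 / n + (0.5 - theta) ** 2
    for t in range(T - 1):
        mse = ((1 - eps) ** 2 * mse
               + eps * (1 - eps) * (1 - 2 * theta) * m / n
               + eps * theta / n * (1 - eps * theta + (n - 1) * eps * (1 - theta)))
        m = (1 - eps) * m + eps * theta       # E[S_n(t)] = (1-eps) E[S_n(t-1)] + eps theta
    print(mc, mse)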

Formula (16)

  • Starting statements:

    $$ \widetilde S_n(t)=\frac{1}{n}\sum_{k=1}^n Y_{[k]}(t). $$
  • Recurrence relation:

    $$ \begin{aligned} Var[\widetilde S_n(t)]&=\frac{1}{n^2}\left\{\sum_{i=1}^n Var[Y_{[i]}(t)]+\sum_{i=1}^n\sum_{j\ne i,j=1}^n Cov[Y_{[i]}(t),Y_{[j]}(t)]\right\}\\ &=\frac{1}{n^2}\left\{\sum_{i=1}^nE[A_i(t-1)](1-E[A_i(t-1)])+\sum_{i=1}^n\sum_{j\ne i,j=1}^n Cov[A_i(t-1),A_j(t-1)]\right\} \end{aligned} $$
  • Limit statement:

    $$ E[(\widetilde S_n(\infty)-\theta)^2] = \lim_{t\rightarrow\infty} Var[\widetilde S_n(t)] $$
  • Final equation (checked numerically in the sketch below):

    $$ \begin{aligned} & \hbox{E}\left[ \left( \widetilde S_n(\infty) - \theta \right)^2 \right] \\ &\quad=\frac{1}{n}\frac{(n-1)[\delta_c+ (\varepsilon_1-\varepsilon_2)^2] \hbox{E} \left[ \left( A_k(\infty)-\theta \right)^2 \right] + [\delta_c-(n-1)(\varepsilon_1 - \varepsilon_2)^2] \theta(1-\theta)}{\delta_c} \end{aligned} $$

    where

    $$ \delta_c=1-(1-\varepsilon_0 -n \varepsilon_2)^2 $$
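
Since the final expression ties the mean squared error of the team sample mean to the per-agent one, it can be probed without invoking formula (13): estimate both quantities from one long simulated run and plug the per-agent value into the right-hand side. The sketch below does so (ours; illustrative parameters, time averages standing in for expectations):

    import numpy as np

    rng = np.random.default_rng(3)
    n, theta = 5, 0.3
    eps0, eps1, eps2 = 0.05, 0.02, 0.005      # illustrative rates
    eps = eps0 + eps1 + (n - 1) * eps2
    delta_c = 1 - (1 - eps0 - n * eps2) ** 2

    A = np.full(n, 0.5)
    burn, T = 10_000, 500_000
    mse_A = mse_S = 0.0
    for t in range(burn + T):
        X = rng.binomial(1, theta)
        Y = rng.binomial(1, A)                # Y.mean() realizes the team sample mean at time t
        A = (1 - eps) * A + eps0 * X + eps1 * Y + eps2 * (Y.sum() - Y)
        if t >= burn:
            mse_A += ((A - theta) ** 2).mean()
            mse_S += (Y.mean() - theta) ** 2
    mse_A /= T
    mse_S /= T

    d = (eps1 - eps2) ** 2
    rhs = ((n - 1) * (delta_c + d) * mse_A
           + (delta_c - (n - 1) * d) * theta * (1 - theta)) / (n * delta_c)
    print(mse_S, rhs)                         # should agree up to simulation error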

Formula (17)

  • Recurrence relations (the first one is illustrated numerically in the sketch closing the appendix):

    1.
      $$ \begin{aligned} E[\widetilde S_n(t)]&=\frac{1}{n}\sum_{k=1}^n E[A_k(t-1)]\\ &= (1-\varepsilon)\frac{1}{n}\sum_{k=1}^n E[A_k(t-2)] +\varepsilon_0\theta+\varepsilon_1E[\widetilde S_n(t-1)] +\varepsilon_2\frac{1}{n}\sum_{k=1}^n\sum_{h\ne k, h=1}^n E[A_h(t-2)]\\ &=(1-\varepsilon)E[\widetilde S_n(t-1)]+\varepsilon_0\theta+\varepsilon_1E[\widetilde S_n(t-1)]+(n-1)\varepsilon_2E[\widetilde S_n(t-1)]=(1-\varepsilon_0)E[\widetilde S_n(t-1)]+\varepsilon_0\theta \end{aligned} $$
    2.
      $$ \begin{aligned} Var[\widetilde S_n(t)]&=E[\widetilde S_n(t)^2]-E[\widetilde S_n(t)]^2\\ &=\frac{1}{n}E[\widetilde S_n(t)]+\frac{1}{n^2}\sum_{i=1}^n \sum_{j\ne i,j=1}^n E[A_i(t-1)A_j(t-1)]- (1-\varepsilon_0)^2E[\widetilde S_n(t-1)]^2-\varepsilon_0^2\theta^2-2\varepsilon_0(1-\varepsilon_0)\theta E[\widetilde S_n(t-1)]\\ &=(1-\varepsilon_0)^2Var[\widetilde S_n(t-1)]+2(n-1) (1-\varepsilon)\varepsilon_2\{Var[\widetilde A(t-2)]-Var[\widetilde S_n(t-1)]\}\\ &\quad+[2(1-\varepsilon_0)-n\varepsilon_2]\varepsilon_2E[\widetilde S_n(t-1)(1- \widetilde S_n(t-1))]+\frac{1}{n}\varepsilon_0(1-\varepsilon_0)\{E[\widetilde S_n(t-1)]-2\theta E[\widetilde S_n(t-1)]+\theta\}\\ &\quad+\varepsilon_0^2\theta(1-\theta) \end{aligned} $$
  • Final equation

    $$ \begin{aligned} \hbox{MSE}\left[ \widetilde S_n(t) \right] &= (1-\varepsilon_0)^2 \hbox{MSE}\left[ \widetilde S_n(t-1) \right] + 2(n-1)(1-\varepsilon)\varepsilon_2 \left\{ \hbox{Var}\left[ \widetilde A(t-2) \right] - \hbox{Var} \left[ \widetilde S_n(t-1) \right] \right\}\\ &\quad+[2(1-\varepsilon_0)-n \varepsilon_2] \varepsilon_2 \hbox{E} \left[ \widetilde S_n(t-1)\left( 1-\widetilde S_n(t-1)\right) \right]\\ &\quad+ \frac{1}{n} (1-\varepsilon_0)\varepsilon_0 \left\{ \hbox{E}\left[ \widetilde S_n(t-1) \right] -2 \theta \hbox{E} \left[ \widetilde S_n(t-1) \right] + \theta \right\} + \varepsilon_0^2 \theta(1-\theta) \end{aligned} $$
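
A closing sketch (ours; illustrative parameters, same conventions as the earlier sketches) isolates the first recurrence: the bias of the team sample mean contracts geometrically at rate 1 - eps0, independently of eps1 and eps2. Starting the community far from theta, the observed drift is compared with the prediction (1 - eps0)^(t-1) times the initial bias.

    import numpy as np

    rng = np.random.default_rng(4)
    n, theta = 5, 0.3
    eps0, eps1, eps2 = 0.05, 0.02, 0.005      # illustrative rates
    eps = eps0 + eps1 + (n - 1) * eps2

    runs, T = 100_000, 50
    A = np.full((runs, n), 0.9)               # start far from theta to expose the drift
    bias = []
    for t in range(T):
        X = rng.binomial(1, theta, size=(runs, 1))
        Y = rng.binomial(1, A)                # row means of Y realize the team sample mean
        bias.append(Y.mean() - theta)
        S = Y.sum(axis=1, keepdims=True)
        A = (1 - eps) * A + eps0 * X + eps1 * Y + eps2 * (S - Y)

    for t in (1, 20, 40):
        print(bias[t - 1], (1 - eps0) ** (t - 1) * (0.9 - theta))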


About this article

Cite this article

Apolloni, B., Malchiodi, D. & Taylor, J.G. Learning by Gossip: A Principled Information Exchange Model in Social Networks. Cogn Comput 5, 327–339 (2013). https://doi.org/10.1007/s12559-013-9211-6
