
Fractal and Transfractal Scale-Free Networks

  • Reference work entry
Encyclopedia of Complexity and Systems Science

Definition of the Subject

The explosion in the study of complex networks during the last decade has offered a unique view into the structure and behavior of a wide range of systems, spanning many different disciplines [1]. The importance of complex networks lies mainly in their simplicity, since they can represent practically any system with interactions in a unified way, stripping away complicated details and retaining the main features of the system. The resulting networks include only nodes, representing the interacting agents, and links, representing interactions. The term 'interactions' is used loosely to describe any possible way that causes two nodes to form a link. Examples can be real physical links, such as the wires connecting computers in the Internet or roads connecting cities, or alternatively they may be virtual links, such as links in WWW homepages or acquaintances in societies, where there is no physical medium actually connecting the nodes.

The field was pioneered by the famous...


Abbreviations

Degree of a node:

Number of edges incident to the node.

Scale‐free network:

Network that exhibits a wide (usually power‐law) distribution of the degrees.

Small‐world network:

Network for which the diameter increases logarithmically with the number of nodes.

Distance:

The length (measured in number of links) of the shortest path between two nodes.

Box:

Group of nodes. In a connected box there exists a path within the box between any pair of nodes. Otherwise, the box is disconnected.

Box diameter:

The longest distance in a box.

Bibliography

  1. Albert R, Barabási A-L (2002) Rev Mod Phys 74:47; Barabási A-L (2003) Linked: how everything is connected to everything else and what it means. Plume, New York; Newman MEJ (2003) SIAM Rev 45:167; Dorogovtsev SN, Mendes JFF (2002) Adv Phys 51:1079; Dorogovtsev SN, Mendes JFF (2003) Evolution of networks: from biological nets to the internet and WWW. Oxford University Press, Oxford; Bornholdt S, Schuster HG (2003) Handbook of graphs and networks. Wiley-VCH, Berlin; Pastor-Satorras R, Vespignani A (2004) Evolution and structure of the internet. Cambridge University Press, Cambridge; Amaral LAN, Ottino JM (2004) Complex networks – augmenting the framework for the study of complex systems. Eur Phys J B 38:147–162
  2. Albert R, Jeong H, Barabási A-L (1999) Diameter of the world wide web. Nature 401:130–131
  3. Albert R, Jeong H, Barabási A-L (2000) Nature 406:378
  4. Bagrow JP, Bollt EM (2005) Phys Rev E 72:046108
  5. Bagrow JP (2008) J Stat Mech P05001
  6. Bagrow JP, Bollt EM, Skufca JD (2008) Europhys Lett 81:68004
  7. Barabási A-L, Albert R (1999) Science 286:509
  8. ben-Avraham D, Havlin S (2000) Diffusion and reactions in fractals and disordered systems. Cambridge University Press, Cambridge
  9. Berker AN, Ostlund S (1979) J Phys C 12:4961
  10. Beygelzimer A, Grinstein G, Linsker R, Rish I (2005) Physica A 357:593–612
  11. Binney JJ, Dowrick NJ, Fisher AJ, Newman MEJ (1992) The theory of critical phenomena: an introduction to the renormalization group. Oxford University Press, Oxford
  12. Bollobás B (1985) Random graphs. Academic Press, London
  13. Bollt E, ben-Avraham D (2005) New J Phys 7:26
  14. Bunde A, Havlin S (1996) Percolation I and Percolation II. In: Bunde A, Havlin S (eds) Fractals and disordered systems, 2nd edn. Springer, Heidelberg
  15. Burch H, Chewick W (1999) Mapping the internet. IEEE Comput 32:97–98
  16. Butler D (2006) Nature 444:528
  17. Cardy J (1996) Scaling and renormalization in statistical physics. Cambridge University Press, Cambridge
  18. Clauset A, Newman MEJ, Moore C (2004) Phys Rev E 70:066111
  19. Cohen R, Erez K, ben-Avraham D, Havlin S (2000) Phys Rev Lett 85:4626
  20. Cohen R, Erez K, ben-Avraham D, Havlin S (2001) Phys Rev Lett 86:3682
  21. Cohen R, ben-Avraham D, Havlin S (2002) Phys Rev E 66:036113
  22. Comellas F: Complex networks: deterministic models. In: Gazeau J-P, Nesetril J, Rovan B (eds) Physics and theoretical computer science: from numbers and languages to (quantum) cryptography. NATO Security through Science Series 7: Information and Communication Security. IOS Press, Amsterdam, pp 275–293. ISBN 1-58603-706-4
  23. Cormen TH, Leiserson CE, Rivest RL, Stein C (2001) Introduction to algorithms. MIT Press, Cambridge
  24. Data from the SCAN project. The Mbone. http://www.isi.edu/scan/scan.html. Accessed 2000
  25. Database of Interacting Proteins (DIP). http://dip.doe-mbi.ucla.edu. Accessed 2008
  26. Dorogovtsev SN, Goltsev AV, Mendes JFF (2002) Phys Rev E 65:066122
  27. Erdős P, Rényi A (1960) On the evolution of random graphs. Publ Math Inst Hung Acad Sci 5:17–61
  28. Faloutsos M, Faloutsos P, Faloutsos C (1999) Comput Commun Rev 29:251–262
  29. Feder J (1988) Fractals. Plenum Press, New York
  30. Gallos LK, Argyrakis P, Bunde A, Cohen R, Havlin S (2004) Physica A 344:504–509
  31. Gallos LK, Cohen R, Argyrakis P, Bunde A, Havlin S (2005) Phys Rev Lett 94:188701
  32. Gallos LK, Song C, Havlin S, Makse HA (2007) PNAS 104:7746
  33. Gallos LK, Song C, Makse HA (2008) Phys Rev Lett 100:248701
  34. Garey M, Johnson D (1979) Computers and intractability: a guide to the theory of NP-completeness. W.H. Freeman, New York
  35. Goh K-I, Salvi G, Kahng B, Kim D (2006) Phys Rev Lett 96:018701
  36. Han J-DJ et al (2004) Nature 430:88–93
  37. Hinczewski M, Berker AN (2006) Phys Rev E 73:066126
  38. Jeong H, Tombor B, Albert R, Oltvai ZN, Barabási A-L (2000) Nature 407:651–654
  39. Kadanoff LP (2000) Statistical physics: statics, dynamics and renormalization. World Scientific, Singapore
  40. Kaufman M, Griffiths RB (1981) Phys Rev B 24:496(R)
  41. Kaufman M, Griffiths RB (1984) Phys Rev B 24:244
  42. Kim JS, Goh K-I, Salvi G, Oh E, Kahng B, Kim D (2007) Phys Rev E 75:016110
  43. Kim JS, Goh K-I, Kahng B, Kim D (2007) Chaos 17:026116
  44. Kim JS, Goh K-I, Kahng B, Kim D (2007) New J Phys 9:177
  45. Mandelbrot B (1982) The fractal geometry of nature. W.H. Freeman, New York
  46. Maslov S, Sneppen K (2002) Science 296:910–913
  47. Milgram S (1967) Psychol Today 2:60
  48. Motter AE, de Moura APS, Lai Y-C, Dasgupta P (2002) Phys Rev E 65:065102
  49. Newman MEJ (2002) Phys Rev Lett 89:208701
  50. Newman MEJ (2003) Phys Rev E 67:026126
  51. Newman MEJ, Girvan M (2004) Phys Rev E 69:026113
  52. Overbeek R et al (2000) Nucl Acids Res 28:123–125
  53. Palla G, Barabási A-L, Vicsek T (2007) Nature 446:664–667
  54. Pastor-Satorras R, Vázquez A, Vespignani A (2001) Phys Rev Lett 87:258701
  55. Peitgen HO, Jurgens H, Saupe D (1993) Chaos and fractals: new frontiers of science. Springer, New York
  56. Rozenfeld H, Havlin S, ben-Avraham D (2007) New J Phys 9:175
  57. Rozenfeld H, ben-Avraham D (2007) Phys Rev E 75:061102
  58. Salmhofer M (1999) Renormalization: an introduction. Springer, Berlin
  59. Schwartz N, Cohen R, ben-Avraham D, Barabási A-L, Havlin S (2002) Phys Rev E 66:015104
  60. Serrano MA, Boguna M (2006) Phys Rev Lett 97:088701
  61. Serrano MA, Boguna M (2006) Phys Rev E 74:056115
  62. Song C, Havlin S, Makse HA (2005) Nature 433:392
  63. Song C, Havlin S, Makse HA (2006) Nature Phys 2:275
  64. Song C, Gallos LK, Havlin S, Makse HA (2007) J Stat Mech P03006
  65. Stanley HE (1971) Introduction to phase transitions and critical phenomena. Oxford University Press, Oxford
  66. Vicsek T (1992) Fractal growth phenomena, 2nd edn. World Scientific, Singapore, Part IV
  67. Watts DJ, Strogatz SH (1998) Collective dynamics of "small-world" networks. Nature 393:440–442
  68. Xenarios I et al (2000) Nucl Acids Res 28:289–291
  69. Zhou W-X, Jiang Z-Q, Sornette D (2007) Physica A 375:741–752


Acknowledgments

We acknowledge support from the National Science Foundation.


Appendix: The Box Covering Algorithms


The estimation of the fractal dimension and of self-similar features has become a standard part of the study of real-world networks. For this reason, many box covering algorithms have been proposed in the last three years [64,69]. This section presents four of the main algorithms, along with a brief discussion of the advantages and disadvantages that they offer.

Recalling the original definition of box covering by Hausdorff [14,29,55], for a given network G and box size \( { \ell_{\text{B}} } \), a box is a set of nodes where all distances \( { \ell_{ij} } \) between any two nodes i and j in the box are smaller than \( { \ell_{\text{B}} } \). The minimum number of boxes required to cover the entire network G is denoted by \( { N_{\text{B}} } \). For \( { \ell_{\text{B}} = 1 } \), each box encloses only 1 node and therefore, \( { N_{\text{B}} } \) is equal to the size of the network N. On the other hand, \( { N_{\text{B}}=1 } \) for \( { \ell_{\text{B}} \ge \ell_{\text{B}}^\text{max} } \), where \( { \ell_{\text{B}}^\text{max} } \) is the diameter of the network plus one.
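As a concrete illustration of these two limiting cases, the following sketch (in Python with the networkx library; both the library choice and the toy graph are assumptions, since the entry prescribes no implementation) checks that a single box suffices once \( { \ell_{\text{B}} \ge \ell_{\text{B}}^\text{max} } \), and that no two nodes can share a box at \( { \ell_{\text{B}}=1 } \).

```python
import networkx as nx
from itertools import combinations

# Hypothetical toy graph; any connected network would do.
G = nx.path_graph(5)                      # chain 0-1-2-3-4
N = G.number_of_nodes()
dist = dict(nx.all_pairs_shortest_path_length(G))

l_B_max = nx.diameter(G) + 1              # diameter of the network plus one

# For l_B >= l_B_max a single box holding every node is valid,
# because all pairwise distances are strictly smaller than l_B.
assert all(dist[i][j] < l_B_max for i, j in combinations(G.nodes(), 2))

# For l_B = 1 no two distinct nodes can share a box, so N_B = N.
assert all(dist[i][j] >= 1 for i, j in combinations(G.nodes(), 2))
```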

The ultimate goal of a box‐covering algorithm is to find the minimum number of boxes \( { N_{\text{B}}(\ell_{\text{B}}) } \) for any \( { \ell_{\text{B}} } \). It has been shown that this problem belongs to the family of NP‐hard problems [34], which means that no algorithm is known that finds the exact solution in polynomial time. In other words, for a relatively large network there is no algorithm that can provide an exact solution in a reasonably short amount of time. This limitation requires treating the box covering problem with approximations, using for example optimization algorithms.

The GreedyColoring Algorithm

The box‐covering problem can be mapped into another NP‐hard problem [34]: the graph coloring problem.

An algorithm that approximates the optimal solution of this problem well was presented in [64]. For an arbitrary value of \( { \ell_{\text{B}} } \), first construct a dual network \( { G^{\prime} } \), in which two nodes are connected if the distance between them in G (the original network) is greater than or equal to \( { \ell_{\text{B}} } \). Figure 13 shows an example of a network G and the dual network \( { G^{\prime} } \) it yields for \( { \ell_{\text{B}}=3 } \) (upper row of Fig. 13).

Figure 13: Illustration of the solution of the network covering problem via mapping to the graph coloring problem. Starting from G (upper left panel) we construct the dual network \( { G^{\prime} } \) (upper right panel) for a given box size (here \( { \ell_{\text{B}}=3 } \)), where two nodes are connected if they are at a distance \( { \ell\geq\ell_{\text{B}} } \). We use a greedy algorithm for vertex coloring in \( { G^{\prime} } \), which is then used to determine the box covering in G, as shown in the plot.

Vertex coloring is a well‐known procedure, where labels (or colors) are assigned to each vertex of a network, so that no edge connects two identically colored vertices. It is clear that such a coloring in \( { G^{\prime} } \) gives rise to a natural box covering in the original network G, in the sense that vertices of the same color will necessarily form a box since the distance between them must be less than \( { \ell_{\text{B}} } \). Accordingly, the minimum number of boxes \( { N_{\text{B}}(G) } \) is equal to the minimum required number of colors (or the chromatic number) in the dual network \( { G^{\prime} } \), \( { \chi(G^{\prime}) } \).

In simpler terms: (a) if the distance between two nodes in G is greater than or equal to \( { \ell_{\text{B}} } \), these two nodes cannot belong to the same box. By construction of \( { G^{\prime} } \), they are connected in \( { G^{\prime} } \) and thus cannot have the same color; since their colors differ, they will not share a box in G. (b) Conversely, if the distance between two nodes in G is less than \( { \ell_{\text{B}} } \), it is possible for them to belong to the same box. In \( { G^{\prime} } \) these two nodes are not connected, so they are allowed to carry the same color, i.e. they may belong to the same box in G (whether they actually do depends on the exact implementation of the coloring algorithm).
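To make the mapping concrete, the sketch below builds the dual network for a single \( { \ell_{\text{B}} } \) and colors it with a generic greedy coloring routine. It is written in Python with networkx; the library choice, the function name, and the use of networkx's built-in greedy_color are illustrative assumptions, not the authors' implementation (their sequential algorithm, which handles all \( { \ell_{\text{B}} } \) values at once, is listed next).

```python
import networkx as nx
from itertools import combinations
from collections import defaultdict

def greedy_boxes_via_coloring(G, l_B):
    """Box covering of a connected network G at box size l_B via vertex
    coloring of the dual network G' (single-l_B illustration only)."""
    dist = dict(nx.all_pairs_shortest_path_length(G))

    # Dual network G': connect two nodes iff their distance in G is >= l_B.
    G_dual = nx.Graph()
    G_dual.add_nodes_from(G.nodes())
    G_dual.add_edges_from((i, j) for i, j in combinations(G.nodes(), 2)
                          if dist[i][j] >= l_B)

    # Greedy vertex coloring of G'; nodes sharing a color form one box in G,
    # so the number of colors used is an (upper bound on the) box count N_B.
    coloring = nx.greedy_color(G_dual, strategy="random_sequential")
    boxes = defaultdict(set)
    for node, color in coloring.items():
        boxes[color].add(node)
    return list(boxes.values())
```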

The algorithm that follows both constructs the dual network \( { G^{\prime} } \) and assigns the proper node colors for all \( { \ell_{\text{B}} } \) values in one go (a code sketch implementing these steps is given after the list). For this implementation a two‐dimensional matrix \( { c_{i\ell} } \) of size \( { N\times \ell_{\text{B}}^\text{max} } \) is needed, whose values represent the color of node i for a given box size \( { \ell=\ell_{\text{B}} } \).

  1. Assign a unique id from 1 to N to all network nodes, without assigning any colors yet.

  2. For all \( { \ell_{\text{B}} } \) values, assign the color value 0 to the node with id 1, i.e. \( { c_{1\ell}=0 } \).

  3. Set the id value \( { i=2 } \). Repeat the following until \( { i=N } \):

     (a) Calculate the distance \( { \ell_{ij} } \) from i to all the nodes in the network with id j less than i.

     (b) Set \( { \ell_{\text{B}}=1 } \).

     (c) Select one of the unused colors \( { c_{j\ell_{ij}} } \) from all nodes \( { j<i } \) for which \( { \ell_{ij}\geq\ell_{\text{B}} } \). This is the color \( { c_{i\ell_{\text{B}}} } \) of node i for the given \( { \ell_{\text{B}} } \) value.

     (d) Increase \( { \ell_{\text{B}} } \) by one and repeat (c) until \( { \ell_{\text{B}}=\ell_{\text{B}}^\text{max} } \).

     (e) Increase i by 1.
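A minimal sketch of the listed steps, again assuming Python/networkx (the function name is hypothetical, and picking the smallest unused color in step (c) is just one of the allowed selections):

```python
import networkx as nx

def greedy_coloring_box_count(G):
    """Sequential greedy coloring for all box sizes at once, following the
    steps listed above. Returns a dict mapping l_B to the box count N_B."""
    nodes = list(G.nodes())                      # step 1: ids in this order
    dist = dict(nx.all_pairs_shortest_path_length(G))
    l_max = nx.diameter(G) + 1

    # c[i][l_B] = color of node i for box size l_B
    c = {nodes[0]: {l: 0 for l in range(1, l_max + 1)}}   # step 2

    for idx in range(1, len(nodes)):             # step 3
        i = nodes[idx]
        c[i] = {}
        for l_B in range(1, l_max + 1):          # steps (b) and (d)
            # colors forbidden at this l_B: those of earlier nodes j
            # lying at distance >= l_B from i
            forbidden = {c[nodes[k]][l_B]
                         for k in range(idx)
                         if dist[i][nodes[k]] >= l_B}
            color = 0                            # step (c): smallest free color
            while color in forbidden:
                color += 1
            c[i][l_B] = color

    # N_B(l_B) = number of distinct colors used at that box size
    return {l_B: len({c[i][l_B] for i in nodes})
            for l_B in range(1, l_max + 1)}
```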

The results of the greedy algorithm may depend on the original coloring sequence. The quality of this algorithm was investigated by randomly reshuffling the coloring sequence and applying the greedy algorithm several times to different network models [64]. The probability distribution of the number of boxes \( { N_{\text{B}} } \) (over all box sizes \( { \ell_{\text{B}} } \)) was found to be a narrow Gaussian, which indicates that almost any implementation of the algorithm yields a solution close to the optimal one.

Strictly speaking, the calculation of the fractal dimension \( { d_{\text{B}} } \) through the relation \( { N_{\text{B}}\sim \ell_{\text{B}}^{-d_{\text{B}}} } \) is valid only for the minimum possible value of \( { N_{\text{B}} } \) at any given \( { \ell_{\text{B}} } \), so any box covering algorithm must aim to find this minimum \( { N_{\text{B}} } \). Although there is no rule to determine when this minimum value has actually been reached (since this would require an exact solution of the NP‐hard coloring problem), it has been shown [23] that the greedy coloring algorithm can, in many cases, identify a coloring sequence which yields the optimal solution.
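Once \( { N_{\text{B}}(\ell_{\text{B}}) } \) has been obtained by any of the algorithms in this appendix, the box dimension follows from a fit of \( { N_{\text{B}}\sim \ell_{\text{B}}^{-d_{\text{B}}} } \) in log-log coordinates. A minimal sketch with numpy (the function name is hypothetical; in practice one restricts the fit to the scaling regime, which must be chosen by inspection):

```python
import numpy as np

def fractal_dimension(NB_per_lB):
    """Estimate d_B from a dict {l_B: N_B} via N_B ~ l_B**(-d_B)."""
    l_B = np.array(sorted(NB_per_lB))
    N_B = np.array([NB_per_lB[l] for l in l_B])
    slope, _ = np.polyfit(np.log(l_B), np.log(N_B), 1)
    return -slope        # d_B is minus the slope of the log-log fit
```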

Burning Algorithms

This section presents three box covering algorithms based on the more traditional breadth‐first search algorithm.

A box is defined as compact when it includes the maximum possible number of nodes, i. e. when there do not exist any other network nodes that could be included in this box. A connected box means that any node in the box can be reached from any other node in this box, without having to leave this box. Equivalently, a disconnected box denotes a box where certain nodes can be reached by other nodes in the box only by visiting nodes outside this box. For a demonstration of these definitions see Fig. 14.
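These definitions translate directly into two small helper checks, sketched here in Python/networkx under the assumption that distances are measured in the full network G, as in the definitions above (both function names are hypothetical):

```python
import networkx as nx

def box_is_connected(G, box):
    """True if every pair of box nodes is joined by a path that stays
    inside the box (i.e. the induced subgraph is connected)."""
    return nx.is_connected(G.subgraph(box))

def box_is_compact(G, box, l_B):
    """True if no outside node could be added to the box without
    violating the constraint that all pairwise distances are < l_B."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    outside = set(G.nodes()) - set(box)
    return all(any(dist[u][v] >= l_B for v in box) for u in outside)
```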

Figure 14: Our definitions for a box that is: (a) non‐compact for \( { \ell_{\text{B}}=3 } \), i.e. it could include more nodes; (b) compact; (c) connected; (d) disconnected (the nodes in the right box are not connected within the box); (e) a box for which the values \( { \ell_{\text{B}}=5 } \) and \( { r_{\text{B}}=2 } \) verify the relation \( { \ell_{\text{B}}=2r_{\text{B}}+1 } \); (f) one of the pathological cases where this relation is not valid, since \( { \ell_{\text{B}}=3 } \) and \( { r_{\text{B}}=2 } \).

Burning with the Diameter \( \ell_{\text{B}} \), and the Compact‐Box‐Burning (CBB) Algorithm

The basic idea of the CBB algorithm for the generation of a box is to start from a given box center and then expand the box so that it includes the maximum possible number of nodes, while keeping the maximum distance between any two nodes in the box below \( { \ell_{\text{B}} } \). The CBB algorithm proceeds as follows (see Fig. 15; a code sketch is given after the list of steps):

Figure 15: Illustration of the CBB algorithm for \( { \ell_{\text{B}}=3 } \). (a) Initially, all nodes are candidates for the box. (b) A random node is chosen, and nodes at a distance further than \( { \ell_{\text{B}} } \) from this node are no longer candidates. (c) The node chosen in (b) becomes part of the box and another candidate node is chosen. The above process is then repeated until the box is complete.

  1. Initially, mark all nodes as uncovered.

  2. Construct the set C of all yet uncovered nodes.

  3. Choose a random node p from the candidate set C and remove it from C.

  4. Remove from C all nodes i whose distance from p is \( { \ell_{pi}\geq\ell_{\text{B}} } \), since by definition they cannot belong in the same box.

  5. Repeat steps (3) and (4) until the candidate set is empty.

  6. Repeat from step (2) until the whole network has been covered.
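A straightforward transcription of these steps into Python/networkx might look as follows (a sketch for a single \( { \ell_{\text{B}} } \); the function name is hypothetical and a connected network is assumed):

```python
import random
import networkx as nx

def cbb_boxes(G, l_B):
    """Compact-Box-Burning: returns a list of boxes covering G for a given l_B."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    uncovered = set(G.nodes())                     # step 1
    boxes = []
    while uncovered:                               # step 6
        candidates = set(uncovered)                # step 2
        box = set()
        while candidates:                          # step 5
            p = random.choice(list(candidates))    # step 3
            candidates.discard(p)
            box.add(p)
            # step 4: drop candidates lying at distance >= l_B from p
            candidates = {i for i in candidates if dist[p][i] < l_B}
        boxes.append(box)
        uncovered -= box
    return boxes
```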

Random Box Burning

In 2006, J. S. Kim et al. presented a simple algorithm for the calculation of the fractal dimension of networks [42,43,44] (a code sketch follows the list of steps):

  1. Pick a randomly chosen node in the network as the seed of a new box.

  2. Search using the breadth‐first search algorithm up to distance \( { \ell_{\text{B}} } \) from the seed. Assign all newly burned nodes to the new box. If no new node is found, discard the box and start again from (1).

  3. Repeat (1) and (2) until all nodes have been assigned to a box.
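A minimal sketch of this procedure, under the same Python/networkx assumptions as the previous examples (the function name is hypothetical; the seed may be any node, as in step (1), so seeds that burn nothing new are simply discarded):

```python
import random
import networkx as nx

def random_box_burning(G, l_B):
    """Random Box Burning: random seeds claim all newly burned nodes
    found by a breadth-first search up to distance l_B."""
    unassigned = set(G.nodes())
    boxes = []
    while unassigned:
        seed = random.choice(list(G.nodes()))                      # step 1
        ball = nx.single_source_shortest_path_length(G, seed,
                                                     cutoff=l_B)   # step 2: BFS
        new_nodes = set(ball) & unassigned
        if not new_nodes:              # no new node found: discard and retry
            continue
        boxes.append(new_nodes)
        unassigned -= new_nodes        # step 3: repeat until all are assigned
    return boxes
```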

This Random Box Burning algorithm has the advantage of being fast and simple. At the same time, however, it employs no optimization during the network coverage. This simple Monte Carlo method is therefore almost certain to yield a solution far from the optimal one, so one needs to perform many independent realizations and retain only the smallest number of boxes found among them.

Burning with the Radius \( r_{\text{B}} \), and the Maximum‐Excluded‐Mass‐Burning (MEMB) Algorithm

A box of size \( { \ell_{\text{B}} } \) includes nodes such that the distance between any pair of nodes in the box is less than \( { \ell_{\text{B}} } \). It is possible, though, to grow a box from a given central node, so that all nodes in the box are within distance at most a given box radius \( { r_{\text{B}} } \) from that node (\( { r_{\text{B}} } \) is the maximum allowed distance from the central node). In this way one still recovers the same fractal properties of a network. For the original definition of the box, \( { \ell_{\text{B}} } \) corresponds to the box diameter (the maximum distance between any two nodes in the box) plus one. Thus, \( { \ell_{\text{B}} } \) and \( { r_{\text{B}} } \) are connected through the simple relation \( { \ell_{\text{B}} = 2 r_{\text{B}}+1 } \). This relation is exact for loopless configurations, but there exist cases where it does not hold (Fig. 14).

Burning with the radius \( { r_{\text{B}} } \) from a randomly chosen center yields the optimal solution for homogeneous (non scale-free) networks, since there the choice of the central node is not important. In inhomogeneous networks with a wide‐tailed degree distribution, however, such as scale‐free networks, this simple scheme fails to achieve an optimal solution because of the presence of hubs (Fig. 16).

Figure 16: Burning with the radius \( { r_{\text{B}} } \) from (a) a hub node or (b) a non‐hub node results in very different network coverage. In (a) we need just one box of \( { r_{\text{B}}=1 } \), while in (b) 5 boxes are needed to cover the same network. This is an intrinsic problem when burning with the radius. (c) Burning with the maximum distance \( { \ell_{\text{B}} } \) (in this case \( { \ell_{\text{B}}=2r_{\text{B}}+1=3 } \)) avoids this situation, since independently of the starting point we would still obtain \( { N_{\text{B}}=1 } \).

MEMB, in contrast to Random Box Burning and CBB, attempts to locate optimal central nodes which act as the burning origins for the boxes. It contains as a special case the choice of the hubs as box centers, but it also allows low‐degree nodes to be burning centers, which is sometimes needed to find a solution closer to the optimal one.

The following algorithm uses the basic idea of box optimization, in which each box covers the maximum possible number of nodes. For a given burning radius \( { r_{\text{B}} } \), the excluded mass of a node is defined as the number of uncovered nodes within chemical distance \( { r_{\text{B}} } \) of it. First, calculate the excluded mass for all the uncovered nodes. Then, seek to cover the network with boxes of maximum excluded mass. The details of this algorithm are as follows (see Fig. 17; a code sketch covering both stages is given after the box‐assignment steps below):

  1. Initially, all nodes are marked as uncovered and as non‐centers.

  2. For all non‐center nodes (including the already covered nodes) calculate the excluded mass, and select the node p with the maximum excluded mass as the next center.

  3. Mark all nodes within chemical distance \( { r_{\text{B}} } \) from p as covered.

  4. Repeat steps (2) and (3) until all nodes are either covered or centers.

Figure 17: Illustration of the MEMB algorithm for \( { r_{\text{B}}=1 } \). Upper row: calculation of the box centers. (a) We calculate the excluded mass for each node. (b) The node with maximum mass becomes a center and the excluded masses are recalculated. (c) A new center is chosen; the entire network is now covered by these two centers. Bottom row: calculation of the boxes. (d) Each box initially includes only the center; starting from the centers we calculate the distance of each network node to the closest center. (e) We assign each node to its nearest box.

Notice that the excluded mass has to be recalculated in each step, because it may have changed after the previous center was selected. A box center can also be an already covered node, since this may lead to a larger box mass. After the above procedure, the number of selected centers coincides with the number of boxes \( { N_{\text{B}} } \) that completely cover the network. However, the non‐center nodes have not yet been assigned to a box. This is done in the next stage:

  1. Give a unique box id to every center node.

  2. For every node calculate the "central distance", i.e. the chemical distance to its nearest center. The central distance is at most \( { r_{\text{B}} } \), and the center‐identification algorithm above guarantees that such a center always exists. All center nodes have a central distance equal to 0.

  3. Sort the non‐center nodes in a list according to increasing central distance.

  4. For each non‐center node i, at least one of its neighbors has a central distance smaller than its own. Assign to i the same box id as this neighbor (if there are several such neighbors, randomly select one of them). Remove i from the list.

  5. Repeat step (4), following the sequence of the list from step (3), for all non‐center nodes.
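Both stages of MEMB can be sketched as follows, again assuming Python/networkx and taking "within the radius" to mean chemical distance at most \( { r_{\text{B}} } \), consistent with \( { \ell_{\text{B}}=2r_{\text{B}}+1 } \). The excluded masses are recomputed from scratch at every step, which is simple but not efficient, and the function name is hypothetical.

```python
import random
import networkx as nx

def memb_boxes(G, r_B):
    """Maximum-Excluded-Mass-Burning for radius r_B: center selection
    followed by assignment of every node to its nearest center's box."""
    nodes = set(G.nodes())

    def ball(p):
        # nodes within chemical distance r_B of p (p itself included)
        return set(nx.single_source_shortest_path_length(G, p, cutoff=r_B))

    # Stage 1: choose the box centers by maximum excluded mass.
    covered, centers = set(), set()
    while covered != nodes:
        # excluded mass = number of still-uncovered nodes within r_B
        p = max(nodes - centers, key=lambda n: len(ball(n) - covered))
        centers.add(p)
        covered |= ball(p)

    # Stage 2: assign every node to the box of its nearest center.
    box_of = {c: c for c in centers}     # each center seeds its own box id
    central = {n: min(nx.shortest_path_length(G, n, c) for c in centers)
               for n in nodes}           # central distance (at most r_B)
    for n in sorted(nodes - centers, key=central.get):
        # some neighbor is strictly closer to a center and already assigned
        closer = [m for m in G[n] if central[m] < central[n]]
        box_of[n] = box_of[random.choice(closer)]

    boxes = {}
    for n, b in box_of.items():
        boxes.setdefault(b, set()).add(n)
    return list(boxes.values())
```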

Comparison Between Algorithms

The choice of the algorithm to be used for a problem depends on the details of the problem itself. If connected boxes are a requirement, MEMB is the most appropriate algorithm; but if one is only interested in obtaining the fractal dimension of a network, the greedy‐coloring or the random box burning are more suitable since they are the fastest algorithms.

As explained previously, any algorithm should aim to find the optimal solution, that is, the minimum number of boxes that cover the network. Figure 18 shows the performance of each algorithm. The greedy‐coloring, CBB and MEMB algorithms exhibit a narrow distribution of the number of boxes, evidence that they cover the network with a number of boxes close to the optimal solution. In contrast, Random Box Burning returns a wider distribution whose average lies far above that of the other algorithms. Because of the great ease and speed with which this technique can be implemented, it would be useful to show that the average number of covering boxes is overestimated by a fixed proportionality constant; in that case, despite the error, the predicted number of boxes would still yield the correct scaling and fractal dimension.
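A rough way to reproduce such a comparison with the sketches above (all function names are the hypothetical ones introduced in this appendix's examples, and the Barabási-Albert graph is merely a convenient stand-in for a scale-free test network):

```python
import networkx as nx

# Hypothetical test network and box sizes consistent with l_B = 2*r_B + 1.
G = nx.barabasi_albert_graph(1000, 2)
l_B, r_B = 3, 1

print("greedy coloring:", greedy_coloring_box_count(G)[l_B])
print("CBB:            ", len(cbb_boxes(G, l_B)))
print("random burning: ", len(random_box_burning(G, l_B)))
print("MEMB:           ", len(memb_boxes(G, r_B)))
```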

Figure 18: Comparison of the distribution of \( { N_{\text{B}} } \) for \( { 10^4 } \) realizations of the four network covering methods presented here. Three of these methods yield very similar results, with narrow distributions and comparable minimum values, while the random burning algorithm fails to reach a value close to this minimum and yields a broad distribution.


Copyright information

© 2009 Springer-Verlag

About this entry

Cite this entry

Rozenfeld, H.D., Gallos, L.K., Song, C., Makse, H.A. (2009). Fractal and Transfractal Scale-Free Networks. In: Meyers, R. (eds) Encyclopedia of Complexity and Systems Science. Springer, New York, NY. https://doi.org/10.1007/978-0-387-30440-3_231
