
Barabási-Albert Model

Algorithm[^1]

(Animation: zoomout.gif — a Barabási-Albert network growing node by node, generated by the visualization code below.)

Input

Given the link parameter $m \in \mathbb{N}$ and the network size $N \in \mathbb{N}$ with $N > m$.


Step 1. Initialization

Construct the initial network with $m$ nodes. Unless otherwise specified, this network is a complete graph.


Step 2. Adding Nodes

Let the current number of nodes be $n$. A new node is added and connected to $m$ of the existing nodes, where the probability of linking to the $k = 1 , \cdots , n$th node is as follows. $$ p_{k} = {{ \deg k } \over { \sum_{k=1}^{n} \deg k }} $$ Duplicate links are avoided by sampling without replacement.
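As a minimal sketch of this probability, consider a made-up degree sequence (the values below are purely illustrative, not produced by the algorithm):

```julia
# Attachment probabilities p_k = deg(k) / Σ deg(k) for a toy degree sequence.
# [3, 1, 1, 1] is a hypothetical example: node 1 is a small hub.
degrees = [3, 1, 1, 1]
p = degrees ./ sum(degrees)
# p == [0.5, 1/6, 1/6, 1/6]: the hub is three times as likely
# as any leaf to receive the next link.
```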


Step 3. Repeat

Repeat Step 2 while the number of nodes satisfies $n < N$. When the number of nodes reaches $n = N$, the algorithm terminates.
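The three steps can be sketched as follows. Note one deliberate simplification: to keep the sketch deterministic, the weighted random sampling of Step 2 is replaced by always picking the $m$ highest-degree nodes, which is not the actual algorithm; the genuine sampling version appears in the Implementation section below.

```julia
# Steps 1-3 with m = 2, N = 5; the network is stored as adjacency lists.
# NOTE: Step 2's weighted random sampling is replaced by a deterministic
# "pick the m highest-degree nodes" rule, purely to make the sketch reproducible.
m, N = 2, 5
G = [setdiff(1:m, k) for k in 1:m]                # Step 1: complete graph on m nodes
while length(G) < N                               # Step 3: repeat until n = N
    n = length(G)
    idxs = sortperm(length.(G), rev = true)[1:m]  # stand-in for Step 2's sampling
    push!(G, idxs)                                # new node n+1 links to idxs
    push!.(G[idxs], n + 1)                        # record the links on both sides
end
length.(G)   # final degree sequence: [4, 4, 2, 2, 2]
```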


Output

Obtains a scale-free network whose degree distribution follows a Pareto distribution $\text{Pareto} \left( 3 \right)$.[^2]

Description

The Barabási-Albert model is the simplest and most popular algorithm for generating scale-free networks. What is striking about the BA model is that, without any complicated formulas, it captures the inequality that can be considered the essence of the Pareto distribution. $$ p_{k} = {{ \deg k } \over { \sum_{k=1}^{n} \deg k }} $$ The significance of this probability is that the reason a new link forms is found directly in the links that already exist. Early arrivals have an advantage over latecomers, and advantaged nodes become ever more advantaged. If the concept of popularity is built into a network, that is exactly a Barabási-Albert network. For this reason, the mechanism is also called Preferential Attachment.
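One way to see the "early arrivals win" effect concretely, assuming the Graphs.jl generator that is also used in the visualization code below:

```julia
# Rich-get-richer check: in a BA network the earliest nodes should end up
# with a much higher average degree than the latest ones.
using Graphs, Statistics

g = barabasi_albert(1000, 3)       # Graphs.jl's built-in BA generator
early = mean(degree(g, 1:10))      # average degree of the first 10 nodes
late  = mean(degree(g, 991:1000))  # average degree of the last 10 nodes
early > late                       # true with overwhelming probability
```

The last nodes each arrive with only their own $m = 3$ links, while the first nodes have been collecting links throughout the entire growth process.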

Code

Visualization

Below is Julia code that creates a BA network and visualizes its growth process.

@time using Graphs
@time using Plots, GraphRecipes

N = 200
# random node positions: node i sits at radius i/N with a random angle,
# so later nodes appear farther from the center
X = (1:N)/N .* cos.(2π * rand(N))
Y = (1:N)/N .* sin.(2π * rand(N))

G = barabasi_albert(N, 3)
graph_evolution = @animate for n ∈ 4:N
  graphplot(G[1:n], x = X[1:n], y = Y[1:n],   # induced subgraph on the first n nodes
   curvature = 0, linealpha = 0.2, markerstrokewidth = 0, nodecolor = :gray,
   size = (300,400), nodesize = 0.1)
  xlims!(-n/N,n/N); ylims!(-n/N,n/N)          # zoom out as the network grows
end
gif(graph_evolution, "zoomout.gif", fps = 16)

Implementation

First, a function `randw()` for weighted sampling is implemented; then the function `barabasi_albert()` for creating a Barabási-Albert network is implemented without relying on any packages.

function randw(item, weight)
    @assert length(item) == length(weight) "length mismatch"
    @assert any(weight .> 0) "all weights are zero"
    @assert all(weight .≥ 0) "there is a negative weight"
    
    Σ = sum(weight)
    if !isone(Σ)
        weight = weight ./ Σ    # normalize so the weights sum to 1
    end
    return item[findfirst(rand() .≤ cumsum(weight))]   # inverse-CDF sampling
end
randw(['a', 'b', 'c'], [1,2])      # error: length mismatch
randw(['a', 'b', 'c'], [0,0,0])    # error: all weights are zero
randw(['a', 'b', 'c'], [1,2,-1])   # error: there is a negative weight
randw(['a', 'b', 'c'], [1,2,3])    # returns 'a', 'b', or 'c'
abc = [randw(['a', 'b', 'c'], [1,2,3]) for _ in 1:6000]
count.([abc .== 'a', abc .== 'b', abc .== 'c'])   # roughly [1000, 2000, 3000]

function randw(item, weight, n; replace = true)
    @assert n ≤ length(item) "n = $n is larger than the length of item = $(length(item))"

    while true
        sample = [randw(item, weight) for _ in 1:n]
        if (length(unique(sample)) == n) || replace
            return sample   # rejection sampling: retry until all n draws are distinct
        end
    end
end
randw(-10:-1, 1:10, 5)                   # may contain duplicates
randw(-10:-1, 1:10, 5, replace = false)  # all five elements distinct

function barabasi_albert(N, m = 1)
    G = [setdiff(1:m, k) for k in 1:m]   # adjacency lists of a complete graph on m nodes
    for new in (m+1):N
        # deg(k) == length(G[k]), so the degrees serve directly as sampling weights
        idxs = randw(1:(new-1), length.(G), m, replace = false)
        push!(G, idxs)        # link the new node to its m targets
        push!.(G[idxs], new)  # record the links on the targets' side as well
    end
    return G
end

ba = barabasi_albert(1000, 3)
degree = length.(ba)   # the degree of node k is the length of its adjacency list

using Plots
histogram(degree)
histogram(degree, xscale = :log10, yscale = :log10, xticks = 10.0 .^(0:0.5:2), xlims = extrema(degree))

[^1]: Albert, Barabási. (2002). Statistical mechanics of complex networks. https://doi.org/10.1103/RevModPhys.74.47

[^2]: Barabási, Albert. (1999). Emergence of Scaling in Random Networks. https://doi.org/10.1126/science.286.5439.509