Eigenvector Centrality in Network Theory
Definition 1
For a network $\left( V , E \right)$ with adjacency matrix $A$, the $i$-th component of the eigenvector $\mathbf{v}_{1}$ corresponding to the spectral radius $\lambda_{1}$ of $A$ is called the eigenvector centrality of the $i$-th node $v_{i}$.
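As a quick numerical check of this definition, here is a minimal sketch in Python (the 4-node adjacency matrix is an assumption made purely for illustration) that obtains $\mathbf{v}_{1}$ with NumPy and reads off each node's centrality.

```python
import numpy as np

# A small undirected example network (an assumed toy graph, chosen
# purely for illustration): edges 1-2, 1-3, 2-3, 2-4.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
], dtype=float)

# For an undirected network A is symmetric, so np.linalg.eigh applies;
# it returns eigenvalues in ascending order. For a nonnegative matrix
# the spectral radius lambda_1 equals the largest eigenvalue.
eigenvalues, eigenvectors = np.linalg.eigh(A)
v1 = eigenvectors[:, -1]

# The Perron-Frobenius theorem lets us choose v1 entrywise positive on a
# connected network; fix the sign and normalize to unit sum.
v1 = np.abs(v1) / np.abs(v1).sum()

print(v1)  # v1[i] is the eigenvector centrality of node v_i
```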
Description
Unlike degree centrality or closeness centrality, which can be understood intuitively from their formulas, eigenvector centrality may be hard to grasp without a background in mathematics.
Intuitive Meaning
A detailed explanation requires working through its derivation, but simply put, it represents the influence a node has within a network. Because this influence is somewhat diffuse and cannot be captured merely through a node's relations with its immediate neighbors, the more involved machinery of eigenvectors is needed to overcome the limitations of measures like degree centrality.
Derivation
Let’s assume that each node $v_{i} \in V$ is assigned some form of centrality $x_{i}$. While it is not yet clear how this centrality is calculated, we assume that a node is valued more highly when the centralities of its neighbors are larger. This mirrors how celebrities are cast in shows alongside others of comparable standing and field, or how researchers tend to collaborate with those working on similar topics. Thus, $x_{i}$ can be expressed as the sum of the centralities $\left\{ x_{j} \right\}$ of adjacent nodes $\left\{ v_{j} \right\}$:
$$ x_{i} = \sum_{j} A_{ij} x_{j} $$
However, since each $x_{j}$ on the right-hand side can itself change, this cannot be evaluated in a single step. For now, let’s add a variable for the number of iterations $t$ and write it as:
$$ x_{i} (t+1) = \sum_{j} A_{ij} x_{j} (t) $$
In matrix form, this becomes:
$$ \mathbf{x} (t+1) = A \mathbf{x} (t) $$
This is a deterministic system whose trajectory depends on $\mathbf{x} (0)$. Although there is no particular reason to prefer one initial point for our desired centrality vector $\mathbf{x} (t)$, let’s write $\mathbf{x} (0)$ as a linear combination of the eigenvectors of $A$ for some set of constants $\left\{ c_{j} \right\} \subset \mathbb{R}$:
$$ \mathbf{x} (0) = c_{1} \mathbf{v}_{1} + \cdots + c_{n} \mathbf{v}_{n} = \sum_{j=1}^{n} c_{j} \mathbf{v}_{j} $$
Assuming the eigenvectors $\left\{ \mathbf{v}_{j} \right\}$ are indexed so that the corresponding eigenvalues $\left\{ \lambda_{j} \right\}$ of $A$ are sorted in decreasing absolute value, each eigenvector $\mathbf{v}_{j}$ satisfies
$$ A \mathbf{v}_{j} = \lambda_{j} \mathbf{v}_{j} \implies A^{t} \mathbf{v}_{j} = \lambda_{j}^{t} \mathbf{v}_{j} $$
which means:
$$ \begin{align*} \mathbf{x} (t) =& A^{t} \mathbf{x} (0) \\ =& A^{t} \sum_{j=1}^{n} c_{j} \mathbf{v}_{j} \\ =& \sum_{j=1}^{n} c_{j} A^{t} \mathbf{v}_{j} \\ =& \sum_{j=1}^{n} c_{j} \lambda_{j}^{t} \mathbf{v}_{j} \\ =& \lambda_{1}^{t} \sum_{j=1}^{n} c_{j} \left( {{ \lambda_{j} } \over { \lambda_{1} }} \right) ^{t} \mathbf{v}_{j} \end{align*} $$
Since the eigenvalues $\left\{ \lambda_{j} \right\}$ are sorted by absolute value, $\left| \lambda_{j} / \lambda_{1} \right| < 1$ holds for $j \ne 1$ (strict dominance of $\lambda_{1}$ is guaranteed by the Perron–Frobenius theorem when the network is connected and non-bipartite), and for sufficiently large $t$ the following approximation holds:
$$ \begin{align*} \mathbf{x} (t) =& \lambda_{1}^{t} c_{1} \left( {{ \lambda_{1} } \over { \lambda_{1} }} \right) ^{t} \mathbf{v}_{1} + \lambda_{1}^{t} c_{2} \left( {{ \lambda_{2} } \over { \lambda_{1} }} \right) ^{t} \mathbf{v}_{2} + \cdots + \lambda_{1}^{t} c_{n} \left( {{ \lambda_{n} } \over { \lambda_{1} }} \right) ^{t} \mathbf{v}_{n} \\ \approx & \lambda_{1}^{t} c_{1} \mathbf{v}_{1} + 0 + \cdots + 0 \end{align*} $$
provided $c_{1} \ne 0$, which holds for almost every choice of $\mathbf{x} (0)$. What matters here is that the centrality vector $\mathbf{x} (t)$ obtained after $t$ iterations ultimately scales with the eigenvector corresponding to $\lambda_{1}$, essentially
$$ \begin{bmatrix} x_{1} \\ \vdots \\ x_{n} \end{bmatrix} = \mathbf{x} (t) \sim \mathbf{v}_{1} $$
Hence, we bypass the ‘repeat for $t$ times’ step and directly take the $i$-th component of the eigenvector $\mathbf{v}_{1}$ as the centrality $x_{i}$ of the node we wanted to calculate in the first place.
■
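The iteration $\mathbf{x} (t+1) = A \mathbf{x} (t)$ used in the derivation is exactly power iteration, so the conclusion can also be checked by simply repeating the multiplication. Below is a minimal sketch under the assumptions stated above ($c_{1} \ne 0$ and a strictly dominant $\lambda_{1}$); the per-step rescaling is added only to keep $\lambda_{1}^{t}$ from overflowing and does not change the direction of $\mathbf{x} (t)$.

```python
import numpy as np

def power_iteration(A: np.ndarray, t: int = 100) -> np.ndarray:
    """Iterate x(t+1) = A x(t) as in the derivation; a sketch assuming
    the network is connected and non-bipartite, so that lambda_1 is
    strictly dominant and the iteration converges."""
    x = np.ones(A.shape[0])      # x(0) = (1, ..., 1) gives c_1 != 0
    for _ in range(t):
        x = A @ x
        x /= np.linalg.norm(x)   # rescale so lambda_1^t cannot overflow
    return x

# The same assumed toy adjacency matrix as before.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
], dtype=float)

x = power_iteration(A)
print(x / x.sum())  # matches the normalized v1 from the eigh computation
```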
See Also
Various Centralities of a Network
Newman. (2010). Networks: An Introduction: p170.