Let $T_i:\mathbb{R}^{n_i}\to\mathbb{R}^{n_{i+1}}$ be a fully connected layer, and let $\sigma:\mathbb{R}\to\mathbb{R}$ be an activation function. Their composition is called a multilayer perceptron (MLP).
$$\mathrm{MLP}(x) = T_N \circ \sigma \circ T_{N-1} \circ \sigma \circ \cdots \circ T_1(x)$$
Here, $\sigma$ applied to a vector denotes the map that applies $\sigma$ to each component:
$$\sigma(x) = \begin{pmatrix} \sigma(x_1) \\ \sigma(x_2) \\ \vdots \\ \sigma(x_n) \end{pmatrix}, \quad \text{where } x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$$
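The following is a minimal sketch of this definition in NumPy. It assumes a fully connected layer means the affine map $x \mapsto Wx + b$ and uses ReLU as an illustrative choice of $\sigma$; the layer sizes and random weights are placeholders, not part of the definition above.

```python
import numpy as np

def fully_connected(W, b):
    """Return the affine layer T(x) = W x + b (assumed form of a fully connected layer)."""
    return lambda x: W @ x + b

def sigma(x):
    """Componentwise activation; ReLU is used here as one possible sigma."""
    return np.maximum(x, 0.0)

def mlp(layers):
    """Compose T_N ∘ sigma ∘ T_{N-1} ∘ ... ∘ sigma ∘ T_1."""
    def forward(x):
        for T in layers[:-1]:
            x = sigma(T(x))       # apply T_i, then the componentwise activation
        return layers[-1](x)      # final layer T_N, no activation afterwards
    return forward

# Example: an MLP mapping R^3 -> R^4 -> R^2 with randomly chosen weights.
rng = np.random.default_rng(0)
T1 = fully_connected(rng.normal(size=(4, 3)), np.zeros(4))
T2 = fully_connected(rng.normal(size=(2, 4)), np.zeros(2))
f = mlp([T1, T2])
print(f(np.array([1.0, -2.0, 0.5])))  # a vector in R^2
```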
Explanation
It is called a multilayer perceptron because it is formed by composing single-layer perceptrons, built primarily from fully connected layers, multiple times. It is also referred to as a fully connected neural network; both terms describe the same neural network.