How to Define Artificial Neural Network Layers with Lists and Loops in PyTorch

Explanation

If there are many layers to stack, or if the structure of the neural network needs to change frequently, it is convenient to automate the layer definitions. In such cases, you might think of defining them with a list comprehension and a for loop, as follows.

import torch
import torch.nn as nn
import torch.nn.functional as F

n, m = 10, 5  # example: width of each layer and number of layers

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()

        # layers kept in a plain Python list -> not registered as submodules
        self.fc_ = [nn.Linear(n, n) for i in range(m)]

    def forward(self, x):
        for i in range(m):
            x = self.fc_[i](x)
            x = F.relu(x)

        return x

However, if you hold the layers in a plain Python list like this, PyTorch cannot register the parameters of each nn.Linear layer as parameters of the module. As a result, they are never passed to the optimizer, and an error like the following occurs when the optimizer is defined.

ValueError: optimizer got an empty parameter list
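
In fact, checking parameters() shows that nothing was registered. The sketch below reuses the Model class (and the placeholder n, m) from the snippet above; the last line reproduces the error.

model = Model()
print(list(model.parameters()))  # [] -> the plain Python list registered nothing

# raises ValueError: optimizer got an empty parameter list
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)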

Therefore, if you want to construct the neural network from a list of layers, you must wrap the list in nn.ModuleList() and assign it as an instance attribute (self.) in __init__(). Once defined this way, indexing, slicing, appending, and so on work just like with a regular list, as shown in the sketch below.
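
A small standalone sketch of these list-like operations (the layer sizes here are arbitrary):

import torch.nn as nn

layers = nn.ModuleList([nn.Linear(4, 4) for i in range(3)])

print(layers[0])                # indexing: the first Linear layer
print(layers[1:])               # slicing: returns another nn.ModuleList
layers.append(nn.Linear(4, 2))  # adding a single layer
layers.extend([nn.ReLU()])      # extending with more modules
print(len(layers))              # 5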

Below is code that defines the layers of the network and the forward pass with a list and a loop, and also initializes the weights.

Code 1

import torch
import torch.nn as nn
import torch.nn.functional as F

n, m = 10, 5  # example: width of each layer and number of layers

class fx_Net(nn.Module):
    def __init__(self):
        super(fx_Net, self).__init__()

        fc_ = [nn.Linear(n, n) for i in range(m)]
        self.linears = nn.ModuleList(fc_)  # registers every layer's parameters

        # initialize the weights of every layer to zero
        for layer in self.linears:
            torch.nn.init.zeros_(layer.weight.data)

    def forward(self, x):
        for i in range(m):
            x = self.linears[i](x)
            x = F.relu(x)

        return x
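
As a quick check, the parameters are now visible and the optimizer can be created without error (a minimal sketch assuming the fx_Net class and the placeholder n, m above):

model = fx_Net()
print(len(list(model.parameters())))  # 2*m tensors: one weight and one bias per layer

# no longer raises an error: the parameter list is non-empty
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(1, n)
print(model(x).shape)  # torch.Size([1, 10])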

Environment

  • OS: Windows 10
  • Version: Python 3.9.2, torch 1.8.1+cu111