

Implementing MLP in Julia Flux to Approximate Nonlinear Functions

Start

Import the necessary packages and define the nonlinear function we want to approximate.
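A minimal sketch of this step follows. The specific target $f(x) = x \sin x$ and the use of Plots.jl for visualization are assumptions made here for illustration; any smooth nonlinear function works the same way.

```julia
using Flux
using Plots

# Target nonlinear function to approximate.
# The exact form is an assumption; any smooth nonlinear map will do.
f(x) = x * sin(x)

# Visualize the target over the domain [-5, 5].
plot(-5:0.1:5, f, label="f(x)")
```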

[Figure: nonlinear_function.png, the graph of the target function $f$]

Creating the Training Set

From the function's domain $[-5, 5]$, 1024 random points were sampled. These points are Float64 by default, but deep learning typically works with the Float32 data type, so they were converted. Of course, the model can also accept input types such as Float64 or Int64 and convert them automatically at run time.

Moreover, since Flux treats each column as a single data point (if this is confusing, think about how matrix multiplication works in a Dense layer), the length-1024 vector was reshaped into a $1 \times 1024$ matrix.

Using the function $f$ defined above, labels were generated, and the data was wrapped with Flux.DataLoader so it can be used for training.
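A sketch of this step, assuming a batch size of 128 (the original batch size is not specified):

```julia
# 1024 uniform random points from [-5, 5]; rand gives Float64,
# so convert to Float32 for the network.
x = Float32.(10 .* rand(1024) .- 5)

# One data point per column: reshape the vector into a 1×1024 matrix.
X = reshape(x, 1, :)

# Labels from the target function f, also 1×1024.
Y = f.(X)

# Wrap the data so Flux.train! can iterate over it in mini-batches.
dataloader = Flux.DataLoader((X, Y), batchsize=128, shuffle=true)
```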

Model Definition

An MLP was created with Chain(). Plotting its graph before training reveals that it is completely different from $f$.
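A sketch of the model definition; the layer widths and the relu activation are assumptions, and the original architecture may differ:

```julia
# A small MLP: 1 input, two hidden layers of width 32, 1 output.
# Widths and the relu activation are assumed here.
model = Chain(
    Dense(1, 32, relu),
    Dense(32, 32, relu),
    Dense(32, 1),
)

# Plot the untrained model against f: the curves look nothing alike.
plot(-5:0.1:5, f, label="f(x)")
plot!(-5:0.1:5, t -> model([Float32(t)])[1], label="MLP (untrained)")
```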

[Figure: initial_MLP.png, the untrained MLP plotted against $f$]

Loss Function and Optimizer Definition

The loss function was defined as the mean squared error (MSE), and the optimizer as ADAM.
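With Flux 0.12 this can be sketched as follows, assuming ADAM's default hyperparameters:

```julia
# Mean squared error between the model's prediction and the label.
loss(x, y) = Flux.mse(model(x), y)

# ADAM optimizer with default settings.
opt = ADAM()

# The parameters that Flux.train! will update.
ps = Flux.params(model)
```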

Training

Now training can be performed with the @epochs macro. After repeating for 5000 epochs, it is evident that the loss has decreased significantly.
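A sketch of the training step; note that the @epochs macro is available in Flux 0.12.x but was removed in later releases:

```julia
using Flux: @epochs

# One Flux.train! call runs a single pass over the DataLoader;
# @epochs repeats it 5000 times, logging each epoch.
@epochs 5000 Flux.train!(loss, ps, dataloader, opt)

# The loss over the whole training set should now be much smaller.
println("final loss: ", loss(X, Y))
```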

The graph of the MLP now looks like this:

[Figure: trained_MLP.png, the graph of the trained MLP]

Full Code
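Combining the sketches above gives the following script; the target function, architecture, and batch size remain the assumptions noted earlier.

```julia
using Flux
using Flux: @epochs
using Plots

# Target nonlinear function (assumed form)
f(x) = x * sin(x)

# Training set: 1024 Float32 points in [-5, 5], one point per column
x = Float32.(10 .* rand(1024) .- 5)
X = reshape(x, 1, :)
Y = f.(X)
dataloader = Flux.DataLoader((X, Y), batchsize=128, shuffle=true)

# MLP (assumed widths and activation)
model = Chain(Dense(1, 32, relu), Dense(32, 32, relu), Dense(32, 1))

# MSE loss and ADAM optimizer
loss(x, y) = Flux.mse(model(x), y)
opt = ADAM()
ps = Flux.params(model)

# Train for 5000 epochs
@epochs 5000 Flux.train!(loss, ps, dataloader, opt)

# Compare the trained MLP with f
plot(-5:0.1:5, f, label="f(x)")
plot!(-5:0.1:5, t -> model([Float32(t)])[1], label="MLP (trained)")
```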

Environment

  • OS: Windows 10
  • Version: Julia 1.7.1, Flux 0.12.8