Various Deep Learning Frameworks of Julia
Overview
- Last modified date: November 22, 2022
Flux.jl is Julia’s representative deep learning framework. Alongside it, this post briefly introduces other frameworks such as Knet.jl and Lux.jl.
Description
Flux
Flux is the de facto standard deep learning framework of Julia. Various packages, such as GraphNeuralNetworks.jl and the SciML ecosystem, are built on top of Flux. Although Flux still lags behind TensorFlow and PyTorch in functionality and features, it may catch up as time goes by. The documentation initially seemed sparse and unhelpful, but it has been improving steadily.
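For reference, a minimal sketch of what a training step in Flux looks like; the layer sizes, data, and learning rate here are all hypothetical:

```julia
using Flux

# A small two-layer network; the sizes are hypothetical.
model = Chain(Dense(2 => 8, relu), Dense(8 => 1))

# Dummy data: features along columns, 16 samples.
x = rand(Float32, 2, 16)
y = rand(Float32, 1, 16)

# One training step with the classic implicit-parameter API.
loss(x, y) = Flux.mse(model(x), y)
opt = Adam(0.01)
Flux.train!(loss, Flux.params(model), [(x, y)], opt)
```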
However, Flux’s real weakness is neither the documentation nor the small user base. Of course it would be nice if both were excellent, but from a researcher’s perspective one cannot rely solely on others’ work and must study and use it oneself. Personally, as of this writing, the most significant drawback is the lack of support for mutating arrays in Flux’s automatic differentiation backend (Zygote.jl). This makes it difficult to implement desired operations, and the two frameworks introduced below are affected by the same issue.
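To illustrate the issue, a minimal sketch: Zygote raises an error when asked to differentiate code that writes into an array, so such code must be rewritten in a non-mutating style.

```julia
using Zygote  # Flux's default AD backend

# Differentiating through an in-place write fails:
f(x) = (y = zeros(3); y[1] = x; sum(y))
# Zygote.gradient(f, 1.0)  # ERROR: Mutating arrays is not supported

# A non-mutating rewrite of the same computation works:
g(x) = sum([x, 0.0, 0.0])
Zygote.gradient(g, 1.0)    # (1.0,)
```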
Knet
Despite the name, it has nothing to do with Korea; Knet was developed at Koç University in Turkey. Its well-organized tutorials, documentation, examples, and benchmarks are a significant advantage, which may make Knet a better starting point than Flux for beginners.
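As a point of comparison, a minimal sketch of Knet’s style, where parameters are wrapped in `Param` and `@diff` records a tape for reverse-mode differentiation; the linear model, data, and learning rate are hypothetical.

```julia
using Knet

# Hypothetical linear model: parameters wrapped in Param for AutoGrad.
w = Param(0.1 * randn(1, 3))
b = Param(zeros(1))
predict(x) = w * x .+ b
loss(x, y) = sum(abs2, predict(x) .- y) / size(y, 2)

# Dummy data and one manual SGD step.
x, y = randn(3, 10), randn(1, 10)
J = @diff loss(x, y)                  # records the computation tape
for p in (w, b)
    value(p) .-= 0.1 .* grad(J, p)    # gradient read off the tape
end
```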
Lux
Lux is a deep learning framework designed to be friendly to compilers and automatic differentiation, and it is implemented entirely with pure functions: models are immutable, and parameters and state are passed in explicitly. Reading the description on its website, the advantages over the two frameworks above are not immediately obvious; the documentation itself seems to suggest using Lux alongside Flux.
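A minimal sketch of what “pure functions” means in practice: in Lux, the model object only describes the architecture, while parameters and state are created separately and passed explicitly on every call (the layer sizes here are hypothetical).

```julia
using Lux, Random

rng = Random.default_rng()

# The model holds no parameters; it only describes the architecture.
model = Chain(Dense(2 => 8, relu), Dense(8 => 1))

# Parameters and state are created explicitly, outside the model.
ps, st = Lux.setup(rng, model)

# A pure call: the model takes (input, parameters, state)
# and returns (output, new state) without mutating anything.
x = rand(Float32, 2, 16)
y, st = model(x, ps, st)
```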