Machine learning is revolutionizing the entire tech industry. Ever since the advent of deep learning, we've seen extraordinary progress in computer vision and natural language processing. Deep learning has absolutely dominated computer vision over the last few years, achieving top scores on many benchmarks and their related competitions, and most people have noticed the quantum leaps in quality across a wide range of everyday technologies.
But here's the problem: there are many deep learning libraries available for implementing deep, complex neural networks: TensorFlow, CNTK, PyTorch, Caffe, MXNet, CoreML. Each framework works differently; some build static computation graphs, while others are completely dynamic. So here's the question engineers have been trying to solve:
Is it possible to develop a common format that lets us take a model built in framework X and run it in another framework Y?
The answer to that question is yes. It is possible to train a model entirely in Caffe and then load and run it in PyTorch. The reason is ONNX, the Open Neural Network Exchange: an open format for interchangeable machine learning models. It uses Google's Protocol Buffers to serialize a model into a common .onnx representation, together with a set of operator specifications for reading it back into another framework.
Although ONNX is still at an early stage of development, it shows great promise and has earned kudos from data scientists and machine learning practitioners alike. It makes transfer learning cross-platform, which is amazing.
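To make the idea concrete, here is a tiny, purely illustrative sketch in Julia using Flux (more on Flux in a moment). ONNXNode, to_flux and build_chain are placeholder names of my own invention, not the real ONNX.jl API; the point is only that a serialized graph of named operators can be replayed as native layers in another framework.

# Purely illustrative: an ONNX graph is, roughly, an ordered list of named
# operator nodes, and each node can be mapped onto a native Flux layer.
using Flux

# Stand-in for a node parsed out of the protobuf-encoded .onnx file
# (the real proto also carries inputs, outputs, initializers, ...).
struct ONNXNode
    op::String
    attrs::Dict{String,Any}
end

# Map a few ONNX operator names onto their Flux equivalents.
function to_flux(node::ONNXNode)
    node.op == "Relu"    && return relu
    node.op == "Softmax" && return softmax
    node.op == "Gemm"    && return Dense(node.attrs["W"], node.attrs["b"])
    error("unsupported op: $(node.op)")
end

# Composing the translated nodes in order gives back a runnable Flux model.
build_chain(nodes) = Chain(map(to_flux, nodes)...)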
My Project
There was a time when I used Python for all of my data science work; now I can easily say I'm drifting towards Julia. Julia is steadily being adopted by data scientists, economists, and software engineers alike. This comes as no surprise: the language is easy and dynamic, a few lines of Julia code can do wonders (yes, fewer lines than Python), and it is still fast on the clock. These qualities make it a great language for machine learning and data science.
For those who don't know, Flux is a purely Julian ML framework written by one of my mentors, Mike Innes. It provides an elegant, easy-to-hack approach to implementing deep learning models; coding networks in Flux is about as simple as writing out the underlying mathematical equations. Here is a little taste of a neural network in Flux:
using Flux

model = Chain(Dense(10, 5, σ),
              Dense(5, 2),
              softmax)
Yes, that's it. A two-layer neural network (one hidden layer of 5 units feeding a 2-way softmax output), defined in a single expression!
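And because a Chain is an ordinary callable Julia object, running a forward pass is just as terse. The random input below is only for illustration:

x = rand(Float32, 10)   # a dummy 10-dimensional input
model(x)                # a 2-element vector of class probabilities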
With that context, here are the broad goals of my project:
1. Load diverse models into Flux.
2. Help grow Flux into a more complete framework by incorporating functionality found in other frameworks.
3. Object detection in Julia, something I'm really excited about.
Objectives for the next two weeks
As the official coding period begins, here are the targets I'd like to hit over the next two weeks, along with the troubles I'm most likely to face:
1. Continue work on the operators (ops.jl) for the ONNX models. This involves reading the docs for each operator and writing the corresponding DataFlow code.
2. Try to load most of the major ONNX models into Flux. The biggest problem here is that ONNX itself is still under development, so many of the ONNX models hosted in its GitHub repo aren't perfect. The repo is updated every week, and I need to keep track of all the changes the ONNX contributors make to the models.
3. Write the node/operator tests for these operators. ONNX provides a set of proto files for testing every backend against. For writing these tests I (thankfully) have a few options: write them directly in Julia, reading one proto model at a time, or wrap ONNX.jl with Python and use PyJulia to run the Python-API-generated tests. A rough sketch of the first option is below.
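Just to give a feel for that first option, here is a minimal sketch of what such a test loop might look like. The load_model and read_tensor helpers are hypothetical placeholders for the protobuf-parsing code, and the directory layout is assumed to mirror ONNX's backend node tests:

using Test

# One node test: run the model on the reference input and compare the result
# against the reference output shipped with the test's proto files.
function test_node(dir)
    model = load_model(joinpath(dir, "model.onnx"))                     # hypothetical helper
    x = read_tensor(joinpath(dir, "test_data_set_0", "input_0.pb"))     # hypothetical helper
    y = read_tensor(joinpath(dir, "test_data_set_0", "output_0.pb"))    # hypothetical helper
    @test model(x) ≈ y atol=1e-5
end

@testset "ONNX node tests" begin
    for dir in readdir("test/data/node", join=true)   # one directory per operator test
        test_node(dir)
    end
end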
Conclusion
All in all, the next two weeks are going to look something like this:
while status(model) == "not running"
    code()
    test()
    debug()
end
Looking forward to tackling these challenges! (Fingers crossed!) :)