Project Report: Week 1-2, GSoC 2018

And objectives for upcoming weeks

Posted by Ayush Shridhar on May 26, 2018

These first couple of weeks have been an amazing ride. Do read my first blog post outlining my project and objectives here. In this post, I'm going to summarize my work so far: the objectives I completed successfully and the work that still remains to be done.

Objectives completed:

  • Most of my time during these weeks was spent writing operator tests for the ONNX ops. I successfully wrote about 60 tests for different operators. One big hurdle was the poor performance of the loaded models; I traced it to a few discrepancies in the tests and fixed them right away. I learned a lot in the process, as I hadn't written tests before. (A minimal example of what such a test looks like appears after this list.)

  • The second big achievement was loading quite a few ONNX models into Flux: SqueezeNet, DenseNet121, ResNet50, VGG19, ZfNet and GoogleNet. Most of them performed well (the VGG19 actually performed slightly better than the VGG19 currently hosted in Metalhead.jl), but I ran into a few recurring issues.
    The GoogleNet and ZfNet models contain Local Response Normalization layers, which Flux currently doesn't support, so I had to bypass these layers entirely. As a result, the performance of both models suffered.
    Quite a few models use asymmetric padding. As advised by my mentors, I converted most of the asymmetric paddings into symmetric ones, at the cost of a small drop in performance. For example, padding of the form (0,0,2,2) was implemented as (1,1,1,1). This was fine for some models, but many use asymmetric padding that cannot be made symmetric without changing the expected output shape: converting (0,0,1,1) to symmetric padding isn't possible, for instance. The only way around this would be to introduce support for asymmetric padding in Flux itself. (A small sketch of this symmetrization appears after this list.)

  • The whole process was a great learning experience in itself. I found errors in a few ONNX models and opened issues in the respective ONNX repositories (here). I also encountered a bug in Flux layers where the element type of the output differed from that of the input; the issue I raised in Flux can be found here. (A simple check for this kind of type drift is shown after this list.)

  • I also opened a Pull Request in FluxML/Metalhead.jl, adding four more models to the repository. The PR can be found here.
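
To give a flavour of the operator tests mentioned above, here is a minimal sketch of what one looks like in spirit. The input and expected tensors are made up for illustration; the real tests check each operator against reference inputs and outputs.

```julia
using Flux, Test

# Illustrative operator test: check that Flux's implementation of an op
# (here, relu) reproduces the expected output for a given input.
# Both tensors below are placeholders, not actual ONNX test data.
x        = Float32[-1.0  2.0;  3.0 -4.0]
expected = Float32[ 0.0  2.0;  3.0  0.0]

@test relu.(x) ≈ expected
```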
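The padding workaround from the second bullet can also be made concrete. Below is a hypothetical helper (the name and the (begin_h, begin_w, end_h, end_w) ordering are my own, for illustration) showing why (0,0,2,2) folds into symmetric padding while (0,0,1,1) does not.

```julia
# Hypothetical helper: fold an asymmetric pad into a symmetric one when the
# total padding along each spatial axis is even; otherwise the output shape
# would change, so we give up.
function symmetrize_pads(pads::NTuple{4,Int})
    begin_h, begin_w, end_h, end_w = pads
    total_h, total_w = begin_h + end_h, begin_w + end_w
    (iseven(total_h) && iseven(total_w)) ||
        error("padding $pads has no symmetric equivalent")
    return (total_h ÷ 2, total_w ÷ 2)   # symmetric (pad_h, pad_w)
end

symmetrize_pads((0, 0, 2, 2))  # (1, 1), i.e. the (1,1,1,1) case above
symmetrize_pads((0, 0, 1, 1))  # errors: total padding per axis is odd
```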
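And a tiny illustration of the type check behind the Flux issue above. The layer and input are placeholders, not the actual failing case; the point is just that a forward pass shouldn't change the element type.

```julia
using Flux

# A layer's output should keep the input's element type; the bug I hit
# was a case where it didn't. Layer and input here are placeholders.
layer = Dense(10, 5)
x = rand(Float32, 10)
eltype(layer(x)) == eltype(x)  # expected to be true
```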

Objectives for the next three weeks:

  • Implementing Local Response Normalization (LRN) in Flux. I'm currently working on this, following the research paper by Krizhevsky et al., which can be found here. (A rough sketch of the computation appears after this list.)

  • Keras.jl: I have already begun work on this (here). Rather than generating code (like ONNX.jl does), I'm working on loading Keras models directly into Flux, and I've been able to load an MNIST classifier this way. The main impediment right now is model performance: the loaded MNIST model doesn't have good accuracy, so I need to investigate why. Once MNIST works reliably, I'll expand the functionality to other, more complicated models. (The workflow I'm aiming for is sketched after this list.)

  • ONNX.jl models: The ONNX models repository is updated frequently. As soon as the models I reported errors in are fixed upstream, I will resume testing them and loading them into Flux.
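
As a starting point for the LRN item above, here is a rough sketch of cross-channel Local Response Normalization: each activation is divided by (k + α/n · Σ a²)^β, where the sum runs over a window of n neighbouring channels. The function name, the WHCN layout, and the defaults are my assumptions; the eventual Flux layer would also need to be differentiable and vectorized rather than a plain loop.

```julia
# Rough sketch of cross-channel LRN; just the math, not the final layer.
# Assumes WHCN layout, i.e. channels along dimension 3.
function lrn(a::AbstractArray{T,4}; n = 5, k = T(2), α = T(1e-4), β = T(0.75)) where T
    C = size(a, 3)
    b = similar(a)
    for i in 1:C
        lo, hi = max(1, i - n ÷ 2), min(C, i + n ÷ 2)  # window of channels around i
        s = sum(abs2, a[:, :, lo:hi, :]; dims = 3)     # Σ a[j]^2 over the window
        b[:, :, i:i, :] = a[:, :, i:i, :] ./ (k .+ (α / n) .* s) .^ β
    end
    return b
end
```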
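For the Keras.jl item, the workflow I'm aiming for looks roughly like the sketch below. `load_model` and its signature are placeholders for whatever the final API ends up being, not something that works today.

```julia
using Keras  # the work-in-progress package linked above

# Hypothetical end goal: read a saved Keras model (architecture plus
# weights) and get back something that behaves like a Flux model.
model = Keras.load_model("mnist_cnn.h5")  # placeholder name and API
ŷ = model(rand(Float32, 28, 28, 1, 1))    # WHCN batch of one image
```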

Conclusion:

The next few weeks are certainly going to test my programming skills. I'd like to thank my mentor, Phil Tomson, for taking time out of his schedule to help me and for his continuous support and guidance. I'm confident that I'll be able to meet most of these objectives under his supervision.
Hoping for the best!