New deep learning library shrinks to as little as 512 KB
This is exciting news, especially for app developers, for whom library size is critical: if your model takes more than 100 megabytes, almost no smartphone user will download your app.
Hardware has long suffered from large models as well: holding an entire model in memory costs both capacity and energy.
Recent research has introduced a new kind of model -- the compressed deep learning model -- which uses deep compression to squeeze a deep neural network down to a much smaller size while maintaining the same accuracy.
There are also some basic methods we can use to reduce the size of our own models:
1. Prune low-weight connections; this greatly reduces the size of the network.
2. Store the weights at lower precision; floating-point numbers take more memory than small integers, and the difference between 3.1 and 3 rarely changes the decision.
3. After applying the methods above, the model is small enough to fit entirely in cache, which speeds up inference as well.
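The first two steps can be sketched in a few lines of NumPy. This is a minimal illustration, not the library's actual implementation: the layer shape, the pruning threshold, and the sparse storage scheme (int8 value plus int32 index per surviving weight) are all assumptions chosen for the example.

```python
import numpy as np

# Hypothetical weight matrix standing in for one layer of a trained network.
rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 256)).astype(np.float32)

# Step 1: prune low-magnitude weights (here: zero out anything below
# an illustrative fixed threshold).
threshold = 1.0
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0).astype(np.float32)

# Step 2: quantize the surviving weights to 8-bit integers with a
# single per-layer scale factor.
scale = np.abs(pruned).max() / 127.0
quantized = np.round(pruned / scale).astype(np.int8)

# Compare storage: dense float32 vs. a sparse int8 representation
# that keeps only nonzero values plus their int32 indices.
dense_bytes = weights.nbytes                # 256 * 256 * 4 bytes
nonzero = np.count_nonzero(quantized)
sparse_bytes = nonzero * (1 + 4)            # int8 value + int32 index
print(f"dense float32:          {dense_bytes} bytes")
print(f"pruned + quantized:     {sparse_bytes} bytes")
print(f"compression ratio:      {dense_bytes / sparse_bytes:.1f}x")
```

The ratio here comes from toy data; on real networks, where most trained weights cluster near zero, pruning removes a far larger fraction, which is where the dramatic size reductions come from.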
Here is a chart showing how much this technique reduces the size of the library. I believe libraries like this will soon replace existing machine learning libraries and become the mainstream.
The details of the library are as follows: