[C1] Training CNNs with Selective Allocation of Channels
Published in International Conference on Machine Learning (ICML), 2019
tl;dr: Any CNN can become more efficient by “re-allocating” unnecessary channels to increase the kernel size.
Abstract
Recent progress in deep convolutional neural networks (CNNs) has enabled a simple paradigm of architecture design: larger models typically achieve better accuracy. Consequently, in modern CNN architectures, it becomes more important to design models that generalize well under certain resource constraints, e.g., the number of parameters. In this paper, we propose a simple way to improve the capacity of any CNN model having large-scale features, without adding more parameters. In particular, we modify a standard convolutional layer to have a new functionality of channel-selectivity, so that the layer is trained to select important channels and re-distribute their parameters. Our experimental results under various CNN architectures and datasets demonstrate that the proposed convolutional layer finds new optima that generalize better via efficient resource utilization, compared to the baselines.
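To make the notion of channel-selectivity concrete, below is a minimal, hypothetical PyTorch sketch of the general idea: each output channel of a convolution is scaled by a learnable gate, and a sparsity penalty lets training identify unimportant channels as candidates for re-allocation. This is an illustrative assumption, not the selective convolution layer proposed in the paper; the class name GatedConv2d, the sigmoid gating, and the L1-style penalty are all choices made for this sketch.

import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """A standard convolution whose output channels are scaled by learnable gates.
    Channels whose gates are driven toward zero are deemed unimportant, so their
    parameters could in principle be 'freed' and spent elsewhere (e.g., on a
    larger kernel), which is the intuition behind channel-selectivity."""

    def __init__(self, in_ch, out_ch, kernel_size, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, **kwargs)
        # One learnable logit per output channel; sigmoid maps it to a gate in (0, 1).
        self.gate_logits = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x):
        gates = torch.sigmoid(self.gate_logits)        # shape: (out_ch,)
        return self.conv(x) * gates.view(1, -1, 1, 1)  # scale each channel map

    def sparsity_loss(self):
        # Penalizing the total gate mass pushes gates toward 0,
        # exposing which channels the layer can afford to drop.
        return torch.sigmoid(self.gate_logits).sum()

# Usage: add the sparsity penalty to the task loss during training.
layer = GatedConv2d(16, 32, kernel_size=3, padding=1)
x = torch.randn(8, 16, 28, 28)
out = layer(x)
loss = out.pow(2).mean() + 1e-4 * layer.sparsity_loss()
loss.backward()

In this toy setup, the gate values after training act as per-channel importance scores; the paper goes further by actually re-distributing the parameters of de-allocated channels rather than merely suppressing them.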
BibTeX
@InProceedings{jeong2019training,
  title     = {Training {CNN}s with Selective Allocation of Channels},
  author    = {Jeong, Jongheon and Shin, Jinwoo},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {3080--3090},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/jeong19c/jeong19c.pdf},
  url       = {https://proceedings.mlr.press/v97/jeong19c.html}
}