Job Description

The objective of this post-doc is to study deep learning methods for visual representations at the intersection of learning with little or no supervision and learning lightweight network architectures for use under limited resources, such as on mobile devices.

Transfer learning is a standard way to adapt to new domains and tasks, e.g. object detection [RHG15], with less supervision than learning from scratch, but the required level of supervision is still significant. In image retrieval, fine-tuning can be performed with algorithmic rather than human supervision [GAR16]. Unsupervised manifold learning [ITA18] is an alternative that can improve the representation without any external algorithm. Metric learning [CHL05] is a more general framework for learning from pairwise or more complex relations between samples, but it is most commonly supervised [WMA17]. Semi-supervised learning [WRC08] can exploit the distribution of large quantities of unlabeled data, while low-shot learning [HG17] and meta-learning [FAL17] instead consider unseen classes at inference.
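
To make the pairwise setting concrete, the following is a minimal sketch of a contrastive metric-learning loss in the spirit of [CHL05], written in PyTorch. The function name, tensor shapes, and margin value are illustrative assumptions, not part of the position description.

# Minimal contrastive-loss sketch in the spirit of [CHL05] (illustrative only).
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, is_same, margin=1.0):
    """emb_a, emb_b: (N, D) embeddings of paired samples; is_same: (N,) in {0, 1}."""
    dist = F.pairwise_distance(emb_a, emb_b)              # Euclidean distance per pair
    pos = is_same * dist.pow(2)                           # pull matching pairs together
    neg = (1.0 - is_same) * F.relu(margin - dist).pow(2)  # push non-matching pairs beyond the margin
    return 0.5 * (pos + neg).mean()

# Toy usage with random embeddings and random pair labels.
a, b = torch.randn(8, 128), torch.randn(8, 128)
y = torch.randint(0, 2, (8,)).float()
print(contrastive_loss(a, b, y).item())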

On the other hand, there has been considerable progress in designing lightweight architectures for mobile and embedded vision applications, based e.g. on quantizing weights or activations, pruning connections, and low-rank or sparse matrix factorizations, in particular depth-wise or group convolutions [ZZL17,HLM17]. These developments have so far focused on supervised classification rather than other tasks and supervision settings. Recently, semi-supervised learning has been connected to progressive learning of the network structure [WXL17], while residual networks have been connected to progressive inference [LMS18,ZNC18]. The objective of this post-doc is to investigate such ideas of adapting the architecture to the task and supervision data at hand, either at learning time or dynamically at inference.
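
As an illustration of the kind of building block used in such lightweight designs, here is a minimal sketch of a depthwise-separable convolution, the factorization underlying depth-wise and group-convolution architectures such as [ZZL17,HLM17]. The class name and channel sizes are illustrative assumptions.

# Minimal depthwise-separable convolution sketch (illustrative assumptions only).
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """A 3x3 depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch), spatial mixing only.
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Toy usage: far fewer parameters and multiply-adds than a dense 3x3 convolution.
x = torch.randn(1, 32, 56, 56)
print(DepthwiseSeparableConv(32, 64)(x).shape)  # torch.Size([1, 64, 56, 56])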

The subject lies at the intersection of two problems typically treated separately so far: deep learning with limited supervision and learning lightweight network architectures. The candidate should ideally have a PhD in one of these two areas and good knowledge of the other; a strong publication record in relevant computer vision and machine learning venues such as CVPR, ICCV, NIPS, and ICLR; a solid mathematical background and programming skills; and fluency in English.

References:

[CHL05] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In CVPR, 2005.

[FAL17] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017.

[GAR16] A. Gordo, J. Almazan, J. Revaud, and D. Larlus. Deep Image Retrieval: Learning Global Representations for Image Search. In ECCV, 2016.

[HG17] B. Hariharan and R. Girshick. Low-Shot Visual Recognition by Shrinking and Hallucinating Features. In ICCV, 2017.

[HLM17] G. Huang, S. Liu, L. van der Maaten, and K. Weinberger. CondenseNet: An Efficient DenseNet using Learned Group Convolutions. arXiv preprint arXiv:1711.09224, 2017.

[ITA18] A. Iscen, G. Tolias, Y. Avrithis, and O. Chum. Mining on Manifolds: Metric Learning without Labels. In CVPR, 2018.

[LMS18] S. Leroux, P. Molchanov, P. Simoens, B. Dhoedt, T. Breuel, and J. Kautz. IamNN: Iterative and Adaptive Mobile Neural Network for Efficient Image Classification. arXiv preprint arXiv:1804.10123, 2018.

[RHG15] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015.

[WMA17] C.-Y. Wu, R. Manmatha, A. Smola, and P. Krahenbuhl. Sampling Matters in Deep Embedding Learning. In ICCV, 2017.

[WRC08] J. Weston, F. Ratle, and R. Collobert. Deep Learning via Semi-Supervised Embedding. In ICML, 2008.

[WXL17] G. Wang, X. Xie, J. Lai, and J. Zhuo. Deep Growing Learning. In ICCV, 2017.

[ZNC18] Z. Zhang, G. Ning, Y. Cen, Y. Li, Z. Zhao, H. Sun, and Z. He. Progressive Neural Networks for Image Classification. arXiv preprint arXiv:1804.09803, 2018.

[ZZL17] X. Zhang, X. Zhou, M. Lin, and J. Sun. ShuffleNet: An extremely efficient convolutional neural network for mobile devices. arXiv preprint arXiv:1707.01083, 2017.

Job Information

Contact
email redacted
Related URL
https://jobs.inria.fr/publi...
Institution
Inria Rennes-Bretagne Atlantique
Location
Rennes, France
Closing Date
July 15, 2018