Arsenii (Senya) Ashukha

I am a PhD candidate at the Bayesian Methods Research Group and Samsung AI Center Moscow, advised by Dmitriy Vetrov, where I work on probabilistic deep learning.

Email  /  CV  /  Google Scholar  /  GitHub  /  Twitter

Research

I'm interested in core deep learning, probabilistic inference, uncertainty estimation, and learning with limited data. My work focuses on understanding and applying ensemble techniques and variational inference in deep neural networks.

Greedy Policy Search: A Simple Baseline for Learnable Test-Time Augmentation
Dmitry Molchanov*, Alexander Lyzhov*, Yuliya Molchanova*, Arsenii Ashukha*, Dmitry Vetrov
UAI, 2020
code / arXiv / slides / bibtex

We introduce greedy policy search (GPS), a simple but high-performing method for learning a test-time augmentation policy.
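
A minimal sketch of the flavor of GPS, not the exact procedure from the paper: augmentations are added to the policy one at a time, each time picking the candidate that most improves the validation log-likelihood of the averaged prediction. The `augmentations` list and the `predict(model, x_val, aug)` helper are placeholder names.

```python
import torch

def log_likelihood(probs, y):
    # Mean log-probability of the true labels.
    return torch.log(probs[torch.arange(len(y)), y] + 1e-12).mean()

def greedy_policy_search(model, x_val, y_val, augmentations, predict, policy_len=10):
    policy, avg_probs = [], 0.0
    for _ in range(policy_len):
        best_aug, best_ll, best_probs = None, -float("inf"), None
        for aug in augmentations:
            probs = predict(model, x_val, aug)  # class probabilities under `aug`
            # Score the candidate by the log-likelihood of the running average.
            candidate = (avg_probs * len(policy) + probs) / (len(policy) + 1)
            ll = log_likelihood(candidate, y_val)
            if ll > best_ll:
                best_aug, best_ll, best_probs = aug, ll, candidate
        policy.append(best_aug)
        avg_probs = best_probs
    return policy
```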

Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning
Arsenii Ashukha*, Alexander Lyzhov*, Dmitry Molchanov*, Dmitry Vetrov
ICLR, 2020
blog post / poster video (5mins) / code / arXiv / bibtex

This work introduces the calibrated log-likelihood, a reliable uncertainty estimation metric, and the deep ensemble equivalent, an interpretable way to compare ensembles; it also points out that test-time augmentation is a simple technique that improves ensembles essentially for free.
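
A hedged sketch of the calibrated log-likelihood: the test log-likelihood is measured after temperature scaling, with the temperature tuned on a held-out split. The logits and labels below are assumed to be precomputed tensors.

```python
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, steps=200, lr=0.01):
    # Optimize a single scalar temperature on the held-out split.
    log_t = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        opt.step()
    return log_t.exp().item()

def calibrated_log_likelihood(val_logits, val_labels, test_logits, test_labels):
    t = fit_temperature(val_logits, val_labels)
    log_probs = F.log_softmax(test_logits / t, dim=1)
    return log_probs[torch.arange(len(test_labels)), test_labels].mean().item()
```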

The Deep Weight Prior
Andrei Atanov*, Arsenii Ashukha*, Kirill Struminsky, Dmitry Vetrov, Max Welling
ICLR, 2019
code / arXiv / bibtex

The deep weight prior is a generative model over the kernels of convolutional neural networks that acts as a prior distribution when training on new datasets.
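
A simplified, hedged sketch of how such a learned prior could be used: the paper plugs a generative model over kernels into a variational bound, while the snippet below only adds a learned kernel log-density (`kernel_log_prob`, assumed to exist) as a MAP-style regularizer on a new dataset.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def loss_with_kernel_prior(model, x, y, kernel_log_prob, beta=1e-4):
    nll = F.cross_entropy(model(x), y)
    prior_term = 0.0
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            # Treat each (out, in) slice as one kernel, e.g. of shape 3x3.
            kernels = module.weight.flatten(0, 1)
            prior_term = prior_term + kernel_log_prob(kernels).sum()
    # Maximize data likelihood plus the learned prior over kernels.
    return nll - beta * prior_term
```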

Variance Networks: When Expectation Does Not Meet Your Expectations
Kirill Neklyudov*, Dmitry Molchanov*, Arsenii Ashukha*, Dmitry Vetrov
ICLR, 2019
code / arXiv / bibtex

It is possible to learn a zero-centered Gaussian distribution over the weights of a neural network by learning only variances, and it works surprisingly well.
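
A minimal sketch of a variance layer in PyTorch, where the log-variances are the only trainable weight parameters:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VarianceLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        # Only log-sigmas are trainable; the mean of every weight is zero.
        self.log_sigma = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Sample weights w = sigma * eps with eps ~ N(0, 1) on every forward pass.
        weight = self.log_sigma.exp() * torch.randn_like(self.log_sigma)
        return F.linear(x, weight, self.bias)
```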

Uncertainty Estimation via Stochastic Batch Normalization
Andrei Atanov, Arsenii Ashukha, Dmitry Molchanov, Kirill Neklyudov, Dmitry Vetrov
ICLR Workshop, 2018
code / arXiv

Inference-time stochastic batch normalization improves the quality of uncertainty estimates produced by ensembles.
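
A hedged sketch of one way to get this flavor, and a simplification of the paper: instead of fitting a parametric distribution over batch statistics, the snippet below records BatchNorm statistics on training mini-batches and resamples them at test time, averaging predictions over several draws. `model` and `train_loader` are assumed to exist.

```python
import random
import torch
import torch.nn as nn

@torch.no_grad()
def collect_bn_stats(model, train_loader, n_batches=50):
    # Record per-channel (mean, var) of the input to every BatchNorm2d layer.
    stats = {n: [] for n, m in model.named_modules() if isinstance(m, nn.BatchNorm2d)}
    def make_hook(name):
        def hook(module, inp, out):
            x = inp[0]
            stats[name].append((x.mean(dim=(0, 2, 3)), x.var(dim=(0, 2, 3), unbiased=False)))
        return hook
    hooks = [m.register_forward_hook(make_hook(n))
             for n, m in model.named_modules() if isinstance(m, nn.BatchNorm2d)]
    model.eval()
    for i, (x, _) in enumerate(train_loader):
        if i >= n_batches:
            break
        model(x)
    for h in hooks:
        h.remove()
    return stats

@torch.no_grad()
def stochastic_bn_predict(model, stats, x_test, n_samples=10):
    model.eval()
    probs = 0.0
    for _ in range(n_samples):
        # Plug a randomly drawn (mean, var) pair into every BatchNorm layer.
        for name, m in model.named_modules():
            if isinstance(m, nn.BatchNorm2d):
                mean, var = random.choice(stats[name])
                m.running_mean.copy_(mean)
                m.running_var.copy_(var)
        probs = probs + torch.softmax(model(x_test), dim=1)
    return probs / n_samples
```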

Structured Bayesian Pruning via Log-Normal Multiplicative Noise
Kirill Neklyudov, Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov
NeurIPS, 2017
code / arXiv / bibtex / poster

The method allows one to sparsify a DNN with an arbitrary pattern of sparsity, e.g., removing whole neurons or convolutional filters.
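
A hedged sketch of the core idea: each neuron's output is multiplied by a log-normal gate with learned parameters, and neurons whose gates have a low signal-to-noise ratio are pruned. The paper's variational objective and regularizer are omitted here.

```python
import torch
import torch.nn as nn

class LogNormalGate(nn.Module):
    def __init__(self, n_units):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(n_units))
        self.log_sigma = nn.Parameter(torch.full((n_units,), -3.0))

    def forward(self, x):
        if self.training:
            # theta = exp(mu + sigma * eps), eps ~ N(0, 1): log-normal noise.
            eps = torch.randn_like(self.mu)
            theta = torch.exp(self.mu + self.log_sigma.exp() * eps)
        else:
            theta = torch.exp(self.mu)  # deterministic gate at test time
        return x * theta

    def keep_mask(self, threshold=1.0):
        # SNR of a log-normal variable: E[theta] / std[theta] = 1 / sqrt(exp(sigma^2) - 1).
        sigma2 = (2 * self.log_sigma).exp()
        snr = 1.0 / torch.sqrt(torch.expm1(sigma2))
        return snr > threshold
```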

Variational Dropout Sparsifies Deep Neural Networks
Dmitry Molchanov*, Arsenii Ashukha*, Dmitry Vetrov
ICML, 2017
retrospective⏳ / talk (15 mins) / arXiv / bibtex / code (theano, tf by GoogleAI, colab pytorch)

Variational dropout secretly trains highly sparse deep neural networks, with the sparsity pattern learned jointly with the weights during training.
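
A minimal sketch of a sparse variational dropout layer, omitting the KL term of the variational objective: each weight has a mean and a variance, the implied dropout rate is alpha = sigma^2 / mu^2, and weights with a large log(alpha) are set exactly to zero at test time.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseVDLinear(nn.Module):
    def __init__(self, in_features, out_features, threshold=3.0):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.log_sigma2 = nn.Parameter(torch.full((out_features, in_features), -10.0))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.threshold = threshold

    def log_alpha(self):
        return self.log_sigma2 - torch.log(self.mu ** 2 + 1e-16)

    def forward(self, x):
        if self.training:
            # Local reparameterization: sample pre-activations instead of weights.
            mean = F.linear(x, self.mu, self.bias)
            std = torch.sqrt(F.linear(x ** 2, self.log_sigma2.exp()) + 1e-16)
            return mean + std * torch.randn_like(mean)
        # At test time, drop weights whose learned dropout rate is large.
        mask = (self.log_alpha() < self.threshold).float()
        return F.linear(x, self.mu * mask, self.bias)
```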

The webpage template was borrowed from Jon Barron.
Also, check out his research; it is very interesting!