Konda Reddy Mopuri

I am a visiting faculty member at the Department of Computer Science, Indian Institute of Technology, Tirupati, where I work on computer vision and deep learning.

Before this, I held a postdoctoral position at the Visual Computing Group (VICO) led by Dr. Hakan Bilen at the University of Edinburgh. I did my PhD at the Indian Institute of Science, Bengaluru, where I was advised by Prof. R. Venkatesh Babu.

My thesis won the IUPRAI Best Doctoral Dissertation Award and the SPCOM Best Doctoral Dissertation Award for 2018-19. I did my Master's at the Indian Institute of Technology Kharagpur.

Email  /  CV  /  Google Scholar  /  Teaching

News
  • Jan 2021: Our work with Bo Zhao and Hakan Bilen has been accepted at ICLR 2021 as an oral paper. It was the second highest-rated submission in the entire conference.
  • Jan 2021: Ever wondered whether one can extract "data" from a trained DNN? Check out our preprint; we do a lot more with the extracted pseudo data samples.
  • Dec 2020: My thesis won the IUPRAI Best Doctoral Dissertation Award for 2018-19.
  • Nov 2020: Our paper has been accepted at WACV 2021 (CORE rank A). A version is available here.
  • Oct 2020: Our paper has been accepted for publication in the SPE Journal (impact factor: 3.372, Scopus rank: #4/189).
  • Sep 2020: Delivered an invited talk on knowledge distillation in data-free scenarios at Walmart Global Tech India.
  • Aug 2020: Joined Dept. of CSE, IIT Tirupati as a visiting faculty member.
  • July 2020: My PhD thesis won the SPCOM Best Doctoral Dissertation Award for 2018-19.
Research

I am interested in machine (deep) learning, computer vision, optimization, and signal/image processing. The following is an approximate clustering and labeling of my research (click on a label to find the relevant works).

Long-Tailed Training Data

Real-world datasets exhibit skewed distributions, generally with a long tail. In other words, only a few categories contribute the majority of the samples, while most of the other classes claim a relatively small number of samples. Classifiers trained on such data perform poorly on the minority categories. We aim to contribute effective solutions that alleviate the adverse effects caused by class imbalance in the training data.
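One simple baseline for such imbalance is to re-weight the training loss inversely to class frequency, so that errors on tail classes cost more. A minimal sketch in NumPy (the class counts, weighting scheme, and probabilities here are illustrative assumptions, not taken from any specific paper of ours):

```python
import numpy as np

# Hypothetical long-tailed class counts: one head class, two tail classes.
counts = np.array([900, 80, 20])
# Inverse-frequency weights, normalized so they average to 1.
weights = counts.sum() / (len(counts) * counts)

def weighted_cross_entropy(probs, label):
    """Cross-entropy for one sample, scaled by its class weight."""
    return -weights[label] * np.log(probs[label])

probs = np.array([0.7, 0.2, 0.1])  # a model's predicted distribution
loss_head = weighted_cross_entropy(probs, 0)  # mistake on the head class
loss_tail = weighted_cross_entropy(probs, 2)  # mistake on a tail class
```

Under this weighting, a confident mistake on the rarest class incurs a far larger loss than the same mistake on the head class, nudging the classifier away from ignoring the tail.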

Data Engineering for Deep Learning

In the digital era, aided by fast-growing semiconductor technology, we have created heaps of digital content (images, videos, text, audio, etc.). This serves data-hungry deep learning, which can read complex patterns in the data that would otherwise be difficult to find. However, it comes with costs such as data redundancy, maintenance and distribution overhead, and huge computational and time requirements for learning on these digital piles. We aim to investigate engineering solutions to these data- and learning-related challenges.

Robustness

Deep CNNs are vulnerable to adversarial samples, and multiple approaches have been formulated to generate them. More importantly, adversarial samples can transfer (generalize) across models. This property allows an attacker to launch an attack without knowing the target model's internals, which makes such samples a serious threat to models deployed in practice. Therefore, the effect of adversarial perturbations warrants an in-depth analysis of this subject.
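A classic example of such an attack is the fast gradient sign method (FGSM), which perturbs the input a small step along the sign of the loss gradient. A minimal sketch on a toy logistic model, where the gradient is available in closed form (the weights and inputs are illustrative assumptions, not from any of our papers):

```python
import numpy as np

# Fixed toy logistic model (illustrative parameters).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y):
    """Binary cross-entropy for one sample."""
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, eps):
    """FGSM: x' = x + eps * sign(dL/dx)."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w  # closed-form gradient of the loss w.r.t. x
    return x + eps * np.sign(grad_x)

x = np.array([0.5, -0.5, 1.0])  # clean input, true label y = 1
x_adv = fgsm(x, y=1, eps=0.3)   # adversarial input within an L-inf ball
```

Even this one-step perturbation strictly increases the model's loss on the sample; against deep networks, the same recipe (with gradients from backpropagation) can flip predictions with perturbations that are imperceptible to humans.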

Adaptability

Deep learning is data- and resource-intensive, yet the real world may impose various constraints on applying these sophisticated tools. Adapting deep learning techniques/models across tasks and to challenging environments, such as low-data and data-free scenarios (e.g., via knowledge transfer and domain adaptation), is of high importance.

Interpretability

Deep learning models are complex machine learning systems with hundreds of layers and millions of parameters. The presence of advanced regularizers such as dropout and batch normalization makes the models less transparent. Because of the end-to-end nature of the learning, models suffer from poor decomposability, and hence most of us treat them as black boxes. To make these models more transparent, we devise methods that provide visual explanations for the labels predicted by recognition CNNs.


Source taken from here.