Academic Thoughts
The Search: For centuries, humans have attempted to understand the principles that give rise to the mind and to intelligence. Understanding what intelligence is, how to simulate it and make it a reality, and how to develop intelligence far more capable than anything we have ever known has been a long-standing goal of humanity. It is so mighty a goal that even the process of attempting it is a humbling and fruitful exercise. It is a question many of us ponder at some point in our lives, only to settle for incomplete answers. This is my small attempt at finding these answers. I share the same curiosity, passion, and enthusiasm as the many others who pursue this goal. I hope that in my lifetime, we complete the missing pieces of this giant puzzle.
An Approach: Currently, my research interests lie in reinforcement learning, representation learning, and real-world reinforcement learning. I care about building intelligent agents that are grounded in interaction with their environment. Such agents should make use of domain-independent, general-purpose learning mechanisms, architectures, and algorithms. They should improve their ability to learn with more experience and should be able to adapt to any environment they are placed in, while respecting their inherent constraints.
I care about reinforcement learning because it is a powerful framework for explaining how intelligence can emerge in agents solely through their interaction with the world around them. I thoroughly enjoy studying and working in reinforcement learning because it brings together ideas from several areas of knowledge, such as psychology, neuroscience, and computing science, making it a very rewarding subject to study. Personally, I also think it has ties to philosophy, as many parallels can be drawn between life and the reinforcement learning framework.
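To make this concrete, the interaction I am describing is the standard agent-environment loop of reinforcement learning. Below is a minimal sketch of that loop in Python; the `env` and `agent` objects and their methods are hypothetical placeholders of my own, not any particular library's API.

```python
# A minimal sketch of the reinforcement learning interaction loop.
# `env` and `agent` are hypothetical stand-ins for any environment
# and learning agent; this is not a specific library's API.

def run_episode(env, agent):
    observation = env.reset()      # the agent starts somewhere in the world
    total_reward = 0.0
    done = False
    while not done:
        action = agent.act(observation)                    # act on the world
        next_observation, reward, done = env.step(action)  # world responds
        agent.learn(observation, action, reward,
                    next_observation, done)                # improve from experience
        observation = next_observation
        total_reward += reward
    return total_reward            # the return the agent tries to maximize
```

Everything else I care about, from representations to value functions, is ultimately in service of doing well inside this loop.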
I care about representation learning because it forms the foundation on which all other learning depends. Many components in the reinforcement learning framework rely on good representations. Good representations should lead to better performance, better generalization, and less interference and forgetting. With respect to this, I care about constructing the agent state, the agent's own summary of where it is in the environment: how well its representations summarize the past, convey the present, and predict the future. In light of this, I care about general value functions for prediction-based state representation, and about the discovery, or search for, good state features. The idea of discovering representations, hyperparameters, architectures, and learning algorithms with minimal hand-designed components is powerful and general, and is a potential path toward general intelligence. Recently, I have been interested in this study of discovery, also referred to as learning to learn or meta-learning. Representation learning naturally leads to my interest in function approximation methods such as deep learning.
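Since general value functions may be unfamiliar, a brief aside: a general value function (GVF) is an ordinary value function in which the reward is replaced by an arbitrary signal of interest (a cumulant) and the discount may depend on state; a collection of such predictions can itself serve as a prediction-based state representation. The sketch below shows a TD(0)-style update for a single GVF with linear function approximation; the function name, arguments, and the example cumulant are illustrative assumptions of mine, not a reference implementation.

```python
import numpy as np

# A sketch of learning one general value function (GVF) with a TD(0)
# update under linear function approximation. The cumulant and the
# state-dependent continuation are whatever signals one chooses to
# predict; nothing here is a fixed recipe.

def td0_gvf_update(w, x, x_next, cumulant, gamma_next, step_size=0.1):
    """One TD(0) update of the GVF weight vector w.

    w          : weights; the prediction at features x is w @ x
    x, x_next  : feature vectors of successive states
    cumulant   : the signal being predicted (plays the role of reward)
    gamma_next : continuation (discount) evaluated at the next state
    """
    td_error = cumulant + gamma_next * (w @ x_next) - (w @ x)
    return w + step_size * td_error * x

# Example: predicting a hypothetical sensor reading one step ahead
w = np.zeros(4)
x, x_next = np.array([1.0, 0.0, 0.5, 0.0]), np.array([0.0, 1.0, 0.5, 0.0])
w = td0_gvf_update(w, x, x_next, cumulant=0.7, gamma_next=0.9)
```

An agent maintaining many such predictions, each with its own cumulant and continuation, is one way its state representation can summarize the past and predict the future.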
I care about real-world reinforcement learning, especially its application to industrial control and robotics. Most of the work done today is in simulated domains, unlike the real world where the agents of the future will operate. The real world brings many challenges that are important to address but are not explored to the fullest in simulation. Today, reinforcement learning is sample-inefficient, which is a poor fit for the real world, where data arrives far more slowly than in simulated worlds. However, it is important to note that the greatest gains will also come from real-world application once it is viable.
Accordingly, I have been studying and working in these areas in different capacities.