Interview with Prof B Ravindran, Head, Robert Bosch Centre For Data Science & AI, IIT Madras

Professor Balaraman Ravindran is the Mindtree Faculty Fellow and the Head of Robert Bosch Centre for Data Science & AI (RBCDSAI), IIT Madras. 

In an exclusive interview with Analytics India Magazine, Prof. B Ravindran spoke about IIT Madras’ contributions to the burgeoning field of artificial intelligence and machine learning in India. Recently, researchers at the institute developed an AI-based mathematical model to identify cancer-causing alterations in cells.

“What the government can do is to make sure that the AI education being offered is more standardised and the students are not taken for a ride in the name of AI education. The move to introduce AI even in the school curriculum is a good one and will give students early exposure and increase their interest in taking up AI as a career option,” said Ravindran. He is an alumnus of the University of Massachusetts Amherst and the Indian Institute of Science (IISc). 

AIM: Tell us about your role at IIT Madras Centre of AI.

Ravindran: I head the Robert Bosch Centre for Data Science and AI (RBCDSAI). This is an interdisciplinary centre that includes 28 faculty members spanning ten departments. Currently, there are 23 research projects funded directly by the centre, and almost all of them span multiple disciplines. Our projects range from fundamental research in deep learning to understanding the social implications of AI.

We also support many industrial research projects in collaboration with enterprises from different verticals, namely manufacturing, finance, smart cities and healthcare. Working with colleagues from so many different disciplines enables me to appreciate the synergy across them and work on problems with a broad-ranging impact.

AIM: How does CREST help the Indian startup ecosystem?

Ravindran: Prof. Thillai Rajan is the head of CREST, and I am reproducing his reply below. 

A look at the investment in Indian ventures helps to put things in perspective. Over the ten years from 2011 to 2020, risk capital investment in Indian technology startups was about $68 billion, which translates to approximately ₹476,000 crore. Academic research and thinking can play an important role in sustaining and furthering this growth in entrepreneurship. Insights and evidence from academic research would support policy-making and increase the effectiveness of managerial and investment decision-making in entrepreneurship. The objective of CREST is to augment academic and thought leadership in innovation, entrepreneurship and risk capital.
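As a quick sanity check on the conversion quoted above (a sketch only; the ~₹70/USD exchange rate is an illustrative assumption, not a figure from the interview, and actual rates over 2011–20 varied):

```python
# Check that $68 billion is roughly Rs 476,000 crore.
# Assumed average exchange rate: ~Rs 70 per US dollar (illustrative).
# One crore = 10 million rupees.
usd_total = 68e9
inr_per_usd = 70.0
crore = 1e7

inr_crore = usd_total * inr_per_usd / crore
print(inr_crore)  # 476000.0
```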

Our vision at CREST is to be globally recognised as a leading centre of research engaged in:

  • Scholarly research in creation, development, and financing of innovation, startups, and entrepreneurship.
  • Creation of a world-class, unique data repository on Indian startups and ventures, addressing a major hurdle to high-quality research. This database would be made accessible to researchers and policymakers, enabling top-tier publications.
  • Cross-country collaborative research using unique data sets on startups and venture capital, increasing the chances of breakthrough findings.
  • Evidence synthesis on the startup and innovation ecosystem to inform policymakers and practitioners.

AIM: What are the research and projects IIT Madras is working on in the field of Artificial intelligence?

Ravindran: There are too many to summarise here. Some of the projects are described in detail on the RBCDSAI web pages; I will mention only a few, and these are only a small fraction of the AI projects carried out at IIT Madras. The fundamental research areas that we work in are deep learning, reinforcement learning, natural language processing, computer vision, network analytics, causality, the social impact of AI, and the ethics, fairness and explainability of AI models.

(a) Attention Mechanisms in Deep Neural Networks: Recently, the Deep Learning community has shown great interest in attention mechanisms for training neural networks – the network pays attention to only certain parts of the input, or to certain parts of the network structure, at a given instant of learning. However, to make this work well, we need efficient algorithms for jointly learning the network parameters and the attention mechanism. The main work is two-fold:

  • Better algorithms for attention, possibly based on reinforcement learning
  • Transfer learning using attention.

We will demonstrate the success of this approach in two domains – natural language generation; and transfer in reinforcement learning.
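For readers unfamiliar with the core computation, the generic form of attention can be sketched in a few lines of NumPy. This is a standard scaled dot-product attention sketch for illustration only, not the specific algorithms being developed in the project described above:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to all keys: compute similarity scores,
    softmax them into weights, and return a weighted sum of values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))   # 2 queries of dimension 4
K = rng.normal(size=(3, 4))   # 3 keys
V = rng.normal(size=(3, 4))   # 3 values

out, attn = scaled_dot_product_attention(Q, K, V)
print(out.shape, attn.shape)  # (2, 4) (2, 3)
```

Each row of `attn` sums to 1, so the output for each query is a convex combination of the values – the "parts of the input" the network is attending to.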

(b) Interpretability of Deep Learning Models in Healthcare: Interpretability of deep learning models is essential for the widespread adoption of these techniques in the medical image diagnosis community. Deep learning models have been phenomenally successful at beating the state of the art in common medical image diagnosis tasks like segmentation and screening applications, e.g. classification of diabetic retinopathy and chest X-ray scans, among others. While these successes have created huge interest in adopting these techniques in clinical practice, a major barrier to adoption is the lack of interpretability of these models. Convolutional Neural Networks with hundreds of layers are the workhorse of medical image diagnosis. While the initial layers typically act as edge and shape detectors, it is nearly impossible to explain or interpret the feature maps as one goes deeper into the network. For clinicians to trust the output from these networks, it is essential that a mechanism for explaining the output be present. In addition, black-box techniques make it hard for clinicians to justify the diagnosis and follow-up procedures.
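One simple, model-agnostic interpretability technique in this space is occlusion sensitivity (Zeiler & Fergus, 2014): mask out regions of the input and measure how much the model's score drops; large drops mark regions the model relies on. Below is a minimal sketch using a toy scoring function – `toy_model` is a hypothetical stand-in for a trained network, not a real diagnostic model:

```python
import numpy as np

def occlusion_sensitivity(model, image, patch=2):
    """Occlude each patch of the input in turn and record the drop in
    the model's score. High values indicate regions the model relies on."""
    base = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i+patch, j:j+patch] = 0.0   # mask one patch
            heat[i // patch, j // patch] = base - model(occluded)
    return heat

# Toy "model": the score is the mean of the top-left quadrant, so only
# that region should light up in the heatmap.
def toy_model(img):
    return img[:4, :4].mean()

image = np.ones((8, 8))
heat = occlusion_sensitivity(toy_model, image)
print(heat)  # patches in the top-left quadrant score 0.25; all others 0
```

Real clinical interpretability work uses richer methods (saliency maps, Grad-CAM, and the like), but the principle – probing which input regions drive the prediction – is the same.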

AIM: Tell us about the challenges your institution has faced while working in the field of AI and Data Science.

Ravindran: IIT Madras has an excellent ecosystem for research. Some of the problems typically encountered in AI/Data Science research are:

  • Lack of high-quality data: We collaborate extensively with industry and with some of the leading research laboratories globally, which gives us access to a variety of datasets. Further, the centre has sponsored multiple projects and initiatives to generate high-quality data, which we make available publicly. Still, data availability remains a challenge, and we have to make special efforts to overcome it.
  • Lack of computing resources: IITM is reasonably well placed in terms of on-campus compute servers. The centre has a DGX-1 deep learning machine, a 100+ CPU cluster, and several GPUs. We also have arrangements with several cloud service providers for subsidised access to cloud time, so we do not feel the pinch in terms of computing. Of course, we cannot compete with Google research or Amazon research in their access to computing power and would require significant funding to reach that level.

AIM: What’s IIT Madras’s approach towards AI education and research?

Ravindran: IIT Madras believes strongly in a bottom-up approach to research agendas. Faculty are largely free to choose their research goals. At the centre, we encourage people to work on interdisciplinary research, but there are several projects that look at fundamental problems in AI.

In education, we are the first IIT to introduce a five-year dual degree program in data science, in which students earn a Bachelor’s degree in any branch of engineering and a Master’s degree in data science. We believe that a strong grounding in the fundamentals of a discipline is necessary for a person to be a successful data scientist in that discipline. This program is aimed at training engineers to be data scientists specialising in their primary domain of study. We also offer a minor in AI, open to students of all disciplines, which gives them exposure to AI and related topics through a few electives.

AIM: What is the scope of AI and data science in India?

Ravindran: India is ripe for the adoption of AI and DS in many disciplines, and the government is rightly putting in place programs and incentives for the same. In some ways, it should be easier in India because a lot of data is generated here, both due to the size of the population and the digitalisation drive of the government. But the challenge is that the data is seldom in a format amenable to analysis and AI.

Currently, the government is starting a drive to standardise the available digital data, which opens up the possibility of more AI-driven solutions across the board. The other challenge for the adoption of AI in India is that we have not truly “localised” AI. One obvious challenge is language, and the multiplicity of languages in India. A few initiatives (including Samanantar from IITM) are progressing towards addressing that problem. But there are further challenges – issues of bias, fairness, trust, explainability, and social impact, which have so far been studied only in the context of well-developed western societies.

For example, what would the adoption of AI mean for a society with an extreme digital divide, like India? What is the notion of fairness that is commensurate with our notions of social justice? Such questions need to be answered when we are looking at a larger rollout of AI in India, and the answers have to be steeped in the Indian context – they cannot be readily imported from elsewhere. There are nascent research initiatives around the country addressing this, and hopefully, it will become a very active area of research. This is certainly a part of our research focus at the centre.

AIM: What role can the government play in encouraging more students to take up AI education?

Ravindran: What the government can do is to make sure that the AI education being offered is more standardised and the students are not taken for a ride in the name of AI education. The move to introduce AI even in the school curriculum is a good one and will give students early exposure and increase their interest in taking up AI as a career option. As I mentioned earlier, people with pure AI expertise are only part of the requirement – we really need to have people with cross-disciplinary expertise. Hence we should explore programs that enable doctors, engineers, lawyers, auditors, etc., to understand AI and appreciate the use of AI techniques in their fields. The government should encourage such cross-disciplinary programs in various disciplines. 

AIM: What is your advice for people who want to pursue AI research?

Ravindran: AI research requires you to have a very strong background in programming and mathematics, as well as in the fundamentals of AI. Be aware that AI is just another set of problem-solving techniques, and do not be carried away by the hype. You need to understand linear algebra, optimisation and probability theory at the least to be able to do meaningful research in modern AI.

For some sub-disciplines of AI, a good grounding in mathematical logic is also essential. In addition, you should have strong programming fundamentals; often, you will be working with tools that are not quite capable of doing what you are looking for, so you should be ready to set up your own codebase for the experimental research you are doing. One final piece of advice – never lose sight of the real-life impact of the work you are doing. Always consider whether your research leads to the development of responsible AI and lends itself to certifications of robustness, fairness, etc.
