The next (r)evolution. Managing human-allied Artificial Intelligence

Some recent developments have led people to believe that the “singularity”, a hypothetical future point at which AI becomes smarter than humans, is imminent. The reality is that current AI systems still blindly follow patterns in their training data and do not truly understand the world.

The need of the hour is the development of Human-Allied AI systems, in which humans and AI work together to amplify each other's capabilities and cover for each other's deficiencies. To enable such systems, one needs to adhere to the principles of responsible AI. Here are the key ideas behind these principles.

Explainable AI

In order to trust the outcomes of AI models, we must be able to understand the reasons for the decisions or recommendations they make. Yet success in many AI applications is achieved by the use of what are known as ‘black box’ models, where the process of computation is known but the reasons for the outcomes are not fully understood.

Even models that are not black boxes can often be explained only in terms of their statistical properties. A responsible AI system will offer explanations that anyone can understand, which requires aligning AI models to the accepted modality of explanation in the application domain.
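To make the distinction concrete, here is a minimal sketch (in Python, assuming the scikit-learn library; the dataset and models are illustrative choices, not ones discussed in this article) contrasting a model whose decision rules can be read off directly with a black box trained on the same data:

    # An interpretable model versus a black box on the same toy medical dataset.
    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.neural_network import MLPClassifier

    data = load_breast_cancer()
    X, y = data.data, data.target

    # Interpretable: a shallow decision tree whose learned rules can be printed
    # and checked by a domain expert.
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=list(data.feature_names)))

    # Black box: a neural network. The process of computation (weighted sums
    # and nonlinearities) is known, but the learned weights offer no
    # human-readable reason for why a particular case was flagged.
    mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0).fit(X, y)
    print(mlp.predict(X[:1]))  # a prediction with no explanation attached

The tree's printout is an explanation in the domain's own vocabulary; the network's output is not, and that gap is precisely what explainability research tries to close.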

Data discipline

Data discipline is the aspect of responsible AI that has received the most attention. The European Union's General Data Protection Regulation was the first-ever comprehensive framework for controlling the collection, access, and usage of data, and it makes paramount the rights of the end-user whose data is used to build AI systems. India's Digital Personal Data Protection Act, 2023, aims to do the same.

Fairness and ethics

The fairness and ethical aspects of AI have received much attention in both research and the popular media. Reports often surface of AI systems that identify people of a certain race as more likely to be criminals, or that assume women are more likely to be nurses than doctors. AI chatbots have been known to become toxic in their language with suitable encouragement from the other participant.

While much progress has been made in addressing such issues, techniques and policies must be adapted to the local social context. India has her own dimensions of discrimination, and one cannot blindly adopt West-centric views on them. Existing prejudices against certain castes or groups, or about people from certain regions, will be reflected in AI systems trained on such data. These biases must be identified and guarded against, and for that, AI researchers and social scientists must work in close collaboration to understand existing human biases and the ways in which they manifest.
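As one concrete illustration, here is a minimal sketch (in Python with NumPy; the predictions and group labels are hypothetical stand-ins for a model's decisions on a real population) of a common bias check, comparing the rate of favourable decisions across groups:

    # Demographic parity check: does the model grant favourable outcomes at
    # similar rates across groups defined by a sensitive attribute?
    import numpy as np

    preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # 1 = favourable decision
    group = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "A", "B"])

    rates = {g: preds[group == g].mean() for g in np.unique(group)}
    print("selection rates:", rates)  # here {'A': 0.6, 'B': 0.4}
    print("disparity:", max(rates.values()) - min(rates.values()))

Demographic parity is only one of many fairness criteria; which criterion is appropriate, and which groups must be compared, depends on exactly the local social context discussed above.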

Performance expectations

Another aspect of responsible deployment of AI models is the ‘performance expectations’ of such systems. AI systems are not simple programs: they solve complex problems, and the outcomes of their computations may not always be right. The end-users of such programs often do not fully understand the implications of this. When a designer says that a system will be correct 93 times out of 100, does that mean an AI-enabled medical scan will fail to detect the illness in 7 patients? It could well be that the AI system says a patient has a disease just because the x-ray was damaged!
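A toy worked example (the numbers are hypothetical, chosen to match the 93-in-100 figure above) shows why a single accuracy number can mislead. Suppose there are 100 scans, of which 10 truly show the illness:

    # Overall accuracy versus the error that matters to an ill patient.
    true_positives = 3    # ill patients the system flags
    false_negatives = 7   # ill patients the system misses
    true_negatives = 90   # healthy patients correctly cleared
    false_positives = 0

    total = true_positives + false_negatives + true_negatives + false_positives
    accuracy = (true_positives + true_negatives) / total
    sensitivity = true_positives / (true_positives + false_negatives)

    print(f"accuracy:    {accuracy:.0%}")     # 93% -- sounds reassuring
    print(f"sensitivity: {sensitivity:.0%}")  # 30% -- 7 of 10 ill patients missed

The headline figure of 93% is true, and yet the system misses 7 of the 10 patients who are actually ill. Pinning down which numbers must be reported is what domain-specific performance regulation would have to do.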

Hence, regulations mandating performance guarantees must be developed for each application domain. At the same time, one needs to understand that researchers cannot guard against all possible eventualities, and appropriate insurance models for AI systems will therefore also be needed.

Teamwork

The capabilities of AI can be fully realised only when humans and AI systems work together. Responsible deployment of AI systems requires understanding how work will be disrupted by AI, and developing new AI-in-the-loop protocols for solving problems. Companies will have to re-skill workers to operate effectively in such a hybrid environment.
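One simple shape such a protocol can take (a minimal sketch in Python; the task and confidence threshold are illustrative assumptions, not a protocol described in this article) is a system that acts automatically only when it is confident and routes uncertain cases to a human expert:

    # An AI-in-the-loop triage rule: automate the confident cases, defer the rest.
    def decide(probability_ill: float, threshold: float = 0.9) -> str:
        if probability_ill >= threshold:
            return "flag for treatment (automated)"
        if probability_ill <= 1 - threshold:
            return "clear (automated)"
        return "defer to human expert"

    for p in (0.97, 0.55, 0.02):
        print(f"p(ill)={p:.2f} -> {decide(p)}")

The threshold encodes a policy choice: how much of the workload the AI handles alone, versus how often a human stays in the loop.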

The Centre for Responsible AI (CeRAI) has been set up at IIT Madras to study these issues under three themes: making AI understandable, AI and safety, and AI and society. The centre follows a multi-stakeholder consortium model, with participation from industry, government, legal experts, social scientists, and industry bodies, apart from different academic institutions.

(Prof. Balaraman Ravindran is Head, Centre for Responsible AI, IIT Madras)

Original News Link

https://www.iitm.ac.in/happenings/press-releases-and-coverages/next-revolution-managing-human-allied-artificial