Artificial Intelligence is the official hot topic. It’s the talk of the town: everyone has a view, but few truly understand how it works or what it can contribute beyond a well-drafted LinkedIn post or hyperreal images, given away, for now, only by botched human faces.

AI is a far-reaching phenomenon with consequences well beyond these reductive examples. Be it narrow AI like self-driving cars or generative AI like chatbots that constantly learn from and build on the information fed to them, AI tools can increase efficiency and propel innovation.

Should there be regulations for AI?

Over the past year or so, OpenAI’s ChatGPT has taken the world by storm. Everyone is using it to draft emails, write essays, help with homework, and sometimes even as an entity that can listen and offer life and medical advice! But the question remains: are AI models reliable enough to take on these tasks? And does their unreliability call for strict regulation? The spread of misinformation, loss of jobs, creative plagiarism, erosion of public trust, and biases amplified by biased training data are just some of the important concerns. For governments, these are reason enough to rush to regulate.

Many governments have already moved to take on the AI behemoth. The European Union has proposed legislation, the Artificial Intelligence Act, that attempts to manage AI through a risk-based approach, scaling compliance requirements according to the use case. The US government has taken a federal approach through its Blueprint for an AI Bill of Rights, while individual states have adopted their own rules through privacy legislation covering AI-related use cases. The UK government, however, has chosen a “pro-innovation” strategy with a focus on broader guidelines instead of strict legislation. This approach offers flexibility, with oversight from existing regulations and regulators supplemented by an AI-specific overlay.

Why regulating AI is a challenge

Regulating AI is a mammoth task for three main reasons. First, AI’s integration into technology has been unprecedentedly rapid, which has raised concerns among many software developers. As self-regulators, they are still grappling with the uncertainty over the misuse new software updates could enable. This caution is a response to the high risk of unintended consequences.

The second challenge is AI’s evolving nature. Generative AI systems and their advanced versions constantly learn from and build on the information they are fed, creating versions that may differ significantly from the initial product. This makes it difficult to scrutinise models ex ante and to pre-empt the next iteration.

The third challenge is the question of liability. Imagine an AI model that makes a harmful decision or causes unintended damage: who should be held responsible? Lawyers and policymakers are struggling with this issue because, once an AI system is deployed, its actions aren’t directly controlled by any one individual. Assigning accountability is tricky, especially when AI systems make complex, independent choices, and without clear lines of responsibility it is hard to decide who answers when something goes wrong.

What can India do now?

Given these reasons and other uncertainties around AI’s dynamic nature, rushing to regulate isn’t ideal. Although imposing stringent constraints might feel instinctive, especially given people’s reluctance toward rapid change, some creative destruction is essential when innovation and progress are at stake.

Does that mean we should give AI a free hand? Absolutely not. The vision for AI regulation should be to recognise its risks and find ways to manage them without stifling innovation. How liability is determined when harms occur will also shape regulatory norms.

Granting staggered discretion to existing sector-specific regulators, instead of enacting a specialised AI policy, may be more helpful given AI’s dynamic nature. For example, in extreme cases such as deepfakes resulting in child pornography or other grave harms, regulations governing the IT sector and tech platforms can do more than AI-specific legislation.

At present, India does not have a dedicated regulation for AI. A good regulatory framework for us would be one that strikes a balance between driving innovation and ensuring ethical use, while anticipating unintended consequences. How India governs AI will matter, and it’s important that we don’t lose sight of the forest for the trees.

Post Disclaimer

The opinions expressed in this essay are those of the authors. They do not purport to reflect the opinions or views of CCS.

Aakriti Parashar

Aakriti Parashar is an aspiring public policy professional and writer, currently working as a Junior Associate in the Policy Training and Outreach Department at the Centre for Civil Society. A 2021 graduate of the University of Delhi, she has led various social entrepreneurship teams and worked as a consultant for the Delhi Government. Aakriti has a keen interest in education policy and research. Her published articles include India’s Daughter, a commentary on women’s safety in India, and a number of pieces for Her Campus Media.