India needs a principles-based approach to regulating AI

Last week, amid the buzz and hubbub around artificial intelligence (AI), an open letter penned by some notable individuals threw a bucket of cold water on all the excitement. Apparently inspired by a letter from the Future of Life Institute, it raised fears that AI will put jobs in the country at risk and hinted at the doom that will befall us if we do not regulate it immediately.

As regular readers of this column will know by now, I am optimistic about AI. I believe this will be the transformative technology that drives the next big change in the way society functions. As with all ‘tech’-tonic changes, it will transform the way we work, rendering many of the jobs we currently have irrelevant. But in their place will come new jobs and new skills that mankind will have to learn in order to make the most of the opportunities it offers. That is why I do not share the letter writers’ pessimism.

Having said that, there is certainly merit in starting to think about how AI should be regulated. There is no doubt that it is well on its way to becoming a ubiquitous technology that pervades various aspects of our lives. When this happens, many of the regulatory frameworks we currently rely on will become redundant. And it’s never too early to start thinking about how to deal with it.

As it happens, over the years, more than a few countries have attempted to do just that. The US Office of Science and Technology Policy released a blueprint for an AI bill of rights that took a largely laissez faire approach. While it reiterated the need to protect users from unsafe and ineffective systems, to ensure AI systems are designed so they do not discriminate, and to address privacy concerns around notice and user autonomy, the blueprint stopped short of formally spelling out what exactly AI companies will have to do.

On the other hand, the European Commission has come out with a full-blown legislative proposal that lists in excruciating detail how it intends to regulate “high risk AI systems”. This includes requiring AI companies to carry out continuous, iterative assessment of risks; to use only error-free data-sets for training; and to establish audit trails for transparency. It also intends to set up a European AI Board and a penalty regime even stricter than the General Data Protection Regulation (GDPR), with fines of up to 6% of global turnover for breaches.

Both of these regulatory proposals attempt to correct what we believe we know, based on our current experience, is wrong with algorithmic systems. They seek to prevent the discrimination we have seen these systems commit because they were trained on human data with all its inherent biases, and they attempt to mitigate the harm to privacy that may occur when AI systems use our information, without notice, for purposes other than those for which it was collected or processed.

These are issues that need to be addressed, but designing our regulatory strategy to fix problems only after they have manifested themselves will not help us deal with a technology capable of developing as rapidly as AI. Nor will applying a traditional approach to accountability.

From what we’ve seen of generative AI so far, it is capable of unpredictable emergent behaviour that often has no bearing on the programming it receives. These systems are adaptive, capable of far more than their human developers could have imagined. They are also autonomous, making decisions that often bear no relation to the expressed intentions of their human creators, and that are often executed without their control. If our regulatory solution is to hold the developers of these systems personally liable for this accidental behaviour, they will be forced to halt further development for fear of bearing liability for the very emergent behaviour that is these systems’ strength.

But what if there were another way? What if we adopted an agile approach to AI regulation based on a set of cross-cutting principles that describe, at a very high level, what we expect AI systems to do (and not do)? We could apply these principles to all the different ways AI is, and will be, deployed—across a wide range of sectors and applications. Sector regulators could then refer to these principles, use them to identify harms at the margin, and take appropriate corrective action before the effects become too widespread.

This is the approach the UK government is taking in its recently published “A pro-innovation approach to AI regulation”. Rather than establishing a new regulatory framework, it intends to follow an agile and iterative approach designed to learn from real experience and continually improve. Recognising that draconian laws may slow technological innovation, these principles are not intended to be placed on a statutory footing. Instead, the government is looking to issue them on a non-statutory basis, so that they can be implemented by existing regulators, who will leverage their domain-specific expertise to tailor rules to the specific contexts in which AI is used.

So far, India has refrained from regulating AI despite the urgings of some to do so in haste. However, when we do eventually start, we would be well advised to follow the UK approach. AI has a lot to offer us and we should not suppress its potential.

Rahul Matthan is a partner at Trilegal and has a podcast called Ex Machina. His Twitter handle is @matthan.
