India's AI startups & businesses face challenging data privacy laws, regulatory heat

FP Staff September 20, 2024, 14:07:41 IST

India’s DPDP Act sets out stringent requirements for the protection of personal data while allowing its processing for lawful purposes. However, many companies are building GenAI models without adequate transparency about how personal data is used to train these systems

As India's AI industry continues to grow, the balance between innovation and compliance with stringent data privacy laws will be crucial. Companies must navigate these challenges carefully to avoid legal repercussions while continuing to advance AI technologies. Image Credit: Pexels

India’s rapidly growing AI industry, particularly companies developing generative artificial intelligence (GenAI) models, is facing increasing challenges as stricter data privacy laws and regulatory oversight come into play.

A wide range of businesses, from IT firms to banks and cloud storage providers, are seeking legal guidance to ensure their operations comply with the Digital Personal Data Protection (DPDP) Act, amid concerns about the use of personal data in AI training.

Regulatory challenges and legal concerns
The DPDP Act, passed by the Indian Parliament in August 2023, sets out stringent requirements for the protection of personal data while allowing its processing for lawful purposes.

However, many companies are building proprietary GenAI models without adequate transparency regarding how personal data is being used for training these systems. This lack of transparency could potentially violate the principles of lawful consent, fairness, and transparency enshrined in the DPDP Act, raising significant legal risks.

Experts in the industry have voiced concerns that using publicly available data for AI training without proper consent could conflict with both the DPDP Act and existing copyright laws. Establishing a breach of consent in the context of AI, where models generate new outputs, adapt to new information, and operate with a degree of autonomy, presents unique challenges in the legal landscape.

Businesses are now consulting with legal experts to navigate these complexities, focusing on how to craft privacy policies that secure appropriate user consent, define contractual obligations for data processors, and align with global data protection laws. The DPDP Act mandates principles like purpose limitation and data minimisation, but the widespread use of the same data for multiple AI applications raises questions about compliance with these principles.

Data management an uphill task
The uncertainty surrounding data management in AI systems extends to the technical capabilities of the models themselves. For instance, it remains unclear whether AI models can selectively delete parts of what they have learned, or whether honouring a deletion request would require complete retraining. The cost implications of such requirements, coupled with the need to comply with the DPDP Act’s provisions, are pressing issues for companies as they strive to avoid potential legal pitfalls.

Indian companies are increasingly aware of the need to future-proof their operations against legal challenges, especially in the absence of strong legal precedents regarding GenAI’s impact on citizens’ rights. Tata Consultancy Services (TCS), one of the world’s leading IT services companies, has highlighted the importance of proactive risk management and adherence to evolving legal standards. TCS emphasises the need for robust governance frameworks, effective consent management, and continuous monitoring of global regulatory trends to ensure compliance.

What this means for AI development in India
The concerns extend beyond the direct use of personal data. Experts note that inferences an AI system makes about individuals are themselves considered personal data, further complicating the landscape. Inaccuracy and bias in GenAI applications pose critical risks, particularly in areas like marketing, hiring, digital lending, and insurance claims. Questions about responsibility — whether it lies with the data fiduciary or with developer companies like OpenAI — are central to the ongoing debates.

As India’s AI industry continues to grow, the balance between innovation and compliance with stringent data privacy laws will be crucial. Companies must navigate these challenges carefully to avoid legal repercussions while continuing to advance AI technologies. The ongoing dialogue between businesses, legal experts, and regulators will play a pivotal role in shaping the future of AI development in India.