The Age of Artificial Intelligence: Data Safety Challenges and the Need for Evolving an Adequate Legal Framework


The world is gravitating towards a digital, technology-based economy, where human efficiency is being remarkably supported and enhanced by technological innovation. Much mechanical and repetitive work is being automated, increasing efficiency manifold, and the introduction of Artificial Intelligence, or AI as it is popularly called, has opened up new possibilities of performing tasks like never before. In the words of John McCarthy, who is considered the father of AI, “Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs.” As we become ever more reliant on our smart gadgets to get our work done, at the click of a mouse or the tap of a fingertip, a humongous amount of data is captured and communicated through digital channels. Some of this data is shared deliberately, while some is captured without our knowledge through our various interactions and activities on digital platforms, leading to concerns about data protection and about the laws governing the safety and privacy of users. Data is the new oil of the 21st-century economy – whoever has data also has power.

The type of AI most extensively used is the ‘Limited Memory’ system, in which data on the history of usage is stored and processed to produce a predictive output that is best suited, or most beneficial, for that usage pattern. Essentially, the system depends on storing original data, which feeds machine learning and serves as a reference model for future problem solving; but it is this very mode of capturing data that is seen as a problem in many spheres. Data is the fuel on which such a system runs.
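To make the idea concrete, the following is a minimal, hypothetical sketch of a ‘Limited Memory’ system (the class name and frequency-based prediction are illustrative assumptions, not any vendor’s actual algorithm): it retains a bounded window of past user actions and uses that stored history as its reference model for the next prediction.

```python
from collections import Counter, deque

class LimitedMemoryRecommender:
    """Toy illustration of a 'limited memory' AI system: a bounded
    window of past interactions is stored and used to predict the
    next likely action."""

    def __init__(self, memory_size=100):
        # Only the most recent `memory_size` events are retained.
        self.history = deque(maxlen=memory_size)

    def observe(self, item):
        # Each interaction is captured and stored -- this retention
        # step is precisely what raises the privacy concerns above.
        self.history.append(item)

    def predict(self):
        # Predict the item the user is most likely to want next,
        # based purely on frequency within the stored history.
        if not self.history:
            return None
        return Counter(self.history).most_common(1)[0][0]

rec = LimitedMemoryRecommender(memory_size=5)
for item in ["news", "sports", "news", "music", "news"]:
    rec.observe(item)
print(rec.predict())  # -> news
```

Note that even this toy model must persist user behaviour to function at all, which is why regulation focuses on how much history such systems may keep and for how long.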

To understand how we encounter AI in our everyday lives, we can look at the example of the popular social media platform Twitter, which uses one of the most advanced Artificial Intelligence systems in its business model. Information about a user’s activities, such as his tweets, retweets, likes, comments etc., is captured, stored and processed through deep neural networks. Users are then profiled on the basis of this data, and Twitter makes recommendations on the user’s timeline, predicting what he would find relevant and interesting. While many of these platforms openly admit to harvesting data for the purpose of enhancing user experience, some do so clandestinely, without making any effort to inform the public. In either scenario, the question arises whether it is justifiable to voraciously harvest consumer data for business objectives, and whether a limit should be set on how much, and what kind of, data these digital entities are allowed to store.
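The profiling step described above can be sketched in miniature. This is a hypothetical illustration only (the event weights and topics are invented; Twitter’s real pipeline uses deep neural networks, not a simple counter): engagement events are aggregated into an interest profile, and candidate posts are ranked against it.

```python
from collections import Counter

# Invented engagement log: (action, topic of the post engaged with).
engagements = [
    ("like", "politics"), ("retweet", "politics"),
    ("like", "cricket"), ("reply", "politics"),
]

# Assumed weights: stronger signals count more towards the profile.
weights = {"like": 1, "reply": 2, "retweet": 3}
profile = Counter()
for action, topic in engagements:
    profile[topic] += weights[action]

# Rank candidate posts by how well their topic matches the profile.
candidates = [("Budget debate", "politics"), ("Match report", "cricket")]
ranked = sorted(candidates, key=lambda c: profile[c[1]], reverse=True)
print(ranked[0][0])  # -> Budget debate
```

Even this crude version infers the user’s dominant interest from a handful of interactions, which illustrates how quickly engagement data becomes a behavioural profile.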

Why Data Safety is Important in the Age of Artificial Intelligence

An argument often made is: why should there be so much concern about data safety when much of the data picked up by AI is only basic in nature, such as a user’s shopping preferences or frequency of using a platform? But in reality it is not so simple, and the ramifications of unregulated and uncontrolled data harvesting can be very detrimental. Some of the risks that may emanate are:

(a) Personal data about a user, even if basic in nature, can give an external entity far too much information about the person, which infringes upon his right to privacy. Merely by agreeing to use an electronic platform, users cannot be deemed to have given blanket approval for AI algorithms to store, collate and analyse all their data. To put a case in point, merely by shopping on an e-commerce portal, a consumer may not wish to give the portal so much control over his historical data that it comes to know his personality, needs and preferences far beyond what he would freely consent to disclose.

(b) Information about an individual’s political, religious and ideological inclinations can be harvested, especially from social media platforms where people are more expressive about their opinions. Such data can be exploited to propagate political or religious ideologies to targeted profiles, or even to influence elections in a country – a grave threat to democratic processes. In an incident that shook the world, it was discovered that the data analytics firm Cambridge Analytica had harvested 50 million Facebook profiles, without the users’ authorisation, to profile individual US voters and to predict and influence their choices at the ballot box by exposing them to personalised political content.

(c) Big data analytics can be used to identify vulnerable individuals in the population by analysing and profiling them on the basis of the ideologies they express in digital media; such individuals can even be brainwashed and radicalised by terror organisations. The use of AI for terrorist activities poses a grave risk to national security, which the legal system must take very seriously. To put a case in point, the dreaded global terror outfit ISIS is known to use digital channels to influence and recruit youngsters for its inhuman activities.

(d) The user data of Indian nationals can be transferred without authorisation to servers situated abroad, where it can be misused by any State or non-State actor hostile to India, to the prejudice of the sovereignty and integrity of the country. Recently, the Government of India banned 59 Chinese apps over privacy and security concerns.

A Perspective on the Data Security Laws in the World

There is a pressing need for the legal system to have laws and regulatory mechanisms in place to check unscrupulous or excessive mining of data. While there is no argument that Artificial Intelligence is essential to enhancing the efficiency, accuracy and competitive advantage of any system where it is deployed, this cannot come at the expense of data security. While we embrace this new technology, it is equally important to incorporate new changes in our legal system to deal with the challenges that come with it.

This aspect has been well highlighted by Google CEO Sundar Pichai, who said, “Artificial Intelligence needs to be regulated. Companies shouldn’t be able to just build promising new technology and let market forces decide how it will be used. Regulation should be nuanced, balancing mitigation of potential harms with space for social opportunities.” It cannot be left to companies to self-regulate how they harvest data. Ongoing concerns that some Chinese entities are engaged in stealing data have accelerated the need for greater supervision and regulation of AI solutions.

Recently, the Federal Communications Commission (FCC), the telecommunications regulator of the USA, designated China’s Huawei Technologies and ZTE Corp as national security threats on finding them to be exploiting telecommunications infrastructure for espionage. Lawmakers in India are also taking cognizance of these international developments in order to incorporate measures against data espionage. The Indian Ministry of Electronics and Information Technology is likewise considering prohibiting Huawei and ZTE from participating in the 5G telecommunication infrastructure.

It is important that the Government take proactive measures to bar entities that have suspicious credentials, or have been blacklisted in other countries, from accessing the data of Indian users. Many companies transfer huge volumes of data to servers abroad without authorisation, which can endanger the security, sovereignty and integrity of the country. Recently, the Indian Government invoked Section 69A of the Information Technology Act, read with the relevant provisions of the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009, to ban 59 Chinese apps including TikTok and WeChat on finding them engaged in activities prejudicial to the sovereignty and integrity of India, the defence of India, the security of the State and public order.

Since an ordinary user may not be able to discern or comprehend when and how much of his data is being recorded by the complex algorithms of AI, and it may not be possible for him to stop his data from being transferred to servers abroad, it becomes imperative for legislators to put an effective regulatory framework for data safety in place. New provisions are being incorporated in legislation across the world, making data security and privacy an important responsibility of public policy. One of the most comprehensive laws on data protection and privacy, enacted by the European Parliament, came into force in 2018. This law, the General Data Protection Regulation (GDPR), applies to all organisations operating within the European Union as well as to those doing business with customers based in the EU. Among its key aspects are giving citizens more control over their personal data and placing legal obligations on processors of data to ensure transparency and integrity in data management, with liability in case of a breach.

The seven principles of the GDPR are: lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; integrity and confidentiality (security); and accountability.

In comparison, countries like India and the USA do not yet have such comprehensive consumer data privacy laws, but certain existing legal provisions can, to some extent if not substantially, address the issue of data security. Some relevant laws operating in the USA are:

The US Privacy Act, 1974, which gives US citizens certain rights over personal data held by Government agencies and imposes restrictions on how agencies may collect and use citizens’ data.

The Health Insurance Portability and Accountability Act (HIPAA), 1996, which contains the Privacy Rule restricting healthcare providers from disclosing ‘Protected Health Information’ or using it for marketing without authorisation.

The Children’s Online Privacy Protection Act (COPPA), 1998, an important legislation that regulates the online collection of personal data from minors.

The Gramm-Leach-Bliley Act, 1999, a banking and finance law that protects customers’ ‘Non-Public Personal Information’ from being shared with non-affiliated third parties without authorisation.

Some US states like California and Massachusetts have framed their own Consumer Data Privacy Laws although there is no such statute yet at the federal level.

Legal Landscape with Respect to Data Security and Privacy in India

In India, although there are growing concerns about data security, there is as yet no Central legislation or regulatory framework that is substantial and effective in ensuring data safety in the new digital age. This void needs to be filled without delay, given the growing proliferation of digital technology and the concern that user data may be misused for ulterior objectives, especially by hostile countries against our national security. The Personal Data Protection Bill, 2019 has been tabled in Parliament to provide a cross-sectoral legal framework for data protection in the country.

The Bill also proposes establishing a Central Communication Interception Review Committee to regulate surveillance by any electronic device, as well as a Data Protection Authority (DPA) to monitor compliance with the law and frame regulations on issues such as mechanisms for taking consent, limitations on data mining and cross-border transfer of data. It remains to be seen when this legal proposition sees the light of day. Meanwhile, the right to privacy has been unanimously pronounced a fundamental right under Article 21 of the Indian Constitution by a nine-Judge Bench in the landmark case of K.S. Puttaswamy and Anr. v Union of India and Ors., (2017) 10 SCC 1. That case dealt with the infamous Aadhaar controversy, which involved the recording of citizens’ sensitive biometric data. The judgment is a milestone for privacy jurisprudence: it is now incumbent upon the Government to protect the privacy of digital users, as any breach will be a violation of a fundamental right. Some other relevant provisions that deal with the issue of data security in India are:

The Information Technology Act, 2000

Under Section 43A of this Act, a body corporate that is negligent in implementing reasonable security practices while handling sensitive personal data, thereby causing wrongful loss or wrongful gain, is liable to pay compensation.

Under Section 72A of this Act, disclosure of information without authorisation is punishable with imprisonment of up to three years and a fine of up to Rs 5,00,000.

Section 69 of this Act is an exception to the rule of privacy: under it, the Government is empowered to intercept, monitor or decrypt any information in the interest of the defence, national security, sovereignty or integrity of India, public order, or friendly relations with other States, or for the prevention or investigation of any offence.

Suggestions and Conclusion

The legal framework to regulate data processing has to strike a balance between the need for data safety on one hand and the bona fide requirement of processing entities to harness data for improving delivery and economic efficiency on the other. As Artificial Intelligence evolves, it poses a unique conundrum before mankind: while it consumes voluminous streams of data through various digital channels, raising serious questions about safety and privacy, the technology itself is being used in cyber security solutions to detect and thwart viruses and malware through complex algorithms. Because AI programs can collate and reference huge collections of historical data, they are more accurate in predicting and apprehending a cyber-attack than manual monitoring. Due to its enormous capabilities, AI and robotics are increasingly being deployed in healthcare, banking and finance, and the defence industries. The regulatory framework should provide an enabling environment for AI to be beneficially developed, while ensuring that there is no breach of data safety and privacy. A few things that ought to be kept in mind while building a legal framework are:

(a) There should be emphasis on monitoring digital entities’ compliance with data safety rules. Regular data audits should ensure that companies are not breaching those rules or compromising consumers’ safety or privacy.

(b) Attention must be given to consumer awareness, so that users know their rights and the limits on data usage by digital players. Moreover, the documents setting out the terms for authorising data usage are often very complex and run into several pages, making it practically impossible for consumers to read and comprehend their implications. These need to be simplified so that an individual can truly understand, and then give or withhold, his consent.

(c) There must be an effective and efficient redressal mechanism to deal with complaints of breach of data safety or privacy.

(d) It must be ensured that the interpretation of data sourced from users does not foment social discrimination or infringe upon their civil rights. To put a case in point, intelligent facial recognition software may lead to racial profiling of citizens. The legal mechanism must discourage such discriminatory outcomes from AI systems.
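The AI-driven attack detection mentioned above can be sketched minimally. This is an illustrative assumption, not any real security product: historical traffic readings serve as the reference data, and a reading that deviates sharply from that history is flagged as a possible attack.

```python
import statistics

def is_anomalous(history, new_value, z_threshold=3.0):
    """Flag a traffic reading as a possible attack if it deviates
    from the stored history by more than `z_threshold` standard
    deviations -- a crude stand-in for AI-based threat detection."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # No variation in history: anything different is suspicious.
        return new_value != mean
    return abs(new_value - mean) / stdev > z_threshold

# Invented example: requests per minute seen historically,
# then a normal reading versus a sudden spike.
normal_traffic = [102, 98, 101, 97, 103, 99, 100, 100]
print(is_anomalous(normal_traffic, 100))   # -> False
print(is_anomalous(normal_traffic, 500))   # -> True
```

The same dependence on stored historical data that makes such detection accurate is what a regulatory framework must govern, so that security benefits do not come at the cost of unchecked data retention.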


  • Holt, Kris (2020), Google CEO Sundar Pichai says AI ‘Needs to be Regulated’.
  • ET Bureau (2020), India bans 59 Chinese apps including TikTok, Helo, WeChat.
  • Reuters (2020), U.S. FCC issues final orders declaring Huawei, ZTE national security threats.

This article is written by Sonali Sinha Naik, Second Year Student of LLB at G J Advani Law College.

Law Corner