Violation of Human Rights in the AI Spectrum

Introduction

Algorithms, gait analysis, facial-recognition systems, biometrics and much more are merely examples of products of Artificial Intelligence (AI), the branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence.

Worldwide, 32% of enterprises rely entirely on Artificial Intelligence, while another 41% rely on it to a large extent.1 Undeniably, this technological infusion into the global village is a new form of revolution, much as the Industrial Revolution caused a paradigm shift in the dynamics of society in the 18th and 19th centuries.

However, unlike the gradual effects that the Industrial Revolution brought along with it, the ill effects of this technological revolution have already started cropping up. With the rising interaction between human beings and AI, a threat to human rights is bound to occur.
This article outlines the overlap between AI and human rights, and how human rights fare under this spectrum of technology.

Violation of Right to Equality and Principle of Non-Discrimination

Every human being is born equal and is therefore conferred basic rights such as the right to equality. Article 2 of the Universal Declaration of Human Rights (UDHR) articulates that everyone is entitled to all the rights and freedoms set forth in the Declaration without distinction of any kind. Similarly, the Preamble of the UDHR affirms the equal rights of men and women. Article 2 of the International Covenant on Civil and Political Rights (ICCPR) reaffirms the same principle of non-discrimination, and Article 14 read with Article 15 of the Indian Constitution is aligned with these principles. Although some see AI as a catalyst for achieving these principles because of its ostensibly objective nature, one cannot deny that AI is also a product of human intelligence which, if not regulated strictly, can become a vehicle for discriminatory practices.

The Discrepancies

Algorithms and face-recognition systems are among the examples that have failed to provide equal treatment to their users. One such instance is the research carried out by the National Institute of Standards and Technology, which found that facial-recognition algorithms falsely identified African-American and Asian faces 10 to 100 times more often than Caucasian faces; similar glitches were reported in predicting the gender of people of colour.2 On one occasion, Google Photos' image-recognition algorithm miscategorised a photograph of a Black person as a gorilla.3 In another study, a widely used medical algorithm was found to favour white patients over sicker Black patients.4
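For illustration only, the short sketch below shows how such a disparity in false match rates can be quantified; the group labels and counts are hypothetical assumptions, not figures from the NIST study, and the code simply computes each group's false match rate and its ratio to a reference group.

```python
# Illustrative sketch only: how a disparity in face-recognition false match
# rates across demographic groups can be quantified. All counts below are
# hypothetical and do not come from the NIST study cited above.

def false_match_rate(false_matches: int, impostor_comparisons: int) -> float:
    """Share of impostor comparisons that the system wrongly declares a match."""
    return false_matches / impostor_comparisons

# Hypothetical evaluation results for two demographic groups.
results = {
    "group_a": {"false_matches": 200, "impostor_comparisons": 100_000},
    "group_b": {"false_matches": 5, "impostor_comparisons": 100_000},
}

rates = {group: false_match_rate(**counts) for group, counts in results.items()}
reference = rates["group_b"]  # treat group_b as the reference group

for group, rate in rates.items():
    print(f"{group}: FMR = {rate:.5f} ({rate / reference:.0f}x the reference rate)")
# With these hypothetical numbers group_a is falsely matched 40 times as often,
# the same kind of gap (10x to 100x) that the cited study reported between groups.
```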

Discrimination between Black people and white people, between men and women, or on any other such classification is a clear violation of human rights. These discriminatory practices, which are deep-rooted in society, have now been given new modes of continuation through the algorithms instilled in AI tools, and when put into practice they can cause irreparable harm to society.

Violation of Right to Privacy

Warren and Brandeis, in their law review article titled "The Right to Privacy" published in the Harvard Law Review in 1890, stated that the right of determining, ordinarily, to what extent one's thoughts, sentiments, and emotions shall be communicated to others is well protected under the shield of the common law. The authors were highlighting the right to privacy over one's personal affairs. Today, the right to privacy is protected through various international and national laws.

However, despite the numerous laws in place, the threat to privacy is very real, and citizens across the globe have very little say in how these technologies are deployed. For example, the contact-tracing applications rolled out to keep track of Covid-19 patients have been seen as a major challenge to the protection of privacy,5 as these applications record the personal information of their users and act as a surveillance mechanism that tracks their movements from time to time.

Prima facie, such surveillance may not appear to be a threat when viewed through the lens of public safety and health. However, as experts in the field caution, "every disclosure of personal information comes with some latent risk that it will be used in the future for purposes not disclosed at the time of collection".6 In other words, while users' consent is taken for the stated purpose of data collection, an ulterior motive of the Government can remain camouflaged behind the function that is made public; once the data is collected, the Government may use it for undisclosed purposes.

For example, the biometric data of Rohingya refugees, collected ostensibly to integrate them into society, was in reality used against them to facilitate their repatriation.7 The main reason for such situations is the weak data protection laws prevalent in most nations.

Additionally, technologies such as gait analysis, which can extract a person's silhouette from any video and analyse the silhouette's movement to build a model of the way the person walks, storing it in a database for later identification,8 and predictive algorithms that claim to infer a person's sexual orientation from a photograph with 91% accuracy,9 pose a potent threat to privacy when used for behavioural analysis, because they operate without the consent of the person whose privacy is in jeopardy. Such unethical practices, in the absence of stringent data protection policies, threaten citizens' privacy and result in deprivation of basic rights.

Violation of Right to Employment

Today, advances in AI threaten to raise unemployment through the automation of human jobs. A seminal Oxford study estimated that 47% of total US employment falls in the high-risk category of jobs susceptible to automation.10 These findings are not merely conceptual, because automation is now being practised on all fronts. Cambridge Industries Group, one of the leading suppliers of telecom equipment in China, has replaced two-thirds of its human workforce with robots and plans to run a 90% automated workforce in the near future.11

Similarly, Adidas has set up "robot-only" factories to improve efficiency and cut costs,12 while personal assistants are being replaced with AI-based virtual assistants such as Siri and Alexa, affecting the service and hospitality industries as well.

So far, the impact of job automation has been felt mainly by the lower and middle strata of society; the upper class has not yet been hit by this wave of automation.

Not only will this widen the gap in society; while such job automation will certainly boost production and the efficient use of resources, it will in the process reduce private consumption, which will in turn reduce demand and drag down overall GDP.

Such a scenario opposes the ideals of human rights and puts them in danger of being abused at the instance of capitalists.

Conclusion

Artificial Intelligence was introduced, developed, and progressed in order to complement human functioning rather than to stage a hostile takeover. It was predicted that technological advancement would serve only as a tool for the protection of human rights rather than acting against fundamental human interests.

Artificial Intelligence could have been a blessing to modern society had it been circumscribed by a timely and stringent legal framework.

The lack of a proper framework has widened the loopholes through which the harms of AI supersede its beneficial functionality. Human rights are not rights that can ever be suspended or terminated; they remain the first shield of protection for every human being.

Therefore, it is for all other elements to bring themselves into conformity with human rights, and not vice versa. Hence, if the era of AI is to flourish and grow while complementing human society, it must comply with the principles of human rights and support their jurisprudence.

References

1 https://www.statista.com/statistics/883262/worldwide-enterprise-reliance-automation-machine-learning-artificial-intelligence/
2 https://www.nytimes.com/2019/12/19/technology/facial-recognition-bias.html
3 https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai
4 https://www.washingtonpost.com/health/2019/10/24/racial-bias-medical-algorithm-favors-white-patients-over-sicker-black-patients/
5 https://www.financialexpress.com/industry/technology/data-privacy-and-safety-how-secure-are-contact-tracing-covid-19-apps/2006650/
6 https://healthitsecurity.com/news/covid-19-contact-tracing-apps-spotlight-privacy-security-rights
7 https://www.wired.co.uk/article/united-nations-refugees-biometric-database-rohingya-myanmar-bangladesh
8 https://inforrm.org/2020/06/05/when-privacy-and-security-collide-the-legality-of-using-facial-recognition-security-systems-in-quasi-public-spaces-raghav-mendiratta/
9 https://www.theguardian.com/technology/2017/sep/07/new-artificial-intelligence-can-tell-whether-youre-gay-or-straight-from-a-photograph
10 https://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf
11 https://www.vice.com/en_us/article/534bqb/this-shanghai-factory-plans-to-replace-all-of-its-human-workers
12 https://www.wired.com/story/inside-speedfactory-adidas-robot-powered-sneaker-factory/