Explaining the U.S. Blueprint for Artificial Intelligence (AI) Bill of Rights
With the increasing global use of artificial intelligence and automated systems in day-to-day life, there is a potential for harm in the absence of guiding principles or a reasonable framework. This technological advancement, seen as progress for humanity, can cause irreversible damage if there is no mechanism to keep its application in check. Recently, to ensure that the use of such technological systems does not threaten the civil rights or democratic values of the United States, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights.1 The Blueprint is meant to guide society in protecting people from technological threats and in using technology in a way that reinforces the highest values.2 Accordingly, the Blueprint identifies five principles governing how automated systems should be designed, used, and deployed to protect and safeguard the rights of people in an era of artificial intelligence dominance.
Safe and Effective Systems
This principle states that everyone should be protected from unsafe or ineffective systems. When a system is created, thorough safety testing should take place in consultation with stakeholders, domain experts, and diverse communities. With these safeguards in place, potential risks and impacts can be identified and mitigated. The idea behind this principle is to ensure that developed systems do not cause discrimination or other adverse impacts through their application, and that a safe environment is created for everyone. The principle thus directs that no automated system should be designed in a way that endangers the safety of the community. Moreover, systems should undergo pre-deployment testing, risk identification, and other standard tests to evaluate their outputs and outcomes. Attention should also be paid to reasonable foreseeability: understanding the range within which a system will operate, even when it deviates from its intended behavior, so that no substantial or unjustified harm is caused and any effects can be readily managed or prevented.
Thus, to ensure that automated systems are both safe and effective, this principle concludes that every automated system should go through stakeholder and expert consultation, pre-deployment testing, identification and mitigation of risks, continuous monitoring, clear organizational oversight, and the avoidance of inappropriate, low-quality, or irrelevant data. These safeguards should be implemented before any automated system is deployed in the market, so that a safer environment is created for everyone in society.
Algorithmic Discrimination Protections
This principle is intended to ensure that no one faces discrimination through the application of algorithms. According to the Blueprint, algorithmic discrimination occurs when “automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.” The principle therefore obligates the developers and deployers of algorithmic systems to aim for equitable functioning. It is certainly an important principle, because evidence suggests that algorithms have the potential to produce inequitable outcomes and amplify historical as well as existing inequalities, with severe consequences in which systemic bias is entrenched. Examples include unfair arrests and preventive detention, and biased algorithms in healthcare.
Thus, to provide an adequate framework for mitigating algorithmic discrimination, the Blueprint suggests that every automated system be tested to confirm that it is free from such discrimination. To that end, systems should undergo proactive equity assessments in the design phase, in which input data is extensively reviewed to examine the factors considered in using the data and the extent to which it may undermine societal goals. The ideal is representative and robust data that genuinely reflects local communities, without bias or potential for harm. Hence, according to the principle, continuous disparity assessments should be conducted and, on the basis of their results, disparity mitigation should take place to ensure accessibility and equity and to eliminate disparities.
Data Privacy
This principle deals with one of the most important protections. Every member of society should be protected from abusive data practices and should have autonomy over how their data is used, with due importance given to the element of consent. The principle calls for a framework that protects against privacy violations through built-in protections. Such protections should be grounded in reasonable expectations, with data used only where strictly necessary and with only the data required in a specific context being collected. To achieve this, the Blueprint focuses on the consent element: specific consent based on narrow use contexts and limited time durations; brief, direct consent requests in short and plain language that is easy to understand; and options for consent withdrawal and data deletion. Similarly, systems should be designed and built with privacy protection by default. During development, continuous assessment should be conducted to eliminate potential intrusions on privacy and other risks, and appropriate technical and policy mitigation measures should be implemented. Additionally, wherever data is collected, its scope should be limited to the identified goals; no data beyond that scope should be collected. The Blueprint also cautions that extra protection should be provided for data in sensitive domains such as health, education, and criminal justice. All in all, data privacy is given adequate focus in the deployment of automated systems.
Notice and Explanation
Through this principle, the Blueprint focuses on awareness in the use of automated systems. If you are subjected to an automated system, then according to this principle, you should be made aware of, and able to understand, how and why it contributes to outcomes that impact you. Developers and deployers must provide clear descriptions of how the automated system functions and what role automation plays. The outcomes of automation should be conveyed in a clear, timely, and accessible manner. This gives a person the opportunity to determine who is making a decision, and to contest or request correction of decisions in case of malice or bias. Notice and explanation therefore serve an important safety and efficacy purpose, allowing experts to verify the reasonableness of a decision. Awareness can be promoted through tailor-made notices explaining the specific purpose in an informative manner, including details of any recourse, appeal, or other dispute contestation process. Further, such notices must use accessible, plain-language documentation. Informed use has thus been treated as a top priority in the use of artificial intelligence systems.
Human Alternatives, Consideration, and Fallback
This principle is essentially a redressal principle, dealing with opt-out options and other remedies when a problem is encountered. It prioritizes the ability to opt out of an automated system in favor of a human alternative, wherever appropriate. Appropriateness is to be determined based on reasonable expectations in a given context, with the aim of ensuring maximum accessibility and protecting the public from harmful impacts. A human or other alternative should therefore be available, with timely escalation to that alternative if the automated system fails, produces an error, or directly impacts you; appeal and contestation options should also remain open. Special attention must be given where automated systems operate in sensitive domains, by providing a remedy that is proportionate, accessible, and convenient. Further, wherever human alternatives are provided, it should be possible to trigger them through an opt-out process that is timely and not burdensome.
Certainly, the U.S. Blueprint for an AI Bill of Rights, as discussed above, is a progressive step towards ensuring the fair and safe use of artificial intelligence technologies in society, with adequate focus on providing alternative remedies. These guiding principles are broad and cover the wide range of risks raised by the functioning of automated systems. The open question concerns the deterrent effect and implementation of these principles in the design and deployment of systems. The framework is appropriate and answers the need of the hour; nonetheless, its implementation is equally necessary to give real effect to the objective of these principles. On the whole, these guiding principles and the Blueprint can act as a precedent for other jurisdictions seeking to understand the risks, and the viable framework needed, to meet the new age of artificial intelligence in a sophisticated fashion.