The availability of massive data sets has made it easy to derive new insights through computers. As a result, algorithms, the sets of step-by-step instructions that computers follow to perform a task, have become more sophisticated and pervasive tools for automated decision-making.2 While algorithms are used in many contexts, we focus on computer models that make inferences from data about people, including their identities, their demographic attributes, their preferences, and their likely future behaviors, as well as the objects related to them.3
In the pre-algorithm world, humans and organizations made decisions in hiring, advertising, criminal sentencing, and lending. These decisions were often governed by federal, state, and local laws that regulated the decision-making processes in terms of fairness, transparency, and equity. Today, some of these decisions are entirely made or influenced by machines whose scale and statistical rigor promise unprecedented efficiencies. Algorithms are harnessing volumes of macro- and micro-data to influence decisions affecting people in a range of tasks, from making movie recommendations to helping banks determine the creditworthiness of individuals.4 In machine learning, an algorithm relies on multiple data sets, or training data, that specify what the correct outputs are for some people or objects. From that training data, it then learns a model that can be applied to other people or objects to make predictions about what the correct outputs should be for them.5
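As a rough sketch of that training-and-prediction loop, the example below fits a model on labeled historical records and then scores new, unseen records. The lending-style features, the made-up numbers, and the choice of a scikit-learn logistic regression are illustrative assumptions, not a method prescribed here.

```python
# Minimal sketch of supervised learning on a lending-style data set.
# Feature names, values, and the classifier choice are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Training data: past applicants with known outcomes (1 = repaid, 0 = defaulted).
X_train = np.array([
    [35, 52_000, 0.30],   # age, income, debt-to-income ratio
    [22, 28_000, 0.55],
    [48, 95_000, 0.10],
    [31, 40_000, 0.45],
])
y_train = np.array([1, 0, 1, 0])  # the "correct outputs" the model learns from

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

# New applicants the model has never seen; it predicts what their outputs "should" be.
X_new = np.array([[29, 61_000, 0.25], [41, 33_000, 0.60]])
print(model.predict(X_new))        # predicted labels
print(model.predict_proba(X_new))  # probabilities behind those labels
```

Any skew in who is represented in the training data, or in how their outcomes were recorded, carries straight through to the predictions made for new people, which is where the biases discussed in this paper originate.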
With algorithms appearing in a variety of applications, we argue that operators and other concerned stakeholders must be diligent in proactively addressing the factors that contribute to bias. Surfacing and responding to algorithmic bias upfront can potentially avert harmful impacts on users and heavy liabilities for the operators and creators of algorithms, including computer programmers, government, and industry leaders. These actors comprise the audience for the series of mitigation proposals presented in this paper because they build, license, distribute, or are tasked with regulating or legislating algorithmic decision-making to reduce discriminatory intent or effects.
The next section provides five examples of algorithms to explain the causes and sources of their biases. Later in the paper, we discuss the trade-offs between fairness and accuracy in the mitigation of algorithmic bias, followed by a robust offering of self-regulatory best practices, public policy recommendations, and consumer-driven strategies for addressing online biases. We conclude by highlighting the importance of proactively tackling the responsible and ethical use of machine learning and other automated decision-making tools.
Yet, even with these governmental efforts, it is still surprisingly difficult to define and measure fairness.40 It will not always be possible to satisfy all notions of fairness at the same time, and companies and other operators of algorithms must be aware that there is no single metric of fairness that a software engineer can simply apply, especially when designing algorithms and determining the appropriate trade-offs between accuracy and fairness. Fairness is a human, not a mathematical, determination, grounded in shared ethical beliefs. Thus, algorithmic decisions that may have serious consequences for people will require human involvement.
For example, while the training data discrepancies in the COMPAS algorithm can be corrected, human interpretation of fairness still matters. For that reason, while an algorithm such as COMPAS may be a useful tool, it cannot substitute for the decision-making that lies within the discretion of the human arbiter.41 We believe that subjecting the algorithm to rigorous testing against the different definitions of fairness is a useful exercise for companies and other operators of algorithms.
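One concrete way to run that exercise is to compute several fairness definitions on the same set of decisions and observe that they need not agree. The sketch below uses made-up predictions and hypothetical groups "A" and "B" to compare the selection rate (demographic parity) with the false positive rate (one component of equalized odds).

```python
# Hypothetical decisions for two groups, used to compare fairness definitions.
import numpy as np

y_true = np.array([1, 1, 1, 0, 0, 0,  1, 0, 0, 0, 0, 0])   # actual outcomes
y_pred = np.array([1, 1, 0, 1, 0, 0,  1, 1, 1, 0, 0, 0])   # model decisions
group  = np.array(["A"] * 6 + ["B"] * 6)                    # protected attribute

def selection_rate(pred):
    return pred.mean()                     # share receiving the favorable decision

def false_positive_rate(true, pred):
    negatives = true == 0
    return pred[negatives].mean()          # share of true negatives wrongly selected

for g in ("A", "B"):
    m = group == g
    print(g,
          "selection rate:", round(selection_rate(y_pred[m]), 2),
          "FPR:", round(false_positive_rate(y_true[m], y_pred[m]), 2))

# Here the selection rates match (demographic parity holds) while the false
# positive rates do not; when base rates differ across groups, the two
# criteria generally cannot be satisfied at the same time.
```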
To determine which automated decisions require such vetting, operators of algorithms should start by asking whether the algorithm could produce a negative or unintended outcome, for whom, and how severe the consequences would be for members of the affected group if the outcome is not detected and mitigated. Reviewing established legal protections around fair housing, employment, credit, criminal justice, and health care should serve as a starting point for determining which decisions need to be viewed with special caution in designing and testing any algorithm used to predict outcomes or make important eligibility decisions about access to a benefit. This is particularly true given the legal prescriptions against using data that has a likelihood of disparate impact on a protected class or of causing other established harms. Thus, when determining which decisions should be automated and how to automate them with minimal risk, we suggest that operators continually question the potential legal, social, and economic effects, and the potential liabilities, associated with that choice.
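As one illustration of the kind of vetting these legal protections suggest, the sketch below applies the "four-fifths" rule of thumb from U.S. employment-selection guidance to flag possible disparate impact. The funnel counts and group labels are hypothetical, and a real review would involve counsel and statistical testing rather than this ratio alone.

```python
# Simplified disparate-impact screen using the "four-fifths" rule of thumb.
# Counts and group names are hypothetical; this is not legal analysis.
def adverse_impact_ratios(selected_by_group, total_by_group):
    """Each group's selection rate divided by the highest group's selection rate."""
    rates = {g: selected_by_group[g] / total_by_group[g] for g in total_by_group}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

ratios = adverse_impact_ratios(
    selected_by_group={"group_a": 48, "group_b": 27},
    total_by_group={"group_a": 100, "group_b": 90},
)
for g, r in ratios.items():
    note = "review for possible disparate impact" if r < 0.8 else "within rule of thumb"
    print(f"{g}: impact ratio {r:.2f} -> {note}")
```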
The bias impact statement should not be treated as an exhaustive tool. For algorithms with more at stake, ongoing review of their execution should be factored into the process. The goal here is to monitor for disparate impacts resulting from the model that border on unethical, unfair, and unjust decision-making. Once the purpose of the algorithm has been identified and its likely effects forecast, a robust feedback loop will aid in the detection of bias, which leads to the next recommendation: promoting regular audits.
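That ongoing review can be partly automated. The sketch below compares group-level favorable-outcome rates across two consecutive scoring windows and raises an alert when the gap widens beyond a chosen tolerance; the record format, window contents, and 0.05 threshold are all assumptions for illustration.

```python
# Illustrative ongoing-review check: flag when the gap in favorable-outcome
# rates between groups widens from one scoring window to the next.
from collections import defaultdict

def group_rates(records):
    """records: iterable of (group, decision) pairs, with decision 1 = favorable."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        favorable[group] += decision
    return {g: favorable[g] / totals[g] for g in totals}

def audit(previous_window, current_window, tolerance=0.05):
    prev, curr = group_rates(previous_window), group_rates(current_window)
    prev_gap = max(prev.values()) - min(prev.values())
    curr_gap = max(curr.values()) - min(curr.values())
    if curr_gap - prev_gap > tolerance:
        print(f"ALERT: group outcome gap grew from {prev_gap:.2f} to {curr_gap:.2f}")
    else:
        print(f"Gap stable: {prev_gap:.2f} -> {curr_gap:.2f}")

# Hypothetical decision logs from two consecutive review periods.
audit(previous_window=[("A", 1), ("A", 0), ("B", 1), ("B", 0)],
      current_window=[("A", 1), ("A", 1), ("B", 1), ("B", 0)])
```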
The subjects of automated decisions deserve to know when bias negatively affects them and how to respond when it occurs. Feedback from users can help identify and anticipate areas where bias can manifest in existing and future algorithms. Over time, the creators of algorithms may actively solicit feedback from a wide range of data subjects and then take steps to educate the public on how algorithms work to aid in this effort. Public agencies that regulate bias can also work to raise algorithmic literacy as part of their missions. In both the public and private sectors, those who stand to lose the most from biased decision-making can also play an active role in spotting it.
Some decisions will be best served by algorithms and other AI tools, while others may need thoughtful consideration before computer models are designed. Further, testing and reviewing certain algorithms will also identify and, at best, mitigate discriminatory outcomes. For operators of algorithms seeking to reduce the risk and complications of bad outcomes for consumers, the promotion and use of these mitigation proposals can create a pathway toward algorithmic fairness, even if equity is never fully realized.