Difficult decisions made by algorithms

In 2020, due to COVID-19 restrictions on students taking exams in person, the UK’s school examining regulator Ofqual used an algorithm to determine their final year marks.

And the students didn’t like it.

British students protested that the algorithm used by Ofqual was unfair and discriminatory. Image: Getty Images

Following protests and concerns about socio-economic discrimination, the algorithmic grades were withdrawn in favor of teacher-assessed grades. One of the main criticisms of the algorithmic scoring system was that there was no process available to students to appeal their grades.

And this is not an isolated incident or a new problem.

In 2014, seven teachers and the Houston Federation of Teachers successfully argued that the use of an algorithmic performance measurement system to terminate their teaching contracts violated their constitutional right to due process. They argued that they were unable to “meaningfully challenge” their dismissal “due to a lack of sufficient information”.

The company that created the algorithmic system claimed that the equations, source code, decision rules and assumptions it used were all proprietary trade secrets and, as such, could not be provided to teachers.

This left teachers without a clear understanding of what factors the system took into account and how their performance scores were actually calculated.

There are many other challenges associated with algorithms besides their opacity. For example, it is often unclear what can actually be contested.

Should people be able to challenge the data used to make the decision? If the algorithm is following the process it was programmed to follow, on what grounds can the decision be challenged? Or should the very use of the algorithm in the first place be open to challenge?

What can actually be contested when it comes to algorithmic decisions is often not clear. Image: Getty Images

Numerous guidelines and principles have been developed to respond to the use of artificial intelligence in recent years. Many of them mention the possibility of contesting, appealing or challenging algorithmic decisions, but they offer limited guidance as to the type of process to be expected.

Guidance on the European Union’s General Data Protection Regulation suggests that contesting a decision requires an internal post-decision review.

In human-machine interaction, contestability is seen as a more interactive process – one where those affected by a decision can interact with the decision-making system to shape how the decision is made.

Given these different approaches to contestability, our team wanted to better understand what stakeholders – including the public, business and government – expect when it comes to the ability to contest algorithmic decisions.

Our research analyzed submissions made in response to a discussion paper released by the Australian government in 2019 – Artificial Intelligence: Australia’s Ethics Framework.

This is the first such framework to specifically include ‘contestability’ as a principle, defined as: “When an algorithm impacts a person there must be an efficient process to allow that person to challenge the use or output of the algorithm.”

In our analysis of the submissions, the inclusion of “contestability” as a principle in its own right was generally supported, although some thought it was better viewed as an aspect of a higher-order principle such as “fairness” or “accountability”.

Human decision making is very different from the way algorithmic decision making works. Image: Getty Images

While contestability was seen as a form of protection, many questioned its usefulness, given that it is currently unenforceable.

It was also recognized that different people affected by algorithmic decisions would have different abilities and skills to contest them. This means that any contestation process should be made as clear and accessible as possible, and should not be the only tool used to regulate algorithmic decision-making.

Many submissions called for more clarity and direction from the government on a number of important policy issues. For example, who can challenge a decision? What can be contested? How should a review process work?

There is also the issue of companies reviewing their own decisions. Margot Kaminski, associate professor at the University of Colorado Law School, notes that a lack of guidance on contestability could put those affected at a disadvantage:

“This raises the question of whether a company whose interests do not always match those of its users will be able to deliver an adequate process and fair results. There is room for much more policy making to flesh out this right to challenge,” said Associate Professor Kaminski.

Many submissions described processes that resemble those currently used for the review of human decisions. However, human decision making is very different from the way algorithmic decision making works.

It is therefore important to determine whether existing processes designed to check for human bias and error will be adequate for examining algorithmic decision making.

We need to think carefully about how to design systems that support the ability to challenge decisions. Image: Getty Images

A number of submissions also highlighted the need for a human being to review the decision. But that raises concerns about the scalability of human review – it may simply be too much work for a team of people.

Instead of relying solely on post-hoc decision review processes, it is useful to create algorithmic decision-making systems that build contestability in from the design stage.

One approach – “contestability by design”, proposed by European researcher Marco Almada – emphasizes the value of participatory design, where the people most likely to be affected by a decision-making system are involved in designing the system itself.

This type of process would help highlight system issues and potentially reduce the need for future challenges.

Having the ability to interact with a system, verify the information it has taken into account, make corrections if necessary, or file disputes could help people understand how a system works and exercise some control over the outcome – which could also reduce the need for post-hoc contestation processes.

Ultimately, algorithmic decision making is very different from human decision making. We need to carefully consider how to design systems that not only support the ability to challenge, but also reduce the need for anyone to challenge a decision in the first place.

Banner: Getty Images

