The man who has confidence in AI – but also reservations
12 December 2022
What happens if you are discriminated against by artificial intelligence (AI)? In the future, algorithms will take more and more decisions – for better and for worse. Magnus Strand's research examines how risks linked to this new technology can be managed and how we can ensure clarity around who is responsible if things go wrong.
In the financial world, the use of algorithms has become increasingly common. The majority of securities trading, for example, is no longer carried out by people buying and selling: fully 80 percent of all stock transactions on the exchange are executed algorithmically. When you apply for a loan, a computer calculates your creditworthiness, and if you want advice from the bank on how to invest your money, you can get it from a robot.
“There are both pros and cons to this trend. It becomes more difficult to argue against advice or a decision given by a robot. The fact that an algorithm is issuing the advice is often also invisible to the customer,” explains Strand, a lawyer and senior lecturer at the Department of Business Studies.
He is currently working on several research projects examining how this new trend in the financial world can be approached. Aspects include identifying and managing risks associated with the increased use of AI solutions. What happens when things go wrong?
Risk of discrimination and drops in share prices
Giving AI too much responsibility is not entirely without risk. There are several examples of algorithms that trade in securities having misunderstood something in the market, resulting in stock prices collapsing. Some day traders have also learned to manipulate the algorithms that carry out stock trading. Is it then safe to let AI manage a person’s stock portfolio? And whose fault is it if it decreases in value?
Another risk is that people could be discriminated against when applying for loans – deemed less creditworthy, for example, because the AI takes certain factors in their background into account.
“What happens if AI identifies that you’ve been ill for an extended period? Illness is not covered by discrimination legislation.”
Pursuing a legal process in such cases is further complicated by how difficult it is to explain the reasoning behind an AI system. Proving why someone acted in a certain way is often central to a trial, but this is not as straightforward when the actor is an AI.
Strand recounts court cases in which companies using AI have been ordered to hand over their programming code as "evidence" – something the vast majority of people find impossible to interpret.
“The EU is now looking at how to simplify the evidence issue in order to circumvent this difficulty.”
Legislation places demands on human decisions
More new laws are being discussed within the EU. Among other things, lawmakers want to require companies that use chatbots to make this clearer to the user. There are also legislative proposals aimed at regulating which decisions AI is permitted to take.
“Take the case of credit ratings, for example. In that instance, people should be able to say that they want an assessment to be made by a human. The algorithm should not be able to take a decision alone.”
In Strand’s view, it is important for the EU to be at the forefront of technological developments. At the same time, a priority for the bloc is to ensure that the AI technology being developed fosters people's trust and avoids excessive risk.
AI may require new supervisory bodies
Strand's upcoming research also involves some philosophical elements. For example, he will examine what laws and social institutions might be needed in the long term, in a future where AI becomes an increasingly integrated part of our everyday life.
“My approach is to work out how we create political and legal frameworks that are fit for an uncertain future.”
To make the question more specific, he takes the control mechanisms that have developed around our use of cars as an example. We have to get a driving license, have the car inspected annually and make sure it is insured – there are thus a number of social institutions in place to minimise the risks of our use of the roads.
“Will there be similar structures controlling our various AI systems in the future? Through this research, I hope to help create a picture of the risks that exist and show how the issue of responsibility should be handled in relation to the new technology.”
Humanities and Social Sciences research into artificial intelligence
Magnus Strand's research project is funded by WASP-HS, the Wallenberg Foundation's initiative to support AI research in the humanities and social sciences.