By now, most people in data science and AI have probably heard the name Timnit Gebru, former co-lead of Google’s AI ethics team. The story of how she lost her job at Google has brought to the fore questions about how to pursue ethics in AI, particularly when that pursuit runs counter to the business interests of Big Tech companies, which wield massive money and influence in the field of AI.
The magazine Fast Company ran a fascinating story on exactly this topic [1], and the piece is worth reading in full. It is becoming more and more widely known that AI algorithms are affecting people’s lives in real ways:
From credit scoring and criminal sentencing to healthcare access and even whether you get a job interview or not, AI algorithms are making life-altering decisions with no oversight or transparency.
While Big Tech companies like Google have AI ethics teams, there is an obvious conflict of interest in having them police themselves. On top of this:
A 2020 study found that at four top universities, more than half of AI ethics researchers whose funding sources are known have accepted money from a tech giant. One of the largest pools of money dedicated to AI ethics is a joint grant funded by the National Science Foundation and Amazon, presenting a classic conflict of interest.
So even “independent” academic and non-profit AI ethics research is often sponsored by Big Tech companies, leading to the current situation where:
Big Tech’s influence over AI ethics is near total.
“What Is Fair, Anyway?”
The Fast Company piece clearly lays out the control that Big Tech has over AI ethics. But before we discuss how to tackle this issue, how do we even define ethics and fairness?
Broadly, fairness means treating different individuals equally given the same set of circumstances. [2]
However, the same article shows how trying to train a model to account for even this definition can get complicated quickly. [2]
Further complicating matters is that, in reality, there are many different definitions of fairness and “different ways of defining fairness may either contradict each other or cannot be simultaneously true.” [2]
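To make the conflict concrete, here is a minimal sketch in pure Python, using entirely hypothetical toy data, contrasting two widely used definitions: demographic parity (equal rates of positive predictions across groups) and equal opportunity (equal true positive rates across groups).

```python
# Minimal sketch with hypothetical toy data: each record is
# (group, true_label, predicted_label), and the classifier happens
# to predict perfectly in both groups.
data = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),  # base rate 0.50
    ("B", 1, 1), ("B", 0, 0), ("B", 0, 0), ("B", 0, 0),  # base rate 0.25
]

def positive_rate(group):
    # Demographic parity compares P(prediction = 1) across groups.
    preds = [p for g, _, p in data if g == group]
    return sum(preds) / len(preds)

def true_positive_rate(group):
    # Equal opportunity compares P(prediction = 1 | label = 1) across groups.
    preds = [p for g, y, p in data if g == group and y == 1]
    return sum(preds) / len(preds)

for group in ("A", "B"):
    print(f"group {group}: positive rate = {positive_rate(group):.2f}, "
          f"TPR = {true_positive_rate(group):.2f}")
```

Both groups end up with a true positive rate of 1.00, so equal opportunity holds, yet the positive prediction rates are 0.50 versus 0.25, so demographic parity fails. When base rates differ between groups, even a perfect classifier cannot satisfy both definitions at once.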
Therefore, we must recognize that:
Ethics is neither a checklist nor something you can write on a wall, hoping that others will follow. You cannot command it into existence, but you can facilitate, measure, and incentivize the conversations that need to take place. [2]
What About “Bias Audits”?
Assuming we can agree on some guidelines for fairness and ethics, new startups are emerging that offer “algorithmic audits” to check AI algorithms for bias and other issues.
One issue with this approach is that “to date, there is no clear definition of ‘algorithmic audit’.” As a result, audits could easily be abused and might even “legitimize technologies that shouldn’t even exist because they are based on dangerous pseudoscience.” [3]
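To give a sense of what such an audit might actually compute, here is a minimal sketch of one common check, the “four-fifths rule” for disparate impact; the data, group names, and decisions below are hypothetical, and a real audit would go far beyond a single metric.

```python
# Minimal sketch of a disparate impact check (the "four-fifths rule").
# All decisions and group names below are hypothetical.

def disparate_impact_ratio(decisions_by_group):
    """Ratio of the lowest group selection rate to the highest."""
    rates = {
        group: sum(decisions) / len(decisions)
        for group, decisions in decisions_by_group.items()
    }
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical hiring decisions (1 = interview offered, 0 = rejected).
decisions = {
    "group_x": [1, 1, 1, 0, 1, 1, 0, 1],  # selection rate 0.75
    "group_y": [1, 0, 0, 1, 0, 0, 0, 0],  # selection rate 0.25
}

ratio, rates = disparate_impact_ratio(decisions)
print(f"selection rates: {rates}")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the conventional four-fifths threshold
    print("Flag for review: ratio falls below the 0.8 threshold.")
```

A check like this is trivial to automate, which is exactly why clear definitions matter: passing one narrow metric says little about whether a system is fair overall, or whether it should exist at all.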
There are several proposed solutions for making algorithmic auditing a meaningful tool for tackling AI ethics. First, ensure transparency: affected individuals should know when algorithms are being used to make decisions about them. Second, there need to be clear definitions of and guidelines for algorithmic auditing. Finally, people should have some way to contest the decisions these algorithms make. That way, ethical issues can be addressed more quickly, and we can limit how much of our critical decision-making is handed over to algorithms.
The Future of AI Ethics
In a Medium article from a few months ago, I argued that to make progress in AI ethics, we need to train data scientists, from the beginning, in how to develop ethical models. That article focused on the technical side. But what happens if a company doesn’t give its data scientists the freedom to develop such ethical models, and instead focuses only on performance and future profit? The Fast Company piece mentions the idea of training tech workers to be whistleblowers and providing them with the resources they need to speak out publicly on these issues.
In my view, just having these conversations more openly and more widely is a big step in the right direction. It is unfortunate that Timnit Gebru is no longer part of the Google AI ethics team, but her departure has certainly helped propel this conversation, not just among AI researchers and other tech workers, but also out into the general public.
While it is important for individual tech workers to have the tools to work on and promote ethical algorithms from within Big Tech companies, there also needs to be outside pressure from the general public. Growing pressure from both inside and outside Big Tech will help push through the changes needed to make AI algorithms more ethical and accountable.
Python Corner
If you are a Python user, whether a beginner or more experienced, check out this GitHub repository, which collects lots of interesting Python code snippets and explanations.
[1] https://www.fastcompany.com/90608471/timnit-gebru-google-ai-ethics-equitable-tech-movement
[2] https://opendatascience.com/building-an-ethical-data-science-practice
[3] https://onezero.medium.com/the-algorithmic-auditing-trap-9a6f2d4d461d