I was at an event the other day. Three AI talks followed by a panel discussion with the speakers.
All three wore headsets with the mic at the end of an arm you can swing up and down. One of the speakers/panelists had long hair. Every time they turned their head to look at a fellow panelist, their hair bumped into the microphone so it swung upwards. Every time, they had to adjust the mic back down, next to their mouth. This was quite funny once I noticed it, but also: why did this happen?
- a) the headset was broken/worn out
- b) the headset was designed and tested exclusively by people with short hair
For the sake of argument, I assume it's b)[^1], turning the anecdote into a great example of a team having blind spots in their knowledge. The headset designers simply didn't think of what might happen if you have long hair and it bumps into the mic. If they had had at least one person with long hair on the team, they would've discovered the issue.
This has hugely important implications for AI.
Many of the questions to the panel had to do with ethics: transparency, trust, explainability, correctness, how to avoid discrimination and bias. My impression was that the panelists have good intentions, as many answers brushed up against diversity, but it was never explicitly named. What concerned me was the leaning towards purely technical approaches.
One of the answers to the question of how to avoid biased datasets and discriminatory results/predictions was "that's the job of the data scientist". But even if you're the best data scientist in the world, your particular life experience and limited knowledge mean there will be aspects you miss. You will have blind spots.
To decrease the unknown number of such blind spots, you have to increase diversity. It's important to understand that diversity is not just about race, creed, gender, socioeconomic factors and other common diversity metrics. Those are obviously important, but diversity is also about bringing in perspectives beyond computer science and maths, like those of historians or sociologists, ideally as integral parts of the team. Because bias and oppression are societal problems, and AI systems use data from and have consequences in society.
If you are not aware of this, you risk ending up with an AI division made up of and led by people who think everything is a technical problem. Facebook/Meta's AI division is the typical example here: it is led by Yann LeCun, who quit Twitter in 2020 over failing to see the bigger picture. More recently, Facebook/Meta released and then pulled an AI system advertised as being able to generate scientific papers and wiki articles, with LeCun commenting that "it's no longer possible to have some fun". The system was advertised as competent, not fun. In other words, LeCun continues to be unaware of the responsibility that he and others at his company have to be mindful of the consequences of AI systems.
AI already impacts society and will have even deeper impact in the future. If you are serious about using these systems to make society better - for everyone - then please take diversity to heart.
You can do so by, for example, reading my Planet Ghibli series. But don’t take my word for it. If you’re in the AI field and haven’t watched Ruha Benjamin’s ICLR 2020 keynote you should do yourself, your customers and society a service: watch it.
---
[^1]: I realise the irony of basing my discussion on a biased assumption, but please bear with me :)