AI expert Meredith Broussard: ‘Racism, sexism and ableism are systemic problems’

27 Mar 2023 | The Guardian

Meredith Broussard is a data journalist and academic whose research focuses on bias in artificial intelligence (AI). She has been in the vanguard of raising awareness and sounding the alarm about unchecked AI. Her previous book, Artificial Unintelligence (2018), coined the term "technochauvinism" to describe the blind belief in the superiority of tech solutions to our problems. She appeared in the Netflix documentary Coded Bias (2020), which explores how algorithms encode and propagate discrimination. Her new book is More Than a Glitch: Confronting Race, Gender and Ability Bias in Tech. Broussard is an associate professor at New York University's Arthur L Carter Journalism Institute.

The message that bias can be embedded in our technological systems isn't really new. Why do we need this book?

This book is about helping people understand the very real social harms that can be embedded in technology. We have had an explosion of wonderful journalism and scholarship about algorithmic bias and the harms that have been experienced by people. I try to lift up that reporting and thinking. I also want people to know that we have methods now for measuring bias in algorithmic systems. They are not entirely unknowable black boxes: algorithmic auditing exists and can be done.

Why is the problem "more than a glitch"? If algorithms can be racist and sexist because they are trained using biased datasets that don't represent all people, isn't the answer just more representative data?

A glitch suggests something temporary that can be easily fixed. I'm arguing that racism, sexism and ableism are systemic problems that are baked into our technological systems because they're baked into society. It would be great if the fix were more data. But more data won't fix our technological systems if the underlying problem is society. Take mortgage approval algorithms, which have been found to be 40-80% more likely to deny borrowers of colour than their white counterparts. The reason is the algorithms were trained using data on who had received mortgages in the past and, in the US, there's a long history of discrimination in lending. We can't fix the algorithms by feeding better data in because there isn't better data.
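The mechanism Broussard describes can be made concrete with a minimal sketch. Everything below is synthetic and purely illustrative (the groups, income figures and penalty are invented, not drawn from the lending studies she cites); it shows how a model that never sees race can still reproduce historical discrimination, because both the feature and the labels it learns from were shaped by it:

```python
# Synthetic sketch: a "race-blind" model trained on historically
# biased lending decisions still denies one group more often.
# Groups, incomes and penalties are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)                  # two hypothetical groups
# Historical discrimination depressed wealth for group 1, so even a
# "neutral" feature like income is correlated with group membership.
income = rng.normal(60 - 10 * group, 15, n)
# Past approvals depended on income plus an explicit penalty on group 1.
logit = (income - 55) / 5 - 1.0 * group
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train on the historical record, using income only -- no group feature.
model = LogisticRegression().fit(income.reshape(-1, 1), approved)
denied = ~model.predict(income.reshape(-1, 1))

for g in (0, 1):
    print(f"group {g}: denial rate {denied[group == g].mean():.1%}")
# The model never sees `group`, yet group 1 is denied far more often:
# the bias arrives through the data, so "better data" doesn't exist.
```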

You argue we should be choosier about the tech we allow into our lives and our society. Should we just reject any AI-based technology that encodes bias at all?

AI is in all our technologies nowadays. But we can demand that our technologies work well - for everybody - and we can make some deliberate choices about whether to use them.

I'm enthusiastic about the distinction in the proposed European Union AI Act that divides uses into high and low risk based on context. A low-risk use of facial recognition might be using it to unlock your phone: the stakes are low - you have a passcode if it doesn't work. But facial recognition in policing would be a high-risk use that needs to be regulated or - better still - not deployed at all because it leads to wrongful arrests and isn't very effective. It isn't the end of the world if you don't use a computer for a thing. You can't assume that a technological system is good because it exists.

There is enthusiasm for using AI to help diagnose disease. But racial bias is also being baked in, including from unrepresentative datasets (for example, skin cancer AIs will probably work far better on lighter skin because that is mostly what is in the training data). Should we try to put in "acceptable thresholds" for bias in medical algorithms, as some have suggested?

I don't think the world is ready to have that conversation. We're still at a level of needing to increase awareness of racism in medicine. We need to take a step back and fix a few things about society before we start freezing it in algorithms. Formalised in code, a racist decision becomes difficult to see or eradicate.

You were diagnosed with breast cancer and underwent successful treatment. After your diagnosis, you experimented with running your own mammograms through an open-source cancer-detection AI and you found that it did indeed pick up your breast cancer. It worked! So great news?

It was pretty neat to see the AI draw a red box around the area of the scan where my tumour was. But I learned from this experiment that diagnostic AI is a much blunter instrument than I imagined, and there are complicated trade-offs. For example, the developers must make a choice about accuracy rates: more false positives or false negatives? They favour the former because it's considered worse to miss something, but that also means if you do have a false positive you go into the diagnosis pipeline, which could mean weeks of panicking and invasive testing. A lot of people imagine a sleek AI future where machines replace doctors. This does not sound enticing to me.
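The trade-off Broussard describes comes down to a single decision threshold. A small sketch makes it visible: assuming a detector that outputs a score for each scan (all numbers below are synthetic and purely illustrative), moving the threshold can only exchange one kind of error for the other:

```python
# Synthetic sketch of the accuracy trade-off: one score distribution
# for healthy scans, one for scans with tumours, and a threshold that
# can only exchange false negatives for false positives.
import numpy as np

rng = np.random.default_rng(1)
healthy = rng.normal(0.30, 0.12, 9_900)   # model scores, no tumour
sick = rng.normal(0.62, 0.12, 100)        # model scores, tumour present

for threshold in (0.40, 0.50, 0.60):
    false_pos = (healthy >= threshold).mean()  # healthy people flagged
    false_neg = (sick < threshold).mean()      # cancers missed
    print(f"threshold {threshold:.2f}: "
          f"{false_pos:.1%} false positives, {false_neg:.1%} false negatives")
# Lowering the threshold misses fewer cancers but sends many more
# healthy people into weeks of follow-up testing -- and vice versa.
```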

Any hope we can improve our algorithms?

I am optimistic about the potential of algorithmic auditing - the process of looking at the inputs, outputs and the code of an algorithm to evaluate it for bias. I have done some work on this. The aim is to focus on algorithms as they are used in specific contexts and address concerns from all stakeholders, including members of an affected community.
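As a rough illustration of the output side of such an audit, here is a minimal sketch. The decision log is a hypothetical stand-in, and the two gap metrics (demographic parity difference and the disparate impact ratio, with the US "four-fifths" rule of thumb) are standard fairness measures rather than Broussard's own method:

```python
# Minimal output-side audit: compare favourable-outcome rates across
# groups from a system's decision log, without opening the model.
# The log below is a hypothetical stand-in for illustration.
import numpy as np

def audit(decisions: np.ndarray, groups: np.ndarray) -> None:
    """Print per-group favourable rates and two standard gap metrics."""
    rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
    for g, rate in rates.items():
        print(f"group {g}: favourable rate {rate:.1%}")
    hi, lo = max(rates.values()), min(rates.values())
    print(f"demographic parity difference: {hi - lo:.1%}")
    # The US "four-fifths" rule of thumb flags ratios below 0.80.
    print(f"disparate impact ratio: {lo / hi:.2f}")

decisions = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0])  # 1 = approved
groups = np.array(["a"] * 6 + ["b"] * 6)
audit(decisions, groups)
```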

AI chatbots are all the rage. But the tech is also rife with bias. Guardrails added to OpenAI's ChatGPT have been easy to get around. Where did we go wrong?

Though more needs to be done, I appreciate the guardrails. This has not been the case in the past, so it is progress. But we also need to stop being surprised when AI screws up in very predictable ways. The problems we are seeing with ChatGPT were anticipated and written about by AI ethics researchers, including Timnit Gebru [who was forced out of Google in late 2020]. We need to recognise this technology is not magic. It's assembled by people, it has problems and it falls apart.

OpenAI's co-founder Sam Altman recently promoted AI doctors as a way of solving the healthcare crisis. He appeared to suggest a two-tier healthcare system - one for the wealthy, where they enjoy consultations with human doctors, and one for the rest of us, where we see an AI. Is this the way things are going and are you worried?

AI in medicine doesn't work particularly well, so if a very wealthy person says: "Hey, you can have AI to do your healthcare and we'll keep the doctors for ourselves," that seems to me to be a problem and not something that is leading us towards a better world. Also, these algorithms are coming for everybody, so we might as well address the problems.
