  The most prestigious law school admissions discussion board in the world.

🦴💻 AI can identify race from x-rays 💻🦴

https://twitter.com/DrLukeOR/status/1422199954668363776 #...
Bright persian
  08/03/21
...
Puce Charismatic Rigpig
  08/03/21
They wanted to believe the science, but not like this. Not b...
Aqua business firm
  08/03/21
...
Puce Charismatic Rigpig
  08/03/21
...
bateful fishy bbw
  08/03/21
...
galvanic lettuce
  08/03/21
https://www.txstate.edu/philosophy/resources/fallacy-definit...
Bright persian
  08/03/21
*retweeted*
Gaped library
  08/03/21
Lol this should be spammed after basically every lib tweet
claret skinny woman
  08/03/21
...
Puce Charismatic Rigpig
  08/03/21
It should be written underneath every lib tweet just like th...
excitant pocket flask
  08/03/21
...
Puce Charismatic Rigpig
  08/03/21
you want it to be one way, but it's the other way. --Marl...
startling center travel guidebook
  08/03/21
They don't think it be like it is, but it do.
Abnormal hell
  08/03/21
...
Hairraiser base
  08/03/21
...
Thriller Arrogant Address
  08/03/21
...
magical faggotry casino
  08/03/21
...
supple school cafeteria pisswyrm
  08/03/21
...
Fragrant Buck-toothed Fat Ankles
  08/03/21
...
cruel-hearted resort indirect expression
  08/03/21
...
Laughsome ebony tanning salon giraffe
  08/03/21
...
Slippery Free-loading Hall
  08/03/21
...
adventurous yellow kitty cat market
  08/03/21
...
opaque coffee pot dopamine
  08/04/21
and the AI only needed an xray of the penis area to confirm ...
amethyst sneaky criminal
  08/03/21
...
curious pozpig
  08/03/21
lol, "The computer is wrong" and of course he h...
Gaped library
  08/03/21
...
Puce Charismatic Rigpig
  08/03/21
...
Cracking Reading Party Box Office
  08/03/21
This guy is a whiney faggot
useless umber house
  08/03/21
...
Odious Black Hunting Ground Roommate
  08/03/21
Luke Oakden-Rayner PhD Candidate / Radiologist ABOUT ME ...
startling center travel guidebook
  08/03/21
dis article be rayciss becauz im blaqq and dont understand n...
Thriller Arrogant Address
  08/03/21
Strong poast/moniker synergy
big crackhouse
  08/03/21
Science is PAUSED until we can figure out what the hell is g...
supple school cafeteria pisswyrm
  08/03/21
...
Puce Charismatic Rigpig
  08/03/21
"Watch out, next the algo will see Caitlyn Jenner's x-r...
Bright persian
  08/03/21
It's amazing that systemic racism is even present in compute...
Abnormal hell
  08/03/21
Ljl
Puce Charismatic Rigpig
  08/03/21
Hi everyone. This thread has been swamped by racists. I'm pr...
Bright persian
  08/03/21
lmao
Light garrison
  08/03/21
...
Spectacular sick university
  08/03/21
http://www.autoadmit.com/thread.php?thread_id=4891791&mc...
Hairraiser base
  08/03/21
...
Puce Charismatic Rigpig
  08/03/21
Race is just a social construct that neatly matches your bio...
Abnormal hell
  08/03/21
...
Puce Charismatic Rigpig
  08/03/21
...
adventurous yellow kitty cat market
  08/03/21
How long until he is cancelled for calling racism a superpow...
harsh federal son of senegal
  08/03/21
All this just so Jews could join the country club
Submissive Topaz Lay
  08/03/21
LMAO at this planet, holy shit
Puce Charismatic Rigpig
  08/03/21
That's what's so crazy about all this globalist / blank slat...
Puce Charismatic Rigpig
  08/03/21
BUT BUT BUT I THOUGHT RACE WAS JUST SKIN COLOR
Brilliant comical rigor
  08/03/21
Ljl
Puce Charismatic Rigpig
  08/04/21
lol at the very first takeawy from this https://pbs.twimg...
Coral background story kitchen
  08/03/21
it's fucking dark-age medieval
harsh federal son of senegal
  08/03/21
Full-on superstition, allergic or even HOSTILE to empirical ...
Puce Charismatic Rigpig
  08/03/21
...
insane trip indian lodge
  08/03/21
...
harsh federal son of senegal
  08/03/21
How does that work in practice in, say, Saudi Arabia, Japan,...
big crackhouse
  08/03/21
Ljl...
Puce Charismatic Rigpig
  08/03/21
*blank stare* so anyway, medicine is racist
Coral background story kitchen
  08/03/21
...
Puce Charismatic Rigpig
  08/03/21
Why is this bad?
twisted dull halford site
  08/03/21
It's a pavlovian reflex, don't look for logic
harsh federal son of senegal
  08/03/21
Because rich white guys might get cutting-edge personalized ...
Bright persian
  08/03/21
I don't get it. So the computer learns the person is black a...
twisted dull halford site
  08/03/21
If you read the blog post, your question won't be answered.
Bright persian
  08/03/21
...
twisted dull halford site
  08/03/21
medicine is racist
Coral background story kitchen
  08/03/21
*beep boop beep*....*NIGGER!* *NIGGER! *NIGGER!*
multi-colored half-breed nursing home
  08/03/21
(racist
Coral background story kitchen
  08/03/21
libs devoutly believe the dogma that race is a mere social c...
startling center travel guidebook
  08/03/21
...
Puce Charismatic Rigpig
  08/03/21
Oh is this all some silly ontological argument? Like, the ve...
twisted dull halford site
  08/03/21
no, it's that ppl are racist, so if a computer can identify ...
Coral background story kitchen
  08/03/21
"However we want to frame this, the model has learned s...
Bright persian
  08/03/21
Third, I want to take as given that racial bias in medical A...
twisted dull halford site
  08/03/21
Richard ! AUGUST 4, 2021 AT 3:15 AM Interesting article....
twisted dull halford site
  08/03/21
https://www.researchsquare.com/article/rs-151985/v1 https...
twisted dull halford site
  08/04/21
If you think about it, basically what's going on is in the t...
twisted dull halford site
  08/04/21
It can also detect race from microscopic tissue samples. ...
soul-stirring anal ticket booth affirmative action
  08/03/21
Odd case!
Puce Charismatic Rigpig
  08/03/21
https://i.imgur.com/60atq7s.png what's this, AI?
harsh federal son of senegal
  08/03/21
You still don't understand what you're dealing with, do you?...
Puce Charismatic Rigpig
  08/03/21
...
Fragrant Buck-toothed Fat Ankles
  08/04/21
A few years ago I was super lib. My views haven't changed a...
Pink razzle genital piercing depressive
  08/03/21
...
adventurous yellow kitty cat market
  08/04/21
Race isn’t real Also libs: we need to divide everyone...
Cream impressive property hairy legs
  08/04/21
...
Abnormal hell
  08/04/21
...
Puce Charismatic Rigpig
  08/04/21
IN THIS HOUSE
Heady offensive azn
  08/04/21
Ljl...
Puce Charismatic Rigpig
  08/04/21





Reply Favorite

Date: August 3rd, 2021 2:57 AM
Author: Bright persian

https://twitter.com/DrLukeOR/status/1422199954668363776

#Medical #AI has the worst superpower... Racism

We've put out a preprint reporting concerning findings. AI can do something humans can't: recognise the self-reported race of patients on x-rays. This gives AI a path to produce health disparities.

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42887955)



Reply Favorite

Date: August 3rd, 2021 2:59 AM
Author: Puce Charismatic Rigpig



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42887956)



Reply Favorite

Date: August 3rd, 2021 3:02 AM
Author: Aqua business firm

They wanted to believe the science, but not like this. Not by them.

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42887965)



Reply Favorite

Date: August 3rd, 2021 3:03 AM
Author: Puce Charismatic Rigpig



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42887967)



Reply Favorite

Date: August 3rd, 2021 3:04 AM
Author: bateful fishy bbw



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42887971)



Reply Favorite

Date: August 3rd, 2021 3:08 AM
Author: galvanic lettuce



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42887978)



Reply Favorite

Date: August 3rd, 2021 3:42 AM
Author: Bright persian

https://www.txstate.edu/philosophy/resources/fallacy-definitions/Ought-Is.html

The ought-is fallacy occurs when you assume that the way you want things to be is the way they are. This is also called wishful thinking. Wishful thinking is believing what you want to be true no matter the evidence or without evidence at all, or assuming something is not true, because you do not want it to be so.

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42888011)



Reply Favorite

Date: August 3rd, 2021 4:03 AM
Author: Gaped library

*retweeted*

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42888019)



Reply Favorite

Date: August 3rd, 2021 4:19 AM
Author: claret skinny woman

Lol this should be spammed after basically every lib tweet

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42888037)



Reply Favorite

Date: August 3rd, 2021 5:41 AM
Author: Puce Charismatic Rigpig



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42888156)



Reply Favorite

Date: August 3rd, 2021 10:31 PM
Author: excitant pocket flask

It should be written underneath every lib tweet just like those 'covid/election misinformation' warnings.

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893008)



Reply Favorite

Date: August 3rd, 2021 10:46 PM
Author: Puce Charismatic Rigpig



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893091)



Reply Favorite

Date: August 3rd, 2021 8:17 AM
Author: startling center travel guidebook

you want it to be one way, but it's the other way.

--Marlo Stansfield

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42888437)



Reply Favorite

Date: August 3rd, 2021 7:14 PM
Author: Abnormal hell

They don't think it be like it is, but it do.

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892101)



Reply Favorite

Date: August 3rd, 2021 7:29 PM
Author: Hairraiser base



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892162)



Reply Favorite

Date: August 3rd, 2021 4:52 AM
Author: Thriller Arrogant Address



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42888050)



Reply Favorite

Date: August 3rd, 2021 8:21 AM
Author: magical faggotry casino



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42888441)



Reply Favorite

Date: August 3rd, 2021 8:55 AM
Author: supple school cafeteria pisswyrm



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42888531)



Reply Favorite

Date: August 3rd, 2021 9:19 AM
Author: Fragrant Buck-toothed Fat Ankles



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42888653)



Reply Favorite

Date: August 3rd, 2021 7:42 PM
Author: cruel-hearted resort indirect expression



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892206)



Reply Favorite

Date: August 3rd, 2021 10:18 PM
Author: Laughsome ebony tanning salon giraffe



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892921)



Reply Favorite

Date: August 3rd, 2021 10:31 PM
Author: Slippery Free-loading Hall



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893013)



Reply Favorite

Date: August 3rd, 2021 11:31 PM
Author: adventurous yellow kitty cat market



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893337)



Reply Favorite

Date: August 4th, 2021 11:08 AM
Author: opaque coffee pot dopamine



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42895024)



Reply Favorite

Date: August 3rd, 2021 3:03 AM
Author: amethyst sneaky criminal

and the AI only needed an xray of the penis area to confirm race with a 99% CI

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42887969)



Reply Favorite

Date: August 3rd, 2021 3:41 AM
Author: curious pozpig



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42888009)



Reply Favorite

Date: August 3rd, 2021 3:52 AM
Author: Gaped library

lol, "The computer is wrong"

and of course he has pronouns in his bio. the worst hot takes always do.

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42888017)



Reply Favorite

Date: August 3rd, 2021 4:10 AM
Author: Puce Charismatic Rigpig



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42888022)



Reply Favorite

Date: August 3rd, 2021 8:21 AM
Author: Cracking Reading Party Box Office



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42888442)



Reply Favorite

Date: August 3rd, 2021 7:42 AM
Author: useless umber house

This guy is a whiney faggot

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42888379)



Reply Favorite

Date: August 3rd, 2021 8:01 AM
Author: Odious Black Hunting Ground Roommate



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42888407)



Reply Favorite

Date: August 3rd, 2021 8:23 AM
Author: startling center travel guidebook

Luke Oakden-Rayner

PhD Candidate / Radiologist


AI has the worst superpower… medical racism.

AUGUST 2, 2021 ~ LUKEOAKDENRAYNER

Is this the darkest timeline? Are we the baddies?

Hi! It has been a while!

My blogging was extremely slow in 2020 and so far in 2021 for a few reasons, including having a new baby (with a lot of stay at home parenting 😴), starting a new job (I’m formally employed as a researcher at the Australian Institute for Machine Learning now) and embarking on supervising PhD students (🥳), as well as some other stuff that I will be talking about sometime soon.

But one thing that has taken up a lot of time recently has been a piece of work that I’m proud to have been a part of, and which has culminated in a preprint* titled Reading Race: AI Recognises Patient’s Racial Identity in Medical Images.

This paper was a huge undertaking, with a big team from all over the world. Big shout out to Dr Judy Gichoya for gathering everyone together and leading the group!

This work is also, in my opinion, a huge deal. This is research that should challenge the status quo, and hopefully change medical AI practice.

The paper is extensive, with dozens of experiments replicated at multiple sites around the world. It comes with a complete code repository so anyone can reproduce any part of it (although a minority of the datasets are not publicly available, and a few more require an approval process). It even comes with the public release of new racial identity labels for several public datasets, which were obtained with the collaboration of the dataset developers.

We want feedback and criticism, so I hope everyone will read the paper. I’m not going to cover it in detail here; instead, I wanted to write something else that I think will complement the paper: an explanation of why I and many of my co-authors think this issue is important.

One thing we noticed when we were working on this research was that there was a clear divide in our team. The more clinical and safety/bias related researchers were shocked, confused, and frankly horrified by the results we were getting. Some of the computer scientists and the more junior researchers on the other hand were surprised by our reaction. They didn’t really understand why we were concerned.

So in a way, this blog post can be considered a primer, a companion piece for the paper which explains the why. Sure, AI can detect a patient’s racial identity, but why does it matter?

Disclaimer: I’m white. I’m glad I got to contribute, and I am happy to write about this topic, but that does not mean I am somehow an authority on the lived experiences of minoritized racial groups. These are my opinions after discussion with my much more knowledgeable colleagues, several of whom have reviewed the blog post itself.

A brief summary

In extremely brief form, here is what the paper showed:

AI can trivially learn to identify the self-reported racial identity of patients to an absurdly high degree of accuracy

AI does learn to do this when trained for clinical tasks

These results generalise, with successful external validation and replication in multiple x-ray and CT datasets

Despite many attempts, we couldn’t work out what it learns or how it does it. It didn’t seem to rely on obvious confounders, nor did it rely on a limited anatomical region or portion of the image spectrum.

A small portion of the results from the paper

So that is a basic summary. For all the gory details see the paper. Now for the important part: so what?

An argument in four steps

I’m going to try to lay out, as clearly as possible, that this AI behaviour is both surprising, and a very bad thing if we care about patient safety, equity, and generalisability.

The argument will have the following parts:

Medical practice is biased in favour of the privileged classes in any society, and worldwide towards a specific type of white men.

AI can trivially learn to recognise features in medical imaging studies that are strongly correlated with racial identity. This provides a powerful and direct mechanism for models to incorporate the biases in medical practice into their decisions.

Humans cannot identify the racial identity of a patient from medical images. In medical imaging we don’t routinely have access to racial identity information, so human oversight of this problem is extremely limited at the clinical level.

The features the AI makes use of appear to occur across the entire image spectrum and are not regionally localised, which will severely limit our ability to stop AI systems from doing this.

There are several other things I should point out before we get stuck in. First of all, a definition. We are talking about racial identity, not genetic ancestry or any other biological process that might come to mind when you hear the word “race”. Racial identity is a social, legal, and political construct that consists of our own perceptions of our race, and how other people see us. In the context of this work, we rely on self-reported race as our indicator of racial identity.

Before you jump in with questions about this approach and the definition, a quick reminder on what we are trying to research. Bias in medical practice is almost never about genetics or biology. No patient has genetic ancestry testing as part of their emergency department workup. We are interested in factors that may bias doctors in how they decide to investigate and treat patients, and in that setting the only information they get is visual (i.e., skin tone, facial features etc.) and sociocultural (clothing, accent and language use, and so on). What we care about is race as a social construct, even if some elements of that construct (such as skin tone) have a biological basis.

Secondly, whenever I am using the term bias in this piece, I am referring to the social definition, which is a subset of the strict technical definition; it is the biases that impact decisions made about humans on the basis of their race. These biases can in turn produce health disparities, which the NIH defines as “a health difference that adversely affects disadvantaged populations”.

Third, I want to take as given that racial bias in medical AI is bad. I feel like this shouldn’t need to be said, but the ability of AI to homogenise, institutionalise, and algorithm-wash health disparities across regions and populations is not a neutral thing.

AI can seriously make things much, much worse.

Part I – Medicine is biased

There has been a lot of discussion around structural racism in medicine recently, especially as the COVID pandemic has highlighted ongoing disparities in US healthcare. The existence of structural racism is no surprise to anyone who studies the topic, nor to anyone affected by it, but apparently it is still a surprise to some of the most powerful people in medicine.

The tweet and podcast that led to the resignation of the editor-in-chief of one of the biggest medical journals in the world. Look at that ratio!

Sometimes medical bias arises because different patient groups need different clinical approaches, but we don’t know that, because the evidence that supports clinical practice is itself biased; most clinical trial populations are heavily skewed towards white men. In fact, in the 1990s the US Congress had to step in to demand that trials include women and racial/ethnic minorities**. In general, trial populations don’t include enough women, people of colour, people from outside the US or Europe, people who are poor, and so on***.

The geographic origin and race/ethnicity of participants of clinical trials, 1997 to 2014.

There is a long and storied history of the effects this has had. Examples include medicines for morning sickness that cause birth deformities but weren’t tested on pregnant women, imaging measurements that tend to overestimate the risk of Down’s syndrome in Asian foetuses, and genetic tests that fail for people of colour.

But medicine is biased at the clinical level too, where healthcare workers seem to make different decisions for patients from different groups when the correct choice would be to treat them the same. A famous example was reported in the New England Journal of Medicine, where blind testing of all pregnant women in Pinellas County, FL for drug use revealed similar rates of use among Black and white women (~14%), yet over the same period Black women were 10 times more likely to be reported to the authorities for substance abuse during pregnancy. The healthcare workers were, consciously or unconsciously, choosing whom to test and report. This was true even for private obstetrics patients:

In the private obstetricians’ offices, black women made up less than 10 percent of the patient population but 55 percent of those reported for substance abuse during pregnancy.

Chasnoff et al, NEJM, 1990

There are innumerable other examples that can be described for any minoritised group. Women, people of colour, and Hispanic white men are less likely to receive adequate pain relief than non-Hispanic white men. Transgender patients commonly report being flat-out denied care, or receiving extremely substandard treatment. Black newborns are substantially more likely to survive if they are treated by a Black doctor.

In medical imaging we like to think of ourselves as above this problem, particularly with respect to race, because we usually don’t know the identity of our patients. We report the scans without ever seeing the person, but that only protects us from direct bias. Biases still affect who gets referred for scans and who doesn’t, and they affect which scans are ordered. Bias affects what gets written on the request forms, which in turn influences our diagnoses. And let’s not pretend we aren’t influenced by patient names and scan appearances too. If we see a single earring or nipple piercing on a man, we are trained to think about HIV-related diseases, because they are probably gay (are they though?) and therefore at risk (PrEP is a thing now!). In fact, around 20% of radiologists admit being influenced by patient demographic factors.

But it is true that, in general, we read the scan as it comes. The scan can’t tell us what colour a person’s skin is.

Can it?

Part II – AI can detect racial identity in x-rays and CT scans

I’ve already included some results up in the summary section, and there are more in the paper, but I’ll very briefly touch on my interpretation of them here.

Firstly, the performance of these models ranges from high to absurd. An AUC of 0.99 for recognising the self-reported race of a patient, which has no recognised medical imaging correlate? This is flat out nonsense.
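As a refresher on the metric: AUC is the probability that a randomly chosen patient from the positive group scores higher than a randomly chosen patient from the negative group, so 0.5 is coin-flipping and 0.99 is near-perfect separation. A minimal pure-NumPy sketch with invented scores (nothing below comes from the paper):

```python
import numpy as np

def auc(scores, labels):
    """AUC = P(random positive scores above random negative),
    computed from all pairwise comparisons (ties count half)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 1000).astype(bool)

# Scores that are pure noise sit near AUC 0.5 ...
noise = rng.normal(size=1000)
print(round(auc(noise, labels), 2))

# ... while AUC in the high 0.9s means the two groups barely overlap.
strong = labels * 3.0 + rng.normal(size=1000)
print(round(auc(strong, labels), 2))
```

Seen through that lens, a score in the 0.99 range for a label radiologists cannot perceive at all is exactly as strange as it sounds.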

Every radiologist I have told about these results is absolutely flabbergasted, because despite all of our expertise, none of us would have believed in a million years that x-rays and CT scans contain such strong information about racial identity. Honestly, we are talking jaws dropped – we see these scans every day and we have never noticed.

Artist’s impression of people listening to me at work parties…

The second important aspect though is that, with such a strong correlation, it appears that AI models learn the features correlated with racial identity by default. For example, in our experiments we showed that the distribution of diseases in the population for several datasets was essentially non-predictive of racial identity (AUC = 0.5 to 0.6), but we also found that if you train a model to detect those diseases, the model learns to identify patient race almost as well as the models directly optimised for that purpose (AUC = 0.86). Whaaat?

Actual footage of Marzyeh^ and me from our Zoom meetings

Despite racial identity not being useful for the task (since the disease distribution does not differentiate racial groups), the model learns it anyway? My only hypotheses are that a) CNNs are primed to learn these features due to their inductive biases, and b) perhaps the known differences in TPR/FPR rates in AI models trained on these datasets (Black patients tend to get under-diagnosed, white patients over-diagnosed) are responsible, where cases that are otherwise identical have racially biased differences in labelling?

But no matter how it works, the take-home message is that it appears that models will tend to learn to recognise race, even when it seems irrelevant to the task. So the dozens upon dozens of FDA approved x-ray and CT scan AI models on the market now … probably do this^^? Yikes!

There is one more interpretation of these results that is worth mentioning, for the “but this is expected model behaviour” folks. Even from a purely technical perspective, ignoring the racial bias aspect, the fact models learn features of racial identity is bad. There is no causal pathway linking racial identity and the appearance of, for example, pneumonia on a chest x-ray. By definition these features are spurious. They are shortcuts. Unintended cues. The model is underspecified for the problem it is intended to solve.

Adapted from Geirhos et al, this diagram shows a hypothetical pneumonia detection model that, during optimisation, has learned to recognise racial identity.

However we want to frame this, the model has learned something that is wrong, and this means the model can behave in undesirable and unexpected ways.

I won’t be surprised if this becomes a canonical example of the biggest weakness of deep learning – the ability of deep learning to pick up unintended cues from the data. I’m certainly going to include it in all my talks.
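The shortcut failure mode is easy to reproduce in miniature. The toy sketch below is entirely synthetic (the features, noise levels, and correlation strengths are invented for illustration): a spurious feature tracks the label 95% of the time during training but is uninformative at deployment, and a plain logistic regression leans on the shortcut and degrades when the correlation disappears.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000

def make_data(spurious_corr):
    """Two features: a weak 'real' disease signal, and a spurious one
    that tracks the label with probability `spurious_corr`."""
    y = rng.integers(0, 2, n)
    real = y + rng.normal(scale=2.0, size=n)            # weak true signal
    flip = rng.random(n) < spurious_corr
    spurious = np.where(flip, y, 1 - y) + rng.normal(scale=0.3, size=n)
    return np.column_stack([real, spurious]), y

def fit_logreg(X, y, lr=0.1, steps=2000):
    """Plain logistic regression by gradient descent."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def accuracy(w, b, X, y):
    return (((X @ w + b) > 0) == y).mean()

# Train where the shortcut is 95% predictive; test where it is useless (50%).
Xtr, ytr = make_data(0.95)
Xte, yte = make_data(0.50)
w, b = fit_logreg(Xtr, ytr)
print("weights (real, spurious):", np.round(w, 2))
print("train acc:", round(accuracy(w, b, Xtr, ytr), 2))
print("test  acc:", round(accuracy(w, b, Xte, yte), 2))
```

The same mechanism, scaled up, is why a disease model that has latched onto racial-identity features can behave unpredictably when the population mix shifts.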

Part III – Humans can’t identify racial identity in medical images

As much as many technologists seem to think otherwise, humans play a critical role in AI, particularly in high risk decision making: we are the last line of defence against silly algorithmic decisions. Humans, as the parties who are responsible for applying the decisions of AI systems to patients, have to determine if an error has been made, and whether the errors reflect unsafe AI behaviour.

Radiologists have a lot of practice at determining when imaging tests are acceptable or not. For example, image quality is known to impact diagnostic accuracy. If the images look bad enough that you might miss something (we call these images “non-diagnostic”) then the radiologist is responsible for recognising that and refusing to use them.

“What is this, Nuc Med?”

But what happens when the radiologist literally has no way to tell if the study is usable or not?

I’ve spoken about this risk before when I discussed medical imaging super-resolution models. If the AI changes the output in a way that is hidden from the radiologist, because a bad image looks like it is a good image, then the whole “radiologist as safety net” system breaks down.

AI-accelerated MRI imaging from the fastMRI challenge, with pretty “diagnostic quality” looking images, but the tear in the meniscus is no longer visible. How is a human meant to identify that the image study is flawed if it looks fine but the important part is missing?

The problem is much worse for racial bias. At least in MRI super-resolution, the radiologist is expected to review the original low quality image to ensure it is diagnostic quality (which seems like a contradiction to me, but whatever). In AI with racial bias though, humans literally cannot recognise racial identity from images^^^. Unless they are provided with access to additional data (which they don’t currently have easy access to in imaging workflows) they will be completely unable to appreciate the bias no matter how skilled they are and no matter how much effort they apply to the task.

This is a big deal. Medicine operates on what tends to be called the “Swiss cheese” model of risk management, where each layer of mitigation has some flaws, but combined they detect most problems.

The radiologist slice of cheese is absolutely critical in imaging AI safety, and in this setting it might be completely ineffective.

It is definitely true that we are moving towards race-aware risk management practices, and the recently published algorithmic bias playbook describes how governance bodies might implement such practices at a policy level, but it is also true that these practices are not currently widespread, despite the dozens of AI systems available on the open market.

Part IV – We don’t know how to stop it

This is probably the biggest problem here. We ran an extensive series of experiments to try and work out what was going on.

First, we tried obvious demographic confounders (for example, Black patients tend to have higher rates of obesity than white patients, so we checked whether the models were simply using body mass/shape as a proxy for racial identity). None of them appeared to be responsible, with very low predictive performance when tested alone.
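The shape of such a confounder check, sketched here with synthetic numbers (the variables and effect sizes are invented, not the paper's): score each candidate variable on its own as a predictor of the label, and compare its standalone AUC with the image model's.

```python
import numpy as np

def auc(scores, labels):
    """P(random positive scores above random negative); ties count half."""
    pos, neg = scores[labels], scores[~labels]
    g = (pos[:, None] > neg[None, :]).sum()
    t = (pos[:, None] == neg[None, :]).sum()
    return (g + 0.5 * t) / (pos.size * neg.size)

rng = np.random.default_rng(1)
n = 2000
label = rng.integers(0, 2, n).astype(bool)       # stand-in for self-reported race

# Hypothetical confounders, at most weakly shifted between groups.
bmi = rng.normal(27, 5, n) + 0.5 * label         # slight group difference
age = rng.normal(55, 15, n)                      # no group difference
model_out = rng.normal(0, 1, n) + 3.0 * label    # the image model's score

for name, s in [("bmi", bmi), ("age", age), ("image model", model_out)]:
    print(f"{name:12s} AUC alone: {auc(s, label):.2f}")
```

A confounder that drives the model's prediction would have to score near the model itself when tested alone; AUCs barely above 0.5 rule it out as the main explanation.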

Next we tried to pin down what sort of features were being used. There was no clear anatomical localisation, no specific region of the images that contributed to the predictions. Even more interesting, no part of the image spectrum was primarily responsible either. We could get rid of all the high-frequency information, and the AI could still recognise race in fairly blurry (non-diagnostic) images. Similarly, and I think this might be the most amazing figure I have ever seen, we could get rid of the low-frequency information to the point that a human can’t even tell the image is still an x-ray, and the model can still predict racial identity just as well as with the original image!

Performance is maintained with the low pass filter to around the LPF25 level, which is quite blurry but still readable. But for the high-pass filter, the model can still recognise the racial identity of the patient well past the point that the image is just a grey box 😱

Actually, I’m going to zoom in on that image just because it is so ridiculous!

What even is this? This nonsense generalises to completely new datasets?!?!
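For readers wondering what the low-pass/high-pass experiment involves mechanically, here is a minimal sketch of radial Fourier filtering on a synthetic image (the cutoff and image size are illustrative, not the paper's settings):

```python
import numpy as np

def radial_mask(shape, cutoff):
    """Boolean mask keeping spatial frequencies within `cutoff` cycles of DC."""
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None] * h
    fx = np.fft.fftfreq(w)[None, :] * w
    return np.hypot(fy, fx) <= cutoff

def filter_image(img, cutoff, keep="low"):
    """Low- or high-pass filter an image in the Fourier domain."""
    F = np.fft.fft2(img)
    mask = radial_mask(img.shape, cutoff)
    if keep == "high":
        mask = ~mask
    return np.real(np.fft.ifft2(F * mask))

rng = np.random.default_rng(0)
img = rng.normal(size=(128, 128))

low = filter_image(img, cutoff=10, keep="low")    # blurry, LPF10-style
high = filter_image(img, cutoff=10, keep="high")  # "grey box" to a human eye

# The two bands partition the spectrum: low + high reconstructs the image.
print(np.allclose(low + high, img))
```

Because the bands partition the spectrum exactly, sweeping the cutoff produces the LPF/HPF series described above, and a model that still classifies the high-pass residue is working from information a human reader never sees.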

This difficulty in isolating the features associated with racial identity is really important, because one suggestion people tend to have when they get shown evidence of racial bias is that we should make the algorithms “colorblind” – to remove the features that encode the protected attribute and thereby make it so the AI cannot “see” race but should still perform well on the clinical tasks we care about.

Here, it seems like there is no easy way to remove racial information from images. It is everywhere and it is in everything.

An urgent problem

AI seems to easily learn racial identity information from medical images, even when the task seems unrelated. We can’t isolate how it does this, and we humans can’t recognise when AI is doing it unless we collect demographic information (which is rarely readily available to clinical radiologists). That is bad.

There are around 30 FDA-cleared AI systems using CXR and chest CT imaging on the market currently, many of which were trained on the exact same datasets we utilised in this research. That is worse.

The ACR-DSI website lists all FDA cleared and approved AI systems.

So how do we find out if this is a problem in clinical AI tools? The FDA recommends…

…you report all results by relevant clinical and demographic subgroups…

Statistical Guidance on Reporting Results from Studies Evaluating Diagnostic Tests, FDA 2007

which sounds nice, but thus far we have seen almost no evidence that this is being done. In fact, even when asked for more information by Tariq et al., no commercial group provided even a demographic breakdown of their test sets by racial identity, let alone performance results in those populations. I said at the start that I hope this research changes medical AI practice, so here is how: we absolutely have to do more race-stratified testing of AI systems, and we probably shouldn’t allow AI systems to be used outside of populations they have been tested in.
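The race-stratified testing the author calls for amounts to reporting performance per demographic subgroup rather than as one pooled number. A minimal sketch, assuming records of (group, true label, predicted label); the toy data, group names, and the plain-accuracy metric are stand-ins, since a real audit would use clinical endpoints (sensitivity, specificity, AUC) on a held-out test set:

```python
from collections import defaultdict

def stratified_accuracy(records):
    """records: iterable of (group, y_true, y_pred) tuples.
    Returns {group: accuracy} so gaps between groups become visible
    instead of being averaged away in a single pooled metric."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical predictions for two subgroups "A" and "B".
preds = [("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
         ("B", 1, 1), ("B", 0, 1), ("B", 0, 0)]
per_group = stratified_accuracy(preds)  # {"A": 2/3, "B": 2/3}
```

A pooled metric can look fine while one subgroup's numbers are much worse, which is exactly why the FDA guidance quoted above asks for results by demographic subgroup.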

So is the FDA actually requiring this, or has this been overlooked in the rush to bring AI to market?

I don’t know about you, but I’m worried. AI might be superhuman, but not every superpower is a force for good.

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42888447)



Reply Favorite

Date: August 3rd, 2021 8:51 AM
Author: Thriller Arrogant Address

dis article be rayciss becauz im blaqq and dont understand nunfin

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42888510)



Reply Favorite

Date: August 3rd, 2021 10:46 PM
Author: big crackhouse

Strong poast/moniker synergy

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893088)



Reply Favorite

Date: August 3rd, 2021 8:55 AM
Author: supple school cafeteria pisswyrm

Science is PAUSED until we can figure out what the hell is going on.

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42888536)



Reply Favorite

Date: August 3rd, 2021 5:47 PM
Author: Puce Charismatic Rigpig



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42891644)



Reply Favorite

Date: August 3rd, 2021 7:12 PM
Author: Bright persian

"Watch out, next the algo will see Caitlyn Jenner's x-ray and figure out he's a dude."

https://twitter.com/MartianOrthodox/status/1422556966254260224

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892086)



Reply Favorite

Date: August 3rd, 2021 7:18 PM
Author: Abnormal hell

It's amazing that systemic racism is even present in computer systems.

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892113)



Reply Favorite

Date: August 3rd, 2021 7:20 PM
Author: Puce Charismatic Rigpig

Ljl

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892126)



Reply Favorite

Date: August 3rd, 2021 7:26 PM
Author: Bright persian

Hi everyone. This thread has been swamped by racists. I'm probably gonna miss your replies, but I'll still be here in a few days when they move on or you can reach out through other channels.

We appreciate all the wonderful support

https://twitter.com/DrLukeOR/status/1422685847170666501

Says the guy who made KKK Skynet.

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892148)



Reply Favorite

Date: August 3rd, 2021 7:28 PM
Author: Light garrison

lmao

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892156)



Reply Favorite

Date: August 3rd, 2021 7:35 PM
Author: Spectacular sick university



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892186)



Reply Favorite

Date: August 3rd, 2021 7:29 PM
Author: Hairraiser base

http://www.autoadmit.com/thread.php?thread_id=4891791&mc=7&forum_id=2

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892159)



Reply Favorite

Date: August 3rd, 2021 10:54 PM
Author: Puce Charismatic Rigpig



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893152)



Reply Favorite

Date: August 3rd, 2021 7:29 PM
Author: Abnormal hell

Race is just a social construct that neatly matches your biology, weird.

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892164)



Reply Favorite

Date: August 3rd, 2021 7:34 PM
Author: Puce Charismatic Rigpig



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892183)



Reply Favorite

Date: August 3rd, 2021 11:44 PM
Author: adventurous yellow kitty cat market



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893418)



Reply Favorite

Date: August 3rd, 2021 7:32 PM
Author: harsh federal son of senegal

How long until he is cancelled for calling racism a superpower?

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892173)



Reply Favorite

Date: August 3rd, 2021 7:45 PM
Author: Submissive Topaz Lay

All this just so Jews could join the country club

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892213)



Reply Favorite

Date: August 3rd, 2021 10:51 PM
Author: Puce Charismatic Rigpig

LMAO at this planet, holy shit

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893132)



Reply Favorite

Date: August 3rd, 2021 10:52 PM
Author: Puce Charismatic Rigpig

That's what's so crazy about all this globalist / blank slate dogma stuff, no joke

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893134)



Reply Favorite

Date: August 3rd, 2021 10:23 PM
Author: Brilliant comical rigor

BUT BUT BUT I THOUGHT RACE WAS JUST SKIN COLOR

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892957)



Reply Favorite

Date: August 4th, 2021 9:14 PM
Author: Puce Charismatic Rigpig

Ljl

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42898377)



Reply Favorite

Date: August 3rd, 2021 10:24 PM
Author: Coral background story kitchen

lol at the very first takeaway from this

https://pbs.twimg.com/media/E7yrWcdVUAQ38LP?format=jpg&name=large

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892960)



Reply Favorite

Date: August 3rd, 2021 10:27 PM
Author: harsh federal son of senegal

it's fucking dark-age medieval

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892978)



Reply Favorite

Date: August 3rd, 2021 10:50 PM
Author: Puce Charismatic Rigpig

Full-on superstition, allergic or even HOSTILE to empirical evidence in the physical world...

...the exact kind of mentality which lib progressive / science-worshipping types have been mocking for centuries: Odd case, very strange!

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893127)



Reply Favorite

Date: August 3rd, 2021 11:07 PM
Author: insane trip indian lodge



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893230)



Reply Favorite

Date: August 3rd, 2021 11:17 PM
Author: harsh federal son of senegal



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893279)



Reply Favorite

Date: August 3rd, 2021 10:49 PM
Author: big crackhouse

How does that work in practice in, say, Saudi Arabia, Japan, China, Thailand, etc?

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893118)



Reply Favorite

Date: August 3rd, 2021 10:51 PM
Author: Puce Charismatic Rigpig

Ljl...

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893130)



Reply Favorite

Date: August 3rd, 2021 11:05 PM
Author: Coral background story kitchen

*blank stare*

so anyway, medicine is racist

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893219)



Reply Favorite

Date: August 3rd, 2021 11:38 PM
Author: Puce Charismatic Rigpig



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893392)



Reply Favorite

Date: August 3rd, 2021 10:26 PM
Author: twisted dull halford site

Why is this bad?

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892975)



Reply Favorite

Date: August 3rd, 2021 10:28 PM
Author: harsh federal son of senegal

It's a pavlovian reflex, don't look for logic

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892981)



Reply Favorite

Date: August 3rd, 2021 10:28 PM
Author: Bright persian

Because rich white guys might get cutting-edge personalized medical care, which isn't fair.

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892985)



Reply Favorite

Date: August 3rd, 2021 10:29 PM
Author: twisted dull halford site

I don't get it. So the computer learns the person is black and so the computer decides the person is less valuable and the computer decides not to treat the person?



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892992)



Reply Favorite

Date: August 3rd, 2021 10:30 PM
Author: Bright persian

If you read the blog post, your question won't be answered.

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893002)



Reply Favorite

Date: August 3rd, 2021 10:40 PM
Author: twisted dull halford site



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893065)



Reply Favorite

Date: August 3rd, 2021 10:30 PM
Author: Coral background story kitchen

medicine is racist

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893006)



Reply Favorite

Date: August 3rd, 2021 10:35 PM
Author: multi-colored half-breed nursing home

*beep boop beep*....*NIGGER!* *NIGGER! *NIGGER!*

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893034)



Reply Favorite

Date: August 3rd, 2021 10:28 PM
Author: Coral background story kitchen

(racist

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42892987)



Reply Favorite

Date: August 3rd, 2021 10:45 PM
Author: startling center travel guidebook

libs devoutly believe the dogma that race is a mere social construct, a social fiction.

this AI shows it's deep and real.

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893086)



Reply Favorite

Date: August 3rd, 2021 10:46 PM
Author: Puce Charismatic Rigpig



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893095)



Reply Favorite

Date: August 3rd, 2021 10:49 PM
Author: twisted dull halford site

Oh is this all some silly ontological argument? Like, the very fact that a computer could predict race is somehow an indictment of the computer, because we already know race isnt real?

i thought it was like "its bad that it knows race because then it will treat the person different" but its really some useless ideological thing

i'm open to someone explaining what difference this will make in the practice of medicine, or how it causes disparate impact. but what you're telling me is its not about that

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893121)



Reply Favorite

Date: August 3rd, 2021 11:07 PM
Author: Coral background story kitchen

no, it's that ppl are racist, so if a computer can identify race then ppl will be racist to that person. and because ppl are racist, computers are racist too. basically black people cant use computers. also trump said pussy. institutional. racism.

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893226)



Reply Favorite

Date: August 3rd, 2021 11:25 PM
Author: Bright persian

"However we want to frame this, the model has learned something that is 𝐰𝐫𝐨𝐧𝐠"

https://lukeoakdenrayner.wordpress.com/2021/08/02/ai-has-the-worst-superpower-medical-racism/

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893302)



Reply Favorite

Date: August 3rd, 2021 11:28 PM
Author: twisted dull halford site

Third, I want to take as given that racial bias in medical AI is bad.

Weird assumption. What if knowing race is indicative of a different course of treatment.

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893322)



Reply Favorite

Date: August 3rd, 2021 11:36 PM
Author: twisted dull halford site

Richard !

AUGUST 4, 2021 AT 3:15 AM

Interesting article. I wish you had explained why you believe that the algorithm detecting race was a bad thing.

You wrote at length about health disparities, but never closed the loop on what that means for the algorithms. What is a simple case where, had the algorithm not detected race, it would make the right choice but didn’t?

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893372)



Reply Favorite

Date: August 4th, 2021 9:52 AM
Author: twisted dull halford site

https://www.researchsquare.com/article/rs-151985/v1

https://assets.researchsquare.com/files/rs-151985/v1_stamped.pdf?c=1612450670

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42894727)



Reply Favorite

Date: August 4th, 2021 10:06 AM
Author: twisted dull halford site

If you think about it, basically what's going on is that in the training data, being a black woman is associated with underdiagnosis (i.e. this is what doctors in hospitals themselves are doing). there's probably some effect from insurance, i.e. black women have bad insurance so doctors know there's less they can do. so that bias gets fed into the AI, which uses the black identifier to underdiagnose.

So the issue isn't with the AI, it's that the training data is racist.

https://assets.researchsquare.com/files/rs-151985/v1_stamped.pdf?c=1612450670

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42894765)



Reply Favorite

Date: August 3rd, 2021 10:43 PM
Author: soul-stirring anal ticket booth affirmative action

It can also detect race from microscopic tissue samples.

https://twitter.com/lab_pearson/status/1422392094296006656?s=20

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893080)



Reply Favorite

Date: August 3rd, 2021 10:47 PM
Author: Puce Charismatic Rigpig

Odd case!

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893099)



Reply Favorite

Date: August 3rd, 2021 10:46 PM
Author: harsh federal son of senegal

https://i.imgur.com/60atq7s.png what's this, AI?

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893093)



Reply Favorite

Date: August 3rd, 2021 10:48 PM
Author: Puce Charismatic Rigpig

You still don't understand what you're dealing with, do you? Perfect organism. Its structural perfection is matched only by its hostility.

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893102)



Reply Favorite

Date: August 4th, 2021 11:00 AM
Author: Fragrant Buck-toothed Fat Ankles



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42894998)



Reply Favorite

Date: August 3rd, 2021 10:55 PM
Author: Pink razzle genital piercing depressive

A few years ago I was super lib. My views haven't changed at all, and now I'm honestly baffled by what libs worry about. AIs can distinguish race based on medical imaging? Why is that surprising? Wasn't the whole point of these AIs to notice things that humans can't?

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893166)



Reply Favorite

Date: August 4th, 2021 12:05 AM
Author: adventurous yellow kitty cat market



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42893513)



Reply Favorite

Date: August 4th, 2021 11:04 AM
Author: Cream impressive property hairy legs

Race isn’t real

Also libs: we need to divide everyone up based on race in order to accuse whites of racism

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42895009)



Reply Favorite

Date: August 4th, 2021 3:25 PM
Author: Abnormal hell



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42896532)



Reply Favorite

Date: August 4th, 2021 9:14 PM
Author: Puce Charismatic Rigpig



(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42898375)



Reply Favorite

Date: August 4th, 2021 3:47 PM
Author: Heady offensive azn

IN THIS HOUSE

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42896687)



Reply Favorite

Date: August 4th, 2021 9:14 PM
Author: Puce Charismatic Rigpig

Ljl...

(http://www.autoadmit.com/thread.php?thread_id=4891815&forum_id=2#42898374)