Based on a few studies, it seems AI can identify melanomas with ~99% accuracy.
The obvious question to me, then, is why there are no models publicly available for people to do preventive checks at home. How is this not a thing yet?
User side of false negatives: the model misses skin cancer. The user delays a visit to a doctor by a month, dies three months later, and her relatives sue the hell out of the model creator.
User side of false positives: the model thinks a benign blemish is malignant. The user spends a few grand to verify it is not, and blames you for scaring her.
Doctor side of false results: the fucking engineers do not know what they are doing. Please, my patients, do not use that. We doctors and the responsible patients should unite against the stupid AI.
Arguably, the user's side is a question of ethics; the doctor's side is what matters more for adoption.
BTW, what are the accuracy, false positive, and false negative rates for just an AI model, just a doctor, and a doctor equipped with an AI helper model?
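Worth noting that "99% accuracy" alone answers none of this, because melanoma is rare among checked lesions, so the positive predictive value can be low even for a very good model. A back-of-the-envelope sketch in Python; every number here is an illustrative assumption, not from any study:

```python
# Why "99% accuracy" says little on its own.
# All numbers below are illustrative assumptions, not from any study.

def rates(prevalence, sensitivity, specificity, population=100_000):
    """Return (true_pos, false_pos, ppv) for a screening population."""
    sick = population * prevalence
    healthy = population - sick
    true_pos = sick * sensitivity             # cancers correctly flagged
    false_pos = healthy * (1 - specificity)   # healthy moles flagged anyway
    ppv = true_pos / (true_pos + false_pos)   # P(cancer | model says cancer)
    return true_pos, false_pos, ppv

# Suppose ~0.5% of checked lesions are melanoma, and the model is
# 99% sensitive AND 99% specific (stronger than "99% accuracy" implies).
tp, fp, ppv = rates(prevalence=0.005, sensitivity=0.99, specificity=0.99)
print(f"true positives:  {tp:.0f}")    # 495
print(f"false positives: {fp:.0f}")    # 995
print(f"PPV: {ppv:.1%}")               # ~33%: two of three alarms are false
```

Even under those generous assumptions, most positive calls are false alarms, which is exactly the "spends a few grand to verify" scenario above.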
Both would constitute medical advice, and you can't get in trouble for not giving advice. Imagine if I missed one bug (or the underlying math was simply wrong!) and someone got HIV.
There are times when harm reduction is an obvious win (Narcan, I hear, is very cool), but in this case it's hard to justify.
I always have a side project or two going, and I think this would be a neat one, but I would need a LOT of pictures of people’s moles to train a >95% accurate cancer-detection model. I’m not sure how one would go about getting those without working for a hospital or a large health company, and obviously they’d frown upon stealing their images for a side project.
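For what it's worth, the modeling part of such a side project is the easy bit. A minimal transfer-learning sketch in PyTorch, assuming you somehow had a labeled dataset laid out as data/{benign,malignant}/*.jpg; the path, architecture, and hyperparameters are all placeholders, not recommendations:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing for a pretrained backbone.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
ds = datasets.ImageFolder("data", transform=tfm)  # hypothetical layout
loader = DataLoader(ds, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained ResNet; retrain only the final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # benign vs. malignant

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```

Twenty-odd lines gets you a classifier; the labeled images, the label quality, and the clinical validation are the hard parts, which is the whole point.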