HACKER Q&A
📣 czatt

How far along is the development of AI for reading medical scans?


I've read some sources saying that there isn't enough labelled data available to properly train an algorithm, and some saying that there are already viable solutions being tested at hospitals. What are existing AI solutions capable of doing so far in this field, and how long would it take for them to be commercially used by hospitals / healthcare providers?


  👤 bsenftner Accepted Answer ✓
About 7-10 years ago I consulted, tech-MBA-style, with a half dozen Southern California startups doing medical scan reading via machine learning. In each case, I watched the CEOs get overwhelmed by their investors, losing control as soon as their potential was validated, and their companies being sold to giant healthcare corps, most often over the protests of the founders. In each of these situations, the startups were working with large offshore teams annotating medical scans for their ML training. I was usually hired to write a prospectus or similar investment document, so my exposure to each startup was brief. But in every case, the founding team was knocked off its feet by the aggression of the investors. I checked back with two of the CEOs about two years ago, and both of their startups were shut down after acquisition.

👤 davismwfl
Oh, the fun. Yes, there are some solutions being tested for specific types of scans and specific diagnoses, but not nearly enough is operational yet.

My "short" answer: There are a number of people/companies researching AI for medical scans/records, but it is true that finding enough properly documented and labeled scans/images/records is hard. Privacy laws also add a fairly high burden, which makes the research hard. The privacy laws/rules are needed, but sometimes they are more about adding roadblocks than actually protecting anyone's privacy.

Another challenge in getting good training data is that in more complex scenarios there isn't always one answer for a given scan, or multiple issues can be present at once. So it is sometimes hard to isolate the specific issue in a scan, which adds complexity. Add to this that if you have 4 medically trained people read the scan independently and without influence, you will generally get different answers for all but the simplest diagnoses.
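To make the reader-disagreement point concrete, here is a minimal sketch of how you might quantify agreement among independent readers before using their labels for training. The labels and scan cases are entirely made up for illustration; real pipelines typically use formal statistics like Cohen's or Fleiss' kappa, but simple pairwise agreement shows the idea:

```python
from collections import Counter
from itertools import combinations

def pairwise_agreement(labels):
    """Fraction of reader pairs that assigned the same label to a scan."""
    pairs = list(combinations(labels, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

# Hypothetical labels from four independent readers for three scans.
readings = [
    ["fracture", "fracture", "fracture", "fracture"],  # simple case: unanimous
    ["nodule",   "nodule",   "normal",   "nodule"],    # one dissenting reader
    ["effusion", "nodule",   "effusion", "normal"],    # substantial disagreement
]

for labels in readings:
    majority, count = Counter(labels).most_common(1)[0]
    agreement = pairwise_agreement(labels)
    print(f"majority={majority} ({count}/{len(labels)}), "
          f"pairwise agreement={agreement:.2f}")
```

A training pipeline might keep only scans above some agreement threshold, or fall back to adjudication by a senior reader for the rest, which is part of why clean labeled datasets are expensive to build.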

There are places in medicine where things are easier and more consistency is available, and that is where most of the current algorithms and work are concentrated. Some things are also easier to box into categories than others, so they make for lower-hanging fruit, which is what I see happening right now. All that said, bringing a product to the commercial market is the largest gap, IMO. Lots of companies/people have researched and built tools/algorithms/models, but few have cleared the FDA in the U.S. I'd say the biggest hurdle right now is commercialization and understanding how to navigate it.

As for most current AI: Most of what is commercially available is not predictive; instead it is used as an aid in finding missed data points that might change the diagnosis or highlight something specific for treatment. For example, there are systems that highlight fractures that would otherwise be missed by a radiologist reading an x-ray. The same goes for CT/MRI scans, where sometimes literally thousands of images are generated for a single scan and missing something is not unheard of. More work has been done around imagery analysis than on most other parts of the patient record, IMO, so you'll find more data around that side than you would around vitals analysis or predictive diagnostics.

Honestly I could go on about this for a long time. This is an area ripe for innovation but has regulatory hurdles most startups fail to grasp and fail to navigate.

I am the CTO of a startup working on these types of problems (vitals & predictive specifically); we have one FDA-cleared product and see many opportunities to make a positive impact on patient care.