Augmedix Launches National Roll-Out of Tech-Enabled Virtual Scribes to Assist Busy ED Clinicians

What You Should Know:

– Augmedix, a provider of remote medical documentation and live clinical support, has launched the national rollout of a tech-enabled medical documentation service powered by virtual scribes and designed specifically for busy Emergency Department (ED) environments.

– Augmedix trains remotely located scribes in emergency medicine documentation to alleviate the administrative burden on clinicians, which is especially important today as these clinicians are on the front lines of the COVID-19 crisis.

– The virtual scribes use the proprietary Augmedix
Notebuilder automation technology to ensure consistent, accurate documentation,
which adds value and potential return on investment relating to ancillary
billing opportunities unique to the ED.

– The Augmedix ED scribes support key facets of
documentation, real-time alerts, and Electronic Health Records tasks. This
includes tracking and providing reminders for labs, radiology, and EKGs, as
well as attaching patient education materials at the end of each encounter.

– Additionally, Augmedix scribes positively impact clinician workflow and the patient experience by removing barriers between the ED and hospitalists. The ED virtual scribes help clinicians optimize workflow by prioritizing documentation requirements for patients being admitted, while still ensuring all ED patient documentation is complete for clinician review and sign-off by the end of their shift.


QGenda Acquires Automated Provider Scheduling Platform Shift Admin – M&A

What You Should Know:

– QGenda has acquired Shift Admin, an industry-recognized
leader for shift-based specialties including emergency medicine, urgent care,
and hospital medicine.

– With industry-leading scheduling technology for all specialties across the healthcare delivery system, QGenda will serve as the single source of truth for provider schedules, ensuring care is available for patients when and where it is needed.


QGenda, the leading
innovator in enterprise healthcare workforce management solutions, announced
the acquisition
of Shift Admin, an industry-recognized
leader for shift-based specialties including emergency medicine, urgent care,
and hospital medicine. With the industry-leading scheduling technology for all
specialties across the healthcare delivery system, QGenda’s technology will
serve as the single source of truth for all provider schedules, ensuring care
is available for patients when and where it is needed.

Automated Provider Scheduling for All Specialties

Founded in 2007, Shift Admin offers Automated Schedule Generation that can generate optimized schedules based on fully customizable rules and users' requests. The Shift Admin schedule generator contains a world-class scheduling algorithm and features a simple but powerful user interface. The system is flexible enough to handle even the most complicated schedules.

Acquisition Enables Greater Access to Data Insights for
Providers

The announcement furthers QGenda’s commitment to advancing
how healthcare organizations manage and schedule their workforce so they can
effectively use providers’ time, reduce burnout and optimize capacity. By
scheduling for all providers and specialties through QGenda, organizations have
greater access to data, details, and insights for thousands of providers
working across the system.

“With care needs fluctuating across states and even within the same area, transparency and flexibility continue to be a large need for healthcare organizations nationwide. QGenda, with the addition of Shift Admin’s shift-based provider scheduling capabilities, is helping healthcare organizations address these priorities. We are partnering with customers to deliver greater visibility into where and when providers are working and optimizing capacity to deliver quality, cohesive care across the entire organization,” stated Greg Benoit, CEO of QGenda.

By adding the leading provider in shift-based scheduling,
QGenda is enhancing capabilities for emergency medicine, hospital medicine, and
urgent care, while building upon its industry-leading scheduling solution for
providers in specialties such as anesthesia, radiology, cardiology, obstetrics
& gynecology, and pathology. The QGenda platform also includes solutions
for on-call scheduling, room management, time tracking, compensation
management, and workforce analytics.


Why Hospitals Should Act Now to Create Clinical AI Departments

John Frownfelter, MD, FACP, Chief Medical Information Officer at Jvion

A century ago, X-rays transformed medicine forever. For the first time, doctors could see inside the human body, without invasive surgeries. The technology was so revolutionary that in the last 100 years, radiology departments have become a staple of modern hospitals, routinely used across medical disciplines.

Today, new technology is once again radically reshaping medicine: artificial intelligence (AI). Like the X-ray before it, AI gives clinicians the ability to see the unseen and has transformative applications across medical disciplines. As its impact grows clear, it’s time for health systems to establish departments dedicated to clinical AI, much as they did for radiology 100 years ago.

Radiology, in fact, was one of the earliest use cases for AI in medicine today. Machine learning algorithms trained on medical images can learn to detect tumors and other malignancies that are, in many cases, too subtle for even a trained radiologist to perceive. That’s not to suggest that AI will replace radiologists, but rather that it can be a powerful tool for aiding them in the detection of potential illness — much like an X-ray or a CT scan. 

AI’s potential is not limited to radiology, however. Depending on the data it is trained on, AI can predict a wide range of medical outcomes, from sepsis and heart failure to depression and opioid abuse. As more of patients’ medical data is stored in the EHR, and as these EHR systems become more interconnected across health systems, AI will only become more sensitive and accurate at predicting a patient’s risk of deteriorating.

However, AI is even more powerful as a predictive tool when it looks beyond the clinical data in the EHR. In fact, research suggests that clinical care factors contribute to only 16% of health outcomes. The other 84% are determined by socioeconomic factors, health behaviors, and the physical environment. To account for these external factors, clinical AI needs external data. 

Fortunately, data on social determinants of health (SDOH) is widely available. Government agencies including the Census Bureau, EPA, HUD, DOT and USDA keep detailed data on relevant risk factors at the level of individual US Census tracts. For example, this data can show which patients may have difficulty accessing transportation to their appointments, which patients live in a food desert, or which patients are exposed to high levels of air pollution. 

These external risk factors can be connected to individual patients using only their address. With a more comprehensive picture of patient risk, Clinical AI can make more accurate predictions of patient outcomes. In fact, a recent study found that a machine learning model could accurately predict inpatient and emergency department utilization using only SDOH data.
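
As a rough illustration of the kind of linkage described above, the sketch below joins tract-level SDOH indicators to patients by geocoded address. The geocode_to_tract helper, column names, and values are hypothetical placeholders for illustration only, not any vendor's actual pipeline.

```python
# Minimal sketch: linking patients to census-tract SDOH indicators by address.
# The geocode_to_tract() helper and all column names are hypothetical; a real
# pipeline would call a geocoding service (e.g., the U.S. Census Geocoder) and
# use curated SDOH files from agencies such as the EPA, HUD, or USDA.
import pandas as pd

def geocode_to_tract(address: str) -> str:
    # Placeholder lookup; swap in a real geocoding service here.
    demo = {
        "123 Main St, Detroit, MI": "26163520100",
        "456 Oak Ave, Des Moines, IA": "19153001200",
    }
    return demo[address]

patients = pd.DataFrame({
    "patient_id": [101, 102],
    "address": ["123 Main St, Detroit, MI", "456 Oak Ave, Des Moines, IA"],
})

# Tract-level risk factors compiled from public sources (illustrative values).
sdoh = pd.DataFrame({
    "tract_fips": ["26163520100", "19153001200"],
    "food_desert": [True, False],
    "pct_no_vehicle": [0.31, 0.08],
    "air_quality_index": [72, 41],
})

patients["tract_fips"] = patients["address"].map(geocode_to_tract)
enriched = patients.merge(sdoh, on="tract_fips", how="left")
# `enriched` now carries external risk factors that can feed a predictive model.
print(enriched)
```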

Doctors rarely have insight on these external forces. More often than not, physicians are with patients for under 15 minutes at a time, and patients may not realize their external circumstances are relevant to their health. But, like medical imaging, AI has the power to make the invisible visible for doctors, surfacing external risk factors they would otherwise miss. 

But AI can do more than predict risk. With a complete view of patient risk factors, prescriptive AI tools can recommend interventions that address these risk factors, tapping the latest clinical research. This sets AI apart from traditional predictive analytics, which leaves clinicians with the burden of determining how to reduce a patient’s risk. Ultimately, the doctor is still responsible for setting the care plan, but AI can suggest actions they may not otherwise have considered.

By reducing the cognitive load on clinicians, AI can address another major problem in healthcare: burnout. Among professions, physicians have one of the highest suicide rates, and the U.S. Department of Health and Human Services predicts that by 2025 there will be a shortage of nearly 90,000 physicians across the nation, driven by burnout. The problem is real, and the pandemic has only worsened its impact.

Implementing clinical AI can play an essential role in reducing burnout within hospitals. Studies show burnout is largely attributed to bureaucratic tasks and EHRs, and that physicians spend twice as much time on EHRs and desk work as they do with patients. Clinical AI can ease the burden of these administrative tasks so physicians can spend more time face-to-face with their patients.

For all its promise, it’s important to recognize that AI is as complex a tool as any radiological instrument. Healthcare organizations can’t just install the software and expect results. There are several implementation considerations that, if poorly executed, can doom AI’s success. This is where clinical AI departments can and should play a role. 

The first area where clinical AI departments should focus is the data. AI is only as good as the data that goes into it. Ultimately, the data used to train machine learning models should be relevant to and representative of the patient population being served. Failing to ensure this can limit AI's accuracy and usefulness, or worse, introduce bias. Any bias in the training data, including pre-existing disparities in health outcomes, will be reflected in the output of the AI.

Every hospital's use of clinical AI will be different, and hospitals will need to deeply consider their patient population and make sure that they have the resources to tailor vendor solutions accordingly. Without the right resources and organizational strategies, clinical AI adoption will come with the same frustration and disillusionment that has come to be associated with EHRs.

Misconceptions about AI are a common hurdle that can foster resistance and misuse. No matter what science fiction tells us, AI will never replace a clinician’s judgment. Rather, AI should be seen as a clinical decision support tool, much like radiology or laboratory tests. For a successful AI implementation, it’s important to have internal champions who can build trust and train staff on proper use. Clinical AI departments can play an outsized role in leading this cultural shift.  

Finally, coordination is the bedrock of quality care, and AI is no exception. Clinical AI departments can foster collaboration across departments to action AI insights and treat the whole patient. Doing so can promote a shift from reactive to preventive care, mobilizing ambulatory, and community health resources to prevent avoidable hospitalizations.

With the promise of new vaccines, the end of the pandemic is in sight. Hospitals will soon face a historic opportunity to reshape their practices to recover from the pandemic’s financial devastation and deliver better care in the future. Clinical AI will be a powerful tool through this transition, helping hospitals to get ahead of avoidable utilization, streamline workflows, and improve the quality of care. 

A century ago, few would have guessed that X-rays would be the basis for an essential department within hospitals. Today, AI is leading a new revolution in medicine, and hospitals would be remiss to be left behind.


About  John Frownfelter, MD, FACP

John is an internist and physician executive in Health Information Technology and is currently leading Jvion’s clinical strategy as their Chief Medical Information Officer. With 20 years’ leadership experience he has a broad range of expertise in systems management, care transformation and health information systems. Dr. Frownfelter has held a number of medical and medical informatics leadership positions over nearly two decades, highlighted by his role as Chief Medical Information Officer for Inpatient services at Henry Ford Health System and Chief Medical Information Officer for UnityPoint Health where he led clinical IT strategy and launched the analytics programs. 

Since 2015, Dr. Frownfelter has been bringing his expertise to healthcare through health IT advising to both industry and health systems. His work with Jvion has enhanced their clinical offering and their implementation effectiveness. Dr. Frownfelter has also held professorships at St. George’s University and Wayne State schools of medicine, and the University of Detroit Mercy Physician Assistant School. Dr. Frownfelter received his MD from Wayne State University School of Medicine.


2020’s Top 20 Digital Health M&A Deals Totaled $50B

Teladoc Health and Livongo Merge

The combination of Teladoc Health and Livongo creates a
global leader in consumer-centered virtual care. The combined company is
positioned to execute quantified opportunities to drive revenue synergies of
$100 million by the end of the second year following the close, reaching $500
million on a run-rate basis by 2025.

Price: $18.5B in value; each share of Livongo will be exchanged for 0.5920 shares of Teladoc Health plus cash consideration of $11.33 per Livongo share.


Siemens Healthineers Acquires Varian Medical

On August 2nd, Siemens Healthineers agreed to acquire
Varian Medical for $16.4B, with the deal expected to close in 2021. Varian is a
global specialist in the field of cancer care, providing solutions especially
in radiation oncology and related software, including technologies such as
artificial intelligence, machine learning and data analysis. In fiscal year 2019,
the company generated $3.2 billion in revenues with an adjusted operating
margin of about 17%. The company currently has about 10,000 employees
worldwide.

Price: $16.4 billion in an all-cash transaction.


Gainwell to Acquire HMS for $3.4B in Cash

Veritas Capital (“Veritas”)-backed Gainwell Technologies (“Gainwell”),
a leading provider of solutions that are vital to the administration and
operations of health and human services programs, today announced that it has
entered into a definitive agreement whereby Gainwell will acquire HMS, a technology, analytics and engagement
solutions provider helping organizations reduce costs and improve health
outcomes.

Price: $3.4 billion in cash.


Philips Acquires Remote Cardiac Monitoring BioTelemetry for $2.8B

Philips acquires BioTelemetry, a U.S. provider of remote
cardiac diagnostics and monitoring, for $72.00 per share for an implied
enterprise value of $2.8 billion (approx. EUR 2.3 billion). With $439M in
revenue in 2019, BioTelemetry annually monitors over 1 million cardiac patients
remotely; its portfolio includes wearable heart monitors, AI-based data
analytics, and services.

Price: $2.8B ($72 per share), to be paid in cash upon
completion.


Hims & Hers Merges with Oaktree Acquisition Corp to Go Public on NYSE

Telehealth company Hims & Hers and Oaktree Acquisition Corp., a special purpose acquisition company (SPAC), merge to go public on the New York Stock Exchange (NYSE) under the symbol “HIMS.” The merger will enable further investment in growth and new product categories that will accelerate Hims & Hers’ plan to become the digital front door to the healthcare system.

Price: The business combination values the combined
company at an enterprise value of approximately $1.6 billion and is expected to
deliver up to $280 million of cash to the combined company through the
contribution of up to $205 million of cash.


SPAC Merges with 2 Telehealth Companies to Form Public
Digital Health Company in $1.35B Deal

Blank check acquisition company GigCapital2 agreed to merge with Cloudbreak Health, LLC, a unified telemedicine and video medical interpretation solutions provider, and UpHealth Holdings, Inc., one of the largest national and international digital healthcare providers to form a combined digital health company. 

Price: The merger deal is worth $1.35 billion, including
debt.


WellSky Acquires CarePort Health from Allscripts for
$1.35B

WellSky, a global health and community care technology company, announced today that it has entered into a definitive agreement with Allscripts to acquire CarePort Health (“CarePort”), a Boston, MA-based care coordination software company that connects acute and post-acute providers and payers.

Price: $1.35 billion, representing a multiple of more than 13 times CarePort’s revenue over the trailing 12 months and approximately 21 times CarePort’s non-GAAP adjusted EBITDA over the same period.


Waystar Acquires Medicare RCM Company eSolutions

On September 13th, revenue cycle management
provider Waystar acquired eSolutions, a provider of Medicare and multi-payer revenue
cycle management, workflow automation, and data analytics tools. The
acquisition creates the first unified healthcare payments platform with both
commercial and government payer connectivity, resulting in greater value for
providers.

Price: $1.3 billion valuation


Radiology Partners Acquires MEDNAX Radiology Solutions

Radiology Partners (RP), a radiology practice in the U.S., announced a definitive agreement to acquire MEDNAX Radiology Solutions, a division of MEDNAX, Inc. for an enterprise value of approximately $885 million. The acquisition is expected to add more than 800 radiologists to RP’s existing practice of 1,600 radiologists. MEDNAX Radiology Solutions consists of more than 300 onsite radiologists, who primarily serve patients in Connecticut, Florida, Nevada, Tennessee, and Texas, and more than 500 teleradiologists, who serve patients in all 50 states.

Price: $885M


PointClickCare Acquires Collective Medical

PointClickCare Technologies, a leader in senior care technology with a network of more than 21,000 skilled nursing facilities, senior living communities, and home health agencies, today announced its intent to acquire Collective Medical, a Salt Lake City, UT-based leading network-enabled platform for real-time cross-continuum care coordination, for $650M. Together, PointClickCare and Collective Medical will provide diverse care teams across the continuum of acute, ambulatory, and post-acute care with point-of-care access to deep, real-time patient insights at any stage of a patient’s healthcare journey, enabling better decision making and improved clinical outcomes at a lower cost.

Price: $650M


Teladoc Health Acquires Virtual Care Platform InTouch
Health

Teladoc Health acquires InTouch Health, the leading provider of enterprise telehealth solutions for hospitals and health systems, for $600M. The acquisition establishes Teladoc Health as the only virtual care provider covering the full range of acuity – from critical to chronic to everyday care – through a single solution across all sites of care including home, pharmacy, retail, physician office, ambulance, and more.

Price: $600M consisting of approximately $150 million
in cash and $450 million of Teladoc Health common stock.


AMN Healthcare Acquires VRI Provider Stratus Video

AMN Healthcare Services, Inc. acquires Stratus Video, a leading provider of video remote language interpretation services for the healthcare industry. The acquisition will help AMN Healthcare expand in the virtual workforce, patient care arena, and quality medical interpretation services delivered through a secure communications platform.

Price: $475M


CarepathRx Acquires Pharmacy Operations of Chartwell from
UPMC

CarepathRx, a leader in pharmacy and medication management
solutions for vulnerable and chronically ill patients, announced today a
partnership with UPMC’s Chartwell subsidiary that will expand patient access to
innovative specialty pharmacy and home infusion services. Under the $400M
landmark agreement, CarepathRx will acquire the
management services organization responsible for the operational and strategic
management of Chartwell while UPMC becomes a strategic investor in CarepathRx. 

Price: $400M


Cerner to Acquire Health Division of Kantar for $375M in
Cash

Cerner announces it will acquire Kantar Health, a leading
data, analytics, and real-world evidence and commercial research consultancy
serving the life science and health care industry.

This acquisition is expected to provide Cerner’s Learning Health Network client consortium and health systems with more opportunities to directly engage with life sciences companies for funded research studies. The acquisition is expected to close during the first half of 2021.

Price: $375M


Cerner Sells Off Parts of Healthcare IT Business in
Germany and Spain

Cerner sells off parts of its healthcare IT business in Germany and Spain to German company CompuGroup Medical, reflecting the company-wide transformation focused on improved operating efficiencies, enhanced client focus, a refined growth strategy, and a sharpened approach to portfolio management.

Price: EUR 225 million ($247.5M USD)


CompuGroup Medical Acquires eMDs for $240M

CompuGroup Medical (CGM) acquires eMDs, Inc. (eMDs), a leading provider of healthcare IT with a focus on doctors’ practices in the US, reaching an attractive size in the biggest healthcare market worldwide. With this acquisition, the US subsidiary of CGM significantly broadens its position and will become one of the top 4 providers in the market for Ambulatory Information Systems in the US.

Price: $240M (equal to approx. EUR 203 million)


Change Healthcare Buys Back Pharmacy Network

Change Healthcare buys back pharmacy unit eRx Network (“eRx”), a leading provider of comprehensive, innovative, and secure data-driven solutions for pharmacies. eRx generated approximately $67M in annual revenue for the twelve-month period ended February 29, 2020. The transaction supports Change Healthcare’s commitment to focus on and invest in core aspects of the business to fuel long-term growth and advance innovation.

Price: $212.9M plus cash on the balance sheet.


Walmart Acquires Medication Management Platform CareZone

Walmart acquires CareZone, a San Francisco, CA-based smartphone
service for managing chronic health conditions, for a reported $200M. By
working with a network of pharmacy partners, CareZone’s concierge services
assist consumers in getting their prescription medications organized and
delivered to their doorstep, making pharmacies more accessible to individuals
and families who may be homebound or reside in rural locations.

Price: $200M


Verisk Acquires MSP Compliance Provider Franco Signor

Verisk, a data
analytics provider, announced today that it has acquired Franco Signor, a Medicare Secondary Payer
(MSP) service provider to America’s largest insurance carriers and employers.
As part of the acquisition, Franco Signor will become part of Verisk’s Claims
Partners business, a leading provider of MSP compliance and other analytic
claim services. Claims Partners and Franco Signor will be combining forces to
provide the single best resource for Medicare compliance. 

Price: $160M


Rubicon Technology Partners Acquires Central Logic

Private equity firm Rubicon Technology Partners acquires
Central Logic, a provider of patient orchestration tools that accelerate
access to care for healthcare organizations. Rubicon will aggressively drive Central Logic’s
growth with additional cash investments into the business, with a focus
on product innovation, sales expansion, delivery and customer support, and
the pursuit of acquisition opportunities.

Price: $110M–$125M, according to sources


AI Algorithms Can Predict Outcomes of COVID-19 Patients with Mild Symptoms in ER

What You Should Know:

– Artificial intelligence algorithms can predict outcomes
of COVID-19 patients with mild symptoms in emergency rooms, according to recent
research findings published in Radiology: Artificial Intelligence journal.

– Researchers trained the algorithm from data on 338
positive COVID-19 patients between the ages of 21 and 50 by using diverse
patient data from emergency departments within Mount Sinai Health System
hospitals (The Mount Sinai Hospital in Manhattan, Mount Sinai Queens, and Mount
Sinai Brooklyn) between March 10 and March 26.


Mount Sinai researchers have developed an artificial intelligence algorithm to rapidly predict outcomes of COVID-19 patients in the emergency room based on test and imaging results. Published in the journal Radiology: Artificial Intelligence, the research reveals that if the AI algorithm were implemented in the clinical setting, hospital doctors could identify patients at high risk of developing severe cases of COVID-19 based on the severity score, leading to closer observation and more aggressive, quicker treatment.

Research Background/Protocols

The researchers trained the algorithm using electronic medical records (EMRs) of patients between 21 and 50 years old and combined their lab tests and chest X-rays to create this deep learning model. Investigators came up with a severity score to determine who is at the highest risk of intubation or death within 30 days of arriving at the hospital. If applied in a clinical setting, this deep learning model could help emergency room staff better identify which patients are likely to become sicker, enabling closer observation and quicker triage, and could expedite treatment before hospital admission.

Led by Fred Kwon, Ph.D., Biomedical Sciences at the Icahn School of Medicine at Mount Sinai, researchers trained the algorithm from data on 338 positive COVID-19 patients between the ages of 21 and 50 by using diverse patient data from emergency departments within Mount Sinai Health System hospitals (The Mount Sinai Hospital in Manhattan, Mount Sinai Queens, and Mount Sinai Brooklyn) between March 10 and March 26. Data from the emergency room including chest X-rays, bloodwork (basic metabolic panel, complete blood counts), and blood pressure were used to develop a severity score and predict the disease course of COVID-19. 
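
For readers who want a concrete picture of the general pattern (this is not the Mount Sinai team's actual architecture), the hypothetical sketch below combines imaging-derived features with labs and vitals in a simple model whose predicted probability plays the role of a severity score; all feature names and data are synthetic.

```python
# Illustrative sketch only, not the published Mount Sinai model. It shows the
# general pattern of combining imaging-derived features with labs/vitals to
# produce a severity score; features and outcomes below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 338  # cohort size mirrors the study's training set

# Hypothetical features: two chest X-ray model outputs plus two labs/vitals.
X = np.column_stack([
    rng.normal(size=n),        # e.g., imaging-model opacity score
    rng.normal(size=n),        # e.g., imaging-model consolidation score
    rng.normal(100, 15, n),    # e.g., a blood-test value
    rng.normal(120, 20, n),    # e.g., systolic blood pressure
])
y = rng.integers(0, 2, size=n)  # 1 = intubation or death within 30 days (synthetic)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# The predicted probability serves as the "severity score" used for triage.
severity_scores = model.predict_proba(X)[:, 1]
print(severity_scores[:5])
```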

Patients with a higher severity score would require closer observation. The researchers then tested the algorithm on data from patients in all adult age groups and ethnicities. The algorithm has an 82 percent sensitivity for predicting intubation or death within 30 days of arrival at the hospital.

Why It Matters

Many patients with COVID-19, especially younger ones, may show non-specific symptoms when they arrive at the emergency room, including cough, fever, and
respiratory issues that don’t provide any indication of disease severity. As a
result, clinicians cannot easily identify patients who get worse quickly. This algorithm can provide the probability that a patient may
require intubation before they get worse. That way clinicians can make more accurate decisions for appropriate
care.

Algorithms that predict outcomes of patients with COVID-19 do exist, but they are used in admitted patients who have already developed more severe symptoms and have additional imaging and laboratory
data taken after hospital admission.  This algorithm is different since it predicts outcomes in COVID-19 patients while they’re in the emergency room—even in those with mild symptoms. It only uses information from the initial
patient encounter in the hospital emergency department. 

“Our algorithm demonstrates that initial imaging and laboratory tests contain sufficient information to predict outcomes of patients with COVID-19. The algorithm can help clinicians anticipate acute worsening (decompensation) of patients, even those who present without any symptoms, to make sure resources are appropriately allocated,” explains Dr. Kwon. “We are working to incorporate this algorithm-generated severity score into the clinical workflow to inform treatment decisions and flag high-risk patients in the future.”

Docs are ROCs: a simple fix for a “methodologically indefensible” practice in medical AI studies

By LUKE OAKDEN-RAYNER

Anyone who has read my blog or tweets before has probably seen that I have issues with some of the common methods used to analyse the performance of medical machine learning models. In particular, the most commonly reported metrics we use (sensitivity, specificity, F1, accuracy and so on) all systematically underestimate human performance in head to head comparisons against AI models.

This makes AI look better than it is, and may be partially responsible for the “implementation gap” that everyone is so concerned about.

I’ve just posted a preprint on arxiv titled “Docs are ROCs: A simple off-the-shelf approach for estimating average human performance in diagnostic studies” which provides what I think is a solid solution to this problem, and I thought I would explain in some detail here.

Disclaimer: not peer reviewed, content subject to change 


A (con)vexing problem

When we compare machine learning models to humans, we have a bit of a problem. Which humans?

In medical tasks, we typically take the doctor who currently does the task (for example, a radiologist identifying cancer on a CT scan) as proxy for the standard of clinical practice. But doctors aren’t a monolithic group who all give the same answers. Inter-reader variability typically ranges from 15% to 50%, depending on the task. Thus, we usually take as many doctors as we can find and then try to summarise their performance (this is called a multi-reader multicase study, MRMC for short).

Since the metrics we care most about in medicine are sensitivity and specificity, many papers have reported the averages of these values. In fact, a recent systematic review showed that over 70% of medical AI studies that compared humans to AI models reported these values. This makes a lot of sense. We want to know how the average doctor performs at the task, so the average performance on these metrics should be great, right?

No. This is bad.

The problem with reporting the averages is that human sensitivity and specificity live on a curve. They are correlated values, a skewed distribution.

The independently pooled average points of curved distributions are nowhere near the curves.

What do we learn in stats 101 about using averages in skewed distributions?

In fact, this practice has been criticised many times in the methodology literature. Gatsonis and Paliwal go as far as to say “the use of simple or weighted averages of sensitivity and specificity to draw statistical conclusions is not methodologically defensible,” which is a heck of an academic mic drop.
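
A quick synthetic example makes the point concrete: readers who sit on the same ROC curve at different operating points average to a point that falls below that curve, understating their performance.

```python
# Toy illustration (synthetic numbers): readers on the same binormal ROC curve
# at different operating points average to a point strictly below that curve.
import numpy as np
from scipy.stats import norm

auc_true = 0.90
mu = np.sqrt(2) * norm.ppf(auc_true)   # class separation for a binormal ROC with AUC 0.90

# Three "readers" operating at different thresholds on the same curve.
thresholds = np.array([0.5, 1.0, 1.5])
sens = 1 - norm.cdf(thresholds - mu)   # sensitivity at each threshold
spec = norm.cdf(thresholds)            # specificity at each threshold

avg_sens, avg_spec = sens.mean(), spec.mean()

# Sensitivity the curve actually achieves at the averaged specificity:
sens_on_curve = 1 - norm.cdf(norm.ppf(avg_spec) - mu)

print(f"average reader point: sens={avg_sens:.3f}, spec={avg_spec:.3f}")
print(f"curve at that specificity: sens={sens_on_curve:.3f}")
# The averaged point sits below the curve, i.e. it understates reader performance.
```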


What do you mean?

So we need an alternative to average sensitivity and specificity.

If you have read my blog before, you would know I love ROC curves. I’ve written tons about them before (here and here), but briefly: they visually reflect the trade-off between sensitivity and specificity (which is conceptually the same as the trade-off between overcalling or undercalling disease in diagnostic medicine), and the summary metric of the area under the ROC curve is a great measure of discriminative performance. In particular the ROC AUC is prevalence invariant, meaning we can compare the value across hospitals even if the rates of disease differ.

The problem is that human decision making is mostly binary in diagnostic medicine. We say “there is disease” or “there is no disease”. The patient needs a biopsy or they don’t. We give treatment or not*.

Binary decisions create single points in ROC space, not a curve.

The performance of 108 different radiologists at screening mammography, Beam et al, 1996.

AI models on the other hand make curves. By varying the threshold of a decision, the same model can move to different places in ROC space. If we want to be more aggressive at making a diagnosis, follow the curve to the right. If we want to avoid overcalls, shift to the left.
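
A minimal sketch with synthetic scores shows how sweeping the threshold traces out the curve, and how any single cutoff collapses the model to one point in ROC space:

```python
# Minimal sketch with synthetic data: sweeping a model's decision threshold
# traces a full ROC curve, whereas a single binary cutoff gives only one point.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=1000)
# Synthetic model scores that are somewhat informative about the label.
y_score = y_true * 0.8 + rng.normal(0, 1, size=1000)

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")

# Moving the operating threshold shifts the same model along the curve:
for t in (-0.5, 0.5, 1.5):
    pred = y_score >= t
    sens = (pred & (y_true == 1)).sum() / (y_true == 1).sum()
    spec = (~pred & (y_true == 0)).sum() / (y_true == 0).sum()
    print(f"threshold {t:+.1f}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```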

The black line is the model, the coloured dots are doctors. From Gulshan et al, 2016.

As these examples show, groups of humans tend to organise into curves. So why don’t we just … fit a model to the human points to characterise the underlying (hypothetical) curve?

I’ll admit I spent quite a long time trying various methods to do this, none of which worked great or seemed like “the” solution.

I’m not alone in trying, Rajpurkar et al tried out a spline-based approach which worked ok but had some pretty unsatisfying properties.

One day I was discussing this troubling issue with my stats/epi prof, Lyle Palmer, and he looked at me a bit funny and was like “isn’t this just meta-analysis?”.

I feel marginally better about not realising this myself since it appears that almost no-one else has thought of this either**, but dang is it obvious in hindsight.

Wait … what about all those ROCs of docs?

Now, if you read the diagnostic radiology literature, you might be confused. Don’t we use ROC curves to estimate human performance all the time?

The performance of a single radiologist reported in Roganovic et al.

It is true, we do. We can generate ROC curves of single doctors by getting them to estimate their confidence in their diagnosis. We then use each confidence level as a threshold, and calculate the sensitivity and specificity for each point. If you have 5 confidence levels, you get a 5 point ROC curve. After that there are established methods for reasonably combining the ROC curves of individual doctors into a summary curve and AUC.
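
As a toy illustration of that construction (made-up ratings, not real reader data), each confidence level is treated as a successively stricter cutoff and a sensitivity/specificity pair is computed at each one:

```python
# Sketch: turning one reader's 5-level confidence ratings into a 5-point ROC
# curve by treating each level as a successively stricter "positive" cutoff.
# The ratings and labels below are made up for illustration.
import numpy as np

labels = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0])   # 1 = disease present
ratings = np.array([5, 4, 3, 5, 2, 1, 4, 2, 3, 1, 5, 3])  # reader confidence, 1-5

points = []
for cutoff in (5, 4, 3, 2, 1):              # call "positive" if rating >= cutoff
    called_pos = ratings >= cutoff
    sens = (called_pos & (labels == 1)).sum() / (labels == 1).sum()
    spec = (~called_pos & (labels == 0)).sum() / (labels == 0).sum()
    points.append((1 - spec, sens))          # (FPR, TPR) pair for this cutoff

for fpr, tpr in points:
    print(f"FPR={fpr:.2f}, TPR={tpr:.2f}")
```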

But what the heck is a doctor’s confidence in their diagnosis? Can they really estimate it numerically?

In almost all diagnostic scenarios, doctors don’t estimate their confidence. They just make a diagnosis*. Maybe they have a single “hedge” category (i.e., “the findings are equivocal”), but we are taught to try to avoid those. So how are these ROC curves produced?

Well, there are two answers:

  1. It is mammography/x-rays, where every study is clinically reported with a score out of 5, which is used to construct a ROC curve for each doctor (ie the rare situation where scoring an image is standard clinical practice).
  2. It is any other test, where the study design forces doctors to use a scoring system they wouldn’t use in practice.

The latter is obviously a bit dodgy. Even subtle changes to experimental design can lead to significant differences in performance, a bias broadly categorised under the heading “laboratory effects”.

There has been a fair bit written about the failings of enforced confidence scores. For example, Gur et al report that confidence scores in practice are concentrated at the extreme ends of the ranges (essentially binary-by-stealth), and are often unrelated to the subtleness of the image features. Another paper by Gur et al highlights the fact that confidence scores do not relate to clinical operating points, and Mallet et al raise a number of further problems with using confidence scores, concluding that “…confidence scores recorded in our study violated many assumptions of ROC AUC methods, rendering these methods inappropriate.” (emphasis mine)

Despite these findings, the practice of forced confidence scoring is widespread. A meta-analysis by Dendumrongsup et al of imaging MRMC studies reported that confidence scores were utilised in all 51 studies they found, including the 31 studies on imaging tasks in which confidence scores are not used in clinical practice.

I reaaaaally hate this practice. Hence, trying to find a better way.


Meta meta meta

So what did Lyle mean? What does meta-analysis have to do with estimating average human reader performance?

Well, in the meta-analysis of diagnostic test accuracy, you take multiple studies that report the sensitivity and specificity of a test, performed at different locations and on different populations, and you summarise them by creating a summary ROC (SROC) curve.

Zhang and Ren, a meta-analysis of mammography diagnostic accuracy. Each dot is a study, with the size of dot proportional to sample size (between 50 and 500 cases). Lines reflect the SROC curve and the 95% confidence interval.

Well, it seems to me that a set of studies looks a lot like a group of humans tested on a diagnostic task. Maybe we should try to use the same method to produce SROC curves for readers? How about Esteva et al, the famous dermatology paper?

This is a model that best fits the reader results. If you compare it to the average (which was reported in the paper), you see that the average of sensitivity and specificity is actually bordering on the inner 95% CI of the fitted model, and only 4 dermatologists perform worse than the average by being inside that 95% CI line. It certainly seems like the SROC curve makes more sense as a summary of the performance of the readers than the average does.

So the approach looks pretty good. But is it hard? Will people actually use it?


Is it even research?

I initially just thought I’d write a blogpost on this topic. I am not certain it really qualifies as research, but in the end I decided to write a quick paper to present the idea to the non-blog-reading community.

The reason I felt this way is that the content of the paper is so simple. Meta-analysis and the methods to perform it are among the best understood parts of statistics. In fact, meta-analysis is generally considered the pinnacle of the pyramid of medical evidence.

Metanalysis is bestanalysis.

But this is why the idea is such a good solution in my opinion. There is nothing fancy, no new models to convince people about. It is just good, well-validated statistics. There are widely used packages in every major programming language. There are easily accessible tutorials and guidelines. The topic is covered in undergraduate courses.

So the paper isn’t anything fancy. It just says “here is a good tool. Use the good tool.”

It is a pretty short paper too, so all I will do here is cover the main highlights.


What and why?

In short, a summary ROC curve is a bivariate model fitted on the logit transforms of sensitivity and specificity. It comes in two main flavours, the fixed effects model and the random effects model, but all the guidelines recommend random effects models these days so we can ignore the fixed effects versions***.
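
To give a flavour of the machinery on the logit scale, here is a short sketch using the older Moses-Littenberg regression rather than the bivariate random-effects model the guidelines (and the paper) recommend; the per-reader counts are synthetic and the code is purely illustrative.

```python
# Rough illustration of an SROC fit on the logit scale. This uses the simpler
# Moses-Littenberg regression (D on S), NOT the recommended bivariate
# random-effects model, purely to keep the sketch short. Counts are synthetic.
import numpy as np

# Per-reader confusion-matrix counts: true pos, false neg, true neg, false pos.
tp = np.array([40, 35, 45, 30])
fn = np.array([10, 15,  5, 20])
tn = np.array([80, 90, 70, 95])
fp = np.array([20, 10, 30,  5])

sens = (tp + 0.5) / (tp + fn + 1)   # continuity-corrected proportions
spec = (tn + 0.5) / (tn + fp + 1)
fpr = 1 - spec

logit = lambda p: np.log(p / (1 - p))
expit = lambda x: 1 / (1 + np.exp(-x))

D = logit(sens) - logit(fpr)        # log diagnostic odds ratio
S = logit(sens) + logit(fpr)        # implicit threshold (operating point) axis

b, a = np.polyfit(S, D, 1)          # fit D = a + b*S; polyfit returns (slope, intercept)

# Back-transform to an SROC curve: sensitivity as a function of false positive rate.
fpr_grid = np.linspace(0.01, 0.99, 99)
sroc_sens = expit((a + (1 + b) * logit(fpr_grid)) / (1 - b))
print(sroc_sens[:5])
```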

When it comes to the nuts and bolts, there are a few main models that are used. I reference them in the paper, so check that out if you want to know more.

The “why do meta-analysis?” question is important. There are a couple of major benefits to this approach, but the biggest one by far is that we get reasonable estimates of variance in our summary measures.

See, when you average sensitivity and specificity, you calculate your standard deviations by pooling the confusion matrices across readers. Where before you had multiple readers, you now have one uber-reader. At this point, you can only account for variability across samples, not readers.
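
A toy simulation (entirely synthetic) illustrates the gap: the pooled "uber-reader" standard error reflects only case sampling, while the spread across readers is considerably larger.

```python
# Toy simulation (synthetic): pooling all readers into one confusion matrix
# discards between-reader variability, so the resulting standard error only
# reflects case sampling, not reader differences.
import numpy as np

rng = np.random.default_rng(1)
n_readers, n_cases = 10, 200
reader_sens = rng.uniform(0.6, 0.9, size=n_readers)   # readers genuinely differ

# Each reader reads the same diseased cases; 1 = correct detection.
calls = rng.binomial(1, reader_sens[:, None], size=(n_readers, n_cases))

per_reader = calls.mean(axis=1)
between_reader_se = per_reader.std(ddof=1) / np.sqrt(n_readers)

pooled = calls.mean()   # one "uber-reader" built from all decisions
pooled_se = np.sqrt(pooled * (1 - pooled) / calls.size)

print(f"between-reader SE of mean sensitivity: {between_reader_se:.4f}")
print(f"pooled (case-only) SE:                 {pooled_se:.4f}")
```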

In this table, adapted from Obuchowski in a book chapter I wrote, we see that the number of readers, when accounted for, has a huge impact on sample size and power calculations. Frankly, not taking the number of readers into account is methodologically indefensible.

SROC analysis does though, considering both the number of readers and the “weight” of each reader (how many studies they read). Compare this SROC curve re-analysing the results of Rajpurkar and Irvin et al to the one from Esteva et al above:

With only 4 readers, look how wide that confidence region is! If we draw a vertical line from the “average point” it covers a sensitivity range between 0.3 and 0.7, but in their paper they reported an F1 score of 0.387, with a 95% CI of 0.33 to 0.44, a far narrower range even accounting for the different metric.

Another nice thing about SROC curves is that they can clearly show results stratified by experience level (or other subgroups), even when there are lots of readers.

From Tschandl et al. The raw reader points are unreadable, but summarising them with SROC curves is clean and tidy.

There are a few other good points of SROC curves which we mention in the paper, but I don’t want to extend this blog post too much. Just read the paper if you are interested.


Just use SROCs!

That’s really all I have to say. A simple, off-the-shelf, easily applied method to more accurately summarise human performance and estimate the associated standard errors in reader studies, particularly of use for AI human-vs-machine comparisons.

I didn’t invent anything here, so I’m taking no credit^, but I think it is a good idea. Use it! It will be better^^!

You wouldn’t want to be methodologically indefensible, right?


* I’ll have more to say on this in a future post, suffice to say for now that this is actually how medicine works when you realise that doctors don’t make descriptive reports, they make decisions. Every statement made by a radiologist (for example) is a choice between usually two but occasionally three or four actual treatment paths. A radiologist who doesn’t understand the clinical implications of their words is a bad radiologist.

**This actually got me really nervous right after I posted the paper to arxiv (like, why has no-one thought of this?), so I email-bombed some friends for urgent feedback on the paper while I could still remove it from the processing list, but I got the all clear :p

*** I semi-justify this in the paper. It makes sense to me anyway.

^ Well, I will take credit for the phrase “Docs are ROCs”. Not gonna lie, it was coming up with that phrase that motivated me to write the paper. It just had to exist.

^^ For anyone interested, it still isn’t perfect. There are some reports of persistent underestimation of performance using SROC analysis in simulation studies. It also doesn’t really account for the fact most reader studies have a single set of cases, so the variance between cases is artificially low. But you can’t really get around that without making a bunch of assumptions (these are accurate empirical estimates), and it is tons better than what we do currently. And heck, it is good enough for Cochrane :p^^^

^^^ Of course, if you disagree with this approach, let me know. This is a preprint currently, and I would love to get feedback on why you hate it and everything about it, so I can update the paper or my friends list accordingly :p

Luke Oakden-Rayner is a radiologist in South Australia, undertaking a Ph.D in Medicine with the School of Public Health at the University of Adelaide. This post originally appeared on his blog here.

GE Healthcare’s AI tool helps clinicians intubate patients accurately and safely

An artificial intelligence tool developed by GE Healthcare twinned with a mobile X-ray device can help the placement of endotracheal tubes (ETTs), a necessary step for COVID-19 patients who require ventilation.

The new tool – part of GE’s Critical Care Suite 2.0 – helps bedside staff and radiologists assess patients before intubation and make sure ETTs are positioned correctly, which should reduce complications.

It also includes algorithms that help radiologists triage and prioritise critical cases, and automates processes to help cut average review times for X-rays, which can currently take up to eight hours even when flagged as urgent.

The company says up to a quarter of patients who are intubated outside of the operating room have misplaced ETTs on chest X-rays, which can lead to hyperinflation of the lungs, collapsed lung (pneumothorax), cardiac arrest and death.

GE Healthcare says the new tool is of particular value at the moment as the world is battling the coronavirus pandemic, as this has massively increased the demand for intubation and ventilation.

Overall, around 45% of all patients admitted to intensive care need to be intubated, and it is estimated that between 5% and 15% of COVID-19 cases require intensive care surveillance and intubation for ventilatory support.

Using the AI suite, ETTs are automatically identified in chest X-ray images, providing feedback to the clinician on positioning within seconds and warning them if the tube hasn’t been placed correctly. It will also quickly detect complications like pneumothorax, and can automatically send an alert to a radiologist along with the x-ray images for review.

“Seconds and minutes matter when dealing with a collapsed lung or assessing endotracheal tube positioning in a critically ill patient,” said Dr Amit Gupta, director of diagnostic radiography at University Hospital Cleveland Medical Centre in the US.

The algorithm has already shown its worth in COVID-19 cases, identifying cases of pneumothorax as well as barotrauma – tissue injury caused by a pressure-related change in body compartment gas volume, he added.

“Today, clinicians are overwhelmed, experiencing mounting pressure as a result of an ever-increasing number of patients,” said Jan Makela, president and CEO, Imaging, at GE Healthcare.

“The pandemic has proven what we already knew – that data, AI and connectivity are central to helping those on the front lines deliver intelligently efficient care.”

Critical Care Suite 2.0 and its five quality algorithms were developed using GE Healthcare’s Edison platform and are deployed on its AMX 240 mobile X-ray system.

GE Healthcare Unveils First X-Ray AI Algorithm to Assess ETT Placement for COVID-19 Patients

What You Should Know:

– GE Healthcare announced a new artificial intelligence
(AI) algorithm to help clinicians assess Endotracheal Tube (ETT) placements, a
necessary and important step when ventilating critically ill COVID-19 patients.

– The AI solution is one of five included in GE Healthcare’s Critical Care Suite 2.0, an industry-first collection of AI algorithms embedded on a mobile x-ray device for automated measurements, case prioritization, and quality control.


GE Healthcare today announced a new artificial intelligence (AI) algorithm to help clinicians assess Endotracheal Tube (ETT) placements, a necessary and important step when ventilating critically ill COVID-19 patients. The AI solution is one of five included in GE Healthcare’s Critical Care Suite 2.0, an industry-first collection of AI algorithms embedded on a mobile x-ray device for automated measurements, case prioritization, and quality control. GE Healthcare and UC San Francisco co-developed Critical Care Suite 2.0 using GE Healthcare’s Edison platform, which helps deploy AI algorithms quickly and securely. Critical Care Suite 2.0 is available on the company’s AMX 240 mobile x-ray system.

The on-device AI offers several benefits to radiologists and
technologists, including:

– ETT positioning and critical findings: GE Healthcare’s algorithms are a fast and reliable way to ensure AI results are generated within seconds of image acquisition, without any dependency on connectivity or transfer speeds.

– Eliminating processing delays: Results are then
sent to the radiologist while the device sends the original diagnostic image,
ensuring no additional processing delay.

– Ensuring quality: The AI suite also includes several quality-focused AI algorithms to analyze and flag protocol and field-of-view errors, as well as auto-rotate images on-device. By automatically running these quality checks on-device, it integrates them into the technologist’s standard workflow and enables technologist actions – such as rejections or reprocessing – to occur at the patient’s bedside and before the images are sent to PACS.

Impact of ETTs

Up to 45% of ICU patients, including severe COVID-19 cases, receive ETT intubation for ventilation. While proper ETT placement can be difficult, Critical Care Suite 2.0 uses AI to automatically detect ETTs in chest x-ray images and provides an accurate and automated measurement of ETT positioning to clinicians within seconds of image acquisition, right on the monitor of the x-ray system. In 94% of cases, the ET Tube tip-to-Carina distance calculation is accurate to within 1.0 cm. With these measurements, clinicians can determine if the ETT is placed correctly or if additional attention is required for proper placement. The AI-generated measurements – along with an image overlay – are then made accessible in a picture archiving and communication system (PACS).
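
As a purely illustrative sketch (not GE Healthcare's algorithm), once a model has localized the tube tip and the carina on the image, the reported measurement reduces to a pixel distance scaled by the detector's pixel spacing; the coordinates, spacing, and review threshold below are hypothetical.

```python
# Illustrative sketch only, not GE Healthcare's algorithm. Given detected pixel
# locations for the ETT tip and the carina, the tip-to-carina measurement is a
# pixel distance scaled by pixel spacing. All values below are made up.
import math

def tip_to_carina_cm(tip_px, carina_px, pixel_spacing_mm=0.14):
    """Euclidean distance between two image points, converted to centimetres."""
    dx = (tip_px[0] - carina_px[0]) * pixel_spacing_mm
    dy = (tip_px[1] - carina_px[1]) * pixel_spacing_mm
    return math.hypot(dx, dy) / 10.0

tip = (1024, 830)        # (x, y) pixel location of the ETT tip (hypothetical)
carina = (1010, 1180)    # (x, y) pixel location of the carina (hypothetical)

distance_cm = tip_to_carina_cm(tip, carina)
# Flag for clinician review if outside an illustrative acceptable window.
needs_review = not (2.0 <= distance_cm <= 7.0)
print(f"tip-to-carina distance: {distance_cm:.1f} cm, review needed: {needs_review}")
```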

Improper positioning of the ETT during intubation can lead
to various complications, including a pneumothorax, a type of collapsed lung.
While the chest x-ray images of a suspected pneumothorax patient are often
marked “STAT,” they can sit waiting for up to eight hours for a radiologist’s
review. However, when a patient is scanned on a device with Critical Care Suite
2.0, the system automatically analyzes images and sends an alert for cases with
a suspected pneumothorax – along with the original chest x-ray – to the
radiologist for review via PACS. The technologist also receives a subsequent
on-device notification to provide awareness of the prioritized cases.

“Seconds and minutes matter when dealing with a collapsed lung or assessing endotracheal tube positioning in a critically ill patient,” explains Dr. Amit Gupta, Modality Director of Diagnostic Radiography at University Hospital Cleveland Medical Center and Assistant Professor of Radiology at Case Western Reserve University, Cleveland. “In several COVID-19 patient cases, the pneumothorax AI algorithm has proved prophetic – accurately identifying pneumothoraces/barotrauma in intubated COVID-19 patients, flagging them to radiologist and radiology residents, and enabling expedited patient treatment. Altogether, this technology is a game-changer, helping us operate more efficiently as a practice, without compromising diagnostic precision. We soon will evaluate the new ETT placement AI algorithm, which we hope will be equally valuable tool as we continue caring for critically ill COVID-19 patients.”

Research shows that up to 25 percent of patients intubated
outside of the operating room have misplaced ETTs on chest x-rays, which can
lead to severe complications for patients, including hyperinflation,
pneumothorax, cardiac arrest and death. Moreover, as COVID-19 cases climb, with
more than 50 million confirmed worldwide, anywhere from 5-15 percent require
intensive care surveillance and intubation for ventilatory support.

Providence Taps Nuance to Develop AI-Powered Integrated Clinical Intelligence

What You Should Know:

– Nuance Communications, Inc. and one of the country’s largest health systems, Providence, announced a strategic collaboration, supported by Microsoft, dedicated to creating better patient experiences and easing clinician burden.

– The collaboration centers around Providence harnessing
Nuance’s AI-powered solutions to securely and automatically capture
patient-clinician conversations.

– As part of the expanded partnership, Nuance and
Providence will jointly innovate to create technologies that improve health
system efficiency by reducing digital friction.


Nuance® Communications, Inc. and Providence, one of the largest health systems in the
country, today announced a strategic collaboration to improve both the patient
and caregiver experience. As part of this collaboration, Providence will
build on the long-term relationship with Nuance to deploy Nuance’s cloud
solutions across its 51-hospital, seven-state system. Together, Providence and
Nuance will also develop integrated clinical intelligence and enhanced revenue cycle solutions.

Enhancing the Clinician-Patient Experience

In partnership with Nuance, Providence will focus on the clinician-patient experience by harnessing a comprehensive voice-enabled platform that, with patient consent, uses ambient sensing technology to securely and privately listen to clinician-patient conversations while offering workflow and knowledge automation to complement the electronic health record (EHR). This technology is key to enabling physicians to focus on patient care and spend less time on the increasing administrative tasks that contribute to physician dissatisfaction and burnout.

“Our partnership with Nuance is helping Providence make it easier for our doctors and nurses to do the hard work of documenting the cutting-edge care they provide day in and day out,” said Amy Compton-Phillips, M.D., executive vice president and chief clinical officer at Providence. “The tools we’re developing let our caregivers focus on their patients instead of their keyboards, and that will go a long way in bringing joy back to practicing medicine.”

Providence to Expand Deployment of Nuance Dragon Medical
One

To further improve healthcare experiences for both providers
and patients, Providence will build on its deployment of Nuance Dragon
Medical One with the Dragon Ambient eXperience (DAX). Innovated by Nuance and
Microsoft, Nuance DAX combines Nuance’s conversational AI technology with
Microsoft Azure to securely capture and contextualize every word of the patient
encounter – automatically documenting patient care without taking the
physician’s attention off the patient.

Providence and Nuance to Jointly Create Digital Health
Solutions

As part of the expanded partnership, Nuance and Providence
will jointly innovate to create technologies that improve health system
efficiency by reducing digital friction. This journey will begin with the
deployment of CDE One for Clinical Documentation Integrity workflow management,
Computer-Assisted Physician Documentation (CAPD), and Surgical CAPD, which
focus on accurate clinician documentation of patient care. Providence will also
adopt Nuance’s cloud-based PowerScribe One radiology reporting solution to
achieve new levels of efficiency, accuracy, quality, and performance.

Why It Matters

By removing manual note-taking, Providence enables deeper
patient engagement and reduces burdensome paperwork for its clinicians. In
addition to better patient outcomes and provider experiences, this
collaboration also serves as a model for the deep partnerships needed to
transform healthcare.

Mobile Point-of-Care Ultrasound Is Now A Frontline Warrior in Pandemic

Diku Mandavia, M.D., SVP, Chief Medical Officer at FUJIFILM Sonosite

Health authorities need to prioritize the delivery and repurposing of mobile point-of-care ultrasound machines, which have proven to be reliable, affordable, and effective in saving the lives of coronavirus patients.


Most Americans are familiar with ultrasound technology from the scans done to check on the status of the fetus during pregnancy.  

But far fewer are aware of how valuable mobile versions of these units have become in America’s emergency rooms, where they almost instantly detect and record everything from internal bleeding and abdominal pain to life-threatening infections.

In recent days, mobile units have suddenly become a critical global technology for scanning the chests of coronavirus victims to precisely monitor the condition of their lungs.  

We now need to raise the status of these life-saving diagnostic machines, finding and rushing them to the frontlines of hospitals where coronavirus patients are triaged and cared for.

Even before the COVID-19 pandemic, there had been elevated global demand for these mobile – or “point-of-care” – units that can be brought to the bedside. Some are small handheld devices that instantly connect to a smartphone.

International relief organizations and national health authorities have issued urgent calls to manufacturers in the last few days for any surplus or underutilized ultrasound equipment capable of performing lung scans.  They are also seeking point-of-care ultrasound units that are underutilized or are in “retired” inventory at clinics and hospitals around the world, units that can be adapted for use in lung ultrasound (LU) diagnosis.  

Sales and maintenance records from manufacturers may also be used to track down operational LU machines that are already in-country and can be drafted into urgent service during the pandemic.

Because the most desired devices are mobile and move from patient to patient, very strict hygienic procedures must be carefully monitored and managed.  

As with so many technical innovations over the past half-century, taking the technology mobile was originally funded by one of the smallest but most consequential units in our U.S. military arsenal: Defense Advanced Research Projects Agency (DARPA).  

DARPA didn’t invent ultrasound, but it did help shrink the technology to mobile size so that frontline military physicians could take the technology closer to the battlefield and save the lives of wounded warriors.  These mobile units, now ubiquitous in ICUs and in emergency rooms around the world, are much cheaper and lower risk than radiography (x-ray) units which are difficult to maneuver to the bedside of the critically ill especially with diseases as transmittable as a coronavirus.  

It turns out that these popular mobile units provide particularly precise views of distressed lungs – important tools to have when doctors need to see the exact progression of the COVID-19 virus in infected patients who are quarantined and unable to be safely moved to a remote radiology suite.  COVID-19 often presents as a respiratory invader that causes acute inflammation in the lungs, primarily as a patchy, interstitial infiltrate – a condition recognized with ultrasound imaging.  

A small but important study, published in Radiology by the Radiological Society of North America (RSNA) on March 13, comes from doctors also on the coronavirus frontlines in Italy.

That report – covering the records of emergency physicians at Ospedale Guglielmo da Saliceto in Piacenza, Italy – claims a “strong correlation” between lung ultrasound and CT findings in patients with COVID-19 pneumonia, leading the investigators to “strongly recommend the use of bedside [ultrasound] for the early diagnosis of COVID-19 patients who present to the emergency department.”

Pneumonia and respiratory failure are a principal cause of death among COVID-19 patients.  What we can assess in a lung ultrasound right now in these patients is the involvement of both lungs with basically patchy findings.  Typically distinctive to the disease are ultrasonographic B lines – wide bands of hyperechoic artifacts that are often compared to the beam of a flashlight being swung back and forth.

If there is significant consolidation, diagnostics may also capture imagery of hepatization of the lung.  This information is critical to monitoring and treating pneumonia.

For these patients and hospitals in crisis, mobile lung-ultrasound units are also scanning far more patients in a short period of time than more elaborate diagnostic imaging technologies, while delivering an accurate, actionable answer on the presence and degree of infection.  

Lung ultrasound is a critical application of the point-of-care mobile units in the emergency rooms battling COVID-19 around the world, but patients who are very sick with COVID-19 may also need venous access under ultrasound guidance to administer fluids and medications.  Or they may be in shock and need a shock assessment, for which point-of-care ultrasound in COVID-19 resuscitation bays and ICUs is also very useful.

The COVID-19 pandemic is expected to get worse in the U.S. before it gets better.  New York, California, and the State of Washington have set up military-style hospitals  – 250-bed infirmaries that will be fully functional hospitals for COVID-19 patients – and will be placing point-of-care ultrasound there and elsewhere where it would be much more difficult to put a CT scanner.

The challenge in meeting that urgent goal is whether we can find and deploy enough functional lung ultrasound devices to COVID-19 responders in the next several weeks to save lives that are already in danger and restore COVID-19 patients alive and well to families desperate for medical rescue.  I believe we can and will.


About Diku Mandavia, M.D.

Diku Mandavia, M.D. is the Senior Vice President, Chief Medical Officer, at FUJIFILM Sonosite Inc., and FUJIFILM Medical Systems U.S.A., Inc.  He completed his residency in emergency medicine at LAC+USC Medical Center in Los Angeles where he still practices part-time. He is a Clinical Associate Professor of Emergency Medicine at the University of Southern California.


Will Nanox Disrupt The X-ray Systems Market?

Will Nanox Disrupt the X-ray Systems Market?

With its share price falling from more than $66 to less than $24, September was a tumultuous month for Nanox.

On August 25th, the medical imaging start-up closed its initial public offering, having raised $190m from the sale of 10,555,556 ordinary shares at a price of $18 each. Money poured in as investors were sold on Nanox’s cold cathode x-ray source and the subsequent reduction in costs that it would enable, as well as the vendor’s pay-per-scan pricing model that would let the company access new, untapped markets.

A week later the shares were being traded for almost double their opening amount, and by the 11th of September, they had reached a peak of $66.67. This meteoric rise soon came to an end though, as activist short-seller Andrew Left of Citron Research published a report comparing the Israeli start-up to disgraced medical testing firm Theranos and asserted that the company’s shares were worthless.

Other commentators added to Left’s criticism, causing investors to abandon the stock. Class action lawsuits followed, with law firms hoping to represent shareholders over the imaging company’s alleged fabrication of commercial agreements and misleading of investors.

Nanox defended itself against the Citron attack, insisting that the allegations in the report were ‘completely without merit’, but the extra scrutiny and threat of legal repercussions have left the share price continuing to plummet, falling to $23.52 at month’s end.

Vendor Impact

– New business and payment models could capture demand from new customers in untapped and emerging markets

– Vendors should be ready to react. A successful launch of Nanox’s X-ray system could channel more focus and resources toward the portfolio of low-end X-ray systems

– Once established, recurring services are hard to displace

– However, brand loyalty and hard-earned reputations aren’t easily forgotten

Market Impact

– Potential for disruptive technology to expand access to medical imaging and provide affordable X-ray digital solutions, delivering a significant and rapid overall market expansion

– New customer bases could have less expertise and a lack of trained professionals – ease of use becomes a critical feature

– Where X-ray system price is a battleground, and a fundamental factor driving purchasing decisions, Nanox’s proposed ecosystem offers revenue-generating opportunities

The Signify View

Assessing the viability and long-term potential of any business is a dangerous game, doubly so if it depends on a closely guarded game-changing technological innovation as is the case with Nanox. Fortunes are won and lost on a daily basis by investors, speculators, and gamblers trying to get in on the ground floor of the next ground-breaking company after being convinced by slick presentations and thorough prospectuses.

There is likely merit in some of the arguments being put forward by those on either side of the Nanox debate. For example, the lack of peer-reviewed journal articles about the new technology raises legitimate questions. But the skepticism around the feasibility of Nanox’s technology seems to ignore that research into cold-cathode x-ray generation, the cornerstone of Nanox’s offering, has been ongoing for many years and isn’t as out of the blue as the naysayers may suggest.

Regardless of these and other specifics in the ongoing fracas between short-sellers, Nanox, investors, and lawyers, all of whom have their own agendas, the voracity with which the stocks were initially purchased shows the keen appetite investors have for a company that would bring disruption to the X-ray systems market.

When delving into Signify Research’s data on this market, it is easy to see why. Across many developed and mature regions, the market has become relatively stable. It is one of replacement and renewal rather than selling to new customers and increasing the accessibility of X-ray imaging. Developed markets do continue to drive growth for X-ray manufacturers to some extent, particularly as a result of digitalization and favored reimbursement for digital X-ray imaging.  However, by and large, the market remains broadly flat, with a CAGR of just 2.7% forecast for the period 2018-2023.


Figure 1: While there are some growth areas, the X-ray market as a whole is very stable
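As a rough illustration of how modest that growth rate is, a 2.7% CAGR compounds to only about 14% total expansion across the five-year forecast window. The sketch below is pure arithmetic; the indexed 2018 base is a placeholder, not a Signify Research figure.

```python
# Illustration of what a 2.7% CAGR implies for the X-ray market, 2018-2023.
# The growth rate comes from the article; the base market size is a placeholder index.

def project_market(base_size: float, cagr: float, years: int) -> float:
    """Compound a starting value forward at `cagr` for `years` periods."""
    return base_size * (1 + cagr) ** years

base = 100.0  # indexed 2018 market size (placeholder, not an actual figure)
value_2023 = project_market(base, 0.027, 5)
print(f"2023 index: {value_2023:.1f}")  # ~114.2, i.e. roughly 14% total growth over five years
```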

New business

Nanox has strong ambitions to outperform this underwhelming outlook by utilizing its unique and more affordable technology to offer a relatively feature-rich system, dubbed the Arc, at a far lower price than existing digital X-ray systems. Competing on price is only one part of the equation, however.

After all, there are countries where, despite their economies of scale, the multi-national market leaders in medical imaging are unable to compete with domestic manufacturers, which are able to produce X-ray systems locally, with lower overheads, and no importation costs. Globally, there are also a large number of smaller imaging vendors, which have limited, yet low-cost offerings at the value end of the market, with this increased competition driving down average selling prices.

To differentiate itself further, Nanox also plans to launch with a completely new business model. Instead of traditional transactional sales, which see providers simply purchase and pay the full cost of the imaging system in one installment, use the system for the entire shelf life of the product and then replace with an equivalent model, Nanox plans to retain ownership of its machines, but charge providers to use them on a pay-per-scan basis.

There are some regions and some situations where legislation and other factors make this model unfeasible, so Nanox will also make its products available to purchase outright, as well as licensing its technology to other firms. However, the start-up’s focus is on offering medical imaging as a service.

The company says that this shift from a CapEx to a managed service approach means that instead of competing with established vendors over market share, it will be able to expand the total market, enabling access to imaging systems in settings where they have been hitherto absent, with urgent care units, outpatient clinics, and nursing homes being suggested as targets.

According to the Nanox investor’s prospectus, current contracts already secured (although the legitimacy of these deals is one of the issues raised by the short-sellers) feature a $40 per scan cost, of which Nanox receives $14 – although the exact figure varies depending on regional economics. The contracts feature a minimum service fee equivalent to seven scans a day, although the target is somewhat higher, with each machine expected to be used to produce 20 scans a day, for 23 days a month.

If Nanox’s order book is as valid as the company insists, and it already has deals for 5,150 units in place, each system will consequently be bringing in a minimum of $27,048 per year, for a minimum total revenue of roughly $139m. If the systems are used 20 times a day as Nanox hopes, that means almost $400m in sticky recurring revenues annually. To put that in perspective, one of the market leaders for X-ray imaging systems in 2018 was Siemens Healthineers, which turned over almost $2.8bn across its general radiography, fluoroscopy, mammography, mobile, angiography, and CT imaging divisions.
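Those headline numbers follow directly from the contract terms quoted above. A minimal sketch of the arithmetic, using only the figures reported in the prospectus:

```python
# Back-of-the-envelope check of the pay-per-scan figures quoted above.
# All inputs come from the article; everything else is simple arithmetic.

NANOX_SHARE_PER_SCAN = 14    # USD retained by Nanox out of the $40 scan fee
MIN_SCANS_PER_DAY = 7        # contractual minimum service fee
TARGET_SCANS_PER_DAY = 20    # Nanox's stated utilisation target
DAYS_PER_MONTH = 23
UNITS = 5_150                # systems claimed in the order book

def annual_revenue_per_unit(scans_per_day: int) -> int:
    return NANOX_SHARE_PER_SCAN * scans_per_day * DAYS_PER_MONTH * 12

min_per_unit = annual_revenue_per_unit(MIN_SCANS_PER_DAY)              # 27,048
min_fleet = min_per_unit * UNITS                                       # ~139.3m
target_fleet = annual_revenue_per_unit(TARGET_SCANS_PER_DAY) * UNITS   # ~398m

print(f"Minimum per unit:      ${min_per_unit:,}")
print(f"Minimum fleet revenue: ${min_fleet:,}")
print(f"Target fleet revenue:  ${target_fleet:,}")
```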

With an order book that is, on the face of it, this healthy, there have been questions as to why Nanox went public at all, but the listing may be required for this business model to work. The Israeli vendor says that the vast majority of the investment will be sunk into producing the Nanox scanners, and the associated manufacturing capacity. This is necessary because unlike other imaging companies selling systems on a CapEx basis, Nanox will receive nothing for delivering scanners to customers. Revenue is generated later as the systems are used.

This means that the company is effectively fronting the initial cost of the systems, so it needs to get as many units installed and in use as quickly as possible to recoup its initial costs. Unlike other vendors, it cannot rely on sales of a first tranche to fund the second and so on; in its new managed service model, it is better to mass produce everything at once.

Open to exposure

There is, however, nothing to stop other, established players from switching to a similar model. This should be of concern to Nanox; after all, Siemens Healthineers and GE Healthcare already have the manufacturing capacity and capital ready to offer products in a similar way.

And of course, Nanox shouldn’t underestimate the difficulty of disrupting a long-established market. Despite ample funding and solid products, other companies are still struggling to make an impact in other markets. For example, Butterfly Network, a vendor offering an affordable handheld ultrasound solution, has a valuation of over $1 billion and has received more than $350m in funding.

In 2019, the company turned over $28m, enough to make it the market leader in the nascent handheld category, but in a global ultrasound market worth almost $7bn, at present, it is little more than a drop in the ocean.

Nanox hopes that its new business model will be disruptive by opening up the market to a far greater range of customers than are currently served. A nursing home, for example, might not be able or willing to allocate the cost of a CT machine from a single year’s budget; but if that cost is spread out as the scanner is used, and particularly if it is passed on to patients at the time of use, on-site imaging suddenly becomes a far more feasible proposition.

What’s more, if a company is able to increase its product’s user base, there is a strong possibility of upselling additional services, software, and tools. These could be things like AI modules that increase workflow efficiency, or tools that aid clinical decision making, the latter being especially pertinent because the pricing model could allow machines to be installed in new settings that lack on-site expertise.

Beyond that, there is also ample scope for an imaging vendor to entice a customer into its ecosystem with a scanner that has no cost at the point of delivery, before getting it to commit to its own PACS and other IT systems. Being able to fully exploit these new customers relies, in the first instance, on being able to get a foot in the door. That is why an imaging service model could be so beneficial, even if the returns on the scans themselves aren’t especially lucrative.

Features first

While adopting a new business model and securing revenue from add-ons and upselling would help established vendors counter the price differential Nanox proposes, if we are to take the start-up at its word, matching its feature set might be another matter entirely.

As well as just providing imaging hardware, Nanox is offering a service that, at face value, is more complete. The Arc automatically uploads all imaging data to its cloud SaaS platform. This platform would initially use AI systems to ‘provide first response and decision assistive information’ before radiologists could provide final diagnoses that could then be shared with hospitals in real-time.


Figure 2: With teleradiology read volumes increasing, it makes sense that the necessary hardware comes baked into the Arc

There is currently limited information available about the exact nature of the so-called Nanox.CLOUD and its integration with the Arc, although several assumptions can be made:

– Firstly, although built-in connectivity is being touted as a feature with clinical benefits, its inclusion is as likely to be a necessity as a design choice, given that Nanox presumably needs to communicate with the systems in order to track scan volumes and bill accordingly, or, more drastically, to render a system inoperable if customers don’t keep up with payments.

– Another assumption that can be made is that the full suite of tools wouldn’t be included in the basic pay-per-scan fee. Signify’s Teleradiology World – 2020 report found that the average revenue per read for a teleradiology platform in North America, for example, is $24.40 in 2020. As such, teleradiology services would likely only be offered at an additional cost, creating another revenue stream for Nanox.

– Another sticking point could be Nanox’s promise to enable the integration of its cloud into existing medical systems via APIs. While well and good in theory, the competitiveness, complexity, and proprietary nature of many medical imaging workflows, combined with the fact that many vendors have absolutely no incentive to make integration easy for a newcomer, mean that in practice it is likely to be either a prohibitively expensive or a frustratingly limited offering. This is one area where established vendors, which already offer comprehensive medical imaging packages, have a distinct advantage.

Back down to Earth

The short positions promoted by commentators including Citron Research and Muddy Waters Research postulate that the Nanox.ARC scanner isn’t real. There are some legitimate questions, but running through their papers is also an attitude that Nanox’s claims are simply implausible, whether that is because it has an R&D budget a fraction of the size of GE’s, or because anonymous radiologists unrelated to the company haven’t seen anything like it before.

It is worth remembering, though, that these short sellers will benefit financially if Nanox slumps. Nanox, conversely, is obviously financially incentivized to promote its technology and its potential, and it wouldn’t be the first company to present the limited fruits of its start-up labor in a flattering light.

As so often happens in these he-said, she-said situations, the truth could well lie somewhere between the two extremes. In this instance, even if Nanox fails to deliver on some of its more impressive promises, the fact is that it has suggested bringing a whole new customer base into play and has laid out a strategy for selling to them.

With that being the case, for a big vendor the issue of whether Nanox is legitimate almost becomes moot; the focus should be on what these other customers require, how to get them into their product ecosystems, and what add-on products and additional services they can feasibly sell them at a later date.

If nothing else, the entire Nanox furor shows that to achieve growth in mature markets, a vendor’s innovation needs to extend beyond its products.


About Alan Stoddart

Alan Stoddart is the Editor at Signify Research, a UK-based market research firm focusing on health IT, digital health, and medical imaging. Alan joined Signify Research in 2020, using his editorial expertise to lead on the company’s insight and analysis services. 

Making the Case: Why Pagers and Smartphones Should Wed

Making the Case: Why Pagers and Smartphones Should Wed
Fred Lizza, CEO at Statum Systems

Clinicians in healthcare settings typically have information coming at them from all directions, at all times, and often with little distinction as to the level of urgency. It makes for inefficiency and confusion for today’s busy doctor.

In today’s hospital setting, that disjointed communication creates dissonance and distraction. Even though the world has gravitated to the ubiquitous use of smartphones, that’s not the dominant form of connection for physicians. The vast majority of hospitals still depend on paging systems to quickly reach doctors as they circulate through a facility and even outside it.

In fact, a study published in the Journal of Hospital Medicine in 2017 found that hospitals provided pagers to 80 percent of hospital-based clinicians, and more than half of all physicians in the survey reported that they most commonly received patient care-related communication by pager. Other communication methods reported in the study included unsecured standard text messaging (used by 53 percent of clinicians) and secure messaging applications (27 percent).

While paging systems seem like a throwback form of technology, they have a history of providing reliable connections between clinicians in hospital settings. They operate on a frequency that is less prone to interference, and they travel significantly farther than messages traveling on cellular networks or Wi-Fi. That means pager signals reach hospital areas that are likely to have bad reception, such as radiology departments or basements. In addition, pager signals are not susceptible to surges in demand or network overload situations, which may occur during emergencies.

However, many hospitals are taking steps to resolve some of these issues. For example, a variety of technologies, such as repeaters, range extenders, or boosters, can improve coverage in challenging areas for both Wi-Fi and cellular networks.

Even so, pagers – a technology that was patented in 1949 and first used in New York City’s Jewish Hospital – are now duplicative devices that do not match the capability of the smartphones physicians rely on. Many report that it’s frustrating to have to carry a separate paging device that does not fully meet their communication needs.

Pagers don’t work like physicians need them to. For example, it’s frustrating to receive a page, then return the call as requested, only to find that the doctor or nurse who initiated the page is no longer on duty or otherwise inaccessible. That typically requires a message to voicemail or further calls to find out how to reach the other clinician. Communication that could be handled in two minutes with a smartphone could take as much as half an hour to complete with a pager-based system. And that interferes with other work that a clinician should be accomplishing during hospital rounds.

Here’s one real-life example from a surgeon at a major Boston-area hospital. The doctor needed to reach a radiology technologist after regular work hours to get post-surgery X-ray images of a patient uploaded to another EHR system. The physician eventually called the technologist’s pager number, but there was no way to confirm that the message was left or even that the page went through. The physician then asked a nurse to call the technologist’s pager number on his behalf, but still had no assurance that the call went through. Finally, the technologist returned the call after 35 minutes and multiple phone calls.

Paging systems also have security shortcomings. Many pagers are not fully secure, exposing messages sent over a system to anyone who can tap into the frequency being used. As a result, many pagers and pager messaging systems are not HIPAA compliant, exposing hospitals to potential liability or even hacking or service attacks that could impact communications.

To improve efficiency and security, healthcare organizations need to gravitate toward an all-encompassing medical communications system that captures all pager-like messages and seamlessly incorporates them into a collaboration platform that does not rely on store-and-forward functionality.

Over recent years, clinicians have come to accept and widely use smartphones as a form factor, and their multitasking capability enables clinicians to handle more than one task at once – for example, communicate via text messages, consult an electronic health records system, and engage in verbal communication with one or more clinicians.

While the utility of the pager network remains and pager systems are likely to stay in use for the foreseeable future, it is important for healthcare systems to keep the technology but get away from the pager form factor. Transforming the system won’t get rid of pagers completely but will enable physicians to get pager messages in a different way, connecting the current highly accessible pager network directly to a medical professional’s smartphone.

Such a strategy combines the ease of use and convenience of a smartphone with the advantages of a pager network.
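As a purely illustrative sketch (not Statum Systems’ product; all names and the transport stub are hypothetical), the bridge described above might look like a gateway that receives pager traffic and re-delivers it to the clinician’s smartphone as a structured, trackable message:

```python
# A minimal sketch of the "keep the pager network, change the form factor" idea:
# pager traffic is received by a gateway and re-delivered to the clinician's
# smartphone as a structured message. Names and transport are hypothetical.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PagerMessage:
    sender: str        # who initiated the page
    recipient: str     # clinician being paged
    body: str
    urgent: bool = False

def deliver_push(payload: dict) -> None:
    # Placeholder transport: print instead of calling a real, HIPAA-compliant push service.
    print(f"PUSH -> {payload['to']}: {payload['text']} ({payload['priority']})")

def forward_to_smartphone(msg: PagerMessage) -> dict:
    """Wrap a raw page in metadata a mobile collaboration app can act on."""
    payload = {
        "to": msg.recipient,
        "from": msg.sender,
        "text": msg.body,
        "priority": "high" if msg.urgent else "normal",
        "received_at": datetime.now(timezone.utc).isoformat(),
        "requires_read_receipt": True,  # unlike a pager, delivery can be confirmed
    }
    deliver_push(payload)
    return payload

forward_to_smartphone(PagerMessage("Dr. Lee (Radiology)", "Dr. Patel", "Post-op images uploaded", urgent=True))
```

The point of the wrapper is the metadata: priority, timestamps, and read receipts are exactly what the bare pager form factor cannot provide.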


About Fred Lizza

Fred Lizza is CEO of Statum Systems, a developer of advanced mobile collaboration platforms geared to caregivers. He was previously CEO of StrategicClaim, an insurance claims platform, and Freestyle Solutions, an e-commerce leader. Fred earned his MBA from Harvard University.

Thriving in a Value-Based Care Environment: Impacting Outcome-to-Cost Ratio

Thriving in a value-based care environment: impacting the outcome-to-cost ratio
Jerry Carlson, Product Support Manager, BG DI BU IC Sales, Dunlee

As the COVID-19 pandemic creates surges in acute care, many imaging departments are experiencing a decrease in volume as patients defer or cancel non-urgent appointments and surgeries. The impact makes it painfully obvious that, because imaging departments rely on a fee-for-service model, finances suffer when volume is down. As an aspect of healthcare that has historically been hyper-focused on volume, radiology had been slow to adopt a value-based care approach even before the pandemic. Despite the hurdles COVID-19 has presented, the rationale behind value-based care remains – there is a need to drive improved patient outcomes at a lower cost – and healthcare reimbursement will continue to shift, encouraging quality care and enhanced patient experiences. Radiology can take an important role in realizing this transformation, influencing the entire process of early diagnosis, efficient treatment, and follow-up care.

So how can imaging departments thrive when confronted with a value-based care model? One way is to make sure referrers value radiologists’ expertise as part of the care team. Active participation in care team discussions, as well as case study presentations, can demonstrate the extent to which imaging affects outcomes. Imaging departments can also invest in building referrals for areas where imaging intersects more directly with care, such as oncology. But perhaps the most direct way is to focus on an area over which the imaging department has the most control: the cost-effective use of resources. The strategies chosen today could help or hurt a practice in the future, and departments should look toward reliable technology that delivers consistent results and allows staff to focus less on technical issues and more on patient care.

An efficient department with the right mix of technology can thrive in a value-based healthcare environment. Answering these four questions can guide your technology strategy and help you weather the pandemic disruption and the continuing adoption of value-based care:

Are your imaging systems appropriate for your patient demand? 

As it relates to value-based care, you can expect the future to entail less “confirmation” imaging and more investigational, prevention-focused imaging. However, different imaging solutions have different purposes; while confirmation CT studies don’t demand especially high performance, investigational imaging will often require more sophisticated systems with the image quality and performance to support complex studies and confident diagnoses, and potentially even to spot incidental findings that head off health issues down the line. When purchasing new systems, consider the type of studies that make up the majority of your business, as well as new areas in which you’d like to increase expertise and referrals.

How cost-effective are your imaging technology operations? 

Consider every aspect of your operation to uncover opportunities to decrease costs without affecting quality. For best results, involve the entire imaging department team in these explorations. One possible budget drain is consumables. Sometimes the easiest way to service your car is by bringing it to the dealer, but that’s not always the most cost-effective option. The same goes for imaging technology; be sure to consider third-party options in addition to Original Equipment Manufacturer (OEM) parts. Today, alternate parts are available for almost every piece of equipment in your organization, even technically sophisticated components such as X-ray tubes.

Is your technology reliable? 

Speed to diagnosis may impact patient outcomes.  Your referrers are looking for a quick turnaround of imaging studies. Highly advanced technology with reliable uptime can help you become a partner of choice, and reduce time spent maintaining equipment. For example, radiation oncology depends on CT for treatment planning, and oncologists need radiology partners who have CT systems that are dependable, integrate easily into their workflow, and do not distract from patient care. Even a small change, such as CT tubes that use highly reliable liquid metal bearings to eliminate the need to wait for tube cooling between studies, will impact your throughput and thus your ability to meet referrers’ needs. 

Are you putting unnecessary stress on your imaging systems? 

Educate all system users about manufacturer-recommended procedures for system use and upkeep to keep your systems running at high performance. For example, shutting the system down by turning off the power, rather than by following manufacturer-recommended procedures, places unnecessary stress on components that need time to cool.

While it’s important to take a measured approach as we navigate the repercussions of COVID-19, now is the time to begin adapting to value-based care. As the pandemic has taught us, a nimble imaging department can adapt to changing circumstances and create lasting value. Revisit these questions frequently, because consistent assessment and vigilance is key to a department’s success.

AWS, PHDA Collaborate to Develop Breast Cancer Screening and Depression Machine Learning Models

AWS, PHDA Collaborate to Develop Breast Cancer Screening and Depression Machine Learning Models

What You Should Know:

– Amazon Web Services (AWS) and the Pittsburgh Health Data Alliance (PHDA) announce a collaboration to produce more accurate machine learning models for breast cancer screening and depression.

– In work funded through the PHDA-AWS collaboration, a research team led by Shandong Wu, an associate professor at the University of Pittsburgh Department of Radiology, is using deep-learning systems to analyze mammograms in order to predict the short‐term risk of developing breast cancer. 

– A team of experts in computer vision, deep learning,
bioinformatics, and breast cancer imaging, including researchers from the
University of Pittsburgh Medical Center (UPMC), the University of Pittsburgh,
and Carnegie Mellon University (CMU), are working together to develop a more
personalized approach for patients undergoing breast cancer screening.


Last August, the Pittsburgh Health Data Alliance (PHDA) and Amazon Web Services (AWS) announced a new collaboration to advance innovation in areas such as cancer diagnostics, precision medicine, electronic health records, and medical imaging. One year later, the AWS collaboration with the Pittsburgh Health Data Alliance is beginning to pay dividends with new machine learning innovation.

Researchers from the University of Pittsburgh Medical Center
(UPMC), the University of Pittsburgh, and Carnegie Mellon University (CMU),
who were already supported by the PHDA,  received additional support
from  Amazon Research Awards to use machine learning
techniques to study breast cancer risk, identify depression markers, and
understand what drives tumor growth, among other projects.


Accurate Machine Learning Models for Breast Cancer Screening and Depression

In work funded through the PHDA-AWS collaboration, a
research team led by Shandong Wu, an associate professor in the University of
Pittsburgh Department of Radiology, is using deep-learning systems to analyze
mammograms in order to predict the short‐term risk of developing breast
cancer.  A team of experts in computer vision, deep learning,
bioinformatics, and breast cancer imaging are working together to develop a
more personalized approach for patients undergoing breast cancer screening.

Wu and his colleagues collected 452 de-identified normal
screening mammogram images from 226 patients, half of whom later developed
breast cancer and half of whom did not. Leveraging AWS tools, such as
Amazon SageMaker,
they used two different machine learning models to analyze the images for
characteristics that could help predict breast cancer risk. As they reported to
the American Association of Physicists in Medicine, both
models consistently outperformed the simple measure of breast density, which
today is the primary imaging marker for breast cancer risk. The team’s
models demonstrated between 33% and 35% improvement over this existing
measure, based on metrics that incorporate sensitivity and specificity.
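As an illustration only (this is not the research team’s code, and the data below are synthetic placeholders), the kind of comparison described above can be expressed with a metric such as the area under the ROC curve, which folds sensitivity and specificity into a single number:

```python
# A minimal sketch of comparing a deep-learning risk score against a
# breast-density-style baseline using AUC, a metric that combines
# sensitivity and specificity. Data are synthetic, for illustration only.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy labels: 1 = later developed cancer, 0 = did not (the study used 226 patients).
y = rng.integers(0, 2, size=226)

# Placeholder scores: a weak baseline and a model score more correlated with the outcome.
baseline_score = rng.random(226) + 0.1 * y
model_score = rng.random(226) + 0.4 * y

auc_baseline = roc_auc_score(y, baseline_score)
auc_model = roc_auc_score(y, model_score)
print(f"Baseline AUC: {auc_baseline:.3f}, model AUC: {auc_model:.3f}")
print(f"Relative improvement: {(auc_model - auc_baseline) / auc_baseline:.1%}")
```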


Why It Matters

“This preliminary work demonstrates the feasibility and promise of applying deep-learning methodologies for in-depth interpretation of mammogram images to enhance breast cancer risk assessment,” said Dr. Wu. “Identifying additional risk factors for breast cancer, including those that can lead to a more personalized approach to screening, may help patients and providers take more appropriate preventive measures to reduce the likelihood of developing the disease or catching it early on when interventions are most effective. “


Tools that could provide more accurate predictions from screening images could be used to guide clinical decision making related to the frequency of follow-up imaging and other forms of preventative monitoring. This could reduce unnecessary imaging examinations or clinical procedures, decreasing patients’ anxiety resulting from inaccurate risk assessments, and cutting costs.

Moving forward, researchers at the University of Pittsburgh
and UPMC will pursue studies with more training samples and longitudinal
imaging data to further evaluate the models. They also plan to combine deep
learning with known clinical risk factors to improve upon the ability to
diagnose and treat breast cancer earlier.


Second Project to Develop Biomarkers for Depression

In a second project, Louis-Philippe Morency, associate
professor of computer science at CMU, and Eva Szigethy, a clinical researcher
at UPMC and professor of psychiatry, medicine, and pediatrics at the University
of Pittsburgh, are developing sensing technologies that can automatically measure
subtle changes in individuals’ behavior — such as facial expressions and use of
language — that can act as biomarkers for depression.

These biomarkers will later be compared with the results of
traditional clinical assessments, allowing investigators to evaluate the
performance of their technology and make improvements where necessary. This
machine learning technology is intended to complement the ability of a
clinician to make decisions about diagnosis and treatment.  The team is working with a gastrointestinal-disorder
clinic at UPMC, due to the high rate of depression observed in patients with
functional gastrointestinal disorders.

This work involves training machine learning models on tens
of thousands of examples across multiple modalities, including language (the
spoken word), acoustic (prosody), and visual (facial expressions). The
computational load is heavy, but by running experiments in parallel on multiple
GPUs, AWS services have allowed the researchers to train their models in a few
days instead of weeks.
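For readers unfamiliar with multimodal modeling, a toy late-fusion classifier gives a sense of what “training across language, acoustic, and visual modalities” means computationally. This is not the CMU/UPMC model (their published approach integrates modalities inside a pretrained transformer), and the feature dimensions below are made up:

```python
# A minimal sketch of multimodal late fusion: features from language, acoustic,
# and visual streams are concatenated and fed to a small classifier that
# predicts a depression-related label. Dimensions are illustrative only.

import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, lang_dim=768, acoustic_dim=74, visual_dim=35, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(lang_dim + acoustic_dim + visual_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # single logit: depressed vs. not
        )

    def forward(self, lang, acoustic, visual):
        fused = torch.cat([lang, acoustic, visual], dim=-1)
        return self.net(fused)

# Dummy batch of 4 encounters with made-up feature vectors.
model = LateFusionClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 74), torch.randn(4, 35))
print(logits.shape)  # torch.Size([4, 1])
```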

A quick and objective marker of depression could help
clinicians more efficiently assess patients at baseline, identify patients who
would otherwise go undiagnosed, and more accurately measure patients’ responses
to interventions. The team presented a paper on the work, “Integrating
Multimodal Information in Large Pretrained Transformers”, at the July 2020
meeting of the Association for Computational Linguistics.


“Depression is a disease that affects more than 17 million adults in the United States, with up to two-thirds of all depression cases left undiagnosed and therefore untreated,” said Dr. Morency. “New insights to increase the accuracy, efficiency, and adoption of depression screening have the potential to impact millions of patients, their families, and the healthcare system as a whole.”


The research projects on breast cancer and depression represent just the tip of the iceberg when it comes to the research and insights the collaboration between the PHDA and AWS will ultimately deliver to improve patient care. Teams of researchers, healthcare professionals, and machine learning experts across the PHDA continue to make progress on key research topics, from assessing the risk of aneurysms and predicting how cancer cells progress to improving the complex electronic health records system.


CancerIQ Raises $5M to Expand Genetic Cancer Risk Assessment Platform

CancerIQ Raises $5M to Expand Genetic Cancer Risk Assessment Platform

What You Should Know:

– CancerIQ raises $5M in Series A funding led by HealthX Ventures to accelerate the growth of its genetic cancer risk assessment platform, which identifies and manages patients at high risk of cancer.

– CancerIQ’s technology enables hospitals to use genomics
to personalize the prevention and early detection of cancer.

– Two new hires recently joined CancerIQ’s newly formed
Integrated Products team from Epic, with the goal of advancing CancerIQ’s
integration with leading EMRs.


CancerIQ, an
enterprise precision health platform for cancer, today announced it has raised
$4.8M in Series A funding led by HealthX
Ventures
, a digital
health-focused
venture capital firm led by Mark Bakken, the founder and
former CEO of Nordic Consulting, the
largest Epic consulting firm. CancerIQ will use the funding to accelerate the
growth of its current offering and deepen integrations with EHRs and genetic
testing partners. Other institutional investors including Impact Engine and
Lightbank, co-founded by Eric Lefkofsky (founder of Tempus and co-founder of
Groupon) and Brad Keywell (co-founder of Groupon), also participated in the
round.


Genetic Cancer Risk Assessment Platform to Manage Patients at High Risk of Cancer

Founded in 2013, CancerIQ helps healthcare providers use genetic information to predict, preempt, and prevent cancer across populations in both urban and rural settings. By analyzing family history, running predictive risk models, and automating NCCN guidelines, CancerIQ empowers providers with the genetic expertise to prevent cancer or catch it early.

CancerIQ’s workflows enable health systems to execute
precision health strategies for patients predisposed to cancer, by:

• Identifying the 25 percent of the patient population that
qualifies for genetic testing

• Streamlining the genetic testing and counseling process,
via telehealth if required

• Managing high-risk patients over time

• Tracking outcomes at the individual and population levels

In addition, the platform allows hospitals to convert their
cancer risk assessment and management programs to virtual visits with its
complete telehealth cancer risk platform. CancerIQ has been rapidly adopted by
some of the top health systems in the country and fully integrates with
genetics laboratories, EHRs, and specialty software vendors to streamline
workflow, guide clinician decision making, achieve cost savings, and — most
importantly — improve patient outcomes.


Recent Traction/Milestones

CancerIQ will use the funding to accelerate the growth of
its current offering and deepen integrations with EHRs and genetic testing
partners. The company is experiencing a rapid growth year despite the COVID-19
crisis. Precision health has become an even more important technique for early
detection and prevention of disease. Over 80,000 patients have missed their
cancer screening appointments, but health systems are rapidly adopting CancerIQ
to triage and prioritize those most in need of urgent care.

“Partnering with HealthX allows us to build on the solid foundation we have serving over 70 institutions, and enable system-wide precision health,” said Feyi Ayodele, CEO, CancerIQ.


Addition of Strategic Hires to Epic Integration Team

Two new hires recently joined CancerIQ’s newly formed
Integrated Products team from Epic, with the goal of advancing CancerIQ’s
integration with leading EMRs:

Lisa Glaspie, Director of Integrated Products

– Glaspie spent 16 years at Epic, where she was directly involved in many integrations, data management, and conversion projects spanning a wide array of clinical and specialty system vendors, as well as custom in-house products. She will inform how CancerIQ can be deeply integrated across more clinical specialties.

Ashar Wasi, Integrated Product Specialist

– Wasi spent the last 11 years at Epic on the implementation
team for Epic’s radiology and cardiology modules. At CancerIQ, he will help
client teams understand different integration methods and provide context on
the scalability of CancerIQ’s FHIR-based approach.

“To engage primary care, radiology, and cardiology in precision health — we need our content to be deeply embedded in the EHR systems they already use. We’re excited to bring Lisa and Ashar on board for their domain expertise with Epic, so fewer high risk patients fall through the cracks,” added Ayodele.
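To give a flavor of what a FHIR-based integration involves (this is a generic illustration, not CancerIQ’s API; the endpoint and identifiers are placeholders), the family-history data used in cancer risk assessment can be pulled from an EHR with a standard FHIR R4 search:

```python
# A minimal sketch, assuming a generic FHIR R4 REST endpoint, of pulling
# family-history data for risk assessment. Base URL and patient ID are
# placeholders, not any vendor's actual service.

import requests

FHIR_BASE = "https://example-ehr.org/fhir"  # hypothetical FHIR server
PATIENT_ID = "12345"                         # placeholder patient identifier

def fetch_family_history(patient_id: str) -> list:
    """Search FamilyMemberHistory resources for one patient (standard FHIR search)."""
    resp = requests.get(
        f"{FHIR_BASE}/FamilyMemberHistory",
        params={"patient": patient_id},
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()
    return [entry["resource"] for entry in bundle.get("entry", [])]

if __name__ == "__main__":
    for record in fetch_family_history(PATIENT_ID):
        relation = record.get("relationship", {}).get("text", "unknown relation")
        conditions = [c.get("code", {}).get("text", "") for c in record.get("condition", [])]
        print(relation, conditions)
```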


Sophia Genetics launches AI tool to find COVID-19 ‘unknowns’

Swiss medical data specialist Sophia Genetics has launched a platform that will sift through data generated at more than 1,000 hospitals around the world to try to work out how the COVID-19 pandemic will evolve in the coming months and years.

The data mining tool will be used to try to unearth some of the many unknowns with the virus, using next-generation sequencing (NGS) to see how the genome of SARS-CoV-2 changes over time, along with patient genetic information, results of lung and CT scans, and other clinical data.

At the heart of the system are an artificial intelligence (AI) engine that conducts full-genome analysis of SARS-CoV-2 and a radiomics tool for lung data. Combined, they use machine learning to discover abnormalities predictive of disease evolution.

“There is a lot that we unfortunately still do not understand about the virus and its associated clinical manifestations,” Sophia Genetics’ chief medical officer Philippe Menu told pharmaphorum, noting that it’s likely due to different factors such as initial viral load at exposure, as well as viral and host genetic factors.

“Importantly, while we know that elderly people are unfortunately at much higher risk of suffering from severe forms of the disease, we do not understand why some healthy, young people can also go through a severe form of the disease while others remain completely asymptomatic,” he said.

Working out why COVID-19 manifests differently in different patient groups is a major “pain point” that if solved could help in resource allocation for healthcare systems, such as who should get early and aggressive treatment.

“We also do not know whether the virus sequence will evolve significantly through mutations as millions more people become infected,” said Menu.

“This could potentially have a major impact on the efficacy and safety of candidate vaccines and antiviral therapies. Being able to do a longitudinal tracking of the viral genomic evolution across geographies and time is therefore very important.”

The new tool can be used by labs to support their own COVID-19 research projects on their local patient base, and in turn could be used by pharma companies that are developing candidate vaccines and antiviral therapies.

It could be used to predict potential changes in efficacy down the line, for example if a mutation appears at scale in the virus that would be in the target region of an antiviral drug.

“Controlling this virus means understanding it at new levels that go beyond simple testing,” commented Jurgi Camblong, Sophia Genetics’ founder and CEO.

“The evolution of the disease must be predicted in order to create containment measures,” he added. “We can do this by building a world map of longitudinal tracking, beginning with highly accurate and reliable virus data, further powered by radiomics data.”


M&A Analysis: 3 Benefits of Siemens Healthineers’ $16.4B Acquisition of Varian Medical

M&A Analysis: 3 Benefits of Siemens Healthineers’ $16.4B Acquisition of Varian Medical

What You Should Know:

– Siemens Healthineers and Varian Medical announce a $16.4B all-cash deal on August 2, 2020.

– Deal expected to close in 1H 2021.

– Varian Medical will maintain its brand name and operate “independently.”

– Siemens AG will drop holding in Siemens Healthineers from 85% to 72% as part of the transaction.


News of the deal between Siemens Healthineers and Varian Medical will have caught many industry onlookers off guard on Sunday evening. Flotation of the Healthineers business segment on the German stock market raised a few eyebrows back in 2017, but with Siemens AG retaining 85% of the stock, many observers postulated little change to the fortunes of the well-known business; an unwieldy technical hardware leader facing an uphill battle in an increasingly digital market.

However, the Varian deal has just made it very clear that Siemens Healthineers has emerged from the IPO with big ambitions and firepower to match. So, what does this mean for the future?

Win-win?

Three benefits of the deal are clear at first glance. Firstly, Siemens Healthineers will be adding an additional mature product set to its already strong modality hardware line-up. Radiation Therapy hardware (linear accelerators, or linacs) is the lion’s share of Varian’s business, for which it is the market leader, holding over 55% of the global installed base in 2019. Combining this with Siemens’ extensive business in diagnostic imaging and diagnostics will create a product line-up that no major peer can today match. It also opens up opportunities for providing “end-to-end” oncology solutions (imaging, diagnostics, and therapy) under one vendor, a strong play in a market where health providers are increasingly looking to limit supply chain complexity and explore long-term managed service deals with fewer vendors.

Secondly, Varian is operating in a relatively exclusive market, with its only main competition coming from market peers Elekta and Accuray Inc. Demand for linacs has been consistently improving in recent years, with Varian suggesting only two-thirds of the Total Addressable Market (TAM) for Radiation Therapy has been catered for so far. The acquisition, therefore, opens a new growth market for Siemens Healthineers to offset the gradual slowing demand for its advanced imaging modality (MRI, CT) business, a more competitive and mature segment. The adoption of Radiation Therapy in emerging markets such as China and India is also well behind advanced imaging modalities, offering new greenfield opportunities near term, a rarity in most of Siemens Healthineers’ core markets.

Thirdly, Varian has grown to a size where progressing to the next level of growth will require substantial investment in operations and new market channels. Revenue growth over the last five years has been patchy, though gross margin remains strong for this sector. If Siemens can leverage its far larger operational and sales network and apply it to Varian’s product segments, none of Varian’s current main competitors will have the resources to compete, unless acquired by another major healthcare technology vendor.

The Digital Gem 

While the Radiation Therapy hardware business has gained the most attention for its potential impact on Siemens Healthineers’ business, Varian’s software business is arguably its most valuable jewel, hitting almost $600m and 18% YoY growth in FY19.

Many healthcare providers have become increasingly beleaguered by the challenges of digitalization today, especially in terms of complex integration of diagnostic and clinical applications across the healthcare system. This frustration is especially common in Oncology, which sits at the convergence of major departmental and enterprise IT systems, including the EMR, laboratory, radiology, and surgical segments.

Changing models of care provision towards multidisciplinary collaboration for diagnosis and care have only intensified focus on fixing this issue, with some preferring single-vendor offerings for major clinical or diagnostic departments. The Varian software suite is one of the few premium full-featured oncology IT portfolios available today, competing mostly against main rival Elekta, generalist oncology information system modules from EMR vendors (few of which have the same capability) and a host of smaller standalone specialist IT vendors.

For Siemens Healthineers, the Varian software asset is a great fit. Siemens has for some time been gradually changing direction in its digital strategy, away from large enterprise data management segments towards more targeted diagnostic and operational products. This process began with the sale of its EMR business to Cerner for $1.3B back in 2015, with notably reduced marketing focus and bidding or deal activity on big imaging management deals (PACS, VNA etc.) in North America in recent years.

Instead, Siemens Healthineers has channeled its digital efforts on three main areas where it has specialist capabilities: advanced visualization and access to artificial intelligence for image analysis; digitalization of advanced imaging hardware modalities, including driving efficiency for fleet management and radiology operations; and lab diagnostics automation. While still early in this transformation, this approach is tapping into the main challenges facing most healthcare providers today; improving clinical outcomes at a net neutral or reduced cost, better managing and reducing Total Cost of Ownership (TCO), and implementing autonomous technology to augment clinical and diagnostic practice.

Assuming integration with Siemens’ broader portfolio is not too bumpy, it is already clear how the different software assets of the Varian business sit well with Siemens’ digital strategy. The Aria Oncology Information System platform will provide an entry point for Siemens to build on clinical outcome improvement in Oncology (along with Noona/360 Oncology) while also integrating diagnostic content from the Siemens syngo imaging and AI-radiology applications. Further, with growing attention on operational software to support modality fleet services and radiology operations, Siemens could translate this business into RT linac fleet management, an area currently underserved.

With no competing vendor today able to match this capability in Oncology IT, the potential long-term benefits for Siemens’ digital strategy with Varian far outweigh the risks of integration.

From Morph Suits to Moon-shots

As alluded to in our introduction, perhaps most intriguing is the bullish signal Siemens Healthineers has made to its customers and the wider market about its future.

The Healthineers 2025 strategy identified three clear stages of transformation, with “reinforcing the core portfolio” the key aspect of the 2017-2019 post-IPO period. In the second, “upgrading” phase, the business focused on pushing up growth targets and earnings per share across all segments while adding capabilities in allied markets.


Judged against the criteria for the “upgrading” phase, the Varian deal has ticked all the boxes, perhaps clarifying why Siemens was willing to pay a premium.

The scale of the deal has also reinforced that the gradual untethering of Siemens Healthineers from its corporate parent Siemens AG is bearing fruit, both in terms of flexibility to deal-make and the ability to use the financial firepower of its majority shareholder for competitive gain.

The deal, once completed in 1H 2021, also now puts Siemens Healthineers in an exclusive club of medical technology companies with annual revenues above $20B, with a potential position as the third-largest public firm globally (based on 2019 revenues, behind Medtronic and Johnson and Johnson).

It is therefore hard to argue that the Varian acquisition can be viewed as anything but positive for Siemens Healthineers. Given the current impact of the COVID-19 pandemic and expected challenging economic legacy, the growth potential of Varian will help to smooth the expected mid-term dip in some core business over the next few years.

Yet it is the intention and message that Siemens Healthineers is sending with the Varian acquisition that is perhaps most impressive; despite the turmoil and challenges facing markets today, it fundamentally believes in its strategy to reinvent its healthcare business and target precision medicine long term.

Its major competitors should sit up and take note; Siemens Healthineers is fast re-establishing itself as a leading force within healthcare technology. The morph suits of the “Healthineers” brand launch were just one small step on this journey; the Varian acquisition is going to be one great leap.


About Steve Holloway 


Steve Holloway is the Director at Signify Research, an independent supplier of market intelligence and consultancy to the global healthcare technology industry. Steve has 9 years of experience in healthcare technology market intelligence, having served as Senior Analyst at InMedica (part of IMS Research) and Associate Director for IHS Inc.’s Healthcare Technology practice. Steve’s areas of expertise include healthcare IT and medical imaging.

Caption Health AI Awarded FDA Clearance for Point-of-Care Ejection Fraction Evaluation

Caption Health AI Awarded FDA Clearance for Point-of-Care Ejection Fraction Evaluation

What You Should Know:

– Caption Health AI is awarded FDA 510(k) clearance for
its innovative point-of-care ejection fraction evaluation.

– The latest AI ultrasound tool makes it even easier to automatically assess ejection fraction, a key indicator of cardiac function, at the bedside, including on the front lines of the COVID-19 pandemic.


Caption Health, a Brisbane,
CA-based leader in medical AI technology, today announced it has received FDA
510(k) clearance for an updated version of Caption Interpretation™, which
enables clinicians to obtain quick, easy and accurate measurements of cardiac
ejection fraction (EF) at the point of care.

Impact of Left Ventricular Ejection Fraction

Left ventricular ejection fraction is one of the most widely
used cardiac measurements and is a key measurement in the assessment of cardiac
function across a spectrum of cardiovascular conditions. Cardiovascular
diseases kill nearly 700,000 Americans annually, according to the Centers for
Disease Control and Prevention; furthermore, considering EF as a new vital sign
may shed light on determining cardiac involvement in the progression of COVID-19. A
recent global survey published in European Heart Journal – Cardiovascular Imaging reported
that cardiac abnormalities were observed in half of all COVID-19 patients
undergoing ultrasound of the heart, and clinical management was changed in
one-third of patients based on imaging.
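For reference, ejection fraction itself is a simple ratio: the share of the left ventricle’s end-diastolic volume that is ejected with each beat. The snippet below shows the standard formula with illustrative volumes; it is not Caption Health’s algorithm, which estimates EF directly from the image pixels rather than from traced volumes.

```python
# Standard definition of left ventricular ejection fraction (EF).
# EDV = end-diastolic volume, ESV = end-systolic volume; values below are illustrative.

def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """EF (%) = (EDV - ESV) / EDV * 100."""
    return (edv_ml - esv_ml) / edv_ml * 100

print(f"EF: {ejection_fraction(120, 50):.0f}%")  # ~58%, within the commonly cited normal range
```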

How Caption Interpretation Works

Caption Interpretation applies end-to-end deep learning to
automatically select the best clips from ultrasound exams, perform quality
assurance and produce an accurate EF measurement. The technology incorporates
three ultrasound views into its fully automated ejection fraction calculation:
apical 4-chamber (AP4), apical 2-chamber (AP2) and the readily obtained parasternal long-axis (PLAX) view, an industry first. While ejection fraction is commonly measured using the more challenging apical views, the PLAX view is often easier to acquire at the point of care in situations where patients may not be able to turn on their sides, such as intensive care units, anesthesia preoperative settings and emergency rooms. The software gives healthcare providers unprecedented ability to bring specialized ultrasound techniques to the bedside.
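
To make the described flow concrete, here is a minimal, hypothetical sketch of that kind of pipeline: filter clips by view and image quality, run a learned regressor over each usable clip, and combine the per-clip estimates into a single EF value. The class, function names, quality threshold and placeholder model below are illustrative assumptions and do not represent Caption Health's actual algorithms or API.

from dataclasses import dataclass
from statistics import mean
from typing import List

@dataclass
class Clip:
    view: str            # "AP4", "AP2" or "PLAX"
    frames: List[list]   # raw pixel data, one entry per frame
    quality: float       # hypothetical 0-1 image-quality score

def ef_from_clip(frames: List[list]) -> float:
    # Placeholder for a learned regressor that would analyze every pixel and
    # frame of the clip; a fixed value stands in for the model output here.
    return 58.0

def estimate_ef(clips: List[Clip], min_quality: float = 0.6) -> float:
    """Fuse per-clip EF estimates from diagnostic-quality AP4, AP2 and PLAX clips."""
    usable = [c for c in clips
              if c.view in {"AP4", "AP2", "PLAX"} and c.quality >= min_quality]
    if not usable:
        raise ValueError("no diagnostic-quality clips available")
    return mean(ef_from_clip(c.frames) for c in usable)

# Example with dummy data: only the two clips that pass the quality gate contribute.
clips = [Clip("AP4", [[0]], 0.9), Clip("PLAX", [[0]], 0.8), Clip("AP2", [[0]], 0.3)]
print(estimate_ef(clips))  # 58.0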

“Developing artificial intelligence that mimics an expert physician’s eye with comparable accuracy to automatically calculate EF—including from the PLAX view, which has never been done before—is a major breakthrough,” said Roberto M. Lang, MD, FASE, FACC, FESC, FAHA, FRCP, Professor of Medicine and Radiology and Director of Noninvasive Cardiac Imaging Laboratories at the University of Chicago Medicine and past president of the American Society of Echocardiography. “Whether you are assessing cardiac function rapidly, or looking to monitor changes in EF in patients with heart failure, Caption Interpretation produces a very reliable assessment.”

Caption Interpretation Benefits

At the point of care, a less precise visual assessment of EF
is frequently performed in lieu of a quantitative measurement due to resource
and time constraints. Using Caption Interpretation in these settings provides
the best of both worlds: it is as easy as performing a visual assessment, but
with comparable performance to an expert quantitative measurement.

Caption Interpretation was trained on millions of image
frames to correctly estimate ejection fraction, emulating the way an expert
cardiologist learns by evaluating EF as part of their clinical practice. While
virtually all commercially available EF measurement software works by tracing
endocardial borders, Caption Interpretation analyzes every pixel and frame in a
given clip to produce highly accurate EF measurements.

Caption Health broke new ground in 2018 when it received the
first FDA clearance for a fully automated EF assessment software. Two years
later, Caption Interpretation remains the only fully automated EF tool
available to providers, and, with today’s clearance, continues to be the pacesetter
in ultrasound interpretation.

“We are pleased to have received FDA clearance for our latest AI imaging advancement—our third so far this year,” said Randolph P. Martin, MD, FACC, FASE, FESC, Chief Medical Officer of Caption Health, Emeritus Professor of Cardiology at Emory University School of Medicine, and past president of the American Society of Echocardiography. “An accurate EF measurement is an indispensable tool in a cardiac functional assessment, and this update to Caption Interpretation makes it easier for time-constrained clinicians to incorporate it into their practice.”

Recent Traction/Milestones

Caption Interpretation works in tandem with Caption
Guidance, cleared by the FDA earlier this year, as part of the Caption AI platform. Caption Guidance
emulates the expertise of a sonographer by providing over 90 types of real-time
instructions and feedback. These visual prompts direct users to make specific
transducer movements to optimize and capture a diagnostic-quality image. In contrast, other ultrasound systems require years of expertise to recognize anatomical structures and make fine movements, limiting their use to clinicians with specialized training.
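
Conceptually, the real-time guidance described above is a feedback loop: score the current image, issue a corrective prompt if it is not yet diagnostic quality, and capture automatically once it is. The toy loop below illustrates only that control flow; the scoring function, prompt text and threshold are hypothetical and are not Caption Health's actual guidance logic.

def quality_score(frame) -> float:
    # Placeholder for a learned image-quality model; returns a 0-1 score.
    return float(frame.get("score", 0.0))

def guidance_prompt(frame) -> str:
    # Placeholder for the model that would map the current image to one of the
    # real-time corrective instructions (e.g., rock, rotate or slide the probe).
    return frame.get("hint", "hold the probe still")

def guide_until_diagnostic(frames, threshold: float = 0.8):
    """Emit prompts until a frame clears the quality threshold, then capture it."""
    for frame in frames:
        if quality_score(frame) >= threshold:
            return frame  # auto-capture the diagnostic-quality image
        print(guidance_prompt(frame))
    return None

# Example with dummy frames standing in for a live ultrasound feed.
feed = [{"score": 0.4, "hint": "rotate the probe clockwise"},
        {"score": 0.7, "hint": "slide toward the sternum"},
        {"score": 0.9}]
captured = guide_until_diagnostic(feed)
print("captured" if captured else "no diagnostic image")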

The company recently closed its $53 million Series B funding round to further develop and commercialize this revolutionary ultrasound technology, which expands patient access to high-quality, essential care.

Allscripts, Microsoft Ink 5-Year Partnership to Support Cloud-based Sunrise EHR, Drive Co-Innovation

Allscripts, Microsoft Ink 5-Year Partnership to Support Sunrise EHR, Drive Co-Innovation

What You Should Know:

– Allscripts and Microsoft sign a five-year partnership extension to support Allscripts’ cloud-based Sunrise electronic health record and drive co-innovation.

– The alliance will enable Allscripts to harness the power of Microsoft’s platform and tools, including Microsoft Azure, Microsoft Teams, and Power BI, creating a more seamless and highly productive user experience.


Today Allscripts and
Microsoft Corp. announced the
extension of their long-standing strategic alliance to enable the expanded
development and delivery of cloud-based health IT solutions.
The five-year extension will support Allscripts’ cloud-based Sunrise electronic health record
(EHR), making Microsoft the cloud provider for the solution and opening up
co-innovation opportunities to help transform healthcare with smarter, more
scalable technology. The alliance will enable Allscripts to harness the power
of Microsoft’s platform and tools, including Microsoft Azure, Microsoft Teams
and Power BI, creating a more seamless and highly productive user experience.

Partnership Impact for Cloud-based Sunrise EHR

Sunrise is an integrated EHR that connects all aspects of care, including acute, ambulatory, surgical, pharmacy, radiology and laboratory services, along with an integrated revenue cycle and patient administration system. Cloud-based Sunrise will offer many added benefits beyond the on-premises version, improving organizational effectiveness, solution interoperability, clinician ease of use and the patient experience. Client benefits include a subscription model that delivers faster implementations and lower annual upgrade costs, helping organizations leverage the software without increasing the burden on their internal IT resources.

The cloud-based Sunrise solution will provide enhanced
security, scalability and flexibility, as well as the opportunity to add new
capabilities quickly as business needs and the cloud evolve. The cloud-based solution will also include expanded analytics and insights functionality that can quickly connect with Internet of Things (IoT) devices. Finally, the cloud-based
Sunrise solution will include a marketplace that enables healthcare apps and
third parties to easily integrate with a hospital EHR. Allscripts clients will
begin to see these updates by the end of 2020.

Why It Matters

“The COVID-19 pandemic will forever change how healthcare is
delivered, and provider organizations around the world must ensure they are
powered by innovative, interoperable, comprehensive and lower-cost IT solutions
that meet the demands of our new normal,” said Allscripts chief executive
officer Paul Black. “Healthcare delivery is no longer defined by location —
providers need to have the capability to reach patients where they are to truly
deliver the care they require. Cloud solutions, mobile options, telehealth
functionality — these are the foundational tools for not just the future of
healthcare, but the present. Collaborating with Microsoft, the leader in the
public cloud sector, we will efficiently deliver the tools caregivers need to
improve the clinical outcomes of their patients and operational performance of
their organizations.”