Providers reveal their gen AI investment strategies

Healthcare providers share their gen AI investment strategies as they begin to make big moves toward a digitally driven future.
By Jennifer Bresnick
Apr 29, 2024, 2:05 PM

Generative AI (gen AI) is quickly moving beyond its reputation as a sneaky homework helper and into the mainstream as a valuable tool for improving the healthcare experience. Providers are keen to harness tools such as ChatGPT and its competitors to smooth out persistent pain points in the care experience, such as answering patient questions, assisting with documentation, and enhancing research capabilities.

While there’s still a long way to go before these technologies are mature enough to fulfill all their promises, the majority of care providers are willing to be active participants in pushing the process along.

According to a new survey from John Snow Labs, roughly three-quarters of organizations have increased their gen AI budgets already, with more than 20% of respondents saying they’ve upped their spend by up to 300% compared to 2023 levels. 

The willingness to invest so heavily – especially in tight economic times – indicates strong enthusiasm and optimism around integrating gen AI into broader efforts to optimize digital infrastructure and workflows. 

Here’s how organizations are approaching their gen AI investment strategies in 2024 and beyond. 

Big ideas are everywhere, but so are roadblocks

Generative AI is dangling tantalizing promises of a revolution in front of healthcare leaders – but after being burned by rushing head-first into previous waves of health IT hype, the C-suite is trying to take a more measured approach to adoption. 

Respondents to the poll clearly see the potential for gen AI’s large language models (LLMs) to power patient-facing chatbots, improve clinical coding, reduce the burdens of documentation and information retrieval, and even generate synthetic data for research. And they’re confident that these capabilities will make a fairly significant impact on day-to-day operations, especially when it comes to the patient-provider relationship.

But most are at the very beginning stages of adoption, with nearly 40% still evaluating and experimenting with basic models that have not yet moved into production. Fourteen percent have at least one model up and running in some capacity, but only 11% consider themselves in “mid-stage” adoption with multiple solutions actively supporting real-world operations. 

The relatively limited adoption is the product of significant technical and organizational obstacles. When asked to rate the significance of their challenges, respondents were most likely to cite “lack of accuracy” as the biggest hurdle, followed closely by worries over the legal and/or reputational risks of embracing gen AI, frustrations with models that are not specifically built with healthcare or life sciences in mind, and concerns about bias, fairness, and equity. 

Deploying successful gen AI investment strategies

Despite the obstacles, healthcare leaders are drawing on lessons learned from previous deployments to carefully evaluate gen AI tools before releasing them into the wild, with strategies focused on ensuring fairness, privacy, and reliability.

Accuracy and reliability are the most important criteria when evaluating a new model, the participants shared, followed very closely by the security and privacy risks of any new tool. Executives should ensure that gen AI tools can provide reproducible, consistent answers that are free of bias and hallucinations, and should prioritize tools that offer a high degree of transparency and explainability.
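To make the reproducibility criterion concrete, here is a minimal sketch of a consistency check that asks a model the same question several times and compares the answers pairwise. The `generate_answer` stub, the sample prompt, and the 0.8 threshold are illustrative assumptions rather than anything prescribed by the survey.

```python
# Minimal sketch of a reproducibility check for a gen AI model.
# `generate_answer` is a hypothetical placeholder for whatever LLM or API
# an organization actually uses; the prompt and threshold are illustrative.
from difflib import SequenceMatcher


def generate_answer(prompt: str) -> str:
    """Stand-in for a call to the organization's chosen model (swap in a real client)."""
    return "Take medications as prescribed and follow up with your care team in two weeks."


def consistency_score(prompt: str, runs: int = 5) -> float:
    """Ask the same question several times and compare the answers pairwise."""
    answers = [generate_answer(prompt) for _ in range(runs)]
    scores = []
    for i in range(len(answers)):
        for j in range(i + 1, len(answers)):
            scores.append(SequenceMatcher(None, answers[i], answers[j]).ratio())
    return sum(scores) / len(scores)


if __name__ == "__main__":
    prompt = "Summarize discharge instructions for a patient with type 2 diabetes."
    score = consistency_score(prompt)
    print(f"Average pairwise similarity: {score:.2f}")
    if score < 0.8:  # illustrative threshold, not a survey recommendation
        print("Warning: responses vary substantially across identical prompts.")
```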

The respondents also advised adopters to look for healthcare-specific models that are built for targeted use cases instead of trying to adapt broader tools for the unique care environment. This will reduce the time and cost of the pre-implementation process while producing better long-term results for end-users. 

The survey also reiterated the importance of ongoing human participation throughout the training, implementation, optimization, and monitoring phases. More than half (55%) of respondents said they always keep humans in the loop when testing and improving LLMs, while 18% also incorporate reinforcement learning from human feedback (RLHF).
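As a rough illustration of what keeping humans in the loop can look like in code, the sketch below routes model outputs through a simple review queue before release. The class names and verdict labels are hypothetical, chosen only to show the pattern the respondents describe, not any particular vendor's tooling.

```python
# Minimal sketch of a human-in-the-loop review queue for LLM outputs.
# The dataclass fields and verdict labels are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class ReviewItem:
    prompt: str
    model_output: str
    reviewer_verdict: str | None = None  # "approve", "reject", or "edit"
    reviewer_notes: str = ""


@dataclass
class ReviewQueue:
    items: list[ReviewItem] = field(default_factory=list)

    def submit(self, prompt: str, model_output: str) -> None:
        # Every output destined for production passes through human review first.
        self.items.append(ReviewItem(prompt, model_output))

    def record_feedback(self, index: int, verdict: str, notes: str = "") -> None:
        item = self.items[index]
        item.reviewer_verdict = verdict
        item.reviewer_notes = notes

    def approved(self) -> list[ReviewItem]:
        return [i for i in self.items if i.reviewer_verdict == "approve"]


if __name__ == "__main__":
    queue = ReviewQueue()
    queue.submit("Explain this lab result in plain language.",
                 "Your potassium level is slightly above the normal range.")
    queue.record_feedback(0, "approve", "Accurate and patient-friendly.")
    print(f"{len(queue.approved())} of {len(queue.items)} outputs approved for release.")
```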

Other popular testing and improvement techniques include adversarial testing, de-biasing processes, quantization and/or pruning, and the use of interpretability tools and techniques to ensure proper training and supervision of gen AI models. 
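One of the techniques named above, quantization, can be sketched in a few lines using PyTorch's dynamic quantization API. The tiny toy network below stands in for a real model purely to demonstrate the call; actual deployments would apply this to a much larger, already-trained LLM.

```python
# Minimal sketch of post-training dynamic quantization with PyTorch.
# The toy model is an illustrative stand-in, not a production network.
import torch
import torch.nn as nn

toy_model = nn.Sequential(
    nn.Linear(768, 768),
    nn.ReLU(),
    nn.Linear(768, 2),  # e.g., a small classification head
)

# Convert Linear layers to int8 weights while keeping activations in float.
quantized = torch.quantization.quantize_dynamic(
    toy_model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    sample = torch.randn(1, 768)
    print(quantized(sample).shape)  # torch.Size([1, 2])
```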

These early adopters are constantly evaluating their models against these criteria and keeping a critical eye out for any evidence of bias, disinformation, or data leakage to keep their tools on track for success.

As generative AI becomes more widespread, adopting similar strategies will help ensure that these tools contribute to healthcare operations in a positive way without skewing clinical or administrative decision-making.

Implementing these guardrails – and continually monitoring and adjusting models to stay within approved parameters – will be essential for making the most of rising investments while extracting the full value from the next generation of artificial intelligence technologies. 


Jennifer Bresnick is a journalist and freelance content creator with a decade of experience in the health IT industry.  Her work has focused on leveraging innovative technology tools to create value, improve health equity, and achieve the promises of the learning health system.  She can be reached at jennifer@inklesscreative.com.

