Sep 27, 2024

Insights from LSI Europe ‘23: Adoption of Digital Medicine: Importance of Regulation, Safety, & Explainability in AI

A panel at LSI Europe '23 brought together industry leaders and experts to discuss the essential considerations surrounding the adoption of digital medicine and AI in healthcare. The discussion highlighted the importance of data quality, validation, and localization, and addressed key challenges around regulation and care pathways.

Data Quality is Paramount

One of the primary discussion points from the panel was the critical importance of data quality in developing effective AI models for healthcare. Poor data quality can undermine the potential benefits of AI solutions. Pall Johannesson, representing Greenlight Guru, emphasized, "It follows the principle of garbage in, garbage out. If you want to build continuous learning algorithms, you need to ensure that the data you're teaching it on is accurate and meets certain quality standards." This sentiment was echoed by Hélène Viatgé, Co-Founder of Agora Health, who noted, "Physicians capture data during clinical times, but often in different tools than those that will be used for research and development. You will find that in most countries, physicians duplicate the data they have collected in their clinical pathway for research and development of new algorithms."

Regulatory Challenges and Static Models

Current regulations require AI models to be validated and remain static, which limits their ability to improve continuously. Viatgé pointed out, "You certify your medical device, and as mentioned, you freeze your model. Then, if you want to make minor or major changes, you have to go through all the usual notified bodies communications and processes for them to validate, which takes time." Johannesson added, "Regulations are, by nature, reactive to technology. Continuous learning models contradict current regulations, which expect a static and validated device." This regulatory environment poses significant challenges for innovators in the medtech market who aim to keep their AI models up-to-date and effective.

Importance of Validation and Localization

Another critical point discussed was the need for thorough validation and localization of AI models. AI models trained in one region may not perform equally well in another due to differences in patient populations and care pathways. Viatgé highlighted this issue, saying, "You have to justify why the population is similar and how you feel your validation is valid. If it’s not, then you have to reproduce it." Johannesson echoed this sentiment, explaining that, much like other medical devices, AI models must be reassessed for new markets to ensure safety and efficacy.

Balancing Innovation with Practicality

AI developers must strike a balance between innovative capabilities and practical applicability within current healthcare infrastructures. Viatgé advised, "For AI-based algorithms, there's a clever balance between input data and output data. You can design the most brilliant algorithm with great predictivity and sensibility, but with 12 input data that are actually not available in the current care pathway. Your algorithm might be great with specific data, but it won’t be practical if it isn't available in the current care pathway." Johannesson suggested that startups and companies focus on incremental improvements rather than attempting revolutionary changes that the market might not be ready to accept. "Figure out the shortest way to revenue. Incremental improvements over time are usually safer than attempting a revolutionary change," he said.

The Role of Interoperability

For AI solutions to be widely adopted, they must seamlessly integrate with existing healthcare systems. Viatgé explained, "Clinicians know the value of these solutions and want to use them but struggle with interoperability and the need to duplicate work. We need to bridge the gap between these wonderful AI algorithms being developed and paper-based clinical data that is actually being used in the care pathway." Johannesson predicted that the future would see more integrated solutions working across different care pathway components. "We're going to see more solutions that integrate in different components of the pathway. Forcing vendors to work together for better outcomes will be crucial," he said.

Trustworthiness and Regulatory Insight

Building trust and providing clear regulatory insight are essential for the adoption of AI in healthcare. Viatgé shared her strategy: "We don't go into any businesses now without a clear view of the organizational and clinical impact and a clear roadmap of what we want to achieve." Simon Turner, who moderated the discussion, emphasized the importance of transparency, stating, "Companies need to demonstrate trust and utility in their AI approaches, ensuring they provide clear metrics and transparency in their functionality."

Conclusion

The panel highlighted the intricate interplay between data quality, regulatory challenges, validation, practicality, interoperability, and trustworthiness in the medtech market. As digital medicine and AI continue to evolve, these insights will be vital for medical device investors and stakeholders navigating the future of healthcare technology.

The full recording of the panel can be found in LSI's resource hub.
