Evaluating AI in Health Economics: Beyond Replication

Replicating an Economic Model ≠ Evaluating an AI Model

Many early GenAI use cases in health economics (HE) focus on replicating existing models. But replication isn't evaluation.

Some GenAI tools can replicate existing health economic models with outputs, such as ICERs, that align perfectly with the human-built originals. Sounds impressive — but is it enough? Is this really the direction we should take when evaluating AI-assisted HE models? Is it realistic to build two models (AI vs. human) for every real HTA submission, just to compare them?

✅ Replication may test a single model's performance, but it doesn't advance the evaluation framework needed to assess trustworthy, transparent AI-assisted modelling.

A simple question for my HEOR and HTA colleagues: if an AI tool replicates a model in disease area X, producing exactly the same outputs (costs, QALYs, ICER) as an existing human-built model — does that alone remove your concerns about its transparency and reliability? And if you use that same AI tool to build a de novo model from scratch, how can you guarantee it will still be 100% aligned with a hypothetical human-built model (if one even exists)?

✅ Also — let's be fair. Humans make errors too. Models are built under uncertainty, and decisions are probabilistic, shaped by evolving evidence. Deviation between an AI-built and a human-built model does not necessarily mean the AI is wrong. Maybe new data is emerging. Maybe the AI uses different statistical methods that are actually better than Excel. Maybe it applies updated assumptions. As the classic quote goes: "All models are wrong, but some are useful."

✅ When we talk about "evaluating AI-built HE models", we're actually talking about two layers of evaluation:
1. Evaluating the HE model itself — calculations, parameters, assumptions, settings, as highlighted in the NICE HTA Lab report.
2. Evaluating the AI tool that builds the model — its trustworthiness, transparency, and reproducibility.

This second layer is foundational. Every pharma company, biotech, and consultancy can build its own AI tool. There is no single "universal" AI tool for HE modelling — nor should there be. For trust to be earned, we must evaluate the AI itself: What architecture is being used? If it's RAG, what knowledge base is feeding it? If it's an agentic model, what supporting evidence does it output (e.g., R/Python survival analysis code, documentation of decision steps)?

Replication and comparison are useful starting points (a minimal sketch of what such a comparison actually tests follows below). But it's time for the HEOR community to move beyond replication and build robust evaluation frameworks tailored to GenAI's strengths and risks — especially around transparency, documentation, and human oversight.

👉 NICE HTA Lab project: https://lnkd.in/e7rXkcVH

Curious how others in #HEOR and #HTA are approaching this. Let's discuss.

#GenAI #HTA #Transparency #AgenticAI #HTAinnovation
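To make concrete what replication alone tests, here is a minimal sketch in R. Every figure below is an illustrative placeholder, not an output of any real model; the point is that this kind of check verifies agreement of outputs and nothing about the tool that produced them.

```r
# Minimal replication check: compare the ICER from an AI-built model against
# a human-built reference. All figures are illustrative placeholders.
icer <- function(cost_new, cost_comp, qaly_new, qaly_comp) {
  (cost_new - cost_comp) / (qaly_new - qaly_comp)
}

# Hypothetical deterministic outputs from the two models
human <- list(cost_new = 52000, cost_comp = 30000, qaly_new = 6.1, qaly_comp = 5.0)
ai    <- list(cost_new = 51800, cost_comp = 30000, qaly_new = 6.1, qaly_comp = 5.0)

icer_human <- do.call(icer, human)
icer_ai    <- do.call(icer, ai)

# Tolerance-based agreement: this confirms output alignment only; it says
# nothing about the transparency or reliability of the tool that built it.
rel_dev <- abs(icer_ai - icer_human) / abs(icer_human)
cat(sprintf("Human ICER: %.0f | AI ICER: %.0f | deviation: %.1f%%\n",
            icer_human, icer_ai, 100 * rel_dev))
```

Passing a check like this addresses only the first layer of evaluation above; the second layer (architecture, knowledge base, documentation of decision steps) is untouched by any amount of output agreement.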
I really enjoyed leading a collaboration with NICE to investigate the capabilities of generative AI for de novo model building and reporting in R. NICE's report on the project, 'Generative AI in health economic evaluation', can be found here: https://lnkd.in/eAPh26-G

My key takeaways from the project:
- We were able to generate complex R models (with PSA) and comprehensive Word technical reports from brief model outlines (~500 words on average across the four examples) and Excel data files (with available data points)
- We achieved a high level of coding accuracy through agentic pipelines with error-checking/sense-checking capabilities (roughly one error per 350 lines of code across the four published models we replicated)
- Deviations from the published models were 2%, 3%, 16%, and 37% respectively; all cost-effectiveness conclusions were correct
- Deviation was primarily explained by the AI pipeline making different assumption choices from the authors of the published models
- All assumptions were clearly reported in the AI-generated model reports produced by the pipeline

I think this demonstrates a key point about the value of AI automation: AI outputs only add value if they align with our intentions (e.g., in this case, use our desired assumptions). Unless we provide vast amounts of instruction, it's unlikely that AI outputs will align with our intentions exactly, first time. Sometimes we don't even know what we want before we see a first output.

Therefore, the key differentiator for an AI pipeline to add value is that its outputs can be easily, and automatically, modified. In the modelling case, think of leaving a Word comment on the AI-generated technical report and having the model re-run with your comment actioned (e.g., specifying an alternative assumption you want to make). In 1-2 rounds of feedback, you can have a completely aligned output, delivering massive time savings. A minimal sketch of the kind of PSA loop such pipelines generate is shown below.

Some of Estima Scientific's recent work on survival analysis has made great progress towards implementing this type of feedback system. I'll be presenting on this at ISPOR in November (Survival Analysis in the Era of Generative AI).
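For readers less familiar with what an "R model with PSA" involves, here is a minimal sketch of a probabilistic sensitivity analysis loop for a simple two-strategy comparison. All distributions, means, and the threshold are hypothetical placeholders, not drawn from the NICE project or the replicated models.

```r
# Minimal PSA sketch for a two-strategy comparison.
# All parameter values and distributions are hypothetical placeholders.
set.seed(42)
n_sim <- 1000

psa <- data.frame(
  cost_new  = rgamma(n_sim, shape = 100, rate = 100 / 52000),  # mean 52,000
  cost_comp = rgamma(n_sim, shape = 100, rate = 100 / 30000),  # mean 30,000
  qaly_new  = rnorm(n_sim, mean = 6.1, sd = 0.3),
  qaly_comp = rnorm(n_sim, mean = 5.0, sd = 0.3)
)

psa$inc_cost <- psa$cost_new - psa$cost_comp
psa$inc_qaly <- psa$qaly_new - psa$qaly_comp

# Probability of cost-effectiveness at a £20,000/QALY threshold, using net
# monetary benefit to avoid dividing by near-zero incremental QALY draws.
threshold <- 20000
psa$nmb <- threshold * psa$inc_qaly - psa$inc_cost
cat(sprintf("P(cost-effective at £%s/QALY): %.2f\n",
            format(threshold, big.mark = ","), mean(psa$nmb > 0)))
```

The feedback loop described above would amount to regenerating this script with a revised assumption (say, a different distribution for a parameter) in response to a reviewer's comment, rather than hand-editing the code.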
NICE has now released the outcomes from the recently concluded workshops on generative AI in health economic evaluation: https://lnkd.in/e47kHTND It was an absolute pleasure to be a part of this.

Key points:
· While there are multiple potential use cases for AI in health economic model development, there is a lack of high-quality evidence to support them
· While GenAI shows promise in improving efficiency for model developers and reviewers, its broader impact on NICE's HTA processes remains to be determined

Best practice principles include:
· GenAI methods should only be used where there is demonstrable value from doing so (I would have added that this needs to be over and above simple automation techniques)
· The submitting organisation remains fully accountable for the content of any evidence submission
· Submitting organisations and HTA bodies using GenAI are responsible for ensuring compliance with all applicable legislation, including data protection, copyright and licensing agreements (noting from experience this is difficult [impossible] to achieve!)
· When available, consider using tools to support the explainability of AI methods and increase transparency of their application
· AI methods should augment human involvement, not replace it

A logical next step noted is a 'mock' appraisal project to test GenAI integration into NICE's processes.

#AI #GenAI #NICE #Healtheconomics #HTA
Exciting update from NICE! The UK's health guidance body has just released its stance on incorporating AI in evidence generation.

Key points:
- AI should complement, not substitute, human expertise
- Transparency is crucial: organizations must disclose AI utilization
- Accountability for AI outcomes rests with organizations
- Mitigating risks such as bias and conducting thorough validation are imperative

AI applications:
- Systematic reviews
- Clinical trial design
- Real-world data analysis
- Economic modeling

These guidelines establish a solid framework for integrating AI in health tech evaluations, ensuring the credibility of generated evidence.

#NICE #AI #HealthTech #HTA #MarketAccess #EvidenceGeneration

Link to more details: [NICE AI in Health Tech](https://lnkd.in/egmdU9bh)
Proud to share our latest editorial in Value in Health – An HEOR Publication: "Methodological Challenges With Conducting Health Economic Evaluations in the Critical-Care Context."
👉 https://lnkd.in/gTDVFFtA

Critical care is one of the most resource-intensive and high-stakes areas in medicine. Yet despite its importance, health economic evaluations in intensive care units (ICUs) remain surprisingly rare and often methodologically inconsistent. In this piece, Stavros Petrou and I explore why conducting cost-effectiveness and value assessments in critical care is so difficult — and why getting it right matters for improving hospital efficiency and patient outcomes.

We discuss:
💡 The massive heterogeneity among ICU patients that complicates study design
💡 The data gaps that make it hard to capture long-term outcomes or costs
💡 How better measurement and modeling can help hospitals eliminate "defects in value" and move toward zero waste in care delivery

Even small methodological improvements can make a real dent in the trillion-dollar problem of wasteful care. I'm grateful to the ISPOR—The Professional Society for Health Economics and Outcomes Research Value in Health editorial team for highlighting this issue and to colleagues advancing economic evaluation in hospital medicine.

Peter Pronovost MD, PhD, FCCM | Patricia Davidson | Mary Beth Makic | USC Schaeffer Institute | USC Alfred E. Mann School of Pharmacy and Pharmaceutical Sciences

#healtheconomics #valueinhealth #criticalcare #costeffectiveness #healthpolicy #hospitalquality
💡 Poll Results Are In: Are Digital & AI Tools in Clinical Trials Overpromising on Cost Savings?

Recently, I asked this question: "Digital tools and AI are reshaping how we run clinical trials — but are we too quick to assume they'll save money?"

Here's how respondents weighed in:
- 44% said "Yes, they lead to unexpected costs"
- 56% said "Depends on study design/execution"
- 0% said "No, they deliver efficiencies"
- 0% said "Not sure, still exploring"

🔍 Key takeaways:
- No one sees digital/AI tools as a guaranteed cost-saver. That's telling. The absence of votes for "delivers efficiencies" suggests industry professionals are not convinced these technologies automatically reduce costs.
- The majority believe context matters. The largest share points to variability based on study design and execution — reinforcing that these tools aren't a one-size-fits-all solution.
- Hidden costs are real. From onboarding and training to oversight and change management, many see implementation costs as significant — and often underestimated.

🧠 What this suggests: innovation in trial operations is essential, but success hinges on strategic planning, cross-functional alignment, and change management, not just technology adoption.

👉 Over to you: Have you experienced unexpected challenges (or savings) when implementing digital tools in your trials? How are you assessing the ROI of AI and decentralized platforms in your organization? Let's continue the conversation. Your insights could help shape a more pragmatic, value-driven approach to clinical trial innovation.

#ClinicalTrials #DecentralizedTrials #DigitalHealth #AIinHealthcare #ClinicalResearch #TrialInnovation #CostManagement #ClinicalOperations #DrugDevelopment

Vantage BioTrials | Clinical Research Association of Canada | ACRP - Association of Clinical Research Professionals | Society of Clinical Research Associates (SOCRA) | CANTRAIN | Clinical Trials Training Programs
🔥 𝗧𝗵𝗲 𝗽𝗵𝗮𝗿𝗺𝗮 𝗶𝗻𝗱𝘂𝘀𝘁𝗿𝘆 𝘃𝗮𝗹𝘂𝗲 𝗲𝗾𝘂𝗮𝘁𝗶𝗼𝗻 𝗷𝘂𝘀𝘁 𝗳𝗹𝗶𝗽𝗽𝗲𝗱

My digital health advisor dropped this observation during our call: "There's an inflection point happening. Building tech products with AI is becoming trivially easy. 𝗧𝗵𝗲 𝗿𝗲𝗮𝗹 𝗱𝗶𝗳𝗳𝗲𝗿𝗲𝗻𝘁𝗶𝗮𝘁𝗼𝗿? 𝗘𝘅𝗽𝗲𝗿𝘁 𝗸𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 𝗮𝗻𝗱 𝗼𝗿𝗰𝗵𝗲𝘀𝘁𝗿𝗮𝘁𝗶𝗼𝗻 𝗰𝗮𝗽𝗮𝗯𝗶𝗹𝗶𝘁𝗶𝗲𝘀."

✨ It can be called "𝘂𝗺𝗮𝗺𝗶 𝗸𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲": that complex, hard-to-define expertise that makes everything else work together.

𝗧𝗵𝗲 𝗲𝘃𝗶𝗱𝗲𝗻𝗰𝗲 𝗶𝘀 𝗰𝗼𝗺𝗽𝗲𝗹𝗹𝗶𝗻𝗴:
🔸 Consulting giants are pivoting hard into this space. They're not selling code anymore; they're selling the ability to navigate complexity. McKinsey reports that 85% of pharma companies will be data-driven within 2 years. But here's the catch: 𝘁𝗵𝗲 𝘁𝗼𝗼𝗹𝘀 𝘁𝗵𝗲𝗺𝘀𝗲𝗹𝘃𝗲𝘀 𝗮𝗿𝗲 𝗯𝗲𝗰𝗼𝗺𝗶𝗻𝗴 𝗰𝗼𝗺𝗺𝗼𝗱𝗶𝘁𝗶𝘇𝗲𝗱. The tools exist. The APIs work. But knowing HOW to thread compliance into product design, WHEN to engage regulators, and WHO will actually champion your solution inside a health system? That's the magic.
🔸 From what I see with founders daily, this rings completely true. The ones who succeed aren't those with the shiniest tech. They're the ones 𝘄𝗵𝗼 𝗱𝗲𝗲𝗽𝗹𝘆 𝘂𝗻𝗱𝗲𝗿𝘀𝘁𝗮𝗻𝗱 𝘄𝗼𝗿𝗸𝗳𝗹𝗼𝘄 𝗶𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗼𝗻, 𝗿𝗲𝗴𝘂𝗹𝗮𝘁𝗼𝗿𝘆 𝗽𝗮𝘁𝗵𝘄𝗮𝘆𝘀, 𝗮𝗻𝗱 𝘀𝘁𝗮𝗸𝗲𝗵𝗼𝗹𝗱𝗲𝗿 𝗽𝘀𝘆𝗰𝗵𝗼𝗹𝗼𝗴𝘆.

We're moving from "Can we build it?" to "Should we build it, and how do we make it indispensable?"

✨ The winners in this new landscape won't be the best coders. They'll be 𝘁𝗵𝗲 𝗯𝗲𝘀𝘁 𝗼𝗿𝗰𝗵𝗲𝘀𝘁𝗿𝗮𝘁𝗼𝗿𝘀: those who can blend technology, domain expertise, and human insight into solutions that actually get adopted.

Are we witnessing the rise of the "knowledge orchestrator" in pharma?

#DigitalHealth #PharmaInnovation #HealthTech
Navigating the Shift in MSL Engagement Metrics: Quality Over Quantity

As Medical Affairs continues to evolve, one of the most meaningful changes I've observed is how we evaluate MSL impact. In a world where scientific data is instantly accessible and healthcare professionals face increasing information overload, the traditional "number of interactions" model no longer reflects the true value of field medical engagement.

Modern MSLs are moving from transactional "science delivery" toward transformational scientific partnership — where quality, depth, and strategic alignment define success. Rather than asking "How many visits did we complete?", we're now asking:
- Did this engagement deliver clinically relevant, up-to-date insights?
- Did it strengthen collaboration and inform evidence generation or medical strategy?
- Did it contribute to improved patient outcomes or practice change?

AI-driven analytics and NLP tools are also helping quantify these qualitative metrics — analyzing conversation content, sentiment, and emerging themes to provide a more holistic picture of value.

However, as we embrace this qualitative shift, compliance remains our compass. Regulatory alignment, accurate documentation, and unbiased insight collection are essential to maintaining trust and scientific integrity.

Ultimately, this transformation isn't just operational — it's cultural. MSLs who focus on insight-driven, meaningful engagements are helping shape not only organizational strategy but also the future of patient-centered science.
I have a core belief that separates successful clinical professionals from those who get left behind: the most successful clinical professionals don't wait for change to happen TO them—they position themselves AHEAD of change.

I learned this during the EHR revolution. I watched three types of professionals:

**The Resisters:** "This EHR thing will never last. Patients need human connection, not computers." Result → Found themselves unemployable as digital systems became standard.

**The Adapters:** "I guess I have to learn this now that it's mandatory." Result → Learned to use systems after implementation, but had no influence and received no premium compensation.

**The Early Adopters:** "Let me understand this technology before everyone else has to." Result → Commanded premium rates during the transition and influenced how implementation happened.

The same three choices exist with AI today. Most clinical professionals are choosing to be Resisters or Adapters. They're waiting. Watching. Hoping it goes away or someone else figures it out first.

But here's what I know from experience:
**Change doesn't wait for permission.**
**Technology doesn't slow down for comfort.**
**And opportunities don't announce themselves twice.**

The clinical professionals who develop AI literacy NOW will benefit from premium opportunities during the mandatory transition phase. Your choice isn't whether change will happen—it's whether you'll lead it or follow it.

Ready to position yourself ahead of the AI transformation instead of waiting for it to happen to you? Let's discuss how to build your strategic advantage. Schedule time with me via the LinkedIn button on my profile.
𝗪𝗶𝗹𝗹 𝗔𝗜 𝗿𝗲𝗽𝗹𝗮𝗰𝗲 𝗠𝗲𝗱𝗶𝗰𝗮𝗹 𝗔𝗳𝗳𝗮𝗶𝗿𝘀 𝘄𝗶𝘁𝗵𝗶𝗻 𝘁𝗵𝗲 𝗻𝗲𝘅𝘁 𝟯–𝟱 𝘆𝗲𝗮𝗿𝘀? The answer was 𝗡𝗢. It will 𝘦𝘯𝘩𝘢𝘯𝘤𝘦 — not replace.

That was one of the key takeaways from yesterday's 𝗔𝗰𝗰𝗲𝗹𝗲𝗿𝗮𝘁𝗶𝗻𝗴 𝗠𝗲𝗱𝗶𝗰𝗮𝗹 𝗔𝗳𝗳𝗮𝗶𝗿𝘀 virtual event hosted by #PharmaBrands. We're not witnessing a change — we're entering a 𝗻𝗲𝘄 𝘀𝘁𝗮𝘁𝗲. One where 𝗚𝗲𝗻𝗔𝗜, 𝗔𝗻𝗮𝗹𝘆𝘁𝗶𝗰𝘀, 𝗮𝗻𝗱 𝗜𝗻𝘀𝗶𝗴𝗵𝘁𝘀 converge to reshape how we generate evidence, deliver scientific content, and measure value.

💡 Not just faster decisions, but 𝘴𝘮𝘢𝘳𝘵𝘦𝘳, 𝘤𝘰𝘯𝘯𝘦𝘤𝘵𝘦𝘥 𝘰𝘯𝘦𝘴 — aligning teams, insights, and actions across the Medical Affairs ecosystem.

📊 Medical Affairs is now anchored in 𝗾𝘂𝗮𝗹𝗶𝘁𝗮𝘁𝗶𝘃𝗲 𝗶𝗻𝗱𝗶𝗰𝗮𝘁𝗼𝗿𝘀, 𝗰𝗼𝗻𝘁𝗲𝘅𝘁𝘂𝗮𝗹 𝗞𝗣𝗜𝘀, and 𝗲𝗮𝗿𝗹𝘆 𝘀𝗶𝗴𝗻𝗮𝗹𝘀 𝗼𝗳 𝗶𝗺𝗽𝗮𝗰𝘁.

Speakers noted: 𝗔𝗱𝗼𝗽𝘁𝗶𝗼𝗻 𝗶𝘀 𝘁𝗵𝗲 𝗿𝗲𝗮𝗹 𝗰𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲. How do we integrate GenAI responsibly into workflows, balance automation with human expertise, and build the trust needed for sustained use?

🧠 GenAI should serve as a 𝘤𝘰𝘨𝘯𝘪𝘵𝘪𝘷𝘦 𝘢𝘮𝘱𝘭𝘪𝘧𝘪𝘦𝘳 — but only when supported by 𝘁𝗿𝗮𝗻𝘀𝗽𝗮𝗿𝗲𝗻𝗰𝘆 and 𝗲𝘅𝗽𝗹𝗮𝗶𝗻𝗮𝗯𝗶𝗹𝗶𝘁𝘆.

At the event, we saw 𝗻𝗲𝘄 𝗚𝗲𝗻𝗔𝗜 𝘁𝗼𝗼𝗹𝘀 already emerging:
• 𝗠𝗦𝗟 𝘁𝗿𝗮𝗶𝗻𝗶𝗻𝗴 powered by real-world data and adaptive learning
• 𝗠𝗟𝗥 𝗼𝗽𝘁𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻 through AI-assisted review

👉 A key rule was shared: 𝗜𝗺𝗽𝗹𝗲𝗺𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻 𝘀𝘁𝗮𝗿𝘁𝘀 𝘀𝗺𝗮𝗹𝗹, with pilots and proof-of-concepts to test, learn, and scale what truly works.

We're evolving into the 𝘁𝗵𝗶𝗿𝗱 𝗲𝗿𝗮 𝗼𝗳 𝗠𝗲𝗱𝗶𝗰𝗮𝗹 𝗔𝗳𝗳𝗮𝗶𝗿𝘀 — defined by orchestration, data fluency, and sustainable systems that persist through change.

#MedicalAffairs #MedicalWriting #MedicalCommunications #GenAI #PharmaInnovation #Pharma #PharmaBrands #DataAnalytics #Insights #MLR #MSL #AcceleratingMedicalAffairs