📢 We Just Took SPR-Based High-Throughput Screening (HTS) to the Next Level. Excited to introduce the first-ever SPR-based HTS used to discover small-molecule modulators of any stimulatory immune checkpoint.
📄 Dive into the science: https://lnkd.in/exKixX_H
🎯 The target? CD28, a critical T cell costimulatory receptor long considered "undruggable" by small molecules.
📈 Hit rate: 1.14%
⏱️ Runtime: single-day screen
🧪 Platform: real-time, label-free kinetics
The result? DDS5, a CD28 small-molecule binder that:
🛑 disrupts the CD28–CD80 interaction
✔️ remains stably bound over 100 ns of MD simulation
💊 hits a previously unexploited, druggable pocket in CD28
This is not just a screen; it is a platform, a blueprint for drugging costimulatory receptors with small molecules, and a new chapter for what SPR + HTS can deliver.
🙏 Grateful to our amazing team, especially Laura Calvo-Barreiro for leading the effort, and to the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) for funding and supporting this work.
#CD28 #SPR #HTS #DrugDiscovery #CheckpointInhibition #Biophysics #MedicinalChemistry
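For readers less familiar with what a "real-time, label-free kinetics" readout looks like, below is a minimal sketch of the 1:1 Langmuir binding model that underlies SPR kinetic screens: it simulates association and dissociation sensorgrams at a few analyte concentrations. The rate constants, Rmax, and concentrations are made-up illustrative values, not parameters reported for DDS5 or CD28.

```python
# Minimal sketch: simulate 1:1 Langmuir binding sensorgrams as measured by SPR.
# All parameter values below are illustrative assumptions, not data from the paper.
import numpy as np
import matplotlib.pyplot as plt

ka, kd, Rmax = 1.0e4, 1.0e-2, 50.0            # association (1/M/s), dissociation (1/s), max response (RU)
t_assoc = np.linspace(0, 120, 600)            # association phase time points (s)
t_dissoc = np.linspace(0, 180, 900)           # dissociation phase time points (s)

plt.figure(figsize=(6, 4))
for conc in [0.25e-6, 0.5e-6, 1e-6, 2e-6]:    # analyte concentrations (M)
    kobs = ka * conc + kd                     # observed on-rate
    Req = Rmax * ka * conc / kobs             # steady-state response at this concentration
    R_on = Req * (1 - np.exp(-kobs * t_assoc))            # association phase
    R_off = R_on[-1] * np.exp(-kd * t_dissoc)             # dissociation phase
    t = np.concatenate([t_assoc, t_assoc[-1] + t_dissoc])
    plt.plot(t, np.concatenate([R_on, R_off]), label=f"{conc * 1e6:.2f} uM")

plt.xlabel("time (s)")
plt.ylabel("response (RU)")
plt.legend(title="analyte")
plt.title("Simulated 1:1 binding sensorgrams (KD = kd/ka = 1 uM)")
plt.tight_layout()
plt.savefig("sensorgrams.png", dpi=200)
```

Fitting measured sensorgrams to this model is what yields the on/off rates and affinities that make a single-day, label-free kinetic screen possible.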
Innovations in Drug Screening Techniques
Explore top LinkedIn content from expert professionals.
-
Delighted to see this 👇 finally getting out! Latest work published in npj Drug Discovery: "Evaluation of DNA-Encoded Library and Machine Learning Model Combinations for Hit Discovery" 🔗 https://rdcu.be/ef6mN
We evaluated how different combinations of DNA-encoded libraries (DELs) and machine learning (ML) models affect hit discovery, using three distinct DELs and five ML algorithms, for a total of 15 DEL+ML combinations.
Key highlights:
➡️ Screened >140,000 compounds for binders to two drug targets (CK1α/δ)
➡️ ChemProp (a graph neural network) and an MLP (feed-forward neural network) outperformed traditional ML models
➡️ Identified two nanomolar binders and validated >90% of predicted non-binders
➡️ Open-sourced the best-performing ML models and training data: GitHub Repo
This work shows how pairing DELs with ML can significantly accelerate early-stage drug discovery, enabling scalable virtual screening of diverse, drug-like libraries.
Kudos to the amazing team at the Broad Institute of MIT and Harvard, former and current colleagues - Wei Jiang, Eric H., Tonia Aristotelous, Shuang Liu, Andrew Reidenbach, Cerise R., Alison Leed, @Chengkuan Chen, Larry Chung, Eric Sigel, Alex Burgin, Sandy Gould, and of course -- Holly Soutter :)
#DrugDiscovery #MachineLearning #DNAEncodedLibraries #Bioinformatics #AIinBiotech #OpenScience #BroadInstitute
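To make the DEL + ML workflow concrete, here is a minimal sketch of the general idea: train a classifier on DEL selection labels encoded as Morgan fingerprints, then rank an external virtual library by predicted binding probability. The file names, column names, and binary labels are hypothetical placeholders, and the plain scikit-learn MLP stands in for the ChemProp graph model and the other algorithms evaluated in the paper.

```python
# Sketch of a DEL-trained ML model used for virtual screening.
# Input files, column names, and labels are assumed/hypothetical.
import numpy as np
import pandas as pd
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

def featurize(smiles_list, radius=2, n_bits=2048):
    """Encode SMILES as Morgan (ECFP-like) fingerprint bit vectors."""
    rows = []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        if mol is None:
            rows.append(np.zeros(n_bits, dtype=np.int8))   # unparsable SMILES -> empty vector
            continue
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
        rows.append(np.array(fp, dtype=np.int8))
    return np.vstack(rows)

# Hypothetical DEL output: SMILES plus a binary enrichment label per target.
del_data = pd.read_csv("del_selection_ck1.csv")             # columns: smiles, label
X = featurize(del_data["smiles"])
y = del_data["label"].values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
model = MLPClassifier(hidden_layer_sizes=(512, 128), max_iter=200, random_state=0)
model.fit(X_tr, y_tr)
print("held-out ROC-AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

# Score an external virtual library and keep the top-ranked compounds for follow-up.
library = pd.read_csv("virtual_library.csv")                # column: smiles
scores = model.predict_proba(featurize(library["smiles"]))[:, 1]
library.assign(score=scores).nlargest(100, "score").to_csv("top_hits.csv", index=False)
```

The appeal of this setup is that the expensive DEL selection is run once, and the trained model can then be applied to arbitrarily large, purchasable libraries at negligible cost.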
-
Recent advancements in deep learning and generative models have significantly expanded the applications of virtual screening for drug-like compounds. Here, we introduce a multitarget transformer model, PCMol, that leverages latent protein embeddings derived from AlphaFold2 as a means of conditioning a de novo generative model on different targets. Incorporating rich protein representations allows the model to capture their structural relationships, enabling chemical space interpolation of active compounds and target-side generalization to new proteins based on embedding similarities. In this work, we benchmark against other existing target-conditioned transformer models to illustrate the validity of using AlphaFold protein representations over raw amino acid sequences. We show that low-dimensional projections of these protein embeddings cluster appropriately based on target families and that model performance declines when these representations are intentionally corrupted. We also show that the PCMol model generates diverse, potentially active molecules for a wide array of proteins, including those with sparse ligand bioactivity data. The generated compounds display higher similarity to known active ligands of held-out targets and have comparable molecular docking scores while maintaining novelty. Additionally, we demonstrate the important role of data augmentation in bolstering the performance of generative models in low-data regimes. The software package and AlphaFold protein embeddings are freely available at https://lnkd.in/etJCXs8g.
Interesting new generative molecular design paper by Andrius Bernatavicius and the larger team! The text above is from the authors' abstract; the full paper can be found here: https://lnkd.in/eMMxN--3
The audio summary was created using OpenAI's text-to-speech model via the API. The video can also be accessed via YouTube: https://lnkd.in/ea9Akpw2
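As a small illustration of the embedding-clustering check mentioned in the abstract, the sketch below projects per-target protein embeddings to 2D with PCA and colors points by target family. The .npz file, its keys, and the family labels are hypothetical placeholders, not the format the authors distribute with PCMol.

```python
# Sketch: visualize whether per-target protein embeddings cluster by target family.
# The input file and its keys are assumptions for illustration only.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

data = np.load("protein_embeddings.npz")        # hypothetical: emb (n_targets, d), family (n_targets,)
emb, family = data["emb"], data["family"]

coords = PCA(n_components=2).fit_transform(emb)  # linear 2D projection of the embedding space

plt.figure(figsize=(6, 5))
for fam in np.unique(family):
    mask = family == fam
    plt.scatter(coords[mask, 0], coords[mask, 1], s=12, label=str(fam))
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.legend(fontsize=7, markerscale=1.5)
plt.title("Protein embedding space, colored by target family")
plt.tight_layout()
plt.savefig("embedding_pca.png", dpi=200)
```

If related targets land close together in such a projection, nearby embeddings are a plausible basis for generalizing generation to proteins with little or no ligand data.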
-
Traditional drug discovery is slow and costly, often taking over 10 years and $1 billion to develop a new therapy. Existing computational approaches still rely on searching limited molecule libraries instead of designing entirely new candidates.
𝗗𝗶𝗳𝗳𝗦𝗠𝗼𝗹 𝗶𝘀 𝗮 𝗴𝗲𝗻𝗔𝗜 𝗺𝗼𝗱𝗲𝗹 𝘁𝗵𝗮𝘁 𝗱𝗶𝗿𝗲𝗰𝘁𝗹𝘆 𝗴𝗲𝗻𝗲𝗿𝗮𝘁𝗲𝘀 𝗿𝗲𝗮𝗹𝗶𝘀𝘁𝗶𝗰 𝟯𝗗 𝗱𝗿𝘂𝗴 𝗰𝗮𝗻𝗱𝗶𝗱𝗮𝘁𝗲𝘀 𝗰𝗼𝗻𝗱𝗶𝘁𝗶𝗼𝗻𝗲𝗱 𝗼𝗻 𝗯𝗶𝗼𝗹𝗼𝗴𝗶𝗰𝗮𝗹 𝘁𝗮𝗿𝗴𝗲𝘁 𝘀𝗵𝗮𝗽𝗲𝘀.
1. Outperformed all state-of-the-art shape-conditioned models, achieving a 61.4% success rate in generating molecules with shapes highly similar to reference ligands.
2. Generated entirely new molecular graphs while preserving 3D shapes, with 99.9% novelty relative to known datasets.
3. Improved binding affinities by 13.2% using protein pocket guidance and by 17.7% when combined with shape guidance.
4. Produced molecules for critical targets like CDK6 and neprilysin with higher predicted binding affinities and better ADMET profiles than existing FDA-approved drugs.
5. Ran more than 10× faster than previous protein-conditioned models, generating high-affinity candidates with favorable drug-likeness and synthetic accessibility.
The two-stage architecture balances expressivity and control:
1. Pretraining a dedicated shape encoder (SE)
2. Driving a shape-conditioned diffusion model (DIFF) with a multilayer GVP-based graph network (SMP) and shape-aware scalar/vector fusion (SARL)
But I think unifying or streamlining modules (for instance, replacing the separate SARL/BTRL blocks with an attention-based equivariant layer) could reduce parameter counts and simplify training without sacrificing performance. Also, the inference efficiency (0.46 s per molecule for DiffSMol+p vs. 77.89 s for AR) demonstrates readiness for high-throughput screening, even if training could be made somewhat more efficient. Cool to see such fast inference, though.
Here's the awesome work: https://lnkd.in/gR8AvWwD
Congrats to Ziqi Chen, Bo Peng, Tianhua Zhai, Daniel Adu-Ampratwum & Xia Ning!
I post my takes on the latest developments in health AI – 𝗰𝗼𝗻𝗻𝗲𝗰𝘁 𝘄𝗶𝘁𝗵 𝗺𝗲 𝘁𝗼 𝘀𝘁𝗮𝘆 𝘂𝗽𝗱𝗮𝘁𝗲𝗱! Also, check out my health AI blog here: https://lnkd.in/g3nrQFxW
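For a concrete sense of what "shape similarity to a reference ligand" can mean in practice, here is a rough RDKit sketch: embed 3D conformers, align the generated molecule onto the reference with Open3DAlign, and report one minus the shape-Tanimoto distance. The SMILES are placeholders, and this generic metric is an approximation for illustration, not the exact evaluation protocol used in the DiffSMol paper.

```python
# Sketch: score 3D shape similarity between a generated molecule and a reference ligand.
# SMILES strings are placeholders; this is a generic RDKit-based approximation.
from rdkit import Chem
from rdkit.Chem import AllChem, rdMolAlign, rdShapeHelpers

def embed_3d(smiles, seed=42):
    """Build a single 3D conformer for a SMILES string."""
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    AllChem.EmbedMolecule(mol, randomSeed=seed)   # generate a 3D conformer
    AllChem.MMFFOptimizeMolecule(mol)             # quick force-field relaxation
    return mol

reference = embed_3d("CC(=O)Oc1ccccc1C(=O)O")     # placeholder reference ligand
generated = embed_3d("CC(=O)Nc1ccccc1C(=O)O")     # placeholder generated molecule

# Align the generated conformer onto the reference (Open3DAlign), then compare volumes.
rdMolAlign.GetO3A(generated, reference).Align()
shape_sim = 1.0 - rdShapeHelpers.ShapeTanimotoDist(generated, reference)
print(f"shape Tanimoto similarity: {shape_sim:.2f}")
```

Counting how often generated molecules exceed a chosen similarity threshold against their reference ligands is one simple way to turn this per-pair score into a success-rate figure like the one quoted above.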