Learning is the goal. 📚 Community lead Sree Harsha Nelaturu reminds us that impactful research isn’t about chasing the most complex ideas; it’s about following what excites you and growing through the process. Join our Open Science Community: https://lnkd.in/gTbtNjqf
Cohere Labs
Technology, Information and Internet
We’re Cohere's research lab changing where, how, and by whom ML breakthroughs happen.
About us
Who we are
Cohere Labs is Cohere's research lab that seeks to solve complex machine learning problems. We support fundamental research that explores the unknown, and we are focused on creating more points of entry into machine learning research. Our community is a space where researchers, engineers, linguists, social scientists, and lifelong learners connect and collaborate with each other. We come together from all over the world and welcome you whether you are a mentor, dropout, just getting started, PhD, master's, undergraduate, unaffiliated, industry, academic, or not really sure. We are excited to support community-driven research and to be shaped by our members' interests.
Where we’ve come from
In 2017, a team of friends, classmates, and engineers started a distributed research collaboration focused on creating a medium for early-career AI enthusiasts to engage with experienced researchers; they called it “for.ai.” Two of those co-founding members, Aidan Gomez and Ivan Zhang, later went on to co-found Cohere, and many of the founding members went on to do exciting things (pursuing PhDs, working at industry and academic labs). At the time, For AI was one of the first community-driven research groups to support independent researchers around the world. Today, Cohere is proud to reintroduce For AI as Cohere Labs, a dedicated research lab and community for exploring the unknown, together.
- Website: https://cohere.com/research
- Industry: Technology, Information and Internet
- Company size: 11-50 employees
- Founded: 2022
- Specialties: research, machine learning, and open science
Updates
- “Individually, we are one drop. Together, we are an ocean.” - Ryunosuke Satoro ✨ Cohere Labs is excited to announce Connect - a 3-day virtual conference celebrating the power of collaboration in open science! Join us for inspiring keynotes, lightning talks, and interactive sessions that bring together curious minds from around the world. Throughout the conference, we’ll: 🔬 Showcase cutting-edge research 💡 Highlight meaningful collaborations 🤝 Inspire new partnerships We're excited to hear from speakers including Ivan Zhang, Joelle Pineau, Marzieh Fadaee, Shayne Longpre and 20+ other presenters who will share insights on open science, collaborative research, and community-driven innovation. Learn more and register now: https://lnkd.in/gcS6E9DW
- Our AI Safety and Alignment group is excited to host Hashim Ali for a presentation on "Audio Antispoofing in the Age of Hyper-Realistic Deepfakes: Challenges, Datasets, and Robustness" on October 30th. Thanks to our community leads Abrar Rahman and Alif Munim for organizing this event! 👏 Learn more: https://lnkd.in/gMTMd2kE
- The Art of Asking: Multilingual Prompt Optimization for Synthetic Data 🌍. In multilingual synthetic data generation, relying solely on translation often produces prompts that feel unnatural, Western-centric, or linguistically flat. Most existing methods focus on optimizing completions—the model’s responses—while overlooking the quality of the prompts themselves. But prompts shape the data we generate: completions are only as diverse and representative as the inputs they are conditioned on. Our work introduces a prompt-focused paradigm, transforming translated prompts along three key dimensions: naturalness, cultural authenticity, and complexity. By treating prompts as adaptable, not static, we create richer, more realistic inputs that better mirror human data. These simple transformations yield consistent improvements across languages and benchmarks—especially for open-ended tasks—enhancing fluency, diversity, and challenge in generated outputs. Sometimes, the way we ask a question really does make all the difference 💡✨ Led by: David Mora, Viraat Aryabumi, Wei-Yin Ko, Sara Hooker, Julia Kreutzer, and Marzieh Fadaee. 📜 Paper link: https://lnkd.in/gAHiWmuU
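The post describes the approach only at a high level. As a rough illustration (the function names, instruction strings, and `generate` callable below are hypothetical placeholders, not the paper's implementation), a prompt-focused pipeline of this kind might look like:

```python
from typing import Callable, List, Tuple

# Hypothetical rewrite instructions for the three dimensions named in the post.
TRANSFORMS = {
    "naturalness": "Rewrite this prompt so it reads as if written by a native speaker, keeping its intent:",
    "cultural_authenticity": "Adapt this prompt's names, examples, and references to the target language's cultural context:",
    "complexity": "Make this prompt more challenging while keeping it answerable:",
}

def optimize_prompt(translated_prompt: str, generate: Callable[[str], str]) -> str:
    """Apply the three prompt transformations to a translated prompt, one after another."""
    prompt = translated_prompt
    for instruction in TRANSFORMS.values():
        prompt = generate(f"{instruction}\n\n{prompt}")
    return prompt

def synthesize(translated_prompts: List[str], generate: Callable[[str], str]) -> List[Tuple[str, str]]:
    """Condition completions on optimized prompts rather than on raw translations."""
    pairs = []
    for p in translated_prompts:
        optimized = optimize_prompt(p, generate)
        pairs.append((optimized, generate(optimized)))
    return pairs
```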
- Cohere Labs reposted this:
I’m very happy to share our latest work: The Art of Asking: Multilingual Prompt Optimization for Synthetic Data 🎨🌍. In multilingual synthetic data generation, relying solely on translation often produces prompts that feel unnatural, Western-centric, or linguistically flat. Most existing methods focus on optimizing completions—the model’s responses—while overlooking the quality of the prompts themselves. But prompts shape the data we generate: completions are only as diverse and representative as the inputs they are conditioned on. Our work introduces a prompt-focused paradigm, transforming translated prompts along three key dimensions: naturalness, cultural authenticity, and complexity. By treating prompts as adaptable, not static, we create richer, more realistic inputs that better mirror human data. These simple transformations yield consistent improvements across languages and benchmarks—especially for open-ended tasks—enhancing fluency, diversity, and challenge in generated outputs. Sometimes, the art of asking the right question really does make all the difference 💡✨ To learn more, check out our paper (https://lnkd.in/eJvdWT3X), and special thanks to my mentors and collaborators Marzieh Fadaee, Julia Kreutzer, Viraat Aryabumi, Sara Hooker, Wei-Yin Ko ✨✨✨
- Cohere Labs reposted this:
Thank you Alona Fyshe, Scott Lilwall and the Approximately Correct podcast team at AMII for hosting me for this fun conversation! I continue to believe that responsible development and sharing of open-weight models is an important way to ensure that we make meaningful progress in AI.
There are a lot of benefits to open-source models in AI. But the more open a model is, the more vulnerable it is to being misused. In the latest episode of Approximately Correct, Mila researcher and Chief AI Officer at Cohere Joelle Pineau talks about the marginal risk approach she uses: measuring the change in a specific harm before and after a model’s release to guide safer, more secure AI models. Have a listen: https://hubs.la/Q03PJx6k0 #AI #OpenSourceAI #MachineLearning #ResponsibleAI
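In concrete terms, the marginal risk framing compares the measured rate of a specific harm with and without the released model. A minimal sketch of that comparison (the variable names and example numbers are illustrative, not from the episode):

```python
def marginal_risk(harm_rate_with_model: float, harm_rate_baseline: float) -> float:
    """Change in a specific, measured harm attributable to releasing the model.

    Both inputs are rates of the same harm measured the same way, e.g. the
    fraction of red-team attempts that succeed with the released weights
    versus with only previously available tools.
    """
    return harm_rate_with_model - harm_rate_baseline

# Illustrative numbers only: if the harm occurs in 4% of attempts without the
# new model and 5% with it, the marginal risk of the release is about 0.01.
print(round(marginal_risk(0.05, 0.04), 2))  # 0.01
```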
- Don't forget to join us tomorrow, October 23rd, for a session with Calarina Muslimani on "Towards Improving Reward Design in RL: A Reward Alignment Metric for RL Practitioners." Learn more: https://lnkd.in/gHSmP_u2
Our Reinforcement Learning Group is looking forward to hearing from Calarina Muslimani next week on October 23rd as she presents "Towards Improving Reward Design in RL: A Reward Alignment Metric for RL Practitioners." Thanks to Rahul Narava and Gusti Triandi Winata for organizing this session! ✨ Learn more: https://lnkd.in/gHSmP_u2
- Share Your Research with Our Open Science Community! 🎙️ Our open science community thrives on collaboration and knowledge-sharing across the various fields of machine learning. We’d love to hear from you if: 🔬 You've conducted some exciting research and want to share your findings 📚 You're currently a student looking for an opportunity to present your latest discoveries 👩‍🏫 You’re an ML industry expert ready to share your knowledge and experience Submit your interest today: https://lnkd.in/gkXXsNWh
- Be sure to tune in tomorrow, October 21st, as we welcome Zhiwen (Aaron) F. for a session on "VLM-3R: Vision-Language Models Augmented with Instruction-Aligned 3D Reconstruction." Learn more: https://lnkd.in/g_FP-4FV
Our Computer Vision group is thrilled to welcome Zhiwen (Aaron) F. for a presentation on "VLM-3R: Vision-Language Models Augmented with Instruction-Aligned 3D Reconstruction" next week on Tuesday, October 21st. Thanks to Mayank Bhaskar and Benedict Emoe-kabu for organizing this event! 🤩 Learn more: https://lnkd.in/g_FP-4FV
- Researchers and the public see AI in health care differently. 🏥🤖 While researchers focus on its potential to improve lives, many people remain uncertain or uneasy. Dr. Satish E. Viswanath explores how we can start bridging that divide: https://lnkd.in/gG-cXXyB
Dr. Satish Viswanath - The Case of the Hidden Signatures: Designing Imaging AI to Bridge Patterns
https://www.youtube.com/