Andrew Ng

Founder of DeepLearning.AI; Managing General Partner of AI Fund; Exec Chairman of LandingAI

When data agents fail, they often fail silently - giving confident-sounding answers that are wrong - and it can be hard to figure out what caused the failure. "Building and Evaluating Data Agents" is a new short course created with Snowflake and taught by Anupam Datta and Josh Reini. This course teaches you to build data agents with comprehensive evaluation built in.

Skills you'll gain:
- Build reliable LLM data agents using the Goal-Plan-Action framework and runtime evaluations that catch failures mid-execution
- Use OpenTelemetry tracing and evaluation infrastructure to diagnose exactly where agents fail and systematically improve performance
- Orchestrate multi-step workflows across web search, SQL, and document retrieval in LangGraph-based agents

The result: visibility into every step of your agent's reasoning, so if something breaks, you have a systematic approach to fix it.

Sign up to get started: https://lnkd.in/gVwKuCCQ
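A minimal sketch of the kind of step-level tracing the post describes, using the standard OpenTelemetry Python SDK (this is not the course's own code): each phase of a toy agent run - goal, plan, act - gets its own span, so a failure can be traced back to the step that produced it. The plan_steps and execute functions are hypothetical stubs standing in for a real LLM planner and tool executor.

```python
# Illustrative only: instrument a toy agent's plan/act steps with OpenTelemetry spans.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Export spans to the console; a real setup would point this at a collector.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("data_agent")

def plan_steps(question: str) -> list[str]:
    # Hypothetical planner stub; a real agent would call an LLM here.
    return [f"search for: {question}", "summarize findings"]

def execute(plan: list[str]) -> str:
    # Hypothetical executor stub; a real agent would run tools (SQL, search, retrieval).
    return f"completed {len(plan)} steps"

def run_agent(question: str) -> str:
    with tracer.start_as_current_span("agent.run") as run_span:
        run_span.set_attribute("agent.goal", question)
        with tracer.start_as_current_span("agent.plan") as plan_span:
            plan = plan_steps(question)
            plan_span.set_attribute("agent.plan", "; ".join(plan))
        with tracer.start_as_current_span("agent.act") as act_span:
            answer = execute(plan)
            act_span.set_attribute("agent.answer", answer)
        return answer

print(run_agent("Which region had the highest Q3 revenue?"))
```

The idea is that the exported traces become the raw material for the runtime evaluations the post mentions: if an answer is wrong, the span attributes show which step drifted from the goal.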

Wesley LIN

Agentic AI Solutions in Energy & Automation I Taiwan Market Entry

1mo

When agents fail, they don't just fail—they deceive, confidently serving up polished lies. This course offers a crucial, systematic approach to this invisible problem. It's a game-changer, moving us from hoping agents work to knowing why they fail, and providing the tools to fix them. It brings a new level of trust to a field where silent failures are the biggest risk. 🙂

Miku Jha

VP of Applied AI, FDE @ServiceNow: Leading Enterprises through Agentic AI transformation | Ex-Google, Ex-Meta | Driving $1B+ AI Revenue | AI/IoT & Interoperability Innovator (A2A) | 5X Founder | Forbes Next 1000

1mo

The emphasis on evaluation is exactly right. For any agentic solution aiming for production, evaluation must be built in from day zero — as a foundational part of the design, not an afterthought. Without it, the system is still an MVP, a pilot, or a POC — but it’s not production-ready or enterprise-grade.

Michael Jidael

Building frontier AI born in Africa for humanity globally | Safe and human-centered AI | From indigenous models to future superintelligence

1mo

Thank you, Andrew Ng. You guys are awesome 🤗 Please do consider adding DeepLearning.AI courses to Coursera Plus. It would mean a lot to the community.

Jean Pierre Mugenga

Founder, Culture & Clarity Ltd | Insights on Workplace Culture, Wellbeing & Ethical AI | Disclaimer: Views expressed here are my own and do not represent those of any organisation I’m affiliated with.

1mo

Finally—a course that teaches AI to stop gaslighting us with confidence and no clue. Half the time these agents sound like they’ve just come back from a TED Talk, but can’t find the exit. Goal-Plan-Action? Love it. Now we just need a module called “Don’t Make Stuff Up, Please.”

Anupam Datta

AI@Snowflake, ex-Founder@TruEra, ex-Professor@CMU

1mo

What is your Agent's GPA or Goal-Plan-Action Alignment?

Agents have goals; they plan and act to achieve their goals, often alternating between planning and action steps to refine their plans after reflecting on the results of their actions. Observing that agent failures arise when their goals, plans, and actions are not aligned, we introduce a framework for evaluating and improving an agent’s GPA or Goal-Plan-Action alignment.

Excited to have developed this course to share our learnings. Try it out hands-on and use the TruLens OSS project as you build and evaluate agents! Wonderful to collaborate with Allison Jia, Daniel Huang, Nikhil Vytla, Shayak Sen, Josh Reini at Snowflake, and John Mitchell at Stanford University.
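To make the GPA idea concrete, here is a minimal sketch in plain Python (it does not use the actual TruLens or course APIs): the two scoring functions are naive keyword-overlap stand-ins for the LLM-judge feedback one would use in practice, but they show the structure of the check - score goal-to-plan and plan-to-action alignment separately so you can see where an agent drifted.

```python
# Illustrative GPA (Goal-Plan-Action) alignment check; not TruLens's API.
from dataclasses import dataclass

@dataclass
class AgentTrace:
    goal: str
    plan: list[str]
    actions: list[str]

def plan_covers_goal(t: AgentTrace) -> float:
    # Naive stand-in for an LLM judge: share of goal terms mentioned in the plan.
    goal_terms = set(t.goal.lower().split())
    plan_text = " ".join(t.plan).lower()
    return sum(term in plan_text for term in goal_terms) / max(len(goal_terms), 1)

def actions_follow_plan(t: AgentTrace) -> float:
    # Naive stand-in: share of plan steps with at least one matching action.
    matched = sum(
        any(step.split()[0].lower() in action.lower() for action in t.actions)
        for step in t.plan
    )
    return matched / max(len(t.plan), 1)

def gpa_report(t: AgentTrace) -> dict[str, float]:
    # Scoring each alignment separately shows *where* the agent drifted.
    return {
        "goal_plan_alignment": plan_covers_goal(t),
        "plan_action_alignment": actions_follow_plan(t),
    }

example = AgentTrace(
    goal="total 2024 revenue by region",
    plan=["query sales grouped by region", "summarize the result"],
    actions=["query: SELECT region, SUM(revenue) FROM sales ...", "summarize: EMEA leads ..."],
)
print(gpa_report(example))
```

In practice each score would come from a feedback function run over the agent's full execution trace; the keyword matching here only exists to keep the sketch self-contained.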

Florian Bansac

AI - Agents - FinTech

1mo

Great! Come share what you build and learn with 7,200+ of us in the AI Agents group on LinkedIn: https://www.linkedin.com/groups/6672014

Additi Upadhyay

Helping companies make their AI agents reliable|Co-founder @ Noveum.ai/API.Market|Ex-Sambanova|Ex-Spenmo(YC S20)|Ex-Playo|B2B Product, Growth|GrowthX Demo Day Winner|IIM Lucknow

1mo

Visibility into what AI agents do and understanding the why behind their failures are absolutely non-negotiable. This is exactly what we help companies with at https://noveum.ai/en. We go a step further and help users with the fix too.

Kenny Li

Co-Founder at Ada.im | Ex-CTO/VP/GM at Ping’an-FinTech/JD.COM/Baidu.

1mo

Couldn't agree more, and these are exactly the principles that Ada.im (an AI Data Analyst) follows. Thank you, Andrew.

Rawan Ryahi

Founder @ AdVigilant | Visual Storytelling, Branding & Identity Design

1mo

The real risk isn’t that data agents fail; it’s that they fail persuasively. Transparency in reasoning isn’t a feature; it’s the foundation of trust.

Dr. Sally Sameh

Lecturer & Researcher | PhD in Electronics & Communications | AI, Security & Computer Vision | SCADA & Renewable Energy | MBA | STEAM Certified

1mo

Silent failures are often overlooked but can be the most costly. Great to see a course tackling this with practical evaluation tools.
