CMU 11-785 L22 Revisiting EM algorithm and generative models

These notes cover the CMU 11-785 lecture on the EM algorithm and generative models. EM is an iterative technique for estimating probability models when data or information is missing: it alternates between "completing" the data and re-estimating the parameters. PCA is viewed as a generative model for Gaussian data, and factor analysis as another generative model for Gaussian data in which the noise need not be orthogonal. EM addresses the missing-data problem in model estimation; PCA finds the principal subspace that minimizes projection error, can be solved iteratively, and is in effect a linear autoencoder.


Key points

  • EM: An iterative technique to estimate probability models for data with missing components or information
    • By iteratively “completing” the data and reestimating parameters
  • PCA: Is actually a generative model for Gaussian data (see the sketch after this list)
    • Data lie close to a linear manifold, with orthogonal noise
    • A linear autoencoder!
  • Factor Analysis: Also a generative model for Gaussian data
    • Data lie close to a linear manifold
    • Like PCA, but without directional constraints on the noise (not necessarily orthogonal)
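
To make the PCA-as-linear-autoencoder view concrete, here is a minimal numpy sketch; all data, dimensions, and variable names are illustrative and not from the lecture. Encoding is projection onto the top principal directions, decoding is reconstruction with the transposed weights, and the reconstruction error is the energy in the discarded, orthogonal directions.

```python
import numpy as np

# Toy data: 500 points lying close to a 2-D linear manifold in 5-D,
# with small isotropic noise (illustrative values only).
rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 2))                  # latent coordinates
W_true = rng.normal(size=(2, 5))               # manifold directions
X = Z @ W_true + 0.05 * rng.normal(size=(500, 5))

# PCA via SVD of the centered data.
mu = X.mean(axis=0)
Xc = X - mu
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:2].T                                   # top-2 principal directions (5 x 2)

# "Linear autoencoder" view: encode = project, decode = reconstruct.
codes = Xc @ W                                 # encoder: x -> z
recon = codes @ W.T + mu                       # decoder: z -> x_hat

# Mean squared projection error = energy in the discarded directions.
print("MSE:", np.mean((X - recon) ** 2))
```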

Generative models

Learning a generative model

  • You are given some set of observed data X = {x}
  • You choose a model P(x; θ) for the distribution of x
    • θ are the parameters of the model
  • Estimate θ such that P(x; θ) best “fits” the observations X = {x}
  • How to define “best fits”?
    • Maximum likelihood! (see the sketch below)
    • Assumption: The data you have observed are very typical of the process
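
To make “best fits” concrete, here is a minimal maximum-likelihood sketch for the simplest choice of model, a single Gaussian P(x; θ) with θ = (μ, σ²); the sample is made up for illustration. For this model the log-likelihood is maximized in closed form by the sample mean and the (biased) sample variance.

```python
import numpy as np

# Observed data X = {x} (illustrative sample, not from the lecture).
rng = np.random.default_rng(0)
X = rng.normal(loc=2.0, scale=1.5, size=1000)

# For a Gaussian model, maximizing the log-likelihood over theta = (mu, sigma^2)
# has a closed-form solution:
mu_hat = X.mean()    # MLE of the mean
var_hat = X.var()    # MLE of the variance (biased: divides by N)

# Average log-likelihood of the data under the fitted model.
avg_ll = -0.5 * (np.log(2 * np.pi * var_hat) + 1.0)
print(mu_hat, var_hat, avg_ll)
```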

EM algorithm

  • Tackles the missing data and information problem in model estimation
  • Let o denote the observed data and h the hidden (missing) variables

\log P(o) = \log \sum_{h} P(h, o) = \log \sum_{h} Q(h) \frac{P(h, o)}{Q(h)}

  • The logarithm is a concave function, so for any distribution Q(h), Jensen’s inequality gives

\log \sum_{h} Q(h) \frac{P(h, o)}{Q(h)} \geq \sum_{h} Q(h) \log \frac{P(h, o)}{Q(h)}
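
EM turns this lower bound into an algorithm: the E-step sets Q(h) = P(h | o; θ), which makes the bound tight at the current θ, and the M-step maximizes the resulting expected complete-data log-likelihood over θ. As a concrete, hedged illustration, here is a minimal numpy sketch of EM for a two-component 1-D Gaussian mixture, where the hidden variable h is the component assignment; all data and initial values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Observed data o: a 1-D mixture of two Gaussians (hidden h = component id).
o = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

# Initial parameters theta = (mixture weights, means, variances).
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

for _ in range(50):
    # E-step: Q(h) = P(h | o; theta), the posterior over the hidden component.
    # This choice makes the Jensen bound above tight at the current theta.
    lik = pi * np.exp(-0.5 * (o[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    Q = lik / lik.sum(axis=1, keepdims=True)      # responsibilities, N x 2

    # M-step: maximize sum_h Q(h) log P(h, o; theta) over theta (closed form).
    Nk = Q.sum(axis=0)
    pi = Nk / len(o)
    mu = (Q * o[:, None]).sum(axis=0) / Nk
    var = (Q * (o[:, None] - mu) ** 2).sum(axis=0) / Nk

print("weights:", pi, "means:", mu, "vars:", var)
```

Each iteration cannot decrease log P(o): the E-step makes the bound tight, and the M-step can only raise it.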
