This document discusses perturbed proximal gradient algorithms for minimizing composite objectives that are the sum of a smooth and a nonsmooth term. It focuses on the case where the gradient of the smooth term is intractable and must be approximated, for instance by Markov chain Monte Carlo methods. Convergence results are outlined for stochastic proximal gradient descent with biased gradient approximations, together with an accelerated Nesterov-based variant. Open questions are also discussed, concerning variance reduction techniques, averaging strategies, the maximal achievable convergence rates, and the potential benefits of faster increasing sequences.
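To make the setting concrete, the following is a minimal sketch of a perturbed proximal gradient iteration on a lasso-type problem, minimizing 0.5*||Ax - b||^2 + lam*||x||_1. The exact gradient is artificially perturbed with Gaussian noise to mimic a Monte Carlo approximation; the problem instance, step-size schedule, and noise model are illustrative assumptions, not taken from the source document.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def perturbed_prox_gradient(A, b, lam, n_iters=2000, noise_scale=0.1, seed=0):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with noisy gradients.

    The exact gradient A^T(Ax - b) is perturbed by Gaussian noise to stand
    in for an MCMC-based approximation (illustrative assumption). The
    diminishing step sizes gamma_n damp the perturbation, in the spirit of
    convergence results for perturbed proximal gradient schemes.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part
    for n in range(1, n_iters + 1):
        grad = A.T @ (A @ x - b)
        grad_noisy = grad + noise_scale * rng.standard_normal(x.shape)
        gamma = 1.0 / (L * np.sqrt(n))  # diminishing step size (assumption)
        # Proximal gradient step: forward step on the smooth part,
        # backward (prox) step on the l1 penalty.
        x = soft_threshold(x - gamma * grad_noisy, gamma * lam)
    return x

# Small synthetic instance: sparse ground truth, mildly noisy observations.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = perturbed_prox_gradient(A, b, lam=0.5)
```

Despite the biased/noisy gradients, the decreasing step sizes let the iterates settle near the lasso solution; this is the basic mechanism whose rates, averaging variants, and acceleration the document analyzes.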