This document discusses performance testing at the edge using dynaTrace. It describes the shortcomings of traditional approaches such as waterfall charts and profiling, which capture one-off snapshots rather than supporting ongoing monitoring and comparison over time. The dynaTrace approach instead uses a distributed architecture to monitor many platforms and configurations. The document also covers lessons learned: make testing continuous, avoid re-running tests, and focus each test on specific metrics, such as response time and garbage collection, so that measurements are easier to interpret.
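As a rough illustration of what focusing a test on specific metrics can look like, the sketch below measures response time and garbage-collection time together for a single operation on the JVM, using the standard `java.lang.management` GC beans. This is not taken from the document; the class name `FocusedMetricsProbe` and the workload in `runOperationUnderTest` are hypothetical placeholders.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class FocusedMetricsProbe {
    public static void main(String[] args) {
        // Snapshot cumulative GC time before running the operation under test.
        long gcBefore = totalGcTimeMillis();
        long start = System.nanoTime();

        runOperationUnderTest(); // hypothetical workload being measured

        long responseTimeMillis = (System.nanoTime() - start) / 1_000_000;
        long gcTimeMillis = totalGcTimeMillis() - gcBefore;

        // Reporting both metrics separately lets a run slowed by GC
        // be distinguished from one slowed by the code itself.
        System.out.printf("response time: %d ms (of which GC: %d ms)%n",
                responseTimeMillis, gcTimeMillis);
    }

    // Sum accumulated collection time across all collectors via the JMX beans.
    private static long totalGcTimeMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // -1 if the collector does not report it
            if (t > 0) {
                total += t;
            }
        }
        return total;
    }

    private static void runOperationUnderTest() {
        // Placeholder workload: allocate some garbage to exercise the collector.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1_000_000; i++) {
            sb.append(i);
        }
    }
}
```

Recording the two numbers side by side on every run is what makes comparison over time meaningful: a regression in response time that is fully explained by GC time points at allocation behavior rather than at the code path itself.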