This document discusses compile-time type checking for Scala Spark Datasets. It introduces typed Dataset transforms, operations such as map, filter, sort, join, and aggregate, whose field names and types are checked at compile time rather than at run time. The implementation uses Scala macros to analyze case class definitions at compile time and generate meta-structures describing each field and its type. This allows the transforms to be encoded as Spark SQL queries, which benefit from query optimization while still providing strong typing. Code and examples for the transforms are available on GitHub.
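To illustrate the idea, here is a minimal sketch, in plain Scala with no Spark dependency, of what compile-time checking of field names and types buys you. The case class `Person` and the object `TypedTransformDemo` are hypothetical names invented for this example, and a `List` stands in for a Spark `Dataset`; the actual implementation described above would use macros to derive field metadata and translate the transforms into Spark SQL.

```scala
// Hypothetical example: a List stands in for a Spark Dataset.
// The point is that field access goes through the case class, so the
// compiler rejects a misspelled field (p.agee) or a type mismatch
// (p.age >= "18") before the program ever runs.
final case class Person(name: String, age: Int)

object TypedTransformDemo {
  // Typed filter: `_.age` is checked against Person's definition at compile time.
  def adults(people: List[Person]): List[Person] =
    people.filter(_.age >= 18)

  // Typed map: the result type List[String] is inferred from the field's type.
  def names(people: List[Person]): List[String] =
    people.map(_.name)

  def main(args: Array[String]): Unit = {
    val data = List(Person("Ada", 36), Person("Bo", 12))
    println(adults(data).map(_.name)) // prints List(Ada)
    println(names(data))              // prints List(Ada, Bo)
  }
}
```

A string-based query like `df.filter("agee >= 18")` would only fail when executed; the typed form above turns the same mistake into a compile error, which is the guarantee the macro-based transforms aim to provide over real Spark Datasets.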