This document surveys data-intensive computing and the technologies used to process large datasets. It defines data-intensive computing as the branch of computing concerned with manipulating and analyzing datasets ranging in size from hundreds of megabytes to petabytes, then characterizes the field's main challenges: designing scalable algorithms, managing metadata, and building suitable high-performance computing platforms and file systems. Specific technologies covered include distributed file systems such as Lustre, MapReduce frameworks such as Hadoop, and NoSQL databases such as MongoDB.
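As a concrete illustration of the MapReduce model mentioned above, the following is a minimal word-count sketch written against Hadoop's Java MapReduce API. It is the canonical introductory example rather than a definitive implementation: mappers emit (word, 1) pairs, the framework groups the pairs by key, and reducers sum the counts. The class name WordCount and the command-line input/output paths are illustrative assumptions.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in its input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sums the counts emitted for each word across all mappers.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    // Hypothetical HDFS paths, supplied on the command line in practice.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

A job like this would typically be packaged into a jar and submitted with `hadoop jar wordcount.jar WordCount <input dir> <output dir>`. Note that the reducer is reused as a combiner, which pre-aggregates counts on each mapper node and reduces the volume of data shuffled across the network.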