Hadoop stores file data in HDFS as large blocks (64 MB by default in Hadoop 1, raised to 128 MB in Hadoop 2 and later), which amortizes disk-seek overhead over long sequential reads and keeps the amount of per-file metadata small. The namenode manages the filesystem namespace and tracks which datanodes hold each block; datanodes store the blocks themselves and serve them directly to clients. The secondary namenode periodically merges the namenode's edit log into a checkpoint of the filesystem image, easing the namenode's workload, but it is not a hot standby and cannot take over if the namenode fails. Writing a file splits it into blocks and stores replicas of each block across several datanodes (three copies by default); reading a file asks the namenode for each block's locations and streams the data from the datanodes that hold them.
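The block-splitting and replica-placement ideas above can be sketched in a few lines. This is an illustrative simulation, not the real HDFS implementation: the function names, the round-robin placement, and the datanode labels are all invented for the example (real HDFS placement is rack-aware), but the default block size and replication factor match HDFS's `dfs.blocksize` and `dfs.replication` defaults.

```python
# Illustrative sketch only -- not HDFS code. Shows how a file is cut into
# fixed-size blocks and how each block's replicas could be spread over
# datanodes. Real HDFS uses rack-aware placement, not simple round-robin.
BLOCK_SIZE = 128 * 1024 * 1024   # default dfs.blocksize since Hadoop 2
REPLICATION = 3                  # default dfs.replication

def split_into_blocks(file_size, block_size=BLOCK_SIZE):
    """Return (offset, length) pairs covering a file of the given size."""
    blocks = []
    offset = 0
    while offset < file_size:
        length = min(block_size, file_size - offset)
        blocks.append((offset, length))
        offset += length
    return blocks

def place_replicas(num_blocks, datanodes, replication=REPLICATION):
    """Assign each block to `replication` datanodes (round-robin sketch)."""
    placement = []
    for b in range(num_blocks):
        nodes = [datanodes[(b + r) % len(datanodes)] for r in range(replication)]
        placement.append(nodes)
    return placement

# A 300 MB file becomes three blocks: 128 MB, 128 MB, and a 44 MB remainder.
blocks = split_into_blocks(300 * 1024 * 1024)
print(len(blocks))  # -> 3
```

Note that the last block is only as large as the remaining data; HDFS does not pad short final blocks to the full block size.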