Comment by zerocrates
3 hours ago
"The dataset is organized as one Parquet file per calendar month, plus 5-minute live files for today's activity. Every 5 minutes, new items are fetched from the source and committed directly as a single Parquet block. At midnight UTC, the entire current month is refetched from the source as a single authoritative Parquet file, and today's individual 5-minute blocks are removed from the today/ directory."
So it's not really one big file getting replaced all the time, though a less extreme version of that does happen day to day when the month is refetched.
Parquet is a very efficient storage format, and most query tools will happily treat a directory of files laid out like this as partitions of one logical dataset.