Comment by time0ut

5 days ago

I have had the same experience within the last 18 months. The storage team came back to me and asked me to spread my ultra-high-throughput write workload across 52 (A-Za-z) prefixes, and then they pre-partitioned the bucket for me.
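The idea is just to fan keys out under one of the 52 single-letter prefixes so each prefix can get its own request-rate partition. A minimal sketch of what that key naming might look like (the function name and key layout are my own illustration, not anything S3-specific):

```python
import random
import string

def partitioned_key(object_name: str) -> str:
    """Prepend a random single-letter prefix (A-Za-z, 52 choices)
    so writes are spread across many S3 key prefixes."""
    prefix = random.choice(string.ascii_letters)
    return f"{prefix}/{object_name}"

key = partitioned_key("events/2024-01-01/payload.json")
```

The trade-off is that listing or reading back a logical group of objects now means touching all 52 prefixes, so it only makes sense for workloads where writes dominate.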

S3 will automatically do this over time now, but I think there are (or were) still edge cases. I definitely hit one and experienced throttling at peak load until we made the change.

That sounds like the problem we were having. Lots of writes to a prefix over a short period of time, and then low activity to it after about two weeks.