Comment by goblin89
13 days ago
SoundCloud used to be good prior to the redesign.
Recently I decided to evaluate it for serious use and start posting there again, only for their new uploader to tell me I needed to switch to a paid plan, even though I triple-checked that I was well within the free limits, and that under my old, now-unused username I had uploaded a lot more (mostly experimental things I'm not that proud of anymore).
It looks like their microservices architecture is in chaos and some system overrides the limits outlined in the docs with stricter ones. How can I be sure they'll respect the new limits once I do pay, instead of upselling me the next plan in line?
Add to that the general jankiness, the never-ending spam from “get more fake listeners for $$$” accounts (which seem to be in obvious symbiosis with the platform, boosting the numbers for optics), and last year's ambiguous ToS change allowing them to train ML systems on your work, and it was enough for me to drop it. Thankfully it was a trial run, and I had not published any pending releases.
If you still publish on SoundCloud, and you make original music (as opposed to publishing, say, DJ sets, where dealing with IP is problematic), ask yourself whether it is time to grow up and do proper publishing!
This sounds like a classic consistency-vs-latency trade-off. Enforcing strict quotas across distributed services usually requires coordination that kills performance, so they likely rely on asynchronous counters that drift: the frontend check passes, but the backend reconciliation fails later. It is surprisingly hard to solve this without making the uploader feel sluggish.
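To make the drift concrete, here is a toy sketch of the shape of the problem: the upload form checks a lagging replica of the usage counter, while reconciliation checks the authoritative one. All names and numbers here are hypothetical, not anything from SoundCloud's actual system.

```python
class QuotaCounter:
    """Toy quota counter with an asynchronously replicated read copy."""

    def __init__(self, limit_minutes):
        self.limit = limit_minutes
        self.primary = 0   # authoritative count, updated synchronously
        self.replica = 0   # read-optimized copy, updated asynchronously

    def record_upload(self, minutes):
        self.primary += minutes  # note: replica is NOT updated here

    def replicate(self):
        self.replica = self.primary  # happens "eventually"

    def frontend_check(self, minutes):
        # fast path: reads the (possibly stale) replica
        return self.replica + minutes <= self.limit

    def reconcile(self, minutes):
        # slow path: reads the authoritative counter
        return self.primary + minutes <= self.limit


q = QuotaCounter(limit_minutes=180)
q.record_upload(170)                 # earlier uploads, not yet replicated

ok_frontend = q.frontend_check(20)   # replica still says 0 used -> passes
ok_backend = q.reconcile(20)         # primary says 170 used -> fails
print(ok_frontend, ok_backend)       # True False
```

The point is that neither check is "wrong" in isolation; they just read different copies of the same counter at different times.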
That would explain why the front-end would allow you to attempt something that goes over your limits, but not why the back-end would reject something that doesn't go over your limits.
My bet at the time was that they have a bunch of hidden extra limits based on account age, IP/user-agent information, etc. If that is true, their problem is that they advertise the larger limits instead of the smaller ones (to get more users signed up), and that they do not communicate when the extra limits apply, instead straight up upselling you, which are both dark patterns.
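If that theory holds, the effective quota would be the advertised number clamped by per-account signals. A minimal sketch of that shape, with entirely made-up thresholds (none of these values are real SoundCloud limits):

```python
# Hypothetical "hidden stricter limits" model: the advertised quota is only
# an upper bound, and per-account risk signals silently shrink it.

ADVERTISED_FREE_MINUTES = 180  # what the docs say

def effective_limit(account_age_days, flagged_ip):
    limit = ADVERTISED_FREE_MINUTES
    if account_age_days < 30:
        limit = min(limit, 60)   # assumed: new accounts throttled harder
    if flagged_ip:
        limit = min(limit, 30)   # assumed: suspicious network reputation
    return limit

# A brand-new account on a flagged IP would see a far smaller quota than
# the docs advertise, and the only feedback it gets is an upsell prompt.
print(effective_limit(account_age_days=5, flagged_ip=True))     # 30
print(effective_limit(account_age_days=400, flagged_ip=False))  # 180
```

The dark pattern is exactly the gap between `ADVERTISED_FREE_MINUTES` and what `effective_limit` returns, with no error message explaining it.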
Fair point. I suspect it comes down to ghost reservations or stale caches. If a previous upload failed mid-flight but didn't roll back the quota reservation immediately, the backend thinks you're over the limit until a TTL expires. Or you delete something to free up space, but the decrement hasn't propagated to the replica checking your quota yet.
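A sketch of the ghost-reservation idea: quota gets reserved before the upload starts, and if the upload dies mid-flight without a rollback, the reservation only disappears when its TTL expires. Class names and TTL values are illustrative assumptions, not anyone's actual implementation.

```python
class QuotaLedger:
    """Quota with up-front reservations that expire via TTL, not rollback."""

    def __init__(self, limit, ttl_seconds):
        self.limit = limit
        self.ttl = ttl_seconds
        self.committed = 0
        self.reservations = {}   # upload_id -> (minutes, created_at)

    def _live_reserved(self, now):
        # lazily expire stale reservations
        self.reservations = {
            uid: (m, t) for uid, (m, t) in self.reservations.items()
            if now - t < self.ttl
        }
        return sum(m for m, _ in self.reservations.values())

    def try_reserve(self, upload_id, minutes, now):
        used = self.committed + self._live_reserved(now)
        if used + minutes > self.limit:
            return False
        self.reservations[upload_id] = (minutes, now)
        return True


ledger = QuotaLedger(limit=180, ttl_seconds=900)
ledger.try_reserve("u1", 170, now=0)           # upload starts... then crashes
# no rollback: the 170 minutes stay reserved until the TTL runs out
print(ledger.try_reserve("u2", 20, now=60))    # False -- ghost usage
print(ledger.try_reserve("u2", 20, now=1000))  # True -- TTL expired
```

From the user's side this looks exactly like the complaint upthread: the docs say you have room, but a retry a few minutes later is rejected for no visible reason.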
Fair point. I suspect it comes down to how they handle retries. If an upload times out but the counter already incremented, the system sees the space as used until an async cleanup job runs. It is really common to have ghost usage in eventually consistent systems.
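The retry variant can be sketched in a few lines: the counter increments when an upload is accepted, but a client-side timeout triggers a retry, so the same track is counted twice until something dedupes it. The dedupe-by-id fix shown is a common idempotency technique, assumed here, not anything known about SoundCloud's code.

```python
used_minutes = 0
seen_uploads = set()

def naive_accept(upload_id, minutes):
    """Counts usage even when the response to the client is lost."""
    global used_minutes
    used_minutes += minutes

def idempotent_accept(upload_id, minutes):
    """Ignores retries of a request that was already counted."""
    global used_minutes
    if upload_id in seen_uploads:
        return
    seen_uploads.add(upload_id)
    used_minutes += minutes

# client times out and retries the same upload
naive_accept("track-42", 30)
naive_accept("track-42", 30)         # ghost usage: 60 minutes counted
print(used_minutes)                  # 60

used_minutes = 0
idempotent_accept("track-42", 30)
idempotent_accept("track-42", 30)    # retry deduplicated
print(used_minutes)                  # 30
```

Without an idempotency key on the request, the async cleanup job is the only thing standing between the user and a phantom quota overage.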