
Comment by londons_explore

5 days ago

Median database workloads are probably doing writes of just a few bytes per transaction, e.g. 'set last_login_time = now() where userid=12345'.

Because the interface between the SSD and the host OS is block based, you are forced to write a full 4k page. That means you still benefit from a write-ahead log to batch all those changes together, at least up to page size, if not larger.
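
A rough sketch of that batching idea (often called group commit): the WalWriter class and its record format below are invented for illustration, not any real database's API.

```python
import os
import struct

PAGE_SIZE = 4096  # typical I/O block size presented by the SSD

class WalWriter:
    """Hypothetical group-commit buffer: many tiny records, one 4 KiB write + fsync."""

    def __init__(self, path: str):
        self.fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
        self.buf = bytearray()

    def append(self, record: bytes) -> None:
        # Length-prefix each record so a reader can split the page back apart.
        self.buf += struct.pack("<I", len(record)) + record
        assert len(self.buf) <= PAGE_SIZE, "toy example: a batch must fit in one page"

    def commit(self) -> None:
        # Pad the batch out to a full page: the device writes 4 KiB either way,
        # so committing N tiny updates costs about the same as committing one.
        os.write(self.fd, bytes(self.buf).ljust(PAGE_SIZE, b"\x00"))
        os.fsync(self.fd)  # one durable write for the whole batch
        self.buf.clear()

wal = WalWriter("updates.wal")
for user_id in (12345, 12346, 12347):
    wal.append(f"last_login_time,{user_id}".encode())
wal.commit()  # three logical updates, one physical 4 KiB write
```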

A write-ahead log isn't a performance tool to batch changes; it's a tool to get durability of random writes. You write your intended changes to the log, fsync it (which means you get a 4k write), then make the actual changes on disk just as if you didn't have a WAL.
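
That ordering, sketched with plain files (the function, paths, and record format here are made up for illustration):

```python
import os

def durable_update(log_path: str, data_path: str, offset: int, payload: bytes):
    """Write-ahead ordering: log the intent, fsync the log, then do the random write."""
    # 1. Append the intended change to the log and make it durable first.
    log_fd = os.open(log_path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    os.write(log_fd, f"{offset}:".encode() + payload + b"\n")
    os.fsync(log_fd)  # this is where the (at least) 4 KiB write happens
    os.close(log_fd)

    # 2. Only now perform the random in-place write to the data file.
    #    If we crash before or during this step, recovery replays the log.
    data_fd = os.open(data_path, os.O_WRONLY | os.O_CREAT, 0o644)
    os.lseek(data_fd, offset, os.SEEK_SET)
    os.write(data_fd, payload)
    os.fsync(data_fd)
    os.close(data_fd)
```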

If you want to get some sort of sub-block batching, you need a structure that isn't random in the first place, for instance an LSM (where you write all of your changes sequentially to a log and then do compaction later), and then solve your durability in some other way.
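
A toy sketch of that LSM shape, with invented names, and with memtable durability deliberately left out since, as the comment says, that has to be solved some other way:

```python
class TinyLSM:
    """Toy LSM: buffer writes in a memtable, flush sorted runs sequentially,
    and merge them during compaction. Memtable durability is out of scope."""

    def __init__(self, memtable_limit: int = 4):
        self.memtable = {}       # recent writes, held in memory
        self.sstables = []       # immutable sorted runs, oldest first
        self.memtable_limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value           # no random I/O, just buffering
        if len(self.memtable) >= self.memtable_limit:
            self.flush()

    def flush(self):
        # One flush = one sequential write of the sorted memtable contents.
        self.sstables.append(sorted(self.memtable.items()))
        self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        for run in reversed(self.sstables):  # newest run wins
            for k, v in run:
                if k == key:
                    return v
        return None

    def compact(self):
        # Merge all runs into one, keeping only the newest value per key.
        merged = {}
        for run in self.sstables:            # oldest first, so later runs overwrite
            merged.update(dict(run))
        self.sstables = [sorted(merged.items())]
```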

  • > A write-ahead log isn't a performance tool to batch changes; it's a tool to get durability of random writes.

    Why not both?

  • You can unify the database with the write-ahead log by using a persistent data structure. It also gives you cheap/free snapshots/checkpoints (sketched below).
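
One way to read that: with a path-copying (persistent) tree, every update produces a new root while old roots stay valid, so a snapshot is just a retained root pointer, and appending the newly created nodes doubles as the log. A toy sketch, not any particular database's design:

```python
class Node:
    """Immutable node of a persistent BST; an update copies only the path to the root."""
    __slots__ = ("key", "value", "left", "right")

    def __init__(self, key, value, left=None, right=None):
        self.key, self.value, self.left, self.right = key, value, left, right

def insert(root, key, value):
    # Path copying: nodes off the search path are shared with the old version.
    if root is None:
        return Node(key, value)
    if key < root.key:
        return Node(root.key, root.value, insert(root.left, key, value), root.right)
    if key > root.key:
        return Node(root.key, root.value, root.left, insert(root.right, key, value))
    return Node(key, value, root.left, root.right)

def lookup(root, key):
    while root is not None:
        if key == root.key:
            return root.value
        root = root.left if key < root.key else root.right
    return None

v1 = insert(None, 12345, "2024-01-01")
v2 = insert(v1, 12345, "2024-06-01")        # new version; v1 is untouched
assert lookup(v1, 12345) == "2024-01-01"    # v1 is a free snapshot/checkpoint
assert lookup(v2, 12345) == "2024-06-01"
```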

WALs are typically DB-page-level physical logs, and database page sizes are often larger than the I/O page size or the host page size.

I keep seeing this crap-ass design where developers or some product a-hole decides the users schema needs a last_login_time field right in the table. It's a shit design: it doesn't scale, and it isn't necessary at all to get that one piece of information.

Use a damned log/journal table if you absolutely must, but make sure it's a fire-and-forget, non-transactional insert, because you morons keep writing software that locks the whole users table, and that causes huge performance issues.
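
The suggested pattern, sketched with sqlite3 purely so the snippet runs standalone; the table and column names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id INTEGER PRIMARY KEY, name TEXT)")
# Append-only journal: logins are inserted, never updated, so the users row
# (and its locks) are never touched on the login hot path.
conn.execute("CREATE TABLE login_events (user_id INTEGER, logged_in_at TEXT)")

conn.execute("INSERT INTO users VALUES (12345, 'alice')")
conn.execute("INSERT INTO login_events VALUES (12345, datetime('now'))")  # fire and forget

# "Last login" is derived on demand instead of stored on the user row.
row = conn.execute(
    "SELECT MAX(logged_in_at) FROM login_events WHERE user_id = ?", (12345,)
).fetchone()
print(row[0])
```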

  • Your rant is mostly valid except for one thing: most DBMSes don’t lock the whole table for a point update. Almost all of them take only a short-lived row lock.

Don't some SSDs have a 512-byte page size?

  • I would guess by now none have that internally. As a rule of thumb, every major flash density increase (SLC, MLC, TLC, QLC) also tended to double the internal page size. There were also internal transfer performance reasons for large sizes. Low-level 16k-64k flash "pages" are common, sometimes with even larger stripes of pages due to the internal firmware/hardware design.

    • Also due to error correction issues. Flash is notoriously unreliable, so you get bit errors _all the time_ (correcting errors is absolutely routine). And you can make more efficient error-correcting codes if you use larger blocks. This is why HDDs went from 512-byte to 4096-byte blocks as well.

  • They might present 512-byte blocks to the host, but internally the SSD almost certainly manages data in larger pages.