Comment by danielheath

17 days ago

The hash technique for uniqueness isn’t supported for indexes because it doesn’t handle hash collisions. The author’s proposed solution suffers the same problem: values which do not already exist in the table will sometimes be rejected because they have the same hash as something that was already saved.

This is completely untrue. While the index only stores the hashes, the table itself stores the full value, and Postgres requires both the hash and the full value to match before rejecting the new row. I.e., duplicate hashes are fine.
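
A minimal sketch of the kind of hash-backed exclusion constraint being discussed, in case it helps (table and column names are made up; assuming PostgreSQL 10+, where hash indexes are crash-safe):

    -- Hypothetical table with a text column too large for a btree index entry
    CREATE TABLE documents (
        id   bigserial PRIMARY KEY,
        body text NOT NULL
    );

    -- Hash-backed exclusion constraint: the index stores only the hash of
    -- body, but when a candidate conflict is found Postgres evaluates the
    -- = operator against the value already stored in the table, so two
    -- distinct values that happen to share a hash are both accepted.
    ALTER TABLE documents
        ADD CONSTRAINT documents_body_unique
        EXCLUDE USING hash (body WITH =);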

  • For a unique index, that requires a table check, which - IIRC - isn’t implemented during index updates.

    The article suggests using a check constraint to get around that - are you saying that does actually check the underlying value when USING HASH is set? If so, I think the docs need updating.

  • This is very good to know, because it means this exclusion constraint workaround is a better approach than using a SQL hash function and a btree if you want to enforce uniqueness on values too long for a btree index (rough comparison sketched below).
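
    For comparison, a rough sketch of the hash-function-and-btree alternative, using the same hypothetical documents table as above (made-up example): two distinct bodies that happened to share an md5 value would be wrongly rejected as duplicates here.

        -- Unique btree over a digest of the value: the entries fit in a
        -- btree, but a genuine md5 collision between two different bodies
        -- would be (incorrectly) reported as a duplicate, unlike the
        -- exclusion constraint sketched above.
        CREATE UNIQUE INDEX documents_body_md5_key
            ON documents (md5(body));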