Comment by koolba
2 years ago
Is there any validation or authorization for changes being merged back into the root database?
In a traditional client / server model, the server has an opportunity to validate each request and optionally reject it. The lower level you go with the sync protocol (data changes vs high level requests) the more difficult that becomes.
Have you addressed that and, if so, how? What prevents a malicious client from sending arbitrary data streams that get synced into the root database?
Hey,
You can see our database rules spec here: https://electric-sql.com/docs/api/ddlx
We haven't implemented it all yet but you can see the intention / direction. It's similar to RLS but adapted for the context.
Connections are authenticated by signed JWT: https://electric-sql.com/docs/usage/auth
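To give a concrete sense of that, a minimal sketch of minting such a token (the secret handling and expiry here are illustrative; the important part is the user_id claim):

```typescript
// Rough sketch: mint a token in your own auth backend, then pass it to the
// client when it connects. Secret name and expiry are illustrative; the
// only claim relied on here is user_id.
import jwt from 'jsonwebtoken'

const AUTH_SECRET = process.env.AUTH_SECRET!  // shared with the sync service

export function mintAuthToken(userId: string): string {
  return jwt.sign({ user_id: userId }, AUTH_SECRET, { expiresIn: '1h' })
}

// e.g. const token = mintAuthToken(currentUser.id)
```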
We also auto-generate a type-safe data access client from the electrified subset of the Postgres schema. This applies type-level write validation and will apply general write validation when we get there.
James.
> It's similar to RLS but adapted for the context.
I'm visiting the docs.
> ...This must have at least a user_id claim, which should be a non-empty string matching the primary key UUID of the authenticated user...
You should probably strike this from your docs. It sounds like you are still figuring out how this should work.
The "right" answer is to author an OIDC authentication module for Postgres and license it in a way that Amazon will steal it for the purposes of actual adoption; or try to add it to PGBouncer as a proxy, converting JWTs in the password field into "set local user.id = ..." and similar in a preamble for every sql statement.
Projects like PostgREST have been down this journey and spell out their solutions. It's all a little vague with Supabase; there isn't anything magical about their authorization approach either, but I wouldn't keep looking to them as the gold standard implementation for web backend stuff.
Anyway, none of this exists in SQLite, which is a huge pain in the butt. SQLite looks promising, and then it's missing all this stuff, because it doesn't make any sense for SQLite to have row level security or JWTs or whatever. That's the issue isn't it? Like why not step back and say, "is SQLite right for me, if it's not only missing all these features I need, but also they will never be added to SQLite because it doesn't make sense for SQLite to have them?"
Separately, when I visit https://electric-sql.com/docs/intro/local-first, it's ironic: the local-first box says its latency is 3ms, but because it had to load all this code and stuff, it took longer than the 237ms of the "cloud-first" box for that 3ms number to even appear. I've been here before, I was a Meteor developer, I am cognizant of the tradeoffs and what this measurement is saying. There's no such thing as a free lunch.
> Projects like postgrest have been down this journey and spell out their solutions. It's all a little vague with Supabase
Just in case it's not entirely clear: supabase is just PostgreSQL + PostgREST. We contribute to + maintain PostgREST, so if it works with PostgREST it also works with Supabase.
> there isn't anything magical about their authorization approach either
I 100% agree with this, and that's intentional. We don't want to do anything special here; we want our solutions to be as interoperable as possible with existing approaches.
I understand it'd be necessary to implement all auth and check rules in SQL, using roles and CHECK constraints.
What's the alternative in cases where I need more advanced checking before doing an INSERT/UPDATE that is not possible in SQL?
This is usually done in the backend. The frontend is not a trusted environment.
So, I'd just send a request to the backend, perform the checks, modify the data in Postgres and then it'd sync to the clients?
The route that we went with for PowerSync (disclosure: co-founder) is to allow you to define your own function for handling writes, where you would use your (presumably existing) backend application API to persist the writes to Postgres. Therefore you can handle any validation/authZ/business logic for writes in your backend layer.
The PowerSync client SDK still handles queueing and retrying of writes, so they are automatically retried when network connectivity is not available; whenever there is a retry, your callback function is called. (As a result of this approach, your write function should be idempotent; we recommend using GUIDs/UUIDs generated client-side for primary keys.)
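The shape of that write function is roughly the following (illustrative only, not our actual SDK surface; the names and endpoint are made up):

```typescript
// Illustrative only. The point of the pattern: the client SDK owns the
// queue/retry loop, your function owns persistence, so it can reuse whatever
// validation/authZ your backend API already does.
interface PendingWrite {
  id: string          // client-generated UUID, so retries are idempotent upserts
  table: string
  op: 'insert' | 'update' | 'delete'
  data: Record<string, unknown>
}

// Called by the sync client whenever there are queued writes and connectivity.
export async function uploadWrites(batch: PendingWrite[], authToken: string): Promise<void> {
  const res = await fetch('https://api.example.com/writes', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${authToken}`,
    },
    body: JSON.stringify(batch),
  })
  if (!res.ok) {
    // Throwing tells the client to keep the writes queued and retry later.
    throw new Error(`upload failed: ${res.status}`)
  }
}
```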
Similar to Electric, PowerSync also uses JWT for auth against your backend.
There are some architecture diagrams explaining this on our docs root here: https://docs.powersync.co/
Can you please stop posting as much about PowerSync on a Show HN thread centered on ElectricSQL?
You are really taking the expression `shameless plug` to another level in this thread, aren't you?
Genuinely excited about this space; it's what I'm focused on full-time, so I definitely have thoughts to share. I am wary of self-promotion. I do want to contribute things that I feel are relevant to the discussion, since I assume folks would be interested to see different patterns/approaches around local-first/offline-first architecture.
Not the author, but PostgreSQL has constraint triggers that can run procedures / functions on insert/update/delete, to allow/reject a given row or statement. That would be one way to confirm that a given update from a client is valid, from the POV of the application.
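Something along these lines (table and rule invented for illustration); the point is that the trigger function can run arbitrary procedural checks, including cross-row ones a plain CHECK constraint can't express:

```typescript
// Sketch of the constraint-trigger approach, installed via a migration.
// Rejects an insert if the application-level rule (here: at most 10 open
// orders per user) is violated, even though the row passes FK/CHECK constraints.
import { Pool } from 'pg'

const pool = new Pool()

export async function installOrderLimitTrigger() {
  await pool.query(`
    CREATE OR REPLACE FUNCTION check_open_order_limit() RETURNS trigger AS $$
    BEGIN
      IF (SELECT count(*) FROM orders
           WHERE user_id = NEW.user_id AND status = 'open') > 10 THEN
        RAISE EXCEPTION 'too many open orders for user %', NEW.user_id;
      END IF;
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE CONSTRAINT TRIGGER enforce_open_order_limit
      AFTER INSERT ON orders
      FOR EACH ROW EXECUTE FUNCTION check_open_order_limit();
  `)
}
```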
The situation I’m considering is data that matches the referential integrity and check constraints of the database, but is malicious. For example, syncing a “salary update to $1M” for yourself into the source database.
Yup, this is what's addressed by write permissions. You can express who can set salaries, plus column-level rules to validate input values.
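In plain Postgres terms (this is not our DDLX syntax, just the equivalent idea expressed as row-level security, with the table, columns and threshold made up):

```typescript
// Plain-Postgres analogue of the salary rule: only a user's manager may
// update their salary row, and the new value must be within bounds.
// Assumes the authenticated user id has been scoped into the transaction
// as the custom GUC 'user.id' (e.g. via SET LOCAL / set_config).
import { Pool } from 'pg'

const pool = new Pool()

export async function installSalaryPolicy() {
  await pool.query(`
    ALTER TABLE salaries ENABLE ROW LEVEL SECURITY;

    CREATE POLICY salary_writes ON salaries
      FOR UPDATE
      USING (manager_id = current_setting('user.id')::uuid)
      WITH CHECK (
        manager_id = current_setting('user.id')::uuid
        AND amount_cents BETWEEN 0 AND 50000000
      );
  `)
}
```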
When it comes to concurrency problems like not spending money twice, the plan is https://electric-sql.com/blog/2022/05/03/introducing-rich-cr...