
Comment by naasking

3 days ago

> Hm, you could do that quite easily but there isn't much juice to be squeezed from runtime selected data structures. Set with O(1) insert:

But now you've hard-coded this selection. Why can't the performance characteristics also be easily parameterized and combined, e.g. insert is O(1) and delete is O(log n), or declared the way indexes are defined in SQL, which can be changed at any time at runtime? Or maybe the performance characteristics could be inferred from the types of queries run on a collection elsewhere in the code.
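To sketch what I mean (hypothetical names, deliberately naive selection logic; in a real language this choice would be made, and re-made, by the runtime rather than by hand):

```java
import java.util.*;
import java.util.concurrent.ConcurrentSkipListSet;

// Hypothetical sketch: the caller states per-operation complexity
// requirements and a library/runtime picks the concrete structure.
enum Complexity { CONSTANT, LOGARITHMIC, LINEAR }

final class SetSpec {
    final Complexity insert;
    final Complexity delete;
    final boolean ordered;
    SetSpec(Complexity insert, Complexity delete, boolean ordered) {
        this.insert = insert; this.delete = delete; this.ordered = ordered;
    }
}

final class AdaptiveSets {
    // Naive selection, just to show the shape of the idea; a smarter
    // version could infer the spec from the queries actually run.
    static <T extends Comparable<T>> Set<T> setFor(SetSpec spec) {
        if (spec.ordered || spec.insert == Complexity.LOGARITHMIC) {
            return new ConcurrentSkipListSet<>(); // O(log n) insert/delete, ordered
        }
        return new HashSet<>();                   // expected O(1) insert/delete
    }
}

class Demo {
    public static void main(String[] args) {
        Set<String> names = AdaptiveSets.setFor(
            new SetSpec(Complexity.CONSTANT, Complexity.CONSTANT, false));
        names.add("ada");
        System.out.println(names.contains("ada"));
    }
}
```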

> That would turn into an efficient SQL query that does a WHERE ... AND ... clause.

For a database you have to manually construct, with a schema you have to manually (and poorly) match to an object model, using a library or framework you have to painstakingly select from how many options?

You're still stuck in the mentality that you have to assemble a set of distinct tools to get a viable development environment for most general-purpose programming, which is not what I'm talking about. Imagine the relational model built into the language, where you could parametrically specify whether collections need certain efficient operations, whether they need to be durable, whether they need to be atomically updatable, and so on.
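As a purely hypothetical, Java-flavoured sketch (none of these annotations exist; they stand in for declarations a language with a built-in data model could honour):

```java
import java.lang.annotation.*;
import java.util.*;
import java.util.function.Predicate;

// Hypothetical annotations standing in for language-level declarations
// of physical properties: nothing here exists today.
@Retention(RetentionPolicy.RUNTIME) @interface Durable {}
@Retention(RetentionPolicy.RUNTIME) @interface AtomicUpdates {}
@Retention(RetentionPolicy.RUNTIME) @interface IndexOn { String value(); }

// A "relation" whose storage requirements are declared up front rather
// than hand-wired through an ORM, a driver, a cache and a lock manager.
@Durable @AtomicUpdates @IndexOn("email")
record User(long id, String email, String name) {}

class RelationSketch {
    public static void main(String[] args) {
        List<User> users = List.of(
            new User(1, "ada@example.com", "Ada"),
            new User(2, "bob@example.com", "Bob"));

        // A composable, implementation-agnostic "query"; a real language
        // could plan this against whatever index or storage satisfies
        // the declared requirements, and re-plan at runtime.
        Predicate<User> byDomain = u -> u.email().endsWith("@example.com");
        users.stream().filter(byDomain).map(User::name).forEach(System.out::println);
    }
}
```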

There's a whole space of possible languages that have relational or other data models built-in that would eliminate a lot of problems we have with standard programming.

There are research papers that examine this question of whether runtime-optimizing data structures is a win, and it's mostly not, outside of some special cases like strings. Most collections are quite small. Really big collections tend to be either caches (which are often specialized anyway) or live inside databases, where you do have more flexibility.

A language fully integrated with the relational model already exists: PL/SQL. It has features like classes and packages along with 'natural' SQL integration. You can do all the things you ask for: specify which operations on a collection need to be efficient (indexes), whether the data is durable (temporary tables for the cases where it isn't), whether it's atomically updatable (LOCK TABLE ... IN EXCLUSIVE MODE), and so on. It even has a visual GUI builder (APEX), and people do build whole apps in it.
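For the concrete mapping, a hedged JDBC sketch (the connection string, user and table names are placeholders, but the statements are the Oracle constructs named above):

```java
import java.sql.*;

class OracleSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//localhost:1521/XEPDB1", "scott", "tiger");
             Statement st = conn.createStatement()) {
            conn.setAutoCommit(false);

            // "Which operations must be efficient" -> an index, addable/droppable at runtime.
            st.execute("CREATE INDEX emp_email_ix ON employees (email)");

            // "Durable or not" -> a global temporary table is non-durable scratch space.
            st.execute("CREATE GLOBAL TEMPORARY TABLE scratch (id NUMBER) "
                     + "ON COMMIT PRESERVE ROWS");

            // "Atomically updatable" -> take an exclusive lock for the transaction.
            st.execute("LOCK TABLE employees IN EXCLUSIVE MODE");
            st.executeUpdate("UPDATE employees SET email = LOWER(email)");
            conn.commit();
        }
    }
}
```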

Obviously, this approach is not universal. There are downsides. One can imagine a next-gen attempt at such a language that combined the strengths of something like Java/.NET with the strengths of PL/SQL.

  • > There are research papers that examine this question of whether runtime optimizing data structures is a win

    If you mean JIT and similar tech, that's not really what I'm describing either. I'm talking about lifting the time and space complexity of data structures to parameters so you don't have to think about specific details.

    Again, think about how tables in a relational database work: you can write queries against sets without regard for the underlying implementation, and you have external, higher-level tools to tune a running program's data structures for better time or space behavior.

    > A language fully integrated with the relational model exists, that's PL/SQL

    Not a general-purpose language suitable for most programming, and it's missing all of the expressive language features I described, like type/shape inference, higher-order queries, query composition, and so on. See my previous comments. The tool you mentioned leaves a lot to be desired.

    • I guess the closest to that I've seen would be something like Permazen with some nice syntax sugar on top. It's not the relational model, but it does simplify away a lot of the complexity of the object-relational mismatch (for Java) whilst preserving the expressiveness of a 'full' mainstream language.
