Comment by delusional

13 hours ago

> Constraints and parametrizing are the trivial parts of CAD, something you can now implement in a weekend with Claude Code

You go ahead and try that.

;)

Keywords: Jacobian, Newton-Raphson, Levenberg-Marquardt, Powell dog leg, Schur complements, sparse QR/Cholesky, and so on. The LLM can figure the rest out. Try it yourself!

I recommend Rust because the methods are old and most of the algorithms are already implemented by crates; you just have to wire them together. Like I said, the hard part is the B-rep: you're not going to find anything equivalent to Parasolid or ACIS in the literature or open source.

  • I can't help but find this comment a little insulting. It's very similar to saying "if, while, else, malloc. The LLM can figure the rest out!" as if CS were a solved thing and the whole challenge weren't assembling those elementary bricks together in computationally efficient and robust ways.

    Also, more to the point, I doubt you'll have much success with local optimization on general surfaces if you don't have some kind of tessellation or other spatial structure to globalize things a bit, because you can very easily get stuck in local optima even while doing something as trivial as projecting a point onto a surface. Think of anything that "folds", like a U-shape: a point can be very close to one of the branches, but Newton might still find it on the other side if you seeded the optimizer closer to there. It doesn't matter whether you use vanilla Newton or Newton with tricks up to the gills. And anything to do with matrices will only help with local work as well, because these are non-linear problems.
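To make the "folds" example concrete, here's a minimal sketch (the curve y = x², the test point, and the seeds are made up for illustration). Projecting the point (0.5, 5) onto the U-shaped curve (t, t²) means minimizing the squared distance f(t); Newton on f′(t) happily converges to a stationary point on whichever branch it was seeded near, wrong or not:

```python
def f(t, p=(0.5, 5.0)):
    # squared distance from point p to the U-shaped curve (t, t^2)
    return (t - p[0])**2 + (t**2 - p[1])**2

g  = lambda t: 4*t**3 - 18*t - 1   # f'(t)
dg = lambda t: 12*t**2 - 18        # f''(t)

def newton(g, dg, t, iters=50):
    # vanilla Newton iteration on g(t) = 0
    for _ in range(iters):
        t = t - g(t) / dg(t)
    return t

t_left  = newton(g, dg, -2.0)   # seeded on the far branch
t_right = newton(g, dg,  2.0)   # seeded on the near branch
# both satisfy f'(t) = 0, but only t_right is the true closest point:
# f(t_left) is noticeably larger than f(t_right)
```

Both seeds converge cleanly; nothing in the iteration itself tells you which answer is the global projection, which is exactly why some globalizing structure is needed.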

    "Just work in parameter space" is hardly a solution either, considering many mappings encountered in BREPs are outright degenerate in places or stretch the limits floating point stability. And the same issue with local minima will arise, even though the domain is now convex.

    So I might even reduce your list to: Taylor expansion, linear solver. You probably don't need much more than that; the difficulty is everything else you're not thinking of.

    And remember, this has to be fast, perfectly robust, and keep error under a specified tolerance (ideally something most CAD shops don't even promise).

  • Yeah, but have you tried it? You can throw as many keywords as you want into Claude, but it does get things wrong, sometimes in subtle ways. I’ve tried it, I know.

  • Look, I'm not trying to decimate you here but your list of keywords is wrong and I know it because I explored that list last month for a completely different application.

    The Jacobian is the first-order derivative of a function that accepts a vector as input and produces a vector as output; hence it must be a matrix.
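A quick sketch of that definition (the function F and evaluation point are made up): for F: R² → R², the Jacobian is the 2×2 matrix of partial derivatives, which you can approximate by finite differences and compare against the analytic answer:

```python
def F(v):
    # F: R^2 -> R^2; analytic Jacobian at (x, y) is [[y, x], [1, 1]]
    x, y = v
    return [x * y, x + y]

def jacobian(F, v, h=1e-6):
    # forward-difference approximation: J[i][j] = dF_i / dv_j
    base = F(v)
    m, n = len(base), len(v)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        bumped = list(v)
        bumped[j] += h
        col = F(bumped)
        for i in range(m):
            J[i][j] = (col[i] - base[i]) / h
    return J

J = jacobian(F, [2.0, 3.0])   # analytic answer: [[3, 2], [1, 1]]
```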

    Newton-Raphson is an algorithm for finding the roots (zeroes) of a function. Since the derivative of a function is zero at a minimum, it can be used to solve convex optimization problems.
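Both uses fit in a few lines (the example functions are made up): the same iteration finds a root of f directly, or minimizes g by finding the root of g′:

```python
def newton_raphson(f, df, x, tol=1e-12, max_iter=100):
    # iterate x <- x - f(x)/f'(x) until the step is tiny
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# root finding: f(x) = x^2 - 2 has the root sqrt(2)
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0)

# convex optimization: minimize g(x) = (x - 3)^2 by finding the zero of g'(x)
minimum = newton_raphson(lambda x: 2 * (x - 3), lambda x: 2.0, 0.0)
```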

    Levenberg-Marquardt is another way to solve optimization problems.

    The Powell dog leg method is new to me, but it is just an extension of Gauss-Newton, which you could think of as a special-casing of Newton-Raphson where the objective function is quadratic (useful for objectives with vector norms, i.e. distances between positions).

    Most of the algorithms require solving a linear system to find the zero of the derivative. The Schur complement is a way to factor the linear system into a bunch of smaller linear systems, and sparse QR/Cholesky are an implementation detail of solving linear systems.
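The Schur complement idea in miniature (numbers made up; shown with scalar blocks, so the block inverse is just a division): to solve [[A, B], [C, D]] [x, y]ᵀ = [p, q]ᵀ, eliminate x first, solve a smaller system for y via S = D − C·A⁻¹·B, then back-substitute:

```python
def solve_via_schur(A, B, C, D, p, q):
    # solve the block system [[A, B], [C, D]] [x, y]^T = [p, q]^T
    # via the Schur complement of A (scalar blocks for clarity;
    # with matrix blocks the divisions become linear solves)
    S = D - C * (B / A)          # Schur complement of A
    y = (q - C * (p / A)) / S    # reduced system in y alone
    x = (p - B * y) / A          # back-substitute for x
    return x, y

# 4x + 1y = 9 and 2x + 3y = 8 have the solution x = 1.9, y = 1.4
x, y = solve_via_schur(4.0, 1.0, 2.0, 3.0, 9.0, 8.0)
```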

    Now that we've got the buzzwords out of the way, I will tell you the problem with them. Constraint-solving algorithms are SAT- or SMT-based and generally not optimization-based.

    Consider the humble circle constraint: a^2 + b^2 = c^2. If you have two circles with differing centers and radii, they may intersect, and if they do, they generically intersect at two points. This is readily apparent in the equations, since the square root you take when solving them has two branches. This means you will need some sort of branching inside your algorithm, and the optimization algorithms you listed are terrible at this.
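The two-solution structure is easy to see in closed form (the specific circles below are made up). Intersecting two circles reduces to a point on the center line plus or minus a half-chord offset; that ± is the branch a local optimizer silently picks based on its seed:

```python
import math

def circle_intersections(c1, r1, c2, r2):
    # returns the (up to two) intersection points of two circles
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    if d > r1 + r2 or d < abs(r1 - r2) or d == 0:
        return []                             # no intersection (or coincident centers)
    a = (d * d + r1 * r1 - r2 * r2) / (2 * d) # distance along the center line
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))  # half-chord: the +/- ambiguity
    mx, my = c1[0] + a * dx / d, c1[1] + a * dy / d
    return [(mx + h * dy / d, my - h * dx / d),
            (mx - h * dy / d, my + h * dx / d)]

# circles of radius 5 centered at (0,0) and (6,0) meet at (3, -4) and (3, 4)
pts = circle_intersections((0, 0), 5, (6, 0), 5)
```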

    • Optimization can work well for interactive CAD usage, because geometric sketches tend to start close to the intended solution, and it's fast. It also has the nice property of stability (i.e., with optimization, small changes in parameters tend not to cause large perturbations in the solution). Doing more gets into what you mentioned, which is called chirality or root identification in the literature. That's much more computationally intense, but can be useful, especially when cheaper approaches like optimization fail.