Comment by MITSardine

5 days ago

> LLMs can write theorems, but can they come up with meaningful definitions?

I intended to imply this with "detection of contradiction". Coherence seems to me to be the only a priori meaning; most of the meaning of "meaning" is a posteriori. After all, what would be the point of an a priori floating signifier?

  • Setting the framework up front (what I shorthanded as "definitions") precludes the exploration, or at least an efficient exploration, of the very results that would feed a framework-defining analysis.

    The search space is far too rich to be explored other than greedily, in timid steps off the trodden path, and the (arbitrary) frameworks set both the highways and the vehicles by which we move along them and off them.

    Now, the argument can be made that this "meta-mathematical" (though really just mathematical) setting of frameworks follows the same structure, and that LLMs could explore that space as well.

    Even assuming that, a major roadblock remains: mathematics should remain understandable by humans, and should yield fast progress in desirable directions (desirable to whom? until now, to humans), so the constraints on admissible frameworks are not as simple as "yields coherent results".

    Also, to take a step back, I wonder how pertinent it is to use numerical math (which is what an LLM runs on) to derive analytical math, when we can already solve a great deal of problems through numerical methods directly. For instance, is it worth spending however many MWh on LLMs to derive an analytical solution to an optimization problem when that solution might itself be very expensive to compute? Human-derived expressions tend to be particularly cheap to evaluate precisely because we are so limited; machines (formal calculus systems, for instance) will happily give you multi-page formulae with thousands of operations to evaluate. Meanwhile, a vast array of algorithms stands at the ready to provide arbitrarily precise solutions (a toy sketch follows below).

    What remains is the kind of math that, arguably, is much more precious to understand than to derive.
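To make that last contrast concrete, here is a minimal sketch (my own toy example, assuming Python with SciPy; the objective and all names are mine, not anything from the thread): a one-line objective whose exact minimizer already requires non-elementary machinery, while a stock numerical routine resolves it to high precision essentially for free.

```python
# Toy example: minimize f(x) = (x - 3)^2 + exp(-x).
# The stationarity condition f'(x) = 2*(x - 3) - exp(-x) = 0 has no
# elementary closed-form solution (it needs the Lambert W function),
# yet a generic scalar optimizer pins down the minimum in microseconds.
import math

from scipy.optimize import minimize_scalar

def f(x):
    return (x - 3) ** 2 + math.exp(-x)

res = minimize_scalar(f)  # Brent's method by default
print(res.x, res.fun)     # minimizer and minimum, to roughly 1e-8
```

The point is not that this particular problem is hard; it is that the cheap, arbitrarily refinable answer here is the numerical one, with no multi-MWh derivation in sight.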