
Comment by AlienRobot

7 hours ago

That's one problem with LLMs. I had Claude write a Python function for me that did a bit of math because, like most programmers, I don't know math.

The function worked perfectly, mathematically speaking, but after a bit of research I realized a human being would never write code this bad.

I don't remember exactly, but it looked like this:

    import math
    from functools import reduce

    denominators = [...]

    def lcm(a, b):
        return abs(a * b) // math.gcd(a, b)

    return reduce(lcm, denominators)

There are 2 problems with this code.

First, that is the correct way to calculate the LCM, which you'll quickly learn if you google it (or if you ask Claude). The problem: math.lcm already exists! Any human being writing this would have paused to think, "wait, Python has math.gcd, does it have math.lcm as well?" And then they would have just used that.

Second, you don't even need reduce. You can just call math.lcm(*denominators). A human being would have realized this when IntelliSense showed that it takes any number of arguments, not just two.
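To be concrete, with math.lcm (available since Python 3.9) the whole thing collapses to one line. The denominators here are illustrative values I picked, not the ones from the original function:

    import math

    denominators = [4, 6, 15]  # illustrative values only

    # math.lcm is variadic, so no helper function and no reduce needed:
    result = math.lcm(*denominators)
    print(result)  # 60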

Pretty much every time I've used an LLM to generate code, it produces a rough draft, barely held together, that needs to be completely rewritten later. With Qt, for example, it generated two push buttons for Ok/Cancel when QDialogButtonBox exists for exactly this and even orders the buttons to match the typical system order. When generating a combo box that associated labels with objects, it tried to figure out which object an item referred to by parsing the item's label text, when there is already a way to attach an arbitrary object to each item and get it back later with .currentData().
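For anyone unfamiliar with those APIs, here's a rough sketch of the idiomatic versions of both (I'm assuming PyQt5 here; PySide is analogous, and the offscreen platform setting is just so it runs headless):

    import os
    os.environ.setdefault("QT_QPA_PLATFORM", "offscreen")  # allow headless runs

    from PyQt5.QtWidgets import QApplication, QComboBox, QDialogButtonBox

    app = QApplication([])

    # Standard Ok/Cancel pair, ordered per the platform's conventions:
    buttons = QDialogButtonBox(QDialogButtonBox.Ok | QDialogButtonBox.Cancel)

    # Attach an arbitrary object to each item instead of parsing label text:
    combo = QComboBox()
    combo.addItem("First thing", userData={"id": 1})
    combo.addItem("Second thing", userData={"id": 2})

    combo.setCurrentIndex(1)
    print(combo.currentData())  # the attached object itself, no label parsing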

Every single time it makes me think: yes, this works. But no, not like this.

I can't imagine what 1 million lines of this feels like.