Comment by oblio

2 days ago

> It's a clearly marked quote that adds more details. It's perfectly fine.

It's not adding more details in this case; it's adding incorrect information. CSS gradients are rasterized once during paint into a bitmap by the browser. There's no recalculation going on per scroll unless something invalidates that bitmap, and it doesn't matter how many color stops the gradient has or how complicated it is.

The real issue is that something was causing the container with the gradient to repaint on every scroll.
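For illustration, here's a minimal sketch of the kind of fix that implies; the class names and colors are hypothetical, and it assumes the gradient sits on a full-viewport background element rather than on the scrolling container itself:

```css
/* Sketch only, not the actual page's CSS. Class names are made up. */

/* Keep the gradient on an element that doesn't scroll itself. */
.page-background {
  background: linear-gradient(180deg, #1e3a5f, #0b1020);
  /* Promote it to its own compositor layer so the rasterized gradient
     is composited during scroll instead of being repainted. */
  will-change: transform;
}

/* Scroll the content in a separate box layered above the background. */
.scroll-area {
  position: relative;
  height: 100vh;
  overflow-y: auto;
}
```

With that separation, scrolling only moves content over the already-rasterized gradient layer, so no repaint of the gradient is triggered.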

  • That’s helpful information, but it doesn’t mean the use of Gemini is unwelcome. A human could have offered the initial analysis too, and then you could have just replied to the human, correcting him or her. Why is the source of the analysis such an issue?

    • IMO it's because people have learned not to trust LLMs. It's like using AI code generators – they're a useful tool if you know what you're doing, but you need to review the material they produce and verify that it works (in this case, verify that what they say is correct). When they're used as a source in conversations, we never know if the "dev" has "reviewed the code," so to speak, or just copied and pasted.

      As for why people don't like LLMs being wrong versus a human being wrong, I think it's twofold:

      1. LLMs have a nasty penchant for sounding overly confident and "bullshitting" their way to an answer in a way that most humans don't. Where we'd say "I'm not sure," an LLM will say "It's obviously this."

      2. This is speculation, but at least when a human is wrong you can say "hey, you're wrong because of [fact]," and they'll usually learn from that. We can't do that with LLMs because they don't learn (in the way humans do), and in this situation they're a degree removed from the conversation anyway.