Comment by miki123211

7 days ago

LLMs are tools that make it easier to hack incentives, but you still need a person to decide that they'll use an LLM to do so.

Blaming LLMs is unproductive. They are not going anywhere (especially since open-source LLMs are so good).

If we want to achieve real change, we need to accept that they exist, understand how that changes the scientific landscape, and work out our options from there.

Everyone keeps claiming "they're here to stay" as if it's gospel. This constant drumbeat is rather tiresome and offered without much hard evidence.

  • Genuinely curious: have we ever managed to ban a piece of technology worldwide, and effectively?

    • A large part of geopolitics is concerned with limiting the spread of weapons of mass destruction worldwide, as effectively as possible. Moreover, the investment required to train state-of-the-art models is greater than the Manhattan Project and involves larger and more complex supply chains; it cannot be done clandestinely. Because the undertaking is so large and resource-intensive, only a few bodies would have to cooperate in order to place impassable obstacles on the path that is presently being taken. "What if they won't cooperate toward this goal?" is worth considering, but the fact is that they can, and are choosing not to. If the choice is there, it is not an inevitability but a decision.


  • If they go away, it will be because they have been replaced by something better (or worse), like LLLMs or LLMMs or whatever.

    I'm old enough to remember when GANs were going to be used to scam millions of people and flood social media with fake profiles.

  • What evidence do you need, exactly?

    I think such statements are largely projections of people's own unwillingness to part with these tools, given the utility they personally perceive in them.

    I, for one, wouldn't give up LLMs. They're too useful to me personally, so I will always seek them out.