Comment by pm215

18 days ago

That's funny, but also interesting that it didn't "sign" it. I would naively have expected that being handed a clear instruction like "reply with the following information" would strongly bias the LLM to reply as requested. I wonder if they've special cased that kind of thing in the prompt; or perhaps my intuition is just wrong here?

A comment on one of the threads, when a random person tried to have copilot change something, said that copilot will not respond to anyone without write access to the repo. I would assume that bot doesn't have write access, so copilot just ignores them.

An AI can't, as I understand it, hold copyright over anything it produces.

Nor can it act as a legal entity capable of signing anything.

I assume the "not-copyrightable" issue doesn't in any way interfere with the rights the CLA is trying to protect, but IANAL...

I assume they've explicitly told it not to sign things (perhaps because they don't want a sniff of their bot agreeing to things on behalf of MSFT).

  • Are LLM contributions effectively under public domain?

    • IANAL. It's my understanding that this hasn't been determined yet. LLM output could be in the public domain, under the rights of everyone whose creations were used to train the AI, or anywhere in between.

      We do know that LLMs will happily reproduce something from their training set verbatim, and that is a clear copyright violation. So it can't be the case that everything they produce is public domain.

    • This is my understanding, at least in US law.

      I can't remember the specific case now, but courts have ruled in the past that copyright requires human authorship, and a recent case involving LLMs confirmed this.