Comment by adebayoj

3 days ago

You got it exactly right :) And you could update attribution.md so it does NOT rely on open source projects that have been compromised. Imagine asking Claude Code to write a package/function in the style of a codebase you care about, or forcing it to ALWAYS rely on certain internal packages. The possibilities are endless once you insert such knobs into models.

I would rather see it not rely on open source projects that have not given permission for that particular AI to be trained on them.

  • Doesn’t the nature of most open source licenses allow for AI training though?

    Example — MIT:

    > Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions

    • I remember seeing some new licenses, like the Human license or something IIRC, but they all drew the valid criticism that they would be unenforceable, or that violations would be hard to catch.

      I haven't looked at the project much, but it could be exciting if these two things were merged.

      I don't think the license is necessarily the problem here. Licenses can change, and people can adopt new licenses.