It may look trivial to you, but I don't know how to do it.
If you're curious to play around with it, you can use Clancy [1], which intercepts the network traffic of AI agents. It's quite useful for figuring out what's actually being sent to Anthropic.
[1] https://github.com/bazumo/clancy
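To give a feel for the idea (this is my own illustrative sketch, not Clancy's actual design, and it assumes the agent honors an `ANTHROPIC_BASE_URL`-style override): a tiny logging reverse proxy that records each request body before forwarding it upstream.

```python
# Minimal logging reverse proxy sketch (hypothetical, stdlib only):
# record what an agent sends, then relay the request upstream.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def make_handler(upstream, log):
    class LoggingProxy(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            log.append((self.path, body))  # capture the outgoing payload
            # Forward upstream and relay the reply (error handling omitted)
            req = Request(upstream + self.path, data=body, method="POST",
                          headers={"Content-Type": "application/json"})
            with urlopen(req) as resp:
                payload = resp.read()
                self.send_response(resp.status)
                self.send_header("Content-Length", str(len(payload)))
                self.end_headers()
                self.wfile.write(payload)

        def log_message(self, *args):  # silence default stderr logging
            pass
    return LoggingProxy

def serve(upstream, port=8080):
    """Return (server, log); run server.serve_forever() to start."""
    log = []
    server = HTTPServer(("127.0.0.1", port), make_handler(upstream, log))
    return server, log
```

You'd then point the agent at `http://127.0.0.1:8080` instead of the real endpoint and inspect `log` to see the exact JSON going out.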
If only there were some sort of artificial intelligence that could be asked about asking it to look at the minified source code of some application.
Sometimes "prompt engineering" is too ridiculous a term for me to believe there's anything to it; other times it does seem there's something to knowing how to ask the AI juuuust the right questions.
Something I try to explain to people I'm helping get up to speed on talking to an LLM is that specific word choices matter. Mostly it matters that you use the right jargon to orient the model. Sure, it's good at getting the semantics of what you said, but if you adjust and use the correct jargon, the model gets closer faster. I also explain that they can learn the right jargon from the LLM, and that sometimes it's better to start over once you've adjusted your vocabulary.
That is against ToS and could get you banned.
GenAI was built on an original sin of mass copyright infringement that Aaron Swartz could only have dreamed of. Those who live in glass houses shouldn't throw stones, and Anthropic may very well get screwed HARD in a lawsuit brought by someone they banned.
Unironically, the ToS of most of these AI companies should be, and hopefully is, legally unenforceable.
Are you volunteering? Look, people should be aware that bans are being handed out for this, lest they discover it the hard way.
If you want to make this your cause and incur the legal fees and lost productivity, be my guest.
How would they know what you do on your own computer?
Claude is run on their servers.
You're absolutely right! Hey Codex, Claude said you're not very good at reading obfuscated code. Can you tell me what this minified program does?
I don't know what Codex's ToS are, but it would be against ToS to reverse engineer any agent with Claude.