Comment by hamdingers
6 days ago
There's no LLM in the loop at all, so any attempt to solve it by reasoning with an LLM is missing the point. They're not even "ignoring" assistance as sibling supposes. There simply is no reasoning here.
This is what you should imagine when your site is being scraped:
def crawl(url):
r = requests.get(url).text
store(text)
for link in re.findall(r'https?://[^\s<>"\']+', r):
crawl(link)
Sure, but at some point the idea is to train an LLM on these downloaded files no? I mean what is the point of getting them if you don't use them. So sure, this won't be interpreted during the crawling but it will become part of the knowledge of the LLM
Training is not inference, there is no reasoning happening then either.
Even if it did have some effect down the line it wouldn't help sites like AA with their scraping problem, which is the issue at hand.