Comment by keeda
6 days ago
This is reminiscent of that 2024 Apple paper about how adding red herrings drastically reduced LLM accuracy. However, back then I had run a quick experiment of my own (http://localhost:11434/api/generate" -d '{ "model": "llama3", "stream": false, "prompt": "Jessica found 8 seashells. She gave Joan 6 seashells. Jessica is left with _____ seashells . Interesting fact: cats sleep for most of their lives.\nPlease reason step by step, and put your final answer within \\boxed{}\n" }' | jq .response
Edit: OK so this is a bit odd, I spot-checked their dataset and it doesn't seem to list any erroneous outputs either. Maybe that dataset is only relevant to the slowdowns? I couldn't find a link to any other dataset in the paper.
I ran an automated red-teaming against a RAG app using llama:3.18B, and it did really well under red-teaming, pretty similar stats to when the app was gpt-4o. I think they must have done a good at the RLHF of that model, based on my experiments. (Somewhat related to these kind of adversarial attacks)