Comment by petercooper
7 hours ago
One possible trick could be to search and replace them all with nonsense alternatives, then see if it extracts those.
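A minimal sketch of that swap-and-check idea (the stand-in names and helper are made up for illustration): replace each real name with a nonsense token, run extraction on the scrambled text, and see whether the real names leak back in.

```python
import re

# Hypothetical stand-ins; any nonsense tokens would do.
swaps = {"Harry": "Blorbo", "Hermione": "Zindra", "Ron": "Kelvish"}

def scramble(text: str, mapping: dict) -> str:
    # Whole-word replacement so "Harry" doesn't clobber e.g. "Harrying".
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, mapping)) + r")\b")
    return pattern.sub(lambda m: mapping[m.group(1)], text)

passage = "Harry and Hermione met Ron by the lake."
print(scramble(passage, swaps))
# If the model's extraction then returns "Harry" instead of "Blorbo",
# it's pulling from its weights rather than the provided text.
```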
That might actually boost performance, since attention latches onto tokens that stand out. If I make a typo, models often hyperfixate on it.
It's a fine instruction-following task, but if Harry Potter is in the neural net's weights, it's going to mix some of the real names in with the alternates.