Comment by JackDanMeier

1 day ago

I was working on a product which has FSRS implemented and is heavily inspired by anki. The change we made was that rather than rating yourself, you have to type your answer and it's graded by an LLM. It also has a button to explain the concept to you as if you were 5 (ELI5), and you get feedback on your answer. You can also create the flashcards by uploading a PDF and then generating them from it.
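
To sketch the idea (just a rough illustration, not the actual implementation; `call_llm` and the prompt are placeholders for whatever LLM API and wording is actually used):

```python
# Rough sketch: map a typed flashcard answer to an again/hard/good/easy rating
# plus feedback. `call_llm` is a stand-in, not mimair.com's real code.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with a real chat-completion call")

GRADING_PROMPT = """You are grading a flashcard answer.
Question: {question}
Reference answer: {reference}
Student's typed answer: {answer}
Reply with exactly one word on the first line: again, hard, good, or easy.
Then give one short sentence of feedback on the next line."""

def grade_answer(question: str, reference: str, answer: str) -> tuple[str, str]:
    reply = call_llm(GRADING_PROMPT.format(
        question=question, reference=reference, answer=answer))
    rating, _, feedback = reply.partition("\n")
    return rating.strip().lower(), feedback.strip()
```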

I've stopped working on it and am now building something very similar aimed at high school students, but any feedback is welcome. This version was built for uni students.

mimair.com - I never got around to adding any payment option, so it's completely free

> graded by an LLM

This seems impossible to me. In anki, there's "hard", "good", and "easy" which are all for "I got this right".

For my usage, "hard" is "I got it right, but I was only like 60% sure", "good" is "I had to actively think", and "easy" is "effortlessly correct, no real thought required".

There's no way for an AI to tell if my identical input is the result of a 50/50 guess, or a little thought, or effortless recall. "Delay to answer" also isn't a good approximation; I have a habit of alt-tabbing and chatting with a friend on random cards of any difficulty.

I find distinguishing those levels of easy for totally identical answers ends up making SRS more effective, and AI just can't know my inner thoughts. Maybe once we have brain implants.

  • Yes, this is also something I have been thinking about: can an LLM really know how well I know something? There is the same issue with grading with again, hard, good and easy: I can cut myself some slack and say "I knew that" even when I didn't (and I have a strong memory of having done this myself). And there is the possibility of bullshitting the LLM and just writing everything you know about the subject rather than the exact definition of the flashcard. I'm leaning towards grading any knowledge rather than specifying that the exact answer should be graded. What's your take?

    • Bullshitting the AI maliciously doesn't matter: if you don't want to study effectively, you won't study effectively, and that's not a problem for the app.

      > any knowledge rather than specifying that the exact answer should be graded

      I don't understand what you mean. The important thing is to feed back into the SRS algorithm "How much does this card need to be studied", and if you mean "any knowledge means we can study it less often", then I doubt the SRS will be effective.

      What are you suggesting to feed back into SRS? How will you ensure cards the user knows very well quickly get pushed way back (so the user isn't overwhelmed with a boring slog), and cards they only sorta know bubble up more quickly to start to cement the knowledge?
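
      Concretely, whatever signal comes back has to end up spreading the intervals apart; a toy sketch (made-up multipliers, nothing like the real FSRS math):

      ```python
      # Toy scheduler, NOT real FSRS: just shows that the rating must push
      # well-known cards far out and bring weak ones back quickly.
      MULTIPLIERS = {"hard": 1.2, "good": 2.5, "easy": 4.0}  # made-up values

      def next_interval_days(previous_days: float, rating: str) -> float:
          if rating == "again":
              return 0.0  # relearn today; the card bubbles right back up
          return max(1.0, previous_days * MULTIPLIERS[rating])
      ```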

  • One way it could grade you automatically is by the speed of flipping the card (or entering the correct answer). If it took less than a second to confirm then evidently it was easy.
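
    Something like this, with made-up thresholds:

    ```python
    # Hypothetical thresholds for grading purely from how fast the card was flipped.
    def grade_from_time(seconds_to_flip: float, correct: bool) -> str:
        if not correct:
            return "again"
        if seconds_to_flip < 1.0:
            return "easy"
        if seconds_to_flip < 5.0:
            return "good"
        return "hard"
    ```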

    • But conversely, if I alt-tabbed to chat with a friend, or paused studying because the person sitting next to me asked a question, or I took a sip from my coffee mug, that doesn't necessarily mean it's hard, even though all of those take at least as much time as answering a hard card uninterrupted would.

      The AI cannot read my mind; there is no approximation of "how confident was I in my answer" that will work reasonably accurately here, unless I input that myself.

    • It should definitely be added as a variable in the calculation, but the current FSRS predicts how likely you are to access the memory (whether it's sufficiently available, which is defined by its retrieval strength), and speed of retrieval isn't really a factor in this version. The different grades are more about defining how well all parts of the memory are retrieved.

      Not to say that how quickly you can access it doesn't play a role in real life.
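
      For reference, the FSRS-4.5 forgetting curve (as I understand it) only takes elapsed time and the card's stability, with no term for answer speed:

      ```python
      # Retrievability after t days for a card with stability S (FSRS-4.5 form,
      # as I understand it); note there is no answer-speed term.
      def retrievability(t_days: float, stability_days: float) -> float:
          return (1 + t_days / (9 * stability_days)) ** -1
      ```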

    • Whenever I try to use anki I can't figure out what those four buttons actually mean, so I end up with 40 cards that I still can't recall, and then the thing happily drops another 10 on top, and I just delete the deck or the app. I have never learned the thing I was trying to learn with it.

      Either I don't understand the algorithm or it doesn't understand me.

Rating yourself is an important part of SRS; it forces you to think about how you are doing, what is good enough and what isn't, what is more or less important, etc.

Me too. I made a specialized Kanji learning app. My different approach is in the cards: I used free dictionary data to put all the relevant data for each kanji into a single card. So a common kanji might have dozens (or even hundreds) of words, each with 0-2 example sentences, to help you remember.
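
Roughly, each card bundles everything the dictionary data has for one character; the shape below is just an illustration (field names are illustrative, not the app's exact schema):

```python
# Illustrative card shape; field names are guesses, not shodoku's actual schema.
from dataclasses import dataclass, field

@dataclass
class WordEntry:
    word: str
    sentences: list[str]          # 0-2 example sentences per word

@dataclass
class KanjiCard:
    kanji: str
    meanings: list[str]
    readings: list[str]
    words: list[WordEntry] = field(default_factory=list)  # dozens, sometimes hundreds
```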

I like the anki way of self-rating, so I kept it. I want to be able to say: “hey, I know I screwed up the stroke order this time, but it won’t happen again, promise” and hit “Good”.

https://github.com/runarberg/shodoku

https://shodoku.app/