Comment by x187463
5 days ago
I'm really waiting for somebody to figure out the correct interface for all this. For example, study mode will present you with a wall of text containing information, examples, and questions. There's no great way to associate your answers with specific questions. The chat interface just isn't good for this sort of interaction. ChatGPT really needs to build its own canvas/artifact interface wherein questions/responses are tied together. It's clear, at this point, that we're doing way too much with a UI that isn't designed for more than a simple conversation.
I gave it a shot with periplus.app :). Not perfect by any means, but it's a different UX than chat so you might find it interesting.
Looks like a great start, played around with it a bit yesterday and today, I've basically been doing the same with my own CLI but the UI you came up with helps a great deal with navigation and resuming learning :)
One issue I found is the typical "LLM accuracy" issue, with seemingly no recourse. I tried generating courses for topics I already know well, just to review how accurate it is. For popular subjects (ex: "Electronic Music Fundamentals") it gets most of the details correct, but less popular subjects (ex: "Scene Transitions with Octatrack") are riddled with errors (both in the "docs" and the quizzes/exercises), and I cannot find a way of correcting/adjusting/reporting them.
Yeah it's still hard to deal with LLM gaps (fwiw Study mode would also be prone to this). I do try to catch the super obvious stuff and put up a disclaimer but it's far from perfect.
I had some prototypes grounding the generations in web search, but the APIs are still super expensive on that front + the models tend to over-index on the results.
This looks super cool—I've imagined something similar, especially the skill tree/knowledge map UI. Looking forward to trying it out.
Have you considered using the LLM to give tests/quizzes (perhaps just conversationally) in order to measure progress and uncover weak spots?
There are both in-document quizzes and larger exams (at a course level).
I've also been playing around with adapting content based on their results (e.g. proactively nudging complexity up/down) but haven't gotten it to a good place yet.
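One way to picture "proactively nudging complexity up/down" is a small feedback rule over recent quiz accuracy. This is a minimal illustrative sketch, not the app's actual logic; all names and thresholds here are assumptions.

```python
# Hypothetical sketch: nudge lesson complexity up or down based on
# recent quiz results. Thresholds and the 1-5 scale are made up.

def next_complexity(current: int, quiz_scores: list[float],
                    low: float = 0.5, high: float = 0.85) -> int:
    """Return a complexity level in 1..5 from recent quiz accuracy (0..1)."""
    if not quiz_scores:
        return current
    recent = quiz_scores[-3:]                      # look at the last 3 quizzes
    avg = sum(recent) / len(recent)
    if avg >= high:
        return min(current + 1, 5)                 # consistently strong: go deeper
    if avg <= low:
        return max(current - 1, 1)                 # struggling: back off
    return current                                 # in the sweet spot: stay put
```

The point of clamping and averaging over a short window is to avoid yo-yoing the difficulty on a single lucky or unlucky quiz.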
There is no "correct interface". People who want to learn put in the effort; it doesn't matter whether they have scrolls, books, ebooks, or AI.
Agree, one thing that brought this home was the example where the student asks to learn all of game theory. There seems to be an assumption on both sides that this will be accomplished in a single chat session by a linear pass, necessarily at a pretty superficial level.
We are trying to solve this at https://roadmap.sh/ai
It's still a work in progress, but we are trying to make it better every day.
There are so many options that could be done, like:
* for each statement, give you the option to rate how well you understood it. Offer clarification on things you didn't understand
* present knowledge as a tree that you can expand to get deeper
* show interactive graphs (very useful for mathy things when you can easily adjust some of the parameters)
* add quizzes to check your understanding
... though I could well imagine this being out of scope for ChatGPT, and thus an opportunity for other apps / startups.
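The "knowledge as an expandable tree" idea above could be as simple as nodes that lazily generate child topics on first expand. A minimal sketch, assuming a pluggable subtopic generator (in practice an LLM call); all names here are illustrative:

```python
# Hypothetical sketch of an expandable knowledge tree: each node
# populates its children on demand via a caller-supplied generator.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class TopicNode:
    title: str
    children: List["TopicNode"] = field(default_factory=list)
    expanded: bool = False

    def expand(self, generate_subtopics: Callable[[str], List[str]]) -> List["TopicNode"]:
        """Generate child topics on first expand; later calls are no-ops."""
        if not self.expanded:
            self.children = [TopicNode(t) for t in generate_subtopics(self.title)]
            self.expanded = True
        return self.children

# Usage with a stub generator; a real app would call an LLM here.
root = TopicNode("Game theory")
kids = root.expand(lambda t: ["Nash equilibria", "Extensive-form games"])
```

Keeping generation behind an `expanded` flag means the tree only goes as deep as the learner actually drills, which sidesteps the "learn all of game theory in one linear pass" problem.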
> present knowledge as a tree that you can expand to get deeper
I'm very interested in this. I've considered building this, but if this already exists, someone let me know please!
Yeah. And how to tie the teacher into all this. The teacher needs to upload the context, like the textbook, so the LLM can refer to tangible class material.