Comment by trevor-e

5 months ago

That's super cool, I love the SciShow videos.

I think you're right about the editorial thinking + what-do-people-find-interesting parts. But that doesn't have to be solved directly by AI; it's easy enough to sidestep the problem and provide a nice interface for the human-in-the-loop part. I'd imagine a good starting point would save you a ton of time, depending on how much you have to rewrite for tone.

That's true, it could just turn the writer's role into more of an editorial role. The main time-saving I have so far is being able to upload papers and get it to fact-check for me. The editorial guidelines at SciShow are stricter than any academic journal I've published in: any non-trivial statement has to be supported by a direct, findable quote in (most of the time) peer-reviewed scientific literature. I once had to find a citation for the idea that heat + fuel + oxygen generates a fire! (for this video: https://www.youtube.com/watch?v=BEcaE0e0CZg)

LLMs make that much easier. As I collect primary sources during my drafting/writing phase, I can type up any non-trivial claims I'm making in my script in a separate document, share that with the LLM, and say "Quoting directly from the set of attached PDFs, identifying which document and which page each quote comes from, find content which directly supports each of these assertions" and it generally does a great job. I still have to check each of those quotes for accuracy, but having help _finding_ them in order to pass a stringent fact-checking procedure is huge when I didn't scribble down the supporting quotes during my research phase. This is also, by the way, stricter than the fact-checking process for most non-fiction publishing.
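For the curious, the workflow above can be sketched as a small script that assembles the verification prompt from a claims list and the attached PDF names. This is a hypothetical illustration, not any real tool: the function name, file names, and claim text are made up, and the actual quote-finding is done by whatever LLM you paste the prompt into along with the PDFs.

```python
# Hypothetical sketch of the fact-checking prompt described above:
# collect non-trivial claims from the script, then ask an LLM to find
# direct supporting quotes in a set of attached PDFs, citing document
# and page. All names here are illustrative.

def build_factcheck_prompt(claims, pdf_names):
    """Assemble the verification request sent alongside the PDFs."""
    numbered = "\n".join(f"{i}. {c}" for i, c in enumerate(claims, 1))
    attached = ", ".join(pdf_names)
    return (
        f"Quoting directly from the attached PDFs ({attached}), "
        "identifying which document and which page each quote comes from, "
        "find content which directly supports each of these assertions:\n"
        + numbered
    )

claims = [
    "Heat, fuel, and oxygen together generate a fire.",
]
print(build_factcheck_prompt(claims, ["combustion_review.pdf"]))
```

The point is just that the claims document is structured enough to generate the prompt mechanically, so the writer's manual work stays focused on verifying the returned quotes rather than hunting for them.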

  • >SciShow are stricter than any academic journal I've published in

    Now there's a testimonial. I look forward to browsing the source links with each video!