Comment by jack_pp
3 days ago
As someone who has used ffmpeg for 10+ years, maintaining a relatively complex backend service that's basically a JSON-to-ffmpeg translator, I did not fully understand this article.
Like, the Before vs After section doesn't even seem to create the same thing: the before has no speed-up, but the after does.
In the end it seems they basically created a few services ("recipes") that they can reuse to do simple stuff like a 2x speed-up or combining audio and video.
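(For concreteness, a 2x speed-up "recipe" in plain ffmpeg is usually something along these lines; the filenames are just placeholders:

    ffmpeg -i in.mp4 -filter_complex "[0:v]setpts=0.5*PTS[v];[0:a]atempo=2.0[a]" -map "[v]" -map "[a]" out.mp4

setpts=0.5*PTS halves the video timestamps and atempo=2.0 doubles the audio speed to match.)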
Thanks for calling it out; I will correct the Before vs After section. But you can describe any ffmpeg capability in plain English and the underlying ffmpeg tool call takes care of it.
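To make that concrete: a request like "combine this video with this audio track" would end up running something along the lines of

    ffmpeg -i video.mp4 -i audio.m4a -map 0:v:0 -map 1:a:0 -c copy out.mp4

with the exact filenames and flags depending on the inputs.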
I have written a lot of ffmpeg-python and plain ffmpeg commands using LLMs, and while I am amazed at how well Gemini or ChatGPT handle ffmpeg prompts, they are still not 100% reliable, so this seems like a big gamble on your part. However, it might work for most users who only ask for simple things.
So creators on 100x will create well-defined workflows that others can reuse. If a workflow is not found, the LLM creates one on the fly and saves it.