FFmpeg 8.0

15 hours ago (ffmpeg.org)

Thank you FFmpeg developers and contributors!

If there's anything that needs audio/video automation, I've always turned to FFmpeg. It's such a crucial and indispensable tool; so many online video tools use it and are generally a UI wrapper around this wonderful tool. TIL there's also FFmpeg.Wasm [0].

In Jan 2024, I used it to extract frames from a 1993 anime movie in 15-minute video segments, upscaled them using Real-ESRGAN-ncnn-vulkan [1], then recombined the output frames into the final 4K upscaled anime [2] (rough commands sketched below the links). FWIW, if I had built a UI on this workflow it could've become a tool similar to Topaz AI, which is quite popular these days.

[0]: https://github.com/ffmpegwasm/ffmpeg.wasm

[1]: https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan

[2]: https://files.horizon.pics/3f6a47d0-429f-4024-a5e0-e85ceb0f6...
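
For the curious, the core of that workflow looks roughly like this (file names, frame rate, and encoder settings here are placeholders, not the exact ones used):

    # 1. dump a 15-minute segment to PNG frames
    ffmpeg -ss 00:00:00 -t 900 -i movie_1993.mkv frames/%06d.png

    # 2. upscale the PNGs externally with Real-ESRGAN-ncnn-vulkan (not shown)

    # 3. reassemble the upscaled frames and copy the original audio across
    ffmpeg -framerate 23.976 -i upscaled/%06d.png -ss 00:00:00 -t 900 -i movie_1993.mkv \
      -map 0:v -map 1:a -c:v libx264 -crf 18 -pix_fmt yuv420p -c:a copy segment_01_4k.mp4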

  • Even when I don't use ffmpeg directly, I often use tools that embed ffmpeg. For instance, I've recently upscaled an old anime, ripped from a low-quality DVD. I used k4yt3x/video2x, which was good enough for what I wanted and was easy to install. It embedded libffmpeg, so I could use the same arguments for encoding:

        Video2X-x86_64.AppImage -i "$f" \
         -c libvpx-vp9 -e crf=34 -o "${f/480p/480p_upscale2x}" \
         -p realcugan -s 2 --noise-level 1
    

    To find the best arguments for upscaling (last line from above), I first used ffmpeg to extract a short scene that I encoded with various parameter sets. Then I used ffmpeg to capture still images so that I could find the best set.
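
    That probing step looks roughly like this (hypothetical file names; how each variant gets encoded is whatever parameter set is being tested):

        # cut a short test scene without re-encoding
        ffmpeg -ss 00:10:00 -i episode_480p.mkv -t 30 -c copy test_scene.mkv

        # after upscaling/encoding the clip with each parameter set,
        # grab a still every 2 seconds from each result for comparison
        ffmpeg -i test_scene_crf34.webm -vf fps=1/2 stills_crf34_%03d.png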

    • About 10-ish years ago, my then employer was talking to some other company about helping them get their software to release. They had what they believed to be a proprietary compression system that would compress and playback 4k video with no loss in quality.

      They wouldn't let us look into the actual codecs or compression, they just wanted us to build a front-end for it.

      I got to digging and realized they were just re-encoding the video through FFMpeg with a certain set of flags and options. I was able to replicate their results by just running FFMpeg.

      They stopped talking to us.


Happy to hear that they've introduced video encoders and decoders based on compute shaders. The only video codecs widely supported in hardware are H.264, H.265 and AV1, so cross-platform acceleration for other codecs will be very nice to have, even if it's less efficient than fixed-function hardware. The new ProRes encoder already looks useful for a project I'm working on.

> Only codecs specifically designed for parallelised decoding can be implemented in such a way, with more mainstream codecs not being planned for support.

It makes sense that most video codecs aren't amenable to compute shader decoding. You need tens of thousands of threads to keep a GPU busy, and you'll struggle to get that much parallelism when you have data dependencies between frames and between tiles in the same frame.

I wonder whether encoders might have more flexibility than decoders. Using compute shaders to encode something like VP9 (https://blogs.gnome.org/rbultje/2016/12/13/overview-of-the-v...) would be an interesting challenge.

  • > Happy to hear that they've introduced video encoders and decoders based on compute shaders.

    This is great news. I remember being laughed at when I initially asked whether the Vulkan enc/dec were generic because at the time it was all just standardising interfaces for the in-silicon acceleration.

    Having these sorts of improvements available for legacy hardware is brilliant, and hopefully a first route that we can use to introduce new codecs and improve everyone's QOL.

  • I haven't even had a cursory look at the state of the art in decoders for 10+ years. But my intuition would say that decoding for display could profit a lot from GPU acceleration for the later parts of the process, when there is already pixel data of some sort involved. Then I imagine that the initial decompression steps could stay on the CPU, and the decompressed, but still (partially) encoded, data is streamed to the GPU for the final transformation steps and application to whatever I-frames and other base images there are. Steps like applying motion vectors, iDCT... look embarrassingly parallel at a pixel level to me.

    When the resulting frame is already in a GPU texture then, displaying it has fairly low overhead.

    My question is: how wrong am I?

    • I'm not an expert, but in the worst case, you might need to decode dense 4x4-pixel blocks which each depend on fully-decoded neighbouring blocks to their west, northwest, north and northeast. This would limit you to processing `frame_height * 4` pixels in parallel (for a 2160-pixel-tall frame, that's under 9,000 pixels), which seems bad, especially for memory-intensive work. (GPUs rely on massive parallelism to hide the latency of memory accesses.)

      Motion vectors can be large (for example, 256 pixels for VP8), so you wouldn't get much extra parallelism by decoding multiple frames together.

      However, even if the worst-case performance is bad, you might see good performance in the average case. For example, you might be able to decode all of a frame's inter blocks in parallel, and that might unlock better parallel processing for intra blocks. It looks like deblocking might be highly parallel. VP9, H.265 and AV1 can optionally split each frame into independently-coded tiles, although I don't know how common that is in practice.

  • Exciting! I am consistently blown away by the talent of the ffmpeg maintainers. This is fairly hard stuff in my opinion and they do it for free.

  • These release notes are very interesting! I spent a couple of weeks recently writing a ProRes decoder using WebGPU compute shaders, and it runs plenty fast enough (although I suspect Apple has some special hardware they make use of for their implementation). I can imagine this path also working well for the new Android APV codec, if it ever becomes popular.

    The ProRes bitstream spec was given to SMPTE [1], but I never managed to find any information on ProRes RAW, so it's exciting to see software and compute implementations here. Has this been reverse-engineered by the FFMPEG wizards? At first glance of the code, it does look fairly similar to the regular ProRes.

    [1] https://pub.smpte.org/doc/rdd36/20220909-pub/rdd36-2022.pdf

  • NVENC/NVDEC could do part of the processing on the shader cores instead of the fixed-function hardware.

Impressed anytime I have to use it (even if I have to study its man page again, or use an LLM to construct the right incantation, or use a GUI that just builds the incantation based on visual options). It's becoming an indispensable transcoding multitool.

I think building some processing off of Vulkan 1.3 was the right move. (Aside, I also just noticed yesterday that Asahi Linux on Mac supports that standard as well.)

  • > incantation

    FFmpeg arguments, the original prompt engineering

  • LLMs and complex command line tools like FFmpeg and ImageMagick are a perfect combination and work like magic…

    It’s really the dream UI/UX from science fiction movies: “take all images from this folder and crop 100px away except on top, saturate a bit and save them as uncompressed tiffs in this new folder, also assemble them in a video loop, encode for web”.
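
    For reference, that whole sentence boils down to a couple of commands; a rough sketch with made-up folder names and numbers, assuming numbered input frames:

        # crop 100px from the left, right and bottom (keep the top), bump saturation,
        # and write TIFFs (-compression_algo raw should give uncompressed output,
        # if your build's TIFF encoder exposes that option)
        ffmpeg -i in/%04d.png -vf "crop=iw-200:ih-100:100:0,eq=saturation=1.3" \
          -compression_algo raw out/%04d.tiff

        # assemble the processed frames into a web-friendly loop
        ffmpeg -framerate 24 -i out/%04d.tiff -c:v libx264 -pix_fmt yuv420p \
          -movflags +faststart loop.mp4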

    • Had to do exactly that with a bunch of screenshots I took but happened to include a bunch of unnecessary parts of the screen.

      A prompt to ChatGPT and a command later and all were nicely cropped in a second.

      The dread of doing it by hand, and then having it magically there a minute later, is absolutely mind-blowing. Even just 5 years ago I would have just done it manually, as it would have definitely taken more time to write the code for this task.

    • It can work, but it's far from science fiction. LLMs tend to produce extremely subpar, if not buggy, ffmpeg commands. They'll routinely do things like put the input file before the start time, which needlessly decodes the entire video (see the example below), produce wrong bitrates, re-encode audio needlessly, and so on.

      If you don't care enough about potential side effects to read the manual it's fine, but a dream UX it is not, because I'd argue that includes correctness.
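
      The start-time placement is a good example; both of these are valid commands with very different costs (file names and timestamps made up):

          # input seeking: jump close to 10:00 before decoding (fast)
          ffmpeg -ss 00:10:00 -i in.mp4 -t 60 clip.mp4

          # output seeking: decode everything from the start and discard it (slow)
          ffmpeg -i in.mp4 -ss 00:10:00 -t 60 clip.mp4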


  • LLMs are a great interface for ffmpeg. There are tons of tools out there that can help you run it with natural language. Here's my personal script: https://github.com/jjcm/llmpeg

    • I wrote a command “please” that allows me to say “please use ffmpeg to do whatever”, and it generates the command with confirmation.

The Vulkan compute shader implementations are cool, particularly for FFv1 and ProRes RAW. Given that these bypass fixed-function hardware decoders entirely, I'm curious about the memory bandwidth implications. FFv1's context-adaptive arithmetic coding seems inherently sequential, yet they're achieving "very significant speedups."

Are they using wavefront/subgroup operations to parallelize the range decoder across multiple symbols simultaneously? Or exploiting the slice-level parallelism with each workgroup handling independent slices? The arithmetic coding dependency chain has traditionally been the bottleneck for GPU acceleration of these codecs.

I'd love to hear from anyone who's profiled the compute shader implementation - particularly interested in the occupancy vs. bandwidth tradeoff they've chosen for the entropy decoding stage.

Has anyone made a good GUI frontend for accessing the various features of FFMPEG? Sometimes you just want to remux a video without doing any transcoding, or join several video and audio streams together (same codecs).
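
To be clear, the kind of thing I mean is already simple on the CLI (placeholder file names below); I just want a GUI for it:

    # remux MKV to MP4 without re-encoding
    ffmpeg -i in.mkv -c copy out.mp4

    # join a separate video track and audio track, again without re-encoding
    ffmpeg -i video.mp4 -i audio.m4a -map 0:v -map 1:a -c copy combined.mp4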

  • Handbrake fits the bill, I think!

    It's a great tool. A little long in the tooth these days, but it gets the job done.

    • Seconded, HandBrake[0] is great for routine tasks and workflows. The UI could be simplified a tad for the super simple stuff (e.g. ripping a multi-episode TV show disc when you don't care about the extras: you have to hunt and poke based on stream length to figure out which titles are the actual episodes, and the app could probably guess reliably and offer a one-click 'queue these up' flow), but otherwise it's really a wonderful tool!

      Past that, I'm on the command line haha

      [0] https://handbrake.fr

  • I have found the best front-end to be ChatGPT. It is very good at figuring out the commands needed to accomplish something in FFmpeg, from my natural description of what I want to do.

  • For Mac users, ffWorks [1] is an amazing frontend for FFmpeg that surfaces most of the features but with a decent GUI. It’s batchable and you can set up presets too. It’s one of my favorite apps and the developer is very responsive.

    Handbrake and LosslessCut are great too. But in addition to donating to FFmpeg, I pay for ffWorks because it really does offer a lot of value to me. I don’t think there is anything close to its polish on other platforms, unfortunately.

    [1]: https://www.ffworks.net/index.html

  • Joining videos together sounds easy, but there's tons of ways it can go wrong! You've got time bases to consider, start offsets, frame/overscan crops, fps differences (constant vs variable), etc. And even though your videos might both be h264, one might be encoded with B frames and open GOP, and the other not, and that might cause playback issues in certain circumstances. Similarly, both could be AAC audio, but one is 48kHz sample rate, the other 44.1kHz.

    Someone else mentioned the LosslessCut program, which is pretty good. It has a merge feature with a compatibility checker that can detect a few issues. But I find remuxing the separate videos to MPEG-TS before joining them can get around many problems. If you fire up a RAM disk, it's a fast task.

      ffmpeg -i video1.mp4 -c copy -start_at_zero -fflags +genpts R:\video1.ts;
      ffmpeg -i video2.mp4 -c copy -start_at_zero -fflags +genpts R:\video2.ts;
      ffmpeg -i "concat:R:\video1.ts|R:\video2.ts" -c copy -movflags +faststart R:\merged.mp4

  • I haven't used a GUI I like, but LLMs like ChatGPT have been so good for solving this for me. I tell it exactly what I need it to do and it produces the ffmpeg command to do it.

  • Every frontend offers only a small subset of ffmpeg's total features, making them usable only for specific tasks.

  • It would need to be a non-linear, node-based editor. Pretty much all open source video editors are just FFMPEG frontends, e.g. Kdenlive.

Is anyone else of the opinion that ffmpeg now ranks 4th among the most-used libraries, after ssl, zlib, and sqlite, given that video is practically omnipresent in 2025?

Pretty insane software. I use it all the time. Only thing I've wished for is animated webp support because I'm lazy.

Is there an easy way to denoise an audio file using ffmpeg, to remove a constant hum from an old recording that was introduced by the low quality of the recording instrument?
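
One possible starting point, assuming the hum is mains interference around 50 or 60 Hz (the filter values below are guesses to tune by ear, and the file names are placeholders):

    # notch out 60 Hz hum and its first harmonic, then apply a light broadband denoise
    ffmpeg -i old_recording.wav \
      -af "bandreject=f=60:width_type=q:w=30,bandreject=f=120:width_type=q:w=30,afftdn=nf=-25" \
      denoised.wav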

LLMs have really made using ffmpeg easy: the command-line options are so expansive and obscure that it's so nice to just tell it what you want and have it spit out a crazy ffmpeg command.

  • I remember saving my incantation to download and convert a youtube playlist (in the form of a txt file with a list of URLs) and this being the only way to back up Chrome music bookmark folders.

    Then it stopped working until I updated youtube-dl and then that stopped working once I lost the incantation :<

Tangentially, 50% of the effort goes into assembling long, complex CLI commands, and 50% into fighting with escaping for the shell. Adding text to a video brings its own escaping hell for the text.

Has anyone found a bulletproof recipe for calling ffmpeg with many args (filters) from python? Use r-strings? Heredocs?
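
Two things that help, neither of them Python-specific: pass the command as an argument list instead of a single shell string (so the shell never parses the filter graph at all), and/or keep the filter graph in its own file and hand it to ffmpeg with `-filter_complex_script`. A sketch of the latter, with hypothetical file names:

    # graph.txt contains the filter graph verbatim, e.g.:
    #   scale=1280:-2,drawtext=textfile=title.txt:fontsize=48:x=20:y=20
    # only ffmpeg's own filter escaping applies; the shell never sees it
    ffmpeg -i in.mp4 -filter_complex_script graph.txt out.mp4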

  • Agree with this, but I think LLMs have been a net positive in helping generate commands. Admittedly, getting working commands is still tough sometimes, and I'm 50/50 on whether ChatGPT saved me time vs reading the docs.

Some Netflix devs are going to have a busy sprint

ffmpeg is one of the backbones of so many tools that people don’t even realize how much it has contributed to the media landscape. It’s my go to tool for any kind of audio/video automation.

It must have been maybe 5 years ago that a dev showed me FFMPEG, and it blew my mind for dealing with video.

When I later wound up managing video post production workflows my CMD line or terminal use dropped a few jaws.

I've since been relying on LLMs to make FFMPEG commands, so I don't even think about it.

  • I had a bad experience with ChatGPT (3, I think) and stopped trying. My thought was that the training examples were sparse, given how hard a time I had finding what I needed via search. You’ve encouraged me to revisit (and yes, I know models have made big gains since then).

    • Well, obviously, if you have the attention span, it probably makes the most sense to actually learn the flags and teach yourself to write FFMPEG commands. That's the serious way to do it if you have a serious workflow.

      But I've found it easier to brute force with LLMs because, like, every time I had to do video work it'd be something different. Prompts like 'I need to remove this and this and change the resolution from this to that', 'I need it to be this fps or that', or even 'I want this file to weigh this much', or 'I need to split these two' or 'combine those three'. It'll usually get you a chunk of the way there. Another prompt or two of double-checking, copy-paste into the CMD line or terminal, and either it goes brr or you copy-paste the error back and ask what it means. 3 minutes later it's doing the thing you wanted, and you're more or less understanding what it's giving you.

      But I keep an Obsidian file with a bunch of commands that made me happy before. Dumping that into the context window helps.

      Another one has been multi-camera, multi-screen recordings with OBS. I discovered it was easier to do the math, make a big canvas, and record all the feeds onto it so I don't have to think about syncing anything later. Then brr an FFMPEG command to output that one as 1920x1080 and that one as 3840x2160.

      Whisper is great with that too: take the raw recording, output just the audio, 'give me the whisper command to get this as SRT', then 'now render subtitles onto this video'.
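
      Roughly the shape of those two steps, with made-up geometry and file names (the subtitles filter needs an ffmpeg built with libass):

          # pull one camera's region out of the big OBS canvas (crop=w:h:x:y)
          ffmpeg -i canvas_recording.mkv -vf "crop=1920:1080:0:0" -c:a copy cam1_1080p.mp4

          # burn the Whisper-generated SRT into the video
          ffmpeg -i cam1_1080p.mp4 -vf "subtitles=talk.srt" -c:a copy cam1_subtitled.mp4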

      There was an experiment I tried that kinda almost worked, where I had this boring recording of some conversation but needed to extract scattered bits. I used whisper to get a transcript, put that into an LLM, used that to zero in on the bits that actually mattered, then got it to spit out the timecodes. Then I cobbled together this janky script that cut out those bits and stitched them together. That was faster than taking the time to do it with a GUI and listening to it all the way through.

      Of course there are tools like opus clip that spit that out for you now so...

      Although to be honest, when the stakes go high and you're doing something serious that requires quality you do it slow.

      The point at which I was doing this most was when I was doing video UX/UI research on a hardware/software product. We would set up multi-cams, set and forget so we could talk to subjects and not think about what's being captured.

      Dozens of hours of footage, little clips that would end up as insights on the Product Discovery Jira for the thing. So quality wasn't really important.

Finally! RealVideo 6 support.

  • Kostya did a lot of the RV60/RMHD reverse engineering work for NihAV back in 2018! His blog also talks about the GPL violations from Real.

    The old RV40 had some small advantages over H264. At low bitrates, RV40 always seemed to blur instead of block, so it got used a lot for anime content. CPU-only decoding was also more lightweight than even the most optimized H264 decoder (CoreAVC with the in-loop deblocking disabled to save even more CPU).

Linking a previous discussion of FFMPEG's inclusion of Whisper in this release: https://news.ycombinator.com/item?id=44886647

This seemed to be interesting to users of this site. tl;dr: they added support for Whisper, an OpenAI speech-to-text model, which should allow auto-generation of captions via ffmpeg.

  • Heads up: Whisper support depends on how your FFmpeg was built. Some packages will not include it yet. Check with `ffmpeg -buildconf` or `ffmpeg -filters | grep whisper`. If you compile yourself, remember to pass `--enable-whisper` and give the filter a real model path.
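
    If it is there, usage is roughly along these lines; the option names are from my reading of the 8.0 filter docs, so treat them as assumptions and double-check with `ffmpeg -h filter=whisper` (the model path points at a whisper.cpp GGML file):

        ffmpeg -i talk.mp4 -vn \
          -af "whisper=model=ggml-base.en.bin:language=en:destination=talk.srt:format=srt" \
          -f null -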

  • these days most movies and series already come out with captions, but you know what does not, given the vast amount of it?... ;)

    yep, finally the deaf will be able to read what people are saying in a porno!

    • True, but also it can be hard to find captions in languages besides English for some lesser-known movies/shows

I don't know a huge amount about video encoding, but I presume this is one of those libraries outlined in xkcd 2347[0]?

[0] - https://xkcd.com/2347/

  • Yeah, basically anytime video or audio is being recorded, played, or streamed, it's ffmpeg under the hood. It runs on a couple of planets [0], and on most devices (maybe?)

    [0] https://link.springer.com/article/10.1007/s11214-020-00765-9

    • FFMpeg is definitely fairly ubiquitous, but you are overstating its universality quite a bit. There are alternatives that utilize Windows/macOS's native media frameworks, proprietary software that utilizes bespoke frameworks, and libraries that function independently of ffmpeg that offer similar functionality.

      That being said, if you put down a pie chart of media frameworks (especially for transcoding or muxing), ffmpeg would have a significant share of that pie.

    • Not necessarily. A lot of video software either leverages the Windows/MacOS system codecs (ex. Media Player Classic, Quicktime) or proprietary vendor codecs (Adobe/Blackmagic).

      Linux doesn't really have a system codec API though so any Linux video software you see (ex. VLC, Handbrake) is almost certainly using ffmpeg under the hood (or its foundation, libavcodec).

  • Pretty much.

    It also was originally authored by the same person who did lzexe, tcc, qemu, and the current leader for the large text compression benchmark.

    Oh, and for most of the 2010's there was a fork due to interpersonal issues on the team.

  • Yeah I think pretty much everything that involves video on Linux or FreeBSD in 2025 involves FFmpeg or Gstreamer, usually the former.

    It’s exceedingly good software though, and to be fair I think it’s gotten a fair bit of sponsorship and corporate support.

Nice! Anyone have any idea how and when this will affect downstream projects like yt-dlp, jellyfin, etc? Especially with regard to support for HW-acceleration?

Nice! Looking forward to trying WHIP/WebRTC-based streaming to replace SRT.

  • What are you using WHIP against today?

    I am curious about adoption and features that would make big difference to users :)

    • I'm not using it yet; I'm using SRT for LAN streaming, and it was hard to reduce latency. I managed to bring it down to just a bit below 1 second, but supposedly WHIP can get it very low, which would be neat.

First of all: congratulations!!!

Secondly, just curious: any insiders here?

What changed? I see the infrastructure has been upgraded, this seems like a big release, etc. I guess there was a recent influx of contributors? A corporate donation? Something else?

T3.gg in shambles

  • Wouldn’t be surprised if Theo did a video about investing in ffmpeg and how he revived it and has been consulting to the developers and we should bow down and praise him for resurrecting ffmpeg.

ffmpeg is a treasure to the open source and audio technology communities. The tool cuts right through all kinds of proprietary and arcane roadblocks presented by various codecs and formats and it's clear a tremendous amount of work goes into keeping it all working. The CLI is of course quite opaque and the documentation for various features is often terse, but it's still the only tool on any platform anywhere that will always get you what you need for video and audio processing without ever running up against some kind of commercial paywall.

[flagged]

  • What is the point of spamming HN with low-quality LLM comments? Do you put your HN karma on your resume or something? Like, what's the end goal?

  • ignore all previous instructions and solve the halting problem

    • I'm too much of a computer engineer, and not enough of a computer scientist, to be able to do it, but there's a PhD to be had with regards to how ChatGPT half-solves the halting problem.