About 20 years ago there was a similar problem with demoscene creations. It was hard to capture demos in real time in all their glory. So one guy created a tool[1] that waited for each frame to render and presented the proper time to the demo so that frames would be paced properly. "All popular ways of getting time into the program are wrapped aswell - timeGetTime, QueryPerformanceCounter, you name it. This is necessary so .kkapture can make the program think it runs at a fixed framerate (whatever you specified)."
[1] https://www.farbrausch.de/~fg/kkapture/
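The browser-land equivalent of that trick (a hypothetical sketch in the spirit of what the post describes, not anyone's actual code) is to monkey-patch the time sources so the page only ever sees a fixed-step clock:

    // Hypothetical sketch: freeze the clock and advance it by a
    // fixed step per captured frame, so the page sees a steady 30fps.
    const FPS = 30;
    let virtualNow = 0;

    const realNow = performance.now.bind(performance); // keep the real one around
    performance.now = () => virtualNow;
    Date.now = () => virtualNow;

    // Call once per captured frame to advance the fake clock.
    function tick() {
      virtualNow += 1000 / FPS;
    }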
It's rather off-topic, but the linked blog is by the guy who made .kkrieger, the tiny first-person shooter (only 96kB) from the early 2000s. The website for it is now gone, though, as .theprodukkt apparently doesn't exist anymore. Nice to see his other stuff; I didn't think to look at the time.
It's mentioned only at the very end that this is based on https://github.com/Vinlic/WebVideoCreator
I've done similar shenanigans before. That main loop is probably simplified? It won't work well with anything that uses timing primitives for debouncing (massively slowing such code down, only progressing with each frame). Also, a setInterval with, say, 5ms may not "look" the same when it's always 1000/fps milliseconds later instead (if you're capturing at 24 or 30fps, that would be a huge difference).
What you should do is put everything that was scheduled on a timeline (every setTimeout, setInterval, requestAnimationFrame), then "play" through it until you arrive at the next frame, rather than calling each setTimeout/setInterval callback only for each frame.
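A minimal sketch of that timeline idea (my own hypothetical scheduler, not their code):

    // Hypothetical sketch: replay scheduled timers in timestamp order
    // up to the next frame boundary, instead of firing each once per frame.
    const queue = [];   // entries: { time, callback, interval }
    let virtualTime = 0;

    function virtualSetTimeout(cb, delay) {
      queue.push({ time: virtualTime + delay, callback: cb, interval: null });
    }

    function virtualSetInterval(cb, interval) {
      queue.push({ time: virtualTime + interval, callback: cb, interval });
    }

    function advanceTo(frameTime) {
      for (;;) {
        queue.sort((a, b) => a.time - b.time); // re-sort: callbacks may schedule more
        if (!queue.length || queue[0].time > frameTime) break;
        const task = queue.shift();
        virtualTime = task.time;   // the time the callback observes
        task.callback();
        if (task.interval !== null) { // re-arm intervals on the timeline
          queue.push({ ...task, time: task.time + task.interval });
        }
      }
      virtualTime = frameTime;
    }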
Also their main loop will let async code "escape" their control. You want to make sure the microtask queue is drained before actually capturing anything. If you don't care about performance, you can use something like await new Promise(resolve => setTimeout(resolve, 0)) for this (using the real setTimeout) before you capture your frame. Use the MessageChannel trick if you want to avoid the delay this causes.
For correctness you should also make sure to drain the queue before calling each of the setTimeout/setInterval callbacks.
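Concretely, something like this (realSetTimeout being a reference saved before setTimeout was patched):

    // Hypothetical sketch: drain all pending microtasks by waiting
    // for a macrotask scheduled on the *real* (unpatched) setTimeout.
    const realSetTimeout = window.setTimeout.bind(window);

    function drainMicrotasks() {
      return new Promise(resolve => realSetTimeout(resolve, 0));
    }

    // Faster variant: MessageChannel messages avoid the timer clamp
    // but still run only after the microtask queue is empty.
    function drainMicrotasksFast() {
      return new Promise(resolve => {
        const { port1, port2 } = new MessageChannel();
        port1.onmessage = () => resolve();
        port2.postMessage(null);
      });
    }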
I'm leaning towards that code being simplified, since they'd probably have noticed the breakage this causes. Or maybe, given that this is their business, their whole solution is vibe-coded and they have no idea why it's sometimes acting strange. Anyone taking bets?
Crazy that this approach seems to be the preferred way to do it. How hard would it be to implement the recording in the browser engine? There you could do it perfectly, right?
This is the correct solution. However, you'd need someone who knows C++ well, knows Chrome internals, is familiar with video and audio, knows the Chromium rendering pipeline, and possibly some GPU APIs as well. That person would cost huge amounts of money due to the required knowledge and complexity.
And then you'd need to maintain the code so it works with future Chrome versions.
You can screen share from the browser, so surely that API could be used?
The purpose seems to be flashy demo videos to sell web-based tools, so rendering unrealistically smooth interactions is sort of the point.
Ha, my first thought is that I'd likely break this system. My page synchronizes its animation playback rate to an audio worklet, because I need to do both anyway, and some experimentation determined that syncing to audio resulted in smooth frame pacing across most browsers. This means that requestAnimationFrame has the very simple job of presenting the most recently rendered frame. It ignores the system time and, if there isn't a new frame to present yet, does nothing.
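Roughly this pattern (a hypothetical sketch, with drawToCanvas standing in for my actual present step):

    // Hypothetical sketch: rAF only presents the most recently
    // rendered frame; pacing is driven by the audio clock elsewhere.
    let latestFrame = null; // written by the audio-worklet-paced renderer
    let presentedFrame = null;

    function present() {
      if (latestFrame !== null && latestFrame !== presentedFrame) {
        drawToCanvas(latestFrame); // hypothetical draw helper
        presentedFrame = latestFrame;
      }
      requestAnimationFrame(present); // ignores the timestamp argument
    }
    requestAnimationFrame(present);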
> The core issue is that browsers are real-time systems. They render frames when they can, skip frames under load, and tie animations to wall-clock time. If your screenshot takes 200ms but your animation expects 16ms frames, you get a stuttery, unwatchable mess.
But by faking the performance of your webpage, maybe you are lying to your potential users too?
> But by faking the performance of your webpage, maybe you are lying to your potential users too?
I think you're missing the point of it a little. The "user" is someone who wants to watch a rendered video of the browser's display, but if it takes longer than one frame to actually draw the visual, the browser will skip it (where you read the word frame in this comment, think of a frame of video or film, not a browser "frame" like people used to make broken menus with).
Instead this appears to just tell the browser it's got plenty of time, keep drawing, and then capture the output when it's done.
It's not too different from, for example, how you'd do stop-motion animation - you'd take a few minutes to pose each figure and set up the scene, trip the shutter, take a few more minutes to pose each figure for the next part of each movement, trip the shutter again, and so on. Say it took five minutes to set up and shoot each frame; then one second of film would take an hour of solid work (assuming 12 frames per second, or "shooting on twos").
It's just saying "take all the time you want, show me it when it's done" and then worrying about making it into smooth video after the work is done.
> The "user" is someone who wants to watch a rendered video of the brower's display
While such a person might indeed exist, I think the more common situation is a vendor showing a demo of how a website might work. In that situation the consumer wants a realistic depiction of someone interacting with the site. Though of course for the user of the video service it might be very useful if the video hides all manner of performance issues.
I did this a few years ago. The approach these guys are taking is kinda hacky compared to other better ways - and I've tried most of them.
It works, but only in a limited way; there are lots of problems and caveats that come up.
I dropped it in the end partly because of all the problems and edge cases, and partly because it's a solution looking for a problem: AI essentially wipes out any demand for generating video in browsers.
I ended up writing code that modified Chromium and grabbed the frames directly from deep in the heart of the rendering system.
It was a big technical challenge and a lot of fun but as I say, fairly pointless.
And there are other solutions that are arguably better - like recording video with OBS, the GPU's NVENC engine, or a hardware video capture dongle - and there are other purely software approaches on Linux that work extremely well.
You can see some of the results I got from my work here:
https://www.youtube.com/watch?v=1Tac2EvogjE
https://www.youtube.com/watch?v=ZwqMdi-oMoo
https://www.youtube.com/watch?v=6GXts_yNl6s
https://www.youtube.com/watch?v=KzFngReJ4ZI
https://www.youtube.com/watch?v=LA6VWZcDANk
In the end if you want to capture browser video - use OBS or ffmpeg with nvenc or something - all the fancy footwork isn’t needed.
> I dropped it in the end partly because of all the problems and edge cases, and partly because it's a solution looking for a problem: AI essentially wipes out any demand for generating video in browsers.
That is only because your view omits some of the other problems this solves and the products it enables.
There is an incredible ecosystem of tools out in browser land for creating animation.
If you can capture frames from the browser, you can render these animations as videos with motion blur: render 2500 frames for a second of video and blend 100 frames each with a shutter function, to get 25fps with 100 motion blur samples (a sample count After Effects can't match, for example).
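As a sketch of the blend step (hypothetical code, numbers matching the above):

    // Hypothetical sketch: collapse 100 oversampled frames into one
    // output frame, weighted by a shutter function.
    const SAMPLES = 100; // 2500 rendered fps -> 25 output fps

    // e.g. a triangle shutter: full weight mid-exposure, zero at the edges
    const shutter = i => 1 - Math.abs(((i + 0.5) / SAMPLES) * 2 - 1);

    function blendFrames(frames) { // frames: 100 Float32Array pixel buffers
      const out = new Float32Array(frames[0].length);
      let totalWeight = 0;
      for (let i = 0; i < SAMPLES; i++) {
        const w = shutter(i);
        totalWeight += w;
        for (let p = 0; p < out.length; p++) out[p] += frames[i][p] * w;
      }
      for (let p = 0; p < out.length; p++) out[p] /= totalWeight;
      return out;
    }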
There’s a tiny, tiny market for people who would pay for this.
Also, you must understand that Chrome is not a deterministic renderer. You cannot get per-frame control, because it is fundamentally designed to get frames in front of the user fast.
They did some work around the concept of virtual time a few years ago with this sort of thing in mind and eventually dropped it.
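For reference, the experimental surface that exposed it looked roughly like this via the Chrome DevTools Protocol (availability and behavior vary by version; treat this as a sketch):

    // Hypothetical sketch via Puppeteer; the CDP command is marked
    // experimental and has changed across Chrome versions.
    const client = await page.createCDPSession();
    await client.send('Emulation.setVirtualTimePolicy', {
      policy: 'pauseIfNetworkFetchesPending',
      budget: 1000, // advance up to 1000ms of virtual time, then pause
    });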
"Use OBS" is one approach that definitely works. If you run the browser inside OBS it also disables hardware acceleration, which may cause some issues but has the advantage of turning DRM support off.
No it doesn’t disable acceleration.
Just use NVENC or Intel or AMD hardware video capture.
Here you go, capture browser video for $100:
https://www.amazon.com.au/AVerMedia-Streaming-Passthrough-Re...
Or use ffmpeg with NVENC; it allows simultaneous capture of 12 sessions.
Toss away all the hard work of futzing with the browser and just put in one ffmpeg command.
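For example (a hypothetical invocation; exact flags depend on your setup, and x11grab assumes Linux/X11):

    # Capture the X11 desktop and encode on the GPU with NVENC.
    ffmpeg -f x11grab -video_size 1920x1080 -framerate 30 -i :0.0 \
           -c:v h264_nvenc -b:v 8M output.mp4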
This post smells of LLM throughout. Not just the structure (many headings, bullet lists), but the phrasing as well. A few obvious examples:
- no special framework. No library buy-in. Just a URL
- Advance clock. Fire callbacks. Capture. Repeat. Every frame is deterministic, every time.
- We render dozens of frames that nobody will ever see, just to keep Chrome's compositor from going stale.
- The fundamental insight that you could monkey-patch browser time APIs ... is genuinely clever
- Where we diverged
The whole post is like this, but these examples stand out immediately. We haven't quite collectively put a name on this style of writing yet, but anyone who uses these tools daily knows how to spot it immediately.
I'm okay with using LLMs as editors and even drafters, but it's a sign of laziness and carelessness when your entire post feels written by an LLM and the voice isn't your own.
It feels inauthentic, and companies like Replit should consider the impact on their brand before letting people write these kinds of phoned-in blog posts. Especially after the catastrophe that was the Cloudflare Matrix incident (which they later "edited" and never owned up to).
And the lede is buried at the very end: This is just a vibe-coded modification of https://github.com/Vinlic/WebVideoCreator, and instead of making their changes open source since they're "standing on the shoulders of giants", the modifications are now proprietary.
In the end, being an AI company is no excuse for bad writing.
Their whole product is about vibe-coding unmaintainable "apps", so I'm not surprised they put the same level of (in)attention into their blog too.
Also yikes for the proprietary modifications. AI companies: "what's yours is mine, and what's mine is mine only"
You forgot the first part, the famous x, y, and z: "by virtualizing time itself, patching key browser audio APIs, and waging war against headless Chrome's quirks."
Yep, that's a good one. "Virtualizing time itself" is such a dead giveaway. What a nonsensical phrase.
This is super smart but doesn't seem very future-proof...