It downloads a solver at runtime; the whole thing took maybe half a second, and downloads seem to start way faster than before.
[youtube] [jsc:deno] Solving JS challenges using deno
[youtube] [jsc:deno] Downloading challenge solver lib script from https://github.com/yt-dlp/ejs/releases/download/0.3.1/yt.solver.lib.min.js
It would be great if we could download the solver manually with a separate command, before running the download command. I'm probably not alone in running yt-dlp in a restricted environment, and being able to package it up together with the solver before runtime would let me avoid loosening the restrictions for that environment. Not a huge issue though; I'm happy in general that the start of downloads seems much faster now.
Do you use Firefox on Linux, too? 4K videos freeze so often for me that I don't even try watching them online; I always just download them with yt-dlp. It doesn't bother me enough to give Chrome a try, but maybe that would make a difference.
I'm an n=1 using Chromium, but the only problem I have is the video losing focus when maximizing, meaning l/r/space no longer work for the video controls. It started around the time the liquid-glass-styled interface did.
> YouTube barely works in a full-on browser these days
Agreed. About half the time, Shorts don't display comments, and the back button breaks in mysterious ways. And I use Chrome on both Intel and M-series macOS machines, so the best in class there is, but my Windows Chrome doesn't fare much better. And an ad blocker isn't at fault; I pay for Premium.
And that's just the technical side. The content side is even worse: comment sections are overrun by bots, not to mention the countless AI slop and content thieves. And for fuck's sake, I get that high-class YouTubers put a lot of effort into making videos, but why YouTube doesn't step in and put clear regulations on sponsorship blocks is beyond me. BetterHelp, AG1, airup, NordVPN (and VPNs in general) should be outright banned.
And the ads, for those who aren't paying for Premium, are also just fucked up. Fake game ads galore (Kingshot, which stole sound effects from the original indie game Thronefall ...).
Google makes money here; they could actually hire a few people to vet ads and police the large YouTubers and their sponsors.
What environment are you using that:
- Has access to Youtube
- Can run Python code
- Can’t run JS code
If the concern is security, it sounds like the team went to great lengths to ensure the JS was sandboxed (as long as you’re using Deno).
If you're using some sort of weird OS or architecture that Deno/Node doesn't support, you might consider QuickJS, which is written in pure C and should work on anything. (Although it will be a lot slower; I'm not clear just how slow.) Admittedly, you then lose the sandboxing, although IMO it seems like it should be safe to trust code being served by Google on the official YouTube domain. (You don't have to trust Google in general to trust that they won't serve you actual malware.)
> What environment are you using that: - Has access to Youtube - Can run Python code - Can’t run JS code
Nothing specific; I just tend to run tools in restricted VMs where things are whitelisted and it's pretty much as locked down as it can be. It can run whatever I want it to run, including JS, and as the logs in my previous comment show, it is in fact running both Python and JS and has access to YouTube; otherwise it wouldn't have worked :)
I tend to follow a rule of "least possible privileges", so most stuff I run like that basically has to be "prepped", especially things that sometimes make network requests (updating the solver, in this case). It's just a matter of packaging it up before I run it, so it's not the end of the world.
No weird OS or architecture here, just good ol' Linux.
> IMO it seems like it should be safe to trust code being served by Google on the official Youtube domain
> What environment are you using that: - Has access to Youtube - Can run Python code - Can’t run JS code
They didn't say “can't run JS code”, but that the solver currently could not be downloaded from that location. It could be that it is an IPv6-only environment (IIRC YouTube supports IPv6 but GitHub does not), or just that all external sites must be assessed before being whitelisted (I'm not sure why YouTube would be but not GitHub, but it is certainly possible).
No, you don't need anything extra; `extra/yt-dlp` works perfectly fine and is enough. You'll get a warning if you run it without the flag:
WARNING: [youtube] [jsc] Remote components challenge solver script (deno) and NPM package (deno) were skipped. These may be required to solve JS challenges. You can enable these downloads with --remote-components ejs:github (recommended) or --remote-components ejs:npm , respectively. For more information and alternatives, refer to https://github.com/yt-dlp/yt-dlp/wiki/EJS
Providing one of the flags lets it automatically get what it needs. No need for AUR packages :)
Edit: Maybe I misunderstood, now that I've re-read your post. You meant it'll prevent the automatic download at runtime, perhaps? If so, that sounds about right.
It was just updated again today, and at least for me, when you install it using the package name "yt-dlp[default]", it already downloads both deno and the solver automatically.
I remember when QuickTime came out in 1991 and it was obvious to everyone that video should be copied, pasted and saved like any arbitrary data.
It's absolutely insane to me how bad the user experience is with video nowadays, even video that's not encumbered by DRM or complex JavaScript clients.
> It's absolutely insane to me how bad the user experience is with video nowadays, even video that's not encumbered by DRM or complex JavaScript clients.
The video experience for typical video files is great these days compared to the past. I think you may be viewing the past through rose-colored glasses. For years it was a pain to deal with video because you had to navigate third-party players (remember RealPlayer?), Flash plugins, and sketchy codec-pack installs from adware-infested download sites. If you were tech support for friends and family during that era, it was common to have to remove adware, spyware, and other unwanted programs after someone went down the rabbit hole of trying to install software to watch some video they found.
The modern situation where your OS comes with software to play common files or you can install VLC and play anything is infinitely better than the past experience with local video.
Local video could be a nightmare in the 90s. I remember those days. I remember when it was revolutionary that Microsoft Media Player came out and you could use one player for several formats, rather than each video format requiring its own (often buggy) player. Getting the right codecs was still a chore, though.
MS Media Player eventually fell behind the curve, but then we got VLC and things got great.
I'm absolutely not viewing the past through rose colored glasses. RealPlayer was a dumpster fire, but that came later.
I could hold shift and drag on the timeline to select, copy, then paste it into a document or another video. I can't do that with VLC today. Apple removed the feature in later releases too.
> For years it was a pain to deal with video because you had to navigate third party players (remember Real Player?), Flash plugins, and sketchy codec pack installs from adware infested download sites.
How is this any worse than what YouTube does now? RealPlayer and Flash never made you watch ads.
1991 was the vibrant, exciting, crazy "adolescence" of the PC age and well into the period where it was cool to have a desktop PC and really learn about it.
Phones are dominant now and have passed the PC generation by - in number, not capability. The concept of copy/paste/save for arbitrary data lives on for the non-tech masses only in the form of screenshots and screen recording features.
The thing that stands out to me looking back over a few decades is how much of consumer/public computing is exploring the latest novel thing, and companies trying to cash in on it. Multimedia was the buzzword aeons ago, but it was a gradual thing: increasing color depth and resolution, video, 3D rendering, storage capabilities for local playback, sound going from basic built-in speaker beeps to surround and spatial processing. Similar with the internet, from modems to broadband to being almost ubiquitously available on mobile. Or stereoscopic 3D, or VR, or touchscreens, or various input devices.
Adolescence is a very good word to encompass it: lots of awkward experiments trying to make the latest thing stick, with some of them getting discarded along the way when we grow out of them, they turn out not to be (broadly) useful, or fashion moves on. What I wonder about is whether the personal computer has hit maturity now and we're past that experimental phase; for most people it's an appliance. Obviously you can still get PCs and treat them as a workstation to dive into whatever you're enthusiastic about, but you need to specifically go out and pursue that. Where the ecosystem might be lacking is a bridge between the device most have as their personal computer (phone/tablet) and something that'll introduce them to other areas.
Depending on where personal/portable AI devices go, phones might be significantly different or not exist in 10 years as they do today.
There might be a resurgence of some kind of device like a PC.
Seeing iPadOS gain desktop features, and macOS starting to adopt more and more iPadOS-type features, clearly shows the desktop, laptop, and tablet experiences will be merged at some point, by Apple at least.
A specific issue with video data is that it's much denser: the same concept in video takes up more bytes than in text or an image. Therefore hosting is more expensive, so fewer people host, and the ones that do (e.g. YouTube) expect revenue. Furthermore, because videos are dense, people want them streamed rather than downloaded up front, which means hosts must have not just storage but reliable bandwidth.
Even then, there are a few competitors to YouTube like Nebula, PeerTube, and Odysee. But Nebula requires a subscription and PeerTube and Odysee have worse quality, because good video hosting and streaming is expensive.
The real problem is that YouTube built a model where the platform, not the creators, controls the money flow. They could have charged creators directly for hosting and left monetisation up to them, but by inserting themselves as the middleman, they gained leverage and authority over content itself. The "cost of hosting" is just the technical excuse for such centralisation.
I think companies always prioritized their own interests.
A company can increase its profits (1) by improving their products and services, so that they'll get more customers or customers willing to pay more, or (2) by increasing how much of their revenue is profit by (e.g.) cutting corners on quality or raising prices or selling customers' personal information to third parties.
Either of those can work. Yes, a noble idealistic company might choose #1 over #2 out of virtue, but I think that if most companies picked #1 in the past it's because they thought they'd get richer that way.
I think what's happened is that for some reason #2 has become easier or more profitable, relative to #1, over time. Or maybe it used not to be so clearly understood that #2 was a live option, and #1 seemed safer, but now everyone knows that you can get away with #2 so they do that.
The experience with video is excellent for most people. All the complexity is hidden from the end user, unless you are trying to hack something. In the 1990s, streaming effectively didn't exist because people didn't have enough bandwidth (it was mostly dial-up), there was very little legal offering, and what little existed was terrible. Home video was limited too, as few people knew how to make video files suitable for online distribution.
Piracy did pretty well, but that's because the legal experience was so terrible. But even then, you had to download obscure players and codec packs, and sourcing wasn't as easy as it is now. For reference, VLC and BitTorrent were released in 2001.
I'd say the user experience steadily improved and peaked in the mid-2010s. I think it is worse now than that peak, but back then it was terrible, for different reasons.
I was just reading how ATSC 3.0 (over-the-air TV) is kind of stalling because they added DRM fairly late in the rollout. Several people bought receivers that are now incompatible.
Also, I'm not sure what the actual numbers are, but my impression is that a significant portion of OTA enthusiasts are feeding their OTA signals into a network connected tuner (HDHomeRun, Tablo, AirTV, etc.) and DRM kills all of these.
Yes, I see YouTube going deep into enshittification. On my MacBook this morning, with FF Developer Edition, it just stopped working. I don't know if it's related to the fact that I tried to install an extension to "force H264" on my Ubuntu box. On the latter, the fans have lately started going crazy as soon as I open a single YouTube tab, and a quick search led me there.
Actually, at this point the only things that make good old aMule a bit inconvenient relative to my expectations are:
- it's missing snippet previews
- it doesn't have as many resources on every topic out there.
> It's absolutely insane to me how bad the user experience is with video nowadays
Has nothing to do with video per se. Normal embeddings, using the standard `<video>` element and no unnecessary JS nonsense, still work the same way they did in the 90s: Right click the video and download it, it's a media element like any other.
The reason why user experience is going to shite, is because turbocapitalism went to work on what was once The Internet, and is trying to turn it into a paywalled profit-machine.
I've always found it insane how much software development web sites are willing to undertake just to avoid using the standard video, audio, and img HTML elements. It's almost hilarious how over-engineered everything is, just so they can "protect" things they are ultimately publishing on the open web.
Plain <video> elements are easy to download, but not great for streaming, which is what most people are doing nowadays. Much of the JS complexity that gets layered on top is to facilitate adaptive bitrate selection and efficient seeking, and the former is especially important for users on crappier internet connections.
I'm not a fan of how much JS is required to make all that work though, especially given the vast majority of sites are just using one of two standards, HLS or DASH. Ideally the browsers would have those standards built-in so plain <video> elements can handle them (I think Safari is the only one which does that, and they only do HLS).
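To make the adaptive-bitrate point concrete, here's a toy sketch of the first thing an HLS player does: parse the master playlist and pick a variant to match the connection. The playlist content below is made up, and real players follow RFC 8216 far more carefully (quoted attribute values, codec strings, and so on); the naive comma-split is only for illustration.

```python
# Toy sketch of HLS master-playlist parsing, the kind of work the
# "JS nonsense" layered on top of <video> does. Not RFC 8216 compliant:
# quoted attribute values containing commas would break this split.

def parse_variants(master_playlist: str) -> list[tuple[int, str]]:
    """Return (bandwidth, uri) pairs, lowest bandwidth first."""
    variants, pending_bw = [], None
    for line in master_playlist.splitlines():
        line = line.strip()
        if line.startswith("#EXT-X-STREAM-INF:"):
            for attr in line.split(":", 1)[1].split(","):
                if attr.startswith("BANDWIDTH="):
                    pending_bw = int(attr.split("=", 1)[1])
        elif line and not line.startswith("#") and pending_bw is not None:
            # A non-comment line after EXT-X-STREAM-INF is the variant URI.
            variants.append((pending_bw, line))
            pending_bw = None
    return sorted(variants)

sample = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1920x1080
high/index.m3u8
"""
print(parse_variants(sample))
# [(800000, 'low/index.m3u8'), (2500000, 'high/index.m3u8')]
```

A real player would then measure throughput while fetching segments and switch between these variants on the fly, which is exactly the part a bare `<video src="file.mp4">` can't do.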
> still work the same way they did in the 90s: Right click the video and download it, it's a media element like any other.
I’m so confused reading these comments. Did everyone forget RealPlayer? Flash videos? All of the other nonsense we had to deal with to watch video on the internet?
Technically, you can profit off of ad revenue and subscriptions without exploiting the labour of your workers, so in this particular case it has nothing to do with the economic regime. Enshittification is its own thing.
The problem with a standard video element is that while it's mostly nice for the user, it tends to be pretty bad for the server operator. There are a ton of problems with browser video, starting pretty much with "what codec are you using". It sounds easy, but the unfortunate reality is that there are a billion different video codecs (and heavy use of Hyrum's law/spec abuse in the codecs), and a browser only supports a tiny subset of them. Hosting video already requires, at a baseline, transcoding the video to a different storage format; unlike a normal video file you can't just feed it to VLC and get playback, you're dealing with the terrible browser ecosystem.
Then once you've found a codec, the other problem immediately rears its head: video compression is pretty bad if you want to use a widely supported codec, even if for no other reason than the fact that people use non-mainstream browsers that can be years out of date. So you are now dealing with massive amounts of storage space and bandwidth that are effectively being eaten up by duplicated files, and that isn't cheap either. To give an estimate: under most VPS providers that aren't hyperscalers, a plain-text document can be served to a couple million users without having to think about your bandwidth fees. Images are bigger, but not by enough to worry about. 20 minutes of 1080p video is about 500 MB under a well-made codec that doesn't mangle the video beyond belief. That video is going to reach at most 40,000 people before you burn through 20 terabytes of bandwidth (the Hetzner default amount), and in reality probably fewer, because some people might rewatch the thing. Hosting video is the point where your bandwidth bill overtakes your storage bill.
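The back-of-the-envelope numbers above check out; here they are worked through explicitly. The inputs (500 MB per 20-minute 1080p video, 20 TB of included monthly traffic) are the comment's own rough figures, not measurements.

```python
# Sanity-checking the bandwidth estimate from the comment above.
video_size_bytes = 500 * 10**6      # ~500 MB per 20-minute 1080p video
included_traffic = 20 * 10**12      # 20 TB, the quoted Hetzner default

full_views = included_traffic // video_size_bytes
print(full_views)  # 40000 full watches before hitting the bandwidth cap
```

Any rewatches, partial buffering of abandoned plays, or higher-bitrate encodes only push the real number lower, which is the comment's point.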
And that's before we get into other expected niceties, like scrubbing through a video while it's playing. Modern video players (the "JS nonsense" ones) can both buffer a video and jump to any point in it, even outside the buffer. That's not a guarantee with the HTML video element; your browser is probably just going to keep quietly downloading the file while you're watching it (eating into server operator cost), and scrubbing ahead in the video will just freeze the output until it's done downloading up to that point.
It's easy to claim hosting video is simple, when in practice it's probably the single worst thing on the internet (well that and running your own mailserver, but that's not only because of technical difficulties). Part of YouTube being bad is just hyper capitalism, sure, but the more complicated techniques like HLS/DASH pretty much entirely exist because hosting video is so expensive and "preventing your bandwidth bill from exploding" is really important. That's also why there's no real competition to YouTube; the metrics of hosting video only make sense if you have a Google amount of money and datacenters to throw at the problem, or don't care about your finances in the first place.
> I remember when QuickTime came out in 1991 and it was obvious to everyone that video should be copied, pasted and saved like any arbitrary data.
I remember when VCRs came out and everyone would tape TV shows and share them with their friends.
By now we should be able to share video on SD Cards that just pop into a slot on the top of the TV, but the electronics companies are now also the content companies, so they don't want to.
You can plug a USB drive with videos on into a lot of TVs I've encountered over the years. Due to limited container/codec support I rarely made use of it though.
Was RealPlayer really that horrible, or was it just trying to do streaming media on an extremely low-bandwidth connection, without hardware acceleration or sophisticated codecs? I only really used it with a 28.8K modem, Netscape, and Windows 95. The experience was poor, but the experience of viewing moderately sized images wasn't great either. I remember encountering MPEG decoder add-in cards at the time (which nobody used), although I suspect video cards started adding these features at some point during the 1990s.
I never bothered trying to stream anything, but I do remember downloading 20 MB episodes of Naruto in surprisingly good quality thanks to the .rmvb format.
A media business is predicated on exclusive rights over their media. The entire notion of media being freely copied and saved is contrary to their business models. I think there's a healthy debate to be had over whether those models are entitled to exist and how much harm to consumers is tolerable, but it's not really obvious how to create a business that deals in media without some kind of protection over the copying and distribution of that media.
I think what breaks computer people's brains a bit is the idea that the bytes flying around networks aren't just bytes; they represent information that society has granted individuals or businesses the right to control, and the fact that technology doesn't treat any bytes as special is a problem when society wants to regulate the rights over that information.
I have worked on computer systems for media organizations and they have a very different view of intellectual property than the average programmer or technologist. The people I find the most militant about protecting their rights are the small guys, because they can't afford to sue a pediatrician for an Elsa mural or something.
I use yt-dlp (and, back then, youtube-dl) all the time to archive my liked videos. I started back around 2010; now I have tens of thousands of videos saved. Storage is cheap, and a huge percentage of them are no longer available on the site.
I also save temporary videos that get removed after a while, for example NHK honbasho sumo highlights, which are only available for a month or so before they permanently remove them.
You are a digital hoarder. I have taken so many pics that I wouldn't even bother to look back at them (do we ever?), but Google's Memories is a really neat feature; it refreshes memories. I think you should run a similar service to refresh your memory of your favourite videos, like having them on speed dial.
I look at my pictures regularly. They are on my phone, mostly I scroll back 1-3 months to refresh my memory, and I often go further back to check on how living things were around me, and to what my general surrounding looked like. I also like to look at game screenshots from time to time. Funny to see how I lived life back then.
The Memories feature sounds cool. I have something a bit similar on my Nextcloud, "On this day", that shows an image dated on the same day in previous years, and clicking it brings up more pictures from its general time. I love it! So many memories.
I'm an amateur photographer. Lately, I've taken to making curated collections from my "slush feeds": going through a particular trip, time period, or moment, grabbing the best photos, and parceling them out to a dedicated album. Makes for a much better experience, and it's fun to share with friends/family.
I have an e-ink photo frame on the wall that switches picture once every 24h, picking one of my pictures of the last 10+ years by random. So every single one of my tens of thousands of pictures gets a real chance to be seen at least once during my lifetime :)
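The frame's behaviour (one "random" picture per 24 hours, stable for the whole day) can be sketched by hashing the date and indexing into the photo list. The real frame's logic is unknown to me; this is just one deterministic way to get a daily pick.

```python
# Sketch: a stable "photo of the day" pick. Hashing the date means the
# same photo shows all day, and the choice looks random across days.
import datetime
import hashlib

def todays_photo(photos: list[str], day: datetime.date) -> str:
    digest = hashlib.sha256(day.isoformat().encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(photos)
    return sorted(photos)[index]  # sort so the pick is order-independent
```

Over enough days this gives every photo a roughly equal chance of appearing, which is all the frame needs.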
Often when I am bored I pick a random day in the past and look at where I was on that day and which pictures I took. Refreshing memories is a great idea but the low tech way is enough for me.
Might sound stupid, but: differences between Google memories vs. Snapchat memories?
Also, my issue is that I would NEVER upload the photos I have on my hard drive, due to privacy issues, but if I had a local model that could categorize photos and whatnot, that would be cool. I have over 10k screenshots/images. Many of them have text on them, so I'd probably need OCR.
> You are a digital hoarder.
Is this meant to be negative? Many videos I have watched on YouTube are now unavailable. I wish I had saved them, too, i.e. I wish I was a digital hoarder, too, but eh, no space for me.
I've seen photography compared to archery recently, and that comparison stuck with me.
As long as you enjoy the act of shooting, that is enough. Archers don't have to keep and look at old scoreboards/targets for the archery to have been enjoyable and worthwhile; it's the same with modern photography.
I routinely review my pics and vigorously delete all duplicates or poor quality images. It helps if you do this for 10-15 minutes every day. At least I'm able to find most of the pictures I remember I took, and I don't have to scroll through 1000 snaps of some particular sunset to do that.
I started after channels started removing their own videos, either because they didn't think the videos were good enough or because they had a mental break and deleted their channel. So good stuff is just gone.
There was one instance where a prominent "doujin" musical artist got fingered as a thief. Away went all of their videos, except... he'd packaged them as something completely different from wherever he'd taken them from. One song in particular sucked to lose, because its sibling still exists as an "extended" upload. So, I can listen to the one any time, but the other, I simply know that it once existed, and that it might still exist somewhere else, just under a different title. I can't even remember how it went.
I was just lamenting last night that we can't watch some of Terutsuyoshi's amazing makuuchi bouts from about three(?) years ago. I wish I'd archived them.
How do you manage the archive? I mean the file hierarchy, structure, etc. I started archiving YouTube videos recently, now saving descriptions and other metadata too, but simply having them all in one directory doesn't seem like a good idea.
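For what it's worth, yt-dlp's output templates can impose a hierarchy at download time, e.g. `-o "%(uploader)s/%(upload_date)s - %(title)s [%(id)s].%(ext)s"`. Below is a sketch of the equivalent path logic in plain Python, useful for re-sorting an existing flat archive from the `.info.json` sidecar files; the channel/year layout is my own choice, not a convention from the thread.

```python
# Build a channel/year/title path from yt-dlp metadata fields
# (uploader, upload_date, title, id, ext, as written to .info.json).
from pathlib import Path

def archive_path(meta: dict) -> Path:
    year = meta["upload_date"][:4]  # yt-dlp dates are YYYYMMDD
    name = f'{meta["upload_date"]} - {meta["title"]} [{meta["id"]}].{meta["ext"]}'
    return Path(meta["uploader"]) / year / name

p = archive_path({
    "uploader": "SomeChannel", "upload_date": "20240131",
    "title": "Example", "id": "abc123", "ext": "mkv",
})
print(p.as_posix())  # SomeChannel/2024/20240131 - Example [abc123].mkv
```

Keeping the video id in the filename is the important part: titles change and get re-used, but the id is what lets you dedupe against your archive later.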
No! It would be easier, but I've burned myself so many times with removed videos that I basically do it manually, ASAP, on my own. Not a big deal once you have yt-dlp set up properly.
Do you ever go back and actually watch those videos? Whenever I start to journal, track, or just document something, after some time I notice again and again that most of the value was already created the moment I finished working on a specific entry. Even with something seemingly very important, like medical records. Maybe the one exception I can think of is recordings of memories involving people close to you.
I have the same with journals, but the video archiving has actually come in handy a few times, though still fairly rarely. I think the difference is that you control the journal (and so rarely feel like you need its content), while the videos you're archiving are by default outside of your control and can be more easily lost.
I don't think journaling is the same thing as hoarding pics/videos, though. Even if you never go back and read through old handwritten journals, just the physical process of writing has mental effects that pics/videos do not. There's also a bit of a therapeutic result from slowing down and putting thought to paper. So to me the only similarity is that you might not ever look at it again; that does not make them the same at all.
I would be interested in knowing as well. I've been watching YouTube since it first came out and can't remember any times where I saw something I thought I needed to actually download and save in case I wanted it in 10 years. 10,000+ videos is a lot of videos to just seemingly save.
Same here, and my motivation was that some of my liked videos were randomly removed, and it's pretty cool music I wanted to keep forever.
I made another script that adds the video thumbnail as album art and tries to set the proper ID3 tags somehow; it works like 90% of the time, which is good enough for me.
Then I made another script that syncs it to my phone when I connect it.
So now I have unlimited music in my phone and I only have to click on "Like" to add more.
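The "sync to phone" step can be as small as a one-way copy of whatever is new. This is a minimal sketch, not the commenter's actual script; the paths, the flat layout, and the `.mp3`-only filter are my assumptions.

```python
# One-way sync: copy tracks present in the library but missing on the
# phone mount. Never deletes anything on either side.
import shutil
from pathlib import Path

def sync_new_tracks(library: Path, phone: Path) -> list[str]:
    phone.mkdir(parents=True, exist_ok=True)
    already_there = {p.name for p in phone.glob("*.mp3")}
    copied = []
    for track in sorted(library.glob("*.mp3")):
        if track.name not in already_there:
            shutil.copy2(track, phone / track.name)  # copy2 keeps timestamps
            copied.append(track.name)
    return copied
```

Comparing by filename keeps it dumb and predictable; if files get re-encoded in place, comparing sizes or hashes would be the next step up.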
And yet, none of Google's 900k TOC genius engineers have thought of this as a feature ...
I doubt that it's a "nobody else thought of it" situation; it's more that management doesn't want it, as it takes away the need for their own streaming offerings. The music industry also doesn't want it, as there are no more royalties coming in. Can't release an app that pisses off the industry.
In ten years time YouTube will be entirely inaccessible from the browser as the iPad kids generation are used to doomscrolling the tablet app and Google feels confident enough to cut off the aging demographic.
Maybe to stop the .01%. Switching to app-only, sign-in-only would get them pretty much all the way there.
They own the OS; with sign-in, integrity checks, and the inability to install anything Google doesn't want you to install, they could make it pretty much impossible for the vast majority of people to view the videos on a device capable of capturing them. Combine that with a generation raised in sandboxes, and their content would be safe.
I guess at that point we could do it the old fashioned way by pointing a camera at the screen. Or, I guess, a more professional approach based on external recording.
I can only navigate to a video by long-pressing, copying the URL, and pasting it into the URL bar; otherwise I get a meaningless "something went wrong" type of error message. Mobile Safari, no content blockers, not logged into a Google account. After almost two decades of making the website worse, they finally succeeded in breaking "clicking a video". I wonder what the hotshots at Alphabet will manage to break next :o)
And the YouTube web interface is full of issues too. For example, livestreams have had memory leaks for months already, thought to be related to their chat implementation.
Meanwhile, YouTube spends its effort on measures against yt-dlp, which don't actually stop yt-dlp.
What the fuck is wrong with Google corporate as of late.
Ooh thanks. If the 21st century is going to belong to China, then BiliBili, along with v2ex.com, is gonna need to get added to my doomscrolling itinerary.
Pffft, and good riddance, comrade! Just think about native applications and native performance, great native animations and a native experience (and native ads, of course)! We won't have this god-awful Web (which propelled the modern tech world in the first place) anymore; we can finally pursue our personal vendetta against awful JS and the DOM. No more interoperability, no more leverage against corpos, just glorious proprietary enclaves where the local tyrant can do anything they want!
Think of iOS. You can basically use just 1 programming stack on iOS devices: Swift/Objective-C. You can't have JIT except for the JIT approved by the Apple Gods.
The biggest hack around this is React Native, which barged in due to sheer JavaScript and web dominance elsewhere, and even that has a ton of problems. Plus I'm fairly sure the React Native JS only runs in the JIT approved by the Apple Gods anyway.
Otherwise, we're stuck in the old days of compiled languages: C/C++ (they can't really get rid of these due to games, and they have tried... Apple generally hates/tolerates games but money is money). Rust works decently from what I hear. Microsoft bought Mono/Xamarin and that also sort of works.
But basically nothing else is at the level of quality and polish, especially in terms of deployment, that desktops offer if you want to build an app in, say, Python. Or Java. Or Ruby. Or whatever other language people write desktop apps in.
And we're at a point where mobile computing power is probably 20x that of desktops available in 2007. The only factor holding us back is battery life, and that's only because phone manufacturers manufacture demand by pushing for ever-slimmer phones. Plus we have tons of very promising battery tech very close to increasing battery capacities by 20-50%.
This is obviously not plausible. They're never going to shut off browser access on people's laptops. Watching YT at work is a major thing.
I have to assume you're joking, but I honestly can't figure out what point you're even trying to make. Do you think it's surprising that an ad-supported site has anti-scraping/anti-downloading mechanisms? YouTube isn't a charity; it's not Wikipedia.
Not to mention all of the iframe embeds. I'd argue they helped YouTube become the de facto go-to platform for corporate videos. Yeah, there are other solutions, but the number of corporate sites that just toss videos on YouTube is insane.
I don't think it's such a wild possibility that more and more jobs will be able to be done with locked-down tablets and smartphones while fewer will be done on laptops and desktops. We are already seeing it at the personal level: people are entirely forgoing personal computers and using mobile devices exclusively. The amount isn't huge (like 10 or 15% in the US, IIRC?), but 10 years ago that was unthinkable, IMO.
I was reading a study recently that claimed Gen Z is the first generation where tech literacy has actually dropped. And I don’t blame them! When you don’t have to troubleshoot things and most of your technology “just works” out of the box compared to 20 or even 10 years ago, then you just don’t need to know how to work under the hood as much and you don’t need a fully fledged PC. You can simply download an app and generally it will just take care of whatever it is you need with a few more taps. Similar to how I am pretty worthless when it comes to working on a car, whereas my parents' generation could all change their own oil and work on a carburetor (part of this is also that technology has gotten more complicated and locked down, including cars, but you get my point).
The point of all this is I could definitely see a world where using a desktop/laptop computer starts becoming a more fringe choice or specific to certain industries. Or perhaps they become strictly “work” tools for heavy lifting while mobile devices are for everything else. In that world many companies will simply go “well over 90% of our users are only using the app and the desktop has become a pain in the ass to support as it continues to trend downwards so…why bother?”
Who knows the future? Some new piece of hardware could come out in 10 years and all of this becomes irrelevant. But I could see a world where devices in our hands are the norm and the large device on the desk becomes more of a thing of the past for a larger percentage of the population.
Because this will mean a major shift to open-source and community solutions, where creators will be paid directly by their viewers.
I have NO problem, whatsoever, paying content creators directly.
But I have a HUGE problem paying big corpos. It's ridiculous that we pay the same price for Netflix as people in the US: for you it's cheaper than a coffee, while for us, if you compare median salaries, it's 5-10x MORE expensive. (cancelled every streaming platform years ago, as have all of my friends; cloud seedbox here we go)
And I don't even wanna mention the agenda Netflix wants to push (e.g. The Witcher)
That's why piracy is so frequent here in a small EU country :) It's also legal, or at least a grey area, because nobody enforces it, and copyright companies are unable to enforce it if you don't make money from sharing. (yes, you don't even need to use a VPN with torrents)
> Because this will mean major shift to open-source and community solution, where creators will be paid directly by their viewers.
That’s an unrealistic nerd dream. People haven’t moved off of closed social networks such as Facebook and Instagram, and haven’t flocked to creator-owned platforms such as Nebula. The general public, i.e. the majority of people, will eat whatever Google, Meta, et al feed them. No matter how bad things get, too few people abandon those platforms in favour of something more open.
I'm sorry but this sounds hollow. Creators are specifically choosing to upload their content to YouTube. They have elected "big corpos" to handle payment for them.
You are not standing up for them by pirating their stuff from YouTube.
If you have a problem with it, it is on you to stop using YouTube to view their content. You did not gain a moral right to pirate their stuff just because you don't like the deal.
"yt-dlp is a feature-rich command-line audio/video downloader with support for thousands of sites. The project is a fork of youtube-dl based on the now inactive youtube-dlc."
I guess the point was that yt-dlp is only possible because of the mandatory protocols you need in the browser. Moving to a native app makes it much easier to prevent downloading and to deny access to the unencrypted content.
Doesn't matter, yt-dlp looks like a browser to youtube. They can put authorization/encryption in an app that can't be done in a webpage. By killing browsers they gain control.
> What about Selenium or a headless browser solution?
>> The yt-dlp maintainers have no interest in doing this except as a measure of last resort. A headless browser solution would be an admission of defeat and would go against the spirit of this project.
Recently YouTube seems more and more confrontational with its users: from outright blocking adblockers, which has no bearing on YouTube's service, to automatically scraping creators' content for AI training, and now anything API-related. They're very much aware that there is no real competition, so they're taking full advantage of it, at the expense of the user experience. But these days, large companies simply don't suffer from a bad customer experience anymore.
> At the expense of the 'users experience' but these days, large companies simply don't suffer from a bad customer experience anymore.
This is my personal opinion. They're still affected by customer satisfaction and they're still driven by market forces. It's just that you and I are not their customers. It's not even the YT Premium customers. Google is and always has been an ad service company, and their primary customers have always been the big advertisers. And they do care about their experience. For example, they go overboard to identify the unique views of each ad.
Meanwhile the rest of us - those of us who don't pay, those who subscribe, and even the content creators - are their captive resources, whose creativity and attention they sell to the advertisers. Accordingly, they treat us like cattle, with poor-quality support that they can't be bothered about. This is visible across their product lineup, from YouTube and Gmail to Workspace. You can expect to be demonetized or locked out of your account and hung out to dry without any recourse if your account gets flagged by mistake or falsely suspected of politics they don't like. Even in the best case, you can only hope to raise a stink on social media and pray that it catches the attention of someone over there.
Their advantage is that the vast majority of us choose to be their slaves, despite this abuse. Without our work and attention, they wouldn't have anything to offer their customers. To be fair to ourselves, they did pull off the bait and switch tactic on us in the beginning by offering YouTube for free and killing off all their competition in the process. Now it's really hard to match their hosting resources. But this is not sustainable anymore. We need other solutions, not complaints. Even paid ones are fine as long as they don't pull these sort of corporate shenanigans.
Recently I’m also encountering more unskippable ads, especially in kids’ videos. There were always two ads. Sometimes the first wasn’t skippable, and the second always was. That has gradually shifted to neither being skippable.
>outright blocking adblockers, which has no bearing on youtube's service
The scale of data storage, transcoding compute, and bandwidth needed to run YouTube is staggering. I'm open to the idea that adblocking doesn't have much effect on a server just serving HTML and a few images, but YouTube's operating costs are (presumably; I haven't looked into it) enormous and absolutely incompatible with adblocking.
YouTube broke even sometime around 2010 and has been profitable ever since. The ad revenue has always been more than enough to sustain operating costs. It's just more growthism = more ads. If you want the YouTube of 2010--you know, the product we all liked and got used to--you can't have it. Welcome to enshittification.
Personally I find YouTube unusable without an adblocker. On my devices that don't have an ad blocker, it's infuriating.
It's fine for this project since Google is probably not in the business of triggering exploits in yt-dlp users, but please do not use Deno sandboxing as your main security measure to execute untrusted code. Runtime-level sandboxing is always very weak. Relying on OS-level sandboxing or VMs (Firecracker & co.) is the right way to do this.
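For illustration, here's a minimal Python sketch of what "OS-level" means in practice: the kernel enforces the limit regardless of what the runtime promises. (This only shows rlimits, one small ingredient; a real sandbox like bwrap, firejail, or a Firecracker VM also confines filesystem, network, and syscalls.)

```python
import resource
import subprocess
import sys

def run_limited(code: str, mem_bytes: int = 512 * 1024 * 1024) -> int:
    """Run a Python snippet in a child process with a kernel-enforced
    address-space cap (Linux/Unix only). Returns the child's exit code."""
    def limits():
        # Applied in the child after fork(), before exec(): the kernel,
        # not the language runtime, rejects allocations past the cap.
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    proc = subprocess.run([sys.executable, "-c", code], preexec_fn=limits)
    return proc.returncode

# A hostile (or buggy) script trying to grab 1 GiB fails at the OS level.
rc = run_limited(
    "import sys\n"
    "try:\n"
    "    b = bytearray(1024 * 1024 * 1024)\n"
    "    sys.exit(0)\n"
    "except MemoryError:\n"
    "    sys.exit(42)\n"
)
print(rc)  # 42: the kernel refused the allocation
```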
For a long time, yt-dlp worked with Python alone. They implemented a lightweight JavaScript interpreter that could run basic scripts, but as the runtime requirements became more sophisticated, it struggled to scale.
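To give a flavor of why a tiny interpreter once sufficed (hedged: the transform names below are invented for illustration; the real challenges are obfuscated JS that changes frequently): the classic signature challenges boiled down to a short pipeline of string transforms, exactly the kind of thing a minimal interpreter can evaluate.

```python
# Illustrative sketch only - not yt-dlp's actual code or transform names.
def reverse(sig: str, _n: int = 0) -> str:
    return sig[::-1]

def splice(sig: str, n: int) -> str:   # drop the first n characters
    return sig[n:]

def swap(sig: str, n: int) -> str:     # swap position 0 with position n
    chars = list(sig)
    chars[0], chars[n % len(chars)] = chars[n % len(chars)], chars[0]
    return "".join(chars)

# A "challenge" is then just a short pipeline of such operations:
pipeline = [(swap, 3), (splice, 2), (reverse, 0)]

sig = "abcdefg"
for op, arg in pipeline:
    sig = op(sig, arg)
print(sig)  # gfeac
```

Once the server-side challenges grew into full obfuscated programs, evaluating them needed a real JS engine rather than a hand-rolled interpreter.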
I wonder how the whole thing works when I open a youtube video from a preview inside a chat application on my mobile phone.
It looks like the video loads and starts playing in some kind of in-app browser, but there is just a full-screen video and nothing else. I also never saw any ads in this "mode" of playing a video, yet recently some strange things started happening where the playback would start together with the audio track from an advertisement. The video itself would start playing, but its sound would be replaced with the sound from the ad, which seemed very odd and much like a bug; only when the ad's audio track ends is it replaced with the audio track from the video itself.
I'm genuinely curious how the whole playback process differs when I watch a video from the Telegram preview. Can I somehow achieve the same "just fullscreen video" kind of playback on the desktop as well? Does anyone have any insight?
> Is it because it would break compatibility with some devices?
This is a significant part of it. There are many smart devices that would not be capable of running that sort of software. As those cycle out of the support windows agreed way-back-when then this sort of limitation will be removed.
I'm sure this is not the only consideration, but it is certainly part of the equation.
Yeah, it's pretty much to support backwards compatibility with old smart TVs and the like. They already enforce stricter rules on new hi-res content, and once those old devices cycle out of service you can expect the support to go away.
I think it's because it costs money and they get little benefit from doing so.
Major platforms like Netflix etc. don't implement that DRM because they care; it's because the content they distribute requires that they employ those measures, otherwise whoever produces the content won't give it to them. Content on YouTube does not have this requirement.
Also: implementing strict DRM on all videos is probably bad for their reputation. It would restrict the devices that are able to play YouTube, and probably move a lot of content creators to other platforms that don't impose these requirements.
It's just an understandable reluctance to insert a bunch of additional dependencies in your playback stack unless you really, really have to.
People underestimate how much engineering Netflix have put in over the years to get it to work seamlessly and without much playback start latency, and replicating that over literally millions of existing videos is pretty non-trivial, as is re-transcoding.
It's not because of older devices - any TV that has gotten a YouTube app in the past decade was required to support Widevine as part of the agreement to get the app, so the tail end of devices you'd cut off would be tiny. And even if they wanted to keep them in use, they could probably use the client certificate to authenticate them and disallow general web access. It wouldn't be 100% foolproof, but if any open source project used an extracted key they could revoke it quickly.
"Support for YouTube without a JavaScript runtime is now considered "deprecated." It does still work somewhat; however, format availability will be limited, and severely so in some cases (e.g. for logged-in users). "
The devil is in the details
There are some formats, perhaps the one(s) the user wants, that do not require a JS runtime
Interesting that "signing up" for a website publishing public information and "logging in" may not always work in the user's favor. For example, here they claim it limits format availability.
"Format availability without a JS runtime is expected to worsen as time goes on, and this will not be considered a "bug" but rather an inevitability for which there is no solution. It's also expected that, eventually, support for YouTube will not be possible at all without a JS runtime."
It is speculated that this format availability might change in the future
How long until it comes with a DRM AI and then my anti-DRM AI will have to fight it in a virtual arena (with neon lights and killer soundtrack, of course)?
Even when the so-called "ad-pocalypse" happened, this wasn't as big of an issue as it is today.
What's going on with Google being extra stingy seems to correlate well with the AI boom (curse). I suspect there are companies running ruthless bots scraping TBs of videos from YouTube. Not just new popular videos that are on fast storage, but old obscure ones that probably require more resources to fetch. This is unnatural, and goes contrary to the behaviour pattern of normal users that YT is optimized for.
I think AI-companies abusing the internet is why things are getting more constrained in general. If I'm right, they deserve the bulk of the blame imo.
This will happen in the real world when the robot mass production gets going. We'll climb the exponential till we run into the resource limits of the planet at meteoric speed.
Yes, the regulators will try and manage it, but eventually every decision about who can use the robot/AI genie for what will go through them because of the robot/AI genie's enormous strain on natural resources, and you'll be back to a planned economy where the central planners are the environmental regulators.
There are hard decisions to make as well. Who gets to open a rare earth processing plant and have a tailing pond that completely ecologically destroys that area? Someone has to do it to enable the modern economy. It's kind of like we won't have a good AI video generator and will always be behind China if some Youtube creators refuse to license their content for AI training. Same goes for the rare earth processing tailing pond. Nobody can agree on where it's going to go, so China wins.
Hopefully most of what the bots are ruthlessly scraping is all the AI slop that is filling YT. Hopefully garbage in - garbage out will kill off all the AI nonsense.
yes, "AI" can be useful, but nonsense and slop are not.
>I suspect there are companies running ruthless bots scraping TBs of videos from YouTube.
certainly, but for Google, that bandwidth and compute is a drop in the bucket. at the scale Google operates, even if there were a hundred such bots (there aren't - few companies can afford to store exabytes of data), those wouldn't even register on the radar. of course, like the other social media oligarchs, Google wants to be the only entity with unrestricted access to their catalog of other people's content, but even that isn't their motivation here - "login to prove you're not a bot :^)" was ALWAYS going to happen, even without the AI bubble.
enshitiffication is unstoppable and irreversible, and Google is its prophet - what they don't kill, they turn to shit.
>I think AI-companies abusing the internet is why things are getting more constrained in general.
even before the AI bubble, every other fuckass blog with 0.5 daily visitors was behind Cloudflare, for the same reason those fuckass blogs are built with FOTM javascript frameworks - there's nowt so queer as webshits.
Just one question. I see all these 3rd party clients solving the problem separately. Isn't it easier for everyone to build a unified decoder backend that exposes a stable and consistent interface for all the frontends? That way, it will get more attention and each modification will have to be done only once.
Since JS is the big issue here, the backend itself could be written in JS, TS, or something else that compiles to WASM. That way, the decoder doesn't have to be split between two separate codebases. Deno also allows the bundle to be optionally compiled into a native executable that can run without having to install Deno separately.
Alternatively, I'm not sure if this might be an impetus to move the bulk of the codebase itself to TS/JS and just use Deno/Node/Bun or otherwise to move to Rust with rusty_v8 or deno_core directly.
We use this for AI transcriptions internally on our Linode VPS server.
It's been working great by itself for the most part since the beginning of the year, with only a couple of hiccups along the way.
We do use a custom cookies.txt file generated on the server as well as generate a `po_token` every time, which seems to help.
(I originally thought everything would just get blocked from a popular VPS provider, but surprisingly not?)
Most recently though, we were getting tons of errors like 429 until we switched to the `tv_embedded` client, which seems to have resolved things for the most part.
> if using QuickJS, version 2025-4-26 or later is strongly recommended for performance reasons
Oh, I wonder if they got performance to a reasonable level then? When the external JS requirements were first announced, they said it took upwards of half an hour, and a QuickJS developer wrote in the ticket that they didn’t see a path towards improving it significantly enough.
I'm glad there's a streaming service that pays independent creators for their hard work, but the player is glitchy almost to the point of being unusable.
Actually, it's completely to the point of being unusable. For several videos now, I've watched halfway through and suddenly playback stops and the video is replaced with "Error." And every time this happens I just have to pray the video is on YouTube because, without exaggeration, it will never work again. Even after checking a week later.
If corporations could stop being dicks, that would be great. Between this and the Reddit API change, it feels like they all get together and plan this. Thank god for FOSS.
Frankly I think this is inevitable - it's practically one of the laws of computing: any sufficiently complex system will ultimately require a Turing-complete language, regardless of its actual necessity.
See also:
"""Zawinski's Law states: "Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.""""
and
"""Greenspun's tenth rule of programming is an aphorism in computer programming and especially programming language circles that states:[1][2]
Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."""
(From the above I conclude that if you want to take over the computer world, you should implement a mail reader with an embedded Lisp.)
Perhaps a stupid question, but is there some reason I can't potentially fall back to recording my screen / audio in realtime and saving videos that way? yt-dlp is obviously far superior to this, but just thinking about what my fallback points are.
You definitely can, it's just 1) vastly slower, and 2) you have to recompress the decompressed video, which loses quality. It's therefore an option of last resort.
Most people want to be able to download 5 hours of video in the background in 5 minutes. Not wait 5 hours while their computer is unusable.
I wonder if it has to be a real computer, display, and camera, or if doing it with a "headless display" that is nonetheless being fed to a "video recorder" would work...
Understood and agreed. I mostly don't even care about keeping videos from Youtube, but some of the most amazing music performances in the world are trapped on Youtube, and in many cases there is no obvious way to purchase or download them elsewhere.
In current times, yes: you can basically record your screen with whatever tool you fancy.
But even now, many video sites employ DRM, and only the weakest levels of DRM streams can be recorded off the screen. If they crank that up, which is perfectly possible today, the screen recording only shows a blank rectangle, because the encryption goes from the server to the video card. At that stage, "HDMI recorders" are the next level - they capture the audio/video stream from the HDMI cable output, for example.
Even further, there is technology to encrypt from server to screen. I'm not sure on the rollout on this one. I think we have a long time until this is implemented, and even then, I'm sure we will have the ability to buy screens that fake the encryption, and then let us record the signal. And, for mainstream media, there will be pirated copies until the end of time I think.
> Even further, there is technology to encrypt from server to screen. I'm not sure on the rollout on this one. I think we have a long time until this is implemented, and even then, I'm sure we will have the ability to buy screens that fake the encryption, and then let us record the signal. And, for mainstream media, there will be pirated copies until the end of time I think.
In the end, nobody will ever prevent people from pointing a camera at a screen. At least until they can implant a decryption device in our brains, the stuff coming out of the screen can be recorded. Like in the past, when people used to record movies at the cinema with cameras and upload them on eMule. Sure, it would not be super high quality, but considering it's free compared to something you pay for, who cares?
To me DRM is just a lost battle: while you can make it inconvenient to copy media, people will always find a way. We used to pirate in the VHS era and that was not convenient, since you would have needed two VCRs (quite expensive back then!) and copying took the length of the whole movie.
With browser and hardware support for DRM, they could make it impossible if they wanted to. Basically the OS / recording software sees a blank screen.
I was on live TV recently and wanted to keep a recording for myself that wasn't just filming the screen with my phone. I first tried screen recording while watching the show in my browser on their streaming service. Got a black video. Then I tried their phone app; got a black video. Finally, on my phone but through the web page, they enabled playback without DRM and I could record and store it. When more devices support DRM they will probably get rid of that fallback as well.
I imagine there would be ways around this. I know from personal experience that Kazam screen recorder on Firefox on Ubuntu can record anything and everything, including YouTube as well as DRM content on Disney+ and Prime Video.
I bet that if Google really wanted to, it could force Firefox into line, but I imagine that actually preventing screen recording would require compliance at the OS level too, and I don't think that even Google could demand changes like that to Linux. The best they could do is block Linux clients from YouTube, but user agent spoofing or emulation could probably circumvent that.
And even if Google does somehow manage to entirely block screen recording, we can always exploit the analog loophole.
There is always the analog hole. Even HDCP can be worked around. Even if they do manage to stop all computers from doing direct bit copies, there are still old things such as Kinescopes which they used to use to broadcast television from film. There of course is a quality loss, but that's kind of irrelevant to the point.
I don't know if YouTube cares, but other websites do attempt to block this as well. They will either blank your screen or prevent playback if you try to screen record, even encrypting to prevent recording the HDMI/DP output.
Am I right in saying they need to be able to run JS code from YouTube to be able to get the download URL at this point? Deliberate obfuscation, I'm guessing? I guess Deno makes the code fairly safe to execute, and the chances of YouTube daring to download spyware onto your machine are minimal :-)
I do not understand why Google doesn't just explicitly permit people who pay for premium to use yt-dlp or other tools to watch YouTube however the fuck they want. Put that in your terms, Google -- so people aren't afraid they'll lose their GMail because they wanted to watch a video -- and you'll get more paying customers...
It's quite worrying. A sizeable chunk of cultural and educational material produced in the last decade is in control of greedy bastards who will never have enough. Unfortunately, downloading the video data is only part of it. Even if we shared it all on BitTorrent it's nowhere near as useful without the index and metadata.
What are you talking about? It's in control of the creators. YT doesn't get exclusive copyright on user's content. Those creators can upload wherever they want.
And YT isn't "greedy bastards". They provide a valuable service, for free, that is extremely expensive to run. Do you think YT ought to be government-funded or a charity or something?
Benn Jordan made a pretty compelling video on this topic, arguing that the existing copyright system and artifacts of it are actually not that great and a potential government system might actually be better: https://www.youtube.com/watch?v=PJSTFzhs1O4
I will say that is something I would not have considered reasonable prior to watching his video.
Knock on wood not to jinx it, but I wonder why this manages to stay up on github when eg paywall-busting chrome extensions get banned from there (because of DMCA takedowns I guess?)
there was already an attempt to take it down back in 2020/2021 [0]. The DMCA claim's main argument was that ytdl was circumventing Technical Protection Measures (TPMs) in order to access the content. Thanks to a letter from the EFF [1] which explains how ytdl accesses content in the same way that a browser does (i.e. it does not circumvent anything such as DRM), GitHub rejected the takedown.
this is also why ytdl has stood firm in saying they will never attempt to be compatible with anything protected by DRM.
Someday it will have to launch a VM with a copy of Chrome installed and use an AI model to operate the VM to make it look like a human, then use frame capture inside the VM to record the video and transcode it.
Seems it's already in Arch's repositories, and it seems to work; just add another flag to the invocation:
It downloads a solver at runtime; took maybe half a second in total. Downloads seem to start way faster than before.
It would be great if we could download the solver manually with a separate command, before running the download command. I'm probably not alone in running yt-dlp in a restricted environment, and being able to package it up together with the solver before runtime would let me avoid loosening the restrictions for that environment. Not a huge issue though; happy in general that the start of downloads seems much faster now.
Glad to hear it’s faster now!
YouTube barely works in a full-on browser these days, props to the team that keeps it accessible via a Python script!
I use YouTube on a daily basis. I haven't seen any of these problems.
Do you use Firefox on Linux, too? 4K Videos freeze so often for me, I don't even try watching them online, and always just download them with yt-dlp. It doesn't bother me enough to give Chrome a try, but maybe that'd make a difference.
I use YouTube daily in safari and edge, this is complete hyperbole.
I'm n=1 using Chromium, but the only problem I have is the video losing focus when maximizing, meaning l/r/space don't work for video controls anymore. It started about when the liquid-glass-styled interface did.
I use YouTube in a browser (Brave) almost everyday. Works great for me.
> YouTube barely works in a full-on browser these days
Agreed. Shorts about half the time don't display comments, and the back button breaks in mysterious ways. And I use Chrome on both Intel and M-series macOS machines, so the best in class there is, but my Windows Chrome doesn't fare much better. And adblock ain't at fault; I pay for Premium.
And that's just the technical side. The content side is even worse: comment sections are overrun by bots, not to mention the countless AI slop and content thieves. And for fuck's sake, I get that high-class YouTubers have to put a lot of effort into making videos, but why YouTube doesn't step in and put clear regulations on sponsorship blocks is beyond me. BetterHelp, AG1, air up, NordVPN (and VPNs in general) should be outright banned.
And the ads, for those who aren't paying for Premium, are also just fucked up. Fake game ads (Kingshot, which stole sound effects from the original indie Thronefall ...) galore.
Google makes money here; they could actually go and hire a few people to vet ads and police the large YouTubers and their sponsors.
> It would be great if we could download the solver manually with a separate command
Download a random video and then copy ejs from yt-dlp’s cache directory (I think it’s in /home/username/.cache)
> being able to package it up together with the solver
`make yt-dlp-extra`
What environment are you using that:
- Has access to Youtube
- Can run Python code
- Can’t run JS code
If the concern is security, it sounds like the team went to great lengths to ensure the JS was sandboxed (as long as you’re using Deno).
If you’re using some sort of weird OS or architecture that Deno/Node doesn’t support, you might consider QuickJS, which is written in pure C and should work on anything. (Although it will be a lot slower; I’m not clear just how slow.) Admittedly, you then lose the sandboxing, although IMO it seems like it should be safe to trust code being served by Google on the official Youtube domain. (You don’t have to trust Google in general to trust that they won’t serve you actual malware.)
> What environment are you using that: - Has access to Youtube - Can run Python code - Can’t run JS code
Nothing specific, just tend to run tools in restricted VMs where things are whitelisted and it's pretty much as locked down as it can be. It can run whatever I want it to run, including JS, and as the logs in my previous comment shows, it is in fact running both Python and JS, and has access to YouTube, otherwise it wouldn't have worked :)
I tend to have a rule of "least possible privileges", so most stuff I run like that has to be "prepped", basically, especially things that sometimes make network requests (updating the solver in this case). It's just a matter of packaging it before I run it, so it's not the end of the world.
No weird OS or architecture here, just good ol' Linux.
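In case it helps anyone with a similar setup, the prep step amounts to roughly this sketch. Note the cache layout here is my guess for illustration, not a documented yt-dlp path; check where your version actually caches the solver before locking things down:

```python
# Sketch of "prep before lockdown": fetch the solver while the network is
# open, then ship it inside the VM image so nothing is downloaded at runtime.
from pathlib import Path
import shutil
import tempfile

def seed_cache(solver_src: Path, cache_root: Path) -> Path:
    """Copy a pre-downloaded solver into the cache dir the tool will read.
    The 'yt-dlp/ejs' subpath is an assumption for this example."""
    dest = cache_root / "yt-dlp" / "ejs" / solver_src.name
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(solver_src, dest)
    return dest

# Example: stage a previously downloaded solver file into a fresh cache.
tmp = Path(tempfile.mkdtemp())
src = tmp / "yt.solver.lib.min.js"
src.write_text("// solver payload downloaded ahead of time")
staged = seed_cache(src, tmp / "cache")
print(staged.exists())  # True
```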
> IMO it seems like it should be safe to trust code being served by Google on the official Youtube domain
The JS script being downloaded is from the yt-dlp GitHub organization (https://github.com/yt-dlp/ejs/releases/download/0.3.1/yt.sol...), not from Google or any websites, FWIW.
> What environment are you using that: - Has access to Youtube - Can run Python code - Can’t run JS code
They didn't say “can't run JS code”, but that from that location the solver could not currently be downloaded. It could be that it is an IPv6-only environment (IIRC YouTube supports IPv6 but GitHub does not), or just that all external sites must be assessed before being whitelisted (I'm not sure why YouTube would be but not GitHub, but it is certainly possible).
> Although it will be a lot slower, I’m not clear just how slow.
Around 30-50x slower than V8 (node/deno).
I've been recently benchmarking a lot of different engines: https://ivankra.github.io/javascript-zoo/
https://aur.archlinux.org/packages/yt-dlp-ejs looks like what you need?
No, don't need anything extra, `extra/yt-dlp` works perfectly fine and is enough. You'll get a warning if you run it without the flag:
Providing one of the flags lets it automatically get what it needs. No need for AUR packages :)
Edit: Maybe I misunderstood, now when I re-read your post. You meant it'll prevent the automatic download at runtime perhaps? That sounds about right if so.
I manually installed Deno via Chocolatey, but I also installed yt-dlp from choco, so it's on v2025.10.22
yt-dlp --cookies-from-browser firefox --remote-components ejs:github -f "bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best" 'https://www.youtube.com/watch?v=XXX'
It was just updated again today, and at least for me, when you install it using the package name "yt-dlp[default]", it already downloads both deno and the solver automatically.
I remember when QuickTime came out in 1991 and it was obvious to everyone that video should be copied, pasted and saved like any arbitrary data.
It's absolutely insane to me how bad the user experience is with video nowadays, even video that's not encumbered by DRM or complex JavaScript clients.
> It's absolutely insane to me how bad the user experience is with video nowadays, even video that's not encumbered by DRM or complex JavaScript clients.
The video experience for typical video files is great these days compared to the past. I think you may be viewing the past through rose-colored glasses. For years it was a pain to deal with video because you had to navigate third-party players (remember RealPlayer?), Flash plugins, and sketchy codec-pack installs from adware-infested download sites. If you were tech support for friends and family during that era, it was common to have to remove adware, spyware, and other unwanted programs after someone went down the rabbit hole of trying to install software to watch some video they found.
The modern situation where your OS comes with software to play common files or you can install VLC and play anything is infinitely better than the past experience with local video.
Local video could be a nightmare in the 90s. I remember those days. I remember when it was revolutionary that Microsoft's Media Player came out, and you could use one player for several formats, rather than each video format requiring its own (often buggy) player. Getting the right codecs was still a chore, though.
MS Media Player eventually fell behind the curve, but eventually we got VLC and things got great.
3 replies →
That's if you weren't using a Mac
1 reply →
I'm absolutely not viewing the past through rose colored glasses. RealPlayer was a dumpster fire, but that came later.
I could hold shift and drag on the timeline to select, copy, then paste it into a document or another video. I can't do that with VLC today. Apple removed the feature in later releases too.
16 replies →
> For years it was a pain to deal with video because you had to navigate third party players (remember Real Player?), Flash plugins, and sketchy codec pack installs from adware infested download sites.
How is this any worse than what YouTube does now? Real Player and flash never made you watch ads.
2 replies →
RealPlayer was one of the first real video players; it wasn't a pain, it was a genuine add-on.
Flash, too, came built into almost every browser.
By the time both had gone away, built-in HTML video was here. And of course there were players like JW Player that played video fine.
Today, most browsers have most codecs.
2 replies →
1991 was the vibrant, exciting, crazy "adolescence" of the PC age and well into the period where it was cool to have a desktop PC and really learn about it.
Phones are dominant now and have passed the PC generation by - in number, not capability. The concept of copy/paste/save for arbitrary data lives on for the non-tech masses only in the form of screenshots and screen recording features.
The thing that stands out to me looking back over a few decades is how much of consumer/public computing is exploring the latest novel thing and companies trying to cash in on it. Multimedia was the buzzword aeons ago, but was a gradual thing with increasing color depth and resolution, video, 3D rendering, storage capabilities for local playback, sound going from basic built in speaker beeps to surround and spatial processing. Similar with the internet from modems to broadband to being almost ubiquitously available on mobile. Or stereoscopic 3D, or VR, or touchscreens, or various input devices.
Adolescence is a very good word to encompass it, lots of awkward experiments trying to make the latest thing stick along with some of them getting discarded along the way when we grow out of them, they turn out not to be (broadly) useful or fashion moves on. What I wonder about is if the personal computer has hit maturity now and we're past that experimental phase, for most people it's an appliance. Obviously you can still get PCs and treat them as a workstation to dive into whatever you're enthusiastic about but you need to specifically go out and pursue that, where the ecosystem might be lacking is a bridge between the device most have as their personal computer (phone/tablet) and something that'll introduce them to other areas.
> The concept of copy/paste/save for arbitrary data lives on for the non-tech masses only in the form of screenshots and screen recording features.
When it's not impeded by DRM, that is
Depending on where personal/portable AI devices go, phones might be significantly different or not exist in 10 years as they do today.
There might be a resurgence of some kind of device like a PC.
Seeing iPadOS gain desktop features, and MacOS starting to adopt more and more iPadOS type features clearly shows the desktop, laptop and tablet experiences will be merged at some point by Apple at least.
1 reply →
"Fitting into my pocket so I can use it in line at the post office" is a capability that desktop PCs have yet to manage to achieve.
13 replies →
Long press -> save image/video is perfectly supported on a phone; it's just content-distribution platforms that arbitrarily restrict it.
3 replies →
A specific issue with video data is that it's much denser: the same concept in video takes up far more bytes than in text or images. Hosting is therefore more expensive, so fewer people host, and the ones that do (e.g. YouTube) expect revenue. Furthermore, because videos are so large, people want to stream them rather than download them up front, which means hosts need not just storage but reliable bandwidth.
Even then, there are a few competitors to YouTube like Nebula, PeerTube, and Odysee. But Nebula requires a subscription and PeerTube and Odysee have worse quality, because good video hosting and streaming is expensive.
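To put rough numbers on "denser", here's a back-of-envelope comparison. All figures are assumed for illustration (a typical article size, JPEG size, and 1080p bitrate), not measurements:

```python
# Back-of-envelope on how much denser video is than text or images.
article_bytes = 5_000                 # a ~1000-word text article
image_bytes = 200_000                 # one reasonably compressed JPEG
video_bitrate_bps = 5_000_000         # ~1080p H.264 stream
video_seconds = 10 * 60               # a 10-minute video
video_bytes = video_bitrate_bps // 8 * video_seconds

print(video_bytes)                    # 375000000 (~375 MB)
print(video_bytes // article_bytes)   # 75000x the article
print(video_bytes // image_bytes)     # 1875x the image
```

So serving one mid-length video costs as much bandwidth as tens of thousands of article reads, which is why hosting economics look so different.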
The real problem is that YouTube built a model where the platform, not the creators, controls the money flow. They could have charged creators directly for hosting and left monetisation up to them, but by inserting themselves as the middleman, they gained leverage and authority over content itself. The "cost of hosting" is just the technical excuse for such centralisation.
Back then, the focus was on optimising for the user. Now, however, companies prioritise their own interests over the user.
I think companies always prioritized their own interests.
A company can increase its profits (1) by improving their products and services, so that they'll get more customers or customers willing to pay more, or (2) by increasing how much of their revenue is profit by (e.g.) cutting corners on quality or raising prices or selling customers' personal information to third parties.
Either of those can work. Yes, a noble idealistic company might choose #1 over #2 out of virtue, but I think that if most companies picked #1 in the past it's because they thought they'd get richer that way.
I think what's happened is that for some reason #2 has become easier or more profitable, relative to #1, over time. Or maybe it used not to be so clearly understood that #2 was a live option, and #1 seemed safer, but now everyone knows that you can get away with #2 so they do that.
We even have a name for this now…
https://en.wikipedia.org/wiki/Enshittification?wprov=sfti1
Indeed, the good old days when "optimizing for the user" got us... Windows 3.1 (released April 6, 1992, ref https://en.wikipedia.org/wiki/List_of_Microsoft_Windows_vers...) or the first version of Linux, which I did not have the honor of using, but I can imagine how user-friendly it was, considering what I ended up using a couple of years later (https://en.wikipedia.org/wiki/History_of_Linux)
/s
2 replies →
Experience with video is excellent for most people. All the complexity is hidden from the end user, unless you are trying to hack something. In the 1990s, streaming effectively didn't exist because people didn't have enough bandwidth (it was mostly dial-up), and there was very little legal offering; the little that existed was terrible. Home video was limited too, as few people knew how to make video files suitable for online distribution.
Piracy did pretty well, but that's because the legal experience was so terrible. But even then, you had to download obscure players and codec packs, and sourcing wasn't as easy as it is now. For reference VLC and BitTorrent released in 2001.
I'd say the user experience steadily improved and peaked in the mid-2010s. I think it is worse now, but back then it was terrible, for different reasons.
I was just reading how ATSC 3 (over the air TV) is kind of stalling because they added DRM fairly late in the roll out. Several people bought receivers that are now incompatible.
DRM being forced into freeview TV seems like a contradiction in terms, and yet here we are.
Also, I'm not sure what the actual numbers are, but my impression is that a significant portion of OTA enthusiasts are feeding their OTA signals into a network connected tuner (HDHomeRun, Tablo, AirTV, etc.) and DRM kills all of these.
Yes, I see YouTube going deep into enshittification. On my MacBook this morning, with Firefox Developer Edition, it just stopped working. I don't know if it's related to my trying to install an extension to "force H264" on my Ubuntu box. On the latter, the fans have lately started going crazy as soon as I open a single YouTube tab, and a quick search led me there.
Actually, at this point the only things that make good old aMule a bit inconvenient, by my own expectations, are:
- it's missing snippet previews
- it doesn't have as many resources on every topic out there.
It’s not just you. My Firefox, with no extensions, has struggled on YouTube these past weeks.
Sometimes I can’t even click on the front page, sometimes when I open a video it refuses to play.
I don’t know what’s up, but it works in chrome.
2 replies →
I've got a fresh install of endeavouros/arch and yt is horribly slow now. The upside is I've reduced my usage of the site.
Oh and it's not working at all on my desktop with the same setup, it's telling me to disable ad block. I'd rather give up yt.
YouTube should have been a distributed p2p system with local storage of your favorite videos. A man can dream...
It didn't work because of asymmetric upload/download speeds (which are now a thing of the past; at the time, though, it gave YouTube an early advantage).
4 replies →
> It's absolutely insane to me how bad the user experience is with video nowadays
Has nothing to do with video per se. Normal embeds, using the standard `<video>` element and no unnecessary JS nonsense, still work the way they always did: right-click the video and download it; it's a media element like any other.
The reason why user experience is going to shite, is because turbocapitalism went to work on what was once The Internet, and is trying to turn it into a paywalled profit-machine.
I've always found it insane how much software development web sites are willing to undertake, just to avoid using the standard video, audio, and img HTML elements. It's almost hilarious how over engineered everything is, just so they can 'protect' things they are ultimately publishing on the open web.
Plain <video> elements are easy to download, but not great for streaming, which is what most people are doing nowadays. Much of the JS complexity that gets layered on top is to facilitate adaptive bitrate selection and efficient seeking, and the former is especially important for users on crappier internet connections.
I'm not a fan of how much JS is required to make all that work though, especially given the vast majority of sites are just using one of two standards, HLS or DASH. Ideally the browsers would have those standards built-in so plain <video> elements can handle them (I think Safari is the only one which does that, and they only do HLS).
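For reference, an HLS master playlist is just a small text manifest listing variant streams at different bitrates; the player measures throughput and switches between variants mid-stream. A minimal sketch (the paths and bitrates are made up):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
720p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
1080p/index.m3u8
```

Each variant playlist then lists short media segments, which is what makes efficient seeking and mid-stream quality switching possible.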
5 replies →
The standard video element is really nice:
https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
I have used it on a couple of client sites, and it works really well.
You can even add a thumbnail that shows before the video starts downloading/playing (the poster attribute). :-)
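For anyone who hasn't tried it, a minimal sketch (the file names are hypothetical):

```html
<!-- poster shows until playback begins; controls enables the native UI -->
<video controls preload="metadata" poster="thumb.jpg" width="640">
  <source src="talk.webm" type="video/webm">
  <source src="talk.mp4" type="video/mp4">
  Your browser doesn't support embedded video.
</video>
```

The browser picks the first `<source>` it can decode, so listing WebM before MP4 serves the smaller file where it's supported, with MP4 as the fallback.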
> still work the same way they did in the 90s: Right click the video and download it, it's a media element like any other.
I’m so confused reading these comments. Did everyone forget RealPlayer? Flash videos? All of the other nonsense we had to deal with to watch video on the internet?
1 reply →
Technically, you can profit off of ad revenue and subscriptions without exploiting the labour of your workers, so in this particular case it has nothing to do with the economic regime. Enshittification is its own thing.
The problem with a standard video element is that while it's mostly nice for the user, it tends to be pretty bad for the server operator. There's a ton of problems with browser video, beginning pretty much entirely with "what's the codec you're using". It sounds easy, but the unfortunate reality is that there's a billion different video codecs (and a heavy use of Hyrum's law/spec abuse on the codecs) and a browser only supports a tiny subset of them. Hosting video already at a basis requires transcoding the video to a different storage format; unlike a normal video file you can't just feed it to VLC and get playback, you're dealing with the terrible browser ecosystem.
Then once you've found a codec, the other problem immediately rears its head: video compression is pretty bad if you want a widely supported codec, if for no other reason than that people use non-mainstream browsers that can be years out of date. So you're now dealing with massive amounts of storage and bandwidth effectively eaten up by duplicated files, and that isn't cheap either. To give an estimate: under most VPS providers that aren't hyperscalers, a plain-text document can be served to a couple million users without having to think about your bandwidth fees. Images are bigger, but not by enough to worry about. 20 minutes of 1080p video is about 500 MB under a well-made codec that doesn't mangle the video beyond belief. That video is going to reach at most 40,000 people before you burn through 20 terabytes of bandwidth (the Hetzner default amount), and in reality probably fewer, because some people might rewatch the thing. Hosting video is the point where your bandwidth bill overtakes your storage bill.
And that's before we get into other expected niceties like scrolling through a video while it's playing. Modern video players (the "JS nonsense" ones) can both buffer a video and jump to any point in the video, even if it's outside the buffer. That's not a guarantee with the HTML video element; your browser is probably just going to keep quietly downloading the file while you're watching it (eating into server operator cost) and scrolling ahead in the video will just freeze the output until it's done downloading up until that point.
It's easy to claim hosting video is simple, when in practice it's probably the single worst thing on the internet (well that and running your own mailserver, but that's not only because of technical difficulties). Part of YouTube being bad is just hyper capitalism, sure, but the more complicated techniques like HLS/DASH pretty much entirely exist because hosting video is so expensive and "preventing your bandwidth bill from exploding" is really important. That's also why there's no real competition to YouTube; the metrics of hosting video only make sense if you have a Google amount of money and datacenters to throw at the problem, or don't care about your finances in the first place.
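The bandwidth arithmetic above can be checked directly, using the same assumed figures as the comment (500 MB per copy, a 20 TB monthly cap):

```python
# 20 min of 1080p at ~500 MB per copy vs. a 20 TB monthly traffic cap
video_mb = 500
cap_tb = 20
cap_mb = cap_tb * 1_000_000           # 1 TB = 1,000,000 MB (decimal units)
full_views = cap_mb // video_mb
print(full_views)                     # 40000 complete watches before the cap
```

And that's the best case: partial watches, rewatches, and seeking-triggered re-downloads all eat into the same cap.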
1 reply →
Around 2012?, I had some extension that forced YouTube videos to play with Quicktime in-browser, which was leaner. Original file, no conversion.
> I remember when QuickTime came out in 1991 and it was obvious to everyone that video should be copied, pasted and saved like any arbitrary data.
I remember when VCRs came out and everyone would take TV shows and share them with their friends.
By now we should be able to share video on SD Cards that just pop into a slot on the top of the TV, but the electronics companies are now also the content companies, so they don't want to.
You can plug a USB drive with videos on into a lot of TVs I've encountered over the years. Due to limited container/codec support I rarely made use of it though.
Remember RealPlayer? Grainy 128 x 128 streamed videos in 1998!
Was RealPlayer really that horrible, or was it just trying to do streaming media on an extremely low-bandwidth connection without hardware acceleration or sophisticated codecs? I only really used it with a 28.8K modem, Netscape, and Windows 95. The experience was poor, but the experience of viewing moderately sized images wasn't great either. I remember encountering MPEG decoder add-in cards at the time (which nobody used), although I suspect video cards started to add these features at some point during the 1990s.
1 reply →
I never bothered trying to stream anything, but I do remember downloading 20mb episodes of Naruto in surprisingly good quality due to the .rmvb format.
1 reply →
> Remember RealPlayer? Grainy 128 x 128 streamed videos in 1998!
I remember when someone slapped a big "Buffering" sign over the Real Networks logo on the company's building in Seattle.
A media business is predicated on exclusive rights over their media. The entire notion of media being freely copied and saved is contrary to their business models. I think there's a healthy debate to be had over whether those models are entitled to exist and how much harm to consumers is tolerable, but it's not really obvious how to create a business that deals in media without some kind of protection over the copying and distribution of that media.
I think what breaks computer peoples' brains a bit is the idea that the bytes flying around networks aren't just bytes, they represent information that society has granted individuals or businesses the right to control and the fact technology doesn't treat any bytes special is a problem when society wants to regulate the rights over that information.
I have worked on computer systems for media organizations and they have a very different view of intellectual property than the average programmer or technologist. The people I find the most militant about protecting their rights are the small guys, because they can't afford to sue a pediatrician for an Elsa mural or something.
I use yt-dlp (and back then youtube-dl) all the time to archive my liked videos. Started back in around 2010, now I have tens of thousands of videos saved. Storage is cheap and a huge percent of them are not available anymore on the site.
I also save temporary videos removed after a time for example NHK honbasho sumo highlights which are only available for a month or so then they permanently remove them.
You are a digital hoarder. I have taken so many pics that I wouldn't even bother to look back at them (do we ever?), but Google Memories is really a neat feature; it resurfaces memories. I think you should run a similar service to resurface your favourite videos, like having them on speed dial.
I look at my pictures regularly. They are on my phone, mostly I scroll back 1-3 months to refresh my memory, and I often go further back to check on how living things were around me, and to what my general surrounding looked like. I also like to look at game screenshots from time to time. Funny to see how I lived life back then.
The Memories feature sounds cool. I have something a bit similar on my Nextcloud, "On this day", that shows an image dated on the same day in previous years, and clicking it brings up more pictures from its general time. I love it! So many memories.
I'm an amateur photographer. Lately, I've taken to making curated collections from my "slush feeds". Meaning, going through a particular trip, time period, moment and grabbing the best photos, and parceling them out to a dedicated album. Makes for a much better experience and fun to share with friends/family.
I have an e-ink photo frame on the wall that switches picture once every 24h, picking one of my pictures of the last 10+ years by random. So every single one of my tens of thousands of pictures gets a real chance to be seen at least once during my lifetime :)
Often when I am bored I pick a random day in the past and look at where I was on that day and which pictures I took. Refreshing memories is a great idea but the low tech way is enough for me.
I compulsively take pictures of the sky; same, never to be looked at
7 replies →
Might sound stupid, but: differences between Google memories vs. Snapchat memories?
Also, my issue is that I would NEVER upload the photos I have on my hard drive due to privacy issues, but if I had a local model that could categorize photos and whatnot, that would be cool. I have over 10k screenshots / images. Many of them have text on them, so I'd probably need OCR.
> You are a digital hoarder.
Is this meant to be negative? Many videos I have watched on YouTube are now unavailable. I wish I had saved them, too, i.e. I wish I was a digital hoarder, too, but eh, no space for me.
2 replies →
I've seen photography compared to archery recently, and that comparison stuck with me.
As long as you enjoy the act of shooting, that is enough. Archers don't have to keep and look at old scoreboards/targets for the archery to have been enjoyable and worthwhile; it's the same with modern photography.
I routinely review my pics and vigorously delete all duplicates or poor quality images. It helps if you do this for 10-15 minutes every day. At least I'm able to find most of the pictures I remember I took, and I don't have to scroll through 1000 snaps of some particular sunset to do that.
I started after channels started removing their own videos because they either didn't think the videos were good enough or they had a mental break and deleted their channel. So good stuff just gone.
There was one instance where a prominent "doujin" musical artist got fingered as a thief. Away went all of their videos, except... he'd packaged them as something completely different from wherever he'd taken them from. One song in particular sucked to lose, because its sibling still exists as an "extended" upload. So, I can listen to the one any time, but the other, I simply know that it once existed, and that it might still exist somewhere else, just under a different title. I can't even remember how it went.
Or because someone else made them take them off. Or because they were deemed 'too dangerous'. Or worse.
Cody's lab removed a few of them and many others.
Some of the old YTPs were fantastic. They don't exist now.
Generations of talent & creativity just gone.
2 replies →
Anything you see on the Internet can be gone in a moment. If something is important to you, you must save it to guarantee you'll be able to see it again.
The problem then becomes organizing and resurfacing content, especially when it'll likely be outside the context you originally found it.
Wasn't expecting to see a fellow sumo hoarder on HN...there's dozens of us, dozens!
I was just lamenting last night that we can't watch some of Terutsuyoshi's amazing makuuchi bouts from about three(?) years ago. I wish I'd archived them.
Archive.org has it at least, everything from 2009 until 2023. But that also needs to be mirrored, because it can be taken down: https://archive.org/download/jasons-all-sumo-channel-archive...
What is your storage setup, do you have lots of hard drives, or does this go online somewhere?
How do you manage the archive? I mean the file hierarchy, structure, etc. I started archiving YouTube videos recently, now saving descriptions and other metadata too, but simply having them all in one directory doesn't seem like a good idea.
Do you have a cron job or something? I know it's probably trivial, but eh.
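One common pattern is a nightly cron job driving yt-dlp's `--download-archive` file, which records already-fetched IDs so reruns only grab new videos, plus an output template for the directory hierarchy. A minimal sketch, with assumed paths and a hypothetical playlist ID:

```sh
#!/bin/sh
# Called from cron, e.g. a crontab entry of: 0 3 * * * $HOME/bin/yt-archive.sh
# --download-archive skips anything already listed in archive.txt;
# -o builds a per-uploader, per-date hierarchy instead of one flat directory.
yt-dlp \
  --download-archive "$HOME/yt/archive.txt" \
  --write-description --write-info-json \
  -o "$HOME/yt/%(uploader)s/%(upload_date)s - %(title)s [%(id)s].%(ext)s" \
  "https://www.youtube.com/playlist?list=PLxxxxxxxx"
```

The `--write-info-json` sidecar keeps the full metadata next to each video, so the hierarchy stays searchable even after the video disappears from the site.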
Popular self-hosted solution: https://github.com/tubearchivist/tubearchivist
10 replies →
No! It would be easier, but I burned myself so many times with removed videos that I basically do it manually ASAP. Not a big deal once you have yt-dlp set up properly.
What percentage, in numbers?
Do you ever go back and actually watch those videos? Whenever I start to journal, track, or just document something, after some time I notice again and again that most of the value has already been created the moment I finish working on a specific entry. Even with something seemingly very important like medical records. Maybe the one exception I can think of is recordings of memories involving people close to you.
I have the same experience with journals, but the video archiving has actually come up a few times, still fairly rarely though. I think the difference is that you control the journal (and so rarely feel like you need its content), while the videos you're archiving are by default outside of your control and can be more easily lost.
I don’t think journaling is the same thing as hoarding pics/videos, though. Even if you never go back and read through old handwritten journals, the physical process of writing has mental effects that pics/videos do not. There’s also something therapeutic about slowing down and putting thought to paper. So to me the only similarity is that you might never look at it again; that doesn’t make them the same at all.
I actually do! I have a perpetual VLC playlist which plays those videos randomly if I need some background noise.
4 replies →
I would be interested in knowing as well. I've been watching YouTube since it first came out and can't remember any times where I saw something I thought I needed to actually download and save in case I wanted it in 10 years. 10,000+ videos is a lot of videos to just seemingly save.
1 reply →
Same here and my motivation was that some of my liked videos were randomly removed and it's pretty cool music I wanted to keep forever.
I made another script that adds the video thumbnail as album art and somehow tries to put the proper ID3 tags, it works like 90% of the time which is good enough for me.
Then I made another script that syncs it to my phone when I connect it.
So now I have unlimited music in my phone and I only have to click on "Like" to add more.
And yet, none of Google's 900k TOC genius engineers have thought of this as a feature ...
I doubt that it’s a nobody-thought-of-it situation; it’s more that management doesn’t want it, as it takes away the need for their own streaming offerings. The music industry doesn’t want it either, as there’d be no more royalties coming in. Can’t release an app that pisses off the industry.
> And yet, none of Google's 900k TOC genius engineers have thought of this as a feature ...
Isn’t that the YouTube Music app?
6 replies →
I have a script that calls out to a small LLM
etc., with some stripping of newlines and so on. It works well! They can often infer the correct answer even if it's not present in the title
1 reply →
In ten years time YouTube will be entirely inaccessible from the browser as the iPad kids generation are used to doomscrolling the tablet app and Google feels confident enough to cut off the aging demographic.
They’d need dedicated hardware to enforce any kind of effective DRM. Encrypted bitstream generated on the fly watchable only on L2 attested device.
>They’d need dedicated hardware to enforce any kind of effective DRM.
That's already here. Even random AliExpress tablets support Widevine L1 (i.e. the highest security level)
Maybe to stop the 0.01%. Switching to app-only, sign-in-only would get them pretty much all the way there.
They own the OS; with sign-in, integrity checks, and the inability to install anything Google doesn't want you to install, they could make it pretty much impossible for the vast majority of people to view videos on a device capable of capturing them. Combine that with a generation raised in sandboxes, and their content would be safe.
2 replies →
Netflix is already there for 4k streams
14 replies →
iOS can already attest to websites that they are running in unmodified Safari. https://developer.apple.com/news/?id=huqjyh7k
I guess that isn't quite enough to prevent screen recording but these devices also support DRM which does this.
Can you explain in simple terms what would prevent one from running the decryption programmatically posing as the end client?
6 replies →
I guess at that point we could do it the old fashioned way by pointing a camera at the screen. Or, I guess, a more professional approach based on external recording.
3 replies →
Which is why Windows 11 requires TPM.
3 replies →
The YouTube web app is so full of bugs it's almost unusable on a phone.
Comments also disappear regularly on all platforms...
I can only navigate to a video by long-pressing, copying the URL and pasting it into the URL bar; otherwise I get a meaningless "something went wrong" type error message. Mobile Safari, no content blockers, not logged into a Google account. After almost two decades of making the website worse, they finally succeeded in breaking "clicking a video". I wonder what the hotshots at Alphabet will manage to break next :o)
2 replies →
I only use the web app on my phone (via Firefox). It works well enough and I can play videos in the background and block ads.
Do you also get looping search results? I've also had it happen to the simple "videos" tab of a channel.
> Comments also disappear regularly on all platforms...
I don't believe that that's a bug. The disappearance depends a lot on the topic of those comments. It's very much deliberate censorship.
1 reply →
And the YouTube web interface is full of issues too. For example, livestreams have had intermittent memory leaks for months now, thought to be related to their chat implementation.
In the meanwhile, YouTube spends its effort on measures against yt-dlp, which don't actually stop yt-dlp.
What the fuck is wrong with Google corporate as of late.
2 replies →
Google is having a hard time conforming to their own javascript standards.
One constant about Google, they always bet on the web.
Until the profits tell them not to.
I think a lot of millennials and older Gen Z use YouTube in browsers. It has more and more alternative competitors too, like Bilibili in China.
Ooh thanks. If the 21st century is going to belong to China, then BiliBili, along with v2ex.com, is gonna need to get added to my doomscrolling itinerary.
They'll never leave money on the table like that. The older demographic are the only ones that can buy things.
Pffft, and good riddance, comrade! Just think about native application and native performance, great native animations and native experience (and native ads, of course)! We won't have this god-awful Web (that propelled modern tech world in the first place) anymore, we can finally have personal vendetta against awful JS and DOM. No more interoperability, no more leverage against corpos, just glorious proprietary enclaves where local tyrant can do anything they want!
> No more interoperability
> no more leverage against corpos
> just glorious proprietary enclaves where local tyrant can do anything they want!
These are all literally consequences of the web btw, as are things like attestation in consumer hardware.
1 reply →
Think of iOS. You can basically use just 1 programming stack on iOS devices: Swift/Objective-C. You can't have JIT except for the JIT approved by the Apple Gods.
The biggest hack around this is React Native, which barged in through sheer JavaScript and web dominance elsewhere, and even that has a ton of problems. Plus I'm fairly sure the React Native JS only runs in the JIT approved by the Apple Gods anyway.
Otherwise, we're stuck in the old days of compiled languages: C/C++ (they can't really get rid of these due to games, and they have tried... Apple generally hates/tolerates games but money is money). Rust works decently from what I hear. Microsoft bought Mono/Xamarin and that also sort of works.
But basically nothing else is at the level of quality and polish - especially in terms of deployment - as desktops, if you want to build an app in say, Python. Or Java. Or Ruby. Or whatever other language in which people write desktop apps.
And we're at a point where mobile computing power is probably 20x that of desktops available in 2007. The only factor that is holding us back is battery life, and that's only because phone manufacturers manufacture demand by pushing for ever slimmer phones. Plus we have tons of very promising battery techs very close to increasing battery capacities by 20-50%.
This is obviously not plausible. They're never going to shut off browser access on people's laptops. Watching YT at work is a major thing.
I have to assume you're joking, but I honestly can't figure out what point you're even trying to make. Do you think it's surprising that an ad-supported site has anti-scraping/anti-downloading mechanisms? YouTube isn't a charity, it's not Wikipedia.
They can't shut off browser access, but they surely can kill all non-Chromium browsers.
Not to mention all of the iframe embeds. I'd argue it's helped YouTube become the de facto go-to platform for corporate videos. Yeah, there are other solutions, but the number of corp sites that just toss videos on YouTube is insane.
I don't think it's such a wild possibility that more and more jobs will be doable on locked-down tablets and smartphones while fewer will be done on laptops and desktops. We are already seeing it at the personal level: people are entirely forgoing personal computers and using mobile devices exclusively. The amount isn't huge (like 10 or 15% in the US, IIRC?) but 10 years ago that was unthinkable IMO.
I was reading a study recently that claimed Gen Z is the first generation where tech literacy has actually dropped. And I don't blame them! When you don't have to troubleshoot things and most of your technology "just works" out of the box compared to 20 or even 10 years ago, then you just don't need to know how to work under the hood as much, and you don't need a fully fledged PC. You can simply download an app, and generally it will just take care of whatever it is you need with a few more taps. Similar to how I am pretty worthless when it comes to working on a car, whereas my parents' generation could all change their own oil and work on a carburetor (part of this is also that technology has gotten more complicated and locked down, including cars, but you get my point).
The point of all this is I could definitely see a world where using a desktop/laptop computer starts becoming a more fringe choice or specific to certain industries. Or perhaps they become strictly “work” tools for heavy lifting while mobile devices are for everything else. In that world many companies will simply go “well over 90% of our users are only using the app and the desktop has become a pain in the ass to support as it continues to trend downwards so…why bother?”
Who knows the future? Some new piece of hardware could come out in 10 years and all of this becomes irrelevant. But I could see a world where devices in our hands are the norm and the large device on the desk becomes more of a thing of the past for a larger percentage of the population.
>Watching YT at work is a major thing.
Where are these jobs where I can get paid to watch YouTube?
I hope they do that, yes, really.
Because it will mean a major shift to open-source and community solutions, where creators are paid directly by their viewers.
I have NO problem whatsoever paying content creators directly.
But I have a HUGE problem paying big corpos. It's ridiculous that we pay the same price for Netflix as people in the US: for you it's cheaper than a coffee, while for us, if you compare median salaries, it's 5-10x MORE expensive. (Cancelled every streaming platform a year ago, as did all of my friends; cloud seedbox here we go.) And I don't even want to mention the agenda Netflix wants to push (e.g. The Witcher).
That's why piracy is so frequent here in a small EU country :) It's also legal or in a grey area, because nobody enforces it, and copyright companies are unable to enforce it if you don't make money from sharing. (Yes, you don't even need to use a VPN with torrents.)
> Because it will mean a major shift to open-source and community solutions, where creators are paid directly by their viewers.
That’s an unrealistic nerd dream. People haven’t moved off of closed social networks such as Facebook and Instagram, and haven’t flocked to creator-owned platforms such as Nebula. The general public, i.e. the majority of people, will eat whatever Google, Meta, et al feed them. No matter how bad things get, too few people abandon those platforms in favour of something more open.
I'm sorry but this sounds hollow. Creators are specifically choosing to upload their content to YouTube. They have elected "big corpos" to handle payment for them.
You are not standing up for them by pirating their stuff from YouTube.
If you have a problem with it, it is on you to stop using YouTube to view their content. You did not gain a moral right to pirate their stuff just because you don't like the deal.
It's not YouTube though, but a downloader :)
"yt-dlp is a feature-rich command-line audio/video downloader with support for thousands of sites. The project is a fork of youtube-dl based on the now inactive youtube-dlc."
I guess the point was that yt-dlp is only possible because of the mandatory protocols you need in the browser. Moving to a native app makes it much easier to prevent downloading and deny access to the unencrypted content.
Doesn't matter, yt-dlp looks like a browser to youtube. They can put authorization/encryption in an app that can't be done in a webpage. By killing browsers they gain control.
They know that. yt-dlp uses browser-like access to download.
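A rough sketch of what "browser-like" means in practice: ordinary HTTPS requests with browser-style headers, nothing privileged. The User-Agent string below is illustrative, not what yt-dlp actually sends.

```python
# yt-dlp's access is, at its core, the same kind of HTTPS request a
# browser makes. Header values here are made up for illustration.
import urllib.request

req = urllib.request.Request(
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    headers={
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:128.0) Gecko/20100101 Firefox/128.0",
        "Accept-Language": "en-US,en;q=0.9",
    },
)
# urllib.request.urlopen(req) would return the same watch-page HTML a
# browser receives; the player config is then parsed out of that page.
print(req.get_header("User-agent"))
```

Since the request is indistinguishable from a browser's at the protocol level, the only way to lock it out is to move the handshake somewhere a generic HTTP client can't follow, which is exactly what an app-only scheme would allow.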
Scraping sucks. Imagine a broken API with new breakages every few weeks. Now imagine the provider hates you. Shout out to the team for what they do.
> Now imagine the provider hates you.
That’s the best part.
Exactly. I could never maintain yt-dlp. Must be exhausting.
From https://github.com/yt-dlp/yt-dlp/issues/14404
> What about Selenium or a headless browser solution?
>> The yt-dlp maintainers have no interest in doing this except as a measure of last resort. A headless browser solution would be an admission of defeat and would go against the spirit of this project.
Recently, YouTube seems more and more confrontational with its users: from outright blocking adblockers, which has no bearing on youtube's service, to automatically scraping creators' content for AI training, and now anything API-related. They're very much aware that there is no real competition, so they're taking full advantage of it, at the expense of the user experience. But these days, large companies simply don't suffer from a bad customer experience anymore.
> at the expense of the user experience. But these days, large companies simply don't suffer from a bad customer experience anymore.
This is my personal opinion. They're still affected by customer satisfaction and they're still driven by market forces. It's just that you and I are not their customers. It's not even the YT Premium customers. Google is, and always has been, an ad service company, and their primary customers have always been the big advertisers. And they do care about their experience. For example, they go overboard to identify the unique views of each ad.
Meanwhile the rest of us - those of us who don't pay, those who subscribe and even the content creators - are their captive resources whose creativity and attention they sell to the advertisers. Accordingly, they treat us like cattle, with poor quality support that they can't be bothered about. This is visible across their product lineup from YouTube and gmail to workspace. You can expect to be demonetized or locked out of your account and hung out to dry without any recourse if your account gets flagged by mistake or falsely suspected of politics that they don't like. Even in the best case, you can only hope to raise a stink on social media and pray that it catches the attention of someone over there.
Their advantage is that the vast majority of us choose to be their slaves, despite this abuse. Without our work and attention, they wouldn't have anything to offer their customers. To be fair to ourselves, they did pull off the bait and switch tactic on us in the beginning by offering YouTube for free and killing off all their competition in the process. Now it's really hard to match their hosting resources. But this is not sustainable anymore. We need other solutions, not complaints. Even paid ones are fine as long as they don't pull these sort of corporate shenanigans.
I'm recently also encountering more unskippable ads, especially in kids' videos. There were always two ads. Sometimes the first wasn't skippable and the second always was. That has gradually shifted to neither being skippable.
>outright blocking adblockers, which has no bearing on youtube's service
The scale of data storage, transcoding compute, and bandwidth needed to run YouTube is staggering. I'm open to the idea that adblocking doesn't have much effect on a server just providing HTML and a few images, but YouTube's operating costs are (presumably, I haven't looked into it) enormous and absolutely incompatible with adblocking.
That’s fine, but YouTube has an obligation to make sure the ads they serve aren’t scams. They are falling short of that obligation.
YouTube had a $10B Q3. I cannot imagine them spending $10B on servers and staff in three months.
> (presumably, I haven't looked into it)
YouTube broke even sometime around 2010 and has been profitable ever since. The ad revenue has always been more than enough to sustain operating costs. It's just more growthism = more ads. If you want the YouTube of 2010--you know, the product we all liked and got used to--you can't have it. Welcome to enshittification.
Personally I find YouTube unusable without an adblocker. On my devices that don't have an ad blocker, it's infuriating.
From
https://github.com/yt-dlp/yt-dlp/wiki/EJS
it looks like deno is recommended for these reasons:
> Notes
> * Code is run with restricted permissions (e.g, no file system or network access)
> * Supports downloading EJS script dependencies from npm (--remote-components ejs:npm).
It's fine for this project, since Google is probably not in the business of triggering exploits in yt-dlp users, but please do not use Deno sandboxing as your main security measure for executing untrusted code. Runtime-level sandboxing is always quite weak. Relying on OS-level sandboxing or VMs (Firecracker & co.) is the right way to do this.
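For concreteness, the two layers look like this as shell commands. This is a sketch, not yt-dlp's actual invocation, and `solver.js` is a placeholder:

```shell
# Runtime-level sandbox: Deno denies file system and network access by
# default; --no-prompt makes denied permission requests fail instead of
# prompting interactively.
if command -v deno >/dev/null; then
  deno run --no-prompt solver.js || true   # placeholder script
fi

# OS-level sandbox: wrap the runtime itself in a bubblewrap jail, so a
# runtime escape still lands inside a read-only, network-less namespace.
if command -v bwrap >/dev/null; then
  bwrap --ro-bind / / --unshare-all --die-with-parent \
    deno run --no-prompt solver.js || true # placeholder script
fi
```

The point is defense in depth: the OS-level boundary holds even if the JS runtime's permission checks are bypassed.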
> It's fine for this project since google is probably not in the business of triggering exploits in yt-dlp
yt-dlp supports a huge list of websites other than youtube
i wonder if it would be legal if they did, as an anti-circumvention counter-measure.
For a long time, yt-dlp worked entirely in Python. They implemented a lightweight JavaScript interpreter that could run basic scripts. But as the runtime requirements became more sophisticated, it struggled to keep up.
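For flavor, the kind of job that interpreter was built for can be sketched in plain Python: the player JS scrambles a stream signature with a few primitive operations, and a tiny evaluator replays them. The operation names and example values below are illustrative, not yt-dlp's actual code:

```python
# A toy "signature decipher" in the spirit of youtube-dl's original
# approach: extract a short list of primitive ops from the player JS,
# then replay them in Python instead of running a real JS engine.

def decipher(sig: str, ops: list[tuple[str, int]]) -> str:
    for op, arg in ops:
        if op == "reverse":
            sig = sig[::-1]
        elif op == "slice":        # drop the first `arg` characters
            sig = sig[arg:]
        elif op == "swap":         # swap position 0 with position `arg`
            lst = list(sig)
            lst[0], lst[arg % len(lst)] = lst[arg % len(lst)], lst[0]
            sig = "".join(lst)
    return sig

# The op list would be scraped from the player JS; these values are made up.
print(decipher("abcdef", [("reverse", 0), ("slice", 1), ("swap", 2)]))  # cdeba
```

Once the player started shipping heavily obfuscated, interdependent challenge code rather than a short list of ops like these, a hand-rolled interpreter stopped being tractable, hence the move to a real JS runtime.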
I wonder how the whole thing works when I open a youtube video from a preview inside a chat application on my mobile phone.
It looks like the video loads and starts playing in some kind of in-app browser, but there's just a full-screen video and nothing else. I also never saw any ads in this "mode" of playing a video, yet recently something strange started happening: playback would start together with the audio track from an advertisement. The video itself would start playing, but the sound would be replaced with the sound from the ad, which seemed very odd and much like a bug; only when the advertisement's audio track ended was it replaced with the audio track from the video itself.
I'm genuinely curious how the whole playback process differs when I watch a video from the Telegram preview. Can I somehow achieve the same "just fullscreen video" kind of playback on the desktop as well? Does anyone have any insight?
I wonder why YouTube doesn't implement full DRM, such as Widevine, at this point.
Is it because it would break compatibility with some devices? Is it too expensive?
(not that I'd like that; I always download videos from YouTube for my personal archive, and I only use 3rd party or modified clients)
They are already experimenting with DRM on all videos in certain clients (like the HTML5 TV one) https://github.com/yt-dlp/yt-dlp/issues/12563
Sooner or later, in the next couple of years, it will happen.
> Is it because it would break compatibility with some devices?
This is a significant part of it. There are many smart devices that would not be capable of running that sort of software. As those cycle out of the support windows agreed way back when, this sort of limitation will be removed.
I'm sure this is not the only consideration, but it is certainly part of the equation.
Yeah, it's pretty much to support backwards compatibility with old smart TVs and the like. They already enforce stricter rules on new hi-res content, and once those old devices cycle out of service you can expect the support to go away.
I think it's because it costs money and they'd get little benefit from doing so.
Major platforms like Netflix etc. don't implement that DRM because they care; it's because the content they distribute requires that they employ those measures, otherwise whoever produces the content won't give it to them. Content on YouTube does not have this requirement.
Also: implementing strict DRM on all videos is probably bad for their reputation. It would restrict the devices that are able to play YouTube, and probably push a lot of content creators to other platforms that don't implement these requirements.
It's just an understandable reluctance to insert a bunch of additional dependencies in your playback stack unless you really, really have to.
People underestimate how much engineering Netflix have put in over the years to get it to work seamlessly and without much playback start latency, and replicating that over literally millions of existing videos is pretty non-trivial, as is re-transcoding.
It's not because of older devices: any TV that has had a YouTube app in the past decade was required to support Widevine as part of the agreement to get the app, so the tail end of devices you'd cut off would be tiny. And even if they wanted to keep them in use, they could probably use the client certificate to authenticate them and disallow general web access. It wouldn't be 100% foolproof, but if any open source project used an extracted key they could revoke it quickly.
I wish @pg would just add "Replace YouTube" to his Frighteningly Ambitious Startup ideas.
https://paulgraham.com/ambitious.html
"Support for YouTube without a JavaScript runtime is now considered "deprecated." It does still work somewhat; however, format availability will be limited, and severely so in some cases (e.g. for logged-in users). "
The devil is in the details
There are some formats, perhaps the one(s) the user wants, that do not require a JS runtime
Interesting that "signing up" for a website publishing public infomation and "logging in" may not always work in the user's favor. For example, here they claim it limits format availability
"Format availability without a JS runtime is expected to worsen as time goes on, and this will not be considered a "bug" but rather an inevitability for which there is no solution. It's also expected that, eventually, support for YouTube will not be possible at all without a JS runtime."
It is speculated that this format availability might change in the future
How long until it comes with a DRM AI and then my anti-DRM AI will have to fight it in a virtual arena (with neon lights and killer soundtrack, of course)?
Discussed here a few weeks ago:
https://news.ycombinator.com/item?id=45358980
Yt-dlp: Upcoming new requirements for YouTube downloads - 1244 points, 620 comments
Even when the so called "ad-pocalypse" happened, this wasn't as big of an issue as it is today.
What's going on with Google being extra stingy seems to correlate well with the AI boom (curse). I suspect there are companies running ruthless bots scraping TBs of videos from YouTube. Not just new popular videos that are on fast storage, but old obscure ones that probably require more resources to fetch. This is unnatural, and goes contrary to the behaviour pattern of normal users that YT is optimized for.
I think AI-companies abusing the internet is why things are getting more constrained in general. If I'm right, they deserve the bulk of the blame imo.
This will happen in the real world when the robot mass production gets going. We'll climb the exponential till we run into the resource limits of the planet at meteoric speed.
Yes, the regulators will try and manage it, but eventually every decision about who can use the robot/AI genie for what will go through them because of the robot/AI genie's enormous strain on natural resources, and you'll be back to a planned economy where the central planners are the environmental regulators.
There are hard decisions to make as well. Who gets to open a rare earth processing plant and have a tailing pond that completely ecologically destroys that area? Someone has to do it to enable the modern economy. It's kind of like we won't have a good AI video generator and will always be behind China if some Youtube creators refuse to license their content for AI training. Same goes for the rare earth processing tailing pond. Nobody can agree on where it's going to go, so China wins.
Hopefully most of what the bots are ruthlessly scraping is all the AI slop that is filling YT. Hopefully garbage in - garbage out will kill off all the AI nonsense.
yes, "AI" can be useful, but nonsense and slop are not.
>I suspect there are companies running ruthless bots scraping TBs of videos from YouTube.
certainly, but for Google, that bandwidth and compute is a drop in the bucket. at the scale Google operates, even if there were a hundred such bots (there aren't - few companies can afford to store exabytes of data), those wouldn't even register on the radar. of course, like the other social media oligarchs, Google wants to be the only entity with unrestricted access to their catalog of other people's content, but even that isn't their motivation here - "login to prove you're not a bot :^)" was ALWAYS going to happen, even without the AI bubble.
enshittification is unstoppable and irreversible, and Google is its prophet - what they don't kill, they turn to shit.
>I think AI-companies abusing the internet is why things are getting more constrained in general.
even before the AI bubble, every other fuckass blog with 0.5 daily visitors was behind Cloudflare, for the same reason those fuckass blogs are built with FOTM javascript frameworks - there's nowt so queer as webshits.
Just one question. I see all these 3rd party clients solving the problem separately. Isn't it easier for everyone to build a unified decoder backend that exposes a stable and consistent interface for all the frontends? That way, it will get more attention and each modification will have to be done only once.
Since JS is the big issue here, the backend itself could be written in JS, TS or something else that compiles to WASM. That way, the decoder doesn't have to be split between two separate codebases. Deno also allows the bundle to be optionally compiled into a native executable that can run without having to install Deno separately.
Alternatively, I'm not sure if this might be an impetus to move the bulk of the codebase itself to TS/JS and just use Deno/Node/Bun or otherwise to move to Rust with rusty_v8 or deno_core directly.
yt-dlp feels like a whole army fighting Google. Users report, and the army performs.
If by army you mean underfunded open source volunteers then yes.
That's the point: they don't have the funds, but they still give the sense of the fight and power of an army.
If it's free, can you even talk about underfunding?
We use this for AI transcriptions internally on our Linode VPS server.
It's been working great by itself for the most part since the beginning of the year, with only a couple of hiccups along the way.
We do use a custom cookies.txt file generated on the server as well as generate a `po_token` every time, which seems to help.
(I originally thought everything would just get blocked from a popular VPS provider, but surprisingly not?)
Most recently though, we were getting tons of errors like 429 until we switched to the `tv_embedded` client, which seems to have resolved things for the most part.
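For anyone wanting to reproduce a setup like this, the invocation would look roughly like the following. Option names come from yt-dlp's documentation, but the exact extractor-arg spelling should be verified against the README, and the URL is a placeholder:

```shell
# Hedged sketch of a server-side invocation: real cookies and a real URL
# are required; the player_client value mirrors the comment above.
if command -v yt-dlp >/dev/null; then
  yt-dlp \
    --cookies cookies.txt \
    --socket-timeout 5 \
    --extractor-args "youtube:player_client=tv_embedded" \
    "https://www.youtube.com/watch?v=..." || true  # placeholder URL
fi
```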
> if using QuickJS, version 2025-4-26 or later is strongly recommended for performance reasons
Oh, I wonder if they got performance to a reasonable level then? When the external JS requirements were first announced, they said it took upwards of half an hour, and a QuickJS developer wrote in the ticket that they didn’t see a path towards improving it significantly enough.
It wasn't a QuickJS developer but a developer from a fork: https://github.com/bellard/quickjs/issues/445
The day that YouTube makes itself unusable with properly free tools is the day that I use Nebula and such instead
I'm glad there's a streaming service that pays independent creators for their hard work, but the player is glitchy almost to the point of being unusable.
Actually, it's completely to the point of being unusable. For several videos now, I've watched halfway through and suddenly playback stops and the video is replaced with "Error." And every time this happens I just have to pray the video is on YouTube because, without exaggeration, it will never work again. Even after checking a week later.
If corporations could stop being dicks, that would be great. Between this and the Reddit API change feels like they all get together and plan this. Thank god for FOSS.
Frankly, I think this is inevitable; it's practically one of the laws of computing: any sufficiently complex system will ultimately require a Turing-complete language, regardless of its actual necessity.
See also: """Zawinski's Law states: "Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can."""" and """Greenspun's tenth rule of programming is an aphorism in computer programming and especially programming language circles that states:[1][2]
Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."""
(from the above I conclude that if you want to take over the computer world, implementing a mail reader with an embedded Lisp).
previously: https://news.ycombinator.com/item?id=45358980
In case anyone was wondering, use `--js-runtimes node` to use the (stated as insecure) Node option.
Perhaps a stupid question, but is there some reason I can't potentially fall back to recording my screen / audio in realtime and saving videos that way? yt-dlp is obviously far superior to this, but just thinking about what my fallback points are.
You definitely can, it's just 1) vastly slower, and 2) you have to recompress the decompressed video, which loses quality. It's therefore an option of last resort.
Most people want to be able to download 5 hours of video in the background in 5 minutes. Not wait 5 hours while their computer is unusable.
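If someone does reach that last resort, a minimal capture pipeline on Linux/X11 might look like this. Device names and sizes vary per system; this is a sketch, not a recommendation:

```shell
# Realtime screen+audio capture with ffmpeg: grabs the X display and the
# default PulseAudio source, re-encoding on the fly. Quality is capped by
# the re-encode, and a 2-hour video takes 2 hours to "download".
if command -v ffmpeg >/dev/null; then
  ffmpeg -f x11grab -framerate 30 -video_size 1920x1080 -i :0.0 \
         -f pulse -i default \
         -c:v libx264 -preset veryfast -crf 23 -c:a aac \
         -t 5 capture.mkv || true   # -t 5: short demo; fails without X
fi
```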
I wonder if it has to be a real computer, display, and camera, or if doing it with a "headless display" that is nonetheless being fed to a "video recorder" would work...
Funny how it'd be like The Matrix...
Understood and agreed. I mostly don't even care about keeping videos from Youtube, but some of the most amazing music performances in the world are trapped on Youtube, and in many cases there is no obvious way to purchase or download them elsewhere.
eg: https://www.youtube.com/watch?v=HAi1pn3kBqE
In the current times yes, you can basically record your screen with whatever tool you fancy.
But even now, many video sites employ DRM, and only the weakest levels of DRM streams can be recorded off the screen. If they crank that up, which is perfectly possible today, the screen recording only shows a blank rectangle, because the encryption goes from the server to the video card. At that stage, "HDMI recorders" are the next level: they capture the audio/video stream from the HDMI cable output, for example.
Even further, there is technology to encrypt from server to screen. I'm not sure on the rollout on this one. I think we have a long time until this is implemented, and even then, I'm sure we will have the ability to buy screens that fake the encryption, and then let us record the signal. And, for mainstream media, there will be pirated copies until the end of time I think.
> Even further, there is technology to encrypt from server to screen. I'm not sure on the rollout on this one. I think we have a long time until this is implemented, and even then, I'm sure we will have the ability to buy screens that fake the encryption, and then let us record the signal. And, for mainstream media, there will be pirated copies until the end of time I think.
In the end, nobody will ever stop people from pointing a camera at a screen. At least until they can implant a decryption device in our brains, whatever comes out of the screen can be recorded. Like in the past, when people used to record movies at the cinema with cameras and upload them to eMule. Sure, it wouldn't be super high quality, but considering it's free compared to something you pay for, who cares?
To me DRM is just a lost battle: while you can make it inconvenient to copy media, people will always try to find a way. We used to pirate in the VHS era and that was not convenient, since you would have needed 2 VCRs (quite expensive back then!) and copying took the full runtime of the movie.
With browser and hardware support for DRM, they could make it impossible if they wanted to. Basically, the OS / recording software sees a blank screen.
I was on live TV recently and wanted to keep a recording for myself that wasn't just filming the screen with my phone. I first tried screen-recording the show in my browser on their streaming service: got a black video. Then I tried their phone app: got a black video. Finally, on my phone but via their web page, they enabled playback without DRM and I could record and store it. When more devices support DRM they will probably get rid of that fallback as well.
I imagine there would be ways around this. I know from personal experience that Kazam screen recorder on Firefox on Ubuntu can record anything and everything, including YouTube as well as DRM content on Disney+ and Prime Video.
I bet that if Google really wanted to, it could force Firefox into line, but I imagine that actually preventing screen recording would require compliance at the OS level too, and I don't think that even Google could demand changes like that to Linux. The best they could do is block Linux clients from YouTube, but user-agent spoofing or emulation could probably circumvent that.
And even if Google does somehow manage to entirely block screen recording, we can always exploit the analog loophole.
There is always the analog hole. Even HDCP can be worked around. Even if they do manage to stop all computers from doing direct bit copies, there are still old things such as Kinescopes which they used to use to broadcast television from film. There of course is a quality loss, but that's kind of irrelevant to the point.
I don't know if YouTube cares, but other websites do attempt to block this as well. They will either black out your screen or prevent playback if you try to screen record, even encrypting the HDMI/DP output to prevent recording it.
I don't mind, but it has to work out of the box after a pip install.
Looks like the packaging will be a mess?
> it has to work out of the box after a pip install.
What do you mean by "it has to"?
great tool for archiving ICE abuses posted on multiple platforms
Is there a UI wrapper for this?
Am I right in saying they need to be able to run JS code from YouTube to be able to get the download URL at this point? Deliberate obfuscation I'm guessing? I guess Deno makes the code fairly safe to execute and I guess the chances of YouTube daring to download spyware onto your machine is minimal :-)
I do not understand why Google doesn't just explicitly permit people who pay for premium to use yt-dlp or other tools to watch YouTube however the fuck they want. Put that in your terms, Google -- so people aren't afraid they'll lose their GMail because they wanted to watch a video -- and you'll get more paying customers...
It's quite worrying. A sizeable chunk of cultural and educational material produced in the last decade is in control of greedy bastards who will never have enough. Unfortunately, downloading the video data is only part of it. Even if we shared it all on BitTorrent it's nowhere near as useful without the index and metadata.
From the preservation point of view yes. But realistically, it's been the norm throughout human history that irrelevant culture simply gets removed.
What are you talking about? It's in control of the creators. YT doesn't get exclusive copyright on user's content. Those creators can upload wherever they want.
And YT isn't "greedy bastards". They provide a valuable service, for free, that is extremely expensive to run. Do you think YT ought to be government-funded or a charity or something?
> Do you think YT ought to be government-funded
Benn Jordan made a pretty compelling video on this topic, arguing that the existing copyright system and artifacts of it are actually not that great and a potential government system might actually be better: https://www.youtube.com/watch?v=PJSTFzhs1O4
I will say that is something I would not have considered reasonable prior to watching his video.
I am impressed at their resourcefulness.
Knock on wood not to jinx it, but I wonder why this manages to stay up on github when eg paywall-busting chrome extensions get banned from there (because of DMCA takedowns I guess?)
there was already an attempt to take it down back in 2020/2021 [0]. The DMCA claim's main argument was that ytdl was circumventing Technical Protection Measures (TPMs) in order to access the content. Thanks to a letter from the EFF [1], which explains how ytdl accesses content in the same way a browser does (i.e. it does not circumvent anything such as DRM), GitHub rejected the takedown.
this is also why ytdl has stood firm in saying they will never attempt to be compatible with anything protected by DRM.
[0] https://github.blog/news-insights/policy-news-and-insights/s...
[1] https://github.com/github/dmca/blob/master/2020/11/2020-11-1...
Thanks for context with good links!
Youtube obviously is making it harder to download videos because... it's training data
Someday it will have to launch a VM with a copy of Chrome installed and use an AI model to operate the VM to make it look like a human, then use frame capture inside the VM to record the video and transcode it.
Is captcha solving on yt-dlp's roadmap? This seems to be a natural next step. Maybe there is an external library they could integrate?
Then someday it will require an entire LLM installed locally.
God damn, YouTube is at fault: it always says "forbidden" when I try to download an audiobook.
Ah! So, that’s why brew no longer updates yt-dlp on my iMac from 2017 ¯\_(ツ)_/¯
Why deno over bun?
Per their wiki [0], these are their notes on Bun:
- No permission restrictions available. Scripts have full file system and network access.
- Supports downloading EJS script dependencies from npm (--remote-components ejs:npm).
- No support for SOCKS proxies when downloading EJS script dependencies from npm.
[0]: https://github.com/yt-dlp/yt-dlp/wiki/EJS#notes-2