Source code for 4kb demoscene production “Elevated” released

10 years ago (files.scene.org)

Release info: http://www.pouet.net/prod.php?which=52938

Binary: https://files.scene.org/view/parties/2009/breakpoint09/in4k/...

Video: https://www.youtube.com/watch?v=jB0vBmiTr6o

Everything you see and hear is procedurally generated by the 4096 byte executable, in real time. It still blows my mind 7 years after release...

For anyone who's interested or thinks this stuff is cool, the author of Elevated has made a website for experimenting with real time pixel shaders in WebGL: https://www.shadertoy.com/

Some pretty incredible things have been done there.

After viewing LFT's work in using an ATMEL microcontroller as a demoscene platform: http://www.linusakesson.net/scene/craft/index.php

...I had the thought that a possible frontier in the demoscene is making your own hardware out of discrete components to run your demo.

The MOnSter 6502 would count - http://monster6502.com/

In the year 2000, when I was 16 years old, the 64k intro fr-08 by farbrausch told me that I knew nothing about programming ;)

[1] https://www.youtube.com/watch?v=Y3n3c_8Nn2Y

Does this mean that lots of 3:30-minute 1080p videos could be compressed into 4kb?

EDIT: We can be generous and say 40kb for the sake of adding more colours, etc.

  • This is actually an insightful question.

    The practical answer is no. There is an unimaginable number of possible 3:30-minute videos, far more than the number of possible 4kb or even 40kb files.
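    The counting argument can be sketched with some back-of-the-envelope arithmetic (the frame rate and color depth here are assumptions for illustration):

```python
import math

bits_in_4kb = 8 * 4096              # 32,768 bits -> 2**32768 possible files
seconds = 3 * 60 + 30               # 3:30 runtime
frames = seconds * 30               # assume 30 fps
pixels_per_frame = 1920 * 1080      # 1080p
bits_per_pixel = 24                 # 8 bits per RGB channel
bits_in_video = frames * pixels_per_frame * bits_per_pixel

# The number of distinct files/videos is 2**bits; compare orders of magnitude.
print(f"distinct 4 KB files : ~10^{int(bits_in_4kb * math.log10(2))}")
print(f"distinct raw videos : ~10^{int(bits_in_video * math.log10(2))}")
# Pigeonhole principle: there are astronomically more possible videos than
# possible 4 KB files, so almost no video can have its own 4 KB encoding.
```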

    To be fair, most of those possible videos are just noise. We don't have to be able to compress those, because people don't care whether one video of noise differs from another. We also don't have to reconstruct the video perfectly: as long as it looks more or less the same, the audience is happy. (This is called "lossy compression".)

    But even with these caveats, there is no realistic method for compressing realistic 3:30 minute videos that well on a computer. We likely can't do all that much more than current compression algorithms without a different set of tradeoffs. (Like being better at some videos but worse at others.)

    That said, a big part of how compression works is by relying on information already present when decompressing. This demo relies on having a particular kind of chip with certain capabilities (i.e. a CPU and a GPU) and presumably some standard library functions... etc.

    How well could we "compress" videos if we had more information available when decompressing? Here's a fun thought experiment: what if we had a model of a human mind? We could then feed in a pretty sparse description and have the model fill in the details in a natural intuitive way. It would be very lossy, but the results would be compelling.

    And you know what? That's a decent mental model of how speech works! If you just look at information content, spoken words are not very dense. But if I describe a scene you can imagine it almost as if you're seeing a video. This works because we both have the same sort of brain as well as shared experiences and intentions.

    You can think of speech as incredibly effective—but also rather lossy—compression.

    • It could be very useful to deliberately pursue SUPER lossy compression. As long as no one can really tell based on the end result, it doesn't really matter.

      For example, if you can only tell something was lossy by directly comparing two instances of the same video during playback, then that's probably good enough in most situations.

      It occurred to me that we could compress the hell out of written works by translating them into some super dense language, ultimately retaining only the basics of the meaning/concepts/some of the writing style. Then we can re-translate that back into whatever language we want to read it in.

      For compressing pictures or videos, there could be some similar translation to a much more compact representation. Would probably rely on ML heavily though.

    • 4K of English text is a couple of pages of a novel, enough to describe a character and a situation, maybe an interaction. A good writer can conjure up a whole world in 4K... but probably not a description of an arbitrary 3 and a half minutes of activity.

      Nice insight about the CPU and the standard libraries being a relevant factor; I hadn't thought of that.

      Your thought experiment sounds more like a "codec" than procedural generation. I guess it's an arbitrary line, given that we're using the CPU, etc. But the bigger the decompressing "model", the further away from true 4k compression we are.

  • Take a look at https://en.wikipedia.org/wiki/Kolmogorov_complexity

    The Kolmogorov Complexity of a video (or any other data) is the size of the shortest program which outputs that video then halts. This 4k executable is similar in spirit, but also follows strict rules about efficiency: Kolmogorov complexity places no time limits on that shortest program, whereas this program must output pixels fast enough to make the video realtime.
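    A toy illustration of the idea: a highly regular megabyte of data has a tiny generating program, and that program's length is an upper bound on the data's Kolmogorov complexity (up to a constant for the language itself):

```python
import zlib

# A megabyte of highly regular data...
data = b"ab" * 500_000

# ...is fully described by a "program" far shorter than the data itself.
# Its length upper-bounds the Kolmogorov complexity of `data` (plus a
# constant for the Python interpreter). Note there is no time limit here;
# a realtime demo has the extra constraint of producing output fast.
program = b'import sys; sys.stdout.buffer.write(b"ab" * 500_000)'

print(len(data))                 # 1,000,000 bytes of output
print(len(program))              # a few dozen bytes of description
print(len(zlib.compress(data)))  # generic compression helps, but far less
```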

  • Well, it's not compressed, it's generated. You could generate an endless video with less code, but it would most likely be uninteresting. Scene demos are interesting because it's art and direction and music generated from algorithms rather than creating those things and compressing them efficiently.

    But, yes, at some level there is an idea of a dna seed and a process to create something much more profound, we as humanity haven't come close to cracking that, though.
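    A minimal sketch of "generated, not compressed": every frame below is a pure function of time, so the animation runs endlessly with no stored frame data at all (the sine-plasma pattern is arbitrary, purely for illustration):

```python
import math

def frame(t, w=40, h=12):
    """Procedurally generate one ASCII 'frame' at time t.

    Nothing is stored: every cell is a pure function of (x, y, t),
    so the 'video' can run forever from these few lines of code.
    """
    rows = []
    for y in range(h):
        row = ""
        for x in range(w):
            # Sum of three sine waves, v in [-3, 3]
            v = (math.sin(x * 0.3 + t)
                 + math.sin(y * 0.5 - t * 0.7)
                 + math.sin((x + y) * 0.2 + t * 1.3))
            # Map v to a brightness ramp of 10 characters
            row += " .:-=+*#%@"[int((v + 3) / 6 * 9.99)]
        rows.append(row)
    return "\n".join(rows)

print(frame(0.0))
print(frame(1.0))  # a different frame, same tiny program
```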

      I suspect that if it is at all possible to have an algorithm that can generate the seeds plus the process to expand them, that algorithm would take orders of magnitude longer to run than would be practical on any meaningful time scale.

    Not visuals, but in a similar vein: random number generators with high dimensionality and equidistribution can be coerced into generating very specific output, given enough exploration of the output space.

    For example, an output of all zeros, or the source of the random number generator itself, or a zipped archive of a work of Shakespeare.

    It's fun to think about anyway.

    http://www.pcg-random.org/party-tricks.html
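    The linked party tricks use PCG; here is a much cruder sketch of the same idea with a tiny LCG: brute-force a seed whose first outputs spell a chosen message (the multiplier/increment are the standard Numerical Recipes constants; the message is arbitrary):

```python
def lcg(seed):
    """Tiny 32-bit linear congruential generator (Numerical Recipes constants)."""
    state = seed & 0xFFFFFFFF
    while True:
        state = (state * 1664525 + 1013904223) & 0xFFFFFFFF
        yield state >> 24  # emit the top byte of the state

def find_seed(target):
    """Brute-force search for a seed whose first outputs match `target`."""
    for seed in range(1 << 24):
        gen = lcg(seed)
        if all(next(gen) == b for b in target):
            return seed
    return None  # no seed found in the searched range

# "Coerce" the generator into emitting a chosen message. A 2-byte target
# matches one seed in ~65,536, so a match almost certainly exists in range.
seed = find_seed(b"HI")
if seed is not None:
    gen = lcg(seed)
    print(bytes(next(gen) for _ in range(2)))  # spells out the target
```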

I know a RasPi doesn't have anywhere near the specs needed to run this, but I'd love to gut an old flat-screen monitor, put it in a frame with a RasPi running something like this, generating random "art", and hang it on a wall somewhere...

also impressive: https://en.wikipedia.org/wiki/.kkrieger

This was one of my favorite demos back in the day. I still have a copy of it (along with ~10 other favorites) sitting in a "Demoscene" folder somewhere. Many of them don't work on today's hardware/software, sadly (including this one).

It's great that it's open source now! That means if someone's really motivated, they can update it to run on modern environments (by no longer keeping it 4 KB), even OS X, etc.

And I had been planning to reverse engineer it for some time, but never got around to doing it.

Hats off to Inigo and others.

Here's their presentation on how they made it as well. Super interesting read. It's shaders all the way down.

http://www.iquilezles.org/www/material/function2009/function...

That's unreal. On what kind of graphics hardware, though? It seems to offload most of the work to the GPU, whereas we'd have had to do most of it in software on hardware weak enough that a 4KB size actually mattered, and probably couldn't have achieved this demo.

  • >Seems like it probably offloads most of the work on GPU

    It does just about everything on the GPU. All the CPU does is repeatedly render two triangles and play music: https://www.shadertoy.com/view/MdX3Rr

    Edit: I'm wrong about the two triangles. From the .nfo-file:

      for those wondering, this a (too) low density flat mesh displaced with
      a procedural vertex shader. there arent any texturemaps for texturing,
      instead texturing (and shading) is defferred and computed procedurally
      in a full screen quad. this means there is zero overdraw for the quite
      expensive material at the cost of a single geometry pass. then another
      second full screen quad computes the motion blur. camera movements are 
      computed by a shader too and not in the cpu, as only the gpu knows the
      procedural definition of the landscape.
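    A rough sketch, in Python rather than a vertex shader, of the "flat mesh displaced by a procedural function" idea the quote describes. Value-noise fbm is a standard technique for this kind of terrain; Elevated's actual shader differs, and all constants here are illustrative:

```python
import math

def hash2(ix, iy):
    """Cheap deterministic pseudo-random value in [0, 1] for a lattice point."""
    h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 0xFFFFFFFF

def value_noise(x, y):
    """Bilinearly interpolated lattice noise with a smoothstep fade."""
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    fx, fy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)  # smoothstep
    a, b = hash2(ix, iy), hash2(ix + 1, iy)
    c, d = hash2(ix, iy + 1), hash2(ix + 1, iy + 1)
    return (a * (1 - fx) + b * fx) * (1 - fy) + (c * (1 - fx) + d * fx) * fy

def height(x, y, octaves=6):
    """fbm: sum noise at doubling frequencies and halving amplitudes."""
    total, amp, freq = 0.0, 0.5, 1.0
    for _ in range(octaves):
        total += amp * value_noise(x * freq, y * freq)
        amp, freq = amp * 0.5, freq * 2.0
    return total

# Displace a flat grid of vertices by the procedural height function.
# No heightmap is stored; the terrain is defined entirely by the code above.
mesh = [(x * 0.1, height(x * 0.1, y * 0.1), y * 0.1)
        for y in range(64) for x in range(64)]
print(len(mesh), "vertices, zero bytes of stored terrain data")
```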

      Thanks for the detailed response. I figured it mostly did GPU stuff. So, the real computing necessary here is a massively parallel chip with generic and custom hardware and a bunch of memory, plus a regular core using 4KB on the other end. I think a more interesting challenge would be to force use of a subset of GPU functions or memory, plus tiny memory on the CPU side. I don't follow the demoscene closely enough to know if they subset GPUs like that. The idea being to make them run closer to the old Voodoo or pre-GeForce GPUs, to see just how much 2D or 3D performance one could squeeze out of them.

      The tricks could have long-term benefit, since any emerging FOSS GPU is more likely to resemble one of the older ones, given the complexity of new ones. I'd clone one like the SGI Octane GPUs they used to do movies on with mere 200MHz processors. Meanwhile, similar tricks might let one squeeze more out of the existing embedded GPUs in use. Maybe subset a PC GPU in demoscenes like one of the smartphone GPUs. Yeah, that's got some interesting potential.

This is actually 4KB (kilobytes). The title led me to believe it was 4Kb (kilobits). Still impressive though.

Back in the day demos were more impressive, imo. A lot of them now use DirectX or OpenGL. For the most part the stuff you see isn't written by hand anymore, AFAIK; they just have programs to generate the actual demo. Basically they use modeling programs.

  • There are all kinds of demos: some of them use models, some of them do not, some are technical feats, and some are artforms. Many are both.

    Models are just serialized polygon meshes. We've been using models for demos for way longer than DX/OGL have existed. They're just another tool in the box which you can use (if you want to).

    Using DirectX or OpenGL nowadays is like using the CPU: it's just part of the stack. They are probably lower level than you think: using DX/OGL isn't just calling something like drawModel(model, x, y, z); it's way lower level than that.

    This demo in particular is not very different from old-school demos. Back in the day we had interrupts, now we have API calls, but in the end shaders are just code. Elevated uses D3D to execute them on the GPU, but that's all. And the synthesizer is apparently coded in ASM.

    Even if you really miss the old school platforms there are still demos produced for them, often pushing the limits of what can be done.

    Check http://www.pouet.net/ for lots of impressive demos.

      Ah, so yes, I appreciate demos that push hardware to the limit or create some new effect that hasn't been seen before. I feel more demos on older hardware fall within this category than newer demos.

      Additionally, please correct me if I'm wrong: older demos didn't have nice graphics APIs to call. They had to create, and store in their binary, what's given for free by APIs these days. I think the Amiga did have some 3D stuff?

      With OpenGL, a handful of lines gets you a spinning cube with lighting. So much more work had to be put into older demos to get to the same point. On top of that, the demo writers really had to know the hardware well, diving into undocumented behavior. A lot more was being calculated on the CPU back then too.

      I haven't been on pouet.net in a while but I will look at newer demos.

  • While true for large demos, the 4kb intros (perhaps even the 64kb ones?) are still very much written by hand.

Cool idea, too bad it's not free software. It's less free than any software I've seen (explicitly saying that you can't use it for "settings where security is critical" -- something that doesn't even make sense from a software license perspective). It's like the "Good not Evil" line in the JSON license.