Show HN: Tramway SDK – An unholy union between Half-Life and Morrowind engines
11 days ago (racenis.github.io)
Hello everyone, I would like to see if there is any interest in this little project that I have been working on for the past few years.
Could be relevant, seeing the direction in which the mainstream game engines are going.
I didn't really like any of the already existing options, so I tried to make my own and it turned out to be easier than expected.
It's sort of like a low-budget Unreal/Source, but with open-world streaming support, and it's free and open source. Very old-school, but optimized for more modern hardware. Very fast too.
Still not production ready, but it seems like it is mostly working.
I want to finish a few larger projects with it to see what happens.
Btw, the name is probably temporary.
"Some might say "just get a better computer". This is why getting a better computer is bad:
1. Affordance: A lot of people, especially from 3rd world countries are very poor and can't afford to buy hardware to run Turbobloat.
2. e-Waste: Producing computer chips is very bad on the environment. If modern software wasn't Turbobloated you would buy new hardware only when the previous hardware broke and wasn't repairable.
3. Not putting up with Turbobloat: Why spend money on another computer if you already have one that works perfectly fine? Just because of someone else's turbobloat? You could buy 1000 cans of Dr. Pepper instead."
Took the words right out of my mouth. What a great project. Please keep posting your progress.
"Screen resolutions from 320x200 to 800x600."
Still, higher resolutions were not just invented because of Turbobloat.
Important:
This was just a joke from the site that I actually took seriously!
There is no 800x600 limit.
But also a convenient excuse to sell more RAM and disk space 'for the textures'.
Is that a hard wired limit? I know nothing about game engines, so I'm a bit in the dark why it would only support up to that resolution. Is this about optimized code in terms of cpu cache aligned instruction pipelines etc?
They say that but the engine seems to require an OpenGL 4 GPU while the graphics look like something that could be done on a Voodoo card.
Requires a 15 year old card (so, 2010.) Six years after Half Life 2 but looks like Half Life 1, which shipped with a software renderer (no GPU needed at all!)
I fear the turbobloat is still with us.
What is ‘turbobloat’?
From context, I interpret it to be ‘graphics tech I don’t like’, but I’m not sure what counts as turbobloat.
The whole post is tongue-in-cheek; it just means "features the game you're making doesn't need (like modern graphics with advanced shaders and super high resolution requiring the latest graphics cards)".
If you're making a game that needs those features, obviously you'll need to bloat up. If you're not, maybe this SDK will be enough and be fast and small as well.
Manufacturing and shipping a new computer can be worth it long term. Improvements in performance and energy consumption can offset the environmental impact after some time.
Of course for entertainment it’s difficult to judge, especially when you may have more fun on an old gameboy than a brand new 1000W gaming PC.
> after some time.
This is doing a lot of heavy lifting in this sentence.
What you're talking about is called the embodied energy of a product[0]. In the case of electronic hardware it is pretty staggeringly high if I'm not mistaken.
[0] https://en.wikipedia.org/wiki/Embodied_energy
"A thing should be a thing. It should not be a bunch of things pretending to be a single thing. With nodes you have to pretend that a collection of things is a single thing."
Just want to say this line was great, very Terry Pratchett. Feels like something Sam Vimes would think during a particularly complex investigation. I love it and hope you keep it moving forward.
Haven't gotten a chance to mess around with it, but I have some ideas for my AI projects that might be able to really utilize it.
In isolation, isn't the quote prima facie so bad and so wrong though? We think of collections of things as single things constantly. A human is a collection of body parts, body parts are collections of chemicals, chemicals are collections of molecules, molecules are collections of atoms... and yet at each level we think of those collections as being single things. Not being able to do that is just... absurd.
The project looks awesome though.
Agreed. Type systems are nearly always "temporal" yet are too simply designed to address that.
"Temporal" to mean that at any given slice of time during a running application all objects have a signature that matches a type.
Yet most programming languages only allow compile-time analysis, and "runtime" is treated as a monolithic "we can't know anything about types at this point".
I think maybe it is intended as a critique of systems where the individual parts don't compose or scale particularly well, where it feels sort of hollow to call it a "system" because of how uncoordinated and inefficient it is at the "single things" layer.
I think the point is that a body is obviously and intuitively a thing, and doesn't need any pretending. Whereas take something like a marketing brand that has been spread too thin over a bunch of disparate products, everyone has to pretend really hard that it is one thing.
The thing about nodes is a joke like around 80% of the text.
Yes! In programming speak, you're talking about levels of abstraction.
It sounds like the sort of thing Sam Vimes would say before being begrudgingly forced to admit, after being forced by Sybil to undergo some painful personal growth, that maybe, sometimes, a thing might need to be more than just a thing.
And that Vetinari’s entity component system might seem complicated but it works, damnit and it makes the city function.
And once Nobby says he likes Tramway, everyone realizes Vetinari was right all along XD
(I'm just glad someone got the reference)
This quote is likely intended for people who've tried other solutions and disliked them, but as someone who's never used a game engine of any kind, I'd appreciate someone giving me an ELI5 of how "nodes" relate to "pretending that collections of things are things."
Is the problem here that using a nodal editor encourages/incentivizes you through its UX, to assign properties and relationships to e.g. a `Vector` of `Finger`s — but then you can't actually write code that makes the `Vector<Finger>` do anything, because it is just a "collection of things" in the end, not its own "type of thing" that can have its own behavior?
And does "everything is an Entity, just write code" mean that there's no UX layer that encourages `Vector<Finger>` over just creating a Hand class that can hold your Fingers and give the hand itself its own state/behavior?
Or, alternately, does that mean that rather than instantiating "nodes" that represent "instances of a thing that are themselves still types to be further instantiated, but that are pre-wired to have specific values for static members, and specific types or objects [implicitly actually factories] for relationship members" (which is... type currying, kind of?), you instead are expected to just subclass your Entity subclass to further refine it?
In a node-based engine, everything is just a graph of mostly ready-to-use nodes, all you do is create nodes, parent them, delete them; behavior can be attached to specific nodes. There may be no clear boundary where an entity "begins" and where it "ends", because everything is just a bunch of nodes. I'm not sure why the author is against it, in a proper engine you can quickly prototype mechanics/behaviors by just reusing existing nodes/components, and it's very flexible (say, I want to apply some logic to all child nodes recursively -- no problem; or I want to dynamically remove a certain part of a character -- I just unparent the node), and often such engines allow to test things without stopping/recompiling the project. On the other hand, OP's engine apparently requires you to do everything by hand (subclass Entity in code) and recompile everything each time. Basically, a node-based engine is about composition, and OP apparently prefers inheritance.
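To make that contrast concrete, here is a minimal C++ sketch of the two styles. The types are hypothetical illustrations, not the actual API of Godot, Unity, or Tramway: in the node style an "entity" is whatever a subtree happens to contain, while in the subclass style its parts are members of a single class.

```cpp
#include <memory>
#include <vector>

// Node-style composition: an entity is "whatever this subtree contains",
// and behavior is applied by walking the tree.
struct Node {
    std::vector<std::unique_ptr<Node>> children;
    virtual void Update(float dt) {
        for (auto& child : children) child->Update(dt);  // recurse over the subtree
    }
    virtual ~Node() = default;
};

// Inheritance style: the "thing" is one class, its parts are plain members.
struct Entity {
    virtual void Update(float dt) = 0;
    virtual ~Entity() = default;
};

struct Crate : Entity {
    float x = 0, y = 0, z = 0;  // transform
    int mesh_id = 0;            // render data
    int collider_id = 0;        // physics data
    void Update(float /*dt*/) override {
        // all of the crate's behavior lives in one place
    }
};

int main() {
    Node scene;                                          // "entity" = loose collection of nodes
    scene.children.push_back(std::make_unique<Node>());
    scene.Update(0.016f);

    Crate crate;                                         // "entity" = one self-contained class
    crate.Update(0.016f);
}
```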
I think the paragraph after is really interesting:
“Also when creating things with nodes, you have to go back and forth between node GUI and code.”
You can see Godot's Node/GDScript setup as a bit of a response to this argument. Or, they try to make the "going back and forth" as seamless and integrated as possible with things like the $ operator and autocomplete.
That said, I do think at the end of day, the “thing is a thing” mindset ultimately prevails, as you have to ship a game.
I've been trying to learn Godot for years and I'm not doing so hot. This chatter feels very relevant to my struggles, but I'm not the best with software design, so what do I know? I was in a tizzy the other day and spammed my thoughts out about it; I hope it's relevant here.
trying to wrap my head around using scenes vs. nodes in something simple like a 2d platformer.
Platforms:
My thinking: I'm gonna be using a ton of platforms, so it'd make sense to abstract the nodes that make up a platform to a scene, so I can easily instance in a bunch.
Maybe I'm already jumping the gun here? Maybe having a ton of an object (set of nodes) doesn't instantly mean it'd be better off as a scene?
Still, scenes seem instinctually like a good idea because it lets me easily instance in copies, but it becomes obvious fast that you lose flexibility.
So I make a scene, add a staticbody, sprite, and collision shape. I adjust the collision shape to match the image. Ideally at this point, I could just easily resize the parent static body object to make the platform whatever size I want. This would in theory properly resize the sprite and collision shape.
But I'm aware it's not a good/supported idea to scale a collision shape indirectly; you're supposed to change its extents or size directly instead. So you have to do stuff based on the fact that this thing is not actually just a thing, but several things.
This seems like a bad idea, but maybe one way I could use scenes for platforms is to add them to my level scene and make each one have editable children. Problem with this is I'd need to make every shape resource unique, and I have to do it every time I add a platform. This same problem will occur if I try duplicating sets of nodes (not scenes) that represent platforms, too. Need to make each shape unique. That said, this is easier than using scenes + editable children.
Ultimately the ‘right’ way forward seems to be tilemaps, but I wanted to understand this from a principles perspective. The simple, intuitive thing (to me) does not seem possible.
When I ask questions about this kind of stuff, 9/10 times the suggestion is to do it in a paradigmatic way that one might only learn after spending a lot of time with an engine or asking the specific question, rather than what I would think is a way that makes dumb sense.
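For what it's worth, the "a thing should be a thing" quote maps directly onto this struggle: wrap the parts behind one type with one operation, so resizing never means scaling a parent and hoping the children cope. A minimal C++ sketch with made-up stand-ins for a sprite and a collision shape (not Godot's actual nodes):

```cpp
// Hypothetical stand-ins for a sprite node and a collision-shape node.
struct SpriteRect   { float width = 0, height = 0; };
struct CollisionBox { float width = 0, height = 0; };

// The platform is one "thing": a single Resize() keeps every part consistent,
// setting the collider's extents directly instead of scaling a parent node.
class Platform {
public:
    void Resize(float width, float height) {
        sprite_.width   = width;  sprite_.height   = height;
        collider_.width = width;  collider_.height = height;
    }
private:
    SpriteRect   sprite_;
    CollisionBox collider_;
};

int main() {
    Platform platform;
    platform.Resize(256.0f, 32.0f);  // one call resizes both the visual and the collider
}
```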
The problem is "a thing is a thing" only gets you those exact things with those exact thing-behaviors.
Sharing behaviors, or making things look or act a little bit like this other thing, becomes an absolute nightmare, if not outright impossible, with "a thing is a thing."
There's a reason graph-based systems or ECS are basically the cornerstone of every modern engine. Because it works and is necessary.
Unfortunately everything is a collection of things pretending to be a single thing, even single things. The best we can do is pretend, or avoid finding out.
This quote looks like it could have been written by Alberto Caeiro, right before he would turn around and apologize for putting too much thingness into things, less they become less thingy in the eyes of us over-thingers.
I'm starting to believe there is an external force that drives down the quality of game engines over time. In most tech, the things that catch on are the things that are the easiest to develop a curriculum for. The shape of a node-based editor like Unity is uniquely suited to explaining over a number of classes. (Source: I had to learn Unity at my university.) On the other hand, an engine like raylib can be grokked in an afternoon, so a university-level raylib class wouldn't work.

So you have all these amateur game developers and programmers coming out of diploma mills, and all they know is Unity/Unreal, so companies hire Unity/Unreal, so universities teach it, etc. See also: Java being popular.

Then of course, all these companies have wildly different needs for their Unity projects, so Unity, being a for-profit company that serves its customers and not a single disgruntled programmer, has to make their engine conform. So you end up with 'turbobloat.' (Amazing term, btw.)
The Half-Life and Morrowind engines are in a unique situation where they're put together by enthusiastic programmers who are paid to develop stuff they think is cool. You end up with minimal engines and great tech, suited to the needs of professional game developers.
This seems like something that sits in between a raylib and a Unity. I haven't used it, but I worry that it doesn't do enough to appeal to amateur programmers, yet does too much to appeal to the kind of programmer who wants a smaller engine. I could be very wrong though; I hope to be very wrong. Seems like the performance here is very nice and it's very well put together. There's definitely a wave of developers coming out of Unity frustrated right now. As the nostalgia cycle moves to the 2000s, there's a very real demand to play and create games that are no more graphically complex than Half-Life 2.
Anyway, great project. Great web design. Documentation is written in a nice voice.
The other thing to remember is that games and the engines built alongside them accommodate each other - Doom couldn't have a floor above another floor (an engine limitation driven by CPU limitations), so the level designers created tricks to make it feel like it did.
When you're designing both you can take advantage of features you add but also avoid the ones you can't do well - or even change the art style to "fit" the engine - pixelated angular mobs fit Minecraft quite well, but once they start getting more and more detailed you're in an "uncanny valley" where they look worse and more dated than Minecraft - until you finally have enough polygons to render something decent.
Oh, absolutely. I maintain the engine for my video game and it's ultra-minimal, tailored to my needs. That leads to better performance and a much slimmer build size (currently sitting at ~900KB for the optimized build of a nontrivial game, assets bundled separately). It's also a better development experience, imo.
My argument was mainly about these more generalized engines, like raylib, 'Tramway', or Source.
A guy who worked on Bioshock (lead design?) said in an interview:
"At work if we want to experiment with a new idea I have to assembly a team, and spend at least a month before we have something we can work with. Meanwhile, at home, I can make a whole Doom campaign in one evening."
(quoting from memory, sorry)
There are new games that still use (modern forks of) the Doom engine!
https://store.steampowered.com/curator/42392172-GZDoom-Games...
It's like with Cyberpunk: if they hadn't used REDengine, which is horrible bloatware, they could've finished the game in half the time with half the people and it would run on a 10-year-old laptop at 60 fps. /s
I love library-based game dev, like raylib or libGDX, but there is a reason that games like Slay the Spire moved to Unity, and then to Godot for their sequel.
That is to say, I don't think people are using Unity because they were mistaught by complexity-loving professors.
Another thing related to this that I found kind of interesting is this post [0] (unfortunately on Twitter) from the developer of Caves of Qud, where they fully ported their game from Unity to Godot as an experiment. They seem to have built the game around a single node, essentially using Unity (and then Godot) as the presentation layer, like a simple graphics library, and basically ignoring the whole node system of either engine.
I wonder if this kind of architecture might also be a pretty good approach. The fact that they were able to port the game to another engine within a day is pretty impressive.
[0] https://xcancel.com/unormal/status/1703163364229161236
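If I read that right, the trick is keeping the simulation free of engine types and treating the engine purely as a presentation layer behind a thin interface. A rough C++ sketch of that split, with hypothetical names (this is not how Caves of Qud is actually structured internally):

```cpp
#include <cstdio>
#include <string>
#include <vector>

// All game state and rules live here, with no engine types involved.
struct GameState {
    std::vector<std::string> log;
    void Tick() { log.push_back("turn advanced"); }  // simulation logic only
};

// Thin presentation interface; each engine gets its own small adapter.
class Presenter {
public:
    virtual void Draw(const GameState& state) = 0;
    virtual ~Presenter() = default;
};

// Stand-in for a Unity/Godot adapter: the only part you rewrite when porting.
class ConsolePresenter : public Presenter {
public:
    void Draw(const GameState& state) override {
        for (const auto& line : state.log) std::puts(line.c_str());
    }
};

int main() {
    GameState game;
    ConsolePresenter presenter;
    game.Tick();
    presenter.Draw(game);  // swap the Presenter, keep the game
}
```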
I don't think that the developers of Slay the Spire were taught by complexity-loving professors, no. But education does more than influence the people at universities. Education informs norms, traditions, and styles that permeate through industries. An example from outside tech: the music notation app Finale gained a stranglehold on the education market, and now it's one of the standards for notation, despite being the worst option (source: have you tried Finale?).
I've never played the game, but my understanding is that Slay the Spire largely impresses on a design and artistic front, not a technical one. Its engine requirements were not based on feature set or code quality, but on what developers knew. So they probably picked Unity because it was ubiquitous. Education starts the problem, and then devs who need something common they can hire for continue the problem. I don't blame devs for this; it's the right choice to make, and obviously Slay the Spire is great. But I am saying that this is a force that drives down the quality of game engines.
I’m curious, what was/is the reason? I would like to learn more about the tradeoffs people are experiencing.
Doesn't Slay the Spire rerender every card's render target every single frame? It runs like dogshit on the Switch for no good reason, given how graphically simple the game is compared to other titles on the platform.
Games used to be crisp as hell, and now they run like shit, crash, take 150GB to download, and take 150 years to launch. If we played games for graphics, one of the most popular MMOs wouldn't be based on a browser game from 2002; in fact, we wouldn't be playing games at all, we'd be playing real life.
Look at what Epic Games did with Fortnite. They killed a game with a competitive scene that ran smooth, all for turbobloat graphics and skins.
> and programmers coming out of diploma mills, and all they know is Unity/Unreal, so companies hire Unity/Unreal, so universities teach it, etc.
There is a similar phenomenon with ArcGIS.
Surely there's something good about Unity and its nodes if games like Kerbal Space Program can be made with it.
I wholeheartedly agree with the turbo bloat problem. Machines are so much more powerful nowadays, but most programs feel actually slower than before.
Very cool project. And the website design is A+
> but most programs feel actually slower than before
I feel like this is only true for people who happened to luck out with slightly overpowered hardware in very specific time periods.
As someone who used pretty average hardware in the Windows 98/2000/XP era as a teenager, even a low-end modern laptop with an SSD running Windows 10/11/KDE/GNOME/whatever is massively more responsive, even running supposedly bloated webapps like VS Code or Slack.
Well... I recommend you try an old Amiga 1200. You'll be in for a big surprise at how highly responsive this 20 MHz machine is, and it boots faster than any current machine with Windows 10/11. However, it would not look fancy to our current eyes.
I don't understand the term "turbobloat", never heard it before (and I've made games), the author doesn't define it and a quick search returns the submission article on Kagi, while nothing relevant at all on Google.
So, what does it mean? Just "very bloated"?
Edit: Reading around on the website and seeing more terms like "Hyperrealistic physics simulation" makes me believe it just means "very bloated".
I took it to mean "increasingly bloated over time relative to hardware, phrased in a funny, irreverent way." It's a vibe thing, not a definition thing.
I don’t think it is a real word. “Turbo” means “very” or more accurately “extremely,” but is typically only used in a positive context, e.g. turbocharged. That makes the turbobloated neologism ironic and funny.
Because of that factor, I'm not quite sure what's going on with the article or comments here altogether.
If you gave it to me in a cleanroom and told me I had to share my honest opinion, I'd say it repeats universally agreeable things and hitches them to some sort of solo endeavor to wed together a couple of old 3D engines, with a lack of technical clarity, or even prose clarity, beyond "I will be better than the others."
I assume given the other reactions that I'm missing something, because I don't know 3D engines, and it'd be odd to have universally positive responses just because it repeats old chestnuts.
If bufferbloat is increased latency caused by excessive use of increasingly available RAM, then turbobloat is increased latency caused by excessive use of increasingly available CPU.
Certain vintage hardware had a "turbo" button to unleash the full speed of the newer CPUs. The designers blind to the horrors of induced demand.
> Most Unity games look like very bad, even with fancy shaders, normal mapping and other techniques.
This seems to be an increasingly common point of view among those of a certain age.
It is definitely the case that the art of a certain sort of texture mapping has been lost. The example I go back to is Ikaruga, where the backgrounds are simply way better than they have any right to be, especially a very simple forest effect early on. Some of the PS2 era train simulators also manage this.
The problem is these all fall apart when you have a strong directional light source like the sun pointed at shiny objects, and the player moves around. If you want to do overcast environments with zero dynamic objects though you totally could bypass a lot of modern hacks.
Yes. And the thing is, some modern games ARE overcast with no dynamic lights, and then go on to use Lumen of all things. This was the case with the Silent Hill remake, and that thing runs very slowly, looks WORSE on the PS5 Pro, the grass looks worse than in older games, and so on.
Seriously, the plot of Silent Hill was invented to justify optimization hacks: you have a permanent foggy space called "fog space" to make it easier to manage objects on screen, and the remake instead stupidly wastes a ton of processing trying to make some realistic (instead of supernatural-looking) fog.
It's not the 90s anymore. Using basic linear fog with ultra-realistic assets would just look terribly out of place.
The point about Lumen stands though. Baked lighting would have been much better in this case.
It's worse than that: in the Silent Hill remake, everything behind the fog is rendered too. Yes, you read that right: they render a whole town with complex geometry, then hide it with fog so you see none of it.
Most good looking games built with Unity don’t ’look like Unity games’ so people don’t think of them as constituting an example of ‘what Unity games look like’. So the archetype for ‘what a Unity game looks like’ remains at ‘pretty rough’.
The ‘art’ of making stuff look good has not been lost at all. It’s just very unevenly distributed.
When a team has good model makers and good texture artists and good animators and good visual programming, it looks great, whether it’s built in Unreal or Unity or a bespoke engine or whatever.
I don’t think that is what people are getting at, since they uniformly want more texture detail.
There are a lot of technically polished Unity titles that get knocked because they look like very well rendered plasticine, for want of a better description.
For example, there was an argument on here not too long ago where various people pushing the “old graphics were better” (simplification) did not understand or care that the older titles had such limited lighting models.
In the games industry I recall a lot of private argument on the subject of whether the art teams will ever understand physically based models, and this was one of the major motivations for the rigs that photograph things and make materials automatically (in AAA since around 2012). The now-widespread adoption of the Disney model, because it is understandable, has contributed to a bizarre uniformity in how things look that I do think some find repulsive.
Edit to add: I am not sure this is a new phenomenon. Go back to the first showing of Wind Waker for possibly the most notorious reaction.
There's an insistence that materials can overcome lacking texturing and normal mapping. It's not true, but it's a result of a lot of marketing fluff from things like Unreal Engine being misunderstood or misrepresented. Did you know that in Super Mario Sunshine, for the "sharp" shadows the GameCube was unable to render, they actually used flattened meshes instead? In Delfino Plaza the shadows under the canopies near the Shine Gate are actually meshes instead of textures. Meanwhile the tiled plaza that the mesh shadows lie on looks so nice because it's not one giant texture; it's actually several dozen 128x128px textures all properly UV mapped. In a modern game you'd get two brick textures and a noise pattern to blend them, and they'd all be 2048x2048px, with the shadows being raytraced so they have sharper edges.
Ironically, as we've gotten hardware with more VRAM and higher bus speeds, we've decided to go with bigger textures instead of more of them. The same with normal mapping: instead of using normal mapping alongside more subdivided models, we've just decided that normal maps are obsolete and that physically modelling all the details is the technologically forward way. Less-pointy spheres are one thing, but physically modelling all the cracks and scrapes on the sphere is just stupid and computationally wasteful.
> Ironically, as we've gotten hardware with more VRAM and higher bus speeds, we've decided to go with bigger textures instead of more of them. The same with normal mapping: instead of using normal mapping alongside more subdivided models, we've just decided that normal maps are obsolete and that physically modelling all the details is the technologically forward way.
This right here is precisely what I alluded to in another reply as the motivator for generating meshes and PBR materials from controlled photography. Basically you now have enough parameters per texel, which interact in distinctly unintuitive ways, that authoring them is a nightmare, hence people resorting to what you describe.
Easier to market "more resolution" and "more polygons" than masterful use of uv mapping.
You can get something working quite quickly (especially with things like Unity) - but to get them looking amazing takes extra skill and polish.
Even a "2D" game like Factorio has amazing polish difference between original release, 1.0, and today.
(This can very obviously be seen with modded games, because the modded assets often are "usable" but don't look anywhere near as polished as the main game.)
I replayed Half-life 2 recently and was struck, even without high-res texture packs, how amazing the game still looks today.
I think this is because of how extremely cleverly they picked the art style for the game. You have a lot of diffuse surfaces for which prebaking the lighting just works. Overcast skies allow for diffuse ambient lighting rather than very directional lights, which force angle-dependent shading and sharp high contrast shadow outlines. And the overwhelming majority of glossy surfaces are not too shiny which also helps out a lot. All of these are believable choices in this run-down, occupied, extremely dystopian world. And the texturing with its muted color palette pulls it all together.
There's been a rumor going around that developers move away from prebaked lighting primarily because it complicates their workflow.
That's why I think really good art direction beats raw graphical power any day. Source was pretty impressive back in the day, but the bit that's stood the test of time is just how carefully considered the environments and models are. Valve really put their resources into detailing and maximizing the mileage they got out of their technical constraints, and it still looks cohesive and well-designed 20 years later
Still baffles me how unnerving the Ravenholm level is even today. It's got a creepy, unsettling vibe, 20 years later, entirely due to really decent art direction.
Definitely. A hyper-talented team combining new physics-based gameplay, art style and rendering technology made something just amazing.
Half-life 2 has received multiple updates to shading and level of detail since it was released, so it looks a little better than it did at release. Still, it was already a visually impressive game at release.
I just replayed Half Life 2 less than a week ago! I also caught myself thinking, "the levels may not be as detail filled as modern games, but the artistic direction both in graphics and level design is better than many modern designers with bigger budgets."
Great! I really liked the intro, with the Socialist state-style architecture and processes, and that degrading infrastructure contrasting strongly with the sleek, modern weaponry held by the oppressors. I could've just walked around that world and been pretty happy with the game!
You might enjoy "Black Mesa", HL1 remade with the HL2 engine. Played it during the pandemic. No Regrets.
Black Mesa is how I remember the original game. Worth every second I spent with it!
Did you play the original Half-Life 2 from 2004 or one of the "remasters" (though they weren't called that) that comes every few years that updates the graphics and/or engine slightly?
I don't think there's any official way to play the original 2004 version (or even the Source 2006/Episode One version either). The Xbox version is probably closest but they used palettised textures for the Xbox version - something that no PC version of Source ever supported - probably to get it to run okay.
Fair question - no, I just played whatever's on Steam, on Linux. Maybe the textures are higher quality, but I remember the physics-based gameplay fresh as when I was playing in 2004!
Yeah, it was great. They really pulled out all the stops when it came to cinematic quality on that one. They also did a lot of second-order things, like marrying the scenes to the plot, that a lot of games don't do well or at all.
As someone currently working with a little team trying to make low-poly games using Godot - this is awesome!
> Also when creating things with nodes, you have to go back and forth between node GUI and code.
> All of the mainstream engines have a monolithic game editor. It doesn't matter how many features you use from it, you still have to wait 10 minutes for all of them to load in.
These notes really resonated; the debug loop even with Godot, using minimal fancy features, felt a lot slower than other contexts I've programmed in. Multiple editors working around a single data file spec is also a cool idea! Having found that a unified IDE makes it easier for different developers to create merge conflicts, I could see how more purpose-specific editors might also help developers in different roles limit the scope and nature of their changes. Keen to see how the engine progresses!
I am pretty proud of figuring out how to TDD a C# module without booting Unity for a hackathon last month.
Managed to contribute my bit from an underpowered netbook.
I had never written a line of C# before, but I'll be damned if I'm going to concede TDD from the CLI. I knew it could be done, and I made it work. Everybody thought I was crazy, though, and none of the sponsors' DevRel were any help.
And, of course, the biggest point of friction for us, that weekend, was our beefiest machine still had to boot and reboot the damned Unity IDE for a thousand years! Incredible the fetters some folks tolerate.
I'm not very familiar with Unity and its limitations / the difficulty of this task. What challenges did you encounter and how did you solve this problem?
You reference "Turbobloat" and engines being "bloated" - which is to some extent fair. But it is maybe worth describing what that means to you - what features you consider "bloat" and which you have omitted from the Tramway project. To some the inclusion of an RPG framework may be considered bloat, for example, yet there is one present in Tramway.
That's why I added it as an optional extension. It is a part of the larger engine project, but it is completely optional.
I like the C++ principle of paying only for what you use.
Understandable, but the main thing was - you lean a lot on the idea of "TurboBloat" being this universally understood concept. And I think many people might have a vague feeling that a lot of modern software is slow and "bloated", but you may want to be clear on what you consider "bloat".
The RPG engine was just an example of why it may not be such a universal thing, I'm not saying it's bad - but clearly you think that is not "bloat" whereas to some it might be. So it's maybe better to head this off at the pass and just write a little paragraph with some examples of bloat you have observed in other engines that you have consciously avoided in Tramway.
> This article will cover the usage of the framework for beginners who are either scared of C++ or just bad at it
I'm in the latter camp and want to thank you for your "Getting Started" Page. The teapot appeared and I understood things I did not think I would understand. I do not have time to finish your tutorial at the moment (due to only having 30 whole minutes for lunch), but I want to, which says more about how entertaining and accessible it is than anything.
Did anyone else find the Design Patterns page? It's a score board with a goal at 100%. I love this so much.
Linked from the home page:
”Design patterns used 82%.
When all of the patterns get used, I will delete the project and rewrite it in Rust. With no OOP.”
I was looking for ages and still haven’t found this.
https://racenis.github.io/tram-sdk/patterns.html
You have to click on "Enterprise Mode" to find it:
> Design Patterns Used
> 82%
It only supports up to 800x600 resolution? For real? I know people like low res games and this is targeting old hardware but that is surprisingly low to me given the touting of how optimized this is.
Think of it as a fantasy console, like PICO-8, which despite its extreme restrictions is home to some incredible content, some of which exceeds what comes out of many big-studio engines. The imposed ceiling allows a solo dev or a team to concentrate on delivering gameplay and vivacious content instead of graphical gimmicks that eat resources for both consumers and creators.
Nobody argues that FTL, Minecraft, Baba Is You, Stardew Valley, RuneScape, or Dwarf Fortress are not a high enough resolution.
Minecraft is a bad example. It uses low resolution textures, but the screen resolution is as big as your display. I'm not even sure what the maximum is.
Can you make it a bit less photorealistic? I'm afraid that people would confuse reality with the games created with it and it could pose a danger to society.
Do you plan to create some videos showing the process of setting up a basic example?
Very cool! There need to be more options for developers with lower-end boxes, for gamers with low-end hardware. Unreal Engine 5 is a lost cause nowadays without 64GB of RAM, Unity is a mess and there need to be more options than Godot.
In my youth I cut my teeth on the Quake 2 SDK, and even without a 3D suite and a C compiler I could get creating. When the Rage toolkit became available, almost none of the community was as besotted with eagerness as before. It was a 30GB+ download with some hefty base requirements. While Rage could run on a 4-core machine, not many gamers at the time had 16-core Xeons and 16GB of RAM! The worst the HL2 modding scene had to contend with was running Perl on Windows.
Still waiting for bevy to get an official editor.
Good. This is exactly what I've been complaining about for decades now...
I also have my own engine, although it needs some refurbishment. I've never quite found the time to polish it to a point where it can be sold. It also runs on tiny old devices, although if you limit yourself to desktop hardware, that means anything from the last 30 years or so. It also has a design that allows it to load enormous (i.e. universe-scale) data by streaming, with a most often imperceptible loading time... on the iPhone 4, in about 200ms you are in an interactive state which could be "in game".
Unity and Unreal are top-tier garbage that don't deserve our money and time. The bigger practical reason to use them is that people have experience and the plugin and extension ecosystems are rich and filled with battle tested and useful stuff.
Bespoke big-company engines are often terrible too. Starfield contains less real-world data than my universe app, but somehow looks uglier and needs a modern PC to run at all. Mine runs on an iPhone 4, looks nicer, and puts you in the world in the first 200ms... you might think it's not comparable, but it absolutely is: all of the same techniques could be applied to get exactly the same quality of result with all their stacks and stacks of art and custom data - and they could have a richer bunch of real-world data to go with it!
>Unity and Unreal are top-tier garbage that don't deserve our money and time. The bigger practical reason to use them is that people have experience and the plugin and extension ecosystems are rich and filled with battle tested and useful stuff.
Both are effectively magical sandboxes where platform support is someone else's problem.
Unity is still pretty great, but it's chained to a company that has no real business plan for sustainability.
Unreal is okay, but developers aren't using it right. For any bigger project you should customize the engine for your needs, or at the very least spend some time to optimize.
But we need to ship and we need to ship now.
Blame the developers not the tools.
i've been doing this for decades and my bedroom work has never done anything but put unreal and unity to shame. from top to bottom i can not understand the ignorance of their design from a simple "a programmer is making this" standpoint; it comes from a legacy of "a rookie wannabe with too much money had a good shot and too much promotion"
unreal is fucking awful, it's a masterclass in how not to make:
* components
* hierarchies
* visual scripting
* networking
* editors
* geometry
* rendering
* culling
* in-game ui
* editor ui
* copy-paste
* kinematics
* physics integration
* plugin support
* build system
it's just a tower of mistakes i learned not to make before i dared to even enter the industry
it is fantastically and incredibly bad.
unity is a bit similar but they add c# complexity to the mix and in the beginning that was a much bigger disaster, especially going with mono. .NET was an enormous misstep by microsoft and remains so, although it improves over time they could have just not gotten it so incredibly wrong to start with.
i could go on.
i definitely blame the developers of the terrible tools. i couldn't have made them that badly at most points in my career, including the super early days in some cases.
they are also hard to fix because of the staggering depth of the badness.
if you would like more specifics feel free to poke; it's more about not typing a wall of text than the cognitive load of knowing better, which is around zero.
oh... and the garbage collection is garbage that enables incompetents to make more garbage. never needed or wanted it. i had one hard memory leak to deal with in my life in native code. and a fucking zillion in their shit fest.
EDIT: i shit you not, it has not learned my first lessons from being an 8 year old trying to draw mandelbrot sets in qbasic.
I don't know anything about game programming but I quite approve of your sense of humor.
You said it's compatible with hardware from 15 years ago, but one of the examples has the graphical complexity of Half-Life from about 25 years ago. Could this engine be optimized further to run on hardware from that vintage, or at least closer to it? It would be pretty cool making games that can run on a 32-thread Ryzen 9950X monster but scale all the way down to a 1GHz Pentium III and a Voodoo 3.
The oldest computer that I have tried running this engine on is an HP laptop from 2008, running a 32-bit version of Windows XP.
It seemed to work fine, but I did have some issues with the Direct3D 9 renderer. The renderer works fine on other computers, so I have no idea if it's a driver bug (Intel tends to have buggy drivers) or if it's a bug on my part.
The biggest problem with using old hardware is drivers. Older drivers will only work on older operating systems and it's difficult to find C++20 compilers that will work on them.
You can use modern MSVC or Clang with an old C runtime/Windows SDK. It's a pain in the ass since new compilers are way stricter with what they compile, so you get a bunch of warnings, but it will work.
Just wanna say the website aesthetic is legendary. Very on brand.
Except that it would be way better if it wasn't arbitrarily limited to a tiny column. I have a large screen, use it please. Don't make me dig into the developer console to undo your fixed width in order to have a pleasant reading experience.
Lots of people don't think super wide text is pleasant.
dude, it's period-correct; just hit ctrl/cmd and plus and zoom in like the rest of us.
Seems pretty entitled if you ask me.
makes me feel like a kid again.
This is really cool. You should organize a game jam for it.
How is the wasm support? My main issue with Godot was large bundle sizes and slow load times. (GameMaker kicks its ass on both, but I never got the hang of it.)
I would say that it is way too early for a game jam.
The WebAssembly builds seem to work fine. A basic project takes up around 20MB and takes a couple of seconds to load in, so it's not great, but then again I haven't done any optimizations for this.
>too early for a game jam
All the more reason! Then you'd fix it faster ;)
From the perspective of someone who's dabbled in 3D graphics, and has made an engine for 3D visualizations for my science projects:
What is blocking this from high resolutions, and dynamic or smooth lighting? The former is free, and you can do the latter in Vulkan/DX/Metal/OpenGL etc. using a minimal vertex and fragment shader pair.
There's literally nothing preventing you from dragging the edge of the engine window and resizing it, or calling the screen resize function from the C++ or Lua API.
That bit about 24-bit color and 800x600 resolutions was mostly meant to be a fun nod to promotional text that you could find on the backs of old game boxes.
The default renderer for the engine is meant to emulate what you could achieve with a graphics card that has a fixed-function graphics pipeline.
I'll do a more modern renderer later; for now I am mostly focusing on the engine architecture, tools and workflows.
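For readers who never touched that era of hardware: the fixed-function look largely comes from lighting evaluated per vertex with a simple Lambert term and then interpolated across the triangle (Gouraud shading). A minimal sketch of that math in plain C++, as an illustration of the general technique rather than Tramway's actual renderer code:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static Vec3 Normalize(const Vec3& v) {
    float len = std::sqrt(Dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Fixed-function style: one directional light, Lambert diffuse term computed
// per vertex; the rasterizer then interpolates the resulting color (Gouraud).
static Vec3 VertexColor(const Vec3& normal, const Vec3& to_light,
                        const Vec3& light_color, const Vec3& ambient) {
    float n_dot_l = std::max(0.0f, Dot(Normalize(normal), Normalize(to_light)));
    return {ambient.x + light_color.x * n_dot_l,
            ambient.y + light_color.y * n_dot_l,
            ambient.z + light_color.z * n_dot_l};
}

int main() {
    Vec3 color = VertexColor({0, 1, 0}, {0, 1, 0}, {0.8f, 0.8f, 0.8f}, {0.1f, 0.1f, 0.1f});
    std::printf("vertex color: %.2f %.2f %.2f\n", color.x, color.y, color.z);
}
```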
Great info! That answers my question entirely.
> But what if all that you really want to make is just a lowpoly horror roguelite deckbuilder simulator?
Is this a reference to Inscryption?
Neat project!
By the way, to see a great example of how a modern game can be made using the classic Half-Life engine, look at the fan-made game Half-Life: Echoes [1].
It actually looks pretty decent, and the gameplay is top notch.
[1] https://www.youtube.com/watch?v=fBQKi6vGX8U
> It does what Godoesn't.
> I am not reinventing the wheel, I am disrupting the wheel industry.
I am laughing out loud
> When all of the patterns get used, I will delete the project and rewrite it in Rust. With no OOP.
https://racenis.github.io/tram-sdk/patterns.html
Love it.
Woah, it's cutting it close - just 3-5 aren't used! The Rust port might be on the horizon :D
Time to cook up a PR.
> Btw, the name is probably temporary
It's announced, and the name is fine, so it'll stick :)
Love the entity init->use->yeet cycle. Fantastic terminology, may steal it.
I love the retro aesthetic of your website - it perfectly matches the spirit of the project. The detailed documentation and transparent roadmap on GitHub are excellent. It's clear you've put a lot of thought and effort into making this accessible for developers. Great job on the presentation overall!
This looks really cool, great work. One thing I want to preregister though: I bet against the whole Entity subclass thing. 60% of the way through the first serious-business project, you're going to RUE THE DAY. I'll look forward to seeing what people do :)
This sounds pretty cool! I like the name too, I would keep it like that.
This website rules.
This is a really cool project, and I love the writing style.
I am also in the early days of writing a very primitive 2.5D Raycasting engine [0] (think Wolfenstein3D) and have just got to texture mapping. Very fun
It's open source and written in C, a pretty small and easy-to-follow codebase so far.
[0]- https://github.com/con-dog/2.5D-raycasting-engine/blob/maste...
Could call it Mega McBloatface(?)
The demo(s) should be linked from the page so that HN can complain that the game is too hard.
https://racenis.itch.io/sulas-glaaze
https://racenis.itch.io/froggy-garden
It runs well in Firefox on my low end laptop.
Can you add the rule:
to the page, please? no_gifs.css is alright, but I need to visit the page (and run JavaScript) before I can find and click it, and by that point the damage is done.
The writeup, demos and proofs of concept, along with transparent roadmap/todos on the GitHub page are top notch. Great presentation. I definitely see myself trying this.
This is evidence of a great moment in modern indie game dev: the power of fun and simple prototyping.
This is fantastic, actually. I love that this will let us create games in the late 90s FPS style but with all the niceties of modern hardware. Now if only I had any skill in 3d modelling...
Typo, "Trawmay"
> Everyone always says that you "shouldn't create an open-world RPG", but that's just because they have never tried using the Trawmay SDK.
Love it <3
I have very similar, strong opinions about game engines and I think this is a great project. I am definitely going to mess around with this after work today.
I saw "fixed function pipeline" and immediately think of RTX Remix. This could've been raytracing modded in to add Turbobloat lol
Makes me wonder how far we can go with simple but high-quality light maps.
It's a practical way to bring global illumination to the masses without real-time ray tracing.
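To sketch why lightmaps stay cheap at runtime: the global illumination is precomputed offline into a texture, and each frame just looks it up and multiplies it into the surface color. A minimal, engine-agnostic C++ illustration with hypothetical types (nearest-neighbor sampling, u/v assumed in [0,1]):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Color { float r, g, b; };

// A baked lightmap: direct and bounced lighting computed offline and stored
// per texel, addressed through the level geometry's second UV set.
struct Lightmap {
    int width = 0, height = 0;
    std::vector<Color> texels;  // width * height entries, row-major

    Color Sample(float u, float v) const {  // nearest-neighbor for brevity
        int x = std::clamp(static_cast<int>(u * width), 0, width - 1);
        int y = std::clamp(static_cast<int>(v * height), 0, height - 1);
        return texels[static_cast<std::size_t>(y) * width + x];
    }
};

// The per-pixel cost of "global illumination" is one fetch and one multiply.
Color ShadeTexel(const Color& albedo, const Lightmap& lightmap, float u, float v) {
    Color light = lightmap.Sample(u, v);
    return {albedo.r * light.r, albedo.g * light.g, albedo.b * light.b};
}

int main() {
    Lightmap lightmap{2, 2, {{1, 1, 1}, {0.5f, 0.5f, 0.5f}, {0.2f, 0.2f, 0.2f}, {0, 0, 0}}};
    Color out = ShadeTexel({0.8f, 0.6f, 0.4f}, lightmap, 0.75f, 0.25f);
    std::printf("shaded: %.2f %.2f %.2f\n", out.r, out.g, out.b);
}
```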
I think a project like this is a good idea with the popularity of retro 3D games and "de-makes" nowadays.
Using a modern engine seems overkill
I fucking love this!
Hope some initial tutorials become available. I’ll gladly contribute some but I need a little guide to get started.
This is great! I'm wondering if there's anything on the roadmap for multiplayer support?
Wait, so what is the bit about Morrowind and Half life? Doesn't seem to be mentioned anywhere.
racenis, what program did you use to draw the header graphic?
I dream of a Mac port, but it's beyond my skills.
I made the website header in GIMP. The logo in the repository README was made in a very old version of MS office.
Can this be used as an alternative to Hammer to develop HL maps/mods?
It showed TrenchBroom being used to make maps, and I don't think that can be used to develop GoldSrc maps, so most likely not.
I like the name. It's the SDK that gives the name meaning anyway.
Damn this looks sweet! I’m gonna check this out. Cool project!
Why aren't more people commenting about Dr. Pepper?
Can it run on a MS-DOS machine with 640 KB of RAM?
That would call for a Wii port ;)
Don't understand shit, but congrats on the website. Is this React 19 ?
Based.
love the revolving toilet
The filename is 'poland.gif', I wonder what's the message there.
It's the same GIF that's used in the Polish milk soup song video.
In case you missed it, have a look at the page about animation: https://racenis.github.io/tram-sdk/learn/animations.html
10/10 choice of model and animation, this website is amazing.
I can't decide if this article is satire.
License?
You've obviously put a lot of effort into this, but I'm always lost at how people publish something open source and forget to actually put a license on it. As it stands, it's technically closed source: hypothetically, if you become a monk in the woods next week, no one else can fork your code.
I just realized that I had forgotten to actually add the license file to this repository. Added it now.
The license is MIT. Thanks for noticing.
An MIT license file was added (or edited) a minute ago in the repo :)