Comment by TeMPOraL
5 years ago
> Put logic closest to where it needs to live (feature folders)
Can you say more about this?
I think I may have stumbled on a similar insight myself. In a side project (a roguelike game), I've been experimenting with a design that treats features as first-class, composable design units. Here is a listing of the game-features subfolder in the source tree:
actions
collision
control
death
destructibility
game-feature.lisp
hearing
kinematics
log
lore
package.lisp
rendering
sight
simulation
transform
An extract from the docstring of the entire game-feature package:
"A Game Feature is responsible for providing components, events,
event handlers, queries and other utilities implementing a given
aspect of the game. It's primarily an organization tool for gameplay code.
Each individual Game Feature is represented by a class inheriting
from `SAAT/GF:GAME-FEATURE'. To make use of a Game Feature,
an object of such class should be created, preferably in a
system description (see `SAAT/DI').
This way, all rules of the game are determined by a collection of
Game Features loaded in a given game.
Game Features may depend on other Game Features; this is represented
through dependencies of their classes."
The project is still very much work-in-progress (procrastinating on HN doesn't leave me much time to work on it), and most of the above features are nowhere near completion, but I found the design to be mostly sound. Each game feature provides code that implements its own concerns, and exports various functions and data structures for other game features to use. This is an inversion of traditional design, and is more similar to the ECS pattern, except I bucket all conceptually related things in one place. ECS Components and Systems, utility code, event definitions, etc. that implement a single conceptual game aspect live in the same folder. Inter-feature dependencies are made explicit, and game "superstructure" is designed to allow GFs to wire themselves into appropriate places in the event loop, datastore, etc. - so in game startup code, I just declare which features I want to have enabled.
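To make the shape of it concrete, a single feature definition might look roughly like this - only SAAT/GF:GAME-FEATURE is real here; the class, the dependency protocol and MAKE-GAME are simplified stand-ins for the example:

    ;; Illustrative sketch only - FEATURE-DEPENDENCIES and MAKE-GAME are
    ;; hypothetical names, not the project's actual API.
    (defclass destructibility (saat/gf:game-feature)
      ()
      (:documentation "Lets entities take damage and be destroyed."))

    ;; Dependencies on other Game Features, expressed through the class.
    (defmethod feature-dependencies ((gf destructibility))
      '(collision))

    ;; Game startup: declare which features this particular game is made of.
    (make-game :features '(simulation actions collision destructibility log))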
(Each feature also gets its set of integration tests that use synthetic scenarios to verify a particular aspect of the game works as I want it to.)
One negative side effect of this design is that the execution order of handlers for any given event is hard to determine from code. That's because, to have game features easily compose, GFs can request particular ordering themselves (e.g. "death" can demand its event handler to be executed after "destructibility" but before "log") - so at startup, I get an ordering preference graph that I reconcile and linearize (via topological sorting). I work around this and related issues by adding debug utilities - e.g. some extra code that can, after game startup, generate a PlantUML/GraphViz picture of all events, event handlers, and their ordering.
(I apologize for a long comment, it's a bit of work I always wanted to talk about with someone, but never got around to. The source of the game isn't public right now because I'm afraid of airing my hot garbage code.)
I'd be interested in how you attempt this. Is it all in lisp?
It might be hard to integrate related things, e.g. physical simulation/kinematics <- related to collisions, and maybe sight/hearing <- related to rendering. Which is all great if information flows one way, as a tree, but maybe complicated if it's a graph with intercommunication.
I thought about this before, and figured maybe the design could be initially very loose (and inefficient), but then a constraint-solver could wire things up as needed, i.e. pre-calculate concerns/dependencies.
Another idea, since you mention "logs" as a GF: AOP - using "join points" to declaratively annotate code. This better handles code that is less of a "module" (appropriate for functions and libraries) and more of a cross-cutting "aspect" like logging. This can also get hairy though: could you treat "(bad-path) exception handling" as an aspect? What about "security"?
> I'd be interested in how you attempt this. Is it all in lisp?
Yes, it's all in Common Lisp.
> It might be hard to integrate related things, e.g. physical simulation/kinematics <- related to collisions, and maybe sight/hearing <- related to rendering.
It is, and I'm cheating a bit here. One simplification is that I'm writing a primarily text-based roguelike, so I don't have to bother with a lot of issues common to real-time 3D games. I can pick and choose the level of detail I want to go into (e.g. whether to handle collisions at tile granularity, or to introduce sub-tile coordinates and maybe even some kind of object shape representation).
> Which is all great if information flows one way, as a tree, but maybe complicated if it's a graph with intercommunication.
The overall simulation architecture I'm exploring in this project is strongly event-based. The "game frame" is basically pulling events from a queue and executing them until the queue is empty, at which point the frame is considered concluded, and simulation time advances forward. It doesn't use a fixed time step - instead, when a simulation frame starts, the code looks through "actions" scheduled for game "actors" to find the one that will complete soonest, and moves the simulation clock to the completion time of that action. Then the action completion handler fires, which is the proper start of the frame - the completion handler will queue some events, handlers of those events will queue further events, and the code just keeps processing events until the queue empties again, completing the simulation frame.
Structure-wise, simulation GF defines the concept of "start frame" and "end frame" (as events), "game clock" (as query) and ability to shift it (as event handler), but it's the actions GF that contains the computation of next action time. So, simulation GF knows how to tell and move time, but actions GF tells it where to move it to.
This is all supported by an overcomplicated event loop that lets GFs provide hints for handler ordering, but also separates each event handling process into four chains: pre-commit, commit, post-commit and abort. Pre-commit handlers fire first, filling the event structure with data and performing validation. Then, commit handlers apply the direct consequences of the event to the game world - they alter the gameplay state. Then, post-commit handlers process further consequences of the event "actually happening". Alternatively, abort handlers process situations where an event was rejected during the earlier chains. All of them can enqueue further events to be processed this frame.
So, for example, when you fire a gun, pre-commit handlers will ensure you're able to do it, and reject the event if you can't. If the event is rejected, abort chain will handle informing you that you failed to fire. Otherwise, the commit handlers will instantiate an appropriate projectile. Post-commit handlers may spawn events related to the weapon being fired, such as informing nearby enemies about the discharge.
This means that e.g. if I want to implement an "ammunition" feature, I can make an appropriate GF that attaches a pre-commit handler to the fire event - checking if you have bullets left and rejecting the event if you don't (events rejected in the pre-commit stage are considered to "have never happened") - and a post-commit handler on the same event to decrease your ammo count. The GF is also responsible for defining appropriate components that store the ammo count, so that (in classical ECS style) your "gun" entity can use it to keep track of ammunition. It also provides code for querying the current count, for other GFs that may care about it for some reason (and for the UI rendering code).
> I thought about this before, and figured maybe the design could be initially very loose (and inefficient), but then a constraint-solver could wire things up as needed, i.e. pre-calculate concerns/dependencies.
I'm halfway there and I could easily do this, but for now I've opted against it, on "premature optimization" grounds. That is, since all event handlers are registered when the actual game starts, I "resolve constraints" (read: convert ordering preferences into a DAG and toposort it; it's dumb and very much incorrect, but works well enough for now) and linearize handlers - so that during gameplay, each event handler chain (e.g. "fire weapon", pre-commit) is just a sequence of callbacks executed in order. It would be trivial to take such a sequence, generate a function that executes them one by one (with the loop unrolled), and compile it down to metal - Common Lisp lets you do stuff like that - but I don't need it right now.
> Another idea, since you mention "logs" as a GF
FWIW, logs GF is implementing the user-facing logs typical for roguelike games - i.e. the bit that says "You open the big steel doors". Diagnostic logs I do classically.
> AOP - using "join points" to declaratively annotate code
In a way, my weird multi-chain event loop is a reimplementation of AOP. Method combinations in Common Lisp are conceptually similar too, but I'm not making big use of them in game feature-related code.
> This can also get hairy though: could you treat "(bad-path) exception handling" as an aspect? What about "security"?
Yeah, I'm not sure if this pattern would work for these - particularly in full-AOP, "inject anything anywhere" mode. I haven't tried it. Perhaps, with adequate tooling support, it's workable? Common Lisp is definitely not a language to try this in, though - it's too dynamic, so tooling would not be able to reliably tell you about arbitrary pointcuts.
In my case, I restricted the "feature-oriented design" to just game features - I feel it has a chance of working out, because in my mind, quite a lot of gameplay mechanics are conceptually additive. This project is my attempt at experimentally verifying if one could actually make a working game this way.
Sounds interesting. Is the game open source? Published anywhere?
Not at this moment. I intend to publish it under GPLv3, after I get the MVP done.
I've gone down roads similar to this. Long story short - with respect to games, the architecture solves for a lower-priority class of problem, so it doesn't pay a great dividend, and you add a combination of boilerplate and dynamism that slows down development.
Your top issue in the runtime game loop is always with concurrency and synchronization logic - e.g. if A spawns before B and A's hitbox overlaps with B's, is the first frame on which a collision event occurs the frame of spawning, or one frame after? That's the kind of issue that is hard to catch, occurs infrequently, and often has some kind of catastrophic impact if handled wrongly. But the actual effect of the event is usually a one-liner like "set a stun timer" - there is nothing to test with respect to the event itself! The perceived behavior is intimately coupled to when its processing occurs and when the effects are "felt" elsewhere in the loop - everything's tied to some kind of clock, whether it's the CPU clock, the rendered frame, turn-taking, or an abstracted timer. These kinds of bugs are a matter of bad specification, rather than bad implementation, so they resist automated testing mightily.
The most straightforward solution is, failing pure functions, to write more inline code (there is a John Carmack posting on inline code that I often use as a reference point). Enforce a static order of events as often as possible. Then debugging is always a matter of "does A happen before B?" It's there in the source code, and you don't need tooling to spot the issue.
The other part of this is, how do you load and initialize the scene? And that's a data problem that does call for more complex dependency management - but again, most games will aim to solve it statically in the build process of the game's assets, and reduce the amount of game state being serialized to save games, reducing the complexity surface of everything related to saves (versioning, corruption, etc.). With a roguelike there is more of an impetus to build a lot of dynamic assets (dungeon maps, item placements, etc.) which leads to a larger serialization footprint. But ultimately the focus of all of this is on getting the data to a place where you can bring it back up and run queries on it, and that's the kind of thing where you could theoretically use SQLite and have a very flexible runtime data model with a robust query system - but fully exploiting it wouldn't have the level of performance that's expected for a game.
Now, where can your system make sense? Where the game loop is actually dynamic in its function - i.e. modding APIs. But this tends to be a thing you approach gradually and grudgingly, because modders aren't any better at solving concurrency bugs and they are less incentivized to play nice with other mods, so they will always default to hacking in something that stomps the state, creating intermittent race conditions. So in practice you are likely to just have specific feature points where an API can exist (e.g. add a new "on hit" behavior that conditionally changes the one-liner), and those might impose some generalized concurrency logic.
The other thing that might help is to have a language that actually understands that you want to do this decoupling and has the tooling built in to do constraint logic programming and enforce the "musts" and "cannots" at source level. I don't know of a language that really addresses this well for the use case of game loops - it entails having a whole general-purpose language already and then also this other feature. Big project.
The approach I've been taking instead is to develop "little languages" that compose well for certain kinds of features - e.g. instead of programming a finite state machine by hand for each type of NPC, devise a subcategory of state machines that I could describe as a one-liner, with chunks of fixed-function behavior and a bit of programmability. Instead of a universal graphics system, have various programmable painter systems that can manipulate cursors or selections to describe an image. The concurrency stays mostly static, but the little languages drive the dynamic behavior, and because they are small, they are easy to provide some tooling for.
Thanks for the detailed evaluation. I'll start by reiterating that the project is a typical tile-based roguelike, so some of the concerns you mention in the second paragraph don't apply. Everything runs sequentially and deterministically - though the actual order of execution may not be apparent from the code itself. I mitigate it to an extent by adding introspection features, e.g. code that dumps PlantUML graphs showing the actual order of execution of event handlers, or their relationship with events (e.g. which handlers can send what subsequent events).
I'll also add that this is an experimental hobby project, used to explore various programming techniques and architecture ideas, so I don't care about most constraints under which commercial game studios operate.
> The perceived behavior is intimately coupled to when its processing occurs and when the effects are "felt" elsewhere in the loop - everything's tied to some kind of clock, whether it's the CPU clock, the rendered frame, turn-taking, or an abstracted timer. These kinds of bugs are a matter of bad specification, rather than bad implementation, so they resist automated testing mightily.
Since day one of the project, the core feature was to be able to run headless automated gameplay tests. That is, input and output are isolated by design. Every "game feature" (GF) I develop comes with automated tests; each such test starts up a minimal game core with fake (or null) input and output, the GF under test, and all GFs on which it depends, and then executes faked scenarios. So far, at least for minor things, it works out OK. I expect I might hit a wall when there are enough interacting GFs that I won't be able to correctly map desired scenarios to actual event execution orders. We'll see what happens when I reach that point.
> that's the kind of thing where you could theoretically use SQLite and have a very flexible runtime data model with a robust query system - but fully exploiting it wouldn't have the level of performance that's expected for a game.
Funny you should mention that.
The other big weird thing about this project is that it uses SQLite for runtime game state. That is, entities are database rows, components are database tables, and the canonical gameplay state at any given point is stored in an in-memory SQLite database. This makes saving/loading a non-issue - I just use SQLite's Backup API to dump the game state to disk, and then read it back.
Performance-wise, I tested this approach extensively up front, by timing artificial reads and writes in expected patterns, including simulating a situation in which I pull map and entity data in a given range to render them on screen. SQLite turned out to be much faster than I expected. On my machine, I could easily get 60FPS out of that with minimal optimization work - but it did consume most of the frame time. Given that I'm writing an ASCII-style, turn(ish) roguelike, I don't actually need to query all that data 60 times per second, so this is quite acceptable performance - but I wouldn't try that with a real-time game.
> The other thing that might help is to have a language that actually understands that you want to do this decoupling and has the tooling built in to do constraint logic programming and enforce the "musts" and "cannots" at source level. I don't know of a language that really addresses this well for the use case of game loops - it entails having a whole general-purpose language already and then also this other feature. Big project.
Or a Lisp project. While I currently do constraint resolution at runtime, it's not hard to move it to compile time. I just didn't bother with it yet. Nice thing about Common Lisp is that the distinction between "compilation/loading" and "runtime" is somewhat arbitrary - any code I can execute in the latter, I can execute in the former. If I have a function that resolves constraints on some data structure and returns a sequence, and that data structure can be completely known at compile time, it's trivial to have the function execute during compilation instead.
> The approach I've been taking instead is to develop "little languages" that compose well for certain kinds of features
I'm interested in learning more about the languages you developed - e.g. how your FSMs are encoded, and what that "programmable painter system" looks like. In my project, I do little languages too (in fact, the aforementioned "game features" are a DSL themselves) - Lisp makes it very easy to just create new DSLs on the fly, and to some extent they inherit the tooling used to power the "host" language.
Sounds like you may be getting close to an ideal result, at least for this project! :) Nice on the use of SQLite - I agree that it's right in the ballpark of usability if you're just occasionally editing or doing simple turn-taking.
When you create gameplay tests, one of the major limitations is in testing data. Many games end up with "playground" levels that validate the major game mechanics because they have no easier way of specifying what is, in essence, a data bug like "jump height is too short to cross gap". Now, of course you can engineer some kind of test, but it starts to become either a reiteration of the data (useless) or an AI programming problem that could be inverted into "give me the set of values that have solutions fitting these constraints" (which then isn't really a "test" but a redefinition of the medium, in the same way that a procedural level is a "solution" for a valid level).
It's this latter point that forms the basis of many of the "little languages". If you hardcode the constraints, then more of the data resides in a sweet spot by default and the runtime is dealing with less generality, so it also becomes easier to validate. One of my favorite examples of this is the light style language in Quake 1: https://quakewiki.org/wiki/lightstyle
It's just a short character string that sequences some brightness changes in a linear scale at a fixed rate. So it's "data," but it's not data encoded in something bulky like a bunch of floating point values. It's of precisely the granularity demanded by the problem, and much easier to edit as a result.
A short step up from that is something like MML: https://en.wikipedia.org/wiki/Music_Macro_Language - now there is a mostly-trivial parsing step involved, but again, it's "to the point" - it assumes features around scale and rhythm that allow it to be compact. You can actually do better than MML by encoding an assumption of "playing in key" and "key change" - then you can barf nearly any sequence of scale degrees into the keyboard and it'll be inoffensive, if not great music. Likewise, you could define rhythm in terms of rhythmic textures over time - sparse, held, arpeggiated, etc. - and so not really have to define the music note by note, making it easy to add new arrangements.
With AI, a similar thing can apply - define a tighter structure and the simpler thing falls out. A lot of game AI FSMs will follow a general pattern of "run this sequenced behavior unless something of a higher priority interrupts it". So encode the sequence, then hardcode the interruption modes, then figure out if they need to be parameterized into e.g. multiple sequences, if they need to retain a memory scratchpad and resume, etc. A lot of the headache of generalizing AI is in discovering needs for new scratchpads, if just to do something like a cooldown timer on a behavior or to retain a target destination. It means that your memory allocation per entity is dependent on how smart they have to be, which depends on the AI's program. It's not so bad if you are in something as dynamic as a Lisp, but problematic in the typical usages of ECS where part of the point is to systematize memory allocation.
With painting what you're looking for is a structuring metaphor for classes of images. Most systems of illustration have structuring metaphors of some kind specifically for defining proportions - they start with simple ratios and primitive shapes, and then use those as the construction lines for more detailed elements which subdivide the shapes again with another set of ratios. This is the conceptual basis of the common "6-8 heads of height" tip used in figure drawing - and there are systems of figure drawing which get really specific about what shapes to draw and how. If I encode such a system, I therefore have a method of automatic illustration that starts not with the actual "drawing" of anything, but with a proportion specification creating construction lines, which are then an input to a styling system that defines how to connect the lines or superimpose other shapes.
Something I've been experimenting with to get those lines is a system that works by interpolation of coordinate transforms that aggregate a Cartesian and polar system together - e.g. I want to say "interpolate along this Cartesian grid, after it's been rotated 45 degrees". It can also perform interpolation between two entirely different coordinate systems (e.g. the same grid at two different scales). I haven't touched it in a while, but it generates interesting abstract animations, and I have a vision for turning that into a system for specifying character mannequins, textures, etc. Right now it's too complex to be a good one-liner system, but I could get there by building tighter abstractions on it in the same way as the music system.
My main thing this year has been a binary format that lets me break away from text encodings as the base medium, and instead have more precise, richer data types as the base cell type. This has gone through a lot of iteration to test various things I might want to encode and forms I could encode them in. The key thing I've hit on is to encode with a lot of "slack" in the system - each "cell" of data is 16 bytes; half of that is a header that contains information about how to render it, its ID in a listing of user-named types, bitflags defined by the type, a "feature" value (an enumeration defined by the type), and a version field which could be used for various editing features. The other half is a value, which could be a 64-bit value, 8 bytes, a string fragment, etc. - the rendering information field indicates what it is in those general terms, but the definite meaning is named by the user type.
The goal is to use this as a groundwork to define the little languages further - rather than relying on "just text" and sophisticated parsing, the parse is trivialized by being able to define richer symbols - and then I can provide more sophisticated editing and visualization more easily. Of course, I'm placing a bet on either having a general-purpose editor for it that's worthwhile, or being able to define custom editors that trivialize editing, neither of which might pan out; there's a case for either "just text" or "just spreadsheets" still beating my system. But I'd like to try it, since I think this way of structuring the bits is likely to be more long-run sustainable.
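To make the layout concrete, one cell can be packed something like this (the particular split of the 64-bit header into field widths below is just an example, not a fixed decision):

    ;; Rough sketch of packing one 16-byte cell.
    (defun pack-cell (render-info type-id flags feature version value)
      (let ((cell (make-array 16 :element-type '(unsigned-byte 8))))
        ;; Header (8 bytes): render info, user type id, type-defined
        ;; bitflags, "feature" enum, version. Example widths: 8/16/16/8/16.
        (setf (aref cell 0) render-info)
        (setf (aref cell 1) (ldb (byte 8 0) type-id)
              (aref cell 2) (ldb (byte 8 8) type-id))
        (setf (aref cell 3) (ldb (byte 8 0) flags)
              (aref cell 4) (ldb (byte 8 8) flags))
        (setf (aref cell 5) feature)
        (setf (aref cell 6) (ldb (byte 8 0) version)
              (aref cell 7) (ldb (byte 8 8) version))
        ;; Value (8 bytes): meaning is determined by the user type and
        ;; the render-info field.
        (loop for i from 0 below 8
              do (setf (aref cell (+ 8 i)) (ldb (byte 8 (* 8 i)) value)))
        cell))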