Comment by csallen

14 hours ago

> One defining constraint must shape the product... Minecraft is built entirely from blocks. IKEA is flat-pack, self-assembly furniture.

I've been calling these things product primitives. I can't remember where I heard that term, but it refers to things like...

Blocks in Notion. Messages and conversations in Telegram. Frames and layers in Figma. Tweets in Twitter. Cells and sheets in Excel. Tools and layers in Photoshop. Commands in a CLI.

I think good product design comes down to having a very small number of primitives. A bad product doesn't know what its primitives are, or it has so many that everything in the product feels like a unique thing that works in its own unique way. Users then have to learn a ton of different top-level primitives/concepts, which is confusing, intimidating, and hard to teach. Ideally you want just one, two, or three main primitives.

The complexity/power in an app comes from choosing powerful primitives that have depth, that are composable, etc. You can do a lot with Notion blocks. You can do a lot with Excel cells. You can do a lot with a CLI command. You can do a lot with a Minecraft block. There's depth there.
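The CLI case makes the composability point concrete. A hedged sketch (the word list is made up for illustration): the only primitive is "command," and depth comes from piping small commands together.

```shell
# One primitive (the command), composed via pipes: count the three most
# common words in a stream using four small commands and no new concepts.
printf 'apple\nbanana\napple\ncherry\napple\nbanana\n' \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -3
# First line of output: "   3 apple"
```

Each stage knows nothing about the others; the pipe is the composition mechanism, which is where the depth lives.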

Yeah... this sounds a bit like the Alexandrian Pattern Language concepts which directly inspired the Gang of Four's Design Patterns.

I wonder, though, if what you're describing as "product primitives" actually maps more closely to what Alexander later called "Centers," rather than the patterns themselves.

From what I understand, while the software world heavily adopted his patterns, Alexander spent his later career arguing that the ultimate building block of a system is actually a Center: a localized focal point of utility and coherence, e.g. a well-lit courtyard, window seat, or fireplace. A strong center is naturally composable; it "resolves local tension," is made of smaller centers, and acts as a building block to generate larger ones.

When a product feels confusing or bloated, it's rarely out of bad design intent. User needs, even when not glaring, are empirically discoverable, while the true underlying "centers" that could elegantly solve them are incredibly subtle and hard to identify. The path of least resistance is almost always to build a unique, rigid interface for the immediate user need right in front of you; doing the deep architectural work to discover a core primitive that naturally absorbs those needs is much harder.

So maybe that's why we build so many faster horses.

We used to call this “concept count”. You usually want to minimize the number of core concepts that make up your product. I’ve also heard it as the “nouns and verbs” of a product.

I think this philosophy might be oversimplified. Tana has basically two primitives (bullets and supertags) and still manages to be devastatingly complex, to the point that you have to watch hours of tutorials to do very simple things. Conversely, Google Maps has a lot of “primitives” but the UX is fairly tight for 90% of use cases.

  • It applies more to design software, where a user is creating durable things and needs to understand those things themselves. Google Maps is more of an agent: It's responsible for understanding its own complexity and answering your queries.

  • Tana is basically a programming environment disguised as a text editor (in this way, it follows in the grand tradition of emacs you could say)

  • Doesn't Jira have only one primitive, the ticket? Everything else just augments it. You could say those augmentations are separate primitives, but then the same would apply to the other cited examples like Photoshop too.

I used a similar metric when judging programming languages. A language can grow huge, but if it's conceptually small, one can learn it and then let experience compound. Conceptually large languages were a barrier for me; the one where I felt this most was Perl.

> Commands in a CLI … I think what makes for good product design is having a very small number of primitives.

Small but not too small. Case in point: shell scripts (POSIX shell, bash), where the scripting constructs themselves were modelled as commands rather than introducing another set of concepts. We all know the result (a hot, slow mess).
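To make that concrete, here's a minimal sketch of what "scripting modelled as commands" means in practice: in POSIX shell, `[` is literally a command (often an executable at `/bin/[`), and classic arithmetic went through the external `expr` program, one process per addition.

```shell
# Even control flow and arithmetic are commands in classic shell:
# '[' is invoked like any other command, and 'expr' spawns a whole
# process just to add 1 -- no new concepts, but slow and fragile.
i=0
while [ "$i" -lt 3 ]; do
  i=$(expr "$i" + 1)
done
echo "$i"   # prints 3
```

Reusing the command primitive kept the concept count tiny, but at the cost of fork/exec overhead and quoting pitfalls everywhere.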

  • I know it's in vogue to bash Bash but I feel that criticism is unfair.

    Shell scripting is a victim of its own success: it is _so easy_ to get started that most users get value out of knowing the first one percent and never bother to actually learn the rest.

    There aren't many who have read the Bash manual, or know what zsh can do that Bash cannot, etc.

    "Shell scripting is a hot, slow mess" is the same hot slow mess that you get wherever the barrier to entry is extremely low (e.g. early PHP, early JavaScript/frontend development, game development with a game engine where you can just click around in the editor, etc).

    • There’s also the fact that shell scripting is for automation of what you may do interactively. It’s not for stuff where you want data structures to manipulate in memory. Trying to use it like python is an exercise in frustration.
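The point about shell lacking real in-memory data structures can be sketched in a few lines (the file names are hypothetical): a shell "list" is just a whitespace-delimited string, so any item containing a space falls apart under word splitting.

```shell
# Two "list items", one with a space in its name -- but the shell's
# only list representation is a string split on whitespace.
files="report one.txt notes.txt"
count=0
for f in $files; do          # unquoted expansion word-splits the string
  count=$((count + 1))
done
echo "$count"                # prints 3, not 2: 'report one.txt' was split
```

In Python the equivalent list would hold two elements and behave predictably, which is exactly why using shell where you need real data structures turns into an exercise in frustration.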

When I think about products with too many primitives, I instantly think of Snapchat and Instagram - my two least favorite apps.