Comment by jacquesm

5 days ago

As a very long-time C programmer: don't try to be smart. The more you rely on fancy preprocessor tricks, the harder it will be to understand and debug your code.

The C preprocessor gives you enough power to shoot yourself in the foot, repeatedly, with anything from small caliber handguns to nuclear weapons. You may well end up losing control over your project entirely.

One nice example: glusterfs. There are a couple of macros in use there that, when they work, are magic. But when they don't, you lose days, sometimes weeks. This is not the way to solve coding problems; you only appear smart as long as you remember what you've built. Your other self, three years down the road, is going to want to kill the present one, and the same goes for your colleagues a few weeks from now.

> as long as you remember what you've built

yes! like any craft, this works only if you keep practising it.

various implementations of k, written in this style (with iterative improvements), have been in constant development for decades, getting very good use out of these macros.

Losing control of a project is likely more due to the programmers on it than the tools they use. IMHO _anything_ done consistently can be reasoned about and if necessary undone.

  • Not necessarily. Sometimes the rot goes so deep that there is really no way out.

    And the C pre-processor has figured prominently in more than one such case in my career. And it was precisely in the kind of way that is described in TFA.

    For something to be doable, it also needs to make economic sense, and that's the problem with nightmare trickery like this. Initially it seems like a shortcut, but in the long run the price tag keeps going up.

    • Best guess is that your analysis is missing some detail. People, not tools, write programs. Also, any serious discussion here ends up in politics. If you design your software so that the programmers are fungible, then the software suffers regardless of your choices.

Seems to me that this is now exponentially true with AI coding assistants. If you don't understand what you're adding, and you're being clever, you can quickly end up in a situation where you can't reason effectively about your system.

I'm seeing this on multiple fronts, and it's quickly becoming an unsustainable situation in some areas. I expect I'm not alone in this regard.