Why are there both TMP and TEMP environment variables? (2015)

1 day ago (devblogs.microsoft.com)

> My recollection is that most CP/M programs were configured via patching. At least that’s how I configured them. I remember my WordStar manual coming with details about which bytes to patch to do what. There was also a few dozen bytes of patch space set aside for you to write your own subroutines, in case you needed to add custom support for your printer.

Huh. That is interesting, it was before my time, and I never heard of this :D

  • Yes, it was definitely a thing. The patching code had to be in Z80/8080 machine code. I wrote higher performance keyboard and display routines for my copy of Wordstar using this feature.

    • Stuff like that is also cool (reminds me a bit of modding some games), patching machine code to improve performance of a compiled app? Very cool! (My dad might have done that, he has an old ZX81)

      But I thought specifically patching something to configure it is such a weird concept that I never would have thought of.


  • > > My recollection is that most CP/M programs were configured via patching.

    > Huh. That is interesting, it was before my time, and I never heard of this :D

    Yep, it was a thing, and for /some/ programs that were originally CP/M programs (i.e., WordStar 7.0 for DOS) it continued for a long time. The WordStar 7 documentation included patch locations to use (this time, IIRC, for DOS debug.exe) to change various behaviors of the program.

  • Some still sort of do. The stuff that suckless writes is generally configured by changing config.h and recompiling.

    https://suckless.org/

    Edit: oops just saw that this was already mentioned in another subthread on this page.

  • > That is interesting, it was before my time, and I never heard of this

    It was necessary because both RAM and disk space were so severely limited, and because almost every computer came with an assembler.

    Many CP/M programs were expected to run in as little as 32K RAM and 130K of slow-ass floppy disk. Or worse: from a cassette tape. If you had 64K of RAM and a 360K disk, you were something special.

    Unlike today, most programs were optimized for the bottom of the market, not the top. You wanted your program to run on as many systems as possible so you could sell more copies. You didn't just shrug your shoulders and tell people to upgrade their hardware. The failure was yours, not your customers'.

    There simply wasn't room for any kind of external configuration file, or a program to generate that configuration file. Common functions could be accessed via a command-line parameter, but even that logic eats valuable bytes.

    Today people complain about the MacBook Neo having just 8,000,000,000 bytes of RAM, saying you can't do anything in such a limited space.

    Meanwhile, in 1978, people could write an entire rudimentary IDE in 2,048 bytes.
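
For flavor, configuration-by-patching is easy to sketch in modern terms: open the program image, seek to a documented offset, overwrite a byte. The file name, offset, and value below are hypothetical illustrations, not real WordStar patch locations.

```python
# Sketch of configuration-by-patching: overwrite a documented byte in
# a program image. The offset and value are hypothetical examples,
# not real WordStar patch locations.

def patch_byte(path, offset, value):
    """Overwrite the single byte at `offset` in `path` with `value`."""
    with open(path, "r+b") as f:
        f.seek(offset)
        f.write(bytes([value]))

# Hypothetical usage: flip a documented printer-option byte.
# patch_byte("WS.COM", 0x02A6, 0x01)
```

The manual was the schema: it told you which offsets meant what, and a debugger or a tiny script like this was the "settings dialog".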

> Rewind to 1973. The operating system common on microcomputers was CP/M

OK. I love Raymond’s blog but this is crazy. Microcomputers existed only as prototypes in 1973 (things like Intel’s Intellec dev systems) and there were no operating systems for them. Strictly speaking, Kildall did start developing CP/M in 1973, but at that point it ran only on a simulator on a PDP-10 mainframe.

1979, sure. 1973? Way too early…

  • Wikipedia says it was created in 1974, so something's definitely off with the timeline here.

    • Intel 8080 was launched in April 1974 and the development system for it, "Intellec 8 Mod 80", was available soon after that.

      CP/M could be developed only after the launch of the 8080 and the delivery of the development system.

      In UNIX, the environment variables were added in the Seventh Edition (1979-01), together with the Bourne shell.

      I do not remember whether any other command interpreters used something equivalent to environment variables before the UNIX shell (excluding interpreters for general-purpose programming languages, like LISP and APL, where you can run a function in the REPL and that function can access global variables).

      Therefore the quoted year may be a typo for 1979, when environment variables appeared in the UNIX shell, but were not available in the CP/M Console Command Processor (CCP, the predecessor of COMMAND.COM).

  • > 1979, sure. 1973? Way too early…

    Which is fun, because that's the same time difference as between 2020 and now.

    And to think we didn't have ChatGPT in 2020.

A great example of a decision that likely received little to no thought from an early developer but that has long legs and will stick around forever.

Also, chances are programs chose “TMP” because file extensions in MS-DOS were 3 characters, max, and programs used to use “.TMP” for names of temporary files.

I'm confused by the CP/M reference. Author says it'll be important later then proceeds to explain how it had nothing to do with CP/M or the 8080 CPU.

  • Agree, CP/M has nothing to do with the story, nor does the 8080/8086 sidetrack.

    The whole story is that Microsoft just never bothered to standardize, despite using it themselves.

  • If CP/M had used environment variables for configuration, presumably there would have been an established standard for TMP vs. TEMP that DOS would have adopted. The real catch, however, is that CP/M didn’t have directories. Nor did DOS 1.0.

  • could you quote the text you are referring to?

    • "Rewind to 1973. The operating system common on microcomputers was CP/M. The CP/M operating system had no environment variables. That sounds like a strange place to start a discussion of environment variables, but it’s actually important."


It’s similar to how Unix programs weren’t consistent on whether they look for http_proxy or for HTTP_PROXY, and nowadays many(?) are looking for both, but maybe not in the same order.
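
The http_proxy/HTTP_PROXY dance is easy to sketch; the ordering below (lowercase first) is just one common choice, and the fact that it is only a choice is exactly the inconsistency being described:

```python
import os

def get_env_proxy(name="http_proxy"):
    # Check the lowercase form first, then the uppercase one. Real
    # tools disagree on this order (or check only one form), which is
    # the whole problem.
    return os.environ.get(name) or os.environ.get(name.upper())

os.environ.pop("http_proxy", None)
os.environ["HTTP_PROXY"] = "http://proxy.example:8080"
print(get_env_proxy())  # http://proxy.example:8080 (uppercase fallback)
```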

1995-ish. Telstra (Australia Telecom). Probably about 50k desktop computers across the organisation. One day a small file turned up in everyone's network home directory called null. A *nix person had evidently had a go at writing a .bat file.

Why do we need to adopt extant standards? (I was going to ask, why standardise? But realised that might confound the North Americans. : )

  • >One day a small file turned up in everyone's network home directory called null. A *nix person had evidently had a go at writing a .bat file

    I assume that they first tried /dev/null which failed, so then moved onto just plain null?

    Otherwise it would not make sense that a Unix programmer did this. More likely a DOS programmer misspelled NUL as null.

    • Fun fact: "/dev/nul" (with only one L) would have worked, even if there is no directory with that name.

      That's been a feature since DOS 2.0; there was even an undocumented option, AVAILDEV, to make the prefix mandatory instead of having device names present everywhere. But it broke the common trick used to detect whether a directory exists ("if exist c:\some\path\nul").

    • The Unix programmer remembered that there's no /dev/null in DOS and that it's something shorter, and tried null, which worked. Didn't check the directory contents afterwards. So basically your first sentence; doesn't seem at all unlikely to me. (I mean, I think it happened to me at least once too.)

    • I've created a 'NULL' file before too, but it was not a Unix thing... it was just because I got confused about whether it was NULL, as in the programming languages I usually use.

  • Some Logitech driver installation program (not sure which version or what product) did it too... I found a file named NULL on my HD, and of course there was a BAT file with something > NULL.
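
For what it's worth, the portable fix is to never spell the device name yourself. Python, for example, exposes the per-platform name as os.devnull:

```python
import os

# os.devnull is "/dev/null" on POSIX and "nul" on Windows, so code
# that opens it never hard-codes either spelling -- and never leaves
# a stray file literally named "null" in someone's home directory.
with open(os.devnull, "w") as sink:
    sink.write("discarded\n")

print(os.devnull)
```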

> My recollection is that most CP/M programs were configured via patching.

I honestly would have liked that better for a lot of programs than the dotfiles they litter all over my home directory.

  • If people just followed the XDG Base Directory Specification, config-file littering would be a non-issue. More and more projects are adopting it, even holdouts like Firefox.

    • I wonder how many apps actually read the correct XDG environment variables and obey them, versus just hardcoding paths that match their machine’s config paths (typically ~/.config/*)

      I almost want to set up a VM that sets XDG_CONFIG_HOME to ~/.foobar and see how many apps actually respect it, and how many still write to ~/.config.


    • I don't see much difference between a program littering in ~/.app versus littering in ~/.config/app, ~/.local/share/app, ~/.local/state/app and ~/.cache/app


    • I don’t mind ~/.* for config, especially when the config is just one or two files. What I don’t like is programs like go and cargo treating my $HOME as a dumping ground for every file they want to download and/or cache.


    • I contemplated for years and eventually saw someone implement a transparent kernel redirect for programs reaching for ~/.*

  • Part of the philosophy of the slightly odd suckless people is their projects are mostly configured by changing the source code and recompiling. This is I suppose a similar approach in a modern open source vein. Although their general asceticism makes their projects a bit of an acquired taste I suspect.

    • Ohh, acquired taste it is. I had two stints with suckless software. First, in my early twenties, when I had all the time in the world and thought the manliest way to talk to a machine was through low-level C code. I had a whole flow to patch it, and heck, the code is so well written and commented that I was able to understand it. Then, I guess, life happened and I discovered more interesting stuff to spend time on.

      And now in my late twenties, suckless terminal is the only one that works reliably on a shitty old enterprise Linux system at work. Yeah, we've got xterm and Konsole (the older one). I am seeing them in a whole different light now. I haven't read the source code this time, and it is effectively a foreign language to me now, but just being able to have modern features without too many dependencies is a different level of bliss. This time I'm glad to have flexipatch to the rescue, since I'd passed on suckless terminal as a real alternative because I didn't want to patch it manually or resolve merge conflicts!

      Even though I don’t like the elitist attitude of the project, can’t deny they got a point. Why does a terminal emulator need to be so complicated!

      https://github.com/bakkeby/st-flexipatch


    • the general asceticism, frign (on the board of suckless.org e.v.) setting his mailserver hostname to a nazi bunker, them picking a torch march through the german countryside as a bonding activity instead of literally anything else... yeah. at that point why would one want to bother trying to acquire the taste?

  • Well, they are supposed to all be in .config; the problem is many app developers think they are special little boys who deserve their own directory

    • Most modern software uses .config, and I suspect the holdouts are due to cross-platform support issues. Windows may have its own equivalents, but they are radically different from Unix. Developers may not want to deal with those differences, or with the support issues related to them.

      The remaining holdouts tend to be very old applications. (The XDG standards are less than 25 years old, and then you have to give time for them to be adopted.) For some of those applications, it would create support issues even though implementation would be trivial. For others, it would create issues since other software would have to be modified to reflect the changes. For still others, the software never had a distinct configuration directory, so untangling one would be a major effort.

      In the case of the latter, just look at Firefox. Yes, it recently moved the .mozilla directory to .config, but that is in no way reflective of the XDG standards: among other things, there are log files, cache files, and add-ons in there. In my mind, that is worse than having ~/.mozilla. Instead of having a directory that can be cleanly backed up, with the exceptions kept elsewhere, I am left having to sort through everything. I don't blame Firefox for taking that approach, though: users were demanding a clean home directory, the developers had legacy code to deal with, and they simply took the path of least resistance. (That said, Firefox isn't the only culprit here.)

  • I'll take dotfiles I can grep and manage with a text editor over settings littered all over a central binary registry. But maybe that's just what I'm used to.

    • Every single program has to write logic to parse/store/query/validate those values. A common API with a single store can be type-enforced, backed up, and likely easier to work with from an internationalization perspective.

      I do like dotfiles for portable apps where everything the program needs is in one folder. Personally, my need for portable apps has gone down year on year.

    • I'm the opposite: I'll take a centralized, strongly-typed registry over 'stringly' typed dotfiles scattered across the entire filesystem.

  • Yeah, this is something I'd love to see standardised; a distro that was able to enforce a .config folder somehow would be a winner for me. Think we've probably missed the boat though.
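
For reference, the XDG config lookup being argued about is tiny. A minimal sketch (ignoring the spec's extra rule that a relative $XDG_CONFIG_HOME should be treated as unset):

```python
import os

def xdg_config_dir(app):
    """XDG Base Directory rule for config: use $XDG_CONFIG_HOME if
    set, otherwise fall back to ~/.config."""
    base = os.environ.get("XDG_CONFIG_HOME") or os.path.expanduser("~/.config")
    return os.path.join(base, app)

os.environ["XDG_CONFIG_HOME"] = "/home/user/.foobar"
print(xdg_config_dir("myapp"))  # /home/user/.foobar/myapp
```

An app that hardcodes ~/.config instead of doing this lookup is exactly what the proposed VM experiment would catch.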

I didn't know it was such a chaos.

So I guess the moral of the story is: Ensure they always point to the same path, or else...
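
The "or else" is real, because runtimes pick one of the variables for you, and not always the same one. Python's tempfile, for instance, consults TMPDIR, then TEMP, then TMP, keeping the first that names a usable directory:

```python
import os
import tempfile

# tempfile.gettempdir() tries the TMPDIR, TEMP and TMP environment
# variables in that order and keeps the first one naming a usable
# directory -- so when TMP and TEMP disagree, which path wins is the
# library's decision, not yours.
os.environ.pop("TMPDIR", None)
os.environ["TEMP"] = os.getcwd()   # somewhere that exists and is writable
os.environ["TMP"] = "/nonexistent"
tempfile.tempdir = None            # discard any cached answer
print(tempfile.gettempdir())       # here, TEMP wins over TMP
```

Other runtimes use different orders, which is why pointing TMP and TEMP at the same place is the only safe configuration.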

I have been pointing out these annoying things with Microsoft for decades. It’s funny, because the know-it-all “senior dev” type always used to have an answer: “Heh, you see, temp stands for temporary. Tmp stands for troubleshoot my pc. For debug logs. That’s why I’m a senior and you’re not.”

As I’ve gotten more senior, it turns out I was right to question it, and we can actually talk to the original Microsoft devs now, and they explain the whoopsie and how they had to keep it for backwards compatibility.

So then I ask why that excuse, backwards compatibility, is valid when many changes frequently break core compatibility and active business flows (like New Outlook), and they go full hands-off: whoa, I’m not the bad dev, you’ll have to ask the new guys.

You can’t ask the new guys. And they’re hiding behind leetcode screens. It’s no wonder these real problems don’t get fixed and we have New Outlook. It’s the senior dev from earlier who now works there. All the real devs are retired.

Even when I do get a real answer from Microsoft on annoying things, like the user home Documents folder being used inappropriately by random programs or straight up forcefully deleted by OneDrive in an oopsie, the answer their senior dev invented to give me, or goes on about at length in a technical document or angry interview online, is invalidated within 6 months when Microsoft just vibe-pushes a random change that alters how these things work, in not a good way, and invalidates their entire core argument.

Just like notepad updates as another example off the top of my head. There are dev interviews talking about how this is a very simple program because it needs to be 0 risk. Then it gets a Microsoft auth login with copilot.

The whole leetcode dev attitude and Microsoft culture really ruins the entire industry. We can’t have civil discussions. Everything turns into Nuh uh your argument is invalid because you don’t work at Microsoft.

Google Chrome famously having Chrome install into AppData to bypass admin rights is a core memory. That’s clearly not the actual intention of that feature, for 3rd parties to use it to bypass administrator authority. But now this is retconned by the devs as an intended feature, because Chrome ended up being good at the time, and sorting out the mess of deploying a 3rd-party exception program on millions of locked-down business computers would’ve been a nightmare.

One of the first things I do (after using O&O ShutUp10++ of course) when setting up a new installation is to create a "C:\Temp\" folder and point those variables there. Mind you, you have the global TMP and TEMP (which point to \Windows\Temp) and the user-defined ones (which point to %LOCALAPPDATA%\Temp). All four will point to my folder, which makes my life a lot easier in the long run.

Environment variables on *nix are ... strange.

I first noticed this when I assigned TZ to ".tar.xz" (or ".tar.gz") because I was lazy. Then things suddenly no longer worked. Turns out TZ is ... the timezone. So you should not just define arbitrary variables, right? Well ... how is one to know that? People can perhaps read tons of documentation, but I want to minimize the time investment here, so I learn mostly by doing. And as far as I know there is no trivial way to know "this one can be dangerous". The shells are fairly simple at all times, and not that sophisticated.

TMP versus TEMP may also be trivial but ... who can remember the differences? And why is it important?

I have come to think that environment variables are useful; I use them all the time. But they are a bit hackish and not super-elegant and may have side effects. This is not a very important side effect per se, as most people will not run into issues such as TZ or having to think which variable to use and which not, but in the moment when you DO get surprising results, and you don't know why, this may become a problem suddenly. I kind of semi-work around by prefixing via:

   MY_

This is even less elegant, to then have e. g.:

   MY_TEMP=/home/x/temp/

or something like that. But with the prefix "MY_" I rarely run into problems. So this then becomes my main pointer, e. g. MY_TEMP_DIRECTORY, and then TEMP and TMP may just be aliases to that. It's a bit stupid but it is also simple and kind of works. Beauty is not to be found here, but it is practical and that is not a bad thing.

In my own Ruby code I also have to use ENV[] sometimes, which is a wrapper over environment variables, but I try to be as independent of it as possible. For instance, all my settings are stored in simple .yml files, and these are then used to autogenerate environment variables, or to handle things in environments where no environment variables are available. (This is sometimes the case; I had that issue with .cgi files many years ago, where I wanted to access all environment variables but, for a reason I don't fully remember, they were not available. Then I transitioned to those .yml files and that problem went away.) Now of course I need to load those .yml files when necessary, but this is trivial in Ruby: YAML.load_file() works very well for the most part, and I find it more reliable than depending on reading environment variables. (On some restricted environments even this may be unavailable; I had that on university campus machines running ancient Linux systems, so I have to be flexible in this regard.)
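
As a footnote, the TZ hazard described at the top of this comment is easy to demonstrate (on POSIX systems; time.tzset() does not exist on Windows):

```python
import os
import time

# libc reads TZ as the timezone, so assigning something like a file
# suffix to it silently changes every localtime() in the process.
os.environ["TZ"] = "UTC"
time.tzset()
utc_hour = time.localtime(0).tm_hour   # the epoch is 00:00 UTC

os.environ["TZ"] = "EST+5"             # fixed offset, 5 hours behind UTC
time.tzset()
est_hour = time.localtime(0).tm_hour   # same instant reads as 19:00 local

print(utc_hour, est_hour)  # 0 19
```

Setting TZ to ".tar.xz" would be silently treated as an (unknown) timezone name, which is exactly the kind of surprise a MY_ prefix avoids.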

Always shove it to `%LOCALAPPDATA%\Temp` (i.e. `~/AppData/Local/Temp`) and don't think otherwise