Comment by maccard

2 months ago

The author mentions rewriting core applications in C# on windows but I don’t think this is the problem. Write a simple hello world app in c#, compile it and see how long it takes to run vs a rust app or a python script - it’s almost native. Unity is locked to a horrifically ancient version of mono and still manages to do a lot of work in a small period of time. (If we start talking JavaScript or python on the other hand…)

I agree with him though. I recently had a machine that I upgraded from Win10 to Win11 and it was like someone kneecapped it. I don’t know if it’s modern app frameworks, or the OS, but something has gone horribly wrong on macOS and windows (iOS doesn’t suffer from this as much for whatever reason IME)

My gut instinct is that the adjustment to everything being asynchronous, combined with development on zero-latency networks in isolated environments, means that when you compound “wait for windows defender to scan, wait for the local telemetry service to respond, incrementally async load 500 icon or text files and have them run through all the same slowness” with frameworks that introduce latency and context switching, and are thin wrappers that spend most of their time FFI’ing to native languages, and then deploy them in non-perfect conditions, you get the mess we’re in now.

> Unity is locked to a horrifically ancient version of mono and still manages to do a lot of work in a small period of time

Unity is the great battery killer!

The last example I remember is that I could play the first XCOM remake (which had a native Mac version) on battery for 3-4 hours, while I was lucky to get 2 hours out of $random_unity_based_indie with far simpler graphics.

> incrementally async load 500 icon or text files and have them run through all the same slowness

This really shouldn't be slower when done asynchronously compared to synchronously. I would expect, actually, that it would be faster (all available cores get used).

  • > I would expect, actually, that it would be faster (all available cores get used).

    And I think this assumption is what's killing us. Async != parallel for a start, and parallel IO is not guaranteed to be fast.

    If you write a function:

        async Task<ImageFile> LoadFile(string path)
        {
            var f = await load_file(path);
            return new ImageFile(f);
        }
    
    

    And someone comes along and makes it into a batch operation;

        async Task<List<ImageFile>> LoadFiles(List<string> paths)
        {
            var results = new List<ImageFile>();
            foreach(var path in paths) {
                var f = await load_file(path);
                results.Add(new ImageFile(f));
            }
            return results;
        }
    

    and provides it with 2 files instead of 1, you won't notice. Over time 2 becomes 10, and 10 becomes 500. You're now at the mercy of whatever is running your tasks. If each await yields back to an event loop [0], you introduce a loop iteration of latency before proceeding, meaning you've now introduced 500 loops of latency.

    In case you say "but that's bad code": well, yes, it is. But it's also very common. While reading around for this reply, I found a Stack Overflow post [0] with exactly this problem.

    [0] https://stackoverflow.com/questions/5061761/is-it-possible-t...
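The await-in-a-loop pitfall above is easy to reproduce in any async runtime. A minimal Python sketch (not the C# code above; `load_file` here is a hypothetical stand-in that simulates ~10 ms of I/O):

```python
import asyncio
import time

async def load_file(path: str) -> str:
    # Stand-in for real I/O: each load takes ~10 ms.
    await asyncio.sleep(0.01)
    return f"contents of {path}"

async def load_files_sequential(paths):
    # Awaiting inside the loop serializes the waits:
    # total time is roughly 10 ms * len(paths).
    results = []
    for p in paths:
        results.append(await load_file(p))
    return results

async def load_files_concurrent(paths):
    # Starting all the loads first lets the waits overlap.
    return await asyncio.gather(*(load_file(p) for p in paths))

async def main():
    paths = [f"icon_{i}.png" for i in range(50)]

    t0 = time.perf_counter()
    await load_files_sequential(paths)
    sequential = time.perf_counter() - t0

    t0 = time.perf_counter()
    await load_files_concurrent(paths)
    concurrent = time.perf_counter() - t0

    print(f"sequential: {sequential:.2f}s, concurrent: {concurrent:.2f}s")
    return sequential, concurrent

asyncio.run(main())
```

With 50 files the sequential version takes around half a second while the concurrent one finishes in tens of milliseconds; at 500 files the gap is 10x worse, which is the "500 loops of latency" point.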

    • Why would someone put an await inside a loop?

      Don't get me wrong... I believe you have seen it, I just can't understand the thought process that led to that.

      In my mind, await is used when you want to use the result, not when you store it or return it.

      2 replies →

    • Well, I mean you'd use await foreach and the IAsyncEnumerable equivalent... async would mean the UI would not be blocked, so I agree with the original commenter you replied to.

      1 reply →
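For reference, the streaming pattern the parent describes (C#'s await foreach over an IAsyncEnumerable) looks roughly like this in Python terms: an async generator that starts every load up front and yields each result as it completes, so the consumer is never blocked on the whole batch. The `load_file` helper is again a hypothetical stand-in:

```python
import asyncio

async def load_file(path: str) -> str:
    # Stand-in for real I/O.
    await asyncio.sleep(0.01)
    return f"contents of {path}"

async def load_files_streaming(paths):
    # Kick off every load first, then yield results as each one
    # completes -- the rough shape of an IAsyncEnumerable being
    # consumed with await foreach.
    tasks = [asyncio.create_task(load_file(p)) for p in paths]
    for task in asyncio.as_completed(tasks):
        yield await task

async def main():
    results = []
    async for contents in load_files_streaming(["a.png", "b.png", "c.png"]):
        results.append(contents)  # e.g. update the UI per item here
    return results

asyncio.run(main())
```

The loads overlap, and the consumer sees each item as soon as it is ready instead of waiting for the slowest one.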

I agree that these are very simple cases, and it's like benchmarking compilation or execution of helloworld.cpp

It would be interesting to see something like:

- cold boot

- load Windows

- load Office

- open a 200 page document

- create pdf from said document

- open 5000 row spreadsheet

- do a mail merge

- open 150,000 record database

- generate some goofy report

Do people still do mail merge? Not sure.

  • Just FWIW, that is what PC Pro magazine's benchmark suite did in the early 1990s.

    It ran (5-10x) a set of scripted operations in Word, Excel, PowerPoint, Access, Photoshop, and WinZip, on large files, and that was the basis of the benchmark score.

    I ported the benchmark suite from 16-bit Windows to 32-bit Windows.

It depends on how the application is written. A lot of modern C# relies on IoC frameworks; these do reflection shenanigans, and that has a performance impact.
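The cost of resolving types reflectively shows up in any managed runtime. A rough Python analogy (not C#, and not modeled on any specific IoC framework; `Service` is a hypothetical type): looking a class up by name and instantiating it dynamically is measurably slower than a direct constructor call, and containers do this at startup for every registration.

```python
import importlib
import timeit

class Service:
    def __init__(self):
        self.ready = True

def construct_direct():
    # Direct constructor call: the compiler/runtime resolves
    # the type statically.
    return Service()

def construct_reflective():
    # What a naive container does: look the type up by name at
    # runtime, then instantiate it dynamically.
    module = importlib.import_module(__name__)
    cls = getattr(module, "Service")
    return cls()

direct = timeit.timeit(construct_direct, number=100_000)
reflective = timeit.timeit(construct_reflective, number=100_000)
print(f"direct: {direct:.3f}s, reflective: {reflective:.3f}s")
```

The absolute numbers are tiny per call; the point is that it is pure overhead, paid once per registration on every startup, which is exactly where users feel it.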

  • This is literally what I said:

    > The author mentions rewriting core applications in C# on windows but I don’t think this is the problem. Write a simple hello world app in c#, compile it and see how long it takes to run vs a rust app or a python script - it’s almost native <...>

    > My gut instinct is an adjustment to everything being asynchronous, combined <...> with frameworks that introduce latency, context switching, and are thin wrappers that spend most of our time FFI’ing things to native languages, and then deploy them in non perfect conditions you get the mess we’re in now.

    • I didn't really know what that second sentence meant tbh.

      Specifically with C#, reflection can have a big effect on the app's startup time. I have seen this with almost all versions of .NET.