Comment by kevingadd

13 years ago

It's nice to see that internal developers feel the same way about XNA that external developers (who used to build XNA games, or still build XNA games) do.

From the outside I always assumed the constant flood of new, half-baked features instead of fixes and improvements to old ones was caused by interns and junior devs looking for glory - sad to hear that's actually partly true. I always considered frameworks like WPF (or Flex, for that matter) 'intern code' - not that interns necessarily wrote them, but they reek of not-experienced-enough engineers trying to solve problems by writing a bunch of new code, instead of fixing existing code.

It really is too bad, though. There are parts of the NT kernel (and even the Win32 API) that I consider a joy to use - I love IOCP, despite its warts, and APIs like MsgWaitForMultipleObjects are great tools for building higher-level primitives.

Plus, say what you want about GDI (there's a lot wrong with it at this point), but it's still a surprisingly efficient and flexible way to do 2D rendering, despite the fact that parts of it date back to before Windows 3.1. Some really smart people did some really good API design over time over at Microsoft...

Actually, I think one of NT's largest advantages over POSIX systems is process management: yes, the venerable CreateProcess API.

See, in Windows, processes are first class kernel objects. You have handles (read: file descriptors) that refer to them. Processes have POSIX-style PIDs too, but you don't use a PID to manipulate a process the way you would with kill(2): you use a PID to open a handle to a process, then you manipulate the process using the handle.

This approach, at a stroke, solves all the wait, wait3, wait4, SIGCHLD, etc. problems that plague Unixish systems to this day. (Oh, and while you have a handle to a process open, its process ID won't be re-used.)

It's as if we live in a better, alternate universe where fork(2) returns a file descriptor.

You can wait on process handles (the handle becomes signaled and the wait completes when the process exits). You can perform this waiting using the same functions you use to wait on anything else, and you can use WaitForMultipleObjects as a kind of super-select to wait on anything.

If you want to wait on a socket, a process, and a global mutex and wake up when any of these things becomes available, you can do that. The Unix APIs for doing the same thing are a mess. Don't even get me started on SysV IPC.

Another thing I really like about NT is job objects (http://msdn.microsoft.com/en-us/library/windows/desktop/ms68...). They're a bit like cgroups, but a bit simpler (IMHO) to set up and use.

You can apply memory use, scheduling, UI, and other restrictions to processes in job objects. Most conveniently of all, you can arrange for the OS to kill everything in a job object if the last handle to that job dies --- the closest Linux has is PR_SET_PDEATHSIG, which needs to be set up individually for each child and which doesn't work for setuid children.

(Oh, and you can arrange for job objects to send notifications to IO completion ports.)

Yes, Windows gets a lot wrong, but it gets a lot right.

  • > WaitForMultipleObjects as a kind of super-select to wait on anything

    Except for the 64 handle limit, which makes it largely useless for anything that involves server applications where the number of handles grows with the number of clients. So then you'd spawn "worker" threads, each handling just 64 handles. And that's exactly where your code starts to go sour - you are forced to add fluff the sole purpose of which is to work around API limitations. What good is the super-select if I need cruft to actually use it in the app?

    And don't get me started on the clusterfuck of the API that is IOCP.

    > Yes, Windows gets a lot wrong, but it gets a lot right.

    Oh, no, it doesn't.

    Specifically, what Windows does not get is that it's other developers that are using its services and APIs, not just the MSSQL team that can peek at the source code and cook up a magic combination of function arguments that does not return undocumented 0x8Fuck0ff. I have coded extensively for both Linux and Windows, including drivers and various kernel components, and while Windows may get things right at the conceptual level, working with what they ended up being implemented as is an inferior and painful experience.

    • > not just MSSQL team that can peek at the source code and cook up a magic combination of function arguments that does not return undocumented

      Don't leave us hanging... what examples are you referring to?

      18 replies →

    • As far as I can see you've had contact with the Win API, but you haven't understood enough, based on your complaints. Moreover, why do you mention "MySQL" as the team with access to the kernel sources?

      13 replies →

    • Can you name any software that doesn't have an internal API that's not meant for use by 3rd parties?

  • My personal annoyance is the introduction of win32k in NT4. With per-session CSRSS existing since WinFrame and Vista+ having Session 0 Isolation, not to mention much faster processors, and the removal of XPDM and the requirement of the DWM in Win8, it doesn't make as much sense anymore.

    Update: And I forgot to mention the font parsing security issues (look up Duqu), and the issues with user mode callbacks too.

    • > the issues with user mode callbacks too.

      Genuine question (because I really don't know) -- how do Linux GUI frameworks work without kernel-mode callbacks? What do they use instead, when they need to send messages to other windows?

      8 replies →

    • Nobody likes win32k, but it's there and it works.

      Okay, you're in charge. Are you sure you can't find a better way to deploy shareholder capital than using it to remove win32k?

      5 replies →

> I always considered frameworks like WPF (or Flex, for that matter) 'intern code' - not that interns necessarily wrote them, but they reek of not-experienced-enough engineers trying to solve problems by writing a bunch of new code, instead of fixing existing code.

This is an unfair and unfounded accusation. If you look at the time frame, all the major platforms were working on their first hardware-accelerated UI toolkits and went through similar teething problems (Cocoa, anyone?). WinForms was a dead end; there was no fixing to do. WPF has turned out well enough, and WinRT has evolved into something very efficient (e.g. by using ref counting rather than GC).

> Plus, say what you want about GDI (there's a lot wrong with it at this point), but it's still a surprisingly efficient and flexible way to do 2D rendering

Not anymore. I get that some love using antique APIs and computer systems just for the sake of being retro, but when every computer ships with a GPU these days, using GDI is not even close to pragmatic.

  • > ...WinRT has evolved into something very efficient (e.g. by using ref counting rather than GC).

    Ah you mean by causing cache stalls when doing increments/decrements after each assignment and allowing for cyclic references?

    • If the ref counting is manual or optimized, it doesn't happen after every assignment, only ones which change object ownership. If you're really nuts about avoiding cache misses, you can pack most ref counts into the low bits of the class pointer, since allocations tend to be 16-byte aligned, including allocations for class objects. Assuming you use the object at least once following a change of ownership, the overall cost in cache misses becomes 0. Or you can store the references in a global table with better cache properties (IIRC this is what ObjC does). Or you can rely on the fact that cache lines are typically large enough to grab a few words at a time, so fetching the class pointer will automatically fetch the refcount (again with the assumption that you use objects at least once per assignment). Honestly, I'm having difficulty imagining how cache behavior could become a problem unless you wanted it to.
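      A toy version of that low-bit packing, assuming 16-byte-aligned allocations (for simplicity this tags the object pointer itself rather than its class pointer, but the arithmetic is the same):

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <stdlib.h>

      /* 16-byte-aligned allocations leave the low 4 bits of a pointer zero,
         so a small refcount (1..15) can ride along in those bits for free.
         A real system would spill counts above 15 to a side table (not shown). */
      #define TAG_MASK ((uintptr_t)0xF)
      typedef uintptr_t ref_t;

      static ref_t ref_make(void *p) {
          assert(((uintptr_t)p & TAG_MASK) == 0);  /* requires 16-byte alignment */
          return (uintptr_t)p | 1;                 /* start with one reference */
      }
      static void    *ref_ptr(ref_t r)    { return (void *)(r & ~TAG_MASK); }
      static unsigned ref_count(ref_t r)  { return (unsigned)(r & TAG_MASK); }
      static ref_t    ref_retain(ref_t r) { assert(ref_count(r) < 15); return r + 1; }
      static ref_t    ref_release(ref_t r) {
          if (ref_count(r) == 1) { free(ref_ptr(r)); return 0; }
          return r - 1;
      }

      int main(void) {
          ref_t r = ref_make(aligned_alloc(16, 64));
          r = ref_retain(r);                       /* count: 2 */
          assert(ref_count(r) == 2);
          r = ref_release(r);                      /* count: 1 */
          assert(ref_count(r) == 1);
          ref_release(r);                          /* count: 0, object freed */
          puts("ok");
          return 0;
      }
      ```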

      As for not allowing cycles, I consider that a feature. The headache of memory management doesn't go away with GC. You still have to avoid inadvertently keeping references to objects through the undo stack & so on. Unintentional strong refs creep through GC code just as easily as memory leaks creep through code with manual memory management. Almost universally, I find that GC'd projects large enough to have memory management best practices implicitly do away with this freedom at the first opportunity by calling for acyclic or single-parent object ownership graphs. These restrictions primarily make it easier to think about object lifecycle -- the fact that they allow refcounting to suffice for memory management is icing on the cake.

      3 replies →

    • The caching misbehavior of reference counting has been greatly exaggerated, especially in the context of UI where responsiveness is much more important than raw CPU speed. Also, the ref counting tradeoff seems to work better for devices (e.g. all the cool kids [1] are doing it).

      [1] https://developer.apple.com/library/mac/documentation/Genera...

      .NET still does GC, it is only the WinRT APIs (something like COM) that manage resources through ref counting. There is some cool interop magic that makes this somewhat transparent to the programmer.

      12 replies →

  • GDI has been GPU accelerated literally forever. Vista may have dropped hardware acceleration for GDI, but it was promptly brought back in Windows 7.

    Since the Win32 UI stack uses GDI, it was hardware accelerated before WPF even existed.

    • Not the same thing! This is like comparing pre-DX8/programmable-shader hardware acceleration to the fixed-function crud we had to deal with a long time ago. Ya, you can emulate fixed functions with programmable shaders, but it's a poor way to make use of a modern GPU.

      Here is a good discussion of the topic:

      http://msdn.microsoft.com/en-us/library/windows/desktop/ff72...

      Excerpts:

      > When the GDI DDI was first defined, most display acceleration hardware targeted the GDI primitives. Over time, more and more emphasis was placed on 3D game acceleration and less on application acceleration. As a consequence the BitBlt API was hardware accelerated and most other GDI operations were not.

      > In order to maintain compatibility, GDI performs a large part of its rendering to aperture memory using the CPU. In contrast, Direct2D translates its API calls into Direct3D primitives and drawing operations. The result is then rendered on the GPU. Some of GDI's rendering is performed on the GPU when the aperture memory is copied to the video memory surface representing the GDI window.

      > Existing GDI code will continue to work well under Windows 7. However, when writing new graphics rendering code, Direct2D should be considered, as it takes better advantage of modern GPUs.

The "Inside Windows Kernel" book series is quite interesting for understanding how it all works, including some of the initial VMS influence on the original kernel design.

> "It's nice to see that internal developers feel the same way about XNA that external developers (who used to build XNA games, or still build XNA games) do. From the outside I always assumed the constant flood of new, half-baked features instead of fixes and improvements to old ones was caused by interns and junior devs looking for glory ..."

Am I understanding you correctly; are you implying that you think XNA was created by juniors? Would you say that, because of this, it's a good thing XNA is being killed?

Cause personally I think it's been a terrible decision by Microsoft to kill XNA. A lot of indie game developers have relied on XNA, and I really feel Microsoft can use the indie support. Sure, big-name games might be a priority, but personally I feel the most _interesting_ work is being done by indies. Indies tend to be less concerned with proven formulas and seem to see it more as a creative outlet for themselves[1]. I think it's a good thing frameworks like Monogame[2] exist, so developers can still put their existing XNA knowledge to good use - and not just limited to the Windows platform, but e.g. iOS and Android as well.

The Monogame website might not show very impressive examples, but a game like Bastion[3] was ported to other platforms using Monogame, showing that very high-quality work can be created with XNA.

[1]: http://www.youtube.com/watch?v=GhaT78i1x2M

[2]: http://monogame.codeplex.com

[3]: http://supergiantgames.com/index.php/2012/08/bastions-open-s...

What is the feeling about XNA? I haven't followed the area, so I found it unfortunate that it was treated in the original post without explanation. Was XNA a good thing or a bad thing? Why?

  • It was loved by indies for Xbox 360 development and got so popular that Microsoft decided to make it the way to develop games on Windows Phone 7.

    When Windows Phone 8 came out, the native renaissance was in full swing inside Microsoft. C++ with DirectX became the official way, and XNA was canned while Microsoft let MonoGame and Unity pick up the C# developers still willing to invest in the platform.

    The sad thing is that this is not a native vs managed issue, because even C# gets compiled to native code when targeting Windows Phone 8. It was just a plain product-management decision to kill a product which was being sold as the way to develop Windows Phone games.

    • "XNA was canned while Microsoft left MonoGame and Unity pick up the C# developers still willing to invest into the platform"

      I have no insider information to suggest otherwise, but don't you think that a replacement will be released with the new console?

      8 replies →