Half the comments here are talking about the vtuber herself. Who cares? It's been talked about before. Just imagine if half the thread were discussing what gender she is. What I am interested in is the claims here https://asahilinux.org/2022/11/tales-of-the-m1-gpu/#rust-is-.... (what is it called if it comes with a proof?).
The resident C/C++ experts here would have you believe that the same is possible in C/C++. Is that true?
In C? No, not unless you write your own scaffolding to do it.
In C++? Maybe, but you’d need to make sure you stay on top of using thread safe structures and smart pointers.
What Rust does is flip this. The default is the safe path. So instead of risking forgetting smart pointers and thread safe containers, the compiler keeps you honest.
So you're not spending time chasing oddities because you missed a variable initialisation, hit a race condition, or have some kind of use-after-free.
While a lot of people say that this slows you down and that a good programmer doesn't need it, my experience is that even the best programmers forget. At least for me, I spend more time trying to reason about C++ code than Rust, because I can trust my Rust code more.
Put another way, Rust helps reduce how much of the codebase I need to consider at any given time to just the most local scope. I work in many heavy graphics C and C++ libraries, and have never had that level of comfort or mental locality.
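As a minimal sketch of what that flip looks like in practice (plain std, hypothetical names, nothing driver-specific): sharing mutable state across threads simply doesn't compile until you reach for an explicit wrapper like Arc<Mutex<...>>:

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // A bare Vec shared across threads is rejected at compile time;
        // the wrapper makes the synchronisation explicit and safe by default.
        let counters = Arc::new(Mutex::new(vec![0u32; 4]));

        let handles: Vec<_> = (0..4usize)
            .map(|i| {
                let counters = Arc::clone(&counters);
                thread::spawn(move || {
                    let mut slots = counters.lock().unwrap();
                    slots[i] += 1;
                })
            })
            .collect();

        for handle in handles {
            handle.join().unwrap();
        }

        println!("{:?}", counters.lock().unwrap());
    }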
> even the best programmers forget
For me it isn't even that it catches these problems when I forget. It is that I can stop worrying about these problems when writing the vast majority of code. I just take references and use variables to get the business logic implemented without the need to worry about lifetimes the entire time. Then once the business logic is done I switch to dealing with compiler errors and fixing these problems that I was ignoring the first time around.
When writing C and C++ I feel like I need to spend half of my brainpower tracking lifetimes for every line of code I touch. If I touch a single line of code in a function, I need to read and understand the relevant lifetimes in that function before changing it. Even if I don't make any mistakes, doing this consumes a lot of time and mental energy. With Rust I can generally just change the relevant line and the compiler will let me know what other parts of the function need to be updated. It is a huge mental relief and time saver.
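To make that concrete with a tiny, hypothetical example (the names mean nothing): change a function that used to borrow its argument into one that takes ownership, and the compiler lists exactly which later uses need attention, instead of leaving a silent dangling pointer to hunt down by hand:

    struct Buffer {
        data: Vec<u8>,
    }

    // Suppose this used to take `&Buffer` and a refactor changed it to take
    // the buffer by value.
    fn submit(buf: Buffer) -> usize {
        buf.data.len()
    }

    fn main() {
        let buf = Buffer { data: vec![0u8; 16] };
        let submitted = submit(buf);
        // Any later use of `buf` is now a precise "value moved here / used
        // here after move" error pointing at both lines:
        // println!("{}", buf.data.len());
        println!("submitted {submitted} bytes");
    }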
I agree that Rust is the better language because it gives you the safe tools by default.
Smart pointers are no panacea for memory safety in C++ though: even if you use them fastidiously and avoid raw pointer access, iterator invalidation or OOB access will come for you. The minute you allocate and have to resize, you're exposed.
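For contrast, here is the Rust side of that exact failure mode (a sketch, not anything from the driver): holding a reference across a reallocation is a compile error rather than a latent crash.

    fn main() {
        let mut frames = vec![10u32, 20, 30];
        let first = &frames[0];

        // In C++, the equivalent push_back may reallocate and silently
        // invalidate any outstanding pointer or iterator into the vector.
        // Here the same line is rejected (E0502) while `first` is still live:
        //
        //     frames.push(40); // error: cannot borrow `frames` as mutable
        //
        println!("first = {first}");
        frames.push(40); // fine once the shared borrow has ended
        println!("len = {}", frames.len());
    }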
An additional advantage of Rust is the extensive macro system. The ability to generate a bunch of versioned structures out of a common description, all with their own boilerplate and validation code, is invaluable for this kind of work. Some of it can be done in C++ with templates as well, but the ergonomics are on a different level.
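A rough idea of what that looks like with a declarative macro (hypothetical names, nowhere near as involved as the real driver's macros): one invocation stamps out several versioned, C-layout command structs, each with derived boilerplate and a size constant you can validate against.

    macro_rules! versioned_cmd {
        ($($name:ident { $($field:ident : $ty:ty),* $(,)? })*) => {
            $(
                #[repr(C)]
                #[derive(Debug, Clone, Copy)]
                pub struct $name {
                    $(pub $field: $ty,)*
                }

                impl $name {
                    // Size this version of the struct occupies on the wire.
                    pub const WIRE_SIZE: usize = core::mem::size_of::<Self>();
                }
            )*
        };
    }

    versioned_cmd! {
        SubmitCmdV12 { queue: u32, count: u32 }
        SubmitCmdV13 { queue: u32, count: u32, flags: u64 }
    }

    fn main() {
        println!("v12 = {} bytes, v13 = {} bytes",
                 SubmitCmdV12::WIRE_SIZE, SubmitCmdV13::WIRE_SIZE);
    }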
> What Rust does is flip this. The default is the safe path. So instead of risking forgetting smart pointers and thread safe containers, the compiler keeps you honest.
For what it's worth, the same is true of Swift. But since much of the original Rust team was also involved with Swift language development, I guess it's not too much of a surprise. The "unsafe" API requires deliberate effort to use; no accidents are possible there. If you do anything unsafe, it's all very verbose and goes through a very narrow window of opportunity.
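The Rust side of that same idea, as a minimal sketch: anything outside the checked subset has to be spelled out in an `unsafe` block, so the unchecked surface stays small and easy to audit.

    fn main() {
        let value: u32 = 42;
        let ptr = &value as *const u32;

        // Dereferencing a raw pointer only compiles inside an explicitly
        // marked block; everything outside it stays in the checked subset.
        let read_back = unsafe { *ptr };

        println!("{read_back}");
    }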
Of course it's possible in either. By the way, Mac drivers are in fact written in a subset of C++. At least they used to be, maybe that has changed.
It would also have been possible in Ada, but of course it isn't cool.
I am curious to know whether the trait system helps a lot with mapping the underlying kernel features/quirks. Which language is better at creating abstractions that map closer to how the kernel works?
I have a lot of experience in C, a lot of experience in C++, and some experience with Rust (I have some projects which use it). My opinion is that it's true, and the other comments are good explanations of why. But I want to point out, in addition to those: there's a reason why Rust was adopted into Linux, while C++ wasn't. Getting C++ to work in the kernel would almost certainly have been way less work than getting Rust to work. But only Rust can give you the strong guarantees which make you avoid lifetime-, memory- and concurrency-related mistakes.
You can't overstate the amount of personal hatred that Linus and several other Linux maintainers have for C++. I can't really say I blame them - C++ before C++11 was a bit of a nightmare in terms of performance and safety.
I'm not exactly a C or Rust expert, so better to check @dagmx's comment for that, but I know some C++ and have worked with networking enough to know some pitfalls.
Talking of C++, it can be really solid when you work with your own data structures and control the code on both ends. Using templates with something like boost::serialization or protobuf for the first time is like magic. E.g. you can serialize the whole state of your super complex app and restore it on another node easily.
Unfortunately that's just not the case when you're actually trying to work with someone else's API/ABI that you have no control over. It's even worse when it's a moving target and you need to maintain several different adapters for different client/server versions.
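(For what it's worth, the closest Rust analogue to that "serialize the whole state in one line" experience is serde's derives. A sketch, assuming the `serde` and `serde_json` crates with the `derive` feature, with made-up types rather than anything from a real codebase:)

    use serde::{Deserialize, Serialize};

    #[derive(Serialize, Deserialize, Debug)]
    struct Connection {
        addr: String,
        retries: u8,
    }

    #[derive(Serialize, Deserialize, Debug)]
    struct AppState {
        version: u32,
        connections: Vec<Connection>,
    }

    fn main() -> Result<(), serde_json::Error> {
        let state = AppState {
            version: 3,
            connections: vec![Connection { addr: "10.0.0.2:7000".into(), retries: 2 }],
        };

        // Serialize the entire nested state, then restore it "on another node".
        let wire = serde_json::to_string(&state)?;
        let restored: AppState = serde_json::from_str(&wire)?;
        println!("{restored:?}");
        Ok(())
    }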
Possible? Definitely. Easier? Probably not. At least for the most part, there are a couple things which C(++) can sometimes be more ergonomic for and those can be isolated out and used independently.
I must admit that for me this was a pleasant change of format from the youtube videos.
Except this is a blog post, not an Asahi Lina video … did you even click the link?
watching a virtual persona stream their development of their M1 GPU drivers is one of the most cyberpunk things I've ever seen! it's easy to forget that this world is looking closer and closer to those dreamed up by Gibson, Stephenson, etc. what a time to be alive.
I like your optimism, but it seems more like a Philip K. Dick novel to me.
>In 2021, society is driven by a virtual Internet, which has created a degenerate effect called "nerve attenuation syndrome" or NAS. Megacorporations control much of the world, intensifying the class hostility already created by NAS.
from Johnny Mnemonic
What can we do to make it more utopian?
It's an interesting set of tradeoffs - vtubing has made it possible for people to be on-screen personalities who normally would not be able to as easily, because it can be very hard to overcome problems with your IRL appearance. That stuff really matters if you want to succeed on YouTube or Twitch. In comparison, if you want to be a vtuber, there are relatively affordable ways to grab a stock model and customize it. You can also just commission a custom one from an artist and rigger - though I think the cost of that is sadly out of reach for many amateurs, it's not as high as you might assume.
If you stream without a face camera at all it generally hurts your ability to grow an audience, and unfortunately our society is still pretty focused on appearance so if you don't look great you're going to potentially get a lot of toxicity in your chat. A vtuber avatar acts as an equalizer in this sense and also lets people express their personality and aesthetics visually in a way that might not otherwise be easy - they can pick eye and hair colors they think represent them without having to use colored contacts or hair dyes, etc.
A few different people I know found that having a vtuber avatar made it much easier for them to get into streaming regularly and it did grow their audience, so I'm happy to see the technology catch on and improve.
Small nit, but I was confused: I think Johnny Mnemonic is Gibson? And I had to look up NAS; I think that's part of the movie, not the book. I think we have another couple of decades before the androids and mood organs of Philip K. Dick, at least, but I could be wrong. But parts of Gibsonian cyberpunk are already here!
Aesthetically it looks like the worlds imagined in a Philip K. Dick novel, but none of the actual dystopian aspects are present in what GP described (rampant poverty/class disparity, environmental destruction, etc.)
I don't think someone sharing their craft through a virtual avatar is any more responsible for these things than the flying cars from Blade Runner would be.
> I like your optimism, but it seems more like a Philip K. Dick novel to me.
Is it?
It's basically forums & avatars brought into the medium of audio and video communication.
A polity with an outermost shell of no-BS IC spooks at a ratio of twenty to one cybersec defense to offense. There is the problem of sciengineers conceiving photonic computing in the labs, but the committee-member wage/salary slave cuts cost corners (or doesn't, but bloats up on unnecessary complexity) and we get the worst join on the Venn diagram in the industry spec.
> What can we do to make it more utopian?
Move to Mars and start over.
> watching a virtual persona stream their development of their M1 GPU drivers is one of the most cyberpunk things I've ever seen!
What would really push it into cyberpunk territory is if it turns out this is not an actual human but an AI-controlled virtual person.
Which is using this to seed memetic triggers that will allow it to take control invisibly later.
Damn, I hope someone gets this into a script soon.
Is she not real? :-o
Can someone explain this vtoon trend to me? It doesn't seem to be driven by anonymity because their real name is easily findable, so I assume it's something else? It seems very common, especially in certain communities.
In the case of Marcan/Lina, I got the impression that he created Lina just for fun. It started as an April Fools' joke (Lina 'took over' Marcan's live stream), but Marcan seems to enjoy it a lot, even going so far as to contribute to the Inochi2D software (used to render Lina) to improve all sorts of facial features.
I don't have the impression that in Marcan's case it was ever about anonymity; it is more about creative expression.
Up until Lina's introduction on April 1st, I had never seen a vTuber stream, and I must say it is quite fun to watch. Though personally I wish Lina's voice were tweaked a bit, because it can be hard to understand what she is saying.
Some people just prefer their public persona to be in the form of an avatar instead of their real face. They want to have something there to represent themselves instead of just streaming a screen and nothing else, but they would rather that representation be an avatar or character rather than their physical selves.
If you were the kind of person who got rude / explicit / insulting comments whenever you showed your actual face on camera, the vtoon trend would be quite easy to understand.
It’s like getting a specific haircut, choosing what model of glasses to get, or getting a nose job, or a tattoo. Or even just picking what style of clothes you want to represent yourself in. I.e. it’s simply choosing your appearance, using more modern technology.
Which is what Zuck is trying to get us all into with the metaverse, but the world is not ready yet.
Don't forget to dissociate the concept of virtual worlds which already exist and are quite popular (MMOs etc.), and the idea of a virtual world owned and imagined by Zuckerberg which has been a terrible failure so far.
The difference is Zuck wants to control it. He wants to de-democratize the movement so he can more easily profit from it.
No, Zuck's Metaverse just sucks. "The world isn't ready for my brilliant ideas" is the rallying cry of people with ideas but no execution ability.
Same thing was said about full touch screen iphones circa 2007.
VR/AR just hasn't been done right as of now, but it's getting close. Demand is there. Imagine virtual schooling during a time like Covid, but instead of Zoom, kids actually see each other in VR and can interact with each other.
The m1n1 hypervisor specialised for debugging is a pretty genius idea. Is anyone aware of anyone else taking a similar approach? Seems like it would be a pretty generally applicable technique and would make OS/hardware driver development a lot more approachable.
Even before true CPU-supported "hypervisors," there was shim software like SoftICE that worked similarly to m1n1 in that you would run an OS underneath and then use a supervisor tool to trace and debug the OS under inspection.
More recently, it's fairly common to use a hypervisor or simulator for kernel debugging in device driver development on Windows via Hyper-V.
A lot of Linux driver development is done using qemu as well, although this is usually more targeted and isn't quite the same "put a thin shim over the OS running on the hardware" approach.
The flexibility and I/O tracing framework in m1n1 are pretty uniquely powerful, though, since it was built for reverse engineering specifically.
Some developers used user mode Linux for driver development, and I think some development has happened on the NetBSD rump kernel more recently. I find the work that goes into building this kind of tooling all pretty impressive.
The nouveau project used a kernel module to intercept mmio accesses: https://nouveau.freedesktop.org/MmioTrace.html. Generally speaking hooking onto driver code is one of the preferred ways of doing dynamic reverse engineering. For userspace components, you can build an LD_PRELOAD stub that logs ioctls, and so on.
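A hypothetical sketch of that LD_PRELOAD idea in Rust (built as a cdylib, `libc` crate assumed; the real ioctl is variadic, so this only covers the common single-pointer-argument shape): it logs each request number and then forwards to the real libc implementation looked up via RTLD_NEXT.

    use std::os::raw::{c_char, c_int, c_ulong, c_void};

    #[no_mangle]
    pub unsafe extern "C" fn ioctl(fd: c_int, request: c_ulong, arg: *mut c_void) -> c_int {
        type RealIoctl = unsafe extern "C" fn(c_int, c_ulong, *mut c_void) -> c_int;

        // Find the next `ioctl` in symbol resolution order (the real one).
        let sym = libc::dlsym(libc::RTLD_NEXT, b"ioctl\0".as_ptr() as *const c_char);
        let real: RealIoctl = std::mem::transmute(sym);

        eprintln!("ioctl(fd={fd}, request={request:#x})");
        real(fd, request, arg)
    }

Build it and run the target program with LD_PRELOAD pointing at the resulting .so to get a running log of every ioctl the userspace driver issues.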
Not that I know of. m1n1 originated from software that (IIRC) was used initially for reverse-engineering the Wii.
Idea-wise, the S/360 actually ran on hardware microcode, and all these ideas of virtual machines and hypervisors came from an unauthorised development called CP67, later VM. IBM used it for developing MVS etc., as some of the hardware was yet to be built for certain features.
But these modern-day developments are crazy.
How can you manage 100+ structures in a language you just learnt (Rust), for a secret GPU the vendor does not share info about?
The fact so much hardware these days is running a full real-time OS all the time annoys me. I know it is normal and understandable but everything is such a black box and it has already caused headaches (looking at you, Intel).
This isn't even that new of a thing. The floppy disk drive sold for the Commodore 64 included its own 6502 CPU, ROM, and RAM. This ran its own disk operating system[1]. Clever programmers would upload their own code to the disk drive to get faster reads/writes, pack data more densely on the disk, and even implement copy protection schemes that could validate the authenticity of a floppy.
1: https://en.wikipedia.org/wiki/Commodore_DOS
And all that engineering resulted in a floppy drive that was slower and more expensive than comparable units for other home computers. I'm not sure if there is a lesson there...
Oh I know it's been a thing forever. Hell, my NeXT Cube with its NeXTDimension display board was such. The NeXTDimension board ran its own entire stripped-down OS. It used an Intel i860 and a Mach kernel… It was also massively underutilized. If NeXT had done a bit more legwork and made the actual Display PostScript server run entirely on the board, it would have been insane. But the 68K still did everything.
… I miss my NeXTs..
Yes, but ... Commodore did this because they had incompetent management. They shipped products (VIC-20, 1540) with a hardware defect in one of the chips (the 6522), a chip they manufactured themselves. The kicker is:
- C64 shipped with 6526, a fixed version of 6522
- C64 is incompatible with 1540 anyway
They crippled the C64 for no reason other than to sell more Commodore-manufactured chips inside a pointless box. The C128 was a similar trick: stuffing a C64 with garbage leftover from failed projects and selling a computer with 2 CPUs and 2 graphics chips at twice the price. Before the slow serial devices, they were perfectly capable of making fast and cheaper-to-manufacture floppy drives for PET/CBM systems.
In the era of CP/M machines, the terminal likely had a similar CPU and RAM to the computer running the OS too. So you had one CPU managing the text framebuffer and CRT driver, connected to one managing another text framebuffer and application, connected to another one managing the floppy disk servos.
Oh God, the 1541 ran soooo hot, hotter than the C64 itself. I remember using a fan on the drive during marathon Ultima sessions. The 1571 was so much cooler and faster.
There's this great USENIX talk by Timothy Roscoe [1], which is part of the Enzian Team at ETH Zürich.
It's about the dominant unholistic approach to modern operating system design, which is reflected in the vast number of independent, proprietary, under-documented RTOSes running in tandem on a single system, and eventually leading to uninspiring and lackluster OS research (e.g. Linux monoculture).
I'm guessing that hardware and software industries just don't have well-aligned interests, which unfortunately leaks into OS R&D.
[1] https://youtu.be/36myc8wQhLo
Also one by Bryan Cantrill recently around the stack of black boxes:
https://www.osfc.io/2022/talks/i-have-come-to-bury-the-bios-...
I think making it harder to build an OS by increasing its scope is not going to help people to build Linux alternatives.
As for the components, at least their interfaces are standardized. You can remove memory sticks by manufacturer A and replace them with memory sticks from manufacturer B without problem. Same goes for SATA SSDs or mice or keyboards.
Note that I'm all in favour of creating OSS firmware for devices, that's amazing. But one should not destroy the fundamental boundary between the OS and the firmware that runs the hardware.
Every cell in your body is running a full blown OS fully capable of doing things that each individual cell has no need for. It sounds like this is a perfectly natural way to go about things.
Organic units should not be admired for their design.
DNA is the worst spaghetti code imaginable.
The design is such a hack that it's easier to let the unit die and just create new ones every few years.
Mature mammalian red blood cells ditch their DNA, which is one reason they don’t live very long.
It's interesting that microkernels didn't "win" at the OS layer, but they kind of seem to have "won" one layer down.
I think IBM's IO channels would like a word... it's been like this for most of computing.
The real-time nature is not what makes it closed though. It's simply that it's been designed to be closed.
For example, Intel's ME could be a really useful feature if we could do what we want with it. Instead they lock it down so it's just built-in spyware.
Isn’t the primary purpose of the ME to run DRM and back door the system? How would it be useful at all open source? People would just turn it off entirely.
I don't know. This sounds very computer-sciency-ish. We build smaller tools to help build big things. Now the big things are so good and versatile we can replace our smaller tools with the big things too. With the more powerful tools, we can build even bigger things. It is just compiler bootstrapping happening in hardware world.
The problem is that there's so much unexplored territory in operating system design. "Everything is a file" and the other *nix assumptions are too often just assumed to be normal. So much more is possible.
It's only a problem if they're closed source. We should be working on that.
Same. It's not about the principle, but that generally these OSes increase latency etc. There's so much you can do with interrupts, DMA, and targeted code when performance is a priority.
What makes you think performance was not a priority here?
I sometimes wonder how fast things could go if we ditched the firmware and just baked a kernel/OS right into the silicon. Not like all the subsystems which run their own OS/kernels, but really just cut every layer and have nothing in between.
You'd find yourself needing to add more CPUs to account for all the low-level handling that is done by various coprocessors for you, eating into your compute budget - especially with a high interrupt rate, since you wouldn't have it abstracted and batched in the now-missing coprocessors.
It'd be slower. These coprocessor OSes are there to improve performance in the first place.
(Especially because wall clock time is not the only kind of performance that matters.)
Then you can't improve them.
Technically it is not a real-time OS. There are very few OSes that have this moniker (VxWorks, QNX, etc.).
"Real-time" isn't a trademark, you can assign it to other things if they meet the typical guarantees of "real-time".
What an interesting tale. But I feel like an anime character described it to me.
I'm totally fine with it (I'm grateful the story is being told at all), but it is a surreal tone for technical writing.
I'm actually very happy about the rise of VTubers/live avatars. I imagine that there are a lot of people that would love to interactively share their knowledge/skills on youtube/twitch but avoid doing so because they're not conventionally attractive or just too shy.
But what about people that hate the "twitchification" of media? I don't like when youtubers I enjoy watching switch to streaming and then all their content is identical "poggers" chat and donation begging garbage. Streamers all feel the same, regardless of the content. I don't feel there's any value to a hundred instances of a stupid emoji streaming by in a """chat""" window, and everything just feels like attention whoring "pick me" nonsense.
Vinesauce has been streaming since well before twitch, and their content got significantly more "Twitch"-y after they embraced the current system. It's obvious why, because if you play into the chat begging, the surface level """interaction""", then you get more money from the parasocial twelve year olds with mom's credit card.
But I don't want my content full of ten second interruptions as a robot voice reads off the same tired joke somebody paid ten dollars to get read off.
> I imagine that there are a lot of people that would love to interactively share their knowledge/skills on youtube/twitch but avoid doing so because they're not conventionally attractive or just too shy.
Couldn't they just not show themselves on camera at all?
The quantity of exclamation points lol. I assume I'm just too old to get it...I'm okay with that, and I'm damn impressed with the results, so more power to Lina, whatever works for her.
Yeah, it's just... sorry, but there is nothing in the world so exciting that 149 exclamation points (thanks to another poster for counting) is warranted.
When every statement is exciting and special, then none of them are.
There are 149 exclamation points on that page (!)
Usually I get annoyed by that, but in this case I read the whole thing and didn't even notice. It helps that they didn't come in "packages" bigger than 1.
I was just getting ready to say the same thing and was wishing for a plugin that would replace all exclamation points with a period. That would make reading much easier
Would that not mean general excitement on the part of the author?
I find it hard to analyze these things by numbers alone. It's context that really matters and if there truly is a baseline excitement, there really should be a high number of exclamations.
I suspect a certain Elaine Benes may have been the editor on this post!
https://www.youtube.com/watch?v=VSKn8RlD7Is
Not as surreal as an actual presentation by an anime character. WTF is going on here?
https://youtu.be/SDJCzJ1ETsM?t=1179
How can people watch this?
I just tried watching this with a Pitch Shifter Chrome extension. The voice goes from grating to just ... bad audio, at the lowest possible setting - which is far more tolerable than the original. I may need to go and edit the extension to turn down the pitch even more.
pretty much the appeal(?) of asahi lina. it's been a weird ride to follow for sure.
Have to say, as much as I want to watch their streams, I can't get past the annoying voice.
> But I feel like an anime character described it to me.
Mario Brothers would make more sense though. Whoever created this is a plumber par excellence.
I can hardly think of a better recommendation of Rust for kernel development than this.
This is seriously impressive!! Hats off to everyone involved
> It feels like Rust’s design guides you towards good abstractions and software designs.
> The compiler is very picky, but once code compiles it gives you the confidence that it will work reliably.
> Sometimes I had trouble making the compiler happy with the design I was trying to use, and then I realized the design had fundamental issues!
I experience a similar sentiment all the time when writing Rust code (which for now is admittedly just toy projects). So far it's felt like the compiler gives you just enough freedom to write programs in a "correct" way.
I don't really do unsafe/lower-level coding, so I can't speak to much there however.
Keep it up! I'm sticking with Thinkpads until Linux support matures.
Also they have matte screens and real keyboards.
The 2015 MBP keyboard was the last one that was passable for me; what came after is horrible. Even the new MBP that has real ports again is still not as good as the 2015 in terms of keyboard.
Thinkpad keyboards are great (I own a couple of T400s and used to daily-drive an X61s), but the latest MacBook Pros have real, actually good keyboards afaik too.
All currently shipping Macbook Airs and Pros have a keyboard that is, as far as I can tell, identical to the great one from 2015 that we love. They switched them all back after the butterfly keyboard fiasco, but hardware pipelines are 2-4 years deep and it took a while.
Not one comment here about the “GPU drivers in Python”. I like the idea of iteration speed, over pure speed.
And the coprocessor called “ASC” also has similarities with Python: the GPU does the heavy lifting, but the ASC (like Python) interacts using shared memory. That is the same thing Python does with a lot of its libraries (written in C/C++).
> And the coprocessor called “ASC” also has similarities with Python
It's a processor, not a programming language :) The team has essentially strapped the API into something that you can poke with Python instead of with a native driver.
Loved reading this. About the triangle/cube screenshots: were they taken on Linux running on a physical Mac? How were you able to deploy your driver? Does the M1 GPU have a basic text/console mode allowing you to start and work with Linux?
Awesome job.
Displaying to the screen and such was already working; you can already use Asahi Linux and have a GUI and everything, it's just that it's all rendered by the CPU right now.
I've never played games on my M1 Macbook - what are some popular reasonably graphics intensive games that it would support? Could it run Dota2 for example?
Disco Elysium, Hades and Civ VI run really well on my M1 MBA (using a 4K display). These games are not as resource-heavy as Dota 2 AFAIK, but I'm comparing them to my maxed-out 16-inch MBP from 2020, which acted more like a cursed semi-sentient toaster than a high-spec laptop.
Resident Evil Village recently came out and it performs surprisingly well even on the low end MacBook Air M1 with only 7 GPU cores. What's even more impressive is that the game is playable (low gfx settings, 30fps) when running that machine on low power mode.
Dota 2 runs pretty great.
I'm loving seeing the shift from embedding and linking to tweets and twitter accounts to embedding and linking to fediverse posts and profiles.
Great work! Seems like Apple could make this a whole lot easier by giving even the slightest support...
It is irksome to me given how much Linux is used inside Apple (board bringup, debugging, etc). You benefit from these gifts, Apple, give back a teensy bit in return. Everybody wins.
https://opensource.apple.com/
is there an easy way to unvoicemod the streams?
Turn on closed-captioning, and mute the sound.
This looks to be very impressive and interesting but the writing style makes it an incredibly aggravating slog to get through.
Is anyone porting the GPU driver to Windows?
I think there are larger barriers to getting Windows running on Apple Silicon that would need to be addressed first.
For one example, Windows ARM kernels are pretty tied to the GIC (ARM's reference interrupt controller), but Apple has its own interrupt controller. Normally on ntoskrnl this distinction would simply need hal.dll swapped out, but I've heard from those who've looked into it that the clean separation has broken down a bit and you'd have to binary patch a windows kernel now if you don't have source access.
hal.dll no longer exists on Windows. It’s just a stub for backwards compat now.
What you can do is have a small hypervisor simulate the needed bits…
Apple Silicon doesn't use GIC, but uses AIC (Apple Interrupt Controller).
"Apple designed their own interrupt controller, the Apple Interrupt Controller (AIC), not compatible with either of the major ARM GIC standards. And not only that: the timer interrupts - normally connected to a regular per-CPU interrupt on ARM - are instead routed to the FIQ, an abstruse architectural feature, seen more frequently in the old 32-bit ARM days. Naturally, Linux kernel did not support delivering any interrupts via the FIQ path, so we had to add that."
https://news.ycombinator.com/item?id=25862077
TL;DR: No standard ARM interrupt controller, custom controller requires quirky architectural features
Keep an eye on this project: https://github.com/amarioguy/m1n1_windows
Porting what?
GPU driver. So that one can install Windows on Apple Silicon and get accelerated graphics.
This made me laugh. Had the same question lol.
MS or Apple can port it themselves, but I think there's not so much interest on the Apple side
You can always just run NT in a VM under Linux:)
Apple should upstream their drivers.
Who is Asahi Lina? Is that an actual person?
> Apple should upstream their drivers.
Apple don't have linux drivers. It would be great if they wrote some, but it's never going to happen.
> Who is Asahi Lina? Is that an actual person?
The virtual persona of an actual person who has chosen to remain anonymous (hence the name which would be a crazy coincidence otherwise).
While anonymous, they did share some personal information about 10 minutes into this video: https://m.youtube.com/watch?v=LonzMviFCNs
They are Canadian born, currently studying in Japan, so that explains some of the cultural mix.
> Who is Asahi Lina? Is that an actual person?
Man... if I was a conspiracy theorist who believed Apple was genuinely evil, what if Asahi Lina is an Apple employee? ;)
>Asahi Lina, our GPU kernel sourceress. Lina joined the team to reverse engineer the M1 GPU kernel interface, and found herself writing the world’s first Rust Linux GPU kernel driver. When she’s not working on the Asahi DRM kernel driver, she sometimes hacks on open source VTuber tooling and infrastructure.
Their = apple? Their = Asahi Lina's?
Asahi Linux has been upstreaming, but of course it's ongoing. The GPU driver in particular depends on some Rust-in-the-kernel bits which aren't in the mainline kernel yet. The 6.1 kernel has some Rust bits, 6.2 will have more, but I don't believe that will be enough for the GPU driver ... yet.
Asahi Lina is a maintainer in the Asahi Linux project. She is now well known because of the achievement she earned: programming the Asahi Linux GPU driver for MacOS.
Is Asahi Lina a pseudonym? Is Asahi Linux named after her? Or is it all one big coincidence?
I assume you mean the Asahi Linux GPU driver for the Mac M1? Or does this run on top of MacOS somehow?
I think it’s a vTuber anime persona of a very talented programmer or something?
And, sorry for continuing this thread, what is a "vTuber anime persona"?
> Apple should upstream their drivers.
Apple's drivers are upstreamed, in Darwin. I'm not aware of any reason to believe that Apple has any Linux drivers that they could upstream.
I understood deadnaming to generally pertain to gender identity. I don't think it's far-fetched to initially consider Asahi Lina's name as a Pseudonym/Alias/Pen-Name, as many creatives (authors, artists, musicians) have been doing for hundreds of years.
If it is a gender identity decision, I still don't view it as malicious for the OP to ask. The context just isn't there in the blog post to make that clear.
Let's not head down this direction of madness please.
I've followed Japanese vtubers for some time and that is CERTAINLY not the case. Vtubers are just aliases for the real person. And each person picks and chooses how much they blend their real lives into that alias.
There are even some vtubers that will have a camera facing on themselves while they stream as a vtuber (for example stream their body, but not their face) or will alternate streams between a vtuber persona and a real live camera or vtubers who stream as a vtuber but the real person behind the vtuber is an open secret (i.e. artists who engage in vtubing but sell artwork at comic conventions attending as a real person). There's a huge range and spectrum of ways people choose to do vtubing.
(Note: A lot of the latter cases are more possible in Japan because of the general social/legal concept there that taking pictures of people without their permission is at least extremely rude and sometimes also illegal if you don't blur their face when publishing it. This is helped by the fact that it's a legal requirement that all devices capable of taking photographs must make a photographing noise when doing so. For example on iPhone in Japan it is impossible to silence the shutter sound effect without modifying the device hardware.)
Assuming it is the case I don't think it's polite to share this information. I don't know their motivation for creating a separate public image, but I think we should respect their decision to do so by not connecting them.
> Apple should upstream their drivers.
To what upstream project?
I can only assume the poster meant Apple should add Linux drivers for the M1/M2 to the mainline Linux kernel.
The Linux kernel