I believe much more in making C/C++ safer than using something as complex as Rust.
SafER is better than something so deeply complex that it can only be understood by Rust experts.
In my experience (20+ years with C/C++, and about 4 years with Rust), Rust is significantly less complex than C++, while being similarly capable. The extra syntax that throws off so many C++ devs is almost exclusively about data types and lifetimes, which I find very useful for understanding my own code and others', and which I wish I had in C++.
Some of this is just learning from experience: by the time C++ programmers knew they wanted destructive move semantics, around the turn of the century, they already had large codebases which relied on C++ copy assignment, so, too bad. It took significant extra effort to land C++11 move semantics, which are both less useful and have worse ergonomics. Rust, by contrast, knew it wanted destructive moves from the start, so that's just how everything works in Rust.
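For readers who haven't seen the distinction in practice, here is a minimal sketch (in Rust, with made-up names) of what destructive move means; in C++, by contrast, a moved-from object is typically left in a valid but unspecified state you can still touch at runtime.

    fn main() {
        let a = String::from("hello"); // String owns a heap allocation and is not Copy
        let b = a;                     // plain assignment moves ownership; `a` is dead from here on
        println!("{b}");
        // println!("{a}");            // uncommenting this is a compile-time error: use of moved value
    }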
But there are a bunch of unforced errors in C++ design beyond that. Default implicit conversion is a choice and a mistake. Multiple inheritance is a mistake; Stroustrup even says he did it because it was easy, which is exactly the same cause as Hoare's null reference. Choosing "all correct programs compile" (the other option was "no incorrect programs compile"; you can't have both, see Rice's theorem, from Henry Rice's PhD thesis) was a mistake. My favourite bad default in C++ is atomic memory ordering. The nasty trick here is that picking a default was the mistake: it's not that they picked the wrong one, it's that they picked a default at all. C++ programmers end up writing code which doesn't specify the ordering even though the ordering was their only important decision.
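To make the memory-ordering point concrete: in C++, std::atomic operations silently default to memory_order_seq_cst, while in Rust there is no default, so the decision is forced at every call site. A small sketch (names are illustrative):

    use std::sync::atomic::{AtomicUsize, Ordering};

    static HITS: AtomicUsize = AtomicUsize::new(0);

    fn record_hit() -> usize {
        // There is no default ordering: `HITS.fetch_add(1)` would not compile.
        HITS.fetch_add(1, Ordering::Relaxed)
    }

    fn main() {
        record_hit();
        println!("{}", HITS.load(Ordering::SeqCst));
    }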
Great, if you are right everyone is going to be using Rust eventually.
I like the concepts proposed by Rust but do not like fighting with the borrow checker or sprinkling code with box, ref, cell, rc, refcell, etc.
At some point there’s going to be a better-designed language that makes these pain points go away.
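For anyone who hasn't hit it, a hypothetical sketch of the kind of 'sprinkling' the parent means: shared, mutable state that steps outside the single-owner model gets wrapped in Rc for shared ownership and RefCell for interior mutability, with the borrow rules then checked at runtime instead of compile time.

    use std::cell::RefCell;
    use std::rc::Rc;

    fn main() {
        let log: Rc<RefCell<Vec<String>>> = Rc::new(RefCell::new(Vec::new()));
        let writer = Rc::clone(&log);                  // second handle to the same Vec
        writer.borrow_mut().push("hello".to_string()); // runtime-checked mutable borrow
        println!("entries: {}", log.borrow().len());   // prints "entries: 1"
    }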
The real answer should have been a new language that has memory safety without all the extra conceptual changes and orthogonal subsystems that Rust brings. The core value of safety did not need the reinvention of everything else, with the accompanying complexity and cognitive load. Take Zig, which instead of introducing a new metaprogramming language uses... Zig - imagine using the same language instead of inventing an additional one, with all the accompanying complexity, cognitive load, and problems. Rust is for those who revel in complexity. And traits - traits are extra complexity not needed for safety. And Result and Option and move by default - none of these things were needed, but they all add up to more complexity, unfamiliarity, and cognitive load. When you add it all together and intertwine it, you end up with something so unfamiliar that it no longer looks like "ordinary programming"; it looks like something from the Cambrian period.
> In my experience (20+ years with C/C++, and about 4 years with Rust), Rust is significantly less complex than C++, while being similarly capable
OK, what about if you have an existing code base of hundreds of thousands, or even millions, of lines of C or C++? Still think it's going to be better, and especially cheaper, to "rewrite it in Rust" instead of just porting it to Fil-C or something similar?
100% agree.
Rust is better than C++.
I don’t know what you mean by SafER, but it’s important to remember that Fil-C sacrifices a lot of performance for that safety, which undercuts the very reasons you’d be running that software in C in the first place - otherwise C would have been a bad language choice for it. Sometimes this won’t matter, but there are places Fil-C won’t be able to go that Rust can - embedded and OS kernels come to mind. Other examples would be browsers or games. Rust gives you the safety without giving up performance.
Also, I could be wrong, but I believe any assembly linked into Fil-C bypasses the safety guarantees, which would be something to keep in mind (not a big deal generally, but a source of hidden, implicit unsafety).
I’m apparently comment-happy on this post, but - the capitalization only looks funny because it starts the sentence - I’m pretty sure OP was saying safER, as opposed to SAFE (i.e. comparatively safer rather than totally safe). I have been quite charitable to OP in some sibling comments and will do so here. I think OP is attempting to give Fil-C credit for being an attempt to increase the overall memory safety of existing code without incurring the complexity of a new language or the cost of rewriting long-running, widely distributed code. It is a decent sentiment and a viable methodology for achieving a laudable goal, but it is certainly susceptible to caveats like the performance penalty you mention.
If you can’t understand ownership, I’m baffled how you believe you can write well-behaved C or C++.
Rust at least embeds this information in the API, with checks. In C and C++ it lives in doc comments at best.
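A small sketch of what 'embedded in the API' means here (the function names are made up): ownership and borrowing are visible in the signatures themselves, and the compiler checks every call site against them.

    fn consume(buf: Vec<u8>) { drop(buf) }          // takes ownership
    fn inspect(buf: &[u8]) -> usize { buf.len() }   // shared, read-only borrow
    fn append(buf: &mut Vec<u8>) { buf.push(0) }    // exclusive, mutable borrow

    fn main() {
        let mut data = vec![1u8, 2, 3];
        println!("{}", inspect(&data));
        append(&mut data);
        consume(data);
        // inspect(&data); // compile error: `data` was moved into `consume`
    }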
I’m going to start this comment by specifying that I don’t know what OP finds complex about Rust, and, unfortunately, a large amount of discussion on the topic tends toward strawmanning by people looking to argue the ‘anti-Rust’ side of said discussions. Additionally, the lack of a contextual and well-considered position against some specific aspect of Rust as a language is very common, and at worst the negative take is really just an overall confrontational stance against Rust’s broad uptick in usage and its community of users, as perceived (and also strawmanned) in a generally negative light. But since borrowing is not explicitly mentioned by GP, I will give a slightly different position than they might; I think this is an interesting perspective difference to discuss and not a blatant ad hominem argument used to ‘fight’ Rust users on the internet.
From my position, the complexity incurred by ownership semantics in Rust does not stem from Rust’s ‘formalization’ and semi-reification of a particular view of ownership as a means of program constraint. The complexity of Rust, in relation to ownership, comes from the lengths I would have to go to in order to design systems using other logical means of handling references (particularly plain hardware-implemented pointers) to semantic objects: their creation, specification, and deletion. The same goes for other means of handling resources (particularly memory acquired via allocation): their acquisition, their transport through local and distributed processes (from different cores to over the wire), and their deletion or return to the OS.
Rust adopts ownership semantics (and value semantics to a large degree) to the maximum extent possible and has enmeshed those semantics throughout all levels of abstraction in the language definition (as far as a singular authoritative ‘definition’ can be said to exist as a non-implementation-related formalism). At the level of Rust the language, not merely discussions and discourse about the language, ownership semantics are baked in at a specified granularity, and that ownership is compositional over the abstraction mechanisms provided. These semantics dictate how everything from a single variable on the stack, to large allocations in a general heap, to non-memory ‘resources’ like files, textures, databases, and general processes, is handled in a program. On top of the ownership semantics sit the rest of Rust’s semantics, and they are all checked at compile time by a singular oracular subsystem (i.e. the borrow checker).
The complexity really begins to rise, for me, if I want to attempt to program without engaging with ownership as the methodology or semantics for handling all of the above-mentioned ‘resources’. I prefer, and believe, that a broader set of formalisms should be available for ‘handling’ resources, that those formalisms should exist with parameterized granularity, and that the foundational semantics for those mechanisms should come from type systems’ ability to encode capabilities and conditions for particular types in a program. That position is in contrast to the universal and foundational ownership semantic, especially with the individualistic fixed granularity, that Rust chose.
That being said, it is bordering on insanity to attempt to program in such a ‘style’/paradigm/method in Rust. My preferences make Rust’s chosen focus on ownership seem complex at the outset, and attempts to impose an alternate formalism in Rust (which would, by necessity, have to be some abstraction over Rust’s ownership semantics which hid those semantics and presented a different set of semantics to the programmer) take that complexity to even higher levels.
The real problem with trying to frame my position here as complexity is the following: to me, Rust and its ownership semantic are complex because I do not like its chosen core semantic construct, so when I think about achieving something using Rust I have to deal with additional semantics, semantic objects, and constraints on my program that I do not think are fit for purpose. But if I wanted to program in Rust without trying to circumvent, ignore, or disregard its choices as a language, and just decided to accept (or embrace) its semantic choices, the complexity I perceive would decrease significantly and immediately.
For me, Rust’s ownership semantics create an impedance mismatch that, at the level of language use, FEELS like complexity (and acts like complexity in a lot of ways), but is probably more correctly identified as just what it is… an impedance mismatch, nothing more and nothing less. I simply chose not to use Rust to avoid that, but others get fixated on these issues, never actually get to the bottom of them, and default to calling it complexity during discussion.
All in all, I am probably being entirely too optimistic about the comments on the complexity of Rust and ownership, and most commenters are just fighting to fight, but I genuinely believe there is much to discuss and work through in programming language design theory, and writing walls of text on HN helps me do that.
IKOS (from NASA) and Fil-C together make it possible to write safer code than vanilla Rust.
https://github.com/NASA-SW-VnV/ikos
I think it might be an interesting experiment to try to duplicate as much of the functionality of IKOS as possible in vanilla Rust using a no_panic-like [0] technique. My guess is that most of the checks are already done by Rust or can be covered by such a technique, albeit perhaps with more hand-holding than for IKOS. The pointer alignment and comparison checks are the ones I'm least certain about since Rust is relatively lax with those.
[0]: https://docs.rs/no-panic/latest/no_panic/
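A rough sketch of the technique I have in mind, using the no-panic crate [0] (the function is made up, and my understanding is that the check only bites in optimized builds): a reachable panic path becomes a link-time error, so you are pushed toward APIs that cannot panic.

    use no_panic::no_panic;

    // Links cleanly: the potential out-of-bounds panic is replaced with an Option.
    #[no_panic]
    fn checked_get(bytes: &[u8], i: usize) -> Option<u8> {
        bytes.get(i).copied()
    }

    // By contrast, a body of `bytes[i]` would fail to link in release builds,
    // because the bounds-check panic path would still be reachable.

    fn main() {
        assert_eq!(checked_get(b"abc", 1), Some(b'b'));
        assert_eq!(checked_get(b"abc", 9), None);
    }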
Rust is a much easier language to master than C++.
I’m pretty sure there is no realistically feasible way to ever prove your statement. But I hope a majority of people can recognize the sheer magnitude of C++ as a language and accept that it may not be possible to master the whole thing. That Rust is a ‘smaller’ language than C++ by some metrics (most metrics, really) is another thing I would hope most people can accept. So, given that comparing the two languages as wholes is a semi-intractable discussion, I would propose the following:
When considering some chosen subset of functionality for some specified use case, how do Rust and C++ compare in the ability to ‘master’ them? There are wide and varied groups (practically infinite) of features, constructs, techniques, and implementations that achieve targeted use cases in both languages, so when constructing a given subset, which language grants the most expressivity and capability in the ‘tighter’ (i.e. more masterable) package?
I think that’s a way more interesting discussion to have. Obviously, where the specified use case requires Rust’s definition of memory safety to hold 100% of the time (excluding a small-ish percentage of delimited ‘unsafe but identifiable’ sections), the Rust subset will be smaller, due to the mandatory abstractions required to put C++ anywhere near complete coverage. So it may make sense to allow the subset to be defined as not only constructs in the base language, but also to include sealed abstractions (philosophically if not in reality) as potential components in the constructed subsets.
I may have to try to formulate some use cases to pose in a longer write-up, to see if any truly experienced devs can lay out their preferred language’s best candidate subset in response. It would also be fascinating to see what abstractions and metaprogramming would be used to implement the subset candidates, and to figure out how that could factor into an overall measurement of the ‘masterable-ness’ of the given language (i.e. how feasible it is to rely on a subject-matter expert to implement any proposed subset for any given use case).
It could be interesting to try to apply Fil-C techniques to Rust. After all, while safe Rust is fully memory safe and has many other forms of bug-resistance that Fil-C lacks, Fil-C is far safer than unsafe Rust. Maybe one could achieve both.
And there are probably a lot of optimizations available by skipping safety checks in safe Rust where the compiler could prove that the check was already done.
> It could be interesting to try to apply Fil-C techniques to Rust.
The bulk of the work Fil-C does is implemented as an LLVM pass [0, 1], so at least in principle it should be possible to get Rust code to compile using Fil-C as a "backend".
[0]: https://fil-c.org/how
[1]: https://fil-c.org/compiler
Rust is not complex for C++ experts. Both have the same concepts. For beginners, of course, Rust is harder, but that doesn't mean it's bad for beginners.
This is awesome. Would love to see if it catches some of the Qt bugs I found that haven't been resolved yet [1].
[1] https://qt-project.atlassian.net/browse/QTBUG-122658
So far I have only tried to get Qt Base up and running, but Qt Declarative will be next.
That would be awesome (:
AFAIK Fil-C does not catch all memory safety bugs; for example, some use-after-free cases are just not bugs and work as intended (you still access the original data/allocation). This means that it's not a sanitizer, and code that runs fine under Fil-C may show UB when run normally.
> for example some use-after-free are just not bugs but work as intended (you still access the original data/allocation)
That doesn't sound right? For example, from the Fil-C GC docs [0]:
> If you call `free`, the runtime will flag the object as free and all subsequent accesses to the object will trap. Additionally, FUGC will not scan outgoing references from the object (since they cannot be accessed anymore).
[0]: https://fil-c.org/fugc
Any use after free is a bug. There is no way you can use that area without ownership.
Eh, I daily-drive a -fsanitize=address -fsanitize=undefined build of Qt, and actual memory bugs are almost never a thing - I think the only time I had some was in tooling executables such as qmllint, but not in the framework itself. Most of the bugs, by and large, are more "behaviour" bugs.
I’m impressed that Qt runs clean enough under UBSan for daily use.
Here is another bug that led to a crash, which I reported while developing my block editor:
https://qt-project.atlassian.net/browse/QTBUG-124572
Good to know! Which version of Qt are you using?
Depends on the sense in which you want it to "catch" the bugs. As this README notes/quotes,
> All memory safety errors are caught as Fil-C panics.
If your problem is a memory-based bug causing a crash, I think this would just... catch the memory-based bug and crash. Like, it'd crash more reliably. On the other hand, if you want to find and debug the problem, that might be a good thing.
Sure, if the memory error is an immediately crashing one like a null pointer deref; but if it is (for example) a memory corruption (e.g. an out-of-bounds write or a write-after-free), then this would be super helpful in exposing where those are happening at the source.
That’s what “catch” means here. As in, catch it in the act. Tools that make bugs crash more reliably and closer to the source of the problem are extremely valuable.
Half a second's worth of execution is a lifetime of debugging at today's clock rates, so the closer to the fault the crash happens, the sooner you can fix it.
It would be nice to have one thread about Fil-C without it becoming about Rust.
I thought the point of Fil-C was to be a drop-in safe replacement for C. This project existing implies it isn't. What's going on?
Fil-C is as compatible with C as different variants of C are with one another.
The porting effort is usually less than if you were going from 32-bit to 64-bit for the first time. For some programs, you need zero changes.
In Qt I think the changes were stuff like:
- Use an intrinsic instead of inline assembly for cpuid.
- Change how a tagged pointer works (i.e. a 64-bit value used to store some integer bits and a pointer)
(Source: I’m the Fil-C guy and I watched the Qt porting happen from the sidelines.)
Most large C code bases aren’t really written in C. They’re written in an almost-C that includes certain extensions and undefined behavior. In this case, Qt uses inline assembly (an extension) and manipulates pointers as integers (undefined behavior).
I’m always thankful when people give the broad perspective and context in a discussion, which your comment does. The specifics of this particular project’s usage of almost-C are not something I could have quickly figured out, so thanks. For such a large program, and one as old as Qt is at this point, I find it impressive and slightly amazing that it has in some sense self-limited its divergence from standard C. It would be interesting to see what something like SQLite includes in its almost-C.
AFAICT, there are some patterns that are not supported, like converting pointers to/from integers and doing things with them like bitmasking (which is a huge anti-pattern, but some code bases do it).
QtBase is C++ first of all.
Massive projects like Qt also push compilers to their limits and use various compiler-specific and platform-specific techniques which might appear as bugs to Fil-C.
It isn't entirely drop-in for all programs.
It's not done yet.
Sure fooled me. I follow his Twitter account, and there isn't much he hasn't got building with it at this point. UX comes later. Amazing that it's the work of one random person.
Very cool, thanks! I will try this with Mudlet.
Interesting to see Fil‑C used with a large framework like Qt. The fact that it compiles with minimal changes says a lot about the compatibility layer and the InvisiCaps approach.