Comment by lerno
16 days ago
Also, `fn` is used to make type inference for lambdas syntactically simple. But I would be lying if I said I haven't considered removing `fn` many times. Still, there are good reasons for keeping it, despite the break with C.
Do you think it is ever going to be removed, or do the pros of having it outweigh the cons?
I don't like to say "never", because other things may change in ways that invalidate previous conclusions.
For example, say that for some reason macros were removed (this is very unlikely to happen, but as a thought experiment): then the symmetry between macro and fn definitions would no longer be an argument, and the question could be revisited.
Similar things have happened before: the optional type syntax changed from `int!` to the more mainstream `int?`. So why did I stick with `int!` for so long? Because initially it was called a "failable" and had different semantics. Revisiting the syntax after the other syntax changes in 0.6.0 made it clear that `int?` was now fine to use.
So that's why I don't say never. But the situation would need to change in some way first.
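For concreteness, this is roughly how the current `int?` form reads. It is a throwaway sketch: the `faultdef` declaration and the names in it are purely illustrative, not something from the standard library.

```c3
module example;
import std::io;

faultdef NOT_FOUND;   // illustrative fault, not from the stdlib

// "int?" means the function returns either an int or a fault.
fn int? index_of(int[] haystack, int needle)
{
    foreach (i, value : haystack)
    {
        if (value == needle) return (int)i;
    }
    return NOT_FOUND?;
}

fn void main()
{
    int[3] data = { 10, 20, 30 };
    // "if (try ...)" unwraps the optional when there is no fault.
    if (try idx = index_of(data[..], 20))
    {
        io::printfn("found at index %d", idx);
    }
    else
    {
        io::printn("not found");
    }
}
```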
Thank you. I am not going to let "fn" stop me from trying out C3 anyway.
There is a single-header C library[1] that determines CPU features at runtime (similar to what libsodium does), so I might try to use it with C3 and implement BLAKE2. That might be a good starting point. Or perhaps even TOTP: a friend told me that implementing TOTP might give me some insight into the language, but for that I will need base32 (I checked, it exists[2]) and HMAC-SHA{1,256,512}, which may not be available in C3. There may already be an OpenSSL binding, or perhaps I could use OpenSSL directly from C3 (roughly like the sketch below the links)? The latter would be pretty cool.
Regarding the C library I mentioned: it might not work, because it is header-only and it generates the C functions using macros (see line 219). What do you think? Would this work from C3?
[1] https://zolk3ri.name/cgit/cpudetect/tree/cpudetect.h
[2] https://github.com/c3lang/c3c/blob/master/lib/std/encoding/b...
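What I have in mind is a direct binding roughly like this. It is only a sketch: I am assuming C3's `extern fn` declarations work for this, and I copied the `HMAC` signature from OpenSSL's `<openssl/hmac.h>`; I have not tried to compile it.

```c3
module totp_sketch;
import std::io;

// Assumed bindings to libcrypto (link with -lcrypto).
extern fn void* EVP_sha1() @extern("EVP_sha1");
extern fn char* HMAC(void* evp_md, void* key, CInt key_len,
                     char* data, usz data_len,
                     char* md, CUInt* md_len) @extern("HMAC");

fn void main()
{
    String key = "secret";
    String msg = "message";
    char[20] digest;        // SHA-1 digest size
    CUInt digest_len;
    HMAC(EVP_sha1(), key.ptr, (CInt)key.len,
         msg.ptr, msg.len, &digest[0], &digest_len);
    io::printfn("HMAC-SHA1 produced %d bytes", digest_len);
}
```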
---
I checked base32.c3. I am comparing it to Odin's, which can be found at https://github.com/odin-lang/Odin/blob/master/core/encoding/.... Apparently they added a way to gracefully handle error cases. Is it possible to do so with the current C3 implementation?
Edit: I noticed "@require padding < 0xFF : "Invalid padding character"", and there is "encoding::INVALID_CHARACTER", so I presume we can handle some errors or invalid input gracefully. That said, I prefer Odin's current implementation, because you can handle specific errors: not just "invalid character", but also invalid length and others (see base32.odin for more). Any thoughts on this?
Additionally, what are the differences between
and
exactly? If I want to handle an invalid padding character, would I have to get rid of "@require"?
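To make the question concrete: is the difference roughly this? (A sketch only; the `@require` line is the one from base32.c3, but the fault name and both function bodies are made up by me.)

```c3
module padding_sketch;

// Contract style: the caller must guarantee the condition; in a
// safe build a violation traps, it is not an error you can catch.
<*
 @require padding < 0xFF : "Invalid padding character"
*>
fn void pad(char[] block, char padding)
{
    for (usz i = 0; i < block.len; i++) block[i] = padding;
}

faultdef INVALID_PADDING;   // made-up fault for illustration

// Optional style: the same condition reported as a fault that the
// caller can catch and handle (or ignore) at runtime.
fn void? checked_pad(char[] block, char padding)
{
    if (padding >= 0xFF) return INVALID_PADDING?;
    for (usz i = 0; i < block.len; i++) block[i] = padding;
}
```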
Additionally, what if there are many more error cases? Would that just turn into one long line of possible errors? Could I use an enum of errors instead, or something similar to https://github.com/odin-lang/Odin/blob/master/core/encoding/...?
BTW, I like programming by contract, but I am not sure that all the error cases Odin covers could be expressed as a "@require", and I am not sure I would want to mix the two approaches either. What if I have a program where I want to handle even "padding < 0xFF" specifically (in terms of base32.c3), or where I don't want it to fail when "padding < 0xFF"? Would I really need to implement my own base32 encoding in that case?
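For example, this is the kind of handling I would like to be able to write. Again just a sketch: I am guessing at the decode function's name and signature, and at which fault constants exist beyond `encoding::INVALID_CHARACTER`.

```c3
module decode_sketch;
import std::io;
import std::encoding::base32;

fn void print_decoded(String input)
{
    // Guessed call: decode the base32 input, getting a fault on bad data.
    char[]? decoded = base32::decode(input);
    if (catch err = decoded)
    {
        switch (err)
        {
            case encoding::INVALID_CHARACTER:
                io::printn("invalid character in input");
            default:
                io::printn("some other decode error");
        }
        return;
    }
    // After the catch-and-return above, "decoded" is a plain char[].
    io::printfn("decoded %d bytes", decoded.len);
}
```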
---
Thank you in advance for your time and help!