so there must be some reason why they went with this design.
Some applications have a hard zero-alloc requirement.
I never liked those FMVs, and they age badly too; they looked like a blurry mess when I was playing PS1 games on my PC through an emulator.
First and foremost: it’s not about optimization, as I have mentioned before. Never once have I intended to optimize the conversion, because I know it is pointless. Stop making that assumption, and only then can we continue the discussion.
There is no reason why people cannot use Rust as “C, but actually type-safe”. A type-safe representation of C’s error code pattern is a part of that. This way the code is “backwards compatible” with debuggers designed for C/C++, such that “-EINVAL” is actually displayed as “-EINVAL” in the debugger and not as a mysterious struct of (discriminant, data).
The reason I asked the question was that I wanted to keep an int throughout the program.
It’s not for performance reasons; I just feel there is a certain elegance in keeping things type-safe entirely “for free”, much like how Option<&T> is actually just a regular T*, even if it could be pointless in the big picture.
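A minimal sketch of what this “type safety for free” looks like in practice, assuming an errno-style convention where zero means success and any non-zero value is an error code. The type alias Errno and the function check are illustrative names, not from any existing API; the size equalities rely on Rust’s niche optimization (guaranteed for Option of a reference, and observed in practice for Result<(), NonZeroI32>):

```rust
use core::num::NonZeroI32;

// Hypothetical errno-style result: Ok(()) is stored as the integer 0,
// Err(e) as the raw non-zero code, so the whole type fits in one i32.
type Errno = Result<(), NonZeroI32>;

// Wrap a raw C-style return value into the type-safe representation.
fn check(raw: i32) -> Errno {
    match NonZeroI32::new(raw) {
        None => Ok(()),
        Some(e) => Err(e),
    }
}

fn main() {
    // Same size as a plain int: no (discriminant, data) struct in memory.
    assert_eq!(core::mem::size_of::<Errno>(), core::mem::size_of::<i32>());
    // Option<&T> likewise occupies exactly one pointer, with None as null.
    assert_eq!(
        core::mem::size_of::<Option<&u8>>(),
        core::mem::size_of::<*const u8>()
    );
    assert!(check(0).is_ok());
    assert_eq!(check(-22).unwrap_err().get(), -22); // -EINVAL
}
```

A debugger that only understands C will see the Err case as the plain integer -22, which is exactly the point.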
Release builds of simple-raytracer are now benchmarked too. Release builds are slower but should still be faster than the LLVM backend.
Is the “faster, slower” here referring to compile times?
I don’t know where D fits nowadays and which problem it’s trying to solve.
My experience has been similar - it’s hard to categorize the language.
As a low-level system language like C, C++, Rust, Zig? The garbage collector makes it a hard sell to other people, even though one can opt out of it.
As a higher-level application language like Java and Go? D frequently gives me a “low-level language” feel, but I am not sure why.
As a scripting language? I feel like its type system works against the rapid-prototyping coding style commonly seen in scripts.
The IPv4 exhaustion is far more gnarly in developing countries. Something on the scale of hundreds of people sharing one IPv4 address.
If I want to get a public IPv4 address from my ISP, I have to spend extra. Some ISPs in my country simply don’t give public IPv4 addresses anymore. They have completely exhausted their pool.
You can’t talk about NAT and then mention speed in the same statement…
The 128-bit IPv6 addresses are just four simple 32-bit integers if you think about it, but with NAT you have to juggle and maintain the (internal IP, internal port, external IP, external port, protocol) tuples all the time. That’s a significant overhead. Also, switches typically deal with the Layer 2 stuff; IP is Layer 3.
See the HN discussion for more information.
It’s just easier to do IPv4 in every way
Except when you have to do NAT traversal. Then you are in a world of hurt.
You don’t have NATs in the IPv6 world…
The numerous CGNAT deployments worldwide suggest otherwise.
Composition does not necessitate the creation of a new field like x.reader or x.writer, what are you on?
I think if you consider anything post-C++03 (so C++11 or newer) to be “modern C++”, then Concepts must be the top example, no?
Counting from C++0x that’s almost a decade of waiting.