It will just force them to document and open up their APIs and protocols.
The context that led DTolnay to write this: https://lemmy.world/post/4393780
And ThePhd didn’t like it: https://pony.social/@thephd/111005164984251004
The only time it has ever complained was a case where my platform does define the behavior and I was intentionally relying on that.
If by platform you mean the target CPU, you should be aware that it's still undefined behaviour and could break under optimization, unless your compiler also makes a commitment to define that behavior that is stronger than what the standard requires.
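A minimal sketch of the distinction, using Rust integer overflow as the example (the same idea applies to C with `-fwrapv`): the target CPU wraps on overflow either way, but only one of these operations turns that into a guarantee the optimizer must respect.

```rust
fn main() {
    // Defined: `wrapping_add` is the language's explicit commitment to
    // two's-complement wrapping, regardless of what the standard semantics
    // of plain `+` are. The optimizer cannot assume this never overflows.
    assert_eq!(i32::MAX.wrapping_add(1), i32::MIN);

    // UB: `unchecked_add` tells the compiler overflow never happens, so
    // relying on the CPU wrapping here is exactly the trap described above.
    // (Left commented out on purpose.)
    // let x = unsafe { i32::MAX.unchecked_add(1) };

    println!("wrapped: {}", i32::MAX.wrapping_add(1));
}
```

The point is that "my CPU wraps" and "my toolchain defines wrapping" are different claims; only the second one survives the optimizer.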
As someone building embedded systems, the compile (in release mode, otherwise the program doesn't even fit) + flash + run tests with limited visibility workflow is just soooo slow; having to do so little actual debugging thanks to the type system is a godsend.
I watched Lower Decks and I'm also confused
One thing this article misses is that multi-threaded executors can very well optimize for latency.
If a task happens to be slow (say, parsing a large JSON blob coming from a request), the other request handlers can still run on other worker threads instead of queueing behind it and inheriting that large request's latency.
There was always a need for memory safety; we just didn't know how to achieve it for low-level software without a significant performance cost.
Now that we have a solution, it's urgent to deploy it.