Well, it’s certainly more interesting than an email client, consider yourself lucky.
Mama told me not to come.
She said, that ain’t the way to have fun.
Isn’t the reverse true? If you make separate models for each query, the ORM knows exactly what data you need, so it can fetch it all at once. If you use generic models, the ORM needs to guess, and many revert to lazy loading when they’re not sure (i.e. lots of queries).
That’s at least my experience with SQLAlchemy; we put a lot of effort into reducing those extra calls because we’re using complex, generalized structures.
Ah, I see. So you’re expecting to have one object for creation, updates, queries, etc.
I work with something like that at work (SQLAlchemy in Python), and I honestly prefer the Diesel design. I build an object for exactly what I need, so I’ll have a handful of related types used for different purposes. In Python, we have a lot of “contains_eager” calls to ensure data isn’t lazy loaded, and it really clutters up the code. With Diesel, that’s not necessary because I just don’t include data that I don’t need for that operation. Then again, I’m not generally a fan of ORMs and prefer to write SQL, so that’s where I’m coming from.
That’s too bad, I was thinking of replacing our nginx proxy with Rust. We need a little logic, so we currently have nginx -> gateway service -> microservice, and I was hoping to drop nginx. Maybe I still can, but it sounds like there would be some tradeoffs.
Can you give an example? Pretty much everything in Diesel is abstracted away through trait macros.
The only thing worse than a bad example is documentation like this:
fn do_thing(…)
Does thing.
It adds nothing, other than letting you know they were there and decided not to actually provide something useful.
Looks like a pretty small release, at least from the user’s perspective. Congrats to everyone that worked on it!
Cool. Good work!
559.7ms
So they optimized something from a half second to something in the microseconds. That’s certainly cool, but imo not the place to spend time optimizing. Especially since it’s probably mostly startup time, since the difference between the benchmark and bigger schema is 100ms in JS and 15x in Rust.
When you say “fast JS runtime,” did you try bun? AFAIK, it is supposed to have way better startup time than V8.
It’s certainly cool though!
I’m just a hobbyist myself as well, but I’ve talked to actual professionals in the field, so I’m pretty sure that’s general wisdom.
And in game dev, a lot of what you’re doing is exploratory:
Requiring a rebuild for each of those would take too much time.
Ah, apparently for now you’re not allowed to allocate. But Vec::new_in(allocator) looks interesting. This works in nightly today:
#![feature(allocator_api)]
use std::alloc::Global;

fn main() {
    const MY_VEC: Vec<i32> = const {
        Vec::new_in(Global)
    };
    println!("{:?}", MY_VEC);
}
Maybe at some point I can append to it at compile time too. I’d love to be able to put a const {} and have allocations that resolve down to a 'static, and this seems to be a step toward that.
I guess I’m just excited that Vec::new() is the example they picked, since the next obvious question is, “can I push?”
How was working with Leptos? It certainly looks cool.
Honestly, I disagree, but I obviously haven’t seen the code in question.
Go has a lot of really nice things going for it:
My problem isn’t with normal program flow, but that the syntax is deceptively simple. That complexity lives somewhere, and it’s usually in the quirks of the runtime. So it’s like any other abstraction, if you use it “correctly” (i.e. the way the maintainers intended), you’ll probably be fine, but if you deviate, be ready for surprises. And any sufficiently large project will deviate and run into those surprises.
Still working on my p2p “lemmy.” I finished a wave of FE work, so now updating the BE to match. Next steps:
Once that’s done, I’ll need another wave of FE work and then lots of testing before I’m ready to publish code. Hopefully I’ll get there next month.
I don’t think there’s a problem whatsoever. Rust just isn’t a great choice for projects that need to iterate quickly. People online claiming that’s not the case doesn’t change that fact.
If you need fast iteration time and can sacrifice memory safety, use a scripting language. I like using Lua within Rust for this reason: I can iterate quickly and move expensive logic that’s not going to change much into a Rust lib.
OP should’ve known this; they already had experience writing games, they just ignored the truth because some neckbeards told them to. It’s okay to ignore people on the Internet when they’re wrong.
Yup, that was my impression as well.
Write the part you’re interested in, and find a solid project to handle the rest. If you want to write a game, use a popular game engine. If you want to write a game engine, use a popular scripting language to test it out. And so on.
Oh yeah, that is really nice, and something fantastic about Go.
That said, I found that I care about that a lot less now than I used to. With everything running through CI, having a build take a few minutes instead of a few seconds isn’t really a big deal anymore. And for personal things where I used to build small Go binaries, I just use Python, mostly because it’s easier to drop into a REPL than to iterate with Go.
I like Go in theory, and I hope they fix a lot of the issues I have with it. But given that Go 2 isn’t happening, maybe they won’t. Or maybe they’ll do the Rust editions thing (seems to be the case in that article) so they can fix fundamental issues. IDK. But I’m guessing some of the things I want aren’t happening, like:
- map[K]V should be concurrency-safe, or at least have a safe counterpart w/o needing an import
- interface{}(T(nil)) == nil - or better yet, no nil at all

Those are pretty fundamental to how Go works, though maybe the last one could be fixed, but it has been there since 1.0 and people have complained since 1.0…
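The nil-in-interface one is the quirk that bites people most. A minimal demonstration (the type T here is hypothetical, just to show the shape of the problem):

```go
package main

import "fmt"

type T struct{}

func main() {
	var p *T // a nil *T
	var i interface{} = p
	// The interface now holds a (type: *T, value: nil) pair,
	// so comparing it to untyped nil is false.
	fmt.Println(i == nil) // false
	fmt.Println(p == nil) // true
}
```

A nil pointer stored in an interface is not a nil interface, which is why people keep asking for the comparison to just work (or for nil to go away).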
Woo! Learn You a Haskell for Great Good!