Btw, choco (and maybe even winget?) already has gsudo, a tool which implements sudo. It is super handy, and having a native version is definitely better, but until it’s available, I recommend gsudo.
We had to work in Pharo for our OOP uni course, and it was one of the worst experiences I’ve had in school. Mind you, it was something like 7 years ago, so the language may very well be a lot better now, but the whole “your IDE is the code” thing felt cumbersome, it was buggy and crashed randomly, and in general I spent more time fighting with the IDE than doing anything useful.
It was a bad time, but also a great learning experience. Being able to work in something that IMO sucks is a useful skill, but I never want to see that language again :D
I’m starting to think that “good code” is simply a myth. They’ve drilled a lot of “best practices” into me during my masters, yet no matter how much you try, you will eventually end up with something overengineered, or with a new feature or a bug that’s really difficult to squeeze into whatever you’ve chosen.
But, ok, that doesn’t prove anything, maybe I’m just a bad programmer.
What made me sceptical, however, isn’t that I never managed to do it right in any of my own projects, but the last two years of experience porting games, some of them pretty well-known and large, to consoles.
I’ve already seen several codebases, each one with a different take on how to structure the core game architecture, and each one inevitably had some horrible issues that turned up during bugfixing. Making changes was hard: the code was either overengineered and almost impenetrable, or we had to resort to ugly hacks, since there simply wasn’t a way to do it properly without rewriting a huge chunk.
Right now, my whole programming knowledge about game architecture is a list of “this doesn’t work in the long run”, and if I were to start a new project, I’d be really at a loss about what the fuck I should choose. It’s a hopeless battle; every approach I’ve seen or tried still ran into problems.
And I think this may be the author’s problem - it’s really easy to see that something doesn’t work. “I’d have done it differently” or “There has to be a better way” is something you notice very quickly. But I’m certain that whatever he would propose, it’d just lead to a different set of problems. And I suspect that’s what may be happening with his leads not letting him stick his nose into stuff. They have probably seen that before, and it rarely helps.
I had the same issue with the gamedev industry, but thankfully I realized very quickly that that’s just how work works, and you usually have a choice: either earn a good living as a code monkey, find a job in a small company that has passion but can’t afford to pay you well, or do it in your free time as a hobby. Capitalism and passion don’t work together.
So I went to work part-time in cybersecurity, where the money is enough to reasonably sustain me, and I use the rest of my time to work on games. Recently, I’ve picked up an amazing second part-time job at a small local indie studio that is exactly the kind of environment I was looking for, with passion behind their projects - but they simply can’t afford to pay a competitive wage. I’m not there for the money, though, so I don’t mind and am happy to help them. Since there are no investors whose pockets I’m filling, and the company is owned by a bunch of my friends, I have no issue with being underpaid.
But it’s important to realize this as soon as possible, before trying to make a living from something you’re passionate about burns you out. A job has one purpose: to earn you a living. Companies will squeeze every single penny they can out of you, so fuck them, don’t give them anything more than the bare minimum, and keep your energy for your own projects.
And be careful with trying to earn a living on your own - because whatever you do, no matter how passionate you are, if it’s your only income and your life depends on it, you will eventually have to make compromises just to get by. It’s better to keep money separate from whatever you like doing, and keep your passion pure.
EDIT: Oh, I forgot to mention one important thing - I’m fortunate not to have children, I share living costs with a partner, and I live in a city with good public transport (so no need for a car) and free healthcare. I suppose that makes it a lot easier to get by on just a part-time job.
That got me thinking: is there any kind of statistic on the average maintainer age for major FOSS projects and libraries? Is the influx of new maintainers still going strong, or should we expect a really huge problem in the next few decades?
Also, what are some good resources if you want to start maintaining or collaborating on something, if you have zero experience with the dev side of the FOSS ecosystem?
I was working on a pretty well known game, porting it to consoles.
On PS4 we started getting OOM crashes after you’ve played a few levels, because PS4 doesn’t have that much memory. I was mostly new on the project and didn’t know it very well, so I started profiling.
It turned out that all the levels were stored as pretty verbose JSON files. And all of them were held in Unity ScriptableObjects, so even if you weren’t playing a given level, they all got loaded into memory, since once something references a ScriptableObject, it gets loaded immediately. That was 1.7 GB of JSON strings loaded into memory the moment the game started, and it stayed there for the whole play session.
I wrote a build script that compresses the JSON strings with gzip, and then decompresses them when the actual level is loaded.
That brought the memory used by all the levels down from 1.7 GB to 46 MB, while also cutting around 5 seconds off the game’s load time.
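The trick itself is tiny. Here’s a rough sketch of the idea rather than the actual build script (class and method names are made up), but the core is just plain System.IO.Compression:

```csharp
// Rough sketch of the idea, not the actual build script - names are made up.
// At build time: swap the raw JSON string on the ScriptableObject for gzipped bytes.
// At load time: inflate the bytes back into the JSON string the level loader expects.
using System.IO;
using System.IO.Compression;
using System.Text;

public static class LevelJsonGzip
{
    // Build step: compress the level JSON before it gets baked into the asset.
    public static byte[] Compress(string json)
    {
        byte[] raw = Encoding.UTF8.GetBytes(json);
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionLevel.Optimal))
                gzip.Write(raw, 0, raw.Length);
            return output.ToArray();
        }
    }

    // Runtime: only called when the player actually enters the level.
    public static string Decompress(byte[] compressed)
    {
        using (var input = new MemoryStream(compressed))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var reader = new StreamReader(gzip, Encoding.UTF8))
            return reader.ReadToEnd();
    }
}
```

The ScriptableObjects still get loaded eagerly, but a compressed byte array per level is a small fraction of the size of the decoded JSON string.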
This is my experience as well. I’ve always tried to be privacy-conscious and stick to self-hosted alternatives or FOSS, but I was also lazy and never really tried too hard. With the recent enshittification of almost every product that has a corporation behind it, it’s a lot more in my face that it’s shit and that I should be dealing with it.
It made me finally get a VPN, switch to the Mullvad Browser, and get rid of Reddit completely. I also finally got a Pixel with GrapheneOS and got a NAS running.
It’s also doing wonders for my digital addiction. The companies are grossly mistaken in assuming that my addiction to their service is greater than my immense hatred for forced monetization, fingerprinting and dark patterns. It turns out it’s not, and in the last few months I’ve dropped so many services I was never able to really stop using before, most of them thanks to popups like “You have to log in to view this content”, “This content is only available in the app”, or “You are using an adblocker…”. Well, fuck you. I didn’t want to be here anyway.
I’ve been mostly working in C# for the past few years (and most of my life), and the only C++ experience I have is from college, so it’s taking some getting used to. And that’s what I was getting at - thanks to college, where I was forced to really learn (or at least understand and be able to use) a wide range of drastically different languages, from Lisp through Bash, Pharo and Prolog to Java and C#, whenever I have to write something in a language I don’t know, it’s usually similar to at least one of them, and I’ve always been able to figure it out intuitively.
With Rust, even though it has an amazing compiler, I’m struggling - probably because borrowing and the very explicit error handling are concepts I’ve never had to deal with just to get MVP code working. Sure, that probably means the code wasn’t error-proof, which is exactly what Rust forces you to fix and which is amazing, but it makes it a lot harder to just write a quick script without prior knowledge when you have to.
I hope they’re teaching Rust at universities now; we definitely didn’t have it 8 years ago, which is a shame.
I was just thinking about something similar in regards to gamedev.
For the past few years since college, we’ve been working on a 2D game in our spare time, built on Unity. And for the past few months I’ve mostly been working on performance, and it’s still mind-boggling to me how it’s possible that we’re having trouble with performance at all. It’s a 2D game, and we’re not even doing that much with it. That said, I know it’s mostly my fault as the lead programmer - most of the core systems were written back when I wasn’t really an experienced programmer, and it shows - but still. It shouldn’t be this hard.
Is the engine overkill for what we need? Probably. Especially since it’s 2D, writing our own would probably be better - we don’t use most of the features anyway. The only problem would be tooling for scene building, but that’s also something that shouldn’t be that hard.
The blog post is inspiring. Just yesterday I was looking into what I’d need to get basic rendering done in Rust, so I may actually give it a try and see if I can make a basic 2D engine from scratch - it would definitely be an amazing learning experience. And I don’t really need that many features, right? Rendering, audio, sprite animation, collisions and a scene editor should be sufficient, and I have a vague idea of how I’d write each of those in 2D.
Hmm. I wonder what would be the performance difference if I got an MVP working.
I’ve just started learning Rust, mostly by experimenting with winapi since that’s what I’m mostly going to use it for anyway, but this finally explains why I had so much trouble trying to intuitively wing it. I’ve skimmed through the Rust book once, but judging by this article, it’s no wonder I was mostly wrestling the compiler.
Looks like I have to go back to the drawing board. I understand why Rust is doing it, and I’m sure that once I finally get used to it, it’s going to be a way smoother experience, but maan, this is the first language I couldn’t just figure out in an hour. It’s a frustrating learning experience, but I also see why it’s necessary and love it for that.
Cries in game dev
No, seriously. I tried getting Unity to work on Linux once, and gave up after a few hours of random crashes, bugs and errors. And I never even got to building the game, which I’m sure would have been an entirely different adventure that would still, in the end, require rebooting to Windows and trying the build there.
Also, getting O365 to work on Linux was another reason why I eventually gave up, since our company is simply Windows-based, and the web apps are just too cumbersome to use. And for alternative clients you usually need an app password (disabled in our domain) or some other setting that you don’t want to enable for 95% of your employees, since it’s just a security risk in the wrong hands.
Oh, and then there are VPNs. I never managed to get Check Point Mobile working on Linux without it also requiring intervention from IT to enable some obscure configuration or protocol support.
It’s a shame, but every attempt I made to switch ended exactly the same way - after a few days of running into “make sure to enable this config on the server side” or “if you don’t see that option in the settings, contact your system administrator” for every tool I need for my job, I just gave up.
But I’m considering giving it another try, and just going with Linux plus a Windows VM for administrative tasks. Knowing myself, though, even the small hurdle of “having to spin up a VM” would be a reason to postpone things and not do them properly, since that’s additional effort… And then there’s still the gamedev I do part-time, where I simply don’t believe it’s a good idea - after all, given the state the engines are in, it’s a recipe for the classic “works on my machine but not in the build” or “doesn’t work on my machine” disaster…
It’s one of those tools that can be used both on a resume and as a diagnosis. I love it!
Do I understand it right that what the tool does is embed install scripts in all of the other languages, which simply download a portable Deno runtime and then run the rest of the file (which is the original JavaScript code) as JavaScript?
So you basically still have an install step, but it has just been automated to work cross-platform through what’s basically a polyglot install script. Meaning this could probably be done with almost any other language, assuming it has a portable runtime - portable Python and the like. Is that correct?
Oh, you’re right, I’d totally forgotten about that. It was one of the (many) reasons why I gave up on my last attempt to finally switch away from Windows to Linux.
I’ve never had any issues; it’s pretty well optimized and it’s miles ahead of TeamViewer. So, in my experience, it is pretty fast - if your connection can handle it. And if you have lower bandwidth, it’s pretty good at optimizing for speed instead of quality, if that’s what you want.
Mozilla won’t implement WEI
They are going to fight against WEI. Tooth and nail, for our sakes!
Just like they did with EME, the closed-source video DRM, back in 2014. By being deeply concerned about the direction the web is going, and definitely against it, but…
We face a choice between a feature our users want and the degree to which that feature can be built to embody user control and privacy.
With most competing browsers and the content industry embracing the W3C EME specification, Mozilla has little choice but to implement EME as well so our users can continue to access all content they want to enjoy.
Despite our dislike of DRM, we have come to believe Firefox needs to provide a mechanism for people to watch DRM-controlled content.
DRM requires closed systems to operate as currently required and is designed to remove user control, so Mozilla is taking steps to find alternative solutions to DRM. But Mozilla also believes that until an alternative system is in place, Firefox users should be able to choose whether to interact with DRM in order to watch streaming videos in the browser.
https://blog.mozilla.org/en/mozilla/drm-and-the-challenge-of-serving-users/
https://hacks.mozilla.org/2014/05/reconciling-mozillas-mission-and-w3c-eme/
That sounds cool, thanks! Apparently, you can do the same with JetBrains Rider, which would also be great. I have to check that out.
I’m avoiding Google as much as I can, so this definitely isn’t for me. But does anyone know of a similar self-hosted solution? I’m already mostly working remotely on my desktop through Parsec, but having something like a FOSS web IDE running at home would be a slightly better fit for cases where the network speed/quality isn’t good enough for the full streamed-desktop setup.
Ever since I discovered Parsec (or any other remote desktop streaming solution that isn’t TeamViewer), I’ve switched from dragging around a heavy laptop that can still barely run Unreal to just having a Surface, remotely waking my desktop at home (WoL) through a polling setup that doesn’t require any public-facing service (my NAS just polls a website API for a trigger - not efficient, but secure), and connecting through Parsec.
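For anyone curious, the wake-up part is less exotic than it sounds - a WoL magic packet is just a small UDP broadcast. A rough sketch of the polling idea (the URL, MAC address and interval below are placeholders, not my actual setup):

```csharp
// Rough sketch of a "poll a trigger endpoint, then send a WoL magic packet" loop.
// The endpoint URL and MAC address are placeholders.
using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Net.Sockets;
using System.Threading.Tasks;

class WakeOnLanPoller
{
    static async Task Main()
    {
        var http = new HttpClient();
        while (true)
        {
            // Poll an external trigger endpoint; nothing on the LAN is exposed publicly.
            string trigger = await http.GetStringAsync("https://example.com/wake-trigger");
            if (trigger.Trim() == "wake")
                SendMagicPacket("AA:BB:CC:DD:EE:FF"); // the desktop's MAC address

            await Task.Delay(TimeSpan.FromSeconds(30));
        }
    }

    // A WoL magic packet is 6 bytes of 0xFF followed by the target MAC repeated 16 times,
    // sent as a UDP broadcast (port 9 by convention).
    static void SendMagicPacket(string mac)
    {
        byte[] macBytes = mac.Split(':').Select(b => Convert.ToByte(b, 16)).ToArray();
        byte[] packet = Enumerable.Repeat((byte)0xFF, 6)
            .Concat(Enumerable.Range(0, 16).SelectMany(_ => macBytes))
            .ToArray();

        using (var udp = new UdpClient { EnableBroadcast = true })
            udp.Send(packet, packet.Length, new IPEndPoint(IPAddress.Broadcast, 9));
    }
}
```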
RDP could also work, I’d wager, but then I’d have to set up a VPN, and I’m not really comfortable with anything public-facing. But if anyone asks me for laptop recommendations now, I always recommend going the “better desktop for the same price, plus a small laptop for remote work” route.
I’ve yet to find a place where I couldn’t work comfortably through Parsec; being optimized for gaming means the experience is pretty smooth, and it works pretty well even at lower network speeds. You still need at least 5-10 Mbps, but if you have unlimited mobile data you’re good to go almost anywhere.
I’d like to mention one exception, because it took me ages to properly debug.
If your endpoint is serving mirrors for APT, don’t redirect to HTTPS.
APT packages are signed and validated, so there is no need for TLS. A lot of Docker images (such as Kali) don’t ship root certificates by default, so they can’t use TLS, because certificate validation fails. You also can’t install the certificates, because they are installed through APT. So if your local mirror redirects to HTTPS by default, it breaks things for anyone who gets that mirror, which IIRC happens automatically based on what’s closest to you. I think this issue is still there for the Czech Kali package mirror, and it took me ages to figure out (partly because it’s not an issue for most users, since they get different mirrors), so I like mentioning it whenever http/s comes up. It’s an edge case, but one I find interesting - mostly because it would never have occurred to me that this could be an issue when setting up a mirror.
But that was more than a year ago, it may be better now.