
  • So, if you just use the system API, then this means logging with syslog(3). Learn how to use it.

    This is old advice. These days just log to stdout; there is no need for your process to understand syslog. systemd, containers and other modern systems just capture stdout and forward it where it needs to go. Then all applications can be simple and it is up to the system to handle them in a consistent way.

    NOTICE level: this will certainly be the level at which the program will run when in production

    I have never seen anyone use this log level, ever. Most use or default to Info or Warn. Even the author later says:

    I run my server code at level INFO usually, but my desktop programs run at level DEBUG.

    If your message uses a special charset or even UTF-8, it might not render correctly at the end, but worst it could be corrupted in transit and become unreadable.

    I don’t know if this is true anymore. UTF-8 is ubiquitous these days and I would be surprised if any logging system could not handle it, or at least any modern one. I am very tempted to start adding some emoji to my logs to find out though.

    User 54543 successfully registered e-mail user@domain.com

    Now that is a big no no. Never ever log PII data if you don’t want a world of hurt later on.

    2013-01-12 17:49:37,656 [T1] INFO c.d.g.UserRequest User plays {‘user’:1334563, ‘card’:‘4 of spade’, ‘game’:23425656}

    I do not like that at all. The message should not contain JSON. Most logging libraries let you add context as structured fields in a consistent way and can output the whole log line as JSON. Having escaped JSON inside JSON because you decided to add it to the message manually is a pain; just use the tools you are given properly (there is a sketch of what I mean at the end of this comment).

    Add timestamps either in UTC or local time plus offset

    Never log in local time. DST fucks shit up when you do that. Use UTC for everything and convert when displayed if needed, but always store dates in UTC.

    Think of Your Audience

    Very much this. I have seen far too many error messages that give fuck all context to the problem and require diving through source code to figure out what the hell went wrong. Think about how logs will be read without the context of the source code at hand.
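
    To make the structured-context point concrete, here is a minimal sketch of what I mean, assuming the Rust tracing and tracing-subscriber crates (the field names are just taken from the example quoted above):

    ```rust
    // Assumed Cargo.toml deps:
    // tracing = "0.1"
    // tracing-subscriber = { version = "0.3", features = ["json"] }
    use tracing::info;

    fn main() {
        // Emit one JSON object per line on stdout and let the system
        // (journald, a container runtime, ...) capture and forward it.
        // Timestamps come out as RFC 3339 in UTC.
        tracing_subscriber::fmt().json().init();

        // Context goes in named fields, not hand-rolled JSON in the message.
        info!(user = 1334563, card = "4 of spade", game = 23425656, "user plays");
    }
    ```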


  • Whatever language you choose you might want to also look at the htmx JS library. It lets you add interactivity to your HTML snippets without actually needing to write JS yourself. It basically lets you do things like: when you click on an element, it makes a request to your server and replaces some other element with the contents your server responds with - all with attributes on HTML tags instead of writing JS. This lets you keep all the state on the backend and write more backend logic without relying on full page refreshes to update small sections of the page.

    For a backend language I would use rust as that is what I am most familiar with now and enjoy using the most. Most languages are adequate at serving backend code though, so it is hard to go wrong with anything that you enjoy using. Though with rust I tend to find I have fewer issues when I deploy something, as opposed to other languages which can cause all sorts of runtime errors as they let you ignore the error paths by default. A rough sketch of how htmx and a rust backend fit together is below.
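
    Something like this, assuming axum and tokio on the Rust side (the route, the markup and the htmx version are made up for the example):

    ```rust
    // Assumed Cargo.toml deps: axum = "0.7", tokio = { version = "1", features = ["full"] }
    use axum::{response::Html, routing::get, Router};

    // Full page: the button carries htmx attributes instead of hand-written JS.
    async fn index() -> Html<&'static str> {
        Html(r#"
            <script src="https://unpkg.com/htmx.org@1.9.12"></script>
            <button hx-get="/clicked" hx-swap="outerHTML">Click me</button>
        "#)
    }

    // Fragment rendered on the server; htmx swaps it in place of the button.
    async fn clicked() -> Html<&'static str> {
        Html("<p>The server rendered this, no page refresh needed.</p>")
    }

    #[tokio::main]
    async fn main() {
        let app = Router::new()
            .route("/", get(index))
            .route("/clicked", get(clicked));
        let listener = tokio::net::TcpListener::bind("127.0.0.1:3000").await.unwrap();
        axum::serve(listener, app).await.unwrap();
    }
    ```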



  • Yup, this is part of what’s lead me to advocate for SRP (the single responsibility principle).

    Even that gets overused and abused. My big problem with it is: what is a single responsibility? It is poorly defined and leads to people thinking that the smallest possible thing is one responsibility. But when people think like that they create thousands of one-to-three-line functions, which just ends up obscuring what the program is trying to do. Following logic through deeply nested function calls is IMO just as bad, if not worse, than having everything in a single function.

    There is a nice middle ground where SRP makes sense, but like all patterns, nobody ever talks about where that line is. Overuse of any pattern, methodology or principle is a bad thing, and it is very easy to do if you don't think about what it is trying to achieve and whether applying it still fits that goal.

    Basically, everything in moderation and never lean on a single thing.


  • Refactoring should not be a separate task that a boss can deny. You need to do feature X, feature X benefits from reworking some abstraction a bit, so you rework that abstraction before starting on feature X. And then maybe refactor a bit more after feature X, now that you know what it looks like. None of that should take substantially longer, and treating it as part of the feature work saves vast amounts of time later on.

    You can occasionally squeeze in a feature without reworking things first if time is tight, but you will run into problems if you do this too often and start thinking of refactoring as a separate task from feature work.


  • “Best practices” might help you to avoid writing worse code.

    TBH I am not sure about this. I have seen many “Best practices” make code worse, not better. Not because the rules themselves are bad, but because people take them as religious gospel and apply them to every situation in hopes of making their code better, without actually checking whether they are making their code better.

    For instance, I see this a lot with DRY. While the rule itself is useful to know and apply, it is too easily over-applied, removing any benefit it originally gave and resulting in overly abstract code. I have lost count of the number of times I have added duplication back into code to remove a layer of abstraction that was not working, only to maybe reapply it later in a different way, often keeping some duplication.

    Suddenly requirements change and now it’s bad code.

    This only leads to bad code when people get too afraid to refactor things in light of the new requirements, which sadly happens far too often. People seem to like to keep what was there already and follow existing patterns even well after they are no longer suitable. I have made quite a lot of bad code better by just ripping out the old patterns and putting back something that better fits the current requirements - quite often in code I had written before and others have added to over time.



  • It sounds like you may just want to wait a month, or at least until the end of this month. It looks like they are working hard on getting to the first alpha in 24.04, according to their post in January:

    The goal for the COSMIC DE alpha is to feel like a complete product, albeit with features still to come. With a more stable alpha, we can better collect feedback on usability and focus on completing the Settings panels. From here, we can work towards an eventual 24.04 release over the summer.

    Though in their latest post, on the 17th of April, they only mentioned Pop!_OS 24.04 being released and did not really say much about maturing the pre-alpha to an alpha at all. However, they did mention their CEO is going to show off COSMIC DE at LinuxFest Northwest on the 27th:

    A reminder that System76 CEO Carl Richell and UX Architect Maria Komarova will be at LinuxFest Northwest this year to showcase COSMIC DE.

    Might be worth waiting at least until then to see if anything gets announced or timelines get updated. Still only alpha though so may or may not be suitable for a full replacement yet, even if we are getting closer.


  • From looking into this a few days ago, it looked like things were packaged in the unstable channel so you can install them. But from what I read there is no way to configure it via your Nix configs yet, so you would have to set things up manually for now.

    https://nixos.wiki/wiki/COSMIC

    There is also this flake which should give some configuration options for it, but it mentions it might not be fully working yet and I have not actually tried it.

    I don’t expect things to change much in this regard though until they start releasing some more stable versions of cosmic which is still in a prealpha state.



  • Have not used it myself, but having had a quick look at it I don't think I ever will. Mostly because of personal preferences about the project's goals and design rather than anything else.

    I don’t like large do everything frameworks. They are ok when you want to do what they were designed for. But as soon as you step outside that they become a nightmare to deal with. They also tend to grow more complex over time as more of what everyone wants gets added to them. A framework author’s case against frameworks is a great talk on the matter. Instead I prefer simpler smaller focused libraries that I can pick and choose from that best suit the application I want to build.

    Also it seems to follow the MVC pattern, which I dislike. Personally I like to group code that changes together next to each other, whereas MVC groups code that has the same function together and splits up code that tends to change together. This means for any change or feature you are editing many files across many folders, which gets tedious, rather than just co-locating all the related code in one directory or file.

    Because they include all the dependencies for everything you might want, they often lag behind upstream projects. This was a huge issue for me years ago when I tried out the Rocket framework. I wanted to use a hosted Postgres DB that only supported TLS connections, but the version of the database library it was using did not yet include that feature - basically killing the project there.

    They can be great if they do everything you want in the way you want, and loco looks to be well built and maintained overall. But I find far too often that they don't, if not at the start of a project then eventually as the project evolves (which is far worse). I would also question its staying power (we have seen popular and promising frameworks before that suddenly stop development), but only time will answer that.


  • Unwraps, panicking, or returning the error to the caller are all forms of handling the error - crash the program with a message that tells you what went wrong and where in the code it happened. These give you a path to see what went wrong.

    But silently ignoring an error is rarely the right move. It stops you from seeing the cause of the problem and often leads to some weird, nonsensical failure somewhere else, which I have seen time and time again lead to hours down a rabbit hole trying to understand why things are not working, because you are missing the root cause of the problem.

    There are times when you really don't care about a failure at all, but those times are rare and should be carefully considered first; crashing the program is generally the right default if you are unsure. A rough sketch of the difference is below.
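
    A minimal sketch of the usual options, with a made-up config file (app.toml) standing in for whatever can fail:

    ```rust
    use std::{fs, io};

    // Returning the error to the caller: the failure stays visible and the
    // caller (ultimately main) decides what to do with it.
    fn load_config() -> io::Result<String> {
        fs::read_to_string("app.toml")
    }

    fn main() {
        // Unwrap/expect: crash with a message saying what went wrong and where.
        let config = load_config().expect("failed to read app.toml");
        println!("loaded {} bytes of config", config.len());

        // Silently ignoring the error: the program limps on with no config and
        // fails somewhere else later, with nothing pointing at the real cause.
        let config = load_config().unwrap_or_default();
        let _ = config; // ... mysterious behaviour follows ...
    }
    ```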


  • That is a terrible time to throw away the error. Best to actually check for a file-not-found error and only then create the file. Other errors are important to see so you can debug why things are failing.

    It is very annoying to have a tool tell you it failed to create a file when the file exists but it just cannot read it for some reason. You can spend ages jumping down the wrong rabbit hole if you don't realize what is happening. Something like the sketch below avoids that.
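
    Roughly this, with the path and the empty-file fallback made up for the example:

    ```rust
    use std::{fs, io, io::ErrorKind};

    // Only treat "the file does not exist" as the create-it case; permission
    // errors, I/O errors, etc. are reported as what they actually are.
    fn read_or_create(path: &str) -> io::Result<String> {
        match fs::read_to_string(path) {
            Ok(contents) => Ok(contents),
            Err(e) if e.kind() == ErrorKind::NotFound => {
                fs::write(path, "")?; // create an empty file, then carry on
                Ok(String::new())
            }
            // Anything else (permissions, bad mount, ...) bubbles up unchanged
            // so the real cause shows up in the error message.
            Err(e) => Err(e),
        }
    }

    fn main() -> io::Result<()> {
        let contents = read_or_create("settings.conf")?;
        println!("read {} bytes", contents.len());
        Ok(())
    }
    ```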


  • I would start by learning rust at a user level via the rust book to get familiar with the language without the extra layers that embedded systems tend to add on top of things. Keep in mind that the embedded space in rust is still maturing, though it is maturing quite quickly. However one of the biggest limitations ATM is the number of architectures available - if you need to target one that is not supported then you cannot use rust ATM (though there are quite a few different projects bringing in support for more architectures).

    If you only need architectures that rust supports, then once you have the basics of rust down take a look at the embedded book, the Discovery book and the Embedonomicon. Then there are various crates for different boards, such as ruduino for the Arduino Uno or rp-pico for the Raspberry Pi Pico, or various other crates for specific boards. There are also higher and lower level crates for things - like ones specific to a dev board and ones that are specific to a chipset.

    Lastly, there are embedded frameworks like Embassy that are helpful for more complex applications.
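
    If it helps to see where all of that starts, a bare-metal binary is roughly this minimal skeleton (assuming a Cortex-M target with the cortex-m-rt and panic-halt crates; the board crate and linker setup from the books above fill in the rest):

    ```rust
    // Assumed Cargo.toml deps: cortex-m-rt = "0.7", panic-halt = "0.2"
    #![no_std]  // no standard library on bare metal, only core
    #![no_main] // the runtime crate provides the real entry point

    use cortex_m_rt::entry;
    use panic_halt as _; // on panic, halt in an infinite loop

    #[entry]
    fn main() -> ! {
        // Board/chip setup would go here via the board's HAL crate
        // (e.g. rp-pico for the Pico), followed by the main loop.
        loop {}
    }
    ```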