  • after looking at the ticket myself, the relevant points IMHO are:

    • a person filed a bug report because they could not see which change in the new version caused the different behaviour
    • that person seemed pushy, first telling the dev where patches should be sent (is this normal? i guess not; better to let the dev decide where patches go or, in this case, whether patches are needed at all), then coming up with ceo-style wording (highly visible, customer experience of an untested but nevertheless released-to-live product suffers due to this, implicitly "your", bug)
    • the pushiness is at least balanced by a "please help"
    • free-of-charge consulting was given: the changes are likely visible in the changelog (i did not check though), and it was pointed out that a default value had changed and that the behaviour can be adjusted with a given parameter, which assumes RTFM (if the docs were indeed updated).

    up to there that person, belonging to M$ or not (don't know and don't care), behaved IMHO rather correctly: submitting a bug report for something that looked like one, being a bit pushy, wanting priority, trying to command, but still formally at least "asking" for help. at that point, though, the "bug" seemed resolved to me. it looks like the person read neither the manual nor the changelog, or maybe the manual or changelog lacks that information, but the latter was never stated, so i guess that person simply read neither.

    instead, so it seems to me, that person demanded immediate and free-of-charge consulting on how exactly the switch should be used in that specific use case, which would imply the dev looks into the example files and maybe does trial and error himself, just so that that person neither has to invest the time to learn the software the company depends on, nor has to hire a consultant to do that work.

    i think (intentional or not) abusing a bug tracker to demand free-of-charge end-user consulting from a dev is a bad idea, unless one wants(!) to actively waste the precious time of the dev (the very dev that high-priority ticket for the highly visible, already-live product relies on) or has even worse intentions, like:

    • uploading example files with exploits in them, pointing to the exact versions that include the RCE vulnerability the sample file would abuse, where the "bug" was only reported because it fits the version needed for exploitation, and pressure was made by naming big companies so that the dev might run a vulnerable version on his workstation before someone finds out, so that an upstream attack could take place directly on the dev's workstation. but that's just a fictive worst-case scenario.

    to me this clearly looks like a "different culture" problem. in companies where everyone is paid by basically the same employer, abusing an internal bug tracker for quick internal consulting would probably be seen as just normal and best practice: the dev who knows and actually works on the code likely has the solution right at hand without thinking much, while the other person, in charge of quick-fixing an untested product already released live to customers, has neither sufficient knowledge of how the thing works, nor the time to learn it or at least read changelogs and the manual, nor the time to learn the basics of general upstream software culture.

    in companies, the https://en.m.wikipedia.org/wiki/Peter_principle could be a problem that imho likely leads to such situations, but this is a guess, as i know nobody working there. i am also not convinced that that person in fact works for the named company; the ticket shows a name that i would take as a reason not to rely too much on names in ticket systems always being real names.

    the behaviour that caused the bad postings here in this lemmy thread is to me likely "just" a culture problem, and that person would be well advised to get to know open source culture, netiquette etc., and to learn to behave differently depending on whom they communicate with, where and how, what to expect, and how to interact productively to the benefit of their upstream too, which is so often the "real price" in open source. it could be that in the company that rolled out the untested product it is seen as best practice to immediately grab the dev who knows a software and let him help you with whatever you can't do on your own (for whatever reason) whenever you manage to encounter one =]

    i assume the pushiness likely comes from their hierarchy. it is not uncommon that so-called leaders just push pressure downwards, maybe because they have no clue about the thing and don't want to gain that clue; i cannot know that, it's just a picture in my head. but with a company that seems to put pressure on releasing an untested product to customers, i guess i am not too wrong about the direction of that assumption. what the company maybe should learn is that releasing untested and/or unfinished products to live is a bad habit; but i also assume that if they wanted to learn that, they would have started roundabout two decades ago. again, i do not know which company that person works, or worked, for; it could also be just a subcontractor of the named one. and the pushiness (telling it's for m$, that it's live, that it has impact on customers etc.) could really have been decided by someone up the ladder with literally no experience at all in how to handle upstream in such situations. hierarchies can be very dysfunctional sometimes, and in companies, saying "impact to customers" is sometimes the same as saying "boss says asap".

    what i would suggest their customers (those who were given a beta version as production-ready) should learn: when someone (maybe) continuously delivers differently than advertised, then after experiencing this a few times, the customer would be insane to assume that this bad behaviour will vanish through pure hope plus throwing money into hands where money maybe already didn't improve habits for assumedly decades. and when feeding the ever-hungry with money does not resolve the problems, maybe looking towards those who have a non-money-dependent, grown-up culture could actually provide more really usable products.

    evaluating new solutions (i.e. which one would really be best for a specific use case) or testing new versions before rolling them out to live might be costly, especially when done thoroughly, but can provide a lot of really valuable stability, otherwise unreachable by those who only throw money at shareholders of brands and maybe rely on pure hope for all of the rest. especially when that brand maybe even officially announced the removal of their testing department ;+) what should a sane and educated customer expect then? but again, to note: i do not know which companies are really involved and how exactly. from the ticket i do not see which company that person directly works for, nor whether the claim that m$ is involved is a fact or just a false claim in hope of quicker help (companies already too desperate to test products before going live could become desperate again and need even more help when their bad habits have piled up too long and begin falling on their heads)


  • the xz vulnerability was done through a superfluous dependency on systemd; xz was only the library that was abused to exploit systemd's superfluous dependency hell. sshd does not use xz, but systemd depends on it. sshd does not need systemd, yet it was attacked through that library dependency.

    we should remove any pointless dependencies that can be found on a system to prevent such attacks in the future, reducing dependency-based attack vectors to a minimum.
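    a quick way to check what a binary actually pulls in (paths assumed for a debian-style system), e.g. whether sshd links libsystemd and, through it, liblzma:

    ```sh
    # shared libraries sshd is linked against, transitively resolved
    ldd /usr/sbin/sshd | grep -E 'libsystemd|liblzma'

    # only the direct NEEDED entries of the binary itself
    objdump -p /usr/sbin/sshd | grep NEEDED
    ```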

    also we should increase the overall level of privilege separation; systemd is a good bad example here, just look at the init binary and its capability zoo.
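    to get a feeling for how much (or little) privilege separation a unit actually has, systemd itself ships an audit tool; the exposure scores it prints speak for themselves:

    ```sh
    # exposure score for every running service (lower is better)
    systemd-analyze security

    # detailed sandboxing report for a single unit
    systemd-analyze security sshd.service
    ```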

    The company that hired "the" systemd developer should IMHO start to really fix these issues!

    so please hold your "$they have fixed it" back until the root cause that made the xz dependency-level attack possible in the first place has really been fixed =)

    Of course pointing it out was good, but now the root cause should be fixed, not just a random symptom that happened to be the first visible attack using this attack vector introduced by systemd.


  • my 2 cents just in case…:

    A raid6 is not a replacement for backup ;-) i use rdiff-backup, which is easy to use, stores only one full backup, and keeps all increments in the past; it is only possible to delete the oldest increments (afaik no "merging"). i never needed anything else. one backup should be off-site and another one offline, synced manually once in a while. make complete dumps (including triggers etc.) from databases before running the backup ;-)
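    a minimal sketch of that routine (paths, host and retention are made up for illustration; use pg_dumpall instead of mysqldump for postgres):

    ```sh
    #!/bin/sh
    # dump the database first so the backup contains a consistent snapshot
    mysqldump --all-databases --routines --triggers > /srv/dumps/all.sql

    # incremental backup: one full mirror, increments reaching into the past
    rdiff-backup /srv backuphost.example.org::/backup/srv

    # only the oldest increments can be dropped, e.g. older than a year
    rdiff-backup --remove-older-than 1Y backuphost.example.org::/backup/srv
    ```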

    i like to have a recreatable server setup: set it up manually, put everything i did into ansible, then try to recreate a "spare" server using ansible and the backup. test everything and you can be sure you have also "documented" your setup to a good degree.
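    that rebuild test can be as simple as pointing the playbook at the spare machine (inventory host and playbook name are made up here):

    ```sh
    # dry run first: show what would change on the spare box
    ansible-playbook -i spare.example.org, site.yml --check --diff

    # then apply for real and compare the result with the original server
    ansible-playbook -i spare.example.org, site.yml
    ```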

    for hardware i do not have many assumptions about performance (until it hits me), but an always-running in-house server should better save power (i learned this the costly way). it is possible to turn cpus off and run on only one cpu at a reduced frequency in times without performance needs; that could help a bit, and at least it would feel good to do so, while turning cpus back on and setting a higher frequency is quick and can be easily scripted.
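    scripting that is straightforward via sysfs; a sketch, run as root (which cores and frequencies exist depends on your cpu):

    ```sh
    #!/bin/sh
    # low-power mode: take every core except cpu0 offline
    # (cpu0 itself usually cannot be taken offline)
    for c in /sys/devices/system/cpu/cpu[1-9]*; do
        [ -f "$c/online" ] && echo 0 > "$c/online"
    done
    # cap the remaining core at its lowest supported frequency
    cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_min_freq \
        > /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
    # to go fast again: echo 1 to the online files, restore scaling_max_freq
    ```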

    hard drives: make sure you buy 24/7-rated ones; they are usually way more hassle-free than the consumer grades and likely cost "only" double the price. i would always place the system on SSDs, but always as raid1 (not raid6), while the "other" mirror half could maybe be a magnetic one set to write-mostly.
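    the write-mostly trick is a plain mdadm feature; a sketch, with device names assumed (sda1 the ssd, sdb1 the hdd):

    ```sh
    # raid1 where reads prefer the ssd and the hdd is mostly only written to
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          /dev/sda1 --write-mostly /dev/sdb1

    # or flag an existing member at runtime via sysfs
    echo writemostly > /sys/block/md0/md/dev-sdb1/state
    ```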

    as i do not buy "server" hardware for my home server, i always buy the components twice when i change something, so that i have the spare parts ready at hand when i need them. running a server for 5+ years often ends with not being able to buy the same parts again, and then you have to first search for what you want, order, test, maybe send it back as it might not fit… unstable memory? mainboard sending smoke signs? with spare parts at hand, a matter of minutes! the only thing i am missing with my consumer-grade home server hardware is ecc ram :-/

    for cooling i like to use a 12cm fan and power it with only 5v (instead of the 12v it wants) so that it runs smoothly slow and nearly as silent as passive-only cooling, but heat does not build up in the summer. do not forget to clean off the dust once in a while… i never had a 5v-powered 12v 12cm fan with bearing problems, and i think one of them ran for over a decade. i think the 12v fans last longer on 5v, but no warranty from me ;-)

    even with a headless setup i like to have a quick way to get to a console in case the network is not working. i once used a serial cable and my notebook, then a small monitor/keyboard; now i use pikvm and can look at my server's physical console from my mobile phone (needing an ssl client certificate and TOTP to do so), but this involves network, i know XD

    you likely want smart monitoring, and to run memtest once in a while.
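    for the smart part, smartmontools covers both ad-hoc checks and scheduled self-tests (device name and schedule are just examples):

    ```sh
    # overall health verdict plus the drive's error log
    smartctl -H -l error /dev/sda

    # schedule recurring self-tests via /etc/smartd.conf with a line like:
    #   DEVICESCAN -a -s L/../../7/03 -m root
    # (monitor all disks, long self-test sundays at 3am, mail root on trouble)
    ```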

    for servers i also like to have some monitoring that can push a message to my phone for foreseeable conditions that i would like to handle manually.
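    one low-effort way to get such pushes is a plain http post to a notification service; here a sketch using ntfy.sh with a made-up topic, wired to a disk-space check (the 90% threshold is an example):

    ```sh
    #!/bin/sh
    # warn my phone when the root fs fills up
    usage=$(df --output=pcent / | tail -1 | tr -dc '0-9')
    if [ "$usage" -gt 90 ]; then
        curl -d "root fs at ${usage}% on $(hostname)" ntfy.sh/my-made-up-topic
    fi
    ```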

    debsums, logcheck, logwatch and fail2ban are also worth looking at, depending on what you want.

    also, after updating packages, have a look at lsof | egrep "DEL|deleted" to see which programs need a simple restart to actually use the updated libraries. that way reboots are only needed for newer kernels.
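    a slightly more targeted variant of that pipeline, grouped by process so you can see what to restart (lsof output columns vary a bit between versions):

    ```sh
    # processes still mapping deleted (i.e. replaced) library files
    sudo lsof +c0 | grep -E 'DEL.*\.so' | awk '{print $1, $2}' | sort -u

    # on debian-style systems, the needrestart package automates the same idea
    sudo needrestart
    ```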

    ok this is more than 2 cents, maybe 5. never mind

    hope these ideas help a bit