Just some Internet guy

He/him/them 🏳️‍🌈

  • 0 Posts
  • 31 Comments
Joined 1 year ago
Cake day: June 25th, 2023



  • Some of those keys are public knowledge and only serve to identify which client it is (Chromium, Firefox, probably Safari), or were otherwise lifted from one of those. This one is a Safe Browsing API key: it’s used to check whether sites have been flagged as phishing/scams/etc. and to warn users that a site is known to be malicious. Others tie analytics or ads to the app, so the data goes into the right developer’s account metrics.

    I wouldn’t call those leaked; they’re meant to be embedded into apps and aren’t considered secret keys.

    It’s common practice to use API keys like that even when they’re not really secret, just for the sake of tracking which app is making which requests, and so people can’t just openly use the API. You can easily shut down unapproved clients by rolling out a new key, and it forces an annoying game of whack-a-mole where they constantly have to extract the new key from an APK.
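
    To make that concrete, here’s roughly what a key like that is for: a minimal sketch of a Safe Browsing v4 lookup, assuming Node 18+ in an ESM context for fetch and top-level await. EMBEDDED_KEY and the client fields are placeholders.

    ```js
    // Sketch of how an embedded, not-really-secret key gets used: the key
    // mostly just tells Google which client is asking.
    const EMBEDDED_KEY = 'AIza...'; // placeholder, shipped inside the app

    const resp = await fetch(
      `https://safebrowsing.googleapis.com/v4/threatMatches:find?key=${EMBEDDED_KEY}`,
      {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          client: { clientId: 'some-browser', clientVersion: '1.0' }, // placeholder IDs
          threatInfo: {
            threatTypes: ['MALWARE', 'SOCIAL_ENGINEERING'],
            platformTypes: ['ANY_PLATFORM'],
            threatEntryTypes: ['URL'],
            threatEntries: [{ url: 'https://suspicious.example/' }],
          },
        }),
      },
    );
    const { matches } = await resp.json(); // any match => show the warning page
    ```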



  • Also replying to that bit:

    and job search is impossible as a person with anxiety and possibly autism?

    Don’t give up; job hunting is super fucked up right now, with the market flooded with good engineers freshly laid off from FAANG and other big companies.

    Autism sucks hard at times, but I don’t think I would be where I am without it. Use it to your advantage: your abstract computer knowledge can grow far beyond what most people will ever care to dig into.


  • Story points are bullshit, and I just hope it’s not the sole metric you’re judged on, or at least that your team doesn’t see it that way. If it is, definitely try to steal a bunch of easy tickets to level the playing field.

    How many story points can I complete? I don’t know, because that’s a hard no for my team; but if we did them, I probably wouldn’t have many points either, unless you assigned hundreds of points to my tickets. Some take me weeks or months to get through.

    Why? Because I get assigned all the incredibly cursed tickets, and they get assigned to me for a reason: they’re my specialty, and I’m the senior on the team with the skills to tackle them. And my performance reviews still say I exceed expectations, because comparatively, I complete those in record time.

    I deal with and fix things most people don’t even dare to touch. It’s a well-known fact all the way up to the CTO and the senior staff of adjacent teams. It’s just that you can’t break tasks down into half-day tickets when all you know is that there’s a giant rabbit hole, and you can’t see how deep it is until you start digging. I’m the guy who can pull out GDB and strace and debug the interpreter that runs the software. My colleagues write standard PHP/NodeJS; meanwhile I go browse the PHP and V8 source code to get deep into why things break, and report and fix bugs upstream. NodeJS crashing with a SIGPIPE on a socket that’s already closed? Yep, figured it out down to the exact series of syscalls that led there. Sometimes people from teams I’ve never talked to reach out to me with their problems, because when you’re really stuck, you go get Max’s input on it.

    Ultimately, you should talk about this with your manager. Is your manager happy with your performance? Does your team seem frustrated with it? It’s very possible you get the tougher tickets because people know you can handle them. But if you’re struggling that badly and still end up with the hard tickets, your manager is dropping the ball hard and not setting you up for success.


  • But why should we have to close the socket manually, and why only when a certain number of bytes have been received? Based on the documentation here, it certainly seems like it should automatically destroy its file descriptor when the server is done sending data.

    My suspicion is that the client never reads all the data the server is sending: the server sits there waiting to finish sending the page, nobody on the client side is reading it, and the connection never closes because the request is never fully completed. The TLS callbacks are probably still firing, just not making any progress on the actual data stream. So it sits there for a really long time, until something times out, if anything there times out at all.

    That would make sense if the check stops reading after the HTTP headers: it reads the chunks it needs and then just stops processing because it doesn’t need the rest of the data, and the TLS wrapper is probably preventing Node from collecting the socket itself. The TLS stream happily answers TLS-layer pings and keeps trying to push data out, but nobody’s reading the stream anymore. Node can’t release the underlying TCP connection because the TLS wrapper is still handling it, and it can’t collect the wrapper because the TCP connection is still piped into it.

    It probably works with small payloads because they’re small enough to be read in one chunk: even if the TLS wrapper isn’t fully read, the underlying TCP connection is, so the server gets to close the connection, the fd is closed, and Node can collect the TCP connection object; the wrapper is then fully orphaned and can be collected too.
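
    A minimal sketch of that failure mode and the manual fix, assuming Node’s https client (URL hypothetical):

    ```js
    const https = require('https');

    https.get('https://example.com/large-page', (res) => {
      console.log(res.statusCode, res.headers['content-type']);
      // Stopping here without consuming res leaves the TLS socket piped and
      // referenced: the server keeps trying to send, backpressure fills the
      // TCP window, and nothing ever times out on its own.
      res.destroy(); // tear the socket down once we have what we need
    });
    ```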





  • I'll add that it also depends on the efficiency of the local power supplies, if those devices were using wall warts. Those are often pretty generic and may only be loaded at 25%, which for some wall warts is outside their peak efficiency range. A single power supply in the form of PoE can be more efficient if it lets both the switch and the PoE regulator on the device operate at a better efficiency point.

    In some ways, stepping 48V DC down to 3.3/5V is a bit easier than stepping down the ~170V that results from rectifying 120V AC to DC. But the wart could also be stepping the 120V down to 5V first with a simple AC transformer, which is nearly always more efficient (95%+) than a DC/DC buck converter, though those can reach 90% efficiency as well.

    In terms of cabling, power loss is a function of current and length (resistance). AC is nice because we can step it up easily and efficiently to extremely high voltages so as to minimize the current flowing through the wire, then step it back down to a manageable voltage. In that sense, American 120V has more loss than rest-of-the-world 240V, although it only matters for higher-power devices. It also means the location of the step-down matters: if you're going to run 30m of Ethernet and a parallel 30m run of 5V power, there will be more loss than if you just ran PoE, as the sketch below shows. But again, you need to account for the efficiency of the system as a whole. Maybe you'd have a wart that's 5% more efficient, but you lose that 5% in the cable and it's a wash. Maybe the wart is super efficient and it's still way better. Maybe the switch is more efficient.
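
    Rough numbers with assumed cable resistance (one pair of 24 AWG Cat5e, ~0.094 Ω/m per conductor), just to show how hard the voltage matters:

    ```js
    // Back-of-the-envelope I²R loss for the same 10 W load over 30 m of cable.
    const loopR = 2 * 30 * 0.094; // ≈ 5.6 Ω out and back

    for (const volts of [5, 48]) {
      const amps = 10 / volts;        // current needed to deliver 10 W
      const loss = amps ** 2 * loopR; // power burned in the copper
      console.log(`${volts} V: ${amps.toFixed(2)} A, ~${loss.toFixed(2)} W lost`);
    }
    // 5 V:  2.00 A, ~22.56 W lost: more than the load itself
    // 48 V: 0.21 A, ~0.24 W lost: negligible
    ```

    (Real PoE splits the current across two or four pairs, which cuts the loss further, but the 1/V² scaling is the point.)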

    It's going to be highly implementation-dependent, based on how well tuned all the power supplies across the whole system are. You'd need either the exact specs of what you'll run, or to measure both options and see which uses the least power.

    I would just run PoE for the convenience of not having to also have an outlet near the device, especially for APs, which typically work best installed on ceilings. Technically, if you run the heat at all during the winter, the loss from the power supplies will contribute to your heating ever so slightly, but it will also work against your AC during summer. In the end, I'd still expect the losses to amount to pennies, or at most a few dollars. It may end up more expensive just in wiring if some devices are far from an outlet.


  • The switch can put out 15.4W, but it doesn't control how much power flows. The device can draw 15.4W if it wants to, but it won't necessarily do so. The switch can cap the power output by lowering the voltage it supplies, but it can't push a set amount of power into the device. That would violate the fundamental physics of electronics.

    Put a 2.4kΩ resistor as the "device", and at 48V the absolute maximum that will flow is ~1W. The switch would have to push ~192V to force that resistor to dissipate 15.4W, which would put it way out of spec. And there's nothing preventing the device from being smart enough to adjust that resistance to maintain 1W either. That's basic Ohm's law.
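
    Same arithmetic as a quick check:

    ```js
    // Ohm's law for a purely resistive 2.4 kΩ "device".
    const R = 2400;
    console.log(48 ** 2 / R);         // 0.96 => ~1 W at 48 V
    console.log(Math.sqrt(15.4 * R)); // ~192 V needed to force 15.4 W into it
    ```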

    The device must negotiate if it's going to use more than the default 15.4W, or it can advertise that it's low-power so the switch can allocate the power budget to other devices as needed. But the switch can only act as a limiter: it can have the capacity to provide more than the device draws, but it can't force the device to take more.


  • It’s better and worse at the same time: for the most part, it just doesn’t bother with encodings. If you have files named with UTF-8 characters and run with a locale that uses an ISO-whatever charset, it just displays them wrong. As long as a byte is not zero or an ASCII forward slash, it’ll take it.

    There’s still a length limit, but it’s bigger: 255 bytes for a filename and 4096 bytes for a whole path. That’s bytes, not characters, so if names were stored in a two-byte encoding like Windows’s UTF-16, those numbers would be halved in terms of characters.

    That said, filenames are assumed to be UTF-8 these days and should be interpreted as UTF-8; nobody uses non-UTF-8 locales anymore. But you technically can.
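
    A quick sketch of the bytes-not-characters point, using Node’s support for Buffer paths (filename bytes chosen arbitrarily):

    ```js
    // On Linux the kernel treats a filename as opaque bytes, so a name that
    // isn't valid UTF-8 is perfectly legal. 0xE9 is 'é' in Latin-1 but an
    // invalid sequence in UTF-8.
    const fs = require('fs');
    const name = Buffer.from([0x66, 0xe9, 0x2e, 0x74, 0x78, 0x74]); // "f\xE9.txt"
    fs.writeFileSync(name, 'hello');
    // ls under a UTF-8 locale renders the 0xE9 byte as a replacement
    // character, but the file exists and is fully usable by its bytes.
    ```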


  • A lot of bundling in the JS world is also either because of TypeScript, or about transpiling to old JS so that it’s compatible with old Node/browsers. JS has gone through quite drastic changes in syntax, from vars and prototypes to let/const, ESM imports, classes, Promises, and async/await, a lot of which won’t run in an old browser. Bundling also helps runtime speed slightly, but that doesn’t matter much on a server, where you just wait a second or two for it to load.

    JS is also kind of wild in how many libraries a given project may pull in, and how many minuscule files those tend to use, especially since each library also gets its own copy of every dependency too.

    Python uses far fewer libraries and has a code cache. PHP has code caching and preloading built in, so filesystem accesses are reduced. Bash scripts usually don’t grow that big. Ruby probably just accepts a second or two of load time for the simplicity of the developer experience. Typically there’s one fairly large framework library and a few plugins and utilities, whereas a big Next.js project will pull in hundreds of libraries and tools.

    It’s really a JS solution to a JS problem. The code needs to run in potentially ancient browsers, so we just make one giant JS file. For the other languages, caching can be added right into the runtime. If bundling were that big of a deal, we’d read libraries straight off a zip file, like Java does with its jar files by default.
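
    For context, roughly what the bundling step boils down to, sketched with esbuild’s JS API (paths hypothetical): many tiny modern-JS/TS modules in, one flat transpiled file out.

    ```js
    require('esbuild').buildSync({
      entryPoints: ['src/app.ts'], // hypothetical entry point
      bundle: true,                // inline every imported dependency
      target: 'es2015',            // rewrite newer syntax for old browsers
      minify: true,
      outfile: 'dist/app.js',
    });
    ```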

    Plus, if you really care, you can turn on filesystem compression on your project directory and get most of the same benefits as bundling.


  • I do, Postfix and Dovecot. Mine’s got 10 years of history so I’ve been spared being blocked everywhere.

    Most will tell you the software side is not too bad these days but the constant fighting to get your emails through can be really rough.

    Personally, I find it useful if only for registering every service with its own unique email address, so I can track who got my data where, and I get the privacy of Google not knowing every site I’m registered with. I still use my Gmail when I want to be sure it goes through.

    I really don’t send that many emails so it works pretty well for me.


  • Your assumption is correct: Axios is using the proxy differently. The recommended way to do HTTPS through a proxy is the CONNECT method, which just passes the traffic through directly and allows proper end-to-end encryption.

    Axios instead asks the proxy to fetch the HTTPS URL the way it would plain HTTP, and it seems Squid isn’t configured correctly for that and can’t handle outgoing TLS. You might need to enable TLSv1.2/TLSv1.3 in Squid: the error says the two sides couldn’t agree on security settings, so one of them is probably stuck on outdated ciphers.
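
    One common fix, assuming the https-proxy-agent package (v7-style import; proxy URL hypothetical): hand Axios an agent that opens a proper CONNECT tunnel and disable its own proxy handling.

    ```js
    const axios = require('axios');
    const { HttpsProxyAgent } = require('https-proxy-agent');

    const agent = new HttpsProxyAgent('http://squid.internal:3128');

    axios.get('https://example.com/', {
      proxy: false,      // don't let axios rewrite the request for the proxy
      httpsAgent: agent, // tunnel TLS end to end through CONNECT instead
    }).then((res) => console.log(res.status));
    ```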


  • You can probably ask them to pull the wires there but not install or terminate a patch panel.

    Because you specified a patch panel, they probably quoted for the installation of the rack and the patch panel too, since neither is there yet and they’d need both to fully complete the job.

    You’ll end up with loose, unterminated wires that you can just crimp RJ45 plugs onto and wire directly into a switch or whatever.

    I’d just manage the actual patching with VLANs on the switch. Unless you plan a more complex setup with some jacks going directly to a server or to other routers/switches, having 24 live ports you can plug devices into should be plenty. A fair number of switches can simply be wall-mounted without a rack.




  • That’s really what’s going on.

    Back in the day, people took the time necessary to write the software, and managers trusted the engineers to say whether it was ready or not.

    Nowadays, the software world is managers going, “yes, we know the database’s gonna blow up over the weekend without the query optimizations, but we want to build this new feature before the end of the week. We can deal with the database when it blows up over the weekend; that’s why you guys are on-call.”

    I did not make this up; I’ve actually heard this. This is why modern software is so fucked up: not because we can’t handle the complexity, but because reliability and quality just aren’t prioritized at all anymore. Gotta dish out new features every day, and you’re not allowed to work on fixing known critical bugs.