• 0 Posts
  • 7 Comments
Joined 1 year ago
Cake day: July 1st, 2023



  • I’d love to see this become something greater. Consider this challenging problem:

    Suppose you have an instance with a community (“C”) that likes to promote subtly wrong claims.

    Suppose there’s a community of fact checkers (“F”) who want to promote actual, verifiable/falsifiable facts by responding to lies with compelling, relevant references. They want to help by directly replying to posts or applying tags in community C, but that instance doesn’t permit them to contribute. Community C seems to want its lies to remain unchallenged.

    And then suppose there are some opted-in users (“U”) who want help understanding when posts in community C are not factual. They would like to receive posts or tags from the fact checkers, because people they trust have recommended listening to them.

    I’d love to see a tagging system that helps “U” and “F” connect while browsing content in “C”, even if the owners of “C” don’t want them to. Ideally it would be extensible, so a future implementer could come up with novel ways to organize and maintain the fact-checking side of things in response to new threats.

    I probably explained this badly, and the letters are probably more pretentious than helpful. But I hope someone smarter can pick this up and run with it, because it’s something the world desperately needs.
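
    To make the idea a bit more concrete, here is a purely hypothetical sketch in C# of what a single fact-check tag might carry. None of these type or field names come from Lemmy, ActivityPub, or any real proposal; it’s only meant to show the shape of the data “F” would publish and “U” would subscribe to.

        // Hypothetical illustration only: every name here is invented, not from Lemmy or ActivityPub.
        // A fact checker in F publishes one of these against a post in C; opted-in users in U
        // subscribe to the fact checkers they trust and see the tags overlaid while browsing C.
        using System;
        using System.Collections.Generic;

        public record FactCheckTag(
            Uri TargetPost,                  // the post in community C being annotated
            Uri Author,                      // the fact checker in F who wrote the tag
            string Claim,                    // the specific claim being challenged
            string Verdict,                  // e.g. "false", "misleading", "unverifiable"
            IReadOnlyList<Uri> References);  // the compelling, relevant references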


  • That’s right. I know I was thrown off by large projects earlier in my career. The more you learn, the better you get at understanding and packaging/setting aside larger and larger pieces of a project, and bigger projects stress this ability in new ways. I think I lost a job in 2016, at a small business with maybe 14 devs, because I couldn’t stretch my brain around something bigger.

    This might be a bad way to communicate this, and I think I’m taking this in a weird direction, but: I’ll use the Mozilla project as an example of a large project, though I’ve never looked at its source.

    Suppose you’re in an interview and, given the specifics of the role, you’re expected to be fast and fluent with the same technologies the Mozilla project uses, though you’ve never looked at its source before. Given a machine with the source already checked out and open in an IDE, you have one hour to read through it and familiarize yourself, so you can answer questions about how you would approach adding features or test coverage.

    What I want to know is: how high does your heart rate go? Does it go up just a little, as expected for a high-stakes situation? Or does it go up a lot, because you honestly have no idea how much another dev in your situation would be expected to accomplish, so you have no clue what “good enough” looks like?

    This is a crappy example because no interviewer could ever actually use this metric. But I’d say if it goes up a lot, for the reason I gave, you might not be ready for senior. And by this metric, it might not ever be possible to grow to “senior” without working at a company with large multi-team projects. But I think that’s accurate.

    (Edit: yes, sorry, Software Development Engineer. I think that’s a protected term in parts of the US, Texas and California at least, but elsewhere in the US you don’t need to pass an engineering board exam to use that title.)


  • It sounds like you’ve got enough familiarity with the whole development lifecycle, as applied to a smaller single-dev-sized project, that you’d be great as an SDE 2 at a larger company, ready within a few years to step up to Senior. There are companies with hundreds of developers who only rarely hire straight out of college, where your level of experience is exactly what they want.

    (There are also companies with hundreds of developers who do hire straight out of college, and I’m not trying to disillusion recent grads.)



  • Think of a programming language as a crutch for the human brain. Processors don’t need one: they don’t have to think about the code; they just execute it. Our mushy human brains need a lot of help, however.

    We need to think about things on our own terms. Different programming languages, different APIs that do the same thing, different object models: these all help people tackle new problems, or even just implement solutions in new ways.

    Some newer languages have a completely different model of execution that you may not be familiar with. Imperative languages are what we traditionally think of, because they work most similarly to how processors execute code: the major pattern used to make progress, to do work, is to create variables and assign values to them. C, COBOL, BASIC, Pascal, C# (my personal favorite), JavaScript, and even Rust are all imperative languages.
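
    For example, here’s a minimal, made-up C# sketch of that pattern; nothing here is from a real codebase, it just shows progress being made by creating a variable and assigning to it.

        // Minimal illustration of the imperative pattern: create variables, then assign to them.
        using System;

        class ImperativeExample
        {
            static void Main()
            {
                int[] values = { 3, 1, 4, 1, 5 };
                int sum = 0;                    // create a variable...
                for (int i = 0; i < values.Length; i++)
                {
                    sum = sum + values[i];      // ...and make progress by assigning to it
                }
                Console.WriteLine(sum);         // prints 14
            }
        }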

    But there are also functional languages, like ML or F#. (The latter I keep installing with Visual Studio but never actually use.) The main pattern there is function application. Functions themselves are first-class data, and not in a hacky, implementation-specific way like passing machine code around. (I’ve only ever used this for grad school homework, never professionally, sadly.)
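
    To show the contrast without F# syntax, here’s roughly the same flavor sketched in C#, where delegates already let you treat functions as ordinary values. Again, this is only an illustration I’m making up, not anything from ML or F# itself.

        // Functions as first-class values: built, passed around, and composed like any other data.
        using System;
        using System.Linq;

        class FunctionalExample
        {
            // Build a new function out of two existing ones: apply f, then g.
            static Func<int, int> Compose(Func<int, int> f, Func<int, int> g) =>
                x => g(f(x));

            static void Main()
            {
                Func<int, int> addOne = x => x + 1;
                Func<int, int> triple = x => x * 3;

                var addThenTriple = Compose(addOne, triple);
                Console.WriteLine(addThenTriple(4));   // (4 + 1) * 3 = 15

                // Folding a sequence by function application, with no mutable accumulator.
                int sum = new[] { 3, 1, 4, 1, 5 }.Aggregate(0, (acc, x) => acc + x);
                Console.WriteLine(sum);                // prints 14
            }
        }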

    And declarative languages like Prolog helped give IBM’s Watson its legendary open-domain question-answering ability on national TV. When you need a system to be really, actually smart, rather than just convincingly generating smart-sounding text like a generative AI, why not use a language that lets you declare fact tables and query them? (Again, only grad school homework use for me here.)
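
    You can get a rough taste of that style even in C# with LINQ. Real Prolog expresses this with facts, rules, and unification, so treat the sketch below (with made-up names) as an approximation of the idea: state what is true, then ask questions, instead of spelling out the steps.

        // A fact table plus one "rule", queried declaratively. In actual Prolog this would be:
        //   parent(alice, bob).  parent(bob, carol).  parent(bob, dave).
        //   grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
        using System;
        using System.Linq;

        class DeclarativeFlavorExample
        {
            static void Main()
            {
                // Facts: parent(X, Y) means X is a parent of Y (names are invented).
                var parent = new[]
                {
                    (Parent: "alice", Child: "bob"),
                    (Parent: "bob",   Child: "carol"),
                    (Parent: "bob",   Child: "dave"),
                };

                // Rule: X is a grandparent of Z if X is a parent of some Y who is a parent of Z.
                var grandparents =
                    from p1 in parent
                    from p2 in parent
                    where p1.Child == p2.Parent
                    select (Grandparent: p1.Parent, Grandchild: p2.Child);

                foreach (var g in grandparents)
                    Console.WriteLine($"{g.Grandparent} is a grandparent of {g.Grandchild}");
                // alice is a grandparent of carol
                // alice is a grandparent of dave
            }
        }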

    Programming is all about solving problems, and there are so many kinds of problems and so many ways to think about them. I know my own personal pile of gray mush needs all the help it can get.