Oh wow, looks like the Haskell devs have been hauling ass! Nice!
I remember the language server being a thing already, but it was in some alpha stage back then. Good to know it’s usable now! :D
Here’s what I remember from Haskell (around 2018):
I love the language, but hate the tooling.
Used it for Uni (I did a minor where I learned Haskell, recursion, parsing and regex - probably the most information-dense part of school I’ve ever had. Half a year of that minor also burned me out, so I never went for my masters; I’m OK with my Bachelors :D ), but never felt like picking it back up.
Nix(OS) has been leaving Arch’s AUR in the dust too! https://repology.org/repositories/graphs
It’s pretty wild how many packages are available. Somewhat curious if that includes variants (be it architecture, be it version) and how “few” they would have without those “duplicates” :p
That box story right below the original message is hilarious! 😂 It’s always good to bring up happy memories after someone passed away. Good way to mourn, IMO.
Stroustrup to congress: “You expect me to talk?”
Congress: “No, Mr Stroustrup, we expect your language to DIE!”
I’ve been using git for some three years now - never used cherry-pick (not consciously, anyway).
Just take on fewer points per sprint, if you can’t make it every time? Scrum is about becoming predictable, not being the absolute fastest. That’s been my experience, anyway. If your PO is pressuring you to take on more, you say “no”, because that’s your responsibility, not his.
But maybe that’s just me.
Linux (because Unix was originally created for programmers), and C because so many other languages derive from it.
Learn the language (types, functions, how to set up a project, etc), then learn the library (you can use the man pages from Linux).
You can use this knowledge for Python, as Python uses the library too, under the hood.
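To illustrate that last point - a sketch, using nothing beyond the classic POSIX calls: C’s `open(2)`, `read(2)` and `close(2)` surface almost one-to-one in Python’s `os` module, so the man pages describe both.

```python
import os
import tempfile

# Write a small scratch file so the example is self-contained.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name

fd = os.open(path, os.O_RDONLY)   # like open() in C
data = os.read(fd, 4096)          # like read() in C
os.close(fd)                      # like close() in C
os.unlink(path)                   # like unlink() in C

print(data)                       # b'hello'
```

Same names, same semantics - the C library knowledge transfers directly.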
If someone flies the “software engineer” banner seriously, I expect them to have some theoretic knowledge besides the practical one. They would know different programming paradigms (procedural, OOP, FP), know about programming patterns, layers, UML, and at least a programming language or 4 (3 superficial, 1 in-depth).
A software developer can be any random code-monkey picked up from the street who is self-taught and/or did a boot camp of sorts. Nothing wrong with being self-taught or with boot camps, as SDs need to eat, but it lacks a certain level of rigor I would expect from a SE.
Given a certain amount of experience, the SD would mostly catch up to the SE in practice. Not sure if on theoretic knowledge too, but that depends.
Any company that abandons hardware should be forced to release the source of any software that hardware needs - the latest version.
We’d need a range of available licenses, to prevent any bullshit “you’re only allowed to read this source” license.
This is going to suck for Apple, but it’s going to be great for people who pay for some expensive microscope that’s not supported any more.
There’s probably a lot of legal nonsense that may make this impossible in practice, but I’d love to see this happen.
Even worse: Depending on (local or national) law, it may be the company’s property, even if written in personal time. Especially if the code is in competition with your work.
Yes, it’s ass-backwards, but that’s how it is in some places.
Yes, I too used to struggle with this.
Learn how to debug. It’s a lifesaver to be able to step through some code to figure out what it actually does, instead of trying to read the code to figure out what it may do. Yes, I do this for my own code too, because we tend to make assumptions, and I always want to confirm those.
That means learning how to set up your IDE of choice - I presume you use vscode, so you’ll have to google “vscode debugging”. Maybe you’ll have to install some addons to add the support, and probably set up a `launch.json` in a local `.vscode` folder. It all depends on your language of choice.
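For what it’s worth, a minimal `.vscode/launch.json` for Python debugging tends to look something like this (the exact `type` and fields depend on which extension you install, so treat this as a sketch, not gospel):

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Debug current file",
            "type": "debugpy",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal"
        }
    ]
}
```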
Learn how to test. This goes great with debugging. I write code in Python, so I end up creating `src/` and `tests/` folders: `src/` for my actual code, and `tests/` for my tests. I can use either `pytest` on the terminal, or just the vscode test addons to run the tests.
Anyway, my tests end up being something like this:

`src/my_app/main.py` or something, with `src/my_app/__init__.py` existing to turn that folder into a module:

```python
def main():
    ...  # some code I want to run
```
Then in `tests/test_main.py` (mirroring the `src/` folder; the `test_` prefix makes the file findable for pytest, and I call it `main` to make it easier to link to the main code):

```python
from my_app.main import main


def test_main():
    main()
```
This is how I always start off - just a simple piece of code that does a thing, and a test with almost the same name as the function I’m trying to test. I can now add a breakpoint inside `test_main` and run the test within vscode, which gives me a way of hooking into the main function.
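To make that concrete - assuming (hypothetically) that `main()` grows to actually return something - the test can then grow an assertion, and the breakpoint goes right on the assert line:

```python
def main():
    # hypothetical next step: main() now returns a value
    # instead of just running
    return 42


def test_main():
    # put a breakpoint on the next line, then "step into"
    # to walk through main() in the debugger
    assert main() == 42
```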
Think about how to cut up the work of creating your application into smaller and smaller steps. Whenever something feels insurmountable, I stop in my tracks and mentally cut the task into smaller and smaller steps, until I feel comfortable taking the first one.
I’m a data engineer, which means I tend to write code to ‘ingest’ data (grab it from source A and put it into target B, where B is some centralized location to store all raw data).
So the main task is: “ingest all data from some source”.
I then have to figure out “what is the source”, because that dictates how I grab the data (do I have to loop over all folders in an SFTP server? Is there a state file that makes my life easier? Do I use an API instead?)
I then start writing a small piece of code that connects to the source, or just grabs some random data to show the connection works.
So now I can grab some data. Is that data too large to ingest all at once? If a file is super large, I may not be able to hold it in memory, which means using a buffer. And how many files are there to download? Should I batch those?
And this is how I slowly grow my applications from an idea (“ingest all data from some source”) into something that can actually run.
Now, I do have some experience and know that file size and file count are important to take into account, but that’s something I learned along the way.
My first applications just loaded whole files into memory (a bad idea if your memory limit is 4 GB and I’m trying to load multiple 1 GB files 😆); taking local state (which files have I already downloaded?) and external state (which ones have been updated or added?) into account came later too.
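The buffering idea above can be sketched like this (hypothetical helper name; in real ingestion `source` and `target` would be an SFTP/API stream and your raw-data store, not in-memory streams):

```python
import io


def copy_in_chunks(source, target, chunk_size=1024 * 1024):
    """Stream from one file-like object to another.

    Never holds more than chunk_size bytes in memory at once,
    so a 1 GB file won't blow a 4 GB memory limit.
    Returns the total number of bytes copied.
    """
    copied = 0
    while True:
        chunk = source.read(chunk_size)
        if not chunk:  # an empty read means end of stream
            break
        target.write(chunk)
        copied += len(chunk)
    return copied


# Simulate a "large" file with an in-memory stream.
data = b"x" * (3 * 1024 * 1024 + 123)
source = io.BytesIO(data)
target = io.BytesIO()
total = copy_in_chunks(source, target)
```

The same loop works unchanged whether the source is a local file, an SFTP handle, or a streaming HTTP response - anything with a `.read(n)` method.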
Anyway, you’re already on the right path: You already know a weak point, and you’re smart enough to know your limits and ask for help when you’re stuck. That’s one of the fastest ways to grow as a programmer.
Modern Perl
Perl or Raku?
Summed up as
T H E  E N E R G Y  T R A N S I T I O N
Not that that means anything to people outside the industry. (Spoiler: it means our energy networks need upgrading to accommodate all those solar panels, and all that generated energy needs to be tracked - which it isn’t as of today, because only a handful of locations used to generate energy, and we didn’t need to track those.)
I’m just a data engineer, but that shit is pretty fascinating in and of itself!
An overly simplified summary: Developers run on “Copium”.
Even their RDBMS and SQL were copied from ideas that came from IBM. And I recall either E. F. Codd or one of the SQL guys making a remark about Oracle’s less-than-savoury sales tactics, even back in the 90s.