I’ve started noticing articles and YouTube videos touting the benefits of branchless programming, making it sound like this is a hot new technique (or maybe a hot old technique) that everyone should be using. But it seems like it’s only really applicable to data processing applications (as opposed to general programming) and there are very few times in my career where I’ve needed to use, much less optimize, data processing code. And when I do, I use someone else’s library.

How often does branchless programming actually matter in the day to day life of an average developer?

  • marcos@lemmy.world · 37 points · 1 year ago

    If you want your code to run on the GPU, the complete viability of your code depends on it. But if you just want to run it on the CPU, it is only one of many micro-optimization techniques you can use to shave a few nanoseconds off an inner loop.

    The thing to keep in mind is that there is no such thing as “average developer”. Computing is way too diverse for it.

    • LaggyKar@programming.dev · 20 points · 1 year ago

      And the branchless version may end up being slower on the CPU, because the compiler does a better job optimizing the branching version.

    • Ethan@programming.dev (OP) · 6 points · edited · 1 year ago

      If you want your code to run on the GPU, the complete viability of your code depends on it.

      Because of the performance improvements from vectorization, and the fact that GPUs are particularly well suited to that? Or are GPUs particularly bad at branches?

      it is only one of the many micro-optimization techniques you can do to take a few nanoseconds from an inner loop.

      How often do a few nanoseconds in the inner loop matter?

      The thing to keep in mind is that there is no such thing as “average developer”. Computing is way too diverse for it.

      Looking at all the software out there, the vast majority of it is games, apps, and websites. Applications where performance is critical, such as control systems, operating systems, databases, numerical analysis, etc., are relatively rare compared to apps/etc. So statistically speaking, the majority of developers must be working on the former (which is what I mean by an “average developer”). In my experience working on apps there are exceedingly few times where micro-optimizations matter (as in things like assembly and/or branchless programming, as opposed to macro-optimizations such as avoiding unnecessary looping/nesting/etc).

      Edit: I can imagine it might matter a lot more for games, such as in shaders or physics calculations. I’ve never worked on a game so my knowledge of that kind of work is rather lacking.

      • LaggyKar@programming.dev · 22 points · edited · 1 year ago

        Or are GPUs particularly bad at branches?

        Yes. GPUs don’t have per-core branching, they have dozens of cores running the same instructions. So if some cores should run the if branch and some run the else branch, all cores in the group will execute both branches, and mask out the one they shouldn’t have run. I also think they don’t have the advanced branch prediction CPUs have.

        https://en.wikipedia.org/wiki/Single_instruction,_multiple_threads
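        Roughly, the same masking trick works on a CPU too: compute both results and let a bit-mask pick one. This Go sketch (names and setup are mine, not literally how GPUs do it) shows the idea:

```go
package main

import "fmt"

// selectMask picks a or b without a branch. cond must be 0 or 1,
// so -cond is either all-zero or all-one bits; the mask keeps one
// input while zeroing out the other, loosely mirroring the masking
// a GPU applies across a group of cores.
func selectMask(cond, a, b int32) int32 {
	mask := -cond // 0x00000000 or 0xFFFFFFFF
	return (a & mask) | (b &^ mask)
}

func main() {
	fmt.Println(selectMask(1, 10, 20)) // 10
	fmt.Println(selectMask(0, 10, 20)) // 20
}
```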

        • Ethan@programming.dev (OP) · 4 points · 1 year ago

          Makes sense. The most programming I’ve ever done for a GPU was a few simple shaders for a toy project.

      • ishanpage@programming.dev · 15 points (1 down) · 1 year ago

        How often do a few nanoseconds in the inner loop matter?

        It doesn’t matter until you need it. And when you need it, it’s the difference between life and death.

      • graphicsguy@programming.dev · 3 points · 1 year ago

        Also if you branch on a GPU, the compiler has to reserve enough registers to walk through both branches (handwavey), which means lower occupancy.

        Often you have no choice, or removing the branch leaves you with just as much code so it’s irrelevant. But sometimes it matters. If you know that a particular draw call will always use one side of the branch but not the other, a typical optimization is to compile a separate version of the shader that removes the unused branch and saves on registers

      • 0x0@programming.dev · 3 points · 1 year ago

        How often do a few nanoseconds in the inner loop matter?

        Fintech. Stock exchanges will go to extreme lengths to appease their wolves of Wall Street.

    • 18107@aussie.zone · 1 point · 1 year ago

      Yes, GPUs are bad at branching. But my ray tracer, which is 90% branches, still runs faster on the GPU than on the CPU.

      In general you are still correct.

  • Spzi@lemm.ee · 17 points · 1 year ago

    The better of those articles and videos also emphasize you should test and measure, before and after you “improved” your code.

    I’m afraid there is no standard, average solution. You trying to optimize your code might very well cause it to run slower.

    So unless you have good reasons (good as in ‘proof’) to do otherwise, I’d recommend to aim for readable, maintainable code. Which is often not optimized code.

    • Ethan@programming.dev (OP) · 5 points · 1 year ago

      One of the reasons I love Go is that it makes it very easy to collect profiles and locate hot spots.

      The part that seems weird to me is that these articles are presented as if it’s a tool that all developers should have in their tool belt, but in 10 years of professional development I have never been in a situation where that kind of optimization would be applicable. Most optimizations I’ve done come down to: I wrote it quickly and ‘lazy’ the first time, but it turned out to be a hot spot, so now I need to put in the time to write it better. And most of the remaining cases are solved by avoiding doing work more than once. I can’t recall a single time when a micro-optimization would have helped, except in college when I was working with microcontrollers.

      • Oliver Lowe@lemmy.sdf.org · 5 points · 1 year ago

        Given the variety of software in existence I think it’s hard to say that something is so universally essential. Do people writing Wordpress plugins need to know about branch prediction? What about people maintaining that old .NET 3.5 application keeping the business running? VisualBasic macros?

        I agree it’s weird. Probably more about getting clicks/views.

      • tvbusy@lemmy.dbzer0.com · 3 points · 1 year ago

        Please please please, God, Allah, Buddha, any god or non-god out there, please don’t let any engineer bring up branchless programming for an AWS Lambda function in our one-function-per-micro-service f*ckitechture.

  • Lanthanae@lemmy.blahaj.zone · 15 points · 1 year ago

    It matters if you develop compilers 🤷.

    Otherwise? Readability trumps the minute performance gain almost every time (and that’s assuming your compiler won’t automatically do branchless substitutions for performance reasons anyway, which it probably will).

    • Ethan@programming.dev (OP) · 8 points · 1 year ago

      I understand the principles, how branch prediction works, and why optimizing to help out the predictor can help. My question is more of, how often does that actually matter to the average developer? Unless you’re a developer on numpy, gonum, cryptography, digital signal processing, etc, how often do you have a hot loop that can be optimized with branchless programming techniques? I think my career has been pretty average in terms of the projects I’ve worked on and I can’t think of a single time I’ve been in that situation.

      I’m also generally aggravated at what skills the software industry thinks are important. I would not be surprised to hear about branchless programming questions showing up in interviews, but those skills (and algorithm design in general) are irrelevant to 99% of development and 99% of developers in my experience. The skills that actually matter (in my experience) are problem solving, debugging, reading code, and soft skills. And being able to write code of course, but that almost seems secondary.

      • Max-P@lemmy.max-p.me · 8 points · 1 year ago

        I’ve never had to care about it in 16 years of coding. I’ve also seen a few absolutely horrifying code designs in the name of being branchless. Code readability is often way more important than eking out every bit of compute from a CPU. And it gets into a domain where architecture matters too: if you’re coding for a microcontroller or some low-power embedded ARM processor, those often don’t even have branch predictors, so it’s a complete waste of time.

        I’d say, being able to identify bottlenecks is what really matters, because it’s what will eventually lead you to the hot loop you’ll want to optimize.

        But the overwhelming majority of software is not CPU bound, it’s IO bound. And if it is CPU bound, it’s relatively rare that you can’t just add more CPUs to it.

        I do get your concern however, these interview questions are the plague and usually asked by companies with zero need for it. Personally I pass on any job interview that requires some LeetCode exercises. I know my value and my value isn’t remembering CS exercises from 10 years ago. I’ll absolutely unfuck your webserver or data breach at 3am though. Frontend, backend, Linux servers, cloud infrastructure, databases, you name it, I can handle it no problem.

        • Ethan@programming.dev (OP) · 4 points · 1 year ago

          Code readability is often way more important

          This. 100% this. The only thing more important than readability is whether it actually works. If you can’t read it, you can’t maintain it. The only exception is throwaway scripts I’m only going to use a few times. My problem is that what I find readable and what the other developers find readable are not the same.

          I’d say, being able to identify bottlenecks is what really matters, because it’s what will eventually lead you to the hot loop you’ll want to optimize.

          I love Go. I can modify a program to activate the built-in profiler, or throw the code in a benchmark function and use the tool chain to profile it, then have it render a flame graph that shows me exactly where the CPU is spending its time and/or what calls are allocating. It makes it so easy (most of the time) to identify bottlenecks.

        • Sekoia@lemmy.blahaj.zone · 2 points · 1 year ago

          (Branchless can technically be faster on CPUs without branch prediction, due to pipelines stalling from branches, but it’s still a waste of time unless you’ve actually identified it as a bottleneck)

      • rustic_tiddles@lemm.ee · 3 points · 1 year ago

        Personally I try to keep my code as free of branches as possible for simplicity reasons. Branch-free code is often easier to understand and easier to predict for a human. If your program is a giant block of if statements it’s going to be harder to make changes easily and reliably. And you’re likely leaving useful reusable functionality gunked up and spread out throughout your application.

        Every piece of software actually is a data processing pipeline. You take some input, do some processing of some sort, then output something, usually along with some side effects (network requests, writing files, etc). Thinking about your software in this way can help you design better software. I rarely write code that needs to process large amounts of data, but pretty much any code can benefit from intentional simplicity and design.

        • Ethan@programming.dev (OP) · 7 points · 1 year ago

          I am all aboard the code readability train. The more readable code is, the more understandable and therefore debuggable and maintainable it is. I will absolutely advocate for any change that increases readability unless it hurts performance in a way that actually matters. I generally try to avoid nesting ifs and loops since deeply nested expressions tend to be awful to debug.

          This article has had a significant influence on my programming style since I read it (many years ago). Specifically this part:

          Don’t indent and indent and indent for the main flow of the method. This is huge. Most people learn the exact opposite way from what’s really proper — they test for a correct condition, and if it’s true, they continue with the real code inside the “if”.

          What you should really do is write “if” statements that check for improper conditions, and if you find them, bail. This cleans your code immensely, in two important ways: (a) the main, normal execution path is all at the top level, so if the programmer is just trying to get a feel for the routine, all she needs to read is the top level statements, instead of trying to trace through indention levels figuring out what the “normal” case is, and (b) it puts the “bail” code right next to the correctness check, which is good because the “bail” code is usually very short and belongs with the correctness check.

          When you plan out a method in your head, you’re thinking, “I should do blank, and if blank fails I bail, but if not I go on to do foo, and if foo fails I should bail, but if not i should do bar, and if that fails I should bail, otherwise I succeed,” but the way most people write it is, “I should do blank, and if that’s good I should do foo, and if that’s good I should do do bar, but if blank was bad I should bail, and if foo was bad I should bail, and if bar was bad I should bail, otherwise I succeed.” You’ve spread your thinking out: why are we mentioning blank again after we went on to foo and bar? We’re SO DONE with blank. It’s SO two statements ago.
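          As a concrete sketch of that style (the Order type and the checks are invented for illustration):

```go
package main

import (
	"errors"
	"fmt"
)

// Order is a made-up type just to give the guards something to check.
type Order struct {
	Items []string
	Paid  bool
}

func ship(o *Order) { fmt.Println("shipping", len(o.Items), "items") }

// processOrder bails early on each improper condition, so the normal
// execution path reads top to bottom at a single indentation level.
func processOrder(o *Order) error {
	if o == nil {
		return errors.New("nil order")
	}
	if len(o.Items) == 0 {
		return errors.New("empty order")
	}
	if !o.Paid {
		return errors.New("order not paid")
	}
	ship(o)
	return nil
}

func main() {
	err := processOrder(&Order{Items: []string{"book"}, Paid: true})
	fmt.Println("err:", err)
}
```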

          • rustic_tiddles@lemm.ee · 4 points · 1 year ago

            Yep, that’s how I write my code too. I took a class in college, comparative programming languages, that really changed how I thought about programming. The first section of the class was Ruby, and the code most of us wrote was pretty standard imperative style code. If statements, loops, etc. Then we spent a month or so in Haskell, basically rewriting parts of the standard library by only using more basic functions. I found it insanely difficult to wrap my head around but eventually did it.

            Then we went back and wrote some more Ruby. A program that might have been 20-30 lines of imperative Ruby could often be expressed in 3 or 4 lines of functional style code. For me that was a huge eye opener and I’ve continued to apply functional style patterns regardless of the language I’m using (as long as it’s not out of style for the project, or makes anything less maintainable/reliable).

            Then one day a coworker showed us a presentation from Netflix (presentation was done by Netflix software engineers, not related to the service) and how to think about event handlers differently. Instead of thinking of them as “events”, think about them as async streams of data - basically just a list you’re iterating over (except asynchronously). That blew my mind at the time, because it allows you to unify both synchronous and asynchronous programming paradigms and reuse the same primitives (map/filter/reduce) and patterns in both.

            This is far beyond just eliminating if statements, but it turns out if you can reduce your code to a series of map/filter/reduce, you’re in an insanely good spot for any refactoring, reusing functionality, easily supporting new use cases, flexibility, etc. The downside would be more junior devs almost never think this way (so tough for them to work on), and it can get really messy and too abstract on large projects. You can’t take these things too far and need to stay practical, but those concepts really changed how I looked at programming in a major way.

            It went from “a program is a step by step machine for performing many types of actions” to “a program is a pipeline for processing lists of data”. A step by step machine is complex and can easily break down, especially when you start changing things. Pipelines are simple and reliable, and as long as you connect them up properly the data will flow where it needs to flow. It’s easy to add new parts without impacting existing code. And any data is a list, even if it’s a list of a single element.
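            To make that pipeline view concrete, here’s a toy version in Go (these generic helpers are mine; the standard library doesn’t ship them under these names):

```go
package main

import "fmt"

// mapSlice, filter, and reduce are the three pipeline primitives:
// each stage takes a list in and hands a list (or an accumulated
// value) out, so stages compose freely.
func mapSlice[T, U any](xs []T, f func(T) U) []U {
	out := make([]U, 0, len(xs))
	for _, x := range xs {
		out = append(out, f(x))
	}
	return out
}

func filter[T any](xs []T, keep func(T) bool) []T {
	out := make([]T, 0, len(xs))
	for _, x := range xs {
		if keep(x) {
			out = append(out, x)
		}
	}
	return out
}

func reduce[T, U any](xs []T, acc U, f func(U, T) U) U {
	for _, x := range xs {
		acc = f(acc, x)
	}
	return acc
}

func main() {
	nums := []int{1, 2, 3, 4, 5}
	evens := filter(nums, func(n int) bool { return n%2 == 0 })
	doubled := mapSlice(evens, func(n int) int { return n * 2 })
	sum := reduce(doubled, 0, func(a, n int) int { return a + n })
	fmt.Println(sum) // (2*2)+(4*2) = 12
}
```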

            • Ethan@programming.dev (OP) · 1 point · 1 year ago

              Do you recall what the presentation was called? I built a pipelined packet processing system (for debugging packets sent over an RF channel) which sounds like a fairly representative example of what you’re talking about, but it’s not obvious to me how to naturally extend that to other types of projects.

              • rustic_tiddles@lemm.ee · 2 points · 1 year ago

                I don’t remember the presentation, but luckily I did remember the concept and here’s an article: https://netflixtechblog.com/reactive-programming-in-the-netflix-api-with-rxjava-7811c3a1496a

                It’s called “reactive” programming and that article goes over some of the basic premises. The context of the presentation was in front-end (web) code where it’s a god awful mess if you try to handle it in an imperative programming style. React = reactive programming. If you’ve ever wondered why React took off like it did, it’s because these concepts transformed the hellish nightmare landscape of jquery and cobbled together websites into something resembling manageable complexity (I’m ignoring a lot of stuff in between, the best parts of Angular were reactive too).

                Reactive programming is really a pipeline of your data. So the concepts are applicable to all sorts of development, from low level packet processing, to web application development on both the front and back end, to data processing, to anything else. You can use these patterns in any software, but unless your data is async it’s just “functional programming”.

                • Ethan@programming.dev (OP) · 1 point · 1 year ago

                  I wonder how relevant this is to Go (which is what I work in these days), at least for simple data retrieval services. I can see how transforming code to a functional style could improve clarity, but Go pretty much completely eliminates the need to worry about threads. I can write IO bound code and be confident that Go will shuffle my routines between existing threads and create new OS threads as the existing ones are blocked by syscalls. Though I suppose to achieve high performance I may need to start thinking about that more carefully.

                  On the other hand, the other major component of the system I’m working on is responsible for executing business logic. It’s probably too late to adopt a reactive programming approach, but it does seem like a more interesting problem than reactive programming for a data retrieval service.

    • Ethan@programming.dev (OP) · 2 points · 1 year ago

      I thought it might be helpful for optimizing cryptographic code, but it hadn’t occurred to me that it would prevent side-channel leaks.

  • FriendOfFalcons@kbin.social · 11 points · 1 year ago

    I only know of a handful of cases where branchless programming is actually being used. And those are really niche ones.

    So no. The average programmer really doesn’t need to use it, probably ever.

  • ZILtoid1991@kbin.social · 8 points · 1 year ago

    It’s useful in digital signal processing, but otherwise it just makes your code harder to read.

    const int resultBranchless = aVal * cond + bVal * (1 - cond); // cond is 0 or 1
    // vs
    const int resultWithBranching = cond ? aVal : bVal;

    Usually compilers will optimize the second one to a cmov or similar instruction, which is as close to fast branching as you can get (though cmov isn’t available on older x86 CPUs), and is DSP compatible.

    • philm@programming.dev · 2 points · 1 year ago

      Yeah, especially if it isn’t done on the GPU (where branch optimization certainly makes more sense). Branch prediction in CPUs is pretty smart these days.

  • morhp@lemmynsfw.com · 4 points · 1 year ago

    How often does branchless programming actually matter in the day to day life of an average developer?

    Almost never. When writing code that really has to be high performance (i.e. where you know it slows down your program), it can help to think about whether there are branches or jumps that you can simplify or eliminate.

    Of course some things are often branchless, for example GPU shaders, which need very high performance and usually do the same work every time. But that’s an exception.

    • nakal@kbin.social · 3 points · 1 year ago

      There are few people who are smarter than a compiler. And those who use “branchless coding” probably aren’t.

  • lowleveldata@programming.dev · 1 point (3 down) · edited · 1 year ago

    Can’t imagine any practical difference performance-wise. Maybe it’s about making the flow easier to understand? I do recall that SonarQube sometimes complains when you have too many branches in a single function.

    • Ethan@programming.dev (OP) · 1 point · 1 year ago

      If you’re writing data processing code, there are real advantages to avoiding branches, and it’s especially helpful for SIMD/vectorization, such as with AVX instructions or code for a GPU (i.e. shaders). My question is not about whether it’s helpful - it definitely is in the right circumstances - but about how often those circumstances occur.
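      A minimal example of the kind of loop I mean. Whether either version actually vectorizes depends on the compiler and target (Go’s gc compiler mostly doesn’t auto-vectorize), so treat this purely as an illustration of the transformation:

```go
package main

import "fmt"

// Branchy: the conditional inside the loop body is what the
// branchless technique removes.
func sumPositiveBranchy(xs []int32) int32 {
	var total int32
	for _, x := range xs {
		if x > 0 {
			total += x
		}
	}
	return total
}

// Branchless: x >> 31 (arithmetic shift) is all-ones for negative x
// and zero otherwise, so AND-NOT-ing it keeps non-negative values
// and zeroes out the rest. Zero contributes nothing either way.
func sumPositiveBranchless(xs []int32) int32 {
	var total int32
	for _, x := range xs {
		total += x &^ (x >> 31)
	}
	return total
}

func main() {
	xs := []int32{-3, 1, 2, -7, 4}
	fmt.Println(sumPositiveBranchy(xs), sumPositiveBranchless(xs)) // 7 7
}
```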

      • lowleveldata@programming.dev · 2 points · 1 year ago

        Ya, and my estimation is that it doesn’t have practical impacts for day-to-day tasks. Unless you’re writing AVX instructions day to day, but then you already knew the answer.