These experts on AI are here to help us understand important things about AI.

Who are these generous, helpful experts that the CBC found, you ask?

“Dr. Muhammad Mamdani, vice-president of data science and advanced analytics at Unity Health Toronto”, per LinkedIn a PharmD, who also serves in various AI-associated centres and institutes.

“(Jeff) Macpherson is a director and co-founder at Xagency.AI”, a tech startup that appears to have been announced on LinkedIn two months ago and that does, uh, lots of stuff with AI (see their wild services page). The founders section lists other details, apart from J.M.’s “over 7 years in the tech sector”, which are interesting to read in light of J.M.’s own LinkedIn page.

Other people making points in this article:

C. L. Polk, award-winning author (of Witchmark).

“Illustrator Martin Deschatelets” whose employment prospects are dimming this year (and who knows a bunch of people in this situation), who per LinkedIn has worked on some nifty things.

“Ottawa economist Armine Yalnizyan”, per LinkedIn a fellow at the Atkinson Foundation who used to work at the Canadian Centre for Policy Alternatives.

Could the CBC actually seriously not find anybody willing to discuss the actual technology and how it gets its results? This is archetypal hood-welded-shut sort of stuff.

Things I picked out, from article and round table (before the video stopped playing):

Does that Unity Health doctor go back later and check these emergency room intake predictions against actual cases appearing there?

Who is the “we” who have to adapt here?

AI is apparently “something that can tell you how many cows are in the world” (J.M.). Detecting a lack of results validation here again.

“At the end of the day that’s what it’s all for. The efficiency, the productivity, to put profit in all of our pockets”, from J.M.

“You now have the opportunity to become a Prompt Engineer”, from J.M. to the author and illustrator. (It’s worth watching the video to listen to this person.)

Me about the article:

I’m feeling that same underwhelming “is this it” bewilderment again.

Me about the video:

Critical thinking and ethics and “how software products work in practice” classes for everybody in this industry please.

  • self@awful.systems · 1 year ago

    your industry isn’t alone in that — just like blockchains, LLMs and generative AI are a solution in search of a problem. and like with cryptocurrencies, there’s a ton of grifters with a lot of money riding on you not noticing that the tech isn’t actually good for anything

    • datarama@awful.systems · 1 year ago

      Generative AI is full of grifters and hype like crypto (and some of them are the same grifters!), but it’s not only hype. Case in point: as per OP, illustrators (and other commercial visual artists) are losing work to AI image generators. Not because the AI is better, but because it’s much cheaper and faster. The slightly uncanny and “off” look of most AI art is “good enough” for many commercial uses.

      And this is exactly the problem generative AI promises to solve: humans want to be paid for their work. There’s a reason most of these things aren’t designed as tools for professionals, but as something that poses as a replacement for the professional. This makes them maximally appealing to the people able to point a firehose of money at the AI companies, and makes them come across as a threat to workers.

    • TehPers@beehaw.org · 1 year ago

      Unlike blockchains, LLMs have practical uses (GH copilot, for example, and some RAG use cases like summarizing aggregated search results). Unfortunately, everyone and their mother seems to think they can solve every problem they have, and it doesn’t help when suits in companies want to use LLMs just so they can market that they use them.

      Generally speaking, they are a solution in search of a problem though.

      • self@awful.systems · 1 year ago

        GH copilot, for example, and some RAG use cases like summarizing aggregated search results

        you have no idea how many engineering meetings I’ve had go off the rails entirely because my coworkers couldn’t stop pasting obviously wrong shit from copilot, ChatGPT, or Bing straight into prod (including a bunch of rounds of re-prompting once someone realized the bullshit the model suggested didn’t work)

        I also have no idea how many, thanks to alcohol

        • Steve@awful.systems · 1 year ago

          Haha, they are, in fact, solutions that solve potential problems. They aren’t searching for problems; they’re searching for people to believe that the problems they solve will happen if those people don’t use AI.

        • TehPers@beehaw.org · 1 year ago

          That sounds miserable, tbh. I use copilot for repetitive tasks, since it’s good at continuing patterns (5 lines slightly different each time but otherwise the same). If your engineers are just pasting whatever BS comes out of the LLM into their code, maybe they need a serious talking-to about replacing them with the LLM if they can’t contribute anything meaningful beyond that.

          • self@awful.systems · 1 year ago

            as much as I’d like to have a serious talk with about 95% of my industry right now, I usually prefer to rant about fascist billionaire assholes like altman, thiel, and musk who’ve poured a shit ton of money and resources into the marketing and falsified research that made my coworkers think pasting LLM output into prod was a good idea

            I use copilot for repetitive tasks, since it’s good at continuing patterns (5 lines slightly different each time but otherwise the same).

            it’s time to learn emacs, vim, or (best of all) an emacs distro that emulates vim

            • 200fifty@awful.systems · 1 year ago

              it’s time to learn emacs, vim, or (best of all) an emacs distro that emulates vim

              I was gonna say… good old qa....q 20@a does the job just fine thanks :p
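
              (An aside for anyone not fluent in vim: those keystrokes break down roughly like this, assuming the repetitive edit has already been made once so that `.` has something to repeat.)

              ```
              qa      " start recording keystrokes into register a
              ....    " press . four times; each . repeats the last change
              q       " stop recording
              20@a    " replay the recorded macro 20 more times
              ```

              The net effect is the same edit applied down dozens of lines, no Copilot required.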

              • self@awful.systems · 1 year ago

                “but my special boy text editing task surely needs more than a basic macro” that’s why Bram Moolenaar, Dan Murphy, and a bunch of grad students Stallman didn’t credit gave us Turing-complete editing languages

            • TehPers@beehaw.org · 1 year ago

              Yes, the marketing of LLMs is problematic, but it doesn’t help that they’re extremely demoable to audiences who don’t know enough about data science to realize how unfeasible it is for a service to be inaccurate as often as LLMs are. Show a cool LLM demo to a C-suite and chances are they’ll want to make a product out of it, regardless of the fact that you’re only getting acceptable results 50% of the time.

              it’s time to learn emacs, vim, or (best of all) an emacs distro that emulates vim

              I’m perfectly fine with vscode, and I know enough vim to make quick changes, save, and quit when git opens it from time to time. vscode also has multi-cursor support, which helps when editing multiple lines in the same way, but not when the lines differ significantly while still following a similar pattern. Copilot can usually predict what each line should be given enough surrounding context.