  • I don’t find the explanations bad at all… But it’s extremely useful if you know nothing, or not enough, about a topic

    FWIW, I’m a strong proponent of local AI. The big models are cool and approachable, but a model that runs on my 5-year-old budget gaming PC isn’t that much less useful.
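    To give a sense of how low the barrier is, here’s a minimal local-inference sketch using llama-cpp-python - the model file path is a placeholder, and any small quantized GGUF model should behave similarly:

    ```python
    # Minimal local-inference sketch with llama-cpp-python.
    # "models/some-7b-q4.gguf" is a placeholder - substitute any
    # quantized GGUF model small enough for your hardware.
    from llama_cpp import Llama

    llm = Llama(model_path="models/some-7b-q4.gguf", n_ctx=4096)

    out = llm(
        "Q: Explain what a hash table is in two sentences. A:",
        max_tokens=128,
    )
    print(out["choices"][0]["text"])
    ```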

    We needed the big, expensive AI to get here… But the reason I’m such an advocate is because this technology can do formerly impossible things. It can do so much good or harm - which is why we need as many people as possible to learn how to use it for what it is, not to mindlessly chase the promise of a replacement for workers.

    AI is here to stay, and it’ll change everything for better or worse. Companies aren’t going to use it for better - they’re going to chase bigger profits until the world burns. They’re already ruining the web and society, with both AI and enshittification.

    Individuals skillfully using AI can do more than they can without it - we need every advantage we can get.

    It’s not “AI or no AI”, it’s “AI everywhere or only FAANG controlled AI”


  • I mean… Yeah? Most explanations aren’t great compared to a comprehensive understanding already in your head - once you understand something, an explanation would have to be extremely insightful to impress you at that point

    The results vary greatly based on the prompt too - not only that, they change based on the back and forth you’ve already had in the session

    It’s not a god, it’s not a human expert, but it’s always available, and it’s interactive.

    It doesn’t give you amazing writeups, but (at least for me) it makes things click in minutes that might otherwise take an hour or two of reading. I can get a short summary with key terms, ask about the key terms I don’t know, ask for an example in a given context, challenge the example for an explanation of how it can be generalized - and every once in a while along the way I learn about a blind spot I never realized I had

    It’s like talking to a librarian - it gives you the broad strokes of a topic well, which prepares you well enough that you’re ready for deeper reading to fill in the details.

    It doesn’t replace a teacher, a tutor, further reading, or anything else - but it’s still a fantastic education tool that can make learning easier and faster



  • They’re famously terrible at math, but you can relatively easily offload that to a conventional program - see the sketch below
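    Everything in this sketch (the names, the parser-based evaluator) is my own illustration, not any particular framework - just plain standard-library Python:

    ```python
    # Instead of trusting a model's arithmetic, extract the expression
    # and evaluate it with a real parser. Pure standard library.
    import ast
    import operator

    OPS = {
        ast.Add: operator.add,
        ast.Sub: operator.sub,
        ast.Mult: operator.mul,
        ast.Div: operator.truediv,
        ast.Pow: operator.pow,
        ast.USub: operator.neg,
    }

    def safe_eval(expr: str) -> float:
        """Evaluate a plain arithmetic expression without eval()."""
        def walk(node):
            if isinstance(node, ast.Expression):
                return walk(node.body)
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
                return OPS[type(node.op)](walk(node.operand))
            raise ValueError("unsupported syntax")
        return walk(ast.parse(expr, mode="eval"))

    print(safe_eval("3 * (2 + 4) ** 2"))  # 108 - deterministic, unlike an LLM
    ```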

    I didn’t mean for children (aside from generating learning materials). They can be wrong - it’s crippling to teach the fundamentals wrong, and children probably lack the nuance to keep from asking leading questions

    I meant more for high school, college, and beyond. I’ve been using it for programming this way - the docs for what I’m using suck and are very dry, and getting ChatGPT to write an explanation and examples is far more digestible. If you ask correctly, it’ll explain very technical topics in a relatable way
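    The workflow is roughly this - model name and prompt wording are just my choices, via the OpenAI Python client:

    ```python
    # Sketch: turn a dry documentation excerpt into an explanation
    # with examples. Assumes OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()

    dry_docs = "...paste the dry documentation excerpt here..."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works
        messages=[{
            "role": "user",
            "content": (
                "Explain this documentation in plain language, then give "
                "two short, runnable usage examples:\n\n" + dry_docs
            ),
        }],
    )
    print(response.choices[0].message.content)
    ```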

    Even with math, you could probably get a better calculus education than I got… It’ll be able to explain concepts and their applications - I had zero interest in calculus because I got little explanation of why I should learn it or what it was good for; I only really started to learn it when it came up in Kerbal Space Program and I had a reason

    But you should never trust its math answers lol


  • it is a little funny to me that they’re talking about using AI to detect AI garbage as a mechanism of preventing the sort of model/data collapse that happens when data sets start to become poisoned with AI content. because it seems reasonable to me that if you start feeding your spam-or-real classification data back into the spam-detection model, you’d wind up with exactly the same degradations of classification, and your model might start calling every article that has a sentence starting with “Certainly,” a machine-generated one. maybe they’re careful to only use human-curated sets of real and spam content, maybe not

    Ultimately, LLMs don’t use words, they use tokens. Tokens aren’t just words - they’re nodes in a high-dimensional graph… Their locations and connections in information space are data invisible to humans.
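    You can see the words-vs-tokens distinction directly with a tokenizer like tiktoken - the encoding choice here is mine; others behave similarly:

    ```python
    # A "word" may map to one token or several sub-word pieces;
    # the model only ever sees the integer IDs.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    for text in ["Certainly,", "interactive", "token space"]:
        ids = enc.encode(text)
        pieces = [enc.decode([i]) for i in ids]
        print(f"{text!r} -> {ids} -> {pieces}")
    ```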

    LLM responses are basically paths through the token space - they may or may not overuse certain words, but they’ll have a bias towards using certain words together

    So I don’t think this is impossible… Humans struggle to grasp these kinds of hidden relationships (consciously at least), but neural networks are good at that kind of thing
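    As a toy illustration of that kind of detector - the four example texts are invented, and a real classifier would need large curated corpora:

    ```python
    # Word-pair (bigram) features roughly capture "which words appear
    # together", the bias described above. scikit-learn, tiny toy data.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    human_texts = [
        "honestly no clue why it broke, worked fine yesterday",
        "gonna try reinstalling and see, will report back",
    ]
    machine_texts = [
        "Certainly! Here is a comprehensive overview of the topic.",
        "In conclusion, it is important to note the key considerations.",
    ]

    texts = human_texts + machine_texts
    labels = [0, 0, 1, 1]  # 0 = human, 1 = machine

    detector = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(),
    )
    detector.fit(texts, labels)

    print(detector.predict(["Certainly, it is important to note this."]))
    ```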

    I too think it’s funny/sad how AI is being used… It’s good at generation - that’s why we call it generative AI. It’s incredibly useful for generating all sorts of content when paired with a skilled human, but it’s insane to expect common sense out of something easier to gaslight than a toddler. It can handle the tedious details while a skilled human drives it and validates the output

    The biggest, if rarely exploited, use case is education - they’re an infinitely patient tutor that can explain things in many ways and give you endless examples. Everyone has different learning styles - you could so easily take an existing lesson and create more concrete or abstract versions, versions for people who need long explanations, and ones for people who learn through application
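    A sketch of that “many versions of one lesson” idea - the style instructions are mine, and the lesson text is a placeholder:

    ```python
    # Same source lesson, different rendering instructions - one prompt
    # template per learning style.
    lesson = "...an existing lesson, e.g. an intro to recursion..."

    styles = {
        "concrete": "Rewrite this lesson using only worked, real-world examples.",
        "abstract": "Rewrite this lesson around the general principles and definitions.",
        "long-form": "Expand every step of this lesson into a detailed explanation.",
        "hands-on": "Turn this lesson into a series of small exercises to solve.",
    }

    def variant_prompt(style: str) -> str:
        """Build the prompt you'd hand to any LLM for one variant."""
        return f"{styles[style]}\n\n---\n\n{lesson}"

    print(variant_prompt("concrete"))
    ```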


  • It’s more than that - once a company grows to the point where management don’t all have personal relationships with each other, how do you decide who to promote?

    Metrics. Meaning, money minus controversies… So basically, everyone with decision-making power is incentivized to push profits as far as they can without crossing that ever-shifting line where the public gets pissed at them…

    At all levels, there’s a selection pressure to find the people who push the boundaries as far as they can to maximize short-term profits without drawing attention to how the sausage is made…

    With that as the basis for all promotions across all industries, is it any surprise we are where we are, with the system cannibalizing itself now that there are no new markets to expand into?