I’m not sure I agree, but that brings me to my second question:
What’s the effective difference?
“ChatGPT had the book entirely memorized.”
I feel like this exposes a fundamental misunderstanding of how LLMs are trained.
This is a vague non-answer, although I agree it’s done very differently, because our process is biological and the AI’s is not.
But as I asked elsewhere, what’s the effective difference?
I guess it comes down to a philosophical question of what “know” actually means.
But from my perspective, it certainly knows some things. It knows how to determine what I’m asking, and it clearly knows how to formulate a response by stitching together information. Is it perfect? No. But neither are humans; we mistakenly believe we know things all the time, and miscommunications are quite common.
But this is why I asked the follow-up question… what’s the effective difference? Don’t get me wrong, they clearly have a lot of flaws right now. But my 8-year-old has a lot of flaws too, and I assume both will get better with age.
So, it’s either perfect right now, or never capable of anything. Great critical and nuanced thinking.
“…spicy autocomplete clearly cannot.”
What are you basing this “clearly cannot” on? Because an early iteration of it was mediocre at it? The first ICE cars were slower than horses; I’m afraid this statement may be the equivalent of someone pointing at that and saying “cars can’t get good at going fast.”
But I specifically asked “in this regard”, referring to taking a test after previously having trained yourself on the data.
I absolutely agree. However, if you think LLMs are just fancy LUTs (lookup tables), then I strongly disagree. Unless, of course, we are also just fancy LUTs.
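To make the distinction concrete, here’s a toy sketch (entirely my own illustration, nothing to do with how LLMs are actually implemented): a lookup table can only regurgitate pairs that were explicitly stored, while even a crudely fitted function can answer an input it never saw.

```python
# "Fancy LUT": memorizes exact question/answer pairs and nothing else.
lut = {(2, 3): 6, (4, 5): 20}
print(lut.get((2, 3)))  # 6    -- seen during "training"
print(lut.get((3, 7)))  # None -- never stored, so it has nothing to say

# Learned function: fit y = a * x1 * x2 from the same two examples.
pairs = [((2, 3), 6), ((4, 5), 20)]
a = sum(y / (x1 * x2) for (x1, x2), y in pairs) / len(pairs)  # a == 1.0
print(round(a * 3 * 7))  # 21 -- generalizes to an unseen input
```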
My question to you is: how is it different from a human in this regard? I would go to class, study the material, and hope to retain it, so I could then apply that knowledge on the test.
The AI is trained on the data, “hopes” to retain it, so it can apply it on the test. It’s not storing the book, so what’s the actual difference?
And if you have an answer to that, my follow-up would be “what’s the effective difference?” If we stick an AI and a human in a closed room and give them a test, why do the intricacies of how they store and recall the data matter?
Why is that a criticism? This is how it works for humans too: we study, we learn the material, and then we try to recall it during tests. We’ve been trained on the data too; neither a human nor an AI would be able to do well on the test without learning it first.
This is part of what makes AI so “scary”: it can basically know so much.
Sorry, I meant static typing, not strong typing; I often mix the two up. But this is exactly what I mean: if you want something to be statically typed, you have to put in the extra effort, and if not, you’ve got dynamic typing, which is fine when things are small but causes stumbling blocks when things get larger.
And depending on the scale of the project I’m working on, my unit tests usually take minutes to run, if not hours. If I’m debugging and I change a property, the compiler instantly catches that I forgot to change it elsewhere; hell, even when I save I’ll get a little error warning. Maybe running unit tests all the time is fine if the project is small, but not if it’s large. I’m not going to run the full suite every time I start a new debugging session. Linters kind of make up for this, but then we’re back to making sure there are type hints, which, as I’ve been told, is not “pythonic.”
If people like it, more power to them; I’m not shitting on the language, as I like it too. I just can’t use it for larger stuff, I’ve never worked anywhere that uses it for larger stuff, and I think that’s for good reason.
I don’t get it. I love Python for small, quick projects. But anytime things get more complicated, I find myself constantly tripping over myself without the strong typing and errors letting me know when I’ve changed a property in a class and that change is failing elsewhere.
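To make the failure mode concrete, here’s a minimal sketch (the names `Invoice`, `print_summary`, and `grand_total` are made up for illustration). A static checker like mypy or pyright flags the stale call site without running anything; plain Python only tells you at runtime, and only if that path actually executes:

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    grand_total: float  # renamed from `total`; one call site was missed

def print_summary(inv: Invoice) -> None:
    # Stale attribute access: mypy reports
    #   error: "Invoice" has no attribute "total"
    # before the program ever runs.
    print(inv.total)

try:
    print_summary(Invoice(grand_total=99.0))
except AttributeError as e:
    # Without a checker, you only find out here, at runtime.
    print(e)
```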
I find that writing detailed comments about things that aren’t clear in the code often leads me to a better, clearer solution. Same thing with writing detailed commit messages: so many times I realize something better while writing the message, so I’ll finish the commit and then almost immediately amend it with the better way.
I don’t consider myself a never-nester, but looking at my code now, I extract all the time and rarely go four tabs in. It just makes the code more easily maintainable. I also like the idea of putting the failure conditions first; I haven’t tried it yet, but I’m sure there are times I can use it.
Sure, sometimes you might not have a choice, but I do think there’s a lot of value in what they’re saying. I think it goes along with the standard “functions should do one thing” paradigm.
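For what it’s worth, here’s a minimal sketch of the “failure conditions first” idea (`Order`, `ship`, and `process_order` are made-up names): each guard clause exits early, so the happy path stays at one indent level instead of being buried three conditionals deep.

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    items: list = field(default_factory=list)
    paid: bool = False

def ship(order: Order) -> Order:
    print(f"shipping {len(order.items)} item(s)")
    return order

def process_order(order: Order | None) -> Order | None:
    # Failure conditions first: each guard returns (or raises) early,
    # so the success path never gets wrapped in nested conditionals.
    if order is None:
        raise ValueError("no order given")
    if not order.items:
        return None  # nothing to ship
    if not order.paid:
        return None  # don't ship unpaid orders
    return ship(order)  # happy path, one indent deep

process_order(Order(items=["book"], paid=True))  # prints: shipping 1 item(s)
```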