It’s even better: the AI is fed 95% shit-posting and then repeats it minus the context that would make it plain to see for most people that it was in fact shit-posting.
Hey! Speak for yourself!
I for one am totally an idiot!
The point of my comment seems to have missed you, turned around and done another pass and missed you again.
Imagine if you gave away some old clothes to some Charity and they called you and said “Some of the socks have holes in them and we need you to come over here and fix those holes ASAP because we want to sell them in our used clothes store”. What would be your reaction to that?
The expectation of payment is not for the software (which MS already has and is already using, free of charge, same as everybody else), it’s for getting priority in bugfix and maintenance work, or in other words, for dictating other people’s work rather than merely getting the product of work they, of their own choice and on their own schedule, did and gave away for free.
Free software is a social relationship, not a business relationship: the users get what they get because somebody chose to put their own time into it and is giving it out for free. Such a relationship does not entitle the recipients of the goodwill of others to make demands on their time, especially if said recipients are actually profiting from what those other people gave away. If they want the right to use other people’s time as they see fit, then they have to get into a business relationship, and that’s only going to happen on business terms that both parties are willing to accept.
Further, nobody is stopping MS from using their own programmers to fix that problem themselves.
Most of the “conventions” (which are normally just “good practices”) are there to make the software easier to maintain, to make teamwork more efficient, to manage complexity in large code-bases, to reduce the chance of mistakes and to give a little boost in productivity.
For example, using descriptive names for variables (e.g. “sampleDataPoints” rather than “x”) reduces the chance of mistakes due to confusing variables (especially in long stretches of code) and allows others (and your future self, if you haven’t looked at that code for months) to pick up much faster what’s going on there in order to change it. Dividing your code into functions, on the other hand, promotes reuse of the same code in many places without the downsides of copy & paste all over the place, such as growing the code base (which makes it costlier to maintain) and, worse, unwittingly copying and pasting bugs, so that you then have to fix the same thing in several places (and might even forget one or two) rather than just fixing it in that one function.
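A minimal sketch of both points, in Python, with all names made up for illustration:

    # Descriptive names instead of single letters, and one shared function
    # instead of the same logic copy-pasted in several places.

    def average(values):
        """Return the arithmetic mean of a non-empty sequence of numbers."""
        return sum(values) / len(values)

    # The names say what the data is; the function name says what is done with it.
    sample_data_points = [12.5, 13.1, 12.9, 14.0]
    baseline_data_points = [11.8, 12.0, 12.2]

    sample_mean = average(sample_data_points)
    baseline_mean = average(baseline_data_points)

    # A bug in the averaging logic now only needs to be fixed once, inside
    # average(), instead of in every place the calculation was pasted.
    print(sample_mean, baseline_mean)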
Stuff at a higher, software-design level, such as classes, is meant to help structure the code into self-contained blocks with clear, well-controlled ways of interacting between them, thus reducing overall complexity (everything potentially connecting to everything else is the most complex web of connections you could have), increasing productivity (less stuff to consider at any one point whilst writing some code, as it can’t access everything), reducing bugs (less possibility of mistakes when certain things can only be changed by a certain part of the code) and making it easier for others to use your stuff (they don’t need to know how your classes work, only how to talk to them, like a mini library). That said, it’s perfectly feasible to achieve a similar result to classes without using classes, using scope only, though more advanced features of classes such as inheritance can’t easily be emulated like that.
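A minimal sketch of that last point, assuming nothing beyond standard Python (the names are made up): the same self-contained block with one controlled way of interacting with it, done first as a class and then with scope (a closure) alone.

    class Counter:
        """Callers can only change the count through increment()."""
        def __init__(self):
            self._count = 0

        def increment(self):
            self._count += 1
            return self._count

    def make_counter():
        """Same behaviour using scope only: count is reachable solely
        through the returned increment function."""
        count = 0

        def increment():
            nonlocal count
            count += 1
            return count

        return increment

    counter = Counter()
    increment = make_counter()
    print(counter.increment(), increment())  # 1 1
    # Inheritance, however, has no equally simple scope-only equivalent.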
That said, if your programs are small, pretty much single-use (i.e. you don’t have to keep using them for years) and you’re not working on the code as a team, you can get away with not using most “conventions” (certainly the design-level stuff), with only the downside of some loss in productivity (you lose code clarity and simplification, which increases the likelihood of bugs and makes it slower to traverse the code and spot things when you have to go back and forth to change them).
I’ve worked with people who weren’t programmers but did code (namely Quants in Finance) and they’re simply not very good at doing what is, for them, a secondary job (Quants mainly do mathematical modelling), which is absolutely normal because, unlike actual Developers, doing code well and efficiently is not what their focus has been on for years.
Also, in my experience, reviewing and fixing things is often more time-consuming than doing them yourself.
The outsourcing trend wasn’t good for junior devs in the West, mainly in English-speaking countries (it was great for juniors in India, though).
It’s worse than “copy-pasting from Stack Overflow” because the LLM actually loses all the context about an answer’s trustworthiness (i.e. counts and ratios of upvotes and downvotes, other people’s comments).
That thing is trying to find the answer text tokens nearest to the text tokens of your prompt question in its n-dimensional text-token distribution space (I know it sounds weird, but that’s roughly how NNs work). Maybe you’re lucky and the highest-probability combination of text tokens was right there in that n-dimensional space, “near” your prompt question’s text tokens (in which case straight googling would probably have worked too), or maybe you’re not lucky and it’s picking up probabilistically close chains of text tokens which are not logically related, or maybe you’re really unlucky and your prompt question’s text tokens sit in a sparsely populated zone of that n-dimensional text space and you’re getting back something anchored to a barely related nearby cluster.
But that’s not even the biggest problem.
The biggest problem is that there is no real error-margin output: the thing will give you the same genuine, professional-looking piece of output whether it came from a very highly correlated chain of text tokens or from an association of text tokens that has only a weak relation to your prompt question’s text tokens.
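A deliberately simplified Python sketch of both points (real LLMs do not literally do a nearest-neighbour lookup, and every number and name below is made up): closeness in an embedding space decides what comes back, and the output distribution looks equally confident whether or not anything was genuinely close.

    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norms

    def softmax(scores):
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    # Toy "embedding space": one prompt vector and a few candidate answers.
    prompt = [0.9, 0.1, 0.0]
    candidates = {
        "closely related answer": [0.8, 0.2, 0.1],
        "loosely related answer": [0.3, 0.6, 0.4],
        "barely related answer":  [0.0, 0.1, 0.9],
    }

    similarities = {name: cosine_similarity(prompt, vec) for name, vec in candidates.items()}
    probabilities = softmax(list(similarities.values()))

    # Even when every similarity is low (a "sparsely populated zone"), the
    # probabilities still sum to 1 and the top pick is presented just as
    # confidently: nothing in the output says "actually, nothing was close".
    for (name, similarity), p in zip(similarities.items(), probabilities):
        print(f"{name}: similarity={similarity:.2f}, output probability={p:.2f}")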
Well, a senior coder is somebody with maybe 5 years experience, tops.
The only way I can see what is at the moment called AI even just touching things like systems design, requirements analysis, technical analysis, technical architecture design and software development process creation/adaptation is by transforming the clear lists of points which are the result of such processes into the kind of fluff-heavy, thick documents that managerial types find familiar and measure (by thickness) as work.
*chugga* *chugga* *chugga* … *choo* *chooooooo…*
There goes another hype train…
The old ones didn’t.
I actually have one right here sitting in front of me which was used to develop iOS applications (as Apple contractually forces you to use Apple machines to, at the very least, do the final build of an iOS app you push to their store), and I actually bought a lower-specced model and upgraded the memory myself, as that was the cheaper option.
However, if I’m not mistaken, the model generation after that one (or maybe two generations later) came with soldered memory.
I’ve used Bose Quiet Comfort headphones (the I or the II, I’m not sure anymore) on the trading floor of an Investment Bank (think fishmonger’s market, but with financial assets) back in the late 00s, to be able to do software development (which requires focus and concentration to do efficiently).
They most definitely block outside noise (still do, in fact, over a decade and 2 earcup replacements later).
Still work fine. Somehow the “condensation” “problem” was already solved a decade and a half ago.
Oh, and even though noise-cancelling tech in headphones was near bleeding edge back then, which it is not at all now, they cost roughly half as much as these AirPods which fail “due to condensation” (though, adjusted for inflation and the higher GBP/USD exchange rate back then, maybe only 2/3 as much).
I’ve had some Bose Quiet Comfort headphones (I think it was the very first model, but it might be the 2nd) for over a decade, with tons of use (for a while I did software development in the middle of a Trading Floor: think highly intellectual, focus-demanding work in the middle of a fishmonger’s market) and had to use those things all the time.
They’re all scratched on the outside by now (from going in and out of a backpack) and the ear cups were replaced twice.
Still work fine.
Best £250 I ever spent.
Curiously “my way is the best way” is (IMHO) the greatest cause of problems at the design and architecture level in the Programming World.
I’ve worked for almost 2 decades as a freelance designer-developer (aka contractor) across various industries and even languages and frameworks (all the way from server-side Java to web interfaces and iOS applications), and one quite common scenario was me being hired because some piece of software was pretty much unmaintainable. Almost invariably that was because either a single designer-developer did the whole thing themselves during their oh-so-special learning stage, where they’ve just learned software design and overengineer everything (in OO that usually means totally unnecessary, constant use of design patterns and inheritance for stuff where there is always and only one implementer), or 3+ people, simultaneously or over the years, each got responsibility for that software or part of it, decided “I know best” and proceeded to add a whole new layer with a different software design strategy, coding standard and sometimes even a different language. The result, after a few of those, is pretty much spaghetti code AND spaghetti design, plus some of the most “interesting” bugs you can imagine (due to mismatching assumptions between layers).
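A hypothetical (and mild) Python illustration of the “always and only one implementer” over-engineering mentioned above; all the names are invented:

    from abc import ABC, abstractmethod

    # Over-engineered: an abstract base, a concrete class and a "factory",
    # for something that has exactly one implementation and always will.
    class ReportFormatter(ABC):
        @abstractmethod
        def format(self, rows):
            ...

    class CsvReportFormatter(ReportFormatter):
        def format(self, rows):
            return "\n".join(",".join(str(value) for value in row) for row in rows)

    def report_formatter_factory():
        return CsvReportFormatter()

    # The same thing, sized for what it actually has to do.
    def format_report(rows):
        return "\n".join(",".join(str(value) for value in row) for row in rows)

    rows = [("date", "value"), ("2024-01-01", 42)]
    print(report_formatter_factory().format(rows) == format_report(rows))  # True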
By the way, you do notice exactly that kind of crap coming out as APIs and Frameworks from the kind of companies (*cough* Google *cough*) whose idea of a Technical Architect is somebody with 8+ years of experience who never worked anywhere else: notice how the Android stuff has had various strategies for doing the same thing over the years, the API has grown massively over time (it was already too big and messy to begin with), there are code generators in the IDE to try and paper over the gaps between newer techniques and older ones, and they’ve even introduced a new language halfway through for no actual architectural advantage (the pinnacle of “I know best” “genius” is making your own language).
Personally, during my time as a Technical Lead I used to hire people very much based on “have you ever worked on a project over its entire life-cycle?!”, because those burned by their own ignorant-yet-pretentious stage are the ones who get that Ego is less important than pretty much everything else in making software, and I firmly believe that having seen, and having had to maintain, the results of your own fuckups should be a requirement for anybody designing an API or Framework.
“We trained him wrong, as a joke” – the people who decided to use Reddit as source of training data