Universal Paperclips is one of the best clicker games.
Specifically: because it isn’t really a clicker game. It only starts off as one. IIRC there are only about two sections that are “clicker”: the start (before auto-clippers kick in), and then the quantum computer.
I guess you have to launch your first 20 or 30 probes at the space stage, and that’s done one-click-at-a-time… but I don’t think that counts as a “clicker” game since it’s so few clicks in the grand scheme of things. At no other point is rapid-clicking that useful.
Meanwhile in Ubuntu-land, a Python2 script probably just straight up doesn’t work at all.
“At least the .NET code continues to run today”. And in practice you can set up a 20-year-old developer VM running VS2008 and code “the old way” to continue maintaining the old code (which still runs on today’s machines). Meanwhile, you’re FORCED to migrate the Python2 stuff in Ubuntu-land due to a litany of incompatible changes to systemd, X.org, Python2-vs-3 issues, and more.
Not just Python2, but also Bash scripts. (Weird changes to netcat, or ifconfig, or other tools that utterly bork old scripts.)
Microsoft isn’t as good at backwards compatibility as it used to be. But they’re still leagues ahead of the OSS community on this.
Because code written 20 years ago on .NET and C# still works today, showing the stability of the platform.
It does sculpting, 3d tech and all that; it gets very precise.
I’m not talking about sculpting. I’m talking about overhangs and other fundamental issues that 3d printers need to solve before the darn thing is printed.
I’m looking up Blender’s features, and it seems like there are features that can do this stuff (ex: https://docs.blender.org/manual/en/latest/addons/mesh/3d_print_toolbox.html), but even then…
Blender can be used to create meshes for 3D printing. Meshes exported from Blender are usually imported into a piece of software that takes the mesh and “slices” it into paths the 3D printer can execute. An example of such slicer software is Cura.
Even in Blender’s manual, it seems like they’re suggesting you need a 2nd piece of software to do this job well.
The physical act of creating a 3d print needs to be thought about, especially in artistic designs. You will often create impossible shapes (most noticeably overhangs), especially if you’re ignorant of the whole 3d printing process. Having good software that detects these situations is… well… maybe not necessary. But it helps.
If you’ve never sent your (computer) sculptures through a CAM or thought about these issues before, I can guarantee you that you’ve accidentally made a wall too thin, or an overhang that’s impossible to print, or some other distortion that will 3d print poorly (or be impossible to 3d print).
The measure of good software is the number of edge cases it detects before you waste hours on a print job.
Each print job is a prototype. You gotta iterate. You 3d sculpt. Then you CAM-simulate to look for obvious errors. Then you 3d print. Then you figure out what went wrong and 3d sculpt again. Etc. etc.
The more issues the CAM software detects before printing, the faster you iterate (ie: 3d sculpt, find an issue in the CAM check, and return to 3d sculpting to fix the issue before printing). I have severe doubts that Blender (or even Blender + Cura, as recommended in this manual) covers as many issues as Rhino + RhinoCAM (as a random example of $1000+ software).
That’s… okay. I’m not saying you need to buy more expensive software. But what I’m saying is that what you’re losing out on in software is something YOU need to make up for with experience. YOU need to learn about overhangs and other such issues that can prevent a 3d print from being successful.
EDIT: Looks like you already ordered the printer. Well, you’ll learn soon enough one way or the other. Thinking about the print is easier than designing it in the first place, but it’s still a process. Good luck with learning slicers + Cura to get your Blender stuff to work!
It’s not impossible, but don’t expect a success on your first print. And always be willing to go back to your Blender model and change it so that it’s physically possible to print. Iterate-iterate-iterate, that’s my ultimate advice to you. (And while good tools can quicken the iteration cycle, Blender is workable, just not ideal IMO.)
Blender has a ton of “movie” features, such as animation, keyframes, bones, etc. etc. It’s almost entirely focused on movie-making. None of these features are useful to you, and in fact they’re harming your workflow. (They’re distracting items in the menus and manual.)
Rhino, a freeform CAD program for industrial design, has many more relevant features. It’s $1000, but its focus on making artistic 3d-printed models is obvious once you use such a program.
AutoCAD is more of an engineer’s tool: extremely precise but non-artistic in design. It’s $4000 as well, and also the wrong tool for making a board game piece.
You’re using the wrong 3d program (Blender) to make your board game pieces. That’s all I’m saying. The people in the know would use a program like Rhino (or a comparable industrial-design 3d-to-manufacturing tool). Blender can work, but it’s obvious that it doesn’t have the CAD or CAM features that a proper industrial tool would have.
Without a CAM-plugin package, are you even sure that your design can be 3d printed correctly? Have you thought about how the 3d printer nozzle (or CNC mill, or whatever you’re using) will create the end-product? Do you have holes in your design?
Do you have any overhangs that are unstable or unable to be printed?
https://www.3dprintingera.com/3d-printing-overhangs-and-bridges/
A tool like RhinoCAM-Mesh (ugggh, another $1000, but you get the gist of this hobby…) will automatically generate supports that snap off after printing, so that whatever shape you wanted can actually be made.
https://mecsoft.com/products/rhinocam/rhinocammesh/
Just because you made it in Blender doesn’t mean it’s possible to 3d print. You need to double-check the “head” of the 3d printer, see if it ever collides with your design, check that overhangs are supported, etc. etc. Sometimes it’s impossible, and you have to go back to square one and redesign the whole toy (or sculpture) in order for it to be 3d printed.
Tightly-integrated CAM (computer-aided manufacturing) tools check these things for you. If you’ve never thought about how the 3d printer head moves, or what angles are impossible to print, etc. etc., then you haven’t finished your job. You want the CAM to double-check these things for you, and yeah, it’s expensive, but it’s all software these days.
So yeah, it takes a tool like Rhino (lol, $1000) plus RhinoCAM-Mesh (lol, another $1000) to do this workflow. Now, you can do all of this manually of course and “design your 3d game piece” for 3d printing yourself (including thinking of the temporary struts / braces you need to print-then-cut-off to make your design successful). But that takes a bit more skill and manual effort, because Blender has no such CAM tools available (at least, that I’m aware of).
“Makerspaces” exist for a reason.
You should be able to get access to a higher-quality 3d printer (or CNC mill, or laser cutter) from a typical makerspace. It’s basically a club (often near universities) where people effectively pool their money together for collective ownership.
My local makerspace is at a community college. It requires a safety class before you can use the equipment, so there’s a few weeks of spinup time. The rules will be different wherever you are. In my case, my state sponsored the funds for the 3d printer, but I still have to pay for resin costs and whatnot when using it.
Good software costs a ton of money too, and you might want to find a makerspace just so that you can get access to the $4000+ class of software that engineers use. Or at least the $1000+ software? Thinking of tools like Rhino or AutoCAD, or a few other professional packages.
Blender is more of a 3d graphics (think Toy Story movie) kind of workflow. It can do 3d designs, but that’s not what it was originally designed for.
I think there’s a bit of a difference between the two shows.
Rick and Morty makes fun of movies, sometimes Sci Fi movies but not always.
Futurama creates novel proofs as jokes for an episode, and presents the proof to the audience as part of the story.
Professor Lemire, btw, is a high-performance-computing professor who has been writing a lot of AVX512 techniques / articles for the past few years. His blogposts are very popular on Hacker News (news.ycombinator.com). Pretty cool guy; I think it’s well worth it to follow his blog if you’re into low-level assembly, low-level memory optimizations, and the like.
pext (and its reverse, pdep) are basically 64-bit bitwise gather and 64-bit bitwise scatter instructions. On Intel, they execute in 1 tick, but on AMD they took 19 ticks (at least, a few years ago). Rumor is that the newest AMD chips are faster at it.
pdep and pext are some of my favorite instructions, because gather/scatter is an important supercomputer / parallelism concept, and Intel invented an extremely elegant way to describe bit-movement within 64-bit registers. Given how important gather/scatter has been to the supercomputer algorithms of the past 40 years, I expect many, many more applications of pdep/pext.
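A minimal sketch of the semantics, in C++ with the BMI2 intrinsics (compile with -mbmi2; the mask and values are just illustrative):

```cpp
#include <immintrin.h>  // BMI2 intrinsics: _pext_u64 / _pdep_u64
#include <cstdint>
#include <cstdio>

int main() {
    // Mask selects the low nibble of every byte.
    uint64_t mask = 0x0F0F0F0F0F0F0F0Full;

    // pext ("bitwise gather"): pull the masked bits down into one
    // contiguous low field. The eight low nibbles 8,7,6,5,4,3,2,1
    // pack into 0x0000000012345678.
    uint64_t packed = _pext_u64(0x1122334455667788ull, mask);
    printf("pext: %016llx\n", (unsigned long long)packed);

    // pdep ("bitwise scatter"): spread the low bits of the source out
    // to the masked positions. 0x12345678 spreads back out to
    // 0x0102030405060708.
    uint64_t spread = _pdep_u64(0x0000000012345678ull, mask);
    printf("pdep: %016llx\n", (unsigned long long)spread);
}
```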
My own experiments with pdep and pext were to create a small, bit-scale relational database for solving 4-coloring-theorem-like problems. I was able to implement “select” with a pext, and “joins” as a pdep. (4 bits is a single-column table, 16 bits a dual-column table, 64 bits a triple-column table.)
It’s not so easy.
GPU programmers are the experts in AoS vs SoA formats. And when you look at how RGB values are stored, it’s… incredibly complex. Sometimes you’ve got RRRRGGGGBBBB, sometimes it’s RGBARGBARGBA, sometimes it’s YYYYUUVV. What’s best for performance changes dramatically from system to system, requiring lots of benchmarking and ultimately… a massive slew of processor-specific / ARM NEON instructions that convert between every format imaginable.
Oh right, GPUs don’t need those processor-specific instructions, because permute and bpermute exist (permute: a 32-way crossbar that pushes any lane’s data to any lane; bpermute: any lane pulls data from any lane). CPUs do need them though.
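To make the two layouts concrete, here’s a minimal C++ sketch (the struct names and the darken-red operation are mine, purely illustrative):

```cpp
#include <vector>
#include <cstdint>

// AoS ("RGBARGBA..."): one struct per pixel, channels interleaved.
struct PixelAoS {
    uint8_t r, g, b, a;
};

// SoA ("RRRR...GGGG...BBBB..."): one contiguous array per channel,
// which is generally what wide SIMD loads prefer.
struct ImageSoA {
    std::vector<uint8_t> r, g, b, a;
};

// Same operation, two layouts: in SoA it walks one dense array;
// in AoS it strides through interleaved bytes.
void darkenRedAoS(std::vector<PixelAoS>& img) {
    for (auto& p : img) p.r /= 2;
}

void darkenRedSoA(ImageSoA& img) {
    for (auto& r : img.r) r /= 2;
}

int main() {
    std::vector<PixelAoS> aos(256, PixelAoS{200, 100, 50, 255});
    darkenRedAoS(aos);

    ImageSoA soa;
    soa.r.assign(256, 200);
    soa.g.assign(256, 100);
    soa.b.assign(256, 50);
    soa.a.assign(256, 255);
    darkenRedSoA(soa);
}
```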
https://www.infoworld.com/article/3409071/java-challenger-7-debugging-java-inheritance.html#toc-2
Composition is literally the “has a” relationship. That’s how it’s always been taught.
You’re not describing composition.
Go Files do not “hasa” reader. You don’t do file.reader.read(); you just do file.read(). That’s inheritance, as File has inherited the read() method.
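The distinction, sketched in C++ rather than Go (hypothetical Reader / File types, just to show the call-site difference):

```cpp
#include <cstdio>

struct Reader {
    void read() { printf("reading\n"); }
};

// Composition ("has a"): the Reader is a member, and callers go
// through it explicitly: file.reader.read().
struct ComposedFile {
    Reader reader;
};

// Inheritance ("is a"): read() is available directly on the object:
// file.read().
struct InheritedFile : Reader {};

int main() {
    ComposedFile cf;
    cf.reader.read();   // must name the inner object

    InheritedFile inf;
    inf.read();         // method comes from the parent
}
```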
But the fact that TCPStreams isa file-descriptor, Files isa file-descriptor, Pipes isa file-descriptor, and other such “stream-like objects” in the Linux kernel proves that the read/recv and write/send system calls are generic enough to work in a wide variety of circumstances.
Yeah, they’re all different. But as far as the Linux API goes, they’re all file descriptors underneath, with abstractions that have worked well for inheritance in practice. In many cases, inheritance doesn’t work. But in many cases it works, and works well, for decades.
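A minimal POSIX sketch of that genericity: the same read()/write() calls work on a pipe and on a regular file, because both are just file descriptors to the kernel (error handling omitted; the /tmp path is only for the demo):

```cpp
#include <unistd.h>   // read, write, pipe, lseek, close
#include <fcntl.h>    // open
#include <cstdio>
#include <cstring>

int main() {
    char buf[64] = {0};

    int fds[2];
    pipe(fds);                    // a pipe "isa" file descriptor
    write(fds[1], "via pipe", 8);
    read(fds[0], buf, sizeof(buf) - 1);
    printf("%s\n", buf);

    int fd = open("/tmp/fd_demo.txt", O_RDWR | O_CREAT | O_TRUNC, 0600);
    write(fd, "via file", 8);     // identical calls on a file
    lseek(fd, 0, SEEK_SET);
    memset(buf, 0, sizeof(buf));
    read(fd, buf, sizeof(buf) - 1);
    printf("%s\n", buf);
    close(fd);
}
```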
Inheritance is useful.
However, “Dog isa Animal” is worse than useless: it actively hampers your code and makes your life worse.
However, useful inheritance patterns are all over the place in GUI / Model-View-Controller code. “Button isa Window”, and “FileStream isa Stream”, and “StringStream isa Stream” in C++. If you stick with SOLID principles, inheritance helps your code significantly.
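For example, a minimal C++ sketch of the Stream case: one function written against std::ostream runs unchanged against the console, a string buffer, or a file:

```cpp
#include <iostream>
#include <fstream>
#include <sstream>

// Written once against the base class std::ostream.
void report(std::ostream& out) {
    out << "42 widgets processed\n";
}

int main() {
    report(std::cout);              // console

    std::ostringstream mem;         // in-memory string stream
    report(mem);

    std::ofstream file("log.txt");  // file on disk
    report(file);
}
```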
OpenSSL / Heartbleed was the event when this comic came out IIRC.
The refcount absolutely is shared state across threads.
If Thread#1 thinks the refcount is 5, but Thread#2 thinks the refcount is 0, you’ve got problems.
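A minimal sketch of how that disagreement happens (plain int refcount, two threads; the count of 2 is just illustrative):

```cpp
#include <thread>
#include <cstdio>

// With a plain int, both threads can read the count as 2 and both
// write back 1, so "freeing object" never runs and the object leaks;
// in a bigger program the same race can free an object still in use.
// Making this std::atomic<int> (one indivisible fetch_sub) is the fix.
int refcount = 2;   // data race: --refcount is read-modify-write

void release() {
    if (--refcount == 0)
        printf("freeing object\n");   // may never run if the writes race
}

int main() {
    std::thread t1(release), t2(release);
    t1.join();
    t2.join();
}
```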
Meta: Hmmm… replying to kbin.social users appears to be bugged from my instance (lemmy.world).
I’m replying to you instead. It doesn’t change the meaning of my post at least, but we’re definitely experiencing some bugs / growing pains with regards to Lemmy (and particularly lemmy.world).
GC overhead is mostly memory-based too, not CPU-based.
Because modern C++ (and Rust) is almost entirely based around refcount++ and refcount-- (and if refcount==0 then call destructor), the CPU-usage of such calls is surprisingly high in a multithreaded environment. That refcount++ and refcount-- needs to be synchronized between threads (atomics + memory barriers, or lock/unlock), which is slower than people expect.
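A small sketch of where those atomics hide, using standard std::shared_ptr (the loop is only there to make the copy traffic visible):

```cpp
#include <memory>
#include <thread>

int main() {
    auto p = std::make_shared<int>(42);

    // Every copy of a shared_ptr is an atomic refcount++, and every
    // destruction an atomic refcount--, so two threads copying the
    // same pointer contend on the same cache line.
    auto worker = [p]() {                    // capture by value: refcount++
        for (int i = 0; i < 1'000'000; ++i) {
            std::shared_ptr<int> local = p;  // atomic increment...
        }                                    // ...atomic decrement, each pass
    };

    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
}
```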
Even then, C malloc/free isn’t really cheap either. It’s just that in C we can do tricks like struct Foo{ char endOfStructTrick[0]; } and call malloc(sizeof(struct Foo) + 255); or whatever the size of the end-of-struct string is, to coalesce mallocs / frees together and otherwise abuse memory layouts for faster code.
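The trick sketched out, in C-compatible C++ (the classic “struct hack”; Message and make_message are hypothetical names, and C99 later made the idiom official as flexible array members):

```cpp
#include <cstdlib>
#include <cstring>
#include <cstdio>

// One allocation holds both the header and the variable-length payload,
// instead of a separate malloc for each.
struct Message {
    size_t len;
    char   payload[1];  // grows past the declared size via over-allocation
};

Message* make_message(const char* text) {
    size_t len = strlen(text);
    // payload[1] already accounts for the NUL terminator's byte.
    Message* m = (Message*)malloc(sizeof(Message) + len);
    m->len = len;
    memcpy(m->payload, text, len + 1);  // copy including the NUL
    return m;
}

int main() {
    Message* m = make_message("hello");
    printf("%zu: %s\n", m->len, m->payload);
    free(m);  // one free releases header and payload together
}
```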
If you don’t use such tricks, I don’t think that C’s malloc/free is much faster than GC.
Furthermore, fragmentation is worse in C’s malloc/free land (many GCs can compact and fix fragmentation issues). Once we take fragmentation into account, the memory advantage diminishes.
Still, C and C++ almost always seem to use less memory than Java and other GC languages, so the memory savings are substantial. But CPU-power savings? I don’t think that’s a major concern. Maybe it’s just that CPUs are so much faster today that memory is what we practically care about.
I’ve begun to pay for Kagi.com.
I wouldn’t say that it “blows my mind” or anything, but simply that it seems to work as expected (which is more than what I can say for Google). There’s also a “Fediverse” button on Kagi.com, so it can search lemmy.world (and more??).