That does seem to be the case, as long as any modifications to the source are publicly available, which is pretty reasonable.
It’s in the RSALv2:
You may not make the functionality of the Software or a Modified version available to third parties as a service
"You may not X in a way that Y" implies that you may X in a way that does not Y. It's more specific than a plain "You may not X", and that specificity changes the meaning of the license.
The legal distinction in this case allows distributing the software (for example, as source code), but not offering it as a service.
The wording says "third parties as a service", so as long as Redis isn't accessible by people outside your organization, it's fine. But paid Redis hosting wouldn't be allowed under the new license.
I don’t see anything wrong with the quote? Other than the policy itself being a ridiculous change, the wording is pretty standard legal speak. Not sure why you’re jumping to “ChatGPT Lawyer”
I’m currently working on a C++ project that takes about 10 minutes to do a clean build (plus another 5 minutes in CI to actually run the tests). Incremental builds are set up and work quite well, but any header change can easily result in a 5 minute incremental build.
As much as I’d like to try, I don’t see mutation testing being worthwhile for this project outside of maybe a few isolated modules that could be tested independently. It’s a highly interconnected codebase, and I’ve personally reviewed (or written) every test, so I already know they’re of fairly high quality. Still, it would be nice to be able to measure that.
I’d never heard of mutation testing before either, and it seems really interesting. It reminds me of fuzzing, except applied to the code instead of the input. Maybe a little impractical for some codebases with long build times, though. Still, I’ll have to give it a try on a future project. It looks like there are several tools for mutation testing C/C++.
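The idea can be sketched in a few lines. This is a toy illustration only (the `add`/`run_mutant` names and the string-replace "mutator" are made up for this example, not from any real mutation testing tool): apply a small mutant edit to a function's source, re-run the test, and see whether the suite notices.

```python
# Toy mutation-testing sketch (hypothetical, not a real tool):
# mutate a function's source, then check whether the test still passes.

SOURCE = "def add(a, b):\n    return a + b\n"

def run_mutant(source, old, new):
    """Exec a mutated copy of `source` and report whether the test catches it."""
    namespace = {}
    exec(source.replace(old, new), namespace)
    try:
        # The "test suite": a single, deliberately weak assertion.
        assert namespace["add"](2, 2) == 4
        return "survived"  # mutant passed the test -> the test is too weak
    except AssertionError:
        return "killed"    # the test failed on the mutant -> good coverage

print(run_mutant(SOURCE, "+", "*"))  # 2*2 == 4, so this mutant survives
print(run_mutant(SOURCE, "+", "-"))  # 2-2 != 4, so this mutant is killed
```

The surviving `*` mutant is exactly the kind of signal mutation testing gives you: the test passes for a wrong implementation, so the assertion needs strengthening (e.g., also checking `add(2, 3) == 5`). Real tools do this at compile/AST level across the whole codebase, which is why long build times hurt so much.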
The most useful tests I write are generally regression tests. Every time I find a bug, I’ll replicate it in a test case, then fix the bug. I think this is just basic test-driven development practice, but it’s very useful to verify that your tests actually fail when they should. Mutation testing (e.g., PIT) seems like it addresses that nicely.
It’s what Microsoft would do in the same situation. It’s only fair