“There’s absolutely no probability that you’re going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It’s going to take years, if not many decades, but I still think the time to focus on safety is now,” he said.
just days after poor lil sammyboi and co went out and ran their mouths! the horror!
Sources told Reuters that the warning to OpenAI’s board was one factor among a longer list of grievances that led to Altman’s firing, as well as concerns over commercializing advances before assessing their risks.
Asked if such a discovery contributed…, but it wasn’t fundamentally about a concern like that.
god I want to see the boardroom leaks so bad. STOP TEASING!
“What we really need are safety brakes. Just like you have a safety brake in an elevator, a circuit breaker for electricity, an emergency brake for a bus – there ought to be safety brakes in AI systems that control critical infrastructure, so that they always remain under human control,” Smith added.
this appears to be a vaguely good statement, but I’m gonna (cynically) guess that it’s more steered by the fact that MS has now repeatedly burned its fingers on human-interaction AI shit, and is reaaaaal reticent about the impending exposure
wonder if they’ll release a business policy update about usage suitability for *GPT and friends
Github / Microsoft has arguably rendered the GPL meaningless. If all GPL’d code effectively becomes public domain once a big company shoves it into an LLM, then what’s the point?
GPL was already marginalized. Most open source stuff[1] is permissively licensed. The Linux kernel and gcc are outliers.
For all that (or maybe because of it) GPL zealots are really really loud.
If an LLM spits out MIT-licensed code, you might get mad that the copyright notice isn’t included, but if you chose a permissive license, that’s what you signed up for.
[1] by whatever metric you prefer: most popular, most deployed, most “productive”