The problem here is that “AI” is a moving target, and so is what “building an actual, usable AI” looks like. Back when OpenAI was demoing its Dota 2-playing bots, it was also building actual, usable AIs.
For some context: prior to the release of ChatGPT I didn’t realize that OpenAI had personnel affiliated with the rationalist movement (Altman, Sutskever, maybe others?), so I didn’t make the association, and I didn’t really know about anything OpenAI did prior to GPT-2 or so.
So, prior to ChatGPT, the only “rationalist” AI research I was aware of was the non-peer-reviewed (and often self-published) theoretical papers that Yud and MIRI put out, plus the work of a few ancillary startups that seemed to go nowhere.
The rationalists seemed to be all talk and no action, so I was genuinely surprised that a rationalist-affiliated organization had any marketable software product at all, “AI” or not.
And FWIW, I was taught a different definition of AI when I was in college, but it seems to be one of those terms that different people define in different ways.
My old prof was being slightly tongue-in-cheek, obviously. But only slightly: he’d been active in the field since back when it looked like Lisp machines were poised to take over the world, neural nets looked like they’d never amount to much, and all we’d need to get to real thinking machines was to hire lots of philosophers to write symbolic-logic descriptions of common-sense tasks. He’d seen exciting AI turn into boring algorithms many, many times - and seen many more “almost there now!” approaches that turned out to lead nowhere in particular.
He retired years ago, but I know he still keeps up with developments. I should write him an email and ask if he has any thoughts on what’s currently going on in the field.