lmao, Zoom is cooked. Their CEO has no idea how LLMs work or why they aren’t fit for purpose, but he’s 100% certain someone else will somehow solve this problem:
Interviewer: So is the AI model hallucination problem down there in the stack, or are you investing in making sure that the rate of hallucinations goes down?

Zoom CEO: I think solving the AI hallucination problem — I think that’ll be fixed.

Interviewer: But I guess my question is by who? Is it by you, or is it somewhere down the stack?

Zoom CEO: It’s someone down the stack.

Interviewer: Okay.

Zoom CEO: I think either from the chip level or from the LLM itself.
“We’re all in grave danger! What? Well no, we can’t give specifics unless we risk not getting paid. Signed, Anonymous”
I mean, I wasn’t exactly expecting the Einstein-Szilard letter 2.0 when I clicked that link, but this is pathetic.