i like controversial places like this: a second opinion about things always comes in handy
- 0 Posts
- 5 Comments
zoe@lemm.ee to TechTakes@awful.systems • my chatbot is so efficient it only needs one $2000 GPU per user • English • 14 · 2 years ago

it lacks human input. also, there is no economic incentive for AI to learn chess by teaching it much-needed human bias. also, most useful jobs are more brain-dead than chess, e.g. lawyering
zoe@lemm.ee to TechTakes@awful.systems • my chatbot is so efficient it only needs one $2000 GPU per user • English • 12 · 2 years ago

exactly: lacks human bias? add one
zoe@lemm.ee to TechTakes@awful.systems • my chatbot is so efficient it only needs one $2000 GPU per user • English • 23 · 2 years ago

well, running AI on consumer GPUs isn't supposed to be efficient: i assume when node sizes get smaller, cores will be more efficient, and consolidating VRAM (and GPU cores) on one big circuit board would be cost-effective: just cores running FP16 or whatever is AI-specific. GPUs like the A6000 exist for a reason. tbh, a pessimistic (or misleading) take on OP's part. the thing could replace lawyering jobs, save on graphic design costs, no more language teachers, youtube videos can be transcribed into text format and used as learning material. why should this be bad tech?
idk, VRAM is also inefficient since it wastes heat too (it's a variant of DRAM, which means each cell combines a transistor and a capacitor, and a transistor dissipates heat).
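a rough way to put numbers on the heat argument: dynamic switching power in CMOS is commonly estimated as P ≈ C·V²·f. a minimal sketch of that formula — the capacitance, voltage, and frequency values below are made-up illustrative inputs, not specs of any real GPU or VRAM chip:

```python
# Back-of-envelope dynamic power estimate using P = C * V^2 * f.
# All numeric inputs below are illustrative assumptions, not real chip specs.

def dynamic_power(capacitance_f: float, voltage_v: float, freq_hz: float) -> float:
    """Dynamic switching power in watts: P = C * V^2 * f."""
    return capacitance_f * voltage_v ** 2 * freq_hz

# Hypothetical 1 nF of switched capacitance on a 1.2 V rail at 2 GHz.
p = dynamic_power(1e-9, 1.2, 2e9)
print(f"{p:.2f} W")  # 1e-9 * 1.44 * 2e9 = 2.88 W, all of it ending up as heat
```

the point of the sketch is just that every switched capacitance at a given voltage and clock turns directly into dissipated heat, which is why smaller, lower-voltage nodes help.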
a lot of stuff needs a significant upgrade to cut down on Joule heating.
right now, process nodes take 2 years to shrink by 0.5 nm, and probably 4 years once they get smaller
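taking the comment's own cadence at face value, here is a toy extrapolation: 2 years per 0.5 nm shrink, then (as assumed here) 4 years per shrink below a cutoff. the starting node, target node, and 3 nm cutoff are my assumptions, not claims from the comment:

```python
# Toy extrapolation of the cadence above: each 0.5 nm shrink takes 2 years,
# slowing to 4 years per shrink once the node drops below cutoff_nm.
# Start/target/cutoff values are illustrative assumptions.

def years_to_reach(start_nm: float, target_nm: float, cutoff_nm: float = 3.0) -> float:
    years, node = 0.0, start_nm
    while node > target_nm:
        years += 2.0 if node > cutoff_nm else 4.0  # slower cadence at small nodes
        node = round(node - 0.5, 3)
    return years

# 5 nm -> 3 nm: four shrinks at 2 years; 3 nm -> 2 nm: two shrinks at 4 years.
print(years_to_reach(5.0, 2.0))  # → 16.0
```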