

He’s Cartman. They’re all Cartman.
Going for a shadow run to the grocery store. Don’t forget to pick up a decker to bypass the ice.
Can’t go home again. Can’t step in the same river twice.
To run Resolve properly, you apparently have to run DaVinci’s flavor of Rocky Linux 8.6. If you’re doing other things with that machine, this may be undesirable. And as far as I know, there’s no equivalent for After Effects.
I made a smartass comment earlier comparing AI to fire, but it’s really my favorite metaphor for it - and it extends to this issue. Depending on how you define it, fire seems to meet the requirements for being alive; it tends to come up in the same conversations that question whether a virus is alive. I think it’s fair to think of LLMs (particularly the current implementations) as intelligent in just the same way we think of fire or a virus as alive: having many of the characteristics, but a step removed.
Deepseek took the training of foundation models from a billionaire’s game to a millionaire’s game. If Elon wants an AI monopoly, it’ll have to be done through litigation. Which, ya know, they’re also trying.
Flames burn and smoke asphyxiates, perfectly highlighting why relying on fire is a bad idea.
o1/o3 use a smaller model to summarize the reasoning, but they don’t show the actual CoT generation the way deepseek does.
It was a free o1/o3 equivalent at a time when there were only paid options. But in the short interim, Google’s made their reasoning model free to use.
Plus, deepseek doesn’t hide its internal monologue the way o1/o3 do. It’s fun to watch it go back and forth with itself.
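The open-weights R1 models emit that monologue inline, between <think> tags, before the final answer - so splitting it out is trivial. A minimal sketch, assuming that tag format (the sample response text below is made up):

```python
import re

# DeepSeek R1 (open weights) emits its reasoning between <think> tags,
# then the final answer. The sample text is invented for illustration.
response = (
    "<think>User asked for 17 * 23. 17 * 20 = 340, 17 * 3 = 51, "
    "so 340 + 51 = 391. Double-checking... yes, 391.</think>"
    "17 * 23 = 391."
)

match = re.search(r"<think>(.*?)</think>", response, re.DOTALL)
monologue = match.group(1) if match else ""
answer = re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL).strip()

print("CoT:", monologue)   # the back-and-forth
print("Answer:", answer)   # what a chat UI would normally surface
```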
True, but they also released a paper that detailed their training methods. Is the paper sufficiently detailed such that others could reproduce those methods? Beats me.
My final semester in American Sign Language was “Sex, Drugs, and Profanity,” and most of the signs are just exactly what you’d guess. (I held on to those textbooks.) Plus, facial expressions are a big part of the grammar of the language. I don’t recognize this scene, but assuming it’s from a comedy - it’s probably also not far off from accurate.
To run the 671B parameter R1, my napkin math was something like 3/4 of a million dollars in hardware. But that (plus the much lower training cost) made this a millionaire’s game rather than a billionaire’s. Plus the distillations do seem better than anything else we have at the smaller sizes at the moment. That said, I’m more looking forward to the first use of deepseek’s methods with Google’s Titans architecture.
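For anyone who wants to check the napkin: one way the arithmetic lands in that ballpark. Every number below is an assumption for illustration, not a quote:

```python
import math

# Napkin math for serving the full 671B-parameter R1.
# All prices and margins here are rough assumptions.
params = 671e9              # R1 parameter count
bytes_per_param = 2         # assuming FP16 weights; FP8 would halve this
weights_gb = params * bytes_per_param / 1e9   # ~1342 GB of weights alone

headroom = 1.3              # assumed margin for KV cache and activations
needed_gb = weights_gb * headroom             # ~1745 GB total

gpu_vram_gb = 80            # an 80 GB-class datacenter GPU
gpus = math.ceil(needed_gb / gpu_vram_gb)     # -> 22 cards

gpu_price = 25_000          # assumed per-card price, USD
system_factor = 1.35        # chassis, CPUs, RAM, networking, power
total = gpus * gpu_price * system_factor

print(f"{gpus} GPUs, ~${total:,.0f}")  # ~$742,500 - roughly 3/4 of a million
```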
On the other hand, AI can also spot obvious errors. And the more stressed out, overworked, understaffed, and generally bombarded departments become, the more people will miss obvious errors.
R3g3n3r47|ng w17h 1337 5P34k i5 7h3 n3w h0tn355.
Which feels poetic.
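If the 1337 up there is slow to squint through, it decodes mechanically. A tiny sketch - the mapping is the usual digit-for-letter guess at the author’s scheme, with the one ambiguity (1 reads as l or i) special-cased:

```python
# Decode the leetspeak comment above. "1" is ambiguous in leet (l or i),
# so "1337" gets special-cased before the character map runs.
line = "R3g3n3r47|ng w17h 1337 5P34k i5 7h3 n3w h0tn355."
line = line.replace("1337", "leet")

LEET = str.maketrans({"3": "e", "1": "i", "7": "t", "4": "a",
                      "0": "o", "5": "s", "|": "i"})
print(line.translate(LEET))
# -> "Regenerating with leet sPeak is the new hotness." (mid-word caps survive)
```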
All the data centers in the US combined use about 4% of the electric load, and one of the main upsides to deepseek is that it requires much less energy to train (training being the main cost).
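For scale, a hedged back-of-envelope: the GPU-hours figure is the one DeepSeek published for V3’s full training run; the per-GPU power draw and the US grid total are my rough assumptions:

```python
# Scale check: one frontier training run vs. annual US data center load.
gpu_hours = 2.788e6     # H800 GPU-hours, per the DeepSeek-V3 technical report
kw_per_gpu = 1.0        # assumed ~1 kW per GPU including cooling overhead
training_gwh = gpu_hours * kw_per_gpu / 1e6   # ~2.8 GWh for the whole run

us_grid_twh = 4000      # rough annual US electricity generation, TWh
datacenter_twh = us_grid_twh * 0.04           # the ~4% figure -> ~160 TWh

print(f"Training run: ~{training_gwh:.1f} GWh")
print(f"US data centers: ~{datacenter_twh:.0f} TWh/yr, "
      f"~{datacenter_twh * 1000 / training_gwh:,.0f}x one training run")
```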
You can always get into 3D printing.
Batman: The Animated Series was drawn this way.