I think my favourite version of this was Matt Yglesias saying that Brandon enforcing antitrust was the real cause of democratic backsliding in the US.
Liberals started saying their pronouns. Therefore, I voted for Hitler. Sounds reasonable!
Usually, if you're careful to be kind, attentive, and receptive to pushback, that's all others need, and they'll be happy to have you around...
On that note, sorry if this is too many replies! I hope you have a nice day
There's a balance that needs to be struck between trying your best to treat others well and not psyching yourself out of society. Usually, if you're concerned about what other people will feel, you're not the problem.
Sometimes (often) I put my foot in my mouth. My hope is that I've made my goodwill explicit enough that it's not taken badly, but you can never be sure. Sometimes you're just on the bad end of someone's interaction, and you can do your best, but no one's powers of social nicety are infinite
This is the kind of post that makes me feel like we really need Tumblr-style reblogs instead of Twitter-style QTs. This needs to circulate among 10,000 freaks gathering increasingly nonsensical addenda
I made the horrible mistake of leaving my power cable in my office so I can genuinely say that posting rather than writing ISN'T wasting time tonight
If you're a psychometrician and haven't read Gardner, massive modularity may seem like the natural conclusion if g-factor theory is not just false, but the long tail of non-g components is also quite meaningless. Of course, MM is also hypothesized from aggressively wrong stats, but who let that stop them
The bit where he praised evopsych and the massive modularity hypothesis threw me, but mostly I liked this a lot and learned some things about stats, so thanks. I'd never before put together that traditional arguments for the g-factor fail because PCA et al. aren't causal, but it's dead right
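Since the PCA point is easy to miss, here's a toy sketch of it (my own contrived setup, not from the linked piece): test scores built from overlapping but independent micro-skills, with no common cause anywhere, still hand PCA a dominant first component.

```python
# Toy demo: a positive manifold without a common cause. Every pair of tests
# shares some micro-skills, so all correlations come out positive, but no
# single variable causes all of them. PCA still reports a big first component.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_skills, n_tests = 5000, 12, 8

skills = rng.normal(size=(n_people, n_skills))   # independent micro-skills
loadings = np.zeros((n_skills, n_tests))
for t in range(n_tests):
    # each test draws on an overlapping handful of skills, no shared g
    loadings[rng.choice(n_skills, size=5, replace=False), t] = 1.0

scores = skills @ loadings + rng.normal(scale=0.5, size=(n_people, n_tests))

corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]         # descending eigenvalues
print("share of variance on PC1:", eigvals[0] / eigvals.sum())
```

PC1 comes out dominant because correlation structure alone can't distinguish one common cause from many overlapping small ones; that's the "not causal" part.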
(In relative, not absolute terms)
Probably, but it will also come from better defining the problem. MoE, more recently attention residuals, context management, and low-level optimizations all significantly mitigate the cost of scale, but the QA problem has yet to see any clear resolution across domains. Also related:
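To make the MoE point above concrete, here's a toy sketch of top-k routing (all sizes and names invented by me, nothing like production code): only the top-k experts run per token, so per-token compute tracks k, not the total parameter count.

```python
# Toy MoE layer: 8 experts' worth of parameters, but only 2 experts' compute
# per token, because the router selects the top-k experts for each input.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, k = 64, 8, 2

experts = [(rng.normal(size=(d_model, 4 * d_model)),
            rng.normal(size=(4 * d_model, d_model))) for _ in range(n_experts)]
router = rng.normal(size=(d_model, n_experts))

def moe_layer(x):                                 # x: (d_model,)
    logits = x @ router
    top = np.argsort(logits)[-k:]                 # indices of the k best experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()
    out = np.zeros(d_model)
    for w, i in zip(weights, top):
        w1, w2 = experts[i]
        out += w * (np.maximum(x @ w1, 0) @ w2)   # only k FFNs ever execute
    return out

y = moe_layer(rng.normal(size=d_model))
print(y.shape)                                    # (64,)
```

Total parameters scale with n_experts; FLOPs per token scale with k. That decoupling is the whole trick.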
Feels like you're selfbotting to shotgun these sorts of questions but I'll bite anyway. That's out of the scope of the subject matter at hand and not really my wheelhouse. If someone else was able to beat AWS that would likely impact Amazon's business, but not AWS' operating margins (by a whole lot)
Obviously, legislation matters, but so does organized resistance making surveillance costlier and less effective. It is a form of civil disobedience to install and use Signal. You only have nothing to fear if the government isn't going through your things trying to find out what you have to hide.
Every so often, I see controversy about surveillance, and notice few are changing anything about how they organize their lives to mitigate it. There's a row, then we're all back on WhatsApp. The likelihood of it impacting your life is pretty low, but free speech and association depend on privacy
Those Palantir integrations are probably commanding a higher margin than civilian use, for better or worse
I agree that many are willing to pay a lot. But as far as I can tell, at the service-provider level, marginal revenues are not currently in line with marginal costs. I'll freely admit that I'm using numbers as speculative as yours: Anthropic et al. are not public and don't disclose costs
To clarify, the cash on hand that tech companies have been using to finance AI is not drawn from the operating funds of their infrastructure divisions; they invested in Anthropic, OpenAI, or internally at Google, in a way that is completely sustainable for them even if the startups go broke
Looking at AWS' balance sheet may also be misleading, as AWS provides services to Anthropic, e.g., and it's Anthropic that's subsidizing tokens with its investment capital, not AWS with its operating funds. If Anthropic is out over its skis, that will look like operating profit for AWS
Even if I were completely right above, I didn't intend to suggest that running TNNs specifically (or AI models in general) is currently large enough, or ever will be large enough, to threaten AWS. AWS provides services to a plurality of the SaaS products and websites on the planet.
I'm happy to be wrong about scaling effects overshadowing other factors from a margin pov; that would suggest the AI firms may have a revenue model that works and they in fact don't need to oversell the tech, which would seem to contradict your earlier post. But maybe I misunderstood your intent
Given those numbers aren't public, I'm not sure what you're doing to estimate operating margins per token on their infrastructure. However, caching and hardware optimization exist to mitigate the effect of scaling, which does, yes, do more computational work per token as the model gets larger
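By caching I mean things like KV caching; a toy sketch (mine, with invented sizes) of why it blunts per-token cost: past keys and values are stored, so each decode step only projects the newest token instead of recomputing K and V for the whole prefix.

```python
# Toy single-head decoder step with a KV cache: the cache grows by one row
# per token, and old keys/values are reused rather than recomputed.
import numpy as np

rng = np.random.default_rng(0)
d = 32
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
k_cache, v_cache = [], []

def decode_step(x):                        # x: (d,) embedding of newest token
    k_cache.append(x @ wk)                 # only the new token is projected
    v_cache.append(x @ wv)
    q = x @ wq
    K, V = np.stack(k_cache), np.stack(v_cache)
    att = np.exp(q @ K.T / np.sqrt(d))
    att /= att.sum()
    return att @ V                         # attend over cached prefix, O(n*d)

for _ in range(5):
    out = decode_step(rng.normal(size=d))
print(out.shape)                           # (32,)
```

Per-token cost still grows with context length n, but only linearly in the attention step, not by redoing every projection each time.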
Alternate title: "Oops! All universal function approximators"
For a long time, yeah. They do kill unprofitable products, but they keep them going for much longer than it's worth it for them, because they have the cash. YouTube, e.g., lost money for over a decade. The key factor is sunk cost. They won't nuke billions they've spent on infrastructure on a dime
The desperation at this point is literally just a byproduct of the whole industry trying to escape the consequences of its cumulative, very expensive errors by manifesting them into not having been errors
Generalization showed evidence of diminishing returns quickly, but in the final instance, I think that may be the source of things. If someone says "it's generalizing!", you might feel that lurch in your gut that says "what if it's the big one?" Good money will chase after bad trying to make it so
This was a deficit in both education and the literature, and we still have it: researchers don't recognize the difference between generalizing from acquired knowledge and task-independent reasoning. We never had evidence that the latter comes from the former for free
But that kind of generalization doesn't mean the same thing as the "general" in general intelligence, as the GI sort of generality is thought to be mostly task-independent, not the product of applying knowledge from one problem domain to another. Generalization was novel, but not miraculous
The term "generalize" demands specific attention here. In context, what it meant is that training on one task would improve the model's performance on other tasks. People combined that with graphs seeming to show that models could keep improving without limit, and decided this might be the big one
An interesting piece of literature produced at the peak of the AGI hype cycle was a paper written by DeepMind researchers, defining what they mean by "AGI". It amounts to the idea that their technology will find profitable applications. Unrelated to anything non-CEOs think "general intelligence" is
Additionally, it seems as though their stock of cash on hand may have mattered less to them than the revenues associated with maintaining their market position in the future. Early on, OpenAI took off and started taking in a lot of money, so everyone reasoned that Microsoft knew something they didn't