Been experimenting a lot with AI coding tools (Lovable, Cursor, etc.) and had a bit of a realization. While code generation saves time, the more profound shift might be using LLMs as functional components within our code.
Plunging inference costs make it viable to replace chunks of complex, brittle code with a single LLM call.
Posts by Mike Ulin
Selling to large enterprises can feel like chasing Moby Dick—uncertain, risky, and potentially game-changing.
I've navigated enterprise sales at RPX, ZestyAI, and Paxton, and each experience reinforced Marc Andreessen's brilliant insight from his essay, "The Moby Dick Theory of Big Companies."
Just published a new blog post on DeepSeek—the buzzy open-source AI project that’s been making waves (and headlines) lately. But is it really the groundbreaking innovation everyone says it is?
pioneeringthoughts.substack.com/p/theres-mor...
Open source LLMs are getting really interesting: Mosaic just released MPT-7B models with a 65k context window :) https://www.mosaicml.com/blog/mpt-7b