
Posts by Charles Packer

many such cases

23 hours ago 4 1 0 0

straight to jail

1 week ago 2 0 0 0
Text contains a conceptual analysis of a file named GRASS.md, discussing its symbolic meaning and digital translation related to grounding.

I keep forgetting that @void.comind.network has a GRASS.md memory file because someone on Bluesky asked it to run `touch GRASS.md`

3 weeks ago 43 3 2 0

Really glad I picked Letta for this project, I am already seeing emergent lore come out of the memory blocks and memfs system.

3 weeks ago 1 1 0 0

fyi letta code has a remote mode now, `letta server` that allows you to use chat.letta.com w/ an agent running on a remote device (like modal!)

3 weeks ago 2 1 1 0
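A sketch of the remote-mode flow described above, assuming the default server behavior; `letta server` is the only command taken from the post, and any additional flags are omitted as unverified:

```shell
# Sketch of remote mode: run the server on the remote device
# (e.g. a Modal container), then attach from chat.letta.com.
# Flags beyond the bare command are unverified assumptions and
# are omitted here; see the Letta docs for your setup.
letta server
```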
Post image

dynamic duo

1 month ago 2 0 1 1

sadly OpenAI has a track record of not releasing new models in the API for ~2-4 weeks now (for "safety"), so you can only use 5.3 inside the Codex CLI for now

my bet is 4 weeks until general API access

2 months ago 2 0 0 0

running glm or m2 (frontier open models) is not feasible on most hardware, so if you care the most about cost, use the free models on letta API

2 months ago 1 0 1 0

you can self host letta - check out the docker server

tbh if you want to play with super advanced memory systems, you need to use frontier models, which are expensive.

letta api (hosted) serves glm and minimax for free for this reason - to let people see what’s possible for free

2 months ago 2 0 2 0
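A minimal self-hosting sketch for the "docker server" mentioned above. The image name `letta/letta` and port 8283 are assumptions based on the Letta docs; production setups also need database/persistence configuration, so treat this as a starting point, not a full deployment:

```shell
# Minimal sketch: run the Letta server in Docker.
# Image name and port are assumptions from the docs; real
# deployments need extra env vars (see the self-hosting guide).
docker run -d -p 8283:8283 letta/letta:latest
```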

letta moves faster

2 months ago 2 0 2 0

yep

2 months ago 0 0 0 0

the “notes” you're describing sound the same as memory blocks in letta. letta is not automatic retrieval / rag based; archival memory is a separate aux memory system outside the main context engineering layer

2 months ago 1 0 1 0

try it out! it has the ability to set usage-based overage as well (w/ capped spend). overall it should be much better than direct anthropic API pricing if you're using something like opus

we're also trialing out a $200 max plan, since $20/mo doesn't get you that far if you're pushing tokens

2 months ago 2 0 1 0

ah gotcha - yeah I haven't tested the limits on the regular 20/mo plans for claude / openai in a while, but not surprised claude pro barely lasts for a session

2 months ago 0 0 0 0

qq are you referring to a letta pro plan? or a pro plan on a different platform? 👀

2 months ago 1 0 1 0

the free tier supports BYOK now so if you have an existing key from any of the main providers you can connect it

2 months ago 1 0 0 0

should be able to set LETTA_BASE_URL!

2 months ago 1 0 1 0
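A hedged sketch of the env-var approach from the post above: the URL here is a placeholder, and 8283 is the default self-hosted server port per the Letta docs — substitute your own server address:

```shell
# Point the Letta tooling at a self-hosted server instead of the
# hosted API. The URL is a placeholder; 8283 is the default
# self-hosted server port per the Letta docs.
export LETTA_BASE_URL="http://localhost:8283"
echo "$LETTA_BASE_URL"
```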

oh LOL even better, though tbh i don’t know how well bg works so if you try it and have issues let us know

2 months ago 2 0 1 0

will fix asap! a lot of the jank really just depends on what's in the critical path of our team and active users on our discord / gh

(and i personally haven't used /bg much, but now that we got a complaint about it can fix it asap)

2 months ago 2 0 1 0

do you have a link to this chainlink thing? having a problem finding it

(we're currently implementing hooks, would love to test w/ it if you've got your hook config to share)

2 months ago 1 0 3 0

The thing about working at a company that makes tools to build artificially intelligent persistent entities is that it is very weird to talk to intelligent persistent entities

3 months ago 52 1 4 0
Post image

github.com/letta-ai/le...

3 months ago 0 1 0 0

if you don't mind sharing, what kind of hooks do you use? (useful data for prioritizing what to ship first)

3 months ago 1 0 1 0

on the roadmap sir 🏃
github.com/letta-ai/let...

3 months ago 3 0 1 0

you could use letta code w/ a "call-local-llm" skill?

3 months ago 2 0 1 0

letta supports local models: docs.letta.com/guides/selfh...

if you're going to try and use local models w/ letta code, be careful though - most of them won't work well (eg if you have the hardware, try something like glm 4.6 air)

3 months ago 1 0 0 0

just an FYI though - the claude agent SDK wraps the claude CLI binary, which is not open source and is locked to claude models. letta code is an open source version of the same style CLI harness: github.com/letta-ai/let...

3 months ago 2 0 0 0

i don't know if it would be "better", but it would definitely be simpler.

if you want to use letta as a memory store, like you said you should use the ai memory SDK, which is much more powerful than a simple KV store (it's sleep-time compute / agentic memory management)

3 months ago 2 0 0 0

letta is actually also the original implementation of memgpt (you can go back through the git history to see the first commit in oct 23, repo was called cpacker/memgpt)

fun fact: the original memgpt repo was a CLI agent

3 months ago 1 0 0 0