Logs are for coding agents' eyes. Connect your agents to your infra if you want them to be effective.
Surprised Claude hasn't added a debug mode like Cursor. I think it should be a first-class citizen.
Posts by Mikyo
RSVP just opened up for our workshop with @mikeldking + Hamel: maven.com/p/2c8410/au...
Learn how to:
✅ Connect Claude Code to Phoenix observability data
✅ Use CLI commands to fetch traces and debug agents (minimal sketch below)
✅ Prompt AI to analyze system behavior in real time
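A rough idea of what "fetch traces" looks like with the Phoenix Python client, assuming a default local setup (endpoint, filter, and project name are illustrative; the workshop covers the full CLI workflow):

```python
# Minimal sketch: pull recent spans out of a running Phoenix instance so a
# coding agent (or you) can inspect them. Assumes Phoenix is reachable at the
# default localhost:6006 endpoint.
import phoenix as px

client = px.Client()  # honors PHOENIX_COLLECTOR_ENDPOINT if set

# Fetch spans as a DataFrame; the filter string and project name are illustrative.
spans_df = client.get_spans_dataframe("span_kind == 'LLM'", project_name="default")
print(spans_df.head())
```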
Phoenix 13.0
Phoenix 13 is a major release centered around Dataset Evaluators, a new system that turns your datasets into reusable evaluation suites. This release also introduces custom model providers, OpenAI Responses API support, and dozens of Playground and experiment UX improvements.
Phoenix Evals now supports message-based LLM-as-a-judge prompts, an upgrade that aligns evals with how modern models actually expect instructions.
🧵👇
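To make the message-based judge idea concrete, here is a minimal sketch using the OpenAI SDK directly; Phoenix Evals' own message-based API may differ in names and shape:

```python
# Concept sketch of a message-based LLM-as-a-judge prompt: instead of one flat
# template string, the judge gets a system + user message pair, matching how
# chat models expect instructions. Model name and rubric are illustrative.
from openai import OpenAI

client = OpenAI()

def judge_relevance(question: str, answer: str) -> str:
    messages = [
        {"role": "system", "content": "You are a strict evaluator. Reply with exactly one word: relevant or irrelevant."},
        {"role": "user", "content": f"Question: {question}\nAnswer: {answer}\nIs the answer relevant to the question?"},
    ]
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content.strip().lower()

print(judge_relevance("What is Phoenix?", "Phoenix is an open-source LLM observability tool."))
```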
New Evals for TypeScript agent builders 🔥
With Mastra now integrating directly with Phoenix, you can trace your TypeScript agents with almost zero friction.
And now… you can evaluate them too: directly from TypeScript using Phoenix Evals.
This is why, going forward, all AI features I help build will be natively instrumented with #OTEL. The telemetry data is the "fossil fuel" that feeds understanding and future improvement. AI cannot be treated as a black box. It has to be inspected and understood.
Telemetry while testing and developing has been critical for me. It lets me hook into and inspect how systems like Vercel's AI SDK and LiteLLM work under the hood and figure out what prompts are being used for judgement.
Take evals. You might pick an eval and trust that it works. But this would be a mistake. It's rare that these evals will work for you across the board. Previously it would have been crazy to enable telemetry during testing. But with evals, you are going to want to inspect how your tests "operate".
Tracing and telemetry have traditionally been an operational requirement, not a development one. But I've found that with AI applications this fundamentally changes.
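What "telemetry while developing" looks like in practice, as a minimal sketch assuming a local Phoenix instance and the OpenInference OpenAI instrumentor:

```python
# Turn telemetry on during development: point an OTel tracer at Phoenix and
# auto-instrument OpenAI calls so every prompt and response is inspectable.
from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

# Register a tracer provider that exports to Phoenix (localhost:6006 by default).
tracer_provider = register(project_name="dev-sandbox")

# Instrument the OpenAI SDK; subsequent calls show up as traces in Phoenix.
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```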
@arize.bsky.social OSS Prompt Playground
@arize-phoenix.bsky.social gets DeepSeek support! Now you can compare outputs of all the top-tier reasoning models.
Which LLM provider would you like to see next? Let us know on GitHub!
github.com/Arize-ai/pho...
👨‍🍳 @arize-phoenix.bsky.social continues to cook
Announcing OpenInference instrumentation for Agno, Mastra, Bedrock Agents, and AutoGen AgentChat!
At @arize.bsky.social we believe observability deserves to be built in the open
s/o @anthonypowell.me and many others
github.com/Arize-ai/ope...
🧪 The @arize-phoenix.bsky.social TS/JS client now supports Experiments and Datasets!
You can now create datasets, run experiments, and attach evaluations to experiments using the Phoenix TS/JS client.
Shoutout to @anthonypowell.me and @mikeldking.bsky.social for the work here!
@arizeai/phoenix-client@1.3.0
@arize-phoenix.bsky.social JavaScript client gets experiments 🧪
s/o @anthonypowell.me !
- native tracing of AI tasks and evaluators
- async concurrency queues
- support for any evaluator (e.g. bring your own evals) and more!
OpenTelemetry instrumentation for Agno is published! Huge s/o to Dirk Brand.
A true testament that AI observability should be built in the open
@arize-phoenix.bsky.social
pypi.org/project/open...
Annotating an LLM call
Annotation Configs in @arize-phoenix.bsky.social
Part of the "Look at the Data" initiative: create custom rubrics and forms to annotate your spans.
s/o to @anthonypowell.me here who built out all the rich UI features.
9️⃣ @arize-phoenix.bsky.social is gonna turn 9 today.
Project Retention Policies
Customize the data retention of your projects by number of days or by trace count. No more cron jobs or manual deleting of traces needed!
A much-requested feature from our on-prem and Phoenix Cloud users alike.
Learn to prompt better
A speaker announcement card showing that Ben McHone is going to be presenting at Arize: Observe 2025 on June 25th, 2025.
I'll be speaking at Arize:Observe at SHACK15 on June 25! Looking forward to exploring what's next for AI agents & assistants. More details on my session to come. @arize.bsky.social
arize.com/observe-2025
I still own plenty of pencils but no erasers. What does that say about me?
Just dropped a tutorial on using the OpenAI Agents SDK + @arize-phoenix.bsky.social to go from building to evaluating agents.
✔️ Trace agent decisions at every step
✔️ Offline and Online Evals using LLM as a Judge
If you're building agents, measuring them is essential.
Full vid and cookbook below
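The offline-eval half, roughly, assuming a local Phoenix instance; the project name, column names, and rubric are illustrative, and the exact API may vary by phoenix.evals version:

```python
# Rough sketch: export agent spans from Phoenix, run an LLM-as-a-judge
# classifier over them, and log the results back as span evaluations.
import phoenix as px
from phoenix.evals import OpenAIModel, llm_classify
from phoenix.trace import SpanEvaluations

client = px.Client()
spans_df = client.get_spans_dataframe(project_name="openai-agents-demo")

# Column names depend on the instrumentation; rename to simple template variables.
spans_df = spans_df.rename(
    columns={"attributes.input.value": "input", "attributes.output.value": "output"}
)

template = (
    "You are evaluating an AI agent's response.\n"
    "Input: {input}\n"
    "Output: {output}\n"
    "Label the output as 'correct' or 'incorrect'."
)
results = llm_classify(
    dataframe=spans_df,
    model=OpenAIModel(model="gpt-4o-mini"),
    template=template,
    rails=["correct", "incorrect"],
)

# Attach the judge's labels to the original spans in Phoenix.
client.log_evaluations(SpanEvaluations(eval_name="correctness", dataframe=results))
```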
Text reads: Building AI? Demo your app. Arize:Observe community demos. Submit by 4.30.25. Apply.
Demo your app at this year's Observe! Fill out a short application by 4.30 to be considered for our Demo Den. Great opportunity to showcase your work to the AI community in SF.
Apply here: docs.google.com/forms/d/e/1F...
"The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise." - Edsger W. Dijkstra. Just read this and I am going to be using it a LOT.
In case you missed it, Arize AI Phoenix crossed the 5k GitHub star mark last week! ⭐
Phoenix has changed a TON since its first iteration.
I'm constantly in awe of the execution speed and quality of this team. Here's to the next 5k and beyond!
Love the community we're building!
For all my NYC friends! 🗽
We're hosting an in-person office hours tomorrow all around LLM and Agent Evals.
Join for the free snacks/drinks, stay for the heated discussions about the validity of Pokemon-based model evaluations ⚡️
How much more data does an LLM app really need?
In my latest tutorial, I explore how few-shot prompting boosts accuracy without massive datasets or retraining, using @arize-phoenix.bsky.social prompts and experiments to break it down.
This kicks off my prompting series... more to come!
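The core trick in miniature (examples, labels, and model name are made up for illustration):

```python
# Few-shot prompting: a handful of labeled examples in the prompt can stand in
# for a large fine-tuning dataset.
from openai import OpenAI

client = OpenAI()

few_shot_examples = [
    ("The checkout page keeps timing out", "bug"),
    ("Please add dark mode", "feature_request"),
    ("How do I export my data?", "question"),
]

def classify_ticket(ticket: str) -> str:
    messages = [{"role": "system", "content": "Classify the support ticket as bug, feature_request, or question."}]
    for text, label in few_shot_examples:
        # Each example is shown as a prior user/assistant exchange.
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": ticket})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content.strip()

print(classify_ticket("The app crashes when I upload a CSV"))
```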
🤖 OpenAI's agent framework openai-agents provides a rich set of composable primitives that enable you to build agents.
We've released openinference-instrumentation-openai-agents, an OpenTelemetry instrumentor that is compatible with any OTel backend like @arize-phoenix.bsky.social. Fully OSS and free to use!
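Wiring it up is roughly two lines; a sketch assuming a default local Phoenix endpoint (module and class names per my reading of the package, so treat them as assumptions):

```python
# Send openai-agents traces to Phoenix via the OpenInference instrumentor.
from phoenix.otel import register
from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor

tracer_provider = register(project_name="agents-demo")
OpenAIAgentsInstrumentor().instrument(tracer_provider=tracer_provider)
# From here, running an openai-agents Agent produces traces in Phoenix.
```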
How can you programmatically improve your prompts? 🤔
Forget manual prompt engineering - there are better (read: "more automatic") ways to improve your prompts.
This video and notebook break down these techniques.
Featuring:
- DSPy
- @arize-phoenix.bsky.social
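A tiny taste of the DSPy flow, under the assumption of a string-signature task and a couple of made-up training examples:

```python
# Programmatic prompt improvement with DSPy: define the task as a signature,
# give an optimizer a metric plus a few examples, and let it build the prompt.
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

classify = dspy.Predict("ticket -> category")

trainset = [
    dspy.Example(ticket="The checkout page keeps timing out", category="bug").with_inputs("ticket"),
    dspy.Example(ticket="Please add dark mode", category="feature_request").with_inputs("ticket"),
]

def exact_match(example, prediction, trace=None):
    return example.category == prediction.category

optimizer = dspy.BootstrapFewShot(metric=exact_match)
optimized_classify = optimizer.compile(classify, trainset=trainset)

print(optimized_classify(ticket="How do I export my data?").category)
```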
Learn how we built a holistic prompt management system that preserves developer freedom.
With Phoenix 8.0, we built a prompt management system that prioritizes: LLM reproducibility, prompt versioning & tracking, & developer flexibility, with no vendor lock-in
arize.com/blog/prompt-...
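The runtime side, sketched: pull the versioned prompt back out of Phoenix and hand it to your model client. Names follow my reading of the phoenix-client docs and should be treated as assumptions:

```python
# Fetch a versioned prompt from Phoenix so the exact prompt used is tracked
# and reproducible. The prompt identifier and variables are illustrative.
from openai import OpenAI
from phoenix.client import Client

phoenix = Client()
openai_client = OpenAI()

prompt = phoenix.prompts.get(prompt_identifier="support-reply")

# format() fills template variables and returns OpenAI-compatible kwargs.
kwargs = prompt.format(variables={"customer_message": "My export is failing"})
response = openai_client.chat.completions.create(**kwargs)
print(response.choices[0].message.content)
```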
AI is all about vibes lately