
Posts by Gagan Bansal

We recently released new work, Society of Agents and Economics. Check out the blog below.

5 months ago

Version 0.4.0.dev13 is here!

The release removes previously deprecated features, so ensure your code runs without warnings on dev12 before upgrading.

An initial migration guide is available: microsoft.github.io/autogen/0.4....

We're nearing the full 0.4.0 release!

1 year ago

AutoGen is now on BlueSky!

1 year ago

We are following Russell and Norvig’s definition, as mentioned in the introduction.

1 year ago

Joint work with my wonderful colleagues:

@jennwv.bsky.social
Dan Weld
Saleema Amershi
@erichorvitz.bsky.social
@adamfourney.bsky.social
Hussein Mozannar
Victor Dibia

#AIAgents #LLMs #TechNews

5/

1 year ago


We're calling on researchers and practitioners to prioritize these issues and enhance transparency, control, and trust in AI agents! 📄 Read full details at microsoft.com/en-us/resear...

4/

1 year ago

Why does this matter?

Without proper grounding, we risk safety failures, loss of user control, and ineffective collaboration. Trust and transparency in AI systems hinge on addressing these challenges. We illustrate each challenge with examples.

3/

1 year ago

Some challenges focus on how agents can convey necessary information to help users form accurate mental models (A1-5). Others address enabling users to express their goals, preferences, and constraints to guide agent behavior (U1-3). We also focus on many overarching issues (X1-4).

2/

1 year ago

☀️New paper!

Generative AI agents are powerful but complex—how do we design them for transparency and human control? 🤖✨

At the heart of this challenge is establishing common ground, a concept from human communication. We identify 12 key challenges in improving common ground between humans & agents.

1 year ago