
Posts by Pulkit Verma


If you are at #ICAPS2025 and interested in human-aware explainable planning, consider attending the HAXP workshop in Room 6.

We have amazing invited talks by Mor Vered and Antonio Rago, and a panel with Mor Vered, Sarath Sreedharan, and David Smith.

5 months ago

#IJCAI2025 Workshop on User-Aligned Assessment of Adaptive AI Systems (AIA2025) is now in progress. Please note that the room has been changed to 520C.

8 months ago

The program for the #IJCAI2025 Workshop on User-Aligned Assessment of Adaptive AI Systems is now available. We have a fantastic lineup of invited speakers and talks.
Link: bit.ly/ijcai25-aia

@ruqi-zhang.bsky.social
@sidsrivast.bsky.social

8 months ago

Submission Deadline: August 10, 2025

Co-organizers: Rebecca Eifler, Benjamin Krarup, Alan Lindsay, Lindsay Sanneman, Silvia Tulli, @philosophicus.bsky.social

8 months ago
HAXP - ICAPS 2025

The deadline for the #ICAPS2025 Workshop on Human-Aware and Explainable Planning (HAXP) is 10 days away. If you work on human-AI interaction and explainability in the context of planning, scheduling, RL, etc., please check it out.

πŸ—“οΈ Nov 10 or 11, 2025
πŸ“ Melbourne, Australia
πŸ”— bit.ly/haxp25

8 months ago

The deadline for the #IJCAI2025 Workshop on User-Aligned Assessment of Adaptive AI Systems is just 5 days away. If you are working on any aspect of assessment, regulation, compliance, etc., of AI systems, please check it out.

More details here: bit.ly/ijcai25-aia
Deadline: May 16, 2025 AoE

11 months ago

If you are attending #HRI2025, check out our talk on Interpretability Analysis of Symbolic Representations for Sequential Decision-Making Systems at the X-HRI Workshop. Joint work with Julie Shah.

⏱️ March 03, 09:15 AM AEDT.
πŸ“ Room 106, HRI2025.
πŸ”— sites.google.com/view/x-hri/s...

1 year ago

#AAAI2025 is almost here!

I'll co-organize a tutorial with @sidsrivast.bsky.social on User-Driven Capability Assessment of Taskable AI Systems. The schedule is now live, so mark your calendars!

πŸ“… 26 February 2025
πŸ“ Room 115A, Pennsylvania Convention Center
πŸ”— bit.ly/aia25-tutorial

1 year ago

Do consider attending if you are interested in topics like the assessment of black-box AI systems in stationary and adaptive settings, and want to learn more about recent advances in AI assessment.

(3/3)

1 year ago

In this tutorial, we'll cover the foundations of AI assessment, explore how to assess AI systems using state-of-the-art techniques, and discuss open problems and promising future directions.

(2/3)

1 year ago

Excited to organize a half-day tutorial at #AAAI2025 on User-Driven Capability Assessment of Taskable AI Systems with @sidsrivast.bsky.social.

πŸ“… 26 February 2025
⏱️ 8:30 AM - 12:30 PM EST
πŸ“ AAAI 2025, Philadelphia, USA
πŸ”— bit.ly/aia25-tutorial

(1/3)

1 year ago

Welcome to all those on Bluesky interested in AI Planning! πŸ¦‹

go.bsky.app/RiL1Dz3

1 year ago

Dwight Schrute loves Bluesky... @theoffice.bsky.social

#TheOffice

1 year ago

Rushang Karia, Jayesh Nagpal, Daksh Dobhal, Pulkit Verma, Rashmeet Kaur Nayyar, Naman Shah, Siddharth Srivastava
Using Explainable AI and Hierarchical Planning for Outreach with Robots
https://arxiv.org/abs/2404.00808

1 year ago

Rushang Karia, Daksh Dobhal, Daniel Bramblett, Pulkit Verma, Siddharth Srivastava
Can LLMs Converse Formally? Automatically Assessing LLMs in Translating and Interpreting Formal Specifications
https://arxiv.org/abs/2403.18327

2 years ago

Rushang Karia, Daniel Bramblett, Daksh Dobhal, Pulkit Verma, Siddharth Srivastava
βˆ€utoβˆƒval: Autonomous Assessment of LLMs in Formal Synthesis and Interpretation Tasks
https://arxiv.org/abs/2403.18327

1 year ago

Naman Shah, Jayesh Nagpal, Pulkit Verma, Siddharth Srivastava
From Reals to Logic and Back: Inventing Symbolic Vocabularies, Actions and Models for Planning from Raw Data
https://arxiv.org/abs/2402.11871

2 years ago