Agents, impact confidence, and prior beliefs

Agents might use data, but they need prior beliefs too, just like humans do

To mimic human decision making, agents need to estimate not only the probability that a particular intervention will move user behavior in the right direction, but also how much confidence they should place in that probability estimate. The first estimate is easy - or, at least, straightforward. The second is a lot harder, but it sits at the core of how agents balance exploration of new possibilities against exploitation of lessons already learned.
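
As a concrete sketch of keeping those two estimates separate (my illustration, not necessarily how our agents are built), a Beta-Bernoulli model gives the probability as the posterior mean and the confidence as the posterior's concentration, and Thompson sampling uses both to trade exploration off against exploitation:

```python
import random

class InterventionBelief:
    """Beta-Bernoulli belief about one intervention's success rate."""

    def __init__(self, prior_alpha: float = 1.0, prior_beta: float = 1.0):
        self.alpha = prior_alpha  # pseudo-count of successes
        self.beta = prior_beta    # pseudo-count of failures

    @property
    def p_success(self) -> float:
        # First estimate: probability the intervention moves behavior
        # in the right direction (the posterior mean).
        return self.alpha / (self.alpha + self.beta)

    @property
    def concentration(self) -> float:
        # Second estimate: how much confidence backs that probability.
        # More pseudo-counts mean a tighter posterior.
        return self.alpha + self.beta

    def update(self, succeeded: bool) -> None:
        if succeeded:
            self.alpha += 1.0
        else:
            self.beta += 1.0

def choose(beliefs: dict[str, InterventionBelief]) -> str:
    # Thompson sampling: draw one plausible success rate from each
    # posterior and act on the best draw. Low-confidence posteriors
    # are wide, so their draws vary and they keep getting explored;
    # high-confidence posteriors get exploited.
    draws = {name: random.betavariate(b.alpha, b.beta)
             for name, b in beliefs.items()}
    return max(draws, key=draws.get)

# Hypothetical interventions: one untested, one with some history.
beliefs = {"nudge_a": InterventionBelief(),
           "nudge_b": InterventionBelief(8.0, 2.0)}
print(choose(beliefs))
```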

There is no purely empirical way to estimate confidence. Frequentists estimate it implicitly, whereas Bayesians estimate it explicitly, but in both cases the estimate has to come from the very squishy realm of prior beliefs.
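
To make that contrast concrete, here is a toy comparison (my illustration, assuming scipy is available): a frequentist interval for a success rate, whose assumptions stay implicit in the method, next to a Bayesian interval whose Beta(1, 1) prior is stated outright.

```python
import math
from scipy.stats import beta

successes, trials = 7, 10
p_hat = successes / trials

# Frequentist: 95% Wald interval. The "confidence" rests on implicit
# assumptions (normal approximation, repeated-sampling interpretation).
se = math.sqrt(p_hat * (1 - p_hat) / trials)
wald = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian: 95% credible interval under an explicit Beta(1, 1) prior.
# Change the prior and the interval changes with it; the belief is on
# the table instead of buried in the method.
posterior = beta(1 + successes, 1 + trials - successes)
credible = (posterior.ppf(0.025), posterior.ppf(0.975))

print(f"Wald:     ({wald[0]:.3f}, {wald[1]:.3f})")
print(f"Credible: ({credible[0]:.3f}, {credible[1]:.3f})")
```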

When I was designing how our agents would assess confidence, I had to think through the properties of statistical distributions and how those map to different expectations about how confident we can be in a lesson learned from just a single intervention. No one wants to stake too much on a single interaction - in any context - but if we stake too little, we discount a lot of valuable information so heavily that we ultimately end up ignoring it.
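
Here is a small sketch of that trade-off (illustrative numbers, not the priors our agents actually use): under a Beta prior, the prior's total pseudo-count decides how far a single successful interaction moves the estimate.

```python
def posterior_mean_after_one_success(prior_alpha: float,
                                     prior_beta: float) -> float:
    # Beta-Bernoulli update: one observed success adds 1 to alpha.
    return (prior_alpha + 1) / (prior_alpha + prior_beta + 1)

for label, a, b in [("weak prior Beta(1, 1)    ", 1.0, 1.0),
                    ("moderate prior Beta(5, 5)", 5.0, 5.0),
                    ("strong prior Beta(50, 50)", 50.0, 50.0)]:
    before = a / (a + b)
    after = posterior_mean_after_one_success(a, b)
    print(f"{label}: {before:.3f} -> {after:.3f}")

# weak:     0.500 -> 0.667  (one interaction moves us a lot)
# moderate: 0.500 -> 0.545
# strong:   0.500 -> 0.505  (one interaction is nearly ignored)
```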

In the attached video, I walk through some of the ways we've addressed confidence estimation with our agents.
