
The AIhub coffee corner captures the musings of AI experts over a short conversation. This month we tackle the topic of agentic AI. Joining the conversation this time are: Sanmay Das (Virginia Tech), Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), Sarit Kraus (Bar-Ilan University), and Michael Littman (Brown University).
Sabine Hauert: Today's topic is agentic AI. What is it? Why is it taking off? Sanmay, perhaps you could kick off with what you saw at AAMAS [the Autonomous Agents and Multiagent Systems conference]?
Sanmay Das: It was very interesting because clearly there has suddenly been an enormous interest in what an agent is and in the development of agentic AI. People in the AAMAS community have been thinking about what an agent is for at least three decades. Well, longer actually, but the community itself dates back about three decades in the form of these conferences. One of the very interesting questions was about why everybody is reinventing the wheel and rewriting these papers about what it means to be an agent, and how we should think about these agents. The way in which AI has progressed, in the sense that large language models (LLMs) are now the dominant paradigm, is almost entirely different from the way in which people have thought about agents in the AAMAS community. Obviously, there's been a lot of machine learning and reinforcement learning work, but there's this historical tradition of thinking about reasoning and logic where you can actually have explicit world models. Even when you're doing game theory, or MDPs, or their variants, you have an explicit world model that allows you to specify the notion of how to encode agency. Whereas I think that's part of the disconnect now – everything is a little bit black boxy and statistical. How do you then think about what it means to be an agent? I think in terms of the underlying notion of what it means to be an agent, there's a lot that can be learnt from what's been done in the agents community and in philosophy.
I also think that there are some interesting ties to thinking about emergent behaviors and multi-agent simulation. But it's a little bit of a Wild West out there, and there are all of these papers saying we need to first define what an agent is, which is definitely reinventing the wheel. So, at AAMAS, there was a lot of discussion of things like that, but also questions about what this means in this particular era, because now we suddenly have these really powerful creatures that I think nobody in the AAMAS community saw coming. Fundamentally, we need to adapt what we've been doing in the community to take into account that these are different from how we thought intelligent agents would emerge into this more general space where they can play. We need to figure out how we adapt the kinds of things that we've learned about negotiation, agent interaction, and agent intention to this world. Rada Mihalcea gave a really interesting keynote talk thinking about the natural language processing (NLP) side of things and the questions there.
Sabine: Do you feel like it was a new community joining the AAMAS community, or the AAMAS community that was changing?
Sanmay: Well, there were people who were coming to AAMAS and seeing that the community has been working on this for a long time. So learning something from that was definitely the vibe that I got. But my guess is, if you go to ICML or NeurIPS, that's very much not the vibe.
Sarit Kraus: I think they're wasting their time. I mean, forget the "what's an agent?" question, but there have been many works from the agents community over many years about coordination, collaboration, etc. I heard about one recent paper where they reinvented Contract Nets. Contract Nets were introduced in 1980, and now there's a paper about it. OK, it's LLMs that are transferring tasks to one another and signing contracts, but if they just read the old papers, it would save their time and then they could move on to more interesting research questions. Currently, they say with LLM agents that you need to divide the task into sub-agents. My PhD was about building a Diplomacy player, and in my design of the player there were agents that each played a different part of a Diplomacy game – one was a strategic agent, one was a Foreign Minister, etc. And now they're talking about it again.
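For readers who haven't come across Contract Nets: the protocol has a manager announce a task, contractors bid on it, and the manager award it to the best bidder. A minimal sketch is below; the cost model and names are illustrative, not from the original 1980 formulation.

```python
from dataclasses import dataclass

@dataclass
class Contractor:
    name: str
    load: int  # number of tasks already awarded

    def bid(self, task: str) -> float:
        # Illustrative cost model: busier contractors quote higher costs.
        return 1.0 + self.load

def contract_net_round(task: str, contractors: list[Contractor]) -> Contractor:
    """One Contract Net round: announce the task, collect bids, award."""
    bids = {c.name: c.bid(task) for c in contractors}      # announcement + bidding
    winner = min(contractors, key=lambda c: bids[c.name])  # award to lowest bid
    winner.load += 1
    return winner

crew = [Contractor("A", load=2), Contractor("B", load=0)]
print(contract_net_round("deliver part", crew).name)  # -> B
```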
Michael Littman: I totally agree with Sanmay and Sarit. The way I think about it is this: this notion of "let's build agents now that we have LLMs" to me feels a little bit like we have a new programming language like Rust++, or whatever, and we can use it to write programs that we were struggling with before. It's true that new programming languages can make some things easier, which is great, and LLMs give us a new, powerful way to create AI systems, and that's also great. But it's not clear that they solve the challenges that the agents community has been grappling with for so long. So, here's a concrete example from an article that I read yesterday. Claudius is a version of Claude and it was agentified to run a small online shop. They gave it the ability to talk with people, post Slack messages, order products, set prices on things, and people were actually doing monetary exchanges with the system. At the end of the day, it was terrible. Somebody talked it into buying tungsten cubes and selling them in the store. It was just nonsense. The Anthropic people viewed the experiment as a win. They said "ohh yeah, there were definitely problems, but they're totally fixable". And the fixes, to me, sounded like all they'd have to do is solve the problems that the agents community has been trying to solve for the last couple of decades. That's all, and then we've got it perfect. And it's not clear to me at all that just making LLMs generically better, or smarter, or better reasoners suddenly makes all these kinds of agents questions trivial, because I don't think they are. I think they're hard for a reason and I think you have to grapple with the hard questions to actually solve these problems. But it's true that LLMs give us a new ability to create a system that can have a conversation. But then the system's decision-making is just really, really bad. And so I thought that was super interesting. But we agents researchers still have jobs, that's the good news from all this.
Sabine: My bread and butter is to design agents, in our case robots, that work together to arrive at desired emergent properties and collective behaviors. From this swarm perspective, I feel that over the past 20 years we've learned a lot of the mechanisms by which you reach consensus, the mechanisms by which you automatically design agent behaviours using machine learning to enable groups to achieve a desired collective task. We know how to make agent behaviours understandable, all that good stuff you want in an engineered system. But up until now, we've been profoundly lacking the individual agents' ability to interact with the world in a way that gives you richness. So in my mind, there's a really nice interface where the agents are more capable, so they can now do these local interactions that make them useful. But we have this whole overarching way to systematically engineer collectives that I think might make the best of both worlds. I don't know at what point that interface happens. I guess it comes partly from each community going a little bit towards the other side. So from the swarm side, we're trying visual language models (VLMs), we're trying to have our robots understand their local world using LLMs to communicate with humans and with one another and get a collective awareness, at a very local level, of what's happening. And then we use our swarm paradigms to be able to engineer what they do as a collective using our past research expertise. I imagine those who are just entering this discipline need to start from the LLMs and go up. I think it's part of the process.
Tom Dietterich: I think a lot of it just doesn't have anything to do with agents at all, you're writing computer programs. People found that if you try to use a single LLM to do the whole thing, the context gets all messed up and the LLM starts having trouble interpreting it. In fact, these LLMs have a relatively small short-term memory that they can effectively use before they start getting interference among the different things in the buffer. So the engineers break the system into multiple LLM calls and chain them together, and it's not an agent, it's just a computer program. I don't know how many of you have seen this system called DSPy (written by Omar Khattab)? It takes an explicitly software-engineering perspective on things. Basically, you write a type signature for each LLM module that says "here's what it's going to take as input, here's what it's going to produce as output", you build your system, and then DSPy automatically tunes all the prompts as a kind of compiler pass to get the system to do the right thing. I want to question whether building systems with LLMs as a software engineering exercise will branch off from the building of multi-agent systems. Because almost all of the "agentic systems" are not agents in the sense that we would call them that. They don't have autonomy any more than a regular computer program does.
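For those who haven't seen DSPy, here is a minimal sketch of the type-signature style Tom describes. The field names and model string are illustrative, and the exact API varies across DSPy versions:

```python
import dspy

# Configure a backing model (model name here is illustrative).
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# A typed signature: declares what the LLM module takes and produces.
class Summarize(dspy.Signature):
    """Summarize a document in one sentence."""
    document = dspy.InputField()
    summary = dspy.OutputField()

# A module built against the signature; DSPy's optimizers can later
# tune the underlying prompts, compiler-style, without code changes.
summarize = dspy.Predict(Summarize)
result = summarize(document="DSPy separates program structure from prompts ...")
print(result.summary)
```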
Sabine: I wonder about the anthropomorphization of this, because now that you have different agents, they're all doing a task or a job, and suddenly you get articles talking about how you can replace a whole team with a set of agents. So we're no longer replacing individual jobs, we're now replacing teams, and I wonder if this terminology also doesn't help.
Sanmay: To be clear, this idea has existed at least since the early 90s, when there were these "softbots" that were basically running Unix commands and figuring out what to do themselves. It's really no different. What people mean when they're talking about agents is giving a piece of code the opportunity to run its own stuff and to be able to do that in service of some kind of a goal.
I think about this in terms of economic agents, because that's what I grew up (AKA, did my PhD) thinking about. And, do I want an agent? If I had enough money to have a stock portfolio, I might think about writing an agent that manages that (currently non-existent) portfolio, and that's a reasonable notion of having autonomy, right? It has some goal, which I set, and then it goes about making decisions. If you think about the sensor-actuator framework, its actuator is that it can make trades and it can take money from my bank account in order to do so. So I think that there's something in getting back to the basic question of "how does this agent act on the world?" and then what are the percepts that it's receiving?
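As a toy illustration of that sensor-actuator framing (everything here is hypothetical): the percept is an account snapshot, the action is an order, and the goal is set by the owner.

```python
# Hypothetical sketch: an agent whose percepts are account snapshots,
# whose actions are orders, and whose goal (a target cash ratio) is
# set by the owner. The names and the decision rule are illustrative.

class PortfolioAgent:
    def __init__(self, target_cash_ratio: float):
        self.target_cash_ratio = target_cash_ratio  # the owner-set goal

    def act(self, percept: dict) -> str:
        """Map one percept (account state) to one action (an order)."""
        cash, stock = percept["cash"], percept["stock_value"]
        cash_ratio = cash / (cash + stock)
        if cash_ratio < self.target_cash_ratio:
            return "SELL"  # raise cash toward the target
        if cash_ratio > self.target_cash_ratio:
            return "BUY"   # deploy surplus cash
        return "HOLD"

agent = PortfolioAgent(target_cash_ratio=0.2)
print(agent.act({"cash": 100.0, "stock_value": 900.0}))  # -> SELL
```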
I completely agree with what you were saying earlier about this question of whether LLMs enable interactions to happen in different ways. If you look at pre-LLM days, with these agents that were doing pricing, there's this hilarious story of how some old biology textbook ended up costing $17 million on Amazon because there were these two bots doing the pricing of those books at two different used book stores. One of them was a slightly higher-rated store than the other, so it would take whatever price the lower-rated store had and push it up by 10%. Then the lower-rated store was an undercutter and it would take the current highest price and go to 99% of that price. But this just led to a spiral where suddenly that book cost $17 million. This is exactly the kind of thing that's going to happen in this world. But the thing that I'm actually somewhat worried about, and anthropomorphising, is how these agents are going to decide on their goals. There's an opportunity for really bad mistakes to come out of programming that wouldn't be as harmful in a more constrained scenario.
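The spiral Sanmay describes is easy to reproduce. With the two rules he quotes – mark up the rival's price by 10%, then undercut the highest price to 99% of it – each round multiplies prices by 1.10 × 0.99 ≈ 1.089, so they grow without bound. A toy simulation (starting prices made up):

```python
# Toy simulation of the used-book pricing spiral: one bot prices 10%
# above its rival, the other undercuts to 99% of the highest price.
# Since 1.10 * 0.99 = 1.089 > 1, prices compound ~8.9% per round.

high, low = 40.0, 35.0  # illustrative starting prices
rounds = 0
while high < 17_000_000:
    high = 1.10 * low   # higher-rated store marks up the rival's price
    low = 0.99 * high   # lower-rated store undercuts the highest price
    rounds += 1
print(f"${high:,.2f} after {rounds} rounds")  # ~150 rounds to reach $17M
```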
Tom: In the reinforcement learning literature, of course, there's all this discussion about reward hacking and so forth, but now we imagine two agents interacting with each other and effectively hacking each other's rewards, so the whole dynamics blows up – people are just not prepared.
Sabine: On the breakdown of the problem that Tom talked about, I think there's perhaps a real benefit to having these agents that are narrower and that, as a result, are perhaps more verifiable at the individual level; they maybe have clearer goals, and they might be greener because we might be able to constrain the area they operate in. And then in the robotics world, we've been looking at collaborative awareness, where narrow, task-specific agents are aware of other agents and together have some awareness of what they're meant to be doing overall. And it's quite anti-AGI in the sense that you have lots of narrow agents again. So part of me is wondering, are we going back to heterogeneous, task-specific agents, and the AGI is collective, perhaps? And so this new wave, maybe it's anti-AGI – that would be interesting!
Tom: Well, it's almost the only way we can hope to prove the correctness of the system, to have each part narrow enough that we can actually reason about it. That's an interesting paradox that I found missing from Stuart Russell's "What if we succeed?" chapter in his book: if we do succeed in building a broad-spectrum agent, how are we going to test it?
It does seem like it would be great to have some people from the agents community speak at the machine learning conferences and try to do some diplomatic outreach. Or maybe run some workshops at those conferences.
Sarit: I was always interested in human-agent interaction, and given that LLMs have solved the language issue for me, I'm very excited. But the other problem that has been mentioned is still here – you need to integrate strategies and decision-making. So my model is that you have LLM agents whose tools are all kinds of algorithms that we developed and implemented, and there should be several of them. But the fact that somebody solved our natural language interaction, I think this is really, really great, and good for the agents community as well as for the computer science community in general.
Sabine: And good for the humans. It's a good point, the humans are agents as well in these systems.
AIhub
is a non-profit dedicated to connecting the AI community to the public by providing free, high-quality information in AI.

