Most research on probabilistic commitments focuses on commitments to achieve conditions for other agents. Our work reveals that probabilistic commitments to instead maintain conditions for others are surprisingly different from their achievement counterparts, despite strong semantic similarities. We focus on the question of how the commitment recipient should model the provider's effect on the recipient's local environment, given only the imperfect information provided in the commitment specification. Our theoretical analyses show that we can more tightly bound the inefficiency of this imperfect modeling for achievement commitments than for maintenance commitments. We empirically demonstrate that probabilistic maintenance commitments are qualitatively more challenging for the recipient to model, and that addressing these challenges can require the provider to adhere to a more detailed profile and sacrifice flexibility.
Most research on probabilistic commitments focuses on commitments to achieve enabling preconditions for other agents. Our work reveals that probabilistic commitments to instead maintain preconditions for others are surprisingly harder to use well than their achievement counterparts, despite strong semantic similarities. We isolate the key difference as being not in how the commitment provider is constrained, but rather in how the commitment recipient can locally use the commitment specification to approximately model the provider's effects on the preconditions of interest. Our theoretical analyses show that we can more tightly bound the potential suboptimality due to approximate modeling for achievement than for maintenance commitments. We empirically evaluate alternative approximate modeling strategies, confirming that probabilistic maintenance commitments are qualitatively more challenging for the recipient to model well, and indicating the need for more detailed specifications ...
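The asymmetry the abstract describes can be illustrated with a minimal sketch. Everything below is an invented illustration, not the paper's actual formulation: commitments are reduced to a pair (p, T), and the maintenance model's uniform break-time is just one of many interpretations consistent with such a specification, which is precisely the recipient's modeling problem.

```python
import random

def achievement_model(p, T, horizon):
    """Recipient's approximate model of an achievement commitment (p, T):
    the precondition starts false and, with probability p, becomes true
    at time T -- a single, well-localized event."""
    traj = [False] * horizon
    if random.random() < p:
        for t in range(T, horizon):
            traj[t] = True
    return traj

def maintenance_model(p, T, horizon):
    """One possible recipient model of a maintenance commitment (p, T):
    the precondition starts true and, with probability 1 - p, is lost at
    some time the specification does not pin down -- here drawn uniformly
    from 1..T, only one of many consistent interpretations."""
    traj = [True] * horizon
    if random.random() < 1 - p:
        break_t = random.randint(1, T)
        for t in range(break_t, horizon):
            traj[t] = False
    return traj

random.seed(0)
ach = [achievement_model(0.8, 3, 6) for _ in range(1000)]
mnt = [maintenance_model(0.8, 3, 6) for _ in range(1000)]
# Both models make the precondition hold at T with probability ~p, but
# the achievement model's uncertainty is confined to *whether* the event
# happens, while the maintenance model's also spreads over *when* the
# precondition may be lost -- the harder thing to model.
print(sum(tr[3] for tr in ach) / 1000)  # a fraction close to 0.8
print(sum(tr[3] for tr in mnt) / 1000)  # also close to 0.8, but the
                                        # trajectories before T differ
```

Under both toy models the guarantee at time T is identical, yet a recipient planning around the maintenance commitment must hedge against every possible break time, which is the qualitative difficulty the abstract points to.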
Reducing the burden of interacting with complex systems has been a long-standing goal of user interface design. In our approach to this problem, we have been developing user interfaces that allow users to interact with complex systems in a natural way and in high-level, task-related terms. These capabilities help users concentrate on making important decisions without the distractions of manipulating systems and user interfaces. To attain such a goal, our approach uses a unique combination of multi-modal interaction and interaction planning. In this paper, we motivate the basis for our approach, describe the user interface technologies we have developed, and briefly discuss the relevant research and development issues.
Cited by: Jeffrey Cox and Edmund Durfee. Efficient and distributable methods for solving the multiagent plan coordination problem. Multiagent and Grid Systems, 5(4):373-408, December 2009.
A distributed problem solving network is composed of semi-autonomous problem-solving nodes that can communicate with each other. Nodes work together to solve a single problem by individually solving interacting subproblems and integrating their subproblem solutions into an overall solution. Because each node may have a limited local view of the overall problem, nodes must share subproblem solutions; cooperation thus requires intelligent local control decisions so that each node performs tasks which generate useful subproblem solutions. The use of a global “controller” to make these decisions for the nodes is not an option because it would be a severe communication and computational bottleneck and would make the network susceptible to complete collapse if it fails. Because nodes must make these decisions based only on their local information, well-coordinated or coherent cooperation is difficult to achieve [Davis and Smith, 1983; Lesser and Corkill, 1981].
Research into algorithms for coordinating computational agents that cooperatively solve problems can shine light on potential strategies for coordinating human computation. Here, we briefly summarize key concepts manifested in distributed intelligent agent algorithms, and highlight some opportunities for translating pertinent concepts to benefit human computation.
Having described how the planner works in the previous chapter, we now examine how the planner affects problem solving. The first part of this chapter explores the activities of the planner and problem solver in a variety of experiments to better understand what the planner does. To fully understand the effects of the planner, these experiments not only examine how the planner improves control decisions, but also what the costs of those improvements are. It is important to remember that the planner's job is to reduce the time needed to solve problems by improving control decisions, but if the planner needs a lot of time to make these decisions, then the net result may be that total time increases: the time saved in problem solving is used up in planning! In many of these experiments, therefore, the discussion covers not only how the planner affects local decisions but also whether the costs of planning are acceptable.
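The time accounting at stake can be made concrete with a toy calculation (the function name and all numbers are invented for illustration, not figures from the experiments):

```python
def planner_pays_off(base_time, planned_time, planning_time):
    """The planner is a net win only when the improved problem-solving
    time plus the planner's own overhead is below the unplanned baseline."""
    return planned_time + planning_time < base_time

# Invented numbers: better control decisions cut solving from 100 to 70
# time units, so planning is worthwhile only while it costs under 30.
print(planner_pays_off(100.0, 70.0, 20.0))  # True: 70 + 20 = 90 < 100
print(planner_pays_off(100.0, 70.0, 45.0))  # False: 70 + 45 = 115 > 100
```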
Downsizing the number of operators controlling complex systems can increase the decision-making demands on remaining operators, particularly in crisis situations. An answer to this problem is to offload decision-making tasks from people to computational processes, and to use these processes to focus and expedite human decision making. In this paper, we describe a system composed of multiple computational agents that has demonstrated an ability to help operators prioritize their tasks better, process their tasks faster, and enlist the aid of other operators more transparently. In developing this system, we have of course encountered challenges, particularly in devising content languages that adequately convey the right information (to be interpreted correctly) across the heterogeneous agents. Here we summarize our work addressing this challenge, and illustrate how our system improves performance for operators in naval situations.
The Autonomous Agents and MultiAgent Systems (AAMAS) conference series brings together researchers from around the world to share the latest advances in the field. It is the premier forum for research in the theory and practice of autonomous agents and multi-agent systems. AAMAS 2002, the first of the series, was held in Bologna, followed by Melbourne (2003), New York (2004), Utrecht (2005), Hakodate (2006), Honolulu (2007), Estoril (2008), Budapest (2009), Toronto (2010), Taipei (2011), Valencia (2012), Saint Paul (2013), Paris (2014), Istanbul (2015), and Singapore (2016). This volume constitutes the proceedings of AAMAS 2017, the sixteenth conference in the series, held in São Paulo in May 2017. AAMAS 2017 invited submissions for a general track and five special tracks: Innovative Applications; Robotics; Embodied Virtual Agents and Human-Agent Interaction; Blue Sky Ideas; and a track for presenting papers from JAAMAS (the Journal of Autonomous Agents and Multiagent Systems) that had not previously been presented at a major conference. The special tracks were chaired by leading researchers in their corresponding fields: Paul Scerri and Pradeep Varakantham chaired the Innovative Applications track, Chris Amato and Alessandro Farinelli the Robotics track, Catherine Pelachaud the Embodied Virtual Agents and Human-Agent Interaction track, and Vincent Conitzer the Blue Sky Ideas track. One of us (Kate Larson) solicited papers for the JAAMAS Presentation Track from among the papers that had appeared in JAAMAS during the preceding 12 months. Jointly with the program chairs, the special track chairs were responsible for appointing the Senior Program Committee (SPC) members, who in turn helped identify strong and diverse Program Committee (PC) members for their tracks. Every paper was reviewed by at least three PC members, overseen by an SPC member who ensured that reviews were clear and informative.
After authors were given an opportunity to respond to the reviewers, the SPC member led a discussion in which the reviewers considered each other's, and the authors', comments to converge on a recommendation to the track chairs. The track chairs in turn worked with the program chairs to make final acceptance decisions, ensuring uniformly high quality. The JAAMAS Presentation Track submissions, published as extended abstracts, were handled by the track chair. AAMAS 2017 attracted a large number of high-quality submissions: the overall acceptance rate for full papers was 27% (155 of 567 submissions were accepted) and for extended abstracts was 21%. Of the 567 submissions, 356 (63%) had a student as the primary author; 82 of these (23%) were accepted as full papers, and an additional 91 (26%) as extended abstracts. While all the accepted papers are of very high quality, a select few were nominated for the Best Paper Award and the Pragnesh Jay Modi Best Student Paper Award. The Best Paper Award was presented at the conference to the best paper, and the Pragnesh Jay Modi Best Student Paper Award was given to the best of the remaining papers primarily authored by a student. The nominees for these awards are listed below, alphabetically by the first author's last name; papers primarily authored by a student are marked with an asterisk (*).

*Daniel Claes, Frans Oliehoek, Hendrik Baier, and Karl Tuyls. Decentralized Online Planning for Multi-Robot Warehouse Commissioning
*Arnold Filtser and Nimrod Talmon. Distributed Monitoring of Election Winners
*Zhiyuan Li, Yicheng Liu, Pingzhong Tang, Tingting Xu, and Wei Zhan. Stability of Generalized Two-Sided Markets with Transaction Thresholds
*Peta Masters and Sebastian Sardina. Cost-Based Goal Recognition for Path-Planning
Matthias Scheutz, Evan Krause, Brad Oosterveld, Tyler Frasca, and Robert Platt. Spoken Instruction-Based One-Shot Object and Action Learning in a Cognitive Robotic Architecture
*Adrian Sosic, Wasiur R. KhudaBukhsh, Abdelhak M. Zoubir, and Heinz Koeppl. Inverse Reinforcement Learning in Swarm Systems
*Amulya Yadav, Bryan Wilder, Eric Rice, Robin Petering, Jaih Craddock, Amanda Yoshioka Maxwell, Mary Hemler, Laura Onasch-Vera, Milind Tambe, and Darlene Woo. Influence Maximization in the Field: The Arduous Journey from Emerging to Deployed Application

These papers, and all other full papers, were presented orally in 20-minute slots; all extended abstracts and, optionally, full papers were presented as posters during the conference. These proceedings also contain the extended abstracts of 13 Demonstrations and 26 submissions accepted to the Doctoral Consortium, as well as abstracts of the invited talks and details of some of the awards presented.
When two or more computing agents work on interacting tasks, their activities should be coordinated so that they cooperate coherently. Coherence is particularly problematic in domains where each agent has only a limited view of the overall task, where communication between agents is limited, and where there is no "controller" to coordinate the agents. Our approach to coherent cooperation in such domains is developed in the context of a distributed problem-solving network where agents cooperate to solve a single problem.
Autonomous agents operating in shared, dynamic environments need to coordinate their plans to preclude negative interactions. In this dissertation, we describe an online algorithm that coordinates hierarchical plans. The coordination algorithm incrementally detects potential conflicts between abstract plans that agents dynamically reduce to detailed plans of action, and recommends temporal ordering constraints to prevent conflicts. By interleaving coordination with plan execution, agents can exercise their ability to dynamically select plans in response to runtime contingencies, and do not have to commit to specific courses of action prior to execution. However, merely coordinating plans will not suffice, since coordination commitments can be endangered by plan execution failures caused by exogenous events. Recovering from plan failures at the multiagent system level is the other focus of this work; we describe a failure recovery algorithm that detects plan failure and its ramifications vis-a-vis coordination, and locally repairs the multiagent plan by replacing a failed plan with a substitute plan while working around previously instituted ordering constraints. We report the results of several experiments that evaluate and explain the behavior of the coordination algorithm as a function of various coordination problem parameters, and demonstrate the efficacy of the algorithm by comparing it to alternative coordination strategies.
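The conflict-detect-then-order cycle described above can be sketched minimally. This is not the dissertation's algorithm: the `Step` data structure, the single-resource conflict test, and the earliest-start ordering heuristic are all simplified assumptions chosen for illustration.

```python
from dataclasses import dataclass

@dataclass
class Step:
    agent: str
    name: str
    resource: str       # resource the step needs exclusively
    earliest: int       # flexible execution window [earliest, latest]
    latest: int

def find_conflicts(steps):
    """Pairs of steps from different agents that contend for the same
    resource and whose execution windows overlap."""
    conflicts = []
    for i, a in enumerate(steps):
        for b in steps[i + 1:]:
            if (a.agent != b.agent and a.resource == b.resource
                    and a.earliest <= b.latest and b.earliest <= a.latest):
                conflicts.append((a, b))
    return conflicts

def recommend_ordering(a, b):
    """Recommend a 'before' constraint resolving the conflict, preferring
    the step that can start earlier -- one simple heuristic among many."""
    first, second = (a, b) if a.earliest <= b.earliest else (b, a)
    return f"{first.agent}:{first.name} before {second.agent}:{second.name}"

# Two agents' detailed steps; both need the bridge in overlapping windows.
plan = [
    Step("A1", "cross-bridge", "bridge", 0, 5),
    Step("A2", "cross-bridge", "bridge", 2, 8),
    Step("A2", "charge", "dock", 0, 3),
]
for a, b in find_conflicts(plan):
    print(recommend_ordering(a, b))  # A1:cross-bridge before A2:cross-bridge
```

Run online, such a detector would be re-invoked as agents refine abstract plans into concrete steps, with each recommended constraint narrowing later windows rather than fixing a full schedule in advance.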
Edmund H. Durfee, Patrick G. Kenny, and Karl C. Kluge. Integrated premission planning and execution for unmanned ground vehicles. In Proceedings of the First International Conference on Autonomous Agents, pp. 348-354, Marina del Rey, California, February 5-8, 1997.
The majority of the experiments are conducted on four-node networks, although experiments involving both larger and smaller networks are also discussed. The two principal environments are shown in Figure 68. Environment A was used for examples in the previous two chapters. Its important features are the track shared by several nodes (d1-d15), the much less globally-important moderately-sensed data of node 1 (d'1-d'5), and the ambiguity introduced into node 2's data by its sensor. Environment A thus focuses on issues of giving preference to more globally-important plans, providing predictive information, and avoiding redundant processing.
