- course, December 2024
Automatic 3D modeling and exploration of indoor structures from panoramic imagery
SA Courses '24: SIGGRAPH Asia 2024 Courses, Article No.: 1, Pages 1–9, https://doi.org/10.1145/3680532.3689580
Surround-view panoramic imaging delivers extensive spatial coverage and is widely supported by professional and commodity capture devices. Research on inferring and exploring 3D indoor models from 360° images has recently flourished, resulting in highly ...
- extended-abstract, October 2024
Balancing Habit Repetition and New Activity Exploration: A Longitudinal Micro-Randomized Trial in Physical Activity Recommendations
RecSys '24: Proceedings of the 18th ACM Conference on Recommender Systems, Pages 1147–1151, https://doi.org/10.1145/3640457.3691715
As repetition of activities can establish habits and exploration of new ones can provide a healthy variety, we investigate how a recommender system for physical activities can optimally balance these two approaches. We conducted an eight-week user study ...
- research-article, September 2024
Beyond The Page-Break: Towards Better Tools for Remediation of Born-Digital Documents
HT '24: Proceedings of the 35th ACM Conference on Hypertext and Social Media, Pages 70–77, https://doi.org/10.1145/3648188.3678215
A legacy of print is that much of our process and tooling is predicated on using text in paginated form, as was required for (paper) printed media. Increasingly, digitally created ('born-digital') documents will never be used non-digitally, and yet ...
- research-article, August 2024
Multi-Task Neural Linear Bandit for Exploration in Recommender Systems
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 5723–5730, https://doi.org/10.1145/3637528.3671649
Exposure bias and its induced feedback-loop effect are well-known problems in recommender systems. Exploration is believed to be the key to breaking such feedback loops. While classical contextual bandit algorithms such as Upper-Confidence-Bound and ...
- keynote, August 2024
Exploration: Measurements and Systems
ICTIR '24: Proceedings of the 2024 ACM SIGIR International Conference on Theory of Information Retrieval, Page 1, https://doi.org/10.1145/3664190.3672540
Human curiosity compels us to explore and understand the unknown. The information retrieval and recommendation systems we rely on daily for information acquisition and decision making, however, often fall prey to the "closed feedback loop". They ...
- poster, August 2024
A Meta-Evolutionary Algorithm for Co-evolving Genotypes and Genotype / Phenotype Maps
GECCO '24 Companion: Proceedings of the Genetic and Evolutionary Computation Conference Companion, Pages 467–470, https://doi.org/10.1145/3638530.3654259
Evolutionary computation (EC) is often used to automatically discover solutions to optimization problems. It is valued because it allows the programmer to intuitively design a search space to fit a task, and because it is a relatively open-ended search ...
- other, August 2024
The INSANE dataset: Large number of sensors for challenging UAV flights in Mars analog, outdoor, and out-/indoor transition scenarios
- Christian Brommer,
- Alessandro Fornasier,
- Martin Scheiber,
- Jeff Delaune,
- Roland Brockers,
- Jan Steinbrener,
- Stephan Weiss
International Journal of Robotics Research (RBRS), Volume 43, Issue 8, Pages 1083–1113, https://doi.org/10.1177/02783649241227245
For real-world applications, autonomous mobile robotic platforms must be capable of navigating safely in a multitude of different and dynamic environments, with accurate and robust localization being a key prerequisite. To support further research in this ...
- research-article, June 2024
A Sampling Strategy for High-Dimensional, Simulation-Based Transportation Optimization Problems
Transportation Science (TRNPS), Volume 58, Issue 5, Pages 947–972, https://doi.org/10.1287/trsc.2023.0110
When tackling high-dimensional, continuous simulation-based optimization (SO) problems, it is important to balance exploration and exploitation. Most past SO research focuses on the enhancement of exploitation techniques. The exploration technique of an ...
- research-article, May 2024
Bayesian Model-Free Deep Reinforcement Learning
AAMAS '24: Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, Pages 2782–2784
Exploration in reinforcement learning remains a difficult challenge. In order to drive exploration, ensembles with randomized prior functions have recently been popularized to quantify uncertainty in the value model. However, these ensembles have no ...
- extended-abstract, May 2024
Bayesian Ensembles for Exploration in Deep Q-Learning
AAMAS '24: Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, Pages 2528–2530
Exploration in reinforcement learning remains a difficult challenge. In order to drive exploration, ensembles with randomized prior functions have recently been popularized to quantify uncertainty in the value model. There is no theoretical reason for ...
- extended-abstract, May 2024
Guided Exploration in Reinforcement Learning via Monte Carlo Critic Optimization
AAMAS '24: Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, Pages 2348–2350
In reinforcement learning (RL), the class of deep deterministic off-policy algorithms is effectively applied to solve challenging continuous control problems. Current approaches commonly utilize random noise as an exploration method, which has several ...
- extended-abstract, May 2024
Entropy Seeking Constrained Multiagent Reinforcement Learning
AAMAS '24: Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, Pages 2141–2143
Multiagent Reinforcement Learning (MARL) has been successfully applied to domains requiring close coordination among many agents. However, real-world tasks require safety specifications that are not generally considered by MARL algorithms. In this work, ...
- research-article, May 2024
Beyond Surprise: Improving Exploration Through Surprise Novelty
AAMAS '24: Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, Pages 1084–1092
We present a new computing model for intrinsic rewards in reinforcement learning that addresses the limitations of existing surprise-driven exploration. The reward is the novelty of the surprise rather than the surprise norm. We estimate the surprise ...
- research-article, April 2024
Multi-robot, multi-sensor exploration of multifarious environments with full mission aerial autonomy
- Graeme Best,
- Rohit Garg,
- John Keller,
- Geoffrey A. Hollinger,
- Sebastian Scherer,
- Shoudong Huang,
- Kris Hauser,
- Dylan A. Shell
International Journal of Robotics Research (RBRS), Volume 43, Issue 4, Pages 485–512, https://doi.org/10.1177/02783649231203342
We present a coordinated autonomy pipeline for multi-sensor exploration of confined environments. We simultaneously address four broad challenges that are typically overlooked in prior work: (a) make effective use of both range and vision sensing ...
- research-article, March 2024
Fair Exploration via Axiomatic Bargaining
Management Science (MANS), Volume 70, Issue 12, Pages 8922–8939, https://doi.org/10.1287/mnsc.2022.01985
Exploration is often necessary in online learning to maximize long-term rewards, but it comes at the cost of short-term “regret.” We study how this cost of exploration is shared across multiple groups. For example, in a clinical trial setting, patients ...
- research-article, July 2024
Unanticipated Progress Indication: Continuous Responsiveness for Courageous Exploration
Programming '24: Companion Proceedings of the 8th International Conference on the Art, Science, and Engineering of Programming, Pages 80–86, https://doi.org/10.1145/3660829.3660843
Scripting environments support exploration from smaller programs to larger systems. From original Smalltalk workspaces to modern Python notebooks, such tool support is known to foster understanding. However, programmers struggle with unforeseen effects ...
- research-article, March 2024
Long-Term Value of Exploration: Measurements, Findings and Algorithms
- Yi Su,
- Xiangyu Wang,
- Elaine Ya Le,
- Liang Liu,
- Yuening Li,
- Haokai Lu,
- Benjamin Lipshitz,
- Sriraj Badam,
- Lukasz Heldt,
- Shuchao Bi,
- Ed H. Chi,
- Cristos Goodrow,
- Su-Lin Wu,
- Lexi Baugher,
- Minmin Chen
WSDM '24: Proceedings of the 17th ACM International Conference on Web Search and Data Mining, Pages 636–644, https://doi.org/10.1145/3616855.3635833
Effective exploration is believed to positively influence the long-term user experience on recommendation platforms. Determining its exact benefits, however, has been challenging. Regular A/B tests on exploration often measure neutral or even negative ...
- research-article, February 2024
Balancing exploration and exploitation: Unleashing the adaptive power of automatic cuckoo search for meta-heuristic optimization
Intelligent Decision Technologies (INTDTEC), Volume 18, Issue 1, Pages 485–508, https://doi.org/10.3233/IDT-idt230275
Meta-heuristic optimization algorithms are versatile and efficient techniques for solving complex optimization problems. When applied to clustering algorithms, these algorithms offer numerous advantages over traditional optimization methods, including ...
- poster, December 2023
Trace to Follow, Run to Explore: A Demonstration using Interactive Sorting
CompEd 2023: Proceedings of the ACM Conference on Global Computing Education Vol 2, Page 206, https://doi.org/10.1145/3617650.3624930
The fixed control structure of deterministic algorithms renders their behavior traceable but not amenable to interactive exploration.
In the accompanying poster, we illustrate how the runs of a transition system simulating a family of algorithms form the ...
- research-article, December 2023
Backtracking Exploration for Reinforcement Learning
DAI '23: Proceedings of the Fifth International Conference on Distributed Artificial Intelligence, Article No.: 7, Pages 1–7, https://doi.org/10.1145/3627676.3627687
Exploration of the behavior policy plays an important role in reinforcement learning, as it helps learning algorithms escape local optima. Taking linear value function approximation as an example, exploration directly affects the sampling of states, ...