Abstract: Various agent programming languages and frameworks have been developed, but few systematic studies exist of how the elements of these languages can be, and in fact are, used in practice. Such a study contributes to the design of instruments that facilitate the development of high-quality agent programs, namely the programming language itself, programming guidelines and teaching methods, and the development environment. In this paper we propose an approach for empirically studying how programmers use a programming language, in which we identify several analysis dimensions. We perform two case studies in which we analyze agent programs written in the GOAL agent programming language along the identified dimensions. The case studies concern programs for the dynamic Blocks World and for controlling bots in the first-person shooter game UNREAL TOURNAMENT 2004. We evaluate our experimental setup and discuss to what extent our findings generalize to other cognitive agent programming languages. This provides insight into more practical aspects of the development of agent programs and forms the basis for improving instruments that facilitate agent development.
Abstract: Smart devices, such as smartphones, voice assistants, and social robots, provide users with a range of input modalities, e.g., speech, touch, gestures, and vision. In recent years, advances in processing these input channels (e.g., automated speech, face, and gesture recognition, dialog generation, and emotion expression) have enabled more natural interaction experiences for users. However, several important challenges must be addressed to create these user experiences. One challenge is that most smart devices lack sufficient computing resources to execute Artificial Intelligence (AI) techniques locally. Another is that users expect responses in near real-time when they interact with these devices. Moreover, users want to switch seamlessly between devices and services at any time and from anywhere, and they expect personalized, privacy-aware services. To address these challenges, we design and develop a cloud-based middleware (CMI) that helps developers build multi-modal interaction applications and easily integrate them with AI services. In this middleware, services developed by different producers with different protocols, and smart devices with different capabilities and protocols, can be integrated easily. In CMI, applications stream data from devices to cloud services for processing and consume the results; data streaming from multiple devices to multiple services (and vice versa) is supported. CMI provides an integration framework that decouples services from devices, enabling application developers to concentrate on “interaction” instead of AI techniques. We provide simple examples to illustrate the conceptual ideas incorporated in CMI.
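The decoupling idea described in this abstract, where devices publish data streams that any number of services consume without either side referencing the other, can be sketched as a minimal in-process publish/subscribe broker. This is an illustrative assumption of the pattern only, not the CMI implementation; the `Broker` class, topic names, and handlers below are hypothetical:

```python
from collections import defaultdict
from typing import Callable

class Broker:
    """Routes messages from device topics to subscribed service handlers,
    so data producers (devices) and consumers (AI services) stay decoupled."""

    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        # A service registers interest in a device's stream by topic name.
        self._subs[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        # A device pushes a message; every subscribed service receives it.
        for handler in self._subs[topic]:
            handler(message)

# Hypothetical usage: two services consume the same microphone stream.
results: list[str] = []
broker = Broker()
broker.subscribe("device/mic", lambda msg: results.append(f"transcribed {msg['frames']} frames"))
broker.subscribe("device/mic", lambda msg: results.append("logged"))
broker.publish("device/mic", {"frames": 3})
```

In a real cloud middleware the broker would sit on the network (e.g., a message queue) rather than in-process, but the routing contract, publish to a topic, deliver to all subscribers, is the same.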