Michita Imai
2020 – today
- 2024
- [c133]Takuya Kitade, Wataru Yamada, Keiichi Ochiai, Michita Imai:
Bicode: A Hybrid Blinking Marker System for Event Cameras. ICRA 2024: 2939-2945 - [c132]Aoto Tsuchiya, Tomoyuki Maekawa, Michita Imai:
SCAINs Presenter: Preventing Miscommunication by Detecting Context-Dependent Utterances in Spoken Dialogue. IUI 2024: 549-565 - 2023
- [j42]Shoya Matsumori, Kohei Okuoka, Ryoichi Shibata, Minami Inoue, Yosuke Fukuchi, Michita Imai:
Mask and Cloze: Automatic Open Cloze Question Generation Using a Masked Language Model. IEEE Access 11: 9835-9850 (2023) - [j41]Ryoichi Shibata, Shoya Matsumori, Yosuke Fukuchi, Tomoyuki Maekawa, Mitsuhiko Kimoto, Michita Imai:
Conversational Context-sensitive Ad Generation with a Few Core-Queries. ACM Trans. Interact. Intell. Syst. 13(3): 15:1-15:37 (2023) - [c131]Minami Inoue, Tomoyuki Maekawa, Ryoichi Shibata, Michita Imai:
The Effect of Response Suggestion on Dialogue Flow: Analysis Based on Dialogue Act and Initiative. CogSci 2023 - [c130]Tomoyuki Maekawa, Michita Imai:
Identifying Statements Crucial for Awareness of Interpretive Nonsense to Prevent Communication Breakdowns. EMNLP 2023: 12550-12566 - [c129]Zijian Liu, Michita Imai:
Telepresence Chameleon: Improve User Experience of Telepresence Robot With Chameleon Effect. HAI 2023: 55-62 - 2022
- [j40]Yosuke Fukuchi, Masahiko Osawa, Hiroshi Yamakawa, Michita Imai:
Explaining Intelligent Agent's Future Motion on Basis of Vocabulary Learning With Human Goal Inference. IEEE Access 10: 54336-54347 (2022) - [j39]Yosuke Fukuchi, Masahiko Osawa, Hiroshi Yamakawa, Tatsuji Takahashi, Michita Imai:
Conveying Intention by Motions With Awareness of Information Asymmetry. Frontiers Robotics AI 9: 783863 (2022) - [j38]Mitsuhiko Kimoto, Yuuki Yasumatsu, Michita Imai:
Help-Estimator: Robot Requests for Help from Humans by Estimating a Person's Subjective Time. Int. J. Soc. Robotics 14(3): 617-630 (2022) - [j37]Takuma Seno, Michita Imai:
d3rlpy: An Offline Deep Reinforcement Learning Library. J. Mach. Learn. Res. 23: 315:1-315:20 (2022) - [j36]Riki Satogata, Mitsuhiko Kimoto, Yosuke Fukuchi, Kohei Okuoka, Michita Imai:
$Q$-Mapping: Learning User-Preferred Operation Mappings With Operation-Action Value Function. IEEE Trans. Hum. Mach. Syst. 52(6): 1090-1102 (2022) - [c128]Rintaro Hasegawa, Yosuke Fukuchi, Kohei Okuoka, Michita Imai:
Advantage Mapping: Learning Operation Mapping for User-Preferred Manipulation by Extracting Scenes with Advantage Function. HAI 2022: 95-103 - [c127]Kaon Shimoyama, Kohei Okuoka, Mitsuhiko Kimoto, Michita Imai:
VISTURE: A System for Video-Based Gesture and Speech Generation by Robots. HAI 2022: 185-193 - [c126]Akiko Yamazaki, Antonia Lina Krummheuer, Michita Imai:
Interdisciplinary Explorations of Processes of Mutual Understanding in Interaction with Assistive Shopping Robots. HRI 2022: 1293-1295 - [c125]Ryoichi Shibata, Shoya Matsumori, Yosuke Fukuchi, Tomoyuki Maekawa, Mitsuhiko Kimoto, Michita Imai:
Utilizing Core-Query for Context-Sensitive Ad Generation Based on Dialogue. IUI 2022: 734-745 - [i8]Shoya Matsumori, Kohei Okuoka, Ryoichi Shibata, Minami Inoue, Yosuke Fukuchi, Michita Imai:
Mask and Cloze: Automatic Open Cloze Question Generation using a Masked Language Model. CoRR abs/2205.07202 (2022) - [i7]Teppei Yoshino, Yosuke Fukuchi, Shoya Matsumori, Michita Imai:
Chat, Shift and Perform: Bridging the Gap between Task-oriented and Non-task-oriented Dialog Systems. CoRR abs/2206.11813 (2022) - 2021
- [j35]Shoya Matsumori, Yuki Abe, Kosuke Shingyouchi, Komei Sugiura, Michita Imai:
LatteGAN: Visually Guided Language Attention for Multi-Turn Text-Conditioned Image Manipulation. IEEE Access 9: 160521-160532 (2021) - [c124]Yuta Watanabe, Yosuke Fukuchi, Tomoyuki Maekawa, Shoya Matsumori, Michita Imai:
Inferring Human Beliefs and Desires from their Actions and the Content of their Utterances. HAI 2021: 391-395 - [c123]Yuya Kaneshige, Satoru Satake, Takayuki Kanda, Michita Imai:
How to Overcome the Difficulties in Programming and Debugging Mobile Social Robots? HRI 2021: 361-369 - [c122]Nanase Otake, Shoya Matsumori, Yosuke Fukuchi, Yusuke Takimoto, Michita Imai:
Mixed Reference Interpretation in Multi-turn Conversation. ICAART (1) 2021: 321-328 - [c121]Teppei Yoshino, Shoya Matsumori, Yosuke Fukuchi, Michita Imai:
Simultaneous Contextualization and Interpretation with Keyword Awareness. ICAISC (2) 2021: 403-413 - [c120]Shoya Matsumori, Kosuke Shingyouchi, Yuki Abe, Yosuke Fukuchi, Komei Sugiura, Michita Imai:
Unified Questioner Transformer for Descriptive Question Generation in Goal-Oriented Visual Dialogue. ICCV 2021: 1878-1887 - [c119]Soma Kanazawa, Shoya Matsumori, Michita Imai:
Improving Goal-Oriented Visual Dialogue by Asking Fewer Questions. ICONIP (2) 2021: 158-169 - [i6]Shoya Matsumori, Kosuke Shingyouchi, Yuki Abe, Yosuke Fukuchi, Komei Sugiura, Michita Imai:
Unified Questioner Transformer for Descriptive Question Generation in Goal-Oriented Visual Dialogue. CoRR abs/2106.15550 (2021) - [i5]Takuma Seno, Michita Imai:
d3rlpy: An Offline Deep Reinforcement Learning Library. CoRR abs/2111.03788 (2021) - [i4]Shoya Matsumori, Yuki Abe, Kosuke Shingyouchi, Komei Sugiura, Michita Imai:
LatteGAN: Visually Guided Language Attention for Multi-Turn Text-Conditioned Image Manipulation. CoRR abs/2112.13985 (2021) - 2020
- [j34]Masahiko Osawa, Michita Imai:
A Robot for Test Bed Aimed at Improving Telepresence System and Evasion from Discomfort Stimuli by Online Learning. Int. J. Soc. Robotics 12(1): 187-199 (2020) - [j33]Ryosuke Hasumoto, Kazuhiro Nakadai, Michita Imai:
Reactive Chameleon: A Method to Mimic Conversation Partner's Body Sway for a Robot. Int. J. Soc. Robotics 12(1): 239-258 (2020) - [j32]Masahiko Osawa, Kohei Okuoka, Yusuke Takimoto, Michita Imai:
Is Automation Appropriate? Semi-autonomous Telepresence Architecture Focusing on Voluntary and Involuntary Movements. Int. J. Soc. Robotics 12(5): 1119-1134 (2020) - [c118]Riki Satogata, Mitsuhiko Kimoto, Shun Yoshioka, Masahiko Osawa, Kazuhiko Shinozawa, Michita Imai:
Emergence of Agent Gaze Behavior using Interactive Kinetics-Based Gaze Direction Model. HRI (Companion) 2020: 433-435 - [c117]Yosuke Fukuchi, Yusuke Takimoto, Michita Imai:
Adaptive Enhancement of Swipe Manipulations on Touch Screens with Content-awareness. ICAART (2) 2020: 429-436 - [c116]Yohei Otsuka, Shohei Akita, Kohei Okuoka, Mitsuhiko Kimoto, Michita Imai:
PredGaze: A Incongruity Prediction Model for User's Gaze Movement. RO-MAN 2020: 48-53 - [i3]Yusuke Takimoto, Yosuke Fukuchi, Shoya Matsumori, Michita Imai:
SLAM-Inspired Simultaneous Contextualization and Interpreting for Incremental Conversation Sentences. CoRR abs/2005.14662 (2020)
2010 – 2019
- 2019
- [j31]Kazuhiro Nakadai, Emilia I. Barakova, Michita Imai, Tetsunari Inamura:
Special issue on robot and human interactive communication. Adv. Robotics 33(7-8): 307-308 (2019) - [j30]Kazuhiro Nakadai, Emilia I. Barakova, Michita Imai, Tetsunari Inamura:
Special issue on robot and human interactive communication. Adv. Robotics 33(15-16): 699 (2019) - [j29]Taichi Sono, Satoru Satake, Takayuki Kanda, Michita Imai:
Walking partner robot chatting about scenery. Adv. Robotics 33(15-16): 742-755 (2019) - [c115]Kouichi Enami, Kohei Okuoka, Shohei Akita, Michita Imai:
Notification Timing of Agent with Vection and Character for Semi-Automatic Wheelchair Operation. HAI 2019: 127-134 - [c114]Mitsuhiko Kimoto, Takamasa Iio, Michita Imai, Masahiro Shiomi:
Lexical Entrainment in Multi-party Human-Robot Interaction. ICSR 2019: 165-175 - 2018
- [c113]Shoya Matsumori, Yuki Abe, Masahiko Osawa, Michita Imai:
Investigation of Incremental Learning as Temporal Feature Extraction. BICA 2018: 342-347 - [c112]Yosuke Fukuchi, Masahiko Osawa, Hiroshi Yamakawa, Tatsuji Takahashi, Michita Imai:
Bayesian Inference of Self-intention Attributed by Observer. HAI 2018: 3-10 - [c111]Kohei Okuoka, Yusuke Takimoto, Masahiko Osawa, Michita Imai:
Semi-Autonomous Telepresence Robot for Adaptively Switching Operation Using Inhibition and Disinhibition Mechanism. HAI 2018: 167-175 - [c110]Shoya Matsumori, Yosuke Fukuchi, Masahiko Osawa, Michita Imai:
Do Others Believe What I Believe?: Estimating How Much Information is being Shared by Utterance Timing. HAI 2018: 301-309 - [c109]Takuma Seno, Kohei Okuoka, Masahiko Osawa, Michita Imai:
Adaptive Semi-autonomous Agents via Episodic Control. HAI 2018: 377-379 - [c108]Takahiro Matsumoto, Mitsuhiro Goto, Ryo Ishii, Tomoki Watanabe, Tomohiro Yamada, Michita Imai:
Where Should Robots Talk?: Spatial Arrangement Study from a Participant Workload Perspective. HRI 2018: 270-278 - [c107]Shohei Akita, Satoru Satake, Masahiro Shiomi, Michita Imai, Takayuki Kanda:
Social Coordination for Looking-Together Situations. IROS 2018: 834-841 - [e4]Michita Imai, Tim Norman, Elizabeth Sklar, Takanori Komatsu:
Proceedings of the 6th International Conference on Human-Agent Interaction, HAI 2018, Southampton, United Kingdom, December 15-18, 2018. ACM 2018, ISBN 978-1-4503-5953-5 [contents] - [i2]Yosuke Fukuchi, Masahiko Osawa, Hiroshi Yamakawa, Tatsuji Takahashi, Michita Imai:
Bayesian Inference of Self-intention Attributed by Observer. CoRR abs/1810.05564 (2018) - [i1]Yosuke Fukuchi, Masahiko Osawa, Hiroshi Yamakawa, Michita Imai:
Autonomous Self-Explanation of Behavior for Interactive Reinforcement Learning Agents. CoRR abs/1810.08811 (2018) - 2017
- [c106]Masahiko Osawa, Michita Imai:
The Functional Plausibility of Topologically Extended Models of RBMs as Hippocampal Models. BICA 2017: 341-346 - [c105]Yosuke Fukuchi, Masahiko Osawa, Hiroshi Yamakawa, Michita Imai:
Autonomous Self-Explanation of Behavior for Interactive Reinforcement Learning Agents. HAI 2017: 97-101 - [c104]Miguel Gomez Lopez, Komei Hasegawa, Michita Imai:
Adaptive Behavior Generation for Conversational Robot in Human-Robot Negotiation Environment. HAI 2017: 151-159 - [c103]Katsutoshi Masai, Yuta Sugiura, Michita Imai, Maki Sugimoto:
RacketAvatar that Expresses Intention of Avatar and User. HRI (Companion) 2017: 44 - [c102]Ryosuke Totsuka, Satoru Satake, Takayuki Kanda, Michita Imai:
Is a Robot a Better Walking Partner If It Associates Utterances with Visual Scenes? HRI 2017: 313-322 - [c101]Yuuki Yasumatsu, Taichi Sono, Komei Hasegawa, Michita Imai:
I Can Help You: Altruistic Behaviors from Children towards a Robot at a Kindergarten. HRI (Companion) 2017: 331-332 - [c100]Yosuke Fukuchi, Masahiko Osawa, Hiroshi Yamakawa, Michita Imai:
Application of Instruction-Based Behavior Explanation to a Reinforcement Learning Agent with Changing Policy. ICONIP (1) 2017: 100-108 - [c99]Masahiko Osawa, Yuta Ashihara, Takuma Seno, Michita Imai, Satoshi Kurihara:
Accumulator Based Arbitration Model for both Supervised and Reinforcement Learning Inspired by Prefrontal Cortex. ICONIP (1) 2017: 608-617 - [c98]Yusuke Takimoto, Komei Hasegawa, Taichi Sono, Michita Imai:
A simple bi-layered architecture to enhance the liveness of a robot. IROS 2017: 2786-2792 - [c97]Kiyona Oto, Jianmei Feng, Michita Imai:
Investigating how people deal with silence in a human-robot conversation. RO-MAN 2017: 195-200 - [c96]Shiori Sawada, Taichi Sono, Michita Imai:
Agent auto-generation system: Interact with your favorite things. RO-MAN 2017: 201-206 - 2016
- [j28]Hirofumi Okazaki, Yusuke Kanai, Masa Ogata, Komei Hasegawa, Kentaro Ishii, Michita Imai:
Toward Understanding Pedagogical Relationship in Human-Robot Interaction. J. Robotics Mechatronics 28(1): 69-78 (2016) - [c95]Mamoru Yamanouchi, Taichi Sono, Michita Imai:
The Use of The BDI Model As Design Principle for A Migratable Agent. HAI 2016: 115-122 - [c94]Masahiko Osawa, Hiroshi Yamakawa, Michita Imai:
An Implementation of Working Memory Using Stacked Half Restricted Boltzmann Machine - Toward to Restricted Boltzmann Machine-Based Cognitive Architecture. ICONIP (1) 2016: 342-350 - [c93]Taichi Sono, Komei Hasegawa, Kazuhiko Shinozawa, Michita Imai:
A study on controlling method for an autonomous personal vehicle based on user's heart rate variability. RO-MAN 2016: 1177-1182 - 2015
- [j27]Mahisorn Wongphati, Hirotaka Osawa, Michita Imai:
User-defined gestures for controlling primitive motions of an end effector. Adv. Robotics 29(4): 225-238 (2015) - [j26]Mahisorn Wongphati, Hirotaka Osawa, Michita Imai:
Gestures for Manually Controlling a Helping Hand Robot. Int. J. Soc. Robotics 7(5): 731-742 (2015) - [j25]Satoru Satake, Keita Nakatani, Kotaro Hayashi, Takayuki Kanda, Michita Imai:
What should we know to develop an information robot? PeerJ Comput. Sci. 1: e8 (2015) - [c92]Masa Ogata, Michita Imai:
SkinWatch: skin gesture interaction for smart watch. AH 2015: 21-24 - [c91]Hirofumi Okazaki, Yusuke Kanai, Masa Ogata, Komei Hasegawa, Kentaro Ishii, Michita Imai:
Building Pedagogical Relationships Between Humans and Robots in Natural Interactions. HAI 2015: 115-120 - [c90]Komei Hasegawa, Seigo Furuya, Yusuke Kanai, Michita Imai:
DECoReS: Degree Expressional Command Reproducing System for Autonomous Wheelchairs. HAI 2015: 149-156 - [c89]Masa Ogata, Ryo Teramura, Michita Imai:
Attractive telepresence communication with movable and touchable display robot. RO-MAN 2015: 179-184 - [c88]Masaaki Takahashi, Masa Ogata, Michita Imai, Keisuke Nakamura, Kazuhiro Nakadai:
A case study of an automatic volume control interface for a telepresence system. RO-MAN 2015: 517-522 - [c87]Masa Ogata, Yuta Sugiura, Michita Imai:
FlashTouch: touchscreen communication combining light and touch. SIGGRAPH Emerging Technologies 2015: 11:1 - [c86]Masa Ogata, Ryosuke Totsuka, Michita Imai:
SkinWatch: adapting skin as a gesture surface. SIGGRAPH Asia Emerging Technologies 2015: 22:1-22:2 - 2014
- [c85]Taichi Sono, Michita Imai:
Balance theory on Three Socions. APSIPA 2014: 1-4 - [c84]Michita Imai, Tetsuo Ono, Kazushi Nishimoto:
AS 2014: workshop on augmented sociality and interactive technology. HAI 2014: 1 - [c83]Michita Imai:
CID 2014: workshop on cognitive interaction design. HAI 2014: 3 - [c82]Akira Hayamizu, Michita Imai, Keisuke Nakamura, Kazuhiro Nakadai:
Volume adaptation and visualization by modeling the volume level in noisy environments for telepresence system. HAI 2014: 67-74 - [c81]Taichi Sono, Toshihiro Osumi, Michita Imai:
SB simulator: a method to estimate how relation develops. HAI 2014: 301-307 - [c80]Masa Ogata, Masahiko Inami, Michita Imai:
Sweat Sensing Technique for Wearable Device Using Infrared Transparency. HCI (3) 2014: 323-331 - [c79]Masa Ogata, Yuta Sugiura, Yasutoshi Makino, Masahiko Inami, Michita Imai:
Augmenting a Wearable Display with Skin Surface as an Expanded Input Area. HCI (9) 2014: 606-614 - [c78]Satoru Satake, Hajime Iba, Takayuki Kanda, Michita Imai, Yoichi Morales Saiki:
May i talk about other shops here?: modeling territory and invasion in front of shops. HRI 2014: 487-494 - [e3]Hideaki Kuzuoka, Tetsuo Ono, Michita Imai, James E. Young:
Proceedings of the second international conference on Human-agent interaction, HAI '14, Tsukuba, Japan, October 29-31, 2014. ACM 2014, ISBN 978-1-4503-3035-0 [contents] - [e2]Gerhard Sagerer, Michita Imai, Tony Belpaeme, Andrea Lockerd Thomaz:
ACM/IEEE International Conference on Human-Robot Interaction, HRI'14, Bielefeld, Germany, March 3-6, 2014. ACM 2014, ISBN 978-1-4503-2658-2 [contents] - 2013
- [j24]Masato Sakata, Zeynep Yücel, Kazuhiko Shinozawa, Norihiro Hagita, Michita Imai, Michiko Furutani, Rumiko Matsuoka:
An Inference Engine for Estimating Outside States of Clinical Test Items. ACM Trans. Manag. Inf. Syst. 4(3): 13:1-13:21 (2013) - [j23]Satoru Satake, Takayuki Kanda, Dylan F. Glas, Michita Imai, Hiroshi Ishiguro, Norihiro Hagita:
A Robot that Approaches Pedestrians. IEEE Trans. Robotics 29(2): 508-524 (2013) - [c77]Hirotaka Osawa, Michita Imai:
Morphing agency: deconstruction of an agent with transformative agential triggers. CHI Extended Abstracts 2013: 2237-2246 - [c76]Masayasu Ogata, Yuta Sugiura, Hirotaka Osawa, Michita Imai:
FlashTouch: data communication through touchscreens. CHI 2013: 2321-2324 - [c75]Hirotaka Osawa, Michita Imai:
Evolution of Mutual Trust Protocol in Human-based Multi-Agent Simulation. ECAL 2013: 692-697 - [c74]Takuya Kitade, Satoru Satake, Takayuki Kanda, Michita Imai:
Understanding suitable locations for waiting. HRI 2013: 57-64 - [c73]Yusuke Kanai, Hirotaka Osawa, Michita Imai:
Interaction with an agent in blended reality. HRI 2013: 153-154 - [c72]Yusuke Kanai, Hirotaka Osawa, Michita Imai:
BReA: Potentials of combining reality and virtual communications using a blended reality agent. RO-MAN 2013: 604-609 - [c71]Masayasu Ogata, Yuta Sugiura, Yasutoshi Makino, Masahiko Inami, Michita Imai:
SenSkin: adapting skin as a soft interface. UIST 2013: 539-544 - [e1]Hideaki Kuzuoka, Vanessa Evers, Michita Imai, Jodi Forlizzi:
ACM/IEEE International Conference on Human-Robot Interaction, HRI 2013, Tokyo, Japan, March 3-6, 2013. IEEE/ACM 2013, ISBN 978-1-4673-3055-8 [contents] - 2012
- [j22]Bin Guo, Ryota Fujimura, Daqing Zhang, Michita Imai:
Design-in-play: improving the variability of indoor pervasive games. Multim. Tools Appl. 59(1): 259-277 (2012) - [j21]Hirotaka Osawa, Yuji Matsuda, Ren Ohmura, Michita Imai:
Embodiment of an agent by anthropomorphization of a common object. Web Intell. Agent Syst. 10(3): 345-358 (2012) - [c70]Masa Ogata, Yuta Sugiura, Hirotaka Osawa, Michita Imai:
Pygmy: a ring-shaped robotic device that promotes the presence of an agent on human hand. APCHI 2012: 85-92 - [c69]Masayasu Ogata, Yuta Sugiura, Hirotaka Osawa, Michita Imai:
Pygmy: a ring-like anthropomorphic device that animates the human hand. CHI Extended Abstracts 2012: 1003-1006 - [c68]Tadakazu Kashiwabara, Hirotaka Osawa, Kazuhiko Shinozawa, Michita Imai:
TEROOS: a wearable avatar to enhance joint activities (video preview). CHI Extended Abstracts 2012: 1433-1434 - [c67]Tadakazu Kashiwabara, Hirotaka Osawa, Kazuhiko Shinozawa, Michita Imai:
TEROOS: a wearable avatar to enhance joint activities. CHI 2012: 2001-2004 - [c66]Takahiro Matsumoto, Satoru Satake, Takayuki Kanda, Michita Imai, Norihiro Hagita:
Do you remember that shop?: computational model of spatial memory for shopping companion robots. HRI 2012: 447-454 - [c65]Hirotaka Osawa, Michita Imai:
Researching Nonverbal Communication Strategies in Human-Robot Interaction. ICAART (Revised Selected Papers) 2012: 417-432 - [c64]Hirotaka Osawa, Michita Imai:
Possessed Robot: How to Find Original Nonverbal Communication Style in Human-robot Interaction. ICAART (1) 2012: 632-641 - [c63]Mahisorn Wongphati, Yushi Matsuda, Hirokata Osawa, Michita Imai:
Where do you want to use a robotic arm? And what do you want from the robot? RO-MAN 2012: 322-327 - [c62]Hirotaka Osawa, Kunitoshi Tobita, Yuki Kuwayama, Michita Imai, Seiji Yamada:
Behavioral Turing test using two-axis actuators. RO-MAN 2012: 328-333 - [c61]Hirotaka Osawa, Thibault Voisin, Michita Imai:
Partially Disembodied Robot: Social Interactions with a Robot's Virtual Body. ICSR 2012: 438-447 - [c60]Masayasu Ogata, Yuta Sugiura, Hirotaka Osawa, Michita Imai:
iRing: intelligent ring using infrared reflection. UIST 2012: 131-136 - 2011
- [j20]Futoshi Naya, Ren Ohmura, Masakazu Miyamae, Haruo Noma, Kiyoshi Kogure, Michita Imai:
Wireless sensor network system for supporting nursing context-awareness. Int. J. Auton. Adapt. Commun. Syst. 4(4): 361-382 (2011) - [j19]Bin Guo, Daqing Zhang, Michita Imai:
Toward a cooperative programming framework for context-aware applications. Pers. Ubiquitous Comput. 15(3): 221-233 (2011) - [c59]Thibault Voisin, Hirotaka Osawa, Seiji Yamada, Michita Imai:
Between real-world and virtual agents: the disembodied robot. HRI 2011: 281-282 - [c58]Mahisorn Wongphati, Hirotaka Osawa, Michita Imai:
3D low-profile evaluation system (LES) an unobtrusive measurement tool for HRI. RO-MAN 2011: 162-167 - [c57]Hirotaka Osawa, Kentaro Ishii, Seiji Yamada, Michita Imai:
Grounding Cyber Information in the Physical World with Attachable Social Cues. RTCSA (2) 2011: 41-47 - 2010
- [j18]Hitoshi Kawasaki, Ren Ohmura, Hirotaka Osawa, Michita Imai:
A Model for Addition of User Information to Sensor Data Obtained from Living Environment. Cybern. Syst. 41(3): 194-215 (2010) - [j17]Bin Guo, Daqing Zhang, Michita Imai:
Enabling user-oriented management for ubiquitous computing: The meta-design approach. Comput. Networks 54(16): 2840-2855 (2010) - [c56]Hirotaka Osawa, Yuji Matsuda, Ren Ohmura, Michita Imai:
Toward the body image horizon: how do users recognize the body of a robot? HRI 2010: 179-180 - [c55]Yasuhiko Hato, Satoru Satake, Takayuki Kanda, Michita Imai, Norihiro Hagita:
Pointing to space: modeling of deictic interaction referring to regions. HRI 2010: 301-308 - [c54]Ryota Fujimura, Kazuhiro Nakadai, Michita Imai, Ren Ohmura:
PROT - An embodied agent for intelligible and user-friendly human-robot interaction. IROS 2010: 3860-3867 - [c53]Hirotaka Osawa, Jarrod Orszulak, Kathryn M. Godfrey, Michita Imai, Joseph F. Coughlin:
Improving voice interaction for older people using an attachable gesture robot. RO-MAN 2010: 179-184
2000 – 2009
- 2009
- [j16]Hirotaka Osawa, Ren Ohmura, Michita Imai:
Using Attachable Humanoid Parts for Realizing Imaginary Intention and Body Image. Int. J. Soc. Robotics 1(1): 109-123 (2009) - [j15]Toshiyuki Shiwa, Takayuki Kanda, Michita Imai, Hiroshi Ishiguro, Norihiro Hagita:
How Quickly Should a Communication Robot Respond? Delaying Strategies and Habituation Effects. Int. J. Soc. Robotics 1(2): 141-155 (2009) - [j14]Kentaro Ishii, Michita Imai:
Environmental sensor bridge system for communication robots. J. Ambient Intell. Smart Environ. 1(3): 211-221 (2009) - [c52]Hirotaka Osawa, Yuji Matsuda, Ren Ohmura, Michita Imai:
Variable Body Image - Evaluation Framework of Robot's Appearance using Movable Human-like Parts. AAAI Spring Symposium: Experimental Design for Real-World Systems 2009: 33-40 - [c51]Kenshiro Hirose, Hideyuki Kawashima, Satoru Satake, Michita Imai:
Sharing Gesture Contents among Heterogeneous Robots. CISIS 2009: 1076-1081 - [c50]Toshihiro Osumi, Kenta Fujimoto, Yuki Kuwayama, Masato Noda, Hirotaka Osawa, Michita Imai, Kazuhiko Shinozawa:
BlogRobot: Mobile Terminal for Blog Browse Using Physical Representation. FIRA 2009: 96-101 - [c49]Yusuke Okuno, Takayuki Kanda, Michita Imai, Hiroshi Ishiguro, Norihiro Hagita:
Providing route directions: design of robot's utterance, gesture, and timing. HRI 2009: 53-60 - [c48]Satoru Satake, Takayuki Kanda, Dylan F. Glas, Michita Imai, Hiroshi Ishiguro, Norihiro Hagita:
How to approach humans?: strategies for social robots to initiate interaction. HRI 2009: 109-116 - [c47]Masato Noda, Toshihiro Osumi, Kenta Fujimoto, Yuki Kuwayama, Hirotaka Osawa, Michita Imai, Kazuhiko Shinozawa:
Blog robot: a new style for accessing location-based contents. HRI 2009: 203-204 - [c46]Hirotaka Osawa, Ren Ohmura, Michita Imai:
Anthropomorphization method using attachable humanoid parts. HRI 2009: 207-208 - [c45]Hirotaka Osawa, Ren Ohmura, Michita Imai:
Self introducing poster using attachable humanoid parts. HRI 2009: 327-328 - [c44]Kentaro Ishii, Shengdong Zhao, Masahiko Inami, Takeo Igarashi, Michita Imai:
Designing Laser Gesture Interface for Robot Control. INTERACT (2) 2009: 479-492 - [c43]Hitoshi Kawasaki, Ren Ohmura, Hirotaka Osawa, Michita Imai:
A model for addition of user information to sensor data obtained from living environment. CASEMANS@Pervasive 2009: 9-17 - [c42]Yasuhiko Hato, Thomas Georg Kanold, Kentaro Ishii, Michita Imai:
Showing awareness of humans' context to involve humans in interaction. RO-MAN 2009: 663-668 - [c41]Hirotaka Osawa, Kentaro Ishii, Toshihiro Osumi, Ren Ohmura, Michita Imai:
Anthropomorphization of a space with implemented human-like features. SIGGRAPH Emerging Technologies 2009: 2:1 - 2008
- [j13]Chihiro Ono, Yasuhiro Takishima, Yoichi Motomura, Hideki Asoh, Yasuhide Shinagawa, Michita Imai, Yuichiro Anzai:
Context-Aware Users' Preference Models by Integrating Real and Supposed Situation Data. IEICE Trans. Inf. Syst. 91-D(11): 2552-2559 (2008) - [j12]Bin Guo, Satoru Satake, Michita Imai:
Lowering the Barriers to Participation in the Development of Human-Artifact Interaction Systems. Int. J. Semantic Comput. 2(4): 469-502 (2008) - [j11]Bin Guo, Satoru Satake, Michita Imai:
Home-Explorer: Ontology-based physical artifact search and hidden object detection system. Mob. Inf. Syst. 4(2): 81-103 (2008) - [c40]Toshiyuki Shiwa, Takayuki Kanda, Michita Imai, Hiroshi Ishiguro, Norihiro Hagita:
How quickly should communication robots respond? HRI 2008: 153-160 - [c39]Hirotaka Osawa, Ren Ohmura, Michita Imai:
Embodiment of an Agent by Anthropomorphization of a Common Object. IAT 2008: 484-490 - [c38]Masahiko Taguchi, Kentaro Ishii, Michita Imai:
Effectiveness of Simultaneous Behavior by Interactive Robot. ICAISC 2008: 896-906 - [c37]Hirotaka Osawa, Michita Imai:
Towards anthropomorphized spaces: Human responses to anthropomorphization of a space using attached body parts. RO-MAN 2008: 148-153 - [c36]Masahiko Taguchi, Kentaro Ishii, Michita Imai:
The effect of simultaneous behaviors for sharing real world information. RO-MAN 2008: 154-159 - 2007
- [j10]Takayuki Kanda, Masayuki Kamashima, Michita Imai, Tetsuo Ono, Daisuke Sakamoto, Hiroshi Ishiguro, Yuichiro Anzai:
A humanoid robot that pretends to listen to route guidance from a human. Auton. Robots 22(1): 87-100 (2007) - [j9]Hirotaka Osawa, Jun Mukai, Michita Imai:
Anthropomorphization Framework for Human-Object Communication. J. Adv. Comput. Intell. Intell. Informatics 11(8): 1007-1014 (2007) - [c35]Bin Guo, Michita Imai:
Home-Explorer: Search, Localize and Manage the Physical Artifacts Indoors. AINA 2007: 378-385 - [c34]Kentaro Ishii, Yukiko Yamamoto, Michita Imai, Kazuhiro Nakadai:
A Navigation System Using Ultrasonic Directional Speaker with Rotating Base. HCI (9) 2007: 526-535 - [c33]Osamu Sugiyama, Takayuki Kanda, Michita Imai, Hiroshi Ishiguro, Norihiro Hagita:
Natural deictic communication with humanoid robots. IROS 2007: 1441-1448 - [c32]Kentaro Ishii, Kazuhiro Takasuna, Michita Imai:
Collaborative Task Casting for Multi-Task Communication Robots. RO-MAN 2007: 338-343 - [c31]Hirotaka Osawa, Jun Mukai, Michita Imai:
"Display Robot" - Interaction between Humans and Anthropomorphized Objects. RO-MAN 2007: 451-456 - 2006
- [j8]Osamu Sugiyama, Takayuki Kanda, Michita Imai, Hiroshi Ishiguro, Norihiro Hagita, Yuichiro Anzai:
Humanlike conversation with gestures and verbal cues based on a three-layer attention-drawing model. Connect. Sci. 18(4): 379-402 (2006) - [c30]Satoru Satake, Hideyuki Kawashima, Michita Imai, Kenshiro Hirose, Yuichiro Anzai:
IRIOS: Interactive News Announcer Robot System. APWeb Workshops 2006: 733-740 - [c29]Hideyuki Kawashima, Michita Imai, Yuichiro Anzai:
Providing Persistence for Sensor Data Streams by Remote WAL. DaWaK 2006: 524-533 - [c28]Hideyuki Kawashima, Yutaka Hirota, Satoru Satake, Michita Imai:
MeT: a real world oriented metadata management system for semantic sensor networks. DMSN 2006: 13-18 - [c27]Bin Guo, Satoru Satake, Michita Imai:
Sixth-Sense: Context Reasoning for Potential Objects Detection in Smart Sensor Rich Environment. IAT 2006: 191-194 - [c26]Michita Imai, Yutaka Hirota, Satoru Satake, Hideyuki Kawashima:
Semantic Sensor Network for Physically Grounded Applications. ICARCV 2006: 1-6 - [c25]Osamu Sugiyama, Takayuki Kanda, Michita Imai, Hiroshi Ishiguro, Norihiro Hagita:
Three-Layer Model for Generation and Recognition of Attention-Drawing Behavior. IROS 2006: 5843-5850 - [c24]Masayuki Furuyama, Jun Mukai, Michita Imai:
Viewlon: Visualizing Information on Semantic Sensor Network. JSAI 2006: 65-76 - [c23]Hideyuki Kawashima, Michita Imai, Yuichiro Anzai:
Accelerating Remote Logging by Two Level Asynchronous Checkpointing. MDM 2006: 127 - [c22]Hirotaka Osawa, Jun Mukai, Michita Imai:
Anthropomorphization of an Object by Displaying Robot. RO-MAN 2006: 763-768 - 2005
- [j7]Daisuke Sakamoto, Takayuki Kanda, Tetsuo Ono, Masayuki Kamashima, Michita Imai, Hiroshi Ishiguro:
Cooperative embodied communication emerged by interactive humanoid robots. Int. J. Hum. Comput. Stud. 62(2): 247-265 (2005) - [c21]Michita Imai, Mariko Narumi:
Immersion in interaction based on physical world objects. AMT 2005: 523-528 - [c20]Osamu Sugiyama, Takayuki Kanda, Michita Imai, Hiroshi Ishiguro, Norihiro Hagita:
Three-layered draw-attention model for humanoid robots with gestures and verbal cues. IROS 2005: 2423-2428 - [c19]Michita Imai, Hideyuki Kawashima, Yoshihisa Honda:
Generating behavioral protocol for human-robot physical contact interaction. RO-MAN 2005: 229-234 - 2004
- [j6]Takayuki Kanda, Hiroshi Ishiguro, Michita Imai, Tetsuo Ono:
Development and evaluation of interactive humanoid robots. Proc. IEEE 92(11): 1839-1850 (2004) - [c18]Masayuki Kamashima, Takayuki Kanda, Michita Imai, Tetsuo Ono, Daisuke Sakamoto, Hiroshi Ishiguro, Yuichiro Anzai:
Embodied cooperative behaviors by an autonomous humanoid robot. IROS 2004: 2506-2513 - [c17]Hideyuki Kawashima, Michita Imai, Motomichi Toyama, Yuichiro Anzai:
Improving Freshness of Sensor Data on KRAFT Sensor Database System. Multimedia Information Systems 2004: 20-29 - [c16]Mariko Narumi, Michita Imai:
Human-Centric Approach for Human-Robot Interaction. PRICAI 2004: 993-994 - [c15]Akiyoshi Sahara, Michita Imai, Yuichiro Anzai:
CAHRA: collision avoidance system for humanoid robot arms with potential field. SMC (3) 2004: 2889-2895 - [c14]Kenshiro Hirose, Satoru Satake, Hideyuki Kawashima, Michita Imai, Yuichiro Anzai:
Development of communication contents description language. SMC (3) 2004: 2896-2900 - 2003
- [j5]Michita Imai, Kazuo Hiraki, Tsutomu Miyasato, Ryohei Nakatsu, Yuichiro Anzai:
Interaction With Robots: Physical Constraints on the Interpretation of Demonstrative Pronouns. Int. J. Hum. Comput. Interact. 16(2): 367-384 (2003) - [j4]Michita Imai, Tetsuo Ono, Hiroshi Ishiguro:
Physical relation and expression: joint attention for human-robot interaction. IEEE Trans. Ind. Electron. 50(4): 636-643 (2003) - [c13]Tetsuo Ono, Takayuki Kanda, Michita Imai, Hiroshi Ishiguro:
Embodied communications between humans and robots emerging from entrained gestures. CIRA 2003: 558-563 - [c12]Takayuki Kanda, Hiroshi Ishiguro, Michita Imai, Tetsuo Ono:
Body Movement Analysis of Human-Robot Interaction. IJCAI 2003: 177-182 - [c11]Seiji Miyama, Michita Imai, Yuichiro Anzai:
Rescue robot under disaster situation: position acquisition with Omni-directional Sensor. IROS 2003: 3132-3137 - 2002
- [j3]Michita Imai, Tetsuo Ono, Hiroshi Ishiguro:
Robovie: Communication technologies for a social robot. Artif. Life Robotics 6(1-2): 73-77 (2002) - [c10]Takayuki Kanda, Hiroshi Ishiguro, Tetsuo Ono, Michita Imai, Ryohei Nakatsu:
Development and Evaluation of an Interactive Humanoid Robot "Robovie". ICRA 2002: 1848-1855 - [c9]Takayuki Kanda, Hiroshi Ishiguro, Michita Imai, Tetsuo Ono, Kenji Mase:
A constructive approach for developing interactive humanoid robots. IROS 2002: 1265-1270 - [c8]Hideyuki Kawashima, Motomichi Toyama, Michita Imai, Yuichiro Anzai:
Providing Persistence for Sensor Stream with Temporal Consistency Conscious WAL. ISDB 2002: 13-18 - [c7]Hideyuki Kawashima, Motomichi Toyama, Yuichiro Anzai, Michita Imai:
Providing Persistence or Sensor Streams with Light Neighbor WAL. PRDC 2002: 257-264 - 2001
- [c6]Hiroshi Ishiguro, Tetsuo Ono, Michita Imai, Takayuki Kanda:
Development of an Interactive Humanoid Robot "Robovie" - An interdisciplinary approach. ISRR 2001: 179-191 - 2000
- [j2]Tetsuo Ono, Michita Imai, Ryohei Nakatsu:
Reading a robot's mind: a model of utterance understanding based on the theory of mind mechanism. Adv. Robotics 14(4): 311-326 (2000) - [c5]Tetsuo Ono, Michita Imai:
Reading a Robot's Mind: A Model of Utterance Understanding Based on the Theory of Mind Mechanism. AAAI/IAAI 2000: 142-148 - [c4]Takayuki Nakamura, M. Oohara, Akihiro Ebina, Michita Imai, Tsukasa Ogasawara, Hiroshi Ishiguro:
Real-Time Estimating Spatial Configuration between Multiple Robots by Triangle and Enumeration Constraints. RoboCup 2000: 219-228
1990 – 1999
- 1999
- [c3]Michita Imai, Kazuo Hiraki, Tsutomu Miyasato:
Physical Constraints on Human Robot Interaction. IJCAI 1999: 1124-1130
- 1998
- [c2]Michita Imai, Tsutomu Miyasato:
A Study of Emergent Computation of Life-like Behavior by Indefinite Observation. AMCP 1998: 370-385
- 1996
- [c1]Shohei Sugawara, Norihiko Matsuura, Yoichi Kato, Keiichi Sasaki, Michita Imai, Takashi Yamana, Yasuyuki Kiyosue, Kazunori Shimamura, Tomoaki Tanaka, Takashi Nishimura, Carol Leick, Tim Takenchi, Gen Suzuki:
InterSpace Project - CyberCampus (Video Program). CSCW 1996: 7
- 1995
- [j1]Michita Imai, Yuichiro Anzai, Kazuo Hiraki:
Human-robot interface with attention. Syst. Comput. Jpn. 26(12): 83-95 (1995)
last updated on 2024-10-07 21:14 CEST by the dblp team
all metadata released as open data under CC0 1.0 license