Tetsuya Ogata
Person information
- affiliation: Waseda University, Department of Intermedia Art and Science, Tokyo, Japan
- affiliation (2003 - 2012): Kyoto University, Graduate School of Informatics, Japan
- affiliation (2001 - 2003): RIKEN Brain Science Institute, Wako, Japan
- affiliation (PhD 2000): Waseda University, Tokyo, Japan
2020 – today
- 2024
- [j95] Kento Kawaharazuka, Tatsuya Matsushima, Shuhei Kurita, Chris Paxton, Andy Zeng, Tetsuya Ogata, Tadahiro Taniguchi: Special issue on real-world robot applications of the foundation models. Adv. Robotics 38(18): 1231 (2024)
- [j94] Shardul Kulkarni, Satoshi Funabashi, Alexander Schmitz, Tetsuya Ogata, Shigeki Sugano: Tactile Object Property Recognition Using Geometrical Graph Edge Features and Multi-Thread Graph Convolutional Network. IEEE Robotics Autom. Lett. 9(4): 3894-3901 (2024)
- [j93] Gangadhara Naga Sai Gubbala, Masato Nagashima, Hiroki Mori, Young Ah Seong, Hiroki Sato, Ryuma Niiyama, Yuki Suga, Tetsuya Ogata: Augmenting Compliance With Motion Generation Through Imitation Learning Using Drop-Stitch Reinforced Inflatable Robot Arm With Rigid Joints. IEEE Robotics Autom. Lett. 9(10): 8595-8602 (2024)
- [j92] Satoshi Funabashi, Gang Yan, Fei Hongyi, Alexander Schmitz, Lorenzo Jamone, Tetsuya Ogata, Shigeki Sugano: Tactile Transfer Learning and Object Recognition With a Multifingered Hand Using Morphology Specific Convolutional Neural Networks. IEEE Trans. Neural Networks Learn. Syst. 35(6): 7587-7601 (2024)
- [c243] Hideyuki Ichiwara, Hiroshi Ito, Kenjiro Yamamoto, Tetsuya Ogata: Retry-behavior Emergence for Robot-Motion Learning Without Teaching and Subtask Design. AIM 2024: 178-183
- [c242] Abdullah Mustafa, Ryo Hanai, Ixchel Georgina Ramirez-Alpizar, Floris Erich, Ryoichi Nakajo, Yukiyasu Domae, Tetsuya Ogata: Visual Imitation Learning of Non-Prehensile Manipulation Tasks with Dynamics-Supervised Models. CASE 2024: 3872-3879
- [c241] Naoki Shirakura, Natsuki Yamanobe, Tsubasa Maruyama, Yukiyasu Domae, Tetsuya Ogata: Work Tempo Instruction Framework for Balancing Human Workload and Productivity in Repetitive Task. HRI (Companion) 2024: 980-984
- [c240] Kazuki Hori, Kanata Suzuki, Tetsuya Ogata: Interactively Robot Action Planning with Uncertainty Analysis and Active Questioning by Large Language Model. SII 2024: 85-91
- [c239] Kenjiro Yamamoto, Hiroshi Ito, Hideyuki Ichiwara, Hiroki Mori, Tetsuya Ogata: Real-Time Motion Generation and Data Augmentation for Grasping Moving Objects with Dynamic Speed and Position Changes. SII 2024: 390-397
- [c238] Hiroto Iino, Kei Kase, Ryoichi Nakajo, Naoya Chiba, Hiroki Mori, Tetsuya Ogata: Generating Long-Horizon Task Actions by Leveraging Predictions of Environmental States. SII 2024: 478-483
- [c237] Suzuka Harada, Ryoichi Nakajo, Kei Kase, Tetsuya Ogata: Automatic Segmentation of Continuous Time-Series Data Based on Prediction Error Using Deep Predictive Learning. SII 2024: 928-933
- [i40] André Yuji Yasutomi, Hiroki Mori, Tetsuya Ogata: A Peg-in-hole Task Strategy for Holes in Concrete. CoRR abs/2403.19946 (2024)
- [i39] Kanata Suzuki, Tetsuya Ogata: Sensorimotor Attention and Language-based Regressions in Shared Latent Variables for Integrating Robot Motion Learning and LLM. CoRR abs/2407.09044 (2024)
- [i38] Tamon Miyake, Namiko Saito, Tetsuya Ogata, Yushi Wang, Shigeki Sugano: Dual-arm Motion Generation for Repositioning Care based on Deep Predictive Learning with Somatosensory Attention Mechanism. CoRR abs/2407.13376 (2024)
- [i37] Masaki Yoshikawa, Hiroshi Ito, Tetsuya Ogata: Achieving Faster and More Accurate Operation of Deep Predictive Learning. CoRR abs/2408.10231 (2024)
- 2023
- [j91] Tomoki Ando, Hiroto Iino, Hiroki Mori, Ryota Torishima, Kuniyuki Takahashi, Shoichiro Yamaguchi, Daisuke Okanohara, Tetsuya Ogata: Learning-based collision-free planning on arbitrary optimization criteria in the latent space through cGANs. Adv. Robotics 37(10): 621-633 (2023)
- [j90] André Yuji Yasutomi, Hideyuki Ichiwara, Hiroshi Ito, Hiroki Mori, Tetsuya Ogata: Visual Spatial Attention and Proprioceptive Data-Driven Reinforcement Learning for Robust Peg-in-Hole Task Under Variable Conditions. IEEE Robotics Autom. Lett. 8(3): 1834-1841 (2023)
- [j89] Takumi Hara, Takashi Sato, Tetsuya Ogata, Hiromitsu Awano: Uncertainty-Aware Haptic Shared Control With Humanoid Robots for Flexible Object Manipulation. IEEE Robotics Autom. Lett. 8(10): 6435-6442 (2023)
- [j88] Hideyuki Ichiwara, Hiroshi Ito, Kenjiro Yamamoto, Hiroki Mori, Tetsuya Ogata: Modality Attention for Prediction-Based Robot Motion Generation: Improving Interpretability and Robustness of Using Multi-Modality. IEEE Robotics Autom. Lett. 8(12): 8271-8278 (2023)
- [c236] André Yuji Yasutomi, Tetsuya Ogata: Automatic Action Space Curriculum Learning with Dynamic Per-Step Masking. CASE 2023: 1-7
- [c235] Kanata Suzuki, Yuya Kamiwano, Naoya Chiba, Hiroki Mori, Tetsuya Ogata: Multi-Timestep-Ahead Prediction with Mixture of Experts for Embodied Question Answering. ICANN (6) 2023: 243-255
- [c234] Ryutaro Suzuki, Hayato Idei, Yuichi Yamashita, Tetsuya Ogata: Hierarchical Variational Recurrent Neural Network Modeling of Sensory Attenuation with Temporal Delay in Action-Outcome. ICDL 2023: 244-249
- [c233] Hideyuki Ichiwara, Hiroshi Ito, Kenjiro Yamamoto, Hiroki Mori, Tetsuya Ogata: Multimodal Time Series Learning of Robots Based on Distributed and Integrated Modalities: Verification with a Simulator and Actual Robots. ICRA 2023: 9551-9557
- [c232] Namiko Saito, João Moura, Tetsuya Ogata, Marina Y. Aoyama, Shingo Murata, Shigeki Sugano, Sethu Vijayakumar: Structured Motion Generation with Predictive Learning: Proposing Subgoal for Long-Horizon Manipulation. ICRA 2023: 9566-9572
- [c231] Ryo Hanai, Yukiyasu Domae, Ixchel Georgina Ramirez-Alpizar, Bruno Leme, Tetsuya Ogata: Force Map: Learning to Predict Contact Force Distribution from Vision. IROS 2023: 3129-3136
- [i36] Ryo Hanai, Yukiyasu Domae, Ixchel Georgina Ramirez-Alpizar, Bruno Leme, Tetsuya Ogata: Force Map: Learning to Predict Contact Force Distribution from Vision. CoRR abs/2304.05803 (2023)
- [i35] Kanata Suzuki, Hiroshi Ito, Tatsuro Yamada, Kei Kase, Tetsuya Ogata: Deep Predictive Learning: Motion Learning Concept inspired by Cognitive Robotics. CoRR abs/2306.14714 (2023)
- [i34] Kazuki Hori, Kanata Suzuki, Tetsuya Ogata: Interactively Robot Action Planning with Uncertainty Analysis and Active Questioning by Large Language Model. CoRR abs/2308.15684 (2023)
- [i33] Kenjiro Yamamoto, Hiroshi Ito, Hideyuki Ichiwara, Hiroki Mori, Tetsuya Ogata: Real-time Motion Generation and Data Augmentation for Grasping Moving Objects with Dynamic Speed and Position Changes. CoRR abs/2309.12547 (2023)
- [i32] Namiko Saito, Mayu Hiramoto, Ayuna Kubo, Kanata Suzuki, Hiroshi Ito, Shigeki Sugano, Tetsuya Ogata: Realtime Motion Generation with Active Perception Using Attention Mechanism for Cooking Robot. CoRR abs/2309.14837 (2023)
- [i31] André Yuji Yasutomi, Hideyuki Ichiwara, Hiroshi Ito, Hiroki Mori, Tetsuya Ogata: Visual Spatial Attention and Proprioceptive Data-Driven Reinforcement Learning for Robust Peg-in-Hole Task Under Variable Conditions. CoRR abs/2312.16438 (2023)
- 2022
- [j87] Tadahiro Taniguchi, Takayuki Nagai, Shingo Shimoda, Angelo Cangelosi, Yiannis Demiris, Yutaka Matsuo, Kenji Doya, Tetsuya Ogata, Lorenzo Jamone, Yukie Nagai, Emre Ugur, Daichi Mochihashi, Yuuya Unno, Kazuo Okanoya, Takashi Hashimoto: Special issue on Symbol Emergence in Robotics and Cognitive Systems (I). Adv. Robotics 36(1-2): 1-2 (2022)
- [j86] Tadahiro Taniguchi, Takayuki Nagai, Shingo Shimoda, Angelo Cangelosi, Yiannis Demiris, Yutaka Matsuo, Kenji Doya, Tetsuya Ogata, Lorenzo Jamone, Yukie Nagai, Emre Ugur, Daichi Mochihashi, Yuuya Unno, Kazuo Okanoya, Takashi Hashimoto: Special issue on symbol emergence in robotics and cognitive systems (II). Adv. Robotics 36(5-6): 217-218 (2022)
- [j85] Namiko Saito, Takumi Shimizu, Tetsuya Ogata, Shigeki Sugano: Utilization of Image/Force/Tactile Sensor Data for Object-Shape-Oriented Manipulation: Wiping Objects With Turning Back Motions and Occlusion. IEEE Robotics Autom. Lett. 7(2): 968-975 (2022)
- [j84] Satoshi Funabashi, Tomoki Isobe, Fei Hongyi, Atsumu Hiramoto, Alexander Schmitz, Shigeki Sugano, Tetsuya Ogata: Multi-Fingered In-Hand Manipulation With Various Object Properties Using Graph Convolutional Networks and Distributed Tactile Sensors. IEEE Robotics Autom. Lett. 7(2): 2102-2109 (2022)
- [j83] Kei Kase, Ai Tateishi, Tetsuya Ogata: Robot Task Learning With Motor Babbling Using Pseudo Rehearsal. IEEE Robotics Autom. Lett. 7(3): 8377-8382 (2022)
- [j82] Hyogo Hiruma, Hiroshi Ito, Hiroki Mori, Tetsuya Ogata: Deep Active Visual Attention for Real-Time Robot Motion Generation: Emergence of Tool-Body Assimilation and Adaptive Tool-Use. IEEE Robotics Autom. Lett. 7(3): 8550-8557 (2022)
- [j81] Minori Toyoda, Kanata Suzuki, Yoshihiko Hayashi, Tetsuya Ogata: Learning Bidirectional Translation Between Descriptions and Actions With Small Paired Data. IEEE Robotics Autom. Lett. 7(4): 10930-10937 (2022)
- [j80] Hiroshi Ito, Kenjiro Yamamoto, Hiroki Mori, Tetsuya Ogata: Efficient multitask learning with an embodied predictive model for door opening and entry with whole-body control. Sci. Robotics 7(65) (2022)
- [c230] Naoki Shirakura, Ryuichi Takase, Natsuki Yamanobe, Yukiyasu Domae, Tetsuya Ogata: Time Pressure Based Human Workload and Productivity Compatible System for Human-Robot Collaboration. CASE 2022: 659-666
- [c229] Ryosuke Yamada, Hirokatsu Kataoka, Naoya Chiba, Yukiyasu Domae, Tetsuya Ogata: Point Cloud Pre-training with Natural 3D Structures. CVPR 2022: 21251-21261
- [c228] Hideyuki Ichiwara, Hiroshi Ito, Kenjiro Yamamoto, Hiroki Mori, Tetsuya Ogata: Contact-Rich Manipulation of a Flexible Object based on Deep Predictive Learning using Vision and Tactility. ICRA 2022: 5375-5381
- [c227] Hiroshi Ito, Hideyuki Ichiwara, Kenjiro Yamamoto, Hiroki Mori, Tetsuya Ogata: Integrated Learning of Robot Motion and Sentences: Real-Time Prediction of Grasping Motion and Attention based on Language Instructions. ICRA 2022: 5404-5410
- [c226] Hyogo Hiruma, Hiroki Mori, Hiroshi Ito, Tetsuya Ogata: Guided Visual Attention Model Based on Interactions Between Top-down and Bottom-up Prediction for Robot Pose Prediction. IECON 2022: 1-6
- [c225] Kei Kase, Chikara Utsumi, Yukiyasu Domae, Tetsuya Ogata: Use of Action Label in Deep Predictive Learning for Robot Manipulation. IROS 2022: 13459-13465
- [c224] Hiroshi Ito, Takumi Kurata, Tetsuya Ogata: Sensory-Motor Learning for Simultaneous Control of Motion and Force: Generating Rubbing Motion against Uneven Object. SII 2022: 408-415
- [c223] Pin-Chu Yang, Satoshi Funabashi, Mohammed Al-Sada, Tetsuya Ogata: Generating Humanoid Robot Motions based on a Procedural Animation IK Rig Method. SII 2022: 491-498
- [c222] Wakana Fujii, Kanata Suzuki, Tomoki Ando, Ai Tateishi, Hiroki Mori, Tetsuya Ogata: Buttoning Task with a Dual-Arm Robot: An Exploratory Study on a Marker-based Algorithmic Method and Marker-less Machine Learning Methods. SII 2022: 682-689
- [c221] André Yuji Yasutomi, Hiroki Mori, Tetsuya Ogata: Curriculum-based Offline Network Training for Improvement of Peg-in-hole Task Performance for Holes in Concrete. SII 2022: 712-717
- [i30] Tomoki Ando, Hiroki Mori, Ryota Torishima, Kuniyuki Takahashi, Shoichiro Yamaguchi, Daisuke Okanohara, Tetsuya Ogata: Collision-free Path Planning in the Latent Space through cGANs. CoRR abs/2202.07203 (2022)
- [i29] Hyogo Hiruma, Hiroki Mori, Tetsuya Ogata: Guided Visual Attention Model Based on Interactions Between Top-down and Bottom-up Information for Robot Pose Prediction. CoRR abs/2202.10036 (2022)
- [i28] Tomoki Ando, Hiroto Iino, Hiroki Mori, Ryota Torishima, Kuniyuki Takahashi, Shoichiro Yamaguchi, Daisuke Okanohara, Tetsuya Ogata: Collision-free Path Planning on Arbitrary Optimization Criteria in the Latent Space through cGANs. CoRR abs/2202.13062 (2022)
- [i27] Minori Toyoda, Kanata Suzuki, Yoshihiko Hayashi, Tetsuya Ogata: Learning Bidirectional Translation between Descriptions and Actions with Small Paired Data. CoRR abs/2203.04218 (2022)
- [i26] Satoshi Funabashi, Tomoki Isobe, Fei Hongyi, Atsumu Hiramoto, Alexander Schmitz, Shigeki Sugano, Tetsuya Ogata: Multi-Fingered In-Hand Manipulation with Various Object Properties Using Graph Convolutional Networks and Distributed Tactile Sensors. CoRR abs/2205.04169 (2022)
- [i25] Hyogo Hiruma, Hiroshi Ito, Hiroki Mori, Tetsuya Ogata: Deep Active Visual Attention for Real-time Robot Motion Generation: Emergence of Tool-body Assimilation and Adaptive Tool-use. CoRR abs/2206.14530 (2022)
- 2021
- [j79] Namiko Saito, Tetsuya Ogata, Hiroki Mori, Shingo Murata, Shigeki Sugano: Tool-Use Model to Reproduce the Goal Situations Considering Relationship Among Tools, Objects, Actions and Effects Using Multimodal Deep Neural Networks. Frontiers Robotics AI 8: 748716 (2021)
- [j78] Kei Kase, Noboru Matsumoto, Tetsuya Ogata: Leveraging Motor Babbling for Efficient Robot Learning. J. Robotics Mechatronics 33(5): 1063-1074 (2021)
- [j77] Hayato Idei, Shingo Murata, Yuichi Yamashita, Tetsuya Ogata: Paradoxical sensory reactivity induced by functional disconnection in a robot model of neurodevelopmental disorder. Neural Networks 138: 150-163 (2021)
- [j76] Namiko Saito, Tetsuya Ogata, Satoshi Funabashi, Hiroki Mori, Shigeki Sugano: How to Select and Use Tools?: Active Perception of Target Objects Using Multimodal Deep Learning. IEEE Robotics Autom. Lett. 6(2): 2517-2524 (2021)
- [j75] Kanata Suzuki, Hiroki Mori, Tetsuya Ogata: Compensation for Undefined Behaviors During Robot Task Execution by Switching Controllers Depending on Embedded Dynamics in RNN. IEEE Robotics Autom. Lett. 6(2): 3475-3482 (2021)
- [j74] Minori Toyoda, Kanata Suzuki, Hiroki Mori, Yoshihiko Hayashi, Tetsuya Ogata: Embodying Pre-Trained Word Embeddings Through Robot Actions. IEEE Robotics Autom. Lett. 6(2): 4225-4232 (2021)
- [j73] Momomi Kanamura, Kanata Suzuki, Yuki Suga, Tetsuya Ogata: Development of a Basic Educational Kit for Robotic System with Deep Neural Networks. Sensors 21(11): 3804 (2021)
- [c220] Mohammed Al-Sada, Pin-Chu Yang, Chang-Chieh Chiu, Tito Pradhono Tomo, MHD Yamen Saraiji, Tetsuya Ogata, Tatsuo Nakajima: From Anime To Reality: Embodying An Anime Character As A Humanoid Robot. CHI Extended Abstracts 2021: 176:1-176:5
- [c219] André Yuji Yasutomi, Hiroki Mori, Tetsuya Ogata: A Peg-in-hole Task Strategy for Holes in Concrete. ICRA 2021: 2205-2211
- [c218] Ryoichi Nakajo, Tetsuya Ogata: Comparison of Consolidation Methods for Predictive Learning of Time Series. IEA/AIE (1) 2021: 113-120
- [c217] Satoshi Ohara, Tetsuya Ogata, Hiromitsu Awano: Binary Neural Network in Robotic Manipulation: Flexible Object Manipulation for Humanoid Robot Using Partially Binarized Auto-Encoder on FPGA. IROS 2021: 6010-6015
- [c216] Kanata Suzuki, Momomi Kanamura, Yuki Suga, Hiroki Mori, Tetsuya Ogata: In-air Knotting of Rope using Dual-Arm Robot based on Deep Learning. IROS 2021: 6724-6731
- [i24] Kanata Suzuki, Tetsuya Ogata: Stable deep reinforcement learning method by predicting uncertainty in rewards as a subtask. CoRR abs/2101.06906 (2021)
- [i23] Hideyuki Ichiwara, Hiroshi Ito, Kenjiro Yamamoto, Hiroki Mori, Tetsuya Ogata: Spatial Attention Point Network for Deep-learning-based Robust Autonomous Robot Motion Generation. CoRR abs/2103.01598 (2021)
- [i22] Kanata Suzuki, Momomi Kanamura, Yuki Suga, Hiroki Mori, Tetsuya Ogata: In-air Knotting of Rope using Dual-Arm Robot based on Deep Learning. CoRR abs/2103.09402 (2021)
- [i21] Minori Toyoda, Kanata Suzuki, Hiroki Mori, Yoshihiko Hayashi, Tetsuya Ogata: Embodying Pre-Trained Word Embeddings Through Robot Actions. CoRR abs/2104.08521 (2021)
- [i20] Namiko Saito, Tetsuya Ogata, Satoshi Funabashi, Hiroki Mori, Shigeki Sugano: How to select and use tools?: Active Perception of Target Objects Using Multimodal Deep Learning. CoRR abs/2106.02445 (2021)
- [i19] Satoshi Ohara, Tetsuya Ogata, Hiromitsu Awano: Binary Neural Network in Robotic Manipulation: Flexible Object Manipulation for Humanoid Robot Using Partially Binarized Auto-Encoder on FPGA. CoRR abs/2107.00209 (2021)
- [i18] Hayato Idei, Wataru Ohata, Yuichi Yamashita, Tetsuya Ogata, Jun Tani: Sensory attenuation develops as a result of sensorimotor experience. CoRR abs/2111.02666 (2021)
- [i17] Hideyuki Ichiwara, Hiroshi Ito, Kenjiro Yamamoto, Hiroki Mori, Tetsuya Ogata: Contact-Rich Manipulation of a Flexible Object based on Deep Predictive Learning using Vision and Tactility. CoRR abs/2112.06442 (2021)
- 2020
- [j72] Hiroshi Ito, Kenjiro Yamamoto, Hiroki Mori, Tetsuya Ogata: Evaluation of Generalization Performance of Visuo-Motor Learning by Analyzing Internal State Structured from Robot Motion. New Gener. Comput. 38(1): 7-22 (2020)
- [c215] Hiroki Mori, Masayuki Masuda, Tetsuya Ogata: Tactile-based curiosity maximizes tactile-rich object-oriented actions even without any extrinsic rewards. ICDL-EPIROB 2020: 1-7
- [c214] Kanata Suzuki, Tetsuya Ogata: Stable Deep Reinforcement Learning Method by Predicting Uncertainty in Rewards as a Subtask. ICONIP (2) 2020: 651-662
- [c213] Kei Kase, Chris Paxton, Hammad Mazhar, Tetsuya Ogata, Dieter Fox: Transferable Task Execution from Pixels through Deep Planning Domain Learning. ICRA 2020: 10459-10465
- [c212] Satoshi Funabashi, Tomoki Isobe, Shun Ogasa, Tetsuya Ogata, Alexander Schmitz, Tito Pradhono Tomo, Shigeki Sugano: Stable In-Grasp Manipulation with a Low-Cost Robot Hand by Using 3-Axis Tactile Sensors with a CNN. IROS 2020: 9166-9173
- [c211] Satoshi Funabashi, Shun Ogasa, Tomoki Isobe, Tetsuya Ogata, Alexander Schmitz, Tito Pradhono Tomo, Shigeki Sugano: Variable In-Hand Manipulations for Tactile-Driven Robot Hand via CNN-LSTM. IROS 2020: 9472-9479
- [c210] Namiko Saito, Danyang Wang, Tetsuya Ogata, Hiroki Mori, Shigeki Sugano: Wiping 3D-objects using Deep Learning Model based on Image/Force/Joint Information. IROS 2020: 10152-10157
- [c209] Kelvin Lukman, Hiroki Mori, Tetsuya Ogata: Viewpoint Planning Based on Uncertainty Maps Created from the Generative Query Network. JSAI 2020: 37-48
- [c208] Pin-Chu Yang, Mohammed Al-Sada, Chang-Chieh Chiu, Kevin Kuo, Tito Pradhono Tomo, Kanata Suzuki, Nelson Yalta, Kuo-Hao Shu, Tetsuya Ogata: HATSUKI: An anime character like robot figure platform with anime-style expressions and imitation learning based action generation. RO-MAN 2020: 384-391
- [c207] Hiroshi Ito, Kenjiro Yamamoto, Hiroki Mori, Shuki Goto, Tetsuya Ogata: Visualization of Focal Cues for Visuomotor Coordination by Gradient-based Methods: A Recurrent Neural Network Shifts The Attention Depending on Task Requirements. SII 2020: 188-194
- [c206] Momomi Kanamura, Yuki Suga, Tetsuya Ogata: Development of a Basic Educational Kit for Robot Development Using Deep Neural Networks. SII 2020: 1360-1363
- [i16] Kei Kase, Chris Paxton, Hammad Mazhar, Tetsuya Ogata, Dieter Fox: Transferable Task Execution from Pixels through Deep Planning Domain Learning. CoRR abs/2003.03726 (2020)
- [i15] Kanata Suzuki, Hiroki Mori, Tetsuya Ogata: Undefined-behavior guarantee by switching to model-based controller according to the embedded dynamics in Recurrent Neural Network. CoRR abs/2003.04862 (2020)
- [i14] Pin-Chu Yang, Mohammed Al-Sada, Chang-Chieh Chiu, Kevin Kuo, Tito Pradhono Tomo, Kanata Suzuki, Nelson Yalta, Kuo-Hao Shu, Tetsuya Ogata: HATSUKI: An anime character like robot figure platform with anime-style expressions and imitation learning based action generation. CoRR abs/2003.14121 (2020)
2010 – 2019
- 2019
- [j71] Fady Ibrahim, A. A. Abouelsoud, Ahmed M. R. Fath El-Bab, Tetsuya Ogata: Path following algorithm for skid-steering mobile robot based on adaptive discontinuous posture control. Adv. Robotics 33(9): 439-453 (2019)
- [j70] Junpei Zhong, Martin Peniak, Jun Tani, Tetsuya Ogata, Angelo Cangelosi: Sensorimotor input as a language generalisation tool: a neurorobotics model for generation and generalisation of noun-verb combinations with sensorimotor inputs. Auton. Robots 43(5): 1271-1290 (2019)
- [j69] Junpei Zhong, Tetsuya Ogata, Angelo Cangelosi, Chenguang Yang: Disentanglement in conceptual space during sensorimotor interaction. Cogn. Comput. Syst. 1(4): 103-112 (2019)
- [j68] Tadahiro Taniguchi, Emre Ugur, Tetsuya Ogata, Takayuki Nagai, Yiannis Demiris: Editorial: Machine Learning Methods for High-Level Cognitive Capabilities in Robotics. Frontiers Neurorobotics 13: 83 (2019)
- [j67] Fady Ibrahim, A. A. Abouelsoud, Ahmed M. R. Fath El-Bab, Tetsuya Ogata: Discontinuous Stabilizing Control of Skid-Steering Mobile Robot (SSMR). J. Intell. Robotic Syst. 95(2): 253-266 (2019)
- [j66] Kazuma Sasaki, Tetsuya Ogata: Adaptive Drawing Behavior by Visuomotor Learning Using Recurrent Neural Networks. IEEE Trans. Cogn. Dev. Syst. 11(1): 119-128 (2019)
- [c205] Nelson Yalta, Shinji Watanabe, Takaaki Hori, Kazuhiro Nakadai, Tetsuya Ogata: CNN-based Multichannel End-to-End Speech Recognition for Everyday Home Environments. EUSIPCO 2019: 1-5
- [c204] Shingo Murata, Hiroki Sawa, Shigeki Sugano, Tetsuya Ogata: Looking Back and Ahead: Adaptation and Planning by Gradient Descent. ICDL-EPIROB 2019: 151-156
- [c203] Shingo Murata, Wataru Masuda, Jiayi Chen, Hiroaki Arie, Tetsuya Ogata, Shigeki Sugano: Achieving Human-Robot Collaboration with Dynamic Goal Inference by Gradient Descent. ICONIP (2) 2019: 579-590
- [c202] Satoshi Funabashi, Gang Yan, Andreas Geier, Alexander Schmitz, Tetsuya Ogata, Shigeki Sugano: Morphology-Specific Convolutional Neural Networks for Tactile Object Recognition with a Multi-Fingered Hand. ICRA 2019: 57-63
- [c201] Nelson Yalta, Shinji Watanabe, Kazuhiro Nakadai, Tetsuya Ogata: Weakly-Supervised Deep Recurrent Neural Networks for Basic Dance Step Generation. IJCNN 2019: 1-8
- [c200] Alexandre Antunes, Alban Laflaquière, Tetsuya Ogata, Angelo Cangelosi: A Bi-directional Multiple Timescales LSTM Model for Grounding of Actions and Verbs. IROS 2019: 2614-2621
- [c199] Kei Kase, Ryoichi Nakajo, Hiroki Mori, Tetsuya Ogata: Learning Multiple Sensorimotor Units to Complete Compound Tasks using an RNN with Multiple Attractors. IROS 2019: 4244-4249
- [c198] Namiko Saito, Nguyen Ba Dai, Tetsuya Ogata, Hiroki Mori, Shigeki Sugano: Real-time Liquid Pouring Motion Generation: End-to-End Sensorimotor Coordination for Unknown Liquid Dynamics Trained with Deep Neural Networks. ROBIO 2019: 1077-1082
- [c197] Shingo Murata, Hikaru Yanagida, Kentaro Katahira, Shinsuke Suzuki, Tetsuya Ogata, Yuichi Yamashita: Large-scale Data Collection for Goal-directed Drawing Task with Self-report Psychiatric Symptom Questionnaires via Crowdsourcing. SMC 2019: 3859-3865
- [i13] Andrey Barsky, Claudio Zito, Hiroki Mori, Tetsuya Ogata, Jeremy L. Wyatt: Multisensory Learning Framework for Robot Drumming. CoRR abs/1907.09775 (2019)
- [i12] Lorenzo Jamone, Tetsuya Ogata, Beata J. Grzyb: From natural to artificial embodied intelligence: is Deep Learning the solution (NII Shonan Meeting 137). NII Shonan Meet. Rep. 2019 (2019)
- 2018
- [j65] Chyon Hae Kim, Shohei Hama, Ryo Hirai, Kuniyuki Takahashi, Hiroki Yamada, Tetsuya Ogata, Shigeki Sugano: Effective input order of dynamics learning tree. Adv. Robotics 32(3): 122-136 (2018)
- [j64] Junpei Zhong, Angelo Cangelosi, Tetsuya Ogata, Xinzheng Zhang: Encoding Longer-Term Contextual Information with Predictive Coding and Ego-Motion. Complex. 2018: 7609587:1-7609587:15 (2018)
- [j63] Ryoichi Nakajo, Shingo Murata, Hiroaki Arie, Tetsuya Ogata: Acquisition of Viewpoint Transformation and Action Mappings via Sequence to Sequence Imitative Learning by Deep Neural Networks. Frontiers Neurorobotics 12: 46 (2018)
- [j62] Tatsuro Yamada, Hiroyuki Matsunaga, Tetsuya Ogata: Paired Recurrent Autoencoders for Bidirectional Translation Between Robot Actions and Linguistic Descriptions. IEEE Robotics Autom. Lett. 3(4): 3441-3448 (2018)
- [j61] Kanata Suzuki, Hiroki Mori, Tetsuya Ogata: Motion Switching With Sensory and Instruction Signals by Designing Dynamical Systems Using Deep Neural Network. IEEE Robotics Autom. Lett. 3(4): 3481-3488 (2018)
- [j60] Shingo Murata, Yuxi Li, Hiroaki Arie, Tetsuya Ogata, Shigeki Sugano: Learning to Achieve Different Levels of Adaptability for Human-Robot Collaboration Utilizing a Neuro-Dynamical System. IEEE Trans. Cogn. Dev. Syst. 10(3): 712-725 (2018)
- [c196] Namiko Saito, Kitae Kim, Shingo Murata, Tetsuya Ogata, Shigeki Sugano: Tool-Use Model Considering Tool Selection by a Robot Using Deep Learning. Humanoids 2018: 270-276
- [c195] Reda Elbasiony, Walid Gomaa, Tetsuya Ogata: Deep 3D Pose Dictionary: 3D Human Pose Estimation from Single RGB Image Using Deep Convolutional Neural Network. ICANN (3) 2018: 310-320
- [c194] Namiko Saito, Kitae Kim, Shingo Murata, Tetsuya Ogata, Shigeki Sugano: Detecting Features of Tools, Objects, and Actions from Effects in a Robot using Deep Learning. ICDL-EPIROB 2018: 1-6
- [c193] Yuheng Wu, Kuniyuki Takahashi, Hiroki Yamada, Kitae Kim, Shingo Murata, Shigeki Sugano, Tetsuya Ogata: Dynamic Motion Generation by Flexible-Joint Robot based on Deep Learning using Images. ICDL-EPIROB 2018: 169-174
- [c192] Kei Kase, Kanata Suzuki, Pin-Chu Yang, Hiroki Mori, Tetsuya Ogata: Put-in-Box Task Generated from Multiple Discrete Tasks by a Humanoid Robot Using Deep Learning. ICRA 2018: 6447-6452
- [c191] Kazuma Sasaki, Tetsuya Ogata: End-to-End Visuomotor Learning of Drawing Sequences using Recurrent Neural Networks. IJCNN 2018: 1-2
- [c190] Junpei Zhong, Angelo Cangelosi, Xinzheng Zhang, Tetsuya Ogata: AFA-PredNet: The Action Modulation Within Predictive Coding. IJCNN 2018: 1-8
- [c189] Junpei Zhong, Tetsuya Ogata, Angelo Cangelosi: Encoding Longer-term Contextual Sensorimotor Information in a Predictive Coding Model. SSCI 2018: 160-167
- [i11] Junpei Zhong, Angelo Cangelosi, Xinzheng Zhang, Tetsuya Ogata: AFA-PredNet: The action modulation within predictive coding. CoRR abs/1804.03826 (2018)
- [i10] Junpei Zhong, Tetsuya Ogata, Angelo Cangelosi: Encoding Longer-term Contextual Multi-modal Information in a Predictive Coding Model. CoRR abs/1804.06774 (2018)
- [i9] Nelson Yalta, Shinji Watanabe, Kazuhiro Nakadai, Tetsuya Ogata: Weakly Supervised Deep Recurrent Neural Networks for Basic Dance Step Generation. CoRR abs/1807.01126 (2018)
- [i8] Namiko Saito, Kitae Kim, Shingo Murata, Tetsuya Ogata, Shigeki Sugano: Detecting Features of Tools, Objects, and Actions from Effects in a Robot using Deep Learning. CoRR abs/1809.08613 (2018)
- [i7] Zhihao Li, Toshiyuki Motoyoshi, Kazuma Sasaki, Tetsuya Ogata, Shigeki Sugano: Rethinking Self-driving: Multi-task Knowledge for Better Generalization and Accident Explanation Ability. CoRR abs/1809.11100 (2018)
- [i6] Nelson Yalta, Shinji Watanabe, Takaaki Hori, Kazuhiro Nakadai, Tetsuya Ogata: CNN-based MultiChannel End-to-End Speech Recognition for everyday home environments. CoRR abs/1811.02735 (2018)
- 2017
- [j59] Kuniyuki Takahashi, Tetsuya Ogata, Jun Nakanishi, Gordon Cheng, Shigeki Sugano: Dynamic motion learning for multi-DOF flexible-joint robots using active-passive motor babbling through deep learning. Adv. Robotics 31(18): 1002-1015 (2017)
- [j58] Tatsuro Yamada, Shingo Murata, Hiroaki Arie, Tetsuya Ogata: Representation Learning of Logic Words by an RNN: From Word Sequences to Robot Actions. Frontiers Neurorobotics 11: 70 (2017)
- [j57] Nelson Yalta, Kazuhiro Nakadai, Tetsuya Ogata: Sound Source Localization Using Deep Learning Models. J. Robotics Mechatronics 29(1): 37-48 (2017)
- [j56] Pin-Chu Yang, Kazuma Sasaki, Kanata Suzuki, Kei Kase, Shigeki Sugano, Tetsuya Ogata: Repeatable Folding Task by Humanoid Robot Worker Using Deep Learning. IEEE Robotics Autom. Lett. 2(2): 397-403 (2017)
- [j55] Kuniyuki Takahashi, Kitae Kim, Tetsuya Ogata, Shigeki Sugano: Tool-body assimilation model considering grasping motion through deep learning. Robotics Auton. Syst. 91: 115-127 (2017)
- [j54] Shingo Murata, Yuichi Yamashita, Hiroaki Arie, Tetsuya Ogata, Shigeki Sugano, Jun Tani: Learning to Perceive the World as Probabilistic or Deterministic via Interaction With Others: A Neuro-Robotics Experiment. IEEE Trans. Neural Networks Learn. Syst. 28(4): 830-848 (2017)
- [c188] Tatsuro Yamada, Tetsuro Kitahara, Hiroaki Arie, Tetsuya Ogata: Four-Part Harmonization: Comparison of a Bayesian Network and a Recurrent Neural Network. CMMR 2017: 213-225
- [c187] Shingo Murata, Wataru Masuda, Saki Tomioka, Tetsuya Ogata, Shigeki Sugano: Mixing Actual and Predicted Sensory States Based on Uncertainty Estimation for Flexible and Robust Robot Behavior. ICANN (1) 2017: 11-18
- [c186] Tatsuro Yamada, Saki Ito, Hiroaki Arie, Tetsuya Ogata: Learning of Labeling Room Space for Mobile Robots Based on Visual Motor Experience. ICANN (1) 2017: 35-42
- [c185] Junpei Zhong, Tetsuya Ogata, Angelo Cangelosi, Chenguang Yang: Understanding natural language sentences with word embedding and multi-modal interaction. ICDL-EPIROB 2017: 184-189
- [c184] Hayato Idei, Shingo Murata, Yiwen Chen, Yuichi Yamashita, Jun Tani, Tetsuya Ogata: Reduced behavioral flexibility by aberrant sensory precision in autism spectrum disorder: A neurorobotics experiment. ICDL-EPIROB 2017: 271-276
- [c183] Junpei Zhong, Angelo Cangelosi, Tetsuya Ogata: Toward abstraction from multi-modal data: Empirical studies on multiple time-scale recurrent models. IJCNN 2017: 3625-3632
- [i5] Junpei Zhong, Angelo Cangelosi, Tetsuya Ogata: Toward Abstraction from Multi-modal Data: Empirical Studies on Multiple Time-scale Recurrent Models. CoRR abs/1702.05441 (2017)
- [i4] Francisco Jesús Arjonilla García, Tetsuya Ogata: General problem solving with category theory. CoRR abs/1709.04825 (2017)
- [i3] Kanata Suzuki, Hiroki Mori, Tetsuya Ogata: Online Motion Generation with Sensory Information and Instructions by Hierarchical RNN. CoRR abs/1712.05109 (2017)
- 2016
- [j53] Tadahiro Taniguchi, Takayuki Nagai, Tomoaki Nakamura, Naoto Iwahashi, Tetsuya Ogata, Hideki Asoh: Symbol emergence in robotics: a survey. Adv. Robotics 30(11-12): 706-728 (2016)
- [j52] Tatsuro Yamada, Shingo Murata, Hiroaki Arie, Tetsuya Ogata: Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human-Robot Interaction. Frontiers Neurorobotics 10: 5 (2016)
- [j51] Kazuma Sasaki, Kuniaki Noda, Tetsuya Ogata: Visual motor integration of robot's drawing behavior using recurrent neural network. Robotics Auton. Syst. 86: 184-195 (2016)
- [c182] Kuniyuki Takahashi, Hadi Tjandra, Tetsuya Ogata, Shigeki Sugano: Body Model Transition by Tool Grasping During Motor Babbling Using Deep Learning and RNN. ICANN (1) 2016: 166-174
- [c181] Kazuma Sasaki, Madoka Yamakawa, Kana Sekiguchi, Tetsuya Ogata: Classification of Photo and Sketch Images Using Convolutional Neural Networks. ICANN (2) 2016: 283-290
- [c180] Tatsuro Yamada, Shingo Murata, Hiroaki Arie, Tetsuya Ogata: Dynamical Linking of Positive and Negative Sentences to Goal-Oriented Robot Behavior by Hierarchical RNN. ICANN (1) 2016: 339-346
- [c179] Yiwen Chen, Shingo Murata, Hiroaki Arie, Tetsuya Ogata, Jun Tani, Shigeki Sugano: Emergence of interactive behaviors between two robots by prediction error minimization mechanism. ICDL-EPIROB 2016: 302-307
- [c178] Ryoichi Nakajo, Maasa Takahashi, Shingo Murata, Hiroaki Arie, Tetsuya Ogata: Self and Non-self Discrimination Mechanism Based on Predictive Learning with Estimation of Uncertainty. ICONIP (4) 2016: 228-235
- [c177] Tao Asato, Yuki Suga, Tetsuya Ogata: A reusability-based hierarchical fault-detection architecture for robot middleware and its implementation in an autonomous mobile robot system. SII 2016: 150-155
- [c176] Yumi Nishimura, Yuki Suga, Tetsuya Ogata: An effective visual programming tool for learning and using robotics middleware. SII 2016: 156-161
- [c175] Shingo Murata, Kai Hirano, Hiroaki Arie, Shigeki Sugano, Tetsuya Ogata: Analysis of imitative interactions between humans and a robot with a neuro-dynamical system. SII 2016: 343-348
- [i2] Junpei Zhong, Martin Peniak, Jun Tani, Tetsuya Ogata, Angelo Cangelosi: Sensorimotor Input as a Language Generalisation Tool: A Neurorobotics Model for Generation and Generalisation of Noun-Verb Combinations with Sensorimotor Inputs. CoRR abs/1605.03261 (2016)
- 2015
- [j50]Kuniaki Noda, Yuki Yamaguchi, Kazuhiro Nakadai, Hiroshi G. Okuno, Tetsuya Ogata:
Audio-visual speech recognition using deep learning. Appl. Intell. 42(4): 722-737 (2015) - [j49]Tetsuya Ogata:
Special Issue on Cutting Edge of Robotics in Japan 2015. Adv. Robotics 29(1): 1 (2015) - [j48]Shun Nishide, Harumitsu Nobuta, Hiroshi G. Okuno, Tetsuya Ogata:
Preferential training of neurodynamical model based on predictability of target dynamics. Adv. Robotics 29(9): 587-596 (2015) - [c174]Kuniaki Noda, Naoya Hashimoto, Kazuhiro Nakadai, Tetsuya Ogata:
Sound source separation for robot audition using deep learning. Humanoids 2015: 389-394 - [c173]Shingo Murata, Saki Tomioka, Ryoichi Nakajo, Tatsuro Yamada, Hiroaki Arie, Tetsuya Ogata, Shigeki Sugano:
Predictive learning with uncertainty estimation for modeling infants' cognitive development with caregivers: A neurorobotics experiment. ICDL-EPIROB 2015: 302-307 - [c172]Ryoichi Nakajo, Shingo Murata, Hiroaki Arie, Tetsuya Ogata:
Acquisition of viewpoint representation in imitative learning from own sensory-motor experiences. ICDL-EPIROB 2015: 326-331 - [c171]Kuniyuki Takahashi, Kanata Suzuki, Tetsuya Ogata, Hadi Tjandra, Shigeki Sugano:
Efficient Motor Babbling Using Variance Predictions from a Recurrent Neural Network. ICONIP (3) 2015: 26-33 - [c170]Kuniyuki Takahashi, Tetsuya Ogata, Hiroki Yamada, Hadi Tjandra, Shigeki Sugano:
Effective motion learning for a flexible-joint robot using motor babbling. IROS 2015: 2723-2728 - [c169]Kazuma Sasaki, Hadi Tjandra, Kuniaki Noda, Kuniyuki Takahashi, Tetsuya Ogata:
Neural network based model for visual-motor integration learning of robot's drawing behavior: Association of a drawing motion from a drawn image. IROS 2015: 2736-2741 - [c168]Tatsuro Yamada, Shingo Murata, Hiroaki Arie, Tetsuya Ogata:
Attractor representations of language-behavior structure in a recurrent neural network for human-robot interaction. IROS 2015: 4179-4184 - [i1]Tadahiro Taniguchi, Takayuki Nagai, Tomoaki Nakamura, Naoto Iwahashi, Tetsuya Ogata, Hideki Asoh:
Symbol Emergence in Robotics: A Survey. CoRR abs/1509.08973 (2015) - 2014
- [j47]Tsuyoshi Tasaki, Tetsuya Ogata, Hiroshi G. Okuno:
The interaction between a robot and multiple people based on spatially mapping of friendliness and motion parameters. Adv. Robotics 28(1): 39-51 (2014) - [j46]Shingo Murata, Hiroaki Arie, Tetsuya Ogata, Shigeki Sugano, Jun Tani:
Learning to generate proactive and reactive behavior using a dynamic neural network model with time-varying variance prediction mechanism. Adv. Robotics 28(17): 1189-1203 (2014) - [j45]Kuniaki Noda, Hiroaki Arie, Yuki Suga, Tetsuya Ogata:
Multimodal integration learning of robot behavior using deep neural networks. Robotics Auton. Syst. 62(6): 721-736 (2014) - [c167]Kuniyuki Takahashi, Tetsuya Ogata, Hadi Tjandra, Yuki Yamaguchi, Yuki Suga, Shigeki Sugano:
Tool-body assimilation model using a neuro-dynamical system for acquiring representation of tool function and motion. AIM 2014: 1255-1260 - [c166]Alexander Schmitz, Yusuke Bansho, Kuniaki Noda, Hiroyasu Iwata, Tetsuya Ogata, Shigeki Sugano:
Tactile object recognition using deep learning and dropout. Humanoids 2014: 1044-1050 - [c165]Shingo Murata, Hiroaki Arie, Tetsuya Ogata, Jun Tani, Shigeki Sugano:
Learning and Recognition of Multiple Fluctuating Temporal Patterns Using S-CTRNN. ICANN 2014: 9-16 - [c164]Kuniyuki Takahashi, Tetsuya Ogata, Hadi Tjandra, Shingo Murata, Hiroaki Arie, Shigeki Sugano:
Tool-Body Assimilation Model Based on Body Babbling and a Neuro-Dynamical System for Motion Generation. ICANN 2014: 363-370 - [c163]Shingo Murata, Yuichi Yamashita, Hiroaki Arie, Tetsuya Ogata, Jun Tani, Shigeki Sugano:
Generation of sensory reflex behavior versus intentional proactive behavior in robot learning of cooperative interactions with others. ICDL-EPIROB 2014: 242-248 - [c162]Shun Nishide, Keita Mochizuki, Hiroshi G. Okuno, Tetsuya Ogata:
Insertion of pause in drawing from babbling for robot's developmental imitation learning. ICRA 2014: 4785-4791 - [c161]Kuniaki Noda, Yuki Yamaguchi, Kazuhiro Nakadai, Hiroshi G. Okuno, Tetsuya Ogata:
Lipreading using convolutional neural network. INTERSPEECH 2014: 1149-1153 - [c160]Shun Nishide, Harumitsu Nobuta, Hiroshi G. Okuno, Tetsuya Ogata:
Applying intrinsic motivation for visuomotor learning of robot arm motion. URAI 2014: 364-367 - 2013
- [j44]Daichi Sakaue, Katsutoshi Itoyama, Tetsuya Ogata, Hiroshi G. Okuno:
Robust Multipitch Analyzer against Initialization based on Latent Harmonic Allocation using Overtone Corpus. Inf. Media Technol. 8(2): 467-476 (2013) - [j43]Daichi Sakaue, Katsutoshi Itoyama, Tetsuya Ogata, Hiroshi G. Okuno:
Robust Multipitch Analyzer against Initialization based on Latent Harmonic Allocation using Overtone Corpus. J. Inf. Process. 21(2): 246-255 (2013) - [c159]Kuniaki Noda, Hiroaki Arie, Yuki Suga, Tetsuya Ogata:
Multimodal integration learning of object manipulation behaviors using deep neural networks. IROS 2013: 1728-1733 - [c158]Tetsuya Ogata, Hiroshi G. Okuno:
Integration of behaviors and languages with a hierarchal structure self-organized in a neuro-dynamical model. RiiSS 2013: 89-95 - [c157]Yuki Yamaguchi, Kuniaki Noda, Shun Nishide, Hiroshi G. Okuno, Tetsuya Ogata:
Learning and association of synaesthesia phenomenon using deep neural networks. SII 2013: 659-664 - [c156]Kuniaki Noda, Hiroaki Arie, Yuki Suga, Tetsuya Ogata:
Intersensory Causality Modeling Using Deep Neural Networks. SMC 2013: 1995-2000 - [c155]Keita Mochizuki, Shun Nishide, Hiroshi G. Okuno, Tetsuya Ogata:
Developmental Human-Robot Imitation Learning of Drawing with a Neuro Dynamical System. SMC 2013: 2336-2341 - 2012
- [j42]Angelica Lim, Takeshi Mizumoto, Tetsuya Ogata, Hiroshi G. Okuno:
A Musical Robot that Synchronizes with a Coplayer Using Non-Verbal Cues. Adv. Robotics 26(3-4): 363-381 (2012) - [j41]Akira Maezawa, Katsutoshi Itoyama, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Automated Violin Fingering Transcription Through Analysis of an Audio Recording. Comput. Music. J. 36(3): 57-72 (2012) - [j40]Angelica Lim, Tetsuya Ogata, Hiroshi G. Okuno:
Towards expressive musical robots: a cross-modal framework for emotional gesture, voice and music. EURASIP J. Audio Speech Music. Process. 2012: 3 (2012) - [j39]Tatsuhiko Itohara, Takuma Otsuka, Takeshi Mizumoto, Angelica Lim, Tetsuya Ogata, Hiroshi G. Okuno:
A multimodal tempo and beat-tracking system based on audiovisual information from live guitar performances. EURASIP J. Audio Speech Music. Process. 2012: 6 (2012) - [j38]Kazunori Komatani, Mikio Nakano, Masaki Katsumaru, Kotaro Funakoshi, Tetsuya Ogata, Hiroshi G. Okuno:
Automatic Allocation of Training Data for Speech Understanding Based on Multiple Model Combinations. IEICE Trans. Inf. Syst. 95-D(9): 2298-2307 (2012) - [j37]Ryu Takeda, Kazuhiro Nakadai, Toru Takahashi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Efficient Blind Dereverberation and Echo Cancellation Based on Independent Component Analysis for Actual Acoustic Signals. Neural Comput. 24(1): 234-272 (2012) - [j36]Shun Nishide, Jun Tani, Toru Takahashi, Hiroshi G. Okuno, Tetsuya Ogata:
Tool-Body Assimilation of Humanoid Robot Using a Neurodynamical System. IEEE Trans. Auton. Ment. Dev. 4(2): 139-149 (2012) - [c154]Tatsuhiko Itohara, Kazuhiro Nakadai, Tetsuya Ogata, Hiroshi G. Okuno:
Improvement of audio-visual score following in robot ensemble with human guitarist. Humanoids 2012: 574-579 - [c153]Kohei Nagira, Toru Takahashi, Tetsuya Ogata, Hiroshi G. Okuno:
Complex Extension of Infinite Sparse Factor Analysis for Blind Speech Separation. LVA/ICA 2012: 388-396 - [c152]Yasuharu Hirasawa, Naoki Yasuraoka, Toru Takahashi, Tetsuya Ogata, Hiroshi G. Okuno:
A GMM Sound Source Model for Blind Speech Separation in Under-determined Conditions. LVA/ICA 2012: 446-453 - [c151]Daichi Sakaue, Katsutoshi Itoyama, Tetsuya Ogata, Hiroshi G. Okuno:
Initialization-robust multipitch estimation based on latent harmonic allocation using overtone corpus. ICASSP 2012: 425-428 - [c150]Kenri Kodaka, Tetsuya Ogata, Shigeki Sugano:
Rhythm-based adaptive localization in incomplete RFID landmark environments. ICRA 2012: 2108-2114 - [c149]Louis-Kenzo Cahier, Tetsuya Ogata, Hiroshi G. Okuno:
Incremental probabilistic geometry estimation for robot scene understanding. ICRA 2012: 3625-3630 - [c148]Katsutoshi Itoyama, Tetsuya Ogata, Hiroshi G. Okuno:
Automatic Chord Recognition Based on Probabilistic Integration of Acoustic Features, Bass Sounds, and Chord Transition. IEA/AIE 2012: 58-67 - [c147]Shun Nishide, Jun Tani, Hiroshi G. Okuno, Tetsuya Ogata:
Self-organization of object features representing motion using Multiple Timescales Recurrent Neural Network. IJCNN 2012: 1-8 - [c146]Harumitsu Nobuta, Kenta Kawamoto, Kuniaki Noda, Kohtaro Sabe, Shun Nishide, Hiroshi G. Okuno, Tetsuya Ogata:
Body area segmentation from visual scene based on predictability of neuro-dynamical system. IJCNN 2012: 1-8 - [c145]Takeshi Mizumoto, Tetsuya Ogata, Hiroshi G. Okuno:
Who is the leader in a multiperson ensemble? - Multiperson human-robot ensemble model with leaderness -. IROS 2012: 1413-1419 - [c144]Yusuke Yamamura, Toru Takahashi, Tetsuya Ogata, Hiroshi G. Okuno:
Sound sources selection system by using onomatopoeic querries from multiple sound sources. IROS 2012: 2364-2369 - [p1]Takeshi Mizumoto, Toru Takahashi, Tetsuya Ogata, Hiroshi G. Okuno:
Adaptive Pitch Control for Robot Thereminist Using Unscented Kalman Filter. Modern Advances in Intelligent Systems and Tools 2012: 19-24 - 2011
- [j35]Tetsuya Ogata, Tetsuo Sawaragi, Tadahiro Taniguchi:
Preface. Adv. Robotics 25(17): 2125-2126 (2011) - [j34]Yang Zhang, Tetsuya Ogata, Shun Nishide, Toru Takahashi, Hiroshi G. Okuno:
Classification of Known and Unknown Environmental Sounds Based on Self-Organized Space Using a Recurrent Neural Network. Adv. Robotics 25(17): 2127-2141 (2011) - [j33]Shun Nishide, Jun Tani, Hiroshi G. Okuno, Tetsuya Ogata:
Towards Written Text Recognition Based on Handwriting Experiences Using a Recurrent Neural Network. Adv. Robotics 25(17): 2173-2187 (2011) - [j32]Takuma Otsuka, Kazuhiro Nakadai, Toru Takahashi, Tetsuya Ogata, Hiroshi G. Okuno:
Real-Time Audio-to-Score Alignment Using Particle Filter for Coplayer Music Robots. EURASIP J. Adv. Signal Process. 2011 (2011) - [j31]Tsuyoshi Tasaki, Fumio Ozaki, Nobuto Matsuhira, Tetsuya Ogata, Hiroshi G. Okuno:
People Detection Based on Spatial Mapping of Friendliness and Floor Boundary Points for a Mobile Navigation Robot. J. Robotics 2011: 683975:1-683975:10 (2011) - [j30]Wataru Hinoshita, Hiroaki Arie, Jun Tani, Hiroshi G. Okuno, Tetsuya Ogata:
Emergence of hierarchical structure mirroring linguistic composition in a recurrent neural network. Neural Networks 24(4): 311-320 (2011) - [c143]Angelica Lim, Tetsuya Ogata, Hiroshi G. Okuno:
Converting emotional voice to motion for robot telepresence. Humanoids 2011: 472-479 - [c142]Yang Zhang, Shun Nishide, Toru Takahashi, Hiroshi G. Okuno, Tetsuya Ogata:
Cluster Self-organization of Known and Unknown Environmental Sounds Using Recurrent Neural Network. ICANN (1) 2011: 167-175 - [c141]Akira Maezawa, Hiroshi G. Okuno, Tetsuya Ogata, Masataka Goto:
Polyphonic audio-to-score alignment based on Bayesian Latent Harmonic Allocation Hidden Markov Model. ICASSP 2011: 185-188 - [c140]Katsutoshi Itoyama, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Simultaneous processing of sound source separation and musical instrument identification using Bayesian spectral modeling. ICASSP 2011: 3816-3819 - [c139]Hiromitsu Awano, Shun Nishide, Hiroaki Arie, Jun Tani, Toru Takahashi, Hiroshi G. Okuno, Tetsuya Ogata:
Use of a Sparse Structure to Improve Learning Performance of Recurrent Neural Networks. ICONIP (3) 2011: 323-331 - [c138]Nobuhide Yamakawa, Toru Takahashi, Tetsuro Kitahara, Tetsuya Ogata, Hiroshi G. Okuno:
Environmental Sound Recognition for Robot Audition Using Matching-Pursuit. IEA/AIE (2) 2011: 1-10 - [c137]Yasuharu Hirasawa, Toru Takahashi, Tetsuya Ogata, Hiroshi G. Okuno:
Robot with Two Ears Listens to More than Two Simultaneous Utterances by Exploiting Harmonic Structures. IEA/AIE (1) 2011: 348-358 - [c136]Yasuharu Hirasawa, Naoki Yasuraoka, Toru Takahashi, Tetsuya Ogata, Hiroshi G. Okuno:
Fast and Simple Iterative Algorithm of Lp-Norm Minimization for Under-Determined Speech Separation. INTERSPEECH 2011: 1745-1748 - [c135]Takuma Otsuka, Kazuhiro Nakadai, Tetsuya Ogata, Hiroshi G. Okuno:
Bayesian Extension of MUSIC for Sound Source Localization and Tracking. INTERSPEECH 2011: 3109-3112 - [c134]Tatsuhiko Itohara, Takuma Otsuka, Takeshi Mizumoto, Tetsuya Ogata, Hiroshi G. Okuno:
Particle-filter based audio-visual beat-tracking for music robot ensemble with human guitarist. IROS 2011: 118-124 - [c133]Ui-Hyun Kim, Takeshi Mizumoto, Tetsuya Ogata, Hiroshi G. Okuno:
Improvement of speaker localization by considering multipath interference of sound wave for binaural robot audition. IROS 2011: 2910-2915 - [c132]Takuma Otsuka, Kazuhiro Nakadai, Tetsuya Ogata, Hiroshi G. Okuno:
Incremental Bayesian Audio-to-Score Alignment with Flexible Harmonic Structure Models. ISMIR 2011: 525-530 - [c131]Kazunori Komatani, Kyoko Matsuyama, Ryu Takeda, Tetsuya Ogata, Hiroshi G. Okuno:
Evaluation of Spoken Dialogue System that uses Utterance Timing to Interpret User Utterances. IWSDS 2011: 315-325 - [c130]Naoki Nishikawa, Katsutoshi Itoyama, Hiromasa Fujihara, Masataka Goto, Tetsuya Ogata, Hiroshi G. Okuno:
A musical mood trajectory estimation method using lyrics and acoustic features. MIRUM 2011: 51-56 - [c129]Shun Nishide, Hiroshi G. Okuno, Tetsuya Ogata, Jun Tani:
Handwriting prediction based character recognition using recurrent neural network. SMC 2011: 2549-2554 - 2010
- [j29]Kazunori Komatani, Yuichiro Fukubayashi, Satoshi Ikeda, Tetsuya Ogata, Hiroshi G. Okuno:
Selecting Help Messages by Using Robust Grammar Verification for Handling Out-of-Grammar Utterances in Spoken Dialogue Systems. IEICE Trans. Inf. Syst. 93-D(12): 3359-3367 (2010) - [j28]Toru Takahashi, Kazuhiro Nakadai, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Soft missing-feature mask generation for robot audition. Paladyn J. Behav. Robotics 1(1): 37-47 (2010) - [j27]Takuma Otsuka, Kazuhiro Nakadai, Toru Takahashi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Voice-awareness control for a humanoid robot consistent with its body posture and movements. Paladyn J. Behav. Robotics 1(1): 80-88 (2010) - [j26]Tetsuya Ogata, Shun Nishide, Hideki Kozima, Kazunori Komatani, Hiroshi G. Okuno:
Inter-modality mapping in robot with recurrent neural network. Pattern Recognit. Lett. 31(12): 1560-1569 (2010) - [c128]Takuma Otsuka, Kazuhiro Nakadai, Toru Takahashi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Design and Implementation of Two-level Synchronization for Interactive Music Robot. AAAI 2010: 1238-1244 - [c127]Kazunori Komatani, Masaki Katsumaru, Mikio Nakano, Kotaro Funakoshi, Tetsuya Ogata, Hiroshi G. Okuno:
Automatic Allocation of Training Data for Rapid Prototyping of Speech Understanding based on Multiple Model Combination. COLING (Posters) 2010: 579-587 - [c126]Toru Takahashi, Kazuhiro Nakadai, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Improvement in listening capability for humanoid robot HRP-2. ICRA 2010: 470-475 - [c125]Ryu Takeda, Kazuhiro Nakadai, Toru Takahashi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Upper-limit evaluation of robot audition based on ICA-BSS in multi-source, barge-in and highly reverberant conditions. ICRA 2010: 4366-4371 - [c124]Wataru Hinoshita, Hiroaki Arie, Jun Tani, Tetsuya Ogata, Hiroshi G. Okuno:
Recognition and Generation of Sentences through Self-organizing Linguistic Hierarchy Using MTRNN. IEA/AIE (3) 2010: 42-51 - [c123]Takuma Otsuka, Takeshi Mizumoto, Kazuhiro Nakadai, Toru Takahashi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Music-Ensemble Robot That Is Capable of Playing the Theremin While Listening to the Accompanied Music. IEA/AIE (1) 2010: 102-112 - [c122]Akira Maezawa, Katsutoshi Itoyama, Toru Takahashi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Violin Fingering Estimation Based on Violin Pedagogical Fingering Model Constrained by Bowed Sequence Estimation from Audio Input. IEA/AIE (3) 2010: 249-259 - [c121]Kyoko Matsuyama, Kazunori Komatani, Toru Takahashi, Tetsuya Ogata, Hiroshi G. Okuno:
Improving Identification Accuracy by Extending Acceptable Utterances in Spoken Dialogue System Using Barge-in Timing. IEA/AIE (2) 2010: 585-594 - [c120]Nobuhide Yamakawa, Tetsuro Kitahara, Toru Takahashi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Effects of modelling within- and between-frame temporal variations in power spectra on non-verbal sound recognition. INTERSPEECH 2010: 2342-2345 - [c119]Kyoko Matsuyama, Kazunori Komatani, Ryu Takeda, Toru Takahashi, Tetsuya Ogata, Hiroshi G. Okuno:
Analyzing user utterances in barge-in-able spoken dialogue system for improving identification accuracy. INTERSPEECH 2010: 3050-3053 - [c118]Yasuharu Hirasawa, Toru Takahashi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Exploiting harmonic structures to improve separating simultaneous speech in under-determined conditions. IROS 2010: 450-457 - [c117]Toru Takahashi, Kazuhiro Nakadai, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
An improvement in automatic speech recognition using soft missing feature masks for robot audition. IROS 2010: 964-969 - [c116]Ryu Takeda, Kazuhiro Nakadai, Toru Takahashi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Speedup and performance improvement of ICA-based robot audition by parallel and resampling-based block-wise processing. IROS 2010: 1949-1956 - [c115]Takeshi Mizumoto, Takuma Otsuka, Kazuhiro Nakadai, Toru Takahashi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Human-robot ensemble between robot thereminist and human percussionist using coupled oscillator model. IROS 2010: 1957-1963 - [c114]Angelica Lim, Takeshi Mizumoto, Louis-Kenzo Cahier, Takuma Otsuka, Toru Takahashi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Robot musical accompaniment: integrating audio and visual cues for real-time synchronization with a human flutist. IROS 2010: 1964-1969 - [c113]Shun Nishide, Tetsuya Ogata, Jun Tani, Toru Takahashi, Kazunori Komatani, Hiroshi G. Okuno:
Motion generation based on reliable predictability using self-organized object features. IROS 2010: 3453-3458 - [c112]Hiromitsu Awano, Tetsuya Ogata, Shun Nishide, Toru Takahashi, Kazunori Komatani, Hiroshi G. Okuno:
Human-robot cooperation in arrangement of objects using confidence measure of neuro-dynamical system. SMC 2010: 2533-2538
2000 – 2009
- 2009
- [j25]Hyun-Don Kim, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Human Tracking System Integrating Sound and Face Localization Using an Expectation-Maximization Algorithm in Real Environments. Adv. Robotics 23(6): 629-653 (2009) - [j24]Shun Nishide, Tetsuya Ogata, Jun Tani, Kazunori Komatani, Hiroshi G. Okuno:
Self-organization of Dynamic Object Features Based on Bidirectional Training. Adv. Robotics 23(15): 2035-2057 (2009) - [j23]Hyun-Don Kim, Jinsung Kim, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Target Speech Detection and Separation for Communication with Humanoid Robots in Noisy Home Environments. Adv. Robotics 23(15): 2093-2111 (2009) - [j22]Katsutoshi Itoyama, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Parameter Estimation for Harmonic and Inharmonic Models by Using Timbre Feature Distributions. Inf. Media Technol. 4(3): 672-682 (2009) - [j21]Katsutoshi Itoyama, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Parameter Estimation for Harmonic and Inharmonic Models by Using Timbre Feature Distributions. J. Inf. Process. 17: 191-201 (2009) - [j20]Shun Nishide, Tetsuya Ogata, Jun Tani, Kazunori Komatani, Hiroshi G. Okuno:
Autonomous Motion Generation Based on Reliable Predictability. J. Robotics Mechatronics 21(4): 478-488 (2009) - [c111]Shun Shiramatsu, Tadachika Ozono, Toramatsu Shintani, Kazunori Komatani, Tetsuya Ogata, Toru Takahashi, Hiroshi G. Okuno:
Development of a Meeting Browser towards Supporting Public Involvement. CSE (4) 2009: 717-722 - [c110]Ryu Takeda, Kazuhiro Nakadai, Toru Takahashi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Automatic estimation of reverberation time with robot speech to improve ICA-based robot audition. Humanoids 2009: 250-255 - [c109]Takuma Otsuka, Kazuhiro Nakadai, Toru Takahashi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Voice quality manipulation for humanoid robots consistent with their head movements. Humanoids 2009: 405-410 - [c108]Ryu Takeda, Kazuhiro Nakadai, Toru Takahashi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
ICA-based efficient blind dereverberation and echo cancellation method for barge-in-able robot audition. ICASSP 2009: 3677-3680 - [c107]Tetsuya Ogata, Ryunosuke Yokoya, Jun Tani, Kazunori Komatani, Hiroshi G. Okuno:
Prediction and imitation of other's motions by reusing own forward-inverse model in robots. ICRA 2009: 4144-4149 - [c106]Hisashi Kanda, Tetsuya Ogata, Toru Takahashi, Kazunori Komatani, Hiroshi G. Okuno:
Continuous vocal imitation with self-organized vowel spaces in Recurrent Neural Network. ICRA 2009: 4438-4443 - [c105]Masaki Katsumaru, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Adjusting Occurrence Probabilities of Automatically-Generated Abbreviated Words in Spoken Dialogue Systems. IEA/AIE 2009: 481-490 - [c104]Kyoko Matsuyama, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Enabling a user to specify an item at any time during system enumeration - item identification for barge-in-able conversational dialogue systems. INTERSPEECH 2009: 252-255 - [c103]Masaki Katsumaru, Mikio Nakano, Kazunori Komatani, Kotaro Funakoshi, Tetsuya Ogata, Hiroshi G. Okuno:
Improving speech understanding accuracy with limited training data using multiple language models and multiple understanding models. INTERSPEECH 2009: 2735-2738 - [c102]Ryu Takeda, Kazuhiro Nakadai, Toru Takahashi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Step-size parameter adaptation of multi-channel semi-blind ICA with piecewise linear model for barge-in-able robot audition. IROS 2009: 2277-2282 - [c101]Takuma Otsuka, Toru Takahashi, Hiroshi G. Okuno, Kazunori Komatani, Tetsuya Ogata, Kazumasa Murata, Kazuhiro Nakadai:
Incremental polyphonic audio to score alignment using beat tracking for singer robots. IROS 2009: 2289-2296 - [c100]Takeshi Mizumoto, Hiroshi Tsujino, Toru Takahashi, Tetsuya Ogata, Hiroshi G. Okuno:
Thereminist robot: Development of a robot theremin player with feedforward and feedback arm control based on a Theremin's pitch model. IROS 2009: 2297-2302 - [c99]Toru Takahashi, Kazuhiro Nakadai, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Missing-feature-theory-based robust simultaneous speech recognition system with non-clean speech acoustic model. IROS 2009: 2730-2735 - [c98]Wataru Hinoshita, Tetsuya Ogata, Hideki Kozima, Hisashi Kanda, Toru Takahashi, Hiroshi G. Okuno:
Emergence of evolutionary interaction with voice and motion between two robots using RNN. IROS 2009: 4186-4192 - [c97]Shun Nishide, Tatsuhiro Nakagawa, Tetsuya Ogata, Jun Tani, Toru Takahashi, Hiroshi G. Okuno:
Modeling tool-body assimilation using second-order Recurrent Neural Network. IROS 2009: 5376-5381 - [c96]Hisashi Kanda, Tetsuya Ogata, Toru Takahashi, Kazunori Komatani, Hiroshi G. Okuno:
Phoneme acquisition model based on vowel imitation using Recurrent Neural Network. IROS 2009: 5388-5393 - [c95]Akira Maezawa, Katsutoshi Itoyama, Toru Takahashi, Tetsuya Ogata, Hiroshi G. Okuno:
Bowed String Sequence Estimation of a Violin Based on Adaptive Audio Signal Classification and Context-Dependent Error Correction. ISM 2009: 9-16 - [c94]Naoki Yasuraoka, Takehiro Abe, Katsutoshi Itoyama, Toru Takahashi, Tetsuya Ogata, Hiroshi G. Okuno:
Changing timbre and phrase in existing musical performances as you like: manipulations of single part using harmonic and inharmonic models. ACM Multimedia 2009: 203-212 - [c93]Masaki Katsumaru, Mikio Nakano, Kazunori Komatani, Kotaro Funakoshi, Tetsuya Ogata, Hiroshi G. Okuno:
A Speech Understanding Framework that Uses Multiple Language Models and Multiple Understanding Models. HLT-NAACL (Short Papers) 2009: 133-136 - [c92]Kazunori Komatani, Satoshi Ikeda, Yuichiro Fukubayashi, Tetsuya Ogata, Hiroshi G. Okuno:
Ranking Help Message Candidates Based on Robust Grammar Verification Results and Utterance History in Spoken Dialogue Systems. SIGDIAL Conference 2009: 314-321 - 2008
- [j19]Shun Nishide, Tetsuya Ogata, Jun Tani, Kazunori Komatani, Hiroshi G. Okuno:
Predicting Object Dynamics From Visual Images Through Active Sensing Experiences. Adv. Robotics 22(5): 527-546 (2008) - [j18]Jean-Julien Aucouturier, Katsushi Ikeuchi, Hirohisa Hirukawa, Shinichiro Nakaoka, Takaaki Shiratori, Shunsuke Kudoh, Fumio Kanehiro, Tetsuya Ogata, Hideki Kozima, Hiroshi G. Okuno, Marek P. Michalowski, Yuta Ogai, Takashi Ikegami, Kazuhiro Kosuge, Takahiro Takeda, Yasuhisa Hirata:
Cheek to Chip: Dancing Robots and AI's Future. IEEE Intell. Syst. 23(2): 74-84 (2008) - [j17]Yuki Suga, Tetsuya Ogata, Shigeki Sugano:
Human-Adaptive Robot Interaction Using Interactive EC with Human-Machine Hybrid Evaluation. J. Robotics Mechatronics 20(4): 610-620 (2008) - [j16]Chyon Hae Kim, Tetsuya Ogata, Shigeki Sugano:
Reinforcement Signal Propagation Algorithm for Logic Circuit. J. Robotics Mechatronics 20(5): 757-774 (2008) - [j15]Kazunori Komatani, Satoshi Ikeda, Tetsuya Ogata, Hiroshi G. Okuno:
Managing out-of-grammar utterances by topic estimation with domain extensibility in multi-domain spoken dialogue systems. Speech Commun. 50(10): 863-870 (2008) - [j14]Kazuyoshi Yoshii, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
An Efficient Hybrid Music Recommender System Using an Incrementally Trainable Probabilistic Generative Model. IEEE Trans. Speech Audio Process. 16(2): 435-447 (2008) - [j13]Shun Shiramatsu, Kazunori Komatani, Kôiti Hasida, Tetsuya Ogata, Hiroshi G. Okuno:
A game-theoretic model of referential coherence and its empirical verification using large Japanese and English corpora. ACM Trans. Speech Lang. Process. 5(3): 6:1-6:27 (2008) - [c91]Shun Nishide, Tetsuya Ogata, Ryunosuke Yokoya, Jun Tani, Kazunori Komatani, Hiroshi G. Okuno:
Object dynamics prediction and motion generation based on reliable predictability. ICRA 2008: 1608-1614 - [c90]Hyun-Don Kim, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Two-channel-based voice activity detection for humanoid robots in noisy home environments. ICRA 2008: 3495-3501 - [c89]Satoshi Ikeda, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Integrating Topic Estimation and Dialogue History for Domain Selection in Multi-domain Spoken Dialogue Systems. IEA/AIE 2008: 294-304 - [c88]Yuichiro Fukubayashi, Kazunori Komatani, Mikio Nakano, Kotaro Funakoshi, Hiroshi Tsujino, Tetsuya Ogata, Hiroshi G. Okuno:
Rapid Prototyping of Robust Language Understanding Modules for Spoken Dialogue Systems. IJCNLP 2008: 210-216 - [c87]Masaki Katsumaru, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Expanding vocabulary for recognizing user's abbreviations of proper nouns without increasing ASR error rates in spoken dialogue systems. INTERSPEECH 2008: 187-190 - [c86]Satoshi Ikeda, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Extensibility verification of robust domain selection against out-of-grammar utterances in multi-domain spoken dialogue system. INTERSPEECH 2008: 487-490 - [c85]Toru Takahashi, Shun'ichi Yamamoto, Kazuhiro Nakadai, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Soft missing-feature mask generation for simultaneous speech recognition system in robots. INTERSPEECH 2008: 992-995 - [c84]Shun Nishide, Tetsuya Ogata, Ryunosuke Yokoya, Jun Tani, Kazunori Komatani, Hiroshi G. Okuno:
Active sensing based dynamical object feature extraction. IROS 2008: 1-7 - [c83]Takeshi Mizumoto, Ryu Takeda, Kazuyoshi Yoshii, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
A robot listens to music and counts its beats aloud by separating music from counting voice. IROS 2008: 1538-1543 - [c82]Hyun-Don Kim, Jinsung Kim, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Target speech detection and separation for humanoid robots in sparse dialogue with noisy home environments. IROS 2008: 1705-1711 - [c81]Hisashi Kanda, Tetsuya Ogata, Kazunori Komatani, Hiroshi G. Okuno:
Segmenting acoustic signal with articulatory movement using Recurrent Neural Network for phoneme acquisition. IROS 2008: 1712-1717 - [c80]Ryu Takeda, Kazuhiro Nakadai, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Barge-in-able robot audition based on ICA and missing feature theory under semi-blind situation. IROS 2008: 1718-1723 - [c79]Hyun-Don Kim, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Design and evaluation of two-channel-based sound source localization over entire azimuth range for moving talkers. IROS 2008: 2197-2203 - [c78]Yuji Kubota, Masatoshi Yoshida, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Design and Implementation of 3D Auditory Scene Visualizer towards Auditory Awareness with Face Tracking. ISM 2008: 468-476 - [c77]Kouhei Sumi, Katsutoshi Itoyama, Kazuyoshi Yoshii, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Automatic Chord Recognition Based on Probabilistic Integration of Chord Transition and Bass Pitch Estimation. ISMIR 2008: 39-44 - [c76]Katsutoshi Itoyama, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Instrument Equalizer for Query-by-Example Retrieval: Improving Sound Source Separation Based on Integrated Harmonic and Inharmonic Models. ISMIR 2008: 133-138 - [c75]Yuji Kubota, Shun Shiramatsu, Masatoshi Yoshida, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
3D Auditory Scene Visualizer with Face Tracking: Design and Implementation for Auditory Awareness Compensation. ISUC 2008: 42-49 - [c74]Shun Shiramatsu, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
SalienceGraph: Visualizing Salience Dynamics of Written Discourse by Using Reference Probability and PLSA. PRICAI 2008: 890-902 - 2007
- [j12]Hiroaki Arie, Tetsuya Ogata, Jun Tani, Shigeki Sugano:
Reinforcement learning of a continuous motor sequence with hidden states. Adv. Robotics 21(10): 1215-1229 (2007) - [j11]Ryunosuke Yokoya, Tetsuya Ogata, Jun Tani, Kazunori Komatani, Hiroshi G. Okuno:
Experience-based imitation using RNNPB. Adv. Robotics 21(12): 1351-1367 (2007) - [j10]Tetsuro Kitahara, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Instrument Identification in Polyphonic Music: Feature Weighting to Minimize Influence of Sound Overlaps. EURASIP J. Adv. Signal Process. 2007 (2007) - [j9]Tetsuro Kitahara, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Instrogram: Probabilistic Representation of Instrument Existence for Polyphonic Music. Inf. Media Technol. 2(1): 279-291 (2007) - [j8]Kazuyoshi Yoshii, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Drumix: An Audio Player with Real-time Drum-part Rearrangement Functions for Active Music Listening. Inf. Media Technol. 2(2): 601-611 (2007) - [c73]Shun'ichi Yamamoto, Kazuhiro Nakadai, Mikio Nakano, Hiroshi Tsujino, Jean-Marc Valin, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Design and implementation of a robot audition system for automatic speech recognition of simultaneous speech. ASRU 2007: 111-116 - [c72]Katsutoshi Itoyama, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Integration and Adaptation of Harmonic and Inharmonic Models for Separating Polyphonic Musical Signals. ICASSP (1) 2007: 57-60 - [c71]Hisashi Kanda, Tetsuya Ogata, Kazunori Komatani, Hiroshi G. Okuno:
Vowel Imitation Using Vocal Tract Model and Recurrent Neural Network. ICONIP (2) 2007: 222-232 - [c70]Chyon Hae Kim, Tetsuya Ogata, Shigeki Sugano:
Enhancement of Self Organizing Network Elements for Supervised Learning. ICRA 2007: 92-98 - [c69]Haruhiko Niwa, Tetsuya Ogata, Kazunori Komatani, Hiroshi G. Okuno:
Distance Estimation of Hidden Objects Based on Acoustical Holography by applying Acoustic Diffraction of Audible Sound. ICRA 2007: 423-428 - [c68]Tetsuya Ogata, Shohei Matsumoto, Jun Tani, Kazunori Komatani, Hiroshi G. Okuno:
Human-Robot Cooperation using Quasi-symbols Generated by RNNPB Model. ICRA 2007: 2156-2161 - [c67]Shun Nishide, Tetsuya Ogata, Jun Tani, Kazunori Komatani, Hiroshi G. Okuno:
Predicting Object Dynamics from Visual Images through Active Sensing Experiences. ICRA 2007: 2501-2506 - [c66]Hyun-Don Kim, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Real-Time Auditory and Visual Talker Tracking Through Integrating EM Algorithm and Particle Filter. IEA/AIE 2007: 280-290 - [c65]Ryu Takeda, Shun'ichi Yamamoto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Evaluation of Two Simultaneous Continuous Speech Recognition with ICA BSS and MFT-Based ASR. IEA/AIE 2007: 384-394 - [c64]Satoshi Ikeda, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Topic estimation with domain extensibility for guiding user's out-of-grammar utterances in multi-domain spoken dialogue systems. INTERSPEECH 2007: 2561-2564 - [c63]Ryunosuke Yokoya, Tetsuya Ogata, Jun Tani, Kazunori Komatani, Hiroshi G. Okuno:
Discovery of other individuals by projecting a self-model through imitation. IROS 2007: 1009-1014 - [c62]Kazuyoshi Yoshii, Kazuhiro Nakadai, Toyotaka Torii, Yuji Hasegawa, Hiroshi Tsujino, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
A biped robot that keeps steps in time with musical beats while listening to music with its own ears. IROS 2007: 1743-1750 - [c61]Ryu Takeda, Kazuhiro Nakadai, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Exploiting known sound source signals to improve ICA-based robot audition in speech separation and recognition. IROS 2007: 1757-1762 - [c60]Hisashi Kanda, Tetsuya Ogata, Kazunori Komatani, Hiroshi G. Okuno:
Vocal imitation using physical vocal tract model. IROS 2007: 1846-1851 - [c59]Tetsuya Ogata, Masamitsu Murase, Jun Tani, Kazunori Komatani, Hiroshi G. Okuno:
Two-way translation of compound sentences and arm motions by recurrent neural networks. IROS 2007: 1858-1863 - [c58]Hyun-Don Kim, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Auditory and visual integration based localization and tracking of humans in daily-life environments. IROS 2007: 2021-2027 - [c57]Kazuyoshi Yoshii, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Improving Efficiency and Scalability of Model-Based Music Recommender System Based on Incremental Training. ISMIR 2007: 89-94 - [c56]Kôiti Hasida, Shun Shiramatsu, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Meaning Games. JSAI 2007: 228-241 - [c55]Hyun-Don Kim, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Auditory and Visual Integration based Localization and Tracking of Multiple Moving Sounds in Daily-life Environments. RO-MAN 2007: 399-404 - [c54]Kazunori Komatani, Yuichiro Fukubayashi, Tetsuya Ogata, Hiroshi G. Okuno:
Introducing Utterance Verification in Spoken Dialogue System to Improve Dynamic Help Generation for Novice Users. SIGdial 2007: 202-205 - 2006
- [j7]Mototaka Suzuki, Kuniaki Noda, Yuki Suga, Tetsuya Ogata, Shigeki Sugano:
Dynamic perception after visually guided grasping by a human-like autonomous robot. Adv. Robotics 20(2): 233-254 (2006) - [j6]Tsuyoshi Tasaki, Shohei Matsumoto, Hayato Ohba, Shun'ichi Yamamoto, Mitsuhiko Toda, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Dynamic Communication of Humanoid Robot with Multiple People Based on Interaction Distance. Inf. Media Technol. 1(1): 285-295 (2006) - [j5]Tetsuya Ogata, Shigeki Sugano, Jun Tani:
Acquisition of Motion Primitives of Robot in Human-Navigation Task. Inf. Media Technol. 1(1): 305-313 (2006) - [c53]Tetsuro Kitahara, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Instrogram: A New Musical Instrument Recognition Technique Without Using Onset Detection NOR F0 Estimation. ICASSP (5) 2006: 229-232 - [c52]Kazuyoshi Yoshii, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
An Error Correction Framework Based on Drum Pattern Periodicity for Improving Drum Sound Detection. ICASSP (5) 2006: 237-240 - [c51]Hiromasa Fujihara, Tetsuro Kitahara, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
F0 Estimation Method for Singing Voice in Polyphonic Audio Signal Based on Statistical Vocal Model and Viterbi Search. ICASSP (5) 2006: 253-256 - [c50]Hiroaki Arie, Jun Namikawa, Tetsuya Ogata, Jun Tani, Shigeki Sugano:
Reinforcement Learning Algorithm with CTRNN in Continuous Action Space. ICONIP (1) 2006: 387-396 - [c49]Shun'ichi Yamamoto, Kazuhiro Nakadai, Mikio Nakano, Hiroshi Tsujino, Jean-Marc Valin, Ryu Takeda, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Genetic Algorithm-Based Improvement of Robot Hearing Capabilities in Separating and Recognizing Simultaneous Speech Signals. IEA/AIE 2006: 207-217 - [c48]Hiromasa Fujihara, Tetsuro Kitahara, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Speaker identification under noisy environments by using harmonic structure extraction and reliable frame weighting. INTERSPEECH 2006 - [c47]Yuichiro Fukubayashi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Dynamic help generation by estimating user's mental model in spoken dialogue systems. INTERSPEECH 2006 - [c46]Ryu Takeda, Shun'ichi Yamamoto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Improving speech recognition of two simultaneous speech signals by integrating ICA BSS and automatic missing feature mask generation. INTERSPEECH 2006 - [c45]Shun'ichi Yamamoto, Ryu Takeda, Kazuhiro Nakadai, Mikio Nakano, Hiroshi Tsujino, Jean-Marc Valin, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Leak energy based missing feature mask generation for ICA and GSS and its evaluation with simultaneous speech recognition. SAPA@INTERSPEECH 2006: 42-47 - [c44]Ryu Takeda, Shun'ichi Yamamoto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Missing-Feature based Speech Recognition for Two Simultaneous Speech Signals Separated by ICA with a pair of Humanoid Ears. IROS 2006: 878-885 - [c43]Haruhiko Niwa, Tetsuya Ogata, Kazunori Komatani, Hiroshi G. Okuno:
Multiple Acoustical Holography Method for Localization of Objects in Broad Range using Audible Sound. IROS 2006: 1145-1150 - [c42]Chyon Hae Kim, Shigeki Sugano, Tetsuya Ogata:
Efficient Organization of Network Topology based on Reinforcement Signals. IROS 2006: 3154-3159 - [c41]Yuki Suga, Chihiro Endo, Daizo Kobayashi, Takeshi Matsumoto, Shigeki Sugano, Tetsuya Ogata:
Adaptive Human-Robot Interaction System using Interactive EC. IROS 2006: 3663-3668 - [c40]Ryunosuke Yokoya, Tetsuya Ogata, Jun Tani, Kazunori Komatani, Hiroshi G. Okuno:
Experience Based Imitation Using RNNPB. IROS 2006: 3669-3674 - [c39]Shun'ichi Yamamoto, Kazuhiro Nakadai, Mikio Nakano, Hiroshi Tsujino, Jean-Marc Valin, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Real-Time Robot Audition System That Recognizes Simultaneous Speech in The Real World. IROS 2006: 5333-5338 - [c38]Hiromasa Fujihara, Masataka Goto, Jun Ogata, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Automatic Synchronization between Lyrics and Music CD Recordings Based on Viterbi Alignment of Segregated Vocal Signals. ISM 2006: 257-264 - [c37]Tetsuro Kitahara, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Musical Instrument Recognizer "Instrogram" and Its Application to Music Retrieval Based on Instrumentation Similarity. ISM 2006: 265-274 - [c36]Katsutoshi Itoyama, Tetsuro Kitahara, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Automatic Feature Weighting in Automatic Transcription of Specified Part in Polyphonic Music. ISMIR 2006: 172-175 - [c35]Kazuyoshi Yoshii, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Hybrid Collaborative and Content-based Music Recommendation Using Probabilistic Model with Latent User Preferences. ISMIR 2006: 296-301 - [c34]Shun'ichi Yamamoto, Ryu Takeda, Kazuhiro Nakadai, Mikio Nakano, Hiroshi Tsujino, Jean-Marc Valin, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Recognition of Simultaneous Speech by Estimating Reliability of Separated Signals for Robot Audition. PRICAI 2006: 484-494 - [c33]Kazunori Komatani, Naoyuki Kanda, Mikio Nakano, Kazuhiro Nakadai, Hiroshi Tsujino, Tetsuya Ogata, Hiroshi G. Okuno:
Multi-Domain Spoken Dialogue System with Extensibility and Robustness against Speech Recognition Errors. SIGDIAL Workshop 2006: 9-17 - 2005
- [j4]Tetsuya Ogata, Shigeki Sugano, Jun Tani:
Open-end human-robot interaction from the dynamical systems perspective: mutual adaptation and incremental learning. Adv. Robotics 19(6): 651-670 (2005) - [j3]Tetsuya Ogata, Hayato Ohba, Jun Tani, Kazunori Komatani, Hiroshi G. Okuno:
Extracting Multimodal Dynamics of Objects Using RNNPB. J. Robotics Mechatronics 17(6): 681-688 (2005) - [c32]Shun'ichi Yamamoto, Jean-Marc Valin, Kazuhiro Nakadai, Jean Rouat, François Michaud, Tetsuya Ogata, Hiroshi G. Okuno:
Enhanced Robot Speech Recognition Based on Microphone Array Source Separation and Missing Feature Theory. ICRA 2005: 1477-1482 - [c31]Tsuyoshi Tasaki, Shohei Matsumoto, Hayato Ohba, Mitsuhiko Toda, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Distance-Based Dynamic Interaction of Humanoid Robot with Multiple People. IEA/AIE 2005: 111-120 - [c30]Masamitsu Murase, Shun'ichi Yamamoto, Jean-Marc Valin, Kazuhiro Nakadai, Kentaro Yamada, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Multiple moving speaker tracking by microphone array on mobile robot. INTERSPEECH 2005: 249-252 - [c29]Kazunori Komatani, Naoyuki Kanda, Tetsuya Ogata, Hiroshi G. Okuno:
Contextual constraints based on dialogue models in database search task for spoken dialogue systems. INTERSPEECH 2005: 877-880 - [c28]Tetsuya Ogata, Hayato Ohba, Jun Tani, Kazunori Komatani, Hiroshi G. Okuno:
Extracting multi-modal dynamics of objects using RNNPB. IROS 2005: 966-971 - [c27]Tsuyoshi Tasaki, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Spatially mapping of friendliness for human-robot interaction. IROS 2005: 1277-1282 - [c26]Yuki Suga, Yoshinori Ikuma, Daisuke Nagao, Shigeki Sugano, Tetsuya Ogata:
Interactive evolution of human-robot communication in real world. IROS 2005: 1438-1443 - [c25]Shun'ichi Yamamoto, Kazuhiro Nakadai, Jean-Marc Valin, Jean Rouat, François Michaud, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Making a robot recognize three simultaneous sentences in real-time. IROS 2005: 4040-4045 - [c24]Hiromasa Fujihara, Tetsuro Kitahara, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Singer Identification Based on Accompaniment Sound Reduction and Reliable Frame Selection. ISMIR 2005: 329-336 - [c23]Tetsuro Kitahara, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Instrument Identification in Polyphonic Music: Feature Weighting with Mixed Sounds, Pitch-Dependent Timbre Modeling, and Use of Musical Context. ISMIR 2005: 558-563 - [c22]Kenri Kodaka, Tetsuya Ogata, Hiroshi G. Okuno:
Walking with body-sense in virtual space using the nonlinear oscillator. SMC 2005: 324-329 - 2004
- [c21]Yuki Suga, Hiroaki Arie, Tetsuya Ogata, Shigeki Sugano:
Constructivist approach to human-robot emotional communication - design of evolutionary function for WAMOEBA-3. Humanoids 2004: 869-884 - [c20]Tetsuya Ogata, Shigeki Sugano, Jun Tani:
Open-End Human Robot Interaction from the Dynamical Systems Perspective: Mutual Adaptation and Incremental Learning. IEA/AIE 2004: 435-444 - [c19]Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno, Tsuyoshi Tasaki, Takeshi Yamaguchi:
Robot motion control using listener's back-channels and head gesture information. INTERSPEECH 2004: 1033-1036 - [c18]Kazushi Ishihara, Yuya Hattori, Tomohiro Nakatani, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Disambiguation in determining phonemes of sound-imitation words for environmental sound recognition. INTERSPEECH 2004: 1485-1488 - [c17]Yuki Suga, Tetsuya Ogata, Shigeki Sugano:
Acquisition of reactive motion for communication robots using interactive EC. IROS 2004: 1198-1203 - [c16]Yoshihiro Sakamoto, Tetsuya Ogata, Shigeki Sugano:
Human-robot communication using multiple recurrent neural networks. IROS 2004: 1574-1579 - [c15]Tetsuya Ogata, Masaki Matsunaga, Shigeki Sugano, Jun Tani:
Human-robot collaboration using behavioral primitives. IROS 2004: 1592-1597 - [c14]Takuya Yoshioka, Tetsuro Kitahara, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno:
Automatic Chord Transcription with Concurrent Recognition of Chord Symbols and Boundaries. ISMIR 2004 - [c13]Kazushi Ishihara, Tomohiro Nakatani, Tetsuya Ogata, Hiroshi G. Okuno:
Automatic Sound-Imitation Word Recognition from Environmental Sounds Focusing on Ambiguity Problem in Determining Phonemes. PRICAI 2004: 909-918 - 2003
- [c12]Kuniaki Noda, Mototaka Suzuki, Naofumi Tsuchiya, Yuki Suga, Tetsuya Ogata, Shigeki Sugano:
Robust modeling of dynamic environment based on robot embodiment. ICRA 2003: 3565-3570 - [c11]Tetsuya Ogata, Noritaka Masago, Shigeki Sugano, Jun Tani:
Interactive learning in human-robot collaboration. IROS 2003: 162-167 - 2002
- [c10]Sadao Kawamura, T. Yamamoto, D. Ishida, Tetsuya Ogata, Y. Nakayama, Osamu Tabata, Susumu Sugiyama:
Development of Passive Elements with Variable Mechanical Impedance for Wearable Robots. ICRA 2002: 248-253 - 2001
- [c9]Tetsuya Ogata, Takaaki Komiya, Shigeki Sugano:
Motion generation of the autonomous robot based on body structure. IROS 2001: 2338-2343 - 2000
- [j2]Tetsuya Ogata, Yoshihiro Matsuyama, Shigeki Sugano:
Acquisition of internal representation in robots - toward human-robot communication using primitive language. Adv. Robotics 14(4): 277-291 (2000) - [j1]Yasuhisa Hayakawa, Ikuo Kitagishi, Yusuke Kira, Kensuke Satake, Tetsuya Ogata, Shigeki Sugano:
Assembly Support Based on Human Model -Provision of Physical Support According to Implicit Desire for Support-. J. Robotics Mechatronics 12(2): 118-125 (2000) - [c8]Yasuhisa Hayakawa, Tetsuya Ogata, Shigeki Sugano:
A Robotic Co-Operation System Based on a Self-Organization Approached Human Work Model. ICRA 2000: 4057-4062 - [c7]Tetsuya Ogata, Yoshihiro Matsuyama, Takaaki Komiya, Masataka Ida, Kuniaki Noda, Shigeki Sugano:
Development of emotional communication robot: WAMOEBA-2R-experimental evaluation of the emotional communication between robots and humans. IROS 2000: 175-180 - [c6]Tetsuya Ogata, Akitoshi Shimura, Koji Shibuya, Shigeki Sugano:
A violin playing algorithm considering the change of phrase impression. SMC 2000: 1342-1347
1990 – 1999
- 1999
- [c5]Tetsuya Ogata, Shigeki Sugano:
Emotional Communication Between Humans and the Autonomous Robot Which Has the Emotion Model. ICRA 1999: 3177-3182 - [c4]Tetsuya Ogata, Shigeki Sugano:
Emotional communication between humans and robots - consideration of primitive language in robots. IROS 1999: 870-875 - 1998
- [c3]Tetsuya Ogata, Shigeki Sugano:
Communication between behavior-based robots with emotion model and humans. SMC 1998: 1095-1100 - 1997
- [c2]Tetsuya Ogata, Kazuki Hayashi, Ikuo Kitagishi, Shigeki Sugano:
Generation of behavior automaton on neural network. IROS 1997: 608-613 - 1996
- [c1]Shigeki Sugano, Tetsuya Ogata:
Emergence of mind in robots for human interface - research methodology and robot model. ICRA 1996: 1191-1198
last updated on 2024-11-08 21:30 CET by the dblp team
all metadata released as open data under CC0 1.0 license