Tomio Watanabe
2020 – today
- 2024
- [c104] Yutaka Ishii, Masaki Matsuno, Tomio Watanabe: Agent "Nah": Development of a Voice-Driven Embodied Entrainment Character with Non-agreeable Responses. HCI (6) 2024: 3-11
- [c103] Yoshihiro Sejima, Shota Hashimoto, Tomio Watanabe: A Speech-Driven Embodied Listening System with Mirroring of Pupil Response. HCI (6) 2024: 42-50
- [c102] Teruaki Ito, Tomio Watanabe: ARM-COMS Motor Display System for Active Listening in Remote Communication. HCI (8) 2024: 309-318
- 2023
- [c101] Yutaka Ishii, Kenta Koike, Miwako Kitamura, Tomio Watanabe: Development of a Speech-Driven Communication Support System Using a Smartwatch with Vibratory Nodding Responses. HCI (5) 2023: 370-378
- [c100] Teruaki Ito, Tomio Watanabe: Coordinated Motor Display System of ARM-COMS for Evoking Emotional Projection in Remote Communication. HCI (5) 2023: 379-388
- [c99] Liheng Yang, Yoshihiro Sejima, Tomio Watanabe: Effects of Gaze on Human Behavior Prediction of Virtual Character for Intention Inference Design. HCI (5) 2023: 445-454
- 2022
- [c98] Yutaka Ishii, Satoshi Kurokawa, Miwako Kitamura, Tomio Watanabe: Development of a Web-Based Interview Support System Using Characters Nodding with Various Movements. HCI (44) 2022: 76-87
- [c97] Liheng Yang, Yoshihiro Sejima, Tomio Watanabe: Effects of Virtual Character's Eye Movements in Reach Motion on Target Prediction. HCI (5) 2022: 162-171
- [c96] Teruaki Ito, Tomio Watanabe: Natural Involvement to Video Conference Through ARM-COMS. HCI (5) 2022: 238-246
- 2021
- [c95] Tomofumi Sakata, Keiichi Watanuki, Kazunori Kaede, Tomio Watanabe: Evaluation of Entrainment of Heart Rate and Brain Activation Depending on the Listener's Nodding Response and the Conversation Situation. AHFE (2) 2021: 788-795
- [c94] Yutaka Ishii, Satoshi Kurokawa, Tomio Watanabe: Avatar Twin Using Shadow Avatar in Avatar-Mediated Communication. HCI (4) 2021: 297-305
- [c93] Teruaki Ito, Takashi Oyama, Tomio Watanabe: Smart Speaker Interaction Through ARM-COMS for Health Monitoring Platform. HCI (5) 2021: 396-405
- [c92] Yoshihiro Sejima, Yoichiro Sato, Tomio Watanabe: Development of a Presentation Support System Using Group Pupil Response Interfaces. HCI (5) 2021: 429-438
- 2020
- [c91] Teruaki Ito, Takashi Oyama, Tomio Watanabe: Speech Recognition Approach for Motion-Enhanced Display in ARM-COMS System. HCI (42) 2020: 135-144
- [c90] Yoshihiro Sejima, Makiko Nishida, Tomio Watanabe: Development of an Interface that Expresses Twinkling Eyes by Superimposing Human Shadows on Pupils. HCI (42) 2020: 271-279
2010 – 2019
- 2019
- [c89] Yoshihiro Sejima, Yoichiro Sato, Tomio Watanabe: Development of a Pupil Response System with Empathy Expression in Face-to-Face Body Contact. AHFE (1) 2019: 95-102
- [c88] Masakatsu Kubota, Tomio Watanabe, Yutaka Ishii: A Speech Promotion System by Using Embodied Entrainment Objects of Spoken Words and a Listener Character for Joint Attention. HAI 2019: 311-312
- [c87] Yutaka Ishii, Tomio Watanabe: Development of an Embodied Group Entrainment Response System to Express Interaction-Activated Communication. HCI (3) 2019: 203-211
- [c86] Teruaki Ito, Hiroki Kimachi, Tomio Watanabe: Combination of Local Interaction with Remote Interaction in ARM-COMS Communication. HCI (5) 2019: 347-356
- 2018
- [j11] Irini Giannopulu, Kazunori Terada, Tomio Watanabe: Communication using robots: a Perception-action scenario in moderate ASD. J. Exp. Theor. Artif. Intell. 30(5): 603-613 (2018)
- [c85] Teruaki Ito, Hiroki Kimachi, Tomio Watanabe: Experimental Observation of Nodding Motion in Remote Communication Using ARM-COMS. HCI (4) 2018: 194-203
- [c84] Yoshihiro Sejima, Ryosuke Maeda, Daichi Hasegawa, Yoichiro Sato, Tomio Watanabe: A Video Communication System with a Virtual Pupil CG Superimposed on the Partner's Pupil. HCI (4) 2018: 336-345
- [c83] Yoshihiro Sejima, Shoichi Egawa, Ryosuke Maeda, Yoichiro Sato, Tomio Watanabe: A Speech-Driven Pupil Response System with Affective Expression Using Hemispherical Displays. RO-MAN 2018: 228-233
- 2017
- [c82] Teruaki Ito, Tomio Watanabe: Eye-Tracking Analysis of User Behavior with an Active Display Interface. AHFE (1) 2017: 72-77
- [c81] Teruaki Ito, Tomio Watanabe: Image-Based Active Control for AEM Function of ARM-COMS. HCI (3) 2017: 529-538
- [c80] Yoshihiro Sejima, Koki Ono, Tomio Watanabe: A Speech-Driven Embodied Communication System Based on an Eye Gaze Model in Interaction-Activated Communication. HCI (3) 2017: 607-616
- [c79] Michiya Yamamoto, Saizo Aoyagi, Satoshi Fukumori, Tomio Watanabe: Development of a Communication Robot for Forwarding a User's Presence to a Partner During Video Communication. HCI (3) 2017: 640-649
- [c78] Yoshihiro Sejima, Shoichi Egawa, Ryosuke Maeda, Yoichiro Sato, Tomio Watanabe: A speech-driven pupil response robot synchronized with burst-pause of utterance. RO-MAN 2017: 437-442
- 2016
- [j10] Irini Giannopulu, Valérie Montreynaud, Tomio Watanabe: Minimalistic toy robot to analyze a scenery of speaker-listener condition in autism. Cogn. Process. 17(2): 195-203 (2016)
- [c77] Yutaka Ishii, Tomio Watanabe, Yoshihiro Sejima: Development of an Embodied Avatar System using Avatar-Shadow's Color Expressions with an Interaction-activated Communication Model. HAI 2016: 337-340
- [c76] Teruaki Ito, Tomio Watanabe: Motion Control Algorithm of ARM-COMS for Entrainment Enhancement. HCI (4) 2016: 339-346
- [c75] Takashi Yamada, Tomio Watanabe: Development of grip strength measuring systems for infants. SII 2016: 138-143
- [c74] Shoichi Egawa, Yoshihiro Sejima, Yoichiro Sato, Tomio Watanabe: A laughing-driven pupil response system for inducing empathy. SII 2016: 520-525
- 2015
- [c73] Saizo Aoyagi, Ryuji Kawabe, Michiya Yamamoto, Tomio Watanabe: Hand-Raising Robot for Promoting Active Participation in Classrooms. HCI (5) 2015: 275-284
- [c72] Yoshihiro Sejima, Yoichiro Sato, Tomio Watanabe, Mitsuru Jindai: Development of a Speech-Driven Embodied Entrainment Character System with Pupil Response. HCI (5) 2015: 378-386
- [c71] Michiya Yamamoto, Saizo Aoyagi, Satoshi Fukumori, Tomio Watanabe: KiroPi: A life-log robot by installing embodied hardware on a tablet. RO-MAN 2015: 258-263
- [c70] Irini Giannopulu, Tomio Watanabe: Conscious/unconscious emotional dialogues in typical children in the presence of an InterActor Robot. RO-MAN 2015: 264-270
- [c69] Yoshihiro Sejima, Yoichiro Sato, Tomio Watanabe: Development of an expressible pupil response interface using hemispherical displays. RO-MAN 2015: 285-290
- [c68] Keizou Esaki, Shota Inoue, Tomio Watanabe, Yutaka Ishii: An embodied entrainment avatar-shadow system to support avatar mediated communication. RO-MAN 2015: 419-424
- [c67] Takashi Yamada, Tomio Watanabe: Development of a joint attention system using a facial image character with indicator light tracking. SII 2015: 569-574
- [c66] Mayo Yamamoto, Noriko Takabayashi, Tomio Watanabe, Yutaka Ishii: A nursing communication education support system with the function of reflection. SII 2015: 912-917
- 2014
- [c65] Irini Giannopulu, Valérie Montreynaud, Tomio Watanabe: PEKOPPA: a minimalistic toy robot to analyse a listener-speaker situation in neurotypical and autistic children aged 6 years. HAI 2014: 9-16
- [c64] Yutaka Ishii, Tomio Watanabe: Evaluation of a video communication system with speech-driven embodied entrainment audience characters with partner's face. HAI 2014: 221-224
- [c63] Teruaki Ito, Tomio Watanabe: Three Key Challenges in ARM-COMS for Entrainment Effect Acceleration in Remote Communication. HCI (12) 2014: 177-186
- [c62] Ryuji Kawabe, Michiya Yamamoto, Saizo Aoyagi, Tomio Watanabe: Measurement of Hand Raising Actions to Support Students' Active Participation in Class. HCI (12) 2014: 199-207
- [c61] Yoshihiro Sejima, Tomio Watanabe, Mitsuru Jindai: Development of an interaction-activated communication model based on a heat conduction equation in voice communication. RO-MAN 2014: 832-837
- [c60] Irini Giannopulu, Valérie Montreynaud, Tomio Watanabe: Neurotypical and autistic children aged 6 to 7 years in a speaker-listener situation with a human or a minimalist InterActor robot. RO-MAN 2014: 942-948
- 2013
- [c59] Teruaki Ito, Tomio Watanabe: ARM-COMS: ARm-Supported eMbodied COmmunication Monitor System. HCI (15) 2013: 307-316
- [c58] Hiroki Kanegae, Masaru Yamane, Michiya Yamamoto, Tomio Watanabe: Effects of a Communication with Make-Believe Play in a Real-Space Sharing Edutainment System. HCI (15) 2013: 326-335
- [c57] Yutaka Ishii, Tomio Watanabe: Evaluation of Superimposed Self-character Based on the Detection of Talkers' Face Angles in Video Communication. HCI (13) 2013: 431-438
- [c56] Yoshihiro Sejima, Tomio Watanabe, Mitsuru Jindai, Atsushi Osa: Eyeball Movement Model for Lecturer Character in Speech-Driven Embodied Group Entrainment System. ISM 2013: 506-507
- [c55] Takuya Matsumoto, Ryota Tamura, Michiya Yamamoto, Tomio Watanabe: Development of a life-log robot for supporting group interaction in everyday life. RO-MAN 2013: 216-219
- [c54] Yutaka Ishii, Shiho Nakayama, Tomio Watanabe: A superimposed self-character mediated video chat system with the function of face-to-face projection based on talker's face direction. RO-MAN 2013: 581-586
- [c53] Kazuaki Nakamura, Tomio Watanabe, Mitsuru Jindai: Development of nodding detection system based on Active Appearance Model. SII 2013: 400-405
- [c52] Shiho Nakayama, Tomio Watanabe, Yutaka Ishii: Video communication system with speech-driven embodied entrainment audience characters with partner's face. SII 2013: 873-878
- 2012
- [c51] Yutaka Ishii, Tomio Watanabe: E-VChat: A video communication system in which a speech-driven embodied entrainment character working with head motion is superimposed for a virtual face-to-face scene. RO-MAN 2012: 191-196
- [c50] Michiya Yamamoto, Munehiro Komeda, Takashi Nagamatsu, Tomio Watanabe: Development of a gaze-and-touch algorithm for a tabletop Hyakunin-Isshu game with a computer opponent. RO-MAN 2012: 203-208
- [c49] Yoshihiro Sejima, Tomio Watanabe, Mitsuru Jindai, Atsushi Osa, Yukari Zushi: A speech-driven embodied group entrainment system with the model of lecturer's eyeball movement. RO-MAN 2012: 1086-1091
- [c48] Michiya Yamamoto, Yusuke Shigeno, Ryuji Kawabe, Tomio Watanabe: Development of a context-enhancing surface based on the entrainment of embodied rhythms and actions sharing via interaction. ITS 2012: 363-366
- 2011
- [c47] Takashi Yamada, Tomio Watanabe: An arm wrestling robot system for human upper extremity wear. AH 2011: 38
- [c46] Yutaka Ishii, Tomio Watanabe: Embodied Communication Support Using a Presence Sharing System under Teleworking. HCI (23) 2011: 41-45
- [c45] Yoshihiro Sejima, Yutaka Ishii, Tomio Watanabe: A Virtual Audience System for Enhancing Embodied Interaction Based on Conversational Activity. HCI (12) 2011: 180-189
- [c44] Kentaro Okamoto, Michiya Yamamoto, Tomio Watanabe: A Configuration Method of Visual Media by Using Characters of Audiences for Embodied Sport Cheering. HCI (2) 2011: 585-592
- [c43] Yuya Takao, Michiya Yamamoto, Tomio Watanabe: Development of Embodied Visual Effects Which Expand the Presentation Motion of Emphasis and Indication. HCI (2) 2011: 603-612
- [c42] Michiya Yamamoto, Hiroshi Sato, Keisuke Yoshida, Takashi Nagamatsu, Tomio Watanabe: Development of an Eye-Tracking Pen Display for Analyzing Embodied Interaction. HCI (11) 2011: 651-658
- [c41] Michiya Yamamoto, Munehiro Komeda, Takashi Nagamatsu, Tomio Watanabe: Hyakunin-Eyesshu: a tabletop Hyakunin-Isshu game with computer opponent by the action prediction based on gaze detection. NGCA 2011: 5
- [c40] Yusuke Shigeno, Michiya Yamamoto, Tomio Watanabe: Analysis of pointing motions by introducing a joint model for supporting embodied large-surface presentation. ITS 2011: 250-251
- 2010
- [c39] Michiya Yamamoto, Takashi Nagamatsu, Tomio Watanabe: Development of eye-tracking pen display based on stereo bright pupil technique. ETRA 2010: 165-168
- [c38] Yutaka Ishii, Yoshihiro Sejima, Tomio Watanabe: Effects of delayed presentation of self-embodied avatar motion with network delay. IUCS 2010: 262-267
- [c37] Michiya Yamamoto, Kouzi Osaki, Shotaro Matsune, Tomio Watanabe: An embodied entrainment character cell phone by speech and head motion inputs. RO-MAN 2010: 298-303
- [c36] Michiya Yamamoto, Munehiro Komeda, Takashi Nagamatsu, Tomio Watanabe: Development of eye-tracking tabletop interface for media art works. ITS 2010: 295-296
2000 – 2009
- 2009
- [c35] Yutaka Ishii, Tomio Watanabe: Development of a Virtual Presence Sharing System Using a Telework Chair. FIRA 2009: 173-178
- [c34] Yutaka Ishii, Kouzi Osaki, Tomio Watanabe: Ghatcha: GHost Avatar on a Telework CHAir. HCI (12) 2009: 216-225
- [c33] Michiya Yamamoto, Kouzi Osaki, Tomio Watanabe: Video Content Production Support System with Speech-Driven Embodied Entrainment Character by Speech and Hand Motion Inputs. HCI (3) 2009: 358-367
- [c32] Yoshihiro Sejima, Tomio Watanabe: A speech-driven embodied entrainment wall picture system for supporting virtual communication. IUCS 2009: 309-314
- [c31] Yoshihiro Sejima, Tomio Watanabe: An embodied virtual communication system with a speech-driven embodied entrainment picture. RO-MAN 2009: 979-984
- 2008
- [j9] Michiya Yamamoto, Tomio Watanabe: Effects of Time Lag of Utterances to Communicative Actions on Embodied Interaction With Robot and CG Character. Int. J. Hum. Comput. Interact. 24(1): 87-107 (2008)
- [j8] Mitsuru Jindai, Tomio Watanabe, Satoru Shibata, Tomonori Yamamoto: Development of a Handshake Robot System Based on a Handshake Approaching Motion Model. J. Robotics Mechatronics 20(4): 650-659 (2008)
- [j7] Takashi Yamada, Tomio Watanabe: Development of a Virtual Arm Wrestling System for Force Display Communication Analysis. J. Robotics Mechatronics 20(6): 872-879 (2008)
- [c30] Mitsuru Jindai, Tomio Watanabe: A handshake robot system based on a shake-motion leading model. IROS 2008: 3330-3335
- [c29] Yoshihiro Sejima, Tomio Watanabe, Michiya Yamamoto: Analysis by Synthesis of Embodied Communication via VirtualActor with a Nodding Response Model. ISUC 2008: 225-230
- [c28] Yutaka Ishii, Kouzi Osaki, Tomio Watanabe, Yoshijiro Ban: Evaluation of embodied avatar manipulation based on talker's hand motion by using 3D trackball. RO-MAN 2008: 653-658
- [c27] Michiya Yamamoto, Tomio Watanabe: Development of an edutainment system with interactors of a teacher and a student in which a user plays a double role of them. RO-MAN 2008: 659-664
- [c26] Takashi Yamada, Tomio Watanabe: Development of a pneumatic cylinder-driven arm wrestling robot system. RO-MAN 2008: 665-670
- 2007
- [c25] Tomio Watanabe: Human-Entrained E-COSMIC: Embodied Communication System for Mind Connection. HCI (8) 2007: 1008-1016
- [c24] Michiya Yamamoto, Tomio Watanabe: Development of an Embodied Image Telecasting Method Via a Robot with Speech-Driven Nodding Response. HCI (8) 2007: 1017-1025
- [c23] Kouzi Osaki, Tomio Watanabe, Michiya Yamamoto: Speech-driven embodied entrainment character system with hand motion input in mobile environment. ICMI 2007: 285-290
- [c22] Tomio Watanabe: Human-Entrained Embodied Interaction and Communication Technology for Advanced Media Society. RO-MAN 2007: 31-36
- [c21] Yutaka Ishii, Tomio Watanabe: An Embodied Avatar Mediated Communication System with VirtualActor for Human Interaction Analysis. RO-MAN 2007: 37-42
- [c20] Michiya Yamamoto, Tomio Watanabe: Analysis by Synthesis of an Information Presentation Method of Embodied Agent Based on the Time Lag Effects of Utterance to Communicative Actions. RO-MAN 2007: 43-48
- [c19] Takashi Yamada, Tomio Watanabe: Virtual Facial Image Synthesis with Facial Color Enhancement and Expression under Emotional Change of Anger. RO-MAN 2007: 49-54
- 2006
- [j6] Takashi Yamada, Tomio Watanabe: Analysis and Synthesis of Facial Color for Facial Image Synthesis in a Virtual Arm Wrestling System. J. Robotics Mechatronics 18(4): 433-441 (2006)
- [c18] Michiya Yamamoto, Tomio Watanabe: Time Lag Effects of Utterance to Communicative Actions on CG Character-Human Greeting Interaction. RO-MAN 2006: 629-634
- [c17] Mitsuru Jindai, Tomio Watanabe, Satoru Shibata, Tomonori Yamamoto: Development of a Handshake Robot System for Embodied Interaction with Humans. RO-MAN 2006: 710-715
- [c16] Takashi Yamada, Tomio Watanabe: Development of a Virtual Arm Wrestling System for Force Display Communication Analysis. RO-MAN 2006: 775-780
- 2005
- [c15] Masashi Okubo, Tomio Watanabe: Development of an embodied collaboration support system for 3D shape evaluation in virtual space. AMT 2005: 207-212
- [c14] Hiroyuki Nagai, Tomio Watanabe, Michiya Yamamoto: InterPointer: speech-driven embodied entrainment pointer system. AMT 2005: 213-218
- [c13] Takashi Yamada, Tomio Watanabe: Analysis and synthesis of facial color for the affect display of virtual facial image under fearful emotion. AMT 2005: 219-224
- [c12] Michiya Yamamoto, Tomio Watanabe, Koji Osaki: Development of an embodied interaction system with InterActor by speech and hand motion input. RO-MAN 2005: 323-328
- [c11] Masashi Okubo, Tomio Watanabe: Effects of InterActor's nodding on a collaboration support system. RO-MAN 2005: 329-334
- 2004
- [j5] Tomio Watanabe, Masashi Okubo, Mutsuhiro Nakashige, Ryusei Danbara: InterActor: Speech-Driven Embodied Interactive Actor. Int. J. Hum. Comput. Interact. 17(1): 43-60 (2004)
- [j4] Tomio Watanabe, Masamichi Ogikubo, Yutaka Ishii: Visualization of Respiration in the Embodied Virtual Communication System and Its Evaluation. Int. J. Hum. Comput. Interact. 17(1): 89-102 (2004)
- 2003
- [c10] Takeshi Shintoku, Tomio Watanabe: An embodied virtual communication system for three human interaction support and analysis by synthesis. CIRA 2003: 211-216
- [c9] Michiya Yamamoto, Tomio Watanabe: Time delay effects of utterance to communicative actions on greeting interaction by using a voice-driven embodied interaction system. CIRA 2003: 217-222
- [c8] Tomio Watanabe, Masashi Okubo, Ryusei Danbara: InterActor for Human Interaction and Communication Support. INTERACT 2003
- 2001
- [j3] Hiroki Ogawa, Tomio Watanabe: InterRobot: speech-driven embodied interaction robot. Adv. Robotics 15(3): 371-377 (2001)
- 2000
- [j2] Tomio Watanabe, Masashi Okubo, Hiroki Ogawa: An Embodied Interaction Robots System Based on Speech. J. Robotics Mechatronics 12(2): 126-134 (2000)
- [c7] Tomio Watanabe, Masashi Okubo, Hiroki Ogawa: A speech driven embodied interaction robots system for human communication support. SMC 2000: 852-857
1990 – 1999
- 1999
- [c6] Tomio Watanabe, Masashi Okubo: Virtual face-to-face communication system for human interaction analysis by synthesis. HCI (2) 1999: 182-186
- 1998
- [c5] Masashi Okubo, Tomio Watanabe: Lip Motion Capture and Its Application to 3-D Molding. FG 1998: 187-193
- 1997
- [c4] Tomio Watanabe, Masashi Okubo: Physiological Analysis of Entrainment in Face-to-Face Communication. HCI (2) 1997: 411-414
- 1993
- [c3] Tomio Watanabe: Voice-Responsive Eye-Blinking Feedback for Improved Human-to-Machine Speech Input. HCI (2) 1993: 1091-1096
- 1992
- [c2] Tomio Watanabe: Voice-reactive facial expression graphics feedback for improved human-to-machine speech input. CHI Posters and Short Talks 1992: 69
- 1990
- [j1] Tomio Watanabe: The adaptation of machine conversational speed to speaker utterance speed in human-machine communication. IEEE Trans. Syst. Man Cybern. 20(2): 502-507 (1990)
- [c1] Tomio Watanabe, Masaki Kohda: Lip-reading of Japanese vowels using neural networks. ICSLP 1990: 1373-1376