Kazuteru Miyazaki
2020 – today
- 2024
  - [j20] Kazuteru Miyazaki, Masaaki Ida: Performance evaluation of character-level CNNs using tweet data and analysis for weight perturbations. Artif. Life Robotics 29(2): 266-273 (2024)
  - [j19] Kazuteru Miyazaki, Hitomi Miyazaki: Suppression of negative tweets using reinforcement learning systems. Cogn. Syst. Res. 84: 101207 (2024)
  - [j18] Kazuteru Miyazaki, Keiki Takadama: Editorial: Cutting Edge of Reinforcement Learning and its Hybrid Methods. J. Adv. Comput. Intell. Intell. Informatics 28(2): 379 (2024)
  - [j17] Kazuteru Miyazaki, Shu Yamaguchi, Rie Mori, Yumiko Yoshikawa, Takanori Saito, Toshiya Suzuki: Proposal of a Course-Classification Support System Using Deep Learning and its Evaluation When Combined with Reinforcement Learning. J. Adv. Comput. Intell. Intell. Informatics 28(2): 454-467 (2024)
  - [j16] Kazuteru Miyazaki: Enhanced Naive Agent in Angry Birds AI Competition via Exploitation-Oriented Learning. J. Robotics Mechatronics 36(3): 580-588 (2024)
- 2022
  - [j15] Naoki Kodama, Taku Harada, Kazuteru Miyazaki: Traffic Signal Control System Using Deep Reinforcement Learning With Emphasis on Reinforcing Successful Experiences. IEEE Access 10: 128943-128950 (2022)
  - [c24] Kazuteru Miyazaki: Modeling of placebo effect in stochastic reward tasks by reinforcement learning. BICA*AI 2022: 255-262
- 2021
  - [j14] Naoki Kodama, Taku Harada, Kazuteru Miyazaki: Home Energy Management Algorithm Based on Deep Reinforcement Learning Using Multistep Prediction. IEEE Access 9: 153108-153115 (2021)
  - [j13] Kazuteru Miyazaki: Proposal and evaluation of deep exploitation-oriented learning under multiple reward environment. Cogn. Syst. Res. 70: 29-39 (2021)
- 2020
  - [c23] Kazuteru Miyazaki: Application of Deep Reinforcement Learning to Decision-Making System based on Consciousness. BICA*AI 2020: 631-636
  - [c22] Kazuteru Miyazaki: Classification of Medical Data using Character-level CNN. ICISS 2020: 43-47
2010 – 2019
- 2019
  - [c21] Naoki Kodama, Taku Harada, Kazuteru Miyazaki: Deep Reinforcement Learning with Dual Targeting Algorithm. IJCNN 2019: 1-6
- 2018
  - [c20] Naoki Kodama, Kazuteru Miyazaki, Taku Harada: A Proposal for Reducing the Number of Trial-and-Error Searches for Deep Q-Networks Combined with Exploitation-Oriented Learning. ICMLA 2018: 983-988
  - [c19] Kazuteru Miyazaki, Naoki Kodama, Hiroaki Kobayashi: Proposal and Evaluation of an Indirect Reward Assignment Method for Reinforcement Learning by Profit Sharing Method. IntelliSys (1) 2018: 187-200
  - [c18] Daisuke Shiraishi, Kazuteru Miyazaki, Hiroaki Kobayashi: Proposal of Detour Path Suppression Method in PS Reinforcement Learning and Its Application to Altruistic Multi-agent Environment. PRIMA 2018: 638-645
  - [c17] Kazuteru Miyazaki, Masaaki Ida: Consistency Assessment between Diploma Policy and Curriculum Policy using Character-Level CNN. SCIS&ISIS 2018: 626-631
- 2017
  - [j12] Keiki Takadama, Kazuteru Miyazaki: Editorial: Cutting Edge of Reinforcement Learning and its Hybrid Methods. J. Adv. Comput. Intell. Intell. Informatics 21(5): 833 (2017)
  - [j11] Kazuteru Miyazaki: Exploitation-Oriented Learning with Deep Learning - Introducing Profit Sharing to a Deep Q-Network -. J. Adv. Comput. Intell. Intell. Informatics 21(5): 849-855 (2017)
  - [j10] Kazuteru Miyazaki, Koudai Furukawa, Hiroaki Kobayashi: Proposal of PSwithEFP and its Evaluation in Multi-Agent Reinforcement Learning. J. Adv. Comput. Intell. Intell. Informatics 21(5): 930-938 (2017)
  - [c16] Kazuteru Miyazaki: Proposal of a Deep Q-network with Profit Sharing. BICA 2017: 302-307
- 2016
  - [c15] Kazuteru Miyazaki: A Study of an Indirect Reward on Multi-agent Environments. BICA 2016: 94-101
  - [c14] Kazuteru Miyazaki, Koudai Furukawa, Hiroaki Kobayashi: Proposal of an Action Selection Strategy with Expected Failure Probability and Its Evaluation in Multi-agent Reinforcement Learning. EUMAS/AT 2016: 172-186
  - [c13] Kazuteru Miyazaki, Koudai Furukawa, Hiroaki Kobayashi: Proposal and Evaluation of an Action Selection Strategy with Expected Failure Probability in Multi-agent Learning. ICA 2016: 127-130
- 2014
  - [c12] Kazuteru Miyazaki, Jun'ichi Takeno: The Necessity of a Secondary System in Machine Consciousness. BICA 2014: 15-22
- 2013
  - [j9] Kazuteru Miyazaki: Proposal of an Exploitation-oriented Learning Method on Multiple Rewards and Penalties Environments and the Design Guideline. J. Comput. 8(7): 1683-1690 (2013)
- 2012
  - [j8] Kazuteru Miyazaki: Proposal of the Continuous-Valued Penalty Avoiding Rational Policy Making Algorithm. J. Adv. Comput. Intell. Intell. Informatics 16(2): 183-190 (2012)
  - [j7] Seiya Kuroda, Kazuteru Miyazaki, Hiroaki Kobayashi: Introduction of Fixed Mode States into Online Reinforcement Learning with Penalties and Rewards and its Application to Biped Robot Waist Trajectory Generation. J. Adv. Comput. Intell. Intell. Informatics 16(6): 758-768 (2012)
  - [c11] Kazuteru Miyazaki, Masaki Itou, Hiroaki Kobayashi: Evaluation of the Improved Penalty Avoiding Rational Policy Making Algorithm in Real World Environment. ACIIDS (1) 2012: 270-280
  - [c10] Kazuteru Miyazaki, Masaaki Ida: Proposal of an Active Course Classification Support system with Exploitation-oriented Learning extended by positive and negative examples. SCIS&ISIS 2012: 1520-1527
- 2011
  - [c9] Seiya Kuroda, Kazuteru Miyazaki, Hiroaki Kobayashi: Introduction of Fixed Mode States into Online Profit Sharing and Its Application to Waist Trajectory Generation of Biped Robot. EWRL 2011: 297-308
  - [c8] Kazuteru Miyazaki, Masaaki Ida: Proposal and Evaluation of the Active Course Classification Support System with Exploitation-Oriented Learning. EWRL 2011: 333-344
- 2010
  - [c7] Kazuteru Miyazaki: The Penalty Avoiding Rational Policy Making Algorithm in Continuous Action Spaces. IDEAL 2010: 178-185
2000 – 2009
- 2009
  - [j6] Kazuteru Miyazaki, Shigenobu Kobayashi: Exploitation-Oriented Learning PS-r#. J. Adv. Comput. Intell. Intell. Informatics 13(6): 624-630 (2009)
  - [j5] Takuji Watanabe, Kazuteru Miyazaki, Hiroaki Kobayashi: A New Improved Penalty Avoiding Rational Policy Making Algorithm for Keepaway with Continuous State Spaces. J. Adv. Comput. Intell. Intell. Informatics 13(6): 675-682 (2009)
- 2008
  - [c6] Kazuteru Miyazaki, Shigenobu Kobayashi: Proposal of Exploitation-Oriented Learning PS-r#. IDEAL 2008: 1-8
- 2007
  - [j4] Kazuteru Miyazaki, Shigenobu Kobayashi: Reinforcement Learning for Penalty Avoidance in Continuous State Spaces. J. Adv. Comput. Intell. Intell. Informatics 11(6): 668-676 (2007)
- 2006
  - [c5] Daisuke Katagami, Katsumi Nitta, Kazuteru Miyazaki: Multi User Learning Agent on the Distribution of MDPs. RO-MAN 2006: 698-703
- 2004
  - [j3] Kazuteru Miyazaki, Sougo Tsuboi, Shigenobu Kobayashi: Development of a reinforcement learning system to play Othello. Artif. Life Robotics 7(4): 177-181 (2004)
- 2001
  - [j2] Kazuteru Miyazaki, Shigenobu Kobayashi: Rationality of Reward Sharing in Multi-agent Reinforcement Learning. New Gener. Comput. 19(2): 157-172 (2001)
- 2000
  - [c4] Kazuteru Miyazaki, Shigenobu Kobayashi: Reinforcement learning for penalty avoiding policy making. SMC 2000: 206-211
1990 – 1999
- 1999
  - [c3] Sachiyo Arai, Kazuteru Miyazaki, Shigenobu Kobayashi: Multi-agent Reinforcement Learning for Crane Control Problem: Designing Rewards for Conflict Resolution. ISADS 1999: 310-319
  - [c2] Kazuteru Miyazaki, Shigenobu Kobayashi: Rationality of Reward Sharing in Multi-agent Reinforcement Learning. PRIMA 1999: 111-125
- 1997
  - [j1] Kazuteru Miyazaki, Masayuki Yamamura, Shigenobu Kobayashi: k-Certainty Exploration Method: An Action Selector to Identify the Environment in Reinforcement Learning. Artif. Intell. 91(1): 155-171 (1997)
  - [c1] Hajime Kimura, Kazuteru Miyazaki, Shigenobu Kobayashi: Reinforcement Learning in POMDPs with Function Approximation. ICML 1997: 152-160