Rolando Cavazos-Cadena
2020 – today
- 2024
- [j43] Rolando Cavazos-Cadena, Hugo Cruz-Suárez, Raúl Montes-de-Oca: Characterization of the optimal average cost in Markov decision chains driven by a risk-seeking controller. J. Appl. Probab. 61(1): 340-367 (2024)
- 2023
- [j42] Rolando Cavazos-Cadena, Hugo Cruz-Suárez, Raúl Montes-de-Oca: Average criteria in denumerable semi-Markov decision chains under risk-aversion. Discret. Event Dyn. Syst. 33(3): 221-256 (2023)
- [j41] Gustavo Portillo-Ramírez, Rolando Cavazos-Cadena, Hugo Cruz-Suárez: Contractive approximations in average Markov decision chains driven by a risk-seeking controller. Math. Methods Oper. Res. 98(1): 75-91 (2023)
- 2022
- [j40] Carlos Camilo-Garay, Rolando Cavazos-Cadena, Hugo Cruz-Suárez: Contractive Approximations in Risk-Sensitive Average Semi-Markov Decision Chains on a Finite State Space. J. Optim. Theory Appl. 192(1): 271-291 (2022)
- 2021
- [j39] Rolando Cavazos-Cadena, Luis Rodríguez-Gutiérrez, Dulce María Sánchez-Guillermo: Markov stopping games with an absorbing state and total reward criterion. Kybernetika 57(3): 474-492 (2021)
- [j38] Rolando Cavazos-Cadena, Mario Cantú-Sifuentes, Imelda Cerda-Delgado: Nash equilibria in a class of Markov stopping games with total reward criterion. Math. Methods Oper. Res. 94(2): 319-340 (2021)
- 2020
- [j37] Julio Saucedo-Zul, Rolando Cavazos-Cadena, Hugo Cruz-Suárez: A Discounted Approach in Communicating Average Markov Decision Chains Under Risk-Aversion. J. Optim. Theory Appl. 187(2): 585-606 (2020)
- [j36] Rubén Blancas-Rivera, Rolando Cavazos-Cadena, Hugo Cruz-Suárez: Discounted approximations in risk-sensitive average Markov cost chains with finite state space. Math. Methods Oper. Res. 91(2): 241-268 (2020)
2010 – 2019
- 2019
- [j35] Rolando Cavazos-Cadena, Daniel Hernández-Hernández: The Vanishing Discount Approach in a Class of Zero-Sum Finite Games with Risk-Sensitive Average Criterion. SIAM J. Control. Optim. 57(1): 219-240 (2019)
- 2018
- [j34] Rolando Cavazos-Cadena: Characterization of the Optimal Risk-Sensitive Average Cost in Denumerable Markov Decision Chains. Math. Oper. Res. 43(3): 1025-1050 (2018)
- 2016
- [j33] Rolando Cavazos-Cadena, Daniel Hernández-Hernández: Local Poisson equations associated with the Varadhan functional. Asymptot. Anal. 96(1): 23-50 (2016)
- [j32] Rolando Cavazos-Cadena: A Poisson equation for the risk-sensitive average cost in semi-Markov chains. Discret. Event Dyn. Syst. 26(4): 633-656 (2016)
- [j31] Selene Chávez-Rodríguez, Rolando Cavazos-Cadena, Hugo Cruz-Suárez: Controlled Semi-Markov Chains with Risk-Sensitive Average Cost Criterion. J. Optim. Theory Appl. 170(2): 670-686 (2016)
- [j30] Rolando Cavazos-Cadena, Daniel Hernández-Hernández: A Characterization of the Optimal Certainty Equivalent of the Average Cost via the Arrow-Pratt Sensitivity Function. Math. Oper. Res. 41(1): 224-235 (2016)
- 2015
- [j29] Rolando Cavazos-Cadena, Raúl Montes-de-Oca, Karel Sladký: Sample-Path Optimal Stationary Policies in Stable Markov Decision Chains with the Average Reward Criterion. J. Appl. Probab. 52(2): 419-440 (2015)
- [j28] Selene Chávez-Rodríguez, Rolando Cavazos-Cadena, Hugo Cruz-Suárez: Continuity of the optimal average cost in Markov decision chains with small risk-sensitivity. Math. Methods Oper. Res. 81(3): 269-298 (2015)
- 2014
- [j27] Rolando Cavazos-Cadena, Raúl Montes-de-Oca, Karel Sladký: A Counterexample on Sample-Path Optimality in Stable Markov Decision Chains with the Average Reward Criterion. J. Optim. Theory Appl. 163(2): 674-684 (2014)
- 2012
- [j26] Alfredo Alanís-Durán, Rolando Cavazos-Cadena: An optimality system for finite average Markov decision chains under risk-aversion. Kybernetika 48(1): 83-104 (2012)
- [j25] Rolando Cavazos-Cadena, Daniel Hernández-Hernández: Nash equilibria in a class of Markov stopping games. Kybernetika 48(5): 1027-1044 (2012)
- [j24] Rolando Cavazos-Cadena, Graciela González-Farías: Optimal reparametrization and large sample likelihood inference for the location-scale skew-normal model. Period. Math. Hung. 64(2): 181-211 (2012)
- 2011
- [j23] Rolando Cavazos-Cadena, Daniel Hernández-Hernández: Discounted Approximations for Risk-Sensitive Average Criteria in Markov Decision Chains with Finite State Space. Math. Oper. Res. 36(1): 133-146 (2011)
- 2010
- [j22] Rolando Cavazos-Cadena: Generalized communication conditions and the eigenvalue problem for a monotone and homogenous function. Kybernetika 46(4): 665-683 (2010)
- [j21] Rolando Cavazos-Cadena: Optimality equations and inequalities in a class of risk-sensitive average cost Markov decision chains. Math. Methods Oper. Res. 71(1): 47-84 (2010)
2000 – 2009
- 2009
- [j20] Rolando Cavazos-Cadena: The Risk-Sensitive Poisson Equation for a Communicating Markov Chain on a Denumerable State Space. Kybernetika 45(5): 716-736 (2009)
- [j19] Rolando Cavazos-Cadena: Solutions of the average cost optimality equation for finite Markov decision chains: risk-sensitive and risk-neutral criteria. Math. Methods Oper. Res. 70(3): 541-566 (2009)
- [j18] Rolando Cavazos-Cadena, Daniel Hernández-Hernández: Necessary and sufficient conditions for a solution to the risk-sensitive Poisson equation on a finite state space. Syst. Control. Lett. 58(4): 254-258 (2009)
- 2008
- [j17] Rolando Cavazos-Cadena, Daniel Hernández-Hernández: A central limit theorem for normalized products of random matrices. Period. Math. Hung. 56(2): 183-211 (2008)
- 2004
- [j16] Rolando Cavazos-Cadena, Daniel Hernández-Hernández: A characterization of exponential functionals in finite Markov chains. Math. Methods Oper. Res. 60(3): 399-414 (2004)
- 2003
- [j15] Rolando Cavazos-Cadena, Daniel Hernández-Hernández: Solution to the risk-sensitive average optimality equation in communicating Markov decision chains with finite state space: An alternative approach. Math. Methods Oper. Res. 56(3): 473-479 (2003)
- [j14] Rolando Cavazos-Cadena: Solution to the risk-sensitive average cost optimality equation in a class of Markov decision processes with finite state space. Math. Methods Oper. Res. 57(2): 263-285 (2003)
- [j13] Rolando Cavazos-Cadena, Raúl Montes-de-Oca: The Value Iteration Algorithm in Risk-Sensitive Average Markov Decision Chains with Finite State Space. Math. Oper. Res. 28(4): 752-776 (2003)
- 2002
- [j12] Rolando Cavazos-Cadena: Value iteration and approximately optimal stationary policies in finite-state average Markov decision chains. Math. Methods Oper. Res. 56(2): 181-196 (2002)
- [c2] Rolando Cavazos-Cadena, Daniel Hernández-Hernández: On a representation of Varadhan's functional as a convex minimization problem. CDC 2002: 1398-1401
- 2001
- [j11] Rolando Cavazos-Cadena: Adaptive control of average Markov decision chains under the Lyapunov stability condition. Math. Methods Oper. Res. 54(1): 63-99 (2001)
- [c1] Rolando Cavazos-Cadena, Emmanuel Fernández-Gaucherand: Markov decision processes with risk-sensitive criteria: dynamic programming operators and discounted stochastic games. CDC 2001: 2110-2112
- 2000
- [j10] Rolando Cavazos-Cadena, Raúl Montes-de-Oca: Nearly optimal policies in risk-sensitive positive dynamic programming on discrete spaces. Math. Methods Oper. Res. 52(1): 133-167 (2000)
- [j9] Rolando Cavazos-Cadena, Eugene A. Feinberg, Raúl Montes-de-Oca: A Note on the Existence of Optimal Policies in Total Reward Dynamic Programs with Compact Action Sets. Math. Oper. Res. 25(4): 657-666 (2000)
- [j8] Rolando Cavazos-Cadena, Emmanuel Fernández-Gaucherand: The vanishing discount approach in Markov chains with risk-sensitive criteria. IEEE Trans. Autom. Control. 45(10): 1800-1816 (2000)
1990 – 1999
- 1999
- [j7] Rolando Cavazos-Cadena, Emmanuel Fernández-Gaucherand: Controlled Markov chains with risk-sensitive criteria: Average cost, optimality equations, and optimal solutions. Math. Methods Oper. Res. 49(2): 299-324 (1999)
- [j6] Rolando Cavazos-Cadena, Raúl Montes-de-Oca: Nearly optimal stationary policies in negative dynamic programming. Math. Methods Oper. Res. 49(3): 441-456 (1999)
- 1996
- [j5] Rolando Cavazos-Cadena, Emmanuel Fernández-Gaucherand: Denumerable controlled Markov chains with strong average optimality criterion: Bounded & unbounded costs. Math. Methods Oper. Res. 43(3): 281-300 (1996)
- 1995
- [j4] Rolando Cavazos-Cadena, Emmanuel Fernández-Gaucherand: Denumerable controlled Markov chains with average reward criterion: Sample path optimality. Math. Methods Oper. Res. 41(1): 89-108 (1995)
- 1992
- [j3] Rolando Cavazos-Cadena, Linn I. Sennott: Comparing recent assumptions for the existence of average optimal stationary policies. Oper. Res. Lett. 11(1): 33-37 (1992)
- 1991
- [j2] Rolando Cavazos-Cadena: Solution to the optimality equation in a class of Markov decision chains with the average cost criterion. Kybernetika 27(1): 23-37 (1991)
1980 – 1989
- 1989
- [j1] Rolando Cavazos-Cadena: Weak conditions for the existence of optimal stationary policies in average Markov decision chains with unbounded costs. Kybernetika 25(3): 145-156 (1989)
last updated on 2024-10-07 21:16 CEST by the dblp team
all metadata released as open data under CC0 1.0 license