In this paper we propose a novel procedure for the estimation of semiparametric survival functions. The proposed technique adapts penalized likelihood survival models to the context of lifetime value modeling. The method extends the classical Cox model by introducing a smoothing parameter that can be estimated by means of penalized maximum likelihood procedures. Markov chain Monte Carlo methods are employed to estimate this smoothing parameter effectively, using an algorithm that combines Metropolis–Hastings and Gibbs sampling. Our proposal is contextualized and compared with conventional models in a marketing application involving the prediction of customer lifetime value.
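The abstract does not specify the authors' exact sampler, but the Metropolis–Hastings step for a positive smoothing parameter can be sketched as follows. Everything here is an illustrative assumption: a Gamma(2, 1) density stands in for the penalized-likelihood posterior, and a Gaussian random walk is used as the proposal.

```python
import math
import random

def log_posterior(lam):
    # Toy unnormalized log-posterior for a positive smoothing parameter;
    # a Gamma(shape=2, rate=1) density is an illustrative stand-in.
    if lam <= 0:
        return float("-inf")
    return math.log(lam) - lam

def metropolis_hastings(n_iter=5000, start=1.0, step=0.5, seed=42):
    rng = random.Random(seed)
    lam, samples = start, []
    for _ in range(n_iter):
        proposal = lam + rng.gauss(0.0, step)  # symmetric random-walk proposal
        log_alpha = log_posterior(proposal) - log_posterior(lam)
        if math.log(rng.random()) < log_alpha:
            lam = proposal  # accept; otherwise keep the current value
        samples.append(lam)
    return samples

samples = metropolis_hastings()
posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
```

In a full implementation this update would alternate with Gibbs steps for the remaining model parameters; here only the smoothing-parameter move is shown.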
In this article, we present a novel methodology to assess predictive models for a binary target. In our opinion, the main weakness of the criteria proposed in the literature is that they do not take the financial cost of a wrong decision into account. The objective of this article is to derive the optimal cut-off in predictive classification models and to improve model assessment on the basis of a general class of loss functions. We describe how our proposal performs in a real credit scoring application.
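The general idea of a cost-based cut-off can be sketched with a simple empirical sweep. This is not the paper's loss-function class, just a minimal illustration: given false-positive and false-negative costs, the cut-off minimizing total cost over the observed scores is selected (for well-calibrated probabilities, the theoretical cost-minimizing threshold is c_fp / (c_fp + c_fn)).

```python
def expected_cost(y_true, p_hat, cutoff, c_fp, c_fn):
    """Total cost of labelling cases with p_hat >= cutoff as positive."""
    cost = 0.0
    for y, p in zip(y_true, p_hat):
        pred = 1 if p >= cutoff else 0
        if pred == 1 and y == 0:
            cost += c_fp  # false positive
        elif pred == 0 and y == 1:
            cost += c_fn  # false negative
    return cost

def optimal_cutoff(y_true, p_hat, c_fp, c_fn):
    # Sweep candidate cut-offs at the observed scores (plus one above all).
    candidates = sorted(set(p_hat)) + [1.01]
    return min(candidates,
               key=lambda c: expected_cost(y_true, p_hat, c, c_fp, c_fn))

# Toy example: missing a true positive costs five times a false alarm.
y_true = [0, 0, 0, 1, 1]
p_hat = [0.1, 0.2, 0.6, 0.4, 0.9]
best = optimal_cutoff(y_true, p_hat, c_fp=1.0, c_fn=5.0)
```

Because false negatives are costlier here, the selected cut-off sits below the conventional 0.5.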
Credit risk concentration is one of the leading topics in modern finance, as bank regulation has made increasing use of external and internal credit ratings. Concentration risk in credit portfolios arises through an uneven distribution of bank loans to individual borrowers (single-name concentration) or across a hierarchical dimension such as industry and service sectors or geographical regions (sectoral concentration). To measure single-name concentration risk, the literature proposes specific concentration indexes, such as the Herfindahl–Hirschman index and the Gini index, or more general approaches that calculate the economic capital needed to cover the risk arising from the potential default of large borrowers. However, in our opinion, the Gini index and the Herfindahl–Hirschman index can be improved by taking into account methodological and theoretical issues which are explained in this paper. We propose a new index to measure single-name credit concentration risk and prove its properties. Furthermore, considering the guidelines of Basel II, we describe how our index works on real financial data. Finally, we compare our index with the common procedures proposed in the literature on the basis of simulated and real data.
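The two benchmark indexes mentioned above have standard closed forms on a vector of loan exposures: the Herfindahl–Hirschman index is the sum of squared portfolio shares, and the Gini index is the mean absolute difference between exposures scaled by twice the mean. A minimal implementation (not the paper's proposed index):

```python
def herfindahl_hirschman(exposures):
    # Sum of squared exposure shares: 1/n for an even portfolio,
    # 1.0 when a single borrower holds the whole portfolio.
    total = sum(exposures)
    shares = [x / total for x in exposures]
    return sum(s * s for s in shares)

def gini(exposures):
    # Mean absolute difference over all pairs, scaled by 2 * mean:
    # 0.0 for perfect equality, (n - 1) / n for maximal concentration.
    n = len(exposures)
    mean = sum(exposures) / n
    diff_sum = sum(abs(x - y) for x in exposures for y in exposures)
    return diff_sum / (2 * n * n * mean)
```

For example, four equal loans give HHI = 0.25 and Gini = 0, while a portfolio concentrated in one borrower gives HHI = 1 and Gini = 0.75 for n = 4.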
This paper extends the existing empirical literature on credit risk default for small and medium enterprises (SMEs). We propose a non-parametric approach based on Random Survival Forests (RSF) and compare its performance with a standard logit model. To the authors’ knowledge, no studies in the area of credit risk default for SMEs have used a variety of statistical methodologies to test the reliability of their predictions and to compare their performance against one another. In-sample, we find that our non-parametric model performs much better than the classical logit model. Out-of-sample, the evidence is just the opposite: the logit model performs better than the RSF model. We explain this evidence by showing how error in the estimates of default probabilities can affect classification error when the estimates are used in a classification rule.
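The final point, that noise in estimated default probabilities (PDs) propagates into classification error, can be illustrated with a small simulation. This toy setup (uniform true PDs, Gaussian estimation noise, a 0.5 cut-off) is an assumption for illustration, not the paper's data or models.

```python
import random

def classification_error(pd_estimates, defaults, cutoff=0.5):
    # Fraction of obligors misclassified by the rule "default if PD >= cutoff".
    errors = sum(1 for p, d in zip(pd_estimates, defaults)
                 if (p >= cutoff) != (d == 1))
    return errors / len(defaults)

rng = random.Random(0)
true_pd = [rng.random() for _ in range(10000)]            # true default probabilities
defaults = [1 if rng.random() < p else 0 for p in true_pd]  # realized defaults
# Noisy PD estimates, clipped to [0, 1].
noisy_pd = [min(1.0, max(0.0, p + rng.gauss(0.0, 0.3))) for p in true_pd]

err_true = classification_error(true_pd, defaults)   # error with the true PDs
err_noisy = classification_error(noisy_pd, defaults)  # error with noisy estimates
```

Classifying with the true PDs attains the minimum achievable error (about 0.25 in this setup); the noisy estimates flip some decisions and raise the error rate, which is the mechanism the paper invokes.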
... Silvia Figini was awarded her BSc in Economics from the University of Pavia and her PhD in Statistics from Bocconi University. ... Several authors have proposed tests for whether to include random effects (Commenges and Jacqmin-Gadda, 1997; Hall and Praestgaard, 2001; Lin ...
According to the latest proposals by the Basel Committee, banks are allowed to use statistical approaches for the computation of their capital charge covering financial risks such as credit risk, market risk and operational risk. It is widely recognized that internal loss data alone do not suffice to provide an accurate capital charge in financial risk management, especially for high-severity and low-frequency events. Financial institutions typically use external loss data to augment the available evidence and, therefore, provide more accurate risk estimates. Rigorous statistical treatments are required to make internal and external data comparable and to ensure that merging the two databases leads to unbiased estimates. The goal of this paper is to propose a correct statistical treatment to make the external and internal data comparable and, therefore, mergeable. Such a methodology augments internal losses with relevant, rather than redundant, external loss data.
In this paper we propose nonparametric approaches for e-learning data. In particular, we want to supply a measure of the relative importance of the exercises, to estimate the knowledge acquired by each student and, finally, to personalize the e-learning platform. The methodology employed is based on a comparison between nonparametric statistics for kernel density classification and parametric models such as generalized linear models and generalized additive models.
Following Basel II, credit risk management (CRM) is a central task for banks. Within a quantitative approach, the key variable in quantifying credit risk is the probability of default (PD), which may be assigned to a specific obligor or to a certain rating category. Besides quantitative data, a large amount of information can be derived from qualitative data and from ontologies. In particular, in CRM, quantitative data are derived from balance sheets, while qualitative data are based on questionnaires and unstructured information.
In this paper we analyse a real e-learning dataset derived from the e-learning platform of the University of Pavia. The dataset concerns an online learning environment with in-depth teaching materials. The main focus of this paper is to supply a measure of the relative importance of the exercises (tests) at the end of each training unit, to build predictive models of student performance and, finally, to personalize the e-learning platform. The methodology employed is based on nonparametric statistical methods for kernel density estimation, with generalized linear models and generalized additive models for predictive purposes.
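Kernel density classification, as used in the two e-learning papers above, assigns an observation to the class maximizing prior times per-class kernel density estimate. A minimal one-dimensional sketch with a Gaussian kernel (the class labels, scores and bandwidth below are illustrative assumptions, not the Pavia data):

```python
import math

def gaussian_kde_logpdf(x, data, bandwidth):
    # Log of a one-dimensional Gaussian kernel density estimate at x.
    s = sum(math.exp(-0.5 * ((x - d) / bandwidth) ** 2) for d in data)
    return math.log(s / (len(data) * bandwidth * math.sqrt(2 * math.pi)) + 1e-300)

def kde_classify(x, class_data, bandwidth=0.5):
    # Pick the class maximizing log prior + log KDE (a plug-in Bayes rule).
    total = sum(len(d) for d in class_data.values())
    best, best_score = None, float("-inf")
    for label, data in class_data.items():
        score = math.log(len(data) / total) + gaussian_kde_logpdf(x, data, bandwidth)
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical test scores for two outcome groups.
classes = {"pass": [7.0, 8.0, 8.5, 9.0], "fail": [3.0, 4.0, 4.5, 5.0]}
```

A new score near either cluster is then assigned to that cluster's class; the papers compare this nonparametric rule against GLM and GAM predictions.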