Springer Proceedings in Mathematics & Statistics, Volume 214

Jaime A. Londoño, José Garrido, Monique Jeanblanc (Editors)

Actuarial Sciences and Quantitative Finance
ICASQF2016, Cartagena, Colombia, June 2016

This book series features volumes composed of selected contributions from workshops and conferences in all areas of current research in mathematics and statistics, including operations research and optimization. In addition to an overall evaluation of the interest, scientific quality, and timeliness of each proposal at the hands of the publisher, individual contributions are all refereed to the high quality standards of leading journals in the field. Thus, this series provides the research community with well-edited, authoritative reports on developments in the most exciting areas of mathematical and statistical research today. More information about this series is available from the publisher.

Editors:
Jaime A. Londoño, Departamento de Matemáticas y Estadística, Universidad Nacional de Colombia, Manizales, Caldas, Colombia
José Garrido, Department of Mathematics and Statistics, Concordia University, Montréal, QC, Canada; and Department of Mathematics, National University of Colombia, Bogotá, Colombia
Monique Jeanblanc, LaMME, Bâtiment IBGBI, Université d'Evry Val d'Essonne, Evry Cedex, Essonne, France

ISSN 2194-1009 (print), ISSN 2194-1017 (electronic)
Springer Proceedings in Mathematics & Statistics
ISBN 978-3-319-66534-4, ISBN 978-3-319-66536-8 (eBook)
Library of Congress Control Number: 2017955004
Mathematics Subject Classification (2010): 62P05, 91B30, 91G20, 91G80

© Springer International Publishing AG 2017

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Printed on acid-free paper.

This Springer imprint is published by Springer Nature. The registered company is Springer International Publishing AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

The chapters in this volume of the Springer Proceedings in Mathematics and Statistics, entitled "Actuarial Sciences and Quantitative Finance: ICASQF2016, Cartagena, Colombia, June 2016," are from selected papers presented at the Second International Congress on Actuarial Science and Quantitative Finance, which took place in Cartagena from June 15 to 18, 2016. The conference was organized jointly by the Universidad Nacional de Colombia, Universidad de Cartagena, Universidad del Rosario, Universidad Externado de Colombia, Universidad de los Andes, ENSIIE/Université Evry Val d'Essonne, and ADDACTIS Latina. It also received support from Universidad Industrial de Santander, Ambassade de France en Colombie, and ICETEX. The conference took place in the Claustro de San Agustín and Casa Museo Arte y Cultura la Presentación in the walled city of Cartagena.

This congress was the second edition of a series of events to be organized every other year, with the objective of becoming a reference in actuarial science and quantitative finance in Colombia, the Andean region (Peru, Colombia, Venezuela, Ecuador, and Bolivia), and the Caribbean. The congress had participation from researchers, students, and practitioners from different parts of the world. This second edition helped enhance the relations between the academic and industrial actuarial and financial communities in North America, Europe, and other regions of the world.

The emphasis of the event was equally distributed between actuarial sciences and quantitative finance and covered a variety of topics such as Statistical Techniques in Finance and Actuarial Science, Portfolio Management, Derivative Valuation, Risk Theory, Life and Pension Insurance Mathematics, Non-life Insurance Mathematics, and Economics of Insurance. The event consisted of plenary sessions with invited speakers in the areas of actuarial science and quantitative finance, oral sessions of contributed talks on these topics, as well as short courses taught by some of the invited speakers and poster sessions.

The list of invited speakers reflects the broad variety of topics: Nicole El Karoui (Self-Exciting Process in Finance and Insurance for Credit Risk and Longevity Risk Modeling in Heterogeneous Portfolios), Julien Guyon (Path-Dependent Volatility), Christian Hipp (Stochastic Control for Insurance: New Problems and Methods), Jean Jacod (Estimation of Volatility in Presence of High Activity Jumps and Noise), Glenn Meyers (Aggressive Backtesting of Stochastic Loss Reserve Models—Where It Leads Us), Michael Sherris (To Borrow or Insure? Long-Term Care Costs and the Impact of Housing), Qihe Tang (Mitigating Extreme Risks Through Securitization), and Fernando Zapatero (Riding the Bubble with Convex Incentives). Topics for short courses included the following: The New Post-crisis Landscape of Derivatives and Fixed Income Activity Under Regulatory Constraints on Credit Risk, Liquidity Risk, and Counterparty Risk (Nicole El Karoui); Stochastic Control for Insurers: What Can We Learn from Finance, and What Are the Differences? (Christian Hipp); High-Frequency Statistics in Finance (Jean Jacod); and Using Bayesian MCMC Models for Stochastic Loss Reserving (Glenn Meyers). Additionally, researchers and students presented oral contributions and posters.
There were 30 contributed oral presentations, 26 invited oral contributions, and ten poster presentations. We received 85 contributions and 34 invited contributions. The selection process was the result of careful deliberations, and 54 oral contributed presentations of the 85 submissions and 20 posters were accepted. Authors came from different corners of the world, with countries of origin including Australia, Brazil, Canada, Chile, Colombia, Egypt, France, Germany, Italy, Jamaica, Mexico, Spain, Switzerland, the United Kingdom, Uruguay, and the United States. The number of contributions, along with the total of 279 registered participants, shows the steady growth of the congress and its consolidation as the main event of the area in the Andean region and the Caribbean.

The congress put the emphasis on enhancing relations between industry and academia, providing a day to address problems arising from the financial and insurance industries. As a matter of fact, topics and speakers themselves came from these sectors. The congress provided practitioners a platform to present and discuss with academics and students different approaches to addressing problems arising from the industries in the region.

The current proceedings are based on invitations to selected oral contributions and selected contributions presented by the invited speakers. All contributions were subject to an additional review process. The spectrum of the eight papers published here reflects the diverse nature of the presentations: there are five papers on actuarial sciences and three papers on quantitative finance.

Special thanks go to the members of the organizing committee, which included Javier Aparicio (Colombia, ADDACTIS Latina), Prof. Sergio Andrés Cabrales (Colombia, Universidad de los Andes), Prof. Carlos Alberto Castro (Colombia, Universidad del Rosario), Prof. Margaret Johanna Garzón (Colombia, Universidad Nacional de Colombia, Bogotá), Prof. Sandra Gutiérrez (Colombia, Universidad de Cartagena), Prof. Jaime A. Londoño (Colombia, Universidad Nacional de Colombia, Bogotá), Prof. Sergio Pulido (France, ENSIIE/Université Evry Val d'Essonne), Prof. Javier Sandoval (Colombia, Universidad Externado de Colombia), and Prof. Arunachalam Viswanathan (Colombia, Universidad Nacional de Colombia, Bogotá). Finally, we would like to thank all the conference participants who made this event a great success.

Jaime A. Londoño, Manizales, Colombia
José Garrido, Montréal, QC, Canada
Monique Jeanblanc, Evry Cedex, France
May 2017

Contents

Part I: Actuarial Sciences
Robust Paradigm Applied to Parameter Reduction in Actuarial Triangle Models (Gary Venter) ..... 3
Unlocking Reserve Assumptions Using Retrospective Analysis (Jeyaraj Vadiveloo, Gao Niu, Emiliano A. Valdez, and Guojun Gan) ..... 25
Spatial Statistical Tools to Assess Mortality Differences in Europe (Patricia Carracedo and Ana Debón) ..... 49
Stochastic Control for Insurance: Models, Strategies, and Numerics (Christian Hipp) ..... 75
Stochastic Control for Insurance: New Problems and Methods (Christian Hipp) ..... 115

Part II: Quantitative Finance
Bermudan Option Valuation Under State-Dependent Models (Anastasia Borovykh, Andrea Pascucci, and Cornelis W. Oosterlee) ..... 127
Option-Implied Objective Measures of Market Risk with Leverage (Matthias Leiss and Heinrich H. Nax) ..... 139
The Sustainable Black-Scholes Equations (Yannick Armenti, Stéphane Crépey, and Chao Zhou) ..... 155

Index ..... 169
Part I: Actuarial Sciences

Robust Paradigm Applied to Parameter Reduction in Actuarial Triangle Models

Gary Venter
University of New South Wales, Sydney, NSW, Australia
e-mail: [email protected]

Abstract: The recognition that models are approximations used to illuminate features of more complex processes brings a challenge to standard statistical testing, which assumes the data is generated from the model. Out-of-sample tests are a response. In my view this is a fundamental change in statistics that renders both classical and Bayesian approaches outmoded, and I am calling it the "robust paradigm" to signify this change. In this context, models need to be robust to samples that are never fully representative of the process. Actuarial models of loss development and mortality triangles are often over-parameterized, and formal parameter-reduction methods are applied to them here within the context of the robust paradigm.

Keywords: Loss reserving • Mortality • Bayesian shrinkage • MCMC

1 Introduction

Section 2 discusses model testing under the robust paradigm, including out-of-sample tests and counting the effective number of parameters. Section 3 introduces parameter-reduction methods, including Bayesian versions. Section 4 reviews actuarial triangle modeling based on discrete parameters by row, column, etc., and how parameter reduction can be used for them. Section 5 gives a mortality model example, while Section 6 illustrates examples in loss reserving. Section 7 concludes.

2 Model Testing Within the Robust Paradigm

Both Bayesian and classical statistics typically assume that the data being used to estimate a model has been generated by the process that the model specifies. In many, perhaps most, financial models this is not the case. The data is known to come from a more complex process, and the model is a hopefully useful but simplified representation of that process. Goodness-of-fit measures that assume that the data has been generated from the model are often not so reliable in this situation, and out-of-sample tests of some sort or another are preferred. These can help address how well the model might work on data that was generated from a different aspect of the process.

I have coined the term "robust paradigm" to refer to statistical methods useful when the data does not come from the model. Much statistics today is based on pragmatic approaches that keep the utility of the model for its intended application in mind, and regularly deviate from both pure Bayesian and pure classical paradigms. That in itself does not mean that they are dealing with data that does not come from the models. In fact, even out-of-sample testing may be done purely to address issues of sample bias in the parameters, even assuming that the data did come from the model. But simplified models for complex processes are common, and pragmatic approaches are used to test them. This is what is included in the robust paradigm.

When models are simplified descriptions of more complex processes, you can never be confident that new data from the same process will be consistent with the model. In fact, with financial data it is not unusual for new data to show somewhat different patterns from those seen previously. However, if the model is robust to a degree of data change, it may still work fairly well in this context. More parsimonious models often hold up better when data is changing like that. Out-of-sample testing methods are used to test for such robustness.

A typical ad hoc approach is the rotating 4/5ths method: the data is divided, perhaps randomly, into five subsets, and the model is fit to every group of four of these five. Then the fits are tested on the omitted fifth, for example by computing the negative log-likelihood (NLL). Competing models can be compared on how well they do on the omitted values.
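To make the rotating 4/5ths comparison concrete, here is a minimal sketch assuming a Gaussian linear model on synthetic data; the two candidate designs, the noise level, and the least-squares fitting step are illustrative choices, not material from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: 100 observations generated from an intercept and two covariates plus noise.
n = 100
X_full = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
y = X_full[:, :3] @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=0.7, size=n)

# Two candidate designs: a 3-column model and an over-parameterized 4-column model.
candidates = {"small": X_full[:, :3], "large": X_full}

def gaussian_nll(y_obs, y_fit, sigma2):
    # Negative log-likelihood of independent normal observations.
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + (y_obs - y_fit) ** 2 / sigma2)

folds = np.array_split(rng.permutation(n), 5)  # random division into five subsets

for name, X in candidates.items():
    out_of_sample_nll = 0.0
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(n), test_idx)
        # Fit on the four retained fifths, then score the omitted fifth.
        beta, *_ = np.linalg.lstsq(X[train_idx], y[train_idx], rcond=None)
        resid = y[train_idx] - X[train_idx] @ beta
        sigma2 = resid @ resid / len(train_idx)
        out_of_sample_nll += gaussian_nll(y[test_idx], X[test_idx] @ beta, sigma2)
    print(f"{name}: out-of-sample NLL = {out_of_sample_nll:.1f}")
```

The candidate with the lower total held-out NLL is the one this comparison favors; with actuarial triangle data, the least-squares step would simply be replaced by whatever models are being compared.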
A well-regarded out-of-sample test is leave one out, or "loo." This fits the model many times, leaving out one data point at a time. Then the fit is tested at each omitted point to compare alternative models. The drawback is in doing so many fits for each model.

In Bayesian estimation, particularly in Markov Chain Monte Carlo (MCMC), there is a shortcut to loo. The estimation produces many sample parameter sets from the posterior distribution of the parameters. By giving more weight to the parameter sets that fit poorly at a given data observation, an approximation to the parameters that would be obtained without that observation can be made. This idea is not new, but such approximations have been unstable. A recent advance, called Pareto smoothed importance sampling, appears to have largely solved the instability problem. A package to do this, called loo, is available with the Stan package for MCMC. It can be used with MCMC estimation not done in Stan as well. It allows comparison of the NLL of the omitted points across models. This modestly increases the estimation time, but is a substantial improvement over multiple re-estimation. Having such a tool available makes loo likely to become a standard out-of-sample fitting test.

This is a direct method to test models for too many parameters. Over-fitted models will not perform well out of sample. If the parameters do better out of sample, they are worth it. Classical methods for adjusting for over-parameterization, like penalized likelihood, are more artificial by comparison, and have never become completely standardized. In classical nonlinear models, counting the effective number of parameters is also a bit complex.

2.1 Counting Parameters

In nonlinear models it is not always apparent how many degrees of freedom are being used up by the parameter estimation. One degree of freedom per parameter is not always realistic, as the form of the model may constrict the ability of parameters to pull the fitted values towards the actual values. A method that seems to work well within this context is the generalized degrees of freedom method of Ye (1998). Key to this is the derivative of a fitted point from a model with respect to the actual point. That is the degree to which the fitted point will change in response to a change in the actual point. Unfortunately this usually has to be estimated numerically for each data point.

The generalized degrees of freedom of a model fit to a data set is then the sum across all the data points of the derivatives of the fitted points with respect to the actual points, done one at a time. In a linear model this is just the number of parameters. It seems to be a reasonable representation of the degrees of freedom used up by a model fit, and so can be used like the number of parameters is used in linear models to adjust goodness-of-fit measures, like NLL. A method of counting the effective number of parameters is also built into the loo package.
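As a sketch of the numerical estimation described above (an ordinary least squares fit on synthetic data, not code from the chapter), each observation is nudged by a small step, the model is refit, and the resulting changes in the corresponding fitted values are summed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data and a fit-then-predict routine; any model fit the same way would do.
n, p = 60, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ rng.normal(size=p) + rng.normal(scale=0.5, size=n)

def fitted_values(y_obs):
    # Ordinary least squares fit, returning fitted values at the observed design points.
    beta, *_ = np.linalg.lstsq(X, y_obs, rcond=None)
    return X @ beta

def generalized_dof(y_obs, eps=1e-4):
    # Ye (1998): sum over observations of d(fitted_i)/d(actual_i), by finite differences.
    base = fitted_values(y_obs)
    gdf = 0.0
    for i in range(len(y_obs)):
        y_pert = y_obs.copy()
        y_pert[i] += eps           # nudge one actual point
        gdf += (fitted_values(y_pert)[i] - base[i]) / eps
    return gdf

print(generalized_dof(y))  # for this linear fit, essentially the parameter count p = 4
```

For the linear fit used here the total is essentially the number of parameters, matching the statement above; for nonlinear or shrunk fits the same loop returns an effective count in the sense just described.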
3 Introduction to Parameter Reduction Methodology

Two currently popular parameter-reduction methodologies are:

• Linear mixed models (LMM), or in the GLM environment, GLMM
• Lasso—Least Absolute Shrinkage and Selection Operator

3.1 Linear Mixed Models

LMM starts by dividing the explanatory variables from a regression model into two types: fixed effects and random effects. The parameters of the random effects are to be shrunk towards zero, based perhaps on there being some question about whether or not these parameters should be taken at face value. See for example Lindstrom and Bates (1990) for a discussion in a more typical statistical context.

Suppose you are doing a regression to estimate the contribution of various factors to accident frequency of driver/vehicle combinations. You might make color of car a random effect, thinking that probably most colors would not affect accident frequency, but a few might, and even for those you would want the evidence to be fairly strong. Then all the parameters for the car color random effects would be shrunk towards or to zero, in line with this skepticism but with an openness to being convinced.

This could be looked at as an analysis of the residuals. Suppose you have done the regression without car color but suspect some colors might be important. You could divide the residuals into groups by car color. Many of these groups of residuals might average to zero, but a few could have positive or negative mean—some of those by chance, however.

In LMM you give color i a parameter b_i and specify that b_i is normal with mean zero and variance d_i σ², where σ² is the regression variance and d_i is a variance parameter for color i. LMM packages like those in SAS, Matlab, R, etc. generally allow a wide choice of covariance matrices for these variances, but we will mainly describe the base case, where all of them are independent. The d_i's are also parameters to be estimated. A color with consistently high residuals is believably a real effect, and it would be estimated with a fairly high d_i to allow b_i to be away from zero. The b_i's are usually assumed to be independent of the residuals.

LMM simultaneously maximizes the probability of the b_i's, P(b), and the conditional probability of the observations given b, P(y|b), by maximizing the joint likelihood P(y, b) = P(y|b)P(b). For a b_i parameter to get further...
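A minimal sketch of this joint maximization, under the simplifying assumptions that the errors are normal, that σ² and a single common d are known rather than estimated, and that the car-color data below are hypothetical; none of it comes from the chapter. With normal b_i, maximizing P(y|b)P(b) reduces to a ridge-type penalized least squares on the random-effect coefficients, which is what produces the shrinkage toward zero.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical frequency-like data: one fixed covariate plus a "car color" grouping.
n, n_colors = 200, 6
color = rng.integers(0, n_colors, size=n)
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # fixed effects: intercept, covariate
Z = np.eye(n_colors)[color]                              # random-effect design, one column per color
true_b = np.array([0.0, 0.0, 0.0, 0.0, 0.6, 0.0])        # only one color has a real effect
y = X @ np.array([1.0, 0.3]) + Z @ true_b + rng.normal(scale=0.5, size=n)

def joint_mode(lam):
    # Maximize P(y|b)P(b) over (beta, b) for normal errors and b_i ~ N(0, d*sigma^2):
    # equivalent to penalized least squares with penalty lam = 1/d on the b coefficients only.
    A = np.column_stack([X, Z])
    penalty = np.diag([0.0] * X.shape[1] + [lam] * n_colors)
    coef = np.linalg.solve(A.T @ A + penalty, A.T @ y)
    return coef[:X.shape[1]], coef[X.shape[1]:]

beta_hat, b_hat = joint_mode(lam=5.0)
print("fixed effects:", np.round(beta_hat, 2))
print("color effects:", np.round(b_hat, 2))  # b_i mostly shrunk toward zero; the consistent color stands out
```

Because the penalty applies only to the color coefficients, the fixed effects are fit freely while each b_i is pulled toward zero unless the data argue otherwise, which is the shrinkage behavior described above; estimating the d_i as well, as the chapter describes, would let the penalty differ by color.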