
    Magnus Wiktorsson

    Using classical simulated annealing to maximise a function ψ defined on a subset of R^d, the probability P(ψ(θ_n) ≤ ψ_max − ε) tends to zero at a logarithmic rate as n increases; here θ_n is the state in the n-th stage of the simulated annealing algorithm and ψ_max is the maximal value of ψ. We propose a modified scheme for which this probability is of order n^{-1/3} log n, and hence vanishes at an algebraic rate. To obtain this faster rate, the exponentially decaying acceptance probability of classical simulated annealing is replaced by a more heavy-tailed function, and the system is cooled faster. We also show how the algorithm may be applied to functions that cannot be computed exactly but only approximated, and give an example of maximising the log-likelihood function for a state-space model.
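    The modified scheme can be sketched in a few lines. The polynomial acceptance function 1/(1 + β_n d) and the linear cooling schedule below are illustrative assumptions for a one-dimensional toy problem, not the paper's exact specification:

```python
import math
import random

def fast_anneal(psi, x0, n_iter=20000, step=0.5, seed=1):
    """Simulated annealing with a heavy-tailed acceptance function.

    Illustrative sketch: a downhill move of size d is accepted with
    probability 1 / (1 + beta_n * d), with polynomial cooling
    beta_n = n (both are assumptions, not the paper's exact scheme).
    """
    rng = random.Random(seed)
    x, best = x0, x0
    for n in range(1, n_iter + 1):
        z = x + rng.gauss(0.0, step)      # Gaussian proposal
        d = psi(x) - psi(z)               # > 0 means z is downhill
        beta = float(n)                   # faster-than-logarithmic cooling
        if d <= 0 or rng.random() < 1.0 / (1.0 + beta * d):
            x = z
        if psi(x) > psi(best):
            best = x
    return best

# maximise psi(x) = -(x - 2)^2 on R; the optimum is at x = 2
xhat = fast_anneal(lambda x: -(x - 2.0) ** 2, x0=-5.0)
```

    With a heavy-tailed acceptance function, occasional large downhill moves remain possible even late in the run, which is what permits the faster cooling.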
    The hot water circulation system in a building helps prevent Legionella problems whilst ensuring that tenants have access to hot water quickly. Poorly designed or implemented systems not only increase the risk to people’s health and thermal comfort, but also increase the energy needed for the system to function properly. Results from previous studies showed that the total hot water circulation system loss can be as high as 25 kWh/m2 heated floor area per year. The purpose of this project is to measure the total energy use per year of the hot water circulation system in about 200 multifamily dwellings of different ages, to verify whether a system loss of 4 kWh/m2 per year is a realistic assumption for both newer and older/retrofitted buildings. The preliminary results from the first 134 measurements showed that the assumption of 4 kWh/m2 per year is rarely fulfilled. An average energy use of more than three times this figure is more common, even in newer buildings. ...
    Fast simulated annealing in R^d and an application to maximum likelihood estimation in state-space models
    Robust calibration of option valuation models to quoted option prices is non-trivial but crucial for good performance. A framework based on the state-space formulation of the option valuation model is introduced. Non-linear (Kalman) filters are needed to do inference since the models have latent variables (e.g. volatility). The statistical framework is made adaptive by introducing stochastic dynamics for the parameters. This allows the parameters to change over time, while treating the measurement noise in a statistically consistent way and using all data efficiently. The performance and computational efficiency of standard and iterated extended Kalman filters (EKF and IEKF) are investigated. These methods are compared to common calibration methods such as weighted least squares (WLS) and penalized weighted least squares (PWLS). A simulation study, using the Bates model, shows that the adaptive framework is capable of tracking time-varying parameters and latent processes such as stochastic ...
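    The filtering step at the core of such a framework can be illustrated in the linear scalar case; the EKF used for option calibration adds a linearisation of the measurement function around the predicted state. All parameter values below are hypothetical:

```python
def kalman_update(m, P, y, H, R, Q, F=1.0):
    """One predict-update cycle of a scalar Kalman filter.

    Linear sketch of the filtering step; in the option-calibration
    setting the measurement is non-linear and the EKF replaces H by
    the derivative of the measurement function at the predicted state.
    """
    # predict
    m_pred = F * m
    P_pred = F * P * F + Q
    # update
    S = H * P_pred * H + R           # innovation variance
    K = P_pred * H / S               # Kalman gain
    m_new = m_pred + K * (y - H * m_pred)
    P_new = (1.0 - K * H) * P_pred
    return m_new, P_new

# track a constant level ~1.0 from four noisy observations
m, P = 0.0, 1.0
for y in [1.2, 0.9, 1.1, 1.0]:
    m, P = kalman_update(m, P, y, H=1.0, R=0.1, Q=0.01)
```

    The non-zero process noise Q is what makes the scheme adaptive: it keeps the posterior variance from collapsing, so the filter can follow parameters that drift over time.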
    Implied volatility: if the Black-Scholes model were true, the volatility would be all we need to know to price options.
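    As a minimal illustration of that remark, the implied volatility is the σ that makes the Black-Scholes formula reproduce an observed price; since the call price is strictly increasing in σ, it can be recovered by bisection (a sketch with assumed market parameters):

```python
import math

def bs_call(S, K, r, T, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_vol(price, S, K, r, T, lo=1e-6, hi=5.0, tol=1e-10):
    """Invert the Black-Scholes formula by bisection; the call price
    is strictly increasing in sigma, so the root is unique."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, r, T, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

price = bs_call(100.0, 100.0, 0.02, 1.0, 0.25)   # price generated under sigma = 0.25
iv = implied_vol(price, 100.0, 100.0, 0.02, 1.0)
```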
    We consider limits for sequences of the type [f_n(X_n) − f(X)], where {X_n}_n either are Dirichlet processes or, more generally, processes admitting quadratic variations. It is assumed that the functions {f_n}_n are locally Lipschitz continuous or C^1, depending on the context. We also provide a version of the Itô formula for the transformation of Dirichlet processes by locally Lipschitz continuous functions (differing from the one provided in Lowther (2010)). Moreover, important examples are given of how to apply this theory when considering the stability of integrators, as well as applications to sequential jump removal.
    As regulations regarding energy use and emissions of CO2 equivalents in buildings become more stringent, the need for more accurate tools and improved methods for predicting these parameters in building performance simulations increases. In the first part of this project, a probabilistic method was developed and applied to the transient energy calculations and evaluated using a single-family dwelling case study. The method was used to successfully predict the variation of the energy use in 26 houses built in the same residential area and with identical building characteristics and services. This project continues the development and testing of the probabilistic method for energy calculations by applying it to a multi-family building. The complexity of the building model increases as the multi-family model consists of 52 zones, compared to the single-zone model used for the single-family dwelling. The multi-family model also includes additional parameters that are evaluated, such as ...
    We study simulated annealing algorithms to maximise a function ψ on a subset of R^d. In classical simulated annealing, given a current state θ_n in stage n of the algorithm, the probability to accept a proposed state z at which ψ is smaller is exp(−β_{n+1}(ψ(θ_n) − ψ(z))), where (β_n) is the inverse temperature. With the standard logarithmic increase of (β_n) the probability P(ψ(θ_n) ≤ ψ_max − ε), with ψ_max the maximal value of ψ, then tends to zero at a logarithmic rate as n increases. We examine variations of this scheme in which (β_n) is allowed to grow faster, but also consider other functions than the exponential for determining acceptance probabilities. The main result shows that faster rates of convergence can be obtained, both with the exponential and other acceptance functions. We also show how the algorithm may be applied to functions that cannot be computed exactly but only approximated, and give an example of maximising the log-likelihood function for a state-space ...
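    The classical acceptance rule described above can be written down directly (a sketch; β_0 = 1 is an arbitrary choice):

```python
import math
import random

def classical_accept(psi_z, psi_x, beta, rng=random):
    """Classical simulated-annealing acceptance for maximisation:
    always accept uphill moves; accept a downhill move with
    probability exp(-beta * (psi_x - psi_z))."""
    if psi_z >= psi_x:
        return True
    return rng.random() < math.exp(-beta * (psi_x - psi_z))

def log_beta(n, beta0=1.0):
    """Standard logarithmic inverse-temperature schedule."""
    return beta0 * math.log(n + 1)
```

    With this logarithmic schedule the downhill acceptance probability decays only polynomially in n, which is the source of the logarithmic convergence rate the abstract refers to.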
    In this paper, we test the market efficiency of the OMXS30 index option market. The market efficiency definition is the absence of arbitrage opportunities in the market. We first check for arbitrage opportunities by examining the boundary conditions and the put-call parity (PCP) that must be satisfied in the market. Then a variance-based efficiency test is performed by establishing a risk-neutral portfolio and re-balancing the initial portfolio under different trading strategies. In order to choose the most appropriate model for option pricing and hedging strategies, we calibrate several of the most widely applied models, i.e. the Black-Scholes (BS), Merton, Heston, Bates and affine jump diffusion (AJD) models. Our results indicate that the AJD model significantly outperforms the other models in option price forecasts and trading strategies. The boundary and PCP tests and the dynamic hedging strategy results show that no significant abnormal returns can be obtained in the OMXS30 option market, therefore supporting the ...
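    The put-call parity check described above reduces to testing the deviation C − P − (S − K e^{−rT}) against transaction costs; the flat-cost treatment below is a simplifying assumption:

```python
import math

def pcp_violation(call, put, S, K, r, T, cost=0.0):
    """Deviation from put-call parity C - P = S - K * exp(-r * T).

    Returns the signed deviation, or 0.0 when its magnitude does not
    exceed the (hypothetical, flat) transaction cost -- only then is
    there a potential arbitrage signal.
    """
    dev = (call - put) - (S - K * math.exp(-r * T))
    return dev if abs(dev) > cost else 0.0
```

    In an efficiency test this is evaluated quote by quote; deviations inside the cost band are consistent with an arbitrage-free market.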
    Discrete time hedging in a complete diffusion market is considered. The hedge portfolio is rebalanced when the absolute difference between delta of the hedge portfolio and the derivative contract reaches a threshold level. The rate of convergence of the expected squared hedging error as the threshold level approaches zero is analyzed. The results hinge to a great extent on a theorem stating that the difference between the hedge ratios normalized by the threshold level tends to a triangular distribution as the threshold level tends to zero.
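    A move-based hedging scheme of this kind is easy to simulate; the sketch below monitors a Black-Scholes delta on a fine grid and counts rebalances for two threshold levels (model parameters are illustrative, and the grid only approximates continuous monitoring):

```python
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_delta(S, K, r, T, sigma):
    """Black-Scholes delta of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return norm_cdf(d1)

def move_based_rebalances(eta, K=100.0, r=0.0, sigma=0.2, T=1.0,
                          n_steps=10000, seed=3):
    """Count rebalances when hedging is triggered by the hedge-ratio
    gap reaching a threshold eta (a sketch of the move-based scheme,
    monitored on a fine time grid, not the paper's exact analysis)."""
    rng = random.Random(seed)
    dt = T / n_steps
    S = 100.0
    held = bs_delta(S, K, r, T, sigma)   # delta currently held
    count = 0
    for i in range(1, n_steps):
        # simulate one geometric Brownian motion step
        S *= math.exp((r - 0.5 * sigma**2) * dt
                      + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
        target = bs_delta(S, K, r, T - i * dt, sigma)
        if abs(target - held) >= eta:    # rebalance on threshold crossing
            held = target
            count += 1
    return count

n_coarse = move_based_rebalances(0.05)
n_fine = move_based_rebalances(0.01)
```

    Shrinking the threshold increases the rebalancing frequency, which is the trade-off the convergence-rate analysis quantifies.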
    This text describes Fourier pricing methods in general, and the Fourier Gauss Laguerre (FGL) method and its implementation used in the BENCHOP project.
    Building regulations in Sweden require that an energy calculation is done for every building to show that the building design meets the maximum specific energy use outlined in the Swedish Building Code. The result of this energy calculation is always a single number; for example, a building might use 89 kWh/m2 per year when the building regulation requires 90 kWh/m2 per year. This level of reporting can lead to conflicts if the measured energy use exceeds the calculated energy use. With the current tools, a time-consuming parametric study is needed in order to see which risks are associated with the design and material properties. This paper is part of a project called “Calculation method for probabilistic energy use in buildings” and is developing and testing the application of Monte Carlo simulations using two popular energy calculation tools developed in Sweden. The goals of the project are: to look at which input parameters have the largest influence on the result; to begin defining a realistic spread of the most significant parameters; to study the advantages and disadvantages of probabilistic energy calculations; and to look at the discrepancies between calculated and measured energy use. This paper presents the results of the first stage of the study, defining which input parameters should vary and defining a realistic spread of the values of these parameters. Out of all the input parameters in the case object, it was determined that the method should be tested with 16 parameters with variable values. This paper also presents the preliminary results of an energy calculation done on a real object using the variable parameters and 1000 iterations, compared to the base calculation without Monte Carlo simulations.
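    The Monte Carlo approach described above amounts to drawing each uncertain input from an assumed distribution and re-running the energy calculation; in the toy sketch below the three parameters, their ranges, and the linear response surface are all hypothetical stand-ins for a real building model:

```python
import random

def simulate_energy_use(n_sims=1000, seed=7):
    """Toy Monte Carlo energy calculation with three uncertain inputs
    (U-value, indoor temperature, air change rate); the parameter
    names, ranges and the linear model are illustrative assumptions,
    not taken from the project."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_sims):
        u_value = rng.uniform(0.15, 0.25)      # W/m2K, envelope average
        t_indoor = rng.gauss(21.0, 1.0)        # deg C set point
        ach = rng.triangular(0.3, 0.9, 0.5)    # air changes per hour
        # hypothetical response surface -> kWh/m2 per year
        e = 40.0 + 120.0 * u_value + 2.5 * (t_indoor - 20.0) + 15.0 * ach
        results.append(e)
    results.sort()
    return results

res = simulate_energy_use()
median = res[len(res) // 2]
p95 = res[int(0.95 * len(res))]
```

    Reporting a median together with a high percentile, instead of a single number, is what lets the designer see the risk of exceeding the code limit.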
    The problem of approximating/tracking the value of a Wiener process is considered. The discretization points are placed at times when the value of the process differs from the approximation by some amount, here denoted by eta. It is found that the limiting difference, as eta goes to 0, between the approximation and the value of the process normalized with eta converges in distribution to a triangularly distributed random variable.
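    This limit is easy to probe numerically: track a discretised Wiener path, refresh the approximation whenever the gap reaches eta, and inspect the normalized error. For the triangular density 1 − |x| on (−1, 1), about 75% of the mass lies in (−0.5, 0.5), which the time-averaged samples should roughly reproduce (a sketch; the grid step must be much smaller than eta squared):

```python
import math
import random

def normalized_tracking_errors(eta=0.02, n_steps=400000, seed=11):
    """Simulate a Wiener path on a fine grid; the approximation holds
    the value from the last discretization point and a new point is
    placed whenever |W - approx| reaches eta. Returns the normalized
    errors (W - approx)/eta collected at every grid time."""
    rng = random.Random(seed)
    dt = 1e-6                      # much finer than eta**2 = 4e-4
    w, approx = 0.0, 0.0
    errs = []
    for _ in range(n_steps):
        w += math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if abs(w - approx) >= eta:
            approx = w             # place a discretization point
        errs.append((w - approx) / eta)
    return errs

errs = normalized_tracking_errors()
# fraction of normalized errors in (-0.5, 0.5); ~0.75 under the
# triangular law, versus 0.5 for a uniform law
frac_inner = sum(1 for e in errs if abs(e) < 0.5) / len(errs)
```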
    Robust calibration of option valuation models to real world option prices is non-trivial, but as important for good performance as the valuation model itself. The standard textbook approach to option calibration is ordinary or weighted non-linear least squares ...
    We study the simulation of stochastic processes defined as stochastic integrals with respect to type G Lévy processes for the case where it is not possible to simulate the type G process exactly. The type G Lévy process as well as the stochastic integral can on compact ...
    Supplementary material: lecture, Advanced Financial Models, FMS161/MASM18 Financial Statistics, Magnus Wiktorsson, December 1, 2010.
    A random variable is said to be of type G if it is a Gaussian variance mixture with the mixing distribution being infinitely divisible. A Lévy process is said to be of type G if its increments is of type G. Every such Lévy process on [0, 1] can be represented as an infinite series ...
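    The variance-mixture representation suggests a direct sampling sketch: draw the mixing variable from an infinitely divisible law and scale a standard normal. The gamma mixing law below (giving variance-gamma increments) and its parameters are illustrative choices:

```python
import math
import random

def type_g_increments(n, t=1.0, shape=2.0, scale=0.5, seed=9):
    """Sample n increments of a type G Levy process over time step t
    as a Gaussian variance mixture: V ~ Gamma (an infinitely divisible
    mixing law, so this is a variance-gamma process) and
    X = sqrt(V) * Z with Z standard normal. Parameters are illustrative."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        v = rng.gammavariate(shape * t, scale)   # subordinator increment
        out.append(math.sqrt(v) * rng.gauss(0.0, 1.0))
    return out

xs = type_g_increments(20000)
mean = sum(xs) / len(xs)
# with E[V] = shape * scale = 1, the increments have mean 0, variance 1
var = sum(x * x for x in xs) / len(xs) - mean * mean
```

    Exact simulation needs the mixing law to be sampleable; when it is not, series representations such as the one referred to above are the natural substitute.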
    In a complete market setting contingent claims can be perfectly replicated by trading in the underlying or in some other derivative. In general if the market is modelled by a continuous time process then the hedge portfolio has to be rebalanced at every time instant. Continuous trading is ...
    ... THE SIMULATION OF STOCHASTIC PROCESSES. Magnus Wiktorsson, Centre for Mathematical Sciences, Mathematical Statistics. ISBN 91-628-4640-X, LUTFMS-1014-2001. © Magnus Wiktorsson, 2001. Printed in Sweden by KFS AB, Lund, 2001.
    Assumptions: Let σ̃(y) = σ(e^y). A1. (i) There is a positive constant σ_0 such that σ̃(y) ≥ σ_0 for all y ∈ R. (ii) The function σ̃ is bounded, uniformly Lipschitz continuous on compact subsets of R and uniformly Hölder continuous. A2. The functions (∂^k/∂y^k) σ̃(y), k ∈ {1, 2, 3, 4}, are ...
    We consider all two-times iterated Itô integrals obtained by pairing m independent standard Brownian motions. First we calculate the conditional joint characteristic function of these integrals, given the Brownian increments over the integration interval, and show that it has ...
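    A forward-Euler discretisation shows the basic structure of these objects; the discrete sums satisfy the exact integration-by-parts identity I12 + I21 + Σ ΔW1 ΔW2 = W1(1) W2(1), with the cross-variation term vanishing in the limit for independent Brownian motions:

```python
import math
import random

def iterated_integrals(n=100000, seed=5):
    """Forward-Euler approximation of the two-times iterated Ito
    integrals I12 = int W1 dW2 and I21 = int W2 dW1 for two
    independent standard Brownian motions on [0, 1]."""
    rng = random.Random(seed)
    dt = 1.0 / n
    w1 = w2 = 0.0
    i12 = i21 = cross = 0.0
    for _ in range(n):
        dw1 = math.sqrt(dt) * rng.gauss(0.0, 1.0)
        dw2 = math.sqrt(dt) * rng.gauss(0.0, 1.0)
        i12 += w1 * dw2            # left-point (Ito) evaluation
        i21 += w2 * dw1
        cross += dw1 * dw2         # discrete cross-variation
        w1 += dw1
        w2 += dw2
    return w1, w2, i12, i21, cross

w1, w2, i12, i21, cross = iterated_integrals()
```

    The identity pins down the symmetric part I12 + I21 exactly; it is the antisymmetric part (the Lévy area) that requires the distributional analysis via the characteristic function.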
    The Discrete Wavelet Transform (DWT) is a new way to analyze gas flow measurements in engine cylinders. The DWT technique differs significantly from the Fast Fourier Transform (FFT) in that with the former, the analysis of information can be done in both the time ...
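    The time localisation that distinguishes the DWT from the FFT is visible already in one level of the Haar transform, where each output coefficient depends only on two neighbouring samples (the signal below is arbitrary):

```python
def haar_dwt_step(x):
    """One level of the Haar discrete wavelet transform: scaled
    pairwise sums give the approximation, scaled pairwise differences
    the detail, so every coefficient stays localised in time."""
    s = 2 ** -0.5
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def haar_idwt_step(approx, detail):
    """Inverse of one Haar step (perfect reconstruction)."""
    s = 2 ** -0.5
    x = []
    for a, d in zip(approx, detail):
        x.extend([s * (a + d), s * (a - d)])
    return x

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = haar_dwt_step(signal)
rec = haar_idwt_step(a, d)
```

    Recursing on the approximation coefficients gives the full multilevel DWT; in contrast, every FFT coefficient mixes information from the whole record.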
    The evidence for dispersal activity among soil-living invertebrates comes mainly from observations of their movement on artificial substrates or of colonisation of defaunated soils in the field. In an attempt to elucidate the dispersal pattern of soil collembolans in the presence of conspecifics, statistical analyses were undertaken to describe and simulate the movement of groups of Onychiurus armatus released in trays of homogeneous soil. A chi-squared test was used to reject the null hypothesis that individuals moved independently of each other and uniformly in all directions. The mean radial distance moved (1-2 cm day(-1)) and the radial standard deviation varied temporally and with the density of conspecifics. To capture the interaction between the moving individuals, four dispersal models (pure diffusion, diffusion with drift interaction, drift interaction and synchronised diffusion, and drift interaction and behavioural mood), were formulated as stochastic differential equations. The parameters of the models were estimated by minimising the deviance between the observed replicates and replicates that were simulated using the models. The dynamics of movement were best described by modelling the drift interaction as dependent on whether individuals were in a social or an asocial mood.
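    A chi-squared test of this kind compares observed direction counts over angular sectors with the uniform expectation; the sketch below uses eight sectors and synthetic angles (the bin count and sample sizes are arbitrary choices):

```python
import math
import random

def chi2_uniform_directions(angles, n_bins=8):
    """Pearson chi-squared statistic for the null hypothesis that
    movement directions (radians in [0, 2*pi)) are uniform over
    n_bins angular sectors."""
    counts = [0] * n_bins
    for a in angles:
        counts[int(a / (2.0 * math.pi) * n_bins) % n_bins] += 1
    expected = len(angles) / n_bins
    return sum((c - expected) ** 2 / expected for c in counts)

rng = random.Random(2)
# synthetic data: one uniform sample, one sample biased towards angle 0
uniform = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(2000)]
biased = [rng.gauss(0.0, 0.5) % (2.0 * math.pi) for _ in range(2000)]

stat_u = chi2_uniform_directions(uniform)   # ~ chi2 with 7 df under H0
stat_b = chi2_uniform_directions(biased)    # far out in the tail
```

    With 8 bins the statistic has 7 degrees of freedom under the null, so values far above the upper chi-squared quantiles reject uniform movement.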
    We use a linear autoregressive model to describe the movement of a soil-living insect, Protaphorura armata (Collembola). Models of this kind can be viewed as extensions of a random walk, but unlike a correlated random walk, in which the speed and turning angles are independent, our model identifies and expresses the correlations between the turning angles and a variable speed. Our model uses data in x- and y-coordinates rather than in polar coordinates, which is useful for situations in which the resolution of the observations is limited. The movement of the insect was characterized by (i) looping behaviour due to autocorrelation and cross correlation in the velocity process and (ii) occurrence of periods of inactivity, which we describe with a Poisson random effects model. We also introduce obstacles to the environment to add structural heterogeneity to the movement process. We compare aspects such as loop shape, inter-loop time, holding angles at obstacles, net squared displacement, number, and duration of inactive periods between observed and predicted movement. The comparison demonstrates that our approach is relevant as a starting-point to predict behaviourally complex moving, e.g. systematic searching, in a heterogeneous landscape.
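    A minimal version of such a model is a VAR(1) process for the velocity in x-y coordinates, where a damped rotation couples the components and produces looping; all parameter values below are illustrative, and the inactivity and obstacle components are omitted:

```python
import math
import random

def simulate_var1_movement(n=500, turn=0.15, damp=0.95, noise=0.05, seed=4):
    """Linear autoregressive (VAR(1)) velocity model in x-y
    coordinates: v_{t+1} = damp * R(turn) * v_t + noise, where R is a
    rotation matrix. The rotation correlates the velocity components,
    which is what generates looping paths."""
    rng = random.Random(seed)
    c, s = math.cos(turn), math.sin(turn)
    vx, vy = 1.0, 0.0          # initial velocity
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(n):
        vx, vy = (damp * (c * vx - s * vy) + noise * rng.gauss(0.0, 1.0),
                  damp * (s * vx + c * vy) + noise * rng.gauss(0.0, 1.0))
        x, y = x + vx, y + vy
        path.append((x, y))
    return path

path = simulate_var1_movement()
```

    Working directly in x-y coordinates, as here, avoids the polar-coordinate degeneracy when observed displacements are near the resolution limit.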
    Analysis of small-scale movement patterns of animals may help us to understand and predict movement at a larger scale, such as dispersal, which is a key parameter in spatial population dynamics. We have chosen to study the movement of a soil-dwelling collembolan, Protaphorura armata, in an experimental system consisting of a clay surface with or without physical obstacles. A combination of video recordings, descriptive statistics, and walking simulations was used to evaluate the movement pattern. Individuals were found to link periods of irregular walking with periods of looping, in a homogeneous environment as well as in one made heterogeneous by physical obstacles. The number of loops varied between 0 and 44 per hour from one individual to another, and some individuals preferred to make loops by turning right and others by turning left. P. armata spent less time at the boundary of small obstacles compared to large ones, presumably because of a lower probability of tracking the steepness of the curvature as the individual walks along a highly curved surface. Food deprived P. armata had a more winding movement and made more circular loops than those that were well fed. The observed looping behaviour is interpreted in the context of systematic search strategies and compared with similar movement patterns found in other species.