ABSTRACT We consider the utility maximization problem for an investor who faces a solvency or risk constraint in addition to a budget constraint. The investor wishes to maximize her expected utility from terminal wealth subject to a bound on her expected solvency at maturity. We measure solvency using a solvency function applied to the terminal wealth. The motivation for our analysis is an optimal investment problem where the investor faces a random and unhedgeable liability at maturity.
... 2. By setting g(x) = x^2 we obtain the one-factor squared Gaussian model described by Pelsser [1997]. 3. By setting g(x) = e^x we obtain a version of the Black and Karasinski [1991] model. ... [1994] and Black and Karasinski [1991] when they originally proposed their models. ...
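The transforms in the snippet above can be illustrated with a minimal simulation sketch: a Gaussian mean-reverting factor x is simulated, and the short rate is obtained by applying g to it. All parameter values below are hypothetical, chosen only to make the example run; they do not come from the paper.

```python
import numpy as np

def simulate_gaussian_factor(x0, a, theta, sigma, dt, n_steps, rng):
    """Euler simulation of the underlying Gaussian (mean-reverting) factor x."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        x[i + 1] = x[i] + a * (theta - x[i]) * dt \
                   + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

rng = np.random.default_rng(0)
x = simulate_gaussian_factor(x0=0.05, a=0.1, theta=0.05, sigma=0.01,
                             dt=1 / 252, n_steps=252, rng=rng)

r_squared = x ** 2       # g(x) = x^2: squared Gaussian short rate (non-negative)
r_lognormal = np.exp(x)  # g(x) = e^x: Black-Karasinski-type short rate (positive)
```

Both choices of g guarantee a short rate that cannot become negative, which is one motivation for transforming a Gaussian factor rather than using it directly.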
ABSTRACT This article presents a novel approach for calculating swap vega per bucket in the Libor BGM model. We show that for some forms of the volatility an approach based on re-calibration may lead to a large uncertainty in estimated swap vega, as the instantaneous volatility structure may be distorted by re-calibration. This does not happen in the case of constant swap rate volatility. We then derive an alternative approach, not based on re-calibration, by comparison with the swap market model. The strength of the method is that it accurately estimates vegas for any volatility function and at a low number of simulation paths. The key to the method is that the perturbation in the Libor volatility is distributed in a clear, stable and well understood fashion, whereas in the re-calibration method the change in volatility is hidden and potentially unstable.
... (2008). In this paper, building on the results of van Haastrecht et al. (2008), Antonov et al. (2008) ...

dn(t) = [ϑ_n(t) − a_n n(t)] dt + σ_n dW_n(t),   (1)
dr(t) = [ϑ_r(t) − ρ_{r,I} σ_I σ_r − a_r r(t)] dt + σ_r dW_r(t),   (2)
dI(t) = I(t)[n(t) − r(t)] dt + σ_I I(t) dW_I(t),   (3)

with ...
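The system (1)-(3) can be discretized with a simple Euler scheme. The sketch below uses hypothetical constant values for ϑ_n, ϑ_r and the other parameters (none of them come from the paper), and independent Brownian increments for brevity:

```python
import numpy as np

def euler_step(n, r, I, p, dW_n, dW_r, dW_I, dt):
    """One Euler step for the nominal rate n, the real rate r and the inflation
    index I, following the drift and diffusion terms of equations (1)-(3)."""
    n_new = n + (p["theta_n"] - p["a_n"] * n) * dt + p["sigma_n"] * dW_n
    r_new = r + (p["theta_r"] - p["rho_rI"] * p["sigma_I"] * p["sigma_r"]
                 - p["a_r"] * r) * dt + p["sigma_r"] * dW_r
    I_new = I + I * (n - r) * dt + p["sigma_I"] * I * dW_I
    return n_new, r_new, I_new

# Hypothetical parameter values, chosen only to make the sketch run.
p = dict(theta_n=0.002, a_n=0.05, sigma_n=0.01,
         theta_r=0.001, a_r=0.04, sigma_r=0.008,
         sigma_I=0.012, rho_rI=-0.3)
rng = np.random.default_rng(1)
dt = 1 / 252
n, r, I = 0.03, 0.01, 1.0
for _ in range(252):
    # Independent increments for simplicity; the model's correlations (such as
    # rho_{r,I} between r and I) would require a Cholesky factor in practice.
    dW = rng.standard_normal(3) * np.sqrt(dt)
    n, r, I = euler_step(n, r, I, p, dW[0], dW[1], dW[2], dt)
```

Note how the correlation adjustment −ρ_{r,I} σ_I σ_r enters only the drift of the real rate, exactly as in equation (2).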
A broad class of exotic interest rate derivatives can be valued simply by adjusting the forward interest rate. This adjustment is known in the market as convexity correction. Various ad hoc rules are used to calculate the convexity correction for different products, many of them mutually inconsistent. In this research paper we put convexity correction on a firm mathematical basis.
International Journal of Theoretical and Applied Finance, 2010
ABSTRACT We deal with discretization schemes for the simulation of the Heston stochastic volatility model. These simulation methods offer a popular and flexible alternative for pricing and managing a book of exotic derivatives that cannot be valued using closed-form expressions. For the Heston dynamics an exact simulation method was developed by Broadie and Kaya (2006); however, we argue that its practical use is limited. Instead we focus on efficient approximations of the exact scheme, aimed at resolving the disadvantages of this method. One of the main bottlenecks in the exact scheme is the simulation of the non-central chi-squared distributed variance process, for which we suggest an efficient caching technique. At first sight the creation of a cache containing the inverses of this distribution might seem straightforward; however, as the parameter space of the inverse non-central chi-squared distribution is three-dimensional, the design of such a direct cache is rather complicated, as pointed out by Broadie and Andersen. Nonetheless, for the case of the Heston model we are able to tackle this dimensionality problem and show that the three-dimensional inverse of the non-central chi-squared distribution can effectively be reduced to a one-dimensional cache. This analysis leads to the development of three new efficient simulation methods (the NCI, NCI-QE and BK-DI schemes). Finally, we conclude with a comprehensive numerical study of these new schemes and the exact scheme of Broadie and Kaya, the almost exact scheme of Smith, the Kahl-Jäckel scheme, the FT scheme of Lord et al. and the QE-M scheme of Andersen.
From these results we find that the QE-M scheme is the most efficient, followed closely by the NCI-M, NCI-QE-M and BK-DI-M schemes, while all other considered schemes are a factor of 6 to 70 less efficient than these four methods.
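The general idea behind inverse-CDF caching can be sketched as follows. This is only a generic illustration for one fixed degrees-of-freedom/non-centrality pair, not the paper's dimension reduction: in the Heston model the non-centrality parameter depends on the current variance, and handling that dependence with a single cache dimension is the paper's contribution.

```python
import numpy as np

def build_inverse_cache(df, nc, n_grid=512, n_pilot=200_000, seed=42):
    """Tabulate an approximate inverse CDF of the non-central chi-squared
    distribution from sorted pilot samples, for one fixed (df, nc) pair."""
    rng = np.random.default_rng(seed)
    pilot = np.sort(rng.noncentral_chisquare(df, nc, size=n_pilot))
    u = (np.arange(n_grid) + 0.5) / n_grid      # probability grid
    q = pilot[(u * n_pilot).astype(int)]        # empirical quantiles
    return u, q

def sample_from_cache(u, q, uniforms):
    """Drawing a variate reduces to one interpolation in the cached table."""
    return np.interp(uniforms, u, q)

u, q = build_inverse_cache(df=2.0, nc=1.5)
rng = np.random.default_rng(7)
samples = sample_from_cache(u, q, rng.random(100_000))
```

Once the table is built, each draw costs only a table lookup and an interpolation, which is what makes caching attractive inside a Monte Carlo loop.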
... Initially, the work was focused on valuing return guarantees embedded in equity-linked insurance policies; see, for example, Brennan and Schwartz (1976), Boyle and Schwartz (1977), Aase and Persson (1994), Boyle and Hardy (1997) and Bacinello and Persson (2002). ...
Some recent results for frictionless economies show that popular dynamic portfolio strategies such as stop-loss and lock-in are inefficient: for each of these strategies there exists an alternative portfolio strategy that gives the same final payoff distribution at lower initial cost. However, the alternative strategies require considerably more active trading than the simple strategies. These results rely heavily on the assumption of no transaction costs, under which the initial investment required is a linear function of the prices of the contingent claims that build the final payoff distribution. In this paper we demonstrate that, even for modest levels of transaction costs, the efficient strategies are more costly than the simple strategies; that is, a strategy that replicates the final payoff distribution of an efficient strategy is excessively costly due to the transaction costs and the heavy trading involved. Since the initial investment is no longer a linear function of the contingent claims, the optimization problems to find the most efficient strategy become complicated combinatorial optimization problems which can only be solved for trees with a small number of steps. In a world without transaction costs, options are redundant instruments, since all payoff distributions can be replicated by trading in stocks and bonds. In the second half of this paper we show that the use of options in a world with transaction costs enables investors to realize final value distributions at lower initial cost than would be possible with trades in stocks and bonds only. Hence, although in theory options do not give rise to other portfolio strategies, they do in a more restrictive setting with transaction costs.