Risk Modelling Project

Robust Portfolio Optimization

Dr. Martin A. Negron

Introduction

In Markowitz portfolios the use of sample estimators for the expected returns and the covariance matrix can result in sub-optimal portfolio results due to estimation error.

Extreme portfolio weights and/or erratic swings in the asset mix are commonly observed in ex post simulations. In general, this empirical fact is undesirable because of transaction costs.

From a statistical point of view, these artifacts can mainly be attributed to the sensitivity of the ordinary sample estimators with respect to outliers. These outlying data points influence the dispersion estimates to a lesser extent than the means.

Hence, though not for this reason alone, minimum-variance portfolios are advocated over mean-variance portfolios.


It would be desirable to have estimators available that lessen the impact of outliers and thus produce estimates representative of the bulk of the sample data, and/or optimization techniques that incorporate estimation errors directly.

The former can be achieved by utilizing robust statistics and the latter by employing robust optimization techniques.

Robust Statistics

It was previously stated that in empirical work the arithmetic mean and the sample covariance are widely employed in estimating the corresponding theoretical location and dispersion moments of the population.

When the distributional assumption is not met, the estimators lose their desirable properties. In fact, the arithmetic mean—as an estimator for the location of a population—is sensitive to extreme observations, such that the estimate does not reflect the bulk of the data well.

The field of robust statistics deals with problems of this kind and offers solutions in the form of robust estimators and inference based upon these.


An outlier problem could be addressed by means of trimming (removal of outliers) or winsorizing (setting extreme observations equal to a fixed quantile value).

Both methods can be considered means of robustification. Unfortunately, the outcome of these procedures depends greatly on the subjective choices of the researcher.
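Both procedures take only a few lines of base R. In the sketch below the sample values and the 10% cut-offs are arbitrary illustrative choices:

```r
# A sample with one extreme observation
x <- c(2, 3, 4, 5, 6, 7, 8, 9, 10, 100)
mean(x)                    # heavily influenced by the outlier: 15.4

# Trimming: drop 10% of the observations from each tail
mean(x, trim = 0.1)        # mean of 3, ..., 10: 6.5

# Winsorizing: pull observations beyond the 10%/90% quantiles
# back to those quantile values
q <- quantile(x, probs = c(0.1, 0.9))
xw <- pmin(pmax(x, q[1]), q[2])
mean(xw)
```

Note how both choices (the trimming fraction and the winsorizing quantiles) are set by the researcher, which is exactly the subjective element criticized above.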

The median could be employed as an estimator for the location and the median absolute deviation (MAD) for the scale. Both produce more robust estimates than the ordinary sample estimators, the arithmetic mean and the standard deviation.

The advantage of these robust estimators is that they avoid specifying in advance which data points are viewed as outliers. Hence, a subjective element in the data analysis is removed.
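The difference can be seen in numbers with a toy sample containing a single extreme observation (the values below are arbitrary):

```r
# Location and scale estimates with and without robustness
x <- c(1, 2, 3, 4, 5, 100)
mean(x)      # dragged towards the outlier: about 19.17
median(x)    # reflects the bulk of the data: 3.5
sd(x)        # inflated by the outlier: about 39.6
mad(x)       # median absolute deviation (scaled): about 2.22
```

No observation had to be declared an outlier in advance; the median and the MAD simply give extreme points less influence.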

It is worth mentioning that so far the term “outlier” or “extreme observation” has not been precisely defined.

Portfolio simulation: robust vs classical statistics

In this empirical application a simulation comparing classical and robust estimators will be conducted. Returns for five fictional assets will be randomly generated according to one of the following data-generating processes (DGPs):

Gauss copula with normally distributed margins

Gauss copula with t-distributed margins

Student’s t copula with t-distributed margins.


The first DGP, the Gauss copula with normal margins, corresponds to the case where the classical estimators are the best linear unbiased estimators; the other two DGPs reflect the stylized facts of financial market returns, namely excess kurtosis and tail dependence.

For the t distribution five degrees of freedom have been chosen, and the sample sizes for all DGPs are 60, 120, and 240 observations.
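Five is the smallest integer choice of degrees of freedom for which the t distribution still has a finite fourth moment; its excess kurtosis is then 6/(df - 4) = 6, compared with 0 for the normal distribution. A one-line check of this standard formula:

```r
# Excess kurtosis of the t distribution, defined only for df > 4
excess_kurtosis_t <- function(df) {
  stopifnot(df > 4)
  6 / (df - 4)
}
excess_kurtosis_t(5)   # 6: much heavier tails than the normal (0)
```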

Assuming a monthly frequency, this corresponds to a 5-, 10-, and 20-year time span for the data in each of the portfolio optimizations.

For each DGP a set of 1000 samples is generated. An equally correlated dependence structure between the five assets has been assumed, with a value of ρ = 0.5.
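The equally correlated dependence structure can be made explicit. A quick base-R sketch (dimension and ρ taken from the text above) confirms that the implied 5 × 5 correlation matrix is positive definite, with eigenvalues 1 + 4ρ = 3 and 1 - ρ = 0.5:

```r
# Equicorrelation matrix for five assets with rho = 0.5
rho <- 0.5
k <- 5
R <- matrix(rho, nrow = k, ncol = k)
diag(R) <- 1
# Eigenvalues: 1 + (k - 1) * rho (once) and 1 - rho (k - 1 times),
# all positive, so R is a valid correlation matrix
ev <- eigen(R, symmetric = TRUE, only.values = TRUE)$values
</code
```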


Load the required packages

library(copula)
library(quadprog)
library(rrcov)

## Loading required package: robustbase

## Scalable Robust Estimators with High Breakdown Point (version 1.5-2)


Copula objects for the Gauss and Student’s t copula are created and named ncop and tcop, respectively.

ncop <- normalCopula(param = 0.5, dim = 5)
tcop <- tCopula(param = 0.5, dim = 5, df = 5, df.fixed = TRUE)

The DGPs are created by utilizing the function mvdc(). This requires an object of class copula and a list object that contains the parameter information about the assumed marginal distributions. The objects are labeled NcopMargN, NcopMargT, and TcopMargT for the three multivariate distribution models.

mvdc(): density, distribution function, and random generator for a multivariate distribution via a copula and parametric margins.

NcopMargN <- mvdc(ncop, margins = "norm",
                  paramMargins = list(list(mean = 0, sd = 1)),
                  marginsIdentical = TRUE)
NcopMargT <- mvdc(ncop, margins = "t",
                  paramMargins = list(df = 5),
                  marginsIdentical = TRUE)
TcopMargT <- mvdc(tcop, margins = "t",
                  paramMargins = list(df = 5),
                  marginsIdentical = TRUE)

The objects created are then employed to draw the random samples. To do so, first a list object, Lobj, is created and the size of the simulation is assigned as its length. A seed is set for replication purposes.

Lobj <- list()
length(Lobj) <- 1000
set.seed(12345)

In principle, the 1000 random samples for each of the DGPs could be assigned as list elements with a for loop, but it is more in the nature of R to use lapply() instead. In the following three lines the list object Lobj is used for this purpose and the results are stored in the objects rNcopMargN, rNcopMargT, and rTcopMargT. These list objects consist of 1000 samples for each of the DGPs, each with 240 rows and five columns for the fictional asset returns. Sample data for the shorter sample spans can then be swiftly extracted from the list elements.
rNcopMargN <- lapply(Lobj, function(x) rMvdc(240, NcopMargN))
rNcopMargT <- lapply(Lobj, function(x) rMvdc(240, NcopMargT))
rTcopMargT <- lapply(Lobj, function(x) rMvdc(240, TcopMargT))

As the next step, a function is created that returns the dispersion estimates for the classical and robust methods. This function can then be applied to the simulated data sets for each of the DGPs and subsequently used for optimizing the minimum-variance portfolios. The comparative simulation will encompass the classical estimator and the M, MM, S, MCD, MVE, SD, and OGK robust estimators.

The function is specified with three arguments. The first, x, is used for the random data set; the second determines which estimator is employed; and the third is the ellipsis argument, which is passed down to do.call() so that the user has command over the arguments of the estimating function. In the first line of the function body, partial matching of the estimating function's name is enabled. In the second line, the function determined by the argument method is applied to the data set x. Finally, the estimate is extracted from the object ans by means of the accessor function getCov() and returned.

Moments <- function(x, method = c("CovClassic", "CovMcd", "CovMest",
                                  "CovMMest", "CovMve", "CovOgk",
                                  "CovSde", "CovSest"), ...){
  method <- match.arg(method)
  ans <- do.call(method, list(x = x, ...))
  return(getCov(ans))
}

CovClassic computes an estimate of the covariance/correlation matrix and location vector using classical methods.

CovMcd computes the Minimum Covariance Determinant (MCD) estimator, a robust multivariate location and scale estimate with a high breakdown point, via the 'Fast MCD' or 'Deterministic MCD' ('DetMcd') algorithm.
CovMest computes constrained M-estimates of multivariate location and scatter based on the translated biweight function ('t-biweight'), using a high-breakdown-point initial estimate. The default initial estimate is the Minimum Volume Ellipsoid computed with CovMve. The raw (not reweighted) estimates are taken and the covariance matrix is standardized to determinant 1.

CovMMest computes MM-estimates of multivariate location and scatter, starting from an initial S-estimate.

CovMve computes a robust multivariate location and scatter estimate with a high breakdown point, using the 'MVE' (Minimum Volume Ellipsoid) estimator.

CovOgk computes a robust multivariate location and scatter estimate with a high breakdown point, using the pairwise algorithm proposed by Maronna and Zamar (2002), which in turn is based on the pairwise robust estimator proposed by Gnanadesikan and Kettenring (1972).

CovSde computes a robust estimate of location and scatter using the Stahel-Donoho projection-based estimator.

CovSest computes S-estimates of multivariate location and scatter based on Tukey's biweight function, using a fast algorithm similar to the one proposed by Salibian-Barrera and Yohai (2006) for the regression case. Alternatively, Ruppert's SURREAL algorithm, bisquare, or Rocke type estimation can be used.

The function can now be used to estimate the dispersion (second moment) for each of the list elements, for each of the DGPs and sample sizes. First, the dimensions of the simulation study are defined. There are three models for the multivariate distributions; the corresponding list objects were created in earlier steps, and each contains 1000 elements in the form of (240 × 5) matrices.

DGP <- c("rNcopMargN", "rNcopMargT", "rTcopMargT")

The function names of the estimators to be used are collected in the character vector EST, and the sample sizes are set in the numeric vector SAMPLE.
EST <- c("CovClassic", "CovMcd", "CovMest", "CovMMest",
         "CovMve", "CovOgk", "CovSde", "CovSest")
SAMPLE <- c(60, 120, 240)

In the first double for loop, the object names are created and displayed by the cat() function so that the user can better track the progress of the loop. In addition, the names of these list objects are saved in the vector datnames for use in the second for loop. In the last line of the loop body, lapply() extracts from each list element, for each DGP, the number of rows given by SAMPLE. Thus, after the execution of the loop has finished, nine new list objects have been created. With these data sets at hand, one can proceed to estimate the moments for each single list element.

datnames <- NULL
for(i in DGP){
  for(j in SAMPLE){
    objname <- paste(i, j, sep = "")
    datnames <- c(datnames, objname)
    cat(paste("Creating list object", objname, "\n"))
    assign(objname, lapply(eval(as.name(i)),
                           function(x) x[1:j, ]))
  }
}

## Creating list object rNcopMargN60
## Creating list object rNcopMargN120
## Creating list object rNcopMargN240
## Creating list object rNcopMargT60
## Creating list object rNcopMargT120
## Creating list object rNcopMargT240
## Creating list object rTcopMargT60
## Creating list object rTcopMargT120
## Creating list object rTcopMargT240

The names of the list objects that will store the covariance estimates are created next, and these names are likewise displayed to enable progress monitoring while the for loop is executed. As in the first loop, the names of these objects are saved, here in the character vector objnames. In the final line, lapply() is used to apply the function Moments() to each list element for each of the DGPs and sample sizes. After the successful completion of this loop, a total of 3 × 3 × 8 = 72 list objects have been created, each containing 1000 dispersion estimates. The portfolio optimizations are then conducted with respect to these 72 objects, which implies a total of 72 000 optimizations.
objnames <- NULL
for(i in datnames){
  for(j in EST){
    objname <- paste(j, i, sep = "")
    objnames <- c(objnames, objname)
    cat(paste("Creating list object", objname, "\n"))
    assign(objname, lapply(eval(as.name(i)), Moments, method = j))
  }
}

## Creating list object CovClassicrNcopMargN60
## Creating list object CovMcdrNcopMargN60
## Creating list object CovMestrNcopMargN60
## Creating list object CovMMestrNcopMargN60
## Creating list object CovMverNcopMargN60
## Creating list object CovOgkrNcopMargN60
## Creating list object CovSderNcopMargN60
## Creating list object CovSestrNcopMargN60
## Creating list object CovClassicrNcopMargN120
## Creating list object CovMcdrNcopMargN120
## Creating list object CovMestrNcopMargN120
## Creating list object CovMMestrNcopMargN120
## Creating list object CovMverNcopMargN120
## Creating list object CovOgkrNcopMargN120
## Creating list object CovSderNcopMargN120
## Creating list object CovSestrNcopMargN120
## Creating list object CovClassicrNcopMargN240
## Creating list object CovMcdrNcopMargN240
## Creating list object CovMestrNcopMargN240
## Creating list object CovMMestrNcopMargN240
## Creating list object CovMverNcopMargN240
## Creating list object CovOgkrNcopMargN240
## Creating list object CovSderNcopMargN240
## Creating list object CovSestrNcopMargN240
## Creating list object CovClassicrNcopMargT60
## Creating list object CovMcdrNcopMargT60
## Creating list object CovMestrNcopMargT60
## Creating list object CovMMestrNcopMargT60
## Creating list object CovMverNcopMargT60
## Creating list object CovOgkrNcopMargT60
## Creating list object CovSderNcopMargT60
## Creating list object CovSestrNcopMargT60
## Creating list object CovClassicrNcopMargT120
## Creating list object CovMcdrNcopMargT120
## Creating list object CovMestrNcopMargT120
## Creating list object CovMMestrNcopMargT120
## Creating list object CovMverNcopMargT120
## Creating list object CovOgkrNcopMargT120
## Creating list object CovSderNcopMargT120
## Creating list object CovSestrNcopMargT120
## Creating list object CovClassicrNcopMargT240
## Creating list object CovMcdrNcopMargT240
## Creating list object CovMestrNcopMargT240
## Creating list object CovMMestrNcopMargT240
## Creating list object CovMverNcopMargT240
## Creating list object CovOgkrNcopMargT240
## Creating list object CovSderNcopMargT240
## Creating list object CovSestrNcopMargT240
## Creating list object CovClassicrTcopMargT60
## Creating list object CovMcdrTcopMargT60
## Creating list object CovMestrTcopMargT60
## Creating list object CovMMestrTcopMargT60
## Creating list object CovMverTcopMargT60
## Creating list object CovOgkrTcopMargT60
## Creating list object CovSderTcopMargT60
## Creating list object CovSestrTcopMargT60
## Creating list object CovClassicrTcopMargT120
## Creating list object CovMcdrTcopMargT120
## Creating list object CovMestrTcopMargT120
## Creating list object CovMMestrTcopMargT120
## Creating list object CovMverTcopMargT120
## Creating list object CovOgkrTcopMargT120
## Creating list object CovSderTcopMargT120
## Creating list object CovSestrTcopMargT120
## Creating list object CovClassicrTcopMargT240
## Creating list object CovMcdrTcopMargT240
## Creating list object CovMestrTcopMargT240
## Creating list object CovMMestrTcopMargT240
## Creating list object CovMverTcopMargT240
## Creating list object CovOgkrTcopMargT240
## Creating list object CovSderTcopMargT240
## Creating list object CovSestrTcopMargT240

The optimization is carried out by employing solve.QP() from the package quadprog, which solves quadratic programs of the form min(-d'b + 1/2 b'Db) subject to A'b >= b0, where the first meq constraints are treated as equalities. The minimum-variance optimization takes place under the constraints of being fully invested (objects a1 and b1) and of allowing long positions only (objects a2 and b2). The function returns the weight vector.

PortMinVar <- function(x){
  Dmat <- x
  k <- ncol(Dmat)
  dvec <- rep.int(0, k)
  a1 <- rep.int(1, k)
  b1 <- 1
  a2 <- diag(k)
  b2 <- rep.int(0, k)
  Amat <- t(rbind(a1, a2))
  bvec <- c(b1, b2)
  opt <- solve.QP(Dmat = Dmat, dvec = dvec, Amat = Amat,
                  bvec = bvec, meq = 1)
  return(opt$solution)
}

In the following for loop the optimizations are carried out, and the 1000 portfolio risk figures are stored for each of the DGP, estimator, and sample size combinations.

portnames <- NULL
idx <- 1:1000
for(i in objnames){
  objname <- paste("Port", i, sep = "")
  portnames <- c(portnames, objname)
  obj <- eval(as.name(i))
  weights <- lapply(obj, PortMinVar)
  assign(objname, sapply(idx, function(x)
    sqrt(t(weights[[x]]) %*% obj[[x]] %*% weights[[x]])))
}

The median and interquartile range (IQR) are computed for the portfolio risks.
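Before the summary statistics are computed, a quick plausibility check of PortMinVar() is possible: when the long-only constraints are not binding, the fully invested minimum-variance solution has the closed form w = Σ⁻¹1 / (1'Σ⁻¹1). For an equicorrelated covariance matrix with equal variances, as in the simulation design, this solution is the equally weighted portfolio, so the optimizer should return weights of 1/5 per asset. A base-R sketch (the matrix values are chosen to mirror the simulation design):

```r
# Closed-form fully invested minimum-variance weights:
# w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
min_var_weights <- function(Sigma) {
  ones <- rep(1, ncol(Sigma))
  w <- solve(Sigma, ones)   # Sigma^{-1} %*% 1
  w / sum(w)                # scale to full investment
}

# Equicorrelated covariance matrix, unit variances, rho = 0.5
Sigma <- matrix(0.5, nrow = 5, ncol = 5)
diag(Sigma) <- 1
w <- min_var_weights(Sigma)
w                                 # 0.2 for every asset
sqrt(drop(t(w) %*% Sigma %*% w))  # implied portfolio risk
```

Because all weights are nonnegative here, the long-only quadratic program solved by PortMinVar() should reproduce this closed-form answer.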
mednames <- NULL
iqrnames <- NULL
for(i in portnames){
  objname1 <- paste("Med", i, sep = "")
  objname2 <- paste("IQR", i, sep = "")
  mednames <- c(mednames, objname1)
  iqrnames <- c(iqrnames, objname2)
  assign(objname1, median(eval(as.name(i))))
  assign(objname2, IQR(eval(as.name(i))))
}

The medians and interquartile ranges are reported in the columns for each of the three DGPs. Even in the case of the Gauss copula with normally distributed margins, the classical covariance estimator does not yield the portfolio structure of lowest risk on average, and the dispersion of the portfolio risks is also not the smallest. This result holds for all sample sizes, but the differences become negligible for T = 240 with respect to the median risks. Lower portfolio risks can be achieved with the robust OGK and Stahel-Donoho estimators for this DGP. For T = 60 the M-, MM-, and S-estimators excel likewise.

For the second DGP, a multivariate distribution with excess kurtosis but no tail dependence, the risk levels increase for all types of estimator, and this increase is most pronounced for the classical estimator. In contrast, the increase in the median risks for the robust estimators is rather negligible, and all of these estimators outperform the classical estimator, that is, they produce portfolio allocations of lower risk. The OGK estimator fares best.

This picture remains largely unchanged for the third DGP, where tail dependencies now exist between the included assets. The differences between the second and third DGPs in the median levels are rather small for all estimators and sample sizes. One can conclude that tail dependence does not have a material impact on the overall portfolio risk for the given simulation design.

When the interquartile ranges of the portfolio risks for each of the estimators and sample sizes are compared, two features emerge.
First, in the case of the Gaussian DGP, the portfolio risks implied by the classical estimator are more concentrated than those of the robust estimators for all sample sizes. This result confirms the best linear unbiased estimator property of the classical estimator. Second, even when the assumption of normality is only slightly violated, the implied portfolio risks of the classical estimator are dispersed more widely than those of most of the robust estimators. For the latter, the interquartile range remains largely the same, regardless of the DGP.
