
Investment Strategy Assignment


Click here to download it!

 

In this advanced university assignment, I conducted a comprehensive replication and comparative stress test of three distinct eras of portfolio management theory. The project was structured as three separate "works" within a single framework, where I replicated the core algorithms from seminal academic papers to test whether theoretical sophistication translates to out-of-sample performance in the modern market.

My analysis focused on the 11 GICS sectors of the S&P 500 (proxied by the SPDR sector ETFs) over a highly turbulent period running from mid-2018 to May 2025.

Part 1: Replicating DeMiguel, Garlappi, & Uppal (2009)
The Hypothesis: "Optimal vs. Naive Diversification"

The first pillar of my work involved replicating the findings of DeMiguel et al., famously known as the "1/N" paper.

The Theory: The authors argue that due to estimation errors in forecasting returns and covariances, sophisticated optimization models often fail to outperform a simple Equally Weighted portfolio out-of-sample.

My Implementation: I constructed the "Naive" benchmark portfolio, rebalancing it to ensure equal capital allocation across all sectors regardless of their volatility or correlation.
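The mechanics of the 1/N benchmark are simple enough to sketch in a few lines (a minimal sketch; the function name is mine and NumPy is assumed). Because the portfolio is rebalanced back to equal weights every period, its per-period return collapses to the cross-sectional mean of the asset returns:

```python
import numpy as np

def naive_portfolio_returns(returns: np.ndarray) -> np.ndarray:
    """Per-period returns of a 1/N portfolio rebalanced every period.

    `returns` is a (T, N) matrix of simple asset returns. Rebalancing
    back to equal weights each period means the portfolio return is
    simply the cross-sectional mean of the N asset returns.
    """
    return returns.mean(axis=1)

# Toy example with 2 periods and 2 assets: the means are 0.02 and 0.03
r = np.array([[0.01, 0.03],
              [0.02, 0.04]])
print(naive_portfolio_returns(r))
```

The appeal of the heuristic is visible in the code itself: there is nothing to estimate, so there is nothing to mis-estimate.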

The Goal: This served as the "null hypothesis" of my study. If the complex models from Parts 2 and 3 could not beat this simple heuristic, they failed the test of practical utility.

Part 2: Replicating Markowitz (1952)
The Hypothesis: "Mean-Variance Efficiency"

In the second part, I went back to the foundations of Modern Portfolio Theory to replicate Harry Markowitz's Mean-Variance Optimization (MVO).

The Implementation: I coded the quadratic optimization problem to minimize portfolio variance for a target expected return. This involved solving the Lagrangian function to find the optimal weight vector w* based on the inverse of the covariance matrix.

The Extension: I also implemented the Minimum Variance and Global Minimum Variance (GMVP) portfolios; the GMVP sits at the leftmost point of the efficient frontier and theoretically carries the lowest attainable risk of any fully invested portfolio.
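The closed-form Lagrangian solution described above can be sketched as follows (a sketch under my own naming, with NumPy assumed; shorting is allowed and the inputs are taken as given estimates). With A = 1ᵀΣ⁻¹1, B = 1ᵀΣ⁻¹μ, C = μᵀΣ⁻¹μ, and D = AC − B², the optimal weights for a target return μ_t are w* = [(C − Bμ_t)Σ⁻¹1 + (Aμ_t − B)Σ⁻¹μ] / D, and the GMVP is the special case w = Σ⁻¹1 / (1ᵀΣ⁻¹1):

```python
import numpy as np

def mvo_weights(mu: np.ndarray, sigma: np.ndarray, target: float) -> np.ndarray:
    """Minimum-variance weights for a target expected return (fully
    invested, shorting allowed), via the Lagrangian first-order
    conditions: w* = lam * inv(S) @ 1 + gam * inv(S) @ mu."""
    inv = np.linalg.inv(sigma)
    ones = np.ones(len(mu))
    a = ones @ inv @ ones   # 1' S^-1 1
    b = ones @ inv @ mu     # 1' S^-1 mu
    c = mu @ inv @ mu       # mu' S^-1 mu
    d = a * c - b**2
    lam = (c - b * target) / d
    gam = (a * target - b) / d
    return lam * (inv @ ones) + gam * (inv @ mu)

def gmvp_weights(sigma: np.ndarray) -> np.ndarray:
    """Global Minimum Variance Portfolio: w = S^-1 1 / (1' S^-1 1)."""
    inv = np.linalg.inv(sigma)
    ones = np.ones(sigma.shape[0])
    return (inv @ ones) / (ones @ inv @ ones)
```

Note that both formulas hinge on inverting Σ, which is exactly where ill-conditioned covariance matrices cause the instability discussed below.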

The Flaw Tested: I specifically aimed to test the "Markowitz Curse"—the model's extreme sensitivity to inputs. By feeding it data from the COVID-19 crash and the 2022 bear market, I sought to observe how the "error maximization" property of the algorithm would handle ill-conditioned covariance matrices.

Part 3: Replicating Lopez de Prado (2016)
The Hypothesis: "Hierarchical Risk Parity (HRP)"

For the final and most advanced part of the assignment, I replicated the machine-learning approach proposed by Marcos Lopez de Prado.

The Innovation: De Prado argues that treating the correlation matrix as a complete graph (where every asset connects to every other asset) causes instability. Instead, he proposes using graph theory and machine learning to introduce a hierarchy.

My Implementation: I built the HRP algorithm from scratch, following three distinct steps:

  1. Tree Clustering: I used Hierarchical Clustering to group the S&P 500 sectors into a dendrogram based on their correlation distance.
  2. Quasi-Diagonalization: I reorganized the covariance matrix to place similar assets closer together, enhancing stability.
  3. Recursive Bisection: I implemented a top-down capital allocation strategy, splitting weights recursively down the tree based on cluster variance.
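The three steps above can be sketched compactly with SciPy's hierarchical-clustering tools (an illustrative reduction under my own naming, not the full paper implementation; it assumes a symmetric covariance matrix with positive diagonal):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import squareform

def cluster_variance(cov: np.ndarray, idx: list) -> float:
    """Variance of the inverse-variance-weighted sub-portfolio on idx."""
    sub = cov[np.ix_(idx, idx)]
    ivp = 1.0 / np.diag(sub)
    ivp /= ivp.sum()
    return float(ivp @ sub @ ivp)

def hrp_weights(cov: np.ndarray) -> np.ndarray:
    """Sketch of the three HRP steps on a covariance matrix."""
    # 1. Tree clustering on correlation distance d_ij = sqrt((1 - rho_ij) / 2)
    std = np.sqrt(np.diag(cov))
    corr = cov / np.outer(std, std)
    dist = np.sqrt(np.clip((1.0 - corr) / 2.0, 0.0, 1.0))
    link = linkage(squareform(dist, checks=False), method="single")
    # 2. Quasi-diagonalization: reorder assets by the dendrogram's leaves,
    #    so correlated assets sit next to each other
    order = list(leaves_list(link))
    # 3. Recursive bisection: split capital top-down, giving each half of a
    #    cluster a share inversely proportional to its variance
    w = np.ones(len(cov))
    clusters = [order]
    while clusters:
        nxt = []
        for cl in clusters:
            if len(cl) < 2:
                continue
            left, right = cl[: len(cl) // 2], cl[len(cl) // 2:]
            var_l = cluster_variance(cov, left)
            var_r = cluster_variance(cov, right)
            alpha = 1.0 - var_l / (var_l + var_r)
            w[left] *= alpha
            w[right] *= 1.0 - alpha
            nxt += [left, right]
        clusters = nxt
    return w
```

Because each split hands out a share α to one half and 1 − α to the other, the final weights sum to one and are always non-negative, which is precisely why HRP avoids the extreme long-short positions Mean-Variance can produce.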

The Comparative Results: A "Horse Race"

I pitted these three methodologies against each other using a rolling-window framework (12 and 18 months) with varying rebalancing frequencies. To ensure statistical rigor, I evaluated the results using Block Bootstrapping to generate confidence intervals for the Sharpe and Sortino Ratios.
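A moving-block bootstrap of the kind used here can be sketched as follows (a sketch with hypothetical defaults; the block length of 21 trading days, the number of resamples, and the function name are all my own illustrative choices). Resampling contiguous blocks rather than individual days preserves the serial dependence in returns:

```python
import numpy as np

def block_bootstrap_sharpe_ci(returns, block=21, n_boot=2000,
                              alpha=0.05, seed=0):
    """Moving-block bootstrap confidence interval for the per-period
    Sharpe ratio of a return series."""
    rng = np.random.default_rng(seed)
    r = np.asarray(returns)
    t = len(r)
    n_blocks = int(np.ceil(t / block))
    starts = np.arange(t - block + 1)   # every admissible block start
    sharpes = np.empty(n_boot)
    for i in range(n_boot):
        picks = rng.choice(starts, size=n_blocks)
        # Stitch the sampled blocks together and trim to the original length
        sample = np.concatenate([r[s:s + block] for s in picks])[:t]
        sharpes[i] = sample.mean() / sample.std(ddof=1)
    lo, hi = np.quantile(sharpes, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

The same resampling scheme extends to the Sortino ratio by replacing the denominator with downside deviation.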

The Verdict: The results of my replication confirmed the DeMiguel (Part 1) hypothesis.

Correlation Breakdown: During the 2020 crisis, I observed that correlations between sectors spiked significantly, which violated the diversification assumptions of the Markowitz model.

The Failure of Complexity: The Equally Weighted portfolio (Part 1) consistently ranked as the top performer. The Mean-Variance model (Part 2) suffered from estimation errors, leading to poor risk-adjusted returns. Even the machine-learning-based HRP (Part 3), while more stable than Markowitz, failed to outperform the naive benchmark during this specific period of high volatility.

Alpha Analysis: Using a Jensen's alpha regression, I found that none of the optimization strategies generated statistically significant positive alpha relative to the market.
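A Jensen's alpha regression of this kind can be sketched with plain OLS (a minimal sketch; the function name is mine and NumPy is assumed). It regresses portfolio excess returns on market excess returns, R_p − R_f = α + β(R_m − R_f) + ε, and reports the t-statistic used to judge whether α is significantly positive:

```python
import numpy as np

def jensens_alpha(port_ret, mkt_ret, rf=0.0):
    """OLS estimate of Jensen's alpha and beta, plus alpha's t-statistic."""
    y = np.asarray(port_ret) - rf          # portfolio excess returns
    x = np.asarray(mkt_ret) - rf           # market excess returns
    X = np.column_stack([np.ones_like(x), x])
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    dof = len(y) - 2
    s2 = resid @ resid / dof               # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)      # coefficient covariance
    t_alpha = beta_hat[0] / np.sqrt(cov[0, 0])
    return beta_hat[0], beta_hat[1], t_alpha
```

With a |t| well below ~2, the null of zero alpha cannot be rejected at conventional significance levels, which is the pattern the study reports for all three strategies.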

This project demonstrated that while mathematical sophistication (Markowitz) and machine learning (HRP) offer theoretical elegance, in regimes of high correlation and uncertainty, the estimation error they introduce often outweighs their benefits compared to simple naive diversification.