Jonathan M. Borwein
Commemorative Conference
25–29 September 2017
Financial Mathematics
Theme chaired by Qiji (Jim) Zhu
Keynote talk:
Jon Borwein and Financial Mathematics
Jon was a great mentor, colleague and friend to me for nearly a quarter century. I will use this opportunity to reflect on what he taught me over the years and to highlight Jon's and our joint research
in applying entropy maximization to financial problems. Entropy maximization plays an important role in several fundamental results in financial mathematics: the two-fund theorem for Markowitz efficient
portfolios, the existence and uniqueness of a market portfolio in the capital asset pricing model, the fundamental theorem of asset pricing, the selection of a martingale measure for pricing contingent claims in
an incomplete market, and the calculation of super- and sub-hedging bounds and portfolios. The connection of these diverse results in finance with the method of entropy maximization indicates the significant
influence of the methodology of the physical sciences on financial research.
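For concreteness, one standard instance of the method (a textbook formulation, not taken from the talk itself) is the minimal-relative-entropy martingale measure in a one-period, finite-state incomplete market:

    \min_{q > 0} \sum_{i=1}^{n} q_i \ln\frac{q_i}{p_i}
    \quad\text{subject to}\quad
    \sum_{i=1}^{n} q_i = 1,
    \qquad
    \sum_{i=1}^{n} q_i \, e^{-r} S_1(\omega_i) = S_0,

where p is the prior (statistical) probability, S_0 and S_1 are the asset prices, and r is the one-period interest rate. The Fenchel dual of this convex program is a finite-dimensional exponential-family problem, precisely the setting in which Borwein's entropy duality theory applies.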
A More Scientific Approach to Applied Economics: Reconstructing Statistical, Analytical Significance, and Correlation Analysis
There is a deep and well-regarded tradition in economics and other social sciences, as well as in the physical sciences, of assigning causality on the basis of correlation analysis and statistical significance. This paper
presents a critique of the application of correlation analysis, unsupported by any empirical backing for its prior assumptions, as the core analytical measure of causation. Moreover, this paper presents a
critique of the past and current focus on statistical significance as the core indicator of substantive or analytical significance, especially when paired with correlation analysis. The focus on correlation
analysis and statistical significance yields analytical conclusions that are false, misleading, or spurious in terms of causality and analytical significance. This can generate highly misguided policy at an
organizational, social, or even personal level.
In spite of substantive critiques of the application of tests of statistical significance, they remain pervasive in economics, across methodological and ideological perspectives. I argue that given the culture of the quantitative profession (in which statistical significance tests are considered a vital component of quantitative economic analysis) and some important scientific attributes of such tests (the estimation error arising from a randomly drawn sample), they cannot easily be excluded from empirical analysis. The same is the case with correlation analysis. What is important, however, is to understand the severe limits of statistical significance tests and correlation analysis for scientific study. At best, when used correctly, these statistical tools provide information on the probability that one's results are a fluke, i.e. that there is an error in one's estimates (statistical significance), and that there is a possible causal relationship between selected variables (correlation analysis). Statistical tools should therefore form only a small part of the analytical narrative, not dominate it. Scientific empirical research must go beyond tests of statistical significance, the reporting of signs, and correlation. But our scientific culture, publication culture, herding, peer pressure, present or status quo bias, path dependency, and inadequate competition in the academic market, amongst other factors, make the perceived costs of improving our scientific practices very large relative to the benefits, to the detriment of our scientific endeavors and social wellbeing.
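To illustrate the kind of spurious result warned about above (an illustrative sketch, not part of the paper), two statistically independent random walks routinely exhibit large, "significant" correlations:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200  # observations per series

    # Two independent random walks: no causal link by construction.
    x = np.cumsum(rng.standard_normal(n))
    y = np.cumsum(rng.standard_normal(n))

    r = np.corrcoef(x, y)[0, 1]
    # Naive t-test of the correlation coefficient (assumes i.i.d. data,
    # which random walks violate -- exactly the pitfall at issue).
    t = r * np.sqrt((n - 2) / (1 - r**2))
    print(f"correlation = {r:.2f}, naive t-statistic = {t:.1f}")

Repeating this over many seeds, the naive test declares "significance" far more often than its nominal level, so neither the correlation nor its p-value, on its own, says anything about causation.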
Stock portfolio design and backtest overfitting
In mathematical finance, backtest overfitting refers to the use of historical market data to develop an investment strategy in which too many variations of the strategy are tried relative to the amount of
data available. Backtest overfitting is now thought to be a primary reason why investment models and strategies that look good on paper often disappoint in practice. In this study we address overfitting in
the context of designing a mutual fund or investment portfolio as a weighted collection of stocks. In particular, we develop a computer program that, given any desired performance profile, designs a portfolio
consisting of common securities, such as the constituents of the S&P 500 index, that achieves the desired profile based on in-sample backtest data. We then show that these portfolios typically perform
erratically on more recent, out-of-sample data. This is symptomatic of statistical overfitting. Less erratic results can be obtained by restricting the portfolio to positive-weight components only, but then
the results are quite unlike the target profile on both in-sample and out-of-sample data. This analysis shows in yet another way why the typical backtest-driven portfolio design process often fails to deliver
real-world performance.
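A minimal sketch of the underlying effect (hypothetical code, not the authors' program): given an in-sample matrix of daily stock returns with more stocks than observations, ordinary least squares can find weights whose portfolio tracks almost any target return profile exactly, and the fit then degrades out of sample:

    import numpy as np

    rng = np.random.default_rng(1)
    n_days, n_stocks = 252, 500  # one year of data, S&P-500-sized universe
    R = 0.01 * rng.standard_normal((2 * n_days, n_stocks))  # toy daily returns

    target = np.full(n_days, 0.001)  # desired profile: steady +0.1% per day

    R_in, R_out = R[:n_days], R[n_days:]

    # In-sample: choose weights minimizing ||R_in @ w - target||_2.
    # With 500 free weights and only 252 observations the system is
    # underdetermined, so an essentially exact in-sample fit exists.
    w, *_ = np.linalg.lstsq(R_in, target, rcond=None)

    print("in-sample tracking error :", np.linalg.norm(R_in @ w - target))
    print("out-of-sample error      :", np.linalg.norm(R_out @ w - target))

Constraining the weights to be nonnegative (e.g. via scipy.optimize.nnls) removes the wild long-short positions and tames the out-of-sample behavior, at the cost of no longer matching the target in sample, mirroring the trade-off described above.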
Evaluation and Ranking of Market Forecasters
During the two and a half years I worked with Jon, I learnt so many things from him. He was a great mentor, colleague and friend to me. His passion for research, and for making research benefit the
community, has always inspired many of us, including myself. Sadly, he was no longer with us when we completed the project on "evaluating and ranking market forecasters" and published its outcomes in the Journal of
Investment Management. I would like to use this opportunity to highlight Jon's research on quantitative finance and our joint research on evaluating market forecasters. Many investors rely on market experts and
forecasters when making investment decisions, such as when to buy or sell securities. Ranking and grading market forecasters provides investors with metrics on which they may choose forecasters with the best
record of accuracy for their particular market exposure. This study develops a novel methodology for ranking market forecasters. In particular, we distinguish forecasts by their specificity, rather than
considering all predictions and forecasts equally important, and we also analyze the impact of the number of forecasts made by a particular forecaster. We applied our methodology to a dataset of
6,627 forecasts made by 68 forecasters.
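As a toy illustration of the flavor of such a ranking (hypothetical weights and scoring, not the published methodology), one can credit each correct call in proportion to how specific it was, and shrink the scores of forecasters with few forecasts toward an uninformative baseline:

    from dataclasses import dataclass

    @dataclass
    class Forecast:
        correct: bool
        specificity: float  # hypothetical weight in (0, 1]; 1 = most specific

    def score(forecasts, k=10):
        """Specificity-weighted accuracy, shrunk toward 0.5 for small samples.

        k is a hypothetical shrinkage constant: a forecaster with few
        forecasts is pulled toward the 50/50 baseline.
        """
        n = len(forecasts)
        w = sum(f.specificity for f in forecasts)
        hits = sum(f.specificity for f in forecasts if f.correct)
        raw = hits / w if w else 0.5
        return (n * raw + k * 0.5) / (n + k)

    history = [Forecast(True, 1.0), Forecast(True, 0.3), Forecast(False, 0.8)]
    print(f"score = {score(history):.3f}")

Under this toy scheme a vague but correct forecast earns less credit than a specific correct one, and a forecaster with only a handful of calls cannot reach the top of the ranking on luck alone.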