Jonathan M. Borwein
Commemorative Conference
25–29 September 2017
Financial Mathematics
Theme chaired by Qiji (Jim) Zhu
Keynote talk:
Jon Borwein and Financial Mathematics

A More Scientific Approach to Applied Economics: Reconstructing Statistical, Analytical Significance, and Correlation Analysis
There is a deep and well-regarded tradition in economics and other social sciences, as well as in the physical sciences, of assigning causality on the basis of correlation analysis and statistical significance. This paper presents a critique of the application of correlation analysis, unsubstantiated by any empirical backing of prior assumptions, as the core analytical measure of causation. Moreover, this paper presents a critique of the past and current focus on statistical significance as the core indicator of substantive or analytical significance, especially when paired with correlation analysis. The focus on correlation analysis and statistical significance yields analytical conclusions that are false, misleading, or spurious with respect to causality and analytical significance. This can generate highly misguided policy at an organizational, social, or even personal level.
In spite of substantive critiques of the application of tests of statistical significance, they remain pervasive in economics, across methodological and ideological perspectives. I argue that, given the culture of the quantitative profession (which holds that statistical significance tests are a vital component of quantitative economic analysis) and some important scientific attributes of such tests (quantifying the error of estimates from a randomly drawn sample), they cannot easily be excluded from empirical analysis. The same is the case with correlation analysis. What is important, however, is to understand the severe limits of statistical significance tests and correlation analysis for scientific study. At best, when used correctly, these statistical tools provide information on the probability that one's results are a fluke (statistical significance; that is, that there is an error in one's estimates) and that there is a possible causal relationship between selected variables (correlation analysis). Therefore, statistical tools should form only a small part of the analytical narrative, not dominate it. Scientific empirical research must go beyond tests of statistical significance, the reporting of signs, and correlation. But our scientific culture, publication culture, herding, peer pressure, present or status quo bias, path dependency, and inadequate competition in the academic market, among other factors, contribute to the very large costs (relative to benefits) of improving our scientific practices, to the detriment of our scientific endeavors and social wellbeing.
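The abstract's point that statistical significance only bounds the probability of a fluke can be illustrated with a short simulation (not part of the talk; a minimal sketch using standard-library Python and the usual Pearson correlation with its t-based critical value): when many pairs of genuinely independent random series are tested, roughly 5% of them show a "significant" correlation at the 0.05 level, with no causal relationship anywhere.

```python
import random
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(42)
n, trials = 30, 1000

# For n = 30 (df = 28), the two-sided 5% critical t is about 2.048,
# giving a critical |r| = t / sqrt(t^2 + df) of about 0.361.
critical_r = 0.361

false_hits = 0
for _ in range(trials):
    # Two independent standard-normal series: no causal link by construction.
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [random.gauss(0, 1) for _ in range(n)]
    if abs(pearson_r(x, y)) > critical_r:
        false_hits += 1

# Expect roughly 5% of trials to be "significant" purely by chance.
print(f"'Significant' correlations among {trials} independent pairs: {false_hits}")
```

Each such hit is exactly the kind of spurious, causally empty result the abstract warns against treating as substantive significance.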
Stock portfolio design and backtest overfitting

Evaluation and Ranking of Market Forecasters
