When I was an undergraduate at the Technion (Israel Institute of Technology), I learned for the first time in my life that randomness, too, obeys laws of its own (expressed, among other ways, in the form of statistical distributions), and I became deeply appreciative of the ingenuity of the concept of a statistical distribution. The very combination of randomness with laws, formulated in the language of mathematics like any other branch of the exact sciences, fascinated me considerably, young man that I was at the time.
Much of that admiration has since evaporated, as I have become increasingly aware of the enormous number of statistical distributions defined and used within the science of statistics to describe random behavior, whether of real-world phenomena or of sample statistics embedded in statistical-analysis procedures (like hypothesis testing). I realized that, unlike modern-day physics, which is engaged to this day in the unification of the basic forces of nature, the science of statistics has never made a comparable attempt at unification. To me, such unification would mean the derivation of a single universal distribution, relative to which all current distributions could be regarded as statistically insignificant random deviations (much as a sample average is a random deviation from the population mean). No such unification has ever materialized, or even been attempted or debated, within the science of statistics.
Personally, I attribute this failure at unification to the fact that the current foundations of statistics, with basic concepts like the probability function, the probability density function (pdf) and the distribution function (the cumulative distribution function, or CDF), were established back in the eighteenth and nineteenth centuries in order to derive various early-day distributions, and have not been challenged since. Well-known mathematicians of the time, like Jacob and Daniel Bernoulli, Abraham de Moivre, Carl Friedrich Gauss, Pierre-Simon Laplace and Joseph-Louis Lagrange, all used these basic terms of statistics to derive specific distributions. Yet the basic tenets underlying the formation of such mathematical models of random variation remain intact to this day. Central among these tenets is the belief that every random phenomenon, with its properly defined random variable, has its own specific distribution. Consequently, no serious attempt at unification has ever become a core objective of the science of statistics, nor has there ever been any discussion of how the pursuit of a “universal distribution” might proceed.
My sentiment about the feasibility of revolutionizing the concept of statistical distribution, and of deriving a universal distribution relative to which all current distributions may be regarded as random deviations, changed dramatically with the introduction of a new non-linear modeling approach, denoted Response Modeling Methodology (RMM). I developed RMM back in the closing years of the previous century (Shore, 2005, and references therein), and only some years later did I realize that the “Continuous Monotone Convexity (CMC)” property, part and parcel of RMM, could serve to derive the universal distribution in the sense described in the previous paragraph. (Read about the CMC property in another post on this blog.)
This realization led to the development of an archetype for a universal distribution, and to a description of the new approach in a new paper, titled:
“A General Model of Random Variation”.
The paper was published in Communications in Statistics in May 2015. An Author’s Accepted Version is linked herewith:
On October 28, 2016, I posted on my ResearchGate page a more advanced version of the Universal Distribution, summarized in a new, yet-unpublished article: