Islamic statistics and science

Not only WHAT statistics and science are, but WHO does them, could matter… see for example:

Asad Zaman is a genius, and I don’t use this word lightly… Check out his background and his work…

Anyway, if you are non-Muslim, do ignore the ‘Islamic’ part of his post… You’ll know when you read it… I don’t think it detracts from the rest of the post… I feel like his explanation fills a gap in my statistical understanding… Of course, as he admitted, his is a minority view, but a very thought-provoking one at that… :slight_smile:

Note: when at IGDORE Bali for four months in 2018, I read like crazy all the physical books and papers available there… plus tens of thousands of pages on statistics and science in general… I never came across what Asad Zaman discusses below… confirming the saying “absence of evidence is not evidence of absence”… :slight_smile:

Zaman certainly has an unusual account of, and opinions on, the history of statistics. @surya, is this a common view amongst Islamic statisticians and economists? I must admit that most of my statistical education has focused on usage rather than history, so I’d be interested in seeing more detail about this. Can you link to any other sources discussing this issue?

[Note, I merged your topics with links to Zaman as they seemed to be a continuous narrative]

Thanks @Gavin.

In terms of statistics, Zaman is the first one I’ve encountered who elaborates in such detail. He is a mathematician and statistician through and through, after all.

“Zaman finished high school in Karachi, Pakistan, in 1971, and moved to MIT, Boston, for higher education. He finished his BS in math in 1974. He finished his Ph.D. in economics from Stanford University in another three years, from 1974 to 1977, picking up a master’s degree in statistics along the way.”

And Zaman is the first I know of to coin the term ‘Islamic statistics’.

In terms of other disciplines, the ‘Islamization of knowledge’ (science, humanities, arts, etc.) is a huge endeavor spanning decades and continents. One of its proponents is another MIT graduate, Seyyed Hossein Nasr. I cannot do justice to this movement now; perhaps another time.

Quickly, I would just like to say that Indonesia has the most Islamic universities in the world, both public and private. These institutions literally have ‘Islam’ in their names. There’s a lot of significance to this, which I will elaborate on another time. :slight_smile:

In addition to being a thorough reader, I am also a careful reader. So I checked Zaman’s sources meticulously, and also searched for the keywords “eugenics and statistics” on Google, Facebook, and Twitter (forgot Reddit! Later. :)). Here’s what I found (and have read in the past two days):

http://www.rutherfordjournal.org/article010107.html

https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.462.7279&rep=rep1&type=pdf


To add, I went down the rabbit hole that is this person’s Facebook timeline yesterday afternoon… just fascinating… :slight_smile:

For a conciliatory view of these Islamic/Indian/Chinese/etc. contributions to modern science, see:

For non-conciliatory views, I’ve spent many years reading these and much else:

http://ckraju.net/, example: http://ckraju.net/usm/usm.html

The last one is in Malay… and I bet that there are many others I’ve missed due to the language barrier (Chinese, Japanese, Persian/Iranian, Hindi, etc.)…

Well, as the saying goes, the more you know… :slight_smile:

I read the second link. I’m not going to comment on the historical or political sides of this. On the purely technical side, I find that Zaman misrepresents the status quo. Most of his criticisms apply only to frequentist methods; anyone versed in Bayesian methods will find at least some of his technical arguments reasonable, though I would make different points. Calling this “modern statistics” is just wrong, as he’s mostly criticizing frequentist statistics. I also don’t think he’s accurately describing the current acceptance of causal methods. As far as I’m aware, these methods are taught in economics bachelor’s programs in the US (contrary to his claims that econometricians have been “indoctrinated” against them). I’ve also been told that they don’t work that well. I have no particular knowledge of these methods; this is just what I heard from a friend I trust who seems fairly knowledgeable about these approaches. Outside of controlled experiments, causality is really hard.
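To make the frequentist/Bayesian distinction concrete, here’s a minimal sketch in Python. Everything in it (the simulated measurements, the “known” noise scale, the prior) is my own toy setup, not anything from Zaman’s material:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = rng.normal(loc=1.0, scale=2.0, size=25)  # hypothetical measurements
sigma = 2.0  # treat the noise scale as known, purely to keep the algebra simple

# Frequentist: a 95% confidence interval for the mean.
se = sigma / np.sqrt(len(data))
ci = (data.mean() - 1.96 * se, data.mean() + 1.96 * se)

# Bayesian: with a N(0, 10^2) prior on the mean and known sigma, the posterior
# is also normal (conjugacy), so we can read off a 95% credible interval.
prior_mean, prior_sd = 0.0, 10.0
post_var = 1.0 / (1.0 / prior_sd**2 + len(data) / sigma**2)
post_mean = post_var * (prior_mean / prior_sd**2 + data.sum() / sigma**2)
cri = stats.norm.interval(0.95, loc=post_mean, scale=np.sqrt(post_var))

print(f"95% confidence interval: ({ci[0]:.2f}, {ci[1]:.2f})")
print(f"95% credible interval:   ({cri[0]:.2f}, {cri[1]:.2f})")
```

With a weak prior the two intervals come out nearly identical; the point is only that “statistics” is broader than the frequentist recipe Zaman criticizes, and the Bayesian route makes its assumptions (the prior, the likelihood) explicit.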

Overall, I don’t think there’s much that’s new or of value in the first two links.


Thanks @btrettel. I think Bayesian methods are mentioned somewhere in the other links I sent, in the vein of your comment.

Also, I am aware of the fierce debates between the frequentists and the Bayesians. :slight_smile:

As for Asad Zaman, in any case, I would credit him for bringing the issue of ‘eugenics’ and ‘statistics’ to my attention.

I’ve also searched Reddit for the keywords “eugenics statistics”. Useful reading.

Also, these Royal Society tweets and replies are “entertaining”. :slight_smile:

Well, this is very engrossing. @Gavin and @btrettel, what do you think of this article? Does it go beyond the frequentist versus Bayesian debate? :slight_smile:

@surya, I also wasn’t aware of the link between early statisticians and eugenics, although from the other links you sent it seems like it is fairly well known. I didn’t look through all your links, but I felt that the UnHerd and Georgetown Review articles discussed this rather more concisely and directly than Zaman does on his blog (besides the religious aspect, he mixes in a lot of emotive anecdotes and sensational claims that seem out of place and unnecessary in a scientific discussion - but maybe I’m just unfamiliar with this writing style).

Anyway, the Western articles seem to focus more on questioning, from a justice-oriented standpoint, whether Fisher (and the other early statisticians) should be recognised for their contributions to frequentist statistics. In my view, the bigger question is whether the frequentist statistics Fisher developed (in the context of eugenics and, as Zaman notes, without access to computers), which were then widely adopted in biology, are still the best default option for statistical testing. Frequentist statistics is just a tool, after all, but developing new tools is hard, so it is often tempting to keep using the tool you already have, even if it doesn’t suit the problem perfectly. In my view, it isn’t necessary to emphasise the link between Fisher and eugenics to argue for better statistical methods; after all, if Darwin had invented frequentist statistics to test for differences in beak length between populations of finches, we’d have ended up with the same tool but without the unethical origin. Zaman starts to address this in his latest post, and I believe that the inadequacy of frequentist statistics for their current use cases is the core issue, rather than the morality of their initial uses.
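To make the “same tool” point concrete, here’s a minimal sketch on made-up beak-length numbers. Both the data and the permutation-test alternative are purely my illustration (assuming Python with NumPy and a recent SciPy), not anything from the articles linked above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical beak-length measurements (mm) for two finch populations.
island_a = rng.normal(loc=9.5, scale=0.8, size=30)
island_b = rng.normal(loc=10.1, scale=0.8, size=30)

# The frequentist "default tool": a two-sample t-test. Its validity rests on
# assumptions (independence, roughly normal errors) that field data need not satisfy.
t_stat, p_value = stats.ttest_ind(island_a, island_b, equal_var=False)
print(f"t-test:      t = {t_stat:.2f}, p = {p_value:.4f}")

# One mainstream alternative when the parametric assumptions are doubtful:
# a permutation test on the difference in means, which conditions only on
# the observed data rather than on a parametric model.
res = stats.permutation_test(
    (island_a, island_b),
    lambda x, y: np.mean(x) - np.mean(y),
    n_resamples=9999,
)
print(f"permutation: p = {res.pvalue:.4f}")
```

The permutation test is one mainstream answer to “the default tool doesn’t fit”: it drops the parametric model and conditions only on the observed data, at the cost of more computation.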

I agree that the assumptions required for frequentist statistics are usually violated by real-world datasets, and I think this is well known by statisticians, although it often seems to be ignored by the end-users of statistical testing software (who often also misuse the results!). So I’m uncertain as to whether, by presenting alternatives to frequentist statistics as a specifically Islamic approach, Zaman is really doing justice to mainstream/Western efforts to achieve similar goals. Is there something else I’m missing here @surya?

As an aside, Zaman’s comments in lecture 5c:

The hugely popular philosophy of science developed by Karl Popper was very useful in elevating the importance of the p-value: we can never PROVE a scientific hypothesis, but we can disprove them. A significant p-value disproves a null hypothesis, creating a scientific fact. Insignificant p-values mean nothing. This led to a fundamentally flawed statistical methodology currently being taught and used all over the world. The problem is that there are huge numbers of hypotheses which are NOT in gross conflict with the data. By careful choice of parametric models, we can ensure that our desired null hypothesis does not conflict with the data.

Seems related to one of Maxwell’s points on aim-oriented empiricism:

From D’Alembert in the 18th century to Popper in the 20th, the widely held view, amongst both scientists and philosophers, has been (and continues to be) that science proceeds by assessing theories impartially in the light of evidence, no permanent assumption being accepted by science about the universe independently of evidence.

But this standard empiricist view is untenable. If taken literally, it would instantly bring science to a standstill. For, given any accepted scientific theory, T, Newtonian theory say, or quantum theory, endlessly many rivals can be concocted which agree with T about observed phenomena but disagree arbitrarily about some unobserved phenomena. Science would be drowned in an ocean of such empirically successful rival theories.

In practice, these rivals are excluded because they are disastrously disunified. Two considerations govern acceptance of theories in science: empirical success and unity. But in persistently accepting unified theories, to the extent of rejecting disunified rivals that are just as, or even more, empirically successful, science makes a big persistent assumption about the universe. The universe is such that all disunified theories are false.

Both point to difficulties in existing statistical and scientific methods for identifying a single best explanation for experimental data without relying on untestable assumptions.
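As a toy illustration of the “huge numbers of hypotheses not in gross conflict with the data” point, here is a small simulation I put together; the sample size and the candidate models are arbitrary choices of mine:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=50)  # a modest sample from N(0, 1)

# Several distinct parametric null hypotheses for the same data.
candidate_nulls = {
    "N(0, 1)": ("norm", (0.0, 1.0)),
    "N(0.1, 1.1)": ("norm", (0.1, 1.1)),
    "t(df=10)": ("t", (10,)),
    "Laplace(0, 0.9)": ("laplace", (0.0, 0.9)),
}

# With only 50 observations, a Kolmogorov-Smirnov test typically fails to
# reject any of these at the 5% level, even though at most one can be "true".
for label, (dist_name, params) in candidate_nulls.items():
    stat, p = stats.kstest(data, dist_name, args=params)
    verdict = "not rejected" if p > 0.05 else "rejected"
    print(f"{label:>15s}: KS p-value = {p:.3f} ({verdict} at the 5% level)")
```

None of the surviving models is thereby established as a “scientific fact”; a p-value screen simply cannot distinguish between them, which is the underdetermination worry in miniature.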

Thanks @Gavin for the erudite comment. :slight_smile:

Zaman’s is a whole project. What’s exciting to me is that he is constructing an alternative, instead of just deconstructing “Western” sciences/knowledge. It is also instructive to see how he proceeds from his “axiology” to his “epistemology”, no matter our judgement of his success or effort. :slight_smile:

He is one of the very few I know to have done this (note that Maxwell is not one of these few). Another is C. K. Raju, whose effort is as commendable as Zaman’s. See his alternative calculus materials (and basically all his works). :slight_smile:

In essence, both Zaman and Raju are “modern pre-modern” thinkers who try to be consistent in their thought. Raju went to the extent of equating mainstream “Western” physics with “Christian” physics, which he thinks hinders the further development of physics.

By the way, the claim made by Raju and Zaman, as well as others, is much larger: all “Western” sciences (and knowledge) are unethical in origin.

As I mentioned, there are conciliatory and non-conciliatory views on the relationship between “Western” and “other” sciences/knowledge. So it’s up to us which we wish to adopt, or not, but knowing the views is useful, not to mention entertaining. :slight_smile:

As to your other points regarding statistics and science, I broadly agree. :slight_smile:

Note: the distinction between modern and pre-modern could perhaps be encapsulated by the split between religion and science/knowledge in the European experience.


Just read these… :slight_smile:

https://community.amstat.org/communities/community-home/digestviewer/viewthread?MessageKey=be8fa87b-3892-4a84-a559-17af4b1a1307&CommunityKey=6b2d607a-e31f-4f19-8357-020a8631b999

I recently came across this post, which I felt hit the core of the problem in a way that other articles haven’t.

We assume that the mathematics itself is unbiased. This may well be true, but in applying mathematics to the real world we need to make philosophical assumptions that can carry bias. In terms of the statistics of Pearson and Fisher there is a strong bias against causal explanation. In this article I argue that such a bias would tend to support a eugenics agenda. I suggest that classical statistical methods have a politically conservative bias, and that modern developments in causal inference represent the other side of the socio-political coin.

@Daniel_Cleather discusses this more in the second half of his book Subvert!, which I can recommend.
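To illustrate the association-versus-causation contrast in the quote with a toy example (the data-generating process below is entirely made up), here is a short Python simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical setup: a confounder Z drives both the "treatment" X and the
# outcome Y, while the true causal effect of X on Y is fixed at 1.0.
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)
y = 1.0 * x + 3.0 * z + rng.normal(size=n)

# Associational estimate: regress Y on X alone, as a purely correlational
# analysis would. The confounder's influence is absorbed into the slope.
naive_slope = np.polyfit(x, y, 1)[0]

# Causally motivated estimate: regress Y on X and Z together, which (in this
# simple linear setup) recovers the true effect.
design = np.column_stack([x, z, np.ones(n)])
adjusted_slope = np.linalg.lstsq(design, y, rcond=None)[0][0]

print(f"naive slope:    {naive_slope:.2f}  (biased by the confounder)")
print(f"adjusted slope: {adjusted_slope:.2f}  (close to the true effect of 1.0)")
```

Here the naive regression reports a slope of about 2.2 while the adjusted one recovers roughly 1.0; classical significance testing applied to the naive model would happily declare the biased estimate “significant”, which is the sense in which purely associational methods sit on one side of the causal question.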