Statistics fundamentally flawed

1
During the recent Neptune-Jupiter square there was an instance of a clinical trial of a new drug causing unexpected, exceedingly dramatic and unpleasant effects in the trial participants. After ticking this off on my list of the sort of things Neptune and Jupiter are likely to manifest, it got me thinking…

We, as astrologers, are often presented with the view that astrology has not been proven to work despite a number of statistical studies, and this tends to drag us into debate over the limits of the particular studies undertaken.

However, this ignores a more fundamental point. Statistical proofs depend on a key assumption: that the members of the sample drawn from the population are independent of each other except for the criteria being tested. The ideal way to achieve this is supposedly a pure random sample. Leaving aside the fact that random sampling in practice often suffers from flaws, particularly when the target population comprises living creatures (human or otherwise), the real issue, as I see it, is this:
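As a concrete aside on what "random selection" means in practice: most statistical software draws samples with a pseudo-random number generator, and such generators are deterministic once seeded (historically, a common idiom was to seed from the system clock, so the "random" draw is literally a function of the moment of selection). A minimal Python sketch, with an entirely hypothetical participant pool and an arbitrary fixed timestamp as the seed:

```python
import random

# Hypothetical pool of 100 candidate participants, IDs 1..100.
population = list(range(1, 101))

# Seed from a fixed "moment of selection" (an arbitrary Unix timestamp).
# Given the same seed, the generator produces the same "random" sample
# every time: the selection is a deterministic function of that moment.
random.seed(1146441600)

# Draw 10 participants without replacement.
sample = random.sample(population, 10)
print(sorted(sample))
```

Re-seeding with the same value and sampling again reproduces the identical selection, which is the sense in which pseudo-random samples are not random at all, merely unpredictable to someone who does not know the seed.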

If we start by saying we know astrology works, then it is statistics that cannot. And I do not just mean statistical studies of astrology; I mean statistical studies of anything! If astrology, and especially horary and electional astrology, works, then there can be no such thing as a random sample: the astrological characteristics of the time of selection must determine the outcome of the study, via the numbers selected, the resulting participants and the unfolding processes of that study.

The example of the drug trial given above is extreme: it wasn't a random sample; the participants were paid volunteers, and there were very few of them. However, the same principle would apply, totally unconsciously, to the selection of much larger samples even if the ideal of random numbers were used, simply because those numbers would not be random in the intended sense: they would reflect the moment of selection.

This has all sorts of ramifications. For example, scientific results demonstrated in May 2000, during the Saturn-Jupiter-Uranus square, would not necessarily show the same result if they were "replicated" in May 2005, even if enough studies were repeated throughout the month to satisfy the scientists that the results were replicable. According to astrology, it is impossible to replicate anything completely, ever.

Instead of worrying about astrological sceptics, we should be the sceptics, devoting our time to questioning all these supposed "scientific" results and all the product launches and policy decisions that ride on the back of them.

2
I agree entirely with your final paragraph. Since 1980 the science of statistics has been perverted wholesale, notably by epidemiologists to start with, and then by other disciplines wanting to join the bandwagon of results (results = funding), with no small help from the media.

The astrologically interesting thing here is that Big Science seems to have departed in a postmodern direction at very much the same time as architecture, literature, music, etc. This being a societal/cultural phenomenon, we might begin to look, as many astrologers tell us to, to the outer planets and the societal factors they represent. However, the most obvious indicator of change, sign ingress, does not seem indicative. Uranus entered Sgr in late '81 (a bit late in my chronology), and Chiron entered Tau in 1976 (maybe a bit early). Something for the back burner.

However, your general point suffers from an ambiguity in the application of the concept of "random", even if you don't mean that the outcome of drug trials is influenced by the planets (do you?). You are conflating two oppositions of the concept. On the one hand there is "random" as opposed to "caused", "determined", "predictable", etc., which is the sense people are concerned with when they argue about the theological possibility of miracles, or the sense in which mathematical or computer-generated random numbers are or are not truly random. On the other hand there is "random" as opposed to "biased", "preferred", "unrepresentative", etc. But to say that a statistical sample is not a random sample, i.e. that it has or shows bias, is not to imply anything about causation in regard to the items themselves, or even some of them (presumably the "rogues" which prevent the sample from being properly "random"), but rather something about how they have been selected (or, in the case of readers' or viewers' straw polls, self-selected). Consider this: the computer scientist whose random number generator can be shown to be faulty has not necessarily succumbed to bias in devising it (even if the project was done from envy, pique, malice, etc.).
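To make the "faulty generator" point concrete: bias in a generator's output is something one can demonstrate statistically, regardless of how or why the generator was written. A rough sketch using a chi-square goodness-of-fit test against the uniform distribution (the deliberately biased generator and the critical-value threshold are my own illustrative choices, not anything from the discussion above):

```python
import random

def chi_square_uniform(samples, k=10):
    """Chi-square statistic for the hypothesis that `samples`
    are drawn uniformly from {0, ..., k-1}."""
    n = len(samples)
    expected = n / k
    counts = [0] * k
    for s in samples:
        counts[s] += 1
    return sum((c - expected) ** 2 / expected for c in counts)

rng = random.Random(42)

# A sound generator: uniform digits 0-9.
fair = [rng.randrange(10) for _ in range(10_000)]

# A "faulty" generator: the digit 7 is over-produced.
biased = [7 if rng.random() < 0.1 else rng.randrange(10)
          for _ in range(10_000)]

# Approximate 95% critical value for chi-square with 9 degrees of freedom.
CRITICAL_95 = 16.92

print(chi_square_uniform(fair))    # typically small: no evidence of bias
print(chi_square_uniform(biased))  # very large: the fault is detectable
```

The test says nothing about the author's motives, only about the output, which is exactly the separation between "biased selection" and "biased selector" made above.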

As for statistics, we have that time-honoured association with lies and damned lies. I once heard a claim that speed cameras saved lives some miles from where they were positioned. This is prima facie not unreasonable: drivers, knowing where speed cameras are, might adjust their speed accordingly well in advance. But when I read the actual learned statistical paper on which the claim was based, it had, apart from several suspect logical manoeuvres on the way to its conclusion, two (incompatible) implications: (1) speed cameras positioned on Mars would also save a few lives on Earth; (2) within the statistical limits selected by the author, cameras caused deaths rather than reducing them, i.e. the (statistical) argument failed to support the conclusion. Don't expect media reports of statistical studies to divulge confidence intervals, relative risk ratios, or anything else that might allow an independent-minded person to come to their own view about the study and its policy implications.
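For what it's worth, a relative risk ratio and its confidence interval are cheap to compute once the underlying 2x2 table is disclosed, which is rather the point about reports that omit them. A minimal sketch using the standard log-transform approximation for the confidence interval (the counts below are entirely made up for illustration, not from any real camera study):

```python
import math

def relative_risk_ci(a, b, c, d, z=1.96):
    """Relative risk and approximate 95% CI for a 2x2 table:
         exposed group:   a events, b non-events
         unexposed group: c events, d non-events
    Uses the usual normal approximation on the log scale."""
    rr = (a / (a + b)) / (c / (c + d))
    se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 30 deaths in 1000 camera-zone sites,
# 45 deaths in 1000 comparison sites.
rr, lo, hi = relative_risk_ci(a=30, b=970, c=45, d=955)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

With these made-up numbers the point estimate suggests a benefit (RR below 1), yet the interval straddles 1, so the data are also consistent with no effect at all; this is precisely the kind of information a headline figure conceals.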