Dataset Viewer
Columns: id, source_type, title, url, language, year, topics, text

id: wiki::en::A/B testing | source_type: wiki | title: A/B testing | url: https://en.wikipedia.org/wiki/A/B_testing | language: en | topics: []
A/B testing (also known as bucket testing, split-run testing or split testing) is a user-experience research method. A/B tests consist of a randomized experiment that usually involves two variants (A and B), although the concept can also be extended to multiple variants of the same variable. It includes application of statistical hypothesis testing or "two-sample hypothesis testing" as used in the field of statistics. A/B testing is employed to compare multiple versions of a single variable, for example by testing a subject's response to variant A against variant B, and to determine which of the variants is more effective.
Multivariate testing or multinomial testing is similar to A/B testing but may test more than two versions at the same time or use more controls. Simple A/B tests are not valid for observational, quasi-experimental or other non-experimental situations—commonplace with survey data, offline data, and other, more complex phenomena.
Definition
"A/B testing" is a shorthand for a simple randomized controlled experiment, in which a number of samples (e.g. A and B) of a single vector-variable are compared. A/B tests are widely considered the simplest form of controlled experiment, especially when they only involve two variants. However, by adding more variants to the test, its complexity grows.
The following example illustrates an A/B test with a single variable:
A company has a customer database of 2,000 people and launches an email campaign with a discount code in order to generate sales through its website. The company creates two versions of the email with different calls to action (the part of the copy that encourages customers to act—in the case of a sales campaign, make a purchase) and identifying promotional codes.
To 1,000 people, the company sends an email with the call to action stating "Offer ends this Saturday! Use code A1",
To the remaining 1,000 people, it sends an email with the call to action stating "Offer ends soon! Use code B1".
All other elements of the emails' copy and layout are identical.
The company then monitors which campaign has the higher success rate by analyzing the use of the promotional codes. The email using the code A1 has a 5% response rate (50 of the 1,000 people emailed used the code to buy a product), and the email using the code B1 has a 3% response rate (30 of the recipients used the code to buy a product). The company therefore determines that in this instance, the first call to action is more effective and will use it in future sales. A more nuanced approach would involve applying statistical testing to determine whether the differences in response rates between A1 and B1 were statistically significant (that is, highly likely that the differences are real and repeatable, and not the result of random chance).
In the previous example, the purpose of the test is to determine the more effective strategy to encourage customers to make a purchase. If, however, the aim of the test had been to determine which email would generate the higher clickthrough rate (the percentage of people who actually click the link after receiving the email), the results might have been different.
For example, even though more of the customers receiving the code B1 accessed the website, because the call to action did not state the end date of the promotion, many recipients may feel no urgency to make an immediate purchase. Consequently, if the purpose of the test had been simply to determine which email would bring more traffic to the website, the email containing code B1 might well have been more successful. An A/B test should have a defined, measurable outcome, such as sales converted, clickthrough rate or registration rate.
Common test statistics
Two-sample hypothesis tests are appropriate for comparing the two samples formed by the two variants of the experiment. Z-tests are appropriate for comparing means under stringent conditions regarding normality and a known standard deviation. Student's t-tests are appropriate for comparing means under relaxed conditions when less is assumed. Welch's t-test assumes the least and is therefore the most commonly used two-sample hypothesis test when the mean of a metric is to be optimized. While the mean of the variable to be optimized is the most common choice of estimator, others are regularly used.
Fisher's exact test can be employed to compare two binomial distributions, such as a click-through rate.
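To make this concrete, here is a minimal sketch (in Python, with SciPy) of how the two tests above might be applied to A/B data; the conversion counts and the simulated revenue metric are illustrative assumptions, not data from the article.

```python
# Hedged sketch: comparing two A/B variants with SciPy.
# The counts and the continuous metric below are made-up illustrations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Binary outcome (e.g., click-through): Fisher's exact test on a 2x2 table.
# Rows: variant A, variant B; columns: converted, not converted.
table = [[50, 950],   # hypothetical: 50 conversions out of 1000 for A
         [30, 970]]   # hypothetical: 30 conversions out of 1000 for B
odds_ratio, p_fisher = stats.fisher_exact(table)

# Continuous metric (e.g., revenue per user): Welch's t-test, which does not
# assume equal variances in the two groups.
revenue_a = rng.normal(loc=10.0, scale=4.0, size=1000)  # simulated data
revenue_b = rng.normal(loc=10.5, scale=5.0, size=1000)
t_stat, p_welch = stats.ttest_ind(revenue_a, revenue_b, equal_var=False)

print(f"Fisher exact p-value: {p_fisher:.4f}")
print(f"Welch t-test p-value: {p_welch:.4f}")
```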
Segmentation and targeting
A/B tests most commonly apply the same variant (e.g., user interface element) with equal probability to all users. However, in some circumstances, responses to variants may be heterogeneous. While a variant A might have a higher response rate overall, variant B may have an even higher response rate within a specific segment of the customer base.
For instance, in the above example, the breakdown of the response rates by gender could have been:
In this case, while variant A attracted a higher response rate overall, variant B actually elicited a higher response rate with men.
As a result of the A/B test, the company might select a segmented strategy, sending variant B to men and variant A to women in the future. In this example, a segmented strategy would yield a 30% increase in expected response rates, from $5\% = \frac{40+10}{500+500}$ to $6.5\% = \frac{40+25}{500+500}$.
If segmented results are expected from the A/B test, the test should be properly designed at the outset to be evenly distributed across key customer attributes, such as gender. The test should contain a representative sample of men vs. women and assign men and women randomly to each “variant” (variant A vs. variant B). Failure to do so could lead to experiment bias and inaccurate conclusions.
This segmentation and targeting approach can be further generalized to include multiple customer attributes rather than a single customer attribute—for example, customers' age and gender—to identify more nuanced patterns that may exist in the test results.
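The arithmetic behind the segmented strategy above can be sketched as follows; the per-segment counts are the ones implied by the example (40/500 and 10/500 for variant A among women and men, 25/500 for variant B among men), and the helper function is purely illustrative.

```python
# Hedged sketch of the segmented-strategy arithmetic from the example above.
responses = {
    ("A", "women"): (40, 500),
    ("A", "men"):   (10, 500),
    ("B", "men"):   (25, 500),
}

def rate(pairs):
    """Pooled response rate over a list of (responses, emails sent) pairs."""
    hits = sum(r for r, _ in pairs)
    sent = sum(n for _, n in pairs)
    return hits / sent

# Send variant A to everyone (the overall winner):
overall = rate([responses[("A", "women")], responses[("A", "men")]])

# Segmented strategy: variant A to women, variant B to men:
segmented = rate([responses[("A", "women")], responses[("B", "men")]])

print(f"Overall A: {overall:.1%}, segmented: {segmented:.1%}, "
      f"relative lift: {segmented / overall - 1:.0%}")
# Expected output: Overall A: 5.0%, segmented: 6.5%, relative lift: 30%
```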
Tradeoffs
Positives
The results of A/B tests are simple to interpret to create a clear picture of real user preferences, as they directly test one option over another. A/B tests can also provide answers to highly specific design questions. One example of this is Google's A/B testing with hyperlink colors. In order to optimize revenue, Google tested dozens of hyperlink hues to determine which colors attract the most clicks.
Negatives
A/B tests are sensitive to variance; they require a large sample size in order to reduce standard error and produce a statistically significant result. In applications in which active users are abundant, such as with popular online social-media platforms, obtaining a large sample size is trivial. In other cases, large sample sizes are obtained by increasing the experiment enrollment period. However, using a technique coined by Microsoft as Controlled Experiment Using Pre-Experiment Data (CUPED), variance from before the experiment start can be taken into account so that fewer samples are required to produce a statistically significant result.
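The following is a minimal sketch of the CUPED idea on simulated data: a pre-experiment covariate is used to adjust the experiment metric, shrinking its variance without biasing the estimated treatment effect. The variable names and numbers are assumptions for illustration, not Microsoft's implementation.

```python
# Hedged sketch of a CUPED-style variance reduction (simulated data).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Pre-experiment metric (e.g., last month's activity) and a post-experiment
# metric correlated with it, plus a small treatment effect for half the users.
pre = rng.normal(100, 20, size=n)
treated = rng.integers(0, 2, size=n).astype(bool)
post = 0.8 * pre + rng.normal(0, 10, size=n) + 0.5 * treated

# CUPED adjustment: theta is the regression slope of post on pre.
theta = np.cov(post, pre)[0, 1] / np.var(pre, ddof=1)
post_cuped = post - theta * (pre - pre.mean())

def diff_and_var(metric):
    """Treatment-minus-control difference and its estimated variance."""
    diff = metric[treated].mean() - metric[~treated].mean()
    var = metric[treated].var(ddof=1) / treated.sum() + \
          metric[~treated].var(ddof=1) / (~treated).sum()
    return diff, var

for name, m in [("raw", post), ("CUPED", post_cuped)]:
    d, v = diff_and_var(m)
    print(f"{name:6s} estimate: {d:.3f}, std. error: {np.sqrt(v):.3f}")
```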
Because of its nature as an experiment, running an A/B test introduces the risk of wasted time and resources if the test produces unwanted or unhelpful results.
In December 2018, representatives with experience in large-scale A/B testing from 13 organizations (Airbnb, Amazon, Booking.com, Facebook, Google, LinkedIn, Lyft, Microsoft, Netflix, Twitter, Uber and Stanford University) summarized the top challenges in a paper. The challenges were grouped into four areas: analysis, engineering and culture, deviations from traditional A/B tests and data quality.
History
It is difficult to definitively establish when A/B testing was first used. The first randomized double-blind trial to assess the effectiveness of a homeopathic drug occurred in 1835. Experimentation with advertising campaigns, which has been compared to modern A/B testing, began in the early 20th century. The advertising pioneer Claude Hopkins used promotional coupons to test the effectiveness of his campaigns. However, this process, which Hopkins described in his 1923 book Scientific Advertising, did not incorporate concepts such as statistical significance and the null hypothesis, which are used in statistical hypothesis testing. Modern statistical methods for assessing the significance of sample data were developed separately in the same period. This work was conducted in 1908 by William Sealy Gosset when he altered the Z-test to create Student's t-test.
With the growth of the internet, new ways to sample populations have become available. Google engineers ran their first A/B test in 2000 to determine the optimum number of results to display in its search-engine results. The first test was unsuccessful because of glitches that resulted from slow loading times. Later A/B testing research was more advanced, but the foundation and underlying principles generally remain the same, and in 2011, Google ran more than 7,000 different A/B tests.
In 2012, a Microsoft employee working on the search engine Bing created an experiment to test different methods of displaying advertising headlines. Within hours, the alternative format produced a revenue increase of 12% with no impact on user-experience metrics. Today, major software companies such as Microsoft and Google each conduct over 10,000 A/B tests annually.
A/B testing has been claimed by some to be a change in philosophy and business-strategy in certain niches, although the approach is identical to a between-subjects design, which is commonly used in a variety of research traditions. A/B testing as a philosophy of web development brings the field into line with a broader movement toward evidence-based practice.
Many companies now use the "designed experiment" approach to making marketing decisions, with the expectation that relevant sample results can improve positive conversion results. It is an increasingly common practice as the tools and expertise grow in this area.
Applications
Online social media
A/B tests have been used by large social-media sites such as LinkedIn, Facebook and Instagram to understand user engagement and satisfaction of online features, such as a new feature or product. A/B tests have also been used to conduct complex experiments on subjects such as network effects when users are offline, how online services affect user actions and how users influence one another.
E-commerce
On an e-commerce website, the purchase funnel is typically a helpful candidate for A/B testing, as even marginal decreases in drop-off rates can represent a significant gain in sales. Significant improvements can sometimes be seen through testing elements such as copy text, layouts, images and colors. In these tests, users only see one of two versions, as the goal is to discover which of the two versions is preferable.
Product pricing
A/B testing can be used to determine the right price for a product, which is one of the most difficult challenges faced when a new product or service is launched. A/B testing (especially valid for digital goods) is an effective mechanism to identify the price point that maximizes the total revenue.
Political A/B testing
A/B tests have also been used by political campaigns. In 2007, Barack Obama's presidential campaign used A/B testing to garner online attention and understand what voters wanted to see from Obama. For example, Obama's team tested four distinct buttons on their website that led users to register for newsletters. Additionally, the team used six different accompanying images to attract users.
HTTP routing and API feature testing
A/B testing is commonly employed when deploying a newer version of an API. For real-time user experience testing, an HTTP layer 7 reverse proxy is configured in such a way that n% of the HTTP traffic is routed to the newer version of the backend instance, while the remaining 100-n% of HTTP traffic hits the (stable) older version of the backend HTTP application service. This is usually achieved to limit the exposure of customers to a newer backend instance such that, if there is a bug with the newer version, only n% of the total user agents or clients are affected while others are routed to a stable backend, which is a common ingress control mechanism.
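A minimal sketch of the kind of deterministic traffic split such a routing layer might implement is shown below; the hash-based bucketing scheme and backend addresses are illustrative assumptions rather than any particular proxy's configuration.

```python
# Hedged sketch: deterministically route n% of clients to a new backend version.
# The bucketing scheme and backend URLs are illustrative assumptions.
import hashlib

NEW_BACKEND = "http://backend-v2.internal"      # hypothetical new version
STABLE_BACKEND = "http://backend-v1.internal"   # hypothetical stable version

def pick_backend(client_id: str, percent_to_new: float = 10.0) -> str:
    """Hash the client id into one of 10,000 buckets; send the first
    `percent_to_new` percent of buckets to the new backend."""
    digest = hashlib.sha256(client_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 10_000
    return NEW_BACKEND if bucket < percent_to_new * 100 else STABLE_BACKEND

# The same client always lands on the same backend, which keeps the user
# experience consistent and makes the exposed group easy to identify.
print(pick_backend("user-42"))
print(pick_backend("user-42"))  # identical to the call above
```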
See also
Adaptive control
Between-group design experiment
Choice modelling
Multi-armed bandit
Multivariate testing
Randomized controlled trial
Scientific control
Stochastic dominance
Test statistic
Two-proportion Z-test
References
id: wiki::en::Sequential analysis | source_type: wiki | title: Sequential analysis | url: https://en.wikipedia.org/wiki/Sequential_analysis | language: en | topics: []
In statistics, sequential analysis or sequential hypothesis testing is statistical analysis where the sample size is not fixed in advance. Instead data is evaluated as it is collected, and further sampling is stopped in accordance with a pre-defined stopping rule as soon as significant results are observed. Thus a conclusion may sometimes be reached at a much earlier stage than would be possible with more classical hypothesis testing or estimation, at consequently lower financial and/or human cost.
History
The method of sequential analysis is first attributed to Abraham Wald with Jacob Wolfowitz, W. Allen Wallis, and Milton Friedman while at Columbia University's Statistical Research Group as a tool for more efficient industrial quality control during World War II. Its value to the war effort was immediately recognised, and led to its receiving a "restricted" classification. At the same time, George Barnard led a group working on optimal stopping in Great Britain. Another early contribution to the method was made by K.J. Arrow with D. Blackwell and M.A. Girshick.
A similar approach was independently developed from first principles at about the same time by Alan Turing, as part of the Banburismus technique used at Bletchley Park, to test hypotheses about whether different messages coded by German Enigma machines should be connected and analysed together. This work remained secret until the early 1980s.
Peter Armitage introduced the use of sequential analysis in medical research, especially in the area of clinical trials. Sequential methods became increasingly popular in medicine following Stuart Pocock's work that provided clear recommendations on how to control Type 1 error rates in sequential designs.
Alpha spending functions
When researchers repeatedly analyze data as more observations are added, the probability of a Type 1 error increases. Therefore, it is important to adjust the alpha level at each interim analysis, such that the overall Type 1 error rate remains at the desired level. This is conceptually similar to using the Bonferroni correction, but because the repeated looks at the data are dependent, more efficient corrections for the alpha level can be used. Among the earliest proposals is the Pocock boundary. Alternative ways to control the Type 1 error rate exist, such as the Haybittle–Peto bounds, and additional work on determining the boundaries for interim analyses has been done by O'Brien & Fleming and Wang & Tsiatis.
A limitation of corrections such as the Pocock boundary is that the number of looks at the data must be determined before the data is collected, and that the looks at the data should be equally spaced (e.g., after 50, 100, 150, and 200 patients). The alpha spending function approach developed by Demets & Lan does not have these restrictions, and depending on the parameters chosen for the spending function, can be very similar to Pocock boundaries or the corrections proposed by O'Brien and Fleming. Another approach that has no such restrictions at all is based on e-values and e-processes.
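As an illustration of why and how the per-look critical value is inflated, the following sketch approximates a Pocock-style constant boundary by Monte Carlo simulation; the simulation size and the assumption of equally spaced looks are illustrative choices.

```python
# Hedged sketch: approximate a Pocock-style constant boundary by simulation.
# For K equally spaced interim looks, the cumulative z-statistics under the
# null are Z_k = (e_1 + ... + e_k) / sqrt(k) for i.i.d. standard normal e_i.
import numpy as np

def pocock_boundary(n_looks: int, alpha: float = 0.05,
                    n_sim: int = 200_000, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    increments = rng.standard_normal((n_sim, n_looks))
    z = increments.cumsum(axis=1) / np.sqrt(np.arange(1, n_looks + 1))
    max_abs_z = np.abs(z).max(axis=1)
    # Constant critical value c with P(max_k |Z_k| > c) = alpha under the null.
    return float(np.quantile(max_abs_z, 1 - alpha))

for k in (1, 2, 5):
    print(k, round(pocock_boundary(k), 3))
# With a single look this recovers roughly 1.96; with more looks the per-look
# critical value grows (about 2.18 for two looks, about 2.41 for five).
```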
Applications of sequential analysis
Clinical trials
In a randomized trial with two treatment groups, group sequential testing may for example be conducted in the following manner: After n subjects in each group are available an interim analysis is conducted. A statistical test is performed to compare the two groups and if the null hypothesis is rejected the trial is terminated; otherwise, the trial continues, another n subjects per group are recruited, and the statistical test is performed again, including all subjects. If the null is rejected, the trial is terminated, and otherwise it continues with periodic evaluations until a maximum number of interim analyses have been performed, at which point the last statistical test is conducted and the trial is discontinued.
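A minimal sketch of this interim-testing loop on simulated data is given below; the per-look sample size, the true effect, and the constant critical value of 2.413 (a Pocock-style two-sided boundary for five looks at α = 0.05) are assumptions for illustration, and comparing a t statistic to a normal-based boundary is itself an approximation.

```python
# Hedged sketch of the group sequential procedure described above,
# on simulated data with assumed design constants.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_per_look, max_looks, critical_z = 50, 5, 2.413
control, treatment = [], []

for look in range(1, max_looks + 1):
    control.extend(rng.normal(0.0, 1.0, n_per_look))    # simulated outcomes
    treatment.extend(rng.normal(0.4, 1.0, n_per_look))  # true effect of 0.4
    t_stat, _ = stats.ttest_ind(treatment, control)     # test on all subjects so far
    print(f"look {look}: n/group={len(control)}, t={t_stat:.2f}")
    if abs(t_stat) > critical_z:
        print("stop early: null hypothesis rejected")
        break
else:
    print("maximum number of looks reached without rejection")
```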
Other applications
Sequential analysis also has a connection to the problem of gambler's ruin that has been studied by, among others, Huygens in 1657.
Step detection is the process of finding abrupt changes in the mean level of a time series or signal. It is usually considered as a special kind of statistical method known as change point detection. Often, the step is small and the time series is corrupted by some kind of noise, and this makes the problem challenging because the step may be hidden by the noise. Therefore, statistical and/or signal processing algorithms are often required. When the algorithms are run online as the data is coming in, especially with the aim of producing an alert, this is an application of sequential analysis.
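As a sketch of such an online detector, the following implements a simple one-sided CUSUM rule on simulated data; the reference value and decision threshold are illustrative assumptions that would normally be tuned to the application.

```python
# Hedged sketch: a one-sided CUSUM detector for an upward shift in the mean,
# run online as observations arrive (allowance and threshold are assumed).
import numpy as np

def cusum_alarm(stream, mu0=0.0, k=0.5, h=5.0):
    """Return the index at which the CUSUM statistic first exceeds h, else None.
    mu0 is the in-control mean, k the allowance (often half the shift of
    interest, in standard-deviation units), h the decision threshold."""
    s = 0.0
    for t, x in enumerate(stream):
        s = max(0.0, s + (x - mu0) - k)
        if s > h:
            return t
    return None

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(0, 1, 200),      # in control
                       rng.normal(1.5, 1, 100)])   # mean shifts at t = 200
print("alarm at index:", cusum_alarm(data))
```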
Bias
Trials that are terminated early because they reject the null hypothesis typically overestimate the true effect size. This is because in small samples, only large effect size estimates will lead to a significant effect, and the subsequent termination of a trial. Methods to correct effect size estimates in single trials have been proposed. Note that this bias is mainly problematic when interpreting single studies. In meta-analyses, overestimated effect sizes due to early stopping are balanced by underestimation in trials that stop late, leading Schou & Marschner to conclude that "early stopping of clinical trials is not a substantive source of bias in meta-analyses".
The meaning of p-values in sequential analyses also changes, because when using sequential analyses, more than one analysis is performed, and the typical definition of a p-value as the data “at least as extreme” as is observed needs to be redefined. One solution is to order the p-values of a series of sequential tests based on the time of stopping and how high the test statistic was at a given look, which is known as stagewise ordering, first proposed by Armitage.
See also
Optimal stopping
Sequential estimation
Sequential probability ratio test
CUSUM
References
Wald, Abraham (1947). Sequential Analysis. New York: John Wiley and Sons.
Bartroff, J., Lai, T.L., and Shih, M.-C. (2013). Sequential Experimentation in Clinical Trials: Design and Analysis. Springer.
Ghosh, Bhaskar Kumar (1970). Sequential Tests of Statistical Hypotheses. Reading: Addison-Wesley.
Chernoff, Herman (1972). Sequential Analysis and Optimal Design. SIAM.
Siegmund, David (1985). Sequential Analysis. Springer Series in Statistics. New York: Springer-Verlag. ISBN 978-0-387-96134-7.
Bakeman, R., and Gottman, J.M. (1997). Observing Interaction: An Introduction to Sequential Analysis. Cambridge: Cambridge University Press.
Jennison, C., and Turnbull, B.W. (2000). Group Sequential Methods with Applications to Clinical Trials. Chapman & Hall/CRC.
Whitehead, J. (1997). The Design and Analysis of Sequential Clinical Trials, 2nd Edition. John Wiley & Sons.
External links
R Package: Wald's Sequential Probability Ratio Test by OnlineMarketr.com
Software for conducting sequential analysis and applications of sequential analysis in the study of group interaction in computer-mediated communication by Dr. Allan Jeong at Florida State University
SAMBO Optimization – a Python framework for sequential, model-based optimization.
Commercial
PASS Sample Size Software includes features for the setup of group sequential designs.
id: wiki::en::False discovery rate | source_type: wiki | title: False discovery rate | url: https://en.wikipedia.org/wiki/False_discovery_rate | language: en | topics: []
In statistics, the false discovery rate (FDR) is a method of conceptualizing the rate of type I errors in null hypothesis testing when conducting multiple comparisons. FDR-controlling procedures are designed to control the FDR, which is the expected proportion of "discoveries" (rejected null hypotheses) that are false (incorrect rejections of the null). Equivalently, the FDR is the expected ratio of the number of false positive classifications (false discoveries) to the total number of positive classifications (rejections of the null). The total number of rejections of the null includes both the number of false positives (FP) and true positives (TP). Simply put, FDR = FP / (FP + TP). FDR-controlling procedures provide less stringent control of Type I errors compared to family-wise error rate (FWER) controlling procedures (such as the Bonferroni correction), which control the probability of at least one Type I error. Thus, FDR-controlling procedures have greater power, at the cost of increased numbers of Type I errors.
History
Technological motivations
The modern widespread use of the FDR is believed to stem from, and be motivated by, the development in technologies that allowed the collection and analysis of a large number of distinct variables in several individuals (e.g., the expression level of each of 10,000 different genes in 100 different persons). By the late 1980s and 1990s, the development of "high-throughput" sciences, such as genomics, allowed for rapid data acquisition. This, coupled with the growth in computing power, made it possible to seamlessly perform a very high number of statistical tests on a given data set. The technology of microarrays was a prototypical example, as it enabled thousands of genes to be tested simultaneously for differential expression between two biological conditions.
As high-throughput technologies became common, technological and/or financial constraints led researchers to collect datasets with relatively small sample sizes (e.g. few individuals being tested) and large numbers of variables being measured per sample (e.g. thousands of gene expression levels). In these datasets, too few of the measured variables showed statistical significance after classic correction for multiple tests with standard multiple comparison procedures. This created a need within many scientific communities to abandon FWER and unadjusted multiple hypothesis testing for other ways to highlight and rank in publications those variables showing marked effects across individuals or treatments that would otherwise be dismissed as non-significant after standard correction for multiple tests. In response to this, a variety of error rates have been proposed—and become commonly used in publications—that are less conservative than FWER in flagging possibly noteworthy observations. The FDR is useful when researchers are looking for "discoveries" that will give them follow-up work (e.g., detecting promising genes for follow-up studies), and are interested in controlling the proportion of "false leads" they are willing to accept.
Literature
The FDR concept was formally described by Yoav Benjamini and Yosef Hochberg in 1995 (BH procedure) as a less conservative and arguably more appropriate approach for identifying the important few from the trivial many effects tested. The FDR has been particularly influential, as it was the first alternative to the FWER to gain broad acceptance in many scientific fields (especially in the life sciences, from genetics to biochemistry, oncology and plant sciences). In 2005, the Benjamini and Hochberg paper from 1995 was identified as one of the 25 most-cited statistical papers.
Prior to the 1995 introduction of the FDR concept, various precursor ideas had been considered in the statistics literature. In 1979, Holm proposed the Holm procedure, a stepwise algorithm for controlling the FWER that is at least as powerful as the well-known Bonferroni adjustment. This stepwise algorithm sorts the p-values and sequentially rejects the hypotheses starting from the smallest p-values.
Benjamini (2010) said that the false discovery rate, and the paper Benjamini and Hochberg (1995), had its origins in two papers concerned with multiple testing:
The first paper is by Schweder and Spjotvoll (1982) who suggested plotting the ranked p-values and assessing the number of true null hypotheses (m0) via an eye-fitted line starting from the largest p-values. The p-values that deviate from this straight line then should correspond to the false null hypotheses. This idea was later developed into an algorithm and incorporated the estimation of m0 into procedures such as Bonferroni, Holm or Hochberg. This idea is closely related to the graphical interpretation of the BH procedure.
The second paper is by Branko Soric (1989) which introduced the terminology of "discovery" in the multiple hypothesis testing context. Soric used the expected number of false discoveries divided by the number of discoveries, E[V]/R, as a warning that "a large part of statistical discoveries may be wrong". This led Benjamini and Hochberg to the idea that a similar error rate, rather than being merely a warning, can serve as a worthy goal to control.
The BH procedure was proven to control the FDR for independent tests in 1995 by Benjamini and Hochberg. In 1986, R. J. Simes offered the same procedure as the "Simes procedure", in order to control the FWER in the weak sense (under the intersection null hypothesis) when the statistics are independent.
Definitions
Based on definitions below we can define Q as the proportion of false discoveries among the discoveries (rejections of the null hypothesis):

$Q = \frac{V}{R} = \frac{V}{V+S},$

where V is the number of false discoveries and S is the number of true discoveries.
The false discovery rate (FDR) is then simply the following:

FDR = Qe = E[Q],

where E[Q] is the expected value of Q. The goal is to keep FDR below a given threshold q. To avoid division by zero, Q is defined to be 0 when R = 0. Formally,

FDR = E[V/R | R > 0] ⋅ P(R > 0).
Classification of multiple hypothesis tests
The following table defines the possible outcomes when testing multiple null hypotheses.
Suppose we have a number m of null hypotheses, denoted by: H1, H2, ..., Hm.
Using a statistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant.
Summing each type of outcome over all Hi yields the following random variables:
m is the total number of hypotheses tested
m0 is the number of true null hypotheses, an unknown parameter
m − m0 is the number of true alternative hypotheses
V is the number of false positives (Type I error) (also called "false discoveries")
S is the number of true positives (also called "true discoveries")
T is the number of false negatives (Type II error)
U is the number of true negatives
R = V + S is the number of rejected null hypotheses (also called "discoveries", either true or false)
In m hypothesis tests of which m0 are true null hypotheses, R is an observable random variable, and S, T, U, and V are unobservable random variables.
Controlling procedures
The setting for many procedures is such that we have H1, ..., Hm null hypotheses tested and P1, ..., Pm their corresponding p-values. We list these p-values in ascending order and denote them by P(1), ..., P(m). A procedure that goes from a small test statistic to a large one will be called a step-up procedure. In a similar way, in a "step-down" procedure we move from a large corresponding test statistic to a smaller one.
Benjamini–Hochberg procedure
The Benjamini–Hochberg procedure (BH step-up procedure) controls the FDR at level α. It works as follows:
For a given α, find the largest k such that $P_{(k)} \leq \frac{k}{m}\alpha$.
Reject the null hypothesis (i.e., declare discoveries) for all H(i) for i = 1, ..., k.
Geometrically, this corresponds to plotting P(k) vs. k (on the y and x axes respectively), drawing the line through the origin with slope α/m, and declaring discoveries for all points on the left, up to, and including the last point that is not above the line.
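A minimal sketch of this step-up rule is given below; the p-values are illustrative, and the comment notes how the Benjamini–Yekutieli variant would modify the threshold.

```python
# Hedged sketch of the BH step-up procedure described above.
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean array marking which hypotheses are rejected."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                      # indices of sorted p-values
    thresholds = alpha * np.arange(1, m + 1) / m
    # For the Benjamini-Yekutieli variant (arbitrary dependence), divide
    # alpha by c(m) = sum(1/i for i in range(1, m + 1)) before this step.
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])       # largest k with P_(k) <= k*alpha/m
        rejected[order[: k + 1]] = True        # reject all hypotheses up to k
    return rejected

p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(p, alpha=0.05))   # only the two smallest are rejected
```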
The BH procedure is valid when the m tests are independent, and also in various scenarios of dependence, but is not universally valid. It also satisfies the inequality:
E(Q) ≤ (m0/m)α ≤ α
If an estimator of m0 is inserted into the BH procedure, it is no longer guaranteed to achieve FDR control at the desired level. Adjustments may be needed in the estimator and several modifications have been proposed.
Note that the mean α for these m tests is α(m+1)/(2m), the Mean(FDR α) or MFDR, α adjusted for m independent or positively correlated tests (see AFDR below). The MFDR expression here is for a single recomputed value of α and is not part of the Benjamini and Hochberg method.
Benjamini–Yekutieli procedure
The Benjamini–Yekutieli procedure controls the false discovery rate under arbitrary dependence assumptions. This refinement modifies the threshold and finds the largest k such that:
$P_{(k)} \leq \frac{k}{m \cdot c(m)}\alpha$
If the tests are independent or positively correlated (as in Benjamini–Hochberg procedure):
c(m) = 1
Under arbitrary dependence (including the case of negative correlation), c(m) is the harmonic number:
$c(m) = \sum_{i=1}^{m} \frac{1}{i}.$ Note that c(m) can be approximated by using the Taylor series expansion and the Euler–Mascheroni constant (γ = 0.57721...): $\sum_{i=1}^{m} \frac{1}{i} \approx \ln(m) + \gamma + \frac{1}{2m}.$
Using MFDR and formulas above, an adjusted MFDR (or AFDR) is the minimum of the mean α for m dependent tests, i.e., $\frac{\mathrm{MFDR}}{c(m)} = \frac{\alpha(m+1)}{2m[\ln(m)+\gamma]+1}.$
Another way to address dependence is by bootstrapping and rerandomization.
Storey-Tibshirani procedure
In the Storey-Tibshirani procedure, q-values are used for controlling the FDR.
Properties
Adaptive and scalable
Using a multiplicity procedure that controls the FDR criterion is adaptive and scalable, meaning that controlling the FDR can be very permissive (if the data justify it) or conservative (acting close to control of FWER for sparse problems), all depending on the number of hypotheses tested and the level of significance.
The FDR criterion adapts so that the same number of false discoveries (V) will have different implications, depending on the total number of discoveries (R). This contrasts with the family-wise error rate criterion. For example, if inspecting 100 hypotheses (say, 100 genetic mutations or SNPs for association with some phenotype in some population):
If we make 4 discoveries (R), having 2 of them be false discoveries (V) is often very costly. Whereas,
If we make 50 discoveries (R), having 2 of them be false discoveries (V) is often not very costly.
The FDR criterion is scalable in that the same proportion of false discoveries out of the total number of discoveries (Q) remains sensible for different numbers of total discoveries (R). For example:
If we make 100 discoveries (R), having 5 of them be false discoveries (q = 5%) may not be very costly.
Similarly, if we make 1000 discoveries (R), having 50 of them be false discoveries (as before, q = 5%) may still not be very costly.
Dependency among the test statistics
Controlling the FDR using the linear step-up BH procedure, at level q, has several properties related to the dependency structure between the test statistics of the m null hypotheses that are being corrected for. If the test statistics are:
Independent: FDR ≤ (m0/m)q
Independent and continuous: FDR = (m0/m)q
Positive dependent: FDR ≤ (m0/m)q
In the general case: $\mathrm{FDR} \leq \frac{m_0}{m}\cdot\frac{q}{1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{m}} \approx \frac{m_0}{m}\cdot\frac{q}{\ln(m)+\gamma+\frac{1}{2m}},$ where γ is the Euler–Mascheroni constant.
Proportion of true hypotheses
If all of the null hypotheses are true (m0 = m), then controlling the FDR at level q guarantees control over the FWER (this is also called "weak control of the FWER"): FWER = P(V ≥ 1) = E(V/R) = FDR ≤ q, simply because the event of rejecting at least one true null hypothesis {V ≥ 1} is exactly the event {V/R = 1}, and the event {V = 0} is exactly the event {V/R = 0} (when V = R = 0, V/R = 0 by definition). But if there are some true discoveries to be made (m0 < m) then FWER ≥ FDR. In that case there will be room for improving detection power. It also means that any procedure that controls the FWER will also control the FDR.
Average power
The average power of the Benjamini–Hochberg procedure can be computed analytically.
Related concepts
The discovery of the FDR was preceded and followed by many other types of error rates. These include:
PCER (per-comparison error rate) is defined as: PCER = E[V/m]. Testing individually each hypothesis at level α guarantees that PCER ≤ α (this is testing without any correction for multiplicity).
FWER (the family-wise error rate) is defined as: FWER = P(V ≥ 1). There are numerous procedures that control the FWER.
k-FWER (the tail probability of the False Discovery Proportion), suggested by Lehmann and Romano, van der Laan et al., is defined as: k-FWER = P(V ≥ k) ≤ q.
k-FDR (also called the generalized FDR by Sarkar in 2007) is defined as: $k\text{-FDR} = E\left(\frac{V}{R}\,I_{(V>k)}\right) \leq q$.
Q′ is the proportion of false discoveries among the discoveries, suggested by Soric in 1989, and is defined as: Q′ = E[V]/R. This is a mixture of expectations and realizations, and has the problem of control for m0 = m.
FDR−1 (or Fdr) was used by Benjamini and Hochberg, and later called "Fdr" by Efron (2008) and earlier. It is defined as: FDR−1 = Fdr = E[V]/E[R]. This error rate cannot be strictly controlled because it is 1 when m = m0.
FDR+1 was used by Benjamini and Hochberg, and later called "pFDR" by Storey (2002). It is defined as: FDR+1 = pFDR = E[V/R | R > 0]. This error rate cannot be strictly controlled because it is 1 when m = m0. JD Storey promoted the use of the pFDR (a close relative of the FDR), and the q-value, which can be viewed as the proportion of false discoveries that we expect in an ordered table of results, up to the current line. Storey also promoted the idea (also mentioned by BH) that the actual number of null hypotheses, m0, can be estimated from the shape of the probability distribution curve. For example, in a set of data where all null hypotheses are true, 50% of results will yield probabilities between 0.5 and 1.0 (and the other 50% will yield probabilities between 0.0 and 0.5). We can therefore estimate m0 by finding the number of results with P > 0.5 and doubling it, and this permits refinement of our calculation of the pFDR at any particular cut-off in the data set (see the sketch after this list).
False exceedance rate (the tail probability of FDP), defined as: P(V/R > q).
W-FDR (Weighted FDR). Associated with each hypothesis i is a weight wi ≥ 0; the weights capture importance/price. The W-FDR is defined as: $W\text{-FDR} = E\left(\frac{\sum w_i V_i}{\sum w_i R_i}\right)$.
FDCR (False Discovery Cost Rate). Stemming from statistical process control: associated with each hypothesis i is a cost ci and with the intersection hypothesis H00 a cost c0. The motivation is that stopping a production process may incur a fixed cost. It is defined as: $\mathrm{FDCR} = E\left(\frac{c_0 V_0 + \sum c_i V_i}{c_0 R_0 + \sum c_i R_i}\right)$.
PFER (per-family error rate) is defined as:
P
F
E
R
=
E
(
V
)
{\displaystyle \mathrm {PFER} =E(V)}
.
FNR (False non-discovery rates) by Sarkar; Genovese and Wasserman is defined as:
F
N
R
=
E
(
T
m
−
R
)
=
E
(
m
−
m
0
−
(
R
−
V
)
m
−
R
)
{\displaystyle \mathrm {FNR} =E\left({\frac {T}{m-R}}\right)=E\left({\frac {m-m_{0}-(R-V)}{m-R}}\right)}
FDR(z) is defined as: $\mathrm{FDR}(z) = \frac{p_0 F_0(z)}{F(z)}$.
fdr, the local fdr, is defined as: $\mathrm{fdr} = \frac{p_0 f_0(z)}{f(z)}$ in a local interval of z.
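As referenced above, here is a minimal sketch of the m0-doubling heuristic and the resulting plug-in FDR estimate at a chosen cut-off; the simulated p-values and the Beta distribution used for the alternatives are illustrative assumptions.

```python
# Hedged sketch of the m0-estimation heuristic described in the list above
# (Storey-style): under true nulls p-values are uniform, so roughly half of
# them exceed 0.5, and m0 can be estimated as twice the count above 0.5.
import numpy as np

def estimate_m0(p_values, threshold=0.5):
    p = np.asarray(p_values, dtype=float)
    return min(len(p), 2 * np.sum(p > threshold))

def fdr_estimate_at_cutoff(p_values, cutoff):
    """Estimated proportion of false discoveries among p-values <= cutoff,
    using the plug-in m0 estimate (a rough pFDR-style quantity)."""
    p = np.asarray(p_values, dtype=float)
    r = max(1, int(np.sum(p <= cutoff)))        # number of discoveries
    expected_false = estimate_m0(p) * cutoff    # expected false discoveries
    return min(1.0, expected_false / r)

rng = np.random.default_rng(4)
# 900 true nulls (uniform p-values) and 100 alternatives (small p-values).
p = np.concatenate([rng.uniform(0, 1, 900), rng.beta(0.5, 20, 100)])
print("estimated m0:", estimate_m0(p))
print("estimated FDR at p <= 0.01:", round(fdr_estimate_at_cutoff(p, 0.01), 3))
```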
False coverage rate
The false coverage rate (FCR) is, in a sense, the FDR analog to the confidence interval. FCR indicates the average rate of false coverage, namely, not covering the true parameters, among the selected intervals. The FCR gives a simultaneous coverage at a 1 − α level for all of the parameters considered in the problem. Intervals with simultaneous coverage probability 1 − q can control the FCR to be bounded by q. There are many FCR procedures such as: Bonferroni-Selected–Bonferroni-Adjusted, Adjusted BH-Selected CIs (Benjamini and Yekutieli (2005)), Bayes FCR (Zhao and Hwang (2012)), and other Bayes methods.
Bayesian approaches
Connections have been made between the FDR and Bayesian approaches (including empirical Bayes methods), thresholding wavelets coefficients and model selection, and generalizing the confidence interval into the false coverage statement rate (FCR).
Structural False Discovery Rate (sFDR)
The Structural False Discovery Rate (sFDR) is a generalization of the classical False Discovery Rate (FDR) introduced by D. Meskaldji and collaborators in 2018.
The sFDR extends the FDR by replacing the linear denominator R in the expected ratio E[V/R] with a non-decreasing concave function s(R), yielding the criterion E[V/s(R)]. This approach allows the control of false discoveries to adapt to the scale of testing, so that prudence increases faster than linearly as the number of rejections grows.
When s(R)=R, the classical FDR is recovered, while specific choices of s(R) can interpolate between FDR control and family-wise error control (k-FWER). The sFDR provides a structural connection between classical, local, and generalized false discovery concepts, and has been extended to online and adaptive settings.
Software implementations
False Discovery Rate Analysis in R – Lists links with popular R packages
False Discovery Rate Analysis in Python – Python implementations of false discovery rate procedures
See also
Positive predictive value
References
External links
The False Discovery Rate - Yoav Benjamini, Ruth Heller & Daniel Yekutieli - Rousseeuw Prize for Statistics ceremony lecture from 2024.
False Discovery Rate: Corrected & Adjusted P-values - MATLAB/GNU Octave implementation and discussion on the difference between corrected and adjusted FDR p-values.
Understanding False Discovery Rate - blog post
StatQuest: FDR and the Benjamini-Hochberg Method clearly explained on YouTube
Understanding False Discovery Rate - Includes Excel VBA code to implement it, and an example in cell line development
id: wiki::en::Sample size determination | source_type: wiki | title: Sample size determination | url: https://en.wikipedia.org/wiki/Sample_size_determination | language: en | topics: []
Sample size determination or estimation is the act of choosing the number of observations or replicates to include in a statistical sample. The sample size is an important feature of any empirical study in which the goal is to make inferences about a population from a sample. In practice, the sample size used in a study is usually determined based on the cost, time, or convenience of collecting the data, and the need for it to offer sufficient statistical power. In complex studies, different sample sizes may be allocated, such as in stratified surveys or experimental designs with multiple treatment groups. In a census, data is sought for an entire population, hence the intended sample size is equal to the population. In experimental design, where a study may be divided into different treatment groups, there may be different sample sizes for each group.
Sample sizes may be chosen in several ways:
using experience – small samples, though sometimes unavoidable, can result in wide confidence intervals and risk of errors in statistical hypothesis testing.
using a target variance for an estimate to be derived from the sample eventually obtained, i.e., if a high precision is required (narrow confidence interval) this translates to a low target variance of the estimator.
the use of a power target, i.e. the power of statistical test to be applied once the sample is collected.
using a confidence level, i.e. the larger the required confidence level, the larger the sample size (given a constant precision requirement).
Introduction
Sample size determination is a crucial aspect of research methodology that plays a significant role in ensuring the reliability and validity of study findings. Because it influences the accuracy of estimates, the power of statistical tests, and the general robustness of the research findings, it entails carefully choosing the number of participants or data points to be included in a study.
Consider the case where we are conducting a survey to determine the average satisfaction level of customers regarding a new product. To determine an appropriate sample size, we need to consider factors such as the desired level of confidence, margin of error, and variability in the responses. We might decide that we want a 95% confidence level, meaning we are 95% confident that the true average satisfaction level falls within the calculated range. We also decide on a margin of error, of ±3%, which indicates the acceptable range of difference between our sample estimate and the true population parameter. Additionally, we may have some idea of the expected variability in satisfaction levels based on previous data or assumptions.
Importance
Larger sample sizes generally lead to increased precision when estimating unknown parameters. For instance, to accurately determine the prevalence of pathogen infection in a specific species of fish, it is preferable to examine a sample of 200 fish rather than 100 fish. Several fundamental facts of mathematical statistics describe this phenomenon, including the law of large numbers and the central limit theorem.
In some situations, the increase in precision for larger sample sizes is minimal, or even non-existent. This can result from the presence of systematic errors or strong dependence in the data, from the data following a heavy-tailed distribution, or from the data being biased.
Sample sizes may be evaluated by the quality of the resulting estimates, as follows. It is usually determined on the basis of the cost, time or convenience of data collection and the need for sufficient statistical power. For example, if a proportion is being estimated, one may wish to have the 95% confidence interval be less than 0.06 units wide. Alternatively, sample size may be assessed based on the power of a hypothesis test. For example, if we are comparing the support for a certain political candidate among women with the support for that candidate among men, we may wish to have 80% power to detect a difference in the support levels of 0.04 units.
Estimation
Estimation of a proportion
A relatively simple situation is estimation of a proportion. It is a fundamental aspect of statistical analysis, particularly when gauging the prevalence of a specific characteristic within a population. For example, we may wish to estimate the proportion of residents in a community who are at least 65 years old.
The estimator of a proportion is $\hat{p} = X/n$, where X is the number of 'positive' instances (e.g., the number of people out of the n sampled people who are at least 65 years old). When the observations are independent, this estimator has a (scaled) binomial distribution (and is also the sample mean of data from a Bernoulli distribution). The maximum variance of this distribution is 0.25, which occurs when the true parameter is p = 0.5. In practical applications, where the true parameter p is unknown, the maximum variance is often employed for sample size assessments. If a reasonable estimate for p is known, the quantity p(1 − p) may be used in place of 0.25.
As the sample size n grows sufficiently large, the distribution of $\hat{p}$ will be closely approximated by a normal distribution. Using this and the Wald method for the binomial distribution, yields a confidence interval, with Z representing the standard Z-score for the desired confidence level (e.g., 1.96 for a 95% confidence interval), in the form:

$\left(\hat{p} - Z\sqrt{\frac{0.25}{n}},\quad \hat{p} + Z\sqrt{\frac{0.25}{n}}\right)$
To determine an appropriate sample size n for estimating proportions, the equation below can be solved, where W represents the desired width of the confidence interval. The resulting sample size formula is often applied with a conservative estimate of p (e.g., 0.5):

$Z\sqrt{\frac{0.25}{n}} = W/2$

Solving for n yields the sample size

$n = \frac{Z^2}{W^2},$

in the case of using 0.5 as the most conservative estimate of the proportion. (Note: W/2 = margin of error.)
In the figure below one can observe how sample sizes for binomial proportions change given different confidence levels and margins of error.
Otherwise, the formula would be

$Z\sqrt{\frac{p(1-p)}{n}} = W/2,$

which yields

$n = \frac{4Z^2 p(1-p)}{W^2}.$
For example, in estimating the proportion of the U.S. population supporting a presidential candidate with a 95% confidence interval width of 2 percentage points (0.02), a sample size of (1.96)² / (0.02)² = 9604 is required. It is reasonable to use the 0.5 estimate for p in this case because presidential races are often close to 50/50, and it is also prudent to use a conservative estimate. The margin of error in this case is 1 percentage point (half of 0.02).
In practice, the formula

$\left(\hat{p} - 1.96\sqrt{\frac{0.25}{n}},\quad \hat{p} + 1.96\sqrt{\frac{0.25}{n}}\right)$

is commonly used to form a 95% confidence interval for the true proportion. The equation

$2\sqrt{\frac{0.25}{n}} = W/2$

can be solved for n, providing a minimum sample size needed to meet the desired margin of error W. The foregoing is commonly simplified: n = 4/W² = 1/B², where B is the error bound on the estimate, i.e., the estimate is usually given as within ±B. For B = 10% one requires n = 100, for B = 5% one needs n = 400, for B = 3% the requirement approximates to n = 1000, while for B = 1% a sample size of n = 10000 is required. These numbers are quoted often in news reports of opinion polls and other sample surveys. However, the results reported may not be the exact value as numbers are preferably rounded up. Knowing that the value of n is the minimum number of sample points needed to acquire the desired result, the number of respondents then must lie on or above the minimum.
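A minimal sketch of these proportion formulas, assuming SciPy for the normal quantile:

```python
# Hedged sketch of the proportion sample-size formula above.
import math
from scipy.stats import norm

def n_for_proportion(width, confidence=0.95, p=0.5):
    """Minimum n so the normal-approximation CI for a proportion has total
    width `width` (i.e., margin of error width/2), with p = 0.5 by default
    as the conservative worst case."""
    z = norm.ppf(1 - (1 - confidence) / 2)   # about 1.96 for 95% confidence
    return math.ceil(4 * z**2 * p * (1 - p) / width**2)

print(n_for_proportion(width=0.02))   # polling example above: 9604
print(n_for_proportion(width=0.06))   # CI narrower than 0.06 units: about 1068
```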
Estimation of a mean
Simply speaking, suppose we are trying to estimate the average time it takes for people to commute to work in a city. Instead of surveying the entire population, we can take a random sample of 100 individuals, record their commute times, and then calculate the mean (average) commute time for that sample. For example, person 1 takes 25 minutes, person 2 takes 30 minutes, ..., person 100 takes 20 minutes. Adding up all the commute times and dividing by the number of people in the sample (100 in this case) gives the estimate of the mean commute time for the entire population. This method is practical when it is not feasible to measure everyone in the population, and it provides a reasonable approximation based on a representative sample.
In a precisely mathematical way, when estimating the population mean using an independent and identically distributed (iid) sample of size n, where each data value has variance σ2, the standard error of the sample mean is:
$\frac{\sigma}{\sqrt{n}}.$
This expression describes quantitatively how the estimate becomes more precise as the sample size increases. Using the central limit theorem to justify approximating the sample mean with a normal distribution yields a confidence interval of the form
$\left(\bar{x} - \frac{Z\sigma}{\sqrt{n}},\quad \bar{x} + \frac{Z\sigma}{\sqrt{n}}\right),$

where Z is a standard Z-score for the desired level of confidence (1.96 for a 95% confidence interval).
To determine the sample size n required for a confidence interval of width W, with W/2 as the margin of error on each side of the sample mean, the equation
$\frac{Z\sigma}{\sqrt{n}} = W/2$

can be solved. This yields the sample size formula, for n:

$n = \frac{4Z^2\sigma^2}{W^2}.$
For instance, if estimating the effect of a drug on blood pressure with a 95% confidence interval that is six units wide, and the known standard deviation of blood pressure in the population is 15, the required sample size would be $\frac{4 \times 1.96^2 \times 15^2}{6^2} = 96.04$, which would be rounded up to 97, since sample sizes must be integers and must meet or exceed the calculated minimum value. Understanding these calculations is essential for researchers designing studies to accurately estimate population means within a desired level of confidence.
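The same calculation for a mean, as a minimal sketch (again assuming SciPy for the normal quantile):

```python
# Hedged sketch of the mean-estimation sample-size formula above.
import math
from scipy.stats import norm

def n_for_mean(width, sigma, confidence=0.95):
    """Minimum n so the CI for a mean (known sigma) has total width `width`."""
    z = norm.ppf(1 - (1 - confidence) / 2)
    return math.ceil(4 * z**2 * sigma**2 / width**2)

# Blood-pressure example from the text: width 6, sigma 15 -> 97.
print(n_for_mean(width=6, sigma=15))
```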
Required sample sizes for hypothesis tests
One of the prevalent challenges faced by statisticians is calculating the sample size needed to attain a specified statistical power for a test while maintaining a pre-determined Type I error rate α, which signifies the level of significance in hypothesis testing. The required sample size can be estimated from pre-determined tables for certain values, by formulas, by simulation, by Mead's resource equation, or by the cumulative distribution function:
Tables
The table shown on the right can be used in a two-sample t-test to estimate the sample sizes of an experimental group and a control group that are of equal size, that is, the total number of individuals in the trial is twice that of the number given, and the desired significance level is 0.05. The parameters used are:
The desired statistical power of the trial, shown in the column to the left.
Cohen's d (= effect size), which is the expected difference between the means of the target values between the experimental group and the control group, divided by the expected standard deviation.
Formulas
Calculating a required sample size is often not easy, since the distribution of the test statistic under the alternative hypothesis of interest is usually hard to work with. Approximate sample size formulas for specific problems are available in a number of general references.
A computational approach (QuickSize)
The QuickSize algorithm is a very general approach that is simple to use yet versatile enough to give an exact solution for a broad range of problems. It uses simulation together with a search algorithm.
Mead's resource equation
Mead's resource equation is often used for estimating sample sizes of laboratory animals, as well as in many other laboratory experiments. It may not be as accurate as using other methods in estimating sample size, but gives a hint of what is the appropriate sample size where parameters such as expected standard deviations or expected differences in values between groups are unknown or very hard to estimate.
All the parameters in the equation are in fact degrees of freedom of the corresponding counts, and hence each number is reduced by 1 before insertion into the equation.
The equation is:
$E = N - B - T,$
where:
N is the total number of individuals or units in the study (minus 1)
B is the blocking component, representing environmental effects allowed for in the design (minus 1)
T is the treatment component, corresponding to the number of treatment groups (including control group) being used, or the number of questions being asked (minus 1)
E is the degrees of freedom of the error component and should be somewhere between 10 and 20.
For example, if a study using laboratory animals is planned with four treatment groups (T=3), with eight animals per group, making 32 animals total (N=31), without any further stratification (B=0), then E would equal 28, which is above the cutoff of 20, indicating that sample size may be a bit too large, and six animals per group might be more appropriate.
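The bookkeeping in this example can be sketched as a one-line function; the argument names are illustrative:

```python
# Hedged sketch of the resource-equation arithmetic from the example above.
def mead_error_df(n_units, n_blocks=1, n_treatments=1):
    """E = N - B - T, with each component entered as degrees of freedom."""
    N = n_units - 1
    B = n_blocks - 1
    T = n_treatments - 1
    return N - B - T

# Four treatment groups, eight animals each, no blocking: E = 31 - 0 - 3 = 28.
print(mead_error_df(n_units=32, n_treatments=4))
# Six animals per group instead: E = 23 - 0 - 3 = 20, at the top of the
# commonly cited 10-20 range.
print(mead_error_df(n_units=24, n_treatments=4))
```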
Cumulative distribution function
Let Xi, i = 1, 2, ..., n be independent observations taken from a normal distribution with unknown mean μ and known variance σ2. Consider two hypotheses, a null hypothesis:
$H_0: \mu = 0$

and an alternative hypothesis:

$H_a: \mu = \mu^*$
for some 'smallest significant difference' μ* > 0. This is the smallest value for which we care about observing a difference. Now, for (1) to reject H0 with a probability of at least 1 − β when
Ha is true (i.e. a power of 1 − β), and (2) reject H0 with probability α when H0 is true, the following is necessary:
If zα is the upper α percentage point of the standard normal distribution, then
{\displaystyle \Pr({\bar {x}}>z_{\alpha }\sigma /{\sqrt {n}}\mid H_{0})=\alpha }
and so
'Reject H0 if our sample average x̄ is more than zασ/√n'
is a decision rule which satisfies (2). (This is a one-tailed test.) To satisfy (1), this rejection must also occur with probability at least 1 − β when Ha is true. Under Ha, the sample average comes from a Normal distribution with mean μ*. Thus, the requirement is expressed as:
{\displaystyle \Pr({\bar {x}}>z_{\alpha }\sigma /{\sqrt {n}}\mid H_{a})\geq 1-\beta }
Through careful manipulation, this can be shown (see Statistical power Example) to happen when
{\displaystyle n\geq \left({\frac {z_{\alpha }+\Phi ^{-1}(1-\beta )}{\mu ^{*}/\sigma }}\right)^{2}}
where Φ is the normal cumulative distribution function.
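This formula is straightforward to evaluate numerically. The following Python sketch uses SciPy's normal quantile function; the particular values of μ*, σ, α and β in the example call are assumptions for illustration.

```python
# One-sided z-test sample size: n >= ((z_alpha + Phi^{-1}(1 - beta)) / (mu_star / sigma))^2
from math import ceil
from scipy.stats import norm

def required_n(mu_star: float, sigma: float, alpha: float = 0.05, beta: float = 0.20) -> int:
    """Smallest n that detects a mean shift of mu_star with power 1 - beta (one-sided test)."""
    z_alpha = norm.ppf(1 - alpha)   # upper alpha percentage point of the standard normal
    z_beta = norm.ppf(1 - beta)     # Phi^{-1}(1 - beta)
    n = ((z_alpha + z_beta) / (mu_star / sigma)) ** 2
    return ceil(n)

# Assumed example: detect a shift of half a standard deviation (mu*/sigma = 0.5)
print(required_n(mu_star=0.5, sigma=1.0))   # about 25
```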
Stratified sample size
With more complicated sampling techniques, such as stratified sampling, the sample can often be split up into sub-samples. Typically, if there are H such sub-samples (from H different strata) then each of them will have a sample size nh, h = 1, 2, ..., H. These nh must conform to the rule that n1 + n2 + ... + nH = n (i.e., that the total sample size is given by the sum of the sub-sample sizes). Selecting these nh optimally can be done in various ways, using (for example) Neyman's optimal allocation.
There are many reasons to use stratified sampling: to decrease variances of sample estimates, to use partly non-random methods, or to study strata individually. A useful, partly non-random method would be to sample individuals where they are easily accessible and to sample clusters where they are not, in order to save travel costs.
In general, for H strata, a weighted sample mean is
{\displaystyle {\bar {x}}_{w}=\sum _{h=1}^{H}W_{h}{\bar {x}}_{h},}
with
{\displaystyle \operatorname {Var} ({\bar {x}}_{w})=\sum _{h=1}^{H}W_{h}^{2}\operatorname {Var} ({\bar {x}}_{h}).}
The weights, Wh, frequently, but not always, represent the proportions of the population elements in the strata, and Wh = Nh/N. For a fixed sample size, that is n = Σ nh,
{\displaystyle \operatorname {Var} ({\bar {x}}_{w})=\sum _{h=1}^{H}W_{h}^{2}\operatorname {Var} ({\bar {x}}_{h})\left({\frac {1}{n_{h}}}-{\frac {1}{N_{h}}}\right),}
which can be made a minimum if the sampling rate within each stratum is made proportional to the standard deviation within each stratum: nh/Nh = kSh, where Sh = √Var(x̄h) and k is a constant such that Σ nh = n.
An "optimum allocation" is reached when the sampling rates within the strata
are made directly proportional to the standard deviations within the strata
and inversely proportional to the square root of the sampling cost per element
within the strata,
C
h
{\displaystyle C_{h}}
:
{\displaystyle {\frac {n_{h}}{N_{h}}}={\frac {KS_{h}}{\sqrt {C_{h}}}},}
where K is a constant such that Σ nh = n, or, more generally, when
{\displaystyle n_{h}={\frac {K'W_{h}S_{h}}{\sqrt {C_{h}}}}.}
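A minimal Python sketch of this allocation rule follows; the stratum sizes, standard deviations and unit costs are made-up inputs for illustration, and with equal costs the rule reduces to Neyman allocation.

```python
# Cost-weighted optimum allocation: n_h proportional to W_h * S_h / sqrt(C_h),
# rescaled so that the stratum sample sizes sum to the fixed total n.
from math import sqrt

def optimum_allocation(N_h, S_h, C_h, n_total):
    """Per-stratum sample sizes (possibly fractional; round in practice)."""
    N = sum(N_h)
    raw = [Nh / N * Sh / sqrt(Ch) for Nh, Sh, Ch in zip(N_h, S_h, C_h)]
    scale = n_total / sum(raw)
    return [r * scale for r in raw]

# Assumed example: three strata with equal unit costs (i.e. Neyman allocation).
print(optimum_allocation(N_h=[5000, 3000, 2000],
                         S_h=[10.0, 20.0, 40.0],
                         C_h=[1.0, 1.0, 1.0],
                         n_total=500))
```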
Qualitative research
Qualitative research approaches sample size determination with a methodology that diverges from quantitative methods. Rather than relying on predetermined formulas or statistical calculations, researchers make a subjective, iterative judgment as the study proceeds. One common approach is to continue including additional participants or materials until a point of "saturation" is reached. Saturation occurs when new participants or data cease to provide fresh insights, indicating that the study has adequately captured the diversity of perspectives or experiences within the chosen sample. The number needed to reach saturation has been investigated empirically.
Unlike quantitative research, qualitative studies offer little reliable guidance for estimating sample size before the research begins, although a range of suggestions has been given. For example, when conducting in-depth interviews with cancer survivors, researchers may use data saturation to determine the appropriate sample size: if no fresh themes or insights emerge over a number of interviews, saturation has been reached and further interviews are unlikely to add much to the understanding of the survivors' experience. Thus, rather than following a preset statistical formula, the concept of attaining saturation serves as a dynamic guide for determining sample size in qualitative research. In an effort to introduce some structure to this process, a tool analogous to quantitative power calculations has been proposed. Based on the negative binomial distribution, it is particularly tailored for thematic analysis.
See also
Design of experiments
Engineering response surface example under Stepwise regression
Cohen's h
Receiver operating characteristic
References
General references
Bartlett, J. E. II; Kotrlik, J. W.; Higgins, C. (2001). "Organizational research: Determining appropriate sample size for survey research" (PDF). Information Technology, Learning, and Performance Journal. 19 (1): 43–50. Archived from the original (PDF) on 2009-03-06. Retrieved 2009-09-07.
Kish, L. (1965). Survey Sampling. Wiley. ISBN 978-0-471-48900-9.
Smith, Scott (8 April 2013). "Determining Sample Size: How to Ensure You Get the Correct Sample Size". Qualtrics. Retrieved 19 September 2018.
Israel, Glenn D. (1992). "Determining Sample Size". University of Florida, PEOD-6. Retrieved 29 June 2019.
Rens van de Schoot, Milica Miočević (eds.). 2020. Small Sample Size Solutions (Open Access): A Guide for Applied Researchers and Practitioners. Routledge.
Further reading
NIST: Selecting Sample Sizes
ASTM E122-07: Standard Practice for Calculating Sample Size to Estimate, With Specified Precision, the Average for a Characteristic of a Lot or Process
External links
A MATLAB script implementing Cochran's sample size formula
Sample Size Calculator for various statistical tests
Statulator for various statistical tests
| wiki::en::Power (statistics) | wiki | Power (statistics) | https://en.wikipedia.org/wiki/Power_(statistics) | en | [] |
In frequentist statistics, power is the probability of detecting an effect (i.e. rejecting the null hypothesis) with a given test in a given context, given that some prespecified effect actually exists. In typical use, it is a function of the specific test that is used (including the choice of test statistic and significance level), the sample size (more data tends to provide more power), and the effect size (effects or correlations that are large relative to the variability of the data tend to provide more power).
More formally, in the case of a simple hypothesis test with two hypotheses, the power of the test is the probability that the test correctly rejects the null hypothesis (H0) when the alternative hypothesis (H1) is true. It is commonly denoted by 1 − β, where β is the probability of making a type II error (a false negative) conditional on there being a true effect or association.
Background
Statistical testing uses data from samples to assess, or make inferences about, a statistical population. For example, we may measure the yields of samples of two varieties of a crop, and use a two-sample test to assess whether the mean values of this yield differ between varieties.
Under a frequentist hypothesis testing framework, this is done by calculating a test statistic (such as a t-statistic) for the dataset, which has a known theoretical probability distribution if there is no difference (the so-called null hypothesis). If the actual value calculated on the sample is sufficiently unlikely to arise under the null hypothesis, we say we identified a statistically significant effect.
The threshold for significance can be set small to ensure there is little chance of falsely detecting a non-existent effect. However, failing to identify a significant effect does not imply there was none. If we insist on being careful to avoid false positives, we may create false negatives instead. It may simply be too much to expect that we will be able to find satisfactorily strong evidence of a very subtle difference even if it exists. Statistical power is an attempt to quantify this issue.
In the case of the comparison of the two crop varieties, it enables us to answer questions like:
Is there a big danger of two very different varieties producing samples that just happen to look indistinguishable by pure chance?
How much effort do we need to put into this comparison to avoid that danger?
How different do these varieties need to be before we can expect to notice a difference?
Description
Suppose we are conducting a hypothesis test. We define two hypotheses: H0, the null hypothesis, and H1, the alternative hypothesis. If we design the test such that α is the significance level (α being the probability of rejecting H0 when H0 is in fact true), then the power of the test is 1 − β, where β is the probability of failing to reject H0 when the alternative H1 is true.
To make this more concrete, a typical statistical test would be based on a test statistic t calculated from the sampled data, which has a particular probability distribution under H0. A desired significance level α would then define a corresponding "rejection region" (bounded by certain "critical values"), a set of values t is unlikely to take if H0 was correct. If we reject H0 in favor of H1 only when the sample t takes those values, we would be able to keep the probability of falsely rejecting H0 within our desired significance level. At the same time, if H1 defines its own probability distribution for t (the difference between the two distributions being a function of the effect size), the power of the test would be the probability, under H1, that the sample t falls into our defined rejection region and causes H0 to be correctly rejected.
Statistical power is one minus the type II error probability and is also the sensitivity of the hypothesis testing procedure to detect a true effect. There is usually a trade-off between demanding more stringent tests (and so, smaller rejection regions) and trying to have a high probability of rejecting the null under the alternative hypothesis. Statistical power may also be extended to the case where multiple hypotheses are being tested based on an experiment or survey. It is thus also common to refer to the power of a study, evaluating a scientific project in terms of its ability to answer the research questions it seeks to answer.
Applications
The main application of statistical power is "power analysis", a calculation of power usually done before an experiment is conducted using data from pilot studies or a literature review. Power analyses can be used to calculate the minimum sample size required so that one can be reasonably likely to detect an effect of a given size (in other words, producing an acceptable level of power). For example: "How many times do I need to toss a coin to conclude it is rigged by a certain amount?" If resources and thus sample sizes are fixed, power analyses can also be used to calculate the minimum effect size that is likely to be detected.
Funding agencies, ethics boards and research review panels frequently request that a researcher perform a power analysis. An underpowered study is likely to be inconclusive, failing to allow one to choose between hypotheses at the desired significance level, while an overpowered study will spend great expense on being able to report significant effects even if they are tiny and so practically meaningless. If a large number of underpowered studies are done and statistically significant results published, published findings are more likely to be false positives than true results, contributing to a replication crisis. However, excessive demands for power could be connected to wasted resources and ethical problems, for example the use of a large number of animal test subjects when a smaller number would have been sufficient. It could also induce researchers trying to seek funding to overstate their expected effect sizes, or avoid looking for more subtle interaction effects that cannot be easily detected.
Power analysis is primarily a frequentist statistics tool. In Bayesian statistics, hypothesis testing of the type used in classical power analysis is not done. In the Bayesian framework, one updates his or her prior beliefs using the data obtained in a given study. In principle, a study that would be deemed underpowered from the perspective of hypothesis testing could still be used in such an updating process. However, power remains a useful measure of how much a given experiment size can be expected to refine one's beliefs. A study with low power is unlikely to lead to a large change in beliefs.
In addition, the concept of power is used to make comparisons between different statistical testing procedures: for example, between a parametric test and a nonparametric test of the same hypothesis. Tests may have the same size, and hence the same false positive rates, but different ability to detect true effects. Consideration of their theoretical power properties is a key reason for the common use of likelihood ratio tests.
Rule of thumb for t-test
Lehr's (rough) rule of thumb says that the sample size n (for each group), for the common case of a two-sided two-sample t-test with power 80% (β = 0.2) and significance level α = 0.05, should be:
{\displaystyle n\approx 16{\frac {s^{2}}{d^{2}}},}
where s² is an estimate of the population variance and d = μ1 − μ2 the to-be-detected difference in the mean values of both samples. This expression can be rearranged, implying for example that 80% power is obtained when looking for a difference in means that exceeds about 4 times the group-wise standard error of the mean.
For a one-sample t-test, 16 is to be replaced with 8. Other values provide an appropriate approximation when the desired power or significance level is different.
However, a full power analysis should always be performed to confirm and refine this estimate.
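A small Python sketch of Lehr's rule follows; the variance and mean difference used in the example call are assumed values for illustration, not taken from the article.

```python
# Lehr's rule of thumb: per-group n ~= 16 * s^2 / d^2 for a two-sided two-sample t-test
# at alpha = 0.05 with 80% power; replace 16 with 8 for a one-sample test.
from math import ceil

def lehr_n_per_group(s2: float, d: float, one_sample: bool = False) -> int:
    """Rough per-group sample size from an estimated variance s2 and target difference d."""
    factor = 8 if one_sample else 16
    return ceil(factor * s2 / d ** 2)

# Assumed example: population variance ~ 4, difference to detect = 1
print(lehr_n_per_group(s2=4.0, d=1.0))   # 64 per group
```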
Factors influencing power
Statistical power may depend on a number of factors. Some factors may be particular to a specific testing situation, but in normal use, power depends on the following three aspects that can be potentially controlled by the practitioner:
the test itself and the statistical significance criterion used
the magnitude of the effect of interest
the size and variability of the sample used to detect the effect
For a given test, the significance criterion determines the desired degree of rigor, specifying how unlikely it is for the null hypothesis of no effect to be rejected if it is in fact true. The most commonly used threshold is a probability of rejection of 0.05, though smaller values like 0.01 or 0.001 are sometimes used. This threshold then implies that the observation must be at least that unlikely (perhaps by suggesting a sufficiently large estimate of difference) to be considered strong enough evidence against the null. Picking a smaller value to tighten the threshold, so as to reduce the chance of a false positive, would also reduce power (and so increase the chance of a false negative). Some statistical tests will inherently produce better power, albeit often at the cost of requiring stronger assumptions.
The magnitude of the effect of interest defines what is being looked for by the test. It can be the expected effect size if it exists, as a scientific hypothesis that the researcher has arrived at and wishes to test. Alternatively, in a more practical context it could be determined by the size the effect must be to be useful, for example that which is required to be clinically significant. An effect size can be a direct value of the quantity of interest (for example, a difference in mean of a particular size), or it can be a standardized measure that also accounts for the variability in the population (such as a difference in means expressed as a multiple of the standard deviation). If the researcher is looking for a larger effect, then it should be easier to find with a given experimental or analytic setup, and so power is higher.
The nature of the sample underlies the information being used in the test. This will usually involve the sample size, and the sample variability, if that is not implicit in the definition of the effect size. More broadly, the precision with which the data are measured can also be an important factor (such as the statistical reliability), as well as the design of an experiment or observational study. Ultimately, these factors lead to an expected amount of sampling error. A smaller sampling error could be obtained with larger sample sizes from a less variable population, from more accurate measurements, or from more efficient experimental designs (for example, with the appropriate use of blocking), and such smaller errors would lead to improved power, albeit usually at a cost in resources. How increased sample size translates to higher power is a measure of the efficiency of the test—for example, the sample size required for a given power.
Discussion
The statistical power of a hypothesis test has an impact on the interpretation of its results. Not finding a result with a more powerful study is stronger evidence against the effect existing than the same finding with a less powerful study. However, this is not completely conclusive. The effect may exist, but be smaller than what was looked for, meaning the study is in fact underpowered and the sample is thus unable to distinguish it from random chance. Many clinical trials, for instance, have low statistical power to detect differences in adverse effects of treatments, since such effects may only affect a few patients, even if this difference can be important. Conclusions about the probability of actual presence of an effect also should consider more things than a single test, especially as real world power is rarely close to 1.
Indeed, although there are no formal standards for power, many researchers and funding bodies assess power using 0.80 (or 80%) as a standard for adequacy. This convention implies a four-to-one trade-off between β-risk and α-risk, as the probability of a type II error β is set as 1 - 0.8 = 0.2, while α, the probability of a type I error, is commonly set at 0.05. Some applications require much higher levels of power. Medical tests may be designed to minimise the number of false negatives (type II errors) produced by loosening the threshold of significance, raising the risk of obtaining a false positive (a type I error). The rationale is that it is better to tell a healthy patient "we may have found something—let's test further," than to tell a diseased patient "all is well."
Power analysis focuses on the correct rejection of a null hypothesis. Alternative concerns may however motivate an experiment, and so lead to different needs for sample size. In many contexts, the issue is less about deciding between hypotheses than about getting an estimate of the population effect size of sufficient accuracy. For example, a careful power analysis can tell you that 55 pairs of normally distributed samples with a correlation of 0.5 will be sufficient to grant 80% power in rejecting a null that the correlation is no more than 0.2 (using a one-sided test, α = 0.05). But the typical 95% confidence interval with this sample would be around [0.27, 0.67]. An alternative, albeit related analysis would be required if we wish to be able to measure correlation to an accuracy of +/- 0.1, implying a different (in this case, larger) sample size. Alternatively, multiple under-powered studies can still be useful, if appropriately combined through a meta-analysis.
Many statistical analyses involve the estimation of several unknown quantities. In simple cases, all but one of these quantities are nuisance parameters. In this setting, the only relevant power pertains to the single quantity that will undergo formal statistical inference. In some settings, particularly if the goals are more "exploratory", there may be a number of quantities of interest in the analysis. For example, in a multiple regression analysis we may include several covariates of potential interest. In situations such as this where several hypotheses are under consideration, it is common that the powers associated with the different hypotheses differ. For instance, in multiple regression analysis, the power for detecting an effect of a given size is related to the variance of the covariate. Since different covariates will have different variances, their powers will differ as well.
Additional complications arise when we consider these multiple hypotheses together. For example, if we consider a false positive to be making an erroneous null rejection on any one of these hypotheses, our likelihood of this "family-wise error" will be inflated if appropriate measures are not taken. Such measures typically involve applying a higher threshold of stringency to reject a hypothesis (such as with the Bonferroni method), and so would reduce power. Alternatively, there may be different notions of power connected with how the different hypotheses are considered. "Complete power" demands that all true effects are detected across all of the hypotheses, which is a much stronger requirement than the "minimal power" of being able to find at least one true effect, a type of power that might increase with an increasing number of hypotheses.
A priori vs. post hoc analysis
Power analysis can either be done before (a priori or prospective power analysis) or after (post hoc or retrospective power analysis) data are collected. A priori power analysis is conducted prior to the research study, and is typically used in estimating sufficient sample sizes to achieve adequate power. Post-hoc analysis of "observed power" is conducted after a study has been completed, and uses the obtained sample size and effect size to determine what the power was in the study, assuming the effect size in the sample is equal to the effect size in the population. Whereas the utility of prospective power analysis in experimental design is universally accepted, post hoc power analysis is controversial. Many statisticians have argued that post-hoc power calculations are misleading and essentially meaningless.
Example
The following is an example that shows how to compute power for a randomized experiment: Suppose the goal of an experiment is to study the effect of a treatment on some quantity, and so we shall compare research subjects by measuring the quantity before and after the treatment, analyzing the data using a one-sided paired t-test, with a significance level threshold of 0.05. We are interested in being able to detect a positive change of size θ > 0.
We first set up the problem according to our test. Let Ai and Bi denote the pre-treatment and post-treatment measures on subject i, respectively. The possible effect of the treatment should be visible in the differences
{\displaystyle D_{i}=B_{i}-A_{i},}
which are assumed to be independent and identically Normal in distribution, with unknown mean value μD and variance σD².
Here, it is natural to choose our null hypothesis to be that the expected mean difference is zero, i.e.
{\displaystyle H_{0}:\mu _{D}=\mu _{0}=0.}
For our one-sided test, the alternative hypothesis would be that there is a positive effect, corresponding to
{\displaystyle H_{1}:\mu _{D}=\theta >0.}
The test statistic in this case is defined as:
{\displaystyle T_{n}={\frac {{\bar {D}}_{n}-\mu _{0}}{{\hat {\sigma }}_{D}/{\sqrt {n}}}}={\frac {{\bar {D}}_{n}-0}{{\hat {\sigma }}_{D}/{\sqrt {n}}}},}
where μ0 is the mean under the null (so we substitute in 0), n is the sample size (number of subjects), D̄n is the sample mean of the differences,
{\displaystyle {\bar {D}}_{n}={\frac {1}{n}}\sum _{i=1}^{n}D_{i},}
and σ̂D is the sample standard deviation of the differences.
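Since the paired design reduces to a one-sample t-test on the differences, the statistic can be computed directly. The following Python sketch simulates data under assumed values of θ and σD (not given at this point in the article) and evaluates Tn both by hand and with SciPy.

```python
# Paired t-test viewed as a one-sample test on the differences D_i = B_i - A_i.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
theta, sigma_D, n = 1.0, 2.0, 25               # assumed effect size, SD and sample size
D = rng.normal(theta, sigma_D, size=n)          # simulated post-minus-pre differences

t_stat = D.mean() / (D.std(ddof=1) / np.sqrt(n))   # T_n as defined above, with mu_0 = 0
# The same statistic via SciPy, with a one-sided p-value for H1: mu_D > 0:
res = stats.ttest_1samp(D, popmean=0.0, alternative="greater")
print(t_stat, res.statistic, res.pvalue)
```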
Analytic solution
We can proceed according to our knowledge of statistical theory, though in practice for a standard case like this software will exist to compute more accurate answers.
Thanks to t-test theory, we know this test statistic under the null hypothesis follows a Student t-distribution with n − 1 degrees of freedom. If we wish to reject the null at significance level α = 0.05, we must find the critical value tα such that the probability of Tn > tα under the null is equal to α. If n is large, the t-distribution converges to the standard normal distribution (thus no longer involving n), and so through use of the corresponding quantile function Φ−1, we obtain that the null should be rejected if
{\displaystyle T_{n}>t_{\alpha }\approx \Phi ^{-1}(0.95)\approx 1.64\,.}
Now suppose that the alternative hypothesis H1 is true, so μD = θ. Then, writing the power as a function of the effect size, B(θ), we find the probability of Tn being above tα under H1:
{\displaystyle {\begin{aligned}B(\theta )&\approx \Pr \left(T_{n}>1.64~{\big |}~\mu _{D}=\theta \right)\\&=\Pr \left({\frac {{\bar {D}}_{n}-0}{{\hat {\sigma }}_{D}/{\sqrt {n}}}}>1.64~{\Big |}~\mu _{D}=\theta \right)\\&=1-\Pr \left({\frac {{\bar {D}}_{n}-0}{{\hat {\sigma }}_{D}/{\sqrt {n}}}}<1.64~{\Big |}~\mu _{D}=\theta \right)\\&=1-\Pr \left({\frac {{\bar {D}}_{n}-\theta }{{\hat {\sigma }}_{D}/{\sqrt {n}}}}<1.64-{\frac {\theta }{{\hat {\sigma }}_{D}/{\sqrt {n}}}}~{\Big |}~\mu _{D}=\theta \right)\\\end{aligned}}}
The quantity
{\displaystyle {\frac {{\bar {D}}_{n}-\theta }{{\hat {\sigma }}_{D}/{\sqrt {n}}}}}
again follows a Student t-distribution under H1, converging to a standard normal distribution for large n. The estimated σ̂D will also converge to its population value σD. Thus power can be approximated as
{\displaystyle B(\theta )\approx 1-\Phi \left(1.64-{\frac {\theta }{\sigma _{D}/{\sqrt {n}}}}\right).}
According to this formula, the power increases with the effect size θ and the sample size n, and reduces with increasing variability σD. In the trivial case of zero effect size, power is at a minimum (infimum) and equal to the significance level of the test α, in this example 0.05. For finite sample sizes and non-zero variability, it is the case here, as is typical, that power cannot be made equal to 1 except in the trivial case where α = 1, so that the null is always rejected.
We can invert B to obtain required sample sizes:
{\displaystyle {\sqrt {n}}>{\frac {\sigma _{D}}{\theta }}\left(1.64-\Phi ^{-1}\left(1-B(\theta )\right)\right).}
Suppose θ = 1 and we believe σD is around 2, say; then, for a power of B(θ) = 0.8, we require a sample size
{\displaystyle n>4\left(1.64-\Phi ^{-1}\left(1-0.8\right)\right)^{2}\approx 4\left(1.64+0.84\right)^{2}\approx 24.6.}
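The same arithmetic can be reproduced with a short Python sketch of the normal approximation above. The values θ = 1 and σD = 2 are the assumed ones from this example; exact normal quantiles give roughly 24.7 rather than the rounded 24.6.

```python
# Normal-approximation power and sample size for the one-sided paired example.
from math import sqrt
from scipy.stats import norm

def approx_power(theta: float, sigma_D: float, n: int, alpha: float = 0.05) -> float:
    """B(theta) ~ 1 - Phi(z_alpha - theta / (sigma_D / sqrt(n)))."""
    z_alpha = norm.ppf(1 - alpha)                       # ~1.64
    return 1 - norm.cdf(z_alpha - theta / (sigma_D / sqrt(n)))

def approx_n(theta: float, sigma_D: float, power: float = 0.8, alpha: float = 0.05) -> float:
    """Required n from inverting B, i.e. (sigma_D/theta)^2 (z_alpha - Phi^{-1}(1 - power))^2."""
    z_alpha = norm.ppf(1 - alpha)
    return (sigma_D / theta) ** 2 * (z_alpha - norm.ppf(1 - power)) ** 2

print(approx_n(theta=1.0, sigma_D=2.0))              # ~24.7, so n = 25
print(approx_power(theta=1.0, sigma_D=2.0, n=25))    # ~0.80
```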
Simulation solution
Alternatively, we can use a Monte Carlo simulation method that works more generally. Once again, we return to the assumption of the distribution of Dn and the definition of Tn. Suppose we have fixed values of the sample size, variability and effect size, and wish to compute power. We can adopt this process:
1. Generate a large number of sets of Dn according to the null hypothesis, N(0, σD).
2. Compute the resulting test statistic Tn for each set.
3. Compute the (1 − α)th quantile of the simulated Tn and use that as an estimate of tα.
4. Now generate a large number of sets of Dn according to the alternative hypothesis, N(θ, σD), and compute the corresponding test statistics again.
5. Look at the proportion of these simulated alternative Tn that are above the tα calculated in step 3 and so are rejected. This is the power.
This can be done with a variety of software packages. Using this methodology with the values before, setting the sample size to 25 leads to an estimated power of around 0.78. The small discrepancy with the previous section is due mainly to inaccuracies with the normal approximation.
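A minimal Python sketch of steps 1 to 5 follows; the replicate count and random seed are arbitrary choices, and the parameter values are those assumed in the example above.

```python
# Monte Carlo power estimate for the one-sided paired t-test example.
import numpy as np

def simulated_power(theta=1.0, sigma_D=2.0, n=25, alpha=0.05, reps=100_000, seed=0):
    rng = np.random.default_rng(seed)

    def t_stats(mean):
        # Each row is one simulated set of n differences; return its t statistic.
        D = rng.normal(mean, sigma_D, size=(reps, n))
        return D.mean(axis=1) / (D.std(axis=1, ddof=1) / np.sqrt(n))

    t_null = t_stats(0.0)                       # steps 1-2: statistics under H0
    t_crit = np.quantile(t_null, 1 - alpha)     # step 3: simulated critical value
    t_alt = t_stats(theta)                      # step 4: statistics under H1
    return np.mean(t_alt > t_crit)              # step 5: rejection rate = power

print(simulated_power())   # roughly 0.78, consistent with the value quoted above
```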
Power in different disciplines
Several studies have attempted to estimate typical levels of statistical power across different academic fields. One common approach uses meta-analyses to assess whether individual studies have sufficient power to detect the average effect size estimated from the meta-analysis itself. This method essentially asks: how likely is each study to detect the consensus effect found in the broader literature? These assessments consistently find low levels of statistical power across many disciplines. For example, using this method median power is 18% in economics, 10% in political science, 36% in psychology, and 15% in ecology and evolutionary biology.
Extension
Bayesian power
In the frequentist setting, parameters are assumed to take a single specific value, which is unlikely to be exactly true. This issue can be addressed by assuming the parameter has a distribution. The resulting power is sometimes referred to as Bayesian power, which is commonly used in clinical trial design.
Predictive probability of success
Both frequentist power and Bayesian power use statistical significance as the success criterion. However, statistical significance is often not enough to define success. To address this issue, the power concept can be extended to the concept of predictive probability of success (PPOS). The success criterion for PPOS is not restricted to statistical significance and is commonly used in clinical trial designs.
Software for power and sample size calculations
Numerous free and/or open source programs are available for performing power and sample size calculations. These include
G*Power (https://www.gpower.hhu.de/)
WebPower Free online statistical power analysis (https://webpower.psychstat.org)
Free and open source online calculators (https://powerandsamplesize.com)
PowerUp! provides Excel-based functions to determine minimum detectable effect size and minimum required sample size for various experimental and quasi-experimental designs.
PowerUpR is R package version of PowerUp! and additionally includes functions to determine sample size for various multilevel randomized experiments with or without budgetary constraints.
R package pwr (https://cran.r-project.org/web/packages/pwr/)
R package WebPower (https://cran.r-project.org/web/packages/WebPower/index.html)
R package Spower (https://cran.r-project.org/web/packages/Spower/index.html) for general-purpose power analyses using simulation experiments
Python package statsmodels (https://www.statsmodels.org/)
See also
Positive and negative predictive values – Statistical measures of whether a finding is likely to be true
Effect size – Statistical measure of the magnitude of a phenomenon
Efficiency – Quality measure of a statistical method
Neyman–Pearson lemma – Theorem about the power of the likelihood ratio test
Sample size – Statistical considerations on how many observations to make
Uniformly most powerful test – Theoretically optimal hypothesis test
References
Sources
Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Lawrence Erlbaum Associates. ISBN 0-8058-0283-5.
Aberson, C.L. (2010). Applied Power Analysis for the Behavioral Science. Routledge. ISBN 978-1-84872-835-6.
External links
StatQuest: P-value pitfalls and power calculations on YouTube
| wiki::en::Equivalence test | wiki | Equivalence test | https://en.wikipedia.org/wiki/Equivalence_test | en | [] |
Equivalence tests are a variety of hypothesis tests used to draw statistical inferences from observed data. In these tests, the null hypothesis is defined as an effect large enough to be deemed interesting, specified by an equivalence bound. The alternative hypothesis is any effect that is less extreme than said equivalence bound. The observed data are statistically compared against the equivalence bounds. If the statistical test indicates the observed data is surprising, assuming that true effects are at least as extreme as the equivalence bounds, a Neyman-Pearson approach to statistical inferences can be used to reject effect sizes larger than the equivalence bounds with a pre-specified Type 1 error rate.
Equivalence testing originates from the field of clinical trials. One application, known as a non-inferiority trial, is used to show that a new drug that is cheaper than available alternatives works as well as an existing drug. In essence, equivalence tests consist of calculating a confidence interval around an observed effect size and rejecting effects more extreme than the equivalence bound when the confidence interval does not overlap with the equivalence bound. In two-sided tests, both upper and lower equivalence bounds are specified. In non-inferiority trials, where the goal is to test the hypothesis that a new treatment is not worse than existing treatments, only a lower equivalence bound is specified.
Equivalence tests can be performed in addition to null-hypothesis significance tests. This might prevent common misinterpretations of p-values larger than the alpha level as support for the absence of a true effect. Furthermore, equivalence tests can identify effects that are statistically significant but practically insignificant, whenever effects are statistically different from zero, but also statistically smaller than any effect size deemed worthwhile (see the first figure). Equivalence tests were originally used in areas such as pharmaceutics, frequently in bioequivalence trials. However, these tests can be applied to any instance where the research question asks whether the means of two sets of scores are practically or theoretically equivalent. Equivalence tests have recently been introduced in evaluation of measurement devices, artificial intelligence, exercise physiology and sports science, political science, psychology, and economics. Several tests exist for equivalence analyses; however, more recently the two-one-sided t-tests (TOST) procedure has been garnering considerable attention. As outlined below, this approach is an adaptation of the widely known t-test.
TOST procedure
A very simple equivalence testing approach is the 'two one-sided t-tests' (TOST) procedure. In the TOST procedure an upper (ΔU) and lower (–ΔL) equivalence bound is specified based on the smallest effect size of interest (e.g., a positive or negative difference of d = 0.3). Two composite null hypotheses are tested: H01: Δ ≤ –ΔL and H02: Δ ≥ ΔU. When both of these one-sided tests can be statistically rejected, we can conclude that –ΔL < Δ < ΔU, that is, the observed effect falls within the equivalence bounds, is statistically smaller than any effect deemed worthwhile, and is considered practically equivalent. Alternatives to the TOST procedure have been developed as well. A recent modification to TOST makes the approach feasible in cases of repeated measures and assessing multiple variables.
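A hedged Python sketch of the TOST idea for two independent means follows; the equivalence bounds, sample sizes and simulated data are illustrative assumptions, and dedicated implementations in statistical packages would normally be preferred in practice.

```python
# Two one-sided t-tests (TOST) for equivalence of two independent means,
# with equivalence bounds given on the raw mean-difference scale.
import numpy as np
from scipy import stats

def tost_ind(x, y, low, upp):
    """Return (p_lower, p_upper); equivalence is claimed if both are below alpha."""
    x, y = np.asarray(x), np.asarray(y)
    nx, ny = len(x), len(y)
    diff = x.mean() - y.mean()
    # Pooled-variance standard error, as in the ordinary two-sample t-test.
    sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    se = np.sqrt(sp2 * (1 / nx + 1 / ny))
    df = nx + ny - 2
    p_lower = stats.t.sf((diff - low) / se, df)   # H01: diff <= low, rejected if diff is well above low
    p_upper = stats.t.cdf((diff - upp) / se, df)  # H02: diff >= upp, rejected if diff is well below upp
    return p_lower, p_upper

# Assumed example: two near-equivalent groups and bounds of +/- 0.5 raw units.
rng = np.random.default_rng(1)
a, b = rng.normal(0.0, 1.0, 200), rng.normal(0.05, 1.0, 200)
print(tost_ind(a, b, low=-0.5, upp=0.5))   # typically both p-values are small -> conclude equivalence
```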
Comparison between t-test and equivalence test
The equivalence test can be induced from the t-test. Consider a t-test at the significance level αt-test with a power of 1−βt-test for a relevant effect size dr. If Δ = dr, and the error rates are interchanged between the two tests so that αequiv.-test = βt-test and βequiv.-test = αt-test (i.e. the type I and type II errors swap roles), then the t-test will obtain the same results as the equivalence test. To achieve this for the t-test, either the sample size calculation needs to be carried out correctly, or the t-test significance level αt-test needs to be adjusted, yielding the so-called revised t-test. Both approaches have difficulties in practice, since sample size planning relies on unverifiable assumptions about the standard deviation, and the revised t-test yields numerical problems. Preserving the test behavior, these limitations can be removed by using an equivalence test.
A visual comparison of the equivalence test and the t-test is instructive when the sample size calculation is affected by differences between the a priori standard deviation σ and the sample's standard deviation σ̂, which is a common problem. Using an equivalence test instead of a t-test additionally ensures that αequiv.-test is bounded, which the t-test does not do in the case that σ̂ > σ, with the type II error growing arbitrarily large. On the other hand, having σ̂ < σ results in the t-test being stricter than the dr specified in the planning, which may randomly penalize the sample source (e.g., a device manufacturer). This makes the equivalence test safer to use.
See also
Bootstrap (statistics)-based testing
Literature
The papers below are good introductions to equivalence testing.
Westlake, W. J. (1976). "Symmetrical confidence intervals for bioequivalence trials". Biometrics. 32 (4): 741–744. doi:10.2307/2529265. JSTOR 2529265.
Berger, Roger L.; Hsu, Jason C. (1996). "Bioequivalence trials, intersection-union tests and equivalence confidence sets". Statistical Science. 11 (4): 283–319. doi:10.1214/ss/1032280304.
Walker, Esteban; Nowacki, Amy S. (2011). "Understanding Equivalence and Noninferiority Testing". Journal of General Internal Medicine. 26 (2): 192–196. doi:10.1007/s11606-010-1513-8. PMC 3019319. PMID 20857339.
Rainey, Carlisle (2014). "Arguing for a Negligible Effect" (PDF). American Journal of Political Science. 58 (4): 1083–1091. doi:10.1111/ajps.12102. Retrieved 2025-06-01.
Lakens, Daniël (2017). "Equivalence Tests: A Practical Primer for t Tests, Correlations, and Meta-Analyses". Social Psychological and Personality Science. 8 (4): 355–362. doi:10.1177/1948550617697177. PMC 5502906.
Lakens, Daniël; Isager, P. M.; Scheel, A. M. (2018). "Equivalence Testing for Psychological Research: A Tutorial". Advances in Methods and Practices in Psychological Science. 1 (2): 259–269. doi:10.1177/2515245918770963.
Fitzgerald, Jack (2025). "The Need for Equivalence Testing in Economics". MetaArXiv. Retrieved 2025-06-01.
An applied introduction to equivalence testing appears in Section 4.2 of Vincent Arel-Bundock’s open-access book Model to Meaning.
References
| wiki::en::Multi-armed bandit | wiki | Multi-armed bandit | https://en.wikipedia.org/wiki/Multi-armed_bandit | en | [] | "In probability theory and machine learning, the multi-armed bandit problem (sometimes called the K-(...TRUNCATED)
| wiki::en::Thompson sampling | wiki | Thompson sampling | https://en.wikipedia.org/wiki/Thompson_sampling | en | [] | "Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addres(...TRUNCATED)
| wiki::en::Randomized controlled trial | wiki | Randomized controlled trial | https://en.wikipedia.org/wiki/Randomized_controlled_trial | en | [] | "A randomized controlled trial (abbreviated RCT) is a type of scientific experiment designed to eval(...TRUNCATED)
| wiki::en::Scientific control | wiki | Scientific control | https://en.wikipedia.org/wiki/Scientific_control | en | [] | "A scientific control is an element of an experiment or observation designed to minimize the influen(...TRUNCATED)
Experiment Brief — Open Corpus (Wikipedia, FR+EN)
Purpose: a filtered public corpus (A/B testing, SRM, CUPED, sequential, guardrails…) for a RAG assistant that helps draft and validate experiment briefs with citations.
Splits: wiki_en, wiki_fr
Schema:
id (str), source_type (str), title (str), url (str), language (str), year (str), topics (list[str]), text (str)
License: Wikipedia content (CC-BY-SA 3.0/4.0).
Usage: FAISS indexing + retrieval (k=5), “sources-only” answers.
Limitations: generic definitions (few in-depth practical cases).
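As a rough illustration of the usage described above, the following Python sketch builds a FAISS index over the text column and retrieves k=5 passages. The dataset repository id and the embedding model are hypothetical placeholders, not part of this card, and the sketch assumes the datasets, sentence-transformers and faiss-cpu packages are installed.

```python
# Hypothetical retrieval sketch for this corpus; replace "user/experiment-brief-corpus"
# and the embedding model with whatever is actually used.
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")   # assumed encoder
ds = load_dataset("user/experiment-brief-corpus", split="wiki_en")       # hypothetical repo id

# Embed each record's text and build an in-memory FAISS index over the embeddings.
ds = ds.map(lambda row: {"embedding": model.encode(row["text"])})
ds.add_faiss_index(column="embedding")

query = "How do I size an A/B test for 80% power?"
scores, hits = ds.get_nearest_examples("embedding", model.encode(query), k=5)
for title, url in zip(hits["title"], hits["url"]):
    print(title, url)   # cite the retrieved sources in the brief ("sources-only" answers)
```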
Downloads last month: 41
Size of downloaded dataset files: 154 kB
Size of the auto-converted Parquet files: 154 kB
Number of rows: 18