• Search Menu
  • Sign in through your institution
  • Advance Articles
  • Author Guidelines
  • Open Access Policy
  • Self-Archiving Policy
  • About Significance
  • About The Royal Statistical Society
  • Editorial Board
  • Advertising & Corporate Services
  • Journals on Oxford Academic
  • Books on Oxford Academic

Issue Cover

Article Contents

What is randomisation, why do we randomise, choosing a randomisation method, implementing the chosen randomisation method.

  • < Previous

Randomisation: What, Why and How?

  • Article contents
  • Figures & tables
  • Supplementary Data

Zoë Hoare, Randomisation: What, Why and How?, Significance , Volume 7, Issue 3, September 2010, Pages 136–138, https://doi.org/10.1111/j.1740-9713.2010.00443.x

  • Permissions Icon Permissions

Randomisation is a fundamental aspect of randomised controlled trials, but how many researchers fully understand what randomisation entails or what needs to be taken into consideration to implement it effectively and correctly? Here, for students or for those about to embark on setting up a trial, Zoë Hoare gives a basic introduction to help approach randomisation from a more informed direction.

Most trials of new medical treatments, and most other trials for that matter, now implement some form of randomisation. The idea sounds so simple that defining it becomes almost a joke: randomisation is “putting participants into the treatment groups randomly”. If only it were that simple. Randomisation can be a minefield, and not everyone understands what exactly it is or why they are doing it.

A key feature of a randomised controlled trial is that it is genuinely not known whether the new treatment is better than what is currently offered. The researchers should be in a state of equipoise; although they may hope that the new treatment is better, there is no definitive evidence to back this hypothesis up. This evidence is what the trial is trying to provide.

You will have, at its simplest, two groups: patients who are getting the new treatment, and those getting the control or placebo. You do not hand-select which patient goes into which group, because that would introduce selection bias. Instead you allocate your patients randomly. In its simplest form this can be done by the tossing of a fair coin: heads, the patient gets the trial treatment; tails, he gets the control. Simple randomisation is a fair way of ensuring that any differences that occur between the treatment groups arise completely by chance. But – and this is the first but of many here – simple randomisation can lead to unbalanced groups, that is, groups of unequal size. This is particularly true if the trial is only small. For example, tossing a fair coin 10 times will only result in five heads and five tails about 25% of the time. We would have a 66% chance of getting 6 heads and 4 tails, 5 and 5, or 4 and 6; 33% of the time we would get an even larger imbalance, with 7, 8, 9 or even all 10 patients in one group and the other group correspondingly undersized.

The impact of an imbalance like this is far greater for a small trial than for a larger trial. Tossing a fair coin 100 times will result in imbalance larger than 60–40 less than 1% of the time. One important part of the trial design process is the statement of intention of using randomisation; then we need to establish which method to use, when it will be used, and whether or not it is in fact random.

Randomisation needs to be controlled: You would not want all the males under 30 to be in one trial group and all the women over 70 in the other

It is partly true to say that we do it because we have to. The Consolidated Standards of Reporting Trials (CONSORT) 1 , to which we should all adhere, tells us: “Ideally, participants should be assigned to comparison groups in the trial on the basis of a chance (random) process characterized by unpredictability.” The requirement is there for a reason. Randomisation of the participants is crucial because it allows the principles of statistical theory to stand and as such allows a thorough analysis of the trial data without bias. The exact method of randomisation can have an impact on the trial analyses, and this needs to be taken into account when writing the statistical analysis plan.

Ideally, simple randomisation would always be the preferred option. However, in practice there often needs to be some control of the allocations to avoid severe imbalances within treatments or within categories of patient. You would not want, for example, all the males under 30 to be in one group and all the females over 70 in the other. This is where restricted or stratified randomisation comes in.

Restricted randomisation relates to using any method to control the split of allocations to each of the treatment groups based on certain criteria. This can be as simple as generating a random list, such as AAABBBABABAABB …, and allocating each participant as they arrive to the next treatment on the list. At certain points within the allocations we know that the groups will be balanced in numbers – here at the sixth, eighth, tenth and 14th participants – and we can control the maximum imbalance at any one time.

Stratified randomisation sets out to control the balance in certain baseline characteristics of the participants – such as sex or age. This can be thought of as producing an individual randomisation list for each of the characteristics concerned.

© iStockphoto.com/dra_schwartz

© iStockphoto.com/dra_schwartz

Stratification variables are the baseline characteristics that you think might influence the outcome your trial is trying to measure. For example, if you thought gender was going to have an effect on the efficacy of the treatment then you would use it as one of your stratification variables. A stratified randomisation procedure would aim to ensure a balance of the two gender groups between the two treatment groups.

If you also thought age would be affecting the treatment then you could also stratify by age (young/old) with some sensible limits on what old and young are. Once you start stratifying by age and by gender, you have to start taking care. You will need to use a stratified randomisation process that balances at the stratum level (i.e. at the level of those characteristics) to ensure that all four strata (male/young, male/old, female/young and female/old) have equivalent numbers of each of the treatment groups represented.

“Great”, you might think. “I'll just stratify by all my baseline characteristics!” Better not. Stop and consider what this would mean. As the number of stratification variables increases linearly, the number of strata increases exponentially. This reduces the number of participants that would appear in each stratum. In our example above, with our two stratification variables of age and sex we had four strata; if we added, say “blue-eyed” and “overweight” to our criteria to give four stratification variables each with just two levels we would get 16 represented strata. How likely is it that each of those strata will be represented in the population targeted by the trial? In other words, will we be sure of finding a blue-eyed young male who is also overweight among our patients? And would one such overweight possible Adonis be statistically enough? It becomes evident that implementing pre-generated lists within each stratification level or stratum and maintaining an overall balance of group sizes becomes much more complicated with many stratification variables and the uncertainty of what type of participant will walk through the door next.

Does it matter? There are a wide variety of methods for randomisation, and which one you choose does actually matter. It needs to be able to do everything that is required of it. Ask yourself these questions, and others:

Can the method accommodate enough treatment groups? Some methods are limited to two treatment groups; many trials involve three or more.

What type of randomness, if any, is injected into the method? The level of randomness dictates how predictable a method is.

A deterministic method has no randomness, meaning that with all the previous information you can tell in advance which group the next patient to appear will be allocated to. Allocating alternate participants to the two treatments using ABABABABAB … would be an example.

A static random element means that each allocation is made with a pre-defined probability. The coin-toss method does this.

With a dynamic element the probability of allocation is always changing in relation to the information received, meaning that the probability of allocation can only be worked out with knowledge of the algorithm together with all its settings. A biased coin toss does this where the bias is recalculated for each participant.

Can the method accommodate stratification variables, and if so how many? Not all of them can. And can it cope with continuous stratification variables? Most variables are divided into mutually exclusive categories (e.g. male or female), but sometimes it may be necessary (or preferable) to use a continuous scale of the variable – such as weight, or body mass index.

Can the method use an unequal allocation ratio? Not all trials require equal-sized treatment groups. There are many reasons why it might be wise to have more patients receiving treatment A than treatment B 2 . However, an allocation ratio being something other than 1:1 does impact on the study design and on the calculation of the sample size, so is not something to be changing mid-trial. Not all allocation methods can cope with this inequality.

Is thresholding used in the method? Thresholding handles imbalances in allocation. A threshold is set and if the imbalance becomes greater than the threshold then the allocation becomes deterministic to reduce the imbalance back below the threshold.

Can the method be implemented sequentially? In other words, does it require that the total number of participants be known at the beginning of the allocations? Some methods generate lists requiring exactly N participants to be recruited in order to be effective – and recruiting participants is often one of the more problematic parts of a trial.

Is the method complex? If so, then its practical implementation becomes an issue for the day-to-day running of the trial.

Is the method suitable to apply to a cluster randomisation? Cluster randomisations are used when randomising groups of individuals to a treatment rather than the individuals themselves. This can be due to the nature of the treatment, such as a new teaching method for schools or a dietary intervention for families. Using clusters is a big part of the trial design and the randomisation needs to be handled slightly differently.

Should a response-adaptive method be considered? If there is some evidence that one treatment is better than another, then a response-adaptive method works by taking into account the outcomes of previous allocations and works to minimise the number of participants on the “wrong” treatment.

For multi-centred trials, how to handle the randomisations across the centres should be considered at this point. Do all centres need to be completely balanced? Are all centres the same size? Considering the various centres as stratification variables is one way of dealing with more than one centre.

Once the method of randomisation has been established the next important step is to consider how to implement it. The recommended way is to enlist the services of a central randomisation office that can offer robust, validated techniques with the security and back-up needed to implement many of the methods proposed today. How the method is implemented must be as clearly reported as the method chosen. As part of the implementation it is important to keep the allocations concealed, both those already done and any future ones, from as many people as possible. This helps prevent selection bias: a clinician may withhold a participant if he believes that based on previous allocations the next allocations would not be the “preferred” ones – see the section below on subversion.

Part of the trial design will be to note exactly who should know what about how each participant has been allocated. Researchers and participants may be equally blinded, but that is not always the case.

For example, in a blinded trial there may be researchers who do not know which group the participants have been allocated to. This enables them to conduct the assessments without any bias for the allocation. They may, however, start to guess, on the basis of the results they see. A measure of blinding may be incorporated for the researchers to indicate whether they have remained blind to the treatment allocated. This can be in the form of a simple scale tool for the researcher to indicate how confident they are in knowing which allocated group the participant is in by the end of an assessment. With psychosocial interventions it is often impossible to hide from the participants, let alone the clinicians, which treatment group they have been allocated to.

In a drug trial where a placebo can be prescribed a coded system can ensure that neither patients nor researchers know which group is which until after the analysis stage.

With any level of blinding there may be a requirement to unblind participants or clinicians at any point in the trial, and there should be a documented procedure drawn up on how to unblind a particular participant without risking the unblinding of a trial. For drug trials in particular, the methods for unblinding a participant must be stated in the trial protocol. Wherever possible the data analysts and statisticians should remain blind to the allocation until after the main analysis has taken place.

Blinding should not be confused with allocation concealment. Blinding prevents performance and ascertainment bias within a trial, while allocation concealment prevents selection bias. Bias introduced by poor allocation concealment may be thought of as a predictive bias, trying to influence the results from the outset, while the biases introduced by non-blinding can be thought of as a reactive bias, creating causal links in outcomes because of being in possession of information about the treatment group.

In the literature on randomisation there are numerous tales of how allocation schemes have been subverted by clinicians trying to do the best for the trial or for their patient or both. This includes anecdotal tales of clinicians holding sealed envelopes containing the allocations up to X-ray lights and confessing to breaking into locked filing cabinets to get at the codes 3 . This type of behaviour has many explanations and reasons, but does raise the question whether these clinicians were in a state of equipoise with regard to the trial, and whether therefore they should really have been involved with the trial. Randomisation schemes and their implications must be signed up to by the whole team and are not something that only the participants need to consent to.

Clinicians have been known to X-ray sealed allocation envelopes to try to get their patients into the preferred group in a trial

The 2010 CONSORT statement can be found at http://www.consort-statement.org/consort-statement/ .

Dumville , J. C. , Hahn , S. , Miles , J. N. V. and Torgerson , D. J. ( 2006 ) The use of unequal randomisation ratios in clinical trials: A review . Contemporary Clinical Trials , 27 , 1 – 12 .

Google Scholar

Shulz , K. F. ( 1995 ) Subverting randomisation in controlled trials . Journal of the American Medical Association , 274 , 1456 – 1458 .

Month: Total Views:
February 2023 6
March 2023 6
April 2023 7
May 2023 16
June 2023 24
July 2023 19
August 2023 45
September 2023 106
October 2023 139
November 2023 234
December 2023 184
January 2024 239
February 2024 197
March 2024 183
April 2024 140
May 2024 167
June 2024 142
July 2024 139

Email alerts

Citing articles via.

  • Recommend to Your Librarian
  • Advertising & Corporate Services
  • Journals Career Network

Affiliations

  • Online ISSN 1740-9713
  • Print ISSN 1740-9705
  • Copyright © 2024 Royal Statistical Society
  • About Oxford Academic
  • Publish journals with us
  • University press partners
  • What we publish
  • New features  
  • Open access
  • Institutional account management
  • Rights and permissions
  • Get help with access
  • Accessibility
  • Advertising
  • Media enquiries
  • Oxford University Press
  • Oxford Languages
  • University of Oxford

Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide

  • Copyright © 2024 Oxford University Press
  • Cookie settings
  • Cookie policy
  • Privacy policy
  • Legal notice

This Feature Is Available To Subscribers Only

Sign In or Create an Account

This PDF is available to Subscribers Only

For full access to this pdf, sign in to an existing account, or purchase an annual subscription.

  • Open access
  • Published: 16 August 2021

A roadmap to using randomization in clinical trials

  • Vance W. Berger 1 ,
  • Louis Joseph Bour 2 ,
  • Kerstine Carter 3 ,
  • Jonathan J. Chipman   ORCID: orcid.org/0000-0002-3021-2376 4 , 5 ,
  • Colin C. Everett   ORCID: orcid.org/0000-0002-9788-840X 6 ,
  • Nicole Heussen   ORCID: orcid.org/0000-0002-6134-7206 7 , 8 ,
  • Catherine Hewitt   ORCID: orcid.org/0000-0002-0415-3536 9 ,
  • Ralf-Dieter Hilgers   ORCID: orcid.org/0000-0002-5945-1119 7 ,
  • Yuqun Abigail Luo 10 ,
  • Jone Renteria 11 , 12 ,
  • Yevgen Ryeznik   ORCID: orcid.org/0000-0003-2997-8566 13 ,
  • Oleksandr Sverdlov   ORCID: orcid.org/0000-0002-1626-2588 14 &
  • Diane Uschner   ORCID: orcid.org/0000-0002-7858-796X 15

for the Randomization Innovative Design Scientific Working Group

BMC Medical Research Methodology volume  21 , Article number:  168 ( 2021 ) Cite this article

29k Accesses

40 Citations

14 Altmetric

Metrics details

Randomization is the foundation of any clinical trial involving treatment comparison. It helps mitigate selection bias, promotes similarity of treatment groups with respect to important known and unknown confounders, and contributes to the validity of statistical tests. Various restricted randomization procedures with different probabilistic structures and different statistical properties are available. The goal of this paper is to present a systematic roadmap for the choice and application of a restricted randomization procedure in a clinical trial.

We survey available restricted randomization procedures for sequential allocation of subjects in a randomized, comparative, parallel group clinical trial with equal (1:1) allocation. We explore statistical properties of these procedures, including balance/randomness tradeoff, type I error rate and power. We perform head-to-head comparisons of different procedures through simulation under various experimental scenarios, including cases when common model assumptions are violated. We also provide some real-life clinical trial examples to illustrate the thinking process for selecting a randomization procedure for implementation in practice.

Restricted randomization procedures targeting 1:1 allocation vary in the degree of balance/randomness they induce, and more importantly, they vary in terms of validity and efficiency of statistical inference when common model assumptions are violated (e.g. when outcomes are affected by a linear time trend; measurement error distribution is misspecified; or selection bias is introduced in the experiment). Some procedures are more robust than others. Covariate-adjusted analysis may be essential to ensure validity of the results. Special considerations are required when selecting a randomization procedure for a clinical trial with very small sample size.

Conclusions

The choice of randomization design, data analytic technique (parametric or nonparametric), and analysis strategy (randomization-based or population model-based) are all very important considerations. Randomization-based tests are robust and valid alternatives to likelihood-based tests and should be considered more frequently by clinical investigators.

Peer Review reports

Various research designs can be used to acquire scientific medical evidence. The randomized controlled trial (RCT) has been recognized as the most credible research design for investigations of the clinical effectiveness of new medical interventions [ 1 , 2 ]. Evidence from RCTs is widely used as a basis for submissions of regulatory dossiers in request of marketing authorization for new drugs, biologics, and medical devices. Three important methodological pillars of the modern RCT include blinding (masking), randomization, and the use of control group [ 3 ].

While RCTs provide the highest standard of clinical evidence, they are laborious and costly, in terms of both time and material resources. There are alternative designs, such as observational studies with either a cohort or case–control design, and studies using real world evidence (RWE). When properly designed and implemented, observational studies can sometimes produce similar estimates of treatment effects to those found in RCTs, and furthermore, such studies may be viable alternatives to RCTs in many settings where RCTs are not feasible and/or not ethical. In the era of big data, the sources of clinically relevant data are increasingly rich and include electronic health records, data collected from wearable devices, health claims data, etc. Big data creates vast opportunities for development and implementation of novel frameworks for comparative effectiveness research [ 4 ], and RWE studies nowadays can be implemented rapidly and relatively easily. But how credible are the results from such studies?

In 1980, D. P. Byar issued warnings and highlighted potential methodological problems with comparison of treatment effects using observational databases [ 5 ]. Many of these issues still persist and actually become paramount during the ongoing COVID-19 pandemic when global scientific efforts are made to find safe and efficacious vaccines and treatments as soon as possible. While some challenges pertinent to RWE studies are related to the choice of proper research methodology, some additional challenges arise from increasing requirements of health authorities and editorial boards of medical journals for the investigators to present evidence of transparency and reproducibility of their conducted clinical research. Recently, two top medical journals, the New England Journal of Medicine and the Lancet, retracted two COVID-19 studies that relied on observational registry data [ 6 , 7 ]. The retractions were made at the request of the authors who were unable to ensure reproducibility of the results [ 8 ]. Undoubtedly, such cases are harmful in many ways. The already approved drugs may be wrongly labeled as “toxic” or “inefficacious”, and the reputation of the drug developers could be blemished or destroyed. Therefore, the highest standards for design, conduct, analysis, and reporting of clinical research studies are now needed more than ever. When treatment effects are modest, yet still clinically meaningful, a double-blind, randomized, controlled clinical trial design helps detect these differences while adjusting for possible confounders and adequately controlling the chances of both false positive and false negative findings.

Randomization in clinical trials has been an important area of methodological research in biostatistics since the pioneering work of A. Bradford Hill in the 1940’s and the first published randomized trial comparing streptomycin with a non-treatment control [ 9 ]. Statisticians around the world have worked intensively to elaborate the value, properties, and refinement of randomization procedures with an incredible record of publication [ 10 ]. In particular, a recent EU-funded project ( www.IDeAl.rwth-aachen.de ) on innovative design and analysis of small population trials has “randomization” as one work package. In 2020, a group of trial statisticians around the world from different sectors formed a subgroup of the Drug Information Association (DIA) Innovative Designs Scientific Working Group (IDSWG) to raise awareness of the full potential of randomization to improve trial quality, validity and rigor ( https://randomization-working-group.rwth-aachen.de/ ).

The aims of the current paper are three-fold. First, we describe major recent methodological advances in randomization, including different restricted randomization designs that have superior statistical properties compared to some widely used procedures such as permuted block designs. Second, we discuss different types of experimental biases in clinical trials and explain how a carefully chosen randomization design can mitigate risks of these biases. Third, we provide a systematic roadmap for evaluating different restricted randomization procedures and selecting an “optimal” one for a particular trial. We also showcase application of these ideas through several real life RCT examples.

The target audience for this paper would be clinical investigators and biostatisticians who are tasked with the design, conduct, analysis, and interpretation of clinical trial results, as well as regulatory and scientific/medical journal reviewers. Recognizing the breadth of the concept of randomization, in this paper we focus on a randomized, comparative, parallel group clinical trial design with equal (1:1) allocation, which is typically implemented using some restricted randomization procedure, possibly stratified by some important baseline prognostic factor(s) and/or study center. Some of our findings and recommendations are generalizable to more complex clinical trial settings. We shall highlight these generalizations and outline additional important considerations that fall outside the scope of the current paper.

The paper is organized as follows. The “ Methods ” section provides some general background on the methodology of randomization in clinical trials, describes existing restricted randomization procedures, and discusses some important criteria for comparison of these procedures in practice. In the “ Results ” section, we present our findings from four simulation studies that illustrate the thinking process when evaluating different randomization design options at the study planning stage. The “ Conclusions ” section summarizes the key findings and important considerations on restricted randomization procedures, and it also highlights some extensions and further topics on randomization in clinical trials.

What is randomization and what are its virtues in clinical trials?

Randomization is an essential component of an experimental design in general and clinical trials in particular. Its history goes back to R. A. Fisher and his classic book “The Design of Experiments” [ 11 ]. Implementation of randomization in clinical trials is due to A. Bradford Hill who designed the first randomized clinical trial evaluating the use of streptomycin in treating tuberculosis in 1946 [ 9 , 12 , 13 ].

Reference [ 14 ] provides a good summary of the rationale and justification for the use of randomization in clinical trials. The randomized controlled trial (RCT) has been referred to as “the worst possible design (except for all the rest)” [ 15 ], indicating that the benefits of randomization should be evaluated in comparison to what we are left with if we do not randomize. Observational studies suffer from a wide variety of biases that may not be adequately addressed even using state-of-the-art statistical modeling techniques.

The RCT in the medical field has several features that distinguish it from experimental designs in other fields, such as agricultural experiments. In the RCT, the experimental units are humans, and in the medical field often diagnosed with a potentially fatal disease. These subjects are sequentially enrolled for participation in the study at selected study centers, which have relevant expertise for conducting clinical research. Many contemporary clinical trials are run globally, at multiple research institutions. The recruitment period may span several months or even years, depending on a therapeutic indication and the target patient population. Patients who meet study eligibility criteria must sign the informed consent, after which they are enrolled into the study and, for example, randomized to either experimental treatment E or the control treatment C according to the randomization sequence. In this setup, the choice of the randomization design must be made judiciously, to protect the study from experimental biases and ensure validity of clinical trial results.

The first virtue of randomization is that, in combination with allocation concealment and masking, it helps mitigate selection bias due to an investigator’s potential to selectively enroll patients into the study [ 16 ]. A non-randomized, systematic design such as a sequence of alternating treatment assignments has a major fallacy: an investigator, knowing an upcoming treatment assignment in a sequence, may enroll a patient who, in their opinion, would be best suited for this treatment. Consequently, one of the groups may contain a greater number of “sicker” patients and the estimated treatment effect may be biased. Systematic covariate imbalances may increase the probability of false positive findings and undermine the integrity of the trial. While randomization alleviates the fallacy of a systematic design, it does not fully eliminate the possibility of selection bias (unless we consider complete randomization for which each treatment assignment is determined by a flip of a coin, which is rarely, if ever used in practice [ 17 ]). Commonly, RCTs employ restricted randomization procedures which sequentially balance treatment assignments while maintaining allocation randomness. A popular choice is the permuted block design that controls imbalance by making treatment assignments at random in blocks. To minimize potential for selection bias, one should avoid overly restrictive randomization schemes such as permuted block design with small block sizes, as this is very similar to alternating treatment sequence.

The second virtue of randomization is its tendency to promote similarity of treatment groups with respect to important known, but even more importantly, unknown confounders. If treatment assignments are made at random, then by the law of large numbers, the average values of patient characteristics should be approximately equal in the experimental and the control groups, and any observed treatment difference should be attributed to the treatment effects, not the effects of the study participants [ 18 ]. However, one can never rule out the possibility that the observed treatment difference is due to chance, e.g. as a result of random imbalance in some patient characteristics [ 19 ]. Despite that random covariate imbalances can occur in clinical trials of any size, such imbalances do not compromise the validity of statistical inference, provided that proper statistical techniques are applied in the data analysis.

Several misconceptions on the role of randomization and balance in clinical trials were documented and discussed by Senn [ 20 ]. One common misunderstanding is that balance of prognostic covariates is necessary for valid inference. In fact, different randomization designs induce different extent of balance in the distributions of covariates, and for a given trial there is always a possibility of observing baseline group differences. A legitimate approach is to pre-specify in the protocol the clinically important covariates to be adjusted for in the primary analysis, apply a randomization design (possibly accounting for selected covariates using pre-stratification or some other approach), and perform a pre-planned covariate-adjusted analysis (such as analysis of covariance for a continuous primary outcome), verifying the model assumptions and conducting additional supportive/sensitivity analyses, as appropriate. Importantly, the pre-specified prognostic covariates should always be accounted for in the analysis, regardless whether their baseline differences are present or not [ 20 ].

It should be noted that some randomization designs (such as covariate-adaptive randomization procedures) can achieve very tight balance of covariate distributions between treatment groups [ 21 ]. While we address randomization within pre-specified stratifications, we do not address more complex covariate- and response-adaptive randomization in this paper.

Finally, randomization plays an important role in statistical analysis of the clinical trial. The most common approach to inference following the RCT is the invoked population model [ 10 ]. With this approach, one posits that there is an infinite target population of patients with the disease, from which \(n\) eligible subjects are sampled in an unbiased manner for the study and are randomized to the treatment groups. Within each group, the responses are assumed to be independent and identically distributed (i.i.d.), and inference on the treatment effect is performed using some standard statistical methodology, e.g. a two sample t-test for normal outcome data. The added value of randomization is that it makes the assumption of i.i.d. errors more feasible compared to a non-randomized study because it introduces a real element of chance in the allocation of patients.

An alternative approach is the randomization model , in which the implemented randomization itself forms the basis for statistical inference [ 10 ]. Under the null hypothesis of the equality of treatment effects, individual outcomes (which are regarded as not influenced by random variation, i.e. are considered as fixed) are not affected by treatment. Treatment assignments are permuted in all possible ways consistent with the randomization procedure actually used in the trial. The randomization-based p- value is the sum of null probabilities of the treatment assignment permutations in the reference set that yield the test statistic values greater than or equal to the experimental value. A randomization-based test can be a useful supportive analysis, free of assumptions of parametric tests and protective against spurious significant results that may be caused by temporal trends [ 14 , 22 ].

It is important to note that Bayesian inference has also become a common statistical analysis in RCTs [ 23 ]. Although the inferential framework relies upon subjective probabilities, a study analyzed through a Bayesian framework still relies upon randomization for the other aforementioned virtues [ 24 ]. Hence, the randomization considerations discussed herein have broad application.

What types of randomization methodologies are available?

Randomization is not a single methodology, but a very broad class of design techniques for the RCT [ 10 ]. In this paper, we consider only randomization designs for sequential enrollment clinical trials with equal (1:1) allocation in which randomization is not adapted for covariates and/or responses. The simplest procedure for an RCT is complete randomization design (CRD) for which each subject’s treatment is determined by a flip of a fair coin [ 25 ]. CRD provides no potential for selection bias (e.g. based on prediction of future assignments) but it can result, with non-negligible probability, in deviations from the 1:1 allocation ratio and covariate imbalances, especially in small samples. This may lead to loss of statistical efficiency (decrease in power) compared to the balanced design. In practice, some restrictions on randomization are made to achieve balanced allocation. Such randomization designs are referred to as restricted randomization procedures [ 26 , 27 ].

Suppose we plan to randomize an even number of subjects \(n\) sequentially between treatments E and C. Two basic designs that equalize the final treatment numbers are the random allocation rule (Rand) and the truncated binomial design (TBD), which were discussed in the 1957 paper by Blackwell and Hodges [ 28 ]. For Rand, any sequence of exactly \(n/2\) E’s and \(n/2\) C’s is equally likely. For TBD, treatment assignments are made with probability 0.5 until one of the treatments receives its quota of \(n/2\) subjects; thereafter all remaining assignments are made deterministically to the opposite treatment.

A common feature of both Rand and TBD is that they aim at the final balance, whereas at intermediate steps it is still possible to have substantial imbalances, especially if \(n\) is large. A long run of a single treatment in a sequence may be problematic if there is a time drift in some important covariate, which can lead to chronological bias [ 29 ]. To mitigate this risk, one can further restrict randomization so that treatment assignments are balanced over time. One common approach is the permuted block design (PBD) [ 30 ], for which random treatment assignments are made in blocks of size \(2b\) ( \(b\) is some small positive integer), with exactly \(b\) allocations to each of the treatments E and C. The PBD is perhaps the oldest (it can be traced back to A. Bradford Hill’s 1951 paper [ 12 ]) and the most widely used randomization method in clinical trials. Often its choice in practice is justified by simplicity of implementation and the fact that it is referenced in the authoritative ICH E9 guideline on statistical principles for clinical trials [ 31 ]. One major challenge with PBD is the choice of the block size. If \(b=1\) , then every pair of allocations is balanced, but every even allocation is deterministic. Larger block sizes increase allocation randomness. The use of variable block sizes has been suggested [ 31 ]; however, PBDs with variable block sizes are also quite predictable [ 32 ]. Another problematic feature of the PBD is that it forces periodic return to perfect balance, which may be unnecessary from the statistical efficiency perspective and may increase the risk of prediction of upcoming allocations.

More recent and better alternatives to the PBD are the maximum tolerated imbalance (MTI) procedures [ 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 ]. These procedures provide stronger encryption of the randomization sequence (i.e. make it more difficult to predict future treatment allocations in the sequence even knowing the current sizes of the treatment groups) while controlling treatment imbalance at a pre-defined threshold throughout the experiment. A general MTI procedure specifies a certain boundary for treatment imbalance, say \(b>0\) , that cannot be exceeded. If, at a given allocation step the absolute value of imbalance is equal to \(b\) , then one next allocation is deterministically forced toward balance. This is in contrast to PBD which, after reaching the target quota of allocations for either treatment within a block, forces all subsequent allocations to achieve perfect balance at the end of the block. Some notable MTI procedures are the big stick design (BSD) proposed by Soares and Wu in 1983 [ 37 ], the maximal procedure proposed by Berger, Ivanova and Knoll in 2003 [ 35 ], the block urn design (BUD) proposed by Zhao and Weng in 2011 [ 40 ], just to name a few. These designs control treatment imbalance within pre-specified limits and are more immune to selection bias than the PBD [ 42 , 43 ].

Another important class of restricted randomization procedures is biased coin designs (BCDs). Starting with the seminal 1971 paper of Efron [ 44 ], BCDs have been a hot research topic in biostatistics for 50 years. Efron’s BCD is very simple: at any allocation step, if treatment numbers are balanced, the next assignment is made with probability 0.5; otherwise, the underrepresented treatment is assigned with probability \(p\) , where \(0.5<p\le 1\) is a fixed and pre-specified parameter that determines the tradeoff between balance and randomness. Note that \(p=1\) corresponds to PBD with block size 2. If we set \(p<1\) (e.g. \(p=2/3\) ), then the procedure has no deterministic assignments and treatment allocation will be concentrated around 1:1 with high probability [ 44 ]. Several extensions of Efron’s BCD providing better tradeoff between treatment balance and allocation randomness have been proposed [ 45 , 46 , 47 , 48 , 49 ]; for example, a class of adjustable biased coin designs introduced by Baldi Antognini and Giovagnoli in 2004 [ 49 ] unifies many BCDs in a single framework. A comprehensive simulation study comparing different BCDs has been published by Atkinson in 2014 [ 50 ].

Finally, urn models provide a useful mechanism for RCT designs [ 51 ]. Urn models apply some probabilistic rules to sequentially add/remove balls (representing different treatments) in the urn, to balance treatment assignments while maintaining the randomized nature of the experiment [ 39 , 40 , 52 , 53 , 54 , 55 ]. A randomized urn design for balancing treatment assignments was proposed by Wei in 1977 [ 52 ]. More novel urn designs, such as the drop-the-loser urn design developed by Ivanova in 2003 [ 55 ] have reduced variability and can attain the target treatment allocation more efficiently. Many urn designs involve parameters that can be fine-tuned to obtain randomization procedures with desirable balance/randomness tradeoff [ 56 ].

What are the attributes of a good randomization procedure?

A “good” randomization procedure is one that helps successfully achieve the study objective(s). Kalish and Begg [ 57 ] state that the major objective of a comparative clinical trial is to provide a precise and valid comparison. To achieve this, the trial design should be such that it: 1) prevents bias; 2) ensures an efficient treatment comparison; and 3) is simple to implement to minimize operational errors. Table 1 elaborates on these considerations, focusing on restricted randomization procedures for 1:1 randomized trials.

Before delving into a detailed discussion, let us introduce some important definitions. Following [ 10 ], a randomization sequence is a random vector \({{\varvec{\updelta}}}_{n}=({\delta }_{1},\dots ,{\delta }_{n})\) , where \({\delta }_{i}=1\) , if the i th subject is assigned to treatment E or \({\delta }_{i}=0\) , if the \(i\) th subject is assigned to treatment C. A restricted randomization procedure can be defined by specifying a probabilistic rule for the treatment assignment of the ( i +1)st subject, \({\delta }_{i+1}\) , given the past allocations \({{\varvec{\updelta}}}_{i}\) for \(i\ge 1\) . Let \({N}_{E}\left(i\right)={\sum }_{j=1}^{i}{\delta }_{j}\) and \({N}_{C}\left(i\right)=i-{N}_{E}\left(i\right)\) denote the numbers of subjects assigned to treatments E and C, respectively, after \(i\) allocation steps. Then \(D\left(i\right)={N}_{E}\left(i\right)-{N}_{C}(i)\) is treatment imbalance after \(i\) allocations. For any \(i\ge 1\) , \(D\left(i\right)\) is a random variable whose probability distribution is determined by the chosen randomization procedure.

Balance and randomness

Treatment balance and allocation randomness are two competing requirements in the design of an RCT. Restricted randomization procedures that provide a good tradeoff between these two criteria are desirable in practice.

Consider a trial with sample size \(n\) . The absolute value of imbalance, \(\left|D(i)\right|\) \((i=1,\dots,n)\) , provides a measure of deviation from equal allocation after \(i\) allocation steps. \(\left|D(i)\right|=0\) indicates that the trial is perfectly balanced. One can also consider \(\Pr(\vert D\left(i\right)\vert=0)\) , the probability of achieving exact balance after \(i\) allocation steps. In particular \(\Pr(\vert D\left(n\right)\vert=0)\) is the probability that the final treatment numbers are balanced. Two other useful summary measures are the expected imbalance at the \(i\mathrm{th}\)  step, \(E\left|D(i)\right|\) and the expected value of the maximum imbalance of the entire randomization sequence, \(E\left(\underset{1\le i\le n}{\mathrm{max}}\left|D\left(i\right)\right|\right)\) .

Greater forcing of balance implies lack of randomness. A procedure that lacks randomness may be susceptible to selection bias [ 16 ], which is a prominent issue in open-label trials with a single center or with randomization stratified by center, where the investigator knows the sequence of all previous treatment assignments. A classic approach to quantify the degree of susceptibility of a procedure to selection bias is the Blackwell-Hodges model [ 28 ]. Let \({G}_{i}=1\) (or 0), if at the \(i\mathrm{th}\)  allocation step an investigator makes a correct (or incorrect) guess on treatment assignment \({\delta }_{i}\) , given past allocations \({{\varvec{\updelta}}}_{i-1}\) . Then the predictability of the design at the \(i\mathrm{th}\)  step is the expected value of \({G}_{i}\) , i.e. \(E\left(G_i\right)=\Pr(G_i=1)\) . Blackwell and Hodges [ 28 ] considered the expected bias factor , the difference between expected total number of correct guesses of a given sequence of random assignments and the similar quantity obtained from CRD for which treatment assignments are made independently with equal probability: \(E(F)=E\left({\sum }_{i=1}^{n}{G}_{i}\right)-n/2\) . This quantity is zero for CRD, and it is positive for restricted randomization procedures (greater values indicate higher expected bias). Matts and Lachin [ 30 ] suggested taking expected proportion of deterministic assignments in a sequence as another measure of lack of randomness.

In the literature, various restricted randomization procedures have been compared in terms of balance and randomness [ 50 , 58 , 59 ]. For instance, Zhao et al. [ 58 ] performed a comprehensive simulation study of 14 restricted randomization procedures with different choices of design parameters, for sample sizes in the range of 10 to 300. The key criteria were the maximum absolute imbalance and the correct guess probability. The authors found that the performance of the designs was within a closed region with the boundaries shaped by Efron’s BCD [ 44 ] and the big stick design [ 37 ], signifying that the latter procedure with a suitably chosen MTI boundary can be superior to other restricted randomization procedures in terms of balance/randomness tradeoff. Similar findings confirming the utility of the big stick design were recently reported by Hilgers et al. [ 60 ].

Validity and efficiency

Validity of a statistical procedure essentially means that the procedure provides correct statistical inference following an RCT. In particular, a chosen statistical test is valid, if it controls the chance of a false positive finding, that is, the pre-specified probability of a type I error of the test is achieved but not exceeded. The strong control of type I error rate is a major prerequisite for any confirmatory RCT. Efficiency means high statistical power for detecting meaningful treatment differences (when they exist), and high accuracy of estimation of treatment effects.

Both validity and efficiency are major requirements of any RCT, and both of these aspects are intertwined with treatment balance and allocation randomness. Restricted randomization designs, when properly implemented, provide solid ground for valid and efficient statistical inference. However, a careful consideration of different options can help an investigator to optimize the choice of a randomization procedure for their clinical trial.

Let us start with statistical efficiency. Equal (1:1) allocation frequently maximizes power and estimation precision. To illustrate this, suppose the primary outcomes in the two groups are normally distributed with respective means \({\mu }_{E}\) and \({\mu }_{C}\) and common standard deviation \(\sigma >0\) . Then the variance of an efficient estimator of the treatment difference \({\mu }_{E}-{\mu }_{C}\) is equal to \(V=\frac{4{\sigma }^{2}}{n-{L}_{n}}\) , where \({L}_{n}=\frac{{\left|D(n)\right|}^{2}}{n}\) is referred to as loss [ 61 ]. Clearly, \(V\) is minimized when \({L}_{n}=0\) , or equivalently, \(D\left(n\right)=0\) , i.e. the balanced trial.

When the primary outcome follows a more complex statistical model, optimal allocation may be unequal across the treatment groups; however, 1:1 allocation is still nearly optimal for binary outcomes [ 62 , 63 ], survival outcomes [ 64 ], and possibly more complex data types [ 65 , 66 ]. Therefore, a randomization design that balances treatment numbers frequently promotes efficiency of the treatment comparison.

As regards inferential validity, it is important to distinguish two approaches to statistical inference after the RCT – an invoked population model and a randomization model [ 10 ]. For a given randomization procedure, these two approaches generally produce similar results when the assumption of normal random sampling (and some other assumptions) are satisfied, but the randomization model may be more robust when model assumptions are violated; e.g. when outcomes are affected by a linear time trend [ 67 , 68 ]. Another important issue that may interfere with validity is selection bias. Some authors showed theoretically that PBDs with small block sizes may result in serious inflation of the type I error rate under a selection bias model [ 69 , 70 , 71 ]. To mitigate risk of selection bias, one should ideally take preventative measures, such as blinding/masking, allocation concealment, and avoidance of highly restrictive randomization designs. However, for already completed studies with evidence of selection bias [ 72 ], special statistical adjustments are warranted to ensure validity of the results [ 73 , 74 , 75 ].

Implementation aspects

With the current state of information technology, implementation of randomization in RCTs should be straightforward. Validated randomization systems are emerging, and they can handle randomization designs of increasing complexity for clinical trials that are run globally. However, some important points merit consideration.

The first point has to do with how a randomization sequence is generated and implemented. One should distinguish between advance and adaptive randomization [ 16 ]. Here, by “adaptive” randomization we mean “in-real-time” randomization, i.e. when a randomization sequence is generated not upfront, but rather sequentially, as eligible subjects enroll into the study. Restricted randomization procedures are “allocation-adaptive”, in the sense that the treatment assignment of an individual subject is adapted to the history of previous treatment assignments. While in practice the majority of trials with restricted and stratified randomization use randomization schedules pre-generated in advance, there are some circumstances under which “in-real-time” randomization schemes may be preferred; for instance, clinical trials with high cost of goods and/or shortage of drug supply [ 76 ].

The advance randomization approach includes the following steps: 1) for the chosen randomization design and sample size \(n\) , specify the probability distribution on the reference set by enumerating all feasible randomization sequences of length \(n\) and their corresponding probabilities; 2) select a sequence at random from the reference set according to the probability distribution; and 3) implement this sequence in the trial. While enumeration of all possible sequences and their probabilities is feasible and may be useful for trials with small sample sizes, the task becomes computationally prohibitive (and unnecessary) for moderate or large samples. In practice, Monte Carlo simulation can be used to approximate the probability distribution of the reference set of all randomization sequences for a chosen randomization procedure.

A limitation of advance randomization is that a sequence of treatment assignments must be generated upfront, and proper security measures (e.g. blinding/masking) must be in place to protect confidentiality of the sequence. With the adaptive or “in-real-time” randomization, a sequence of treatment assignments is generated dynamically as the trial progresses. For many restricted randomization procedures, the randomization rule can be expressed as \(\Pr(\delta_{i+1}=1)=\left|F\left\{D\left(i\right)\right\}\right|\) , where \(F\left\{\cdot \right\}\) is some non-increasing function of \(D\left(i\right)\) for any \(i\ge 1\) . This is referred to as the Markov property [ 77 ], which makes a procedure easy to implement sequentially. Some restricted randomization procedures, e.g. the maximal procedure [ 35 ], do not have the Markov property.

The second point has to do with how the final data analysis is performed. With an invoked population model, the analysis is conditional on the design and the randomization is ignored in the analysis. With a randomization model, the randomization itself forms the basis for statistical inference. Reference [ 14 ] provides a contemporaneous overview of randomization-based inference in clinical trials. Several other papers provide important technical details on randomization-based tests, including justification for control of type I error rate with these tests [ 22 , 78 , 79 ]. In practice, Monte Carlo simulation can be used to estimate randomization-based p- values [ 10 ].

A roadmap for comparison of restricted randomization procedures

The design of any RCT starts with formulation of the trial objectives and research questions of interest [ 3 , 31 ]. The choice of a randomization procedure is an integral part of the study design. A structured approach for selecting an appropriate randomization procedure for an RCT was proposed by Hilgers et al. [ 60 ]. Here we outline the thinking process one may follow when evaluating different candidate randomization procedures. Our presented roadmap is by no means exhaustive; its main purpose is to illustrate the logic behind some important considerations for finding an “optimal” randomization design for the given trial parameters.

Throughout, we shall assume that the study is designed as a randomized, two-arm comparative trial with 1:1 allocation, with a fixed sample size \(n\) that is pre-determined based on budgetary and statistical considerations to obtain a definitive assessment of the treatment effect via the pre-defined hypothesis testing. We start with some general considerations which determine the study design:

Sample size ( \(n\) ). For small or moderate studies, exact attainment of the target numbers per group may be essential, because even slight imbalance may decrease study power. Therefore, a randomization design in such studies should equalize well the final treatment numbers. For large trials, the risk of major imbalances is less of a concern, and more random procedures may be acceptable.

The length of the recruitment period and the trial duration. Many studies are short-term and enroll participants fast, whereas some other studies are long-term and may have slow patient accrual. In the latter case, there may be time drifts in patient characteristics, and it is important that the randomization design balances treatment assignments over time.

Level of blinding (masking): double-blind, single-blind, or open-label. In double-blind studies with properly implemented allocation concealment the risk of selection bias is low. By contrast, in open-label studies the risk of selection bias may be high, and the randomization design should provide strong encryption of the randomization sequence to minimize prediction of future allocations.

Number of study centers. Many modern RCTs are implemented globally at multiple research institutions, whereas some studies are conducted at a single institution. In the former case, the randomization is often stratified by center and/or clinically important covariates. In the latter case, especially in single-institution open-label studies, the randomization design should be chosen very carefully, to mitigate the risk of selection bias.

An important point to consider is calibration of the design parameters. Many restricted randomization procedures involve parameters, such as the block size in the PBD, the coin bias probability in Efron’s BCD, the MTI threshold, etc. By fine-tuning these parameters, one can obtain designs with desirable statistical properties. For instance, references [ 80 , 81 ] provide guidance on how to justify the block size in the PBD to mitigate the risk of selection bias or chronological bias. Reference [ 82 ] provides a formal approach to determine the “optimal” value of the parameter \(p\) in Efron’s BCD in both finite and large samples. The calibration of design parameters can be done using Monte Carlo simulations for the given trial setting.

Another important consideration is the scope of randomization procedures to be evaluated. As we mentioned already, even one method may represent a broad class of randomization procedures that can provide different levels of balance/randomness tradeoff; e.g. Efron’s BCD covers a wide spectrum of designs, from PBD(2) (if \(p=1\) ) to CRD (if \(p=0.5\) ). One may either prefer to focus on finding the “optimal” parameter value for the chosen design, or be more general and include various designs (e.g. MTI procedures, BCDs, urn designs, etc.) in the comparison. This should be done judiciously, on a case-by-case basis, focusing only on the most reasonable procedures. References [ 50 , 58 , 60 ] provide good examples of simulation studies to facilitate comparisons among various restricted randomization procedures for a 1:1 RCT.

In parallel with the decision on the scope of randomization procedures to be assessed, one should decide upon the performance criteria against which these designs will be compared. Among others, one might think about the two competing considerations: treatment balance and allocation randomness. For a trial of size \(n\) , at each allocation step \(i=1,\dots ,n\) one can calculate expected absolute imbalance \(E\left|D(i)\right|\) and the probability of correct guess \(\Pr(G_i=1)\) as measures of lack of balance and lack of randomness, respectively. These measures can be either calculated analytically (when formulae are available) or through Monte Carlo simulations. Sometimes it may be useful to look at cumulative measures up to the \(i\mathrm{th}\)  allocation step ( \(i=1,\dots ,n\) ); e.g. \(\frac{1}{i}{\sum }_{j=1}^{i}E\left|D(j)\right|\) and \(\frac1i\sum\nolimits_{j=1}^i\Pr(G_j=1)\) . For instance, \(\frac{1}{n}{\sum }_{j=1}^{n}{\mathrm{Pr}}({G}_{j}=1)\) is the average correct guess probability for a design with sample size \(n\) . It is also helpful to visualize the selected criteria. Visualizations can be done in a number of ways; e.g. plots of a criterion vs. allocation step, admissibility plots of two chosen criteria [ 50 , 59 ], etc. Such visualizations can help evaluate design characteristics, both overall and at intermediate allocation steps. They may also provide insights into the behavior of a particular design for different values of the tuning parameter, and/or facilitate a comparison among different types of designs.

Another way to compare the merits of different randomization procedures is to study their inferential characteristics such as type I error rate and power under different experimental conditions. Sometimes this can be done analytically, but a more practical approach is to use Monte Carlo simulation. The choice of the modeling and analysis strategy will be context-specific. Here we outline some considerations that may be useful for this purpose:

Data generating mechanism . To simulate individual outcome data, some plausible statistical model must be posited. The form of the model will depend on the type of outcomes (e.g. continuous, binary, time-to-event, etc.), covariates (if applicable), the distribution of the measurement error terms, and possibly some additional terms representing selection and/or chronological biases [ 60 ].

True treatment effects . At least two scenarios should be considered: under the null hypothesis ( \({H}_{0}\) : treatment effects are the same) to evaluate the type I error rate, and under an alternative hypothesis ( \({H}_{1}\) : there is some true clinically meaningful difference between the treatments) to evaluate statistical power.

Randomization designs to be compared . The choice of candidate randomization designs and their parameters must be made judiciously.

Data analytic strategy . For any study design, one should pre-specify the data analysis strategy to address the primary research question. Statistical tests of significance to compare treatment effects may be parametric or nonparametric, with or without adjustment for covariates.

The approach to statistical inference: population model-based or randomization-based . These two approaches are expected to yield similar results when the population model assumptions are met, but they may be different if some assumptions are violated. Randomization-based tests following restricted randomization procedures will control the type I error at the chosen level if the distribution of the test statistic under the null hypothesis is fully specified by the randomization procedure that was used for patient allocation. This is always the case unless there is a major flaw in the design (such as selection bias whereby the outcome of any individual participant is dependent on treatment assignments of the previous participants).

Overall, there should be a well-thought plan capturing the key questions to be answered, the strategy to address them, the choice of statistical software for simulation and visualization of the results, and other relevant details.

In this section we present four examples that illustrate how one may approach evaluation of different randomization design options at the study planning stage. Example 1 is based on a hypothetical 1:1 RCT with \(n=50\) and a continuous primary outcome, whereas Examples 2, 3, and 4 are based on some real RCTs.

Example 1: Which restricted randomization procedures are robust and efficient?

Our first example is a hypothetical RCT in which the primary outcome is assumed to be normally distributed with mean \({\mu }_{E}\) for treatment E, mean \({\mu }_{C}\) for treatment C, and common variance \({\sigma }^{2}\) . A total of \(n\) subjects are to be randomized equally between E and C, and a two-sample t-test is planned for data analysis. Let \(\Delta ={\mu }_{E}-{\mu }_{C}\) denote the true mean treatment difference. We are interested in testing a hypothesis \({H}_{0}:\Delta =0\) (treatment effects are the same) vs. \({H}_{1}:\Delta \ne 0\) .

The total sample size \(n\) to achieve given power at some clinically meaningful treatment difference \({\Delta }_{c}\) while maintaining the chance of a false positive result at level \(\alpha\) can be obtained using standard statistical methods [ 83 ]. For instance, if \({\Delta }_{c}/\sigma =0.95\) , then a design with \(n=50\) subjects (25 per arm) provides approximately 91% power of a two-sample t-test to detect a statistically significant treatment difference using 2-sided \(\alpha =\) 5%. We shall consider 12 randomization procedures to sequentially randomize \(n=50\) subjects in a 1:1 ratio.

Random allocation rule – Rand.

Truncated binomial design – TBD.

Permuted block design with block size of 2 – PBD(2).

Permuted block design with block size of 4 – PBD(4).

Big stick design [ 37 ] with MTI = 3 – BSD(3).

Biased coin design with imbalance tolerance [ 38 ] with p  = 2/3 and MTI = 3 – BCDWIT(2/3, 3).

Efron’s biased coin design [ 44 ] with p  = 2/3 – BCD(2/3).

Adjustable biased coin design [ 49 ] with a = 2 – ABCD(2).

Generalized biased coin design (GBCD) with \(\gamma =1\) [ 45 ] – GBCD(1).

GBCD with \(\gamma =2\) [ 46 ] – GBCD(2).

GBCD with \(\gamma =5\) [ 47 ] – GBCD(5).

Complete randomization design – CRD.

These 12 procedures can be grouped into five major types. I) Procedures 1, 2, 3, and 4 achieve exact final balance for a chosen sample size (provided the total sample size is a multiple of the block size). II) Procedures 5 and 6 ensure that at any allocation step the absolute value of imbalance is capped at MTI = 3. III) Procedures 7 and 8 are biased coin designs that sequentially adjust randomization according to imbalance measured as the difference in treatment numbers. IV) Procedures 9, 10, and 11 (GBCD’s with \(\gamma =\) 1, 2, and 5) are adaptive biased coin designs, for which randomization probability is modified according to imbalance measured as the difference in treatment allocation proportions (larger \(\gamma\) implies greater forcing of balance). V) Procedure 12 (CRD) is the most random procedure that achieves balance for large samples.

Balance/randomness tradeoff

We first compare the procedures with respect to treatment balance and allocation randomness. To quantify imbalance after \(i\) allocations, we consider two measures: expected value of absolute imbalance \(E\left|D(i)\right|\) , and expected value of loss \(E({L}_{i})=E{\left|D(i)\right|}^{2}/i\) [ 50 , 61 ]. Importantly, for procedures 1, 2, and 3 the final imbalance is always zero, thus \(E\left|D(n)\right|\equiv 0\) and \(E({L}_{n})\equiv 0\) , but at intermediate steps one may have \(E\left|D(i)\right|>0\) and \(E\left({L}_{i}\right)>0\) , for \(1\le i<n\) . For procedures 5 and 6 with MTI = 3, \(E\left({L}_{i}\right)\le 9/i\) . For procedures 7 and 8, \(E\left({L}_{n}\right)\) tends to zero as \(n\to \infty\) [ 49 ]. For procedures 9, 10, 11, and 12, as \(n\to \infty\) , \(E\left({L}_{n}\right)\) tends to the positive constants 1/3, 1/5, 1/11, and 1, respectively [ 47 ]. We take the cumulative average loss after \(n\) allocations as an aggregate measure of imbalance: \(Imb\left(n\right)=\frac{1}{n}{\sum }_{i=1}^{n}E\left({L}_{i}\right)\) , which takes values in the 0–1 range.

To measure lack of randomness, we consider two measures: expected proportion of correct guesses up to the \(i\mathrm{th}\)  step, \(PCG\left(i\right)=\frac1i\sum\nolimits_{j=1}^i\Pr(G_j=1)\) ,  \(i=1,\dots ,n\) , and the forcing index [ 47 , 84 ], \(FI(i)=\frac{{\sum }_{j=1}^{i}E\left|{\phi }_{j}-0.5\right|}{i/4}\) , where \(E\left|{\phi }_{j}-0.5\right|\) is the expected deviation of the conditional probability of treatment E assignment at the \(j\mathrm{th}\)  allocation step ( \({\phi }_{j}\) ) from the unconditional target value of 0.5. Note that \(PCG\left(i\right)\) takes values in the range from 0.5 for CRD to 0.75 for PBD(2) assuming \(i\) is even, whereas \(FI(i)\) takes values in the 0–1 range. At the one extreme, we have CRD for which \(FI(i)\equiv 0\) because for CRD \({\phi }_{i}=0.5\) for any \(i\ge 1\) . At the other extreme, we have PBD(2) for which every odd allocation is made with probability 0.5, and every even allocation is deterministic, i.e. made with probability 0 or 1. For PBD(2), assuming \(i\) is even, there are exactly \(i/2\) pairs of allocations, and so \({\sum }_{j=1}^{i}E\left|{\phi }_{j}-0.5\right|=0.5\cdot i/2=i/4\) , which implies that \(FI(i)=1\) for PBD(2). For all other restricted randomization procedures one has \(0<FI(i)<1\) .

A “good” randomization procedure should have low values of both loss and forcing index. Different randomization procedures can be compared graphically. As a balance/randomness tradeoff metric, one can calculate the quadratic distance to the origin (0,0) for the chosen sample size, e.g. \(d(n)=\sqrt{{\left\{Imb(n)\right\}}^{2}+{\left\{FI(n)\right\}}^{2}}\) (in our example \(n=50\) ), and the randomization designs can then be ranked such that designs with lower values of \(d(n)\) are preferable.

We ran a simulation study of the 12 randomization procedures for an RCT with \(n=50\) . Monte Carlo average values of absolute imbalance, loss, \(Imb\left(i\right)\) , \(FI\left(i\right)\) , and \(d(i)\) were calculated for each intermediate allocation step ( \(i=1,\dots ,50\) ), based on 10,000 simulations.

Figure  1 is a plot of expected absolute imbalance vs. allocation step. CRD, GBCD(1), and GBCD(2) show increasing patterns. For TBD and Rand, the final imbalance (when \(n=50\) ) is zero; however, at intermediate steps is can be quite large. For other designs, absolute imbalance is expected to be below 2 at any allocation step up to \(n=50\) . Note the periodic patterns of PBD(2) and PBD(4); for instance, for PBD(2) imbalance is 0 (or 1) for any even (or odd) allocation.

figure 1

Simulated expected absolute imbalance vs. allocation step for 12 restricted randomization procedures for n  = 50. Note: PBD(2) and PBD(4) have forced periodicity absolute imbalance of 0, which distinguishes them from MTI procedures

Figure  2 is a plot of expected proportion of correct guesses vs. allocation step. One can observe that for CRD it is a flat pattern at 0.5; for PBD(2) it fluctuates while reaching the upper limit of 0.75 at even allocation steps; and for ten other designs the values of proportion of correct guesses fall between those of CRD and PBD(2). The TBD has the same behavior up to ~ 40 th allocation step, at which the pattern starts increasing. Rand exhibits an increasing pattern with overall fewer correct guesses compared to other randomization procedures. Interestingly, BSD(3) is uniformly better (less predictable) than ABCD(2), BCD(2/3), and BCDWIT(2/3, 3). For the three GBCD procedures, there is a rapid initial increase followed by gradual decrease in the pattern; this makes good sense, because GBCD procedures force greater balance when the trial is small and become more random (and less prone to correct guessing) as the sample size increases.

figure 2

Simulated expected proportion of correct guesses vs. allocation step for 12 restricted randomization procedures for n  = 50

Table 2 shows the ranking of the 12 designs with respect to the overall performance metric \(d(n)=\sqrt{{\left\{Imb(n)\right\}}^{2}+{\left\{FI(n)\right\}}^{2}}\) for \(n=50\) . BSD(3), GBCD(2) and GBCD(1) are the top three procedures, whereas PBD(2) and CRD are at the bottom of the list.

Figure  3 is a plot of \(FI\left(n\right)\) vs. \(Imb\left(n\right)\) for \(n=50\) . One can see the two extremes: CRD that takes the value (0,1), and PBD(2) with the value (1,0). The other ten designs are closer to (0,0).

figure 3

Simulated forcing index (x-axis) vs. aggregate expected loss (y-axis) for 12 restricted randomization procedures for n  = 50

Figure  4 is a heat map plot of the metric \(d(i)\) for \(i=1,\dots ,50\) . BSD(3) seems to provide overall best tradeoff between randomness and balance throughout the study.

figure 4

Heatmap of the balance/randomness tradeoff \(d\left(i\right)=\sqrt{{\left\{Imb(i)\right\}}^{2}+{\left\{FI(i)\right\}}^{2}}\) vs. allocation step ( \(i=1,\dots ,50\) ) for 12 restricted randomization procedures. The procedures are ordered by value of d(50), with smaller values (more red) indicating more optimal performance

Inferential characteristics: type I error rate and power

Our next goal is to compare the chosen randomization procedures in terms of validity (control of the type I error rate) and efficiency (power). For this purpose, we assumed the following data generating mechanism: for the \(i\mathrm{th}\)  subject, conditional on the treatment assignment \({\delta }_{i}\) , the outcome \({Y}_{i}\) is generated according to the model

where \({u}_{i}\) is an unknown term associated with the \(i\mathrm{th}\)  subject and \({\varepsilon }_{i}\) ’s are i.i.d. measurement errors. We shall explore the following four models:

M1: Normal random sampling :  \({u}_{i}\equiv 0\) and \({\varepsilon }_{i}\sim\) i.i.d. N(0,1), \(i=1,\dots ,n\) . This corresponds to a standard setup for a two-sample t-test under a population model.

M2: Linear trend :  \({u}_{i}=\frac{5i}{n+1}\) and \({\varepsilon }_{i}\sim\) i.i.d. N(0,1), \(i=1,\dots ,n\) . In this model, the outcomes are affected by a linear trend over time [ 67 ].

M3: Cauchy errors :  \({u}_{i}\equiv 0\) and \({\varepsilon }_{i}\sim\) i.i.d. Cauchy(0,1), \(i=1,\dots ,n\) . In this setup, we have a misspecification of the distribution of measurement errors.

M4: Selection bias :  \({u}_{i+1}=-\nu \cdot sign\left\{D\left(i\right)\right\}\) , \(i=0,\dots ,n-1\) , with the convention that \(D\left(0\right)=0\) . Here, \(\nu >0\) is the “bias effect” (in our simulations we set \(\nu =0.5\) ). We also assume that \({\varepsilon }_{i}\sim\) i.i.d. N(0,1), \(i=1,\dots ,n\) . In this setup, at each allocation step the investigator attempts to intelligently guess the upcoming treatment assignment and selectively enroll a patient who, in their view, would be most suitable for the upcoming treatment. The investigator uses the “convergence” guessing strategy [ 28 ], that is, guess the treatment as one that has been less frequently assigned thus far, or make a random guess in case the current treatment numbers are equal. Assuming that the investigator favors the experimental treatment and is interested in demonstrating its superiority over the control, the biasing mechanism is as follows: at the \((i+1)\) st step, a “healthier” patient is enrolled, if \(D\left(i\right)<0\) ( \({u}_{i+1}=0.5\) ); a “sicker” patient is enrolled, if \(D\left(i\right)>0\) ( \({u}_{i+1}=-0.5\) ); or a “regular” patient is enrolled, if \(D\left(i\right)=0\) ( \({u}_{i+1}=0\) ).

We consider three statistical test procedures:

T1: Two-sample t-test : The test statistic is \(t=\frac{{\overline{Y} }_{E}-{\overline{Y} }_{C}}{\sqrt{{S}_{p}^{2}\left(\frac{1}{{N}_{E}\left(n\right)}+\frac{1}{{N}_{C}\left(n\right)}\right)}}\) , where \({\overline{Y} }_{E}=\frac{1}{{N}_{E}\left(n\right)}{\sum }_{i=1}^{n}{{\delta }_{i}Y}_{i}\) and \({\overline{Y} }_{C}=\frac{1}{{N}_{C}\left(n\right)}{\sum }_{i=1}^{n}{(1-\delta }_{i}){Y}_{i}\) are the treatment sample means,  \({N}_{E}\left(n\right)={\sum }_{i=1}^{n}{\delta }_{i}\) and \({N}_{C}\left(n\right)=n-{N}_{E}\left(n\right)\) are the observed group sample sizes, and \({S}_{p}^{2}\) is a pooled estimate of variance, where \({S}_{p}^{2}=\frac{1}{n-2}\left({\sum }_{i=1}^{n}{\delta }_{i}{\left({Y}_{i}-{\overline{Y} }_{E}\right)}^{2}+{\sum }_{i=1}^{n}(1-{\delta }_{i}){\left({Y}_{i}-{\overline{Y} }_{C}\right)}^{2}\right)\) . Then \({H}_{0}:\Delta =0\) is rejected at level \(\alpha\) , if \(\left|t\right|>{t}_{1-\frac{\alpha }{2}, n-2}\) , the 100( \(1-\frac{\alpha }{2}\) )th percentile of the t-distribution with \(n-2\) degrees of freedom.

T2: Randomization-based test using mean difference : Let \({{\varvec{\updelta}}}_{obs}\) and \({{\varvec{y}}}_{obs}\) denote, respectively the observed sequence of treatment assignments and responses, obtained from the trial using randomization procedure \(\mathfrak{R}\) . We first compute the observed mean difference \({S}_{obs}=S\left({{\varvec{\updelta}}}_{obs},{{\varvec{y}}}_{obs}\right)={\overline{Y} }_{E}-{\overline{Y} }_{C}\) . Then we use Monte Carlo simulation to generate \(L\) randomization sequences of length \(n\) using procedure \(\mathfrak{R}\) , where \(L\) is some large number. For the \(\ell\mathrm{th}\)  generated sequence, \({{\varvec{\updelta}}}_{\ell}\) , compute \({S}_{\ell}=S({{\varvec{\updelta}}}_{\ell},{{\varvec{y}}}_{obs})\) , where \({\ell}=1,\dots ,L\) . The proportion of sequences for which \({S}_{\ell}\) is at least as extreme as \({S}_{obs}\) is computed as \(\widehat{P}=\frac{1}{L}{\sum }_{{\ell}=1}^{L}1\left\{\left|{S}_{\ell}\right|\ge \left|{S}_{obs}\right|\right\}\) . Statistical significance is declared, if \(\widehat{P}<\alpha\) .

T3: Randomization-based test based on ranks : This test procedure follows the same logic as T2, except that the test statistic is calculated based on ranks. Given the vector of observed responses \({{\varvec{y}}}_{obs}=({y}_{1},\dots ,{y}_{n})\) , let \({a}_{jn}\) denote the rank of \({y}_{j}\) among the elements of \({{\varvec{y}}}_{obs}\) . Let \({\overline a}_n\) denote the average of \({a}_{jn}\) ’s, and let \({\boldsymbol a}_n=\left(a_{1n}-{\overline a}_n,...,\alpha_{nn}-{\overline a}_n\right)\boldsymbol'\) . Then a linear rank test statistic has the form \({S}_{obs}={{\varvec{\updelta}}}_{obs}^{\boldsymbol{^{\prime}}}{{\varvec{a}}}_{n}={\sum }_{i=1}^{n}{\delta }_{i}({a}_{in}-{\overline{a} }_{n})\) .

We consider four scenarios of the true mean difference  \(\Delta ={\mu }_{E}-{\mu }_{C}\) , which correspond to the Null case ( \(\Delta =0\) ), and three choices of \(\Delta >0\) which correspond to Alternative 1 (power ~ 70%), Alternative 2 (power ~ 80%), and Alternative 3 (power ~ 90%). In all cases, \(n=50\) was used.

Figure  5 summarizes the results of a simulation study comparing 12 randomization designs, under 4 models for the outcome (M1, M2, M3, and M4), 4 scenarios for the mean treatment difference (Null, and Alternatives 1, 2, and 3), using 3 statistical tests (T1, T2, and T3). The operating characteristics of interest are the type I error rate under the Null scenario and the power under the Alternative scenarios. Each scenario was simulated 10,000 times, and each randomization-based test was computed using \(L=\mathrm{10,000}\) sequences.

figure 5

Simulated type I error rate and power of 12 restricted randomization procedures. Four models for the data generating mechanism of the primary outcome (M1: Normal random sampling; M2: Linear trend; M3: Errors Cauchy; and M4: Selection bias). Four scenarios for the treatment mean difference (Null; Alternatives 1, 2, and 3). Three statistical tests (T1: two-sample t-test; T2: randomization-based test using mean difference; T3: randomization-based test using ranks)

From Fig.  5 , under the normal random sampling model (M1), all considered randomization designs have similar performance: they maintain the type I error rate and have similar power, with all tests. In other words, when population model assumptions are satisfied, any combination of design and analysis should work well and yield reliable and consistent results.

Under the “linear trend” model (M2), the designs have differential performance. First of all, under the Null scenario, only Rand and CRD maintain the type I error rate at 5% with all three tests. For TBD, the t-test is anticonservative, with type I error rate ~ 20%, whereas for nine other procedures the t-test is conservative, with type I error rate in the range 0.1–2%. At the same time, for all 12 designs the two randomization-based tests maintain the nominal type I error rate at 5%. These results are consistent with some previous findings in the literature [ 67 , 68 ]. As regards power, it is reduced significantly compared to the normal random sampling scenario. The t-test seems to be most affected and the randomization-based test using ranks is most robust for a majority of the designs. Remarkably, for CRD the power is similar with all three tests. This signifies the usefulness of randomization-based inference in situations when outcome data are subject to a linear time trend, and the importance of applying randomization-based tests at least as supplemental analyses to likelihood-based test procedures.

Under the “Cauchy errors” model (M3), all designs perform similarly: the randomization-based tests maintain the type I error rate at 5%, whereas the t-test deflates the type I error to 2%. As regards power, all designs also have similar, consistently degraded performance: the t-test is least powerful, and the randomization-based test using ranks has highest power. Overall, under misspecification of the error distribution a randomization-based test using ranks is most appropriate; yet one should acknowledge that its power is still lower than expected.

Under the “selection bias” model (M4), the 12 designs have differential performance. The only procedure that maintained the type I error rate at 5% with all three tests was CRD. For eleven other procedures, inflations of the type I error were observed. In general, the more random the design, the less it was affected by selection bias. For instance, the type I error rate for TBD was ~ 6%; for Rand, BSD(3), and GBCD(1) it was ~ 7.5%; for GBCD(2) and ABCD(2) it was ~ 8–9%; for Efron’s BCD(2/3) it was ~ 12.5%; and the most affected design was PBD(2) for which the type I error rate was ~ 38–40%. These results are consistent with the theory of Blackwell and Hodges [ 28 ] which posits that TBD is least susceptible to selection bias within a class of restricted randomization designs that force exact balance. Finally, under M4, statistical power is inflated by several percentage points compared to the normal random sampling scenario without selection bias.

We performed additional simulations to assess the impact of the bias effect \(\nu\) under selection bias model. The same 12 randomization designs and three statistical tests were evaluated for a trial with \(n=50\) under the Null scenario ( \(\Delta =0\) ), for \(\nu\) in the range of 0 (no bias) to 1 (strong bias). Figure S1 in the Supplementary Materials shows that for all designs but CRD, the type I error rate is increasing in \(\nu\) , with all three tests. The magnitude of the type I error inflation is different across the restricted randomization designs; e.g. for TBD it is minimal, whereas for more restrictive designs it may be large, especially for \(\nu \ge 0.4\) . PBD(2) is particularly vulnerable: for \(\nu\) in the range 0.4–1, its type I error rate is in the range 27–90% (for the nominal \(\alpha =5\) %).

In summary, our Example 1 includes most of the key ingredients of the roadmap for assessment of competing randomization designs which was described in the “ Methods ” section. For the chosen experimental scenarios, we evaluated CRD and several restricted randomization procedures, some of which belonged to the same class but with different values of the parameter (e.g. GBCD with \(\gamma =1, 2, 5\) ). We assessed two measures of imbalance, two measures of lack of randomness (predictability), and a metric that quantifies balance/randomness tradeoff. Based on these criteria, we found that BSD(3) provides overall best performance. We also evaluated type I error and power of selected randomization procedures under several treatment response models. We have observed important links between balance, randomness, type I error rate and power. It is beneficial to consider all these criteria simultaneously as they may complement each other in characterizing statistical properties of randomization designs. In particular, we found that a design that lacks randomness, such as PBD with blocks of 2 or 4, may be vulnerable to selection bias and lead to inflations of the type I error. Therefore, these designs should be avoided, especially in open-label studies. As regards statistical power, since all designs in this example targeted 1:1 allocation ratio (which is optimal if the outcomes are normally distributed and have between-group constant variance), they had very similar power of statistical tests in most scenarios except for the one with chronological bias. In the latter case, randomization-based tests were more robust and more powerful than the standard two-sample t-test under the population model assumption.

Overall, while Example 1 is based on a hypothetical 1:1 RCT, its true purpose is to showcase the thinking process in the application of our general roadmap. The following three examples are considered in the context of real RCTs.

Example 2: How can we reduce predictability of a randomization procedure and lower the risk of selection bias?

Selection bias can arise if the investigator can intelligently guess at least part of the randomization sequence yet to be allocated and, on that basis, preferentially and strategically assigns study subjects to treatments. Although it is generally not possible to prove that a particular study has been infected with selection bias, there are examples of published RCTs that do show some evidence to have been affected by it. Suspect trials are, for example, those with strong observed baseline covariate imbalances that consistently favor the active treatment group [ 16 ]. In what follows we describe an example of an RCT where the stratified block randomization procedure used was vulnerable to potential selection biases, and discuss potential alternatives that may reduce this vulnerability.

Etanercept was studied in patients aged 4 to 17 years with polyarticular juvenile rheumatoid arthritis [ 85 ]. The trial consisted of two parts. During the first, open-label part of the trial, patients received etanercept twice weekly for up to three months. Responders from this initial part of the trial were then randomized, at a 1:1 ratio, in the second, double-blind, placebo-controlled part of the trial to receive etanercept or placebo for four months or until a flare of the disease occurred. The primary efficacy outcome, the proportion of patients with disease flare, was evaluated in the double-blind part. Among the 51 randomized patients, 21 of the 26 placebo patients (81%) withdrew because of disease flare, compared with 7 of the 25 etanercept patients (28%), yielding a p- value of 0.003.

Regulatory review by the Food and Drug Administrative (FDA) identified vulnerability to selection biases in the study design of the double-blind part and potential issues in study conduct. These findings were succinctly summarized in [ 16 ] (pp.51–52).

Specifically, randomization was stratified by study center and number of active joints (≤ 2 vs. > 2, referred to as “few” or “many” in what follows), with blocked randomization within each stratum using a block size of two. Furthermore, randomization codes in corresponding “few” and “many” blocks within each study center were mirror images of each other. For example, if the first block within the “few” active joints stratum of a given center is “placebo followed by etanercept”, then the first block within the “many” stratum of the same center would be “etanercept followed by placebo”. While this appears to be an attempt to improve treatment balance in this small trial, unblinding of one treatment assignment may lead to deterministic predictability of three upcoming assignments. While the double-blind nature of the trial alleviated this concern to some extent, it should be noted that all patients did receive etanercept previously in the initial open-label part of the trial. Chances of unblinding may not be ignorable if etanercept and placebo have immediately evident different effects or side effects. The randomized withdrawal design was appropriate in this context to improve statistical power in identifying efficacious treatments, but the specific randomization procedure used in the trial increased vulnerability to selection biases if blinding cannot be completely maintained.

FDA review also identified that four patients were randomized from the wrong “few” or “many” strata, in three of which (3/51 = 5.9%) it was foreseeable that the treatment received could have been reversed compared to what the patient would have received if randomized in the correct stratum. There were also some patients randomized out of order. Imbalance in baseline characteristics were observed in age (mean ages of 8.9 years in the etanercept arm vs. that of 12.2 years in the placebo arm) and corticosteroid use at baseline (50% vs. 24%).

While the authors [ 85 ] concluded that “The unequal randomization did not affect the study results”, and indeed it was unknown whether the imbalance was a chance occurrence or in part caused by selection biases, the trial could have used better alternative randomization procedures to reduce vulnerability to potential selection bias. To illustrate the latter point, let us compare predictability of two randomization procedures – permuted block design (PBD) and big stick design (BSD) for several values of the maximum tolerated imbalance (MTI). We use BSD here for the illustration purpose because it was found to provide a very good balance/randomness tradeoff based on our simulations in Example 1 . In essence, BSD provides the same level of imbalance control as PBD but with stronger encryption.

Table 3 reports two metrics for PBD and BSD: proportion of deterministic assignments within a randomization sequence, and excess correct guess probability. The latter metric is the absolute increase in proportion of correct guesses for a given procedure over CRD that has 50% probability of correct guesses under the “optimal guessing strategy”. Footnote 1 Note that for MTI = 1, BSD is equivalent to PBD with blocks of two. However, by increasing MTI, one can substantially decrease predictability. For instance, going from MTI = 1 in the BSD to an MTI of 2 or 3 (two bottom rows), the proportion of deterministic assignments decreases from 50% to 25% and 16.7%, respectively, and excess correct guess probability decreases from 25% to 12.5% and 8.3%, which is a substantial reduction in risk of selection bias. In addition to simplicity and lower predictability for the same level of MTI control, BSD has another important advantage: investigators are not accustomed to it (as they are to the PBD), and therefore it has potential for complete elimination of prediction through thwarting enough early prediction attempts.

Our observations here are also generalizable to other MTI randomization methods, such as the maximal procedure [ 35 ], Chen’s designs [ 38 , 39 ], block urn design [ 40 ], just to name a few. MTI randomization procedures can be also used as building elements for more complex stratified randomization schemes [ 86 ].

Example 3: How can we mitigate risk of chronological bias?

Chronological bias may occur if a trial recruitment period is long, and there is a drift in some covariate over time that is subsequently not accounted for in the analysis [ 29 ]. To mitigate risk of chronological bias, treatment assignments should be balanced over time. In this regard, the ICH E9 guideline has the following statement [ 31 ]:

“...Although unrestricted randomisation is an acceptable approach, some advantages can generally be gained by randomising subjects in blocks. This helps to increase the comparability of the treatment groups, particularly when subject characteristics may change over time, as a result, for example, of changes in recruitment policy. It also provides a better guarantee that the treatment groups will be of nearly equal size...”

While randomization in blocks of two ensures best balance, it is highly predictable. In practice, a sensible tradeoff between balance and randomness is desirable. In the following example, we illustrate the issue of chronological bias in the context of a real RCT.

Altman and Royston [ 87 ] gave several examples of clinical studies with hidden time trends. For instance, an RCT to compare azathioprine versus placebo in patients with primary biliary cirrhosis (PBC) with respect to overall survival was an international, double-blind, randomized trial including 248 patients of whom 127 received azathioprine and 121 placebo [ 88 ]. The study had a recruitment period of 7 years. A major prognostic factor for survival was the serum bilirubin level on entry to the trial. Altman and Royston [ 87 ] provided a cusum plot of log bilirubin which showed a strong decreasing trend over time – patients who entered the trial later had, on average, lower bilirubin levels, and therefore better prognosis. Despite that the trial was randomized, there was some evidence of baseline imbalance with respect to serum bilirubin between azathioprine and placebo groups. The analysis using Cox regression adjusted for serum bilirubin showed that the treatment effect of azathioprine was statistically significant ( p  = 0.01), with azathioprine reducing the risk of dying to 59% of that observed during the placebo treatment.

The azathioprine trial [ 88 ] provides a very good example for illustrating importance of both the choice of a randomization design and a subsequent statistical analysis. We evaluated several randomization designs and analysis strategies under the given time trend through simulation. Since we did not have access to the patient level data from the azathioprine trial, we simulated a dataset of serum bilirubin values from 248 patients that resembled that in the original paper (Fig.  1 in [ 87 ]); see Fig.  6 below.

figure 6

reproduced from Fig.  1 of Altman and Royston [ 87 ]

Cusum plot of baseline log serum bilirubin level of 248 subjects from the azathioprine trial,

For the survival outcomes, we use the following data generating mechanism [ 71 , 89 ]: let \({h}_{i}(t,{\delta }_{i})\) denote the hazard function of the \(i\mathrm{th}\)  patient at time \(t\) such that

where \({h}_{c}(t)\) is an unspecified baseline hazard, \(\log HR\) is the true value of the log-transformed hazard ratio, and \({u}_{i}\) is the log serum bilirubin of the \(i\mathrm{th}\)  patient at study entry.

Our main goal is to evaluate the impact of the time trend in bilirubin on the type I error rate and power. We consider seven randomization designs: CRD, Rand, TBD, PBD(2), PBD(4), BSD(3), and GBCD(2). The latter two designs were found to be the top two performing procedures based on our simulation results in Example 1 (cf. Table 2 ). PBD(4) is the most commonly used procedure in clinical trial practice. Rand and TBD are two designs that ensure exact balance in the final treatment numbers. CRD is the most random design, and PBD(2) is the most balanced design.

To evaluate both type I error and power, we consider two values for the true treatment effect: \(HR=1\) (Null) and \(HR=0.6\) (Alternative). For data analysis, we use the Cox regression model, either with or without adjustment for serum bilirubin. Furthermore, we assess two approaches to statistical inference: population model-based and randomization-based. For the sake of simplicity, we let \({h}_{c}\left(t\right)\equiv 1\) (exponential distribution) and assume no censoring when simulating the data.

For each combination of the design, experimental scenario, and data analysis strategy, a trial with 248 patients was simulated 10,000 times. Each randomization-based test was computed using \(L=\mathrm{1,000}\) sequences. In each simulation, we used the same time trend in serum bilirubin as described. Through simulation, we estimated the probability of a statistically significant baseline imbalance in serum bilirubin between azathioprine and placebo groups, type I error rate, and power.

First, we observed that the designs differ with respect to their potential to achieve baseline covariate balance under the time trend. For instance, probability of a statistically significant group difference on serum bilirubin (two-sided P  < 0.05) is ~ 24% for TBD, ~ 10% for CRD, ~ 2% for GBCD(2), ~ 0.9% for Rand, and ~ 0% for BSD(3), PBD(4), and PBD(2).

Second, a failure to adjust for serum bilirubin in the analysis can negatively impact statistical inference. Table 4 shows the type I error and power of statistical analyses unadjusted and adjusted for serum bilirubin, using population model-based and randomization-based approaches.

If we look at the type I error for the population model-based, unadjusted analysis, we can see that only CRD and Rand are valid (maintain the type I error rate at 5%), whereas TBD is anticonservative (~ 15% type I error) and PBD(2), PBD(4), BSD(3), and GBCD(2) are conservative (~ 1–2% type I error). These findings are consistent with the ones for the two-sample t-test described earlier in the current paper, and they agree well with other findings in the literature [ 67 ]. By contrast, population model-based covariate-adjusted analysis is valid for all seven randomization designs. Looking at the type I error for the randomization-based analyses, all designs yield consistent valid results (~ 5% type I error), with or without adjustment for serum bilirubin.

As regards statistical power, unadjusted analyses are substantially less powerful then the corresponding covariate-adjusted analysis, for all designs with either population model-based or randomization-based approaches. For the population model-based, unadjusted analysis, the designs have ~ 59–65% power, whereas than the corresponding covariate-adjusted analyses have ~ 97% power. The most striking results are observed with the randomization-based approach: the power of unadjusted analysis is quite different across seven designs: it is ~ 37% for TBD, ~ 60–61% for CRD and Rand, ~ 80–87% for BCD(3), GBCD(2), and PBD(4), and it is ~ 90% for PBD(2). Thus, PBD(2) is the most powerful approach if a time trend is present, statistical analysis strategy is randomization-based, and no adjustment for time trend is made. Furthermore, randomization-based covariate-adjusted analyses have ~ 97% power for all seven designs. Remarkably, the power of covariate-adjusted analysis is identical for population model-based and randomization-based approaches.

Overall, this example highlights the importance of covariate-adjusted analysis, which should be straightforward if a covariate affected by a time trend is known (e.g. serum bilirubin in our example). If a covariate is unknown or hidden, then unadjusted analysis following a conventional test may have reduced power and distorted type I error (although the designs such as CRD and Rand do ensure valid statistical inference). Alternatively, randomization-based tests can be applied. The resulting analysis will be valid but may be potentially less powerful. The degree of loss in power following randomization-based test depends on the randomization design: designs that force greater treatment balance over time will be more powerful. In fact, PBD(2) is shown to be most powerful under such circumstances; however, as we have seen in Example 1 and Example 2, a major deficiency of PBD(2) is its vulnerability to selection bias. From Table 4 , and taking into account the earlier findings in this paper, BSD(3) seems to provide a very good risk mitigation strategy against unknown time trends.

Example 4: How do we design an RCT with a very small sample size?

In our last example, we illustrate the importance of the careful choice of randomization design and subsequent statistical analysis in a nonstandard RCT with small sample size. Due to confidentiality and because this study is still in conduct, we do not disclose all details here except for that the study is an ongoing phase II RCT in a very rare and devastating autoimmune disease in children.

The study includes three periods: an open-label single-arm active treatment for 28 weeks to identify treatment responders (Period 1), a 24-week randomized treatment withdrawal period to primarily assess the efficacy of the active treatment vs. placebo (Period 2), and a 3-year long-term safety, open-label active treatment (Period 3). Because of a challenging indication and the rarity of the disease, the study plans to enroll up to 10 male or female pediatric patients in order to randomize 8 patients (4 per treatment arm) in Period 2 of the study. The primary endpoint for assessing the efficacy of active treatment versus placebo is the proportion of patients with disease flare during the 24-week randomized withdrawal phase. The two groups will be compared using Fisher’s exact test. In case of a successful outcome, evidence of clinical efficacy from this study will be also used as part of a package to support the claim for drug effectiveness.

Very small sample sizes are not uncommon in clinical trials of rare diseases [ 90 , 91 ]. Naturally, there are several methodological challenges for this type of study. A major challenge is generalizability of the results from the RCT to a population. In this particular indication, no approved treatment exists, and there is uncertainty on disease epidemiology and the exact number of patients with the disease who would benefit from treatment (patient horizon). Another challenge is the choice of the randomization procedure and the primary statistical analysis. In this study, one can enumerate upfront all 25 possible outcomes: {0, 1, 2, 3, 4} responders on active treatment, and {0, 1, 2, 3, 4} responders on placebo, and create a chart quantifying the level of evidence ( p- value) for each experimental outcome, and the corresponding decision. Before the trial starts, a discussion with the regulatory agency is warranted to agree upon on what level of evidence must be achieved in order to declare the study a “success”.

Let us perform a hypothetical planning for the given study. Suppose we go with a standard population-based approach, for which we test the hypothesis \({H}_{0}:{p}_{E}={p}_{C}\) vs. \({H}_{0}:{p}_{E}>{p}_{C}\) (where \({p}_{E}\) and \({p}_{C}\) stand for the true success rates for the experimental and control group, respectively) using Fisher’s exact test. Table 5 provides 1-sided p- values of all possible experimental outcomes. One could argue that a p- value < 0.1 may be viewed as a convincing level of evidence for this study. There are only 3 possibilities that can lead to this outcome: 3/4 vs. 0/4 successes ( p  = 0.0714); 4/4 vs. 0/4 successes ( p  = 0.0143); and 4/4 vs. 1/4 successes ( p  = 0.0714). For all other outcomes, p  ≥ 0.2143, and thus the study would be regarded as a “failure”.

Now let us consider a randomization-based inference approach. For illustration purposes, we consider four restricted randomization procedures—Rand, TBD, PBD(4), and PBD(2)—that exactly achieve 4:4 allocation. These procedures are legitimate choices because all of them provide exact sample sizes (4 per treatment group), which is essential in this trial. The reference set of either Rand or TBD includes \(70=\left(\begin{array}{c}8\\ 4\end{array}\right)\) unique sequences though with different probabilities of observing each sequence. For Rand, these sequences are equiprobable, whereas for TBD, some sequences are more likely than others. For PBD( \(2b\) ), the size of the reference set is \({\left\{\left(\begin{array}{c}2b\\ b\end{array}\right)\right\}}^{B}\) , where \(B=n/2b\) is the number of blocks of length \(2b\) for a trial of size \(n\) (in our example \(n=8\) ). This results in in a reference set of \({2}^{4}=16\) unique sequences with equal probability of 1/16 for PBD(2), and of \({6}^{2}=36\) unique sequences with equal probability of 1/36 for PBD(4).

In practice, the study statistician picks a treatment sequence at random from the reference set according to the chosen design. The details (randomization seed, chosen sequence, etc.) are carefully documented and kept confidential. For the chosen sequence and the observed outcome data, a randomization-based p- value is the sum of probabilities of all sequences in the reference set that yield the result at least as large in favor of the experimental treatment as the one observed. This p- value will depend on the randomization design, the observed randomization sequence and the observed outcomes, and it may also be different from the population-based analysis p- value.

To illustrate this, suppose the chosen randomization sequence is CEECECCE (C stands for control and E stands for experimental), and the observed responses are FSSFFFFS (F stands for failure and S stands for success). Thus, we have 3/4 successes on experimental and 0/4 successes on control. Then, the randomization-based p- value is 0.0714 for Rand; 0.0469 for TBD, 0.1250 for PBD(2); 0.0833 for PBD(4); and it is 0.0714 for the population-based analysis. The coincidence of the randomization-based p- value for Rand and the p- value of the population-based analysis is not surprising. Fisher's exact test is a permutation test and in the case of Rand as randomization procedure, the p- value of a permutation test and of a randomization test are always equal. However, despite the numerical equality, we should be mindful of different assumptions (population/randomization model).

Likewise, randomization-based p- values can be derived for other combinations of observed randomization sequences and responses. All these details (the chosen randomization design, the analysis strategy, and corresponding decisions) would have to be fully specified upfront (before the trial starts) and agreed upon by both the sponsor and the regulator. This would remove any ambiguity when the trial data become available.

As the example shows, the level of evidence in the randomization-based inference approach depends on the chosen randomization procedure and the resulting decisions may be different depending on the specific procedure. For instance, if the level of significance is set to 10% as a criterion for a “successful trial”, then with the observed data (3/4 vs. 0/4), there would be a significant test result for TBD, Rand, PBD(4), but not for PBD(2).

Summary and discussion

Randomization is the foundation of any RCT involving treatment comparison. Randomization is not a single technique, but a very broad class of statistical methodologies for design and analysis of clinical trials [ 10 ]. In this paper, we focused on the randomized controlled two-arm trial designed with equal allocation, which is the gold standard research design to generate clinical evidence in support of regulatory submissions. Even in this relatively simple case, there are various restricted randomization procedures with different probabilistic structures and different statistical properties, and the choice of a randomization design for any RCT must be made judiciously.

For the 1:1 RCT, there is a dual goal of balancing treatment assignments while maintaining allocation randomness. Final balance in treatment totals frequently maximizes statistical power for treatment comparison. It is also important to maintain balance at intermediate steps during the trial, especially in long-term studies, to mitigate potential for chronological bias. At the same time, a procedure should have high degree of randomness so that treatment assignments within the sequence are not easily predictable; otherwise, the procedure may be vulnerable to selection bias, especially in open-label studies. While balance and randomness are competing criteria, it is possible to find restricted randomization procedures that provide a sensible tradeoff between these criteria, e.g. the MTI procedures, of which the big stick design (BSD) [ 37 ] with a suitably chosen MTI limit, such as BSD(3), has very appealing statistical properties. In practice, the choice of a randomization procedure should be made after a systematic evaluation of different candidate procedures under different experimental scenarios for the primary outcome, including cases when model assumptions are violated.

In our considered examples we showed that the choice of randomization design, data analytic technique (e.g. parametric or nonparametric model, with or without covariate adjustment), and the decision on whether to include randomization in the analysis (e.g. randomization-based or population model-based analysis) are all very important considerations. Furthermore, these examples highlight the importance of using randomization designs that provide strong encryption of the randomization sequence, importance of covariate adjustment in the analysis, and the value of statistical thinking in nonstandard RCTs with very small sample sizes and small patient horizon. Finally, in this paper we have discussed randomization-based tests as robust and valid alternatives to likelihood-based tests. Randomization-based inference is a useful approach in clinical trials and should be considered by clinical researchers more frequently [ 14 ].

Further topics on randomization

Given the breadth of the subject of randomization, many important topics have been omitted from the current paper. Here we outline just a few of them.

In this paper, we have focused on the 1:1 RCT. However, clinical trials may involve more than two treatment arms. Extensions of equal randomization to the case of multiple treatment arms is relatively straightforward for many restricted randomization procedures [ 10 ]. Some trials with two or more treatment arms use unequal allocation (e.g. 2:1). Randomization procedures with unequal allocation ratios require careful consideration. For instance, an important and desirable feature is the allocation ratio preserving property (ARP). A randomization procedure targeting unequal allocation is said to be ARP, if at each allocation step the unconditional probability of a particular treatment assignment is the same as the target allocation proportion for this treatment [ 92 ]. Non-ARP procedures may have fluctuations in the unconditional randomization probability from allocation to allocation, which may be problematic [ 93 ]. Fortunately, some randomization procedures naturally possess the ARP property, and there are approaches to correct for a non-ARP deficiency – these should be considered in the design of RCTs with unequal allocation ratios [ 92 , 93 , 94 ].

In many RCTs, investigators may wish to prospectively balance treatment assignments with respect to important prognostic covariates. For a small number of categorical covariates one can use stratified randomization by applying separate MTI randomization procedures within strata [ 86 ]. However, a potential advantage of stratified randomization decreases as the number of stratification variables increases [ 95 ]. In trials where balance over a large number of covariates is sought and the sample size is small or moderate, one can consider covariate-adaptive randomization procedures that achieve balance within covariate margins, such as the minimization procedure [ 96 , 97 ], optimal model-based procedures [ 46 ], or some other covariate-adaptive randomization technique [ 98 ]. To achieve valid and powerful results, covariate-adaptive randomization design must be followed by covariate-adjusted analysis [ 99 ]. Special considerations are required for covariate-adaptive randomization designs with more than two treatment arms and/or unequal allocation ratios [ 100 ].

In some clinical research settings, such as trials for rare and/or life threatening diseases, there is a strong ethical imperative to increase the chance of a trial participant to receive an empirically better treatment. Response-adaptive randomization (RAR) has been increasingly considered in practice, especially in oncology [ 101 , 102 ]. Very extensive methodological research on RAR has been done [ 103 , 104 ]. RAR is increasingly viewed as an important ingredient of complex clinical trials such as umbrella and platform trial designs [ 105 , 106 ]. While RAR, when properly applied, has its merit, the topic has generated a lot of controversial discussions over the years [ 107 , 108 , 109 , 110 , 111 ]. Amid the ongoing COVID-19 pandemic, RCTs evaluating various experimental treatments for critically ill COVID-19 patients do incorporate RAR in their design; see, for example, the I-SPY COVID-19 trial ( https://clinicaltrials.gov/ct2/show/NCT04488081 ).

Randomization can also be applied more broadly than in conventional RCT settings where randomization units are individual subjects. For instance, in a cluster randomized trial, not individuals but groups of individuals (clusters) are randomized among one or more interventions or the control [ 112 ]. Observations from individuals within a given cluster cannot be regarded as independent, and special statistical techniques are required to design and analyze cluster-randomized experiments. In some clinical trial designs, randomization is applied within subjects. For instance, the micro-randomized trial (MRT) is a novel design for development of mobile treatment interventions in which randomization is applied to select different treatment options for individual participants over time to optimally support individuals’ health behaviors [ 113 ].

Finally, beyond the scope of the present paper are the regulatory perspectives on randomization and practical implementation aspects, including statistical software and information systems to generate randomization schedules in real time. We hope to cover these topics in subsequent papers.

Availability of data and materials

All results reported in this paper are based either on theoretical considerations or simulation evidence. The computer code (using R and Julia programming languages) is fully documented and is available upon reasonable request.

Guess the next allocation as the treatment with fewest allocations in the sequence thus far, or make a random guess if the treatment numbers are equal.

Byar DP, Simon RM, Friedewald WT, Schlesselman JJ, DeMets DL, Ellenberg JH, Gail MH, Ware JH. Randomized clinical trials—perspectives on some recent ideas. N Engl J Med. 1976;295:74–80.

Article   CAS   PubMed   Google Scholar  

Collins R, Bowman L, Landray M, Peto R. The magic of randomization versus the myth of real-world evidence. N Engl J Med. 2020;382:674–8.

Article   PubMed   Google Scholar  

ICH Harmonised tripartite guideline. General considerations for clinical trials E8. 1997.

Hernán MA, Robins JM. Using big data to emulate a target trial when a randomized trial is not available. Am J Epidemiol. 2016;183(8):758–64.

Article   PubMed   PubMed Central   Google Scholar  

Byar DP. Why data bases should not replace randomized clinical trials. Biometrics. 1980;36:337–42.

Mehra MR, Desai SS, Kuy SR, Henry TD, Patel AN. Cardiovascular disease, drug therapy, and mortality in Covid-19. N Engl J Med. 2020;382:e102. https://www.nejm.org/doi/10.1056/NEJMoa2007621 .

Mehra MR, Desai SS, Ruschitzka F, Patel AN. Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis. Lancet. 2020. https://www.sciencedirect.com/science/article/pii/S0140673620311806?via%3Dihub .

Mehra MR, Desai SS, Kuy SR, Henry TD, Patel AN. Retraction: Cardiovascular disease, drug therapy, and mortality in Covid-19. N Engl J Med. 2020. https://doi.org/10.1056/NEJMoa2007621 . https://www.nejm.org/doi/10.1056/NEJMc2021225 .

Medical Research Council. Streptomycin treatment of pulmonary tuberculosis. BMJ. 1948;2:769–82.

Article   Google Scholar  

Rosenberger WF, Lachin J. Randomization in clinical trials: theory and practice. 2nd ed. New York: Wiley; 2015.

Google Scholar  

Fisher RA. The design of experiments. Edinburgh: Oliver and Boyd; 1935.

Hill AB. The clinical trial. Br Med Bull. 1951;7(4):278–82.

Hill AB. Memories of the British streptomycin trial in tuberculosis: the first randomized clinical trial. Control Clin Trials. 1990;11:77–9.

Rosenberger WF, Uschner D, Wang Y. Randomization: The forgotten component of the randomized clinical trial. Stat Med. 2019;38(1):1–30 (with discussion).

Berger VW. Trials: the worst possible design (except for all the rest). Int J Person Centered Med. 2011;1(3):630–1.

Berger VW. Selection bias and covariate imbalances in randomized clinical trials. New York: Wiley; 2005.

Book   Google Scholar  

Berger VW. The alleged benefits of unrestricted randomization. In: Berger VW, editor. Randomization, masking, and allocation concealment. Boca Raton: CRC Press; 2018. p. 39–50.

Altman DG, Bland JM. Treatment allocation in controlled trials: why randomise? BMJ. 1999;318:1209.

Article   CAS   PubMed   PubMed Central   Google Scholar  

Senn S. Testing for baseline balance in clinical trials. Stat Med. 1994;13:1715–26.

Senn S. Seven myths of randomisation in clinical trials. Stat Med. 2013;32:1439–50.

Rosenberger WF, Sverdlov O. Handling covariates in the design of clinical trials. Stat Sci. 2008;23:404–19.

Proschan M, Dodd L. Re-randomization tests in clinical trials. Stat Med. 2019;38:2292–302.

Spiegelhalter DJ, Freedman LS, Parmar MK. Bayesian approaches to randomized trials. J R Stat Soc A Stat Soc. 1994;157(3):357–87.

Berry SM, Carlin BP, Lee JJ, Muller P. Bayesian adaptive methods for clinical trials. Boca Raton: CRC Press; 2010.

Lachin J. Properties of simple randomization in clinical trials. Control Clin Trials. 1988;9:312–26.

Pocock SJ. Allocation of patients to treatment in clinical trials. Biometrics. 1979;35(1):183–97.

Simon R. Restricted randomization designs in clinical trials. Biometrics. 1979;35(2):503–12.

Blackwell D, Hodges JL. Design for the control of selection bias. Ann Math Stat. 1957;28(2):449–60.

Matts JP, McHugh R. Analysis of accrual randomized clinical trials with balanced groups in strata. J Chronic Dis. 1978;31:725–40.

Matts JP, Lachin JM. Properties of permuted-block randomization in clinical trials. Control Clin Trials. 1988;9:327–44.

ICH Harmonised Tripartite Guideline. Statistical principles for clinical trials E9. 1998.

Shao H, Rosenberger WF. Properties of the random block design for clinical trials. In: Kunert J, Müller CH, Atkinson AC, eds. mODa 11 – Advances in model-oriented design and analysis. Springer International Publishing Switzerland; 2016. 225–233.

Zhao W. Evolution of restricted randomization with maximum tolerated imbalance. In: Berger VW, editor. Randomization, masking, and allocation concealment. Boca Raton: CRC Press; 2018. p. 61–81.

Bailey RA, Nelson PR. Hadamard randomization: a valid restriction of random permuted blocks. Biom J. 2003;45(5):554–60.

Berger VW, Ivanova A, Knoll MD. Minimizing predictability while retaining balance through the use of less restrictive randomization procedures. Stat Med. 2003;22:3017–28.

Zhao W, Berger VW, Yu Z. The asymptotic maximal procedure for subject randomization in clinical trials. Stat Methods Med Res. 2018;27(7):2142–53.

Soares JF, Wu CFJ. Some restricted randomization rules in sequential designs. Commun Stat Theory Methods. 1983;12(17):2017–34.

Chen YP. Biased coin design with imbalance tolerance. Commun Stat Stochastic Models. 1999;15(5):953–75.

Chen YP. Which design is better? Ehrenfest urn versus biased coin. Adv Appl Probab. 2000;32:738–49.

Zhao W, Weng Y. Block urn design—A new randomization algorithm for sequential trials with two or more treatments and balanced or unbalanced allocation. Contemp Clin Trials. 2011;32:953–61.

van der Pas SL. Merged block randomisation: A novel randomisation procedure for small clinical trials. Clin Trials. 2019;16(3):246–52.

Zhao W. Letter to the Editor – Selection bias, allocation concealment and randomization design in clinical trials. Contemp Clin Trials. 2013;36:263–5.

Berger VW, Bejleri K, Agnor R. Comparing MTI randomization procedures to blocked randomization. Stat Med. 2016;35:685–94.

Efron B. Forcing a sequential experiment to be balanced. Biometrika. 1971;58(3):403–17.

Wei LJ. The adaptive biased coin design for sequential experiments. Ann Stat. 1978;6(1):92–100.

Atkinson AC. Optimum biased coin designs for sequential clinical trials with prognostic factors. Biometrika. 1982;69(1):61–7.

Smith RL. Sequential treatment allocation using biased coin designs. J Roy Stat Soc B. 1984;46(3):519–43.

Ball FG, Smith AFM, Verdinelli I. Biased coin designs with a Bayesian bias. J Stat Planning Infer. 1993;34(3):403–21.

BaldiAntognini A, Giovagnoli A. A new ‘biased coin design’ for the sequential allocation of two treatments. Appl Stat. 2004;53(4):651–64.

Atkinson AC. Selecting a biased-coin design. Stat Sci. 2014;29(1):144–63.

Rosenberger WF. Randomized urn models and sequential design. Sequential Anal. 2002;21(1&2):1–41 (with discussion).

Wei LJ. A class of designs for sequential clinical trials. J Am Stat Assoc. 1977;72(358):382–6.

Wei LJ, Lachin JM. Properties of the urn randomization in clinical trials. Control Clin Trials. 1988;9:345–64.

Schouten HJA. Adaptive biased urn randomization in small strata when blinding is impossible. Biometrics. 1995;51(4):1529–35.

Ivanova A. A play-the-winner-type urn design with reduced variability. Metrika. 2003;58:1–13.

Kundt G. A new proposal for setting parameter values in restricted randomization methods. Methods Inf Med. 2007;46(4):440–9.

Kalish LA, Begg CB. Treatment allocation methods in clinical trials: a review. Stat Med. 1985;4:129–44.

Zhao W, Weng Y, Wu Q, Palesch Y. Quantitative comparison of randomization designs in sequential clinical trials based on treatment balance and allocation randomness. Pharm Stat. 2012;11:39–48.

Flournoy N, Haines LM, Rosenberger WF. A graphical comparison of response-adaptive randomization procedures. Statistics in Biopharmaceutical Research. 2013;5(2):126–41.

Hilgers RD, Uschner D, Rosenberger WF, Heussen N. ERDO – a framework to select an appropriate randomization procedure for clinical trials. BMC Med Res Methodol. 2017;17:159.

Burman CF. On sequential treatment allocations in clinical trials. PhD Thesis Dept. Mathematics, Göteborg. 1996.

Azriel D, Mandel M, Rinott Y. Optimal allocation to maximize the power of two-sample tests for binary response. Biometrika. 2012;99(1):101–13.

Begg CB, Kalish LA. Treatment allocation for nonlinear models in clinical trials: the logistic model. Biometrics. 1984;40:409–20.

Kalish LA, Harrington DP. Efficiency of balanced treatment allocation for survival analysis. Biometrics. 1988;44(3):815–21.

Sverdlov O, Rosenberger WF. On recent advances in optimal allocation designs for clinical trials. J Stat Theory Practice. 2013;7(4):753–73.

Sverdlov O, Ryeznik Y, Wong WK. On optimal designs for clinical trials: an updated review. J Stat Theory Pract. 2020;14:10.

Rosenkranz GK. The impact of randomization on the analysis of clinical trials. Stat Med. 2011;30:3475–87.

Galbete A, Rosenberger WF. On the use of randomization tests following adaptive designs. J Biopharm Stat. 2016;26(3):466–74.

Proschan M. Influence of selection bias on type I error rate under random permuted block design. Stat Sin. 1994;4:219–31.

Kennes LN, Cramer E, Hilgers RD, Heussen N. The impact of selection bias on test decisions in randomized clinical trials. Stat Med. 2011;30:2573–81.

PubMed   Google Scholar  

Rückbeil MV, Hilgers RD, Heussen N. Assessing the impact of selection bias on test decisions in trials with a time-to-event outcome. Stat Med. 2017;36:2656–68.

Berger VW, Exner DV. Detecting selection bias in randomized clinical trials. Control Clin Trials. 1999;25:515–24.

Ivanova A, Barrier RC, Berger VW. Adjusting for observable selection bias in block randomized trials. Stat Med. 2005;24:1537–46.

Kennes LN, Rosenberger WF, Hilgers RD. Inference for blocked randomization under a selection bias model. Biometrics. 2015;71:979–84.

Hilgers RD, Manolov M, Heussen N, Rosenberger WF. Design and analysis of stratified clinical trials in the presence of bias. Stat Methods Med Res. 2020;29(6):1715–27.

Hamilton SA. Dynamically allocating treatment when the cost of goods is high and drug supply is limited. Control Clin Trials. 2000;21(1):44–53.

Zhao W. Letter to the Editor – A better alternative to the inferior permuted block design is not necessarily complex. Stat Med. 2016;35:1736–8.

Berger VW. Pros and cons of permutation tests in clinical trials. Stat Med. 2000;19:1319–28.

Simon R, Simon NR. Using randomization tests to preserve type I error with response adaptive and covariate adaptive randomization. Statist Probab Lett. 2011;81:767–72.

Tamm M, Cramer E, Kennes LN, Hilgers RD. Influence of selection bias on the test decision. Methods Inf Med. 2012;51:138–43.

Tamm M, Hilgers RD. Chronological bias in randomized clinical trials arising from different types of unobserved time trends. Methods Inf Med. 2014;53:501–10.

BaldiAntognini A, Rosenberger WF, Wang Y, Zagoraiou M. Exact optimum coin bias in Efron’s randomization procedure. Stat Med. 2015;34:3760–8.

Chow SC, Shao J, Wang H, Lokhnygina. Sample size calculations in clinical research. 3rd ed. Boca Raton: CRC Press; 2018.

Heritier S, Gebski V, Pillai A. Dynamic balancing randomization in controlled clinical trials. Stat Med. 2005;24:3729–41.

Lovell DJ, Giannini EH, Reiff A, et al. Etanercept in children with polyarticular juvenile rheumatoid arthritis. N Engl J Med. 2000;342(11):763–9.

Zhao W. A better alternative to stratified permuted block design for subject randomization in clinical trials. Stat Med. 2014;33:5239–48.

Altman DG, Royston JP. The hidden effect of time. Stat Med. 1988;7:629–37.

Christensen E, Neuberger J, Crowe J, et al. Beneficial effect of azathioprine and prediction of prognosis in primary biliary cirrhosis. Gastroenterology. 1985;89:1084–91.

Rückbeil MV, Hilgers RD, Heussen N. Randomization in survival trials: An evaluation method that takes into account selection and chronological bias. PLoS ONE. 2019;14(6):e0217964.

Article   CAS   Google Scholar  

Hilgers RD, König F, Molenberghs G, Senn S. Design and analysis of clinical trials for small rare disease populations. J Rare Dis Res Treatment. 2016;1(3):53–60.

Miller F, Zohar S, Stallard N, Madan J, Posch M, Hee SW, Pearce M, Vågerö M, Day S. Approaches to sample size calculation for clinical trials in rare diseases. Pharm Stat. 2017;17:214–30.

Kuznetsova OM, Tymofyeyev Y. Preserving the allocation ratio at every allocation with biased coin randomization and minimization in studies with unequal allocation. Stat Med. 2012;31(8):701–23.

Kuznetsova OM, Tymofyeyev Y. Brick tunnel and wide brick tunnel randomization for studies with unequal allocation. In: Sverdlov O, editor. Modern adaptive randomized clinical trials: statistical and practical aspects. Boca Raton: CRC Press; 2015. p. 83–114.

Kuznetsova OM, Tymofyeyev Y. Expansion of the modified Zelen’s approach randomization and dynamic randomization with partial block supplies at the centers to unequal allocation. Contemp Clin Trials. 2011;32:962–72.

EMA. Guideline on adjustment for baseline covariates in clinical trials. 2015.

Taves DR. Minimization: A new method of assigning patients to treatment and control groups. Clin Pharmacol Ther. 1974;15(5):443–53.

Pocock SJ, Simon R. Sequential treatment assignment with balancing for prognostic factors in the controlled clinical trial. Biometrics. 1975;31(1):103–15.

Hu F, Hu Y, Ma Z, Rosenberger WF. Adaptive randomization for balancing over covariates. Wiley Interdiscipl Rev Computational Stat. 2014;6(4):288–303.

Senn S. Statistical issues in drug development. 2nd ed. Wiley-Interscience; 2007.

Kuznetsova OM, Tymofyeyev Y. Covariate-adaptive randomization with unequal allocation. In: Sverdlov O, editor. Modern adaptive randomized clinical trials: statistical and practical aspects. Boca Raton: CRC Press; 2015. p. 171–97.

Berry DA. Adaptive clinical trials: the promise and the caution. J Clin Oncol. 2011;29(6):606–9.

Trippa L, Lee EQ, Wen PY, Batchelor TT, Cloughesy T, Parmigiani G, Alexander BM. Bayesian adaptive randomized trial design for patients with recurrent glioblastoma. J Clin Oncol. 2012;30(26):3258–63.

Hu F, Rosenberger WF. The theory of response-adaptive randomization in clinical trials. New York: Wiley; 2006.

Atkinson AC, Biswas A. Randomised response-adaptive designs in clinical trials. Boca Raton: CRC Press; 2014.

Rugo HS, Olopade OI, DeMichele A, et al. Adaptive randomization of veliparib–carboplatin treatment in breast cancer. N Engl J Med. 2016;375:23–34.

Berry SM, Petzold EA, Dull P, et al. A response-adaptive randomization platform trial for efficient evaluation of Ebola virus treatments: a model for pandemic response. Clin Trials. 2016;13:22–30.

Ware JH. Investigating therapies of potentially great benefit: ECMO. (with discussion). Stat Sci. 1989;4(4):298–340.

Hey SP, Kimmelman J. Are outcome-adaptive allocation trials ethical? (with discussion). Clin Trials. 2005;12(2):102–27.

Proschan M, Evans S. Resist the temptation of response-adaptive randomization. Clin Infect Dis. 2020;71(11):3002–4. https://doi.org/10.1093/cid/ciaa334 .

Villar SS, Robertson DS, Rosenberger WF. The temptation of overgeneralizing response-adaptive randomization. Clinical Infectious Diseases. 2020; ciaa1027; doi: https://doi.org/10.1093/cid/ciaa1027 .

Proschan M. Reply to Villar, et al. Clinical infectious diseases. 2020; ciaa1029; doi: https://doi.org/10.1093/cid/ciaa1029 .

Donner A, Klar N. Design and Analysis of Cluster Randomization Trials in Health Research. London: Arnold Publishers Limited; 2000.

Klasnja P, Hekler EB, Shiffman S, Boruvka A, Almirall D, Tewari A, Murphy SA. Micro-randomized trials: An experimental design for developing just-in-time adaptive interventions. Health Psychol. 2015;34:1220–8.

Article   PubMed Central   Google Scholar  

Download references

Acknowledgements

The authors are grateful to Robert A. Beckman for his continuous efforts coordinating Innovative Design Scientific Working Groups, which is also a networking research platform for the Randomization ID SWG. We would also like to thank the editorial board and the two anonymous reviewers for the valuable comments which helped to substantially improve the original version of the manuscript.

None. The opinions expressed in this article are those of the authors and may not reflect the opinions of the organizations that they work for.

Author information

Authors and affiliations.

National Institutes of Health, Bethesda, MD, USA

Vance W. Berger

Boehringer Ingelheim Pharma GmbH & Co. KG, Biberach, Germany

Louis Joseph Bour

Boehringer-Ingelheim Pharmaceuticals Inc, Ridgefield, CT, USA

Kerstine Carter

Population Health Sciences, University of Utah School of Medicine, Salt Lake City UT, USA

Jonathan J. Chipman

Cancer Biostatistics, University of Utah Huntsman Cancer Institute, Salt Lake City UT, USA

Clinical Trials Research Unit, University of Leeds, Leeds, UK

Colin C. Everett

RWTH Aachen University, Aachen, Germany

Nicole Heussen & Ralf-Dieter Hilgers

Medical School, Sigmund Freud University, Vienna, Austria

Nicole Heussen

York Trials Unit, Department of Health Sciences, University of York, York, UK

Catherine Hewitt

Food and Drug Administration, Silver Spring, MD, USA

Yuqun Abigail Luo

Open University of Catalonia (UOC) and the University of Barcelona (UB), Barcelona, Spain

Jone Renteria

Department of Human Development and Quantitative Methodology, University of Maryland, College Park, MD, USA

BioPharma Early Biometrics & Statistical Innovations, Data Science & AI, R&D BioPharmaceuticals, AstraZeneca, Gothenburg, Sweden

Yevgen Ryeznik

Early Development Analytics, Novartis Pharmaceuticals Corporation, NJ, East Hanover, USA

Oleksandr Sverdlov

Biostatistics Center & Department of Biostatistics and Bioinformatics, George Washington University, DC, Washington, USA

Diane Uschner

You can also search for this author in PubMed   Google Scholar

  • Robert A Beckman

Contributions

Conception: VWB, KC, NH, RDH, OS. Writing of the main manuscript: OS, with contributions from VWB, KC, JJC, CE, NH, and RDH. Design of simulation studies: OS, YR. Development of code and running simulations: YR. Digitization and preparation of data for Fig.  5 : JR. All authors reviewed the original manuscript and the revised version. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Oleksandr Sverdlov .

Ethics declarations

Ethics approval and consent to participate.

Not applicable.

Consent for publication

Competing interests, additional information, publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: figure s1.

. Type I error rate under selection bias model with bias effect ( \(\nu\) ) in the range 0 (no bias) to 1 (strong bias) for 12 randomization designs and three statistical tests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Berger, V., Bour, L., Carter, K. et al. A roadmap to using randomization in clinical trials. BMC Med Res Methodol 21 , 168 (2021). https://doi.org/10.1186/s12874-021-01303-z

Download citation

Received : 24 December 2020

Accepted : 14 April 2021

Published : 16 August 2021

DOI : https://doi.org/10.1186/s12874-021-01303-z

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Randomization-based test
  • Restricted randomization design

BMC Medical Research Methodology

ISSN: 1471-2288

what is randomization in research design

  • Yale Directories

Institution for Social and Policy Studies

Advancing research • shaping policy • developing leaders, why randomize.

About Randomized Field Experiments Randomized field experiments allow researchers to scientifically measure the impact of an intervention on a particular outcome of interest.

What is a randomized field experiment? In a randomized experiment, a study sample is divided into one group that will receive the intervention being studied (the treatment group) and another group that will not receive the intervention (the control group). For instance, a study sample might consist of all registered voters in a particular city. This sample will then be randomly divided into treatment and control groups. Perhaps 40% of the sample will be on a campaign’s Get-Out-the-Vote (GOTV) mailing list and the other 60% of the sample will not receive the GOTV mailings. The outcome measured –voter turnout– can then be compared in the two groups. The difference in turnout will reflect the effectiveness of the intervention.

What does random assignment mean? The key to randomized experimental research design is in the random assignment of study subjects – for example, individual voters, precincts, media markets or some other group – into treatment or control groups. Randomization has a very specific meaning in this context. It does not refer to haphazard or casual choosing of some and not others. Randomization in this context means that care is taken to ensure that no pattern exists between the assignment of subjects into groups and any characteristics of those subjects. Every subject is as likely as any other to be assigned to the treatment (or control) group. Randomization is generally achieved by employing a computer program containing a random number generator. Randomization procedures differ based upon the research design of the experiment. Individuals or groups may be randomly assigned to treatment or control groups. Some research designs stratify subjects by geographic, demographic or other factors prior to random assignment in order to maximize the statistical power of the estimated effect of the treatment (e.g., GOTV intervention). Information about the randomization procedure is included in each experiment summary on the site.

What are the advantages of randomized experimental designs? Randomized experimental design yields the most accurate analysis of the effect of an intervention (e.g., a voter mobilization phone drive or a visit from a GOTV canvasser, on voter behavior). By randomly assigning subjects to be in the group that receives the treatment or to be in the control group, researchers can measure the effect of the mobilization method regardless of other factors that may make some people or groups more likely to participate in the political process. To provide a simple example, say we are testing the effectiveness of a voter education program on high school seniors. If we allow students from the class to volunteer to participate in the program, and we then compare the volunteers’ voting behavior against those who did not participate, our results will reflect something other than the effects of the voter education intervention. This is because there are, no doubt, qualities about those volunteers that make them different from students who do not volunteer. And, most important for our work, those differences may very well correlate with propensity to vote. Instead of letting students self-select, or even letting teachers select students (as teachers may have biases in who they choose), we could randomly assign all students in a given class to be in either a treatment or control group. This would ensure that those in the treatment and control groups differ solely due to chance. The value of randomization may also be seen in the use of walk lists for door-to-door canvassers. If canvassers choose which houses they will go to and which they will skip, they may choose houses that seem more inviting or they may choose houses that are placed closely together rather than those that are more spread out. These differences could conceivably correlate with voter turnout. Or if house numbers are chosen by selecting those on the first half of a ten page list, they may be clustered in neighborhoods that differ in important ways from neighborhoods in the second half of the list. Random assignment controls for both known and unknown variables that can creep in with other selection processes to confound analyses. Randomized experimental design is a powerful tool for drawing valid inferences about cause and effect. The use of randomized experimental design should allow a degree of certainty that the research findings cited in studies that employ this methodology reflect the effects of the interventions being measured and not some other underlying variable or variables.

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings
  • My Bibliography
  • Collections
  • Citation manager

Save citation to file

Email citation, add to collections.

  • Create a new collection
  • Add to an existing collection

Add to My Bibliography

Your saved search, create a file for external citation management software, your rss feed.

  • Search in PubMed
  • Search in NLM Catalog
  • Add to Search

The principle of randomization in scientific research

Affiliation.

  • 1 Consulting Center of Biomedical Statistics, Academy of Military Medical Sciences, Beijing 100850, China. [email protected]
  • PMID: 21669161
  • DOI: 10.3736/jcim20110603

Scientific research design includes specialty design and statistics design which can be subdivided into experimental design, clinical trial design and survey design. Usually, statistics textbooks introduce the core aspects of experimental design as the three key elements, the four principles and the design types, which run through the whole scientific research design and determine the overall success of the research. This article discusses the principle of randomization, which is one of the four principles, and focuses on the following two issues--the definition and function of randomization and the real life examples which go against the randomization principle, thereby demonstrating that strict adherence to the randomization principle leads to meaningful and valuable scientific research.

PubMed Disclaimer

Similar articles

  • The balance principle in scientific research. Hu LP, Bao XL, Wang Q. Hu LP, et al. Zhong Xi Yi Jie He Xue Bao. 2012 May;10(5):504-7. doi: 10.3736/jcim20120504. Zhong Xi Yi Jie He Xue Bao. 2012. PMID: 22587971
  • A course for clinical trial personnel in clinical study designs, randomization, allocation schedules, and interactive response systems. Golm GT, Bradstreet TE, Coffey LA. Golm GT, et al. Pharm Stat. 2011 Mar-Apr;10(2):175-82. doi: 10.1002/pst.447. Epub 2010 Oct 22. Pharm Stat. 2011. PMID: 20967893 Review.
  • Methods and analysis of realizing randomized grouping. Hu LP, Bao XL, Wang Q. Hu LP, et al. Zhong Xi Yi Jie He Xue Bao. 2011 Jul;9(7):711-4. doi: 10.3736/jcim20110703. Zhong Xi Yi Jie He Xue Bao. 2011. PMID: 21749820
  • Randomization: Beyond the closurization principle. Moulton LH. Moulton LH. Clin Trials. 2022 Aug;19(4):396-401. doi: 10.1177/17407745221080714. Epub 2022 Mar 1. Clin Trials. 2022. PMID: 35232309
  • Causality and control: key to the experiment. Behi R, Nolan M. Behi R, et al. Br J Nurs. 1996 Feb 22-Mar 13;5(4):252-5. doi: 10.12968/bjon.1996.5.4.252. Br J Nurs. 1996. PMID: 8704457 Review.
  • Search in MeSH
  • Citation Manager

NCBI Literature Resources

MeSH PMC Bookshelf Disclaimer

The PubMed wordmark and PubMed logo are registered trademarks of the U.S. Department of Health and Human Services (HHS). Unauthorized use of these marks is strictly prohibited.

Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, generate accurate citations for free.

  • Knowledge Base

Methodology

  • What Is a Research Design | Types, Guide & Examples

What Is a Research Design | Types, Guide & Examples

Published on June 7, 2021 by Shona McCombes . Revised on November 20, 2023 by Pritha Bhandari.

A research design is a strategy for answering your   research question  using empirical data. Creating a research design means making decisions about:

  • Your overall research objectives and approach
  • Whether you’ll rely on primary research or secondary research
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research objectives and that you use the right kind of analysis for your data.

Table of contents

Step 1: consider your aims and approach, step 2: choose a type of research design, step 3: identify your population and sampling method, step 4: choose your data collection methods, step 5: plan your data collection procedures, step 6: decide on your data analysis strategies, other interesting articles, frequently asked questions about research design.

  • Introduction

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities—start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative approach Quantitative approach
and describe frequencies, averages, and correlations about relationships between variables

Qualitative research designs tend to be more flexible and inductive , allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive , with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed-methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.

Here's why students love Scribbr's proofreading services

Discover proofreading & editing

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types.

  • Experimental and   quasi-experimental designs allow you to test cause-and-effect relationships
  • Descriptive and correlational designs allow you to measure variables and describe relationships between them.
Type of design Purpose and characteristics
Experimental relationships effect on a
Quasi-experimental )
Correlational
Descriptive

With descriptive and correlational designs, you can get a clear picture of characteristics, trends and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analyzing the data.

Type of design Purpose and characteristics
Grounded theory
Phenomenology

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study—plants, animals, organizations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

  • Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling . The sampling method you use affects how confidently you can generalize your results to the population as a whole.

Probability sampling Non-probability sampling

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study , your aim is to deeply understand a specific context, not to generalize to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question .

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviors, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews .

Questionnaires Interviews
)

Observation methods

Observational studies allow you to collect data unobtrusively, observing characteristics, behaviors or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Quantitative observation

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

Field Examples of data collection methods
Media & communication Collecting a sample of texts (e.g., speeches, articles, or social media posts) for data on cultural norms and narratives
Psychology Using technologies like neuroimaging, eye-tracking, or computer-based tasks to collect data on things like attention, emotional response, or reaction time
Education Using tests or assignments to collect data on knowledge and skills
Physical sciences Using scientific instruments to collect data on things like weight, blood pressure, or chemical composition

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what kinds of data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected—for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

Receive feedback on language, structure, and formatting

Professional editors proofread and edit your paper by focusing on:

  • Academic style
  • Vague sentences
  • Style consistency

See an example

what is randomization in research design

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are high in reliability and validity.

Operationalization

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalization means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in—for example, questionnaires or inventories whose reliability and validity has already been established.

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.

Reliability Validity
) )

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.

Sampling procedures

As well as choosing an appropriate sampling method , you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample—by mail, online, by phone, or in person?

If you’re using a probability sampling method , it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method , how will you avoid research bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organizing and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymize and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well-organized will save time when it comes to analyzing it. It can also help other researchers validate and add to your findings (high replicability ).

On its own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyze the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarize your sample data, make estimates, and test hypotheses.

Using descriptive statistics , you can summarize your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.

Using inferential statistics , you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis .

Approach Characteristics
Thematic analysis
Discourse analysis

There are many other ways of analyzing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

If you want to know more about the research process , methodology , research bias , or statistics , make sure to check out some of our other articles with explanations and examples.

  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

 Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

A research design is a strategy for answering your   research question . It defines your overall approach and determines how you will collect and analyze data.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data, and that you use the right kind of analysis to answer your questions, utilizing credible sources . This allows you to draw valid , trustworthy conclusions.

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships .

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study , ethnography , and grounded theory designs.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative )
  • The type of design you’re using (e.g., a survey , experiment , or case study )
  • Your data collection methods (e.g., questionnaires , observations)
  • Your data collection procedures (e.g., operationalization , timing and data management)
  • Your data analysis methods (e.g., statistical tests  or thematic analysis )

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.

A research project is an academic, scientific, or professional undertaking to answer a research question . Research projects can take many forms, such as qualitative or quantitative , descriptive , longitudinal , experimental , or correlational . What kind of research approach you choose will depend on your topic.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

McCombes, S. (2023, November 20). What Is a Research Design | Types, Guide & Examples. Scribbr. Retrieved July 18, 2024, from https://www.scribbr.com/methodology/research-design/

Is this article helpful?

Shona McCombes

Shona McCombes

Other students also liked, guide to experimental design | overview, steps, & examples, how to write a research proposal | examples & templates, ethical considerations in research | types & examples, what is your plagiarism score.

Randomization in Statistics and Experimental Design

Design of Experiments > Randomization

What is Randomization?

Randomization in an experiment is where you choose your experimental participants randomly . For example, you might use simple random sampling , where participants names are drawn randomly from a pool where everyone has an even probability of being chosen. You can also assign treatments randomly to participants, by assigning random numbers from a random number table.

If you use randomization in your experiments, you guard against bias . For example, selection bias (where some groups are underrepresented) is eliminated and accidental bias (where chance imbalances happen) is minimized. You can also run a variety of statistical tests on your data (to test your hypotheses) if your sample is random.

Randomization Techniques

The word “random” has a very specific meaning in statistics. Arbitrarily choosing names from a list might seem random, but it actually isn’t. Hidden biases (like a subconscious preference for English names, names that sound like friends, or names that roll off the tongue) means that what you think is a random selection probably isn’t. Because these biases are often hidden, or overlooked, specific randomization techniques have been developed for researchers:

randomization

5.5 – Importance of randomization in experimental design

Introduction.

  • Demonstrate the benefits of random sampling as a method to control for extraneous factors

What about observational studies? How does randomization work?

Chapter 5 contents.

If the goal of the research is to make general, evidenced-based statements about causes of disease or other conditions of concern to the researcher, then how the subjects are selected for study directly impacts our ability to make generalizable conclusions . The most important concept to learn about inference in statistical science is that your sample of subjects upon which all measurements and treatments are conducted, ideally should be a random selection of individuals from a well-defined reference population.

The primary benefit of random sampling is that it strengthens our confidence in the links between cause and effect. Often after an intervention trial is complete, differences among the treatment groups will be observed. Groups of subjects who participated in sixteen weeks of “vigorous” aerobic exercise training show reduced systolic blood pressure compared to those subjects who engaged in light exercise for the same period of time (Cox et al 1996). But how do we know that exercise training caused the difference in blood pressure between the two treatment groups? Couldn’t the differences be explained by chance differences in the subjects? Age, body mass index (BMI), over all health, family history, etc.?

How can we account for these additional differences among the subjects? If you are thinking like an experimental biologist, then the word “control” is likely coming to the foreground. Why not design a study in which all 60 subjects are the same age, the same BMI, the same general health, the same family … history…? Hmm. That does not work. Even if you decide to control age, BMI, and general health categories, you can imagine the increased effort and cost to the project in trying to recruit subjects based on such narrow criteria. So, control per se is not the general answer.

If done properly, random sampling makes these alternative explanations less likely. Random sampling implies that other factors that may causally contribute to differences in the measured outcome, but themselves are not measured or included as a focus of the research study, should be the same, on average, among our different treatment groups. The practical benefits of proper random sampling is that recruiting subjects gets easier — fewer subjects will be needed because you are not trying to control dozens of factors that may (or may not!) contribute to differences in your outcome variable. The downside to random sampling is that the variability of the outcomes within your treatment groups will tends to increase. As we will see when we get to statistical inference, large variability within groups will make it less likely that any statistical difference between the treatment groups will be observed.

Demonstrate the benefits of random sampling as a method to control for extraneous factors.

The study reported by Cox et al included 60 obese men between the ages of 20 and 50. A reasonable experimental design decision would suggest that the 60 subjects be split into the two treatment groups such that both groups had 30 subjects for a balanced design. Subjects who met all of the research criteria and who had signed the informed consent agreement are to be placed into the treatment groups and there are many ways that group assignment could be accomplished. One possibility, the researchers could assign the first 30 people that came into the lab to the Vigorous exercise group and the remaining 30 then would be assigned to the Light exercise group. Intuitively I think we would all agree that this is a suspect way to design an experiment, but more importantly, why shouldn’t you use this convenient method?

Just for arguments sake, imagine that their subjects came in one at a time, and, coincidentally, they did so by age. The first person was age 21, the second was 22, and so on up to the 30th person who was 50. Then, the next group came in, again, coincidentally in order of ascending age. If you calculate the simple average age for each group you will find that they are identical (35.5 years). On the surface, this looks like we have controlled for age: both treatment groups have subjects that are the same age. A second option is to sort the subjects into the two treatment groups so that a 21 year old is in Group A, and the other 21 year old is in Group B, and so on. Again, the average age of Group A subjects and of Group B subjects would be the same and therefore controlled with respect to any covariation between age and change in blood pressure. However, there are other variables that may covary with blood pressure, and by controlling one, we would need to control the others. Randomization provides a better way.

I will demonstrate how randomization tends to distribute the values in such a way that the groups will not differ appreciably for the nuisance variables like age and BMI differences and, by extension, any other covariable. The R work is attached following the Reading list. The take-home message: After randomly selecting subjects for assignment to the treatment groups, the apparent differences between Group A and Group B for both age and BMI are substantially diminished. No attempt to match by age and by BMI is necessary. The numbers are shown in the table and then in two graphics (Fig. 1, Fig. 2) derived from the table.

Table 1. Mean age and BMI for subjects in two treatment groups A and B where subjects were assigned randomly or by convenience to treatment groups.

Group Random assignment of subjects
to treatment groups
Convenience assignment of subjects
to treatment groups
A 35.2 28
B 35.8 43
A 32.49 28.99
B 32.87 37.37

Just for emphasis, the means from Table 1 are presented in the next two figures (Fig. 1 and Fig. 2).

Figure 6. Age of subjects by groups (A = blue, B = red) with and without randomized assignment of subjects to treatment groups

Figure 1. Age of subjects by groups (A = blue, B = red) with and without randomized assignment of subjects to treatment groups

Figure 7. BMI of subjects by groups (A = blue, B = red) with and without randomized assignment of subjects to treatment groups

Figure 2. BMI of subjects by groups (A = blue, B = red) with and without randomized assignment of subjects to treatment groups

Note that the apparent difference between A and B for BMI disappear once proper randomization of subjects was accomplished. In conclusion, a random sample is an approach to experimental design that helps to reduce the influence other factors may have on the outcome variable (e.g., change in blood pressure after 16 weeks of exercise). In principle, randomization should protect a project because, on average, these influences will be represented randomly for the two groups of individuals. This reasoning extends to unmeasured and unknown causal factors as well.

This discussion was illustrated by random assignment of subjects to treatment groups. The same logic applies to how to select subjects from a population. If the sampling is large enough, then a random sample of subjects will tend to be representative of the variability of the outcome variable for the population and representative also of the additional and unmeasured cofactors that may contribute to the variability of the outcome variable.

However, if you do cannot obtain a random sample, then conclusions reached may be sample-specific, biased . …perhaps the group of individuals that likes to exercise on treadmills just happens to have a higher cardiac output because they are larger than the individuals that like to exercise on bicycles. This nonrandom sample will bias your results and can lead to incorrect interpretation of results. Random sampling is CRUCIAL in epidemiology, opinion survey work, most aspects of health, drug studies, medical work with human subjects. It’s difficult and very costly to do… so most surveys you hear about, especially polls reported from Internet sites, are NOT conducted using random sampling (included in the catch-all term “ probability sampling “)!! As an aside, most opinion survey work involves complex sample designs involving some form of geographic clustering (e.g., all phone numbers in a city, random sample among neighborhoods).

Random sampling is the ideal if generalizations are to be made about data, but strictly random sampling is not appropriate for all kinds of studies. Consider the question of whether or not EMF exposure is a risk factor for developing cancer (Pool 1990). These kinds of studies are observational: at least in principle, we wouldn’t expect that housing and therefore exposure to EMF is manipulated (cf. discussion Walker 2009). Thus, epidemiologists will look for patterns: if EMF exposure is linked to cancer, then more cases of cancer should occur near EMF sources compared to areas distant from EMF sources. Thus, the hypothesis is that an association between EMF exposure and cancer occurs non-randomly, whereas cancers occurring in people not exposed to EMF are random. Unfortunately, clusters can occur even if the process that generates the data is random.

Compare Graph A and Graph B (Fig. 3). One of the graphs resulted from a random process and the other was generated by a non-random process . Note that the claim can be rephrased about the probability that each grid has a point, e.g., it’s like Heads/Tails of 16 tosses of a coin. We can see clusters of points in Graph B; Graph A lacks obvious clusters of points — there is a point in each of the 16 cells of the grid. Although both patterns could be random, the correct answer in this case is Graph B.

Figure 8. An example of clustering resulting from a random sampling process (Graph B). In contrast, Graph A was generated so that a point was located within each grid.

Figure 3. An example of clustering resulting from a random sampling process (Graph B). In contrast, Graph A was generated so that a point was located within each grid.

The graphic below shows the transmission grid in the continental United States (Fig. 4). How would one design a random sampling scheme overlaid against the obviously heterogeneous distribution of the grid itself? If a random sample was drawn, chances are good that no population would be near a grid in many of the western states, but in contrast, the likelihood would increase in the eastern portion of the United States where the population and therefore transmission grid is more densely placed.

Open Infrastructure map, https://openinframap.org/#3/24.61/-101.16

Figure 4. Map of electrical transmission grid for continental United States of America. Image source https://openinframap.org/#3/24.61/-101.16

For example, you want to test whether or not EMF affects human health, and your particular interest is in whether or not there exists a relationship between living close to high voltage towers or transfer stations and brain cancer. How does one design a study, keeping in mind the importance of randomization for our ability to generalize and assign causation?  This is a part of epidemiology which strives to detect whether clusters of disease are related to some environmental source. It is an extremely difficult challenge. For the record, no clear link to EMF and cancer has been found, but reports do appear from time to time (e.g., report on a cluster of breast cancer in men working in office adjacent to high EMF, Milham 2004).

1. I claimed that Graph B in Figure 8 was generated by a random process while Graph B was not. The results are: Graph A, each cell in the grid has a point; In graph B, ten cells have at least one point, six cells are empty. Which probability _____ distribution applies? A. beta B. binomial C. normal D. poisson

2. True or False. If sample with replacement is used, a subject may be included more than once.

3. Use the sample() with and without replacement on the object (see help with R below)

a) set of 3

b) set of 4

4. Confirm the claim by calculating the probability of Graph A result vs Graph B result (see R script below).

Code you type is shown in red; responses or output from R are shown in blue. Recall that statements preceded by the hash # are comments and are not read by R (i.e., no need for you tp type them).

First, create some variables. Vectors aa and bb contain my two age sequences.

Second, append vector bb to the end of vector aa

Third, get the average age for the first group (the aa sequence)  and for the second group (the bb sequence). Lots of ways to do this, I made a two subsets from the combined age variable; could have just as easily taken the mean of aa and the mean of bb (same thing!).

Fourth, start building a data frame, then sort it by age. Will be adding additional variables to this data frame

Fifth, divide the variable again into two subsets of 30 and get the averages

Sixth, create an index variable, random order without replacement

Add the new variable to our existing data frame, then print it to check that all is well

Seventh, select for our first treatment group the first 30 subjects from the randomized index. There are again other ways to do this, but sorting on the index variable means that the subject order will be change too.

Print the new data frame to confirm that the sorting worked. It did. we can see that the rows have been sorted by ascending order based on the index variable.

Eighth, create our new treatment groups, again of n = 30 each, then get the means ages for each group.

Get the minimum and maximum values for the groups

Ninth, create a BMI variable drawn from a normal distribution with coefficient of variation equal to 20%. The first group with we will call cc

The second group called dd

Create a new variable called BMI by joining cc and dd

Add the BMI variable to our data frame.

Tenth, repeat our protocol from before: Set up two groups each with 30 subjects, calculate the means for the variables and then sort by the random index and get the new group means.

All we did was confirm that the unsorted groups had mean BMI of around 27.5 and 37.5 respectively. Now, proceed to sort by the random index variable. Go ahead and create a new data frame

Get the means of the new groups

That’s all of the work!

  • The basics explained
  • Experiments
  • Experimental and Sampling units
  • Replication, Bias, and Nuisance Variables
  • Clinical trials
  • Importance of randomization in experimental design
  • Sampling from Populations
  • References and suggested readings
  • Translators
  • Graphic Designers

Solve

Please enter the email address you used for your account. Your sign in information will be sent to your email address after it has been verified.

Completely Randomized Design: The One-Factor Approach

David Costello

Completely Randomized Design (CRD) is a research methodology in which experimental units are randomly assigned to treatments without any systematic bias. CRD gained prominence in the early 20th century, largely attributed to the pioneering work of statistician Ronald A. Fisher . His method addressed the inherent variability in experimental units by randomly assigning treatments, thus countering potential biases. Today, CRD serves as an indispensable tool in various domains, including agriculture, medicine, industrial engineering, and quality control analysis.

CRD is particularly favored in situations with limited control over external variables. By leveraging its inherent randomness, CRD neutralizes potentially confounding factors. As a result, each experimental unit has an equal likelihood of receiving any specific treatment, ensuring a level playing field. Such random allocation is pivotal in eliminating systematic bias and bolstering the validity of experimental conclusions.

While CRD may sometimes necessitate larger sample sizes , the improved accuracy and consistency it introduces to results often justify this requirement.

Understanding CRD

At its core, CRD is centered on harnessing randomness to achieve objective experimental outcomes. This approach effectively addresses unanticipated extraneous variables —those not included in the study design but that can still influence the response variable. In the context of CRD, these extraneous variables are expected to be uniformly distributed across treatments, thereby mitigating their potential influence.

A key aspect of CRD is the single-factor experiment. This means that the experiment revolves around changing or manipulating one primary independent variable (or factor) to ascertain its effect on the dependent variable . Consider these examples across different fields:

  • Medical: An experiment might be designed where the independent variable is the dosage of a new drug, and the dependent variable is the speed of patient recovery. Researchers would vary the drug dosage and observe its effect on recovery rates.
  • Agriculture: An agricultural study could alter the amount of water irrigation (independent variable) given to crops and measure the resulting crop yield (dependent variable) to determine the optimal irrigation level.
  • Psychology: A psychologist might introduce different intensities of a visual cue (independent variable) to participants and then measure their reaction times (dependent variable) to understand the cue's influence.
  • Environmental Science: Scientists might introduce different concentrations of a pollutant (independent variable) to a freshwater pond and measure the health and survival rate of aquatic life (dependent variable) in response.
  • Education: In an educational setting, researchers could change the duration of digital learning (independent variable) students receive daily and then observe its effect on test scores (dependent variable) at the end of the term.
  • Engineering: In material science, an experiment might adjust the temperature (independent variable) during the curing process of a polymer and then measure its resultant tensile strength (dependent variable).

For each of these scenarios, only one key factor or independent variable is intentionally varied, while any changes or outcomes in another variable (the dependent variable) are observed and recorded. This distinct focus on a single variable, while keeping all others constant or controlled, underscores the essence of the single-factor experiment in CRD.

Advantages of CRD

Understanding the strengths of Completely Randomized Design is pivotal for effectively applying this research tool and interpreting results accurately. Below is an exploration of the benefits of employing CRD in research studies.

  • Simplicity: One of the most appealing features of CRD is its straightforwardness. Focusing on a single primary factor, CRD is easier to understand and implement compared to more complex research designs.
  • Flexibility: CRD enhances versatility by allowing the inclusion of various experimental units and treatments through random assignment, enabling researchers to explore a range of variables.
  • Robustness: Despite its simplicity, CRD stands as a robust research tool. The consistent use of randomization minimizes biases and uniformly distributes the effects of uncontrolled variables across all groups, contributing to the reliability of the results.
  • Generalizability: Proper application of CRD enables the extension of research findings to a broader population. The minimization of selection bias , thanks to random assignment, increases the probability that the sample closely represents the larger population.

Disadvantages of CRD

While CRD is marked by simplicity, flexibility, robustness, and enhanced generalizability, it is essential to carefully consider its limitations. A thoughtful analysis of these aspects will guide researchers in making informed decisions about the applicability of CRD to their specific research context.

  • Ignoring Nuisance Variables: CRD operates primarily under the assumption that all treatments are equivalent aside from the independent variable. If strong nuisance factors vary systematically across treatments, this assumption becomes a limitation, making CRD less suitable for studies where nuisance variables significantly impact the results.
  • Need for Large Sample Size: The pooling of all experimental units into one extensive set necessitates a larger sample size, potentially leading to increased time, cost, and resource investment.
  • Inefficiency in Some Cases: CRD might demonstrate statistical inefficiency with significant within-treatment group variability . In such cases, other designs that account for this variability may offer enhanced efficiency.

Differentiating CRD from other research design methods

CRD stands out in the realm of research designs due to its foundational simplicity. While its essence lies in the random assignment of experimental units to treatments without any systematic bias, other designs introduce varying layers of complexity tailored to specific experimental needs.

For instance, consider the Randomized Block Design (RBD) . Unlike the straightforward approach of CRD, RBD divides experimental units into homogenous blocks, based on known sources of variability, before assigning treatments. This method is especially useful when there's an identifiable source of variability that researchers wish to control for. Similarly, the Latin Square Design , while also involving random assignment, operates on a grid system to simultaneously control for two lurking variables , adding another dimension of complexity not found in CRD.

Factorial Design investigates the effects and interactions of multiple independent variables. This design can reveal interactions that might be overlooked in simpler designs. Then there's the Crossover Design , often used in medical trials. Unlike CRD, where each unit experiences only one treatment, in Crossover Design, participants receive multiple treatments over different periods, allowing each participant to serve as their own control.

The choice of research design, whether it be CRD, RBD, Latin Square, or any of the other methods available, is fundamentally guided by the nature of the research question , the characteristics of the experimental units, and the specific objectives the study aims to achieve. However, it's the inherent simplicity and flexibility of CRD that often makes it the go-to choice, especially in scenarios with many units or treatments, where intricate stratification or blocking isn't necessary.

Let us further explore the advantages and disadvantages of each method.

Research DesignDescriptionKey FeaturesAdvantagesDisadvantages
Completely Randomized Design (CRD)Employs random assignment of experimental units to treatments without any systematic bias.Simple and flexible

Each unit experiences only one treatment
Simple structure makes it easy to implementDoes not control for any other variables; may require a larger sample size
Randomized Block Design (RBD)Divides experimental units into homogenous blocks based on known sources of variability before assigning treatments.Controls for one source of variability

More complex than CRD
Controls for known variability, potentially increasing the precision of the experimentMore complex to implement and analyze
Latin Square DesignUses a grid system to control for two lurking variables.Controls for two sources of variability

Adds complexity not found in CRD
Controls for two sources of variabilityComplex design; may not be practical for all experiments
Factorial DesignInvestigates the effects and interactions of multiple independent variables.Reveals interactions

More complex design
Can assess interactions between factorsComplex and may require a large sample size
Crossover DesignParticipants receive multiple treatments over different periods.Each participant serves as their own control

Often used in medical trials
Each participant can serve as their own control, potentially reducing variabilityPeriod effects and carryover effects can complicate results

While CRD's simplicity and flexibility make it a popular choice for many research scenarios, the optimal design depends on the specific needs, objectives, and contexts of the study. Researchers must carefully consider these factors to select the most suitable research design method.

The role of CRD in mitigating extraneous variables

Within the framework of experimental research, extraneous variables persistently challenge the validity of findings, potentially compromising the established relationship between independent and dependent variables . CRD is a methodological safeguard that systematically addresses these extraneous variables. Below, we describe specific types of extraneous variables and how CRD counteracts their potential influence:

  • Definition: Variables that induce variance in the dependent variable, yet are not of primary academic interest. While they don't muddle the relationship between the primary variables, their presence can augment within-group variability, reducing statistical power.
  • CRD's Countermeasure: Through the mechanism of random assignment, CRD ensures an equitably distributed influence of nuisance variables across all experimental conditions. This distribution, theoretically, leads to mutual nullification of their effects when assessing the efficacy of treatments.
  • Definition: Variables not explicitly incorporated within the study design but can influence its outcomes. Their impact often manifests post-hoc, rendering them alternative explanations for observed phenomena.
  • CRD's Countermeasure: Random assignment intrinsic to CRD assures a uniform distribution of these lurking variables across experimental conditions. This diminishes the probability of them systematically influencing one group, thus safeguarding the experiment's conclusions.
  • Definition: Variables that not only influence the dependent variable but also correlate with the independent variable. Their simultaneous influence can mislead interpretations of causality.
  • CRD's Countermeasure: The tenet of random assignment inherent in CRD ensures an equitable distribution of potential confounders among groups. This bolsters confidence in attributing observed effects predominantly to the experimental treatments.
  • Definition: Deliberately held constant to ensure that they do not introduce variability into the experiment. They are intentionally kept constant to preserve experimental integrity.
  • CRD's Countermeasure: While CRD focuses on randomization, the nature of the design inherently assumes that controlled variables remain constant across all experimental units. By maintaining these constants, CRD ensures that the focus remains solely on the treatment effects, further validating the experiment's findings.

The foundational principle underpinning the Completely Randomized Design—randomization—serves as a bulwark against the influences of extraneous variables. By uniformly distributing these variables across experimental conditions, CRD enhances the validity and reliability of experimental outcomes. However, researchers should exercise caution and continuously evaluate potential extraneous influences, even in randomized designs.

Selecting the independent variable

The selection of the independent variable is crucial for research design . This pivotal step not only shapes the direction and quality of the research but also underpins the understanding of causal relationships within the studied system, influencing the dependent variable or response. When choosing this essential component of experimental design , several critical considerations emerge:

  • Relevance: Paramount to the success of the experiment is the variable's direct relevance to the research query. For instance, in a botanical study of phototropism, the light's intensity or duration would naturally serve as the independent variable.
  • Measurability: The chosen variable should be quantifiable or categorizable, enabling distinctions between its varying levels or types.
  • Controllability: The research environment must allow for steadfast control over the variable, ensuring extraneous influences are kept at bay.
  • Ethical Considerations: In disciplines like social sciences or medical research, it's vital to consider the ethical implications . The chosen variable should withstand ethical scrutiny, safeguarding the well-being and rights of participants.

Identifying the independent variable necessitates a methodical and structured approach where each step aligns with the overarching research objective:

  • Review Literature: Thoroughly review existing literature to provide invaluable insights into past research and highlight unexplored areas.
  • Define the Scope: Clearly delineating research boundaries is crucial. For example, when studying dietary impacts on metabolic health, the variable could span from diet types (like keto, vegan, Mediterranean) to specific nutrients.
  • Determine Levels of the Variable: This involves understanding the various levels or categories the independent variable might have. In educational research, one might look beyond simply "innovative vs. conventional methods" to a broader range of teaching techniques.
  • Consider Potential Outcomes: Anticipating possible outcomes based on variations in the independent variable is beneficial. If potential outcomes seem too vast, the variable might need further refinement.

In academic discourse, while CRD is praised for its rigor and clarity, the effectiveness of the design relies heavily on the meticulous selection of the independent variable. Making this choice with thorough consideration ensures the research offers valuable insights with both academic and wider societal implications.

Applications of CRD

CRD has found wide and varied applications in several areas of research. Its versatility and fundamental simplicity make it an attractive option for scientists and researchers across a multitude of disciplines.

CRD in agricultural research

Agricultural research was among the earliest fields to adopt the use of Completely Randomized Design. The broad application of CRD within agriculture not only encompasses crop improvement but also the systematic analysis of various fertilizers, pesticides, and cropping techniques. Agricultural scientists leverage the CRD framework to scrutinize the effects on yield enhancement and bolstered disease resistance. The fundamental randomization in CRD effectively mitigates the influence of nuisance variables such as soil variations and microclimate differences, ensuring more reliable and valid experimental outcomes.

Additionally, CRD in agricultural research paves the way for robust testing of new agricultural products and methods. The unbiased allocation of treatments serves as a solid foundation for accurately determining the efficacy and potential downsides of innovative fertilizers, genetically modified seeds, and novel pest control methods, contributing to informed decision-making and policy formulation in agricultural development.

However, the limitations of CRD within the agricultural context warrant acknowledgment. While it offers an efficient and straightforward approach for experimental design, CRD may not always capture spatial variability within large agricultural fields adequately. Such unaccounted variations can potentially skew results, underscoring the necessity for employing more intricate experimental designs, such as the Randomized Complete Block Design (RCBD), where necessary. This adaptation enhances the reliability and generalizability of the research findings, ensuring their applicability to real-world agricultural challenges.

CRD in medical research

The fields of medical and health research substantially benefit from the application of Completely Randomized Design, especially in executing randomized control trials. Within this context, participants, whether patients or others, are randomly assigned to either the treatment or control groups. This structured random allocation minimizes the impact of extraneous variables, ensuring that the groups are comparable. It fortifies the assertion that any discernible differences in outcomes are genuinely attributable to the treatment being analyzed, enhancing the robustness and reliability of the research findings.

CRD's randomized nature in medical research allows for a more objective assessment of varied medical treatments and interventions. By mitigating the influence of extraneous variables, researchers can more accurately gauge the effectiveness and potential side effects of novel medical approaches, including pharmaceuticals and surgical techniques. This precision is crucial for the continual advancement of medical science, offering a solid empirical foundation for the refinement of treatments that improve health outcomes and patient quality of life.

However, like other fields, the application of CRD in medical research has its limitations. Despite its effectiveness in controlling various factors, CRD may not always consider the complexity of human health conditions where multiple variables often interact in intricate ways. Hence, while CRD remains a valuable tool for medical research, it is crucial to apply it judiciously and alongside other research designs to ensure comprehensive and reliable insights into medical treatments and interventions.

CRD in industrial engineering

In industrial engineering, Completely Randomized Design plays a significant role in process and product testing, offering a reliable structure for the evaluation and improvement of industrial systems. Engineers often employ CRD in single-factor experiments to analyze the effects of a particular factor on a certain outcome, enhancing the precision and objectivity of the assessment.

For example, to discern the impact of varying temperatures on the strength of a metal alloy, engineers might utilize CRD. In this scenario, the different temperatures represent the single factor, and the alloy samples are randomly allocated to be tested at each designated temperature. This random assignment minimizes the influence of extraneous variables, ensuring that the observed effects on alloy strength are primarily attributable to the temperature variations.

CRD's implementation in industrial engineering also assists in the optimization of manufacturing processes. Through random assignment and structured testing, engineers can effectively evaluate process parameters, such as production speed, material quality, and machine settings. By accurately assessing the influence of these factors on production efficiency and product quality, engineers can implement informed adjustments and enhancements, promoting optimal operational performance and superior product standards. This systematic approach, anchored by CRD, facilitates consistent and robust industrial advancements, bolstering overall productivity and innovation in industrial engineering.

Despite these advantages, it's crucial to acknowledge the limitations of CRD in industrial engineering contexts. The design is efficient for single-factor experiments but may falter with experiments involving multiple factors and interactions, common in industrial settings. This limitation underscores the importance of combining CRD with other experimental designs. Doing so navigates the complex landscape of industrial engineering research, ensuring insights are comprehensive, accurate, and actionable for continuous innovation in industrial operations.

CRD in quality control analysis

Completely Randomized Design is also beneficial in quality control analysis, where ensuring the consistency of products is paramount.

For instance, a manufacturer keen on minimizing product defects may deploy CRD to empirically assess the effectiveness of various inspection techniques. By randomly assigning different inspection methods to identical or similar production batches, the manufacturer can gather data regarding the most effective techniques for identifying and mitigating defects, bolstering overall product quality and consumer satisfaction.

Furthermore, the utility of CRD in quality control extends to the analysis of materials, machinery settings, or operational processes that are pivotal to final product quality. This design enables organizations to rigorously test and compare assorted conditions or settings, ensuring the selection of parameters that optimize both quality and efficiency. This approach to quality analysis not only bolsters the reliability and performance of products but also significantly augments the optimization of organizational resources, curtailing wastage and improving profitability.

However, similar to other CRD applications, it is crucial to understand its limitations. While CRD can significantly aid in the analysis and optimization of various aspects of quality control, its effectiveness may be constrained when dealing with multi-factorial scenarios with complex interactions. In such situations, other experimental designs, possibly in tandem with CRD, might offer more robust and comprehensive insights, ensuring that quality control measures are not only effective but also adaptable to evolving industrial and market demands.

Future applications and emerging fields for CRD

The breadth of applications for Completely Randomized Design continues to expand. Emerging fields such as data science, business analytics, and environmental studies are increasingly recognizing the value of CRD in conducting reliable and uncomplicated experiments. In the realm of data science, CRD can be invaluable in assessing the performance of different algorithms, models, or data processing techniques. It enables researchers to randomize the variables, minimizing biases and providing a clearer understanding of the real-world applicability and effectiveness of various data-centric solutions.

In the domain of business analytics, CRD is paving the way for robust analysis of business strategies and initiatives. Businesses can employ CRD to randomly assign strategies or processes across various departments or teams, allowing for a comprehensive assessment of their impact. The insights from such assessments empower organizations to make data-driven decisions, optimizing their operations, and enhancing overall productivity and profitability. This approach is particularly crucial in the business environment of today, characterized by rapid changes, intense competition, and escalating customer expectations, where informed and timely decision-making is a key determinant of success.

Moreover, in environmental studies, CRD is increasingly being used to evaluate the impact of various factors on environmental health and sustainability. For example, researchers might use CRD to study the effects of different pollutants, conservation strategies, or land use patterns on ecosystem health. The randomized design ensures that the conclusions drawn are robust and reliable, providing a solid foundation for the development of policies and initiatives. As environmental concerns continue to mount, the role of reliable experimental designs like CRD in facilitating meaningful research and informed policy-making cannot be overstated.

Planning and conducting a CRD experiment

A CRD experiment involves meticulous planning and execution, outlined in the following structured steps. Each phase, from the preparatory steps to data collection and analysis, plays a pivotal role in bolstering the integrity and success of the experiment, ensuring that the findings stand as a valuable contribution to scientific knowledge and understanding.

  • Selecting Participants in a Random Manner: The heart of a CRD experiment is randomness. Regardless of whether the subjects are human participants, animals, plants, or objects, their selection must be truly random. This level of randomness ensures that every participant has an equal likelihood of being assigned to any treatment group, which plays a crucial role in eliminating selection bias.
  • Understanding and Selecting the Independent Variable: This is the variable of interest – the one that researchers aim to manipulate to observe its effects. Identifying and understanding this factor is pivotal. Its selection depends on the experiment's primary research question or hypothesis , and its clear definition is essential to ensuring the experiment's clarity and success.
  • The Process of Random Assignment in Experiments: Following the identification of subjects and the independent variable, researchers must randomly allocate subjects to various treatment groups. This process, known as random assignment, typically involves using random number generators or other statistical tools , ensuring that the principle of randomness is upheld.
  • Implementing the Single-factor Experiment: After random assignment, researchers can launch the main experiment. At this stage, they introduce the independent variable to the designated treatment groups, ensuring that all other conditions remain consistent across groups. The goal is to make certain that any observed effect or change is attributed only to the manipulation of the independent variable.
  • Data Cleaning and Preparation: The first step post-collection is to prepare and clean the data . This process involves rectifying errors, handling missing or inconsistent data, and eradicating duplicates. Employing tools like statistical software or languages such as Python and R can be immensely helpful. Handling outliers and maintaining consistency throughout the dataset is essential for accurate subsequent analysis.
  • Statistical Analysis Methods: The next step involves analyzing the data using appropriate statistical methodologies, dependent on the nature of the data and research questions . Analysis can range from basic descriptive statistics to complex inferential statistics or even advanced statistical modeling.
  • Interpreting the Results: Analysis culminates in the interpretation of results, wherein researchers draw conclusions based on the statistical outcomes. This stage is crucial in CRD, as it determines if observed effects can be attributed to the independent variable's manipulation or if they occurred purely by chance. Apart from statistical significance, the practical implications and relevance of the results also play a vital role in determining the experiment's success and potential real-world applications.

Navigating common challenges in CRD

While the Completely Randomized Design offers numerous advantages, researchers often encounter specific challenges when implementing it in real-world experiments. Recognizing these challenges early and being prepared with strategies to address them can significantly improve the integrity and success of the CRD experiment. Let's delve into some of the most common challenges and explore potential solutions:

  • Lack of Homogeneity: One foundational assumption of CRD is the homogeneity of experimental units . However, in reality, there may be inherent variability among units. To mitigate this, researchers can use stratified sampling or consider employing a randomized block design.
  • Improper Randomization: The essence of CRD is randomization. However, it's not uncommon for some researchers to inadvertently introduce biases during the assignment. Utilizing computerized random number generators or statistical software can help ensure true randomization.
  • Limited Number of Experimental Units: Sometimes, the available experimental units might be fewer than required for a robust experiment. In such cases, using a larger number of replications can help, albeit at the cost of increased resources.
  • Extraneous Variables: These external factors can influence the outcome of an experiment. They make it hard to attribute observed effects solely to the independent variable. Careful experimental design, pre-experimental testing, and post-experimental analysis can help identify and control these extraneous variables.
  • Overlooking Practical Significance: Even if a CRD experiment yields statistically significant results, these might not always be practically significant. Researchers need to assess the real-world implications of their findings, considering factors like cost, feasibility, and the magnitude of observed effects.
  • Data-related Challenges: From missing data to outliers, data-related issues may skew results. Regular data cleaning, rigorous validation, and employing robust statistical methods can help address these challenges.

While CRD is a powerful tool in experimental research, its successful implementation hinges on the researcher's ability to anticipate, recognize, and navigate challenges that might arise. By being proactive and employing strategies to mitigate potential pitfalls, researchers can maximize the reliability and validity of their CRD experiments, ensuring meaningful and impactful results.

In summary, the Completely Randomized Design holds a pivotal place in the field of research owing to its simplicity and straightforward approach. Its essence lies in the unbiased random assignment of experimental units to various treatments, ensuring the reliability and validity of the results. Although it may not control for other variables and often requires larger sample sizes, its ease of implementation frequently outweighs these drawbacks, solidifying it as a preferred choice for researchers across many fields.

Looking ahead, the future of CRD remains bright. As research continues to evolve, we anticipate the integration of CRD with more sophisticated design techniques and advanced analytical tools. This synergy will likely enhance the efficiency and applicability of CRD in varied research contexts, perpetuating its legacy as a fundamental research design method. While other designs might offer more control and complexity, the fundamental simplicity of CRD will continue to hold significant value in the rapidly evolving research landscape.

Moving forward, it is imperative to champion continuous learning and exploration in the field of CRD. Engaging in educational opportunities, staying abreast of the latest research and advancements, and actively participating in pertinent discussions and forums can markedly enrich understanding and expertise in CRD. Embracing this ongoing learning journey will not only bolster individual research skills but also make a significant contribution to the broader scientific community, fueling innovation and discovery in numerous fields of study.

Header image by Alex Shuper .

  • Academic Writing Advice
  • All Blog Posts
  • Writing Advice
  • Admissions Writing Advice
  • Book Writing Advice
  • Short Story Advice
  • Employment Writing Advice
  • Business Writing Advice
  • Web Content Advice
  • Article Writing Advice
  • Magazine Writing Advice
  • Grammar Advice
  • Dialect Advice
  • Editing Advice
  • Freelance Advice
  • Legal Writing Advice
  • Poetry Advice
  • Graphic Design Advice
  • Logo Design Advice
  • Translation Advice
  • Blog Reviews
  • Short Story Award Winners
  • Scholarship Winners

Advance your scientific manuscript with expert editing

Advance your scientific manuscript with expert editing

what is randomization in research design

  • Voxco Online
  • Voxco Panel Management
  • Voxco Panel Portal
  • Voxco Audience
  • Voxco Mobile Offline
  • Voxco Dialer Cloud
  • Voxco Dialer On-premise
  • Voxco TCPA Connect
  • Voxco Analytics
  • Voxco Text & Sentiment Analysis

what is randomization in research design

  • 40+ question types
  • Drag-and-drop interface
  • Skip logic and branching
  • Multi-lingual survey
  • Text piping
  • Question library
  • CSS customization
  • White-label surveys
  • Customizable ‘Thank You’ page
  • Customizable survey theme
  • Reminder send-outs
  • Survey rewards
  • Social media
  • Website surveys
  • Correlation analysis
  • Cross-tabulation analysis
  • Trend analysis
  • Real-time dashboard
  • Customizable report
  • Email address validation
  • Recaptcha validation
  • SSL security

Take a peek at our powerful survey features to design surveys that scale discoveries.

Download feature sheet.

  • Hospitality
  • Financial Services
  • Academic Research
  • Customer Experience
  • Employee Experience
  • Product Experience
  • Market Research
  • Social Research
  • Data Analysis

Explore Voxco 

Need to map Voxco’s features & offerings? We can help!

Watch a Demo 

Download Brochures 

Get a Quote

  • NPS Calculator
  • CES Calculator
  • A/B Testing Calculator
  • Margin of Error Calculator
  • Sample Size Calculator
  • CX Strategy & Management Hub
  • Market Research Hub
  • Patient Experience Hub
  • Employee Experience Hub
  • NPS Knowledge Hub
  • Market Research Guide
  • Customer Experience Guide
  • The Voxco Guide to Customer Experience
  • Survey Research Guides
  • Survey Template Library
  • Webinars and Events
  • Feature Sheets
  • Try a sample survey
  • Professional services

what is randomization in research design

Get exclusive insights into research trends and best practices from top experts! Access Voxco’s ‘State of Research Report 2024 edition’ .

We’ve been avid users of the Voxco platform now for over 20 years. It gives us the flexibility to routinely enhance our survey toolkit and provides our clients with a more robust dataset and story to tell their clients.

VP Innovation & Strategic Partnerships, The Logit Group

  • Client Stories
  • Voxco Reviews
  • Why Voxco Research?
  • Careers at Voxco
  • Vulnerabilities and Ethical Hacking

Explore Regional Offices

  • Survey Software The world’s leading omnichannel survey software
  • Online Survey Tools Create sophisticated surveys with ease.
  • Mobile Offline Conduct efficient field surveys.
  • Text Analysis
  • Close The Loop
  • Automated Translations
  • NPS Dashboard
  • CATI Manage high volume phone surveys efficiently
  • Cloud/On-premise Dialer TCPA compliant Cloud on-premise dialer
  • IVR Survey Software Boost productivity with automated call workflows.
  • Analytics Analyze survey data with visual dashboards
  • Panel Manager Nurture a loyal community of respondents.
  • Survey Portal Best-in-class user friendly survey portal.
  • Voxco Audience Conduct targeted sample research in hours.
  • Predictive Analytics
  • Customer 360
  • Customer Loyalty
  • Fraud & Risk Management
  • AI/ML Enablement Services
  • Credit Underwriting

what is randomization in research design

Find the best survey software for you! (Along with a checklist to compare platforms)

Get Buyer’s Guide

  • 100+ question types
  • SMS surveys
  • Banking & Financial Services
  • Retail Solution
  • Risk Management
  • Customer Lifecycle Solutions
  • Net Promoter Score
  • Customer Behaviour Analytics
  • Customer Segmentation
  • Data Unification

Explore Voxco 

Watch a Demo 

Download Brochures 

  • CX Strategy & Management Hub
  • Blogs & White papers
  • Case Studies

Find the best customer experience platform

Uncover customer pain points, analyze feedback and run successful CX programs with the best CX platform for your team.

Get the Guide Now

what is randomization in research design

VP Innovation & Strategic Partnerships, The Logit Group

  • Why Voxco Intelligence?
  • Our clients
  • Client stories
  • Featuresheets

COVER IMAGE 1

Value and Techniques of Randomization in Experimental Design

  • October 5, 2021

SHARE THE ARTICLE ON

What is randomization in experimental design?

Randomization in an experiment refers to a random assignment of participants to the treatment in an experiment. OR, for instance we can say that randomization is assignment of treatment to the participants randomly. 

For example : a teacher decides to take a viva in the class and randomly starts asking the students.

Here, all the participants have equal chance of getting into the experiment. Like with our example, every student has equal chance of getting a question asked by the teacher. Randomization helps you stand a chance against biases. It can be a case when you select a group using some category, there can be personal biases or accidental biases. But when the selection is random, you don’t get a chance to look into each participant and hence the groups are fairly divided. 

See Voxco survey software in action with a Free demo.

Why is randomization in experimental design important?

As mentioned earlier, randomization minimizes the biases. But apart from that it also provides various benefits when adopted as a selection method in experiments. 

  • Randomization prevents biases and makes the results fair. 
  • It makes sure that the groups made for conducting an experiments are as similar as possible to each other so that the results come out as accurate as possible. 
  • It also helps control the lurking variables which can affect the results to be different from what they are supposed to be. 
  • The sample that is randomly selected is meant to be representative of the population and since it doesn’t involve researcher’s interference, it is fairly selected. 
  • Randomizing the experiments helps you get the best cause-effect relationships between the variables. 
  • It makes sure that the random selection is done from all genders, casts, races and the groups are not too different from each other. 
  • Researchers control values of the explanatory variable with a randomization procedure. So, if we see a relationship between the explanatory variable and response variables, we can say that it is a causal one.

What are different types of randomization techniques in experimental design?

Randomization can be subject to errors when it comes to “randomly” selecting the participants. As for our example, the teacher surely said she will ask questions to random students, but it is possible that she might subconsciously target mischievous students. This means we think the selection is random, but most of the times it isn’t. 

Hence, to avoid these unintended biases, there are three techniques that researchers use commonly:

Simple Random Sampling

SIMPLE RANDOM SAMPLING

In simple random sampling. The selection of the participants is completely luck and probability based. Every participant has an equal chance of getting into the sample. 

This method is theoretically easy to understand and works best against a sample size of 100 or more. The main factor here is that every participants gets an equal chance of being included in a treatment, and this is why it is also called the method of chances. 

Methods of simple random sampling:

  • Lottery – Like the old ways, the participants are given a number each. The selection is done by randomly drawing a number from the pot. 
  • Random numbers – Similar to the lottery method, this includes giving numbers to the participants and using random number table.

Example : A teacher wants to know how good her class is in mathematics. So she will give each student a number and will draw numbers from a bunch of chits. This will include a randomly selected sample size and It won’t have any biases depending on teachers interference. 

Customer experience

Permuted Block Randomization

It is a method of randomly assigning participants to the treatment groups. A block is a group is randomly ordered treatment group. All the blocks have a fair balance of treatment assignment throughout. 

Example : A teacher wants to enroll student in two treatments A and B. and she plans to enroll 6 students per week. The blocks would look like this:

Week 1- AABABA

Week 2- BABAAB

Week 3- BBABAB

Each block has 9 A and 9 B. both treatments have been balanced even though their ordering is random. 

There are two types of block assignment in permuted block randomization:

  • Random number generator

Generate a random number for each treatment that is assign in the block. In our example, the block “Week 1” would look like- A(4), A(5), B(56), A(33), B(40), A(10)

Then arrange these treatments according to their number is ascending order, the new treatment could be- AAABB

  • Permutations

This includes listing the permutations for the block. Simply, writing down all possible variations. 

The formula is b! / ((b/2)! (b/2)!)

For our example, the block sixe is 6, so the possible arrangements would be:

6! / ((6/2)! (6/2)!)

6! / (3)! x (3)!

6x5x4x3x2x1 / (3x2x1) x (3x2x1)

20 possible arrangements. 

Stratified Random Sampling

STRATIFIED RANDOM SAMPLING

The word “strata” refers to characteristics. Every population has characteristics like gender, cast, age, background etc. Stratified random sampling helps you consider these stratum while sampling the population. The stratum can be pre-defined or you can define them yourself any way you think is best suitable for your study. 

Example: you want to categorize population of a state depending on literacy. Your categories would be- (1) Literate (2) Intermediate (3) Illiterate. 

Steps to conduct stratified random sampling: 

  • Define the target audience.
  • Identify the stratification variables and decide the number of strata to be used.
  • Using a pre-existent sampling frame or by creating a frame that includes all the information of the stratification variable for the elements in the target audience.
  • Make changes after evaluating the sampling frame depending on its coverage.
  • Each stratum should be unique and should cover each and every member of the population. 
  • Assign a random, unique number to each element.
  • Define the size of each stratum according to your requirement. 
  • The researcher can then select random elements from each stratum to form the sample. 

Explore all the survey question types possible on Voxco

Explore Voxco Survey Software

Online page new product image3 02.png 1

+ Omnichannel Survey Software 

+ Online Survey Software 

+ CATI Survey Software 

+ IVR Survey Software 

+ Market Research Tool

+ Customer Experience Tool 

+ Product Experience Software 

+ Enterprise Survey Software 

Cover 12 scaled

Voxco partners with Rybbon to automate rewards for its Omnichannel Surveys!

Voxco announced its latest partnership with Rybbon which enables respondents to choose and receive a reward moments after completing a survey. Rybbon’s curated reward offerings

Importance of Optimising Post purchase Customer Experience1

Importance of Optimising Post-purchase Customer Experience

Importance of Optimising Post-purchase Customer Experience Transform your Post-purchase Customer Experience Create an actionable feedback collection survey process. Download Now SHARE THE ARTICLE ON Share

Randomization in experimental design Post-Purchase Customer Experience

How to conduct patient satisfaction survey

How to conduct patient satisfaction survey SHARE THE ARTICLE ON Table of Contents Patient satisfaction surveys allow you to translate patients’ feedback into meaningful and

Predictive analytics in healthcare

Predictive Analytics SHARE THE ARTICLE ON Share on facebook Share on twitter Share on linkedin Table of Contents Predictive analytics is a statistical technique that

Measuring Customer Experience (CX)

Measuring Customer Experience (CX) SHARE THE ARTICLE ON Table of Contents What is Customer Experience? Customer experience, or CX, encompasses customers’ overall perception of a

Field Research1

Field Research : Definition, Examples & Methodology

Field Research : Definition, Examples & Methodology Try a free Voxco Online sample survey! Unlock your Sample Survey SHARE THE ARTICLE ON Table of Contents

We use cookies in our website to give you the best browsing experience and to tailor advertising. By continuing to use our website, you give us consent to the use of cookies. Read More

Name Domain Purpose Expiry Type
hubspotutk www.voxco.com HubSpot functional cookie. 1 year HTTP
lhc_dir_locale amplifyreach.com --- 52 years ---
lhc_dirclass amplifyreach.com --- 52 years ---
Name Domain Purpose Expiry Type
_fbp www.voxco.com Facebook Pixel advertising first-party cookie 3 months HTTP
__hstc www.voxco.com Hubspot marketing platform cookie. 1 year HTTP
__hssrc www.voxco.com Hubspot marketing platform cookie. 52 years HTTP
__hssc www.voxco.com Hubspot marketing platform cookie. Session HTTP
Name Domain Purpose Expiry Type
_gid www.voxco.com Google Universal Analytics short-time unique user tracking identifier. 1 days HTTP
MUID bing.com Microsoft User Identifier tracking cookie used by Bing Ads. 1 year HTTP
MR bat.bing.com Microsoft User Identifier tracking cookie used by Bing Ads. 7 days HTTP
IDE doubleclick.net Google advertising cookie used for user tracking and ad targeting purposes. 2 years HTTP
_vwo_uuid_v2 www.voxco.com Generic Visual Website Optimizer (VWO) user tracking cookie. 1 year HTTP
_vis_opt_s www.voxco.com Generic Visual Website Optimizer (VWO) user tracking cookie that detects if the user is new or returning to a particular campaign. 3 months HTTP
_vis_opt_test_cookie www.voxco.com A session (temporary) cookie used by Generic Visual Website Optimizer (VWO) to detect if the cookies are enabled on the browser of the user or not. 52 years HTTP
_ga www.voxco.com Google Universal Analytics long-time unique user tracking identifier. 2 years HTTP
_uetsid www.voxco.com Microsoft Bing Ads Universal Event Tracking (UET) tracking cookie. 1 days HTTP
vuid vimeo.com Vimeo tracking cookie 2 years HTTP
Name Domain Purpose Expiry Type
__cf_bm hubspot.com Generic CloudFlare functional cookie. Session HTTP
Name Domain Purpose Expiry Type
_gcl_au www.voxco.com --- 3 months ---
_gat_gtag_UA_3262734_1 www.voxco.com --- Session ---
_clck www.voxco.com --- 1 year ---
_ga_HNFQQ528PZ www.voxco.com --- 2 years ---
_clsk www.voxco.com --- 1 days ---
visitor_id18452 pardot.com --- 10 years ---
visitor_id18452-hash pardot.com --- 10 years ---
lpv18452 pi.pardot.com --- Session ---
lhc_per www.voxco.com --- 6 months ---
_uetvid www.voxco.com --- 1 year ---

National Institutes of Health Office of Disease Prevention logo

Subscribe Follow Watch

Education & Training

Research methods for multilevel interventions to reduce health disparities.

There is a growing recognition that successful preventive interventions need to address social determinants of health. Effectively reducing health disparities often requires the use of study designs that go beyond the individual level or traditional randomized controlled trial; however, studies that evaluate multilevel interventions face unique challenges and require specialized design and analytical approaches.

Researchers can use the information and resources below to learn more about appropriate research methods for evaluating complex multilevel interventions to reduce health disparities.

Supplemental Journal Issue: Design and Analytic Methods to Evaluate Multilevel Interventions to Reduce Health Disparities

ODP sponsored a supplemental issue of Prevention Science in July 2024 that brings together new ideas and examples of strong applications of existing design and analytic methods for studies aimed at reducing health disparities. Papers also include strategies for developing multilevel interventions that balance methodological rigor with design feasibility, acceptability, and ethical considerations.

See the list below for the 12 open-access articles featured in the supplemental issue . Guest editors David M. Murray, Ph.D. (ODP), and Melody S. Goodman, Ph.D. (New York University), discuss key points of the papers and provide more background about the issue in their accompanying commentary.

You can also refer to the quick guide below that ODP developed to help you navigate the issue and identify the papers that are most relevant to your work.

  • Sample Size Calculations for Stepped Wedge Designs with Treatment Effects that May Change with the Duration of Time under Intervention (Hughes et al.)
  • Sample Size Requirements to Test Subgroup-Specific Treatment Effects in Cluster-Randomized Trials (Wang et al.)
  • Multilevel Intervention Stepped Wedge Designs (MLI-SWDs) (Sperger et al.)
  • Optimizing Interventions for Equitability: Some Initial Ideas (Strayhorn et al.)
  • Operationalizing Primary Outcomes to Achieve Reach, Effectiveness, and Equity in Multilevel Interventions (Guastaferro et al.)
  • Evaluating Effects of Multilevel Interventions on Disparity in Health and Healthcare Decisions (Jackson et al.)
  • Considerations for Subgroup Analyses in Cluster-Randomized Trials Based on Aggregated Individual-Level Predictors (Williamson et al.)
  • Using Power Analysis to Choose the Unit of Randomization, Outcome, and Approach for Subgroup Analysis for a Multilevel Randomized Controlled Clinical Trial to Reduce Disparities in Cardiovascular Health (Harrall et al.)
  • Application of a Heuristic Framework for Multilevel Interventions to Eliminate the Impact of Unjust Social Processes and Other Harmful Social Determinants of Health (Guilamo-Ramos et al.)
  • Mixed-Method, Multilevel Clustered-Randomized Control Trial for Menstrual Health Disparities (Houghton and Adkins-Jackson)
  • “We don’t separate out these things. Everything is related”: Partnerships with Indigenous Communities to Design, Implement, and Evaluate Multilevel Interventions to Reduce Health Disparities (Rink et al.)
  • A Hybrid Pragmatic and Factorial Cluster Randomized Controlled Trial for an Anti-racist, Multilevel Intervention to Improve Mental Health Equity in High Schools (Mulawa et al.)  

Quick Guide to the 2024 Supplemental Issue of Prevention Science

Note: The links in this document take you to an external, non-federal website. ODP is not responsible for, nor can we endorse, external content. You will be subject to the external website’s privacy policy when you leave our site.

Additional Resources for Research Methods and Health Disparities

  • NIH Research Methods Resources – Information and sample size calculators to help investigators design and analyze their studies using the best available methods
  • Methods: Mind the Gap Webinar Series – Topics include research design, measurement, data analysis, and other methods in prevention science
  • Pragmatic and Group-Randomized Trials in Public Health and Medicine: Online Course – 7-part online course for designing and analyzing group-randomized trials
  • Training in Prevention Research Methods – Federal courses, webinars, online tutorials, and other training in prevention research methodology, focused on study and intervention design, data analysis, and measurement
  • NIH Funding Opportunities in Prevention-Related Health Disparities Research – Prevention-related funding opportunities in health disparities, health equity, and social determinants of health research

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings

Preview improvements coming to the PMC website in October 2024. Learn More or Try it out now .

  • Advanced Search
  • Journal List
  • J Hum Reprod Sci
  • v.4(1); Jan-Apr 2011

This article has been retracted.

An overview of randomization techniques: an unbiased assessment of outcome in clinical research.

Department of Biostatics, National Institute of Animal Nutrition & Physiology (NIANP), Adugodi, Bangalore, India

Randomization as a method of experimental control has been extensively used in human clinical trials and other biological experiments. It prevents the selection bias and insures against the accidental bias. It produces the comparable groups and eliminates the source of bias in treatment assignments. Finally, it permits the use of probability theory to express the likelihood of chance as a source for the difference of end outcome. This paper discusses the different methods of randomization and use of online statistical computing web programming ( www.graphpad.com /quickcalcs or www.randomization.com ) to generate the randomization schedule. Issues related to randomization are also discussed in this paper.

INTRODUCTION

A good experiment or trial minimizes the variability of the evaluation and provides unbiased evaluation of the intervention by avoiding confounding from other factors, which are known and unknown. Randomization ensures that each patient has an equal chance of receiving any of the treatments under study, generate comparable intervention groups, which are alike in all the important aspects except for the intervention each groups receives. It also provides a basis for the statistical methods used in analyzing the data. The basic benefits of randomization are as follows: it eliminates the selection bias, balances the groups with respect to many known and unknown confounding or prognostic variables, and forms the basis for statistical tests, a basis for an assumption of free statistical test of the equality of treatments. In general, a randomized experiment is an essential tool for testing the efficacy of the treatment.

In practice, randomization requires generating randomization schedules, which should be reproducible. Generation of a randomization schedule usually includes obtaining the random numbers and assigning random numbers to each subject or treatment conditions. Random numbers can be generated by computers or can come from random number tables found in the most statistical text books. For simple experiments with small number of subjects, randomization can be performed easily by assigning the random numbers from random number tables to the treatment conditions. However, in the large sample size situation or if restricted randomization or stratified randomization to be performed for an experiment or if an unbalanced allocation ratio will be used, it is better to use the computer programming to do the randomization such as SAS, R environment etc.[ 1 – 6 ]

REASON FOR RANDOMIZATION

Researchers in life science research demand randomization for several reasons. First, subjects in various groups should not differ in any systematic way. In a clinical research, if treatment groups are systematically different, research results will be biased. Suppose that subjects are assigned to control and treatment groups in a study examining the efficacy of a surgical intervention. If a greater proportion of older subjects are assigned to the treatment group, then the outcome of the surgical intervention may be influenced by this imbalance. The effects of the treatment would be indistinguishable from the influence of the imbalance of covariates, thereby requiring the researcher to control for the covariates in the analysis to obtain an unbiased result.[ 7 , 8 ]

Second, proper randomization ensures no a priori knowledge of group assignment (i.e., allocation concealment). That is, researchers, subject or patients or participants, and others should not know to which group the subject will be assigned. Knowledge of group assignment creates a layer of potential selection bias that may taint the data.[ 9 ] Schul and Grimes stated that trials with inadequate or unclear randomization tended to overestimate treatment effects up to 40% compared with those that used proper randomization. The outcome of the research can be negatively influenced by this inadequate randomization.

Statistical techniques such as analysis of covariance (ANCOVA), multivariate ANCOVA, or both, are often used to adjust for covariate imbalance in the analysis stage of the clinical research. However, the interpretation of this post adjustment approach is often difficult because imbalance of covariates frequently leads to unanticipated interaction effects, such as unequal slopes among subgroups of covariates.[ 1 ] One of the critical assumptions in ANCOVA is that the slopes of regression lines are the same for each group of covariates. The adjustment needed for each covariate group may vary, which is problematic because ANCOVA uses the average slope across the groups to adjust the outcome variable. Thus, the ideal way of balancing covariates among groups is to apply sound randomization in the design stage of a clinical research (before the adjustment procedure) instead of post data collection. In such instances, random assignment is necessary and guarantees validity for statistical tests of significance that are used to compare treatments.

TYPES OF RANDOMIZATION

Many procedures have been proposed for the random assignment of participants to treatment groups in clinical trials. In this article, common randomization techniques, including simple randomization, block randomization, stratified randomization, and covariate adaptive randomization, are reviewed. Each method is described along with its advantages and disadvantages. It is very important to select a method that will produce interpretable and valid results for your study. Use of online software to generate randomization code using block randomization procedure will be presented.

Simple randomization

Randomization based on a single sequence of random assignments is known as simple randomization.[ 3 ] This technique maintains complete randomness of the assignment of a subject to a particular group. The most common and basic method of simple randomization is flipping a coin. For example, with two treatment groups (control versus treatment), the side of the coin (i.e., heads - control, tails - treatment) determines the assignment of each subject. Other methods include using a shuffled deck of cards (e.g., even - control, odd - treatment) or throwing a dice (e.g., below and equal to 3 - control, over 3 - treatment). A random number table found in a statistics book or computer-generated random numbers can also be used for simple randomization of subjects.

This randomization approach is simple and easy to implement in a clinical research. In large clinical research, simple randomization can be trusted to generate similar numbers of subjects among groups. However, randomization results could be problematic in relatively small sample size clinical research, resulting in an unequal number of participants among groups.

Block randomization

The block randomization method is designed to randomize subjects into groups that result in equal sample sizes. This method is used to ensure a balance in sample size across groups over time. Blocks are small and balanced with predetermined group assignments, which keeps the numbers of subjects in each group similar at all times.[ 1 , 2 ] The block size is determined by the researcher and should be a multiple of the number of groups (i.e., with two treatment groups, block size of either 4, 6, or 8). Blocks are best used in smaller increments as researchers can more easily control balance.[ 10 ]

After block size has been determined, all possible balanced combinations of assignment within the block (i.e., equal number for all groups within the block) must be calculated. Blocks are then randomly chosen to determine the patients’ assignment into the groups.

Although balance in sample size may be achieved with this method, groups may be generated that are rarely comparable in terms of certain covariates. For example, one group may have more participants with secondary diseases (e.g., diabetes, multiple sclerosis, cancer, hypertension, etc.) that could confound the data and may negatively influence the results of the clinical trial.[ 11 ] Pocock and Simon stressed the importance of controlling for these covariates because of serious consequences to the interpretation of the results. Such an imbalance could introduce bias in the statistical analysis and reduce the power of the study. Hence, sample size and covariates must be balanced in clinical research.

Stratified randomization

The stratified randomization method addresses the need to control and balance the influence of covariates. This method can be used to achieve balance among groups in terms of subjects’ baseline characteristics (covariates). Specific covariates must be identified by the researcher who understands the potential influence each covariate has on the dependent variable. Stratified randomization is achieved by generating a separate block for each combination of covariates, and subjects are assigned to the appropriate block of covariates. After all subjects have been identified and assigned into blocks, simple randomization is performed within each block to assign subjects to one of the groups.

The stratified randomization method controls for the possible influence of covariates that would jeopardize the conclusions of the clinical research. For example, a clinical research of different rehabilitation techniques after a surgical procedure will have a number of covariates. It is well known that the age of the subject affects the rate of prognosis. Thus, age could be a confounding variable and influence the outcome of the clinical research. Stratified randomization can balance the control and treatment groups for age or other identified covariates. Although stratified randomization is a relatively simple and useful technique, especially for smaller clinical trials, it becomes complicated to implement if many covariates must be controlled.[ 12 ] Stratified randomization has another limitation; it works only when all subjects have been identified before group assignment. However, this method is rarely applicable because clinical research subjects are often enrolled one at a time on a continuous basis. When baseline characteristics of all subjects are not available before assignment, using stratified randomization is difficult.[ 10 ]

Covariate adaptive randomization

One potential problem with small to moderate size clinical research is that simple randomization (with or without taking stratification of prognostic variables into account) may result in imbalance of important covariates among treatment groups. Imbalance of covariates is important because of its potential to influence the interpretation of a research results. Covariate adaptive randomization has been recommended by many researchers as a valid alternative randomization method for clinical research.[ 8 , 13 ] In covariate adaptive randomization, a new participant is sequentially assigned to a particular treatment group by taking into account the specific covariates and previous assignments of participants.[ 7 ] Covariate adaptive randomization uses the method of minimization by assessing the imbalance of sample size among several covariates.

Using the online randomization http://www.graphpad.com/quickcalcs/index.cfm , researcher can generate randomization plan for treatment assignment to patients. This online software is very simple and easy to implement. Up to 10 treatments can be allocated to patients and the replication of treatment can also be performed up to 9 times. The major limitations of this software is that once the randomization plan is generated, same randomization plan cannot be generated as this uses the seed point of local computer clock and is not displayed for further use. Other limitation of this online software Maximum of only 10 treatments can be assigned to patients. Entering the web address http://www.graphpad.com/quickcalcs/index.cfm on address bar of any browser, the page of graphpad appears with number of options. Select the option of “Random Numbers” and then press continue, Random Number Calculator with three options appears. Select the tab “Randomly assign subjects to groups” and press continue. In the next page, enter the number of subjects in each group in the tab “Assign” and select the number of groups from the tab “Subjects to each group” and keep number 1 in repeat tab if there is no replication in the study. For example, the total number of patients in a three group experimental study is 30 and each group will assigned to 10 patients. Type 10 in the “Assign” tab and select 3 in the tab “Subjects to each group” and then press “do it” button. The results is obtained as shown as below (partial output is presented)

Another randomization online software, which can be used to generate randomization plan is http://www.randomization.com . The seed for the random number generator[ 14 , 15 ] (Wichmann and Hill, 1982, as modified by McLeod, 1985) is obtained from the clock of the local computer and is printed at the bottom of the randomization plan. If a seed is included in the request, it overrides the value obtained from the clock and can be used to reproduce or verify a particular plan. Up to 20 treatments can be specified. The randomization plan is not affected by the order in which the treatments are entered or the particular boxes left blank if not all are needed. The program begins by sorting treatment names internally. The sorting is case sensitive, however, so the same capitalization should be used when recreating an earlier plan. Example of 10 patients allocating to two groups (each with 5 patients), first the enter the treatment labels in the boxes, and enter the total number of patients that is 10 in the tab “Number of subjects per block” and enter the 1 in the tab “Number of blocks” for simple randomization or more than one for Block randomization. The output of this online software is presented as follows.

The benefits of randomization are numerous. It ensures against the accidental bias in the experiment and produces comparable groups in all the respect except the intervention each group received. The purpose of this paper is to introduce the randomization, including concept and significance and to review several randomization techniques to guide the researchers and practitioners to better design their randomized clinical trials. Use of online randomization was effectively demonstrated in this article for benefit of researchers. Simple randomization works well for the large clinical trails ( n >100) and for small to moderate clinical trials ( n <100) without covariates, use of block randomization helps to achieve the balance. For small to moderate size clinical trials with several prognostic factors or covariates, the adaptive randomization method could be more useful in providing a means to achieve treatment balance.

Source of Support: Nil

Conflict of Interest: None declared.

ORIGINAL RESEARCH article

Coeliac disease and postpartum depression: are they linked a two-sample mendelian randomization study.

Xiaomeng Yu

  • 1 Departments of Obstetrics, Women and Children’s Hospital of Jinzhou, Jinzhou, Liaoning, China
  • 2 Departments of Surgery, Jinzhou Second Hospital, Jinzhou, Liaoning, China
  • 3 Center for Reproductive Medicine, The First Affiliated Hospital of Jinzhou Medical University, Jinzhou, Liaoning, China

Background: To explore the potential causal associations between coeliac disease(CD) and postpartum depression(PPD) by using two-sample Mendelian randomization(MR) analysis.

Methods: The IEU OPEN GWAS project was utilized to identify genetic loci strongly associated with CD as instrumental variables (IVs), and MR analysis was performed using inverse variance weighting(IVW), weighted median, weighted model, and MR-Egger. MR analyses were used to examine whether there was a link between CD and PPD, with an OR and 95% CI. Meanwhile, the relationship between CD and depression(DP) was analyzed using MR. The sensitivity analysis was conducted using MR-Egger intercept analysis, Cochran’s Q test, and leave-one-out analysis.

Results: From the GWAS online database, 13 single-nucleotide polymorphisms (SNPs) were chosen as IVs. The IVW results showed a relationship between PPD and a genetically predicted risk of developing CD (OR = 1.022, 95% CI: 1.001–1.044, P = 0.043). However, the presence of DP was not linked with CD (OR=0.991, 95% CI: 0.978–1.003, P=0.151). Potential horizontal pleiotropy was not discovered using MR-Egger intercept analysis (PPD: P=0.725; DP: P=0.785), and Cochran’s Q test for heterogeneity revealed no significant heterogeneity (PPD: P=0.486; DP: P=0.909). A leave-one-out analysis found that individual SNPs had minimal effect on overall causal estimations.

Conclusion: MR research discovered a link between CD and PPD.

1 Introduction

Postpartum depression (PPD) is a frequent puerperal mental illness in which women have major depressive symptoms or characteristic depressive episodes throughout the puerperium ( 1 ). It is a frequent puerperal mental condition characterized by a persistent and profoundly depressed mood throughout the puerperium, as well as a variety of symptoms such as despair, sorrow, irritability, and even suicidal ideation. These symptoms significantly impair the mother’s capacity to care for her unborn child ( 2 ). The aetiology and pathophysiology of PPD remain unknown, and the illness has a worldwide prevalence rate ranging from 7% to 9% ( 3 ). It is thought to be closely linked to genetic, neurobiochemical, and psychosocial factors, in addition to maternal helplessness, bad mood, lower energy, and other negative mental characteristics. A recent large-scale clinical study involving over 1 million women from 138 countries discovered that PPD symptoms were most commonly self-reported by women between the ages of 18 and 24, and that women were twice as likely as men to experience depression during childbirth. PPD prevalence decreases with age, reaching a low of 6.5% among persons aged 35 to 39. First-time mothers are more likely to develop comorbid depression than women who have previously given birth. Twin births had a greater symptom load than single births, with 11.3 percent of twin mothers experiencing depressive symptoms compared to 8.3 percent of single-birth mothers. This disparity is especially pronounced in women over the age of 40 ( 4 ). A separate study found that PPD can last up to three years, which is far longer than previously thought ( 5 ). PPD has a long duration and a significant impact, hurting relationships, families, society, and maternal health and placing mothers and newborns at risk. Identifying people at high risk of having PPD, changing behaviors, and developing preventative measures to minimize and eliminate risk factors for PPD are all effective approaches to reducing the condition’s prevalence.

Coeliac disease (CD), also known as gluten-sensitive enteropathy, is a common immune-mediated inflammatory illness of the small intestine characterized by the body’s sensitivity to dietary gluten and associated proteins in genetically susceptible individuals ( 6 ). According to epidemiological research, CD affects around 1% of the global population ( 7 ). CD is widespread in most countries at a rate of 1:300–1:70, according to an epidemiological study based on serological testing and biopsy confirmation ( 8 ). Another meta-analysis discovered an even higher worldwide frequency of 1.4% based on serological tests and 0.7% based on biopsies ( 9 ). Every year, the frequency of CD rises, and it has been associated with a variety of conditions, including unexplained infertility in women, intrauterine growth restriction, and recurrent miscarriage ( 10 ). Recent research has found that PPD is frequent in CD women on gluten-free diet GFD, particularly in those with previous menstrual disorders. we suggest screening for PPD in CD for early detection and treatment of this condition ( 11 ). Less study has been conducted on the relationship between CD and PPD, and it is still unclear whether one exists.

Mendelian randomization (MR) is a data analysis technique for evaluating etiological inferences in epidemiological studies that is based on whole genome sequencing data and uses genetic variants with strong correlations to exposures as instrumental variables to assess causal associations between exposures and outcomes ( 12 ). It is effective at reducing prejudice. In the current study, MR was used to look into any possible causal links between CD and PPD.

2.1 Study design

We evaluated the probable causal relationship between CD phenotypes and the occurrence of PPD in this study by using the genetic pooled dataset from the genome-wide association study (GWAS) for a two-sample MR analysis with CD-related phenotypes as the exposure factor and the presence of PPD as the outcome. Meanwhile, in the risk factor analysis of PPD, we used depression (DP) as the outcome to investigate whether CD mediates the causal relationship with PPD via DP.

2.2 GWAS data source

The exposure variable is celiac disease, and the outcome variables are postpartum depression and depression. All GWAS data are sourced from the IEU OPEN GWAS project ( https://gwas.mrcieu.ac.uk/ ). Celiac disease (ID: finn-b-K11_COELIAC) includes 16,380,438 SNPs, encompassing 1,973 cases and 210,964 controls. Postpartum depression (ID: finn-b-O15_POSTPART_DEPR) includes 16,376,275 SNPs, with 7,604 cases and 59,601 controls. Depression (ID: finn-b-F5_DEPRESSIO) includes 16,380,457 SNPs, with 23,424 cases and 192,220 controls. To ensure ethnic homogeneity, all participants in the sample are of European ancestry, as detailed in Table 1 .

www.frontiersin.org

Table 1 Summary of genome-wide association study data in this Mendelian randomization study.

2.3 Selection of IVs

The instrument variables (IVs) in this study were needed to meet the following standards ( 13 ): (i) There was no linkage disequilibrium (LD) between single-nucleotide polymorphisms (SNPs), with r2<0.001; (ii) There was no genome-wide significant connection between SNPs and the occurrence of PPD; and (iii) SNPs did not achieve genome-wide significant association with PPD-associated symptoms. The development of puerperal sorrow was not related to SNPs across the genome (P<5×10 -8 ) ( Figure 1 ). Finally, 13 SNPs associated with CD were included. The F-value was determined using the following method to determine the IVs’ strength in order to rule out any potential weak instrumental variable bias between IVs and exposure factors ( 14 ). F = (R 2 /1-R 2 )(n-k-1/K), R 2 = 2X(1-MAF)X(MAF)X2, where N is the exposed GWAS sample size, K is the number of single nucleotide polymorphisms, R2 is the percentage of variation explained by single nucleotide polymorphisms in the exposed database, MAF is the effect allele frequency, and is the allele effect value. When F > 10, weak instrumental variance bias is expected to be less frequent.

www.frontiersin.org

Figure 1 Hypotheses for Mendelian randomization design.

2.4 Statistical analyses

In our two-sample MR, we used CD as the exposure and PPD and DP as the outcomes. For all studies, the “TwoSampleMR” analytic technique was utilized. For all analyses in R (4.3.1), the “TwoSampleMR” package, version 0.5.7, was utilized. The outcome-related SNPs were removed using PhenoScanner V2. The outliers were removed using MR-PRESSO. To study the causal influence, inverse variance weighting (IVW), weighted median, weighted model, and MR-Egger techniques were utilized. Odds ratios (OR) and 95% confidence intervals (CI) were calculated to assess the probable causal relationship between CD and the chance of developing prenatal depression. Possibility of a causal relationship. The sensitivity analysis was conducted using MR-Egger intercept analysis, Cochran’s Q test, and leave-one-out analysis.

In this work, the PhenoScanner database was utilized to search for phenotypes in line with the screening criteria for instrumental factors, and a total of 13 SNPs that were closely connected to one another without chain disequilibrium were originally screened out from the CD exposure data set. There were no SNPs with PPD significance, and the F-value statistic for 13 SNPs was larger than 10, suggesting that there were no weak instrumental factors in the MR analysis, and the MR-PRESSO analysis did not disclose any outliers. Finally, 13 SNPs were used as IVs (P<5×10 -8 , r2<0.001, clumping distance = 10,000 kb) to assess the relationship between CD and PPD.

3.2 Causal association between CD and PPD

Genetically predicted CD and PPD were shown to have a positive and statistically significant causative relationship using the IVW technique in MR analysis (OR = 1.022, 95% CI: 1.001–1.044, P = 0.043). Three additional analysis techniques Although the three methods did not support a causal relationship between CD and PPD, the four methods of IVW, weighted median, weighted mode, and MR-Egger were consistent in the direction of the results. These methods were also used to confirm the robustness of the results. The four studies’ findings all pointed in the same direction (OR > 1). Additionally, scatter plots demonstrated the general consistency of the regression lines discovered for the genetic prediction of CD on the risk of PPD ( Figure 2A ). The lack of possible horizontal pleiotropy detected by MR-Egger intercept analysis (P = 0.725) suggests that IVs do not significantly alter outcomes through mechanisms other than exposure. No significant heterogeneity was found in the Cochran’s Q test for heterogeneity (P = 0.486) ( Table 2 ; Figure 3 ). No specific single nucleotide polymorphism had an impact on the overall causal estimate, according to a leave-one-out analysis ( Figure 4A ). The preceding sensitivity analysis demonstrates that the effect OR values derived from the IVW method are quite robust.

www.frontiersin.org

Figure 2 Scatter plot of Mendelian randomization effect size for causal associations. (A) Coeliac disease and postpartum depression; (B) Coeliac disease and depression.

www.frontiersin.org

Table 2 The effects of coeliac disease on the risk of postpartum depression and depression as determined by Mendelian randomization.

www.frontiersin.org

Figure 3 Forest plot of Mendelian randomization effect sizes for causal associations. PPD, Postpartum depression; DP, Depression; OR, Odds ratio; CI, Confidence interval.

www.frontiersin.org

Figure 4 Leave-one-out plot to assess if a single variant is driving the association. (A) Coeliac disease and postpartum depression; (B) Coeliac disease and depression.

3.3 Association between CD and DP

Genetically predicted CD and DP may not be causally related, as the IVW method showed in MR investigations (OR=0.991, 95% CI: 0.978–1.003, P=0.151). The other three methods confirmed that there is no causal relationship between CD and DP. Similar scatter plots illustrated how the regression lines produced for the genetically predicted risk of CD and DP were generally consistent ( Figure 2B ). The MR-Egger intercept analysis failed to detect any potential horizontal pleiotropy (P = 0.785). No significant heterogeneity was found using the Cochran’s Q test (P = 0.909). The leave-one-out analysis failed to identify any particular SNPs that had an effect on the overall causal estimates ( Figure 4B ).

4 Discussion

Identifying the etiology of PPD is important for its prevention, diagnosis, and treatment. This study looked into the connection between CD and PPD using MR analysis. The results of the MR investigation showed a clear causal link between CD and PPD.

The association between CD and PPD has not been made clear till now. According to certain studies, people with CD can experience neurological or psychological symptoms such as anxiety, sadness, headache, peripheral neuropathy, ataxia, and epilepsy ( 15 , 16 ). These studies had some drawbacks, including small sample numbers, retrospective data, the possibility of an inaccurate CD diagnosis, and referral bias from tertiary care facilities. Some of these investigations used the presence of anti-antimelanocortin antibodies to make the diagnosis rather than duodenal histological signs or more specific autoantibodies. Concerning the connection between CD, DP, and epilepsy, there is conflicting information. Due to the superior research design, we were confident in proving causality in addition to bias in the current study employing MR techniques. It was difficult for prior observational studies to prevent bias due to confounding risk factors. This was the first study to do an MR analysis between CD and PPD risk. According to the research, CD may have a significant causal link that increases the risk of PPD.

Out of the many recognized potential risk factors for PPD, a past history of depression in the perinatal or non-perinatal era has the greatest impact and the strongest correlation with PPD ( 17 , 18 ). Depression that was present at high levels during the pregnancy may persist during labor and delivery ( 19 ). More than half of women with histories of prenatal depression (AD) also had postpartum depression, according to a study ( 20 ). On the other hand, a review of the literature showed that postpartum depression affected more than a third (39%) of women with AD ( 21 ). In order to exclude the influence of depression as a risk factor on the result, the correlation between CD and depression was examined in this study utilizing MR analysis. The results of the MR investigation showed that CD and DP did not clearly share a causal relationship. It was also proven that CD and PPD are causally connected.

This study’s strength stems from the fact that it is the first to use data from a substantial population sample in a systematic genetic technique to look into the potential of a connection between CD and the prevalence of PPD. The stability of the data was also confirmed using a number of statistical approaches, such as inverse variance weighting, weighted median, maximum likelihood ratio, MR-Egger regression analysis, and the MR-PRESSO method. However, this study also has limitations, such as the inclusion of populations from Europe, which may reduce population stratification bias, but the reliability of extrapolation to other ethnic groups may be insufficient. Therefore, it is necessary to further study the relationship between CD and PPD in other ethnic groups.

Ultimately, the present study employed two-sample MR research to carefully assess the potential causal associations between CD and the incidence of PPD at the genetic level. The results indicated that since CD was positively correlated with the chance of developing PPD, prevention of CD could have a prophylactic impact on the development of PPD.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author contributions

XY: Writing – original draft. MC: Writing – original draft, Data curation, Software. JZ: Funding acquisition, Visualization, Writing – original draft, Writing – review & editing.

The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This work was supported by Natural Science Foundation of Liaoning Province (2022-MS-394).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

1. Shorey S, Chee CYI, Ng ED, Chan YH, Tam WWS, Chong YS. Prevalence and incidence of postpartum depression among healthy mothers: A systematic review and meta-analysis. J Psychiatr Res . (2018) 104:235–48. doi: 10.1016/j.jpsychires.2018.08.001

PubMed Abstract | CrossRef Full Text | Google Scholar

2. Howard LM, Molyneaux E, Dennis CL, Rochat T, Stein A, Milgrom J. Non-psychotic mental disorders in the perinatal period. Lancet . (2014) 384:1775–88. doi: 10.1016/S0140-6736(14)61276-9

3. Ben Hayoun DH, Sultan P, Rozeznic J, Guo N, Carvalho B, Orbach-Zinger S, et al. Association of inpatient postpartum quality of recovery with postpartum depression: A prospective observational study. J Clin Anesth . (2023) 91:111263. doi: 10.1016/j.jclinane.2023.111263

4. Bradshaw H, Riddle JN, Salimgaraev R, Zhaunova L, Payne JL. Risk factors associated with postpartum depressive symptoms: A multinational study. J Affect Disord . (2022) 301:345–51. doi: 10.1016/j.jad.2021.12.121

5. Hedges VL, Heaton EC, Amaral C, Benedetto LE, Bodie CL, D'Antonio BI, et al. Estrogen withdrawal increases postpartum anxiety via oxytocin plasticity in the paraventricular hypothalamus and dorsal raphe nucleus. Biol Psychiatry . (2021) 89:929–38. doi: 10.1016/j.biopsych.2020.11.016

6. Di Simone N, Gratta M, Castellani R, D'Ippolito S, Specchia M, Scambia G, et al. Celiac disease and reproductive failures: An update on pathogenic mechanisms. Am J Reprod Immunol . (2021) 85:e13334. doi: 10.1111/aji.13334

7. Choung RS, Larson SA, Khaleghi S, Rubio-Tapia A, Ovsyannikova IG, King KS, et al. Prevalence and morbidity of undiagnosed celiac disease from a community-based study. Gastroenterology . (2017) 152:830–9.e5. doi: 10.1053/j.gastro.2016.11.043

8. Gujral N, Freeman HJ, Thomson AB. Celiac disease: prevalence, diagnosis, pathogenesis and treatment. World J Gastroenterol . (2012) 18:6036–59. doi: 10.3748/wjg.v18.i42.6036

9. Singh P, Arora A, Strand TA, Leffler DA, Catassi C, Green PH, et al. Global prevalence of celiac disease: systematic review and meta-analysis. Clin Gastroenterol Hepatol . (2018) 16:823–36.e2. doi: 10.1016/j.cgh.2017.06.037

10. Dimitrova AK, Ungaro RC, Lebwohl B, Lewis SK, Tennyson CA, Green MW, et al. Prevalence of migraine in patients with celiac disease and inflammatory bowel disease. Headache . (2013) 53:344–55. doi: 10.1111/j.1526-4610.2012.02260.x

11. Tortora R, Imperatore N, Ciacci C, Zingone F, Rispo A. High prevalence of post-partum depression in women with coeliac disease. World J Obstet Gynecol . (2015) 004:9–15. doi: 10.5317/wjog.v4.i1.9

CrossRef Full Text | Google Scholar

12. Bowden J, Holmes MV. Meta-analysis and Mendelian randomization: A review. Res Synth Methods . (2019) 10:486–96. doi: 10.1002/jrsm.1346

13. Boef AG, Dekkers OM, le Cessie S. Mendelian randomization studies: a review of the approaches used and the quality of reporting. Int J Epidemiol . (2015) 44:496–511. doi: 10.1093/ije/dyv071

14. Park JH, Wacholder S, Gail MH, Peters U, Jacobs KB, Chanock SJ, et al. Estimation of effect size distribution from genome-wide association studies and implications for future discoveries. Nat Genet . (2010) 42:570–5. doi: 10.1038/ng.610

15. Di Simone N, Gratta M, Castellani R, D'Ippolito S, Specchia M, Scambia G, et al. Celiac disease and reproductive failures: An update on pathogenic mechanisms. Am J Reprod Immunol . (2021) 85:e13334. doi: 10.1111/aji.13334

16. Ludvigsson JF, Zingone F, Tomson T, Ekbom A, Ciacci C. Increased risk of epilepsy in biopsy-verified celiac disease: a population-based cohort study. Neurology . (2012) 78:1401–7. doi: 10.1212/WNL.0b013e3182544728

17. Patton GC, Romaniuk H, Spry E, Coffey C, Olsson C, Doyle LW, et al. Prediction of perinatal depression from adolescence and before conception (VIHCS): 20-year prospective cohort study [published correction appears. Lancet . (2015) 386:875–83. doi: 10.1016/S0140-6736(14)62248-0

18. Qi W, Zhao F, Liu Y, Li Q, Hu J. Psychosocial risk factors for postpartum depression in Chinese women: a meta-analysis. BMC Pregnancy Childbirth . (2021) 21:174. doi: 10.1186/s12884-021-03657-0

19. Dlamini LP, Amelia VL, Shongwe MC, Chang PC, Chung MH. Antenatal depression across trimesters as a risk for postpartum depression and estimation of the fraction of postpartum depression attributable to antenatal depression: A systematic review and meta-analysis of cohort studies. Gen Hosp Psychiatry . (2023) 85:35–42. doi: 10.1016/j.genhosppsych.2023.09.005

20. Al Rawahi A, Al Kiyumi MH, Al Kimyani R, Al-Lawati I, Murthi S, Davidson R, et al. The Effect of Antepartum Depression on the Outcomes of Pregnancy and Development of Postpartum Depression: A prospective cohort study of Omani women. Sultan Qaboos Univ Med J . (2020) 20:e179–86. doi: 10.18295/squmj.2020.20.02.008

21. Underwood L, Waldie K, D'Souza S, Peterson ER, Morton S. A review of longitudinal studies on antenatal and postnatal depression. Arch Womens Ment Health . (2016) 19:711–20. doi: 10.1007/s00737-016-0629-1

Keywords: Mendelian randomization, coeliac disease, postpartum depression, depression, SNPs

Citation: Yu X, Cheng M and Zheng J (2024) Coeliac disease and postpartum depression: are they linked? A two-sample Mendelian randomization study. Front. Psychiatry 15:1312117. doi: 10.3389/fpsyt.2024.1312117

Received: 13 December 2023; Accepted: 08 July 2024; Published: 19 July 2024.

Reviewed by:

Copyright © 2024 Yu, Cheng and Zheng. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jindan Zheng, [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.

IMAGES

  1. randomization

    what is randomization in research design

  2. PPT

    what is randomization in research design

  3. Experimental Designs 1 Completely Randomized Design 2 Randomized

    what is randomization in research design

  4. Randomisation and Blinding

    what is randomization in research design

  5. PPT

    what is randomization in research design

  6. PPT

    what is randomization in research design

VIDEO

  1. Randomization and Trial Supply Management Software

  2. The Completely Randomized Design (CRD)

  3. What is research design? #how to design a research advantages of research design

  4. Completely Randomized Design (CRD)

  5. Randomization- Principle of Experimental Design

  6. Completely Randomized Design (CRD) in R

COMMENTS

  1. Randomisation: What, Why and How?

    Simple randomisation is a fair way of ensuring that any differences that occur between the treatment groups arise completely by chance. But - and this is the first but of many here - simple randomisation can lead to unbalanced groups, that is, groups of unequal size. This is particularly true if the trial is only small.

  2. A roadmap to using randomization in clinical trials

    Background. Various research designs can be used to acquire scientific medical evidence. The randomized controlled trial (RCT) has been recognized as the most credible research design for investigations of the clinical effectiveness of new medical interventions [1, 2].Evidence from RCTs is widely used as a basis for submissions of regulatory dossiers in request of marketing authorization for ...

  3. Randomization in clinical studies

    Randomized controlled trial is widely accepted as the best design for evaluating the efficacy of a new treatment because of the advantages of randomization (random allocation). Randomization eliminates accidental bias, including selection bias, and provides a base for allowing the use of probability theory.

  4. Principles and methods of randomization in research

    In performing randomization, it is important to consider the choice of methodology in the specific context of the trial/experiment. Sample size, population characteristics, longevity of treatment effect, number of treatment arms, and study design all factor into determining an applicable randomization schedule.

  5. A roadmap to using randomization in clinical trials

    Various research designs can be used to acquire scientific medical evidence. The randomized controlled trial (RCT) has been recognized as the most credible research design for investigations of the clinical effectiveness of new medical interventions [1, 2].Evidence from RCTs is widely used as a basis for submissions of regulatory dossiers in request of marketing authorization for new drugs ...

  6. Randomization

    Randomization is widely applied in various fields, especially in scientific research, statistical analysis, and resource allocation, to ensure fairness and validity in the outcomes. In various contexts, randomization may involve Generating Random Permutations: This is essential in various situations, such as shuffling cards. By randomly ...

  7. Randomized Controlled Trials

    Randomized controlled trials (RCTs) have traditionally been viewed as the gold standard of clinical trial design, residing at the top of the hierarchy of levels of evidence in clinical study; this is because the process of randomization can minimize differences in characteristics of the groups that may influence the outcome, thus providing the ...

  8. Random Assignment in Experiments

    In experimental research, random assignment is a way of placing participants from your sample into different treatment groups using randomization. With simple random assignment, ... In this research design, there's usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

  9. Guide to Experimental Design

    A completely randomized design vs a randomized block design. A between-subjects design vs a within-subjects design. Randomization. An experiment can be completely randomized or randomized within blocks (aka strata): In a completely randomized design, every subject is assigned to a treatment group at random.

  10. PDF How to design a randomised controlled trial

    How to design a randomised controlled trial ... or adaptive trial, your research question always returns to your PICO statement. Precision in defining a research question is a key skill; the ...

  11. Randomized experiment

    In the design of experiments, the simplest design for comparing treatments is the "completely randomized design". Some "restriction on randomization" can occur with blocking and experiments that have hard-to-change factors; additional restrictions on randomization can occur when a full randomization is infeasible or when it is desirable to ...

  12. Randomisation: what, why and how?

    Simple randomisation is a fair way of ensuring that any differences that occur between the treatment groups arise completely by chance. But - and this is the first but of many here - simple ran-domisation can lead to unbalanced groups, that is, groups of unequal size. This is particularly true if the trial is only small.

  13. Why randomize?

    The key to randomized experimental research design is in the random assignment of study subjects - for example, individual voters, precincts, media markets or some other group - into treatment or control groups. Randomization has a very specific meaning in this context. It does not refer to haphazard or casual choosing of some and not others.

  14. A simplified guide to randomized controlled trials

    Abstract. A randomized controlled trial is a prospective, comparative, quantitative study/experiment performed under controlled conditions with random allocation of interventions to comparison groups. The randomized controlled trial is the most rigorous and robust research method of determining whether a cause-effect relation exists between an ...

  15. The principle of randomization in scientific research

    Usually, statistics textbooks introduce the core aspects of experimental design as the three key elements, the four principles and the design types, which run through the whole scientific research design and determine the overall success of the research. This article discusses the principle of randomization, which is one of the four principles ...

  16. What Is a Research Design

    A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about: Your overall research objectives and approach. Whether you'll rely on primary research or secondary research. Your sampling methods or criteria for selecting subjects. Your data collection methods.

  17. Issues in Outcomes Research: An Overview of Randomization Techniques

    What Is Randomization? Randomization is the process of assigning participants to treatment and control groups, assuming that each participant has an equal chance of being assigned to any group. 12 Randomization has evolved into a fundamental aspect of scientific research methodology. Demands have increased for more randomized clinical trials in many areas of biomedical research, such as ...

  18. The Registry-Based Randomized Trial

    Randomized controlled trials are the gold standard of clinical research for comparing therapies in well-defined groups of participants. 1 Randomization avoids confounding due to unmeasured variables or to treatment selection and enables a causal interpretation of the estimated treatment effect. It has long been recognized, however, that standard explanatory clinical trials are slow, costly ...

  19. Completely randomized design

    Completely randomized design. In the design of experiments, completely randomized designs are for studying the effects of one primary factor without the need to take other nuisance variables into account. This article describes completely randomized designs that have one primary factor. The experiment compares the values of a response variable ...

  20. Randomization in Statistics and Experimental Design

    Permuted block randomization is a way to randomly allocate a participant to a treatment group, while keeping a balance across treatment groups. Each "block" has a specified number of randomly ordered treatment assignments. 3. Stratified Random Sampling. Stratified random sampling is useful when you can subdivide areas.

  21. 5.5

    In principle, randomization should protect a project because, on average, these influences will be represented randomly for the two groups of individuals. This reasoning extends to unmeasured and unknown causal factors as well. This discussion was illustrated by random assignment of subjects to treatment groups.

  22. Completely Randomized Design: The One-Factor Approach

    Completely Randomized Design (CRD) is a research methodology in which experimental units are randomly assigned to treatments without any systematic bias. CRD gained prominence in the early 20th century, largely attributed to the pioneering work of statistician Ronald A. Fisher. His method addressed the inherent variability in experimental units by randomly assigning treatments, thus countering ...

  23. Why Randomization in Experimental Design Triumphs

    Randomization in an experiment refers to a random assignment of participants to the treatment in an experiment. OR, for instance we can say that randomization is assignment of treatment to the participants randomly. For example: a teacher decides to take a viva in the class and randomly starts asking the students.

  24. Research Methods for Multilevel Interventions To Reduce Health

    There is a growing recognition that successful preventive interventions need to address social determinants of health. Effectively reducing health disparities often requires the use of study designs that go beyond the individual level or traditional randomized controlled trial; however, studies that evaluate multilevel interventions face unique challenges and require specialized design and ...

  25. An overview of randomization techniques: An unbiased assessment of

    A random number table found in a statistics book or computer-generated random numbers can also be used for simple randomization of subjects. This randomization approach is simple and easy to implement in a clinical research. In large clinical research, simple randomization can be trusted to generate similar numbers of subjects among groups.

  26. Dexmedetomidine for Reducing Mortality in Patients with Septic ...

    Original Research. Dexmedetomidine for Reducing Mortality in Patients with Septic Shock: A Randomized Controlled Trial (DecatSepsis) ... Study Design and Methods. This open-label randomized controlled trial assessed the effects of a heart rate (HR)-calibrated DEX infusion on in-hospital mortality in patients with septic shock and HR >90 beats ...

  27. Frontiers

    1 Introduction. Postpartum depression (PPD) is a frequent puerperal mental illness in which women have major depressive symptoms or characteristic depressive episodes throughout the puerperium ().It is a frequent puerperal mental condition characterized by a persistent and profoundly depressed mood throughout the puerperium, as well as a variety of symptoms such as despair, sorrow ...

  28. Nudging accurate scientific communication.

    The recent replicability crisis in social and biomedical sciences has highlighted the need for improvement in the honest transmission of scientific content. We present the results of two studies investigating whether nudges and soft social incentives enhance participants' readiness to transmit high-quality scientific news. In two online randomized experiments (Total N = 2425), participants ...