Experimental designs as commonly practiced in the social sciences are static. As an experiment runs, however, we gain knowledge about the joint distribution of treatments, covariates, and potential outcomes that can inform what the best possible design would have been. Using outcome-adaptive experimental methods, I show how causal inferences can be made more efficient without sacrificing identification, with identification resting on i.i.d. sampling of units rather than on randomization, as is the norm. In particular, I demonstrate a method, NeymanUCB, which ensures that treatment allocations converge to the optimal design of an experiment (the Neyman allocation). Simulations and survey-experimental evidence show that NeymanUCB can substantially improve the sample efficiency of experiments.
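To make the target of the adaptive procedure concrete, the following is a minimal illustrative sketch, not the paper's NeymanUCB implementation: in a two-arm experiment, the Neyman allocation assigns units to arm k in proportion to its outcome standard deviation sigma_k, and a simple tracking rule that samples whichever arm is most under-represented relative to its estimated Neyman share will converge to that allocation. All variable names and the specific tracking rule here are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch only (not the paper's NeymanUCB): adaptively steer a
# two-arm experiment toward the Neyman allocation, which samples arm k in
# proportion to its (unknown) outcome standard deviation sigma_k.
rng = np.random.default_rng(0)
sigmas = np.array([1.0, 3.0])        # true outcome std devs (unknown to the method)
n_total = 20_000
counts = np.zeros(2, dtype=int)       # units assigned to each arm
sums = np.zeros(2)                    # running sum of outcomes per arm
sumsq = np.zeros(2)                   # running sum of squared outcomes per arm

# Burn-in: a few draws per arm so variance estimates exist.
for k in range(2):
    for _ in range(5):
        y = rng.normal(0.0, sigmas[k])
        counts[k] += 1; sums[k] += y; sumsq[k] += y * y

for _ in range(n_total - int(counts.sum())):
    means = sums / counts
    variances = np.maximum(sumsq / counts - means**2, 1e-12)
    stds = np.sqrt(variances)
    target = stds / stds.sum()        # estimated Neyman allocation shares
    # Assign the next unit to the arm most under-sampled relative to target.
    k = int(np.argmax(target - counts / counts.sum()))
    y = rng.normal(0.0, sigmas[k])
    counts[k] += 1; sums[k] += y; sumsq[k] += y * y

frac = counts / counts.sum()
print(frac)  # approaches sigmas / sigmas.sum() = [0.25, 0.75]
```

Under this sketch, the empirical allocation tracks the estimated Neyman shares, so the higher-variance arm receives proportionally more units as the variance estimates sharpen.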