Two average measures of the sum of squared standardized effects have been proposed to represent the degree of difference among treatment effects in analysis of variance designs. One is the square root of the signal-to-noise ratio and the other is the root mean square standardized effect. Although both measures are widely used, the associated test procedures and their properties for detecting a minimal important difference among standardized means have not been well documented. In view of the advocated practice of reporting effect sizes and testing minimal effects in quantitative research, this paper presents and compares power and sample size procedures for testing the hypothesis that treatment effects are negligible rather than the hypothesis of no difference. To enhance practical usefulness, the corresponding computer algorithms for data analysis and design planning are also developed.
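For orientation, a minimal sketch of the two measures under their standard definitions in the effect-size literature (the notation $f$ and $\Psi$ is illustrative and not necessarily that of the paper): for $k$ treatments with means $\mu_1,\dots,\mu_k$, grand mean $\bar{\mu}$, and common error variance $\sigma^2$,
\[
f \;=\; \sqrt{\frac{\sum_{i=1}^{k}(\mu_i-\bar{\mu})^2}{k\,\sigma^2}}
\qquad\text{and}\qquad
\Psi \;=\; \sqrt{\frac{\sum_{i=1}^{k}(\mu_i-\bar{\mu})^2}{(k-1)\,\sigma^2}},
\]
where $f$ is the square root of the signal-to-noise ratio and $\Psi$ is the root mean square standardized effect. Under these definitions the two measures differ only in whether the squared standardized effects are averaged over $k$ or over $k-1$.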