Articles

1.2: Number comparisons using <, >, and =



Maximum and minimum of an array using minimum number of comparisons in C

We are given an array of integers. The task is to find the minimum and maximum element of the array using the minimum number of comparisons.

Explanation − To minimize the number of comparisons, we initialize both the maximum and the minimum element with Arr[0]. Then, starting from the 2nd element, we compare each value with min and max and update them accordingly.

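The scan described above can be sketched in a few lines. The article targets C; this is a Python sketch of the same logic, and `find_min_max` is a name invented here for illustration:

```python
def find_min_max(arr):
    """Return (min, max) of a non-empty list using one linear scan."""
    smallest = largest = arr[0]      # initialize both with Arr[0]
    for value in arr[1:]:            # start from the 2nd element
        if value < smallest:
            smallest = value
        elif value > largest:        # elif: a new minimum cannot also be a new maximum
            largest = value
    return smallest, largest
```

The `elif` saves one comparison whenever an element turns out to be a new minimum, which is the sense in which this scan economizes on comparisons.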


1.2: Number comparisons using <, >, and =

Compare two fractions with different numerators and different denominators, e.g., by creating common denominators or numerators, or by comparing to a benchmark fraction such as 1/2. Recognize that comparisons are valid only when the two fractions refer to the same whole. Record the results of comparisons with symbols >, =, or <, and justify the conclusions, e.g., by using a visual fraction model. (Standard #: MAFS.4.NF.1.2)

Original Tutorials

Use equivalent fractions to compare fractions in this garden-themed, interactive tutorial. This is Part 2 in a two-part series. Click to open Part 1, “Mama’s Pizza, Butterflies, & Comparing Fractions.”

Subject Area(s): Mathematics

Primary Resource Type: Original Tutorial

Help a family settle an argument about who got the most pizza and which butterfly was longer by comparing fractions using benchmarks and area models, in this interactive tutorial.

Subject Area(s): Mathematics

Primary Resource Type: Original Tutorial

Other Resources

This is a fun and interactive game that helps students practice ordering rational numbers, including decimals, fractions, and percents. You are planting and harvesting flowers for cash. Allow the bee to pollinate, and you can multiply your crops and cash rewards!


1.2: Number comparisons using <, >, and =

Read and write multi-digit whole numbers using base-ten numerals, number names, and expanded form. Compare two multi-digit numbers based on meanings of the digits in each place, using >, =, and < symbols to record the results of comparisons. (Standard #: MAFS.4.NBT.1.2)

Original Tutorials

Learn how to compare numbers using the greater than and less than symbols in this interactive tutorial that compares some pretty cool things!

Subject Area(s): Mathematics

Primary Resource Type: Original Tutorial

Read and write multi-digit whole numbers using base-ten numerals and number names using the Base 10 place value system in this interactive tutorial. Note: this tutorial exceeds the number limits of the benchmark.

Subject Area(s): Mathematics, Mathematics (B.E.S.T.)

Primary Resource Type: Original Tutorial

Learn how to write numbers using place value in different forms like standard, word, and expanded notation in this interactive tutorial.

Subject Area(s): Mathematics, Mathematics (B.E.S.T.)

Primary Resource Type: Original Tutorial

Other Resources

This is a fun and interactive game that helps students practice ordering rational numbers, including decimals, fractions, and percents. You are planting and harvesting flowers for cash. Allow the bee to pollinate, and you can multiply your crops and cash rewards!


Benchmark Fractions Activities

A simple way to kick off a lesson on benchmark fractions is to show students a picture like the one below and ask questions like, “Which donut is approximately half-eaten? Which donut is nearly whole, and which one is almost gone?” This example helps students see that we actually use benchmarks in real life!

Start with Visuals and Fraction Manipulatives

To start, I recommend modeling problems using a number line and manipulatives like fraction circles and fraction tiles. These visuals help to make benchmark fractions more concrete as you’re introducing this skill.

I like to begin by comparing fractions to 0 and 1. This is a little easier for students. For example, I might show fractions like 1/9 and 10/12 and ask students whether they’re closer to 0 or to 1.

After some practice, we can tackle comparing fractions to one half, again using number lines and manipulatives.

After students have this down, we can move to comparing fractions to each other by comparing both of them to the benchmarks.

When comparing 4/10 and 6/7, students can use the benchmarks of 1/2 and 1. Since 4/10 is less than 1/2, 6/7 is close to 1, and 1/2 is less than 1, students can conclude that 6/7 is larger than 4/10.

Benchmark Fractions with Mental Math

The next step is to try this strategy without visual aids. You’ll want to have already taught equivalent fractions before starting this.

It’s fairly easy for students to compare fractions to 0 and 1 by comparing the numerator to the denominator. Comparing fractions to 1/2 requires a little more mental math. I ask students to look at the denominator of the fraction and determine what fraction (using that denominator) would be equivalent to 1/2. A simple way to do this is to just divide the denominator by 2.

For example, let’s use the fraction 4/10. 5/10 is equivalent to 1/2. So if we have a fraction with 10 as the denominator, we know that 5/10 is exactly half. When we compare 4/10 to 5/10, we see it’s only 1/10 away. It’s much closer to 5/10, or 1/2, than it is to 0 or 1.
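The divide-the-denominator-by-2 trick amounts to asking which benchmark a fraction is closest to. A Python sketch of that idea, using exact fraction arithmetic (`nearest_benchmark` is a name invented here):

```python
from fractions import Fraction

def nearest_benchmark(numerator, denominator):
    """Return whichever of 0, 1/2, or 1 the fraction is closest to."""
    value = Fraction(numerator, denominator)
    benchmarks = [Fraction(0), Fraction(1, 2), Fraction(1)]
    # Pick the benchmark with the smallest distance to the fraction.
    return min(benchmarks, key=lambda b: abs(value - b))
```

For 4/10, the distance to 1/2 is only 1/10, so the function returns 1/2, matching the reasoning in the paragraph above.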

Students definitely need repeated practice with this! It’s harder with odd-numbered denominators, so I recommend starting with even denominators of 12 or less.

In our earlier example of 3/11 and 6/7, 3/11 is closer to 0 and 6/7 is closer to 1. Since 0 < 1, we know that 3/11 < 6/7. If you choose to also use 1/4 and 3/4 as benchmarks, that can help students reach a more precise answer.

Benchmark Fraction Resources

A sorting activity is a great way to assess if students are grasping this skill.

I hope this post helps you see why benchmark fractions are a great strategy for comparing and ordering fractions! If you want to save time, you can grab my benchmark fractions bundle. Be sure to let me know what other strategies you use to teach this lesson!

This post includes Amazon Affiliate links. I earn a small commission on items purchased through these links at no extra cost to you.


How to Compare Number Sentences using Greater-Than and Less-Than Signs

Each sign is chosen in a number sentence so that the symbol points to the side that has the smallest value.

Each symbol then opens up to the side that has the greatest value.

We can use a number line to decide which side of a number sentence has the greatest value.

When teaching comparing number size, a number line is useful to help visualise the size of each value.

Here are the multiples of 10 from 0 to 100 shown on a number line.

When comparing number sentences at KS1 and KS2 (up to fourth grade), most children will be expected to use greater-than or less-than symbols for numbers up to 100.

In this example we have a missing symbol between 30 + 10 and 80.

We first evaluate the addition on the left of the missing symbol: 30 + 10 = 40.

40 is less than 80 because it is further left on the number line.

We can use the less-than symbol ‘<’ to write this comparison mathematically: 40 < 80.

60 is smaller than 94 and so the arrow will point at the 60. The ‘mouth’ will open up to the larger value of 94.

We can write 94 > 60 to say that 94 is greater-than 60.

Because 94 > 60, we can also write 90 + 4 > 60.

In this next example of comparing a subtraction sentence we have a missing symbol between 15 and 20 – 2.

We first evaluate the subtraction of 20 – 2.

20 – 2 = 18, which is to the right of 15 on the number line.

18 is greater-than 15 and so the symbol opens to the 18 and points to the 15.
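The whole routine above — evaluate each side, then choose the symbol that points to the smaller value — can be sketched in Python (`compare_sides` is a name invented for illustration):

```python
def compare_sides(left, right):
    """Return the symbol that makes 'left ? right' a true number sentence."""
    if left < right:
        return "<"   # the point aims at the smaller left-hand value
    if left > right:
        return ">"   # the mouth opens toward the larger left-hand value
    return "="

# The two worked examples from this section:
print(compare_sides(30 + 10, 80))   # the addition evaluates to 40, so "<"
print(compare_sides(20 - 2, 15))    # the subtraction evaluates to 18, so ">"
```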


1.2: Number comparisons using <, >, and =

Analysis of variance (ANOVA) techniques test whether a set of group means (treatment effects) are equal or not. Rejection of the null hypothesis leads to the conclusion that not all group means are the same. This result, however, does not provide further information on which group means are different.

Performing a series of t-tests to determine which pairs of means are significantly different is not recommended. When you perform multiple t-tests, the probability that some means appear significantly different by chance alone increases with the number of tests. Because these t-tests use data from the same sample, they are not independent, which makes it more difficult to quantify the overall level of significance.

Suppose that in a single t-test, the probability that the null hypothesis (H0) is rejected when it is actually true is a small value, say 0.05. Suppose also that you conduct six independent t-tests. If the significance level for each test is 0.05, then the probability that the tests correctly fail to reject H0, when H0 is true for each case, is (0.95)^6 ≈ 0.735. The probability that at least one of the tests incorrectly rejects the null hypothesis is 1 – 0.735 = 0.265, which is much higher than 0.05.
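The arithmetic above can be checked with a few lines of Python (a sketch, not toolbox code):

```python
alpha = 0.05   # per-test significance level
m = 6          # number of independent t-tests

p_all_correct = (1 - alpha) ** m          # all six tests correctly fail to reject H0
p_at_least_one_error = 1 - p_all_correct  # at least one false rejection

print(round(p_all_correct, 3))            # 0.735
print(round(p_at_least_one_error, 3))     # 0.265
```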

To compensate for multiple tests, you can use multiple comparison procedures. The Statistics and Machine Learning Toolbox™ function multcompare performs multiple pairwise comparison of the group means, or treatment effects. The options are Tukey’s honestly significant difference criterion (default option), the Bonferroni method, Scheffe’s procedure, Fisher’s least significant differences (lsd) method, and Dunn & Sidák’s approach to t-test.

To perform multiple comparisons of group means, provide the structure stats as an input for multcompare. You can obtain stats from one of the following functions:

For multiple comparison procedure options for repeated measures, see multcompare(RepeatedMeasuresModel).

Multiple Comparisons Using One-Way ANOVA

MPG represents the miles per gallon for each car, and Cylinders represents the number of cylinders in each car, either 4, 6, or 8 cylinders.

Test if the mean miles per gallon (mpg) is different across cars that have different numbers of cylinders. Also compute the statistics needed for multiple comparison tests.

The small p-value of about 0 is a strong indication that mean miles per gallon is significantly different across cars with different numbers of cylinders.

Perform a multiple comparison test, using the Bonferroni method, to determine which numbers of cylinders make a difference in the performance of the cars.

In the results matrix, 1, 2, and 3 correspond to cars with 4, 6, and 8 cylinders, respectively. The first two columns show which groups are compared. For example, the first row compares the cars with 4 and 6 cylinders. The fourth column shows the mean mpg difference for the compared groups. The third and fifth columns show the lower and upper limits for a 95% confidence interval for the difference in the group means. The last column shows the p-values for the tests. All p-values are zero, which indicates that the mean mpg differs between every pair of groups.

In the figure the blue bar represents the group of cars with 4 cylinders. The red bars represent the other groups. None of the red comparison intervals for the mean mpg of cars overlap, which means that the mean mpg is significantly different for cars having 4, 6, or 8 cylinders.

The first column of the means matrix has the mean mpg estimates for each group of cars. The second column contains the standard errors of the estimates.

Multiple Comparisons for Three-Way ANOVA

y is the response vector and g1 , g2 , and g3 are the grouping variables (factors). Each factor has two levels, and every observation in y is identified by a combination of factor levels. For example, observation y(1) is associated with level 1 of factor g1 , level 'hi' of factor g2 , and level 'may' of factor g3 . Similarly, observation y(6) is associated with level 2 of factor g1 , level 'hi' of factor g2 , and level 'june' of factor g3 .

Test if the response is the same for all factor levels. Also compute the statistics required for multiple comparison tests.

The p-value of 0.2578 indicates that the mean responses for levels 'may' and 'june' of factor g3 are not significantly different. The p-value of 0.0347 indicates that the mean responses for levels 1 and 2 of factor g1 are significantly different. Similarly, the p-value of 0.0048 indicates that the mean responses for levels 'hi' and 'lo' of factor g2 are significantly different.

Perform multiple comparison tests to find out which groups of the factors g1 and g2 are significantly different.

multcompare compares the combinations of groups (levels) of the two grouping variables, g1 and g2. In the results matrix, the number 1 corresponds to the combination of level 1 of g1 and level hi of g2, and the number 2 corresponds to the combination of level 2 of g1 and level hi of g2. Similarly, the number 3 corresponds to the combination of level 1 of g1 and level lo of g2, and the number 4 corresponds to the combination of level 2 of g1 and level lo of g2. The last column of the matrix contains the p-values.

For example, the first row of the matrix compares the combination of level 1 of g1 and level hi of g2 with the combination of level 2 of g1 and level hi of g2. The p-value corresponding to this test is 0.0280, which indicates that the mean responses are significantly different. You can also see this result in the figure. The blue bar shows the comparison interval for the mean response for the combination of level 1 of g1 and level hi of g2. The red bars are the comparison intervals for the mean response for other group combinations. None of the red bars overlap with the blue bar, which means the mean response for the combination of level 1 of g1 and level hi of g2 is significantly different from the mean response for other group combinations.

You can test the other groups by clicking on the corresponding comparison interval for the group. The bar you click on turns to blue. The bars for the groups that are significantly different are red. The bars for the groups that are not significantly different are gray. For example, if you click on the comparison interval for the combination of level 1 of g1 and level lo of g2 , the comparison interval for the combination of level 2 of g1 and level lo of g2 overlaps, and is therefore gray. Conversely, the other comparison intervals are red, indicating significant difference.

Multiple Comparison Procedures

To specify the multiple comparison procedure you want multcompare to conduct, use the 'CType' name-value pair argument. multcompare provides the following procedures:

Tukey’s Honestly Significant Difference Procedure

You can specify Tukey’s honestly significant difference procedure using the 'CType','Tukey-Kramer' or 'CType','hsd' name-value pair argument. The test is based on the studentized range distribution. Reject H0: αi = αj if

\[ |t| = \frac{\left|\bar{y}_i - \bar{y}_j\right|}{\sqrt{MSE\left(\frac{1}{n_i} + \frac{1}{n_j}\right)}} > \frac{1}{\sqrt{2}}\, q_{\alpha,\,k,\,N-k}, \]

where \(q_{\alpha,k,N-k}\) is the upper 100(1 – α)th percentile of the studentized range distribution with parameter k and N – k degrees of freedom. k is the number of groups (treatments or marginal means) and N is the total number of observations.

Tukey’s honestly significant difference procedure is optimal for balanced one-way ANOVA and similar procedures with equal sample sizes. It has been proven to be conservative for one-way ANOVA with different sample sizes. According to the unproven Tukey-Kramer conjecture, it is also accurate for problems where the quantities being compared are correlated, as in analysis of covariance with unbalanced covariate values.

Bonferroni Method

You can specify the Bonferroni method using the 'CType','bonferroni' name-value pair. This method uses critical values from Student’s t-distribution after an adjustment to compensate for multiple comparisons. The test rejects H0: αi = αj at the \(\frac{\alpha}{2\binom{k}{2}}\) significance level, where k is the number of groups, if

\[ |t| = \frac{\left|\bar{y}_i - \bar{y}_j\right|}{\sqrt{MSE\left(\frac{1}{n_i} + \frac{1}{n_j}\right)}} > t_{\frac{\alpha}{2\binom{k}{2}},\,N-k}, \]

where N is the total number of observations and k is the number of groups (marginal means). This procedure is conservative, but usually less so than the Scheffé procedure.
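As a sketch of the adjustment (not toolbox code): for k groups there are \(\binom{k}{2}\) pairwise tests, and each two-sided test is run at α divided by that count — the t subscript above puts half of the adjusted level in each tail. `bonferroni_level` is a name invented here:

```python
from math import comb

def bonferroni_level(alpha, k):
    """Per-comparison significance level for all pairwise tests among k groups."""
    m = comb(k, 2)   # number of pairwise comparisons
    return alpha / m

# With 3 groups and family-wise alpha = 0.05, each of the
# 3 pairwise tests is run at 0.05 / 3:
print(bonferroni_level(0.05, 3))
```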

Dunn & Sidák’s Approach

You can specify Dunn & Sidák’s approach using the 'CType','dunn-sidak' name-value pair argument. It uses critical values from the t-distribution, after an adjustment for multiple comparisons that was proposed by Dunn and proved accurate by Sidák. This test rejects H0:αi = αj if

\[ |t| = \frac{\left|\bar{y}_i - \bar{y}_j\right|}{\sqrt{MSE\left(\frac{1}{n_i} + \frac{1}{n_j}\right)}} > t_{1-\eta/2,\,v}, \]

where \(\eta = 1 - (1 - \alpha)^{1/\binom{k}{2}}\), v = N – k is the degrees of freedom, and k is the number of groups. This procedure is similar to, but less conservative than, the Bonferroni procedure.
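Assuming the standard Sidák adjustment with m = k(k − 1)/2 pairwise comparisons, the adjusted level η can be computed directly (`sidak_level` is a name invented here):

```python
def sidak_level(alpha, k):
    """Sidak-adjusted per-comparison level for all pairwise tests among k groups."""
    m = k * (k - 1) // 2             # number of pairwise comparisons
    return 1 - (1 - alpha) ** (1 / m)

eta = sidak_level(0.05, 3)
# Slightly larger than the Bonferroni level 0.05 / 3, hence less conservative:
print(eta)
```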

Least Significant Difference

You can specify the least significant difference procedure using the 'CType','lsd' name-value pair argument. This test uses the test statistic

\[ t = \frac{\bar{y}_i - \bar{y}_j}{\sqrt{MSE\left(\frac{1}{n_i} + \frac{1}{n_j}\right)}}. \]

It rejects H0: αi = αj if
\[ \left|\bar{y}_i - \bar{y}_j\right| > \underbrace{t_{\frac{\alpha}{2},\,N-k}\,\sqrt{MSE\left(\frac{1}{n_i} + \frac{1}{n_j}\right)}}_{LSD}. \]

Fisher suggests a protection against multiple comparisons by performing LSD only when the null hypothesis H0: α1 = α2 = ⋯ = αk is rejected by the ANOVA F-test. Even in this case, LSD might not reject any of the individual hypotheses. It is also possible that ANOVA does not reject H0 even when there are differences between some group means, because the equality of the remaining group means can keep the F-test statistic nonsignificant. Without this condition, LSD does not provide any protection against the multiple comparison problem.
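The LSD threshold itself is simple arithmetic once the critical value is known. A sketch, with the critical t value supplied externally (e.g., from a table), since the t quantile is not in the Python standard library; `lsd_threshold` is a name invented here:

```python
from math import sqrt

def lsd_threshold(t_crit, mse, n_i, n_j):
    """Least significant difference: reject when |mean_i - mean_j| exceeds this."""
    return t_crit * sqrt(mse * (1 / n_i + 1 / n_j))

# Hypothetical numbers: critical value 2.0, MSE = 4.0, eight observations per group.
print(lsd_threshold(2.0, 4.0, 8, 8))   # 2.0
```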

Scheffe’s Procedure

You can specify Scheffe’s procedure using the 'CType','scheffe' name-value pair argument. The critical values are derived from the F distribution. The test rejects H0:αi = αj if

\[ \frac{\left|\bar{y}_i - \bar{y}_j\right|}{\sqrt{MSE\left(\frac{1}{n_i} + \frac{1}{n_j}\right)}} > \sqrt{(k-1)\, F_{k-1,\,N-k,\,\alpha}}. \]

This procedure provides a simultaneous confidence level for comparisons of all linear combinations of the means. It is conservative for comparisons of simple differences of pairs.


Algorithms/Recursion CodeHS Test Q's

private void doSort(int lowerIndex, int higherIndex)
{
    if (lowerIndex < higherIndex)
    {
        int middle = lowerIndex + (higherIndex - lowerIndex) / 2;
        doSort(lowerIndex, middle);
        doSort(middle + 1, higherIndex);
        doSomething(lowerIndex, middle, higherIndex);
    }
}

public static int lolUthought(int[] array, int key)
{
    int n = array.length;
    int first = 0;
    int last = n - 1;

    while (first <= last)
    {
        int middle = (first + last) / 2;   // recompute the midpoint each pass
        if (array[middle] < key)
        {
            first = middle + 1;
        }
        else if (array[middle] == key)
        {
            return middle;
        }
        else
        {
            last = middle - 1;
        }
    }
    return -1;   // key not found
}

I - Selection Sort is always faster than Insertion Sort

II - Insertion Sort is always faster than Selection Sort

III - When Selection Sort places an element into the sorted part of the array, that element is in its final position, whereas Insertion Sort may move the element later if it finds a smaller element. Selection Sort builds up an absolutely sorted array as it goes while Insertion Sort builds up a relatively sorted array as it goes.


Classification (or Type) of Multiple Comparison: Single-step versus Stepwise Procedures

As mentioned earlier, repeated testing of given groups results in the serious problem known as α inflation. Therefore, numerous MCT methods have been developed in statistics over the years. 2) Most researchers in the field are interested in understanding the differences between relevant groups. These groups could be all pairs in the experiment, one control group versus the other groups, or one subgroup of groups versus another subgroup. Irrespective of the type of pairs to be compared, all post hoc comparison methods should be applied only under a significant overall ANOVA result. 3)

Usually, MCTs are categorized into two classes: single-step and stepwise procedures. Stepwise procedures are further divided into step-up and step-down methods. This classification depends on the method used to handle type I error. As its name indicates, a single-step procedure assumes one hypothetical type I error rate; under this assumption, all pairwise comparisons (multiple hypotheses) are tested using one critical value. In other words, every comparison is independent. A typical example is Fisher’s least significant difference (LSD) test. Other examples are the Bonferroni, Sidak, Scheffé, Tukey, Tukey-Kramer, Hochberg’s GT2, Gabriel, and Dunnett tests.

The stepwise procedure handles type I error according to previously selected comparison results; that is, it processes pairwise comparisons in a predetermined order, and each comparison is performed only when the previous comparison result is statistically significant. In general, this method improves the statistical power of the process while preserving the type I error rate throughout. Among the comparison test statistics, the most significant test (for step-down procedures) or least significant test (for step-up procedures) is identified, and comparisons are performed successively as long as the previous test result is significant. If one comparison during the process fails to reject its null hypothesis, the remaining comparisons are not tested. This method does not determine the same level of significance as single-step methods; rather, it classifies all relevant groups into statistically similar subgroups. The stepwise methods include the Ryan-Einot-Gabriel-Welsch Q (REGWQ), Ryan-Einot-Gabriel-Welsch F (REGWF), Student-Newman-Keuls (SNK), and Duncan tests. These methods have different uses; for example, the SNK test starts by comparing the two groups with the largest difference, and the two groups with the second-largest difference are compared only if there is a significant difference in the prior comparison. Therefore, these are called step-down methods, because the extents of the differences are reduced as comparisons proceed. It is noted that the critical value for comparison varies for each pair; that is, it depends on the range of mean differences between groups. The smaller the range of comparison, the smaller the critical value for the range; hence, although the power increases, the probability of type I error also increases.

All the aforementioned methods can be used only when the equal variance assumption holds. If the equal variance assumption is violated during the ANOVA process, pairwise comparisons should be based on the statistics of Tamhane’s T2, Dunnett’s T3, Games-Howell, or Dunnett’s C tests.

Tukey method

This test uses pairwise post-hoc testing to determine whether there is a difference between the means of all possible pairs using a studentized range distribution. This method tests every possible pair of all groups. Initially, the Tukey test was called the ‘Honestly significant difference’ test, or simply the ‘T test,’ 4) because this method was based on the t-distribution. It is noted that the Tukey test is based on the same sample counts between groups (balanced data) as ANOVA. Subsequently, Kramer modified this method to apply it to unbalanced data, and it became known as the Tukey-Kramer test. This method uses the harmonic mean of the cell sizes of the two comparison groups. The statistical assumptions of ANOVA apply to the Tukey method as well. 5)

Fig. 2 depicts example results of one-way ANOVA and the Tukey test for multiple comparisons. According to this figure, the Tukey test is performed with one critical level, as described earlier, and the results of all pairwise comparisons are presented in one table under the section ‘post-hoc test.’ The results conclude that groups A and B are different, whereas groups A and C are not different and groups B and C are also not different. These odd results continue in the last table, named ‘Homogeneous subsets.’ Groups A and C are similar and groups B and C are also similar; however, groups A and B are different. An inference of this type differs from syllogistic reasoning. In mathematics, if A = B and B = C, then A = C. However, in statistics, when A = B and B = C, A is not necessarily the same as C, because all these results are probable outcomes based on statistics. Such contradictory results can originate from inadequate statistical power, that is, a small sample size. The Tukey test is a generous (less conservative) method for detecting differences during pairwise comparison; to avoid this illogical result, an adequate sample size should be guaranteed, which gives rise to smaller standard errors and increases the probability of rejecting the null hypothesis.

An example of a one-way analysis of variance (ANOVA) result with the Tukey test for multiple comparisons, performed using IBM Ⓡ SPSS Ⓡ Statistics (ver 23.0, IBM Ⓡ Co., USA). Groups A, B, and C are compared. The Tukey honestly significant difference (HSD) test was performed under the significant result of ANOVA. Multiple comparison results presented statistical differences between groups A and B, but not between groups A and C or between groups B and C. However, in the last table ‘Homogeneous subsets’, there is a contradictory result: the differences between groups A and C and groups B and C are not significant, although a significant difference existed between groups A and B. This inconsistent interpretation could have originated from insufficient evidence.

Bonferroni method: ɑ splitting (Dunn’s method)

The Bonferroni method can be used to compare different groups at baseline, study the relationship between variables, or examine one or more endpoints in clinical trials. It is applied as a post-hoc test in many statistical procedures, such as ANOVA and its variants, including analysis of covariance (ANCOVA) and multivariate ANOVA (MANOVA); multiple t-tests; and Pearson’s correlation analysis. It is also used in several nonparametric tests, including the Mann-Whitney U test, Wilcoxon signed rank test, and Kruskal-Wallis test by ranks [4], and as a test for categorical data, such as the Chi-squared test. When used as a post hoc test after ANOVA, the Bonferroni method uses thresholds based on the t-distribution; it is more rigorous than the Tukey test, which tolerates type I errors, and more generous than the very conservative Scheffé’s method.

However, it has disadvantages as well, since it is unnecessarily conservative (with weak statistical power). The adjusted α is often smaller than required, particularly if there are many tests and/or the test statistics are positively correlated. Therefore, this method often fails to detect real differences. If the proposed study requires that type II error be avoided and possible effects not be missed, we should not use the Bonferroni correction; rather, we should use a more liberal method like Fisher’s LSD, which does not control the family-wise error rate (FWER). 6) Another alternative to the Bonferroni correction, which avoids overly conservative results, is the stepwise (sequential) method, for which the Bonferroni-Holm and Hochberg methods are suitable; these are less conservative than the Bonferroni test [5].

Dunnett method

This is a particularly useful method to analyze studies having control groups, based on modified t-test statistics (Dunnett’s t-distribution). It is a powerful statistic and, therefore, can discover relatively small but significant differences among groups or combinations of groups. The Dunnett test is used by researchers interested in testing two or more experimental groups against a single control group. However, the Dunnett test has the disadvantage that it does not compare the groups other than the control group among themselves at all.

As an example, suppose a study has three experimental groups A, B, and C, in which an experimental drug is used, and a control group. In the Dunnett test, a comparison of the control group with A, B, C, or their combinations is performed; however, no comparison is made between the experimental groups A, B, and C. Therefore, the power of the test is higher because the number of tests is reduced compared to the ‘all pairwise comparison’ approach.

On the other hand, the Dunnett method is capable of ‘two-tailed’ or ‘one-tailed’ testing, which makes it different from other pairwise comparison methods. For example, if the effect of a new drug is not known at all, the two-tailed test should be used to confirm whether the effect of the new drug is better or worse than that of a conventional control. Subsequently, a one-tailed test can be used to compare the new drug and control. Since the two-sided or one-sided test can be performed according to the situation, the Dunnett method can be used without any restrictions.

Scheffé’s method: exploratory post-hoc method

Scheffé’s method is not a simple pairwise comparison test. Based on the F-distribution, it is a method for performing simultaneous, joint pairwise comparisons for all possible pairwise combinations of each group mean [6]. It controls the FWER after considering every possible pairwise combination, whereas the Tukey test controls the FWER only when all pairwise comparisons are made. 7) This is why Scheffé’s method is more conservative than other methods and has less power to detect differences. Since Scheffé’s method generates hypotheses based on all possible comparisons to confirm significance, it is preferred when a theoretical background for differences between groups is unavailable or previous studies have not been completely implemented (exploratory data analysis). The hypotheses generated in this manner should be tested by subsequent studies that are specifically designed to test the new hypotheses. This is important in exploratory data analysis or the theory-testing process (e.g., if a type I error is likely to occur in this type of study and the differences should be identified in subsequent studies). Follow-up studies testing specific subgroup contrasts discovered through the application of Scheffé’s method should use Bonferroni methods that are appropriate for theoretical test studies. It is further noted that Bonferroni methods are less sensitive to type I errors than Scheffé’s method. Finally, Scheffé’s method enables simple or complex averaging comparisons in both balanced and unbalanced data.

Violation of the assumption of equivalence of variance

One-way ANOVA is performed only in cases where the assumption of equivalence of variance holds. However, it is a robust statistic that can be used even when there is a deviation from the equivalence assumption. In such cases, the Games-Howell, Tamhane’s T2, Dunnett’s T3, and Dunnett’s C tests can be applied.

The Games-Howell method is an improved version of the Tukey-Kramer method and is applicable in cases where the equivalence of variance assumption is violated. It is a t-test using Welch’s degrees of freedom. This method uses a strategy for controlling the type I error for the entire comparison and is known to maintain the preset significance level even when the sample sizes are different. However, the smaller the number of samples in each group, the more tolerant the type I error control becomes. Thus, this method can be applied when the number of samples per group is six or more.
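Games-Howell (and Tamhane’s T2 and Dunnett’s T3 below) rely on Welch’s degrees of freedom, i.e. the Welch-Satterthwaite approximation for two groups with unequal variances. A sketch (`welch_df` is a name invented for illustration):

```python
def welch_df(var1, n1, var2, n2):
    """Welch-Satterthwaite approximate degrees of freedom for two groups."""
    a = var1 / n1
    b = var2 / n2
    return (a + b) ** 2 / (a ** 2 / (n1 - 1) + b ** 2 / (n2 - 1))

# With equal variances and equal group sizes this reduces to the
# pooled degrees of freedom 2 * (n - 1):
print(welch_df(4.0, 10, 4.0, 10))
```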

Tamhane’s T2 method gives a test statistic using the t-distribution by applying the concept of ‘multiplicative inequality’ introduced by Sidak. Sidak’s multiplicative inequality theorem implies that the probability of occurrence of the intersection of the events is greater than or equal to the product of the probabilities of each event. Compared to the Games-Howell method, Sidak’s theorem provides a more rigorous multiple comparison method by adjusting the significance level; in other words, it is more conservative in type I error control. Contrarily, Dunnett’s T3 method does not use the t-distribution but uses a quasi-normalized maximum-magnitude distribution (studentized maximum modulus distribution), which always provides a narrower CI than T2. The degrees of freedom are calculated using the Welch method, as in Games-Howell or T2. Dunnett’s T3 test is understood to be more appropriate than the Games-Howell test when the number of samples in each group is less than 50. It is noted that Dunnett’s C test uses the studentized range distribution, which generates a slightly narrower CI than the Games-Howell test for sample sizes of 50 or more in the experimental group; however, the power of Dunnett’s C test is better than that of the Games-Howell test.


Containment operators

The containment operators ( -contains , -notcontains , -in , and -notin ) are similar to the equality operators, except that they always return a Boolean value, even when the input is a collection. These operators stop comparing as soon as they detect the first match, whereas the equality operators evaluate all input members. In a very large collection, these operators return more quickly than the equality operators.

-contains and -notcontains

These operators tell whether a set includes a certain element. -contains returns True when the right-hand side (test object) matches one of the elements in the set. -notcontains returns False instead. When the test object is a collection, these operators use reference equality, i.e., they check whether one of the set's elements is the same instance as the test object.

-in and -notin

The -in and -notin operators were introduced in PowerShell 3 as the syntactic reverse of the -contains and -notcontains operators. -in returns True when the left-hand side <test-object> matches one of the elements in the set. -notin returns False instead. When the test object is a set, these operators use reference equality to check whether one of the set's elements is the same instance as the test object.

The following examples do the same thing that the examples for -contains and -notcontains do, but they are written with -in and -notin instead.