In experimental or observational data, the test of significance is used to account for sampling variability. It is common to compare a group’s characteristic to a specified value, or to compare two or more groups on the same characteristic (such as the mean, the variance, or a measure of association between two characteristics). For example, one might compare two wheat varieties in terms of mean yield per hectare, examine whether the genetic fraction of total variation in a strain exceeds a given value, or compare different crop lines in terms of plant-to-plant variance within lines. When making such comparisons, one cannot rely merely on the numerical magnitude of the comparison index, such as the mean, variance, or measure of association. This is because each group is represented by only a sample of observations, and if another sample were drawn, the numerical value would change. This variation between samples from the same population can be reduced, but never eliminated, even in a well-designed controlled experiment. Conclusions must therefore be drawn in the face of sampling variation, which affects the apparent differences between groups and clouds the underlying ones. Statistical science provides an objective procedure for determining whether an observed difference between groups reflects a real difference. Such a procedure is known as a test of significance.
Purposes of statistical test of significance
The test of significance is a method of accounting for sampling variability in experimental or observational data. Such tests are required because biological studies are subject to a considerable amount of uncontrolled variation. These tests can be used to determine whether

- the deviation between the observed sample statistic and the hypothesized parameter value, or
- the deviation between two sample statistics

is significant or merely a reflection of sampling fluctuation.
Development of hypothesis
Before we can use a test of significance, we must first formulate a hypothesis: a definite statement about the population parameters. In each of these cases we set up an exact hypothesis, for example that the treatments or variates in question do not differ in mean value, in variability, or in the association between the specified characters, as the case may be, and then follow an objective procedure of data analysis that leads to one of two conclusions:

i) reject the hypothesis, or ii) fail to reject the hypothesis.
Steps related to test of significance
The steps below should be followed when performing any type of significance test.
- i) Identify the variables to be analyzed and the groups to be compared;
- ii) State the null hypothesis;
- iii) State the alternative hypothesis;
- iv) Choose alpha (the level of significance);
- v) Select a test statistic;
- vi) Calculate the test statistic;
- vii) Obtain the p-value;
- viii) Interpret the p-value;
- ix) If necessary, calculate the power of the test.
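The steps above can be sketched in code. The example below compares the mean yield of two wheat varieties with an approximate two-sample Z-test; the data values, group names, and alpha are assumptions for illustration, and the standard-normal p-value is only an approximation for small samples (a t-test would be the textbook choice there).

```python
import math
from statistics import mean, stdev

# Steps i-iii: compare mean yield (t/ha) of two hypothetical wheat varieties.
# H0: the varieties have equal mean yield; H1: the means differ (two-sided).
variety_a = [4.1, 4.5, 4.3, 4.8, 4.2, 4.6, 4.4, 4.7]
variety_b = [3.9, 4.0, 4.2, 3.8, 4.1, 3.7, 4.0, 3.9]

# Step iv: choose the level of significance.
alpha = 0.05

# Steps v-vi: the test statistic is an approximate two-sample Z.
n_a, n_b = len(variety_a), len(variety_b)
se = math.sqrt(stdev(variety_a) ** 2 / n_a + stdev(variety_b) ** 2 / n_b)
z = (mean(variety_a) - mean(variety_b)) / se

# Step vii: two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Step viii: interpret the p-value against alpha.
significant = p_value <= alpha
print(f"z = {z:.2f}, p = {p_value:.4f}, significant: {significant}")
```

Statistical software carries out the same sequence internally; laying it out step by step simply makes the logic of the procedure visible.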
P-value and the interpretation of p-value
The value obtained when data are subjected to a test of significance is referred to as a test statistic. Depending on the test, this may be a Z, t, chi-square, or F statistic, among others. This statistic is then used to look up the p-value in tables (statistical software calculates the p-value automatically).

If the p-value is less than the cut-off value (the level of significance, i.e. alpha), the difference between the groups is statistically significant. When p is less than 0.05, there is a less than 5% probability of obtaining the observed difference between the groups by chance when no real difference exists. If the difference is statistically non-significant (p>0.05), it is concluded that there is no difference between the groups, or that the difference is not detectable. More precisely,
- If p<=0.05, then the test is significant.
- If p>0.05, then the test is not significant.
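This decision rule can be written as a small helper function; the function name and the default alpha of 0.05 are illustrative choices, not part of any particular library.

```python
def interpret_p_value(p_value: float, alpha: float = 0.05) -> str:
    """Map a p-value to the verbal conclusion of a significance test."""
    if p_value <= alpha:
        return "significant: reject the null hypothesis"
    return "not significant: fail to reject the null hypothesis"

print(interpret_p_value(0.03))  # significant: reject the null hypothesis
print(interpret_p_value(0.20))  # not significant: fail to reject the null hypothesis
```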
Explanations for a non-significant result
There are two explanations for a non-significant result:
1. There is truly no difference between the groups.
2. The power of the study is insufficient to detect the difference.

One must therefore compute the power to determine whether there is truly no difference or whether the power is insufficient. If the power is insufficient (below the conventional 80%), the conclusion should be “the study did not detect the difference” rather than “there is no difference between the groups.”
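As a rough sketch, the power of a two-sided two-sample Z-test can be approximated from the standard normal distribution; the effect size, standard deviation, sample sizes, and 80% threshold below are assumptions for illustration.

```python
import math
from statistics import NormalDist

def z_test_power(effect: float, sd: float, n_per_group: int,
                 alpha: float = 0.05) -> float:
    """Approximate power of a two-sided two-sample Z-test: the probability
    of rejecting H0 when the true mean difference equals `effect`."""
    nd = NormalDist()
    se = sd * math.sqrt(2 / n_per_group)          # SE of the mean difference
    z_crit = nd.inv_cdf(1 - alpha / 2)            # two-sided critical value
    shift = effect / se                           # standardized true difference
    return (1 - nd.cdf(z_crit - shift)) + nd.cdf(-z_crit - shift)

# Hypothetical study: true difference of 0.5 units, sd = 1, 20 plants per group.
power = z_test_power(effect=0.5, sd=1.0, n_per_group=20)
if power < 0.80:
    print(f"power = {power:.2f}: the study may be too small to detect the difference")
else:
    print(f"power = {power:.2f}: adequate to detect the difference")
```

With these numbers the power comes out well below 80%, so a non-significant result here would mean “the difference was not detected,” not “there is no difference.” Increasing the sample size per group raises the power.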