The assumption of residual normality is one of the classical assumptions of ordinary least squares (OLS) linear regression. Strictly speaking, the Gauss-Markov conditions (linearity, exogeneity, homoscedasticity, and no autocorrelation) are what guarantee the Best Linear Unbiased Estimator (BLUE); normality of the residuals is what makes the t-tests, F-tests, and confidence intervals built on that estimator valid, especially in small samples. Even so, many researchers face difficulties in understanding this concept thoroughly.

This article, written by Kanda Data, aims to provide a comprehensive understanding of the assumption of residual normality, why the test matters in linear regression analysis, and how to test residuals for normality. Readers will thus gain an understanding of testing the assumption of residual normality in ordinary least squares linear regression analysis.

**Residual in Regression Analysis**

A residual in regression analysis is the difference between the observed value of the dependent variable (actual Y) and the value predicted by the regression model (predicted Y). Residuals represent the error, or deviation, between the actual data and the estimates generated by the researcher's regression model.
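As a minimal sketch, the residuals of an OLS fit are simply actual Y minus predicted Y. The data below are simulated for illustration, not from any particular study:

```python
import numpy as np

# Hypothetical data: a simple linear relationship plus random noise.
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 1.5 * x + rng.normal(0, 1, size=50)

# OLS coefficients via least squares (design matrix with an intercept column).
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

y_pred = X @ beta            # predicted Y
residuals = y - y_pred       # residual = actual Y - predicted Y

# With an intercept in the model, OLS residuals sum to (numerically) zero.
print(round(abs(residuals.sum()), 6))
```

These residuals are the quantity that the normality tests discussed below are applied to.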

Understanding residuals is crucial for researchers because they help evaluate how well the regression model fits the empirical data. By examining the nature and pattern of the residuals, researchers can assess the model's adequacy and detect whether the residuals are normally distributed.

When residuals are normally distributed, the unexplained part of the data behaves like random noise, which is consistent with the model's assumptions. However, if residuals exhibit systematic patterns or depart from normality, this may indicate that the regression model is not sufficient to explain the variability in the data.

**Testing Assumptions of Normality in Regression Analysis**

Testing the assumption of normality is a crucial step in regression analysis: it checks whether the residuals generated by the regression model follow a normal distribution. Researchers therefore conduct a residual normality test as a prerequisite in linear regression analysis.

This test aims to determine whether the distribution of residuals significantly differs from the assumed normal distribution. If the test results indicate that residuals do not have a normal distribution, it suggests that the regression model may not be suitable or may need adjustment by the researcher.

In addition to normality tests, residual plots are also commonly used to evaluate the assumption of normality in regression analysis. Graphs such as residual histograms, Quantile-Quantile (Q-Q) plots, and residual versus predicted value plots can provide useful visualizations of whether residuals are normally distributed.
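The visual checks above can be sketched with matplotlib and scipy. The residuals here are simulated stand-ins, so the resulting figure is illustrative only:

```python
import numpy as np
from scipy import stats
import matplotlib
matplotlib.use("Agg")  # render off-screen (no display needed)
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
residuals = rng.normal(0, 1, size=100)  # stand-in for model residuals

fig, axes = plt.subplots(1, 2, figsize=(9, 4))
axes[0].hist(residuals, bins=15)            # residual histogram
axes[0].set_title("Residual histogram")

# Q-Q plot: probplot draws the points and the reference line on the axis.
(osm, osr), (slope, intercept, r) = stats.probplot(residuals,
                                                   dist="norm",
                                                   plot=axes[1])
axes[1].set_title("Normal Q-Q plot")
fig.savefig("residual_diagnostics.png")

# For normal residuals the Q-Q points hug the reference line (r close to 1).
print(round(r, 3))
```

A third common panel, residuals versus predicted values, is a plain scatter plot of the two arrays and checks for patterns rather than normality per se.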

**Detecting Residual Normality in Regression Analysis**

Commonly used tests for detecting residual normality in linear regression analysis are the Kolmogorov-Smirnov test and the Shapiro-Wilk test. The Kolmogorov-Smirnov test compares the empirical distribution of the residuals with a reference normal distribution; in practice the mean and standard deviation are usually estimated from the residuals themselves, in which case the Lilliefors correction of the test is the appropriate version. The null hypothesis of this test is that the residuals are normally distributed. If the test yields P < 0.05, we reject the null hypothesis and conclude that the residuals are not normally distributed.

The Shapiro-Wilk test likewise assesses whether a sample comes from a normal distribution. It tends to be more powerful than the Kolmogorov-Smirnov test at detecting non-normality in small to medium sample sizes.
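Both tests are available in `scipy.stats` as `shapiro` and `kstest`. A minimal sketch on simulated stand-in residuals (not data from any real study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
residuals = rng.normal(0, 2, size=60)  # stand-in for model residuals

# Shapiro-Wilk: well suited to small-to-medium samples.
sw_stat, sw_p = stats.shapiro(residuals)

# Kolmogorov-Smirnov against a normal distribution whose mean and standard
# deviation are estimated from the residuals. Note: estimating the parameters
# from the same data makes this plain KS p-value conservative; the Lilliefors
# correction (e.g. statsmodels' lilliefors) accounts for that.
ks_stat, ks_p = stats.kstest(residuals, "norm",
                             args=(residuals.mean(), residuals.std(ddof=1)))

# A p-value of 0.05 or more means no evidence against normality at alpha = 5%.
print(f"Shapiro-Wilk p={sw_p:.3f}, KS p={ks_p:.3f}")
```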

In addition to these two tests, other tests can be used depending on the sample size and the needs of the analysis. It is important, however, to interpret normality test results carefully and not treat them as the sole criterion for assessing the adequacy of the regression model. Researchers also need to test the other assumptions of linear regression, such as homoscedasticity and the absence of autocorrelation, to ensure the attainment of the Best Linear Unbiased Estimator (BLUE).

**Example of a Case Study Testing Normality**

An example case study of testing residual normality in linear regression analysis examines the influence of advertising costs and the number of marketing staff on the sales volume of product X. After constructing the regression model, the researchers tested residual normality to check the model's adequacy.

The Shapiro-Wilk test on the residuals returned a p-value of 0.232. Because this p-value is higher than the significance threshold of 0.05 (the researcher set alpha at 5% in the study), we fail to reject the null hypothesis that the residuals are normally distributed. In other words, there is no evidence that the residuals in this regression model depart from a normal distribution, so the normality assumption is considered satisfied.
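A hypothetical replication of this case study might look as follows. The data are simulated, so the resulting p-value will not match the 0.232 reported above:

```python
import numpy as np
from scipy import stats

# Simulated data mirroring the case study: sales volume regressed on
# advertising cost and number of marketing staff (not the article's data).
rng = np.random.default_rng(7)
n = 40
advertising = rng.uniform(10, 100, size=n)
staff = rng.integers(2, 15, size=n).astype(float)
sales = 50 + 0.8 * advertising + 4.0 * staff + rng.normal(0, 5, size=n)

# Fit the multiple regression by OLS and extract the residuals.
X = np.column_stack([np.ones(n), advertising, staff])
beta, *_ = np.linalg.lstsq(X, sales, rcond=None)
residuals = sales - X @ beta

# Shapiro-Wilk test on the residuals; compare the p-value with alpha = 0.05.
stat, p = stats.shapiro(residuals)
print("fail to reject normality" if p > 0.05 else "reject normality")
```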

**Conclusion**

Testing residual normality provides valuable insight into model adequacy and helps detect whether the model meets the required assumptions. Remember, though, that a normality test is just one of many factors to consider when evaluating a regression model, and its results should be interpreted carefully. This concludes the article for this occasion. Hopefully, it provides useful insights for those in need. See you in the next article from Kanda Data.