The autocorrelation test checks one of the assumptions of linear regression estimated with the OLS method. On this occasion, I will discuss the autocorrelation test on time series data. Before we go further, note that the autocorrelation test is conducted on time series data, not cross-sectional data.

Time series data is data on one object measured over several periods. Generally, you obtain this data as secondary data, that is, data that already exists: you can take it from official government data providers, companies, cooperatives, and others. The point is that the data has already been collected, often for some other purpose.

To make it clearer, here is an example: the monthly profit of company XYZ from 2010 to 2020. The object is profit, and it is measured over successive monthly periods from 2010 to 2020. This is called time series data. The data period can also vary; it can be daily, monthly, quarterly, or yearly.
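To make the structure of such data concrete, here is a minimal sketch in Python using pandas. The profit figures are simulated, not real company data; "XYZ" and the numbers are purely hypothetical, matching the example above.

```python
import numpy as np
import pandas as pd

# Hypothetical monthly profit of company "XYZ", 2010-2020: one object (profit),
# measured repeatedly across monthly periods -- the shape of time series data.
periods = pd.date_range(start="2010-01", end="2020-12", freq="MS")  # month starts
rng = np.random.default_rng(0)
profit = pd.Series(100 + rng.normal(0, 10, len(periods)).cumsum(),
                   index=periods, name="profit")

print(len(profit))  # 11 years x 12 months = 132 observations
```

The same structure works for daily, quarterly, or yearly data; only the `freq` argument changes.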

If your data has these characteristics and you choose linear regression as your analysis tool, you need to run the autocorrelation test. The objective of this test is to make sure the regression yields unbiased and reliable estimation results.

This autocorrelation test aims to find out whether there is a correlation between the residual in period t and the residual in period t-1. For example, is the residual for the company's profit in the 30th month correlated with the residual in the 29th month? Likewise, is the residual in the 29th month correlated with the residual in the 28th month, and so on? In other words, autocorrelation means that the observation/sample value in a certain period is strongly influenced by the observation/sample value in the previous period.
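The "correlation between period t and period t-1" idea can be sketched directly: line up the residual series with itself shifted by one period and compute the correlation. The residuals below are simulated with a first-order (AR(1)) dependence to stand in for residuals from a fitted regression; the coefficient 0.7 is an arbitrary illustrative choice.

```python
import numpy as np

# Simulate residuals with first-order autocorrelation: e_t = 0.7*e_{t-1} + u_t.
# In practice, `resid` would come from your fitted OLS model.
rng = np.random.default_rng(42)
u = rng.normal(size=200)
resid = np.empty_like(u)
resid[0] = u[0]
for t in range(1, len(u)):
    resid[t] = 0.7 * resid[t - 1] + u[t]

# Correlation between the residual in period t and in period t-1:
rho = np.corrcoef(resid[1:], resid[:-1])[0, 1]
print(round(rho, 2))  # clearly positive for this autocorrelated series
```

For truly independent residuals, this lag-1 correlation would hover near zero instead.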

“What is the cause of autocorrelation?” That is a difficult question to answer precisely, but I will offer two common possibilities. Autocorrelation can arise from a mistake when you compile the regression model specification. Another possibility is an omitted variable: a variable that is actually important was left out of the model.

“Then what is the impact if there is autocorrelation, but we still force a regression analysis?” First and foremost, the estimates and their standard errors can be biased, so hypothesis tests become unreliable. This can lead to wrong conclusions and even to spurious regression. For example, a variable that should not have a significant effect may appear to have one.

The next thing you need to know is how to detect autocorrelation. Detection can be done in several ways; you can choose whichever method you find easiest. You can use the Durbin-Watson test, the Breusch-Godfrey Lagrange Multiplier (LM) test, the Q statistic test (Ljung-Box), the run test, and other approaches.
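As a sketch of how two of these detection methods look in practice, here is an example using the `statsmodels` library on simulated data. The regression itself (intercept 2.0, slope 0.5) is hypothetical; the point is only the diagnostic calls after the fit.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

# Simulated data with independent errors, then an OLS fit.
rng = np.random.default_rng(1)
x = rng.normal(size=120)
y = 2.0 + 0.5 * x + rng.normal(size=120)
model = sm.OLS(y, sm.add_constant(x)).fit()

# Durbin-Watson statistic: ranges 0-4, near 2 means no first-order autocorrelation.
dw = durbin_watson(model.resid)

# Breusch-Godfrey LM test: a small p-value signals autocorrelation.
lm_stat, lm_pvalue, f_stat, f_pvalue = acorr_breusch_godfrey(model, nlags=1)

print(round(dw, 2))
print(round(lm_pvalue, 3))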

“Oops, so confused, which one to choose!” You don’t need to be confused about which one to choose because all methods will lead to the same conclusion. However, if you use Durbin-Watson, there are already many video tutorials. I have also made a video tutorial on the test stages and interpreting the output.

For video tutorials using statistical software, you can see this audiovisual (video in Indonesian, please use the translation in English):

For manual calculation of the Durbin Watson autocorrelation test, you can see this audiovisual (video in Indonesian, please use the English translation):

Okay, let’s recap from the video that you have watched. So the Durbin Watson autocorrelation test was carried out to assess whether there was a correlation between residuals in the sample period t and period t-1. One of the assumptions, the dependent variable is not a lag variable.
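The manual calculation recapped from the video boils down to one formula: d = Σ(e_t − e_{t−1})² / Σ(e_t)², summed over the residuals e. A minimal sketch, using simulated residuals rather than real regression output:

```python
import numpy as np

# Manual Durbin-Watson statistic: d = sum((e_t - e_{t-1})^2) / sum(e_t^2).
def durbin_watson_stat(resid):
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(7)

# Uncorrelated residuals give d near 2.
white = rng.normal(size=500)
print(round(durbin_watson_stat(white), 2))

# Strongly positively autocorrelated residuals push d toward 0.
trending = np.cumsum(rng.normal(size=500))
print(round(durbin_watson_stat(trending), 2))
```

Values of d near 4 would indicate negative autocorrelation instead.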

Then, to draw a conclusion from the hypothesis test, you need to compare the statistic with the Durbin-Watson table. You look up the Durbin Upper (dU) and Durbin Lower (dL) values in the table. The possible conclusions are: no positive or negative autocorrelation, positive autocorrelation, negative autocorrelation, or no conclusion can be drawn (inconclusive).
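The comparison with the dU and dL table values follows fixed decision regions, which can be sketched as a small helper. The dL and dU values in the example calls are hypothetical; in practice you look them up in a Durbin-Watson table for your sample size and number of regressors.

```python
# Durbin-Watson decision regions, assuming dL and dU were taken from a DW table.
def dw_conclusion(d, dL, dU):
    if d < dL:
        return "positive autocorrelation"
    if d < dU:
        return "inconclusive"          # dL <= d < dU: no conclusion can be drawn
    if d <= 4 - dU:
        return "no autocorrelation"    # the zone around 2
    if d <= 4 - dL:
        return "inconclusive"          # 4-dU < d <= 4-dL: no conclusion
    return "negative autocorrelation"

# Hypothetical table values for illustration:
print(dw_conclusion(2.05, dL=1.65, dU=1.69))  # no autocorrelation
print(dw_conclusion(1.10, dL=1.65, dU=1.69))  # positive autocorrelation
```

This mirrors the four possible conclusions listed above, with the inconclusive zones sitting between dL and dU on both sides.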

Well, I end this article. See you in the next article! Stay healthy and keep working!