KANDA DATA


Autocorrelation Test on Time Series Data using Linear Regression

By Kanda Data / January 14, 2022
Assumptions of Linear Regression

The autocorrelation test checks one of the assumptions of linear regression with the OLS method. On this occasion, I will discuss the autocorrelation test on time series data. Before we get into it, you need to know that the autocorrelation test is conducted on time series data, not cross-sectional data.

Time series data is data on one object measured over several periods. Generally, you obtain this data as secondary data: the data already exists, and you can take it from official government data providers, companies, cooperatives, and other sources. The point is that the data was originally collected for other purposes.

To make it clearer, here is an example: the monthly profit of company XYZ from 2010 to 2020. The object is profit, measured in monthly periods from 2010 to 2020. Data with this structure is called time series data. The period can also vary; it can be daily, monthly, quarterly, or yearly.
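To make the shape of such a dataset concrete, here is a minimal sketch in Python. The profit figures are simulated for illustration only, not real XYZ data:

```python
import numpy as np
import pandas as pd

# Hypothetical monthly profit series for "company XYZ", 2010-2020.
# One object (profit), measured repeatedly over time = time series data.
rng = np.random.default_rng(0)
periods = pd.date_range("2010-01", "2020-12", freq="MS")  # month-start dates
profit = pd.Series(100 + np.cumsum(rng.normal(0, 5, len(periods))),
                   index=periods, name="profit")

print(len(profit))  # 132 observations: 11 years x 12 months
```

A daily, quarterly, or yearly series would look the same, only with a different `freq` argument.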

If your data has these characteristics and you choose linear regression as your analysis tool, you need to run the autocorrelation test. The objective of this test is to ensure that the regression produces unbiased estimation results.

This autocorrelation test aims to find out whether there is a correlation between the residuals in period t and period t-1. For example, is the company’s profit in the 30th month correlated with its profit in the 29th month? Likewise, is profit in the 29th month correlated with profit in the 28th month, and so on? In other words, autocorrelation means that the observation value in a certain period is strongly influenced by the observation value in the previous period.
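The idea can be illustrated with a short simulation (all numbers here are assumed, not from the article). The errors are generated with an AR(1) process, so the residual in period t correlates with the residual in period t-1:

```python
import numpy as np

# Simulated data: y depends on x, but the errors follow an AR(1) process,
# so each period's error carries over part of the previous period's error.
rng = np.random.default_rng(42)
n = 120
x = rng.normal(size=n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.7 * e[t - 1] + rng.normal()  # 70% of last period's error persists
y = 2.0 + 1.5 * x + e

# Fit a simple OLS line, then check the residuals against their own lag.
slope, intercept = np.polyfit(x, y, 1)
resid = y - (intercept + slope * x)
lag1_corr = np.corrcoef(resid[1:], resid[:-1])[0, 1]
print(round(lag1_corr, 2))  # clearly positive: residuals "remember" period t-1
```

With independent errors, that lag-1 correlation would instead hover near zero.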

“What is the cause of autocorrelation?” This is a difficult question to answer definitively, but a common cause lies in how you specify the regression model. In particular, autocorrelation can appear when you omit a variable that is actually important from the model.

“Then what is the impact if there is autocorrelation, but we still force a regression analysis?” The main impact is biased standard errors, which can lead to wrong conclusions in hypothesis tests and even to spurious regression. For example, a variable that should not have a significant effect may appear to have one.

The next thing you need to know is how to detect autocorrelation. Detection can be done in several ways, and you can choose whichever method you consider easiest: the Durbin-Watson test, the Breusch-Godfrey (Lagrange Multiplier) test, the Q statistic (Ljung-Box) test, the runs test, and other approaches.

“Oops, so confusing, which one to choose!” You don’t need to be confused about which one to choose, because all the methods lead to the same conclusion. However, if you use Durbin-Watson, there are already many video tutorials; I have also made a video tutorial on the test stages and how to interpret the output.

For a video tutorial using statistical software, you can watch this video (in Indonesian; please use the English translation):

For the manual calculation of the Durbin-Watson autocorrelation test, you can watch this video (in Indonesian; please use the English translation):

Okay, let’s recap the videos you have watched. The Durbin-Watson autocorrelation test assesses whether there is a correlation between the residuals in period t and period t-1. One of its assumptions is that the model does not include a lagged dependent variable.

Then, to conclude the hypothesis test, you need to compare the Durbin-Watson statistic with the DW table: look up the Durbin Upper (dU) and Durbin Lower (dL) values for your sample size and number of regressors. The possible conclusions are: no positive or negative autocorrelation, positive autocorrelation, negative autocorrelation, or no conclusion can be drawn (inconclusive).
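That decision rule can be sketched as a small function. The dL and dU values in the example are assumed, roughly matching a DW table for about 100 observations and two regressors at the 5% level:

```python
def dw_conclusion(dw, dL, dU):
    """Classify a Durbin-Watson statistic using table values dL and dU.

    Follows the standard decision rule; dL and dU must be looked up in a
    DW table for your sample size and number of regressors.
    """
    if dw < dL:
        return "positive autocorrelation"
    if dw <= dU:
        return "inconclusive"           # between dL and dU: no conclusion
    if dw < 4 - dU:
        return "no autocorrelation"     # statistic close to 2
    if dw <= 4 - dL:
        return "inconclusive"           # between 4-dU and 4-dL: no conclusion
    return "negative autocorrelation"   # statistic close to 4

# Example with assumed table values dL = 1.65, dU = 1.69:
print(dw_conclusion(0.85, 1.65, 1.69))  # positive autocorrelation
print(dw_conclusion(2.01, 1.65, 1.69))  # no autocorrelation
```

Note the symmetry around 2: values near 0 indicate positive autocorrelation, values near 4 indicate negative autocorrelation.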

Well, that’s the end of this article. See you in the next one! Stay healthy and keep working!


Copyright KANDA DATA 2025. All Rights Reserved