Regression and
Correlation
Analysis
1
Objectives
To determine the relationship
between response variable
and independent variables for
prediction purposes
2
• Compute a simple linear regression model
• Interpret the slope and intercept in a linear
regression model
• Check model adequacy
• Use the model for prediction purposes
3
Contents
1. Introduction
regression and correlation
2. Simple Linear Regression
- Simple linear regression model (deals
with one independent variable)
- Least-squares estimation of parameters
- Hypothesis testing on the parameters
- Interpretation
4
3. Correlation
- Correlation coefficient
- Coefficient of determination and its
interpretation
5
Learning Outcomes
• Students will be able to identify the nature
of the association between a given pair of
variables
• Find a suitable regression model to a given
set of data of two variables
• Check for model assumptions
• Interpret the model parameters of the fixed
model
• Predict or estimate Y values for given X
values
6
References
1. D.C. Montgomery, E.A. Peck and G.G. Vining, Introduction to
Linear Regression Analysis (3rd edition), John Wiley (2004)
2. N.R. Draper and H. Smith, Applied Regression Analysis
(3rd edition), John Wiley (1998)
7
Introduction
Regression and correlation are very important
statistical tools which are used to identify
and quantify the relationship between two
or more variables
Application of regression occurs almost in
every field such as engineering, physical
and chemical sciences, economics, life and
biological sciences and social science
8
Regression analysis was first developed by Sir
Francis Galton (1822-1911)
Regression and correlation are two different but
closely related concepts
Regression is a quantitative expression of the basic
nature of the relationship between the dependent
and independent variables
Correlation is the strength of the relationship; that
is, correlation measures how strong the
relationship between two variables is
9
Dependent variable
• In a research study, the dependent variable
is the variable that you believe might be
influenced or modified by some treatment
or exposure. It may also represent the
variable you are trying to predict.
Sometimes the dependent variable is called
the outcome variable. This definition
depends on the context of the study
10
If one variable depends on another, we can say that
one variable is a function of the other:
Y = ƒ(X)
Here Y depends on X in some manner
As Y depends on X, Y is called the dependent
variable, criterion variable or response variable.
11
Independent variable
In a research study, an independent variable
is a variable that you believe might
influence your outcome measure.
X is called the independent variable,
predictor variable, regressor or explanatory
variable
12
This might be a variable that you control, like
a treatment, or a variable not under your
control, like an exposure.
It also might represent a demographic factor
like age or gender
13
Regression
Simple: Y = ƒ(X) — linear or non-linear
Multiple: Y = ƒ(X1, X2, …, Xk) — linear or non-linear
14
CONTENTS
• Coefficients of correlation
–meaning
–values
–role
–significance
• Regression
–line of best fit
–prediction
–significance
15
• Correlation
–the strength of the linear relationship
between two variables
• Regression analysis
–determines the nature of the relationship
Ex : Is there a relationship between the
number of units of alcohol consumed
and the likelihood of developing
cirrhosis of the liver?
16
Correlation and Covariance
Correlation is the standardized covariance:
corr(X, Y) = cov(X, Y) / (σX σY)
17
Measures the relative strength of the linear
relationship between two variables
The correlation is scale invariant and the
units of measurement don't matter (it is
unit-less)
This gives the direction (− or +) and strength
(0 to 1) of the linear relationship between X
and Y.
18
• It is always true that −1 ≤ corr(X, Y) ≤ 1. That means
r ranges between −1 and 1
• The closer to −1, the stronger the negative linear
relationship
• The closer to 1, the stronger the positive linear
relationship
• The closer to 0, the weaker any linear relationship
Though a value close to zero indicates almost no
linear association, it does not mean there is no relationship
19
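These properties are easy to verify numerically. Below is a minimal sketch in Python (an illustrative alternative to the MINITAB/SAS output shown later in the deck); the function name `pearson_r` and the toy data are our own, not from the slides.

```python
import math

def pearson_r(x, y):
    """Pearson correlation: the covariance of x and y standardized
    by the product of their standard deviations (unit-less)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

x = [1, 2, 3, 4, 5]
print(pearson_r(x, [2, 4, 6, 8, 10]))   # exact positive line: r = 1.0
print(pearson_r(x, [10, 8, 6, 4, 2]))   # exact negative line: r = -1.0
print(pearson_r(x, [4, 1, 5, 2, 6]))    # noisy data: r strictly inside (-1, 1)
```

Note that rescaling either variable (say, inches to centimeters) leaves r unchanged, which is what "scale invariant" means above.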
Scatter Plots of Data with Various
Correlation Coefficients
[Scatter plots of Y against X for r = −1, r = −0.6, r = 0, r = +0.3, r = +1, and r = 0]
20
Linear Correlation
[Scatter plots contrasting linear relationships with curvilinear relationships]
21
Linear Correlation
[Scatter plots contrasting strong relationships with weak relationships]
22
Linear Correlation
[Scatter plots showing no relationship]
23
Interpreting the Pearson correlation
coefficient
• The value of r for this data is 0.39, thus
indicating weak positive linear association.
• Omitting the last observation, r is 0.96.
• Thus, r is sensitive to extreme observations.
[Scatterplot of Weight (lbs) vs Height (inches), with the extreme observation marked]
24
• The value of r here is 0.94.
• However, a straight-line model may not be suitable.
• The relationship appears curvilinear.
[Scatterplot of Response vs Predictor]
25
continued…
Extreme Observation
• The value of r is −0.07.
• But the plot indicates positive linear association.
• Again, this anomaly is due to extreme data values.
[Scatterplot of Final marks vs OBT marks]
26
• The value of r is around 0.006, thus indicating
almost no linear association.
• However, from the plot, we find a strong
relationship between the two variables.
• This exemplifies that r does not provide evidence
of all relationships.
• These examples highlight the importance of looking
at scatter plots of data prior to deciding on a
model function.
[Scatterplot of Reaction time in Seconds vs Age in years]
27
Coefficient of Determination
R² has a value of 0.6483. This means 64.83% of
the variation in the auction selling prices (y) is
explained by your regression model. The
remaining 35.17% is unexplained, i.e. due to
error.
28
Unlike the value of a test statistic, the
coefficient of determination does not have
a critical value that enables us to draw
conclusions.
In general, the higher the value of R², the better
the model fits the data.
R² = 1: Perfect match between the line and the
data points.
R² = 0: There is no linear relationship
between x and y.
29
Coefficient of determination
Two data points (x1,y1) and (x2,y2)
of a certain sample are shown.
(y1 − ȳ)² + (y2 − ȳ)² = (ŷ1 − ȳ)² + (ŷ2 − ȳ)² + (y1 − ŷ1)² + (y2 − ŷ2)²
Total variation in y = Variation explained by the
regression line
+ Unexplained variation (error)
Variation in y = SSR + SSE
30
Coefficient of Determination
• How “strong” is the relationship between predictor &
outcome? (Fraction of observed variance of the
outcome variable explained by the predictor
variables.)
• Relationship among SST, SSR, SSE:
SST = SSR + SSE
Σ(yi − ȳ)² = Σ(ŷi − ȳ)² + Σ(yi − ŷi)²
where:
SST = total sum of squares
SSR = sum of squares due to regression
SSE = sum of squares due to error
31
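The decomposition SST = SSR + SSE holds exactly for a least-squares fit and can be checked numerically. A minimal sketch (Python, with made-up data; the helper `fit_line` is ours, not from the slides):

```python
def fit_line(x, y):
    # Least-squares slope b1 = Sxy/Sxx and intercept b0 = ybar - b1*xbar
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
         sum((a - mx) ** 2 for a in x)
    return my - b1 * mx, b1

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
b0, b1 = fit_line(x, y)
yhat = [b0 + b1 * a for a in x]
ybar = sum(y) / len(y)

sst = sum((v - ybar) ** 2 for v in y)             # total sum of squares
ssr = sum((h - ybar) ** 2 for h in yhat)          # explained by regression
sse = sum((v - h) ** 2 for v, h in zip(y, yhat))  # unexplained (error)

print(abs(sst - (ssr + sse)) < 1e-9)  # True: SST = SSR + SSE
print(round(ssr / sst, 4))            # R^2, close to 1 for this nearly linear data
```

The ratio SSR/SST is exactly the coefficient of determination R² discussed on the previous slides.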
REGRESSION
32
Estimation Process
Regression model: y = β0 + β1x + ε
Regression equation: E(y) = β0 + β1x
Unknown parameters: β0, β1
Sample data: (x1, y1), …, (xn, yn)
Sample statistics b0 and b1 provide estimates of β0 and β1.
Estimated regression equation: ŷ = b0 + b1x
33
Introduction
• We will examine the relationship between
quantitative variables x and y via a
mathematical equation.
• The motivation for using the technique:
– Forecast the value of a dependent variable (y)
from the value of independent variables (x1, x2,
…, xk).
– Analyze the specific relationships between the
independent variables and the dependent variable. 34
For a continuous variable X the easiest way
of checking for a linear relationship with Y
is by means of a scatter plot of Y against X.
Hence, regression analysis can be started
with a scatter plot.
35
Least Squares
• 1. ‘Best fit’ means the differences between
actual Y values and predicted Y values are
a minimum. But positive differences offset
negative ones, so square the errors!
• 2. LS minimizes the sum of the squared
differences (errors) (SSE):
Σi (Yi − Ŷi)² = Σi ε̂i²
36
Coefficient Equations
• Prediction equation: ŷi = β̂0 + β̂1xi
• Sample slope: β̂1 = SSxy / SSxx = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)²
• Sample Y-intercept: β̂0 = ȳ − β̂1x̄
37
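The closed-form estimates above translate directly to code. As a sketch (Python, with our own toy data): for noise-free data lying exactly on the line y = 3 + 2x, the formulas recover the slope and intercept exactly.

```python
def least_squares(x, y):
    # beta1_hat = SSxy / SSxx = sum((xi - xbar)(yi - ybar)) / sum((xi - xbar)^2)
    # beta0_hat = ybar - beta1_hat * xbar
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    ss_xy = sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
    ss_xx = sum((a - xbar) ** 2 for a in x)
    b1 = ss_xy / ss_xx
    b0 = ybar - b1 * xbar
    return b0, b1

x = [1, 2, 3, 4]
y = [3 + 2 * a for a in x]   # points exactly on the line y = 3 + 2x
b0, b1 = least_squares(x, y)
print(b0, b1)  # 3.0 2.0
```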
Interpreting regression
coefficients
You should interpret the slope and the
intercept of this line as follows:
–The slope represents the estimated
average change in Y when X increases by one
unit.
–The intercept represents the estimated
average value of Y when X equals zero
38
Interpretation of Coefficients
• 1. Slope (β̂1)
– Estimated Y changes by β̂1 for each 1-unit
increase in X
• If β̂1 = 2, then Y is expected to increase by 2 for
each 1-unit increase in X
• 2. Y-Intercept (β̂0)
– Average value of Y when X = 0
• If β̂0 = 4, then the average of Y is expected to be
4 when X is 0
39
The Model
• The first order linear model
y = β0 + β1x + ε
y = dependent variable
x = independent variable
β0 = y-intercept
β1 = slope of the line
ε = error variable
[Graph of the line y = β0 + β1x: β0 is the y-intercept and the slope β1 = Rise/Run]
β0 and β1 are unknown population
parameters, therefore are estimated
from the data.
40
The Least Squares (Regression)
Line
A good line is one that minimizes
the sum of squared differences between the
points and the line.
41
Model adequacy checking
When conducting linear regression, it is important
to make sure the assumptions behind the model
are met. It is also important to verify that the
estimated linear regression model is a good fit for
the data (often a linear regression line can be
estimated by SAS, SPSS, MINITAB etc. even if
it’s not appropriate—in this case it is up to you to
judge whether the model is a good one).
42
Assumptions
• The relationship between the explanatory
variable and the outcome variable is linear.
In other words, each increase by one unit in
an explanatory variable is associated with a
fixed increase in the outcome variable.
• The regression equation describes the mean
value of the dependent variable for given
values of the independent variable.
43
• The individual data points of Y (the
response variable) for each value of the
explanatory variable are normally
distributed about the line of means
(regression line).
• The variance of the data points about the
line of means is the same for each value of
explanatory variable.
44
Assumptions About the Error
Term ε
1. The error ε is a random variable with mean of zero.
2. The variance of ε, denoted by σ², is the same for
all values of the independent variable.
3. The values of ε are independent (randomly distributed).
4. The error ε is a normally distributed random
variable with mean zero and variance σ².
45
Testing the assumptions for
regression - 2
• Normality (interval level variables)
– Skewness & Kurtosis must lie within acceptable limits
(-1 to +1)
• How to test?
• You can examine a histogram. Normality of distribution of
Y data points can be checked by plotting a histogram of
the residuals.
46
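As a sketch of the skewness rule of thumb (Python rather than SPSS; the moment-based formula and the toy data are our own illustration, not from the slides):

```python
def skewness(data):
    # Third standardized moment: 0 for symmetric data, > 0 for a right tail.
    n = len(data)
    m = sum(data) / n
    s = (sum((v - m) ** 2 for v in data) / n) ** 0.5
    return sum(((v - m) / s) ** 3 for v in data) / n

symmetric = [1, 2, 3, 4, 5]
right_tailed = [1, 1, 1, 1, 10]
print(skewness(symmetric))        # 0.0: symmetric data, inside the (-1, +1) limits
print(skewness(right_tailed))     # 1.5: right-tailed, outside the (-1, +1) limits
```

A histogram of the residuals conveys the same information visually; the statistic just makes the -1 to +1 screen automatic.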
• If condition violated?
– Regression procedure can overestimate significance, so
should add a note of caution to the interpretation of
results (increases type I error rate)
47
Testing the assumptions -
normality
To compute skewness and
kurtosis for the included
cases, select Descriptive
Statistics|Descriptives…
from the Analyze menu.
1
48
Testing the assumptions -
normality
Second, click on
the Continue
button to complete
the options.
First, mark the
checkboxes for Kurtosis
and Skewness.
49
Analysis of Residual
• To examine whether the regression model is
appropriate for the data being analyzed, we can
check the residual plots.
• Residual plots are:
– Plot a histogram of the residuals
– Plot residuals against the fitted values.
– Plot residuals against the independent variable.
– Plot residuals over time if the data are chronological.
50
Analysis of Residual
• A histogram of the residuals provides a check on
the normality assumption. A Normal quantile plot
of the residuals can also be used to check the
Normality assumptions.
• Regression Inference is robust against moderate
lack of Normality. On the other hand, outliers and
influential observations can invalidate the results
of inference for regression
• Plot of residuals against fitted values or the
independent variable can be used to check the
assumption of constant variance and the aptness
of the model.
51
Analysis of Residual
• Plot of residuals against time provides a
check on the independence of the error
terms assumption.
• Assumption of independence is the most
critical one.
52
Residual plots
• The residuals should
have no systematic
pattern.
• The residual plot to
right shows a scatter
of the points with no
individual
observations or
systematic change as x
increases.
[Degree Days residual plot: residuals scattered between −1 and 1 against Degree Days from 0 to 60]
53
Residual plots
• The points in this
residual plot have a
curved pattern, so a
straight line fits poorly.
54
Residual plots
• The points in this plot
show more spread for
larger values of the
explanatory variable x,
so prediction will be
less accurate when x is
large.
55
Heteroscedasticity
• When the requirement of a constant variance is violated we
have a condition of heteroscedasticity.
• Diagnose heteroscedasticity by plotting the residual
against the predicted y.
[Residual plots against ŷ: the spread of the residuals increases with ŷ]
56
Non-Independence of Error Variables
Patterns in the appearance of the residuals indicate that
autocorrelation exists.
[Two residual-versus-time plots: one shows runs of positive residuals
replaced by runs of negative residuals; the other shows oscillating
behavior of the residuals around zero]
57
Outliers
• An outlier is an observation that is unusually small or
large.
• Several possibilities need to be investigated when an
outlier is observed:
– There was an error in recording the value.
– The point does not belong in the sample.
– The observation is valid.
• Identify outliers from the scatter diagram.
• It is customary to suspect an observation is an outlier
if its |standardized residual| > 2
58
• An observation may also be flagged if its DFFITS value is > 2
59
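A minimal sketch of the |standardized residual| > 2 screen (Python; the data, with one deliberately corrupted point, are invented for illustration):

```python
def standardized_residuals(x, y):
    # Fit the least-squares line, then scale each residual by
    # s = sqrt(SSE / (n - 2)), the estimated error standard deviation.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
         sum((a - mx) ** 2 for a in x)
    b0 = my - b1 * mx
    resid = [b - (b0 + b1 * a) for a, b in zip(x, y)]
    s = (sum(e ** 2 for e in resid) / (n - 2)) ** 0.5
    return [e / s for e in resid]

x = list(range(1, 11))
y = [2 * a for a in x]
y[2] = 30                      # corrupt the point at x = 3 (true value 6)
flags = [i for i, z in enumerate(standardized_residuals(x, y)) if abs(z) > 2]
print(flags)  # [2]: only the corrupted observation is flagged
```

Once flagged, the possibilities listed above (recording error, wrong population, valid observation) still have to be checked by hand; the screen only points at candidates.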
Variable transformations
• If the residual plot suggests that the variance is not
constant, a transformation can be used to stabilize
the variance.
• If the residual plot suggests a non linear
relationship between x and y, a transformation
may reduce it to one that is approximately linear.
• Common linearizing transformations are:
log(x), 1/x
• Variance stabilizing transformations are:
log(y), √y, 1/y, y²
60
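For instance, data generated by y = 2e^(0.5x) are curvilinear in x, but log(y) = log 2 + 0.5x is exactly linear in x, so the log transformation pushes the correlation to 1. A sketch (Python; the generating model and the `corr` helper are our own illustration):

```python
import math

def corr(x, y):
    # Pearson correlation from its definition.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

x = [1, 2, 3, 4, 5, 6]
y = [2 * math.exp(0.5 * a) for a in x]   # curvilinear growth
log_y = [math.log(v) for v in y]         # linearizing transformation

print(round(corr(x, y), 3))      # strong but imperfect (about 0.94)
print(round(corr(x, log_y), 3))  # 1.0: exactly linear after the transform
```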
Example
• The following observations were made in an
experiment carried out to measure the
relationship between a mathematics placement
test conducted at a faculty and the final grades
of 20 students. The faculty decided not to admit
students who scored below 35 on the
placement test.
63
Table
Placement test   Final grade
50               53
35               41
35               51
40               62
55               68
65               63
35               22
60               70
90               85
35               40
90               75
80               91
60               58
60               71
60               71
40               49
55               58
50               57
65               77
50               59
64
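The deck fits this model with MINITAB output; as a cross-check, here is a sketch in Python applying the least-squares formulas to the 20 observations above (the approximate values in the comments are our own computation, not from the slides):

```python
x = [50, 35, 35, 40, 55, 65, 35, 60, 90, 35,
     90, 80, 60, 60, 60, 40, 55, 50, 65, 50]   # placement test
y = [53, 41, 51, 62, 68, 63, 22, 70, 85, 40,
     75, 91, 58, 71, 71, 49, 58, 57, 77, 59]   # final grade

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)

b1 = sxy / sxx                  # slope, roughly 0.81
b0 = my - b1 * mx               # intercept, roughly 16.4
r = sxy / (sxx * syy) ** 0.5    # correlation, roughly 0.85

print(round(b1, 3), round(b0, 2), round(r, 3))
```

The clearly positive slope and strong r match what the scatter plot on the next slide suggests visually.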
Scatter plot
[Scatterplot of Final grade vs placement test]
65
Correlations: Daily RF (0.01 cm),
Particle weight (µg/m³)
• Pearson correlation of Daily RF (0.01 cm)
and Particle weight (µg/m³) = 0.726
• P-Value = 0.011
66
SAS For Regression and Correlation
67
PROC REG
Submit the following program in SAS. In addition to the first
two statements with which you are familiar, the third
statement requests a plot of the residuals by weight and the
fourth statement requests a plot of the studentized
(standardized) residuals by weight:
PROC REG DATA = blood;
MODEL level = weight;
PLOT level * weight;
PLOT residual. * weight;
PLOT student. * weight;
RUN;
68
Interpreting Output
Notice that the overall F-test has a p-value of
0.2160, which is greater than 0.05.
Therefore, we fail to reject H0: β1 = 0 and
conclude that there is no evidence of a
linear relationship between blood level and weight.
Now look at the following plots:
69
Plot of Regression Line: Notice it is the same plot as the one
you created from PROC GPLOT, except the fitted regression line
has been added to it.
70
Plot of residuals * weight: you want an even spread of
points above and below the dashed line. This is a good way
to eyeball the data for potential outliers.
71
Plot of studentized residuals * weight: look for
values with an absolute value larger than 2.6 to
determine if there are any outliers.
72
You can see from the plot that the observation
with weight = 128 (observation #4) is an
outlier.
The residual plots also help you determine
whether the assumption of constant variance is
met. Because the residuals appear to be
randomly scattered without any definite
pattern, this suggests that the data are
independent with constant variance.
73
The Normality Assumption
A convenient way to test for normality is by
constructing a “Normal Quantile-Quantile”
plot. This plots the residuals you would see
under normality versus the residuals that are
actually observed. If the data are completely
normal, the residuals will follow a 45° line.
Use the following code in SAS to make the
NQQ plot:
PLOT residual. * nqq.;
RUN;
74
Residual vs. NQQ Plot
75
Interpreting the NQQ Plot
The residuals do not clearly follow a 45° line.
Because the tails of this line seem curved,
this suggests that the data may be skewed,
not normally distributed.
76
Recommendations
• It is extremely important to look at plots of raw
data prior to selecting a tentative model
• Need to be cautious in interpreting the correlation
coefficient r.
• Proper model assessment should be done prior to
using the fitted model for predictions.
• Need to focus on the range of x values used for
building the model prior to making predictions at
a desired x value.
77
78

More Related Content

PPTX
Correlation & Regression Analysis using SPSS
Parag Shah
 
PPT
Research methodology
Rolling Plans Pvt. Ltd.
 
PPTX
Ethics in research
Mira K Desai
 
PPTX
Correlation ppt...
Shruti Srivastava
 
PPTX
Assertiveness
SUDIPTA PAUL
 
PPT
Chi – square test
Dr.M.Prasad Naidu
 
PDF
Correlation Analysis
Birinder Singh Gulati
 
PDF
Introduction to NumPy (PyData SV 2013)
PyData
 
Correlation & Regression Analysis using SPSS
Parag Shah
 
Research methodology
Rolling Plans Pvt. Ltd.
 
Ethics in research
Mira K Desai
 
Correlation ppt...
Shruti Srivastava
 
Assertiveness
SUDIPTA PAUL
 
Chi – square test
Dr.M.Prasad Naidu
 
Correlation Analysis
Birinder Singh Gulati
 
Introduction to NumPy (PyData SV 2013)
PyData
 

What's hot (20)

PPT
Simple Correlation : Karl Pearson’s Correlation co- efficient and Spearman’s ...
RekhaChoudhary24
 
PPT
Correlation and regression
Ajendra7846
 
PPT
correlation and regression
Unsa Shakir
 
PPT
Correlation analysis
Shiela Vinarao
 
PPTX
Correlation and regression
Mohit Asija
 
PDF
Multiple Correlation - Thiyagu
Thiyagu K
 
PPTX
Applications of regression analysis - Measurement of validity of relationship
Rithish Kumar
 
PPT
Simple linear regression (final)
Harsh Upadhyay
 
PPTX
Regression ppt
Shraddha Tiwari
 
PPTX
application of correlation
sudhanyavinod
 
PDF
Simple linear regression
Avjinder (Avi) Kaler
 
PPTX
Regression
Buddy Krishna
 
PPTX
Regression Analysis
Salim Azad
 
PPTX
Statistics-Regression analysis
Rabin BK
 
PPTX
Regression analysis.
sonia gupta
 
PPTX
Spearman Rank
i-study-co-uk
 
PPTX
Multivariate analysis
SUDARSHAN KUMAR PATEL
 
PPTX
STATISTICAL REGRESSION MODELS
Aneesa K Ayoob
 
PDF
Linear regression theory
Saurav Mukherjee
 
PPTX
What is Simple Linear Regression and How Can an Enterprise Use this Technique...
Smarten Augmented Analytics
 
Simple Correlation : Karl Pearson’s Correlation co- efficient and Spearman’s ...
RekhaChoudhary24
 
Correlation and regression
Ajendra7846
 
correlation and regression
Unsa Shakir
 
Correlation analysis
Shiela Vinarao
 
Correlation and regression
Mohit Asija
 
Multiple Correlation - Thiyagu
Thiyagu K
 
Applications of regression analysis - Measurement of validity of relationship
Rithish Kumar
 
Simple linear regression (final)
Harsh Upadhyay
 
Regression ppt
Shraddha Tiwari
 
application of correlation
sudhanyavinod
 
Simple linear regression
Avjinder (Avi) Kaler
 
Regression
Buddy Krishna
 
Regression Analysis
Salim Azad
 
Statistics-Regression analysis
Rabin BK
 
Regression analysis.
sonia gupta
 
Spearman Rank
i-study-co-uk
 
Multivariate analysis
SUDARSHAN KUMAR PATEL
 
STATISTICAL REGRESSION MODELS
Aneesa K Ayoob
 
Linear regression theory
Saurav Mukherjee
 
What is Simple Linear Regression and How Can an Enterprise Use this Technique...
Smarten Augmented Analytics
 
Ad

Viewers also liked (11)

PPTX
co relation and regression
Rehan ali
 
PDF
Linearity in the non-deterministic call-by-value setting
Alejandro Díaz-Caro
 
PPTX
Linearity
Nikhil Singh
 
PPTX
Questionnaire design
Suvarna JaipurkarGanvir
 
PPTX
ANOVA-One Way Classification
Sharlaine Ruth
 
PPTX
Chi square test
Anandapadmanabhan Kottiyam
 
PPTX
Correlation nd regression
vinay gowda
 
PDF
Pearson Correlation, Spearman Correlation &Linear Regression
Azmi Mohd Tamil
 
PPT
The Chi-Squared Test
Stephen Taylor
 
PPTX
Chi square test
Patel Parth
 
PPTX
Measurement and scaling techniques
Ujjwal 'Shanu'
 
co relation and regression
Rehan ali
 
Linearity in the non-deterministic call-by-value setting
Alejandro Díaz-Caro
 
Linearity
Nikhil Singh
 
Questionnaire design
Suvarna JaipurkarGanvir
 
ANOVA-One Way Classification
Sharlaine Ruth
 
Correlation nd regression
vinay gowda
 
Pearson Correlation, Spearman Correlation &Linear Regression
Azmi Mohd Tamil
 
The Chi-Squared Test
Stephen Taylor
 
Chi square test
Patel Parth
 
Measurement and scaling techniques
Ujjwal 'Shanu'
 
Ad

Similar to Regression and Co-Relation (20)

PDF
Correlations
Kaori Kubo Germano, PhD
 
PPTX
Stat 1163 -correlation and regression
Khulna University
 
PPTX
Correlation _ Regression Analysis statistics.pptx
krunal soni
 
PDF
Unit 1 Correlation- BSRM.pdf
Ravinandan A P
 
PPT
Research Methodology-Chapter 14
Javed Iqbal Kamyana
 
PPTX
3.3 correlation and regression part 2.pptx
adityabhardwaj282
 
PPTX
Module 2_ Regression Models..pptx
nikshaikh786
 
PPTX
simple and multiple linear Regression. (1).pptx
akshatastats
 
DOCX
Statistics
KafiPati
 
PPTX
Linear regression
Regent University
 
PDF
Correlation and Regression
Dr. Tushar J Bhatt
 
PPTX
Linear regression analysis
Nimrita Koul
 
PDF
need help with stats 301 assignment help
realnerdovo
 
PPTX
Regression-SIMPLE LINEAR (1).psssssssssptx
pokah34509
 
PPT
regression and correlation
Priya Sharma
 
PPT
Simple linear regressionn and Correlation
Southern Range, Berhampur, Odisha
 
PPT
A correlation analysis.ppt 2018
DrRavindraKumarSaini
 
PDF
Regression Analysis-Machine Learning -Different Types
Sharmila Chidaravalli
 
PPTX
Correlation and Regression ppt
Santosh Bhaskar
 
Stat 1163 -correlation and regression
Khulna University
 
Correlation _ Regression Analysis statistics.pptx
krunal soni
 
Unit 1 Correlation- BSRM.pdf
Ravinandan A P
 
Research Methodology-Chapter 14
Javed Iqbal Kamyana
 
3.3 correlation and regression part 2.pptx
adityabhardwaj282
 
Module 2_ Regression Models..pptx
nikshaikh786
 
simple and multiple linear Regression. (1).pptx
akshatastats
 
Statistics
KafiPati
 
Linear regression
Regent University
 
Correlation and Regression
Dr. Tushar J Bhatt
 
Linear regression analysis
Nimrita Koul
 
need help with stats 301 assignment help
realnerdovo
 
Regression-SIMPLE LINEAR (1).psssssssssptx
pokah34509
 
regression and correlation
Priya Sharma
 
Simple linear regressionn and Correlation
Southern Range, Berhampur, Odisha
 
A correlation analysis.ppt 2018
DrRavindraKumarSaini
 
Regression Analysis-Machine Learning -Different Types
Sharmila Chidaravalli
 
Correlation and Regression ppt
Santosh Bhaskar
 

Recently uploaded (20)

PDF
Key_Statistical_Techniques_in_Analytics_by_CA_Suvidha_Chaplot.pdf
CA Suvidha Chaplot
 
PPTX
short term internship project on Data visualization
JMJCollegeComputerde
 
PPTX
IP_Journal_Articles_2025IP_Journal_Articles_2025
mishell212144
 
PPTX
The whitetiger novel review for collegeassignment.pptx
DhruvPatel754154
 
PDF
D9110.pdfdsfvsdfvsdfvsdfvfvfsvfsvffsdfvsdfvsd
minhn6673
 
PPTX
Databricks-DE-Associate Certification Questions-june-2024.pptx
pedelli41
 
PDF
717629748-Databricks-Certified-Data-Engineer-Professional-Dumps-by-Ball-21-03...
pedelli41
 
PPT
Grade 5 PPT_Science_Q2_W6_Methods of reproduction.ppt
AaronBaluyut
 
PDF
oop_java (1) of ice or cse or eee ic.pdf
sabiquntoufiqlabonno
 
PDF
202501214233242351219 QASS Session 2.pdf
lauramejiamillan
 
PDF
blockchain123456789012345678901234567890
tanvikhunt1003
 
PDF
SUMMER INTERNSHIP REPORT[1] (AutoRecovered) (6) (1).pdf
pandeydiksha814
 
PPTX
Fuzzy_Membership_Functions_Presentation.pptx
pythoncrazy2024
 
PDF
Classifcation using Machine Learning and deep learning
bhaveshagrawal35
 
PPTX
White Blue Simple Modern Enhancing Sales Strategy Presentation_20250724_21093...
RamNeymarjr
 
PPTX
Employee Salary Presentation.l based on data science collection of data
barridevakumari2004
 
PPTX
short term project on AI Driven Data Analytics
JMJCollegeComputerde
 
PPTX
Introduction to Biostatistics Presentation.pptx
AtemJoshua
 
PPTX
INFO8116 - Week 10 - Slides.pptx data analutics
guddipatel10
 
PPTX
lecture 13 mind test academy it skills.pptx
ggesjmrasoolpark
 
Key_Statistical_Techniques_in_Analytics_by_CA_Suvidha_Chaplot.pdf
CA Suvidha Chaplot
 
short term internship project on Data visualization
JMJCollegeComputerde
 
IP_Journal_Articles_2025IP_Journal_Articles_2025
mishell212144
 
The whitetiger novel review for collegeassignment.pptx
DhruvPatel754154
 
D9110.pdfdsfvsdfvsdfvsdfvfvfsvfsvffsdfvsdfvsd
minhn6673
 
Databricks-DE-Associate Certification Questions-june-2024.pptx
pedelli41
 
717629748-Databricks-Certified-Data-Engineer-Professional-Dumps-by-Ball-21-03...
pedelli41
 
Grade 5 PPT_Science_Q2_W6_Methods of reproduction.ppt
AaronBaluyut
 
oop_java (1) of ice or cse or eee ic.pdf
sabiquntoufiqlabonno
 
202501214233242351219 QASS Session 2.pdf
lauramejiamillan
 
blockchain123456789012345678901234567890
tanvikhunt1003
 
SUMMER INTERNSHIP REPORT[1] (AutoRecovered) (6) (1).pdf
pandeydiksha814
 
Fuzzy_Membership_Functions_Presentation.pptx
pythoncrazy2024
 
Classifcation using Machine Learning and deep learning
bhaveshagrawal35
 
White Blue Simple Modern Enhancing Sales Strategy Presentation_20250724_21093...
RamNeymarjr
 
Employee Salary Presentation.l based on data science collection of data
barridevakumari2004
 
short term project on AI Driven Data Analytics
JMJCollegeComputerde
 
Introduction to Biostatistics Presentation.pptx
AtemJoshua
 
INFO8116 - Week 10 - Slides.pptx data analutics
guddipatel10
 
lecture 13 mind test academy it skills.pptx
ggesjmrasoolpark
 

Regression and Co-Relation

  • 2. Objectives To determine the relationship between response variable and independent variables for prediction purposes 2
  • 3. • compute a simple linear regression model • interpret the slope and intercept in a linear regression model • Model adequacy checking • Use the model for prediction purposes 3
  • 4. Contents 1. Introduction regression and correlation 2. Simple Linear Regression - Simple linear regression model ( deals with one independent variable) - Least- square estimation of parameters - Hypothesis testing on the parameters - Interpretation 4
  • 5. 3. Correlation -Correlation co-efficient - Co- efficient of determination and its interpretation 5
  • 6. Learning Outcomes • Student will be able to identify the nature of the association between a given pair of variables • Find a suitable regression model to a given set of data of two variables • Check for model assumptions • Interpret the model parameters of the fixed model • Predict or estimate Y values for given X values 6
  • 7. Reference 1. Introduction to Linear Regression Analysis (3 rd edition) D.C. Montgomery, E.A. Peck and G.G. Vining, John Wiley ( 2004) 2. Applied Regression Analysis ( 3rd edition) N.R. Draper, H. Smith, John Wiley ( 1998) 7
  • 8. Introduction Regression and correlation are very important statistical tools which are used to identify and quantify the relationship between two or more variables Application of regression occurs almost in every field such as engineering, physical and chemical sciences, economics, life and biological sciences and social science 8
  • 9. Regression analysis was first developed by Sir Francis Galton ( 1822-1911) Regression and correlation are two different but closely related concepts Regression is a quantitative expression of the basic nature of the relationship between the dependent and independent variables Correlation is the strength of the relationship. That means correlation measures how strong the relationship between two variables is? 9
  • 10. Dependent variable • In a research study, the dependent variable is the variable that you believe might be influenced or modified by some treatment or exposure. It may also represent the variable you are trying to predict. Sometimes the dependent variable is called the outcome variable. This definition depends on the context of the study 10
  • 11. If one variable is depended on other we can say that one variable is a function of another Y = ƒ (X) Hear Y depends on X in some manner As Y depends on X , Y is called the dependent variable, criterion variable or response variable.. 11
  • 12. Independent variable In a research study, an independent variable is a variable that you believe might influence your outcome measure. X is called the independent variable, predictor variable, regress or explanatory variable 12
  • 13. This might be a variable that you control, like a treatment, or a variable not under your control, like an exposure. It also might represent a demographic factor like age or gender 13
  • 14. Regression Simple Y = ƒ (X) Multiple Y = ƒ (X1,X2,…X3) Linear Non linear Linear Non linear 14
  • 15. CONTENTS • Coefficients of correlation –meaning –values –role –significance • Regression –line of best fit –prediction –significance 15
  • 16. • Correlation –the strength of the linear relationship between two variables • Regression analysis –determines the nature of the relationship Ex : Is there a relationship between the number of units of alcohol consumed and the likelihood of developing cirrhosis of the liver? 16
  • 17. Correlation and Covariance Correlation is the standardized covariance: corr(X, Y) = cov(X, Y) / (σX σY) 17
  • 18. Measures the relative strength of the linear relationship between two variables The correlation is scale invariant and the units of measurement don't matter (unit-less) This gives the direction (− or +) and strength (0 to 1) of the linear relationship between X and Y. 18
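The standardized-covariance definition above can be sketched in a few lines of plain Python; the data values are made up for illustration, and the function names are just illustrative:

```python
def covariance(x, y):
    # Sample covariance: average cross-product of deviations from the means
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)

def correlation(x, y):
    # Standardize the covariance by the two standard deviations,
    # which makes the result unit-less and bounded by -1 and +1
    sx = covariance(x, x) ** 0.5
    sy = covariance(y, y) ** 0.5
    return covariance(x, y) / (sx * sy)

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
print(correlation(x, y))
```

Because the covariance is divided by both standard deviations, rescaling either variable (say, converting inches to centimetres) leaves the correlation unchanged, which is the scale invariance mentioned above.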
  • 19. • It is always true that -1 ≤ corr(X, Y) ≤ 1; that is, r ranges between –1 and 1 • The closer to –1, the stronger the negative linear relationship • The closer to 1, the stronger the positive linear relationship • The closer to 0, the weaker any linear relationship Though a value close to zero indicates almost no linear association, it does not mean there is no relationship 19
  • 20. Scatter Plots of Data with Various Correlation Coefficients [Six scatter plots of Y against X illustrating r = −1, r = −0.6, r = 0, r = +0.3, r = +1, and a curved pattern with r = 0] 20
  • 21. Linear Correlation [Scatter plots of Y against X illustrating linear relationships and curvilinear relationships] 21
  • 22. Linear Correlation [Scatter plots of Y against X illustrating strong relationships and weak relationships] 22
  • 24. Interpreting the Pearson correlation coefficient • The value of r for this data is 0.39, indicating a weak positive linear association. • Omitting the last observation, r is 0.96. • Thus, r is sensitive to extreme observations. [Scatterplot of Weight (lbs) vs Height (inches) with one extreme observation marked] 24
  • 25. • The value of r here is 0.94. • However, a straight-line model may not be suitable. • The relationship appears curvilinear. [Scatterplot of Response vs Predictor showing a curved pattern] 25
  • 26. continued… Extreme Observation • The value of r is -0.07. • But the plot indicates positive linear association. • Again, this anomaly is due to extreme data values. [Scatterplot of Final marks vs OBT marks] 26
  • 27. • The value of r is around 0.006, indicating almost no linear association. • However, from the plot, we find a strong relationship between the two variables. • This exemplifies that r does not provide evidence of all relationships. • These examples highlight the importance of looking at scatter plots of data prior to deciding on a model function. [Scatterplot of Reaction time in seconds vs Age in years showing a clear non-linear pattern] 27
  • 28. Coefficient of Determination R2 has a value of .6483. This means 64.83% of the variation in the auction selling prices (y) is explained by your regression model. The remaining 35.17% is unexplained, i.e. due to error. 28
  • 29. Unlike the value of a test statistic, the coefficient of determination does not have a critical value that enables us to draw conclusions. In general the higher the value of R2 , the better the model fits the data. R2 = 1: Perfect match between the line and the data points. R2 = 0: There is no linear relationship between x and y 29
  • 30. Coefficient of determination Two data points (x1,y1) and (x2,y2) of a certain sample are shown. Σ(yi − ȳ)² = Σ(ŷi − ȳ)² + Σ(yi − ŷi)² Total variation in y = Variation explained by the regression line + Unexplained variation (error) Variation in y = SSR + SSE 30
  • 31. Coefficient of Determination • How "strong" is the relationship between predictor & outcome? (Fraction of observed variance of the outcome variable explained by the predictor variables). • Relationship among SST, SSR, SSE: SST = SSR + SSE where: SST = total sum of squares = Σ(yi − ȳ)², SSR = sum of squares due to regression = Σ(ŷi − ȳ)², SSE = sum of squares due to error = Σ(yi − ŷi)² 31
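The decomposition SST = SSR + SSE can be verified numerically. A minimal sketch on made-up data (the helper `fit_line` implements the least-squares formulas given later in these notes):

```python
def fit_line(x, y):
    # Least-squares slope b1 and intercept b0
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    b1 = sxy / sxx
    b0 = my - b1 * mx
    return b0, b1

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
b0, b1 = fit_line(x, y)
my = sum(y) / len(y)
yhat = [b0 + b1 * xi for xi in x]

sst = sum((yi - my) ** 2 for yi in y)                      # total variation
ssr = sum((yh - my) ** 2 for yh in yhat)                   # explained
sse = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))       # unexplained

print(sst, ssr + sse)    # the two totals agree
print(ssr / sst)         # R-squared: share of variation explained
```

For this toy data R² comes out at exactly 0.6: the line explains 60% of the variation in y, and the remaining 40% is error.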
  • 33. Estimation Process Regression Model: y = β0 + β1x + ε Regression Equation: E(y) = β0 + β1x Unknown Parameters: β0, β1 Sample Data: (x1, y1), …, (xn, yn) Sample Statistics: b0, b1 b0 and b1 provide estimates of β0 and β1 Estimated Regression Equation: ŷ = b0 + b1x 33
  • 34. Introduction • We will examine the relationship between quantitative variables x and y via a mathematical equation. • The motivation for using the technique: – Forecast the value of a dependent variable (y) from the value of independent variables (x1, x2, …, xk). – Analyze the specific relationships between the independent variables and the dependent variable 34
  • 35. For a continuous variable X the easiest way of checking for a linear relationship with Y is by means of a scatter plot of Y against X. Hence, regression analysis can be started with a scatter plot. 35
  • 36. Least Squares • 1. 'Best fit' means the differences between actual Y values and predicted Y values are a minimum. But positive differences offset negative ones, so square the errors! • 2. LS minimizes the Sum of the Squared Differences (errors): SSE = Σi=1..n (Yi − Ŷi)² = Σi=1..n ε̂i² 36
  • 37. Coefficient Equations • Prediction equation: ŷi = β̂0 + β̂1 xi • Sample slope: β̂1 = SSxy / SSxx = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)² • Sample Y-intercept: β̂0 = ȳ − β̂1 x̄ 37
  • 38. Interpreting regression coefficients You should interpret the slope and the intercept of this line as follows: –The slope represents the estimated average change in Y when X increases by one unit. –The intercept represents the estimated average value of Y when X equals zero 38
  • 39. Interpretation of Coefficients • 1. Slope (β̂1) – Estimated Y changes by β̂1 for each 1-unit increase in X • If β̂1 = 2, then Y is expected to increase by 2 for each 1-unit increase in X • 2. Y-Intercept (β̂0) – Average value of Y when X = 0 • If β̂0 = 4, then the average of Y is expected to be 4 when X is 0 39
  • 40. The Model • The first-order linear model: y = β0 + β1x + ε where y = dependent variable, x = independent variable, β0 = y-intercept, β1 = slope of the line (rise/run), ε = error variable. β0 and β1 are unknown population parameters, therefore they are estimated from the data. 40
  • 41. The Least Squares (Regression) Line A good line is one that minimizes the sum of squared differences between the points and the line. 41
  • 42. Model adequacy checking When conducting linear regression, it is important to make sure the assumptions behind the model are met. It is also important to verify that the estimated linear regression model is a good fit for the data (often a linear regression line can be estimated by SAS, SPSS, MINITAB etc. even if it is not appropriate; in this case it is up to you to judge whether the model is a good one). 42
  • 43. Assumptions • The relationship between the explanatory variable and the outcome variable is linear. In other words, each increase by one unit in an explanatory variable is associated with a fixed increase in the outcome variable. • The regression equation describes the mean value of the dependent variable for given values of the independent variable. 43
  • 44. • The individual data points of Y (the response variable) for each value of the explanatory variable are normally distributed about the line of means (regression line). • The variance of the data points about the line of means is the same for each value of explanatory variable. 44
  • 45. Assumptions About the Error Term ε 1. The error ε is a random variable with mean of zero. 2. The variance of ε, denoted by σ², is the same for all values of the independent variable. 3. The values of ε are independent (randomly distributed). 4. The error ε is a normally distributed random variable with mean zero and variance σ². 45
  • 46. Testing the assumptions for regression - 2 • Normality (interval level variables) – Skewness & Kurtosis must lie within acceptable limits (-1 to +1) • How to test? • You can examine a histogram. Normality of distribution of Y data points can be checked by plotting a histogram of the residuals. 46
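The skewness statistic behind the ±1 rule of thumb can be sketched in plain Python. This is the simple moment-based (third standardized moment) version; SPSS and other packages use slightly adjusted formulas, so treat this as an illustration rather than a replica of the menu output above:

```python
def skewness(data):
    # Moment-based skewness: average cubed standardized deviation.
    # Near 0 for symmetric data; positive for a long right tail.
    n = len(data)
    m = sum(data) / n
    s = (sum((v - m) ** 2 for v in data) / n) ** 0.5
    return sum(((v - m) / s) ** 3 for v in data) / n

symmetric = [1, 2, 3, 4, 5]
right_skewed = [1, 1, 1, 2, 10]   # made-up: one long right tail
print(skewness(symmetric))        # 0 for a symmetric sample
print(skewness(right_skewed))     # clearly positive, outside the ±1 band
```

A value far outside −1 to +1, as for the right-skewed sample here, is the signal that the normality assumption deserves a closer look.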
  • 47. • If condition violated? – Regression procedure can overestimate significance, so should add a note of caution to the interpretation of results (increases type I error rate) 47
  • 48. Testing the assumptions - normality To compute skewness and kurtosis for the included cases, select Descriptive Statistics|Descriptives… from the Analyze menu. 1 48
  • 49. Testing the assumptions - normality First, mark the checkboxes for Kurtosis and Skewness. Second, click on the Continue button to complete the options. 49
  • 50. Analysis of Residual • To examine whether the regression model is appropriate for the data being analyzed, we can check the residual plots. • Residual plots are: – Plot a histogram of the residuals – Plot residuals against the fitted values. – Plot residuals against the independent variable. – Plot residuals over time if the data are chronological. 50
  • 51. Analysis of Residual • A histogram of the residuals provides a check on the normality assumption. A Normal quantile plot of the residuals can also be used to check the Normality assumptions. • Regression Inference is robust against moderate lack of Normality. On the other hand, outliers and influential observations can invalidate the results of inference for regression • Plot of residuals against fitted values or the independent variable can be used to check the assumption of constant variance and the aptness of the model. 51
  • 52. Analysis of Residual • Plot of residuals against time provides a check on the independence of the error terms assumption. • Assumption of independence is the most critical one. 52
  • 53. Residual plots • The residuals should have no systematic pattern. • The residual plot to the right shows a scatter of the points with no individual observations or systematic change as x increases. [Degree Days residual plot: residuals scattered between −1 and 1 against Degree Days 0 to 60] 53
  • 54. Residual plots • The points in this residual plot have a curved pattern, so a straight line fits poorly 54
  • 55. Residual plots • The points in this plot show more spread for larger values of the explanatory variable x, so prediction will be less accurate when x is large. 55
  • 56. Heteroscedasticity • When the requirement of a constant variance is violated we have a condition of heteroscedasticity. • Diagnose heteroscedasticity by plotting the residuals against the predicted ŷ. [Plot of residuals against ŷ in which the spread increases with ŷ] 56
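A crude numeric companion to the residual-vs-fitted plot (not a formal test such as Breusch-Pagan): if the spread of the residuals grows with the fitted values, the correlation between |residual| and fitted value will be clearly positive. The data here are made up to imitate the fan shape on the slide:

```python
def pearson(x, y):
    # Pearson correlation from deviation sums
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

fitted    = [10, 20, 30, 40, 50, 60]
residuals = [0.5, -1.0, 2.0, -3.5, 5.0, -6.0]   # fan-shaped: spread grows with fit

# Near zero under constant variance; strongly positive here
print(pearson(fitted, [abs(r) for r in residuals]))
```

For real diagnostics the plot itself, or a formal heteroscedasticity test, remains the right tool; this sketch only makes the "spread increases with ŷ" idea concrete.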
  • 57. Non-Independence of Error Variables Patterns in the appearance of the residuals indicate that autocorrelation exists. [Two plots of residuals against time: one with runs of positive residuals replaced by runs of negative residuals, the other with oscillating behavior of the residuals around zero] 57
  • 58. Outliers • An outlier is an observation that is unusually small or large. • Several possibilities need to be investigated when an outlier is observed: – There was an error in recording the value. – The point does not belong in the sample. – The observation is valid. • Identify outliers from the scatter diagram. • It is customary to suspect an observation is an outlier if its |standardized residual| > 2 58
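The |standardized residual| > 2 rule can be sketched as follows. This simplified version divides each residual by the residual standard deviation (software typically also adjusts for leverage); the residual values are made up:

```python
residuals = [0.8, -1.1, 0.3, 6.5, -0.9, 0.4, -1.2, 0.2]   # made-up residuals

n = len(residuals)
# Residual standard deviation with n - 2 degrees of freedom
# (two parameters, slope and intercept, were estimated)
s = (sum(r ** 2 for r in residuals) / (n - 2)) ** 0.5

# Flag observations whose standardized residual exceeds 2 in absolute value
flagged = [i for i, r in enumerate(residuals) if abs(r / s) > 2]
print(flagged)   # [3] -- the observation with residual 6.5 is suspect
```

A flagged point should then be checked against the possibilities listed above: a recording error, a point that does not belong in the sample, or a valid but unusual observation.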
  • 59. • The DFFITS value of the data point is > 2 59
  • 60. Variable transformations • If the residual plot suggests that the variance is not constant, a transformation can be used to stabilize the variance. • If the residual plot suggests a non-linear relationship between x and y, a transformation may reduce it to one that is approximately linear. • Common linearizing transformations are: 1/x, log(x) • Variance stabilizing transformations are: √y, log(y), 1/y, y² 60
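A minimal sketch of why log(y) is a common linearizing transformation: if y grows exponentially in x, say y = a·exp(bx), then log(y) = log(a) + bx is exactly linear in x. The constants 2.0 and 0.5 below are arbitrary illustration values:

```python
import math

x = [1, 2, 3, 4, 5]
y = [2.0 * math.exp(0.5 * xi) for xi in x]   # curved on the original scale

def pearson(u, v):
    # Pearson correlation from deviation sums
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    suv = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    suu = sum((a - mu) ** 2 for a in u)
    svv = sum((b - mv) ** 2 for b in v)
    return suv / (suu * svv) ** 0.5

print(pearson(x, y))                          # strong but not perfect
print(pearson(x, [math.log(v) for v in y]))   # exactly 1 after the transform
```

On the original scale the relationship is strong but curved; after the log transform the correlation is exactly 1, so a straight-line model becomes appropriate.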
  • 63. Example • The following observations were made in an experiment carried out to measure the relationship between a mathematics placement test conducted at a faculty and the final grades of 20 students. The faculty had decided not to admit students who scored below 35 on the placement test. 63
  • 64. Table
  Placement test | Final grade
  50 | 53
  35 | 41
  35 | 51
  40 | 62
  55 | 68
  65 | 63
  35 | 22
  60 | 70
  90 | 85
  35 | 40
  90 | 75
  80 | 91
  60 | 58
  60 | 71
  60 | 71
  40 | 49
  55 | 58
  50 | 57
  65 | 77
  50 | 59
  64
  • 65. Scatter plot [Scatterplot of Final grade vs Placement test, showing a positive linear trend] 65
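The least-squares formulas from slide 37 can be applied directly to the placement-test data in the table. A sketch in Python (the fitted values are computed, not taken from the slides):

```python
# Placement-test data from the example table (20 students)
placement = [50, 35, 35, 40, 55, 65, 35, 60, 90, 35,
             90, 80, 60, 60, 60, 40, 55, 50, 65, 50]
final     = [53, 41, 51, 62, 68, 63, 22, 70, 85, 40,
             75, 91, 58, 71, 71, 49, 58, 57, 77, 59]

n = len(placement)
mx, my = sum(placement) / n, sum(final) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(placement, final))
sxx = sum((x - mx) ** 2 for x in placement)
syy = sum((y - my) ** 2 for y in final)

b1 = sxy / sxx               # slope
b0 = my - b1 * mx            # intercept
r = sxy / (sxx * syy) ** 0.5 # Pearson correlation

print(f"final grade = {b0:.2f} + {b1:.3f} * placement test, r = {r:.3f}")
```

The positive slope and fairly large r agree with the upward trend visible in the scatter plot; note that predictions should only be made for placement scores within the observed 35-90 range, as the recommendations at the end emphasize.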
  • 66. Correlations: Daily RF (0.01 cm), Particle weight (µg/m3) • Pearson correlation of Daily RF (0.01 cm) and Particle weight (µg/m3) = 0.726 • P-Value = 0.011 66
  • 67. SAS For Regression and Correlation 67
  • 68. PROC REG Submit the following program in SAS. In addition to the first two statements with which you are familiar, the third statement requests a plot of the residuals by weight and the fourth statement requests a plot of the studentized (standardized) residuals by weight: PROC REG DATA = blood; MODEL level = weight; PLOT level * weight; PLOT residual. * weight; PLOT student. * weight; RUN; 68
  • 69. Interpreting Output Notice that the overall F-test has a p-value of 0.2160, which is greater than 0.05. Therefore, we would conclude that blood level and weight are independent (fail to reject Ho: β1 = 0). Now look at the following plots: 69
  • 70. Plot of Regression Line: Notice it is the same plot as the one you created from PROC GPLOT, except the fitted regression line has been added to it. 70
  • 71. Plot of residuals * weight: you want an even spread of points above and below the dashed line. This is a good way to eyeball the data for potential outliers. 71
  • 72. Plot of studentized residuals * weight: look for values with an absolute value larger than 2.6 to determine if there are any outliers. 72
  • 73. You can see from the plot that the observation with weight = 128 (observation #4) is an outlier. The residual plots also help you determine whether the assumption of constant variance is met. Because the residuals appear to be randomly scattered without any definite pattern, this suggests that the data are independent with constant variance. 73
  • 74. The Normality Assumption A convenient way to test for normality is by constructing a Normal Quantile-Quantile (Q-Q) plot. This plots the residuals you would see under normality versus the residuals that are actually observed. If the data are completely normal, the residuals will follow a 45° line. Use the following code in SAS to make the Q-Q plot: PLOT residual. * nqq.; RUN; 74
  • 75. Residual vs. NQQ Plot 75
  • 76. Interpreting the NQQ Plot The residuals do not clearly follow a 45° line. Because the tails of this line seem curved, this suggests that the data may be skewed, not normally distributed. 76
  • 77. Recommendations • It is extremely important to look at plots of raw data prior to selecting a tentative model • Need to be cautious in interpreting the correlation coefficient r. • Proper model assessment should be done prior to using the fitted model for predictions. • Need to focus on the range of x values used for building the model prior to making predictions at a desired x value. 77