Tuesday, October 1, 2019
Regression Results :: Research Analysis
3.3.4. Results

For the purpose of finding a suitable function for benefits transfer, different meta-regression models are specified: (i) different functional forms (e.g., a simple linear form versus a semi-log form); (ii) a fully specified model including all independent variables and a restricted model pared down on grounds of statistical significance or econometric problems (e.g., multicollinearity); and (iii) robust (heteroskedasticity-consistent) standard errors to correct for heteroskedasticity.

As shown by the test for heteroskedasticity (see Table 3.7), the simple linear form exhibits heteroskedasticity. There are several ways to correct for heteroskedasticity (e.g., GLS, WLS, robust consistent standard errors, and data transformation). For this study, robust consistent standard errors and data transformation (the log transformation of the dependent variable) are used. All independent variables are initially considered, even if later dropped on grounds of statistical significance or econometric problems (e.g., multicollinearity). Some variables (e.g., MSW and ACTIV) are dropped because they exhibit multicollinearity and/or are statistically insignificant at the 20% level, following the procedure for optimizing the meta-regression transfer model suggested by Rosenberger and Loomis (2001, 2003).

A wide range of diagnostic tests is conducted on each regression for benefits transfer, as suggested by Walton et al. (2006). The R^2 for the overall fit of the regression, hypothesis tests (F tests and t tests), and diagnostics (e.g., the skewness-kurtosis normality test, Ramsey's RESET test for specification error, a heteroskedasticity test, and a multicollinearity assessment) are reported. The F test assesses the null hypothesis that all or some coefficients (β) on the model's explanatory variables equal zero, i.e., H_0: β_1 = β_2 = ... = β_k = 0 for all or some coefficients (Wooldridge 2003). A linear restriction test on some coefficients is useful before dropping variables that are unreliable due to multicollinearity (Hamilton 2004).

An important issue when working with small samples is the potential for multicollinearity, that is, a high degree of linear relationship between explanatory variables (Walton et al. 2006). High correlation between estimated coefficients on explanatory variables in small samples can produce several problems: (i) substantially higher standard errors with lower t statistics (a greater chance of falsely accepting the null hypothesis in standard significance tests); (ii) unexpected changes in coefficient magnitudes or signs; and (iii) statistically insignificant coefficients despite a high R^2 (Hamilton 2004). A number of tests indicate the presence and severity of multicollinearity (e.g., Durbin-Watson tests, VIF, tolerance, and a correlation matrix of the estimated coefficients). One such test is the variance inflation factor (VIF), which measures the degree to which the variance and standard error of an estimated coefficient increase because of the inclusion of that explanatory variable (i.e., because of its linear relationship with the other explanatory variables).
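To make the estimation strategy concrete, the following is a minimal sketch of a semi-log meta-regression with heteroskedasticity-robust standard errors and the diagnostics named above, written with statsmodels and scipy. The data file and the variable names (wtp, income, msw, activ) are hypothetical placeholders standing in for the study's meta-data, not the actual specification.

```python
# Sketch: semi-log meta-regression with robust (HC3) standard errors plus the
# diagnostics discussed in the text. File name and variable names are
# hypothetical placeholders for the study's meta-data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import statsmodels.stats.api as sms
from statsmodels.stats.diagnostic import linear_reset
from scipy import stats

df = pd.read_csv("meta_data.csv")            # hypothetical meta-data file
df["log_wtp"] = np.log(df["wtp"])            # log transformation of the dependent variable

# Semi-log form estimated by OLS with heteroskedasticity-consistent errors
model = smf.ols("log_wtp ~ income + msw + activ", data=df).fit(cov_type="HC3")
print(model.summary())                       # reports R^2, F test, and t tests

# Diagnostics reported in the text
reset = linear_reset(model, power=3, use_f=True)           # Ramsey's RESET test
bp = sms.het_breuschpagan(model.resid, model.model.exog)   # Breusch-Pagan heteroskedasticity test
sk = stats.normaltest(model.resid)                         # skewness-kurtosis normality test
print(reset, bp, sk)
```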
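Continuing the sketch, a joint linear restriction test can be run on the coefficients of the candidate drop variables (here the hypothetical msw and activ) before removing them, alongside the overall F test of all slope coefficients; again, the restriction shown is illustrative rather than the study's own.

```python
# Joint restriction test H_0: beta_msw = beta_activ = 0 before dropping the
# variables (illustrative names), and the overall F test of all slopes.
restriction = "(msw = 0), (activ = 0)"
print(model.f_test(restriction))             # F statistic and p-value for the joint restriction

# Overall F test of H_0: beta_1 = beta_2 = ... = beta_k = 0
print(model.fvalue, model.f_pvalue)
```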
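Finally, the VIF check described above can be computed directly from the fitted design matrix: VIF_j = 1 / (1 - R_j^2), where R_j^2 is the R^2 from regressing explanatory variable j on the remaining explanatory variables. The threshold mentioned in the comment is a common rule of thumb, not a value taken from the study.

```python
# Variance inflation factors for each explanatory variable in the sketch model.
from statsmodels.stats.outliers_influence import variance_inflation_factor

exog = model.model.exog                      # design matrix including the constant
names = model.model.exog_names
for j, name in enumerate(names):
    if name == "Intercept":
        continue                             # skip the constant term
    vif = variance_inflation_factor(exog, j)
    print(name, vif)                         # VIF > 10 is a common rule-of-thumb warning level
```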