Exports the coefficients of lm, plm, and glmer regressions, together with the preferred statistical measure in parentheses.

resultsMatrix(x, z = "stderr", decim = 4, bRes, x.coeftest = FALSE)

Arguments

x

An object of class lm, plm, coeftest, or glmerMod. If x is an object of class coeftest, set x.coeftest = TRUE.

z

Specifies the statistical measure reported in parentheses next to the coefficient. Options are "stderr", "tstat", and "pvalue". The default is "stderr".

decim

Specifies the number of decimal places to display. The default is 4.

bRes

Specifies the bootstrapped model: an S3 object returned by bootMer(). If provided, the function reports the bootstrapped standard errors (or the t-statistic or p-value based on them).

x.coeftest

Logical; the default is FALSE. Set x.coeftest = TRUE if x is an object of class coeftest (see the sketch below for how these arguments combine).
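
A minimal sketch of how these arguments combine (fit, fit_ct, and boot_res are placeholder names for a fitted model, a coeftest object, and a bootMer() result, respectively; none of them are created here):

resultsMatrix(fit)                                    # coefficient and standard error, 4 decimals
resultsMatrix(fit, z = "pvalue", decim = 2)           # coefficient and p-value, 2 decimals
resultsMatrix(fit_ct, z = "tstat", x.coeftest = TRUE) # coeftest objects need x.coeftest = TRUE
resultsMatrix(fit, bRes = boot_res)                   # glmerMod fit with bootstrapped standard errors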

Value

An n x 1 matrix containing, for each coefficient, the estimate, significance stars, and the chosen statistical measure in parentheses. Bootstrapped standard errors are supported for objects of class glmerMod. Significance stars follow social-science conventions (***, **, and * for the 1%, 5%, and 10% levels). The row names are the variable names.
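
Because the returned value is an ordinary matrix with variable names as row names, it can be combined and exported like any other matrix; a minimal sketch, assuming model1 and model2 are placeholder names for two models fitted on the same regressors:

resultsTable <- cbind(resultsMatrix(model1), resultsMatrix(model2)) # two models side by side
write.csv(resultsTable, "resultsTable.csv")                         # export, e.g. for a manuscript table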

References

Croissant, Y. et al. (2022), Package ‘plm’: Linear Models for Panel Data (Version 2.6-2). https://cran.r-project.org/package=plm

Bates, D. et al. (2023), Package ‘lme4’: Linear Mixed-Effects Models using 'Eigen' and S4 (Version 1.1-32). https://github.com/lme4/lme4/

Examples

library(plm)
data("Grunfeld", package = "plm") ###use the models from plm package (Croissant Y. et al, 2022: pg.4):
pooledOLS=plm(inv ~ value + capital, data = Grunfeld, model="pooling")
resultsMatrix(pooledOLS)
#>             Coefficient (stderr)
#> (Intercept) -42.7144*** (9.5117)
#> value         0.1156*** (0.0058)
#> capital       0.2307*** (0.0255)
library(lmtest)
ols_corrected=coeftest(pooledOLS, vcov = vcovSCC(pooledOLS, method="arellano", type="HC3", cluster = "group"))
resultsMatrix(ols_corrected, "pvalue", x.coeftest=TRUE)
#>             Coefficient (pvalue)
#> (Intercept) -42.7144*** (0.0073)
#> value              0.1156*** (0)
#> capital       0.2307*** (0.0076)
data("Crime", package = "plm") ###use the models from plm package (Croissant Y. et al, 2022: pg.97):
FE2SLS=plm(lcrmrte ~ lprbarr + lpolpc + lprbconv + lprbpris + lavgsen + ldensity + lwcon + lwtuc + lwtrd + region + factor(year) | . - lprbarr - lpolpc + ltaxpc + lmix, data = Crime, model = "within")
resultsMatrix(FE2SLS, "pvalue", 3)
#>                Coefficient (pvalue)
#> lprbarr               -0.56 (0.426)
#> lpolpc                0.675 (0.334)
#> lprbconv             -0.425 (0.323)
#> lprbpris             -0.258 (0.286)
#> lavgsen                 0.01 (0.82)
#> ldensity              0.126 (0.873)
#> lwcon                -0.025 (0.637)
#> lwtuc                0.043* (0.098)
#> lwtrd                 -0.015 (0.75)
#> factor(year)82        0.023 (0.676)
#> factor(year)83     -0.099** (0.016)
#> factor(year)84     -0.117** (0.033)
#> factor(year)85      -0.111* (0.073)
#> factor(year)86    -0.097*** (0.006)
#> factor(year)87       -0.077 (0.162)
library(lme4)
library(boot)
set.seed(404)
beta0=-1.4
beta1=0.1
age=sample(18:40, 100, replace=TRUE)
gender=sample(0:1, 100, replace=TRUE)
eduCat=sample(1:3, 100, replace=TRUE)
groupId=sample(1:10, 100, replace=TRUE)
prob=exp(beta0 + beta1 * age) / (1 + exp(beta0 + beta1 * age))
WLB=rbinom(n=100, size=1, prob=prob)
dataTest=as.data.frame(cbind(WLB, age, gender, eduCat, groupId))
regression.WLB=glmer(WLB ~ age + factor(gender) + I(eduCat==1) + I(eduCat==3) + (1 | groupId), data = dataTest, family = binomial, control=glmerControl(optimizer="bobyqa"), nAGQ = 0)
#> boundary (singular) fit: see help('isSingular')
resultsMatrix(regression.WLB) #returns the coefficient and standard error, 4 decimals
#>                    Coefficient (stderr)
#> (Intercept)             -1.0783 (1.241)
#> age                    0.0879** (0.042)
#> factor(gender)1         0.0583 (0.5389)
#> I(eduCat == 1)TRUE       0.291 (0.6283)
#> I(eduCat == 3)TRUE     -0.1957 (0.6567)
resultsMatrix(regression.WLB, "pvalue", 2) #returns the coefficient and p-value, 2 decimals
#>                    Coefficient (pvalue)
#> (Intercept)                -1.08 (0.38)
#> age                       0.09** (0.02)
#> factor(gender)1             0.06 (0.91)
#> I(eduCat == 1)TRUE          0.29 (0.65)
#> I(eduCat == 3)TRUE          -0.2 (0.76)
### to return bootstrapped standard errors (or t-stats/p-values based on them):
FUN <- function(fit) {
  return(fixef(fit))
}
bootStdErr=bootMer(regression.WLB, FUN=FUN, nsim=10)
resultsMatrix(regression.WLB,bRes=bootStdErr) #returns the coefficient and bootstrapped standard error, 4 decimals
#>                    Coefficient (bootstrapped stderr)
#> (Intercept)                         -1.0783 (1.2991)
#> age                                 0.0879* (0.0461)
#> factor(gender)1                      0.0583 (0.3231)
#> I(eduCat == 1)TRUE                    0.291 (0.9161)
#> I(eduCat == 3)TRUE                  -0.1957 (0.7943)
resultsMatrix(regression.WLB,"tstat", 3, bootStdErr) #returns the coefficient and t stat based on bootstrapped standard error, 3 decimals
#>                    Coefficient (bootstrapped tstat)
#> (Intercept)                          -1.078 (-0.83)
#> age                                  0.088* (1.913)
#> factor(gender)1                        0.058 (0.18)
#> I(eduCat == 1)TRUE                    0.291 (0.318)
#> I(eduCat == 3)TRUE                  -0.196 (-0.247)
# ***, **, and * represent statistical significance at the 1%, 5%, and 10% levels, respectively.
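### A hypothetical follow-up (not part of the package examples), assuming the knitr package is installed:
### the returned matrix can be passed directly to knitr::kable() to render a document-ready table.
knitr::kable(resultsMatrix(regression.WLB, "tstat", 3, bootStdErr))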