
Table 4 A summary of the estimators and penalty functions for the bridge-type and adaptive bridge-type regularized regression methods used in this study. The adaptive methods carry an "a" prefix in their names

From: Genomic prediction using machine learning: a comparison of the performance of regularized regression, ensemble, instance-based and deep learning methods on synthetic and empirical data

Method, penalty, and estimator (equation numbers from the main text in parentheses):

bridge
Penalty: \(p_{\lambda ,\gamma }(\varvec{\beta })=\lambda \sum \limits _{j=1}^p|\beta _j|^{\gamma }\)
Estimator: \(\widehat{\varvec{\beta }}_{bridge}=\underset{\varvec{\beta }}{\textit{argmin}} \ \Big \{\text {RSS} + \lambda \sum \limits _{j=1}^p|\beta _j|^{\gamma } \Big \}, \ \gamma >0, \ \lambda \ge 0\)  (2)

\(\bullet \ \gamma =1\): LASSO
Penalty: \(p_{\lambda }(\varvec{\beta })=\lambda \Vert \varvec{\beta }\Vert _1\)
Estimator: \(\widehat{\varvec{\beta }}_{lasso}=\underset{\varvec{\beta }}{\textit{argmin}} \ \Big \{\text {RSS} + \lambda \Vert \varvec{\beta }\Vert _1\Big \}\)  (3)

\(\bullet \ \gamma =2\): ridge
Penalty: \(p_{\lambda }(\varvec{\beta })=\lambda \Vert \varvec{\beta }\Vert _2^2\)
Estimator: \(\widehat{\varvec{\beta }}_{ridge}=\underset{\varvec{\beta }}{\textit{argmin}} \ \Big \{\text {RSS} + \lambda \Vert \varvec{\beta }\Vert _2^2\Big \}\)  (4)

\(\bullet\) Combination of LASSO and ridge penalties (\(\gamma =1,2\), respectively): ENET
Penalty: \(p_{\varvec{\lambda }}(\varvec{\beta })=\lambda _1\Vert \varvec{\beta }\Vert _1+\lambda _2\Vert \varvec{\beta }\Vert _2^2\)
Estimator: \(\widehat{\varvec{\beta }}_{enet}=(1+\lambda _2)\times \underset{\varvec{\beta }}{\textit{argmin}}\ \Big \{ \text {RSS} + \lambda _1\Vert \varvec{\beta }\Vert _1+ \lambda _2 \Vert \varvec{\beta }\Vert _2^2\Big \}\)  (6)

abridge
Penalty: \(p_{\lambda ,\gamma }(\varvec{\beta })=\lambda \sum \limits _{j=1}^p w_j|\beta _j|^{\gamma }\)
Estimator: \(\widehat{\varvec{\beta }}_{\texttt {a}bridge}=\underset{\varvec{\beta }}{\textit{argmin}} \ \Big \{\text {RSS} + \lambda \sum \limits _{j=1}^p{w}_j|\beta _j|^{\gamma }\Big \}\)  (7)

\(\bullet \ \gamma =1\): aLASSO
Penalty: \(p_{\lambda }(\varvec{\beta })=\lambda \Vert {\textbf {w}}\varvec{\beta }\Vert _1\)
Estimator: \(\widehat{\varvec{\beta }}_{\texttt {a}lasso}=\underset{\varvec{\beta }}{\textit{argmin}} \ \Big \{\text {RSS} + \lambda \Vert {\textbf {w}}\varvec{\beta }\Vert _1\Big \}\)  (8)

\(\bullet\) Combination of aLASSO and ridge penalties (\(\gamma =1,2\), respectively): aENET
Penalty: \(p_{\varvec{\lambda }}(\varvec{\beta }) = \lambda _1\Vert {\textbf {w}}\varvec{\beta }\Vert _1+ \lambda _2 \Vert \varvec{\beta }\Vert _2^2\)
Estimator: \(\widehat{\varvec{\beta }}_{\texttt {a}enet}= k\times \underset{\varvec{\beta }}{\textit{argmin}}\ \Big \{\text {RSS} + \lambda _1\Vert {\textbf {w}}\varvec{\beta }\Vert _1+ \lambda _2 \Vert \varvec{\beta }\Vert _2^2\Big \}\)  (9)
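
To make the table concrete, the sketch below writes out the bridge-type objective RSS + penalty in Python. It is not the implementation used in the study, which fits these models with dedicated penalized-regression software; the function names bridge_penalty, enet_penalty, and fit_bridge, the use of scipy.optimize.minimize with the Powell method, the choice of adaptive weights, and the toy data are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def bridge_penalty(beta, lam, gamma, weights=None):
    # lambda * sum_j w_j * |beta_j|^gamma  (Eqs. 2 and 7).
    # weights=None gives the non-adaptive penalty; gamma=1 gives (a)LASSO,
    # gamma=2 gives ridge.
    beta = np.asarray(beta, dtype=float)
    w = np.ones_like(beta) if weights is None else np.asarray(weights, dtype=float)
    return lam * np.sum(w * np.abs(beta) ** gamma)

def enet_penalty(beta, lam1, lam2, weights=None):
    # (Adaptive) elastic-net penalty: weighted L1 term plus a ridge term (Eqs. 6 and 9).
    beta = np.asarray(beta, dtype=float)
    return bridge_penalty(beta, lam1, gamma=1.0, weights=weights) + lam2 * np.sum(beta ** 2)

def fit_bridge(X, y, lam, gamma, weights=None):
    # Illustrative estimator: argmin_beta { RSS + penalty }, solved here with a
    # generic derivative-free optimizer.  Real implementations use coordinate
    # descent or similar; this only mirrors the formulas in the table.
    def objective(beta):
        rss = np.sum((y - X @ beta) ** 2)
        return rss + bridge_penalty(beta, lam, gamma, weights)
    beta0 = np.zeros(X.shape[1])
    return minimize(objective, beta0, method="Powell").x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    true_beta = np.array([2.0, 0.0, -1.5, 0.0, 0.5])
    y = X @ true_beta + rng.normal(scale=0.5, size=100)

    b_lasso = fit_bridge(X, y, lam=5.0, gamma=1.0)               # LASSO (Eq. 3)
    b_ridge = fit_bridge(X, y, lam=5.0, gamma=2.0)               # ridge (Eq. 4)
    w = 1.0 / (np.abs(b_ridge) + 1e-8)                           # one common choice of adaptive weights
    b_alasso = fit_bridge(X, y, lam=5.0, gamma=1.0, weights=w)   # aLASSO (Eq. 8)
    print(b_lasso, b_ridge, b_alasso, sep="\n")
```

Note that the generic optimizer only mirrors the argmin expressions: the ENET and aENET estimators in Eqs. (6) and (9) additionally rescale the minimizer (by \(1+\lambda _2\) and \(k\), respectively), and practical fits of these models rely on specialized solvers rather than derivative-free minimization.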