This paper derives asymptotic risk (expected loss) results for shrinkage estimators with multidimensional regularization in high-dimensional settings. We introduce a class of multidimensional shrinkage estimators (MuSEs), which includes the elastic net, and show that, as the number of parameters to estimate grows, the empirical loss converges to the oracle-optimal risk. This result holds when the regularization parameters are chosen empirically, whether by cross-validation or by Stein's unbiased risk estimate (SURE). To help guide applied researchers in their choice of estimator, we compare the empirical Bayes risk of the lasso, ridge, and elastic net in a spike-and-normal setting. Of the three, the elastic net performs best when the data are moderately sparse, and the lasso performs best when the data are highly sparse. Our analysis suggests that researchers who are unsure about the level of sparsity in their data may benefit from a MuSE such as the elastic net. Building on these insights, we propose a new estimator, the cubic net, and show through simulations that it outperforms the other three estimators at every sparsity level.
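For concreteness, a minimal sketch of how the elastic net fits this framework, assuming a standard normal-means formulation (the observations $X_i$ and penalty weights $\lambda_1, \lambda_2$ are illustrative notation, not taken from the abstract): the elastic net penalty has two tuning parameters, so the vector $(\lambda_1, \lambda_2)$ is the multidimensional regularization referred to above, and the componentwise solution has a closed form:
\[
\hat{\theta}_i(\lambda_1, \lambda_2)
\;=\; \arg\min_{t \in \mathbb{R}} \left\{ (X_i - t)^2 + \lambda_1 |t| + \lambda_2 t^2 \right\}
\;=\; \frac{\operatorname{sign}(X_i)\,\max\{|X_i| - \lambda_1/2,\; 0\}}{1 + \lambda_2}.
\]
Setting $\lambda_2 = 0$ recovers the lasso (soft thresholding) and $\lambda_1 = 0$ recovers ridge, so the three estimators compared here correspond to different regions of the same two-dimensional tuning space.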