Complex models are of increasing interest to social scientists. Researchers interested in prediction generally favor flexible, robust approaches, while those focused on causation often seek to model nuanced treatment structures and confounding relationships. Unfortunately, estimators of complex models often scale poorly, especially if they seek to maintain interpretability. In this paper, we present an example of such a conundrum and show how optimization can alleviate the worst of these concerns. Specifically, we introduce bigKRLS, which offers a variety of statistical and computational improvements to the Hainmueller and Hazlett (2013) Kernel-Regularized Least Squares (KRLS) approach. Our improvements decrease the estimator's single-core runtime by 50% and reduce its peak memory usage by an order of magnitude. We also improve the uncertainty estimates for the model's average marginal effects, which we test both in simulation and in practice, and we introduce new visual and statistical tools designed to assist with inference under the model. We further demonstrate the value of our improvements through an analysis of the 2016 presidential election, an analysis that would have been impractical or even infeasible for many users with existing software.