It is perhaps uncontroversial to claim that behavioral science research plays an increasingly important role in practice. However, practitioners largely rely on media reports rather than original research articles to learn about the science. Do these media reports contain the information needed to understand the nuances of the research? To address this question, we develop a set of rubrics for evaluating the fidelity of a media report to the original research article. As an illustration, we apply these rubrics to a sample of media reports covering several research articles published in one journal and identify common patterns, trends, and pitfalls in media presentations. We find preliminary evidence of low fidelity in reporting participant characteristics, contextual elements, and limitations of the original research. Media reports also appear to present correlational evidence as causal and sometimes fail to acknowledge the hypothetical nature of the evidence when hypothetical scenarios were the sole basis for the conclusions. Furthermore, media reports often present broad conclusions and personal opinions as directly backed by scientific evidence. To support more discerning consumption of behavioral insights from media sources, we propose a checklist to guide practitioners in evaluating and using information from such sources.