Graphs can help people arrive at data-supported conclusions. However, graphs might also induce bias by shifting the amount of evidence needed to make a decision, such as deciding whether a treatment had an effect. In 2 experiments, we manipulated the early base rates of treatment effects in graphs. Early base rates had a large effect on a signal detection measure of bias in judgments of later graphs, even though every later graph had a 50% chance of showing a treatment effect regardless of the earlier base rates. In contrast, the autocorrelation of data points within each graph had a larger effect on discriminability. Exploratory analyses showed that a simple cue could be used to correctly categorize most graphs, and we used lens models to examine participants' reliance on this cue among others. When exposed to multiple graphs on the same topic, human judges can draw conclusions about the data, but once those conclusions are formed, they can affect how subsequent graphs are judged.
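The signal detection measures mentioned above are standard: sensitivity (d') and the bias criterion (c) are both computed from a judge's hit rate and false-alarm rate. A minimal sketch in Python, with hypothetical hit and false-alarm rates (the specific values are illustrative, not from the experiments):

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Compute standard signal detection measures.

    d' (sensitivity/discriminability) = z(H) - z(F)
    c  (criterion/bias)               = -(z(H) + z(F)) / 2
    Positive c means a conservative judge (more evidence needed to
    say "treatment effect"); negative c means a liberal judge.
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    zh, zf = z(hit_rate), z(fa_rate)
    d_prime = zh - zf
    criterion = -(zh + zf) / 2
    return d_prime, criterion

# Hypothetical judge who says "effect" readily: high hits, many false alarms
d_prime, criterion = sdt_measures(0.90, 0.40)
print(round(d_prime, 2), round(criterion, 2))
```

A judge exposed to high early base rates of treatment effects would, on this account, show a shifted criterion c on later graphs even when their d' is unchanged, which is the dissociation the abstract describes.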