Accurate geospatial information about the causes and consequences of climate change, including energy systems infrastructure, is critical for planning climate change mitigation and adaptation strategies. When up-to-date spatial data on infrastructure are lacking, one approach to filling this gap is to detect infrastructure in overhead imagery using deep-learning-based object detection algorithms. However, the performance of these algorithms can suffer when they are applied to geographies unseen during training, as is common in practice. We propose a technique for generating realistic synthetic overhead images of a target object (e.g., a generator) to improve the ability of object detection models to transfer across diverse geographic domains. Our technique uses generative adversarial networks to blend example objects into unlabeled images from the target domain. It requires only a small number of labeled examples of the target object and is computationally efficient enough to generate a large corpus of synthetic imagery. We show that including these synthetic images in the training of an object detection model improves its ability to generalize to new domains, measured in terms of average precision, compared to a baseline model and other relevant domain adaptation techniques.
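To make the core idea concrete, the sketch below composites a labeled object crop into an unlabeled image from the target domain. This is a deliberately simplified stand-in: the abstract describes refining the blend with a generative adversarial network, whereas here only a feathered alpha mask smooths the pasted edges. All names (`blend_object`, `feather`) and the image sizes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def blend_object(background: np.ndarray, obj: np.ndarray, top: int, left: int,
                 feather: float = 0.25) -> np.ndarray:
    """Paste an object crop into a background image with soft edges.

    NOTE: illustrative only. In the technique described above, a GAN would
    refine the pasted region so it matches the target domain's appearance;
    this sketch only alpha-blends the crop's borders into the background.
    """
    h, w = obj.shape[:2]
    # Alpha mask: 1 near the crop center, fading to 0 at its edges.
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.minimum.reduce([yy, h - 1 - yy, xx, w - 1 - xx]) / max(h, w)
    alpha = np.clip(dist / feather, 0.0, 1.0)[..., None]
    out = background.astype(np.float64).copy()
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = alpha * obj + (1.0 - alpha) * region
    return out.astype(background.dtype)

# Example: paste a bright 8x8 "object" into a dark 32x32 background tile.
bg = np.zeros((32, 32, 3), dtype=np.uint8)
patch = np.full((8, 8, 3), 255, dtype=np.uint8)
synthetic = blend_object(bg, patch, top=12, left=12)
```

Each composited image comes with a free bounding-box label (the paste location), so a large corpus of such images can be generated cheaply and added to the object detector's training set.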