Inference in natural language often involves recognizing lexical entailment (RLE); that is, identifying whether one word entails another. For example, buy entails own. Two general strategies for RLE have been proposed: one is to manually construct an asymmetric similarity measure for context vectors (directional similarity); the other is to treat RLE as a problem of learning to recognize semantic relations using supervised machine-learning techniques (relation classification). In this paper, we experiment with two recent state-of-the-art representatives of these two strategies. The first approach is an asymmetric similarity measure (an instance of the directional similarity strategy), designed to capture the degree to which the contexts of a word, a, form a subset of the contexts of another word, b. The second approach (an instance of the relation classification strategy) represents a word pair, a:b, with a feature vector that is the concatenation of the context vectors of a and b, and then applies supervised learning to a training set of labeled feature vectors. In addition, we introduce a third approach, a new instance of the relation classification strategy, which represents a word pair, a:b, with a feature vector in which the features are the differences in the similarities of a and b to a set of reference words. All three approaches use vector space models of semantics, based on word–context matrices. We perform an extensive evaluation of the three approaches using three different datasets. The proposed new approach (similarity differences) performs significantly better than the other two approaches on some datasets, and there is no dataset on which it is significantly worse. Along the way, we address some of the concerns raised in past research regarding the treatment of RLE as a problem of semantic relation classification, and we suggest that it is beneficial to make connections between research on lexical entailment and research on semantic relation classification.
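To make the two relation-classification feature constructions concrete, the following is a minimal sketch, not the authors' implementation: it assumes context vectors are rows of a word–context matrix, uses cosine as the similarity measure, and uses randomly generated vectors and hypothetical reference words purely for illustration.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two context vectors (0 if either is all zeros)."""
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    return float(u @ v / (nu * nv)) if nu > 0 and nv > 0 else 0.0

def concat_features(vec_a, vec_b):
    """Second approach (sketch): concatenate the context vectors of a and b."""
    return np.concatenate([vec_a, vec_b])

def sim_diff_features(vec_a, vec_b, reference_vecs):
    """Third approach (sketch): for each reference word r, the feature is
    sim(a, r) - sim(b, r), the difference in the similarities of a and b to r."""
    return np.array([cosine(vec_a, r) - cosine(vec_b, r) for r in reference_vecs])

# Toy illustration with random stand-ins for word-context rows; a real system
# would take rows of a word-context matrix and feed the feature vectors,
# labeled entails / does-not-entail, to a supervised classifier.
rng = np.random.default_rng(0)
vocab = {w: rng.random(50) for w in ["buy", "own", "cat", "animal"]}
refs = [rng.random(50) for _ in range(10)]  # hypothetical reference words

x_concat = concat_features(vocab["buy"], vocab["own"])        # length 100
x_diff = sim_diff_features(vocab["buy"], vocab["own"], refs)  # length 10
print(x_concat.shape, x_diff.shape)
```

In both constructions the feature vector for a pair a:b would then serve as input to a standard supervised learner trained on labeled word pairs; the vector dimensions, similarity measure, and choice of reference words here are assumptions for the sketch, not the configurations evaluated in the paper.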