Many of the mathematicians and scientists who guided the development of digital computers in the late 1940s, such as Alan Turing and John von Neumann, saw these new machines not just as tools for calculation but as devices that might employ the same principles exhibited in rational human thought. Thus, a subfield of what came to be called computer science assumed the label artificial intelligence (AI). The idea of building artificial systems that could exhibit intelligent behavior comparable to that of humans (systems that could, e.g., recognize objects, solve problems, and formulate and implement plans) was a heady prospect, and the claims made on behalf of AI during the 1950s and 1960s were impossibly ambitious (e.g., the prediction that a computer would capture the world chess championship within a decade). Despite some theoretical and applied successes within the field, serious problems soon became evident, the most notorious being the frame problem: the difficulty of determining which information about the environment must be updated, and which must be kept constant, in the face of new information.

Instead of fulfilling the goal of quickly producing artificial intelligent agents that could compete with or outperform human beings, by the 1970s and 1980s AI had settled into a pattern of slower but real progress in modeling or simulating aspects of human intelligence. Among the advances made during this period were higher-level structures for encoding information, such as frames and scripts, which were superior to simple propositional encodings in supporting reasoning and the understanding of natural (as opposed to computer or other artificial) language texts, and procedures for storing information about previously encountered cases and invoking those cases in solving new problems (what came to be called case-based reasoning).
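To make the first of these ideas concrete, the following is a minimal sketch of a frame as a data structure, written in Python purely for illustration (the frame systems of the period were typically built in Lisp). The Frame class, the slot names, and the restaurant example are all hypothetical; the sketch shows only the core idea that a frame bundles slots with default fillers, and that a more specific frame can inherit or override the defaults of a more general one.

    # A hypothetical, minimal frame system: slots with default fillers
    # and inheritance of defaults from a more general frame.
    class Frame:
        def __init__(self, name, parent=None, **slots):
            self.name = name
            self.parent = parent      # more general frame supplying defaults
            self.slots = dict(slots)  # slot name -> filler

        def get(self, slot):
            # Prefer a locally filled slot; otherwise fall back on
            # defaults inherited from the parent frame, if any.
            if slot in self.slots:
                return self.slots[slot]
            if self.parent is not None:
                return self.parent.get(slot)
            return None

    # A generic "restaurant" frame encodes default expectations ...
    restaurant = Frame("restaurant", payment="after meal", seating="table service")
    # ... which a more specific "fast-food" frame partly overrides.
    fast_food = Frame("fast-food", parent=restaurant, payment="before meal")

    print(fast_food.get("payment"))  # "before meal"   (overridden locally)
    print(fast_food.get("seating"))  # "table service" (inherited default)

This default-plus-override organization is what made frames and scripts better suited than flat propositional encodings to tasks such as text understanding: a story need only mention what departs from the expected defaults.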
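The case-based procedures can be sketched in the same spirit. Everything below (the similarity measure, the case library, the solve helper) is hypothetical; real case-based reasoners used richer, domain-specific similarity metrics and adapted the retrieved solution rather than reusing it verbatim.

    # A hypothetical case-based reasoner: store solved cases as
    # (feature set, solution) pairs and reuse the nearest case's solution.
    case_library = [
        ({"symptom:no-power", "device:laptop"}, "check battery and charger"),
        ({"symptom:overheats", "device:laptop"}, "clean the fan and vents"),
    ]

    def similarity(a, b):
        # Jaccard overlap of feature sets; a stand-in for the
        # domain-specific metrics real systems employed.
        return len(a & b) / max(len(a | b), 1)

    def solve(problem):
        # Retrieve the most similar stored case and reuse its solution.
        features, solution = max(case_library,
                                 key=lambda case: similarity(case[0], problem))
        return solution

    print(solve({"symptom:no-power", "device:desktop"}))
    # -> "check battery and charger" (the nearest stored case)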