Published online by Cambridge University Press: 16 May 2024
This study explores how large language models such as ChatGPT comprehend language and assess information. Through two experiments, we compare ChatGPT's performance with that of humans, addressing two key questions: 1) How does ChatGPT compare with human raters in judgment-based evaluation tasks, such as assessing whether speculative technologies have been realized? 2) How well does ChatGPT extract technical knowledge from non-technical content, such as mining speculative technologies from text, compared with humans? The results suggest that ChatGPT holds promise for knowledge extraction but also reveal a gap between its judgments and those of humans in decision-making tasks.