An emerging body of research suggests that large language models (LLMs) can “deceive” users by fabricating explanations for their behavior or concealing the truth about their actions. The implications are worrisome, particularly because researchers do not yet fully understand why or when LLMs engage in such behavior.