Neural network language models have had great success in learning language processing solutions by encoding language statistics. These solutions have been shown to produce good approximations of human behavior in many situations (e.g., predicting that one construction should be judged less acceptable than another). However, these solutions are also highly sample-inefficient and brittle outside their training domains. This talk will highlight a number of aspects of human language processing that are unlikely to be learnable from language modeling statistics, precisely because the domains of language available during training are distinct from the domains in which we would like NLP models to operate. I will provide some background from psycholinguistics to discuss the ways in which language models are likely to be inherently inadequate as models of human language processing. This framing may be helpful when analyzing, designing, and fine-tuning models in order to achieve human-like language processing.