Computational cognitive modeling of human behavior is a relatively young field within computer science, yet broad in its capabilities. Several frameworks, referred to as cognitive architectures, aid the development of computational models by accounting for the realistic capabilities of humans. Complex tasks, such as driving, have been modeled successfully, allowing researchers to examine which processes in the brain are invoked. While existing models attend to a human's abilities and limitations, the effects of individual differences in attributes and environmental conditions have not been researched as extensively. Integrating such differences is essential to applying a model's findings to realistic conditions. With regard to driving, humans regularly find themselves in situations where music is played aloud, consuming a portion of their cognitive resources. By modeling this common occurrence of music playing during a complex task such as driving, the cognitive, perceptual, and motor processes involved can be analyzed and discussed in greater depth, allowing the findings to be applied more comprehensively to actual human behavior.


