Abstract--During language acquisition, one of the first tasks encountered by infants is determining which sounds indicate phonological distinctions in their language and which do not. This is a particularly challenging problem, since it requires unsupervised learning (i.e., speech sounds are unlabeled) and occurs incrementally (i.e., representations are updated continuously as new information is received). Recently, work on perceptual learning has demonstrated that adults are able to adapt speech sound categories to novel distributions of acoustic cues (e.g., in the context of a novel accent) in tasks that also require unsupervised learning. Typically, however, these two processes are thought to rely on distinct mechanisms: Acquisition is viewed as a slow process that occurs during development, whereas adaptation is viewed as a rapid process that can occur over the course of a single hour in a laboratory experiment. Here, I present a computational model of speech perception that learns to map acoustic cues onto phonetic categories via unsupervised learning. I demonstrate that a single model can explain both the acquisition of phonetic categories during development and the adaptation of those categories in adulthood without any changes in the model's plasticity. This suggests that relatively simple unsupervised learning algorithms are sufficient for explaining speech sound learning on vastly different timescales.
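To make the core idea concrete, the following is a minimal sketch (not the paper's actual model) of the kind of unsupervised, incremental learner the abstract describes: an online mixture-of-Gaussians over a single acoustic cue (here, a hypothetical voice-onset-time dimension in ms), where category means are updated token by token with a fixed learning rate. All distributional parameters (10/60 ms categories, a +15 ms "accent" shift, the learning rate) are illustrative assumptions. The same update rule, with unchanged plasticity, handles both initial acquisition and later adaptation to a shifted cue distribution.

```python
import math
import random


def gaussian_pdf(x, mu, var):
    """Density of a normal distribution with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)


class OnlineTwoCategoryModel:
    """Incremental two-category mixture over one acoustic cue.

    Each unlabeled token nudges the category means via responsibility-
    weighted stochastic updates (a streaming analogue of EM). The learning
    rate is constant, so "plasticity" never changes between phases.
    """

    def __init__(self, mu=(0.0, 50.0), var=100.0, lr=0.05):
        self.mu = list(mu)  # initial category means (illustrative)
        self.var = var      # shared, fixed variance for simplicity
        self.lr = lr        # constant learning rate

    def observe(self, x):
        # E-step: posterior responsibility of each category for this token
        likes = [gaussian_pdf(x, m, self.var) for m in self.mu]
        total = sum(likes)
        resp = [like / total for like in likes]
        # M-step (stochastic): move each mean toward the token,
        # in proportion to that category's responsibility
        for k in range(2):
            self.mu[k] += self.lr * resp[k] * (x - self.mu[k])


random.seed(0)
model = OnlineTwoCategoryModel()

# "Acquisition": unlabeled tokens drawn from two categories at 10 and 60 ms
for _ in range(2000):
    model.observe(random.gauss(random.choice([10.0, 60.0]), 8.0))
acquired = sorted(model.mu)  # means end up near 10 and 60

# "Adaptation": same model, same learning rate; a novel "accent"
# shifts both categories by +15 ms
for _ in range(2000):
    model.observe(random.gauss(random.choice([25.0, 75.0]), 8.0))
adapted = sorted(model.mu)  # means track the shifted distribution
```

Under this sketch, acquisition and adaptation differ only in the input distribution the learner happens to receive, not in the learning mechanism itself, which is the claim the abstract makes.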