Abstract--Standard theoretical models in phonology over the past 50 years (e.g., Chomsky & Halle 1968, Kenstowicz 1994, Prince & Smolensky 2004) have asserted that all phonological information must be encoded at the most abstract level of representation, and that there can be no informational redundancy across representational levels: otherwise, individual words would be expected to develop their own idiosyncratic phonetics. Consistent with this hypothesis, many details of the pronunciation of a given phoneme token are significantly predicted by its abstract phonological context, rather than by the specific word or larger context in which it occurs. To account for this observation, traditional theory assumes that the lexicon and the phonological grammar are largely independent: the grammar can define the range of forms that can exist in the lexicon, but the particularities of existing words in the lexicon have no influence on the grammar. This approach has been fruitful in accounting for categorical, synchronic phonological patterns within languages, but less so in accounting for how those patterns arise and change over time (e.g., Blevins 2004). Further, a great deal of recent work has established that much of the variation that does exist in the pronunciation of a given phoneme is in fact predicted by a wide variety of more local, less abstract factors, such as word identity (e.g., Pierrehumbert 2002) and the information content of a phoneme relative to its context (e.g., Jurafsky et al. 2001, van Son & Pols 2003, Aylett & Turk 2004, Raymond et al. 2006, Kaiser et al. 2011, Cohen-Priva 2012). As a result, a number of newer models have been developed over the last several decades that propose a causal chain linking individual utterances to long-term change in the phonology of a speech community (e.g., Ohala 1989, Lindblom 1996, Bybee 2001, Blevins 2004, Wedel in press).
These models critically rely on redundancy across levels of representation, and they predict that utterance-level biases arising in the lexicon should influence the long-term trajectory of phonological pattern formation. To test this prediction, my colleagues and I have looked for evidence that the functional load of a phoneme contrast influences its survival over time. In this talk I will present cross-linguistic evidence that phoneme contrasts that distinguish few minimal pairs are significantly more likely to merge over time, and conversely, that phoneme contrasts that distinguish many minimal pairs are significantly more likely to participate in chain shifts or phoneme splits, both of which preserve lexical contrast (Wedel et al. in press). This effect is strongest for minimal pairs that share a word category, consistent with the hypothesis that the effect arises through disambiguation in usage. In addition, I will share some work in progress indicating that, as predicted, within a corpus of spoken English, VOT cues to the stop-voicing contrast are enhanced in words that have minimal pairs defined by that contrast (e.g., 'pat' ~ 'bat'). These findings suggest that phoneme inventories and phonotactics are in part shaped by the set of actual words within the lexicon and by how those words are deployed in usage.