Are computing systems trustworthy? To answer this, we need to know three things: what the systems are supposed to do, what they are not supposed to do, and what they actually do. All three are problematic. There is no expressive, practical way to specify what systems must do and must not do. And even if we had such a specification, it would likely be infeasible to show that existing computing systems satisfy it. If we can't analyze security after the fact, the alternative is to design it in from the beginning: accompany programs with explicit, machine-checked security policies, written by programmers as part of program development.
Trustworthy systems must safeguard the end-to-end confidentiality, integrity, and availability of the information they manipulate. We currently lack both sufficiently expressive specifications for these information security properties and sufficiently accurate methods for checking them. This talk describes progress on both fronts. First, information security policies can be made more expressive than simple noninterference or access control policies by adding notions of ownership, declassification, robustness, and erasure. Second, program analysis and transformation can be used to provide strong, automated security assurance. The talk describes how these methods were applied to building a distributed system with explicit security policies.
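To give a flavor of how ownership and declassification make policies more expressive than plain noninterference, the following is a rough illustrative sketch (not the actual label model or system from the talk): each label records, per owner, which principals may read the data; a flow is permitted only if it makes every owner's policy at least as restrictive, and only an owner may relax its own component via declassification.

```python
# Toy ownership-based information-flow labels (illustrative only).
# All names here (Label, flows_to, declassify) are invented for this sketch.
from dataclasses import dataclass


@dataclass(frozen=True)
class Label:
    # Tuple of (owner, frozenset-of-permitted-readers) pairs.
    policies: tuple

    def readers(self, owner):
        for o, r in self.policies:
            if o == owner:
                return r
        return None


def flows_to(src: Label, dst: Label) -> bool:
    """src may flow to dst only if dst is at least as restrictive:
    every owner's policy in src appears in dst with no extra readers."""
    for owner, src_readers in src.policies:
        dst_readers = dst.readers(owner)
        if dst_readers is None or not dst_readers <= src_readers:
            return False
    return True


def declassify(label: Label, owner, new_readers: frozenset) -> Label:
    """Only `owner` may relax its own component of the label."""
    return Label(tuple(
        (o, frozenset(new_readers)) if o == owner else (o, r)
        for o, r in label.policies
    ))


secret = Label((("alice", frozenset({"alice"})),))
public = Label((("alice", frozenset({"alice", "bob"})),))

assert not flows_to(secret, public)   # leaking to bob is rejected
assert flows_to(public, secret)       # restricting readers is always fine
relaxed = declassify(secret, "alice", frozenset({"alice", "bob"}))
assert flows_to(relaxed, public)      # alice's explicit release permits the flow
```

The point of the sketch is that declassification is an auditable, owner-authorized event rather than a silent exception to the policy, which is what makes such labels checkable by program analysis.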