Software security: defensive programming



CS 161 Computer Security, Fall 2005
Joseph/Tygar/Vazirani/Wagner
Notes 15: Writing Secure Code

This lecture discusses implementation techniques to avoid security holes when you write code. We will describe many good practices. Many of these have a strong overlap with software engineering and general software quality, but the demands of security place a heavier burden on programmers. In security applications, we must eliminate all security-relevant bugs, no matter how unlikely they are to be triggered in normal execution, because we are facing an intelligent adversary who will gladly interact with our code in abnormal ways if there is any profit in doing so. Compare this to software reliability, where we normally focus on the bugs that are most likely to happen; bugs that only come up under obscure conditions might be ignored if reliability is the goal, but they cannot be ignored when security is the goal. Dealing with malice is much harder than dealing with mischance.

In these notes, we’ll especially emphasize three fundamental techniques: (1) modularity and decomposition for security; (2) formal reasoning about code using invariants; (3) defensive programming. At the end, we also discuss programming language-specific issues and integrating security into the software lifecycle.

1 Modularity

A well-designed system will be decomposed into modules, where modules interact with each other only through well-defined interfaces. Each module should perform a clear function; the essence is conceptual clarity of what it does (what functionality it provides), not how it does it (how it is implemented). The granularity of modules depends on the system and language. A module typically has state and code. For instance, in an object-oriented language like Java, a module might consist of a class (or a few closely related classes).
In C, a module might live in its own file and expose a clear external interface, along with many internal functions that are not externally visible or callable.

Module design is as much about interface design as anything else. The interface is the contract between caller and callee; ideally, it should change less often than the implementation of the module itself. A caller should only need to understand the interface. Modules should interact only through the defined interface; for instance, you shouldn’t use global variables to communicate information from caller to callee. Think of a module as a blob: the interface is its surface area, and the implementation is its volume. Thoughtful design is often characterized by narrow, conceptually clean interfaces and by modules with a low surface-area-to-volume ratio.

When you decompose the system into modules, here are some suggestions that will improve security:

• Minimize the harm that could be caused by failure of a module. Ensure that even if one module is penetrated (e.g., by a buffer overrun) or behaves unexpectedly (e.g., due to a bug in its implementation), the damage is contained as much as possible. Draw a security perimeter around each module.

This note was uploaded on 01/29/2008 for the course CS 194 taught by Professor Joseph during the Fall '05 term at Berkeley.


