SWE 681 / ISA 681
Secure Software Design &
Programming:
Lecture 4: Design
Dr. David A. Wheeler
2023-09-24
Outline
• Saltzer and Schroeder design principles
• More in-depth on some design principles (a grab bag!), e.g.:
– Minimize privileges
– Counter TOCTOU issues
• Detect, contain, respond
– Including: Design in updates!
• NIST SP 800-160 volume 1 list of design principles
• Supply chain / reuse
• Five Star Automotive Cyber Safety Program (“I am the Cavalry”)
• Clouds & how they are implemented
– Containers, virtual machines
• Attack (Threat) modeling
• Self-protecting software systems
2
What’s design?
• ISO/IEC 12207:2008:
– “Software architectural design.
• For each software item …the implementer shall transform the
requirements for the software item into an architecture that describes
its top-level structure and identifies the software components.
• It shall be ensured that all the requirements for the software item are
allocated to its software components and further refined to facilitate
detailed design.”
– “Software detailed design process is to provide a design for the
software that implements and can be verified against the
requirements…”
• In short:
– Determine components (e.g., classes, programming languages,
frameworks, database systems, etc.) that you’ll use, and how
you’ll use them (interconnections/APIs), to solve the problem
3
Abstract view of a program
4
[Diagram: a program ("Process Data (Structured Program Internals)") receives input, produces output, and may call out to other programs (also consider input & output issues for those calls). "You are here" marks the current focus.]
Developing secure software is not
new & unknown
• Saltzer & Schroeder principles, & many specifics,
publicly known since at least 1970s
• Problem is that most software developers do not
know how to do it
– Attackers, of course, know this
• Knowing various principles & rules-of-thumb
ahead-of-time can avoid many problems later
– This lecture is all about those various principles &
rules-of-thumb
– Principles & rules-of-thumb have trade-offs &
exceptions, but knowing them is important
5
Saltzer and Schroeder
design principles (1)
• Least privilege
– Each user and program should operate using fewest privileges
possible
– Limits the damage from an accident, error, or attack
– Reduces number of potential interactions among privileged programs
• Unintentional, unwanted, or improper uses of privilege less likely
– Extend to the internals of a program: only smallest portion of program
which needs those privileges should have them
• Economy of mechanism/Simplicity
– Protection system's design should be simple and small as possible
– “techniques such as line-by-line inspection of software and physical
examination of hardware that implements protection mechanisms are
necessary. For such techniques to be successful, a small and simple
design is essential.”
– Aka the “KISS” principle (“keep it simple, stupid”)
6
Saltzer and Schroeder
design principles (2)
• Open design
– The protection mechanism must not depend on attacker ignorance
– Instead, the mechanism should be public
• Depend on secrecy of relatively few (and easily changeable) items like
passwords or private keys
– An open design makes extensive public scrutiny possible
– Makes it possible for users to convince themselves system is adequate
– Not realistic to maintain secrecy for distributed system
• Decompilers/subverted hardware quickly expose implementation
“secrets”
• Even if you pretend that source code is necessary to find exploits (it isn't),
source code has often been stolen and redistributed
• This is one of the oldest and most strongly supported principles, based on
many years of experience in cryptography
– Kerckhoffs's Law: “A cryptosystem should be designed to be secure if
everything is known about it except the key information”
– Claude Shannon (inventor of information theory) restated
Kerckhoffs's Law as: “[Assume] the enemy knows the system”
7
Saltzer and Schroeder
design principles (3)
• Complete mediation (“non-bypassable”)
– Every access attempt must be checked; position the mechanism
so it cannot be subverted. For example, in a client-server model,
generally the server must do all access checking because users
can build or modify their own clients
• Fail-safe defaults (e.g., permission-based approach)
– The default should be denial of service, and the protection
scheme should then identify conditions under which access is
permitted
– More generally, installation should be secure by default
• Separation of privilege
– Ideally, access to objects should depend on more than one
condition, so that defeating one protection system won't enable
complete access
8
Saltzer and Schroeder
design principles (4)
• Least common mechanism
– Minimize the amount and use of shared mechanisms (e.g.,
use of the /tmp or /var/tmp directories)
– Shared objects provide potentially dangerous channels for
information flow and unintended interactions
• Psychological acceptability / Easy to use
– The human interface must be designed for ease of use so
users will routinely and automatically use the protection
mechanisms correctly
– Mistakes will be reduced if the security mechanisms
closely match the user's mental image of his or her
protection goals
9
Some additional important design
principles (my view)
• Limited attack surface
– Limit how attackers can get to the software you
trust
• Input validation with whitelists
– Limit what can enter the system through attack
surface
• Harden the system
– Defects happen! Design system so a single defect is less
likely to result in complete compromise
10
These can all be viewed as special cases of minimizing privilege
Design principles…
• Any design must also consider other
requirements & specific circumstances
• Not all details apply to all systems
– Web apps ≠ Local apps ≠ GUI ≠ setuid programs
– But better to know the larger set of issues
• We’ll now cover specific design issues
– Several just add details on S&S principles
– Something of a “grab bag” of things to consider
11
Securing the interface
• Interfaces should be
– Minimal (simple as possible)
– Narrow (provide only the functions needed)
– Non-bypassable
• Trust should be minimized
• Consider limiting the data the user can see
• We’ve already discussed filtering input data – trusted
components must filter input data from untrusted sources
• Code you trust must run in an environment you control, not
on a system controlled by an attacker
– Necessary for complete mediation/non-bypassability
– Code run in a browser/client is bypassable if an attacker runs
the browser/client (attacker can modify or replace)
12
BEWARE: Warning signs of bad
designs (leading to bypassability)
• JavaScript running on the client that does input validation or other
security-relevant operations
– Ensure all security-relevant operations (e.g., input validation) are re-performed
in a trusted environment (typically on the server)
– In most situations client cannot be trusted; only server is trusted
– Common mistake with big JavaScript frameworks & large mobile apps
– Can be secure, but only if checks are re-performed in a trusted environment
• Mobile app does security-relevant input validation (same client-side issue)
• Database is directly accessible via the network for use by a client
application (web browser, mobile app, etc.)
– Can be secure, but must ensure that all operations the user can possibly
perform are acceptable (e.g., control with SQL grant)
– Often better (or necessary) to interpose access instead of providing direct
access to database; direct database access may violate least privilege
13
If the attacker is running your code, the attacker can modify or
replace your code – don’t let the attacker control your system!
JavaScript framework example #1
Is this input validation approach acceptable?
14
[Diagram. Runs in web browser: execution of JavaScript framework/library (e.g., React, Angular, Vue, jQuery); execution of application-specific JavaScript code, including some security-relevant input validation checks. Runs on trusted server: login mechanism; database with API providing direct access to all data to logged-in users; files with JavaScript code to be executed on the client (downloaded by user’s web browser on request).]
NO! THIS DOES NOT PROVIDE ANY SECURITY!
JavaScript sent to a web browser is executed client-side. This typically makes
it trivial to bypass and thus irrelevant for security. This JavaScript may be
useful to speed non-malicious responses, but it does not counter attacks.
Obfuscated Code is not enough to
secure the interface
• Obfuscation: deliberate act of creating source or machine code that is
(more) difficult for humans to understand
• Code sent to client (e.g., JavaScript or Web Assembly) can be first sent
through an obfuscator (result aka shrouded or scrambled)
– Minification provides mild level of obfuscation
– Tools focusing on scrambling (jscrambler, etc.) can do more obfuscation
• Some naïve developers think no one can de-obfuscate or replace it
• Basic problem: If it is obfuscated, it can be de-obfuscated
• Obfuscation may slow naïve attacks; it does not prevent attackers from:
– Learning how the code works
– Finding data sent to the code (including any secret keys you send)
– Changing or replacing the code
• Obfuscation is inadequate for non-bypassability/security
– Non-bypassability means the attacker can’t bypass it
– Security-relevant code (such as input validation code) needs to run in a
trusted environment, not as obfuscated code in an environment that the
attacker could control
15
Avoid this. Work to run code you trust on a system you control
To run trusted code within an
attacker-controlled system…
• Don’t do it! Trying to do it is very risky, difficult, & best avoided
• Some advanced client-side mechanisms try to do this
– Digital Rights Management (DRM) systems, Trusted Platform Module (TPM),
etc. try to enable execution inside client “protected” from owner of client
– Theory: Can trust software running on environment controlled by attacker
– In practice: typically broken quickly (hardware protections can be undone if
you physically control the hardware) or bypassed (e.g., videos), even when
implemented using specialized teams of experts (you must use such teams)
– Ethical issue: Should you prevent users’ control of the hardware they own?
If you succeeded & users can’t break it, dangerous to users & society
– Potential liability issue: System owner doesn’t control the system
– Often better to check on server and/or limit physical access to client
• Research ongoing to run trusted code on untrusted servers
– Homomorphic encryption: enables computation on encrypted data that
remains encrypted, so can compute on server you don’t trust while
maintaining confidentiality/integrity (not availability)
– Fully homomorphic encryption is still a research area
– Partial homomorphic encryption (supporting only a subset of operations)
currently practical only in very special cases (huge overhead)
16
Minimizing privileges
• Strive to provide “least privilege”
– Program components should only have minimum
necessary privileges
– Limits damage if attacker breaks in
– Don’t make a program setuid; make it a normal program &
require admin login
– Consider breaking into pieces with small trusted pieces
• Unix-like systems
– Primary privilege determiner is EUID/EGID
– “Saved” SUID/SGID lets you temporarily disable/re-enable
– “chroot” lets you limit filesystem visibility
• This principle has many aspects
17
Minimize privileges granted
• Don’t grant root
– Avoid creating setuid root programs
• Option: Create special group
– Grant file access to that group
– Setgid group, or run daemon process as group
• Option: Create special user
– Web servers typically do this (“nobody”)
– Better to create group than user – easier to admin group
• Option: Limit database rights for application
– Create pseudo-user(s)/groups with limited rights
• E.g., application may only need select or insert
– SQL GRANT …
18
Minimize time privilege
can be used
• Permanently give up privilege ASAP
– e.g., set EUID/EGID and SUID/SGID to RUID/RGID
– Web servers do this
• Need root privileges to attach to port 80
• Don’t need them afterwards, so permanently drop them
• Sometimes can’t do this
19
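The permanent-drop sequence above can be sketched in Python (which wraps the same setgroups/setgid/setuid syscalls). This is a sketch, not the deck's own code: the ordering (supplementary groups, then GID, then UID last) and the verification step are the essential points.

```python
import os

def drop_privileges_permanently(uid, gid):
    """Permanently drop elevated privileges. Order matters: clear
    supplementary groups first, drop the group, and drop the user ID
    last (otherwise the earlier steps would be forbidden)."""
    if os.geteuid() == 0:
        os.setgroups([])      # clear supplementary groups (requires root)
    os.setgid(gid)            # drop group before user
    os.setuid(uid)            # drop user last; for setuid() this is permanent
    # Verify the drop: re-asserting root must now fail
    try:
        os.setuid(0)
    except PermissionError:
        return True           # drop succeeded and cannot be undone
    raise RuntimeError("privilege drop failed; refusing to continue")
```

A web server would call this right after binding port 80, passing the UID/GID of an unprivileged account.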
Minimize time privilege
is active
• Temporarily give up privileges
– Can do with setuid(2), seteuid(2), setgroups(2)
• Less useful than giving up permanently
– Some attacks can force program to re-assert
– But some attacks can’t, so still helps
20
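The temporary variant looks like this in Python (a sketch using seteuid(2)); note the comment on why it is weaker than a permanent drop:

```python
import os

def run_with_lowered_privilege(func, low_uid):
    """Temporarily lower the *effective* UID, run func, then restore it.
    Weaker than a permanent drop: the saved UID means privilege can be
    re-asserted later -- by us, but possibly also by injected attack code."""
    saved = os.geteuid()
    os.seteuid(low_uid)       # only the effective UID changes
    try:
        return func()
    finally:
        os.seteuid(saved)     # re-assert privilege afterwards
```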
Minimize modules granted
privileges
• Break into parts
– Only some parts (preferably small) have privilege
– Its API limits actions the rest can request
• e.g., “Returns authenticated yes/no” & hides password info
• Break up roles, e.g., “admin” vs “user” components
– Give program for user interface far fewer privileges
– Web app: Separate admin interface, different privileges
• Traditional GUI: Don’t give privilege to GUI
– GUI toolkits are big
– Instead, GUI talks to separate program with privilege
• Create special program that does one small thing
21
Minimizing privileges:
Limit accessible resources
• Put all web program (e.g., CGI) data outside
document tree
– So users cannot request data by URL
• Minimize resources available
– Limit CPU resources, data space, etc.
– If program “goes haywire”, can detect & stop it sooner
• Consider using mechanisms to limit visibility
– e.g., chroot (next), containers, virtual machines
22
Minimizing privileges:
Mutually suspicious components
• In “mutually suspicious components” the
system is broken into components…
• But the components strictly limit the trust
granted to others
• Limits damage if one component is subverted
• Can be challenging to do in some systems
23
Chroot & Containers
• Unix-like systems (unlike Windows) have a “top”
directory “/” (root directory)
• “chroot” call can change what “/” refers to in a process
(privileged call)
– Attempts to “cd ..” through it don’t work
– Called a “chroot jail”
– Makes large portions of filesystem inaccessible
• Must set up, call chroot, cd into chroot jail, close all
files, and drop all privileges
• Chroot cannot protect against root privileges
• Basis for implementing containers (discussed later)
24
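The setup sequence on this slide can be sketched in Python; the uid/gid of 65534 ("nobody") are illustrative assumptions, and the whole routine requires root:

```python
import os

def enter_chroot_jail(jail_dir, uid=65534, gid=65534):
    """Confine this process to jail_dir (requires root). chroot alone
    cannot protect against a process that keeps root privileges, so
    they are dropped immediately afterwards."""
    os.chdir(jail_dir)        # be inside the jail before calling chroot
    os.chroot(jail_dir)       # '/' now means jail_dir for this process
    os.chdir("/")             # ensure cwd is under the new root
    os.setgid(gid)            # drop all privileges: a root process
    os.setuid(uid)            # could otherwise escape the chroot jail
```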
Configure safely/ use safe defaults
• Make initial installation secure
– Don’t have a “default” password – force setting
– Start with most restrictive policy until reconfigure
– “Sample” configurations must be secure
• People will use them as starting points
– Installed files not writeable (or readable?) by others
– Make installable by non-root (if practical)
– During install, check assumptions are true
• Make it easy to reconfigure while keeping secure
– Make configuration easy in general
– Deny access until specifically granted
25
Load initialization values safely
• If load initialization values, make sure they can’t
be subverted
– Ensure attacker can't change which initialization file is
used, nor create or modify that file
– If a traditional user application, don’t use current
directory – use user’s home directory
• If the program is setuid/setgid
– Don’t read any file controlled by the user unless you
carefully filter it as an untrusted (potentially hostile)
input
– Trusted configuration values should be loaded from
somewhere else entirely (typically from a file in /etc)
26
Minimize file privileges
• Minimize who can read
– Should ordinary users be able to read config file?
– Probably not, especially if sensitive info like
passwords might be there
– Android: “other user” should not normally have r/w
access (different application)
• Especially minimize who can write
• Consider checking before use
– e.g., stop processing configuration file if arbitrary
user can write the file or directory it’s in
27
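The "check before use" idea can be sketched in Python: refuse a configuration file an arbitrary user could have written. This checks only the file itself; as the slide notes, a fuller version would also check the directory it's in.

```python
import os
import stat

def config_file_is_safe(path):
    """Reject a configuration file that an untrusted user could have
    written: wrong owner, or group/world-writable permissions."""
    st = os.stat(path)
    if st.st_uid not in (0, os.getuid()):
        return False          # owned by neither root nor us: don't trust it
    if st.st_mode & (stat.S_IWGRP | stat.S_IWOTH):
        return False          # group- or world-writable: don't trust it
    return True
```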
Fail safe
• Failure will happen
– Design so if (when) program fails, safest possible
result occurs
• Two possibilities:
– Fail open: Try to fail in a way that allows system to
function (choose if availability most important)
– Fail closed: Restrict access on failure, even if it
means loss of service (choose if confidentiality or
integrity are more important)
28
“Open/closed” is “like a door” not “like an electrical circuit”
Bad input
• Don’t crash when bad input occurs
– That sets up easy DoS
• Malformed input should stop processing that
request & prep for next request
– Don’t try to “figure out what the user wanted” – if
it’s security-related, just deny service
• Don’t reply with too much information
– Just “access denied” & log the rest
29
Avoid race conditions
• “Race condition” = “Anomalous behavior due to
unexpected critical dependence on the relative timing
of events” [FOLDOC]
– Generally involve one or more processes…
– accessing a shared resource (e.g., file or variable)
– where multiple access isn’t properly controlled
• Problems because:
– Sequencing (non-atomic) problems
• Interference often from untrusted processes
• Typically solve by creating locks
– Deadlock/livelock/locking failure
• interference from trusted process, often same program
30
Sequencing/Non-atomic problems
• Modern systems have many processors
– Even if one processor, typically simulate many
• Loading/saving shared resource must be
controlled
– Shared variable (memory)
– File
– …
• If not controlled, may lead to vulnerability
31
Non-atomic “Add 1”
32
[Diagram: time runs downward. Shared variable x starts at 3. Process 1 loads x (3); Process 2 also loads x (3). Each increments its accumulator (to 4). Process 1 stores x (4); Process 2 stores x (4). Final value is 4, not 5.]
Failure to lock means 3+2 = 4!
TOCTOU
• Secure programs must:
1. Determine if a request should be granted, and
2. If ok, act on that request
• Must not be possible for untrusted user to
change anything relevant between #1 and #2
• If can, termed “time of check - time of use”
(TOCTOU) race condition
– Often best to just try to perform action (if
atomic), and then look at error condition
33
Atomic actions in filesystem
• If filesystem shared, can cause problems
– Don’t use access(2) to see if you can do something
• Perhaps things will change!
– Just try to open the file & check for errors
• When creating a file, use open O_CREAT | O_EXCL
– C11 specification added fopen “x” exclusive mode
• Means fopen fails if the file already exists or cannot be created
• Otherwise, the file is created with exclusive (also known as non-
shared) access
– Many other languages build on C’s “fopen”
• So this addition simplifies exclusive access
34
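The atomic-creation pattern can be sketched in Python, whose os.open exposes the same O_CREAT|O_EXCL flags as open(2) (and whose built-in open has an "x" mode like C11 fopen):

```python
import os

def create_exclusively(path):
    """Atomically create a file, failing if it already exists -- the
    same guarantee as open(2) with O_CREAT|O_EXCL or C11 fopen "x".
    No separate existence check, so there is no TOCTOU window."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    return os.fdopen(fd, "w")
```

Instead of an access-then-open sequence, just attempt the operation and handle the resulting error (here, FileExistsError).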
/tmp & /var/tmp directories
• Typically shared among processes
– Used for “temporary” files and for sharing data
between processes
• Be very careful creating files here!!
• In any shared directory, repeatedly:
– Create a “random” filename
– Open it using O_CREAT | O_EXCL and very narrow
permissions (atomically creates the file, else fails)
– Stop repeating when the open succeeds
• Otherwise, if attacker can create that file first,
attacker owns & controls it
35
Filesystem: Insecure & secure
• By themselves, many library implementations for
creating temporary files aren’t secure
– tmpfile(3), mktemp(3), tmpnam(3)
– Shell convention “$$” isn’t secure – predictable filenames!
• Instead, wrap in larger (securing) routines
– First, set creation file permissions to limit them
– Create a name with tempnam(3), try open with O_CREAT | O_EXCL
– Repeat previous step until success
• mktemp(1) is okay… unless a temp cleaner runs!
– Automated “cleanup” can enable a race condition
– Safely created, then temp cleaner erases, then attacker creates
36
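In practice, library routines such as mkstemp(3) (here sketched via Python's tempfile, which makes the same guarantees) already implement the secure create-and-retry loop described on the previous slide:

```python
import os
import tempfile

def secure_temp_file():
    """Create a temporary file safely. tempfile.mkstemp, like mkstemp(3),
    does the secure loop for you: random name, atomic O_CREAT|O_EXCL
    creation, narrow (owner-only) permissions, retried until success."""
    fd, path = tempfile.mkstemp()
    return os.fdopen(fd, "w+b"), path   # caller must unlink path when done
```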
Symbolic & hard links
• Unix-like systems allow “links”
– Symbolic link
• Stores the name of another file
• Any attempt to open this file redirected by name
– Hard link
• Creates another name for same file
• System counts how many links to file exist
• Attackers may create links to existing files
– Trick privileged program into revealing/changing it
37
Locking
• Often program must have exclusive rights (“lock”)
• Standard lock problems:
– Deadlocks (“deadly embraces”)
• Process 1 locks A and waits for B; process 2 locks B and waits
for A
– Livelocks
– Releasing “stuck” locks if program doesn’t clean up
• Many deadlocks/livelocks can be prevented simply:
– Ensure all processes create locks in the same order
38
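The same-order discipline can be sketched with Python threads; sorting locks by a canonical key (here id(), an illustrative choice) ensures every caller acquires them in one global order, so the deadly embrace cannot occur:

```python
import threading

def acquire_in_order(*locks):
    """Acquire several locks in one global canonical order. No matter
    what order callers name the locks, everyone takes them the same
    way, which prevents deadlock between them."""
    ordered = sorted(locks, key=id)
    for lock in ordered:
        lock.acquire()
    return ordered

def release_all(held):
    # Release in reverse acquisition order
    for lock in reversed(held):
        lock.release()
```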
Files as locks
• Traditional lock on Unix-like systems:
– Create a file to assert a lock
– e.g., /var/run/NAME.pid
– If lock stuck, can easily see (ls) & fix (rm)
– Works fine if file is created with O_CREAT|O_EXCL
• And make sure that file permissions don’t let attacker in
• Inside threads, use threading systems’ locks
39
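A sketch of the traditional pidfile lock (the /var/run path on the slide is the convention; the test below uses a temporary directory instead):

```python
import os

def try_lock(lockfile):
    """Assert a traditional Unix-style file lock. O_CREAT|O_EXCL is
    atomic, so two processes cannot both succeed; the pid written inside
    makes a stuck lock easy to inspect (cat) and clear (rm)."""
    try:
        fd = os.open(lockfile, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644)
    except FileExistsError:
        return False                      # someone else holds the lock
    os.write(fd, str(os.getpid()).encode())
    os.close(fd)
    return True

def unlock(lockfile):
    os.unlink(lockfile)
```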
Avoid sharing
• Avoid sharing resources
– No sharing means no race conditions
– No sharing means attack on one might not affect
the other
• “Covert channels” are surreptitious ways to
send data across data boundaries
– Require some sort of sharing
40
Trust only trustworthy channels
• IP address typically forgeable
• IP port under attacker control (usually)
• Reverse DNS entry (“given this IP address, what is the name?”)
usually under attacker control
• Emails easily forgeable (unless digitally signed)
– Emailing back & forth, with cryptographically random entries, may be
okay for low-value info, but it’s not hard to defeat
• Server (including web server) must assume that client (or
middleman) is not trustworthy (client can do anything!)
– Client controls HTML “hidden fields”, HTTP_REFERER, & cookies
– These cannot be trusted unless special precautions are taken (e.g.,
digitally signed, or encrypted using server-only key)
– Usually better off keeping data you care about at the server end in a
client/server model
41
Use internal
consistency-checking code
• Check that key internal calls and basic state
assumptions are valid
– Identify key invariants, and actually check them
• Make sure that they stay running in the
deployed system
42
Self-limit resources
• Shed/limit excessive loads
• Consider setting limit values (e.g., setrlimit(2)) to limit resource use
– At least, don’t record debug info like “core” files in
deployed systems
– May contain sensitive data
43
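Self-limiting can be sketched via Python's resource module (a thin wrapper over setrlimit(2), Unix only); the file-descriptor cap of 1024 is an illustrative load limit, not a recommendation:

```python
import resource

def self_limit_resources():
    """Limit this process's own resource use. Core dumps can contain
    sensitive data (passwords, keys), so forbid them entirely."""
    resource.setrlimit(resource.RLIMIT_CORE, (0, 0))   # no 'core' files
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    # Example load limit: cap the number of open file descriptors
    resource.setrlimit(resource.RLIMIT_NOFILE, (min(soft, 1024), hard))
```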
Secure install/configuration
• No “default passwords” – force their setting
• Ensure executables can’t be changed during
normal operation
– Install executables (e.g., web apps) so they are owned
by root/special install user, & not writable by others
– Ensure most programs run by others
– Even if that process is broken into, attacker can’t
easily subvert executable – no permission!
– This means installation/update requires special
permission, often true anyway
• Sample files for admins should be secure
44
Secure install/configuration (2)
• Ensure that configuration data is from secured
locations
– /etc? $HOME? Environment variables?
– Configuration information should be in directories
that can’t be easily accessed (at least write,
maybe read). Often /etc
• Implement “default deny”
45
Error reports
• Limit error information sent back to user
– Information may help attacker
– Do log problems, in ways not available to
potential adversaries
• e.g., login failure
– Just tell them “authorization failed” – not “no
such user” or “password incorrect” or (worse)
“need longer password”
46
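The login-failure advice can be sketched as: log the specific reason where only operators can see it, and return only a generic message to the user. The function name and logging destination are illustrative; real logging configuration is deployment-specific.

```python
import logging

def auth_failure_response(username, detail):
    """Tell the user only that authorization failed; record the specific
    reason (no such user, bad password, ...) in the server-side log."""
    logging.getLogger("auth").warning("login failed for %r: %s",
                                      username, detail)
    return "authorization failed"   # same reply for every failure mode
```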
Protect credentials
• Never hardcode credentials (e.g., passwords) into code –
put credentials in separate place (CWE-259)
• Don’t check live credentials into a version control system –
they stick around
• Encrypt credentials if you send them over network – never
send them in the clear
• Attackers hunt for credentials!
– NSA “hunts sysadmins” when attacking networks, in particular,
they hunt for the “credentials of network administrators and
others with high levels of network access and privileges that can
open the kingdom to intruders…. [including looking for]
hardcoded passwords in software or passwords that are
transmitted in the clear” [Zetter2016]
47
“NSA Hacker Chief Explains How to Keep Him Out of Your System" by Kim Zetter, 2016-01-28, Wired,
a summary of a talk by Rob Joyce, chief of the NSA's Tailored Access Operations (TAO),
http://www.wired.com/2016/01/nsa-hacker-chief-explains-how-to-keep-him-out-of-your-system/
Defense-in-depth
• Try to make attacker break multiple
countermeasures before attack succeeds, e.g.:
– Buffer overflow countermeasure
– Limited privilege
• No guarantee against attack, but can help
– If breaking all is necessary and each measure adds
significant effort
48
Protect, Detect, Respond
• Protect: Preventing break-in ahead-of-time is best
– Defense-in-depth: Attacker must break multiple levels
• But prevention can’t always succeed
– Persistent attacker will break through layers
– Defender has to “defend everywhere” while attacker may only
require one mistake/subversion
• NIST Cybersecurity Framework has 5 functions
– Identify, Protect, Detect, Respond, Recover
– Groupable: (Identify &) Protect, Detect, (Respond &) Recover
• Also support “detect, contain, respond”
– Protect: Limit privileges, sandboxes, etc.
– Detect: Logging / log analysis, tripwires, etc.
– Response: Record info as justifiable evidence
49
Design should include an update
system to counter vulnerabilities
• Vulnerabilities likely to be found later
• Attackers often succeed using known vulnerabilities on unpatched systems
– “NSA and other APT attackers don’t rely on zero-day exploits extensively... they
don’t have to... so many more vectors [are] easier... [including] known
vulnerabilities for which a patch is available but the owner hasn’t installed it.”
[Zetter2016]
• Users often don’t update if they have to initiate
– 92% of the 115K Cisco devices then connected to the Internet were running
out-of-date vulnerable software (31% well past support) [Cisco 2016 Annual
Security Report]
• Plan and design for rapid automatic updates vs. vulnerabilities
– Ensure supply chain (e.g., OEMs, hardware makers, carriers) enables updates (Android!)
– Consider making updating secure & automatic, with a way for users to opt out
if they wish (& a way for them to control rollout separately)
– Beware: Update system can create vulnerabilities; ensure update has a valid
digital signature before automatically accepting it
– Safeguard update signing keys; don’t put on an Internet-connected machine
50
NIST SP 800-160 volume 1
Appendix F (design principles)
51
Supply chain / reuse
• Supply chain = anything from elsewhere & how distributed
– Which components you’ll use, & how, is a design decision
– Important decision: What will you reuse? How will you get & verify it?
– Includes operating system, programming language implementation(s),
database systems, any other third-party components
• Reused software may have unintentional or intentional vulnerabilities –
manage that risk!
• Evaluate risk of components before reusing them
– Learn more about component before reusing it
– Consider product, process (development & delivery), people
– What are others saying? What information is available?
– May want to evaluate it yourself (to what depth?)
– Non-OSS often more difficult due to less information
– Many ways to manage risk, once you realize it’s an issue
• Establish process to quickly learn of component vulnerabilities & update
– Aids: Package manager, good test suite, conform to its API
52
Programming language selection
• Selecting programming language(s) is a key
architectural design choice
• Depends on existing software, knowledge, domain, etc.
• Where practicable, choose a programming language
that reduces the likelihood of security vulnerabilities
– Includes countermeasures for common domain problems
• Difficult to create secure software in C/C++
– Default operations are dangerous & the language provides no built-in
mechanisms to counter buffer overflows, double-frees, etc.
– Syntax easily leads to errors (e.g., == vs. =)
– It is possible to write secure software in C/C++, but must
be prepared to spend the time & effort to do it
53
Five Star Automotive Cyber Safety
Program from “I am the Cavalry”
1. Safety by Design
– Published attestation of Secure Software Development Lifecycle, summarizing your design,
development, and adversarial resilience testing programs for products and supply chain
– Supply Chain Rigor; Reduce Attack Surface/Complexity; Independent, Adversarial Testing
2. Third Party Collaboration
– Published coordinated disclosure policy inviting 3rd-party researchers acting in good faith
3. Evidence Capture
– Vehicle systems provide tamper evident, forensically-sound logging and evidence capture to
facilitate safety investigations
4. Security Updates
– Vehicles can be securely updated in a prompt and agile manner
5. Segmentation and Isolation
– Have a published attestation of the physical and logical isolation measures to separate critical
systems from non-critical systems
– “If systems share the same memory, computing, and/or circuitry (as most current generation
cars do), these systems allow for loss of life and limb. Such risks are entirely avoidable…
Hacking the InfoTainment system should never cause an accident.”
– “Air Gaps: Physical separation is the only way to ensure that non-critical systems can not
adversely impact primary, operational, and safety systems…”
– “System Integrity/Recovery: …Earlier detection can reduce the total duration and extent of
the compromise as well as catalyze remediation…”
54
Source: https://www.iamthecavalry.org/domains/automotive/5star/
Cloud computing
• Clouds widely used, often misunderstood
– Clouds often cheaper (where appropriate), many variations
– Decisions to use cloud (and how) impact security
• NIST Definition of Cloud Computing (NIST SP 800-145):
– Cloud computing is “a model for enabling ubiquitous,
convenient, on-demand network access to a shared pool of
configurable computing resources (e.g., networks, servers,
storage, applications, and services) that can be rapidly
provisioned and released with minimal management effort or
service provider interaction.”
– Five essential characteristics: On-demand self-service, broad
network access, resource pooling, rapid elasticity, and
measured service
• Virtualization is common, but not required, to be a cloud
55
Cloud service models
• Infrastructure as a Service (IaaS): “consumer [can] deploy and run arbitrary
software [including] operating systems and applications....” Examples: Amazon
Web Services (AWS), Windows Azure, Google Compute Engine, and Rackspace
Open Cloud. OpenStack is OSS
• Platform as a Service (PaaS): “consumer [can deploy] consumer-created or
acquired applications...” Examples: Google App Engine, Red Hat OpenShift,
Heroku, and Windows Azure Cloud Services. Amazon Web Services (AWS) also
supports PaaS. OpenShift Origin is OSS
• Software as a Service (SaaS): “consumer [can] use the provider’s applications
running on a cloud infrastructure...”. Examples: SalesForce, Google Docs
56
Source: NIST Definition of Cloud Computing (NIST SP 800-145)
Different cloud approaches have
different security ramifications
• Many different isolation mechanisms can implement clouds, e.g.:
– Physically separate (“bare metal clouds”)
– Virtual machines (VMs) (hardware virtualization) – hardware & VMM shared.
When attacker breaks VMM or hardware, controls everything
– Containerization (“OS-level virtualization”) – OS kernel also shared. Creates
processes with separate namespace (files, net, etc.). However, when attacker
breaks OS kernel (bigger API than VMM) or hardware, controls everything
– Multi-user accounts – process tree & filesystem also shared. When attacker
breaks privileged program (bigger API), OS, or hardware, controls everything
• Who is cloud shared with?
– Public, limited community, private
– Increased sharing tends to lower cost, but also eases attacker access for attack
• Who controls your cloud? Can you trust them? What’s the exit cost?
• Always do risk management
– Sometimes it’s better to go with lower risk approach (of cloud, or non-cloud)
– Sometimes it’s better to mitigate or accept risk
57
For more information, see “Cloud Security: Virtualization, Containers, and Related Issues”
by David A. Wheeler, http://www.dwheeler.com/essays/cloud-security-virtualization-containers.html
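The isolation comparison above can be summarized in a short sketch: each mechanism shares more layers than the previous one, and compromising any shared layer defeats tenant isolation. This is an illustrative simplification of the bullets above (e.g., containers may themselves run inside VMs), and all names are chosen for this example only:

```python
# Illustrative sketch of the slide's isolation comparison.
# For each mechanism, list the layers shared among tenants; an attacker
# who compromises a shared layer "controls everything" for those tenants.
SHARED_LAYERS = {
    "bare metal": [],                       # physically separate machines
    "virtual machine": ["hardware", "VMM"],
    "container": ["hardware", "OS kernel"],  # kernel API is bigger than a VMM's
    "multi-user account": ["hardware", "OS kernel", "privileged programs"],
}

def isolation_broken_by(mechanism, compromised_layer):
    """True if compromising this layer defeats isolation for this mechanism."""
    return compromised_layer in SHARED_LAYERS[mechanism]

# A VMM escape breaks VM-based tenants, but containers don't rely on a VMM:
print(isolation_broken_by("virtual machine", "VMM"))
print(isolation_broken_by("container", "VMM"))
```

The point of the table-driven form is the risk-management trade-off on the slide: moving down the list shares more (cheaper), but each added shared layer is another way an attacker can reach you.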
Cloud use is not automatically
secure – who’s your supplier?
58
Dave Blazek, SHI Cloud CLOUDVILLE Cartoon. Licensed
under a Creative Commons Attribution-NoDerivs 3.0 Unported License.
http://readwrite.com/2012/01/15/cartoon-ways-to-improve-cloud
Based on a work at blog.shicloud.com.
Attack (Threat) modeling
• When you have an initial design, think like an attacker
before implementing it (cheaper!)
• Sometimes called “threat modeling” (some object to the
term as misleading)
• Varying approaches, often include:
– Define (security) requirements
– Define architecture/design model (need not be detailed)
– Analyze to identify attacks/security threats
– Determine risk level, select countermeasures
• Including design changes, additional components, etc.
– Keep updating
59
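The steps above (identify threats, determine risk level, select countermeasures, keep updating) can be sketched as a minimal threat register. The threats, the 1–5 scoring scales, and the treatment threshold below are all hypothetical illustrations, not part of any particular methodology:

```python
# Minimal threat-register sketch: risk = likelihood x impact on assumed
# 1-5 scales. All threat names, scores, and the threshold are hypothetical.

def risk_level(likelihood, impact):
    """Score a threat; both inputs are assumed to be on a 1-5 scale."""
    return likelihood * impact

THRESHOLD = 10  # assumed cutoff: at or above this, select a countermeasure

threats = [
    # (threat description, likelihood, impact) -- illustrative entries
    ("SQL injection via login form", 4, 5),
    ("Log tampering by insider", 2, 3),
]

for name, likelihood, impact in threats:
    score = risk_level(likelihood, impact)
    action = ("mitigate: design change / added component"
              if score >= THRESHOLD else "accept or monitor")
    print(f"{name}: risk={score}, {action}")
```

In practice the register is revisited as the design changes ("keep updating"); the value of even a crude score is forcing an explicit mitigate-vs-accept decision per threat.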
Analysis approaches
• Attacker-centric
– Starts with an attacker
– Evaluates their goals & how they might achieve them (e.g.,
via entry points)
– Used in CERT attack modeling approach
• Design-centric
– Starts with the design of the system: steps through system
model, looking for types of attacks against each element
– Used in threat modeling in Microsoft’s Security
Development Lifecycle
• Asset-centric
– Starts from key assets entrusted to a system
60
Microsoft STRIDE Approach
• Decompose system into relevant (design) components
– Design-centric approach
– Uses simple data flow diagrams; identifies trust boundaries
• Analyze each component for susceptibility to threats
• Mitigate the threats
61
http://msdn.microsoft.com/en-us/magazine/cc163519.aspx
“STRIDE” Threat → Security Property
Spoofing → Authentication
Tampering → Integrity
Repudiation → Non-repudiation
Information disclosure → Confidentiality
Denial of service → Availability
Elevation of privilege → Authorization
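The threat-to-property mapping can be captured directly in code. The per-element applicability map below is a simplified sketch inspired by Microsoft’s “STRIDE-per-element” variant; the exact element lists are assumptions for illustration, not the official table:

```python
# STRIDE threat -> security property it violates (from the table above).
STRIDE = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-repudiation",
    "Information disclosure": "Confidentiality",
    "Denial of service": "Availability",
    "Elevation of privilege": "Authorization",
}

# Assumed, simplified applicability: which threat types to consider for
# each data-flow-diagram element type (illustrative, not authoritative).
PER_ELEMENT = {
    "external entity": ["Spoofing", "Repudiation"],
    "process": list(STRIDE),  # processes considered susceptible to all six
    "data store": ["Tampering", "Repudiation", "Information disclosure",
                   "Denial of service"],
    "data flow": ["Tampering", "Information disclosure", "Denial of service"],
}

def threats_for(element_type):
    """List (threat, violated property) pairs to analyze for a DFD element."""
    return [(t, STRIDE[t]) for t in PER_ELEMENT[element_type]]

print(threats_for("data flow"))
```

This mirrors the STRIDE workflow: decompose into components, then walk each component asking only the threat questions that apply to its element type.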
CERT Attack Modeling
• Create simple design overview
• Develop “attack trees” – top is attacker’s objective, decompose down into
conditions that achieve that (with AND or OR)
• Devise methods to counter attack (build on attack patterns)
• E.G., Buffer Overflow Attack Pattern:
– Goal: Exploit buffer overflow vulnerability to perform malicious function on
target system
– Precondition: Attacker can execute certain programs on target system
– Attack:
AND 1. Identify executable program on target system susceptible to buffer overflow
vulnerability
2. Identify code that will perform malicious function when it executes with program’s
privilege
3. Construct input value that will force code to be in program’s address space
4. Execute program in a way that makes it jump to address at which code resides
– Postcondition: Target system performs malicious function
62
Source: http://www.cert.org/archive/pdf/01tn001.pdf
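An attack tree like the buffer-overflow pattern above can be represented as nested AND/OR nodes and evaluated to check whether a countermeasure breaks the attack. This is a minimal sketch, not CERT’s notation; the tuple encoding and leaf names are invented for illustration:

```python
# Minimal attack-tree sketch: leaves are attacker steps; interior nodes
# combine children with AND (all steps needed) or OR (any step suffices).

def leaf(name, mitigated=False):
    return ("LEAF", name, mitigated)

def AND(*children):
    return ("AND", children)

def OR(*children):
    return ("OR", children)

def achievable(node):
    """True if the attacker can still reach this node's goal."""
    kind = node[0]
    if kind == "LEAF":
        return not node[2]          # an unmitigated step is open to attack
    children = node[1]
    if kind == "AND":
        return all(achievable(c) for c in children)
    return any(achievable(c) for c in children)  # OR node

# The slide's buffer-overflow pattern: all four steps must succeed (AND).
overflow = AND(
    leaf("identify program susceptible to buffer overflow"),
    leaf("identify code performing malicious function"),
    leaf("force code into program's address space",
         mitigated=True),           # e.g., assumed non-executable memory
    leaf("execute program so it jumps to that code"),
)
print(achievable(overflow))
```

Because the pattern’s steps are ANDed, countering any single step defeats the whole attack; OR branches, by contrast, require countering every alternative, which is how the tree guides countermeasure selection.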
Other threat modeling approaches
• Process for Attack Simulation and Threat Analysis
(PASTA)
– Developed in 2012; a risk- and asset-based approach
• Operationally Critical Threat, Asset, and
Vulnerability Evaluation (OCTAVE) – from CMU
• Trike
– Focuses on security auditing from a cyber-risk
perspective
• Visual, Agile, and Simple Threat (VAST) modeling
63
CAPEC
• CAPEC = Common Attack Pattern Enumeration and Classification
• Attack categories (per CAPEC-1000: Mechanism of Attack)
– Data Leakage Attacks (e.g., Probing an Application Through Targeting its Error Reporting)
– Depletion
– Injection (e.g., SQL injection, command injection)
– Spoofing
– Time and State Attacks
– Abuse of Functionality
– Probabilistic Techniques
– Exploitation of Authentication
– Exploitation of Privilege/Trust
– Data Structure Attacks
– Resource Manipulation
– Physical Security Attacks
• Attack patterns:
– Network Reconnaissance
– Social Engineering Attacks
– Supply Chain Attacks
64
Source: http://capec.mitre.org/
CAPEC can help you
“think like an attacker”,
so you can partly
anticipate what
they will try to do
MITRE ATT&CK
• Adversarial Tactics, Techniques, and Common Knowledge
(ATT&CK)
– “is a curated knowledge base and model for cyber adversary
behavior”
– “reflecting the various phases of an adversary’s attack lifecycle
and the platforms they are known to target”
– Developed to describe attacks, but also covers defense
– Behavioral model focused on common cases
– Not exhaustive (CAPEC & CWE cover much more), but does help
focus on what’s common
• More info: https://www.mitre.org/publications/technical-papers/mitre-attack-design-and-philosophy
65
Self-protecting software systems
• Eric Yuan (former student!), Naeem Esfahani, &
Sam Malek (all at GMU) wrote “A systematic
survey of self-protecting software systems”
(2014)
– “Self-protection, like other self-* properties, allows
the system to adapt to the changing environment
through autonomic means without much human
intervention, and can thereby be responsive, agile,
and cost effective.”
– “Systematic literature review… classify and
characterize the state-of-the-art research”
– Next several slides based on [Yuan2014]
66
Self-protection: Basic model
67
Source: [Yuan2014], based on
FORMS formal language of [Weyns2012]
Self-protection: What & How
68
Source: [Yuan2014]
Self-protection: Quality
69
Source: [Yuan2014]
Conclusions
• There are various key design principles
– E.G., Saltzer and Schroeder
• Need to design program to counter attack, e.g.:
– Minimize privileges
– Counter TOCTOU issues
• Use attack/threat modeling to look for
potentially-successful attacks
– Before the attacker tries them
• Many design approaches for self-protection
• Consider principles & rules-of-thumb in design
70
Released under CC BY-SA 3.0
• This presentation is released under the Creative Commons Attribution-
ShareAlike 3.0 Unported (CC BY-SA 3.0) license
• You are free:
– to Share — to copy, distribute and transmit the work
– to Remix — to adapt the work
– to make commercial use of the work
• Under the following conditions:
– Attribution — You must attribute the work in the manner specified by the
author or licensor (but not in any way that suggests that they endorse you or
your use of the work)
– Share Alike — If you alter, transform, or build upon this work, you may
distribute the resulting work only under the same or similar license to this one
• These conditions can be waived by permission from the copyright holder
– dwheeler at dwheeler dot com
• Details at: http://creativecommons.org/licenses/by-sa/3.0/
• Attribute me as “David A. Wheeler”
71
  • 1. SWE 681 / ISA 681 Secure Software Design & Programming: Lecture 4: Design Dr. David A. Wheeler 2023-09-24
  • 2. Outline • Saltzer and Schroeder design principles • More in-depth on some design principles (a grab bag!), e.g.: – Minimize privileges – Counter TOCTOU issues • Detect, contain, respond – Including: Design in updates! • NIST SP 800-160 volume 1 list of design principles • Supply chain / reuse • Five Star Automotive Cyber Safety Program (“I am the Cavalry”) • Clouds & how they are implemented – Containers, virtual machines • Attack (Threat) modeling • Self-protecting software systems 2
  • 3. What’s design? • ISO/IEC 12207:2008: – “Software architectural design. • For each software item …the implementer shall transform the requirements for the software item into an architecture that describes its top-level structure and identifies the software components. • It shall be ensured that all the requirements for the software item are allocated to its software components and further refined to facilitate detailed design.” – “Software detailed design process is to provide a design for the software that implements and can be verified against the requirements…” • In short: – Determine components (e.g., classes, programming languages, frameworks, database systems, etc.) that you’ll use, and how you’ll use them (interconnections/APIs), to solve the problem 3
  • 4. Abstract view of a program 4 Program Process Data (Structured Program Internals) Input Output Call-out to other programs (also consider input & output issues) You are here
  • 5. Developing secure software is not new & unknown • Saltzer & Schroeder principles, & many specifics, publicly known since at least 1970s • Problem is that most software developers do not know how to do it – Attackers, of course, know this • Knowing various principles & rules-of-thumb ahead-of-time can avoid many problems later – This lecture is all about those various principles & rules-of-thumb – Principles & rules-of-thumb have trade-offs & exceptions, but knowing them is important 5
  • 6. Saltzer and Schroeder design principles (1) • Least privilege – Each user and program should operate using fewest privileges possible – Limits the damage from an accident, error, or attack – Reduces number of potential interactions among privileged programs • Unintentional, unwanted, or improper uses of privilege less likely – Extend to the internals of a program: only smallest portion of program which needs those privileges should have them • Economy of mechanism/Simplicity – Protection system's design should be simple and small as possible – “techniques such as line-by-line inspection of software and physical examination of hardware that implements protection mechanisms are necessary. For such techniques to be successful, a small and simple design is essential.‘” – Aka “KISS” principle (``keep it simple, stupid'') 6
  • 7. Saltzer and Schroeder design principles (2) • Open design – The protection mechanism must not depend on attacker ignorance – Instead, the mechanism should be public • Depend on secrecy of relatively few (and easily changeable) items like passwords or private keys – An open design makes extensive public scrutiny possible – Makes it possible for users to convince themselves system is adequate – Not realistic to maintain secrecy for distributed system • Decompilers/subverted hardware quickly expose implementation “secrets” • Even if you pretend that source code is necessary to find exploits (it isn't), source code has often been stolen and redistributed • This is one of the oldest and strongly supported principles, based on many years in cryptography – Kerckhoffs's Law: “A cryptosystem should be designed to be secure if everything is known about it except the key information” – Claude Shannon (inventor of information theory) restated Kerckhoff's Law as: “[Assume] the enemy knows the system” 7
  • 8. Saltzer and Schroeder design principles (3) • Complete mediation (“non-bypassable”) – Every access attempt must be checked; position the mechanism so it cannot be subverted. For example, in a client-server model, generally the server must do all access checking because users can build or modify their own clients • Fail-safe defaults (e.g., permission-based approach) – The default should be denial of service, and the protection scheme should then identify conditions under which access is permitted – More generally, installation should be secure by default • Separation of privilege – Ideally, access to objects should depend on more than one condition, so that defeating one protection system won't enable complete access 8
  • 9. Saltzer and Schroeder design principles (4) • Least common mechanism – Minimize the amount and use of shared mechanisms (e.g. use of the /tmp or /var/tmp directories) – Shared objects provide potentially dangerous channels for information flow and unintended interactions • Psychological acceptability / Easy to use – The human interface must be designed for ease of use so users will routinely and automatically use the protection mechanisms correctly – Mistakes will be reduced if the security mechanisms closely match the user's mental image of his or her protection goals 9
  • 10. Some additional important design principles (my view) • Limited attack surface – Limit how attackers can get to the software you trust • Input validation with whitelists – Limit what can enter the system through attack surface • Harden the system – Defects happen! Design system so a single defect’s less likely to result in complete compromise 10 These can all be viewed as special cases of minimizing privilege
  • 11. Design principles… • Any design must also consider other requirements & specific circumstances • Not all details apply to all systems – Web apps ≠ Local apps ≠ GUI ≠ setuid programs – But better to know the larger set of issues • We’ll now cover specific design issues – Several just add details on S&S principles – Something of a “grab bag” of things to consider 11
  • 12. Securing the interface • Interfaces should be – Minimal (simple as possible) – Narrow (provide only the functions needed) – Non-bypassable • Trust should be minimized • Consider limiting the data the user can see • We’ve already discussed filtering input data – trusted components must filter input data from untrusted sources • Code you trust must run in an environment you control, not on a system controlled by an attacker – Necessary for complete mediation/non-bypassability – Code run in a browser/client is bypassable if an attacker runs the browser/client (attacker can modify or replace) 12
  • 13. BEWARE: Warning signs of bad designs (leading to bypassability) • JavaScript running on the client that does input validation or other security-relevant operations – Ensure all security-relevant operations (e.g., input validation) is re-performed in a trusted environment (typically on the server) – In most situations client cannot be trusted; only server is trusted – Common mistake with big JavaScript frameworks & large mobile apps – Can be secure, but only if checks are re-performed in a trusted environment • Mobile app does security-relevant input validation (same client-side issue) • Database is directly accessible via the network for use by a client application (web browser, mobile app, etc.) – Can be secure, but must ensure that all operations the user can possibly perform are acceptable (e.g., control with SQL grant) – Often better (or necessary) to interpose access instead of providing direct access to database; direct database access may violate least privilege 13 If the attacker is running your code, the attacker can modify or replace your code – don’t let the attacker control your system!
  • 14. JavaScript framework example #1 Is this input validation approach acceptable? 14 Execution of JavaScript framework/library (e.g., React, Angular, Vue, JQuery, etc.) Database with API providing direct access to all data to logged-in users Execution of application- specific JavaScript code, including some security- relevant input validation checks Files with JavaScript code to be executed on the client (downloaded by user’s web browser on request) Runs in web browser Runs on trusted server NO! THIS DOES NOT PROVIDE ANY SECURITY! JavaScript sent to a web browser is executed client-side. This typically makes it trivial to bypass and thus irrelevant for security. This JavaScript may be useful to speed non-malicious responses, but it does not counter attack. Login mech- anism
  • 15. Obfuscated Code is not enough to secure the interface • Obfuscation: deliberate act of creating source or machine code that is (more) difficult for humans to understand • Code sent to client (e.g., JavaScript or Web Assembly) can be first sent through an obfuscator (result aka shrouded or scrambled) – Minification provides mild level of obfuscation – Tools focusing on scrambling (jscrambler, etc.) can do more obfuscation • Some naïve developers think no one can de-obfuscate or replace it • Basic problem: If it is obfuscated, it can be de-obfuscated • Obfuscation may slow naïve attacks; it does not prevent attackers from: – Learning how the code works – Finding data sent to the code (including any secret keys you send) – Changing or replacing the code • Obfuscation is inadequate for non-bypassability/security – Non-bypassability means the attacker can’t bypass it – Security-relevant code (such as input validation code) needs to run in a trusted environment, not as obfuscated code in an environment that the attacker could control 15
  • 16. Avoid this. Work to run code you’ll trust on on a system you control To run trusted code within an attacker-controlled system… • Don’t do it! Trying to do it is very risky, difficult, & best avoided • Some advanced client-side mechanisms try to do this – Digital Rights Management (DRM) systems, Trusted Platform Module (TPM), etc. try to enable execution inside client “protected” from owner of client – Theory: Can trust software running on environment controlled by attacker – In practice: typically broken quickly (hardware protections can be undone if you physically control the hardware) or bypassed (e.g., videos), even when implemented using specialized teams of experts (you must use such teams) – Ethical issue: Should you prevent users’ control of the hardware they own? If you succeeded & users can’t break it, dangerous to users & society – Potential liability issue: System owner doesn’t control the system – Often better to check on server and/or limit physical access to client • Research ongoing to run trusted code on untrusted servers – Homomorphic encryption: enables computation on encrypted data that remains encrypted, so can compute on server you don’t trust while maintaining confidentiality/integrity (not availability) – Fully homomorphic encryption is research – Partial homomorphic encryption (supporting only a subset of operations) currently practical only in very special cases (huge overhead) 16
  • 17. Minimizing privileges • Strive to provide “least privilege” – Program components should only have minimum necessary privileges – Limits damage if attacker breaks in – Don’t make a program setuid; make it a normal program & require admin login – Consider breaking into pieces with small trusted pieces • Unix-like systems – Primary privilege determiner is EUID/EGID – “Saved” SUID/SGID lets you temporarily disable/re-enable – “chroot” lets you limit filesystem visibility • This principle has many aspects 17
  • 18. Minimize privileges granted • Don’t grant root – Avoid creating setuid root programs • Option: Create special group – Grant file access to that group – Setgid group, or run daemon process as group • Option: Create special user – Web servers typically do this (“nobody”) – Better to create group than user – easier to admin group • Option: Limit database rights for application – Create pseudo-user(s)/groups with limited rights • E.g., application may only need select or insert – SQL GRANT … 18
  • 19. Minimize time privilege can be used • Permanently give up privilege ASAP – E.G., set EUID/EGID and SUID/SGID to RUID/RGID – Web servers do this • Need root privileges to attach to port 80 • Don’t need them afterwards, so permanently drops • Sometimes can’t do this 19
  • 20. Minimize time privilege is active • Temporarily give up privileges – Can do with setuid(2), seteuid(2), setgroups(2) • Less useful than giving up permanently – Some attacks can force program to re-assert – But some attacks can’t, so still helps 20
  • 21. Minimize modules granted privileges • Break into parts – Only some parts (preferably small) have privilege – Its API limits actions the rest can request • E.G., “Returns authenticated yes/no” & hides password info • Break up roles, e.g., “admin” vs “user” components – Give program for user interface far fewer privileges – Web app: Separate admin interface, different privileges • Traditional GUI: Don’t give privilege to GUI – GUI toolkits are big – Instead, GUI talks to separate program with privilege • Create special program that does one small thing 21
  • 22. Minimizing privileges: Limit accessible resources • Put all web program (e.g., CGI) data outside document tree – So users cannot request data by URL • Minimize resources available – Limit CPU resources, data space, etc. – If “goes haywire” can detect & stop sooner • Consider using mechanisms to limit visibility – E.G., chroot (next), containers, virtual machines 22
  • 23. Minimizing privileges: Mutually suspicious components • In “mutually suspicious components” the system is broken into components… • But the components strictly limit the trust granted to others • Limits damage if one component is subverted • Can be challenging to do in some systems 23
  • 24. Chroot & Containers • Unix-like systems (unlike Windows) have a “top” directory “/” (root directory) • “chroot” call can change what “/” refers to in a process (privileged call) – Attempts to “cd ..” through it don’t work – Called a “chroot jail” – Makes large portions of filesystem inaccessible • Must set up, call chroot, cd into chroot jail, close all files, and drop all privileges • Chroot cannot protect against root privileges • Basis for implementing containers (discussed later) 24
  • 25. Configure safely/ use safe defaults • Make initial installation secure – Don’t have a “default” password – force setting – Start with most restrictive policy until reconfigure – “Sample” configurations must be secure • People will use them as starting points – Installed files not writeable (or readable?) by others – Make installable by non-root (if practical) – During install, check assumptions are true • Make it easy to reconfigure while keeping secure – Make configuration easy in general – Deny access until specifically granted 25
  • 26. Load initialization values safely • If load initialization values, make sure they can’t be subverted – Ensure attacker can't change which initialization file is used, nor create or modify that file – If a traditional user application, don’t use current directory – use user’s home directory • If the program is setuid/setgid – Don’t read any file controlled by the user unless you carefully filter it as an untrusted (potentially hostile) input – Trusted configuration values should be loaded from somewhere else entirely (typically from a file in /etc) 26
  • 27. Minimize file privileges • Minimize who can read – Should ordinary users be able to read config file? – Probably not, especially if sensitive info like passwords might be there – Android: “other user” should not normally have r/w access (different application) • Especially minimize who can write • Consider checking before use – E.G., stop processing configuration file if arbitrary user can write the file or directory it’s in 27
  • 28. Fail safe • Failure will happen – Design so if (when) program fails, safest possible result occurs • Two possibilities: – Fail open: Try to fail in a way that allows system to function (choose if availability most important) – Fail closed: Restrict access on failure, even if it means loss of service (choose if confidentiality or integrity are more important) 28 “Open/closed” is “like a door” not “like an electrical circuit”
  • 29. Bad input • Don’t crash when bad input occurs – That sets up easy DoS • Malformed input should stop processing that request & prep for next request – Don’t try to “figure out what the user wanted” – if it’s security-related, just deny service • Don’t reply with too much information – Just “access denied” & log the rest 29
  • 30. Avoid race conditions • Race condition'' = “Anomalous behavior due to unexpected critical dependence on the relative timing of events'' [FOLDOC] – Generally involve one or more processes… – accessing a shared resource (e.g., file or variable) – where multiple access isn’t properly controlled • Problems because: – Sequencing (non-atomic) problems • Interference often from untrusted processes • Typically solve by creating locks – Deadlock/livelock/locking failure • interference from trusted process, often same program 30
  • 31. Sequencing/Non-atomic problems • Modern systems have many processors – Even if one processor, typically simulate many • Loading/saving shared resource must be controlled – Shared variable (memory) – File – … • If not controlled, may lead to vulnerability 31
  • 32. Non-atomic “Add 1” 32 Process 1 Shared variable x Process 2 Load x Load x 3 Increment Accumulator Increment Accumulator Store x Store x 3 3 4 4 4 4 Failure to lock means 3+2 = 4! time
  • 33. TOCTOU • Secure programs must: 1. Determine if a request should be granted, and 2. If ok, act on that request • Must not be possible for untrusted user to change anything relevant between #1 and #2 • If can, termed “time of check - time of use” (TOCTOU) race condition – Often best to just try to perform action (if atomic), and then look at error condition 33
  • 34. Atomic actions in filesystem • If filesystem shared, can cause problems – Don’t use “access(2)” to see if can do something • Perhaps things will change! – Just try to open file & check for errors • When creating a file, use open O_CREAT | O_EXCL – C11 specification added fopen “x” exclusive mode • Means fopen fails if the file already exists or cannot be created • Otherwise, the file is created with exclusive (also known as non- shared) access – Many other languages build on C’s “fopen” • So this addition simplifies exclusive access 34
  • 35. /tmp & /var/tmp directories • Typically shared among processes – Used for “temporary” files and for sharing data between processes • Be very careful creating files here!! • In any shared directory, repeatedly: – Create a ``random'' filename – Open it using O_CREAT | O_EXCL and very narrow permissions (atomically creates the file, else fails) – Stop repeating when the open succeeds • Otherwise, if attacker can create that file first, attacker owns & controls it 35
  • 36. Filesystem: Insecure & secure • By themselves, many library implementations for creating temporary files aren’t secure – tmpfile(3), mktemp(3), tmpnam(3) – Shell convention “$$” isn’t secure – predictable filenames! • Instead, wrap in larger (securing) routines – First, set creation file permissions to limit them – Create name tempnam(3), try open O_CREAT | O_EXCL – Repeat previous step until success • mktemp(1) is okay… unless temp cleaning! – Automated “cleanup” can enable a race condition – Safely created, temp cleaner erases, attacker creates 36
  • 37. Symbolic & hard links • Unix-like systems allow “links” – Symbolic link • Stores the name of another file • Any attempt to open this file redirected by name – Hard link • Creates another name for same file • System counts how many links to file exist • Attackers may create links to existing files – Trick privileged program into revealing/changing it 37
  • 38. Locking • Often program must have exclusive rights (“lock”) • Standard lock problems: – Deadlocks (“deadly embraces”) • Process 1 locks A and waits for B; process 2 locks B and waits for A – Livelocks – Releasing “stuck” locks if program doesn’t clean up • Many deadlocks/livelocks can be prevented simply: – Ensure all processes create locks in the same order 38
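The lock-ordering rule can be sketched with two POSIX mutexes (names are illustrative):

```c
/* Sketch: deadlock prevention by a global lock order.  Every code path
 * acquires lock_a before lock_b, so the circular wait from the slide
 * (P1 holds A waiting for B while P2 holds B waiting for A) cannot form. */
#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

int transfer(void) {
    pthread_mutex_lock(&lock_a);    /* always first */
    pthread_mutex_lock(&lock_b);    /* always second */
    /* ... operate on both shared resources ... */
    pthread_mutex_unlock(&lock_b);  /* release in reverse order */
    pthread_mutex_unlock(&lock_a);
    return 0;
}
```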
  • 39. Files as locks • Traditional lock on Unix-like systems: – Create a file to assert a lock – E.g., /var/run/NAME.pid – If the lock is stuck, can easily see (ls) & fix (rm) – Works fine if O_CREAT|O_EXCL file • And make sure that file permissions don’t let attacker in • Inside threads, use the threading system’s locks 39
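A sketch of the traditional PID lock file (real daemons use /var/run/NAME.pid; any path demonstrates the technique):

```c
/* Sketch: a Unix lock file created atomically with O_CREAT|O_EXCL and
 * holding the owner's PID, so a stuck lock is easy to inspect (ls) and
 * clear (rm). */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int acquire_lockfile(const char *path) {
    int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, S_IRUSR | S_IWUSR);
    if (fd < 0)
        return -1;                        /* lock already held (or error) */
    char buf[32];
    int n = snprintf(buf, sizeof buf, "%ld\n", (long)getpid());
    if (write(fd, buf, (size_t)n) < 0) {  /* record the holder's PID */
        close(fd);
        return -1;
    }
    close(fd);
    return 0;
}
```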
  • 40. Avoid sharing • Avoid sharing resources – No sharing means no race conditions – No sharing means attack on one might not affect the other • “Covert channels” are surreptitious ways to send data across data boundaries – Require some sort of sharing 40
  • 41. Trust only trustworthy channels • IP address typically forgeable • IP port under attacker control (usually) • Reverse DNS entry (“given this IP address, what is the name?”) usually under attacker control • Emails easily forgeable (unless digitally signed) – Emailing back & forth, with cryptographically random entries, may be okay for low-value info, but it’s not hard to defeat • Server (including web server) must assume that client (or middleman) is not trustworthy (client can do anything!) – Client controls HTML “hidden fields”, HTTP_REFERER, & cookies – These cannot be trusted unless special precautions are taken (e.g., digitally signed, or encrypted using server-only key) – Usually better off keeping data you care about at the server end in a client/server model 41
  • 42. Use internal consistency-checking code • Check that key internal calls and basic state assumptions are valid – Identify key invariants, and actually check them • Make sure that they stay running in the deployed system 42
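A sketch of an invariant check that stays enabled in deployed builds (CHECK_INVARIANT and withdraw are illustrative names; unlike assert(3), defining NDEBUG does not compile this away):

```c
/* Sketch: an always-on internal consistency check.  If a key invariant
 * fails, log it and fail fast rather than keep running corrupted. */
#include <stdio.h>
#include <stdlib.h>

#define CHECK_INVARIANT(cond, msg)                                \
    do {                                                          \
        if (!(cond)) {                                            \
            fprintf(stderr, "invariant violated: %s\n", (msg));   \
            abort();                                              \
        }                                                         \
    } while (0)

int withdraw(int balance, int amount) {
    int new_balance = balance - amount;
    CHECK_INVARIANT(new_balance >= 0, "balance must never go negative");
    return new_balance;
}
```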
  • 43. Self-limit resources • Shed/limit excessive loads • Consider setting limit values (e.g., setrlimit(2)) to limit resource use – At least, don’t record debug info like “core” files in deployed systems – May contain sensitive data 43
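A one-function sketch of the core-file advice using setrlimit(2):

```c
/* Sketch: set RLIMIT_CORE to 0 so a crash cannot write a "core" file,
 * which might otherwise dump sensitive memory contents to disk. */
#include <sys/resource.h>

int disable_core_dumps(void) {
    struct rlimit rl = { 0, 0 };          /* soft and hard limit = 0 */
    return setrlimit(RLIMIT_CORE, &rl);   /* 0 on success, -1 on error */
}
```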
  • 44. Secure install/configuration • No “default passwords” – force their setting • Ensure executables can’t be changed during normal operation – Install executables (e.g., web apps) so they are owned by root/a special install user, & not writable by others – Ensure programs are run by other (less-privileged) users – Even if that process is broken into, attacker can’t easily subvert the executable – no permission! – This means installation/update requires special permission, often true anyway • Sample files for admins should be secure 44
  • 45. Secure install/configuration (2) • Ensure that configuration data is from secured locations – /etc? $HOME? Environment variables? – Configuration information should be in directories that can’t be easily accessed (at least write, maybe read). Often /etc • Implement “default deny” 45
  • 46. Error reports • Limit error information sent back to user – Information may help attacker – Do log problems, in ways not available to potential adversaries • E.G., login failure – Just tell them “authorization failed” – not “no such user” or “password incorrect” or (worse) “need longer password” 46
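A sketch of the login-failure advice: detail goes to a log only operators can read (syslog here), while the user always sees the same generic message (login_error_message is an illustrative name):

```c
/* Sketch: log the specific failure cause privately, but return one
 * undifferentiated message -- never "no such user" or "password
 * incorrect", which would help an attacker enumerate accounts. */
#include <string.h>
#include <syslog.h>

const char *login_error_message(int reason_code, const char *user) {
    syslog(LOG_AUTH | LOG_WARNING,
           "login failure for %s (reason %d)", user, reason_code);
    return "authorization failed";
}
```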
  • 47. Protect credentials • Never hardcode credentials (e.g., passwords) into code – put credentials in separate place (CWE-259) • Don’t check live credentials into a version control system – they stick around • Encrypt credentials if you send them over network – never send them in the clear • Attackers hunt for credentials! – NSA “hunts sysadmins” when attacking networks, in particular, they hunt for the “credentials of network administrators and others with high levels of network access and privileges that can open the kingdom to intruders…. [including looking for] hardcoded passwords in software or passwords that are transmitted in the clear” [Zetter2016] 47 “NSA Hacker Chief Explains How to Keep Him Out of Your System" by Kim Zetter, 2016-01-28, Wired, a summary of a talk by Rob Joyce, chief of the NSA's Tailored Access Operations (TAO), http://www.wired.com/2016/01/nsa-hacker-chief-explains-how-to-keep-him-out-of-your-system/
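One common way to keep a credential out of the source (CWE-259) is to read it from the environment at startup, sketched below; MYAPP_DB_PASSWORD is a hypothetical variable name, and other separate places (a permission-restricted config file, a secrets manager) work similarly:

```c
/* Sketch: no hardcoded credential -- fetch it from the environment at
 * startup and fail closed if it is missing, rather than falling back
 * to an insecure default. */
#include <stdio.h>
#include <stdlib.h>

const char *get_db_password(void) {
    const char *pw = getenv("MYAPP_DB_PASSWORD");
    if (pw == NULL || pw[0] == '\0') {
        fprintf(stderr, "MYAPP_DB_PASSWORD not set; refusing to start\n");
        return NULL;
    }
    return pw;
}
```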
  • 48. Defense-in-depth • Try to make attacker break multiple countermeasures before attack succeeds, e.g.: – Buffer overflow countermeasure – Limited privilege • No guarantee against attack, but can help – If breaking all is necessary and each measure adds significant effort 48
  • 49. Protect, Detect, Respond • Protect: Preventing break-in ahead-of-time is best – Defense-in-depth: Attacker must break multiple levels • But prevention can’t always succeed – Persistent attacker will break through layers – Defender has to “defend everywhere” while attacker may only require one mistake/subversion • NIST Cybersecurity Framework has 5 functions – Identify, Protect, Detect, Respond, Recover – Groupable: (Identify &) Protect, Detect, (Respond &) Recover • Also support “detect, contain, respond” – Protect: Limit privileges, sandboxes, etc. – Detect: Logging / log analysis, tripwires, etc. – Response: Record info as justifiable evidence 49
  • 50. Design should include an update system to counter vulnerabilities • Vulnerabilities likely to be found later • Attackers often succeed using known vulnerabilities on unpatched systems – “NSA and other APT attackers don’t rely on zero-day exploits extensively... they don’t have to... so many more vectors [are] easier... [including] known vulnerabilities for which a patch is available but the owner hasn’t installed it.” [Zetter2016] • Users often don’t update if they have to initiate – 92% of the 115K Cisco devices currently connected to the Internet are running out-of-date vulnerable software (31% well past support) [Cisco 2016 Annual Security Report] • Plan and design for rapid automatic updates vs. vulnerabilities – Ensure supply chain (e.g., OEMs, hardware makers, carriers) enable (Android!) – Consider making updating secure & automatic, with a way for users to opt out if they wish (& a way for them to control rollout separately) – Beware: Update system can create vulnerabilities; ensure update has a valid digital signature before automatically accepting it – Safeguard update signing keys; don’t put them on an Internet-connected machine 50 “NSA Hacker Chief Explains How to Keep Him Out of Your System" by Kim Zetter, 2016-01-28, Wired, a summary of a talk by Rob Joyce, chief of the NSA's Tailored Access Operations (TAO), http://www.wired.com/2016/01/nsa-hacker-chief-explains-how-to-keep-him-out-of-your-system/
  • 51. NIST SP 800-160 volume 1 Appendix F (design principles) 51
  • 52. Supply chain / reuse • Supply chain = anything from elsewhere & how distributed – Which components you’ll use, & how, is a design decision – Important decision: What will you reuse? How get & verify? – Includes operating system, programming language implementation(s), database systems, any other third-party components • Reused software may have unintentional or intentional vulnerabilities – manage that risk! • Evaluate risk of components before reusing them – Learn more about component before reusing it – Consider product, process (development & delivery), people – What are others saying? What information is available? – May want to evaluate it yourself (to what depth?) – Non-OSS often more difficult due to less information – Many ways to manage risk, once you realize it’s an issue • Establish process to quickly learn of component vulnerabilities & update – Aids: Package manager, good test suite, conform to its API 52
  • 53. Programming language selection • Selecting programming language(s) is a key architectural design choice • Depends on existing software, knowledge, domain, etc. • Where practicable, choose a programming language that reduces the likelihood of security vulnerabilities – Includes countermeasures for common domain problems • Difficult to create secure software in C/C++ – Default operations are dangerous & provides no built-in mechanism to counter buffer overflows, double-free, etc. – Syntax easily leads to errors (e.g., == vs. =) – It is possible to write secure software in C/C++, but must be prepared to spend the time & effort to do it 53
  • 54. Five Star Automotive Cyber Safety Program from “I am the Cavalry” 1. Safety by Design – Published attestation of Secure Software Development Lifecycle, summarizing your design, development, and adversarial resilience testing programs for products and supply chain – Supply Chain Rigor; Reduce Attack Surface/Complexity; Independent, Adversarial Testing 2. Third Party Collaboration – Published coordinated disclosure policy inviting 3rd-party researchers acting in good faith 3. Evidence Capture – Vehicle systems provide tamper evident, forensically-sound logging and evidence capture to facilitate safety investigations 4. Security Updates – Vehicles can be securely updated in a prompt and agile manner 5. Segmentation and Isolation – Have a published attestation of the physical and logical isolation measures to separate critical systems from non-critical systems – “If systems share the same memory, computing, and/or circuitry (as most current generation cars do), these systems allow for loss of life and limb. Such risks are entirely avoidable… Hacking the InfoTainment system should never cause an accident.” – “Air Gaps: Physical separation is the only way to ensure that non-critical systems can not adversely impact primary, operational, and safety systems…” – “System Integrity/Recovery: …Earlier detection can reduce the total duration and extent of the compromise as well as catalyze remediation…” 54 Source: https://www.iamthecavalry.org/domains/automotive/5star/
  • 55. Cloud computing • Clouds widely used, often misunderstood – Clouds often cheaper (where appropriate), many variations – Decisions to use cloud (and how) impact security • NIST Definition of Cloud Computing (NIST SP 800-145): – Cloud computing is “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” – Five essential characteristics: On-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service • Virtualization is common, but not required, to be a cloud 55
  • 56. Cloud service models • Infrastructure as a Service (IaaS): “consumer [can] deploy and run arbitrary software [including] operating systems and applications....” Examples: Amazon Web Services (AWS), Windows Azure, Google Compute Engine, and Rackspace Open Cloud. OpenStack is OSS • Platform as a Service (PaaS): “consumer [can deploy] consumer-created or acquired applications...” Examples: Google App Engine, Red Hat OpenShift, Heroku, and Windows Azure Cloud Services. Amazon Web Services (AWS) also supports PaaS. OpenShift Origin is OSS • Software as a Service (SaaS): “consumer [can] use the provider’s applications running on a cloud infrastructure...”. Examples: SalesForce, Google docs 56 Source: NIST Definition of Cloud Computing (NIST SP 800-145)
  • 57. Different cloud approaches have different security ramifications • Many different isolation mechanisms can implement clouds, e.g.: – Physically separate (“bare metal clouds”) – Virtual machines (VMs) (hardware virtualization) – hardware & VMM shared. When attacker breaks VMM or hardware, controls everything – Containerization (“OS-level virtualization”) – OS kernel also shared. Creates processes with separate namespace (files, net, etc.). However, when attacker breaks OS kernel (bigger API than VMM) or hardware, controls everything – Multi-user accounts – process tree & filesystem also shared. When attacker breaks privileged program (bigger API), OS, or hardware, controls everything • Who is cloud shared with? – Public, limited community, private – Increased sharing tends to lower cost, but also eases attacker access for attack • Who controls your cloud? Can you trust them? What’s the exit cost? • Always do risk management – Sometimes it’s better to go with lower risk approach (of cloud, or non-cloud) – Sometimes it’s better to mitigate or accept risk 57 For more information, see “Cloud Security: Virtualization, Containers, and Related Issues” by David A. Wheeler, http://www.dwheeler.com/essays/cloud-security-virtualization-containers.html
  • 58. Cloud use is not automatically secure – who’s your supplier? 58 Dave Blazek, SHI Cloud CLOUDVILLE Cartoon. Licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License. http://readwrite.com/2012/01/15/cartoon-ways-to-improve-cloud Based on a work at blog.shicloud.com.
  • 59. Attack (Threat) modeling • When you have an initial design, think like an attacker before implementing it (cheaper!) • Sometimes called “threat modeling” (some object to term as misleading) • Varying approaches, often include: – Define (security) requirements – Define architecture/design model (need not be detailed) – Analyze to identify attacks/security threats – Determine risk level, select countermeasures • Including design changes, additional components, etc. – Keep updating 59
  • 60. Analysis approaches • Attacker-centric – Starts with an attacker – Evaluates their goals & how they might achieve them (e.g., via entry points) – Used in CERT attack modeling approach • Design-centric – Starts with the design of the system: steps through system model, looking for types of attacks against each element – Used in threat modeling in Microsoft’s Security Development Lifecycle • Asset-centric – Starts from key assets entrusted to a system 60
  • 61. Microsoft STRIDE Approach • Decompose system into relevant (design) components – Design-centric approach – Uses simple data flow diagrams, identify trust boundary • Analyze each component for susceptibility to threats • Mitigate the threats 61 http://msdn.microsoft.com/en-us/magazine/cc163519.aspx “STRIDE” threats & the security property each violates: Spoofing – Authentication; Tampering – Integrity; Repudiation – Non-repudiation; Information disclosure – Confidentiality; Denial of service – Availability; Elevation of privilege – Authorization
  • 62. CERT Attack Modeling • Create simple design overview • Develop “attack trees” – top is attacker’s objective, decompose down into conditions that achieve that (with AND or OR) • Devise methods to counter attack (build on attack patterns) • E.g., Buffer Overflow Attack Pattern: – Goal: Exploit buffer overflow vulnerability to perform malicious function on target system – Precondition: Attacker can execute certain programs on target system – Attack: AND 1. Identify executable program on target system susceptible to buffer overflow vulnerability 2. Identify code that will perform malicious function when it executes with program’s privilege 3. Construct input value that will force code to be in program’s address space 4. Execute program in a way that makes it jump to address at which code resides – Postcondition: Target system performs malicious function 62 Source: http://www.cert.org/archive/pdf/01tn001.pdf
  • 63. Other threat modeling approaches • Process for Attack Simulation and Threat Analysis (PASTA) – Developed in 2012, risk/asset based approach • Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) – from CMU • Trike – focuses on security auditing from cyber risk perspective • Visual, Agile, and Simple Threat (VAST) modeling 63
  • 64. CAPEC • CAPEC = Common Attack Pattern Enumeration and Classification • Attack categories (per CAPEC-1000: Mechanism of Attack) – Data Leakage Attacks (e.g., Probing an Application Through Targeting its Error Reporting) – Depletion – Injection (e.g., SQL injection, command injection) – Spoofing – Time and State Attacks – Abuse of Functionality – Probabilistic Techniques – Exploitation of Authentication – Exploitation of Privilege/Trust – Data Structure Attacks – Resource Manipulation – Physical Security Attacks • Attack patterns: – Network Reconnaissance – Social Engineering Attacks – Supply Chain Attacks 64 Source: http://capec.mitre.org/ CAPEC can help you “think like an attacker”, so you can partly anticipate what they will try to do
  • 65. MITRE ATT&CK • Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) – “is a curated knowledge base and model for cyber adversary behavior” – “reflecting the various phases of an adversary’s attack lifecycle and the platforms they are known to target” – Developed to describe attacks, but also covers defense – Behavioral model focused on common cases – Not exhaustive (CAPEC & CWE cover much more), but does help focus on what’s common • More info: https://www.mitre.org/publications/technical- papers/mitre-attack-design-and-philosophy 65
  • 66. Self-protecting software systems • Eric Yuan (former student!), Naeem Esfahani, & Sam Malek (all at GMU) wrote “A systematic survey of self-protecting software systems” (2014) – “Self-protection, like other self-* properties, allows the system to adapt to the changing environment through autonomic means without much human intervention, and can thereby be responsive, agile, and cost effective.” – “Systematic literature review… classify and characterize the state-of-the-art research” – Next several slides based on [Yuan2014] 66
  • 67. Self-protection: Basic model 67 Source: [Yuan2014], based on FORMS formal language of [Weyns2012]
  • 68. Self-protection: What & How 68 Source: [Yuan2014]
  • 70. Conclusions • There are various key design principles – E.g., Saltzer and Schroeder • Need to design program to counter attack, e.g.: – Minimize privileges – Counter TOCTOU issues • Use attack/threat modeling to look for potentially-successful attacks – Before the attacker tries them • Many design approaches for self-protection • Consider principles & rules-of-thumb in design 70
  • 71. Released under CC BY-SA 3.0 • This presentation is released under the Creative Commons Attribution- ShareAlike 3.0 Unported (CC BY-SA 3.0) license • You are free: – to Share — to copy, distribute and transmit the work – to Remix — to adapt the work – to make commercial use of the work • Under the following conditions: – Attribution — You must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work) – Share Alike — If you alter, transform, or build upon this work, you may distribute the resulting work only under the same or similar license to this one • These conditions can be waived by permission from the copyright holder – dwheeler at dwheeler dot com • Details at: http://creativecommons.org/licenses/by-sa/3.0/ • Attribute me as “David A. Wheeler” 71

Editor's Notes

  • #16: Terminator head PNG from http://pngimg.com/download/29765 Fair use asserted. CC-BY-NC 4.0 asserted on image
  • #50: https://redmondmag.com/articles/2016/01/26/enterprise-security-confidence-declining.aspx
  • #54: https://www.iamthecavalry.org/domains/automotive/5star/
  • #61: See also: http://blogs.msdn.com/b/ptorr/archive/2005/02/22/guerillathreatmodelling.aspx Book “Threat Modeling” by Frank Swiderski and Window Snyder
  • #62: “Attack Modeling for Information Security and Survivability” Andrew P. Moore, Robert J. Ellison, Richard C. Linger March 2001 http://www.cert.org/archive/pdf/01tn001.pdf
  • #66: [Yuan2014] Yuan, Eric, Naeem Esfahani, & Sam Malek. “A systematic survey of self-protecting software systems”. ACM Transactions on Autonomous and Adaptive Systems (TAAS), Volume 8 Issue 4, January 2014, Article No. 17.