Infrastructure

Information technology infrastructure is the foundational structure that supports an organization's operations. It includes hardware, software, and networking components, and it provides the backbone that allows an organization to function and deliver services to its customers. In a traditional design, these would be physical components (such as cabling, routers, switches, servers, and racks) acquired, provisioned, and maintained directly by the organization.

In this section, you will learn about three evolved infrastructure concepts:

  • Virtualization
  • Containerization
  • Serverless computing

Virtualization has improved the utilization of physical machine resources. Containerization provides a portable way to package applications together with their dependencies. Meanwhile, serverless computing allows developers to focus solely on code. Together, these innovations redefine how applications are designed, developed, and deployed today. As you read this section, make sure you understand the advantages and disadvantages of each concept, as well as the security concerns to be aware of when using each of them.

Virtualization

Virtualization uses software to allow a single physical machine to run multiple independent machines on the same hardware. Each machine run by this software is called a virtual machine (VM). A VM is isolated and self-contained, allowing it to run an OS that can differ from the physical machine's. An example would be running a Linux VM on a Windows physical machine.

This concept allows more efficient use of hardware because, typically, much of an individual machine's hardware, such as memory, CPU, and disk space, goes unused. In standard physical computing, a server is often dedicated to a single application or group of applications. To account for potential periods of higher usage, memory and CPU are provisioned that may sit idle much of the time. For example, an application may need 8 GB of memory at its peak but only peak for an hour each day. For the rest of the day, it uses only 2 GB, leaving 6 GB of memory idle for 23 hours a day. Virtualization also expands the software capabilities of physical hardware: a single physical machine can run multiple VMs, each with a different OS and software configuration.

Many organizations use virtualization in their server infrastructure. They run large physical machines with many CPUs and large amounts of memory and disk space. Numerous VMs, typically grouped into clusters, are then created on each physical machine. These VMs and their settings are centrally managed by a hypervisor, which allows system resources to be allocated dynamically based on need. Returning to the earlier example of wasted resources, an additional VM could be set up to use 4 GB of memory for the 23 hours a day that the first application sits near idle, creating more efficient overall usage of the physical machine.
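As a rough, back-of-the-envelope illustration of this example, the short Python sketch below compares the average memory utilization of the 8 GB host with and without a second VM sharing it. The figures are the hypothetical ones from this section, not measurements:

```python
# Hypothetical workload from the example above: the application peaks at 8 GB
# for 1 hour per day and idles at 2 GB for the remaining 23 hours.
PEAK_GB, PEAK_HOURS = 8, 1
IDLE_GB, IDLE_HOURS = 2, 23
PROVISIONED_GB = 8  # memory reserved on the dedicated physical server

# Average memory the first application actually uses across the day.
avg_used = (PEAK_GB * PEAK_HOURS + IDLE_GB * IDLE_HOURS) / 24

dedicated_utilization = avg_used / PROVISIONED_GB

# With virtualization, a second VM uses 4 GB during the 23 near-idle hours.
second_vm_avg = (4 * IDLE_HOURS) / 24
shared_utilization = (avg_used + second_vm_avg) / PROVISIONED_GB

print(f"Average use: {avg_used:.2f} GB of {PROVISIONED_GB} GB provisioned")
print(f"Dedicated server utilization: {dedicated_utilization:.0%}")
print(f"With a second 4 GB VM on the same host: {shared_utilization:.0%}")
```

Utilization of the provisioned memory rises from roughly 28% to roughly 76% without adding hardware, which is exactly the efficiency gain virtualization is after.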

Figure 1.2 depicts a simple architecture for a VM.

Figure 1.2: VM architecture

It starts at the base layer with physical hardware running a host OS of Windows Server 2019. The next layer represents the hypervisor to control and create the VMs. There are two VMs depicted, each running its own guest OS – one running Linux and the other running Windows 11. Any number of applications can then be installed and run. Depending on the resource needs of each VM and application, and the hardware available on the physical server, multiple VMs can be created and run alongside each other. Each VM can have a different OS or version of an OS to best meet the business needs.
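The hypervisor layer shown in Figure 1.2 is normally driven through a management API rather than by hand. As a minimal sketch, assuming a QEMU/KVM host with the libvirt-python bindings installed and access to the qemu:///system URI (both assumptions, not requirements from this book), the following lists the VMs a hypervisor manages along with their allocated memory and vCPUs:

```python
# Minimal sketch: connect to a local QEMU/KVM hypervisor through libvirt and
# list the VMs it manages. Assumes the libvirt-python package is installed
# and that the account has access to the qemu:///system URI.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains():
        # info() returns (state, max memory KiB, memory KiB, vCPUs, CPU time ns)
        _state, _max_mem_kib, mem_kib, vcpus, _cpu_time = dom.info()
        status = "running" if dom.isActive() else "stopped"
        print(f"{dom.name():<24} {status:<8} "
              f"{mem_kib / 1024**2:.1f} GiB RAM, {vcpus} vCPUs")
finally:
    conn.close()
```

Centralized management tooling typically builds on an API like this when it allocates and rebalances memory and CPU between VMs as demand changes.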

Organizations can also use a virtual desktop infrastructure (VDI) setup. In this model, desktop environments are hosted and maintained centrally and streamed to individual endpoint machines. The desktop environment runs virtually in either a dynamic or a persistent manner. With the dynamic method, VMs are created and destroyed as users connect to and disconnect from a desktop environment. With the persistent method, machines are pre-created and sit idle waiting for use. The persistent method is less efficient than the dynamic one but still more efficient than standard physical computing, since it still allows resources to be allocated to the VMs dynamically based on real-time needs.

Virtualized machines and environments can be complex to design, build, manage, and secure. Organizations must manage patching at multiple layers, and each layer can adversely affect the others. For example, if a Windows Server host is running a Linux guest VM and the host is patched, the patch could cause the Linux VM to stop functioning or to run more slowly. This means patching requires more planning and testing. It is critical to secure the hypervisor because everything depends on it: if the hypervisor is compromised, it can give attackers an avenue to impact or compromise every VM running under its management. Isolation and segmentation between VMs and clusters help reduce the attack surface and prevent wider impact from issues and attacks such as VM escape vulnerabilities.

Containerization

Containerization is a form of virtualization that creates an isolated unit called a container. This is a standardized unit that contains software along with everything it needs to function and run, such as code, libraries, and dependencies. In standard computing, these requirements may come from other installed software or components; a container essentially brings its own environment along with the software. Common examples of container technologies are Docker (a container platform) and Kubernetes (a container orchestrator).

Using containers provides several benefits. They are portable and isolated, allowing them to behave consistently wherever they run. Their design is often lightweight, using fewer resources than VMs. They are created from images that are generally immutable, read-only copies, which increases security.

They also fit neatly into the microservices architecture concept by facilitating breaking an application into smaller, manageable services, each in its own container. These smaller units allow enhanced agility, scalability, and ease of management. They often have quick development and deployment timelines.

However, containers can have compatibility issues, with some OSs requiring additional configuration. Networking and storage configurations can be complex, especially as containers scale. Containers are typically stateless, much like serverless functions, so they are a poor fit for applications that require state management unless that state is kept externally.

Because containers run on a shared OS, it is important to apply the principle of least privilege, granting only what is necessary for the container to run. Containers share the host's kernel, so any vulnerability in how they run can pose a risk to the entire system, including the host OS. The images containers are built from must be reviewed for security; otherwise, any security issues, such as misconfigurations, propagate into every container created from them. It is also important to secure the hosts that containers are deployed on, or the containers may be affected by the hosts' security issues. Network segmentation and traffic flow control should be used to protect containers from each other and to allow communication only when necessary. This helps reduce the impact of vulnerabilities such as container escapes.
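To make the least-privilege guidance concrete, here is a sketch using the Docker SDK for Python (the docker package); the image name, user ID, and resource limits are illustrative assumptions, and the same settings exist as flags on the docker run CLI:

```python
# Sketch: run a container with least privilege using the Docker SDK for
# Python. Image, user ID, and limits below are illustrative assumptions.
import docker

client = docker.from_env()

container = client.containers.run(
    "alpine:3.19",                        # pin and scan images before use
    command=["sh", "-c", "echo hello from a locked-down container"],
    user="1000:1000",                     # run as a non-root user
    read_only=True,                       # immutable root filesystem
    cap_drop=["ALL"],                     # drop all Linux capabilities
    security_opt=["no-new-privileges"],   # block privilege escalation
    network_mode="none",                  # no network unless it is needed
    mem_limit="128m",                     # bound resources to limit blast radius
    pids_limit=64,
    detach=True,
)
container.wait()                          # let the short-lived command finish
print(container.logs().decode())
container.remove(force=True)
```

Each setting narrows what a compromised process inside the container could do, supporting the isolation and escape-mitigation points above.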

Serverless Computing

Serverless computing leverages the dynamic nature of the cloud to create functions without the organization having to perform infrastructure management. When an organization owns physical hardware, it must handle all management tasks for that hardware, such as provisioning, maintenance, scaling, and security; even in some cloud setups, these responsibilities remain with the organization. With serverless computing, the cloud provider is responsible for the need-based, dynamic allocation and provisioning of servers. These needs can be statically defined by the organization or driven by dynamic application demand. All required management of these servers, including security, is also handled by the cloud provider. This removes the organization's responsibility for designing, building, and managing physical or virtual infrastructure. High availability also becomes easier to design and achieve with the cloud provider managing the infrastructure.

Function as a service (FaaS) is a common implementation of serverless computing in which developers create discrete functions. These custom-designed functions are often event-driven, executing on demand. Any number of events can act as triggers, such as HTTP requests, file uploads, and timers. Functions are also stateless, retaining no information about previous invocations. This event-driven, stateless nature allows functions to auto-scale as needed. These functions can underpin the operation of an application or service offering. Example FaaS offerings from cloud providers include AWS Lambda, Google Cloud Functions, and Azure Functions. Consider an online photo-sharing service that allows user uploads, which are then displayed in different formats, such as thumbnail, medium, and full size. A developer can create a function to work with the uploaded images: when a picture is uploaded and stored, an event is triggered that pulls the original from storage, creates multiple resized versions, and places them back in storage for users.
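As an illustrative sketch of that photo-sharing workflow (not code from this book), the following shows what an AWS Lambda-style Python handler might look like when triggered by an S3 upload event. The bucket usage, output key scheme, and sizes are assumptions:

```python
# Hypothetical FaaS handler for the photo-sharing example: triggered when an
# image is uploaded to object storage, it writes resized copies back.
import io

import boto3
from PIL import Image  # Pillow must be packaged with the function

s3 = boto3.client("s3")
SIZES = {"thumbnail": (128, 128), "medium": (800, 800)}


def handler(event, context):
    # Standard S3 event shape: one record per uploaded object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        for label, size in SIZES.items():
            img = Image.open(io.BytesIO(original)).convert("RGB")
            img.thumbnail(size)  # resize in place, preserving aspect ratio
            buf = io.BytesIO()
            img.save(buf, format="JPEG")
            buf.seek(0)
            # Assumed output convention: resized/<label>/<original key>
            s3.put_object(Bucket=bucket, Key=f"resized/{label}/{key}", Body=buf)

    return {"status": "ok", "records": len(event["Records"])}
```

In practice, the upload trigger would be scoped to a prefix (for example, original uploads only) so that the resized objects written back to storage do not re-invoke the function.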

Secure design and coding for serverless functions are just as important as for standard applications. They share many of the same security considerations, such as authentication and authorization, data security and privacy, deployment, and communication. Another common attack vector is denial of service (DoS) and resource exhaustion, which abuses the event-driven nature of FaaS by triggering events in large volumes and overwhelming workloads. A significant security trade-off is the lack of visibility into the underlying infrastructure and how it is secured and managed, which makes a FaaS user dependent on the cloud provider's security.
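One common mitigation for the resource-exhaustion concern is to cap how far a function is allowed to scale. As a hedged sketch using boto3 (the function name and limit are assumptions), reserved concurrency on an AWS Lambda function puts a hard ceiling on parallel executions:

```python
# Sketch: cap a Lambda function's concurrency so a flood of trigger events
# cannot scale it without bound. Function name and limit are assumptions.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.put_function_concurrency(
    FunctionName="resize-uploaded-images",   # hypothetical function name
    ReservedConcurrentExecutions=25,         # hard ceiling on parallel runs
)

# Confirm the setting took effect.
resp = lambda_client.get_function_concurrency(
    FunctionName="resize-uploaded-images"
)
print(resp.get("ReservedConcurrentExecutions"))
```

Invocations above the cap are throttled (rejected or retried, depending on the invocation type) rather than allowed to consume unbounded resources, which also bounds the associated cost.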

Serverless computing provides several benefits to organizations. It uses a pay-per-use model for cost efficiency. Functions only run when needed, using only the resources necessary during the invocation. There is no charge for idle resources when functions are not running. Resources are elastic and auto-scaled, increasing or decreasing based on demand and maintaining a consistent level of performance. It also provides high availability and fault tolerance through the dynamic management of resources, ensuring that applications remain up and running. Finally, there can be rapid development and creation of applications by allowing attention to be focused on development, without the organizational responsibility of infrastructure management.

However, serverless computing is not always the best solution. Invocations often incur a small start-up delay (cold start latency), which can be a disadvantage for applications that require real-time processing, such as video conferencing. Function execution can also be constrained by a maximum runtime, affecting long-running processes; this can be a disadvantage for workloads such as extract, transform, and load (ETL) jobs that move large volumes of data in and out of databases. Because pricing is based on usage, resource-efficient, low-volume workloads achieve the best cost efficiency; that advantage can be lost with heavy workloads that run frequently, such as machine learning and big data analytics.
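The cost trade-off can be made concrete with a back-of-the-envelope model. The sketch below uses placeholder per-GB-second and per-request rates (illustrative assumptions, not any provider's published pricing, and free tiers are ignored) to compare a light, bursty workload with a heavy, sustained one:

```python
# Back-of-the-envelope FaaS cost model. The rates are placeholders, not a
# provider's published pricing, and free tiers are ignored.
RATE_PER_GB_SECOND = 0.0000167        # hypothetical compute rate
RATE_PER_MILLION_REQUESTS = 0.20      # hypothetical request rate


def monthly_cost(invocations, avg_duration_s, memory_gb):
    """Rough monthly cost for a single function."""
    compute = invocations * avg_duration_s * memory_gb * RATE_PER_GB_SECOND
    requests = invocations / 1_000_000 * RATE_PER_MILLION_REQUESTS
    return compute + requests


# Light, bursty workload: 100,000 short invocations per month.
print(f"Bursty API backend:  ${monthly_cost(100_000, 0.2, 0.128):,.2f}/month")

# Heavy, sustained workload: 50 million longer, memory-hungry invocations.
print(f"Analytics pipeline: ${monthly_cost(50_000_000, 2.0, 1.0):,.2f}/month")
```

The bursty workload costs pennies because idle time is free, while the heavy, constantly running workload quickly reaches the point where dedicated or containerized infrastructure may be the cheaper choice.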

Table 1.1 provides a summary of items discussed in this Infrastructure section. It covers virtualization, containerization, and serverless computing. Advantages, disadvantages, security concerns, and other topics are compared across all three subjects.

Definition
  • Virtualization: running multiple VMs on a single physical server
  • Containerization: running multiple isolated containers on a single OS instance
  • Serverless computing: running functions or services without managing the underlying infrastructure

Advantages
  • Virtualization: better utilization of hardware resources; isolation between VMs; flexibility to run different OSs
  • Containerization: lightweight and faster startup; efficient use of resources; consistent environments
  • Serverless computing: no infrastructure management; auto-scaling; cost-effective for variable workloads

Disadvantages
  • Virtualization: higher overhead due to running separate OS instances; slower startup times
  • Containerization: less isolation compared to VMs; dependency on the host OS
  • Serverless computing: limited control over the environment; vendor lock-in risks; cold start latency

Security Concerns
  • Virtualization: VM escape vulnerabilities; hypervisor attacks; complex patch management
  • Containerization: container escape vulnerabilities; shared kernel risks; insecure container images
  • Serverless computing: dependency on the provider's security; lack of visibility into infrastructure; function-level security risks

Use Cases
  • Virtualization: running legacy applications; multi-tenant environments; development and testing
  • Containerization: microservices architectures; continuous integration/continuous deployment (CI/CD); lightweight applications
  • Serverless computing: event-driven applications; short-lived tasks; dynamic scaling requirements

Resource Efficiency
  • Virtualization: moderate efficiency due to full OS instances
  • Containerization: high efficiency due to sharing the OS kernel
  • Serverless computing: very high efficiency, paying only for execution time

Isolation Level
  • Virtualization: strong isolation between VMs
  • Containerization: moderate isolation; containers share the same OS kernel
  • Serverless computing: limited isolation; depends on the provider's multi-tenancy model

Startup Time
  • Virtualization: slow (minutes)
  • Containerization: fast (seconds)
  • Serverless computing: very fast (milliseconds)

Management Overhead
  • Virtualization: high; requires managing VMs and OS updates
  • Containerization: moderate; requires managing containers and dependencies
  • Serverless computing: low; the provider handles infrastructure management

Table 1.1: Comparison of virtualization, containerization, and serverless computing

Now that you have reviewed infrastructure design choices, you will learn about the systems that operate within those designs. You will explore OS concepts and security considerations, including hardware architecture, filesystem structure, configuration files, system processes, and secure system hardening.
