CompTIA CySA+ (CS0-003) Certification Guide

IAM, Logging, and Security Architecture

Identity and access management (IAM), logging, and security architecture are three major concepts that serve as the main building blocks of an organization’s security. In today’s ever-evolving security landscape, these concepts, when properly implemented, secure the base of an organization’s environment. When absent or improperly implemented, they expose the organization to risk. IAM issues include unauthorized access caused by inadequate role-based permissions, which can allow access to sensitive data, and ineffective or missing multi-factor authentication (MFA), which increases the risk of account compromise and unauthorized access. Logging issues include insufficient logging detail, which leaves incomplete records of security events and makes incidents difficult to investigate and respond to effectively, and a lack of centralized log management, which makes incident investigation and response slower and less effective. Security architecture issues include inadequate network segmentation, which exposes the network to lateral movement and increases the risk of a wider impact, and poorly configured firewalls and access controls, which can leave open vulnerabilities that attackers could exploit to gain unauthorized access to the system.

Design and planning are the key first steps to creating a secure environment. First, a cybersecurity analyst must choose between infrastructure models, such as virtualization, containerization, on-premises, cloud, or hybrid. During this process, you must be aware of and understand common operating system (OS) concepts, including system hardening, filesystems, system processes, logging, and underlying hardware architecture. You must then include network design concepts to integrate these systems while continuing to keep security in mind. After the systems and networks are designed, you must be able to use and manage them securely. This is where access concepts and technologies will be integrated into the design to further facilitate an overall secure organization.

This chapter will discuss the CIA triad and teach infrastructure concepts, such as virtualization and containerization, alongside operating system concepts and network architecture. You will learn about logging setup and its importance to system security and health. IAM criticality and concepts will be examined. The chapter will end by discussing encryption and sensitive data protection.

This chapter covers Domain 1.0: Security Operations, objective 1.1, Explain the importance of system and network architecture concepts in security operations, of the CompTIA CySA+ CS0-003 exam.

The exam topics covered are as follows:

  • Infrastructure concepts
  • Operating system concepts
  • Log ingestion
  • Network architecture concepts
  • IAM
  • Encryption and data protection

Making the Most of This Book – Your Certification and Beyond

This book and its accompanying online resources are designed to be a complete preparation tool for your CySA+ exam.

The book is written in a way that means you can apply everything you’ve learned here even after your certification. The online practice resources that come with this book (Figure 1.1) are designed to improve your test-taking skills. They are loaded with timed mock exams, chapter review questions, interactive flashcards, case studies, and exam tips to help you work on your exam readiness from now till your test day.

Before You Proceed

To learn how to access these resources, head over to Chapter 16, Accessing the Online Practice Resources, at the end of the book.


Figure 1.1: Dashboard interface of the online practice resources

Here are some tips on how to make the most of this book so that you can clear your certification and retain your knowledge beyond your exam:

  1. Read each section thoroughly.
  2. Make ample notes: You can use your favorite online note-taking tool or use a physical notebook. The free online resources also give you access to an online version of this book. Click the BACK TO THE BOOK link from the dashboard to access the book in Packt Reader. You can highlight specific sections of the book there.
  3. Chapter review questions: At the end of this chapter, you’ll find a link to review questions for this chapter. These are designed to test your knowledge of the chapter. Aim to score at least 75% before moving on to the next chapter. You’ll find detailed instructions on how to make the most of these questions at the end of this chapter in the Exam Readiness Drill – Chapter Review Questions section. That way, you’re improving your exam-taking skills after each chapter, rather than at the end of the book.
  4. Flashcards: After you’ve gone through the book and scored 75% or more in each of the chapter review questions, start reviewing the online flashcards. They will help you memorize key concepts.
  5. Mock exams: Review by solving the mock exams that come with the book till your exam day. If you get some answers wrong, go back to the book and revisit the concepts you’re weak in.
  6. Exam tips: Review these from time to time to improve your exam readiness even further.

Infrastructure

Information technology infrastructure forms the fundamental structure of an organization to support its operation. It includes hardware, software, and networking. It provides the backbone for an organization to function and provide services to its customers. In a traditional design, these would be physical components (such as cabling, routers, switches, servers, and racks) acquired, provisioned, and maintained directly by the organization.

In this section, you will learn about three evolved infrastructure concepts:

  • Virtualization
  • Containerization
  • Serverless computing

Virtualization has enhanced physical machine resource utilization. Containerization has provided a portable solution for packaging applications and their dependencies. Meanwhile, serverless computing has allowed developers to focus only on code. Together, these innovations redefine the approach, development, and deployment of applications in today’s world. As you read this section, make sure to understand the advantages and disadvantages of each concept as well as security concerns to be aware of when using them.

Virtualization

Virtualization utilizes software to allow a single physical machine to run multiple independent machines on the same hardware. The machine run by this software is called a virtual machine (VM). A VM is isolated and self-contained, allowing it to run an OS that can differ from the physical machine. An example would be running a Linux VM on a Windows physical machine.

This concept allows more efficient usage of hardware as, typically, much of an individual machine’s hardware goes unused, such as memory, CPU, and disk space. In standard physical computing, a server is often dedicated to a single application or group of applications. To account for potential periods of higher usage, memory and CPU are provisioned that may sit idle for periods of time. For example, an application may peak and need 8 GB of memory, but only peak for an hour each day. For the rest of the day, it only uses 2 GB, leaving 6 GB of memory idle and unused for 23 hours a day. Virtualization also expands the software capabilities of physical hardware. A single physical machine can run multiple VMs, each of which can run different OS and software configurations.

Many organizations utilize virtualization in their server infrastructure. They run large physical machines, with many CPUs and lots of memory and disk space. Numerous VMs, often organized into clusters, are then created on each physical machine. These VMs and their settings are centrally managed by a hypervisor. This allows the dynamic allocation of system resources based on need. Using the example mentioned previously about system resource wastage, an additional VM could be set up to use 4 GB of memory for 23 hours of the day, creating more efficient overall usage for the physical machine.
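As a quick sanity check on the arithmetic in this example, the following Python sketch (using the hypothetical 8 GB/2 GB figures from the text) compares memory utilization on a dedicated server with the same host after a second 4 GB VM is added.

```python
# Hypothetical figures from the example above: an application peaks at 8 GB
# for 1 hour per day and idles at 2 GB for the remaining 23 hours.
HOST_MEMORY_GB = 8   # memory provisioned on the (physical or virtual) host

def utilization_pct(gb_hours_used: float, gb_hours_provisioned: float) -> float:
    """Memory utilization as a percentage of what was provisioned."""
    return 100 * gb_hours_used / gb_hours_provisioned

provisioned = HOST_MEMORY_GB * 24        # GB-hours reserved per day

app_only = 8 * 1 + 2 * 23                # the single application's real demand
with_second_vm = app_only + 4 * 23       # add a 4 GB VM during the 23 idle hours

print(f"Dedicated server utilization: {utilization_pct(app_only, provisioned):.0f}%")
print(f"With a second VM:             {utilization_pct(with_second_vm, provisioned):.0f}%")
```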

Figure 1.2 depicts a simple architecture for a VM.

Figure 1.2: VM architecture

It starts at the base layer with physical hardware running a host OS of Windows Server 2019. The next layer represents the hypervisor to control and create the VMs. There are two VMs depicted, each running its own guest OS – one running Linux and the other running Windows 11. Any number of applications can then be installed and run. Depending on the resource needs of each VM and application, and the hardware available on the physical server, multiple VMs can be created and run alongside each other. Each VM can have a different OS or version of an OS to best meet the business needs.

Organizations can also utilize a virtual desktop infrastructure (VDI) setup. This is where a desktop environment is streamed to individual external machines while contained and maintained internally. The desktop environment runs virtually, in a dynamic or persistent manner. In the dynamic method, VMs are created and destroyed as users connect to a desktop environment. For the persistent method, machines are pre-created and sit idle waiting for usage. This method is less efficient than dynamic but still more efficient than standard physical computing since it still allows dynamic resource allocation to the VMs based on real-time needs.

Virtualized machines and environments can be complex to design, build, manage, and secure. Organizations will have to manage patching at multiple layers, with each layer having the potential to adversely affect the others. For example, if a Windows Server host is running a Linux guest VM and the host is patched, the patch could cause the Linux VM to stop functioning or run more slowly. This means patching requires more planning and testing. It is critical to secure the hypervisor as there is a single dependency on it. If the hypervisor is compromised, it can provide an avenue for attackers to impact or compromise all VMs running under its management. Isolation and segmentation between VMs and clusters can help to reduce the attack surface and prevent wider impact from issues and attacks such as VM escape vulnerabilities.

Containerization

Containerization is a form of virtualization that creates an isolated unit called a container. This is a standardized unit that contains software, including all the requirements needed to function and run, such as code, libraries, and dependencies. In standard computing, these requirements may come from other installed software or components. A container essentially brings its own environment with the software. Some examples of container technologies are Docker and Kubernetes.

Using containers provides several benefits. They are portable and isolated, giving them consistent performance wherever they run. Their design is often lightweight, using fewer resources than VMs. They are created from images that are generally immutable, read-only copies, which increases security.

They also fit neatly into the microservices architecture concept by facilitating breaking an application into smaller, manageable services, each in its own container. These smaller units allow enhanced agility, scalability, and ease of management. They often have quick development and deployment timelines.

However, containers can have compatibility issues, with some OSs requiring additional configuration. Networking and storage configurations can be complex, especially as containers scale. Containers are typically stateless, much like serverless functions, so applications that require state management need external storage or additional configuration.

Since they run on an OS, it is important to use the principle of least privilege, granting only what is necessary for the container to run. Containers share the host’s kernel, so any vulnerabilities in how they run can pose a risk to the entire system, including the host OS. The images they are created from must be reviewed for security; otherwise, any security issues, such as misconfigurations, will proliferate into the container unit. It is also important to secure the hosts that containers are deployed on, or the containers may be impacted by security issues of the host. Network segmentation and traffic flow control should be used to protect containers from each other and only allow communication when necessary. This helps to reduce the impact of vulnerabilities such as container escapes.
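As a concrete (and deliberately simplified) illustration of these least-privilege ideas, the sketch below uses the Docker SDK for Python to start a container as a non-root user, with a read-only filesystem, all capabilities dropped, and no network. The image name and option values are illustrative assumptions; the options your workload actually needs will differ.

```python
# Minimal sketch of least-privilege container settings using the Docker SDK
# for Python (pip install docker). Image name and values are illustrative.
import docker

client = docker.from_env()

container = client.containers.run(
    "alpine:3.19",             # a small, reviewed base image
    command="echo hello from a locked-down container",
    user="1000:1000",          # run as a non-root user
    read_only=True,            # keep the root filesystem immutable
    cap_drop=["ALL"],          # drop all Linux capabilities
    network_mode="none",       # no network unless the workload requires it
    detach=True,
)
container.wait()               # let the short-lived command finish
print(container.logs().decode())
container.remove(force=True)
```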

Serverless Computing

Serverless computing leverages the dynamic nature of the cloud to create functions without an organization having to perform infrastructure management. When an organization owns physical hardware, it must handle all management functions for the hardware, such as provisioning, maintenance, scaling, and security. In some cloud setups, these responsibilities can still rest with the organization. With serverless computing, the cloud provider is responsible for handling the need-based dynamic allocation and provisioning of servers. These needs can be statically defined directly by the organization or based on dynamic application demand. All required management, including security, for these servers is also supplied by the cloud provider. This removes the organizational responsibility of infrastructure design, building, and management of physical or virtual devices. High availability becomes easier to design and achieve with the cloud provider managing the infrastructure.

Function as a service (FaaS) is a common implementation within serverless computing, where developers can create discrete functions. These custom-designed functions are often event-driven, executing on demand. There can be any number of trigger events, such as HTTP requests, uploads, and timers. They are also stateless, retaining no information about previous invocations. This event-driven and stateless nature further allows functions to auto-scale as needed. These function designs can facilitate the underlying operation of an application or service offering. Some example FaaS offerings from cloud providers are AWS Lambda, Google Cloud Functions, and Azure Functions. Consider an online photo-sharing service that allows user uploads that are then displayed in different formats, such as thumbnails, medium size, and full size. A developer can create a function to work with the uploaded images. When a picture is uploaded and stored, an event can be triggered that pulls the original upload from storage, creates multiple resized versions, and then places them back in storage for user use.
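A minimal sketch of such a resize function is shown below, written in the style of an AWS Lambda handler using boto3 and Pillow. The bucket layout, output sizes, and key prefixes are assumptions for illustration, not a required schema.

```python
# Illustrative serverless resize function in the style of an AWS Lambda handler.
# Uses boto3 for storage access and Pillow for image manipulation; bucket
# names, output sizes, and key prefixes are assumptions for this sketch.
import boto3
from PIL import Image

s3 = boto3.client("s3")
SIZES = {"thumbnail": (128, 128), "medium": (800, 800)}

def handler(event, context):
    # The object-created event identifies the uploaded image.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    s3.download_file(bucket, key, "/tmp/original")
    original = Image.open("/tmp/original")

    # Create each resized version and place it back in storage.
    for label, size in SIZES.items():
        resized = original.copy()
        resized.thumbnail(size)                     # preserves aspect ratio
        out_path = f"/tmp/{label}.jpg"
        resized.convert("RGB").save(out_path, "JPEG")
        s3.upload_file(out_path, bucket, f"resized/{label}/{key}")
```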

Secure design and coding for serverless functions are just as important as for standard applications. They have many of the same security considerations, such as authentication and authorization, data security and privacy, deployment, and communication. Another common attack vector is denial of service (DoS) and resource exhaustion, which can take advantage of the event-driven nature of FaaS by triggering events in high volumes and overwhelming workloads. A large security trade-off is having no visibility into the infrastructure and how it is being secured or managed. This makes a FaaS user dependent on the cloud provider’s security.

Serverless computing provides several benefits to organizations. It uses a pay-per-use model for cost efficiency. Functions only run when needed, using only the resources necessary during the invocation. There is no charge for idle resources when functions are not running. Resources are elastic and auto-scaled, increasing or decreasing based on demand and maintaining a consistent level of performance. It also provides high availability and fault tolerance through the dynamic management of resources, ensuring that applications remain up and running. Finally, there can be rapid development and creation of applications by allowing attention to be focused on development, without the organizational responsibility of infrastructure management.

However, serverless computing may not always be the best solution for all situations. Invocations can often have a small start delay. This can be a disadvantage if an application requires real-time processing, such as with video conferencing. A function execution can be constrained by a maximum runtime, affecting long-running processes. This can be a disadvantage for applications such as those that work with databases that extract, transform, and load large volumes of data. As pricing is based on usage, resource efficiency and low workloads lead to better cost efficiency. This can be lost with heavy workloads that run more often, such as when working with machine learning and big data analytics.

Table 1.1 provides a summary of items discussed in this Infrastructure section. It covers virtualization, containerization, and serverless computing. Advantages, disadvantages, security concerns, and other topics are compared across all three subjects.

Definition

  • Virtualization: Running multiple VMs on a single physical server
  • Containerization: Running multiple isolated containers on a single OS instance
  • Serverless computing: Running functions or services without managing the underlying infrastructure

Advantages

  • Virtualization: Better utilization of hardware resources; isolation between VMs; flexibility to run different OSs
  • Containerization: Lightweight and faster startup; efficient use of resources; consistent environments
  • Serverless computing: No infrastructure management; auto-scaling; cost-effective for variable workloads

Disadvantages

  • Virtualization: Higher overhead due to running separate OS instances; slower startup times
  • Containerization: Less isolation compared to VMs; dependency on the host OS
  • Serverless computing: Limited control over the environment; vendor lock-in risks; cold start latency

Security Concerns

  • Virtualization: VM escape vulnerabilities; hypervisor attacks; complex patch management
  • Containerization: Container escape vulnerabilities; shared kernel risks; insecure container images
  • Serverless computing: Dependency on the provider’s security; lack of visibility into infrastructure; function-level security risks

Use Cases

  • Virtualization: Running legacy applications; multi-tenant environments; development and testing
  • Containerization: Microservices architectures; continuous integration/continuous deployment (CI/CD); lightweight applications
  • Serverless computing: Event-driven applications; short-lived tasks; dynamic scaling requirements

Resource Efficiency

  • Virtualization: Moderate efficiency due to full OS instances
  • Containerization: High efficiency due to sharing the OS kernel
  • Serverless computing: Very high efficiency, paying only for execution time

Isolation Level

  • Virtualization: Strong isolation between VMs
  • Containerization: Moderate isolation; containers share the same OS kernel
  • Serverless computing: Limited isolation; depends on the provider’s multi-tenancy model

Startup Time

  • Virtualization: Slow (minutes)
  • Containerization: Fast (seconds)
  • Serverless computing: Very fast (milliseconds)

Management Overhead

  • Virtualization: High; requires managing VMs and OS updates
  • Containerization: Moderate; requires managing containers and dependencies
  • Serverless computing: Low; provider handles infrastructure management

Table 1.1: Comparison of virtualization, containerization, and serverless computing

Now that you have reviewed infrastructure design choices, you will learn about the systems that will operate within those designs. You will explore OS concepts and security considerations including hardware architecture, filesystem structure, configuration files, system processes, and secure system hardening.

Activity 1.1: Set Up Your Virtual Environment

This activity guides you through setting up a virtualized environment using VirtualBox, Kali Linux, and Metasploitable. These tools are essential for practicing cybersecurity concepts in a safe and controlled setting. By the end of this activity, you will have a functional virtual environment ready for hands-on exercises.

You will begin by downloading and installing VirtualBox, followed by obtaining and setting up the required VMs. Finally, you will verify that your setup is complete by testing the functionality of each VM.

Part 1: Download VirtualBox

Before you can start working with VMs, you need a virtualization platform. VirtualBox is a free and reliable tool that enables you to create and manage VMs on your system. Follow these steps to download and install it.

To download and install VirtualBox, follow these steps:

  1. Navigate to https://www.virtualbox.org/wiki/Downloads.
  2. Download the latest VirtualBox for your system OS.
  3. Install VirtualBox and accept all the defaults. If you are presented with a message about missing dependencies (Python Core/win32api), you can click Yes to proceed, as this book will not utilize these. If you plan to use the Python bindings for Oracle VM VirtualBox for external Python applications using the Oracle VM VirtualBox API, you will need to revisit this later.

Part 2: Download VMs

To perform the exercises in this book, you will need to download specific VMs: Kali Linux and Metasploitable. These downloads can be quite large and may take a long time depending on your connection speed. They will provide the environments required for hands-on learning. You can follow these steps to download the VM files:

  1. Navigate to https://www.kali.org/get-kali/#kali-virtual-machines and select the VirtualBox 64-bit download.
  2. Navigate to https://sourceforge.net/projects/metasploitable/files/Metasploitable2/ and download the latest version.

Part 3: Set Up Your Downloaded VMs

Both of your downloads will need to be unzipped. You can use your preferred ZIP program, such as 7-Zip, found at https://www.7-zip.org/download.html. Windows also has a ZIP program built in. Unzip the images and place them in a folder where you store your VirtualBox images. They will both be used in the next steps.

Set Up Your Kali Linux VM

Kali Linux is a penetration testing and ethical hacking distribution. Follow these steps to configure it in VirtualBox and ensure it is ready for the exercises in this book:

  1. Figure 1.3 shows the main initial VirtualBox screen. Here, you will click the Add button, the green plus sign on the right side of the buttons at the top of the screen.
Figure 1.3: VirtualBox Add button

  2. Figure 1.4 shows the popup that will appear, allowing you to choose a .vbox file. Navigate to where you unzipped your Kali Linux files and choose the .vbox file. It will be the only file shown, as the prompt restricts the display to VM files only. Then, select Open.
Figure 1.4: VirtualBox .vbox file choice

  3. This will automatically configure all elements of the VM and you will see it available in your list of VMs. Figure 1.5 shows how the VirtualBox home screen will appear when your new Kali Linux VM is selected post setup.
Figure 1.5: VirtualBox Kali Linux post setup

As you will see, this creates a VM that will use 2 GB of memory and 2 CPUs.

Set Up Metasploitable

Metasploitable is a purposefully vulnerable VM designed for testing and learning. This section provides the necessary steps to configure it in VirtualBox.

  1. Figure 1.6 shows the home screen where the New button will be used to create a new VM. Click on the New button to create a new VM that will be used to load Metasploitable files.
Figure 1.6: VirtualBox new VM button

  2. Figure 1.7 shows the screen that will appear giving you options to configure elements for the new VM. You will interact with the following elements (the rest can be left at their defaults):
    • Name – Fill in a name of your choice for this VM; the suggested name is Metasploitable 2
    • Type – Choose Linux from the drop-down list
    • Version – Choose Other Linux (64-bit) from the drop-down list; it will probably be the last option in the list.
Figure 1.7: VirtualBox new VM name and OS

  3. Then, click Next to proceed.
  4. Figure 1.8 shows the hardware configuration screen for a new VM. On this screen, it is recommended to set at least 512 MB of memory and 1 CPU. You can set these higher if you desire and have the resources available, keeping in mind that you will need to run the Kali Linux and Metasploitable VMs at the same time for future exercises and have resources available for your computer to function as well. When you have finished adjusting these settings, click Next to proceed.
Figure 1.8: VirtualBox new VM hardware settings

  5. Figure 1.9 shows the VirtualBox virtual hard disk selection screen, providing three options for you to choose from for configuring the new VM. On this screen, select the Use an Existing Virtual Hard Disk File option.
  6. After selecting the radio button, click on the folder icon with the green up arrow on the right-hand side; this will open a new window to choose a hard disk file.
Figure 1.9: VirtualBox virtual hard disk selection

  7. The next screen, which is the Hard Disk Selector, is shown in Figure 1.10. It also shows the Add button, which is used to define new hard disk files. On this screen, click on Add at the top left.

Figure 1.10: VirtualBox Hard Disk Selector Add button

  8. Figure 1.11 shows the pop-up that will load, allowing you to choose a .vmdk file for defining the virtual hard disk file. Navigate to the folder where you unzipped Metasploitable 2 and choose the .vmdk file. The prompt will by default restrict the options to only VM files, so you should only see the one .vmdk file. After selecting it, choose Open to continue.
Figure 1.11: VirtualBox hard disk .vmdk file

  9. You should now see the .vmdk file as an option in your list of hard disks. Figure 1.12 shows the hard disk selector screen now having two options that can be used to set up new VMs, including the kali-linux and Metasploitable disks. Click to highlight the Metasploitable.vmdk file and click the Choose button to continue.
Figure 1.12: VirtualBox hard disk selector Choose button

  10. You should now be back to the Virtual Hard Disk screen, as shown in Figure 1.13, and it should have the Metasploitable.vmdk file listed; click Next to continue.
Figure 1.13: VirtualBox Metasploitable.vmdk input for the virtual hard disk file

  11. Figure 1.14 shows the final screen for the new VM setup, which is a summary screen listing all the options selected. Double-check that you see your desired machine name, Guest OS Type is set to Other Linux (64-bit), Base Memory is set to at least 512 MB, and Attached Disk is your Metasploitable.vmdk file. If all checks out, you can click Finish to proceed.

Figure 1.14: VirtualBox new VM creation summary screen

  12. Figure 1.15 shows the VirtualBox home screen, now containing two configured VMs, as shown in the list on the left side. You should now see both VMs in your VirtualBox list.
Figure 1.15: VirtualBox configured VM list

Test Your VMs

After configuring your VMs, it is essential to verify that they are functioning properly. These steps will help you test, log in, and prepare your VMs for future activities:

  1. Figure 1.16 shows the menu that appears after right-clicking on a VM, and the options under the Start option. You should right-click on one of your VMs and choose Start and then Normal Start.
Figure 1.16: VirtualBox starting VMs

  2. Figure 1.17 shows a small prompt window that will appear, telling you a VM is powering up and providing a progress bar.
Figure 1.17: VirtualBox powering VM up prompt

Some VMs may start quickly, and this popup will not appear. If you get any errors, delete your machines and repeat the setup steps. If you still get errors, delete the files that you unzipped, delete the downloaded ZIP file, re-download it, unzip it, and recreate the VMs by following the steps again. These actions will ensure that files have not been corrupted during any step.

  3. Figure 1.18 shows the console window that appears after a VM is powered up. It also shows a pop-up window on the right side of the screen that lists integration options.
Figure 1.18: VirtualBox initial VM start and enhanced options

You can dismiss the tooltip using the top-right box with an X in it. When you interact with a VirtualBox VM, it may take control of your mouse when you click within it. If this happens, use the bottom-right information as a guide. In this example, it says Right Ctrl; this means that to get the mouse back to your host machine, you must hit the right Ctrl key.

  4. Test that you can log in to each of the VMs. As of this writing, the login for Metasploitable 2 is msfadmin:msfadmin. The login for Kali is kali:kali.
  5. Figure 1.19 shows how to close a VM that has been started. Once you have verified that you can start and log in to each VM, you are ready for future activities. When you are done with your VMs, you can stop them by clicking on File in the top left and then Close….
Figure 1.19: Closing a VM

  6. This will open another window with three choices. For the purposes of this book, I suggest choosing Power off the machine. Figure 1.20 shows the list of VirtualBox VM options for closing a running VM.
Figure 1.20: VirtualBox Close Virtual Machine options

If you make changes to your VM or wish to come back to the same point, you can use Save the machine state; this will start it back at the same point you left it at. You can also take regular snapshots, or copies, of the machine that can be used to restore or start from. These are more advanced features that will not be used for this book.

Operating System

In this section, you will explore key concepts related to OSs, which form the backbone of any IT infrastructure. Understanding hardware architecture is crucial as it lays the foundation for how an OS interacts with physical components. You will delve into the Windows Registry, a vital database that stores configuration settings and options for the OS. Additionally, you will learn about file structures for both Windows and Linux, highlighting the differences and similarities in how these systems organize and manage files.

Configuration file locations will also be covered, providing insights into where and how important system and application settings are stored and managed. You will examine system processes, focusing on common processes that allow Windows and Linux to handle tasks and services to ensure smooth operation. Finally, the section will emphasize the importance of system hardening, discussing strategies and best practices to reduce vulnerabilities and enhance the security of your OS.

Hardware Architecture

The physical hardware architecture of a machine is not immune from attacks. Specific attacks may be designed for specific hardware architectures, such as CPUs from Intel or AMD. Today, most computers run on either x86 or x64 chips, but due to variations in hardware and software, code may not always run as intended in every situation. Even so, attackers have evolved their code development processes, testing on many different architectures. This means that simply having different architectures will not ensure a safeguard against successful attacks. In 2018, two hardware-related vulnerabilities, named Spectre and Meltdown, were disclosed. They targeted several different processor types, including Intel x86, IBM Power, and ARM-based processors. Both exploited how CPUs handle speculative execution, allowing attackers to bypass memory protection and perform further attacks, such as privilege escalation and side-channel attacks. They were later resolved through OS patches from vendors and BIOS updates from CPU manufacturers. It is important to know what hardware you are using to be aware of any related threats so that you can evaluate risk and apply controls to best protect the hardware.

An additional concern for hardware is supply chain attacks. These attacks target hardware before it arrives for use; for instance, malicious components may be implanted during the manufacturing process, providing a means to compromise the organization after the hardware is installed. An example is the 2018 Supermicro motherboard attack. It is alleged that Chinese actors implanted microchips designed for malicious purposes on Supermicro motherboards while they were being manufactured. The chips could bypass security settings, allowing the potential compromise of systems that used the motherboard. If an organization was found to have this issue, it would have required replacing affected hardware with new hardware that did not have the affected Supermicro motherboards. It also could have required a broader internal review to find any other compromises due to the motherboard attack and to resolve them on a case-by-case basis. In these cases, it is important to have a vendor management process and a risk-based approach to evaluate new hardware.

Windows Registry

The Windows Registry is where the Windows OS stores configuration settings and options for the OS and software. It is a crucial component of the OS as it assists with managing aspects of the computer operation, such as configuration settings, system and application preferences, user profiles, and hardware information, enabling the OS and installed applications to function correctly and adapt to user-specific requirements.

Registry Editor (regedit) is a built-in tool that can be used to easily view and interact with the Windows Registry. In Figure 1.21, you can see an example view of the Registry Editor screen. It shows the main key hives drilled down to the HKEY_LOCAL_MACHINE\SECURITY key, which is primarily used for storing security-related information and settings, such as access lists for system resources.

Figure 1.21: Windows Registry Editor

The settings and options found in the Registry are organized into a structured database. It contains a hierarchical structure of hives, keys, subkeys, and values. As shown in the figure, the hive is HKEY_LOCAL_MACHINE, SECURITY is the parent key, and the right frame shows values defined by name, type, and data. Hives are the first level of the hierarchical structure, representing the logical grouping of Registry data and containing sets of keys and values. Keys are organizational units in the next level of the hierarchical structure that contain other subkeys and values. Values store specific information and settings and can contain strings, binary data, numeric data, links to other Registry entries, or component data.

There are five main hives to be aware of:

  • HKEY_CLASSES_ROOT (HKCR) – Contains links between file extensions and applications to open them
  • HKEY_CURRENT_USER (HKCU) – Preferences, environment variables, and configuration settings for the currently logged-in user
  • HKEY_LOCAL_MACHINE (HKLM) – System-wide settings for all users, including services and scheduled tasks
  • HKEY_USERS (HKU) – Configuration settings for all system users
  • HKEY_CURRENT_CONFIG (HKCC) – Local system and hardware configuration

Being a crucial component of the OS, the Windows Registry requires protection. It can be protected by various means including access control, antivirus and antimalware, Group Policy settings, and user account control (UAC).

The Windows Registry is a popular target for attackers, for example as a vector for persistence methods. It can be corrupted to cause system outages and impact. It can also be a vector for performing privilege escalation, allowing an attacker to gain a higher level of permissions.
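Beyond graphical tools such as Registry Editor, the Registry can also be read programmatically, which is useful for scripted checks. The following sketch uses Python's standard-library winreg module (Windows only) and assumes VirtualBox is installed, so that the Oracle\VirtualBox key and its Version value exist, mirroring the activity that follows.

```python
# Read an application's version from the Registry with Python's standard-library
# winreg module (Windows only). Assumes VirtualBox is installed, so the key
# HKEY_LOCAL_MACHINE\SOFTWARE\Oracle\VirtualBox and its Version value exist.
import winreg

KEY_PATH = r"SOFTWARE\Oracle\VirtualBox"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    version, value_type = winreg.QueryValueEx(key, "Version")
    print(f"VirtualBox version from HKLM\\{KEY_PATH}: {version}")
```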

Activity 1.2: Explore Windows Registry

This activity will take you on a quick review of the Windows Registry, focusing on software keys such as those for VirtualBox. Through this exercise, you will learn how to locate specific registry keys, interpret their values, and compare registry data with application details, which can be helpful in troubleshooting or forensic analysis. You will be using the Registry Editor (regedit), which is found by default on Windows machines.

The following steps will show you how to access and navigate the Windows Registry:

  1. Figure 1.22 shows how to open the Registry Editor using the Windows search box. On a Windows system where you have administrator privileges, click on the magnifying glass in the start bar and type regedit. Then, open the Registry Editor app. You will need to select Yes in the User Account Control box.
Figure 1.22: Starting Windows Registry Editor

  2. Click around through the main Registry hives to explore them. Observe keys, their values, and data.
  3. Find the version for your VirtualBox application. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Oracle\VirtualBox and record the version you see. Figure 1.23 shows how these keys will appear after you navigate to them. Your screen should look similar and allow you to find the version keys with the application version details.
Figure 1.23: Windows Registry VirtualBox keys

  4. If it is not already open, open VirtualBox. In the top navigation bar, click the Help menu and select About VirtualBox…. This menu and option are shown in Figure 1.24. Make a note of the version you see and compare it with what you found in the Registry.
Figure 1.24: About VirtualBox option

Figure 1.25 shows the screen that will open up after you select the About VirtualBox… option. On this screen, you can see information on the installed version in the bottom-right corner.

Figure 1.25: VirtualBox installed version

  5. If you use VMware for your VM installs, you can explore its keys at HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.. Figure 1.26 shows how these keys may appear on your system, including the key used for finding the version, which is called vmci.status. You will have to read the version from the middle of the value string.
Figure 1.26: Windows Registry VMware keys

You can also look at other software installed and review the associated key values. The version is not always a standard key.

File Structure

Understanding file structure is crucial for managing and securing an OS effectively. This section discusses the types and structures of filesystems in both Windows and Linux environments.

For Windows, you will explore the New Technology Filesystem (NTFS) and the File Allocation Table (FAT) system. You will learn how these systems organize data, manage disk space, and handle file permissions. The section will also cover the hierarchical structure of directories and files within these systems.

For Linux, you will examine the Extended (ext) filesystem and X Filesystem (XFS). You will study how these filesystems manage data, support large files, and ensure data integrity. Additionally, you will understand the Linux filesystem hierarchy, including essential directories such as /home, /etc, and /var.

The section will provide a comprehensive understanding of various filesystem types and structures in both Windows and Linux. This knowledge will enable you to manage filesystems more effectively and enhance overall system security.

Windows

While you explore OS file structure concepts, you should also learn about filesystem types. They are important because they influence data organization, compatibility, performance, integrity, and security for the OS. There are a few filesystem types that are specific to the Windows OS. FAT was used in earlier versions of Windows, including FAT12, FAT16, and FAT32. Earlier versions of FAT limited filename length; FAT32 extended it to 255 characters but still imposed a 4 GB maximum size on individual files. exFAT added features and capabilities over FAT, including support for files larger than 4 GB. NTFS is the most modern Windows filesystem type; introduced with Windows NT 3.1 in 1993, it added numerous features, including many security-based features missing from earlier filesystem types. Some examples are file and folder permissions, compression and encryption, fault tolerance and recovery, and links. FAT and exFAT are still used today for specific use cases, such as simple systems that do not need all the features provided by NTFS. They can offer slight performance improvements over NTFS. They also provide intersystem compatibility, such as for flash drives shared between Windows, macOS, and Linux. This makes them still important to be aware of.

Windows uses several structural components, such as drive letters, folders, subfolders, filenames, and extensions, to organize the filesystem. Drive letters, such as C:, D:, and E:, are used to define storage devices. File paths are hierarchical, starting with the drive letter or network location. They can contain folders and subfolders to hold files. The file extension allows Windows to link a file to a program that can properly interact with it.

An example of a file path is C:\Users\Username\Documents\File.txt. Here, C: is the drive letter for the storage device. \Users\Username\Documents\ is the file path of folders and subfolders to hold the file. File.txt is the file. The .txt part is the extension that tells Windows this is a text file and to open it with the associated text editor program.
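The same example path can be decomposed programmatically. The short Python sketch below uses the standard-library pathlib module; the path is simply the illustrative one from the text.

```python
# Decompose the example Windows path into its structural components using
# Python's standard library; the path itself is the illustrative one above.
from pathlib import PureWindowsPath

path = PureWindowsPath(r"C:\Users\Username\Documents\File.txt")

print(path.drive)   # C:                              -> storage device
print(path.parent)  # C:\Users\Username\Documents     -> folders and subfolders
print(path.name)    # File.txt                        -> the file itself
print(path.suffix)  # .txt                            -> extension linking to a program
```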

Some essential directories include the following:

  • C:\Users: This directory contains the home directories of all the users on the system. Each user has a subdirectory within C:\Users, typically named after their username. This is where users store their personal files, configuration settings, and directories.
  • C:\Windows\System32: This directory contains system-wide configuration files, executable files, and libraries essential for the OS’s operation. It holds many of the core components and configuration settings for the Windows OS.
  • C:\ProgramData: This directory is used for application data that is accessible to all users on the system. It includes configuration files, application data, and other files that programs need to access.
  • C:\Windows\Logs: These are system log files.
  • C:\Windows\Temp: These are temporary files used by the system and applications.
  • C:\Users\[username]\AppData\Local\Temp: These are user-specific temporary files.

Libraries, user profiles, and the recycle bin are some additional organizational elements of the Windows filesystem.

Linux

There are several common Linux filesystem types still in use today. ext3 is an early version of the ext filesystem that added several enhancements over previous versions, including journaling, file sizes up to 2 terabytes (TB), and volume sizes up to 32 TB. ext4 is the latest version of ext, supporting file sizes up to 16 TB, a volume size of up to 1 exabyte (EB), reduced fragmentation, improved read/write performance, faster fsck, and optimization for high-performance computing. fsck (File System Consistency Check) is a command-line utility used in Unix-like OSs to check and repair filesystem inconsistencies on storage devices. XFS contains many of the ext4 features but increases support for even larger sizes, up to 16 EB. This makes it more widely used in large-scale storage systems.

Most Linux file structure components are different from Windows. They are organized hierarchically. The structure starts from the root directory (/) and directories and subdirectories are found from that point, such as /bin. File types can be regular files, directories, symbolic links, devices, or special files. Filesystems are mounted onto directories for access. Each file type has permissions at the owner/user and group level.
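To see these attributes on a real file, the following Python sketch (run on a Linux system) reads the type, permission bits, and owner/group IDs of /etc/hosts using only the standard library; any path could be substituted.

```python
# Inspect a file's type, permission bits, and owner/group IDs on a Linux system
# using only the standard library. /etc/hosts is used as an example path.
import stat
from pathlib import Path

p = Path("/etc/hosts")
info = p.stat()

print("Is a directory:", p.is_dir())
print("Is a symlink:  ", p.is_symlink())
print("Permissions:   ", stat.filemode(info.st_mode))   # e.g. -rw-r--r--
print("Owner UID/GID: ", info.st_uid, info.st_gid)
```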

Some essential directories include the following:

  • /home: This directory contains the home directories of all the users on the system. Each user has a subdirectory within /home, typically named after their username. This is where users store their personal files, configuration settings, and directories.
  • /etc: This directory is used to store all system-wide configuration files and shell scripts used to boot and initialize system settings.
  • /var: This directory holds variable data files. These include logs, spool files, and temporary files. For example, system log files are typically found in /var/log.

Configuration File Locations

Understanding the location and structure of configuration files is central to effective system administration and security. Configuration files, which store settings and preferences for OSs, applications, and services, play a pivotal role in the functionality and stability of a system. These files enable customization and control over various system behaviors and features, making them an essential aspect of both Windows and Linux environments.

In Windows, configuration settings are often stored in the Windows Registry, a centralized hierarchical database. However, many applications also use configuration files that are typically found in specific directories. Key locations include C:\Windows\System32 for system-wide configurations and C:\ProgramData for application-specific settings accessible to all users.

In contrast, Linux employs a more distributed approach, with configuration files scattered across multiple directories. The /etc directory is the primary location for system-wide configuration files and scripts, essential for booting and initializing system settings. User-specific configurations are usually found within their home directories, often in hidden files or subdirectories.

Windows

As previously discussed, the Registry is the main configuration store for Windows. However, there are some additional files to be aware of. bootmgr and the Boot Configuration Data (BCD) store contain information about the OS and its boot configuration. The hosts file, found in C:\Windows\System32\drivers\etc, allows local name resolution entries that manually bypass DNS. This same directory also holds additional network-related configuration files. Additional application-specific configurations can be found in C:\ProgramData and C:\Program Files. Also, some user-specific settings are stored within C:\Users\<Username> and the AppData subdirectory.

Group Policy Objects (GPOs) are a set of rules and configurations that can be centrally managed by the organization and then pushed out to specific machines within an Active Directory environment. An Organizational Unit (OU) is a logical container within Active Directory used to group users, computers, or other resources for easier management and policy application. GPOs can be applied at different levels, including local GPOs (specific to a single computer), domain GPOs (affecting all computers and users in a domain), and OU-specific GPOs (targeted to a specific OU). In some situations, this can override or replace local settings, even found within the Registry. They allow consistent settings to be set across the organization to maintain the security, stability, and functionality of a Windows environment.

Figure 1.27 shows a local policy view and some of the security options settings. Depending on the type of setup, these can be defined at the domain or local level.

Figure 1.27: Local Group Policy Editor

Figure 1.28 shows the domain group policy definition and some example GPO settings for password policies.

Figure 1.28: Group Policy Management

The Local Group Policy Editor can be accessed with administrative privileges via the gpedit.msc snap-in. The Group Policy Management editor is accessible on the domain controller with the gpmc.msc snap-in, or from a client machine with the Remote Server Administration Tools feature installed.

Linux

Linux uses numerous files to define configurations. The /etc directory is used to store system-wide configuration files. Some examples are as follows:

  • /etc/passwd: User account information
  • /etc/fstab: Filesystem table for defining disk drives and partitions
  • /etc/hosts: Local DNS resolution
  • /etc/network: A folder with network configuration files

The /etc directory also stores application configurations for ssh, package managers, Apache, Samba, and others. There are also user-specific settings stored in the user’s home directory in shell environment files such as ~/.bashrc and ~/.bash_profile. Other shells used on the system, such as zsh or sh, have equivalent files.
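As a small example of working with one of these configuration files, the Python sketch below parses the colon-delimited fields of /etc/passwd to list local accounts and their shells. It assumes the standard seven-field layout and should be run on a Linux system.

```python
# Parse /etc/passwd to list local accounts; each line holds seven
# colon-delimited fields: name:password:UID:GID:comment:home:shell
from pathlib import Path

for line in Path("/etc/passwd").read_text().splitlines():
    if not line or line.startswith("#"):
        continue
    name, _pw, uid, gid, _comment, home, shell = line.split(":")
    print(f"{name:<16} uid={uid:<6} home={home:<24} shell={shell}")
```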

System Processes

All OSs create one or more processes when running a program. System processes are the main processes running the OS. They can vary between OS types and versions but generally serve the same main purposes of controlling and directing the function of the OS. Each system process will be assigned a process identifier (PID). This is a unique number for each running process. For the CySA+ test, you need not memorize all the process names and functions but must be aware of their overall functions and importance. Attackers will often target system processes to hide and obfuscate their actions, such as running a false svchost process in Windows. They also can target them to gain additional higher levels of access to the system.

Windows

Here are some examples of common Windows system processes:

  • ntoskrnl.exe: Also known as the system process, always assigned PID 4, the core system process running the OS.
  • svchost.exe: Usually, a system will have multiple instances running, hosting different services. It is often used by attackers to hide their processes among others.
  • explorer.exe: Manages the desktop, taskbar, and file management.
  • lsass.exe: Security-related, user authentication, managing the Windows Security Account Manager (SAM) database, and enforcing security policies.
  • services.exe: Responsible for starting, stopping, and interacting with system services.

This list is not exhaustive as there are a large number of system processes that run regularly on a Windows system.

Linux

Here are some examples of common Linux system processes:

  • init or systemd: Initializes the system and manages system services; always PID 1
  • sshd: Daemon allowing SSH access to the system
  • cron and crond: Manages scheduled tasks and automated jobs
  • syslogd or rsyslogd: Main logging process for system messages and events
  • ntpd or chronyd: Manage Network Time Protocol (NTP) for time synchronization
  • httpd: Apache web service to host websites

This list is not exhaustive as there are many system processes that run regularly on a Linux system.
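One simple way to see PIDs and process names for yourself on a Linux system is to read the /proc filesystem directly, as in the following Python sketch; it uses only the standard library and skips processes that exit or deny access while it runs.

```python
# List running PIDs and process names on a Linux system by reading /proc.
# Each numeric directory corresponds to a process; its 'comm' file holds the
# short process name (for example sshd, cron, or rsyslogd).
from pathlib import Path

pids = sorted(int(p.name) for p in Path("/proc").iterdir() if p.name.isdigit())
for pid in pids:
    try:
        name = Path(f"/proc/{pid}/comm").read_text().strip()
    except (FileNotFoundError, PermissionError):
        continue   # the process exited or access was denied
    print(f"PID {pid:>7}  {name}")
```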

System Hardening

System hardening is a crucial component in a secure design as it significantly enhances the overall security posture of a system. It uses established best practice procedures for hardening hardware, networks, software, and services. By following best practices and eliminating potential entry points for attackers, organizations can protect sensitive data, ensure the integrity of their operations, and maintain the trust of their customers. Some general system hardening items include disabling unnecessary services and network components, implementing least privilege and strong passwords, applying security updates and patches, and implementing security software. These items, and the broader effort of system hardening, reduce the attack surface and the opportunities available to attackers.

The Center for Internet Security (CIS) has hardening guides and system benchmarks for numerous OS versions and software applications. For example, they have guides about various versions of Windows Desktop and Server, Red Hat Linux, AIX, Oracle database, Apache web server, and even iOS. You can find the benchmark for the popular web server software Apache here: https://www.cisecurity.org/benchmark/apache_http_server.

Another system-hardening resource is Security Technical Implementation Guides (STIGs) developed by the Defense Information Systems Agency (DISA) in the US. They provide detailed instructions and recommendations to secure computer systems, networks, and infrastructure effectively. The primary goals of STIGs are to enhance the security posture of information systems, reduce vulnerabilities, and standardize security configurations across various technologies used within the US Department of Defense (DoD) and other government agencies. Many non-government agencies also use these as best practices to configure their systems in secure ways.

A downloaded STIG file will come in a ZIP archive. In this archive, you will generally find several PDF files and the base STIG files. The PDFs explain how to understand and use the STIG files. They also include notes about updates, revisions, and an overview of the specific file. For the actual STIG requirements, you will review an *xccdf.xml file. This can be opened with the STIG Viewer application. There are three categories of items in STIGs to help prioritize settings and fixes: CAT I (High), CAT II (Medium), and CAT III (Low). CAT I items address weaknesses with an immediate, severe impact, while CAT II and CAT III items address progressively lower-risk weaknesses that degrade protective measures. Figure 1.29 shows an example of the Windows 11 STIG open in the STIG Viewer application.

Figure 1.29: Windows 11 STIG opened in STIG Viewer

The left-hand side of the figure shows a list and grouping of best practice items as defined by DISA. The right frame is a more detailed version of the same items.

Each specific STIG item has a general organization details section with several IDs, a severity, and a classification. They also have Rule Title, Discussion, Check Text, Fix Text, and References sections. Figure 1.30 shows these items for a Windows 11 firmware rule.

Figure 1.30: Example STIG item details

Figure 1.31 shows the References section, which is found at the bottom of the STIG rule item details. This provides a reference to other best practice documents (for this example, the NIST SP 800-53 document) that form the basis for this STIG’s best practice rule settings.

Figure 1.31: Example STIG item details, References

The Check Text section, as shown in Figure 1.30, defines the manual checks used to verify whether this STIG item has been implemented. In the example from the figures, an analyst can run System Information and review the System Summary section to confirm that the BIOS Mode setting displays UEFI; if it does not, it is considered a finding. Many commercial vulnerability scanning tools also include STIG checks (which are actively updated as STIGs are updated) to perform these checks in an automated fashion. You can then utilize the Fix Text section to correct any issues found. Fix Text can also be referenced during initial system setup to make a system more secure.
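
Because the requirements live in an XML file, they can also be triaged outside of STIG Viewer. The following is a minimal Python sketch (Python 3.8 or later) that lists CAT I rules from a downloaded XCCDF file; the filename is hypothetical, and the element layout is assumed to follow the usual XCCDF structure of Rule elements carrying a severity attribute and a title child.

    # A minimal sketch (not an official DISA tool) that lists CAT I (high severity)
    # rules from a downloaded STIG XCCDF file. The filename is hypothetical.
    import xml.etree.ElementTree as ET

    SEVERITY_TO_CAT = {"high": "CAT I", "medium": "CAT II", "low": "CAT III"}

    tree = ET.parse("U_MS_Windows_11_STIG_xccdf.xml")   # hypothetical filename
    root = tree.getroot()

    for rule in root.findall(".//{*}Rule"):             # namespace-agnostic search (Python 3.8+)
        severity = rule.get("severity", "unknown")
        if severity != "high":                          # focus on CAT I items first
            continue
        title = rule.find("{*}title")
        title_text = title.text if title is not None else ""
        print(SEVERITY_TO_CAT[severity], rule.get("id"), "-", title_text)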

Any of these measures should be implemented with planning and care. They have the potential to impact systems, causing them to act in unexpected ways depending on system usage and setup. Some examples include impacting legacy systems that need SMBv1, audit settings causing higher memory and CPU usage, and application failures when necessary ports or protocols are turned off. Ideally, these measures should be tailored to meet the needs of the organization implementing them. Organizations can choose not to use some settings or to alter suggested settings as per their requirements. It is important to test these settings before production implementation and to analyze any potential exceptions to determine why they cause an impact. Any deviations from STIGs or CIS benchmarks should be made with a risk-based approach in mind. It is also important to monitor these settings periodically to ensure they are not unintentionally altered, which is commonly done as part of vulnerability monitoring programs. The benchmarks and STIGs themselves should also be monitored for new updates so that new settings can be evaluated and applied.

This section covered essential OS concepts, including hardware architecture, Registry management, and file structures. You examined configuration file locations and system processes for both Windows and Linux, along with system hardening practices to enhance security. With a solid understanding of these foundational elements, you are now prepared to explore how logs are managed and utilized in the next section.

Activity 1.3: CIS Benchmark and STIG Review

This activity gives you practice with two widely recognized resources for system hardening: CIS benchmarks and STIGs. These documents provide detailed guidelines for securing systems by implementing industry best practices and government standards. By reviewing and comparing them, you’ll gain hands-on experience analyzing security settings, assessing their organizational impact, and understanding how different frameworks present and enforce security controls.

Follow these steps to explore key security settings, analyze their applicability, and compare the presentation of information in CIS benchmarks and STIGs:

  1. Visit https://www.cisecurity.org/cis-benchmarks and navigate to Operating Systems | Microsoft Windows Server | DOWNLOAD THE BENCHMARK. You will have to register to be able to complete the download.
  2. Check your email for the link to download the applicable benchmark. Choose the Windows Server 2022 benchmark to use in the following steps. You can use any benchmark you want to explore on your own, but the rest of the steps here are specific to the Windows Server 2022 benchmark. Explore the settings for section 1.1 Password Policy on pages 29–44 in the document. Note that the PDF pages may not exactly match the document pages. While reviewing these settings, consider these questions:
    • Do these settings fit within your environment?
    • If these settings were to be turned on enterprise-wide, would there be any concern of adverse impact?
    • If there is a concern for impact, what steps would be best to follow for those specific settings?

    Another specific setting to explore is 2.3.7.3 Interactive logon: Machine inactivity limit on page 188. Bearing in mind where this benchmark would be applied, answer the same questions.

    Continue to practice reviewing this benchmark and others of interest and see what kind of common items you see. Remember, it may not always be best for every organization to implement every item exactly as written.

  3. STIGs provided by the DoD can also provide this type of system hardening guidance but require some additional steps. They must be opened via a STIG Viewer tool as they are in an XCCDF format. Navigate to https://public.cyber.mil/stigs/srg-stig-tools/ and download the STIG Viewer compatible with your system and install it. If you get the MSI package, select More Info and Run Anyway to get the installation to work.
  4. Navigate to https://public.cyber.mil/stigs/downloads/. In the right-hand filter, choose Operating Systems, then select the plus sign and choose Windows. Find the Windows Server 2022 STIG and download it.
  5. These files come in a compressed format, so you will have to unzip them first. You will see a .xml file and several other files; the other files explain details about the STIG itself. Open the STIG Viewer you installed, click to open a STIG, navigate to the .xml file, and open it.
  6. Take some time to explore the interface. In the top left, next to STIG Rules, you will see a gear icon and a filter icon. Click the filter icon, which lets you search through the STIG, and input interactive logon. Choose V-254456 from the filtered list. Explore these details and compare them with what you saw in the CIS benchmark in step 2.

Continue practicing looking through a STIG and notice the differences in how information is presented when compared to CIS benchmarks.

Logging and Log Ingestion

Logging is an important aspect of system and security design. It can serve many functions from system troubleshooting to security monitoring. This part of the exam objectives specifically calls out log ingestion, time synchronization, and logging levels. Log ingestion centers around the collection and shipping of logs to a central location for further analysis. For example, on Linux systems, the syslog daemon can be configured to collect logs from the system and applications and then forward them to another central storage location. This analysis is commonly done through a security information and event management (SIEM) solution. You will learn more about SIEM solutions in Chapter 8, Tools and Techniques for Malicious Activity Analysis.
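
On Linux, this forwarding is normally configured in the syslog daemon itself (for example, rsyslog or syslog-ng), but an application can also ship its own logs. The following is a minimal Python sketch that sends a log message to a central syslog listener over UDP port 514; the collector hostname is a placeholder.

    # A minimal sketch of shipping application logs to a central collector over
    # the syslog protocol (UDP 514). The collector hostname is a placeholder.
    import logging
    import logging.handlers

    logger = logging.getLogger("app")
    logger.setLevel(logging.INFO)

    handler = logging.handlers.SysLogHandler(address=("logcollector.example.internal", 514))
    handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))
    logger.addHandler(handler)

    logger.info("User login succeeded for account jdoe")   # shipped to the central store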

Time Synchronization

Time synchronization is an essential part of logging to ensure consistency, accuracy, and reliability. Its importance increases as more systems and logs are integrated within an environment. Organizations use the Network Time Protocol (NTP) to provide a centralized time reference that devices use to stay synced together. This enhances the meaningfulness of logs and facilitates monitoring and alerting. It also enables event correlation, event analysis, root cause analysis, and forensic investigations. This concept is important not only for security but also for system troubleshooting and debugging.

For example, consider an organization that has just had a cyber-attack. Evidence of the attack was first noticed on a Windows server at 3:18 AM. The cyber analysts began to research this event by checking the IDSs and IPSs for alerts; they utilized the 3:18 AM time to start their analysis. They also started to review Windows client logs via their SIEM tool to further map out the attack and impact. All these machines would ideally be using NTP to maintain their time. This makes sure that accurate research and correlation can be completed, which can help the analysts map out the attack process and attribute other potential machines and evidence to the attack.
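
To spot-check whether a host's clock agrees with an NTP source, an analyst can query a time server directly. The following is a minimal Python sketch that reads the transmit timestamp from an NTP response and compares it to the local clock; the server name is a public pool used here only as an example.

    # A minimal sketch that queries an NTP server and compares its time to the
    # local clock; useful for spot-checking that a host is keeping accurate time.
    import socket
    import struct
    import time

    NTP_EPOCH_OFFSET = 2208988800          # seconds between 1900 (NTP) and 1970 (Unix) epochs

    packet = b"\x1b" + 47 * b"\0"          # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(packet, ("pool.ntp.org", 123))
        data, _ = s.recvfrom(512)

    transmit_secs = struct.unpack("!I", data[40:44])[0] - NTP_EPOCH_OFFSET
    print("NTP time  :", time.ctime(transmit_secs))
    print("Local time:", time.ctime())
    print("Offset (s):", abs(transmit_secs - time.time()))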

Logging Levels

Logging levels, also referred to as log severity levels, are predefined categories used to group and classify log messages. They allow easier configuration of logging based on the importance of log messages. These categories are hierarchical, with each level including the messages from the levels below it. For example, configuring logging at Level 4 also captures messages from Levels 0–3. In Figure 1.32, you can see how each logging level feeds into the next, increasing the amount of logging being done. The severity flows from Level 0, the most important or highest severity, to Level 6, the least important and lowest severity.

Figure 1.32: Logging levels

Here is a list of all the standard logging levels and the message types found at each level:

  • Level 0 (Emergency) is used for messages about catastrophic issues that may need emergency action.
  • Level 1 (Alert) is used for urgent messages.
  • Level 2 (Critical) is used for messages that require immediate attention and may cause immediate impact.
  • Level 3 (Error) is used for messages about failed operations that do not stop the application from functioning.
  • Level 4 (Warning/Warn) is used for messages that are not immediately important but need awareness for potential impact.
  • Level 5 (Information/Info) is used for general, normal operation events.
  • Level 6 (Debug) is used for diagnostics during development and debugging.

This concept is used directly in Cisco appliances when setting up logging. It is also found in Linux systems when configuring syslog, with the exclusion of Levels 0 and 1. Windows uses event severity levels, but they are not hierarchical and do not include the levels above them.

Since each level includes the one below, it may take trial and error to adequately configure a system. If expected messages are missing, it may require turning on additional levels. Another consideration is that each level causes more and more messages to be included, which can increase storage needs and potentially be overwhelming to review. It is best to only turn on what is minimally needed and only use Debug sparingly.
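
The hierarchy can be illustrated with a small filter: a message is kept only when its severity number is at or below the configured level, so raising the configured level pulls in everything above it. The following Python sketch uses the levels listed above purely for illustration.

    # A minimal sketch of hierarchical logging levels: a message is kept only if
    # its severity number is at or below the configured level, so raising the
    # configured level includes everything more severe as well.
    LEVELS = {
        0: "Emergency", 1: "Alert", 2: "Critical", 3: "Error",
        4: "Warning", 5: "Information", 6: "Debug",
    }

    CONFIGURED_LEVEL = 4                      # log Warning (4) and everything more severe (0-3)

    def log(severity: int, message: str) -> None:
        if severity <= CONFIGURED_LEVEL:      # lower number = higher severity
            print(f"[{LEVELS[severity]}] {message}")

    log(2, "Disk array failure detected")     # kept    (Critical)
    log(4, "Certificate expires in 10 days")  # kept    (Warning)
    log(6, "Entering parse_config()")         # dropped (Debug)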

Extra Logging Insights

Here are several additional logging best practices to keep in mind; they may or may not come up on the test but are good to be aware of:

  • It is important to define and implement logging policies and procedures, including retention needs, keeping in mind storage costs and regulatory requirements.
  • Logs are a cost, so only what is necessary, based on risk, should be logged and stored. This should be re-evaluated periodically as risk factors may change.
  • Logs should include enough information to be meaningful for analysis and review. It is important not only to store logs but also to be able to use them.
  • To ensure integrity, logs should be immutable and secure, offloaded from the creation point, and centrally stored for analysis.
  • Logging processes need periodic validation and monitoring to ensure expected content is present in logs, systems are generating logs as expected, and log shipping is occurring to central stores.

In this section, you analyzed the critical aspects of logging and log ingestion, including how time synchronization plays a vital role in accurate log analysis. You learned about different logging levels and their impact on the comprehensiveness and usefulness of log data. Additionally, you examined extra logging insights to enhance your ability to detect and respond to security incidents effectively. Moving forward, the next section will delve into network architecture, covering key concepts such as on-premises and cloud computing environments, hybrid models, and network segmentation, to provide a solid foundation for understanding and securing network infrastructures.

Network Architecture

A security practitioner needs to have a thorough comprehension of the architecture of their network. In this section, you will learn about three architecture designs: on-premises, cloud, and hybrid. Network segmentation and zero-trust concepts will also be covered. Finally, two cloud-based network solutions, secure access service edge (SASE) and software-defined networking (SDN), will be discussed. You may find one, several, or all of these concepts present in your network, and it is important to understand the security considerations and potential impact of each.

Base network access includes media access control (MAC) addresses, Internet Protocol (IP) addresses, and Address Resolution Protocol (ARP) messages. A MAC address is a unique identifier for every device that connects to a network, such as a network interface card (NIC). It is a 48-bit address represented with hexadecimal numbers, for example, 00:1A:2B:3C:4D:5E. A security concern is that these addresses can be changed or spoofed to impersonate devices. Every device connected to a computer network is also given a specific numerical identifier called an IP address. By specifying the source and destination of traffic on the internet or a local network, IP addresses let devices communicate with one another. ARP enables network communication by mapping IP addresses to MAC addresses. A device on a given network or subnetwork sends out a message asking, essentially, Hey, who has this IP address? and uses the replies to build an ARP table for future communications. These requests are dropped at the router level, keeping them present only on the local network.
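
The ARP table a host builds from these replies can be inspected directly. The following is a minimal Python sketch for Linux that reads the kernel's ARP table from /proc/net/arp (on Windows, the arp -a command shows the equivalent information).

    # A minimal sketch (Linux-only) that reads the kernel's ARP table to show the
    # IP-to-MAC mappings the host has learned from ARP replies.
    def read_arp_table(path="/proc/net/arp"):
        entries = []
        with open(path) as f:
            next(f)                                   # skip the header line
            for line in f:
                fields = line.split()
                ip_addr, mac_addr = fields[0], fields[3]
                entries.append((ip_addr, mac_addr))
        return entries

    for ip_addr, mac_addr in read_arp_table():
        print(f"{ip_addr} is at {mac_addr}")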

There are several different types of networks, and here, you will read about three: local area network (LAN), virtual local area network (VLAN), and wide area network (WAN). A LAN is a limited-size network, connecting devices such as within a home, office building, or campus. A VLAN is a logical segmentation of a LAN, existing on the same hardware rather than creating another physical LAN with additional hardware. This allows additional security and traffic flow control. A WAN is a collection of multiple LANs spanning a larger area, such as connecting multiple offices across wide distances, for example, in different states.

Two additional concepts assist in facilitating communication within these networks: Transmission Control Protocol (TCP) and Border Gateway Protocol (BGP). BGP is used to help connect LANs together to form the WAN. It assists in the routing process by maintaining a table of IP networks, such as other LANs, so that packets can be routed to the proper destination over the most effective path. TCP is one of the main protocols used for communication between network devices. It is connection-oriented with a three-way handshake allowing both sides to validate if they are connected and ready to communicate. This allows reliability in communication, as packets are sent, confirmed, and re-sent as necessary.
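
The handshake itself is handled by the operating system. The following minimal Python sketch opens a TCP connection; by the time create_connection() returns, the SYN, SYN-ACK, and ACK exchange has completed and both sides have agreed they are ready to exchange data reliably. The host and request are placeholders.

    # A minimal sketch of a TCP client connection. The three-way handshake is
    # performed by the OS inside create_connection(); TCP then retransmits any
    # lost segments for us, providing reliable delivery.
    import socket

    with socket.create_connection(("www.example.com", 80), timeout=5) as conn:
        conn.sendall(b"HEAD / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")
        response = conn.recv(1024)
        print(response.decode(errors="replace").splitlines()[0])   # e.g., HTTP/1.1 200 OK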

On-premises

On-premises (on-prem) network architecture is a traditional network design. This includes components such as cabling, routers, switches, and other security devices. On-prem networks are physical and possibly virtual assets contained on site within the organization. Today, many organizations may create their own on-prem virtualization setup through technologies such as ESXi from VMware, allowing the creation of organizationally controlled and maintained virtual assets. This includes some of the benefits of cloud provider virtualization, such as better resource utilization, but does not gain some of the cost benefits since the hardware is still maintained on-prem. Generally, on-prem networks carry higher costs and resource needs than other architectures, such as cloud computing and hybrid models. This is because an organization is responsible for all the maintenance of hardware including electricity, backup, and system support.

There are various security solutions available for on-prem networks. Some common solutions include the following:

  • Firewalls, such as next-generation firewalls (NGFWs), use an access control list (ACL) to help control traffic flow.
  • Network access control (NAC) enforces policies to control access to a network.
  • Intrusion detection systems (IDSs) are used to detect anomalies, and intrusion prevention systems (IPSs) are used to prevent attacks. These are also known as NIDS and NIPS, where the N stands for network, and their host-based counterparts are HIDS and HIPS, where the H stands for host.
  • Content filtering and caching devices, such as proxies, help regulate what data can reach protected devices.

Some devices combine several security functions onto one device, such as IDS/IPS, firewall, and content filtering, known as unified threat management (UTM) devices.

Cloud Computing

Cloud computing is a collection of networks and computing resources that are accessible over the internet. It has a shared responsibility design that can vary between providers and services. The main components of this design are information and data, application logic and code, identity and access, platform and resource configuration, and various other security items. Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure are a few examples of cloud service providers.

There are various cloud service models, including software as a service (SaaS), infrastructure as a service (IaaS), and platform as a service (PaaS). These models have varying levels of services and maintenance provided by the cloud vendor. SaaS is a service model that provides software in the cloud and makes it accessible to users over the internet. The cloud provider is responsible for the full management of everything related to the software, including the underlying hardware needs. An example of this is Google Workspace, which includes applications such as Gmail, Google Drive, Google Docs, and Google Sheets, all accessible over the internet. In the IaaS service model, the cloud provider maintains the hardware for the cloud and enables users to install, configure, and maintain the OS and applications on it. An example of this is Amazon EC2, which allows the user or client to create virtual servers on which an OS and applications can then be installed. In the PaaS model, the cloud provider manages the hardware and software environment in the cloud, allowing users to create and manage applications that run on it. An example is Azure App Services, a platform that allows developers to build, deploy, and manage applications without any need to maintain the backend infrastructure.

Some additional cloud concepts to be aware of are SDNs, content delivery networks (CDNs), and cloud access security brokers (CASBs). CDNs help to effectively deliver web content such as text, images, videos, and other resources to users. CDNs consist of a distributed network of servers that are positioned strategically throughout various geographic regions. By caching and delivering content from the closest server to the end user, they reduce latency and improve user experience while enhancing website performance, dependability, and speed. You will learn about SDN and CASB later in this chapter.

Several factors must be weighed when deciding between on-prem and cloud solutions. These include cost, control and customization, scalability, security, and compliance. The choice often comes down to a balance between risk and cost. Some main security features to review with the cloud are access, key management, storage, logging, monitoring, privacy, and compliance.

Hybrid Model

A hybrid model is a combination of on-prem and cloud options. This model allows an organization to maintain greater control over sensitive data and critical applications while still gaining some of the cost and scalability benefits of the cloud. Hybrid models are most often used when organizations are migrating their data and operations to the cloud, serving as an intermediate state between fully on-prem and fully cloud-based. For example, a company might keep its sensitive customer data on-prem to comply with regulatory requirements while using the cloud for scalable web hosting and storage. Some use cases benefit from this model, such as backup and recovery, where critical data is stored both on-site and in the cloud to ensure redundancy and fast recovery times, and seasonal workloads, where cloud resources can be leveraged to handle peak demand periods without permanent infrastructure investment. Additionally, hybrid models are beneficial for application development and testing, allowing developers to test new applications in a cloud environment while keeping production workloads on-prem for stability and control.

Other Cloud Models

Apart from the hybrid model, there are several other cloud deployment models, including public, private, and community models. In the public cloud model, the cloud service provider hosts and manages a shared pool of computer resources, including applications and storage. This is one of the most well-known models, providing the greatest flexibility at the lowest cost. It is offered by the major providers: AWS, GCP, and Azure. It can be an ideal option for start-ups and small businesses, allowing them to scale quickly, develop and deploy rapidly, and keep costs lower and more manageable.

The community cloud model is like the public model in that it uses a shared pool of resources, but they are restricted to specific groups such as those with similar security, compliance, or performance requirements. This can help them meet certain regulatory requirements by having a cloud environment specifically designed for the requirements of their business sector, such as for healthcare organizations. Another specific example of this from a cloud provider is GovCloud, found with AWS.

The private cloud model includes more isolation and dedicated resources for a specific client. It includes the concept of a virtual private cloud (VPC) that would exist on an isolated subnet and have additional measures to further isolate client data and network traffic from other clients. This isolation can provide dedicated resources, help enhance security, improve privacy, and meet regulatory requirements. It helps to further strengthen the cloud design by preventing cloud clients from impacting each other in any way. This cloud model can be ideal for larger enterprises that have bigger budgets and stricter regulatory controls. These entities want to maintain control over their data. Government agencies also use offerings such as AWS GovCloud to help ensure data sovereignty, security, and compliance with local laws and regulations. The choice of which model to use should be based on a thorough review of costs, risks, and threats from a risk-based approach.

Network Segmentation

Segmentation is a key concept for security. Segmenting an organization’s infrastructure helps in several ways. It reduces the impact of any issues, whether security-related or operational. It also reduces the attack surface by securing segments from one another and reducing the exposure of systems, forcing attackers to compromise multiple segments to gain greater control over the organization. Some segments can have extra security capabilities deployed to further secure them, making cost investments more efficient. Segmentation can also help reduce the scope of audits and compliance efforts.

Physical segmentation can be accomplished by air-gapping systems and networks. This would mean no physical or virtual connections would be established between the segments. Physical segmentation increases security but also the complexity of administration. However, it does not fully prevent attacks as there are several other ways to attack these setups, such as supply chain attacks, infected USB keys, and more. Segmentation can also be done at a virtual level such as by running VMs that are not connected to each other, running containers, or even using separate, unconnected, physical machines to run the VMs.

Without segmentation, network devices can experience latency due to congestion. Segmenting the network reduces the traffic to only what is necessary for specific subsets of machines, increasing the overall efficiency and speed of network communication. This level of segmentation can be accomplished simply with the use of routers, switches, and subnetting.

Network segmentation is most often accomplished using firewalls. With the use of ACLs, traffic flow can be restricted. This only allows traffic to cross from one segment to the other when explicitly allowed but prevents the crossing otherwise. Security solutions for network segmentation also include NGFWs that offer additional security functions such as intrusion detection and prevention systems (IDPSs), application awareness and control, SSL/TLS decryption and inspection, user and identity-based controls, and advanced threat intelligence.

A combination of firewalls, routers, switches, and subnets can be used to help segment with operational and security benefits in mind. This is often done with VLAN tagging, allowing further control of data flow between different segments.
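
Conceptually, an ACL separating two segments is just an ordered rule list with an implicit deny at the end: traffic crosses only when a rule explicitly permits it. The following Python sketch models that evaluation; the networks, ports, and rules are illustrative only.

    # A minimal sketch of how a firewall ACL separating two segments might be
    # evaluated: rules are checked in order, and anything unmatched is dropped
    # (implicit deny). Addresses and ports are examples only.
    from ipaddress import ip_address, ip_network

    ACL = [
        # (source network, destination network, destination port, action)
        (ip_network("10.1.0.0/24"), ip_network("10.2.0.10/32"), 22, "allow"),   # SSH to the admin host only
        (ip_network("10.1.0.0/24"), ip_network("10.2.0.0/24"), None, "deny"),   # everything else
    ]

    def evaluate(src, dst, port):
        for src_net, dst_net, rule_port, action in ACL:
            if ip_address(src) in src_net and ip_address(dst) in dst_net \
                    and (rule_port is None or rule_port == port):
                return action
        return "deny"                                  # implicit deny at the end of the list

    print(evaluate("10.1.0.5", "10.2.0.10", 22))       # allow - explicitly permitted
    print(evaluate("10.1.0.5", "10.2.0.20", 445))      # deny  - blocked segment crossing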

To ensure a secure design, these segments must have secure access methods. This access is often done via a jump box or virtual private network (VPN) connection. A jump box’s specific function is to exist between segments with connections to those segments. A user would first connect to the jump box and then access the resources as necessary on the connected segments. Due to their connection setup, these boxes need to be highly secure, maintained, and monitored.

VPNs facilitate the secure connection of remote users or branch offices to a corporate network by establishing a secure and encrypted tunnel over the public internet. Users can access network resources through this tunnel as if they were physically on the company’s LAN. More advanced capabilities, including micro-segmentation, further divide traffic into subgroups based on parameters such as user roles, device types, or applications.

Figure 1.33 shows an example of a segmented network. The user has a VPN client used to connect to the internal VPN server, with traffic flowing through the firewall. Segment 1 and Segment 2 are divided by a device not depicted in the figure; this could be another firewall, an NGFW, or a router. To access Segment 2, a user in Segment 1 must connect to the jump box, which then allows access to Segment 2 devices.

Figure 1.33: Simple segmented network

Cloud environments also have capabilities to facilitate network segmentation. They can utilize the concept of VPCs, as discussed in the Other Cloud Models section of this chapter. This concept is furthered by using subnets and separate VPCs to isolate devices and traffic. Traffic to VPCs and the devices within them can be further controlled with ACLs and network ACLs (NACLs). Cloud environments also commonly use jump boxes to facilitate communication with, and often administration of, these segments.

Zero Trust

Zero trust is a modern security principle that emphasizes a “never trust, always verify” mindset. Every user and every device accessing a network must be verified, regardless of previous permissions. It has a main premise that threats can come from both inside and outside the network. This verification ensures better security and reduced risk. For example, an employee working remotely must authenticate through multiple layers, such as MFA and device health checks, before gaining access to internal systems. This ensures that even if a device is compromised or credentials are leaked, the system remains secure as it enforces strict verification protocols for every access attempt.

Zero trust network access (ZTNA) is a streamlined application of the zero trust principle. It requires authentication at every access point for both external and internal connections. To have greater value, it relies on micro-segmentation, segmenting the network at the application and workload level. With ZTNA, lateral movement threats are significantly reduced and attacks can be more contained. It makes authentication and authorization identity-centric, using unique identities, roles, and permissions before granting access to resources. It allows trust to be dynamic and continuously verified, based on specific parameters and context-aware policies.

Some of the main advantages of ZTNA are the following:

  • Enhanced security – Greater enforcement of the least privilege principle
  • Reduced insider threats – Limited impact potential and reduced lateral movement capabilities
  • Remote work enablement – Enhanced identities for authentication and authorization
  • Compliance with regulations – Some compliance frameworks are starting to require zero trust

Consider a scenario where a company has implemented ZTNA to manage remote access, requiring users to authenticate at each access point. To limit lateral movement, the company uses micro-segmentation to isolate applications and workloads. A developer accessing the development environment must pass role-specific authentication checks and is restricted from accessing other parts of the network. This approach reduces the risk of a compromised account spreading to other systems, enhances security, and supports remote work by continuously verifying user identities and permissions.
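
At its core, this kind of decision can be modeled as a policy check evaluated on every request, combining identity, role, device health, and the micro-segment being asked for. The following Python sketch is an illustration of the idea, not a ZTNA product implementation; the roles and segments are made up.

    # A minimal sketch of a context-aware, ZTNA-style access decision: every
    # request is evaluated against identity, role, device health, and the
    # micro-segment being requested; nothing is trusted by default.
    def grant_access(user_role, mfa_passed, device_healthy, requested_segment):
        allowed_segments = {
            "developer": {"dev-environment"},
            "dba": {"database-segment"},
        }
        if not (mfa_passed and device_healthy):
            return False                               # verify every request, every time
        return requested_segment in allowed_segments.get(user_role, set())

    print(grant_access("developer", True, True, "dev-environment"))    # True
    print(grant_access("developer", True, True, "database-segment"))   # False - lateral move blocked
    print(grant_access("developer", True, False, "dev-environment"))   # False - unhealthy device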

ZTNA can be complex and costly to implement and is also resource intensive. It has a strong dependency on connectivity, and instability can impact access to critical resources.

SASE

SASE, short for secure access service edge, is a framework that combines security and networking functionalities into a unified, cloud-native solution. It provides secure access solutions for an organization’s WAN and also helps protect the network edge.

Some key components include the following:

  • Software-defined wide-area networking (SD-WAN)
  • CASBs
  • Secure web gateways (SWGs)
  • Firewall as a service (FWaaS)
  • ZTNA

You will review CASB and SD-WAN later in this chapter. An SWG can be a cloud service or a network security appliance that enforces security policies for web usage and protects a network from internet-based threats. It does this by intercepting and inspecting web traffic for malicious identifiers. It can integrate with other security solutions, such as firewalls, to further enhance capabilities on both sides. Zscaler, Cisco Umbrella, and McAfee Web Gateway are some examples of SWG vendors. Some use case examples include the following:

  • A financial institution uses SWG to filter web traffic and block access to malicious websites that could compromise sensitive data
  • A technology company utilizes a cloud-based SWG to secure the browsing activities of remote employees

SASE shares some of the same benefits as ZTNA, such as enhanced security and remote work enablement, as well as cost efficiency, simplified security architecture, and scalability. Using cloud solutions can reduce hardware and operating costs. It combines multiple security solutions into a single platform. Being cloud-native allows simpler scalability.

SASE is not without potential cons. The initial transition and integration of on-prem solutions with cloud solutions can be complex. There are potential privacy and compliance issues when storing and processing sensitive data in the cloud, and SASE can introduce latency depending on the overall network design. There is also an availability risk due to the heavy dependency on cloud vendor solutions.

SDN

SDN is a method that divides the control plane (which determines where traffic is routed) from the data plane (which forwards traffic) in networking devices. Using software programs, SDN enables network managers to control and manage network resources, enhancing networks’ flexibility, programmability, and responsiveness to changing requirements. It uses application programming interfaces (APIs) and standard protocols, such as OpenFlow, to facilitate this control via software programs. Some examples of this software are OpenDaylight, Cisco ACI, and VMware NSX.

Since this control is done over APIs, it is important to ensure that these APIs are designed, implemented, and managed securely. If they are breached, they can allow an attacker access to alter the network, causing outages, moving laterally, or gaining additional privileges.

SDN is also used for WANs (SD-WAN), as referenced in the previous section. In these cases, outside vendors utilize the SDN model to facilitate connectivity between sites. While these configurations typically include encryption, there are other security factors to consider, such as SDN software flaws, a lack of direct organizational control, and availability and integrity issues when data transits different network channels.

This section presented various network architecture models, including on-prem, cloud computing, and hybrid configurations. You learned about the benefits and challenges of each model, with an emphasis on how hybrid setups can offer a balance between control and flexibility. Network segmentation was highlighted as a key practice for enhancing security, and the zero-trust model, with ZTNA, was discussed for its rigorous approach to verifying every access request. Additionally, you explored SASE for integrated security and networking, and SDN for improved network management and agility. Next, you will examine IAM, focusing on the strategies and technologies used to manage user identities and control access to resources.

IAM

The authentication, authorization, and accounting (AAA) framework is used to manage and control access to resources and services. These are the three primary components of access management. They play a crucial role in defining and implementing access control models, such as mandatory access control (MAC), discretionary access control (DAC), and role-based access control (RBAC). MAC is where access rights and permissions are assigned based on the security classifications and labels associated with both users and resources. DAC is where access rights and permissions are at the discretion of the resource owner, allowing users to control access to their own resources. RBAC is where access rights and permissions are assigned based on roles within an organization or system rather than individual user identities.

Note

Identities, subjects, and directories are key terms used to further understand AAA. Identities are distinct representations of individuals, devices, or other system entities. Subjects are active entities, such as users, that request access to resources or services. A directory is a repository that stores and manages identity-related information, including user accounts, credentials, group memberships, and access permissions.

Authentication is the process of verifying an entity’s identity, such as a user’s, when they want to access a system or resource. Authorization is the process of determining what actions or resources an authenticated entity can access within a system or network. Accounting entails monitoring and recording actions and events related to resource utilization, access, and other security-related operations carried out by entities within a system. The management of this process and its pieces is known as privilege management. All of this is included under the umbrella of IAM. IAM is a framework of policies and technologies used to manage and secure digital identities and control access to resources within an organization. It ensures that only authorized users can access specific systems, applications, and data, based on their roles and permissions. IAM is critical for protecting sensitive information, maintaining compliance with regulations, and preventing unauthorized access and security breaches. By effectively managing user identities and access rights, IAM helps organizations safeguard their IT environments and ensure operational efficiency.

In this section, you will learn about some IAM concepts, as specified by the CySA+ exam objectives, including MFA, single sign-on, passwordless authentication, federation, privileged access management, and CASBs.

MFA

MFA is one of the most important elements of IAM. MFA uses multiple forms of verification to strengthen the security of the authentication process. It is key to understand that using multiple passwords or multiple tokens on their own would not meet MFA requirements, as this would only be using one form (something you know or something you have) rather than multiple forms. These factors include something you know, something you have, something you are, something you do, and somewhere you are. The last of these is used infrequently, but the others are main components of most modern MFA setups.

Something you know includes things such as passwords, passphrases, PINs, or any other combination of data that only you are expected to know (non-public information). Something you have includes things such as smart cards, one-time passcodes (OTPs), and tokens. Something you are deals with biometrics, using unique characteristics of the human body, such as fingerprints, facial recognition, voice recognition, and eye or retina recognition. Something you do includes things such as keyboard and mouse dynamics recognition.

Somewhere you are factors include location-based identifiers, such as those derived from GPS. Figure 1.34 shows a physical RSA SecurID token; RSA is a popular vendor for these devices. The displayed number automatically changes based on the programmed algorithm. The right side of the figure shows an authenticator mobile application that serves the same function but does not use a dedicated device.

Figure 1.34: RSA SecurID physical token and Google Authenticator mobile app

Generally, the more factors you include in your authentication scheme, the more secure you make it. The caveat is that this will also make it more complex and cumbersome, often leading to more failed authentication attempts by legitimate users.

Most major websites and organizations today require or encourage at least two-factor authentication (2FA), with tokens or OTPs via SMS being the most common second factor. This heightens security: passwords can be phished, guessed, or compromised in various ways, but it is far less likely that an attacker also possesses the second factor needed to complete authentication. Having at least two factors prevents an attacker from simply using stolen credentials to gain access. However, these systems are not unbreachable; there are direct attacks against token systems, tokens and phones can be lost, and insecure communication channels can expose this information to attackers as well.
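
The rotating codes produced by tokens and authenticator apps (the something you have factor) are typically time-based one-time passwords (TOTPs). The following is a minimal Python sketch of the standard TOTP derivation; the shared secret is a made-up example, as real secrets are provisioned during enrollment.

    # A minimal sketch of how an authenticator app derives the rotating six-digit
    # code from a shared secret and the current time (RFC 6238 / RFC 4226).
    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32, interval=30, digits=6):
        key = base64.b32decode(secret_b32)
        counter = int(time.time()) // interval                      # time-based counter
        msg = struct.pack(">Q", counter)
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                                  # dynamic truncation
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))     # example secret; prints the current one-time code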

Single Sign-On

Single sign-on (SSO) is a centralized authentication process that enables users to sign in once to access several linked systems or apps. By handling the authentication and securely transferring the user’s credentials to the various services, the SSO system serves as a middleman. It is commonly implemented in most organizations today and is also used by web applications. Figure 1.35 shows a simple SSO process.

Figure 1.35: Simple SSO process

It begins with the user requesting access to App 1. App 1 requests the user to provide a token. The user is redirected to request a token from the credential server. They must use their credentials to authenticate and authorize with the credential server. The user is then issued a token that will be valid for a set period. This token is then sent back to App 1 for access. App 1 verifies the token with the credential server, which approves the token and allows the user to access App 1. This is where the user benefit comes in, with a single sign-on. While the token is valid, the user requests access to App 2 and provides the token. App 2 follows the same process to verify the token and allow access, but the user does not have to enter their credentials again to access App 2. Depending on the backend setup, the applications may be preconfigured with elements they can verify about a token, removing the requirement to verify the provided user token with the credential server.
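
The token idea at the heart of this flow can be illustrated with a signed, time-limited value: the credential server signs it once, and each application verifies the signature instead of asking for credentials again. The following Python sketch uses a simple HMAC-signed token purely for illustration; real SSO deployments use standards such as SAML assertions or OpenID Connect tokens, and the key here is a placeholder.

    # A minimal sketch of the SSO token idea: the credential server issues a
    # signed, short-lived token, and each application verifies the signature and
    # expiry instead of prompting for credentials again.
    import hashlib
    import hmac
    import time

    SIGNING_KEY = b"shared-verification-key"           # placeholder secret

    def issue_token(username, lifetime=3600):
        expiry = int(time.time()) + lifetime
        payload = f"{username}|{expiry}"
        signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return f"{payload}|{signature}"

    def verify_token(token):
        payload, _, signature = token.rpartition("|")
        expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
        _username, expiry = payload.split("|")
        return hmac.compare_digest(signature, expected) and time.time() < int(expiry)

    token = issue_token("jdoe")                        # issued once by the credential server
    print(verify_token(token))                         # App 1 and App 2 both accept it: True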

Lightweight Directory Access Protocol Secure (LDAPS) is a network protocol used to support SSO systems. It facilitates access and queries to directory services, particularly directory servers such as Active Directory, in a secure and encrypted manner. The S (Secure) in LDAPS is an enhancement of LDAP to include SSL/TLS for greater security and encrypted communication. Active Directory is an additional common element in Windows-based SSO systems, as it is a main directory of users, passwords, permissions, roles, and other details.

SSO setups produce several benefits. The user experience is improved because they only have to enter their credentials once and set and remember one username and password. Security is improved as it reduces the problem of user passwords being reused across multiple systems and sites. It also helps reduce complexity for access administrators and help desks as they will maintain smaller databases of credentials. This can help reduce support costs as well.

However, SSO is not without risks. If an attacker can gain credentials, they will receive access to multiple resources with the same credentials. Also, if an attacker can gain access to an active session that is already authenticated, they will gain additional access to other SSO-enabled systems as well. Additional controls to help reduce this risk include periodic re-authentication and usage of MFA for critical systems. MFA would still require the user to supply an additional factor, even if their username and password may not require user input, before granting access to these systems.

Federation

Federation is a specifically listed CompTIA CySA+ exam objective in the IAM section and it deals with the concept of sharing a federated identity, which is a collection of linked identity attributes, between trusted entities. It uses the concepts of shared authentication, much like SSO, and a central authentication service (CAS). Users authenticate once and gain access to or share information with systems that are not the direct authorizer. Today’s online landscape often uses this concept, and you may have seen it at sites such as Google, Microsoft, LinkedIn, Facebook, and others.

Figure 1.36 shows the ChatGPT login screen. Here, a user has options to log in with their already existing Google, Microsoft, or Apple account, without having to create a new account with ChatGPT directly.

Figure 1.36: ChatGPT federated login options

Federated Identity System Design

One of the most important concepts of federated identity systems is that they move trust beyond the boundaries of your organization. This makes a risk-based design approach even more critical when designing, implementing, and securing these systems. An organization needs to trust these outside parties and potentially even review their security practices to help provide further assurances. There are a few main components to consider:

  • Identity provider (IDP)
  • Service provider (SP)
  • Consumer

An IDP provides identity management and authentication. They store, verify, and supply information to the other parts of a federation. This can be provided for one or more trusted entities. Their role requires secure storage and transmission of all identity details. It also requires secure mechanisms to provide authentication. Okta is an example of an IDP that offers cloud-based identity management and authentication services.

An SP, sometimes also known as a relying party (RP), provides services, resources, or applications to users. It relies on the IDP for identity management and authentication. It is important for this entity to securely handle these communications between itself, the consumer, and the IDP. Salesforce is an example of an SP that relies on an IDP such as Okta to facilitate user authentication and identity management.

A consumer relies on the SP to provide access to services. They are the ones who initially request access. They are responsible for accepting attribute release requests, providing requested information to the IDP, and validating information that the IDP has stored. Again, all communication between the consumer and SP needs to be done over secure channels. An employee user, such as for Salesforce, is an example of a consumer.

Figure 1.37 depicts the process for the consumer first requesting access to a service, then being redirected to the IDP to input credentials and verify their identity. A token is granted when successful, which is provided to the SP for access to the service.

Figure 1.37: Federated identity high-level access process

There are generally two levels of trust in federation systems. In the first, IDPs do not directly verify real-world identities; they try to ensure that the correct account owner is authenticating, not necessarily who the account owner specifically is. In the second, they perform additional identity verification steps, such as checking government IDs, additional private information, or even video calls, to verify and confirm actual identities. An example of this is the ID.me service. This service is used by many governmental websites and agencies, such as InfraGard, the IRS, the Department of Labor, and the Social Security Administration. It requires a user to verify their identity with government IDs, facial biometrics, and sometimes video conference calls. After verification, a user’s information and identity are certified as true and can be shared with numerous RPs. It is important to consider which of these two types suits an organization’s needs, what level of internal access to grant to the federated identity, and what level of monitoring to apply to these account types.

Next, you must plan for how internal accounts are created, based on the federated identities. Generally, federated systems operate most efficiently when accounts are automatically provisioned, as this lowers administrative load overall and reduces any delays in user access. However, this does raise additional security concerns around trusting these accounts without any prior internal verification.

Once you decide on an IDP and RP, you must understand what additional related technologies they use to properly plan applicable security controls. Some of the common technologies are Security Assertion Markup Language (SAML), Active Directory Federated Services (AD FS), OAuth, and OpenID Connect. These are also important if you are planning your own internal federated system implementation.

All these factors should inform your security designs for implementation and usage of the federated system. You may require extra levels of detail and assurance from the external parties to further trust them. You may also decide not to work with certain IDPs, such as Google, as their model requires much less assurance of a user’s real identity.

News of federated identity-related hacks is becoming more and more common. A breach of one member of a federation can lead to breaches of other members. This reinforces the imperative to ensure secure designs that protect your organization from the impact of a breach elsewhere in the federation.

Federated Identity System Technologies

SAML, AD FS, OAuth, and OpenID Connect are currently the major technologies in use by most federation systems. They provide specific functionalities and the ability for IDPs to connect with SPs securely without specific knowledge of an SP’s services or usage of the identity. Here, you will learn more details about each, including how they function at a higher level, how they are used, and common security issues.

SAML

SAML is an XML-based standard used for exchanging authentication and authorization information between an IDP and an SP. It is a common standard for implementing SSO for web-based applications and services, providing both authentication and authorization. It is commonly utilized in Linux environments but is OS-independent. SAML uses assertions, which are statements about a subject (usually a user) that the IDP provides to the SP after successful authentication; they can contain information such as user identity, attributes, and permissions. Message confidentiality and protocol usage are some of the main security considerations.

AD FS, OAuth, and OpenID Connect

AD FS, OAuth, and OpenID Connect are additional common federated system technologies. AD FS is Microsoft’s federated identity system, implementing Active Directory solutions in a federated manner. Like SAML, it provides both authentication and authorization. It uses a security token service that generates SAML tokens for successful authentication. Within these tokens are claims, which are details about the user, role, and attributes. These claims are further used to make access control decisions. Its most common use is to integrate Windows on-prem AD with cloud-based Microsoft services. Due to its token usage, token-based attacks are a security concern.

OAuth, currently at version 2.0, is a protocol created by the Internet Engineering Task Force (IETF). It is designed to support access authorization for embedded and mobile technologies within federated systems. It connects HTTP-based services with third-party applications, often using APIs. Related key terms include clients (the applications being used), resource owners (end users), resource servers (servers the application uses), and authorization servers (servers under the IDP). Unlike the other federation technologies, it provides only authorization. This authorization functionality is used by federation members to request the sharing of user details.
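
The first step of the flow that leads to consent screens like those shown in the following figures is the client building an authorization request. The following is a minimal Python sketch of that request for the authorization code grant; the endpoint, client ID, redirect URI, and scope are placeholders, not real LinkedIn or Credly values.

    # A minimal sketch of building an OAuth 2.0 authorization request URL.
    # All endpoint and client values here are placeholders.
    from urllib.parse import urlencode
    import secrets

    AUTHORIZATION_ENDPOINT = "https://idp.example.com/oauth2/authorize"   # placeholder

    params = {
        "response_type": "code",                   # authorization code grant
        "client_id": "developer-rest-console",     # placeholder client
        "redirect_uri": "https://client.example.com/callback",
        "scope": "r_basicprofile",                 # what the client is asking to read
        "state": secrets.token_urlsafe(16),        # CSRF protection for the redirect
    }

    print(f"{AUTHORIZATION_ENDPOINT}?{urlencode(params)}")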

Figure 1.38 shows an example screen of the process for a user’s details-sharing request. The Developer REST Console requests access and gathers information from a user’s LinkedIn profile details via OAuth. If the user inputs the correct authentication details and selects Allow access, the data will then be shared as requested to the Developer REST Console.

Figure 1.38: Example OAuth authorization request screen

Figure 1.39 shows an approved OAuth authorization request for Credly, sharing the name and photo. This is an example screen seen after successful authentication and authorization have occurred. It provides further specifics regarding when Credly was granted access and what it has access to. It also provides a way to remove this access, which would require re-authentication and authorization if access was needed and requested again.

Figure 1.39: Example approved OAuth authorization request

Much like with other federation technologies, message confidentiality is a security concern, as are redirect manipulation of messages and impersonation of resource servers or authorization servers.

OpenID Connect provides only authentication. Due to this, it is often used alongside OAuth, which would then provide the authorization. It adds functionality to the authorization server in the form of an ID token generated through successful authentication. It shares some of the same security concerns as OAuth, such as redirect manipulation and message confidentiality, but also has authentication-based concerns, including replay attacks, CSRF/XSS attacks, and phishing attacks.

Privileged Access Management

Privileged access management (PAM) is a solution framework that helps prevent unauthorized access to privileged accounts, such as administrator, system, or other elevated access accounts. It helps to implement additional IAM concepts such as least privilege access, password rotation, session monitoring, and access approval, among others.

PAM generally works by having a user authenticate to a central portal or client. They can then access other devices and resources via sessions initiated and monitored by the PAM client. Another function may allow them to check out passwords, which can require approval from other parties before accounts can be accessed. Once users are done, they check the password back in, which typically triggers an automatic rotation. Another common setup is automatic password check-in after a set period, which ensures that rotation still occurs even if the user forgets to check the password back in manually. A common implementation example involves service or administrator accounts. These highly privileged accounts require extra security; PAM allows them to be shared while facilitating checkout approval and password auto-rotation, which occurs when the account is checked back in or after the automatic check-in period elapses. This process helps protect the usage of these accounts and increases the associated password strength through automatic rotation and assignment.
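
The check-out/check-in cycle can be sketched as follows. This minimal Python example models only the rotation logic described above; the account name and checkout window are hypothetical, and a real PAM product adds approval workflows, session monitoring, and secure storage.

    # A minimal sketch of the PAM check-out/check-in flow: the privileged
    # password is released only while checked out and is rotated automatically
    # when checked back in (or when the checkout window expires).
    import secrets
    import string
    import time

    class PrivilegedAccount:
        def __init__(self, name, max_checkout_seconds=3600):
            self.name = name
            self.password = self._rotate()
            self.checked_out_at = None
            self.max_checkout_seconds = max_checkout_seconds

        def _rotate(self):
            alphabet = string.ascii_letters + string.digits
            return "".join(secrets.choice(alphabet) for _ in range(24))

        def check_out(self):
            self.checked_out_at = time.time()
            return self.password                      # released to the approved user

        def check_in(self):
            self.checked_out_at = None
            self.password = self._rotate()            # auto-rotate on check-in

        def enforce_auto_check_in(self):
            if self.checked_out_at and time.time() - self.checked_out_at > self.max_checkout_seconds:
                self.check_in()                       # forgotten checkouts still rotate

    svc = PrivilegedAccount("svc-backup")             # hypothetical service account
    temp_password = svc.check_out()
    svc.check_in()                                    # the password is now different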

PAM enhances security through isolation and segmentation, granular access control, session monitoring, and workflow automation. The password rotation feature is a huge benefit as it allows much more frequent rotation to reduce the risk of compromised password use. It does have potential cons around cost, complexity, potential single points of failure, and user experience. It is important to carefully plan and design PAM implementations to reduce the impact of these cons.

Passwordless Authentication

Passwordless authentication allows authentication without the use of a traditional password. It uses one of the other factors, as discussed in the MFA section. Items such as tokens or biometrics serve as the single factor for authentication. These schemes are more secure than a standard password but generally less secure than 2FA or stronger schemes. Because only one factor is used, they generally result in a better, less complex user experience. Today, many mobile applications allow for passwordless authentication. This occurs by first requesting enrollment of stored biometrics from the user’s mobile device, a process that includes authenticating via standard password methods. After enrollment is successful, future access can be facilitated through biometrics alone, no longer requiring the use of a password. Passwords are still a part of this overall process, as many of these applications also have web portals that still require and use standard passwords.

CASB

A CASB is a software tool designed to provide several security functions and benefits for cloud environments. It can exist on-prem or fully in the cloud. It assists with enforcing security policies by providing functions such as monitoring, data loss prevention, access controls, threat protection, and encryption and tokenization. CASB systems need to be carefully designed, implemented, and maintained due to these broad capabilities.

In this section, you delved into the various aspects of IAM and its essential role in securing systems and data. You learned about MFA and how it enhances security by requiring multiple forms of verification. SSO was discussed for its convenience in allowing users to access multiple systems with a single login. Federation and its components, including federated identity system design and technologies, were examined to understand how organizations manage authentication across different domains. You also explored PAM for controlling and monitoring access to critical systems and passwordless authentication as a modern approach to user verification. CASB was reviewed for its role in managing and securing cloud-based services. Next, you will explore encryption and data protection, focusing on methods and technologies used to secure data both at rest and in transit, and ensuring compliance with data protection regulations.

Encryption and Data Protection

Hashing and encryption are essential elements in many layers of security. They help to protect the confidentiality and integrity of networks, hosts, and data. Their importance is underscored by the fact that most security controls include them as built-in, established features.

As you continue to develop secure system designs, you need to understand where and how encryption and hashing are used. Data should, whenever possible, be protected at rest and in transit, typically with encryption and hashing. It is often easy to protect data in one of these states, such as encrypting it while it is stored. However, additional scrutiny may reveal gaps, such as the data being encrypted during storage but lacking protection when transmitted over the internet. An example of this is the use of cloud storage services. A company may store sensitive customer data encrypted at rest on the cloud provider’s servers, ensuring security and privacy while it is stored. However, if the company does not protect the data while it is being transmitted, such as when it is uploaded or downloaded, an attacker may be able to intercept and read this sensitive private data in plaintext. This underscores the need for comprehensive security measures that cover all stages of data handling. In this section, you will learn about the CySA+ objectives related to these principles, such as public key infrastructure, Secure Sockets Layer, and data loss prevention. A review of two important data types, personally identifiable information and cardholder data, is also included.

Public Key Infrastructure

The public key infrastructure (PKI) encompasses a collection of protocols, rules, and procedures designed to establish secure communication using asymmetric cryptography. It facilitates identity verification and offers confidentiality, integrity, and authentication. Some example uses include digital signatures, encryption, and user or device authentication.

The PKI is made up of several parts, including a certificate authority (CA), a registration authority (RA), public and private keys, a key directory, digital certificates, and certificate revocation lists (CRLs). A CA is a trusted entity that verifies identities and then issues certificates; it also revokes certificates and shares CRLs with subscribed entities. An RA works with the CA to help facilitate the identity verification process.

Figure 1.40 is a visual depiction of the high-level PKI certificate request process, including the four main steps. A key directory is used to store PKI-related elements, such as private keys and issued digital certificates.

Figure 1.40: PKI certificate request process

Digital certificates are issued from the CA and contain public keys. A CRL is published by a CA and shared with subscribed entities, listing which certificates are revoked and no longer valid, so systems will no longer trust them. It is not uncommon to find organizations running their own internal PKIs to establish trust between internal systems.
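
As a rough illustration of the request side of this process, the sketch below generates a key pair and a certificate signing request (CSR) using the third-party cryptography package; the subject names are hypothetical, and the resulting CSR would be submitted to the RA/CA for verification and issuance.

    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # The requester generates an asymmetric key pair; the private key never leaves the requester.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # The public key and identity details are bundled into a CSR and signed with the private key.
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COMMON_NAME, "app01.example.internal"),   # hypothetical
            x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Corp"),       # hypothetical
        ]))
        .sign(private_key, hashes.SHA256())
    )

    # The PEM-encoded CSR is what gets submitted to the RA/CA.
    print(csr.public_bytes(serialization.Encoding.PEM).decode())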

Note

The CompTIA CySA+ exam may ask more targeted questions about the asymmetric processes underlying PKI. It may also compare them with symmetric encryption options.

Secure Sockets Layer

Secure Sockets Layer (SSL) is a cryptographic protocol used to create secure, encrypted connections over the internet. It has been superseded by Transport Layer Security (TLS), although it is still common to hear TLS referred to as SSL. All SSL versions, and TLS versions before 1.3, are considered insecure due to known flaws. SSL 1.0 was never publicly released because it had significant security issues. SSL 2.0 had flaws such as downgrade attacks, weak cipher suites, no message integrity checks, and no support for modern cryptographic algorithms. SSL 3.0 also has weak cipher suites and no forward secrecy, and it is vulnerable to POODLE attacks. TLS 1.0 supported weak cipher suites and was vulnerable to padding oracle attacks such as the BEAST attack. TLS 1.1 also supported weak cipher suites and still did not fully address the padding oracle vulnerabilities that attacks such as POODLE and BEAST exploited. TLS 1.2 supported RC4, which has since been found to be insecure, did not require forward secrecy by default, and still has the potential for downgrade attacks.
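
One practical takeaway is to refuse the older protocol versions outright. The sketch below uses Python's standard ssl module to build a client context that only negotiates TLS 1.2 or 1.3; the hostname is illustrative.

    import socket
    import ssl

    context = ssl.create_default_context()              # sane defaults, certificate validation on
    context.minimum_version = ssl.TLSVersion.TLSv1_2    # refuse SSL 3.0, TLS 1.0, and TLS 1.1

    hostname = "www.example.com"                         # illustrative host
    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print(tls.version())    # e.g., 'TLSv1.3' - the negotiated protocol version
            print(tls.cipher())     # the negotiated cipher suite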

Since SSL/TLS encrypts data in transit, specialized tools and devices are necessary to monitor it and enforce security policies. These solutions use SSL decryption or SSL inspection to facilitate these functions. This is often done with a proxy that decrypts the channel, performs its own evaluation or monitoring of the traffic, and then re-encrypts it before sending it on to the next hop. It may also send a copy of the decrypted data to other security solutions, such as an IDS, IPS, or data loss prevention system, for further evaluation. This can help identify more advanced malicious traffic being sent over encrypted channels. This type of activity should be undertaken based on a risk decision, as it brings its own complexity and administrative burden to the organization. One example of this administrative burden is sharing the organization's full web of certificate trust with the SSL inspection device so that it can decrypt TLS connections.

Data Loss Prevention

Data loss prevention (DLP) systems act as guardians that keep sensitive information safe within an organization. They help prevent unauthorized access, sharing, or exposure of data, operating on both data at rest and data in transit. To facilitate these functions, they may integrate with other security solutions. One example is integrating with a proxy that intercepts all network traffic, which the DLP system then analyzes for sensitive content to prevent data leakage or unintended sharing. DLP systems require consistent tuning and maintenance to ensure they are aware of the proper data and data types to protect. They often come with pre-created templates, such as for social security numbers, but even these may require tuning.
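
To give a feel for how such templates work, the sketch below applies deliberately simplified regular-expression patterns to outbound text; real DLP templates are far more thorough and, as noted above, still require tuning to limit false positives.

    import re

    # Simplified, illustrative patterns only.
    PATTERNS = {
        "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "16-digit card number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    }

    def scan_for_sensitive_data(text: str) -> list[tuple[str, str]]:
        """Return (data type, matched value) pairs found in the text."""
        findings = []
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                findings.append((label, match.group()))
        return findings

    outbound_email = "Please update my record: SSN 123-45-6789, card 4111 1111 1111 1111."
    for label, value in scan_for_sensitive_data(outbound_email):
        print(f"BLOCKED: {label} detected -> {value}")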

Personally Identifiable Information

Personally identifiable information (PII) is any information that may be used to identify a specific person, either alone or in combination with other data. The types of PII can be quite extensive and include name, address, phone number, social security number, and date of birth. Aside from general privacy benefits, it is important to protect this data because it can be abused for attacks such as identity theft. Other nefarious uses include guessing security question answers and other knowledge-based factors used for authentication. Regulations and rules around the protection of PII vary from state to state and country to country.

Some examples include the California Consumer Privacy Act (CCPA) and the European Union’s General Data Protection Regulation (GDPR). Further subsets of PII include protected health information (PHI) and cardholder data (CHD). PHI is regulated by the Health Insurance Portability and Accountability Act (HIPAA). These are just a few examples of regulations that have specific requirements for the handling and protection of data.

CHD

CHD is like the blueprint of your credit card. It typically includes the account number, cardholder name, and expiration date. It also encompasses sensitive authentication data (SAD), such as magnetic stripe data, the card verification value (CVV), and the PIN. The Payment Card Industry Data Security Standard (PCI DSS) dictates how organizations must handle, process, and store cardholder data. It is designed to help secure CHD and credit card transactions. Non-compliance with this standard can result in financial penalties, reputational damage, and loss of trust from both customers and financial institutions.
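
One familiar example of a PCI DSS control is masking the primary account number (PAN) whenever it is displayed, traditionally showing no more than the first six and last four digits. The sketch below is a minimal illustration of such masking; the card number is a test value.

    def mask_pan(pan: str, show_first: int = 6, show_last: int = 4) -> str:
        """Mask a primary account number for display, keeping at most
        the first six and last four digits visible."""
        digits = "".join(ch for ch in pan if ch.isdigit())
        hidden = "*" * (len(digits) - show_first - show_last)
        return digits[:show_first] + hidden + digits[-show_last:]

    print(mask_pan("4111 1111 1111 1111"))   # 411111******1111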

This section covered key concepts in encryption and data protection essential for securing sensitive information. PKI was discussed as a framework for managing encryption keys and digital certificates to ensure secure communication. The evolution of SSL to its latest version, TLS 1.3, highlighted improvements in security protocols. DLP strategies were explored to protect against data breaches and ensure compliance with regulations. Additionally, the importance of safeguarding PII and CHD was discussed, focusing on the specific regulations that govern these types of sensitive data. Understanding these elements will help you implement robust data protection measures in your security strategy.

Summary

This chapter covered essential elements of modern system design, focusing on infrastructure concepts such as serverless computing, virtualization, and containerization. Serverless computing eliminates the need for traditional infrastructure management by leveraging cloud services to handle scaling and execution on demand. Virtualization allows more efficient use of physical hardware by enabling multiple independent VMs on a single physical host. Containerization simplifies application deployment by using standardized, isolated units that package all dependencies, allowing more flexible and scalable architectures.

The chapter also emphasized the importance of OS security, including system hardening practices to minimize attack surfaces. Key OS concepts such as the Windows Registry, file structures, and system processes were discussed, along with methods for securing these components. Logging and time synchronization were highlighted as critical for accurate system monitoring and forensic analysis, with a focus on configuring proper log levels and ensuring synchronized timestamps to prevent misleading data. Networking considerations, including different models (on-prem, cloud, and hybrid), network segmentation, and advanced security principles such as zero trust and SASE, were also examined. Lastly, the chapter covered IAM solutions, including MFA, SSO, and federation, as well as encryption and data protection techniques to safeguard sensitive information.

In the next chapter, you will explore advanced threat analysis models and methodologies, including the Cyber Kill Chain, Diamond Model of Intrusion Analysis, MITRE ATT&CK, Unified Kill Chain, OSS TMM, and the OWASP Testing Guide.

Exam Topics Highlights

Infrastructure – Make sure you comprehend serverless, virtualization, and containerization principles well. You should be able to list each one’s advantages and distinguishing qualities. Additionally, be able to decide which could work best for a specific organizational or security design.

Operating System Concepts – You should be able to briefly describe hardware architecture, system processes, common file structure, locations of configuration files, and Windows Registry. System hardening can be informed by general best practices, CIS benchmarks, and STIGs. Know how each of these may be used, as well as the best approaches overall for managing system hardening implementation. Be aware of typical logging levels as well as the need for time synchronization.

Network Architecture – You should be able to describe the on-prem, cloud, and hybrid approaches; identify key differences and advantages between them to apply to scenario-based questions; recognize the significance, security implications, and various implementation choices for network segmentation; and be able to define zero trust, SASE, and SDN concepts as well as their benefits, drawbacks, and important security factors.

IAM – You should be able to describe the solutions of MFA, SSO, PAM, passwordless, and CASB. Additionally, have a deeper knowledge of federation and related topics. Make sure you can list each concept’s benefits, drawbacks, and security implications.

Encryption and Data Protection – You should be able to explain how PKI, SSL, and DLP each provide security benefits and recognize what and how they protect. Plus, you should be able to define and name examples of PII and CHD.

Exam Readiness Drill – Chapter Review Questions

Apart from mastering key concepts, strong test-taking skills under time pressure are essential for acing your certification exam. That’s why developing these abilities early in your learning journey is critical.

Exam readiness drills, using the free online practice resources provided with this book, help you progressively improve your time management and test-taking skills while reinforcing the key concepts you’ve learned.

HOW TO GET STARTED

  • Open the link or scan the QR code at the bottom of this page
  • If you have unlocked the practice resources already, log in to your registered account. If you haven't, follow the instructions in Chapter 16 and come back to this page.
  • Once you log in, click the START button to start a quiz
  • We recommend attempting a quiz multiple times until you're able to answer most of the questions correctly and well within the time limit
  • You can use the following practice template to help you plan your attempts:

Table

The above drill is just an example. Design your drills based on your own goals and make the most out of the online quizzes accompanying this book.

First time accessing the online resources?

You’ll need to unlock them through a one-time process. Head to Chapter 16 for instructions.


Key benefits

  • Become proficient in all CS0-003 exam objectives with the help of real-world examples
  • Learn to perform key cybersecurity analyst tasks, including essential security operations and vulnerability management
  • Assess your exam readiness with end-of-chapter exam-style questions and two full-length practice tests

Description

The CompTIA CySA+ (CS0-003) Certification Guide is your complete resource for passing the latest CySA+ exam and developing real-world cybersecurity skills. Covering all four exam domains—security operations, vulnerability management, incident response, and reporting and communication—this guide provides clear explanations, hands-on examples, and practical guidance drawn from real-world scenarios. You’ll learn how to identify and analyze signs of malicious activity, apply threat hunting and intelligence concepts, and leverage tools to manage, assess, and respond to vulnerabilities and attacks. The book walks you through the incident response lifecycle and shows you how to report and communicate findings during both proactive and reactive cybersecurity efforts. To solidify your understanding, each chapter includes review questions and interactive exercises. You’ll also get access to over 250 flashcards and two full-length practice exams that mirror the real test—helping you gauge your readiness and boost your confidence. Whether you're starting your career in cybersecurity or advancing from an entry-level role, this guide equips you with the knowledge and skills you need to pass the CS0-003 exam and thrive as a cybersecurity analyst.

Who is this book for?

This book is for IT security analysts, vulnerability analysts, threat intelligence professionals, and anyone looking to deepen their expertise in cybersecurity analysis. To get the most out of this book and effectively prepare for your exam, you should have earned the CompTIA Network+ and CompTIA Security+ certifications or possess equivalent knowledge.

What you will learn

  • Analyze and respond to security incidents effectively
  • Manage vulnerabilities and identify threats using practical tools
  • Perform key cybersecurity analyst tasks with confidence
  • Communicate and report security findings clearly
  • Apply threat intelligence and threat hunting concepts
  • Reinforce your learning by solving two practice exams modeled on the real certification test

Product Details

Publication date : Apr 30, 2025
Length: 742 pages
Edition : 1st
Language : English
ISBN-13 : 9781835461389



Table of Contents

18 Chapters
Chapter 1: IAM, Logging, and Security Architecture
Chapter 2: Attack Frameworks
Chapter 3: Incident Response Preparation and Detection
Chapter 4: Incident Response – Containment, Eradication, Recovery, and Post-Incident Activities
Chapter 5: Efficiency in Security Operations
Chapter 6: Threat Intelligence and Threat Hunting
Chapter 7: Indicators of Malicious Activity
Chapter 8: Tools and Techniques for Malicious Activity Analysis
Chapter 9: Attack Mitigations
Chapter 10: Risk Control and Analysis
Chapter 11: Vulnerability Management Program
Chapter 12: Vulnerability Assessment Tools
Chapter 13: Vulnerability Prioritization
Chapter 14: Incident Reporting and Communication
Chapter 15: Vulnerability Management Reporting and Communication
Chapter 16: Accessing the Online Practice Resources
Index
Other Books You May Enjoy

FAQs

How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe reader installed, then clicking on the link will download and open the PDF file directly. If you don't, then save the PDF file on your machine and download the Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing

When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it we have tried to balance the need for the ebook to be usable for you the reader with our needs to protect the rights of us as Publishers and of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

If you want to purchase a video course, eBook or Bundle (Print+eBook) please follow below steps:

  1. Register on our website using your email address and the password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the Print Book, you get a free eBook copy of the same title. 
  5. Proceed with the checkout process (payment can be made using Credit Card, Debit Card, or PayPal)
Where can I access support around an eBook?
  • If you experience a problem with using or installing Adobe Reader, then contact Adobe directly.
  • To view the errata for the book, see www.packtpub.com/support and view the pages for the title you have.
  • To view your account details or to download a new copy of the book go to www.packtpub.com/account
  • To contact us directly if a problem is not resolved, use www.packtpub.com/contact-us
What eBook formats does Packt support?

Our eBooks are currently available in a variety of formats such as PDF and ePubs. In the future, this may well change with trends and development in technology, but please note that our PDFs are not Adobe eBook Reader format, which has greater restrictions on security.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are lower priced than print
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend you download and install the free Adobe Reader version 9.