Logging and Log Ingestion
Logging is an important aspect of system and security design. It can serve many functions, from system troubleshooting to security monitoring. This part of the exam objectives specifically calls out log ingestion, time synchronization, and logging levels. Log ingestion centers on the collection and shipping of logs to a central location for further analysis. For example, on Linux systems, the syslog daemon can be configured to collect logs from the system and its applications and then forward them to a central storage location. This analysis is commonly performed with a security information and event management (SIEM) solution. You will learn more about SIEM solutions in Chapter 8, Tools and Techniques for Malicious Activity Analysis.
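As a minimal sketch of log shipping, Python's standard library SysLogHandler can forward application log records to a remote syslog collector in much the same way a Linux syslog daemon ships logs to central storage. The collector address, port, and facility below are assumptions for illustration, not a real endpoint:

```python
import logging
import logging.handlers

# Hypothetical central collector; in production this would be a
# syslog relay or SIEM ingestion endpoint (address is an assumption).
COLLECTOR = ("127.0.0.1", 514)

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)

# SysLogHandler formats each record as a syslog message and sends it
# over UDP to the collector, mirroring syslog daemon log forwarding.
handler = logging.handlers.SysLogHandler(
    address=COLLECTOR,
    facility=logging.handlers.SysLogHandler.LOG_LOCAL0,
)
logger.addHandler(handler)

logger.warning("disk usage above 90%")  # shipped to the collector
```

In practice, the collector would be listening on UDP/514 (or, more commonly today, TLS-protected TCP), and the same pattern scales to many hosts feeding one central store.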
Time Synchronization
Time synchronization is an essential part of logging, ensuring consistency, accuracy, and reliability. Its importance grows as more systems and logs are integrated within an environment. Organizations use the Network Time Protocol (NTP) to provide a centralized time reference that keeps devices synchronized with one another. Accurate, consistent timestamps make logs more meaningful and facilitate monitoring and alerting. They also enable event correlation, event analysis, root cause analysis, and forensic investigations. This concept matters not only for security but also for system troubleshooting and debugging.
For example, consider an organization that has just suffered a cyber-attack. Evidence of the attack was first noticed on a Windows server at 3:18 AM. The cyber analysts began researching the event by checking their IDS and IPS alerts, using the 3:18 AM timestamp as the starting point for their analysis. They also reviewed Windows client logs via their SIEM tool to further map out the attack and its impact. Ideally, all of these machines would be using NTP to keep their clocks synchronized. This ensures that accurate research and correlation can be completed, which helps the analysts map out the attack process and attribute other potential machines and evidence to the attack.
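To see why a shared clock matters, consider correlating two timestamps recorded by different systems. The sketch below (with invented timestamps and a hypothetical local offset) normalizes both to UTC before comparing them, which is only meaningful if both clocks are synchronized:

```python
from datetime import datetime, timezone, timedelta

# Two log entries from different systems: the Windows server logs in a
# local UTC-5 offset, while the IDS reports in UTC (both invented).
server_event = datetime(2024, 5, 1, 3, 18, 0,
                        tzinfo=timezone(timedelta(hours=-5)))
ids_alert = datetime(2024, 5, 1, 8, 17, 42, tzinfo=timezone.utc)

# Normalize to UTC so the two timestamps are directly comparable;
# without synchronized clocks, this arithmetic would be misleading.
delta = server_event.astimezone(timezone.utc) - ids_alert
print(f"IDS alert preceded the server event by {delta.total_seconds()} seconds")
# -> IDS alert preceded the server event by 18.0 seconds
```

Even a few seconds of clock drift between hosts can reorder events like these and send an investigation down the wrong path.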
Logging Levels
Logging levels, also referred to as log severity levels, are predefined categories used to group and classify log messages. They make it easier to configure logging based on the importance of log messages. These categories are hierarchical, with each level including messages from the levels below it; for example, enabling Level 4 also captures message information from Levels 0–3. In Figure 1.32, you can see how each logging level feeds into the next, increasing the amount of logging being done. Severity runs from Level 0, the most important or highest severity, down to Level 6, the least important and lowest severity.

Figure 1.32: Logging levels
Here is a list of all the standard logging levels and the message types found at each level:
- Level 0 – Emergency is used for messages about catastrophic issues that may require emergency action.
- Level 1 – Alert is used for urgent messages that call for prompt action.
- Level 2 – Critical is used for messages that require immediate attention and may indicate immediate impact.
- Level 3 – Error is used for messages about failed operations that affect a specific function but not the application as a whole.
- Level 4 – Warning (Warn) is used for messages that are not immediately important but warrant awareness of potential impact.
- Level 5 – Information (Info) is used for general, normal operational events.
- Level 6 – Debug is used for diagnostic messages during development and debugging.
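The hierarchical behavior can be demonstrated with Python's logging module. Note that Python numbers its levels in the opposite direction from the syslog-style scale above (higher numbers are more severe), but the inclusion principle is the same: setting a threshold keeps that severity and everything more severe. A minimal sketch:

```python
import logging

records = []

class ListHandler(logging.Handler):
    """Collects emitted severities so the filtering effect is visible."""
    def emit(self, record):
        records.append(record.levelname)

logger = logging.getLogger("demo")
logger.propagate = False  # keep output out of the root logger
logger.addHandler(ListHandler())

# A WARNING threshold keeps Warning and everything more severe,
# and drops Info and Debug -- the hierarchical behavior described above.
logger.setLevel(logging.WARNING)

logger.debug("diagnostic detail")      # dropped
logger.info("routine operation")       # dropped
logger.warning("disk filling up")      # kept
logger.error("request failed")         # kept
logger.critical("service down")        # kept

print(records)  # -> ['WARNING', 'ERROR', 'CRITICAL']
```

Lowering the threshold to DEBUG would capture all five messages, which is exactly why verbose levels should be enabled sparingly.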
This concept is used directly in Cisco appliances when setting up logging. It is also found on Linux systems when configuring syslog, with the exclusion of Levels 0 and 1. Windows uses event severity levels, but they are not hierarchical, so one level does not include the messages of the levels above it.
Since each level includes the ones below it, adequately configuring a system may take some trial and error. If expected messages are missing, additional levels may need to be enabled. Another consideration is that each additional level includes more and more messages, which can increase storage needs and become overwhelming to review. It is best to enable only what is minimally needed and to use Debug sparingly.
Extra Logging Insights
Here are several additional logging best practices to keep in mind; they may or may not come up on the test but are good to be aware of:
- It is important to define and implement logging policies and procedures, including retention needs, keeping in mind storage costs and regulatory requirements.
- Logs are a cost, so only what is necessary, based on risk, should be logged and stored. This should be re-evaluated periodically as risk factors may change.
- Logs should include enough information to be meaningful for analysis and review. It is important not only to store logs but also to be able to use them.
- To ensure integrity, logs should be immutable and secure, offloaded from the creation point, and centrally stored for analysis.
- Logging processes need periodic validation and monitoring to ensure expected content is present in logs, systems are generating logs as expected, and log shipping is occurring to central stores.
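One lightweight way to make centrally stored logs tamper-evident, in the spirit of the immutability point above, is to chain a cryptographic digest across entries so that altering any earlier line changes every digest that follows. This is a minimal sketch, not a full immutable-store design; the function name and log lines are invented for illustration:

```python
import hashlib

def chain_digest(lines, seed=b""):
    """Fold each log line into a running SHA-256 digest; changing any
    line changes the final digest, making tampering detectable."""
    digest = seed
    for line in lines:
        digest = hashlib.sha256(digest + line.encode()).digest()
    return digest.hex()

logs = ["login ok user=alice", "sudo by alice", "logout user=alice"]
original = chain_digest(logs)

# Rewrite the first entry, as an attacker covering their tracks might.
tampered = chain_digest(["login ok user=mallory"] + logs[1:])

print(original != tampered)  # -> True: the modification is detectable
```

Recording the chained digest somewhere the log source cannot alter (for example, on the central log store) lets periodic validation confirm that shipped logs have not been modified after the fact.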
In this section, you analyzed the critical aspects of logging and log ingestion, including how time synchronization plays a vital role in accurate log analysis. You learned about different logging levels and their impact on the comprehensiveness and usefulness of log data. Additionally, you examined extra logging insights to enhance your ability to detect and respond to security incidents effectively. Moving forward, the next section will delve into network architecture, covering key concepts such as on-premises and cloud computing environments, hybrid models, and network segmentation, to provide a solid foundation for understanding and securing network infrastructures.