
Container Service for Kubernetes:Collect container logs from ACK clusters in DaemonSet mode

Last Updated:May 22, 2025

Container Service for Kubernetes (ACK) is deeply integrated with Simple Log Service (SLS). ACK provides log collection components to simplify container log collection and management. This topic describes how to install a log collection component and how to configure log collection, query, and analysis. You can use these configurations to improve O&M efficiency and reduce operating costs.

Scenarios

Log collection components can collect logs in two modes. For more information about the differences between the two modes, see Collection method.


Step 1: Install a log collection component

Install one of the following log collection components:

  • LoongCollector

    LoongCollector is a new-generation log collection agent provided by SLS. It is an upgraded version of Logtail.
  • Logtail

    Logtail is an agent provided by SLS to collect various types of logs from containers and ACK clusters.

Step 2: Create a collection configuration

Collect text logs or stdout.

  • Collect text logs

    Text logs are generated by programs running in containers and stored in the log files in specified directories. Text logs are suitable for long-term analysis and troubleshooting.
  • Collect stdout

    stdout is generated by programs running in containers in real time. stdout is suitable for program debugging and quick troubleshooting.

Step 3: Query and analyze logs

You can query and analyze logs in the SLS console.

Step 1: Install a log collection component

Install LoongCollector

Note

Currently, LoongCollector is in canary release. Before you install LoongCollector, check the supported regions.

LoongCollector-based data collection: LoongCollector is a new-generation log collection agent provided by Simple Log Service and an upgraded version of Logtail. LoongCollector is expected to integrate the capabilities of specific collection agents of Application Real-Time Monitoring Service (ARMS), such as Managed Service for Prometheus-based data collection and Extended Berkeley Packet Filter (eBPF)-based non-intrusive data collection.

Install the loongcollector component in an existing ACK cluster

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click the cluster that you want to manage. In the left-side navigation pane, choose Operations > Add-ons.

  3. On the Logs and Monitoring tab of the Add-ons page, find the loongcollector component and click Install.

    Note

    You cannot install the loongcollector component and the logtail-ds component at the same time. If the logtail-ds component is installed in your cluster, you cannot directly upgrade the logtail-ds component to the loongcollector component. The upgrade solution is available soon.

After the LoongCollector components are installed, Simple Log Service automatically generates a project named k8s-log-${your_k8s_cluster_id} and resources in the project. You can log on to the Simple Log Service console to view the resources. The following table describes the resources.

Machine group

  • k8s-group-${your_k8s_cluster_id}: The machine group of loongcollector-ds, which is used in log collection scenarios. Example: k8s-group-my-cluster-123

  • k8s-group-${your_k8s_cluster_id}-cluster: The machine group of loongcollector-cluster, which is used in metric collection scenarios. Example: k8s-group-my-cluster-123-cluster

  • k8s-group-${your_k8s_cluster_id}-singleton: The machine group of a single instance, which is used to create a LoongCollector configuration for the single instance. Example: k8s-group-my-cluster-123-singleton

Logstore

  • config-operation-log: The Logstore that is used to collect and store loongcollector-operator logs. Example: config-operation-log

    Important

    Do not delete the config-operation-log Logstore.
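The naming convention for the automatically generated resources can be expressed as a small helper. The following sketch is for illustration only; the function name is hypothetical, and the name patterns follow the list above.

```python
# Sketch: derive the default Simple Log Service resource names that the
# LoongCollector installation generates for a given ACK cluster ID.
# The naming patterns come from the list above; the helper is hypothetical.

def default_sls_resources(cluster_id: str) -> dict:
    return {
        "project": f"k8s-log-{cluster_id}",
        "machine_groups": [
            f"k8s-group-{cluster_id}",            # loongcollector-ds (log collection)
            f"k8s-group-{cluster_id}-cluster",    # loongcollector-cluster (metric collection)
            f"k8s-group-{cluster_id}-singleton",  # single-instance configurations
        ],
        "logstores": ["config-operation-log"],
    }

resources = default_sls_resources("my-cluster-123")
print(resources["project"])            # k8s-log-my-cluster-123
print(resources["machine_groups"][1])  # k8s-group-my-cluster-123-cluster
```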

Install Logtail

Logtail-based data collection: Logtail is a log collection agent that is provided by Simple Log Service. You can use Logtail to collect logs from multiple data sources, such as Alibaba Cloud Elastic Compute Service (ECS) instances, servers in data centers, and servers from third-party cloud service providers. Logtail supports non-intrusive log collection based on log files. You do not need to modify your application code, and log collection does not affect the operation of your applications.

Install Logtail components in an existing ACK cluster

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, find the one you want to manage and click its name. In the left-side navigation pane, choose Operations > Add-ons.

  3. On the Logs and Monitoring tab of the Add-ons page, find the logtail-ds component and click Install.

Install Logtail components when you create an ACK cluster

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click Create Kubernetes Cluster. In the Component Configurations step of the wizard, select Enable Log Service.

    This topic describes only the settings related to Simple Log Service. For more information about other settings, see Create an ACK managed cluster.

    After you select Enable Log Service, the system prompts you to create a Simple Log Service project. You can use one of the following methods to create a project:

    • Select Project

      You can select an existing project to manage the collected container logs.


    • Create Project

      Simple Log Service automatically creates a project to manage the collected container logs. ClusterID indicates the unique identifier of the created Kubernetes cluster.


Important

In the Component Configurations step of the wizard, Enable is selected for the Control Plane Component Logs parameter by default. If Enable is selected, the system automatically configures collection settings and collects logs from the control plane components of a cluster, and you are charged for the collected logs based on the pay-as-you-go billing method. You can determine whether to select Enable based on your business requirements. For more information, see Collect logs of control plane components in ACK managed clusters.

After the Logtail components are installed, Simple Log Service automatically generates a project named k8s-log-<YOUR_CLUSTER_ID> and resources in the project. You can log on to the Simple Log Service console to view the resources. The following table describes the resources.

Machine group

  • k8s-group-<YOUR_CLUSTER_ID>: The machine group of logtail-daemonset, which is used in log collection scenarios. Example: k8s-group-my-cluster-123

  • k8s-group-<YOUR_CLUSTER_ID>-statefulset: The machine group of logtail-statefulset, which is used in metric collection scenarios. Example: k8s-group-my-cluster-123-statefulset

  • k8s-group-<YOUR_CLUSTER_ID>-singleton: The machine group of a single instance, which is used to create a Logtail configuration for the single instance. Example: k8s-group-my-cluster-123-singleton

Logstore

  • config-operation-log: The Logstore that is used to store logs of the alibaba-log-controller component. We recommend that you do not create a Logtail configuration for this Logstore. You can delete the Logstore. After the Logstore is deleted, the system no longer collects the operational logs of the alibaba-log-controller component. You are charged for the Logstore in the same manner as for regular Logstores. For more information, see Billable items of pay-by-ingested-data.

Step 2: Create a collection configuration

Collect text logs

This section describes four methods that you can use to create a collection configuration. We recommend that you use only one method to manage a collection configuration.

  • CRD - AliyunPipelineConfig (recommended)

    Configuration description: You can use the AliyunPipelineConfig Custom Resource Definition (CRD), which is a Kubernetes CRD, to manage a LoongCollector configuration.

    Scenario: This method is suitable for scenarios that require complex collection and processing and version consistency between the LoongCollector configuration and the LoongCollector container in an ACK cluster.

  • CRD - AliyunLogConfig

    Configuration description: You can use the AliyunLogConfig CRD, which is an old-version CRD, to manage a LoongCollector configuration.

    Scenario: This method is suitable for existing scenarios in which LoongCollector configurations are already managed by using the old-version CRD. We recommend that you gradually replace the AliyunLogConfig CRD with the AliyunPipelineConfig CRD for better extensibility and stability. For more information about the differences between the two CRDs, see CRDs.

  • Simple Log Service console

    Configuration description: You can manage a LoongCollector configuration in the GUI based on quick deployment and configuration.

    Scenario: This method is suitable for scenarios in which only simple settings are required to manage a LoongCollector configuration. Specific advanced features and custom settings cannot be used.

  • Environment variable

    Configuration description: You can use environment variables to configure the parameters that manage a LoongCollector configuration in an efficient manner.

    Scenario: You can use environment variables only to configure simple settings. Complex processing logic is not supported, and only single-line text logs are supported. You can use environment variables to create a LoongCollector configuration that meets the following requirements:

    • Collect data from multiple applications to the same Logstore.

    • Collect data from multiple applications to different projects.

Note

When you use the CRD - AliyunPipelineConfig (recommended) method, the version of logtail-ds installed in your cluster must be later than 1.8.10. For more information about how to upgrade logtail-ds, see Upgrade Logtail to the latest version. This version requirement does not apply to LoongCollector.
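When you check the installed logtail-ds version against this requirement, compare the version numerically rather than as a string: lexicographically, "1.10.0" would incorrectly sort before "1.8.10". The following is a minimal sketch; the helper names are hypothetical.

```python
# Sketch: check whether an installed logtail-ds version satisfies the
# "later than 1.8.10" requirement. Compare numerically, not as strings.

def parse_version(v: str) -> tuple:
    # Keep only the leading digits of each dot-separated field.
    return tuple(int("".join(ch for ch in part if ch.isdigit()) or 0)
                 for part in v.split("."))

def supports_pipeline_config(logtail_ds_version: str) -> bool:
    return parse_version(logtail_ds_version) > (1, 8, 10)

print(supports_pipeline_config("1.8.10"))  # False: must be strictly later
print(supports_pipeline_config("1.10.0"))  # True
```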

CRD - AliyunPipelineConfig (recommended)

To create a LoongCollector configuration, you only need to create a Custom Resource (CR) from the AliyunPipelineConfig CRD. After the CR is created, the LoongCollector configuration takes effect.

Important

If you create a LoongCollector configuration by creating a CR and you want to modify the LoongCollector configuration, you can only modify the CR. If you modify the LoongCollector configuration in the Simple Log Service console, the new settings are not synchronized to the CR.

  1. Log on to the ACK console.

  2. In the left-side navigation pane, click Clusters.

  3. On the Clusters page, find the cluster that you want to manage and click More in the Actions column. In the drop-down list that appears, click Manage ACK clusters.

  4. Create a file named example-k8s-file.yaml.

    The YAML file is used to collect logs from the test.LOG file in the /data/logs/app_1 directory of pods labeled with app: ^(.*test.*)$ to the automatically created k8s-file Logstore in the k8s-log-test project. You can modify the following parameters in the YAML file based on your business requirements:

    1. project: Example: k8s-log-test.

      Log on to the Simple Log Service console. Check the name of the project generated after LoongCollector is installed. In most cases, the project name is in the k8s-log-<YOUR_CLUSTER_ID> format.

    2. IncludeK8sLabel: The label used to filter pods. Example: app: ^(.*test.*)$. In this example, pods whose label key is app and label value contains test are collected.

      Note

      If you want to collect all pods whose names contain test in your cluster, you can replace the IncludeK8sLabel parameter with the K8sContainerRegex parameter and use wildcards to specify a value for the K8sContainerRegex parameter. Example: K8sContainerRegex: ^(.*test.*)$.

    3. FilePaths: Example: /data/logs/app_1/**/test.LOG. For more information, see File path mapping for containers.

    4. Endpoint and Region: Example for the Endpoint parameter: cn-hangzhou.log.aliyuncs.com. Example for the Region parameter: cn-hangzhou.

    The value of the config parameter includes the types of input, output, and processing plug-ins and container filtering methods. For more information, see PipelineConfig. For more information about the complete parameters in the YAML file, see CR parameters.

    apiVersion: telemetry.alibabacloud.com/v1alpha1
    kind: ClusterAliyunPipelineConfig
    metadata:
      # Specify the name of the resource. The name must be unique in the current Kubernetes cluster. The name is the same as the name of the created LoongCollector configuration. If the resource name already exists, the name does not take effect. 
      name: example-k8s-file
    spec:
      # Specify the name of the project.
      project:
        name: k8s-log-test
      logstores:
        # Create a Logstore named k8s-file.
        - name: k8s-file
      # Create a LoongCollector configuration.
      config:
        # Enter a sample log. You can leave this parameter empty.
        sample: |
          2024-06-19 16:35:00 INFO test log
          line-1
          line-2
          end
        # Specify the input plug-in.
        inputs:
          # Use the input_file plug-in to collect multi-line text logs from containers.
          - Type: input_file
            # Specify the file path in the containers.
            FilePaths:
              - /data/logs/app_1/**/test.LOG
            # Enable the container discovery feature. 
            EnableContainerDiscovery: true
            # Collect container metadata, such as the pod name and namespace, together with the logs. 
            CollectingContainersMeta: true
            # Add conditions to filter containers. Multiple conditions are evaluated by using a logical AND. 
            ContainerFilters:
              # Specify the namespace of the pods to which the required containers belong. Regular expression matching is supported. 
              K8sNamespaceRegex: default
              # Specify the labels of the pods to which the required containers belong. Regular expression matching is supported. 
              IncludeK8sLabel:
                app: ^(.*test.*)$
            # Enable multi-line log collection. If you want to collect single-line logs, delete this parameter.
            Multiline:
              # Specify the custom mode to match the beginning of the first line of a log based on a regular expression.
              Mode: custom
              # Specify the regular expression that is used to match the beginning of the first line of a log.
              StartPattern: '\d+-\d+-\d+\s\d+:\d+:\d+'
        # Specify the processing plug-in.
        processors:
          # Use the processor_parse_regex_native plug-in to parse logs based on the specified regular expression.
          - Type: processor_parse_regex_native
            # Specify the name of the original field.
            SourceKey: content
            # Specify the regular expression that is used for parsing. Use capturing groups to extract fields.
            Regex: (\d+-\d+-\d+\s\S+)(.*)
            # Specify the fields that you want to extract.
            Keys: ["time", "detail"]
        # Specify the output plug-in.
        flushers:
          # Use the flusher_sls plug-in to deliver logs to a specific Logstore. 
          - Type: flusher_sls
            # Make sure that the Logstore exists.
            Logstore: k8s-file
            # Make sure that the endpoint is valid.
            Endpoint: cn-beijing.log.aliyuncs.com
            Region: cn-beijing
            TelemetryType: logs
  5. Run the kubectl apply -f example-k8s-file.yaml command. Then, LoongCollector starts to collect text logs from pods and deliver the collected logs to Simple Log Service.
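Before you apply the CR, you can sanity-check the two regular expressions in it against the sample log by using a short local script. The following sketch uses Python's re module; it only verifies the patterns locally and is not part of the collection pipeline.

```python
import re

# The sample log from the CR above: one multi-line log entry.
sample = """2024-06-19 16:35:00 INFO test log
line-1
line-2
end"""

# StartPattern from the Multiline block: marks the first line of each log.
start_pattern = re.compile(r'\d+-\d+-\d+\s\d+:\d+:\d+')
starts = [line for line in sample.splitlines() if start_pattern.match(line)]
print(starts)  # only the timestamped line matches, so all four lines form one log

# Regex from processor_parse_regex_native: capturing groups map to Keys.
parse_regex = re.compile(r'(\d+-\d+-\d+\s\S+)(.*)')
m = parse_regex.match("2024-06-19 16:35:00 INFO test log")
fields = dict(zip(["time", "detail"], m.groups()))
print(fields["time"])    # 2024-06-19 16:35:00
print(fields["detail"])  # " INFO test log" (note the leading space)
```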

CRD - AliyunLogConfig

To create a LoongCollector configuration, you only need to create a CR from the AliyunLogConfig CRD. After the CR is created, the LoongCollector configuration takes effect.

Important

If you create a LoongCollector configuration by creating a CR and you want to modify the LoongCollector configuration, you can only modify the CR. If you modify the LoongCollector configuration in the Simple Log Service console, the new settings are not synchronized to the CR.

  1. Log on to the ACK console.

  2. In the left-side navigation pane, click Clusters.

  3. On the Clusters page, find the cluster that you want to manage and click More in the Actions column. In the drop-down list that appears, click Manage ACK clusters.

  4. Create a file named example-k8s-file.yaml.

    The YAML file is used to create a LoongCollector configuration named example-k8s-file. You can use the LoongCollector configuration to collect logs from the test.LOG file in the /data/logs/app_1 directory of the containers whose names start with app in your cluster in simple mode. After the logs are collected, you can deliver the collected logs to the automatically created k8s-file Logstore in the k8s-log-<YOUR_CLUSTER_ID> project.

    You can modify the log file path in the example based on your business requirements. For more information, see File path mapping for containers.

    • logPath: The log file path. Example: /data/logs/app_1.

    • filePattern: The name of the file from which you want to collect logs. Example: test.LOG.

    The logtailConfig parameter specifies the LoongCollector details, which include the types of input, output, and processing plug-ins and container filtering methods. For more information, see AliyunLogConfigDetail. For more information about the complete parameters in the YAML file, see CR parameters.

    apiVersion: log.alibabacloud.com/v1alpha1
    kind: AliyunLogConfig
    metadata:
      # Specify the name of the resource. The name must be unique in the current Kubernetes cluster. 
      name: example-k8s-file
      # Specify the namespace to which the resource belongs. 
      namespace: kube-system
    spec:
      # Specify the name of the project. If you leave this parameter empty, the project named k8s-log-<your_cluster_id> is used.
      # project: k8s-log-test
      # Specify the name of the Logstore. If the specified Logstore does not exist, Simple Log Service automatically creates a Logstore. 
      logstore: k8s-file
      # Create a LoongCollector configuration. 
      logtailConfig:
        # Specify the type of the data source. If you want to collect text logs, set the value to file. 
        inputType: file
        # Specify the name of the LoongCollector configuration. The name must be the same as the resource name that is specified by metadata.name. 
        configName: example-k8s-file
        inputDetail:
          # Specify the settings that allow LoongCollector to collect text logs in simple mode. 
          logType: common_reg_log
          # Specify the log file path. 
          logPath: /data/logs/app_1
          # Specify the log file name. You can use wildcard characters such as asterisks (*) and question marks (?) when you specify the log file name. Example: log_*.log. 
          filePattern: test.LOG
          # If you want to collect text logs from containers, set the value to true. 
          dockerFile: true
          # Enable multi-line log collection. If you want to collect single-line logs, delete this parameter.
          # Specify the regular expression to match the beginning of the first line of a log. 
          logBeginRegex: \d+-\d+-\d+.*
          # Specify the conditions to filter containers. 
          advanced:
            k8s:
              K8sPodRegex: '^(app.*)$'
  5. Run the kubectl apply -f example-k8s-file.yaml command. Then, LoongCollector starts to collect text logs from pods and deliver the collected logs to Simple Log Service.
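You can also verify the logBeginRegex and K8sPodRegex values locally before you apply the CR. The following is a short sketch using Python's re module; the pod names are made up for illustration.

```python
import re

# logBeginRegex from the CR: identifies the first line of each multi-line log.
begin = re.compile(r'\d+-\d+-\d+.*')
print(bool(begin.match("2024-06-19 16:35:00 ERROR stack trace")))   # True
print(bool(begin.match("  at com.example.Main.run(Main.java:42)"))) # False: continuation line

# K8sPodRegex from the CR: only pods whose names start with "app" are collected.
pod_filter = re.compile(r'^(app.*)$')
pods = ["app-frontend-7d9f", "app-backend-5c2a", "nginx-ingress-abc1"]
print([p for p in pods if pod_filter.match(p)])  # only the two "app" pods
```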

Simple Log Service console

Note

This method is suitable for scenarios in which simple settings are required to manage a LoongCollector configuration without the need to log on to a Kubernetes cluster. You cannot batch create LoongCollector configurations by using this method.

  1. Log on to the Simple Log Service console.

  2. In the Projects section, click the project in which the LoongCollector components are installed. Example: k8s-log-<your_cluster_id>. On the page that appears, click the Logstore that you want to manage, and then click Logtail Configurations. On the Logtail Configuration page, click Add Logtail Configuration. In the Quick Data Import dialog box, find the Kubernetes - File card and click Integrate Now.

  3. In the Machine Group Configurations step of the Import Data wizard, set the Scenario parameter to Kubernetes Clusters and the Deployment Method parameter to ACK Daemonset, select the k8s-group-${your_k8s_cluster_id} machine group and click the > icon to move the machine group from the Source Machine Group section to the Applied Server Groups section, and then click Next.

  4. Create a LoongCollector configuration. In the Logtail Configuration step of the Import Data wizard, configure the required parameters and click Next. Approximately 1 minute is required to create a LoongCollector configuration. The following list describes the main parameter settings. For more information, see Create a Logtail configuration.

    • Global Configurations

      In the Global Configurations section, configure the Configuration Name parameter.


    • Input Configurations

      • Logtail Deployment Mode: The LoongCollector deployment mode. Select Daemonset.

      • File Path Type: The type of the file path that you want to use to collect logs. Valid values: Path in Container and Host Path. If a hostPath volume is mounted to a container and you want to collect logs from files based on the mapped file path on the container host, set this parameter to Host Path. In other scenarios, set this parameter to Path in Container.

      • File Path: The directory that stores the logs that you want to collect. The file path must start with a forward slash (/). In this example, set the File Path parameter to /data/wwwlogs/main/**/*.Log, which collects logs from files suffixed with .Log in the /data/wwwlogs/main directory and its subdirectories. You can configure the Maximum Directory Monitoring Depth parameter to specify how many levels of subdirectories the ** wildcard characters can match in the File Path value. The value 0 specifies that only the specified log file directory is monitored.

  5. Create indexes and preview data. By default, full-text indexing is enabled for Simple Log Service. In this case, full-text indexes are created. You can query all fields in logs based on the indexes. You can also manually create indexes for fields based on the collected logs. Alternatively, you can click Automatic Index Generation. Then, Simple Log Service generates indexes for fields. You can query data in an accurate manner based on field indexes. This reduces indexing costs and improves query efficiency. For more information, see Create indexes.

Environment variable

Note

This method supports only single-line text logs. If you want to collect multi-line text logs or logs of other formats, use the preceding methods.

  1. Create an application and configure Simple Log Service.

    Configure Simple Log Service in the ACK console
    1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

    2. On the Clusters page, click the name of the cluster that you want to manage. In the left-side navigation pane, choose Workloads > Deployments.

    3. On the Deployments page, click Create from Image.

    4. In the Basic Information step of the Create wizard, configure the Name parameter, and then click Next. In the Container step of the Create wizard, configure the Image Name parameter.

      The following section describes only the settings related to Simple Log Service. For more information about other settings, see Create a stateless application by using a Deployment.

    5. In the Log section, configure log-related information.

      1. Create a Logtail configuration.

        Click Collection Configuration to create a Logtail configuration. Each Logtail configuration consists of the Logstore and Log Path In Container parameters.

        • Logstore: the name of the Logstore that is used to store the collected logs. If the Logstore does not exist, ACK automatically creates a Logstore in the Simple Log Service project that is associated with your ACK cluster.

          Note

          The default retention period of logs in a Logstore is 90 days.

        • Log Path in Container: the path from which you want to collect logs. A value of /usr/local/tomcat/logs/catalina.*.log indicates that Logtail collects text logs from the Tomcat application.

          By default, each Logstore corresponds to a Logtail configuration that you can use to collect logs by line in simple mode.

      2. Create custom tags.

        Click Custom Tag to create custom tags. Each custom tag is a key-value pair and is added to the collected logs. You can use custom tags to identify container logs. For example, you can specify a version number for the Tag Value parameter.

    Configure Simple Log Service by using a YAML template
    1. Log on to the Container Service for Kubernetes console. In the left-side navigation pane, click Clusters.

    2. On the Clusters page, click the name of the cluster that you want to manage. In the left-side navigation pane, choose Workloads > Deployments.

    3. On the Deployments page, select a namespace from the Namespace drop-down list in the upper part of the page. Then, click Create From YAML in the upper-right corner of the page.

    4. Configure a YAML template.

      The syntax of the YAML template is the same as the Kubernetes syntax. However, to specify a collection configuration for a container, you must use env to add Collection Configurations and Custom Tags to the container. You must also create the corresponding volumeMounts and volumes based on the collection configuration. The following sample code provides an example of pod configurations:

      apiVersion: v1
      kind: Pod
      metadata:
        name: my-demo
      spec:
        containers:
        - name: my-demo-app
          image: 'registry.cn-hangzhou.aliyuncs.com/log-service/docker-log-test:latest'
          env:
          # Configure environment variables
          - name: aliyun_logs_log-varlog
            value: /var/log/*.log
          - name: aliyun_logs_mytag1_tags
            value: tag1=v1
          # Configure volume mounting
          volumeMounts:
          - name: volumn-sls-mydemo
            mountPath: /var/log
          # If the pod is repetitively restarted, you can add a sleep command to the startup parameters of the pod.
          command: ["sh", "-c"]  # Run commands in the shell.
          args: ["sleep 3600"]   # Make the pod sleep 3,600 seconds (1 hour).
        volumes:
        - name: volumn-sls-mydemo
          emptyDir: {}
      1. Use environment variables to create Collection Configurations and Custom Tags. All environment variables related to configurations use aliyun_logs_ as the prefix.

        • Add log collection configurations in the following format:

          - name: aliyun_logs_log-varlog
            value: /var/log/*.log                        

          In the example, a collection configuration is created in the aliyun_logs_{key} format. The value of {key} is log-varlog.

          • aliyun_logs_log-varlog: This environment variable creates a collection configuration named log-varlog that collects logs from the /var/log/*.log path in the container and stores them in a Logstore that is also named log-varlog.

        • Create Custom Tags in the following format:

          - name: aliyun_logs_mytag1_tags
            value: tag1=v1                       

          After a tag is added, the tag is automatically appended to the log data that is collected from the container. mytag1 specifies the tag name and cannot contain underscores (_).

      2. If your collection configuration specifies a collection path other than stdout, you must create the corresponding volumeMounts in this section.

        In the example, the collection configuration adds the collection of /var/log/*.log. Therefore, the corresponding volumeMounts for /var/log is added.

    5. After you complete the YAML template, click Create. The Kubernetes cluster executes the configuration.
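The mapping rules above can be sketched as a small parser that turns the aliyun_logs_* environment variables into collection configurations and tags. The parser is hypothetical and only mirrors the documented naming convention; other suffixes such as _project and _logstore are omitted for brevity.

```python
# Sketch: map aliyun_logs_* environment variables to collection configs
# and tags, following the convention described above. Hypothetical helper.
PREFIX = "aliyun_logs_"

def parse_collection_env(env: list) -> dict:
    configs, tags = {}, {}
    for item in env:
        name, value = item["name"], item["value"]
        if not name.startswith(PREFIX):
            continue
        key = name[len(PREFIX):]
        if key.endswith("_tags"):
            # aliyun_logs_{key}_tags carries a {tag-key}={tag-value} pair.
            k, v = value.split("=", 1)
            tags.setdefault(key[:-len("_tags")], {})[k] = v
        else:
            # "stdout" collects container stdout; any other value is a file path.
            # By default, the Logstore name equals {key}.
            configs[key] = {"source": value, "logstore": key}
    return {"configs": configs, "tags": tags}

env = [
    {"name": "aliyun_logs_log-varlog", "value": "/var/log/*.log"},
    {"name": "aliyun_logs_mytag1_tags", "value": "tag1=v1"},
]
print(parse_collection_env(env))
```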

  2. Use environment variables to configure advanced settings.

    Environment variable-based LoongCollector configuration supports various parameters. You can use environment variables to configure advanced settings to meet your log collection requirements.

    Important

    You cannot use environment variables to configure log collection in edge computing scenarios.

    Variable

    Description

    Example

    Usage note

    aliyun_logs_{key}

    • Required. {key} can contain only lowercase letters, digits, and hyphens (-).

    • If the specified aliyun_logs_{key}_logstore variable does not exist, a Logstore named {key} is automatically created to store the collected logs.

    • To collect the stdout of a container, set the value to stdout. You can also set the value to a log file path in the containers.

    • - name: aliyun_logs_catalina
      
        value: stdout
    • - name: aliyun_logs_access-log
      
        value: /var/log/nginx/access.log
    • By default, logs are collected in simple mode. If you want to parse the collected logs, we recommend that you configure the related settings in the Simple Log Service console or by using CRDs.

    • {key} specifies the name of the LoongCollector configuration. The configuration name must be unique in the Kubernetes cluster.

    aliyun_logs_{key}_tags

    Optional. The variable is used to add tags to logs. The value must be in the {tag-key}={tag-value} format.

    - name: aliyun_logs_catalina_tags
    
      value: app=catalina

    N/A.

    aliyun_logs_{key}_project

    Optional. The variable specifies a Simple Log Service project. The default project is the one that is generated after LoongCollector is installed.

    - name: aliyun_logs_catalina_project
    
      value: my-k8s-project

    The project must be deployed in the same region as LoongCollector.

    aliyun_logs_{key}_logstore

    Optional. The variable specifies a Simple Log Service Logstore. Default value: {key}.

    - name: aliyun_logs_catalina_logstore
    
      value: my-logstore

    N/A.

    aliyun_logs_{key}_shard

    Optional. The variable specifies the number of shards of the Logstore. Valid values: 1 to 10. Default value: 2.

    Note

    If the Logstore that you specify already exists, this variable does not take effect.

    - name: aliyun_logs_catalina_shard
    
      value: '4'

    N/A.

    aliyun_logs_{key}_ttl

    Optional. The variable specifies the log retention period. Valid values: 1 to 3650.

    • If you set the value to 3650, logs are permanently stored.

    • The default retention period is 90 days.

    Note

    If the Logstore that you specify already exists, this variable does not take effect.

    - name: aliyun_logs_catalina_ttl
    
      value: '3650'

    N/A.

    aliyun_logs_{key}_machinegroup

    Optional. The variable specifies the machine group in which the application is deployed. The default machine group is the one in which LoongCollector is deployed. For more information about how to use this variable, see Deploy Logtail in DaemonSet mode to collect text logs from an ACK cluster.

    - name: aliyun_logs_catalina_machinegroup
    
      value: my-machine-group

    N/A.

    aliyun_logs_{key}_logstoremode

    Optional. The variable specifies the type of Logstore. Default value: standard. Valid values: standard and query.

    Note

    If the Logstore that you specify already exists, this variable does not take effect.

    • standard: Standard Logstore. This type of Logstore supports the log analysis feature and is suitable for scenarios such as real-time monitoring and interactive analysis. You can use this type of Logstore to build a comprehensive observability system.

    • query: Query Logstore. This type of Logstore supports high-performance queries at approximately half the index traffic fee of a Standard Logstore, but does not support SQL analysis. Query Logstores are suitable for scenarios with large data volumes, long retention periods (weeks or months), or no need for log analysis.

    • - name: aliyun_logs_catalina_logstoremode
        value: standard 
    • - name: aliyun_logs_catalina_logstoremode
        value: query 

    N/A.

    • Custom requirement 1: Collect data from multiple applications to the same Logstore

      In this scenario, configure the aliyun_logs_{key}_logstore parameter. The following example shows how to collect stdout from two applications to the stdout-logstore Logstore.

      The {key} of Application 1 is set to app1-stdout, and the {key} of Application 2 is set to app2-stdout.

      Configure the following environment variables for Application 1:

      # Configure environment variables.
          - name: aliyun_logs_app1-stdout
            value: stdout
          - name: aliyun_logs_app1-stdout_logstore
            value: stdout-logstore

      Configure the following environment variables for Application 2:

      # Configure environment variables.
          - name: aliyun_logs_app2-stdout
            value: stdout
          - name: aliyun_logs_app2-stdout_logstore
            value: stdout-logstore
    • Custom requirement 2: Collect data from multiple applications to different projects

      In this scenario, perform the following steps:

      1. Create a machine group in each project and set the custom identifier of the machine group in the following format: k8s-group-{cluster-id}, where {cluster-id} is the ID of the cluster. You can specify a custom machine group name.

      2. Specify the project, Logstore, and machine group in the environment variables for each application. The name of the machine group is the same as the one that you create in the previous step.

        In the following example, the {key} of Application 1 is set to app1-stdout, and the {key} of Application 2 is set to app2-stdout. If the two applications are deployed in the same Kubernetes cluster, you can use the same machine group for the applications.

        Configure the following environment variables for Application 1:

        # Configure environment variables.
            - name: aliyun_logs_app1-stdout
              value: stdout
            - name: aliyun_logs_app1-stdout_project
              value: app1-project
            - name: aliyun_logs_app1-stdout_logstore
              value: app1-logstore
            - name: aliyun_logs_app1-stdout_machinegroup
              value: app1-machine-group

        Configure the following environment variables for Application 2:

        # Configure environment variables.
            - name: aliyun_logs_app2-stdout
              value: stdout
            - name: aliyun_logs_app2-stdout_project
              value: app2-project
            - name: aliyun_logs_app2-stdout_logstore
              value: app2-logstore
            - name: aliyun_logs_app2-stdout_machinegroup
              value: app1-machine-group
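The variables above follow a purely mechanical naming scheme, so the env entries for a pod spec can be generated programmatically. The following is a minimal Python sketch of that scheme; the helper build_log_env is hypothetical and not part of any SDK:

```python
import re

def build_log_env(key, source, project=None, logstore=None, tag=None):
    """Build the aliyun_logs_* env entries for one collection configuration.

    key:    the {key} of the configuration (lowercase letters, digits, hyphens)
    source: "stdout" or an in-container log path such as "/var/log/*.log"
    tag:    an optional tag in the "{tag-key}={tag-value}" format
    """
    if not re.fullmatch(r"[a-z0-9-]+", key):
        raise ValueError("{key} may contain only lowercase letters, digits, and hyphens")
    env = [{"name": f"aliyun_logs_{key}", "value": source}]
    if project:
        env.append({"name": f"aliyun_logs_{key}_project", "value": project})
    if logstore:
        env.append({"name": f"aliyun_logs_{key}_logstore", "value": logstore})
    if tag:
        env.append({"name": f"aliyun_logs_{key}_tags", "value": tag})
    return env

# Reproduces the Application 1 example above.
for entry in build_log_env("app1-stdout", "stdout", logstore="stdout-logstore"):
    print(entry["name"], "=", entry["value"])
```

The resulting list maps directly onto the env section of a container spec.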

Collect stdout

This section describes four methods that you can use to create a collection configuration. We recommend that you use only one method to manage a collection configuration.

Configuration method

Configuration description

Scenario

CRD - AliyunPipelineConfig (recommended)

You can use the AliyunPipelineConfig Custom Resource Definition (CRD), which is a Kubernetes CRD, to manage a LoongCollector configuration.

This method is suitable for scenarios that require complex collection and processing, and version consistency between the LoongCollector configuration and the LoongCollector container in an ACK cluster.

CRD - AliyunLogConfig

You can use the AliyunLogConfig CRD, which is an old version CRD, to manage a LoongCollector configuration.

This method is suitable for existing scenarios in which LoongCollector configurations are already managed by using the old version CRD.

We recommend that you gradually replace the AliyunLogConfig CRD with the AliyunPipelineConfig CRD to obtain better extensibility and stability. For more information about the differences between the CRD - AliyunPipelineConfig and CRD - AliyunLogConfig methods, see CRDs.

Simple Log Service console

You can manage a LoongCollector configuration in a GUI that supports quick deployment and configuration.

This method is suitable for scenarios in which simple settings are required to manage a LoongCollector configuration. If you use this method to manage a LoongCollector configuration, specific advanced features and custom settings cannot be used.

Environment variable

You can use environment variables to create and manage a LoongCollector configuration in an efficient manner.

You can use environment variables only to configure simple settings. Complex processing logic is not supported. Only single-line text logs are supported. You can use environment variables to create a LoongCollector configuration that can meet the following requirements:

  • Collect data from multiple applications to the same Logstore.

  • Collect data from multiple applications to different projects.

Note

When you use the (Recommended) CRD - AliyunPipelineConfig method, the logtail-ds version installed in your cluster must be later than 1.8.10. For more information about how to upgrade logtail-ds, see Upgrade the latest version of Logtail. However, this method does not have version requirements for LoongCollector.

CRD - AliyunPipelineConfig (recommended)

To create Logtail configurations, simply create the AliyunPipelineConfig custom resources, which will take effect automatically.

Important

For configurations created through custom resources, modifications must be made by updating the corresponding custom resource. Changes made in the Simple Log Service Console will not sync to the custom resource.

  1. Log on to the ACK console.

  2. In the left-side navigation pane, click Clusters.

  3. On the Clusters page, click More in the Actions column of the target cluster, then select Manage ACK Clusters.

  4. Create a file named example.yaml.

    Note

    You can use the Logtail configuration generator to generate a YAML script for your target scenario. This tool helps you quickly complete the configuration and reduces manual operations.

    The example YAML file below captures standard output from Pods labeled with app: ^(.*test.*)$ within the default namespace, using multi-line text mode, and forwards it to a logstore called k8s-stdout, which is automatically created within a project named k8s-log-test. Adjust the parameters in the YAML as needed:

    1. project: Log on to the Simple Log Service Console, and identify the project name created by the Logtail you installed, typically in the format k8s-log-<YOUR_CLUSTER_ID>, such as k8s-log-test.

    2. IncludeK8sLabel: Used to filter target pods by label. For example, app: ^(.*test.*)$ specifies that the label key is app and collects from pods whose label value contains test.

    3. Endpoint and Region: For example, ap-southeast-1.log.aliyuncs.com and ap-southeast-1.

    For more information on config in the YAML file, such as supported inputs, outputs, processing plug-in types, and container filtering methods, see PipelineConfig. For a comprehensive list of YAML parameters, see CR parameters.

    apiVersion: telemetry.alibabacloud.com/v1alpha1
    # Create a ClusterAliyunPipelineConfig.
    kind: ClusterAliyunPipelineConfig
    metadata:
      # Specify the name of the resource. The name must be unique in the current Kubernetes cluster. This name is also the name of the Logtail configuration created. 
      name: example-k8s-stdout
    spec:
      # Specify the target project.
      project:
        name: k8s-log-test
      # Create a logstore for storing logs.
      logstores:
        - name: k8s-stdout
      # Define the Logtail configuration.
      config:
        # Sample log (optional)
        sample: |
          2024-06-19 16:35:00 INFO test log
          line-1
          line-2
          end
        # Define input plug-ins.
        inputs:
          # Use the service_docker_stdout plug-in to collect stdout and stderr from containers.
          - Type: service_docker_stdout
            Stdout: true
            Stderr: true
            # Configure container information filter conditions. Multiple options are in an "and" relationship.
            # Specify the namespace to which the pod containing the container to be collected belongs. Supports regular expression matching.
            K8sNamespaceRegex: "^(default)$"
            # Enable container metadata preview.
            CollectContainersFlag: true
            # Collect containers that meet the Pod label conditions. Multiple entries are in an "or" relationship.
            IncludeK8sLabel:
              app: ^(.*test.*)$
            # Configure multi-line log splitting. This setting has no effect for single-line log collection.
            # Configure the regular expression for the beginning of the line.
            BeginLineRegex: \d+-\d+-\d+.*
        # Define output plug-ins
        flushers:
          # Use the flusher_sls plug-in to send logs to the specified logstore.
          - Type: flusher_sls
            # Make sure that the logstore exists.
            Logstore: k8s-stdout
            # Make sure that the endpoint is valid.
            Endpoint: ap-southeast-1.log.aliyuncs.com
            Region: ap-southeast-1
            TelemetryType: logs
  5. Run kubectl apply -f example.yaml, replacing example.yaml with your YAML file name. Logtail will begin collecting standard output from the container and sending it to Simple Log Service.
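In the configuration above, BeginLineRegex decides where each multi-line log entry begins: a line matching the pattern starts a new entry, and non-matching lines are appended to the previous entry. The following Python sketch only illustrates that regex behavior against the sample log; Logtail's own engine performs the real splitting:

```python
import re

BEGIN_LINE = re.compile(r"\d+-\d+-\d+.*")  # same pattern as BeginLineRegex

def group_multiline(lines):
    """Group raw lines into log entries using the begin-line regex."""
    entries = []
    for line in lines:
        if BEGIN_LINE.match(line) or not entries:
            entries.append(line)            # this line starts a new entry
        else:
            entries[-1] += "\n" + line      # continuation of the previous entry
    return entries

# The sample log from the CR above.
sample = [
    "2024-06-19 16:35:00 INFO test log",
    "line-1",
    "line-2",
    "end",
]
print(group_multiline(sample))
```

Only the timestamped line matches the pattern, so all four lines are grouped into a single entry.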

CRD - AliyunLogConfig

To create Logtail configurations, simply create the AliyunLogConfig custom resources, which will take effect automatically.

Important

For configurations created through custom resources, modifications must be made by updating the corresponding custom resource. Changes made in the Simple Log Service Console will not sync to the custom resource.

  1. Log on to the ACK console.

  2. In the left-side navigation pane, click Clusters.

  3. On the Clusters page, click More in the Actions column of the target cluster, then select Manage ACK Clusters.

  4. Create a file named example-k8s-file.yaml.

    This YAML script creates a Logtail configuration named simple-stdout-example. It collects, in multi-line mode, the standard output of all containers in the cluster whose names begin with app, and sends the collected data to a logstore called k8s-stdout within a project named k8s-log-<your_cluster_id>.

    For more information on the logtailConfig item in the YAML file, including supported inputs, outputs, processing plug-in types, and container filtering methods, see AliyunLogConfigDetail. For a comprehensive list of YAML parameters, see CR parameters.

    # Standard output configuration
    apiVersion: log.alibabacloud.com/v1alpha1
    kind: AliyunLogConfig
    metadata:
      # Specify the name of the resource. The name must be unique in the current Kubernetes cluster.
      name: simple-stdout-example
    spec:
      # Specify the target project name (optional, default is k8s-log-<your_cluster_id>)
      # project: k8s-log-test
      # Specify the name of the logstore. If the specified logstore does not exist, Simple Log Service automatically creates a Logstore.
      logstore: k8s-stdout
      # Specify the Logtail collection configuration.
      logtailConfig:
        # Specify the type of the data source. To collect standard output, set the value to plugin.
        inputType: plugin
        # Specify the name of the Logtail collection configuration. The name must be the same as the resource name that is specified in metadata.name.
        configName: simple-stdout-example
        inputDetail:
          plugin:
            inputs:
              - type: service_docker_stdout
                detail:
                  # Specify the collection of stdout and stderr.
                  Stdout: true
                  Stderr: true
                  # Specify the namespace to which the pod containing the container to be collected belongs. Supports regular expression matching.
                  K8sNamespaceRegex: "^(default)$"
                  # Specify the name of the container to be collected. Supports regular expression matching.
                  K8sContainerRegex: "^(app.*)$"
                  # Configure multi-line log splitting.
                  # Configure the regular expression for the beginning of the line.
                  BeginLineRegex: \d+-\d+-\d+.*
                  
  5. Run kubectl apply -f example-k8s-file.yaml, replacing example-k8s-file.yaml with your YAML file name. Logtail will start collecting standard output from the container and sending it to Simple Log Service.

Simple Log Service console

  1. Log on to the Simple Log Service Console.

  2. Select your project from the list, such as k8s-log-<your_cluster_id>. On the project page, click Logtail Configurations for the target logstore, click Add Logtail Configuration, and click Integrate Now under K8s - Stdout and Stderr - Old Version.

  3. Since Logtail is already installed for the ACK cluster, select Use Existing Machine Groups.

  4. On the Machine Group Configurations page, select the k8s-group-${your_k8s_cluster_id} machine group under the ACK Daemonset method in the Kubernetes Clusters scenario, add it to the applied machine group, then click Next.

  5. Create a Logtail configuration, enter the required configurations as described below, and click Next. It will take about 1 minute for the configuration to take effect.

    This section covers only the necessary configurations. For a complete list, see Global Configurations.

    • Global Configuration

      Enter the configuration name in Global Configuration.


  6. Create indexes and preview data: By default, Simple Log Service enables a full-text index, indexing all fields in the log for queries. You can also create a field index manually based on the collected logs, or click Automatic Index Generation. This will generate a field index for term queries on specific fields, reducing index costs and improving query efficiency.

Environment variables

  1. Configure Simple Log Service when creating an application.

    Console
    1. Log on to the Container Service Management Console and click Clusters in the left-side navigation pane.

    2. On the Clusters page, click the target cluster, then select Workloads > Deployments from the left-side navigation pane.

    3. On the Deployments page, select a namespace and click Create from Image.

    4. On the Basic Information page, set the application name, then click Next to enter the Container page.

      This section introduces configurations related to Simple Log Service. For more information about other application configurations, see Create a stateless application by using a Deployment.

    5. In the Log section, configure log-related information.

      1. Set Collection Configuration.

        Click Collection Configuration to create a new collection configuration. Each configuration consists of two items:

        • Logstore: Specify the logstore where the collected logs are stored. If the logstore does not exist, ACK will automatically create it in the Simple Log Service project associated with your cluster.

          Note

          The default log retention period for newly created logstores is 90 days.

        • Log Path in Container: Specify stdout to collect standard output and error output from the container.


          Each collection configuration is automatically created as a Logtail collection configuration, and logs are collected in simple mode (by row) by default.

      2. Set Custom Tag.

        Click Custom Tag to create a custom tag. Each tag is a key-value pair that will be appended to the collected logs. You can use it to label the log data of the container, such as the version number.


    6. After configuring all settings, click Next. For subsequent steps, see Create a stateless application by using a Deployment.

    YAML template
    1. Log on to the Container Service Management Console, and select Clusters from the left-side navigation pane.

    2. On the Clusters page, click the target cluster name, and then select Workloads > Deployments from the left-side navigation pane.

    3. On the Deployments page, select a namespace and click Create from YAML.

    4. Configure the YAML file.

      The syntax of the YAML template is consistent with Kubernetes. To specify the collection configuration for the container, use env to add Collection Configuration and Custom Tag to the container, and create corresponding volumeMounts and volumes based on the collection configuration. Below is a simple pod example:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        annotations:
          deployment.kubernetes.io/revision: '1'
        labels:
          app: deployment-stdout
          cluster_label: CLUSTER-LABEL-A
        name: deployment-stdout
        namespace: default
      spec:
        progressDeadlineSeconds: 600
        replicas: 1
        revisionHistoryLimit: 10
        selector:
          matchLabels:
            app: deployment-stdout
        strategy:
          rollingUpdate:
            maxSurge: 25%
            maxUnavailable: 25%
          type: RollingUpdate
        template:
          metadata:
            labels:
              app: deployment-stdout
              cluster_label: CLUSTER-LABEL-A
          spec:
            containers:
              - args:
                  - >-
                    while true; do date '+%Y-%m-%d %H:%M:%S'; echo 1; echo 2; echo 3;
                    echo 4; echo 5; echo 6; echo 7; echo 8; echo 9; 
                    sleep 10; done
                command:
                  - /bin/sh
                  - '-c'
                  - '--'
                env:
                  - name: cluster_id
                    value: CLUSTER-A
                  - name: aliyun_logs_log-stdout
                    value: stdout
                image: 'mirrors-ssl.aliyuncs.com/busybox:latest'
                imagePullPolicy: IfNotPresent
                name: timestamp-test
                resources: {}
                terminationMessagePath: /dev/termination-log
                terminationMessagePolicy: File
            dnsPolicy: ClusterFirst
            restartPolicy: Always
            schedulerName: default-scheduler
            securityContext: {}
            terminationGracePeriodSeconds: 30
      1. Create your Collection Configuration and Custom Tag using environment variables. All related environment variables start with the prefix aliyun_logs_.

        • The rules for creating collection configurations are as follows:

          - name: aliyun_logs_log-varlog
            value: /var/log/*.log                        

          The example creates a collection configuration in the format aliyun_logs_{key}, where {key} is log-varlog.

          • aliyun_logs_log-varlog: This variable indicates the creation of a logstore named log-varlog. The log collection path is set to /var/log/*.log, and the corresponding Simple Log Service collection configuration name is also log-varlog. The goal is to collect the contents of the container's /var/log/*.log files into the log-varlog Logstore.

        • The rules for Creating Custom Tags are as follows:

          - name: aliyun_logs_mytag1_tags
            value: tag1=v1                       

          After a tag is configured, the corresponding field is automatically appended to the log data collected from the container. Replace mytag1 with any name that does not contain underscores (_).

      2. If your collection configuration specifies a path other than stdout, you need to create the corresponding volumeMounts in this section.

        In the example, the collection configuration collects /var/log/*.log, so a corresponding volumeMount for /var/log must be added.

    5. After completing the YAML, click Create to submit the configuration to the Kubernetes cluster for execution.
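The /var/log/*.log collection configuration described above only works if the log directory is actually mounted in the pod spec; the stdout-only Deployment shown earlier omits this. A minimal, hypothetical sketch of the pieces to add (the volume name varlog and the emptyDir type are illustrative choices):

```yaml
# In the container spec:
        env:
          - name: aliyun_logs_log-varlog
            value: /var/log/*.log
        volumeMounts:
          - name: varlog            # must match a volume defined below
            mountPath: /var/log
# At the pod spec level:
      volumes:
        - name: varlog
          emptyDir: {}
```

With this in place, files written to /var/log inside the container are collected into the log-varlog Logstore.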

  2. Configure advanced parameters for environment variables.

    Environment variables support various configuration parameters for log collection. Set advanced parameters as needed.

    Important

    Configuring log collection through environment variables is not suitable for edge computing scenarios.

    Field

    Description

    Example

    Notes

    aliyun_logs_{key}

    • Required. {key} can contain only lowercase letters, digits, and hyphens (-).

    • If aliyun_logs_{key}_logstore does not exist, a logstore named {key} is created by default.

    • If the value is stdout, it indicates the collection of the container's standard output. Other values indicate the log path inside the container.

    • - name: aliyun_logs_catalina
      
        value: stdout
    • - name: aliyun_logs_access-log
      
        value: /var/log/nginx/access.log

    aliyun_logs_{key}_tags

    Optional. The value must be in the format {tag-key}={tag-value} and is used to tag the logs.

    - name: aliyun_logs_catalina_tags
    
      value: app=catalina

    None.

    aliyun_logs_{key}_project

    Optional. The value specifies a project in Simple Log Service. If this environment variable does not exist, the project you selected during installation is used.

    - name: aliyun_logs_catalina_project
    
      value: my-k8s-project

    The project must be deployed in the same region as Logtail.

    aliyun_logs_{key}_logstore

    Optional. The value specifies a Logstore in Simple Log Service. If this environment variable does not exist, the Logstore is the same as {key}.

    - name: aliyun_logs_catalina_logstore
    
      value: my-logstore

    None.

    aliyun_logs_{key}_shard

    Optional. The value specifies the number of shards when creating a logstore. Valid values: 1 to 10. If this environment variable does not exist, the value is 2.

    Note

    If the logstore already exists, this parameter does not take effect.

    - name: aliyun_logs_catalina_shard
    
      value: '4'

    None.

    aliyun_logs_{key}_ttl

    Optional. The value specifies the log retention period. Valid values: 1 to 3650.

    • If the value is 3650, the log retention period is set to permanent.

    • If this environment variable does not exist, the default log retention period is 90 days.

    Note

    If the logstore already exists, this parameter does not take effect.

    - name: aliyun_logs_catalina_ttl
    
      value: '3650'

    None.

    aliyun_logs_{key}_machinegroup

    Optional. The value specifies the machine group of the application. If this environment variable does not exist, the default machine group where Logtail is installed is used. For detailed usage of this parameter, see Collect container logs from ACK clusters.

    - name: aliyun_logs_catalina_machinegroup
    
      value: my-machine-group

    None.

    aliyun_logs_{key}_logstoremode

    Optional. The value specifies the type of logstore in Simple Log Service. If this parameter is not specified, the default value is standard. Valid values:

    Note

    If the logstore already exists, this parameter does not take effect.

    • standard: Supports one-stop data analysis features of Simple Log Service. Suitable for real-time monitoring, interactive analysis, and building complete observability systems.

    • query: Supports high-performance queries. The index traffic cost is about half that of standard, but SQL analysis is not supported. Suitable for scenarios with large data volumes, long retention periods (weeks or months), or no need for log analysis.

    • - name: aliyun_logs_catalina_logstoremode
        value: standard 
    • - name: aliyun_logs_catalina_logstoremode
        value: query 

    This parameter requires the logtail-ds image version to be >=1.3.1.

    • Customization requirement 1: Collect data from multiple applications into the same logstore

      To collect data from multiple applications into the same Logstore, set the aliyun_logs_{key}_logstore parameter. For example, the following configuration collects stdout from two applications into stdout-logstore.

      In the example, the {key} for Application 1 is app1-stdout, and the {key} for Application 2 is app2-stdout.

      The environment variables for Application 1 are as follows:

      # Configure environment variables
          - name: aliyun_logs_app1-stdout
            value: stdout
          - name: aliyun_logs_app1-stdout_logstore
            value: stdout-logstore

      The environment variables for Application 2 are as follows:

      # Configure environment variables
          - name: aliyun_logs_app2-stdout
            value: stdout
          - name: aliyun_logs_app2-stdout_logstore
            value: stdout-logstore
    • Customization requirement 2: Collect data from different applications into different projects

      To collect data from different applications into multiple projects, follow these steps:

      1. Create a machine group in each project with a custom ID named k8s-group-{cluster-id}, where {cluster-id} is your cluster ID. The machine group name is customizable.

      2. Configure the project, logstore, and machine group information in the environment variables for each application. Use the same machine group name as created in the previous step.

        In the following example, the {key} for Application 1 is app1-stdout, and the {key} for Application 2 is app2-stdout. If both applications are deployed within the same Kubernetes cluster, you can use the same machine group for them.

        The environment variables for Application 1 are as follows:

        # Configure environment variables
            - name: aliyun_logs_app1-stdout
              value: stdout
            - name: aliyun_logs_app1-stdout_project
              value: app1-project
            - name: aliyun_logs_app1-stdout_logstore
              value: app1-logstore
            - name: aliyun_logs_app1-stdout_machinegroup
              value: app1-machine-group

        The environment variables for Application 2 are as follows:

        # Configure environment variables
            - name: aliyun_logs_app2-stdout
              value: stdout
            - name: aliyun_logs_app2-stdout_project
              value: app2-project
            - name: aliyun_logs_app2-stdout_logstore
              value: app2-logstore
            - name: aliyun_logs_app2-stdout_machinegroup
              value: app1-machine-group
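The parameters above come with simple validity rules ({key} character set, shard and ttl ranges, tag format). The following Python sketch restates those rules as a convenience check; it is purely illustrative, and the agent performs its own validation:

```python
import re

def validate(key, shard=2, ttl=90, tag=None):
    """Check one aliyun_logs_{key} configuration against the documented rules.

    Returns a list of human-readable rule violations (empty if valid).
    """
    errors = []
    if not re.fullmatch(r"[a-z0-9-]+", key):
        errors.append("key: only lowercase letters, digits, and hyphens are allowed")
    if not 1 <= shard <= 10:
        errors.append("shard: valid values are 1 to 10")
    if not 1 <= ttl <= 3650:
        errors.append("ttl: valid values are 1 to 3650 (3650 means permanent)")
    if tag is not None and not re.fullmatch(r"[^=]+=[^=]+", tag):
        errors.append("tag: expected the {tag-key}={tag-value} format")
    return errors

print(validate("app1-stdout", shard=4, ttl=3650, tag="app=catalina"))  # no violations
print(validate("App_1", shard=11))  # key and shard violations
```

Running such a check before deploying a pod spec catches typos that would otherwise only surface as missing logs.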

Step 3: Query and analyze logs

  1. Log on to the Simple Log Service console.

  2. In the Projects section, click the project that you want to manage to go to the details page of the project.


  3. In the left-side navigation pane, click the icon of the Logstore that you want to manage. In the drop-down list, select Search & Analysis to view the logs that are collected from your Kubernetes cluster.


Default log fields

Text logs

The following table describes the fields that are included by default in each container text log.

Field name

Description

__tag__:__hostname__

The name of the container host.

__tag__:__path__

The log file path in the container.

__tag__:_container_ip_

The IP address of the container.

__tag__:_image_name_

The name of the image that is used by the container.

__tag__:_pod_name_

The name of the pod.

__tag__:_namespace_

The namespace to which the pod belongs.

__tag__:_pod_uid_

The unique identifier (UID) of the pod.

stdout

The following table describes the fields uploaded by default for each log in a Kubernetes cluster.

Field

Description

_time_

The time when the log was collected.

_source_

The type of the log source. Valid values: stdout and stderr.

_image_name_

The name of the image.

_container_name_

The name of the container.

_pod_name_

The name of the pod.

_namespace_

The namespace of the pod.

_pod_uid_

The unique identifier of the pod.

References