kubelet and api-server interaction

This article takes a deep dive into the interaction between the kubelet and the API Server in Kubernetes, focusing on how a Pod goes from creation to being synced to its desired state: the workflow of syncLoop, the role the PLEG component plays in Pod lifecycle events, and how Pod status is finally written back to the API Server.


After the kubelet successfully creates a Pod, it writes the status back to the api-server, notifying it that the Pod assigned to this node has been created. Let's analyze how the two components interact.

syncLoop runs a for loop whose body is mainly syncLoopIteration, and it interacts with the PLEG component:

```go
func (kl *Kubelet) syncLoopIteration(configCh <-chan kubetypes.PodUpdate, handler SyncHandler,
	syncCh <-chan time.Time, housekeepingCh <-chan time.Time, plegCh <-chan *pleg.PodLifecycleEvent) bool {
	select {
	case u, open := <-configCh:
		if !open {
			glog.Errorf("Update channel is closed. Exiting the sync loop.")
			return false
		}
		switch u.Op {
		case kubetypes.ADD:
			...
		case kubetypes.UPDATE:
			...
		case kubetypes.REMOVE:
			...
		case kubetypes.RECONCILE:
			...
		case kubetypes.DELETE:
			...
		case kubetypes.RESTORE:
			...
		case kubetypes.SET:
			...
		}
		...
	case e := <-plegCh:
		...
	case <-syncCh:
		...
	case update := <-kl.livenessManager.Updates():
		...
	case <-housekeepingCh:
		...
	}
	return true
}
```

HandlePodAdditions

The logic of HandlePodAdditions, as called from syncLoopIteration:

First, sort the pods to be added by creation time.

Then iterate over the pods:

1. Write the pod into podManager (if a pod is absent from podManager, it means the pod has already been deleted in the apiserver, and nothing other than cleanup will be done for it).

2. Handle the MirrorPod (to monitor a static pod's status, the kubelet creates a mirror pod for each static pod through the api server).

3. Check whether the pod can be admitted.

4. Dispatch the pod to a worker for the sync operation.

5. Hand the pod to probeManager for health checking.

```go
func (kl *Kubelet) HandlePodAdditions(pods []*v1.Pod) {
	start := kl.clock.Now()
	sort.Sort(sliceutils.PodsByCreationTime(pods))
	// Responsible for checking limits in resolv.conf
	// The limits do not have anything to do with individual pods
	if kl.dnsConfigurer != nil && kl.dnsConfigurer.ResolverConfig != "" {
		kl.dnsConfigurer.CheckLimitsForResolvConf()
	}
	for _, pod := range pods {
		existingPods := kl.podManager.GetPods()
		// Always add the pod to the pod manager. Kubelet relies on the pod
		// manager as the source of truth for the desired state. If a pod does
		// not exist in the pod manager, it means that it has been deleted in
		// the apiserver and no action (other than cleanup) is required.
		kl.podManager.AddPod(pod)

		if kubepod.IsMirrorPod(pod) {
			kl.handleMirrorPod(pod, start)
			continue
		}

		if !kl.podIsTerminated(pod) {
			// Only go through the admission process if the pod is not
			// terminated.
			// We failed pods that we rejected, so activePods include all admitted
			// pods that are alive.
			activePods := kl.filterOutTerminatedPods(existingPods)
			// Check if we can admit the pod; if not, reject it.
			if ok, reason, message := kl.canAdmitPod(activePods, pod); !ok {
				kl.rejectPod(pod, reason, message)
				continue
			}
		}
		mirrorPod, _ := kl.podManager.GetMirrorPodByPod(pod)
		kl.dispatchWork(pod, kubetypes.SyncPodCreate, mirrorPod, start)
		kl.probeManager.AddPod(pod)
	}
}
```

Let's focus on step 4: what dispatchWork() does. Following the call chain dispatchWork --> podWorkers.UpdatePod --> podWorkers.managePodLoop, we find that it ultimately calls the podWorkers' syncPodFn. syncPodFn is assigned when NewMainKubelet initializes podWorkers, where it is set to klet.syncPod, so the real synchronization work is done by syncPod:

```go
...
klet.podWorkers = newPodWorkers(klet.syncPod, kubeDeps.Recorder, klet.workQueue, klet.resyncInterval, backOffPeriod, klet.podCache)
...
```
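To make that chain concrete, here is a minimal, self-contained sketch of the podWorkers pattern: UpdatePod lazily starts one goroutine per pod, and managePodLoop drains that pod's channel, funneling every update into syncPodFn. The names below (toyPodWorkers, the simplified syncPodFnType signature) are illustrative stand-ins, not the real kubelet code:

```go
package main

import (
	"fmt"
	"sync"
)

// syncPodFnType is a simplified stand-in for the kubelet's syncPodFn
// (in the real code this is klet.syncPod with a much richer signature).
type syncPodFnType func(podUID string, updateType string) error

// toyPodWorkers mimics podWorkers: one goroutine and one channel per pod.
type toyPodWorkers struct {
	mu         sync.Mutex
	podUpdates map[string]chan string // podUID -> pending update types
	syncPodFn  syncPodFnType
	wg         sync.WaitGroup
}

func newToyPodWorkers(fn syncPodFnType) *toyPodWorkers {
	return &toyPodWorkers{podUpdates: make(map[string]chan string), syncPodFn: fn}
}

// UpdatePod starts managePodLoop for a pod on first sight, then enqueues the update.
func (w *toyPodWorkers) UpdatePod(podUID, updateType string) {
	w.mu.Lock()
	ch, ok := w.podUpdates[podUID]
	if !ok {
		ch = make(chan string, 1)
		w.podUpdates[podUID] = ch
		w.wg.Add(1)
		go w.managePodLoop(podUID, ch) // per-pod worker goroutine
	}
	w.mu.Unlock()
	ch <- updateType
}

// managePodLoop serializes all syncs for one pod by draining its channel.
func (w *toyPodWorkers) managePodLoop(podUID string, updates <-chan string) {
	defer w.wg.Done()
	for u := range updates {
		if err := w.syncPodFn(podUID, u); err != nil {
			fmt.Printf("sync of pod %s failed: %v\n", podUID, err)
		}
	}
}

func main() {
	w := newToyPodWorkers(func(podUID, updateType string) error {
		fmt.Printf("syncPod(%s, %s)\n", podUID, updateType)
		return nil
	})
	w.UpdatePod("pod-a", "SyncPodCreate")
	w.UpdatePod("pod-a", "SyncPodUpdate")
	close(w.podUpdates["pod-a"])
	w.wg.Wait()
}
```

The real podWorkers adds bookkeeping on top of this so that at most one sync runs per pod at a time while only the latest undelivered update is retained.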

syncPod is the transaction script that syncs a single pod. Its main workflow is:

  • If the pod is being created, record the pod worker's start latency
  • Call generateAPIPodStatus to prepare a v1.PodStatus object for the pod; this holds the pod status, is written to statusManager, and is written back to the API server
  • If the pod is seen as running for the first time, record the pod's start latency
  • Update the pod's status in the status manager
  • Kill the pod if it should not be running
  • If the pod is a static pod without a mirror pod, create a mirror pod
  • Create data directories for the pod if they do not exist
  • Wait for volumes to be attached/mounted
  • Fetch the pod's pull secrets
  • Invoke the container runtime's SyncPod callback
  • Update reasonCache (which caches the most recent creation-failure reason for every container, used when generating container status)

The most important points in the comments above are:

  1. syncPod writes the pod's status back to the apiserver via the status manager
  2. It waits for volumes to be attached/mounted before continuing
  3. It invokes the container runtime's SyncPod

A condensed sketch of this whole workflow follows.
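Every name in the sketch below (toyPod, runtimeSyncPod, and so on) is an illustrative stand-in for a kubelet helper; it mirrors the step ordering described above but is not the real code:

```go
package main

import "fmt"

// Toy stand-ins for kubelet types; the real ones are far richer.
type toyPod struct {
	name     string
	isStatic bool
	runnable bool
}
type toyPodStatus struct{ phase string }

// Each function stands in for one step of kubelet.syncPod.
func generateAPIPodStatus(p *toyPod) toyPodStatus { return toyPodStatus{phase: "Pending"} }
func statusManagerSetPodStatus(p *toyPod, s toyPodStatus) {
	fmt.Println("status queued for apiserver write-back:", s.phase)
}
func killPod(p *toyPod) error           { fmt.Println("killing", p.name); return nil }
func ensureMirrorPod(p *toyPod)         { fmt.Println("ensuring mirror pod for", p.name) }
func makePodDataDirs(p *toyPod)         { fmt.Println("creating data dirs for", p.name) }
func waitForAttachAndMount(p *toyPod)   { fmt.Println("volumes attached/mounted for", p.name) }
func getPullSecrets(p *toyPod) []string { return nil }
func runtimeSyncPod(p *toyPod, s toyPodStatus, secrets []string) error {
	fmt.Println("container runtime SyncPod for", p.name)
	return nil
}

// syncPodSketch mirrors the ordering of kubelet.syncPod described above.
func syncPodSketch(p *toyPod) error {
	apiPodStatus := generateAPIPodStatus(p)    // 1. compute v1.PodStatus
	statusManagerSetPodStatus(p, apiPodStatus) // 2. status manager -> apiserver
	if !p.runnable {
		return killPod(p) // 3. kill pods that should not be running
	}
	if p.isStatic {
		ensureMirrorPod(p) // 4. static pod -> mirror pod
	}
	makePodDataDirs(p)       // 5. data directories
	waitForAttachAndMount(p) // 6. block until volumes are ready
	secrets := getPullSecrets(p)
	return runtimeSyncPod(p, apiPodStatus, secrets) // 7. CRI-level sync
}

func main() {
	_ = syncPodSketch(&toyPod{name: "tengine2-test", runnable: true})
}
```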

Now let's summarize:

syncLoop's main job is to bring pods to their desired state.
It also communicates with dockershim over gRPC, letting dockershim send container create/delete requests to Docker, and configures the pod network via CNI.
The pod that gets created is really the pause container plus the user's own containers (init containers, business containers). At this point SyncLoop has completed one iteration of its work; what each iteration actually does depends on the data it receives. Once the pod is created successfully, the status we see with kubectl get pods changes to Running.
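At the CRI level this boils down to a handful of gRPC calls. Below is a hedged sketch, not the kubelet's actual code path: it assumes the k8s.io/cri-api v1 package and a reachable CRI socket (the path is an assumption; dockershim historically used /var/run/dockershim.sock), and elides most of the sandbox/container configuration:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Plaintext is fine here: CRI is served over a local unix socket.
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "tengine2-test", Namespace: "default", Uid: "demo-uid",
		},
	}

	// 1. RunPodSandbox creates the "pause" container plus the pod's network
	//    namespace; this is the point where the runtime invokes CNI.
	sandbox, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	// 2. CreateContainer / 3. StartContainer for each user container.
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandbox.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "app"},
			Image:    &runtimeapi.ImageSpec{Image: "nginx:latest"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("sandbox:", sandbox.PodSandboxId, "container:", created.ContainerId)
}
```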

Writing back to the api-server

kubelet.syncPod updates the pod status in statusManager, but that step happens before the containers are created; after container creation completes, the kubelet's PLEG component takes over the syncing.

```go
func (kl *Kubelet) syncLoop(updates <-chan kubetypes.PodUpdate, handler SyncHandler) {
	klog.Info("Starting kubelet main sync loop.")
	// The syncTicker wakes up kubelet to checks if there are any pod workers
	// that need to be sync'd. A one-second period is sufficient because the
	// sync interval is defaulted to 10s.
	syncTicker := time.NewTicker(time.Second)
	defer syncTicker.Stop()
	housekeepingTicker := time.NewTicker(housekeepingPeriod)
	defer housekeepingTicker.Stop()
	// This is where PLEG's event channel is obtained.
	plegCh := kl.pleg.Watch()
	const (
		base   = 100 * time.Millisecond
		max    = 5 * time.Second
		factor = 2
	)
	duration := base
	for {
		if err := kl.runtimeState.runtimeErrors(); err != nil {
			klog.Infof("skipping pod synchronization - %v", err)
			// exponential backoff
			time.Sleep(duration)
			duration = time.Duration(math.Min(float64(max), factor*float64(duration)))
			continue
		}
		// reset backoff if we have a success
		duration = base

		kl.syncLoopMonitor.Store(kl.clock.Now())
		// The PLEG channel is passed into syncLoopIteration here.
		if !kl.syncLoopIteration(updates, handler, syncTicker.C, housekeepingTicker.C, plegCh) {
			break
		}
		kl.syncLoopMonitor.Store(kl.clock.Now())
	}
}
```

The data PLEG watches is fed into syncLoopIteration. PLEG (Pod Lifecycle Event Generator) generates events for the pod lifecycle by periodically observing container state.

syncLoopIteration watches PLEG's event channel; PLEG periodically detects changes to pods (and containers), generates events, and writes them into that channel. When syncLoopIteration selects this data, it handles it accordingly; a toy sketch of PLEG's producer side follows the code below.

```go
func (kl *Kubelet) syncLoopIteration(configCh <-chan kubetypes.PodUpdate, handler SyncHandler,
	syncCh <-chan time.Time, housekeepingCh <-chan time.Time, plegCh <-chan *pleg.PodLifecycleEvent) bool {
	select {
	...
	case e := <-plegCh:
		if isSyncPodWorthy(e) {
			// PLEG event for a pod; sync it.
			if pod, ok := kl.podManager.GetPodByUID(e.ID); ok {
				klog.V(2).Infof("SyncLoop (PLEG): %q, event: %#v", format.Pod(pod), e)
				handler.HandlePodSyncs([]*v1.Pod{pod})
			} else {
				// If the pod no longer exists, ignore the event.
				klog.V(4).Infof("SyncLoop (PLEG): ignore irrelevant event: %#v", e)
			}
		}
		if e.Type == pleg.ContainerDied {
			if containerID, ok := e.Data.(string); ok {
				kl.cleanUpContainersInPod(e.ID, containerID)
			}
		}
	...
	}
}
```
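For completeness, here is how the producer side of plegCh works in spirit. The real PLEG (pkg/kubelet/pleg) "relists" all containers from the runtime on a short period, diffs against its previous snapshot, and turns each difference into a PodLifecycleEvent. A self-contained toy version of that pattern (toy types, not the actual kubelet code):

```go
package main

import (
	"fmt"
	"time"
)

type PodLifecycleEventType string

const (
	ContainerStarted PodLifecycleEventType = "ContainerStarted"
	ContainerDied    PodLifecycleEventType = "ContainerDied"
)

type PodLifecycleEvent struct {
	ID   string // pod UID
	Type PodLifecycleEventType
	Data interface{} // e.g. container ID
}

// toyPLEG periodically relists container states and emits diffs as events.
type toyPLEG struct {
	eventCh  chan *PodLifecycleEvent
	old      map[string]string        // containerID -> last observed state
	listFn   func() map[string]string // runtime snapshot (stubbed out here)
	interval time.Duration
}

func (p *toyPLEG) Watch() <-chan *PodLifecycleEvent { return p.eventCh }

func (p *toyPLEG) Start() {
	// The ticker goroutine runs for the life of the process, like the real relist loop.
	go func() {
		for range time.Tick(p.interval) {
			current := p.listFn()
			for id, state := range current {
				if p.old[id] != state { // state changed since the last relist
					typ := ContainerStarted
					if state == "exited" {
						typ = ContainerDied
					}
					p.eventCh <- &PodLifecycleEvent{ID: "pod-uid", Type: typ, Data: id}
				}
			}
			p.old = current
		}
	}()
}

func main() {
	// Fake runtime snapshots: container c1 starts, then exits.
	snapshots := []map[string]string{{"c1": "running"}, {"c1": "exited"}}
	i := 0
	pleg := &toyPLEG{
		eventCh:  make(chan *PodLifecycleEvent, 8),
		old:      map[string]string{},
		interval: 100 * time.Millisecond,
		listFn: func() map[string]string {
			s := snapshots[i%len(snapshots)]
			i++
			return s
		},
	}
	pleg.Start()
	for j := 0; j < 2; j++ {
		e := <-pleg.Watch()
		fmt.Printf("event: %s %v\n", e.Type, e.Data)
	}
}
```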

The pod status (ContainerStarted, for a pod's first transition to Running after creation) is then updated into statusManager. Tracing the code further, the data is written into statusManager's podStatusChannel. statusManager selects on this channel from the moment it starts, and its syncPod calls kubeClient to update the pod's status in the apiserver:

```go
func (m *manager) Start() {
	// Don't start the status manager if we don't have a client. This will happen
	// on the master, where the kubelet is responsible for bootstrapping the pods
	// of the master components.
	if m.kubeClient == nil {
		klog.Infof("Kubernetes client is nil, not starting status manager.")
		return
	}

	klog.Info("Starting to sync pod status with apiserver")
	//lint:ignore SA1015 Ticker can link since this is only called once and doesn't handle termination.
	syncTicker := time.Tick(syncPeriod)
	// syncPod and syncBatch share the same go routine to avoid sync races.
	go wait.Forever(func() {
		select {
		case syncRequest := <-m.podStatusChannel:
			klog.V(5).Infof("Status Manager: syncing pod: %q, with status: (%d, %v) from podStatusChannel",
				syncRequest.podUID, syncRequest.status.version, syncRequest.status.status)
			m.syncPod(syncRequest.podUID, syncRequest.status)
		case <-syncTicker:
			m.syncBatch()
		}
	}, 0)
}
```

The syncPod function calls statusutil.PatchPodStatus(m.kubeClient, pod.Namespace, pod.Name, *oldStatus, mergePodStatus(*oldStatus, status.status)) to sync the status to the api-server:

```go
// syncPod syncs the given status with the API server. The caller must not hold the lock.
func (m *manager) syncPod(uid types.UID, status versionedPodStatus) {
	if !m.needsUpdate(uid, status) {
		klog.V(1).Infof("Status for pod %q is up-to-date; skipping", uid)
		return
	}

	// TODO: make me easier to express from client code
	pod, err := m.kubeClient.CoreV1().Pods(status.podNamespace).Get(status.podName, metav1.GetOptions{})
	if errors.IsNotFound(err) {
		klog.V(3).Infof("Pod %q (%s) does not exist on the server", status.podName, uid)
		// If the Pod is deleted the status will be cleared in
		// RemoveOrphanedStatuses, so we just ignore the update here.
		return
	}
	if err != nil {
		klog.Warningf("Failed to get status for pod %q: %v", format.PodDesc(status.podName, status.podNamespace, uid), err)
		return
	}

	translatedUID := m.podManager.TranslatePodUID(pod.UID)
	// Type convert original uid just for the purpose of comparison.
	if len(translatedUID) > 0 && translatedUID != kubetypes.ResolvedPodUID(uid) {
		klog.V(2).Infof("Pod %q was deleted and then recreated, skipping status update; old UID %q, new UID %q", format.Pod(pod), uid, translatedUID)
		m.deletePodStatus(uid)
		return
	}

	oldStatus := pod.Status.DeepCopy()
	newPod, patchBytes, err := statusutil.PatchPodStatus(m.kubeClient, pod.Namespace, pod.Name, *oldStatus, mergePodStatus(*oldStatus, status.status))
	klog.V(3).Infof("Patch status for pod %q with %q", format.Pod(pod), patchBytes)
	if err != nil {
		klog.Warningf("Failed to update status for pod %q: %v", format.Pod(pod), err)
		return
	}
	pod = newPod

	klog.V(3).Infof("Status for pod %q updated successfully: (%d, %+v)", format.Pod(pod), status.version, status.status)
	m.apiStatusVersions[kubetypes.MirrorPodUID(pod.UID)] = status.version

	// We don't handle graceful deletion of mirror pods.
	if m.canBeDeleted(pod, status.status) {
		deleteOptions := metav1.NewDeleteOptions(0)
		// Use the pod UID as the precondition for deletion to prevent deleting a newly created pod with the same name and namespace.
		deleteOptions.Preconditions = metav1.NewUIDPreconditions(string(pod.UID))
		err = m.kubeClient.CoreV1().Pods(pod.Namespace).Delete(pod.Name, deleteOptions)
		if err != nil {
			klog.Warningf("Failed to delete status for pod %q: %v", format.Pod(pod), err)
			return
		}
		klog.V(3).Infof("Pod %q fully terminated and removed from etcd", format.Pod(pod))
		m.deletePodStatus(uid)
	}
}
```

PatchPodStatus is defined in pkg/util/pod/pod.go; this is what actually talks to the api-server:

```go
// PatchPodStatus patches pod status.
func PatchPodStatus(c clientset.Interface, namespace, name string, oldPodStatus, newPodStatus v1.PodStatus) (*v1.Pod, []byte, error) {
	patchBytes, err := preparePatchBytesforPodStatus(namespace, name, oldPodStatus, newPodStatus)
	if err != nil {
		return nil, nil, err
	}

	updatedPod, err := c.CoreV1().Pods(namespace).Patch(name, types.StrategicMergePatchType, patchBytes, "status")
	if err != nil {
		return nil, nil, fmt.Errorf("failed to patch status %q for pod %q/%q: %v", patchBytes, namespace, name, err)
	}
	return updatedPod, patchBytes, nil
}
```

Next, the client-go client is used to make the RESTful call.
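As an aside, the same status patch can be issued from any standalone program with client-go. A minimal sketch, assuming a kubeconfig at an illustrative path and a recent client-go where Patch takes a context (the pod name/namespace are also illustrative):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig (path is an assumption; in-cluster config also works).
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// A strategic merge patch against the pod's status subresource,
	// mirroring what the kubelet's status manager sends.
	patch := []byte(`{"status":{"message":"patched from client-go"}}`)
	pod, err := clientset.CoreV1().Pods("default").Patch(
		context.TODO(), "tengine2-test",
		types.StrategicMergePatchType, patch,
		metav1.PatchOptions{}, "status",
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("patched pod:", pod.Name, pod.Status.Phase)
}
```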

client-go

client-go/kubernetes/typed/core/v1/pod.go

```go
type PodInterface interface {
	Create(*v1.Pod) (*v1.Pod, error)
	Update(*v1.Pod) (*v1.Pod, error)
	UpdateStatus(*v1.Pod) (*v1.Pod, error)
	Delete(name string, options *metav1.DeleteOptions) error
	DeleteCollection(options *metav1.DeleteOptions, listOptions metav1.ListOptions) error
	Get(name string, options metav1.GetOptions) (*v1.Pod, error)
	List(opts metav1.ListOptions) (*v1.PodList, error)
	Watch(opts metav1.ListOptions) (watch.Interface, error)
	Patch(name string, pt types.PatchType, data []byte, subresources ...string) (result *v1.Pod, err error)
	GetEphemeralContainers(podName string, options metav1.GetOptions) (*v1.EphemeralContainers, error)
	UpdateEphemeralContainers(podName string, ephemeralContainers *v1.EphemeralContainers) (*v1.EphemeralContainers, error)
	PodExpansion
}

// Patch applies the patch and returns the patched pod.
func (c *pods) Patch(name string, pt types.PatchType, data []byte, subresources ...string) (result *v1.Pod, err error) {
	result = &v1.Pod{}
	err = c.client.Patch(pt).
		Namespace(c.ns).
		Resource("pods").
		SubResource(subresources...).
		Name(name).
		Body(data).
		// Note this Do method.
		Do().
		Into(result)
	return
}
```

The Do method is called next; the underlying REST client is defined in:

kubernetes\staging\src\k8s.io\client-go\rest\client.go

```go
type Interface interface {
	GetRateLimiter() flowcontrol.RateLimiter
	Verb(verb string) *Request
	Post() *Request
	Put() *Request
	Patch(pt types.PatchType) *Request
	Get() *Request
	Delete() *Request
	APIVersion() schema.GroupVersion
}

// Patch begins a PATCH request. Short for c.Verb("Patch").
func (c *RESTClient) Patch(pt types.PatchType) *Request {
	return c.Verb("PATCH").SetHeader("Content-Type", string(pt))
}

func (c *RESTClient) Verb(verb string) *Request {
	backoff := c.createBackoffMgr()
	if c.Client == nil {
		return NewRequest(nil, verb, c.base, c.versionedAPIPath, c.contentConfig, c.serializers, backoff, c.Throttle, 0)
	}
	return NewRequest(c.Client, verb, c.base, c.versionedAPIPath, c.contentConfig, c.serializers, backoff, c.Throttle, c.Client.Timeout)
}
```

kubernetes\staging\src\k8s.io\client-go\rest\request.go

```go
func NewRequest(client HTTPClient, verb string, baseURL *url.URL, versionedAPIPath string, content ContentConfig, serializers Serializers, backoff BackoffManager, throttle flowcontrol.RateLimiter, timeout time.Duration) *Request {
	if backoff == nil {
		klog.V(2).Infof("Not implementing request backoff strategy.")
		backoff = &NoBackoff{}
	}

	pathPrefix := "/"
	if baseURL != nil {
		pathPrefix = path.Join(pathPrefix, baseURL.Path)
	}
	r := &Request{
		client:      client,
		verb:        verb,
		baseURL:     baseURL,
		pathPrefix:  path.Join(pathPrefix, versionedAPIPath),
		content:     content,
		serializers: serializers,
		backoffMgr:  backoff,
		throttle:    throttle,
		timeout:     timeout,
	}
	switch {
	case len(content.AcceptContentTypes) > 0:
		r.SetHeader("Accept", content.AcceptContentTypes)
	case len(content.ContentType) > 0:
		r.SetHeader("Accept", content.ContentType+", */*")
	}
	return r
}
```

```go
func (r *Request) Do() Result {
	if err := r.tryThrottle(); err != nil {
		return Result{err: err}
	}

	var result Result
	err := r.request(func(req *http.Request, resp *http.Response) {
		result = r.transformResponse(resp, req)
	})
	if err != nil {
		return Result{err: err}
	}
	return result
}
```

For our status patch, the Request fields end up populated as:

```go
r.resource = "pods"
r.namespace = "default"
r.subresource = "status"
r.resourceName = "tengine2-test"
r.body = bytes.NewReader(data)
```

```go
func (r *Request) request(fn func(*http.Request, *http.Response)) error {
	//Metrics for total request latency
	start := time.Now()
	defer func() {
		metrics.RequestLatency.Observe(r.verb, r.finalURLTemplate(), time.Since(start))
	}()

	if r.err != nil {
		klog.V(4).Infof("Error in request: %v", r.err)
		return r.err
	}

	// TODO: added to catch programmer errors (invoking operations with an object with an empty namespace)
	if (r.verb == "GET" || r.verb == "PUT" || r.verb == "DELETE") && r.namespaceSet && len(r.resourceName) > 0 && len(r.namespace) == 0 {
		return fmt.Errorf("an empty namespace may not be set when a resource name is provided")
	}
	if (r.verb == "POST") && r.namespaceSet && len(r.namespace) == 0 {
		return fmt.Errorf("an empty namespace may not be set during creation")
	}

	client := r.client
	if client == nil {
		client = http.DefaultClient
	}

	// Right now we make about ten retry attempts if we get a Retry-After response.
	maxRetries := 10
	retries := 0
	for {
		// r.URL() generates the url for the request.
		url := r.URL().String()
		req, err := http.NewRequest(r.verb, url, r.body)
		...
	}
}
```

URL() assembles the final path from the Request fields:

```go
func (r *Request) URL() *url.URL {
	p := r.pathPrefix
	if r.namespaceSet && len(r.namespace) > 0 {
		p = path.Join(p, "namespaces", r.namespace)
	}
	if len(r.resource) != 0 {
		p = path.Join(p, strings.ToLower(r.resource))
	}
	// Join trims trailing slashes, so preserve r.pathPrefix's trailing slash for backwards compatibility if nothing was changed
	if len(r.resourceName) != 0 || len(r.subpath) != 0 || len(r.subresource) != 0 {
		p = path.Join(p, r.resourceName, r.subresource, r.subpath)
	}

	finalURL := &url.URL{}
	...
}
```

Given the Request fields above, the final URL is prefix/namespaces/default/pods/tengine2-test/status, and the body is the strategic merge patch computed from the old and new status.
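To see what that body actually looks like, here is a runnable sketch of what preparePatchBytesforPodStatus effectively does: diff the old and new status as a two-way strategic merge patch (the phase values below are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

func main() {
	oldPod := v1.Pod{Status: v1.PodStatus{Phase: v1.PodPending}}
	newPod := v1.Pod{Status: v1.PodStatus{Phase: v1.PodRunning}}

	oldData, _ := json.Marshal(oldPod)
	newData, _ := json.Marshal(newPod)

	// Two-way strategic merge patch, keyed on the v1.Pod schema --
	// the same mechanism PatchPodStatus uses to build patchBytes.
	patchBytes, err := strategicpatch.CreateTwoWayMergePatch(oldData, newData, v1.Pod{})
	if err != nil {
		panic(err)
	}
	// Prints something like: {"status":{"phase":"Running"}}
	fmt.Println(string(patchBytes))
}
```

The printed patch is exactly the kind of minimal body the kubelet PATCHes to the pod's status subresource.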

With that, the flow of sending the status to the api-server is complete.
