An Introduction to the KEDA http-add-on

Kubernetes-based Event Driven Autoscaling - HTTP Add-On

The KEDA HTTP Add On allows Kubernetes users to automatically scale their HTTP servers up and down (including to/from zero) based on incoming HTTP traffic. Please see our use cases document to learn more about how and why you would use this project.

Installation & demo

  1. Install KEDA

    helm repo add kedacore https://kedacore.github.io/charts
    helm repo update
    kubectl create namespace keda
    helm install keda kedacore/keda --namespace keda
  2. Install keda-http-addon

    This step automatically creates an httpscaledobjects.http.keda.sh resource named xkcd and an ingress resource named xkcd

    helm install http-add-on kedacore/keda-add-ons-http -n keda \
      --set interceptor.replicas.waitTimeout=20s \
      --set images.tag=v0.2.0.RC1 \
      --set images.operator=arschles/http-addon-operator \
      --set images.interceptor=arschles/http-addon-interceptor \
      --set images.scaler=arschles/http-addon-scaler

    The following is a failed installation attempt, kept for reference only

    1. Install keda-http-addon

      This step automatically creates an httpscaledobjects.http.keda.sh resource named xkcd and an ingress resource named xkcd

      helm install http-add-on kedacore/keda-add-ons-http --namespace keda
    2. Check the installation

      # helm list -n keda
      NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
      http-add-on keda 1 2021-11-12 10:32:36.467128586 +0800 CST deployed keda-add-ons-http-0.2.0 0.2.0
      keda keda 1 2021-11-12 10:32:31.378006707 +0800 CST deployed keda-2.4.0 2.4.0

      # kubectl get po -n keda
      NAME READY STATUS RESTARTS AGE
      keda-add-ons-http-controller-manager-76b9d5b57b-59kw6 1/2 CrashLoopBackOff 5 3m51s
      keda-add-ons-http-external-scaler-6b97695b5d-ztqkt 1/1 Running 0 3m51s
      keda-add-ons-http-interceptor-6d4c9bc7c4-fjgwl 0/1 CrashLoopBackOff 5 3m51s
      keda-operator-5c569bb794-xwnlb 1/1 Running 0 3m56s
      keda-operator-metrics-apiserver-78f4f687dd-z5c42 1/1 Running 0 3m56s

      # kubectl get deployments.apps -n keda keda-add-ons-http-controller-manager -o=jsonpath='{.spec.template.spec.containers[1].image}'
      ghcr.io/kedacore/http-add-on-operator:latest

      # kubectl get deployments.apps -n keda keda-add-ons-http-interceptor -o=jsonpath='{.spec.template.spec.containers[0].image}'
      ghcr.io/kedacore/http-add-on-interceptor:latest

      # kubectl get deployments.apps -n keda keda-add-ons-http-external-scaler -o=jsonpath='{.spec.template.spec.containers[0].image}'
      ghcr.io/kedacore/http-add-on-scaler:latest

      # docker image ls|grep keda
      ghcr.io/kedacore/http-add-on-operator latest b46817d83423 30 hours ago 46.4MB
      ghcr.io/kedacore/http-add-on-scaler latest 26aa34c0b77a 30 hours ago 44.5MB
      ghcr.io/kedacore/http-add-on-interceptor latest 27512c0642db 30 hours ago 41.9MB
      ghcr.io/kedacore/keda-metrics-apiserver 2.4.0 c311fb6c4ea8 3 months ago 96.5MB
      ghcr.io/kedacore/keda 2.4.0 3e08f92f2f84 3 months ago 84.2MB
    3. Inspect the failing pods

      # kubectl logs -f keda-add-ons-http-controller-manager-76b9d5b57b-59kw6 -n keda keda-add-ons-http-operator
      flag provided but not defined: -admin-port
      Usage of /operator:
      -enable-leader-election
      Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager.
      -kubeconfig string
      Paths to a kubeconfig. Only required if out-of-cluster.
      -metrics-addr string
      The address the metric endpoint binds to. (default ":8080")

      # kubectl logs -f keda-add-ons-http-interceptor-6d4c9bc7c4-fjgwl -n keda keda-add-ons-http-interceptor
      panic: required key KEDA_HTTP_APP_SERVICE_NAME missing value

      goroutine 1 [running]:
      github.com/kelseyhightower/envconfig.MustProcess(...)
      /go/pkg/mod/github.com/kelseyhightower/envconfig@v1.4.0/envconfig.go:233
      github.com/kedacore/http-add-on/interceptor/config.MustParseOrigin(0xc000049740)
      /go/src/github.com/kedahttp/http-add-on/interceptor/config/origin.go:36 +0x97
      main.main()
      /go/src/github.com/kedahttp/http-add-on/interceptor/main.go:28 +0x45
    4. Build the images locally from the main branch

      Tools such as mage, make, controller-gen, and go are required

      git clone https://github.com/kedacore/http-add-on.git
      cd http-add-on
      export KEDAHTTP_OPERATOR_IMAGE='zephyrfish/http-add-on-operator:latest'
      export KEDAHTTP_SCALER_IMAGE='zephyrfish/http-add-on-scaler:latest'
      export KEDAHTTP_INTERCEPTOR_IMAGE='zephyrfish/http-add-on-interceptor:latest'
      mage dockerBuild

      Comparing the locally built images with the official ones

      # docker image ls|grep keda
      zephyrfish/keda-http-addon-operator latest 000a86376580 23 hours ago 48.4MB
      zephyrfish/keda-http-addon-interceptor latest 59942974135b 23 hours ago 43.4MB
      zephyrfish/keda-http-addon-scaler latest 0a6c2bafcd94 25 hours ago 45.7MB
      ghcr.io/kedacore/http-add-on-operator latest b46817d83423 30 hours ago 46.4MB
      ghcr.io/kedacore/http-add-on-scaler latest 26aa34c0b77a 30 hours ago 44.5MB
      ghcr.io/kedacore/http-add-on-interceptor latest 27512c0642db 30 hours ago 41.9MB
    5. After pushing the images, replace the image configuration of the keda-http-addon components

  3. Check the keda-http-addon components

    # kubectl get po -n keda
    NAME READY STATUS RESTARTS AGE
    keda-add-ons-http-controller-manager-657fff6f49-8ntn7 2/2 Running 0 16s
    keda-add-ons-http-external-scaler-74fbcb959-j65z4 1/1 Running 0 9s
    keda-add-ons-http-interceptor-75f95798dc-7tmrd 1/1 Running 0 13s
    keda-operator-5c569bb794-xwnlb 1/1 Running 0 34m
    keda-operator-metrics-apiserver-78f4f687dd-z5c42 1/1 Running 0 34m
  4. Deploy the xkcd example

    cd http-add-on
    helm install xkcd ./examples/xkcd -n keda

    Check the status of the xkcd example

    # kubectl rollout status deployment -n keda xkcd
    deployment "xkcd" successfully rolled out
  5. Deploy ingress-nginx as the entry point

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update
    helm install ingress-nginx ingress-nginx/ingress-nginx -n keda

    Check the status of the ingress-nginx deployment

    # kubectl rollout status -n keda deployment ingress-nginx-controller
    deployment "ingress-nginx-controller" successfully rolled out

    Use the node address as the load-balancer address

    Assume the node address here is 192.168.0.4

    kubectl patch svc -n keda ingress-nginx-controller \
    -p '{"spec": {"externalIPs": ["192.168.0.4"]}}'

    Check the status of the ingress-nginx service

    # kubectl get svc -n keda ingress-nginx-controller
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    ingress-nginx-controller LoadBalancer 10.233.15.182 192.168.0.4 80:31853/TCP,443:31165/TCP 4m51s
  6. Other configuration

    Current status of the resources

    # kubectl get httpscaledobjects.http.keda.sh -n keda
    NAME SCALETARGETDEPLOYMENTNAME SCALETARGETSERVICENAME SCALETARGETPORT MINREPLICAS MAXREPLICAS AGE ACTIVE
    xkcd {"deployment":"xkcd","port":8080,"service":"xkcd"} 10 13m

    # kubectl get scaledobjects.keda.sh -n keda
    NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE
    keda-add-ons-http-interceptor apps/v1.Deployment keda-add-ons-http-interceptor 1 50 external True True False 83m
    xkcd-app apps/v1.Deployment xkcd 0 10 external-push True False False 12m

    # kubectl get hpa -n keda
    NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
    keda-hpa-keda-add-ons-http-interceptor Deployment/keda-add-ons-http-interceptor 0/200 (avg) 1 50 1 85m
    keda-hpa-xkcd-app Deployment/xkcd 0/100 (avg) 1 10 1 14m

    # kubectl get ingress -n keda
    NAME CLASS HOSTS ADDRESS PORTS AGE
    xkcd <none> myhost.com 192.168.0.4 80 15m

    Configure the corresponding DNS entry for the node

    echo '192.168.0.4  myhost.com' >> /etc/hosts
  7. Verify

    Access myhost.com and watch the xkcd pods change.

    # curl myhost.com
    Hello from XKCD-serv! 👋

    # kubectl get po -n keda -w
    NAME READY STATUS RESTARTS AGE
    xkcd-84459f7fbf-d4nm8 0/1 Pending 0 0s
    xkcd-84459f7fbf-d4nm8 0/1 Pending 0 0s
    xkcd-84459f7fbf-d4nm8 0/1 ContainerCreating 0 0s
    xkcd-84459f7fbf-d4nm8 0/1 ContainerCreating 0 2s
    xkcd-84459f7fbf-d4nm8 0/1 Running 0 5s
    xkcd-84459f7fbf-d4nm8 1/1 Running 0 10s

Structs & services

The Table struct

Table maintains a mapping table from host to the target backend service

// pkg/routing/table.go

type Table struct {
    fmt.Stringer
    m map[string]Target
    l *sync.RWMutex
}

type Target struct {
    Service               string `json:"service"`
    Port                  int    `json:"port"`
    Deployment            string `json:"deployment"`
    TargetPendingRequests int32  `json:"target"`
}
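
Reads and writes on the map `m` go through the RWMutex `l`. Below is a minimal, self-contained sketch of how such a table can expose concurrency-safe lookup and insertion; the `Lookup` and `AddTarget` names are illustrative, not necessarily the add-on's actual method names:

```go
package main

import (
    "fmt"
    "sync"
)

// Target mirrors the struct from pkg/routing/table.go.
type Target struct {
    Service               string `json:"service"`
    Port                  int    `json:"port"`
    Deployment            string `json:"deployment"`
    TargetPendingRequests int32  `json:"target"`
}

// Table maps a request host to its backend Target.
type Table struct {
    m map[string]Target
    l sync.RWMutex
}

func NewTable() *Table {
    return &Table{m: map[string]Target{}}
}

// Lookup returns the Target for host under a read lock.
func (t *Table) Lookup(host string) (Target, bool) {
    t.l.RLock()
    defer t.l.RUnlock()
    tgt, ok := t.m[host]
    return tgt, ok
}

// AddTarget registers or replaces the Target for host under a write lock.
func (t *Table) AddTarget(host string, tgt Target) {
    t.l.Lock()
    defer t.l.Unlock()
    t.m[host] = tgt
}

func main() {
    tbl := NewTable()
    tbl.AddTarget("myhost.com", Target{Service: "xkcd", Port: 8080, Deployment: "xkcd", TargetPendingRequests: 100})
    if tgt, ok := tbl.Lookup("myhost.com"); ok {
        fmt.Printf("myhost.com -> %s:%d\n", tgt.Service, tgt.Port)
    }
}
```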

The Memory struct

Memory is the in-memory-queue implementation of Counter; it records the number of in-flight requests for each host

// pkg/queue/queue.go

// Memory is a Counter implementation that
// holds the HTTP queue in memory only. Always use
// NewMemory to create one of these.
type Memory struct {
    countMap map[string]int
    mut      *sync.RWMutex
}

type Counter interface {
    CountReader
    // Resize resizes the queue size by delta for the given host.
    Resize(host string, delta int) error
    // Ensure ensures that host is represented in this counter.
    // If host already has a nonzero value, then it is unchanged. If
    // it is missing, it is set to 0.
    Ensure(host string)
    // Remove tries to remove the given host and its
    // associated counts from the queue. returns true if it existed,
    // false otherwise.
    Remove(host string) bool
}
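
A minimal sketch of how Memory can satisfy Resize, Ensure, and Remove. This is simplified: the CountReader side is reduced here to an illustrative `Current` snapshot helper, which is not part of the real interface:

```go
package main

import (
    "fmt"
    "sync"
)

// Memory holds per-host pending-request counts, guarded by a RWMutex.
type Memory struct {
    countMap map[string]int
    mut      sync.RWMutex
}

func NewMemory() *Memory {
    return &Memory{countMap: map[string]int{}}
}

// Resize adjusts the count for host by delta.
func (m *Memory) Resize(host string, delta int) error {
    m.mut.Lock()
    defer m.mut.Unlock()
    m.countMap[host] += delta
    return nil
}

// Ensure makes sure host exists in the map, defaulting to 0.
func (m *Memory) Ensure(host string) {
    m.mut.Lock()
    defer m.mut.Unlock()
    if _, ok := m.countMap[host]; !ok {
        m.countMap[host] = 0
    }
}

// Remove deletes host and reports whether it was present.
func (m *Memory) Remove(host string) bool {
    m.mut.Lock()
    defer m.mut.Unlock()
    _, ok := m.countMap[host]
    delete(m.countMap, host)
    return ok
}

// Current returns a snapshot of all counts (illustrative helper).
func (m *Memory) Current() map[string]int {
    m.mut.RLock()
    defer m.mut.RUnlock()
    out := make(map[string]int, len(m.countMap))
    for k, v := range m.countMap {
        out[k] = v
    }
    return out
}

func main() {
    q := NewMemory()
    q.Resize("myhost.com", +1)
    q.Resize("myhost.com", +1)
    q.Resize("myhost.com", -1)
    fmt.Println(q.Current()["myhost.com"]) // 1
}
```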

The operator service

Startup tasks:

  1. Read the configuration from environment variables
  2. Via ensureConfigMap(), confirm that a configmap named keda-http-routing-table exists in the specified namespace of the cluster. If http-add-on was installed with helm, this cm is created automatically
  3. Create a routing table (Table) routingTable
  4. Create and start a Manager for reconciling HTTPScaledObject resources (passing in routingTable)
  5. Start the AdminServer, which serves the routingTable to the interceptor (via /routing_table)

HTTPScaledObject Reconcile logic:

  1. Build an AppInfo instance

    appInfo := config.AppInfo{
        Name:                 httpso.Spec.ScaleTargetRef.Deployment,
        Namespace:            req.Namespace,
        InterceptorConfig:    rec.InterceptorConfig,
        ExternalScalerConfig: rec.ExternalScalerConfig,
    }
  2. Based on appInfo, create the associated scaledObject resource, named "<scale target name>-app"

  3. Generate the new routing.Target content and update it into the routingTable maintained by the operator
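
Steps 2 and 3 boil down to deriving names from the HTTPScaledObject spec and serializing the updated table into the routing-table JSON. A rough sketch under those assumptions (the ScaledObject creation itself is omitted, and the default target of 100 matches the value seen in the ConfigMap later in this post):

```go
package main

import (
    "encoding/json"
    "fmt"
)

// ScaleTargetRef carries the fields taken from the HTTPScaledObject spec.
type ScaleTargetRef struct {
    Deployment string
    Service    string
    Port       int
}

// Target is the routing entry stored per host.
type Target struct {
    Service               string `json:"service"`
    Port                  int    `json:"port"`
    Deployment            string `json:"deployment"`
    TargetPendingRequests int32  `json:"target"`
}

// scaledObjectName derives the associated ScaledObject name ("<target>-app").
func scaledObjectName(ref ScaleTargetRef) string {
    return ref.Deployment + "-app"
}

// updateTable writes the new Target for host and returns the routing-table
// JSON in the shape stored in the keda-http-routing-table ConfigMap.
func updateTable(table map[string]Target, host string, ref ScaleTargetRef) ([]byte, error) {
    table[host] = Target{
        Service:               ref.Service,
        Port:                  ref.Port,
        Deployment:            ref.Deployment,
        TargetPendingRequests: 100, // default observed in the ConfigMap
    }
    return json.Marshal(table)
}

func main() {
    ref := ScaleTargetRef{Deployment: "xkcd", Service: "xkcd", Port: 8080}
    fmt.Println(scaledObjectName(ref)) // xkcd-app
    b, _ := updateTable(map[string]Target{}, "myhost.com", ref)
    fmt.Println(string(b))
}
```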

The interceptor service

Startup tasks:

  1. Read the configuration from environment variables
  2. Create an in-memory queue (Memory) q to record the request count per host, and create a routing table (Table) routingTable
  3. Start a watcher on the deployments resource cache for the current namespace
  4. Start a watcher on the configmap resource cache for the current namespace, watching for changes to the cm named keda-http-routing-table; when a change is observed, update the routes in routingTable
  5. Start the AdminServer, which serves /queue (return the in-memory queue q), /routing_table (return the routing table), /routing_ping (fetch the latest routing table from the cm and save it in local memory), and /deployments (return the local deployments cache)
  6. Start the ProxyServer, which forwards requests to the application's backend service, maintaining a context along the way

The scaler service

Startup tasks:

  1. Read the configuration from environment variables
  2. Create a queuePinger instance, which calls the interceptor's adminserver to fetch the counts of the in-memory queue q maintained by the interceptor (via the interceptor's /queue endpoint) and aggregates them
  3. Create a routing table (Table) table
  4. Register the external-push scaler
  5. Start a watcher on the configmap resource cache for the current namespace, watching for changes to the cm named keda-http-routing-table; when a change is observed, update table
  6. Start the health-check service

Principles & workflow

arch.png

  1. Create a deployment named xkcd and its corresponding service resource (user action)

    # kubectl get deployments.apps -n keda xkcd
    NAME READY UP-TO-DATE AVAILABLE AGE
    xkcd 0/0 0 0 3d1h

    # kubectl get service -n keda xkcd
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    xkcd ClusterIP 10.233.1.29 <none> 8080/TCP 3d1h
  2. Create an httpscaledobjects resource (user action)

    # kubectl get httpscaledobjects.http.keda.sh -n keda
    NAME SCALETARGETDEPLOYMENTNAME SCALETARGETSERVICENAME SCALETARGETPORT MINREPLICAS MAXREPLICAS AGE ACTIVE
    xkcd {"deployment":"xkcd","port":8080,"service":"xkcd"} 0 10 2d3h

    Notice that http-add-on generated two scaledobjects resources (keda http-add-on action)

    xkcd-app corresponds to the xkcd deployment

    keda-add-ons-http-interceptor corresponds to the keda-add-ons-http-interceptor deployment

    # kubectl get scaledobjects.keda.sh -n keda
    NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE
    keda-add-ons-http-interceptor apps/v1.Deployment keda-add-ons-http-interceptor 1 50 external True True False 2d4h
    xkcd-app apps/v1.Deployment xkcd 0 10 external-push True False False 2d3h

    Inspect the content of the hso resource

    apiVersion: http.keda.sh/v1alpha1
    kind: HTTPScaledObject
    metadata:
      creationTimestamp: "2021-11-13T02:52:21Z"
      finalizers:
      - httpscaledobject.http.keda.sh
      generation: 4
      name: xkcd
      namespace: keda
      resourceVersion: "1093466"
      uid: 3427a880-edae-4522-ad88-d81b530a53b9
    spec:
      host: myhost.com # host name of the service
      replicas:
        max: 10 # maximum replica count
        min: 0 # minimum replica count
      scaleTargetRef:
        deployment: xkcd # name of the target workload to scale
        port: 8080
        service: xkcd # name of the associated service
  3. Observe the content of the configmap named keda-http-routing-table at this point

    apiVersion: v1
    data:
      routing-table: |
        {"myhost.com":{"service":"xkcd","port":8080,"deployment":"xkcd","target":100}}
    kind: ConfigMap
    metadata:
      annotations:
        meta.helm.sh/release-name: http-add-on
        meta.helm.sh/release-namespace: keda
      creationTimestamp: "2021-11-13T02:26:38Z"
      labels:
        app: http-add-on
        app.kubernetes.io/managed-by: Helm
        control-plane: operator
        keda.sh/addon: http-add-on
        name: http-add-on-routing-table
      name: keda-http-routing-table
      namespace: keda
      resourceVersion: "1047502"
      uid: 8a1c31e5-0639-4310-a6d7-ea4e4c7b0c03
  4. Create an ingress resource (user action)

    Following the official documentation, this example uses ingress-nginx as the gateway

    # kubectl get ingress -n keda
    NAME CLASS HOSTS ADDRESS PORTS AGE
    xkcd <none> myhost.com 192.168.0.4 80 3d1h

    Note the ingress configuration:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        kubernetes.io/ingress.class: nginx
        meta.helm.sh/release-name: xkcd
        meta.helm.sh/release-namespace: keda
        nginx.ingress.kubernetes.io/rewrite-target: /
      creationTimestamp: "2021-11-12T04:53:36Z"
      generation: 1
      labels:
        app.kubernetes.io/managed-by: Helm
      name: xkcd
      namespace: keda
      resourceVersion: "708217"
      uid: cfa08023-3391-4ae5-9b0a-4adfa1192e87
    spec:
      rules:
      - host: myhost.com # host name of the service
        http:
          paths:
          - backend:
              service:
                name: keda-add-ons-http-interceptor-proxy # the service name of the interceptor proxy
                port:
                  number: 8080
            path: /
            pathType: Prefix
    status:
      loadBalancer:
        ingress:
        - ip: 192.168.0.4 # the node address
  5. Run curl http://myhost.com

    This is equivalent to curl -H "host: myhost.com" http://keda-add-ons-http-interceptor-proxy:8080

    Based on the host in the request header, the interceptor forwards the request to the corresponding application service and updates the count for that host in the in-memory queue q (keda http-add-on action):

    // countMiddleware adds 1 to the given queue counter, executes next
    // (by calling ServeHTTP on it), then decrements the queue counter
    func countMiddleware(
        lggr logr.Logger,
        q queue.Counter,
        next nethttp.Handler,
    ) nethttp.Handler {
        return nethttp.HandlerFunc(func(w nethttp.ResponseWriter, r *nethttp.Request) {
            host, err := getHost(r)
            if err != nil {
                lggr.Error(err, "not forwarding request")
                w.WriteHeader(400)
                w.Write([]byte("Host not found, not forwarding request"))
                return
            }
            if err := q.Resize(host, +1); err != nil {
                log.Printf("Error incrementing queue for %q (%s)", r.RequestURI, err)
            }
            defer func() {
                if err := q.Resize(host, -1); err != nil {
                    log.Printf("Error decrementing queue for %q (%s)", r.RequestURI, err)
                }
            }()
            next.ServeHTTP(w, r)
        })
    }
  6. Look again at the scaledobjects resource named xkcd-app created earlier

    It is an external-push type scaler; see KEDA | External Scalers for more about scalers

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      creationTimestamp: "2021-11-13T02:52:21Z"
      finalizers:
      - finalizer.keda.sh
      generation: 1
      labels:
        app: kedahttp-xkcd-app
        name: xkcd-app
        scaledobject.keda.sh/name: xkcd-app
      name: xkcd-app
      namespace: keda
      resourceVersion: "1857700"
      uid: b7b88292-e669-4f8c-9ac9-024909a409a3
    spec:
      maxReplicaCount: 10
      minReplicaCount: 0
      pollingInterval: 1
      scaleTargetRef:
        kind: Deployment
        name: xkcd
      triggers:
      - metadata:
          host: myhost.com
          scalerAddress: keda-add-ons-http-external-scaler.keda.svc.cluster.local:9090
        type: external-push
    status:
      externalMetricNames:
      - myhost.com
      health:
        myhost.com:
          numberOfFailures: 0
          status: Happy
      lastActiveTime: "2021-11-15T07:18:57Z"
      originalReplicaCount: 5
      scaleTargetGVKR:
        group: apps
        kind: Deployment
        resource: deployments
        version: v1
      scaleTargetKind: apps/v1.Deployment

    http-add-on is an external scaler; it implements the following interface:

    // ExternalScalerClient is the client API for ExternalScaler service.
    //
    // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
    type ExternalScalerClient interface {
        IsActive(ctx context.Context, in *ScaledObjectRef, opts ...grpc.CallOption) (*IsActiveResponse, error)
        StreamIsActive(ctx context.Context, in *ScaledObjectRef, opts ...grpc.CallOption) (ExternalScaler_StreamIsActiveClient, error)
        GetMetricSpec(ctx context.Context, in *ScaledObjectRef, opts ...grpc.CallOption) (*GetMetricSpecResponse, error)
        GetMetrics(ctx context.Context, in *GetMetricsRequest, opts ...grpc.CallOption) (*GetMetricsResponse, error)
    }

    Here IsActive is polled periodically, at the interval set by ScaledObject.spec.pollingInterval, to determine the scaling state (true triggers scaling up; false, scaling down).

    StreamIsActive has the same function as IsActive; the difference is that StreamIsActive is push-based and can be triggered at any time.

  7. Currently http-add-on uses a push model to initiate requests to KEDA; KEDA then calls the address in spec.triggers[].metadata.scalerAddress and uses the returned IsActive value to decide whether to change the target's replica count. (keda http-add-on action -> keda action)

  8. Via queuePinger.start(), the scaler service polls the interceptor adminserver periodically (the interval is set by the environment variable KEDA_HTTP_QUEUE_TICK_DURATION, default 500ms), fetching and refreshing the per-host counts of queue q. (keda http-add-on action)

    Meanwhile, the scaler service's StreamIsActive implementation sends a request to KEDA every 500ms, checking its own IsActive return value. The IsActive implementation reads the per-host request count from queue q and uses it as the basis for the scaling decision. (keda action)

    The community plans to change this approach so that, when the interceptor receives a request, it notifies the scaler service to trigger StreamIsActive (KEDA HTTP Addon: push-based communication between interceptor and scaler).
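
Putting the pieces together, the IsActive decision can be sketched as: read the host out of the ScaledObject's trigger metadata, look up its aggregated pending count, and report active whenever the count is nonzero. The stub types below stand in for the generated gRPC messages and the queuePinger; they are illustrative only:

```go
package main

import "fmt"

// ScaledObjectRef and IsActiveResponse are local stubs for the
// generated gRPC messages of the same names.
type ScaledObjectRef struct {
    ScalerMetadata map[string]string
}

type IsActiveResponse struct {
    Result bool
}

// countGetter abstracts the queuePinger's aggregated per-host counts.
type countGetter interface {
    Count(host string) int
}

type staticCounts map[string]int

func (s staticCounts) Count(host string) int { return s[host] }

// isActive reports true when the host has pending requests,
// which tells KEDA to scale the workload up (e.g. from zero).
func isActive(counts countGetter, ref *ScaledObjectRef) *IsActiveResponse {
    host := ref.ScalerMetadata["host"]
    return &IsActiveResponse{Result: counts.Count(host) > 0}
}

func main() {
    counts := staticCounts{"myhost.com": 2}
    ref := &ScaledObjectRef{ScalerMetadata: map[string]string{"host": "myhost.com"}}
    fmt.Println(isActive(counts, ref).Result) // true
}
```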