
Prometheus Exporter - Direct Instrumentation vs. Custom Collector

Source: stackoverflow

Question

I am currently writing a Prometheus exporter for a telemetry network application.

I have read the "Writing Exporters" documentation, and while I understand the use case for implementing a custom collector to avoid race conditions, I am not sure whether my use case fits direct instrumentation instead.

Basically, the network metrics are streamed via gRPC by the network devices, so my exporter simply receives them rather than having to effectively scrape them.

I have used direct instrumentation with the following code:

  • I declare my metrics using the promauto package to keep the code compact:
package metrics

import (
    "github.com/lucabrasi83/prom-high-obs/proto/telemetry"
    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
)

var (
    cpu5Sec = promauto.NewGaugeVec(
        prometheus.GaugeOpts{
            Name: "cisco_iosxe_iosd_cpu_busy_5_sec_percentage",
            Help: "The IOSd daemon CPU busy percentage over the last 5 seconds",
        },
        []string{"node"},
    )
)
  • Below is how I simply set the metric value from the gRPC protocol buffer decoded message:
cpu5Sec.WithLabelValues(msg.GetNodeIdStr()).Set(float64(val))
  • Finally, here is my main loop, which basically handles the telemetry gRPC stream for the metrics I am interested in:
for {
    req, err := stream.Recv()
    if err == io.EOF {
        return nil
    }
    if err != nil {
        logging.PeppaMonLog(
            "error",
            fmt.Sprintf("error while reading client %v stream: %v", clientIPSocket, err))

        return err
    }

    data := req.GetData()

    msg := &telemetry.Telemetry{}

    err = proto.Unmarshal(data, msg)
    if err != nil {
        log.Fatalln(err)
    }

    if !logFlag {
        logging.PeppaMonLog(
            "info",
            fmt.Sprintf(
                "telemetry subscription request received - client %v - node %v - YANG model path %v",
                clientIPSocket, msg.GetNodeIdStr(), msg.GetEncodingPath(),
            ),
        )
    }
    logFlag = true

    // Flag to determine whether the telemetry device streams an accepted YANG node path.
    // (It is read further down, in code omitted from this excerpt.)
    yangPathSupported := false

    for _, m := range metrics.CiscoMetricRegistrar {
        if msg.EncodingPath == m.EncodingPath {
            yangPathSupported = true
            go m.RecordMetricFunc(msg)
        }
    }
}
  • For each metric I am interested in, I register it with a record metric function (m.RecordMetricFunc) that takes the protocol buffer message as an argument, as follows.
package metrics

import "github.com/lucabrasi83/prom-high-obs/proto/telemetry"

var CiscoMetricRegistrar []CiscoTelemetryMetric

type CiscoTelemetryMetric struct {
    EncodingPath     string
    RecordMetricFunc func(msg *telemetry.Telemetry)
}

  • I then use an init function for the actual registration:

func init() {
    CiscoMetricRegistrar = append(CiscoMetricRegistrar, CiscoTelemetryMetric{
        EncodingPath:     CpuYANGEncodingPath,
        RecordMetricFunc: ParsePBMsgCpuBusyPercent,
    })
}
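
The question never shows ParsePBMsgCpuBusyPercent itself. As a minimal sketch of what such a recording function might look like, assuming a hypothetical helper extractCpuBusy5Sec that digs the value out of the decoded message tree:

func ParsePBMsgCpuBusyPercent(msg *telemetry.Telemetry) {
    // extractCpuBusy5Sec is a hypothetical helper: it would walk the decoded
    // protocol buffer fields and return the 5-second CPU busy value.
    val, ok := extractCpuBusy5Sec(msg)
    if !ok {
        return
    }

    // Same direct-instrumentation write as shown earlier in the question.
    cpu5Sec.WithLabelValues(msg.GetNodeIdStr()).Set(val)
}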

I am using Grafana as the front end and, so far, have not seen any particular discrepancy when correlating the metrics exposed by Prometheus with checking the metrics directly on the device.

So I would like to understand whether this follows Prometheus best practices or whether I should still go down the custom collector route.

Thanks in advance.


Answer


You are not following best practices, because you are using the global metrics that the article you linked to warns against. With your current implementation, your dashboard will forever (or rather, until your exporter is restarted) show some arbitrary and constant value for the CPU metric of a device after it disconnects.
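
Concretely, a series like the following (the node label value and the sample value are hypothetical) would keep appearing in the scrape output with its last observed value long after that node stops streaming:

cisco_iosxe_iosd_cpu_busy_5_sec_percentage{node="router-1"} 42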

Instead, the RPC method should maintain a set of local metrics and remove them once the method returns. That way, a device's metrics vanish from the scrape output when it disconnects.

Here is one way to do this. It uses a map that contains the currently active metrics. Each map element is the set of metrics for one particular stream (which, as I understand it, corresponds to one device). Once the stream ends, that entry is removed.

package main

import (
    "sync"

    "github.com/prometheus/client_golang/prometheus"
)

// Exporter is a prometheus.Collector implementation.
type Exporter struct {
    // We need some way to map gRPC streams to their metrics. Using the stream
    // itself as a map key is simple enough, but anything works as long as we
    // can remove metrics once the stream ends.
    sync.Mutex
    Metrics map[StreamServer]*DeviceMetrics
}

type DeviceMetrics struct {
    sync.Mutex

    CPU prometheus.Metric
}

// Globally defined descriptions are fine.
var cpu5SecDesc = prometheus.NewDesc(
    "cisco_iosxe_iosd_cpu_busy_5_sec_percentage",
    "The IOSd daemon CPU busy percentage over the last 5 seconds",
    []string{"node"},
    nil, // constant labels
)

// Collect implements prometheus.Collector.
func (e *Exporter) Collect(ch chan<- prometheus.Metric) {
    // Copy current metrics so we don't lock for very long if ch's consumer is
    // slow.
    var metrics []prometheus.Metric

    e.Lock()
    for _, deviceMetrics := range e.Metrics {
        deviceMetrics.Lock()
        metrics = append(metrics,
            deviceMetrics.CPU,
        )
        deviceMetrics.Unlock()
    }
    e.Unlock()

    for _, m := range metrics {
        if m != nil {
            ch <- m
        }
    }
}

// Describe implements prometheus.Collector.
func (e *Exporter) Describe(ch chan<- *prometheus.Desc) {
    ch <- cpu5SecDesc
}

// Service is the gRPC service implementation.
type Service struct {
    exp *Exporter
}

func (s *Service) RPCMethod(stream StreamServer) (*Response, error) {
    deviceMetrics := new(DeviceMetrics)

    s.exp.Lock()
    s.exp.Metrics[stream] = deviceMetrics
    s.exp.Unlock()

    defer func() {
        // Stop emitting metrics for this stream.
        s.exp.Lock()
        delete(s.exp.Metrics, stream)
        s.exp.Unlock()
    }()

    for {
        req, err := stream.Recv()
        // TODO: handle error

        var msg *Telemetry = parseRequest(req) // Your existing code that unmarshals the nested message.

        var (
            metricField *prometheus.Metric
            metric      prometheus.Metric
        )

        switch msg.GetEncodingPath() {
        case CpuYANGEncodingPath:
            metricField = &deviceMetrics.CPU
            metric = prometheus.MustNewConstMetric(
                cpu5SecDesc,
                prometheus.GaugeValue,
                ParsePBMsgCpuBusyPercent(msg), // func(*Telemetry) float64
                msg.GetNodeIdStr(),            // value for the "node" label
            )
        default:
            continue
        }

        deviceMetrics.Lock()
        *metricField = metric
        deviceMetrics.Unlock()
    }

    return &Response{}, nil
}
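
For completeness, here is a rough sketch of how this collector might be hooked up to a scrape endpoint. NewRegistry, MustRegister, and promhttp.HandlerFor are standard client_golang APIs; the listen address and the startup of the gRPC server feeding Service are assumptions:

package main

import (
    "log"
    "net/http"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
    exp := &Exporter{Metrics: make(map[StreamServer]*DeviceMetrics)}

    // Register the custom collector so its Describe/Collect methods
    // drive what appears in the scrape output.
    reg := prometheus.NewRegistry()
    reg.MustRegister(exp)

    // The gRPC server that serves Service{exp: exp} would be started here.

    http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
    log.Fatal(http.ListenAndServe(":2112", nil))
}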

