
[GCP] Add loadbalancing_metrics distribution fields #4004


Merged — 7 commits merged into elastic:main on Sep 23, 2022

Conversation

@gpop63 (Contributor) commented Aug 16, 2022

What does this PR do?

Adds the missing distribution fields to the loadbalancing_metrics data stream.

Checklist

  • I have reviewed tips for building integrations and this pull request is aligned with them.
  • I have verified that all data streams collect metrics or logs.
  • I have added an entry to my package's changelog.yml file.
  • I have verified that Kibana version constraints are current according to guidelines.


@elasticmachine commented Aug 16, 2022

💚 Build Succeeded


Build stats

  • Start Time: 2022-09-23T08:52:44.140+0000

  • Duration: 18 min 13 sec

Test stats 🧪

  Failed:  0
  Passed:  82
  Skipped: 0
  Total:   82

🤖 GitHub comments


To re-run your PR in the CI, just comment with:

  • /test : Re-trigger the build.

@elasticmachine commented Aug 16, 2022

🌐 Coverage report

Name          Metrics % (covered/total)   Diff
Packages      100.0%  (5/5)       💚
Files         100.0%  (5/5)       💚      2.66
Classes       100.0%  (5/5)       💚      2.66
Methods       90.816% (89/98)     👍      0.896
Lines         95.752% (1375/1436) 👍      4.47
Conditionals  100.0%  (0/0)       💚

@gpop63 marked this pull request as ready for review August 16, 2022 16:08
@gpop63 requested review from a team as code owners August 16, 2022 16:08
@endorama (Member) left a comment

👍 please add a changelog entry and bump package version to 2.6.0

@andrewkroh added the Integration:gcp (Google Cloud Platform) label Aug 17, 2022
@@ -59,3 +59,43 @@
- name: tcp_ssl_proxy.open_connections.value
type: long
description: Current number of outstanding connections through the TCP/SSL proxy.
- name: https.backend_latencies.value
type: object
object_type: histogram
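For reference, an Elasticsearch histogram field stores a pre-aggregated distribution as two parallel arrays, values and counts. A document carrying the backend_latencies field added above might look like this (the numbers are purely illustrative):

```json
{
  "gcp": {
    "loadbalancing": {
      "https": {
        "backend_latencies": {
          "value": {
            "values": [0.5, 5.0, 50.0, 500.0],
            "counts": [120, 30, 4, 1]
          }
        }
      }
    }
  }
}
```

The counts array must have the same length as values, with values in ascending order; Elasticsearch rejects histogram documents that violate either constraint.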
Member

Why does this need to use object_type? Could it simply use type: histogram?

In what stack versions is object_type: histogram supported? That seems like something new.

Contributor Author (@gpop63)

I used the prometheus integration as a reference. Should I just use type: histogram instead?

Member

From my understanding, object_type should only be used for type: array fields; see https://github.com/elastic/package-spec/blob/34c28eedce0d1ae7c140c998dd1e1bfdccdfc578/versions/1/integration/data_stream/fields/fields.spec.yml#L236-L239

So I think object_type: histogram is supported, but should only be used if the field is of type: array.

@andrewkroh if you confirm this I think this should be changed.
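To make the two options under discussion concrete, here is a sketch of the competing field definitions (using the same field as in the diff above):

```yaml
# As in this PR, following the prometheus integration:
- name: https.backend_latencies.value
  type: object
  object_type: histogram

# The alternative being discussed:
- name: https.backend_latencies.value
  type: histogram
```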

Member

Is there an example document somewhere that I can review to see what the fields look like?

Contributor Author (@gpop63)

Looking at this documentation, I think you are right: type: histogram is a valid type.

Member

After researching this topic for a bit, my conclusion is: object_type: histogram is supported, but it is not clear to me whether type: histogram would produce the same result.

@gpop63 could you try using type: histogram to see if the behaviour is the same?

@gpop63 (Contributor Author) commented Sep 14, 2022

I tested what @endorama suggested, and using type: histogram in the integration field definition does not work ("reason": "error parsing field [gcp.loadbalancing.l3.external.rtt_latencies.value], with unknown parameter [histogram]"), maybe because in Metricbeat these fields were added as type: object. I think we should either leave it like this or try changing the metric types in Metricbeat to type: histogram.

Member

I also raised 2 issues with the team behind package-spec for missing documentation and they fixed it super fast with this PR, confirming support for object_type.

@andrewkroh Let us know if you still have doubts around this, otherwise I think we should proceed by using object_type.

@elasticmachine

🚀 Benchmarks report

Package gcp 👍(3) 💚(0) 💔(2)

Data stream         Previous EPS   New EPS   Diff (%)            Result
dns                 1808.32        1524.39   -283.93 (-15.7%)    💔
loadbalancing_logs  3937.01        3333.33   -603.68 (-15.33%)   💔

To see the full report, comment with /test benchmark fullreport

@gpop63 merged commit aa62b85 into elastic:main Sep 23, 2022
Labels
Integration:gcp Google Cloud Platform
4 participants