How do I query spans by trace ID in Datadog?
In Trace Explorer, in the Spans view, how can I filter to a specific trace ID or show a trace ID column?
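For reference: in the Spans view, the trace ID is exposed as a span attribute, so a query of the following form should narrow the list to a single trace (a sketch; the ID value is a placeholder):

    trace_id:6631672462227284421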
In Datadog, is there a way to filter a hostmap based on the results of another query?
I have a dashboard with a variable for specific installs of a program (1 to n selected), each uniquely named. I would like to make a hostmap that shows only the hosts for the selected installs of the program, so if you filter the dashboard to one install, the map shows one hex; for n installs, n hexes.
Is there a way to do that if the hosts don’t all have a tag for it?
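A sketch of the usual tag-based approach, since hostmap filters are driven by tags: give each host a host tag per install (the tag name install below is hypothetical) and scope the widget to the dashboard variable.

    # /etc/datadog-agent/datadog.yaml on each host (sketch)
    tags:
      - install:install-a

With that in place, setting the hostmap's filter to $install yields one hex per selected install. If the hosts cannot be tagged, there is no obvious tag-free way to drive a hostmap from another query's results.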
Unable to use the same Datadog APM threshold monitor query to filter for two services
I have a Datadog APM threshold monitor created on top of metrics. It currently has the following query:
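Whatever the exact query, a sketch of scoping a single metric monitor to two services using Datadog's boolean filter syntax (the metric, service names, and threshold below are hypothetical):

    avg(last_5m):sum:trace.servlet.request.errors{service IN (service-a, service-b)}.as_count() > 10

The same scope can also be written as {service:service-a OR service:service-b}.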
How to override Datadog sampling rules for all errors in Java using DD_TRACE environment variables?
I have a Java Spring application that is instrumented with Datadog APM. I want to reduce the trace sampling rate to 10%; however, if a trace contains an error, I want it to be kept every time. How can I add this override using the DD_TRACE environment variables?
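A sketch under two assumptions: the head-based rate comes from the documented dd-trace-java variable DD_TRACE_SAMPLE_RATE, while error-trace retention is typically handled by the Datadog Agent's error sampler rather than by a DD_TRACE_* variable on the application (the agent-side DD_APM_ERROR_TPS name below is an assumption worth verifying):

    # On the Java application: sample 10% of traces (documented variable)
    export DD_TRACE_SAMPLE_RATE=0.1

    # On the Datadog Agent, not the app: errors-per-second budget kept
    # regardless of the head-based rate (assumption; verify against docs)
    export DD_APM_ERROR_TPS=10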
Create a relative and fixed time frame query visual in Datadog
I have a RUM query visual in Datadog that counts the number of views. Instead of using the dashboard’s timeframe, I want to count the views from a fixed timeframe one week back from the current day. For example, if today is Monday, I want to see the view count from 09:00 to 12:00 last Monday. This time period (09:00 – 12:00) remains fixed as long as I am viewing the query on Monday. On Tuesday, the visual should shift to show data from 09:00 – 12:00 on the previous Tuesday, and so on. Is this possible?
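One related building block, not a full answer: for metric queries, Datadog documents timeshift functions such as week_before(), which evaluate the same window one week earlier. A sketch, assuming the RUM views are also exported as a custom metric (rum.view.count is a hypothetical name):

    week_before(sum:rum.view.count{*}.as_count())

Pinning the window itself to 09:00 – 12:00 would still depend on the dashboard or widget timeframe covering that range.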
Is it possible to configure Datadog to include the stack trace in the content or default search?
I would like to see and search logs with stack traces; unfortunately, Datadog seems to put stack traces into a separate stack_trace field and does not search it by default, unless I explicitly search by attribute (stack_trace:*foo*).
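Two hedged pointers: custom log attributes are normally queried with the @ prefix, and Datadog's log pipelines include a message remapper processor that can promote an attribute into the official message, which is what free-text search covers. A sketch of the attribute query (foo is a placeholder):

    @stack_trace:*foo*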
Can the Vector observability tool use a nested JSON field as the Kinesis/Kafka sink partition key?
I'm currently configuring Vector sinks to send data to an AWS Kinesis stream and a Kafka topic. My input data consists of JSON objects with nested fields, and I need to use one of these nested fields (specifically _session_id) as the partition key for the Kinesis stream. Here is a sample of the data:
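Whatever the exact shape of that sample, a common sketch is to lift the nested field to the top level with a remap transform and point the sink's partition_key_field at it (the .data._session_id path, source id, stream name, and region below are assumptions):

    [transforms.extract_key]
    type = "remap"
    inputs = ["my_source"]
    source = '''
    # Copy the nested session id to a top-level field used for partitioning
    .partition_key = .data._session_id
    '''

    [sinks.kinesis_out]
    type = "aws_kinesis_streams"
    inputs = ["extract_key"]
    stream_name = "my-stream"
    region = "us-east-1"
    partition_key_field = "partition_key"
    encoding.codec = "json"

For the kafka sink, the analogous option is key_field.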