ℹ️ Skipped - page is already crawled
| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0 months ago |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |

| Property | Value |
|---|---|
| URL | https://grafana.com/grafana/plugins/grafana-clickhouse-datasource/ |
| Last Crawled | 2026-04-07 11:56:47 (21 hours ago) |
| First Indexed | 2021-12-12 06:25:50 (4 years ago) |
| HTTP Status Code | 200 |
| Meta Title | ClickHouse plugin for Grafana \| Grafana Labs |
| Meta Description | ClickHouse datasource plugin for Grafana |
| Meta Canonical | null |
| Boilerpipe Text | Official ClickHouse data source for Grafana
The ClickHouse data source plugin allows you to query and visualize ClickHouse data in Grafana.
Version compatibility
Users on Grafana v9.x and higher can use v4 of the plugin. Users on Grafana v8.x are encouraged to continue using v2.2.0 of the plugin.
* As of 2.0 this plugin will only support ad hoc filters when using ClickHouse 22.7+
Installation
For detailed instructions on how to install the plugin on Grafana Cloud or locally, please check out the Plugin installation docs.
Configuration
ClickHouse user for the data source
Set up a ClickHouse user account with readonly permission and access to the databases and tables you want to query. Please note that Grafana does not validate that queries are safe. Queries can contain any SQL statement. For example, statements like ALTER TABLE system.users DELETE WHERE name='sadUser' and DROP TABLE sadTable; would be executed.
To configure a readonly user, follow these steps:
1. Create a readonly user profile following the Creating Users and Roles in ClickHouse guide.
2. Ensure the readonly user has enough permission to modify the max_execution_time setting required by the underlying clickhouse-go client.
3. If you're using a public ClickHouse instance, it's not recommended to set readonly=2 in the readonly profile. Instead, leave readonly=1 and set the constraint type of max_execution_time to changeable_in_readonly to allow modification of this setting.
ClickHouse protocol support
The plugin supports both Native (default) and HTTP transport protocols. This can be selected in the configuration via the protocol configuration parameter. Both protocols exchange data with ClickHouse using the optimized native format.
Note that the default ports for HTTP/S and Native differ:
HTTP - 8123
HTTPS - 8443
Native - 9000
Native with TLS - 9440
Manual configuration via UI
Once the plugin is installed on your Grafana instance, follow these instructions to add a new ClickHouse data source, and enter configuration options.
With a configuration file
It is possible to configure data sources using configuration files with Grafana's provisioning system. To read about how it works, refer to Provisioning Grafana data sources.
Here are some provisioning examples for this data source using basic authentication:
apiVersion: 1
datasources:
  - name: ClickHouse
    type: grafana-clickhouse-datasource
    jsonData:
      defaultDatabase: database
      port: 9000
      host: localhost
      username: username
      tlsSkipVerify: false
      # tlsAuth: <bool>
      # tlsAuthWithCACert: <bool>
      # secure: <bool>
      # dialTimeout: <seconds>
      # queryTimeout: <seconds>
      # protocol: <native|http>
      # defaultTable: <string>
      # httpHeaders:
      #   - name: X-Example-Header
      #     secure: false
      #     value: <string>
      #   - name: Authorization
      #     secure: true
      # logs:
      #   defaultDatabase: <string>
      #   defaultTable: <string>
      #   otelEnabled: <bool>
      #   otelVersion: <string>
      #   timeColumn: <string>
      #   ...Column: <string>
      # traces:
      #   defaultDatabase: <string>
      #   defaultTable: <string>
      #   otelEnabled: <bool>
      #   otelVersion: <string>
      #   durationUnit: <seconds|milliseconds|microseconds|nanoseconds>
      #   traceIdColumn: <string>
      #   ...Column: <string>
    secureJsonData:
      password: password
      # tlsCACert: <string>
      # tlsClientCert: <string>
      # tlsClientKey: <string>
      # secureHttpHeaders.Authorization: <string>
Building queries
Queries can be built using the raw SQL editor or the query builder. Queries can contain macros which simplify syntax and allow for dynamic SQL generation.
Time series
Time series visualization options are selectable after adding a datetime field type to your query. This field will be used as the timestamp. You can select time series visualizations using the visualization options. Grafana interprets timestamp rows without explicit time zone as UTC. Any column except time is treated as a value column.
Multi-line time series
To create multi-line time series, the query must return at least 3 fields in the following order:
field 1: datetime field with an alias of time
field 2: value to group by
field 3+: the metric values
For example:
SELECT log_time AS time, machine_group, avg(disk_free) AS avg_disk_free
FROM mgbench.logs1
GROUP BY machine_group, log_time
ORDER BY log_time
Tables
Table visualizations will always be available for any valid ClickHouse query.
Visualizing logs with the Logs Panel
To use the Logs panel your query must return a timestamp and string values. To default to the logs visualization in Explore mode, set the timestamp alias to log_time.
For example:
SELECT log_time AS log_time, machine_group, toString(avg(disk_free)) AS avg_disk_free
FROM logs1
GROUP BY machine_group, log_time
ORDER BY log_time
To force rendering as logs, in absence of a log_time column, set the Format to Logs (available from 2.2.0).
Visualizing traces with the Traces Panel
Ensure your data meets the requirements of the traces panel. This applies if using the visualization or Explore view.
Set the Format to Trace when constructing the query (available from 2.2.0).
If using the Open Telemetry Collector and ClickHouse exporter, the following query produces the required column names (these are case sensitive):
SELECT
  TraceId AS traceID,
  SpanId AS spanID,
  SpanName AS operationName,
  ParentSpanId AS parentSpanID,
  ServiceName AS serviceName,
  Duration / 1000000 AS duration,
  Timestamp AS startTime,
  arrayMap(key -> map('key', key, 'value', SpanAttributes[key]), mapKeys(SpanAttributes)) AS tags,
  arrayMap(key -> map('key', key, 'value', ResourceAttributes[key]), mapKeys(ResourceAttributes)) AS serviceTags,
  if(StatusCode IN ('Error', 'STATUS_CODE_ERROR'), 2, 0) AS statusCode
FROM otel.otel_traces
WHERE TraceId = '61d489320c01243966700e172ab37081'
ORDER BY startTime ASC
Macros
To simplify syntax and to allow for dynamic parts, like date range filters, the query can contain macros.
Here is an example of a query with a macro that will use Grafana's time filter:
SELECT date_time, data_stuff
FROM test_data
WHERE $__timeFilter(date_time)
$__dateFilter(columnName): replaced by a conditional that filters the data (using the provided column) based on the date range of the panel. Example: date >= toDate('2022-10-21') AND date <= toDate('2022-10-23')
$__timeFilter(columnName): replaced by a conditional that filters the data (using the provided column) based on the time range of the panel in seconds. Example: time >= toDateTime(1415792726) AND time <= toDateTime(1447328726)
$__timeFilter_ms(columnName): replaced by a conditional that filters the data (using the provided column) based on the time range of the panel in milliseconds. Example: time >= fromUnixTimestamp64Milli(1415792726123) AND time <= fromUnixTimestamp64Milli(1447328726456)
$__dateTimeFilter(dateColumn, timeColumn): shorthand that combines $__dateFilter() AND $__timeFilter() using separate Date and DateTime columns. Example: $__dateFilter(dateColumn) AND $__timeFilter(timeColumn)
$__fromTime: replaced by the starting time of the range of the panel casted to DateTime. Example: toDateTime(1415792726)
$__toTime: replaced by the ending time of the range of the panel casted to DateTime. Example: toDateTime(1447328726)
$__fromTime_ms: replaced by the starting time of the range of the panel casted to DateTime64(3). Example: fromUnixTimestamp64Milli(1415792726123)
$__toTime_ms: replaced by the ending time of the range of the panel casted to DateTime64(3). Example: fromUnixTimestamp64Milli(1447328726456)
$__interval_s: replaced by the interval in seconds. Example: 20
$__timeInterval(columnName): replaced by a function calculating the interval based on window size in seconds, useful when grouping. Example: toStartOfInterval(toDateTime(column), INTERVAL 20 second)
$__timeInterval_ms(columnName): replaced by a function calculating the interval based on window size in milliseconds, useful when grouping. Example: toStartOfInterval(toDateTime64(column, 3), INTERVAL 20 millisecond)
$__conditionalAll(condition, $templateVar): replaced by the first parameter when the template variable in the second parameter does not select every value, and by 1=1 when it selects every value.
The plugin also supports notation using braces {}. Use this notation when queries are needed inside parameters.
Templates and variables
To add a new ClickHouse query variable, refer to Add a query variable.
After creating a variable, you can use it in your ClickHouse queries by using Variable syntax. For more information about variables, refer to Templates and variables.
Importing dashboards for ClickHouse
Follow these instructions to import a dashboard.
You can also find available, pre-made dashboards by navigating to the data sources configuration page, selecting the ClickHouse data source and clicking on the Dashboards tab.
We distribute the following dashboards with the plugin. These are aimed at assisting with support analysis of a ClickHouse cluster and do not rely on external datasets. The querying user requires access to the system database.
Cluster Analysis - an overview of configured clusters, merges, mutations and data replication.
Data Analysis - an overview of current databases and tables, including their respective sizes, partitions and parts.
Query Analysis - an analysis of queries by type, performance and resource consumption.
Ad Hoc Filters
Ad hoc filters are only supported with version 22.7+ of ClickHouse.
Ad hoc filters allow you to add key/value filters that are automatically added to all metric queries that use the specified data source, without being explicitly used in queries.
By default, Ad Hoc filters will be populated with all Tables and Columns. If you have a default database defined in the Datasource settings, all Tables from that database will be used to populate the filters. As this could be slow/expensive, you can introduce a second variable to allow limiting the Ad Hoc filters. It should be a constant type named clickhouse_adhoc_query and can contain: a comma delimited list of databases, just one database, or a database.table combination to show only columns for a single table.
Ad Hoc filters also work with the Map and JSON types for OTel data. Map is the default, and will automatically convert the merged labels output into a usable filter. To have the filter logic use JSON syntax, add a dashboard variable with a constant type called clickhouse_adhoc_use_json (the variable's value is ignored, it just has to be present).
For more information on Ad Hoc filters, check the Grafana docs.
Using a query for Ad Hoc filters
The second clickhouse_adhoc_query also allows any valid ClickHouse query. The query results will be used to populate your ad-hoc filter's selectable filters. You may choose to hide this variable from view as it serves no further purpose.
For example, if clickhouse_adhoc_query is set to SELECT DISTINCT machine_name FROM mgbench.logs1 you would be able to select which machine names are filtered for in the dashboard.
Manual Ad Hoc Filter Placement with $__adHocFilters
By default, ad-hoc filters are automatically applied to queries by detecting the target table using SQL parsing. However, for queries that use CTEs or ClickHouse-specific syntax like INTERVAL or aggregate functions with parameters, the automatic detection may fail. In these cases, you can manually specify where to apply ad-hoc filters using the $__adHocFilters('table_name') macro.
This macro expands to the ClickHouse additional_table_filters setting with the currently active ad-hoc filters. It should be placed in the SETTINGS clause of your query.
Example:
SELECT * FROM (SELECT * FROM my_complex_table WHERE complicated_condition) AS result
SETTINGS $__adHocFilters('my_complex_table')
When ad-hoc filters are active (e.g., status = 'active' and region = 'us-west'), this expands to:
SELECT * FROM (SELECT * FROM my_complex_table WHERE complicated_condition) AS result
SETTINGS additional_table_filters = {'my_complex_table': 'status = \'active\' AND region = \'us-west\''}
Learn more
Add Annotations.
Configure and use Templates and variables.
Add Transformations.
Set up alerting; refer to Alerts overview. |
| Markdown |
***
Dependencies: Grafana \>=9.5.0
Developer: Grafana Labs
Last updated: 3/3/2026
***
# ClickHouse
## Official ClickHouse data source for Grafana
The ClickHouse data source plugin allows you to query and visualize ClickHouse data in Grafana.
 
## Version compatibility
Users on Grafana `v9.x` and higher can use `v4` of the plugin. Users on Grafana `v8.x` are encouraged to continue using `v2.2.0` of the plugin.
*As of 2.0, this plugin only supports ad hoc filters when using ClickHouse 22.7+.*
## Installation
For detailed instructions on how to install the plugin on Grafana Cloud or locally, please check out the [Plugin installation docs](https://grafana.com/docs/grafana/latest/plugins/installation/).
## Configuration
### ClickHouse user for the data source
Set up a ClickHouse user account with [readonly](https://clickhouse.com/docs/en/operations/settings/permissions-for-queries#settings_readonly) permission and access to the databases and tables you want to query. Please note that Grafana does not validate that queries are safe. Queries can contain any SQL statement. For example, statements like `ALTER TABLE system.users DELETE WHERE name='sadUser'` and `DROP TABLE sadTable;` would be executed.
To configure a readonly user, follow these steps:
1. Create a `readonly` user profile following the [Creating Users and Roles in ClickHouse](https://clickhouse.com/docs/en/operations/access-rights) guide.
2. Ensure the `readonly` user has enough permission to modify the `max_execution_time` setting required by the underlying [clickhouse-go client](https://github.com/ClickHouse/clickhouse-go/).
3. If you're using a public ClickHouse instance, it's not recommended to set `readonly=2` in the `readonly` profile. Instead, leave `readonly=1` and set the constraint type of `max_execution_time` to [changeable\_in\_readonly](https://clickhouse.com/docs/en/operations/settings/constraints-on-settings) to allow modification of this setting.
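The steps above can be sketched with ClickHouse's SQL-driven access management. Every name below (`grafana_readonly`, `grafana_reader`, the `mgbench` database) is an example, and the exact constraint syntax should be checked against your ClickHouse version:

```
-- Sketch: a readonly profile that still lets the clickhouse-go client
-- adjust max_execution_time while readonly = 1.
CREATE SETTINGS PROFILE grafana_readonly
    SETTINGS readonly = 1,
             max_execution_time = 60 CHANGEABLE_IN_READONLY;

-- Example user bound to that profile, granted only the tables Grafana needs.
CREATE USER grafana_reader IDENTIFIED WITH sha256_password BY 'change-me'
    SETTINGS PROFILE 'grafana_readonly';
GRANT SELECT ON mgbench.* TO grafana_reader;
```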
### ClickHouse protocol support
The plugin supports both `Native` (default) and `HTTP` transport protocols. This can be selected in the configuration via the `protocol` configuration parameter. Both protocols exchange data with ClickHouse using the optimized native format.
Note that the default ports for `HTTP/S` and `Native` differ:
- HTTP - 8123
- HTTPS - 8443
- Native - 9000
- Native with TLS - 9440
### Manual configuration via UI
Once the plugin is installed on your Grafana instance, follow [these instructions](https://grafana.com/docs/grafana/latest/datasources/add-a-data-source/) to add a new ClickHouse data source, and enter configuration options.
### With a configuration file
It is possible to configure data sources using configuration files with Grafana’s provisioning system. To read about how it works, refer to [Provisioning Grafana data sources](https://grafana.com/docs/grafana/latest/administration/provisioning/#data-sources).
Here are some provisioning examples for this data source using basic authentication:
```
apiVersion: 1
datasources:
  - name: ClickHouse
    type: grafana-clickhouse-datasource
    jsonData:
      defaultDatabase: database
      port: 9000
      host: localhost
      username: username
      tlsSkipVerify: false
      # tlsAuth: <bool>
      # tlsAuthWithCACert: <bool>
      # secure: <bool>
      # dialTimeout: <seconds>
      # queryTimeout: <seconds>
      # protocol: <native|http>
      # defaultTable: <string>
      # httpHeaders:
      #   - name: X-Example-Header
      #     secure: false
      #     value: <string>
      #   - name: Authorization
      #     secure: true
      # logs:
      #   defaultDatabase: <string>
      #   defaultTable: <string>
      #   otelEnabled: <bool>
      #   otelVersion: <string>
      #   timeColumn: <string>
      #   ...Column: <string>
      # traces:
      #   defaultDatabase: <string>
      #   defaultTable: <string>
      #   otelEnabled: <bool>
      #   otelVersion: <string>
      #   durationUnit: <seconds|milliseconds|microseconds|nanoseconds>
      #   traceIdColumn: <string>
      #   ...Column: <string>
    secureJsonData:
      password: password
      # tlsCACert: <string>
      # tlsClientCert: <string>
      # tlsClientKey: <string>
      # secureHttpHeaders.Authorization: <string>
```
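For instance, a hypothetical `jsonData` variant that switches the data source to the HTTP protocol would also change the port to match (values here are illustrative):

```
jsonData:
  protocol: http
  port: 8123
  secure: false  # use 8443 with secure: true for HTTPS
```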
## Building queries
Queries can be built using the raw SQL editor or the query builder. Queries can contain macros which simplify syntax and allow for dynamic SQL generation.
### Time series
Time series visualization options are selectable after adding a `datetime` field type to your query. This field will be used as the timestamp. You can select time series visualizations using the visualization options. Grafana interprets timestamp rows without explicit time zone as UTC. Any column except `time` is treated as a value column.
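As an illustrative sketch (table and column names are borrowed from the examples below), a single-series query grouped by the panel interval might look like:

```
SELECT $__timeInterval(log_time) AS time, avg(disk_free) AS avg_disk_free
FROM mgbench.logs1
WHERE $__timeFilter(log_time)
GROUP BY time
ORDER BY time
```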
#### Multi-line time series
To create multi-line time series, the query must return at least 3 fields in the following order:
- field 1: `datetime` field with an alias of `time`
- field 2: value to group by
- field 3+: the metric values
For example:
```
SELECT log_time AS time, machine_group, avg(disk_free) AS avg_disk_free
FROM mgbench.logs1
GROUP BY machine_group, log_time
ORDER BY log_time
```
### Tables
Table visualizations will always be available for any valid ClickHouse query.
### Visualizing logs with the Logs Panel
To use the Logs panel your query must return a timestamp and string values. To default to the logs visualization in Explore mode, set the timestamp alias to `log_time`.
For example:
```
SELECT log_time AS log_time, machine_group, toString(avg(disk_free)) AS avg_disk_free
FROM logs1
GROUP BY machine_group, log_time
ORDER BY log_time
```
To force rendering as logs, in absence of a `log_time` column, set the Format to `Logs` (available from 2.2.0).
### Visualizing traces with the Traces Panel
Ensure your data meets the [requirements of the traces panel](https://grafana.com/docs/grafana/latest/explore/trace-integration/#data-api). This applies if using the visualization or Explore view.
Set the Format to `Trace` when constructing the query (available from 2.2.0).
If using the [Open Telemetry Collector and ClickHouse exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/clickhouseexporter), the following query produces the required column names (these are case sensitive):
```
SELECT
TraceId AS traceID,
SpanId AS spanID,
SpanName AS operationName,
ParentSpanId AS parentSpanID,
ServiceName AS serviceName,
Duration / 1000000 AS duration,
Timestamp AS startTime,
arrayMap(key -> map('key', key, 'value', SpanAttributes[key]), mapKeys(SpanAttributes)) AS tags,
arrayMap(key -> map('key', key, 'value', ResourceAttributes[key]), mapKeys(ResourceAttributes)) AS serviceTags,
if(StatusCode IN ('Error', 'STATUS_CODE_ERROR'), 2, 0) AS statusCode
FROM otel.otel_traces
WHERE TraceId = '61d489320c01243966700e172ab37081'
ORDER BY startTime ASC
```
### Macros
To simplify syntax and to allow for dynamic parts, like date range filters, the query can contain macros.
Here is an example of a query with a macro that will use Grafana's time filter:
```
SELECT date_time, data_stuff
FROM test_data
WHERE $__timeFilter(date_time)
```
| Macro | Description | Output example |
|---|---|---|
| *\$\_\_dateFilter(columnName)* | Replaced by a conditional that filters the data (using the provided column) based on the date range of the panel | `date >= toDate('2022-10-21') AND date <= toDate('2022-10-23')` |
| *\$\_\_timeFilter(columnName)* | Replaced by a conditional that filters the data (using the provided column) based on the time range of the panel in seconds | `time >= toDateTime(1415792726) AND time <= toDateTime(1447328726)` |
| *\$\_\_timeFilter\_ms(columnName)* | Replaced by a conditional that filters the data (using the provided column) based on the time range of the panel in milliseconds | `time >= fromUnixTimestamp64Milli(1415792726123) AND time <= fromUnixTimestamp64Milli(1447328726456)` |
| *\$\_\_dateTimeFilter(dateColumn, timeColumn)* | Shorthand that combines \$\_\_dateFilter() AND \$\_\_timeFilter() using separate Date and DateTime columns. | `$__dateFilter(dateColumn) AND $__timeFilter(timeColumn)` |
| *\$\_\_fromTime* | Replaced by the starting time of the range of the panel casted to `DateTime` | `toDateTime(1415792726)` |
| *\$\_\_toTime* | Replaced by the ending time of the range of the panel casted to `DateTime` | `toDateTime(1447328726)` |
| *\$\_\_fromTime\_ms* | Replaced by the starting time of the range of the panel casted to `DateTime64(3)` | `fromUnixTimestamp64Milli(1415792726123)` |
| *\$\_\_toTime\_ms* | Replaced by the ending time of the range of the panel casted to `DateTime64(3)` | `fromUnixTimestamp64Milli(1447328726456)` |
| *\$\_\_interval\_s* | Replaced by the interval in seconds | `20` |
| *\$\_\_timeInterval(columnName)* | Replaced by a function calculating the interval based on window size in seconds, useful when grouping | `toStartOfInterval(toDateTime(column), INTERVAL 20 second)` |
| *\$\_\_timeInterval\_ms(columnName)* | Replaced by a function calculating the interval based on window size in milliseconds, useful when grouping | `toStartOfInterval(toDateTime64(column, 3), INTERVAL 20 millisecond)` |
| *\$\_\_conditionalAll(condition, \$templateVar)* | Replaced by the first parameter when the template variable in the second parameter does not select every value. Replaced by 1=1 when the template variable selects every value. | `condition` or `1=1` |
The plugin also supports notation using braces {}. Use this notation when queries are needed inside parameters.
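As a sketch of `$__conditionalAll` (the `$machines` template variable here is hypothetical; table and column names are borrowed from the earlier examples), a query can skip the filter entirely when "All" is selected:

```
SELECT machine_group, count() AS events
FROM mgbench.logs1
WHERE $__timeFilter(log_time)
  AND $__conditionalAll(machine_group IN ($machines), $machines)
GROUP BY machine_group
```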
### Templates and variables
To add a new ClickHouse query variable, refer to [Add a query variable](https://grafana.com/docs/grafana/latest/variables/variable-types/add-query-variable/).
After creating a variable, you can use it in your ClickHouse queries by using [Variable syntax](https://grafana.com/docs/grafana/latest/variables/syntax/). For more information about variables, refer to [Templates and variables](https://grafana.com/docs/grafana/latest/variables/).
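For illustration, assuming a dashboard variable named `machine` (hypothetical; `machine_name` comes from the `mgbench.logs1` examples in this page), a query could interpolate it with the quoted `${machine}` form:

```
SELECT log_time, disk_free
FROM mgbench.logs1
WHERE machine_name = '${machine}'
  AND $__timeFilter(log_time)
```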
### Importing dashboards for ClickHouse
Follow these [instructions](https://grafana.com/docs/grafana/latest/dashboards/export-import/#import-dashboard) to import a dashboard.
You can also find available, pre-made dashboards by navigating to the data sources configuration page, selecting the ClickHouse data source and clicking on the Dashboards tab.
We distribute the following dashboards with the plugin. These are aimed at assisting with support analysis of a ClickHouse cluster and do not rely on external datasets. The querying user requires access to the `system` database.
1. Cluster Analysis - an overview of configured clusters, merges, mutations and data replication.
2. Data Analysis - an overview of current databases and tables, including their respective sizes, partitions and parts.
3. Query Analysis - an analysis of queries by type, performance and resource consumption.
### Ad Hoc Filters
Ad hoc filters are only supported with version 22.7+ of ClickHouse.
Ad hoc filters allow you to add key/value filters that are automatically added to all metric queries that use the specified data source, without being explicitly used in queries.
By default, Ad Hoc filters are populated with all tables and columns. If a default database is defined in the data source settings, all tables from that database are used to populate the filters. Because this can be slow and expensive, you can introduce a second variable to limit the Ad Hoc filters. It should be a `constant` type named `clickhouse_adhoc_query`, and it can contain a comma delimited list of databases, a single database, or a `database.table` combination to show only the columns of a single table.
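For example (the database and table names here are hypothetical), the `clickhouse_adhoc_query` constant accepts any of these forms:

```
default              -- a single database
default,system       -- a comma delimited list of databases
default.logs         -- a database.table combination; only that table's columns are shown
```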
Ad Hoc filters also work with the Map and JSON types for OTel data. Map is the default, and will automatically convert the merged labels output into a usable filter. To have the filter logic use JSON syntax, add a dashboard variable with a `constant` type called `clickhouse_adhoc_use_json` (the variable's `value` is ignored, it just has to be present).
For more information on Ad Hoc filters, check the [Grafana docs](https://grafana.com/docs/grafana/latest/variables/variable-types/add-ad-hoc-filters/).
#### Using a query for Ad Hoc filters
The `clickhouse_adhoc_query` variable also accepts any valid ClickHouse query. The query results will be used to populate your ad hoc filter's selectable values. You may choose to hide this variable from view, as it serves no further purpose.
For example, if `clickhouse_adhoc_query` is set to `SELECT DISTINCT machine_name FROM mgbench.logs1` you would be able to select which machine names are filtered for in the dashboard.
#### Manual Ad Hoc Filter Placement with `$__adHocFilters`
By default, ad-hoc filters are automatically applied to queries by detecting the target table using SQL parsing. However, for queries that use CTEs or ClickHouse-specific syntax like `INTERVAL` or aggregate functions with parameters, the automatic detection may fail. In these cases, you can manually specify where to apply ad-hoc filters using the `$__adHocFilters('table_name')` macro.
This macro expands to the ClickHouse `additional_table_filters` setting with the currently active ad-hoc filters. It should be placed in the `SETTINGS` clause of your query.
Example:
```sql
SELECT *
FROM (
SELECT * FROM my_complex_table
WHERE complicated_condition
) AS result
SETTINGS $__adHocFilters('my_complex_table')
```
When ad-hoc filters are active (e.g., `status = 'active'` and `region = 'us-west'`), this expands to:
```sql
SELECT *
FROM (
SELECT * FROM my_complex_table
WHERE complicated_condition
) AS result
SETTINGS additional_table_filters={'my_complex_table': 'status = \'active\' AND region = \'us-west\''}
```
## Learn more
- Add [Annotations](https://grafana.com/docs/grafana/latest/dashboards/annotations/).
- Configure and use [Templates and variables](https://grafana.com/docs/grafana/latest/variables/).
- Add [Transformations](https://grafana.com/docs/grafana/latest/panels/transformations/).
- Set up alerting; refer to [Alerts overview](https://grafana.com/docs/grafana/latest/alerting/).
### Installing ClickHouse on Grafana Cloud:
Installing plugins on a Grafana Cloud instance is a one-click install; same with updates. Cool, right?
Note that it could take up to 1 minute to see the plugin show up in your Grafana.
[Sign up for Grafana Cloud to install ClickHouse.](https://grafana.com/auth/sign-up/create-user?plcmt=grafana-clickhouse-datasource&key=pg&value=plugins)
For more information, visit the docs on [plugin installation](https://grafana.com/docs/grafana/latest/plugins/installation/).
### Installing on a local Grafana:
For local instances, plugins are installed and updated via a simple CLI command. Plugins are not updated automatically; however, you will be notified when updates are available right within your Grafana.
#### 1\. Install the Data Source
Use the `grafana-cli` tool to install ClickHouse from the command line:
```
grafana-cli plugins install grafana-clickhouse-datasource
```
The plugin will be installed into your Grafana plugins directory; the default is `/var/lib/grafana/plugins`. [More information on the cli tool](https://grafana.com/docs/grafana/latest/administration/cli/#plugins-commands).
Alternatively, you can manually download the .zip file for your architecture below and unpack it into your grafana plugins directory.
[darwin-amd64](https://grafana.com/api/plugins/grafana-clickhouse-datasource/versions/4.14.0/download?os=darwin&arch=amd64)
[darwin-arm64](https://grafana.com/api/plugins/grafana-clickhouse-datasource/versions/4.14.0/download?os=darwin&arch=arm64)
[linux-amd64](https://grafana.com/api/plugins/grafana-clickhouse-datasource/versions/4.14.0/download?os=linux&arch=amd64)
[linux-arm](https://grafana.com/api/plugins/grafana-clickhouse-datasource/versions/4.14.0/download?os=linux&arch=arm)
[linux-arm64](https://grafana.com/api/plugins/grafana-clickhouse-datasource/versions/4.14.0/download?os=linux&arch=arm64)
[windows-amd64](https://grafana.com/api/plugins/grafana-clickhouse-datasource/versions/4.14.0/download?os=windows&arch=amd64)
[any](https://grafana.com/api/plugins/grafana-clickhouse-datasource/versions/4.14.0/download)
#### 2\. Configure the Data Source
Newly installed data sources can be added immediately from the Data Sources section of the Grafana main menu.
Next, click the Add data source button in the upper right. The data source will be available for selection in the **Type** select box.
To see a list of installed data sources, click the **Plugins** item in the main menu. Both core data sources and installed data sources will appear.
# Changelog
## 4\.14.0
### Features
- Add FilterTime hint to enable multi-timestamp log filtering/sorting (\#1642)
- Skip OTel trace time range optimization when trace timestamp table does not exist (\#1663)
### Fixes
- Dependency updates
- Add separate columns for Resource/Scope/Log Attributes (\#1560)
- Fix panic when configuring datasource with numeric timeout values (\#1559)
## 4\.13.0
### Features
- Support for hiding table names in ad-hoc filters (\#1493)
- Allow manual placement of ad-hoc filters (\#1488)
- Add support for Nullable(Enum8/16) column types (\#1523)
- Add dashboard variable to control AdHoc filter syntax (\#1464)
### Fixes
- Fix generating query with column names containing colons (\#1538)
- Update config HTTPS language (\#1537)
- Dependency updates
## 4\.12.0
### Features
- Support log volume queries for the SQL editor. Note that this will only work for Grafana versions \>= 12.4.0 (\#1475)
- Support columns with `.` in ad-hoc filters (\#1481)
### Fixes
- Dependency updates
## 4\.11.4
### Fixes
- Fix view logs link in Explore and Dashboards (\#1462)
- Fix filter for map type LogLabels (\#1456)
- Temporarily disable slow JSON suggestions function (\#1468)
- Dependency updates
## 4\.11.3
### Fixes
- Fix config UI bugs (\#1409) and update design (\#1422)
- Dependency updates
## 4\.11.2
### Features
- Second part of redesigned ClickHouse config page (behind newClickhouseConfigPageDesign) (\#1387)
### Fixes
- Improved error classification to mark all ClickHouse errors as downstream errors, including errors wrapped in HTTP response bodies and multi-error chains (\#1405)
- Dependency updates
## 4\.11.1
### Fixes
- All ClickHouse errors are marked as downstream errors for Grafana (\#1378)
## 4\.11.0
### Features
- Merge OpenTelemetry resource/scope/log attributes into a unified Labels column in Logs (\#1369)
- First part of redesigned ClickHouse config page with sidebar navigation and collapsible sections (behind newClickhouseConfigPageDesign) (\#1370)
### Fixes
- Fix ad-hoc filter application with templated target tables (\#1241)
- Fix column sorting by formatting bytes in Grafana (\#1352)
- Fix events and links not displaying correctly for table view queries (\#1345)
- Dependency updates
## 4\.10.2
### Fixes
- Fix Ad-Hoc filters for variable datasources (\#1330)
- Fix switching between SQL Editor and Query Builder (\#1337)
- Fix large JSON objects + complex JSON types (\#1326)
- Configuration fixes related to row limit implementation (\#1294)
- Fix bug where switching to the logs query type errored (\#1341)
- Dependency updates
## 4\.10.1
### Fixes
- Bump grafana/plugin-actions from ff169fa386880e34ca85a49414e5a0ff84c3f7ad to b788be6746403ff9bae26d5e800794f2a5620b4c (\#1286)
- Bump cspell from 9.0.2 to 9.1.1 (\#1278)
## 4\.10.0
### Features
- Ad-hoc queries: Allow to filter by values inside the map (\#1265)
### Fixes
- Fix ad-hoc filter application with templated target tables (\#1241)
- Dependency updates
## 4\.9.1
### Fixes
- Error logging fix
## 4\.9.0
### Features
- Add support for the Grafana `row_limit` [configuration setting](https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#row_limit).
- Add support for kind, status, instrumentation library, links, events and state data for traces (\#1043, \#1208)
- Cancel JSON paths query after 10s (\#1206)
- SQL Editor now suggests database, table, column, and function names while typing (\#1204)
- Add SQL Formatter button + shortcut for making long queries more readable in the editor (\#1205)
### Fixes
- Fixed "run query" shortcut from running stale query (\#1205)
- Dependency updates
## 4\.8.2
### Fixes
- Dependency updates
## 4\.8.1
### Fixes
- Dependency updates
## 4\.8.0
### Features
- Enable CtrlCmd + Enter keyboard shortcut to Run Query (\#1158)
### Fixes
- Refactor `MutateResponse` function and PDC dialler creation (\#1155)
- Refactor `clickhouse.Connect` to improve context cancellation handling (\#1154)
- Prevent usage of failed connections and improve connection management (\#1156). Note that this change sets the limits listed below. Although we believe these limits are reasonable, you can adjust them in the datasource settings if needed:
- `MaxOpenConns` to 50.
- `MaxIdleConns` to 25.
- `ConnMaxLifetime` to 5 minutes.
- Dependency updates
## 4\.7.0
### Features
- Add JSON column sub-paths to column selection in query builder
- Added events support in the trace detail view (https://github.com/grafana/clickhouse-datasource/pull/1128)
## 4\.6.0
### Features
- Add support for new Variant, Dynamic, and JSON types (https://github.com/grafana/clickhouse-datasource/pull/1108)
### Fixes
- Optimized performance for log volumes processing using ClickHouse `multiSearchAny`
## 4\.5.1
### Fixes
- Dependency updates
## 4\.5.0
### Features
- Implemented log context for log queries
- Added configuration options for log context columns
- Queries parsed from the SQL editor will now attempt to re-map columns into their correct fields for Log and Trace queries.
- Added support for IN operator in adhoc filters
### Fixes
- Fixed and enhanced the logic for parsing a query back into the query builder.
## 4\.4.0
### Features
- Added "Labels" column selector to the log query builder
- Datasource OTel configuration will now set default table names for logs and traces.
### Fixes
- Added warning for when `uid` is missing in provisioned datasources.
- Map filters in the query builder now correctly show the key instead of the column name
- Updated and fixed missing `system.dashboards` dashboard in list of dashboards
- Updated the duration value in example traces dashboard to provide useful information
- Fix to display status codes from spans in trace queries (\#950)
## 4\.3.2
### Fixes
- Optimized performance for types dependent on the JSON converter
- Dependency updates
## 4\.3.1
### Features
- Added preset dashboard from `system.dashboards` table
### Fixes
- Fix trace start times in trace ID mode (\#900)
- Fixed OTel dashboard that was failing to import (\#908)
## 4\.3.0
### Features
- Added OpenTelemetry dashboard (\#884)
### Fixes
- Fix support for LowCardinality strings (\#857)
- Update trace queries to better handle time fields (\#890)
- Dependency bumps
## 4\.2.0
### Features
- Added `$__dateTimeFilter()` macro for conveniently filtering a PRIMARY KEY composed of Date and DateTime columns.
## 4\.1.0
### Features
- Added the ability to define column alias tables in the config, which simplifies query syntax for tables with a known schema.
## 4\.0.8
### Fixes
- Fixed `IN` operator escaping the entire string (specifically with `Nullable(String)`), also added `FixedString(N)` (\#830)
- Fixed query builder filter editor on alert rules page (\#828)
## 4\.0.7
- Upgrade dependencies
## 4\.0.6
### Fixes
- Add support for configuring proxy options from context rather than environment variables (supported by updating `sqlds`) (\#799)
## 4\.0.5
### Fixes
- Fixed converter regex for `Nullable(IP)` and `Nullable(String)`. It won't match to `Array(Nullable(IP))` or `Array(Nullable(String))` any more. (\#783)
- Updated `grafana-plugin-sdk-go` to fix a PDC issue. More details [here](https://github.com/grafana/grafana-plugin-sdk-go/releases/tag/v0.217.0) (\#790)
## 4\.0.4
### Fixes
- Changed trace timestamp table from the constant `otel_traces_trace_id_ts` to a suffix `_trace_id_ts` applied to the current table name.
## 4\.0.3
### Features
- Added `$__fromTime_ms` macro that represents the dashboard "from" time in milliseconds using a `DateTime64(3)`
- Added `$__toTime_ms` macro that represents the dashboard "to" time in milliseconds using a `DateTime64(3)`
- Added `$__timeFilter_ms` macro that uses `DateTime64(3)` for millisecond precision time filtering
- Re-added query type selector in dashboard view. This was only visible in explore view, but somehow it affects dashboard view, and so it has been re-added. (\#730)
- When OTel is enabled, Trace ID queries now use a skip index to optimize exact ID lookups on large trace datasets (\#724)
### Fixes
- Fixed performance issues caused by `$__timeFilter` using a `DateTime64(3)` instead of `DateTime` (\#699)
- Fixed trace queries from rounding span durations under 1ms to `0` (\#720)
- Fixed AST error when including Grafana macros/variables in SQL (\#714)
- Fixed empty builder options when switching from SQL Editor back to Query Editor
- Fix SQL Generator including "undefined" in `FROM` when database isn't defined
- Allow adding spaces in multi filters (such as `WHERE .. IN`)
- Fixed missing `AND` keyword when adding a filter to a Trace ID query
## 4\.0.2
### Fixes
- Fixed migration script not running when opening an existing v3 config
## 4\.0.1
### Fixes
- Set `protocol` to `native` by default in config view. Fixes the "default port" description.
## 4\.0.0
### Features
Version 4.0.0 contains major revisions to the query builder and datasource configuration settings.
#### Query Builder
- Completely rebuilt query builder to have specialized editors for Table, Logs, Time Series, and Traces.
- Completely rebuilt SQL generator to support more complicated and dynamic queries.
- Updated query builder options structure to be clearer and support more complex queries.
- Updated database/table selector to be in a more convenient location. Database and table options are automatically selected on initial load.
- Upgraded query builder state management so queries stay consistent when saving/editing/sharing queries.
- Separated Table and Time Series query builders. Table view operates as a catch-all for queries that don't fit the other query types.
- Combined "format" into the query type switcher for simplicity. The query tab now changes the builder view and the display format when on the Explore page. This includes the raw SQL editor.
- Added an OTEL switch for logs and trace views. This will allow for quicker query building for those using the OTEL exporter for ClickHouse.
- Updated Time Series query builder with dedicated Time column. Default filters are added on-load.
- Added an `IS ANYTHING` filter that acts as a placeholder for easily editing later (useful for query templates/bookmarks on the Explore page.)
- Added better support for Map types on the Filter editor.
- LIMIT editor can now be set to 0 to be excluded from the query.
- Table and Time Series views now have a simple / aggregate mode, depending on the query complexity.
- Updated the logs histogram query to use the new query builder options and column hints.
- Added Logs query builder with dedicated Time, Level, and Message columns. Includes OTEL switch for automatically loading OTEL schema columns. Default filters are added on-load.
- Added Trace query builder with dedicated trace columns. Includes OTEL switch for automatically loading OTEL schema columns. Default filters are added on-load.
- Updated data panel filtering to append filters with column hints. Visible in logs view when filtering by a specific level. Instead of referencing a column by name, it will use its hint.
- Order By now lists aggregates by their full name + alias.
- Order By column allows for custom value to be typed in.
- Aggregate column name allows for custom value to be typed in.
- Filter editor allows for custom column names to be typed in.
- Increased width of filter value text input.
- Columns with the `Map*` type now show a `[]` at the end to indicate they are complex types. For example, `SpanAttributes[]`.
- Filter editor now has a dedicated field for map key. You can now select a map column and its key separately. For example, `SpanAttributes['key']`.
- Map types now load a sample of options when editing the `key` for the map. This doesn't include all unique values, but for most datasets it should be a convenience.
- Added column hints, which offers better linking across query components when working with columns and filters. For example, a filter can be added for the `Time` column, even without knowing what the time column name is yet. This enables better SQL generation that is "aware" of a column's intended use.
### Plugin Backend
- Added migration logic for `v3` configs going to `v4+`. This is applied when the config is loaded when building a database connection.
- `$__timeFilter`, `$__fromTime`, and `$__toTime` macros now convert to `DateTime64(3)` for better server-side type conversion. Also enables millisecond precision time range filtering.
#### Datasource Configuration
- Added migration script for `v3.x` configurations to `v4+`. This runs automatically when opening/saving the datasource configuration.
- Renamed config value `server` to `host`.
- Renamed config value `timeout` to the more specific `dial_timeout`.
- Updated labeling for port selection. The default port will now change depending on the native/http and secure/insecure settings.
- Rearranged fields and sections to flow better for initial setup of a new datasource.
- Added plugin version to config data for easier config version migrations in the future.
- Added fields for setting default values for database/table.
- Added section for setting default log database/table/columns. Includes OTEL. These are used when using the log query builder.
- Added section for setting default trace database/table/columns. Includes OTEL. These are used when using the trace query builder.
- Added OTEL switches for logs/traces for quicker query building. OTEL defaults to the latest version, and will auto update if kept on this setting.
- Increased width of inputs for typically long values (server URL, path, etc.)
- Allow adding custom HTTP headers with either plain text or secure credentials. [\#633](https://github.com/grafana/clickhouse-datasource/pull/633)
- Add `path` setting to specify an additional URL path when using the HTTP protocol. [\#512](https://github.com/grafana/clickhouse-datasource/pull/512)
### Fixes
- Queries will now remain consistent when reloading/editing a previously saved query.
- Fixed default Ad-Hoc filters. [\#650](https://github.com/grafana/clickhouse-datasource/pull/650)
- Fixed Ad-Hoc filters parsing numeric fields. [\#629](https://github.com/grafana/clickhouse-datasource/pull/629)
- Fixed majority of usability quirks with redesigned query builder.
### Upgrades
- Updated all dependencies to latest compatible versions (Includes Dependabot PRs)
## 3\.3.0
### Features
- Support Point geo data type.
### Fixes
- Fix timeInterval\_ms macro.
- Fix Table summary and Parts over time panels in Data Analysis dashboard.
### Upgrades
- Upgrade [grafana-plugin-sdk-go](https://github.com/grafana/grafana-plugin-sdk-go).
## 3\.2.0
### Features
- Add `timeInterval_ms` macro to allow higher precision queries on DateTime64 columns. [\#462](https://github.com/grafana/clickhouse-datasource/pull/462).
### Fixes
- Ensure databases, tables, and columns are escaped correctly. [\#460](https://github.com/grafana/clickhouse-datasource/pull/460).
- Fix conditionAll handling. [\#459](https://github.com/grafana/clickhouse-datasource/pull/459).
- Fix support for ad-hoc regexp filters: `=~`, `!~` [\#414](https://github.com/grafana/clickhouse-datasource/pull/414).
- Do not create malformed ad-hoc filters; invalid values will be ignored. [\#451](https://github.com/grafana/clickhouse-datasource/pull/451).
- Fix auto formatting by reverting to table correctly. [\#469](https://github.com/grafana/clickhouse-datasource/pull/469).
- Fix parsing of numeric configuration values in `yaml` file. [\#456](https://github.com/grafana/clickhouse-datasource/pull/456).
## 3\.1.0
- Stable release of v3.0.4-beta
## 3\.0.4-beta
- Update Grafana dependencies to \>=v9.0.0
- **Feature** - [Add support for the secure socks proxy](https://github.com/grafana/clickhouse-datasource/pull/389)
## 3\.0.3-beta
- Update ClickHouse driver to v2.9.2
## 3\.0.2-beta
- Custom ClickHouse settings can be set in data source settings. [Allow passing custom ClickHouse settings in datasource](https://github.com/grafana/clickhouse-datasource/pull/366)
- Histogram UI fixes [Histogram UI fixes](https://github.com/grafana/clickhouse-datasource/pull/363)
- Support filter/filter out logs view actions
- Fix undefined database name by default
- Reset level and time field properly on table/database change
- Make it possible to clear the level field (so the histogram will render without grouping by level)
- Fix filter value that gets stuck in the UI
- Tracing dashboard added to default dashboards. [Tracing dashboard](https://github.com/grafana/clickhouse-datasource/pull/336)
## 3\.0.1-beta
- Users on v8.x of Grafana are encouraged to continue to use v2.2.0 of the plugin.
- Users of Grafana v9.x can use v3; however, it is beta and may contain bugs.
## 3\.0.0
- **Feature** - [Logs volume histogram support](https://github.com/grafana/clickhouse-datasource/pull/352)
- **Chore** - Update clickhouse-go to v2.8.1
## 2\.2.1
- **Chore** - Backend binaries compiled with latest go version 1.20.4
- Custom ClickHouse settings can be set in data source settings. Allow passing custom [ClickHouse settings in datasource](https://github.com/grafana/clickhouse-datasource/pull/371)
- Standard Golang HTTP proxy environment variables support (`HTTP_PROXY`/`HTTPS_PROXY`/`NO_PROXY`). See [FromEnvironment](https://pkg.go.dev/golang.org/x/net/http/httpproxy#FromEnvironment) for more information. If the Grafana instance is started with one of these env variables, the driver will automatically load them now.
## 2\.2.0
- **Feature** - [Support format dropdown and support for rendering traces](https://github.com/grafana/clickhouse-datasource/pull/329)
## 2\.1.1
- **Fix** - [Date and Date32 type normalization with user's timezone](https://github.com/grafana/clickhouse-datasource/pull/314)
## 2\.1.0
- **Fix** - Quote table names with dots by @slvrtrn in https://github.com/grafana/clickhouse-datasource/pull/298
- Add a predefined TimeRange filter if there is at least one DateTime\* column by @slvrtrn in https://github.com/grafana/clickhouse-datasource/pull/304
## 2\.0.7
- **Fix** - Empty template variables used with the conditionalAll macro work the same as selecting All. [Allow empty Inputs for \$\_\_conditionalAll](https://github.com/grafana/clickhouse-datasource/issues/262)
- **Fix** - Intervals are limited to 1 second. [limit \$\_\_interval\_s to at least 1 second](https://github.com/grafana/clickhouse-datasource/pull/270)
- **Chore** - Bump ClickHouse go API to v2.5.1 [Bump github.com/ClickHouse/clickhouse-go/v2 from 2.4.3 to 2.5.1](https://github.com/grafana/clickhouse-datasource/pull/283)
## 2\.0.6
- **Chore** - Backend binaries compiled with latest go version 1.19.4
- **Chore** - Backend grafana dependencies updated to latest version
- **Chore** - Clickhouse-go client updated to [v2.4.3](https://github.com/ClickHouse/clickhouse-go/blob/main/CHANGELOG.md#243-2022-11-30)
## 2\.0.5
- **Chore** - Update sqlds to 2.3.17 which fixes complex macro queries
- **Chore** - Backend grafana dependency updated
- **Fix** - Allow default protocol toggle value when saving in settings
## 2\.0.4
- **Fix** - Query builder: allow custom filter values for fields with [`Map`](https://clickhouse.com/docs/en/sql-reference/data-types/map/) type
## 2\.0.3
- **Chore** - Backend binaries compiled with latest go version 1.19.3
- **Chore** - Backend grafana dependencies updated
## 2\.0.2
- **Feature** - Update sqlds to 2.3.13 which fixes some macro queries
## 2\.0.1
- **Bug** - Now works with Safari. Safari does not support regex lookaheads
## 2\.0.0
- **Feature** - Upgrade driver to support HTTP
- **Feature** - Changed how ad hoc filters work with a settings option provided in CH 22.7
- **Feature** - Conditional alls are now handled with a conditional all function. The function checks whether the second parameter is a template variable set to All; if so, it replaces the function with `1=1`, and if not, it replaces the function with the first parameter.
- **Bug** - Visual query builder can use any date type for time field
- **Fix** - 'any' is now an aggregation type in the visual query builder
- **Fix** - Time filter macros can be used in the adhoc query
- **Bug** - Time interval macro cannot have an interval of 0
- **Fix** - Update driver to v2.1.0
- **Bug** - Expand query button works with grafana 8.0+
- **Fix** - Added adhoc columns macro
## 1\.1.2
- **Bug** - Add timerange to metricFindQuery
## 1\.1.1
- **Bug** - Add timeout
## 1\.1.0
- **Feature** - Add convention for showing logs panel in Explore
## 1\.0.0
- Official release
## 0\.12.7
- **Fix** - Ignore template vars when validating sql
## 0\.12.6
- **Fix** - Time series builder - use time alias when grouping/ordering
## 0\.12.5
- **Chore** - Dashboards
## 0\.12.4
- **Fix** - Time series WHERE clause; make the default database the default in the visual editor
## 0\.12.3
- **Fix** - When removing conditional all, check scoped vars (support repeating panels)
## 0\.12.2
- **Fix** - When removing conditional all, only remove lines with variables
## 0\.12.1
- **Fix** - Handle large decimals properly
## 0\.12.0
- **Feature** - Time series builder: use \$\_\_timeInterval macro on time field so buckets can be adjusted from query options.
## 0\.11.0
- **Feature** - Time series: Hide fields, use group by in select, use time field in group by
## 0\.10.0
- **Feature** - Ad-Hoc sourced by database or table
## 0\.9.13
- **Fix** - Update sdk to show streaming errors
## 0\.9.12
- **Fix** - Format check after ast change
## 0\.9.11
- **Feature** - `$__timeInterval(column)` and `$__interval_s` macros
## 0\.9.10
- **Fix** - Set format when using the new Run Query button.
## 0\.9.9
- **Feature** - Query Builder.
## 0\.9.8
- **Fix** - Detect Multi-line time series. Handle cases with functions.
## 0\.9.7
- **Feature** - Multi-line time series.
## 0\.9.6
- **Bug** - Change time template variable names.
## 0\.9.5
- **Bug** - Fix global template variables.
## 0\.9.4
- **Bug** - Fix query type variables.
## 0\.9.3
- **Bug** - Support Array data types.
## 0\.9.2
- **Bug** - Fix TLS model.
## 0\.9.1
- **Feature** - Add secure toggle to config editor.
## 0\.9.0
- Initial Beta release.
- [Dashboard Templates](https://grafana.com/grafana/dashboards/?plcmt=footer-nav)
### Learn
- [Documentation](https://grafana.com/docs/?plcmt=footer-nav)
- [Blog](https://grafana.com/blog/?plcmt=footer-nav)
- [Community](https://grafana.com/community/?plcmt=footer-nav)
- [Events](https://grafana.com/events/?plcmt=footer-nav)
- [Observability Survey & Reports](https://grafana.com/observability-benefits-for-business/?plcmt=footer-nav)
### Company
- [About Grafana Labs](https://grafana.com/about/grafana-labs/?plcmt=footer-nav)
- [Open Positions](https://job-boards.greenhouse.io/grafanalabs/)
- [Partnerships](https://grafana.com/partnerships/?plcmt=footer-nav)
- [Newsroom](https://grafana.com/press/?plcmt=footer-nav)
- [Success Stories](https://grafana.com/success/?plcmt=footer-nav)
- [Contact Us](https://grafana.com/contact/?plcmt=footer-nav)
- [Getting Help](https://grafana.com/help/?plcmt=footer-nav)
- [Professional Services](https://grafana.com/professional-services/?plcmt=footer-nav)
- [Hey AI](https://grafana.com/ai-info/?plcmt=footer-nav)
### Compare
- [Datadog vs. Grafana Labs](https://grafana.com/compare/grafana-vs-datadog/?plcmt=footer-nav)
- [Dynatrace vs Grafana Cloud](https://grafana.com/compare/grafana-vs-dynatrace/?plcmt=footer-nav)
- [Elasticsearch vs Grafana Cloud](https://grafana.com/compare/grafana-vs-elastic/?plcmt=footer-nav)
- [New Relic vs Grafana Cloud](https://grafana.com/compare/grafana-vs-new-relic/?plcmt=footer-nav)
- [PagerDuty vs Grafana Cloud](https://grafana.com/compare/grafana-vs-pagerduty/?plcmt=footer-nav)
- [Splunk vs Grafana Cloud](https://grafana.com/compare/grafana-vs-splunk/?plcmt=footer-nav)
)
Donut take our word for it. Try [Grafana Cloud](https://grafana.com/products/cloud/) today.
[Grafana Cloud Status](https://status.grafana.com/?plcmt=footer-nav)[Legal & Security](https://grafana.com/legal/?plcmt=footer-nav)[Terms of Service](https://grafana.com/legal/terms/?plcmt=footer-nav)[Privacy Policy](https://grafana.com/legal/privacy-policy/?plcmt=footer-nav)[Trademark Policy](https://grafana.com/trademark-policy/?plcmt=footer-nav)
Copyright 2026 © Grafana Labs
[](https://www.facebook.com/grafana/)[](https://twitter.com/grafana/)[](https://www.linkedin.com/company/grafana-labs/)[](https://github.com/grafana/)[](https://www.youtube.com/channel/UCYCwgQAMm9sTJv0rgwQLCxw/)[](https://www.reddit.com/r/grafana/)
Grafana Labs uses cookies for the normal operation of this website. [**Learn more.**](https://grafana.com/terms#cookie-policy)
Got it\! |
| Readable Markdown | ## Official ClickHouse data source for Grafana
The ClickHouse data source plugin allows you to query and visualize ClickHouse data in Grafana.
 
## Version compatibility
Users on Grafana `v9.x` and higher can use `v4` of the plugin. Users on Grafana `v8.x` are encouraged to continue using `v2.2.0`.
\* *As of v2.0, this plugin only supports ad hoc filters when using ClickHouse 22.7+.*
## Installation
For detailed instructions on how to install the plugin on Grafana Cloud or locally, please check out the [Plugin installation docs](https://grafana.com/docs/grafana/latest/plugins/installation/).
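For a self-managed Grafana instance, installation typically comes down to `grafana-cli` followed by a server restart (a sketch; the restart command depends on how your Grafana server is managed):
```
# Install the plugin from the Grafana plugin catalog
grafana-cli plugins install grafana-clickhouse-datasource

# Restart Grafana so the plugin is loaded (systemd example)
sudo systemctl restart grafana-server
```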
## Configuration
### ClickHouse user for the data source
Set up a ClickHouse user account with [readonly](https://clickhouse.com/docs/en/operations/settings/permissions-for-queries#settings_readonly) permission and access to the databases and tables you want to query. Note that Grafana does not validate that queries are safe; a query can contain any SQL statement. For example, statements like `ALTER TABLE system.users DELETE WHERE name='sadUser'` and `DROP TABLE sadTable;` would be executed.
To configure a readonly user, follow these steps:
1. Create a `readonly` user profile following the [Creating Users and Roles in ClickHouse](https://clickhouse.com/docs/en/operations/access-rights) guide.
2. Ensure the `readonly` user has enough permission to modify the `max_execution_time` setting required by the underlying [clickhouse-go client](https://github.com/ClickHouse/clickhouse-go/).
3. If you're using a public ClickHouse instance, it's not recommended to set `readonly=2` in the `readonly` profile. Instead, leave `readonly=1` and set the constraint type of `max_execution_time` to [changeable\_in\_readonly](https://clickhouse.com/docs/en/operations/settings/constraints-on-settings) to allow modification of this setting.
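The steps above can be sketched in SQL roughly as follows (profile, user, and database names are illustrative, and the exact constraint syntax may vary by ClickHouse version; `CHANGEABLE_IN_READONLY` requires ClickHouse 22.7+ and may need the `settings_constraints_replace_previous` access-control setting enabled on the server):
```
-- A profile that keeps the user readonly but lets the client
-- adjust max_execution_time, as the clickhouse-go client requires
CREATE SETTINGS PROFILE IF NOT EXISTS grafana_readonly
    SETTINGS readonly = 1, max_execution_time = 60 CHANGEABLE_IN_READONLY;

-- A user bound to that profile
CREATE USER IF NOT EXISTS grafana_user
    IDENTIFIED WITH sha256_password BY 'change-me'
    SETTINGS PROFILE 'grafana_readonly';

-- Read access only to the databases/tables to be queried
GRANT SELECT ON mydb.* TO grafana_user;
```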
### ClickHouse protocol support
The plugin supports both the `Native` (default) and `HTTP` transport protocols. This can be selected in the configuration via the `protocol` parameter. Both protocols exchange data with ClickHouse using the optimized native format.
Note that the default ports for `HTTP/S` and `Native` differ:
- HTTP - 8123
- HTTPS - 8443
- Native - 9000
- Native with TLS - 9440
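For instance, when provisioning a data source over HTTPS, the protocol, port, and TLS settings travel together in `jsonData` (values are illustrative):
```
jsonData:
  protocol: http   # HTTP transport
  port: 8443       # HTTPS port
  secure: true     # enables TLS, making this HTTPS
```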
### Manual configuration via UI
Once the plugin is installed on your Grafana instance, follow [these instructions](https://grafana.com/docs/grafana/latest/datasources/add-a-data-source/) to add a new ClickHouse data source, and enter configuration options.
### With a configuration file
It is possible to configure data sources using configuration files with Grafana’s provisioning system. To read about how it works, refer to [Provisioning Grafana data sources](https://grafana.com/docs/grafana/latest/administration/provisioning/#data-sources).
Here are some provisioning examples for this data source using basic authentication:
```
apiVersion: 1
datasources:
  - name: ClickHouse
    type: grafana-clickhouse-datasource
    jsonData:
      defaultDatabase: database
      port: 9000
      host: localhost
      username: username
      tlsSkipVerify: false
      # tlsAuth: <bool>
      # tlsAuthWithCACert: <bool>
      # secure: <bool>
      # dialTimeout: <seconds>
      # queryTimeout: <seconds>
      # protocol: <native|http>
      # defaultTable: <string>
      # httpHeaders:
      #   - name: X-Example-Header
      #     secure: false
      #     value: <string>
      #   - name: Authorization
      #     secure: true
      # logs:
      #   defaultDatabase: <string>
      #   defaultTable: <string>
      #   otelEnabled: <bool>
      #   otelVersion: <string>
      #   timeColumn: <string>
      #   ...Column: <string>
      # traces:
      #   defaultDatabase: <string>
      #   defaultTable: <string>
      #   otelEnabled: <bool>
      #   otelVersion: <string>
      #   durationUnit: <seconds|milliseconds|microseconds|nanoseconds>
      #   traceIdColumn: <string>
      #   ...Column: <string>
    secureJsonData:
      password: password
      # tlsCACert: <string>
      # tlsClientCert: <string>
      # tlsClientKey: <string>
      # secureHttpHeaders.Authorization: <string>
```
## Building queries
Queries can be built using the raw SQL editor or the query builder. Queries can contain macros which simplify syntax and allow for dynamic SQL generation.
### Time series
Time series visualization options are selectable after adding a `datetime` field type to your query. This field will be used as the timestamp. You can select time series visualizations using the visualization options. Grafana interprets timestamp rows without explicit time zone as UTC. Any column except `time` is treated as a value column.
#### Multi-line time series
To create multi-line time series, the query must return at least 3 fields in the following order:
- field 1: `datetime` field with an alias of `time`
- field 2: value to group by
- field 3+: the metric values
For example:
```
SELECT log_time AS time, machine_group, avg(disk_free) AS avg_disk_free
FROM mgbench.logs1
GROUP BY machine_group, log_time
ORDER BY log_time
```
### Tables
Table visualizations will always be available for any valid ClickHouse query.
### Visualizing logs with the Logs Panel
To use the Logs panel, your query must return a timestamp and string values. To default to the logs visualization in Explore mode, set the timestamp alias to *log\_time*.
For example:
```
SELECT log_time AS log_time, machine_group, toString(avg(disk_free)) AS avg_disk_free
FROM logs1
GROUP BY machine_group, log_time
ORDER BY log_time
```
To force rendering as logs in the absence of a `log_time` column, set the Format to `Logs` (available from 2.2.0).
### Visualizing traces with the Traces Panel
Ensure your data meets the [requirements of the traces panel](https://grafana.com/docs/grafana/latest/explore/trace-integration/#data-api). This applies if using the visualization or Explore view.
Set the Format to `Trace` when constructing the query (available from 2.2.0).
If using the [OpenTelemetry Collector and ClickHouse exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/clickhouseexporter), the following query produces the required column names (these are case-sensitive):
```
SELECT
    TraceId AS traceID,
    SpanId AS spanID,
    SpanName AS operationName,
    ParentSpanId AS parentSpanID,
    ServiceName AS serviceName,
    Duration / 1000000 AS duration,
    Timestamp AS startTime,
    arrayMap(key -> map('key', key, 'value', SpanAttributes[key]), mapKeys(SpanAttributes)) AS tags,
    arrayMap(key -> map('key', key, 'value', ResourceAttributes[key]), mapKeys(ResourceAttributes)) AS serviceTags,
    if(StatusCode IN ('Error', 'STATUS_CODE_ERROR'), 2, 0) AS statusCode
FROM otel.otel_traces
WHERE TraceId = '61d489320c01243966700e172ab37081'
ORDER BY startTime ASC
```
### Macros
To simplify syntax and to allow for dynamic parts, like date range filters, the query can contain macros.
Here is an example of a query with a macro that will use Grafana's time filter:
```
SELECT date_time, data_stuff
FROM test_data
WHERE $__timeFilter(date_time)
```
| Macro | Description | Output example |
|---|---|---|
| *\$\_\_dateFilter(columnName)* | Replaced by a conditional that filters the data (using the provided column) based on the date range of the panel | `date >= toDate('2022-10-21') AND date <= toDate('2022-10-23')` |
| *\$\_\_timeFilter(columnName)* | Replaced by a conditional that filters the data (using the provided column) based on the time range of the panel in seconds | `time >= toDateTime(1415792726) AND time <= toDateTime(1447328726)` |
| *\$\_\_timeFilter\_ms(columnName)* | Replaced by a conditional that filters the data (using the provided column) based on the time range of the panel in milliseconds | `time >= fromUnixTimestamp64Milli(1415792726123) AND time <= fromUnixTimestamp64Milli(1447328726456)` |
| *\$\_\_dateTimeFilter(dateColumn, timeColumn)* | Shorthand that combines `$__dateFilter()` AND `$__timeFilter()` using separate Date and DateTime columns. | `$__dateFilter(dateColumn) AND $__timeFilter(timeColumn)` |
| *\$\_\_fromTime* | Replaced by the starting time of the range of the panel casted to `DateTime` | `toDateTime(1415792726)` |
| *\$\_\_toTime* | Replaced by the ending time of the range of the panel casted to `DateTime` | `toDateTime(1447328726)` |
| *\$\_\_fromTime\_ms* | Replaced by the starting time of the range of the panel casted to `DateTime64(3)` | `fromUnixTimestamp64Milli(1415792726123)` |
| *\$\_\_toTime\_ms* | Replaced by the ending time of the range of the panel casted to `DateTime64(3)` | `fromUnixTimestamp64Milli(1447328726456)` |
| *\$\_\_interval\_s* | Replaced by the interval in seconds | `20` |
| *\$\_\_timeInterval(columnName)* | Replaced by a function calculating the interval based on window size in seconds, useful when grouping | `toStartOfInterval(toDateTime(column), INTERVAL 20 second)` |
| *\$\_\_timeInterval\_ms(columnName)* | Replaced by a function calculating the interval based on window size in milliseconds, useful when grouping | `toStartOfInterval(toDateTime64(column, 3), INTERVAL 20 millisecond)` |
| *\$\_\_conditionalAll(condition, \$templateVar)* | Replaced by the first parameter when the template variable in the second parameter does not select every value. Replaced by the 1=1 when the template variable selects every value. | `condition` or `1=1` |
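As an illustration of combining these macros (the table and column names are hypothetical), a query that buckets rows by the dashboard interval, respects the panel's time range, and only applies a `region` filter when the `$region` template variable does not select every value might look like:
```
SELECT
    $__timeInterval(event_time) AS time,
    count() AS events
FROM default.events
WHERE $__timeFilter(event_time)
    AND $__conditionalAll(region IN (${region:singlequote}), $region)
GROUP BY time
ORDER BY time
```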
The plugin also supports notation using braces {}. Use this notation when queries are needed inside parameters.
### Templates and variables
To add a new ClickHouse query variable, refer to [Add a query variable](https://grafana.com/docs/grafana/latest/variables/variable-types/add-query-variable/).
After creating a variable, you can use it in your ClickHouse queries by using [Variable syntax](https://grafana.com/docs/grafana/latest/variables/syntax/). For more information about variables, refer to [Templates and variables](https://grafana.com/docs/grafana/latest/variables/).
### Importing dashboards for ClickHouse
Follow these [instructions](https://grafana.com/docs/grafana/latest/dashboards/export-import/#import-dashboard) to import a dashboard.
You can also find available, pre-made dashboards by navigating to the data sources configuration page, selecting the ClickHouse data source and clicking on the Dashboards tab.
We distribute the following dashboards with the plugin. These are aimed at assisting with support analysis of a ClickHouse cluster and do not rely on external datasets. The querying user requires access to the `system` database.
1. Cluster Analysis - an overview of configured clusters, merges, mutations and data replication.
2. Data Analysis - an overview of current databases and tables, including their respective sizes, partitions and parts.
3. Query Analysis - an analysis of queries by type, performance and resource consumption.
### Ad Hoc Filters
Ad hoc filters are only supported with version 22.7+ of ClickHouse.
Ad hoc filters allow you to add key/value filters that are automatically added to all metric queries that use the specified data source, without being explicitly used in queries.
By default, ad hoc filters are populated with all tables and columns. If you have a default database defined in the data source settings, all tables from that database are used to populate the filters. Because this can be slow or expensive, you can introduce a second variable to limit the ad hoc filters: a `constant` variable named `clickhouse_adhoc_query` containing a comma-delimited list of databases, a single database, or a `database.table` combination to show only the columns of a single table.
Ad Hoc filters also work with the Map and JSON types for OTel data. Map is the default, and will automatically convert the merged labels output into a usable filter. To have the filter logic use JSON syntax, add a dashboard variable with a `constant` type called `clickhouse_adhoc_use_json` (the variable's `value` is ignored, it just has to be present).
For more information on ad hoc filters, see the [Grafana docs](https://grafana.com/docs/grafana/latest/variables/variable-types/add-ad-hoc-filters/).
#### Using a query for Ad Hoc filters
The `clickhouse_adhoc_query` variable also accepts any valid ClickHouse query. The query results are used to populate your ad hoc filter's selectable values. You may choose to hide this variable from view, as it serves no further purpose.
For example, if `clickhouse_adhoc_query` is set to `SELECT DISTINCT machine_name FROM mgbench.logs1` you would be able to select which machine names are filtered for in the dashboard.
#### Manual Ad Hoc Filter Placement with `$__adHocFilters`
By default, ad-hoc filters are automatically applied to queries by detecting the target table using SQL parsing. However, for queries that use CTEs or ClickHouse-specific syntax like `INTERVAL` or aggregate functions with parameters, the automatic detection may fail. In these cases, you can manually specify where to apply ad-hoc filters using the `$__adHocFilters('table_name')` macro.
This macro expands to the ClickHouse `additional_table_filters` setting with the currently active ad-hoc filters. It should be placed in the `SETTINGS` clause of your query.
Example:
```
SELECT *
FROM (
    SELECT * FROM my_complex_table
    WHERE complicated_condition
) AS result
SETTINGS $__adHocFilters('my_complex_table')
```
When ad-hoc filters are active (e.g., `status = 'active'` and `region = 'us-west'`), this expands to:
```
SELECT *
FROM (
    SELECT * FROM my_complex_table
    WHERE complicated_condition
) AS result
SETTINGS additional_table_filters={'my_complex_table': 'status = \'active\' AND region = \'us-west\''}
```
## Learn more
- Add [Annotations](https://grafana.com/docs/grafana/latest/dashboards/annotations/).
- Configure and use [Templates and variables](https://grafana.com/docs/grafana/latest/variables/).
- Add [Transformations](https://grafana.com/docs/grafana/latest/panels/transformations/).
- Set up alerting; refer to [Alerts overview](https://grafana.com/docs/grafana/latest/alerting/). |
| Shard | 116 (laksa) |
| Root Hash | 4741960896119687916 |
| Unparsed URL | com,grafana!/grafana/plugins/grafana-clickhouse-datasource/ s443 |