| Property | Value |
|---|---|
| URL | https://clickhouse.com/docs/guides/developer/ttl |
| Last Crawled | 2026-04-06 15:13:33 (2 days ago) |
| First Indexed | 2025-02-21 00:01:39 (1 year ago) |
| HTTP Status Code | 200 |
| Meta Title | Manage data with TTL (Time-to-live) \| ClickHouse Docs |
| Meta Description | TTL (time-to-live) refers to the capability of having rows or columns moved, deleted, or rolled up after a certain interval of time has passed. |
| Meta Canonical | null |
# Manage data with TTL (time-to-live)
## Overview of TTL
TTL (time-to-live) refers to the capability of having rows or columns moved, deleted, or rolled up after a certain interval of time has passed. While the expression "time-to-live" sounds like it only applies to deleting old data, TTL has several use cases:
- Removing old data: no surprise, you can delete rows or columns after a specified time interval
- Moving data between disks: after a certain amount of time, you can move data between storage volumes - useful for deploying a hot/warm/cold architecture
- Data rollup: rollup your older data into various useful aggregations and computations before deleting it
Note
TTL can be applied to entire tables or specific columns.
## TTL syntax
The `TTL` clause can appear after a column definition and/or at the end of the table definition. Use the `INTERVAL` clause to define a length of time (which needs to be a `Date` or `DateTime` data type). For example, the following table has two columns with `TTL` clauses:
```sql
CREATE TABLE example1 (
timestamp DateTime,
x UInt32 TTL timestamp + INTERVAL 1 MONTH,
y String TTL timestamp + INTERVAL 1 DAY,
z String
)
ENGINE = MergeTree
ORDER BY tuple()
```
- The x column has a time to live of 1 month from the timestamp column
- The y column has a time to live of 1 day from the timestamp column
- When the interval lapses, the column expires. ClickHouse replaces the column value with the default value of its data type. If all the column values in the data part expire, ClickHouse deletes this column from the data part in the filesystem.
Note
TTL rules can be altered or deleted. See the [Manipulations with Table TTL](https://clickhouse.com/docs/sql-reference/statements/alter/ttl) page for more details.
Best practice
When using table-level TTL to remove old rows, we recommend **partitioning your table by the date or month** of the same time field used in your TTL expression.
ClickHouse can drop entire partitions much more efficiently than deleting individual rows. When your partition key aligns with your TTL expression, ClickHouse can drop whole partitions at once when they expire, rather than rewriting data parts to remove expired rows.
Choose your partition granularity based on your TTL period:
- For TTL of days/weeks: partition by day using `toYYYYMMDD(date_field)`
- For TTL of months/years: partition by month using `toYYYYMM(date_field)` or `toStartOfMonth(date_field)`
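As a sketch of this practice (the table and column names here are hypothetical), a monthly-partitioned table whose partition key aligns with a six-month TTL might look like:

```sql
-- Partition key aligned with the table-level TTL,
-- so expired data can be removed by dropping whole partitions
CREATE TABLE events_by_month
(
    timestamp DateTime,
    message String
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(timestamp)
ORDER BY timestamp
TTL timestamp + INTERVAL 6 MONTH
SETTINGS ttl_only_drop_parts = 1
```

The `ttl_only_drop_parts` setting tells ClickHouse to remove a part only once every row in it has expired, which pairs naturally with a partition key that aligns with the TTL expression.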
## Triggering TTL events
Deleting or aggregating expired rows isn't immediate - it only occurs during table merges. If you have a table that's not actively merging (for whatever reason), there are two settings that trigger TTL events:
- `merge_with_ttl_timeout`: the minimum delay in seconds before repeating a merge with delete TTL. The default is 14400 seconds (4 hours).
- `merge_with_recompression_ttl_timeout`: the minimum delay in seconds before repeating a merge with recompression TTL (rules that roll up data before deleting). Default value: 14400 seconds (4 hours).
So by default, your TTL rules will be applied to your table at least once every 4 hours. Just modify the settings above if you need your TTL rules applied more frequently.
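For example, to have delete TTLs evaluated roughly every hour instead of every 4 hours on the `example1` table from above, you could lower the setting at the table level:

```sql
ALTER TABLE example1
    MODIFY SETTING merge_with_ttl_timeout = 3600
```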
Note
Not a great solution (or one that we recommend you use frequently), but you can also force a merge using `OPTIMIZE`:
```sql
OPTIMIZE TABLE example1 FINAL
```
`OPTIMIZE` initializes an unscheduled merge of the parts of your table, and `FINAL` forces a reoptimization if your table is already a single part.
## Removing rows
To remove entire rows from a table after a certain amount of time, define the TTL rule at the table level:
```sql
CREATE TABLE customers (
timestamp DateTime,
name String,
balance Int32,
address String
)
ENGINE = MergeTree
ORDER BY timestamp
TTL timestamp + INTERVAL 12 HOUR
```
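`DELETE` is the default action of a table-level TTL rule, so the clause above can equivalently be written with the action spelled out:

```sql
TTL timestamp + INTERVAL 12 HOUR DELETE
```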
Additionally, you can define a TTL rule based on a record's value by specifying a `WHERE` condition. Multiple conditions are allowed:
```sql
CREATE TABLE events
(
`event` String,
`time` DateTime,
`value` UInt64
)
ENGINE = MergeTree
ORDER BY (event, time)
TTL time + INTERVAL 1 MONTH DELETE WHERE event != 'error',
time + INTERVAL 6 MONTH DELETE WHERE event = 'error'
```
## Removing columns
Instead of deleting the entire row, suppose you want just the balance and address columns to expire. Let's modify the `customers` table and add a TTL for both columns to be 2 hours:
```sql
ALTER TABLE customers
MODIFY COLUMN balance Int32 TTL timestamp + INTERVAL 2 HOUR,
MODIFY COLUMN address String TTL timestamp + INTERVAL 2 HOUR
```
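To confirm the column TTLs were applied, you can inspect the table definition; the `TTL` expressions appear inline with the `balance` and `address` column definitions:

```sql
SHOW CREATE TABLE customers
```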
## Implementing a rollup
Suppose we want to delete rows after a certain amount of time but hang on to some of the data for reporting purposes. We don't want all the details - just a few aggregated results of historical data. This can be implemented by adding a `GROUP BY` clause to your `TTL` expression, along with some columns in your table to store the aggregated results.
Suppose in the following `hits` table we want to delete old rows, but hang on to the sum and maximum of the `hits` column before removing the rows. We will need fields to store those values in, and we will need to add a `GROUP BY` clause to the `TTL` clause that rolls up the sum and maximum:
```sql
CREATE TABLE hits (
timestamp DateTime,
id String,
hits Int32,
max_hits Int32 DEFAULT hits,
sum_hits Int64 DEFAULT hits
)
ENGINE = MergeTree
PRIMARY KEY (id, toStartOfDay(timestamp), timestamp)
TTL timestamp + INTERVAL 1 DAY
GROUP BY id, toStartOfDay(timestamp)
SET
max_hits = max(max_hits),
sum_hits = sum(sum_hits);
```
Some notes on the `hits` table:
- The `GROUP BY` columns in the `TTL` clause must be a prefix of the `PRIMARY KEY`, and we want to group our results by the start of the day. Therefore, `toStartOfDay(timestamp)` was added to the primary key
- We added two fields to store the aggregated results: `max_hits` and `sum_hits`
- Setting the default value of `max_hits` and `sum_hits` to `hits` is necessary for our logic to work, based on how the `SET` clause is defined
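As a quick sanity check (the values here are illustrative), you can insert a few rows older than the TTL interval and force a merge; the expired rows collapse into one row per `id` and day:

```sql
INSERT INTO hits VALUES
    (now() - INTERVAL 2 DAY, 'home', 10, 10, 10),
    (now() - INTERVAL 2 DAY, 'home', 20, 20, 20);

OPTIMIZE TABLE hits FINAL;

-- After the merge, expect a single row for 'home' with
-- max_hits = 20 and sum_hits = 30
SELECT id, max_hits, sum_hits FROM hits;
```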
## Implementing a hot/warm/cold architecture
Not supported in ClickHouse Cloud
Note
If you're using ClickHouse Cloud, the steps in this section aren't applicable. You don't need to worry about moving old data around in ClickHouse Cloud.
A common practice when working with large amounts of data is to move that data around as it gets older. Here are the steps for implementing a hot/warm/cold architecture in ClickHouse using the `TO DISK` and `TO VOLUME` clauses of the `TTL` command. (By the way, it doesn't have to be a hot and cold thing - you can use TTL to move data around for whatever use case you have.)
1. The `TO DISK` and `TO VOLUME` options refer to the names of disks or volumes defined in your ClickHouse configuration files. Create a new file named `my_system.xml` (or any file name) that defines your disks, then define volumes that use your disks. Place the XML file in `/etc/clickhouse-server/config.d/` to have the configuration applied to your system:
```xml
<clickhouse>
<storage_configuration>
<disks>
<default>
</default>
<hot_disk>
<path>./hot/</path>
</hot_disk>
<warm_disk>
<path>./warm/</path>
</warm_disk>
<cold_disk>
<path>./cold/</path>
</cold_disk>
</disks>
<policies>
<default>
<volumes>
<default>
<disk>default</disk>
</default>
<hot_volume>
<disk>hot_disk</disk>
</hot_volume>
<warm_volume>
<disk>warm_disk</disk>
</warm_volume>
<cold_volume>
<disk>cold_disk</disk>
</cold_volume>
</volumes>
</default>
</policies>
</storage_configuration>
</clickhouse>
```
2. The configuration above refers to three disks that point to folders that ClickHouse can read from and write to. Volumes can contain one or more disks - we defined a volume for each of the three disks. Let's view the disks:
```sql
SELECT name, path, free_space, total_space
FROM system.disks
```
```
ββnameβββββββββ¬βpathββββββββββββ¬βββfree_spaceββ¬ββtotal_spaceββ
β cold_disk β ./data/cold/ β 179143311360 β 494384795648 β
β default β ./ β 179143311360 β 494384795648 β
β hot_disk β ./data/hot/ β 179143311360 β 494384795648 β
β warm_disk β ./data/warm/ β 179143311360 β 494384795648 β
βββββββββββββββ΄βββββββββββββββββ΄βββββββββββββββ΄βββββββββββββββ
```
3. Next, let's verify the volumes:
```sql
SELECT
volume_name,
disks
FROM system.storage_policies
```
```
ββvolume_nameββ¬βdisksββββββββββ
β default β ['default'] β
β hot_volume β ['hot_disk'] β
β warm_volume β ['warm_disk'] β
β cold_volume β ['cold_disk'] β
βββββββββββββββ΄ββββββββββββββββ
```
4. Now we will add a `TTL` rule that moves the data between the hot, warm, and cold volumes:
```sql
ALTER TABLE my_table
MODIFY TTL
trade_date TO VOLUME 'hot_volume',
trade_date + INTERVAL 2 YEAR TO VOLUME 'warm_volume',
trade_date + INTERVAL 4 YEAR TO VOLUME 'cold_volume';
```
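The same kind of rule can also target individual disks directly with `TO DISK`; for example (still using the hypothetical `my_table`):

```sql
ALTER TABLE my_table
    MODIFY TTL
        trade_date TO DISK 'hot_disk',
        trade_date + INTERVAL 2 YEAR TO DISK 'warm_disk',
        trade_date + INTERVAL 4 YEAR TO DISK 'cold_disk';
```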
5. The new `TTL` rule should materialize, but you can force it to make sure:
```sql
ALTER TABLE my_table
MATERIALIZE TTL
```
6. Verify your data has moved to its expected disks using the `system.parts` table:
```
Using the system.parts table, view which disks the parts are on for the crypto_prices table:
SELECT
name,
disk_name
FROM system.parts
WHERE (table = 'my_table') AND (active = 1)
```
The response will look like:
```
ββnameβββββββββ¬βdisk_nameββ
β all_1_3_1_5 β warm_disk β
β all_2_2_0 β hot_disk β
βββββββββββββββ΄ββββββββββββ
```
|
| Readable Markdown | ## Overview of TTL[β](https://clickhouse.com/docs/guides/developer/ttl#overview-of-ttl "Direct link to Overview of TTL")
TTL (time-to-live) refers to the capability of having rows or columns moved, deleted, or rolled up after a certain interval of time has passed. While the expression "time-to-live" sounds like it only applies to deleting old data, TTL has several use cases:
- Removing old data: no surprise, you can delete rows or columns after a specified time interval
- Moving data between disks: after a certain amount of time, you can move data between storage volumes - useful for deploying a hot/warm/cold architecture
- Data rollup: roll up your older data into useful aggregations and computations before deleting it
Note
TTL can be applied to entire tables or specific columns.
## TTL syntax[β](https://clickhouse.com/docs/guides/developer/ttl#ttl-syntax "Direct link to TTL syntax")
The `TTL` clause can appear after a column definition and/or at the end of the table definition. Use the `INTERVAL` clause to define a length of time; the TTL expression must be based on a `Date` or `DateTime` column. For example, the following table has two columns with `TTL` clauses:
```
CREATE TABLE example1 (
timestamp DateTime,
x UInt32 TTL timestamp + INTERVAL 1 MONTH,
y String TTL timestamp + INTERVAL 1 DAY,
z String
)
ENGINE = MergeTree
ORDER BY tuple()
```
- The `x` column has a time to live of 1 month from the `timestamp` column
- The `y` column has a time to live of 1 day from the `timestamp` column
- When the interval lapses, the column expires. ClickHouse replaces the column value with the default value of its data type. If all the column values in the data part expire, ClickHouse deletes the column from the data part in the filesystem.
Best practice
When using table-level TTL to remove old rows, we recommend **partitioning your table by the date or month** of the same time field used in your TTL expression.
ClickHouse can drop entire partitions much more efficiently than it can delete individual rows. When your partition key aligns with your TTL expression, ClickHouse can drop whole partitions as they expire, rather than rewriting data parts to remove expired rows.
Choose your partition granularity based on your TTL period:
- For TTL of days/weeks: partition by day using `toYYYYMMDD(date_field)`
- For TTL of months/years: partition by month using `toYYYYMM(date_field)` or `toStartOfMonth(date_field)`
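A minimal sketch of this best practice (the table and column names here are hypothetical, not from the examples above): a daily partition key derived from the same time field as the TTL expression, so expired data can be dropped a partition at a time.

```sql
-- Hypothetical table: rows expire after 30 days, and the partition
-- key (day) is derived from the same column as the TTL expression,
-- so ClickHouse can drop whole expired partitions instead of
-- rewriting parts to delete individual rows.
CREATE TABLE events_daily
(
    event_time DateTime,
    event String
)
ENGINE = MergeTree
PARTITION BY toYYYYMMDD(event_time)
ORDER BY event_time
TTL event_time + INTERVAL 30 DAY;
```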
## Triggering TTL events[β](https://clickhouse.com/docs/guides/developer/ttl#triggering-ttl-events "Direct link to Triggering TTL events")
Deleting or aggregating expired rows is not immediate - it only occurs during part merges. If a table is not actively merging (for whatever reason), two settings control how often TTL events are triggered:
- `merge_with_ttl_timeout`: the minimum delay in seconds before repeating a merge with delete TTL. The default is 14400 seconds (4 hours).
- `merge_with_recompression_ttl_timeout`: the minimum delay in seconds before repeating a merge with recompression TTL (rules that roll up data before deleting). The default is 14400 seconds (4 hours).
So by default, your TTL rules will be applied to your table at least once every 4 hours. Just modify the settings above if you need your TTL rules applied more frequently.
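`merge_with_ttl_timeout` is a MergeTree-level setting, so it can be adjusted per table. As a sketch (reusing the `example1` table from above), lowering it to one hour:

```sql
-- Consider delete-TTL merges for this table roughly every hour
-- instead of the default 14400 seconds (4 hours)
ALTER TABLE example1 MODIFY SETTING merge_with_ttl_timeout = 3600;
```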
Note
Not a great solution (or one that we recommend you use frequently), but you can also force a merge using `OPTIMIZE`:
```
OPTIMIZE TABLE example1 FINAL
```
`OPTIMIZE` initiates an unscheduled merge of the parts of your table, and `FINAL` forces reoptimization even if the table is already a single part.
## Removing rows[β](https://clickhouse.com/docs/guides/developer/ttl#removing-rows "Direct link to Removing rows")
To remove entire rows from a table after a certain amount of time, define the TTL rule at the table level:
```
CREATE TABLE customers (
timestamp DateTime,
name String,
balance Int32,
address String
)
ENGINE = MergeTree
ORDER BY timestamp
TTL timestamp + INTERVAL 12 HOUR
```
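A table-level TTL is not fixed at creation time. If requirements change, you can modify or drop the rule later; a sketch using the `customers` table above:

```sql
-- Shorten the expiry window from 12 hours to 6 hours
ALTER TABLE customers MODIFY TTL timestamp + INTERVAL 6 HOUR;

-- Or drop the table-level TTL rule entirely (existing rows are kept)
ALTER TABLE customers REMOVE TTL;
```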
Additionally, it is possible to define a TTL rule based on a row's values by specifying a `WHERE` condition. Multiple conditions are allowed:
```
CREATE TABLE events
(
`event` String,
`time` DateTime,
`value` UInt64
)
ENGINE = MergeTree
ORDER BY (event, time)
TTL time + INTERVAL 1 MONTH DELETE WHERE event != 'error',
time + INTERVAL 6 MONTH DELETE WHERE event = 'error'
```
## Removing columns[β](https://clickhouse.com/docs/guides/developer/ttl#removing-columns "Direct link to Removing columns")
Instead of deleting the entire row, suppose you want just the `balance` and `address` columns to expire. Let's modify the `customers` table and add a TTL of 2 hours to both columns:
```
ALTER TABLE customers
MODIFY COLUMN balance Int32 TTL timestamp + INTERVAL 2 HOUR,
MODIFY COLUMN address String TTL timestamp + INTERVAL 2 HOUR
```
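A column-level TTL can also be removed later without touching the column's data, using `MODIFY COLUMN ... REMOVE TTL`:

```sql
-- Drop the TTL on the balance column; the column itself remains
ALTER TABLE customers MODIFY COLUMN balance REMOVE TTL;
```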
## Implementing a rollup[β](https://clickhouse.com/docs/guides/developer/ttl#implementing-a-rollup "Direct link to Implementing a rollup")
Suppose we want to delete rows after a certain amount of time but hang on to some of the data for reporting purposes. We don't want all the details - just a few aggregated results of historical data. This can be implemented by adding a `GROUP BY` clause to your `TTL` expression, along with some columns in your table to store the aggregated results.
Suppose in the following `hits` table we want to delete old rows but, before removing them, keep the sum and maximum of the `hits` column. We will need fields to store those values, and we will need to add a `GROUP BY` clause to the `TTL` clause that rolls up the sum and maximum:
```
CREATE TABLE hits (
timestamp DateTime,
id String,
hits Int32,
max_hits Int32 DEFAULT hits,
sum_hits Int64 DEFAULT hits
)
ENGINE = MergeTree
PRIMARY KEY (id, toStartOfDay(timestamp), timestamp)
TTL timestamp + INTERVAL 1 DAY
GROUP BY id, toStartOfDay(timestamp)
SET
max_hits = max(max_hits),
sum_hits = sum(sum_hits);
```
Some notes on the `hits` table:
- The `GROUP BY` columns in the `TTL` clause must be a prefix of the `PRIMARY KEY`, and we want to group our results by the start of the day. Therefore, `toStartOfDay(timestamp)` was added to the primary key
- We added two fields to store the aggregated results: `max_hits` and `sum_hits`
- Setting the default value of `max_hits` and `sum_hits` to `hits` is necessary for our logic to work, based on how the `SET` clause is defined
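One consequence of this design is that a part may hold both raw rows (the TTL has not fired yet) and rolled-up rows (after a merge). Aggregating over `max_hits` and `sum_hits`, rather than `hits`, gives correct results in either state - a sketch query:

```sql
-- Before rollup, max_hits and sum_hits default to hits, so these
-- aggregates reduce to max(hits) and sum(hits); after rollup they
-- combine the pre-aggregated values.
SELECT
    id,
    toStartOfDay(timestamp) AS day,
    max(max_hits) AS max_hits,
    sum(sum_hits) AS sum_hits
FROM hits
GROUP BY id, day;
```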
## Implementing a hot/warm/cold architecture[β](https://clickhouse.com/docs/guides/developer/ttl#implementing-a-hotwarmcold-architecture "Direct link to Implementing a hot/warm/cold architecture")
Not supported in ClickHouse Cloud
Note
If you're using ClickHouse Cloud, the steps in this section aren't applicable - you don't need to worry about moving old data between disks in ClickHouse Cloud.
A common practice when working with large amounts of data is to move that data around as it gets older. Here are the steps for implementing a hot/warm/cold architecture in ClickHouse using the `TO DISK` and `TO VOLUME` clauses of the `TTL` command. (It doesn't have to be hot/warm/cold tiers - you can use TTL moves to relocate data for whatever use case you have.)
1. The `TO DISK` and `TO VOLUME` options refer to the names of disks or volumes defined in your ClickHouse configuration files. Create a new file named `my_system.xml` (or any file name) that defines your disks, then define volumes that use your disks. Place the XML file in `/etc/clickhouse-server/config.d/` to have the configuration applied to your system:
```
<clickhouse>
<storage_configuration>
<disks>
<default>
</default>
<hot_disk>
<path>./hot/</path>
</hot_disk>
<warm_disk>
<path>./warm/</path>
</warm_disk>
<cold_disk>
<path>./cold/</path>
</cold_disk>
</disks>
<policies>
<default>
<volumes>
<default>
<disk>default</disk>
</default>
<hot_volume>
<disk>hot_disk</disk>
</hot_volume>
<warm_volume>
<disk>warm_disk</disk>
</warm_volume>
<cold_volume>
<disk>cold_disk</disk>
</cold_volume>
</volumes>
</default>
</policies>
</storage_configuration>
</clickhouse>
```
1. The configuration above defines a default disk plus three disks that point to folders ClickHouse can read from and write to. A volume can contain one or more disks - here we defined a volume for each of the three tiered disks. Let's view the disks:
```
SELECT name, path, free_space, total_space
FROM system.disks
```
```
ββnameβββββββββ¬βpathββββββββββββ¬βββfree_spaceββ¬ββtotal_spaceββ
β cold_disk β ./data/cold/ β 179143311360 β 494384795648 β
β default β ./ β 179143311360 β 494384795648 β
β hot_disk β ./data/hot/ β 179143311360 β 494384795648 β
β warm_disk β ./data/warm/ β 179143311360 β 494384795648 β
βββββββββββββββ΄βββββββββββββββββ΄βββββββββββββββ΄βββββββββββββββ
```
1. Let's also verify the volumes:
```
SELECT
volume_name,
disks
FROM system.storage_policies
```
```
ββvolume_nameββ¬βdisksββββββββββ
β default β ['default'] β
β hot_volume β ['hot_disk'] β
β warm_volume β ['warm_disk'] β
β cold_volume β ['cold_disk'] β
βββββββββββββββ΄ββββββββββββββββ
```
1. Now we will add a `TTL` rule that moves the data between the hot, warm and cold volumes:
```
ALTER TABLE my_table
MODIFY TTL
trade_date TO VOLUME 'hot_volume',
trade_date + INTERVAL 2 YEAR TO VOLUME 'warm_volume',
trade_date + INTERVAL 4 YEAR TO VOLUME 'cold_volume';
```
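The same tiering rules can also be declared when the table is created, provided the table uses a storage policy that contains these volumes. A sketch (the `price` column is hypothetical; the policy name matches the configuration above):

```sql
-- Declare the volume moves up front instead of via ALTER TABLE
CREATE TABLE my_table
(
    trade_date Date,
    price Float64
)
ENGINE = MergeTree
ORDER BY trade_date
TTL trade_date TO VOLUME 'hot_volume',
    trade_date + INTERVAL 2 YEAR TO VOLUME 'warm_volume',
    trade_date + INTERVAL 4 YEAR TO VOLUME 'cold_volume'
SETTINGS storage_policy = 'default';
```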
1. The new `TTL` rule should materialize on its own, but you can force it to make sure:
```
ALTER TABLE my_table
MATERIALIZE TTL
```
1. Verify your data has moved to its expected disks using the `system.parts` table:
```
SELECT
name,
disk_name
FROM system.parts
WHERE (table = 'my_table') AND (active = 1)
```
The response will look like:
```
ββnameβββββββββ¬βdisk_nameββ
β all_1_3_1_5 β warm_disk β
β all_2_2_0 β hot_disk β
βββββββββββββββ΄ββββββββββββ
``` |
| Shard | 89 (laksa) |
| Root Hash | 12633450985039531489 |
| Unparsed URL | com,clickhouse!/docs/guides/developer/ttl s443 |