Timeplus Enterprise v2.7 is now Generally Available! In our second GA release of 2025, we significantly improved the data pipeline capabilities by supporting MySQL and S3 in the core engine and introducing dictionaries for fast lookups in remote database tables or Timeplus streams. Other key features include high-performance Python User-Defined Functions (UDFs) and new UIs for cluster monitoring and materialized view troubleshooting.
Powered by our lightweight, powerful, and efficient single-binary platform, Timeplus Enterprise offers flexible deployment options. Whether fully managed in the public cloud, self-hosted in your data center, running on your laptop, or even on edge devices, our platform adapts to diverse user needs and unlocks a wide range of use cases, from handling large-scale workloads to operating in resource-constrained edge environments, capturing real-time insights where the data resides. Most importantly, Timeplus Enterprise drives down operational effort and cost with simplicity.
Our software has been rigorously tested and proven in a variety of industries and mission-critical use cases, including cybersecurity, algorithmic trading, and real-time streaming pipelines and analytics. As we shared in a recent webinar, Timeplus partnered with Salla to improve the performance and stability of thousands of pipelines in their e-commerce business.
“Timeplus enabled us to efficiently scale our data processing with minimal latency and maximum throughput, which lets us reduce bottlenecks and enhance service resilience.”

Salah Alkhwlani | CTO, Salla
“We constantly seek ways to accelerate analytics and reporting. However, handling multi-stream table JOINs on the fly is not ClickHouse’s strength. This is where Timeplus' MULTI-JOIN on stream acts as a crucial solution, minimizing duplicate processing overhead.”

Ibrahim Bakhsh | Senior Cloud Data Engineer, Salla
Today, we are excited to upgrade our offerings with the release of Timeplus Enterprise v2.7. This unified, enterprise-grade product can be installed on various infrastructures, including bare metal, Docker, and Kubernetes.
Key Breakthroughs in Timeplus Enterprise v2.7
The v2.7 release of Timeplus Enterprise introduces several groundbreaking features:
External Tables for MySQL and S3
Data Lookup with Dictionaries
Python UDF
Troubleshooting UI
Built-in Support for MySQL and S3 Read and Write
Amazon S3 is cloud object storage with industry-leading scalability, data availability, security, and performance.
In Timeplus Enterprise v2.7, we added first-class integration for S3-compatible object storage systems as a new type of external table. You can read or write data in Amazon S3, or in S3-compatible cloud or local storage.
To create an external table for S3, you can run the following DDL SQL:
CREATE EXTERNAL TABLE [IF NOT EXISTS] name
(<col_name1> <col_type1>, <col_name2> <col_type2>, ...)
PARTITION BY .. -- optional
SETTINGS
type='s3', -- required
use_environment_credentials=true|false, -- optional, default false
access_key_id='..', -- optional
secret_access_key='..', -- optional
region='..', -- required
bucket='..', -- required
read_from='..', -- optional
write_to='..', -- optional
data_format='..', -- optional
compression_method='..', -- optional
config_file='..', -- optional
endpoint='..', -- optional
...
The credentials of your AWS account can be put directly into the DDL, but you can also bind an IAM role with proper permissions if you deploy Timeplus Enterprise on an EC2 instance or Kubernetes pod. Alternatively, you can store the credentials in environment variables or in a local file; the latter can be mounted by a secrets manager such as HashiCorp Vault. Please check the integration guide here. The new config_file setting is applicable to all external streams and external tables in Timeplus, such as Kafka, Pulsar, and ClickHouse.
With the new S3 external tables, you can build data pipelines that read from Amazon MSK and send the transformed data to S3 as Parquet files. This can significantly reduce the data storage cost in MSK. The files in S3 can be read by Timeplus, or by other tools such as Athena or Spark. In next month's release, we will natively support the Apache Iceberg open table format. Stay tuned for more information.
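To make this concrete, here is a minimal sketch of such a pipeline, assuming a Kafka external stream named msk_events already exists; the table, view, bucket, and column names are all hypothetical:
CREATE EXTERNAL TABLE s3_archive
(raw string, event_time datetime64(3))
SETTINGS
    type='s3',
    use_environment_credentials=true, -- reuse the IAM role of the EC2 instance or pod
    region='us-west-2',
    bucket='my-archive-bucket',
    write_to='events/archive.parquet',
    data_format='Parquet';

-- continuously transform the MSK-backed external stream and offload it into S3
CREATE MATERIALIZED VIEW archive_to_s3
INTO s3_archive AS
SELECT raw, _tp_time AS event_time FROM msk_events;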
External Data Lookup With Dictionaries
In Timeplus Enterprise 2.7, you can create dictionaries to look up data in MySQL, ClickHouse, or another Timeplus deployment, caching the data in memory or in mutable streams. This works particularly well with streaming JOINs.
For example, you can create a dictionary to connect to a MySQL database:
CREATE DICTIONARY mysql_products_dict
(
  `id` string,
  `name` string,
  …
)
PRIMARY KEY id
SOURCE(MYSQL(DB 'test' TABLE 'products' HOST '127.0.0.1' PORT 3306 USER 'admin' PASSWORD 'my'))
LAYOUT(complex_key_direct());
You can look up the name for a specified ID by calling the dict_get function:
SELECT dict_get('mysql_products_dict', 'name', '00005');
But more commonly, you can use the dictionary in a DIRECT JOIN, e.g.:
SELECT * FROM orders_stream AS orders
JOIN mysql_products_dict AS products ON orders.product_id = products.id
SETTINGS join_algorithm = 'direct';
In this example, no matter how many unique product IDs there are, the streaming SQL checks the cached data in the dictionary to look up the name by ID. If the ID is not found in the cache, the engine queries the MySQL database and then puts the result in the cache. You can specify TTL (time-to-live) settings to control how much data is cached in memory.
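As an illustration, here is a hedged sketch of the same dictionary using an in-memory cache layout, assuming ClickHouse-style LAYOUT and LIFETIME clauses; the cell count and refresh interval are illustrative:
CREATE DICTIONARY mysql_products_cached
(
  `id` string,
  `name` string
)
PRIMARY KEY id
SOURCE(MYSQL(DB 'test' TABLE 'products' HOST '127.0.0.1' PORT 3306 USER 'admin' PASSWORD 'my'))
LAYOUT(complex_key_cache(size_in_cells 10000)) -- cache up to 10,000 entries in memory
LIFETIME(MIN 60 MAX 120); -- refresh cached entries every 60 to 120 seconds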
In Timeplus Enterprise 2.7, we also support using a mutable stream as the cache, so that customers can warm up the cache with frequently accessed dimensional data and leverage CDC (Change Data Capture) technologies to proactively update the cache when the raw data in MySQL changes.

Python UDF
Python is recognized as one of the most popular languages in the field of data science. Its flexibility as a scripting language, ease of use, and extensive range of statistical libraries make it an indispensable tool for data scientists and analysts.
Python excels in writing complex parsing and data transformation logic, especially in scenarios where SQL capabilities are insufficient. Python User-Defined Functions (UDFs) offer the flexibility to implement intricate data processing mechanisms. These include:
Custom Tokenization: Breaking down data into meaningful elements based on specific criteria.
Data Masking: Concealing sensitive data elements to protect privacy (see the sketch after this list).
Data Editing: Modifying data values according to specific rules or requirements.
Encryption Mechanisms: Applying encryption to data for security purposes.
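To make the data-masking use case concrete, here is a hedged sketch of such a UDF; the function name and masking rule are hypothetical, and it follows the same batch-in, batch-out convention as the full example below:
CREATE OR REPLACE FUNCTION mask_card(card string) RETURNS string LANGUAGE PYTHON AS
$$
def mask_card(card):
    # Timeplus passes a batch (list) of values and expects a list of results back.
    # Keep the last 4 characters of each value and mask the rest.
    return ['*' * (len(c) - 4) + c[-4:] if len(c) > 4 else c for c in card]
$$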
Timeplus Enterprise 2.7 is the first release with a public preview of Python UDFs, for the Linux x86_64 platform. Unlike other databases that support defining functions in Python, the Timeplus core engine embeds a CPython runtime to execute Python code directly, without a bridge call to an external Python process.

This not only simplifies the deployment of Timeplus Enterprise, but also significantly improves the performance of Python UDFs. For example, with the traditional bridge-based solution calling a local Python process, our baseline was 2 million events per second. With the embedded CPython runtime in the Timeplus core engine, we achieved 12 million events per second with the same Python code.
With the rich ecosystem of Python, you can build new SQL functions to greatly extend Timeplus Enterprise. For example, the following SQL creates a Python UDF that calls OpenAI to answer arbitrary questions:
CREATE OR REPLACE FUNCTION ask_ai(question string) RETURNS string LANGUAGE PYTHON AS
$$
import os
from openai import OpenAI

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def ask_ai(question):
    # The UDF receives a batch (list) of values and returns a list of results
    res = []
    for q in question:
        try:
            chat_completion = client.chat.completions.create(
                messages=[{
                    "role": "user",
                    "content": q
                }],
                model="gpt-3.5-turbo")
            res.append(chat_completion.choices[0].message.content)
        except Exception as e:
            res.append(str(e))
    return res
$$
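Once created, the UDF can be called like any built-in SQL function; the question text below is illustrative:
SELECT ask_ai('What is streaming SQL?');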
Troubleshooting UI
Based on feedback from Salla and other customers, we built advanced state monitoring for materialized views and streams in Timeplus Enterprise 2.7. This greatly improves productivity for customers managing thousands of streaming pipelines. With the new UI, data teams can intuitively see which materialized views are facing high workloads, and what the Kafka offset or internal sequence number is on each node.



Other Enhancements
Support IAM authentication for accessing Amazon MSK: Avoid storing static credentials in Kafka external streams by setting sasl_mechanism to AWS_MSK_IAM (see the sketch after this list).
Kafka metadata: Read the header key/value pairs of Kafka messages.
PostgreSQL and MySQL CDC via Redpanda Connect: Timeplus Enterprise now supports CDC (Change Data Capture) for PostgreSQL and MySQL databases via Redpanda Connect. This feature enables real-time data ingestion from these databases into Timeplus.
Mutable stream delete: You can now delete data from mutable streams with the DELETE SQL command.
Historical data replay: Replay historical data in local streams or Kafka external streams with the replay_speed setting.
Multiple database namespaces: In the web console, you can choose a non-default database and manage the streams and materialized views in that namespace.
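As referenced in the first item above, here is a minimal sketch of a Kafka external stream using IAM authentication for MSK; the broker address, topic, and column name are hypothetical:
CREATE EXTERNAL STREAM msk_events (raw string)
SETTINGS
    type='kafka',
    brokers='b-1.mycluster.kafka.us-west-2.amazonaws.com:9098',
    topic='events',
    security_protocol='SASL_SSL',
    sasl_mechanism='AWS_MSK_IAM';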
Learn More
To explore the capabilities of Timeplus Enterprise v2.7, please check out https://docs.timeplus.com/enterprise-v2.7 for the installation guide as well as the change logs.
Thank you for being a part of our journey. We look forward to your continued support and collaboration as we move towards a future powered by real-time streaming analytics.
Ready to try Timeplus Enterprise? Try it free for 30 days.
Join our Timeplus Community! Connect with other users or get support in our Slack community.