Less ceremony, more shipping
Deployments in data finally get their due. With tb deploy, live schema migrations happen painlessly and automatically.

In the data world, the concept of "deployments" is still finding its footing. Tools like dbt and Prisma have made impressive strides in bringing deployment rigor to data workflows, but for most teams, data deployment still means relying on app-level processes or cobbling together scripts to run SQL migrations.
It's a fragmented landscape that leaves too much room for error. Developer productivity is precious, yet most of us spend more time wrestling with our tooling than actually building cool stuff. Changing production data schemas is especially painful. You tweak one column type and suddenly your entire data pipeline catches fire, backfills drag on for hours, and somehow three days of data vanish into thin air.
Tinybird Forward's tb deploy builds on the foundation that tools like dbt started, bringing even more of the simplicity and reliability of software deployments into the data world, so you can spend less time firefighting and more time shipping.
Software engineers have long enjoyed clean, reliable deployment workflows. Version control. Staging environments. Automated testing. Atomic deployments. Data teams (and software engineers doing data team things) deserve the same.
With Tinybird Forward, we're borrowing the best practices from software engineering and applying them to data workflows, so you can finally treat your data infrastructure with the same rigor as your application code.
Start using it by installing the new CLI:

```bash
curl https://tinybird.co | sh
```
How to use tb deploy
We've created a streamlined deployment command that abstracts away the complexity of deploying changes to your data applications:
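```bash
tb deploy
```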
This one command handles the entire deployment lifecycle. No more context-switching between operations or managing separate deployment steps. Just build your changes, run tb deploy, and let Tinybird handle the rest.
Why tb deploy makes your life better
The tb deploy command isn't just a convenience; it's a fundamental rethinking of how data deployments should work. Here's why it transforms your workflow:
- No cognitive overhead: No more mental checklists of deployment steps or worrying about the order of operations. Just write your code and ship it with a single command.
- Pre-deployment validation: With tb deploy --check, you can validate your deployment beforehand, catching potential issues early and ensuring smooth, error-free deployments.
- Fail-safe deployments: Built-in protections prevent the most common pitfalls of schema changes. Destructive operations require explicit confirmation, you can test ingestion and reads with staging deployments, and you can discard them if something goes wrong with tb deployment discard (see the sketch after this list).
- Continuous data flow: Even during complex schema migrations, your data pipelines keep running. New data continues flowing while backfills happen in the background: no downtime, no data loss.
- CI/CD-ready: Automate your entire workflow with a deployment command that works perfectly in continuous integration pipelines. From GitHub Actions to Jenkins to GitLab CI, tb deploy fits right in.
- Automatic backfills: Schema changes in materialized views trigger automatic backfills, seamlessly migrating your historical data without manual intervention or downtime.
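To make that flow concrete, here is a minimal sketch that chains the commands named above; authentication and workspace setup are omitted:

```bash
tb deploy --check        # validate the deployment plan without applying it
tb deploy                # run the deployment
tb deployment discard    # if something looks wrong, revert to the previous state
```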
Schema evolution with FORWARD_QUERY
One of the hardest parts of deploying changes to data applications is handling backward-incompatible schema evolution. What happens when you need to change a column type, add a new field, or modify your sorting keys?
In Tinybird Forward, we've introduced the FORWARD_QUERY instruction to make this process seamless. This powerful feature allows you to transform your data from the old schema to the new one during deployment.
Let's say you need to change your session_id column from a String to a UUID type. Instead of dealing with complex migration scripts, you simply:
1. Update your schema in the .datasource file
2. Add a FORWARD_QUERY instruction that tells Tinybird how to transform the data:
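Here is a sketch of what the updated .datasource file could look like; the column list, engine settings, and the exact cast are illustrative rather than taken from a real project:

```
SCHEMA >
    `timestamp` DateTime,
    `session_id` UUID

ENGINE "MergeTree"
ENGINE_SORTING_KEY "timestamp"

FORWARD_QUERY >
    SELECT timestamp, toUUID(session_id) AS session_id
```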
When you run tb deploy, Tinybird automatically:
- Creates a new table with the updated schema
- Executes your FORWARD_QUERY to transform and populate the data
- Handles all the complexity of migrating your data
What's really happening under the hood?
When you run tb deploy, Tinybird performs a sophisticated orchestration to ensure zero downtime and data consistency during schema migrations.
Multiple tables working together
Tinybird deployments work by using multiple tables for a single data source:
- Auxiliary tables for new data: When you start a deployment, Tinybird creates auxiliary tables that will receive new incoming data while maintaining your production environment.
- Parallel processing: While your production system continues to run, Tinybird works in the background to:
  - Run a backfill process (using Populate) to transform historical data using your Forward Query
  - Create materialized views with your new schema for ongoing real-time ingestion
- Unified reading experience: During the transition, a View with a UNION operation combines the main and auxiliary tables, ensuring all data is always available for your queries without interruption. We create two Views with UNION operations: one that you can keep using in your Live deployment and another one with the changes for the new deployment (see the sketch below).
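As a rough illustration of such a transition view, consider the SQL sketch below; the table names are invented for the example and are not what Tinybird creates internally:

```sql
-- One logical relation over two physical tables: the original table and the
-- auxiliary table that receives newly ingested rows during the migration.
CREATE VIEW events_unified AS
SELECT * FROM events_main
UNION ALL
SELECT * FROM events_aux
```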
The deployment lifecycle
The tb deploy process follows these stages:
- Initialization: Tinybird analyzes your schema changes and prepares the deployment strategy, calculating which new tables must be created and which data migrations are needed. For instance, a simple endpoint change needs no data migration, but changing the sorting key of a landing data source requires migrating the full table.
- Data migration: Historical data is transformed and backfilled through a Populate process using your Forward Query. Meanwhile, new incoming data is captured in both original and transformed formats. Deployment backfills leverage the same underlying technology as Populates, which means they can take as long as Populates (up to 48 hours) for large data transformations. We've chosen this approach deliberately to focus on consistent performance optimization across our platform rather than maintaining multiple ways of doing the same thing.
- Promotion: When everything is ready, a simple metadata change points to the new, migrated tables. This is nearly instantaneous, so any service disruption is negligible.
- Cleanup: After successful promotion, the previous deployment tables are removed, and we merge back the tables with historical and real-time data.
Discard safety
One of the most powerful aspects of this architecture is that discards are possible at any time during deployment. Since we maintain two versions of the current ongoing ingestion—one without transformation and another with the Forward Query applied—you can always revert to the previous state if needed before promoting.
FAQ about Tinybird deployments
What happens if a deployment fails?
If a deployment fails, tb deploy will provide clear error messages about what went wrong. The previous deployment remains active, so there's no production impact. You can fix the issues and try again, or use tb deployment discard if needed.
Can I see what will be deployed before executing it?
Yes, use tb deploy --check to get a detailed report of changes that would be made without actually applying them.
How does this work with CI/CD pipelines?
Like a dream :) Here are our suggested CI/CD workflows.
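As an illustrative sketch only, a GitHub Actions job could run the same commands shown in this post; the workflow structure and secret handling here are assumptions, so check the suggested workflows for the real setup:

```yaml
name: tinybird-deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install the Tinybird CLI
        run: curl https://tinybird.co | sh
      # Authentication (e.g., exporting a workspace token) is omitted here.
      - name: Validate the deployment
        run: tb deploy --check
      - name: Deploy
        run: tb deploy
```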
Can I start ingesting to new tables without promoting them?
Yes, we have included a specific API parameter in the Events API for this: __tb_min_deployment. This allows you to start sending data to your new schema while the deployment is still in staging, ensuring no data loss during the transition period and reducing coordination friction.
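As a sketch, an Events API call pinned to a minimum deployment might look like the following; the data source name, payload, token variable, and the shape of the deployment identifier are all assumptions for illustration:

```bash
# Append __tb_min_deployment to a standard Events API request.
curl -X POST \
  "https://api.tinybird.co/v0/events?name=events&__tb_min_deployment=<deployment_id>" \
  -H "Authorization: Bearer $TB_TOKEN" \
  -d '{"timestamp": "2025-01-01T00:00:00Z", "session_id": "..."}'
```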
Limitations and considerations
- Destructive operations require explicit permission via the --allow-destructive-operations flag (see the example below)
- To speed up schema changes on data sources holding large amounts of data (e.g., multiple terabytes), we are working on different strategies that do not require a backfill
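For example, a deployment that drops a column would only go through with the flag passed explicitly; this invocation is a sketch:

```bash
tb deploy --allow-destructive-operations
```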
The future of deployments in Forward
We're just getting started with tb deploy, and we have an exciting roadmap ahead:
- Options to avoid data migrations: Right now, we always migrate data if there have been changes to a data source. But if the table has a short TTL, a user might prefer to skip the migration, wait until the TTL passes, and apply the needed transformations at read time.
- Enhanced observability: Better insights into deployment progress and data migration status.
- Validation: Once the backfill is complete, Tinybird will verify data consistency between the old and new schemas.