Why we created new pricing for Developer plans
Last week, Tinybird launched a new pricing model for Developer plans. Here's a deep dive into the reasoning behind the new pricing and how it helps developers ship faster.

At Tinybird, we've always been committed to making real-time analytics accessible to developers worldwide.
Recently, we announced significant changes to our self-serve paid plans, moving from pricing based on processed data to pricing based on infrastructure usage. I wanted to share the thinking behind this change and why we believe it's a crucial step in helping our users build and ship faster.
The challenge with pricing on processed data
Our previous pricing model was based on processed data, which meant that the cost was directly tied to how much data your queries processed. We originally chose this pricing model because it effectively gave our users a "pure serverless" analytics database.
The thinking behind this choice was simple: give people access to compute without requiring them to plan or size that compute themselves. They could decide whether they wanted to go fast (and pay more if necessary) or control their costs (and spend more time optimizing), and we aligned our pricing with that choice, including its benefits and tradeoffs.
However, it turned out the choice wasn't that simple for many users. While this model worked well for some, we noticed three recurring patterns that were holding many of our users back:
1. Fear of unexpected bill spikes
Many users expressed anxiety about potential cost spikes due to inefficient queries or sudden traffic increases. This fear of "bill shock" prevented teams from fully embracing real-time analytics.
We spent a lot of time at the end of every month reaching out to people and preparing them for the bill shock they were about to receive, teaching them how to optimize, and handing out coupons to try to take the edge off these spikes.
2. Too much time spent optimizing
With the processed data model, many Tinybird users were spending too much time optimizing queries to reduce processed data, instead of shipping new APIs.
The core value of Tinybird is the ability to rapidly create a scalable analytics API, and those APIs are incredibly performant out of the box without any optimization. This value is especially apparent when you're just starting out, when the amount of data you store is typically still small.
While query optimization is important as you scale, we were making it our customers' primary focus too early (and spending quite a bit of our own engineering time helping design and implement those optimizations), instead of making it easy for users to validate ideas and ship their first version.
3. Difficulty estimating costs
The concept of processed data proved challenging for many users to grasp and predict, and it heavily penalized developers less experienced in working with large amounts of data. This made it hard for teams to budget effectively and plan for scale.
Why we shifted to infrastructure-based pricing
Aware of the problems that our pricing model was causing, we dug into possible solutions. Our goal was clear: develop a pricing model that better aligned with our core mission to help developers ship fast.
We spent many hours analyzing usage patterns, talking to customers, and brainstorming different pricing models and ideas. We were already billing many enterprise customers on an infrastructure-based model, and we determined that some of the lessons we learned from operating these customers on dedicated clusters could also be applied to our shared infrastructure and self-serve paid customers.
We knew we didn't want to completely eliminate the usage basis for our pricing. People don't choose Tinybird because they want access to a stable, dedicated cluster; they choose it because they want to move fast and pay as they scale.
As you may have seen in last week's announcement, we ultimately chose a pricing model based on vCPU usage. Developers get an allotment of vCPU-hours to consume each month, with built-in mechanisms to autoscale for bursty or spiky traffic.
The new vCPU-based pricing model is still based on resource consumption to a degree, but it addresses the above challenges in several ways:
1. Costs are more predictable
By basing pricing on infrastructure (an allotment of vCPU-hours) rather than processed data, we've made costs more predictable and easier to grasp. You know exactly how much computing time you get each month and how much it will cost, regardless of how you use it (ingestion, querying, materialization, etc.).
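To make the arithmetic concrete, here's a minimal back-of-the-envelope sketch in Python. The plan allotment and utilization figures are hypothetical, chosen purely for illustration; the point is that a monthly vCPU-hour budget maps directly onto how many vCPUs you can keep busy, and for how long.

```python
# Back-of-the-envelope math for a monthly vCPU-hour allotment.
# The plan size and utilization below are hypothetical, not Tinybird's actual pricing.

HOURS_PER_MONTH = 730  # ~30.4 days on average

def sustained_vcpus(monthly_vcpu_hours: float) -> float:
    """How many vCPUs you could keep busy 24/7 within the allotment."""
    return monthly_vcpu_hours / HOURS_PER_MONTH

def vcpu_hours_used(avg_vcpus: float, hours: float = HOURS_PER_MONTH) -> float:
    """vCPU-hours consumed by an average utilization sustained over a period."""
    return avg_vcpus * hours

plan_allotment = 1_500  # hypothetical monthly vCPU-hours
print(f"{sustained_vcpus(plan_allotment):.2f} vCPUs, 24/7")  # ~2.05 vCPUs running continuously
print(f"{vcpu_hours_used(avg_vcpus=1.2):.0f} vCPU-hours")    # ~876, comfortably within the allotment
```

In other words, as long as your average utilization stays under the sustained capacity implied by your plan, your bill doesn't move, regardless of how much data each individual query scans.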
2. Less pressure to optimize too early
One of the most important aspects of the new model is that it removes the constant pressure to optimize queries. While optimization is still valuable, you can now focus on shipping features first and optimize when it makes sense for your business.
3. Better alignment with value
Infrastructure-based pricing is something developers are already familiar with from working with cloud providers. We believe it's easier to understand and plan for, making it simpler to calculate an ROI and make informed decisions about scaling.
Learning from our Enterprise customers
This change wasn't made in a vacuum. We had moved our Enterprise customers to infrastructure-based pricing several months earlier, and the feedback was overwhelmingly positive. Teams reported better cost predictability and, more importantly, faster development cycles. We wanted to bring these same benefits to our Developer plans while recognizing that not everybody needs dedicated infra priced purely on cluster sizing.
Looking ahead
While we're confident in this direction, we recognize that pricing changes are complex, especially for a flexible product like Tinybird that serves a diverse array of use cases.
We've built in several features to support this transition:
- Improved usage monitoring in the UI
- Autoscaling capabilities to handle traffic spikes
- Burst mode for temporary high-CPU utilization (see the conceptual sketch after this list)
- Flexible plan sizes to match different needs
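To give a feel for what burst capacity means conceptually, here's a minimal token-bucket-style sketch. It's a generic illustration of the idea, not Tinybird's actual autoscaling or burst implementation, and all the numbers are made up: sustained usage refills headroom at a steady rate, and brief spikes draw that headroom down without forcing an upgrade.

```python
# Generic token-bucket sketch of burst capacity. This illustrates the concept only;
# it is not how Tinybird implements autoscaling or burst mode, and the numbers are made up.

class BurstBudget:
    def __init__(self, sustained_rate: float, burst_capacity: float):
        self.sustained_rate = sustained_rate  # units of work allowed per second, on average
        self.capacity = burst_capacity        # extra headroom available for brief spikes
        self.tokens = burst_capacity          # start with a full burst budget

    def allow(self, demand: float, elapsed_seconds: float) -> bool:
        """Refill headroom at the sustained rate, then check whether the demand fits."""
        self.tokens = min(self.capacity, self.tokens + self.sustained_rate * elapsed_seconds)
        if demand <= self.tokens:
            self.tokens -= demand
            return True
        return False

budget = BurstBudget(sustained_rate=10, burst_capacity=100)
print(budget.allow(demand=60, elapsed_seconds=1))  # True: a brief spike fits within the headroom
print(budget.allow(demand=60, elapsed_seconds=1))  # False: sustained demand above the steady rate does not
```

The takeaway is that short spikes get absorbed, while it's sustained demand above the plan's steady rate that signals it's time to scale up.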
Still, we're committed to learning and adapting based on how this model works in practice.
We're actively monitoring these changes and their impact on our users, and we are open to adjusting as we learn more.
For example, based on the feedback we've already received, we're doing the following:
- Evaluating QPS limits. We are aware that with the new pricing, the QPS limits have been the driving factor in price increases for some of our users. We are considering raising the limits for each plan.
- Introducing QPS bursts. One main point of feedback was that hard rate-limiting queries beyond the QPS limit would hurt the end-user experience, so we intend to introduce QPS bursting (as we already do with vCPU) so that you only have to upgrade when you are intentionally scaling, not because you're blocked by a brief usage spike.
- Improving the cost estimator. Perhaps the biggest point of feedback was the shock that some users experienced with the cost estimation tool in the Tinybird UI. The cost estimator previously used max QPS over the last 30 days as the basis for recommending a new plan. With the QPS limit changes above, we will update the cost estimator to use AVG/P95/P99 QPS instead of max QPS as the basis for the recommendation (the sketch below illustrates why this matters).
Our primary goal here is to help you migrate to a plan that makes sense for your level of scale, not put you on the largest possible plan. We think these three changes will help give you the flexibility to upgrade only when it is aligned with your growth.
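To illustrate why the sizing basis matters, here's a minimal sketch over a synthetic traffic trace. The traffic numbers and the percentile helper are illustrative only, not the actual estimator logic: a handful of brief spikes dominate the max, while the P95/P99 stay close to the traffic you actually sustain, so a max-based recommendation pushes you toward a much larger plan than your steady-state usage warrants.

```python
# Illustrative comparison of max vs. percentile QPS as a plan-sizing signal.
# The traffic trace is synthetic and the logic is a sketch, not the actual Tinybird cost estimator.
import random

random.seed(42)

# Synthetic 30 days of per-minute QPS: a steady baseline with a handful of brief spikes.
qps = [random.gauss(8, 2) for _ in range(30 * 24 * 60)]
for i in random.sample(range(len(qps)), 20):
    qps[i] += 80  # rare, short-lived bursts

def percentile(values, p):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

max_qps = max(qps)
p95_qps = percentile(qps, 95)
p99_qps = percentile(qps, 99)

# Sizing on max QPS recommends capacity for the rare spike;
# sizing on P95/P99 reflects the traffic you actually sustain.
print(f"max: {max_qps:.0f}  p95: {p95_qps:.0f}  p99: {p99_qps:.0f}")
```

With an AVG/P95/P99 basis, those rare spikes are better handled by bursting than by driving the plan recommendation upward.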
If you have additional feedback about the new pricing model or want to discuss your specific use case, please reach out to us at support@tinybird.co.
Our commitment
Our goal hasn't changed since we started Tinybird over 5 years ago. We exist to give developers tooling and infra that make it fast and easy to ship features with large amounts of real-time data.
This pricing change reflects and reinforces that goal. We believe that by removing the anxiety around cost spikes and the pressure to constantly optimize, we're creating an environment where developers can focus on building and shipping great products.
The future of software development requires working with large amounts of real-time data, and we want to make sure nothing stands in the way of developers building that future.
One last thing...
As you may have intuited if you saw our "Tinybird Local" announcement, some new and exciting improvements to our developer experience are coming soon. We hope these new Developer plans will make it easier for you to get started with Tinybird, especially as those improvements roll out. If you have any feedback on the new local experience, please email us or find us in the Tinybird Slack community.