Why Can’t I Reduce Cosmos DB Throughput After Scaling Up? (Physical Partitions / Minimum RU Limits)

Jamey Schaap 0 Reputation points
2025-12-10T08:32:06.92+00:00

For testing purposes, I scaled the throughput of one of our Cosmos DB databases from 200–2,000 RU/s up to 15,000–150,000 RU/s. Unfortunately, I'm now unable to scale it back down below 15,000 RU/s. My assumption is that this happened because the database now has multiple physical partitions (based on the minimum throughput requirements). Is that correct? (https://dori-uw-1.kuma-moon.com/en-us/azure/cosmos-db/concepts-limits#minimum-throughput-on-container)

Are there any ways to scale it back down to the ~200–2,000 RU range? I noticed the “Partition merge (preview)” feature, would that allow the container to merge partitions so the minimum throughput can be lowered?

Azure Cosmos DB
An Azure NoSQL database service for app development.

3 answers

  1. Q&A Assist
    2025-12-10T08:32:22.0133333+00:00
    AI generated content. This answer was created with AI from Q&A Assist.

    Scaling up throughput in Azure Cosmos DB can create additional physical partitions, because each physical partition serves only a bounded amount of throughput. Once those partitions exist, the minimum RU/s you can set is tied to the highest RU/s ever provisioned: for autoscale, the lowest max RU/s you can set is the highest max RU/s ever provisioned divided by 10 (for standard provisioned throughput, divided by 100), along with a storage-based floor. Since you scaled up to a 150,000 RU/s autoscale maximum, the lowest maximum you can now set is 15,000 RU/s, well above your desired 200–2,000 RU/s range.
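To see why the floor landed at exactly 15,000 RU/s, here is a minimal sketch of the autoscale rule as documented (assumptions: the lowest settable max RU/s is the highest max RU/s ever provisioned divided by 10, with an absolute autoscale floor of 1,000 RU/s; the storage-based floor is ignored here, and the function name is illustrative):

```python
def min_autoscale_max_rus(highest_ever_max_rus: int) -> int:
    """Lowest autoscale max RU/s you can set after scaling up.

    Assumption from the public docs: MAX(1000, highest max RU/s
    ever provisioned / 10), ignoring the storage-based floor.
    """
    return max(1_000, highest_ever_max_rus // 10)

# The asker's case: 150,000 RU/s max ever provisioned -> 15,000 RU/s floor.
print(min_autoscale_max_rus(150_000))
```

This matches the observed behavior: after setting a 150,000 RU/s autoscale maximum, the portal refuses anything below 15,000 RU/s.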

    To lower the throughput back down to the desired range, you can utilize the partition merge feature. Merging partitions allows you to reduce the number of physical partitions used for your container, which can help in lowering the minimum throughput requirements. By merging partitions, you effectively consolidate the throughput across fewer partitions, potentially allowing you to set a lower overall RU/s.

    However, ensure that your Azure Cosmos DB account meets the eligibility criteria for using the merge feature, as it is currently in preview and may have specific requirements regarding the API and provisioned throughput type.



  2. SUNOJ KUMAR YELURU 17,326 Reputation points MVP Volunteer Moderator
    2025-12-10T08:43:09.9966667+00:00

    Hello @Jamey Schaap

    To scale back down toward the 200–2,000 RU/s range, consider the “Partition merge (preview)” feature. When you scaled up, Azure Cosmos DB spread your data across additional physical partitions to serve the higher throughput, and those partitions now drive the minimum RU/s. Merging them can lower that minimum, making it feasible to scale back down.

    However, since “Partition merge” is in preview, its availability depends on your account’s API, region, and provisioned throughput type. Check the latest Azure documentation for the eligibility criteria, and if you need guidance specific to your account, Azure support can provide tailored assistance.


    If the Answer is helpful, please click Accept Answer and Up-Vote 👍, so that this can be beneficial to other community members.


  3. VRISHABHANATH PATIL 2,305 Reputation points Microsoft External Staff Moderator
    2025-12-10T09:03:39.52+00:00

    Hello @Jamey Schaap
    I see you’re facing a challenge with scaling down the throughput for your Cosmos DB after increasing it. You’re correct that the inability to reduce your throughput under the current conditions likely relates to the minimum throughput requirements imposed by Azure Cosmos DB.

    When you scale up throughput, Azure Cosmos DB can split your data across multiple physical partitions, each of which serves up to 10,000 RU/s. Provisioning 150,000 RU/s therefore requires at least 15 physical partitions, and the database’s minimum throughput depends on how many partitions it now has.
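A quick sketch of that partition math, assuming the documented 10,000 RU/s cap per physical partition (the function name is illustrative):

```python
import math

def physical_partitions(highest_ever_rus: int) -> int:
    """Minimum number of physical partitions needed to serve a given
    throughput, assuming each partition serves at most 10,000 RU/s."""
    return math.ceil(highest_ever_rus / 10_000)

# 150,000 RU/s needs at least 15 physical partitions; 2,000 RU/s fits in one.
print(physical_partitions(150_000))
print(physical_partitions(2_000))
```

Once those 15 partitions exist, they do not disappear when you lower throughput, which is why the minimum RU/s stays elevated until the partitions are merged.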

    To answer your question about scaling down to the 200–2,000 RU range—here’s what you can consider:

    Minimum Throughput Requirements: After scaling up, your container has a new minimum RU/s that you cannot go below without reducing the number of physical partitions. According to the documentation, Cosmos DB derives this minimum from your current storage and the highest RU/s you’ve ever provisioned.

    Partition Merge Feature: The "Partition merge (preview)" feature you noticed may help in this scenario. It allows for the merging of partitions, which could potentially reduce the minimum throughput required. You might want to explore how to use this feature correctly to achieve your desired RU/s.

    Alternate Approaches: If the partition merge feature doesn’t help or isn’t suitable, consider creating a new container with the desired throughput, migrating your data there, and then dropping the old container. This incurs some overhead but can be a straightforward solution when drastic scaling is needed.

    Here are some steps to help you move forward:

    • Check the current number of physical partitions and their throughput.
    • Investigate if merging partitions can lower your required minimum RU/s.
    • If needed, consider the data migration approach.
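The decision in the steps above can be sketched as a small helper. Assumptions, taken from the public docs rather than from this thread: the autoscale floor is the highest max RU/s ever provisioned divided by 10, with a 1,000 RU/s minimum; `scale_down_plan` is a hypothetical name:

```python
def scale_down_plan(highest_ever_max_rus: int, target_max_rus: int) -> str:
    """Report whether a target autoscale max RU/s is reachable directly,
    or is blocked by the minimum-throughput floor (assumed rule:
    MAX(1000, highest max RU/s ever provisioned / 10))."""
    floor = max(1_000, highest_ever_max_rus // 10)
    if target_max_rus >= floor:
        return "scale directly"
    return (f"blocked: current floor is {floor} RU/s; "
            "try partition merge or migrate to a new container")

# The asker's case: 2,000 RU/s target vs. a 15,000 RU/s floor.
print(scale_down_plan(150_000, 2_000))
```

Under these assumptions, any target at or above the floor can be set immediately, while a lower target requires first reducing the partition count (merge) or recreating the container.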


    Hope this helps! If you have more specific details like the current number of partition keys or the RU that you’re trying to scale down to, feel free to share!

