In-Memory OLTP table indexes not releasing memory for deleted records

James Southward 20 Reputation points
2024-07-31T20:23:03.66+00:00

We have a small OLTP database (logical size about 6 GB) containing several memory-optimized work tables that get filled with 100,000+ rows during processing. Once the calculation is complete the records are no longer needed and the data is purged. However, the storage related to the tables' indexes does not get released. As shown in the blue segment of the Memory Usage by Memory Optimized Objects report, Index Unused Memory just keeps growing until the system stops when it hits the memory limit for in-memory databases. Touching any index on the table (rebuilding a HASH index with a different bucket count, or adding/dropping an index) instantly releases all of that table's index memory back to the system, but since that needs a full table lock it is not a practical solution. It doesn't seem to be an issue for the tables themselves, only their indexes. There's no rebuild option for these indexes like there is for a disk-based index. Any ideas?
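
For reference, the same allocated-versus-used numbers that the report shows can be pulled per table from sys.dm_db_xtp_table_memory_stats; a query along these lines (no specific table names assumed, run in the affected database) is how the unused index memory can be watched growing:

    -- Per-table memory for memory-optimized objects: allocated vs. used,
    -- split between the rows and the indexes. The gap between
    -- memory_allocated_for_indexes_kb and memory_used_by_indexes_kb roughly
    -- corresponds to the blue "Index Unused Memory" segment on the report.
    SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
           OBJECT_NAME(object_id)        AS table_name,
           memory_allocated_for_table_kb,
           memory_used_by_table_kb,
           memory_allocated_for_indexes_kb,
           memory_used_by_indexes_kb
    FROM   sys.dm_db_xtp_table_memory_stats
    ORDER BY memory_allocated_for_indexes_kb - memory_used_by_indexes_kb DESC;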

SQL Server | Other

Answer accepted by question author
  1. Erland Sommarskog 128.7K Reputation points MVP Volunteer Moderator
    2024-08-02T21:52:46.2433333+00:00

    I did some testing, and it seems that sys.sp_xtp_force_gc will do the job for you. You did not say which version of SQL Server you are using, but this is a relatively new stored procedure that was introduced in SQL 2019 CU15 and SQL 2022 CU1.

    Also, when I tested, I found that neither the memory for the heap nor for the nonclustered index was released immediately. But when I came back several hours later, some of the memory had been reclaimed. I guess the deallocation is done by a background process that takes things slow and gentle.
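
    To be concrete, this is roughly what my test looked like; the database name below is just a placeholder for your own database:

        -- Rough sketch only: "YourInMemoryDb" is a placeholder.
        EXEC sys.sp_xtp_force_gc N'YourInMemoryDb';  -- release unused memory for the database
        EXEC sys.sp_xtp_force_gc;                    -- without an argument it works at the instance level

        -- Then check how much index memory is still allocated but unused.
        SELECT OBJECT_NAME(object_id) AS table_name,
               memory_allocated_for_indexes_kb,
               memory_used_by_indexes_kb
        FROM   sys.dm_db_xtp_table_memory_stats;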


2 additional answers

  1. Anonymous
    2024-08-01T07:19:05.04+00:00

    Hi @James Southward ,

    Thank you for reaching out, and welcome to Microsoft Q&A!

    However, the storage related to the tables' indexes does not get released.

    Whenever an index is created, rebuilt, or dropped, disk space for both the old (source) and new (target) structures is required in their appropriate files and filegroups. The old structure is not deallocated until the index creation transaction commits. Additional temporary disk space for sorting operations may also be needed. For more information, see Disk Space Requirements for Index DDL Operations.

    There's no rebuild option for these indexes like there is for a disk-based index.

    From the official documentation, we can get the following information:

    A columnstore index can be built on a disk-based table, even as the clustered index. But on a memory-optimized table a columnstore index cannot be clustered.

    And from the overview of columnstore indexes: SQL Server marks the row as logically deleted but doesn't reclaim the physical storage for the row until the index is rebuilt.

    Feel free to share your issues here if you have any concerns!

    Best regards,

    Lucy Chen




  2. James Cook 0 Reputation points
    2025-12-12T13:57:32.6833333+00:00

    I have found the same issue: a runaway job created GBs of in-memory data, including roughly 10 GB of index memory. I deleted the data down to a few hundred MB and ran EXECUTE sys.sp_xtp_force_gc;, but the index space was still about 11 GB. Dropping and recreating my least-used index released it all.
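
    In case it helps anyone, the drop-and-recreate was nothing more than ALTER TABLE; the table, column and index names below are made up:

        -- Illustrative only: table, column and index names are invented.
        -- Note that this locks the whole table, as the original poster pointed out.
        ALTER TABLE dbo.WorkTable DROP INDEX ix_least_used;
        ALTER TABLE dbo.WorkTable
            ADD INDEX ix_least_used NONCLUSTERED (SomeColumn);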

