Infrastructure at your Service

David Barbarin

Memory management, OOM issues with SQL Server 2014 In-memory OLTP

Last week I gave a workshop about SQL Server 2014 and its new features. On the first day we worked on the new In-Memory OLTP engine and different topics such as the new internal storage, the new transaction processing behavior and the new checkpointing process. During that day, one of the attendees asked me about memory management with the In-Memory OLTP feature. It was a very interesting question but unfortunately I didn't have the time to discuss it with him, so I decided to publish something on this topic. The subject is extensive, so I will only try to give a good overview of how memory management works for memory-optimized objects, and of how important monitoring is in this particular context.

First of all, keep in mind that memory-optimized tables are a memory-resident feature: memory-optimized structures (indexes and data rows) reside exclusively in memory. This is by design and this point is very important. I will explain why later in this blog post.
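As a reminder of how such objects are created, here is a minimal sketch (database, file path, table and column names are examples only, not the objects from my lab):

```sql
-- A memory-optimized filegroup is required before creating any
-- memory-optimized table in the database (path is hypothetical)
alter database dbi_hk add filegroup dbi_hk_mod contains memory_optimized_data;
alter database dbi_hk
    add file (name = 'dbi_hk_mod', filename = 'D:\data\dbi_hk_mod')
    to filegroup dbi_hk_mod;
go

-- A durable memory-optimized table: both the hash index and the data
-- rows will live exclusively in memory
create table dbo.demo_xtp
(
    id int not null
        primary key nonclustered hash with (bucket_count = 1048576),
    payload varchar(100) not null
)
with (memory_optimized = on, durability = schema_and_data);
```

With DURABILITY = SCHEMA_AND_DATA, the data is also persisted to the checkpoint files, but the working copy that queries touch is always the in-memory one.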

For the moment, let's focus on the memory aspects of this new feature. In-Memory OLTP is no different from other memory consumers in SQL Server: In-Memory OLTP objects have their own memory clerk, MEMORYCLERK_XTP. Let's have a look at the sys.dm_os_memory_clerks DMV to show information about the memory allocated to In-Memory OLTP.

select type,
       name,
       pages_kb/1024 as size_MB
from sys.dm_os_memory_clerks where type like '%xtp%';


In my case we may notice that the database dbi_hk (DB_ID = 24) contains memory-optimized objects with a dedicated memory clerk. The other xtp memory clerks are dedicated to system threads (first line) and the DAC (last line), but let's focus on my user database memory clerk, which has 2336MB of page memory allocated.

In my lab environment, I have only one memory-optimized table, named bigTransactionHistory_xtp, inside the dbi_hk database. Let's have a look at the new DMV sys.dm_db_xtp_table_memory_stats to show memory information for this table:

select object_name(object_id) AS table_name,
       memory_allocated_for_indexes_kb / 1024 as mem_alloc_index_mb,
       memory_allocated_for_table_kb / 1024 as mem_alloc_table_mb,
       memory_used_by_indexes_kb / 1024 as mem_used_index_mb,
       memory_used_by_table_kb / 1024 as mem_used_table_mb,
       (memory_allocated_for_table_kb + memory_allocated_for_indexes_kb) / 1024 as mem_alloc_total_mb,
       (memory_used_by_table_kb + memory_used_by_indexes_kb) / 1024 as mem_used_total_mb
FROM sys.dm_db_xtp_table_memory_stats
where object_id = object_id('bigTransactionHistory_xtp');


We may expect to retrieve here the same amount of allocated page memory as in the dedicated memory clerk of the dbi_hk database. This is approximately the case. The difference is probably memory allocated for internal system structures. We could look at the sys.dm_db_xtp_memory_consumers DMV here, but I will focus on it in a future blog post.
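For the curious, here is a minimal peek at that DMV (just a sketch; a deeper analysis of each consumer type is left for the future post):

```sql
-- Per-consumer breakdown of In-Memory OLTP allocations inside the
-- current database: varheaps, hash index buckets, system structures...
select memory_consumer_type_desc,
       memory_consumer_desc,
       allocated_bytes / 1024 / 1024 as allocated_mb,
       used_bytes / 1024 / 1024 as used_mb
from sys.dm_db_xtp_memory_consumers;
```

Rows with a NULL object_id correspond to system allocations, which is where part of the difference with the memory clerk figure comes from.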

At this point we know where to find information about the memory consumption of memory-optimized objects, but I still have one question in mind: how does the SQL Server memory manager deal with concurrent memory activity between memory-optimized tables and their disk-based counterparts? Like any other memory consumer, the In-Memory OLTP engine responds to memory pressure, but only to a limited degree, because memory consumed by data and indexes can't be released even under memory pressure.

To deal correctly with the In-Memory OLTP engine and other consumers we have to turn to the resource governor (RG) side. Indeed, by default all databases are mapped to the default resource pool, regardless of whether RG is enabled. Likewise, workloads issued from both disk-based tables and memory-optimized tables will run concurrently in the default resource pool if no special configuration is performed. In this case, RG uses an internal threshold for In-Memory OLTP to avoid conflicts over pool usage. The threshold depends on the memory size configured for SQL Server, and especially on the target commit memory of the SQL Server instance. You can refer to the Microsoft documentation here for more details.

So, in my case the max server memory setting is configured to 6144MB and the target committed memory is as follows:

select committed_target_kb / 1024 as committed_target_mb
from sys.dm_os_sys_info;



According to the Microsoft documentation (cf. link above), the percentage available for in-memory tables will be 70%, or 0.7 * 4898 = 3429MB. I can retrieve this information by using the DMVs related to RG. You can find an original version of this script on the web.

;with cte
as (
   select
          RP.pool_id ,
          RP.name ,
          RP.min_memory_percent ,
          RP.max_memory_percent ,
          cast(RP.max_memory_kb / 1024. as numeric(12, 2)) AS max_memory_mb ,
          cast(RP.used_memory_kb / 1024. as numeric(12, 2)) AS used_memory_mb ,
          cast(RP.target_memory_kb / 1024. as numeric(12, 2)) AS target_memory_mb ,
          cast(SI.committed_target_kb / 1024. as numeric(12, 2)) AS committed_target_mb
   from sys.dm_resource_governor_resource_pools RP
   cross join sys.dm_os_sys_info SI
)
select
       c.pool_id ,
       c.name ,
       c.min_memory_percent ,
       c.max_memory_percent ,
       c.max_memory_mb ,
       c.used_memory_mb ,
       c.target_memory_mb ,
       CAST(c.committed_target_mb * case when c.committed_target_mb <= 8192 then 0.7
                                         when c.committed_target_mb <= 16384 then 0.75
                                         when c.committed_target_mb <= 32768 then 0.8
                                         when c.committed_target_mb <= 98304 then 0.85
                                         else 0.9
                                    end * c.max_memory_percent / 100 as numeric(12,2)) as [Max_for_InMemory_Objects_mb] ,
       CAST(c.committed_target_mb * case when c.committed_target_mb <= 8192 then 0.7
                                         when c.committed_target_mb <= 16384 then 0.75
                                         when c.committed_target_mb <= 32768 then 0.8
                                         when c.committed_target_mb <= 98304 then 0.85
                                         else 0.9
                                    end * c.max_memory_percent / 100 as numeric(12,2)) - c.used_memory_mb as Free_for_InMemory_Objects_mb
FROM cte c;


OK, I retrieve (approximately) this value by looking at the Max_for_InMemory_Objects_mb column of the default pool line. Notice that 2008MB is already used in the default resource pool.

At this point In-Memory OLTP and disk-based OLTP run concurrently in the same resource pool and, of course, this is not a recommended situation. Indeed, we may end up in a situation where In-Memory OLTP consumes all the available memory of this pool. In that case, SQL Server will be forced to flush data pages of disk-based tables from the buffer pool, and you know the performance impact of this process.

Let's go ahead and create an issue you can face with In-Memory OLTP on a misconfigured environment. First we decrease the max server memory setting to 4096MB, and then we load another bunch of data into the bigTransactionHistory_xtp table to consume an important part of the memory available for memory-optimized objects in the default resource pool. Finally, let's have another look at the RG memory configuration by using the previous script. We now have a good picture of the changes applied after our reconfiguration:
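The first step can be reproduced as follows (only on a test instance, obviously, since we are deliberately creating a misconfiguration):

```sql
-- Lower max server memory to 4096MB to shrink the memory available
-- to the default pool (and thus to memory-optimized objects)
exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'max server memory (MB)', 4096;
reconfigure;
```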


As expected, several values have changed: the target memory, the memory available for memory-optimized tables and the memory used in the default resource pool. The new available memory value for the resource pool is now 1605MB (3891MB - 2286MB). I let you imagine a bad situation where your memory-optimized tables consume all the available memory inside the default resource pool... the consequences are obvious (even if they depend on the context): probably a lot of memory pressure between the buffer pool consumer and the In-Memory OLTP consumer and, in the worst case, a potential OOM issue like the following:


After loading data into bigTransactionHistory_xtp we can notice that we have consumed all the memory available for in-memory objects in the default resource pool. However, as said earlier, RG guarantees a certain amount of memory for disk-based tables.


OK, now let's simulate a crash recovery scenario by restarting the SQL Server instance. In my case the SQL Server engine service didn't restart correctly... ouch... What's going on? Of course my first thought was to take a look directly at the error log of my SQL Server instance. The first error message I encountered was the following:


OK... it seems there is an issue during the recovery process of the dbi_hk database. In fact, one step of the recovery process consists of building the index structures and linking the data rows to them. You can see that this step fails with an OOM (Out Of Memory) issue.



In this second part, we have interesting information concerning our OOM issue. First of all, in the "process / system counts" section we may notice that SQL Server had to deal with internal memory pressure (process physical memory low = 1), so we can exclude external memory pressure. Then, in the "memory manager" section we have two additional indicators: Last OOM Factor and Page Alloc Potential. The former confirms an OOM (out of memory) issue in the memory manager. The latter shows a negative value, which indicates that the buffer pool does not have any free memory, so our assumption of internal memory pressure is correct. As a reminder, Page Alloc Potential is similar to Stolen Potential in previous versions of SQL Server.

Let's continue and point out the memory clerks responsible for the memory pressure. Digging further into the log file, I found two relevant memory clerks with a lot of pages allocated, as shown below:



As expected, the first memory clerk concerns In-Memory OLTP (XTP stands for eXtreme Transaction Processing) and the second is related to the log pool manager, which is heavily used during recovery processing. At the time of the OOM issue, both memory clerks together had a total size of 3.7GB. This does not leave much room for the other caches in the default resource pool. Finally, the end of the error log contains the following error messages, which confirm that SQL Server is missing memory for its default resource pool.


According to the Microsoft documentation about resolving OOM issues in In-Memory OLTP scenarios, the number of solutions is very limited. In my case, I started the SQL Server engine with the -f parameter to load a minimal configuration, and then I increased the amount of memory available to In-Memory OLTP by increasing the max server memory option on the server side. This fix will avoid facing the same issue on the next restart of my SQL Server engine service.
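In practice, that looked like the following (the service name and the memory value are from my lab; adjust them to your environment):

```sql
-- Start the instance with minimal configuration first, e.g. from an
-- elevated command prompt:   net start MSSQLSERVER /f
-- Then, from a dedicated admin connection, give the default pool
-- more headroom and restart the service normally:
exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'max server memory (MB)', 6144;
reconfigure;
```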

Is it possible to fix this OOM condition permanently? The answer is yes: we have to configure a resource pool with a memory limit and bind it to our memory-optimized database. But this is another story and I let you check the Microsoft documentation! My intention in this blog post is only to create awareness of the importance of good memory management with the new In-Memory OLTP feature.
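For completeness, here is a minimal sketch of that permanent fix (the pool name and the 50% limit are arbitrary examples, not a recommendation):

```sql
-- Dedicated pool capping the memory available to memory-optimized
-- objects, so the buffer pool keeps a guaranteed share
create resource pool pool_xtp with (max_memory_percent = 50);
alter resource governor reconfigure;
go

-- Bind the memory-optimized database to the pool; the binding only
-- takes effect after the database is brought offline and back online
exec sp_xtp_bind_db_resource_pool @database_name = 'dbi_hk',
                                  @pool_name = 'pool_xtp';
alter database dbi_hk set offline;
alter database dbi_hk set online;
```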

Happy configuration!



David Barbarin
Senior Consultant & Microsoft Technology Leader