SQL Server 2014 was released last week. Isn’t that great? Sure it is! But what’s in it for you? Should you run to your CTO to get the upgrade budget approved right now? Do you start building the new servers immediately? Or are you one of those people who wait for the first service pack before considering a move to production? To make the right decision, you need to make sure the value proposition of the upgrade is worthwhile and will give you a positive return. In the next few blog posts, we will cover some of the exciting new technologies that SQL Server 2014 has to offer, the bottlenecks they were designed to solve, and general guidelines on their actual benefit and where they might be a good fit.

Without a doubt, In-Memory OLTP is the single most talked-about feature of SQL Server 2014. It is, in many ways, a game changer for many applications. A quick Bing search returned more than 2 million results!

Interestingly, a Google search for the same term returned only 145,000 results.

Is your application one of those for which the rules are about to change? In Part I of this series, we will see how In-Memory OLTP helps applications that are bound by insert performance. Typically, insert performance is limited by the speed of transaction logging, by index maintenance operations, and by contention with concurrent read and write sessions.

For traditional disk based tables, every insert performs the following operations:

  1. Appropriate pages are located and fetched from disk into memory, if not already in cache. These include:
    1. Object metadata pages (from sys.objects, sys.columns, sys.indexes, etc.).
    2. Allocation metadata pages (IAM, PFS, GAM, SGAM, etc.).
    3. Actual data and index pages, including the non-leaf levels of B-trees.
  2. Inserts are performed in memory on the cached pages.
  3. If there is no room on the page where the row or key needs to be inserted, a page split occurs, which requires allocating a new page, loading it into memory, and updating the page linked lists (the query after this list shows how to track splits and latch waits for a given table).
  4. The log records describing these modifications are placed in the transaction log buffer and flushed to disk whenever the buffer fills up. Since the log buffer is shared by all connections to the database, a single transaction may be spread across multiple flushes, resulting in many small IO operations.
  5. The transaction commits once all of its log records have been hardened to disk; the modified data pages themselves are written later by the checkpoint and lazy writer processes.
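If you want to see how much of this page-level work your own inserts already generate, a query like the following sketch can help. It uses the index operational stats DMV; dbo.Orders is a hypothetical placeholder for any insert-heavy table of yours, and the counters are cumulative since they were last cleared.

```sql
-- Page splits, latch waits, and lock waits for one table since the counters were last reset.
-- dbo.Orders is a placeholder; substitute your own insert-heavy table.
SELECT
    i.name                        AS index_name,
    ios.leaf_insert_count,        -- rows inserted at the leaf level
    ios.leaf_allocation_count,    -- leaf-level page allocations, i.e. page splits / new pages
    ios.page_latch_wait_count,    -- waits on in-memory PAGE latches
    ios.page_io_latch_wait_in_ms, -- time spent waiting for pages to be read from disk
    ios.page_lock_wait_count      -- waits on page-level locks
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID(N'dbo.Orders'), NULL, NULL) AS ios
JOIN sys.indexes AS i
    ON  i.object_id = ios.object_id
    AND i.index_id  = ios.index_id
ORDER BY ios.leaf_insert_count DESC;
```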

Every page fetched from disk into memory requires a PAGEIOLATCH, and every page modification in memory requires a PAGELATCH as well as exclusive locks on the affected rows/keys, pages, and potentially the whole table. Those locks also block concurrent readers unless they use the SNAPSHOT (or READ COMMITTED SNAPSHOT) isolation level. In addition, schema stability locks are required to prevent other sessions from modifying the table’s structure. These latches and locks may conflict with concurrent sessions and create performance hot spots, as incompatible requests queue up and processes take turns accessing the same pages.
As you can see, there is a lot of IO involved and a lot of overhead for managing the cache, locks, and metadata.
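To gauge how much an entire instance is already waiting on these mechanisms, the cumulative wait statistics are a reasonable first stop. The following is a minimal sketch; the wait-type groupings are the usual suspects for the bottlenecks described above, and how much is “too much” depends on your workload.

```sql
-- Instance-wide cumulative waits related to the bottlenecks described above:
-- PAGEIOLATCH_% = pages read from disk, PAGELATCH_% = in-memory page contention,
-- LCK_M_%       = lock waits,           WRITELOG    = log flush waits.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGEIOLATCH%'
   OR wait_type LIKE 'PAGELATCH%'
   OR wait_type LIKE 'LCK_M%'
   OR wait_type = 'WRITELOG'
ORDER BY wait_time_ms DESC;
```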

In-Memory OLTP alleviates most of these bottlenecks by providing the following benefits:

  • Data is stored as individual rows, with no pages or other physical containers, so page-management and metadata overhead is minimized. This also eliminates potentially very expensive page splits.
  • All objects and data reside in memory, so no data page IO is involved in the modifications.
  • Transaction logging is minimized, as only data changes are logged. Index modifications are not logged, because the indexes are not persisted; they are rebuilt from the data every time the database goes through the recovery process. Note that for UPDATE operations, data logging may produce a larger volume of log records than for disk-based tables, because every update is split into an INSERT and a DELETE. With traditional disk-based tables, SQL Server may either split an update into an INSERT and a DELETE or perform an in-place UPDATE, as it sees fit.
  • IO can be eliminated altogether by using non-durable (SCHEMA_ONLY) memory-optimized tables if the data is transient in nature and does not need to be persisted. This is commonly the case for ETL-type processes where the data can be reconstructed from its source (see the sketch after this list).
  • Transaction logging is optimized by minimizing the number of log flushes and by using larger IO blocks. Transactions against memory-optimized tables are flushed only on commit and may use large IO blocks.
  • Contention is eliminated, as In-Memory OLTP uses latch-free and lock-free concurrency mechanisms based on optimistic multi-version concurrency control.
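To make these options concrete, here is a minimal sketch of creating a durable and a non-durable memory-optimized table in SQL Server 2014. The database name, container path, table definitions, and bucket counts are all hypothetical placeholders that you would adjust to your own schema and expected row counts.

```sql
-- One-time setup: the database needs a filegroup that holds memory-optimized data
-- (MyDb, MyDb_mod, and the container path are placeholders).
ALTER DATABASE MyDb ADD FILEGROUP MyDb_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE MyDb ADD FILE (NAME = N'MyDb_mod1', FILENAME = N'C:\Data\MyDb_mod1')
    TO FILEGROUP MyDb_mod;
GO

-- A durable memory-optimized table: data changes are logged and persisted,
-- but the hash index is not persisted; it is rebuilt from the data during recovery.
CREATE TABLE dbo.OrdersInMem
(
    OrderId    INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId INT       NOT NULL,
    OrderDate  DATETIME2 NOT NULL,
    Amount     MONEY     NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- A non-durable (SCHEMA_ONLY) table: no IO at all; the contents are lost on restart,
-- which is often acceptable for staging/ETL data that can be reloaded from its source.
CREATE TABLE dbo.StagingRows
(
    RowId   INT            NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload NVARCHAR(1000) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```

As a rule of thumb, size each hash index’s BUCKET_COUNT at roughly one to two times the expected number of distinct key values; too few buckets lead to long chains, while far too many waste memory.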

Based on the above, we can conclude that In-Memory OLTP may provide great benefits if your insert performance bottleneck is one of the following:

  • Slow data IO operations
  • Index change logging for tables with multiple indexes
  • Locking and latching contention

It may provide less of a benefit if your bottleneck is one of the following (the query after this list can help you tell data IO and log IO apart):

  • Logging thread contention
  • Slow transaction log IO for data modifications
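One way to tell these cases apart is to look at where write stalls actually accumulate for the database in question. The sketch below compares cumulative write latency on the data files against the transaction log file; run it in the context of the database you are investigating, and keep in mind the numbers are cumulative since the last restart.

```sql
-- Compare cumulative write stalls on data files (ROWS) vs. the transaction log (LOG)
-- to see whether data IO or log IO dominates for the current database.
SELECT
    df.type_desc,                  -- ROWS (data) or LOG
    df.name                        AS file_name,
    vfs.num_of_writes,
    vfs.num_of_bytes_written,
    vfs.io_stall_write_ms,
    vfs.io_stall_write_ms * 1.0
        / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS vfs
JOIN sys.database_files AS df
    ON df.file_id = vfs.file_id
ORDER BY vfs.io_stall_write_ms DESC;
```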

In the next post, we will investigate the benefits of In-Memory OLTP for applications that experience read performance bottlenecks.

Have a wonderful week!