Category Archives: Memory-Optimized

Memory Optimized Table – Insert Test …Again

Last time, my tests showed no major difference in simple insert performance between a disk-based table and a memory-optimized table. On the other hand, Microsoft has published a script and test process to demonstrate how much performance gain memory-optimized tables can deliver, and that test shows positive gains. Below is the test result I got from my VM:

 

 

The test was run under different workloads (5K, 10K and 15K rows) and different numbers of parallel connections (10, 20 and 30 threads). The results show the memory-optimized table is actually faster than the disk-based table, and it is much more scalable as well.

 

Where did the difference come from? In Microsoft’s testing script, all the inserts are done through stored procedures, either natively compiled procedures or traditional interpreted stored procedures. The performance gain we saw in this test actually comes from the natively compiled procedures. It also demonstrates that traditional stored procedures are far less effective at exploiting memory-optimized tables!
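For reference, a natively compiled insert procedure looks roughly like the sketch below. This is only a minimal illustration of the kind of procedure Microsoft's script relies on; the table and procedure names are my own placeholders, not taken from their script.

-- minimal sketch of a natively compiled insert procedure (hypothetical names)
create procedure dbo.usp_insert_mem (@id int, @msg char(8000))
with native_compilation, schemabinding, execute as owner
as
begin atomic with (transaction isolation level = snapshot, language = N'us_english')
    insert into dbo.t_mem (id, msg) values (@id, @msg);
end
go

Because the procedure is compiled to machine code when it is created, executions skip the interpreter entirely, which is where most of the gain over a traditional stored procedure comes from.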

 

Microsoft’s testing script can be downloaded here: http://msftdbprodsamples.codeplex.com/releases/view/114491

 

Enjoy!

Memory-Optimized Tables – Insert Test (continued)

Memory-optimized tables didn’t show much performance gain in my last test, so I decided to do more testing – insert-only tests, to see how well memory-optimized tables really perform.

 

Last time my VM had 2 GB of RAM and the memory-optimized table could only hold about 150K rows. To be able to test more cases, I added another 6 GB, bringing it to 8 GB this time. I have also added 3 more logical CPU cores.
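That 150K limit is easy to sanity-check: assuming the same char(8000) rows as in my Day 1 test, each row takes roughly 8 KB, so 150K rows is already around 1.2 GB – close to everything a 2 GB box can give to memory-optimized data. The memory actually consumed by memory-optimized tables can be checked with a query along these lines:

-- approximate memory consumed by memory-optimized tables in the current database
select object_name(object_id) as table_name,
       memory_used_by_table_kb / 1024.0 as table_memory_mb,
       memory_used_by_indexes_kb / 1024.0 as index_memory_mb
from sys.dm_db_xtp_table_memory_stats;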

 

This time I’m going to compare the insert performance of memory-optimized tables and disk-based tables under different workloads: 50K, 100K, 150K, 200K, 250K and 300K row inserts. And I’m going to run these tests 3 times:

 

First run:

Second run:

 

Third run:

 

The results seem to be consistent – the memory-optimized table showed a performance gain only under the 200K to 300K workloads. For lighter workloads (<200K), the disk-based table performs just as well as the memory-optimized table.

 

From this test, the memory-optimized table seems to work better under heavy workloads. To prove this, I once again added 4 GB to the testing VM and expanded the test row set up to 450K. Again, 3 runs:

 

First run:

 

Second run:

Third run:

 

The results are close – the disk-based table performs just as well as the memory-optimized table under the given workloads. Unfortunately, I don’t have a more powerful server handy for further tests. For now, I will consider memory-optimized tables to have no apparent performance gain over traditional disk-based tables for these plain insert tests.

Memory-optimized Table – Day 1 Test

I had been checking Microsoft’s website since 12:00 AM on April 1st. I finally downloaded and installed it. Once my SQL Server 2014 Enterprise Edition was up and running on Windows Server 2012 R2, I decided to do some quick tests:

 

First thing first, create a test environment:

use master;
go

create database db1;
go

alter database db1 add filegroup IMOLTP contains memory_optimized_data;
go

-- the container folder (c:\db) must already exist
alter database db1 add file (name = 'imoltp', filename = 'c:\db\imoltp') to filegroup IMOLTP;
go

use db1;
go

-- disk-based table
create table t1 (id int primary key, msg char(8000));
go

-- memory-optimized table with a nonclustered hash index on id
create table t1_oltp (id int not null primary key nonclustered hash with (bucket_count = 1000000), msg char(8000)) with (memory_optimized = on);
go

 

Two tables are created: a disk-based table called t1 and a memory-optimized table called t1_oltp. They have the same definition, so the results will be comparable.
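A quick way to confirm that the second table really is memory-optimized (and to see its durability setting) is to check sys.tables, roughly like this:

-- verify which tables are memory-optimized and their durability
select name, is_memory_optimized, durability_desc
from sys.tables;

t1_oltp should show is_memory_optimized = 1 with the default durability of SCHEMA_AND_DATA.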

 

Now, let’s run some tests:

Test #1: Insert Operation:

 

set statistics time on
go

declare @start datetime2 = getdate();
declare @n int = 1;
while (@n < 1000000)
begin
    insert into t1 values (@n, replicate('a', 8000));
    set @n = @n + 1;
end
select datediff(ms, @start, getdate());
go

declare @start datetime2 = getdate();
declare @n int = 1;
while (@n < 1000000)
begin
    insert into t1_oltp values (@n, replicate('a', 8000));
    set @n = @n + 1;
end
select datediff(ms, @start, getdate());
go

 

The plan was to insert 1M rows and compare the performance. However, it turned out I was way too optimistic about the capabilities of my VM. SQL Server threw this error while inserting:

 

It’s good that SQL Server actually has a hard limit (80% of the default resource pool) on how much memory memory-optimized tables can use. This helps ensure that memory-optimized tables won’t consume everything SQL Server has and leave the rest of the workload starving to death.
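If you want an explicit, lower cap instead of relying on the default pool, the database can be bound to its own Resource Governor pool. Below is a minimal sketch – the pool name and the 50% cap are my own choices, not anything SQL Server requires:

-- hypothetical resource pool capped at 50% of available memory
create resource pool imoltp_pool with (max_memory_percent = 50);
alter resource governor reconfigure;
go

-- bind db1 to the pool; the binding takes effect after the database is taken offline and brought back online
exec sys.sp_xtp_bind_db_resource_pool @database_name = N'db1', @pool_name = N'imoltp_pool';
go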

 

Now I will have to reduce the number of records for this test. I’m going to be conservative this time – 50K records is what I will be testing with.

 

The results are:

Disk-based table: 33,323 ms

Memory-optimized table: 124,017 ms

 

Surprisingly, the memory-optimized table is much slower compared to the disk-based table. MSDN explicitly mentions a high data insert rate as one of the scenarios where we should consider memory-optimized tables:

http://msdn.microsoft.com/en-us/library/dn614087(v=sql.120).aspx

“High data insert rate

One of the critical needs of high performance applications is an extremely high data loading rate. The typical pattern is a need to ingest data where due to either high rate fluctuations and spikes, the workload requirements exceed the capabilities of a traditional RDBMS. Some business cases which may use this solution include smart metering, or repositories for telemetry or performance data ingestion.”

 

This result makes no sense. Before investigating what happened to my insert test, I decided to finish the rest of my tests.

 

Test #2: Select Operation – Table Scan:

set statistics time on
select * from t1
select * from t1_oltp
set statistics time off

 

 

Result:

(49999 row(s) affected)

 

SQL Server Execution Times:

CPU time = 2047 ms, elapsed time = 27617 ms.

 

(49999 row(s) affected)

 

SQL Server Execution Times:

CPU time = 1703 ms, elapsed time = 59434 ms.

 

Here we see some improvement in CPU time for the memory-optimized table, although its elapsed time was actually longer in this run.

 

 

Test #3 Select – Seek:

set statistics time on
select * from t1 where id = 25000
select * from t1_oltp where id = 25000
set statistics time off

 

Result:

(1 row(s) affected)

 

SQL Server Execution Times:

CPU time = 0 ms, elapsed time = 44 ms.

SQL Server parse and compile time:

CPU time = 0 ms, elapsed time = 0 ms.

 

(1 row(s) affected)

 

SQL Server Execution Times:

CPU time = 0 ms, elapsed time = 37 ms.

 

Hard to tell – both are lightning fast! However, notice there is a difference: the disk-based table query shows a “parse and compile time” entry while the memory-optimized table query doesn’t. This is one of the benefits of the compiled objects SQL Server 2014 brings.

 

Test #4: Delete:

set statistics time on
delete from t1
delete from t1_oltp
set statistics time off

Result:

SQL Server Execution Times:

CPU time = 13328 ms, elapsed time = 24345 ms.

 

(49999 row(s) affected)

 

SQL Server Execution Times:

CPU time = 219 ms, elapsed time = 208 ms.

 

(49999 row(s) affected)

 

Here we see a big difference – about 98% less CPU time. Wow!

 

Test #5: Bulk load:

 

Create a staging table for bulk load:

--staging table
create table t2 (id int primary key, msg char(8000));
go

declare @n int = 1;
while (@n < 50000)
begin
    insert into t2 values (@n, replicate('a', 8000));
    set @n = @n + 1;
end

 

Now let’s load them:

set statistics time on
insert into t1 select * from t2
insert into t1_oltp select * from t2
set statistics time off

 

Result:

SQL Server Execution Times:

CPU time = 9765 ms, elapsed time = 18497 ms.

 

(49999 row(s) affected)

 

SQL Server Execution Times:

CPU time = 1235 ms, elapsed time = 39432 ms.

 

(49999 row(s) affected)

 

To review the tests we have done so far:

[Summary table: disk-based table vs. memory-optimized table results]

Except for my suspicious insert result and my miscalculation of the 50K row count (49,999 rows, since the loop condition was < 50000 rather than <= 50000), all results are positive and performance friendly. Just keep in mind that memory-optimized tables are memory monsters – they consume A LOT of memory, and the default resource pool puts a hard limit (80%) on how much memory memory-optimized tables can use.
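One quick way to see how much memory the In-Memory OLTP (XTP) engine is holding as a whole is to look at its memory clerk – something along these lines:

-- total memory held by the In-Memory OLTP engine
select type, name, pages_kb / 1024.0 as memory_mb
from sys.dm_os_memory_clerks
where type like '%xtp%';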

 

And I will look into what happened to my insert test in my next blog post.