Infrastructure at your Service

Category Archives: Development & Performance

Oracle Team

OCM 12c preparation: Materialized View explain_rewrite

By | Development & Performance, Oracle | No Comments

By Franck Pachot. When I passed the OCM 11g exam, I had to prepare the ‘Manage Materialized Views to improve rewrite’ topic. Now that I’m preparing the 12c upgrade I quickly checked what I had prepared… and it doesn’t work. This post is about query rewrite, a bug in 12c, how to understand why a query doesn’t rewrite, and how rewrite can still be accepted when constraints are not enforced and when the materialized view is stale.
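As a taste of what the post covers, here is a minimal sketch of using DBMS_MVIEW.EXPLAIN_REWRITE to see why a query is or isn’t rewritten. The query and materialized view name are hypothetical; the REWRITE_TABLE it reports into is created by the utlxrw.sql script shipped with Oracle:

@?/rdbms/admin/utlxrw.sql
BEGIN
  DBMS_MVIEW.EXPLAIN_REWRITE(
    query        => 'select prod_id, sum(amount) from sales group by prod_id',
    mv           => 'SALES_MV',
    statement_id => 'demo');
END;
/
-- each MESSAGE row gives a reason allowing or preventing rewrite
SELECT message FROM rewrite_table WHERE statement_id = 'demo';
-- rewrite on a stale MV, or with non-enforced constraints, depends on:
ALTER SESSION SET query_rewrite_integrity = stale_tolerated;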

Read More
Matthieu Munch

SAP HANA SQL scripting optimization: the CE Functions

By | Development & Performance | 3 Comments

In SAP HANA, you have two ways to create Calculated Views: the graphical method, or the scripting method with CE functions. In this blog, I will demonstrate that CE functions can improve the performance of a Calculated View. First, I will give you some general information about CE functions. Then I will show you the two different ways to create a Calculated View, and finally I will compare their performance…
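For readers unfamiliar with CE functions, here is a minimal SQLScript sketch of the scripting method the post benchmarks (table, column and procedure names are hypothetical):

CREATE PROCEDURE sales_by_region (OUT result TABLE ("REGION" NVARCHAR(20), "TOTAL" DECIMAL(15,2)))
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  -- CE functions address the column engine directly, bypassing the SQL layer
  t1     = CE_COLUMN_TABLE("SALES", ["REGION", "AMOUNT"]);
  t2     = CE_AGGREGATION(:t1, [SUM("AMOUNT") AS "TOTAL"], ["REGION"]);
  result = CE_PROJECTION(:t2, ["REGION", "TOTAL"]);
END;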

Read More
David Barbarin

SQL Server 2016 CTP2 : first thoughts about tempdb database

By | Development & Performance | No Comments

In my first blog about SQL Server 2016, I noticed during the installation process that we are able to configure the number of tempdb files. This is surely a great improvement, but at this point you may think that the configuration of the tempdb database is finished; in fact, you would be wrong. Let me explain why. First, let’s have a look at the tempdb file configuration after installing SQL Server. use tempdb;…
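A minimal sketch of that first check, continuing from the “use tempdb” the excerpt cuts off at (the ALTER DATABASE line is an illustrative follow-up, not the post’s script):

use tempdb;
SELECT name, physical_name,
       size * 8 / 1024 AS size_mb,   -- size is stored in 8KB pages
       growth
FROM   sys.database_files;
-- sizes and autogrowth usually still need tuning afterwards, e.g.:
-- ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 512MB, FILEGROWTH = 64MB);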

Read More
Matthieu Munch

Row Store vs Column Store in SAP HANA

By | Development & Performance | One Comment

The SAP HANA database allows you to create your tables in Row Store or Column Store mode. In this blog, I will demonstrate that each mode has its advantages and disadvantages and should be used for specific cases. With two kinds of tests, I will show you that the Row Store mode should be used for simple SELECT queries without aggregation, and the Column Store mode for complex SELECT queries containing…
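The storage mode is simply a clause of CREATE TABLE, as this sketch with hypothetical tables shows:

-- same definition, two storage engines
CREATE ROW    TABLE orders_row (id INT PRIMARY KEY, amount DECIMAL(10,2));
CREATE COLUMN TABLE orders_col (id INT PRIMARY KEY, amount DECIMAL(10,2));
-- a table can also be converted from one mode to the other later
ALTER TABLE orders_row ALTER TYPE COLUMN;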

Read More
David Barbarin

Variations on 1M rows insert (3): using SQL Server In-Memory features

By | Development & Performance | No Comments

Let’s continue this series about inserting 1M rows and perform the same test with a new variation using SQL Server In-Memory features. For this blog post, I will still use a minimal configuration consisting of only one Hyper-V virtual machine with 1 processor and 512MB of memory. In addition, my storage includes VHDX disks placed on 2 separate SSDs (one Intel SSDC2BW180A3L and one Samsung SSD 840 EVO). No special configuration has…
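For context, the In-Memory OLTP setup such a test relies on looks roughly like this; database name, path and bucket count are hypothetical, not the post’s exact script:

ALTER DATABASE demo ADD FILEGROUP demo_imoltp CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE demo ADD FILE (NAME = 'demo_imoltp', FILENAME = 'E:\imoltp')
  TO FILEGROUP demo_imoltp;
CREATE TABLE dbo.DEMO (
  id     INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
  [text] VARCHAR(15),
  number INT
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);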

Read More
David Barbarin

Variations on 1M insert (2): write commit

By | Development & Performance | No Comments

In this blog post, I will try to do the same as my colleagues did for Oracle and PostgreSQL. As a reminder, we saw in my previous blog post that SQL Server is designed to commit transactions implicitly by default, and that inserting 1M rows this way may have a huge impact on transaction log throughput: each transaction is synchronously committed to the transaction log. In this blog post, we’ll see a variation of…
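The excerpt is cut off before the variation is named, but SQL Server’s natural counterpart to Oracle’s COMMIT WRITE is delayed durability, available since SQL Server 2014. A minimal sketch with a hypothetical database and table:

ALTER DATABASE demo SET DELAYED_DURABILITY = ALLOWED;
BEGIN TRANSACTION;
INSERT INTO dbo.DEMO VALUES (1, 'Marc', 1);
-- the commit returns before the log block is flushed to disk
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);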

Read More
David Barbarin

Variations on 1M rows insert (1): bulk insert

By | Development & Performance | No Comments

I think you have already read the interesting series of blog posts by my colleagues Franck and Daniel about inserting 1 million rows into Oracle and PostgreSQL. So it’s time to write the first post of the same series for SQL Server. First of all, just to clarify: the idea is not to make a direct comparison between Oracle, PostgreSQL and SQL Server, but simply to look at the variations each of them offers to insert…
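As a flavour of what such a variation looks like on SQL Server, here is a sketch of a BULK INSERT from a flat file; the table and path are hypothetical, not the post’s exact script:

CREATE TABLE dbo.DEMO (id INT, [text] VARCHAR(15), number INT);
BULK INSERT dbo.DEMO
FROM 'C:\data\demo.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);  -- TABLOCK allows minimal logging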

Read More
Daniel Westermann

the fastest way to load 1m rows in postgresql

By | Development & Performance | 3 Comments

There have been several posts on how to load 1m rows into a database in the last few days: Variations on 1M rows insert (1): bulk insert; Variations on 1M rows insert (2): commit write; Variations on 1M rows insert (1): bulk insert – PostgreSQL; Variations on 1M rows insert (2): commit write – PostgreSQL; and Variations on 1M rows insert (3): TimesTen. In this post I’ll focus on how to prepare a PostgreSQL database for bulk loading in…
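Typical levers for such a preparation, sketched under the assumption of a throwaway test table (not necessarily the post’s exact settings):

-- an unlogged table skips WAL for its data entirely
CREATE UNLOGGED TABLE demo (id int, name varchar(15), number int);
-- asynchronous commit removes the log flush from the commit path
SET synchronous_commit = off;
-- COPY is far cheaper per row than individual INSERTs
COPY demo FROM '/tmp/demo.csv' WITH (FORMAT csv);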

Read More
Daniel Westermann

bulk loading semi structured data in postgresql

By | Development & Performance | 3 Comments

The last post looked at how to efficiently load 1m rows into a table in PostgreSQL. In this post I’ll take a look at how to do the same with semi-structured data. Again, it is the same system as in the last post, using the same PostgreSQL parameters. Using the hstore datatype: for storing key/value pairs there is the hstore extension in PostgreSQL. Using the same sample data as in the last…
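A minimal hstore sketch (the table is hypothetical; the post’s own sample data continues past the cut):

CREATE EXTENSION IF NOT EXISTS hstore;
CREATE TABLE demo_hstore (id int, attrs hstore);
-- key/value pairs are written with the key => value syntax
INSERT INTO demo_hstore VALUES (1, 'name => "Marc", number => "1"');
SELECT attrs -> 'name' FROM demo_hstore;   -- -> extracts a value by key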

Read More
Daniel Westermann

Variations on 1M rows insert(2): commit write – PostgreSQL

By | Development & Performance | No Comments

Franck was quite fast with his second post, Variations on 1M rows insert (2): commit write. Let’s see what we can do on the PostgreSQL side. I’ll take the same test table as Franck again: create table DEMO ("id" number, "text" varchar2(15), "number" number); The PL/pgSQL block which corresponds to Franck’s PL/SQL block looks like this: \timing on truncate table DEMO; DO $$DECLARE l_people_array varchar(15)[12] := '{"Marc", "Bill", "George", "Eliot", "Matt", "Trey", "Tracy", "Greg", "Steve",…
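The excerpt is cut off mid-array; a self-contained sketch of the same DO-block pattern, using only the names visible above and illustrative loop values, looks like this:

create table DEMO ("id" numeric, "text" varchar(15), "number" numeric);
\timing on
truncate table DEMO;
DO $$
DECLARE
  -- only the names visible in the excerpt; the post's array has 12 entries
  l_people_array varchar(15)[] := '{"Marc","Bill","George","Eliot","Matt","Trey","Tracy","Greg","Steve"}';
BEGIN
  FOR i IN 1..1000000 LOOP
    INSERT INTO DEMO VALUES (i, l_people_array[1 + mod(i, 9)], mod(i, 1000));
  END LOOP;
END$$;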

Read More