When you are working with Oracle you probably learned that each DDL statement does an implicit commit in the background. There is no way to wrap DDLs in a real transaction (with the exception of the “create schema” command, which I blogged about a long time ago). If you take a look at the SQL:2008 standard (ISO/IEC 9075-1:2008) you can find this: “An SQL-transaction (transaction) is a sequence of executions of SQL-statements that is atomic with respect to recovery. That is to say: either the execution result is completely successful, or it has no effect on any SQL schemas or SQL-data.” Well, a schema is what DDL is about, isn’t it? Let’s see how PostgreSQL handles this.

When you create objects the way you would do it in Oracle, you will probably do it like this:

(postgres@[local]:5000) [postgres] > create table tab1 ( a int, b date );
CREATE TABLE
Time: 60.109 ms
(postgres@[local]:5000) [postgres] > create view view1 as select a from tab1;
CREATE VIEW
Time: 64.288 ms
(postgres@[local]:5000) [postgres] > create sequence seq1;
CREATE SEQUENCE
Time: 36.861 ms

Creating objects this way behaves pretty much the same as in Oracle: there are implicit commits in the background and all objects got created:

(postgres@[local]:5000) [postgres] > \d tab1
     Table "public.tab1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a      | integer | 
 b      | date    | 

(postgres@[local]:5000) [postgres] > \d view1
     View "public.view1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a      | integer | 

(postgres@[local]:5000) [postgres] > \d seq1
            Sequence "public.seq1"
    Column     |  Type   |        Value        
---------------+---------+---------------------
 sequence_name | name    | seq1
 last_value    | bigint  | 1
 start_value   | bigint  | 1
 increment_by  | bigint  | 1
 max_value     | bigint  | 9223372036854775807
 min_value     | bigint  | 1
 cache_value   | bigint  | 1
 log_cnt       | bigint  | 0
 is_cycled     | boolean | f
 is_called     | boolean | f

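A side note before we move on: this implicit commit behavior can also be changed on the client side. psql has an AUTOCOMMIT variable, and when it is switched off psql implicitly opens a transaction for you and commits nothing until you say so. A small sketch (the table name tab2 is made up for this example):

\set AUTOCOMMIT off
create table tab2 ( a int, b date );
-- nothing has been committed yet, so this throws the table away again
rollback;
\set AUTOCOMMIT on
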
But is this what you really want? What do you do when one or more of the statements fail? Drop all objects that got created and start from the beginning? Imagine you want to create hundreds of objects and you want either all of them to succeed or everything to be rolled back if one or more fail. This is where transactions come into the game: a transaction is atomic, which means either everything inside the transaction succeeds or everything is rolled back. Transactions are not only for DML, as you might think. PostgreSQL lets you do this:

(postgres@[local]:5000) [postgres] > begin;
BEGIN
Time: 0.075 ms
(postgres@[local]:5000) [postgres] > create table tab1 ( a int, b date );
CREATE TABLE
Time: 0.543 ms
(postgres@[local]:5000) [postgres] > create view view1 as select a from tab1;
CREATE VIEW
Time: 0.520 ms
(postgres@[local]:5000) [postgres] > create sequence seq1;
CREATE SEQUENCE
Time: 0.576 ms
(postgres@[local]:5000) [postgres] > end;
COMMIT
Time: 0.854 ms

All the DDLs are executed in one transaction. As soon as the transaction is closed by issuing “end;” a commit is performed. All objects are there:

(postgres@[local]:5000) [postgres] > \d tab1
     Table "public.tab1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a      | integer | 
 b      | date    | 

(postgres@[local]:5000) [postgres] > \d view1
     View "public.view1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a      | integer | 

(postgres@[local]:5000) [postgres] > \d seq1
            Sequence "public.seq1"
    Column     |  Type   |        Value        
---------------+---------+---------------------
 sequence_name | name    | seq1
 last_value    | bigint  | 1
 start_value   | bigint  | 1
 increment_by  | bigint  | 1
 max_value     | bigint  | 9223372036854775807
 min_value     | bigint  | 1
 cache_value   | bigint  | 1
 log_cnt       | bigint  | 0
 is_cycled     | boolean | f
 is_called     | boolean | f

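The flip side works as well: because the DDLs run inside a transaction, you can also throw them away on purpose by issuing “rollback;” instead of “end;”. A small sketch with made-up object names so it does not clash with the objects created above:

begin;
create table tab2 ( a int, b date );
create sequence seq2;
-- changed your mind? roll everything back instead of committing
rollback;
-- tab2 and seq2 are gone as if they had never been created
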
The pretty cool thing is what happens if one or more of the statements fail:

(postgres@[local]:5000) [postgres] > begin;
BEGIN
Time: 0.076 ms
(postgres@[local]:5000) [postgres] > create table tab1 ( a int, b date );
CREATE TABLE
Time: 0.500 ms
(postgres@[local]:5000) [postgres] > create view view1 as select a from tab99999;
ERROR:  relation "tab99999" does not exist

As the table referenced by the view does not exist, the create view statement fails. What is the status of my transaction now? Let’s try to continue with the create sequence statement, just as would happen if all the DDLs were executed inside a script:

(postgres@[local]:5000) [postgres] > create sequence seq1;
ERROR:  current transaction is aborted, commands ignored until end of transaction block
Time: 0.126 ms

The transaction was aborted and everything will be ignored until we close the transaction. So let’s do it:

(postgres@[local]:5000) [postgres] > end;
ROLLBACK
Time: 0.108 ms

And here we go: Instead of a commit a rollback was executed. Does the table tab1 exist?

(postgres@[local]:5000) [postgres] > \d tab1
Did not find any relation named "tab1".

No. So, without worrying about cleaning up anything that was created successfully, we can just fix the issue and start again. Isn’t that a nice feature?
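
By the way, you do not necessarily have to lose the whole transaction when a single statement fails: savepoints work for DDL, too. This is only a sketch with made-up object names:

begin;
create table tab2 ( a int, b date );
savepoint before_view;
-- this fails because tab99999 does not exist ...
create view view2 as select a from tab99999;
-- ... but rolling back to the savepoint keeps the transaction alive
rollback to savepoint before_view;
create sequence seq2;
end;
-- tab2 and seq2 are committed, the failed view simply never happened

And if your DDLs live in a SQL file anyway, you do not even have to type “begin;” and “end;” yourself: psql can wrap the whole script in one transaction with its --single-transaction (or -1) option, and setting ON_ERROR_STOP makes it stop at the first error. The file name below is just an example:

psql -1 -v ON_ERROR_STOP=1 -f create_objects.sql postgres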