Infrastructure at your Service


Resize your Oracle datafiles down to the minimum without ORA-03297

By Franck Pachot

Your datafiles have grown in the past but now you want to reclaim as much space as possible, because you are short on filesystem space, or you want to move some files without moving empty blocks, or your backup size is too large. ALTER DATABASE DATAFILE … RESIZE can reclaim the space at the end of the datafile, down to the last allocated extent.

But if you try to get lower, you will get:

ORA-03297: file contains used data beyond requested RESIZE value

So, how do you find this minimum value, which is the datafile’s high water mark?
There is the brute-force solution: try a value. If it succeeds, try a lower one. If it fails, try a higher one.
Or there is the smart solution: find the datafile’s high water mark directly.
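The brute-force approach can be scripted as a binary search over the resize value. Here is a sketch (the datafile path and starting size are hypothetical, and note that each successful probe really shrinks the file, so run it only when that is what you want):

```sql
declare
  e_used_data exception; pragma exception_init(e_used_data, -3297);
  e_too_small exception; pragma exception_init(e_too_small, -3214);
  lo pls_integer := 1;     -- lower bound, in MB
  hi pls_integer := 5169;  -- current file size, in MB
  m  pls_integer;
begin
  while lo < hi loop
    m := floor((lo + hi) / 2);
    begin
      execute immediate
        'alter database datafile ''/u01/oradata/DB1/users01.dbf'' resize '||m||'M';
      hi := m;          -- resize succeeded: try a smaller size
    exception
      when e_used_data or e_too_small then
        lo := m + 1;    -- used data (or minimum size) beyond this value: try larger
    end;
  end loop;
  dbms_output.put_line('minimum size: '||lo||'M');
end;
/
```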

You can query DBA_EXTENTS to find it. But have you tried that on a database with a lot of datafiles? It runs forever, because DBA_EXTENTS does a lot of joins that you don’t need here. So my query reads SYS.X$KTFBUE directly, which is the underlying fixed table that gives the extent allocation in Locally Managed Tablespaces.
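For comparison, the straightforward (but slow) DBA_EXTENTS version looks like this, returning the high water mark in blocks per file:

```sql
-- high water mark (in blocks) per datafile, from DBA_EXTENTS
-- slow on large databases because DBA_EXTENTS joins many dictionary views
select tablespace_name, file_id, max(block_id + blocks - 1) hwm_block
from dba_extents
group by tablespace_name, file_id
order by tablespace_name, file_id;
```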

Note that the query may take a few minutes when you have a lot of segments, because the information is on disk, in each segment header and in the bitmaps used by LMT tablespaces. And you have to read all of them.

Here is my query:

set linesize 1000 pagesize 0 feedback off trimspool on
with
 hwm as (
  -- get the highest block id from each datafile (from x$ktfbue, as we don't need all the joins from dba_extents)
  select /*+ materialize */ ktfbuesegtsn ts#,ktfbuefno relative_fno,max(ktfbuebno+ktfbueblks-1) hwm_blocks
  from sys.x$ktfbue group by ktfbuefno,ktfbuesegtsn
 ),
 hwmts as (
  -- join ts# with tablespace_name
  select name tablespace_name,relative_fno,hwm_blocks
  from hwm join v$tablespace using(ts#)
 ),
 hwmdf as (
  -- join with datafiles, put 5M minimum for datafiles with no extents
  select file_name,nvl(hwm_blocks*(bytes/blocks),5*1024*1024) hwm_bytes,bytes,autoextensible,maxbytes
  from hwmts right join dba_data_files using(tablespace_name,relative_fno)
 )
select
 case when autoextensible='YES' and maxbytes>=bytes
 then -- we generate resize statements only if autoextensible can grow back to current size
  '/* reclaim '||to_char(ceil((bytes-hwm_bytes)/1024/1024),999999)
   ||'M from '||to_char(ceil(bytes/1024/1024),999999)||'M */ '
   ||'alter database datafile '''||file_name||''' resize '||ceil(hwm_bytes/1024/1024)||'M;'
 else -- generate only a comment when autoextensible is off
  '/* reclaim '||to_char(ceil((bytes-hwm_bytes)/1024/1024),999999)
   ||'M from '||to_char(ceil(bytes/1024/1024),999999)
   ||'M after setting autoextensible maxsize higher than current size for file '
   || file_name||' */'
 end SQL
from hwmdf
where
 bytes-hwm_bytes>1024*1024 -- resize only if at least 1MB can be reclaimed
order by bytes-hwm_bytes desc
/

and here is a sample output:

/* reclaim    3986M from    5169M */ alter database datafile '/u01/oradata/DB1USV/datafile/o1_mf_undotbs1_o9pfojva_.dbf' resize 1183M;
/* reclaim    3275M from   15864M */ alter database datafile '/u01/oradata/DB1USV/datafile/o1_mf_apcpy_o5pfojni_.dbf' resize 12589M;
/* reclaim    2998M from    3655M */ alter database datafile '/u01/oradata/DB1USV/datafile/o1_mf_cpy_qt_oepfok3n_.dbf' resize 657M;
/* reclaim    2066M from    2250M */ alter database datafile '/u01/oradata/DB1USV/datafile/o1_mf_undotbs2_olpfokc9_.dbf' resize 185M;
/* reclaim     896M from    4000M */ alter database datafile '/u01/oradata/DB1USV/datafile/o1_mf_cpy_ocpfok3n_.dbf' resize 3105M;

You directly get the resize statements, with the reclaimable space shown in comments.
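If you want to run the generated statements, you can spool them to a script and review it first. Here is one way to do it (the file names are hypothetical):

```sql
spool /tmp/resize_datafiles.sql
@datafile_hwm.sql      -- the query above, saved as a script
spool off
-- review /tmp/resize_datafiles.sql, then run it:
-- @/tmp/resize_datafiles.sql
```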

A few remarks about my query:

  • I generate the resize statements only for datafiles which are autoextensible. This is because I want to be sure that the datafiles can grow back to their original size if needed.
  • When the datafile is not autoextensible, or its maxsize is not higher than the current size, I only generate a comment.
  • When a datafile has no extents at all, I generate a resize to 5MB. I would like to find the minimum possible size (without getting ORA-03214) but my tests do not yet validate what is documented in MOS. If anyone has an idea, please share.
  • There is probably a cheaper way to get that high water mark, because the ALTER statement returns ORA-03297 much more quickly. The information is probably available in the datafile headers, without going to the segment headers, but I don’t know if it is exposed in a safe way. If you have an idea, once again, please share.

Note that I’ve been using this query for quite a long time. I even think it was my first contribution to the Oracle community on the web, about 9 years ago, on the dba-village website. Since then my contributions have grown to forums, blogs, articles, presentations, … and tweets. Sharing is probably addictive 😉

Update May 18th, 2016

Thanks to #QueryScope by @SQLdep, here is the query visualisation:
[query visualisation image: column lineage graph of the HWM datafile query]
Nothing difficult here, is it?

Update 11-AUG-2018

In multitenant, the query above will display only the root’s datafiles, multiple times. Here is the query I use in multitenant to show all the CDB datafiles that can be resized:
https://github.com/FranckPachot/scripts/blob/master/administration/resize-datafiles.sql

44 Comments

  • Leruo Khabutlane says:

    This is great stuff, Franck!

    Thanks!
    Leruo

  • Pureprecepts says:

    Hello.
    I have a tablespace with 4 datafiles. The name is mail_ts.dbf. It had grown to over 90GB. I reviewed it with the application owner and discovered I could truncate a table. This reduced the used space to 4GB. The resize will not work due to the high water mark. How can I resize mail_ts.dbf to 4GB and use only one datafile?

  • Hi,

    Basically, you need to move (table move or index rebuild) the extents that are above the free space. You can check which table/index it is from dba_extents.

    Or much simpler – but it needs 2x the space – just move everything to another tablespace. This is the only solution if you want to reduce the number of datafiles rather than just reduce their size.
    Regards,
    Franck.
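    [Editor's note] A sketch of such a check from dba_extents, with hypothetical example values (file_id 8, everything above the 4GB offset, assuming an 8k block size):

    ```sql
    -- segments owning extents above the 4GB offset in file 8
    select distinct owner, segment_name, segment_type
    from dba_extents
    where file_id = 8
      and block_id + blocks - 1 > 4*1024*1024*1024 / 8192  -- offset in blocks
    order by owner, segment_name;
    ```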

  • Ralph says:

    Great script Franck! Just saved me a ton of ball ache.
    Cheers
    Ralph

  • Mark says:

    Very useful information! Thank you so much!

    I have a doubt: should the following queries (1 and 2) return the same information?

    1) select max(ktfbuebno+ktfbueblks-1) hwm_blocks from sys.x$ktfbue where ktfbuefno = 8;

    2)select max(block_id + blocks-1) hwm_blocks from dba_extents where file_id=8;

    I am asking because I have seen several places saying that the second one could be used to find the datafile HWM, but I can’t resize my datafiles based on it (ORA-03297). From x$ktfbue I found a realistic number, but unfortunately I don’t know how to map it to database objects so I can move/shrink them to release free space and reduce fragmentation.

  • Hi Mark,
    DBA_EXTENTS doesn’t show extents from tables in recyclebin. That may be the reason for the difference with querying directly x$ktfbue.
    The mapping to objects is done by ktfbuesegfno,ktfbuesegbno which map to HEADER_FILE, HEADER_BLOCK in DBA_SEGMENTS.
    Regards,
    Franck.
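    [Editor's note] For reference, a sketch of that mapping, joining on the segment header address (with a hypothetical relative file number 8):

    ```sql
    -- map extents found in x$ktfbue back to their owning segments
    select distinct s.owner, s.segment_name, s.segment_type
    from sys.x$ktfbue e
    join dba_segments s
      on  s.header_file  = e.ktfbuesegfno
      and s.header_block = e.ktfbuesegbno
    where e.ktfbuefno = 8;
    ```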

  • Laurent B. says:

    Hi Franck,

    I had a SQL script I used to reduce some datafiles, and I lost it.

    I found your page and really, it is excellent. Simple, quick and efficient, I like it.

  • Thanks a lot Laurent for your feedback.
    A good way to avoid losing your SQL scripts is to put them in a blog post 😉
    Regards,
    Franck.

  • Rahul H says:

    Hi Frank,

    Your query is excellent, but it only generates resize statements for datafiles which have autoextend on. I tried to modify it for all datafiles, but it is still only returning the autoextend-on datafiles.

    Can you help me modify your query for all datafiles?

    Regards,
    Rahul

  • Hi Rahul,
    Thanks for your feedback.
    It’s on purpose that I generate resize statements only for autoextensible files that can grow back to their original size, to avoid the risk of errors or monitoring alerts. The statements are commented out when the datafile is not autoextensible.
    So you should see them, but commented out.
    Regards,
    Franck

  • Rahul says:

    Thanks for the quick response Frank. There are around 663 datafiles in my database. I want to resize all the datafiles which have autoextend off, as they have too much free space. My plan is not to resize the datafiles which have autoextend on.

    I am trying to modify your query for this, but still unable to fetch the commands for the datafiles which are autoextend off.

    Can you help me out with modifying your code?

    Regards,
    Rahul

  • Rahul says:

    Hi Frank, I was able to modify your query for autoextend off files. Also, this one will exclude UNDO tablespace datafiles which we should not resize.

    =========

    set linesize 1000 pagesize 0 feedback off trimspool on
    with
    hwm as (
    -- get highest block id from each datafiles ( from x$ktfbue as we don't need all joins from dba_extents )
    select /*+ materialize */ ktfbuesegtsn ts#,ktfbuefno relative_fno,max(ktfbuebno+ktfbueblks-1) hwm_blocks
    from sys.x$ktfbue group by ktfbuefno,ktfbuesegtsn
    ),
    hwmts as (
    -- join ts# with tablespace_name
    select name tablespace_name,relative_fno,hwm_blocks
    from hwm join v$tablespace using(ts#)
    ),
    hwmdf as (
    -- join with datafiles, put 5M minimum for datafiles with no extents
    select file_name,nvl(hwm_blocks*(bytes/blocks),5*1024*1024) hwm_bytes,bytes,autoextensible,maxbytes
    from hwmts right join dba_data_files using(tablespace_name,relative_fno) where tablespace_name not like '%UNDO%'
    )
    select
    case when autoextensible='NO'
    then -- generate resize statements for datafiles with autoextend off
    '/* reclaim '||to_char(ceil((bytes-hwm_bytes)/1024/1024),999999)
    ||'M from '||to_char(ceil(bytes/1024/1024),999999)||'M */ '
    ||'alter database datafile '''||file_name||''' resize '||ceil(hwm_bytes/1024/1024)||'M;'
    else -- generate only a comment for datafiles with autoextend on
    '/* reclaim '||to_char(ceil((bytes-hwm_bytes)/1024/1024),999999)
    ||'M from '||to_char(ceil(bytes/1024/1024),999999)
    ||'M possible for autoextensible file '||file_name||' */'
    end SQL
    from hwmdf
    where
    bytes-hwm_bytes>1024*1024 -- resize only if at least 1MB can be reclaimed
    order by bytes-hwm_bytes desc
    /

  • Hi Rahul,
    Thanks for sharing.
    Regards,
    Franck.

  • j says:

    Incredible
    Thanks for sharing

    J

  • Syaifuddin says:

    Hi,

    Many thanks for sharing this! It saved me a lot of time!

    regards,
    Syaifuddin

  • asad_tashmatov says:

    Thank you, Franck!
    My headache is gone!

  • Andrew Limsk says:

    Excellent! I have seen several versions of similar scripts in the past but this is the most elegantly efficient of them

  • Rahul Singh says:

    This is indeed commendable work Frank.
    Something not many people would share that easily.

    Thank you!

  • Daniel Ne says:

    Thanks for sharing this SQL. It performed very well even in a very huge database.

  • Thanks Frank !

    I made some changes to your query:

    1. Add the same type of resizing for temporary files (based on sys.x$ktsso), mainly for 10g where we do not have the convenient ALTER TABLESPACE temp SHRINK SPACE;
    2. Wrap the whole thing in a PL/SQL block.

    declare
    sql_stm varchar2(300);
    BEGIN
    FOR enreg IN (
    with
    hwm as (
    -- get highest block id from each datafiles ( from x$ktfbue as we don't need all joins from dba_extents )
    select /*+ materialize */ ktfbuesegtsn ts#,ktfbuefno relative_fno,max(ktfbuebno+ktfbueblks-1) hwm_blocks
    from sys.x$ktfbue group by ktfbuefno,ktfbuesegtsn
    union
    select /*+ materialize */ KTSSOTSNUM ts#,KTSSORFNO relative_fno,max(KTSSOBNO+KTSSOBLKS-1) hwm_blocks
    from sys.x$KTSSO group by KTSSORFNO,KTSSOTSNUM
    ),
    hwmts as (
    -- join ts# with tablespace_name
    select name tablespace_name,relative_fno,hwm_blocks
    from hwm join v$tablespace using(ts#)
    ),
    hwmdf as (
    -- join with datafiles, put 5M minimum for datafiles with no extents
    select file_name,nvl(hwm_blocks*(bytes/blocks),5*1024*1024) hwm_bytes,bytes,autoextensible,maxbytes ,'DATA' content
    from hwmts right join dba_data_files using(tablespace_name,relative_fno)
    union
    select file_name,nvl(hwm_blocks*(bytes/blocks),5*1024*1024) hwm_bytes,bytes,autoextensible,maxbytes, 'TEMP' content
    from hwmts right join dba_temp_files using(tablespace_name,relative_fno)
    --where file_name like '%undo%'
    )
    select
    case when autoextensible='YES' and maxbytes>=bytes
    then -- we generate resize statements only if autoextensible can grow back to current size
    '/* reclaim '||to_char(ceil((bytes-hwm_bytes)/1024/1024),999999)
    ||'M from '||to_char(ceil(bytes/1024/1024),999999)||'M */ '
    || case when hwmdf.content='DATA' then 'alter database datafile ''' else 'alter database tempfile ''' end||file_name||''' resize '||ceil(hwm_bytes/1024/1024)||'M'
    end SQL_EXECUTE
    from hwmdf
    where
    bytes-hwm_bytes>1024*1024 -- resize only if at least 1MB can be reclaimed
    order by bytes-hwm_bytes desc
    )
    LOOP
    begin
    sql_stm:=enreg.SQL_EXECUTE;
    dbms_output.put_line(sql_stm);
    EXECUTE IMMEDIATE sql_stm;
    exception
    when others then
    dbms_output.put_line('error while executing: '|| sql_stm);
    end;
    END LOOP;
    END;
    /

    Have a good day

    Amer

  • Hari Panamanna says:

    Amazing script, worked like a charm. Hats off to Frank!

    Thanks once again, Frank.

  • Nitin says:

    Beautiful script Franck. Just what the doctor ordered. Thanks a ton.

  • Amit says:

    Works like a charm and a very useful script.
    Many thanks Franck.

  • Raud Premananda says:

    Nice Script. Very useful. Thank you very much.

  • Srinivasan says:

    Do you have a script to check the same for TEMP files?

  • Tanel Poder says:

    Good stuff. I have a @trim_database.sql script that uses a similar approach, but it doesn’t query x$ (and it is also pretty slow).

    I never got to write the faster version myself – but you could just start from the “end” of datafile free space list X$KTFBFE.KTFBFEBNO for each file and walk backwards as long as the free chunks are contiguous. Stop when you see the first gap in free space ranges. Should be doable with an analytic function.

    X$KTFBUE has to go through every segment header (lots of single block reads!) – but X$KTFBFE only goes through each file (potentially reads multiple blocks from each file if there are more LMT bitmap blocks scattered around the file). But still in a large partitioned database this could mean a few thousand single block reads instead of millions.
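    [Editor's note] A rough, untested sketch of this idea follows (assuming the x$ktfbfe columns ktfbfetsn, ktfbfefno, ktfbfebno, ktfbfeblks, and without checking that the trailing free run really reaches the end of the file, which would need a join to dba_data_files):

    ```sql
    -- start of the contiguous free-space run at the tail of each file;
    -- the high water mark would be the block just before it
    select tsn, fno, min(bno) tail_start
    from (
      select tsn, fno, bno,
             -- running count of breaks in free space, from the end of the file downwards
             sum(case when next_bno is null or bno + blks = next_bno then 0 else 1 end)
               over (partition by tsn, fno order by bno desc) brk
      from (
        select ktfbfetsn tsn, ktfbfefno fno, ktfbfebno bno, ktfbfeblks blks,
               lead(ktfbfebno) over (partition by ktfbfetsn, ktfbfefno order by ktfbfebno) next_bno
        from sys.x$ktfbfe
      )
    )
    where brk = 0
    group by tsn, fno;
    ```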

  • Saku says:

    saved me from lot of troubles. thank you

  • Praveen says:

    Excellent script. It really saved a lot of time. Thanks a lot 🙂

  • Som Raj Rai says:

    Hi,
    It would be great if you could respond. I have been working as an Oracle DBA, but recently I came across a weird scenario. A datafile in a tablespace of my Oracle 12c database is full, with autoextend on, but when I tried to extend the datafile to 4 GB it didn’t show the increased maxsize for my tablespace. What could be the issue, or is there something wrong with the database? Other datafiles, when extended, do increase my tablespace. It is very weird to me. Could you explain the possibilities? Thanks

  • Baba Fakruddin says:

    Thanks just awesome, It works for me Excellent. Thanks for sharing Franck

  • Baba Fakruddin says:

    Thanks Franck for your efforts , it is very helpful and time saver.

  • Rosel says:

    Thanks for the script. Great article. It will help me a lot.

  • Raju says:

    Thanks. Nice useful script.

  • Erick says:

    Hello Frank,
    I’m starting as a DBA and for now I am just checking how our database is configured, as it was all set up a few years ago.
    Just a couple of days ago I ran a query to see the tablespaces (ASM is used) and found that all are autoextend=NO (I have no idea why they are that way), so your query results in just a comment to resize each datafile by xxx MB.
    Last month we ran out of space in a tablespace and our remote support resized some datafiles, so I have 2 questions, hoping you can give some light or point me to some reading:
    – a reason why autoextend was not used? Would it be advisable to resize the datafiles as per your query?
    – our tablespaces have 10+ datafiles, all different in size; the DBA resized only 2 of them to 8 GB and left all the others as they were. Should we resize all to the same size? I don’t see any logic behind why he resized only 2, and how he chose which files.

    thanks in advance

  • Javed says:

    Very useful script! Thank you.

  • Jan Schnackenberg says:

    The usefulness of this post cannot be overstated.

    Unfortunately, I have a problem with my Multitenant-environment (Oracle 19.5).

    In this database the x$ktfbue fixed table does not contain any rows for the PDB-datafiles. This of course causes the CDB-version of your script to basically try to resize all datafiles to 5MB, which will of course fail.

    Did you ever experience this issue?

  • Saqib Munir says:

    That’s what I wanted,
    great query,
    thank you very much.

  • Danijel says:

    This…. this… this…. THIS IS GOLD!
    Thank you Franck for such a great contribution, you made my life as a part-time DBA way simpler!
    Just one more question:
    Is it possible that shrinking datafiles this way may result in poorer database performance?
    I just want to know if there’s a possibility that I affect database performance by shrinking (using your script; I have autoextend on, of course).

    Thanks !

  • Franck Pachot says:

    Hi Danijel,
    Thanks a lot for the feedback. Happy that it helps.
    I don’t think there can be any performance issue, except if a large load has to extend the file again.
    Franck.

  • Linh says:

    Hi Franck,

    I have 2 datafiles for one tablespace. The 2nd file is also autoextensible, but your script only shows the 1st datafile of the tablespace. Can you explain? Thanks,

  • Franck Pachot says:

    Hi Linh, no idea. Feel free to improve the script.


Franck Pachot

Principal Consultant / Database Evangelist