Sometimes Oracle Database 11gR2 complains that it cannot extend a (LOB) segment and raises ORA-1691, even though the ASM monitoring based on USABLE_FILE_MB never fired. Adding a new data file then fails with ORA-15041. This behaviour of ASM seemed to deserve some explanation.
Situation
Database Alert Log complains with ORA-1691:
ORA-1691: unable to extend lobsegment MYUSER.SYS_LOB0013128030C00003$$ by 128 in tablespace USERS
ORA-1691: unable to extend lobsegment MYUSER.SYS_LOB0013128030C00003$$ by 8192 in tablespace USERS
Ok, a quick look comparing dba_segments with the total size of the tablespace – it is indeed full.
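A check along these lines does the job; this is just a sketch with the standard dictionary views, not the exact query used at the time:

-- space allocated to segments in the tablespace
select round(sum(bytes)/1024/1024) as segments_mb
  from dba_segments
 where tablespace_name = 'USERS';

-- total size of the tablespace's datafiles
select round(sum(bytes)/1024/1024) as datafiles_mb
  from dba_data_files
 where tablespace_name = 'USERS';

With no room left in the existing datafiles, let's extend the tablespace with one more datafile: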
SQL> alter tablespace USERS add datafile size 2G autoextend on next 1G maxsize 32G;
alter tablespace USERS add datafile size 2G autoextend on next 1G maxsize 32G
*
ERROR at line 1:
ORA-01119: error in creating database file '+ORADATA'
ORA-17502: ksfdcre:4 Failed to create file +ORADATA
ORA-15041: diskgroup "ORADATA" space exhausted
Uh-oh, ORA-15041? But as I well know, the customer is monitoring FREE_MB and USABLE_FILE_MB in v$asm_diskgroup… Maybe something nasty is going on, so let's check whether ASM is busy rebalancing:
SQL> select * from v$asm_operation;
no rows selected
What’s the status of diskgroup +ORADATA? (I removed uninvolved ones)
ASMCMD [+] > lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512    4096  1048576   2506752   344640                0          344640              0             N  ORADATA/
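That is exactly what the monitoring looks at; a check on FREE_MB and USABLE_FILE_MB boils down to a query of roughly this shape (a sketch, not the customer's actual script):

SQL> select name, state, type, total_mb, free_mb, usable_file_mb from v$asm_diskgroup;

With 344640 MB reported in both FREE_MB and USABLE_FILE_MB, such a check will never raise an alarm.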
Status of the disks:
ASMCMD [+] > lsdsk -k
Total_MB  Free_MB    OS_MB  Name          Failgroup     Failgroup_Type  Library  Label  UDID  Product  Redund   Path
 2097152   344629  2097152  ORADATA_0000  ORADATA_0000  REGULAR         System                         UNKNOWN  /dev/ORADATA
  409600       11   409600  ORADATA_0001  ORADATA_0001  REGULAR         System                         UNKNOWN  /dev/ORADATA2
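The same picture is available from SQL*Plus; a per-disk query along these lines (again just a sketch) makes the imbalance obvious:

-- per-disk size and free space for diskgroup ORADATA
select d.name, d.path, d.total_mb, d.free_mb
  from v$asm_disk d, v$asm_diskgroup g
 where d.group_number = g.group_number
   and g.name = 'ORADATA';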
So what happens here?
The technical term is “imbalanced space distribution”. The diskgroup above uses EXTERNAL redundancy, and ASM stripes the data evenly across all disks of the diskgroup rather than just concatenating them, which is why the disks of a diskgroup should all have the same size. Here one disk has 2 TB and the other only 400 GB, so the smaller disk fills up first: with just 11 MB left on ORADATA_0001, ASM cannot allocate new, evenly striped extents, even though ORADATA_0000 still has about 336 GB free – and that remaining space is exactly what FREE_MB and USABLE_FILE_MB report at diskgroup level. Too bad.
A stuck rebalance operation could well have caused a similar problem, but as we have seen, this was not the case.
Of course, after working this out myself and now knowing what to look for, I found a pretty good MOS note explaining the whole thing:
ORA-15041 IN A DISKGROUP ALTHOUGH FREE_MB REPORTS SUFFICIENT SPACE (Doc ID 460155.1)
Real-life Solution
Not trivial here, since the storage backend was not able to supply us with LUNs bigger than 2 TB – and that seemed to be the initial reason for the unequal sizes: somebody had simply “extended” the initial 2 TB disk by adding a 400 GB LUN next to it…
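Before adding new LUNs it is worth verifying that ASM can actually see them as candidate disks; a sketch of such a check:

-- disks visible to ASM but not yet part of a diskgroup
select path, header_status, os_mb
  from v$asm_disk
 where header_status in ('CANDIDATE', 'PROVISIONED', 'FORMER');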
In this case, we added two new LUNs of 1.5 TB each and removed the old ones – all in one step:
SQL> alter diskgroup ORADATA add disk '/dev/ORADATA3','/dev/ORADATA4' drop disk ORADATA_0000,ORADATA_0001;
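The drop part only completes once ASM has rebalanced all data off the old disks, so the old LUNs must stay accessible until then. Progress can be watched in v$asm_operation, and the rebalance power can be raised if the system can take the extra I/O (the power value below is just an example, not what we used):

SQL> alter diskgroup ORADATA rebalance power 8;
SQL> select operation, state, power, sofar, est_work, est_minutes from v$asm_operation;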
By the way: that's one of my reasons to favour ASM: you can migrate to new LUNs (or even a whole new storage cluster) with a one-liner, and with the database fully online under full load.
I like it! 🙂
Martin