Monday, August 27, 2012

CRS-4535: Cannot communicate with Cluster Ready Services


Environment:
Enterprise Linux Enterprise Linux Server release 5.5 (Carthage)
Oracle Grid Infrastructure 11.2.0.1
Oracle database server 11.2.0.1


> crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

ocrcheck succeeded on the first node but failed on the second.

[root@atlracp01 crsd]# ocrcheck
Status of Oracle Cluster Registry is as follows :
          Version                  :          3
          Total space (kbytes)     :     262120
          Used space (kbytes)      :       2392
          Available space (kbytes) :     259728
          ID                       : 1786641452
          Device/File Name         : /u01_CRS/ocr/ocr1
                                    Device/File integrity check succeeded
          Device/File Name         : /u02_CRS/ocr/ocr2
                                    Device/File integrity check succeeded
          Device/File Name         : /u03_CRS/ocr/ocr3
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

          Cluster registry integrity check succeeded

          Logical corruption check succeeded


[root@atlracp02 ~]# ocrcheck
PROT-602: Failed to retrieve data from the cluster registry
PROC-26: Error while accessing the physical storage Operating System error [No such file or directory] [2]


So the OCR itself is good and there is no corruption. I checked the permissions, ownership, etc., and they looked fine and identical on both nodes of the cluster. But one of the nodes clearly had an issue.
Since the registry is healthy, there is no need to restore the OCR from backups.
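The permission and ownership check mentioned above can be done with a quick loop over the three configured OCR files (paths taken from the ocrcheck output); run it on both nodes and diff the results:

```shell
# Print owner:group, mode, and name for each configured OCR file.
# Paths are the ones reported by ocrcheck on the healthy node.
for f in /u01_CRS/ocr/ocr1 /u02_CRS/ocr/ocr2 /u03_CRS/ocr/ocr3; do
  stat -c '%U:%G %a %n' "$f" 2>/dev/null || echo "$f: not accessible"
done
```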

Checking the crsd.log from $GI_HOME/log/<host_name>/crsd

[root@atlracp02 crsd]# tail -f crsd.log
2012-08-24 13:58:23.903: [  OCROSD][538518096]utopen:6m'': OCR location [/u01_CRS/ocr/ocr1] configured is not a valid storage type. Rturn code [37].
2012-08-24 13:58:23.903: [  OCROSD][538518096]utopen:7:failed to open any OCR file/disk, errno=9, os err string=Bad file descriptor
2012-08-24 13:58:23.903: [  OCRRAW][538518096]proprinit: Could not open raw device
2012-08-24 13:58:23.903: [ default][538518096]a_init:7!: Backend init unsuccessful : [26]
2012-08-24 13:58:24.908: [  OCROSD][538518096]NFS file system /u01_CRS/ocr mounted with incorrect options
2012-08-24 13:58:24.909: [  OCROSD][538518096]WARNING:Expected NFS mount options: wsize>=32768,rsize>=32768,hard,(noac | actimeo=0 | acregmin=0,acregmax=0,acdirmin=0,acdirmax=0)
2012-08-24 13:58:24.909: [  OCROSD][538518096]utopen:6m'': OCR location [/u01_CRS/ocr/ocr1] configured is not a valid storage type. Rturn code [37].
2012-08-24 13:58:24.909: [  OCROSD][538518096]utopen:7:failed to open any OCR file/disk, errno=9, os err string=Bad file descriptor
2012-08-24 13:58:24.909: [  OCRRAW][538518096]proprinit: Could not open raw device
2012-08-24 13:58:24.909: [ default][538518096]a_init:7!: Backend init unsuccessful : [26]
2012-08-24 13:58:25.911: [  OCROSD][538518096]NFS file system /u01_CRS/ocr mounted with incorrect options
2012-08-24 13:58:25.911: [  OCROSD][538518096]WARNING:Expected NFS mount options: wsize>=32768,rsize>=32768,hard,(noac | actimeo=0 | acregmin=0,acregmax=0,acdirmin=0,acdirmax=0)
2012-08-24 13:58:25.911: [  OCROSD][538518096]utopen:6m'': OCR location [/u01_CRS/ocr/ocr1] configured is not a valid storage type. Rturn code [37].
2012-08-24 13:58:25.911: [  OCROSD][538518096]utopen:7:failed to open any OCR file/disk, errno=9, os err string=Bad file descriptor
2012-08-24 13:58:25.911: [  OCRRAW][538518096]proprinit: Could not open raw device
2012-08-24 13:58:25.911: [ default][538518096]a_init:7!: Backend init unsuccessful : [26]


Checking mount options on the failing node.

[root@atlracp02 ~]# cat /etc/fstab|grep -i ocr
atlprod01-node2:/vol/oracle_crs0labpp /u01_CRS/ocr nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 0 0
atlprod01-node1:/vol/oracle_crs1labpp /u02_CRS/ocr nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 0 0
atlprod01-node2:/vol/oracle_crs2labpp /u03_CRS/ocr nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 0 0

[root@atlracp02 ~]# cat /proc/mounts |grep -i ocr
atlprod01-node2:/vol/oracle_crs0labpp /u01_CRS/ocr nfs rw,vers=3,rsize=65536,wsize=65536,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=atlprod01-node2 0 0
atlprod01-node1:/vol/oracle_crs1labpp /u02_CRS/ocr nfs rw,vers=3,rsize=65536,wsize=65536,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=atlprod01-node1 0 0
atlprod01-node2:/vol/oracle_crs2labpp /u03_CRS/ocr nfs rw,vers=3,rsize=65536,wsize=65536,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=atlprod01-node2 0 0
atlprod01-node1:/vol/oracle_crs1labpp/.snapshot /u02_CRS/ocr/.snapshot nfs rw,vers=3,rsize=65536,wsize=65536,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=atlprod01-node1 0 0

The fstab entries look fine and match the healthy node. But compare them against /proc/mounts: the live mounts are using rsize/wsize=65536 and carry none of the attribute-caching options (noac, actimeo=0, or the acregmin/acregmax/acdirmin/acdirmax=0 set) that the crsd.log warning lists as required. The volumes were evidently mounted before those fstab options were in place. After exhausting all other options/checks, decided to unmount and remount the OCR volumes on the failing node. Note that this is a NetApp NFS cluster file system.
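The crsd.log warning above spells out the NFS options CRS requires on an OCR mount. A small sketch of checking one /proc/mounts line against them; the sample line is an abridged copy of the failing mount from this node:

```shell
# Extract the option field from a /proc/mounts line and test whether
# attribute caching is disabled (noac, actimeo=0, or acregmin=0 etc.),
# which is what crsd insists on for an OCR location.
line='atlprod01-node2:/vol/oracle_crs0labpp /u01_CRS/ocr nfs rw,vers=3,rsize=65536,wsize=65536,hard,proto=tcp,timeo=600,retrans=2,sec=sys 0 0'
opts=$(printf '%s\n' "$line" | awk '{print $4}')
case ",$opts," in
  *,noac,*|*,actimeo=0,*|*,acregmin=0,*)
    echo "attribute caching disabled: OK" ;;
  *)
    echo "no noac/actimeo=0: crsd will reject this OCR location" ;;
esac
```

Run against the live line above, it prints the rejection branch; against the post-remount line (which carries the ac* options) it prints OK.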


[root@atlracp02 crsd]# df -Ph|grep -i ocr
atlprod01-node2:/vol/oracle_crs0labpp  1.0G  4.7M 1020M   1% /u01_CRS/ocr
atlprod01-node1:/vol/oracle_crs1labpp  1.0G  4.7M 1020M   1% /u02_CRS/ocr
atlprod01-node2:/vol/oracle_crs2labpp  1.0G  4.7M 1020M   1% /u03_CRS/ocr

[root@atlracp02 crsd]# umount /u01_CRS/ocr

[root@atlracp02 crsd]# df -Ph|grep -i ocr
atlprod01-node1:/vol/oracle_crs1labpp  1.0G  4.7M 1020M   1% /u02_CRS/ocr
atlprod01-node2:/vol/oracle_crs2labpp  1.0G  4.7M 1020M   1% /u03_CRS/ocr

[root@atlracp02 crsd]# mount /u01_CRS/ocr

[root@atlracp02 crsd]# df -Ph|grep -i ocr
atlprod01-node1:/vol/oracle_crs1labpp  1.0G  4.7M 1020M   1% /u02_CRS/ocr
atlprod01-node2:/vol/oracle_crs2labpp  1.0G  4.7M 1020M   1% /u03_CRS/ocr
atlprod01-node2:/vol/oracle_crs0labpp  1.0G  4.7M 1020M   1% /u01_CRS/ocr

The /proc/mounts entry for the re-mounted volume now matched the fstab entry: rsize/wsize dropped to 32768 and the acregmin/acregmax/acdirmin/acdirmax=0 options appeared.

[root@atlracp02 crsd]# cat /proc/mounts |grep -i ocr
atlprod01-node1:/vol/oracle_crs1labpp /u02_CRS/ocr nfs rw,vers=3,rsize=65536,wsize=65536,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=atlprod01-node1 0 0
atlprod01-node2:/vol/oracle_crs2labpp /u03_CRS/ocr nfs rw,vers=3,rsize=65536,wsize=65536,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=atlprod01-node2 0 0
atlprod01-node2:/vol/oracle_crs0labpp /u01_CRS/ocr nfs rw,vers=3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=atlprod01-node2 0 0
Proceeded to unmount and remount the remaining two OCR volumes.

[root@atlracp02 crsd]# umount /u02_CRS/ocr
[root@atlracp02 crsd]# mount /u02_CRS/ocr
[root@atlracp02 crsd]# cat /proc/mounts |grep -i ocr
atlprod01-node2:/vol/oracle_crs2labpp /u03_CRS/ocr nfs rw,vers=3,rsize=65536,wsize=65536,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=atlprod01-node2 0 0
atlprod01-node2:/vol/oracle_crs0labpp /u01_CRS/ocr nfs rw,vers=3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=atlprod01-node2 0 0
atlprod01-node1:/vol/oracle_crs1labpp /u02_CRS/ocr nfs rw,vers=3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=atlprod01-node1 0 0


[root@atlracp02 crsd]# umount /u03_CRS/ocr
umount: /u03_CRS/ocr: device is busy
umount: /u03_CRS/ocr: device is busy

[root@atlracp02 crsd]# umount -f /u03_CRS/ocr
umount2: Device or resource busy
umount: /u03_CRS/ocr: device is busy
umount2: Device or resource busy
umount: /u03_CRS/ocr: device is busy
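Before falling back to a lazy unmount, it is worth finding out what keeps the mount busy. fuser -vm /u03_CRS/ocr or lsof would tell you; where neither is installed, a plain /proc walk does much the same (a sketch; needs root to see other users' file descriptors):

```shell
# List processes holding open files under the busy mount point.
mnt=/u03_CRS/ocr
for pid in /proc/[0-9]*; do
  for fd in "$pid"/fd/*; do
    tgt=$(readlink "$fd" 2>/dev/null) || continue
    case "$tgt" in
      "$mnt"|"$mnt"/*) echo "pid ${pid##*/} holds $tgt" ;;
    esac
  done
done
```

Note this only catches open file descriptors; fuser also reports working directories and memory-mapped files, which this sketch does not.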

Since the forced unmount also failed, tried a lazy unmount, which detaches the mount point immediately and defers the cleanup until it is no longer busy.

[root@atlracp02 crsd]# umount -l /u03_CRS/ocr

It worked.

[root@atlracp02 crsd]#  cat /proc/mounts |grep -i ocr
atlprod01-node2:/vol/oracle_crs0labpp /u01_CRS/ocr nfs rw,vers=3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=atlprod01-node2 0 0
atlprod01-node1:/vol/oracle_crs1labpp /u02_CRS/ocr nfs rw,vers=3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=atlprod01-node1 0 0

[root@atlracp02 crsd]# mount /u03_CRS/ocr

[root@atlracp02 crsd]# cat /proc/mounts |grep -i ocr
atlprod01-node2:/vol/oracle_crs0labpp /u01_CRS/ocr nfs rw,vers=3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=atlprod01-node2 0 0
atlprod01-node1:/vol/oracle_crs1labpp /u02_CRS/ocr nfs rw,vers=3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=atlprod01-node1 0 0
atlprod01-node2:/vol/oracle_crs2labpp /u03_CRS/ocr nfs rw,vers=3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=atlprod01-node2 0 0


Now, stopped and started CRS on the failing node and it came back fine.

> crsctl stop crs
> crsctl start crs

> crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online



CRS-4530: Communications failure contacting Cluster Synchronization Services daemon




Environment:
Oracle Grid Infrastructure 11.2.0.1
Oracle database server 11.2.0.1

> crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager


> crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        OFFLINE OFFLINE
ora.crsd
      1        ONLINE  INTERMEDIATE raclr41
ora.cssd
      1        ONLINE  OFFLINE
ora.cssdmonitor
      1        ONLINE  ONLINE       raclr41
ora.ctssd
      1        ONLINE  OFFLINE
ora.diskmon
      1        OFFLINE OFFLINE
ora.drivers.acfs
      1        OFFLINE OFFLINE
ora.evmd
      1        ONLINE  OFFLINE
ora.gipcd
      1        ONLINE  ONLINE       raclr41
ora.gpnpd
      1        ONLINE  ONLINE       raclr41
ora.mdnsd
      1        ONLINE  ONLINE       raclr41

Tried to start ora.cssd manually

raclr41 | CRS | /home/oracle
> crsctl start res ora.cssd -init

It hung without responding. Checked the ocssd.log from another session ($GI_HOME/log/<host_name>/cssd):

2012-08-15 14:05:50.103: [ GIPCNET][1120729408]gipcmodNetworkProcessConnect: slos op  :  sgipcnTcpConnect
2012-08-15 14:05:50.103: [ GIPCNET][1120729408]gipcmodNetworkProcessConnect: slos dep :  No route to host (113)
2012-08-15 14:05:50.103: [ GIPCNET][1120729408]gipcmodNetworkProcessConnect: slos loc :  connect
2012-08-15 14:05:50.103: [ GIPCNET][1120729408]gipcmodNetworkProcessConnect: slos info:  addr '192.168.1.110:29850'
2012-08-15 14:05:50.103: [    CSSD][1120729408]clssscSelect: conn complete ctx 0x2aaaac09bae0 endp 0xa66
2012-08-15 14:05:50.103: [    CSSD][1120729408]clssnmeventhndlr: node(1), endp(0xa66) failed, probe((nil)) ninf->endp (0x100000a66) CONNCOMPLETE
2012-08-15 14:05:50.103: [    CSSD][1120729408]clssnmDiscHelper: raclr40, node(1) connection failed, endp (0xa66), probe(0x100000000), ninf->endp 0xa66
2012-08-15 14:05:50.103: [    CSSD][1120729408]clssnmDiscHelper: node 1 clean up, endp (0xa66), init state 0, cur state 0
2012-08-15 14:05:50.103: [GIPCXCPT][1120729408]gipcInternalDissociate: obj 0x11588660 [0000000000000a66] { gipcEndpoint : localAddr 'gipc://raclr41:68bf-1bc8-a218-974f#192.168.1.111#13372', remoteAddr 'gipc://raclr40:nm_raclr#192.168.1.110#29850', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, flags 0x8061a, usrFlags 0x0 } not associated with any container, ret gipcretFail (1)
2012-08-15 14:05:50.103: [GIPCXCPT][1120729408]gipcDissociateF [clssnmDiscHelper : clssnm.c : 3301]: EXCEPTION[ ret gipcretFail (1) ]  failed to dissociate obj 0x11588660 [0000000000000a66] { gipcEndpoint : localAddr 'gipc://raclr41:68bf-1bc8-a218-974f#192.168.1.111#13372', remoteAddr 'gipc://raclr40:nm_raclr#192.168.1.110#29850', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, flags 0x8061a, usrFlags 0x0 }, flags 0x0
2012-08-15 14:05:50.103: [    CSSD][1120729408]clssnmDiscEndp: gipcDestroy 0xa66
2012-08-15 14:05:50.111: [    CSSD][1108113728]clssnmvDHBValidateNCopy: node 1, raclr40, has a disk HB, but no network HB, DHB has rcfg 229086889, wrtcnt, 9907057, LATS 1513031694, lastSeqNo 9907057, uniqueness 1345052387, timestamp 1345053949/1513006814
2012-08-15 14:05:50.111: [    CSSD][1120729408]clssnmconnect: connecting to addr gipc://raclr40:nm_raclr#192.168.1.110#29850
2012-08-15 14:05:50.111: [    CSSD][1120729408]clssscConnect: endp 0xa72 - cookie 0x2aaaac09bae0 - addr gipc://raclr40:nm_raclr#192.168.1.110#29850
2012-08-15 14:05:50.111: [    CSSD][1120729408]clssnmconnect: connecting to node(1), endp(0xa72), flags 0x10002
2012-08-15 14:05:50.343: [    CSSD][1115998528]clssgmWaitOnEventValue: after CmInfo State  val 3, eval 1 waited 0
2012-08-15 14:05:50.391: [    CSSD][1112844608]clssnmvDHBValidateNCopy: node 1, raclr40, has a disk HB, but no network HB, DHB has rcfg 229086889, wrtcnt, 9907057, LATS 1513031974, lastSeqNo 9907057, uniqueness 1345052387, timestamp 1345053949/1513006814
2012-08-15 14:05:50.391: [    CSSD][1103583552]clssnmvDHBValidateNCopy: node 1, raclr40, has a disk HB, but no network HB, DHB has rcfg 229086889, wrtcnt, 9907057, LATS 1513031974, lastSeqNo 9907057, uniqueness 1345052387, timestamp 1345053949/1513006814
2012-08-15 14:05:51.115: [    CSSD][1108113728]clssnmvDHBValidateNCopy: node 1, raclr40, has a disk HB, but no network HB, DHB has rcfg 229086889, wrtcnt, 9907058, LATS 1513032704, lastSeqNo 9907058, uniqueness 1345052387, timestamp 1345053950/1513007814


> cat /etc/hosts |grep 192.168.1.110
192.168.1.110   raclr40ic raclr40ic.imanheim.com


That is the private interconnect IP of node raclr40.

Now to test the interconnect.

> ping 192.168.1.110
PING 192.168.1.110 (192.168.1.110) 56(84) bytes of data.
From 192.168.1.111 icmp_seq=2 Destination Host Unreachable
From 192.168.1.111 icmp_seq=3 Destination Host Unreachable
From 192.168.1.111 icmp_seq=4 Destination Host Unreachable

--- 192.168.1.110 ping statistics ---
6 packets transmitted, 0 received, +3 errors, 100% packet loss, time 4999ms
, pipe 3

So the interconnect interface was down. Engaged the system administrators to bring the interface back online, and that fixed the issue.
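For reference, the OS-side check and fix amount to something like the following. The interface name eth1 is a placeholder, since the post does not say which NIC carries the interconnect; the "up" step needs root.

```shell
# Check the interconnect NIC's operational state, try to bring it up,
# and re-test reachability of the peer's interconnect address.
dev=eth1   # placeholder: substitute the actual interconnect interface
state=$(cat /sys/class/net/"$dev"/operstate 2>/dev/null)
echo "$dev operstate: ${state:-interface not found}"
ip link set "$dev" up 2>/dev/null || echo "(bringing $dev up failed: needs root and a valid name)"
ping -c 3 192.168.1.110 || true   # re-test the interconnect address
```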

> crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....ER.lsnr ora....er.type ONLINE    ONLINE    raclr40
ora....N1.lsnr ora....er.type ONLINE    ONLINE    raclr40
ora....N2.lsnr ora....er.type ONLINE    ONLINE    raclr41
ora....N3.lsnr ora....er.type ONLINE    ONLINE    raclr41
ora.asm        ora.asm.type   OFFLINE   OFFLINE
ora....SM1.asm application    OFFLINE   OFFLINE
ora....18.lsnr application    ONLINE    ONLINE    raclr40
ora....418.gsd application    OFFLINE   OFFLINE
ora....418.ons application    ONLINE    ONLINE    raclr40
ora....418.vip ora....t1.type ONLINE    ONLINE    raclr40
ora....SM2.asm application    OFFLINE   OFFLINE
ora....19.lsnr application    ONLINE    ONLINE    raclr41
ora....419.gsd application    OFFLINE   OFFLINE
ora....419.ons application    ONLINE    ONLINE    raclr41
ora....419.vip ora....t1.type ONLINE    ONLINE    raclr41
ora.eons       ora.eons.type  ONLINE    ONLINE    raclr40
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    raclr40
ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE
ora.ons        ora.ons.type   ONLINE    ONLINE    raclr40
ora....ry.acfs ora....fs.type OFFLINE   OFFLINE
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    raclr40
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    raclr41
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    raclr41







Thursday, August 16, 2012

ORA-01752: cannot delete from view without exactly one key-preserved table



Environment:
Oracle database server 11.2.0.2

The reason is that you are trying to delete from a view instead of from the underlying base table.

I was testing a new scheduler job in Oracle that calls a shell script on the Linux server. As part of the test, I had to run the job manually with dbms_scheduler to verify the run was fine, and I got it working after some debugging and testing.
But the job run log was ugly: a mix of failures and successes. Since this was a new job, I wanted to purge the log entries for the failed runs so I would have a clean log going forward.

Error Message:

I first got the log_id for the failures from dba_scheduler_job_run_details.
Next, I attempted to delete the entries for the log_id from dba_scheduler_job_log.

SQL> delete from DBA_SCHEDULER_JOB_LOG  where log_id=123534;
delete from DBA_SCHEDULER_JOB_LOG  where log_id=123534
            *
ERROR at line 1:
ORA-01752: cannot delete from view without exactly one key-preserved table


Error Explanation:

> oerr ora 01752
01752, 00000, "cannot delete from view without exactly one key-preserved table"
// *Cause: The deleted table had
//         - no key-preserved tables,
//         - more than one key-preserved table, or
//         - the key-preserved table was an unmerged view.
// *Action: Redefine the view or delete it from the underlying base tables.


I knew the reason instantly. Next, find the base table behind the view.

SQL> set long 9999 pages 0 head off
SQL> select dbms_metadata.get_ddl('VIEW','DBA_SCHEDULER_JOB_LOG','SYS') from dual;
  CREATE OR REPLACE FORCE VIEW "SYS"."DBA_SCHEDULER_JOB_LOG" ("LOG_ID", "LOG_DAT
E", "OWNER", "JOB_NAME", "JOB_SUBNAME", "JOB_CLASS", "OPERATION", "STATUS", "USE
R_NAME", "CLIENT_ID", "GLOBAL_UID", "CREDENTIAL_OWNER", "CREDENTIAL_NAME", "DEST
INATION_OWNER", "DESTINATION", "ADDITIONAL_INFO") AS
  (SELECT
     LOG_ID, LOG_DATE, OWNER,
     DECODE(instr(e.NAME,'"'),0, e.NAME,substr(e.NAME,1,instr(e.NAME,'"')-1)),
     DECODE(instr(e.NAME,'"'),0,NULL,substr(e.NAME,instr(e.NAME,'"')+1)),
     co.NAME, OPERATION,e.STATUS, USER_NAME, CLIENT_ID, GUID,
     decode(e.credential, NULL, NULL,
        substr(e.credential, 1, instr(e.credential, '"')-1)),
     decode(e.credential, NULL, NULL,
        substr(e.credential, instr(e.credential, '"')+1,
           length(e.credential) - instr(e.credential, '"'))),
     decode(bitand(e.flags, 1), 0, NULL,
        substr(e.destination, 1, instr(e.destination, '"')-1)),
     decode(bitand(e.flags, 1), 0, e.destination,
        substr(e.destination, instr(e.destination, '"')+1,
           length(e.destination) - instr(e.destination, '"'))),
     ADDITIONAL_INFO
  FROM scheduler$_event_log e, obj$ co
  WHERE e.type# = 66 and e.dbid is null and e.class_id = co.obj#(+))


So, my base table is scheduler$_event_log.

I was able to delete the entries.

SQL> delete from scheduler$_event_log where LOG_ID=123534;

1 row deleted.

SQL> delete from scheduler$_event_log where LOG_ID=123435;

1 row deleted.

SQL> delete from scheduler$_event_log where LOG_ID=123215;

1 row deleted.

SQL> delete from scheduler$_event_log where LOG_ID=122574;

1 row deleted.

SQL> commit;

Commit complete.
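An aside: the documented way to purge Scheduler log entries is DBMS_SCHEDULER.PURGE_LOG. It removes entries by job name and age rather than by individual LOG_ID, so it could not selectively drop only the failed runs here, but for routine cleanup it avoids touching SYS base tables. A sketch for this job:

```sql
-- Purge all Scheduler log entries for this job via the supported API.
-- log_history => 0 keeps zero days of history, i.e. removes every entry
-- for the job, successes included -- hence the base-table route above
-- when only the failed rows should go.
BEGIN
  DBMS_SCHEDULER.PURGE_LOG(
    log_history => 0,
    which_log   => 'JOB_LOG',
    job_name    => 'DROP_OLDEST_PARTITION');
END;
/
```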


My log looks clean now. I have only the SUCCEEDED entries.

SQL> set lines 200
SQL> col JOB_NAME for a30
SQL> select log_id, JOB_NAME,STATUS from dba_scheduler_job_run_details where job_name='DROP_OLDEST_PARTITION';

    LOG_ID JOB_NAME                        STATUS
---------- ------------------------------  ------------------------------
    121810 DROP_OLDEST_PARTITION           SUCCEEDED
    121970 DROP_OLDEST_PARTITION           SUCCEEDED
    119031 DROP_OLDEST_PARTITION           SUCCEEDED
    119611 DROP_OLDEST_PARTITION           SUCCEEDED
    120310 DROP_OLDEST_PARTITION           SUCCEEDED