AskTom - Hot Questions

These are the questions on Ask Tom that have the most comments

The JSON query result does not return the full path

Mon, 01/18/2021 - 00:00
Hi, I'm trying to evaluate JSON queries. For example, I have the following JSON string in a table: <code>{ "WO": [ { "id": "1", "name": "WO1", "action": "add", "items": [ { "id": "1", "author": "Tom Ask" }, { "id": "2", "author": "Lee Frik" } ] }, { "id": "3", "name": "WO1", "action": "add", "items": [ { "id": "1", "author": "John A" }, { "id": "2", "author": "Jean Pontus" } ] } ] }</code> When I run <code> select d.wo_data.WO.items.id from workorder_json d;</code> I get the following result: <code>["Tom Ask","Lee Frik","John A","Jean Pontus"] </code> What I hoped for is a result that keeps the full path, like the following: <code>{ "WO": [ "items": [ { "author": "Tom Ask" }, { "author": "Lee Frik" } ], "items": [ { "author": "John A" }, { "author": "Jean Pontus" } ] ] } </code> Is there any way to return it like the above?
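One way to keep the surrounding structure is to unnest with JSON_TABLE and rebuild the document with JSON_OBJECT / JSON_ARRAYAGG. A minimal sketch, not a definitive answer, reusing the question's table and column names (workorder_json, wo_data):
<code>
SELECT JSON_OBJECT(
         'WO' VALUE JSON_ARRAYAGG(
                      JSON_OBJECT('items' VALUE wo.items FORMAT JSON)
                    )
       ) AS result
FROM   workorder_json d,
       JSON_TABLE(d.wo_data, '$.WO[*]'
         COLUMNS ( items VARCHAR2(4000) FORMAT JSON PATH '$.items' )
       ) wo;
</code>
Pointing the COLUMNS path at a narrower attribute (for example only the author fields) follows the same pattern.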

accent/case insensitive search

Wed, 01/06/2021 - 00:00
Hello TOM and Happy New Year! We have a table with a VARCHAR2(200) column that stores `names` with European accents: Valerie, Veronique, and so on. We would like to perform a CI/AI (case-insensitive, accent-insensitive) search on those values, i.e. people searching for "valerie"/"VALERIE" will still fetch "Valerie" and so on. I am thinking of several options: (1) - UTL_RAW.CAST_TO_VARCHAR2(NLSSORT(name , 'nls_sort=binary_ai')) = UTL_RAW.CAST_TO_VARCHAR2(NLSSORT(input_name , 'nls_sort=binary_ai')) But this is awfully slow in SQL (query response time 3-5x slower than without). (2) Call this: EXECUTE IMMEDIATE('ALTER SESSION SET NLS_COMP = LINGUISTIC'); EXECUTE IMMEDIATE('ALTER SESSION SET NLS_SORT = BINARY_AI'); But this brings me back to (1). (3) Create an invisible column called `curated_name` that is generated always as UTL_RAW.CAST_TO_VARCHAR2(NLSSORT(name , 'nls_sort=binary_ai')) But Oracle complains the function is not deterministic (which is true, as it depends on the language). (4) Create a column with collation BINARY_AI, but I've just discovered a new Oracle bug: 32335770 - COLLATION BINARY_AI NOT RETURNING ALL IDENTICAL ROWS (5) Oracle Text, but this requires careful maintenance. Any other ideas? I'm inclined to use (4), once the bug is fixed by Oracle. Thanks,
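For option (2), the session-level linguistic comparison can usually be made fast again with a matching linguistic function-based index, so NLSSORT is not re-evaluated row by row. A minimal sketch, assuming a hypothetical table people(name):
<code>
CREATE INDEX people_name_ai_ix
  ON people (NLSSORT(name, 'NLS_SORT=BINARY_AI'));

ALTER SESSION SET NLS_COMP = LINGUISTIC;
ALTER SESSION SET NLS_SORT = BINARY_AI;

-- matches Valerie, VALERIE, Valérie, ...
SELECT * FROM people WHERE name = 'valerie';
</code>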

MATERIALIZED VIEW Performance Issue!

Wed, 01/06/2021 - 00:00
I have created an MV on a UAT server. The MV query connects to PROD over a database link (SELECT-only rights) against tables that have millions of rows, around 10 lakh (1,000,000) each, but after calculation the query output is only 139-150 rows. The query alone, without the MV, takes 60 seconds, but when I use CREATE MATERIALIZED VIEW NOCOMPRESS NOLOGGING BUILD IMMEDIATE USING INDEX REFRESH FORCE ON DEMAND NEXT null USING DEFAULT LOCAL ROLLBACK SEGMENT USING ENFORCED CONSTRAINTS DISABLE QUERY REWRITE as "query", the MV creation takes one hour and after that each refresh takes 20-30 minutes, which is surely not acceptable, as this data is used for a dashboard that allows only a 3-minute delay, the window the MV refresh has to fit in. I don't have privileges to check anything on the prod DB, but on UAT I have sufficient access. I have tried many options but nothing worked, so please help me find a solution, or if there is no solution, the reason behind this. In addition, when my MV refreshes, the explain plan shows "INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO abc". Please help! I am really stuck here and have tried hard to resolve it or to find a reason I can explain to the relevant team. Things I have tried: 1. CREATE TABLE with the same query: it took less than a minute. 2. A plain INSERT also works fine, taking the same time. 3. MV refresh with atomic_refresh=>false, but it didn't help. Please let me know if you need any more info. Note: my MV query uses prod tables (approx 4 tables) over a DB link from UAT. The prod server has one separate user which has been given rights on the tables below: select count(*) from abc@prod; --800000 select count(*) from abc1@prod; --700000 select count(*) from abc2@prod; --200000
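Since CTAS over the same link is fast, one workaround worth testing is to build the segment yourself and register it as an MV with ON PREBUILT TABLE, which removes the one-hour build. A sketch, with a hypothetical name mv_dashboard and the question's query abbreviated to comments:
<code>
-- fast CTAS, as already observed in point 1
CREATE TABLE mv_dashboard AS
  SELECT /* the dashboard query */ * FROM abc@prod /* joins to abc1, abc2 ... */;

-- register the existing segment as the materialized view
CREATE MATERIALIZED VIEW mv_dashboard
  ON PREBUILT TABLE
  REFRESH FORCE ON DEMAND
  DISABLE QUERY REWRITE
  AS SELECT /* the same dashboard query */ * FROM abc@prod /* joins ... */;
</code>
A complete refresh still runs the INSERT path, so for the 20-30 minute refresh the next step is comparing the refresh-time plan (the BYPASS_RECURSIVE_CHECK insert often gets a different distributed plan) against the plain query's plan.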

Rebuild global index NOLOGGING vs LOGGING : Generation of the same amount of REDO

Tue, 12/15/2020 - 00:00
Hello, I'm busy rebuilding global indexes on a partitioned table and, to make the job faster, I'm using the PARALLEL + NOLOGGING clauses with the rebuild. My code is: <code>ALTER INDEX my_schema.my_global_index REBUILD PARALLEL 20 NOLOGGING;</code> The parallelism works. With the NOLOGGING clause, I expected my rebuild to generate a minimum of redo, i.e. only redo related to dictionary changes. But in practice, NOLOGGING seems to have no effect on redo generation: the rebuild generates the same amount of redo with NOLOGGING as with LOGGING, and the amount of redo is similar to the size of the index. <u>Statistic redo size of my session after rebuild NOLOGGING or LOGGING:</u> <code>SELECT sn.name, s.VALUE / 1024 / 1024 redo_size_mb FROM v$statname sn, v$mystat s, v$session se WHERE se.sid = s.sid AND s.statistic# = sn.statistic# AND sn.name = 'redo size';</code> NAME REDO_SIZE_MB ------------------------ redo size 577 <u>The index size:</u> <code>SELECT bytes / 1024 / 1024 index_size_mb FROM dba_segments WHERE owner = 'MY_SCHEMA' AND segment_name = 'MY_GLOBAL_INDEX'</code> INDEX_SIZE_MB ------------- 572 Do you have an idea which factors influence redo generation in NOLOGGING mode? Is there a way to avoid it? Thank you very much, Sebastien Tromme.
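NOLOGGING on a rebuild is ignored when the database (or the index's tablespace) is in FORCE LOGGING mode, which is common when a standby database is in place; that is the first factor worth checking. A quick sketch ('MY_INDEX_TBS' is a placeholder):
<code>
SELECT force_logging FROM v$database;

SELECT tablespace_name, force_logging
FROM   dba_tablespaces
WHERE  tablespace_name = 'MY_INDEX_TBS';
</code>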

Performance problem with access to data dictionary views on apex.oracle.com

Mon, 12/14/2020 - 00:00
Dear Support, I have a performance problem specific to apex.oracle.com, where I have installed the 'HR data' sample schema and then tried to execute a simple select on a data dictionary view in SQL Commands: <code>select * from SYS.USER_CONSTRAINTS; </code> returns no rows (because of a timeout!?). Explain returns ORA-01039: insufficient privileges on underlying objects of the view. What works is creating materialized views on queries that access the data dictionary views. Those views were tuned to run fast in my local Oracle VirtualBox with DB version 19.3.0.0.0 and in a cloud.oracle.com autonomous DB version 19.5.0.0.0 environment. But on apex.oracle.com the refresh in a scheduler job takes about 10 minutes for the following query. What can I do to make them run fast again? <code>SELECT /*+ RESULT_CACHE PARALLEL USE_MERGE(F FC SC) */ F.TABLE_NAME, F.OWNER, F.CONSTRAINT_NAME, FC.COLUMN_NAME, FC.POSITION, SC.COLUMN_ID, SC.NULLABLE, F.DELETE_RULE, F.DEFERRABLE, F.DEFERRED, F.STATUS, F.VALIDATED, F.R_CONSTRAINT_NAME, F.R_OWNER FROM SYS.USER_CONSTRAINTS F JOIN SYS.USER_CONS_COLUMNS FC ON F.OWNER = FC.OWNER AND F.CONSTRAINT_NAME = FC.CONSTRAINT_NAME AND F.TABLE_NAME = FC.TABLE_NAME JOIN SYS.USER_TAB_COLS SC ON SC.TABLE_NAME = F.TABLE_NAME AND SC.COLUMN_NAME = FC.COLUMN_NAME AND SC.HIDDEN_COLUMN = 'NO' AND F.CONSTRAINT_TYPE = 'R' AND F.OWNER = SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA') ; </code>

JSON Value - Oracle PL/SQL : Multiple Fields

Wed, 12/09/2020 - 00:00
I have a CLOB with the below sample entry: <code> "relist":[{"name":"XYZ","action":["Manager","Specific User List"],"flag":false}] </code> When I try to get name or flag using JSON_VALUE I am able to, as each is a single scalar field, but I want to get the value for action. If I try <code> select JSON_VALUE(JSON_CONTENT,'$.action')JSON_CONTENT from test </code> I get NULL. I read that JSON_VALUE returns only a single scalar value. Is there any workaround to get both values of action?
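JSON_VALUE returns a single scalar, so an array needs JSON_QUERY (the whole array as JSON text) or JSON_TABLE (one row per element). A sketch, assuming the sample fragment is stored as valid JSON in column JSON_CONTENT of table test:
<code>
-- whole array, as JSON text: ["Manager","Specific User List"]
SELECT JSON_QUERY(json_content, '$.relist[0].action') FROM test;

-- one row per array element
SELECT jt.action_value
FROM   test t,
       JSON_TABLE(t.json_content, '$.relist[*]'
         COLUMNS (
           NESTED PATH '$.action[*]'
           COLUMNS ( action_value VARCHAR2(100) PATH '$' )
         )
       ) jt;
</code>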

How frequently should I back up archive logs

Tue, 12/01/2020 - 00:00
I take a complete database backup once daily with the following command: <code> backup as compressed backupset database plus archivelog; delete noprompt obsolete; </code> Now I am planning to back up the archive logs every 5 minutes with the following command: <code> backup archivelog all; </code> My archivelog deletion policy is 'backed up 1 times to DISK', so there will not be duplicate backups of the same log files. I am doing this because I could not do a complete recovery of one of my databases due to a missing archive log. My question is: is this a good approach, or is there a better way to be able to do complete recovery? My RMAN settings are as follows: <code> configure controlfile autobackup on; configure backup optimization on; configure retention policy to redundancy 5; configure archivelog deletion policy to backed up 1 times to DISK; configure retention policy to recovery window of 7 days; </code>
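The approach is reasonable; one refinement is to add NOT BACKED UP so the 5-minute job does not re-read logs that are already in a backup set:
<code>
BACKUP ARCHIVELOG ALL NOT BACKED UP 1 TIMES;
</code>
Note also that the two CONFIGURE RETENTION POLICY lines overwrite each other; only the last one (recovery window of 7 days) stays in effect.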

How to find child or parent from data with only ID and Level

Mon, 11/30/2020 - 00:00
Hi Tom, I have this data: <code>select 1 LineNumber, 1 mylevel, 10 id from dual union all select 2 , 2 , 11 from dual union all select 3 , 3 , 13 from dual union all select 4 , 3 , 14 from dual union all select 5 , 4 , 15 from dual union all select 6 , 2 , 20 from dual union all select 7 , 2 , 30 from dual union all select 8 , 3 , 31 from dual union all select 9 , 4 , 33 from dual union all select 10 , 3 , 32 from dual union all select 11 , 3 , 34 from dual union all select 12 , 4 , 35 from dual union all select 13 , 5 , 36 from dual</code> I'm looking for a way to get: - the child ids, recursively, of id = 30; - the parent id of id = 30.
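In a depth-first listing like this, the parent of each row is the nearest preceding row whose level is one less; once that parent_id is derived, CONNECT BY handles the recursion. A sketch, assuming the sample rows live in a table t(linenumber, mylevel, id):
<code>
WITH parented AS (
  SELECT c.linenumber, c.mylevel, c.id,
         (SELECT MAX(p.id) KEEP (DENSE_RANK LAST ORDER BY p.linenumber)
          FROM   t p
          WHERE  p.linenumber < c.linenumber
          AND    p.mylevel    = c.mylevel - 1) AS parent_id
  FROM   t c
)
-- descendants of id = 30 (the row itself returned as the root)
SELECT id, parent_id
FROM   parented
START WITH id = 30
CONNECT BY PRIOR id = parent_id;
</code>
The parent of 30 is then simply SELECT parent_id FROM parented WHERE id = 30.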

Cannot read external table or read using utl_file.get_line on Windows 10 with Oracle 18c

Wed, 11/25/2020 - 00:00
I just upgraded to Oracle 18c XE from 11g. I have an external table that worked fine on 11g, but I keep getting the following errors on 18c. ORA-29913: error in executing ODCIEXTTABLEOPEN callout ORA-29400: data cartridge error KUP-11604: no directory object specified for log file The directory does exist and I have the correct grants. <code>SELECT * FROM all_directories WHERE directory_name = 'MYDIR';</code> OWNER DIRECTORY_NAME DIRECTORY_PATH ORIGIN_CON_ID SYS MYDIR C:\Users\sneuf\OneDrive\PLSQL\OracleDirectory 1 <code>SELECT * FROM all_tab_privs WHERE table_name = 'MYDIR' AND grantee = 'C##_SNEUF';</code> GRANTOR GRANTEE TABLE_SCHEMA TABLE_NAME PRIVILEGE GRANTABLE HIERARCHY COMMON TYPE INHERITED SYS C##_SNEUF SYS MYDIR EXECUTE YES NO NO DIRECTORY NO SYS C##_SNEUF SYS MYDIR READ YES NO NO DIRECTORY NO SYS C##_SNEUF SYS MYDIR WRITE YES NO NO DIRECTORY NO I'm pretty sure I'm missing a grant somewhere, but I can't figure out what. Here is my table: <code>CREATE TABLE C##_SNEUF.CHECKING_TBL_EXT2 ( DB_KEY NUMBER, CHECK_NUM VARCHAR2(10), TRANS_DATE TIMESTAMP (6), DESCRIPTION VARCHAR2(100), DEPOSIT_WITHDRAWAL VARCHAR2(1), AMOUNT VARCHAR2(12), MEMO VARCHAR2(200) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY MYDIR ACCESS PARAMETERS ( RECORDS DELIMITED BY NEWLINE BADFILE MYDIR: 'checking.bad' FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' LDRTRIM MISSING FIELD VALUES ARE NULL ( DB_key CHAR, check_num CHAR(10), trans_date CHAR(21) DATE_FORMAT DATE MASK 'MM/DD/YYYY HH24:MI:SS', description CHAR(100), deposit_withdrawal CHAR(1), amount CHAR(12), memo CHAR(200) ) ) LOCATION ( MYDIR: 'checking.csv' ) ) REJECT LIMIT UNLIMITED ;</code> Thanks, Steve
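KUP-11604 complains about the log file rather than the data file, so one thing worth trying is naming the log file explicitly in the ACCESS PARAMETERS (or suppressing it with NOLOGFILE); in a CDB, also verify the directory object exists in the container you are connected to. A sketch of the changed clause only, with "..." marking the unchanged remainder of the original DDL:
<code>
ACCESS PARAMETERS (
  RECORDS DELIMITED BY NEWLINE
  BADFILE MYDIR:'checking.bad'
  LOGFILE MYDIR:'checking.log'
  FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' LDRTRIM
  MISSING FIELD VALUES ARE NULL
  ...
)
</code>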

How to dynamically transpose data with changing column headers, with or without PIVOT/PIVOT XML?

Tue, 11/24/2020 - 00:00
This question is common, with no clear solution. I have 2 scenarios for dynamic pivoting, and the requested solution could use PIVOT XML (tried, but extracting from it is a task), dynamic SQL execution, or arrays if possible. Scenario 1: In the first case, I need to pivot the data where the data source is a single table. However, the pivot needs to be dynamic, as the column headers keep changing per the app_id column. So if app_id = 1, the column headers would be A,B,C,D; if app_id = 2, the columns would be C,D,E,F; and so on. Also, each set of values has an id. So for id = 120 and app_id = 1, columns A,B,C,D would display the values, and so on. The current sample data has only 2 app_ids, but there could be many more, so app_ids and labels keep changing; thus I need to write a dynamic query. Table is DATA_1: <code>ID label value app_id --- ----- ----- ------ 120 A Alpha 1 120 B Beta 1 120 C Class 1 120 D Delta 1 120 C Alpha 2 120 D Beta 2 120 E Class 2 120 F Delta 2</code> And the expected output would be something like this. P.S. The data is dummy; app_ids and ids keep getting inserted and removed, with changing column headers, so a dynamic solution is needed. <code>SELECT * FROM data WHERE ID = 120 AND app_id = 1; app_id A B C D ID ------ ------ ----- ----- ----- ----- 1 Alpha Beta Class Delta 120 SELECT * FROM data WHERE ID = 120 AND app_id = 2; app_id C D E F ID ------ ------ ----- ----- ----- ----- 2 Alpha Beta Class Delta 120</code> Scenario 2: Earlier we had all the data in one table. Now we have a separate table containing the labels, and the current table with the values. So from the table in scenario 1 we focus on the id, app_id and value columns only: Select app_id,id,value from data_1; The labels will come from another table, DATA_LABELS, in the column HEADER_NAME: <code>APP_ID SEQ HEADER_NAME 1 1 A 1 2 B 1 3 C 1 4 D 1 5 E 1 6 F 1 7 G 1 8 H 1 9 I 1 10 J 2 1 P 2 2 Q 2 3 R 2 4 S 2 5 T 2 6 U 2 7 V 2 8 W 2 9 X 2 10 Y</code> So, for labels we would use the header names, and DATA_1 and DATA_LABELS can be joined on app_id. In case the values exceed the column headers, we can assume the headers reach a maximum of, say, 20. And in case a header name is not available, it could use a default header name like COL11, COL12 if the values stretch up to there. I did a lot of research, but most solutions are too complex and confusing. Any leads would be appreciated.
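For both scenarios the usual pattern is two steps: build the PIVOT IN-list with dynamic SQL, then open the pivot query as a ref cursor. A sketch for scenario 1 against DATA_1 (app_id hardcoded to 1 here; it would be a parameter in practice); for scenario 2 the same block works with the IN-list built from DATA_LABELS.HEADER_NAME instead:
<code>
DECLARE
  l_cols VARCHAR2(4000);
  l_sql  VARCHAR2(32767);
  l_cur  SYS_REFCURSOR;
BEGIN
  -- build: 'A' AS "A", 'B' AS "B", ...
  SELECT LISTAGG('''' || label || ''' AS "' || label || '"', ', ')
         WITHIN GROUP (ORDER BY label)
  INTO   l_cols
  FROM   (SELECT DISTINCT label FROM data_1 WHERE app_id = 1);

  l_sql := 'SELECT * FROM (SELECT id, app_id, label, value FROM data_1) '
        || 'PIVOT (MAX(value) FOR label IN (' || l_cols || '))';

  OPEN l_cur FOR l_sql;   -- fetch / describe in the caller
END;
/
</code>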

Operations preserving row order

Tue, 11/24/2020 - 00:00
Hi Tom, a fan of your work. I have a question: are there any operations in Oracle that preserve row order? For example, can I expect that <code> select * from (select tag from test order by tag) </code> will return rows in sorted order? Or if a pipelined table function produces a dozen rows in a certain order, can I use "select * from table(f())" to see them in the same order? Will a cursor read rows from a pipelined function in the same order they are piped? Basically, I'm looking for exceptions to the general rule "any operation destroys row order".
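For reference, the only documented guarantee is an ORDER BY on the outermost query block; inline views, pipelined functions and cursors without it may happen to preserve order today and stop doing so after a plan change (parallelism being the classic trigger). So the safe pattern is to carry an ordering column and sort at the end:
<code>
SELECT * FROM TABLE(f()) ORDER BY seq;  -- seq: a hypothetical ordering column piped by f()
</code>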

How to extract specific tags from a CLOB column storing XML

Fri, 11/20/2020 - 00:00
I have a CLOB column that has different tags in it, like the example below. I am trying to get the comments tag of all the rows; one of them is returning null. I assume it is because it has the word "comments" more than once. This is the query I am using: <b>select d.d1_activity_id, dbms_lob.substr(d.bo_data_area, dbms_lob.getlength(d.bo_data_area), 1) as DCLOB, extractValue(xmlparse(content d.bo_data_area),'comments' ) AS comnt from d1_activity d where dbms_lob.instr(d.bo_data_area,'comments') > 0 </b> This is an example of the data we have in that column: <code><comments>C2M Test Exchange Meter</comments><instructions>C2M Test Exchange Meter</instructions><replyToExternalSystem>D1YS</replyToExternalSystem><retryDetails><numberOfRetries>0</numberOfRetries><isToDoEntrySuppressed>false</isToDoEntrySuppressed></retryDetails><contactDetails/><connectNewDevice>D1CN</connectNewDevice><oldDeviceId>061840493997</oldDeviceId> <isFieldActivityCompleted>D1NO</isFieldActivityCompleted><isAppointmentNecessary>N</isAppointmentNecessary><appointmentWindow/><comments>C2M Test for M-Exchange Orch to PragmaCad</comments><instructions>C2M Test for M-Exchange Orch to PragmaCad</instructions><isMeasurementFound>D1NO</isMeasurementFound><replyToExternalSystem>D1YS</replyToExternalSystem><retryDetails><numberOfRetries>0</numberOfRetries><isToDoEntrySuppressed>false</isToDoEntrySuppressed></retryDetails><allowParentTransition>true</allowParentTransition><overrideRestrictions>D1NA</overrideRestrictions><fieldWorkSystemAddress><address1>3456 BOWDEN CIR W</address1><address4>15305034560000&gt;&lt;193954</address4><crossStreet>6249</crossStreet><city>JACKSONVILLE</city><county>DUVAL</county><postal>32216</postal><country>USA</country><state>FL</state><geocodeLatitude>0.000000</geocodeLatitude><geocodeLongitude>0.000000</geocodeLongitude></fieldWorkSystemAddress><contactDetails/> <updateSpecificActivity>D1YS</updateSpecificActivity><updateableItems><comments>Editing comments</comments><instructions>Editing comments</instructions><startDateTime>2020-10-27-00.00.00</startDateTime></updateableItems><isAppointmentNecessary>N</isAppointmentNecessary><appointmentWindow/><allowParentTransition>true</allowParentTransition><replyToExternalSystem>D1YS</replyToExternalSystem><retryDetails><numberOfRetries>0</numberOfRetries><isToDoEntrySuppressed>false</isToDoEntrySuppressed></retryDetails> </code>
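One likely cause of the null is that extractValue expects a single node, while this column holds a fragment with several top-level elements and repeated <comments> tags. XMLTABLE over //comments returns every occurrence, one row each. A sketch, parsing the CLOB as content so the fragment is accepted:
<code>
SELECT d.d1_activity_id, x.comment_text
FROM   d1_activity d,
       XMLTABLE('//comments'
         PASSING XMLPARSE(CONTENT d.bo_data_area WELLFORMED)
         COLUMNS comment_text VARCHAR2(4000) PATH 'text()') x
WHERE  dbms_lob.instr(d.bo_data_area, 'comments') > 0;
</code>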

Extracting data elements from Json within PL/SQL

Wed, 11/11/2020 - 00:00
I am building a package procedure that will accept a string of JSON as the input. In SQL, I can select a single value from the JSON object, but using SELECT x INTO var FROM ... in PL/SQL throws an error: <code> ORA-06550: line 10, column 12: PL/SQL: ORA-19200: Invalid column specification </code> <code> with jdata as ( select treat('{"practitionerId":12345,"gender":"F"}' as json) as jrow from dual ) select j.jrow.practitionerId from jdata j; declare p_input_json varchar2(32767) := '{"practitionerId":12345,"gender":"F"}'; p_practitioner int; begin with jdata as ( select treat(p_input_json as json) as jrow from dual ) select j.jrow.practitionerId into p_practitioner from jdata j; end; / </code> After several hours of searching the docs and ye olde internet, I cannot find a method of extracting scalars from JSON in PL/SQL. Can you point me in the right direction?
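JSON_VALUE works here, both in a SQL query issued from PL/SQL and via the PL/SQL JSON object types. A minimal sketch of both routes:
<code>
DECLARE
  p_input_json   VARCHAR2(32767) := '{"practitionerId":12345,"gender":"F"}';
  p_practitioner INT;
BEGIN
  -- via SQL
  SELECT JSON_VALUE(p_input_json, '$.practitionerId' RETURNING NUMBER)
  INTO   p_practitioner
  FROM   dual;

  -- via the PL/SQL object API (12.2+)
  p_practitioner := JSON_OBJECT_T.parse(p_input_json).get_Number('practitionerId');
END;
/
</code>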

RMAN restore performance on ASM

Thu, 11/05/2020 - 00:00
Environment: RHEL 6.9, udev ASM disks, DB 11.2.0.4. Scenario A: MEMORY_TARGET=4GB, PGA_TARGET=1GB, with default _backup_disk/file_bufsz/cnt values, a 17 TB full restore completed in 42 hours, with approximately 21 MB of PGA used per channel. <code>COMPONENT CURRENT_SIZE MIN_SIZE USER_SPECIFIED_SIZE TYPE ---------------------------------------------------------------- ------------ ---------- ------------------- ------------- shared pool 656 528 0 GROW large pool 416 416 0 STATIC java pool 64 64 0 STATIC streams pool 0 0 0 STATIC SGA Target 3072 3072 0 STATIC DEFAULT buffer cache 1888 1888 0 SHRINK KEEP buffer cache 0 0 0 STATIC RECYCLE buffer cache 0 0 0 STATIC DEFAULT 2K buffer cache 0 0 0 STATIC DEFAULT 4K buffer cache 0 0 0 STATIC DEFAULT 8K buffer cache 0 0 0 STATIC DEFAULT 16K buffer cache 0 0 0 STATIC DEFAULT 32K buffer cache 0 0 0 STATIC Shared IO Pool 0 0 0 STATIC PGA Target 1024 1024 1024 STATIC ASM Buffer Cache 0 0 0 STATIC 16 rows selected. PARAMETER VALUE DESCRIPTION Default? --------------------- ------------------ -------------------------------------------------- ---------- _pga_max_size 209715200 Maximum size of the PGA memory for one process TRUE _smm_max_size 102400 maximum work area size in auto mode (serial) TRUE _smm_px_max_size 524288 maximum work area size in auto mode (global) TRUE pga_aggregate_target 1073741824 Target size for the aggregate PGA memory consumed FALSE by the instance KSPPINM KSPPSTVL KSPPDESC ------------------------------ --------------- ------------------------------------------------------- _backup_disk_io_slaves 0 BACKUP Disk I/O slaves _backup_ksfq_bufcnt_max 64 maximum number of buffers used for backup/restore _backup_ksfq_bufsz 0 size of buffers used for backup/restore _backup_ksfq_bufcnt 0 number of buffers used for backup/restore _backup_disk_bufsz 0 size of buffers used for DISK channels _backup_disk_bufcnt 0 number of buffers used for DISK channels _backup_file_bufsz 0 size of buffers used for file access _backup_file_bufcnt 0 ...

SQL Developer drops connection to Oracle Database 19c when trying to edit PL/SQL code

Tue, 11/03/2020 - 00:00
Dear Colleagues, I failed to find a solution to my problem on the Internet and kindly ask for your assistance. A production database was migrated from Oracle 11 to Oracle 19. Middleware and the entire production system work fine. Allround Automations' PL/SQL Developer works as usual. However, Oracle SQL Developer drops the connection any time I try to open a PL/SQL object (function, procedure, trigger, package) for viewing/editing. Select from user_source works fine. I have v17 and v19 SQL Developer on two PCs; the first one informs me that the connection was reset, the second one pops up the login panel. In fact they both drop the connection. Both can select from the dba_source view, but the actual SQL query sent to the database closes the connection: <code> WITH src AS (SELECT ROWNUM,LINE,TEXT,origin_con_id FROM SYS.Dba_SOURCE WHERE TYPE = :TYPE AND OWNER = :OWNER AND NAME = :NAME) SELECT text FROM src, (SELECT max(origin_con_id) max_orig FROM src) WHERE origin_con_id = max_orig ORDER BY LINE ;</code> Parts of the above query work fine, but not the entire query. Please advise. Sincerely, Yury PS: Tried to create a test procedure from a script. Created successfully; same behaviour when trying to view/edit it.
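As a workaround while the root cause is investigated (the failing statement is SQL Developer's own, so this looks like a server-side issue with that exact query shape), the source can be pulled without opening the object in the editor; object names below are placeholders:
<code>
SELECT DBMS_METADATA.GET_DDL('PROCEDURE', 'MY_PROC', 'MY_SCHEMA') FROM dual;

SELECT text
FROM   dba_source
WHERE  owner = 'MY_SCHEMA' AND name = 'MY_PROC' AND type = 'PROCEDURE'
ORDER  BY line;
</code>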

XML vs JSON

Tue, 11/03/2020 - 00:00
Hi, I'm receiving data from a 3rd-party vendor through an API. Which data format (XML or JSON) is easier to read and insert into a DB table? I have to extract the values from the XML/JSON using PL/SQL. Which one is best for extracting values, and which one performs better? Could you also suggest which functions are best for this? Regards, Praveen
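For scalar extraction, the JSON path functions are typically terser than their XML counterparts; a side-by-side sketch with inline sample data:
<code>
-- JSON
SELECT JSON_VALUE('{"id":42}', '$.id') FROM dual;

-- XML, the equivalent
SELECT XMLCAST(XMLQUERY('/row/id/text()'
         PASSING XMLTYPE('<row><id>42</id></row>')
         RETURNING CONTENT) AS VARCHAR2(10))
FROM   dual;
</code>
JSON_TABLE and XMLTABLE are the bulk equivalents when many rows or values are extracted at once.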

RMAN : how to restore a dropped tablespace if no catalog and no PITR

Tue, 11/03/2020 - 00:00
Hello experts, I am on 12.2, multi-tenant architecture, no RMAN catalog, controlfile autobackup on. I have a problem restoring a dropped tablespace with RMAN. I created the tablespace and made a complete backup of my container, including the PDB and the tablespace. <code>SQL> CREATE TABLESPACE ZZTBS DATAFILE '/u01/app/oracle/oradata/orcl12c/orcl/zztbs.dbf' size 10m EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO; RMAN> BACKUP DATABASE PLUS ARCHIVELOG; ... Starting backup at 02-NOV-20 using channel ORA_DISK_1 channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00010 name=/u01/app/oracle/oradata/orcl12c/orcl/sysaux01.dbf input datafile file number=00011 name=/u01/app/oracle/oradata/orcl12c/orcl/undotbs01.dbf input datafile file number=00009 name=/u01/app/oracle/oradata/orcl12c/orcl/system01.dbf input datafile file number=00012 name=/u01/app/oracle/oradata/orcl12c/orcl/users01.dbf input datafile file number=00016 name=/u01/app/oracle/oradata/orcl12c/orcl/zztbs.dbf input datafile file number=00013 name=/u01/app/oracle/oradata/orcl12c/orcl/APEX_1991375173370654.dbf input datafile file number=00014 name=/u01/app/oracle/oradata/orcl12c/orcl/APEX_1993195660370985.dbf channel ORA_DISK_1: starting piece 1 at 02-NOV-20 channel ORA_DISK_1: finished piece 1 at 02-NOV-20 piece handle=/u01/app/oracle/fast_recovery_area/orcl12c/ORCL12C/49BFF8A6BB912582E0530100007F8BE4/backupset/2020_11_02/o1_mf_nnndf_TAG20201102T102548_ht097xb2_.bkp tag=TAG20201102T102548 comment=NONE ... </code> We see that the backup is OK: BS Key 2 and, most importantly, the Name column lists my tablespace's datafile. <code>RMAN> list backup; List of Backup Sets =================== ... BS Key Type LV Size Device Type Elapsed Time Completion Time ------- ---- -- ---------- ----------- ------------ --------------- 2 Full 1.41G DISK 00:00:34 02-NOV-20 BP Key: 2 Status: AVAILABLE Compressed: NO Tag: TAG20201102T102548 Piece Name: /u01/app/oracle/fast_recovery_area/orcl12c/ORCL12C/49BFF8A6BB912582E0530100007F8BE4/backupset/2020_11_02/o1_mf_nnndf_TAG20201102T102548_ht097xb2_.bkp List of Datafiles in backup set 2 Container ID: 3, PDB Name: ORCL File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name ---- -- ---- ---------- --------- ----------- ------ ---- 9 Full 2166604 02-NOV-20 NO /u01/app/oracle/oradata/orcl12c/orcl/system01.dbf 10 Full 2166604 02-NOV-20 NO /u01/app/oracle/oradata/orcl12c/orcl/sysaux01.dbf 11 Full 2166604 02-NOV-20 NO /u01/app/oracle/oradata/orcl12c/orcl/undotbs01.dbf 12 Full 2166604 02-NOV-20 NO /u01/app/oracle/oradata/orcl12c/orcl/users01.dbf 13 Full 2166604 02-NOV-20 NO /u01/app/oracle/oradata/orcl12c/orcl/APEX_1991375173370654.dbf 14 Full 2166604 02-NOV-20 NO /u01/app/oracle/oradata/orcl12c/orcl/APEX_1993195660370985.dbf 16 Full 2166604 02-NOV-20 NO /u01/app/oracle/oradata/orcl12c/orcl/zztbs.dbf </code> I drop my tablespace. <code>SQL> drop tablespace ZZTBS INCLUDING CONTENTS AND DATAFILES; Tablespace dropped.</code> The problem is that, after the drop, the control file no longer references my tablespace. So when I use RMAN, connected to the PDB, I get an error message saying that it does not know my tablespace.
<code>RMAN> LIST BACKUP OF TABLESPACE ZZTBS; RMAN-00571: =========================================================== RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS =============== RMAN-00571: =========================================================== RMAN-03002: failure of list command at 11/02/2020 10:28:10 RMAN-20202: Tablespace not...
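Without a catalog, the usual way to make that backup visible again is to mount an older controlfile, one that still knows the tablespace, restored from the controlfile autobackup taken before the drop. A sketch; the DBID and UNTIL time are placeholders used to pick the right autobackup:
<code>
RMAN> STARTUP NOMOUNT;
RMAN> SET DBID 1234567890;   -- placeholder: your DBID, needed at NOMOUNT
RMAN> RUN {
        SET UNTIL TIME "to_date('02-NOV-20 10:30','DD-MON-RR HH24:MI')";
        RESTORE CONTROLFILE FROM AUTOBACKUP;
      }
RMAN> ALTER DATABASE MOUNT;
RMAN> LIST BACKUP OF TABLESPACE ZZTBS;   -- the backup set is visible again
</code>
From there the restore is effectively a point-in-time exercise, since the data dictionary must also be taken back to before the drop.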

"Checkpoint not complete" messages with log ship to standby database, Oracle 19 (Standard edition)

Tue, 11/03/2020 - 00:00
Hello, I found out that there are "Checkpoint not complete" messages at each log switch in the alert log of our Oracle Database 19c Standard Edition 2, Release 19.0.0.0.0. We use log shipping to a standby database, therefore we force a log switch with the archive_lag_target parameter every 2 minutes (see PFILE below). Our checkpoints seem to be quite long (~4 minutes). It also seems that our checkpoints are incomplete all the time, independent of the load on the database. On the other hand, our processes can run DML statements normally without waiting significant time on the event "Checkpoint not complete". That is why I kindly ask you to explain: 1. What is the meaning of the "Checkpoint not complete" messages in this case? 2. Why do our DML statements not hang during the incomplete checkpoint event? 3. Would it make sense to set up incremental checkpoints every N seconds with the log_checkpoint_timeout parameter? Thanks in advance, <code> =============================================================================== alert.log: =============================================================================== Beginning log switch checkpoint up to RBA [0x100a.2.10], SCN: 39289565 2020-09-25T14:50:37.557027+02:00 Thread 1 advanced to log sequence 4106 (LGWR switch), current SCN: 39289565 Current log# 2 seq# 4106 mem# 0: /data/oradata/PROD/redo2_0.log 2020-09-25T14:50:37.709741+02:00 ARC1 (PID:12141): Archived Log entry 22746 added for T-1.S-4105 ID 0xf432ed53 LAD:1 2020-09-25T14:52:40.171692+02:00 Thread 1 cannot allocate new log, sequence 4107 Checkpoint not complete Current log# 2 seq# 4106 mem# 0: /data/oradata/PROD/redo2_0.log 2020-09-25T14:52:40.469412+02:00 Completed checkpoint up to RBA [0x1009.2.10], SCN: 39288123 2020-09-25T14:52:43.269670+02:00 Beginning log switch checkpoint up to RBA [0x100b.2.10], SCN: 39290953 2020-09-25T14:52:43.270757+02:00 Thread 1 advanced to log sequence 4107 (LGWR switch), current SCN: 39290953 Current log# 3 seq# 4107 mem# 0: /data/oradata/PROD/redo3_0.log 2020-09-25T14:52:43.281539+02:00 ARC0 (PID:12135): Archived Log entry 22747 added for T-1.S-4106 ID 0xf432ed53 LAD:1 2020-09-25T14:54:42.798805+02:00 Thread 1 cannot allocate new log, sequence 4108 Checkpoint not complete Current log# 3 seq# 4107 mem# 0: /data/oradata/PROD/redo3_0.log 2020-09-25T14:54:43.377453+02:00 Completed checkpoint up to RBA [0x100a.2.10], SCN: 39289565 2020-09-25T14:54:45.873354+02:00 Beginning log switch checkpoint up to RBA [0x100c.2.10], SCN: 39291331 =============================================================================== Statspack: =============================================================================== Avg %Total %Tim Total Wait wait Waits Call Event Waits out Time (s) (ms) /txn Time ---------------------------- ------------ ---- ---------- ------ -------- ------ log file sync 2,363 0 143 60 1.4 3.2 direct path read 2,266 0 69 30 1.3 1.5 log buffer space 123 0 42 344 0.1 .9 Failed Logon Delay 32 100 33 1036 0.0 .7 log file switch (checkpoint 11 0 9 783 0.0 .2 direct path sync 14 0 7 529 0.0 .2 log file switch completion 25 0 7 262 0.0 .1 direct path write 1,360 0 6 5 0.8 .1 =============================================================================== PFILE: =============================================================================== PROD._...
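On the sizing side: "Checkpoint not complete" means LGWR wants to reuse an online log whose checkpoint has not yet finished; with a forced switch every 2 minutes, the standard relief is more and/or larger redo groups, so DBWR gets more time before a log must be reused. A sketch (path and size are placeholders matching the question's layout):
<code>
ALTER DATABASE ADD LOGFILE GROUP 4 ('/data/oradata/PROD/redo4_0.log') SIZE 2G;
</code>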

Not able to run DBMS_SCHEDULER jobs on Shell Scripts from any user other than SYS

Wed, 10/28/2020 - 00:00
Hello Team, Greetings! I tried to run a DBMS_SCHEDULER job (job_type EXECUTABLE) from a schema other than SYS; it fails and I am not able to see the error for the failure. But when I run the same job from the SYS user it succeeds. Please help me resolve this issue. Thanks Femy Mellwin
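Two things usually explain this pattern with EXECUTABLE jobs: the job owner lacks the CREATE EXTERNAL JOB privilege, or no OS credential is attached to the job (SYS-owned jobs behave differently here). The real error text typically lands in the run-details view. A sketch; the credential and job names are placeholders:
<code>
BEGIN
  DBMS_CREDENTIAL.CREATE_CREDENTIAL(
    credential_name => 'OS_CRED',
    username        => 'oracle',
    password        => 'os_password');
  DBMS_SCHEDULER.SET_ATTRIBUTE('MY_SHELL_JOB', 'credential_name', 'OS_CRED');
END;
/

-- the actual failure reason usually shows up here
SELECT job_name, status, error#, additional_info
FROM   user_scheduler_job_run_details
ORDER  BY log_date DESC;
</code>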

Create web source module on APEX

Tue, 10/27/2020 - 00:00
Hi everyone, I have a web service URL, something like 'https://bus/api/check'. When I call it in a browser, it returns data normally. But when I create a Web Source Module, it shows me the following: An error occurred during URL invocation. ORA-29024: Certificate validation failure I searched Google about that problem and tried to solve it following this page: https://apex.oracle.com/pls/apex/germancommunities/apexcommunity/tipp/6121/index-en.html It is very clear. I created a new wallet named https_wallet and added the certificate successfully. However, when I check with a select statement on the DB: <code>select APEX_WEB_SERVICE.make_rest_request(p_url => 'https://bus/api/check',p_http_method => 'GET',p_wallet_path => 'file:C:/temp/wallet/https_wallet',p_wallet_pwd => 'pass_word') from dual;</code> <b>it shows an HTTPS request failed / security violation error, and I don't know why. I followed the steps found on the web and can retrieve the certificate successfully; my certificate is from Amazon. I checked the site via an online SSL checker and everything is OK, but I don't understand why, when I check the serial number of the first certificate, it shows the serial number as 00. Can anyone help me solve this problem? I do not have much experience creating Web Source Modules. Thanks in advance.</b>
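ORA-29024 from the database usually means the wallet is missing part of the certificate chain, the root and any intermediate CAs, rather than the site certificate itself. A sketch of inspecting and completing the wallet with orapki (the .cer file name is a placeholder for the CA certificates of the chain):
<code>
orapki wallet display -wallet C:/temp/wallet/https_wallet
orapki wallet add -wallet C:/temp/wallet/https_wallet -trusted_cert -cert amazon_root_ca.cer -pwd pass_word
</code>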
