Feed aggregator

RMAN restore performance based on ASM

AskTom - Hot Questions - Thu, 11/05/2020 - 00:00
Environment: RHEL 6.9, udev ASM disks, DB 11.2.0.4.

Scene A: MEMORY_TARGET=4GB, PGA_TARGET=1GB, with default _backup_disk/file_bufsz/cnt values. A 17 TB full restore completed in 42 hours, with approximately 21 MB of PGA used per channel.

<code>
COMPONENT                 CURRENT_SIZE  MIN_SIZE  USER_SPECIFIED_SIZE  TYPE
------------------------  ------------  --------  -------------------  -------
shared pool                        656       528                    0  GROW
large pool                         416       416                    0  STATIC
java pool                           64        64                    0  STATIC
streams pool                         0         0                    0  STATIC
SGA Target                        3072      3072                    0  STATIC
DEFAULT buffer cache              1888      1888                    0  SHRINK
KEEP buffer cache                    0         0                    0  STATIC
RECYCLE buffer cache                 0         0                    0  STATIC
DEFAULT 2K buffer cache              0         0                    0  STATIC
DEFAULT 4K buffer cache              0         0                    0  STATIC
DEFAULT 8K buffer cache              0         0                    0  STATIC
DEFAULT 16K buffer cache             0         0                    0  STATIC
DEFAULT 32K buffer cache             0         0                    0  STATIC
Shared IO Pool                       0         0                    0  STATIC
PGA Target                        1024      1024                 1024  STATIC
ASM Buffer Cache                     0         0                    0  STATIC

16 rows selected.

PARAMETER             VALUE       DESCRIPTION                                         Default?
--------------------- ----------- --------------------------------------------------  --------
_pga_max_size         209715200   Maximum size of the PGA memory for one process      TRUE
_smm_max_size         102400      maximum work area size in auto mode (serial)        TRUE
_smm_px_max_size      524288      maximum work area size in auto mode (global)        TRUE
pga_aggregate_target  1073741824  Target size for the aggregate PGA memory consumed   FALSE
                                  by the instance

KSPPINM                  KSPPSTVL  KSPPDESC
------------------------ --------- --------------------------------------------------
_backup_disk_io_slaves   0         BACKUP Disk I/O slaves
_backup_ksfq_bufcnt_max  64        maximum number of buffers used for backup/restore
_backup_ksfq_bufsz       0         size of buffers used for backup/restore
_backup_ksfq_bufcnt      0         number of buffers used for backup/restore
_backup_disk_bufsz       0         size of buffers used for DISK channels
_backup_disk_bufcnt      0         number of buffers used for DISK channels
_backup_file_bufsz       0         size of buffers used for file access
_backup_file_bufcnt      0         ...
</code>
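The hidden-parameter listing above (KSPPINM/KSPPSTVL/KSPPDESC) is the shape of output typically produced by joining X$KSPPI to X$KSPPCV as SYS. A minimal diagnostic sketch, assuming SYSDBA access (hidden parameters are undocumented, so treat the results as diagnostic only):

<code>
-- Sketch: list the _backup% buffer parameters (run as SYS).
SELECT i.ksppinm   AS parameter,
       cv.ksppstvl AS value,
       i.ksppdesc  AS description
  FROM x$ksppi  i
  JOIN x$ksppcv cv ON cv.indx = i.indx
 WHERE i.ksppinm LIKE '\_backup%' ESCAPE '\'
 ORDER BY i.ksppinm;
</code>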

RMAN: how to restore a dropped tablespace with no catalog and no PITR

AskTom - Hot Questions - Tue, 11/03/2020 - 00:00
Hello experts, I am on 12.2, multitenant architecture, no RMAN catalog, controlfile autobackup enabled. I have a problem restoring a dropped tablespace with RMAN. I created the tablespace and made a complete backup of my container, including the PDB and the tablespace.

<code>
SQL> CREATE TABLESPACE ZZTBS
       DATAFILE '/u01/app/oracle/oradata/orcl12c/orcl/zztbs.dbf' SIZE 10m
       EXTENT MANAGEMENT LOCAL AUTOALLOCATE
       SEGMENT SPACE MANAGEMENT AUTO;

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
...
Starting backup at 02-NOV-20
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00010 name=/u01/app/oracle/oradata/orcl12c/orcl/sysaux01.dbf
input datafile file number=00011 name=/u01/app/oracle/oradata/orcl12c/orcl/undotbs01.dbf
input datafile file number=00009 name=/u01/app/oracle/oradata/orcl12c/orcl/system01.dbf
input datafile file number=00012 name=/u01/app/oracle/oradata/orcl12c/orcl/users01.dbf
input datafile file number=00016 name=/u01/app/oracle/oradata/orcl12c/orcl/zztbs.dbf
input datafile file number=00013 name=/u01/app/oracle/oradata/orcl12c/orcl/APEX_1991375173370654.dbf
input datafile file number=00014 name=/u01/app/oracle/oradata/orcl12c/orcl/APEX_1993195660370985.dbf
channel ORA_DISK_1: starting piece 1 at 02-NOV-20
channel ORA_DISK_1: finished piece 1 at 02-NOV-20
piece handle=/u01/app/oracle/fast_recovery_area/orcl12c/ORCL12C/49BFF8A6BB912582E0530100007F8BE4/backupset/2020_11_02/o1_mf_nnndf_TAG20201102T102548_ht097xb2_.bkp tag=TAG20201102T102548 comment=NONE
...
</code>

We can see that the backup is OK: BS 2, Key 16, and, most importantly, the Name column is filled with the datafile of my tablespace.

<code>
RMAN> list backup;

List of Backup Sets
===================
...
BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2       Full    1.41G      DISK        00:00:34     02-NOV-20
        BP Key: 2   Status: AVAILABLE  Compressed: NO  Tag: TAG20201102T102548
        Piece Name: /u01/app/oracle/fast_recovery_area/orcl12c/ORCL12C/49BFF8A6BB912582E0530100007F8BE4/backupset/2020_11_02/o1_mf_nnndf_TAG20201102T102548_ht097xb2_.bkp
  List of Datafiles in backup set 2
  Container ID: 3, PDB Name: ORCL
  File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------- ----------- ------ ----
  9       Full 2166604    02-NOV-20             NO     /u01/app/oracle/oradata/orcl12c/orcl/system01.dbf
  10      Full 2166604    02-NOV-20             NO     /u01/app/oracle/oradata/orcl12c/orcl/sysaux01.dbf
  11      Full 2166604    02-NOV-20             NO     /u01/app/oracle/oradata/orcl12c/orcl/undotbs01.dbf
  12      Full 2166604    02-NOV-20             NO     /u01/app/oracle/oradata/orcl12c/orcl/users01.dbf
  13      Full 2166604    02-NOV-20             NO     /u01/app/oracle/oradata/orcl12c/orcl/APEX_1991375173370654.dbf
  14      Full 2166604    02-NOV-20             NO     /u01/app/oracle/oradata/orcl12c/orcl/APEX_1993195660370985.dbf
  16      Full 2166604    02-NOV-20             NO     /u01/app/oracle/oradata/orcl12c/orcl/zztbs.dbf
</code>

Then I dropped the tablespace.

<code>
SQL> DROP TABLESPACE ZZTBS INCLUDING CONTENTS AND DATAFILES;

Tablespace dropped.
</code>

The problem is that, after dropping the tablespace, the control file no longer contains any reference to it. So when I use RMAN, connected to the PDB, I get an error message saying that the tablespace is unknown.
<code>
RMAN> LIST BACKUP OF TABLESPACE ZZTBS;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of list command at 11/02/2020 10:28:10
RMAN-20202: Tablespace not...
</code>
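Since the current control file has lost all knowledge of the dropped tablespace, one commonly suggested direction (a hedged sketch, not the thread's confirmed answer) is to mount with a control file restored from an autobackup taken before the drop; that older control file still knows datafile 16 and the backup set containing it:

<code>
RMAN> -- Sketch only: this replaces the current controlfile, so test first;
RMAN> -- the follow-up recovery steps (RECOVER / OPEN RESETLOGS) are omitted here.
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;  -- choose an autobackup predating the DROP
RMAN> ALTER DATABASE MOUNT;
RMAN> RESTORE DATAFILE 16;                  -- file# of the dropped tablespace's datafile
</code>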

"Checkpoint not complete" messages with log ship to standby database, Oracle 19 (Standard edition)

AskTom - Hot Questions - Tue, 11/03/2020 - 00:00
Hello, I found out that there are "Checkpoint not complete" messages at each log switch in the alert log of our Oracle Database 19c Standard Edition 2, Release 19.0.0.0.0. We are using log shipping to a standby database, therefore we force a log switch with the archive_lag_target parameter every 2 minutes (see PFILE below). Our checkpoints seem to be quite long (~4 minutes). It also seems that all our checkpoints are incomplete all the time, independent of the load on the database. On the other hand, our processes can run DML statements normally without waiting significant time on the event "Checkpoint not complete". That is why I kindly ask you to explain: 1. What is the meaning of the "Checkpoint not complete" messages in this case? 2. Why do our DML statements not hang during the incomplete checkpoint? 3. Would it make sense to set up incremental checkpoints every N seconds with the log_checkpoint_timeout parameter? Thanks in advance,

<code>
===============================================================================
alert.log:
===============================================================================
Beginning log switch checkpoint up to RBA [0x100a.2.10], SCN: 39289565
2020-09-25T14:50:37.557027+02:00
Thread 1 advanced to log sequence 4106 (LGWR switch), current SCN: 39289565
  Current log# 2 seq# 4106 mem# 0: /data/oradata/PROD/redo2_0.log
2020-09-25T14:50:37.709741+02:00
ARC1 (PID:12141): Archived Log entry 22746 added for T-1.S-4105 ID 0xf432ed53 LAD:1
2020-09-25T14:52:40.171692+02:00
Thread 1 cannot allocate new log, sequence 4107
Checkpoint not complete
  Current log# 2 seq# 4106 mem# 0: /data/oradata/PROD/redo2_0.log
2020-09-25T14:52:40.469412+02:00
Completed checkpoint up to RBA [0x1009.2.10], SCN: 39288123
2020-09-25T14:52:43.269670+02:00
Beginning log switch checkpoint up to RBA [0x100b.2.10], SCN: 39290953
2020-09-25T14:52:43.270757+02:00
Thread 1 advanced to log sequence 4107 (LGWR switch), current SCN: 39290953
  Current log# 3 seq# 4107 mem# 0: /data/oradata/PROD/redo3_0.log
2020-09-25T14:52:43.281539+02:00
ARC0 (PID:12135): Archived Log entry 22747 added for T-1.S-4106 ID 0xf432ed53 LAD:1
2020-09-25T14:54:42.798805+02:00
Thread 1 cannot allocate new log, sequence 4108
Checkpoint not complete
  Current log# 3 seq# 4107 mem# 0: /data/oradata/PROD/redo3_0.log
2020-09-25T14:54:43.377453+02:00
Completed checkpoint up to RBA [0x100a.2.10], SCN: 39289565
2020-09-25T14:54:45.873354+02:00
Beginning log switch checkpoint up to RBA [0x100c.2.10], SCN: 39291331

===============================================================================
Statspack:
===============================================================================
                                                          Avg          %Total
                                    %Tim    Total Wait   wait   Waits    Call
Event                        Waits   out      Time (s)   (ms)    /txn    Time
---------------------------- ------ ---- ------------ ------ ------- -------
log file sync                 2,363    0          143     60     1.4     3.2
direct path read              2,266    0           69     30     1.3     1.5
log buffer space                123    0           42    344     0.1      .9
Failed Logon Delay               32  100           33   1036     0.0      .7
log file switch (checkpoint      11    0            9    783     0.0      .2
direct path sync                 14    0            7    529     0.0      .2
log file switch completion       25    0            7    262     0.0      .1
direct path write             1,360    0            6      5     0.8      .1

===============================================================================
PFILE:
===============================================================================
PROD._...
</code>
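For context, "Checkpoint not complete" generally means LGWR wants to reuse an online log whose changes DBWR has not yet checkpointed. A hedged sketch of the usual first checks (the path and size below are illustrative assumptions, not taken from this system):

<code>
-- Inspect current redo group sizes and states.
SELECT group#, thread#, bytes/1024/1024 AS mb, status FROM v$log;

-- Adding groups, or creating larger ones, gives checkpoints more time
-- to finish before a log must be reused.
ALTER DATABASE ADD LOGFILE GROUP 4 ('/data/oradata/PROD/redo4_0.log') SIZE 1G;
</code>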

Not able to run DBMS_SCHEDULER jobs on Shell Scripts from any user other than SYS

AskTom - Hot Questions - Wed, 10/28/2020 - 00:00
Hello Team, Greetings! I tried to run DBMS_SCHEDULER jobs (job_type EXECUTABLE) from a schema other than SYS; the jobs fail and I am not able to see the error for the failure. When I run the same job from the SYS user, it succeeds. Please help me resolve this issue. Thanks, Femy Mellwin
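A hedged diagnostic sketch for this kind of failure: with EXECUTABLE jobs, the OS-level error usually surfaces in the scheduler run views, and non-SYS job owners commonly need an OS credential attached to the job (via DBMS_CREDENTIAL on 12c+; older releases used DBMS_SCHEDULER.CREATE_CREDENTIAL). The job and credential names below are hypothetical:

<code>
-- Where the failure reason usually shows up (ADDITIONAL_INFO carries the OS error).
SELECT job_name, status, error#, additional_info
  FROM dba_scheduler_job_run_details
 WHERE job_name = 'MY_SHELL_JOB'           -- hypothetical job name
 ORDER BY log_date DESC;

-- One common fix: give the job an OS credential to run under.
BEGIN
  DBMS_CREDENTIAL.CREATE_CREDENTIAL(
    credential_name => 'OS_CRED',          -- hypothetical
    username        => 'oracle',
    password        => '********');
  DBMS_SCHEDULER.SET_ATTRIBUTE('MY_SHELL_JOB', 'credential_name', 'OS_CRED');
END;
/
</code>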

Locking issue on self-created counter scheme in our HMIS Application

AskTom - Hot Questions - Mon, 10/19/2020 - 00:00
Dear Team, we recently migrated our application's counter scheme (previously based on Oracle sequences) to our own procedure that generates the next number/counter, because we need separate counters for our different hospital campuses. Every year we have to reset all sequences, because our scheme uses year-based counter values. The issue now is a locking problem on our Physician Encounter screen, where we have to generate 7-8 different counters at the same time; our procedure sometimes gets locked and does not return a counter value, and our application users may lose data when no counter is returned. We have two options for the row update, WAIT 1 and NOWAIT; sample code is below (see also the sketch after the code). Can you please guide us to a method close to an Oracle sequence, so that we can update the code and overcome this locking issue? Also, please advise: do you recommend using FOR UPDATE with WAIT or with NOWAIT in SQL?

<code>
IF NVL(P_LOCK_WAIT, 'N') = 'Y' THEN
  SELECT ROWID
    INTO P_ROWID
    FROM COUNTERS.SYSTEM_COUNTERS_VALUES T
   WHERE T.COUNTER_ID = P_COUNTER_ID
     AND T.SERIAL_NO  = P_SERIAL_NO
     FOR UPDATE WAIT 1;
ELSE
  SELECT ROWID
    INTO P_ROWID
    FROM COUNTERS.SYSTEM_COUNTERS_VALUES T
   WHERE T.COUNTER_ID = P_COUNTER_ID
     AND T.SERIAL_NO  = P_SERIAL_NO
     FOR UPDATE NOWAIT;
END IF;
</code>
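As promised above, a hedged sketch of a design that stays close to native sequences (all names and the per-campus/per-year naming convention are assumptions for illustration): one sequence per campus and year avoids the single hot counter row entirely, because sequences never block.

<code>
-- One sequence per campus+year, e.g. ENC_SEQ_KHI_24 (naming is illustrative).
CREATE SEQUENCE enc_seq_khi_24 START WITH 1 CACHE 100;

CREATE OR REPLACE FUNCTION next_counter(p_campus IN VARCHAR2) RETURN NUMBER IS
  l_val NUMBER;
BEGIN
  -- Dynamic lookup of the campus/year sequence; validate p_campus against
  -- a whitelist in real code to avoid SQL injection.
  EXECUTE IMMEDIATE
    'SELECT enc_seq_' || p_campus || '_' || TO_CHAR(SYSDATE, 'YY') ||
    '.NEXTVAL FROM dual'
    INTO l_val;
  RETURN l_val;
END;
/
</code>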

Generate string based on pattern

AskTom - Hot Questions - Mon, 10/19/2020 - 00:00
Hi Chris, Connor, could you please help or suggest a way to generate strings based on a pattern? Pattern: <b>^[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[89ab][a-f0-9]{3}-[a-f0-9]{12}$</b>. I need to generate millions of strings like <b>f9f8c8e2-0b20-4160-8f74-e836f4661fc5</b> that match the pattern, e.g. <code>SELECT * FROM dual WHERE REGEXP_LIKE('f9f8c8e2-0b20-4160-8f74-e836f4661fc5', '^[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[89ab][a-f0-9]{3}-[a-f0-9]{12}$');</code>
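A hedged sketch of one way to do this (not from the thread): RAWTOHEX(SYS_GUID()) yields 32 hex digits per call, and a single REGEXP_REPLACE can insert the hyphens while forcing the one nibble that must be in [89ab]:

<code>
SELECT REGEXP_REPLACE(
         LOWER(RAWTOHEX(SYS_GUID())),
         '^(.{8})(.{4})(.{4}).(.{3})(.{12})$',
         '\1-\2-\3-' ||
         SUBSTR('89ab', TRUNC(DBMS_RANDOM.VALUE(1, 5)), 1) ||  -- one of 8,9,a,b
         '\4-\5') AS uuid_like
  FROM dual
CONNECT BY LEVEL <= 10;   -- scale LEVEL up to the millions needed
</code>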

In-database row archival

AskTom - Hot Questions - Wed, 10/14/2020 - 00:00
I have a question about Oracle In-Database Archiving in Oracle 19c. Given that the table below has an XMLType column, <code>alter table HTS_SUMMARY_DATA_CELL row archival</code> fails with: <code>SQL Error: ORA-38353: This table type is not supported by the ROW ARCHIVAL clause.</code> The same In-Database Archiving statement in Oracle 12c (12.1.0.2), <code>alter table HTS_SUMMARY_DATA_CELL row archival</code>, succeeds and the table gets the row-archival property. What is the difference between the 12c and 19c versions?
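For reference, a minimal sketch of the feature working on a plain relational table (names are illustrative; the ORA-38353 above is specific to the XMLType-column table):

<code>
CREATE TABLE t_demo (id NUMBER) ROW ARCHIVAL;

-- Archive a row: the invisible ORA_ARCHIVE_STATE column flips to '1'.
UPDATE t_demo SET ora_archive_state = '1' WHERE id = 1;

-- Archived rows are hidden by default; make them visible again:
ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;
</code>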

Oracle .DBF file format structure

AskTom - Hot Questions - Wed, 10/14/2020 - 00:00
Hi, I want to iterate over all rows in a table and do some operations on them, but I don't want to fetch the data with a SELECT statement. To do that, I tried to export the data as .dbf and read it with Java. But it seems the Oracle .dbf format differs from the xBase .dbf format (https://www.clicketyclick.dk/databases/xbase/format/). Where can I get details of the Oracle .dbf data file structure? Or can you suggest more suitable techniques for this purpose? Thank you.
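Oracle datafiles are a proprietary format that is unsupported to parse externally, so a sweep inside the database is usually the practical route. A hedged sketch using DBMS_PARALLEL_EXECUTE to iterate the table in ROWID chunks (table, column, and task names are assumptions):

<code>
BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK('sweep_task');
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => 'sweep_task',
    table_owner => USER,
    table_name  => 'MY_TABLE',     -- hypothetical table
    by_row      => TRUE,
    chunk_size  => 10000);
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'sweep_task',
    sql_stmt       => 'UPDATE my_table SET processed = ''Y''
                        WHERE rowid BETWEEN :start_id AND :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);
  DBMS_PARALLEL_EXECUTE.DROP_TASK('sweep_task');
END;
/
</code>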

How to differentiate between internal system queries and application queries in Oracle

AskTom - Hot Questions - Tue, 10/13/2020 - 00:00
Hi Team, firstly, I want to thank you for all the immense help we continue to receive from your end, making us better, more knowledgeable Oracle developers. I have this weird question about the internal queries that Oracle fires versus the queries that we, the users, fire (from an application, SQL Developer, or the SQL prompt). (Environment: Oracle 11g R2)

<code>
select sql_fulltext, sql_id, executions, service, parsing_schema_name,
       rows_processed, optimizer_cost
  from v$sql
 where to_date(last_load_time, 'yyyy-mm-dd/hh24:mi:ss') >=
       to_date(to_char(sysdate, 'yyyy-mm-dd'), 'yyyy-mm-dd')
   and service = 'SYS$USERS'
   and parsing_schema_name <> 'SYS'
   and rows_processed > 0
</code>

Below is the sample output:

<code>
select count(*) from all_objects where object_name = :NAME and owner = :OWNER    gq4p31m39qpms  1  SYS$USERS  HR     1    4
select last_load_time,sql_fulltext,sql_id,loaded_versions,fetches,executions...  2hx8s4mqk6qfc  2  SYS$USERS  HR     100  1
begin DBMS_UTILITY.NAME_RESOLVE ( name => :NAME, context........                 ats0vqbmfytk8  1  SYS$USERS  HR     1    0
select * from v$sql                                                              gtzvs9utsg0qs  1  SYS$USERS  HR     500  1
SELECT dbms_transaction.local_transaction_id FROM dual                           gqj5gaygdbfs5  2  SYS$USERS  SCOTT  2    2
select * from employees                                                          f34thrbt8rjt5  3  SYS$USERS  HR     150  3
select owner,object_name from all_objects where object_type in ('TABLE',.......  3yqkryv9c3nhk  1  SYS$USERS  HR     2    72
</code>

Of these, only "select * from employees", "select * from v$sql", and the query stated above were executed by the user from SQL Developer/the SQL prompt/the application. The rest were all fired by Oracle internally. My question is: how can I find only the queries fired from SQL Developer/the application/the SQL prompt? Are there any other metadata views/parameters/columns by which we can separate queries fired by the user/application from queries fired by Oracle internally? Please help in this regard.
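One hedged angle (an assumption to verify, not a complete answer): client tools and well-behaved applications populate V$SQL.MODULE (directly or via DBMS_APPLICATION_INFO), while Oracle's recursive/internal statements generally do not carry a client module, so filtering on it separates much of the noise:

<code>
SELECT sql_id, module, executions, parsing_schema_name
  FROM v$sql
 WHERE parsing_schema_name <> 'SYS'
   AND module IN ('SQL Developer', 'SQL*Plus');  -- client-reported module names
</code>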

Accent-insensitive conversion

AskTom - Hot Questions - Sat, 10/10/2020 - 00:00
Hello TOM, is there a way to perform an accent-insensitive conversion ONLY? I'm using the construct: <code>SELECT utl_raw.cast_to_varchar2(NLSSORT('ABCoua', 'nls_sort=binary_ai')) FROM dual;</code> which, as a byproduct, also performs a case-insensitive replacement that is not required. Something like this: <code>SELECT my_funct('ABCoua') FROM dual;</code> should output <code>ABCoua</code> and not <code>abcoua</code>. In case you're wondering why I need this: we store the data in Unicode format in our database, but we need to export it using Latin-based characters to an international institution. Thanks,
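A hedged alternative sketch (behaviour depends on the database character set, so verify on your data): CONVERT to a Latin/ASCII target folds many accented letters to their base form while preserving case, unlike the binary_ai NLSSORT trick:

<code>
-- May return 'ABCoua' for accented input such as 'ÀBÇöûâ' on common
-- Unicode databases; characters without a mapping come back as '?'.
SELECT CONVERT('ÀBÇöûâ', 'US7ASCII') FROM dual;
</code>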

How to validate the precision and scale of a number in pure PL/SQL

AskTom - Hot Questions - Thu, 10/08/2020 - 00:00
Hello, I came across a problem: validating whether a user-provided number will fit a column in a table. The check, to be done in PL/SQL, is to validate whether the precision and scale are big enough for the provided number. What would be your suggestion to achieve that? My efforts are below. I cannot find it in the documentation, but I found that PL/SQL does not allow datatype limits in either the VALIDATE_CONVERSION or CAST functions, so the following fails:

<code>
DECLARE
  tn NUMBER;
BEGIN
  tn := CAST( '123.456' AS NUMBER( 6, 2 ));
  DBMS_OUTPUT.PUT_LINE( tn );
END;
</code>

SQL does allow datatype limits there, so the solution could be something like:

<code>
EXECUTE IMMEDIATE 'SELECT CAST(:p_test_num AS NUMBER(:p_precision,:p_scale)) from dual' INTO ...
</code>

However, the check is to be done in bulk (loading a text file), so context switches will become a problem. I could use PL/SQL like:

<code>
set serveroutput ON SIZE UNLIMITED
CREATE OR REPLACE PROCEDURE P(p_test_num  IN VARCHAR2,
                              p_precision IN PLS_INTEGER,
                              p_scale     IN PLS_INTEGER,
                              p_format    IN VARCHAR2 DEFAULT NULL) IS
  l_result PLS_INTEGER;
  -- -1 - the number will fit but with rounding
  --  0 - the number will not fit
  --  1 - the number will fit perfectly
BEGIN
  EXECUTE IMMEDIATE
    'declare
       l_test_num NUMBER('||p_precision||', '||p_scale||');
     begin
       if :test_format is null then
         l_test_num := to_number(:test_num);
       else
         l_test_num := to_number(:test_num, :test_format);
       end if;
       if l_test_num = :test_num then
         :result := 1;
       else
         :result := -1;
       end if;
     exception
       when value_error then :result := 0;
       when others then raise;
     end;'
    USING IN p_format, IN p_test_num, OUT l_result;
  DBMS_OUTPUT.PUT_LINE( l_result );
END;

exec p('21474.83647',38,1)
exec p('2147483647' ,38,1)
exec p('2147483647' ,38,127)
exec p('21474.83647',38,1,'99999D99')
exec p('21474.83647',38,1,'99999D99999')
exec p('21474.83647',38,1,'99999999D99999999')
exec p('21474.83647',38,12,'99999999D99999999')
exec p('2147483647' ,38,127,'99999999999')
</code>

... but I have a feeling this could be done better...
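One possible improvement, as a hedged sketch with no dynamic SQL and hence no per-row parse (it assumes the value is already a NUMBER and leaves any format-model handling to the caller): a NUMBER(p,s) column holds a value exactly when rounding to s digits changes nothing and the integer part fits in p-s digits.

<code>
CREATE OR REPLACE FUNCTION fits_number(
  p_num       IN NUMBER,
  p_precision IN PLS_INTEGER,
  p_scale     IN PLS_INTEGER) RETURN BOOLEAN IS
BEGIN
  RETURN ROUND(p_num, p_scale) = p_num                   -- no rounding needed
     AND ABS(p_num) < POWER(10, p_precision - p_scale);  -- integer part fits
END;
/
</code>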

recursive query raises ORA-01789

AskTom - Hot Questions - Wed, 10/07/2020 - 00:00
This is a minimized version of a complex recursive query. The query works when the columns in the recursive member are listed explicitly:

<code>
with t (c,p) as (
  select 2,1 from dual
),
rec (c,p) as (
  select c,p from t
  union all
  select t.c,t.p from rec join t on rec.c = t.p
)
select * from rec
</code>

I don't get why the error <i>ORA-01789: query block has incorrect number of result columns</i> is raised when t.* is specified instead.

<code>
with t (c,p) as (
  select 2,1 from dual
),
rec (c,p) as (
  select c,p from t
  union all
  select t.* from rec join t on rec.c = t.p
)
select * from rec
</code>

Why is t.* not equivalent to t.c,t.p here? Could you please point me to documentation with any reasoning? Tested on 11g and 18 - db fiddle https://dbfiddle.uk/?rdbms=oracle_18&fiddle=6b44cda99ae2a3d5486d14b28206f289 - and 19 (livesql).

SQL performance tuning

AskTom - Hot Questions - Wed, 10/07/2020 - 00:00
Hi Tom, I have been trying to tune the following SQL. This is a data warehouse query which used to run for 2 hours but gradually, after 3 months, now takes around 3 hours. I tried creating an index and using hints (parallel hints, index hint) but got a worse execution plan. I also ran the SQL Tuning Advisor and it says this is the best plan. Please guide me on this.

<code>
SELECT fpa.pnl_sum_rec_key,
       NVL (
          (SELECT SUM (fpa2.pnl_ptd_actual)
             FROM fact_pnl_consol fpa2
            WHERE fpa2.pnl_report_key = fpa.pnl_report_key
              AND fpa2.pnl_period_yr = fpa.pnl_period_yr - 1
              AND fpa2.pnl_period_qtr = fpa.pnl_period_qtr
              AND fpa2.pnl_period_num = fpa.pnl_period_num
              AND fpa2.pnl_rpt_row_num = fpa.pnl_rpt_row_num
              AND fpa.PNL_RPT_CURRENCY = fpa2.PNL_RPT_CURRENCY
              AND NVL (fpa2.ASC_STANDARD, 'X') = NVL (fpa.ASC_STANDARD, 'X')),
          0) calc_ptd_act_py,
       NVL (
          (SELECT SUM (fpa3.pnl_ptd_actual)
             FROM fact_pnl_consol fpa3
            WHERE fpa3.pnl_report_key = fpa.pnl_report_key
              AND fpa3.pnl_period_yr = fpa.pnl_period_yr
              AND fpa3.pnl_period_qtr = fpa.pnl_period_qtr
              AND fpa3.pnl_period_num <= fpa.pnl_period_num
              AND fpa3.pnl_rpt_row_num = fpa.pnl_rpt_row_num
              AND fpa.PNL_RPT_CURRENCY = fpa3.PNL_RPT_CURRENCY
              AND NVL (fpa3.ASC_STANDARD, 'X') = NVL (fpa.ASC_STANDARD, 'X')),
          0) calc_qtd_act,
       NVL (
          (SELECT SUM (fpa4.pnl_ptd_budget)
             FROM fact_pnl_consol fpa4
            WHERE fpa4.pnl_report_key = fpa.pnl_report_key
              AND fpa4.pnl_period_yr = fpa.pnl_period_yr
              AND fpa4.pnl_period_qtr = fpa.pnl_period_qtr
              AND fpa4.pnl_period_num <= fpa.pnl_period_num
              AND fpa4.pnl_rpt_row_num = fpa.pnl_rpt_row_num
              AND fpa.PNL_RPT_CURRENCY = fpa4.PNL_RPT_CURRENCY
              AND NVL (fpa4.ASC_STANDARD, 'X') = NVL (fpa.ASC_STANDARD, 'X')),
          0) calc_qtd_bud,
       NVL (
          (SELECT SUM (fpa5.pnl_ptd_actual)
             FROM fact_pnl_consol fpa5
            WHERE fpa5.pnl_report_key = fpa.pnl_report_key
              AND fpa5.pnl_period_yr = fpa.pnl_period_yr - 1
              AND fpa5.pnl_period_qtr = fpa.pnl_period_qtr
              AND fpa5.pnl_period_num <= fpa.pnl_period_num
              AND fpa5.pnl_rpt_row_num = fpa.pnl_rpt_row_num
              AND fpa.PNL_RPT_CURRENCY = fpa5.PNL_RPT_CURRENCY
              AND NVL (fpa5.ASC_STANDARD, 'X') = NVL (fpa.ASC_STANDARD, 'X')),
          0) calc_qtd_act_py,
       NVL (
          (SELECT SUM (fpa6.pnl_ptd_actual)
             FROM fact_pnl_consol fpa6
            WHERE fpa6.pnl_report_key = fpa.pnl_report_key
              AND fpa6.pnl_period_yr = fpa.pnl_period_yr
              AND fpa6.pnl_period_num <= fpa.pnl_period_num
              AND fpa6.pnl_rpt_row_num = fpa.pnl_rpt_row_num
              AND fpa.PNL_RPT_CURRENCY = fpa6.PNL_RPT_CURRENCY
              AND NVL (fpa6.ASC_STANDARD, 'X') = NVL (fpa.ASC_STANDARD, 'X')),
          0) calc_ytd_act,
       NVL (
          (SELECT SUM (fpa7.pnl_ptd_budget)
             FROM fact_pnl_consol fpa7
            WHERE fpa7.pnl_report_key = fpa.pnl_report_key
              AND fpa7.pnl_period_yr = fpa.pnl_period_yr
              AND fpa7.pnl_period_num <= fpa.pnl_period_num
              AND fpa7.pnl_rpt_row_num = ...
</code>
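A hedged rewrite direction rather than a fix (results must be verified against the original): scalar subqueries that re-scan the same fact table can often be collapsed into a single pass with analytic functions. For example, the quarter-to-date sum (calc_qtd_act) has the shape of a running window total:

<code>
SELECT pnl_sum_rec_key,
       SUM(pnl_ptd_actual) OVER (
         PARTITION BY pnl_report_key, pnl_period_yr, pnl_period_qtr,
                      pnl_rpt_row_num, pnl_rpt_currency, NVL(asc_standard, 'X')
         ORDER BY pnl_period_num
         RANGE UNBOUNDED PRECEDING) AS calc_qtd_act
  FROM fact_pnl_consol;
</code>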

Global Temp Table

AskTom - Hot Questions - Wed, 10/07/2020 - 00:00
We're considering the value of using a GTT (global temporary table) approach to address a variety of 'staging table' requirements in our applications. We typically load multiple MB of data (50-100 MB per frequent, concurrent data load) into staging tables, from where we transform, validate, and consolidate the data into persistent portfolio data tables, and then delete the load-specific data from the staging table(s). We expect that replacing our staging tables with equivalent GTTs, indexes, etc. will remove the overhead of deleting staged data at the end of each data-load process. We also understand, although we haven't found confirmation in the Oracle docs, that GTTs will use PGA space where possible before spilling data over into the temp tablespace if PGA is depleted. If our understanding here is not misguided, then we might also consider splitting our data-loading flows into smaller chunks to make better use of available PGA memory? Our question really is: are we on the right track in terms of making the best possible use of the GTT approach, and are there any other key considerations or policy rules that you would encourage? Thanks for your time.... dermot..
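For concreteness, a minimal sketch of the staging-table replacement being discussed (names are illustrative); the ON COMMIT clause decides whether staged rows disappear per transaction or per session, which is what removes the explicit end-of-load DELETE:

<code>
CREATE GLOBAL TEMPORARY TABLE stage_load (
  load_id  NUMBER,
  payload  VARCHAR2(4000)
) ON COMMIT DELETE ROWS;   -- or ON COMMIT PRESERVE ROWS for session-lifetime staging
</code>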

Convert JSON into Record

AskTom - Hot Questions - Wed, 09/30/2020 - 00:00
Hi, I have a JSON file that I have to insert into a table with multiple columns. How do I convert it into records? Please see the example below.

<b>Example:</b>

<code>
{"EMPNO":123,"ENAME":"TEST","JOB":"ENGINEER","MGR":null,"HIREDATE":null,"SAL":null,"COMM":null,"DEPTNO":null}
{"EMPNO":3,"ENAME":"harish","JOB":"developer","MGR":null,"HIREDATE":null,"SAL":null,"COMM":null,"DEPTNO":null}
{"EMPNO":555,"ENAME":"TESTTTT","JOB":"SOFTWARE","MGR":null,"HIREDATE":null,"SAL":null,"COMM":null,"DEPTNO":null}
{"EMPNO":102,"ENAME":"harish","JOB":"developer","MGR":null,"HIREDATE":null,"SAL":null,"COMM":null,"DEPTNO":null}
{"EMPNO":444,"ENAME":"TESTTTT","JOB":"ENR2222","MGR":null,"HIREDATE":null,"SAL":null,"COMM":null,"DEPTNO":null}
{"EMPNO":700,"ENAME":"PANIPOORI","JOB":"CRICKET","MGR":null,"HIREDATE":null,"SAL":null,"COMM":null,"DEPTNO":null}
{"EMPNO":101,"ENAME":"harish","JOB":"developer","MGR":null,"HIREDATE":null,"SAL":null,"COMM":null,"DEPTNO":null}
</code>

I want to insert into the emp table, or is there any other way to create insert scripts based on the JSON file? Thanks, Praveen
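A hedged sketch of one route (assuming the lines are first loaded into a staging table with a doc column, e.g. via an external table): JSON_TABLE projects each document's attributes into relational columns ready for INSERT ... SELECT:

<code>
INSERT INTO emp (empno, ename, job, mgr, hiredate, sal, comm, deptno)
SELECT jt.*
  FROM json_stage s,                      -- hypothetical staging table
       JSON_TABLE(s.doc, '$'
         COLUMNS (empno    NUMBER        PATH '$.EMPNO',
                  ename    VARCHAR2(50)  PATH '$.ENAME',
                  job      VARCHAR2(50)  PATH '$.JOB',
                  mgr      NUMBER        PATH '$.MGR',
                  hiredate DATE          PATH '$.HIREDATE',
                  sal      NUMBER        PATH '$.SAL',
                  comm     NUMBER        PATH '$.COMM',
                  deptno   NUMBER        PATH '$.DEPTNO')) jt;
</code>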

SQL Profiles Long Term Benefits and Issues

AskTom - Hot Questions - Wed, 09/16/2020 - 00:00
Hello, I researched your Q&A bank before asking this question. What are some of the things to keep in mind where the benefit of using a SQL profile turns into a curse? What are the things that can cause the use of a SQL profile to go bad, or problems that can go unnoticed as a result of using the profile? Thank you.
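As a hedged housekeeping sketch related to the question (the profile name is hypothetical): reviewing existing profiles and disabling a suspect one is the usual first step when a profile created against old data volumes starts steering the optimizer badly:

<code>
SELECT name, status, created, last_modified FROM dba_sql_profiles;

BEGIN
  DBMS_SQLTUNE.ALTER_SQL_PROFILE(
    name           => 'SYS_SQLPROF_0123456789abcdef',  -- hypothetical
    attribute_name => 'STATUS',
    value          => 'DISABLED');
END;
/
</code>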

Is there a way to interrupt rollback if I don't care about the table?

AskTom - Hot Questions - Tue, 09/08/2020 - 00:00
Hello Connor, hello Chris. Oracle version: 12.1.0.2.0. I am wondering if there is a way to interrupt a long-running rollback if we don't care what happens to the table. After the interrupt we would truncate it anyway. A junior developer attempted to populate an empty table with many indexes with tens of millions of rows. It didn't fit into the undo and the transaction went into a very long rollback. It lasted at least 10 times longer than the query had run before the rollback. During that time the database barely responded to other queries, which slowed down or blocked his work and the work of other people. This has happened to me in the past as well, and it is very frustrating to wait hours and hours until a useless (from our perspective, not Oracle's) task finishes because of a mistake. I know that he should have disabled the indexes and used /*+ append */; I'm just asking, is there a way to tell Oracle 'I don't care about that table'?
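For what it's worth, a hedged monitoring sketch (it cannot skip the rollback itself, but it shows progress and lets transaction recovery run in parallel after an instance restart):

<code>
-- Progress of transactions being rolled back after recovery.
SELECT usn, state, undoblockstotal, undoblocksdone
  FROM v$fast_start_transactions;

-- Let SMON use more parallel recovery servers (as SYS).
ALTER SYSTEM SET fast_start_parallel_rollback = HIGH;
</code>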

Select Data with Partition Name

AskTom - Hot Questions - Fri, 08/28/2020 - 00:00
Hi, I have a table with partitions. I am trying to select the data and the partition name in the same query. I have created a part_test table and inserted data; it is on the Live SQL link. I am looking to query something like this: SELECT ld_dt, ld_src, <partition name> FROM part_test, all_tab_partitions, joining part_test with all_tab_partitions to get both the data and the partition name. Any help is greatly appreciated. Thanks, Mani
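A hedged sketch of one well-known technique (assuming part_test is a heap table where each partition has its own data object number): derive the partition from each row's ROWID and join to the dictionary, rather than joining the data itself to all_tab_partitions:

<code>
SELECT p.ld_dt, p.ld_src, o.subobject_name AS partition_name
  FROM part_test p
  JOIN user_objects o
    ON o.data_object_id = DBMS_ROWID.ROWID_OBJECT(p.rowid)
 WHERE o.object_name = 'PART_TEST';
</code>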
