Feed aggregator

uninitialized collections

AskTom - Hot Questions - Tue, 02/16/2021 - 00:00
Hi, I have the following error with this code:

<code>--Exercici 1
Create type Adreca_T As Object(
  carrer     varchar2(80),
  numero     number(3),
  poblacio   varchar2(80),
  codiPostal number(5)
);

Create type E_Empleat_T As Object(
  Nif           varchar2(9),
  Nom           varchar2(50),
  adreca        Adreca_T,
  dataNaixement date,
  sou           number(5),
  cap           REF E_Empleat_T
);

create table TaulaEmpleats of E_Empleat_T;

insert into TaulaEmpleats values ('38234567A', 'Josep Verdu Bargallo',
  Adreca_T('Estanislau Figueres', 25, 'Tarragona', 43002),
  to_date('12/12/1973', 'dd/mm/yyyy'), 4500, null);

insert into TaulaEmpleats values ('39311345X', 'Pere Garcia Llorens',
  Adreca_T('Blanca dAnjou', 45, 'Tarragona', 43002),
  to_date('05/07/1980', 'dd/mm/yyyy'), 1000,
  (select REF(E) from TaulaEmpleats E where nif = '38234567A'));

--Exercici 2
CREATE TYPE telefons_T is VARRAY(5) of varchar2(10);

Create or replace type E_Empleat2_T As Object(
  Nif           varchar2(9),
  Nom           varchar2(50),
  adreca        Adreca_T,
  dataNaixement date,
  sou           number(5),
  cap           REF E_Empleat2_T,
  Telefon       Telefons_t,
  member procedure afegirTelefon(Telefon in varchar2),
  member procedure modificarTelefon(telefonold in varchar2, telefonnew in varchar2)
);

drop table TaulaEmpleats;
create table TaulaEmpleats of E_Empleat2_T;

insert into TaulaEmpleats values ('38234567A', 'Josep Verdu Bargallo',
  Adreca_T('Estanislau Figueres', 25, 'Tarragona', 43002),
  to_date('12/12/1973', 'dd/mm/yyyy'), 4500, null, null);

insert into TaulaEmpleats values ('39311345X', 'Pere Garcia Llorens',
  Adreca_T('Blanca dAnjou', 45, 'Tarragona', 43002),
  to_date('05/07/1980', 'dd/mm/yyyy'), 1000,
  (select REF(E) from TaulaEmpleats E where nif = '38234567A'), null);

--Exercici 3
Create or Replace Type body E_Empleat2_T as

  member PROCEDURE afegirTelefon (telefon VARCHAR2) as
    taula_af telefons_T;
    numero   number(5);
  begin
    select telefon into taula_af from TaulaEmpleats where nif = SELF.Nif;
    numero := taula_af.Count;
    taula_af.EXTEND();
    taula_af(numero + 1) := Telefon;
    update TaulaEmpleats set telefon = taula_af where nif = SELF.Nif;
  end afegirTelefon;

  member PROCEDURE modificarTelefon (telefonold VARCHAR2, telefonnew VARCHAR2) as
    taula_af telefons_T;
    i        number(5);
    fi       boolean;
  begin
    select telefon into taula_af from TaulaEmpleats where nif = SELF.Nif;
    i  := 1;
    fi := false;
    while (i <= taula_af.COUNT and not fi) loop
      if (taula_af(i) = telefonold) then
        taula_af(i) := telefonnew;
        fi := true;
      end if;
      i := i + 1;
    end loop;
    Update TaulaEmpleats set telefon = taula_af where nif = SELF.Nif;
  end modificarTelefon;

end;

declare
  x E_Empleat2_T;
begin
  select value(E) into x from TaulaEmpleats E where E.nif = '39311345X';
  x.afegirTelefon('659312111');
  update TaulaEmpleats E set E = x where E.nif = '39311345X';
End;</code>

The problem is in "Exercici 3". The error banner (translated from Spanish):

Error starting at line 1 of the command:
declare x E_Empleat2_T; begin select value(E) into x from TaulaEmpleats E where E.nif = '39311345X'; x.afegirTelefon('659312111'); update TaulaEmpleats E set E= x where E.nif='39311345X'; End;

The error:

Error report -
ORA-06531: Reference to uninitialized collection
ORA-06512: at "SYSTEM.E_EMPLEAT2_T", line 8
ORA-06512: at line 5
06531. 00000 - "Reference to uninitialized collection"
*Cause:  An element or member function of a nested table or varray was referenced (where an initialized collection is needed) without the collection having been initialized.
*Action: Initialize the collection with an appropriate constructor or whole-object assignment.

Any help will be appreciated.
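The inserts above put NULL into the Telefon column, so the varray fetched in afegirTelefon is atomically null and EXTEND raises ORA-06531. A minimal sketch of one fix, assuming the same Telefons_T type: construct an empty collection before extending it.

<code>member PROCEDURE afegirTelefon (telefon VARCHAR2) as
  taula_af telefons_T;
begin
  select telefon into taula_af from TaulaEmpleats where nif = SELF.Nif;
  -- the column was inserted as NULL, so construct an empty varray first
  if taula_af is null then
    taula_af := telefons_T();
  end if;
  taula_af.EXTEND;
  taula_af(taula_af.COUNT) := telefon;
  update TaulaEmpleats set telefon = taula_af where nif = SELF.Nif;
end afegirTelefon;</code>

Alternatively, insert Telefons_T() instead of NULL in the INSERT statements so the collection is never null.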

UTL_HTTP.begin_request

AskTom - Hot Questions - Tue, 02/16/2021 - 00:00
Hi, could you help me, please? I tried to execute a simple command and it does not work:

<code>SELECT utl_http.request('http://www.oracle.com/') FROM dual
*
ERROR at line 1:
ORA-29273: HTTP request failed
ORA-12535: TNS:operation timed out
ORA-06512: at "SYS.UTL_HTTP", line 1491
ORA-06512: at line 1</code>

I tried in a cloud environment and it does not work either. I am connected as Admin:

<code>ORA-01031: insufficient privileges
ORA-06512: at "SYS.UTL_HTTP", line 1810
ORA-06512: at "SYS.UTL_HTTP", line 136
ORA-06512: at "SYS.UTL_HTTP", line 1745
ORA-06512: at line 1</code>

I want to build an interface using HTTP to send data to an IoT platform. When I comment out the line containing the UTL_HTTP.begin_request call, no error occurs, which shows I do have access to the UTL_HTTP package...

<code>DECLARE
  l_url           VARCHAR2(50) := 'http://api.losant.com';
  l_http_request  UTL_HTTP.req;
  l_http_response UTL_HTTP.resp;
BEGIN
  -- Make an HTTP request and get the response.
  --l_http_request := UTL_HTTP.begin_request(l_url, 'GET', 'HTTP/1.1');  <<----------------- Why don't I have access?
  --l_http_response := UTL_HTTP.get_response(l_http_request);
  UTL_HTTP.end_response(l_http_response);
END;</code>
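The ORA-01031 from begin_request is typically a network ACL issue rather than a missing EXECUTE grant. A hedged sketch of the 12c+ ACL grant (MY_APP_USER is a placeholder for the actual calling user):

<code>BEGIN
  DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
    host       => 'api.losant.com',
    lower_port => 80,
    upper_port => 443,
    ace        => xs$ace_type(
                    privilege_list => xs$name_list('http'),
                    principal_name => 'MY_APP_USER',   -- placeholder
                    principal_type => xs_acl.ptype_db));
END;
/</code>

The earlier ORA-12535 timeout, by contrast, usually means a firewall or proxy is blocking the outbound connection, which no database grant can fix.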

Databases producing excessive loads of archive logs

AskTom - Hot Questions - Wed, 02/10/2021 - 00:00
Hi all, our company is running Oracle Database 19c (version 19.9) on Windows 2012 R2 servers. Recently I have been facing an issue with two databases producing excessive loads of archive logs, around 100 GB per day. I tried resizing the redo logs; what I achieved was fewer archives, but the total log volume remained the same. Moreover, trying to find out what causes such high log volumes, it usually appears to be the DBWx and M00x Oracle processes producing the huge redo, even at night. But how could I figure out what is causing those processes to produce such loads all day? E.g. is it the application, a materialized view refreshing frequently, etc.? As I am a newbie, could someone please give me a suggestion on how/where/what to look at further for this issue, or even some documentation to go through? Thanks in advance. Best regards, VangelP
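One common starting point, as a sketch rather than a full diagnosis: rank sessions by redo generated, then drill into the top ones (AWR or LogMiner can then attribute redo to specific segments or SQL).

<code>-- Which sessions have generated the most redo since they connected?
SELECT s.sid, s.serial#, s.username, s.program, st.value AS redo_bytes
FROM   v$sesstat  st
JOIN   v$statname sn ON sn.statistic# = st.statistic#
JOIN   v$session  s  ON s.sid = st.sid
WHERE  sn.name = 'redo size'
ORDER  BY st.value DESC
FETCH FIRST 10 ROWS ONLY;</code>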

Direct path insert via dblink

AskTom - Hot Questions - Fri, 02/05/2021 - 00:00
Hello Chris, hello Connor. Is there any way to do a direct-path insert when pushing data via a dblink? Our DWH currently truncates and loads data into our tables, and it takes far too long. I'm not sure exactly what makes it so slow, whether it's the load on their servers, the network, or something else, but I am certain that generating undo on our side is unnecessary, because the tables can be either empty or in a complete state. The DWH won't allow us to create a dblink to their database, so we cannot run an insert /*+ append */ on our side. In other answers I've stumbled on mentions that dblinks are not a good way to transfer large amounts of data between databases. What is a good way to do that, aside from GoldenGate?
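For context: a direct-path insert works when the INSERT runs on the database that owns the target table and the data is pulled over the link, which is exactly the topology the DWH disallows here. A sketch of that pull pattern (dwh_link and the table names are placeholders):

<code>-- Runs on the receiving database; APPEND is honoured because the
-- insert is local and only the SELECT goes over the link.
INSERT /*+ APPEND */ INTO my_table
SELECT * FROM source_table@dwh_link;
COMMIT;</code>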

RLS Causing Performance issue

AskTom - Hot Questions - Wed, 02/03/2021 - 00:00
Hi Tom, we have implemented RLS based on the client identifier of a session. When RLS is applied for a particular client identifier, the query keeps on running; if I apply the same where condition as an admin user, the query runs fast. Is this due to RLS or to some other reason? Is the where condition applied to each row of the table, or to the entire table at once?
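On the second part: the policy predicate is appended to the query text once at execution, not evaluated row by row, so a slow query usually means the added predicate changed the access path. A sketch for inspecting what was actually added, using the standard v$vpd_policy view:

<code>SELECT sql_id, object_name, policy, predicate
FROM   v$vpd_policy;</code>

Comparing the plan of the rewritten statement with the admin user's plan should show where they diverge.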

Internal function in plan

AskTom - Hot Questions - Wed, 02/03/2021 - 00:00
Hi Team,

<code>------------------------------------------------------------------------------------------------------------------------
| Id | Operation                               | Name       | Rows | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                        |            |      |       |     4 (100)|          |       |       |
|  1 |  SORT AGGREGATE                         |            |    1 |    54 |            |          |       |       |
|  2 |   NESTED LOOPS                          |            |    3 |   162 |     4   (0)| 00:00:01 |       |       |
|  3 |    NESTED LOOPS                         |            |    3 |   162 |     4   (0)| 00:00:01 |       |       |
|  4 |     PARTITION HASH SINGLE               |            |    3 |   108 |     3   (0)| 00:00:01 |   KEY |   KEY |
|* 5 |      TABLE ACCESS BY GLOBAL INDEX ROWID | TABLE1     |    3 |   108 |     3   (0)| 00:00:01 | ROWID | ROWID |
|* 6 |       INDEX RANGE SCAN                  | IDX_TABLE1 |   27 |       |     1   (0)| 00:00:01 |   KEY |   KEY |
|  7 |     PARTITION HASH ALL                  |            |    1 |       |     1   (0)| 00:00:01 |     1 |    64 |
|* 8 |      INDEX RANGE SCAN                   | IDX_TABLE2 |    1 |       |     1   (0)| 00:00:01 |     1 |    64 |
|* 9 |    TABLE ACCESS BY LOCAL INDEX ROWID    | TABLE2     |    1 |    18 |     1   (0)| 00:00:01 |     1 |     1 |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - filter(("TABLE1"."PART_T_TYPE"=:SYS_B_3 AND "TABLE1"."DFLG"=:SYS_B_4 AND "TABLE1"."PFLG"=:SYS_B_5))
   6 - access("TABLE1"."AIC"=:1)
   8 - access("TABLE1"."TRIAN_ID"="TABLE2"."TRIAN_ID")
   9 - filter((INTERNAL_FUNCTION("DEL_ID") AND "TABLE2"."COL_ID"=:2))</code>

I need to know the cause of the INTERNAL_FUNCTION on the DEL_ID column.

<code>select count(*)
from TABLE1 a, TABLE2 b
where a.train_id = b.train_id        ===> datatype is the same for both joining columns
and a.aic = 'AaaaaaQWESQ'
AND (DEL_ID) in ('OOO','QQQ')        ====> causing internal function; datatype is varchar2(10)
and a.part_t_type = 'L'
and a.dflg = 'N'
and a.pflg = 'Y'
and a.col_ID = b.col_ID
and a.col_ID = 'QWE'</code>

Manual execution: select count(*) from TABLE2 b where (DEL_ID) in ('OOO','QQQ'). The actual plan for that shows no internal function. TABLE2 is a hash-partitioned table and b.train_id is the partitioning key.
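A hedged observation: the :SYS_B_n binds in the plan mean literals are being replaced (cursor_sharing is not EXACT), and when an IN-list of literals is turned into binds the filter is sometimes displayed as INTERNAL_FUNCTION even without a datatype mismatch. That would also explain why the manual execution with real literals shows no internal function. Two quick checks, as a sketch:

<code>-- Is literal replacement active?
show parameter cursor_sharing

-- Rule out an implicit datatype conversion on DEL_ID:
SELECT column_name, data_type, data_length
FROM   all_tab_columns
WHERE  table_name = 'TABLE2'
AND    column_name = 'DEL_ID';</code>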

Hierarchical CONNECT BY NOCYCLE Issue

AskTom - Hot Questions - Wed, 02/03/2021 - 00:00
Hi, the shared LiveSQL link contains a reproducible test case. I'm interested in listing all the parent entities for a given child entity. Below is the output I get when there is no cyclical loop condition:

<code>SELECT child_entity AS child,
       parent_entity AS Parent,
       level,
       CONNECT_BY_ISCYCLE AS IsLoop,
       SYS_CONNECT_BY_PATH(child_entity,'\') AS Path
FROM   entities
START WITH child_entity = 'abet'
CONNECT BY NOCYCLE child_entity = PRIOR parent_entity
ORDER SIBLINGS BY child_entity;

CHILD  PARENT LEVEL ISLOOP PATH
------ ------ ----- ------ ----------------
abet   abe        1      0 \abet
abe    ab         2      0 \abet\abe
ab     a          3      0 \abet\abe\ab
a      null       4      0 \abet\abe\ab\a
ab     c          3      0 \abet\abe\ab
abe    d          2      0 \abet\abe
abet   b          1      0 \abet
b      null       2      0 \abet\b

8 rows selected.</code>

Now when I introduce a cyclical loop condition, the output is as below. Duplicate last 2 rows can be seen in the output.

<code>insert into entities values('abet','a',900);
commit;

SELECT child_entity AS child,
       parent_entity AS Parent,
       level,
       CONNECT_BY_ISCYCLE AS IsLoop,
       SYS_CONNECT_BY_PATH(child_entity,'\') AS Path
FROM   entities
START WITH child_entity = 'abet'
CONNECT BY NOCYCLE child_entity = PRIOR parent_entity
ORDER SIBLINGS BY child_entity;

CHILD  PARENT LEVEL ISLOOP PATH
------ ------ ----- ------ ----------------
abet   abe        1      0 \abet
abe    ab         2      0 \abet\abe
ab     a          3      0 \abet\abe\ab
a      abet       4      1 \abet\abe\ab\a
abet   b          5      0 \abet\abe\ab\a\abet
b      null       6      0 \abet\abe\ab\a\abet\b
a      null       4      0 \abet\abe\ab\a
ab     c          3      0 \abet\abe\ab
abe    d          2      0 \abet\abe
abet   b          1      0 \abet
b      null       2      0 \abet\b

11 rows selected.</code>

I'm expecting to produce output like the below. How can I remove the duplicate extra two rows from the SQL result above? Please share inputs on how I can achieve this.

<code>CHILD  PARENT LEVEL ISLOOP PATH
------ ------ ----- ------ ----------------
abet   abe        1      0 \abet
abe    ab         2      0 \abet\abe
ab     a          3      0 \abet\abe\ab
a      abet       4      1 \abet\abe\ab\a
abet   b          5      0 \abet\abe\ab\a\abet
b      null       6      0 \abet\abe\ab\a\abet\b
a      null       4      0 \abet\abe\ab\a
ab     c          3      0 \abet\abe\ab
abe    d          2      0 \abet\abe</code>

Thanks.
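A sketch of one way to get the expected nine rows, under the assumption that the unwanted duplicates are exactly the (child, parent) pairs already reached on a longer path: rank rows per pair and keep the deepest one. Note the outer query can no longer use ORDER SIBLINGS BY, so the row order may differ.

<code>SELECT child, parent, lvl, isloop, path
FROM (
  SELECT child_entity AS child,
         parent_entity AS parent,
         level AS lvl,
         CONNECT_BY_ISCYCLE AS isloop,
         SYS_CONNECT_BY_PATH(child_entity,'\') AS path,
         ROW_NUMBER() OVER (PARTITION BY child_entity, parent_entity
                            ORDER BY level DESC) AS rn
  FROM   entities
  START WITH child_entity = 'abet'
  CONNECT BY NOCYCLE child_entity = PRIOR parent_entity
)
WHERE rn = 1;</code>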

SQL For Elimination of Overlaps + MIN and MAX Project Wise

AskTom - Hot Questions - Tue, 02/02/2021 - 00:00
The requirement here is to eliminate overlaps, but also to take the MIN and MAX plus distinct combinations per CUST_EQP_CONTRACT_NUM (project) accordingly.

<code>--create table1 (EXPORT_TABLE): this is my actual data
create table EXPORT_TABLE(
  CUST_EQP_CONTRACT_NUM    varchar2(150),
  EQP_ANT_BASE_HEIGHT_CALC number,
  EQP_ANT_TIP_HEIGHT_CALC  number);

Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('168903',166,174);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('168903',167.52,172.48);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('168903',168.53,171.48);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('283874',99.69,108.31);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('283874',102.69,111.31);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('283874',231,239);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('283874',231.5,238.5);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('283874',232.5,240.5);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('283874',233.58,239.82);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('507286',98.98,101.02);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('507286',118.98,121.02);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('507286',138.5,161.5);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('507286',148.5,171.5);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('666905',255,263);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('666905',256.64,261.36);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('168999',4.2,5.2);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('168999',3.9,4.9);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('168999',2.1,4.8);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('168999',6.2,6.8);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('168999',5.9,6.5);</code>

Now I will create another table just to eliminate duplicate source rows and see how many distinct combinations are available per project (CUST_EQP_CONTRACT_NUM):

<code>create table export_table2 as
select distinct CUST_EQP_CONTRACT_NUM, EQP_ANT_BASE_HEIGHT_CALC, EQP_ANT_TIP_HEIGHT_CALC
from EXPORT_TABLE;

---DESIRED OUTPUT
CUST_EQP_CONTRACT_NUM  EQP_ANT_BASE_HEIGHT_CALC  EQP_ANT_TIP_HEIGHT_CALC
168903                 166                       174
283874                 99.69                     111.31
283874                 231                       240.5
507286                 98.98                     101.02
507286                 118.98                    121.02
507286                 138.5                     171.5
666905                 255                       263
168999                 2.1                       5.2
168999                 5.9                       6.8

--DB VERSION
select * from v$version;

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE 12.1.0.2.0 Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production</code> I Took...
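For readers looking for the standard approach: this is the gaps-and-islands pattern. A sketch, assuming two ranges belong together when one starts at or below the running maximum tip within a project:

<code>SELECT cust_eqp_contract_num,
       MIN(eqp_ant_base_height_calc) AS eqp_ant_base_height_calc,
       MAX(eqp_ant_tip_height_calc)  AS eqp_ant_tip_height_calc
FROM (
  SELECT x.*,
         SUM(new_grp) OVER (PARTITION BY cust_eqp_contract_num
                            ORDER BY eqp_ant_base_height_calc,
                                     eqp_ant_tip_height_calc) AS grp
  FROM (
    SELECT t.*,
           -- 1 = this range starts a new island (no overlap with any earlier range)
           CASE WHEN eqp_ant_base_height_calc <=
                     MAX(eqp_ant_tip_height_calc)
                       OVER (PARTITION BY cust_eqp_contract_num
                             ORDER BY eqp_ant_base_height_calc,
                                      eqp_ant_tip_height_calc
                             ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)
                THEN 0 ELSE 1 END AS new_grp
    FROM export_table2 t
  ) x
)
GROUP BY cust_eqp_contract_num, grp
ORDER BY 1, 2;</code>

Walking the sample data through this by hand reproduces the desired output, e.g. for project 168999 the islands 2.1-5.2 and 5.9-6.8.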

What does it take to support Exadata competently?

AskTom - Hot Questions - Tue, 02/02/2021 - 00:00
GM, we are considering moving to Exadata. I am both excited and nervous about having to support something that seems to require a skillset beyond what most DBAs have. How much knowledge, beyond being experienced in RAC and having a good understanding of the Oracle engine, is recommended for supporting Exadata? If the gap is significant, should the company bring in a consultant to set it up and then support it in production for 3-6 months, for fear that someone without experience will set it up incorrectly or cause outages while managing it? Thanks, John

When Dba_Scheduler_Jobs.Start_Date is not specified, where does Oracle store Job Enabling time?

AskTom - Hot Questions - Thu, 01/28/2021 - 00:00
Hello, I have a question on DBMS_SCHEDULER functionality. When I create a scheduler job via the DBMS_SCHEDULER.CREATE_JOB procedure and don't specify a Start_Date value, the job is scheduled for execution as soon as it is enabled, which is correct. I want to know where Oracle stores the job enabling time, because that same enabling time can be seen in dba_scheduler_job_run_details.Req_Start_Date when the job finishes. Basically, I want the enabling time for a job which is in SCHEDULED state and whose Start_Date value is NULL. Thanks for sharing knowledge!
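A sketch of one place to look, assuming the job has only been enabled once since creation: DBA_SCHEDULER_JOBS exposes a LAST_ENABLED_TIME column.

<code>SELECT job_name, state, start_date, last_enabled_time
FROM   dba_scheduler_jobs
WHERE  job_name = 'MY_JOB';   -- MY_JOB is a placeholder</code>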

A replacement technology in Oracle database for ETL process

AskTom - Hot Questions - Thu, 01/28/2021 - 00:00
Hi, we currently use a third-party tool for the ingestion of files into Oracle database tables. This piece of software is quite expensive, and I am investigating the possibility of replicating a small part of its functionality directly within Oracle technologies instead. I appreciate that this question is vague, so I will keep to the high-level features that need to be replicated. If you need more information, I am more than happy to discuss this further.

* The ETL tool has its own scripting language that is dynamically generated for each invocation of the ETL process. Most of the dynamic parts of the script are driven from data values within the Oracle database. Once generated, the script is executed to perform the work.
* The ability to read text and binary files, including multiple file formats within a single input file. Dynamic file formats are an example of the previous requirement.
* The ability to perform optional enrichments by looking up a key value.
* Bulk loading of the transformed data directly into an Oracle table partition.
* Multiple executions of the same ETL process may occur simultaneously, working on different sets of files.

I have experimented with the obvious candidates such as external tables, global temporary tables, SQL, in-memory, and PL/SQL pipelined functions. Whilst I can get the correct output for a simple use case, I cannot match the speed of the ETL tool using Oracle technologies. I discounted the use of SQL joins because some of the enrichment lookups are conditional, sometimes based upon another value in the data row being processed. There could be multiple lookups per row and tens of millions of rows in a single file. I discounted the use of SQL inside PL/SQL because of the number of context switches this would incur. I am stuck for ideas right now and would really appreciate your input as to whether there is another technique for implementing this type of ETL process directly in SQL or PL/SQL. Many thanks in advance for any help and advice you can give. Mark.
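For reference, the building block most answers start from is an external table read in parallel and direct-path loaded in a single SQL statement, which avoids PL/SQL context switches entirely. A minimal sketch (directory, file, and table names are hypothetical):

<code>CREATE TABLE staged_feed_ext (
  id      NUMBER,
  payload VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY etl_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('feed_001.csv')
)
PARALLEL;

-- Direct-path load straight into the target partition;
-- enrichment joins would be added to the SELECT.
INSERT /*+ APPEND */ INTO target_table PARTITION (p_2021_01)
SELECT id, payload
FROM   staged_feed_ext;</code>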

Table with LONG data type not being freed

AskTom - Hot Questions - Mon, 01/18/2021 - 00:00
Hi Tom, we are dealing with a system which has some legacy tables with a LONG column. One of these tables contains critical data (email bodies and attachments, chat transcripts, etc.) in the LONG column. Unfortunately we cannot remove or even change this table in any way; the system is heavily dependent on it. Two years ago we introduced the following solution:

- A new table with the same columns, but CLOB instead of LONG
- A minutely job, which copies newly inserted records from the original table into the new table, converting LONG to CLOB
- A daily job which deletes already-copied records from the old table

When this solution was introduced, we did a manual cleanup of the original table:

- Stop the system which uses the table
- Create a new table with the exact same definition (LONG column)
- Copy over all records which had to be kept
- Rename the original table with an "_OLD" postfix and the new table to the original table name
- Drop the old table

However, after 2 years, the customer where this database is located wanted to drop old partitions (note: the problematic table is NOT partitioned), and while checking disk space they noticed that the original table is still growing very large (this was the original problem 2 years ago, too). Our expectation from the above solution was that while Oracle will not free up the disk space after the DELETE because of the LONG column, it would still reuse existing blocks, so the table would stop growing at a certain point. However, this is not what is happening; the table keeps growing, day by day. Currently it has about 13,000 records, but over 19 million blocks. About 18,000 records are added during the day, and each of those is deleted at night. I tried to reproduce the problem using the SQL below, but I'm not sure it's fully correct; it produces a different result, which is what we were expecting: after we delete and insert again, the used block count does not increase.

<code>-- Helper proc
CREATE OR REPLACE PROCEDURE show_space (
  PI_TABLE_NAME IN VARCHAR2
) AS
  l_unformatted_blocks NUMBER;
  l_unformatted_bytes  NUMBER;
  l_fs1_blocks         NUMBER;
  l_fs1_bytes          NUMBER;
  l_fs2_blocks         NUMBER;
  l_fs2_bytes          NUMBER;
  l_fs3_blocks         NUMBER;
  l_fs3_bytes          NUMBER;
  l_fs4_blocks         NUMBER;
  l_fs4_bytes          NUMBER;
  l_full_blocks        NUMBER;
  l_full_bytes         NUMBER;

  PROCEDURE p (
    p_label IN VARCHAR2,
    p_num   IN NUMBER
  ) IS
  BEGIN
    dbms_output.put_line(rpad(p_label, 40, '.') || p_num);
  END;
BEGIN
  dbms_space.space_usage(
    'CCQ', PI_TABLE_NAME, 'TABLE',
    l_unformatted_blocks, l_unformatted_bytes,
    l_fs1_blocks, l_fs1_bytes,
    l_fs2_blocks, l_fs2_bytes,
    l_fs3_blocks, l_fs3_bytes,
    l_fs4_blocks, l_fs4_bytes,
    l_full_blocks, l_full_bytes);

  p('Unformatted Blocks ', l_unformatted_blocks);
  p('FS1 Blocks (0-25)  ', l_fs1_blocks);
  p('FS2 Blocks (25-50) ', l_fs2_blocks);
  p('FS3 Blocks (50-75) ', l_fs3_blocks);
  p('FS4 Blocks (75-100)', l_fs4_blocks);
  p('Full Blocks        ', l_full_blocks);
END;
/

-- Dummy table, exact same definition as original table
CREATE TABLE test_long (
  qwkey     NUMBER(38, 0) NOT NULL,
  tablename VARCHAR2(90 BYTE),
  fieldname VARCHAR2(90 BYTE),
  textkey   NUMBER(38, 0),
  text      LONG
);

ALTER TABLE test_long ADD CONSTRAINT pk_test_long PRIMARY KEY (qwkey) USING INDEX;

-- Original stats
BEGIN
  dbms_stats.gather_table_stats(ownname => '"CCQ"', tabname => '"TEST_LONG"', estimate_percent => 1);
END;

BEGIN
  show_space('TEST_LONG');
END;

/* Output: Unformatted...</code>

The JSON query results does not return the full path

AskTom - Hot Questions - Mon, 01/18/2021 - 00:00
Hi, I'm trying to evaluate JSON queries. For example, I have the following JSON string in the table:

<code>{
  "WO": [
    {
      "id": "1",
      "name": "WO1",
      "action": "add",
      "items": [
        { "id": "1", "author": "Tom Ask" },
        { "id": "2", "author": "Lee Frik" }
      ]
    },
    {
      "id": "3",
      "name": "WO1",
      "action": "add",
      "items": [
        { "id": "1", "author": "John A" },
        { "id": "2", "author": "Jean Pontus" }
      ]
    }
  ]
}</code>

<code>select d.wo_data.WO.items.id from workorder_json d;</code>

I get the following result:

<code>["Tom Ask","Lee Frik","John A","Jean Pontus"]</code>

What I hoped for is a result with the full path, like the following:

<code>{
  "WO": [
    "items": [
      { "author": "Tom Ask" },
      { "author": "Lee Frik" }
    ],
    "items": [
      { "author": "John A" },
      { "author": "Jean Pontus" }
    ]
  ]
}</code>

Is there any way to return it like the above?
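Simple dot notation flattens the result into a scalar array. A sketch with JSON_TABLE that at least preserves which WO each item came from (column names assumed from the example):

<code>SELECT jt.wo_id, jt.item_id, jt.author
FROM   workorder_json d,
       JSON_TABLE(d.wo_data, '$.WO[*]'
         COLUMNS (
           wo_id VARCHAR2(10) PATH '$.id',
           NESTED PATH '$.items[*]'
             COLUMNS (
               item_id VARCHAR2(10) PATH '$.id',
               author  VARCHAR2(50) PATH '$.author'
             )
         )) jt;</code>

From those relational rows, JSON_OBJECT/JSON_ARRAYAGG could rebuild whatever nested shape is required.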

Is there a view that a DBA can query to find out if "ORA-02393: exceeded call limit on CPU usage"

AskTom - Hot Questions - Wed, 01/13/2021 - 00:00
Greetings, I've seen that when the "cpu_per_call" limit is reached, ORA-02393 is sent to SQL*Plus. Is there a view that a DBA can query to find out whether "ORA-02393: exceeded call limit on CPU usage" occurs in applications using the database, since it isn't written to the alert log? Thanks, John
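One hedged approach, as a sketch: capture the error yourself with a SERVERERROR trigger, since the error goes to the client rather than the alert log (worth testing that the trigger actually fires for this particular error before relying on it):

<code>CREATE TABLE cpu_limit_errors (
  occurred_at TIMESTAMP,
  db_user     VARCHAR2(128)
);

CREATE OR REPLACE TRIGGER trg_catch_ora02393
AFTER SERVERERROR ON DATABASE
BEGIN
  IF ora_is_servererror(2393) THEN
    INSERT INTO cpu_limit_errors VALUES (SYSTIMESTAMP, ora_login_user);
  END IF;
END;
/</code>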

accent/case insensitive search

AskTom - Hot Questions - Wed, 01/06/2021 - 00:00
Hello TOM, and Happy New Year! We have a table with a VARCHAR2(200) column that stores names with European accents: Valerie, Veronique, and so on. We would like to perform a CI/AI (case-insensitive, accent-insensitive) search on those values, i.e. people searching for "valerie"/"VALERIE" will still fetch "Valerie", and so on. I am thinking of several options:

(1) UTL_RAW.CAST_TO_VARCHAR2(NLSSORT(name, 'nls_sort=binary_ai')) = UTL_RAW.CAST_TO_VARCHAR2(NLSSORT(input_name, 'nls_sort=binary_ai')). But this is awfully slow in SQL (query response time 3-5x slower than without).

(2) Call EXECUTE IMMEDIATE('ALTER SESSION SET NLS_COMP = LINGUISTIC'); EXECUTE IMMEDIATE('ALTER SESSION SET NLS_SORT = BINARY_AI'); But this brings me back to (1).

(3) Create an invisible column called curated_name that is generated always as UTL_RAW.CAST_TO_VARCHAR2(NLSSORT(name, 'nls_sort=binary_ai')). But Oracle complains the function is not deterministic (which is true, as it depends on the language).

(4) Create a column with collation BINARY_AI. But I've just discovered a new Oracle bug: 32335770 - COLLATION BINARY_AI NOT RETURNING ALL IDENTICAL ROWS.

(5) Oracle Text. But this requires careful maintenance.

Any other ideas? I'm inclined to use (4), if only the bug gets fixed by Oracle. Thanks,
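One classic companion to option (2), as a sketch: back the linguistic comparison with a matching function-based index so it does not fall back to a full scan (my_table is a placeholder):

<code>ALTER SESSION SET NLS_COMP = LINGUISTIC;
ALTER SESSION SET NLS_SORT = BINARY_AI;

-- Index expression must match the session's linguistic sort:
CREATE INDEX name_ai_ix ON my_table (NLSSORT(name, 'NLS_SORT=BINARY_AI'));

-- With the settings above, this equality can use the index:
SELECT * FROM my_table WHERE name = 'valerie';</code>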

MATERIALIZED VIEW Performance Issue!

AskTom - Hot Questions - Wed, 01/06/2021 - 00:00
I have created an MV on a UAT server. My MV uses a query with remote connectivity to PROD (select-only rights on those tables), which have around a million (10 lakh) rows each, but after calculation the output of the query is only 139-150 rows. The query alone, without the MV, takes 60 seconds, but when I use

<code>CREATE MATERIALIZED VIEW ... NOCOMPRESS NOLOGGING
BUILD IMMEDIATE
USING INDEX
REFRESH FORCE ON DEMAND NEXT null
USING DEFAULT LOCAL ROLLBACK SEGMENT
USING ENFORCED CONSTRAINTS
DISABLE QUERY REWRITE
AS "query"</code>

the MV creation takes one hour, and after that the refresh time is 20-30 minutes, which is surely not acceptable: this data feeds a dashboard on a 3-minute delay, which is the time a refresh should take! I don't have privileges to check anything on the prod DB, but on UAT I have sufficient access. I have tried many options but they didn't work, so please help me find a solution, or if there is no solution, the reason behind this. In addition, when my MV refreshes, the explain plan shows: INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO abc. I am really stuck here and have tried hard to get it resolved, or to find a reason I can explain to the relevant team!

1. I have tried CREATE TABLE with the same query and it took less than a minute.
2. An INSERT statement also works fine, taking the same time.
3. I tried the MV refresh option with atomic_refresh=false as well, but it didn't help.

Please let me know if you need any more info. Note: my MV query uses prod tables (approx. 4 tables) over a DB link from UAT. The prod server has a separate user which has been given rights on the tables below:

<code>select count(*) from abc@prod;  --800000
select count(*) from abc1@prod; --700000
select count(*) from abc2@prod; --200000</code>
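For reference, a sketch of the refresh call that point 3 refers to; with atomic_refresh => FALSE, a complete refresh can truncate and reload with direct path instead of delete+insert (MY_MV is a placeholder):

<code>BEGIN
  DBMS_MVIEW.REFRESH(list => 'MY_MV', method => 'C', atomic_refresh => FALSE);
END;
/</code>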

Rebuild global index NOLOGGING vs LOGGING : Generation of the same amount of REDO

AskTom - Hot Questions - Tue, 12/15/2020 - 00:00
Hello, I'm busy with a rebuild of global indexes on a partitioned table, and to make the job faster I'm using the PARALLEL + NOLOGGING clauses with the rebuild. My code is:

<code>ALTER INDEX my_schema.my_global_index REBUILD PARALLEL 20 NOLOGGING;</code>

The parallelism works. With the NOLOGGING clause, I expected my rebuild to generate a minimum of redo, i.e. only redo related to dictionary changes. But in practice, it seems that NOLOGGING has no effect on the generation of redo: the rebuild generates the same amount of redo with the index NOLOGGING vs. LOGGING, and the amount of redo is similar to the size of the index.

Statistic "redo size" of my session after a rebuild, NOLOGGING or LOGGING:

<code>SELECT sn.name, s.VALUE / 1024 / 1024 redo_size_mb
FROM   v$statname sn, v$mystat s, v$session se
WHERE  se.sid = s.sid
AND    s.statistic# = sn.statistic#
AND    sn.name = 'redo size';

NAME       REDO_SIZE_MB
---------  ------------
redo size           577</code>

The index size:

<code>SELECT bytes / 1024 / 1024 index_size_mb
FROM   dba_segments
WHERE  owner = 'MY_SCHEMA'
AND    segment_name = 'MY_GLOBAL_INDEX';

INDEX_SIZE_MB
-------------
          572</code>

Do you have an idea which factor influences that generation of redo in NOLOGGING "mode"? Is there a way to avoid it? Thank you very much, Sebastien Tromme.
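One factor worth checking first, as a sketch: FORCE LOGGING at the database or tablespace level silently overrides NOLOGGING on the segment, which would produce exactly this symptom.

<code>SELECT force_logging FROM v$database;

SELECT tablespace_name, force_logging
FROM   dba_tablespaces
WHERE  tablespace_name = 'MY_INDEX_TBS';  -- placeholder name</code>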

Performance problem with access to data dictionary views on apex.oracle.com

AskTom - Hot Questions - Mon, 12/14/2020 - 00:00
Dear Support, I have a performance problem specific to apex.oracle.com, where I installed the 'HR data' sample schema and then tried to execute a simple select on a data dictionary view in SQL Commands:

<code>select * from SYS.USER_CONSTRAINTS;</code>

returns no rows (because of a timeout!?). Explain returns ORA-01039: insufficient privileges on underlying objects of the view. What works is creating materialized views on queries that access the data dictionary views. Those views were tuned to run fast in my local Oracle VirtualBox with DB version 19.3.0.0.0 and in a cloud.oracle.com autonomous DB version 19.5.0.0.0 environment. But on apex.oracle.com, the refresh in a scheduler job takes ca. 10 minutes for the following query. What can I do to make it run fast again?

<code>SELECT /*+ RESULT_CACHE PARALLEL USE_MERGE(F FC SC) */
       F.TABLE_NAME, F.OWNER, F.CONSTRAINT_NAME,
       FC.COLUMN_NAME, FC.POSITION,
       SC.COLUMN_ID, SC.NULLABLE,
       F.DELETE_RULE, F.DEFERRABLE, F.DEFERRED,
       F.STATUS, F.VALIDATED,
       F.R_CONSTRAINT_NAME, F.R_OWNER
FROM   SYS.USER_CONSTRAINTS F
JOIN   SYS.USER_CONS_COLUMNS FC
       ON  F.OWNER = FC.OWNER
       AND F.CONSTRAINT_NAME = FC.CONSTRAINT_NAME
       AND F.TABLE_NAME = FC.TABLE_NAME
JOIN   SYS.USER_TAB_COLS SC
       ON  SC.TABLE_NAME = F.TABLE_NAME
       AND SC.COLUMN_NAME = FC.COLUMN_NAME
       AND SC.HIDDEN_COLUMN = 'NO'
       AND F.CONSTRAINT_TYPE = 'R'
       AND F.OWNER = SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA');</code>

JSON Value - Oracle PL/SQL : Multiple Fields

AskTom - Hot Questions - Wed, 12/09/2020 - 00:00
I have a CLOB with the below sample entry:

<code>"relist":[{"name":"XYZ","action":["Manager","Specific User List"],"flag":false}]</code>

When I try to get name or flag using JSON_VALUE, I am able to, as each has a single value, but I want to get the value of action. If I try

<code>select JSON_VALUE(JSON_CONTENT,'$.action') JSON_CONTENT from test</code>

I get NULL. I read that JSON_VALUE only supports a single scalar value. Is there any workaround to get both values of action?
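A sketch of two workarounds (paths assume the relist wrapper shown in the sample): JSON_QUERY can return the whole array, since it returns JSON fragments rather than scalars, and JSON_TABLE can split it into rows.

<code>-- Whole array as one fragment:
select JSON_QUERY(JSON_CONTENT, '$.relist[0].action') actions
from   test;
-- => ["Manager","Specific User List"]

-- One row per array element:
select jt.action
from   test t,
       JSON_TABLE(t.JSON_CONTENT, '$.relist[*].action[*]'
         COLUMNS (action VARCHAR2(50) PATH '$')) jt;</code>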
