AskTom - Hot Questions

These are the questions on Ask Tom that have the most comments

UTL_HTTP.begin_request

Tue, 02/16/2021 - 00:00
Hi, could you please help me? I tried to execute a simple command and it does not work:

<code>
SELECT utl_http.request('http://www.oracle.com/') FROM dual
*
ERROR at line 1:
ORA-29273: HTTP request failed
ORA-12535: TNS:operation timed out
ORA-06512: at "SYS.UTL_HTTP", line 1491
ORA-06512: at line 1
</code>

I tried in a cloud environment and it does not work there either. I am connected as ADMIN:

<code>
ORA-01031: insufficient privileges
ORA-06512: at "SYS.UTL_HTTP", line 1810
ORA-06512: at "SYS.UTL_HTTP", line 136
ORA-06512: at "SYS.UTL_HTTP", line 1745
ORA-06512: at line 1
</code>

I want to build an interface that uses HTTP to send data to an IoT platform. When I comment out the line with the UTL_HTTP.begin_request call, no error occurs, which shows that I do have access to the UTL_HTTP package...

<code>
DECLARE
  l_url           VARCHAR2(50) := 'http://api.losant.com';
  l_http_request  UTL_HTTP.req;
  l_http_response UTL_HTTP.resp;
BEGIN
  -- Make an HTTP request and get the response.
  --l_http_request := UTL_HTTP.begin_request(l_url, 'GET', 'HTTP/1.1');  <<----- Why don't I have access?
  --l_http_response := UTL_HTTP.get_response(l_http_request);
  UTL_HTTP.end_response(l_http_response);
END;
</code>
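For context, ORA-01031 from UTL_HTTP is typically raised when the calling user has no network access control list (ACL) privilege for the target host, while ORA-12535 is usually a firewall/egress problem that no ACL grant will fix. A minimal sketch of granting the ACL privilege on 12c and later, assuming you run it as a DBA and that the hypothetical schema APP_USER is the one calling UTL_HTTP:

```sql
-- Hypothetical names: APP_USER is the schema that calls UTL_HTTP.
BEGIN
  DBMS_NETWORK_ACL_ADMIN.append_host_ace(
    host       => 'api.losant.com',
    lower_port => 80,
    upper_port => 443,
    ace        => xs$ace_type(
                    privilege_list => xs$name_list('http', 'connect'),
                    principal_name => 'APP_USER',
                    principal_type => xs_acl.ptype_db));
END;
/
```

If the timeout persists after the grant, the database server most likely cannot reach the host at all (proxy or firewall), which has to be fixed outside the database.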

Databases producing excessive loads of archive logs

Wed, 02/10/2021 - 00:00
Hi all, our company is running Oracle Database 19c (version 19.9) on Windows 2012 R2 servers. Recently I have been facing an issue with two databases producing excessive loads of archive logs, around 100 GB per day. I tried resizing the redo logs; what I achieved was a smaller number of archives, but the total log volume remained the same. Moreover, trying to find out what causes such high redo generation, it usually appears to be the DBWn and M00n Oracle processes producing the redo, even at night. But how can I figure out what is driving those processes to produce such loads all day, e.g. the application, a materialized view refreshing frequently, etc.? As I am a newbie, could someone please give me a suggestion on how/where/what to look at further, or some documentation to go through? Thanks in advance. Best regards, VangelP
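One way to start narrowing this down (a sketch; note that the background writers do not generate redo themselves, they write out blocks changed by foreground sessions) is to see which connected sessions have generated the most redo since logon:

```sql
-- Sessions ranked by redo generated since logon
SELECT s.sid, s.username, s.program, st.value AS redo_bytes
FROM   v$sesstat  st
JOIN   v$statname n ON n.statistic# = st.statistic#
JOIN   v$session  s ON s.sid = st.sid
WHERE  n.name = 'redo size'
AND    st.value > 0
ORDER  BY st.value DESC;
```

If you are licensed for the Diagnostics Pack, the AWR report's "Segments by DB Block Changes" section answers the same question at the object level over an interval.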

Direct path insert via dblink

Fri, 02/05/2021 - 00:00
Hello Chris, hello Connor. Is there any way to do a direct-path insert when pushing data via a dblink? Our DWH currently truncates and loads data into our tables, and it takes far too long. I'm not sure exactly what takes so long, whether it's the load on their servers, the network, or something else, but I am certain that generating undo on our side is unnecessary, because the tables can only ever be empty or in a complete state. The DWH won't allow us to create a dblink to their database, so we cannot do an <code>insert /*+ append */</code> on our side. In other responses I've seen mentions that dblinks are not a good way to transfer large amounts of data between databases. What is a good way to do that, aside from GoldenGate?
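Since the APPEND hint only takes effect when the insert's target table is local (it is ignored when the target is on the remote end of the link), one commonly used workaround, sketched here with hypothetical names, is to let the DWH push into a plain staging table and then do the direct-path step locally on your side:

```sql
-- Step 1 (run by the DWH over their dblink): conventional load into staging.
--   INSERT INTO stage_t@your_db SELECT ... FROM dwh_source;

-- Step 2 (run locally on your side): direct-path copy, minimal undo.
INSERT /*+ APPEND */ INTO target_t
SELECT * FROM stage_t;
COMMIT;

TRUNCATE TABLE stage_t;
```

This doubles the writes, so it only pays off when undo/redo on the final table is the real bottleneck; measuring where the time actually goes (network vs. their side vs. yours) before restructuring would be prudent.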

RLS Causing Performance issue

Wed, 02/03/2021 - 00:00
Hi Tom, we have implemented RLS based on the client identifier of a session. When RLS is applied for a particular client identifier, the query keeps on running; if I apply the same WHERE condition as an admin user, the query runs fast. Is this due to RLS, or some other reason? Is the WHERE condition applied to each row of the table, or to the entire table at once?
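For background, the RLS predicate is not evaluated row by row: the policy function returns a predicate string that is appended to the query's WHERE clause before optimization, so the whole statement is optimized with the extra condition. A sketch of a context-sensitive, client-identifier-based policy (all names hypothetical):

```sql
-- Policy function: returns a predicate string appended to every query on the table.
CREATE OR REPLACE FUNCTION client_policy (
  p_schema VARCHAR2,
  p_object VARCHAR2) RETURN VARCHAR2
AS
BEGIN
  RETURN 'client_id = SYS_CONTEXT(''USERENV'',''CLIENT_IDENTIFIER'')';
END;
/

BEGIN
  DBMS_RLS.add_policy(
    object_schema   => 'APP',
    object_name     => 'ORDERS',
    policy_name     => 'orders_by_client',
    function_schema => 'APP',
    policy_function => 'client_policy',
    policy_type     => DBMS_RLS.shared_context_sensitive);
END;
/
```

A useful check is to compare the execution plan of the query with the policy's predicate added manually: the admin user is likely exempt from the policy (e.g. via EXEMPT ACCESS POLICY), so the slow case may simply be a different plan caused by the extra predicate rather than RLS overhead itself.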

Internal function in plan

Wed, 02/03/2021 - 00:00
Hi Team,
<code>
-----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |            |       |       |     4 (100)|          |       |       |
|   1 |  SORT AGGREGATE                         |            |     1 |    54 |            |          |       |       |
|   2 |   NESTED LOOPS                          |            |     3 |   162 |     4   (0)| 00:00:01 |       |       |
|   3 |    NESTED LOOPS                         |            |     3 |   162 |     4   (0)| 00:00:01 |       |       |
|   4 |     PARTITION HASH SINGLE               |            |     3 |   108 |     3   (0)| 00:00:01 |   KEY |   KEY |
|*  5 |      TABLE ACCESS BY GLOBAL INDEX ROWID | TABLE1     |     3 |   108 |     3   (0)| 00:00:01 | ROWID | ROWID |
|*  6 |       INDEX RANGE SCAN                  | IDX_TABLE1 |    27 |       |     1   (0)| 00:00:01 |   KEY |   KEY |
|   7 |     PARTITION HASH ALL                  |            |     1 |       |     1   (0)| 00:00:01 |     1 |    64 |
|*  8 |      INDEX RANGE SCAN                   | IDX_TABLE2 |     1 |       |     1   (0)| 00:00:01 |     1 |    64 |
|*  9 |    TABLE ACCESS BY LOCAL INDEX ROWID    | TABLE2     |     1 |    18 |     1   (0)| 00:00:01 |     1 |     1 |
-----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - filter(("TABLE1"."PART_T_TYPE"=:SYS_B_3 AND "TABLE1"."DFLG"=:SYS_B_4 AND "TABLE1"."PFLG"=:SYS_B_5))
   6 - access("TABLE1"."AIC"=:1)
   8 - access("TABLE1"."TRIAN_ID"="TABLE2"."TRIAN_ID")
   9 - filter((INTERNAL_FUNCTION("DEL_ID") AND "TABLE2"."COL_ID"=:2))
</code>
I need to know the cause of the INTERNAL_FUNCTION on the DEL_ID column:
<code>
select count(*)
from TABLE1 a, TABLE2 b
where a.train_id = b.train_id          ===> datatype is the same for both joining columns
and a.aic = 'AaaaaaQWESQ'
AND (DEL_ID) in ('OOO','QQQ')          ===> causing internal function; datatype is varchar2(10)
and a.part_t_type = 'L'
and a.dflg = 'N'
and a.pflg = 'Y'
and a.col_ID = b.col_ID
and a.col_ID = 'QWE'
</code>
Manual execution: <code>select count(*) from TABLE2 b where (DEL_ID) in ('OOO','QQQ')</code> — the actual plan shows no internal function. TABLE2 is a hash-partitioned table, and b.train_id is the partitioning key.
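INTERNAL_FUNCTION in a predicate usually means an implicit conversion the optimizer could not render in SQL text. Two common triggers worth checking here (hedged suggestions, not a diagnosis): a character-set/datatype mismatch on the column (e.g. NVARCHAR2 vs. VARCHAR2 literals), or CURSOR_SHARING=FORCE rewriting the IN-list, which the :SYS_B_n binds in your plan suggest is active. A sketch of reproducing the first cause and making the conversion explicit on the literal side, so the column stays sargable (hypothetical demo table):

```sql
-- If DEL_ID were NVARCHAR2, the optimizer wraps the column in an implicit
-- conversion, shown as INTERNAL_FUNCTION("DEL_ID") in the predicate section.
CREATE TABLE t_demo (del_id NVARCHAR2(10));

EXPLAIN PLAN FOR
  SELECT COUNT(*) FROM t_demo WHERE del_id IN ('OOO','QQQ');
SELECT * FROM TABLE(DBMS_XPLAN.display(format => '+PREDICATE'));

-- Converting the literals instead of the column removes the wrapper:
EXPLAIN PLAN FOR
  SELECT COUNT(*) FROM t_demo WHERE del_id IN (N'OOO', N'QQQ');
SELECT * FROM TABLE(DBMS_XPLAN.display(format => '+PREDICATE'));
```

Comparing the plan of the manual execution (literals, no forced binds) with the cursor captured from the application would show whether cursor sharing, rather than the column's datatype, is the difference.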

Hierarchical CONNECT BY NOCYCLE Issue

Wed, 02/03/2021 - 00:00
Hi, the shared LiveSQL link contains a reproducible test case. I want to list all the parent entities for a given child entity. Below is the output I get when there is no cyclical loop condition:
<code>
SELECT child_entity AS child,
       parent_entity AS Parent,
       level,
       CONNECT_BY_ISCYCLE AS IsLoop,
       SYS_CONNECT_BY_PATH(child_entity,'\') AS Path
FROM   entities
START WITH child_entity = 'abet'
CONNECT BY NOCYCLE child_entity = PRIOR parent_entity
ORDER SIBLINGS BY child_entity;

CHILD  PARENT LEVEL ISLOOP PATH
------ ------ ----- ------ ----------------
abet   abe        1      0 \abet
abe    ab         2      0 \abet\abe
ab     a          3      0 \abet\abe\ab
a      null       4      0 \abet\abe\ab\a
ab     c          3      0 \abet\abe\ab
abe    d          2      0 \abet\abe
abet   b          1      0 \abet
b      null       2      0 \abet\b

8 rows selected.
</code>
Now when I introduce a cyclical loop condition, the output is as below; duplicate last two rows can be seen in the output:
<code>
insert into entities values('abet','a',900);
commit;

SELECT child_entity AS child,
       parent_entity AS Parent,
       level,
       CONNECT_BY_ISCYCLE AS IsLoop,
       SYS_CONNECT_BY_PATH(child_entity,'\') AS Path
FROM   entities
START WITH child_entity = 'abet'
CONNECT BY NOCYCLE child_entity = PRIOR parent_entity
ORDER SIBLINGS BY child_entity;

CHILD  PARENT LEVEL ISLOOP PATH
------ ------ ----- ------ ----------------
abet   abe        1      0 \abet
abe    ab         2      0 \abet\abe
ab     a          3      0 \abet\abe\ab
a      abet       4      1 \abet\abe\ab\a
abet   b          5      0 \abet\abe\ab\a\abet
b      null       6      0 \abet\abe\ab\a\abet\b
a      null       4      0 \abet\abe\ab\a
ab     c          3      0 \abet\abe\ab
abe    d          2      0 \abet\abe
abet   b          1      0 \abet
b      null       2      0 \abet\b

11 rows selected.
</code>
I expect to produce output like below. How can I remove the duplicate extra two rows from the SQL result above? Please share inputs on how I can achieve this.
<code>
CHILD  PARENT LEVEL ISLOOP PATH
------ ------ ----- ------ ----------------
abet   abe        1      0 \abet
abe    ab         2      0 \abet\abe
ab     a          3      0 \abet\abe\ab
a      abet       4      1 \abet\abe\ab\a
abet   b          5      0 \abet\abe\ab\a\abet
b      null       6      0 \abet\abe\ab\a\abet\b
a      null       4      0 \abet\abe\ab\a
ab     c          3      0 \abet\abe\ab
abe    d          2      0 \abet\abe
</code>
Thanks..
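One way to get from the 11-row result to the expected 9 rows (a sketch against the test case, matching the expected row set above though the sibling ordering may differ): for each (child, parent) edge keep only its deepest occurrence, which drops the short duplicate branch that NOCYCLE re-emits from the start node.

```sql
SELECT child, parent, lvl, isloop, path
FROM (
  SELECT child_entity  AS child,
         parent_entity AS parent,
         level         AS lvl,
         CONNECT_BY_ISCYCLE AS isloop,
         SYS_CONNECT_BY_PATH(child_entity,'\') AS path,
         -- rank duplicates of the same edge, deepest first
         ROW_NUMBER() OVER (PARTITION BY child_entity, parent_entity
                            ORDER BY level DESC) AS rn
  FROM   entities
  START WITH child_entity = 'abet'
  CONNECT BY NOCYCLE child_entity = PRIOR parent_entity
)
WHERE rn = 1;
```

With the cycle row inserted, the (abet, b) and (b, null) edges each appear twice (levels 1/5 and 2/6); the filter keeps the level-5 and level-6 versions, giving the 9 expected rows.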

SQL For Elimination of Overlaps + MIN and MAX Project Wise

Tue, 02/02/2021 - 00:00
The requirement here is to eliminate overlaps but also take the MIN and MAX of the distinct combinations per CUST_EQP_CONTRACT_NUM (project) accordingly. <code>
--create table1 (EXPORT_TABLE); this is my actual data
create table EXPORT_TABLE(
  CUST_EQP_CONTRACT_NUM    varchar2(150),
  EQP_ANT_BASE_HEIGHT_CALC number,
  EQP_ANT_TIP_HEIGHT_CALC  number);

Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('168903',166,174);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('168903',167.52,172.48);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('168903',168.53,171.48);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('283874',99.69,108.31);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('283874',102.69,111.31);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('283874',231,239);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('283874',231.5,238.5);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('283874',232.5,240.5);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('283874',233.58,239.82);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('507286',98.98,101.02);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('507286',118.98,121.02);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('507286',138.5,161.5);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('507286',148.5,171.5);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('666905',255,263);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('666905',256.64,261.36);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('168999',4.2,5.2);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('168999',3.9,4.9);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('168999',2.1,4.8);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('168999',6.2,6.8);
Insert into EXPORT_TABLE (CUST_EQP_CONTRACT_NUM,EQP_ANT_BASE_HEIGHT_CALC,EQP_ANT_TIP_HEIGHT_CALC) values ('168999',5.9,6.5);
</code> Now I will create another table just to eliminate duplicate source rows and see how many distinct combinations are available in one project (CUST_EQP_CONTRACT_NUM): <code>
create table export_table2 as
select distinct CUST_EQP_CONTRACT_NUM, EQP_ANT_BASE_HEIGHT_CALC, EQP_ANT_TIP_HEIGHT_CALC
from EXPORT_TABLE;

---DESIRED OUTPUT
CUST_EQP_CONTRACT_NUM  EQP_ANT_BASE_HEIGHT_CALC  EQP_ANT_TIP_HEIGHT_CALC
168903                 166                       174
283874                 99.69                     111.31
283874                 231                       240.5
507286                 98.98                     101.02
507286                 118.98                    121.02
507286                 138.5                     171.5
666905                 255                       263
168999                 2.1                       5.2
168999                 5.9                       6.8

--DB VERSION
select * from v$version;
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE 12.1.0.2.0 Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production
</code> I took...
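This is a classic gaps-and-islands problem. One sketch over the deduplicated export_table2: flag a row as the start of a new island whenever its base height exceeds every tip height seen so far within the project, turn the running sum of those flags into a group number, then aggregate each group to its MIN base and MAX tip.

```sql
SELECT cust_eqp_contract_num,
       MIN(eqp_ant_base_height_calc) AS eqp_ant_base_height_calc,
       MAX(eqp_ant_tip_height_calc)  AS eqp_ant_tip_height_calc
FROM (
  SELECT t.*,
         -- running count of island starts = island (group) number
         SUM(new_grp) OVER (PARTITION BY cust_eqp_contract_num
                            ORDER BY eqp_ant_base_height_calc) AS grp
  FROM (
    SELECT e.*,
           -- 1 when this interval does not overlap anything seen so far
           CASE
             WHEN eqp_ant_base_height_calc <=
                  MAX(eqp_ant_tip_height_calc)
                    OVER (PARTITION BY cust_eqp_contract_num
                          ORDER BY eqp_ant_base_height_calc
                          ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)
             THEN 0 ELSE 1
           END AS new_grp
    FROM export_table2 e
  ) t
)
GROUP BY cust_eqp_contract_num, grp
ORDER BY cust_eqp_contract_num, eqp_ant_base_height_calc;
```

Walking through project 168999 by base height: 2.1-4.8 starts an island, 3.9-4.9 and 4.2-5.2 extend it to 2.1-5.2, then 5.9-6.5 starts a new island that 6.2-6.8 extends to 5.9-6.8, matching the desired output.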

What does it take to support Exadata competently?

Tue, 02/02/2021 - 00:00
GM, we are considering moving to Exadata. I am both excited and nervous about having to support what seems like it would require a skillset beyond what most DBAs have. How much knowledge is recommended for supporting Exadata, beyond being knowledgeable and experienced in RAC and having a good understanding of the Oracle engine? If the gap is significant, is it recommended that the company bring in a consultant to set it up and then support it in production for 3-6 months, for fear that someone without experience would set it up incorrectly or cause outages while managing it? Thanks, John

When Dba_Scheduler_Jobs.Start_Date is not specified, where does Oracle store Job Enabling time?

Thu, 01/28/2021 - 00:00
Hello, I have a question on DBMS_SCHEDULER functionality. When I create a scheduler job via the DBMS_SCHEDULER.CREATE_JOB procedure and do not specify a START_DATE value, the job is scheduled for execution as soon as it is enabled, which is correct. I want to know where Oracle stores the job enabling time, because the same enabling time can be seen in dba_scheduler_job_run_details.REQ_START_DATE once the job finishes. Basically, I want the enabling time for a job which is still in SCHEDULED state and whose START_DATE value is NULL. Thanks for sharing knowledge!
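One place an enable time can be recorded (an assumption worth verifying on your version, since it depends on the job's LOGGING_LEVEL; operation-level entries such as ENABLE are written with full logging) is the scheduler's own job log:

```sql
-- ENABLE entries in the scheduler job log, most recent first
SELECT owner, job_name, operation, log_date
FROM   dba_scheduler_job_log
WHERE  operation = 'ENABLE'
ORDER  BY log_date DESC;
```

If no ENABLE rows appear for your job, raising the logging level (e.g. via DBMS_SCHEDULER.SET_ATTRIBUTE on the job or its job class) before the next enable would be the thing to try.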

A replacement technology in Oracle database for ETL process

Thu, 01/28/2021 - 00:00
Hi, we currently use a third-party tool for the ingestion of files into Oracle database tables. This piece of software is quite expensive, and I am investigating the possibility of replicating a small part of its functionality directly within Oracle technologies instead. I appreciate that this question is vague, so I will keep to the high-level features that need to be replicated. If you need more information, I am more than happy to discuss this further.

* The ETL tool has its own scripting language, and a script is dynamically generated for each invocation of the ETL process. Most of the dynamic parts of the script are driven from data values within the Oracle database. Once generated, the script is executed to perform the work.
* The ability to read text and binary files, including multiple file formats within a single input file. Dynamic file formats are an example of the previous requirement.
* The ability to perform optional enrichments by looking up a key value.
* Bulk loading of the transformed data directly into an Oracle table partition.
* Multiple executions of the same ETL process may occur simultaneously, each working on a different set of files.

I have experimented with the obvious options, such as external tables, global temporary tables, SQL, in-memory, and PL/SQL pipelined functions. While I can get the correct output for a simple use case, I cannot compete with the speed of the ETL tool using Oracle technologies. I discounted the use of SQL joins because some of the enrichment lookups are conditional, sometimes based upon another value in the data row being processed; there could be multiple lookups per row and tens of millions of rows in a single file. I discounted the use of SQL inside PL/SQL because of the number of context switches this would incur. I am stuck for ideas right now and would really appreciate your input as to whether there is another technique for implementing this type of ETL process directly in SQL or PL/SQL.
Many thanks in advance for any help and advice you can give. Mark.
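On the context-switch point, it may still be worth measuring bulk binds before ruling PL/SQL out: BULK COLLECT with a LIMIT plus FORALL reduces the switching to one per batch rather than one per row, while leaving room for the conditional, per-row enrichment logic in between. A sketch with hypothetical names, assuming an external table over the input file:

```sql
DECLARE
  CURSOR c IS SELECT * FROM ext_stage;        -- external table over the input file
  TYPE t_rows IS TABLE OF c%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c;
  LOOP
    -- one SQL->PL/SQL switch per 1000 rows
    FETCH c BULK COLLECT INTO l_rows LIMIT 1000;
    EXIT WHEN l_rows.COUNT = 0;

    -- conditional enrichment lookups would run here, row by row in PL/SQL

    -- one PL/SQL->SQL switch per batch
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO target_t VALUES l_rows(i);
  END LOOP;
  CLOSE c;
  COMMIT;
END;
/
```

Combined with a direct-path load (APPEND_VALUES on the FORALL insert, where its restrictions are acceptable) and running one such session per file, this pattern is usually the closest PL/SQL gets to dedicated ETL-tool throughput.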
