AskTom - Hot Questions

These are the questions on Ask Tom that have the most comments

Generate string based on pattern

Mon, 10/19/2020 - 00:00
Hi Chris, Connor, could you please suggest a way to generate strings based on a pattern? Pattern: <b>^[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[89ab][a-f0-9]{3}-[a-f0-9]{12}$</b> I need to generate millions of strings like <b>f9f8c8e2-0b20-4160-8f74-e836f4661fc5</b> that match the pattern, e.g. <code>SELECT * FROM dual WHERE REGEXP_LIKE('f9f8c8e2-0b20-4160-8f74-e836f4661fc5', '^[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[89ab][a-f0-9]{3}-[a-f0-9]{12}$');</code>
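One possible direction, sketched under the assumption that DBMS_RANDOM (non-cryptographic) randomness is acceptable; the function name random_uuid_like is made up for illustration. It builds each hex group character by character and forces the [89ab] nibble:
<code>
CREATE OR REPLACE FUNCTION random_uuid_like RETURN VARCHAR2 IS
  -- n random lower-case hex digits
  FUNCTION hex(n PLS_INTEGER) RETURN VARCHAR2 IS
    v VARCHAR2(32);
  BEGIN
    FOR i IN 1 .. n LOOP
      v := v || SUBSTR('0123456789abcdef', TRUNC(DBMS_RANDOM.VALUE(1, 17)), 1);
    END LOOP;
    RETURN v;
  END;
BEGIN
  RETURN hex(8) || '-' || hex(4) || '-' || hex(4) || '-'
         || SUBSTR('89ab', TRUNC(DBMS_RANDOM.VALUE(1, 5)), 1) || hex(3)
         || '-' || hex(12);
END;
/
-- e.g. a quick batch:
SELECT random_uuid_like() FROM dual CONNECT BY LEVEL <= 10;
</code>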

In database row archival

Wed, 10/14/2020 - 00:00
I have a question about Oracle In-Database Archiving in Oracle 19c. The table below has an XMLType column, and <code>alter table HTS_SUMMARY_DATA_CELL row archival</code> fails with <code>SQL Error: ORA-38353: This table type is not supported by the ROW ARCHIVAL clause.</code> The same In-Database Archiving statement in Oracle 12c (12.1.0.2), <code>alter table HTS_SUMMARY_DATA_CELL row archival</code>, succeeds and the table is enabled for row archival. What is the difference between the 12c and 19c versions?
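For readers unfamiliar with the feature, a minimal sketch of In-Database Archiving on a plain relational table (the XMLType case in the question is exactly what raises ORA-38353); the table name arch_demo is hypothetical:
<code>
CREATE TABLE arch_demo (id NUMBER);
ALTER TABLE arch_demo ROW ARCHIVAL;

INSERT INTO arch_demo VALUES (1);
UPDATE arch_demo SET ora_archive_state = '1' WHERE id = 1;  -- mark the row archived

SELECT COUNT(*) FROM arch_demo;                   -- 0: archived rows are hidden
ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;
SELECT COUNT(*) FROM arch_demo;                   -- 1: archived rows visible again
</code>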

Oracle .DBF file format structure

Wed, 10/14/2020 - 00:00
Hi, I want to iterate over all rows in a table and do some operations on them, but I don't want to fetch the data with a SELECT statement. To do that I tried exporting the data as .dbf and reading it with Java, but the Oracle .dbf format seems to differ from the xBase .dbf format (https://www.clicketyclick.dk/databases/xbase/format/). Where can I find details of the Oracle .dbf data file structure? Or can you suggest more suitable techniques for this purpose? Thank you.
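If the goal is simply row-by-row processing, a plain cursor loop avoids file formats entirely; a minimal sketch, with my_table standing in for the real table:
<code>
BEGIN
  FOR r IN (SELECT * FROM my_table) LOOP
    NULL;  -- per-row processing goes here
  END LOOP;
END;
/
</code>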

How to differentiate between internal system queries and application queries in Oracle

Tue, 10/13/2020 - 00:00
Hi Team, firstly, I want to thank you for all the immense help that we continue to receive from your end, making us better, more knowledgeable Oracle developers. I have this weird question about the internal queries that Oracle fires versus the queries that we users fire (from an application, SQL Developer, or the SQL prompt). (Environment: Oracle 11g R2) <code>
select sql_fulltext, sql_id, executions, service, parsing_schema_name,
       rows_processed, optimizer_cost
from   v$sql
where  to_date(last_load_time,'yyyy-mm-dd/hh24:mi:ss') >=
       to_date(to_char(sysdate,'yyyy-mm-dd'),'yyyy-mm-dd')
and    service = 'SYS$USERS'
and    parsing_schema_name <> 'SYS'
and    rows_processed > 0
</code> Below is the sample output: <code>
select count(*) from all_objects where object_name = :NAME and owner = :OWNER    gq4p31m39qpms  1  SYS$USERS  HR     1    4
select last_load_time,sql_fulltext,sql_id,loaded_versions,fetches,executions...  2hx8s4mqk6qfc  2  SYS$USERS  HR     100  1
begin DBMS_UTILITY.NAME_RESOLVE ( name => :NAME, context........                 ats0vqbmfytk8  1  SYS$USERS  HR     1    0
select * from v$sql                                                              gtzvs9utsg0qs  1  SYS$USERS  HR     500  1
SELECT dbms_transaction.local_transaction_id FROM dual                           gqj5gaygdbfs5  2  SYS$USERS  SCOTT  2    2
select * from employees                                                          f34thrbt8rjt5  3  SYS$USERS  HR     150  3
select owner,object_name from all_objects where object_type in ('TABLE',.......  3yqkryv9c3nhk  1  SYS$USERS  HR     2    72
</code> Of these, only <code>select * from employees</code>, <code>select * from v$sql</code> and the query stated above were executed by the user from SQL Developer/the SQL prompt/the application; the rest were fired by Oracle internally. My question is: how can I find only the queries fired from SQL Developer/the application/the SQL prompt? Are there any other metadata views/parameters/columns by which we can separate queries fired by the user/application from queries fired by Oracle internally? Please help in this regard.
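One avenue worth exploring, as a hedged sketch: clients and stored code identify themselves through the MODULE, ACTION and PROGRAM_ID columns of V$SQL. MODULE/ACTION are set by the client via DBMS_APPLICATION_INFO, so the values are tool-dependent; 'SQL Developer' below is only an example:
<code>
SELECT sql_id, module, action, program_id, parsing_schema_name
FROM   v$sql
WHERE  parsing_schema_name NOT IN ('SYS', 'SYSTEM')
AND    module = 'SQL Developer'  -- tool-dependent value, adjust per client
AND    program_id = 0;           -- > 0 means issued from inside a stored PL/SQL unit
</code>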

How to validate the precision and scale of a number in pure PL/SQL

Thu, 10/08/2020 - 00:00
Hello, I came across a problem: validating whether a user-provided number will fit a column in a table. The check, to be done in PL/SQL, is to validate whether the precision and scale are big enough for the provided number. What would be your suggestion to achieve that? My efforts are below. I cannot find it in the documentation, but it appears that PL/SQL does not allow datatype limits in either VALIDATE_CONVERSION or CAST, so the following fails: <code>
DECLARE
  tn NUMBER;
BEGIN
  tn := CAST( '123.456' AS NUMBER( 6, 2 ));
  DBMS_OUTPUT.PUT_LINE( tn );
END;
</code> SQL does allow datatype limits there, so the solution could be something like: <code>
EXECUTE IMMEDIATE 'SELECT CAST(:p_test_num AS NUMBER(:p_precision,:p_scale)) from dual' INTO ...
</code> However, the check is to be done in bulk (loading a text file), so context switches would become a problem. I could use PL/SQL like: <code>
set serveroutput ON SIZE UNLIMITED

CREATE OR REPLACE PROCEDURE P(
  p_test_num  IN VARCHAR2,
  p_precision IN PLS_INTEGER,
  p_scale     IN PLS_INTEGER,
  p_format    IN VARCHAR2 DEFAULT NULL)
IS
  l_result PLS_INTEGER;
  -- -1 - the number will fit but with rounding
  --  0 - the number will not fit
  --  1 - the number will fit perfectly
BEGIN
  EXECUTE IMMEDIATE '
    declare
      l_test_num NUMBER(' || p_precision || ', ' || p_scale || ');
    begin
      if :test_format is null then
        l_test_num := to_number(:test_num);
      else
        l_test_num := to_number(:test_num, :test_format);
      end if;
      if l_test_num = :test_num then
        :result := 1;
      else
        :result := -1;
      end if;
    exception
      when value_error then :result := 0;
      when others then raise;
    end;'
    USING IN p_format, IN p_test_num, OUT l_result;
  DBMS_OUTPUT.PUT_LINE( l_result );
END;
/

exec p('21474.83647',38,1)
exec p('2147483647' ,38,1)
exec p('2147483647' ,38,127)
exec p('21474.83647',38,1,'99999D99')
exec p('21474.83647' ,38,1,'99999D99999')
exec p('21474.83647' ,38,1,'99999999D99999999')
exec p('21474.83647' ,38,12,'99999999D99999999')
exec p('2147483647' ,38,127,'99999999999')
</code> ... but I have a feeling this could be done better...
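A hedged alternative sketch that avoids dynamic SQL altogether: a NUMBER(p,s) column holds values whose absolute value, rounded to s decimal places, is below 10^(p-s), so the check can be done arithmetically. The function name fits_number is made up, the result codes mirror the question's convention, and the p_format and negative-scale cases are deliberately ignored:
<code>
CREATE OR REPLACE FUNCTION fits_number(
  p_num       NUMBER,
  p_precision PLS_INTEGER,
  p_scale     PLS_INTEGER) RETURN PLS_INTEGER
IS
  l_rounded NUMBER := ROUND(p_num, p_scale);
BEGIN
  IF ABS(l_rounded) >= POWER(10, p_precision - p_scale) THEN
    RETURN 0;    -- will not fit
  ELSIF l_rounded = p_num THEN
    RETURN 1;    -- fits perfectly
  ELSE
    RETURN -1;   -- fits, but with rounding
  END IF;
END;
/
</code>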

recursive query raises ORA-01789

Wed, 10/07/2020 - 00:00
This is a minimized version of a complex recursive query. The query works when the columns in the recursive member are listed explicitly: <code>
with t (c,p) as (select 2,1 from dual),
     rec (c,p) as (
       select c,p from t
       union all
       select t.c,t.p from rec join t on rec.c = t.p
     )
select * from rec
</code> I don't get why the error <i>ORA-01789: query block has incorrect number of result columns</i> is raised when t.* is specified instead: <code>
with t (c,p) as (select 2,1 from dual),
     rec (c,p) as (
       select c,p from t
       union all
       select t.* from rec join t on rec.c = t.p
     )
select * from rec
</code> Why is t.* not equivalent to t.c,t.p here? Could you please point me to documentation with the reasoning? Tested on 11g and 18 (db fiddle: https://dbfiddle.uk/?rdbms=oracle_18&fiddle=6b44cda99ae2a3d5486d14b28206f289) and 19 (Live SQL).

SQL performance tuning

Wed, 10/07/2020 - 00:00
Hi Tom, I have been trying to tune the following SQL. This is a data warehouse query which used to run for 2 hrs, but gradually, after 3 months, it is now taking around 3 hrs. I tried creating an index and using hints (parallel hints, index hint) but got a worse execution plan. I also ran the SQL Tuning Advisor and it says this is the best plan. Please guide me on this. <code>
SELECT fpa.pnl_sum_rec_key,
       NVL ((SELECT SUM (fpa2.pnl_ptd_actual)
               FROM fact_pnl_consol fpa2
              WHERE fpa2.pnl_report_key = fpa.pnl_report_key
                AND fpa2.pnl_period_yr = fpa.pnl_period_yr - 1
                AND fpa2.pnl_period_qtr = fpa.pnl_period_qtr
                AND fpa2.pnl_period_num = fpa.pnl_period_num
                AND fpa2.pnl_rpt_row_num = fpa.pnl_rpt_row_num
                AND fpa.PNL_RPT_CURRENCY = fpa2.PNL_RPT_CURRENCY
                AND NVL (fpa2.ASC_STANDARD, 'X') = NVL (fpa.ASC_STANDARD, 'X')), 0) calc_ptd_act_py,
       NVL ((SELECT SUM (fpa3.pnl_ptd_actual)
               FROM fact_pnl_consol fpa3
              WHERE fpa3.pnl_report_key = fpa.pnl_report_key
                AND fpa3.pnl_period_yr = fpa.pnl_period_yr
                AND fpa3.pnl_period_qtr = fpa.pnl_period_qtr
                AND fpa3.pnl_period_num <= fpa.pnl_period_num
                AND fpa3.pnl_rpt_row_num = fpa.pnl_rpt_row_num
                AND fpa.PNL_RPT_CURRENCY = fpa3.PNL_RPT_CURRENCY
                AND NVL (fpa3.ASC_STANDARD, 'X') = NVL (fpa.ASC_STANDARD, 'X')), 0) calc_qtd_act,
       NVL ((SELECT SUM (fpa4.pnl_ptd_budget)
               FROM fact_pnl_consol fpa4
              WHERE fpa4.pnl_report_key = fpa.pnl_report_key
                AND fpa4.pnl_period_yr = fpa.pnl_period_yr
                AND fpa4.pnl_period_qtr = fpa.pnl_period_qtr
                AND fpa4.pnl_period_num <= fpa.pnl_period_num
                AND fpa4.pnl_rpt_row_num = fpa.pnl_rpt_row_num
                AND fpa.PNL_RPT_CURRENCY = fpa4.PNL_RPT_CURRENCY
                AND NVL (fpa4.ASC_STANDARD, 'X') = NVL (fpa.ASC_STANDARD, 'X')), 0) calc_qtd_bud,
       NVL ((SELECT SUM (fpa5.pnl_ptd_actual)
               FROM fact_pnl_consol fpa5
              WHERE fpa5.pnl_report_key = fpa.pnl_report_key
                AND fpa5.pnl_period_yr = fpa.pnl_period_yr - 1
                AND fpa5.pnl_period_qtr = fpa.pnl_period_qtr
                AND fpa5.pnl_period_num <= fpa.pnl_period_num
                AND fpa5.pnl_rpt_row_num = fpa.pnl_rpt_row_num
                AND fpa.PNL_RPT_CURRENCY = fpa5.PNL_RPT_CURRENCY
                AND NVL (fpa5.ASC_STANDARD, 'X') = NVL (fpa.ASC_STANDARD, 'X')), 0) calc_qtd_act_py,
       NVL ((SELECT SUM (fpa6.pnl_ptd_actual)
               FROM fact_pnl_consol fpa6
              WHERE fpa6.pnl_report_key = fpa.pnl_report_key
                AND fpa6.pnl_period_yr = fpa.pnl_period_yr
                AND fpa6.pnl_period_num <= fpa.pnl_period_num
                AND fpa6.pnl_rpt_row_num = fpa.pnl_rpt_row_num
                AND fpa.PNL_RPT_CURRENCY = fpa6.PNL_RPT_CURRENCY
                AND NVL (fpa6.ASC_STANDARD, 'X') = NVL (fpa.ASC_STANDARD, 'X')), 0) calc_ytd_act,
       NVL ((SELECT SUM (fpa7.pnl_ptd_budget)
               FROM fact_pnl_consol fpa7
              WHERE fpa7.pnl_report_key = fpa.pnl_report_key
                AND fpa7.pnl_period_yr = fpa.pnl_period_yr
                AND fpa7.pnl_period_num <= fpa.pnl_period_num
                AND fpa7.pnl_rpt_row_num = fpa.pnl_rpt_row_num ...
</code>
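A hedged restructuring idea rather than a definitive fix: the `period_num <=` scalar subqueries are effectively running totals, which a single analytic pass may compute without re-probing fact_pnl_consol once per row. A sketch for calc_qtd_act alone (the prior-year columns would need a self-join or similar, and results should be verified against the original):
<code>
SELECT fpa.pnl_sum_rec_key,
       SUM(fpa.pnl_ptd_actual) OVER (
         PARTITION BY fpa.pnl_report_key, fpa.pnl_period_yr, fpa.pnl_period_qtr,
                      fpa.pnl_rpt_row_num, fpa.pnl_rpt_currency,
                      NVL(fpa.asc_standard, 'X')
         ORDER BY fpa.pnl_period_num   -- default RANGE window includes ties, matching <=
       ) AS calc_qtd_act
FROM   fact_pnl_consol fpa;
</code>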

Global Temp Table

Wed, 10/07/2020 - 00:00
We're considering the value of a GTT approach to address a variety of 'staging table' requirements in our applications. We typically load multiple MB of data (50-100 MB per frequent, concurrent data load) into staging tables, from where we transform, validate and consolidate the data into persistent portfolio data tables, and then delete the load-specific data from the staging table(s). We expect that replacing our staging tables with equivalent GTTs (indexes etc. included) will remove the overhead of deleting staged data at the end of each data load. We also understand, although we haven't found confirmation in the Oracle docs, that GTTs will use PGA space where possible before spilling data over into the temp tablespace if PGA is depleted. If our understanding here is not misguided, we might also consider splitting our data loading flows into smaller chunks to make better use of available PGA memory. Our question really is: are we on the right track in our understanding of making the best possible use of the GTT approach, and are there any other key considerations or policy rules that you would encourage? Thanks for your time.... dermot..
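For concreteness, a minimal sketch of the staging pattern under discussion (table and column names hypothetical); a GTT's rows are private per transaction or session, which is what removes the explicit delete step:
<code>
CREATE GLOBAL TEMPORARY TABLE stage_load (
  load_id NUMBER,
  payload VARCHAR2(4000)
) ON COMMIT DELETE ROWS;  -- or ON COMMIT PRESERVE ROWS for multi-transaction loads

CREATE INDEX stage_load_ix ON stage_load (load_id);
</code>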

Convert JSON into Record

Wed, 09/30/2020 - 00:00
Hi, I have a JSON file that I have to insert into a table with multiple columns. How do I convert each document into a record? Please see the example below. <b>Example:</b>
{"EMPNO":123,"ENAME":"TEST","JOB":"ENGINEER","MGR":null,"HIREDATE":null,"SAL":null,"COMM":null,"DEPTNO":null}
{"EMPNO":3,"ENAME":"harish","JOB":"developer","MGR":null,"HIREDATE":null,"SAL":null,"COMM":null,"DEPTNO":null}
{"EMPNO":555,"ENAME":"TESTTTT","JOB":"SOFTWARE","MGR":null,"HIREDATE":null,"SAL":null,"COMM":null,"DEPTNO":null}
{"EMPNO":102,"ENAME":"harish","JOB":"developer","MGR":null,"HIREDATE":null,"SAL":null,"COMM":null,"DEPTNO":null}
{"EMPNO":444,"ENAME":"TESTTTT","JOB":"ENR2222","MGR":null,"HIREDATE":null,"SAL":null,"COMM":null,"DEPTNO":null}
{"EMPNO":700,"ENAME":"PANIPOORI","JOB":"CRICKET","MGR":null,"HIREDATE":null,"SAL":null,"COMM":null,"DEPTNO":null}
{"EMPNO":101,"ENAME":"harish","JOB":"developer","MGR":null,"HIREDATE":null,"SAL":null,"COMM":null,"DEPTNO":null}
I want to insert these into the emp table. Or is there another way to create insert scripts based on the JSON file? Thanks, Praveen
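A hedged sketch using JSON_TABLE (available from 12c), assuming the documents have been staged one per row in a hypothetical table stage_json(doc CLOB):
<code>
INSERT INTO emp (empno, ename, job, mgr, hiredate, sal, comm, deptno)
SELECT jt.empno, jt.ename, jt.job, jt.mgr, jt.hiredate, jt.sal, jt.comm, jt.deptno
FROM   stage_json s,
       JSON_TABLE(s.doc, '$'
         COLUMNS (empno    NUMBER        PATH '$.EMPNO',
                  ename    VARCHAR2(50)  PATH '$.ENAME',
                  job      VARCHAR2(50)  PATH '$.JOB',
                  mgr      NUMBER        PATH '$.MGR',
                  hiredate DATE          PATH '$.HIREDATE',
                  sal      NUMBER        PATH '$.SAL',
                  comm     NUMBER        PATH '$.COMM',
                  deptno   NUMBER        PATH '$.DEPTNO')) jt;
</code>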

SQL Profiles Long Term Benefits and Issues

Wed, 09/16/2020 - 00:00
Hello, I researched your Q&A bank before asking this question. What are some of the things to keep in mind where a benefit of using a SQL profile turns into a curse? What can cause the use of a SQL profile to go bad, or what problems can go unnoticed as a result of using the profile? Thank you.
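As background for the discussion, the profiles currently in place can be inventoried from the dictionary; a minimal sketch:
<code>
SELECT name, category, status, created, last_modified
FROM   dba_sql_profiles
ORDER  BY created;
</code>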

Is there a way to interrupt rollback if I don't care about the table?

Tue, 09/08/2020 - 00:00
Hello Connor, hello Chris. Oracle version: 12.1.0.2.0. I am wondering if there is a way to interrupt a long-running rollback if we don't care what happens to the table? After the interrupt we would truncate it anyway. A junior developer attempted to populate an empty table with many indexes with tens of millions of rows. It didn't fit into the undo and the transaction went into a very long rollback, which lasted at least 10 times longer than the query had run before the rollback. During that time the database barely responded to other queries, which slowed down or blocked his work and the work of other people. This has happened to me in the past as well, and it is very frustrating to wait hours and hours until a useless (from our perspective, not Oracle's) task finishes because of a mistake. I know that he should have disabled the indexes and used /*+ APPEND */; I'm just asking: is there a way to tell Oracle 'I don't care about that table'?
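The pattern the asker alludes to, as a hedged sketch with hypothetical names (it sidesteps index maintenance and most undo for the bulk of the work):
<code>
ALTER INDEX big_table_ix UNUSABLE;               -- repeat per index on the target
ALTER SESSION SET skip_unusable_indexes = TRUE;  -- the default since 10g
INSERT /*+ APPEND */ INTO big_table SELECT * FROM source_table;
COMMIT;
ALTER INDEX big_table_ix REBUILD;
</code>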

Select Data with Partition Name

Fri, 08/28/2020 - 00:00
Hi, I have a table with partitions. I am trying to select the data and the partition name in the same query. I have created a part_test table and inserted data; it is on the Live SQL link. I am looking to query something like <code>SELECT ld_dt, ld_src, <partition name> FROM part_test, all_tab_partitions</code>, joining part_test with all_tab_partitions to get both the data and the partition name. Any help is greatly appreciated. Thanks, Mani
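One hedged way to sketch this without joining to all_tab_partitions at all: each row's ROWID encodes its data object, which maps to the (sub)partition name in the dictionary:
<code>
SELECT p.ld_dt, p.ld_src, o.subobject_name AS partition_name
FROM   part_test p
JOIN   user_objects o
  ON   o.data_object_id = DBMS_ROWID.ROWID_OBJECT(p.ROWID)
WHERE  o.object_name = 'PART_TEST';
</code>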

How to compare a table present in 2 different schemas in same server, as part of data migration

Wed, 08/19/2020 - 00:00
Team: the DB version we are using is Oracle Database 11g Release 11.2.0.4.0 - 64bit Production. Below are sample table generation scripts for reference. I have two DB users, USER_A and USER_B, and we have a table named EMP under both users. The CREATE TABLE statements are shown below: <code>
CREATE TABLE user_a.emp (
  empno    NUMBER(4),
  ename    VARCHAR2(10 BYTE) DEFAULT 'XYZ',
  job      VARCHAR2(9 BYTE),
  mgr      NUMBER(4),
  hiredate DATE,
  sal      NUMBER(7,2),
  comm     NUMBER(7,2),
  deptno   NUMBER(2)
);

CREATE TABLE user_b.emp (
  empno         NUMBER(4),
  ename         VARCHAR2(10 BYTE) DEFAULT 'XYZ',
  job           VARCHAR2(9 BYTE),
  mgr           NUMBER(4),
  hiredate      DATE,
  sal           NUMBER(7,2),
  comm          NUMBER(7,2),
  dept_name     VARCHAR2(50) DEFAULT 'ACCOUNTING',
  dept_location VARCHAR2(50)
);
</code> On further observation, we see that user_a.emp has a deptno column that user_b.emp does not, and user_b.emp has dept_name and dept_location columns which are not present in user_a.emp. Requirement: can someone assist me in writing two ALTER statements (dynamically)? One should generate the column names, along with datatypes and default values, that are present in user_b.emp but not in user_a.emp; I need this ALTER statement to add the missing columns. (The tables here are small, but our project's migration tables have a large number of columns.) The other ALTER statement should generate the column names, datatypes and default values present in user_a.emp but not in user_b.emp. This would assist me in bringing the tables into sync.
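A hedged starting point for generating the ALTER statements from the dictionary (one direction shown; swap USER_A/USER_B for the other). Note that ALL_TAB_COLUMNS.DATA_DEFAULT is a LONG column, so appending the DEFAULT clause needs a small PL/SQL loop or DBMS_METADATA rather than plain SQL concatenation:
<code>
SELECT 'ALTER TABLE user_a.emp ADD ' || b.column_name || ' ' || b.data_type
       || CASE WHEN b.data_type = 'VARCHAR2'
               THEN '(' || b.data_length || ')' END AS ddl
FROM   all_tab_columns b
WHERE  b.owner = 'USER_B' AND b.table_name = 'EMP'
AND    b.column_name NOT IN (SELECT a.column_name
                             FROM   all_tab_columns a
                             WHERE  a.owner = 'USER_A' AND a.table_name = 'EMP');
</code>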

How to get a complete list of analytic functions

Wed, 08/19/2020 - 00:00
There are many analytic functions listed in the doc (https://docs.oracle.com/en/database/oracle/oracle-database/12.2/sqlrf/Analytic-Functions.html#GUID-527832F7-63C0-4445-8C16-307FA5084056); however, I cannot find them in the ALL_PROCEDURES view (https://docs.oracle.com/database/121/REFRN/GUID-3DED0AD5-4220-4C77-AE16-07060A19E6DF.htm#REFRN20168). How do I get a complete list of analytic functions, together with their signatures (i.e. argument types, return type, etc.)?
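A hedged pointer: analytic functions are built-in SQL functions, not PL/SQL units, which is why ALL_PROCEDURES doesn't list them; V$SQLFN_METADATA (with V$SQLFN_ARG_METADATA for the arguments) does flag them:
<code>
SELECT f.name, f.minargs, f.maxargs, f.datatype
FROM   v$sqlfn_metadata f
WHERE  f.analytic = 'YES'
ORDER  BY f.name;
</code>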

query execution time in first attempt

Tue, 08/18/2020 - 00:00
Hi Tom, I am using SQL Developer and a particular user/schema for my Oracle development work in my project. While optimizing a query, it completes execution in ~60 sec on the first attempt whereas it takes ~10 sec on subsequent runs. Let's say I add some filter conditions and trigger it again: it will still use the existing shared pool/library cache and hence complete in less than 60 sec. My question is: how can I check, for a query, how much time it takes on its first attempt? Can I do it with <code>alter system flush buffer_cache;</code>? I don't have the privilege for that. Is there any other way to check the execution time for a query on its first attempt? Many thanks, Asit
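For reference, hedged sketches of what a DBA could run between timings, plus a privilege-free workaround (the employees table is just an example):
<code>
ALTER SYSTEM FLUSH BUFFER_CACHE;  -- discard cached data blocks (DBA only; avoid on shared systems)
ALTER SYSTEM FLUSH SHARED_POOL;   -- discard parsed cursors and plans

-- without those privileges, changing a comment forces a new cursor and a fresh
-- hard parse, though previously read data blocks may still be cached:
SELECT /* cold-run-1 */ COUNT(*) FROM employees;
</code>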

Longest common substring - Ratcliff-Obershelp similarity algorithm

Tue, 08/18/2020 - 00:00
Hi TOM, is there a way to make my code faster? I'm trying to find the longest common substring (short: lcs) using dynamic programming. For this, I build a 2D array with X rows and Y columns, where X = length of string1 and Y = length of string2. First, I initialize the array with 0 (can this be avoided?). Then, starting from the top-left, one character (from string1) at a time, each time there's a match with a character from string2 I put +1 in that cell. Here's a visual description: https://imgur.com/a/gz8JTms I'm already using PL/SQL native compilation. After this lcs function, I'm using the <b>Ratcliff-Obershelp</b> similarity algorithm, and I'm really stuck on how to build, iteratively, a function that would use this lcs function; any help would be greatly appreciated. Thanks!
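On the "can the zero-fill be avoided?" point, a hedged sketch: a sparse associative array stores only the matching cells, with missing keys read as 0:
<code>
DECLARE
  TYPE t_row  IS TABLE OF PLS_INTEGER INDEX BY PLS_INTEGER;
  TYPE t_grid IS TABLE OF t_row       INDEX BY PLS_INTEGER;
  g t_grid;

  FUNCTION cell(p_i PLS_INTEGER, p_j PLS_INTEGER) RETURN PLS_INTEGER IS
  BEGIN
    IF g.EXISTS(p_i) AND g(p_i).EXISTS(p_j) THEN
      RETURN g(p_i)(p_j);
    END IF;
    RETURN 0;                 -- absent cells count as zero
  END;
BEGIN
  g(3)(5) := cell(2, 4) + 1;  -- set only cells where the characters match
  DBMS_OUTPUT.PUT_LINE(cell(3, 5) || ' ' || cell(9, 9));
END;
/
</code>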

Updating and returning a nested table via OUT parameter

Tue, 08/18/2020 - 00:00
I'm trying to create and fill a nested table with a SELECT ... INTO, to be returned as an OUT parameter from a PL/SQL procedure. I've declared the record and table at both the package level and the schema level as follows. In the package: <code>
type rec_DT_REWARD_POOL_ROW is record (
  "DT_RP_ID"        NUMBER(24,0),
  "DT_RP_PROG_ID"   NUMBER(10,0),
  "DT_RP_TP_ID"     VARCHAR2(18),
  "DT_RP_TYPE_CODE" NUMBER(10,0),
  "DT_RP_VALUE"     FLOAT(126));

type tab_DT_REWARD_POOL is table of rec_DT_REWARD_POOL_ROW;
</code> I've also declared it at the schema level: <code>
CREATE OR REPLACE TYPE REC_DT_REWARD_POOL_ROW AS OBJECT (
  "DT_RP_ID"        NUMBER(24,0),
  "DT_RP_PROG_ID"   NUMBER(10,0),
  "DT_RP_TP_ID"     VARCHAR2(18),
  "DT_RP_TYPE_CODE" NUMBER(10,0),
  "DT_RP_VALUE"     FLOAT(126));

CREATE OR REPLACE TYPE tab_DT_REWARD_POOL AS TABLE OF REC_DT_REWARD_POOL_ROW;
</code> My procedure (standalone and packaged) is: <code>
PROCEDURE REWARDS_DB_PROD.RETRIEVE_REWRADS(
  TOKEN              REWARDS_DB_PROD.DT_REWARD_POOL.DT_RP_TP_ID%TYPE,
  P_Available_Reward OUT tab_DT_REWARD_POOL,
  P_MIC_REWARD       OUT tab_DT_REWARD_POOL) AS
</code> However, when I try to use the following SELECT INTO: <code>
SELECT DT_RP_ID, DT_RP_PROG_ID, DT_RP_TP_ID, DT_RP_TYPE_CODE, DT_RP_VALUE
INTO   p_mic_reward
FROM   REWARD_POOL
WHERE  ...;
</code> I'm not returning the whole table. The package-declared one gets a PLS-00642 error: local collection types not allowed in SQL. The schema-declared one tells me ORA-00947: not enough values. Since the record and object types had their field lists lifted directly from the CREATE TABLE for REWARD_POOL, I don't understand what's going on.
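A hedged sketch of the usual pattern for the schema-level type: collections are filled with BULK COLLECT, and a table of objects needs the object constructor in the select list (the WHERE condition below is guessed from the parameter name):
<code>
SELECT REC_DT_REWARD_POOL_ROW(dt_rp_id, dt_rp_prog_id, dt_rp_tp_id,
                              dt_rp_type_code, dt_rp_value)
BULK COLLECT INTO p_mic_reward
FROM   REWARD_POOL
WHERE  dt_rp_tp_id = TOKEN;
</code>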

Oracle has any feature similar to "Always Encrypted" that is offered by SQL server?

Tue, 08/11/2020 - 00:00
Hello, it would be great if you could help me here. Can you please share whether Oracle has any feature similar to the "Always Encrypted" feature offered by SQL Server? The link pasted at the end has information on Always Encrypted. I understand that Oracle offers Data Redaction to mask data; however, my understanding is that users with high authorization can bypass it. Oracle also offers Vault to control data access; however, there will still be Oracle users that can see the data in the clear. It would be really helpful if you could share some pointers. Thanks, AB ------------------------------------------------------------------------------------------------------------------------------- Link: https://docs.microsoft.com/en-us/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-ver15 Text from this link: Always Encrypted allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to the Database Engine (SQL Database or SQL Server). As a result, Always Encrypted provides a separation between those who own the data and can view it, and those who manage the data but should have no access. By ensuring on-premises database administrators, cloud database operators, or other high-privileged unauthorized users, can't access the encrypted data, Always Encrypted enables customers to confidently store sensitive data outside of their direct control. This allows organizations to store their data in Azure, and enable delegation of on-premises database administration to third parties, or to reduce security clearance requirements for their own DBA staff.

Dynamically passing sequence name to get currval

Mon, 08/10/2020 - 00:00
I am trying to get the currval of all the user sequences in the schema. When I run the SQL below it gives me "invalid sql statement". I am not sure if this is the right way to achieve it. Please advise. Assumption: the current values of the sequences are already set in the session. <code>
set serveroutput on;
declare
  sq   number;
  sqnm varchar2(50);
  stmt varchar2(1000);
  cursor sqnc is (select sequence_name from user_sequences);
begin
  for row in sqnc loop
    sqnm := row.sequence_name;
    stmt := 'SELECT' || sqnm ||'.currval into' || sq || 'from dual';
    execute immediate stmt;
    dbms_output_put_line(sqnm || ' ' ||sq);
  end loop;
end;
</code>
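A hedged corrected sketch: the literal needs spaces around the sequence name, the INTO belongs to EXECUTE IMMEDIATE rather than inside the SQL string, and dbms_output_put_line is missing its dot. (CURRVAL still raises ORA-08002 for any sequence not yet used in the session, per the stated assumption.)
<code>
DECLARE
  sq NUMBER;
BEGIN
  FOR r IN (SELECT sequence_name FROM user_sequences) LOOP
    EXECUTE IMMEDIATE 'SELECT ' || DBMS_ASSERT.SIMPLE_SQL_NAME(r.sequence_name)
                      || '.currval FROM dual'
      INTO sq;
    DBMS_OUTPUT.PUT_LINE(r.sequence_name || ' ' || sq);
  END LOOP;
END;
/
</code>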

Best practice to delete rows with a CLOB column

Wed, 08/05/2020 - 00:00
Environment: Oracle 12.1.0.2 on Exadata. I have a table with 30 columns, one of which is a CLOB, containing about 26 million rows. I have a purge job (a packaged PL/SQL procedure) that DELETEs rows from nine (9) other tables based on a list of IDs from the driver table that contains the CLOB. I save the list of IDs in a global temporary table and use that to delete the associated rows from the other tables, in the correct order to maintain all the FK relationships. I am running a test today that has identified about 98,000 IDs to be purged, but I have about 5 million that need to be purged to 'catch up', and going forward I'll have about 10,000 per day. The DELETEs on the other 9 tables run very quickly. The table with the CLOB column takes many times longer than the others, like several hours as opposed to several minutes. I'm guessing there is a better, more efficient way to remove the rows from the table containing the CLOB and to regain the storage used by the deleted CLOBs. I'm currently issuing just 'DELETE FROM <table> WHERE ID IN (SELECT ID FROM <gtt>)', where ID is the PK. I would appreciate any suggestions and an order of operational steps to accomplish this purge, both for the catch-up and the daily run of 10,000. Thanks in advance for all the help!! Much appreciated! -gary
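One commonly suggested shape for the catch-up, as a hedged sketch with hypothetical names: delete in limited batches so undo and LOB-segment maintenance stay bounded. It assumes the GTT was created ON COMMIT PRESERVE ROWS, since it commits mid-loop:
<code>
BEGIN
  LOOP
    DELETE FROM big_table t
    WHERE  t.id IN (SELECT g.id FROM purge_gtt g)
    AND    ROWNUM <= 10000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
END;
/
-- space inside the LOB segment is reused for new LOBs after the deletes;
-- returning it to the tablespace generally needs ALTER TABLE ... MOVE LOB
-- or a LOB segment shrink, sized and scheduled with the DBA
</code>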
