Feed aggregator

SQL Profiles: Long-Term Benefits and Issues

AskTom - Hot Questions - Wed, 09/16/2020 - 00:00
Hello, I researched your Q&A bank before asking this question. What are some of the things to keep in mind where a benefit of using a SQL profile turns into a curse? What can cause the use of a SQL profile to go bad, or what problems can go unnoticed as a result of using the profile? Thank you.

Select Data with Partition Name

AskTom - Hot Questions - Fri, 08/28/2020 - 00:00
Hi, I have a table with partitions. I am trying to select the data and the partition name in the same query. I have created a part_test table and inserted data; it is on the Live SQL link. Looking to query something like this: SELECT ld_dt, ld_src, <partition name> FROM part_test, all_tab_partitions, joining part_test with all_tab_partitions to get both the data and the partition name. Any help is greatly appreciated. Thanks, Mani

How to compare a table present in 2 different schemas in same server, as part of data migration

AskTom - Hot Questions - Wed, 08/19/2020 - 00:00
Team: The DB version we are using: Oracle Database 11g Release - 64bit Production. Below are sample table generation scripts for reference. I have two DB users named USER_A and USER_B, and we have a table named EMP under both DB users. The CREATE TABLE statements are shown below:
<code>
CREATE TABLE user_a.emp (
  empno    NUMBER(4),
  ename    VARCHAR2(10 BYTE) DEFAULT 'XYZ',
  job      VARCHAR2(9 BYTE),
  mgr      NUMBER(4),
  hiredate DATE,
  sal      NUMBER(7,2),
  comm     NUMBER(7,2),
  deptno   NUMBER(2)
);

CREATE TABLE user_b.emp (
  empno         NUMBER(4),
  ename         VARCHAR2(10 BYTE) DEFAULT 'XYZ',
  job           VARCHAR2(9 BYTE),
  mgr           NUMBER(4),
  hiredate      DATE,
  sal           NUMBER(7,2),
  comm          NUMBER(7,2),
  dept_name     VARCHAR2(50) DEFAULT 'ACCOUNTING',
  dept_location VARCHAR2(50)
);
</code>
On further observation, we see that user_a.emp has a deptno column that user_b.emp does not have, while user_b.emp has dept_name and dept_location columns that are not present in user_a.emp. Requirement: can someone assist me in writing two ALTER statements (dynamically)? One ALTER statement should generate the column names, along with their data types and default values, that are present in user_b.emp but not in user_a.emp; I need this ALTER statement to add the missing columns. The tables here are small, but our project migration tables have a large number of columns. The other ALTER statement should generate the column names, data types, and default values present in user_a.emp but not in user_b.emp. This would assist me in keeping the tables in sync.
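
The requested "generate the missing ALTER statements" logic can be sketched outside the database. The sketch below is a minimal illustration in Python, assuming the column metadata has already been fetched (in Oracle it would come from ALL_TAB_COLUMNS); the hard-coded column dictionaries are abbreviated stand-ins for the real EMP tables.

```python
# Sketch: generate ALTER TABLE ... ADD statements for columns present in
# one table but missing from the other. The metadata dicts are hard-coded
# here for illustration; real code would query ALL_TAB_COLUMNS.

def missing_column_alters(target_table, target_cols, source_cols):
    """Return ALTER statements adding to target_table every column that
    exists in source_cols but not in target_cols.

    Each cols mapping is {column_name: (data_type, default_or_None)}.
    """
    stmts = []
    for name, (dtype, default) in source_cols.items():
        if name not in target_cols:
            clause = f"{name} {dtype}"
            if default is not None:
                clause += f" DEFAULT {default}"
            stmts.append(f"ALTER TABLE {target_table} ADD ({clause})")
    return stmts

# Abbreviated metadata for the two EMP tables from the question:
user_a = {"EMPNO": ("NUMBER(4)", None), "DEPTNO": ("NUMBER(2)", None)}
user_b = {"EMPNO": ("NUMBER(4)", None),
          "DEPT_NAME": ("VARCHAR2(50)", "'ACCOUNTING'"),
          "DEPT_LOCATION": ("VARCHAR2(50)", None)}

for stmt in missing_column_alters("user_a.emp", user_a, user_b):
    print(stmt)
```

Running it both ways (A vs B, then B vs A) yields the two statement sets the question asks for.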

How to get a complete list of analytic functions

AskTom - Hot Questions - Wed, 08/19/2020 - 00:00
There are many analytic functions listed in the doc (https://docs.oracle.com/en/database/oracle/oracle-database/12.2/sqlrf/Analytic-Functions.html#GUID-527832F7-63C0-4445-8C16-307FA5084056); however, I cannot find them in the ALL_PROCEDURES view (https://docs.oracle.com/database/121/REFRN/GUID-3DED0AD5-4220-4C77-AE16-07060A19E6DF.htm#REFRN20168). How do I get a complete list of analytic functions, together with their signatures (i.e. argument types, return type, etc.)?

Longest common substring - Ratcliff-Obershelp similarity algorithm

AskTom - Hot Questions - Tue, 08/18/2020 - 00:00
Hi TOM, Is there a way to make my code faster? I'm trying to find the longest common substring (short: lcs) using dynamic programming; to that end, I'm building a 2D array with X rows and Y columns, where X = length of string1 and Y = length of string2. First, I initialize the array with 0 (can this be avoided?). Then I start from the top-left, one character (from string1) at a time, and each time there's a match with a character from string2, I put +1 in that cell. Here's a visual description: https://imgur.com/a/gz8JTms I'm already using PL/SQL native compilation. Following this lcs function, I'm using the <b>Ratcliff-Obershelp</b> similarity algorithm, and I'm really stuck on how to build a function that would use this "lcs" function iteratively; any help would be greatly appreciated. Thanks!
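
For reference, the overall shape of the Ratcliff-Obershelp computation on top of an lcs routine can be sketched as below (a language-neutral sketch in Python, not the asker's PL/SQL): find the longest common substring, recurse on the text to its left and right of the match, and score 2 * matched_chars / (len(a) + len(b)). Python's difflib.SequenceMatcher implements essentially this algorithm.

```python
def longest_common_substring(a, b):
    """Return (start_a, start_b, length) of the longest common substring,
    using the 2D dynamic-programming table the question describes, but
    keeping only one row at a time (so no full-array initialization)."""
    best = (0, 0, 0)
    prev = [0] * (len(b) + 1)  # match lengths ending at the previous row
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best[2]:
                    best = (i - cur[j], j - cur[j], cur[j])
        prev = cur
    return best

def ratcliff_obershelp(a, b):
    """Similarity = 2 * matched characters / total length, where matches
    are counted by recursing left and right of each longest common substring."""
    if not a or not b:
        return 0.0
    def matches(a, b):
        ia, ib, n = longest_common_substring(a, b)
        if n == 0:
            return 0
        return n + matches(a[:ia], b[:ib]) + matches(a[ia + n:], b[ib + n:])
    return 2.0 * matches(a, b) / (len(a) + len(b))
```

The recursion here could equally be rewritten with an explicit stack of (range_a, range_b) pairs if an iterative version is required.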

Updating and returning a nested table via OUT parameter

AskTom - Hot Questions - Tue, 08/18/2020 - 00:00
I'm trying to create and fill a nested table with a SELECT ... INTO, to be returned as an OUT parameter from a PL/SQL procedure. I've declared the record and table at both the package level and the schema level as follows. In the package:
<code>
type rec_DT_REWARD_POOL_ROW is record (
  "DT_RP_ID"        NUMBER(24,0),
  "DT_RP_PROG_ID"   NUMBER(10,0),
  "DT_RP_TP_ID"     VARCHAR2(18),
  "DT_RP_TYPE_CODE" NUMBER(10,0),
  "DT_RP_VALUE"     FLOAT(126));

type tab_DT_REWARD_POOL is table of rec_DT_REWARD_POOL_ROW;
</code>
I've also declared it at the schema level:
<code>
CREATE OR REPLACE TYPE REC_DT_REWARD_POOL_ROW AS OBJECT (
  "DT_RP_ID"        NUMBER(24,0),
  "DT_RP_PROG_ID"   NUMBER(10,0),
  "DT_RP_TP_ID"     VARCHAR2(18),
  "DT_RP_TYPE_CODE" NUMBER(10,0),
  "DT_RP_VALUE"     FLOAT(126));

CREATE OR REPLACE TYPE tab_DT_REWARD_POOL AS TABLE OF REC_DT_REWARD_POOL_ROW;
</code>
My procedure (standalone and package) is:
<code>
PROCEDURE REWARDS_DB_PROD.RETRIEVE_REWRADS(
  TOKEN              REWARDS_DB_PROD.DT_REWARD_POOL.DT_RP_TP_ID%TYPE,
  P_Available_Reward OUT tab_DT_REWARD_POOL,
  P_MIC_REWARD       OUT tab_DT_REWARD_POOL) AS
</code>
However, when I try to use the following SELECT INTO:
<code>
SELECT DT_RP_ID, DT_RP_PROG_ID, DT_RP_TP_ID, DT_RP_TYPE_CODE, DT_RP_VALUE
INTO p_mic_reward
FROM REWARD_POOL
WHERE ...;
</code>
I'm not returning the whole table. The package-declared one gets a PLS-00642 error: local collection types not allowed in SQL. The schema-declared one tells me ORA-00947: not enough values. Since the record and object types had their field lists lifted directly from the CREATE TABLE for REWARD_POOL, I don't understand what's going on.

Does Oracle have any feature similar to "Always Encrypted" as offered by SQL Server?

AskTom - Hot Questions - Tue, 08/11/2020 - 00:00
Hello, It would be great if you could help me here. Can you please share whether Oracle has any feature similar to the "Always Encrypted" feature offered by SQL Server? The link pasted at the end has information on "Always Encrypted". I understand that Oracle offers Data Redaction to mask data; however, my understanding is that users with high authorization can bypass it. Oracle also offers Vault to control data access; however, there will still be Oracle users that can see the data in the clear. It would be really helpful if you could share some pointers. Thanks, AB Link: https://docs.microsoft.com/en-us/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-ver15 Text from this link: Always Encrypted allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to the Database Engine (SQL Database or SQL Server). As a result, Always Encrypted provides a separation between those who own the data and can view it, and those who manage the data but should have no access. By ensuring on-premises database administrators, cloud database operators, or other high-privileged unauthorized users, can't access the encrypted data, Always Encrypted enables customers to confidently store sensitive data outside of their direct control. This allows organizations to store their data in Azure, and enable delegation of on-premises database administration to third parties, or to reduce security clearance requirements for their own DBA staff.

Dynamically passing sequence name to get currval

AskTom - Hot Questions - Mon, 08/10/2020 - 00:00
I am trying to get the currval of all the user sequences in the schema. When I ran my first attempt, it gave me "invalid SQL statement". I am not sure if the below (with the problems marked in comments) is the right way to achieve it. Please advise. Assumption: the current values of the sequences are already set in the session.
<code>
set serveroutput on;
declare
  sq   number;
  sqnm varchar2(50);
  stmt varchar2(1000);
  cursor sqnc is select sequence_name from user_sequences;
begin
  for row in sqnc loop
    sqnm := row.sequence_name;
    -- spaces are needed around the concatenated identifier, and INTO
    -- belongs to EXECUTE IMMEDIATE rather than inside the SQL string
    stmt := 'SELECT ' || sqnm || '.currval FROM dual';
    execute immediate stmt into sq;
    dbms_output.put_line(sqnm || ' ' || sq);
  end loop;
end;
</code>

Best practice to delete rows with a CLOB column

AskTom - Hot Questions - Wed, 08/05/2020 - 00:00
Environment: Oracle on Exadata. I have a table with 30 columns, one of which is a CLOB, that contains about 26 million rows. I have a purge job (a PL/SQL packaged procedure) that DELETEs rows from nine (9) other tables based on a list of IDs from the driver table that contains the CLOB. I save the list of IDs in a global temporary table and use that to delete the associated rows from the other tables, in the correct order to maintain all the FK relationships. I am running a test today that has identified about 98,000 IDs to be purged, but I have about 5 million that need to be purged to 'catch up', and going forward I'll have about 10,000 per day to purge. The DELETE on the other 9 tables runs very quickly. The table with the CLOB column is taking many times longer than the others, several hours as opposed to several minutes. I'm guessing there is a better and more efficient way to remove the rows from the table containing the CLOB and to regain the storage used by the deleted CLOBs. I'm currently issuing just 'DELETE FROM <table> WHERE ID IN (SELECT ID FROM <gtt>)', where ID is the PK. I would appreciate any suggestions, and the order of operational steps to accomplish this purge, both for the 'catch up' and for the daily run of 10,000. Thanks in advance for all the help!! Much appreciated! -gary
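
One commonly suggested pattern for large purges (an assumption here, not something stated in the question) is to delete in fixed-size batches with a commit after each batch rather than one massive DELETE, which also keeps undo small. The control flow is sketched below using SQLite from the Python standard library purely to keep it runnable; the table and column names are invented stand-ins for the real schema. Note that even after the delete, space held by deleted LOBs typically is not returned until the segment is reorganized (e.g. a shrink or move), which is a separate step.

```python
import sqlite3

def purge_in_batches(conn, batch_size):
    """Delete from big_table every id listed in purge_ids, batch_size ids
    at a time, committing between batches. Returns total rows deleted."""
    total = 0
    while True:
        ids = [r[0] for r in conn.execute(
            "SELECT id FROM purge_ids LIMIT ?", (batch_size,))]
        if not ids:
            break
        marks = ",".join("?" * len(ids))
        cur = conn.execute(
            f"DELETE FROM big_table WHERE id IN ({marks})", ids)
        conn.execute(f"DELETE FROM purge_ids WHERE id IN ({marks})", ids)
        conn.commit()  # interim commit keeps each transaction small
        total += cur.rowcount
    return total

# Tiny demo: 100 rows, 25 of them flagged for purge, batches of 10.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big_table (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("CREATE TABLE purge_ids (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO big_table VALUES (?, ?)",
                 [(i, "x") for i in range(100)])
conn.executemany("INSERT INTO purge_ids VALUES (?)",
                 [(i,) for i in range(25)])

deleted = purge_in_batches(conn, batch_size=10)
```

Whether batching beats a single set-based DELETE for this workload is exactly the kind of trade-off the question is asking about; the sketch only shows the mechanics.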

Comparing 2 nested table collections with 180 fields defined in each collection, by passing field names dynamically

AskTom - Hot Questions - Wed, 08/05/2020 - 00:00
I have 2 tables of the same structure, with around 180 columns in each table and one column as the PK. Each table has around 200k records. I need to compare the tables column by column, and if any difference is found for a record in any of the remaining 179 columns, I need to track that column name and both the old value and the new value for that column from both tables. This can be achieved with a SQL statement using UNION, GROUP BY and LEAD functions, but since 179 fields need to be compared, the SQL code gets very long. So I thought of using nested table collections to compare both tables: BULK COLLECT both tables' values into 2 different nested tables and iterate over them, the first loop iterating using the collection count and the second loop iterating over USER_TAB_COLS based on the number of columns in the tables. Is it possible to pass the field name to the nested loop dynamically? Below is the sample code for that.
<code>
SET SERVEROUTPUT ON;
DECLARE
  TYPE TEST1_TYPE IS TABLE OF TEST1%ROWTYPE;
  TEST1_TAB TEST1_TYPE;
  TEST2_TAB TEST1_TYPE;
  lcCol1 VARCHAR2(3000);
  lcCol2 VARCHAR2(3000);
  lQuery VARCHAR2(3000);
  CURSOR CUR_TAB_COL IS
    SELECT COLUMN_NAME, DATA_TYPE
    FROM USER_TAB_COLS
    WHERE TABLE_NAME = 'TEST1'
    ORDER BY COLUMN_ID;
  TYPE COL_TYPE IS TABLE OF CUR_TAB_COL%ROWTYPE;
  COL_TAB COL_TYPE;
BEGIN
  SELECT * BULK COLLECT INTO TEST1_TAB FROM TEST1 ORDER BY ID;
  SELECT * BULK COLLECT INTO TEST2_TAB FROM TEST2 ORDER BY ID;
  OPEN CUR_TAB_COL;
  FETCH CUR_TAB_COL BULK COLLECT INTO COL_TAB;
  CLOSE CUR_TAB_COL;
  FOR I IN 1..TEST2_TAB.COUNT LOOP
    FOR J IN COL_TAB.FIRST..COL_TAB.LAST LOOP
      lQuery := 'SELECT TEST1_TAB(' || I || ').' || COL_TAB(J).COLUMN_NAME || ' FROM DUAL';
      EXECUTE IMMEDIATE lQuery INTO lcCol1;
      lQuery := 'SELECT TEST2_TAB(' || I || ').' || COL_TAB(J).COLUMN_NAME || ' FROM DUAL';
      EXECUTE IMMEDIATE lQuery INTO lcCol2;
    END LOOP;
  END LOOP;
END;
/
</code>
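
The underlying diff logic (match rows on the PK, then walk every other column and record old/new values where they differ) can be sketched independently of the dynamic-SQL question. Below is a minimal Python illustration with invented sample data: rows are dicts keyed by column name, which is what "passing the field name dynamically" amounts to in a language where that lookup is native.

```python
def diff_tables(rows_a, rows_b, key="ID"):
    """Compare two row sets column by column. rows_a / rows_b are lists of
    dicts sharing the same columns, matched on the key column. Returns a
    list of (key_value, column_name, old_value, new_value) tuples."""
    b_by_key = {r[key]: r for r in rows_b}
    diffs = []
    for row in rows_a:
        other = b_by_key.get(row[key])
        if other is None:
            continue  # present only in the first set; report separately
        for col, old in row.items():
            if col != key and other[col] != old:
                diffs.append((row[key], col, old, other[col]))
    return diffs

# Invented sample rows standing in for TEST1 / TEST2:
old_rows = [{"ID": 1, "NAME": "SMITH", "SAL": 800},
            {"ID": 2, "NAME": "ALLEN", "SAL": 1600}]
new_rows = [{"ID": 1, "NAME": "SMITH", "SAL": 900},
            {"ID": 2, "NAME": "ALLEN", "SAL": 1600}]

print(diff_tables(old_rows, new_rows))
```

The same column-name-as-lookup-key idea is what the PL/SQL version is trying to emulate with dynamic SQL against the collection elements.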

Extract XML node value with an EXISTSNODE filter condition

AskTom - Hot Questions - Wed, 07/29/2020 - 00:00
Hi Chris, I appreciate your support. I have a requirement where I want to return specific node values if a particular node exists. For example, for the below XML my output will be 2 records, because the (Jobsdata/jobList/jobData/job/document) node exists 2 times, and the returned nodes are (job/document/fileID and job/directoryid). O/p --> 100,D100 200,D200 Please find the XML table in the Live SQL link. My output should be as given below:
<code>
id  fileID  directoryid
1   100     D100
1   200     D200
</code>
Filter criteria: I don't want to display the directoryid D300, because there is no <document> tag for it. Hence fileID and directoryid should only be displayed where the <document> tag is available. Thanks, Asit

Exporting CSV within the limits of Excel

AskTom - Hot Questions - Mon, 07/27/2020 - 00:00
Hello Tom, I need help with the question below. Single sensor data from different tables should be merged into one Excel file. A file should not exceed 4 MB in size, and should not exceed 100,000 records. If the single sensor data from the different tables exceeds the 4 MB or 100,000 record limit, then the remaining data should go to a second Excel file. The CSV should be exported into a structured folder like below: Year → Month → Bi-weekly/Weekly
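
The splitting rule described above (start a new file whenever adding a row would exceed the byte limit or the row-count limit) can be sketched as below. This is an illustrative Python sketch only; the "files" are in-memory strings, and real code would write them to disk under the Year/Month/Week folder tree (e.g. with pathlib).

```python
import csv
import io

def split_into_csv_chunks(rows, header, max_bytes=4 * 1024 * 1024,
                          max_rows=100_000):
    """Return a list of CSV file contents (strings). Each chunk starts
    with the header row and holds at most max_rows data rows, never
    exceeding max_bytes once a row would overflow it."""
    chunks = []
    buf = None
    nrows = 0

    def start_new():
        nonlocal buf, nrows
        buf = io.StringIO()
        csv.writer(buf).writerow(header)
        nrows = 0

    start_new()
    for row in rows:
        line = io.StringIO()
        csv.writer(line).writerow(row)          # render the row once
        encoded = line.getvalue()
        if nrows >= max_rows or buf.tell() + len(encoded) > max_bytes:
            chunks.append(buf.getvalue())       # close the current file
            start_new()
        buf.write(encoded)
        nrows += 1
    chunks.append(buf.getvalue())
    return chunks

# 5 rows with a row cap of 2 per file -> 3 files (2 + 2 + 1 rows).
chunks = split_into_csv_chunks([[i, "x"] for i in range(5)],
                               ["id", "v"], max_rows=2)
```

For byte sizing of non-ASCII data, the length check would use the encoded byte length rather than the string length; the sketch keeps it simple.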

DataPump crashed, now can't create new jobs

AskTom - Hot Questions - Mon, 07/27/2020 - 00:00
Hi Tom, a scheduled task that worked for months suddenly crashed. It's an EXPDP job transferring data to an external SSD (USB 3.0). The disk still runs. The errors listed after a new attempt to export are like: (EXPDP SYSTEM/XXXX@....) ORA-31626: job does not exist ORA-31638: cannot attach to job SYS_EXPORT_SCHEMA_01 for user SYSTEM ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95 ...... I read some hints, resulting in deleting all Data Pump jobs (which was too much, I think).

Listagg function is failing in 12c but working fine in 11g with the same data

AskTom - Hot Questions - Tue, 07/21/2020 - 00:00
Hi Tom, Recently our database was migrated, so currently we have two databases: one is 11g and the other is 12c. Once the 12c issues are fixed we will shut down 11g and expose the 12c database to the customer. After the migration some of our reports started failing in the 12c database; mainly I saw this issue with the LISTAGG function. Initially I thought the data length was more than 4000 and that is why it was failing, but when I started investigating I found the same data working fine in 11g, with a data length of around 3200, which is much less than 4000. I used the DUMP function to check in case it was treating the input type as RAW, but to my surprise it is VARCHAR2. And DUMP is not throwing any error, while on the same data even the LENGTH function is throwing an error. The ORA error is: ORA-01489: result of string concatenation is too long. I checked that MAX_STRING_SIZE is set to STANDARD. I tried different data sets and found that whenever the input data length is more than 2000, we get this error in 12c for this query. I have requested the DBA team to set MAX_STRING_SIZE to EXTENDED (and run utl32k.sql), but I am not very sure about that solution, as I have not found the root cause of the problem. And I am afraid that if we change this parameter to EXTENDED, we won't be able to change it back to STANDARD, and if we face any issue then we would have to rebuild the database, which would be a big problem. I need your help and valuable suggestions on this.
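
One plausible cause worth checking (an assumption, not confirmed by the question): the VARCHAR2 limit behind ORA-01489 is measured in bytes, so if the 12c database uses a multi-byte character set (e.g. AL32UTF8) while the 11g one was single-byte, roughly 3200 characters can exceed 4000 bytes. The sketch below shows how one might pre-compute, per group, the byte length a LISTAGG-style concatenation would produce and flag the offenders; data and limits are illustrative.

```python
from collections import defaultdict

def oversized_groups(rows, separator=",", limit=4000):
    """Given (group_key, value) rows, compute the byte length the
    concatenated group would have (value bytes plus one separator between
    each pair) and return {group_key: byte_length} for groups over limit."""
    by_group = defaultdict(list)
    for key, value in rows:
        by_group[key].append(value)
    flagged = {}
    for key, values in by_group.items():
        total = (sum(len(v.encode("utf-8")) for v in values)
                 + len(separator.encode("utf-8")) * (len(values) - 1))
        if total > limit:
            flagged[key] = total
    return flagged

# Note how 3 accented characters cost 6 bytes in UTF-8, not 3:
print(oversized_groups([("g", "é" * 3)], limit=5))
```

The equivalent check inside Oracle would use LENGTHB rather than LENGTH; on 12.2+, LISTAGG's ON OVERFLOW TRUNCATE clause is another avenue, but that is a separate decision from finding the root cause.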

Inserting N rows from a cartesian

AskTom - Hot Questions - Wed, 06/24/2020 - 00:00
I have the following setup (test case below). How can the query be modified to INSERT N rows per employee_id? I want to add something like this but can't figure out how to implement it for my test case ( select level dbms_random.value(5...
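
The fan-out being asked about, independent of the SQL syntax, is a cross product of the source rows with the numbers 1..N. A minimal Python illustration of that shape (not the Oracle answer itself, which would use a row generator such as CONNECT BY LEVEL in a cross join):

```python
from itertools import product

def expand_rows(employee_ids, n):
    """Pair each employee id with copy numbers 1..n, the way a cross join
    against an n-row generator multiplies each source row n times."""
    return [(emp, copy)
            for emp, copy in product(employee_ids, range(1, n + 1))]

print(expand_rows([10, 20], 3))
```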

Oracle doesn't release undo even if time is way past the undo_retention value

AskTom - Hot Questions - Wed, 06/24/2020 - 00:00
Good Evening, I'm hoping that I can finally find out how Oracle decides when it is time to release the UNDO. In this database, the UNDO_RETENTION is set to 600 seconds. However, it has been over two hours since the huge purge job that used u...

How to find why the same SQL is slower with a different user

AskTom - Hot Questions - Fri, 06/19/2020 - 00:00
I have run the same SQL on the production environment as SYSTEM and as another normal user. When checking the explain plan, it shows the same plan for both users. But when executing the SQL, the SYSTEM user returns the result 50 times faster than the normal user. I have s...

Using 1 CTE for 2 different DML statements

AskTom - Hot Questions - Thu, 06/18/2020 - 00:00
I have a query that reads through access_history data, which looks for record_type='T' as I only want Time & attendance records, which haven't been paired previously (processed=0). I am also keeping the Systimestamp for each record so I can get the ...

Timestamp with local time zone

AskTom - Hot Questions - Sun, 06/14/2020 - 00:00
Trying to deal with timezone- and DST-aware datetime handling, but I couldn't get it safe and convenient at the same time. The almost perfect solution would be timestamp with local time zone, except for the fact that it uses nls_timestamp_format (opposed...
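
For comparison, the pattern that TIMESTAMP WITH LOCAL TIME ZONE approximates (store instants normalized to a single zone, convert to a named region zone only on display so DST rules apply automatically) looks like this outside the database. A minimal Python sketch; the zone name is just an example.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

berlin = ZoneInfo("Europe/Berlin")

# Two UTC instants, six months apart, converted for display:
winter = datetime(2021, 1, 15, 12, 0, tzinfo=timezone.utc).astimezone(berlin)
summer = datetime(2021, 7, 15, 12, 0, tzinfo=timezone.utc).astimezone(berlin)

print(winter.isoformat())  # +01:00 offset (CET)
print(summer.isoformat())  # +02:00 offset (CEST)
```

The key point is that the stored value never changes; only the rendering applies the session/display zone, which is the behavior the question is trying to get safely.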