SQL Developer extract to Excel - date format changes from 02-NOV-2020 12:00:00 AM to 11/2/20. Hi, I'm using SQL Developer 20.4 and Excel 365 16.0.13127.21490. When I run a query with dates, the dates are returned in the format 02-NOV-2020 12:00:00 AM. When I export to Excel, the Excel file shows the dates as 11/2/20. How do I go about keeping the original date format that SQL Developer displayed?
In the multiple-recipients example at the https://oracle-base.com/articles/misc/email-from-oracle-plsql#multiple-recipients link, mail goes to all the recipients, but all the names in the "To" list are visible to every recipient. Is there any parameter setting such that only the intended recipient's name appears, so that each recipient is not aware of the other names in the "To" list?
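In case it helps frame the question: one common approach is to deliver the hidden recipients via BCC, so they receive the mail without appearing in the visible "To" header. A minimal sketch, assuming UTL_MAIL is installed and SMTP_OUT_SERVER is configured (all addresses are placeholders):

```sql
-- Sketch: addresses in "bcc" are delivered the message but are not
-- listed in the visible To: header (placeholder addresses).
BEGIN
  UTL_MAIL.send(
    sender     => 'sender@example.com',
    recipients => 'sender@example.com',   -- the visible To:, often the sender itself
    bcc        => 'user1@example.com,user2@example.com',
    subject    => 'Test',
    message    => 'Hello');
END;
/
```

Alternatively, loop over the recipient list and send one message per recipient.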
I have the script below that was working normally; however, I believe the Gmail certificate has expired. With <code>OpenSSL> s_client -connect smtp.gmail.com/1687 -starttls smtp</code> I generated another certificate for my wallet and updated the ACLs, but it still raises an error on send, at the line <code>l_reply := sys.utl_smtp.starttls(L_MAIL_CONN);</code>: ORA-28759: failure to open file. Yet if I run the command below, it works: <code>select UTL_HTTP.request(url => 'http://www.google.com', wallet_path => 'file:/oracle/ora11g/owm/wallets/', wallet_password => 'xxxxxxxxx') from dual;</code>
<code>
CREATE OR REPLACE PROCEDURE PR_JUN_MAIL_NF_DEVOL AS
  VAR_CONTROLE      NUMBER;
  VAR_MENSAGEM      VARCHAR2(32767);
  L_MAILHOST        VARCHAR2(64) := 'smtp.gmail.com';
  P_USERNAME_       VARCHAR2(50) := 'firstname.lastname@example.org';
  P_PASSWORD_       VARCHAR2(50) := 'xxxxxxxx';
  L_DE              VARCHAR2(64) := 'Intranet';
  L_FROM            VARCHAR2(64) := 'email@example.com';
  k_wallet_path     CONSTANT VARCHAR2(100) := 'file:/oracle/ora11g/owm/wallets';
  k_wallet_password CONSTANT VARCHAR2(100) := 'xxxxxxxxxxxxx';
  k_domain          CONSTANT VARCHAR2(100) := 'localhost';
  l_port            INTEGER := 587;
  l_reply           utl_smtp.reply;
  l_replies         utl_smtp.replies;
  TO_NAME           VARCHAR2(200) := 'firstname.lastname@example.org';
  SUBJECT           VARCHAR2(200) := 'NOTAS FISCAIS DE DEVOLUCAO EMITIDAS';
  L_MAIL_CONN       UTL_SMTP.CONNECTION;
BEGIN
  VAR_CONTROLE := 1;
  BEGIN
    SELECT distinct 0
      INTO VAR_CONTROLE
      from tambem_JUN_NFE nfe
      inner join bgm_notafiscal bnf on nfe.codintnf_bgmnf = bnf.codintnf
      inner join esfclass e         on e.codclassfisc = bnf.codclassfisc
      inner join BGM_CLIENTE CLI    on bnf.Codcli = CLI.CODCLI
      inner join CTR_FILIAL F       on nfe.empresa = f.codigoempresa
                                   and nfe.filial = f.codigofl
     WHERE NFE.DATAEMISSAO > to_date('01/02/2017', 'dd/mm/yyyy')
       and upper(e.descclassfisc) like '%DEVOLU%'
       AND NFe.Codintnf_Bgmnf || upper(decode(nfe.status, 'A','AUTORIZADA',
                                                          'C','CANCELADA',
                                                          'I','INUTILIZADA',
                                                          'N','NORMAL',
                                                          'P','PROCESSANDO',
                                                          'R','REJEITADA'))
           not in (select jc.codintnf || upper(jc.statunf) from CONTROLE_NF_DEV JC);
  EXCEPTION WHEN OTHERS THEN
    VAR_CONTROLE := 1;
  end;

  if VAR_CONTROLE = 0 then
    l_reply := UTL_SMTP.OPEN_CONNECTION(
                 host                          => L_MAILHOST,
                 port                          => l_port,
                 c                             => L_MAIL_CONN,
                 wallet_path                   => k_wallet_path,
                 wallet_password               => k_wallet_password,
                 secure_connection_before_smtp => FALSE);
    Dbms_Output.Put_Line('apos conectar');
    l_replies := sys.utl_smtp.ehlo(L_MAIL_CONN, k_domain);
    l_reply   := sys.utl_smtp.starttls(L_MAIL_CONN);
    l_replies := sys.utl_smtp.ehlo(L_MAIL_CONN, k_domain);
    l_reply   := sys.utl_smtp.auth(L_MAIL_CONN, P_USERNAME_, P_PASSWORD_, utl_smtp.all_schemes);
    l_reply   := sys.utl_smtp.mail(L_MAIL_CONN, L_FROM);
    l_reply   := sys.utl_smtp.RCPT(L_MAIL_CONN, TO_NAME);
    l_reply   := UTL_SMTP.OPEN_DATA(L_MAIL_CONN);
    UTL_SMTP.WRITE_DATA(L_MAIL_CONN, 'FROM:' || L_DE || '<' || L_FROM || '>' || UTL_TCP.CRLF);
    UTL_SMTP.WRITE_DATA(L_MAIL_CONN, 'TO:' || TO_NAME || UTL_TCP.CRLF);
    UTL_SMTP.WRITE_DATA(L_MAIL_CONN, 'SUBJECT:' || SUBJECT || UTL_TCP.CRLF);
    UTL_SMTP.WRITE_DATA(L_MAIL_CONN, 'Content-Type: text/plain; boundary="gc0p4Jq0M2Yt08jU534c0p"' || UTL_TCP.CRLF);
    UTL_SMTP.WRITE_DATA(L_MAIL_CONN, 'MIME-Version: 1.0' || UTL_TCP.CRLF);
    UTL_SMTP.WRITE_DATA(L_MAIL_CONN, '--gc0p4Jq0M2Yt08jU534c0p');
    UTL_SMTP.WRITE_DATA(L_MAIL_CONN, 'Content-Type: text/plain');
    UTL_SMTP.WRITE_DATA(L_MAIL_CONN, ' ' || UT...
The most peculiar Oracle situation in my career: Oracle changes how it records a block read, from "direct path read" to not recording it in any I/O wait event at all
Greetings, I have an extremely perplexing situation where Oracle changes how it records a block read. Last week it wasn't counting block reads in any I/O wait event at all; this week it started adding them to the 'direct path read' wait event. This is occurring in our production environment; however, I was able to reproduce the situation in our test environment with test data. I used the dba_source view to create two test tables, with 1.2 million rows for table 1 and 4 million rows for table 2: <code>
-- Table1 (1.2M records)
create table table1 as select * from dba_source where rownum <= 1200000;
-- Table2 (4M records)
create table table2 as select * from dba_source;

create index t1_pk on table1(owner);
create index t2_pk on table2(owner, line);
exec dbms_stats.gather_schema_stats('JOHN');
</code> Then I ran this select statement 120 times: <code>select count(*) from Table1 where line=1 and owner in (select Table2.owner from Table2 where Table2.owner=Table1.owner) order by owner;</code> In some cases Oracle 19c records the I/O in "direct path read" wait events, and in other cases it doesn't seem to report it in any I/O wait event. That is so odd.
TEST CASE 1: The IOStat summary doesn't record the I/O, and neither does any wait event: <code>
Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                            Total Wait       Avg   % DB Wait
Event                                 Waits Time (sec)      Wait   time Class
------------------------------ ------------ ---------- --------- ------ --------
DB CPU                                             20.2            99.6
PGA memory operation                  2,524          .1   20.27us     .3 Other
Disk file operations I/O                520           0   59.49us     .2 User I/O
db file sequential read                 211           0   12.33us     .0 User I/O
Parameter File I/O                        8           0  257.00us     .0 User I/O
enq: RO - fast object reuse               2           0  784.50us     .0 Applicat
control file sequential read            209           0    5.32us     .0 System I
log file sync                             1           0     .95ms     .0 Commit
SQL*Net message to client               546           0    1.53us     .0 Network
SQL*Net more data to client              22           0   33.77us     .0 Network

SQL ordered by Gets                  DB/Inst: ORACLE/stbyoracle  Snaps: 2727-2728
-> Resources reported for PL/SQL code includes the resources used by all
   SQL statements called by the code.
-> %Total - Buffer Gets as a percentage of Total Buffer Gets
-> %CPU   - CPU Time as a percentage of Elapsed Time
-> %IO    - User I/O Time as a percentage of Elapsed Time
-> Total Buffer Gets: 3,399,948
-> Captured SQL account for 98.1% of Total Buffer Gets

                             Gets           Elapsed
 Buffer Gets  Executions  per Exec  %Total  Time (s)  %CPU   %IO  SQL Id
------------  ----------  --------  ------  --------  ----  ----  -------------
   3,241,728         120  27,014.4    95.3      14.4  99.5     0  82mps751cqh84
Module: SQL*Plus
select count(*) from Table1 where line=1 and owner in (select
Table2.owner from Table2 where Table2.owner=Table1.owner) order by owner

IOStat by Function summary           DB/Inst: ORACLE/stbyoracle  Snaps: 2727-2728
-> 'Data' columns suffixed with M,G,T,P are in multiples of 1024,
   other columns suffixed with K,M,G,T,P are in multiples of 1000
-> ordered by (Data Read + Write) desc

                Reads:  Reqs    Data    Writes: Reqs    Data    Waits:    Avg
Function Name   Data    per sec per sec Data    per sec per sec Count     Time
--------------- ------- ------- ------- ------- ------- ------- ------- --------
LGWR            3M          1.5   .022M 10M         3.6   .075M     678 368.73us
Others          7M          2...
Hi, I have tried 19.9 - 19.11. I have noticed some suspicious behaviour regarding dual. I will create an example. At the moment, as you can see, only a single row comes from mytab. Then I dump the datatype to the output. <code>
SQL> set serveroutput on size unlimited;

declare
  a CLOB;
  l_msg_content_begin CLOB := EMPTY_CLOB();
  CURSOR cur IS
    with mytab as (
      select 'SOMERANDOMTABLE' as main_table from dual
      --union select 'ALSOSOMERANDOMTABLE' as main_table from dual
    )
    select main_table, lower_main_table
      from (
        select main_table, lower(main_table) as lower_main_table
          from mytab
      )
     order by 1 desc;
  rec cur%rowtype;
BEGIN
  FOR rec IN cur LOOP
    dbms_output.put_line(rec.main_table);
    select dump(rec.lower_main_table) into a from dual;
    dbms_output.put_line(a);
    -- ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    -- If you have only one row from dual, then you get an error if you uncomment
    -- this: "l_msg_content_begin := ..."
    -- With 2 or more rows from dual, all good.
    --l_msg_content_begin := 'blabla '||rec.lower_main_table||' blablabla '||rec.lower_main_table||'bla'||UTL_TCP.CRLF;
  END LOOP;
  --dbms_output.put_line(substr(l_msg_content_begin, 1, 2000) || 'AA');
END;
/
</code> And here you can see the datatype is CHAR (Typ=96); check the length (the whole string is padded with spaces, ASCII 32 == space): <code>
SOMERANDOMTABLE
Typ=96 Len=32767: 115,111,109,101,114,97,110,100,111,109,116,97,98,108,101,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,....................
</code> It seems like the <b>lower()</b> function somehow produces this strange behaviour. Is this normal..? Also, when I dump rec.main_table instead (so not the lower() function output) <code>select dump(rec.main_table) into a from dual;</code> then I get type CHAR and the actual length. So that is expected.
In contrast, when I also uncomment the second line <code>--union select 'ALSOSOMERANDOMTABLE' as main_table from dual</code> then the result is as expected: <code>
SOMERANDOMTABLE
Typ=1 Len=15: 115,111,109,101,114,97,110,100,111,109,116,97,98,108,101
ALSOSOMERANDOMTABLE
Typ=1 Len=19: 97,108,115,111,115,111,109,101,114,97,110,100,111,109,116,97,98,108,101
</code> The type is VARCHAR2 and the length is the actual length. Regards, Raul
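Not part of the question itself, but a possible workaround sketch, assuming the goal is simply to avoid the blank-padded 32767-byte intermediate seen in the dump, is to CAST the lower() expression to a bounded VARCHAR2 inside the inline view:

```sql
-- Sketch: an explicit CAST keeps the derived column a short VARCHAR2
-- instead of the blank-padded type observed in the dump above.
with mytab as (
  select 'SOMERANDOMTABLE' as main_table from dual
)
select main_table,
       cast(lower(main_table) as varchar2(100)) as lower_main_table
from   mytab
order  by 1 desc;
```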
Hi Tom, I am given the "current" allocation of items to eight buckets, and I want to make it more efficient by filling as much as possible of bucket A, then bucket B, then bucket C (as indicated by the "priority"). I do this by moving items between buckets: take as many P1 items as possible from bucket H and reassign them to bucket A, then to bucket B, and so on, until all of them are allocated; then take the next lowest-priority bucket and repeat. How much of P1 and P2 to fill is defined in the volumes table, i.e. in the sample data each of the buckets A-H can hold quantities in multiples of 8 for P1 and 7 for P2. I also want to include round-up and round-down logic to distribute the quantities nearly evenly across buckets when one bucket has too big a quantity. The items P1 and P2 are completely independent, and one's result shouldn't impact the other. The height, weight, and width don't matter here, so they are not present in any sample data. I have started with the query below but couldn't make the round-up and round-down cases work. Also, in the cases when a quantity is moved into two or more buckets from one bucket, or moved out of two buckets into one bucket, can we show a single comma-separated row instead of multiple step-by-step rows? And in the cases when one row has too much quantity, can we implement round-up and round-down logic to distribute the quantities nearly equally across buckets, in multiples of the quantities in the values table? E.g. if we update the quantity to 72 in bucket H for part P1, the current result gives five rows for bucket H. Can we round buckets A-H to 16 each and leave the remaining quantity in bucket H?
ORA-65040 when loading a jar file into a pluggable database with sys.dbms_java.loadjava or the command-line tool loadjava
I want to load a jar file into the database. First I loaded it into the root container: loadjava -o -f -noverify -synonym -stdout -verbose -recursivejars -resolve -grant PUBLIC -user sys/password@CDB javax.mail.jar >\mail.log This works, but I don't have access to it in the PDB. Trying to load the jar file into the PDB fails with the error ORA-65040: operation not allowed from within a pluggable database. The same message comes when I use dbms_java.loadjava('-v -r -g PUBLIC /path/java/javax.mail.jar'); after switching into the PDB. How can I load jar files into a PDB or make them available there? Thanks for the help, Sigrid
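For context, the usual multitenant pattern (only a sketch, not a confirmed fix; PDB1 and mail_pdb.log are placeholder names) is to point loadjava at the PDB's own service rather than at the CDB root, since Java objects loaded into CDB$ROOT are not automatically usable from a PDB:

```
loadjava -o -f -noverify -synonym -stdout -verbose -recursivejars -resolve -grant PUBLIC -user sys/password@PDB1 javax.mail.jar >mail_pdb.log
```

The connect string after -user is the only difference from the command above: it names the PDB service instead of the CDB.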
Hello, Ask TOM Team. I have an application (it is like a reporting tool) that reads data from an 18c Oracle Database. One day the results return quickly, but a couple of days later it gets slow. When I execute the "ALTER SYSTEM FLUSH SHARED_POOL" command, the results return quickly again. What could be the problem? Thanks in advance. Regards,
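To help narrow this down, a diagnostic sketch (under the assumption, which would need verifying, that literal SQL or a high child-cursor count is bloating the shared pool) is to look for statements with many versions before the next flush:

```sql
-- Sketch: statements with very high version counts are a common reason
-- that flushing the shared pool "fixes" performance for a while.
select sql_id,
       version_count,
       executions,
       substr(sql_text, 1, 60) as sql_text
from   v$sqlarea
order  by version_count desc
fetch first 10 rows only;
```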
Hi Tom, I have installed SAP on an Oracle DB. One tablespace, PSAPSR3, got created, and we have only one sapdata filesystem, /Oracle/FS/sapdata. Can you help me understand how autoextend will work: if we set autoextend, will it extend the whole tablespace? Thanks, Nitin
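To make the question concrete, a query sketch (standard dictionary views, nothing SAP-specific assumed) for checking which datafiles of PSAPSR3 can autoextend, and by how much:

```sql
-- Autoextend is a per-datafile attribute, not a per-tablespace one:
select file_name,
       autoextensible,
       bytes    / 1024 / 1024 as size_mb,
       maxbytes / 1024 / 1024 as max_mb,
       increment_by
from   dba_data_files
where  tablespace_name = 'PSAPSR3';
```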
I have a recursive query as below: <code>
with addresses as (
  select cust_id, address addr
  from (
    select 10 cust_id, '9 Help Street, Level 4' address from dual union all
    select 11 cust_id, '22 Victoria Street' address from dual union all
    select 12 cust_id, '1495 Franklin Str.' address from dual union all
    select 13 cust_id, '30 Hasivim St.,Petah-Tikva' address from dual union all
    select 14 cust_id, '2 Jakaranda St' address from dual union all
    select 15 cust_id, '61, Science Park Rd' address from dual union all
    select 16 cust_id, '61, Social park road.' address from dual union all
    select 17 cust_id, 'Av. Hermanos Escobar 5756' address from dual union all
    select 18 cust_id, 'Ave. Hermanos Escobar 5756' address from dual union all
    select 19 cust_id, '8000 W FLORISSANT Ave.' address from dual union all
    select 20 cust_id, '8600 MEMORIAL PKWY SW' address from dual union all
    select 21 cust_id, '8200 FLORISSANTMEMORIALWAYABOVE SW' address from dual union all
    select 22 cust_id, '8600 MEMORIALFLORISSANT PKWY SW.' address from dual
  ) t1
),
replacements as (
  select id, to_str, from_string from_str
  from (
    select  1 id, 'St' to_str, 'Street' from_string from dual union all
    select  2 id, 'St' to_str, 'St' from_string from dual union all
    select  3 id, 'St' to_str, 'Strit' from_string from dual union all
    select  4 id, 'St' to_str, 'Str' from_string from dual union all
    select  5 id, 'Rd' to_str, 'Rd.' from_string from dual union all
    select  6 id, 'Rd' to_str, 'road' from_string from dual union all
    select  7 id, 'Av' to_str, 'Av.' from_string from dual union all
    select  8 id, 'Av' to_str, 'Ave' from_string from dual union all
    select  9 id, 'Av' to_str, 'Avenue' from_string from dual union all
    select 10 id, 'Av' to_str, 'Aven.' from_string from dual union all
    select 11 id, 'West' to_str, 'W' from_string from dual union all
    select 12 id, 'South West' to_str, 'SW' from_string from dual
  ) t2
),
r(cust_id, addr, test_addr, l) as (
  select cust_id, addr,
         regexp_replace(addr, '(^|\W)' || from_str || '(\W|$)', '\1' || to_str || '\2') test_addr,
         id - 1
  from   addresses, replacements
  where  id = (select count(*) from replacements)
  union all
  select cust_id, addr,
         regexp_replace(test_addr, '(^|\W)' || from_str || '(\W|$)', '\1' || to_str || '\2') test_addr,
         l - 1
  from   r, replacements
  where  id = l
)
select cust_id, addr, test_addr, l
from   r
where  l = 0;
</code> PRESENT OUTPUT: <code>
cust_id  addr                                 test_addr
10       9 Help Street, Level 4               9 Help St, Level 4
11       22 Victoria Street                   22 Victoria St
12       1495 Franklin Str.                   1495 Franklin St.
13       30 Hasivim St.,Petah-Tikva           30 Hasivim St.,Petah-Tikva
14       2 Jakaranda St                       2 Jakaranda St
15       61, Science Park Rd                  61, Science Park Rd
16       61, Social park road.                61, Social park Rd
17       Av. Hermanos Escobar 5756            Av Hermanos Escobar 5756
18       Ave. Hermanos Escobar 5756           Av Hermanos Escobar 5756
19       8000 W FLORISSANT Ave.               8000 West FLORISSANT Av
20       8600 MEMORIAL PKWY SW                8600 MEMORIAL PKWY South West
21       8200 FLORISSANTMEMORIALWAYABOVE SW   8200 FLORISSANTMEMORIALWAYABOVE South West
22       8600 MEMORIALFLORISSANT PKWY SW.     8600 MEMORIALFLORISSANT PKWY South West.
</code> The query is not working as expected for cust_id 16, 18 and 22. For cust_id 16 there is a dot after "road", yet it still changes to "Rd". I need two queries: one with an exact match including the dot, and the other matching with or without the dot. The expected output for the first query (exact match): <code>
cust_id  addr                                 test_addr
10       9 Help Street, Level 4               9 Help St, Level 4
11       22 Victoria Street                   22 Victoria St
12       1495 Franklin Str.                   1495 Franklin St.
13       30 Hasivim St.,Petah-Tikva           30 Hasivim St.,Petah-Tikva
14       2 Jakaranda St                       2 Jakaranda St
15       61, Science Park Rd                  61, Science Park Rd
16       61, Social park road.                61,...
Hi Tom, I'm planning to partition a 4 GB table (i.e. table A): 1) the first-level partitioning is by date, interval 3 months; 2) the subpartitioning is by hash on customer_id (unique values). I need to decide if the number of hash subpartitions should be 8 or 16: - if I use 8, each subpartition's size is around 16 MB - if I use 16, each subpartition's size is around 8 MB. I'm also investigating other tables for partitioning (10 GB to 20 GB, i.e. table B): first by list, then subpartitioned by hash. My understanding is that in order to take advantage of partition-wise joins, the number of hash subpartitions should be equal in both tables. Meaning: if I use interval partitioning by date with 16 hash subpartitions in table A, then I should use list partitioning with 16 hash subpartitions in table B. Hash subpartitions in tables A and B = 16. Please advise. Best Regards, Jola
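For reference, a DDL sketch of the layout described for table A (column names, the payload column, and the anchor date are assumptions, not from the question):

```sql
-- Table A sketch: 3-month interval partitions, 16 hash subpartitions
-- on customer_id; table B would need the same subpartition count and
-- hash key for a full partition-wise join.
create table table_a (
  customer_id  number,
  created_date date,
  payload      varchar2(100)
)
partition by range (created_date) interval (numtoyminterval(3, 'month'))
subpartition by hash (customer_id) subpartitions 16
( partition p_start values less than (date '2021-01-01') );
```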
Hi, I am new to APEX and I would like to add a dynamic action that automatically updates the previous record's end_date column when a new record is inserted. The idea is: whatever we put in start_date for the new record, the same date is automatically assigned to the previous record's end_date. For example:

Business_Name | Business_id | City_name | City_id | Start_date | End_Date
ABC           | 1000        | NY        | 20      | 03-10-2021 | 12/31/9999

When the user creates another record for the same business name but with a different city, the end_date of the previous record is set to the start_date of the new record:

Business_Name | Business_id | City_name | City_id | Start_date | End_Date
ABC           | 1000        | NY        | 20      | 03-10-2021 | 03-15-2021  --Old Record
ABC           | 1000        | Dalas     | 30      | 03-15-2021 | 12/31/9999  --New Record

Is there any way I can achieve this without triggers?
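One trigger-free sketch, assuming a table named business_city and page items :P1_BUSINESS_ID and :P1_START_DATE (all placeholder names), is a page process, or the PL/SQL action of a dynamic action, that closes the currently open record before the new row is inserted:

```sql
-- Hypothetical page process: close the open record (end_date = 31-DEC-9999)
-- for this business before inserting the new one.
update business_city
set    end_date    = :P1_START_DATE
where  business_id = :P1_BUSINESS_ID
and    end_date    = date '9999-12-31';
```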
https://docs.oracle.com/en/database/oracle/oracle-database/21/sqpug/VARIABLE.html#GUID-B4A0DAA3-B6E0-42F7-89B0-EF9C41F02FA3 I read the following at the above link. The following example illustrates assigning a value to a variable for input binding: <code>VARIABLE tmp_var VARCHAR2(10)=Smith</code> The following example illustrates an alternate method to achieve the same result as the previous example: <code>VARIABLE tmp_var VARCHAR2(10)
VARIABLE tmp_var=Smith
EXECUTE DBMS_OUTPUT.PUT_LINE(:tmp_var)</code> My problem: when I tried to run the above commands to create these variables in sqlplus.exe, Oracle did not create them. I am a newbie; I even tried to run these commands in Oracle Live SQL but couldn't succeed. I was just executing them as the Oracle documentation instructed. Could you please guide me on what I am doing wrong? Regards, Anna
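For comparison, a sketch of the long-standing way to set and print a bind variable, which works in older SQL*Plus clients too (my assumption being that the VARIABLE name=value assignment syntax shown in the 21c documentation is only understood by newer SQL*Plus releases, which could explain an older sqlplus.exe rejecting it):

```sql
-- Classic SQL*Plus bind-variable usage:
VARIABLE tmp_var VARCHAR2(10)
EXECUTE :tmp_var := 'Smith'
PRINT tmp_var
```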
Hi team, I have an Oracle APEX 19.2 application. My app is supposed to be embedded in my clients' web sites as an iframe. This works very badly, since Safari doesn't accept third-party cookies (the cookies of my app are considered third-party cookies), and Chrome will stop accepting third-party cookies as of next year as well. So my app keeps showing the error: your session has expired. My app's pages are public and don't require any authentication scheme. I was wondering if it's possible to make an APEX app that doesn't require any authentication work without cookies. Thanks. Cheers,
Dear AskTom followers, let's consider the following DWH scenario. I want to use expdp/impdp as the method for populating the staging area in a DWH. Tables in the staging area have the same structure as the business tables, except for a column int_date date, where we record the date for which the data is valid. So, as a simple example, let's use the schemas hr and hr_stage. <code>
conn hr_stage

hr_stage@PDB1> create table int_employees
  2  as select * from hr.employees
  3  where 1=0;

Table created.

hr_stage@PDB1> alter table int_employees add (int_date date);

Table altered.

hr_stage@PDB1> create or replace
  2  package pkg_date
  3  as
  4    function f_int_date (p_int_date in date) return date;
  5  end;
  6  /

Package created.

hr_stage@PDB1> create or replace
  2  package body pkg_date as
  3    function f_int_date (p_int_date in date) return date as
  4    begin
  5      return p_int_date;
  6    end;
  7  end;
  8  /

Package body created.

-- now, conn as hr
hr@PDB1> host expdp hr/hr@pdb1 directory=DIR_HOME dumpfile=imp_cli.dmp TABLES=employees CONTENT=DATA_ONLY

Export: Release 184.108.40.206.0 - Production on Wed Mar 17 15:58:53 2021
Version 220.127.116.11.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 18.104.22.168.0 - Production
Starting "HR"."SYS_EXPORT_TABLE_01":  hr/********@pdb1 directory=DIR_HOME dumpfile=imp_cli.dmp TABLES=employees CONTENT=DATA_ONLY
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . exported "HR"."EMPLOYEES"                 17.08 KB     107 rows
Master table "HR"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for HR.SYS_EXPORT_TABLE_01 is:
  /home/oracle/imp_cli.dmp
Job "HR"."SYS_EXPORT_TABLE_01" successfully completed at Wed Mar 17 15:59:02 2021 elapsed 0 00:00:08

-- So table hr.employees is exported.
Now, let's connect as hr_stage and try to import table employees into the table hr_stage.int_employees.

-- We want to populate int_date as sysdate-1, using hr.pkg_date.f_int_date(sysdate-1)
-- As the documentation says, this is possible using the REMAP_DATA parameter with pkg.function

conn hr_stage

hr_stage@PDB1> host impdp hr/hr@pdb1 directory=DIR_HOME dumpfile=imp_cli.dmp remap_schema=hr:hr_stage REMAP_TABLE=hr.employees:hr_stage.int_employees CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=APPEND remap_data=hr_stage.int_employees.int_date:hr_stage.pkg_date.f_int_date(sysdate-1)

Import: Release 22.214.171.124.0 - Production on Wed Mar 17 16:01:53 2021
Version 126.96.36.199.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 188.8.131.52.0 - Production

ORA-39001: invalid argument value
ORA-39232: invalid remap function: HR.PKG_DATE.F_INT_DATE(SYSDATE-1)</code> It is not clear to me what is wrong. Thanks, Dusan
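In case it helps frame the question: as far as I can tell, REMAP_DATA accepts only a schema.package.function name, not a function call with arguments; Data Pump invokes the function itself, passing each column value as its single input. A sketch of a wrapper that bakes the desired value into the function body instead (the names follow the example above; the hard-coded "yesterday" offset is an assumption):

```sql
-- Sketch: REMAP_DATA cannot pass SYSDATE-1 as an argument, so return
-- the desired value from the function body instead.
create or replace package body pkg_date as
  function f_int_date (p_int_date in date) return date as
  begin
    return trunc(sysdate) - 1;   -- ignore the incoming value, return yesterday
  end;
end;
/
-- then:  remap_data=hr_stage.int_employees.int_date:hr_stage.pkg_date.f_int_date
```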
When playing with the well-known PivotImpl and my own derivative of it, I discovered strange behavior with <b>ODCITableDescribe</b> and <b>ODCITableStart</b> when the query contains a FLOAT column: while the <b>FLOAT</b> is (correctly) detected as <b>NUMBER</b> in the <b>ODCITableDescribe</b> implementation of PivotImpl (the <b>CASE desc_tab(i).col_type</b> returns 2), the <b>CASE type_code</b> in <b>ODCITableStart</b> returns 4! (And 4 is not documented anywhere, as far as I was able to search in the doc.) And of course the outside call fails with ORA-06595: CASE not found while executing CASE statement. When looking at the generated TYPE "ST000..." for the OBJECT, it contains a <b>FLOAT(126)</b>: how is that possible if the <b>CASE</b> in <b>ODCITableDescribe</b> defines it as <b>DBMS_TYPES.TYPECODE_NUMBER</b>? One could think of just adding a "<b>WHEN 4</b>" to the various <b>CASE</b> statements in Describe, Prepare, Fetch... Well, first, in Describe it never returns 4 but 2 for NUMBER, of which FLOAT is a subtype. Then, in Prepare, if you add a <b>WHEN 4</b> and call <b>dbms_sql.define_column</b> with a "<b>CAST(NULL as NUMBER)</b>", it passes this step, but you still get a problem in Fetch: if in the "<b>WHEN 4</b>" branch you try an <b>outset.setnumber</b>, another error is raised: ORA-22626, ORA-06512 "Type Mismatch while constructing or accessing OCIAnyData. Cause: Type supplied is not matching the type of the AnyData". So short of rewriting the queries (views, ...) to CAST every FLOAT encountered AS NUMBER (I don't control the data model, which is generated by a 4GL...), I haven't found any solution yet. Any idea?
Hello, I want to write a delete trigger. The requirement is: 1) There is a delete statement for a single record that does not exist: <code>Delete from t1 where pk = 'myuser'</code> 2) Convert the delete to use the uppercase value of the field old.pk: <code>Delete from t1 where pk = 'MYUSER'</code> The problem is that a row-level delete trigger will not fire, as there is no row for myuser.
I have a database and I want to print the following:
The last 10 rows
The first 10 rows
The largest zipcodes number
The smallest zipcodes number
This is the code I used: <code>
import pymysql

db = pymysql.connect(host="192.168.0.180", port=4000, user="maxuser", passwd="maxpwd")
cursor = db.cursor()
cursor.execute("SELECT * FROM zipcodes_one.zipcodes_one where <= MAX_ROWS")
results = cursor.fetchall()
for result in results:
    print(result)
</code> Please help. Thank you
Hi Tom, thanks in advance. I am trying to replace each naked decimal point with '0.'. Here is an example. String: '.45ML-.91ML-25MCG/.5ML-.9ML(.3ML)-25.5ML or .45' Every occurrence of a naked decimal point should be replaced with '0.', resulting in '0.45ML-0.91ML-25MCG/0.5ML-0.9ML(0.3ML)-25.5ML or 0.45'. Please note that 25.5 in the string is not a naked decimal and remains as is. I tried to achieve this using the replace function, but am not totally confident in the solution. <code>
with str_rec as (
  SELECT '.45ML-.91ML-25MCG/.5ML-.9ML(.3ML)-25.5ML or .45' str from dual
)
select case
         when substr(str, 1, 1) = '.' then
           regexp_replace(
             replace(replace(replace(replace(replace(str,
               ' .', ' 0.'),
               '/.', '/0.'),
               '\.', '\0.'),
               '(.', '(0.'),
               '-.', '-0.'),
             '[.]', '0.', 1, 1)
         else
           replace(replace(replace(replace(replace(str,
             ' .', ' 0.'),
             '/.', '/0.'),
             '\.', '\0.'),
             '(.', '(0.'),
             '-.', '-0.')
       end,
       regexp_count(str, '[.]', 1, 'i')
from   str_rec;
</code> Can we achieve this using regexp_replace, or is there a better way to do this? Thanks
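A single regexp_replace can handle this. A sketch, under the assumption that a "naked" decimal point means one not preceded by a digit: the group keeps the matched start-of-string or non-digit character, and '0.' is inserted after it (Oracle reads the replacement '\10.' as back-reference \1 followed by a literal '0'):

```sql
with str_rec as (
  select '.45ML-.91ML-25MCG/.5ML-.9ML(.3ML)-25.5ML or .45' str from dual
)
select regexp_replace(str, '(^|[^0-9])\.', '\10.') as fixed
from   str_rec;
-- expected: 0.45ML-0.91ML-25MCG/0.5ML-0.9ML(0.3ML)-25.5ML or 0.45
```

The dot in 25.5 is untouched because it is preceded by a digit, which the pattern excludes.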