
SQL Developer saves the day



So you get a call at the weekend.....

"we need you to run some data update against our database to complete a migration task"

This is the final task in a very long process and one that has taken months to verify all the steps and complete.

Problem 1 is that this is SQL Server (and I'm no SQL Server expert)

(before you oracle guys stop reading - don't, this is useful for you too)

Problem 2 - due to a laptop rebuild a week ago I'm missing almost all of my software

Problem 3 - due to "security" I don't have admin rights on my laptop to install

Problem 4 - this is a managed database to which we have no direct server access

So the pressure is on - this update is needed "now" and I don't have any client available, no server access, and no means of getting a client installed..... or do I?

To be able to "save the day" I need the following tools:

sqldeveloper - http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html
jdk 8 - http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
7-zip portable - http://portableapps.com/apps/utilities/7-zip_portable
jtds - http://sourceforge.net/projects/jtds/files/

And the steps are as here:

1) unzip sqldeveloper to anywhere you like (somewhere that doesn't need admin rights...)
2) 'install' 7-Zip Portable to wherever you like (no admin rights needed here either)
3) using the 7-Zip program, follow the excellent blog note here http://www.brucalipto.org/java/how-to-create-a-portable-jdk-1-dot-7-on-windows/ (I found from the GUI you have to use 7-Zip -> Open archive to view what's inside the .exe file) - also read the comments at the end for a shortcut for the later steps, and see the rough command sketch after this list
4) follow my own previous link on installing the jtds 'plugin' for SQL Developer (making sure to use the newer jtds version as my blog is quite old now): http://dbaharrison.blogspot.de/2014/11/sql-developer-and-sql-server.html
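For what it's worth, the 7-Zip part of step 3 boiled down to something like the commands below - a rough sketch only, since the folder names inside the installer vary by JDK build (the .rsrc path and cab file number here are from the linked note, not verified against every release, and jdk-8uXX is a placeholder):

rem open the JDK installer with 7-Zip and extract its contents
7z.exe x jdk-8uXX-windows-x64.exe -ojdk_extract
rem the cab holding tools.zip typically sits under .rsrc\1033\JAVA_CAB10
extrac32 jdk_extract\.rsrc\1033\JAVA_CAB10\111
7z.exe x tools.zip -oC:\portable\jdk8
rem expand the .pack files into jars using the unpack200 shipped with the JDK
cd /d C:\portable\jdk8
for /r %f in (*.pack) do bin\unpack200.exe -r "%f" "%~dpnf.jar"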

Once you've done all of that you have a fully working client that lets you work with MSSQL (or indeed Oracle), installed without any need for admin rights.

Here is a screenshot showing SQL Developer working with MSSQL:



This got me out of a hole - maybe it does the same for someone else

Getting exp to include/exclude like datapump?



Well, the holiday is now over (hence the lack of blog posts for a while) and it's back to the normal day-to-day work.

An interesting post though to shake off the post-holiday blues (inspired by an Oracle forums question).

The basic problem was that they wanted to take a full schema export, but the dictionary table containing the sequences (SEQ$) had a block corruption and the job kept failing.

How do I therefore extract the schema and bypass the sequence issue?

Easy enough, you think - just use exclude=sequence and the job's done.

However this is Oracle 8 (yes, people still use these versions....) and you only have exp, which cannot get clever about how you filter what is extracted..... or can it?

I'm doing my demo here on 11.2 as I personally don't have v8 any more, but the principle is exactly the same in v8.

First up, let's create a demo schema with a sequence to use for testing:


SYS@DEMODB>create user demo identified by demo;

User created.

SYS@DEMODB>grant connect,resource,create sequence to demo;

Grant succeeded.

SYS@DEMODB>conn demo/demo
Connected.
DEMO@DEMODB>create sequence demoseq;

Sequence created.

DEMO@DEMODB>

So that's now ready to go - if I extract a schema-level export now it happily extracts the sequence:

 exp demo/demo

Export: Release 11.2.0.3.0 - Production on Thu Sep 3 08:08:06 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Enter array fetch buffer size: 4096 >

Export file: expdat.dmp >

(2)U(sers), or (3)T(ables): (2)U >

Export grants (yes/no): yes >

Export table data (yes/no): yes >

Compress extents (yes/no): yes >

Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user DEMO
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user DEMO
About to export DEMO's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export DEMO's tables via Conventional Path ...
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

If I now use show=y to list the dumpfile contents we can see the schema DDL:

 imp demo/demo show=y

Import: Release 11.2.0.3.0 - Production on Thu Sep 3 08:09:02 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option

Export file created by EXPORT:V11.02.00 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
. importing DEMO's objects into DEMO
 "BEGIN  "
 "sys.dbms_logrep_imp.instantiate_schema(schema_name=>SYS_CONTEXT('USERENV','"
 "CURRENT_SCHEMA'), export_db_name=>'DEMODB', inst_scn=>'8944079698514');"
 "COMMIT; END;"
 "CREATE SEQUENCE "DEMOSEQ" MINVALUE 1 MAXVALUE 9999999999999999999999999999 "
 "INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER NOCYCLE"
Import terminated successfully without warnings.

So how do we not extract this? Well, this is where the trick comes in. All of the objects that exp extracts are listed in views called EXUxxxxx - these are just views over the data dictionary that give exp the information it needs.

We can manipulate these to filter what it does and mimic what datapump can do natively. In fact by modifying these views directly we can do pretty much everything datapump does..... (though of course messing around like this is not advisable...)

So for sequences I look in the $ORACLE_HOME/rdbms/admin directory and open up the catexp.sql file to check what the sequence view is - it turns out it's EXU8SEQ.

The current definition is this

CREATE OR REPLACE VIEW exu8seq (
                owner, ownerid, name, objid, curval, minval, maxval, incr,
                cache, cycle, order$, audt) AS
        SELECT  u.name, u.user#, o.name, o.obj#, s.highwater, s.minvalue,
                s.maxvalue, s.increment$, s.cache, s.cycle#, s.order$, s.audit$
        FROM    sys.exu81obj o, sys.user$ u, sys.seq$ s
        WHERE   o.obj# = s.obj# AND
                o.owner# = u.user#
/

Let's add a line to that so no sequences are present in the view

CREATE OR REPLACE VIEW exu8seq (
                owner, ownerid, name, objid, curval, minval, maxval, incr,
                cache, cycle, order$, audt) AS
        SELECT  u.name, u.user#, o.name, o.obj#, s.highwater, s.minvalue,
                s.maxvalue, s.increment$, s.cache, s.cycle#, s.order$, s.audit$
        FROM    sys.exu81obj o, sys.user$ u, sys.seq$ s
        WHERE   o.obj# = s.obj# AND
                o.owner# = u.user#
        AND     1=0 -- added to bypass sequences in export due to seq$ corruption
/

We now run that in

SYS@DEMODB>CREATE OR REPLACE VIEW exu8seq (
  2                  owner, ownerid, name, objid, curval, minval, maxval, incr,
  3                  cache, cycle, order$, audt) AS
  4          SELECT  u.name, u.user#, o.name, o.obj#, s.highwater, s.minvalue,
  5                  s.maxvalue, s.increment$, s.cache, s.cycle#, s.order$, s.audit$
  6          FROM    sys.exu81obj o, sys.user$ u, sys.seq$ s
  7          WHERE   o.obj# = s.obj# AND
  8                  o.owner# = u.user#
  9          AND     1=0 -- added to bypass sequences in export due to seq$ corruption
 10  /

View created.

And now we repeat the exp/imp process

 exp demo/demo

Export: Release 11.2.0.3.0 - Production on Thu Sep 3 08:10:51 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Enter array fetch buffer size: 4096 >

Export file: expdat.dmp >

(2)U(sers), or (3)T(ables): (2)U >

Export grants (yes/no): yes >

Export table data (yes/no): yes >

Compress extents (yes/no): yes >

Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user DEMO
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user DEMO
About to export DEMO's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export DEMO's tables via Conventional Path ...
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.


 imp demo/demo show=y

Import: Release 11.2.0.3.0 - Production on Thu Sep 3 08:11:19 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option

Export file created by EXPORT:V11.02.00 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
. importing DEMO's objects into DEMO
 "BEGIN  "
 "sys.dbms_logrep_imp.instantiate_schema(schema_name=>SYS_CONTEXT('USERENV','"
 "CURRENT_SCHEMA'), export_db_name=>'DEMODB', inst_scn=>'8944079698736');"
 "COMMIT; END;"
Import terminated successfully without warnings.

And there we go - the sequence DDL is now missing and we have excluded sequences from exp.
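One cleanup step worth spelling out (my addition, but an important one): once the corruption-dodging export is done, put the stock view definition back so future exports behave normally - either rerun the original CREATE OR REPLACE VIEW from above, or simply rerun the script the view came from as SYS:

-- restore the standard export views (recreates exu8seq as shipped)
SYS@DEMODB>@?/rdbms/admin/catexp.sql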

This could be a very useful technique for people on old versions (and to be honest I wish I'd written this post about 20 years ago......). Datapump, however, can do all of this for you natively and is far superior in other ways too.

This should help a few people though, I think.

windows and the secret /dev/null ?



This week I had something of a surprise in what's available from the Windows command prompt.....

Let me give you a quick bit of background.

We are doing an application migration from a very old to a very new version of a 3rd party piece of software. This migration involves running an executable which logs on to the database and does loads of dml and ddl to move up to the next version.

This utility spews out a huge amount of output to the screen that I just don't want to see, and I was pretty sure this was causing an immense slowdown of the whole process (total migration time 90 minutes). I contacted the vendor to see if this could be turned off and the answer was no.....

So, frustrated, I went away and started seeing if there was any other workaround to help with this.

A quick google revealed something that I initially thought was a mistake or some April fool.

Windows seemed to have the capability to do this style of syntax, according to a Microsoft TechNet article:

command > nul 2>&1 

What!? How long has that been possible - has there been some conspiracy to hide this, as it's so much like unix/linux......

Anyway, with this in mind I then ran my upgrade process like this:

command.exe arg1 arg2 2>screenoutput.txt

So basically instead of sending stderr (2) to the screen, send it all to the new text file screenoutput.txt (the vendor seemed to be sending all the output to stderr for some reason).

A rerun of the whole process with this change now takes 36 minutes..... I've saved nearly an hour just by not sending output to the screen.

A useful trick, and I can't believe I didn't know it was possible before.....

Windows even has /dev/null (well, it's called nul in the windows world) - so I could have said:

command.exe arg1 arg2 2>nul

and just discarded all of the information - however in this case I did want to keep it to check the output for genuine errors.
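To summarise the variants for anyone collecting them (command.exe and the filenames are placeholders):

rem discard stdout and stderr completely
command.exe arg1 > nul 2>&1
rem keep stdout on screen, capture stderr to a file
command.exe arg1 2> err.txt
rem capture both streams to one file
command.exe arg1 > both.txt 2>&1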

Switch off database network ACLs



This is a reminder for me as much as anything - the damn ACLs that control what traffic is allowed from 'inside' the database are a real pain, and sometimes it's nice just to switch the things off - here is the code block to do that:

begin
 -- create a new ACL granting 'connect' to everyone
 dbms_network_acl_admin.create_acl(
 acl =>         'norules.xml',
 description => 'everything allowed',
 principal =>   'PUBLIC',
 is_grant =>    true,
 privilege =>   'connect'
 );
 -- add 'resolve' so hostname lookups work too
 DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE(
 acl =>         'norules.xml',
 principal =>   'PUBLIC',
 is_grant  =>   true,
 privilege =>   'resolve'
 );
 -- assign the ACL to every host
 dbms_network_acl_admin.assign_acl(
 acl =>         'norules.xml',
 host =>        '*'
 );
end;
/

commit;
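To check what's now in place, the ACL dictionary views are the place to look, e.g.:

select host, lower_port, upper_port, acl from dba_network_acls;

select acl, principal, privilege, is_grant from dba_network_acl_privileges;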

Not recommended of course, but sometimes it's nice to just quickly do this to get you moving rather than messing about wondering about the specifics of what is being blocked.

Give it a REST Apex



REST or "Representational state transfer" to give it it's full name has been around for a while, i hear lots of people talking about it but to be frank i don't really have a clue what it's really about.

I tried to find a simple explanation by searching the net, but couldn't really find one that just explained it in simple terms, so I gave up.

Even the 'simple' wikipedia link didn't seem to explain it well.

Anyway, for my purposes I don't really need to know all the ins and outs of how it functions - I just need to know how to call it to return some data.

I want to do this via an Apex application, as we are building a reporting dashboard that needs to fetch data from another system. This system has a database (MySQL) backend to which firewall access is quite restrictive; it also has a data model that is not that simple to work with, so a direct database link (using hsodbc for example) would be tricky.

However the vendor of the application helpfully provides a 'REST' interface into the application and we can use that to fetch data.

Before I even contemplate doing this in Apex though, I want to make sure the REST interface works - to do this I make use of the wget utility for linux, which can make the appropriate calls. Note: simply browsing to the addresses does not work - the thing calling the REST interface has to be REST capable, and a plain browser is not (though plugins can be obtained to do this kind of thing).

I won't bore you with the details of how I arrived at the command line - but the end result is this:

wget  --keep-session-cookies --save-cookies cookies.txt  --post-data 'user=xxxx&pass=xxxx' http://servername/REST/1.0/search/ticket?query=queue=\'"DB Administrator"\'

This command, when run, returns the data I want (basically a list of tickets from a queue) - the actual details of what this particular call does are not that relevant, as every REST interface is going to be unique to its application - the end result is that many rows of colon-separated data are returned from this call.
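(If you prefer curl, the equivalent call looks roughly like this - a sketch I haven't verified here, though the cookie handling should be comparable:

curl -c cookies.txt --data "user=xxxx&pass=xxxx" "http://servername/REST/1.0/search/ticket?query=queue='DB Administrator'"

)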

I now know the REST interface works; all I have to do now is call this from Apex - so how do I do that? Well, it's relatively easy provided you know a couple of tricks.

The first thing to do is create an application which will contain the call. Do this through the wizard as normal - a minimal selection will do: choose database, click next, give it a name and link it with a pre-existing schema (or create a new one), then click create application (skip all the other bits for now).

Once that's done and you are on the main application screen do the following (and bear with it as there are a few screenshots here):

1) click on shared components



2) Choose web service references


3) Choose REST

4) Fill in the appropriate REST URL details and give it a name. Note here I have to substitute a space character with %20, and also note I have to choose the POST method for this particular REST call.

5) Now I specify the two arguments to be passed in - in this case username and password - these correspond to the 2 parameters I passed earlier in wget
6) Now I define what the returned data will look like - in my case it's text, with fields separated by : and with the default newline character delimiting a 'row'

7) Now I click the test button and fill in the username/password - and as you can see at the bottom here, some data is returned - so far so good.


8) Now we click create and finish up, which gives us this screen where we can straight away create a form and report based on the rest 'thing' we just created.

9) First we choose the reference, which is the name we just created previously - everything else is default
10) All defaults on the next page
11) Then we define the form elements that are passed through to the REST call (i.e. username/password in this specific case)

12) Give the collection that holds the results a name and define which of the returned elements we are interested in - in this case everything.


13) OK - it seems happy - now we click run page
14) We get a form with fields to populate with the elements to be passed to the REST POST call

15) and.... ta-daah - it works!


We still need to make all this look nice etc. - but the basic connectivity (which is the hard bit, I think) is now in place. Actually it's surprisingly easy once you figure out the various elements involved.

One issue I did have that is maybe worth mentioning: when the database is started it picks up a couple of shell variables that define proxy servers - I found that our proxy did not support REST calls and I had to connect directly.

This meant I had to explicitly unset the two variables below before starting the database (I'm running Apex directly from the DB, i.e. using the plsql gateway).

[oracle@server]::[~]# unset http_proxy
[oracle@server]::[~]# unset https_proxy

Give it a REST PL/SQL



In a rapid follow-up to the wildly popular post 'give it a REST Apex', I decided to have a go at implementing the same concept in plsql. In fact I wanted to be able to basically run a select statement that could return me the results from the REST call as a series of rows to then do something with - this would be more flexible than only having it available in Apex.

So I went about investigating how this could be done (by investigating I mean trawling the internet for code I could 'reuse', of course).

And I couldn't find anything that did exactly what I wanted; in fact it was difficult to actually find the right search terms to even get me in the right ballpark - there is lots of stuff out there about turning Oracle into a REST-enabled database, but I just wanted Oracle to call a REST service that existed elsewhere.

I ended up having to write my own :-(

It was a little fiddly and used a few different tricks and techniques, but I think it's really worth sharing, as with quite a short code block I can do what I want - honest.....

At a high level I've defined a couple of types (I know, I know, no one really likes these, but they are essential to make the pipelined function work), then a pipelined table function to return data, and then just a SQL statement to call the pipelined table function.

The function uses an apex function to actually make the REST call, which made things quite easy on that front (although some of the inputs were a little fiddly, as the REST url contained quotes and spaces).

Anyway, enough waffle - I'm sure you just want to see the code - so here it is:

1) First the types - a 'row' definition, and then a 'table' definition based on that

CREATE TYPE rt_row AS OBJECT (
  ticket VARCHAR2(4000)
);
/

CREATE TYPE rt_tab IS TABLE OF rt_row;
/

2) Then we create the pipelined table function

create or replace function rt return rt_tab
  pipelined as
  rtresults clob;
  l_pos     PLS_INTEGER := 1;
  l_idx     PLS_INTEGER;
  l_delim   varchar2(1) := CHR(10);

begin
  -- set the content type header expected by the REST interface
  apex_web_service.g_request_headers(1).name := 'Content-Type';
  apex_web_service.g_request_headers(1).Value := 'application/x-www-form-urlencoded; charset=utf-8';
  -- make the POST call - the whole response comes back as one clob
  rtresults := apex_web_service.make_rest_request(p_url         => 'http://servername/REST/1.0/search/ticket?query=queue=''DB%20Administrator''',
                                                  p_http_method => 'POST',
                                                  p_parm_name   => apex_util.string_to_table('user:pass'),
                                                  p_parm_value  => apex_util.string_to_table('username:password'));
  -- split the clob on newlines and pipe each line out as a row
  LOOP
    l_idx := INSTR(rtresults, l_delim, l_pos);
    IF l_idx > 0 THEN
      PIPE ROW(rt_row(substr(rtresults, l_pos, l_idx - l_pos)));
      l_pos := l_idx + LENGTH(l_delim);
    ELSE
      -- no more delimiters - pipe the final fragment and stop
      PIPE ROW(rt_row(substr(rtresults, l_pos)));
      RETURN;
    END IF;
  END LOOP;
  RETURN;
end;
/

3) So everything is in place - now we just need to select from the function:


select * from table(RT);

15782: UAT42 Refresh
15785: RP creation
etc
etc

And there we go - calling a remote REST interface from a select statement - neat, huh?

(Oh, and a couple of notes on things that might stop this working - make sure your proxy allows this kind of access if you have one, and make sure the database ACLs allow the request through.)



Preventing cloud control alerts for controlled restarts



We have hooked cloud control into the Request Tracker ticketing system (and by hooked in I mean that cloud control just sends an email notification which is then picked up by Request Tracker to generate a ticket - nice and simple).

This means we get tickets generated for every alert we configure in cloud control (it's also the same system that users can raise service requests in - again, they just send an email in and it generates a ticket) - this way we get tickets for every piece of work we do for management reporting, and the overhead on people to get these tickets created is minimal - this is all great.

However....

The nature of development and test work means that databases are very often restarted - to enable some new feature, fix a memory leak, flashback etc. - all the usual stuff that happens day to day. This is a problem though - every time the database is restarted (even if you do it really quickly) cloud control notices and sends an alert - which generates an email - which generates a ticket. This we don't want - the ticket is a consequence of some activity (that itself was a ticket) and we shouldn't be getting an extra ticket for that. The extra tickets skew the reporting and, as we have subcontracted out some of the support, can have cost implications as we are billed per ticket to some degree.

So how do we resolve this? I want a controlled restart of the database to not generate a ticket.

Our initial thoughts were to set a blackout every time a restart was required - the problem with this is you have to remember to do it, and for most people their life is spent in a sqlplus session on the server and they've worked a certain way for many years - changing now to have to run an emctl command before (and after) the restart just wasn't working.
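(For reference, the blackout itself is only a pair of agent-side emctl commands - the blackout name and target below are made up for illustration:

emctl start blackout pre_restart_work MYDB:oracle_database
(restart the database)
emctl stop blackout pre_restart_work

- but even two extra commands proved too much of a change of habit.)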

So what could we do here?

After lots more thought we came up with a solution - which, after the fact, actually seemed pretty obvious; we just hadn't considered it up front. All we had to do was add a delay in the alerting via an option in cloud control - see the screenshot below.


So here we can say only send the alert if the database has been down for 5 minutes - in almost all of our cases the restart only takes a few seconds, so the alert will never happen. This of course means that a genuine database-down alert is delayed for 5 minutes - those are so infrequent though that for us it's not really an issue - server crashes are far more common (for us at least) and those would be alerted straight away.

In the end a very simple fix.....

Arise from the flashback ashes......



If you've read some of my older stories on this blog you'll know I've had some interesting battles with flashback over the past couple of years... don't get me wrong, I think it's (arguably) the best feature introduced over the past 10 years (I'm talking about all the different flavours of flashback with that comment).

Today I hit a whole series of random errors, discovered a minor new (well, new to me) syntax for a very old command, and managed for a second time with flashback to bring a database back from the dead.

Read on for the saga....

The series of events started with me just trying to do a simple flashback database - so shutdown, startup mount, and then this:

SQL> flashback database to restore point DRYRUN1;

Flashback complete.

Normal so far - but then I tried this to bring the db open:

SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-38760: This database instance failed to turn on flashback database

Not so good.

Let's check the alert log:

*************************************************************
Unable to allocate flashback log of 43056 blocks from
current recovery area of size 322122547200 bytes.
Recovery Writer (RVWR) is stuck until more space
is available in the recovery area.
Unable to write Flashback database log data because the
recovery area is full, presence of a guaranteed
restore point and no reusable flashback logs.
Cannot open flashback thread due to an error when trying
to switch into a new flashback log.
Database mounted in Exclusive Mode
Lost write protection disabled
Ping without log force is disabled.
Completed: ALTER DATABASE   MOUNT
2015-10-06 09:43:29.018000 +01:00
 flashback database to restore point DRYRUN1
ORA-16433 signalled during:  flashback database to restore point DRYRUN1...
2015-10-06 09:43:42.109000 +01:00
alter database open
Errors in file /oracle/admin/DBNAME/diag/rdbms/DBNAME/DBNAME/trace/DBNAME_ora_3274.trc:
ORA-38760: This database instance failed to turn on flashback database
ORA-38760 signalled during: alter database open...

If we scroll a little bit further back we see this

*************************************************************
Unable to allocate flashback log of 43056 blocks from
current recovery area of size 322122547200 bytes.
Recovery Writer (RVWR) is stuck until more space
is available in the recovery area.
Unable to write Flashback database log data because the
recovery area is full, presence of a guaranteed
restore point and no reusable flashback logs.
2015-10-06 09:33:17.114000 +01:00
Errors in file /oracle/admin/DBNAME/diag/rdbms/DBNAME/DBNAME/trace/DBNAME_arc1_17003.trc:
ORA-19815: WARNING: db_recovery_file_dest_size of 322122547200 bytes is 100.00% used, and has 0 remaining bytes available.
************************************************************************

OK - so it's just out of space - not a big problem to fix...?

Let's add some more and try again:

SQL> alter system set db_recovery_file_dest_size=350G;

System altered.

SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01139: RESETLOGS option only valid after an incomplete database recovery

Hmm - odd. Let's try without resetlogs:

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-38760: This database instance failed to turn on flashback database

right.......

Let's try turning flashback off completely:

SQL> alter database flashback off;

Database altered.

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [kcvcrv_fb_inc_mismatch], [2127],
[1894563532], [892373920], [2124], [645743042], [889783889], [], [], [], [], []


This is looking pretty bad - let's ask MOS (metalink) for help - and we actually get what looks like a perfect match: https://support.oracle.com/epmos/faces/DocumentDisplay?id=342160.1

Let's follow the steps in that then...

Well, step 1 was to turn off flashback - so that's already done. The next step is to backup the controlfile to trace (with the noresetlogs option on the end - which was a new one on me - I'd been manually deleting all the extra lines for years!):

SQL> alter database backup controlfile to trace noresetlogs;
alter database backup controlfile to trace noresetlogs
*
ERROR at line 1:
ORA-16433: The database or pluggable database must be opened in read/write
mode.

Now that's not in the MOS notes......

What to do then - we seem to be running out of options.....

I decided to drop the restore point, as we seemed to have flashed back OK earlier and just not been able to open the db - maybe getting rid of the flashback logs would help?

SQL> drop restore point dryrun1;

Restore point dropped.

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00600: internal error code, arguments: [kcfckde-mismatch-rlc], [892373920],
[889783889], [], [], [], [], [], [], [], [], []
Process ID: 10753
Session ID: 488 Serial number: 21

Well, it's changed the error but we're still nowhere.....

What options do I have left? The MOS note seems to imply that recreating the controlfile will fix this, and I can still do that even without the trace file - I can just handcraft it. Not something you do every day, but certainly possible - let's give that a try.....

First I track down the actual binary controlfile and run these two commands to generate a list of the files the create controlfile command needs to contain:

strings o1_mf_bts8jbgh_.ctl |grep .dbf | sort -u

gives me the datafiles, and this gives me the logfiles:

 strings o1_mf_bts8jbgh_.ctl |grep onlinelog | sort -u

I then do a backup controlfile to trace on another database just to give me a template to work with - I then just paste in the datafile/logfile/db names and I'm ready to manually recreate the controlfile.
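(On the template database the one-liner was, roughly, the following - the output path is my choice of example; the 'as' clause just saves hunting through the trace directory:

SQL> alter database backup controlfile to trace as '/tmp/cf_template.sql' noresetlogs;

)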

Here is a selection of errors I got while getting the content incorrect (included just to show that the command can be quite forgiving..... most of these were completely new to me):

ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-00357: too many members specified for log file, the maximum is 2
ORA-01517: log member:

ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01224: group number in header 3 does not match GROUP 1
ORA-01517: log member:

ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01163: SIZE clause indicates 204800 (blocks), but should match header
2097152

When I finally got everything correct I ran this:

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "DBNAME" NORESETLOGS  ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 60
    MAXINSTANCES 1
    MAXLOGHISTORY 4672
LOGFILE
  GROUP 3 (
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/onlinelog/o1_mf_3_btsn335r_.log',
'/oracle/DBNAME/oradata/DBNAME/recovery_area/DBNAME/onlinelog/o1_mf_3_btsn399g_.log'
  ) SIZE 1G BLOCKSIZE 512,
  GROUP 4 (
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/onlinelog/o1_mf_4_btsn4fxg_.log',
'/oracle/DBNAME/oradata/DBNAME/recovery_area/DBNAME/onlinelog/o1_mf_4_btsn4mxo_.log'
  ) SIZE 1G BLOCKSIZE 512,
  GROUP 5 (
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/onlinelog/o1_mf_5_btsn4w5r_.log',
'/oracle/DBNAME/oradata/DBNAME/recovery_area/DBNAME/onlinelog/o1_mf_5_btsn5274_.log'
  ) SIZE 1G BLOCKSIZE 512
DATAFILE
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_aif_btsbh902_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_aif_data_btsbk9fo_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_aif_defa_btsbkm3c_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_align_btsbkoj4_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_aligne_btsbh1to_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_common_btsbkztq_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_common_d_btsbkn9b_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_common_i_c02b4r3m_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_crd_btsbknw7_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_crd_data_btsbkynw_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_crd_defa_btsbkwtf_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_data01_btsbkw86_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_data01_btsbl9p1_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_data02_btsbkz7w_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_data03_btsbk8t3_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_eis_dba_btsbl0fz_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_eisdba_btsdqg8r_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_index01_btsbky1r_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_netaarch_btsbkxfv_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_sysaux_bts8jhts_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_system_bts8jf72_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_system_bttjmdn7_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_sys_undo_bts8jk15_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_tools01_btsbkqw6_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_users_btsbkppn_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_zainet_btsbk874_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_zn_extra_btsbkb14_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_zn_extra_btsbkmp7_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_zn_extra_btsbkp3p_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_zn_extra_btsbkq9n_.dbf',
'/oracle/DBNAME/oradata/DBNAME/oradata/DBNAME/datafile/o1_mf_zn_extra_btsbkvn7_.dbf'
CHARACTER SET WE8ISO8859P1
;

Control file created.

then this

SQL> recover database;
Media recovery complete.
SQL> alter database open;

Database altered.

And it's only gone and worked - back from the dead yet again.

The lesson here is never give up, I guess (well, that and check your FRA isn't full before you start messing about.....).




Oracle, HSODBC and old SQL Server versions



A particularly dry title, but I think it's worth posting this as it had me stuck for a couple of days this week. As some of you may have read, I wrote up a post about how to connect Oracle to SQL Server through ODBC for free using the FreeTDS drivers - see here.

This works great and I wanted to reuse that functionality again.

This time I wanted to connect 12.1.0.2 to SQL 2005, which is like connecting a Cray supercomputer to a typewriter...

Anyway, it should work fine (or so I thought) - I followed my own post (again feeling smug that this was so easy to now set up). After configuring everything I gave it a test, and the damn thing didn't work.

I got the useless generic error message ORA-28500 and nothing else - the real error didn't even appear.

So I activated tracing by adding this to the init file for the HS service:

HS_FDS_TRACE_LEVEL = Debug

Then i ran the process again and got this


Exiting hgopars, rc=28500 at 2015/10/12-14:12:14 with error ptr FILE:hgopars.c LINE:578 ID:SELECT list of size 0 is not valid


Again, not really that useful; however I did discover while trying this out that describe worked fine - it was only the select statements that were failing - so the link was kind of there....

After much trial and error (and to be honest, total guesswork) I discovered the fix.

I had to add

TDS_Version=7.2


in the odbc.ini file for the connection definition - this tells FreeTDS that the db I'm talking to is SQL 2005 and to behave accordingly (7.2 is the TDS protocol version that corresponds to SQL Server 2005). I think basically the ODBC call order must be slightly different between versions, and using the newer one against the older db was doing things in the wrong order.
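For completeness, the relevant DSN entry in odbc.ini ends up looking something like this (the DSN name, host and driver path are illustrative, not our real config):

[sqlserver2005]
Driver = /usr/local/lib/libtdsodbc.so
Server = sqlhost.example.com
Port = 1433
TDS_Version = 7.2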

There is a full list here of which setting you should use for each database version:



Windows has grep now too - where will it end?



A few weeks ago I posted my somewhat surprising discovery of a /dev/null for windows; as I've been having to do more work on windows recently, I've also had cause to try and use command-line pattern matching (i.e. look for a certain string in something).

I've known for a while there are extra tools you can install to let you do this - resource kits, cygwin and the like - but it turns out there is actually a native command in the DOS shell that yet again has escaped me. When this appeared I don't know, but I get the impression it's been around for years and yet again it's passed me by - maybe I need to pull myself away from a putty terminal once in a while....

Anyway, the command is called findstr.

findstr /? will show you all it can do - it has quite a lot of functionality and is actually a pleasant surprise to discover.

For example, to search for the string 143 in netstat output (looking for SQL Server ports here) I can simply do this:

c:\> netstat -an |findstr 143
  TCP    0.0.0.0:1433           0.0.0.0:0              LISTENING
  TCP    0.0.0.0:1435           0.0.0.0:0              LISTENING
  TCP    0.0.0.0:1436           0.0.0.0:0              LISTENING
  UDP    0.0.0.0:1434           *:*

Simple, eh - and available natively. A useful tool when running Oracle on windows - I'm sure I'll find out now that everyone who runs Oracle on windows has known about this for years.....
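A couple of extra flags worth knowing about (the file path here is just an example):

c:\> findstr /i /n "ora-" C:\oracle\alert_MYDB.log
c:\> findstr /r "ORA-[0-9]*" C:\oracle\alert_MYDB.log

The first is a case-insensitive search that also prints line numbers; the second treats the search string as a regular expression.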

Why does anyone do xml manipulation outside of the database?



If you've ever looked into the XML features available with Oracle it's a little mindblowing - there is so much stuff available for working with XML that it's very difficult to know where to start sometimes. This week one of my colleagues (Susan) has been doing a small proof of concept for turning some large XML files (60MB each) into 'rows' to be used for data analysis - this has turned out to be amazingly easy to do, but finding out how easy it was took a while.

I think it's worth sharing here - I knew this functionality was there, I'd just forgotten - and it's a very useful technique.

We're going to load the XML file from disk using sqlldr and then select directly from that XML as if it's a normal table - so here goes. The XML file looks something like this (just a sample of a few fields):

<?xml version="1.0" standalone="yes"?>
<NewDataSet>
  <OfferteOperatori>
    <PURPOSE_CD>BID</PURPOSE_CD>
    <TYPE_CD>REG</TYPE_CD>
    <STATUS_CD>ACC</STATUS_CD>
    <MARKET_CD>MGP</MARKET_CD>
    <UNIT_REFERENCE_NO>UC_DP0012_CNOR</UNIT_REFERENCE_NO>
    <INTERVAL_NO>2</INTERVAL_NO>
    <BID_OFFER_DATE_DT>20150819</BID_OFFER_DATE_DT>
    <TRANSACTION_REFERENCE_NO>994891806986993</TRANSACTION_REFERENCE_NO>
    <QUANTITY_NO>2.731</QUANTITY_NO>
    <AWARDED_QUANTITY_NO>2.731</AWARDED_QUANTITY_NO>
    <ENERGY_PRICE_NO>0.00</ENERGY_PRICE_NO>
    <MERIT_ORDER_NO>532</MERIT_ORDER_NO>
    <PARTIAL_QTY_ACCEPTED_IN>N</PARTIAL_QTY_ACCEPTED_IN>
    <ADJ_QUANTITY_NO>2.731</ADJ_QUANTITY_NO>
    <GRID_SUPPLY_POINT_NO>PSR_CNOR</GRID_SUPPLY_POINT_NO>
    <ZONE_CD>CNOR</ZONE_CD>
    <AWARDED_PRICE_NO>41.65</AWARDED_PRICE_NO>
    <OPERATORE>Bilateralista</OPERATORE>
    <SUBMITTED_DT>20150818113211953</SUBMITTED_DT>
    <BILATERAL_IN>true</BILATERAL_IN>
  </OfferteOperatori>
etc
etc
etc

First we create a simple table to hold the XML that sqlldr will load, along with a filename identifier:


CREATE TABLE MGPOffertePubbliche
(
   filename varchar2(120),
   xmldata XMLTYPE
)
XMLTYPE xmldata STORE AS CLOB;

Now we load that in using this controlfile

load data
infile *
replace
into table MGPOffertePubbliche
(
filename  char(100),
XMLDATA  lobfile(CONSTANT "/tmp/OffertePubbliche.xml") terminated by EOF
)
begindata
20150819MGPOffertePubbliche.xml


Here's the output from that

sqlldr user/xxxx control=OffertePubbliche.ctl log=OffertePubbliche.log

SQL*Loader: Release 11.2.0.4.0 - Production on Thu Oct 15 17:14:34 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Commit point reached - logical record count 1

So that's loaded in (which is quite neat in itself, I think).

Now comes the really clever bit - look at this SQL statement: the special function XMLTABLE and the syntax that goes with it allow us to pull XML values out directly!

select t.filename, x.*
from MGPOFFERTEPUBBLICHE t,
XMLTABLE('/NewDataSet/OfferteOperatori' PASSING t.XMLDATA
COLUMNS
PURPOSE_CD Varchar2(10) PATH 'PURPOSE_CD',
TYPE_CD Varchar2(10) PATH 'TYPE_CD',
STATUS_CD Varchar2(10) PATH 'STATUS_CD',
MARKET_CD Varchar2(10) PATH 'MARKET_CD',
UNIT_REFERENCE_NO Varchar2(15) PATH 'UNIT_REFERENCE_NO',
BID_OFFER_DATE Varchar2(15) PATH 'BID_OFFER_DATE_DT') x
where t.filename='20150819MGPOffertePubbliche.xml';

Which returns this

FILENAME                                 PURPOSE_CD TYPE_CD STATUS_CD MARKET_CD UNIT_REFERENCE_NO BID_OFFER_DATE
---------------------------------------- ---------- ------- --------- --------- ----------------- --------------
20150819MGPOffertePubbliche.xml          OFF        STND    REJ       MGP
20150819MGPOffertePubbliche.xml          OFF        STND    REJ       MGP
20150819MGPOffertePubbliche.xml          OFF        STND    REJ       MGP
Seriously neat - that is almost no code. Why do people mess about with middle-tier code doing XML work when the database makes it so easy?
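As an aside, if you only need a single value rather than a whole rowset, XMLQUERY/XMLCAST will do the job too - a quick illustrative query against the same table (a sketch, not something from the actual proof of concept):

select xmlcast(
         xmlquery('/NewDataSet/OfferteOperatori[1]/ZONE_CD/text()'
                  passing t.xmldata returning content) as varchar2(10)) zone
from MGPOffertePubbliche t;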

All it needs is a little knowledge and you can do complicated things very easily!

Scheming anyone?



Ever run the create schema statement in Oracle? No - I thought not. It's one of the least-used commands available, I would think - I've only seen it used once in all my years of doing this job.

The basic premise of it sounds quite useful though - you can create your whole schema in one command; any errors and the whole thing rolls back, as it's all within a single transaction. For creation of a base schema this is quite nice - you can keep running the script from scratch until it finishes 100% without error.

Let's do a quick demo of it (before I come back and tell you why nobody uses this... :-))

As a simple example - here is a new schema with a table, a view and a grant (note how the terminating semicolon only appears once at the very end):

CREATE SCHEMA AUTHORIZATION dummyuser
   CREATE TABLE demotab 
      (col1 VARCHAR2(10) ) 
   CREATE VIEW demoview 
      AS SELECT col1 from demotab WHERE col1 > 1 
   GRANT select ON demoview TO missinguser; 


If you run that as-is you get this:

ERROR at line 1:
ORA-02421: missing or invalid schema authorization identifier

Right.......

Now it turns out that the schema you are creating has to be associated with an existing 'user' - so let's create that:

create user dummyuser identified by dummyuser;
grant connect,resource,create view to dummyuser;

Now try again and we get

ERROR at line 1:
ORA-02421: missing or invalid schema authorization identifier

Again - let's check what the error code message actually tells us:

oerr ora 2421
02421, 00000, "missing or invalid schema authorization identifier"
// *Cause: the schema name is missing or is incorrect in an authorization
//         clause of a create schema statement.
// *Action: If the name is present, it must be the same as the current
//          schema.


OK - so we need to set current schema to that name - let's try that:

alter session set current_schema=dummyuser;

Session altered.

and we try again

ERROR at line 1:
ORA-02421: missing or invalid schema authorization identifier

Right..... let's try connecting as the user:

SYS@DB>conn dummyuser/dummyuser
Connected.
DUMMYUSER@DB>CREATE SCHEMA AUTHORIZATION dummyuser
  2     CREATE TABLE demotab
  3        (col1 VARCHAR2(10) )
  4     CREATE VIEW demoview
  5        AS SELECT col1 from demotab WHERE col1 > 1
  6     GRANT select ON demoview TO missinguser;
   GRANT select ON demoview TO missinguser
   *
ERROR at line 6:
ORA-02426: privilege grant failed
ORA-01917: user or role 'MISSINGUSER' does not exist

OK - that works - I expected that error.

If we now check, the table does not exist, as everything was rolled back:

desc demotab
ERROR:
ORA-04043: object demotab does not exist


If I now change the grant to be to system, as this user exists, it should work:


DUMMYUSER@DB>CREATE SCHEMA AUTHORIZATION dummyuser
  2     CREATE TABLE demotab
  3        (col1 VARCHAR2(10) )
  4     CREATE VIEW demoview
  5        AS SELECT col1 from demotab WHERE col1 > 1
  6     GRANT select ON demoview TO system;

Schema created.

It works - and the objects are created - see the table below:

DUMMYUSER@DB>desc demotab
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 COL1                                               VARCHAR2(10)

So it does work - but it has a few limitations. Some you've already seen, but another seems to be that it only supports tables/views/grants and nothing else.......

So in its current form it's not really going to be used much, I don't think - if it was enhanced it could become more useful.

What would be really useful is some commands like these - i.e. things that operate at the schema level:

1) drop schema (leaving user intact)
2) grant select on schema to userx
3) grant all on schema to userx; (grant everything but drop rights)
4) alter schema x readonly;
etc

That would really enhance working with Oracle, and I'm surprised these features never got introduced.....

livesql.oracle.com - a welcome addition



I've seen a couple of people make reference to this new facility in the past couple of weeks, so I decided to go and have a look at what it was all about - and actually it looks really useful.

There is a blog post announcing it here:

https://blogs.oracle.com/developer/entry/livesql_is_live_write_oracle

If you log on to http://livesql.oracle.com using your normal Oracle credentials you'll get access to the tool, and the best thing to do is just have a browse around to see what's available there.

It essentially gives you SQL worksheet access to a running Oracle instance, and you can run sql/plsql there without having all the hassle of managing your own instance.

There are a lot of use cases for this tool; personally I think it will be useful for demos, training and in particular answering people's technet questions - you can build an answer to a question here and then just share a link to the code demo for them to run at their leisure.

For example, assume I'd been asked how to return the results of a basic query as XML - I could just refer the poster to this link:

https://livesql.oracle.com/apex/livesql/s/cdld9s39vs4607cyossjcors2

where I've done a really basic solution (you can click and see what I did to get an idea).
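(If you don't want to click through, the gist of it is a one-liner along these lines - I'm paraphrasing from memory rather than quoting the demo exactly:

select dbms_xmlgen.getxml('select * from dual') from dual;

)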

Over time I guess it's hoped that this will become a code repository people can just borrow from.

Looks good so far - see what you think......

JSON (the slightly late Halloween return)



JSON, JSON, JSON - I've been hearing a lot about it lately, and we even have systems now where we are storing JSON data in Oracle (and by storing I mean 'dumping' the JSON data into a clob column).

I like to refer to this as DBaaB (database as a bucket) - which sadly is where a lot of development seems to be headed nowadays.

Anyway, some of our rocket-scientist developers wrote a whizzy new app that has everything stored as JSON, as it made the app development easy (and to be fair created a quick, nice-to-use app) - however there is now a problem. The original design didn't really have reporting requirements, but now it does, and the data is not really in a format where that is easy to produce.

So what do we do?

We could just kick it back and let it all be sorted out by the app developers in .net (or whatever the latest comically named tool in use is), or we could try and be helpful and make use of some of the new features in Oracle 12.

I decided on the latter (though at many points I wished I hadn't) - let me share what I finally built in SQL, as I think it will be useful for others; I struggled to find any code to borrow to get me started, though Tim (as always) had some good stuff to start me off.

Let me first share with you an extract from the JSON data so you can get a feel for what we're dealing with - I loaded it up out of the database column (using cut and paste....) into the following website, as it makes the data easier to browse:

http://jsoneditoronline.org/


You can see it's a complex JSON document - there are multiple levels of arrays all over the place and the document itself is several thousand lines long....

Looking into what's possible, there seem to be a few paths to take - some of them rely on the column having an 'is json' check constraint defined against it, which I currently didn't have, so I went down the route of using the JSON_TABLE syntax (which is kind of similar to the XMLTABLE syntax I featured a couple of posts ago).

This proved initially difficult to get working, partly due to understanding the syntax and partly due to the complexity of the document I was working with.

After a lot (and by a lot I mean really a lot) of trial and error I discovered that the JSON document, while 'valid' JSON, had sometimes used [[ instead of [ - Oracle does not like this, and it was causing a lot of my issues - you can see it on lines 6/7 in the screenshot above. Once I removed the extra [ and the corresponding ] further down, it started working.

The SQL I eventually built looks like this (revenue being the clob column in the json_test table):

select revenue, casex, casey, casez, item1, Mel
from json_test x,
json_table(revenue,'$'
columns
(casex varchar2(32) PATH '$.case',
  nested path '$.values[*]'
  columns (
  casey varchar2(2000) PATH '$.case'
    ,nested path '$.values[*]' columns (casez varchar2(2000) PATH '$.case'
      ,nested path '$.values[*]' columns (item1 varchar2(2000) PATH '$.Item1',Mel varchar2(2000) PATH '$.AdditionalInformation.Mel.Value'))
    )
)
) as jt
where x.servicename='FR';

which outputs something like this (the revenue column is included too, but it's the later columns that are of interest):



Not nice at all to read - but it shows you the constructs for multiple levels of arrays and how to access elements further down the tree. Once we have it in this 'SQL' format we can then work with it as normal and do any kind of aggregate reports we like with it.
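To show the nested-path construct without the production document in the way, here's a stripped-down sketch against an inline literal (hypothetical data, but the same shape of syntax):

select jt.*
from json_table('{"case":"A","values":[{"case":"B","values":[{"Item1":"x"}]}]}', '$'
       columns (casex varchar2(10) path '$.case',
                nested path '$.values[*]'
                columns (casey varchar2(10) path '$.case',
                         nested path '$.values[*]'
                         columns (item1 varchar2(10) path '$.Item1')))) jt;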

The issue with the [[ bothered me though, so I looked on metalink to see if it was recorded as a bug - and had a surprise: it seems there are now JSON database patch bundles which sit on top of the database PSUs! Obviously, as this is all very new, there are a lot of fixes being found for it - see the link below and a quick screengrab from it.

https://support.oracle.com/epmos/faces/DocumentDisplay?id=1992767.1



I've still to try these patches out to see if they resolve the problem (hopefully they will), but the JSON-related functions actually seem really good once you get the syntax right.

When create mview takes forever....



After having real work get in the way for a couple of weeks (a major 9i->12c migration is proving 'interesting' at the moment - maybe even worthy of a talk at some point), I've been slacking a little on writing up any blog posts.

So let's make amends for that now with a short post on some mview creation quirks.

I've been working in the past day or so on a browsing system for our configuration database (using Apex). The configuration database is however hosted on SQL Server, and I don't want to develop anything directly there for a few reasons:

1. I don't really know sql server (at least from a development point of view)
2. This is easy in Apex
3. I don't really know sql server...

So all I need to do is query SQL Server directly from Oracle, which I've blogged about before and is dead easy to set up - so I went ahead and did that, and now I can access all the SQL Server data directly just using the @remote_sql_db syntax.
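For anyone who hasn't read that post, the plumbing amounts to roughly this - the link name, credentials, tns alias and table below are placeholders:

create database link remote_sql_db
  connect to cmdb_user identified by cmdb_pass
  using 'SQLSERVER_GATEWAY';

select count(*) from some_cmdb_table@remote_sql_db;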

OK, that's all good, but one of the things I want to be able to do in my cmdb browser is build a tree-view pane to let me hunt around and navigate the configurations. SQL Server doesn't support this (well, at least it doesn't support the Oracle syntax of the connect by prior type statements). I would imagine that it supports that kind of syntax in some way, but I didn't want to waste any more time researching it.

So I can't query the tables directly over the db link and treewalk them - so what do I do?

Well - the obvious solution is just to create an mview which stores the results of the query against SQL Server - now it's an Oracle table and I can do what I like with it. It also has the benefit that it doesn't have to run the complex query over the db link many times (the data model in the SQL Server db design is horrendous, by the way). It also decouples the systems, so an outage on SQL Server doesn't stop the system running.

So it's all good then - I just need to create the mview with the same definition as the view.

So let's do that.

The view is a view over multiple views I created in Oracle to make the remote data model into something remotely usable - so essentially the view text looks like this:

select * from business_service
union all
select * from business_applications
union all
select * from RUNNING_SOFTWARE
union all
select * from computers

That runs fine and returns all the rows in 4 seconds.

However, when I do this......

create materialized view jj refresh complete start with sysdate next (sysdate+1)
as
(select * from business_service
union all
select * from business_applications
union all
select * from RUNNING_SOFTWARE
union all
select * from computers)

It just hangs...... and hangs, and actually never comes back (at least not in a reasonable time period).

Hmmmm.....

Let's try the same thing as a CTAS and see what happens:

create table jj as
(select * from business_service
union all
select * from business_applications
union all
select * from RUNNING_SOFTWARE
union all
select * from computers)

That creates in 4 seconds - so what's going on here?

Well, I can see that the create mview is doing loads of extra work and seems to be constantly reading and writing from temp - it's not executing the query in the same way.

So how do I fix that?

Well, I could mess about for hours trying to find out exactly what is going wrong, but this time I'm going to take the easy shortcut and do the following.

I know the CTAS was fine - this is what the create mview will run (but it's obvious that it's doing it some other way due to a quirk) - so I just need to give it a helping hand in the right direction.

So let's gather some outline data from the execution details of the good CTAS and embed those hints in the create mview statement.

So I rerun the CTAS:

create table jj as
(select * from business_service
union all
select * from business_applications
union all
select * from RUNNING_SOFTWARE
union all
select * from computers)

followed directly by

select * from table(dbms_xplan.display_cursor(null,null,'ADVANCED ROWS ALLSTATS LAST')); 

Part of the output is a whole series of hints (the 'outline' data) near the bottom:

In my case it's this selection of delights:

Outline Data
-------------
 
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
      DB_VERSION('12.1.0.2')
      DRIVING_SITE(@"SET$1""A1")
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$A9F4D2D2")
      OUTER_JOIN_TO_INNER(@"SEL$2""S"@"SEL$2")
      OUTLINE_LEAF(@"SEL$1")
      OUTLINE_LEAF(@"SEL$07BDC5B4")
      MERGE(@"SEL$4")
      OUTLINE_LEAF(@"SEL$ABDE6DFF")
      MERGE(@"SEL$6")
      OUTLINE_LEAF(@"SEL$8A3193DA")
      MERGE(@"SEL$8")
      OUTLINE_LEAF(@"SET$1")
      OUTLINE(@"SEL$2")
      OUTLINE(@"SEL$3")
      OUTLINE(@"SEL$4")
      OUTLINE(@"SEL$5")
      OUTLINE(@"SEL$6")
      OUTLINE(@"SEL$7")
      OUTLINE(@"SEL$8")
      NO_ACCESS(@"SEL$1""BUSINESS_SERVICE"@"SEL$1")
      FULL(@"SEL$A9F4D2D2""O"@"SEL$2")
      FULL(@"SEL$A9F4D2D2""S"@"SEL$2")
      FULL(@"SEL$A9F4D2D2""L2"@"SEL$2")
      FULL(@"SEL$A9F4D2D2""R"@"SEL$2")
      FULL(@"SEL$A9F4D2D2""L"@"SEL$2")
      LEADING(@"SEL$A9F4D2D2""O"@"SEL$2""S"@"SEL$2""L2"@"SEL$2""R"@"SEL$2""L"@"SEL$2")
      USE_HASH(@"SEL$A9F4D2D2""S"@"SEL$2")
      USE_HASH(@"SEL$A9F4D2D2""L2"@"SEL$2")
      USE_HASH(@"SEL$A9F4D2D2""R"@"SEL$2")
      USE_HASH(@"SEL$A9F4D2D2""L"@"SEL$2")
      PQ_DISTRIBUTE(@"SEL$A9F4D2D2""L"@"SEL$2" NONE NONE)
      SWAP_JOIN_INPUTS(@"SEL$A9F4D2D2""S"@"SEL$2")
      SWAP_JOIN_INPUTS(@"SEL$A9F4D2D2""R"@"SEL$2")
      USE_HASH_AGGREGATION(@"SEL$A9F4D2D2")
      END_OUTLINE_DATA
  */

So if I take these and embed them in the create mview statement like this

create materialized view jq refresh complete start with sysdate next (sysdate+1)
as
(select /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
      DB_VERSION('12.1.0.2')
      DRIVING_SITE(@"SET$1""A1")
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$A9F4D2D2")
      OUTER_JOIN_TO_INNER(@"SEL$2""S"@"SEL$2")
      OUTLINE_LEAF(@"SEL$1")
      OUTLINE_LEAF(@"SEL$07BDC5B4")
      MERGE(@"SEL$4")
      OUTLINE_LEAF(@"SEL$ABDE6DFF")
      MERGE(@"SEL$6")
      OUTLINE_LEAF(@"SEL$8A3193DA")
      MERGE(@"SEL$8")
      OUTLINE_LEAF(@"SET$1")
      OUTLINE(@"SEL$2")
      OUTLINE(@"SEL$3")
      OUTLINE(@"SEL$4")
      OUTLINE(@"SEL$5")
      OUTLINE(@"SEL$6")
      OUTLINE(@"SEL$7")
      OUTLINE(@"SEL$8")
      NO_ACCESS(@"SEL$1""BUSINESS_SERVICE"@"SEL$1")
      FULL(@"SEL$A9F4D2D2""O"@"SEL$2")
      FULL(@"SEL$A9F4D2D2""S"@"SEL$2")
      FULL(@"SEL$A9F4D2D2""L2"@"SEL$2")
      FULL(@"SEL$A9F4D2D2""R"@"SEL$2")
      FULL(@"SEL$A9F4D2D2""L"@"SEL$2")
      LEADING(@"SEL$A9F4D2D2""O"@"SEL$2""S"@"SEL$2""L2"@"SEL$2""R"@"SEL$2""L"@"SEL$2")
      USE_HASH(@"SEL$A9F4D2D2""S"@"SEL$2")
      USE_HASH(@"SEL$A9F4D2D2""L2"@"SEL$2")
      USE_HASH(@"SEL$A9F4D2D2""R"@"SEL$2")
      USE_HASH(@"SEL$A9F4D2D2""L"@"SEL$2")
      PQ_DISTRIBUTE(@"SEL$A9F4D2D2""L"@"SEL$2" NONE NONE)
      SWAP_JOIN_INPUTS(@"SEL$A9F4D2D2""S"@"SEL$2")
      SWAP_JOIN_INPUTS(@"SEL$A9F4D2D2""R"@"SEL$2")
      USE_HASH_AGGREGATION(@"SEL$A9F4D2D2")
      END_OUTLINE_DATA
  */ * from business_service
union all
select * from business_applications
union all
select * from RUNNING_SOFTWARE
union all
select * from computers)

With those hints in place (that Oracle itself created for me) it now runs in 4 seconds and I have the solution I wanted.

To be honest I've used MViews (or snapshots as they were known in the olden days) a lot, and this is the first time I've ever had to do this kind of 'fix' - it must be some quirk about the SQL that is causing the optimizer to get things wrong somehow.

It's a relatively simple workaround though, and as this is just a 'utility' system rather than a full blown business app I'm not too bothered that I have all those hardcoded hints in place.




A simple Apex DEPT/EMP based report



Today I've been doing some more work with Apex. I don't do that much of this and I'm entirely self taught (that's a disclaimer in case the following demo is done badly....).

I wanted to display a simple report screen where a drop down list at the top influences what is shown in the bottom half of the screen - sort of a parent/child report (well, kind of). I got it working how I wanted in the end and thought I'd share it here, as much for myself as anything, so I know how to do this when I come to it again.

I don't want to use my actual screen as it contains loads of company data, so I've repeated the example using the emp/dept tables that everyone is likely familiar with.

Getting these loaded in is however quite a pain, so I just cut and pasted the SQL to do that from the following site. I've blogged before about how over complicated the whole demo install process is - anyway, ignoring that for now, I just created emp and dept and filled them with demo data.

The plan for the screen is to choose the dept at the top and display the employees for that department below - should be simple enough......?

Now I've done this by adding a page to an existing app - but you could just as well create a brand new one with a single page.

So first up I click 'create page' => 'blank page' => 'next' => choose a page name here - I used 'demopage' => then on to the next screen and 'do not use tabs' => then confirm.

Now I just have a page with nothing on it - if you run it you just get a blank screen.

Next I need to create a list of values element to appear in the top part of the screen - there are a few ways to do this - I'm creating it as an LOV in the shared components area.

So I now click 'Shared Components' => Lists of Values => create from scratch => next => give it a name - I chose 'deptlist' and made it dynamic as I want the list to be built from what's in the table rather than a hardcoded list in Apex.

On the last screen I now have to enter some SQL to build the LOV - this must return both a display value and an identifier, so in my case it's just a simple

select dname,deptno from dept order by 1 ;

Which returns this in sqlplus

DNAME              DEPTNO
-------------- ----------
ACCOUNTING             10
OPERATIONS             40
RESEARCH               20
SALES                  30

The ordering is of course up to you - it's the order it will appear in on screen.

Now I have an LOV object, but nothing on screen references it yet - so let's fix that.

I go back and edit my demopage and click on the create region icon.

Here I again choose the html option => then 'html' => next => I give it a title of 'headerregion' => next => next => create region

If I run the page now it looks like this


i.e. particularly dull - I need to add the LOV to that header region, so let's do that.

So I click on the + icon in the items area of the page and choose 'select list' from the choices. I give it a name of selheader, then next, next; on the LOV screen I choose the LOV I created earlier, set display null value to No, then next and create item.

Now when I run the screen I see this


A step forward - now I need the employee details region.

So let's add a new reports region - I click the + button next to the regions section on the admin screen.

From there I choose report => classic report => give it a name (I chose 'employees') => then I enter the select statement - in my case (being lazy) I just use

select * from emp where deptno = :SELHEADER (to pass in the id number selected from the top half of the screen)

and I also set 'page items to submit' to be SELHEADER.

I then click create region and skip the other 2 screens.

**** An important side note here on how LOVs work - the display value on screen is not the value held in the page item; the item holds the return value (the deptno in this case). So the LOV query you define is essentially 

select "display value", "return value" from blah blah blah

This is an important point to realise, as you may think the item has a certain value when it doesn't, and if you refer to that value in other SQL you may not get the behaviour you expect ****

Now my screen looks like this - but the screen doesn't work - choosing the value at the top has no effect


The reason for this is that on the SELHEADER item I did not set 'page action when value changed' - if I now change this to 'submit page' then the screen does work, though not 100% how I want.

If I choose RESEARCH for example I now get this



This works great - however on screen load, when ACCOUNTING is the default, nothing shows. If I click RESEARCH and then go back and click ACCOUNTING it works OK - but the initial screen shows no data, which is not what I want.

The easy way out is to just re-enable null values on the SELHEADER item and give the null display value a value of 'select department...'.

So I do that and the screen now looks like this

Now whatever I choose is different to the dummy value first shown, which causes the bottom query to be re-executed - so I get this

And there you have it - a very simple example of how to build this kind of screen. It's easy to do, but I often find simple examples in Apex are hard to find. I think a firm grip of the basics is very important - with such a flexible tool there are many ways to do the same thing and you can easily miss the simple solution (as I have done many times in the past).




Loading a multi column excel/csv into oracle using Apex



This week I was asked about a simple requirement to upload a csv formatted file into Apex (well, into a table using Apex as the tool, to be precise). The only issue is the csv has 64 'columns', so the default tools you can use for this kind of thing do not work.

The solution?

Well, you just use the file upload item with some clever post process plsql - however first you have to get past the confusion where the file upload seems to work fine but no data shows in WWV_FLOW_FILES..... (more on that in a bit)

Let's start at the beginning (in an existing app in my case, but you could just as well create a new one)

We start with create page -> blank page-> next -> give it a name ('testpage' in my case) -> whatever tab options you want -> then finish

Now you have a blank page with nothing on it

Now edit that page to add a region

New region -> HTML -> HTML -> title (testregion) -> click create region

Now we still have a blank page, just with a testregion area on it (but empty)

Now to add a file browse item

Add Item -> File Browse -> give it a name (browser for me) -> next -> next -> change storage type to WWV_FLOW_FILES -> next -> create item

Now we have a screen that looks like this


When you click browse you get the normal windows explorer interface and you can go and choose a file - when you do the field is populated with that filename.

Now we need to actually save this into the database - doing this is simple, we just need to submit the page and the file browse item does the rest.

So lets add a page submit button

Add button -> next, switch to 'among region items' -> next -> give the button a name and label (I chose upload for both) -> then click create button, as the default action for a button is submit page and there is nothing to change

Now when I run the page I see

Now if I browse again, choose a file and then click upload to submit the page, the page is submitted without error - so that means it's worked, right?

Well, let's go and have a look in the WWV_FLOW_FILES table (well, view, as I found out later......)

So the first thing I try is this

SQL> select count(*) from wwv_flow_files;

  COUNT(*)
----------
         0

Right... - that's working well :-(

Resolving that took me quite a while and a tremendous amount of frustration; I'll cut to the chase though and just tell you what was wrong.

Essentially WWV_FLOW_FILES behaves as if it has a VPD policy on it (it hasn't - it's just a view with a where clause - but the result is the same): if you don't set your context to the workspace you are using in Apex then you don't see anything.

The key clause on the end of the view is

 where security_group_id = wwv_flow.get_sgid

If I run that for my sqlplus login I get

SQL> select wwv_flow.get_sgid from dual;

  GET_SGID
----------
         0

So how do I set it?

First I need to find my workspace id - I do this with this simple SQL

select workspace, to_char(workspace_id)
from apex_workspaces;

The second column returned is the id, which I then pass into a 'setter' procedure

SQL> exec wwv_flow_api.set_security_group_id(4096721095133068); -- (where the number is the value returned from the previous query)

PL/SQL procedure successfully completed.

Now when I query it

SQL> select FILENAME from wwv_flow_files where CREATED_ON > sysdate-1/24
  2  /

FILENAME
--------------------------------------------------------------------------------
post_install_acf_part1.csv

It shows up fine - the damn thing was there all along, I just hadn't set my 'context' up - aargh!

So phase 1 of what I wanted is complete - now I want to take that csv that was uploaded and post process it into my real destination table (the one with loads of columns).

First I need to actually create the table (long, I know, but this is the pain when the csv has loads of columns)

  CREATE TABLE "APEX_WORKSPACE_ER"."RICH_TEST"
   (    "ID" NUMBER,
        "BUSINESS_DATE" DATE,
        "SOURCE_SYS_NAME" VARCHAR2(1 CHAR),
        "DELTA_CODE" VARCHAR2(1 CHAR),
        "DESTINCLUDEEMIR" VARCHAR2(1 CHAR),
        "DESTSTATUSEMIR" VARCHAR2(1 CHAR),
        "DESTINCLUDEREMIT" VARCHAR2(1 CHAR),
        "DESTSTATUSREMIT" VARCHAR2(30 CHAR),
        "OURENTITYID" VARCHAR2(30 CHAR),
        "OURENTITYTRADERUSERNAME" VARCHAR2(30 CHAR),
        "TRADINGCOUNTERPARTYID" VARCHAR2(30 CHAR),
        "EXCHANGEFLAG" VARCHAR2(1 CHAR),
        "DELEGATED" VARCHAR2(1 CHAR),
        "THIRDPARTY" VARCHAR2(1 CHAR),
        "TRADINGCTRPRTYTRADERUSERNAME" VARCHAR2(1 CHAR),
        "BENEFICIARYID" VARCHAR2(1 CHAR),
        "DELEGATEDBENEFICIARYID" VARCHAR2(1 CHAR),
        "DELEGATEDBENEFICIARYIDTYPE" VARCHAR2(1 CHAR),
        "TRADINGCAPACITY" VARCHAR2(1 CHAR),
        "DELEGATEDTRADINGCAPACITY" VARCHAR2(1 CHAR),
        "BUYSELLFLAG" VARCHAR2(1 CHAR),
        "INITIATORAGGRESSOR" VARCHAR2(1 CHAR),
        "ISBOOKTRANSFER" VARCHAR2(1 CHAR),
        "STANDARDPRODUCTID1" VARCHAR2(30 CHAR),
        "INTERNALPRODUCT" VARCHAR2(1 CHAR),
        "TRADETYPE" VARCHAR2(30 CHAR),
        "UNDERLYING" VARCHAR2(1 CHAR),
        "INDEXVALUE" VARCHAR2(1 CHAR),
        "NOTIONALCURRENCY1" VARCHAR2(30 CHAR),
        "CONTRACTDESCRIPTION" VARCHAR2(30 CHAR),
        "CONTRACTTRADINGHOURS" VARCHAR2(30 CHAR),
        "TRADEID" VARCHAR2(255 CHAR),
        "SOURCESYSTEMTRADEID" VARCHAR2(30 CHAR),
        "VENUEOFEXECUTION" VARCHAR2(128 CHAR),
        "DEALTPRICE" VARCHAR2(40 CHAR),
        "PRICENOTATION" VARCHAR2(30 CHAR),
        "NOTIONALAMOUNT" VARCHAR2(40 CHAR),
        "PRICEMULTIPLIER" VARCHAR2(40 CHAR),
        "QUANTITY" VARCHAR2(40 CHAR),
        "QUANTITYUNIT" VARCHAR2(30 CHAR),
        "DERIVATIVETYPE" VARCHAR2(10 CHAR),
        "EXECUTIONTIMESTAMP" VARCHAR2(40 CHAR),
        "TERMINATIONDATE" VARCHAR2(1 CHAR),
        "LINKEDTRANSACTIONID" VARCHAR2(128 CHAR),
        "LINKEDORDERID" VARCHAR2(128 CHAR),
        "DELEGATEDLINKEDTRANSACTIONID" VARCHAR2(1 CHAR),
        "VOICEFLAG" VARCHAR2(1 CHAR),
        "COMMODITY" VARCHAR2(30 CHAR),
        "DELIVERYPROFILENUMBER" VARCHAR2(40 CHAR),
        "DELIVERYPOINT" VARCHAR2(30 CHAR),
        "LOADTYPE" VARCHAR2(30 CHAR),
        "DURATION" VARCHAR2(1 CHAR),
        "DAYSOFWEEK" VARCHAR2(30 CHAR),
        "DELIVERYSTARTDATE" VARCHAR2(30 CHAR),
        "DELIVERYENDDATE" VARCHAR2(30 CHAR),
        "REMITLASTDATETIME" VARCHAR2(30 CHAR),
        "LOADDELIVERYINTERVALS" VARCHAR2(30 CHAR),
        "QUANTITYOFLEG" VARCHAR2(30 CHAR),
        "QUANTITYUNITOFLEG" VARCHAR2(1 CHAR),
        "PRICEPERTIMEINTERVALQUANTITIES" VARCHAR2(1 CHAR),
        "PUTCALLFLAG" VARCHAR2(1 CHAR),
        "REMITOPTIONEXERCISEDATE" VARCHAR2(1 CHAR),
        "OPTIONSTYLE" VARCHAR2(1 CHAR),
        "STRIKEPRICE" VARCHAR2(1 CHAR),
        "SOURCESYSTEMTRANSTATUS" VARCHAR2(30 CHAR),
         CONSTRAINT "RICH_TEST_PK" PRIMARY KEY ("ID"));

Now a sequence to generate the PK

CREATE SEQUENCE  "APEX_WORKSPACE_ER"."RICH_TEST_SEQ" 
MINVALUE 1 MAXVALUE 9999999999999999999999999999
INCREMENT BY 1 START WITH 201 CACHE 20 NOORDER  NOCYCLE;

A trigger to create the pk for me (note this is 11g not 12c, so no identity columns or sequence defaults)

  CREATE OR REPLACE TRIGGER "APEX_WORKSPACE_ER"."BI_RICH_TEST"
  before insert on "RICH_TEST"
  for each row
begin
  if :new."ID" is null then
    select "RICH_TEST_SEQ".nextval into :new."ID" from sys.dual;
  end if;
end;
/

Now I need a function to do hex to decimal conversion as a prereq for some code I need later - I borrowed that from here 

CREATE OR REPLACE FUNCTION hex2dec (hexnum IN CHAR) RETURN NUMBER IS
  i                 NUMBER;
  digits            NUMBER;
  result            NUMBER := 0;
  current_digit     CHAR(1);
  current_digit_dec NUMBER;
BEGIN
  digits := LENGTH(hexnum);
  FOR i IN 1..digits LOOP
     current_digit := SUBSTR(hexnum, i, 1);
     IF current_digit IN ('A','B','C','D','E','F') THEN
        current_digit_dec := ASCII(current_digit) - ASCII('A') + 10;
     ELSE
        current_digit_dec := TO_NUMBER(current_digit);
     END IF;
     result := (result * 16) + current_digit_dec;
  END LOOP;
  RETURN result;
END hex2dec;
/
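
(As an aside - on any recent Oracle version you can skip the custom function entirely, as TO_NUMBER accepts a hex format mask; a one-line sketch:

select to_number('1F', 'XXXXXXXX') from dual;   -- returns 31

I've kept hex2dec above as that's what the borrowed framework code calls.)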

OK - still with me? Now I have everything I need in the database, I just need to create a plsql post process to fire after upload, parse the blob that was just created in WWV_FLOW_FILES and turn it into rows and columns in my new table - so let's do that.

So I click on

create process -> PL/SQL -> give it a name (dodata in my case) -> then paste in the plsql (see below for that code block) and then click create process


**** note - the framework for this code came from here

DECLARE    
v_blob_data       BLOB;    
v_blob_len        NUMBER;    
v_position        NUMBER;    
v_raw_chunk       RAW(10000);    
v_char      CHAR(1);    
c_chunk_len   number       := 1;    
v_line        VARCHAR2 (32767)        := NULL;    
v_data_array      wwv_flow_global.vc_arr2;    
v_rows number;    
v_sr_no number := 1;  
--v_first_line_done boolean := false;  
v_error_cd number :=0;    
BEGIN    
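-- fetch the uploaded file from the apex upload view; :BROWSER is the file browse page item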
 
select    
 blob_content    
 into v_blob_data    
 from wwv_flow_files    
 where name = :BROWSER;    
v_blob_len := dbms_lob.getlength(v_blob_data);    
v_position := 1;    
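-- walk the blob one byte at a time, building up a line until we hit a linefeed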
WHILE ( v_position <= v_blob_len )    
 LOOP    
 v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);    
 v_char :=  chr(hex2dec(rawtohex(v_raw_chunk)));    
 v_line := v_line || v_char;    
 v_position := v_position + c_chunk_len;    
 IF v_char = CHR(10) THEN    
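  -- end of line reached: swap commas for colons, as string_to_table splits on ':' by default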
  v_line := REPLACE (v_line, ',', ':');    
  v_data_array := wwv_flow_utilities.string_to_table (v_line);  
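-- the first line of the file is the csv header row - skip it, insert every other line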
IF v_sr_no = 1 THEN
null;
ELSE
APEX_DEBUG.MESSAGE(v_sr_no);
insert into rich_test (
 BUSINESS_DATE,
 SOURCE_SYS_NAME,
 DELTA_CODE  ,
 DESTINCLUDEEMIR,
 DESTSTATUSEMIR,
 DESTINCLUDEREMIT,
 DESTSTATUSREMIT,
 OURENTITYID   ,
 OURENTITYTRADERUSERNAME,
 TRADINGCOUNTERPARTYID ,
 EXCHANGEFLAG  ,
 DELEGATED  ,
 THIRDPARTY ,
 TRADINGCTRPRTYTRADERUSERNAME,
 BENEFICIARYID        ,
 DELEGATEDBENEFICIARYID         ,
 DELEGATEDBENEFICIARYIDTYPE  ,
 TRADINGCAPACITY         ,
 DELEGATEDTRADINGCAPACITY   ,
 BUYSELLFLAG                              ,
 INITIATORAGGRESSOR                          ,
 ISBOOKTRANSFER                                   ,
 STANDARDPRODUCTID1                                ,
 INTERNALPRODUCT                                   ,
 TRADETYPE                                         ,
 UNDERLYING                                        ,
 INDEXVALUE                                       ,
 NOTIONALCURRENCY1                                ,
 CONTRACTDESCRIPTION                               ,
 CONTRACTTRADINGHOURS                             ,
 TRADEID                                        ,
 SOURCESYSTEMTRADEID                             ,
 VENUEOFEXECUTION                             ,
 DEALTPRICE                                      ,
 PRICENOTATION                                 ,
 NOTIONALAMOUNT                                ,
 PRICEMULTIPLIER                            ,
 QUANTITY                                  ,
 QUANTITYUNIT                             ,
 DERIVATIVETYPE                               ,
 EXECUTIONTIMESTAMP                            ,
 TERMINATIONDATE                                ,
 LINKEDTRANSACTIONID                         ,
 LINKEDORDERID                               ,
 DELEGATEDLINKEDTRANSACTIONID               ,
 VOICEFLAG                                ,
 COMMODITY                                ,
 DELIVERYPROFILENUMBER                     ,
 DELIVERYPOINT                              ,
 LOADTYPE                                      ,
 DURATION                                       ,
 DAYSOFWEEK                                     ,
 DELIVERYSTARTDATE                                  ,
 DELIVERYENDDATE                                    ,
 REMITLASTDATETIME                                ,
 LOADDELIVERYINTERVALS                             ,
 QUANTITYOFLEG                                   ,
 QUANTITYUNITOFLEG                                 ,
 PRICEPERTIMEINTERVALQUANTITIES                 ,
 PUTCALLFLAG                                     ,
 REMITOPTIONEXERCISEDATE                         ,
 OPTIONSTYLE                                  ,
 STRIKEPRICE                                ,
 SOURCESYSTEMTRANSTATUS                    
)
values 
(to_date(v_data_array(1),'DD/MM/YYYY'), v_data_array(2), v_data_array(3), v_data_array(4), v_data_array(5), v_data_array(6), v_data_array(7),v_data_array(8),v_data_array(9),v_data_array(10),
v_data_array(11), v_data_array(12), v_data_array(13), v_data_array(14), v_data_array(15), v_data_array(16), v_data_array(17),v_data_array(18),v_data_array(19),v_data_array(20),
v_data_array(21), v_data_array(22), v_data_array(23), v_data_array(24), v_data_array(25), v_data_array(26), v_data_array(27),v_data_array(28),v_data_array(29),v_data_array(30),
v_data_array(31), v_data_array(32), v_data_array(33), v_data_array(34), v_data_array(35), v_data_array(36), v_data_array(37),v_data_array(38),v_data_array(39),v_data_array(40),
v_data_array(41), v_data_array(42), v_data_array(43), v_data_array(44), v_data_array(45), v_data_array(46), v_data_array(47),v_data_array(48),v_data_array(49),v_data_array(50),
v_data_array(51), v_data_array(52), v_data_array(53), v_data_array(54), v_data_array(55), v_data_array(56), v_data_array(57),v_data_array(58),v_data_array(59),v_data_array(60),
v_data_array(61), v_data_array(62), v_data_array(63), v_data_array(64))
  ; 
END IF;
   
   v_line := NULL;
 v_sr_no := v_sr_no + 1;  
 
 END IF;    
 END LOOP;  
DELETE FROM WWV_FLOW_FILES where name = :BROWSER;  
 
END;  


When you now run the screen and click upload, instead of instantly returning it fires the post process code and spins around for a while. Once it finishes I query the destination table and see

SQL>  select count(*) from apex_workspace_er.rich_test;

  COUNT(*)
----------
       200

So it's worked fine - we have our solution!

Before I finish off, a few general comments about the plsql post process block above:

1) You have to use the :BROWSER (or whatever you named it) page item in two places in the code above - to identify the file in WWV_FLOW_FILES based on what is on screen, and also to delete it at the end (that part's optional).
2) Date columns seem to be a pain - if there is a formatting error you don't get much help identifying which column is the problem one; mismatched lengths are fine but date conversion is a pain.
3) I added in an APEX_DEBUG.MESSAGE(v_sr_no); - this can be useful in debug mode to see which line is a problem.
4) It doesn't seem very quick - a bulk collect/forall may be required to help with this (see the sketch below).
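
A very rough sketch of what that might look like - collect the parsed rows into a PL/SQL table and insert them in one pass (assumes dense indexing; the parsing loop itself stays exactly as above):

declare
  type t_rows is table of rich_test%rowtype index by pls_integer;
  l_rows t_rows;
begin
  -- ... parse the blob line by line as before, but instead of the per-line
  -- insert, populate l_rows(l_rows.count + 1) with the parsed columns ...
  forall i in 1 .. l_rows.count
    insert into rich_test values l_rows(i);
end;
/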

Hope that's useful for someone - in particular the first bit about what is visible in WWV_FLOW_FILES!

Don't use the force Luke!



This week revealed another issue with our datawarehouse - this time with a statement that hadn't been a problem before but suddenly started performing very badly (from maybe 10 minutes to 10 hours).

So I set about investigating. An initial look showed the SQL was about 200 lines long...... however at second glance the statement was actually quite simple - it's just a two-table join with an aggregate; there are loads of columns involved and the plan looks more complex as there is a lot of parallel processing going on.

The initial statement looks like this in cloud control


Which at first look doesn't seem that bad; however, looking more closely reveals that the optimizer has got its estimates badly wrong for one of the tables - DA_STATIC_TRADE.

The index range scan only has a cost of 2 - yet there is actually a huge amount of data in there. The table is partitioned by day and we are only interested in one day - and indeed most of the data in that day, as even after applying the other filter it's still retrieving about 80% of the partition.

What we actually want is two complete full scans of the single partition (both tables involved are interval partitioned by day) followed by a hash join to give us the results back as quickly as possible (with some parallel thrown in for good measure).

So what's going wrong?

My first thought was stats, but a look through the table/index/histogram stats all looked good - so what on earth is going on?

Well a closer look at the cloud control screen shows this


If you look closely, what should be literals (as that's what is passed in to the query - and is generally desirable 100% of the time in a warehouse, as we want to give the optimizer as much information as possible) actually seem to be binds......

Right.....

I know what the issue is here - cursor sharing - a quick check reveals it's been changed - it's set to FORCE.
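
(The giveaway in v$sql is literals rewritten into :SYS_B_n binds - a quick sketch of how to spot them:

select sql_id, substr(sql_text, 1, 100) sql_snippet
from v$sql
where sql_text like '%SYS_B%'
and rownum <= 10;
)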

Let's now do a quick test and switch it back to EXACT.
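
(Which is nothing more than this - we checked first, then changed it at system level; you can trial it with alter session first if you're nervous about flipping it system-wide:

show parameter cursor_sharing

alter system set cursor_sharing = 'EXACT';
)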

Now we get this (i.e. the literal shows correctly)



Which is exactly what we want - the costs look higher (i.e. it's got them right) and we now get the correct plan - the statement is now back to 10 minutes.

It turned out the parameter was changed to try and resolve another issue and we quickly got it changed back.

A couple of lessons from this:

1) beware the impact of changing a global optimizer setting on a running system - you could bring it to its knees
2) Don't use the FORCE- use the EXACT!

And in fact the recommendation from Oracle themselves is to never use FORCE any more - it has lots of nasty side effects, and bad plans are just one of them.....


Moving Cloud control up to 12.1.0.5



Yesterday we patched cloud control to 12.1.0.5 from 12.1.0.4. This patch was largely prompted by the fact that this version adds support for IE11 - prior to the patch you just get a cryptic message on the login screen and you can't actually get past it. There are one-off patches to address this, but we prefer to stick to major patchsets where possible.

The process went pretty well - the installer is very slick now and does a huge amount of extra checking for you to prevent issues - as long as you heed the advice.....

We ignored the message about the db compatible parameter (11.2.0.2) not matching the actual database version (12.1.0.2) as we didn't think it would matter - it had been left at 11 as we only recently moved to 12c.

The process happily ran along for quite a while until we hit this

ORA-39726: unsupported add/drop column operation on compressed tables
File:/oracle/12105_oms/oms/plugins/oracle.sysman.cfw.oms.plugin_12.1.0.3.0/sql/cfw/12.1.0.2.0/cfw_service_family_schema_upgrade.sql
Statement:ALTER TABLE
   CFW_SERVICE_FAMILY_HOME
ADD
   (IS_SOFTWARE_LIB_TAB NUMBER(1,0)  DEFAULT 0)


So it seems that adding a column (with a default) to a compressed table is only possible in 12c if compatible is set to 12c....

At this point we had a minor panic thinking we'd have to restore and start the whole thing again - but actually we didn't: the process is restartable.

So what we did was set compatible to 12.1.0.2 in the db, bounce the database and then just click the retry button in the installer window.
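
(The change itself is trivial - a sketch; just remember compatible can only ever be raised, never lowered again:

alter system set compatible = '12.1.0.2' scope = spfile;
shutdown immediate
startup
)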

The process then just figured out what it needed to do to pick up from where it failed and carried on.

It then ran to completion and we were successfully patched.

It's really nice to see this patching coped with the failure - in older versions this would not have been possible.

Nice job by the developers.

As a summary for others: the process for a 75GB repo db with ~150 hosts and 400 databases (a system that the cloud control management page classifies as 'Medium') took about 1 hour end to end, I think (ignoring the retry part we went through) - the majority of that being the repo database changes.

When the big switch doesn't switch



Some time ago I wrote a few blog posts about how to very simply switch databases into (and out of) ASM. This worked like a dream when I tried it in 12c:

http://dbaharrison.blogspot.de/2014/03/switching-into-asm.html
http://dbaharrison.blogspot.de/2014/10/switching-bigish-to-asm.html
http://dbaharrison.blogspot.de/2014/03/faster-switching-into-asm.html

This week we've been switching a load of 11.2 databases into ASM and have hit 2 snags.

I'll start with the easier of the two....

In one of those posts I demonstrated how it was possible to apply an incremental backup to an existing on-disk image copy in ASM. This rolls the copy forward and makes for a much shorter downtime for the actual migration.

This doesn't work in 11g - well, at least not without a minor syntax change to the base copy backup.

in 12c this works

backup as copy database format '+DATA' tag 'demo';

in 11g you have to make a subtle adjustment to this

backup as copy incremental level 0 database format '+DATA' tag 'demo';

So it seems that in 11g the plain copy is a full backup and not an incremental level 0 backup; though they are essentially the same thing, there is a difference. In 12c either the default has been changed or the two really have become exactly synonymous.
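
(For completeness, once the level 0 copy exists the roll-forward cycle - repeatable as often as you like before the final cutover - is just:

backup incremental level 1 for recover of copy with tag 'demo' database;
recover copy of database with tag 'demo';
)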

So that had me scratching my head for a while.

However, not for as long as this next one did.

In this case we did a full backup to ASM, went to switch as normal and then got this

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of switch to copy command at 12/23/2015 10:53:41
RMAN-06571: datafile 1 does not have recoverable copy

Initially we thought we'd made a mistake somewhere and retried a few times but to no avail - something was wrong here and it wasn't at all obvious what.

The only difference we could find to other environments was that compatible was set too low (11.2.0.3 rather than 11.2.0.4) so we changed that and tried again but that didn't help either.

At this point we were reduced to guesswork.....

I had noticed that there were 2 incarnations for this database in the controlfile (one of them orphaned) - I didn't think this should matter, but it was a difference from the other databases that had worked.

I therefore decided to do a backup controlfile to trace and recreate nice clean fresh controlfiles.
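
(A sketch of that - the trace file contains a ready-made CREATE CONTROLFILE script:

alter database backup controlfile to trace as '/tmp/recreate_cf.sql';

-- then shutdown immediate, startup nomount, run the edited NORESETLOGS
-- version of the script from /tmp/recreate_cf.sql and open the database
)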

And guess what - it fixed it. Next run through ran as expected!

Not sure if this is a bug or controlfile corruption or what, but anyway I thought I'd share as it may be useful for someone else.

