
XMLDB files, folders and acls



Another XMLDB post.....

I've not been able to find a simple place I can look that shows me, for a certain file or folder, which acl is controlling access to it - it could exist, I just haven't found it.

Anyway, I gave up looking and wrote my own:

SELECT b.any_path as Folder,a.ANY_PATH as Protected_by_ACL
  FROM RESOURCE_VIEW a,resource_view b
  WHERE extractValue(a.RES, '/Resource/XMLRef') = make_ref(XDB.XDB$ACL,extractValue(b.RES, '/Resource/ACLOID'))
  and b.any_path not like '%.xml%';

This gives a two-column output: the first is the xmldb resource and the second the acl that applies to it. In the SQL above I exclude .xml files (as I was not interested in them in my case) - the filter can be anything you want, of course.

An example output is shown below


If you use xmldb this might be very useful; if you don't then it's pretty useless, other than to scare you about this xmldb stuff even more with its crazy syntax and obscure workings......
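As a side note, the same join can also be pointed at one specific resource if you just want to know which acl protects a single path - a minimal variation of the query above (the path used here is just a placeholder):

SELECT b.any_path as folder, a.any_path as protected_by_acl
  FROM resource_view a, resource_view b
 WHERE extractValue(a.RES, '/Resource/XMLRef') = make_ref(XDB.XDB$ACL, extractValue(b.RES, '/Resource/ACLOID'))
   AND b.any_path = '/public/mydir';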

I hate longs



Now LONGs have been around since Oracle was running around in short trousers, and while not really used by anyone any more they are still there in the dictionary. Aaargh. I guess Oracle has just decided it's not worth messing around with - the whole dictionary stuff just works, so why mess around with it - and to be honest that's probably the view I would take too.

So this week one of our testers has been wanting to build something that lets us compare the code installed in many of the different environments by extracting DDL definitions from all of them and storing them in a central repository.

Now there is more than one approach to this, but the way he chose to do it was to create a db link to each database and just pull the definitions over that link and store them. This is all fine until you start trying to do it for views like DBA_VIEWS, where the view definition is stored in a LONG..... and we want to store the result in varchar columns.

We tried a few different ways of doing this but there seemed to be an issue at every step - LONGs and db links (and in fact LOBs and db links) have quite a few limitations......

In the end we came up with an approach which uses the built-in package sys.dbms_metadata_util (one that I've rarely used before - but it does seem to have some useful stuff in it).

The following code extracts the view text in varchar format from a remote database

select db.owner,
       db.view_name,
       sys.dbms_metadata_util.long2varchar@dblink(4000, 'VIEW$', 'TEXT', sy.rowid)
  from sys.view$@dblink sy,
       dba_objects@dblink obj,
       dba_views@dblink db
 where obj.owner = db.owner
   and obj.object_name = db.view_name
   and obj.object_id = sy.obj#;


However..... this only works up to 4000 bytes - after that the code still runs but returns null for the text. To cope with longer fields there is another function called long2vcmax, which returns an array of varchar2 that you'd have to store as separate rows - however it works the opposite way round - anything less than 4000 bytes returns null......

Anyway - I think this does have its uses.

I just wish LONGs and CLOBs were easier to work with sometimes.....
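As a rough sketch of the 'store it centrally' part (the repository table and the db link name here are made up for illustration, not part of the original build), the query above can feed a simple repository table directly:

-- A sketch only: central repository table and a load over the db link
create table view_repository (
  source_db   varchar2(30),
  owner       varchar2(30),
  view_name   varchar2(30),
  view_text   varchar2(4000),
  captured_at date default sysdate
);

insert into view_repository (source_db, owner, view_name, view_text)
select 'REMOTEDB',
       db.owner,
       db.view_name,
       sys.dbms_metadata_util.long2varchar@dblink(4000, 'VIEW$', 'TEXT', sy.rowid)
  from sys.view$@dblink sy,
       dba_objects@dblink obj,
       dba_views@dblink db
 where obj.owner = db.owner
   and obj.object_name = db.view_name
   and obj.object_id = sy.obj#;

commit;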

Update - here is a query for constraints too.....

select db.OWNER,
       CONSTRAINT_NAME,
       CONSTRAINT_TYPE,
       TABLE_NAME,
       sys.dbms_metadata_util.long2varchar@dblink(4000,
                                                        'cdef$',
                                                        'CONDITION',
                                                        cdef.rowid),
       R_OWNER,
       R_CONSTRAINT_NAME,
       DELETE_RULE,
       db.STATUS,
       DEFERRABLE,
       DEFERRED,
       VALIDATED,
       db.GENERATED,
       BAD,
       RELY,
       LAST_CHANGE,
       INDEX_OWNER,
       INDEX_NAME,
       INVALID,
       VIEW_RELATED
  from dba_constraints@dblink db,
       sys.cdef$@dblink cdef,
       sys.con$@dblink    con,
       dba_users@dblink usr   
 where con.con# = cdef.con#
       and db.constraint_name=con.name
       and usr.user_id=con.owner#
       and usr.username=db.owner



and another update for triggers

select db.OWNER,
       TRIGGER_NAME,
       TRIGGER_TYPE,
       TRIGGERING_EVENT,
       TABLE_OWNER,
       BASE_OBJECT_TYPE,
       TABLE_NAME,
       COLUMN_NAME,
       referencing_names,
       when_clause,
       db.status,
       description,
       action_type,
       sys.dbms_metadata_util.long2varchar@dblink(4000,
                                                        'trigger$',
                                                        'action#',
                                                        tr.rowid),
       crossedition,
       BEFORE_STATEMENT,
       before_row,
       after_row,
       after_statement,
       instead_of_row,
       fire_once,
       apply_server_only
  from dba_triggers@dblink db,
       sys.trigger$@dblink tr,
       dba_users@dblink us,
       dba_objects@dblink obj
 where us.username = db.owner
   and tr.obj# = obj.OBJECT_ID
   and obj.OBJECT_NAME = db.trigger_name
   and db.owner = obj.owner

command line blackouts



We've been using blackouts more and more to prevent tickets being raised when we do maintenance work - as we tend to work more from the command line than from the cloud control GUI, we wanted to find an easy way to do this.

The first issue we had was trying to find a list of the targets that are actually monitored by the agent - this is easy enough in the GUI, but how do you get a list of targets from the command line? Looking at the command help there doesn't seem to be anything listed that does that.

After a lot of digging I finally found it (oh, and make sure for all of this you use emctl from the agent home and not the db home, otherwise you end up running database control commands).

So to get a list of targets you run this:

emctl config agent listtargets

This produces output similar to below

Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
[server, host]
[agent12c2_2_server, oracle_home]
[server:3872, oracle_emd]
[DB4, oracle_database]
[OraDb11g_home112031_1_server, oracle_home]
[DB3, oracle_database]
[LISTENER_DB1_1521_server, oracle_listener]
[DB2, oracle_database]
[OraDb11g_home11204_22_server, oracle_home]
[OraHome18_23_server, oracle_home]
[DB1, oracle_database]
[agent12c1_24_server, oracle_home]

So now we have all the info we need to start a blackout just for one of those targets - as an example, if we want to black out DB1 for 30 days we say

emctl start blackout test DB1:oracle_database -d 30

which then says

Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
Blackout test added successfully
EMD reload completed successfully

So there you have it

Stopping is then just the reverse command

emctl stop blackout test

which gives

Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
Blackout test stopped successfully
EMD reload completed successfully

easy!

When DNS doesn't DNS.....



The saying you learn something new every day is very true in my experience, some days it's small things and some days it's big things.

Today was a day for a very minor thing about DNS - but quite a surprise.

Now I'm sure you all know that when you want to resolve a name to an IP (or vice versa), generally you'd ask the DNS server to resolve that for you (there are other options but I think almost everyone would be using DNS).

To resolve the name you talk to whatever DNS servers are configured (from resolv.conf on unix, or however it does it on windows) - the request is sent over udp port 53 and the DNS server responds with what you asked of it.

So for example I can say

nslookup server-name

This sends a request to whatever DNS server is configured on udp port 53 and I get a reply of something like

10.10.10.10 (along with some other info)

Today I discovered something I never knew (and maybe it's a bit of an unusual case): if the reply from DNS is greater than 512 bytes (this is configurable), the reply is truncated and the client is told to retry the query against the DNS server on tcp port 53 (i.e. not udp).

Now in most cases you are unlikely to see a DNS reply > 512 bytes (in our case it was a round-robin alias for all the AD servers, so it was quite a long list), and even if the reply was really long it normally wouldn't matter.

However....

In our case udp port 53 was open in the firewall but tcp port 53 was not - so we couldn't resolve the name!

Easy enough to fix but quite a surprise.

Helpfully you do get a hint this is going on - the first reply from the server appears like this:

 nslookup alias-name
;; Truncated, retrying in TCP mode.

Linking one apex report to values in another



I've been trying (for a while) to get some (what I thought was simple) Apex functionality to work - what I wanted was a classic report at the top of the page with a second classic report just below it. The second report's data would be based on a value clicked in the top section. So a screen that looked something like this:

If I click on the 'click me' link, the data displayed in the bottom section returns only rows relating to the id column I clicked next to. So if I click on the row with an id of 1, the report at the bottom shows me rows relating to that id.

The query is essentially then (in the top section)

select  DBID
,null As link
 from SUMM


Followed by (in the detail section)

select * from det where dbid= (value picked in the top section).

Now that sounds like a pretty simple requirement, right? Well, it is pretty easy as long as you know what you are doing (which I didn't....). So mainly for my own benefit I'm writing this down in case I need to do it again...

So the elements we need are as follows:

1) query to populate the top report
2) query to populate the second report (that we somehow pass a variable in to)
3) the ability to click some on-screen value and have that pass a value to point 2

To make this work in Apex you have to use the following components

1) a hidden page item to hold a variable value
2) a javascript function to populate that variable value
3) a dynamic action to refresh the bottom section of the screen when I call the javascript function
4) and the queries of course, one simple and the other that somehow has a link to this variable value

So how do we then build this?

For point 1 (the easy bit) we just add a page item with the following definition





So that's really nothing special at all (and all we use it for is to hold a variable value).

For point 2 we just call a very simple javascript function - all it does is accept a single parameter and assign that value to the hidden page item we created in step 1. It is of the following form:

javascript:$s('PN_SELECTED_NODE', #DBID#);

This built-in function $s essentially sets values - so we set PN_SELECTED_NODE (the hidden page item) to a value of #DBID# - the value of the column of the row we are currently 'on'.



For point 3 we create a simple dynamic action that fires when the value of the hidden page item is changed - when it does change, the lower part of the screen is refreshed.









So now we are nearly there - all we need to be able to do is trigger the javascript function; this will populate the hidden item, which in turn causes the dynamic action to fire and refresh the lower part of the page.

Now the bit I didn't mention so far is the slight amendment to the second SQL statement - we just need to add the clause "where column = hidden page item value". Then when the page is submitted, the value of the hidden item is used to restrict the dataset and show only the rows we are interested in. The SQL for this is shown below.
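In essence the detail region's query just ends up referencing the hidden item as a bind variable - a minimal sketch using the table and item names from above:

select *
  from det
 where dbid = :PN_SELECTED_NODE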


All that remains then is to enable the javascript to be clicked/called from the top section of the page (this is the part that gave me the most trouble). In the end it's very easy.

We navigate to the column properties of the second column (called link in my case - which was not meant to be confusing....). Here we define what we want the text to say underneath the hyperlink ('click me' in this case). We set the target to URL, and then in the URL section we paste the javascript function call I mentioned earlier:

javascript:$s('PN_SELECTED_NODE', #DBID#);

So this is saying: when I click on this link, call the function and set the hidden page item to #DBID# - this then sets the other events in motion and the screen magically works.




So now my screen behaves like this

I click on the 'click me' next to dbid 1 and the report at the bottom then shows the detail rows only for dbid 1.

So for my own understanding the steps are as follows

Easy when you know how......

Thanks to Scott Wesley (http://www.grassroots-oracle.com/) who helped me out via the oracle forums with getting this to work

Some SQL Loader basics - datafile to column mappings



I've had cause this week to dig out some old scripts for sqlloader just to demonstrate some of the basic functionality of how values from a datafile relate to the columns they load into in the database.

Below are 2 simple test cases that demonstrate 2 things:

1) How column order in the datafile does not have to match the column order in the database
2) How to load the same value from a datafile multiple times

So for the 1st question let's set up a simple table with two columns - col1, col2 in that order

SQL> create table demo (col1 number, col2 date);

Table created.

My datafile has the data I need but it's the other way round (col2, col1) - so how do I deal with that?

# cat loader.dat
10-JAN-2014,1
11-JAN-2014,2

Well it's very easy - the way sqlldr works here is that you just give it a list of columns, and the order of these columns should match the order of the data in the datafile - not the order of the columns in the table. Below you are essentially saying: load the first csv value into col2 and the second csv value into col1 (and so on if there were more columns).

# cat loader.ctl
  LOAD DATA
   INFILE loader.dat
   INTO TABLE demo
   FIELDS TERMINATED BY ","
   (col2,
    col1)

So lets load it in to prove it

# sqlldr / control=loader.ctl

SQL*Loader: Release 11.2.0.3.0 - Production on Tue Dec 9 12:35:28 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Commit point reached - logical record count 2

It looks fine - otherwise it would have errored with datatype mismatch or something like that


And then select the data back - and you can see it's fine

SQL> select * from demo;

      COL1 COL2
---------- ---------------
         1 10-JAN-14
         2 11-JAN-14

Simple eh - so let's deal with point 2).

So in my artificial demo I have a 4-column table - see below - and a datafile with only one 'column'.
 
SQL> create table demo (col1 number,col2 number,col3 number,col4 number);

Table created.

cat loader.dat
1
2
3
4

So how do I deal with that? I want to do stuff with 'column' 1 from the datafile and have it appear in all 4 of the table columns.

Well here it is - a control file to do just that. The trick here is the use of the :col1 value, which is the current value of col1 as read in from the datafile - we can do with this what we like. (Note I had to add TRAILING NULLCOLS as values were missing from the datafile and oracle was not happy with that.)

cat loader.ctl
  LOAD DATA
   INFILE loader.dat
   INTO TABLE demo
   FIELDS TERMINATED BY "," TRAILING NULLCOLS
   (col1,
    col2 ":col1",
    col3 ":col1*99",
    col4 ":col1 /2")

So lets load the datafile in

sqlldr / control=loader.ctl

SQL*Loader: Release 11.2.0.3.0 - Production on Wed Dec 10 09:07:30 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Commit point reached - logical record count 4

 And check what it looks like

SQL> select * from demo;

      COL1       COL2       COL3       COL4
---------- ---------- ---------- ----------
         1          1         99         .5
         2          2        198          1
         3          3        297        1.5
         4          4        396          2

Job done!

So sqlldr stuff is pretty easy (and quite powerful), however it has largely been superseded by external tables now. These offer even more flexibility and performance than sqlldr and I would encourage you to use them instead.
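For comparison, a rough external table equivalent of the first demo might look something like this (a sketch only - the DATA_DIR directory object and the table name are assumptions, not part of the original test):

-- A sketch: external table over the same loader.dat file
-- (data_dir is a hypothetical directory object pointing at the file's location)
create table demo_ext (
  col2 varchar2(20),
  col1 varchar2(20)
)
organization external (
  type oracle_loader
  default directory data_dir
  access parameters (
    records delimited by newline
    fields terminated by ','
    missing field values are null
  )
  location ('loader.dat')
)
reject limit unlimited;

-- convert the character fields as you read them back
select to_date(col2, 'DD-MON-YYYY') as col2, to_number(col1) as col1
  from demo_ext;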

Oracle talking to SQL Server over a normal database link?



Now with all my (off topic) posting about SQL Server these past few weeks it was inevitable that at some point I would want to create a direct link between Oracle and SQL Server for interfacing.

Now annoyingly this is quite easy from SQL Server to Oracle - it's a little more fiddly the other way round.

I've been meaning to post on this for a while as it's something I've done many times over the years, and I always struggle to find a good note that describes all the steps required to create a direct link.

And to be clear, what I am doing here is making a SQL Server database 'appear' at the end of a normal database link. So I will have the ability to say

select * from table@SQLServer 

directly from within sqlplus - now wouldn't that be nice......

My example here uses what is known as "database gateway for odbc" or "dg4odbc" for short (in Oracle versions prior to 11.1 it was known as "heterogeneous services for odbc" or "hsodbc"). Neither of those flows off the tongue very well. You'll see as I go through the example that a lot of things still refer to "hs" - this is just for historic reasons.

In my case I'm running Oracle 11.2.0.2 (just what I had to hand, no particular reason for that exact version) and I'm on SLES 11 SP2 (though I think the steps shown are reasonably Linux generic).

Before we get started just a quick summary of how the thing hangs together - the flow of processing is as follows:

Database link -> tnsnames -> listener -> dg4odbc -> odbc driver -> SQL Server

so we are essentially creating an odbc connection and then telling oracle how to get to it and use it.

Right, now that's all clear, let's get started.

For me the first thing to do was to get our friendly unix team to install some software for me.

The 3 rpms they installed were (though I think the development one is not required):

# rpm -qa |grep -i odbc
unixODBC-devel-2.2.12-198.17
unixODBC-2.2.12-198.17
freetds-unixodbc-0.91-1


This was done for me as I do not have root rights.

unixODBC is the driver manager software (just think of the ODBC tool on windows where lots of different vendors' drivers are installed and managed).
freetds is a free odbc driver that allows connection to SQL Server (others are available of course).

Once that software is installed, the first thing I want to prove is that I can create an odbc connection outside of oracle.

After installation of unixODBC I have 2 config files to deal with:

/etc/unixODBC/odbc.ini - containing specific database connection details - driver/location etc

and

/etc/unixODBC/odbcinst.ini - which contains the odbc drivers installed on the system (in this case just FreeTDS)

So in the odbcinst.ini I have

[FreeTDS]
Description             = FreeTDS unixODBC Driver
Driver          = /usr/lib64/libtdsodbc.so.0
Setup           = /usr/lib64/libtdsodbc.so.0
UsageCount              = 1

At this point the odbc.ini was empty

Now as I don't have root rights, I got the unix team to give me sudo access to run the odbcinst program, which lets you add and remove drivers/database connections (this you don't have to do - you could just edit the 2 files directly if you have rights).

I preconfigured a template file (/tmp/test.temp) with the details of what I wanted to connect to - this takes the following format

[DEMO]
Driver = FreeTDS
Description = Demo connection
Trace = No
Server = sql server hostname here
Database = sql server db name here
UID = username here
Password=password here
Port = 1435

As you can see this entry is specifically saying I want to use the FreeTDS driver please. To install this I now run

sudo odbcinst -i -s -f /tmp/test.temp

-i for install
-s for data source
and -f for the template file with the definition

Now odbc.ini contains the definition I just installed - I am now in a position to test it - this can be done with the isql tool, which comes as part of unixODBC.

I just pass in the DSN name (DEMO as defined earlier) as well as the username and password (which makes me wonder why they are in the config file at all - anyway...)

oracle@server:> isql DEMO username password
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL>


And we get a Connected! message implying this is some kind of miracle.......

Now let's try and run some SQL

SQL> select product from master.sys.servers;
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| product                                                                                                                                                                                                                                                        |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| SQL Server                                                                                                                                                                                                                                                     |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
SQLRowCount returns 1
1 rows fetched

And all that seems fine - we have a working odbc driver in place and we can get to our desired destination. Now comes the tricky part - wrapping the Oracle layer on top - this was not easy to get to the bottom of (even though I've done this many times before on other platforms....).

The first thing I did was create a new listener - this isn't 100% essential (it could all be in one) but it's nice to keep it away from the 'normal' stuff.

So first up, the new listener:

/oracle/11.2.0.2.0.DB/network/admin]# cat listener.ora
DEMO =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = server)(PORT = 1545))
    )
  )

SID_LIST_DEMO =
  (SID_LIST =
    (SID_DESC=
      (SID_NAME=DEMO)
      (ORACLE_HOME=/oracle/11.2.0.2.0.DB)
      (PROGRAM=hsodbc)
    )
 )

Nothing really special there - just the program part may look odd to you

The next step is to create something that maps to the SID_NAME of DEMO listed in the listener.ora - what will happen when connections are sent to the listener is that it will call the hsodbc program and try and connect to something which has a config called DEMO.

This DEMO config sits in $ORACLE_HOME/hs/admin (probably a directory you never noticed before...)

The content of this file is as follows:

/oracle/11.2.0.2.0.DB/hs/admin]# cat initDEMO.ora
HS_FDS_CONNECT_INFO = DEMO
HS_FDS_TRACE_LEVEL = 1
HS_FDS_TRACE_FILE_NAME = /tmp/hstrace.log
HS_FDS_SHAREABLE_NAME = /usr/lib64/libtdsodbc.so.0

The first line is the key mapping line here we are saying the listener has routed something with a SID_NAME of DEMO to this file - this will now go off and try and find a HS_FDS_CONNECT_INFO (the DSN basically) - also called DEMO.

We then need a tns entry for our client to be able to find out where this SQL Server DB is - note the content refers to the local server on the new listener port we just created (nothing remote here) and we have to include the special HS=OK value to tell oracle there is some crazy odbc stuff going on.


DEMO =
    ( DESCRIPTION =
        ( ADDRESS =
            ( PROTOCOL = TCP )
            ( HOST = servername)
            ( PORT = 1545 )
        )
        ( CONNECT_DATA =
            ( SID = DEMO )
        )
        ( HS = OK )
    )

If we try and tnsping this address

tnsping demo

TNS Ping Utility for Linux: Version 11.2.0.2.0 - Production on 10-DEC-2014 19:42:19

Copyright (c) 1997, 2010, Oracle.  All rights reserved.

Used parameter files:
/oracle/11.2.0.2.0.DB/network/admin/sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact ( DESCRIPTION = ( ADDRESS = ( PROTOCOL = TCP) ( HOST = server) ( PORT = 1545)) ( CONNECT_DATA = ( SID = DEMO)) ( HS = OK))
TNS-12541: TNS:no listener

We get a failure as the listener is not started - so let's start it

lsnrctl start demo

LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 10-DEC-2014 19:42:42

Copyright (c) 1991, 2010, Oracle.  All rights reserved.

Starting /oracle/11.2.0.2.0.DB/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 11.2.0.2.0 - Production
System parameter file is /oracle/11.2.0.2.0.DB/network/admin/listener.ora
Log messages written to /oracle/diag/tnslsnr/server/demo/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=server)(PORT=1545)))
TNS-01201: Listener cannot find executable /oracle/11.2.0.2.0.DB/bin/hsodbc for SID DEMO

Listener failed to start. See the error message(s) above...

And now begins the first of many issues in getting this working.... At least this first issue is obvious - it's back to the point I mentioned earlier: the whole thing was renamed, so hsodbc no longer exists. I need to update my listener entry with the new program dg4odbc - which I do below.

DEMO =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = server)(PORT = 1545))
    )
  )

SID_LIST_DEMO =
  (SID_LIST =
    (SID_DESC=
      (SID_NAME=DEMO)
      (ORACLE_HOME=/oracle/11.2.0.2.0.DB)
      (PROGRAM=dg4odbc)
    )
 )

Now I try and start it again

lsnrctl start demo

LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 10-DEC-2014 19:45:15

Copyright (c) 1991, 2010, Oracle.  All rights reserved.

Starting /oracle/11.2.0.2.0.DB/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 11.2.0.2.0 - Production
System parameter file is /oracle/11.2.0.2.0.DB/network/admin/listener.ora
Log messages written to /oracle/diag/tnslsnr/server/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=server)(PORT=1545)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=server)(PORT=1545)))
STATUS of the LISTENER
------------------------
Alias                     demo
Version                   TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date                10-DEC-2014 19:45:16
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /oracle/11.2.0.2.0.DB/network/admin/listener.ora
Listener Log File         /oracle/diag/tnslsnr/server/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=server)(PORT=1545)))
Services Summary...
Service "DEMO" has 1 instance(s).
  Instance "DEMO", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

That's better - now let's try the tnsping

 tnsping demo

TNS Ping Utility for Linux: Version 11.2.0.2.0 - Production on 10-DEC-2014 19:45:28

Copyright (c) 1997, 2010, Oracle.  All rights reserved.

Used parameter files:
/oracle/11.2.0.2.0.DB/network/admin/sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact ( DESCRIPTION = ( ADDRESS = ( PROTOCOL = TCP) ( HOST = server) ( PORT = 1545)) ( CONNECT_DATA = ( SID = DEMO)) ( HS = OK))
OK (10 msec)

So now we are looking good.

Let's try sqlplus now against this connection (using some random username/password)

sqlplus a/b@demo

SQL*Plus: Release 11.2.0.2.0 Production on Wed Dec 10 19:46:08 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

ERROR:
ORA-28547: connection to server failed, probable Oracle Net admin error

That's not good...a check in the log directory shows this.....

$ORACLE_HOME/hs/log

Oracle Corporation --- WEDNESDAY DEC 10 2014 19:46:08.804


Heterogeneous Agent Release
11.2.0.2.0


HS Agent diagnosed error on initial communication,
   probable cause is an error in network administration
   Network error 2:  NCR-00002: NCR: Invalid usage
HS Gateway:  NULL connection context at exit

But the listener seems happy enough.....

10-DEC-2014 19:48:24 * (CONNECT_DATA=(SID=DEMO)(CID=(PROGRAM=sqlplus@server)(HOST=server)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=51495)) * establish * DEMO * 0

So what could be wrong? A bit of digging (hoping really, as there was not a lot to go on) revealed that maybe I had to tell it exactly where the odbc.ini file was - so I did that in the initDEMO.ora file by adding a line at the end.

HS_FDS_CONNECT_INFO = DEMO
HS_FDS_TRACE_LEVEL = Debug
HS_FDS_TRACE_FILE_NAME = /tmp/hstrace.log
HS_FDS_SHAREABLE_NAME = /usr/lib64/libtdsodbc.so.0
set ODBCINI=/etc/unixODBC/odbc.ini

I also realized that I had to add some environment specific LD_LIBRARY_PATH settings so that the listener would be able to find all the libraries it needed - so I added an extra line to the listener.ora as shown below

DEMO =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = server)(PORT = 1545))
    )
  )

SID_LIST_DEMO =
  (SID_LIST =
    (SID_DESC=
      (SID_NAME=DEMO)
      (ORACLE_HOME=/oracle/11.2.0.2.0.DB)
      (PROGRAM=dg4odbc)
      (ENVS=LD_LIBRARY_PATH=/usr/lib64/unixODBC:$ORACLE_HOME/lib)
    )
 )

To check these are picked up ok after a restart we can do some clever stuff in lsnrctl to reveal it - see demo below

LSNRCTL> set displaymode verbose
Service display mode is VERBOSE
LSNRCTL> services
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=server)(PORT=1545)))
Services Summary...
Service "DEMO" has 1 instance(s).
  Instance "DEMO", status UNKNOWN, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:1 refused:0
         LOCAL SERVER
         (ADDRESS=(PROTOCOL=beq)(PROGRAM=/oracle/11.2.0.2.0.DB/bin/dg4odbc)(ENVS='LD_LIBRARY_PATH=/usr/lib64/unixODBC:$ORACLE_HOME/lib,ORACLE_HOME=/oracle/11.2.0.2.0.DB,ORACLE_SID=DEMO')(ARGV0=dg4odbcDEMO)(ARGS='(LOCAL=NO)'))
The command completed successfully


So now, feeling confident, let's create a database link and give it a try..... (note I used the username and password in double quotes to force the specific case to be kept)

create database link demo connect to "username" identified by
 "password" using 'DEMO';

Database link created.

Now lets try and select from it

select product from "master.sys.databases"@demo;
select product from "master.sys.databases"@demo
                                           *
ERROR at line 1:
ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
ORA-02063: preceding line from DEMO

And a suitable disaster - let's check the logs again - and at least we get something better to go on - progress at least

from hs/log directory

Entered hgolofns at 2014/12/10-20:12:04
 hoaerr:28500
Exiting hgolofns at 2014/12/10-20:12:04
Failed to load ODBC library symbol: /usr/lib64/libtdsodbc.so.0(SQLDescribeParam)

A bit of digging again revealed the issue - I had referenced the freetds library in my config, and that was wrong - I needed the generic unixODBC library - so I updated that in the initDEMO.ora file

HS_FDS_CONNECT_INFO = DEMO
HS_FDS_TRACE_LEVEL = Debug
HS_FDS_TRACE_FILE_NAME = /tmp/hstrace.log
HS_FDS_SHAREABLE_NAME = /usr/lib64/libodbc.so
set ODBCINI=/etc/unixODBC/odbc.ini


I restarted the listener just in case and tried again

select product from "master.sys.databases"@demo;
select product from "master.sys.databases"@demo
                                           *
ERROR at line 1:
ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
[

Still not working but the trace file changed


Exiting hgopoer, rc=0 at 2014/12/10-20:17:04
hgocont, line 2754: calling SqlDriverConnect got sqlstate I
Exiting hgocont, rc=28500 at 2014/12/10-20:17:04 with error ptr FILE:hgocont.c LINE:2774 FUNCTION:hgocont() ID:Something other than invalid authorization
Exiting hgolgon, rc=28500 at 2014/12/10-20:17:04 with error ptr FILE:hgolgon.c LINE:790 FUNCTION:hgolgon() ID:Calling hgocont

Another quick search revealed that this setting may need to be added to the odbc.ini and odbcinst.ini for all freetds connections:

TDS_Version             = 8.0

So I added the line into the odbc.ini and odbcinst.ini (well, I had to use odbcinst to do it - you might be able to just vi the files) - as an example here is the odbcinst.ini config now

 cat odbcinst.ini

[FreeTDS]
Description             = FreeTDS unixODBC Driver
Driver          = /usr/lib64/libtdsodbc.so.0
Setup           = /usr/lib64/libtdsodbc.so.0
UsageCount              = 3
TDS_Version             = 8.0

I then tried again... this time getting a different error - ORA-28513. Looking at traces and googling then revealed a multibyte characterset problem - this could be resolved by setting yet another config line in the initDEMO.ora config file to fix the LANGUAGE/CHARSET to be used - see HS_LANGUAGE below

cat initDEMO.ora
HS_FDS_CONNECT_INFO = DEMO
HS_FDS_TRACE_LEVEL = Debug
HS_FDS_TRACE_FILE_NAME = /tmp/hstrace.log
HS_FDS_SHAREABLE_NAME = /usr/lib64/libodbc.so
HS_LANGUAGE=AMERICAN_AMERICA.WE8ISO8859P1
set ODBCINI=/etc/unixODBC/odbc.ini

And we try again........

select product from "master.sys.databases"@demo;
select product from "master.sys.databases"@demo
                                           *
ERROR at line 1:
ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
[unixODBC][FreeTDS][SQL Server]Login failed for user 'username'.
{42000,NativeErr = 18456}[unixODBC][FreeTDS][SQL Server]Unable to connect to
data source {08001}
ORA-02063: preceding 2 lines from DEMO

And this is the first point where it looks like a credential issue - let's see what SQL Server says


Message
Login failed for user 'username'. Reason: Password did not match that for the login provided. [CLIENT: server]

And sure enough I discovered a typo in my password - let's fix that by dropping and recreating the db link with the correct password.

And now we try again (again)

select product from "master.sys.databases"@demo;
select product from "master.sys.databases"@demo
                    *
ERROR at line 1:
ORA-00942: table or view does not exist
[FreeTDS][SQL Server]Invalid object name 'master.sys.databases'.
{42S02,NativeErr = 208}[FreeTDS][SQL Server]Statement(s) could not be prepared.
{42000,NativeErr = 8180}[FreeTDS][SQL Server]Invalid object name
'master.sys.databases'. {42S02,NativeErr = 208}[FreeTDS][SQL
Server]Statement(s) could not be prepared. {42000,NativeErr = 8180}
ORA-02063: preceding 2 lines from DEMO

Now that's looking much better - it connected, it just doesn't like my SQL - let's try something a bit simpler with none of that dot business going on

select * from table_i_know_exists@demo

and it works (for a few rows)

until.....

ERROR:
ORA-28528: Heterogeneous Services datatype conversion error
ORA-02063: preceding line from DEMO

Aaargh

Another investigation shows me it's some 32/64-bit conversion error going on somewhere in the chain - again there is an easy fix.

Another new config line in initDEMO.ora (you can see a pattern developing here, right......). This time it's HS_FDS_SQLLEN_INTERPRETATION=32


HS_FDS_CONNECT_INFO = DEMO
HS_FDS_TRACE_LEVEL = Debug
HS_FDS_TRACE_FILE_NAME = /tmp/hstrace.log
HS_FDS_SHAREABLE_NAME = /usr/lib64/libodbc.so
HS_LANGUAGE=AMERICAN_AMERICA.WE8ISO8859P1
HS_FDS_SQLLEN_INTERPRETATION=32
set ODBCINI=/etc/unixODBC/odbc.ini

Now with that all added it works a treat! I just need to switch off the debug stuff so it runs much faster!

So there you have it - how to link oracle directly to SQL Server, for free! (Well, as long as you don't need to buy ODBC drivers it's free, as this functionality is part of the base product from Oracle.) There are further enhanced versions of these kinds of connections (referred to as gateways) where a specific extra piece of software has to be installed - this is all discussed in this excellent blog note from oracle here:


Within that it refers to a metalink (I still refuse to call it MOS) note which explicitly states the facts from a licence point of view:


So what are you waiting for - give it a try.........

xmldb - gone but not forgotten?




During a phase of patching this week to 11.2.0.4 I came across an unusual error as part of the preparation stage. Running the utlu112i script as normal produced the usual output - see below.

SYS@DB>@/oracle/11.2.0.4.3.DB/rdbms/admin/utlu112i
Oracle Database 11.2 Pre-Upgrade Information Tool 12-11-2014 19:05:15
Script Version: 11.2.0.4.0 Build: 007
.
**********************************************************************
Database:
**********************************************************************
--> name:          DB
--> version:       11.2.0.2.0
--> compatible:    11.2.0.0.0
--> blocksize:     8192
--> platform:      Linux IA (32-bit)
--> timezone file: V14
.
**********************************************************************
Tablespaces: [make adjustments in the current environment]
**********************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
.... minimum required size: 3626 MB
--> SYSAUX tablespace is adequate for the upgrade.
.... minimum required size: 752 MB
--> UNDOTBS1 tablespace is adequate for the upgrade.
.... minimum required size: 400 MB
--> TEMP tablespace is adequate for the upgrade.
.... minimum required size: 60 MB
.
**********************************************************************
Flashback: ON
**********************************************************************
FlashbackInfo:
--> name:          /oracle/DB/recovery_area
--> limit:         4500 MB
--> used:          3505 MB
--> size:          4500 MB
--> reclaim:       3272.2451171875 MB
--> files:         48
WARNING: --> Flashback Recovery Area Set.  Please ensure adequate disk space in recovery areas before performing an upgrade.
.
**********************************************************************
Update Parameters: [Update Oracle Database 11.2 init.ora or spfile]
Note: Pre-upgrade tool was run on a lower version 32-bit database.
**********************************************************************
--> If Target Oracle is 32-Bit, refer here for Update Parameters:
WARNING: --> "sga_target" needs to be increased to at least 412 MB
.

--> If Target Oracle is 64-Bit, refer here for Update Parameters:
WARNING: --> "sga_target" needs to be increased to at least 596 MB
.
**********************************************************************
Renamed Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
-- No renamed parameters found. No changes are required.
.
**********************************************************************
Obsolete/Deprecated Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
-- No obsolete parameters found. No changes are required
.

**********************************************************************
Components: [The following database components will be upgraded or installed]
**********************************************************************
--> Oracle Catalog Views         [upgrade]  VALID
--> Oracle Packages and Types    [upgrade]  VALID
.
**********************************************************************
Miscellaneous Warnings
**********************************************************************
WARNING: --> Database contains INVALID objects prior to upgrade.
.... The list of invalid SYS/SYSTEM objects was written to
.... registry$sys_inv_objs.
.... The list of non-SYS/SYSTEM objects was written to
.... registry$nonsys_inv_objs.
.... Use utluiobj.sql after the upgrade to identify any new invalid
.... objects due to the upgrade.
.... USER SYS has 59 INVALID objects.
WARNING: --> Your recycle bin contains 22 object(s).
.... It is REQUIRED that the recycle bin is empty prior to upgrading
.... your database.  The command:
        PURGE DBA_RECYCLEBIN
.... must be executed immediately prior to executing your upgrade.
.
**********************************************************************
Recommendations
**********************************************************************
Oracle recommends gathering dictionary statistics prior to
upgrading the database.
To gather dictionary statistics execute the following command
while connected as SYSDBA:

    EXECUTE dbms_stats.gather_dictionary_stats;

**********************************************************************

So apart from the normal kind of stuff, the main thing that stands out is the invalid objects in the SYS schema - I'm ignoring that for now as I want to show what else was wrong.

The next thing i tried was a purge of the recycle bin - this produced the following error:

SYS@DB>purge dba_recyclebin;
purge dba_recyclebin
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-04098: trigger 'SYS.XDB_PI_TRIG' is invalid and failed re-validation

So - that doesn't look good - the trigger is obviously a 'database wide' one and is firing on the purging of the old objects - let's try and compile it

SYS@DB>alter trigger SYS.XDB_PI_TRIG compile;

Warning: Trigger altered with compilation errors.

Kind of expected that - so why does it error?

SYS@DB>show errors
Errors for TRIGGER SYS.XDB_PI_TRIG:
3/5      PL/SQL: Statement ignored
3/13     PLS-00905: object SYS.IS_VPD_ENABLED is invalid

OK - so what's that?

SYS@DB>select * from dba_objects where object_name='IS_VPD_ENABLED';
SYS
IS_VPD_ENABLED
                                   117668                FUNCTION            21-OCT-14
11-DEC-14       2014-12-11:19:05:58 INVALID N N N          1

Does that compile?

SYS@DB>alter function IS_VPD_ENABLED compile;

Warning: Function altered with compilation errors.

So what's wrong?

SYS@DB>show errors
Errors for FUNCTION IS_VPD_ENABLED:
0/0      PL/SQL: Compilation unit analysis terminated
4/43     PLS-00421: circular synonym 'PUBLIC.DBMS_XDBZ'

Hmm, OK - let's just see what state the DBA_REGISTRY is in - specifically I want to see if there is a mention of XMLDB

SYS@DB>select * from dba_registry;
CATALOG
Oracle Database Catalog Views
11.2.0.2.0                     VALID                             05-MAR-2012 20:28:55
SERVER                         SYS                            SYS
DBMS_REGISTRY_SYS.VALIDATE_CATALOG



CATPROC
Oracle Database Packages and Types
11.2.0.2.0                     VALID                             05-MAR-2012 20:28:55
SERVER                         SYS                            SYS
DBMS_REGISTRY_SYS.VALIDATE_CATPROC

APPQOSSYS,DBSNMP,DIP,ORACLE_OCM,OUTLN,SYSTEM

Nope - no mention of it - let's double check what that synonym is pointing at

SYS@DB>select * from dba_synonyms where synonym_name='DBMS_XDBZ';
PUBLIC                         DBMS_XDBZ                      XDB
DBMS_XDBZ

Sure enough it's not there - it's an orphaned synonym

SYS@DB>desc XDB.DBMS_XDBZ
ERROR:
ORA-04043: object XDB.DBMS_XDBZ does not exist

There is nothing at all owned by XDB, in fact the user is absent.

SYS@DB>select * from dba_objects where owner='XDB';

Let's have a look at the trigger source

SYS@DB>select * from dba_triggers where trigger_name='XDB_PI_TRIG';

SYS                            XDB_PI_TRIG                    BEFORE EVENT
DROP OR TRUNCATE
SYS                            DATABASE

REFERENCING NEW AS NEW OLD AS OLD

ENABLED
sys.xdb_pi_trig
BEFORE DROP OR TRUNCATE on DATABASE
PL/SQL
BEGIN
  BEGIN
    IF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_CONTENTS)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_truncate(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    ELSIF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_RESMETADATA)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_dropmetadata(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    END IF;
  EXCEPTION
    WHEN OTHERS THEN
      null;
  END;
END;

So let's drop the trigger, as it is purely for xmldb, and try the purge again - and sure enough it works

SYS@DB>drop trigger sys.xdb_pi_trig;
SYS@DB>purge dba_recyclebin;

And remove the invalid synonym

SYS@DB>drop public synonym dbms_xdbz;

I also removed an additional database-wide trigger that had a reference to XDB. There were still a lot of SYS objects in an invalid state (50+) but nothing that would cause an issue for the patch, so I'll deal with those later.
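As a rough sketch of how to hunt down any other leftovers of this kind (a suggestion rather than something from the original clean-up), the dictionary can be queried for public synonyms whose target object no longer exists:

-- A sketch: find public synonyms pointing at objects that are no longer there
-- (e.g. leftovers referencing the removed XDB schema)
select s.synonym_name, s.table_owner, s.table_name
  from dba_synonyms s
 where s.owner = 'PUBLIC'
   and not exists (select 1
                     from dba_objects o
                    where o.owner = s.table_owner
                      and o.object_name = s.table_name);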

So basically what had happened was that XDB had been installed at some point (no idea when) and then removed (though not completely cleanly), and this created the issue. A manual tidy up was easy enough - though it's never nice to have to mess around in SYS and the official line would always be to check with oracle support when doing this kind of thing.

I thought that if XDB was ever installed it always remained in the REGISTRY in some 'removed' type status - maybe that's a newer thing or some intermediate status - but in this case it was completely absent.

Anyway - goes to show what happens when it's partially removed.

I would have said don't install it unless you have to (XDB that is) - but since 12c you don't get a choice in the matter..... every database runs XDB as it's required for other components now.....



More SQL Loader basics - using STR



Another random SQL Loader demo - here I'm assuming that the input file is just one long string of numbers separated by commas - I just want to load this into a single column table so I can do 'stuff' with it.

So I create a single column table

SQL> create table demo(col1 number);

Table created.

Here is my input file

# cat input.csv
1,2,3,4,5,66,77,888,99991,2,3,4,5,66,77,888,99991,2,3,4,5,66,77,888,99991,2,3,4,5,66,77,888,etc etc

And here is my controlfile

LOAD DATA
INFILE "input.csv""STR ','"
INTO TABLE DEMO
TRUNCATE
FIELDS TERMINATED by ','
TRAILING NULLCOLS
(COL1)

The 'trick' is the use of the STR clause to tell sqlldr that a comma is my end of record marker (as well as being the end of field marker).

Now run that in

sqlldr demo/demo control=loader.ctl

SQL*Loader: Release 12.1.0.2.0 - Production on Wed Dec 17 14:00:12 2014

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Path used:      Conventional
Commit point reached - logical record count 64
Commit point reached - logical record count 65

Table DEMO:
  65 Rows successfully loaded.

Check the log file:
  loader.log
for more information about the load.

And there we have it

SQL> select * from demo.demo;

      COL1
----------
         1
         2
         3
         4
         5
        66
        77
       888
     99991
         2
   etc

Kind of pointless but the demo of the STR feature is useful


When you drop a schema by mistake......



Now everyone makes mistakes, some are easier to fix than others of course.

This week I was overcome with an urge to tidy things up in our cloud control repository, as we've had a few accounts where people have moved on or the functionality is not used any more.

So I merrily clicked through the screens dropping users; a couple of minutes after I did that I suddenly got that feeling (the one where your brain suddenly wakes up and you realize what you did - otherwise known as the 'oh shit' moment).

So in my case I had a schema that had been dropped from the repository database - I didn't need the account any more in the cloud control tool, but I did need the content of what was in it for other things.

So how do I fix this....

I do not want to restore the database back to 15 minutes ago as it will mess up cloud control. I could do some clever stuff with flashback if I took an outage on cloud control for a few minutes - however flashback isn't on anyway, so I can't suspend cloud control, flashback, extract the user and then flash forward again (that's something to note for later - always switch on flashback!).
And I have no separate 'logical' backups of the schema...
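For reference, the flashback route that wasn't available here would have looked roughly like this - a sketch only, assuming flashback database had already been enabled and a short outage on cloud control was acceptable:

-- Sketch: rewind, pull the schema out, then roll forward again
shutdown immediate
startup mount
flashback database to timestamp systimestamp - interval '15' minute;
alter database open read only;    -- export the dropped schema with datapump at this point
shutdown immediate
startup mount
recover database;                 -- roll forward to the present
alter database open;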

So what options do I have?

I could maybe do something with TSPITR - however this schema was in amongst other stuff (again something to note for later...) - I could possibly interrupt the process at a specific point and complete the last part manually - but it's too risky an approach.

So what I decided to do was restore the database (as of 15 mins ago) to a separate server; once it is restored I can extract the schema from there with datapump and load it back into the original database.

So here is how I did that.

Now the first issue I have is that all of my backups are on tape (we have TSM configured to write directly to a VTL) so I need to enable the 'donor' server to pretend to be the actual server from a tape access point of view.

Fortunately this is very simple - I just need to copy the /tsm/ORACLE_SID directory completely from the 'live' server to the 'donor' one - this may be different for your environment - it depends how you set up TSM (or whatever other software you are using).

Once that is set up I need to create an instance of oracle (i.e. just the memory/processes up and running - no control files or any other components). For this I copy the init file from live onto the donor server, make any small amendments needed in the config and then start up the database in nomount mode (which just creates the instance).

(I just realized I glossed over the fact that you of course need the same oracle software installed on the donor server - I already did - but it's easy enough to use the clone.pl utility to do that for you.)

So assuming now you have an instance up and running we now use the rman duplicate feature to do the copy for us

so we start up rman like this

rman rcvcat=rmanschema/rmanschemapassword@rmancatalog auxiliary=/

where auxiliary is our instance in nomount mode

Once connected we run

run {
  set dbid 12345678;
  set until time "to_date('19-DEC-2014 14:45:00','dd-mon-yyyy hh24:mi:ss')";
  allocate auxiliary channel ch1 type 'SBT_TAPE' parms 'ENV=(TDPO_OPTFILE=/tsm/DBNAME/conf/tdpo.opt)';
  duplicate database 'DBNAME' to DBNAME;
}

A couple of points on this: the dbid is essential - it's the only way we can tell which database we want to duplicate from, as we have no info from the controlfile. As for the DBNAME at the end - the first one has to match 'live', the second one does not - but the second name has to match the instance name of the auxiliary we just created - in my case I kept them the same.

So I run it and get this

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 12/19/2014 15:08:15
RMAN-05501: aborting duplication of target database
RMAN-06457: UNTIL SCN (9060086030062) is ahead of last SCN in archived logs (906

which is a new one on me - however the fix is quite clear - the archive logs I need are not backed up yet (as the time I chose for the duplicate is only 15 minutes ago)

So I switch logfile, do a backup and then try again - and get this

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 12/19/2014 15:12:22
RMAN-05501: aborting duplication of target database
RMAN-06617: UNTIL TIME (19-DEC-14) is ahead of last NEXT TIME in archived logs (19-DEC-14)

Different error but pretty much the same problem - so this time I archive the current logfile, do some more switches just for good luck, then do another backup on live and try again......

This time it's more successful and it spews out loads of log info (too much to paste here) - until I hit this:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 12/19/2014 16:40:07
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
ORA-19870: error while restoring backup piece redolog_DB_866518690_20141217_167032_2.rman
ORA-19809: limit exceeded for recovery files
ORA-19804: cannot reclaim 356509184 bytes disk space from 4294967296 limit

So I've run out of space in the recovery area - so I make it massively bigger and try again....

And this time it runs through - ending with

Executing: alter database force logging

contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script

database opened
Finished Duplicate Db at 20-DEC-14
released channel: ch1

So all good - however it took 12 hours - most of which was jumping around through tapes to bring the archives back (3 days worth) - this is something to be very aware of with large environments - your archive log backups are likely spread over lots of tapes and a lot of time is wasted jumping around and waiting for busy tapes to become free. A definite case for more regular incrementals or some better way of keeping archivelogs co-located.

Anyway, I now have my copy db up and running and I can simply do my datapump export

expdp user/pass directory=tmp schemas=oneideleted

That runs through fine - I then copy the dumpfile over to the 'live' server and load it in

impdp user/pass directory=tmp

And the issue is fixed - though it took a little longer than i expected.

A good reminder to triple check before you tidy up, but some useful things were discovered that need to be addressed in the way the environment is set up - every cloud eh....?


Apex, XMLDB and connect by.....



I've been doing a lot of work recently with xmldb and have been frustrated by the lack of a GUI that does exactly what I want - i.e. give me a treeview of the structure and allow me to easily see the properties of each document (acl, created date etc.) as well as the actual content of those documents (in our case just xml).

Cloud control can do all of the above but it requires a lot of clicks and it's not all located just in one simple to use screen.

SQL Developer seems to allow you to browse xml schemas but not the actual xmldb repo itself

There may be other tools that can do this - but I couldn't find one. So I decided to try and do my own in Apex........

Now I'm by no means an Apex expert but I thought I knew enough to produce something basic - here is the story so far.....

The first bit to tackle was to get a tree view of the xmldb folders and files - Apex has a nice built in tree view display page but I needed to build an appropriate query with connect by to be able to use that.

Apex helpfully gives you a sample code block of the format it wants - which is shown below

select case when connect_by_isleaf = 1 then 0 when level = 1 then 1 else -1 end as status,
       level,
       ENAME as title,
       NULL  as icon,
       EMPNO as value,
       ENAME as tooltip,
       NULL  as link
from EMP
start with MGR is null
connect by prior EMPNO = MGR
order siblings by ENAME

Some of those attributes are used internally by the tree viewer code; others are used to define icons, tooltips and links.

So I just had to build something similar for the xmldb folders - I couldn't find anyone who had done this when googling, so I had to write my own - this was a little fiddly and uses a couple of tricks - but the resulting end code is shown below

with xmldata as
 (select '/' as any_path, null as parentpath
    from dual
  union all
  select any_path,
         nvl(substr(any_path, 1, (instr(any_path, '/', -1) - 1)), '/') as parentpath
    from resource_view)
SELECT case
         when connect_by_isleaf = 1 then
          0
         when level = 1 then
          1
         else
          -1
       end as status,
       level,
       any_path as title,
       null as icon,
       parentpath as value,
       null as tooltip,
       'javascript:$s(''HIDDEN_XMLRES'',''' || any_path || ''')' As link
  FROM xmldata
 start with any_path = '/'
CONNECT BY PRIOR any_path = parentpath;

Some points to help explain this:

1) The with clause made it easier to code
2) The initial select in the with clause - the first part of the union - is necessary as this data does not exist in xmldb - the initial 'root' entry for connect by.
3) I use substr and instr to manipulate the path to get me the parent path - there's a quick standalone check of this logic just after this list
4) The column with an alias link will be used later as part of the 'app' - clicking the link it creates will populate a hidden page item HIDDEN_XMLRES with the value of the currently clicked item.
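Here's that quick check of the parent path logic on its own (the path used is just a made-up example):

-- derive the parent folder from a full resource path
select nvl(substr(any_path, 1, instr(any_path, '/', -1) - 1), '/') as parentpath
  from (select '/public/mydir/myfile.xml' as any_path from dual);
-- gives /public/mydir, and a top level entry like '/public' comes back as '/' thanks to the nvl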

Running this in plsql developer shows the following output


So now i have the treeview query i need to make an Apex page to display it.

Here is a quick whizz through the screens to just create an Apex shell - then i'll come back and explain the tree view bit.







So that's the shell created now lets add the tree view magic.....most of this speaks for itself so i won't add any extended commentary. I make use of the EMP table just to allow me to quickly go through the screens and create a dummy tree which i later change - if you created the schema through the 'create workspace' wizard then this table will exist already - if not you can just create the EMP table based on the normal definition which can be found from many different sources.












So now we have a dummy tree created we just have to update the SQL to what we actually want it to run - which we do as follows:



Now run the page





And there we have it a working tree! Performance is OK (a few seconds to load the screen) - but very dependent i guess on the size and depth of the tree being built.....

Now i just need to build the other bits to display what i want it to - so far this is proving very difficult - the onchange dynamic actions are not working and i think it's because the treeview runs some code that wipes them out......

Will post some more details as the screens develop - so far so good though - and surprisingly easy.

datapump and unified audit



So the new year is upon us and I've not written anything for a couple of weeks - thought it was about time i blogged something.
I've been looking further into some of the new 12c stuff in preparation for a possible large scale upgrade project this year. I got to the audit part, which to be honest is one of the least glamorous parts of DBA work - but it has to be done.
I thought i'd take the chance to learn how the new audit stuff works and combine it with the supposed new feature of being able to audit datapump commands. I was thinking that maybe there is a way to track when someone does an export using the compression=all option where the advanced compression option is required.

I did my testing on 12.1.0.2 (but it shouldn't be any different in 12.1.0.1 i think).

The testing i did in a new style PDB/CDB combo rather than a traditional style db - but again there is essentially no difference in how it would work.

I'm doing the testing in a CDB called Rich with a PDB in it called MARKER

First up here is a quick overview of the setup


SQL> sho pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 MARKER                         READ WRITE NO
         4 REMOTEDB                       READ WRITE NO

Now lets switch to the marker pdb (and i do wish they'd not made the command container= - it's more logical for it to be plug=/pluggable= - something like that?) - anyway we switch and create a new user to do the export as

SQL> alter session set container=marker;

Session altered.

SQL> create user testuser identified by testuser;

User created.

SQL> grant dba to testuser;

Grant succeeded.

Now lets create some of this new audit setup - we can do this as the new user i created as i was lazy and granted it DBA.

So we login

[oracle@server-name]:RICH:[~]# sqlplus testuser/testuser@marker

And create a policy to audit all datapump activities

SQL>  create audit policy dp_usage actions component=datapump ALL;

Audit policy created.

Now we set this as an active policy

SQL> audit policy dp_usage;

Audit succeeded.
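As a quick sanity check (not part of the original run) the policy definition and the fact it's now enabled can be seen in the unified audit dictionary views:

-- what the DP_USAGE policy covers
select * from audit_unified_policies where policy_name = 'DP_USAGE';

-- and confirm it has actually been enabled (and for whom)
select * from audit_unified_enabled_policies where policy_name = 'DP_USAGE';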

Now we do an export

[oracle@server-name]:RICH:[~]# expdp testuser/testuser@marker directory=tmp

Export: Release 12.1.0.2.0 - Production on Wed Jan 7 10:18:07 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "testuser"."SYS_EXPORT_SCHEMA_01":  testuser/********@marker directory=tmp
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Master table "testuser"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for testuser.SYS_EXPORT_SCHEMA_01 is:
  /tmp/expdat.dmp
Job "testuser"."SYS_EXPORT_SCHEMA_01" successfully completed at Wed Jan 7 10:18:40 2015 elapsed 0 00:00:32

SO that ran OK - lets see if the audit record got created.

[oracle@server-name]:RICH:[~]# sqlplus testuser/testuser@marker

Lets flush any entries still in memory to the table (as the new method to be more efficient by default logs to memory and asynchronously writes to a table)

SQL> exec sys.dbms_audit_mgmt.flush_unified_audit_trail;
BEGIN sys.dbms_audit_mgmt.flush_unified_audit_trail; END;

*
ERROR at line 1:
ORA-46276: DBMS_AUDIT_MGMT operation on unified audit trail failed
ORA-55906: Secure file log [id: 0 name: ORA$AUDIT_NEXTGEN_LOG] does not exist
ORA-06512: at "SYS.DBMS_AUDIT_MGMT", line 1746
ORA-06512: at line 1

And this is where the stuff that i'd read didn't tally up with reality (and to be honest when i read it it didn't sound right)

Let's reconnect to the CDB and try it there maybe that's the problem?

SQL> exec sys.dbms_audit_mgmt.flush_unified_audit_trail;
BEGIN sys.dbms_audit_mgmt.flush_unified_audit_trail; END;

*
ERROR at line 1:
ORA-46276: DBMS_AUDIT_MGMT operation on unified audit trail failed
ORA-55906: Secure file log [id: 0 name: ORA$AUDIT_NEXTGEN_LOG] does not exist
ORA-06512: at "SYS.DBMS_AUDIT_MGMT", line 1746
ORA-06512: at line 1

So - still fails, lets just see if there is anything in the log

SQL> select count(*) from unified_audit_trail;

  COUNT(*)
----------
         0

No - as i thought. Then i discovered a slightly different option to flush the audit trail - lets try that

SQL> BEGIN
 DBMS_AUDIT_MGMT.FLUSH_UNIFIED_AUDIT_TRAIL(
  CONTAINER  => DBMS_AUDIT_MGMT.CONTAINER_ALL);
END;
/  2    3    4    5
BEGIN
*
ERROR at line 1:
ORA-46273: DBMS_AUDIT_MGMT operation failed in one of the PDB
ORA-06512: at "SYS.DBMS_AUDIT_MGMT", line 1746
ORA-06512: at line 2

But still no...

Hmm - let's go back and look at some of the basics - other stuff i read said it had to actually manually be enabled before you do anything - so lets check the status of the option

SQL> select * from v$option where parameter like 'Uni%'
  2  /

PARAMETER
----------------------------------------------------------------
VALUE                                                                CON_ID
---------------------------------------------------------------- ----------
Unified Auditing
FALSE                                                                     0

Sure enough it's switched off.....

So lets actually make sure this is enabled - to do this we have to recompile oracle with the option on - this isn't actually available in chopt yet - you have to do it the old way - see the chopt output below

[oracle@server-name]:RICH:[/oracle/12.1.0.2/bin]# chopt

usage:

chopt <enable|disable> <option>

options:
                  dm = Oracle Data Mining RDBMS Files
                olap = Oracle OLAP
        partitioning = Oracle Partitioning
                 rat = Oracle Real Application Testing

e.g. chopt enable rat

So lets do it the old style way.....

[oracle@server-name]:RICH:[/oracle/12.1.0.2/bin]# cd $ORACLE_HOME/rdbms/lib

[oracle@server-name]:RICH:[/oracle/12.1.0.2/rdbms/lib]# make -f ins_rdbms.mk uniaud_on ioracle
/usr/bin/ar d /oracle/12.1.0.2/rdbms/lib/libknlopt.a kzanang.o
/usr/bin/ar cr /oracle/12.1.0.2/rdbms/lib/libknlopt.a /oracle/12.1.0.2/rdbms/lib/kzaiang.o
chmod 755 /oracle/12.1.0.2/bin

 - Linking Oracle
rm -f /oracle/12.1.0.2/rdbms/lib/oracle
/oracle/12.1.0.2/bin/orald  -o /oracle/12.1.0.2/rdbms/lib/oracle -m64 -z noexecstack -Wl,--disable-new-dtags -L/oracle/12.1.0.2/rdbms/lib/ -L/oracle/12.1.0.2/lib/ -L/oracle/12.1.0.2/lib/stubs/   -Wl,-E /oracle/12.1.0.2/rdbms/lib/opimai.o /oracle/12.1.0.2/rdbms/lib/ssoraed.o /oracle/12.1.0.2/rdbms/lib/ttcsoi.o -Wl,--whole-archive -lperfsrv12 -Wl,--no-whole-archive /oracle/12.1.0.2/lib/nautab.o /oracle/12.1.0.2/lib/naeet.o /oracle/12.1.0.2/lib/naect.o /oracle/12.1.0.2/lib/naedhs.o /oracle/12.1.0.2/rdbms/lib/config.o  -lserver12 -lodm12 -lcell12 -lnnet12 -lskgxp12 -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lclient12  -lvsn12 -lcommon12 -lgeneric12 -lknlopt `if /usr/bin/ar tv /oracle/12.1.0.2/rdbms/lib/libknlopt.a | grep xsyeolap.o > /dev/null 2>&1 ; then echo "-loraolap12" ; fi` -lskjcx12 -lslax12 -lpls12  -lrt -lplp12 -lserver12 -lclient12  -lvsn12 -lcommon12 -lgeneric12 `if [ -f /oracle/12.1.0.2/lib/libavserver12.a ] ; then echo "-lavserver12" ; else echo "-lavstub12"; fi` `if [ -f /oracle/12.1.0.2/lib/libavclient12.a ] ; then echo "-lavclient12" ; fi` -lknlopt -lslax12 -lpls12  -lrt -lplp12 -ljavavm12 -lserver12  -lwwg  `cat /oracle/12.1.0.2/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnro12 `cat /oracle/12.1.0.2/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnnzst12 -lzt12 -lztkg12 -lmm -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lztkg12 `cat /oracle/12.1.0.2/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnro12 `cat /oracle/12.1.0.2/lib/ldflags`   -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnnzst12 -lzt12 -lztkg12   -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 `if /usr/bin/ar tv /oracle/12.1.0.2/rdbms/lib/libknlopt.a | grep "kxmnsd.o"> /dev/null 2>&1 ; then echo "" ; else echo "-lordsdo12 -lserver12"; fi` -L/oracle/12.1.0.2/ctx/lib/ -lctxc12 -lctx12 -lzx12 -lgx12 -lctx12 -lzx12 -lgx12 -lordimt12 -lclsra12 -ldbcfg12 -lhasgen12 -lskgxn2 -lnnzst12 -lzt12 -lxml12 -locr12 -locrb12 -locrutl12 -lhasgen12 -lskgxn2 -lnnzst12 -lzt12 -lxml12  -lgeneric12 -loraz -llzopro -lorabz2 -lipp_z -lipp_bz2 -lippdcemerged -lippsemerged -lippdcmerged  -lippsmerged -lippcore  -lippcpemerged -lippcpmerged  -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lsnls12 -lunls12  -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lasmclnt12 -lcommon12 -lcore12  -laio -lons    `cat /oracle/12.1.0.2/lib/sysliblist` -Wl,-rpath,/oracle/12.1.0.2/lib -lm    `cat /oracle/12.1.0.2/lib/sysliblist` -ldl -lm   -L/oracle/12.1.0.2/lib
test ! -f /oracle/12.1.0.2/bin/oracle ||\
           mv -f /oracle/12.1.0.2/bin/oracle /oracle/12.1.0.2/bin/oracleO
mv /oracle/12.1.0.2/rdbms/lib/oracle /oracle/12.1.0.2/bin/oracle
chmod 6751 /oracle/12.1.0.2/bin/oracle


So that's recompiled with the option on

[oracle@server-name]:RICH:[/oracle/12.1.0.2/rdbms/lib]# s

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 7 10:56:59 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options


SQL>

And we can see already the banner changed to say it's enabled

Now lets bounce the DB to make sure we really did pick this up (to be honest the db should have been down when i recompiled.......)

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORA-00600: internal error code, arguments: [krsh_fsga_sgaq.ds_not_found], [], [], [], [], [], [], [], [], [], [], []

:-) - i think that's a direct result of the recompile without the db being down - so i'm ignoring it....

So i'll do a quick cycle through to make sure it's all clean now

[oracle@server-name]:RICH:[/oracle/12.1.0.2/rdbms/lib]# s

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 7 10:58:10 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1409286144 bytes
Fixed Size                  3710736 bytes
Variable Size            1342177520 bytes
Database Buffers           50331648 bytes
Redo Buffers               13066240 bytes
Database mounted.
Database opened.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1409286144 bytes
Fixed Size                  3710736 bytes
Variable Size            1342177520 bytes
Database Buffers           50331648 bytes
Redo Buffers               13066240 bytes
Database mounted.
Database opened.
SQL> alter pluggable database all open;

Pluggable database altered.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

So all looks fine

Let's try a flush now

SQL> exec sys.dbms_audit_mgmt.flush_unified_audit_trail;
BEGIN sys.dbms_audit_mgmt.flush_unified_audit_trail; END;

*
ERROR at line 1:
ORA-46276: DBMS_AUDIT_MGMT operation on unified audit trail failed
ORA-55906: Secure file log [id: 0 name: ORA$AUDIT_NEXTGEN_LOG] does not exist
ORA-06512: at "SYS.DBMS_AUDIT_MGMT", line 1746
ORA-06512: at line 1

Hmm not in the script..... - lets try it just for this pdb

SQL> BEGIN
 DBMS_AUDIT_MGMT.FLUSH_UNIFIED_AUDIT_TRAIL(
  CONTAINER  => DBMS_AUDIT_MGMT.CONTAINER_CURRENT);
END;
/  2    3    4    5

PL/SQL procedure successfully completed.

That's better, out of interest lets trying doing all of them

SQL> BEGIN
 DBMS_AUDIT_MGMT.FLUSH_UNIFIED_AUDIT_TRAIL(
  CONTAINER  => DBMS_AUDIT_MGMT.CONTAINER_ALL);
END;
/  2    3    4    5
BEGIN
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database
ORA-06512: at "SYS.DBMS_AUDIT_MGMT", line 1746
ORA-06512: at line 2

So that's not possible - lets try it in the container

SQL> BEGIN
 DBMS_AUDIT_MGMT.FLUSH_UNIFIED_AUDIT_TRAIL(
  CONTAINER  => DBMS_AUDIT_MGMT.CONTAINER_ALL);
END;
/  2    3    4    5
BEGIN
*
ERROR at line 1:
ORA-46273: DBMS_AUDIT_MGMT operation failed in one of the PDB
ORA-06512: at "SYS.DBMS_AUDIT_MGMT", line 1746
ORA-06512: at line 2

Hmm so one of the PDB's is playing up - i'm ignoring this for now.....

Let's see what's in the audit trail now

SQL> select count(*) from unified_audit_trail;

  COUNT(*)
----------
         4

hooray progress - something is being logged
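As an aside - i didn't actually run this as part of the test, so treat it as an assumption based on the DBMS_AUDIT_MGMT package rather than something verified here - there looks to be a property to make the unified trail write records immediately instead of queueing them in memory, which would avoid having to remember the flush:

BEGIN
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_PROPERTY(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
    audit_trail_property       => DBMS_AUDIT_MGMT.AUDIT_TRAIL_WRITE_MODE,      -- assumption: check these constants exist in your 12.1 package spec
    audit_trail_property_value => DBMS_AUDIT_MGMT.AUDIT_TRAIL_IMMEDIATE_WRITE);
END;
/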

Lets do a few exports

[oracle@server-name]:RICH:[/oracle/12.1.0.2/rdbms/lib]# expdp testuser/testuser@marker directory=tmp
[oracle@server-name]:RICH:[/oracle/12.1.0.2/rdbms/lib]# expdp testuser/testuser@marker directory=tmp compression=all
[oracle@server-name]:RICH:[/oracle/12.1.0.2/rdbms/lib]# echo "directory=tmp reuse_dumpfiles=y compression=all"> temp.par
[oracle@server-name]:RICH:[/oracle/12.1.0.2/rdbms/lib]# expdp testuser/testuser@marker parfile=temp.par

OK - 3 different variations on the same thing - lets see what is logged

Lets flush  to be sure they are in the table.

SQL> BEGIN
 DBMS_AUDIT_MGMT.FLUSH_UNIFIED_AUDIT_TRAIL(
  CONTAINER  => DBMS_AUDIT_MGMT.CONTAINER_CURRENT);
END;
/  2    3    4    5

PL/SQL procedure successfully completed.

Now we check the log

Not the easiest format to read how I've displayed it - but here you go

SQL> select DP_TEXT_PARAMETERS1,DP_BOOLEAN_PARAMETERS1,CLIENT_PROGRAM_NAME,event_timestamp from unified_audit_trail
  2   where DP_TEXT_PARAMETERS1 is not null;

MASTER TABLE:  "testuser"."SYS_EXPORT_SCHEMA_01" , JOB_TYPE: EXPORT, METADATA_JOB_MODE: SCHEMA_EXPORT, JOB VERSION: 12.0.0, ACCESS METHOD: AUTOMATIC, DATA OPTIONS: 0, DUMPER DIRECTORY: NULL  REMOTE LINK: NULL, TABLE EXISTS: NULL, PARTITION OPTIONS: NONE
MASTER_ONLY: FALSE, DATA_ONLY: FALSE, METADATA_ONLY: FALSE, DUMPFILE_PRESENT: TRUE, JOB_RESTARTED: FALSE
oracle@server-name (DW00)                                    07-JAN-15 11.06.59.281884 AM

MASTER TABLE:  "testuser"."SYS_EXPORT_SCHEMA_01" , JOB_TYPE: EXPORT, METADATA_JOB_MODE: SCHEMA_EXPORT, JOB VERSION: 12.0.0, ACCESS METHOD: AUTOMATIC, DATA OPTIONS: 0, DUMPER DIRECTORY: NULL  REMOTE LINK: NULL, TABLE EXISTS: NULL, PARTITION OPTIONS: NONE
MASTER_ONLY: FALSE, DATA_ONLY: FALSE, METADATA_ONLY: FALSE, DUMPFILE_PRESENT: TRUE, JOB_RESTARTED: FALSE
oracle@server-name (DW00)                                    07-JAN-15 11.07.34.239826 AM

MASTER TABLE:  "testuser"."SYS_EXPORT_SCHEMA_01" , JOB_TYPE: EXPORT, METADATA_JOB_MODE: SCHEMA_EXPORT, JOB VERSION: 12.0.0, ACCESS METHOD: AUTOMATIC, DATA OPTIONS: 0, DUMPER DIRECTORY: NULL  REMOTE LINK: NULL, TABLE EXISTS: NULL, PARTITION OPTIONS: NONE
MASTER_ONLY: FALSE, DATA_ONLY: FALSE, METADATA_ONLY: FALSE, DUMPFILE_PRESENT: TRUE, JOB_RESTARTED: FALSE
oracle@server-name (DW00)                                    07-JAN-15 11.08.34.156892 AM


SQL> set long 32000
SQL> /
MASTER TABLE:  "testuser"."SYS_EXPORT_SCHEMA_01" , JOB_TYPE: EXPORT, METADATA_JOB_MODE: SCHEMA_EXPORT, JOB VERSION: 12.0.0, ACCESS METHOD: AUTOMATIC, DATA OPTIONS: 0, DUMPER DIRECTORY: NULL  REMOTE LINK: NULL, TABLE EXISTS: NULL, PARTITION OPTIONS: NONE
MASTER_ONLY: FALSE, DATA_ONLY: FALSE, METADATA_ONLY: FALSE, DUMPFILE_PRESENT: TRUE, JOB_RESTARTED: FALSE
oracle@server-name (DW00)                                    07-JAN-15 11.06.59.281884 AM

MASTER TABLE:  "testuser"."SYS_EXPORT_SCHEMA_01" , JOB_TYPE: EXPORT, METADATA_JOB_MODE: SCHEMA_EXPORT, JOB VERSION: 12.0.0, ACCESS METHOD: AUTOMATIC, DATA OPTIONS: 0, DUMPER DIRECTORY: NULL  REMOTE LINK: NULL, TABLE EXISTS: NULL, PARTITION OPTIONS: NONE
MASTER_ONLY: FALSE, DATA_ONLY: FALSE, METADATA_ONLY: FALSE, DUMPFILE_PRESENT: TRUE, JOB_RESTARTED: FALSE
oracle@server-name (DW00)                                    07-JAN-15 11.07.34.239826 AM

MASTER TABLE:  "testuser"."SYS_EXPORT_SCHEMA_01" , JOB_TYPE: EXPORT, METADATA_JOB_MODE: SCHEMA_EXPORT, JOB VERSION: 12.0.0, ACCESS METHOD: AUTOMATIC, DATA OPTIONS: 0, DUMPER DIRECTORY: NULL  REMOTE LINK: NULL, TABLE EXISTS: NULL, PARTITION OPTIONS: NONE
MASTER_ONLY: FALSE, DATA_ONLY: FALSE, METADATA_ONLY: FALSE, DUMPFILE_PRESENT: TRUE, JOB_RESTARTED: FALSE
oracle@server-name (DW00)                                    07-JAN-15 11.08.34.156892 AM

So - it's worked OK that's the good news..... and the new auditing stuff looks quite neat.

The bad news is half the parameters aren't logged - including the compression one i wanted.....
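In fact a quick look at the view definition shows there are only two datapump specific columns to play with, so there's nowhere for the rest of the parameters to go (the dictionary query below is just a quick way to see that):

select column_name
  from dba_tab_columns
 where table_name = 'UNIFIED_AUDIT_TRAIL'
   and column_name like 'DP\_%' escape '\'
 order by column_name;
-- just DP_BOOLEAN_PARAMETERS1 and DP_TEXT_PARAMETERS1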

Oh well i learnt something and i assume the other stuff will get added in later patches/upgrades

SQL loading data containing carriage returns



In a follow up to my recent post on using the STR directive with sql loader here is another example where the same technique is used to load data which has carriage return / line feed characters within it that we want to actually load and store. Normally with basic sql loader syntax these would really confuse things.

STR to the rescue again.

Here is some setup for the demo

We create a basic table

create table demo( a_id number,h_id number,title varchar2(128),descn varchar2(4000),risk varchar2(4000),comm varchar2(4000));

And this is the file we want to load

 [#BOR#][#EOC#]109[#EOC#]4[#EOC#]testdata_Duplicate[#EOC#]testdata_Duplicate from chat[#EOC#]this
is
carriage return  field[#EOC#]test2[#EOR#]


The file has [#EOR#] as an end of record marker, and [#EOC#] as the field terminator, the data in the middle contains carriage returns that we want to preserve on load.

So what does the control file look like to do that?

LOAD DATA
INFILE "data.dat""STR '[#EOR#]'"
INTO TABLE DEMO
TRUNCATE
FIELDS TERMINATED by '[#EOC#]'
TRAILING NULLCOLS
(field filler,a_id integer external,h_id integer external,title char(128),descn char(4000),risk char(4000),comm char(4000))


So we tell sql loader explicitly the field and record terminators - now lets load this in

 sqlldr / control=data.ctl

SQL*Loader: Release 12.1.0.2.0 - Production on Tue Jan 13 16:58:01 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Path used:      Conventional
Commit point reached - logical record count 1

Table DEMO:
  1 Row successfully loaded.

Check the log file:
  data.log
for more information about the load.


Selecting the data back confirms the carriage returns are there (and doubly confirmed by the use of the dump function to show what data is actually there).

 sqlplus /

SQL*Plus: Release 12.1.0.2.0 Production on Tue Jan 13 16:59:16 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Tue Jan 13 2015 16:59:11 +00:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

SQL> set pages 0 lines 1024
SQL> select * from demo;
       109          4 testdata_Duplicate
testdata_Duplicate from chat
this
is
carriage return  field
test2



SQL> select dump(risk) from demo;
Typ=1 Len=30: 116,104,105,115,10,105,115,10,99,97,114,114,105,97,103,101,32,114,101,116,117,114,110,32,32,102,105,101,108,100
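And just as an extra check (something i've added here rather than part of the original test) we can count the embedded linefeeds directly:

-- number of embedded linefeeds in the loaded value
select length(risk) - length(replace(risk, chr(10))) as newline_count
  from demo;
-- expecting 2 - matching the two 10's in the dump output above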


Very easy once you know how!





rawutl - an overlooked utility



While looking in $ORACLE_HOME/bin of a 12.1.0.2 DB i noticed an executable I'd never seen before and wondered what it did - so i ran it.....


[oracle]# rawutl
Usage: rawutl <option>
where <option> :
        -s <raw_device_name>: size of given raw device in bytes usable by Oracle
        -h: prints usage.


So lets pass in a raw volume and see if it works

rawutl -s /dev/mapper/oracleasm-disk1
64427130880


So the device is 64GB

Kind of useful i guess......

I then went back to look at some older homes to see when this appeared (thinking it was a 12c thing), seems it's been there since 10.2 (and maybe earlier - but i didn't have a home to check that in) - somehow I'd completely overlooked it.

Not that useful i guess as there are many ways to get that same info - but this is maybe the simplest....?


Datapump, partitioning and parallel



One of our systems is reasonably large (~2TB) and has 4 tables that have over a billion rows each (and by billion i mean 1000 million - there still seems to be confusion, at least with me, about what is a billion and what is a trillion and it seems to vary from country to country) - anyway that's not really relevant to what i want to say - lets just say the tables are quite big.......

The tables are all related and the larger tables are children of some much smaller tables (no prizes for guessing that one). There are performance issues with this system (again no prizes for that one :-)) and I've been looking for a more efficient way to structure the tables.

In most cases the number of distinct values for the foreign key column (which is generally what we are joining and filtering on in multitable joins) is relatively small (approx 40,000) and there are good referential constraints between everything.

This led me down the path of trying to interval partition the tables on this FK column - that way when doing queries (which generally always include a single value for the FK column) we can just do a FTS of the single interval partition - currently the complexity of the queries (and the multiple joins) is sending the optimizer down the wrong path a lot of the time.

Currently we're on 11.2 but we plan to upgrade soon to 12c where we can also combine the interval with reference partitioning to hopefully mean the optimizer can very quickly pull all the related records back.

So sounds good in theory - so i went about trying to do some tests. I found out some useful stuff about datapump and discovered some of the restrictions and also some of the limitations of partitioning itself.

So here we go.

To partition the table i decided to create a new partitioned empty table and do an export/import into this from the original table. I could have used a loopback database link to just do this in one step but decided to keep it simple and just export to a file and then load that in. The basic new table script is like this - i've skipped most of the columns for brevity. As you can see it's interval partitioned on the MODEL_SCENARIO_ID column

CREATE TABLE "TP_PART"."MODEL_SCENARIO_DATA"
   (    "ID" NUMBER(19,0) NOT NULL ENABLE,
        "MODEL_SCENARIO_ID" NUMBER(19,0) NOT NULL ENABLE,
     )
 partition by range(model_scenario_id)
           Interval  (1)
(partition p1 values less than (2) )
  TABLESPACE "TP_PART" ;


I then exported the original table, i took the opportunity here to try out some stuff with datapump and parallel which gives some interesting results.

The first thing i did was run the export with parallel=4 - here is what the activity looked like in cloud control


 You can see that only 3 sessions show as active - datapump had created two datapump workers, one of which sat idle while the other exported the table at parallel degree 3. From the screenshot below you can see those 3 sessions all running the 'create table' command (this seems a little odd for it to be create table - but datapump is basically doing an insert into an external table - which is considered a create table.)


 Here we can see the waits for one of the individual sessions


 And here is the sql monitoring report as it was running (which is a feature i think is just great)


Once it has finished - looking at the summary report shows us that the elapsed/clock time is 39.5 minutes but the 'database' time is 2 hours (basically 3 session each of 40 minutes)


So quite impressive you might think, 1.4 billion rows in 40 minutes - a lot faster than it would have been with a single thread....????

Well this is where it's interesting - what if we repeat that whole process but with no parallel at all?

 What's interesting is what it looks like when it completes.... (notice the tick at the top to show it is complete - the screenshot was taken at the exact point it finished so it does look like the graph might continue - but it doesn't)


So the overall time is 28.8 minutes! So a single thread is faster than 3 parallel threads?

The difference is the method that datapump is using to extract the data (see more info here on that access method ). So in the case of a single thread direct path is used, as soon as we say parallel the external table method is used.

It seems however in this basic case that direct path is at least 3 or 4 times faster than external table access.

Something to be well aware of if you are using datapump for large objects!
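If you want to take the choice away from datapump you can set the access method yourself - the ACCESS_METHOD command line parameter appears on the import further down this post, and the same thing is exposed in the API as DATA_ACCESS_METHOD. A minimal sketch (the TMP directory and dump file name here are just placeholders, not objects from the real test):

DECLARE
  l_h     NUMBER;
  l_state VARCHAR2(30);
BEGIN
  l_h := DBMS_DATAPUMP.open(operation => 'EXPORT', job_mode => 'TABLE');
  DBMS_DATAPUMP.metadata_filter(l_h, 'SCHEMA_EXPR', 'IN (''TP_PART'')');
  DBMS_DATAPUMP.metadata_filter(l_h, 'NAME_EXPR',   'IN (''MODEL_SCENARIO_DATA'')');
  DBMS_DATAPUMP.add_file(l_h, 'msd_direct.dmp', 'TMP', reusefile => 1);
  -- force direct path rather than letting datapump decide
  DBMS_DATAPUMP.set_parameter(l_h, 'DATA_ACCESS_METHOD', 'DIRECT_PATH');
  DBMS_DATAPUMP.start_job(l_h);
  DBMS_DATAPUMP.wait_for_job(l_h, l_state);
END;
/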

So anyway after my brief testing here i carried on with my plan to load the data into the precreated partition table.

So i just import and tell the process to ignore the fact the table already exists and it will just create all the partitions i need?

Well i thought it should but here's what happens when you try that

Import: Release 11.2.0.4.0 - Production on Fri Jan 9 16:35:27 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
;;;
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning option
Master table "ORACLE"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "ORACLE"."SYS_IMPORT_FULL_01":  /******** dumpfile=MSDnoparallel.dmp directory=uresh table_exists_action=truncate logfile=imp.log access_method=external_table
Processing object type TABLE_EXPORT/TABLE/TABLE
Table "TP_PART"."MODEL_SCENARIO_DATA" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "TP_PART"."MODEL_SCENARIO_DATA" failed to load/unload and is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-14400: inserted partition key does not map to any partition
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "ORACLE"."SYS_IMPORT_FULL_01" completed with 1 error(s) at Fri Jan 9 16:35:39 2015 elapsed 0 00:00:12


It seems that it throws an error, - now this is where it got a little interesting - why wasn't it working?

A quick MOS search revealed there are actually bugs in this area and it plain doesn't work - at least not in 11.2 anyway.

Hmm - thats annoying.

So what i then did was import the table back in (under a different name with REMAP_TABLE), then tried an insert into the partitioned table as select * from that temp table.

And then this happened

ORA-14300: partitioning key maps to a partition outside maximum permitted

Strange - there were only 40,000 values - but hold on what is the range of those values?

SQL> select min(MODEL_SCENARIO_ID),max(MODEL_SCENARIO_ID) from tp_part.rich;

MIN(MODEL_SCENARIO_ID) MAX(MODEL_SCENARIO_ID)
---------------------- ----------------------
                401414               10822814


Ah - there lies the problem - even if i'd started at the minimum value there are over 10 million numbers different between max and min. That means interval partitioning has to create (or at least reserve) this many partitions.

A quick google reveals that (even in 12c it's the same) - the max number of partitions is 1024K-1 (1048575) - so basically a million.....
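A quick bit of arithmetic against the staging copy shows why it blows up:

-- with interval (1) every value above the base partition needs its own slot, so the spread
-- of the key is roughly the number of partitions oracle has to be able to address
select max(model_scenario_id) - min(model_scenario_id) + 1 as slots_needed
  from tp_part.rich;
-- comes back at around 10.4 million - way past the 1048575 limit, hence the ORA-14300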

Aaaargh - so the whole exercise is pointless really - the partitioning approach i want to try cannot be done - back to the drawing board...........

In order to try and salvage something a little extra i thought i would do a further test of the partitioning and datapump/parallel behaviour - as it might be useful longer term.

I created a 4 way hash partition of the table and then did a further test.

Running the export with parallel 4 then resulted in the following behaviour

Each partition was unloaded in direct path at the same time (so it parallelized across the partitions but not within the partitions) - this gave the fastest runtime so far of about 25 minutes. I'm sure it would have been faster still if the files were spread over different devices etc etc

Just out of interest i then did the export with parallel 8 to see what it would do (thinking it would just use parallel 2 for each partition). What it actually did was export 2 of the partitions with direct path and the other 2 with external table in parallel 3 - so not sure what the algorithm is actually doing... :-)

Anyway here is what that looked like when it was running


The serial direct path exports finished much faster than the parallel ones....

So some interesting conclusions

Partitioning is limited to 1 million partitions (1024K-1 to be exact)
importing into a partitioned table just plain failed (at least for this example)

Most interestingly though is the behaviour of parallel, direct path and external tables - in some cases not using parallel may be a better idea, even though with datapump parallel is almost always better.

This is probably an edge case though when you are exporting a very limited number of objects, as soon as you do multiple tables/schemas parallel is almost never used for single objects - it's used to do multiple objects at the same time.


Pointless method of pfile recreation?



I've got a disaster recovery situation for you - pretty unlikely and pretty pointless but it again shows how useful cloud control can be....

So take the case that in some strange turn of disastrous events that we have everything we need to activate the database (datafile/redo etc etc) - the only thing missing is the lowly pfile/spfile that we need to start the instance up and get things going.

Somehow the pfile and spfile both got deleted, and somehow they've not been backed up at all with rman, and bizarrely the alert log has also been removed (which could have told us what all of the settings were).

So we have nothing to start the instance.

It would be easy enough to just try and construct one from scratch - all you really need is db_name and control_files specified and you could just try and make up a lot of the other settings until the system seemed OK again.

Is there another way though to get the exact file contents back as they were and save time rediscovering all the settings you had before by trial and error?

Well there is at least one - as long as you have cloud control set up.

The contents of the init file are uploaded to the repository and we can just query it if we know where to look (which after some trial and error i do).

So if we know the db_name we are interested in we can just say

select init_param_name||'='||init_param_value
from MGMT$CS_DB_INIT_PARAMS p ,mgmt$target t
where t.target_name like '%YOUR_DB_NAME_HERE%'
and t.target_guid=p.target_guid
and p.INIT_PARAM_IS_DEFAULT='FALSE'

 (the numbers just come from the fact i exported it in plsql developer which adds a row number column).
   INIT_PARAM_NAME||'='||INIT_PAR
1audit_file_dest=/oracle/admin/DBNAME/adump
2audit_trail=DB
3backup_tape_io_slaves=TRUE
4compatible=11.1.0.0.0
5control_file_record_keep_time=14
6control_files=/oracle/DBNAME/oradata/DBNAME/controlfile/o1_mf_87c072qk_.ctl, /oracle/DBNAME/recovery_area/DBNAME/controlfile/o1_mf_87c072t8_.ctl
7db_32k_cache_size=100663296
8db_block_checksum=full
9db_block_size=8192
10db_create_file_dest=/oracle/DBNAME/oradata
11db_domain=WORLD
12db_keep_cache_size=100663296
13db_name=DBNAME
14db_recovery_file_dest=/oracle/DBNAME/recovery_area
15db_recovery_file_dest_size=19327352832
16db_unique_name=DBNAME
17diagnostic_dest=/oracle/admin/DBNAME
18dispatchers=(PROTOCOL=TCP) (SERVICE=DBNAMEXDB)
19instance_name=DBNAME
20java_pool_size=268435456
21job_queue_processes=100
22log_archive_dest_1=LOCATION=USE_db_recovery_file_dest
23log_archive_format=DBNAME_%t_%s_%r.arc
24memory_target=0
25open_cursors=300
26pga_aggregate_target=314572800
27processes=300
28remote_login_passwordfile=EXCLUSIVE
29service_names=DBNAME, DBNAME.WORLD
30sessions=480
31sga_max_size=1577058304
32sga_target=1577058304
33shared_pool_size=268435456
34undo_tablespace=UNDOTBS1
35utl_file_dir=/oracle/utlfile/DBNAME

So you can see you can easily rebuild the exact file from that - just needs some quotes adding in the right places.......
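A slightly extended version of the same query (just an illustration - it still won't get every multi-valued parameter perfect, so control_files and friends may need hand editing) wraps anything containing spaces or commas in quotes so most lines can go straight into a pfile:

select p.init_param_name || '=' ||
       case
         when p.init_param_value like '% %' or p.init_param_value like '%,%'
         then '''' || p.init_param_value || ''''   -- crude quoting - check multi-valued parameters by hand
         else p.init_param_value
       end as pfile_line
  from mgmt$cs_db_init_params p, mgmt$target t
 where t.target_name like '%YOUR_DB_NAME_HERE%'
   and t.target_guid = p.target_guid
   and p.init_param_is_default = 'FALSE'
 order by p.init_param_name;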

Pretty pointless i know - but could be useful after a very unlikely series of events.......

VIEWS_AS_TABLES in 11g and even 10g ???



12c introduced a new feature for datapump which allowed you to extract a 'view' as if it were a table and then take this off somewhere else to import as a 'table'.

Very nice.

However you can kind of do it (though not so elegantly) in earlier versions using datapump/external tables - here's how..... (and note i did the test in 12c as thats what i had to hand - but the process works fine in 10g and 11g)

In the example below I'm adding a couple of dummy columns to a select from a table - but the SQL query could be anything you like here - i just kept the example simple.

First up we create a very basic table and insert one row (1 row is enough to illustrate the point...)

SQL> create table dummytable (col1 number,col2 number);

Table created.


SQL> insert into dummytable values (1,2);

1 row created.

I then create an external table definition using the datapump driver - which does allow writing (where the normal sqlloader one does not). Notice i add 2 'dummy' columns - but the query could be a 10 table join - anything you like. You can also use parallel etc if need be.

CREATE TABLE demoxt
    ORGANIZATION EXTERNAL
   (
       TYPE ORACLE_DATAPUMP
      DEFAULT DIRECTORY data_pump_dir
       LOCATION ( 'demo.dmp' )
    )
    AS
       SELECT 1 as newcol1, 'A' as newcol2, col1,col2
      FROM   dummytable;

I then extract the table ddl for later use.

select dbms_metadata.get_ddl('TABLE','DEMOXT') from dual
SQL> /

DBMS_METADATA.GET_DDL('TABLE','DEMOXT')
--------------------------------------------------------------------------------

  CREATE TABLE "OPS$ORACLE"."DEMOXT"
   (    "NEWCOL1" NUMBER,
        "NEWCOL2" CHAR(1),
        "COL1" NUMBER,
        "COL2" NUMBER
   )
   ORGANIZATION EXTERNAL
    ( TYPE ORACLE_DATAPUMP
      DEFAULT DIRECTORY "DATA_PUMP_DIR"
      LOCATION
       ( 'demo.dmp'
       )
    )

A quick ls reveals the dumpfile is created

SQL>  !ls /oracle/12.1.0.2/rdbms/log/demo.dmp
/oracle/12.1.0.2/rdbms/log/demo.dmp

 I drop the original table and the external table now

SQL> drop table demoxt;

Table dropped.

SQL> drop table dummytable;

Table dropped.

 Note the dumpfile remains intact.

SQL>  !ls /oracle/12.1.0.2/rdbms/log/demo.dmp
/oracle/12.1.0.2/rdbms/log/demo.dmp

I then recreate the external table definition pointing at the previously created file

SQL>   CREATE TABLE "OPS$ORACLE"."DEMOXT"
  2     (    "NEWCOL1" NUMBER,
  3          "NEWCOL2" CHAR(1),
  4          "COL1" NUMBER,
  5          "COL2" NUMBER
   )
   ORGANIZATION EXTERNAL
    ( TYPE ORACLE_DATAPUMP
      DEFAULT DIRECTORY "DATA_PUMP_DIR"
  6    7    8    9   10        LOCATION
       ( 'demo.dmp'
       )
    )
11   12   13   14  ;

Table created.

I then create a 'real' table from the external table dumpfile.

SQL> create table dummytable as select * from demoxt;

Table created.

 I can now describe it showing the extra columns

SQL> desc dummytable;
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
NEWCOL1                                            NUMBER
NEWCOL2                                            CHAR(1)
COL1                                               NUMBER
COL2                                               NUMBER

And a quick select shows the table is there

SQL> select * from dummytable;

   NEWCOL1 N       COL1       COL2
---------- - ---------- ----------
         1 A          1          2


This is probably not that useful as the dumpfile is only useful to map an external table definition on top of - the file cannot just be loaded directly with impdp.

It's a pity there is no sql unloader which works in the same way and would just create a csv directly - this surely is not that hard to create and would be very useful - rather than using sqlplus or utl_file to create files.
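For completeness, the nearest thing today is the old sqlplus spool trick (a minimal sketch against the demo table above - fine for small stuff, painful for anything big):

set heading off feedback off pagesize 0 linesize 32767 trimspool on
spool /tmp/dummytable.csv
select newcol1 || ',' || newcol2 || ',' || col1 || ',' || col2 from dummytable;
spool off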

How oracle resolves names.........an analogy




Ever been asked what the hell it is you do at work and tried to relate it to everyday life?

Here is my attempt to simplify/explain in layman's terms what oracle names resolution is all about

Lets pretend I'm a motorist with no sense of direction and I've a group of friends who i can call upon to tell me how to get somewhere (resolve the name i want to connect to).

I have 5 friends with me:

Tnsnames - has a 'road map' of where all the locations are
ldap - has a sat nav
ezconnect - only knows how to get somewhere if you tell him pretty much exactly where it is
hostname - only knows how to get somewhere if the country/city/street all have the same name and the people live at number 1521
nis - the weird guy who has an old satnav that still works but no-one asks any more

Every time i 'take a journey' (do a names resolution) i could have 1 or more of these friends with me - whether these friends are there or not is determined by

names.directory_path= (list of friends)

I could have just one, or all of them in the car with me. The order i list them in is the order i ask them directions. If one of them gives me the correct directions i don't bother asking anyone else.

In most cases tnsnames seems to be everyone's best friend and he comes on 99% of all trips and rides shotgun, the others are sometimes there by default but just sit in the back and are never asked questions of. Tnsnames has been our best friend for many years since at least Oracle 7 and we can't remember further back than that....

While tnsnames is a great guy he sometimes doesn't know everything - his 'road map' might be last year's version and it may be missing some new locations or have wrong pointers to locations. Generally he's pretty reliable as long as he's been kept up to date

ldap guy is the cool kid with the iphone 6+, the iwatch and knows everything about everything, but if his 3g/4g connection goes down he is screwed and can't resolve anything - whereas tnsnames guy is still ok as his road map is in the car and doesn't rely on anything external.

ezconnect is the new kid in town, he's handy if  tnsnames is tied up (if he's staring out of the window or something - and you don't have access to him) he can tell you the way but you need to have some idea yourself where things are - he's useful though sometimes

hostname guy can't ever remember being asked where to go in the last 20 years.....

NIS guy remembers being asked something many years ago, him and hostname guy sit in silence most of the time with glum faces.....both of them are worried that when we upgrade the car there won't be a seat for them....

If none of the guys being asked knows where the location is then we throw the famous ORA-12154 (basically i don't know where you are trying to go mate).

Sometimes we get a very similar sounding reply ORA-12514 - this means we've found the address but the person we want isn't in.....

We also sometimes get ORA-12505 - this is very similar to ORA-12514 - perhaps we tried the doorbell rather than just knocking - but there is still no-one in.

Sometimes we get an ORA-12545 - this is when the street name we are looking for doesn't seem to exist - perhaps it's a misprint in the 'road map' or the satnav has a typo.

Occasionally the friends reminisce about the old friend 'oracle names' who died a few years ago - they all however blame ldap for the untimely death

Anyway - hope you enjoyed it.....

The wallet, the password and the odd requirement



An otn question this week had the requirement to allow a client account to be able to run "sqlplus /" from a client machine and this would connect to a remote database server with no credentials being passed.

This can easily be done using ops$ accounts and setting remote_os_authent to true - however this is notoriously insecure (i.e. i could easily connect my own machine to the network, add a local user of the ops$localuser name to match the db and the database would let me login with no password)

In their case however they were already using a 'secure external password store' (a wallet) and they wanted to be able to use "sqlplus /" using this.

However they were having trouble as they seemed to be forced to always specify the tnsnames alias

i.e. sqlplus /@DB worked but sqlplus / didn't

I initially said 'just set TWO_TASK' thinking this would resolve it as it forces normal connections to go out over sqlnet to the tns alias mentioned in TWO_TASK. This apparently didn't work - i didn't believe them so i went ahead and did a test myself

So some pre-reqs - i have a dummy db called 'DB' and i add a user called demosd to it

SQL> create user demosd identified by demosd;

User created.

SQL> grant create session to demosd;

Grant succeeded.

SQL>

I then create a wallet to store these credentials

mkstore -wrl /home/oracle -create
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Enter password:
Enter password again:

(choosing welcome1 as the password at the prompts)

I can then see it created some wallet stuff

-rw-rw-rw- 1 oracle oinstall        0 Jan 27 14:55 ewallet.p12.lck
-rw------- 1 oracle oinstall       75 Jan 27 14:55 ewallet.p12
-rw-rw-rw- 1 oracle oinstall        0 Jan 27 14:55 cwallet.sso.lck
-rw------- 1 oracle oinstall      120 Jan 27 14:55 cwallet.sso

I then add the user/password/database combo i'm interested in

mkstore -wrl /home/oracle -createCredential DB demosd demosd
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
Create credential oracle.security.client.connect_string1

So that's all setup

Now a few extra lines in the sqlnet.ora to tell oracle where the wallet is and that he should always use it

cat sqlnet.ora

# sqlnet.ora Network Configuration File: /oracle/12.0.0.1/network/admin/sqlnet.ora
# Generated by Oracle configuration tools.

NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
SQLNET.ALLOWED_LOGON_VERSION=10

WALLET_LOCATION =
   (SOURCE =
      (METHOD = FILE)
      (METHOD_DATA = (DIRECTORY = /home/oracle))
)

SQLNET.WALLET_OVERRIDE = TRUE

Now i just export TWO_TASK=DB and it works right.....?

Well no it doesn't and i was quite surprised..... (sqlplus /@DB works fine just not sqlplus /)

So how to solve this?

A simple alias won't work as you can't have a space in an alias

The fix i came up with (and to be honest i don't like it and you probably shouldn't use it) is to do this:

Create a function called sqlplus in the .profile and 'export' it

sqlplus() {
    # if the only argument is "/" then silently swap in the wallet alias
    if [[ "$*" == "/" ]]; then
        command sqlplus /@DB
    else
        # otherwise pass everything through to the real sqlplus unchanged
        command sqlplus "$@"
    fi
}

export -f sqlplus

Then after executing the .profile again we have a function called sqlplus which when passed an argument of / runs sqlplus /@DB....

total bodge i know - but it does work - i just wouldn't ever want to set up a system this way.....

Datapump out from all pdb's in one go?



Again inspired by an otn question i came up with a solution thats perhaps a useful concept to solve lots of similar type requirements.

In this case they wanted a script to export out each PDB and wondered if this was possible in a single command somehow. Well it isn't but the following solution seems quite neat to me (though you may disagree....)

In order to make the datapump possible we need to create a directory in each PDB (with the same name) - DATA_PUMP_DIR looks like it should be ready made for this but doesn't work in the PDB's for some reason.

So lets create that first (tun in each pdb)

create directory tmp as '/tmp';

(you could use the method i show later to do this everywhere........)

Now we have that i create a small file with a bit of plsql to run datapump exports

cat $HOME/demo.sql

DECLARE
  l_dp_handle      NUMBER;
  v_job_state      varchar2(4000);
  v_day            varchar2(20);
BEGIN
  select SYS_CONTEXT('USERENV', 'CON_NAME')||'-'||rtrim(to_char(sysdate,'DAY')) into v_day from dual;
  l_dp_handle := DBMS_DATAPUMP.open(operation   => 'EXPORT',
                                    job_mode    => 'FULL',
                                    remote_link => NULL,
                                    version     => 'LATEST');
  DBMS_DATAPUMP.add_file(handle    => l_dp_handle,
                         filename  => v_day||'.dmp',
                         directory => 'TMP',
                         reusefile => 1);
  DBMS_DATAPUMP.add_file(handle    => l_dp_handle,
                         filename  => v_day||'.log',
                         directory => 'TMP',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE,
                         reusefile => 1);
  DBMS_DATAPUMP.start_job(l_dp_handle);
  DBMS_DATAPUMP.WAIT_FOR_JOB(l_dp_handle, v_job_state);
  DBMS_OUTPUT.PUT_LINE(v_job_state);
END;
/


I make the file name the container name followed by - and the day of the week.

So now we have something that will create an export when run against a PDB - but how to make it run against all PDB's automatically?

This is where we borrow catcon.pl......

You can read up about al the options and what it does elsewhere - i'll just show what the end command i came up with was:


perl catcon.pl -C 'CDB$ROOT PDB$SEED' -b /tmp/demo /home/oracle/demo.sql







I'm specifically excluding the seed and root containers as i don't care about them - that's what the -C does

When run this gives the following output

 catcon: ALL catcon-related output will be written to /tmp/demo_catcon_8698.lst
catcon: See /tmp/demo*.log files for output generated by scripts
catcon: See /tmp/demo_*.lst files for spool files, if any
catcon.pl: completed successfully



There is loads of stuff outputted from catcon under /tmp but all i want to look at is the dumpfiles and logfiles (my PDBs are called MARKER and REMOTEDB)

ls -l /tmp

-rw-r--r-- 1 oracle oinstall     9098 Jan 28 20:34 MARKER-WEDNESDAY.log
-rw-r----- 1 oracle oinstall  2080768 Jan 28 20:34 MARKER-WEDNESDAY.dmp
-rw-r--r-- 1 oracle oinstall     8954 Jan 28 20:34 REMOTEDB-WEDNESDAY.log
-rw-r----- 1 oracle oinstall  2068480 Jan 28 20:34 REMOTEDB-WEDNESDAY.dmp


So looks good - and checking the last few lines of a logfile i see this

. . exported "C##LINK"."DEMO"                            5.054 KB       1 rows
Master table "SYS"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_FULL_01 is:
  /tmp/REMOTEDB-WEDNESDAY.dmp
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at Wed Jan 28 20:34:50 2015 elapsed 0 00:02:33


There you go - maybe not something i'd implement for real but it's a useful technique nonetheless.


