
Why is the Apex login screen slow?



For quite a while now we've had issues with one of our Apex installations. We have two completely parallel systems that independently process the same data - the business can access either one of these and get the same results (no RAC/cluster - just two separate systems that process the same inputs).

They are both running the same versions of Apex/Oracle/OS, but one login page is massively slower than the other - why?

Both are using the EPG way of accessing the applications (the simplest setup, I always find) - so why is one so much slower than the other? A quick comparison of the database parameters found no real differences, so it would seem to be somewhere else (we had seen problems on other systems where shared_servers was set to too small a value, but that wasn't a problem here).

We needed to narrow down the issue - the infrastructure is hugely complex, with large numbers of routers, firewalls and proxies between the client and the application.

The simplest way to start the narrowing down was to take the network 'stuff' out of the picture and just retrieve the Apex web page directly on the Linux server - so how to do that?

Well, wget has been around for a while and is perfectly suited to what we wanted to do - so we set up a very simple wget fetch to retrieve the login page. If it was quick locally then the problem lies somewhere else in the infrastructure.

So let's retrieve the page on both servers - to do this we run the following command:

wget http://localhost:7777/apex/apex
--22:54:18--  http://localhost:7777/apex/apex
           => `apex'
Resolving localhost... 127.0.0.1, ::1
Connecting to localhost|127.0.0.1|:7777... connected.
HTTP request sent, awaiting response... 302 Found
Location: f?p=4550:1:3358419253871 [following]
--22:54:18--  http://localhost:7777/apex/f?p=4550:1:3358419253871
           => `f?p=4550:1:3358419253871'
Connecting to localhost|127.0.0.1|:7777... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]

    [                                                                <=>                                     ] 9,935         --.--K/s

22:55:18 (165.64 B/s) - `f?p=4550:1:3358419253871' saved [9935]


Now this worked fine on both servers, however it took 60 seconds in both cases (actually way longer than either URL access from a browser) - this then muddied the waters even more, as it then appeared to be a localized box issue.....


However, after a lot of trial and error (and I mean a lot - trying almost every switch that exists in wget) I came across the solution.

Add this switch (--ignore-length) and the page comes back straight away on both boxes:



wget --ignore-length  http://127.0.0.1:7777/apex/apex
--22:58:23--  http://127.0.0.1:7777/apex/apex
           => `apex'
Connecting to 127.0.0.1:7777... connected.
HTTP request sent, awaiting response... 302 Found
Location: f?p=4550:1:22032694640760 [following]
--22:58:23--  http://127.0.0.1:7777/apex/f?p=4550:1:22032694640760
           => `f?p=4550:1:22032694640760'
Connecting to 127.0.0.1:7777... connected.
HTTP request sent, awaiting response... 200 OK
Length: ignored [text/html]

    [ <=>                                                                                                    ] 9,945         --.--K/s

22:58:23 (474.21 MB/s) - `f?p=4550:1:22032694640760' saved [9945]



The problem seems to be that the reported length and the actual length of the HTML are different (I don't know why this would be) - wget sits there for 60 seconds waiting for extra data which never arrives (as there is no more), then gives up and completes the call.
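
If you want to see the mismatch for yourself, a quick check is to compare the headers the EPG sends back against the number of bytes actually returned - a minimal sketch, assuming curl is available and using the same local URL as above:

# save the response headers and body separately (follows the redirect to the login page)
curl -s -L -D /tmp/headers.txt -o /tmp/apexlogin.html http://localhost:7777/apex/apex
cat /tmp/headers.txt

# compare any Content-Length reported above against the bytes actually received
wc -c /tmp/apexlogin.html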

So with the extra switch we know that the local box/Oracle/Apex setup is fine and the problem is somewhere else in the system.

The network/firewall team are now looking into things and a tcpdump of one of the interfaces seems to show some 'interesting' results - so hopefully now we are on the path to fixing it......

A short note on ACLs inside the database



Back in 11g Oracle introduced the concept of access control lists (ACLs) to restrict what network ports could be opened from within the database - personally I think this was solving a problem that wasn't really there, but I'm sure there are cases where it is useful and access does need to be very tightly controlled. It's pretty much caused nothing but hassle for me though....

Anyway, it caught me out again this week - though it wasn't obvious that this was the problem. For a lot of our Apex applications we use LDAP to control authentication (see the post about that here) - on one system this was not working and I started to look into why.

The basic test I always do is to confirm that the LDAP authentication works from plsql before delving into Apex - so I run the following code block:

DECLARE
  l_retval      PLS_INTEGER;
  l_retval2     PLS_INTEGER;
  l_session     dbms_ldap.session;
  l_ldap_host   VARCHAR2(256);
  l_ldap_port   VARCHAR2(256);
  l_ldap_user   VARCHAR2(256);
  l_ldap_passwd VARCHAR2(256);
  l_ldap_base   VARCHAR2(256);
BEGIN
  l_retval                := -1;
  dbms_ldap.use_exception := TRUE;
  l_ldap_host             := 'domain-name-here';
  l_ldap_port             := '389';
  l_ldap_user             := 'domain\user';
  l_ldap_passwd           := 'password';
  l_session               := dbms_ldap.init(l_ldap_host, l_ldap_port);
  l_retval                := dbms_ldap.simple_bind_s(l_session,
                                                     l_ldap_user,
                                                     l_ldap_passwd);
  dbms_output.put_line('Return value: ' || l_retval);
  l_retval2 := dbms_ldap.unbind_s(l_session);
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line(rpad('ldap session ', 25, ' ') || ': ' ||
                         rawtohex(substr(l_session, 1, 8)) ||
                         '(returned from init)');
    dbms_output.put_line('error: ' || SQLERRM || ' ' || SQLCODE);
    dbms_output.put_line('user: ' || l_ldap_user);
    dbms_output.put_line('host: ' || l_ldap_host);
    dbms_output.put_line('port: ' || l_ldap_port);
    l_retval := dbms_ldap.unbind_s(l_session);
END;
/


If that returns 0 (with serveroutput on of course) then LDAP is OK.

In this case the plsql worked fine - so why was Apex not working?

Initially I thought maybe it was some issue with Apex itself (as we were using a newer version of Apex), but after a lot of head scratching I realised the problem...

The test above I ran as SYS (as I'm lazy and I have an alias set up so I just type s to log me on) - however there is a caveat when this plsql block is run as SYS - ACLs don't apply..... so the code will work regardless.

It's only when you run it as a non-SYS user (even with DBA rights - it's just that SYS is special) that you see the issue:


DECLARE
*
ERROR at line 1:
ORA-24247: network access denied by access control list (ACL)
ORA-06512: at "SYS.DBMS_LDAP_API_FFI", line 25
ORA-06512: at "SYS.DBMS_LDAP", line 48
ORA-06512: at line 17


So one to remember (and I'm sure it is documented) - network ACLs do not apply to the SYS user.....

As soon as we set the ACL up correctly, Apex started to work fine......
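
For reference, the fix amounts to creating and assigning an ACL something like the one below - this is only a sketch using the 11g DBMS_NETWORK_ACL_ADMIN API, with the ACL name, principal and host as placeholders for your own Apex parsing schema and LDAP server:

BEGIN
  -- create an ACL and grant the connect privilege to the schema that makes the LDAP calls
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(
    acl         => 'ldap_auth.xml',
    description => 'Allow LDAP authentication calls',
    principal   => 'APEX_PARSING_SCHEMA',   -- placeholder principal
    is_grant    => TRUE,
    privilege   => 'connect');

  -- tie the ACL to the LDAP host and port used in the test block above
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(
    acl        => 'ldap_auth.xml',
    host       => 'domain-name-here',       -- placeholder host
    lower_port => 389,
    upper_port => 389);

  COMMIT;
END;
/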


What on earth is SCHEMA_EXPORT/STATISTICS/MARKER in datapump?



Recently, when doing some datapump work, I noticed an odd extra line appearing in 12c when doing schema exports - but only when exporting from a PDB - let me show you what I mean.

So this is the output from a schema export in a plain old (pre-12c style) database:

expdp marker/marker reuse_dumpfiles=y directory=tmp

Export: Release 12.1.0.1.0 - Production on Fri Sep 5 21:20:11 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "MARKER"."SYS_EXPORT_SCHEMA_01":  marker/******** reuse_dumpfiles=y directory=tmp
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Master table "MARKER"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MARKER.SYS_EXPORT_SCHEMA_01 is:
  /tmp/expdat.dmp
Job "MARKER"."SYS_EXPORT_SCHEMA_01" successfully completed at Fri Sep 5 21:22:43 2014 elapsed 0 00:02:29



When we do the same thing against a PDB we see this (note you have to make an @ connection for datapump to work against a PDB):

expdp test/test@//localhost:1521/MARKER schemas=test directory=tmp

Export: Release 12.1.0.1.0 - Production on Fri Sep 5 09:15:14 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "TEST"."SYS_EXPORT_SCHEMA_01":  test/********@//localhost:1521/MARKER schemas=test directory=tmp
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Master table "TEST"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for TEST.SYS_EXPORT_SCHEMA_01 is:
  /tmp/expdat.dmp
Job "TEST"."SYS_EXPORT_SCHEMA_01" successfully completed at Fri Sep 5 09:15:45 2014 elapsed 0 00:00:29


So pretty much exactly the same - apart from one line:

Processing object type SCHEMA_EXPORT/STATISTICS/MARKER

So what is this mysterious 'marker'? The fact that it is mentioned at all implies it thinks it needs to extract something - but what is that something?

Trying an import to a sqlfile to see what metadata it contained revealed nothing - no mention of anything to do with it.

Hmmmm

Let's try to do a bit of detective work.

A look in DBA_EXPORT_OBJECTS makes no reference to this - so there is no comment to help us. So a bit of a dead end there.
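
For anyone wanting to repeat that check, this is the sort of query I mean against the documented export path views (SCHEMA_EXPORT_OBJECTS being the schema-mode one):

select object_path, comments
  from schema_export_objects
 where object_path like '%MARKER%';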

Let's trace datapump and see if that tells us anything - activating full tracing, we see this in the trace file:

KUPW:10:21:47.728: 1: DBMS_LOB.TRIM returned.
KUPW:10:21:47.728: 1: DBMS_METADATA.FETCH_XML_CLOB called. Handle: 200001
META:10:21:47.728: OBJECTS_FETCHED for TABLE_STATISTICS = 1
META:10:21:47.729:  get_xml_inputs SCHEMA_EXPORT/STATISTICS/MARKER:
SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('MARKER_T', '7')), 0 ,KU$.MARKER ,'MARKER' FROM SYS.KU$_MARKER_VIEW KU$
rowtag: ROW objnum_count: 0 callout: 0 Bind count: 0 Bind values:


Aha - some kind of progress - so there is a view KU$_MARKER_VIEW which datapump is using - maybe this contains some useful comments?

A quick look at the source code - and it's very basic......

select '1', '0', dbms_metadata_util.get_marker
  from dual


which when run shows

select * from SYS.KU$_MARKER_VIEW;

V V     MARKER
- - ----------
1 0         42


So let's have a look at the source of this.... (well, as much as we can, as the package body is wrapped).

The only bit of any note is this

-- PACKAGE VARIABLES

  marker NUMBER := 42;  -- marker number: used in worker/mcp to take actions
                        -- at appropriate times, without depending on
                        -- pathnames


Which doesn't really tell us anything.

And that's pretty much the end of the road, we all know about 42 of course....... http://en.wikipedia.org/wiki/Phrases_from_The_Hitchhiker%27s_Guide_to_the_Galaxy

So it's either a coincidence that 42 exists or there is some deeper meaning to any of this (I suspect the former in this case :-)).

Anyone know what this is actually doing? It's odd that it seems to be a 'schema level' thing to export, not something associated with segments directly.





All roads lead to agentdeploy



Over 12 months ago now I wrote a short summary of manually installing a cloud control agent - that post is here. Revisiting that this week with a fresh batch of installs, I had a look again at the options that are available to do installs in general - there are of course two main categories:

1) GUI
2) command line

I'm ignoring any shared options or cloning for the purposes of this post.

GUI speaks for itself and I'm not going to mention that further. However, the command line option has a few sub-options that I thought were worth discussing further.

The previous post just covered running the agentdeploy script once you had the agent software on the target server. However there is more than one way to get to that point and trigger that process and I thought it would be useful to share a bit more on this topic.

I'll cover three main command line options (there is a fourth using RPMs, but with my access to our systems I didn't bother trying that one):

1) emcli
2) getAgentimage
3) manual 'fudge'

All of these three sub-options ultimately call agentdeploy.sh in one way or another, so I won't cover old ground on that topic (the post above can deal with that). I'll just cover how you can get to that point in different ways.

So first up option 1 "emcli"

Now, from the previous post you can see that emcli was used on the OMS host to extract the agent software. emcli can, though, be run from anywhere - it just needs to be installed and configured. It's not there by default and does need to be set up (don't confuse this with emctl - emcli is a command line version of the cloud control console and has a huge amount of functionality).

To install emcli you simply need to download it from within the management repository - there are a couple of screens in the console that explain what to do with it.

If you right click the download link you can see where it points to - which should be
http://oms-hostname:7788/em/public_lib_download/emcli/kit/emclikit.jar

I'm running on Linux so I then use wget to fetch this file:

wget http://oms-hostname:7788/em/public_lib_download/emcli/kit/emclikit.jar
--17:38:50--  http://oms-hostname:7788/em/public_lib_download/emcli/kit/emclikit.jar
           => `emclikit.jar'
Resolving oms-server... 10.10.10.10
Connecting to oms-server|10.10.10.10|:7788... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/octet-stream]

    [ <=>                                                                                                                                                      ] 2,521,837     12.80M/s

17:38:50 (12.79 MB/s) - `emclikit.jar' saved [2521837]


So I now have a local copy on my target server - I now need to initialize it (you'll need a Java 6 installation on your target server by the way):

java -jar emclikit.jar -install_dir=/tmp
Oracle Enterprise Manager 12c Release 4.
Copyright (c) 2012, 2014 Oracle Corporation.  All rights reserved.

EM CLI client-side install completed successfully.
Execute "emcli help setup" from the EM CLI home (the directory where you have installed EM CLI) for further instructions.


So now it's installed, we need to configure it:

emcli setup -url=http://oms-server.e-ssi.net:7788 -username=sysman
Oracle Enterprise Manager 12c Release 4.
Copyright (c) 1996, 2014 Oracle Corporation and/or its affiliates. All rights reserved.

Enter password

Emcli setup successful


So now emcli is set up, we can use it to run the same command I used in the original post to download the agent software:

emcli get_agentimage -destination=/tmp/rich -platform="Linux x86-64" -version=12.1.0.4.0
 === Partition Detail ===
Space free : 4 GB
Space required : 1 GB
Check the logs at /oracle/home/oracle/.emcli/get_agentimage_2014-09-08_19-18-43-PM.log
Downloading /tmp/rich/12.1.0.4.0_AgentCore_226.zip
File saved as /tmp/rich/12.1.0.4.0_AgentCore_226.zip
Downloading /tmp/rich/12.1.0.4.0_PluginsOneoffs_226.zip
File saved as /tmp/rich/12.1.0.4.0_PluginsOneoffs_226.zip
Downloading /tmp/rich/unzip
File saved as /tmp/rich/unzip
Agent Image Download completed successfully.
oracle@server:/tmp>


So now we're at the point where we can install with agentdeploy.sh - at this point I'll move on....

Option 2 "getagentimage"

This is very similar to option 1 in many ways - you download a file from the OMS and then use this to download and (optionally) install the agent.

First up we download the magic shell script (again with wget):

wget http://oms-server:7788/em/install/getAgentImage
--19:21:11--  http://oms-server:7788/em/install/getAgentImage
           => `getAgentImage'
Resolving oms-server... 10.10.10.10
Connecting to oms-server|10.10.10.10|:7788... connected.
HTTP request sent, awaiting response... 200 OK
Length: 21,161 (21K) [application/octet-stream]

100%[=========================================================================================================================================================>] 21,161        --.--K/s

19:21:11 (936.09 KB/s) - `getAgentImage' saved [21161/21161]


oracle@server:/tmp>

Then we give exec permissions

oracle@server:/tmp> chmod +x getAgentImage






Now we can run it to show which agent software can be downloaded/installed

oracle@server:/tmp> ./getAgentImage -showPlatforms


Platforms       Version
Linux x86       12.1.0.3.0
Linux x86-64    12.1.0.3.0
IBM AIX on POWER Systems (64-bit)       12.1.0.3.0
Linux x86-64    12.1.0.4.0
Linux x86-64    12.1.0.1.0


Using the various switches we can now pull the software down using the download_only flag (by default it will try to install the agent as well):

./getAgentImage LOGIN_USER=sysman LOGIN_PASSWORD=password PLATFORM="Linux x86-64" VERSION=12.1.0.4.0 -download_only

/bin/chmod: changing permissions of `/tmp': Operation not permitted

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  237M  100  237M    0     0  40.1M      0  0:00:05  0:00:05 --:--:-- 44.6M
test of /tmp/agent.zip OK
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 7939k  100 7939k    0     0  2417k      0  0:00:03  0:00:03 --:--:-- 3605k
Command: /usr/bin/curl  https://oms-server.e-ssi.net:4901/em/install/getAgentImage?user=sysman&platform=Linux x86-64&version=12.1.0.4.0&script=download&host=server&type=pluginimage --insecure -o /tmp/plugin.zip -S --stderr /tmp/error.txt
test of /tmp/plugin.zip OK
  adding: plugin.zip (stored 0%)
Agent image downloaded successfully.


A quick check shows

ls -l

-rw-r--r-- 1 oracle oinstall 256914932 2014-09-08 19:28 agent.zip


And again we are at the point where agentdeploy.sh can be used

Finally "option 3" - the manual fudge

To be honest this was my original plan, before I realised that options 1 and 2 did a lot of the work for me......

In this case I downloaded the agent software onto the OMS server using emcli - and I then had to work out where to put this file under the Oracle Apache install to be able to just download it directly with wget :-)

I finally discovered I had to copy the file to

/oracle/gc_inst12104/WebTierIH1/config/OHS/ohs1/htdocs

Once it was there I could fetch the file with:

wget http://oms-server:7788/agentlinux64.zip
--21:14:57--  http://oms-server:7788/agentlinux64.zip
           => `agentlinux64.zip'
Resolving oms-server... 10.10.10.10
Connecting to oms-server|10.10.10.10|:7788... connected.
HTTP request sent, awaiting response... 200 OK
Length: 256,914,996 (245M) [application/zip]

100%[=========================================================================================================================================================>] 256,914,996   20.44M/s    ETA 00:00

21:15:26 (8.38 MB/s) - `agentlinux64.zip' saved [256914996/256914996]



And again we are at the point where agentdeploy can be used



So there you have it - three different ways to get to the same end goal. Not sure which I prefer to be honest - I'll leave the choice up to you.........
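
And just to close the loop, the final step in all three cases looks roughly like the following - a sketch only, with placeholder paths, hostname, port and registration password, and with the parameter names quoted from memory (check them against the earlier manual install post):

# unzip the agent image we downloaded and run the deploy script (all names/paths are placeholders)
unzip /tmp/rich/12.1.0.4.0_AgentCore_226.zip -d /tmp/agentimage
cd /tmp/agentimage
./agentDeploy.sh AGENT_BASE_DIR=/oracle/agent12c \
                 OMS_HOST=oms-server \
                 EM_UPLOAD_PORT=4900 \
                 AGENT_REGISTRATION_PASSWORD=welcome1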

Datapump export from standby/read only databases?



Occasionally I am reminded of the fact that the old exp tool still works against a standby or a database open read only, where the new and much improved datapump does not - datapump is not looking so clever now, they say....

Then I say:

ha - it can work, you just need to make use of a 'surrogate' database! - is that a new term I just invented there?

Anyway, the issue is that datapump has to write something to the database it's extracting from - the master table, which it uses to build and track its work - obviously in a read-only or standby database this isn't possible (I'm ignoring logical standbys before you get clever, and snapshot standbys for that matter).

So how do we do this then?

Well the answer lies in the surrogate comment i made earlier - we use a separate database to store the master table in and fetch all of the data through this surrogate using a network_link.

So anyway - let's restate the issue - when you run datapump normally against a read-only database you see this message:

expdp user/pass full=y

Export: Release 11.2.0.3.0 - Production on Tue Sep 16 19:07:14 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
ORA-31626: job does not exist
ORA-31633: unable to create master table "PUMPY.SYS_EXPORT_FULL_05"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 1020
ORA-16000: database open for read-only access


So to get round it you just need another database with a database link to the read-only system. This can be on the same (or a different) server, but the version must be within the normal version rules for expdp. I would always suggest using the exact same version as the db you want to extract from.

So let's switch to this surrogate database and run these steps to set things up:

SQL> create public database link pumpy connect to user identified by pass using 'tns alias of actual source we want to extract from.world';

Database link created.


This assumes of course that this user already exists in your read-only database.....
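
Before kicking off the export it's worth a quick sanity check that the link actually reaches the read-only database - something along these lines (the second query assumes the link user can see v$database):

select * from dual@pumpy;

select open_mode from v$database@pumpy;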

Now the db link is there, we just run datapump in the normal way - the only extra parameter is the network_link one:

 expdp / network_link=pumpy dumpfile=surrogate.dmp full=y

This then carries on as normal (as if you were exporting from the actual source) - but the master table is created in the surrogate and the data/metadata are all pulled over the db link.

Export: Release 11.2.0.3.0 - Production on Tue Sep 16 19:20:57 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Starting "OSAORACLE"."SYS_EXPORT_FULL_02":  /******** network_link=pumpy dumpfile=surrogate.dmp full=y
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 1.894 GB
Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PROFILE
Processing object type DATABASE_EXPORT/SYS_USER/USER
Processing object type DATABASE_EXPORT/SCHEMA/USER
Processing object type DATABASE_EXPORT/ROLE
Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE

 --- some content cut out

. . exported "SYSTEM"."REPCAT$_USER_PARM_VALUES"             0 KB       0 rows
Master table "OSAORACLE"."SYS_EXPORT_FULL_02" successfully loaded/unloaded
******************************************************************************
Dump file set for OSAORACLE.SYS_EXPORT_FULL_02 is:
  /oracle/11.2.0.3.1.DB/rdbms/log/surrogate.dmp
Job "OSAORACLE"."SYS_EXPORT_FULL_02" successfully completed at 19:16:33


And there you have it...

Enjoy!







Bill and Ted's excellent datapump



Completely pointless post really but I'll add it anyway.

When I was doing an import today I just happened to take a look at the external table definition the import process was loading from.





So the DW (datapump worker) process is just doing an insert into ... select blah, blah from an external table - I wondered what this external table definition actually looked like, so I used dbms_metadata to find out:

select dbms_metadata.get_ddl('TABLE','ET$0043A3CD0001') from dual;

which returned

CREATE TABLE "ORACLE"."ET$0043A3CD0001"
   (    "PRICING_RESULT_CONTRACT" NUMBER(10,0),
        "PRICING_RESULT_PRICE" BINARY_DOUBLE,
        "PRICING_RESULT_OFFICIAL_PRICE" BINARY_DOUBLE,
        "PRICI__ULT_OFFICIAL_PRICE_DATE" DATE,
        "PRICING_RESULT_KEY" VARCHAR2(3900),
        "PRICING_RESULT_MODEL_KIND" NUMBER(10,0),
        "PRICING_RESULT_PRICING_TAG" NUMBER(10,0),
        "PRICI__T_EFFECTIVE_PRICING_TAG" NUMBER(10,0),
        "PRICI__ESULT_INITIAL_TIMESTAMP" BINARY_DOUBLE,
        "PRICING_RESULT_FINAL_TIMESTAMP" BINARY_DOUBLE,
        "PRICING_RESULT_USER" NUMBER(10,0),
        "ID" NUMBER(10,0),
        "VERSION" NUMBER(19,0),
        "PRICING_RESULT_CURRENCY" CLOB,
        "PRICI__ESULT_OFFICIAL_CURRENCY" CLOB,
        "PRICING_RESULT_ANNOTATION" CLOB,
        "PRICING_RESULT_BID_ASK" CLOB,
        "PRICING_RESULT_DESCRIPTION" CLOB,
        "PRICING_RESULT_COMMENT" CLOB,
        "PRICI__RESULT_MODEL_PARAMETERS" CLOB,
        "PRICI__KET_DATA_TRANSFORMATION" CLOB,
        "PRICI__ESULT_MARKET_DATA_QUERY" CLOB,
        "PRICI__ESULT_MARKET_DATA_ITEMS" CLOB,
        "PRICING_RESULT_EXTRAS" CLOB,
        "PRICING_RESULT_EXTRAS_SUMMARY" CLOB,
        "PRICI__MANAGED_STEPS_UNIQUEIDS" CLOB,
        "PRICING_RESULT_DATE" CLOB,
        "PRICING_RESULT_ADJUSTMENTS" CLOB,
        "PRICING_RESULT_PROPERTIES" CLOB,
        "PRICING_RESULT_CREDIT_SPREAD" CLOB
   )
   ORGANIZATION EXTERNAL
    ( TYPE ORACLE_DATAPUMP
      DEFAULT DIRECTORY "DATA_PUMP_DIR"
      ACCESS PARAMETERS
      ( DEBUG = (0 , 0) DATAPUMP INTERNAL TABLE "LEXIFI_MONSTER"."PRICING_RESULTS"
JOB ( "ORACLE","SYS_IMPORT_SCHEMA_01",1)
 WORKERID 1 PARALLEL 1 VERSION '11.2.0.1' ENCRYPTPASSWORDISNULL  COMPRESSION DISABLED  ENCRYPTION DISABLED     )
      LOCATION
       ( 'bogus.dat'
       )
    )
   REJECT LIMIT UNLIMITED


So it seems to be doing some funky internal stuff - what amused me though was that the dummy file it's using was called bogus.dat - someone still stuck in the 90's wrote this code maybe.... :-)


Getting to apex via the wrong port?



Today we've had lots of 'fun' with our firewall setup and Apex - we had originally set up port 1099, which worked from one office location, but we planned to move to a different port to be accessible from all locations. This move was done, but it turned out not all locations could access the new port.


We had a situation where either one team could work or the other team could work - but not both....



The firewalls will eventually get sorted, but we came up with a short-term fix to resolve this using port forwarding.

What this does is reroute traffic from one port to another - so in our case we set up a tunnel that sent all traffic from port 1099 to port 18080. This means that either port will work, as everything ends up going to port 18080 in the end, but port 1099 also appears to be active.

To set this up I had to enable the Apex server to ssh back to itself without a password (to avoid typing this on the command line) and then use the ssh command to set up a permanent port forward between the two ports - to do this I ran this command as the oracle user:

ssh -fNL server-name:1099:server-name:18080 server-name

(this needs to be started with a nohup at the front – otherwise when I log out the forwarding dies)

This is just saying: forward any traffic received on port 1099 on to port 18080 - which makes both URLs work.
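
Putting the nohup comment above together with the command, the full invocation that survives a logout looks something like this (server-name is a placeholder, and the netstat line is just a quick check that something is now listening on 1099):

# start the tunnel so it survives logout (-f backgrounds ssh once it has authenticated)
nohup ssh -fNL server-name:1099:server-name:18080 server-name

# confirm port 1099 is now listening
netstat -tln | grep 1099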

So now we can allow all users to work even though apex is really only running on port 18080.

A useful trick!

Tuning the elephant



These past couple of weeks I was contacted about a performance issue with some plsql code in one of our live systems. The batch job was taking well over an hour to run where previously (allegedly) it was running 'much' faster than that.

OK, I said, I'll have a look - thinking it was just going to be a simple piece of code.

It turned out the problem code was several thousand lines long, with calls to other procedures too.... ever wished you'd said no.......

So how to go about fixing such a 'large' problem?

The first thing I did was run the plsql block with debug mode switched on and wrapped in the plsql profiler procedure calls (well, this was all done via my PL/SQL Developer GUI - other tools are of course available.....). This breaks down the whole plsql run and gives you a nice breakdown of where the time is being spent - as DBAs we rarely use this, it's more aimed at developers, but it's surprisingly easy and nice to use.
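
If you don't have a GUI to hand, the same sort of breakdown can be had from SQL*Plus with DBMS_PROFILER - a rough sketch only, where the batch call is a placeholder for the real code and the plsql_profiler_* tables are assumed to have been created with proftab.sql beforehand:

-- start a profiler run, execute the problem code, then stop the run
declare
  l_run_id binary_integer;
begin
  dbms_profiler.start_profiler(run_comment => 'batch job run', run_number => l_run_id);
  my_batch_package.run_batch;   -- placeholder for the real batch call
  dbms_profiler.stop_profiler;
  dbms_output.put_line('profiler run id: ' || l_run_id);
end;
/

-- then see where the time went (total_time is in nanoseconds)
select u.unit_name, d.line#, d.total_occur, d.total_time
  from plsql_profiler_units u, plsql_profiler_data d
 where d.runid = u.runid
   and d.unit_number = u.unit_number
 order by d.total_time desc;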

This quickly showed me that it was in fact one piece of SQL inside the thousands of lines of plsql causing the issue.

So the problem just got easier right? Well yes and no....

The statement was an insert append with multiple unions and joins, fetching from views and nested tables...... all in all about 600 lines long (it would be much more were it not for the views).

So how to deal with that?

Well, first of course I look at the explain plan (from the executed code of course - not some approximation).

Now the plan was 103 lines long, which apart from anything else is hard to read, let alone debug....

Now this is the really difficult bit - looking at the plan and trying to work out why Oracle is doing badly. A good pointer is always to look at estimated rows vs actual rows - this is often the key factor in fixing these things - if stats are bad then Oracle is going to make the wrong choices.

However, in this case the stats looked OK and the plan did look sort of alright - I tried creating some indexes I thought might help and a couple of mviews which I thought would maybe remove some of the direct workload - however nothing seemed to be helping.

The volumes in the tables were all fairly large, and full table scans and hash joins were largely being used - from knowing the data and what the query was trying to do (and this is perhaps the most valuable input you can have when tuning a query) this approach looked OK - in fact it looked like it should be using hash joins more than it was.

The system was set up with auto PGA, which I generally do everywhere; however, in this specific case we needed a bigger hash_area_size to improve the hashing speed and to also give Oracle a nudge that hashing is the right way to go here (a large hash area does affect the explain plan the optimizer comes up with). Now this can be fiddled around with using underscore parameters under auto PGA - but I didn't want to affect any more of the system, as that was working fine.

So instead I reverted to the old style memory settings - just for my session - to do this I just added the following lines onto the plsql block:

  execute immediate 'alter session set workarea_size_policy=MANUAL';
  execute immediate 'alter session set sort_area_size=2048000000';
  execute immediate 'alter session set hash_area_size=2048000000';


This makes just the session executing the plsql revert to the old memory setting style, and I increased the sort and hash area sizes up to their max (2GB).

Now when the plsql is run we get it completed in just under 14 minutes.

Now tuning this was no mean feat and a lot of trial and error got me to this point - I've maybe made it sound like it was just an easy process - it wasn't by any means.

As with any big problem you need to break it down to target what is actually wrong - there are lots of tools to do this now.

Tuning SQL is not some kind of mystical art - it just takes a lot of logical thinking, really understanding your data and a lot of trial and error. Over time you get better at it from experience learned along the way. I would advise always starting with the simple stuff - don't jump into something hundreds of lines long as your first tuning exercise - you'll likely get very frustrated and not learn much.

So the way to tune an elephant is the same way you eat one - a little bit at a time... :-)

It's a trap!



I'll avoid going into a whole load of Star Wars metaphors here; what I wanted to mention is that sometimes the simplest things can make you fall into a trap that could be disastrous...

In our case all we wanted to do (luckily) was just list the details of all the files modified in a directory in the last week.

Now getting the files is simple enough:

find /znextract/logs  -mtime -1
~~~~~~~ lots of file name removed here
/znextract/logs/znextract.log.XCOTRD.BXY783
/znextract/logs/znextract.log.XCOTRD.BXY784
/znextract/logs/znextract.log.XCOTRDHDR.BXY783
/znextract/logs/znextract.log.XCOTRDHDR.BXY784


This returns about 80 lines (notice I say lines and not files here....).

If we want to get the full details of the files we just string some extra options on to the find command


 find /znextract/logs  -mtime -1  -exec ls -l {} \;
~~~~~~~~loads of output removed here
-rw-r--r--   1 zainet   staff           346 Sep 24 13:39 /znextract/logs/znextract.log.XCOTRD.BXY783
-rw-r--r--   1 zainet   staff           346 Sep 24 13:42 /znextract/logs/znextract.log.XCOTRD.BXY784
-rw-r--r--   1 zainet   staff           349 Sep 24 13:39 /znextract/logs/znextract.log.XCOTRDHDR.BXY783
-rw-r--r--   1 zainet   staff           349 Sep 24 13:42 /znextract/logs/znextract.log.XCOTRDHDR.BXY784






Now the thing that confused me here (when the script had worked perfectly well on another server) was that appending the -exec switches returned thousands of files, as if it had ignored the initial -mtime flag....

So what's going on?

Well, after some confusion and a brief flirtation with the idea that it may be a bug, one of my colleagues spotted the 'trap' here.

On the original server all of the directories had 'old' modification times on them, so they would never show up in the find command I was using. On this server, however, the directory modification time was also in the last 7 days, so the directory was being returned in the list as well...

An ls -l of a directory therefore shows all the files......

To fix this we just add the -type switch to only display files

 find /znextract/logs  -mtime -1 -type f -exec ls -l {} \;
~~so the correct number of lines/files are now shown as the output
-rw-r--r--   1 zainet   staff           346 Sep 24 13:39 /znextract/logs/znextract.log.XCOTRD.BXY783
-rw-r--r--   1 zainet   staff           346 Sep 24 13:42 /znextract/logs/znextract.log.XCOTRD.BXY784
-rw-r--r--   1 zainet   staff           349 Sep 24 13:39 /znextract/logs/znextract.log.XCOTRDHDR.BXY783
-rw-r--r--   1 zainet   staff           349 Sep 24 13:42 /znextract/logs/znextract.log.XCOTRDHDR.BXY784



It's just a good job we didn't -exec rm -rf .........


So cloud control - what's my database PSU?



Keeping track, in an automated way, of all the database PSU versions applied across the entire database estate can be a little tricky, time consuming, or both. Cloud control has the facility to be able to give us all this information - however, what is available by default is not perfect, for a few reasons:

1) Anything 'inventory' related seems to be part of extra cost option packs - we don't have these packs and I guess most shops don't - this is pretty cost prohibitive to try and use
2) Querying the underlying tables/views that contain the data used in the cloud control screen is possible - but again this may require a licence - I'm not sure of the exact details of what does and doesn't need a licence. Maybe even looking at the data requires a licence (e.g. AWR/ASH inside the database)
3) Even if you have the licences, the screen may not do exactly what you want, and the repository tables/views are not the easiest to work with. I also couldn't find a simple list of the PSUs to be able to use for reporting purposes (that's not to say it isn't there - I just couldn't find it)

Anyway, to collect this information while bypassing all of the above issues, I decided to collect it myself using metric extensions and then build a report based on that collected data.

I've shown before how to make use of metric extensions, but I'll document it again here as I did this one with plsql rather than straight SQL and it was a little tricky to get correct - so I think it may be useful for others.

1) First up we go to Enterprise->Monitoring->Metric Extensions and fill in some basic details (this is a view after I created it rather than while I was doing it - but you can see the relevant details)


2) Now we get onto the second screen (which is where I had all the trouble). Here I had to define some plsql that would return the current PSU in the database - the approach I came up with seems to work in every case I tried it on, but I can't guarantee it's 100% correct.......

To help anyone who wants to copy this method, here is the plsql in text format:

declare
  v_version  varchar2(80);
  v_comp     varchar2(80);
  complete_v varchar2(80);
  l_output1  number;
  l_output2  varchar2(80);
begin
  dbms_utility.db_version(v_version, v_comp);
  OPEN :1 for
    select 1, maxer
      from (select max(psus) as maxer
              from (select replace(comments, 'PSU ', '') as psus
                      from sys.registry$history
                     where comments like 'PSU%'
                       and replace(comments, 'PSU ', '') > v_version
                    union
                    select v_version as psus from dual));
  --dbms_output.put_line(complete_v);
end;


Make sure to set up the bind variables exactly as I have them to get it to work.

3) Now we map the return columns





4) Now we use the default credentials (dbsnmp unless you changed it), test if necessary and then finish up and create a deployable draft.

Once it's deployable, we then need to publish it and assign it to as many hosts as we see fit - this can be done by manually selecting all hosts or by assigning it to a monitoring template which is auto deployed.

So now we have the PSU version being collected and uploaded by the agent every day - we now want to be able to join to that data to create some reports.

You can see the real-time value straight away by browsing to the All Metrics section of cloud control for a particular database - the metric extensions stand out as they have a little logo next to them.




So we can see above the database in question is version 11.2.0.2.11.

Right, now let's find where these values are stored in the OMR. This took me a while to track down (and I know the tables reasonably well) - but that's a one-off activity.

But I've done the hard bit for you - the metric values (for string metrics at least) can be found in this view (I'm using the _LATEST version of it) and I'm joining this to the main targets table:

SELECT target_name, value
    FROM sysman.GC_METRIC_STR_VALUES_latest mv, MGMT_TARGETS MT
   WHERE ENTITY_GUID = MT.Target_Guid
     AND METRIC_COLUMN_NAME = 'PSU_VERSION'


If we then do another couple of joins we can link it to get an overall summary of some of the basics

with psus as
 (SELECT target_name, value
    FROM sysman.GC_METRIC_STR_VALUES_latest mv, MGMT_TARGETS MT
   WHERE ENTITY_GUID = MT.Target_Guid
     AND METRIC_COLUMN_NAME = 'PSU_VERSION')
select d.target_name,
       d.host_name,
       --d.characterset,
       psus.value,
       --d.startup_time,d.release,
       d.dbversion
--d.is_64bit,os.os_summary,os.freq,os.mem,os.disk,os.cpu_count
  from MGMT$DB_DBNINSTANCEINFO d, MGMT$OS_HW_SUMMARY os, psus
 where os.host_name = d.host_name
   and psus.target_name = d.target_name


I've commented out a few lines here as I just wanted to compare the version column information for the target with that returned from my metric - but you can add in whatever is useful.

So running the query above returns this data


So there we go - a relatively easy way to get a list of the PSU versions in every database in your estate - and it's automatically updated - and it even keeps history.......

Footnote - the data may not fill in immediately (I'm not sure when the rollup of the data into the tables/views is done to make them queryable) - but if you wait 24 hours everything will definitely be there.


Random datapump examples to show what can be done



This is a reference, as much for me as anyone else, of three recent exports that were done where we wanted schema copies with all of the structure (including plsql, user definitions etc) but only some of the data. The trick is to include a generic query clause at the top followed by some specific queries further down - that way, by default everything has the generic query applied unless it's overridden.

They are all very similar in technique but often it's useful to see multiple examples.

In all examples there were many tens of tables which we extracted with no rows - with only those with specific queries actually extracting any data.

Example 1
schemas=SCHEMA_NAME
flashback_time=systimestamp
QUERY="where 1=0"
QUERY=SCHEMA_NAME.CATEGORY:"where 1=1"
QUERY=SCHEMA_NAME.CATEGORY_VALUE:"where 1=1"
QUERY=SCHEMA_NAME.CHANGELOG:"where 1=1"
QUERY=SCHEMA_NAME.CONSTRAINED_STRUCTURE:"where 1=1"
QUERY=SCHEMA_NAME.DATA_CLASSIFICATION:"where 1=1"
QUERY=SCHEMA_NAME.FIXED_STRUCTURE_COLUMNS:"where 1=1"


Example 2
schemas=SCHEMA_NAME
flashback_time=systimestamp
QUERY="where 1=0"
QUERY=SCHEMA_NAME.CHANGELOG:"where 1=1"
QUERY=SCHEMA_NAME.EXECUTION_STATUS:"where 1=1"
QUERY=SCHEMA_NAME.HOST_CONFIG_TYPE:"where 1=1"
QUERY=SCHEMA_NAME.MODEL_EXECUTION_RESULT:"where 1=1"
QUERY=SCHEMA_NAME.MODEL_STATUS:"where 1=1"



Example 3
schemas=SCHEMA_NAME
flashback_time=systimestamp
QUERY="where 1=0"
QUERY=SCHEMA_NAME.APPLICATIONACCESS:"where ROLEID in (1,23,24,25)"
QUERY=SCHEMA_NAME.APPLICATIONBUNDLES:"where applicationid in (select applicationid from SCHEMA_NAME.APPLICATIONS where (applicationgroup = 0 OR applicationgroup = 41) and iscurrentversion=1)"
QUERY=SCHEMA_NAME.APPLICATIONGROUPS:"where applicationgroupid=41"
QUERY=SCHEMA_NAME.APPLICATIONPARAMETERS:"where applicationid in (select applicationid from SCHEMA_NAME.applications where (applicationgroup = 0 OR applicationgroup = 41) and iscurrentversion=1)"
QUERY=SCHEMA_NAME.APPLICATIONS:"where (applicationgroup = 0 OR applicationgroup = 41) and iscurrentversion=1"
QUERY=SCHEMA_NAME.GROUPOPERATIONS:"where 1=1"
QUERY=SCHEMA_NAME.GROUPS:"where 1=1"
QUERY=SCHEMA_NAME.OPERATIONS:"where 1=1"
QUERY=SCHEMA_NAME.RESOURCES:"where 1=1"
QUERY=SCHEMA_NAME.ROLEGROUPMEMBERSHIP:"where 1=1"
QUERY=SCHEMA_NAME.ROLES:"where 1=1"
QUERY=SCHEMA_NAME.SERVERAPPLICATIONROLES:"where 1=1"
QUERY=SCHEMA_NAME.SERVERAPPLICATIONS:"where 1=1"
QUERY=SCHEMA_NAME.USERROLES:"where roleid in (1,23,24,25)"
QUERY=SCHEMA_NAME.USERS:"where USERID in (SELECT userid from SCHEMA_NAME.USERROLES where ROLEID in (1, 23, 24, 25))"
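
These are all datapump parameter files, so assuming the first one was saved as example1.par the export would be kicked off with something like this (credentials, directory and file names are placeholders):

expdp system/password parfile=example1.par directory=DATA_PUMP_DIR dumpfile=schema_copy.dmp logfile=schema_copy.log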

Adding a new replicated schema into an existing streams setup



I love streams - I think it's one of the best features in the database - it's incredibly powerful and incredibly underused. I still prefer it to GoldenGate.

Anyway, what I wanted to share was how simple it was to add a schema into an already existing setup - I thought this might be a little tricky, but it was a piece of cake.

All I had to do was use one of the built-in plsql procedures that did everything for me, and everything worked fine first, er, second time.

The code just looked like this

begin
  DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
    schema_names                  => 'COMPLIANCE',
    source_directory_object       => 'DATA_PUMP_DIR',
    destination_directory_object  => 'DATA_PUMP_DIR',
    source_database               => 'DB1',
    destination_database          => 'DB2',
    capture_name                  => 'CAPTURE_COMP',
    propagation_name              => 'PROP_COMP',
    apply_name                    => 'APPLY_COMP',
    include_ddl                   => TRUE);
end;
/


So I just want to replicate the schema COMPLIANCE from DB1 to DB2 (with a few explicitly named streams components and including DDL).

So the thing chugs away for a few minutes and then says

PL/SQL procedure successfully completed.

Great?

However it didn't work - so it's helpful that it throws no error......

The problem was that the datapump import of the schema into DB2 failed, as the tablespace for this new user did not exist.

So I remove all of the config (using cloud control - lazy I know......) and add the tablespace in DB2.

And run the script again - and it again reports

PL/SQL procedure successfully completed.

But this time it really has worked and everything is replicating just fine.
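
A quick sanity check I'd run at this point, just to confirm the three named components are there and enabled (these are the standard streams dictionary views):

select capture_name, status from dba_capture;
select propagation_name, status from dba_propagation;
select apply_name, status from dba_apply;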

I'm quite impressed by how easy that was - I'm pretty sure doing that in GG is not so simple......

I'm missing a transform....



We've recently copied one of our schemas from a tablespace using TDE (i.e. wallets etc) to another system where we didn't have the ASO licence.

The issue we had is that the datapump dump file was created fine with the data unencrypted (it tells you it's doing this with a nice warning at the end):

ORA-39173: Encrypted data has been stored unencrypted in dump file set.

All fine - that's what I wanted.

Now, when I come to import the table and data into my other system, I get this problem:

Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39083: Object type TABLE:"SCHEMA"."BINARY_BUCKET_ELEMENT" failed to create with error:
ORA-28365: wallet is not open
Failing sql is:
CREATE TABLE "SCHEMA"."XX" ("XX_ID" NUMBER(18,0) NOT NULL ENABLE, "XX_MD5_HASH" VARCHAR2(20 BYTE), "XX_DATA" BLOB ENCRYPT USING 'AES128''SHA-1') PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255  STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "XX -- then rest of DDL truncated


The problem of course being the clause:

"XX_DATA" BLOB ENCRYPT USING 'AES128''SHA-1')

That clause needs the encryption (wallet) setup in place - which clearly I haven't got. This is a real pain, as there is no transform option I can apply that will strip it out (even in 12c) - it's part of the column definition, not the storage clause.

The only option I'm left with is to pre-create the table and load with the truncate/append option and then worry about adding in all indexes/constraints etc manually afterwards.
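
A rough sketch of that workaround, with made-up table and column names standing in for the real ones - first pre-create the table on the target without the ENCRYPT clause:

create table schema_name.xx (
  xx_id       number(18,0) not null,
  xx_md5_hash varchar2(20 byte),
  xx_data     blob
);

and then load just the data into it:

impdp system/password dumpfile=expdat.dmp directory=DATA_PUMP_DIR tables=SCHEMA_NAME.XX content=DATA_ONLY table_exists_action=truncate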

I think a small enhancement is needed to expdp/impdp to allow this clause to be manipulated - otherwise it's a real pain.

Next version perhaps......?

Datapump and partitions - some examples



A commonly asked question I see is around the use of datapump for moving partitions around, and what is and isn't possible. I decided to do some small testing for myself to illustrate some of what is (and isn't) possible.

The examples below are based on a very simple list partitioned table and hopefully it's going to get the points across.


So first up we create a very basic list partitioned table with a couple of indexes and 4 rows of data.

CREATE TABLE demo(
      pkcol   NUMBER PRIMARY KEY,
      partcol NUMBER)
  PARTITION BY LIST (partcol) ( 
       PARTITION p1 VALUES (1),
       PARTITION p2 VALUES (2),
       PARTITION p3 VALUES (3));

OPS$ORACLE@DEMODB>create index loc_idx on demo(partcol) local;

Index created.


OPS$ORACLE@DEMODB>insert into demo values (1,1);

1 row created.

OPS$ORACLE@DEMODB>insert into demo values (2,2);

1 row created.

OPS$ORACLE@DEMODB>insert into demo values (3,3);

1 row created.

OPS$ORACLE@DEMODB>insert into demo values (4,1);

1 row created.

OPS$ORACLE@DEMODB>commit;

Commit complete.


Now let's do an export of just P1 from that table:

[oracle@server-name]:DEMODB:[~]# expdp / tables=demo:p1

Export: Release 11.2.0.3.0 - Production on Fri Oct 3 19:59:28 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Starting "OPS$ORACLE"."SYS_EXPORT_TABLE_03":  /******** tables=demo:p1
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
. . exported "OPS$ORACLE"."DEMO":"P1"                    5.429 KB       2 rows
Master table "OPS$ORACLE"."SYS_EXPORT_TABLE_03" successfully loaded/unloaded
******************************************************************************
Dump file set for OPS$ORACLE.SYS_EXPORT_TABLE_03 is:
  /oracle/11.2.0.3.0.DB/rdbms/log/expdat.dmp
Job "OPS$ORACLE"."SYS_EXPORT_TABLE_03" successfully completed at 19:59:41


Now let's see what metadata actually got put into the file:


 impdp / sqlfile=test.sql

Import: Release 11.2.0.3.0 - Production on Fri Oct 3 20:00:17 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Master table "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "OPS$ORACLE"."SYS_SQL_FILE_FULL_01":  /******** sqlfile=test.sql
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Job "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully completed at 20:00:19

If we display the content of that file we see that it's the complete DDL of everything, and not just the partition DDL. If you think about it, this is the only way it could really do it.

[oracle@server-name]:DEMODB:[/oracle/11.2.0.3.0.DB/rdbms/log]# cat test.sql
-- CONNECT OPS$ORACLE
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: TABLE_EXPORT/TABLE/TABLE
CREATE TABLE "OPS$ORACLE"."DEMO"
   (    "PKCOL" NUMBER,
        "PARTCOL" NUMBER
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
  STORAGE(
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSTEM"
  PARTITION BY LIST ("PARTCOL")
 (PARTITION "P1"  VALUES (1) SEGMENT CREATION IMMEDIATE
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
  PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSTEM" ,
 PARTITION "P2"  VALUES (2) SEGMENT CREATION IMMEDIATE
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
  PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSTEM" ,
 PARTITION "P3"  VALUES (3) SEGMENT CREATION IMMEDIATE
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
  PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSTEM" ) ;
-- new object type path: TABLE_EXPORT/TABLE/INDEX/INDEX
CREATE INDEX "OPS$ORACLE"."LOC_IDX" ON "OPS$ORACLE"."DEMO" ("PARTCOL")
  PCTFREE 10 INITRANS 2 MAXTRANS 255
  STORAGE(
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) LOCAL
 (PARTITION "P1"
  PCTFREE 10 INITRANS 2 MAXTRANS 255 LOGGING
  STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
  PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSTEM" ,
 PARTITION "P2"
  PCTFREE 10 INITRANS 2 MAXTRANS 255 LOGGING
  STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
  PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSTEM" ,
 PARTITION "P3"
  PCTFREE 10 INITRANS 2 MAXTRANS 255 LOGGING
  STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
  PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSTEM" ) PARALLEL 1 ;

  ALTER INDEX "OPS$ORACLE"."LOC_IDX" NOPARALLEL;
-- new object type path: TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
ALTER TABLE "OPS$ORACLE"."DEMO" ADD PRIMARY KEY ("PKCOL")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
  STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
  PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSTEM"  ENABLE;
-- new object type path: TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
DECLARE I_N VARCHAR2(60);
  I_O VARCHAR2(60);
  NV VARCHAR2(1);
  c DBMS_METADATA.T_VAR_COLL;
  df varchar2(21) := 'YYYY-MM-DD:HH24:MI:SS';
 stmt varchar2(300) := ' INSERT INTO "SYS"."IMPDP_STATS" (type,version,flags,c1,c2,c3,c5,n1,n2,n3,n4,n5,n6,n7,n8,n9,n10,n11,n12,d1,cl1) VALUES (''I'',6,:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,NULL,:14,:15,NULL,:16,:17)';
BEGIN
  DELETE FROM "SYS"."IMPDP_STATS";
  i_n := 'LOC_IDX';
  i_o := 'OPS$ORACLE';
  EXECUTE IMMEDIATE stmt USING 0,I_N,NV,NV,I_O,0,0,0,0,0,0,0,0,NV,NV,TO_DATE('2014-10-03 19:58:11',df),NV;
  i_n := 'LOC_IDX';
  i_o := 'OPS$ORACLE';
  EXECUTE IMMEDIATE stmt USING 0,I_N,'P1',NV,I_O,0,0,0,0,0,0,0,NV,NV,NV,TO_DATE('2014-10-03 19:58:11',df),NV;
  i_n := 'LOC_IDX';
  i_o := 'OPS$ORACLE';
  EXECUTE IMMEDIATE stmt USING 0,I_N,'P2',NV,I_O,0,0,0,0,0,0,0,NV,NV,NV,TO_DATE('2014-10-03 19:58:11',df),NV;
  i_n := 'LOC_IDX';
  i_o := 'OPS$ORACLE';
  EXECUTE IMMEDIATE stmt USING 0,I_N,'P3',NV,I_O,0,0,0,0,0,0,0,NV,NV,NV,TO_DATE('2014-10-03 19:58:11',df),NV;

  DBMS_STATS.IMPORT_INDEX_STATS('"' || i_o || '"','"' || i_n || '"',NULL,'"IMPDP_STATS"',NULL,'"SYS"');
  DELETE FROM "SYS"."IMPDP_STATS";
END;
/

So quite a lot of extra stuff in there. Now let's take that file and load in 'just' a single partition:

impdp / tables=demo:p2

Import: Release 11.2.0.3.0 - Production on Fri Oct 3 20:02:46 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
ORA-39002: invalid operation
ORA-39164: Partition OPS$ORACLE.DEMO:P2 was not found.


First interesting point here - the file clearly contains the full DDL for the table, but something about the file content means I can't import P2 as I didn't export it - so the data is being tagged in some way to make that check happen. So expdp is doing something slightly more than just select * from table partition (pxxx).

OK - now let's 'just' load in the partition we did export:

[oracle@server-name]:DEMODB:[/oracle/11.2.0.3.0.DB/rdbms/log]# impdp / tables=demo:p1

Import: Release 11.2.0.3.0 - Production on Fri Oct 3 20:03:20 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Master table "OPS$ORACLE"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "OPS$ORACLE"."SYS_IMPORT_TABLE_01":  /******** tables=demo:p1
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "OPS$ORACLE"."DEMO":"P1"                    5.429 KB       2 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Job "OPS$ORACLE"."SYS_IMPORT_TABLE_01" successfully completed at 20:03:21



This loads fine - but you'll see the partition is loaded under the table data section, not some special partition section. This means the full table DDL was run and we have all the partitions - which, when you think about it, is the only way it could work - so a select from both P1 and P2 is valid.

OPS$ORACLE@DEMODB>select * from demo partition (p2);

no rows selected

OPS$ORACLE@DEMODB>select * from demo partition (p1);

     PKCOL    PARTCOL
---------- ----------
         1          1
         4          1

Right, let's clear the table down

OPS$ORACLE@DEMODB>truncate table demo;

Table truncated.




Now we try and 'just' load that partition again 

[oracle@server-name]:DEMODB:[/oracle/11.2.0.3.0.DB/rdbms/log]# impdp / tables=demo:p1

Import: Release 11.2.0.3.0 - Production on Fri Oct 3 20:04:58 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Master table "OPS$ORACLE"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "OPS$ORACLE"."SYS_IMPORT_TABLE_01":  /******** tables=demo:p1
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39151: Table "OPS$ORACLE"."DEMO" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Job "OPS$ORACLE"."SYS_IMPORT_TABLE_01" completed with 1 error(s) at 20:04:59

And of course it skips everything as the table already exists - if we just want to load the data we can use the DATA_ONLY setting of the CONTENT parameter.


 impdp / tables=demo:p1 content=DATA_ONLY

Import: Release 11.2.0.3.0 - Production on Fri Oct 3 20:05:53 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Master table "OPS$ORACLE"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "OPS$ORACLE"."SYS_IMPORT_TABLE_01":  /******** tables=demo:p1 content=DATA_ONLY
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "OPS$ORACLE"."DEMO":"P1"                    5.429 KB       2 rows
Job "OPS$ORACLE"."SYS_IMPORT_TABLE_01" successfully completed at 20:05:56

So what if we had no partition p1 - it was a newly added partition in an existing table that we were moving into our copy table somewhere else? Let's drop the partition and try to import.

OPS$ORACLE@DEMODB>alter table demo drop partition p1;

Table altered.

 impdp / tables=demo:p1

Import: Release 11.2.0.3.0 - Production on Fri Oct 3 20:08:02 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Master table "OPS$ORACLE"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "OPS$ORACLE"."SYS_IMPORT_TABLE_01":  /******** tables=demo:p1
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39151: Table "OPS$ORACLE"."DEMO" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Job "OPS$ORACLE"."SYS_IMPORT_TABLE_01" completed with 1 error(s) at 20:08:03
And it fails as the table already exists of course - so let's try with the append option.....


impdp / tables=demo:p1 table_exists_action=append
Import: Release 11.2.0.3.0 - Production on Fri Oct 3 20:08:35 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Master table "OPS$ORACLE"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "OPS$ORACLE"."SYS_IMPORT_TABLE_01":  /******** tables=demo:p1 table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
Table "OPS$ORACLE"."DEMO" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-39242: Unable to export/import TABLE_DATA:"OPS$ORACLE"."DEMO":"P1" due to table attributes.
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Job "OPS$ORACLE"."SYS_IMPORT_TABLE_01" completed with 1 error(s) at 20:08:37


So there's an obscure error - but with an obvious cause - P1 does not exist (I don't know why it just doesn't say that).

So how do we deal with this case - a new partition in the source which doesn't exist in the destination system? Well, one option is shown below:

impdp / tables=demo:p1 partition_options=departition

Import: Release 11.2.0.3.0 - Production on Fri Oct 3 20:23:43 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Master table "OPS$ORACLE"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "OPS$ORACLE"."SYS_IMPORT_TABLE_01":  /******** tables=demo:p1 partition_options=departition
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "OPS$ORACLE"."DEMO_P1"                      5.429 KB       2 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
ORA-39083: Object type INDEX failed to create with error:
ORA-14016: underlying table of a LOCAL partitioned index must be partitioned
Failing sql is:
CREATE INDEX "OPS$ORACLE"."I_LOC_IDX_P1" ON "OPS$ORACLE"."DEMO_P1" ("PARTCOL") PCTFREE 10 INITRANS 2 MAXTRANS 255  STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) LOCAL (PARTITION "P1" PCTFREE 10 INITRANS 2 MAXTRANS 255 LOGGING  STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
ORA-31684: Object type CONSTRAINT:"OPS$ORACLE"."SYS_C0058428" already exists
Job "OPS$ORACLE"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 20:23:45

Couple of errors here but the main table load has worked - the option I chose here, 'departition', turns the P1 partition into a new standalone table - by default named TABLENAME_PARTNAME, so DEMO_P1 in this case - some of the later DDL then fails.

We can see, though, that the table is created fine

OPS$ORACLE@DEMODB>select * from demo_p1;

     PKCOL    PARTCOL
---------- ----------
         1          1
         4          1


Now if we want to get this into the demo table as a partition we need to use partition exchange - but first we need to create a 'real dummy' partition in the table that we can swap with.

OPS$ORACLE@DEMODB>alter table demo add partition p1  values (1);

Table altered.


So that's added ok and is empty

OPS$ORACLE@DEMODB>select * from demo partition (p1);

no rows selected


So let's give it a try - we exchange partition to just swap the 2 objects - the dummy partition becomes a standalone table and the standalone table becomes partition p1 (all that happens is a switch in the data dictionary)


OPS$ORACLE@DEMODB>alter table demo exchange partition p1 with table demo_p1;
alter table demo exchange partition p1 with table demo_p1
*
ERROR at line 1:
ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION

And it fails - because of the DDL that failed during the import - as we can see the table definitions are different (DEMO_P1 is missing the NOT NULL that comes with the primary key)

OPS$ORACLE@DEMODB>desc demo_p1
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 PKCOL                                              NUMBER
 PARTCOL                                            NUMBER

OPS$ORACLE@DEMODB>desc demo
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 PKCOL                                     NOT NULL NUMBER
 PARTCOL                                            NUMBER


So let's add in the missing primary key and index.

OPS$ORACLE@DEMODB>alter table demo_p1 add primary key (pkcol);

Table altered.


OPS$ORACLE@DEMODB>CREATE INDEX "OPS$ORACLE"."I_LOC_IDX_P1" ON "OPS$ORACLE"."DEMO_P1" ("PARTCOL");

Index created.


Now the table matches

OPS$ORACLE@DEMODB>desc demo_p1
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 PKCOL                                     NOT NULL NUMBER
 PARTCOL                                            NUMBER



So let's try the switch again

OPS$ORACLE@DEMODB>alter table demo exchange partition p1 with table demo_p1;

Table altered.


OK - that looks good - a quick query will confirm the data is where we expect

OPS$ORACLE@DEMODB>select * from demo partition (p1);

     PKCOL    PARTCOL
---------- ----------
         1          1
         4          1

OPS$ORACLE@DEMODB>select * from demo_p1;

no rows selected


And there we go - we loaded a new partition into an existing table with datapump - well, with datapump and a little extra help......
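To recap the whole dance in one place, this is roughly the sequence that got the new partition in - a sketch based on the steps above, using the names from this demo:

-- 1) import the exported partition from the dump as a standalone table (creates DEMO_P1)
--    impdp / tables=demo:p1 partition_options=departition
-- 2) make the standalone table structurally identical to the target table
alter table demo_p1 add primary key (pkcol);
create index i_loc_idx_p1 on demo_p1 (partcol);
-- 3) add an empty partition with the right value and swap the two objects over
alter table demo add partition p1 values (1);
alter table demo exchange partition p1 with table demo_p1;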

Finding cluster ip address on SLES



Slightly off topic here, and I'm not sure if it's the same story when using Oracle clusterware (I guess it is - but I have nowhere to test it out for the moment), but it seems that good old ifconfig does not actually display cluster IP addresses in its output on Linux. I don't know if this is a new thing or if it has always been this way, but it was a surprise to me when I'm used to seeing this information displayed on AIX.

As an example, on AIX the following command shows me all the IP addresses running on the system


ifconfig -a
en8: flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),PSEG,CHAIN>
        inet x.x.x.x netmask 0xfffff800 broadcast x.x.x.x
        inet x.x.x.x netmask 0xfffff800 broadcast x.x.x.x
        inet x.x.x.x netmask 0xfffff800 broadcast x.x.x.x
        inet x.x.x.x netmask 0xfffff800 broadcast x.x.x.x5
        inet x.x.x.x netmask 0xfffff800 broadcast x.x.x.x
        inet x.x.x.x netmask 0xfffff800 broadcast x.x.x.x
 some output removed here......


So we have a normal IP address and multiple cluster addresses - all fine

However when I try this on Linux I get

ifconfig -a
Absolute path to 'ifconfig' is '/sbin/ifconfig', so it might be intended to be run only by user with superuser privileges (eg. root).
-bash: ifconfig: command not found


Ok - that's annoying to start with - so I try the full path

/sbin/ifconfig -a
bond0     Link encap:Ethernet  HWaddr A0:B3:CC:EB:95:58
          inet addr:x.x.x.x  Bcast:x.x.x.x  Mask:255.255.255.0
           UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:992747840 errors:0 dropped:0 overruns:0 frame:0
          TX packets:846653127 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:162594677125 (155062.3 Mb)  TX bytes:376461468132 (359021.6 Mb)


So only one address is shown - when I know there is an extra one

It seems this is normal behaviour and if you want to see the other addresses you have to use this command

ip addr show bond0

9: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether a0:b3:cc:eb:95:58 brd ff:ff:ff:ff:ff:ff
    inet x.x.x.x/24 brd x.x.x.x scope global bond0
    inet x.x.x.x/24 brd x.x.x.x scope global secondary bond0
          valid_lft forever preferred_lft forever


And then you can see them all - so it's easy once you know, but it's yet another subtle difference between operating systems to remember
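If you don't know which interface the cluster address lives on, something like this (a quick sketch) lists every address across all interfaces, secondary ones included:

# all IPv4 addresses on all interfaces - cluster/secondary addresses show up here too
ip -4 addr show | grep 'inet '
# or just the secondary (cluster) ones
ip -4 addr show | grep secondary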

Which DB is Apex running out of?




With the use of Apex growing and growing inside our organisation at the moment I've been wondering: when we connect to an Apex url (and I'm talking about using EPG, the embedded plsql gateway, here - i.e. no apache/weblogic etc, just running it out of the database), how do we actually know which database we are connected to?

So for example if we connect to http://server:7777/apex - which database is serving us up the pages?

In our case we have servers with up to 12 databases on - many of which can be running apex.

Now you may think on the face of it this is a very easy thing to find out - but it seems that it's not - at least not without logging in to the Apex instance itself and then running some kind of query, or inferring it from some objects/schemas that are visible - but what if I don't want to do that and I just want to directly link a port to a database?

Now you still may be thinking again this is easy and I've missed a trick - we just run this SQL against every database

select DBMS_XDB.GETHTTPPORT from dual; 

Whichever database returns 7777 is the winner right?

Well, kind of - but what if you have multiple databases with 7777 configured - which one is actually the working one? The one that started first, you may think (as it grabbed the port) - but what if there is some misconfiguration somewhere or the listener was restarted - it's not definitive..... Actually this whole process is very annoying - you don't get any kind of error if you pick a port that is already in use - it just seems to silently fail.
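For what it's worth, a rough loop like this will at least gather the configured port from every instance on the box (a sketch - it assumes a single ORACLE_HOME on the PATH, OS authentication and an accurate /etc/oratab) - but as above, a matching port still isn't proof of which database actually grabbed it:

# report the configured EPG http port for every SID listed in /etc/oratab
for sid in $(grep -v '^#' /etc/oratab | awk -F: 'NF {print $1}'); do
  port=$(ORACLE_SID=$sid sqlplus -s / as sysdba <<'EOF'
set pagesize 0 feedback off heading off
select dbms_xdb.gethttpport from dual;
EOF
)
  echo "$sid : configured EPG port = $port"
done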

So how to find it out?

After a bit of digging I found a way of doing it, but it's still not as straightforward as I would like.....

So here goes

The first thing we do is go into lsnrctl and do the following:

LSNRCTL> show trc_level
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
LISTENER parameter "trc_level" set to off
The command completed successfully
LSNRCTL> set trc_level user
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
LISTENER parameter "trc_level" set to user
The command completed successfully






This enables a basic degree of tracing in the listener.

Once the trace is active I visit the url http://server:7777/apex

Then I switch tracing off and find the location of the file that was created

LSNRCTL> set trc_level OFF
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
LISTENER parameter "trc_level" set to off
The command completed successfully
LSNRCTL> show trc_file
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
LISTENER parameter "trc_file" set to ora_25305_140243129435552.trc
The command completed successfully
LSNRCTL> show trc_directory
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
LISTENER parameter "trc_directory" set to /oracle/diag/tnslsnr/server/listener/trace
The command completed successfully


Now I look at the contents of this trace file

vi /oracle/diag/tnslsnr/server/listener/trace/ora_25305_140243129435552.trc

Trawling through that file I find this section

2014-10-07 18:17:01.655473 : nttcnr:connected on source ipaddr x.x.x.x port 7777
2014-10-07 18:17:01.655485 : nttcnr:connected on destination ipaddr x.x.x.x port 60341
2014-10-07 18:17:01.655496 : nttvlser:valid node check on incoming node x.x.x.x
2014-10-07 18:17:01.655508 : nttvlser:Accepted Entry: x.x.x.x
2014-10-07 18:17:01.655531 : nttcon:set TCP_NODELAY on 62
2014-10-07 18:17:01.655544 : nsopen:transport is open
2014-10-07 18:17:01.655569 : nsopen:global context check-in (to slot 42) complete
2014-10-07 18:17:01.655611 : nsanswer:deferring connect attempt; at stage 9
2014-10-07 18:17:01.655632 : nstoClearTimeout:ATO disabled for ctx=0x0xa92c60
2014-10-07 18:17:01.655670 : nstoUpdateActive:Active timeout is -1 (see nstotyp)
2014-10-07 18:17:01.655687 : nstoControlATO:ATO disabled for ctx=0x0xa92c60
2014-10-07 18:17:01.655739 : nsglbgetRSPidx:returning ecode=0
2014-10-07 18:17:01.655804 : nsglbgetSdPidx:secondary protocol=4
2014-10-07 18:17:01.655831 : nsc2addr:(ADDRESS=(PROTOCOL=tcp)(HOST=x.x.x.x)(PORT=14217))


The key line being the last one - the connection has been routed to port 14217 - this is the port the database dispatcher process is listening on for apex requests.

Now it's just a simple case of using lsof to map this port to a process

lsof |grep 14217

ora_d000_  1067     oracle   13u     IPv6         1853344907         0t0        TCP *:14217 (LISTEN)




OK, so the vital name part didn't quite fit - but now we just do a ps -ef to locate the full process name

[oracle@server]::[~]# ps -ef |grep 1067
oracle    1067     1  0 Aug21 ?        00:00:48 ora_d000_APEXDB
oracle    1653 12834  0 18:21 pts/1    00:00:00 grep 1067



So there we can see the result - APEXDB

So we've tracked through from a connection on port 7777 to the actual destination database it connects to

There is probably more than one way to do this but this worked for me........
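If you find yourself doing this a lot, the last two steps can be squashed together - a minimal sketch (14217 being the dispatcher port pulled out of the listener trace):

# map the dispatcher port from the listener trace straight to the owning instance
pid=$(lsof -t -i TCP:14217 | head -1)
ps -p "$pid" -o args=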





Sqlplus meets the cloud.....



Love sqlplus but looking to make use of the cloud.....?


This 'functionality' has been around for a long time (maybe even prior to 9i - I didn't have anything old enough running where I could test it) but I've never seen anyone (other than me) actually ever use it. That could be because it's not that useful, and to be honest I've never really used it for anything meaningful, but it is occasionally handy and it's nice to know the feature is available....

So what is this mysterious cloud feature i speak of?

Well how would it be if you could run sql scripts that are stored in the cloud directly in a sqlplus session - wouldn't that just be amazing (well maybe amazing is stretching it).

Well you already can - just execute the script like this

I create a dummy sql file (test.sql) and stick it on a webserver (sorry - 'cloud server') somewhere - the contents of test.sql are as follows:

select sysdate from dual;

(original eh?)

Now if we log on to sqlplus we can do this

SQLPLUS>@http://some-random-cloudserver/test.sql

SYSDATE
--------------------
10-OCT-2014 18:21:41


groundbreaking stuff.....

So in theory you could just store all your scripts somewhere in the internet cloud and access them from anywhere (assuming you can reach the webserver from your db server).

For me at least external sites (clouds) are difficult to reach from db servers as everything is pretty much blocked - however you could use an internal webserver (cloud) which can be reached.
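If you want to try the internal 'cloud' idea you don't even need a proper webserver - a throwaway sketch (python 2 era, and the directory name is just an example):

# serve a directory of sql scripts over http on port 8000
cd /home/oracle/scripts
python -m SimpleHTTPServer 8000
# then from any database server that can reach this box:
#   SQLPLUS>@http://yourserver:8000/test.sql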

Calling some total stranger's sql files remotely maybe isn't such a hot idea though - there could be all sorts of nasty things lurking in the cloud......

You think I shoehorned enough mentions of cloud into this... :-)

Reporting on Oracle licence usage......



Now I'm on dodgy ground talking about this topic and I'm caveating this right at the start of this post. Do not trust ANY information found on random blog sites to do with licensing of Oracle databases - the only source you can trust is the Oracle licence team themselves.

This post is about a short report I created from the cloud control OMS repository to give some information about the licence usage across all the databases it knows about. This is somewhat of a shortcut to get a lot of the information Oracle may require as part of an audit, but it will still require some manual work to collect everything.

The query at the end of the post will generate that report, but I want to lay out some further caveats about the content of the report before I share it - so read this bit carefully....

1) The report relies on information gathered from mgmt$db_featureusage - this in turn collects from the DBA_FEATURE_USAGE_STATISTICS view, so it inherits all the restrictions/mistakes that view has
2) Partitioning does seem to be reported correctly as far as I can tell
3) For ASO (Advanced Security Option) I know we use TDE and securefile encryption - both of these seem to report OK - however there are many other uses of this option that may or may not show up
4) For ACO (Advanced Compression Option) I know we are only using securefile compression and deduplication - I am not checking for 'OLTP' style compression (which doesn't seem to be recorded)
5) For diag/tuning (reported as DIAG) - I just list everything as used if the db is newer than 9i, as there seems to be no effective way to report whether it's really used or not (for example, selecting from a view can count as usage....)
6) Any other options (RAC etc) are not in scope for this report, though they could be added with a little work
7) The cpu counts shown may not reflect the actual cpu count relevant for licensing - i.e. multipliers based on cpu type etc are ignored

So with all those caveats out of the way here is the SQL (I run it as sysman but any user with the right privileges should be able to use it)



with db_size as
 (select target_guid, sum(file_size) as dbsz
    from MGMT$DB_DATAFILES t
   group by target_guid)
select d.target_name,
       d.host_name,
       d.dbversion,
       d.is_64bit,
       (case
         when (select count(*)
                 from MGMT$DB_FEATUREUSAGE M
                where name = 'Partitioning (user)'
                  and currently_used = 'TRUE'
                  and M.database_name = d.target_name) > 0 THEN
          'TRUE'
         ELSE
          'FALSE'
       END) Partitioning,
       (case
         when (select count(*)
                 from MGMT$DB_FEATUREUSAGE M
                where name in ('Transparent Data Encryption',
                               'SecureFile Encryption (user)')
                  and currently_used = 'TRUE'
                  and M.database_name = d.target_name) > 0 THEN
          'TRUE'
         ELSE
          'FALSE'
       END) ASO,
       (case
         when (select count(*)
                 from MGMT$DB_FEATUREUSAGE M
                where name in ('SecureFile Deduplication (user)',
                               'SecureFile Compression (user)')
                  and currently_used = 'TRUE'
                  and M.database_name = d.target_name) > 0 THEN
          'TRUE'
         ELSE
          'FALSE'
       END) ACO,
       (case
         when (dbversion) not like '9.2%' THEN
          'TRUE'
         ELSE
          'FALSE'
       END) DIAG,
       db_size.dbsz / (1024 * 1024 * 1024) as "Database Size",
       os.os_summary,
       os.freq,
       os.mem,
       os.disk,
       os.cpu_count
  from MGMT$DB_DBNINSTANCEINFO d, MGMT$OS_HW_SUMMARY os, db_size
 where os.host_name = d.host_name
   and db_size.target_guid = d.target_guid;



This produces an output in the following format (screenshot not reproduced here - I blanked out the db name and hostname and didn't show all the columns off to the right, but you get the idea).

For our few hundred instances (a medium sized repository according to the new summary screen in cloud control) this report ran in under 1 second......
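If the audit team want it in a spreadsheet, the same query spools out of sqlplus as a csv easily enough - a quick sketch (licence_report.sql is assumed to hold the query above):

-- dump the report to a comma separated file
set colsep ',' pagesize 0 linesize 1000 trimspool on feedback off
spool licence_report.csv
@licence_report.sql
spool off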

So in summary don't trust what I've written here but maybe use it as a jump start in collecting the information

And finally again - don't trust what I've written here (I think I've caveated that enough now?)


ASM audit file overload



I posted a few weeks ago about adding a cloud control metric extension to monitor inodes. Inodes have come up again today as the filesystem filled up (I know, I know - the metric extension should have been set up and checking before that happened).

Anyway the problem was quickly fixed, but the issue that caused it needs addressing, so that's what I'll talk about here.

So the problem we had was this

df -i .
Filesystem                    Inodes   IUsed IFree IUse% Mounted on
/dev/mapper/rootvg-lv_oracle 2752512 2752512     0  100% /oracle
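Tracking down which directory is eating the inodes is just a case of counting files - something along these lines does the job (a rough sketch; adjust the starting point to your own layout):

# count files under each top level directory of the filesystem and show the worst offenders
for d in /oracle/*/; do echo "$(find "$d" -xdev -type f | wc -l) $d"; done | sort -n | tail -5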


A quick check reveals 99% of the files are in this directory

/oracle/product/12.1.0/grid/rdbms/audit 

This directory contains over 2 million tiny files all with content similar to this

Audit file /oracle/product/12.1.0/grid/rdbms/audit/+ASM_ora_4912_20141014124951016241143795.aud
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Automatic Storage Management option
ORACLE_HOME = /oracle/product/12.1.0/grid
System name:    Linux
Node name:      server
Release:        3.0.93-0.5-default
Version:        #1 SMP Tue Aug 27 08:17:02 UTC 2013 (925d406)
Machine:        x86_64
Instance name: +ASM
Redo thread mounted by this instance: 0 <none>
Oracle process number: 7
Unix process pid: 4912, image: oracle@server (TNS V1-V3)

Tue Oct 14 12:49:51 2014 +01:00
LENGTH : '144'
ACTION :[7] 'CONNECT'
DATABASE USER:[1] '/'
PRIVILEGE :[6] 'SYSASM'
CLIENT USER:[6] 'oracle'
CLIENT TERMINAL:[0] ''
STATUS:[1] '0'
DBID:[0] ''


This will be very familiar to most DBAs - it's logging every time privileged access is used - in this case SYSASM.

Looking more closely, one of these files is generated every second! Now initially I thought: is this cloud control going crazy? Then I discounted that - it's not going to be trying to connect every single second. The only real candidate then was the Oracle Restart software which is monitoring the ASM instance - there must be something configured there that is going crazy.

A quick hunt around reveals that the problem is the check interval applied to the ASM resource - it's every 1 second, which is a crazy amount of checking - this is shown below

 crsctl stat res ora.asm -p |grep ^CHECK_INTERVAL
CHECK_INTERVAL=1


So let's change that to something more sensible

crsctl modify resource ora.asm -attr "CHECK_INTERVAL=60"


And check that is set OK

 crsctl stat res ora.asm -p |grep ^CHECK_INTERVAL
CHECK_INTERVAL=60

That change is picked up straight away so it's going to log a lot less now - but we still need to clean up.

Annoyingly there seems to be nothing built in to deal with this (well, other than logging to syslog and dealing with it that way - in our setup with outsourced providers, though, this is too painful to contemplate). So we are reduced to old school cron jobs.

So I run this as a one off

 /usr/bin/find /oracle/product/12.1.0/grid/rdbms/audit -type f -mtime +14 -exec rm {} \;

and schedule the same thing in crontab every day at 12:00

# Below line is to housekeep ASM audit files generated by CRS checking processes
00 12 * * * /usr/bin/find /oracle/product/12.1.0/grid/rdbms/audit -type f -mtime +14 -exec rm {} \;
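A slightly tighter variant (an untested sketch - it assumes GNU find, as shipped with SLES) only touches the ASM audit files and lets find do the deleting itself rather than spawning an rm per file:

# only remove +ASM audit files older than 14 days
00 12 * * * /usr/bin/find /oracle/product/12.1.0/grid/rdbms/audit -name '+ASM*.aud' -type f -mtime +14 -delete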



So problem fixed - now I just need to roll it out on all the single instance ASM restart servers......




An unexpected sequel (the template of doom) ...



In an unexpected follow up to the hit post AUTOMATICALLY ASSIGNING MONITORING TEMPLATES TO DATABASES IN CLOUD CONTROL, I've something of a warning about properly testing templates and what they are actually going to do.....

We've recently had a couple of cases where tablespaces ran out of space (DBA 101, I know) - I put this down to an agent glitch, but when it happened a third time I started to dig deeper and found a terrible mistake in the whole template thing I'd done......

Now it seems I'd made the assumption (after doing much reading on the subject too, which makes this all the worse) that when a template is applied via the dynamic admin groups, anything you don't specify is 'left alone' - well it turns out this isn't the case.

If you apply a template directly (ignoring any special clever groups) then you are asked whether to apply only the metrics you edited or the complete set (screenshot not reproduced here).


So the default is - only apply the changes - anything I don't specify, just leave that alone. OK....

Now as this seems to be the default when you apply templates in this way, I kind of assumed it was the same for administration groups, and unless I missed a screen (or missed the option on the screen) there was no choice in the matter when this was set up (my memory could be playing tricks here of course).

But it would seem that they don't behave as I had expected - what actually happens is that when the template is applied all of the metrics continue to be collected (unless explicitly disabled in the template) - however - and this is a big HOWEVER - any thresholds associated with any of the database metrics are just nulled out! So any alerting based on anything other than what you explicitly set in the template is removed!

This can be seen when we go and look at the metrics screen for a database without the auto assigned template and one with it (screenshots not reproduced here).




As you can see all the thresholds are missing - however if you just casually browse the all metrics screen and pull back values all the information is there - just completely ignored by the alerting as no thresholds are set!

So anyway, after this shock revelation, all I had to do was copy the metrics I was actually interested in alerting on from an unchanged database into the template and then reapply the template to all the databases.

And that fixed it - and resulted in about 60 emails about tablespaces running out of space......

Now I'm not sure if there should be an option to choose how the templates are applied, or at least some warning about this - it just seems a little dangerous to me.....