
Another unexpected sequel (this time it's NLS)



A long time ago in a galaxy far, far away... I posted an (admittedly short) blog entry on the weirdest performance problem I'd ever seen. At the time I was very pleased we'd managed to sort the issue, as it was so obscure.

However, this week we've seen the same issue again (but with a further twist).

For those of you that didn't read the original post (that's most of you, I'm guessing, unless you clicked the link above), here's a quick recap.

You have a stored proc in a database (11.2.0.2) and two clients. From one client you call the stored proc passing in argument 'XXX'; from the other client you do exactly the same thing with the same argument.

Now all common sense would tell you these should execute the same way - it's within the database after all - but again we saw massively different performance profiles.

Client 1 ran in 1 second; client 2 ran in 90 minutes!

Thankfully, because we'd seen a similar issue before, I knew which track to go down to investigate this. Indeed the stored proc (well, the one statement within it) had a completely different execution plan, which caused the big difference in the timings. Unfortunately the plan for this statement is about 300 lines long and finding the exact point where the plans diverge was not easy (in fact I gave up), but essentially it was the same issue I'd discovered before: an NLS setting of German caused Oracle to come up with a different plan. I can only think the difference comes in at some step that involves sorting (where the different language comes into play - maybe a sort/group by step).

So how to fix it.

Well, I just assumed we'd fix it the same way as before - stick NLS_LANG in the registry and job done...

However, for the normal 11g client it was already set; the application in use, though, was using ODP.NET (which still completely confuses me - how it is set up, how it relates to a normal client, what is managed vs unmanaged, whether you need 32 or 64 bit, and so on).

So anyway, after much messing about with registry keys and config file settings we got nowhere - it just refused to use English. The only thing we didn't change was the locale of the actual Windows server, which we didn't want to do as it could have affected other applications (this was currently set to German and was clearly where ODP.NET was picking the setting up from).

So how do we resolve this?

Use PL/SQL, of course!

So right inside the start of the stored proc - directly after the BEGIN - we add this little nugget:

dbms_session.set_nls('NLS_LANGUAGE','ENGLISH');
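In context it looks something like this - a minimal sketch of a hypothetical procedure (the procedure name and body are made up; only the set_nls call is the actual change):

create or replace procedure some_report_proc(p_arg in varchar2) as
begin
  -- force the session language so the optimizer behaves the same
  -- regardless of the client's NLS_LANG setting
  dbms_session.set_nls('NLS_LANGUAGE','ENGLISH');
  -- ... original procedure logic continues here ...
  null;
end;
/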

And note it's NLS_LANGUAGE, not NLS_LANG - thanks to the developers for that one, which had me stumped for a while (I assume the NLS_LANG environment variable was shortened many years ago to meet the 8-character limits that used to exist).

What this does is override any client setting of NLS_LANG and force the session to use English - and it's magically fixed: the execution plan is now the fast one.

Now this could have been fixed with a logon trigger, execute immediate, some .NET change or indeed by changing the app server locale - this one line of PL/SQL seemed the lowest-impact approach though.

Not something I've ever used before - but nice to know it's there and works - and in this case it proved very useful!

impdp to remap one tablespace to many?



A recent forum question prompted me to write this post, as it may be useful to others.

The scenario is that you have two (or more) schemas that all share the same tablespace xx in the 'source' database. When this is loaded into another database, however, we want to keep the objects for schema xx in tablespace xx, but for schema xy we want to load them into xy (i.e. move them out of xx). With a default remap_tablespace this is not possible, as all objects get remapped - there is no 'apply just to this schema' option - well, at least not without running the import multiple times (roughly as sketched below).
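For comparison, the multiple-import alternative would be roughly this (a sketch only - the directory and dumpfile names are illustrative):

impdp / schemas=xx directory=dp dumpfile=expdat.dmp
impdp / schemas=xy directory=dp dumpfile=expdat.dmp remap_tablespace=xx:xy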

So how can we do this?

Here is a simple walk through example:

First up we create users xx and xy, who have all their objects in the same xx tablespace.

SYS@DB>create user xx identified by xx default tablespace xx;

User created.

SYS@DB>alter user xx quota unlimited on xx;

User altered.

SYS@DB>grant create table to xx;

Grant succeeded.

SYS@DB>create user xy identified by xy default tablespace xx;

User created.

SYS@DB>alter user xy quota unlimited on xx;

User altered.

SYS@DB>grant create table to xy;

Grant succeeded.

SYS@DB>create table xx.tab1(col1 number);

Table created.

SYS@DB>create table xy.tab1(col1 number);

Table created.

SYS@DB>

A simple query shows us that both tables for both schemas are in tablespace xx.

SYS@DB>select table_name,tablespace_name from dba_tables where owner in ('XX','XY');

TABLE_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
TAB1                           XX
TAB1                           XX

Now we unload everything for those schemas

 expdp / schemas=xx,xy reuse_dumpfiles=y

Export: Release 11.2.0.3.0 - Production on Mon Oct 20 17:51:10 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Starting "OPS$ORACLE"."SYS_EXPORT_SCHEMA_08":  /******** schemas=xx,xy reuse_dumpfiles=y
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "XX"."TAB1"                                    0 KB       0 rows
. . exported "XY"."TAB1"                                    0 KB       0 rows
Master table "OPS$ORACLE"."SYS_EXPORT_SCHEMA_08" successfully loaded/unloaded
******************************************************************************
Dump file set for OPS$ORACLE.SYS_EXPORT_SCHEMA_08 is:
  /oracle/11.2.0.3.0.DB/rdbms/log/expdat.dmp
Job "OPS$ORACLE"."SYS_EXPORT_SCHEMA_08" successfully completed at 17:51:57


Now in our destination database we pre-create the users and give them different default tablespaces (with appropriate quota)

SYS@DB>create user xx identified by xx default tablespace xx;

User created.

SYS@DB>grant create table to xx;

Grant succeeded.

SYS@DB>alter user xx quota unlimited on xx;

User altered.

SYS@DB>create user xy identified by xy default tablespace xy;

User created.

SYS@DB>grant create table to xy;

Grant succeeded.

SYS@DB>alter user xy quota unlimited on xy;

User altered.

Now comes the trick - the transform option specified here strips off the tablespace (and storage) clauses and lets the objects get created in each user's default tablespace:

impdp / schemas=xx,xy exclude=USER transform=segment_attributes:n

Import: Release 11.2.0.3.0 - Production on Mon Oct 20 17:54:10 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Master table "OPS$ORACLE"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "OPS$ORACLE"."SYS_IMPORT_SCHEMA_01":  /******** schemas=xx,xy exclude=USER transform=segment_attributes:n
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "XX"."TAB1"                                    0 KB       0 rows
. . imported "XY"."TAB1"                                    0 KB       0 rows
Job "OPS$ORACLE"."SYS_IMPORT_SCHEMA_01" successfully completed at 17:54:11

Now a quick query shows us the objects have moved

SYS@DB>select table_name,tablespace_name from dba_tables where owner in ('XX','XY');

TABLE_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
TAB1                           XX
TAB1                           XY

Useful sometimes, I guess - mainly if you have a lot of schemas to move from source to destination and you want to split them out like this.



Manual database upgrade from 12.1.0.1 to 12.1.0.2



Today I've come to do my first upgrade from 12.1.0.1 to 12.1.0.2. I always do these upgrades manually (the habit has kind of stuck from when you had to do it that way in the 'old' days).

So the first thing I normally do is check on Metalink ("My Oracle Support") for the manual upgrade steps.

And I couldn't find them; I just kept finding references to Cloud Control and even TimesTen, but nothing specific about the database upgrade I wanted to do.

So I turned to Google.

And I quickly found this link to the Oracle documentation, which covers all the steps:

http://docs.oracle.com/database/121/UPGRD/upgrade.htm#UPGRD12408

I'm not going to cover all the steps in detail - you can read that in the docs for yourself - I would, however, pick out a couple of points.

I was doing an upgrade of a multitenant database (CDB root, seed and a single PDB), so I followed the notes for that.

I found that between steps 9 and 10 there is a missing 'startup upgrade'.

When the upgrade finished (which was all fine, by the way) this line is recorded in the screen output:

Upgrade Summary Report Located in:
/oracle/12.1.0.2/cfgtoollogs/RICH/upgrade/upg_summary.log

Opening this file gives a nice summary of the CDB root, the seed and the PDB:

Oracle Database 12.1 Post-Upgrade Status Tool           10-22-2014 08:13:59
                             [CDB$ROOT:1]

Component                               Current         Version  Elapsed Time
Name                                    Status          Number   HH:MM:SS

Oracle Server                          UPGRADED      12.1.0.2.0  00:06:55
JServer JAVA Virtual Machine              VALID      12.1.0.2.0  00:01:56
Oracle Real Application Clusters     OPTION OFF      12.1.0.2.0  00:00:01
Oracle Workspace Manager                  VALID      12.1.0.2.0  00:01:11
OLAP Analytic Workspace                   VALID      12.1.0.2.0  00:00:18
Oracle OLAP API                           VALID      12.1.0.2.0  00:00:20
Oracle Label Security                     VALID      12.1.0.2.0  00:00:11
Oracle XDK                                VALID      12.1.0.2.0  00:00:36
Oracle Text                               VALID      12.1.0.2.0  00:00:23
Oracle XML Database                       VALID      12.1.0.2.0  00:00:52
Oracle Database Java Packages             VALID      12.1.0.2.0  00:00:12
Oracle Multimedia                         VALID      12.1.0.2.0  00:01:45
Spatial                                UPGRADED      12.1.0.2.0  00:02:00
Oracle Application Express                VALID     4.2.5.00.08  00:02:49
Oracle Database Vault                     VALID      12.1.0.2.0  00:00:11
Final Actions                                                    00:00:11
Post Upgrade                                                     00:01:36

Total Upgrade Time: 00:21:39 [CDB$ROOT]

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.09

Oracle Database 12.1 Post-Upgrade Status Tool           10-22-2014 08:33:48
                             [MARKER:3]

Component                               Current         Version  Elapsed Time
Name                                    Status          Number   HH:MM:SS

Oracle Server                          UPGRADED      12.1.0.2.0  00:09:46
JServer JAVA Virtual Machine              VALID      12.1.0.2.0  00:01:04
Oracle Real Application Clusters     OPTION OFF      12.1.0.2.0  00:00:01
Oracle Workspace Manager                  VALID      12.1.0.2.0  00:00:54
OLAP Analytic Workspace                   VALID      12.1.0.2.0  00:00:19
Oracle OLAP API                           VALID      12.1.0.2.0  00:00:15
Oracle Label Security                     VALID      12.1.0.2.0  00:00:04
Oracle XDK                                VALID      12.1.0.2.0  00:00:28
Oracle Text                               VALID      12.1.0.2.0  00:00:06
Oracle XML Database                       VALID      12.1.0.2.0  00:00:38
Oracle Database Java Packages             VALID      12.1.0.2.0  00:00:07
Oracle Multimedia                         VALID      12.1.0.2.0  00:00:56
Spatial                                UPGRADED      12.1.0.2.0  00:01:26
Oracle Application Express                VALID     4.2.5.00.08  00:01:40
Oracle Database Vault                     VALID      12.1.0.2.0  00:00:10
Final Actions                                                    00:00:10
Post Upgrade                                                     00:01:07

Total Upgrade Time: 00:19:20 [MARKER]

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.10

Oracle Database 12.1 Post-Upgrade Status Tool           10-22-2014 08:34:43
                             [PDB$SEED:2]

Component                               Current         Version  Elapsed Time
Name                                    Status          Number   HH:MM:SS

Oracle Server                             VALID      12.1.0.2.0  00:09:40
JServer JAVA Virtual Machine              VALID      12.1.0.2.0  00:00:57
Oracle Real Application Clusters     OPTION OFF      12.1.0.2.0  00:00:01
Oracle Workspace Manager                  VALID      12.1.0.2.0  00:00:51
OLAP Analytic Workspace                   VALID      12.1.0.2.0  00:00:18
Oracle OLAP API                           VALID      12.1.0.2.0  00:00:14
Oracle Label Security                     VALID      12.1.0.2.0  00:00:05
Oracle XDK                                VALID      12.1.0.2.0  00:00:26
Oracle Text                               VALID      12.1.0.2.0  00:00:06
Oracle XML Database                       VALID      12.1.0.2.0  00:00:39
Oracle Database Java Packages             VALID      12.1.0.2.0  00:00:08
Oracle Multimedia                         VALID      12.1.0.2.0  00:01:01
Spatial                                   VALID      12.1.0.2.0  00:01:26
Oracle Application Express                VALID     4.2.5.00.08  00:01:37
Oracle Database Vault                     VALID      12.1.0.2.0  00:00:12
Final Actions                                                    00:00:10
Post Upgrade                                                     00:01:04

Total Upgrade Time: 00:19:05 [PDB$SEED]

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.12

Upgrade Times Sorted In Descending Order

Total Upgrade Time: 00:21:39 [CDB$ROOT]
Total Upgrade Time: 00:19:20 [MARKER]
Total Upgrade Time: 00:19:05 [PDB$SEED]
Grand Total Upgrade Time:    [0d:0h:43m:20s]

The overall summary time is quite useful, and the new parallel upgrade seems to work well with the catctl script (we can see the seed and the PDB were done at the same time, based on the total versus the individual timings).

There is a step in the notes about checking whether catuppst.sql has been run - that check does not seem to be correct - I think it is more valid to search the upgrade log for this string to see if it ran OK:

COMP_TIMESTAMP POSTUP_BGN 

Before you run utlrp (via catctl), the following needs to be run to open all the PDBs (again missing from the steps):

SQL> alter pluggable database all open;

Pluggable database altered.
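As a final sanity check (not part of the documented steps, just a quick look at the standard data dictionary), the component versions can be confirmed against the summary report with:

select comp_id, version, status
  from dba_registry
 order by comp_id;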

Other than those minor doc issues (which I've fed back through the documentation comment process - we'll see how well that works, as I've never done that before) it was very smooth.

I'm sure DBUA would have done just as good a job, but I can't give up the command-line upgrades - I've converted to some GUIs, but I'm still stuck in the past with this one...



12.1.0.2 new feature - containers clause in SQL



Now, for those of you using the multitenant feature (which is a fairly small group at the moment, as far as I can make out), a new feature in 12.1.0.2 may be very useful.

This was possible in 12.1.0.1 via an undocumented process - see this post on that

In 12.1.0.2, though, it's official - to make use of the feature you use the new containers keyword.

So, for example, to query the dual table in all PDBs (well, all PDBs and the CDB root, but not the seed) you simply do the following:

SQL> select  * from containers(dual);

X          1
X          3

The output is suffixed with the container id (CON_ID) - so in this case 1 for the CDB root and 3 for the first non-seed PDB.

I can see this being really useful in environments that are heavily consolidated - a single SQL statement can generate reporting across a number of PDBs.
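For example, something along these lines - a sketch only, where app_user.orders is a made-up table that would need to exist (with the same structure) in the root and in each PDB you want included:

select con_id, count(*) as order_count
  from containers(app_user.orders)
 group by con_id
 order by con_id;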

I think 12.1.0.2 may be the release that makes multitenant really viable.


What tablespaces do I need to import a datapump dumpfile?



Sometimes when you receive a datapump export file from a third-party source and are asked to import it, you are given very little information about its contents - for example, the tablespace names and sizes you might need to create to hold all the objects.

This trick lets you see roughly what you will need before you do the import for real.

So the first thing we do is load just the master table from the dumpfile and make sure it doesn't get removed at the end of the job - we do this with the less commonly used parameters below:

impdp user/pass master_only=y keep_master=y directory=xx dumpfile=xx

Import: Release 11.2.0.3.0 - Production on Thu Oct 23 08:22:14 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Master table "OPS$ORACLE"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Job "OPS$ORACLE"."SYS_IMPORT_FULL_01" successfully completed at 08:22:15

You can see this runs really quickly (just 1 second).

Now we have the master table we can query it to find out what is in the file - in the simple case below we can see there is only one tablespace and the objects in it are estimated at only 1472 bytes (an empty table in this example):


OPS$ORACLE@DB>select OBJECT_TABLESPACE,sum(SIZE_ESTIMATE) from SYS_IMPORT_FULL_01 where object_tablespace is not null group by OBJECT_TABLESPACE;

OBJECT_TABLESPACE              SUM(SIZE_ESTIMATE)
------------------------------ ------------------
XX                                           1472
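The same master table can answer other questions too - for example, which schemas you would need to pre-create (a sketch, assuming the same 11.2-style master table column names; swap in the name of your own import job's table):

select distinct object_schema
  from SYS_IMPORT_FULL_01
 where object_schema is not null
 order by 1;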

You could of course just keep importing and then dropping users/objects until you've created everything you need - this just seems a bit more professional... :-)


Switching big(ish) to ASM



Some time ago I posted some notes on switching your entire database into ASM.

I've recently followed this myself to switch a much larger database than my demo, so I thought I'd post some stats about that process.

Now, the logfile/tempfile bits took negligible time really (as did the block change tracking file).

The only point of real interest was the datafiles themselves - the total size of the DB was 345 GB spread over 54 files.

The RMAN commands I used to create the image copy in ASM were as follows:

configure device type disk parallelism 4;
backup as copy database format '+DATA';

So I had 4 parallel sessions all doing copies - the total run time for this operation was 25 minutes - not bad.

Switching over to this then went very smoothly (other than the recovery wanting a tape session even though everything it needed was still on local disk) - anyway, it got there in the end.
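For reference, the datafile switch itself is roughly this sequence, run against the mounted database (a minimal sketch - the full walkthrough is in the earlier post linked above):

run {
  switch database to copy;
  recover database;
}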

The last thing to switch was the controlfile - the commands I tried to use from my example worked, but they just restored the controlfile back to its original location. I'm 99% sure this is exactly the process I used last time - I think the difference is that in my example I used the default controlfile name, and real databases of course use non-default paths.

To fix this I did:

startup nomount;
alter system reset control_files scope=spfile; (this sets it back to the default)

and then

rman> restore controlfile from 'path_to_filesystem_controlfile';

This then worked fine

All in all quite successful - now to finish off the other databases on the same server...

RMAN date format



Ever get annoyed that RMAN date formats just show the date and no time element? (I know I do.) Well, it's easily fixed.

Just run:

export NLS_DATE_FORMAT='dd-mon-yyyy hh24:mi:ss';

Then when you start your RMAN session all the dates are shown in this format:

 rman target=/

Recovery Manager: Release 11.2.0.2.0 - Production on Tue Oct 28 08:39:12 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DB(DBID=1234567)

RMAN> configure device type disk parallelism 4;
backup as copy database format '+DATA';
using target database control file instead of recovery catalog
new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters are successfully stored

RMAN>

Starting backup at 28-oct-2014 08:39:28
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=477 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=10 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=86 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=160 device type=DISK
channel ORA_DISK_1: starting datafile copy

Some stuff removed here........

Finished backup at 28-oct-2014 08:41:58

Starting Control File and SPFILE Autobackup at 28-oct-2014 08:41:58
piece handle=/oracle/DB/recovery_area/DB/autobackup/2014_10_28/o1_mf_s_862130518_b4yopq2d_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 28-oct-2014 08:42:01

As easy as that - and if you set it in your .profile it will always be set.

Dump file space exhausted?



Generally, if you see this error while running an export:

ORA-39095: Dump file space has been exhausted: Unable to allocate 8192 bytes

You would probably think: filesystem full, I'll free up some space and try again. However, that's not always the cause, as I found out today.

In my case the filesystem I was using had nearly 1 TB free (and loads of inodes, before you ask) but I still got the error after about 5 minutes - very odd.

So I tried the export again just to see what it would do a second time, and it failed again - but this time much more quickly. I thought perhaps there was some weird filesystem quota thing going on, so I copied a few large files to see what would happen. That, it turned out, was fine - so what was going on?

Well, here is my command line - see if you can spot the mistake:

expdp / directory=scratch parallel=4 keep_master=y full=y reuse_dumpfiles=y

Spotted it?

Well, if you look closely, I'm specifying parallel=4 but not actually listing a dumpfile with a wildcarded name (in fact I don't specify a name at all, so I just get one file called expdat.dmp) - so all the slaves want to use the same file.

You'd think this might just cause contention on the file (and of course it does) - but some of the locks it creates cause datapump to report that it can't allocate any space - which I suppose is true in a sense, as it can't get a lock on the file, but it surfaces as a rather misleading error.

Anyway, it's a known 'feature' - and the workaround is obvious: don't use parallel without multiple dumpfiles (see below)!
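In other words, name the dumpfile with the %U substitution variable so each parallel slave gets its own file - something like this (a sketch based on the failing command above):

expdp / directory=scratch parallel=4 dumpfile=expdat_%U.dmp keep_master=y full=y reuse_dumpfiles=y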

MOS reference note below:

https://support.oracle.com/epmos/faces/DocumentDisplay?id=433391.1

Squeezing info out of the master table



Now, the default log generated by datapump tells you enough of the basic detail to be useful (and is further enhanced in 12c); however, there is more information available that the log doesn't tell us.

This information can be squeezed out of the master table, where datapump keeps all its processing information, as long as you don't throw that table away (which unfortunately is the default).

So, first up, for it to be useful to us after the event we need to keep the table - to do this we simply specify:

keep_master=y

A simple extra switch on the export. If we do this, all that juicy metadata is left behind.
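For example, appended to an otherwise normal full export (the directory, dumpfile name and parallelism here are just placeholders):

expdp / full=y directory=scratch dumpfile=full_%U.dmp parallel=4 keep_master=y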

So assuming we did that what can we do with that information?

Well, here are a couple of examples - these two assume you did a full DB export and the master table is called OSAORACLE.SYS_EXPORT_FULL_06 (now, the chances of your table being called that are pretty much zero, so you'll need to replace it with the appropriate table name).

So the first example is this:


select object_path_seqno,
       object_type,
       completed_rows as number_exported,
       object_type_path,
       (completion_time - start_time) * 60 * 60 * 24 as elapsed
  from osaoracle.sys_export_full_06
 where process_order = -5
 order by object_path_seqno

From the output we can see how many objects of each type were extracted, the order they were processed in, and how long each extract took.

Tables are a little more tricky, as the metadata is split over a couple of rows - but we can sort that out with some basic analytics, so the next example is this:

select process_order,
       tab_owner,
       tab_name,
       obj_tbs,
       round((est_size / (1024 * 1024 * 1024)), 2) || ' GB' estimated_size,
       round((dump_length / (1024 * 1024 * 1024)), 2) || ' GB' actual_SIZE,
       unload_meth,
       completed_rows,
       elapsed_time
  from (select base_object_schema,
               object_name,
               (lag(object_name, 1) over(order by process_order)) as TAB_NAME,
               (lag(base_object_schema, 1) over(order by process_order)) as TAB_OWNER,
               (lag(object_tablespace, 1) over(order by process_order)) as obj_tbs,
               (lag(size_estimate, 1) over(order by process_order)) as est_size,
               (lag(unload_method, 1) over(order by process_order)) as unload_meth,
               completed_rows,
               elapsed_time,
               process_order,
               dump_length,
               unload_method
          from osaoracle.sys_export_full_06
         where object_type_path = 'DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA'
           and process_order >= 1)
 where object_name is null
 order by process_order





So with this we get some extra useful info: estimated vs actual size, tablespace names for objects, the unload method used and the elapsed time.

You might think this is all pretty pointless, but if you are trying to analyse where the time is spent on an export (and indeed an import, though different queries are needed there) it can be a very useful tool.


Apex 4.1 to 4.2.6 patch issues



This week we thought we had a simple Apex patch to do, but it didn't turn out that way...

So, after some time spent fixing the issues we saw when we did this in the dev/test environment (changes to session state protection - something introduced in a minor 4.1 patchset that we only hit after upgrading - caused a load of hassle), we wanted to move forward and upgrade the real system.

So we told all the users to come off the system and began running the upgrade script - to start with all looked fine, output like this (the big ORACLE banner really didn't paste well...):

.  ____   ____           ____        ____
. /    \ |    \   /\    /     |     /
.|      ||    /  /  \  |      |    |
.|      ||---    ----  |      |    |--
.|      ||   \  /    \ |      |    |
. \____/ |    \/      \ \____ |____ \____
.
. Application Express (APEX) Installation.
..........................................
.
... Checking prerequisites (MANUAL)
.

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.

And it chugged away for a while until we hit this...

create or replace package sys.wwv_dbms_sql authid current_user as
*
ERROR at line 1:
ORA-04021: timeout occurred while waiting to lock object

But all the users were out and nothing else should be using this?

Anyway, we let the script carry on, and after a few more issues it got to this point and finally gave up:

ERRORS EXIST!!!
...There are 558 errors in the log table!
...Select upgrade_error from WWV_FLOW_UPGRADE_PROGRESS to review errors.
-- Upgrade is complete -----------------------------------------
timing for: Upgrade
Elapsed: 01:31:14.52
VIII.   I N S T A L L    O R A C L E   A P E X   A P P L I C A T I O N S
define "^" (hex 5e)

FOO
------------
apxsqler.sql
...Internal messages
APPLICATION 4411 - Oracle APEX  System Messages and Native Types
Set Credentials...
begin
*
ERROR at line 1:
ORA-04063: package body "APEX_040200.WWV_FLOW_SECURITY" has errors
ORA-06508: PL/SQL: could not find program unit being called:
"APEX_040200.WWV_FLOW_SECURITY"
ORA-06512: at "APEX_040200.WWV_FLOW_API", line 16952
ORA-06512: at line 4

So what to do now - did we have to restore everything? Could we run it again?

Well, after a bit of checking it seemed the solution was easy - as the upgrade had left the original schema untouched (APEX_040100), we could simply drop the new one and try again - great! However, what was blocking the package from being replaced?

We decided to just run that package header in and see what was blocking it; we tracked down the Apex source file and ran it in, and sure enough it hung again. A bit of investigation then revealed that we had some old rogue connections as the ANONYMOUS account (we are using the EPG) that seemed to be holding locks. So we killed these off and the package replaced fine (the sort of query that tracks these down is sketched below).
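Something along these lines is one way to find the sessions involved (a sketch - the package name is the one from the error above):

select s.sid, s.serial#, s.username, s.program, d.type, d.mode_held
  from v$session s
  join dba_ddl_locks d on d.session_id = s.sid
 where d.name = 'WWV_DBMS_SQL';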

So now we just had to re-run... and this time it ran through fine and we see this kind of thing at the end:

timing for: Install Internal Applications
Elapsed: 00:13:20.62



Thank you for installing Oracle Application Express.

Oracle Application Express is installed in the APEX_040200 schema.

The structure of the link to the Application Express administration services is as follows:
http://host:port/pls/apex/apex_admin (Oracle HTTP Server with mod_plsql)
http://host:port/apex/apex_admin     (Oracle XML DB HTTP listener with the embedded PL/SQL gateway)

The structure of the link to the Application Express development interface is as follows:
http://host:port/pls/apex (Oracle HTTP Server with mod_plsql)
http://host:port/apex     (Oracle XML DB HTTP listener with the embedded PL/SQL gateway)



So we got there in the end - but a lesson in making sure everything really is out of the system when you do any kind of patching...

Flashback oddities



Occasionally flashback throws up the odd surprise (I'm just talking about database-level flashback here - not the myriad other 'related' bits that are all badged under the flashback banner).

I posted some time ago about an issue with the way it worked here, and I have a couple more interesting bits to add now.

The first point is around the need for 'flashback database' to be enabled - there is an interesting section in the manual you may have missed (https://docs.oracle.com/cd/E11882_01/backup.112/e10642/flashdb.htm#BRADV584) which explains that flashback database does not have to be on to use flashback database (say what?!). However, it behaves differently (I'll leave you to digest the differences).

The second point is a 'feature', I guess you'd call it, but a little dangerous I think. Following on from the revelation in point 1, the system will allow you to create a (non-guaranteed) restore point that is unusable (the guaranteed ones work OK) - take the example below.

First we don't have flashback on and we create a normal restore point

SQL> create restore point demo3;

Restore point created.




And the system is quite happy - no errors

So let's now try and go back to that point.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1403060224 bytes
Fixed Size                  2681464 bytes
Variable Size            1342178696 bytes
Database Buffers           50331648 bytes
Redo Buffers                7868416 bytes
Database mounted.
SQL>  flashback database to restore point demo3;
 flashback database to restore point demo3
*
ERROR at line 1:
ORA-38726: Flashback database logging is not on.


And we can't - the restore point is unusable. So why did it let us create it in the first place, when it can never be used?
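Incidentally, a quick query shows which restore points are guaranteed (and therefore actually usable without flashback logging turned on):

select name, guarantee_flashback_database, time
  from v$restore_point
 order by time;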

The third point is another 'feature' - unlikely to really cause a problem, though I guess it might be an annoyance at some point.

Take the example below - I want to tidy up some tablespaces (and I guess any physical change would cause the same issue) after I've created a guaranteed restore point (GRP).

So I create a GRP:

SQL> create restore point demo guarantee flashback database;

Restore point created.


Add a tablespace
 
SQL> create tablespace after_grp;

Tablespace created.

 
 Now try and drop it

SQL> drop tablespace after_grp;
drop tablespace after_grp
*
ERROR at line 1:
ORA-38881: Cannot drop tablespace AFTER_GRP on primary database due to
guaranteed restore points.


And we hit an obscure error - perfectly reasonable though, I guess - flashback can't cope if you physically remove files.

Downgrading with datapump



What happens if you need to do a logical extract from an Oracle 12c database and load it into an earlier version (11.2, for example) - is that possible?

Well, the short answer is yes. The longer answer is that you have to do the export with an additional parameter, and you will of course not be able to move everything back (for example, a new object type or option that only exists in 12c can't be exported and loaded into 11g). In the main, though, tables/indexes and the like should work just fine - so how do you go about it?

Well, first up we'll create a really basic user and table in 12c (12.1.0.2 here, but the exact versions don't matter too much).

SQL> create user demo identified by demo;

User created.

SQL> grant create session, unlimited tablespace,create table to demo;

Grant succeeded.

SQL> create table demo.demotab(col1 number);

Table created.

SQL> insert into demo.demotab values (1);

1 row created.

SQL> commit;

Commit complete.

SQL>

Now we just do a normal export (no special settings)

 expdp demo/demo schemas=demo dumpfile=12c.dmp directory=data_pump_dir

Export: Release 12.1.0.2.0 - Production on Mon Nov 10 10:10:47 2014

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "DEMO"."SYS_EXPORT_SCHEMA_01":  demo/******** schemas=demo dumpfile=12c.dmp directory=data_pump_dir
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 40 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "DEMO"."DEMOTAB"                            5.054 KB       1 rows
Master table "DEMO"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for DEMO.SYS_EXPORT_SCHEMA_01 is:
  /oracle/12.1.0.2/rdbms/log/12c.dmp
Job "DEMO"."SYS_EXPORT_SCHEMA_01" successfully completed at Mon Nov 10 10:11:25 2014 elapsed 0 00:00:38


At the same time we run the exact same command but with one extra option (version=11.2 - this writes the dump in 11.2 format):

expdp demo/demo schemas=demo dumpfile=12cversion11.dmp directory=data_pump_dir version=11.2

Export: Release 12.1.0.2.0 - Production on Mon Nov 10 10:12:00 2014

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "DEMO"."SYS_EXPORT_SCHEMA_01":  demo/******** schemas=demo dumpfile=12cversion11.dmp directory=data_pump_dir version=11.2
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 40 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
. . exported "DEMO"."DEMOTAB"                            5.054 KB       1 rows
Master table "DEMO"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for DEMO.SYS_EXPORT_SCHEMA_01 is:
  /oracle/12.1.0.2/rdbms/log/12cversion11.dmp
Job "DEMO"."SYS_EXPORT_SCHEMA_01" successfully completed at Mon Nov 10 10:12:22 2014 elapsed 0 00:00:21



So now we have both files, let's switch to an 11.2 database and try to load them.

SYS@11G>create directory demo as '/oracle/12.1.0.2/rdbms/log';

Directory created.

 impdp / directory=demo dumpfile=12c.dmp

Import: Release 11.2.0.2.0 - Production on Mon Nov 10 10:14:28 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning option
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-39142: incompatible version number 4.1 in dump file "/oracle/12.1.0.2/rdbms/log/12c.dmp"


As we might expect, the first file is in 12c format so cannot be loaded by the 11.2 impdp utility. Now let's try the file that was extracted in 'downgraded' format.

impdp / directory=demo dumpfile=12cversion11.dmp

Import: Release 11.2.0.2.0 - Production on Mon Nov 10 10:15:03 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning option
Master table "OPS$ORACLE"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "OPS$ORACLE"."SYS_IMPORT_FULL_01":  /******** directory=demo dumpfile=12cversion11.dmp
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
ORA-39083: Object type PROCACT_SCHEMA failed to create with error:
ORA-31625: Schema DEMO is needed to import this object, but is unaccessible
ORA-01435: user does not exist
Failing sql is:
BEGIN
sys.dbms_logrep_imp.instantiate_schema(schema_name=>SYS_CONTEXT('USERENV','CURRENT_SCHEMA'), export_db_name=>'EDJSON', inst_scn=>'11480476');COMMIT; END;
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39083: Object type TABLE:"DEMO"."DEMOTAB" failed to create with error:
ORA-01918: user 'DEMO' does not exist
Failing sql is:
CREATE TABLE "DEMO"."DEMOTAB" ("COL1" NUMBER) SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "SYSTEM"
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Job "OPS$ORACLE"."SYS_IMPORT_FULL_01" completed with 2 error(s) at 10:15:06


And it works, but with errors. The errors are the result of the demo user not having the exp_full_database role - this means it can't extract its own create user statement, which causes the errors above. Let's resolve that so we get a nice clean demo that works.

So we switch back to 12c and run this
 
SQL> grant exp_full_database to demo;

Grant succeeded.


Now re-run the extract now that we have the extra rights:

[oracle@server]:12c:[~]# expdp demo/demo schemas=demo dumpfile=12cversion11.dmp directory=data_pump_dir version=11.2 reuse_dumpfiles=y

Export: Release 12.1.0.2.0 - Production on Mon Nov 10 10:24:21 2014

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "DEMO"."SYS_EXPORT_SCHEMA_01":  demo/******** schemas=demo dumpfile=12cversion11.dmp directory=data_pump_dir version=11.2 reuse_dumpfiles=y
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 40 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
. . exported "DEMO"."DEMOTAB"                            5.054 KB       1 rows
Master table "DEMO"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for DEMO.SYS_EXPORT_SCHEMA_01 is:
  /oracle/12.1.0.2/rdbms/log/12cversion11.dmp
Job "DEMO"."SYS_EXPORT_SCHEMA_01" successfully completed at Mon Nov 10 10:24:46 2014 elapsed 0 00:00:25


The sharp-eyed among you may notice that there are now more things being exported (USER etc.).



Now we import this as before

impdp / directory=demo dumpfile=12cversion11.dmp

Import: Release 11.2.0.2.0 - Production on Mon Nov 10 10:25:09 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning option
Master table "OPS$ORACLE"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "OPS$ORACLE"."SYS_IMPORT_FULL_01":  /******** directory=demo dumpfile=12cversion11.dmp
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "DEMO"."DEMOTAB"                            5.054 KB       1 rows
Job "OPS$ORACLE"."SYS_IMPORT_FULL_01" successfully completed at 10:25:12



And there we have it - a 12c export loaded into an 11g database.

RMAN and deleting database image copies



After a number of database switches into ASM (which I discussed here), it was inevitable that I would try to switch a database into ASM that was already in ASM (too many similar-sounding database names).

The first stage of the switch process is just to take an image copy of the database and store it in ASM.

So you just run

backup as copy database format '+DATA';

So if the DB is already in ASM when you run that, the command still works - you just end up with another complete copy of the database inside ASM. Now, I didn't want this, as it eats up a lot of space and my backups are generally not image copies but RMAN backupsets on tape.

So now I've created this additional copy - how do I get rid of it to free up space in ASM?

Well, in a journey through little-used commands, here is what I did.

First up I need to see what full (image) copies we have (I removed a lot of the output here as it is just too much to paste):

RMAN> list copy of database;

List of Datafile Copies
=======================

Key     File S Completion Time Ckp SCN    Ckp Time
------- ---- - --------------- ---------- ---------------
36      1    A 11-NOV-14       8697152184577 11-NOV-14
        Name: +DATA/DB_sw/datafile/system.407.863340131
        Tag: TAG20141111T084112

21      1    A 27-OCT-14       8697151276482 27-OCT-14
        Name: /oracle/DB/oradata/DB_SW/datafile/o1_mf_system_7bhnffbq_.dbf

Now, what we have here is two complete copies of the database mentioned (I'm only displaying the system datafile, but it was all there). One copy is the old copy from the filesystem before I switched into ASM (which has since been removed from disk - but the database doesn't know that). The other is the copy I just created, which has the tag TAG20141111T084112, whereas the original filesystem copy has no tag at all.

I actually wanted to get rid of both complete copies, so I could have just run:

RMAN> delete copy of database;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=3 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=592 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=685 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=104 device type=DISK
List of Datafile Copies
=======================

Key     File S Completion Time Ckp SCN    Ckp Time
------- ---- - --------------- ---------- ---------------
36      1    A 11-NOV-14       8697152184577 11-NOV-14
        Name: +DATA/DB_sw/datafile/system.407.863340131
        Tag: TAG20141111T084112

21      1    A 27-OCT-14       8697151276482 27-OCT-14
        Name: /oracle/DB/oradata/DB_SW/datafile/o1_mf_system_7bhnffbq_.dbf


Do you really want to delete the above objects (enter YES or NO)? NO

But I said no, as I was interested to see how I could delete just one of the copies (which could be something you wanted to do). So how could that be done?

Deleting the tagged one is easy enough

RMAN> delete copy of database tag "TAG20141111T084112";

using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=8 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=592 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=685 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=104 device type=DISK
List of Datafile Copies
=======================

Key     File S Completion Time Ckp SCN    Ckp Time
------- ---- - --------------- ---------- ---------------
36      1    A 11-NOV-14       8697152184577 11-NOV-14
        Name: +DATA/DB_sw/datafile/system.407.863340131
        Tag: TAG20141111T084112


Do you really want to delete the above objects (enter YES or NO)? YES
deleted datafile copy
datafile copy file name=+DATA/DB_sw/datafile/system.407.863340131 RECID=36 STAMP=863340143

Deleted 10 objects

What if I wanted to delete only the other one, though?

With a bit of logic we can use the fact that it was created some time ago and do this:

RMAN> delete copy of database completed before "sysdate -7";

released channel: ORA_DISK_1
released channel: ORA_DISK_2
released channel: ORA_DISK_3
released channel: ORA_DISK_4
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=8 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=592 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=685 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=104 device type=DISK
List of Datafile Copies
=======================

Key     File S Completion Time Ckp SCN    Ckp Time
------- ---- - --------------- ---------- ---------------
21      1    A 27-OCT-14       8697151276482 27-OCT-14
        Name: /oracle/ETPOP/oradata/ETPOP_SW/datafile/o1_mf_system_7bhnffbq_.dbf

Do you really want to delete the above objects (enter YES or NO)? YES

RMAN-06207: WARNING: 10 objects could not be deleted for DISK channel(s) due
RMAN-06208:          to mismatched status.  Use CROSSCHECK command to fix status
RMAN-06210: List of Mismatched objects
RMAN-06211: ==========================
RMAN-06212:   Object Type   Filename/Handle
RMAN-06213: --------------- ---------------------------------------------------
RMAN-06214: Datafile Copy   /oracle/ETPOP/oradata/ETPOP_SW/datafile/o1_mf_system_7bhnffbq_.dbf

Now it's complaining, as I'd already removed it from the filesystem without telling Oracle.

Now I could mess about here with crosscheck etc. (the tidier route is sketched after the output below), but I don't want to do that, so I just force it:

RMAN> delete force copy of database completed before "sysdate -7";

released channel: ORA_DISK_1
released channel: ORA_DISK_2
released channel: ORA_DISK_3
released channel: ORA_DISK_4
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=8 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=592 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=685 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=104 device type=DISK
List of Datafile Copies
=======================

Key     File S Completion Time Ckp SCN    Ckp Time
------- ---- - --------------- ---------- ---------------
21      1    A 27-OCT-14       8697151276482 27-OCT-14
        Name: /oracle/DB/oradata/DB_SW/datafile/o1_mf_system_7bhnffbq_.dbf

Do you really want to delete the above objects (enter YES or NO)? YES
deleted datafile copy
datafile copy file name=/oracle/DB/oradata/DB_SW/datafile/o1_mf_system_7bhnffbq_.dbf RECID=21 STAMP=862089142

Deleted 10 objects
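For the record, the non-force route would have been roughly this (a sketch - the crosscheck marks the missing copy as expired, and the delete then removes it from the repository):

crosscheck copy of database;
delete expired copy of database;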

And there you go - a lesson in rarely used RMAN commands... Now back to switching the rest of the databases - making sure to double-check the ORACLE_SID this time...


SQL Server connection issues



A while back I wrote a short note on how to connect to SQL Server (I know, I know, it's not an Oracle post, but I'm allowed to do the odd SQL Server one now and again). This came in useful this week, as we had a connectivity issue that seemed to come out of nowhere. The understanding I got from the earlier post enabled me to diagnose and fix the issue quickly.

So the symptoms were: a connection that had been working for a couple of years suddenly stops working, and there have been 'no' changes anywhere (heard that one before, anyone?).

A quick look showed that we were getting this from the .NET application in IIS when it tried to start:

Server Error in '/NavigationSetup' Application.

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)

So basically a connection problem of some sort.

The connection string was a simple

connectionString="Data Source=SERVERNAME\NAMEDINSTANCE;Initial Catalog=dbname;User ID=username;Password=password

Now the first thing to check was username/password even though the error didn't really seem to be saying that - that was fine when tested on the db server itself (always best to rule that out).

Now, the app server box had the SQL management client installed, so we tried a connection from there and got the exact same error as from .NET - which implied some kind of network issue between the two.

We checked the basic name resolution and did a telnet test to port 1433 where this instance was running and that was all OK - so what could it be?

Well, this is where the original post came in useful. If you look at the connect string, we are using a named instance - so the connection has to talk to the SQL Browser service on the db server to be told which port the instance is running on (in this case 1433). This communication with the browser service uses port 1434 over UDP. It turned out that somehow the firewall rules had been changed and this was being blocked (as identified by the portqry tool).

OK - so an easy fix right - just get the firewall opened again? Well that's one solution but one that would take a lot of time to organize.

The other solution is to take the browser service out of the process and just tell SQL Server what port to connect on. We do this with a slightly modified connect string (note the tcp: prefix and the explicit port number):

Data Source= tcp:servername\namedinstance,1433;Initial Catalog=dbname;User ID=username;Password=password" 

Now when we connect, even though we still specify the named instance, it's essentially redundant - we are explicitly saying connect on port 1433.


DBMS_DATAPUMP import demo






We're currently looking at creating a repository of schema versions to be able to quickly deploy them to other systems - we want to build all of this in PL/SQL, so we needed to script up a way of doing datapump extracts. To keep things simpler we store the extracts as schemas rather than dumpfiles.

So how do we do this?

Well to create these schema copies and store them in the database we have to use impdp - expdp can only extract to a file. So we have to do an impdp and 'pull' the schemas in to create the copies.

This is going to be easier if I do an example, I think - so let's create a simple schema that we want to 'store' in our repository. Here's the SQL for that:

create user dpdemo identified by dpdemo;
grant create session,resource, unlimited tablespace to dpdemo;
create table dpdemo.demo(col1 number);
create index dpdemo.demoidx on dpdemo.demo(col1);


Now comes the clever bit - we create a database link from our 'schema repository' database to wherever the schema we want to duplicate lives (this could be a loopback link to the same database).



create database link demo connect to user identified by password using 'DBNAME';

Once that link is in place we just need to build the PL/SQL, which I have here:

 DECLARE
  l_dp_handle      NUMBER;
  l_last_job_state VARCHAR2(30) := 'UNDEFINED';
  l_job_state      VARCHAR2(30) := 'UNDEFINED';
  l_sts            KU$_STATUS;
  v_job_state      varchar2(4000);
BEGIN
  l_dp_handle := DBMS_DATAPUMP.open(operation   => 'IMPORT',
                                    job_mode    => 'SCHEMA',
                                    remote_link => 'DEMO',
                                    version     => 'LATEST');
  DBMS_DATAPUMP.add_file(handle    => l_dp_handle,
                         filename  => 'test.log',
                         directory => 'DATA_PUMP_DIR',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE,
                         reusefile => 1);
  DBMS_DATAPUMP.METADATA_FILTER(l_dp_handle, 'SCHEMA_LIST', '''DPDEMO''');
  DBMS_DATAPUMP.METADATA_FILTER(l_dp_handle,
                                'EXCLUDE_PATH_EXPR',
                                'IN (''INDEX'', ''SYNONYMS'',''GRANTS'',''STATISTICS'')');
  DBMS_DATAPUMP.METADATA_REMAP(l_dp_handle,
                               'REMAP_SCHEMA',
                               'DPDEMO',
                               'REL10A');
  DBMS_DATAPUMP.start_job(l_dp_handle);
  DBMS_DATAPUMP.WAIT_FOR_JOB(l_dp_handle, v_job_state);
  DBMS_OUTPUT.PUT_LINE(v_job_state);
END;
/


So what is this doing?

1) It says I want to do a schema-level import using the DEMO database link
2) I want everything logged to test.log
3) I only want the DPDEMO schema
4) I'm excluding some object types (just to show how this can be done)
5) I want the copied schema to be called REL10A

And there we go - the job runs fine and creates this log file on the server:

Starting "USER"."SYS_IMPORT_SCHEMA_01":
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . imported "REL10A"."DEMO"                                  0 rows
Job "USER"."SYS_IMPORT_SCHEMA_01" successfully completed at 13:44:30







Oracle lends Microsoft a hand.....






In a kind of random post about a few separate things, I'm going to show how I used SQL Developer and a couple of tricks to run a report against multiple SQL Server databases (and by 'database' I kind of mean a schema in Oracle terms, though SQL Server has that term too... just think of them as schemas).

So anyway, we have a SQL Server instance hosting HPQC (that's HP Quality Center - a testing tool); this instance contains a number of project databases (schemas...), each with the same tables. The data contained is relevant to that project only, but the structure is the same. It could, I guess, have been structured as just one database/schema with all the data in it, but HP chose not to do that and the whole schema is replicated per project - there are pros and cons to either approach.

The requirement is to generate a very basic summary report that shows a count for all the test runs there have been in each project in the past year.

The query to generate this against an individual database is this:

select count(*) from run where (run.RN_EXECUTION_DATE >= '2014-01-01' and run.RN_EXECUTION_DATE <= '2014-12-31')

So I need to find an easy way to do that across 86 databases.......

The first problem I had was actually getting a SQL client that could connect to the instance in the first place to run the query. The normal SQL Server Management Studio client was not installed on any servers that had firewall access to the database, and with my account I didn't have admin rights to install it. The database server itself is supported by a third party so no joy there - so what could I use...?

This is where SQL Developer came in useful - did you know you can simply install another JDBC driver and connect to SQL Server from it? Well you can (unfortunately I did this ages ago and can't remember exactly how - but I remember it being very easy).

So anyway once this extra driver is installed you will see this change in the connection screen of SQL Developer:






You also get Sybase at the same time - neat huh?


OK so now I have the tool I can log on using the usual hostname/port etc.

So now I'm in, I just need to figure out how to work this magic query. After many years working with Oracle I often use the trick of using SQL to create SQL - and the exact same principle applies here too - I just need to build a SQL statement that will generate the SQL I actually want to run - in a kind of pseudo-dynamic way (big words I know....)

Anyway it's much simpler than that makes it sound....

First up I need a way of getting a list of all the databases in the instance - a quick Google reveals that this will do the trick:

select name from master..sysdatabases

Now I have that, I need to use it to generate a select statement against each database in the list - I won't bore you with the trial and error in between - I'll just show you the finished result:

SELECT 'select ' + CHAR(39) + name + CHAR(39) +' ,count(*) from ' + name + '.td.run where (run.RN_EXECUTION_DATE >= '+CHAR(39)+'2014-01-01'+CHAR(39)+' and run.RN_EXECUTION_DATE <= '+CHAR(39)+'2014-12-31'+CHAR(39)+') union all'
FROM master..sysdatabases
where name not in ('master','model','msdb','qcsiteadmin_dbV11','tempdb','LiteSpeedLocal')


Looks a little odd - but all this will do is generate 86 rows, each of which is a select statement. The useful bits to note here are the use of CHAR(39) (instead of CHR(39) in Oracle....) to generate a ' character in the output, the use of + to concatenate strings together (as opposed to || in Oracle) and the inclusion of union all at the end of every line.

The first few lines of output then look like this

select 'DB1' ,count(*) from DB1.td.run where (run.RN_EXECUTION_DATE >= '2014-01-01' and run.RN_EXECUTION_DATE <= '2014-12-31') union all
select 'DB2' ,count(*) from DB2.td.run where (run.RN_EXECUTION_DATE >= '2014-01-01' and run.RN_EXECUTION_DATE <= '2014-12-31') union all
select 'DB3' ,count(*) from DB3.td.run where (run.RN_EXECUTION_DATE >= '2014-01-01' and run.RN_EXECUTION_DATE <= '2014-12-31') union all

etc
etc
up to line 86

So I think you can see where I'm going with this - the output is in its own right a select statement now - all I have to do is remove the final union all at the end of the last line and then run the 86-line statement (it takes a while as it has to be run in every database) - which gives me this result:


DB1     0
DB2     236
DB3     15
etc
etc

So there we have it - a quick way to summarize data across lots of SQL Server databases using an Oracle tool and tricks learned from an Oracle database...


SQL Developer and SQL Server



Now I mentioned it in my previous post and, as fate would have it, I actually needed to do this myself again on another machine this week. So how do you extend SQL Developer to connect to SQL Server - well here's how:

First navigate to http://www.oracle.com/technetwork/developer-tools/sql-developer/thirdparty-095608.html where you'll see some basic info about the options (including MySQL, MSSQL and Sybase).

From there you'll see a link to the jtds download - this currently points here

http://downloads.sourceforge.net/jtds/jtds-1.2-dist.zip?modtime=1131459277&big_mirror=0

We don't actually need all of this - we just want a particular jar file out of it. So if we unzip the download we just need to locate

jtds-1.2.jar


This file then needs to be copied anywhere you like - putting it somewhere under the sqldev tree makes sense though of course

Once we have that file sorted we need to tell SQL Developer about it - to do this we navigate to Tools -> Preferences in SQL Developer.


Once in there we go to the following location:


Click add entry and then browse and locate the jar file



Click OK and then instantly when we click create connection we have extra options open to us - easy!
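As a side note, the same jar isn't limited to SQL Developer - any JDBC based tool can use it. As far as I recall the jTDS connection URL looks something like this (host/port/database being whatever your instance uses):

jdbc:jtds:sqlserver://<hostname>:<port>/<database>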





collating charactersets?



I seem to be talking a lot about SQL Server of late (just because of the work that has come up recently) - I promise to get back on topic soon....

I'll try and relate this back to Oracle anyway as it's sometimes useful to compare functionality.

In a previous post I demonstrated a simple technique to run some SQL against lots of SQL Server databases - this time I tried to use the same technique again with some different SQL and got an unexpected error:


The SQL I was trying to run was this:

select RUN.RN_TESTER_NAME , count(RUN.RN_TESTER_NAME) as run_count, RUN.RN_STATUS , count(RUN.RN_STATUS) as status_count  from DB1.td.run where (run.RN_EXECUTION_DATE >= '2014-01-01' and run.RN_EXECUTION_DATE <= '2014-12-31')  group by  RUN.RN_TESTER_NAME, RUN.RN_STATUS  union all
etc union all
etc

Now the only real difference between this statement and the last one was the aggregate function I wanted to use (count) and the group by.

This obviously forces the database to do more work and involves grouping and sorting operations to produce the result.

Now this is where it gets interesting...

The collation (characterset in Oracle world) can be defined in a number of places it would seem - at the instance level (an instance being an install of SQL Server), at the database level (think schema in Oracle) or at the column level (as can be done in Oracle in a limited way with NCHAR/NVARCHAR2 columns). In this particular case the collation settings were different.

For the SQL Server instance it was

SELECT SERVERPROPERTY('collation') 

Which returned Latin1_General_CI_AS

For the databases it was

SELECT name, collation_name
FROM sys.databases
ORDER BY database_id ASC

Which returned a mix of SQL_Latin1_General_CP1_CI_AS and Latin1_General_CI_AS
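(Collation can also be checked at column level - for completeness something like this shows it for the HPQC run table, though I didn't need it in this case:)

SELECT name, collation_name
FROM sys.columns
WHERE object_id = OBJECT_ID('td.run') AND collation_name IS NOT NULL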

So that seems to be the root cause of the problem - the fact that some of them are different and don't match the instance level values. When MSSQL then tries to group/order the items it can't as there is a mix of collation/charactersets.
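A quick way to spot the odd ones out (just a sketch - it simply compares each database's collation with the instance default) is something like:

SELECT name, collation_name
FROM sys.databases
WHERE collation_name <> CONVERT(sysname, SERVERPROPERTY('collation'))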

So what can we do about that?

Well helpfully there is a collate clause which converts character data to the required collation on the fly.

So as an example this

RUN.RN_TESTER_NAME collate Latin1_General_CI_AS

Makes the RN_TESTER_NAME column return as that collation

So now all I need to do is add this extra snippet into the code that generates the actual report I want to run - like this:

SELECT 'select ' + CHAR(39) + name + CHAR(39) +' ,RUN.RN_TESTER_NAME collate Latin1_General_CI_AS, count(RUN.RN_TESTER_NAME) as run_count, RUN.RN_STATUS collate Latin1_General_CI_AS, count(RUN.RN_STATUS) as status_count  from ' + name + '.td.run where (run.RN_EXECUTION_DATE >= '+CHAR(39)+'2014-01-01'+CHAR(39)+' and run.RN_EXECUTION_DATE <= '+CHAR(39)+'2014-12-31'+CHAR(39)+')  group by  RUN.RN_TESTER_NAME, RUN.RN_STATUS  union all'
FROM master..sysdatabases
where name not in ('master','model','msdb','qcsiteadmin_dbV11','tempdb','LiteSpeedLocal')

And the issue is fixed - I can now run the query I want to - which ends up like this:

select 'DB1' ,RUN.RN_TESTER_NAME collate Latin1_General_CI_AS, count(RUN.RN_TESTER_NAME) as run_count, RUN.RN_STATUS collate Latin1_General_CI_AS, count(RUN.RN_STATUS) as status_count  from DB1.td.run where (run.RN_EXECUTION_DATE >= '2014-01-01' and run.RN_EXECUTION_DATE <= '2014-12-31')  group by  RUN.RN_TESTER_NAME, RUN.RN_STATUS  union all
select 'DB2' ,RUN.RN_TESTER_NAME collate Latin1_General_CI_AS, count(RUN.RN_TESTER_NAME) as run_count, RUN.RN_STATUS collate Latin1_General_CI_AS, count(RUN.RN_STATUS) as status_count  from DB2.td.run where (run.RN_EXECUTION_DATE >= '2014-01-01' and run.RN_EXECUTION_DATE <= '2014-12-31')  group by  RUN.RN_TESTER_NAME, RUN.RN_STATUS  union all
etc 
etc

So it seems charactersets are as much of a problem in MSSQL as they are in Oracle.....



XMLDB and acl permissions - more confusion



So our project which is making heavy use of xmldb functionality continues on and we're still coming across features/surprises along the way.

XMLDB, for those of you that don't know it (which is likely most readers of this as it's so little used), is a way of storing and retrieving XML documents over various different protocols (http/ftp/plsql/webdav) from within the database (I'm oversimplifying a lot here but the core of what it is about can probably be summarized like that). There are various Oracle manuals and books on the topic (which run to hundreds/thousands of pages - which gives you some idea that there is a lot to it). To be frank it's difficult to know where to start with it a lot of the time and that is maybe what has held back a lot of customers from using it.

This week we've had an odd problem with permissions which I thought I'd share.

So in our case we are loading and retrieving the xml documents via http/https calls

In a very basic example this url call shows the folders/files that exist at the top level in the repository

http://server:port/public - in our case this returns


And the public folder (just think of them like windows explorer folders) just contains 2 subfolders and no files.

In the case of the issue we discovered, we just wanted to add a new folder (which can be done in many ways) - but we did it with plsql:

declare
retval boolean;
begin
retval := dbms_xdb.createfolder('/public/richdemo');
end;
 /

So this should create a new folder under the top level for me - which indeed it does:
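(If you'd rather confirm it from SQL than via the browser, the repository views can be queried too - something along these lines should list everything under /public:)

select any_path from resource_view where any_path like '/public/%';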



By default this folder just inherits permissions from the parent folder - I want to change that - so I create a new access control list via this code:

DECLARE
retBool BOOLEAN;
BEGIN
retBool:=DBMS_XDB.CreateResource('/public/richdemo/richdemoacl.xml','<acl 
 xmlns="http://xmlns.oracle.com/xdb/acl.xsd"
 xmlns:dav="DAV:"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd http://xmlns.oracle.com/xdb/acl.xsd"
>
    <ace>
        <grant>true</grant>
        <principal>DOESNOTEXIST</principal>
        <privilege>
            <all/>
        </privilege>
    </ace>
    <ace>
        <grant>true</grant>
        <principal>dav:owner</principal>
        <privilege>
            <all/>
        </privilege>
    </ace>
</acl>');
end;
/

commit
/

Again the acl itself is just a document inside the xdb repository (there is a chicken and egg thing developing here....)

So that acl is now created - next I have to apply it to the folder I just created. To do that I run this:

begin
dbms_xdb.setacl('/public/richdemo', '/public/richdemo/richdemoacl.xml');
end;
/
commit
/

So now I've locked down permissions so that dav:owner (i.e. me) and the user/role with the name "DOESNOTEXIST" (chosen for comic effect) have permission.

Now when I try and view it I should still be able to see it because I am the owner - but what actually appears is this:




So the folder is not visible - but it should be. Now I've kind of given you a hint about what the problem might be by my choice of username earlier; in our case though we had complex ACLs and it was not at all obvious.

What we did have was a trace file - with this content

*** 2014-11-24 14:25:01.368
*** SESSION ID:(56.11) 2014-11-24 14:25:01.368
*** SERVICE NAME:(SYS$USERS) 2014-11-24 14:25:01.368

ACLProblem set to true, error2 set to 44416, clearing error
in internal exception handler, error2: 44416

Implying some kind of acl issue - but nothing more than that.

After much investigation we found that there was a missing role - so we added it:


SQL> create role doesnotexist;

Role created.

Refreshing the http session (via Ctrl-F5) made the folder appear again.



So it seems that any missing users/roles included in an ACL effectively break the entire ACL - this is not at all obvious - in the browser the folder is just missing, with no errors at all.
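A sanity check that might have saved us some time (just a sketch - it assumes DBA access and uses the folder path from this example) is to pull the principals out of the ACL protecting a resource and check each one actually exists as a user or role:

select p.principal,
       case when exists (select 1 from dba_users u where u.username = p.principal)
              or exists (select 1 from dba_roles r where r.role = p.principal)
            then 'EXISTS'
            else 'MISSING' end as status
from   xmltable(xmlnamespaces(default 'http://xmlns.oracle.com/xdb/acl.xsd'),
                '/acl/ace/principal'
                passing dbms_xdb.getacldocument('/public/richdemo')
                columns principal varchar2(128) path '.') p
where  p.principal not like 'dav:%';   -- dav:owner etc are special principals, skip them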




SAP BODS install using response file



BODS (or Business Objects Data Services) is an ETL tool from SAP that I've had the pleasure of having to install a number of times over the past couple of years.

I've got very frustrated with this process and I've finally got round to making it easier for myself - I've documented it here as it may be useful for other users.

We're running the software on SLES (unlike 99% of the rest of the customers, it would seem, who are on Windows). This gave us a few challenges when installing, but now I've got it down to a fine art and have set it up with response files so the install is even easier.

So first up I'm assuming you have a separate unix account with which to do this (don't use oracle....) and that this account has permissions on the oracle software that is already installed (i.e. access to tnsnames and networking libraries etc)

I'm also assuming that all of the relevant ulimit values are set appropriately - as an example here are mine:

ulimit -a
address space limit (kbytes)   (-M)  unlimited
core file size (blocks)        (-c)  unlimited
cpu time (seconds)             (-t)  unlimited
data size (kbytes)             (-d)  unlimited
file size (blocks)             (-f)  unlimited
locks                          (-x)  unlimited
locked address space (kbytes)  (-l)  256
message queue size (kbytes)    (-q)  800
nice                           (-e)  0
nofile                         (-n)  8192
nproc                          (-u)  774884
pipe buffer size (bytes)       (-p)  4096
max memory size (kbytes)       (-m)  unlimited
rtprio                         (-r)  0
socket buffer size (bytes)     (-b)  4096
sigpend                        (-i)  774884
stack size (kbytes)            (-s)  unlimited
swap size (kbytes)             (-w)  not supported
threads                        (-T)  not supported
process size (kbytes)          (-v)  unlimited


You need to export the following variables

export LC_ALL="en_US.UTF-8"
export ORACLE_HOME=/path/to/oracle/home
export LD_LIBRARY_PATH=$ORACLE_HOME/lib



Once that is in place you need to track down the SAP software (this is easier said than done) and also get hold of the appropriate licence key that has been purchased (again easier said than done).

I'm just going to be installing the 'information platform services' (IPS), which is the base set of services used by many products, followed by the dataservices software.

I'll leave you to track down the software from the SAP marketplace yourself - once I'd extracted the IPS software it ended up in a tree like this (after I'd unrar'd it - who still uses rar - come on?):

$ANY_DIRECTORY/51043631/DATA_UNITS/SBOP_IPS_lnx

Anyway once it's unzipped the installation script is called InstallIPS - this just seems to wrap the setup.sh script and provide the licence key for you. You can install the software with setup.sh as normal but then you have to provide the key - for me the installer did not seem to want to accept the (valid) key I had.

Anyway as I'd done this a few times I wanted to make the process easier - I discovered (by accident) that the tool can be passed a response file to tell it what to do (exactly as we use for Oracle installations) - all I had to do was work out what format this file had to be in.

This turned out to be incredibly easy

All you have to do is this:

./setup.sh -w /tmp/responsefile.ini

This fires up the installer as usual - you answer all the questions as normal - at the end the script exits immediately and a response file is produced.

This file can then be passed in to the setup.sh command to run a prompt-free installation. The ini file has to be populated with licence keys and passwords first however - these are stripped out during the initial capture process. Once that is done we have a response file that can be used to make the IPS install a one-line command:

./setup.sh -r /tmp/responsefile.ini

And about 20 minutes later it's all done and everything is up and running - easy eh?

Now I repeat that same process for the dataservices install (following the exact same steps) and again about 20 mins later dataservices is fully installed and working.

Now all I have to do is repeat the exact same steps in the other test and live environments - this makes the process easy, and also means all the installs will end up exactly the same - no worries about someone choosing the wrong option.

The only thing left after that is to set fixed ports in the CMS console to prevent any firewall issues.