Friday 5 October 2007

Creating Oracle Table Varray Data Type Columns

The best reference for this, with easy examples, can be found here:

PART I
http://blogs.ittoolbox.com/database/solutions/archives/working-with-varrays-in-oracle-part-i-9111

PART II

http://blogs.ittoolbox.com/database/solutions/archives/working-with-varrays-in-oracle-part-ii-9229

PART III
http://blogs.ittoolbox.com/database/solutions/archives/working-with-varrays-in-oracle-part-iii-9349

I tried creating an attribute in MicroStrategy which uses this column as its form, but it seems MicroStrategy does not support this data type. Even Freeform SQL failed to generate the SQL.
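
For reference, here is a minimal sketch of the kind of varray column involved (the type and table names are just illustrative):

create or replace type phone_list_t as varray(5) of varchar2(20);
/

create table customer_contacts (
customer_id number primary key,
phone_numbers phone_list_t -- the varray column
);

insert into customer_contacts values (1, phone_list_t('555-1010','555-2020'));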

Wednesday 29 August 2007

Installing a Loopback Adapter on Windows 2000

Dynamic Host Configuration Protocol (DHCP) assigns dynamic IP addresses on a network. Dynamic addressing allows a computer to have a different IP address each time it connects to the network. In some cases, the IP address can change while the computer is still connected.

You can have a mixture of static and dynamic IP addressing in a DHCP system. In a DHCP setup, the software tracks IP addresses, which simplifies network administration. This lets you add a new computer to the network without having to manually assign that computer a unique IP address. However, before installing Oracle Database onto a computer that uses the DHCP protocol, you need to install a loopback adapter to assign a local IP address to that computer.

The loopback adapter approach is recommended particularly for laptops (presumably used only for learning purposes!) which connect to and disconnect from the real, corporate network. The loopback adapter means that Oracle will function regardless of whether the laptop is connected to the network or not. It gives Oracle a static, always-there point of reference, independent of whatever shenanigans the real NIC gets up to.

For instructions on installing the loopback adapter on Windows 2000, see section 2.4.5.3 onwards:

http://download-west.oracle.com/docs/html/B10130_02/reqs.htm#BABDJJFF

Friday 24 August 2007

Transparent Data Encryption (TDE)

You can find all the basic stuff about Transparent Data Encryption here:
http://www.oracle.com/technology/oramag/oracle/05-sep/o55security.html

and here:
http://www.oracle.com/technology/oramag/oracle/05-jan/o15security.html

We needed to issue the following statement in MicroStrategy before running the report, so that the encrypted columns would be displayed in the report:

alter system set encryption wallet open authenticated by "password";

Note that the quotes around the password are double quotes.

However, if the statement is issued like this, then by selecting the SQL view for the report in MicroStrategy one can easily see the password, so we needed to hide it inside a procedure.

I couldn't find any examples of using the ALTER SYSTEM statement in a procedure, and this failed with privilege errors when executed:

create or replace procedure open_my_wallet is
begin
execute immediate 'alter system set encryption wallet open authenticated by "password"';
end;

The reason is that the ALTER SYSTEM privilege was given through a role, and it needs to be granted directly. Alternatively, the procedure can be written with invoker's rights, as below:

--
create or replace procedure open_wallet authid current_user
is
begin
execute immediate 'alter system set encryption wallet open authenticated by "password"';
end open_wallet;
/
--

When a procedure is defined with AUTHID CURRENT_USER (invoker's rights), all privileges available through granted roles are available to the procedure.
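
For completeness, the other route is simply to grant the privilege directly to the owning schema rather than through a role (a sketch; the user name is illustrative):

-- a direct grant is visible inside a definer's rights procedure,
-- unlike a privilege received through a role
grant alter system to report_user;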

Friday 22 June 2007

Returning clause with insert does not work!

On the web there is a lot of material, with examples, about the RETURNING clause. However, it looks like those who have published these examples have not tested the code and just typed it out.
See: http://orafaq.com/node/34

The RETURNING clause works fine with UPDATE and DELETE.

Ex:

declare
tot_count number;
begin
delete from pn_1
RETURNING count(a1) INTO tot_count;

dbms_output.put_line(tot_count);
end;

However with insert

table pn_1 is created with two columns: a1 varchar2(30) and a2 number

declare
tot_count number;
begin

insert into pn_1
select table_name,1
from user_tables
RETURNING count(a1) INTO tot_count;

dbms_output.put_line(tot_count);
end;


it gives a syntax error:
ORA-06550: line 5, column 16:
PL/SQL: ORA-00933: SQL command not properly ended

See Metalink Note:302910.1

So the best way to find out how many records have been inserted is still SQL%ROWCOUNT:


declare

begin
insert into pn_1
select table_name,1 b
from user_tables;
--RETURNING sum(b) INTO tot_count;
dbms_output.put_line(sql%rowcount);
exception
when others then
dbms_output.put_line(sqlerrm);
end;

Thursday 21 June 2007

How to Check ORACLE_HOME from the Database

First Method:

select NVL(substr(file_spec, 1, instr(file_spec, '\', -1, 2) -1) , substr(file_spec, 1, instr(file_spec, '/', -1, 2) -1)) folder
from
dba_libraries
where
library_name = 'DBMS_SUMADV_LIB'


Second method (requires the privilege to run sys.dbms_system):

DECLARE

folder VARCHAR2(100);

BEGIN

sys.dbms_system.get_env('ORACLE_HOME', folder);
dbms_output.put_line(
folder);

END;


Archivelog Mode

Just a note of all the things I forget about Archivelog mode

The easiest way to find out if a database is running in archivelog mode:

SQL> select log_mode from v$database;

LOG_MODE
------------
NOARCHIVELOG
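
And the sequence I always forget for switching a database into archivelog mode (from a SYSDBA session, assuming the archive destination parameters are already set up):

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;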

And for more reference:

Archive log related stuff

Tuesday 12 June 2007

dbms_stats

There are so many combinations of parameters; how does one decide on the optimum set when using dbms_stats?

One method I found useful is checking the accuracy of NUM_DISTINCT against the time taken to gather the statistics. You want good, accurate stats, but produced in a short time.

First compile this function to calculate the actual number of distinct values in a column:

create or replace function pn_distinct(p_table in varchar2, p_col in varchar2)
return number
is
q_text varchar2(4000);
v_records number;

begin

q_text := 'select count(distinct ' || p_col || ') from ' || p_table;
execute immediate q_text into v_records;
return v_records;

end;

Then run dbms_stats, tweaking the different parameters; e.g. the example below collects stats on all tables in the schema that begin with DWH:

create or replace procedure dwh_gather_stats
as
cursor c1 is
select table_name
from user_tables
where table_name like 'DWH%';

begin

For c1rec in c1 loop
dbms_stats.gather_table_stats(ownname=>'BI_USER',
tabname=>c1rec.table_name,
estimate_percent=>10,
method_opt=>'for all columns size auto',
cascade=>true);

end loop;

end;

Record the time taken with each set of parameters, then run the following query and decide on a trade-off between the accuracy of the stats and the time taken to gather them by comparing the NUM_DISTINCT and ACTUAL_DISTINCT columns. You can use this method to compare dbms_stats against ANALYZE as well.


select p.table_name,p.last_analyzed,t.COLUMN_NAME,t.NUM_DISTINCT,
pn_distinct(p.table_name,t.column_name) actual_distinct
from user_tab_columns T,
user_tables P
where p.table_name like 'DWH%'
and p.table_name = t.TABLE_NAME

Thursday 10 May 2007

nls_characterset etc.

select *
from sys.props$

will provide information on all the nls_ settings.
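
The same NLS settings are also available through the documented NLS_DATABASE_PARAMETERS view, which does not need access to sys.props$:

select parameter, value
from nls_database_parameters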

Monday 16 April 2007

Using dbms_datapump for import

Doing the import this way works straight over a database link, so one doesn't even need to do an export first.

create or replace procedure dwh_import
as

ind NUMBER; -- Loop index number
jobhandle NUMBER; -- Data Pump job handle
l_job_name varchar2(100); -- Job Name
percent_done NUMBER; -- Percentage of job complete
job_state VARCHAR2(30); -- Keeps track of job state
le ku$_LogEntry; -- work-in-progress and error messages
js ku$_JobStatus; -- Job status from get_status

sts ku$_Status; -- Status object returned by get_status

BEGIN

l_job_name := 'DWHIMPORT' || to_char(sysdate,'YYYYMMDDHH24MISS');
jobhandle := DBMS_DATAPUMP.OPEN(operation => 'IMPORT',
job_mode => 'TABLE',
remote_link => 'APOLLO',
job_name => l_job_name,
version => 'COMPATIBLE');

dbms_output.put_line('jobhandle is : ' || jobhandle);

DBMS_DATAPUMP.add_file( handle => jobhandle,
filename => 'DWHIMPORT.log',
directory => 'DATAPUMPX',
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE
);

dbms_output.put_line('Import Log file is created..');

DBMS_DATAPUMP.SET_PARALLEL(jobhandle, 1);
dbms_output.put_line('parallel set to 1');

DBMS_DATAPUMP.METADATA_REMAP(handle => jobhandle,
name => 'REMAP_SCHEMA',
old_value => 'WD_USER',
value => 'BI_USER');
dbms_output.put_line('metadata_remap');

DBMS_DATAPUMP.SET_PARAMETER(jobhandle,'TABLE_EXISTS_ACTION','REPLACE');
dbms_output.put_line('set_parameter');

-- dbms_datapump.metadata_filter (handle => my_handle, name => 'NAME_EXPR',
-- value => 'LIKE ''DWH_%''', object_type => 'TABLE');

DBMS_DATAPUMP.metadata_filter( handle => jobhandle,
name => 'NAME_EXPR',
value => 'IN (''DWH_CANVASS'',''DWH_CANVASS_DATE'',''DWH_CHANNEL'',
''DWH_CUSTOMER'',''DWH_CYCLE_DATE'', ''DWH_DATE'',
''DWH_DOS_APPOINTMENTS'',''DWH_DOS_SALESREP_TARGET'',
''DWH_DOS_STATE_OF_PLAY'',''DWH_DOS_TEAM'',''DWH_GG_ICANVASS'',
''DWH_GG_WD_CHANNEL'',''DWH_GG_WD_PRODUCTS'',''DWH_GP_WD_CHANNEL'',
''DWH_GP_WD_PRODUCTS'',''DWH_MANDAYS_ACTUALS'',
''DWH_MANDAYS_PLANNED'',''DWH_PA_WD_CHANNEL'',''DWH_PA_WD_PRODUCTS'',
''DWH_PM_WD_CHANNEL'',''DWH_PM_WD_PRODUCTS'',''DWH_PRODUCT'',
''DWH_REP'',''DWH_SALES'',''DWH_SALES_STATUS'',''DWH_TARGET'',
''DWH_UNIT'',''DWH_WD_CANVASS_GROUPS'',''DWH_WD_CHANNEL'',
''DWH_WD_CHANNEL_GRP'',''DWH_WD_DAYOFF'',''DWH_WD_FINANCE_MONTH'',
''DWH_WD_PRODUCT'',''DWH_REFRESH_DATES'',''DWH_MTG_TARGET'',''DWH_DAY_TARGET_ALL'',
''DWH_DAY_TARGET_SALES'',''DWH_DAY_TARGET_CANV'')');
dbms_output.put_line('Name Export');

dbms_datapump.metadata_remap(handle => jobhandle,
name => 'REMAP_TABLESPACE',
old_value => 'DWH_TAB_SMALL',
value => 'BI_USER_TS');

dbms_datapump.metadata_remap(handle => jobhandle,
name => 'REMAP_TABLESPACE',
old_value => 'DWH_TAB_LARGE',
value => 'BI_USER_TS');

dbms_datapump.metadata_remap(handle => jobhandle,
name => 'REMAP_TABLESPACE',
old_value => 'DWH_IND_LARGE',
value => 'USERS');

dbms_datapump.metadata_remap(handle => jobhandle,
name => 'REMAP_TABLESPACE',
old_value => 'DWH_IND_SMALL',
value => 'USERS');

DBMS_DATAPUMP.SET_PARAMETER(jobhandle,'ESTIMATE','STATISTICS');
dbms_output.put_line('Estimate ');

DBMS_DATAPUMP.START_JOB(jobhandle);
dbms_output.put_line('Job Started');

percent_done := 0;
job_state := 'UNDEFINED';

WHILE (job_state != 'COMPLETED') and (job_state != 'STOPPED') LOOP

DBMS_DATAPUMP.get_status(jobhandle,
DBMS_DATAPUMP.ku$_status_job_error +
DBMS_DATAPUMP.ku$_status_job_status +
DBMS_DATAPUMP.ku$_status_wip,-1,job_state,sts);
js := sts.job_status;

IF js.percent_done != percent_done THEN
DBMS_OUTPUT.PUT_LINE('*** Job percent done = ' || to_char(js.percent_done));
percent_done := js.percent_done;
END IF;


IF (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0) THEN
le := sts.wip;
ELSE
IF (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0) THEN
le := sts.error;
ELSE
le := null;
END IF;
END IF;


IF le IS NOT NULL THEN
ind := le.FIRST;
WHILE ind IS NOT NULL LOOP

DBMS_OUTPUT.PUT_LINE(le(ind).LogText);
ind := le.NEXT(ind);

END LOOP;
END IF;


END LOOP;


DBMS_OUTPUT.PUT_LINE('Job has completed');
DBMS_OUTPUT.PUT_LINE('Final job state = ' || job_state);
DBMS_DATAPUMP.DETACH(jobhandle);

END;

dbms_datapump for Export

Example of dbms_datapump for export in a procedure that can be automated from dbms_job.submit:

CREATE OR REPLACE PROCEDURE "DWH_EXPORT"("P_JOB_NAME" OUT NOCOPY VARCHAR2)
IS

l_dp_handle NUMBER;
l_job_name varchar2(100);
l_file_name varchar2(100);

job_state varchar2(30);
status ku$_Status1010;

begin

l_job_name := 'DWH' || to_char(sysdate,'YYYYMMDDHH24MISS');
l_file_name := 'DWH.dmp';

utl_file.fremove('DATAPUMPX',l_file_name);

l_dp_handle := DBMS_DATAPUMP.open( operation => 'EXPORT',
job_mode => 'TABLE',
remote_link => NULL,
job_name => l_job_name,
version => 'LATEST'
);

DBMS_DATAPUMP.add_file( handle => l_dp_handle,
filename => l_file_name,
directory => 'DATAPUMPX',
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE
);

DBMS_DATAPUMP.add_file( handle => l_dp_handle,
filename => 'DWH.log',
directory => 'DATAPUMPX',
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE
);

DBMS_DATAPUMP.metadata_filter( handle => l_dp_handle,
name => 'SCHEMA_EXPR',
value => 'IN (''WD_USER'')'

);

DBMS_DATAPUMP.metadata_filter( handle => l_dp_handle,
name => 'NAME_EXPR',
value => 'IN (''DWH_CANVASS'',''DWH_CANVASS_DATE'',''DWH_CHANNEL'',
''DWH_CUSTOMER'',''DWH_CYCLE_DATE'', ''DWH_DATE'',
''DWH_DOS_APPOINTMENTS'',''DWH_DOS_SALESREP_TARGET'',
''DWH_DOS_STATE_OF_PLAY'',''DWH_DOS_TEAM'',''DWH_GG_ICANVASS'',
''DWH_GG_WD_CHANNEL'',''DWH_GG_WD_PRODUCTS'',''DWH_GP_WD_CHANNEL'',
''DWH_GP_WD_PRODUCTS'',''DWH_MANDAYS_ACTUALS'',
''DWH_MANDAYS_PLANNED'',''DWH_PA_WD_CHANNEL'',''DWH_PA_WD_PRODUCTS'',
''DWH_PM_WD_CHANNEL'',''DWH_PM_WD_PRODUCTS'',''DWH_PRODUCT'',
''DWH_REP'',''DWH_SALES'',''DWH_SALES_STATUS'',''DWH_TARGET'',
''DWH_UNIT'',''DWH_WD_CANVASS_GROUPS'',''DWH_WD_CHANNEL'',
''DWH_WD_CHANNEL_GRP'',''DWH_WD_DAYOFF'',''DWH_WD_FINANCE_MONTH'',
''DWH_WD_PRODUCT'',''DWH_REFRESH_DATES'',''DWH_MTG_TARGET'',''DWH_DAY_TARGET_ALL'',
''DWH_DAY_TARGET_SALES'',''DWH_DAY_TARGET_CANV'')');


DBMS_DATAPUMP.start_job(l_dp_handle);
p_job_name := l_job_name;

job_state := 'UNDEFINED';

WHILE (job_state != 'COMPLETED') and (job_state != 'STOPPED') LOOP

DBMS_DATAPUMP.GET_STATUS(l_dp_handle,
dbms_datapump.KU$_STATUS_WIP,
-1,
job_state,
status);


END LOOP;

DBMS_DATAPUMP.detach(l_dp_handle);

EXCEPTION

WHEN OTHERS THEN
dbms_output.put_line('Error: ' || sqlerrm || ' for Job: ' || l_dp_handle);
RAISE_APPLICATION_ERROR (-20100,'Export Failed - ' || sqlerrm);

END;

ORA-31626

I was getting the above error when I was trying to do this:

jobhandle := DBMS_DATAPUMP.OPEN(operation => 'IMPORT',
job_mode => 'TABLE',
remote_link => 'db_link',
job_name =>'DWH_IMPORT');

I did a lot of searching on privileges and how they were granted, whether through a role or directly, etc. In the end it boiled down to the job_name being the same as the procedure name. I changed the job name and the above then worked.

Friday 13 April 2007

OWB Process Flow Not Deploying

If the process flow is not deploying and it's due to a runaway process still executing, find out what the processes are by running this query:

select t.*
from wb_rt_audit_deployments t
where audit_status <> wb_rt_constants.DEPLOYMENT_STATUS_COMPLETE

Thursday 8 March 2007

Datapump on Windows Platform

I was trying to do an impdp on a Windows platform but I got the following error:

ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation

The obvious answer seems to be that the Oracle directory has not been created or is missing read/write privileges. However, this was not the case, as the following query showed:

SELECT d.owner,privilege, directory_name
FROM user_tab_privs t
, all_directories d
WHERE t.table_name(+)=d.directory_name
ORDER BY 3,2

The problem is that the directory was created using a mapped network drive which was mapped after Oracle was started. Oracle therefore could not read the directory.
See:
Metalink : Doc ID: Note:221849.1

Symptom(s)
ORA-29283 Invalid file operation when accessing a file on a mapped drive using the utl_file package

Cause
The reason for this error is that the Oracle service is started using the system account (default), which doesn't have privileges on the mapped folder.

Friday 2 March 2007

OWB PARIS Problems

So far these are some of the OWB PARIS problems/bugs I have come across since I started using it. These problems happen when exports made on a 64-bit Windows machine are imported into a 32-bit Windows machine:

Problem:
- You change a process flow and deploy it, but it does not behave as the diagram shows.
Solution:
- Redo the entire process flow from scratch. A clue that the process flow will not behave the way the diagram shows is to check the outgoing flows from a node: if there are the same numbers in <>, it means there is a problem. This is something OWB should validate itself, but that's the way OWB PARIS is at the moment.

Note: Work Flow Manager Client can show the errors on the process flow.

Problem:
- The columns to merge on are not set properly when mappings are imported from another OWB installation.
Solution:
- Reset the merge columns correctly again, or create the package in production only.

Tuesday 27 February 2007

Using Partition By in a Lag

I had to "unaccumulate" figures in a table. A colleague suggested using the LAG function. Most analytic LAG examples show the ORDER BY clause only, but if the calculations using LAG relate to a specific group of records within a table, then we need PARTITION BY as well. Here is the syntax for using PARTITION BY as well as ORDER BY in a LAG function:

select t.target_type,t.version,t.unit_code,t.canvass,t.product_descr,
t.channel_name,t.aggr_level,t.cycle,t.yearmonth,t.charge_out,
t.charge_out - (lag(charge_out,1,0) over (partition by t.target_type,t.version,t.unit_code,t.canvass,t.product_descr,
t.channel_name,t.aggr_level,t.cycle

order by t.target_type,t.version,t.unit_code,t.canvass,t.product_descr,
t.channel_name,t.aggr_level,t.cycle,to_date(t.yearmonth,'mon-yy') )) m_charge_out
from dwh_target T
order by t.target_type,t.version,t.unit_code,t.canvass,t.product_descr,t.channel_name,t.aggr_level,t.cycle,to_date(t.yearmonth,'mon-yy')
Note: if you use this in OWB expressions [Paris version], make sure it's all on one line, otherwise it will compile wrongly.
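
As a simpler illustration of the same pattern, assuming the standard EMP demo table, this shows each salary alongside the previous salary within the same department:

select deptno, ename, sal,
lag(sal,1,0) over (partition by deptno order by sal) prev_sal
from emp
order by deptno, sal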

Friday 12 January 2007

Identifying Sql Statements that Could use Bind Parameters

This function will identify statements that would be the same if they had used bind variables:

create or replace function remove_constants( p_query in varchar2 )
return varchar2
as l_query long;

l_char varchar2(1);
l_in_quotes boolean default FALSE;

BEGIN
for i in 1 .. length( p_query )
loop l_char := substr(p_query,i,1);
if ( l_char = '''' and l_in_quotes ) then l_in_quotes := FALSE;
elsif ( l_char = '''' and NOT l_in_quotes ) then l_in_quotes := TRUE;
l_query := l_query || '''#';
end if;

if ( NOT l_in_quotes ) then
l_query := l_query || l_char;
end if;
end loop;

l_query := translate( l_query, '0123456789', '@@@@@@@@@@' );

for i in 0 .. 8 loop
l_query := replace( l_query, lpad('@',10-i,'@'), '@' );
l_query := replace( l_query, lpad(' ',10-i,' '), ' ' );
end loop;
return upper(l_query);
END;


The following example shows how the above function can be used:

create global temporary table sql_area_tmp
on commit preserve rows
as
select sql_text,sql_text sql_text_two_constants
from v$sqlarea
where 1=0


This is a SQL example that does not use bind parameters, so essentially the same statement is read and parsed repeatedly:

DECLARE

cursor c1 is
select object_name
from dba_objects;

l_object dba_objects.object_name%type;

BEGIN

for c1rec in c1 loop
execute immediate 'insert into pn_temp values (''' || c1rec.object_name || ''')';
end loop;

END;

The scripts below show how this statement can be identified as a SQL statement that should be re-examined for rewriting with bind parameters:

insert into sql_area_tmp(sql_text)
select sql_text
from v$sqlarea

update sql_area_tmp
set sql_text_two_constants = remove_constants(sql_text)

select sql_text_two_constants,count(*)
from sql_area_tmp
group by sql_text_two_constants
order by 2 desc

SQL statements that should be re-examined will appear at the top with a high count.

Wednesday 10 January 2007

Top N Query

I remember back in the days of Oracle V6 being asked in an interview to write a query in SQL - not using PL/SQL - to show the top 5 salaries from the EMP table. Ever since then I have been interested in queries that answer this.

Using the analytic function DENSE_RANK, as pointed out by Tom Kyte in this month's Oracle Magazine, is the best I have seen so far.

select *
from (
select deptno,ename,sal,dense_rank() over (partition by deptno order by sal desc) Salrnk
from emp)
where SalRnk <=3
order by deptno,sal desc
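
For comparison, the pre-analytic answer to the interview question (top 5 overall rather than per department) is the ordered inline view filtered by ROWNUM:

select *
from (select ename, sal
from emp
order by sal desc)
where rownum <= 5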

Thursday 4 January 2007

IN AN OWB MAPPING THE GENERATED JOIN IS CORRUPTED

This is not the sort of thing one would expect to be a bug but sadly it is. It seems to happen with complex joins when you add in new attributes. And the only way round it is to do the join again.

See bug no. 4914839 in Metalink.
Problem: Customer has a mapping which has many operators, including a join operator. In the output group of the join operator, output attributes are linked to unrelated items from the input group, and hence the final result generated is inserting incorrect values into columns.

03/10/06 07:02 am - Another note from another developer:
Back in Bombay, we didn't have the joiner input/output matching properties at all, and no "doPropertyUpgrade" method. So this is probably related to an older joiner metadata corruption bug, not our recent Paris bug. I recall that the old bug may be triggered by some combination of adding/removing attributes or groups on the joiner in an older version (pre-Bombay), then upgrading the repository. At this point, I think you are right that the only way to fix it is to recreate the joiner.

Creating SCD2 and SCD1 Mappings Using OWB

I came across this article on the Oracle Technology site on how to address slowly changing dimensions in OWB, which is all well and good.

Personally, however, my favourite solution is to use the attribute properties and create a merge statement when the target is being populated.

The target table's operator properties should be set to update/insert.
The surrogate key populated from a sequence should have its attribute properties set to:
Load Column When Inserting Row - Yes
Load Column When Updating Row - No
Match Column When Updating Row - No
Update Operation - =

Columns which have been decided to be SCD2 types, should have their attribute properties set to:
Load Column When Inserting Row - Yes
Load Column When Updating Row - No
Match Column When Updating Row - Yes
Update Operation - =
Match Column When Deleting - No

and SCD1 Types should be set to:
Load Column When Inserting Row - Yes
Load Column When Updating Row - Yes
Match Column When Updating Row - No
Update Operation - =
Match Column When Deleting - No

If there are any further updates, like setting the flag that identifies the most current record, the target table should be joined with the source in another part of the same mapping, with a join condition that identifies the records to be updated, in a similar fashion to the one below:

join operator property:

src.natural_key = trg.natural_key and
trg.current_flag = 'Y' and
(src.scd2type_col1 != trg.scd2type_col1 or
src.scd2type_col2 != trg.scd2type_col2)
......

i.e. this will identify all the records that were current but are no longer current according to the latest source.

Finally, join the above join results to the target table with the operator property set to UPDATE.

OWB Error

I was trying to load a flat file into an external table in Oracle Warehouse Builder and I got a strange error. OWB validated both the flat file and the external table, it generated the script, I deployed it, and the table was created OK too. However, when I tried to do a select from the external table, my SQL session was terminated.
There was also no .log or .bad file produced, which made investigating the problem even harder.

The flat file had 56 columns and at first I thought there might be some limit to the number of columns. The problem, however, turned out to be a misspelling in the date format: I had accidentally typed an hh24:mis:ss format for a date field.

A bit disappointing that OWB did not catch this in the validation.

ORA-12514 Error

Couldn't connect to a client database via pl/sql developer today and kept getting ora-12514 error. I couldn't see why though. There was only one tnsnames.ora file in the correct place and all the information was correct. pl/sql developer automatically picked up the service name too, which proved it was reading the tnsnames.ora file from the correct place.

What was confusing me more was that the OWB connected to the database without a problem. Of course OWB does not use TCP/IP but it proved the database was up and running.

Tried using sqlplus from the command prompt and got the same error again. It turned out the problem was a typo, not in tnsnames.ora but in sqlnet.ora. So something to remember if I get an ORA-12514 error again.

Rewrite_Table

If your query is not using the materialized view, use the /*+ rewrite(mvname) */ hint first to make sure that the query can be redirected to use the MV. Usually, if the MV is correctly written to correspond to the query but is still not used, it is because the optimizer obtains a lower cost by not using the materialized view. Think of indexing the MV or increasing its parallel degree if possible:

ALTER MATERIALIZED VIEW mv_name PARALLEL n

If using the rewrite hint shows that the cost of using the MV is lower but the optimizer still chooses not to use the MV, then you can use the dbms_mview.explain_rewrite procedure to obtain more info:


BEGIN

DBMS_MVIEW.EXPLAIN_REWRITE('Query','mv_name','s_id');

END;

Then
SELECT * FROM rewrite_table where statement_id = 's_id'

and hopefully that will tell you the reason.
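
If the REWRITE_TABLE does not exist in your schema yet, it is created, as far as I recall, by running the utlxrw.sql script shipped under the Oracle home:

SQL> @?/rdbms/admin/utlxrw.sql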

Materialized View

Here is a query which takes a long time because of the join:

SELECT a13.PT_CODE PT_CODE,
a16.PT_NAME PT_NAME,
a15.SCIENTIFICGROUPCODE SCIENTIFICGROUPCODE,
a15.SCIETNIFICGROUPNAME SCIENTIFICGROUPNAME,
count(distinct a11.SAFETYREPORTKEY) WJXBFS1
from DWH.F_SAFETYREPORT a11
join DWH.F_ADVERSEREACTION a12
on (a11.SAFETYREPORTKEY = a12.SAFETYREPORTKEY)
join DWH.M_LLT_PT_61_CUR1_VW a13
on (a12.REACTIONMEDDRALLT = a13.LLT_CODE)
join DWH.C_INDEX_SCIENTIFIC a14
on (a12.PRODUCTINDEXCODE = a14.PRODUCTINDEXCODE)
join DWH.H_SCIENTIFICPRODUCT a15
on (a14.SCIENTIFICPRESENTATIONID = a15.SCIENTIFICPRESENTATIONID)
join DWH.H_MD61_PSFGY_VW a16
on (a13.PT_CODE = a16.PT_CODE)
where (a11.CASEVALIDFROM <= To_Date('09-06-2006', 'dd-mm-yyyy') and a11.CASEVALIDTO > To_Date('09-06-2006', 'dd-mm-yyyy')
and a11.CLASSIFICATION not in (4)
and a15.SCIENTIFICPRODUCTID in (20923))
group by a13.PT_CODE,
a16.PT_NAME,
a15.SCIENTIFICGROUPCODE,
a15.SCIETNIFICGROUPNAME

To make use of query rewrite, this is how we should create the materialized view:

CREATE MATERIALIZED VIEW mv_scigroup_pt
build immediate
refresh on DEMAND
enable query rewrite
AS
select a13.PT_CODE PT_CODE,
a16.PT_NAME PT_NAME,
a15.SCIENTIFICGROUPCODE SCIENTIFICGROUPCODE,
a15.SCIETNIFICGROUPNAME SCIENTIFICGROUPNAME,
a11.casevalidfrom,
a11.casevalidto,
a15.scientificproductid,
a11.safetyreportkey
from DWH.F_SAFETYREPORT a11
join DWH.F_ADVERSEREACTION a12
on (a11.SAFETYREPORTKEY = a12.SAFETYREPORTKEY)
join DWH.M_LLT_PT_61_CUR1_VW a13
on (a12.REACTIONMEDDRALLT = a13.LLT_CODE)
join DWH.C_INDEX_SCIENTIFIC a14
on (a12.PRODUCTINDEXCODE = a14.PRODUCTINDEXCODE)
join DWH.H_SCIENTIFICPRODUCT a15
on (a14.SCIENTIFICPRESENTATIONID = a15.SCIENTIFICPRESENTATIONID)
join DWH.H_MD61_PSFGY_VW a16
on (a13.PT_CODE = a16.PT_CODE)
where a11.CLASSIFICATION not in (4)

In this case, the WHERE clause that is common to all queries is kept, the other WHERE clauses are removed, and the columns they used are added to the SELECT list. Once the materialized view is created, if you run an explain plan for the query, it should use the materialized view instead of the underlying tables.

You can drop the materialized view if you don't want the plan to use it, or you can do this:
ALTER MATERIALIZED VIEW mv_scigroup_pt DISABLE QUERY REWRITE

To analyze the MV, you treat it as a table:

Analyze table mv_scigroup_pt compute statistics;

To put indexes on the materialized view, again treat it as you would an index on an ordinary table:
CREATE INDEX mv_srid_idx ON mv_scigroup_pt(safetyreportkey)

and analyze the index in the usual way.

In this example we have used the ON DEMAND clause for refreshing the materialized view. In order to refresh it, say for a DWH, use the dbms_mview package at the end of the ETL process.
See:
http://www.lc.leidenuniv.nl/awcourse/oracle/appdev.920/a96612/d_mview2.htm#94135
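
A minimal sketch of that refresh call for the MV above ('C' asks for a complete refresh; adjust to suit):

BEGIN
DBMS_MVIEW.REFRESH('MV_SCIGROUP_PT', 'C');
END;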

Defragmenting Tables

This is the best way to defrag a table:

alter table tablename
move tablespace tablespacename

But be careful: doing this means the table's indexes will become unusable, so you need to rebuild all the indexes again:

ALTER INDEX index_name REBUILD
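
A quick sketch for finding everything the move has invalidated and generating the rebuild statements (substitute your own table name, then run the output):

select 'alter index ' || index_name || ' rebuild;'
from user_indexes
where table_name = 'TABLENAME'
and status = 'UNUSABLE'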

Parallel Query Plan

Had a case of developing ETL in a 10g environment with the production environment being 9i. When we released the ETL into production, there was a performance issue with one of the reports. Comparing the execution plans for the query in 9i and 10g, it looked like the problem was the parallel query not running in 9i. The init.ora parameters were also different, so it looked very likely that the lack of parallel query was the problem.

The init.ora parameters in 9i were changed to mirror the ones in 10g, yet after restarting with the new parameters, the query path still did not indicate a parallel run. So what was the problem?

The problem is that the plan in 9i does not indicate parallel query in the same informative way as in 10g. In fact both databases were running in parallel, and the problem with the query was somewhere else.

To make sure the query runs in parallel rather than relying on the plan, examine the following:

BEFORE RUNNING:
SQL> select * from v$pq_sesstat;

STATISTIC LAST_QUERY SESSION_TOTAL
------------------------------ ---------- -------------
Queries Parallelized 0 0
DML Parallelized 0 0
DDL Parallelized 0 0
DFO Trees 0 0
Server Threads 0 0
Allocation Height 0 0
Allocation Width 0 0
Local Msgs Sent 0 0
Distr Msgs Sent 0 0
Local Msgs Recv'd 0 0
Distr Msgs Recv'd 0 0

AFTER RUNNING THE SQL WITH PARALLEL HINT:
SQL> select * from V$pq_sesstat;

STATISTIC LAST_QUERY SESSION_TOTAL
------------------------------ ---------- -------------
Queries Parallelized 1 1
DML Parallelized 0 0
DDL Parallelized 0 0
DFO Trees 1 1
Server Threads 0 0
Allocation Height 7 0
Allocation Width 2 0
Local Msgs Sent 28 0
Distr Msgs Sent 28 0
Local Msgs Recv'd 63 0
Distr Msgs Recv'd 78 0

11 rows selected.

Using Bulk Collect Exceptions

Example of using bulk collect exceptions - new in 9i:

CREATE TABLE t (text VARCHAR2(4))

declare
type words_t is table of varchar2(10);
words words_t := words_t ( 'dog', 'fish', 'cat', 'ball', 'bat', 'spoke', 'pad','elephant' );

bulk_errors exception;
pragma exception_init ( bulk_errors, -24381 );

begin
Forall j in words.first..words.last
save exceptions
insert into t ( text ) values ( words(j) );

exception

when bulk_errors THEN
for j in 1..sql%bulk_exceptions.count
loop
Dbms_Output.Put_Line (
sql%bulk_exceptions(j).error_index || ', ' ||
Sqlerrm(-sql%bulk_exceptions(j).error_code)||words(sql%bulk_exceptions(j).error_index));
end loop;

end;

Regular Expressions to Show the First Occurrence of n Upper Case Characters

I must admit, I still can't feel comfortable with regular expressions, but here is a neat one:

SELECT (REGEXP_SUBSTR('AbcDERTK1DEcdhfjUWXvbDERTF' , '[[:upper:]]{5,}'))
FROM dual

It selects the first sequence of at least 5 consecutive upper case characters.
So the result is: DERTK

or

SELECT (REGEXP_SUBSTR('AbcDEcdhfjUWXvbDERTF' , '[[:upper:]]{3,}'))
FROM dual

and the result is: UWX

How to Automate Max Partition Alert

http://www.dbaoncall.net/references/ht_max_partition_alert.html

One of my ETL scripts once failed with the following error:
ORA-14400: inserted partition key is beyond highest legal partition key

So I decided to write an alert which would check my date-partitioned tables and pre-warn me of the ones I have to add partitions to. The problem was that the information about the maximum date boundary of a partition is held in the HIGH_VALUE column of the USER_TAB_PARTITIONS view. This column is still a LONG type, and the information is in a format like this:
TO_DATE(' 2002-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS',
'NLS_CALENDAR=GREGORIAN')

So the problem was to extract the date portion from a long column and evaluate the maximum date value per partitioned table.

For this I wrote the following function first:
--=====================================================================
--= Function : MaxPartValue
--= By : Potkin Nemat on 21-Jul-2003
--= Parameters :
--= IN : p_table (name of the table to evaluate the max date partition for)
--= Return : max_date
--= Version :
--= 1.0 wrote the script
--=====================================================================
create or REPLACE FUNCTION MaxPartValue(p_table IN VARCHAR2)
RETURN DATE
IS

TYPE cv_type IS REF CURSOR;
cv cv_type;

p_query VARCHAR2(2000); -- used to retrieve the value of the long column
p_text VARCHAR2(32000); -- used to store the long value in for substr operation

CURSOR csub(p_text IN VARCHAR2) IS
SELECT to_date(substr(p_text,11,10),'YYYY-MM-DD') v_date
FROM dual;

max_date DATE;

BEGIN


p_query := 'SELECT high_value FROM user_tab_partitions WHERE table_name = upper(';
p_query := p_query ||''''|| p_table||''''||')';

max_date := to_date('01-JAN-01'); -- initialise max_date with a date in the past

OPEN cv FOR p_query;
LOOP
FETCH cv INTO p_text; -- the value of the long column is now assigned to a varchar2 var
EXIT WHEN CV%NOTFOUND;
FOR csubrec IN csub(p_text) LOOP
IF max_date < csubrec.v_date THEN
max_date := csubrec.v_date; -- record the max partition date per table
END IF;
END LOOP;
END LOOP;

RETURN max_date;

END MAXpartVALUE;

Once this function is created, it can be used in a simple query to list all the partitioned tables and their maximum date threshold partition:
SELECT p.name,to_char(maxpartvalue(p.name),'dd-MON-yyyy')
FROM user_part_key_columns p
,user_tab_columns t
WHERE p.object_type LIKE 'TABLE%' -- Oracle has spaces after TABLE in this column!!
AND t.column_name = p.column_name
AND t.table_name = p.NAME
AND t.data_type = 'DATE'
ORDER BY 2 DESC

Counting strings

I posted this:
http://www.oracle.com/technology/oramag/code/tips2005/022105.html


Suppose we want to find out how many times 'aa' is repeated in a string, for example 'aakhgghghjaahhjghghaajghgaa', using SQL only. The SQL statement below can easily give the answer:



SELECT (length('aakhgghghjaahhjghghaajghgaa') -
length(REPLACE('aakhgghghjaahhjghghaajghgaa','aa')))/length('aa')
FROM dual


The code can be easily modified to extract the number of times any pattern is repeated in any string.

Ilya Petrenko, a Sr. DBA at ICT Group Inc. in Newtown, PA, however came up with a neater solution:

http://www.oracle.com/technology/oramag/code/tips2005/051605.html

Using Translate

I had this tip published before on : http://www.dbaoncall.net/references/tt_translate.html

But just in case one day that site is no longer available, I thought I better keep a copy in my own blog too:

If you have a varchar2 field and you want only the numerics, or perhaps some extra escape characters have found their way in, you can strip the field just to numerics by performing a translate such as below:
select translate('abc123'||chr(10)||'F',
'0'||translate('abc123'||chr(10)||'F', 'A1234567890', 'A'), '0')
from ...

The inner translate takes all the numbers out of the string; the result is then used as the 'from' set of the outer translate, so that what remains is only the numbers. Needless to say, one can also clean up the database using:

update tableA
set field = translate(field,'0'||translate(field,'A1234567890','A'),'0')

Avoiding the Table Does not Exist Error when trying to drop it

A quick, clever way to avoid the "table does not exist" error when you have to drop a table in a PL/SQL routine. This method saves you searching the data dictionary view USER_TABLES first:

Begin
execute immediate 'Drop table PRODUCT';
Exception
when others then null;
End;
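
A slightly safer variant, sketched below, only swallows the "table or view does not exist" error (ORA-00942) instead of hiding every possible failure:

Declare
table_missing exception;
pragma exception_init(table_missing, -942);
Begin
execute immediate 'Drop table PRODUCT';
Exception
when table_missing then null; -- the table simply was not there
End;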

Resizing Temp Tablespace

These are the step by step instructions for resizing a Temp tablespace with AUTOEXTEND ON which has got way too big as a result of a rogue query.
Change filenames and paths accordingly.

The example here is good because the TEMP tablespace is actually made up of two datafiles. I couldn't find any syntax examples for TEMP tablespaces with more than one datafile.

## Create another temporary tablespace
CREATE TEMPORARY TABLESPACE TEMPXXX TEMPFILE '/u02/oracle/oradata/EV6PROD/temp03.dbf'
SIZE 1000M REUSE AUTOEXTEND ON NEXT 8K MAXSIZE 2000M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1024K;

## make it the default temporary tablespace for the database
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMPXXX;

## drop the old tablespace and datafile to free space - CANNOT TAKE A TEMPORARY TABLESPACE OFFLINE
##DROP TABLESPACE TEMP INCLUDING CONTENTS AND DATAFILES; hangs

alter database tempfile '/u01/app/oracle/product/10.1.0/oradata/EV6PROD/temp01.dbf','/u02/oracle/oradata/EV6PROD/temp02.dbf' drop including datafiles;
drop tablespace temp;
-- rm /u02/oradata/EVWHP/temp02.dbf

## recreate the old TEMP tablespace
CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/u02/oracle/oradata/EV6PROD/temp01.dbf'
SIZE 1000M REUSE AUTOEXTEND ON NEXT 8K MAXSIZE 2048M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1024K;

## make it the default temporary tablespace for the database
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP;

## drop the new, small tablespace and datafile to free space
DROP TABLESPACE TEMPXXX INCLUDING CONTENTS AND DATAFILES

Some good sources for TEMP Tablespace related stuff:

http://www.orafaq.com/node/2
http://www.idevelopment.info/data/Oracle/DBA_tips/Tablespaces/TBS_3.shtml
http://www.oracle.com/technology/oramag/oracle/05-jan/o15dba.html


What can go wrong:

The temp files which were deleted by the command:

alter database tempfile '/u01/app/oracle/product/10.1.0/oradata/EV6PROD/temp01.dbf','/u02/oracle/oradata/EV6PROD/temp02.dbf' drop including datafiles;

will not show when doing a du or ls -l; however, the df command still shows the space as allocated and not released if a process still thinks it's using the file.

The Unix lsof command may show all the processes that still think the file is open, but this didn't work for me. In the end I had to bounce the database; so much for trying to do this online :)

The DBMS_UTILITY.GET_HASH_VALUE function

I always forget the name and syntax of this function, so I thought I'd better make a note of it on my blog.

Parameters

There are three IN parameters to the function

Name :- Text string to be used as the basis of the hashing.

Base :- The lowest number you want for the hash value.

Hash_size:- Size of the hash table.

Returns

A numeric value which is derived from the IN parameter ‘name’.
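
A quick example of a call, hashing a string into the range 1 to 1024 (the values are just for illustration):

SELECT dbms_utility.get_hash_value('some text to hash', 1, 1024) hash_value
FROM dual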

Dropping an Index to Speed a Query

Had a complex query - generated by MicroStrategy - which was taking over 2000 seconds to come back. There were two main tables in the query: table A had 256,000 records and table B over 5 million records. The execution plan was not using the index on table A as it should. By adding an index hint the query time was reduced to 44 seconds.
But being a Microstrategy report, this was not an option.

I examined all the underlying tables and indexes in the query, and all were recently analyzed.

In the end, the problem was solved by dropping the index on table B, which was based on the primary key column of table A. The query returned in 44 seconds once this index was dropped.

Oracle Project Raptor

Downloaded Oracle Project Raptor onto my PC today. No problems with download and installation. It seems to be modelled on an older, established product, PL/SQL Developer from Allround Automations. What I mean is that the layout is more similar to PL/SQL Developer than to SQL Navigator.

But I have to say it seems much slower than plsql Developer in comparison.

How to Alert by email when errors occur in Log file

This is a nice and simple shell script:

pass the alert log filename to the script below:

cat $1 | grep ORA- > alert.err

if [ `cat alert.err | wc -l` -gt 0 ]
then
mail -s "$0 $1 Errors" potkin@yourcompany.com < alert.err
fi

The subject of the message contains the script being executed ($0) and the name of the log being searched ($1), with the lines that matched our initial search (ORA-) as the body of the message.

Based on the principles introduced in this script, a larger one can be written and scheduled to execute at periodic intervals that will search the contents of the alert log (or another file of interest) and send an e-mail if any errors exist. Then the contents of the log can be moved to another file, so that only the most recent error messages will be sent via e-mail.

Using Analytical NTILE() Function

I find it hard to remember the syntax of analytical functions, but this one - NTILE() - which I came across today on Tom Kyte's site, is easy to use and very useful.

As Tom puts it, it's "do-it-yourself parallelism". It's as if your table was partitioned and you were only interested in one portion of it.

Let's use USER_TABLES, which we are all familiar with. Suppose I want to get a list of the oldest analysed tables.

SELECT table_name
FROM
(SELECT table_name,
last_analyzed,
ntile(6) over
(ORDER BY last_analyzed) nt
FROM user_tables)
WHERE nt=1

Problem with Sqlldr

Had a problem loading data from an ASCII file using sqlldr which is worth noting down.

The data file contained two description fields which were about 1200 characters long.
Even though I had declared the relevant table columns as Varchar2(4000), the records were rejected saying Max Field Length exceeded.

This was the original CTL file:

LOAD DATA
INFILE 'C:\PN\smq_list.asc' "str X'0a0d'"
BADFILE 'C:\PN\smq_list.bad'
DISCARDFILE 'C:\PN\smq_list.dsc'

INTO TABLE "SMQ_LIST"

FIELDS TERMINATED BY '$'
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(SMQ_CODE INTEGER EXTERNAL,
SMQ_NAME CHAR,
SMQ_LEVEL INTEGER EXTERNAL,
SMQ_DESCRIPTION CHAR,
SMQ_SOURCE CHAR,
SMQ_NOTE CHAR,
MEDDRA_VERSION CHAR,
STATUS CHAR,
SMQ_ALGORITHM CHAR
)


I wasted a lot of time thinking there was some escape character in the fields, especially since the CTL worked if I took just a chunk of the fields.

The problem was solved however by modifying the CTL file:

LOAD DATA
INFILE 'c:\pn\smq_list.asc'
BADFILE 'c:\pn\smq_list.bad'
DISCARDFILE 'c:\pn\smq_list.dsc'

INTO TABLE "SMQ_LIST"

FIELDS TERMINATED BY '$'
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(SMQ_CODE INTEGER EXTERNAL,
SMQ_NAME CHAR,
SMQ_LEVEL INTEGER EXTERNAL,
SMQ_DESCRIPTION CHAR(2000),
SMQ_SOURCE CHAR(2000),
SMQ_NOTE CHAR,
MEDDRA_VERSION CHAR,
STATUS CHAR,
SMQ_ALGORITHM CHAR
)

It looks like there is a default maximum length for CHAR fields (255), and if we need it to be longer than that, we should specify the length in the CTL file.

Another dig at 10g Sql Advisory

The first time I sat through a 10g demo, I was led to believe that this was it: no more need for Oracle practitioners to tune their SQL, the new SQL Advisory feature would do it. Well, I have written about a number of cases where the advisory just doesn't have a clue; here is yet another example where only a clever human being can help.

I have a large update; the outer loop selects from the cursor below:

CURSOR C1
is
SELECT f.casenumberid,MAX(f.caseversionnumber)
FROM f_safetyreport F
GROUP BY F.CASENUMBERID;


Now, for the update I need to do, I - a human being - can see that I will not need to select the records which have a max(f.caseversionnumber) of 1.

so a re-write:
CURSOR C1
IS
SELECT f.casenumberid,MAX(f.caseversionnumber)
FROM f_safetyreport F
GROUP BY F.CASENUMBERID
HAVING MAX(f.caseversionnumber) > 1;

but the GROUP BY can slow things down, and only I - a human being who understands the application - am aware of a table which records the max(caseversionnumber) against a casenumberid, so here is a further rewrite which really speeds things up:

CURSOR C1
IS
SELECT h.casenumberid,h.casemaxversion
FROM h_caseversions H
WHERE h.casemaxversion > 1;

There you go, clever human beings are still needed :)

Re-writing an Update for Massive Performance Gain

Had the following update on a table with more than 5 million records, which was taking forever:

DECLARE

CURSOR c1 IS
SELECT DISTINCT substr(rowid,1,13)
FROM f_adversereaction;


TYPE RowIdType IS TABLE OF VARCHAR2(18) INDEX BY BINARY_INTEGER;
p_rowid RowIdType;

TYPE p_idType IS TABLE OF f_adversereaction.adversereactionid%TYPE INDEX BY BINARY_INTEGER;
p_id p_idType;

BEGIN


OPEN c1;
FETCH c1 BULK COLLECT INTO p_rowid;

FORALL indx IN p_rowid.FIRST..p_rowid.LAST
UPDATE f_adversereaction A
SET a.originalsenderid = (
SELECT F.ORiginalsenderid
FROM f_safetyreport F
WHERE a.safetyreportid = f.safetyreportid )
WHERE a.ROWID LIKE p_rowid(indx)||'%';
END;

Rewrote it like this, and the update was done in minutes:

DECLARE

CURSOR c1 IS
SELECT a.adversereactionid,F.ORiginalsenderid
FROM f_safetyreport F
,f_adversereaction A
WHERE a.safetyreportid = f.safetyreportid;

TYPE p_idType IS TABLE OF f_adversereaction.adversereactionid%TYPE INDEX BY BINARY_INTEGER;
p_id p_idType;

TYPE p_SenderType IS TABLE OF f_adversereaction.originalsenderid%TYPE INDEX BY BINARY_INTEGER;
p_sender p_senderType;

BEGIN

OPEN c1;
FETCH c1 BULK COLLECT INTO p_id,p_sender;

FORALL indx IN p_id.FIRST..p_id.LAST
UPDATE f_adversereaction A
SET a.originalsenderid = p_sender(indx)
WHERE a.adversereactionid = p_id(indx);

COMMIT;

END;

Regular Expressions

Ed Edwards from the Florida Department of Education asked me if we could use the new Oracle 10g regular expressions to rewrite this SQL he had written to determine ALL the positions of ',' in a string.

select distinct text1, 'A "," is located in position ' || instr(text1, ',', 1, level) pos
from (select text1 from test2)
connect by level <= length(text1) - length(replace(text1, ','))
order by text1 desc, pos asc


I must say I really liked his clever way of repeating the rows as many times as there are commas, using connect by level <= the number of commas.

Worthy of saving it as a tip.
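
For what it's worth, a possible 10g regular-expression version (a sketch, not tested against Ed's data) keeps his connect by trick but swaps INSTR for REGEXP_INSTR, which would also allow a pattern rather than a fixed literal:

select distinct text1, 'A "," is located in position ' || regexp_instr(text1, ',', 1, level) pos
from (select text1 from test2)
connect by level <= length(text1) - length(replace(text1, ','))
order by text1 desc, pos asc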

Faster Update

If you have an update which is taking a long time even when you do it via forall:

- copy the update procedure a few times, depending on how many CPUs are available.
- modify each of the copies to update a portion of the table.
- run the copied procedures in parallel.

So if your total update takes 4 hours and you break the code into 4 copies, each working on a range of the data, this can bring the update down to around 1 hour. A sketch follows below.
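
A rough sketch of the idea, assuming the driving table has a numeric key that can be split into ranges (the table, column and boundaries below are made up):

-- copy 1 of 4: only touches the first slice of the key range
create or replace procedure update_slice_1 as
begin
update big_table
set processed_flag = 'Y' -- whatever the real update does
where id between 1 and 1250000; -- the boundary differs in each copy
commit;
end;
/

-- submit each copy as its own job so all four run at the same time
declare
l_job number;
begin
dbms_job.submit(l_job, 'update_slice_1;');
-- repeat for update_slice_2 .. update_slice_4
commit; -- the jobs only start once the submitting session commits
end;
/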

Flashback in 10g

Flashback in 10g is really easy.

Suppose you drop a table by mistake. This creates a BIN$ table in the recycle bin, which you cannot simply select from by its BIN$ name:

SQL> select * from tab;
TNAME TABTYPE CLUSTERID
------------------------------ ------- ----------
BIN$04LhcpndanfgMAAAAAANPw==$0 TABLE

To get it back simply type:

SQL> FLASHBACK TABLE RECYCLETEST TO BEFORE DROP;


For more info:
http://www.oracle.com/technology/pub/articles/10gdba/week5_10gdba.html

Interestingly:
PL/SQL Developer does not show the BIN$ tables in the tree list panel but SQL Navigator does.

Dropping a Primary Key

If a primary key is referenced by foreign keys from other tables, it's no good disabling the foreign keys; you need to drop them before dropping the primary key.
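
A couple of sketches of what that looks like in practice (table and constraint names are illustrative): either generate and run the drops for the referencing constraints, or let CASCADE do it in one go.

-- find and drop the foreign keys that reference the parent's primary key
select 'alter table ' || table_name || ' drop constraint ' || constraint_name || ';'
from user_constraints
where constraint_type = 'R'
and r_constraint_name = 'PARENT_PK';

-- or drop the primary key and its referencing foreign keys in one statement
alter table parent drop primary key cascade;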

Submitting the Suggested Oracle Profile

DECLARE
l_sql_tune_task_id VARCHAR2(20);

BEGIN

l_sql_tune_task_id := DBMS_SQLTUNE.accept_sql_profile (task_name => 'a16',name => 'a16');
DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);

END;

Temp Tablespace Unable to Extend

I had a complex query generated by a MicroStrategy report. It kept falling over with an "unable to extend TEMP tablespace" error. I knew it was to do with the GROUP BY, as the query ran OK without it. The TEMP tablespace was 4 GB, which I tend to think should be enough.

I searched the internet a lot for a solution to this problem. I tried all the clever things: using the no_parallel hint, increasing the sort_area_size and the pga_aggregate_target, and using the new 10g facility to suggest a new profile, which changed the plan, but all to no avail in solving the temp tablespace problem.

Finally I went back to the drawing board and re-examined the query generated by MicroStrategy. I then realized that two of the tables in the join were not needed at all to get the columns and the aggregate we wanted, and that was the solution to the problem.

The conclusion is: who said that with 10g there is no more need for a DBA???

10g Sql Advisory

grant advisor to

I wasted a lot of time trying to figure out why dbms_sql.submit kept claiming that the table or view does not exist.

It turned out I had created the tables while logged in under my own schema but as SYSDBA, using PL/SQL Developer!

CREATE or REPLACE PROCEDURE create_tuning_task IS
tuning_task VARCHAR2(30);
sqltext CLOB;
BEGIN
sqltext := 'select cust_name,count(*)'
|| ' from customer, cust_order'
|| ' where customer.cust_no = cust_order.cust_no'
|| ' and customer.cust_no = 8'
|| ' group by cust_name';
tuning_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
sql_text => sqltext,
user_name => 'SYS',
scope => 'COMPREHENSIVE',
time_limit => 30,
task_name => 'CUST_ORDERS',
description => 'Tuning effort for counting customer orders');
END create_tuning_task;
/
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK( 'CUST_ORDERS') FROM DUAL;

Parallel Query

This is a good link: http://www.oracle.com/technology/oramag/oracle/05-may/o35dba.html

When tables are created with a degree > 1, Oracle can assume a parallel execution plan for a large query.

In the plan you can see this by the PX steps.

It's a good idea to have the parameter parallel_automatic_tuning set to TRUE.

To disable parallel query, use the no_parallel hint: /*+ no_parallel(T1) no_parallel(T2) */

Use the table aliases as they appear in the query.
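
And as a quick sketch of going the other way, forcing a parallel plan on a specific query (the table alias and degree are illustrative) and then confirming it with the v$pq_sesstat check from the earlier entry:

select /*+ parallel(t, 4) */ count(*)
from dwh_sales t;

select * from v$pq_sesstat;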