Thursday, 8 March 2007
I was trying to run impdp on a Windows platform but got the following error:
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
The obvious answer seems to be that the Oracle directory was not created or is missing read/write privileges. However, this was not the case, as the following query showed:
SELECT d.owner, t.privilege, d.directory_name
FROM   user_tab_privs t, all_directories d
WHERE  t.table_name(+) = d.directory_name
ORDER  BY 3, 2;
The problem is that the directory was created on a mapped network drive, and the drive was mapped after Oracle was started. Oracle therefore could not read the directory.
See:
Metalink Doc ID: Note 221849.1
Symptom(s)
ORA-29283 invalid file operation when accessing a file on a mapped drive using the utl_file package
Cause
The reason for this error is that the Oracle service is started using the system account (the default), which does not have privileges on the mapped folder.
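One workaround, sketched below with hypothetical names: recreate the directory against a UNC path and re-grant privileges, so the directory does not depend on a drive letter mapped after the service started. Note the Oracle service must still run under a Windows account that can reach the share, rather than the default system account.

-- Hypothetical directory name, share and grantee
create or replace directory dp_dir as '\\fileserver\exports';
grant read, write on directory dp_dir to dw_user;

-- impdp can then reference it, e.g.:
-- impdp dw_user/password directory=dp_dir dumpfile=exp.dmp logfile=imp.log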
Friday, 2 March 2007
OWB PARIS Problems
So far these are some of the OWB PARIS problems/bugs I have come across since I started using it. These problems happen when exports made on a 64-bit Windows machine are imported into a 32-bit Windows machine:
Problem:
- You change a process flow and deploy it, but it does not behave as the diagram shows.
Solution:
- Redo the entire process flow from scratch. A clue that the process flow will not behave as the diagram shows is to check the outgoing flows from a node: if they have the same numbers in <>, there is a problem. This is something OWB should validate itself, but that is the way OWB PARIS is at the moment.
Note: Work Flow Manager Client can show the errors on the process flow.
Problem:
- The columns to merge on are not set properly when mappings are imported from another OWB installation.
Solution:
- Reset the merge columns correctly again, or create the package in production only.
Tuesday, 27 February 2007
Using Partition By in a Lag
I had to "unaccumulate" figures in a table, and a colleague suggested using the Lag function. Most Analytic Lag function examples show the order by clause only, but if the calculations using Lag relate to a specific group of records within a table, then we need the partition by as well. Here is the syntax for using partition by as well as order by in a Lag function:
select t.target_type, t.version, t.unit_code, t.canvass, t.product_descr,
       t.channel_name, t.aggr_level, t.cycle, t.yearmonth, t.charge_out,
       t.charge_out - lag(t.charge_out, 1, 0)
         over (partition by t.target_type, t.version, t.unit_code, t.canvass,
                            t.product_descr, t.channel_name, t.aggr_level, t.cycle
               order by to_date(t.yearmonth, 'mon-yy')) m_charge_out
from   dwh_target t
order  by t.target_type, t.version, t.unit_code, t.canvass, t.product_descr,
       t.channel_name, t.aggr_level, t.cycle, to_date(t.yearmonth, 'mon-yy');
If you use this in OWB expressions [Paris version], make sure it is all on one line; otherwise it will compile incorrectly.
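To make the idea clearer in isolation, here is a minimal sketch against a hypothetical cumulative_sales table holding running totals per product; the partition by restarts Lag for each product, so the subtraction never crosses from one product into another:

select product_id, yearmonth, cum_amount,
       -- previous running total within the same product, 0 for the first row
       cum_amount - lag(cum_amount, 1, 0)
         over (partition by product_id
               order by to_date(yearmonth, 'mon-yy')) monthly_amount
from   cumulative_sales
order  by product_id, to_date(yearmonth, 'mon-yy');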
Friday, 12 January 2007
Identifying SQL Statements that Could Use Bind Parameters
This function identifies statements that would be the same if they used bind variables:
create or replace function remove_constants( p_query in varchar2 )
return varchar2
as
  l_query     long;
  l_char      varchar2(1);
  l_in_quotes boolean default FALSE;
BEGIN
  for i in 1 .. length( p_query )
  loop
    l_char := substr( p_query, i, 1 );
    if ( l_char = '''' and l_in_quotes ) then
      l_in_quotes := FALSE;
    elsif ( l_char = '''' and NOT l_in_quotes ) then
      l_in_quotes := TRUE;
      l_query := l_query || '''#';  -- every string literal becomes '#'
    end if;
    if ( NOT l_in_quotes ) then
      l_query := l_query || l_char;
    end if;
  end loop;
  -- turn every digit into @, then collapse runs of @s and spaces
  l_query := translate( l_query, '0123456789', '@@@@@@@@@@' );
  for i in 0 .. 8 loop
    l_query := replace( l_query, lpad('@',10-i,'@'), '@' );
    l_query := replace( l_query, lpad(' ',10-i,' '), ' ' );
  end loop;
  return upper( l_query );
END;
/
The following example shows how the above function can be used. First, create a temporary table to work in:
create global temporary table sql_area_tmp
on commit preserve rows
as
select sql_text, sql_text sql_text_two_constants
from   v$sqlarea
where  1 = 0;
Here is a SQL example that does not use bind variables, so essentially the same statement is parsed over and over again:
DECLARE
  cursor c1 is
    select object_name
    from   dba_objects;
  l_object dba_objects.object_name%type;
BEGIN
  for c1rec in c1 loop
    -- each iteration builds a different literal, so nothing is shared
    execute immediate 'insert into pn_temp values (''' || c1rec.object_name || ''')';
  end loop;
END;
/
The scripts below show how this statement can be identified as one that should be re-examined and rewritten with bind variables:
insert into sql_area_tmp( sql_text )
select sql_text
from   v$sqlarea;

update sql_area_tmp
set    sql_text_two_constants = remove_constants( sql_text );

select sql_text_two_constants, count(*)
from   sql_area_tmp
group  by sql_text_two_constants
order  by 2 desc;
SQL statements that should be re-examined will appear at the top with a high count.
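For comparison, here is a minimal sketch of the earlier loop rewritten with a bind variable (same hypothetical pn_temp table), so only one shared statement is parsed:

DECLARE
  cursor c1 is
    select object_name
    from   dba_objects;
BEGIN
  for c1rec in c1 loop
    -- one shared statement; the value is bound, not concatenated
    execute immediate 'insert into pn_temp values (:obj)'
      using c1rec.object_name;
  end loop;
END;
/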
Wednesday, 10 January 2007
Top N Query
I remember, back in the days of Oracle V6, being asked in an interview to write a query in SQL - not using PL/SQL - to show the top 5 salaries from the EMP table. Ever since then I have been interested in queries that answer this.
Using the analytic function dense_rank, as pointed out by Tom Kyte in this month's Oracle magazine, is the best approach I have seen so far:
select *
from   ( select deptno, ename, sal,
                dense_rank() over (partition by deptno order by sal desc) salrnk
         from   emp )
where  salrnk <= 3
order  by deptno, sal desc;
Thursday, 4 January 2007
In an OWB Mapping the Generated Join Is Corrupted
This is not the sort of thing one would expect to be a bug, but sadly it is. It seems to happen with complex joins when you add new attributes, and the only way round it is to redo the join.
See bug no. 4914839 in Metalink.
Problem: the customer has a mapping with many operators, including a join operator. In the output group of the join operator, output attributes are linked to unrelated items from the input group, so the final result inserts incorrect values into columns.
A note from another developer (03/10/06):
Back in Bombay, we didn't have the joiner input/output matching properties at all, and no "doPropertyUpgrade" method. So this is probably related to an older joiner metadata corruption bug, not our recent Paris bug. I recall that the old bug may be triggered by some combination of adding/removing attributes or groups on the joiner in an older version (pre-Bombay), then upgrading the repository. At this point, I think you are right that the only way to fix it is to recreate the joiner.
Creating SCD2 and SCD1 Mappings Using OWB
I came across an article on the Oracle Technology Network site on how to address slowly changing dimensions in OWB, which is all well and good.
Personally, however, my favourite solution is to use the attribute properties so that a merge statement is created when the target is populated (a sketch of the resulting merge follows the property lists below).
The target table's operator properties should be set to update/insert.
The surrogate key, populated from a sequence, should have its attribute properties set to:
Load Column When Inserting Row - Yes
Load Column When Updating Row - No
Match Column When Updating Row - No
Update Operation - =
Columns that have been designated SCD2 types should have their attribute properties set to:
Load Column When Inserting Row - Yes
Load Column When Updating Row - No
Match Column When Updating Row - Yes
Update Operation - =
Match Column When Deleting - No
and SCD1 Types should be set to:
Load Column When Inserting Row - Yes
Load Column When Updating Row - Yes
Match Column When Updating Row - No
Update Operation - =
Match Column When Deleting - No
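As a rough sketch (hypothetical dim_customer target and stg_customer source), the settings above aim to produce a merge of this shape: match on the natural key plus the SCD2 columns, update the SCD1 columns in place on a match, and insert a new row with a fresh surrogate key when an SCD2 column has changed:

merge into dim_customer trg
using stg_customer src
on (    trg.natural_key   = src.natural_key
    and trg.scd2type_col1 = src.scd2type_col1
    and trg.scd2type_col2 = src.scd2type_col2 )
when matched then
  -- SCD1 columns: overwritten in place
  update set trg.scd1type_col = src.scd1type_col
when not matched then
  -- SCD2 change (or brand-new key): insert a new version
  insert ( surrogate_key, natural_key, scd2type_col1, scd2type_col2,
           scd1type_col, current_flag )
  values ( dim_seq.nextval, src.natural_key, src.scd2type_col1,
           src.scd2type_col2, src.scd1type_col, 'Y' );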
If there are any further updates, such as setting a flag to identify the most current record, the target table should be joined with the source in another part of the same mapping, using a join condition that identifies the records to be updated, in a similar fashion to the one below:
join operator property:
src.natural_key = trg.natural_key
and trg.current_flag = 'Y'
and ( src.scd2type_col1 != trg.scd2type_col1
   or src.scd2type_col2 != trg.scd2type_col2 )
......
i.e. this identifies all the records that were current but are no longer current according to the latest source.
Finally, join the above join results to the target table with its operator property set to UPDATE.
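In plain SQL, the effect of that second part is roughly the following (again with hypothetical names), closing off versions that are no longer current:

update dim_customer trg
set    trg.current_flag = 'N'
where  trg.current_flag = 'Y'
and    exists ( select 1
                from   stg_customer src
                where  src.natural_key = trg.natural_key
                and   (  src.scd2type_col1 != trg.scd2type_col1
                      or src.scd2type_col2 != trg.scd2type_col2 ) );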