Feed aggregator

Export table with exclude columns

Tom Kyte - 16 hours 10 min ago
Hi, I have a large partitioned table. Now I want to export it and give the dump to someone, but not all columns; some specific columns need to be trimmed.. My approach: I import the table on a dev setup, then make the specific columns unused and then try to drop them...
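
A minimal sketch of the approach the question describes, run on the dev copy (table and column names here are hypothetical):

ALTER TABLE sales_copy SET UNUSED (card_number, ssn);
ALTER TABLE sales_copy DROP UNUSED COLUMNS CHECKPOINT 10000;

SET UNUSED is a fast, dictionary-only operation; DROP UNUSED COLUMNS then physically removes the data, and the CHECKPOINT clause limits undo usage on a large table.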
Categories: DBA Blogs

SQL Plan stability in 11G using stored outlines

Yann Neuhaus - 16 hours 55 min ago

A stored outline is a collection of hints associated with a specific SQL statement that allows a standard execution plan to be maintained, regardless of changes in the system environment or associated statistics. Plan stability is based on the preservation of execution plans at a point in time where the performance of a statement is considered acceptable. The outlines are stored in the OL$, OL$HINTS, and OL$NODES tables, but the [USER|ALL|DBA]_OUTLINES and [USER|ALL|DBA]_OUTLINE_HINTS views should be used to display information about existing outlines.

All of the caveats associated with optimizer hints apply equally to stored outlines. Under normal running the optimizer chooses the most suitable execution plan for the current circumstances. By using a stored outline you may be forcing the optimizer to choose a substandard execution plan, so you should monitor the effects of your stored outlines over time to make sure this isn’t happening. Remember, what works well today may not tomorrow.

We often run into situations where the performance of a query regresses, or the optimizer is not able to choose the better execution plan.

In the following lines I will describe a scenario that calls for the use of a stored outline.

-- We identify the different plans that exist for our sql_id:

SQL> select hash_value,child_number,sql_id,executions from v$sql where sql_id='574gkc8gn7u0h';

HASH_VALUE CHILD_NUMBER SQL_ID        EXECUTIONS 
---------- ------------ ------------- ---------- 
 524544016            0 574gkc8gn7u0h          4 
 576321033            1 574gkc8gn7u0h          5

 

Of the two plans, we know that the best one is the plan with cost 15 and plan hash value 4013416232, but it is not always chosen by the optimizer, causing performance spikes.

SQL> select * from table(dbms_xplan.display_cursor('574gkc8gn7u0h',0));

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------

SQL_ID  574gkc8gn7u0h, child number 0
-------------------------------------
Select   m.msg_message_id,   m.VersionId,   m.Knoten_id,
m.Poly_User_Id,   m.State,   'U' as MutationsCode from
........................................................

Plan hash value: 4013416232

-------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                            |       |       |    15 (100)|       |
|   1 |  UNION-ALL                     |                            |       |       |            |       |
|*  2 |   FILTER                       |                            |       |       |            |       |
|   3 |    NESTED LOOPS                |                            |       |       |            |       |
|   4 |     NESTED LOOPS               |                            |     1 |    76 |     7  (15)| 00:00:01 |
|   5 |      MERGE JOIN CARTESIAN      |                            |     1 |    52 |     5  (20)| 00:00:01 |
|   6 |       SORT UNIQUE              |                            |     1 |    26 |     2   (0)| 00:00:01 |
|*  7 |        TABLE ACCESS FULL       | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|   8 |       BUFFER SORT              |                            |     1 |    26 |     3  (34)| 00:00:01 |
|   9 |        SORT UNIQUE             |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 10 |         TABLE ACCESS FULL      | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|* 11 |      INDEX RANGE SCAN          | XAK2_MSG_MESSAGE_ENTRY     |     1 |       |     1   (0)| 00:00:01 |
|* 12 |     TABLE ACCESS BY INDEX ROWID| MSG_MESSAGE_ENTRY          |     1 |    24 |     2   (0)| 00:00:01 |
|* 13 |   FILTER                       |                            |       |       |            |       |
|  14 |    NESTED LOOPS                |                            |       |       |            |       |
|  15 |     NESTED LOOPS               |                            |     1 |    76 |     8  (13)| 00:00:01 |
|  16 |      MERGE JOIN CARTESIAN      |                            |     1 |    52 |     5  (20)| 00:00:01 |
|  17 |       SORT UNIQUE              |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 18 |        TABLE ACCESS FULL       | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|  19 |       BUFFER SORT              |                            |     1 |    26 |     3  (34)| 00:00:01 |
|  20 |        SORT UNIQUE             |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 21 |         TABLE ACCESS FULL      | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|* 22 |      INDEX RANGE SCAN          | XAK3_MSG_MESSAGE_ENTRY_DEL |     1 |       |     2   (0)| 00:00:01 |
|  23 |     TABLE ACCESS BY INDEX ROWID| MSG_MESSAGE_ENTRY_DEL      |     1 |    24 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(:LASTTREATEDVERSIONID<=:MAXVERSIONID)
   7 - filter("SERIAL#"=1999999999)
  10 - filter("SERIAL#"=1999999998)

 

In order to fix this, we will create and enable an outline that should help the optimizer to always choose the best plan:

BEGIN
  DBMS_OUTLN.create_outline(hash_value   => 524544016,
                            child_number => 0);
END;
/

PL/SQL procedure successfully completed.

SQL>
SQL> alter system set use_stored_outlines=TRUE;

System altered.

SQL> create or replace trigger trig_start_out after startup on database
  2  begin
  3  execute immediate 'alter system set use_stored_outlines=TRUE';
  4  end;
  5  /

Trigger created.

As the parameter USE_STORED_OUTLINES is a 'pseudo' parameter, it is not persistent across a restart of the system; for that reason we had to create this on-startup trigger in the database.

Now we can check if the outline is used:
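
The output below can be produced with a query along these lines (a sketch against the DBA_OUTLINES view mentioned earlier; the owner filter is illustrative):

SQL> select name, owner, category, used from dba_outlines where owner = 'TEST';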

NAME                           OWNER                          CATEGORY                       USED
------------------------------ ------------------------------ ------------------------------ ------
SYS_OUTLINE_18092409295665701  TEST                         DEFAULT                        USED

And we also check that the outline is taken into account by the execution:

SQL> select * from table(dbms_xplan.display_cursor('574gkc8gn7u0h',0));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------

SQL_ID  574gkc8gn7u0h, child number 0
-------------------------------------
Select   m.msg_message_id,   m.VersionId,   m.Knoten_id,
m.Poly_User_Id,   m.State,   'U' as MutationsCode from
msg_message_entry m where   m.VersionId between :LastTreatedVersionId
...................

Plan hash value: 4013416232

-------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                            |       |       |    15 (100)|       |
|   1 |  UNION-ALL                     |                            |       |       |            |       |
|*  2 |   FILTER                       |                            |       |       |            |       |
|   3 |    NESTED LOOPS                |                            |       |       |            |       |
|   4 |     NESTED LOOPS               |                            |     1 |    76 |     7  (15)| 00:00:01 |
|   5 |      MERGE JOIN CARTESIAN      |                            |     1 |    52 |     5  (20)| 00:00:01 |
|   6 |       SORT UNIQUE              |                            |     1 |    26 |     2   (0)| 00:00:01 |
|*  7 |        TABLE ACCESS FULL       | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|   8 |       BUFFER SORT              |                            |     1 |    26 |     3  (34)| 00:00:01 |
|   9 |        SORT UNIQUE             |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 10 |         TABLE ACCESS FULL      | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|* 11 |      INDEX RANGE SCAN          | XAK2_MSG_MESSAGE_ENTRY     |     1 |       |     1   (0)| 00:00:01 |
|* 12 |     TABLE ACCESS BY INDEX ROWID| MSG_MESSAGE_ENTRY          |     1 |    24 |     2   (0)| 00:00:01 |
|* 13 |   FILTER                       |                            |       |       |            |       |
|  14 |    NESTED LOOPS                |                            |       |       |            |       |
|  15 |     NESTED LOOPS               |                            |     1 |    76 |     8  (13)| 00:00:01 |
|  16 |      MERGE JOIN CARTESIAN      |                            |     1 |    52 |     5  (20)| 00:00:01 |
|  17 |       SORT UNIQUE              |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 18 |        TABLE ACCESS FULL       | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|  19 |       BUFFER SORT              |                            |     1 |    26 |     3  (34)| 00:00:01 |
|  20 |        SORT UNIQUE             |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 21 |         TABLE ACCESS FULL      | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|* 22 |      INDEX RANGE SCAN          | XAK3_MSG_MESSAGE_ENTRY_DEL |     1 |       |     2   (0)| 00:00:01 |
|  23 |     TABLE ACCESS BY INDEX ROWID| MSG_MESSAGE_ENTRY_DEL      |     1 |    24 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(:LASTTREATEDVERSIONID<=:MAXVERSIONID)
   7 - filter("SERIAL#"=1999999999)
  10 - filter("SERIAL#"=1999999998)
  
Note
-----
   - outline "SYS_OUTLINE_18092409295665701" used for this statement

To use stored outlines when Oracle compiles a SQL statement, we need to enable them by setting the system parameter USE_STORED_OUTLINES to TRUE or to a category name. This parameter can also be set at the session level.
By setting this parameter to TRUE, the default category under which outlines are created is DEFAULT.
If you prefer to specify a category on the outline creation procedure, Oracle will use this outline category until you provide another category value or you disable the usage of outlines by setting the parameter USE_STORED_OUTLINES to FALSE.
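
For example, a sketch of creating the same outline in a named category (the category name APP_FIXES is hypothetical) and enabling it for the session:

BEGIN
  DBMS_OUTLN.create_outline(hash_value   => 524544016,
                            child_number => 0,
                            category     => 'APP_FIXES');
END;
/

SQL> alter session set use_stored_outlines = APP_FIXES;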

 

The post SQL Plan stability in 11G using stored outlines appeared first on Blog dbi services.

Connection from Data Center to Cisco Switch issue on Exadata X7

Alejandro Vargas - 17 hours 31 min ago

It may happen that, when trying to reach the Exadata machine using the management network, the data center switch that connects to the Exadata Cisco switch gets a 'connection rejected' error.

On the X7-2, port 48 of the Cisco switch is set up by default as 'spanning tree network', which sets the BPDU* filter to disabled, and communications between the customer switch and the Cisco switch are rejected.

To solve this, run the following command on the Cisco switch interface:

  • spanning-tree bpdufilter enable

Enabling BPDU filtering at the interface level stops the interface from sending or receiving BPDUs; this is the same as disabling spanning tree on the interface.

See on My Oracle Support: Cisco 93108 switch is getting Error Disabled on the Uplink Switch (Doc ID 2360702.1)

* BPDU stands for Bridge Protocol Data Unit. BPDUs are data messages that are exchanged across the switches within an extended LAN that uses a spanning tree protocol topology.

 

Categories: DBA Blogs

Safari 12 Implications for E-Business Suite Users

Steven Chan - Sun, 2018-09-23 22:21

Customers have been asking about the compatibility of new versions of the following Apple products with Oracle E-Business Suite 12.1 and 12.2:

  • Safari 12 (works with macOS 10.13.6 and 10.12.6)
  • macOS Mojave (macOS 10.14)

Neither of these two products has been certified with EBS 12.1 or EBS 12.2 as of September 18, 2018.

Safari 12 is unable to launch Java in the way that prior Safari versions could. This will prevent E-Business Suite 12.1 and 12.2 customers from running Forms-based products. Therefore, customers should *NOT* upgrade to Safari 12 on macOS desktop platforms.

macOS Mojave (macOS 10.14) will include Safari 12. Customers should *NOT* upgrade to macOS Mojave.

We are aware of the potential impact to E-Business Suite users and are actively pursuing solutions.    

What has changed?

Safari 12 introduces an important change: it removes support for “legacy NPAPI plug-ins”. This affects all EBS releases. macOS Mojave includes Safari 12.

Some products within Oracle EBS 12.1 and 12.2 run via HTML in browsers. These are sometimes called “self-service web applications.” These EBS products are expected to run without issue in Safari 12, but our certification testing is still underway.

Some products within Oracle EBS 12.1 and EBS 12.2 use Oracle Forms. Oracle Forms requires Java for desktop clients. On the macOS desktop platform, the only certified option today for launching Java is via the JRE plugin via the NPAPI approach.

This means that Safari 12 and macOS Mojave (macOS 10.14) will be unable to use the current JRE plugin-based launching technology for Java and Forms for EBS desktop users.

Recommendations for EBS customers

As of today, the latest certified versions of these products are:

  • Safari 11 (works with macOS 10.13)
  • macOS High Sierra (macOS 10.13)

EBS customers should use only certified configurations. EBS customers who use Forms-based products should avoid upgrading to Safari 12 or macOS Mojave today.

What’s next for EBS clients on macOS?

We have certified Java Web Start (JWS) for Windows clients accessing EBS 12.1 and 12.2 environments. Java Web Start is an alternative to the JRE plugin approach. Java Web Start works with browsers that lack NPAPI support. For more details about Java Web Start and EBS on Windows, see:

We are working on certifying Java Web Start technology with macOS. Our goal is to provide a solution that allows E-Business Suite users on macOS to continue without any reduction in functionality. Given changes to Safari and Apple’s security model, we are examining options for browsers such as Firefox.

Once again, we are acutely aware of the potential impact to E-Business Suite users and have been working on solutions for macOS for some time.

Oracle’s Revenue Recognition rules prohibit us from discussing certification and release dates, but you’re welcome to monitor or subscribe to this blog. I’ll post updates here as soon as they’re available.


Categories: APPS Blogs

Verifying White Listing for Oracle Integration Platform

Antony Reynolds - Sun, 2018-09-23 18:33
Verifying Your White List is Working

A lot of customers require all outbound connections from systems to be validated against a whitelist.  This article explains the different types of whitelist that might be applied and why they are important to Oracle Integration Cloud (OIC).  Whitelisting means that if a system is not specifically enabled then its internet access is blocked.

The Need

If your company requires systems to be whitelisted then you need to consider the following use cases:

  • Agent Requires Access to Integration Cloud
  • On-Premise Systems Initiating Integration Flows
  • On-Premise Systems Raising Events

In all the above cases we need to be able to make a call to Integration Cloud through the firewall which may require whitelisting.

Types of Whitelisting

Typically there are two components involved in whitelisting: the source system and the target system.  In our case the target system will be Oracle Integration Cloud, and if using OAuth then the Identity Cloud Service (IDCS) as well.  The source system will be either the OIC connectivity agent, or a source system initiating integration flows, possibly via an event mechanism.

Whitelisting Patterns

  Pattern           Source Whitelisted   Target Whitelisted
  Target Only       No                   Yes
  Source & Target   Yes                  Yes
  Source Only       Yes                  No

Only the first two are usually seen; the third is included for completeness, but I have not seen it in the wild.

Information Required

When providing information to the network group to enable the whitelisting you may be asked to provide IP addresses of the systems being used.  You can obtain these by using the nslookup command.

> nslookup myenv-mytenancy.integration.ocp.oraclecloud.com
Server:   123.45.12.34
Address:  123.45.12.34#53

Non-authoritative answer:
myenv-mytenancy.integration.ocp.oraclecloud.com canonical name = 123456789ABCDEF.integration.ocp.oraclecloud.com.
Name:     123456789ABCDEF.integration.ocp.oraclecloud.com
Address:  123.123.123.123

You will certainly need to look up your OIC instance hostname. You may also need your IDCS instance, which is the URL you get when logging on.

Testing Access

Once the whitelist is enabled we can test it by using the curl command from the machine from which we require whitelist access.

> curl -i -u 'my_user@mycompany.com:MyP@ssw0rd' https://myenv-mytenancy.integration.ocp.oraclecloud.com/icsapis/v2/integrations
HTTP/1.1 200 OK
Date: Sun, 23 Sep 2018 23:19:44 GMT
Content-Type: application/json;charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
X-ORACLE-DMS-ECID: 1234567890abcdef
X-ORACLE-DMS-RID: 0
Set-Cookie: iscs_auth=0123456789abcdef; path=/; HttpOnly
...

The -i flag is used to show the header of the response; if there is an error, this flag will enable you to see the HTTP error code.

The -u flag is used to provide credentials.

In the example above we have listed all the integrations that are in the instance.  If you don't see the list of integrations then something is wrong.  Common problems are:

  • Wrong URL
  • Wrong username/password - pass them using single quotes to prevent interpretation of special characters by the shell.
  • Access denied due to whitelist not enabled - depending on the environment this may show as a timeout or an error from a proxy server.
Summary

As you can see, gathering the information for whitelisting and then testing that it is correctly enabled is straightforward and doesn't require advanced networking skills.

New Batch Level Of Service Algorithms

Anthony Shorten - Sun, 2018-09-23 18:25

Batch Level Of Service allows implementations to register a service target on individual batch controls and have the Oracle Utilities Application Framework assess the current performance against that target.

In past releases of the Oracle Utilities Application Framework the concept of Batch Level Of Service was introduced. This is an algorithm point that assesses the value of a performance metric against a target and returns the appropriate response for that target. By default, if configured, the return value is shown on the Batch Control maintenance screen or whenever called. For example:

Example Batch Control

It was possible to build an algorithm to set and check the target and return the appropriate level. This can be called manually, be available via the Health Service API or used in custom Query Portals. For example:

Example Portal

In this release a number of enhancements have been introduced:

  • Possible to specify multiple algorithms. If you want to model multiple targets it is now possible to link more than one algorithm to a batch control. The appropriate level will be returned (the worst case) for the multiple algorithms.
  • More Out Of The Box algorithms. In past releases a basic generic algorithm was supplied but additional algorithms are now provided to include additional metrics:
  Algorithm      Description
  F1-BAT-LVSVC   Original generic algorithm to evaluate Error Count and Time Since Completion
  F1-BAT-ERLOS   Compare Count of Records in Error to Threshold
  F1-BAT-RTLOS   Compare Total Batch Run Time to Threshold
  F1-BAT-TPLOS   Compare Throughput to Threshold

For more information about these algorithms refer to the algorithm entries themselves and the online documentation.

Sea of Fertility

Greg Pavlik - Sun, 2018-09-23 18:24
In a discussion on some of my reservations about Murakami's take on 20th century Japanese literature, a friend commented on Mishima's Sea of Fertility tetralogy with some real insights I thought worth preserving and sharing, albeit anonymously (if you're not into Japanese literature, now's a good time to stop reading):

"My perspective is different: it was a perfect echo of the end of “Spring Snow” and a final liberation of the main character from his self-constructed prison of beliefs. Honda’s life across the novels represents the false path: of consciousness the inglorious decay and death of the soul trapped in a repetition of situations that it cannot fathom being forced into waking. He is forced into being an observer of his own life eventually debasing himself into a “peeping Tom” even as he works as a judge. The irony is rich. Honda decays through the four novels since he clings to the memory of his friend (Kiyoaki) and does not understand the constructed nature his experience and desires. He is asleep. He wants Matsugae’s final dream to be the truth (that they will “...meet again under the Falls.”) His desires have been leading him in a circle and the final scene in the garden is his recognition of what the Abbess (Satoko from Spring Snow) was trying to convey to him. When she tells him, “There was no such person as Kiyoaki Matsugae”, it is her attempt to cure him of his delusion (and spiritual illness that has rendered him desperate and weak - chasing the ego illusions of his youth and seeking the reincarnation of his friend everywhere.) Honda lives in the dream of his ego and desire. In the final scene, he wakes up for the first time. I loved the image of the shadows falling on the garden. He is finally dying, stripped of illusion. I found it to be Mishima at his most powerful. I agree about “Sailor”, that is a great novel and much more Japanese in its economy of expression. Now, Haruki Murakami is a world apart from Kawabata and Mishima. I love his use of the unconscious/Id as a place to inform and enthrall: the labyrinth of dreams. Most of his characters are trapped (at least part of the time) in this “place”: eg Kafka on the Shore, Windup Bird Chronicle, Hard-boiled Wonderland and End of the World, etc. Literature has to have room for all of them. I like the other Murakami, Ryu Murkami, whose “Audition” and “Famous Hits of the Shōwa Era” are dark, psychotic tales of unrestrained, escalating violence but redeemed by deep probing of unconscious, hidden motives (the inhuman work of the unconscious that guides the characters like the Greek sense of fate (Moira)) and occasional black humor."
 

12.1.0.2.0 ORA-01033: ORACLE initialization or shutdown in progress cascade standby

Michael Dinh - Sun, 2018-09-23 10:22
The objective is to create a RAC cascade standby (olapdr) from an existing standby (oltpdr) on the same host.
Cascade Standby: ORACLE_SID=olap1; db_name=oltp; db_unique_name=olapdr
Standby:         ORACLE_SID=oltp1; db_name=oltp; db_unique_name=oltpdr

 

Configuration for standby (oltpdr)
$ srvctl config database -d oltpdr
Spfile: +DATA/OLTPDR/spfileoltpdr.ora
Password file: +DATA/OLTPDR/orapwoltpdr
Copy existing password file from disk to ASM for cascade standby (olapdr)

This did not work; the file on disk is possibly different from the password file in ASM.
Please don’t ask me why.

-rw-r----- 1 oracle oinstall 7680 Sep 18 13:28 orapwolap
-rw-r----- 1 oracle oinstall 7680 Sep 18 13:28 orapwolap1
-rw-r----- 1 oracle oinstall 7680 May 28 12:08 orapwoltp
-rw-r----- 1 oracle oinstall 7680 May 28 12:08 orapwoltp1

$ cp $ORACLE_HOME/dbs/orapwolap /tmp/orapwolapdr
ASMCMD> pwcopy /tmp/orapwolapdr +DATA/OLAPDR/orapwolapdr
Check standby (oltpdr)
oltp1> @dataguard.sql

Session altered.

*** v$database ***

DB              OPEN                   DATABASE                                REMOTE     SWITCHOVER         DATAGUARD  PRIMARY_DB
UNIQUE_NAME     MODE                   ROLE               PROTECTION_MODE      ARCHIVE    STATUS             BROKER     UNIQUE_NAME
--------------- ---------------------- ------------------ -------------------- ---------- ------------------ ---------- ---------------
oltpdr          READ ONLY WITH APPLY   PHYSICAL STANDBY   MAXIMUM PERFORMANCE  ENABLED    NOT ALLOWED        ENABLED    oltp

*** gv$archive_dest_status ***
                             DB                                        DATABASE                     RECOVERY
 INST  DEST TARGET           UNIQUE_NAME     DESTINATION               MODE            STATUS       MODE                    SCHEDULE PROCESS
----- ----- ---------------- --------------- ------------------------- --------------- ------------ ----------------------- -------- --------
    1     1 LOCAL            NONE            USE_DB_RECOVERY_FILE_DEST OPEN_READ-ONLY  VALID        MANAGED REAL TIME APPLY ACTIVE   ARCH
          5 REMOTE           olapdr          olapdr                    UNKNOWN         ERROR        IDLE                    ACTIVE   ARCH
         32 LOCAL            NONE            USE_DB_RECOVERY_FILE_DEST UNKNOWN         VALID        IDLE                    ACTIVE   RFS

    2     1 LOCAL            NONE            USE_DB_RECOVERY_FILE_DEST OPEN_READ-ONLY  VALID        MANAGED REAL TIME APPLY ACTIVE   ARCH
          5 REMOTE           olapdr          olapdr                    UNKNOWN         ERROR        IDLE                    ACTIVE   ARCH
         32 LOCAL            NONE            USE_DB_RECOVERY_FILE_DEST UNKNOWN         VALID        IDLE                    ACTIVE   RFS

6 rows selected.

 INST  DEST STATUS       SRL GAP_STATUS      ERROR
----- ----- ------------ --- --------------- --------------------------------------------------------------------------------
    1     1 VALID        NO                  NONE
          5 ERROR        NO  RESOLVABLE GAP  ORA-01033: ORACLE initialization or shutdown in progress
         32 VALID        NO                  NONE

    2     1 VALID        NO                  NONE
          5 ERROR        NO  RESOLVABLE GAP  ORA-01033: ORACLE initialization or shutdown in progress
         32 VALID        NO                  NONE

6 rows selected.
A teammate's suggestion was to copy the password file in ASM from oltpdr to olapdr
ASMCMD> pwcopy +DATA/OLTPDR/orapwoltpdr /tmp/orapwolapdr
ASMCMD> pwcopy /tmp/orapwolapdr +DATA/OLAPDR/orapwolapdr
Check standby (oltpdr), and don’t forget to defer and re-enable the destination.
oltp1> alter system set log_archive_dest_state_5=defer;

System altered.

oltp1> alter system set log_archive_dest_state_5=enable

oltp1> @dataguard.sql

Session altered.

*** v$database ***

DB              OPEN                   DATABASE                                REMOTE     SWITCHOVER         DATAGUARD  PRIMARY_DB
UNIQUE_NAME     MODE                   ROLE               PROTECTION_MODE      ARCHIVE    STATUS             BROKER     UNIQUE_NAME
--------------- ---------------------- ------------------ -------------------- ---------- ------------------ ---------- ---------------
oltpdr          READ ONLY WITH APPLY   PHYSICAL STANDBY   MAXIMUM PERFORMANCE  ENABLED    NOT ALLOWED        ENABLED    oltp

*** gv$archive_dest_status ***
                             DB                                        DATABASE                     RECOVERY
 INST  DEST TARGET           UNIQUE_NAME     DESTINATION               MODE            STATUS       MODE                    SCHEDULE PROCESS
----- ----- ---------------- --------------- ------------------------- --------------- ------------ ----------------------- -------- --------
    1     1 LOCAL            NONE            USE_DB_RECOVERY_FILE_DEST OPEN_READ-ONLY  VALID        MANAGED REAL TIME APPLY ACTIVE   ARCH
          5 REMOTE           olapdr          olapdr                    MOUNTED-STANDBY VALID        MANAGED REAL TIME APPLY ACTIVE   ARCH
         32 LOCAL            NONE            USE_DB_RECOVERY_FILE_DEST UNKNOWN         VALID        IDLE                    ACTIVE   RFS

    2     1 LOCAL            NONE            USE_DB_RECOVERY_FILE_DEST OPEN_READ-ONLY  VALID        MANAGED REAL TIME APPLY ACTIVE   ARCH
          5 REMOTE           olapdr          olapdr                    MOUNTED-STANDBY VALID        MANAGED REAL TIME APPLY ACTIVE   ARCH
         32 LOCAL            NONE            USE_DB_RECOVERY_FILE_DEST UNKNOWN         VALID        IDLE                    ACTIVE   RFS

6 rows selected.

 INST  DEST STATUS       SRL GAP_STATUS      ERROR
----- ----- ------------ --- --------------- --------------------------------------------------------------------------------
    1     1 VALID        NO                  NONE
          5 VALID        YES RESOLVABLE GAP  NONE
         32 VALID        NO                  NONE

    2     1 VALID        NO                  NONE
          5 VALID        YES RESOLVABLE GAP  NONE
         32 VALID        NO                  NONE

6 rows selected.
Check standby (olapdr)
olap1> @dataguard.sql

Session altered.

*** v$database ***

DB              OPEN                   DATABASE                                REMOTE     SWITCHOVER         DATAGUARD  PRIMARY_DB
UNIQUE_NAME     MODE                   ROLE               PROTECTION_MODE      ARCHIVE    STATUS             BROKER     UNIQUE_NAME
--------------- ---------------------- ------------------ -------------------- ---------- ------------------ ---------- ---------------
olapdr          MOUNTED                PHYSICAL STANDBY   MAXIMUM PERFORMANCE  ENABLED    NOT ALLOWED        DISABLED   oltp

*** gv$archive_dest_status ***
                             DB                                        DATABASE                     RECOVERY
 INST  DEST TARGET           UNIQUE_NAME     DESTINATION               MODE            STATUS       MODE                    SCHEDULE PROCESS
----- ----- ---------------- --------------- ------------------------- --------------- ------------ ----------------------- -------- --------
    1     1 LOCAL            NONE            USE_DB_RECOVERY_FILE_DEST MOUNTED-STANDBY VALID        MANAGED REAL TIME APPLY ACTIVE   ARCH
         32 LOCAL            NONE            USE_DB_RECOVERY_FILE_DEST UNKNOWN         VALID        IDLE                    ACTIVE   RFS

    2     1 LOCAL            NONE            USE_DB_RECOVERY_FILE_DEST MOUNTED-STANDBY VALID        MANAGED REAL TIME APPLY ACTIVE   ARCH
         32 LOCAL            NONE            USE_DB_RECOVERY_FILE_DEST UNKNOWN         VALID        IDLE                    ACTIVE   RFS

 INST  DEST STATUS       SRL GAP_STATUS      ERROR
----- ----- ------------ --- --------------- --------------------------------------------------------------------------------
    1     1 VALID        NO                  NONE
         32 VALID        NO                  NONE

    2     1 VALID        NO                  NONE
         32 VALID        NO                  NONE

*** v$archived_log ***

TIME                  THREAD# ARCHIVED  APPLIED      GAP
-------------------- -------- -------- -------- --------
23-SEP-2018 09:26:05        1    37550    37467       83
23-SEP-2018 09:26:05        2    32631    32566       65

*** gv$managed_standby ***
                                        CLIENT                                               DELAY
 INST PID                       THREAD# PROCESS    PROCESS  STATUS       SEQUENCE#   BLOCK#   MINS
----- ------------------------ -------- ---------- -------- ------------ --------- -------- ------
    1 399156                          2 N/A        MRP0     APPLYING_LOG     32570   721960      0

olap1> r
  1  select inst_id inst,PID,thread#,client_process,process,status,sequence#,block#,DELAY_MINS
  2  from gv$managed_standby
  3  where status not in ('CLOSING','IDLE','CONNECTED')
  4  order by inst_id, status desc, thread#, sequence#
  5*
                                        CLIENT                                               DELAY
 INST PID                       THREAD# PROCESS    PROCESS  STATUS       SEQUENCE#   BLOCK#   MINS
----- ------------------------ -------- ---------- -------- ------------ --------- -------- ------
    1 399156                          2 N/A        MRP0     APPLYING_LOG     32570   733658      0

olap1> r
  1  select inst_id inst,PID,thread#,client_process,process,status,sequence#,block#,DELAY_MINS
  2  from gv$managed_standby
  3  where status not in ('CLOSING','IDLE','CONNECTED')
  4  order by inst_id, status desc, thread#, sequence#
  5*

                                       CLIENT                                               DELAY
 INST PID                       THREAD# PROCESS    PROCESS  STATUS       SEQUENCE#   BLOCK#   MINS
----- ------------------------ -------- ---------- -------- ------------ --------- -------- ------
    1 399156                          1 N/A        MRP0     APPLYING_LOG     37474   499677      0
    2 366691                          2 UNKNOWN    RFS      RECEIVING        32632  1073153      0

olap1> 

Next up is configuring Data Guard Broker, and it is not looking pretty.

Oracle Offline Persistence Toolkit - Reacting to Replay Conflict

Andrejus Baranovski - Sat, 2018-09-22 08:01
This is the next post related to the Oracle Offline Persistence Toolkit. Check my previous writing on the same subject - Implementing Handle Patch Method in JET Offline Toolkit. Read more about the toolkit on its GitHub repo.

When the application goes back online, we call the synchronisation method. If at least one of the requests fails, synchronisation is stopped and the error callback is invoked, where we can handle the failure. In the error callback, we check if the failure is related to a conflict; if so, we open a dialog where the user decides what to do (force the client changes or take the server changes). We read the latest change indicator value from the response in the error callback (to apply it if the user decides to force client changes in the next request):


The dialog is simple: it displays dynamic text for the conflicted value and provides the user with a choice of actions:


Let's see how it works.

User A edits the value 'Lex' and saves it to the backend:


User B is offline, editing the same value and saving it in local storage:


We can check it in the log: the changed value was stored in local storage:


When going online, pending requests logged offline will be re-executed. Obviously the above request will fail, because the same value was changed by another user. The conflict will be reported:


PATCH operation fails with conflict code 409:


The user will be asked how to proceed: apply the client changes and override the backend, or, on the contrary, take the changes from the backend and bring them to the client:


I will explain how to implement these actions in my next post. In the meantime you can study the complete application, available in the GitHub repo.

[BLOG] Commonly Asked Questions Oracle GoldenGate 12c

Online Apps DBA - Sat, 2018-09-22 07:31

Visit: https://k21academy.com/goldengate26 and learn about: ✔Types of replication in GoldenGate ✔Difference between classic & integrated extract process ✔What is the datapump in GoldenGate & much more… Leave a comment if you have any question related to Oracle GoldenGate.

The post [BLOG] Commonly Asked Questions Oracle GoldenGate 12c appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

How to run PeopleTools 8.57 in the Cloud

PeopleSoft Technology Blog - Sat, 2018-09-22 01:13

Now that 8.57 is generally available to run in the cloud, I’ll outline the steps to get it up and running so you can take it for a spin. It was announced in an earlier blog that PeopleTools 8.57 will be initially released to run on the Oracle Cloud before it’s available for on-premises installs.  That means you’ll want to use PeopleSoft Cloud Manager 7 to get it up and running.  There’s probably a pretty good chance that this is the first exposure you’ve had to Cloud Manager so it’s worth explaining what you’ll do.

The process is pretty simple.  First, you’ll get an Oracle Cloud Infrastructure (OCI) account.  Then you’ll install Cloud Manager 7 onto that account.  Once installed, Cloud Manager 7 is used to download the 8.57 DPKs into your Cloud Repository.  Then you’ll use Cloud Manager 7 to upgrade a PUM image from 8.56 to 8.57 using the downloaded DPKs.  If you’ve been through a PeopleTools upgrade before, you’ll be amazed how Cloud Manager automates this process.

The steps outlined below assume that you are new to OCI and Cloud Manager.  If you are already familiar with Cloud Manager and have it running in your tenancy, then update your Cloud Manager instance using Interaction Hub Image 07 as described here and go to Step 4.

Step 1: Get your Oracle Cloud trial account.  If you are new to OCI and PeopleSoft Cloud Manager, then please go through all links below to help you get started quickly.

Using the link given above, request a free account that will give you access to all Oracle Cloud services, up to 3,500 hours or USD 300 in free credits (available in select countries and of limited validity). When your request is processed, you will be provisioned a tenancy in Oracle Cloud Infrastructure. Oracle will send you a Welcome email with instructions for signing in to the console for the first time. There is no charge unless you choose to Upgrade to Paid from My Services in the console.

Step 2: Set up your OCI tenancy by creating users, policies and networks.  Refer to OCI documentation here for details or get a quick overview in this blog.  After this step, you will have your tenancy ready to deploy PeopleSoft Cloud Manager. If you are on OCI Classic then you can skip this step.

Step 3: Install and configure PeopleSoft Cloud Manager.  Follow the OCI install guide here. If you are on OCI Classic, follow the install guide here.

As part of this step, you will –

  • Download the Cloud Manager images
  • Upload images to your tenancy
  • Create a custom Microsoft Windows 2012 image
  • Spin up the Cloud Manager instance
  • Run bootstrap to install Cloud Manager
  • Configure Cloud Manager settings
  • Set up file server repository

Step 4: Subscribe to download channels.  The PeopleTools 8.57 download channel is now available under the unsubscribed channel list.  Navigate to Dashboard | Repository | Download Subscriptions.  Click on the Unsubscribed tab. Subscribe to the new Tools_857_Linux channel using the related Actions menu. Also subscribe to any application download channels, for example, HCM_92_Linux. Downloading the new PeopleTools version takes a while. Wait for the status to indicate success.

Step 5:  Provision a demo environment using PUM Fulltier topology.  Create a new environment template to deploy an application that you downloaded in Step 4.  Use the provided PUM Fulltier topology in the template.  Using this newly created template, provision a new environment. If you already have a PUM environment deployed through Cloud Manager, then you can skip this step. 

Step 6: After the environment is provisioned, navigate to Environment Details | Upgrade PeopleTools.  On the right pane, you’ll have an option to select the PeopleTools 8.57 version that you have downloaded.  Select the PeopleTools version that you want to evaluate. Click Upgrade to begin the upgrade process. 

You’ll be able to see the job details and the steps running on the same page.  Click on the ‘In progress’ status link to view the upgrade job status. 

After the upgrade is complete, click the status link for detailed upgrade job status.

PeopleTools upgrade is now complete.  Login to the upgraded environment PIA to evaluate the new features in PeopleTools 8.57.  For more info on PeopleTools 8.57 go to PeopleTools 8.57 Documentation

Connect Power BI to GCP BigQuery using Simba Drivers

Ittichai Chammavanijakul - Fri, 2018-09-21 21:56

Power BI can connect to GCP BigQuery through its provided connector. However, some users have reported encountering the refresh failure seen below. Even though the error message suggests that the quota for API requests per user per minute may be exceeded, some reported that the error still occurs even when a small dataset is being fetched.

In my case, by simply disabling parallel loading of tables (Options and settings > Options > Data Load), I no longer see this issue. However, some still said it did not help.

An alternative option is to use another supported ODBC or JDBC driver from Simba Technologies Inc. which is partnered with Google.

Setup

  • Download the latest 64-bit ODBC driver from here.
  • Install it on the local desktop where Power BI Desktop is installed. We will have to install the same driver on the Power BI Gateway Server if the published report needs to be refreshed on Power BI Service.

Configuration

  • From Control Panel > Administrator > ODBC Data Source Administrator > System DSN, click Configure on the Google BigQuery.
  • Follow the instructions from the screens below.

When connecting in Power BI, choose Get Data > ODBC.

Categories: DBA Blogs

RMAN-03002: ORA-19693: backup piece already included

Michael Dinh - Fri, 2018-09-21 18:36

I have been cursed trying to create a 25TB standby database.

Active duplication using standby as source failed due to bug.

Backup based duplication using standby as source failed due to bug again.

Now performing traditional restore.

Both attempts failed with RMAN-20261: ambiguous backup piece handle

RMAN> list backuppiece '/bkup/ovtdkik0_1_1.bkp';
RMAN> change backuppiece '/bkup/ovtdkik0_1_1.bkp' uncatalog;

What’s in the backup?

RMAN> spool log to /tmp/list.log
RMAN> list backup;
RMAN> exit

There are 2 identical backup pieces, and I don’t know how this could have happened.

$ grep ovtdkik0_1_1 /tmp/list.log
    201792  1   AVAILABLE   /bkup/ovtdkik0_1_1.bkp
    202262  1   AVAILABLE   /bkup/ovtdkik0_1_1.bkp
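
An alternative to grepping the spooled log is to query the controlfile directly; a sketch using the v$backup_piece view to surface handles registered more than once:

SQL> select handle, count(*)
     from v$backup_piece
     group by handle
     having count(*) > 1;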

RMAN> delete backuppiece 202262;

I restarted the restore and it is running again.

PeopleTools 8.57 is Available on the Oracle Cloud

PeopleSoft Technology Blog - Fri, 2018-09-21 15:17

We are pleased to announce that PeopleTools 8.57 is generally available for install and upgrade on the Oracle Cloud.  As we announced earlier, PeopleTools 8.57 will initially be available only on the Oracle Cloud.  We plan to make PeopleTools 8.57 available for on-premises downloads with the 8.57.04 CPU patch in January 2019.  

There are many exciting new features in PeopleTools 8.57 including:

  • The ability for end-users to set up conditions in analytics that if met will notify the user
  • Improvements to the way Related Content and Analytics are displayed
  • Add custom fields to Fluid pages with minimum life-cycle impact
  • More capabilities for end user personalization
  • Improved search that supports multi-facet selections
  • Easier than ever to brand the application with your corporate colors and graphics
  • Fluid page preview in AppDesigner and improved UI properties interface
  • End-to-end command-line support for life-cycle management processes
  • And much more!

You’ll want to get all the details and learn about the new features in 8.57.  A great place to start is the PeopleTools 8.57 Highlights Video posted on the PeopleSoft YouTube channel.  The highlights video gives you an overview of the new features and shows how to use them.

There is plenty more information about the release available today.  Here are some links to some of the other places you can go to learn more about 8.57:

In addition to releasing PeopleTools 8.57, version 7 of PeopleSoft Cloud Manager is also being released today.  CM 7 is similar in functionality to CM 6 with additional support for PeopleTools 8.57.  If you currently use a version of Cloud Manager you must upgrade to version 7 in order to install PT 8.57. 

There are a lot of questions about how to get started using PeopleTools 8.57 and Cloud Manager 7.  Documentation and installation instructions are available on the Cloud Manager Home Page.

More information will be published over the next couple of weeks to help you get started with 8.57 on the cloud.  Additional information will include blogs to help with details of the installation, a video that shows the complete process from creating a free trial account to running PT 8.57, and a detailed Spotlight Video that describes configuring OCI and Cloud Manager 7.

PeopleTools 8.57 is a significant milestone for Oracle, making it easier than ever for customers to use, maintain and run PeopleSoft Applications.

OAC 18.3.3: New Features

Rittman Mead Consulting - Fri, 2018-09-21 07:58

I believe there is a hidden strategy behind Oracle's product release schedule: every time I'm either on holidays or in a business trip full of appointments a new version of Oracle Analytics Cloud is published with a huge set of new features!


OAC 18.3.3 went live last week and contains a big set of enhancements, some of which were already described at Kscope18 during the Sunday Symposium. New features are appearing in almost all the areas covered by OAC, from Data Preparation to the main Data Flows, new Visualization types, new security and configuration options and BIP and Essbase enhancements. Let's have a look at what's there!

Data Preparation

A recurring theme in Europe since last year is GDPR, the General Data Protection Regulation, which aims at protecting the data and privacy of all European citizens. This is very important in our landscape since we "play" with data on a daily basis and should be aware of what data we can use and how.
Luckily for us, OAC now helps to address GDPR with the Data Preparation Recommendations step: every time a dataset is added, each column is profiled and a list of recommended transformations is suggested to the user. Please note that Data Preparation Recommendations only suggests changes to the dataset, so it can't be considered a complete solution for GDPR compliance.
The suggestion may include:

  • Complete or partial obfuscation of the data: useful when dealing with security/user sensitive data
  • Data Enrichment based on the column data can include:
    • Demographical information based on names
    • Geographical information based on locations, zip codes


Each of the suggestions applied to the dataset is stored in a data preparation script that can easily be reapplied if the data is updated.


Data Flows

Data Flows is the "mini-ETL" component within OAC which allows transformations, joins, aggregations, filtering, binning, machine learning model training, and storing of the artifacts either locally or in a database or Essbase cube.
Data Flows, however, had some limitations. The first was that they had to be run manually by the user. With OAC 18.3.3 there is now an option to schedule Data Flows, more or less like we were used to when scheduling Agents back in OBIEE.


Another limitation was that each Data Flow could produce only a single data-set; this has been solved with the introduction of the Branch node, which allows a single Data Flow to produce multiple data-sets, very useful when the same set of source data and transformations needs to be used to produce various data-sets.


Two other new features have been introduced to make data-flows more reusable: Parametrized Sources and Outputs and Incremental Processing.
Parametrized Sources and Outputs allow the data-flow source or target to be selected at runtime, making it possible, for example, to create a specific, different dataset for today's load.


The Incremental Processing, as the name says, is a way to run Data Flows only on top of the data added since the last run (Incremental loads in ETL terms). In order to have a data flow working with incremental loads we need to:

  • Define in the source dataset which is the key column that can be used to indicate new data (e.g. CUSTOMER_KEY or ORDER_DATE) since the last run
  • When including the dataset in a Data Flow enable the execution of the Data Flow with only the new data
  • In the target dataset define if the Incremental Processing replaces existing data or appends data.

Please note that the Incremental Load is available only when using Database Sources.
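
Conceptually (this is only an illustration, not the SQL that OAC actually generates; table and column names are hypothetical), an incremental run extracts just the rows added since the stored watermark:

-- ORDER_DATE plays the role of the "new data indicator" column
SELECT *
FROM   source_orders
WHERE  order_date > :last_processed_date;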

Another important improvement is the Function Shipping when Data Flows are used with Big Data Cloud: If the source datasets are coming from BDC and the results are stored in BDC, all the transformations like joining, adding calculation columns and filtering are shipped to BDC as well, meaning there is no additional load happening on OAC for the Data Flow.

Lastly, there is a new Properties Inspector feature in Data Flows that allows you to check properties like name and description, as well as access and modify the scheduling of the related flow.


Data Replication

It is now possible to use OAC to replicate data from a source system like Oracle's Fusion Apps, Talend or Eloqua directly into Big Data Cloud, Database Cloud or Data Warehouse Cloud. This function is extremely useful since it decouples the queries generated by the analytical tools from the source systems.
As expected, the user can select which objects to replicate, the filters to apply, the destination tables and columns, and the load type, either Full or Incremental.

Project Creation

New visualization capabilities have been added which include:

  • Grid HeatMap
  • Correlation Matrix
  • Discrete Shapes
  • 100% Stacked Bars and Area Charts

In the Map views, Multiple Map Layers can now be added as well as Density and Metric based HeatMaps, all on top of new background maps including Baidu and Google.


Tooltips are now supported in all visualizations, allowing the end user to add measure columns which will be shown when hovering over a section of any graph.


The Explain feature is now available on metrics, not only on attributes, and has been enhanced: a new anomaly detection algorithm identifies anomalies in combinations of columns, working in the background in asynchronous mode, so that anomalies are pushed to the user as soon as they are found.

A new feature that many developers will appreciate is AutoSave: just as we are all used to autosave in Google Docs, the same now applies to OAC, and a project is saved automatically at every change. Of course this feature can be turned off if necessary.
Another very interesting addition is Copy Data to Clipboard: with a right click on any graph, an option to save the underlying data to the clipboard is available. The data can then natively be pasted into Excel.

Did you create a new dataset and want to repoint your existing project to it? Now, with Dataset Replacement, it's just a few clicks away: you only need to select the new dataset and re-map the columns used in your current project!


Data Management

The datasets/dataflows/project methodology is typical of what Gartner defined as Mode 2 analytics: analysis done by a business user without any involvement from IT. The step sometimes missing, or hard to perform, in self-service tools is publishing: once a certain dataset is consistent and ready to be shared, it's rather difficult to open it to a larger audience within the same toolset.
New OAC administrative options address this problem: dataset Certification by an administrator allows a certain dataset to be queried via Ask and DayByDay by other users. There is also a dataset Permissions tab allowing the definition of Full Control, Edit or Read Only access at user or role level. This is the way to bring the self-service dataset back under corporate visibility.


A Search tab allows fine control over the indexing of a certain dataset used by Ask and DayByDay. There are now options to select when the indexing is executed as well as which columns to index and how (by column name and value, or by column name only).


BIP and Essbase

BI Publisher was added to OAC in the previous version; it now includes new features like a tighter integration with the datasets, which can be used as data sources, as well as features like email delivery read-receipt notification, compressed output and password protection that were already available in the on-premises version.
There is also a new set of features for Essbase, including a new UI, REST APIs and, very importantly security-wise, all external communications (like Smart View) now running over HTTPS.
For a detailed list of new features check this link.

Conclusion

OAC 18.3.3 includes an incredible number of new features which enable the whole analytics story: from self-service data discovery to corporate dashboarding and pixel-perfect formatting, all within the same tool and shared security settings. Options like parametrized and incremental Data Flows allow content reusability and enhance overall platform performance by reducing the load on source systems.
If you are looking into OAC and want to know more, don't hesitate to contact us.

Categories: BI & Warehousing

Clob data type error out when crosses the varchar2 limit

Tom Kyte - Fri, 2018-09-21 04:26
The CLOB datatype in a PL/SQL program raises an exception when it crosses the VARCHAR2 limit, giving "ORA-06502: PL/SQL: numeric or value error". Why is the CLOB datatype behaving like VARCHAR2? I think a CLOB can hold up to 4 GB of data. Pl...
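
A common cause is building the CLOB from VARCHAR2 expressions, which are capped at the VARCHAR2 limit before the result is ever assigned to the CLOB. A sketch of the DBMS_LOB alternative (sizes and names illustrative only):

DECLARE
  c   CLOB;
  buf VARCHAR2(32767) := RPAD('x', 32767, 'x');
BEGIN
  DBMS_LOB.createtemporary(c, TRUE);
  FOR i IN 1 .. 10 LOOP
    DBMS_LOB.writeappend(c, LENGTH(buf), buf);  -- grows past the VARCHAR2 cap
  END LOOP;
  DBMS_OUTPUT.put_line('CLOB length: ' || DBMS_LOB.getlength(c));
  DBMS_LOB.freetemporary(c);
END;
/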
Categories: DBA Blogs

Migrating Oracle 10g on Solaris Sparc to Linux RHEL 5 VM

Tom Kyte - Fri, 2018-09-21 04:26
Hi, if I were to rate my Oracle expertise I would give it 3/10. I just started learning Oracle, Solaris and Linux 2 months ago and was given this task to migrate. Yes, our Oracle version is quite old and might not be supported anymore. Both platforms ...
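
Since Solaris SPARC is big-endian and Linux x86 is little-endian, a move like this typically uses transportable tablespaces with an RMAN conversion step; a sketch (tablespace name and staging path are hypothetical, and the tablespace must be read-only during transport):

RMAN> CONVERT TABLESPACE users
      TO PLATFORM 'Linux x86 64-bit'
      FORMAT '/stage/%U';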
Categories: DBA Blogs

"secure" in securefile

Tom Kyte - Fri, 2018-09-21 04:26
Good Afternoon, My question is a simple one. I've wondered why Oracle decided to give the new data type the name "securefile". Is it because we can encrypt it, while before, with BASICFILE, we couldn't encrypt the LOB? Also, why not call it "se...
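
Encryption is indeed one of the SECUREFILE-only capabilities the question alludes to; a sketch of the syntax (requires a TDE wallet; table and column names hypothetical):

SQL> CREATE TABLE docs (
       id  NUMBER,
       doc CLOB
     ) LOB (doc) STORE AS SECUREFILE (ENCRYPT USING 'AES256');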
Categories: DBA Blogs

Pre-allocating table columns for fast customer demands

Tom Kyte - Fri, 2018-09-21 04:26
Hello team, I have come across a strange business requirement that has caused an application team I support to submit a design that is pretty bad. The problem is I have difficulty quantifying this, so I'm hoping you can help me with all the reasons why ...
Categories: DBA Blogs

move system datafiles

Tom Kyte - Fri, 2018-09-21 04:26
Hi Tom, When we install Oracle and create the database by default (not manually) ...the system datafiles are located at a specific location .. Is it possible to move these (system tablespace datafiles) datafiles from the original location to...
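
For the SYSTEM tablespace the move has to happen while the database is mounted but not open; a sketch of the classic approach (paths hypothetical):

SQL> SHUTDOWN IMMEDIATE
-- copy/move the datafile at the OS level, then:
SQL> STARTUP MOUNT
SQL> ALTER DATABASE RENAME FILE '/u01/oradata/ORCL/system01.dbf'
                            TO '/u02/oradata/ORCL/system01.dbf';
SQL> ALTER DATABASE OPEN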
Categories: DBA Blogs
