Pass4sure.1z0-060.300.QA
Passing Score: 800
Time Limit: 120 min
File Version: 14.7
1z0-060: Upgrade to Oracle Database 12c
6. I passed the exam and got 716/1000. I studied only this dump; I did not use any other dumps or training material.
Guys!!! That's the best training material ever.
The most relevant and essential questions are now summed up in this vce file.
Best training material ever; it really helps you maximize your exam preparation.
Perfectly valid in the US, UK, Australia, India, and the Emirates. All my friends in the group have these same questions.
A big success is waiting for you :) Just study it.
Exam A
QUESTION 1
Your multitenant container database (CDB) contains three pluggable databases (PDBs). You find that the control file is damaged. You plan to use RMAN to recover the control file. There are no startup triggers associated with the PDBs.
Which three steps should you perform to recover the control file and make the database fully operational?
A. Mount the container database (CDB) and restore the control file from the control file auto backup.
B. Recover and open the CDB in NORMAL mode.
C. Mount the CDB and then recover and open the database, with the RESETLOGS option.
D. Open all the pluggable databases.
E. Recover each pluggable database.
F. Start the database instance in the nomount stage and restore the control file from control file auto backup.
Correct Answer: CDF
Step 1: F
Step 2: C
Step 3: D
C: If all copies of the current control file are lost or damaged, then you must restore and mount a backup control file. You must then run the RECOVER command, even if no data files have been restored, and open the database with the RESETLOGS option.
* RMAN and Oracle Enterprise Manager Cloud Control (Cloud Control) provide full support for backup and recovery in a multitenant environment. You
can back up and recover a whole multitenant container database (CDB), root only, or one or more pluggable databases (PDBs).
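The full sequence, in execution order, can be sketched as an RMAN session. This is a sketch only, not the exam's literal command listing: it assumes a control file autobackup exists and the instance parameter file is intact.

```sql
-- Hypothetical RMAN session (a sketch, assuming an autobackup exists):
STARTUP NOMOUNT;                          -- F: start the instance without a control file
RESTORE CONTROLFILE FROM AUTOBACKUP;      -- F: restore the control file from autobackup
ALTER DATABASE MOUNT;                     -- C: mount using the restored control file
RECOVER DATABASE;                         -- C: apply redo, even if no data files were restored
ALTER DATABASE OPEN RESETLOGS;            -- C: RESETLOGS is mandatory after a backup control file
SQL 'ALTER PLUGGABLE DATABASE ALL OPEN';  -- D: open the PDBs (no startup triggers exist)
```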
QUESTION 2
A new report process containing a complex query is written, with high impact on the database. You want to collect basic statistics about the query, such as the level of parallelism, total database time, and the number of I/O requests.
For the database instance, the STATISTICS_LEVEL initialization parameter is set to TYPICAL and the CONTROL_MANAGEMENT_PACK_ACCESS parameter is set to
What should you do to accomplish this task?
A. Execute the query and view Active Session History (ASH) for information about the query.
B. Enable SQL trace for the query.
C. Create a database operation, execute the query, and use the DBMS_SQL_MONITOR.REPORT_SQL_MONITOR function to view the report.
D. Use the DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS procedure to monitor query execution and view the information from the V$SESSION_LONGOPS view.
Correct Answer: C
The REPORT_SQL_MONITOR function is used to return a SQL monitoring report for a specific SQL statement.
Not A: Not interested in session statistics, only in statistics for the particular SQL query.
Not B: We are interested in statistics, not tracing.
Not D: SET_SESSION_LONGOPS Procedure
This procedure sets a row in the V$SESSION_LONGOPS view. This is a view that is used to indicate the on-going progress of a long running operation.
Some Oracle functions, such as parallel execution and Server Managed Recovery, use rows in this view to indicate the status of, for example, a
Applications may use the SET_SESSION_LONGOPS procedure to advertise information on the progress of application specific long running tasks so
that the progress can be monitored by way of the V$SESSION_LONGOPS view.
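Answer C can be sketched as a minimal PL/SQL block, assuming the Tuning Pack is licensed; the operation name daily_report and the commented-out query are hypothetical, not part of the original question.

```sql
DECLARE
  l_dbop_eid NUMBER;
  l_report   CLOB;
BEGIN
  -- Bracket the workload as a named database operation (DBOP)
  l_dbop_eid := DBMS_SQL_MONITOR.BEGIN_OPERATION(dbop_name => 'daily_report');

  -- ... execute the complex report query here ...

  DBMS_SQL_MONITOR.END_OPERATION(dbop_name => 'daily_report',
                                 dbop_eid  => l_dbop_eid);

  -- Fetch the monitoring report for the whole operation
  l_report := DBMS_SQL_MONITOR.REPORT_SQL_MONITOR(dbop_name => 'daily_report',
                                                  type      => 'TEXT');
  DBMS_OUTPUT.PUT_LINE(l_report);
END;
/
```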
QUESTION 3
Identify two valid options for adding a pluggable database (PDB) to an existing multitenant container database (CDB).
A. Use the CREATE PLUGGABLE DATABASE statement to create a PDB using the files from the SEED.
B. Use the CREATE DATABASE . . . ENABLE PLUGGABLE DATABASE statement to provision a PDB by copying files from the SEED.
C. Use the DBMS_PDB package to clone an existing PDB.
D. Use the DBMS_PDB package to plug an Oracle 12c non-CDB database into an existing CDB.
E. Use the DBMS_PDB package to plug an Oracle 11g Release 2 (11.2.0.3.0) non-CDB database into an existing CDB.
Correct Answer: AD
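The two correct options can be sketched as follows; all names, passwords, and paths here are hypothetical placeholders, not values from the exam.

```sql
-- Option A: create a PDB from the seed (names and paths are hypothetical)
CREATE PLUGGABLE DATABASE pdb2
  ADMIN USER pdb2admin IDENTIFIED BY welcome1
  FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/', '/u01/oradata/cdb1/pdb2/');

-- Option D: plug in a 12c non-CDB.
-- First, in the non-CDB (opened read-only), generate its XML manifest:
EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/noncdb.xml');
-- Then, in the CDB, plug it in:
CREATE PLUGGABLE DATABASE pdb3 USING '/tmp/noncdb.xml' COPY;
-- (after opening pdb3, the noncdb_to_pdb.sql conversion script must be run)
```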
QUESTION 4
Your database supports a DSS workload that involves the execution of complex queries.
Currently, the library cache contains the ideal workload for analysis. You want to analyze some of the queries for an application that are cached in the library cache.
What must you do to receive recommendations about the efficient use of indexes and materialized views to improve query performance?
A. Create a SQL Tuning Set (STS) that contains the queries cached in the library cache and run the SQL Tuning Advisor (STA) on the workload captured in the STS.
B. Run the Automatic Database Diagnostic Monitor (ADDM).
C. Create an STS that contains the queries cached in the library cache and run the SQL Performance Analyzer (SPA) on the workload captured in the STS.
D. Create an STS that contains the queries cached in the library cache and run the SQL Access Advisor on the workload captured in the STS.
Correct Answer: D
* SQL Access Advisor is primarily responsible for making schema modification recommendations, such as adding or dropping indexes and materialized
views. SQL Tuning Advisor makes other types of recommendations, such as creating SQL profiles and restructuring SQL statements.
* The query optimizer can also help you tune SQL statements. By using SQL Tuning Advisor and SQL Access Advisor, you can invoke the query
optimizer in advisory mode to examine a SQL statement or set of statements and determine how to improve their efficiency. SQL Tuning Advisor and
SQL Access Advisor can make various recommendations, such as creating SQL profiles, restructuring SQL statements, creating additional indexes or
materialized views, and refreshing optimizer statistics.
* Decision support system (DSS) workload
* The library cache is a shared pool memory structure that stores executable SQL and PL/SQL code. This cache contains the shared SQL and PL/SQL
areas and control structures such as locks and library cache handles.
Reference: Tuning SQL Statements
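The STS-plus-SQL-Access-Advisor flow of answer D can be sketched as below. The STS name, task name, and the parsing-schema filter APP are assumptions for illustration only.

```sql
DECLARE
  l_cur     DBMS_SQLTUNE.SQLSET_CURSOR;
  l_task_id NUMBER;
  l_task    VARCHAR2(30) := 'dss_access_task';
BEGIN
  -- 1. Capture the cached application queries into an STS
  DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'dss_sts');
  OPEN l_cur FOR
    SELECT VALUE(p)
    FROM   TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE(
                   basic_filter => q'[parsing_schema_name = 'APP']')) p;
  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => 'dss_sts', populate_cursor => l_cur);

  -- 2. Run SQL Access Advisor against the STS
  DBMS_ADVISOR.CREATE_TASK('SQL Access Advisor', l_task_id, l_task);
  DBMS_ADVISOR.ADD_STS_REF(l_task, USER, 'dss_sts');
  DBMS_ADVISOR.EXECUTE_TASK(l_task);
END;
/
-- Review the index / materialized view recommendations
SELECT DBMS_ADVISOR.GET_TASK_SCRIPT('dss_access_task') FROM dual;
```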
QUESTION 5
The following parameters are set for your Oracle 12c database instance:
You want to manage the SQL plan evolution task manually. Examine the following steps:
1. Set the evolve task parameters.
2. Create the evolve task by using the DBMS_SPM.CREATE_EVOLVE_TASK function.
3. Implement the recommendations in the task by using the DBMS_SPM.IMPLEMENT_EVOLVE_TASK function.
4. Execute the evolve task by using the DBMS_SPM.EXECUTE_EVOLVE_TASK function.
5. Report the task outcome by using the DBMS_SPM.REPORT_EVOLVE_TASK function.
Identify the correct sequence of steps:
A. 2, 4, 5
B. 2, 1, 4, 3, 5
C. 1, 2, 3, 4, 5
D. 1, 2, 4, 5
Correct Answer: B
* Evolving SQL Plan Baselines
2. Create the evolve task by using the DBMS_SPM.CREATE_EVOLVE_TASK function. This function creates an advisor task to prepare the plan
evolution of one or more plans for a specified SQL statement. The input parameters can be a SQL handle, plan name or a list of plan names, time limit,
task name, and description.
1. Set the evolve task parameters by using the DBMS_SPM.SET_EVOLVE_TASK_PARAMETER function.
This function updates the value of an evolve task parameter. In this release, the only valid parameter is TIME_LIMIT.
4. Execute the evolve task by using the DBMS_SPM.EXECUTE_EVOLVE_TASK function. This function executes an evolution task. The input parameters can be the task name, execution name, and execution description. If not specified, the advisor generates the name, which is returned by the function.
3. Implement the recommendations in the task by using the DBMS_SPM.IMPLEMENT_EVOLVE_TASK function. This function implements all recommendations for an evolve task. Essentially, this function is equivalent to using ACCEPT_SQL_PLAN_BASELINE for all recommended plans. Input parameters include task name, plan name, owner name, and execution name.
5. Report the task outcome by using the DBMS_SPM.REPORT_EVOLVE_TASK function. This function displays the results of an evolve task as a CLOB. Input parameters include the task name and section of the report to include.
Reference: Oracle Database SQL Tuning Guide 12c, Managing SQL Plan Baselines
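The 2 → 1 → 4 → 3 → 5 sequence can be sketched in a single PL/SQL block; the SQL handle and the 300-second time limit are hypothetical values.

```sql
DECLARE
  l_task   VARCHAR2(128);
  l_exec   VARCHAR2(128);
  l_plans  NUMBER;
  l_report CLOB;
BEGIN
  -- 2. Create the evolve task (the SQL handle is hypothetical)
  l_task := DBMS_SPM.CREATE_EVOLVE_TASK(sql_handle => 'SQL_123456789abcdef0');
  -- 1. Set the evolve task parameters
  DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(task_name => l_task,
                                     parameter => 'TIME_LIMIT',
                                     value     => 300);
  -- 4. Execute the evolve task
  l_exec := DBMS_SPM.EXECUTE_EVOLVE_TASK(task_name => l_task);
  -- 3. Implement the recommendations
  l_plans := DBMS_SPM.IMPLEMENT_EVOLVE_TASK(task_name => l_task);
  -- 5. Report the task outcome
  l_report := DBMS_SPM.REPORT_EVOLVE_TASK(task_name => l_task);
  DBMS_OUTPUT.PUT_LINE(l_report);
END;
/
```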
QUESTION 6
In a recent Automatic Workload Repository (AWR) report for your database, you notice a high number of buffer busy waits. The database consists of locally managed tablespaces with freelist-managed segments.
On further investigation, you find that the buffer busy waits are caused by contention on data blocks.
Which option would you consider first to decrease the wait event immediately?
A. Decreasing PCTUSED
B. Increasing the number of DBWn processes
C. Using Automatic Segment Space Management (ASSM)
D. Increasing db_buffer_cache based on the V$DB_CACHE_ADVICE recommendation
Correct Answer: C
* Automatic segment space management (ASSM) is a simpler and more efficient way of managing space within a segment. It completely eliminates any need to specify and tune the PCTUSED, FREELISTS, and FREELIST GROUPS storage parameters for schema objects created in the tablespace. If any of these attributes are specified, they are ignored.
* Oracle introduced Automatic Segment Space Management (ASSM) as a replacement for traditional freelist management, which used one-way linked lists to manage free blocks within tables and indexes. ASSM is commonly called "bitmap freelists" because that is how Oracle implements the internal data structures for free block management.
* Buffer busy waits are most commonly associated with segment header contention inside the data buffer pool (db_cache_size, etc.).
* The most common remedies for high buffer busy waits include database writer (DBWR) contention tuning and adding freelists (or using ASSM).
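Since ASSM is a tablespace-level attribute, switching a contended segment to ASSM means moving it into an ASSM tablespace. A minimal sketch, with hypothetical names and paths:

```sql
-- Hypothetical names and paths; SEGMENT SPACE MANAGEMENT AUTO enables ASSM.
CREATE TABLESPACE assm_ts
  DATAFILE '/u01/oradata/assm_ts01.dbf' SIZE 1G
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

-- Move the contended segment into the ASSM tablespace
ALTER TABLE app.orders MOVE TABLESPACE assm_ts;

-- A move invalidates indexes, so they must be rebuilt afterwards:
ALTER INDEX app.orders_pk REBUILD;
```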
QUESTION 7
Examine this command:
SQL> EXEC DBMS_STATS.SET_TABLE_PREFS('SH', 'CUSTOMERS', 'PUBLISH', 'false');
Which three statements are true about the effect of this command?
A. Statistics collection is not done for the CUSTOMERS table when schema stats are gathered.
B. Statistics collection is not done for the CUSTOMERS table when database stats are gathered.
C. Any existing statistics for the CUSTOMERS table are still available to the optimizer at parse time.
D. Statistics gathered on the CUSTOMERS table when schema stats are gathered are stored as pending statistics.
E. Statistics gathered on the CUSTOMERS table when database stats are gathered are stored as pending statistics.
Correct Answer: CDE
* SET_TABLE_PREFS Procedure
This procedure is used to set the statistics preferences of the specified table in the specified schema.
Using Pending Statistics
Assume many modifications have been made to the employees table since the last time statistics were gathered. To ensure that the cost-based
optimizer is still picking the best plan, statistics should be gathered once again; however, the user is concerned that new statistics will cause the
optimizer to choose bad plans when the current ones are acceptable. The user can do the following:
EXEC DBMS_STATS.SET_TABLE_PREFS('hr', 'employees', 'PUBLISH', 'false');
By setting the employees table's PUBLISH preference to FALSE, any statistics gathered from now on will not be automatically published. The newly gathered statistics will be marked as pending.
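The full pending-statistics workflow around this command can be sketched as follows, using the SH.CUSTOMERS table from the question:

```sql
-- Keep newly gathered stats as pending rather than publishing them
EXEC DBMS_STATS.SET_TABLE_PREFS('SH', 'CUSTOMERS', 'PUBLISH', 'false');

-- Gathered stats are now stored as pending; existing published stats stay in use
EXEC DBMS_STATS.GATHER_TABLE_STATS('SH', 'CUSTOMERS');

-- Test the pending statistics in the current session only
ALTER SESSION SET optimizer_use_pending_statistics = TRUE;

-- Publish them once the plans look good, or discard them
EXEC DBMS_STATS.PUBLISH_PENDING_STATS('SH', 'CUSTOMERS');
-- EXEC DBMS_STATS.DELETE_PENDING_STATS('SH', 'CUSTOMERS');
```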
QUESTION 8
Examine the following impdp command to import a database over the network from a pre-12c Oracle database (source):
Which three are prerequisites for successful execution of the command?
A. The import operation must be performed by a user on the target database with the DATAPUMP_IMP_FULL_DATABASE role, and the database link
must connect to a user on the source database with the DATAPUMP_EXP_FULL_DATABASE role.
B. All the user-defined tablespaces must be in read-only mode on the source database.
C. The export dump file must be created before starting the import on the target database.
D. The source and target database must be running on the same platform with the same endianness.
E. The path of data files on the target database must be the same as that on the source database.
F. The impdp operation must be performed by the same user that performed the expdp operation.
Correct Answer: ABD
In this case, impdp is run without performing any conversion; if the endian formats differ, a conversion must be performed first.
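The original command is not shown in this preview. A representative full transportable network import looks roughly like the following; the connect string, database link, and data file path are hypothetical placeholders.

```shell
impdp system@target_pdb FULL=y \
  NETWORK_LINK=source_db_link \
  TRANSPORTABLE=always VERSION=12 \
  TRANSPORT_DATAFILES='/u01/oradata/users01.dbf' \
  LOGFILE=import.log
```

VERSION=12 is what allows a full transportable job against an 11g Release 2 source; the user-defined tablespaces on the source must be read-only while the data files are copied.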
QUESTION 9
You notice a high number of waits for the db file scattered read and db file sequential read events in the recent Automatic Database Diagnostic Monitor (ADDM) report. After further investigation, you find that queries are performing too many full table scans and indexes are not being used even though the filter columns are indexed.
Identify three possible reasons for this.
A. Missing or stale histogram statistics
B. Undersized shared pool
C. High clustering factor for the indexes
D. High value for the DB_FILE_MULTIBLOCK_READ_COUNT parameter
E. Oversized buffer cache
Correct Answer: ACD
D: DB_FILE_MULTIBLOCK_READ_COUNT is one of the parameters you can use to minimize I/O during table scans. It specifies the maximum number
of blocks read in one I/O operation during a sequential scan. The total number of I/Os needed to perform a full table scan depends on such factors as
the size of the table, the multiblock read count, and whether parallel execution is being utilized for the operation.
QUESTION 10
Which three features work together to allow a SQL statement to have different cursors for the same statement based on different selectivity ranges?
A. Bind Variable Peeking
B. SQL Plan Baselines
C. Adaptive Cursor Sharing
D. Bind variables used in a SQL statement
E. Literals in a SQL statement
Correct Answer: ACD
A: In bind variable peeking (also known as bind peeking), the optimizer looks at the value in a bind variable when the database performs a hard parse of a statement.
When a query uses literals, the optimizer can use the literal values to find the best plan. However, when a query uses bind variables, the optimizer must select the best plan without the presence of literals in the SQL text. This task can be extremely difficult. By peeking at bind values, the optimizer can determine the selectivity of a WHERE clause condition as if literals had been used, thereby improving the plan.
C: Oracle 11g/12c uses Adaptive Cursor Sharing to solve this problem by allowing the server to compare the effectiveness of execution plans between
executions with different bind variable values. If it notices suboptimal plans, it allows certain bind variable values, or ranges of values, to use alternate
execution plans for the same statement. This functionality requires no additional configuration.
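Adaptive cursor sharing can be observed directly in V$SQL; the sketch below assumes the sample SH schema and uses a hypothetical comment marker to find the cursor.

```sql
-- Run a statement with a bind variable, then inspect the cursor's
-- bind-sensitivity flags (schema and bind value are illustrative).
VARIABLE cust_id NUMBER
EXEC :cust_id := 42;

SELECT /* acs_demo */ COUNT(*) FROM sh.sales WHERE cust_id = :cust_id;

SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /* acs_demo */%';
```

After executions with bind values of very different selectivity, additional child cursors appear with IS_BIND_AWARE = 'Y'.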
QUESTION 11
You notice a performance change in your production Oracle 12c database. You want to know which change caused this performance difference.
Which method or feature should you use?
A. Compare Period ADDM report
B. AWR Compare Period report
C. Active Session History (ASH) report
D. Taking a new snapshot and comparing it with a preserved snapshot
Correct Answer: A
QUESTION 12
You want to capture column group usage and gather extended statistics for better cardinality estimates for the CUSTOMERS table in the SH schema.
Examine the following steps:
1. Issue the SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SH', 'CUSTOMERS') FROM dual statement.
2. Execute the DBMS_STATS.SEED_COL_USAGE(null, 'SH', 500) procedure.
3. Execute the required queries on the CUSTOMERS table.
4. Issue the SELECT DBMS_STATS.REPORT_COL_USAGE('SH', 'CUSTOMERS') FROM dual statement.
Identify the correct sequence of steps.
A. 3, 2, 1, 4
B. 2, 3, 4, 1
C. 4, 1, 3, 2
D. 3, 2, 4, 1
Correct Answer: B
Step 1 (2). Seed column usage
Oracle must observe a representative workload, in order to determine the appropriate column groups. Using the new procedure
DBMS_STATS.SEED_COL_USAGE, you tell Oracle how long it should observe the workload.
Step 2: (3) You don't need to execute all of the queries in your work during this window. You can simply run explain plan for some of your longer running
queries to ensure column group information is recorded for these queries.
Step 3. (1) Create the column groups
At this point you can get Oracle to automatically create the column groups for each of the tables based on the usage information captured during the
monitoring window. You simply have to call the DBMS_STATS.CREATE_EXTENDED_STATS function for each table.This function requires just two
arguments, the schema name and the table name. From then on, statistics will be maintained for each column group whenever statistics are gathered
on the table.
* DBMS_STATS.REPORT_COL_USAGE reports column usage information and records all the SQL operations the database has processed for a given object.
* The Oracle SQL optimizer has always been ignorant of the implied relationships between data columns within the same table. While the optimizer has traditionally analyzed the distribution of values within a column, it does not collect value-based relationships between columns.
* Creating extended statistics
Here are the steps to create extended statistics for related table columns with DBMS_STATS.CREATE_EXTENDED_STATS:
1 - The first step is to create column histograms for the related columns.
2 - Next, we run DBMS_STATS.CREATE_EXTENDED_STATS to relate the columns together.
Unlike a traditional procedure that is invoked via an execute (exec) statement, Oracle extended statistics are created via a SELECT statement.
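The 2 → 3 → 4 → 1 sequence from the question can be sketched end to end; the filter values in the EXPLAIN PLAN are illustrative, taken to resemble the SH sample data.

```sql
-- 2. Seed column usage monitoring for 500 seconds (as in the question)
EXEC DBMS_STATS.SEED_COL_USAGE(NULL, 'SH', 500);

-- 3. Run (or explain) the representative workload so usage is recorded
EXPLAIN PLAN FOR
  SELECT * FROM sh.customers
  WHERE  cust_state_province = 'CA' AND country_id = 52790;

-- 4. Review the captured column usage
SELECT DBMS_STATS.REPORT_COL_USAGE('SH', 'CUSTOMERS') FROM dual;

-- 1. Create the column groups from the recorded usage
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SH', 'CUSTOMERS') FROM dual;
```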
QUESTION 13
Which three statements are true about Automatic Workload Repository (AWR)?
A. All AWR tables belong to the SYSTEM schema.
B. The AWR data is stored in memory and in the database.
C. The snapshots collected by AWR are used by the self-tuning components in the database.
D. AWR computes time model statistics based on time usage for activities, which are displayed in the V$SYS_TIME_MODEL and V$SESS_TIME_MODEL views.
E. AWR contain...