SQL> create pluggable database new_pdb from
remote_pdb@clone_link keystore identified by my_keystore_password;
Oracle Installation guides, Linux Administration tips for DBAs, Performance Tuning tips, Disaster Recovery, RMAN, Dataguard and ORA errors solutions.
No content from my website may be published anywhere else without my permission. Test every solution before implementing it in a production environment.
Sessions receive this error while connecting to an instance that is running in restricted mode. DBAs may enable restricted sessions to perform maintenance tasks or in other situations where it is needed. We can query (g)v$instance to find out whether an instance has restricted mode enabled.
SQL> select logins from gv$instance;
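As a minimal sketch, the check and the fix can look like the following; lifting the restriction requires a user with the ALTER SYSTEM privilege:

```sql
-- Check whether logins are restricted on each instance
SQL> select inst_id, instance_name, logins from gv$instance;

-- If LOGINS shows RESTRICTED, a privileged user can lift the restriction:
SQL> alter system disable restricted session;
```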
This message clearly tells that the standby database you are trying to switch over to is not fully in sync with the primary database. To solve this problem, make sure that all archived redo logs and redo data have been fully applied to the standby database. Sometimes it may appear that the standby database is fully in sync when in fact it is not. If you have not copied the password file from the primary to the standby after a SYS password change on the primary database, you may still face this error message during switchover. Always make sure the password files are in sync on all standby sites.
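One way to verify that the standby really is in sync is to check the lag statistics on the standby itself (a sketch; the acceptable lag values depend on your Data Guard configuration):

```sql
-- On the standby: both lags should be at or near +00 00:00:00
SQL> select name, value
     from   v$dataguard_stats
     where  name in ('transport lag', 'apply lag');
```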
If the standby database is not in sync with the primary database, there could be several reasons for this. After you have identified the reason and resolved it, log shipping may still not be functional from the primary to the standby. As a last resort, you may want to kill the currently running archiver processes on the standby database so that the ARC processes are spawned again and can connect to the standby database. In the following I will explain how to kill the ARC processes on Unix-based as well as Windows-based systems.
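On a Unix-based system, the steps above can be sketched as follows; the `ora_arc` process-name pattern is the usual Oracle convention, `<pid>` is a placeholder for the process IDs you find, and killed ARC processes are respawned automatically by PMON:

```
# Find the archiver background processes of the instance
ps -ef | grep -i 'ora_arc' | grep -v grep

# Kill them; new ARC processes are spawned shortly afterwards
kill -9 <pid>
```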
Every now and then DBAs need to investigate memory and CPU consumption on database servers. To find out the CPU and memory consumption, "top" is the most commonly used command in Linux environments. "free" is another command to display current memory consumption. The most important part of the investigation is sorting the output of "top" to find the top consumers of memory or CPU. In this article I will explain the easiest and simplest way to display the top consumers in descending order.
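As a sketch, `ps` and `free` can produce the same descending view non-interactively; the flags assume the procps versions common on Linux distributions:

```shell
# Top 10 processes by memory consumption, descending
ps aux --sort=-%mem | head -n 11

# Top 10 processes by CPU consumption, descending
ps aux --sort=-%cpu | head -n 11

# Current memory consumption in human-readable units
free -h
```

Inside interactive `top`, the same ordering can typically be toggled with Shift+M (memory) and Shift+P (CPU).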
If you face ORA-01105 while starting an Oracle instance in a RAC environment, it means that there is a parameter in the init file (or spfile) that is required to have an identical value across all RAC instances, but the parameter is set differently in the init file of the current instance that you are trying to start.
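To find the mismatch, one option is to compare the suspect parameter across the instances that are already up (a sketch; `cluster_database` is just an example of a parameter that must be identical on all instances):

```sql
-- Run from a surviving instance; values should match across inst_id
SQL> select inst_id, name, value
     from   gv$parameter
     where  name = 'cluster_database';
```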
The alert log would report an error similar to the one specified above. A snippet from the alert log could look like the following.
Previously I wrote articles about the Oracle snapshot standby database and also about how to restore a failed-over physical standby database back to a physical standby database. In this article I will explain how to enable and disable Oracle database flashback and how to create restore points, especially guaranteed restore points, which are very handy in case we need to perform a point-in-time recovery to recover from a user error. The database must be running in archivelog mode before the flashback database feature can be enabled.
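The basic sequence can be sketched as follows; the restore point name `before_change` is a hypothetical example:

```sql
-- Requires archivelog mode; flashback can be enabled while the database is open (11g+)
SQL> alter database flashback on;

-- Create a guaranteed restore point
SQL> create restore point before_change guarantee flashback database;

-- Later, to rewind the database to that point:
SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to restore point before_change;
SQL> alter database open resetlogs;

-- Drop the restore point once it is no longer needed, to release flashback logs
SQL> drop restore point before_change;
```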
Since the advent of the Oracle Scheduler, there have been maintenance windows defined with default settings to run automatic maintenance jobs such as the auto gather stats job or the auto space advisor job. There are scenarios whereby DBAs need to change the maintenance window settings if the windows span peak hours. By default, maintenance jobs run during nights and weekends, as these are the off-peak hours in most cases. However, this may not be the case for every production database. In this article I will explain how to simply change any maintenance window's start time and/or duration. For 12c and above, this setting is defined individually in the root container and in each PDB.
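As a sketch, DBMS_SCHEDULER.SET_ATTRIBUTE changes a window's start time and duration; the window name, 02:00 start hour, and 4-hour duration below are example values:

```sql
BEGIN
  -- A window should be disabled before its attributes are changed
  DBMS_SCHEDULER.DISABLE(name => 'SYS.MONDAY_WINDOW');

  -- Move the start time to 02:00
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'SYS.MONDAY_WINDOW',
    attribute => 'REPEAT_INTERVAL',
    value     => 'FREQ=WEEKLY;BYDAY=MON;BYHOUR=2;BYMINUTE=0;BYSECOND=0');

  -- Shorten the window to 4 hours
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'SYS.MONDAY_WINDOW',
    attribute => 'DURATION',
    value     => NUMTODSINTERVAL(4, 'HOUR'));

  DBMS_SCHEDULER.ENABLE(name => 'SYS.MONDAY_WINDOW');
END;
/
```

On 12c and above, run this in the root container and in each PDB whose windows you want to change.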
If you are trying to unset the log_archive_dest_n parameter and you face error ORA-16028, it means that you need to keep at least one (or as many as set in LOG_ARCHIVE_MIN_SUCCEED_DEST) log archive destinations enabled. I noticed this in one of my RAC databases, which had 2 archive destinations set: log_archive_dest_1 and log_archive_dest_3.
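A sketch of the check and a common workaround, deferring the destination instead of unsetting it outright (destination 3 here is an example):

```sql
-- How many destinations must succeed for each log switch
SQL> show parameter log_archive_min_succeed_dest

-- Defer the destination rather than unsetting it
SQL> alter system set log_archive_dest_state_3 = 'DEFER' scope=both sid='*';
```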
MOS article 428681.1 should be your starting point to add a voting disk in your RAC, based on the version of RAC you are using and where you are storing your voting disks. In an Oracle RAC environment, it is recommended to store the voting disk on a diskgroup with normal or high redundancy and to keep 3 copies of the voting disk. During installation you can select the number of voting disks you want to create, or you can add extra copies later. If you have 4 physical disks in an ASM diskgroup with normal redundancy, you can choose to have 4 copies of the voting disk, and Oracle would create one voting disk on each physical disk of the diskgroup. In case of one or two disk failures, you would still have 2 voting disk copies to start the RAC. There is also a requirement of having at least 2 voting disks available in order to start your RAC; otherwise, RAC resources would not start up.
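For 11gR2 and later with voting files stored in ASM, the relevant commands can be sketched as follows; the diskgroup name `+DATA` is an assumption:

```
# List the current voting files and their locations
crsctl query css votedisk

# Relocate the voting files to another diskgroup; the number of
# copies then follows the diskgroup's redundancy level
crsctl replace votedisk +DATA
```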
If this error appears while opening your database, it means that an instance crash happened before the database start and instance recovery needs redo data to be applied to the database, but the redo data is not available in the redo log file. The reason for the missing data is that redo was not written to the log file when the database instance crashed. Flushing of data to the redo logs was probably stopped by setting the parameter "_disable_logging" to TRUE to speed up data loading. The alert log file would show entries similar to the following.
Starting with 18c, the pga_aggregate_limit setting has also been made dependent on the PROCESSES initialization parameter. You might face ORA-00093 during instance startup if the value of pga_aggregate_limit is below the required value. For my 19c instance, I faced the same issue, as follows.
SQL> startup nomount
ORA-00093: pga_aggregate_limit must be between 45000M and 100000G
ORA-01078: failure in processing system parameters
SQL>
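Since the instance refuses to start with the offending spfile, one workaround is to edit a pfile copy and rebuild the spfile from it (a sketch; the /tmp path is hypothetical, and the 45000M lower bound comes from the example error above and will differ per system):

```sql
SQL> create pfile='/tmp/initfix.ora' from spfile;
-- Edit /tmp/initfix.ora: raise pga_aggregate_limit to at least the value
-- reported by ORA-00093, or remove the line to fall back to the default
SQL> create spfile from pfile='/tmp/initfix.ora';
SQL> startup
```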
Before reading this article, you might want to read this article about poor buffer cache hit ratio caused by full table scans. In this article I will explain a feature introduced after Oracle 10g to avoid flooding of the database buffer cache by full table scans. The database block is the unit of I/O in Oracle, which means that whenever some data is read from disk, a copy of the data block is placed in the buffer cache so that the data can be reused later, avoiding a read of the same block from disk in the future. That cached copy is then used for the user's SQL.
Sometimes DBAs need to fulfill requirements from development teams or the customer and provide different kinds of information regarding the database. One of them is to find the size of the tables in a schema and the number of rows in those tables. This information may be required for future planning or capacity sizing, or it could also be used while investigating a performance issue. There could be different ways of fetching this information; one that I find quite simple is explained here.
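One simple way is to join dba_segments with dba_tables (a sketch; the schema name SCOTT is a placeholder, and num_rows reflects the last statistics gathering, so it is approximate unless stats are fresh):

```sql
SQL> select t.table_name,
            round(s.bytes / 1024 / 1024, 2) as size_mb,
            t.num_rows
     from   dba_tables t
            join dba_segments s
              on  s.owner        = t.owner
              and s.segment_name = t.table_name
     where  t.owner = 'SCOTT'
     order  by s.bytes desc;
```

For exact row counts, a `select count(*)` per table is needed instead of num_rows.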
Normally DBAs do not feel much bothered by a poor buffer cache hit ratio, especially in data warehouse environments where full table scans may be quite frequent and acceptable. However, if you have an OLTP environment, I still believe that the buffer cache hit ratio should be taken care of and DBAs should try to keep it as high as they can. In OLTP environments, full table scans would cause the cache hit ratio to be low, because frequent full table scans of different tables would cause entire tables to be read into the buffer cache even if only a few blocks are needed for processing.
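The ratio itself can be computed from v$sysstat using the classic formula, hit ratio = 1 - physical reads / (db block gets + consistent gets); a sketch:

```sql
SQL> select round((1 - phy.value / (db.value + cons.value)) * 100, 2)
            as "Buffer Cache Hit Ratio %"
     from   v$sysstat phy, v$sysstat db, v$sysstat cons
     where  phy.name  = 'physical reads'
       and  db.name   = 'db block gets'
       and  cons.name = 'consistent gets';
```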
If this error message is appearing in the alert log file, most of the time it is related to log_archive_dest_1, used for the standby database's local archive generation, and/or log_archive_dest_2, which points to the primary database. You should check that every parameter is set properly and that the TNS string used to point to the primary database is correct. For further information, you can see this article that explains how to properly configure Data Guard.
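A quick first check is the status and last error of the destinations themselves (a sketch; dest_id 1 and 2 follow the setup described above):

```sql
SQL> select dest_id, status, error
     from   v$archive_dest
     where  dest_id in (1, 2);
```

STATUS should be VALID and ERROR empty; anything else points at the misconfigured destination.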
The PX Deq: Signal ACK RSG wait event does not have much information available, even on the My Oracle Support portal; I was not able to find much about it. I noticed this wait event at the top of the wait events in my AWR report, as can be seen below, which made me do some investigation. First and foremost, this wait event is related to parallel execution of SQLs.
SQL> drop diskgroup data;