Basic concepts: mutex, latch, lock, and the enqueue hash chains latch

Source: 这里教程网 | Date: 2026-03-03 14:46:59

latch: A low-level serialization control mechanism used to protect shared data structures in the SGA from simultaneous access.

lock: A database mechanism that prevents destructive interaction between transactions accessing a shared resource such as a table, row, or system object not visible to users. The main categories of locks are DML locks, DDL locks, and latches and internal locks.

Mutexes

A mutual exclusion object (mutex) is a low-level mechanism that prevents an object in memory from aging out or from being corrupted when accessed by concurrent processes. A mutex is similar to a latch, but whereas a latch typically protects a group of objects, a mutex protects a single object. Mutexes provide several benefits:

■ A mutex can reduce the possibility of contention. Because a latch protects multiple objects, it can become a bottleneck when processes attempt to access any of these objects concurrently. By serializing access to an individual object rather than a group, a mutex increases availability.
■ A mutex consumes less memory than a latch.
■ When in shared mode, a mutex permits concurrent reference by multiple sessions.

Internal Locks

Internal locks are higher-level, more complex mechanisms than latches and mutexes and serve various purposes. The database uses the following types of internal locks:

■ Dictionary cache locks: These locks are of very short duration and are held on entries in dictionary caches while the entries are being modified or used. They guarantee that statements being parsed do not see inconsistent object definitions. Dictionary cache locks can be shared or exclusive. Shared locks are released when the parse is complete, whereas exclusive locks are released when the DDL operation is complete.
■ File and log management locks: These locks protect various files. For example, an internal lock protects the control file so that only one process at a time can change it. Another lock coordinates the use and archiving of the online redo log files.
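The contention benefit of a mutex over a latch can be pictured with ordinary thread locks. The sketch below is a simplified illustration, not Oracle code; all names (`group_latch`, `touch_with_mutex`, etc.) are invented. A single "latch" serializing a whole group of objects makes unrelated accesses block each other, whereas one "mutex" per object serializes only accesses to the same object.

```python
import threading

# Simplified illustration (not Oracle internals): one lock guarding a whole
# group of objects vs. one lock per object.

objects = {"obj_a": 0, "obj_b": 0}

# Latch-style: a single lock serializes access to EVERY object in the group.
group_latch = threading.Lock()

def touch_with_latch(name):
    with group_latch:           # sessions touching obj_a block sessions touching obj_b
        objects[name] += 1

# Mutex-style: one lock per object, so unrelated objects never contend.
object_mutexes = {name: threading.Lock() for name in objects}

def touch_with_mutex(name):
    with object_mutexes[name]:  # only sessions touching the SAME object serialize
        objects[name] += 1

threads = [threading.Thread(target=touch_with_mutex, args=("obj_a",)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(objects["obj_a"])  # 100: a per-object mutex still guarantees correctness
```

With the per-object variant, a session updating `obj_b` never waits behind the hundred sessions hammering `obj_a`; with the group latch it would.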
Data files are locked to ensure that multiple instances mount a database in shared mode or that one instance mounts it in exclusive mode. Because file and log locks indicate the status of files, these locks are necessarily held for a long time.
■ Tablespace and undo segment locks: These locks protect tablespaces and undo segments. For example, all instances accessing a database must agree on whether a tablespace is online or offline. Undo segments are locked so that only one database instance can write to a segment.

Latches

Latches are simple, low-level serialization mechanisms that coordinate multiuser access to shared data structures, objects, and files. Latches protect shared memory resources from corruption when accessed by multiple processes. Specifically, latches protect data structures from the following situations:

■ Concurrent modification by multiple sessions
■ Being read by one session while being modified by another session
■ Deallocation (aging out) of memory while being accessed

Typically, a single latch protects multiple objects in the SGA. For example, background processes such as DBWn and LGWR allocate memory from the shared pool to create data structures. To allocate this memory, these processes use a shared pool latch that serializes access to prevent two processes from trying to inspect or modify the shared pool simultaneously. After the memory is allocated, other processes may need to access shared pool areas such as the library cache, which is required for parsing. In this case, processes latch only the library cache, not the entire shared pool. Unlike enqueues such as row locks, latches do not permit sessions to queue. When a latch becomes available, the first session to request the latch obtains exclusive access to it.
(Note: latches do not allow queuing. When a latch becomes available, it goes to the first session that happens to request it at that moment, not to the session that has been waiting longest; acquisition is a pure competition.)

Latch spinning occurs when a process repeatedly requests a latch in a loop, whereas latch sleeping occurs when a process releases the CPU before renewing the latch request. (Note: if a latch request is not satisfied, the process sleeps and yields the CPU, then renews the request in a loop until the latch is acquired.) Typically, an Oracle process acquires a latch for an extremely short time while manipulating or looking at a data structure. For example, while processing a salary update of a single employee, the database may obtain and release thousands of latches. The implementation of latches is operating system-dependent, especially in respect to whether and how long a process waits for a latch. An increase in latching means a decrease in concurrency. For example, excessive hard parse operations create contention for the library cache latch. The V$LATCH view contains detailed latch usage statistics for each latch, including the number of times each latch was requested and waited for.

cursor: A handle or name for a private SQL area in the PGA. Because cursors are closely associated with private SQL areas, the terms are sometimes used interchangeably.

child cursor: The cursor containing the plan, compilation environment, and other information for a statement whose text is stored in a parent cursor. The parent cursor is number 0, the first child is number 1, and so on. Child cursors reference exactly the same SQL text as the parent cursor, but are different. For example, two statements with the text SELECT * FROM mytable use different cursors when they reference tables named mytable in different schemas.
Multiple private SQL areas in the same or different sessions can point to a single execution plan in the SGA. (Note: this is why, when soft parsing is frequent, a session can skip the per-bucket library cache mutex S (the library cache latch before 11g), the enqueue hash chain latch, and the parent cursor's cursor: mutex S, and proceed directly to cursor: pin S.)

Latch: Enqueue Hash Chains (Doc ID 445076.1)

Solution

Acquiring a lock is a series of steps, from getting an index number to identify the hash bucket in the hash table, to releasing free resource structures and lock data structures. Here are the steps:

1. Identifying the Hash Chain and Allocating the Resource Structure

Oracle finds the resource structure associated with the named resource using a hashing algorithm. In this algorithm Oracle uses a hash table (an array of hash buckets), which is controlled by the hidden parameter _ENQUEUE_HASH; the size of the hash table depends upon the value of this parameter. The hash chain contains the resource structures for that hash value.

When a session tries to acquire an enqueue, Oracle applies a hash function to convert the resource name into an index number in the array of hash buckets. Each hash bucket has one linked list attached to it, called a hash chain. Before accessing the hash bucket, the session acquires an enqueue hash chain latch. After the session acquires the enqueue hash chain latch, it moves down the hash chain attached to the bucket to locate the required resource structure.
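The bucket lookup in step 1 can be sketched with plain lists. This is an invented illustration, not Oracle's internal layout: `ENQUEUE_HASH` stands in for the hidden parameter of the same name, and the resource name "TM-00012345" is a made-up example.

```python
# Simplified model: a fixed-size array of hash buckets, each with a linked
# list ("hash chain") of resource structures.

ENQUEUE_HASH = 8  # stand-in for the _ENQUEUE_HASH table size

hash_table = [[] for _ in range(ENQUEUE_HASH)]  # array of hash buckets

def bucket_for(resource_name: str) -> int:
    # The hash function converts the resource name to an index into the array.
    return hash(resource_name) % ENQUEUE_HASH

def find_resource(resource_name: str):
    # In Oracle, the session would hold this bucket's enqueue hash chain
    # latch while walking the chain.
    chain = hash_table[bucket_for(resource_name)]
    for resource in chain:                      # move down the hash chain
        if resource["name"] == resource_name:
            return resource
    return None          # not present: would be allocated from the free list

# Allocate a resource structure and link it into its bucket's chain.
res = {"name": "TM-00012345", "owners": [], "waiters": [], "converters": []}
hash_table[bucket_for("TM-00012345")].append(res)
print(find_resource("TM-00012345") is res)  # True
```

Collisions are what make the chain a list rather than a single slot: every resource whose name hashes to the same bucket hangs off the same chain, serialized by that bucket's latch.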
At this point, because the session acquired an enqueue hash chain latch, it will record a miss or spin get in the V$LATCH view, depending upon the result of the latch operation.

There can be situations where the resource structure is not present on the hash chain. In that case, the session will acquire an enqueue latch and will record statistics about the latch operation in V$LATCH. After acquiring the enqueue latch, the session will unlink the head of the resource free list and link it into the hash chain associated with the hash bucket. The enqueue latch is held while the resource is allocated to the resource table.

2. Populating the Lock Data Structure with the Requested Resource

Now the session will acquire the enqueue latch again and will unlink the head of the lock free list. It will populate the information related to the resource being requested, such as the lock mode. After populating this information, the session will link this lock structure to one of the linked lists (owner, waiter, or converter) associated with the resource structure, depending upon whether other sessions own that resource structure, are waiting to own the lock, or are waiting to convert an existing held lock on that resource. Oracle will record the statistics in V$LATCH according to the result of the enqueue latch operation. By this time, both the enqueue latch and the enqueue hash chain latch are held by this session.
After linking the lock data structure with the resource structure, the session will release the enqueue latch first, then the enqueue hash chain latch. Now, if the session must wait in any queue (owner, waiter, or converter) for another session to complete, the enqueue wait event will be recorded in V$SESSION_EVENT.

3. Releasing a Lock

The method to release a lock is mostly the same as acquiring one. First of all, Oracle will use a hash function to determine the hash bucket where the resource structure is allocated. (Note: this mirrors cursor lookup; after semantic parsing, Oracle hashes the SQL text and can locate directly the hash bucket where the parent cursor should reside.) It will acquire the enqueue hash chain latch and record statistics of the latch operation in V$LATCH. Then it will locate the resource in the hash chain (the linked list associated with the hash bucket identified by the hash function). (Note: in other words, whenever execution needs to find a related resource, the hash algorithm leads straight to the bucket holding it.)

The session will acquire the enqueue latch, unlink the lock data structure from the resource structure, link the lock data structure onto the lock free list, and release the enqueue latch. After releasing the enqueue latch, the session will post the next process (waiter or converter) to proceed if appropriate. Depending upon the LRU algorithm, Oracle will decide whether to unlink the resource structure from the hash chain and link it to the resource free list or not.
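The release sequence in step 3 can be sketched against the same invented structures as the lookup: unlink the lock structure from the resource, return it to the lock free list, then post the next waiter. The latch steps appear only as comments; none of this is Oracle code, and the `sid`/`mode` fields are illustrative.

```python
# Simplified model of releasing a lock on a resource structure.

lock_free_list = []

def release_lock(resource: dict, lock_struct: dict):
    # (enqueue hash chain latch held: the resource was already located on
    # its hash chain via the hash function)
    # Acquire the enqueue latch, then unlink the lock structure...
    resource["owners"].remove(lock_struct)
    lock_free_list.append(lock_struct)   # ...and link it onto the lock free list.
    # Enqueue latch released; post the next process (waiter here) if any.
    if resource["waiters"]:
        nxt = resource["waiters"].pop(0)
        resource["owners"].append(nxt)   # the posted waiter becomes an owner
        return nxt
    return None
    # (finally the enqueue hash chain latch would be released)

res = {"name": "TM-00012345",
       "owners": [{"sid": 1, "mode": "X"}],
       "waiters": [{"sid": 2, "mode": "X"}],
       "converters": []}
granted = release_lock(res, res["owners"][0])
print(granted["sid"])  # 2: the waiting session is posted and becomes the owner
```

Note the ordering the MOS text insists on: the lock structure is unlinked and freed under the enqueue latch, and only after that latch is released is the next process posted.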
(LRU here stands for Least Recently Used.) After all of this, the session will release the enqueue hash chain latch, and the lock is released.

DML locks protect objects from concurrent modification. Frequently, DML lock allocation latch contention is seen together with enqueue hash chain latch contention, because DML locks are implemented through TM enqueue locks. These resource structures hang from enqueue hash chains serialized by enqueue hash chain latches. So, reducing DML lock allocation latch contention should resolve enqueue hash chain latch contention.

About the two bugs identified in your previous SR (high session counts on this latch, together with the resmgr: resource group CPU method wait):

PROBLEM DESCRIPTION:
There is high contention on the resource manager runnable list latch on big systems.

FIX DESCRIPTION:
Have multiple runnable lists per consumer group, each with its own child latch.

1) Number of runnable lists. One runnable list is allocated per 16 CPUs. If there are 128 CPUs, then there will be 8 runnable lists per consumer group. The number of runnable lists is capped at 10 and is tunable using an underscore parameter.

2) Adding a vt to a runnable list. The group of runnable lists for a consumer group is maintained by kgkp. kgsk calls the add-vt function, and kgkp decides which runnable list the vt should go to. To spread the vts evenly across the group of runnable lists, an add counter is introduced for each consumer group. The add counter decides the runnable list that a vt goes to, and is increased (not atomically) every time the counter is used.

3) Picking a vt from a runnable list. kgsk calls the pick-vt function, and kgkp decides which runnable list to pick from.
To pick the vts evenly from all runnable lists, a pick counter is introduced for each consumer group. The pick counter serves as the target index. If the corresponding target runnable list is empty, the current process will traverse the runnable lists to find the nearest non-empty runnable list. The pick counter is increased (not atomically) every time the counter is used.

As to P1, P2, and P3 for this latch wait event:

P1 = Latch address
P2 = Latch number
P3 = Tries
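The add-counter and pick-counter scheme from the fix description can be sketched as follows. This is a simplified model under stated assumptions: the real kgkp/kgsk code is not public, so the class, the sizing formula's constants, and the round-robin details below are reconstructed only from the prose above.

```python
NUM_CPUS = 128
N_LISTS = min(10, max(1, NUM_CPUS // 16))  # 1 list per 16 CPUs, capped at 10

class ConsumerGroup:
    """Invented stand-in for a resource manager consumer group."""

    def __init__(self):
        # Each runnable list would be protected by its own child latch.
        self.runnable_lists = [[] for _ in range(N_LISTS)]
        self.add_counter = 0    # decides which list an added vt goes to
        self.pick_counter = 0   # target index for the next pick

    def add_vt(self, vt):
        # Round-robin: spread vts evenly across the runnable lists.
        self.runnable_lists[self.add_counter % N_LISTS].append(vt)
        self.add_counter += 1   # in the real fix this increment is not atomic

    def pick_vt(self):
        target = self.pick_counter % N_LISTS
        self.pick_counter += 1
        # If the target list is empty, traverse the lists to find the
        # nearest non-empty one.
        for i in range(N_LISTS):
            lst = self.runnable_lists[(target + i) % N_LISTS]
            if lst:
                return lst.pop(0)
        return None

group = ConsumerGroup()
for vt in range(20):
    group.add_vt(vt)
print(group.pick_vt())  # 0: the first vt added to list 0 is picked first
```

Splitting one list (one latch) into N lists (N child latches) is the whole point of the fix: contention on any single runnable list latch drops roughly by a factor of N.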
