Intel® VTune™ Amplifier XE and Intel® VTune™ Amplifier for Systems Help
This section provides a reference for the hardware events that can be monitored for the CPU(s):
The following performance-monitoring events are supported:
This event counts the number of divide operations executed. It uses edge-detect and a cmask value of 1 on ARITH.FPU_DIV_ACTIVE so that each divide operation is counted once rather than once per cycle the divider is active.
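For illustration, here is a minimal sketch (Linux-specific) of programming this edge-detect/cmask combination through the perf_event_open system call as a raw event. The encoding of ARITH.FPU_DIV_ACTIVE as event select 0x14 with umask 0x01 is an assumption based on the Haswell event tables; verify it against the tables for your part.

    /* Minimal sketch: count divide operations by applying edge-detect and
     * cmask=1 to ARITH.FPU_DIV_ACTIVE (assumed encoding: event 0x14,
     * umask 0x01). In the standard x86 raw encoding, umask is bits 8-15,
     * the edge bit is bit 18, and cmask is bits 24-31. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <linux/perf_event.h>

    static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                                int cpu, int group_fd, unsigned long flags)
    {
        return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
    }

    int main(void)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_RAW;
        attr.size = sizeof(attr);
        attr.config = 0x14 | (0x01 << 8) | (1UL << 18) | (1UL << 24);
        attr.disabled = 1;
        attr.exclude_kernel = 1;

        int fd = perf_event_open(&attr, 0, -1, -1, 0); /* this thread */
        if (fd < 0) { perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

        volatile double x = 1.0;
        for (int i = 0; i < 1000; i++)
            x /= 1.000001;                      /* generate divide uops */

        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
        uint64_t count = 0;
        read(fd, &count, sizeof(count));
        printf("divide operations: %llu\n", (unsigned long long)count);
        close(fd);
        return 0;
    }

Because the counter increments only on 0-to-1 transitions of the divider-active signal, each divide is counted once regardless of how many cycles it occupies the divider.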
Counts the total number of times the front end is resteered, mainly when the Branch Prediction Unit (BPU) cannot provide a correct prediction and the resteer is corrected by other branch-handling mechanisms at the front end.
This event counts both taken and not taken speculative and retired branch instructions.
This event counts both taken and not taken speculative and retired macro-conditional branch instructions.
This event counts both taken and not taken speculative and retired macro-unconditional branch instructions, excluding calls and indirects.
This event counts both taken and not taken speculative and retired direct near calls.
This event counts both taken and not taken speculative and retired indirect branches excluding calls and return branches.
This event counts both taken and not taken speculative and retired indirect branches that have a return mnemonic.
This event counts not taken macro-conditional branch instructions.
This event counts taken speculative and retired macro-conditional branch instructions.
This event counts taken speculative and retired macro-conditional branch instructions excluding calls and indirect branches.
This event counts taken speculative and retired direct near calls.
This event counts taken speculative and retired indirect branches excluding calls and return branches.
This event counts taken speculative and retired indirect calls including both register and memory indirect.
This event counts taken speculative and retired indirect branches that have a return mnemonic.
This event counts all (macro) branch instructions retired.
This is a precise version of BR_INST_RETIRED.ALL_BRANCHES that counts all (macro) branch instructions retired.
This is a non-precise version (that is, does not use PEBS) of the event that counts conditional branch instructions retired.
This is a precise version (that is, uses PEBS) of the event that counts conditional branch instructions retired.
This is a non-precise version (that is, does not use PEBS) of the event that counts far branch instructions retired.
This is a non-precise version (that is, does not use PEBS) of the event that counts both direct and indirect near call instructions retired.
This is a precise version (that is, uses PEBS) of the event that counts both direct and indirect near call instructions retired.
This is a non-precise version (that is, does not use PEBS) of the event that counts both direct and indirect macro near call instructions retired (captured in ring 3).
This is a precise version (that is, uses PEBS) of the event that counts both direct and indirect macro near call instructions retired (captured in ring 3).
This is a non-precise version (that is, does not use PEBS) of the event that counts return instructions retired.
This is a precise version (that is, uses PEBS) of the event that counts return instructions retired.
This is a non-precise version (that is, does not use PEBS) of the event that counts taken branch instructions retired.
This is a precise version (that is, uses PEBS) of the event that counts taken branch instructions retired.
This is a non-precise version (that is, does not use PEBS) of the event that counts not taken branch instructions retired.
This event counts both taken and not taken speculative and retired mispredicted branch instructions.
This event counts both taken and not taken speculative and retired mispredicted macro conditional branch instructions.
This event counts both taken and not taken mispredicted indirect branches excluding calls and returns.
This event counts not taken speculative and retired mispredicted macro conditional branch instructions.
This event counts taken speculative and retired mispredicted macro conditional branch instructions.
This event counts taken speculative and retired mispredicted indirect branches excluding calls and returns.
This event counts taken speculative and retired mispredicted indirect calls.
This event counts taken speculative and retired mispredicted indirect branches that have a return mnemonic.
This event counts all mispredicted macro branch instructions retired.
This is a precise version of BR_MISP_RETIRED.ALL_BRANCHES that counts all mispredicted macro branch instructions retired.
This is a non-precise version (that is, does not use PEBS) of the event that counts mispredicted conditional branch instructions retired.
This is a precise version (that is, uses PEBS) of the event that counts mispredicted conditional branch instructions retired.
Number of near branch instructions retired that were mispredicted and taken.
Number of near branch instructions retired that were mispredicted and taken. (Precise Event - PEBS)
This is a non-precise version (that is, does not use PEBS) of the event that counts mispredicted return instructions retired.
This is a precise version (that is, uses PEBS) of the event that counts mispredicted return instructions retired.
This event counts the unhalted core cycles during which the thread is in the ring 0 privileged mode.
This event counts when there is a transition from ring 1, 2, or 3 to ring 0.
This event counts unhalted core cycles during which the thread is in rings 1, 2, or 3.
Counts XClk pulses when this thread is unhalted and the other thread is halted.
This is a fixed-frequency event programmed to general counters. It counts when the core is unhalted at 100 MHz.
Reference cycles when at least one thread on the physical core is unhalted (counts at a 100 MHz rate)
This event counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P-states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state, and it has a constant ratio with the CPU_CLK_UNHALTED.REF_XCLK event. It is counted on a dedicated fixed counter, leaving the four (eight when Hyper-Threading is disabled) programmable counters available for other events. Note: On all current platforms this event stops counting during the duty-off periods of throttling (TM) states, when the processor is halted. This event is clocked by the base clock (100 MHz) on Sandy Bridge. Because the counter is updated at a lower clock rate than the core clock, the overflow status bit for this counter may appear 'sticky': after the counter has overflowed and software has cleared the overflow status bit and reset the counter to less than MAX, the reset value is not clocked in immediately, so the overflow status bit flips back to 'high' (1) and generates another PMI (if enabled), after which the reset value gets clocked into the counter. Software therefore receives the interrupt and reads the overflow status bit (bit 34) as '1' while the counter value is less than MAX. Software should ignore this case.
This event counts the number of core cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2. For this reason, this event may have a changing ratio with regard to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the four (eight when Hyper-Threading is disabled) programmable counters available for other events.
Core cycles when at least one thread on the physical core is not in a halt state
This is an architectural event that counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling. For this reason, this event may have a changing ratio with regard to wall clock time.
Core cycles when at least one thread on the physical core is not in a halt state
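Because CPU_CLK_UNHALTED.THREAD advances at the actual core frequency while CPU_CLK_UNHALTED.REF_TSC advances at the invariant TSC rate, their ratio over the same interval estimates the average core frequency while unhalted. A minimal sketch of that arithmetic, with hypothetical counter values:

    /* Sketch: estimate average unhalted core frequency from two counts
     * collected over the same interval. base_ghz is the TSC (base)
     * frequency of the part, e.g. 2.3 for a 2.3 GHz processor. */
    #include <stdio.h>
    #include <stdint.h>

    static double avg_core_ghz(uint64_t clk_unhalted_thread,
                               uint64_t clk_unhalted_ref_tsc,
                               double base_ghz)
    {
        return base_ghz * (double)clk_unhalted_thread
                        / (double)clk_unhalted_ref_tsc;
    }

    int main(void)
    {
        /* hypothetical sample in which turbo raised the core above base */
        printf("avg core frequency: %.2f GHz\n",
               avg_core_ghz(3450000000ULL, 2300000000ULL, 2.3));
        return 0;
    }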
Cycles while L1 cache miss demand load is outstanding.
Counts number of cycles the CPU has at least one pending demand load request missing the L1 data cache.
Cycles while L2 cache miss demand load is outstanding.
Counts number of cycles the CPU has at least one pending demand* load request missing the L2 cache.
Counts number of cycles the CPU has at least one pending demand load request (that is, cycles with a non-completed load waiting for its data from the memory subsystem)
Cycles while memory subsystem has an outstanding load.
Counts number of cycles nothing is executed on any execution port.
Execution stalls while L1 cache miss demand load is outstanding.
Counts number of cycles nothing is executed on any execution port, while there was at least one pending demand load request missing the L1 data cache.
Execution stalls while L2 cache miss demand load is outstanding.
Counts number of cycles nothing is executed on any execution port, while there was at least one pending demand* load request missing the L2 cache (as a footprint). *Demand loads here also include L1 HW prefetch requests that may or may not be required by demand loads.
Counts number of cycles nothing is executed on any execution port, while there was at least one pending demand load request.
Execution stalls while memory subsystem has an outstanding load.
Total execution stalls.
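The stall events above are typically normalized by total unhalted cycles to see where execution time goes. A minimal sketch, assuming all counts were collected over the same interval (the values are hypothetical):

    /* Sketch: express the stall counts above as fractions of all cycles.
     * The difference between total stalls and stalls with an L2 miss
     * outstanding bounds the stalls not explained by L2 misses. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t cycles         = 1000000000ULL; /* unhalted core cycles  */
        uint64_t stalls_total   =  420000000ULL; /* no uop executed       */
        uint64_t stalls_l2_miss =  300000000ULL; /* stalled, L2 miss open */

        printf("stalled:            %5.1f%%\n", 100.0 * stalls_total / cycles);
        printf("stalled on L2 miss: %5.1f%%\n", 100.0 * stalls_l2_miss / cycles);
        printf("other stalls:       %5.1f%%\n",
               100.0 * (stalls_total - stalls_l2_miss) / cycles);
        return 0;
    }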
This event counts Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles. These cycles do not include uops routed through because of the switch itself, for example, when Instruction Decode Queue (IDQ) pre-allocation is unavailable, or the IDQ is full. DSB-to-MITE switch true penalty cycles happen after the merge mux (MM) receives the Decode Stream Buffer (DSB) Sync-indication until it receives the first MITE uop. The MM is placed before the Instruction Decode Queue (IDQ) to merge uops being fed from the MITE and DSB paths. The DSB inserts the Sync-indication whenever a DSB-to-MITE switch occurs. Penalty: a DSB hit followed by a DSB miss can cost up to six cycles in which no uops are delivered to the IDQ. Most often, such switches from the DSB to the legacy pipeline cost 0-2 cycles.
This event counts load misses in all DTLB levels that cause page walks of any page size (4K/2M/4M/1G).
Load operations that miss the first DTLB level but hit the second and do not cause page walks
Load misses that miss the DTLB and hit the STLB (2M)
Load misses that miss the DTLB and hit the STLB (4K)
This event counts demand load misses in all translation lookaside buffer (TLB) levels that cause a completed page walk of any page size.
This event counts load misses in all DTLB levels that cause a completed page walk (1G page size). The page walk can end with or without a fault.
This event counts load misses in all DTLB levels that cause a completed page walk (2M and 4M page sizes). The page walk can end with or without a fault.
This event counts load misses in all DTLB levels that cause a completed page walk (4K page size). The page walk can end with or without a fault.
This event counts the number of cycles while the Page Miss Handler (PMH) is busy with the page walk.
This event counts store misses in all DTLB levels that cause page walks of any page size (4K/2M/4M/1G).
Store operations that miss the first TLB level but hit the second and do not cause page walks
Store misses that miss the DTLB and hit the STLB (2M)
Store misses that miss the DTLB and hit the STLB (4K)
Store misses in all DTLB levels that cause completed page walks
This event counts store misses in all DTLB levels that cause a completed page walk (1G page size). The page walk can end with or without a fault.
This event counts store misses in all DTLB levels that cause a completed page walk (2M and 4M page sizes). The page walk can end with or without a fault.
This event counts store misses in all DTLB levels that cause a completed page walk (4K page size). The page walk can end with or without a fault.
This event counts the number of cycles while the Page Miss Handler (PMH) is busy with the page walk.
This event counts cycles for an extended page table walk. The Extended Page Directory cache differs from standard TLB caches in the operating system that uses it: virtual machine operating systems use the extended page directory cache, while guest operating systems use the standard TLB caches.
Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired. Each count represents 2 computations. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.
Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired. Each count represents 4 computations. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP RSQRT SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.
Number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired. Each count represents 4 computations. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.
Number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired. Each count represents 8 computations. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP RSQRT SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.
Number of SSE/AVX computational double precision floating-point instructions retired. Applies to SSE* and AVX* scalar and packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.
Number of SSE/AVX computational packed floating-point instructions retired. Applies to SSE* and AVX* packed double and single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RSQRT RCP SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.
Number of SSE/AVX computational scalar floating-point instructions retired. Applies to SSE* and AVX* scalar double and single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RSQRT RCP SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.
Number of SSE/AVX computational scalar double precision floating-point instructions retired. Each count represents 1 computation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.
Number of SSE/AVX computational scalar single precision floating-point instructions retired. Each count represents 1 computation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP RSQRT SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.
Number of SSE/AVX computational single precision floating-point instructions retired. Applies to SSE* and AVX* scalar and packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP RSQRT SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.
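Using the per-count multipliers stated above (1 for scalar, 2 for 128-bit packed double, 4 for 128-bit packed single and 256-bit packed double, 8 for 256-bit packed single), total floating-point operations can be derived from these counters. A minimal sketch with hypothetical counts:

    /* Sketch: deriving total floating-point operations from the counts
     * above, using the per-count multipliers stated in the descriptions.
     * All counter values are hypothetical. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t scalar_single = 100000000ULL; /* x1 FLOP per count */
        uint64_t scalar_double =  50000000ULL; /* x1 */
        uint64_t pk128_double  =  20000000ULL; /* x2 */
        uint64_t pk128_single  =  10000000ULL; /* x4 */
        uint64_t pk256_double  =  40000000ULL; /* x4 */
        uint64_t pk256_single  =  30000000ULL; /* x8 */

        double flops = 1.0 * (scalar_single + scalar_double)
                     + 2.0 * pk128_double
                     + 4.0 * (pk128_single + pk256_double)
                     + 8.0 * pk256_single;
        printf("total FLOPs: %.3e\n", flops);
        return 0;
    }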
This event counts cycles with any input or output SSE or x87 FP assist. If an input and an output assist are detected on the same cycle, the event increments by 1.
This is a non-precise version (that is, does not use PEBS) of the event that counts any input SSE* FP assist (invalid operation, denormal operand, divide by zero, SNaN operand). Counting includes only cases involving penalties that required micro-code assist intervention.
This is a non-precise version (that is, does not use PEBS) of the event that counts the number of SSE* floating point (FP) micro-code assists (numeric overflow/underflow) when the output value (destination register) is invalid. Counting covers only cases involving penalties that require micro-code assist intervention.
This is a non-precise version (that is, does not use PEBS) of the event that counts x87 floating point (FP) micro-code assists (invalid operation, denormal operand, SNaN operand) when the input value (one of the source operands to an FP instruction) is invalid.
This is a non-precise version (that is, does not use PEBS) of the event that counts the number of x87 floating point (FP) micro-code assists (numeric overflow/underflow, inexact result) when the output value (destination register) is invalid.
Number of times HLE abort was triggered
Number of times an HLE abort was attributed to a Memory condition (See TSX_Memory event for additional details)
Number of times the TSX watchdog signaled an HLE abort
Number of times a disallowed operation caused an HLE abort
Number of times HLE caused a fault
Number of times HLE aborted and was not due to the abort conditions in subevents 3-6
Number of times HLE abort was triggered (PEBS)
Number of times HLE commit succeeded
Number of times we entered an HLE region; does not count nested transactions
This event counts the number of both cacheable and noncacheable Instruction Cache, Streaming Buffer and Victim Cache Reads including UC fetches.
This event counts cycles during which the demand fetch waits for data from the L2 or the instruction streaming buffer (iSB) (opportunistic hit).
This event counts the number of instruction cache, streaming buffer and victim cache misses. Counting includes UC accesses.
This event counts the number of cycles 4 uops were delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Counting includes uops that may "bypass" the IDQ.
This event counts the number of cycles uops were delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Counting includes uops that may "bypass" the IDQ.
This event counts the number of cycles 4 uops were delivered to Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops that may "bypass" the IDQ. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).
This event counts the number of cycles uops were delivered to Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops that may "bypass" the IDQ. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).
This event counts cycles during which uops are being delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Counting includes uops that may "bypass" the IDQ.
This event counts the number of uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Counting includes uops that may "bypass" the IDQ.
This event counts the number of cycles the Instruction Decode Queue (IDQ) is empty, which can indicate that the application may be bound in the front end. It does not determine whether uops are being delivered to the Alloc stage, since uops can be delivered by bypass, skipping the IDQ when it is empty.
This event counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops that may "bypass" the IDQ. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).
This event counts cycles during which uops are being delivered to Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops that may "bypass" the IDQ.
This event counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops that may "bypass" the IDQ. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).
This event counts cycles during which uops are being delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may "bypass" the IDQ. Uops may be initiated by the Decode Stream Buffer (DSB) or MITE.
This event counts cycles during which uops initiated by Decode Stream Buffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may "bypass" the IDQ.
This event counts the number of deliveries to Instruction Decode Queue (IDQ) initiated by Decode Stream Buffer (DSB) while the Microcode Sequencer (MS) is busy. Counting includes uops that may "bypass" the IDQ.
This event counts the number of uops initiated by Decode Stream Buffer (DSB) that are being delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may "bypass" the IDQ.
This event counts the number of uops initiated by MITE and delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may "bypass" the IDQ.
Number of switches from DSB (Decode Stream Buffer) or MITE (legacy decode pipeline) to the Microcode Sequencer
This event counts the total number of uops delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may "bypass" the IDQ. Uops may be initiated by the Decode Stream Buffer (DSB) or MITE.
This event counts the number of uops not delivered to the Resource Allocation Table (RAT) per thread, adding '4 - x' when the RAT is not stalled and the Instruction Decode Queue (IDQ) delivers x uops to the RAT (where x belongs to {0,1,2,3}). Counting does not cover cases when: a. the IDQ-RAT pipe serves the other thread; b. the RAT is stalled for the thread (including uop drops and clear BE conditions); c. the IDQ delivers four uops.
This event counts, on a per-thread basis, cycles when no uops are delivered to the Resource Allocation Table (RAT), that is, when IDQ_UOPS_NOT_DELIVERED.CORE = 4.
Counts cycles during which the front end delivered 4 uops or the Resource Allocation Table (RAT) was stalling the front end.
This event counts, on a per-thread basis, cycles when less than 1 uop is delivered to the Resource Allocation Table (RAT), that is, when IDQ_UOPS_NOT_DELIVERED.CORE >= 3.
Cycles with less than 2 uops delivered by the front end
Cycles with less than 3 uops delivered by the front end
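Since each cycle contributes '4 - x' undelivered uops (see above), the fraction of issue slots lost to the front end over an interval can be estimated by dividing the undelivered-uop count by four times the cycle count, as the top-down analysis method does. A minimal sketch with hypothetical counts:

    /* Sketch: fraction of issue slots lost to the front end, from the
     * "4 - x per cycle" definition above (4 issue slots per cycle).
     * Counter values are hypothetical. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t uops_not_delivered =  600000000ULL;
        uint64_t cycles             = 1000000000ULL;
        printf("front-end bound: %.1f%% of issue slots\n",
               100.0 * (double)uops_not_delivered / (4.0 * (double)cycles));
        return 0;
    }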
This event counts stalls that occur due to length-changing prefixes (66H, 67H, or REX.W when they change the length of the decoded instruction). The occurrence count is proportional to the number of such prefixes in a 16-byte line, and each LCP in a 16-byte chunk can incur a three-cycle penalty.
This event counts the number of instructions retired from execution. For instructions that consist of multiple micro-ops, this event counts the retirement of the last micro-op of the instruction. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter, leaving the four (eight when Hyper-Threading is disabled) programmable counters available for other events. INST_RETIRED.ANY_P is counted by a programmable counter and is an architectural performance event. Counting: faulting executions of GETSEC/VM entry/VM exit/MWAIT do not count as retired instructions.
This event counts the number of instructions (EOMs) retired. Counting covers macro-fused instructions individually (that is, increments by two).
This is a precise version (that is, uses PEBS) of the event that counts instructions retired.
This is a non-precise version (that is, does not use PEBS) of the event that counts FP operations retired. For X87 FP operations that have no exceptions, counting also includes flows that have several X87 uops, or flows that use X87 uops in exception handling.
This event counts the number of cycles during which Resource Allocation Table (RAT) external stall is sent to Instruction Decode Queue (IDQ) for the current thread. This also includes the cycles during which the Allocator is serving another thread.
Cycles when checkpoints in the Resource Allocation Table (RAT) are recovering from a JEClear or machine clear
Core cycles during which the allocator was stalled due to recovery from an earlier clear event for any thread running on the physical core (for example, misprediction or memory nuke)
This event counts the number of flushes of the big or small ITLB pages. Counting includes both TLB Flush (covering all sets) and TLB Set Clear (set-specific).
This event counts misses in all ITLB levels that cause page walks of any page size (4K/2M/4M/1G).
Operations that miss the first ITLB level but hit the second and do not cause any page walks
Code misses that miss the ITLB and hit the STLB (2M)
Code misses that miss the ITLB and hit the STLB (4K)
Misses in all ITLB levels that cause completed page walks
This event counts code fetch misses in all ITLB levels that cause a completed page walk (1G page size). The page walk can end with or without a fault.
This event counts code fetch misses in all ITLB levels that cause a completed page walk (2M and 4M page sizes). The page walk can end with or without a fault.
This event counts code fetch misses in all ITLB levels that cause a completed page walk (4K page size). The page walk can end with or without a fault.
This event counts the number of cycles while the Page Miss Handler (PMH) is busy with the page walk.
This event counts L1D data line replacements including opportunistic replacements, and replacements that require stall-for-replace or block-for-replace.
Cycles a demand request was blocked due to Fill Buffer unavailability
This event counts the duration of L1D misses outstanding; that is, each cycle it counts the number of Fill Buffers (FBs) outstanding that are required by demand reads. An FB is either held by a demand load, or held by a non-demand load and hit at least once by a demand load. The valid outstanding interval runs until the FB is deallocated: from FB allocation, if the FB was allocated by a demand load; or from the demand hit on the FB, if it was allocated by a hardware or software prefetch. Note: In the L1D, a demand read comprises cacheable or noncacheable demand loads, including those causing cache-line splits, and reads due to page walks resulting from any request type.
This event counts duration of L1D miss outstanding in cycles.
Cycles with L1D load Misses outstanding from any thread on physical core
This event counts the number of WB requests that hit L2 cache.
This event counts the number of L2 cache lines filling the L2. Counting does not cover rejects.
This event counts the number of L2 cache lines in the Exclusive state filling the L2. Counting does not cover rejects.
This event counts the number of L2 cache lines in the Invalidate state filling the L2. Counting does not cover rejects.
This event counts the number of L2 cache lines in the Shared state filling the L2. Counting does not cover rejects.
Clean L2 cache lines evicted by demand
This event counts the total number of L2 code requests.
This event counts the number of demand Data Read requests (including requests from L1D hardware prefetchers). These loads may hit or miss L2 cache. Only non-rejected loads are counted.
Demand requests that miss L2 cache
Demand requests to L2 cache
This event counts the total number of requests from the L2 hardware prefetchers.
This event counts the total number of RFO (read for ownership) requests to L2 cache. L2 RFO requests include both L1D demand RFO misses as well as L1D RFO prefetches.
L2 cache hits when fetching instructions, code reads.
L2 cache misses when fetching instructions
This event counts the number of demand Data Read requests that hit L2 cache. Only non-rejected loads are counted.
This event counts the number of demand Data Read requests that miss L2 cache. Only non-rejected loads are counted.
This event counts the number of requests from the L2 hardware prefetchers that hit L2 cache, including the new L3 prefetch request types.
This event counts the number of requests from the L2 hardware prefetchers that miss L2 cache.
All requests that miss L2 cache
All L2 requests
RFO requests that hit L2 cache
RFO requests that miss L2 cache
This event counts L2 or L3 HW prefetches that access L2 cache including rejects.
This event counts transactions that access the L2 pipe including snoops, page walks, and so on.
This event counts the number of L2 cache accesses when fetching instructions.
This event counts Demand Data Read requests that access L2 cache, including rejects.
This event counts L1D writebacks that access L2 cache.
This event counts L2 fill requests that access L2 cache.
This event counts L2 writebacks that access L2 cache.
This event counts Read for Ownership (RFO) requests that access L2 cache.
This event counts the number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.
This event counts how many times the load operation got the true Block-on-Store blocking code preventing store forwarding. This includes cases when: the preceding store conflicts with the load (incomplete overlap); store forwarding is impossible due to u-arch limitations; preceding lock RMW operations are not forwarded; the store has the no-forward bit set (uncacheable/page-split/masked stores); all-blocking stores are used (mostly fences and port I/O); and others. The most common case is a load blocked due to its address range overlapping with a preceding smaller uncompleted store. Note: This event does not take into account cases of out-of-SW-control (for example, SbTailHit), unknown physical STA, and cases of blocking loads on a store due to being non-WB memory type or a lock. These cases are covered by other events. See the table of not supported store forwards in the Optimization Guide.
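For reference, the following sketch shows a pattern that commonly triggers the blocked store forward described above: a narrow store immediately followed by a wider, overlapping load. Whether the block actually occurs depends on the microarchitecture; the event above can confirm it on a given part.

    /* Sketch: a pattern that commonly triggers the blocked store forward
     * described above - a 1-byte store immediately followed by an 8-byte
     * load whose address range overlaps it (incomplete overlap). */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t buf = 0, sum = 0;
        unsigned char *bytes = (unsigned char *)&buf;

        for (int i = 0; i < 10000000; i++) {
            bytes[3] = (unsigned char)i; /* narrow store...              */
            sum += buf;                  /* ...wider overlapping load:
                                            forwarding fails, the load
                                            waits for the store to drain */
        }
        printf("%llu\n", (unsigned long long)sum);
        return 0;
    }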
This event counts false dependencies in the MOB when the partial comparison upon loose net check failed and the dependency was resolved by the Enhanced Loose net mechanism. This may not result in high performance penalties. Loose net checks can fail when loads and stores are 4K aliased.
This event counts all non-software-prefetch load dispatches that hit the fill buffer (FB) allocated for a hardware prefetch.
This event counts all non-software-prefetch load dispatches that hit the fill buffer (FB) allocated for a software prefetch. It can also be incremented by some lock instructions, so it should only be used with profiling, so that the locks can be excluded by inspecting the nearby instructions in the disassembly.
This event counts the number of cycles when the L1D is locked. It is a superset of the 0x1 mask (BUS_LOCK_CLOCKS.BUS_LOCK_DURATION).
This event counts cycles in which the L1 and L2 are locked due to a UC lock or split lock. A lock is asserted in case of locked memory access, due to noncacheable memory, locked operation that spans two cache lines, or a page walk from the noncacheable page table. L1D and L2 locks have a very high performance penalty and it is highly recommended to avoid such access.
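The following sketch shows the kind of access the note warns against: a locked read-modify-write whose operand straddles a 64-byte cache-line boundary (a split lock). It is illustrative only and uses the GCC/Clang __atomic builtins; the misaligned cast is deliberately ill-behaved, and systems with split-lock detection enabled may fault on it.

    /* Sketch: the access the description warns against - a locked
     * read-modify-write whose 4-byte operand straddles a 64-byte
     * cache-line boundary (a split lock). Illustrative only. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        _Alignas(64) unsigned char buf[128] = {0};
        /* place a 32-bit counter at offset 62, across the line boundary */
        volatile uint32_t *split = (volatile uint32_t *)(buf + 62);

        for (int i = 0; i < 1000000; i++)
            __atomic_fetch_add(split, 1, __ATOMIC_SEQ_CST); /* locked RMW */

        printf("%u\n", (unsigned)*split);
        return 0;
    }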
This event counts core-originated cacheable demand requests that miss the last level cache (LLC). Demand requests include loads, RFOs, and hardware prefetches from L1D, and instruction fetches from IFU.
This event counts core-originated cacheable demand requests that refer to the last level cache (LLC). Demand requests include loads, RFOs, and hardware prefetches from L1D, and instruction fetches from IFU.
Cycles in which 4 uops were delivered by the LSD but did not come from the decoder
Cycles in which uops were delivered by the LSD but did not come from the decoder
Number of uops delivered by the LSD. Read more on the LSD under LSD_REPLAY.REPLAY
Number of machine clears (nukes) of any type.
This event counts both thread-specific (TS) and all-thread (AT) nukes.
Maskmov false fault - counts the number of times ucode passes through the Maskmov flow due to the instruction's mask being 0 while the flow completes without raising a fault.
This event counts the number of memory ordering Machine Clears detected. Memory Ordering Machine Clears can result from one of the following: 1. memory disambiguation, 2. external snoop, or 3. cross SMT-HW-thread snoop (stores) hitting load buffer.
This event counts self-modifying code (SMC) detected, which causes a machine clear.
This is a non-precise version (that is, does not use PEBS) of the event that counts retired load uops which data sources were L3 hit and a cross-core snoop hit in the on-pkg core cache.
This is a non-precise version (that is, does not use PEBS) of the event that counts retired load uops which data sources were HitM responses from a core on same socket (shared L3).
This is a precise version (that is, uses PEBS) of the event that counts retired load uops which data sources were HitM responses from a core on same socket (shared L3).
This is a precise version (that is, uses PEBS) of the event that counts retired load uops which data sources were L3 hit and a cross-core snoop hit in the on-pkg core cache.
This is a non-precise version (that is, does not use PEBS) of the event that counts retired load uops which data sources were L3 Hit and a cross-core snoop missed in the on-pkg core cache.
This is a precise version (that is, uses PEBS) of the event that counts retired load uops which data sources were L3 Hit and a cross-core snoop missed in the on-pkg core cache.
This is a non-precise version (that is, does not use PEBS) of the event that counts retired load uops which data sources were hits in the last-level (L3) cache without snoops required.
This is a precise version (that is, uses PEBS) of the event that counts retired load uops which data sources were hits in the last-level (L3) cache without snoops required.
Retired load uop whose Data Source was: local DRAM either Snoop not needed or Snoop Miss (RspI)
Retired load uop whose Data Source was: local DRAM either Snoop not needed or Snoop Miss (RspI) (Precise Event - PEBS)
Retired load uop whose Data Source was: remote DRAM either Snoop not needed or Snoop Miss (RspI)
Retired load uop whose Data Source was: remote DRAM either Snoop not needed or Snoop Miss (RspI) (Precise Event - PEBS)
Retired load uop whose Data Source was: forwarded from remote cache
Retired load uop whose Data Source was: forwarded from remote cache (Precise Event - PEBS)
Retired load uop whose Data Source was: Remote cache HITM
Retired load uop whose Data Source was: Remote cache HITM (Precise Event - PEBS)
This is a non-precise version (that is, does not use PEBS) of the event that counts retired load uops which data sources were load uops missed L1 but hit a fill buffer due to a preceding miss to the same cache line with the data not ready. Note: Only two data-sources of L1/FB are applicable for AVX-256bit even though the corresponding AVX load could be serviced by a deeper level in the memory hierarchy. Data source is reported for the Low-half load.
This is a precise version (that is, uses PEBS) of the event that counts retired load uops which data sources were load uops missed L1 but hit a fill buffer due to a preceding miss to the same cache line with the data not ready. Note: Only two data-sources of L1/FB are applicable for AVX-256bit even though the corresponding AVX load could be serviced by a deeper level in the memory hierarchy. Data source is reported for the Low-half load.
This is a non-precise version (that is, does not use PEBS) of the event that counts retired load uops which data sources were hits in the nearest-level (L1) cache. Note: Only two data-sources of L1/FB are applicable for AVX-256bit even though the corresponding AVX load could be serviced by a deeper level in the memory hierarchy. Data source is reported for the Low-half load. This event also counts SW prefetches independent of the actual data source.
This is a precise version (that is, uses PEBS) of the event that counts retired load uops which data sources were hits in the nearest-level (L1) cache. Note: Only two data-sources of L1/FB are applicable for AVX-256bit even though the corresponding AVX load could be serviced by a deeper level in the memory hierarchy. Data source is reported for the Low-half load. This event also counts SW prefetches independent of the actual data source.
This is a non-precise version (that is, does not use PEBS) of the event that counts retired load uops which data sources were misses in the nearest-level (L1) cache. Counting excludes unknown and UC data source.
This is a precise version (that is, uses PEBS) of the event that counts retired load uops which data sources were misses in the nearest-level (L1) cache. Counting excludes unknown and UC data source.
This is a non-precise version (that is, does not use PEBS) of the event that counts retired load uops which data sources were hits in the mid-level (L2) cache.
This is a precise version (that is, uses PEBS) of the event that counts retired load uops which data sources were hits in the mid-level (L2) cache.
This is a non-precise version (that is, does not use PEBS) of the event that counts retired load uops which data sources were misses in the mid-level (L2) cache. Counting excludes unknown and UC data source.
This is a precise version (that is, uses PEBS) of the event that counts retired load uops which data sources were misses in the mid-level (L2) cache. Counting excludes unknown and UC data source.
This is a non-precise version (that is, does not use PEBS) of the event that counts retired load uops which data sources were data hits in the last-level (L3) cache without snoops required.
This is a precise version (that is, uses PEBS) of the event that counts retired load uops which data sources were data hits in the last-level (L3) cache without snoops required.
Miss in last-level (L3) cache. Excludes Unknown data-source.
Miss in last-level (L3) cache. Excludes Unknown data-source. (Precise Event - PEBS)
This event counts loads with a latency value above 128 cycles.
This event counts loads with a latency value above 16 cycles.
This event counts loads with a latency value above 256 cycles.
This event counts loads with a latency value above 32 cycles.
This event counts loads with a latency value above 4 cycles.
This event counts loads with a latency value above 512 cycles.
This event counts loads with a latency value above 64 cycles.
This event counts loads with a latency value above 8 cycles.
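These thresholded events are backed by the PEBS load-latency facility and must be sampled rather than simply counted. A minimal sketch of the Linux perf_event_open attribute setup follows, assuming the common encoding of the load-latency event as raw 0x1CD (event 0xCD, umask 0x01) with the threshold passed in config1 - the same parameters the perf tool uses for cpu/mem-loads,ldlat=N/. Ring-buffer setup and sample parsing are omitted.

    /* Sketch: perf_event_attr setup for the PEBS load-latency facility
     * behind the thresholded events above. The raw encoding and the use
     * of config1 ("ldlat") are assumptions; check your platform. */
    #include <string.h>
    #include <linux/perf_event.h>

    void setup_load_latency_attr(struct perf_event_attr *attr)
    {
        memset(attr, 0, sizeof(*attr));
        attr->type           = PERF_TYPE_RAW;
        attr->size           = sizeof(*attr);
        attr->config         = 0x1CD;  /* load-latency event (assumed)    */
        attr->config1        = 128;    /* report loads with latency > 128 */
        attr->precise_ip     = 2;      /* request PEBS                    */
        attr->sample_period  = 10007;  /* sample every Nth qualifying load */
        attr->sample_type    = PERF_SAMPLE_IP | PERF_SAMPLE_ADDR |
                               PERF_SAMPLE_WEIGHT; /* weight = latency    */
        attr->disabled       = 1;
        attr->exclude_kernel = 1;
    }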
This event counts load uops retired to the architected path with a filter on bits 0 and 1 applied. Note: This event counts AVX-256bit load/store double-pump memory uops as a single uop at retirement. This event also counts SW prefetches.
This is a precise version (that is, uses PEBS) of the event that counts load uops retired to the architected path with a filter on bits 0 and 1 applied. Note: This event counts AVX-256bit load/store double-pump memory uops as a single uop at retirement. This event also counts SW prefetches.
This event counts store uops retired to the architected path with a filter on bits 0 and 1 applied. Note: This event counts AVX-256bit load/store double-pump memory uops as a single uop at retirement.
This is a precise version (that is, uses PEBS) of the event that counts store uops retired to the architected path with a filter on bits 0 and 1 applied. Note: This event counts AVX-256bit load/store double-pump memory uops as a single uop at retirement.
This is a non-precise version (that is, does not use PEBS) of the event that counts load uops with locked access retired to the architected path.
This is a precise version (that is, uses PEBS) of the event that counts load uops with locked access retired to the architected path.
This is a non-precise version (that is, does not use PEBS) of the event that counts line-split load uops retired to the architected path. A line split is across a 64B cache line, which includes a page split (4K).
This is a precise version (that is, uses PEBS) of the event that counts line-split load uops retired to the architected path. A line split is across a 64B cache line, which includes a page split (4K).
This is a non-precise version (that is, does not use PEBS) of the event that counts line-split store uops retired to the architected path. A line split is across a 64B cache line, which includes a page split (4K).
This is a precise version (that is, uses PEBS) of the event that counts line-split store uops retired to the architected path. A line split is across a 64B cache line, which includes a page split (4K).
This is a non-precise version (that is, does not use PEBS) of the event that counts load uops with true STLB miss retired to the architected path. True STLB miss is an uop triggering page walk that gets completed without blocks, and later gets retired. This page walk can end up with or without a fault.
This is a precise version (that is, uses PEBS) of the event that counts load uops with true STLB miss retired to the architected path. True STLB miss is an uop triggering page walk that gets completed without blocks, and later gets retired. This page walk can end up with or without a fault.
This is a non-precise version (that is, does not use PEBS) of the event that counts store uops with true STLB miss retired to the architected path. True STLB miss is an uop triggering page walk that gets completed without blocks, and later gets retired. This page walk can end up with or without a fault.
This is a precise version (that is, uses PEBS) of the event that counts store uops with true STLB miss retired to the architected path. True STLB miss is an uop triggering page walk that gets completed without blocks, and later gets retired. This page walk can end up with or without a fault.
This event counts speculative cache-line split load uops dispatched to the L1 cache.
This event counts speculative cache line split store-address (STA) uops dispatched to the L1 cache.
Number of integer Move Elimination candidate uops that were eliminated.
Number of integer Move Elimination candidate uops that were not eliminated.
Number of SIMD Move Elimination candidate uops that were eliminated.
Number of SIMD Move Elimination candidate uops that were not eliminated.
This event counts the demand and prefetch data reads. All Core Data Reads include cacheable "Demands" and L2 prefetchers (but not L3 prefetchers). Counting also covers reads due to page walks resulting from any request type.
This event counts both cacheable and noncacheable code read requests.
This event counts the Demand Data Read requests sent to uncore. Use it in conjunction with OFFCORE_REQUESTS_OUTSTANDING to determine average latency in the uncore; see the sketch after the OFFCORE_REQUESTS_OUTSTANDING descriptions below.
This event counts the demand RFO (read for ownership) requests including regular RFOs, locks, ItoM.
This event counts the number of cases when the offcore requests buffer cannot take more entries for the core. This can happen when the superqueue does not contain eligible entries, or when the L1D writeback pending FIFO is full. Note: The writeback pending FIFO has six entries.
This event counts the number of offcore outstanding cacheable Core Data Read transactions in the super queue every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See corresponding Umask under OFFCORE_REQUESTS.
This event counts cycles when offcore outstanding cacheable Core Data Read transactions are present in the super queue. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See corresponding Umask under OFFCORE_REQUESTS.
This event counts cycles when offcore outstanding Demand Data Read transactions are present in the super queue (SQ). A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation).
This event counts the number of offcore outstanding demand RFO transactions in the super queue every cycle. The "Offcore outstanding" state of the transaction lasts from the L2 miss until transaction completion is sent to the requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.
This event counts the number of offcore outstanding Code Read transactions in the super queue every cycle. The "Offcore outstanding" state of the transaction lasts from the L2 miss until transaction completion is sent to the requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.
This event counts the number of offcore outstanding Demand Data Read transactions in the super queue (SQ) every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor. See the corresponding Umask under OFFCORE_REQUESTS. Note: A prefetch promoted to Demand is counted from the promotion point.
Cycles with at least 6 offcore outstanding Demand Data Read transactions in the uncore queue
This event counts the number of offcore outstanding RFO (store) transactions in the super queue (SQ) every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See corresponding Umask under OFFCORE_REQUESTS.
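As suggested under OFFCORE_REQUESTS.DEMAND_DATA_RD above, average offcore read latency in core cycles is the occupancy count divided by the request count. A minimal sketch with hypothetical values:

    /* Sketch: average offcore demand-read latency in core cycles =
     * occupancy (outstanding transactions summed per cycle) divided by
     * the number of requests. Values are hypothetical. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t outstanding = 5200000000ULL; /* ..._OUTSTANDING.DEMAND_DATA_RD */
        uint64_t requests    =   26000000ULL; /* OFFCORE_REQUESTS.DEMAND_DATA_RD */
        printf("average offcore read latency: %.1f cycles\n",
               (double)outstanding / (double)requests);
        return 0;
    }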
tbd
Counts all demand & prefetch code reads that hit in the L3
Counts all demand & prefetch code reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
Counts all demand & prefetch code reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts all demand & prefetch code reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
Counts all demand & prefetch code reads that hit in the L3 and the snoops sent to sibling cores return clean response
Counts all demand & prefetch code reads that miss the L3 and the data is returned from local or remote DRAM
Counts all demand & prefetch code reads that miss in the L3
Counts all demand & prefetch code reads that miss the L3 and the data is returned from local DRAM
Counts all demand & prefetch code reads that miss the L3 and the data is returned from remote DRAM
Counts all demand & prefetch code reads that miss the L3 and the modified data is transferred from remote cache
Counts all demand & prefetch code reads that miss the L3 and clean or shared data is transferred from remote cache
Counts all demand & prefetch data reads that hit in the L3
Counts all demand & prefetch data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
Counts all demand & prefetch data reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts all demand & prefetch data reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
Counts all demand & prefetch data reads that hit in the L3 and the snoops sent to sibling cores return clean response
Counts all demand & prefetch data reads that miss the L3 and the data is returned from local or remote DRAM
Counts all demand & prefetch data reads that miss in the L3
Counts all demand & prefetch data reads that miss the L3 and the data is returned from local DRAM
Counts all demand & prefetch data reads that miss the L3 and the data is returned from remote DRAM
Counts all demand & prefetch data reads that miss the L3 and the modified data is transferred from remote cache
Counts all demand & prefetch data reads that miss the L3 and clean or shared data is transferred from remote cache
Counts all prefetch code reads that hit in the L3
Counts all prefetch code reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
Counts all prefetch code reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts all prefetch code reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
Counts all prefetch code reads that hit in the L3 and the snoops sent to sibling cores return clean response
Counts all prefetch code reads that miss the L3 and the data is returned from local or remote DRAM
Counts all prefetch code reads that miss in the L3
Counts all prefetch code reads that miss the L3 and the data is returned from local DRAM
Counts all prefetch code reads that miss the L3 and the data is returned from remote DRAM
Counts all prefetch code reads that miss the L3 and the modified data is transferred from remote cache
Counts all prefetch code reads that miss the L3 and clean or shared data is transferred from remote cache
Counts all prefetch data reads that hit in the L3
Counts all prefetch data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
Counts all prefetch data reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts all prefetch data reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
Counts all prefetch data reads that hit in the L3 and the snoops sent to sibling cores return clean response
Counts all prefetch data reads that miss the L3 and the data is returned from local or remote DRAM
Counts all prefetch data reads that miss in the L3
Counts all prefetch data reads that miss the L3 and the data is returned from local DRAM
Counts all prefetch data reads that miss the L3 and the data is returned from remote DRAM
Counts all prefetch data reads that miss the L3 and the modified data is transferred from remote cache
Counts all prefetch data reads that miss the L3 and clean or shared data is transferred from remote cache
Counts prefetch RFOs that hit in the L3
Counts prefetch RFOs that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
Counts prefetch RFOs that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts prefetch RFOs that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
Counts prefetch RFOs that hit in the L3 and the snoops sent to sibling cores return clean response
Counts prefetch RFOs that miss the L3 and the data is returned from local or remote DRAM
Counts prefetch RFOs that miss in the L3
Counts prefetch RFOs that miss the L3 and the data is returned from local DRAM
Counts prefetch RFOs that miss the L3 and the data is returned from remote DRAM
Counts prefetch RFOs that miss the L3 and the modified data is transferred from remote cache
Counts prefetch RFOs that miss the L3 and clean or shared data is transferred from remote cache
Counts all data/code/RFO reads (demand & prefetch) that hit in the L3
Counts all data/code/RFO reads (demand & prefetch) that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
Counts all data/code/RFO reads (demand & prefetch) that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts all data/code/RFO reads (demand & prefetch) that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
Counts all data/code/RFO reads (demand & prefetch) that hit in the L3 and the snoops sent to sibling cores return clean response
Counts all data/code/RFO reads (demand & prefetch) that miss the L3 and the data is returned from local or remote DRAM
Counts all data/code/RFO reads (demand & prefetch) that miss in the L3
Counts all data/code/RFO reads (demand & prefetch) that miss the L3 and the data is returned from local DRAM
Counts all data/code/RFO reads (demand & prefetch) that miss the L3 and the data is returned from remote DRAM
Counts all data/code/RFO reads (demand & prefetch) that miss the L3 and the modified data is transferred from remote cache
Counts all data/code/RFO reads (demand & prefetch) that miss the L3 and clean or shared data is transferred from remote cache
Counts all requests that hit in the L3
Counts all requests that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
Counts all requests that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts all requests that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
Counts all requests that hit in the L3 and the snoops sent to sibling cores return clean response
Counts all requests that miss the L3 and the data is returned from local or remote DRAM
Counts all requests that miss in the L3
Counts all requests that miss the L3 and the data is returned from local DRAM
Counts all requests that miss the L3 and the data is returned from remote DRAM
Counts all requests that miss the L3 and the modified data is transferred from remote cache
Counts all requests that miss the L3 and clean or shared data is transferred from remote cache
Counts all demand & prefetch RFOs that hit in the L3
Counts all demand & prefetch RFOs that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
Counts all demand & prefetch RFOs that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts all demand & prefetch RFOs that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
Counts all demand & prefetch RFOs that hit in the L3 and the snoops sent to sibling cores return clean response
Counts all demand & prefetch RFOs that miss the L3 and the data is returned from local or remote DRAM
Counts all demand & prefetch RFOs that miss in the L3
Counts all demand & prefetch RFOs that miss the L3 and the data is returned from local DRAM
Counts all demand & prefetch RFOs that miss the L3 and the data is returned from remote DRAM
Counts all demand & prefetch RFOs that miss the L3 and the modified data is transferred from remote cache
Counts all demand & prefetch RFOs that miss the L3 and clean or shared data is transferred from remote cache
Counts writebacks (modified to exclusive) that hit in the L3
Counts writebacks (modified to exclusive) that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
Counts writebacks (modified to exclusive) that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts writebacks (modified to exclusive) that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
Counts writebacks (modified to exclusive) that hit in the L3 and the snoops sent to sibling cores return clean response
Counts writebacks (modified to exclusive) that miss the L3 and the data is returned from local or remote DRAM
Counts writebacks (modified to exclusive) that miss in the L3
Counts writebacks (modified to exclusive) that miss the L3 and the data is returned from local DRAM
Counts writebacks (modified to exclusive) that miss the L3 and the data is returned from remote DRAM
Counts writebacks (modified to exclusive) that miss the L3 and the modified data is transferred from remote cache
Counts writebacks (modified to exclusive) that miss the L3 and clean or shared data is transferred from remote cache
Counts all demand code reads that hit in the L3
Counts all demand code reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
Counts all demand code reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts all demand code reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
Counts all demand code reads that hit in the L3 and the snoops sent to sibling cores return clean response
Counts all demand code reads that miss the L3 and the data is returned from local or remote dram
Counts all demand code reads that miss in the L3
Counts all demand code reads that miss the L3 and the data is returned from local dram
Counts all demand code reads that miss the L3 and the data is returned from remote dram
Counts all demand code reads that miss the L3 and the modified data is transferred from remote cache
Counts all demand code reads that miss the L3 and clean or shared data is transferred from remote cache
Counts demand data reads that hit in the L3
Counts demand data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
Counts demand data reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts demand data reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
Counts demand data reads that hit in the L3 and the snoops sent to sibling cores return clean response
Counts demand data reads that miss the L3 and the data is returned from local or remote dram
Counts demand data reads that miss in the L3
Counts demand data reads that miss the L3 and the data is returned from local dram
Counts demand data reads that miss the L3 and the data is returned from remote dram
Counts demand data reads that miss the L3 and the modified data is transferred from remote cache
Counts demand data reads that miss the L3 and clean or shared data is transferred from remote cache
Counts all demand data writes (RFOs) that hit in the L3
Counts all demand data writes (RFOs) that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
Counts all demand data writes (RFOs) that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts all demand data writes (RFOs) that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
Counts all demand data writes (RFOs) that hit in the L3 and the snoops sent to sibling cores return clean response
Counts all demand data writes (RFOs) that miss the L3 and the data is returned from local or remote dram
Counts all demand data writes (RFOs) that miss in the L3
Counts all demand data writes (RFOs) that miss the L3 and the data is returned from local dram
Counts all demand data writes (RFOs) that miss the L3 and the data is returned from remote dram
Counts all demand data writes (RFOs) that miss the L3 and the modified data is transferred from remote cache
Counts all demand data writes (RFOs) that miss the L3 and clean or shared data is transferred from remote cache
Counts any other requests that hit in the L3
Counts any other requests that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
Counts any other requests that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts any other requests that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
Counts any other requests that hit in the L3 and the snoops sent to sibling cores return clean response
Counts any other requests that miss the L3 and the data is returned from local or remote dram
Counts any other requests that miss in the L3
Counts any other requests that miss the L3 and the data is returned from local dram
Counts any other requests that miss the L3 and the data is returned from remote dram
Counts any other requests that miss the L3 and the modified data is transferred from remote cache
Counts any other requests that miss the L3 and clean or shared data is transferred from remote cache
Counts all prefetch (that bring data to LLC only) code reads that hit in the L3
Counts all prefetch (that bring data to LLC only) code reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
Counts all prefetch (that bring data to LLC only) code reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts all prefetch (that bring data to LLC only) code reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
Counts all prefetch (that bring data to LLC only) code reads that hit in the L3 and the snoops sent to sibling cores return clean response
Counts all prefetch (that bring data to LLC only) code reads that miss the L3 and the data is returned from local or remote dram
Counts all prefetch (that bring data to LLC only) code reads that miss in the L3
Counts all prefetch (that bring data to LLC only) code reads that miss the L3 and the data is returned from local dram
Counts all prefetch (that bring data to LLC only) code reads that miss the L3 and the data is returned from remote dram
Counts all prefetch (that bring data to LLC only) code reads that miss the L3 and the modified data is transferred from remote cache
Counts all prefetch (that bring data to LLC only) code reads that miss the L3 and clean or shared data is transferred from remote cache
Counts prefetch (that bring data to L2) data reads that hit in the L3
Counts prefetch (that bring data to L2) data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
Counts prefetch (that bring data to L2) data reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts prefetch (that bring data to L2) data reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
Counts prefetch (that bring data to L2) data reads that hit in the L3 and the snoops sent to sibling cores return clean response
Counts prefetch (that bring data to L2) data reads that miss the L3 and the data is returned from local or remote dram
Counts prefetch (that bring data to L2) data reads that miss in the L3
Counts prefetch (that bring data to L2) data reads that miss the L3 and the data is returned from local dram
Counts prefetch (that bring data to L2) data reads that miss the L3 and the data is returned from remote dram
Counts prefetch (that bring data to L2) data reads that miss the L3 and the modified data is transferred from remote cache
Counts prefetch (that bring data to L2) data reads that miss the L3 and clean or shared data is transferred from remote cache
Counts all prefetch (that bring data to L2) RFOs that hit in the L3
Counts all prefetch (that bring data to L2) RFOs that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
Counts all prefetch (that bring data to L2) RFOs that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts all prefetch (that bring data to L2) RFOs that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
Counts all prefetch (that bring data to L2) RFOs that hit in the L3 and the snoops sent to sibling cores return clean response
Counts all prefetch (that bring data to L2) RFOs that miss the L3 and the data is returned from local or remote dram
Counts all prefetch (that bring data to L2) RFOs that miss in the L3
Counts all prefetch (that bring data to L2) RFOs that miss the L3 and the data is returned from local dram
Counts all prefetch (that bring data to L2) RFOs that miss the L3 and the data is returned from remote dram
Counts all prefetch (that bring data to L2) RFOs that miss the L3 and the modified data is transferred from remote cache
Counts all prefetch (that bring data to L2) RFOs that miss the L3 and clean or shared data is transferred from remote cache
Counts prefetch (that bring data to LLC only) code reads that hit in the L3
Counts prefetch (that bring data to LLC only) code reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
Counts prefetch (that bring data to LLC only) code reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts prefetch (that bring data to LLC only) code reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
Counts prefetch (that bring data to LLC only) code reads that hit in the L3 and the snoops sent to sibling cores return clean response
Counts prefetch (that bring data to LLC only) code reads that miss the L3 and the data is returned from local or remote dram
Counts prefetch (that bring data to LLC only) code reads that miss in the L3
Counts prefetch (that bring data to LLC only) code reads that miss the L3 and the data is returned from local dram
Counts prefetch (that bring data to LLC only) code reads that miss the L3 and the data is returned from remote dram
Counts prefetch (that bring data to LLC only) code reads that miss the L3 and the modified data is transferred from remote cache
Counts prefetch (that bring data to LLC only) code reads that miss the L3 and clean or shared data is transferred from remote cache
Counts all prefetch (that bring data to LLC only) data reads that hit in the L3
Counts all prefetch (that bring data to LLC only) data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
Counts all prefetch (that bring data to LLC only) data reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts all prefetch (that bring data to LLC only) data reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
Counts all prefetch (that bring data to LLC only) data reads that hit in the L3 and the snoops sent to sibling cores return clean response
Counts all prefetch (that bring data to LLC only) data reads that miss the L3 and the data is returned from local or remote dram
Counts all prefetch (that bring data to LLC only) data reads that miss in the L3
Counts all prefetch (that bring data to LLC only) data reads that miss the L3 and the data is returned from local dram
Counts all prefetch (that bring data to LLC only) data reads that miss the L3 and the data is returned from remote dram
Counts all prefetch (that bring data to LLC only) data reads that miss the L3 and the modified data is transferred from remote cache
Counts all prefetch (that bring data to LLC only) data reads that miss the L3 and clean or shared data is transferred from remote cache
Counts all prefetch (that bring data to LLC only) RFOs that hit in the L3
Counts all prefetch (that bring data to LLC only) RFOs that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
Counts all prefetch (that bring data to LLC only) RFOs that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts all prefetch (that bring data to LLC only) RFOs that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
Counts all prefetch (that bring data to LLC only) RFOs that hit in the L3 and the snoops sent to sibling cores return clean response
Counts all prefetch (that bring data to LLC only) RFOs that miss the L3 and the data is returned from local or remote dram
Counts all prefetch (that bring data to LLC only) RFOs that miss in the L3
Counts all prefetch (that bring data to LLC only) RFOs that miss the L3 and the data is returned from local dram
Counts all prefetch (that bring data to LLC only) RFOs that miss the L3 and the data is returned from remote dram
Counts all prefetch (that bring data to LLC only) RFOs that miss the L3 and the modified data is transferred from remote cache
Counts all prefetch (that bring data to LLC only) RFOs that miss the L3 and clean or shared data is transferred from remote cache
Counts all locks that are either split across cache line boundaries or to uncacheable addresses that hit in the L3
Counts all locks that are either split across cache line boundaries or to uncacheable addresses that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
Counts all locks that are either split across cache line boundaries or to uncacheable addresses that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts all locks that are either split across cache line boundaries or to uncacheable addresses that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
Counts all locks that are either split across cache line boundaries or to uncacheable addresses that hit in the L3 and the snoops sent to sibling cores return clean response
Counts all locks that are either split across cache line boundaries or to uncacheable addresses that miss the L3 and the data is returned from local or remote dram
Counts all locks that are either split across cache line boundaries or to uncacheable addresses that miss in the L3
Counts all locks that are either split across cache line boundaries or to uncacheable addresses that miss the L3 and the data is returned from local dram
Counts all locks that are either split across cache line boundaries or to uncacheable addresses that miss the L3 and the data is returned from remote dram
Counts all locks that are either split across cache line boundaries or to uncacheable addresses that miss the L3 and the modified data is transferred from remote cache
Counts all locks that are either split across cache line boundaries or to uncacheable addresses that miss the L3 and clean or shared data is transferred from remote cache
Counts all non-temporal stores that hit in the L3
Counts all non-temporal stores that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
Counts all non-temporal stores that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts all non-temporal stores that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
Counts all non-temporal stores that hit in the L3 and the snoops sent to sibling cores return clean response
Counts all non-temporal stores that miss the L3 and the data is returned from local or remote dram
Counts all non-temporal stores that miss in the L3
Counts all non-temporal stores that miss the L3 and the data is returned from local dram
Counts all non-temporal stores that miss the L3 and the data is returned from remote dram
Counts all non-temporal stores that miss the L3 and the modified data is transferred from remote cache
Counts all non-temporal stores that miss the L3 and clean or shared data is transferred from remote cache
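The entries above follow a fixed template: an offcore request type (demand data read, RFO, code read, L2/LLC prefetch, writeback, lock, non-temporal store, or other) crossed with an L3 hit/miss result and, for hits, the snoop outcome on sibling cores or, for misses, the data source (local/remote DRAM or a remote cache). On Linux these combinations are typically exposed as composed perf event names, for example perf stat -e offcore_response.demand_data_rd.l3_hit.hitm_other_core (an assumed spelling; the exact event names vary by CPU model and kernel version).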
Number of times any microcode assist is invoked by HW upon uop writeback.
This is a non-precise version (that is, does not use PEBS) of the event that counts the number of transitions from AVX-256 to legacy SSE when penalty is applicable.
This is a non-precise version (that is, does not use PEBS) of the event that counts the number of transitions from legacy SSE to AVX-256 when penalty is applicable.
Number of DTLB page walker hits in the L1+FB
Number of DTLB page walker hits in the L2
Number of DTLB page walker hits in the L3 + XSNP
Number of DTLB page walker hits in Memory
Number of ITLB page walker hits in the L1+FB
Number of ITLB page walker hits in the L2
Number of ITLB page walker hits in the L3 + XSNP
This event counts resource-related stall cycles. Stalls can occur because *any* u-arch structure is full (LB, SB, RS, ROB, BOB, LM, Physical Register Reclaim Table (PRRT), or Physical History Table (PHT) slots), because *any* u-arch structure is empty (such as the INT/SIMD free lists), or because of operations on the FPU control word (FPCW), MXCSR, and others. This counts cycles in which the pipeline backend blocked uop delivery from the front end.
This event counts ROB full stall cycles. This counts cycles in which the pipeline backend blocked uop delivery from the front end.
This event counts stall cycles caused by absence of eligible entries in the reservation station (RS). This may result from RS overflow, or from RS deallocation because of the RS array Write Port allocation scheme (each RS entry has two write ports instead of four; as a result, empty entries cannot always be reused, even though the RS is not actually full). This counts cycles in which the pipeline backend blocked uop delivery from the front end.
This event counts stall cycles caused by store buffer (SB) overflow (excluding draining from synchronization). This counts cycles in which the pipeline backend blocked uop delivery from the front end.
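These stall-cycle events are normally read through the OS PMU interface rather than programmed directly. A minimal counting sketch on Linux using perf_event_open, assuming the kernel supports the generic backend-stall alias (PERF_COUNT_HW_STALLED_CYCLES_BACKEND) and maps it to a model-specific resource-stall event on this CPU:

    /* Minimal sketch: count backend stall cycles around a workload.
     * Assumptions: Linux, perf_event_open available, and the kernel
     * maps the generic backend-stall event on this CPU model. */
    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_STALLED_CYCLES_BACKEND;
        attr.disabled = 1;
        attr.exclude_kernel = 1;

        int fd = (int)syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) { perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        /* ... workload under test ... */
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        uint64_t count = 0;
        read(fd, &count, sizeof(count));   /* single counter, 8 bytes */
        printf("backend stall cycles: %llu\n", (unsigned long long)count);
        close(fd);
        return 0;
    }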
This event counts cases of saving new LBR records by hardware. This assumes proper enabling of LBRs and takes into account LBR filtering done by the LBR_SELECT register.
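On processors with LBRs, Linux perf uses them for branch-stack sampling: perf record -b captures the branch records, and a filter such as perf record -j any_call,u is translated by the kernel into the LBR_SELECT filtering mentioned above (an illustrative usage note; option behavior depends on kernel and CPU support).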
This event counts cycles during which the reservation station (RS) is empty for the thread. Note: in single-threaded mode, the inactive thread should report 0. RS-empty cycles are usually caused by severely costly branch mispredictions or by allocator/front-end issues.
Counts the end of periods in which the Reservation Station (RS) was empty. This can be useful for precisely locating front-end latency-bound issues.
Number of times RTM abort was triggered
Number of times an RTM abort was attributed to a Memory condition (See TSX_Memory event for additional details)
Number of times the TSX watchdog signaled an RTM abort
Number of times a disallowed operation caused an RTM abort
Number of times an RTM region caused a fault
Number of times an RTM region aborted for reasons other than the abort conditions in subevents 3-6
Number of times RTM abort was triggered (PEBS)
Number of times RTM commit succeeded
Number of times we entered an RTM region; does not count nested transactions
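A minimal sketch of an RTM region using the compiler intrinsics (assuming a TSX-capable CPU and RTM-enabled compilation, e.g. gcc -mrtm). A successful _xend here is what the commit event above counts, and nested _xbegin/_xend pairs are flattened, so only the outermost entry counts:

    #include <immintrin.h>
    #include <stdio.h>

    static long shared_counter;

    int main(void) {
        unsigned status = _xbegin();          /* outermost entry counts */
        if (status == _XBEGIN_STARTED) {
            shared_counter++;                 /* transactional write */
            _xend();                          /* commit the region */
            puts("committed");
        } else {
            /* Abort path: status carries _XABORT_* cause bits; real
             * code would fall back to a lock-based path here. */
            puts("aborted, take fallback path");
        }
        return 0;
    }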
This event counts the number of DTLB flush attempts of the thread-specific entries.
This event counts the number of any STLB flush attempts (such as entire, VPID, PCID, InvPage, CR3 write, and so on).
Unfriendly TSX abort triggered by a flowmarker
Unfriendly TSX abort triggered by a vzeroupper instruction
Unfriendly TSX abort triggered by a nest count that is too deep
RTM region detected inside HLE
HLE region detected inside HLE
Number of times a TSX Abort was triggered due to an evicted line caused by a transaction overflow
Number of times a TSX line had a cache conflict
Number of times a TSX Abort was triggered due to release/commit but data and address mismatch
Number of times a TSX Abort was triggered due to commit but Lock Buffer not empty
Number of times a TSX Abort was triggered due to attempting an unsupported alignment from Lock Buffer
Number of times a TSX Abort was triggered due to a non-release/commit store to lock
Number of times we could not allocate Lock Buffer
Bounce Control
Uncore Clocks
Counter 0 Occupancy
FaST wire asserted
Cache Lookups; Any Request
Cache Lookups; Data Read Request
Cache Lookups; Lookups that Match NID
Cache Lookups; Any Read Request
Cache Lookups; External Snoop Request
Cache Lookups; Write Requests
Lines Victimized; Lines in E state
Lines Victimized
Lines Victimized; Lines in S State
Lines Victimized
Lines Victimized; Lines in M state
Lines Victimized; Victimized Lines that Match NID
Cbo Misc; DRd hitting non-M with raw CV=0
Cbo Misc; Clean Victim with raw CV=0
Cbo Misc; RFO HitS
Cbo Misc; Silent Snoop Eviction
Cbo Misc
Cbo Misc; Write Combining Aliasing
LRU Queue; LRU Age 0
LRU Queue; LRU Age 1
LRU Queue; LRU Age 2
LRU Queue; LRU Age 3
LRU Queue; LRU Bits Decremented
LRU Queue; Non-0 Aged Victim
AD Ring In Use; All
AD Ring In Use; Down
AD Ring In Use; Down and Even
AD Ring In Use; Down and Odd
AD Ring In Use; Up
AD Ring In Use; Up and Even
AD Ring In Use; Up and Odd
AK Ring In Use; All
AK Ring In Use; Down
AK Ring In Use; Down and Even
AK Ring In Use; Down and Odd
AK Ring In Use; Up
AK Ring In Use; Up and Even
AK Ring In Use; Up and Odd
BL Ring in Use; Down
BL Ring in Use; Down
BL Ring in Use; Down and Even
BL Ring in Use; Down and Odd
BL Ring in Use; Up
BL Ring in Use; Up and Even
BL Ring in Use; Up and Odd
Number of LLC responses that bounced on the Ring; AD
Number of LLC responses that bounced on the Ring; AK
Number of LLC responses that bounced on the Ring; BL
Number of LLC responses that bounced on the Ring; Snoops of processor's cache
BL Ring in Use; Any
BL Ring in Use; Any
BL Ring in Use; Down
BL Ring in Use; Any
AD
AK
BL
IV
Number of cycles the Cbo is actively throttling traffic onto the Ring in order to limit bounce traffic.
Ingress Arbiter Blocking Cycles; IRQ
Ingress Arbiter Blocking Cycles; IPQ
Ingress Arbiter Blocking Cycles; ISMQ_BID
Ingress Arbiter Blocking Cycles; PRQ
Ingress Allocations; IPQ
Ingress Allocations; IRQ
Ingress Allocations; IRQ Rejected
Ingress Allocations; PRQ
Ingress Allocations; PRQ
Ingress Internal Starvation Cycles; IPQ
Ingress Internal Starvation Cycles; IRQ
Ingress Internal Starvation Cycles; ISMQ
Ingress Internal Starvation Cycles; PRQ
Probe Queue Retries; Address Conflict
Probe Queue Retries; Any Reject
Probe Queue Retries; No Egress Credits
Probe Queue Retries; No QPI Credits
Probe Queue Retries; No AD Sbo Credits
Probe Queue Retries; Target Node Filter
Ingress Request Queue Rejects; Address Conflict
Ingress Request Queue Rejects; Any Reject
Ingress Request Queue Rejects; No Egress Credits
Ingress Request Queue Rejects; No IIO Credits
Ingress Request Queue Rejects
Ingress Request Queue Rejects; No QPI Credits
Ingress Request Queue Rejects; No RTIDs
Ingress Request Queue Rejects; No AD Sbo Credits
Ingress Request Queue Rejects; No BL Sbo Credits
Ingress Request Queue Rejects; Target Node Filter
ISMQ Retries; Any Reject
ISMQ Retries; No Egress Credits
ISMQ Retries; No IIO Credits
ISMQ Retries
ISMQ Retries; No QPI Credits
ISMQ Retries; No RTIDs
ISMQ Retries
ISMQ Request Queue Rejects; No AD Sbo Credits
ISMQ Request Queue Rejects; No BL Sbo Credits
ISMQ Request Queue Rejects; Target Node Filter
Ingress Occupancy; IPQ
Ingress Occupancy; IRQ
Ingress Occupancy; IRQ Rejected
Ingress Occupancy; PRQ Rejects
SBo Credits Acquired; For AD Ring
SBo Credits Acquired; For BL Ring
SBo Credits Occupancy; For AD Ring
SBo Credits Occupancy; For BL Ring
TOR Inserts; All
TOR Inserts; Evictions
TOR Inserts; Local Memory
TOR Inserts; Local Memory - Opcode Matched
TOR Inserts; Misses to Local Memory
TOR Inserts; Misses to Local Memory - Opcode Matched
TOR Inserts; Miss Opcode Match
TOR Inserts; Misses to Remote Memory
TOR Inserts; Misses to Remote Memory - Opcode Matched
TOR Inserts; NID Matched
TOR Inserts; NID Matched Evictions
TOR Inserts; NID Matched Miss All
TOR Inserts; NID and Opcode Matched Miss
TOR Inserts; NID and Opcode Matched
TOR Inserts; NID Matched Writebacks
TOR Inserts; Opcode Match
TOR Inserts; Remote Memory
TOR Inserts; Remote Memory - Opcode Matched
TOR Inserts; Writebacks
TOR Occupancy; Any
TOR Occupancy; Evictions
TOR Occupancy
TOR Occupancy; Local Memory - Opcode Matched
TOR Occupancy; Miss All
TOR Occupancy
TOR Occupancy; Misses to Local Memory - Opcode Matched
TOR Occupancy; Miss Opcode Match
TOR Occupancy
TOR Occupancy; Misses to Remote Memory - Opcode Matched
TOR Occupancy; NID Matched
TOR Occupancy; NID Matched Evictions
TOR Occupancy; NID Matched
TOR Occupancy; NID and Opcode Matched Miss
TOR Occupancy; NID and Opcode Matched
TOR Occupancy; NID Matched Writebacks
TOR Occupancy; Opcode Match
TOR Occupancy
TOR Occupancy; Remote Memory - Opcode Matched
TOR Occupancy; Writebacks
Onto AD Ring
Onto AK Ring
Onto BL Ring
Egress Allocations; AD - Cachebo
Egress Allocations; AD - Corebo
Egress Allocations; AK - Cachebo
Egress Allocations; AK - Corebo
Egress Allocations; BL - Cachebo
Egress Allocations; BL - Corebo
Egress Allocations; IV - Cachebo
Injection Starvation; Onto AD Ring (to core)
Injection Starvation; Onto AK Ring
Injection Starvation; Onto BL Ring
Injection Starvation; Onto IV Ring
QPI Address/Opcode Match; AD Opcodes
QPI Address/Opcode Match; Address
QPI Address/Opcode Match; AK Opcodes
QPI Address/Opcode Match; BL Opcodes
QPI Address/Opcode Match; Address & Opcode Match
QPI Address/Opcode Match; Opcode
BT Cycles Not Empty
BT to HT Not Issued; Incoming Data Hazard
BT to HT Not Issued; Incoming Snoop Hazard
BT to HT Not Issued; Incoming Data Hazard
BT to HT Not Issued; Incoming Data Hazard
HA to iMC Bypass; Not Taken
HA to iMC Bypass; Taken
uclks
Direct2Core Messages Sent
Cycles when Direct2Core was Disabled
Number of Reads that had Direct2Core Overridden
Directory Lat Opt Return
Directory Lookups; Snoop Not Needed
Directory Lookups; Snoop Needed
Directory Updates; Any Directory Update
Directory Updates; Directory Clear
Directory Updates; Directory Set
Counts Number of Hits in HitMe Cache; op is AckCnfltWbI
Counts Number of Hits in HitMe Cache; All Requests
Counts Number of Hits in HitMe Cache; Allocations
Counts Number of Hits in HitMe Cache; Allocations
Counts Number of Hits in HitMe Cache; HOM Requests
Counts Number of Hits in HitMe Cache; Invalidations
Counts Number of Hits in HitMe Cache; op is RdCode, RdData, RdDataMigratory, RdInvOwn, RdCur or InvItoE
Counts Number of Hits in HitMe Cache; op is RspI, RspIWb, RspS, RspSWb, RspCnflt or RspCnfltWbI
Counts Number of Hits in HitMe Cache; op is RspIFwd or RspIFwdWb for a local request
Counts Number of Hits in HitMe Cache; op is RspIFwd or RspIFwdWb for a remote request
Counts Number of Hits in HitMe Cache; op is RspSFwd or RspSFwdWb
Counts Number of Hits in HitMe Cache; op is WbMtoE or WbMtoS
Counts Number of Hits in HitMe Cache; op is WbMtoI
Accumulates Number of PV bits set on HitMe Cache Hits; op is AckCnfltWbI
Accumulates Number of PV bits set on HitMe Cache Hits; All Requests
Accumulates Number of PV bits set on HitMe Cache Hits; HOM Requests
Accumulates Number of PV bits set on HitMe Cache Hits; op is RdCode, RdData, RdDataMigratory, RdInvOwn, RdCur or InvItoE
Accumulates Number of PV bits set on HitMe Cache Hits; op is RspI, RspIWb, RspS, RspSWb, RspCnflt or RspCnfltWbI
Accumulates Number of PV bits set on HitMe Cache Hits; op is RspIFwd or RspIFwdWb for a local request
Accumulates Number of PV bits set on HitMe Cache Hits; op is RspIFwd or RspIFwdWb for a remote request
Accumulates Number of PV bits set on HitMe Cache Hits; op is RspSFwd or RspSFwdWb
Accumulates Number of PV bits set on HitMe Cache Hits; op is WbMtoE or WbMtoS
Accumulates Number of PV bits set on HitMe Cache Hits; op is WbMtoI
Counts Number of times HitMe Cache is accessed; op is AckCnfltWbI
Counts Number of times HitMe Cache is accessed; All Requests
Counts Number of times HitMe Cache is accessed; Allocations
Counts Number of times HitMe Cache is accessed; HOM Requests
Counts Number of times HitMe Cache is accessed; Invalidations
Counts Number of times HitMe Cache is accessed; op is RdCode, RdData, RdDataMigratory, RdInvOwn, RdCur or InvItoE
Counts Number of times HitMe Cache is accessed; op is RspI, RspIWb, RspS, RspSWb, RspCnflt or RspCnfltWbI
Counts Number of times HitMe Cache is accessed; op is RspIFwd or RspIFwdWb for a local request
Counts Number of times HitMe Cache is accessed; op is RspIFwd or RspIFwdWb for a remote request
Counts Number of times HitMe Cache is accessed; op is RspSFwd or RspSFwdWb
Counts Number of times HitMe Cache is accessed; op is WbMtoE or WbMtoS
Counts Number of times HitMe Cache is accessed; op is WbMtoI
Cycles without QPI Ingress Credits; AD to QPI Link 0
Cycles without QPI Ingress Credits; AD to QPI Link 1
Cycles without QPI Ingress Credits; BL to QPI Link 0
Cycles without QPI Ingress Credits; BL to QPI Link 0
Cycles without QPI Ingress Credits; BL to QPI Link 1
Cycles without QPI Ingress Credits; BL to QPI Link 1
HA to iMC Normal Priority Reads Issued; Normal Priority
Retry Events
HA to iMC Full Line Writes Issued; All Writes
HA to iMC Full Line Writes Issued; Full Line Non-ISOCH
HA to iMC Full Line Writes Issued; ISOCH Full Line
HA to iMC Full Line Writes Issued; Partial Non-ISOCH
HA to iMC Full Line Writes Issued; ISOCH Partial
IOT Backpressure
IOT Backpressure
IOT Common Trigger Sequencer - Lo
IOT Common Trigger Sequencer - Lo
IOT Common Trigger Sequencer - Hi
IOT Common Trigger Sequencer - Hi
IOT Common Trigger Sequencer - Lo
IOT Common Trigger Sequencer - Lo
OSB Snoop Broadcast; Cancelled
OSB Snoop Broadcast; Local InvItoE
OSB Snoop Broadcast; Local Reads
OSB Snoop Broadcast; Reads Local - Useful
OSB Snoop Broadcast; Remote
OSB Snoop Broadcast; Remote - Useful
OSB Early Data Return; All
OSB Early Data Return; Reads to Local I
OSB Early Data Return; Reads to Local S
OSB Early Data Return; Reads to Remote I
OSB Early Data Return; Reads to Remote S
Read and Write Requests; Local InvItoEs
Read and Write Requests; Remote InvItoEs
Read and Write Requests; Reads
Read and Write Requests; Local Reads
Read and Write Requests; Remote Reads
Read and Write Requests; Writes
Read and Write Requests; Local Writes
Read and Write Requests; Remote Writes
HA AD Ring in Use; Counterclockwise
HA AD Ring in Use; Counterclockwise and Even
HA AD Ring in Use; Counterclockwise and Odd
HA AD Ring in Use; Clockwise
HA AD Ring in Use; Clockwise and Even
HA AD Ring in Use; Clockwise and Odd
HA AK Ring in Use; All
HA AK Ring in Use; Counterclockwise
HA AK Ring in Use; Counterclockwise and Even
HA AK Ring in Use; Counterclockwise and Odd
HA AK Ring in Use; Clockwise
HA AK Ring in Use; Clockwise and Even
HA AK Ring in Use; Clockwise and Odd
HA BL Ring in Use; All
HA BL Ring in Use; Counterclockwise
HA BL Ring in Use; Counterclockwise and Even
HA BL Ring in Use; Counterclockwise and Odd
HA BL Ring in Use; Clockwise
HA BL Ring in Use; Clockwise and Even
HA BL Ring in Use; Clockwise and Odd
iMC RPQ Credits Empty - Regular; Channel 0
iMC RPQ Credits Empty - Regular; Channel 1
iMC RPQ Credits Empty - Regular; Channel 2
iMC RPQ Credits Empty - Regular; Channel 3
iMC RPQ Credits Empty - Special; Channel 0
iMC RPQ Credits Empty - Special; Channel 1
iMC RPQ Credits Empty - Special; Channel 2
iMC RPQ Credits Empty - Special; Channel 3
SBo0 Credits Acquired; For AD Ring
SBo0 Credits Acquired; For BL Ring
SBo0 Credits Occupancy; For AD Ring
SBo0 Credits Occupancy; For BL Ring
SBo1 Credits Acquired; For AD Ring
SBo1 Credits Acquired; For BL Ring
SBo1 Credits Occupancy; For AD Ring
SBo1 Credits Occupancy; For BL Ring
Data beat the Snoop Responses; Local Requests
Data beat the Snoop Responses; Remote Requests
Cycles with Snoops Outstanding; All Requests
Cycles with Snoops Outstanding; Local Requests
Cycles with Snoops Outstanding; Remote Requests
Tracker Snoops Outstanding Accumulator; Local Requests
Tracker Snoops Outstanding Accumulator; Remote Requests
Snoop Responses Received; RSPCNFLCT*
Snoop Responses Received; RspI
Snoop Responses Received; RspIFwd
Snoop Responses Received; RspS
Snoop Responses Received; RspSFwd
Snoop Responses Received; Rsp*Fwd*WB
Snoop Responses Received; Rsp*WB
Snoop Responses Received Local; Other
Snoop Responses Received Local; RspCnflct
Snoop Responses Received Local; RspI
Snoop Responses Received Local; RspIFwd
Snoop Responses Received Local; RspS
Snoop Responses Received Local; RspSFwd
Snoop Responses Received Local; Rsp*FWD*WB
Snoop Responses Received Local; Rsp*WB
Stall on No Sbo Credits; For SBo0, AD Ring
Stall on No Sbo Credits; For SBo0, BL Ring
Stall on No Sbo Credits; For SBo1, AD Ring
Stall on No Sbo Credits; For SBo1, BL Ring
HA Requests to a TAD Region - Group 0; TAD Region 0
HA Requests to a TAD Region - Group 0; TAD Region 1
HA Requests to a TAD Region - Group 0; TAD Region 2
HA Requests to a TAD Region - Group 0; TAD Region 3
HA Requests to a TAD Region - Group 0; TAD Region 4
HA Requests to a TAD Region - Group 0; TAD Region 5
HA Requests to a TAD Region - Group 0; TAD Region 6
HA Requests to a TAD Region - Group 0; TAD Region 7
HA Requests to a TAD Region - Group 1; TAD Region 10
HA Requests to a TAD Region - Group 1; TAD Region 11
HA Requests to a TAD Region - Group 1; TAD Region 8
HA Requests to a TAD Region - Group 1; TAD Region 9
Tracker Cycles Full; Cycles Completely Used
Tracker Cycles Full; Cycles GP Completely Used
Tracker Cycles Not Empty; All Requests
Tracker Cycles Not Empty; Local Requests
Tracker Cycles Not Empty; Remote Requests
Tracker Occupancy Accumulator; Local InvItoE Requests
Tracker Occupancy Accumulator; Remote InvItoE Requests
Tracker Occupancy Accumulator; Local Read Requests
Tracker Occupancy Accumulator; Remote Read Requests
Tracker Occupancy Accumulator; Local Write Requests
Tracker Occupancy Accumulator; Remote Write Requests
Data Pending Occupancy Accumulator; Local Requests
Data Pending Occupancy Accumulator; Remote Requests
Outbound NDR Ring Transactions; Non-data Responses
AD Egress Full; All
AD Egress Full; Scheduler 0
AD Egress Full; Scheduler 1
AD Egress Not Empty; All
AD Egress Not Empty; Scheduler 0
AD Egress Not Empty; Scheduler 1
AD Egress Allocations; All
AD Egress Allocations; Scheduler 0
AD Egress Allocations; Scheduler 1
AK Egress Full; All
AK Egress Full; Scheduler 0
AK Egress Full; Scheduler 1
AK Egress Not Empty; All
AK Egress Not Empty; Scheduler 0
AK Egress Not Empty; Scheduler 1
AK Egress Allocations; All
AK Egress Allocations; Scheduler 0
AK Egress Allocations; Scheduler 1
Outbound DRS Ring Transactions to Cache; Data to Cache
Outbound DRS Ring Transactions to Cache; Data to Core
Outbound DRS Ring Transactions to Cache; Data to QPI
BL Egress Full; All
BL Egress Full; Scheduler 0
BL Egress Full; Scheduler 1
BL Egress Not Empty; All
BL Egress Not Empty; Scheduler 0
BL Egress Not Empty; Scheduler 1
BL Egress Allocations; All
BL Egress Allocations; Scheduler 0
BL Egress Allocations; Scheduler 1
Injection Starvation; For AK Ring
Injection Starvation; For BL Ring
HA iMC CHN0 WPQ Credits Empty - Regular; Channel 0
HA iMC CHN0 WPQ Credits Empty - Regular; Channel 1
HA iMC CHN0 WPQ Credits Empty - Regular; Channel 2
HA iMC CHN0 WPQ Credits Empty - Regular; Channel 3
HA iMC CHN0 WPQ Credits Empty - Special; Channel 0
HA iMC CHN0 WPQ Credits Empty - Special; Channel 1
HA iMC CHN0 WPQ Credits Empty - Special; Channel 2
HA iMC CHN0 WPQ Credits Empty - Special; Channel 3
Total Write Cache Occupancy; Any Source
Total Write Cache Occupancy; Select Source
Clocks in the IRP
Coherent Ops; CLFlush
Coherent Ops; CRd
Coherent Ops; DRd
Coherent Ops; PCIDCAHint
Coherent Ops; PCIRdCur
Coherent Ops; PCIItoM
Coherent Ops; RFO
Coherent Ops; WbMtoI
Misc Events - Set 0; Cache Inserts of Atomic Transactions as Secondary
Misc Events - Set 0; Cache Inserts of Read Transactions as Secondary
Misc Events - Set 0; Cache Inserts of Write Transactions as Secondary
Misc Events - Set 0; Fastpath Rejects
Misc Events - Set 0; Fastpath Requests
Misc Events - Set 0; Fastpath Transfers From Primary to Secondary
Misc Events - Set 0; Prefetch Ack Hints From Primary to Secondary
Misc Events - Set 0; Prefetch TimeOut
Misc Events - Set 1; Data Throttled
Misc Events - Set 1
Misc Events - Set 1; Received Invalid
Misc Events - Set 1; Received Valid
Misc Events - Set 1; Slow Transfer of E Line
Misc Events - Set 1; Slow Transfer of I Line
Misc Events - Set 1; Slow Transfer of M Line
Misc Events - Set 1; Slow Transfer of S Line
AK Ingress Occupancy
tbd
BL Ingress Occupancy - DRS
tbd
tbd
BL Ingress Occupancy - NCB
tbd
tbd
BL Ingress Occupancy - NCS
tbd
Snoop Responses; Hit E or S
Snoop Responses; Hit I
Snoop Responses; Hit M
Snoop Responses; Miss
Snoop Responses; SnpCode
Snoop Responses; SnpData
Snoop Responses; SnpInv
Inbound Transaction Count; Atomic
Inbound Transaction Count; Select Source
Inbound Transaction Count; Other
Inbound Transaction Count; Read Prefetches
Inbound Transaction Count; Reads
Inbound Transaction Count; Writes
Inbound Transaction Count; Write Prefetches
No AD Egress Credit Stalls
No BL Egress Credit Stalls
Outbound Read Requests
Outbound Read Requests
Outbound Request Queue Occupancy
DRAM Activate Count; Activate due to Write
DRAM Activate Count; Activate due to Read
DRAM Activate Count; Activate due to Write
ACT command issued by 2 cycle bypass
CAS command issued by 2 cycle bypass
PRE command issued by 2 cycle bypass
DRAM RD_CAS and WR_CAS Commands; All DRAM WR_CAS (w/ and w/out auto-pre)
DRAM RD_CAS and WR_CAS Commands; All DRAM Reads (RD_CAS + Underfills)
DRAM RD_CAS and WR_CAS Commands; All DRAM RD_CAS (w/ and w/out auto-pre)
DRAM RD_CAS and WR_CAS Commands; Read CAS issued in RMM
DRAM RD_CAS and WR_CAS Commands; Underfill Read Issued
DRAM RD_CAS and WR_CAS Commands; Read CAS issued in WMM
DRAM RD_CAS and WR_CAS Commands; All DRAM WR_CAS (both Modes)
DRAM RD_CAS and WR_CAS Commands; DRAM WR_CAS (w/ and w/out auto-pre) in Read Major Mode
DRAM RD_CAS and WR_CAS Commands; DRAM WR_CAS (w/ and w/out auto-pre) in Write Major Mode
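Because each RD_CAS or WR_CAS command transfers one 64-byte cache line, these counts give a simple bandwidth estimate: DRAM bandwidth is approximately (read CAS count + write CAS count) × 64 bytes divided by elapsed time, summed over all channels.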
DRAM Clockticks. This is an alias of the UNC_M_DCLOCKTICKS event.
DRAM Clockticks
DRAM Precharge All Commands
Number of DRAM Refreshes Issued
Number of DRAM Refreshes Issued
ECC Correctable Errors
Cycles in a Major Mode; Isoch Major Mode
Cycles in a Major Mode; Partial Major Mode
Cycles in a Major Mode; Read Major Mode
Cycles in a Major Mode; Write Major Mode
Channel DLLOFF Cycles
Channel PPD Cycles
CKE_ON_CYCLES by Rank; DIMM ID
CKE_ON_CYCLES by Rank; DIMM ID
CKE_ON_CYCLES by Rank; DIMM ID
CKE_ON_CYCLES by Rank; DIMM ID
CKE_ON_CYCLES by Rank; DIMM ID
CKE_ON_CYCLES by Rank; DIMM ID
CKE_ON_CYCLES by Rank; DIMM ID
CKE_ON_CYCLES by Rank; DIMM ID
Critical Throttle Cycles
tbd
Clock-Enabled Self-Refresh
Throttle Cycles for Rank 0; DIMM ID
Throttle Cycles for Rank 0; DIMM ID
Throttle Cycles for Rank 0; DIMM ID
Throttle Cycles for Rank 0; DIMM ID
Throttle Cycles for Rank 0; DIMM ID
Throttle Cycles for Rank 0; DIMM ID
Throttle Cycles for Rank 0; DIMM ID
Throttle Cycles for Rank 0; DIMM ID
Read Preemption Count; Read over Read Preemption
Read Preemption Count; Read over Write Preemption
DRAM Precharge commands; Precharge due to bypass
DRAM Precharge commands; Precharge due to timer expiration
DRAM Precharge commands; Precharges due to page miss
DRAM Precharge commands; Precharge due to read
DRAM Precharge commands; Precharge due to write
Read CAS issued with HIGH priority
Read CAS issued with LOW priority
Read CAS issued with MEDIUM priority
Read CAS issued with PANIC NON ISOCH priority (starved)
RD_CAS Access to Rank 0; All Banks
RD_CAS Access to Rank 0; Bank 0
RD_CAS Access to Rank 0; Bank 1
RD_CAS Access to Rank 0; Bank 10
RD_CAS Access to Rank 0; Bank 11
RD_CAS Access to Rank 0; Bank 12
RD_CAS Access to Rank 0; Bank 13
RD_CAS Access to Rank 0; Bank 14
RD_CAS Access to Rank 0; Bank 15
RD_CAS Access to Rank 0; Bank 2
RD_CAS Access to Rank 0; Bank 3
RD_CAS Access to Rank 0; Bank 4
RD_CAS Access to Rank 0; Bank 5
RD_CAS Access to Rank 0; Bank 6
RD_CAS Access to Rank 0; Bank 7
RD_CAS Access to Rank 0; Bank 8
RD_CAS Access to Rank 0; Bank 9
RD_CAS Access to Rank 0; Bank Group 0 (Banks 0-3)
RD_CAS Access to Rank 0; Bank Group 1 (Banks 4-7)
RD_CAS Access to Rank 0; Bank Group 2 (Banks 8-11)
RD_CAS Access to Rank 0; Bank Group 3 (Banks 12-15)
RD_CAS Access to Rank 1; All Banks
RD_CAS Access to Rank 1; Bank 0
RD_CAS Access to Rank 1; Bank 1
RD_CAS Access to Rank 1; Bank 10
RD_CAS Access to Rank 1; Bank 11
RD_CAS Access to Rank 1; Bank 12
RD_CAS Access to Rank 1; Bank 13
RD_CAS Access to Rank 1; Bank 14
RD_CAS Access to Rank 1; Bank 15
RD_CAS Access to Rank 1; Bank 2
RD_CAS Access to Rank 1; Bank 3
RD_CAS Access to Rank 1; Bank 4
RD_CAS Access to Rank 1; Bank 5
RD_CAS Access to Rank 1; Bank 6
RD_CAS Access to Rank 1; Bank 7
RD_CAS Access to Rank 1; Bank 8
RD_CAS Access to Rank 1; Bank 9
RD_CAS Access to Rank 1; Bank Group 0 (Banks 0-3)
RD_CAS Access to Rank 1; Bank Group 1 (Banks 4-7)
RD_CAS Access to Rank 1; Bank Group 2 (Banks 8-11)
RD_CAS Access to Rank 1; Bank Group 3 (Banks 12-15)
RD_CAS Access to Rank 2; Bank 0
RD_CAS Access to Rank 4; All Banks
RD_CAS Access to Rank 4; Bank 0
RD_CAS Access to Rank 4; Bank 1
RD_CAS Access to Rank 4; Bank 10
RD_CAS Access to Rank 4; Bank 11
RD_CAS Access to Rank 4; Bank 12
RD_CAS Access to Rank 4; Bank 13
RD_CAS Access to Rank 4; Bank 14
RD_CAS Access to Rank 4; Bank 15
RD_CAS Access to Rank 4; Bank 2
RD_CAS Access to Rank 4; Bank 3
RD_CAS Access to Rank 4; Bank 4
RD_CAS Access to Rank 4; Bank 5
RD_CAS Access to Rank 4; Bank 6
RD_CAS Access to Rank 4; Bank 7
RD_CAS Access to Rank 4; Bank 8
RD_CAS Access to Rank 4; Bank 9
RD_CAS Access to Rank 4; Bank Group 0 (Banks 0-3)
RD_CAS Access to Rank 4; Bank Group 1 (Banks 4-7)
RD_CAS Access to Rank 4; Bank Group 2 (Banks 8-11)
RD_CAS Access to Rank 4; Bank Group 3 (Banks 12-15)
RD_CAS Access to Rank 5; All Banks
RD_CAS Access to Rank 5; Bank 0
RD_CAS Access to Rank 5; Bank 1
RD_CAS Access to Rank 5; Bank 10
RD_CAS Access to Rank 5; Bank 11
RD_CAS Access to Rank 5; Bank 12
RD_CAS Access to Rank 5; Bank 13
RD_CAS Access to Rank 5; Bank 14
RD_CAS Access to Rank 5; Bank 15
RD_CAS Access to Rank 5; Bank 2
RD_CAS Access to Rank 5; Bank 3
RD_CAS Access to Rank 5; Bank 4
RD_CAS Access to Rank 5; Bank 5
RD_CAS Access to Rank 5; Bank 6
RD_CAS Access to Rank 5; Bank 7
RD_CAS Access to Rank 5; Bank 8
RD_CAS Access to Rank 5; Bank 9
RD_CAS Access to Rank 5; Bank Group 0 (Banks 0-3)
RD_CAS Access to Rank 5; Bank Group 1 (Banks 4-7)
RD_CAS Access to Rank 5; Bank Group 2 (Banks 8-11)
RD_CAS Access to Rank 5; Bank Group 3 (Banks 12-15)
RD_CAS Access to Rank 6; All Banks
RD_CAS Access to Rank 6; Bank 0
RD_CAS Access to Rank 6; Bank 1
RD_CAS Access to Rank 6; Bank 10
RD_CAS Access to Rank 6; Bank 11
RD_CAS Access to Rank 6; Bank 12
RD_CAS Access to Rank 6; Bank 13
RD_CAS Access to Rank 6; Bank 14
RD_CAS Access to Rank 6; Bank 15
RD_CAS Access to Rank 6; Bank 2
RD_CAS Access to Rank 6; Bank 3
RD_CAS Access to Rank 6; Bank 4
RD_CAS Access to Rank 6; Bank 5
RD_CAS Access to Rank 6; Bank 6
RD_CAS Access to Rank 6; Bank 7
RD_CAS Access to Rank 6; Bank 8
RD_CAS Access to Rank 6; Bank 9
RD_CAS Access to Rank 6; Bank Group 0 (Banks 0-3)
RD_CAS Access to Rank 6; Bank Group 1 (Banks 4-7)
RD_CAS Access to Rank 6; Bank Group 2 (Banks 8-11)
RD_CAS Access to Rank 6; Bank Group 3 (Banks 12-15)
RD_CAS Access to Rank 7; All Banks
RD_CAS Access to Rank 7; Bank 0
RD_CAS Access to Rank 7; Bank 1
RD_CAS Access to Rank 7; Bank 10
RD_CAS Access to Rank 7; Bank 11
RD_CAS Access to Rank 7; Bank 12
RD_CAS Access to Rank 7; Bank 13
RD_CAS Access to Rank 7; Bank 14
RD_CAS Access to Rank 7; Bank 15
RD_CAS Access to Rank 7; Bank 2
RD_CAS Access to Rank 7; Bank 3
RD_CAS Access to Rank 7; Bank 4
RD_CAS Access to Rank 7; Bank 5
RD_CAS Access to Rank 7; Bank 6
RD_CAS Access to Rank 7; Bank 7
RD_CAS Access to Rank 7; Bank 8
RD_CAS Access to Rank 7; Bank 9
RD_CAS Access to Rank 7; Bank Group 0 (Banks 0-3)
RD_CAS Access to Rank 7; Bank Group 1 (Banks 4-7)
RD_CAS Access to Rank 7; Bank Group 2 (Banks 8-11)
RD_CAS Access to Rank 7; Bank Group 3 (Banks 12-15)
Read Pending Queue Not Empty
Read Pending Queue Allocations
VMSE MXB write buffer occupancy
VMSE WR PUSH issued; VMSE write PUSH issued in RMM
VMSE WR PUSH issued; VMSE write PUSH issued in WMM
Transition from WMM to RMM because of low threshold; Transition from WMM to RMM because of starve counter
Transition from WMM to RMM because of low threshold
Transition from WMM to RMM because of low threshold
Write Pending Queue Full Cycles
Write Pending Queue Not Empty
Write Pending Queue CAM Match
Write Pending Queue CAM Match
Not getting the requested Major Mode
WR_CAS Access to Rank 0; All Banks
WR_CAS Access to Rank 0; Bank 0
WR_CAS Access to Rank 0; Bank 1
WR_CAS Access to Rank 0; Bank 10
WR_CAS Access to Rank 0; Bank 11
WR_CAS Access to Rank 0; Bank 12
WR_CAS Access to Rank 0; Bank 13
WR_CAS Access to Rank 0; Bank 14
WR_CAS Access to Rank 0; Bank 15
WR_CAS Access to Rank 0; Bank 2
WR_CAS Access to Rank 0; Bank 3
WR_CAS Access to Rank 0; Bank 4
WR_CAS Access to Rank 0; Bank 5
WR_CAS Access to Rank 0; Bank 6
WR_CAS Access to Rank 0; Bank 7
WR_CAS Access to Rank 0; Bank 8
WR_CAS Access to Rank 0; Bank 9
WR_CAS Access to Rank 0; Bank Group 0 (Banks 0-3)
WR_CAS Access to Rank 0; Bank Group 1 (Banks 4-7)
WR_CAS Access to Rank 0; Bank Group 2 (Banks 8-11)
WR_CAS Access to Rank 0; Bank Group 3 (Banks 12-15)
WR_CAS Access to Rank 1; All Banks
WR_CAS Access to Rank 1; Bank 0
WR_CAS Access to Rank 1; Bank 1
WR_CAS Access to Rank 1; Bank 10
WR_CAS Access to Rank 1; Bank 11
WR_CAS Access to Rank 1; Bank 12
WR_CAS Access to Rank 1; Bank 13
WR_CAS Access to Rank 1; Bank 14
WR_CAS Access to Rank 1; Bank 15
WR_CAS Access to Rank 1; Bank 2
WR_CAS Access to Rank 1; Bank 3
WR_CAS Access to Rank 1; Bank 4
WR_CAS Access to Rank 1; Bank 5
WR_CAS Access to Rank 1; Bank 6
WR_CAS Access to Rank 1; Bank 7
WR_CAS Access to Rank 1; Bank 8
WR_CAS Access to Rank 1; Bank 9
WR_CAS Access to Rank 1; Bank Group 0 (Banks 0-3)
WR_CAS Access to Rank 1; Bank Group 1 (Banks 4-7)
WR_CAS Access to Rank 1; Bank Group 2 (Banks 8-11)
WR_CAS Access to Rank 1; Bank Group 3 (Banks 12-15)
WR_CAS Access to Rank 4; All Banks
WR_CAS Access to Rank 4; Bank 0
WR_CAS Access to Rank 4; Bank 1
WR_CAS Access to Rank 4; Bank 10
WR_CAS Access to Rank 4; Bank 11
WR_CAS Access to Rank 4; Bank 12
WR_CAS Access to Rank 4; Bank 13
WR_CAS Access to Rank 4; Bank 14
WR_CAS Access to Rank 4; Bank 15
WR_CAS Access to Rank 4; Bank 2
WR_CAS Access to Rank 4; Bank 3
WR_CAS Access to Rank 4; Bank 4
WR_CAS Access to Rank 4; Bank 5
WR_CAS Access to Rank 4; Bank 6
WR_CAS Access to Rank 4; Bank 7
WR_CAS Access to Rank 4; Bank 8
WR_CAS Access to Rank 4; Bank 9
WR_CAS Access to Rank 4; Bank Group 0 (Banks 0-3)
WR_CAS Access to Rank 4; Bank Group 1 (Banks 4-7)
WR_CAS Access to Rank 4; Bank Group 2 (Banks 8-11)
WR_CAS Access to Rank 4; Bank Group 3 (Banks 12-15)
WR_CAS Access to Rank 5; All Banks
WR_CAS Access to Rank 5; Bank 0
WR_CAS Access to Rank 5; Bank 1
WR_CAS Access to Rank 5; Bank 10
WR_CAS Access to Rank 5; Bank 11
WR_CAS Access to Rank 5; Bank 12
WR_CAS Access to Rank 5; Bank 13
WR_CAS Access to Rank 5; Bank 14
WR_CAS Access to Rank 5; Bank 15
WR_CAS Access to Rank 5; Bank 2
WR_CAS Access to Rank 5; Bank 3
WR_CAS Access to Rank 5; Bank 4
WR_CAS Access to Rank 5; Bank 5
WR_CAS Access to Rank 5; Bank 6
WR_CAS Access to Rank 5; Bank 7
WR_CAS Access to Rank 5; Bank 8
WR_CAS Access to Rank 5; Bank 9
WR_CAS Access to Rank 5; Bank Group 0 (Banks 0-3)
WR_CAS Access to Rank 5; Bank Group 1 (Banks 4-7)
WR_CAS Access to Rank 5; Bank Group 2 (Banks 8-11)
WR_CAS Access to Rank 5; Bank Group 3 (Banks 12-15)
WR_CAS Access to Rank 6; All Banks
WR_CAS Access to Rank 6; Bank 0
WR_CAS Access to Rank 6; Bank 1
WR_CAS Access to Rank 6; Bank 10
WR_CAS Access to Rank 6; Bank 11
WR_CAS Access to Rank 6; Bank 12
WR_CAS Access to Rank 6; Bank 13
WR_CAS Access to Rank 6; Bank 14
WR_CAS Access to Rank 6; Bank 15
WR_CAS Access to Rank 6; Bank 2
WR_CAS Access to Rank 6; Bank 3
WR_CAS Access to Rank 6; Bank 4
WR_CAS Access to Rank 6; Bank 5
WR_CAS Access to Rank 6; Bank 6
WR_CAS Access to Rank 6; Bank 7
WR_CAS Access to Rank 6; Bank 8
WR_CAS Access to Rank 6; Bank 9
WR_CAS Access to Rank 6; Bank Group 0 (Banks 0-3)
WR_CAS Access to Rank 6; Bank Group 1 (Banks 4-7)
WR_CAS Access to Rank 6; Bank Group 2 (Banks 8-11)
WR_CAS Access to Rank 6; Bank Group 3 (Banks 12-15)
WR_CAS Access to Rank 7; All Banks
WR_CAS Access to Rank 7; Bank 0
WR_CAS Access to Rank 7; Bank 1
WR_CAS Access to Rank 7; Bank 10
WR_CAS Access to Rank 7; Bank 11
WR_CAS Access to Rank 7; Bank 12
WR_CAS Access to Rank 7; Bank 13
WR_CAS Access to Rank 7; Bank 14
WR_CAS Access to Rank 7; Bank 15
WR_CAS Access to Rank 7; Bank 2
WR_CAS Access to Rank 7; Bank 3
WR_CAS Access to Rank 7; Bank 4
WR_CAS Access to Rank 7; Bank 5
WR_CAS Access to Rank 7; Bank 6
WR_CAS Access to Rank 7; Bank 7
WR_CAS Access to Rank 7; Bank 8
WR_CAS Access to Rank 7; Bank 9
WR_CAS Access to Rank 7; Bank Group 0 (Banks 0-3)
WR_CAS Access to Rank 7; Bank Group 1 (Banks 4-7)
WR_CAS Access to Rank 7; Bank Group 2 (Banks 8-11)
WR_CAS Access to Rank 7; Bank Group 3 (Banks 12-15)
pclk Cycles
Core C State Transition Cycles
Core C State Transition Cycles
Core C State Transition Cycles
Core C State Transition Cycles
Core C State Transition Cycles
Core C State Transition Cycles
Core C State Transition Cycles
Core C State Transition Cycles
Core C State Transition Cycles
Core C State Transition Cycles
Core C State Transition Cycles
Core C State Transition Cycles
Core C State Transition Cycles
Core C State Transition Cycles
Core C State Transition Cycles
Core C State Transition Cycles
Core C State Transition Cycles
Core C State Transition Cycles
Core C State Demotions
Core C State Demotions
Core C State Demotions
Core C State Demotions
Core C State Demotions
Core C State Demotions
Core C State Demotions
Core C State Demotions
Core C State Demotions
Core C State Demotions
Core C State Demotions
Core C State Demotions
Core C State Demotions
Core C State Demotions
Core C State Demotions
Core C State Demotions
Core C State Demotions
Core C State Demotions
Thermal Strongest Upper Limit Cycles
OS Strongest Upper Limit Cycles
Power Strongest Upper Limit Cycles
IO P Limit Strongest Lower Limit Cycles
Cycles spent changing Frequency
Memory Phase Shedding Cycles
Package C State Residency - C0
Package C State Residency - C1E
Package C State Residency - C2E
Package C State Residency - C3
Package C State Residency - C6
Package C7 State Residency
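These residency counters typically accumulate in PCU clock cycles, so dividing them by the pclk Cycles count above gives the approximate fraction of time the package spent in each C-state.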
Number of cores in C-State; C0 and C1
Number of cores in C-State; C3
Number of cores in C-State; C6 and C7
External Prochot
Internal Prochot
Total Core C State Transition Cycles
tbd
VR Hot
Number of qfclks
Count of CTO Events
Direct 2 Core Spawning; Spawn Failure - Egress Credits
Direct 2 Core Spawning; Spawn Failure - Egress and RBT Miss
Direct 2 Core Spawning; Spawn Failure - Egress and RBT Invalid
Direct 2 Core Spawning; Spawn Failure - Egress and RBT Miss, Invalid
Direct 2 Core Spawning; Spawn Failure - RBT Miss
Direct 2 Core Spawning; Spawn Failure - RBT Invalid
Direct 2 Core Spawning; Spawn Failure - RBT Miss and Invalid
Direct 2 Core Spawning; Spawn Success
Cycles in L1
Cycles in L0p
Cycles in L0
Rx Flit Buffer Bypassed
CRC Errors Detected; LinkInit
CRC Errors Detected; Normal Operations
VN0 Credit Consumed; DRS
VN0 Credit Consumed; HOM
VN0 Credit Consumed; NCB
VN0 Credit Consumed; NCS
VN0 Credit Consumed; NDR
VN0 Credit Consumed; SNP
VN1 Credit Consumed; DRS
VN1 Credit Consumed; HOM
VN1 Credit Consumed; NCB
VN1 Credit Consumed; NCS
VN1 Credit Consumed; NDR
VN1 Credit Consumed; SNP
VNA Credit Consumed
RxQ Cycles Not Empty
RxQ Cycles Not Empty - DRS; for VN0
RxQ Cycles Not Empty - DRS; for VN1
RxQ Cycles Not Empty - HOM; for VN0
RxQ Cycles Not Empty - HOM; for VN1
RxQ Cycles Not Empty - NCB; for VN0
RxQ Cycles Not Empty - NCB; for VN1
RxQ Cycles Not Empty - NCS; for VN0
RxQ Cycles Not Empty - NCS; for VN1
RxQ Cycles Not Empty - NDR; for VN0
RxQ Cycles Not Empty - NDR; for VN1
RxQ Cycles Not Empty - SNP; for VN0
RxQ Cycles Not Empty - SNP; for VN1
Flits Received - Group 0; Idle and Null Flits
Flits Received - Group 1; DRS Flits (both Header and Data)
Flits Received - Group 1; DRS Data Flits
Flits Received - Group 1; DRS Header Flits
Flits Received - Group 1; HOM Flits
Flits Received - Group 1; HOM Non-Request Flits
Flits Received - Group 1; HOM Request Flits
Flits Received - Group 1; SNP Flits
Flits Received - Group 2; Non-Coherent Rx Flits
Flits Received - Group 2; Non-Coherent data Rx Flits
Flits Received - Group 2; Non-Coherent non-data Rx Flits
Flits Received - Group 2; Non-Coherent standard Rx Flits
Flits Received - Group 2; Non-Data Response Rx Flits - AD
Flits Received - Group 2; Non-Data Response Rx Flits - AK
Rx Flit Buffer Allocations
Rx Flit Buffer Allocations - DRS; for VN0
Rx Flit Buffer Allocations - DRS; for VN1
Rx Flit Buffer Allocations - HOM; for VN0
Rx Flit Buffer Allocations - HOM; for VN1
Rx Flit Buffer Allocations - NCB; for VN0
Rx Flit Buffer Allocations - NCB; for VN1
Rx Flit Buffer Allocations - NCS; for VN0
Rx Flit Buffer Allocations - NCS; for VN1
Rx Flit Buffer Allocations - NDR; for VN0
Rx Flit Buffer Allocations - NDR; for VN1
Rx Flit Buffer Allocations - SNP; for VN0
Rx Flit Buffer Allocations - SNP; for VN1
RxQ Occupancy - All Packets
RxQ Occupancy - DRS; for VN0
RxQ Occupancy - DRS; for VN1
RxQ Occupancy - HOM; for VN0
RxQ Occupancy - HOM; for VN1
RxQ Occupancy - NCB; for VN0
RxQ Occupancy - NCB; for VN1
RxQ Occupancy - NCS; for VN0
RxQ Occupancy - NCS; for VN1
RxQ Occupancy - NDR; for VN0
RxQ Occupancy - NDR; for VN1
RxQ Occupancy - SNP; for VN0
RxQ Occupancy - SNP; for VN1
Stalls Sending to R3QPI on VN0; BGF Stall - HOM
Stalls Sending to R3QPI on VN0; BGF Stall - DRS
Stalls Sending to R3QPI on VN0; BGF Stall - SNP
Stalls Sending to R3QPI on VN0; BGF Stall - NDR
Stalls Sending to R3QPI on VN0; BGF Stall - NCS
Stalls Sending to R3QPI on VN0; BGF Stall - NCB
Stalls Sending to R3QPI on VN0; Egress Credits
Stalls Sending to R3QPI on VN0; GV
Stalls Sending to R3QPI on VN1; BGF Stall - HOM
Stalls Sending to R3QPI on VN1; BGF Stall - DRS
Stalls Sending to R3QPI on VN1; BGF Stall - SNP
Stalls Sending to R3QPI on VN1; BGF Stall - NDR
Stalls Sending to R3QPI on VN1; BGF Stall - NCS
Stalls Sending to R3QPI on VN1; BGF Stall - NCB
Cycles in L0p
Cycles in L0
Tx Flit Buffer Bypassed
Cycles Stalled with no LLR Credits; LLR is almost full
Cycles Stalled with no LLR Credits; LLR is full
Tx Flit Buffer Cycles not Empty
Flits Transferred - Group 0; Data Tx Flits
Flits Transferred - Group 0; Non-Data protocol Tx Flits
Flits Transferred - Group 1; DRS Flits (both Header and Data)
Flits Transferred - Group 1; DRS Data Flits
Flits Transferred - Group 1; DRS Header Flits
Flits Transferred - Group 1; HOM Flits
Flits Transferred - Group 1; HOM Non-Request Flits
Flits Transferred - Group 1; HOM Request Flits
Flits Transferred - Group 1; SNP Flits
Flits Transferred - Group 2; Non-Coherent Bypass Tx Flits
Flits Transferred - Group 2; Non-Coherent data Tx Flits
Flits Transferred - Group 2; Non-Coherent non-data Tx Flits
Flits Transferred - Group 2; Non-Coherent standard Tx Flits
Flits Transferred - Group 2; Non-Data Response Tx Flits - AD
Flits Transferred - Group 2; Non-Data Response Tx Flits - AK
Tx Flit Buffer Allocations
Tx Flit Buffer Occupancy
R3QPI Egress Credit Occupancy - HOM; for VN0
R3QPI Egress Credit Occupancy - HOM; for VN1
R3QPI Egress Credit Occupancy - AD HOM; for VN0
R3QPI Egress Credit Occupancy - AD HOM; for VN1
R3QPI Egress Credit Occupancy - AD NDR; for VN0
R3QPI Egress Credit Occupancy - AD NDR; for VN1
R3QPI Egress Credit Occupancy - AD NDR; for VN0
R3QPI Egress Credit Occupancy - AD NDR; for VN1
R3QPI Egress Credit Occupancy - SNP; for VN0
R3QPI Egress Credit Occupancy - SNP; for VN1
R3QPI Egress Credit Occupancy - AD SNP; for VN0
R3QPI Egress Credit Occupancy - AD SNP; for VN1
R3QPI Egress Credit Occupancy - AK NDR
R3QPI Egress Credit Occupancy - AK NDR
R3QPI Egress Credit Occupancy - DRS; for VN0
R3QPI Egress Credit Occupancy - DRS; for VN1
R3QPI Egress Credit Occupancy - DRS; for Shared VN
R3QPI Egress Credit Occupancy - BL DRS; for VN0
R3QPI Egress Credit Occupancy - BL DRS; for VN1
R3QPI Egress Credit Occupancy - BL DRS; for Shared VN
R3QPI Egress Credit Occupancy - NCB; for VN0
R3QPI Egress Credit Occupancy - NCB; for VN1
R3QPI Egress Credit Occupancy - BL NCB; for VN0
R3QPI Egress Credit Occupancy - BL NCB; for VN1
R3QPI Egress Credit Occupancy - NCS; for VN0
R3QPI Egress Credit Occupancy - NCS; for VN1
R3QPI Egress Credit Occupancy - BL NCS; for VN0
R3QPI Egress Credit Occupancy - BL NCS; for VN1
VNA Credits Returned
VNA Credits Pending Return - Occupancy
Number of uclks in domain
tbd
tbd
tbd
tbd
R2PCIe IIO Credit Acquired; DRS
R2PCIe IIO Credit Acquired; NCB
R2PCIe IIO Credit Acquired; NCS
R2PCIe IIO Credits in Use; DRS
R2PCIe IIO Credits in Use; NCB
R2PCIe IIO Credits in Use; NCS
R2 AD Ring in Use; All
R2 AD Ring in Use; Counterclockwise
R2 AD Ring in Use; Counterclockwise and Even
R2 AD Ring in Use; Counterclockwise and Odd
R2 AD Ring in Use; Clockwise
R2 AD Ring in Use; Clockwise and Even
R2 AD Ring in Use; Clockwise and Odd
AK Ingress Bounced; Dn
AK Ingress Bounced; Up
R2 AK Ring in Use; All
R2 AK Ring in Use; Counterclockwise
R2 AK Ring in Use; Counterclockwise and Even
R2 AK Ring in Use; Counterclockwise and Odd
R2 AK Ring in Use; Clockwise
R2 AK Ring in Use; Clockwise and Even
R2 AK Ring in Use; Clockwise and Odd
R2 BL Ring in Use; All
R2 BL Ring in Use; Counterclockwise
R2 BL Ring in Use; Counterclockwise and Even
R2 BL Ring in Use; Counterclockwise and Odd
R2 BL Ring in Use; Clockwise
R2 BL Ring in Use; Clockwise and Even
R2 BL Ring in Use; Clockwise and Odd
R2 IV Ring in Use; Any
R2 IV Ring in Use; Counterclockwise
R2 IV Ring in Use; Clockwise
Ingress Cycles Not Empty; NCB
Ingress Cycles Not Empty; NCS
Ingress Allocations; NCB
Ingress Allocations; NCS
Ingress Occupancy Accumulator; DRS
SBo0 Credits Acquired; For AD Ring
SBo0 Credits Acquired; For BL Ring
SBo0 Credits Occupancy; For AD Ring
SBo0 Credits Occupancy; For BL Ring
Stall on No SBo Credits; For SBo0, AD Ring
Stall on No SBo Credits; For SBo0, BL Ring
Stall on No SBo Credits; For SBo1, AD Ring
Stall on No SBo Credits; For SBo1, BL Ring
Egress Cycles Full; AD
Egress Cycles Full; AK
Egress Cycles Full; BL
Egress Cycles Not Empty; AD
Egress Cycles Not Empty; AK
Egress Cycles Not Empty; BL
Egress CCW NACK; AD CCW
Egress CCW NACK; AK CCW
Egress CCW NACK; BL CCW
Egress CW NACK; AD CW
Egress CW NACK; AK CW
Egress CW NACK; BL CW
Number of uclks in domain
CBox AD Credits Empty
CBox AD Credits Empty
CBox AD Credits Empty
CBox AD Credits Empty
CBox AD Credits Empty
CBox AD Credits Empty
CBox AD Credits Empty
CBox AD Credits Empty
CBox AD Credits Empty
CBox AD Credits Empty
CBox AD Credits Empty
CBox AD Credits Empty
CBox AD Credits Empty
CBox AD Credits Empty
CBox AD Credits Empty
CBox AD Credits Empty
HA/R2 AD Credits Empty
HA/R2 AD Credits Empty
HA/R2 AD Credits Empty
HA/R2 AD Credits Empty
IOT Backpressure
IOT Backpressure
IOT Common Trigger Sequencer - Hi
IOT Common Trigger Sequencer - Hi
IOT Common Trigger Sequencer - Lo
IOT Common Trigger Sequencer - Lo
QPI0 AD Credits Empty
QPI0 AD Credits Empty
QPI0 AD Credits Empty
QPI0 AD Credits Empty
QPI0 AD Credits Empty
QPI0 AD Credits Empty
QPI0 AD Credits Empty
QPI0 BL Credits Empty
QPI0 BL Credits Empty
QPI0 BL Credits Empty
QPI0 BL Credits Empty
QPI1 AD Credits Empty
QPI1 AD Credits Empty
QPI1 AD Credits Empty
QPI1 AD Credits Empty
QPI1 BL Credits Empty
QPI1 BL Credits Empty
QPI1 BL Credits Empty
QPI1 BL Credits Empty
QPI1 BL Credits Empty
QPI1 BL Credits Empty
QPI1 BL Credits Empty
R3 AD Ring in Use; All
R3 AD Ring in Use; Counterclockwise
R3 AD Ring in Use; Counterclockwise and Even
R3 AD Ring in Use; Counterclockwise and Odd
R3 AD Ring in Use; Clockwise
R3 AD Ring in Use; Clockwise and Even
R3 AD Ring in Use; Clockwise and Odd
R3 AK Ring in Use; All
R3 AK Ring in Use; Counterclockwise
R3 AK Ring in Use; Counterclockwise and Even
R3 AK Ring in Use; Counterclockwise and Odd
R3 AK Ring in Use; Clockwise
R3 AK Ring in Use; Clockwise and Even
R3 AK Ring in Use; Clockwise and Odd
R3 BL Ring in Use; All
R3 BL Ring in Use; Counterclockwise
R3 BL Ring in Use; Counterclockwise and Even
R3 BL Ring in Use; Counterclockwise and Odd
R3 BL Ring in Use; Clockwise
R3 BL Ring in Use; Clockwise and Even
R3 BL Ring in Use; Clockwise and Odd
R3 IV Ring in Use; Any
R3 IV Ring in Use; Clockwise
Ring Stop Starved; AK
Ingress Cycles Not Empty; HOM
Ingress Cycles Not Empty; NDR
Ingress Cycles Not Empty; SNP
VN1 Ingress Cycles Not Empty; DRS
VN1 Ingress Cycles Not Empty; HOM
VN1 Ingress Cycles Not Empty; NCB
VN1 Ingress Cycles Not Empty; NCS
VN1 Ingress Cycles Not Empty; NDR
VN1 Ingress Cycles Not Empty; SNP
Ingress Allocations; DRS
Ingress Allocations; HOM
Ingress Allocations; NCB
Ingress Allocations; NCS
Ingress Allocations; NDR
Ingress Allocations; SNP
VN1 Ingress Allocations; DRS
VN1 Ingress Allocations; HOM
VN1 Ingress Allocations; NCB
VN1 Ingress Allocations; NCS
VN1 Ingress Allocations; NDR
VN1 Ingress Allocations; SNP
VN1 Ingress Occupancy Accumulator; DRS
VN1 Ingress Occupancy Accumulator; HOM
VN1 Ingress Occupancy Accumulator; NCB
VN1 Ingress Occupancy Accumulator; NCS
VN1 Ingress Occupancy Accumulator; NDR
VN1 Ingress Occupancy Accumulator; SNP
SBo0 Credits Acquired; For AD Ring
SBo0 Credits Acquired; For BL Ring
SBo0 Credits Occupancy; For AD Ring
SBo0 Credits Occupancy; For BL Ring
SBo1 Credits Acquired; For AD Ring
SBo1 Credits Acquired; For BL Ring
SBo1 Credits Occupancy; For AD Ring
SBo1 Credits Occupancy; For BL Ring
Stall on No SBo Credits; For SBo0, AD Ring
Stall on No SBo Credits; For SBo0, BL Ring
Stall on No SBo Credits; For SBo1, AD Ring
Stall on No SBo Credits; For SBo1, BL Ring
Egress CCW NACK; AD CCW
Egress CCW NACK; AK CCW
Egress CCW NACK; BL CCW
Egress CW NACK; AD CW
Egress CW NACK; AK CW
Egress CW NACK; BL CW
VN0 Credit Acquisition Failed on DRS; DRS Message Class
VN0 Credit Acquisition Failed on DRS; HOM Message Class
VN0 Credit Acquisition Failed on DRS; NCB Message Class
VN0 Credit Acquisition Failed on DRS; NCS Message Class
VN0 Credit Acquisition Failed on DRS; NDR Message Class
VN0 Credit Acquisition Failed on DRS; SNP Message Class
VN0 Credit Used; DRS Message Class
VN0 Credit Used; HOM Message Class
VN0 Credit Used; NCB Message Class
VN0 Credit Used; NCS Message Class
VN0 Credit Used; NDR Message Class
VN0 Credit Used; SNP Message Class
VN1 Credit Acquisition Failed on DRS; DRS Message Class
VN1 Credit Acquisition Failed on DRS; HOM Message Class
VN1 Credit Acquisition Failed on DRS; NCB Message Class
VN1 Credit Acquisition Failed on DRS; NCS Message Class
VN1 Credit Acquisition Failed on DRS; NDR Message Class
VN1 Credit Acquisition Failed on DRS; SNP Message Class
VN1 Credit Used; DRS Message Class
VN1 Credit Used; HOM Message Class
VN1 Credit Used; NCB Message Class
VN1 Credit Used; NCS Message Class
VN1 Credit Used; NDR Message Class
VN1 Credit Used; SNP Message Class
VNA Credit Acquisitions; HOM Message Class
VNA Credit Acquisitions; HOM Message Class
VNA Credit Reject; DRS Message Class
VNA Credit Reject; HOM Message Class
VNA Credit Reject; NCB Message Class
VNA Credit Reject; NCS Message Class
VNA Credit Reject; NDR Message Class
VNA Credit Reject; SNP Message Class
Bounce Control
Uncore Clocks
FaST wire asserted
AD Ring In Use; All
AD Ring In Use; Down
AD Ring In Use; Down and Even
AD Ring In Use; Down and Odd
AD Ring In Use; Up
AD Ring In Use; Up and Even
AD Ring In Use; Up and Odd
AK Ring In Use; All
AK Ring In Use; Down
AK Ring In Use; Down and Even
AK Ring In Use; Down and Odd
AK Ring In Use; Up
AK Ring In Use; Up and Even
AK Ring In Use; Up and Odd
BL Ring in Use; All
BL Ring in Use; Down
BL Ring in Use; Down and Even
BL Ring in Use; Down and Odd
BL Ring in Use; Up
BL Ring in Use; Up and Even
BL Ring in Use; Up and Odd
Number of LLC responses that bounced on the Ring
Number of LLC responses that bounced on the Ring; Acknowledgements to core
Number of LLC responses that bounced on the Ring; Data Responses to core
Number of LLC responses that bounced on the Ring; Snoops of the processor's cache
BL Ring in Use; Any
BL Ring in Use; Any
tbd
tbd
tbd
tbd
Injection Starvation; AD - Bounces
Injection Starvation; AD - Credits
Injection Starvation; BL - Bounces
Injection Starvation; BL - Credits
Bypass; AD - Bounces
Bypass; AD - Credits
Bypass; AK
Bypass; BL - Bounces
Bypass; BL - Credits
Bypass; IV
Injection Starvation; AD - Bounces
Injection Starvation; AD - Credits
Injection Starvation; AK
Injection Starvation; BL - Bounces
Injection Starvation; BL - Credits
Injection Starvation; IVF Credit
Injection Starvation; IV
Ingress Allocations; AD - Bounces
Ingress Allocations; AD - Credits
Ingress Allocations; AK
Ingress Allocations; BL - Bounces
Ingress Allocations; BL - Credits
Ingress Allocations; IV
Ingress Occupancy; AD - Bounces
Ingress Occupancy; AD - Credits
Ingress Occupancy; AK
Ingress Occupancy; BL - Bounces
Ingress Occupancy; BL - Credits
Ingress Occupancy; IV
tbd
tbd
tbd
Egress Allocations; AD - Bounces
Egress Allocations; AD - Credits
Egress Allocations; AK
Egress Allocations; BL - Bounces
Egress Allocations; BL - Credits
Egress Allocations; IV
Egress Occupancy; AD - Bounces
Egress Occupancy; AD - Credits
Egress Occupancy; AK
Egress Occupancy; BL - Bounces
Egress Occupancy; BL - Credits
Egress Occupancy; IV
Injection Starvation; Onto AD Ring
Injection Starvation; Onto AK Ring
Injection Starvation; Onto BL Ring
Injection Starvation; Onto IV Ring
VLW Received
Filter Match
Filter Match
Filter Match
Filter Match
Cycles PHOLD Assert to Ack; Assert to ACK
RACU Request
Monitor Sent to T0; Correctable Machine Check
Monitor Sent to T0; Livelock
Monitor Sent to T0; LTError
Monitor Sent to T0; Monitor T0
Monitor Sent to T0; Monitor T1
Monitor Sent to T0; Other
Monitor Sent to T0; Trap
Monitor Sent to T0; Uncorrectable Machine Check
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 0.
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 1.
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 2.
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 3.
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 4.
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 5.
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 6.
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 7.
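On Linux, per-port dispatch cycles like the eight events above can be read through perf_event_open with a raw event encoding. A minimal sketch, assuming the Broadwell encoding of event 0xA1 with umask 0x01 for port 0 (0x02 for port 1, up through 0x80 for port 7); verify these encodings against your CPU's event tables before trusting the numbers:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <linux/perf_event.h>

    static int perf_open(unsigned long long raw_config)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_RAW;
        attr.size = sizeof(attr);
        attr.config = raw_config;      /* assumed Broadwell encoding */
        attr.disabled = 1;
        attr.exclude_kernel = 1;
        /* pid = 0, cpu = -1: measure this thread on any CPU */
        return (int)syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
    }

    int main(void)
    {
        int fd = perf_open(0x01A1);    /* port 0: event 0xA1, umask 0x01 */
        if (fd < 0) { perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        volatile double x = 0;         /* arbitrary work to measure */
        for (int i = 0; i < 10000000; i++) x += i * 0.5;
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        long long count = 0;
        if (read(fd, &count, sizeof(count)) != sizeof(count)) return 1;
        printf("cycles with uops dispatched to port 0: %lld\n", count);
        close(fd);
        return 0;
    }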
Number of uops executed from any thread
Cycles where at least 1 micro-op is executed from any thread on the physical core
Cycles where at least 2 micro-ops are executed from any thread on the physical core
Cycles where at least 3 micro-ops are executed from any thread on the physical core
Cycles where at least 4 micro-ops are executed from any thread on the physical core
Cycles with no micro-ops executed from any thread on the physical core
Cycles where at least 1 uop was executed per-thread
Cycles where at least 2 uops were executed per-thread
Cycles where at least 3 uops were executed per-thread
Cycles where at least 4 uops were executed per-thread
This event counts cycles during which no uops were dispatched from the Reservation Station (RS) per thread.
Number of uops to be executed per-thread each cycle.
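The "at least N micro-ops" and "no micro-ops" variants above are conventionally derived from the same uop-count event by programming a counter mask (cmask), which in Linux raw perf encodings occupies bits 24-31 of the config. A sketch, assuming event 0xB1 with umask 0x01 for the per-thread uops-executed count:

    #include <stdio.h>

    /* Assumed Broadwell encoding for the per-thread uops-executed
     * event: event 0xB1, umask 0x01. The cmask field occupies bits
     * 24-31 of the Linux raw perf config. */
    static unsigned long long uops_exec_raw(unsigned cmask)
    {
        const unsigned long long event = 0xB1, umask = 0x01;
        return event | (umask << 8) | ((unsigned long long)cmask << 24);
    }

    int main(void)
    {
        /* cmask 0 counts uops; cmask N counts cycles with at least N
         * uops executed. Setting the invert bit (bit 23) with cmask 1
         * would give the "no micro-ops executed" stall flavor. */
        for (unsigned n = 0; n <= 4; n++)
            printf("cmask %u -> raw config 0x%llx\n", n, uops_exec_raw(n));
        return 0;
    }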
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 0.
Cycles per core when uops are executed in port 0
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 1.
Cycles per core when uops are executed in port 1
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 2.
Cycles per core when uops are dispatched to port 2
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 3.
Cycles per core when uops are dispatched to port 3
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 4.
Cycles per core when uops are executed in port 4
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 5.
Cycles per core when uops are executed in port 5
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 6.
Cycles per core when uops are executed in port 6
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 7.
Cycles per core when uops are dispatched to port 7
This event counts the number of Uops issued by the Resource Allocation Table (RAT) to the reservation station (RS).
Number of flags-merge uops being allocated. Such uops are considered performance sensitive; added by the GSR u-arch.
Number of Multiply packed/scalar single precision uops allocated
Number of slow LEA uops being allocated. A uop is generally considered a slow LEA if it has 3 sources (e.g. 2 sources + immediate), regardless of whether it results from an LEA instruction or not.
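For illustration, a hedged x86-64 example of the distinction (GCC/Clang inline assembly; the fast/slow labels follow the three-source criterion above, not a measurement):

    #include <stdio.h>

    static long lea_two_sources(long base, long idx)
    {
        long r;
        /* base + index*4: two sources, generally a fast LEA */
        __asm__("lea (%1,%2,4), %0" : "=r"(r) : "r"(base), "r"(idx));
        return r;
    }

    static long lea_three_sources(long base, long idx)
    {
        long r;
        /* base + index*4 + 8: three sources (2 registers + immediate),
         * generally counted as a slow LEA */
        __asm__("lea 8(%1,%2,4), %0" : "=r"(r) : "r"(base), "r"(idx));
        return r;
    }

    int main(void)
    {
        printf("%ld %ld\n", lea_two_sources(100, 3), lea_three_sources(100, 3));
        return 0;
    }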
This event counts cycles during which the Resource Allocation Table (RAT) does not issue any Uops to the reservation station (RS) for the current thread.
This event counts all actually retired uops. Counting increments by two for micro-fused uops, and by one for macro-fused and other uops. Maximal increment value for one cycle is eight.
This is a precise version (that is, uses PEBS) of the event that counts all actually retired uops. Counting increments by two for micro-fused uops, and by one for macro-fused and other uops. Maximal increment value for one cycle is eight.
This is a non-precise version (that is, does not use PEBS) of the event that counts the number of retirement slots used.
This is a precise version (that is, uses PEBS) of the event that counts the number of retirement slots used.
This is a non-precise version (that is, does not use PEBS) of the event that counts cycles without actually retired uops.
Number of cycles counted by applying an always-true condition (uops_ret < 16) to the non-PEBS uops retired event.
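A hedged illustration of the increment rule described above: a memory-source ALU operation typically retires as one micro-fused uop and should bump the count by two, while a register-only operation bumps it by one (x86-64 GCC/Clang inline assembly; the per-instruction increments are the documented rule, not measured values):

    #include <stdio.h>

    static long example(long *p, long a)
    {
        long r = a;
        /* memory-source add: typically one micro-fused uop, counts +2 */
        __asm__("add (%1), %0" : "+r"(r) : "r"(p) : "memory");
        /* register-register add: one plain uop, counts +1 */
        __asm__("add %1, %0" : "+r"(r) : "r"(a));
        return r;
    }

    int main(void)
    {
        long v = 5;
        printf("%ld\n", example(&v, 2));  /* 2 + 5 + 2 = 9 */
        return 0;
    }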
This event counts the number of micro-operations cancelled after they were dispatched from the scheduler to the execution units when the total number of physical register read ports across all dispatch ports exceeds the read bandwidth of the physical register file. The SIMD_PRF subevent applies to the following instructions: VDPPS, DPPS, VPCMPESTRI, PCMPESTRI, VPCMPESTRM, PCMPESTRM, VFMADD*, VFMADDSUB*, VFMSUB*, VFMSUBADD*, VFNMADD*, VFNMSUB*. See the Broadwell Optimization Guide for more information.
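As a concrete reference point, DPPS is reachable from C via the SSE4.1 intrinsic _mm_dp_ps (compile with -msse4.1); a kernel built around it is the kind of code the SIMD_PRF subevent can attribute cancellations to, though whether cancellations actually occur depends on overall register-file read-port pressure:

    #include <stdio.h>
    #include <smmintrin.h>  /* SSE4.1: _mm_dp_ps compiles to (V)DPPS */

    static float dot4(const float *a, const float *b)
    {
        __m128 va = _mm_loadu_ps(a);
        __m128 vb = _mm_loadu_ps(b);
        /* imm8 0xF1: multiply all four lanes, write the sum to lane 0 */
        __m128 d = _mm_dp_ps(va, vb, 0xF1);
        return _mm_cvtss_f32(d);
    }

    int main(void)
    {
        const float a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8};
        printf("%f\n", dot4(a, b));  /* 5+12+21+32 = 70 */
        return 0;
    }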