Intel® VTune™ Amplifier XE and Intel® VTune™ Amplifier for Systems Help
This section provides a reference for the hardware events that can be monitored for the CPU(s):
The following performance-monitoring events are supported:
Cycles when divide unit is busy executing divide or square root operations. Accounts for integer and floating-point operations.
Counts the total number of times the front end is resteered, mainly when the BPU cannot provide a correct prediction and this is corrected by other branch-handling mechanisms at the front end.
This event counts all (macro) branch instructions retired.
This is a precise version of BR_INST_RETIRED.ALL_BRANCHES that counts all (macro) branch instructions retired.
This is a non-precise version (that is, does not use PEBS) of the event that counts conditional branch instructions retired.
This is a precise version (that is, uses PEBS) of the event that counts conditional branch instructions retired.
This is a non-precise version (that is, does not use PEBS) of the event that counts far branch instructions retired.
This is a precise version (that is, uses PEBS) of the event that counts far branch instructions retired.
This is a non-precise version (that is, does not use PEBS) of the event that counts both direct and indirect near call instructions retired.
This is a precise version (that is, uses PEBS) of the event that counts both direct and indirect near call instructions retired.
This is a non-precise version (that is, does not use PEBS) of the event that counts return instructions retired.
This is a precise version (that is, uses PEBS) of the event that counts return instructions retired.
This is a non-precise version (that is, does not use PEBS) of the event that counts taken branch instructions retired.
This is a precise version (that is, uses PEBS) of the event that counts taken branch instructions retired.
This is a non-precise version (that is, does not use PEBS) of the event that counts not taken branch instructions retired.
This event counts all mispredicted macro branch instructions retired.
This is a precise version of BR_MISP_RETIRED.ALL_BRANCHES that counts all mispredicted macro branch instructions retired.
This is a non-precise version (that is, does not use PEBS) of the event that counts mispredicted conditional branch instructions retired.
This is a precise version (that is, uses PEBS) of the event that counts mispredicted conditional branch instructions retired.
Number of near branch instructions retired that were mispredicted and taken.
Number of near branch instructions retired that were mispredicted and taken.
Counts XClk pulses when this thread is unhalted and the other thread is halted.
Reference cycles when the thread is unhalted (counts at 100 MHz rate)
Reference cycles when at least one thread on the physical core is unhalted (counts at 100 MHz rate)
This event counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P-states or TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. This event has a constant ratio with the CPU_CLK_UNHALTED.REF_XCLK event. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. Note: On all current platforms this event stops counting during 'throttling (TM)' states; in duty-off periods the processor is 'halted'. This event is clocked by the base clock (100 MHz) on Sandy Bridge. Because the counter update is done at a lower clock rate than the core clock, the overflow status bit for this counter may appear 'sticky': after the counter has overflowed and software clears the overflow status bit and resets the counter to less than MAX, the reset value is not clocked in immediately, so the overflow status bit will flip high (1) and generate another PMI (if enabled), after which the reset value gets clocked into the counter. Therefore, software may take the interrupt and read an overflow status bit of '1' (bit 34) while the counter value is less than MAX. Software should ignore this case.
This event counts the number of core cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2. For this reason, this event may have a changing ratio with regard to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events.
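The two fixed counters described above can be combined to estimate the average core frequency over a measured interval. A minimal sketch, assuming an illustrative nominal TSC frequency and made-up counter values (not measured data):

```python
# Sketch: deriving effective core frequency from the two fixed counters.
# CPU_CLK_UNHALTED.THREAD counts at the actual (varying) core clock;
# CPU_CLK_UNHALTED.REF_TSC counts at the constant TSC rate, so it can be
# converted to elapsed unhalted time.

def effective_frequency_hz(core_cycles, ref_cycles, tsc_hz=2_400_000_000):
    """Estimate average core frequency over an unhalted interval.

    core_cycles: CPU_CLK_UNHALTED.THREAD reading
    ref_cycles:  CPU_CLK_UNHALTED.REF_TSC reading
    tsc_hz:      nominal TSC frequency (an assumption for this sketch)
    """
    elapsed_seconds = ref_cycles / tsc_hz
    return core_cycles / elapsed_seconds

# 3.0e9 core cycles over 2.4e9 reference cycles at a 2.4 GHz TSC means the
# core averaged 3.0 GHz (for example, turbo was active).
print(effective_frequency_hz(3_000_000_000, 2_400_000_000))
```

A ratio above the nominal frequency indicates turbo; below it indicates frequency throttling.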
Core cycles when at least one thread on the physical core is not in halt state
This is an architectural event that counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling. For this reason, this event may have a changing ratio with regards to wall clock time.
Core cycles when at least one thread on the physical core is not in halt state
Cycles while L1 cache miss demand load is outstanding.
Cycles while L2 cache miss demand load is outstanding.
Cycles while L3 cache miss demand load is outstanding.
Cycles while memory subsystem has an outstanding load.
Execution stalls while L1 cache miss demand load is outstanding.
Execution stalls while L2 cache miss demand load is outstanding.
Execution stalls while L3 cache miss demand load is outstanding.
Execution stalls while memory subsystem has an outstanding load.
Total execution stalls.
This event counts Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles. These cycles do not include uops routed through because of the switch itself, for example, when Instruction Decode Queue (IDQ) pre-allocation is unavailable, or the Instruction Decode Queue (IDQ) is full. DSB-to-MITE switch true penalty cycles happen after the merge mux (MM) receives the Decode Stream Buffer (DSB) Sync-indication until receiving the first MITE uop. The MM is placed before the Instruction Decode Queue (IDQ) to merge uops being fed from the MITE and Decode Stream Buffer (DSB) paths. The Decode Stream Buffer (DSB) inserts the Sync-indication whenever a Decode Stream Buffer (DSB)-to-MITE switch occurs. Penalty: A Decode Stream Buffer (DSB) hit followed by a Decode Stream Buffer (DSB) miss can cost up to six cycles in which no uops are delivered to the IDQ. Most often, such switches from the Decode Stream Buffer (DSB) to the legacy pipeline cost 0-2 cycles.
This event counts load misses in all DTLB levels that cause page walks of any page size (4K/2M/4M/1G).
Loads that miss the DTLB and hit the STLB.
Cycles when at least one PMH is busy with a page walk for a load.
Load misses in all TLB levels that cause a page walk that completes. (All page sizes)
Counts 1 per cycle for each PMH that is busy with a page walk for a load.
This event counts store misses in all DTLB levels that cause page walks of any page size (4K/2M/4M/1G).
Stores that miss the DTLB and hit the STLB.
Cycles when at least one PMH is busy with a page walk for a store.
Store misses in all TLB levels that cause a page walk that completes. (All page sizes)
Counts 1 per cycle for each PMH that is busy with a page walk for a store.
Counts 1 per cycle for each PMH that is busy with an EPT (Extended Page Table) walk for any request type.
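Because the WALK_PENDING-style events above add 1 per cycle for each busy PMH, dividing them by the corresponding WALK_COMPLETED count yields the average page-walk duration. A minimal sketch with illustrative counter values:

```python
# Sketch: average page-walk latency from WALK_PENDING / WALK_COMPLETED.
# WALK_PENDING accumulates one count per cycle per busy page-miss handler,
# so pending-cycles divided by completed walks gives cycles per walk.

def avg_walk_cycles(walk_pending, walk_completed):
    """Average page-walk duration in cycles (0.0 if no walks completed)."""
    if walk_completed == 0:
        return 0.0
    return walk_pending / walk_completed

# 50,000 pending-cycles across 2,000 completed walks -> 25 cycles per walk.
print(avg_walk_cycles(50_000, 2_000))
```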
Cycles where a total of 1 uop was executed on all ports and the Reservation Station was not empty.
Cycles where a total of 2 uops were executed on all ports and the Reservation Station was not empty.
Cycles where a total of 3 uops were executed on all ports and the Reservation Station was not empty.
Cycles where a total of 4 uops were executed on all ports and the Reservation Station was not empty.
Cycles where the Store Buffer was full and there was no outstanding load.
Cycles where no uops were executed, the Reservation Station was not empty, the Store Buffer was full and there was no outstanding load.
Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired. Each count represents 2 computations. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.
Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired. Each count represents 4 computations. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP RSQRT SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.
Number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired. Each count represents 4 computations. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.
Number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired. Each count represents 8 computations. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP RSQRT SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.
Number of SSE/AVX computational scalar double precision floating-point instructions retired. Each count represents 1 computation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.
Number of SSE/AVX computational scalar single precision floating-point instructions retired. Each count represents 1 computation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP RSQRT SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.
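The per-count multipliers stated in the descriptions above (scalar = 1, 128-bit packed double = 2, 128-bit packed single = 4, 256-bit packed double = 4, 256-bit packed single = 8) are what turn these event counts into a FLOP total. A minimal sketch with illustrative event names and values:

```python
# Sketch: converting the packed/scalar FP retirement counts above into a
# total floating-point operation count using the stated per-count weights.

FLOP_WEIGHTS = {
    "SCALAR_SINGLE": 1,
    "SCALAR_DOUBLE": 1,
    "128B_PACKED_DOUBLE": 2,
    "128B_PACKED_SINGLE": 4,
    "256B_PACKED_DOUBLE": 4,
    "256B_PACKED_SINGLE": 8,
}

def total_flops(counts):
    """Sum weighted FP event counts into a FLOP total."""
    return sum(FLOP_WEIGHTS[name] * value for name, value in counts.items())

counts = {"SCALAR_DOUBLE": 1_000, "256B_PACKED_DOUBLE": 500}
print(total_flops(counts))  # 1_000*1 + 500*4 = 3_000
```

Dividing this total by elapsed seconds gives FLOP/s; note that DPP and FM(N)ADD/SUB instructions already count twice in the hardware events, as stated above.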
This event counts cycles with any input or output SSE or x87 FP assist. If input and output assists are detected on the same cycle, the event increments by 1.
Retired instructions that experienced a decode stream buffer (DSB - the decoded instruction cache) miss.
Retired instructions that experienced a decode stream buffer (DSB - the decoded instruction cache) miss. Precise Event.
Retired instructions that experienced an iTLB true miss.
Retired instructions that experienced an iTLB true miss. Precise Event.
Retired instructions that experienced an instruction L1 cache true miss.
Retired instructions that experienced an instruction L1 cache true miss. Precise Event.
Retired instructions that experienced an instruction L2 cache true miss.
Retired instructions that experienced an instruction L2 cache true miss. Precise Event.
Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall.
Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall. Precise Event.
Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 16 cycles which was not interrupted by a back-end stall.
Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 16 cycles which was not interrupted by a back-end stall. Precise Event.
Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 2 cycles which was not interrupted by a back-end stall.
Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall
Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall. Precise Event.
Retired instructions that are fetched after an interval where the front-end had at least 1 bubble-slot for a period of 2 cycles which was not interrupted by a back-end stall.
Retired instructions that are fetched after an interval where the front-end had at least 1 bubble-slot for a period of 2 cycles which was not interrupted by a back-end stall. Precise Event.
Retired instructions that are fetched after an interval where the front-end had at least 2 bubble-slots for a period of 2 cycles which was not interrupted by a back-end stall.
Retired instructions that are fetched after an interval where the front-end had at least 2 bubble-slots for a period of 2 cycles which was not interrupted by a back-end stall. Precise Event.
Retired instructions that are fetched after an interval where the front-end had at least 3 bubble-slots for a period of 2 cycles which was not interrupted by a back-end stall.
Retired instructions that are fetched after an interval where the front-end had at least 3 bubble-slots for a period of 2 cycles which was not interrupted by a back-end stall. Precise Event.
Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 2 cycles which was not interrupted by a back-end stall. Precise Event.
Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 32 cycles which was not interrupted by a back-end stall.
Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 32 cycles which was not interrupted by a back-end stall. Precise Event.
Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall.
Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall. Precise Event.
Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall.
Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall. Precise Event.
Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall.
Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall. Precise Event.
Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 8 cycles which was not interrupted by a back-end stall.
Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 8 cycles which was not interrupted by a back-end stall. Precise Event.
Retired instructions that experienced an STLB (2nd-level TLB) true miss.
Retired instructions that experienced an STLB (2nd-level TLB) true miss. Precise Event.
Number of times HLE abort was triggered
Number of times an HLE execution aborted due to unfriendly events (such as interrupts).
Number of times an HLE execution aborted due to various memory events (e.g., read/write capacity and conflicts).
Number of times an HLE execution aborted due to incompatible memory type
Number of times HLE abort was triggered (PEBS)
Number of times an HLE execution aborted due to hardware timer expiration.
Number of times an HLE execution aborted due to HLE-unfriendly instructions and certain unfriendly events (such as AD assists etc.).
Number of times HLE commit succeeded
Number of times we entered an HLE region; does not count nested transactions.
This event counts the number of hardware interrupts received by the processor.
Cycles where a code fetch is stalled due to L1 instruction cache miss.
Instruction fetch tag lookups that hit in the instruction cache (L1I). Counts at 64-byte cache-line granularity.
Instruction fetch tag lookups that miss in the instruction cache (L1I). Counts at 64-byte cache-line granularity.
Cycles where a code fetch is stalled due to L1 instruction cache tag miss.
This event counts the number of cycles 4 uops were delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Counting includes uops that may 'bypass' the IDQ.
This event counts the number of cycles uops were delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Counting includes uops that may 'bypass' the IDQ.
This event counts the number of cycles 4 uops were delivered to Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops that may 'bypass' the IDQ. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).
This event counts the number of cycles uops were delivered to Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops that may 'bypass' the IDQ. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).
This event counts cycles during which uops are being delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Counting includes uops that may 'bypass' the IDQ.
This event counts the number of uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Counting includes uops that may 'bypass' the IDQ.
This event counts cycles during which uops are being delivered to Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops that may 'bypass' the IDQ.
This event counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops that may 'bypass' the IDQ. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).
This event counts cycles during which uops are being delivered to the Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may 'bypass' the IDQ. Uops may be initiated by the Decode Stream Buffer (DSB) or MITE.
This event counts cycles during which uops initiated by Decode Stream Buffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may 'bypass' the IDQ.
This event counts the number of uops initiated by MITE and delivered to the Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may 'bypass' the IDQ.
Number of switches from DSB (Decode Stream Buffer) or MITE (legacy decode pipeline) to the Microcode Sequencer
This event counts the total number of uops delivered to the Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may 'bypass' the IDQ. Uops may be initiated by the Decode Stream Buffer (DSB) or MITE.
This event counts the number of uops not delivered to the Resource Allocation Table (RAT), per thread, adding '4 - x' when the RAT is not stalled and the Instruction Decode Queue (IDQ) delivers x uops to the RAT (where x belongs to {0,1,2,3}). Counting does not cover cases when: a. the IDQ-RAT pipe serves the other thread; b. the RAT is stalled for the thread (including uop drops and clear BE conditions); c. the IDQ delivers four uops.
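Since the RAT can accept up to 4 uops per cycle, this event divided by the total issue-slot count gives the classic Top-down "front-end bound" fraction. A minimal sketch with illustrative counter values:

```python
# Sketch: front-end bound fraction from IDQ_UOPS_NOT_DELIVERED.
# Each cycle offers 4 issue slots to the RAT on this microarchitecture,
# so the slot total is 4 * unhalted cycles.

ISSUE_WIDTH = 4  # uops per cycle the RAT can accept

def frontend_bound(idq_uops_not_delivered, unhalted_cycles):
    """Fraction of issue slots left empty by the front end."""
    slots = ISSUE_WIDTH * unhalted_cycles
    return idq_uops_not_delivered / slots

# 1,000,000 undelivered slots over 1,000,000 cycles -> 25% of issue
# slots were wasted waiting on the front end.
print(frontend_bound(1_000_000, 1_000_000))
```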
This event counts, on a per-thread basis, cycles when no uops are delivered to the Resource Allocation Table (RAT). IDQ_Uops_Not_Delivered.core = 4.
Counts cycles where the front end (FE) delivered 4 uops or the Resource Allocation Table (RAT) was stalling the FE.
This event counts, on a per-thread basis, cycles when less than 1 uop is delivered to the Resource Allocation Table (RAT). IDQ_Uops_Not_Delivered.core >= 3.
Cycles with less than 2 uops delivered by the front end
Cycles with less than 3 uops delivered by the front end
This event counts stalls that occurred due to a length-changing prefix (66, 67, or REX.W when they change the length of the decoded instruction). The occurrence count is proportional to the number of prefixes in a 16-byte line. This may result in a three-cycle penalty for each LCP in a 16-byte chunk.
This event counts the number of instructions retired from execution. For instructions that consist of multiple micro-ops, this event counts the retirement of the last micro-op of the instruction. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. INST_RETIRED.ANY_P is counted by a programmable counter and it is an architectural performance event. Counting: Faulting executions of GETSEC/VM entry/VM Exit/MWait will not count as retired instructions
This event counts the number of instructions (EOMs) retired. Counting covers macro-fused instructions individually (that is, increments by two).
This is a precise version (that is, uses PEBS) of the event that counts instructions retired.
Number of cycles using an always-true condition applied to the PEBS instructions retired event. (inst_ret < 16)
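The INST_RETIRED and CPU_CLK_UNHALTED counters described above form the most basic efficiency ratio, instructions per cycle (IPC). A minimal sketch with illustrative counter values:

```python
# Sketch: IPC from INST_RETIRED.ANY and CPU_CLK_UNHALTED.THREAD.
# Both events are counted on dedicated fixed counters, so this ratio
# costs no programmable counter slots.

def ipc(instructions_retired, unhalted_core_cycles):
    """Instructions retired per unhalted core cycle."""
    return instructions_retired / unhalted_core_cycles

# 2,000,000 instructions over 1,000,000 cycles -> IPC of 2.0.
print(ipc(2_000_000, 1_000_000))
```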
Cycles the issue-stage is waiting for front-end to fetch from resteered path following branch misprediction or machine clear events.
Cycles checkpoints in Resource Allocation Table (RAT) are recovering from JEClear or machine clear
Core cycles the allocator was stalled due to recovery from earlier clear event for any thread running on the physical core (e.g. misprediction or memory nuke)
This event counts the number of flushes of the big or small ITLB pages. Counting includes both TLB Flush (covering all sets) and TLB Set Clear (set-specific).
This event counts instruction fetch misses in all ITLB levels that cause page walks of any page size (4K/2M/4M/1G).
Instruction fetch requests that miss the ITLB and hit the STLB.
Code misses in all TLB levels that cause a page walk that completes. (All page sizes)
Counts 1 per cycle for each PMH that is busy with a page walk for an instruction fetch request.
This event counts L1D data line replacements including opportunistic replacements, and replacements that require stall-for-replace or block-for-replace.
Number of times a request needed a fill buffer (FB) entry but no entry was available for it; that is, FB unavailability was the dominant reason for blocking the request. A request includes cacheable/uncacheable demands, that is, load, store, or SW prefetch. HWP are excluded.
This event counts the duration of L1D miss outstanding: each cycle, the number of Fill Buffers (FB) outstanding that are required by demand reads. An FB is either held by demand loads, or held by non-demand loads and hit at least once by demand. The valid outstanding interval runs until the FB is deallocated, starting in one of the following ways: from FB allocation, if the FB is allocated by demand; or from the demand hit of the FB, if it is allocated by a hardware or software prefetch. Note: In the L1D, a demand read covers cacheable and noncacheable demand loads, including ones causing cache-line splits and reads due to page walks resulting from any request type.
This event counts duration of L1D miss outstanding in cycles.
Cycles with L1D load Misses outstanding from any thread on physical core
This event counts the number of L2 cache lines filling the L2. Counting does not cover rejects.
tbd
tbd
tbd
This event counts the total number of L2 code requests.
This event counts the number of demand Data Read requests (including requests from L1D hardware prefetchers). These loads may hit or miss the L2 cache. Only non-rejected loads are counted.
Demand requests that miss L2 cache
Demand requests to L2 cache
This event counts the total number of requests from the L2 hardware prefetchers.
This event counts the total number of RFO (read for ownership) requests to L2 cache. L2 RFO requests include both L1D demand RFO misses as well as L1D RFO prefetches.
L2 cache hits when fetching instructions, code reads.
L2 cache misses when fetching instructions
This event counts the number of demand Data Read requests that hit L2 cache. Only non-rejected loads are counted.
This event counts the number of demand Data Read requests that miss L2 cache. Only non-rejected loads are counted.
All requests that miss L2 cache
Requests from the L1/L2/L3 hardware prefetchers or Load software prefetches that hit L2 cache
Requests from the L1/L2/L3 hardware prefetchers or Load software prefetches that miss L2 cache
All L2 requests
RFO requests that hit L2 cache
RFO requests that miss L2 cache
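The hit/miss pairs among the L2 request events above combine into simple hit-rate ratios. A minimal sketch, with illustrative counter values, for demand data reads (the same shape works for RFO or code reads):

```python
# Sketch: L2 demand data read hit rate from the L2_RQSTS-style events
# above: hits divided by total demand data reads.

def l2_demand_read_hit_rate(dd_read_hits, dd_reads_total):
    """Fraction of demand data reads that hit L2 (0.0 if none occurred)."""
    if dd_reads_total == 0:
        return 0.0
    return dd_read_hits / dd_reads_total

# 900 hits out of 1,000 demand data reads -> 90% L2 hit rate.
print(l2_demand_read_hit_rate(900, 1_000))
```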
This event counts L2 writebacks that access L2 cache.
The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use
This event counts how many times a load operation got the true Block-on-Store blocking code, preventing store forwarding. This includes cases when: the preceding store conflicts with the load (incomplete overlap); store forwarding is impossible due to u-arch limitations; preceding lock RMW operations are not forwarded; the store has the no-forward bit set (uncacheable/page-split/masked stores); all-blocking stores are used (mostly fences and port I/O); and others. The most common case is a load blocked due to its address range overlapping with a preceding smaller uncompleted store. Note: This event does not take into account cases of out-of-SW-control (for example, SbTailHit), unknown physical STA, and cases of blocking loads on a store due to being of non-WB memory type or a lock. These cases are covered by other events. See the table of unsupported store forwards in the Optimization Guide.
This event counts false dependencies in MOB when the partial comparison upon loose net check and dependency was resolved by the Enhanced Loose net mechanism. This may not result in high performance penalties. Loose net checks can fail when loads and stores are 4k aliased.
This event counts all non-software-prefetch load dispatches that hit the fill buffer (FB) allocated for the software prefetch. It can also be incremented by some lock instructions, so it should only be used with profiling so that the locks can be excluded by asm inspection of the nearby instructions.
This event counts the number of cycles when the L1D is locked. It is a superset of the 0x1 mask (BUS_LOCK_CLOCKS.BUS_LOCK_DURATION).
This event counts core-originated cacheable demand requests that miss the last level cache (LLC). Demand requests include loads, RFOs, and hardware prefetches from L1D, and instruction fetches from IFU.
This event counts core-originated cacheable demand requests that refer to the last level cache (LLC). Demand requests include loads, RFOs, and hardware prefetches from L1D, and instruction fetches from IFU.
Cycles with 4 uops delivered by the LSD and none from the decoder.
Cycles with uops delivered by the LSD and none from the decoder.
Number of Uops delivered by the LSD.
Number of machine clears (nukes) of any type.
This event counts the number of memory ordering Machine Clears detected. Memory Ordering Machine Clears can result from one of the following: 1. memory disambiguation, 2. external snoop, or 3. cross SMT-HW-thread snoop (stores) hitting load buffer.
This event counts self-modifying code (SMC) detected, which causes a machine clear.
All retired load instructions.
All retired load instructions. (Precise Event)
All retired store instructions.
All retired store instructions. (Precise Event)
Retired load instructions with locked access.
Retired load instructions with locked access. (Precise Event)
Retired load instructions that split across a cacheline boundary.
Retired load instructions that split across a cacheline boundary. (Precise Event)
Retired store instructions that split across a cacheline boundary.
Retired store instructions that split across a cacheline boundary. (Precise Event)
Retired load instructions that miss the STLB.
Retired load instructions that miss the STLB. (Precise Event)
Retired store instructions that miss the STLB.
Retired store instructions that miss the STLB. (Precise Event)
Retired load instructions whose data sources were L3 and a cross-core snoop hit in an on-package core cache.
Retired load instructions whose data sources were HitM responses from a shared L3.
Retired load instructions whose data sources were HitM responses from a shared L3.
Retired load instructions whose data sources were L3 and a cross-core snoop hit in an on-package core cache.
Retired load instructions whose data sources were an L3 hit and a cross-core snoop miss in an on-package core cache.
Retired load instructions whose data sources were an L3 hit and a cross-core snoop miss in an on-package core cache.
Retired load instructions whose data sources were hits in L3 without snoops required.
Retired load instructions whose data sources were hits in L3 without snoops required.
Retired load instructions whose data sources were loads that missed L1 but hit an FB due to a preceding miss to the same cache line with the data not ready.
Retired load instructions whose data sources were loads that missed L1 but hit an FB due to a preceding miss to the same cache line with the data not ready.
Retired load instructions with L1 cache hits as data sources
Retired load instructions with L1 cache hits as data sources
Retired load instructions with L1 cache misses as data sources
Retired load instructions with L1 cache misses as data sources
Retired load instructions with L2 cache hits as data sources
Retired load instructions with L2 cache hits as data sources
Retired load instructions with L2 cache misses as data sources
Retired load instructions with L2 cache misses as data sources
Retired load instructions with L3 cache hits as data sources
Retired load instructions with L3 cache hits as data sources
Retired load instructions with L3 cache misses as data sources
Retired load instructions with L3 cache misses as data sources
Counts loads when the latency from first dispatch to completion is greater than 128 cycles. Reported latency may be longer than just the memory latency.
Counts loads when the latency from first dispatch to completion is greater than 16 cycles. Reported latency may be longer than just the memory latency.
Counts loads when the latency from first dispatch to completion is greater than 256 cycles. Reported latency may be longer than just the memory latency.
Counts loads when the latency from first dispatch to completion is greater than 32 cycles. Reported latency may be longer than just the memory latency.
Counts loads when the latency from first dispatch to completion is greater than 4 cycles. Reported latency may be longer than just the memory latency.
Counts loads when the latency from first dispatch to completion is greater than 512 cycles. Reported latency may be longer than just the memory latency.
Counts loads when the latency from first dispatch to completion is greater than 64 cycles. Reported latency may be longer than just the memory latency.
Counts loads when the latency from first dispatch to completion is greater than 8 cycles. Reported latency may be longer than just the memory latency.
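The threshold events above are cumulative (each counts loads with latency greater than N cycles), so the number of loads falling into a latency band is the difference between two adjacent thresholds. A minimal sketch with illustrative counter values:

```python
# Sketch: bucketing loads into a latency band from two cumulative
# ">N cycles" threshold counts. count(>low) - count(>high) gives the
# loads whose latency fell in (low, high] cycles.

def loads_in_band(count_gt_low, count_gt_high):
    """Loads whose latency lies in the (low, high] cycle band."""
    return count_gt_low - count_gt_high

# 5,000 loads took over 32 cycles and 1,200 took over 64 cycles, so
# 3,800 loads fell in the (32, 64] band.
print(loads_in_band(5_000, 1_200))
```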
This event counts the demand and prefetch data reads. All Core Data Reads include cacheable 'Demands' and L2 prefetchers (not L3 prefetchers). Counting also covers reads due to page walks resulting from any request type.
This event counts memory transactions that reached the super queue, including requests initiated by the core, all L3 prefetches, page walks, and so on.
This event counts both cacheable and non-cacheable code read requests.
This event counts the Demand Data Read requests sent to uncore. Use it in conjunction with OFFCORE_REQUESTS_OUTSTANDING to determine average latency in the uncore.
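As the description notes, dividing the outstanding-occupancy count by the request count yields an average uncore latency per demand data read. A minimal sketch of that arithmetic (the function name and sample values are illustrative):

```python
def avg_demand_read_latency(outstanding_sum, demand_reads):
    """Average uncore latency per demand data read, in cycles.

    outstanding_sum: OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD
                     (sum over all cycles of in-flight demand reads)
    demand_reads:    OFFCORE_REQUESTS.DEMAND_DATA_RD
                     (demand read requests sent to the uncore)
    """
    return outstanding_sum / demand_reads if demand_reads else 0.0

# e.g. 1,500,000 outstanding-cycles over 10,000 requests -> 150 cycles each
print(avg_demand_read_latency(1_500_000, 10_000))  # prints: 150.0
```

This works because the occupancy event integrates the number of in-flight transactions over time, so dividing by the number of transactions recovers the mean residency (Little's law).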
This event counts demand RFO (read for ownership) requests, including regular RFOs, locks, and ItoM transactions.
Demand Data Read requests that miss the L3 cache
This event counts the number of cases when the offcore requests buffer cannot take more entries for the core. This can happen when the super queue does not contain eligible entries, or when the L1D writeback pending FIFO is full. Note: the writeback pending FIFO has six entries.
This event counts the number of offcore outstanding cacheable Core Data Read transactions in the super queue every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See corresponding Umask under OFFCORE_REQUESTS.
This event counts cycles when offcore outstanding cacheable Core Data Read transactions are present in the super queue. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See corresponding Umask under OFFCORE_REQUESTS.
This event counts the number of offcore outstanding Code Read transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the transaction completion is sent to the requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.
This event counts cycles when offcore outstanding Demand Data Read transactions are present in the super queue (SQ). A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation).
This event counts the number of offcore outstanding demand RFO transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the transaction completion is sent to the requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.
Cycles with at least one Demand Data Read request that misses the L3 cache in the super queue
This event counts the number of offcore outstanding Code Read transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the transaction completion is sent to the requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.
This event counts the number of offcore outstanding Demand Data Read transactions in the super queue (SQ) every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor. See the corresponding Umask under OFFCORE_REQUESTS. Note: A prefetch promoted to Demand is counted from the promotion point.
Cycles with at least 6 offcore outstanding Demand Data Read transactions in the uncore queue
This event counts the number of offcore outstanding RFO (store) transactions in the super queue (SQ) every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See corresponding Umask under OFFCORE_REQUESTS.
Counts the number of offcore outstanding Demand Data Read requests that miss the L3 cache in the super queue every cycle.
Cycles with at least 6 Demand Data Read requests that miss the L3 cache in the super queue
An offcore response event can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefined mask bit values in a dedicated MSR that specify attributes of the offcore transaction.
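Conceptually, the dedicated MSR's mask is composed by OR-ing together request-type, response-type, and snoop-info bits. The sketch below shows only that composition; the bit positions used here are illustrative assumptions, not the authoritative MSR_OFFCORE_RSP layout, which is defined per-microarchitecture in the Intel Software Developer's Manual.

```python
# Sketch of composing an offcore response mask. Bit positions are ASSUMED
# for illustration only; consult the Intel SDM for the real MSR_OFFCORE_RSP
# layout on a given microarchitecture.
REQUEST = {"DEMAND_DATA_RD": 1 << 0, "DEMAND_RFO": 1 << 1, "DEMAND_CODE_RD": 1 << 2}
RESPONSE = {"ANY_RESPONSE": 1 << 16, "L3_HIT": 1 << 17}  # assumed positions
SNOOP = {"SNOOP_NONE": 1 << 31}                          # assumed position

def offcore_mask(requests, responses, snoops=()):
    """OR together the selected request, response, and snoop bits."""
    mask = 0
    for name in requests:
        mask |= REQUEST[name]
    for name in responses:
        mask |= RESPONSE[name]
    for name in snoops:
        mask |= SNOOP[name]
    return mask

# All demand code reads with any response type:
print(hex(offcore_mask(["DEMAND_CODE_RD"], ["ANY_RESPONSE"])))  # prints: 0x10004
```

Each of the event descriptions that follow corresponds to one such request/response/snoop combination.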
Counts all demand code reads that have any response type.
Counts all demand code reads that tbd
Counts all demand code reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded.
Counts all demand code reads that hit in the L3 and the snoops sent to sibling cores return clean response.
Counts all demand code reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores.
Counts demand data reads that have any response type.
Counts demand data reads that tbd
Counts demand data reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded.
Counts demand data reads that hit in the L3 and the snoops sent to sibling cores return clean response.
Counts demand data reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores.
Counts all demand data writes (RFOs) that have any response type.
Counts all demand data writes (RFOs) that tbd
Counts all demand data writes (RFOs) that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded.
Counts all demand data writes (RFOs) that hit in the L3 and the snoops sent to sibling cores return clean response.
Counts all demand data writes (RFOs) that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores.
Counts any other requests that have any response type.
Counts any other requests that tbd
Counts any other requests that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded.
Counts any other requests that hit in the L3 and the snoops sent to sibling cores return clean response.
Counts any other requests that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores.
Number of times a microcode assist is invoked by hardware other than FP assists. Examples include AD (page Access Dirty) and AVX*-related assists.
This event counts resource-related stall cycles. Stalls can occur because: any u-arch structure became full (LB, SB, RS, ROB, BOB, LM, Physical Register Reclaim Table (PRRT), or Physical History Table (PHT) slots); any u-arch structure became empty (such as the INT/SIMD free lists); or because of FPU control word (FPCW), MXCSR, and other conditions. This counts cycles during which the pipeline backend blocked uop delivery from the front end.
This event counts stall cycles caused by the store buffer (SB) overflow (excluding draining from synch). This counts cycles that the pipeline backend blocked uop delivery from the front end.
This event counts cycles during which the reservation station (RS) is empty for the thread. This is usually caused by severely costly branch mispredictions or by allocator/front-end issues. Note: in single-thread mode, the inactive thread should drive 0.
Counts the end of periods during which the Reservation Station (RS) was empty. This can be useful for precisely locating front-end latency-bound issues.
Number of times RTM abort was triggered
Number of times an RTM execution aborted due to none of the previous four categories (e.g., an interrupt)
Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts)
Number of times an RTM execution aborted due to incompatible memory type
Number of times RTM abort was triggered (PEBS)
Number of times an RTM execution aborted due to uncommon conditions.
Number of times an RTM execution aborted due to HLE-unfriendly instructions
Number of times RTM commit succeeded
Number of times an RTM region was entered; does not count nested transactions
This event counts the number of DTLB flush attempts of the thread-specific entries.
This event counts the number of any STLB flush attempts (such as entire, VPID, PCID, InvPage, CR3 write, and so on).
Unfriendly TSX abort triggered by a flowmarker
Unfriendly TSX abort triggered by a vzeroupper instruction
Unfriendly TSX abort triggered by a nest count that is too deep
RTM region detected inside HLE
HLE region detected inside HLE
Number of times a transactional abort was signaled due to a data capacity limitation for transactional reads or writes.
Number of times a TSX line had a cache conflict
Number of times a TSX abort was triggered due to a release/commit with a data and address mismatch
Number of times a TSX abort was triggered due to a commit while the Lock Buffer was not empty
Number of times a TSX abort was triggered due to attempting an unsupported alignment from the Lock Buffer
Number of times a TSX abort was triggered due to a non-release/commit store to a lock
Number of times a Lock Buffer entry could not be allocated
Number of entries allocated. Accounts for any type: e.g., snoop, core aperture, and so on.
Each cycle, counts the number of all core outgoing valid entries. Such an entry is defined as valid from its allocation until the first of the IDI0 or DRS0 messages is sent out. Accounts for coherent and non-coherent traffic.
Cycles with at least one outstanding request waiting for data return from the memory controller. Accounts for coherent and non-coherent requests initiated by IA cores, the Processor Graphics unit, or the LLC.
Each cycle, counts the number of 'valid' coherent Data Read entries that are in DirectData mode. Such an entry is defined as valid from its allocation until data is sent to the core (first chunk, IDI0). Applicable to IA cores' requests in the normal case.
Total number of Core outgoing entries allocated. Accounts for Coherent and non-coherent traffic.
Number of Core coherent Data Read entries allocated in DirectData mode
Number of writes allocated, including any write transactions: full/partial writes and evictions.
L3 lookup: any request that accesses the cache and finds the line in E or S state
L3 lookup: any request that accesses the cache and finds the line in I state
L3 lookup: any request that accesses the cache and finds the line in M state
L3 lookup: any request that accesses the cache and finds the line in any MESI state
L3 lookup: read request that accesses the cache and finds the line in E or S state
L3 lookup: read request that accesses the cache and finds the line in I state
L3 lookup: read request that accesses the cache and finds the line in any MESI state
L3 lookup: write request that accesses the cache and finds the line in E or S state
L3 lookup: write request that accesses the cache and finds the line in M state
L3 lookup: write request that accesses the cache and finds the line in any MESI state
A cross-core snoop initiated by this Cbox due to a processor core memory request that hits a modified line in another processor core.
A cross-core snoop initiated by this Cbox due to a processor core memory request that hits a non-modified line in another processor core.
A cross-core snoop resulting from an L3 eviction that misses in some processor core.
A cross-core snoop initiated by this Cbox due to a processor core memory request that misses in some processor core.
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 0.
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 1.
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 2.
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 3.
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 4.
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 5.
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 6.
This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 7.
Number of uops executed from any thread
Cycles where at least 1 micro-op is executed from any thread on the physical core
Cycles where at least 2 micro-ops are executed from any thread on the physical core
Cycles where at least 3 micro-ops are executed from any thread on the physical core
Cycles where at least 4 micro-ops are executed from any thread on the physical core
Cycles with no micro-ops executed from any thread on the physical core
Cycles where at least 1 uop was executed per-thread
Cycles where at least 2 uops were executed per-thread
Cycles where at least 3 uops were executed per-thread
Cycles where at least 4 uops were executed per-thread
This event counts cycles during which no uops were dispatched from the Reservation Station (RS) per thread.
Number of uops to be executed per-thread each cycle.
Counts the number of x87 uops dispatched.
This event counts the number of Uops issued by the Resource Allocation Table (RAT) to the reservation station (RS).
Number of slow LEA uops allocated. A uop is generally considered a slow LEA if it has three sources (e.g., two sources plus an immediate), regardless of whether it results from an LEA instruction.
This event counts cycles during which the Resource Allocation Table (RAT) does not issue any Uops to the reservation station (RS) for the current thread.
This event counts the number of Blend Uops issued by the Resource Allocation Table (RAT) to the reservation station (RS) in order to preserve the upper bits of vector registers. Starting with the Skylake microarchitecture, these blend uops are needed because every Intel SSE instruction executed in the Dirty Upper State needs to preserve bits 128-255 of the destination register. For more information, refer to the 'Mixing Intel AVX and Intel SSE Code' section of the Optimization Guide.
This is a non-precise version (that is, does not use PEBS) of the event that counts the number of retirement slots used.
This is a non-precise version (that is, does not use PEBS) of the event that counts cycles without actually retired uops.
Number of cycles using an always-true condition (uops_ret < 16) applied to the non-PEBS uops retired event.
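The retirement-slot count above is commonly combined with unhalted core cycles into the top-down 'Retiring' fraction: the share of issue slots that actually retired a uop. A minimal sketch of that arithmetic, assuming a 4-slots-per-cycle pipeline (the function name and sample values are illustrative):

```python
def retiring_fraction(retire_slots, core_cycles, slots_per_cycle=4):
    """Top-down 'Retiring' fraction: share of issue slots that retired a uop.

    retire_slots:    count from the retirement-slots-used event
    core_cycles:     unhalted core cycles over the same interval
    slots_per_cycle: pipeline width (assumed 4 here)
    """
    return retire_slots / (slots_per_cycle * core_cycles)

# e.g. 3M retirement slots used over 1M cycles on a 4-wide machine:
print(retiring_fraction(3_000_000, 1_000_000))  # prints: 0.75
```

A fraction well below 1.0 indicates that most issue slots were lost to front-end, bad-speculation, or back-end stalls, which the other events in this section help attribute.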