Intel® VTune™ Amplifier XE and Intel® VTune™ Amplifier for Systems Help
This section provides a reference for the hardware events that can be monitored for the CPU(s).
The following performance-monitoring events are supported:
The BACLEARS event counts the number of times the front end is resteered, mainly when the Branch Prediction Unit cannot provide a correct prediction and this is corrected by the Branch Address Calculator at the front end. The BACLEARS.ANY event counts the number of baclears for any type of branch.
The BACLEARS event counts the number of times the front end is resteered, mainly when the Branch Prediction Unit cannot provide a correct prediction and this is corrected by the Branch Address Calculator at the front end. The BACLEARS.COND event counts the number of JCC (Jump on Conditional Code) baclears.
The BACLEARS event counts the number of times the front end is resteered, mainly when the Branch Prediction Unit cannot provide a correct prediction and this is corrected by the Branch Address Calculator at the front end. The BACLEARS.RETURN event counts the number of RETURN baclears.
ALL_BRANCHES counts the number of any branch instructions retired. Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch's true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
ALL_BRANCHES counts the number of any branch instructions retired (Precise Event). Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch's true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
CALL counts the number of near CALL branch instructions retired. Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch's true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
CALL counts the number of near CALL branch instructions retired (Precise Event). Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch's true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
FAR counts the number of far branch instructions retired. Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch's true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
FAR counts the number of far branch instructions retired (Precise Event). Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch's true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
IND_CALL counts the number of near indirect CALL branch instructions retired. Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch's true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
IND_CALL counts the number of near indirect CALL branch instructions retired (Precise Event). Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch's true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
JCC counts the number of conditional branch (JCC) instructions retired. Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch's true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
JCC counts the number of conditional branch (JCC) instructions retired (Precise Event). Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch's true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
NON_RETURN_IND counts the number of near indirect JMP and near indirect CALL branch instructions retired. Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch's true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
NON_RETURN_IND counts the number of near indirect JMP and near indirect CALL branch instructions retired (Precise Event). Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch's true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
REL_CALL counts the number of near relative CALL branch instructions retired. Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch's true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
REL_CALL counts the number of near relative CALL branch instructions retired (Precise Event). Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch's true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
RETURN counts the number of near RET branch instructions retired. Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch's true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
RETURN counts the number of near RET branch instructions retired (Precise Event). Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch's true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
TAKEN_JCC counts the number of taken conditional branch (JCC) instructions retired. Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch's true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
TAKEN_JCC counts the number of taken conditional branch (JCC) instructions retired (Precise Event). Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch's true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns.
ALL_BRANCHES counts the number of any mispredicted branch instructions retired. This umask is an architecturally defined event. This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa. When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
ALL_BRANCHES counts the number of any mispredicted branch instructions retired (Precise Event). This umask is an architecturally defined event. This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa. When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
IND_CALL counts the number of mispredicted near indirect CALL branch instructions retired. This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa. When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
IND_CALL counts the number of mispredicted near indirect CALL branch instructions retired (Precise Event). This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa. When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
JCC counts the number of mispredicted conditional branch (JCC) instructions retired. This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa. When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
JCC counts the number of mispredicted conditional branch (JCC) instructions retired (Precise Event). This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa. When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
NON_RETURN_IND counts the number of mispredicted near indirect JMP and near indirect CALL branch instructions retired. This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa. When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
NON_RETURN_IND counts the number of mispredicted near indirect JMP and near indirect CALL branch instructions retired (Precise Event). This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa. When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
RETURN counts the number of mispredicted near RET branch instructions retired. This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa. When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
RETURN counts the number of mispredicted near RET branch instructions retired (Precise Event). This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa. When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
TAKEN_JCC counts the number of mispredicted taken conditional branch (JCC) instructions retired. This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa. When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
TAKEN_JCC counts the number of mispredicted taken conditional branch (JCC) instructions retired (Precise Event). This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa. When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.
Counts the number of core requests (demand and L1 prefetchers) rejected by the L2Q due to a full or nearly full condition, which likely indicates back pressure from the L2Q. It also counts requests that would have gone directly to the XQ but are rejected due to a full or nearly full condition, indicating back pressure from the IDI link. The L2Q may also reject transactions from a core to ensure fairness between cores, or to delay a core's dirty eviction when the address conflicts with incoming external snoops. (Note that L2 prefetcher requests that are dropped are not counted by this event.)
Counts the number of core cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time. For this reason this event may have a changing ratio with regards to time. In systems with a constant core frequency, this event can give you a measurement of the elapsed time while the core was not in halt state by dividing the event count by the core frequency. This event is architecturally defined and is a designated fixed counter. CPU_CLK_UNHALTED.CORE and CPU_CLK_UNHALTED.CORE_P use the core frequency, which may change from time to time. CPU_CLK_UNHALTED.REF_TSC and CPU_CLK_UNHALTED.REF are not affected by core frequency changes but count as if the core is running at the maximum frequency all the time. The fixed events are CPU_CLK_UNHALTED.CORE and CPU_CLK_UNHALTED.REF_TSC and the programmable events are CPU_CLK_UNHALTED.CORE_P and CPU_CLK_UNHALTED.REF.
This event counts the number of core cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. In mobile systems the core frequency may change from time to time. For this reason this event may have a changing ratio with regards to time.
This event counts the number of reference cycles that the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. In mobile systems the core frequency may change from time to time. This event is not affected by core frequency changes but counts as if the core is running at the maximum frequency all the time.
Counts the number of reference cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time. This event is not affected by core frequency changes but counts as if the core is running at the maximum frequency all the time. Divide this event count by the core frequency to determine the elapsed time while the core was not in a halt state. This event is architecturally defined and is a designated fixed counter. CPU_CLK_UNHALTED.CORE and CPU_CLK_UNHALTED.CORE_P use the core frequency, which may change from time to time. CPU_CLK_UNHALTED.REF_TSC and CPU_CLK_UNHALTED.REF are not affected by core frequency changes but count as if the core is running at the maximum frequency all the time. The fixed events are CPU_CLK_UNHALTED.CORE and CPU_CLK_UNHALTED.REF_TSC and the programmable events are CPU_CLK_UNHALTED.CORE_P and CPU_CLK_UNHALTED.REF.
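As an illustration of the time derivation described for CPU_CLK_UNHALTED.CORE above, the following minimal C sketch assumes a constant core frequency and a cycle count that has already been collected; the numbers used in main() are placeholders, not measured values.

    #include <stdio.h>

    /* Elapsed unhalted time = unhalted core cycles / core frequency.
     * Valid only when the core frequency does not change during collection. */
    static double unhalted_seconds(unsigned long long unhalted_core_cycles,
                                   double core_frequency_hz) {
        return (double)unhalted_core_cycles / core_frequency_hz;
    }

    int main(void) {
        /* Placeholder sample: 2.4e9 unhalted cycles on a 2.4 GHz core. */
        printf("%.3f s not halted\n", unhalted_seconds(2400000000ULL, 2.4e9));
        return 0;
    }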
Cycles the divider is busy. This event counts the cycles when the divide unit is unable to accept a new divide UOP because it is busy processing a previously dispatched UOP. The cycles will be counted irrespective of whether or not another divide UOP is waiting to enter the divide unit (from the RS). This event might count cycles while a divide is in progress even if the RS is empty. The divide instruction is one of the longest latency instructions in the machine. Hence, it has a special event associated with it to help determine if divides are delaying the retirement of instructions.
Counts the number of times a decode restriction reduced the decode throughput due to wrong instruction length prediction
Counts the number of cycles the NIP stalls because of an icache miss. This is a cumulative count of cycles the NIP stalled for all icache misses.
This event counts all instruction fetches, including uncacheable fetches.
This event counts all instruction fetches from the instruction cache.
This event counts all instruction fetches that miss the Instruction cache or produce memory requests. This includes uncacheable fetches. An instruction fetch miss is counted only once and not once for every cycle it is outstanding.
This event counts the number of instructions that retire. For instructions that consist of multiple micro-ops, this event counts exactly once, as the last micro-op of the instruction retires. The event continues counting while instructions retire, including during interrupt service routines caused by hardware interrupts, faults or traps. Background: Modern microprocessors employ extensive pipelining and speculative techniques. Since sometimes an instruction is started but never completed, the notion of "retirement" is introduced. A retired instruction is one that commits its state; stated differently, an instruction might be abandoned at some point, and no instruction is truly finished until it retires. This counter measures the number of completed instructions. The fixed event is INST_RETIRED.ANY and the programmable event is INST_RETIRED.ANY_P.
This event counts the number of instructions that retire execution. For instructions that consist of multiple micro-ops, this event counts the retirement of the last micro-op of the instruction. The counter continues counting during hardware interrupts, traps, and inside interrupt handlers.
This event counts the number of instructions that retire execution. For instructions that consist of multiple micro-ops, this event counts the retirement of the last micro-op of the instruction. The counter continues counting during hardware interrupts, traps, and inside interrupt handlers. (Precise Event)
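INST_RETIRED.ANY is commonly combined with CPU_CLK_UNHALTED.CORE to form the cycles-per-instruction (CPI) ratio; this pairing is a standard derived metric rather than something defined by these events themselves. A minimal C sketch with invented counts:

    #include <stdio.h>

    /* CPI = unhalted core cycles / retired instructions; IPC is its inverse. */
    static double cpi(unsigned long long unhalted_core_cycles,
                      unsigned long long instructions_retired) {
        return (double)unhalted_core_cycles / (double)instructions_retired;
    }

    int main(void) {
        /* Placeholder counts for illustration only. */
        unsigned long long cycles = 3000000000ULL, insts = 2000000000ULL;
        double c = cpi(cycles, insts);
        printf("CPI = %.2f, IPC = %.2f\n", c, 1.0 / c);
        return 0;
    }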
This event counts the number of demand and prefetch transactions that the L2 XQ rejects due to a full or near full condition which likely indicates back pressure from the IDI link. The XQ may reject transactions from the L2Q (non-cacheable requests), BBS (L2 misses) and WOB (L2 write-back victims)
This event counts the total number of L2 cache references; the corresponding miss event counts the number of L2 cache misses.
This event counts requests originating from the core that reference a cache line in the L2 cache.
Machine clears happen when the hardware detects a condition on an instruction that requires special handling to get the right answer. When such a condition is signaled on an instruction, the front end of the machine is notified that it must restart, so no more instructions will be decoded from the current path. All instructions "older" than this one will be allowed to finish. This instruction and all "younger" instructions must be cleared, since they must not be allowed to complete. Essentially, the hardware waits until the problematic instruction is the oldest instruction in the machine. This means all older instructions are retired, and all pending stores (from older instructions) are completed. Then instructions from the new path delivered by the front end are allowed to start into the machine. There are many conditions that might cause a machine clear (including the receipt of an interrupt, or a trap or a fault). All those conditions (including but not limited to MACHINE_CLEARS.MEMORY_ORDERING, MACHINE_CLEARS.SMC, and MACHINE_CLEARS.FP_ASSIST) are captured in the ANY event. In addition, some conditions can be specifically counted (i.e. SMC, MEMORY_ORDERING, FP_ASSIST). However, the sum of the SMC, MEMORY_ORDERING, and FP_ASSIST machine clears will not necessarily equal the ANY count.
This event counts the number of times that the pipeline stalled due to FP operations needing assists.
This event counts the number of times that the pipeline was cleared due to memory ordering issues.
This event counts the number of times that a program writes to a code section. Self-modifying code causes a severe penalty in all Intel® architecture processors.
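To make the self-modifying-code case concrete, the following hypothetical C sketch (Linux/x86-64 assumptions; the generated byte sequence and buffer layout are invented for illustration) executes code from a buffer and then overwrites that same buffer. The store into the just-executed code region is the kind of write to a code section that this event describes.

    /*
     * Hypothetical illustration of a write to a code section. Assumes
     * Linux/x86-64 and that mapping a page with PROT_WRITE|PROT_EXEC is
     * permitted on the system.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        uint8_t *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) return 1;

        /* x86-64 machine code: mov eax, 1 ; ret */
        const uint8_t code[] = { 0xB8, 0x01, 0x00, 0x00, 0x00, 0xC3 };
        memcpy(buf, code, sizeof code);

        int (*fn)(void) = (int (*)(void))buf;   /* treat the buffer as code */
        printf("first call:  %d\n", fn());

        /* Overwrite the immediate operand of code that was just executed.
         * This store into a code region is the self-modifying-code case. */
        buf[1] = 0x02;                          /* now: mov eax, 2 ; ret */
        printf("second call: %d\n", fn());

        munmap(buf, 4096);
        return 0;
    }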
This event counts the number of load ops retired
This event counts the number of store ops retired
This event counts the number of load ops retired that had a DTLB miss.
This event counts the number of load ops retired that had a DTLB miss. (Precise Event)
This event counts the number of load ops retired that got data from the other core or from the other module.
This event counts the number of load ops retired that got data from the other core or from the other module. (Precise Event)
This event counts the number of load ops retired that miss in L1 Data cache. Note that prefetch misses will not be counted.
This event counts the number of load ops retired that hit in the L2
This event counts the number of load ops retired that hit in the L2 (Precise Event)
This event counts the number of load ops retired that miss in the L2
This event counts the number of load ops retired that miss in the L2 (Precise Event)
This event counts the number of load ops retired that had a UTLB miss.
Counts the number of times the MSROM starts a flow of UOPS. It does not count every time a UOP is read from the microcode ROM. The most common case that this counts is when a micro-coded instruction is encountered by the front end of the machine. Other cases include when an instruction encounters a fault, trap, or microcode assist of any sort. The event will count MSROM startups for UOPS that are speculative, and subsequently cleared by a branch mispredict or machine clear. Background: UOPS are produced by two mechanisms. Either they are generated by hardware that decodes instructions into UOPS, or they are delivered by a ROM (called the MSROM) that holds UOPS associated with a specific instruction. MSROM UOPS might also be delivered in response to some condition such as a fault or other exceptional condition. This event is an excellent mechanism for detecting instructions that require the use of the MSROM.
The NO_ALLOC_CYCLES.ALL event counts the number of cycles when the front-end does not provide any instructions to be allocated for any reason. This event indicates the cycles where an allocation stall occurs and no UOPS are allocated in that cycle.
Counts the number of cycles when no uops are allocated and the alloc pipe is stalled waiting for a mispredicted jump to retire. After the misprediction is detected, the front end will start immediately but the allocate pipe stalls until the mispredicted jump retires.
The NO_ALLOC_CYCLES.NOT_DELIVERED event is used to measure front-end inefficiencies, i.e., when the front-end of the machine is not delivering micro-ops to the back-end and the back-end is not stalled. This event can be used to identify if the machine is truly front-end bound. When this event occurs, it is an indication that the front-end of the machine is operating at less than its theoretical peak performance. Background: We can think of the processor pipeline as being divided into two broad parts: the front-end and the back-end. The front-end is responsible for fetching the instruction, decoding it into micro-ops (uops) in a machine-understandable format and putting them into a micro-op queue to be consumed by the back-end. The back-end then takes these micro-ops and allocates the required resources. When all resources are ready, micro-ops are executed. If the back-end is not ready to accept micro-ops from the front-end, then we do not want to count these as front-end bottlenecks. However, whenever we have bottlenecks in the back-end, we will have allocation unit stalls that eventually force the front-end to wait until the back-end is ready to receive more UOPS. This event counts the cycles only when the back-end is requesting more uops and the front-end is not able to provide them. Some examples of conditions that cause front-end inefficiencies are: icache misses, ITLB misses, and decoder restrictions that limit the front-end bandwidth.
Counts the number of cycles when no uops are allocated and a RAT stall is asserted.
Counts the number of cycles when no uops are allocated and the ROB is full (less than 2 entries available)
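One way to turn NO_ALLOC_CYCLES.NOT_DELIVERED into a ratio, assuming CPU_CLK_UNHALTED.CORE is used as the cycle denominator (an assumption, not a prescription from this reference), is sketched below with placeholder counts.

    #include <stdio.h>

    /* Fraction of unhalted cycles in which the back-end wanted uops but the
     * front-end could not deliver them; values near 1.0 suggest a front-end
     * bound workload. */
    static double front_end_bound(unsigned long long not_delivered_cycles,
                                  unsigned long long unhalted_core_cycles) {
        return (double)not_delivered_cycles / (double)unhalted_core_cycles;
    }

    int main(void) {
        /* Placeholder counts: NO_ALLOC_CYCLES.NOT_DELIVERED / CPU_CLK_UNHALTED.CORE. */
        printf("front-end bound fraction: %.2f\n",
               front_end_bound(600000000ULL, 3000000000ULL));
        return 0;
    }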
Offcore response events can be programmed only with a specific pair of event select and counter MSRs, and with specific event codes and a predefined mask bit value in a dedicated MSR to specify attributes of the offcore transaction.
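As an illustration of this programming model (a minimal sketch, not the documented register-level sequence for this processor): on Linux, the perf_event_open() interface places the event-select/umask encoding in attr.config and the offcore response mask in attr.config1, and the kernel programs the dedicated MSR on the caller's behalf. The event code (0xB7), umask (0x01), and mask value used below are assumed placeholders, not values taken from this reference.

    #include <linux/perf_event.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <sys/types.h>
    #include <unistd.h>

    static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                                int cpu, int group_fd, unsigned long flags) {
        return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
    }

    int main(void) {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof attr);
        attr.size = sizeof attr;
        attr.type = PERF_TYPE_RAW;
        attr.config  = 0x01B7;       /* event 0xB7, umask 0x01 (assumed)  */
        attr.config1 = 0x10001ULL;   /* request/response type mask (assumed) */
        attr.disabled = 1;
        attr.exclude_kernel = 1;

        int fd = perf_event_open(&attr, 0, -1, -1, 0);  /* this process, any CPU */
        if (fd < 0) { perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        /* ... run the workload to be measured here ... */
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        uint64_t count = 0;
        if (read(fd, &count, sizeof count) == sizeof count)
            printf("offcore response count: %llu\n", (unsigned long long)count);
        close(fd);
        return 0;
    }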
Counts any code reads (demand & prefetch) that have any response type
Counts any code reads (demand & prefetch) that miss L2
Counts any code reads (demand & prefetch) that hit in the other module where modified copies were found in other core's L1 cache
Counts any code reads (demand & prefetch) that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts any code reads (demand & prefetch) that miss L2 and the target was non-DRAM system address
Counts any code reads (demand & prefetch) that miss L2 with no details on snoop-related information
Counts any code reads (demand & prefetch) that miss L2 with a snoop miss response
Counts cycles of outstanding offcore requests for any code reads (demand & prefetch)
Counts any data read (demand & prefetch) that have any response type
Counts any data read (demand & prefetch) that miss L2
Counts any data read (demand & prefetch) that hit in the other module where modified copies were found in other core's L1 cache
Counts any data read (demand & prefetch) that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts any data read (demand & prefetch) that miss L2 and the target was non-DRAM system address
Counts any data read (demand & prefetch) that miss L2 with no details on snoop-related information
Counts any data read (demand & prefetch) that miss L2 with a snoop miss response
Counts cycles of outstanding offcore requests for any data read (demand & prefetch)
Counts any prefetch read that have any response type
Counts any prefetch read that miss L2
Counts any prefetch read that hit in the other module where modified copies were found in other core's L1 cache
Counts any prefetch read that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts any prefetch read that miss L2 and the target was non-DRAM system address
Counts any prefetch read that miss L2 with no details on snoop-related information
Counts any prefetch read that miss L2 with a snoop miss response
Counts cycles of outstanding offcore requests for any prefetch read
Counts any data/code/rfo reads (demand & prefetch) that have any response type
Counts any data/code/rfo reads (demand & prefetch) that miss L2
Counts any data/code/rfo reads (demand & prefetch) that hit in the other module where modified copies were found in other core's L1 cache
Counts any data/code/rfo reads (demand & prefetch) that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts any data/code/rfo reads (demand & prefetch) that miss L2 and the target was non-DRAM system address
Counts any data/code/rfo reads (demand & prefetch) that miss L2 with no details on snoop-related information
Counts any data/code/rfo reads (demand & prefetch) that miss L2 with a snoop miss response
Counts cycles of outstanding offcore requests for any data/code/rfo reads (demand & prefetch)
Counts any request that have any response type
Counts any request that miss L2
Counts any request that hit in the other module where modified copies were found in other core's L1 cache
Counts any request that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts any request that miss L2 and the target was non-DRAM system address
Counts any request that miss L2 with no details on snoop-related information
Counts any request that miss L2 with a snoop miss response
Counts cycles of outstanding offcore requests for any request
Counts any rfo reads (demand & prefetch) that have any response type
Counts any rfo reads (demand & prefetch) that miss L2
Counts any rfo reads (demand & prefetch) that hit in the other module where modified copies were found in other core's L1 cache
Counts any rfo reads (demand & prefetch) that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts any rfo reads (demand & prefetch) that miss L2 and the target was non-DRAM system address
Counts any rfo reads (demand & prefetch) that miss L2 with no details on snoop-related information
Counts any rfo reads (demand & prefetch) that miss L2 with a snoop miss response
Counts cycles of outstanding offcore requests for any rfo reads (demand & prefetch)
Counts bus lock and split lock requests that have any response type
Counts bus lock and split lock requests that miss L2
Counts bus lock and split lock requests that hit in the other module where modified copies were found in other core's L1 cache
Counts bus lock and split lock requests that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts bus lock and split lock requests that miss L2 and the target was non-DRAM system address
Counts bus lock and split lock requests that miss L2 with no details on snoop-related information
Counts bus lock and split lock requests that miss L2 with a snoop miss response
Counts cycles of outstanding offcore requests for bus lock and split lock requests
Counts writeback (modified to exclusive) that have any response type
Counts writeback (modified to exclusive) that miss L2
Counts writeback (modified to exclusive) that hit in the other module where modified copies were found in other core's L1 cache
Counts writeback (modified to exclusive) that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts writeback (modified to exclusive) that miss L2 and the target was non-DRAM system address
Counts writeback (modified to exclusive) that miss L2 with no details on snoop-related information
Counts writeback (modified to exclusive) that miss L2 with a snoop miss response
Counts cycles of outstanding offcore requests for writeback (modified to exclusive)
Counts demand and DCU prefetch instruction cacheline that have any response type
Counts demand and DCU prefetch instruction cacheline that miss L2
Counts demand and DCU prefetch instruction cacheline that hit in the other module where modified copies were found in other core's L1 cache
Counts demand and DCU prefetch instruction cacheline that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts demand and DCU prefetch instruction cacheline that miss L2 and the target was non-DRAM system address
Counts demand and DCU prefetch instruction cacheline that miss L2 with no details on snoop-related information
Counts demand and DCU prefetch instruction cacheline that miss L2 with a snoop miss response
Counts cycles of outstanding offcore requests for demand and DCU prefetch instruction cacheline requests
Counts demand and DCU prefetch data read that have any response type
Counts demand and DCU prefetch data read that miss L2
Counts demand and DCU prefetch data read that hit in the other module where modified copies were found in other core's L1 cache
Counts demand and DCU prefetch data read that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts demand and DCU prefetch data read that miss L2 and the target was non-DRAM system address
Counts demand and DCU prefetch data read that miss L2 with no details on snoop-related information
Counts demand and DCU prefetch data read that miss L2 with a snoop miss response
Counts cycles of outstanding offcore requests for demand and DCU prefetch data reads
Counts demand and DCU prefetch RFOs that have any response type
Counts demand and DCU prefetch RFOs that miss L2
Counts demand and DCU prefetch RFOs that hit in the other module where modified copies were found in other core's L1 cache
Counts demand and DCU prefetch RFOs that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts demand and DCU prefetch RFOs that miss L2 and the target was non-DRAM system address
Counts demand and DCU prefetch RFOs that miss L2 with no details on snoop-related information
Counts demand and DCU prefetch RFOs that miss L2 with a snoop miss response
Counts cycles of outstanding offcore requests for demand and DCU prefetch RFOs
Counts demand reads of partial cache lines (including UC and WC) that have any response type
Counts demand reads of partial cache lines (including UC and WC) that miss L2
Counts demand reads of partial cache lines (including UC and WC) that hit in the other module where modified copies were found in other core's L1 cache
Counts demand reads of partial cache lines (including UC and WC) that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts demand reads of partial cache lines (including UC and WC) that miss L2 and the target was non-DRAM system address
Counts demand reads of partial cache lines (including UC and WC) that miss L2 with no details on snoop-related information
Counts demand reads of partial cache lines (including UC and WC) that miss L2 with a snoop miss response
Counts cycles of outstanding offcore requests for demand reads of partial cache lines (including UC and WC)
Counts demand RFO requests to write to partial cache lines that have any response type
Counts demand RFO requests to write to partial cache lines that miss L2
Counts demand RFO requests to write to partial cache lines that hit in the other module where modified copies were found in other core's L1 cache
Counts demand RFO requests to write to partial cache lines that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts demand RFO requests to write to partial cache lines that miss L2 and the target was non-DRAM system address
Counts demand RFO requests to write to partial cache lines that miss L2 with no details on snoop-related information
Counts demand RFO requests to write to partial cache lines that miss L2 with a snoop miss response
Counts cycles of outstanding offcore requests for demand RFO requests to write to partial cache lines
Counts DCU hardware prefetcher data read that have any response type
Counts DCU hardware prefetcher data read that miss L2
Counts DCU hardware prefetcher data read that hit in the other module where modified copies were found in other core's L1 cache
Counts DCU hardware prefetcher data read that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts DCU hardware prefetcher data read that miss L2 and the target was non-DRAM system address
Counts DCU hardware prefetcher data read that miss L2 with no details on snoop-related information
Counts DCU hardware prefetcher data read that miss L2 with a snoop miss response
Counts cycles of outstanding offcore requests for DCU hardware prefetcher data reads
Counts code reads generated by L2 prefetchers that have any response type
Counts code reads generated by L2 prefetchers that miss L2
Counts code reads generated by L2 prefetchers that hit in the other module where modified copies were found in other core's L1 cache
Counts code reads generated by L2 prefetchers that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts code reads generated by L2 prefetchers that miss L2 and the target was non-DRAM system address
Counts code reads generated by L2 prefetchers that miss L2 with no details on snoop-related information
Counts code reads generated by L2 prefetchers that miss L2 with a snoop miss response
Counts cycles of outstanding offcore requests for code reads generated by L2 prefetchers
Counts data cacheline reads generated by L2 prefetchers that have any response type
Counts data cacheline reads generated by L2 prefetchers that miss L2
Counts data cacheline reads generated by L2 prefetchers that hit in the other module where modified copies were found in other core's L1 cache
Counts data cacheline reads generated by L2 prefetchers that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts data cacheline reads generated by L2 prefetchers that miss L2 and the target was non-DRAM system address
Counts data cacheline reads generated by L2 prefetchers that miss L2 with no details on snoop-related information
Counts data cacheline reads generated by L2 prefetchers that miss L2 with a snoop miss response
Counts cycles of outstanding offcore requests for data cacheline reads generated by L2 prefetchers
Counts RFO requests generated by L2 prefetchers that have any response type
Counts RFO requests generated by L2 prefetchers that miss L2
Counts RFO requests generated by L2 prefetchers that hit in the other module where modified copies were found in other core's L1 cache
Counts RFO requests generated by L2 prefetchers that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts RFO requests generated by L2 prefetchers that miss L2 and the target was non-DRAM system address
Counts RFO requests generated by L2 prefetchers that miss L2 with no details on snoop-related information
Counts RFO requests generated by L2 prefetchers that miss L2 with a snoop miss response
Counts cycles of outstanding offcore requests for RFO requests generated by L2 prefetchers
Counts streaming store that have any response type
Counts streaming store that miss L2
Counts streaming store that hit in the other module where modified copies were found in other core's L1 cache
Counts streaming store that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts streaming store that miss L2 and the target was non-DRAM system address
Counts streaming store that miss L2 with no details on snoop-related information
Counts streaming store that miss L2 with a snoop miss response
Counts cycles of outstanding offcore requests for streaming stores
Counts UC instruction fetch that have any response type
Counts UC instruction fetch that miss L2
Counts UC instruction fetch that hit in the other module where modified copies were found in other core's L1 cache
Counts UC instruction fetch that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
Counts UC instruction fetch that miss L2 and the target was non-DRAM system address
Counts UC instruction fetch that miss L2 with no details on snoop-related information
Counts UC instruction fetch that miss L2 with a snoop miss response
Counts cycles of outstanding offcore requests for UC instruction fetches
This event counts every cycle when a data (D) page walk or instruction (I) page walk is in progress. Since a pagewalk implies a TLB miss, the approximate cost of a TLB miss can be determined from this event.
This event counts every cycle when a D-side (walks due to a load) page walk is in progress. Page walk duration divided by number of page walks is the average duration of page-walks.
This event counts when a data (D) page walk is completed or started. Since a page walk implies a TLB miss, the number of TLB misses can be counted by counting the number of pagewalks.
This event counts every cycle when an I-side (walks due to an instruction fetch) page walk is in progress. Page walk duration divided by number of page walks is the average duration of page-walks.
This event counts when an instruction (I) page walk is completed or started. Since a page walk implies a TLB miss, the number of TLB misses can be counted by counting the number of pagewalks.
This event counts when a data (D) page walk or an instruction (I) page walk is completed or started. Since a page walk implies a TLB miss, the number of TLB misses can be counted by counting the number of pagewalks.
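Following the statement that page walk duration divided by the number of page walks gives the average walk duration, a small C sketch with placeholder counts (the cycle count corresponds to the duration events above and the walk count to the completed-walk events):

    #include <stdio.h>

    /* Average page-walk duration in cycles = walk-duration cycles / completed walks.
     * The walk count also approximates the number of TLB misses, per the text above. */
    static double avg_walk_cycles(unsigned long long walk_cycles,
                                  unsigned long long walks_completed) {
        return (double)walk_cycles / (double)walks_completed;
    }

    int main(void) {
        printf("average page walk: %.1f cycles\n",
               avg_walk_cycles(50000000ULL, 1000000ULL));
        return 0;
    }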
This event counts the number of load uops reissued from Rehabq
This event counts the number of store uops reissued from Rehabq
This event counts the cases where a forward was technically possible, but did not occur because the store data was not available at the right time
This event counts the number of retired loads that were prohibited from receiving forwarded data from the store because of address mismatch.
This event counts the number of retired loads that were prohibited from receiving forwarded data from the store because of address mismatch. (Precise event)
This event counts the number of retired loads that experienced cache line boundary splits
This event counts the number of retired stores that experienced cache line boundary splits (Precise event)
This event counts the number of retired memory operations with lock semantics. These are either implicit locked instructions such as the XCHG instruction or instructions with an explicit LOCK prefix (0xF0).
This event counts the number of retired stores that are delayed because there is not a store address buffer available.
This event counts the number of retired stores that experienced cache line boundary splits
Counts the number of cycles the Alloc pipeline is stalled when any one of the RSs (IEC, FPC and MEC) is full. This event is a superset of all the individual RS stall event counts.
Counts the number of cycles the allocation pipeline is stalled and is waiting for a free MEC reservation station entry. The cycles should be counted appropriately in the case of cracked ops; e.g., in the case of a cracked load-op, the load portion is sent to the MEC.
This event counts the number of micro-ops retired. The processor decodes complex macro instructions into a sequence of simpler micro-ops. Most instructions are composed of one or two micro-ops. Some instructions are decoded into longer sequences such as repeat instructions, floating point transcendental instructions, and assists. In some cases micro-op sequences are fused or whole instructions are fused into one micro-op. See other UOPS_RETIRED events for differentiating retired fused and non-fused micro-ops.
This event counts the number of micro-ops retired that were supplied from MSROM.