perf/core: Fix pmu::filter_match for SW-led groups · bsd-unix/linux@2c81a64

Commit 2c81a64

mrutland-arm authored and Ingo Molnar committed
perf/core: Fix pmu::filter_match for SW-led groups
The following commit: 66eb579 ("perf: allow for PMU-specific event filtering") added the pmu::filter_match() callback. This was intended to avoid HW constraints on events from resulting in extremely pessimistic scheduling. However, pmu::filter_match() is only called for the leader of each event group. When the leader is a SW event, we do not filter the groups, and may fail at pmu::add() time, and when this happens we'll give up on scheduling any event groups later in the list until they are rotated ahead of the failing group. This can result in extremely sub-optimal event scheduling behaviour, e.g. if running the following on a big.LITTLE platform: $ taskset -c 0 ./perf stat \ -e 'a57{context-switches,armv8_cortex_a57/config=0x11/}' \ -e 'a53{context-switches,armv8_cortex_a53/config=0x11/}' \ ls <not counted> context-switches (0.00%) <not counted> armv8_cortex_a57/config=0x11/ (0.00%) 24 context-switches (37.36%) 57589154 armv8_cortex_a53/config=0x11/ (37.36%) Here the 'a53' event group was always eligible to be scheduled, but the 'a57' group never eligible to be scheduled, as the task was always affine to a Cortex-A53 CPU. The SW (group leader) event in the 'a57' group was eligible, but the HW event failed at pmu::add() time, resulting in ctx_flexible_sched_in giving up on scheduling further groups with HW events. One way of avoiding this is to check pmu::filter_match() on siblings as well as the group leader. If any of these fail their pmu::filter_match() call, we must skip the entire group before attempting to add any events. 
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Fixes: 66eb579 ("perf: allow for PMU-specific event filtering")
Link: http://lkml.kernel.org/r/1465917041-15339-1-git-send-email-mark.rutland@arm.com
[ Small readability edits. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
1 parent 175a20c commit 2c81a64

File tree

1 file changed: 22 additions, 1 deletion

kernel/events/core.c

Lines changed: 22 additions & 1 deletion
@@ -1678,12 +1678,33 @@ static bool is_orphaned_event(struct perf_event *event)
 	return event->state == PERF_EVENT_STATE_DEAD;
 }
 
-static inline int pmu_filter_match(struct perf_event *event)
+static inline int __pmu_filter_match(struct perf_event *event)
 {
 	struct pmu *pmu = event->pmu;
 	return pmu->filter_match ? pmu->filter_match(event) : 1;
 }
 
+/*
+ * Check whether we should attempt to schedule an event group based on
+ * PMU-specific filtering. An event group can consist of HW and SW events,
+ * potentially with a SW leader, so we must check all the filters, to
+ * determine whether a group is schedulable:
+ */
+static inline int pmu_filter_match(struct perf_event *event)
+{
+	struct perf_event *child;
+
+	if (!__pmu_filter_match(event))
+		return 0;
+
+	list_for_each_entry(child, &event->sibling_list, group_entry) {
+		if (!__pmu_filter_match(child))
+			return 0;
+	}
+
+	return 1;
+}
+
 static inline int
 event_filter_match(struct perf_event *event)
 {
