Choosing a Small, Representative Data Set for the Dependencies Tool

When using the Dependencies tool, Intel recommends that you use a Debug build.

When you run the Dependencies tool, it executes the target against the supplied data set. Because this analysis can take 50 to several hundred times longer than the target's normal execution time, supplying a full data set can make the run impractically long.

Data set size and workload have a direct impact on target execution time and analysis speed.

For example, it takes longer to process a 1000x1000 pixel image than a 100x100 pixel image. A possible reason: the larger image may drive loops with an iteration space of 1...1000 instead of 1...100. The exact same code paths may execute in both cases; the difference is only the number of times those code paths are repeated.
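
The following minimal C++ sketch illustrates this point (the filter and its names are hypothetical and not part of Intel Advisor): the loop body is the same code path regardless of image size, so a 100x100 input exercises the same dependencies as a 1000x1000 input at a fraction of the analysis cost.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Hypothetical image filter. The loop body (the code path) is identical
    // for every image size; only the trip count changes with the dimensions.
    void brighten(std::vector<unsigned char>& pixels,
                  std::size_t width, std::size_t height) {
        for (std::size_t y = 0; y < height; ++y) {      // 1...height iterations
            for (std::size_t x = 0; x < width; ++x) {   // 1...width iterations
                std::size_t i = y * width + x;
                pixels[i] = static_cast<unsigned char>(
                    std::min(255, pixels[i] + 10));
            }
        }
    }

Calling brighten on the smaller image covers the same loop behavior as the larger one; the analysis simply observes each code path fewer times.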

You can control analysis cost without sacrificing completeness by eliminating this kind of unnecessary repetition from your target's execution.

Instead of choosing large, repetitive data sets, choose small, representative data sets that fully create tasks with minimal to moderate work per task. Minimal to moderate means just enough work to demonstrate all the different behaviors a task can perform.

Your objective: in as short a runtime as possible, execute as many code paths and as many tasks (parallel activities) as you can afford, while limiting the repetitive computation within each task to the minimum needed for good code coverage.
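
One way to meet this objective, sketched below under stated assumptions, is to make the data-set size a command-line parameter, so that every task is still created while the work inside each task scales down. The ANNOTATE_* macros are from Intel Advisor's C/C++ annotation API (advisor-annotate.h; build with the Advisor include directory); process, process_site, and process_task are hypothetical names.

    #include <cstddef>
    #include <cstdlib>
    #include <vector>
    #include "advisor-annotate.h"  // Intel Advisor annotation API

    // Hypothetical per-task work; shrinking the data size reduces this work
    // without changing which code paths the task executes.
    void process(std::vector<double>& chunk) {
        for (double& value : chunk) value *= 2.0;
    }

    int main(int argc, char* argv[]) {
        // Full workload by default; pass a small size (for example, 100)
        // when running under the Dependencies tool.
        std::size_t n = (argc > 1) ? std::strtoul(argv[1], nullptr, 10)
                                   : 100000;

        // Keep the number of tasks fixed so every task is still created;
        // only the work inside each task scales down with n.
        std::vector<std::vector<double>> chunks(8, std::vector<double>(n, 1.0));

        ANNOTATE_SITE_BEGIN(process_site);           // parallel site
        for (std::size_t i = 0; i < chunks.size(); ++i) {
            ANNOTATE_ITERATION_TASK(process_task);   // one task per iteration
            process(chunks[i]);
        }
        ANNOTATE_SITE_END();
        return 0;
    }

Running the target with a small argument gives the Dependencies tool the same tasks and code paths as the full run in a fraction of the time.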

Data sets that run in about ten seconds or less are ideal. You can always create additional data sets to ensure all your code is checked.
