Intel® Advisor Dependencies analysis identifies the following types of data dependencies:
- RAW (Read After Write), or flow dependency: one task reads a memory location after another task has written to it.
- WAR (Write After Read), or anti dependency: one task writes to a memory location after another task has read from it.
- WAW (Write After Write), or output dependency: two tasks write to the same memory location.
With threading, such sharing problems can make data unreliable: errors are likely when two tasks execute simultaneously and contain operations that access the same memory location (a situation known as a race condition). If one of the tasks writes to the location and the other reads from it, the result of the read depends on whether it happens before or after the write. If both tasks write to the location, the final value in the location depends on which write happens last. For more information about data races, refer to Using Intel® Inspector XE to Find Data Races in Multithreaded Code.
To avoid data sharing problems during parallel execution, you need to add a form of synchronization.
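For illustration only (this array-sum loop and its variable names are not part of the Intel documentation), the sketch below shows one form of synchronization: without the atomic construct, all threads would update the shared variable sum concurrently and create a data race.

#include <stdio.h>

#define N 1000

int main(void) {
    double a[N];
    for (int i = 0; i < N; i++) a[i] = 1.0;

    double sum = 0.0;
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        /* Without synchronization, concurrent updates to the shared
           variable "sum" would be a data race. The atomic construct
           serializes each update so every increment is applied. */
        #pragma omp atomic
        sum += a[i];
    }

    printf("sum = %f\n", sum);
    return 0;
}

In practice a reduction(+:sum) clause would usually outperform per-update synchronization, but the atomic construct illustrates the idea. Compile with OpenMP enabled (for example, -fopenmp with GCC or Clang); otherwise the pragmas are ignored and the loop runs serially.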
Vectorization, on the other hand, is applied to each instruction of the loop body individually. This means that if a vectorized instance of the loop modifies only data that is not processed within the same vector iteration, no data sharing problems occur. For example:
for (i = 0; i < n - 4; i++) { a[i + 4] = a[i] * c; }
This loop can be vectorized with #pragma omp simd or #pragma simd, but you also need to make sure that the vector length is suitable, so that no data sharing problems occur. Because the store to a[i + 4] is 4 elements ahead of the load from a[i], a vector length of up to 4 is safe; you can enforce this by specifying the maximum safe vector length with #pragma omp simd safelen(4).
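Putting it together, a minimal sketch of the annotated loop might look as follows (assuming a float array a of at least n elements and a scalar c, which are not declared in the original snippet):

/* The store to a[i + 4] is always 4 elements ahead of the load from a[i],
   so grouping up to 4 iterations into one vector instruction is safe.
   safelen(4) tells the compiler not to use a longer vector length. */
#pragma omp simd safelen(4)
for (int i = 0; i < n - 4; i++) {
    a[i + 4] = a[i] * c;
}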
For best known methods of vector programming and additional sources of information, refer to Explicit Vector Programming - Best Known Methods.