Intel® C++ Compiler 16.0 User and Reference Guide

omp parallel

Specifies that a structured block should be run in parallel by a team of threads.

Syntax

#pragma omp parallel [clause, clause, ...]

structured-block

Arguments

clause

Can be one or more of the following (a combined usage sketch follows the list):

  • copyin(list)

  • default(shared | none)

  • firstprivate(list)

  • if(scalar-expression)

  • num_threads(integer expression)

  • private(list)

  • proc_bind(clause)

    where clause is one of the following:

    Clause    Definition

    master    Assign every thread in the team to the same place as the master thread.

    close     Assign the threads to places close to the place of the parent thread.

    spread    Create a sparse distribution for the team of threads among the place partition of the parent thread.

  • reduction(operator:list)

  • shared(list)
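
Several clauses can appear on the same pragma. The following is a minimal sketch (the array length, thread count, and variable names are illustrative, not part of the pragma's definition) that sums an array with a reduction, requests four threads, spreads them across the place partition, and uses default(none) so that every variable referenced in the region must be listed explicitly:

#include <omp.h>
#include <stdio.h>

#define LEN 1000

int main(void) {
    int a[LEN];
    int i, sum = 0;

    for (i = 0; i < LEN; i++)
        a[i] = i;

    // Each thread gets a private copy of sum; the copies are combined
    // with + when the parallel region ends.
    #pragma omp parallel default(none) shared(a) reduction(+:sum) num_threads(4) proc_bind(spread)
    {
        int id = omp_get_thread_num();
        int nthreads = omp_get_num_threads();
        int j;
        for (j = id; j < LEN; j += nthreads)
            sum += a[j];
    }

    printf("sum = %d\n", sum);   // prints sum = 499500
    return 0;
}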

Description

The thread that encounters this pragma creates a team of threads to execute the structured-block and becomes the master thread of the team, with a thread number of zero. The remaining threads are assigned unique thread numbers from 1 to N-1, where N is the number of threads in the team. The number of threads in the team remains constant for the duration of the structured-block.
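
For instance, the following minimal sketch prints each thread's number; the master thread prints 0, and the output order is unspecified:

#include <omp.h>
#include <stdio.h>

int main(void) {
    // The encountering thread becomes the master (thread number 0);
    // the remaining threads are numbered 1 through N-1.
    #pragma omp parallel
    {
        printf("thread %d of %d\n", omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}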

The following example demonstrates how to use this pragma to create a team of n threads, each with its own private copy of the variables start, end, and tag. Each private copy of tag is initialized from the encountering thread's value of tag. Each thread computes its own start and end times and records the elapsed time in the shared array timing:

Example

#include <omp.h>

#define n 4    // number of threads in the team; must be a compile-time constant here

void compute(int tag) {
    double timing[n], start, end;

    #pragma omp parallel private(start, end) firstprivate(tag) num_threads(n)
    {
        start = omp_get_wtime();
        // some parallel computation using "tag"
        end = omp_get_wtime();
        timing[omp_get_thread_num()] = end - start;
    }
}
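
One possible way to invoke the function above (the tag value 42 is arbitrary):

int main(void) {
    compute(42);
    return 0;
}

When building with the Intel® C++ Compiler, enable OpenMP* support with the -qopenmp option on Linux* and OS X*, or /Qopenmp on Windows*.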