Module dataproc

@pulumi/gcp > dataproc

Index

dataproc/cluster.ts dataproc/job.ts

class Cluster

Manages a Cloud Dataproc cluster resource within GCP. For more information see the official dataproc documentation.

!> Warning: Due to limitations of the API, all arguments except labels, cluster_config.worker_config.num_instances, and cluster_config.preemptible_worker_config.num_instances are non-updatable. Changing any other argument will cause the whole cluster to be recreated!

constructor

new Cluster(name: string, args?: ClusterArgs, opts?: pulumi.CustomResourceOptions)

Create a Cluster resource with the given unique name, arguments, and options.

  • name The unique name of the resource.
  • args The arguments to use to populate this resource's properties.
  • opts A bag of options that control this resource's behavior.
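As a minimal sketch, a cluster could be declared as follows; the resource name and region are placeholder values, not defaults from this module:

```typescript
import * as gcp from "@pulumi/gcp";

// A minimal sketch: create a Dataproc cluster in a specific region.
// "example-cluster" and "us-central1" are placeholder values.
const cluster = new gcp.dataproc.Cluster("example-cluster", {
    region: "us-central1",
});

// The cluster name is an Output, resolved once the resource is created.
export const clusterName = cluster.name;
```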

method get

public static get(name: string, id: pulumi.Input<pulumi.ID>, state?: ClusterState): Cluster

Get an existing Cluster resource’s state with the given name, ID, and optional extra properties used to qualify the lookup.
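For example, an already-provisioned cluster might be looked up by its provider-assigned ID like this; the resource name and ID string below are hypothetical:

```typescript
import * as gcp from "@pulumi/gcp";

// Look up the state of an existing cluster by its ID.
// Both the logical name and the ID value are placeholders.
const existing = gcp.dataproc.Cluster.get("existing-cluster",
    "projects/my-project/regions/us-central1/clusters/my-cluster");

export const existingClusterName = existing.name;
```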

method getProvider

getProvider(moduleMember: string): ProviderResource | undefined

method isInstance

static isInstance(obj: any): boolean

Returns true if the given object is an instance of CustomResource. This is designed to work even when multiple copies of the Pulumi SDK have been loaded into the same process.
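Because isInstance is a type guard, it can be used to narrow an unknown value before reading its outputs. A sketch, assuming a hypothetical helper function:

```typescript
import * as gcp from "@pulumi/gcp";

// Sketch: narrow an unknown value to a Cluster before using its outputs.
// "describe" is an illustrative helper, not part of the SDK.
function describe(res: unknown): void {
    if (gcp.dataproc.Cluster.isInstance(res)) {
        // Within this branch, res is typed as gcp.dataproc.Cluster.
        res.name.apply(n => console.log(`cluster: ${n}`));
    }
}
```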

property clusterConfig

public clusterConfig: pulumi.Output<{ ... }>;

Allows you to configure various aspects of the cluster. Structure defined below.
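Tying this to the warning above: within clusterConfig, only the worker and preemptible-worker instance counts can change without recreating the cluster. A sketch, with placeholder machine types and counts:

```typescript
import * as gcp from "@pulumi/gcp";

// Sketch: only the instance counts marked below (and labels) are
// updatable in place; changing anything else recreates the cluster.
const cluster = new gcp.dataproc.Cluster("configured-cluster", {
    region: "us-central1",
    clusterConfig: {
        masterConfig: {
            numInstances: 1,
            machineType: "n1-standard-1",
        },
        workerConfig: {
            numInstances: 2,          // updatable in place
            machineType: "n1-standard-1",
        },
        preemptibleWorkerConfig: {
            numInstances: 0,          // updatable in place
        },
    },
});
```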

property id

id: Output<ID>;

id is the provider-assigned unique ID for this managed resource. It is set during deployments and may be missing (undefined) during planning phases.

property labels

public labels: pulumi.Output<{ ... }>;

The list of labels (key/value pairs) to be applied to instances in the cluster. GCP generates some labels itself, including goog-dataproc-cluster-name, which is the name of the cluster.

property name

public name: pulumi.Output<string>;

The name of the cluster, unique within the project and zone.

property project

public project: pulumi.Output<string>;

The ID of the project in which the cluster will exist. If it is not provided, the provider project is used.

property region

public region: pulumi.Output<string | undefined>;

The region in which the cluster and associated nodes will be created. Defaults to global.

property urn

urn: Output<URN>;

urn is the stable logical URN used to distinctly address a resource, both before and after deployments.

class Job

Manages a job resource within a Dataproc cluster within GCP. For more information see the official dataproc documentation.

!> Note: This resource does not support update; changing any attribute will cause the resource to be recreated.

constructor

new Job(name: string, args: JobArgs, opts?: pulumi.CustomResourceOptions)

Create a Job resource with the given unique name, arguments, and options.

  • name The unique name of the resource.
  • args The arguments to use to populate this resource's properties.
  • opts A bag of options that control this resource's behavior.
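For instance, a Spark job could be submitted to an existing cluster as sketched below; the cluster name, region, jar path, and main class are placeholder values:

```typescript
import * as gcp from "@pulumi/gcp";

// Sketch: submit a Spark job to an existing cluster. The cluster name,
// jar URI, and main class below are illustrative placeholders.
const sparkJob = new gcp.dataproc.Job("example-spark-job", {
    region: "us-central1",
    placement: {
        clusterName: "example-cluster",
    },
    sparkConfig: {
        mainClass: "org.apache.spark.examples.SparkPi",
        jarFileUris: ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
        args: ["1000"],
    },
    // Cancel the job before deleting it, rather than failing on active jobs.
    forceDelete: true,
});
```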

method get

public static get(name: string, id: pulumi.Input<pulumi.ID>, state?: JobState): Job

Get an existing Job resource’s state with the given name, ID, and optional extra properties used to qualify the lookup.

method getProvider

getProvider(moduleMember: string): ProviderResource | undefined

method isInstance

static isInstance(obj: any): boolean

Returns true if the given object is an instance of CustomResource. This is designed to work even when multiple copies of the Pulumi SDK have been loaded into the same process.

property driverControlsFilesUri

public driverControlsFilesUri: pulumi.Output<string>;

If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri.

property driverOutputResourceUri

public driverOutputResourceUri: pulumi.Output<string>;

A URI pointing to the location of the stdout of the job’s driver program.

property forceDelete

public forceDelete: pulumi.Output<boolean | undefined>;

By default, you can only delete inactive jobs within Dataproc. Setting this to true, and calling destroy, will ensure that the job is first cancelled before issuing the delete.

property hadoopConfig

public hadoopConfig: pulumi.Output<{ ... } | undefined>;

property hiveConfig

public hiveConfig: pulumi.Output<{ ... } | undefined>;

property id

id: Output<ID>;

id is the provider-assigned unique ID for this managed resource. It is set during deployments and may be missing (undefined) during planning phases.

property labels

public labels: pulumi.Output<{ ... } | undefined>;

The list of labels (key/value pairs) to add to the job.

property pigConfig

public pigConfig: pulumi.Output<{ ... } | undefined>;

property placement

public placement: pulumi.Output<{ ... }>;

property project

public project: pulumi.Output<string>;

The project in which the cluster can be found and jobs subsequently run against. If it is not provided, the provider project is used.

property pysparkConfig

public pysparkConfig: pulumi.Output<{ ... } | undefined>;

property reference

public reference: pulumi.Output<{ ... }>;

property region

public region: pulumi.Output<string | undefined>;

The Cloud Dataproc region. This essentially determines which clusters are available for this job to be submitted to. If not specified, defaults to global.

property scheduling

public scheduling: pulumi.Output<{ ... } | undefined>;

Optional. Job scheduling configuration.

property sparkConfig

public sparkConfig: pulumi.Output<{ ... } | undefined>;

property sparksqlConfig

public sparksqlConfig: pulumi.Output<{ ... } | undefined>;

property status

public status: pulumi.Output<{ ... }>;

property urn

urn: Output<URN>;

urn is the stable logical URN used to distinctly address a resource, both before and after deployments.

interface ClusterArgs

The set of arguments for constructing a Cluster resource.

property clusterConfig

clusterConfig?: pulumi.Input<{ ... }>;

Allows you to configure various aspects of the cluster. Structure defined below.

property labels

labels?: pulumi.Input<{ ... }>;

The list of labels (key/value pairs) to be applied to instances in the cluster. GCP generates some labels itself, including goog-dataproc-cluster-name, which is the name of the cluster.

property name

name?: pulumi.Input<string>;

The name of the cluster, unique within the project and zone.

property project

project?: pulumi.Input<string>;

The ID of the project in which the cluster will exist. If it is not provided, the provider project is used.

property region

region?: pulumi.Input<string>;

The region in which the cluster and associated nodes will be created. Defaults to global.

interface ClusterState

Input properties used for looking up and filtering Cluster resources.

property clusterConfig

clusterConfig?: pulumi.Input<{ ... }>;

Allows you to configure various aspects of the cluster. Structure defined below.

property labels

labels?: pulumi.Input<{ ... }>;

The list of labels (key/value pairs) to be applied to instances in the cluster. GCP generates some labels itself, including goog-dataproc-cluster-name, which is the name of the cluster.

property name

name?: pulumi.Input<string>;

The name of the cluster, unique within the project and zone.

property project

project?: pulumi.Input<string>;

The ID of the project in which the cluster will exist. If it is not provided, the provider project is used.

property region

region?: pulumi.Input<string>;

The region in which the cluster and associated nodes will be created. Defaults to global.

interface JobArgs

The set of arguments for constructing a Job resource.

property forceDelete

forceDelete?: pulumi.Input<boolean>;

By default, you can only delete inactive jobs within Dataproc. Setting this to true, and calling destroy, will ensure that the job is first cancelled before issuing the delete.

property hadoopConfig

hadoopConfig?: pulumi.Input<{ ... }>;

property hiveConfig

hiveConfig?: pulumi.Input<{ ... }>;

property labels

labels?: pulumi.Input<{ ... }>;

The list of labels (key/value pairs) to add to the job.

property pigConfig

pigConfig?: pulumi.Input<{ ... }>;

property placement

placement: pulumi.Input<{ ... }>;

property project

project?: pulumi.Input<string>;

The project in which the cluster can be found and jobs subsequently run against. If it is not provided, the provider project is used.

property pysparkConfig

pysparkConfig?: pulumi.Input<{ ... }>;

property reference

reference?: pulumi.Input<{ ... }>;

property region

region?: pulumi.Input<string>;

The Cloud Dataproc region. This essentially determines which clusters are available for this job to be submitted to. If not specified, defaults to global.

property scheduling

scheduling?: pulumi.Input<{ ... }>;

Optional. Job scheduling configuration.

property sparkConfig

sparkConfig?: pulumi.Input<{ ... }>;

property sparksqlConfig

sparksqlConfig?: pulumi.Input<{ ... }>;

interface JobState

Input properties used for looking up and filtering Job resources.

property driverControlsFilesUri

driverControlsFilesUri?: pulumi.Input<string>;

If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri.

property driverOutputResourceUri

driverOutputResourceUri?: pulumi.Input<string>;

A URI pointing to the location of the stdout of the job’s driver program.

property forceDelete

forceDelete?: pulumi.Input<boolean>;

By default, you can only delete inactive jobs within Dataproc. Setting this to true, and calling destroy, will ensure that the job is first cancelled before issuing the delete.

property hadoopConfig

hadoopConfig?: pulumi.Input<{ ... }>;

property hiveConfig

hiveConfig?: pulumi.Input<{ ... }>;

property labels

labels?: pulumi.Input<{ ... }>;

The list of labels (key/value pairs) to add to the job.

property pigConfig

pigConfig?: pulumi.Input<{ ... }>;

property placement

placement?: pulumi.Input<{ ... }>;

property project

project?: pulumi.Input<string>;

The project in which the cluster can be found and jobs subsequently run against. If it is not provided, the provider project is used.

property pysparkConfig

pysparkConfig?: pulumi.Input<{ ... }>;

property reference

reference?: pulumi.Input<{ ... }>;

property region

region?: pulumi.Input<string>;

The Cloud Dataproc region. This essentially determines which clusters are available for this job to be submitted to. If not specified, defaults to global.

property scheduling

scheduling?: pulumi.Input<{ ... }>;

Optional. Job scheduling configuration.

property sparkConfig

sparkConfig?: pulumi.Input<{ ... }>;

property sparksqlConfig

sparksqlConfig?: pulumi.Input<{ ... }>;

property status

status?: pulumi.Input<{ ... }>;