Export table contents to files on Crate node machines.
```sql
COPY table_ident [ ( column [, ...] ) ] TO [ DIRECTORY ] { output_uri }
  [ WITH ( copy_parameter [= value] [, ...] ) ]
```
Copies the contents of a table to one or more files on every cluster node that contains data of the given table.
Note
Output files will always be stored on the cluster node machines, not on the client machine.
The created files are JSON-formatted and contain one table row per line.
If the DIRECTORY keyword is given, the URI is treated as a directory path. One or more files will be created in that directory, with names chosen to prevent filename conflicts, as in the sketch below.
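For illustration, both forms are sketched here; the table name `quotes` and the target paths are placeholders:

```sql
-- Write one file per node at the given path (the path must be
-- writable on every node that holds data of the table).
COPY quotes TO '/tmp/quotes_export.json';

-- Treat the URI as a directory; file names are generated so that
-- output from different nodes does not collide.
COPY quotes TO DIRECTORY '/tmp/export/';
```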
| Parameter | Description |
| --- | --- |
| `table_ident` | The name (optionally schema-qualified) of the table to be exported. |
| `column` | (optional) A list of column expressions that should be exported. |
Note
Declaring columns changes the output to a JSON list format, which is currently not supported by the COPY FROM statement.
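A minimal sketch of a column export, assuming a hypothetical `quotes` table with columns `id` and `quote`:

```sql
-- Each output line is a JSON list such as [1, "a quote"] rather
-- than a JSON object, so the result cannot be re-imported with
-- COPY FROM.
COPY quotes (id, quote) TO DIRECTORY '/tmp/export/';
```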
The output_uri can be any expression evaluating to a string. The resulting string must be a valid URI using one of the supported schemes:
- file://
- s3://[<accesskey>:<secretkey>@]<bucketname>/<path>
If no scheme is given (e.g. '/path/to/file'), the default scheme file:// is used.
Note
If the s3 scheme is used without credentials, an attempt is made to read them from the AWS_ACCESS_KEY_ID and AWS_SECRET_KEY environment variables. The Java system properties aws.accessKeyId and aws.secretKey are used as a fallback.
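Both credential styles can be sketched as follows; the bucket, path, and table name are placeholders:

```sql
-- Credentials embedded in the URI.
COPY quotes TO 's3://accesskey:secretkey@mybucket/exports/quotes.json';

-- No credentials in the URI: the AWS_ACCESS_KEY_ID and AWS_SECRET_KEY
-- environment variables (or the aws.accessKeyId and aws.secretKey
-- system properties) are consulted instead.
COPY quotes TO 's3://mybucket/exports/quotes.json';
```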
The optional WITH clause can specify parameters for the copy statement.
```sql
[ WITH ( copy_parameter [= value] [, ...] ) ]
```
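As a hedged illustration, assuming a placeholder table `quotes` and the `compression` parameter (available parameters are listed below):

```sql
-- Compress the generated files with gzip.
COPY quotes TO DIRECTORY '/tmp/export/' WITH (compression = 'gzip');
```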
Possible copy_parameters are: