Azure Storage Datalake Service

Since Camel 3.8

Both producer and consumer are supported

The Azure Storage Datalake component is used for storing and retrieving files from the Azure Storage Datalake Service using the Azure APIs v12.

Prerequisites

You need to have a valid Azure account with Azure storage set up. More information can be found at Azure Documentation Portal.

Maven users will need to add the following dependency to their pom.xml for this component.

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-azure-storage-datalake</artifactId>
    <version>x.x.x</version>
    <!-- use the same version as your camel core version -->
</dependency>

Uri Format

azure-storage-datalake:accountName[/fileSystemName][?options]

In case of a consumer, both accountName and fileSystemName are required. In case of the producer, it depends on the operation being requested.

You can append query options to the URI in the following format, ?option1=value&option2=value&…​
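For illustration, a hypothetical full URI (the account, filesystem, and file names here are placeholders) might look like:

```
azure-storage-datalake:myAccount/myfilesystem?fileName=report.csv&credentialType=AZURE_IDENTITY
```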

Configuring Options

Camel components are configured on two separate levels:

  • component level

  • endpoint level

Configuring Component Options

The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.

Some components have only a few options, while others may have many. Because components typically have pre-configured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all.

Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
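As an illustration, a sketch of configuring this component in application.properties (the keys follow the usual camel.component.<name>.<option> convention and the values are placeholders; verify the exact keys for your runtime):

```
camel.component.azure-storage-datalake.client-id = <your-client-id>
camel.component.azure-storage-datalake.client-secret = <your-client-secret>
camel.component.azure-storage-datalake.tenant-id = <your-tenant-id>
```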

Configuring Endpoint Options

Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.

Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java.

A good practice when configuring options is to use Property Placeholders, which allow you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, which gives more flexibility and reuse.
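For instance, a route sketch using property placeholders for the account, filesystem, and key (the placeholder names are assumptions to be defined in your properties file):

```java
// accountName, fileSystemName and accountKey resolve from configured properties
from("azure-storage-datalake:{{accountName}}/{{fileSystemName}}?fileName=test.txt&accountKey={{accountKey}}")
    .to("file://fileDirectory");
```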

The following two sections list all the options, firstly for the component and then for the endpoint.

Component Options

The Azure Storage Datalake Service component supports 38 options, which are listed below.

Name Description Default Type

clientId (common)

client id for azure account.

String

close (common)

Whether or not a file changed event raised indicates completion (true) or modification (false).

Boolean

closeStreamAfterRead (common)

check for closing stream after read.

Boolean

configuration (common)

configuration object for datalake.

DataLakeConfiguration

credentialType (common)

Determines the credential strategy to adopt.

Enum values:

  • CLIENT_SECRET

  • SHARED_KEY_CREDENTIAL

  • AZURE_IDENTITY

  • AZURE_SAS

  • SERVICE_CLIENT_INSTANCE

CLIENT_SECRET

CredentialType

dataCount (common)

count number of bytes to download.

Long

directoryName (common)

directory of the file to be handled in component.

String

downloadLinkExpiration (common)

download link expiration time.

Long

expression (common)

expression for queryInputStream.

String

fileDir (common)

directory of file to do operations in the local system.

String

fileName (common)

name of file to be handled in component.

String

fileOffset (common)

offset position in file for different operations.

Long

maxResults (common)

maximum number of results to show at a time.

Integer

maxRetryRequests (common)

Number of retries for a given request.

int

openOptions (common)

set open options for creating file.

Set

path (common)

path in azure datalake for operations.

String

permission (common)

permission string for the file.

String

position (common)

This parameter allows the caller to upload data in parallel and control the order in which it is appended to the file.

Long

recursive (common)

recursively include all paths.

Boolean

regex (common)

regular expression for matching file names.

String

retainUncommitedData (common)

Whether or not uncommitted data is to be retained after the operation.

Boolean

serviceClient (common)

Autowired datalake service client for azure storage datalake.

DataLakeServiceClient

sharedKeyCredential (common)

shared key credential for azure datalake gen2.

StorageSharedKeyCredential

tenantId (common)

tenant id for azure account.

String

timeout (common)

Timeout for operation.

Duration

umask (common)

umask permission for file.

String

userPrincipalNameReturned (common)

whether or not to use upn.

Boolean

bridgeErrorHandler (consumer)

Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.

false

boolean

lazyStartProducer (producer)

Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.

false

boolean

operation (producer)

operation to be performed.

Enum values:

  • listFileSystem

  • listFiles

listFileSystem

DataLakeOperationsDefinition

autowiredEnabled (advanced)

Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.

true

boolean

healthCheckConsumerEnabled (health)

Used for enabling or disabling all consumer based health checks from this component.

true

boolean

healthCheckProducerEnabled (health)

Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true.

true

boolean

accountKey (security)

account key for authentication.

String

clientSecret (security)

client secret for azure account.

String

clientSecretCredential (security)

client secret credential for authentication.

ClientSecretCredential

sasCredential (security)

SAS token credential.

AzureSasCredential

sasSignature (security)

SAS token signature.

String

Endpoint Options

The Azure Storage Datalake Service endpoint is configured using URI syntax:

azure-storage-datalake:accountName/fileSystemName

with the following path and query parameters:

Path Parameters (2 parameters)

Name Description Default Type

accountName (common)

name of the azure account.

String

fileSystemName (common)

name of filesystem to be used.

String

Query Parameters (53 parameters)

Name Description Default Type

clientId (common)

client id for azure account.

String

close (common)

Whether or not a file changed event raised indicates completion (true) or modification (false).

Boolean

closeStreamAfterRead (common)

check for closing stream after read.

Boolean

credentialType (common)

Determines the credential strategy to adopt.

Enum values:

  • CLIENT_SECRET

  • SHARED_KEY_CREDENTIAL

  • AZURE_IDENTITY

  • AZURE_SAS

  • SERVICE_CLIENT_INSTANCE

CLIENT_SECRET

CredentialType

dataCount (common)

count number of bytes to download.

Long

dataLakeServiceClient (common)

service client of datalake.

DataLakeServiceClient

directoryName (common)

directory of the file to be handled in component.

String

downloadLinkExpiration (common)

download link expiration time.

Long

expression (common)

expression for queryInputStream.

String

fileDir (common)

directory of file to do operations in the local system.

String

fileName (common)

name of file to be handled in component.

String

fileOffset (common)

offset position in file for different operations.

Long

maxResults (common)

maximum number of results to show at a time.

Integer

maxRetryRequests (common)

Number of retries for a given request.

int

openOptions (common)

set open options for creating file.

Set

path (common)

path in azure datalake for operations.

String

permission (common)

permission string for the file.

String

position (common)

This parameter allows the caller to upload data in parallel and control the order in which it is appended to the file.

Long

recursive (common)

recursively include all paths.

Boolean

regex (common)

regular expression for matching file names.

String

retainUncommitedData (common)

Whether or not uncommitted data is to be retained after the operation.

Boolean

serviceClient (common)

Autowired datalake service client for azure storage datalake.

DataLakeServiceClient

sharedKeyCredential (common)

shared key credential for azure datalake gen2.

StorageSharedKeyCredential

tenantId (common)

tenant id for azure account.

String

timeout (common)

Timeout for operation.

Duration

umask (common)

umask permission for file.

String

userPrincipalNameReturned (common)

whether or not to use upn.

Boolean

sendEmptyMessageWhenIdle (consumer)

If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.

false

boolean

bridgeErrorHandler (consumer (advanced))

Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.

false

boolean

exceptionHandler (consumer (advanced))

To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.

ExceptionHandler

exchangePattern (consumer (advanced))

Sets the exchange pattern when the consumer creates an exchange.

Enum values:

  • InOnly

  • InOut

ExchangePattern

pollStrategy (consumer (advanced))

A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.

PollingConsumerPollStrategy

operation (producer)

operation to be performed.

Enum values:

  • listFileSystem

  • listFiles

listFileSystem

DataLakeOperationsDefinition

lazyStartProducer (producer (advanced))

Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.

false

boolean

backoffErrorThreshold (scheduler)

The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.

int

backoffIdleThreshold (scheduler)

The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.

int

backoffMultiplier (scheduler)

To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.

int

delay (scheduler)

Milliseconds before the next poll.

500

long

greedy (scheduler)

If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.

false

boolean

initialDelay (scheduler)

Milliseconds before the first poll starts.

1000

long

repeatCount (scheduler)

Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.

0

long

runLoggingLevel (scheduler)

The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.

Enum values:

  • TRACE

  • DEBUG

  • INFO

  • WARN

  • ERROR

  • OFF

TRACE

LoggingLevel

scheduledExecutorService (scheduler)

Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.

ScheduledExecutorService

scheduler (scheduler)

To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler.

none

Object

schedulerProperties (scheduler)

To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.

Map

startScheduler (scheduler)

Whether the scheduler should be auto started.

true

boolean

timeUnit (scheduler)

Time unit for initialDelay and delay options.

Enum values:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

MILLISECONDS

TimeUnit

useFixedDelay (scheduler)

Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.

true

boolean

accountKey (security)

account key for authentication.

String

clientSecret (security)

client secret for azure account.

String

clientSecretCredential (security)

client secret credential for authentication.

ClientSecretCredential

sasCredential (security)

SAS token credential.

AzureSasCredential

sasSignature (security)

SAS token signature.

String

Methods of authentication

To use this component, you must select a credentialType and provide the corresponding parameters:

  • SHARED_KEY_CREDENTIAL: Provide the accountName and accessKey for your Azure account, or supply a StorageSharedKeyCredential instance via the sharedKeyCredential option.

  • CLIENT_SECRET: Supply a ClientSecretCredential instance via the clientSecretCredential option, or provide the accountName, clientId, clientSecret and tenantId for authentication with Azure Active Directory.

  • SERVICE_CLIENT_INSTANCE: Supply a DataLakeServiceClient instance via the serviceClient option.

  • AZURE_IDENTITY: Use the Default Azure Credential Provider Chain.

  • AZURE_SAS: Provide the sasSignature or sasCredential parameters to use the SAS mechanism.

The default is CLIENT_SECRET.
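As a minimal sketch of the SHARED_KEY_CREDENTIAL approach (the account name, key, and bean name are placeholders), a StorageSharedKeyCredential can be bound in the Camel registry and referenced from the endpoint:

```java
import com.azure.storage.common.StorageSharedKeyCredential;

// bind a shared key credential in the Camel registry (names are illustrative)
StorageSharedKeyCredential credential =
        new StorageSharedKeyCredential("myAccount", "myAccessKey");
camelContext.getRegistry().bind("myCredential", credential);

from("direct:start")
    .to("azure-storage-datalake:myAccount/myfilesystem"
        + "?credentialType=SHARED_KEY_CREDENTIAL&sharedKeyCredential=#myCredential");
```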

Usage

For example, to download the contents of the file test.txt from the filesystem named filesystem in the camelTesting storage account, use the following snippet:

from("azure-storage-datalake:camelTesting/filesystem?fileName=test.txt&accountKey=key")
    .to("file://fileDirectory");

Message Headers

The Azure Storage Datalake Service component supports 63 message headers, which are listed below:

Name Description Default Type

CamelAzureStorageDataLakeListFileSystemsOptions (producer)

Constant: LIST_FILESYSTEMS_OPTIONS

Defines options available to configure the behavior of a call to listFileSystemsSegment on a DataLakeServiceAsyncClient object. Null may be passed.

ListFileSystemsOptions

CamelAzureStorageDataLakeTimeout (producer)

Constant: TIMEOUT

An optional timeout value beyond which a RuntimeException will be raised.

Duration

CamelAzureStorageDataLakeOperation (producer)

Constant: DATALAKE_OPERATION

Specify the producer operation to execute. Different operations allowed are shown below.

Enum values:

  • listFileSystem

  • createFileSystem

  • deleteFileSystem

  • listPaths

  • getFile

  • downloadToFile

  • downloadLink

  • deleteFile

  • appendToFile

  • flushToFile

  • uploadFromFile

  • upload

  • openQueryInputStream

  • createFile

  • deleteDirectory

DataLakeOperationsDefinition

CamelAzureStorageDataLakeFileSystemName (producer)

Constant: FILESYSTEM_NAME

Name of the file system in azure datalake on which operation is to be performed. Please make sure that filesystem name is all lowercase.

String

CamelAzureStorageDataLakeDirectoryName (producer)

Constant: DIRECTORY_NAME

Name of the directory in azure datalake on which operation is to be performed.

String

CamelAzureStorageDataLakeFileName (producer)

Constant: FILE_NAME

Name of the file in azure datalake on which operation is to be performed.

String

CamelAzureStorageDataLakeMetadata (from both)

Constant: METADATA

The metadata to associate with the file.

Map

CamelAzureStorageDataLakePublicAccessType (producer)

Constant: PUBLIC_ACCESS_TYPE

Defines options available to configure the behavior of a call to listFileSystemsSegment on a DataLakeServiceAsyncClient object.

PublicAccessType

CamelAzureStorageDataLakeRawHttpHeaders (consumer)

Constant: RAW_HTTP_HEADERS

Non parsed http headers that can be used by the user.

HttpHeaders

CamelAzureStorageDataLakeRequestCondition (producer)

Constant: DATALAKE_REQUEST_CONDITION

This contains values which will restrict the successful operation of a variety of requests to the conditions present. These conditions are entirely optional.

DataLakeRequestConditions

CamelAzureStorageDataLakeListPathOptions (producer)

Constant: LIST_PATH_OPTIONS

Defines options available to configure the behavior of a call to listContainersSegment on a DataLakeFileSystemClient object. Null may be passed.

ListPathOptions

CamelAzureStorageDataLakePath (producer)

Constant: PATH

Path of the file to be used for upload operations.

String

CamelAzureStorageDataLakeRecursive (producer)

Constant: RECURSIVE

Specifies if the call to listContainersSegment should recursively include all paths.

Boolean

CamelAzureStorageDataLakeMaxResults (producer)

Constant: MAX_RESULTS

Specifies the maximum number of blobs to return, including all BlobPrefix elements.

Integer

CamelAzureStorageDataLakeUserPrincipalNameReturned (producer)

Constant: USER_PRINCIPAL_NAME_RETURNED

Specifies if the name of the user principal should be returned.

Boolean

CamelAzureStorageDataLakeRegex (producer)

Constant: REGEX

Filter the results to return only those files which match the specified regular expression.

String

CamelAzureStorageDataLakeFileDir (producer)

Constant: FILE_DIR

Directory in which the file is to be downloaded.

String

CamelAzureStorageDataLakeAccessTier (consumer)

Constant: ACCESS_TIER

Access tier of file.

AccessTier

CamelAzureStorageDataLakeContentMD5 (producer)

Constant: CONTENT_MD5

An MD5 hash of the content. The hash is used to verify the integrity of the file during transport.

byte[]

CamelAzureStorageDataLakeFileRange (producer)

Constant: FILE_RANGE

This is a representation of a range of bytes on a file, typically used during a download operation. Passing null as a FileRange value will default to the entire range of the file.

FileRange

CamelAzureStorageDataLakeParallelTransferOptions (producer)

Constant: PARALLEL_TRANSFER_OPTIONS

The configuration used to parallelize data transfer operations.

ParallelTransferOptions

CamelAzureStorageDataLakeOpenOptions (producer)

Constant: OPEN_OPTIONS

Set of OpenOption used to configure how to open or create a file.

Set

CamelAzureStorageDataLakeAccessTierChangeTime (consumer)

Constant: ACCESS_TIER_CHANGE_TIME

Datetime when the access tier of the blob last changed.

OffsetDateTime

CamelAzureStorageDataLakeArchiveStatus (consumer)

Constant: ARCHIVE_STATUS

Archive status of file.

ArchiveStatus

CamelAzureStorageDataLakeCacheControl (consumer)

Constant: CACHE_CONTROL

Cache control specified for the file.

String

CamelAzureStorageDataLakeContentDisposition (consumer)

Constant: CONTENT_DISPOSITION

Content disposition specified for the file.

String

CamelAzureStorageDataLakeContentEncoding (consumer)

Constant: CONTENT_ENCODING

Content encoding specified for the file.

String

CamelAzureStorageDataLakeContentLanguage (consumer)

Constant: CONTENT_LANGUAGE

Content language specified for the file.

String

CamelAzureStorageDataLakeContentType (consumer)

Constant: CONTENT_TYPE

Content type specified for the file.

String

CamelAzureStorageDataLakeCopyCompletionTime (consumer)

Constant: COPY_COMPLETION_TIME

Conclusion time of the last attempted Copy Blob operation where this file was the destination file.

OffsetDateTime

CamelAzureStorageDataLakeCopyId (consumer)

Constant: COPY_ID

String identifier for this copy operation.

String

CamelAzureStorageDataLakeCopyProgress (consumer)

Constant: COPY_PROGRESS

Contains the number of bytes copied and the total bytes in the source in the last attempted Copy Blob operation where this file was the destination file.

String

CamelAzureStorageDataLakeCopySource (consumer)

Constant: COPY_SOURCE

URL up to 2 KB in length that specifies the source file or file used in the last attempted Copy Blob operation where this file was the destination file.

String

CamelAzureStorageDataLakeCopyStatus (consumer)

Constant: COPY_STATUS

Status of the last copy operation performed on the file.

Enum values:

  • pending

  • success

  • aborted

  • failed

CopyStatusType

CamelAzureStorageDataLakeCopyStatusDescription (consumer)

Constant: COPY_STATUS_DESCRIPTION

The description of the copy’s status.

String

CamelAzureStorageDataLakeCreationTime (consumer)

Constant: CREATION_TIME

Creation time of the file.

OffsetDateTime

CamelAzureStorageDataLakeEncryptionKeySha256 (consumer)

Constant: ENCRYPTION_KEY_SHA_256

The SHA-256 hash of the encryption key used to encrypt the file.

String

CamelAzureStorageDataLakeETag (consumer)

Constant: E_TAG

The E Tag of the file.

String

CamelAzureStorageDataLakeFileSize (consumer)

Constant: FILE_SIZE

Size of the file.

Long

CamelAzureStorageDataLakeLastModified (consumer)

Constant: LAST_MODIFIED

Datetime when the file was last modified.

OffsetDateTime

CamelAzureStorageDataLakeLeaseDuration (consumer)

Constant: LEASE_DURATION

Type of lease on the file.

Enum values:

  • infinite

  • fixed

LeaseDurationType

CamelAzureStorageDataLakeLeaseState (consumer)

Constant: LEASE_STATE

State of the lease on the file.

Enum values:

  • available

  • leased

  • expired

  • breaking

  • broken

LeaseStateType

CamelAzureStorageDataLakeLeaseStatus (consumer)

Constant: LEASE_STATUS

Status of the lease on the file.

Enum values:

  • locked

  • unlocked

LeaseStatusType

CamelAzureStorageDataLakeIncrementalCopy (producer)

Constant: INCREMENTAL_COPY

Flag indicating if the file was incrementally copied.

Boolean

CamelAzureStorageDataLakeServerEncrypted (consumer)

Constant: SERVER_ENCRYPTED

Flag indicating if the file’s content is encrypted on the server.

Boolean

CamelAzureStorageDataLakeDownloadLinkExpiration (producer)

Constant: DOWNLOAD_LINK_EXPIRATION

Set the Expiration time of the download link.

Long

CamelAzureStorageDataLakeDownloadLink (consumer)

Constant: DOWNLOAD_LINK

The link that can be used to download the file from datalake.

String

CamelAzureStorageDataLakeFileOffset (producer)

Constant: FILE_OFFSET

The position where the data is to be appended.

Long

CamelAzureStorageDataLakeLeaseId (producer)

Constant: LEASE_ID

By setting lease id, requests will fail if the provided lease does not match the active lease on the file.

String

CamelAzureStorageDataLakePathHttpHeaders (producer)

Constant: PATH_HTTP_HEADERS

Additional parameters for a set of operations.

PathHttpHeaders

CamelAzureStorageDataLakeRetainCommitedData (producer)

Constant: RETAIN_UNCOMMITED_DATA

Determines whether or not uncommitted data is to be retained after the operation.

Boolean

CamelAzureStorageDataLakeClose (producer)

Constant: CLOSE

Whether or not a file changed event raised indicates completion (true) or modification (false).

Boolean

CamelAzureStorageDataLakePosition (producer)

Constant: POSITION

The length of the file after all data has been written.

Long

CamelAzureStorageDataLakeExpression (producer)

Constant: EXPRESSION

The query expression on the file.

String

CamelAzureStorageDataLakeInputSerialization (producer)

Constant: INPUT_SERIALIZATION

Defines the input serialization for a file query request. either FileQueryJsonSerialization or FileQueryDelimitedSerialization.

FileQuerySerialization

CamelAzureStorageDataLakeOutputSerialization (producer)

Constant: OUTPUT_SERIALIZATION

Defines the output serialization for a file query request. either FileQueryJsonSerialization or FileQueryDelimitedSerialization.

FileQuerySerialization

CamelAzureStorageDataLakeErrorConsumer (producer)

Constant: ERROR_CONSUMER

Sets error consumer for file query.

Consumer

CamelAzureStorageDataLakeProgressConsumer (producer)

Constant: PROGRESS_CONSUMER

Sets progress consumer for file query.

Consumer

CamelAzureStorageDataLakeQueryOptions (producer)

Constant: QUERY_OPTIONS

Optional parameters for File Query.

FileQueryOptions

CamelAzureStorageDataLakePermission (producer)

Constant: PERMISSION

Sets the permission for file.

String

CamelAzureStorageDataLakeUmask (producer)

Constant: UMASK

Sets the umask for file.

String

CamelAzureStorageDataLakeFileClient (producer)

Constant: FILE_CLIENT

Sets the file client to use.

DataLakeFileClient

CamelAzureStorageDataLakeFlush (producer)

Constant: FLUSH

Sets whether to flush on append.

Boolean

Automatic detection of service client

The component is capable of automatically detecting the presence of a DataLakeServiceClient bean in the registry. Hence, if your registry has only one instance of type DataLakeServiceClient, it will automatically be used as the default client, and you won’t have to define it explicitly as a URI parameter.
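A sketch of supplying such a bean (the endpoint URL, credentials, and bean name are placeholders; DataLakeServiceClientBuilder comes from the Azure Storage Datalake SDK):

```java
import com.azure.storage.common.StorageSharedKeyCredential;
import com.azure.storage.file.datalake.DataLakeServiceClient;
import com.azure.storage.file.datalake.DataLakeServiceClientBuilder;

// build a service client and register it; with a single instance in the
// registry, the component can pick it up without a serviceClient URI option
DataLakeServiceClient client = new DataLakeServiceClientBuilder()
        .endpoint("https://myAccount.dfs.core.windows.net")
        .credential(new StorageSharedKeyCredential("myAccount", "myAccessKey"))
        .buildClient();
camelContext.getRegistry().bind("dataLakeServiceClient", client);
```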

Azure Storage DataLake Producer Operations

The operations supported by the Azure Storage DataLake producer are given below:

Operations on Service level

For these operations, the accountName option is required.

Operation Description

listFileSystem

List all the file systems that are present in the given azure account.

Operations on File system level

For these operations, the accountName and fileSystemName options are required.

Operation Description

createFileSystem

Creates a new file system within the storage account

deleteFileSystem

Deletes the specified file system within the storage account

listPaths

Returns a list of all the files within the given path in the given file system, with the folder structure flattened

Operations on Directory level

For these operations, the accountName, fileSystemName and directoryName options are required.

Operation Description

createFile

Creates a new file in the specified directory within the fileSystem

deleteDirectory

Deletes the specified directory within the file system

Operations on file level

For these operations, the accountName, fileSystemName and fileName options are required.

Operation Description

getFile

Get the contents of a file

downloadToFile

Downloads the entire file from the file system into a path specified by fileDir.

downloadLink

Generates a download link for the specified file using a Shared Access Signature (SAS). The expiration time for the link can be specified; otherwise, 1 hour is used by default.

deleteFile

Deletes the specified file.

appendToFile

Appends the data passed to the specified file in the file system. A flush is required after appending.

flushToFile

Flushes the data already appended to the specified file.

openQueryInputStream

Opens an inputstream based on the query passed to the endpoint. For this operation, you must first register the query acceleration feature with your subscription.

Refer to the examples section below for more details on how to use these operations.

Consumer Examples

To consume a file from the storage datalake and write it to a local file using the file component, you can do the following:

from("azure-storage-datalake:cameltesting/filesystem?fileName=test.txt&accountKey=yourAccountKey")
    .to("file:/filelocation");

You can also write directly to a file without using the file component. For this, specify the local directory in the fileDir option to save the file on your machine.

from("azure-storage-datalake:cameltesting/filesystem?fileName=test.txt&accountKey=yourAccountKey&fileDir=/test/directory")
    .to("mock:results");

The component also supports batch consumption, so you can consume multiple files from a file system by specifying the path from which to consume the files.

from("azure-storage-datalake:cameltesting/filesystem?accountKey=yourAccountKey&fileDir=/test/directory&path=abc/test")
    .to("mock:results");

Producer Examples

  • listFileSystem

from("direct:start")
    .process(exchange -> {
        //required headers can be added here
        exchange.getIn().setHeader(DataLakeConstants.LIST_FILESYSTEMS_OPTIONS, new ListFileSystemsOptions().setMaxResultsPerPage(10));
    })
    .to("azure-storage-datalake:cameltesting?operation=listFileSystem&dataLakeServiceClient=#serviceClient")
    .to("mock:results");
  • createFileSystem

from("direct:start")
    .process(exchange -> {
        exchange.getIn().setHeader(DataLakeConstants.FILESYSTEM_NAME, "test1");
    })
    .to("azure-storage-datalake:cameltesting?operation=createFileSystem&dataLakeServiceClient=#serviceClient")
    .to("mock:results");
  • deleteFileSystem

from("direct:start")
    .process(exchange -> {
        exchange.getIn().setHeader(DataLakeConstants.FILESYSTEM_NAME, "test1");
    })
    .to("azure-storage-datalake:cameltesting?operation=deleteFileSystem&dataLakeServiceClient=#serviceClient")
    .to("mock:results");
  • listPaths

from("direct:start")
    .process(exchange -> {
        exchange.getIn().setHeader(DataLakeConstants.LIST_PATH_OPTIONS, new ListPathsOptions().setPath("/main"));
    })
    .to("azure-storage-datalake:cameltesting/filesystem?operation=listPaths&dataLakeServiceClient=#serviceClient")
    .to("mock:results");
  • getFile

This can be done in two ways. We can either set an OutputStream in the exchange body:

from("direct:start")
    .process(exchange -> {
        // set an OutputStream to which the file data should be written
        exchange.getIn().setBody(outputStream);
    })
    .to("azure-storage-datalake:cameltesting/filesystem?operation=getFile&fileName=test.txt&dataLakeServiceClient=#serviceClient")
    .to("mock:results");

Or, if the body is not set, the operation will return an InputStream, provided that you have already registered for query acceleration in the Azure portal.

from("direct:start")
    .to("azure-storage-datalake:cameltesting/filesystem?operation=getFile&fileName=test.txt&dataLakeServiceClient=#serviceClient")
    .process(exchange -> {
        InputStream inputStream = exchange.getMessage().getBody(InputStream.class);
        System.out.println(IOUtils.toString(inputStream, StandardCharsets.UTF_8.name()));
    })
    .to("mock:results");
  • deleteFile

from("direct:start")
    .to("azure-storage-datalake:cameltesting/filesystem?operation=deleteFile&fileName=test.txt&dataLakeServiceClient=#serviceClient")
    .to("mock:results");
  • downloadToFile

from("direct:start")
    .to("azure-storage-datalake:cameltesting/filesystem?operation=downloadToFile&fileName=test.txt&fileDir=/test/mydir&dataLakeServiceClient=#serviceClient")
    .to("mock:results");
  • downloadLink

from("direct:start")
    .to("azure-storage-datalake:cameltesting/filesystem?operation=downloadLink&fileName=test.txt&dataLakeServiceClient=#serviceClient")
    .process(exchange -> {
        String link = exchange.getMessage().getBody(String.class);
        System.out.println(link);
    })
    .to("mock:results");
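The link returned by downloadLink is a URL whose validity can reportedly be bounded via a message header. A hedged variant of the route above, treating the DataLakeConstants.DOWNLOAD_LINK_EXPIRATION header name and the one-hour value as assumptions not stated in this document:

```java
from("direct:start")
    .process(exchange -> {
        // assumed header: expiration of the generated link, in milliseconds
        exchange.getIn().setHeader(DataLakeConstants.DOWNLOAD_LINK_EXPIRATION, 3600 * 1000L);
    })
    .to("azure-storage-datalake:cameltesting/filesystem?operation=downloadLink&fileName=test.txt&dataLakeServiceClient=#serviceClient")
    .to("mock:results");
```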
  • appendToFile

from("direct:start")
    .process(exchange -> {
        final String data = "test data";
        final InputStream inputStream = new ByteArrayInputStream(data.getBytes(StandardCharsets.UTF_8));
        exchange.getIn().setBody(inputStream);
    })
    .to("azure-storage-datalake:cameltesting/filesystem?operation=appendToFile&fileName=test.txt&dataLakeServiceClient=#serviceClient")
    .to("mock:results");
  • flushToFile

from("direct:start")
    .process(exchange -> {
        exchange.getIn().setHeader(DataLakeConstants.POSITION, 0);
    })
    .to("azure-storage-datalake:cameltesting/filesystem?operation=flushToFile&fileName=test.txt&dataLakeServiceClient=#serviceClient")
    .to("mock:results");
  • openQueryInputStream

For this operation, you should have already registered for query acceleration in the Azure portal.

from("direct:start")
    .process(exchange -> {
        exchange.getIn().setHeader(DataLakeConstants.QUERY_OPTIONS, new FileQueryOptions("SELECT * from BlobStorage"));
    })
    .to("azure-storage-datalake:cameltesting/filesystem?operation=openQueryInputStream&fileName=test.txt&dataLakeServiceClient=#serviceClient")
    .to("mock:results");
  • upload

from("direct:start")
    .process(exchange -> {
        final String data = "test data";
        final InputStream inputStream = new ByteArrayInputStream(data.getBytes(StandardCharsets.UTF_8));
        exchange.getIn().setBody(inputStream);
    })
    .to("azure-storage-datalake:cameltesting/filesystem?operation=upload&fileName=test.txt&dataLakeServiceClient=#serviceClient")
    .to("mock:results");
  • uploadFromFile

from("direct:start")
    .process(exchange -> {
        exchange.getIn().setHeader(DataLakeConstants.PATH, "test/file.txt");
    })
    .to("azure-storage-datalake:cameltesting/filesystem?operation=uploadFromFile&fileName=test.txt&dataLakeServiceClient=#serviceClient")
    .to("mock:results");
  • createFile

from("direct:start")
    .process(exchange -> {
        exchange.getIn().setHeader(DataLakeConstants.DIRECTORY_NAME, "test/file/");
    })
    .to("azure-storage-datalake:cameltesting/filesystem?operation=createFile&fileName=test.txt&dataLakeServiceClient=#serviceClient")
    .to("mock:results");
  • deleteDirectory

from("direct:start")
    .process(exchange -> {
        exchange.getIn().setHeader(DataLakeConstants.DIRECTORY_NAME, "test/file/");
    })
    .to("azure-storage-datalake:cameltesting/filesystem?operation=deleteDirectory&dataLakeServiceClient=#serviceClient")
    .to("mock:results");

Testing

Please run all the unit tests and integration tests while making changes to the component, as changes or version upgrades can break things. To run all the tests in the component, you will need to obtain an Azure accountName and accessKey. After obtaining them, you can run the full test suite from this component's directory with the following Maven command:

mvn verify -Dazure.storage.account.name=<accountName> -Dazure.storage.account.key=<accessKey>

You can also skip the integration tests and run only the basic unit tests by using the command:

mvn test