Various features have to store files in a blob store. For this they offer a `storage` config option that can be configured to store data in memory, on the file system, or on an object storage like S3.
Under the hood we use the abstract-blob-store API: https://github.com/maxogden/abstract-blob-store
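To give a feel for that API, here is a minimal sketch of the interface every strategy implements, using `fs-blob-store` (one of the compatible implementations) as an example; the key and content are made up:

```js
// Minimal sketch of the abstract-blob-store interface,
// using fs-blob-store as an example implementation.
const blobs = require('fs-blob-store')
const store = blobs(require('os').tmpdir())

// Write a blob under a key, then read it back
const ws = store.createWriteStream({key: 'example/hello.txt'}, (err, metadata) => {
  if (err) throw err
  // metadata.key is the key the blob was stored under
  store.createReadStream({key: metadata.key}).pipe(process.stdout)
})
ws.end('hello world')
```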
## Configuration
At the moment we have a fixed set of supported strategies, described in the sections below.
### Interface
```js
storage: {
  // an alias to our supported abstract blob stores
  strategy: 'strategy-name',
  computeKey({projectId, mimeType, extension, dateString, randomString}) {
    return `${projectId}/${dateString}/${randomString}${extension}`
  },
  // configuration that gets passed to the abstract blob store
  config: {...}
}
```
`computeKey()` is an optional function that can be provided if you want a custom path for your media files. The default implementation returns a key of the form `{dateString}/{randomString}{extension}`, but you can use the parameters passed to `computeKey()` to compose a key blueprint that matches your needs, e.g. separate media by project with `{projectId}/{dateString}/{randomString}{extension}`.
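For illustration, a `computeKey` that additionally groups media by type could look like this (a sketch; the values in the comment are hypothetical):

```js
storage: {
  strategy: 'fs',
  computeKey({projectId, mimeType, extension, dateString, randomString}) {
    // e.g. projectId = 42, mimeType = 'image/jpeg', dateString = '2023-01-15',
    // randomString = 'a1b2c3', extension = '.jpg' -> '42/image/2023-01-15/a1b2c3.jpg'
    const type = mimeType.split('/')[0]
    return `${projectId}/${type}/${dateString}/${randomString}${extension}`
  },
  config: {path: require('os').tmpdir()}
}
```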
## Strategies
### Memory
For testing purposes you can directly use an in-memory storage. Blobs are kept in process memory and are lost on restart, so this strategy is only suitable for tests:
```js
storage: {
  strategy: 'memory'
}
```
### Local File System
Example setup to write to a temporary folder:
```js
storage: {
  strategy: 'fs',
  config: {
    path: require('os').tmpdir()
  }
}
```
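For a persistent location instead of the temp folder, you can point the store at a dedicated directory. A sketch (the `./media` path is just an example; creating the directory up front avoids relying on the store to create it):

```js
const path = require('path')

// example location next to the server entry point (hypothetical)
const mediaDir = path.join(__dirname, 'media')
require('fs').mkdirSync(mediaDir, {recursive: true})

// in the server configuration:
storage: {
  strategy: 'fs',
  config: {path: mediaDir}
}
```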
### S3 Object Storage
S3 is the most common strategy. It is implemented by multiple cloud hosting providers and also supported by on-premise solutions like MinIO or Ceph.
The whole `config` object gets passed down to the aws-sdk S3 class.
The `accessKeyId` and `secretAccessKey` are optional when running on AWS ECS/EKS/Fargate together with an IAM service role. This simplifies token management and rotation within the AWS ecosystem.
```js
storage: {
  strategy: 's3',
  config: {
    bucket: 'livingdocs-images-dev',
    region: 'eu-central-1',
    accessKeyId: 'key',
    secretAccessKey: 'secret',
    params: {ACL: 'public-read'}
  }
}
```
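When running with an IAM service role as described above, the same configuration simply omits the static credentials and the AWS SDK resolves them from the environment. A sketch:

```js
storage: {
  strategy: 's3',
  config: {
    bucket: 'livingdocs-images-dev',
    region: 'eu-central-1',
    // no accessKeyId/secretAccessKey: the aws-sdk picks up
    // credentials from the IAM service role at runtime
    params: {ACL: 'public-read'}
  }
}
```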
### S3 with an HTTP Proxy
If you use an HTTP proxy like Squid, you can declare the `HTTPS_PROXY` environment variable, which then configures the AWS client, e.g. `HTTPS_PROXY=http://localhost:3128 node index.js`.
Or you can explicitly configure it with the options provided by the AWS SDK:
```js
storage: {
  strategy: 's3',
  config: {
    ...
    httpOptions: {
      proxy: 'http://localhost:3128'
      // If you can't use ssl in your http proxy,
      // you might need to disable it explicitly
      // sslEnabled: false
    }
  }
}
```
### Google Cloud Storage
The Google Cloud Storage strategy needs a bucket name and a credentials object. Please consult Google's detailed instructions on how to retrieve the credentials.
```js
storage: {
  strategy: 'google-cloud-storage',
  config: {
    bucket: 'my-bucket-name',
    credentials: {
      type: 'service_account',
      project_id: '******',
      private_key_id: '******',
      private_key: '******',
      client_email: '******',
      client_id: '******',
      auth_uri: 'https://accounts.google.com/o/oauth2/auth',
      token_uri: 'https://oauth2.googleapis.com/token',
      auth_provider_x509_cert_url: 'https://www.googleapis.com/oauth2/v1/certs',
      client_x509_cert_url: '******'
    }
  }
}
```
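Instead of inlining the credentials, you can also load the service account key file downloaded from Google, which has exactly this shape. A sketch (`./service-account.json` is a hypothetical path):

```js
storage: {
  strategy: 'google-cloud-storage',
  config: {
    bucket: 'my-bucket-name',
    // load the key file downloaded from the Google Cloud Console
    credentials: require('./service-account.json')
  }
}
```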
### Azure Blob Storage
The Azure Blob Storage provider configuration needs a storage account name and a container name. There are multiple options to authenticate against Azure Blob Storage:

- Generate a SAS token with read, write, create and delete access to the Azure Storage Account. Please consult the detailed instructions to generate the Shared Access Signature (SAS) for the `sasToken` config parameter. The SAS token should be rotated periodically.
- Skip the SAS token configuration and let the Livingdocs Server fetch credentials from environment variables, workload identity, managed identity, the Azure CLI, etc. The server will automatically select the most appropriate authentication method based on the available environment. With this method token rotation is not necessary, as Azure manages it.

If you skip the SAS token configuration, please make sure that your environment is properly set up with at least one of the supported authentication sources. We recommend using the Azure CLI for local development and Managed Identity or Workload Identity (on AKS) for production environments. Do not forget to assign the appropriate RBAC role to the identity used to access the storage account.
```js
storage: {
  strategy: 'azure-blob-storage',
  config: {
    storageAccountName: 'my-storage-account',
    sasToken: '?my-sas-token', // optional, do not define sasToken when using DefaultAzureCredential
    containerName: 'my-container-name'
  }
}
```
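If you rely on the environment-based authentication described above, simply omit `sasToken`. A sketch:

```js
storage: {
  strategy: 'azure-blob-storage',
  config: {
    storageAccountName: 'my-storage-account',
    // no sasToken: credentials are resolved from the environment
    // (e.g. Managed Identity, Workload Identity, or the Azure CLI)
    containerName: 'my-container-name'
  }
}
```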
### Cloudinary
With Cloudinary we support a storage provider that has built-in image processing and can directly serve the images, instead of using a separate image service in front of another object storage like S3.
The whole `config` object gets passed down to the Cloudinary SDK. The minimal configuration consists of the three properties listed here; for more details you might want to go through their configuration options.
```js
storage: {
  strategy: 'cloudinary',
  config: {
    cloud_name: 'sample',
    api_key: 'your-key',
    api_secret: 'your-secret'
  }
}
```
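To illustrate how this maps onto the SDK, here is a sketch of using such a config with the `cloudinary` Node SDK directly (not the actual server code; the public ID and transformation are made up):

```js
const cloudinary = require('cloudinary').v2

// the config object is handed to the SDK as-is
cloudinary.config({
  cloud_name: 'sample',
  api_key: 'your-key',
  api_secret: 'your-secret'
})

// Cloudinary can then build delivery URLs with transformations, e.g.:
const url = cloudinary.url('my-image', {width: 300, crop: 'fill'})
```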