Client compatibility list

This section is a work in progress; contributions are welcome.

s3cmd

Fully tested with the current API coverage. Here is a minimal configuration you can put in ~/.s3cfg:

[default]
host_base = s3.example.com
host_bucket = %(bucket)s.s3.example.com
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
use_https = True
signature_v2 = True

Adapt the credentials and replace s3.example.com with the value you specified for service-uri. use_https is needed only if Pithos is served over TLS. Pithos does not currently support v4 signatures, so the signature_v2 flag is required.

When testing locally, the following configuration can be used:

[default]
host_base = s3.example.com
host_bucket = %(bucket)s.s3.example.com
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
use_https = False
signature_v2 = True
proxy_host = localhost
proxy_port = 8080
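
Once configured, the usual s3cmd commands work against Pithos; for example (the bucket name is illustrative):

s3cmd mb s3://my-bucket
s3cmd put file.txt s3://my-bucket/
s3cmd ls s3://my-bucket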

libcloud

Working support with the S3 provider:

from libcloud.storage.types import Provider
from libcloud.storage.providers import get_driver
cls = get_driver(Provider.S3)
driver = cls('api key', 'api secret key', host='s3.example.com')
driver.list_containers()
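
Uploading works as with any other libcloud storage driver; a short sketch (container and file names are illustrative):

container = driver.create_container(container_name='backups')
driver.upload_object('/tmp/report.csv', container=container,
                     object_name='report.csv')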

rclone

Working support with the S3 provider:

RCLONE_CONFIG_<remote>_TYPE=s3
RCLONE_CONFIG_<remote>_ACCESS_KEY_ID=YOUR_ACCESS_KEY
RCLONE_CONFIG_<remote>_SECRET_ACCESS_KEY=YOUR_SECRET_KEY
RCLONE_CONFIG_<remote>_REGION=other-v2-signature
RCLONE_CONFIG_<remote>_ENDPOINT=s3.example.com
RCLONE_CONFIG_<remote>_ACL=private
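
Assuming the remote was named pithos in the variables above (the name is illustrative), the usual rclone commands then apply:

rclone lsd pithos:
rclone copy /local/dir pithos:my-bucket
rclone ls pithos:my-bucket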

Ansible

Sample task configuration to list a bucket using v2 signatures, selected through the fakes3 scheme (HTTP) or the fakes3s scheme (HTTPS):

- name: List bucket content
  aws_s3:
    s3_url: "fakes3s://s3.example.com"
    bucket: "my_bucket"
    mode: list
  register: my_bucket_content
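
Uploading follows the same pattern with mode: put; a sketch under the same assumptions (object and src values are illustrative):

- name: Upload a file to the bucket
  aws_s3:
    s3_url: "fakes3s://s3.example.com"
    bucket: "my_bucket"
    object: "backup.tar.gz"
    src: "/tmp/backup.tar.gz"
    mode: put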

Cyberduck

Ongoing integration

ownCloud

Working support

s3fs - S3 FUSE support

Working support. If you specified s3.example.com as service-uri, you can mount a bucket named bucket with the following command:

s3fs bucket /mnt/bucket -o url=https://s3.example.com

The credentials have to be specified in ~/.passwd-s3fs:

YOUR_ACCESS_KEY:YOUR_SECRET_KEY
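
Note that s3fs refuses credential files that are readable by other users, so restrict the permissions first:

chmod 600 ~/.passwd-s3fs

To mount at boot time, an equivalent /etc/fstab entry can be used (a sketch; the mount point is illustrative):

bucket /mnt/bucket fuse.s3fs _netdev,url=https://s3.example.com 0 0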

WAL-E - continuous archiving for Postgres

Support for S3-compatible object stores was added in version 0.8 of WAL-E. Configure WAL-E with the following environment variables:

AWS_ACCESS_KEY_ID YOUR_ACCESS_KEY
AWS_SECRET_ACCESS_KEY YOUR_SECRET_KEY
WALE_S3_ENDPOINT https+path://s3.example.com
WALE_S3_PREFIX s3://your-bucket/your-prefix
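
The wal-e commands below read these variables through envdir(8); a minimal sketch for creating the environment directory, assuming the values above:

umask u=rwx,g=rx,o=
mkdir -p /etc/wal-e.d/env
echo 'YOUR_ACCESS_KEY' > /etc/wal-e.d/env/AWS_ACCESS_KEY_ID
echo 'YOUR_SECRET_KEY' > /etc/wal-e.d/env/AWS_SECRET_ACCESS_KEY
echo 'https+path://s3.example.com' > /etc/wal-e.d/env/WALE_S3_ENDPOINT
echo 's3://your-bucket/your-prefix' > /etc/wal-e.d/env/WALE_S3_PREFIX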

Archiving WAL files

PostgreSQL needs the following settings in postgresql.conf:

wal_level = archive
archive_mode = on
archive_command = 'envdir /etc/wal-e.d/env /path/to/wal-e wal-push %p'
archive_timeout = 60

Once PostgreSQL is set up to archive WAL files, make a base backup:

envdir /etc/wal-e.d/env /path/to/wal-e backup-push /path/to/postgres/data

Restoring from archived WAL files

Pull a base backup:

envdir /etc/wal-e.d/env /path/to/wal-e backup-fetch /path/to/postgres/data LATEST

Create a recovery.conf file in the PostgreSQL data directory with the following content:

restore_command = 'envdir /etc/wal-e.d/env /path/to/wal-e wal-fetch "%f" "%p"'

Start PostgreSQL and check the logs to follow the restore progress.

Elasticsearch - index backup and restore

Snapshotting and restoring indices to Pithos is supported thanks to the AWS Cloud Plugin. To configure a snapshot repository that points to your Pithos installation, add the following to /etc/elasticsearch/elasticsearch.yml:

cloud:
  aws:
    access_key: <your key>
    secret_key: <your secret>
    s3:
      protocol: https
      endpoint: s3.example.com

Then create your repository:

$ curl -XPUT 'http://localhost:9200/_snapshot/pithos' -d '{
    "type": "s3",
    "settings": {
        "bucket": "es-snapshots"
    }
}'

Starting with version 2.4.2 of the plugin, all settings can be provided per-repository:

$ curl -XPUT 'http://localhost:9200/_snapshot/pithos' -d '{
    "type": "s3",
    "settings": {
        "bucket": "es-snapshots",
        "access_key": "your key",
        "secret_key": "your secret",
        "protocol": "http",
        "endpoint": "s3.example.com",
    }
}'
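
With the repository in place, snapshots are created and restored through the standard snapshot API; for example (the snapshot name is illustrative):

$ curl -XPUT 'http://localhost:9200/_snapshot/pithos/snapshot_1?wait_for_completion=true'
$ curl -XPOST 'http://localhost:9200/_snapshot/pithos/snapshot_1/_restore'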

AWS Language SDKs

In general, AWS Language SDKs can work with Pithos with the following configuration:

  • In ~/.aws/config:

    [default]
    s3 =
        signature_version = s3
    
  • In ~/.aws/credentials:

    [default]
    aws_access_key_id = <your key>
    aws_secret_access_key = <your secret>
    

You can define multiple profiles instead of altering the [default] configuration: repeat the configuration sections, naming them [profile <profile name>] in ~/.aws/config and [<profile name>] in ~/.aws/credentials, as in the example below.
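
For example, a profile named pithos (the name is illustrative) would look like this:

  • In ~/.aws/config:

    [profile pithos]
    s3 =
        signature_version = s3

  • In ~/.aws/credentials:

    [pithos]
    aws_access_key_id = <your key>
    aws_secret_access_key = <your secret>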

Shell (awscli)

Install awscli, then:

aws s3 ls --endpoint-url=https://your-endpoint

To use a non-default profile:

aws s3 ls --endpoint-url=https://your-endpoint --profile=<profile-name>
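
Other s3 subcommands work the same way; for instance, copying a local file into a bucket (names are illustrative):

aws s3 cp ./backup.tar.gz s3://my-bucket/ --endpoint-url=https://your-endpoint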

Python (boto3)

Install boto3 and create a Pithos client like this:

import boto3.session

session = boto3.session.Session()
client = session.client('s3', endpoint_url='https://pithos-endpoint')
client.list_buckets()

To use a non-default profile:

import boto3.session
session = boto3.session.Session(profile_name='profile-name')
client = session.client('s3', endpoint_url='https://pithos-endpoint')
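
From there, regular boto3 S3 calls apply; a short sketch (bucket and key names are illustrative):

# upload an object and read it back
client.put_object(Bucket='my-bucket', Key='hello.txt', Body=b'hello')
response = client.get_object(Bucket='my-bucket', Key='hello.txt')
print(response['Body'].read())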

Python (boto)

Boto version 2 is boto3's predecessor but is still widely used. Unlike boto3, it does not read the ~/.aws/* configuration files, so everything is passed explicitly:

from boto.s3.connection import S3Connection, OrdinaryCallingFormat

connection = S3Connection(key, secret, host='pithos-endpoint',
                          port=443, is_secure=True,
                          calling_format=OrdinaryCallingFormat())
bucket = connection.get_bucket('your-bucket')
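
Iterating over the bucket's keys then works as usual:

# list every key in the bucket
for key in bucket.list():
    print(key.name)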

.NET

Install AWSSDK.S3, then:

Amazon.AWSConfigsS3.UseSignatureVersion4 = false;
var config = new Amazon.S3.AmazonS3Config()
{
    ServiceURL = host,
    SignatureVersion = "s3",
};
var client = new Amazon.S3.AmazonS3Client(apikey, secretKey, config);

Java

Install AWS SDK for Java, then:

import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.s3.AmazonS3Client;

ClientConfiguration config = new ClientConfiguration();
config.setSignerOverride("S3SignerType");
AmazonS3Client s3 = new AmazonS3Client(config);
s3.setEndpoint("https://your-endpoint");

PHP

Install the AWS SDK for PHP. Important: only version 2 of the SDK is suitable, as version 3 only supports signature v4, which Pithos does not yet implement. After installing, use something like this:

// connect
$s3Client = Aws\S3\S3Client::factory([
    'base_url' => 'https://your-endpoint.com',
    'key'      => 'your-key',
    'secret'   => 'your-secret',
    'region'   => 'region', // must be filled with something, even if you have no regions
]);

// list all files in bucket
$iterator = $s3Client->getIterator('ListObjects', array(
    'Bucket' => $bucket,
    'Prefix' => 'foo'
));

foreach ($iterator as $object) {
    echo $object['Key'] . "\n";
}