Welcome to S3utils’s documentation!

S3utils deals with files on Amazon S3 buckets.

Installation

Install from PyPI:

pip install s3utils

Setup in Django

In your Django settings file:

S3UTILS_DEBUG_LEVEL = 1
AWS_ACCESS_KEY_ID = 'your access key'
AWS_SECRET_ACCESS_KEY = 'your secret key'
AWS_STORAGE_BUCKET_NAME = 'your bucket name'

In your code:

from s3utils import S3utils
s3utils = S3utils()

Setup manually

In your code:

from s3utils import S3utils
s3utils = S3utils(
AWS_ACCESS_KEY_ID = 'your access key',
AWS_SECRET_ACCESS_KEY = 'your secret key',
AWS_STORAGE_BUCKET_NAME = 'your bucket name',
S3UTILS_DEBUG_LEVEL = 1,  # change to 0 for less verbose output
)

S3utils 0.5

class s3utils.S3utils(AWS_ACCESS_KEY_ID='', AWS_SECRET_ACCESS_KEY='', AWS_STORAGE_BUCKET_NAME='', MEDIA_ROOT='', MEDIA_ROOT_BASE='', S3UTILS_DEBUG_LEVEL=0)

Methods

chmod(*args, **kwargs)

Sets permissions for a file on S3.

Parameters:

target_file : string

Path to file on S3

acl : string, optional

File permissions on S3. Default is public-read.

options:
  • private: Owner gets FULL_CONTROL. No one else has any access rights.
  • public-read: Owner gets FULL_CONTROL and the anonymous principal is granted READ access.
  • public-read-write: Owner gets FULL_CONTROL and the anonymous principal is granted READ and WRITE access.
  • authenticated-read: Owner gets FULL_CONTROL and any principal authenticated as a registered Amazon S3 user is granted READ access.

Examples

>>> from s3utils import S3utils
>>> s3utils = S3utils(
... AWS_ACCESS_KEY_ID = 'your access key',
... AWS_SECRET_ACCESS_KEY = 'your secret key',
... AWS_STORAGE_BUCKET_NAME = 'your bucket name',
... S3UTILS_DEBUG_LEVEL = 1,  # change to 0 for less verbose output
... )
>>> s3utils.chmod("path/to/file","private")
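>>> # The default acl is public-read, so omitting it makes the file public again.
>>> # (A sketch based on the default documented above.)
>>> s3utils.chmod("path/to/file")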
connect()

Establishes the connection. This is normally done automatically.

connect_cloudfront()

Connects to CloudFront, which gives more control than S3 alone. This is done automatically for you.

cp(*args, **kwargs)

Copies a file or folder from the local filesystem to S3.

Parameters:

local_path : string

Path to the file or folder. To copy only the contents of the folder, add /* to the end of the folder name (shown at the end of the examples below).

target_path : string

Target path on the S3 bucket.

acl : string, optional

File permissions on S3. Default is public-read.

options:
  • private: Owner gets FULL_CONTROL. No one else has any access rights.
  • public-read: Owner gets FULL_CONTROL and the anonymous principal is granted READ access.
  • public-read-write: Owner gets FULL_CONTROL and the anonymous principal is granted READ and WRITE access.
  • authenticated-read: Owner gets FULL_CONTROL and any principal authenticated as a registered Amazon S3 user is granted READ access.

del_after_upload : boolean, optional

Deletes the local file after uploading, which is effectively like moving the file. You can use s3utils.mv instead of s3utils.cp to move files from local to S3; it simply sets this flag to True. Default is False.

overwrite : boolean, optional

Overwrites files on S3 if set to True. Default is True.

invalidate : boolean, optional

Invalidates the CDN (a.k.a. distribution) cache if the file already exists on S3. Default is False. Note that invalidation might take up to 15 minutes to take effect. It is easier and faster to use a cache buster to grab the latest version of your file from the CDN than to invalidate.

Examples

>>> from s3utils import S3utils
>>> s3utils = S3utils(
... AWS_ACCESS_KEY_ID = 'your access key',
... AWS_SECRET_ACCESS_KEY = 'your secret key',
... AWS_STORAGE_BUCKET_NAME = 'your bucket name',
... S3UTILS_DEBUG_LEVEL = 1,  # change to 0 for less verbose output
... )
>>> s3utils.cp("path/to/folder","/test/")
copying /path/to/myfolder/test2.txt to test/myfolder/test2.txt
copying /path/to/myfolder/test.txt to test/myfolder/test.txt
copying /path/to/myfolder/hoho/photo.JPG to test/myfolder/hoho/photo.JPG
copying /path/to/myfolder/hoho/haha/ff to test/myfolder/hoho/haha/ff
>>> # When overwrite is set to False:
>>> s3utils.cp("path/to/folder","/test/", overwrite=False)
test/myfolder/test2.txt already exist. Not overwriting.
test/myfolder/test.txt already exist. Not overwriting.
test/myfolder/hoho/photo.JPG already exist. Not overwriting.
test/myfolder/hoho/haha/ff already exist. Not overwriting.
>>> # To overwrite the files on S3 and invalidate the CDN (cloudfront) cache so the new file goes on CDN:
>>> s3utils.cp("path/to/folder","/test/", overwrite=True, invalidate=True)
copying /path/to/myfolder/test2.txt to test/myfolder/test2.txt
copying /path/to/myfolder/test.txt to test/myfolder/test.txt
copying /path/to/myfolder/hoho/photo.JPG to test/myfolder/hoho/photo.JPG
copying /path/to/myfolder/hoho/haha/ff to test/myfolder/hoho/haha/ff
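>>> # To copy only the contents of the folder (note the /*), as described above.
>>> # A sketch: the output would follow the same pattern as the runs above.
>>> s3utils.cp("path/to/folder/*", "/test/")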
cp_cropduster_image(*args, **kwargs)

Deals with saving Cropduster images to S3.

disconnect()

Closes the connection. This is normally done automatically, but if you started the connection manually with the connect() method, use this to close it.
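
If you open the connection yourself, pair connect() with disconnect(). A minimal sketch, using only methods documented on this page:

>>> from s3utils import S3utils
>>> s3utils = S3utils(
... AWS_ACCESS_KEY_ID = 'your access key',
... AWS_SECRET_ACCESS_KEY = 'your secret key',
... AWS_STORAGE_BUCKET_NAME = 'your bucket name',
... )
>>> s3utils.connect()          # normally implicit; opened explicitly here
>>> print s3utils.ls("test/")  # any number of operations in between
>>> s3utils.disconnect()       # close the manually opened connection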

get_grants(target_file, all_grant_data)

Returns the grant permission, grant owner, grant owner email and grant id as a list. It needs k.key to be set to a key (file path) on Amazon before running. Note that Amazon returns a list of grants for each file.

options:
  • private: Owner gets FULL_CONTROL. No one else has any access rights.
  • public-read: Owner gets FULL_CONTROL and the anonymous principal is granted READ access.
  • public-read-write: Owner gets FULL_CONTROL and the anonymous principal is granted READ and WRITE access.
  • authenticated-read: Owner gets FULL_CONTROL and any principal authenticated as a registered Amazon S3 user is granted READ access.
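
In practice you would usually consume this grant data through ll with all_grant_data=True rather than calling get_grants directly (an assumption based on the matching parameter name; a minimal sketch reusing the setup from the other examples):

>>> detailed = s3utils.ll("/test/", all_grant_data=True)
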
invalidate(*args, **kwargs)

Invalidates the CDN (distribution) cache for a certain file or files. This might take up to 15 minutes to be effective.

You can check for the invalidation status using check_invalidation_request.

Examples

>>> from s3utils import S3utils
>>> s3utils = S3utils(
... AWS_ACCESS_KEY_ID = 'your access key',
... AWS_SECRET_ACCESS_KEY = 'your secret key',
... AWS_STORAGE_BUCKET_NAME = 'your bucket name',
... S3UTILS_DEBUG_LEVEL = 1,  # change to 0 for less verbose output
... )
>>> aa = s3utils.invalidate("test/myfolder/hoho/photo.JPG")
>>> print aa
('your distro id', u'your request id')
>>> invalidation_request_id = aa[1]
>>> bb = s3utils.check_invalidation_request(*aa)
>>> for inval in bb:
...     print 'Object: %s, ID: %s, Status: %s' % (inval, inval.id, inval.status)
ll(folder='', begin_from_file='', num=-1, all_grant_data=False)

Gets the list of files and permissions from S3

Parameters:

folder : string

Path to the folder on S3

num : integer, optional

Number of results to return. By default it returns all results.

begin_from_file : string, optional

Which file on S3 to start from. This is useful when you are iterating over lists of files and need to paginate the results by starting the listing from a certain file and fetching a certain num (number) of files.

all_grant_data : Boolean, optional

More detailed file permission data will be returned.

Examples

>>> from s3utils import S3utils
>>> s3utils = S3utils(
... AWS_ACCESS_KEY_ID = 'your access key',
... AWS_SECRET_ACCESS_KEY = 'your secret key',
... AWS_STORAGE_BUCKET_NAME = 'your bucket name',
... S3UTILS_DEBUG_LEVEL = 1,  # change to 0 for less verbose output
... )
>>> import json
>>> # We use json.dumps to print the results in a more readable form:
>>> my_folder_stuff = s3utils.ll("/test/")
>>> print json.dumps(my_folder_stuff, indent=2)
{
  "test/myfolder/": [
    {
      "name": "owner's name", 
      "permission": "FULL_CONTROL"
    }
  ], 
  "test/myfolder/em/": [
    {
      "name": "owner's name", 
      "permission": "FULL_CONTROL"
    }
  ], 
  "test/myfolder/hoho/": [
    {
      "name": "owner's name", 
      "permission": "FULL_CONTROL"
    }
  ], 
  "test/myfolder/hoho/.DS_Store": [
    {
      "name": "owner's name", 
      "permission": "FULL_CONTROL"
    }, 
    {
      "name": null, 
      "permission": "READ"
    }
  ], 
  "test/myfolder/hoho/haha/": [
    {
      "name": "owner's name", 
      "permission": "FULL_CONTROL"
    }
  ], 
  "test/myfolder/hoho/haha/ff": [
    {
      "name": "owner's name", 
      "permission": "FULL_CONTROL"
    }, 
    {
      "name": null, 
      "permission": "READ"
    }
  ], 
  "test/myfolder/hoho/photo.JPG": [
    {
      "name": "owner's name", 
      "permission": "FULL_CONTROL"
    }, 
    {
      "name": null, 
      "permission": "READ"
    }
  ]
}
ls(*args, **kwargs)

Gets the list of file names (keys) in an S3 folder.

Parameters:

folder : string

Path to the folder on S3

num : integer, optional

Number of results to return. By default it returns all results.

begin_from_file : string, optional

Which file on S3 to start from. This is useful when you are iterating over lists of files and need to paginate the results by starting the listing from a certain file and fetching a certain num (number) of files (see the pagination sketch at the end of the example below).

Examples

>>> from s3utils import S3utils
>>> s3utils = S3utils(
... AWS_ACCESS_KEY_ID = 'your access key',
... AWS_SECRET_ACCESS_KEY = 'your secret key',
... AWS_STORAGE_BUCKET_NAME = 'your bucket name',
... S3UTILS_DEBUG_LEVEL = 1,  # change to 0 for less verbose output
... )
>>> print s3utils.ls("test/")
[u'test/myfolder/', u'test/myfolder/em/', u'test/myfolder/hoho/', u'test/myfolder/hoho/.DS_Store', u'test/myfolder/hoho/haha/', u'test/myfolder/hoho/haha/ff', u'test/myfolder/hoho/haha/photo.JPG']
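>>> # Pagination sketch: fetch a page, then resume from its last key.
>>> # (Assumption: begin_from_file resumes the listing from the named key;
>>> # if the boundary key is included, skip the first item of the next page.)
>>> first_page = s3utils.ls("test/", num=3)
>>> second_page = s3utils.ls("test/", num=3, begin_from_file=first_page[-1])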
mkdir(*args, **kwargs)

Creates a folder on S3.

Examples

>>> from s3utils import S3utils
>>> s3utils = S3utils(
... AWS_ACCESS_KEY_ID = 'your access key',
... AWS_SECRET_ACCESS_KEY = 'your secret key',
... AWS_STORAGE_BUCKET_NAME = 'your bucket name',
... S3UTILS_DEBUG_LEVEL = 1,  # change to 0 for less verbose output
... )
>>> s3utils.mkdir("path/to/my_folder")
Making directory: path/to/my_folder
mv(*args, **kwargs)

Moves the file to S3 and deletes the local copy.

It is basically s3utils.cp with del_after_upload=True.
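
In other words (a sketch of the stated equivalence; the two calls are not meant to be run back to back):

>>> s3utils.mv("path/to/folder", "/test/")
>>> # ...is equivalent to:
>>> s3utils.cp("path/to/folder", "/test/", del_after_upload=True)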

Examples

>>> from s3utils import S3utils
>>> s3utils = S3utils(
... AWS_ACCESS_KEY_ID = 'your access key',
... AWS_SECRET_ACCESS_KEY = 'your secret key',
... AWS_STORAGE_BUCKET_NAME = 'your bucket name',
... S3UTILS_DEBUG_LEVEL = 1,  # change to 0 for less verbose output
... )
>>> s3utils.mv("path/to/folder","/test/")
moving /path/to/myfolder/test2.txt to test/myfolder/test2.txt
moving /path/to/myfolder/test.txt to test/myfolder/test.txt
moving /path/to/myfolder/hoho/photo.JPG to test/myfolder/hoho/photo.JPG
moving /path/to/myfolder/hoho/haha/ff to test/myfolder/hoho/haha/ff

Author

Erasmose (Sep Dehpour)