**S3utils deals with files on an Amazon S3 bucket.**
In your settings file:
S3UTILS_DEBUG_LEVEL = 1
AWS_ACCESS_KEY_ID = 'your access key'
AWS_SECRET_ACCESS_KEY = 'your secret key'
AWS_STORAGE_BUCKET_NAME = 'your bucket name'
Then in your code:
from s3utils import S3utils
s3utils = S3utils()
Alternatively, pass the settings directly in your code:
from s3utils import S3utils
s3utils = S3utils(
AWS_ACCESS_KEY_ID = 'your access key',
AWS_SECRET_ACCESS_KEY = 'your secret key',
AWS_STORAGE_BUCKET_NAME = 'your bucket name',
S3UTILS_DEBUG_LEVEL = 1,  # change it to 0 for less verbose output
)
Methods
chmod
Sets permissions for a file on S3.
Parameters:
    target_file : string
    acl : string, optional
Examples
>>> from s3utils import S3utils
>>> s3utils = S3utils(
... AWS_ACCESS_KEY_ID = 'your access key',
... AWS_SECRET_ACCESS_KEY = 'your secret key',
... AWS_STORAGE_BUCKET_NAME = 'your bucket name',
... S3UTILS_DEBUG_LEVEL = 1,  # change it to 0 for less verbose output
... )
>>> s3utils.chmod("path/to/file","private")
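The acl argument presumably accepts Amazon's canned ACL strings ("private", "public-read", "public-read-write", "authenticated-read"); a hedged sketch of making a file public:
>>> s3utils.chmod("path/to/file", "public-read")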
connect
Establishes the connection. This is normally done automatically.
connect_cloudfront
Connects to CloudFront, which gives you more control than S3 alone. This is done automatically for you.
cp
Copies a file or folder from the local machine to S3.
Parameters:
    local_path : string
    target_path : string
    acl : string, optional
    del_after_upload : boolean, optional
    overwrite : boolean, optional
    invalidate : boolean, optional
Examples
>>> from s3utils import S3utils
>>> s3utils = S3utils(
... AWS_ACCESS_KEY_ID = 'your access key',
... AWS_SECRET_ACCESS_KEY = 'your secret key',
... AWS_STORAGE_BUCKET_NAME = 'your bucket name',
... S3UTILS_DEBUG_LEVEL = 1,  # change it to 0 for less verbose output
... )
>>> s3utils.cp("/path/to/myfolder", "/test/")
copying /path/to/myfolder/test2.txt to test/myfolder/test2.txt
copying /path/to/myfolder/test.txt to test/myfolder/test.txt
copying /path/to/myfolder/hoho/photo.JPG to test/myfolder/hoho/photo.JPG
copying /path/to/myfolder/hoho/haha/ff to test/myfolder/hoho/haha/ff
>>> # When overwrite is set to False:
>>> s3utils.cp("/path/to/myfolder", "/test/", overwrite=False)
test/myfolder/test2.txt already exist. Not overwriting.
test/myfolder/test.txt already exist. Not overwriting.
test/myfolder/hoho/photo.JPG already exist. Not overwriting.
test/myfolder/hoho/haha/ff already exist. Not overwriting.
>>> # To overwrite the files on S3 and invalidate the CDN (CloudFront) cache so the new files go out on the CDN:
>>> s3utils.cp("/path/to/myfolder", "/test/", overwrite=True, invalidate=True)
copying /path/to/myfolder/test2.txt to test/myfolder/test2.txt
copying /path/to/myfolder/test.txt to test/myfolder/test.txt
copying /path/to/myfolder/hoho/photo.JPG to test/myfolder/hoho/photo.JPG
copying /path/to/myfolder/hoho/haha/ff to test/myfolder/hoho/haha/ff
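Passing del_after_upload=True deletes the local copy after each upload; per the mv docs below, mv is just cp with this flag set:
>>> s3utils.cp("/path/to/myfolder", "/test/", del_after_upload=True)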
cp_cropduster_image
Deals with saving cropduster images to S3.
disconnect
Closes the connection. This is normally done automatically, but you need to call it to close the connection if you manually opened one with the connect() method.
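A minimal sketch of pairing a manual connect() with a manual disconnect() (the calls in between are placeholders):
>>> s3utils = S3utils()
>>> s3utils.connect()
>>> # ... do your S3 operations here ...
>>> s3utils.disconnect()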
get_grants
Returns the grant permission, grant owner, grant owner email, and grant ID as a list. You need to set k.key to a key (file path) on Amazon before running this. Note that Amazon returns a list of grants for each file.
invalidate
Invalidates the CDN (distribution) cache for a certain file or files. This might take up to 15 minutes to take effect.
You can check the invalidation status using check_invalidation_request.
Examples
>>> from s3utils import S3utils
>>> s3utils = S3utils(
... AWS_ACCESS_KEY_ID = 'your access key',
... AWS_SECRET_ACCESS_KEY = 'your secret key',
... AWS_STORAGE_BUCKET_NAME = 'your bucket name',
... S3UTILS_DEBUG_LEVEL = 1,  # change it to 0 for less verbose output
... )
>>> aa = s3utils.invalidate("test/myfolder/hoho/photo.JPG")
>>> print aa
('your distro id', u'your request id')
>>> invalidation_request_id = aa[1]
>>> bb = s3utils.check_invalidation_request(*aa)
>>> for inval in bb:
... print 'Object: %s, ID: %s, Status: %s' % (inval, inval.id, inval.status)
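If you need to block until the invalidation finishes, a hedged sketch of polling (CloudFront reports an invalidation status of "InProgress" until it becomes "Completed"):
>>> import time
>>> while any(inval.status != 'Completed'
...           for inval in s3utils.check_invalidation_request(*aa)):
...     time.sleep(60)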
ll
Gets the list of files and permissions from S3.
Parameters:
    folder : string
    num : integer, optional
    begin_from_file : string, optional
    all_grant_data : boolean, optional
Examples
>>> from s3utils import S3utils
>>> s3utils = S3utils(
... AWS_ACCESS_KEY_ID = 'your access key',
... AWS_SECRET_ACCESS_KEY = 'your secret key',
... AWS_STORAGE_BUCKET_NAME = 'your bucket name',
... S3UTILS_DEBUG_LEVEL = 1,  # change it to 0 for less verbose output
... )
>>> import json
>>> # We use json.dumps to print the results in a more readable form:
>>> my_folder_stuff = s3utils.ll("/test/")
>>> print json.dumps(my_folder_stuff, indent=2)
{
"test/myfolder/": [
{
"name": "owner's name",
"permission": "FULL_CONTROL"
}
],
"test/myfolder/em/": [
{
"name": "owner's name",
"permission": "FULL_CONTROL"
}
],
"test/myfolder/hoho/": [
{
"name": "owner's name",
"permission": "FULL_CONTROL"
}
],
"test/myfolder/hoho/.DS_Store": [
{
"name": "owner's name",
"permission": "FULL_CONTROL"
},
{
"name": null,
"permission": "READ"
}
],
"test/myfolder/hoho/haha/": [
{
"name": "owner's name",
"permission": "FULL_CONTROL"
}
],
"test/myfolder/hoho/haha/ff": [
{
"name": "owner's name",
"permission": "FULL_CONTROL"
},
{
"name": null,
"permission": "READ"
}
],
"test/myfolder/hoho/photo.JPG": [
{
"name": "owner's name",
"permission": "FULL_CONTROL"
},
{
"name": null,
"permission": "READ"
}
]
}
ls
Gets the list of file names (keys) in an S3 folder.
Parameters:
    folder : string
    num : integer, optional
    begin_from_file : string, optional
Examples
>>> from s3utils import S3utils
>>> s3utils = S3utils(
... AWS_ACCESS_KEY_ID = 'your access key',
... AWS_SECRET_ACCESS_KEY = 'your secret key',
... AWS_STORAGE_BUCKET_NAME = 'your bucket name',
... S3UTILS_DEBUG_LEVEL = 1,  # change it to 0 for less verbose output
... )
>>> print s3utils.ls("test/")
[u'test/myfolder/', u'test/myfolder/em/', u'test/myfolder/hoho/', u'test/myfolder/hoho/.DS_Store', u'test/myfolder/hoho/haha/', u'test/myfolder/hoho/haha/ff', u'test/myfolder/hoho/photo.JPG']
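The num and begin_from_file parameters presumably page through large folders by limiting the count and setting the starting key; a hypothetical call fetching two keys starting from a known key:
>>> print s3utils.ls("test/", num=2, begin_from_file="test/myfolder/hoho/.DS_Store")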
mkdir
Creates a folder on S3.
Examples
>>> from s3utils import S3utils
>>> s3utils = S3utils(
... AWS_ACCESS_KEY_ID = 'your access key',
... AWS_SECRET_ACCESS_KEY = 'your secret key',
... AWS_STORAGE_BUCKET_NAME = 'your bucket name',
... S3UTILS_DEBUG_LEVEL = 1,  # change it to 0 for less verbose output
... )
>>> s3utils.mkdir("path/to/my_folder")
Making directory: path/to/my_folder
mv
Moves the file to S3 and deletes the local copy.
It is essentially s3utils.cp with del_after_upload=True.
Examples
>>> from s3utils import S3utils
>>> s3utils = S3utils(
... AWS_ACCESS_KEY_ID = 'your access key',
... AWS_SECRET_ACCESS_KEY = 'your secret key',
... AWS_STORAGE_BUCKET_NAME = 'your bucket name',
... S3UTILS_DEBUG_LEVEL = 1,  # change it to 0 for less verbose output
... )
>>> s3utils.mv("/path/to/myfolder", "/test/")
moving /path/to/myfolder/test2.txt to test/myfolder/test2.txt
moving /path/to/myfolder/test.txt to test/myfolder/test.txt
moving /path/to/myfolder/hoho/photo.JPG to test/myfolder/hoho/photo.JPG
moving /path/to/myfolder/hoho/haha/ff to test/myfolder/hoho/haha/ff