
A newbie question, but I have googled a bit and can't seem to find any solution.

I want to allow users to upload files directly to S3, not via my server first. By doing so, is there any way the files can be checked for size limits and permitted types before actually being uploaded to S3? Preferably with JavaScript, not Flash.

 Answers

3

If you are talking about a security problem (people uploading huge files to your bucket): yes, you CAN restrict file size with browser-based uploads to S3.

Here is an example of the "policy" variable, where "content-length-range" is the key point.

"expiration": "'.date('Y-m-dTG:i:sZ', time()+10).'",
"conditions": [
    {"bucket": "xxx"},
    {"acl": "public-read"},
    ["starts-with","xxx",""],
    {"success_action_redirect": "xxx"},
    ["starts-with", "$Content-Type", "image/jpeg"],
    ["content-length-range", 0, 10485760]
]

In this case, if the uploaded file is larger than 10 MB, the upload request will be rejected by Amazon.

Of course, before starting the upload process, you should use JavaScript to check the file size and alert the user if it exceeds the limit.

getting file size in javascript
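A minimal sketch of that client-side pre-check (the names `MAX_BYTES`, `ALLOWED_TYPES`, and `validateFile` are mine, not from any library). Note this is only a courtesy for the user; Amazon still enforces the policy's `content-length-range` server-side:

```javascript
// Client-side pre-check before the browser-based S3 POST upload.
const MAX_BYTES = 10485760; // must match the policy's content-length-range
const ALLOWED_TYPES = ['image/jpeg']; // must match the policy's Content-Type condition

function validateFile(file) {
  // `file` is a File object from an <input type="file"> element
  if (file.size > MAX_BYTES) return 'File is too large';
  if (!ALLOWED_TYPES.includes(file.type)) return 'File type not allowed';
  return null; // file passes both checks
}

// In the browser you would call this from the file input's change handler:
// const error = validateFile(input.files[0]);
// if (error) alert(error);
```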

Saturday, November 12, 2022
2

To do this, I somehow need to list all files in an amazon bucket's folder, and find the one which has been added last.

S3's API isn't really optimized for sort-by-modified-date, so you'd need to call list_objects() and check each timestamp, always keeping track of the newest one until you get to the end of the list.

An automatic PHP script detects that a new file has been added and notifies clients

You'd need to write a long-running PHP CLI script that starts with:

while (true) { /*...*/ }

Maybe throw an occasional sleep(1) in there so that your CPU doesn't spike so badly, but you essentially need to sleep-and-poll, looping over all of the timestamps each time.

I've tried $s3->list_objects("mybucket");, but it returns the list of all objects inside the bucket, and I don't see an option to list only files inside the specified folder.

You'll want to set the prefix parameter in your list_objects() call.
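The sleep-and-poll idea above can be sketched like this. The function name `newestObject` is mine, and the loop is only illustrative; the actual list call depends on which SDK version you use (the prefix parameter restricts the listing to your folder):

```php
<?php
// Given the Contents entries from a list_objects() response, return whichever
// one has the latest LastModified timestamp (or null for an empty list).
function newestObject(array $contents): ?array {
    $newest = null;
    foreach ($contents as $obj) {
        if ($newest === null
            || strtotime($obj['LastModified']) > strtotime($newest['LastModified'])) {
            $newest = $obj;
        }
    }
    return $newest;
}

// Sketch of the long-running poll loop (commented out so this file is runnable):
// while (true) {
//     $resp = $s3->list_objects("mybucket", array('prefix' => 'myfolder/'));
//     $latest = newestObject($resp['Contents'] ?? []);
//     // ...notify clients if $latest is newer than the last object seen...
//     sleep(1); // avoid spiking the CPU between polls
// }
```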

Friday, August 26, 2022
 
1

To set an object tag, simply pass the Tagging element to the putObject() parameters. In your case, it'd be like this:

$result = $s3Client->putObject([
    'Bucket' => $bucket,
    'Key' => $assetFilename,
    'SourceFile' => $fileTmpPath,
    'Tagging' => 'category=tag1', // your tag here!
    'Metadata'   => array(
        'title' => $requestAr['title'],
        'description' => $requestAr['description']
    )
]);

Notice the tag is a simple key=value string. As a "thinking ahead" measure, I'd make the tags be category=tagValue, so you can categorize them and eventually add more tag categories. If you do tag1=true, it'll get messy quickly.
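Since the Tagging value is a URL-encoded query string, multiple tags can be built with PHP's http_build_query(). The tag names below are examples of mine:

```php
<?php
// Build a multi-tag Tagging string for putObject().
$tags = [
    'category' => 'tag1',
    'source'   => 'upload-form',
];
$tagging = http_build_query($tags);
// $tagging is now "category=tag1&source=upload-form", ready to pass as the
// 'Tagging' entry in the putObject() parameter array.
```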

  • putObject() reference
Friday, August 5, 2022
 
4

You are passing an array to exif_imagetype(); it expects a string with a filename. For your example, you should iterate through your array or use something like:

// Get type from first image
$detectedTypeImage1 = exif_imagetype($tmp_name_array[0]);
$detectedTypeImage2 = exif_imagetype($tmp_name_array[1]);

Then you are wrongly comparing an array with an int:

$size_array > 2097152

You could do it like the first example I gave, or loop through the array, like:

foreach($size_array as $imageSize){
    if($imageSize > 2097152){
        echo "type error SIZE";
        echo "<br>";
    }
}

Full example

/**
 * Reorder the uploaded files array to simplify the use of foreach
 *
 * @param array $file_post
 * @return array
 */
function reArrayFiles(&$file_post) {

    $file_ary = array();
    $file_count = count($file_post['name']);
    $file_keys = array_keys($file_post);

    for ($i=0; $i<$file_count; $i++) {
        foreach ($file_keys as $key) {
            $file_ary[$i][$key] = $file_post[$key][$i];
        }
    }

    return $file_ary;
}



// Array reArranged
$file_ary = reArrayFiles($_FILES['file_array']);

$allowedTypes = array(IMAGETYPE_PNG, IMAGETYPE_JPEG, IMAGETYPE_GIF);


// For each file...
$i = 1;
foreach($file_ary as $key => $file){

    if($file['error'] == 4){
        echo 'No file uploaded on field '.$i++.'.<br>';
        continue;
    }

    $errors = false;

    // Check type; if not allowed, tells you which one has the problem
    if(!in_array(exif_imagetype($file['tmp_name']), $allowedTypes)){
        echo '<span style="color:red">Filename '.$file['name'].' <b>type</b> is not allowed.</span><br>';
        $errors = true;
    }

    // Check size; if it exceeds the allowed size, tells you which one has the problem
    if($file['size'] > 2097152){
        echo '<span style="color:red">Filename '.$file['name'].' exceeds maximum allowed <b>size</b></span>.<br>';
        $errors = true;
    }


    // If we don't have errors let's upload files to the server
    if(!$errors){
        if (move_uploaded_file(
                $file['tmp_name'], "../upload/" . $file['name'])
        ){
            echo $file['name'] . " upload is complete<br>";

        }else{
            echo "Uploaded failed for " . $file['name'] . "<br>";
        }   
    }
    $i++;
}

Don't forget to use is_uploaded_file() too.
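A minimal sketch of that check, guarding the move with is_uploaded_file() so the script only handles files that really came through PHP's HTTP POST upload mechanism (the `$file` array here is a hardcoded stand-in for one entry of the rearranged array above):

```php
<?php
// Stand-in for one file entry; in the real script this comes from reArrayFiles().
$file = ['tmp_name' => '/tmp/example', 'name' => 'example.jpg'];

if (is_uploaded_file($file['tmp_name'])) {
    // Only reached for genuine POST uploads.
    move_uploaded_file($file['tmp_name'], '../upload/' . $file['name']);
} else {
    // Rejects paths that did not come from an upload (e.g. local files).
    echo 'Possible file upload attack: ' . $file['name'] . '<br>';
}
```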

Tuesday, October 18, 2022
4

It comes down to whether you want to use the REST API or the AWS SDKs to interact with S3.

In both cases, you need to prove your identity (authenticate/sign the request) unless the bucket is public.

a) If you are going with the REST API, to prove your identity you need to sign your request using AWS Signature Version 4 (the deprecated Version 2 also exists), which offers three methods (one of which you have listed):

  1. Authenticating Requests: Using the Authorization Header (AWS Signature Version 4)
  2. Authenticating Requests: Using Query Parameters (AWS Signature Version 4)
  3. Authenticating Requests: Browser-Based Uploads Using POST (AWS Signature Version 4)

b) If you are going to use the AWS SDKs, you should let the SDK handle the signing process. So the straightforward choice is to use an SDK to sign the request.

(Part of the question) It also seems painless compared to browser upload, since it doesn't require all the keys browser upload wants in the POST form.

In the code below, s3Client has already obtained your credentials, whether from an AWS CLI profile (if running locally) or an IAM role (in the case of EC2, Lambda, etc.):

string url = s3Client.GetPreSignedURL(request);
Tuesday, November 1, 2022
 