
I am using PHP with the Amazon Payments web service. I'm having problems with some of my requests. Amazon is returning an error as it should; however, the way it goes about it is giving me problems.

Amazon returns XML data with a message about the error, but it also throws an HTTP 400 (or even 404 sometimes). This makes file_get_contents() throw an error right away and I have no way to get the content. I've tried using cURL also, but never got it to give me back a response.

I really need a way to get the XML returned regardless of HTTP status code. It has an important "message" element that gives me clues as to why my billing requests are failing.

Does anyone have a cURL example or otherwise that will allow me to do this? All my requests currently use file_get_contents() but I am not opposed to changing them. Everyone else seems to think cURL is the "right" way.



You have to define a custom stream context (the 3rd argument of file_get_contents()) with the ignore_errors option turned on. With that option set, file_get_contents() returns the response body even when the server replies with an error status, instead of raising a warning.
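A minimal sketch of that context ($url is a placeholder for your Amazon Payments endpoint, and the error-document structure is only assumed to be XML as described in the question). $http_response_header is populated by the HTTP wrapper, so you can recover the status code as well as the body:

```php
<?php
// ignore_errors makes file_get_contents() return the response body
// even when the server answers with a 4xx/5xx status.
$context = stream_context_create([
    'http' => [
        'ignore_errors' => true,
    ],
]);

$url = 'https://example.com/amazon-endpoint'; // placeholder
$xml = file_get_contents($url, false, $context);

// The HTTP wrapper fills $http_response_header; its first entry is
// the status line, e.g. "HTTP/1.1 400 Bad Request".
preg_match('{\s(\d{3})\s}', $http_response_header[0], $m);
$statusCode = (int) $m[1];

if ($statusCode >= 400) {
    // The error XML is still in $xml; parse out its "message" element.
    $doc = simplexml_load_string($xml);
}
```
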

Thursday, September 15, 2022

Many of these are intrinsically useful with REST-style API usage. For example:

  • 200 (OK): You asked for a resource. Here it is!

  • 201 (Created): You asked me to make a new resource. I did! Here's where you can go to ask me for it next time.

  • 202 (Accepted): You asked me to do something, but it's going to take a while, so don't wait up. Here's where you can go to check up on the status.

  • 300 (Multiple Choices): You asked for something, but you weren't specific enough. Which one of these did you mean?

  • 301 (Moved Permanently): You asked for something, but it's somewhere else now. Here's where it went.

  • 302 (Found): You asked for something, but it's somewhere else for the moment. Here it is.

  • 304 (Not Modified): You asked for something before this, but it hasn't changed since the last time you asked me.

  • 400 (Bad Request): Something is wrong about what you asked me to do. Fix what you said and try again.

  • 401 (Unauthorized): I need you to identify yourself before I can finish this request. [Note: This is one of the more unfortunately named status codes. It should really be titled Unauthenticated; 403 is closer to Unauthorized.]

  • 403 (Forbidden): You asked for something you're not allowed to have.

  • 404 (Not Found): You asked for a resource, but there isn't one that matches your description.

  • 500 (Internal Server Error): Something went wrong on my end, so I can't give you what you asked for right now. Sorry about that.

  • 501 (Not Implemented): I don't support that kind of request right now.

  • 503 (Service Unavailable): I'm not able to respond to requests right now.
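To make the list concrete, here is an illustrative (not exhaustive) client-side dispatch over those code families; the function name and return strings are invented for this example:

```php
<?php
// Illustrative only: map a status code to a rough client action.
function describeStatus(int $code): string
{
    switch (true) {
        case $code >= 200 && $code < 300:
            return 'success';
        case $code === 301 || $code === 302:
            return 'redirect: follow the Location header';
        case $code === 304:
            return 'not modified: reuse the cached copy';
        case $code === 401:
            return 'authenticate, then retry';
        case $code === 403:
            return 'forbidden: do not retry';
        case $code >= 400 && $code < 500:
            return 'client error: fix the request';
        case $code >= 500:
            return 'server error: retry later';
        default:
            return 'informational/other';
    }
}

echo describeStatus(404); // "client error: fix the request"
```
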

Tuesday, November 22, 2022

You are calling recv() in a loop until the socket disconnects or fails (and writing the received data to your stream the wrong way), storing all of the raw data into your char* buffer. That is not the correct way to read an HTTP response, especially if HTTP keep-alives are used (in which case no disconnect will occur at the end of the response). You must follow the rules outlined in RFC 2616. Namely:

  1. Read until the "\r\n\r\n" sequence is encountered. This terminates the response headers. Do not read any more bytes past that yet.

  2. Analyze the received headers, per the rules in RFC 2616 Section 4.4. They tell you the actual format of the remaining response data.

  3. Read the remaining data, if any, per the format discovered in #2.

  4. Check the received headers for the presence of a Connection: close header if the response is using HTTP 1.1, or the lack of a Connection: keep-alive header if the response is using HTTP 0.9 or 1.0. If detected, close your end of the socket connection because the server is closing its end. Otherwise, keep the connection open and re-use it for subsequent requests (unless you are done using the connection, in which case do close it).

  5. Process the received data as needed.

In short, you need to do something more like this instead (pseudo code):

string headers[];
byte data[];

string statusLine = read a CRLF-delimited line;
int statusCode = extract from status line;
string responseVersion = extract from status line;

do
{
    string header = read a CRLF-delimited line;
    if (header == "") break;
    add header to headers list;
}
while (true);

if ( !((statusCode in [1xx, 204, 304]) || (request was "HEAD")) )
{
    if (headers["Transfer-Encoding"] ends with "chunked")
    {
        do
        {
            string chunk = read a CRLF-delimited line;
            int chunkSize = extract from chunk line;
            if (chunkSize == 0) break;

            read exactly chunkSize number of bytes into data storage;

            read and discard until a CRLF has been read;
        }
        while (true);

        do
        {
            string header = read a CRLF-delimited line; // trailing headers
            if (header == "") break;
            add header to headers list;
        }
        while (true);
    }
    else if (headers["Content-Length"] is present)
    {
        read exactly Content-Length number of bytes into data storage;
    }
    else if (headers["Content-Type"] begins with "multipart/")
    {
        string boundary = extract from Content-Type header;
        read into data storage until terminating boundary has been read;
    }
    else
    {
        read bytes into data storage until disconnected;
    }
}

if (!disconnected)
{
    if (responseVersion == "HTTP/1.1")
    {
        if (headers["Connection"] == "close")
            close connection;
    }
    else
    {
        if (headers["Connection"] != "keep-alive")
            close connection;
    }
}

check statusCode for errors;
process data contents, per info in headers list;
Thursday, September 1, 2022

Curl has a specific option, --write-out, for this:

$ curl -o /dev/null --silent --head --write-out '%{http_code}\n' <url>
  • -o /dev/null throws away the usual output
  • --silent throws away the progress meter
  • --head makes a HEAD HTTP request, instead of GET
  • --write-out '%{http_code}\n' prints the required status code

To wrap this up in a complete Bash script:

while read LINE; do
  curl -o /dev/null --silent --head --write-out "%{http_code} $LINE\n" "$LINE"
done < url-list.txt

(Eagle-eyed readers will notice that this uses one curl process per URL, which imposes fork and TCP connection penalties. It would be faster if multiple URLs were combined in a single curl, but there isn't space to write out the monstrous repetition of options that curl requires to do this.)
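The same status check can be done from PHP's curl extension, which also bears on the question at the top of this thread: cURL "never giving back a response" is usually a missing CURLOPT_RETURNTRANSFER. A sketch, with the URL list as a placeholder:

```php
<?php
// Sketch only: $urls is a placeholder list.
$urls = ['https://example.com/'];

foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_NOBODY         => true, // HEAD request, like curl --head
        CURLOPT_RETURNTRANSFER => true, // return the response as a string
                                        // instead of printing it
    ]);
    curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    echo "$code $url\n";
}
```
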

Sunday, October 16, 2022


First of all, you need to set up an onFormSubmit trigger.

Mogsdad has a great answer for this:

  1. Choose Edit > Current project's triggers. You see a panel with the message No triggers set up. Click here to add one now.
  2. Click the link.
  3. Under Run, select the function you want executed by the trigger. (That's getResponse(), in this case.)
  4. Under Events, select From Spreadsheet.
  5. From the next drop-down list, select On form submit.
  6. Click Save.

Note: The trigger needs to be set up on the spreadsheet for this.

Get Response Values:

Once you have your trigger set up properly, all you need is:

function getResponse(e) {
  var response = e.values;
  // ... use the submitted values here ...
}

Here we're using event objects to get the values submitted in the form response. e.values is the array of values being submitted to the spreadsheet. In the example I tested, the array looks like this:

[25/07/2019 10:02:36, Option 1, Answer 2, Option 3]


  • Setting up an onFormSubmit trigger
  • Installable Triggers
  • Event Objects
Friday, September 16, 2022