When running a long query from PHP, how can I kill the query if the user presses stop in their browser?

Take into consideration that I cannot call any other PHP functions because PHP is blocked while waiting for MySQL.

Also I cannot make any more requests to the server (via Ajax) because of session locking.

So one solution could be:

  • ignore user abort
  • run the long query in the background and have PHP check every 100 ms whether it has finished
  • get the pid from the query
  • if the user aborts, kill the pid
  • else return the result when finished

The two things that I don't know how to do are:

  • run a non-blocking (background) query
  • get the pid of a query

 Answers

5

For those who are interested, here is what I used:

<?php
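// Note: MYSQLI_ASYNC and mysqli_poll() require the mysqlnd driver
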
// Connection to query on
$query_con = mysqli_connect($host, $user, $password, $name, $port);

// Connection to kill on
$kill_con = mysqli_connect($host, $user, $password, $name, $port);
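// (the KILL has to go over a second connection: the first one is busy
// with the async query until it's reaped)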

// Start the query
$query_con->query($slow_query, MYSQLI_ASYNC);

// Get the MySQL thread id (the "PID" to kill)
$thread_id = $query_con->thread_id;

// Ignore user abort so we can kill the query
ignore_user_abort(true);

do {
    // Poll the async connection (wait up to 0.5 s per iteration)
    $links = $errors = $reject = array($query_con);
    $poll = mysqli_poll($links, $errors, $reject, 0, 500000);

    // If the client aborted, kill the running query via the second connection
    if (connection_aborted() && mysqli_kill($kill_con, $thread_id)) {
        die();
    }
} while (!$poll);

// Not aborted, so do stuff with the result
$result = $query_con->reap_async_query();
if (is_object($result)) {
    // Select
    while ($row = $result->fetch_object()) {
        var_dump($row);
    }
} else {
    // Insert/update/delete
    var_dump($result);
}
Saturday, December 24, 2022
 
geraj
 
2

I would say just build it yourself. You can set it up like this:

$query = "INSERT INTO x (a,b,c) VALUES ";
foreach ($arr as $item) {
  $query .= "('".$item[0]."','".$item[1]."','".$item[2]."'),";
}
$query = rtrim($query, ",");  // remove the trailing comma
// execute the query here

Don't forget to escape quotes where necessary.

Also, be careful that there's not too much data being sent at once. You may have to execute it in chunks instead of all at once.
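
For illustration, here's a minimal sketch of that chunked approach, assuming a mysqli connection in $mysqli and the same hypothetical table x (a, b, c) as above; the chunk size of 1000 is an arbitrary choice:

// Insert the rows in chunks, escaping every value
$chunk_size = 1000; // arbitrary; keep each statement under max_allowed_packet
foreach (array_chunk($arr, $chunk_size) as $chunk) {
    $values = array();
    foreach ($chunk as $item) {
        $values[] = "('" . $mysqli->real_escape_string($item[0]) . "','"
                         . $mysqli->real_escape_string($item[1]) . "','"
                         . $mysqli->real_escape_string($item[2]) . "')";
    }
    $mysqli->query("INSERT INTO x (a,b,c) VALUES " . implode(",", $values));
}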

Saturday, November 5, 2022
2

The function you're looking for is find_in_set:

 select * from ... where find_in_set($word, pets)

For multi-word queries, you'll need to test each word and AND (or OR) the tests:

  where find_in_set($word1, pets) AND find_in_set($word2, pets) etc 
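
As a concrete sketch (the connection $mysqli, the table name owners, and the sample words here are all made up for illustration), the multi-word AND version could be built like this:

// 'owners' is a made-up table name; 'pets' is the comma-separated column from the answer
$words = array('cat', 'dog');
$conditions = array();
foreach ($words as $word) {
    $conditions[] = "FIND_IN_SET('" . $mysqli->real_escape_string($word) . "', pets)";
}
$result = $mysqli->query("SELECT * FROM owners WHERE " . implode(" AND ", $conditions));
while ($row = $result->fetch_object()) {
    var_dump($row);
}

Swap the AND for OR to match any of the words instead of all of them.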
Wednesday, August 17, 2022
1

I researched this issue for a while. Many people recommend storing the BLOB, with only a primary key, in a separate table, and storing the blob's metadata in another table with a foreign key to the blob table. This improves performance considerably.
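
As an illustration of that layout (all table and column names here are hypothetical, not from the question), the blob and its metadata could be split like this:

// Hypothetical layout: blob data isolated in its own table, keyed only by the primary key
$mysqli->query("CREATE TABLE file_blob (
    id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    data LONGBLOB NOT NULL
)");

// Metadata lives in a separate table with a foreign key to the blob row,
// so metadata queries never have to read the wide blob rows
$mysqli->query("CREATE TABLE file_meta (
    id       INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    blob_id  INT UNSIGNED NOT NULL,
    filename VARCHAR(255) NOT NULL,
    mime     VARCHAR(100) NOT NULL,
    created  DATETIME NOT NULL,
    FOREIGN KEY (blob_id) REFERENCES file_blob(id)
)");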

Saturday, August 13, 2022
 
dap.tci
 
4

The connection to MySQL can be disrupted in a number of ways, but I would recommend revisiting Mario Carrion's answer, since it's very sound.

It seems likely that the connection is disrupted because it's being shared with the other processes, causing communication protocol errors...

...this could easily happen if the connection pool is process-bound, which I believe it is in ActiveRecord, meaning that the same connection could be "checked out" a number of times simultaneously in different processes.

The solution is that database connections must be established only AFTER the fork statement in the application server.

I'm not sure which server you're using, but if you're using a warmup feature - don't.

If you're running any database calls before the first network request - don't.

Either of these actions could potentially initialize the connection pool before forking occurs, causing the MySQL connection pool to be shared between processes while the locking system isn't.

I'm not saying this is the only possible reason for the issue; as stated by @sloth-jr, there are other options... but most of them seem less likely, judging by your description.

Sidenote:

I ran select * from information_schema.processlist; and I got 36 rows back. Does this mean my app servers were running 36 connections at that moment? or can a process be multiple connections?

Each process could hold a number of connections. In your case, you might have up to 500X36 connections. (see edit)

In general, the number of connections in the pool can often be the same as the number of threads in each process (it shouldn't be less than the number of threads, or contention will slow you down). Sometimes it's good to add a few more, depending on your application.

EDIT:

I apologize for ignoring the fact that the process count was referencing the MySQL data and not the application data.

The process count you showed is the MySQL server data, which seems to use a thread-per-connection IO scheme. The "Process" data actually counts active connections, not actual processes or threads (although it should translate to the number of threads as well).

This means that out of a possible 500 connections per application process (i.e., if you're using 8 processes for your application, that would be 8X500=4,000 allowed connections), your application has only opened 36 connections so far.

Monday, November 14, 2022