This post was imported from my old WordPress installation. If you notice rendering problems, broken links, or missing images, please just leave a comment here. Thanks.
MySQL servers have a mind of their own and sometimes just disappear. They drop a client's connection and leave it alone when the whole server is dying, when the client does something unexpected, and sometimes for no known reason at all.
Although any good client code could handle database errors, a failed execute call usually just crashes the script. Hardly anyone cares about error codes and their meaning (except perhaps "duplicate key" errors, where they're expected), and it's usually quite safe to die() on any database error. Usually - but not always.
In persistent environments - where a crash in the source code doesn't end the task but is caught by eval - errors live on if the database handle (usually $dbh) is cached. Such environments typically connect to the database server once, for the first query, and keep that connection until the end of the task's lifetime (which can be quite long if nothing crashes the webserver worker, usually Apache).
The first request (the running script) is killed by a 2006 MySQL server has gone away error and everything is fine, because the error has been handled. But the next call fails as well if no new connection is established, and scripts built for these environments usually don't (re)connect at startup - they use a connection cache to avoid continuous reconnects. A connect or an is-this-connection-still-alive check is an expensive operation, and there is no obvious reason to perform one when there is a clear error code (2006) for a lost MySQL connection.
Here's the problem: how do you clear the connection cache on a 2006 error without adding error-handling code after each and every execute?
DBD::mysql has a bug (in the current Debian version, which I'm forced to use; I think it has been fixed later on) that requires an explicit $sth->finish call after each statement, even though the DBI documentation on CPAN tells us not to do that. I tried to add a small subroutine as a statement callback, to be executed before the finish call (DBI doesn't allow any post-execution callbacks, but they'd be a great feature), but got the following trace output while testing:
!! ERROR: 2013 'Lost connection to MySQL server during query' (err#0)
<- execute= ( undef ) [1 items] at -e line 1
DBD::mysql::st execute failed: Lost connection to MySQL server during query at -e line 1.
!! ERROR: 2013 CLEARED by call to finish method

Great fail. The only place where errors could be captured clears the error state before calling the custom code. Thank you!
Back to plan B: write a set of database modules wrapping all the DBI stuff and checking the error state after every execute, simply by wrapping the execute call. It will take a reasonable amount of developer time to get this stable, and I don't like the idea at all, but it seems to be the only way to go.
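The wrapper idea boils down to one choke point: every execute goes through a single method that, on a lost-connection error, drops the cached handle so the next call transparently reconnects. The sketch below illustrates that pattern in Python with sqlite3 (since the post's actual environment is Perl/DBI, this is purely illustrative; the CachedDB class and its names are hypothetical, and real DBI code would check $dbh->err for MySQL codes 2006/2013 instead of catching sqlite3's ProgrammingError):

```python
import sqlite3

class CachedDB:
    """Connection cache that invalidates itself on a lost-connection error.

    Illustrative sketch of the wrapping approach: one execute() choke point,
    cached handle cleared on failure, lazy reconnect on the next call.
    """

    def __init__(self, dsn=":memory:"):
        self._dsn = dsn
        self._conn = None              # the cached handle ($dbh in the post)

    def _handle(self):
        if self._conn is None:         # connect lazily, once per task lifetime
            self._conn = sqlite3.connect(self._dsn)
        return self._conn

    def execute(self, sql, params=()):
        try:
            return self._handle().execute(sql, params)
        except sqlite3.ProgrammingError:
            # sqlite3 raises ProgrammingError on a closed connection; a
            # MySQL wrapper would test for error codes 2006/2013 here.
            self._conn = None          # clear the cache: next call reconnects
            raise


db = CachedDB()
db.execute("SELECT 1")                 # first call connects and succeeds
db._conn.close()                       # simulate "server has gone away"
try:
    db.execute("SELECT 1")             # this call still fails...
except sqlite3.ProgrammingError:
    pass
db.execute("SELECT 1")                 # ...but the next one reconnects
```

The point is that callers never touch the handle directly, so no per-statement error handling is needed: the one failed call after a disconnect still dies (or is caught by the surrounding eval, in the Perl case), but the poisoned cache doesn't outlive it.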