Sometimes it's not possible to use client-side cursors
because the memory hit is too big when copying the
result set into the client. Using SS (server-side) cursors
is also not appropriate, because the client may not want
all the results anyway (i.e. it might read a couple of
rows and then lose the context).
We need a cursor class which does something in between:
copy result sets to the client up to a maximum number
of rows; when those are consumed, grab the next lot
from the server, and so on. This strikes a balance between
client memory consumption and network traffic.
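The in-between cursor described above might look like the sketch below. It is a hypothetical class (name and API are made up, not part of MySQLdb), shown with sqlite3 so it runs anywhere, since any DB-API 2.0 module behaves the same way: it buffers at most max_rows rows on the client and refills from the server as the buffer drains.

```python
import sqlite3  # stand-in for MySQLdb; any DB-API 2.0 module works

class BatchedCursor:
    """Hypothetical in-between cursor: keeps at most `max_rows` rows
    client-side, pulling the next batch from the server on demand."""

    def __init__(self, connection, max_rows=1000):
        self._cursor = connection.cursor()
        self._max_rows = max_rows
        self._buffer = []

    def execute(self, sql, params=()):
        self._cursor.execute(sql, params)
        self._buffer = []

    def fetchone(self):
        if not self._buffer:
            # Buffer drained: grab the next lot of up to max_rows rows.
            self._buffer = list(self._cursor.fetchmany(self._max_rows))
        if not self._buffer:
            return None  # result set exhausted
        return self._buffer.pop(0)

    def __iter__(self):
        while True:
            row = self.fetchone()
            if row is None:
                return
            yield row

# Usage: iterate a 10-row table, fetched in batches of 3.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])
cur = BatchedCursor(conn, max_rows=3)
cur.execute("SELECT n FROM t ORDER BY n")
rows = [n for (n,) in cur]
```

With a real server-side cursor underneath, fetchmany is what keeps the per-batch memory bounded.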
We need to trap MySQL "commands out of sequence" errors
and transparently reconnect if the SS
connection has not been fully drained.
See for example:
http://pyflag.sourceforge.net/pyflag/_darcs/current/src/pyflag/DB.py
It would be nice if this were already built in, because
it would avoid the memory hit that client-side cursors have
while making SS cursors friendlier to use.
Michael.
Logged In: YES
user_id=71372
That disconnecting scheme won't play well with transactions.
Why not use a LIMIT clause?
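For reference, the LIMIT suggestion might look like this sketch: re-issue the query with LIMIT/OFFSET so that only one batch of rows crosses the wire at a time. The helper, table, and column names are illustrative, and sqlite3 stands in for MySQLdb.

```python
import sqlite3  # stand-in for MySQLdb; any DB-API 2.0 module works

def paged_rows(conn, batch=100):
    """Hypothetical LIMIT-based paging: each round trip fetches at
    most `batch` rows, so the client never buffers the full result."""
    offset = 0
    while True:
        cur = conn.cursor()
        cur.execute("SELECT n FROM t ORDER BY n LIMIT ? OFFSET ?",
                    (batch, offset))
        page = cur.fetchall()
        if not page:
            return  # no rows left
        for row in page:
            yield row
        offset += batch

# Usage: walk a 7-row table in pages of 3.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(7)])
values = [n for (n,) in paged_rows(conn, batch=3)]
```

Note the query must have a stable ORDER BY for the offsets to be meaningful, and each page is a fresh statement, which sidesteps the half-drained-cursor state entirely.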
Logged In: YES
user_id=492398
Of course using a LIMIT clause would be ideal, but in a number
of applications the client may not actually consume all the
rows it asked for, which means that a reconnect is
essential. For example, say you run a query and then for each
row you run some more queries, but an exception happens and
you need to stop execution.
In that case, the connection is in the middle of the
transaction, and if you wanted to reuse it, you would get
MySQL's "commands out of sequence" error.
We use a pooling mechanism for our connections, so there is
a chance that a "not fully drained" connection handle is
returned to the pool, which someone else may get later. The
disconnecting scheme is needed because a connection in that
state is pretty useless anyway - you can't even
get the remaining rows. (This might be a MySQL limitation,
but this is a workaround.) So I guess if you ever get this
exception, your transaction has failed anyway. Maybe just
raise then?
Logged In: YES
user_id=71372
About the only thing that sounds feasible is a
reconnect-on-rollback scheme, i.e. subclass Connection so
that it a) saves the parameters passed to the super
class's constructor and b) changes the rollback method to
close the connection and then reconnect. Then your clients,
after deciding they don't want to read more rows, can call
db.rollback(). It's the only way I can think of to maintain
relative transactional sanity.
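A minimal sketch of that reconnect-on-rollback idea, written as a wrapper rather than a Connection subclass so it can be demonstrated with sqlite3; connect_fn stands in for MySQLdb.connect, and all names are illustrative:

```python
import sqlite3  # used only for the demonstration below

class ReconnectingConnection:
    """Sketch: remember the connect arguments, and on rollback()
    drop the (possibly half-drained) connection and open a fresh one."""

    def __init__(self, connect_fn, *args, **kwargs):
        # a) save the parameters passed to the underlying constructor
        self._connect_fn = connect_fn
        self._args = args
        self._kwargs = kwargs
        self._conn = connect_fn(*args, **kwargs)

    def rollback(self):
        # b) close and reconnect instead of a plain rollback
        try:
            self._conn.close()
        except Exception:
            pass  # the old connection may already be unusable
        self._conn = self._connect_fn(*self._args, **self._kwargs)

    def __getattr__(self, name):
        # Delegate everything else (cursor, commit, ...) to the
        # real connection object.
        return getattr(self._conn, name)

# Usage: after rollback(), the handle is fresh and safe to reuse.
db = ReconnectingConnection(sqlite3.connect, ":memory:")
db.execute("CREATE TABLE t (n INTEGER)")
db.rollback()  # drops the old connection, opens a new one
ok = db.execute("SELECT 1").fetchone()
```

Closing and reopening discards any pending server-side result set along with the transaction, which is exactly the "transaction has failed anyway, just start over" behaviour discussed above.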