Psycopg 3 supports both sync and async code. Starting from release 3.2, the sync side is automatically generated from the async counterpart. This article explains how it's done, and the workflow used both for the initial conversion to autogenerated code and for regular maintenance.
and then generate a bunch of sync and async closures that capture that_call and then call the right things. In the typical API that you might want to compose sync/async, I find at least 90% can be generated that way, but working at the AST level you can get the last bit.
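To make the idea concrete, here is a minimal sketch of the AST-level approach in Python. This is only an illustration, not psycopg's actual conversion script (which also renames classes and handles many more constructs): an ast.NodeTransformer that turns async def into def and unwraps await expressions.

    import ast
    import textwrap

    class Unasync(ast.NodeTransformer):
        def visit_AsyncFunctionDef(self, node):
            self.generic_visit(node)
            # async def f(...): ...  ->  def f(...): ...
            sync = ast.FunctionDef(**{f: getattr(node, f) for f in node._fields})
            return ast.copy_location(sync, node)

        def visit_Await(self, node):
            self.generic_visit(node)
            # await expr  ->  expr
            return node.value

    async_src = """
    async def fetch_one(cur, query):
        await cur.execute(query)
        return await cur.fetchone()
    """

    tree = Unasync().visit(ast.parse(textwrap.dedent(async_src)))
    ast.fix_missing_locations(tree)
    print(ast.unparse(tree))
    # def fetch_one(cur, query):
    #     cur.execute(query)
    #     return cur.fetchone()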
As someone who is trying to make a living out of free software, I would love to see this project succeed.
The other remotely similar service, Tidelift, has unacceptable terms and conditions: the compensation is decided "at [their] sole discretion", and "[i]f you do not satisfactorily provide the Services, Tidelift may, at its option and without limitation, (a) require you to immediately re-perform the applicable Service at no additional charge; and/or (b) reduce the fees paid, or due to be paid, to you for the applicable Service in an amount commensurate with the cost to the Subscriber to cover the breach and/or the cost to Tidelift to assist its Subscriber with covering the breach." Quotes taken from the May 2022 "Lifter Agreement", which I was asked to sign for the monthly equivalent of what a professional charges for 30 minutes of work.
I hope that Ringer will prove to be a real contact point between Free Software professionals and professional users.
Hey dvarrazzo - we're making a real effort to talk to the developer side when we're structuring contracts, and for the moment (until it no longer scales) we will be manually involved (behind the scenes) in each one to make sure it fits. Getting this right is absolutely vital - a balanced contract that serves both parties equally is the only way I see of retaining good customers and incredible talent. If you have any other thoughts or want to discuss at length I'd be happy to hear - email in my bio!
This is interesting, thank you for pointing it out. Worth checking whether the problems can be fixed the same way, both in the adapter and probably in everything else using server-side binding.
There is a mechanism to ensure that there is no unexpected regression: by setting the PSYCOPG_IMPL env var, a program can make sure it gets a specific implementation, and the import fails if that implementation is not available. https://www.psycopg.org/psycopg3/docs/api/pq.html
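A minimal sketch of how this can be used as a regression guard (the script name is made up; PSYCOPG_IMPL and pq.__impl__ are the documented pieces):

    # check_impl.py -- run as:  PSYCOPG_IMPL=c python check_impl.py
    # With PSYCOPG_IMPL set, importing psycopg fails unless that libpq
    # wrapper implementation ("python", "c" or "binary") is available.
    from psycopg import pq

    print(pq.__impl__)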
Only if the comment pertains to a Django feature, not to a schema feature that cannot be expressed in Django (e.g. a partial index designed for a specific query).
> You can name your migrations with:
In the example there is a foreign key name, not a migration name. It is persisted in the database; it's not ephemeral like a migration name, for which choosing a meaningful name has only temporary value.
Just two factual corrections; for the rest, our experiences diverge, and that's fine.
It also breaks in interesting ways: I discovered just today that constants defined on the Model subclass are not available when you use 'get_model()'. I suspect methods wouldn't be accessible either?
As far as I know, this is by design; your migrations are supposed to operate on a frozen state of what your models and code _were at a point in time_.
If you relied on code outside of said migration, you would be breaking that frozen state and could end up with unintended side effects (e.g. running a migration created two years ago that imports your code as it is today). This is why you sometimes have to copy-paste logic into your Python migrations, but you also guarantee that the migration always runs the same way.
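As a sketch of what this looks like in practice (the app, model and field names here are made up), the historical model returned by apps.get_model() carries only the fields recorded in the migration graph, so constants and helper logic end up duplicated inside the migration:

    from django.db import migrations

    # Duplicated on purpose: the historical model below carries only its
    # fields, not class constants or custom methods from the real model.
    STATUS_ACTIVE = "active"

    def activate_all(apps, schema_editor):
        Account = apps.get_model("billing", "Account")
        # Account.STATUS_ACTIVE would raise AttributeError here even if
        # the current model defines it, hence the module-level copy.
        Account.objects.update(status=STATUS_ACTIVE)

    class Migration(migrations.Migration):
        dependencies = [("billing", "0002_account_status")]
        operations = [
            migrations.RunPython(activate_all, migrations.RunPython.noop),
        ]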
Hi! Much of the difficulty comes from using blocking libpq calls in psycopg2. I'm thinking of avoiding them altogether in psycopg3 and using only async calls, with Python in charge of blocking.
Note that you can obtain a similar result in psycopg2 by going into green mode and using select as the wait callback (see https://www.psycopg.org/docs/extras.html#psycopg2.extras.wai...). This trick makes it possible, for instance, to stop long-running queries with Ctrl-C.
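Roughly like this (a sketch; the empty DSN is a placeholder):

    import psycopg2
    import psycopg2.extensions
    import psycopg2.extras

    # "Green mode": Python waits on the socket instead of blocking in C.
    psycopg2.extensions.set_wait_callback(psycopg2.extras.wait_select)

    conn = psycopg2.connect("")  # placeholder DSN
    cur = conn.cursor()
    try:
        cur.execute("select pg_sleep(60)")
    except (KeyboardInterrupt, psycopg2.extensions.QueryCanceledError):
        # Ctrl-C cancels the query on the server and we get control back.
        conn.rollback()
        print("query interrupted")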
You can also set a timeout in the server to terminate a query that runs too long. I guess they are two complementary approaches. In the first case you don't know the state of the connection anymore: maybe it should be cancelled or discarded; we should work out what to do with it. A server timeout is easier to recover from: just rollback and off you go again.
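The server-side variant, again as a sketch with a placeholder DSN:

    import psycopg2
    from psycopg2 import errors  # psycopg2 >= 2.8

    conn = psycopg2.connect("")  # placeholder DSN
    cur = conn.cursor()
    # Ask the server to cancel any statement running longer than 5 seconds.
    cur.execute("set statement_timeout = '5s'")
    try:
        cur.execute("select pg_sleep(60)")
    except errors.QueryCanceled:
        conn.rollback()  # the connection is in a known state: just roll back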