That's a crap default and a crap name. It should be prefetch_all_keys=False. (edit: and some documented reason WHY you would want to do such a thing)
I ran into this recently when writing my own S3 sync tool, because the commonly used tool is completely broken (it requires something called a 'config file' to function). But I didn't pay it much mind, because I'd forgotten the price discrepancy for ListBucket calls.
PS: if you want to see what boto is doing on the wire, enable debug logging before making any calls:

import logging
logging.basicConfig(filename="boto.log", level=logging.DEBUG)
It does not prefetch any keys (max-keys is set to 0); it performs a query on the bucket to validate that the bucket exists, and blows up if it does not. With validate=False, you can call get_bucket and get back a bucket object even when no remote bucket exists.
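To make the difference concrete, here's a minimal sketch against boto 2.x. The credentials are dummies; with validate=False, get_bucket makes no network call at all, so this runs entirely offline and hands back a Bucket object whether or not the bucket really exists:

```python
import boto  # assumes boto 2.x is installed

# Dummy credentials: nothing below actually talks to S3.
conn = boto.connect_s3(aws_access_key_id="x", aws_secret_access_key="y")

# validate=True (the default) would issue a GET on the bucket with
# max-keys=0, which is billed as a LIST request and raises S3ResponseError
# if the bucket is missing. validate=False skips that round trip.
bucket = conn.get_bucket("bucket-that-may-not-exist", validate=False)
print(bucket.name)  # "bucket-that-may-not-exist" -- no existence check done
```

The trade-off: with validate=False, errors only surface later, on the first real operation against the bucket.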
It sounds like an annoying limitation of the API that (apparently?) you can't cheaply validate whether a bucket exists.
Two manual work-arounds that come to mind:
- store a list of created buckets as keys in another bucket.
- store a dummy file in each bucket you create.
Either method allows you to check the existence of the bucket with a GET request rather than a more expensive LIST request, but both are hackish. It seems like this is functionality S3 should already provide cheaply.
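The dummy-file workaround above can be sketched like this. FakeS3 is a stand-in for a real client (boto's Bucket.get_key would play the get role), and the sentinel key name is made up; this only shows the control flow, not a real implementation:

```python
class FakeS3:
    """Stand-in for an S3 client; a real one would issue HTTP requests."""
    def __init__(self):
        self.objects = set()

    def put(self, bucket, key):
        self.objects.add((bucket, key))

    def get(self, bucket, key):
        # A GET on a single known key falls in the cheap request class;
        # ListBucket is billed at a higher per-request rate.
        return (bucket, key) in self.objects

SENTINEL = ".bucket-exists"  # hypothetical dummy key name

def create_bucket(s3, name):
    # Drop the sentinel at creation time so existence checks stay cheap.
    s3.put(name, SENTINEL)

def bucket_exists(s3, name):
    # One GET instead of a LIST: cheaper, but hackish, as noted above.
    return s3.get(name, SENTINEL)

s3 = FakeS3()
create_bucket(s3, "photos")
print(bucket_exists(s3, "photos"))   # True
print(bucket_exists(s3, "missing"))  # False
```

The obvious failure mode: anything that creates a bucket without going through create_bucket leaves the sentinel missing, and the check reports a false negative.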