I have a MongoDB instance running in a Docker container:
docker run \
  -d \
  --name mongo \
  -e MONGO_INITDB_ROOT_USERNAME=bla \
  -e MONGO_INITDB_ROOT_PASSWORD=bla \
  -p 27017:27017 \
  mongo
and I have a process that uses mongo in the following manner:
import pymongo

def init():
    client = pymongo.MongoClient('mongodb://bla:bla@localhost:27017')
    db = client['db']
    collection = db['col']
    collection.insert_one({'ok': False})
    return collection
def critical_section(collection):
    if list(collection.find({'ok': False})):
        # do stuff
        collection.update_one({'ok': False}, {'$set': {'ok': True}})
collection = init()
critical_section(collection)
I have a number of concurrent processes running the critical_section function, so I want to lock the collection before the critical section and unlock it afterwards. (If I don't do that, two processes can both find the same document, but only the first will manage to update it; the second will fail.)
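To make the race concrete, here is a minimal repro sketch using threads instead of separate processes (same connection string as above); whether both workers pass the find before either updates is timing-dependent:

import threading
import pymongo

client = pymongo.MongoClient('mongodb://bla:bla@localhost:27017')
collection = client['db']['col']
collection.delete_many({})
collection.insert_one({'ok': False})

def worker(name):
    if list(collection.find({'ok': False})):  # both threads may see the document
        result = collection.update_one({'ok': False}, {'$set': {'ok': True}})
        # only one update actually matches; the loser gets modified_count == 0
        print(name, 'modified_count =', result.modified_count)

threads = [threading.Thread(target=worker, args=('w%d' % i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()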
I used this answer to lock the db:
collection = init()
pymongo.MongoClient('mongodb://bla:bla@localhost:27017')['admin'].command('fsync', lock=True)
critical_section(collection)
pymongo.MongoClient('mongodb://bla:bla@localhost:27017')['admin'].command('fsyncUnlock')
however, this only locks write operations, which doesn't cut it for me.
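(As an aside, even with this approach, I realize the unlock belongs in a finally block, so a crash inside the critical section doesn't leave the server locked; a minimal sketch reusing a single client and the functions defined above:)

import pymongo

collection = init()
client = pymongo.MongoClient('mongodb://bla:bla@localhost:27017')
client.admin.command('fsync', lock=True)  # blocks writes until fsyncUnlock
try:
    critical_section(collection)
finally:
    client.admin.command('fsyncUnlock')  # always release, even on error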
- How can I lock for reading as well?
- Can I lock the collection, instead of the entire DB?
- I now think I can just skip the find, and do a
if collection.update_one({'ok': False}, {'$set': {'ok': True}}).modified_count == 1: # do stuff...
though I'm not sure it solves exactly what I need, because without the find, I can't know the id of the item I updated (see the sketch below).
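One pattern that might sidestep that last problem (I'm not sure it's the idiomatic one, but the calls are standard pymongo): find_one_and_update atomically claims the document and returns it in the same round trip, so I'd get the _id without a separate find:

from pymongo import MongoClient, ReturnDocument

collection = MongoClient('mongodb://bla:bla@localhost:27017')['db']['col']

# atomically flip ok and get the document back in one round trip
doc = collection.find_one_and_update(
    {'ok': False},
    {'$set': {'ok': True}},
    return_document=ReturnDocument.AFTER,  # return the post-update document
)
if doc is not None:
    # exactly one concurrent caller gets a non-None doc for a single match
    print('claimed', doc['_id'])
    # do stuff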