I have a mongo instance running on docker container:

docker run \
  -d \
  --name mongo \
  -e MONGO_INITDB_ROOT_USERNAME=bla \
  -e MONGO_INITDB_ROOT_PASSWORD=bla \
  -p 27017:27017 \
  mongo

and I have a process that uses mongo in the following manner:

import pymongo


def init():
    client = pymongo.MongoClient('mongodb://bla:bla@localhost:27017')
    db = client['db']
    collection = db['col']
    collection.insert_one({'ok': False})
    return collection


def critical_section(collection):
    if list(collection.find({'ok': False})):
        # do stuff
        collection.update_one({'ok': False}, {'$set': {'ok': True}})


collection = init()
critical_section(collection)

I have a number of concurrent processes running the critical_section function, so I want to lock the collection before the critical section and unlock it afterwards. If I don't, two processes can both find the same document; the first will manage to update it, but the second will fail...

I used this answer to lock the db:

collection = init()
pymongo.MongoClient('mongodb://bla:bla@localhost:27017')['admin'].command('fsync', lock=True)
critical_section(collection)
pymongo.MongoClient('mongodb://bla:bla@localhost:27017')['admin'].command('fsyncUnlock')

However, this only locks write operations, which doesn't cut it for me.
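
If the fsync approach is kept anyway, releasing the lock in a finally block avoids leaving the server locked when the critical section raises. A minimal sketch, assuming the same credentials and local instance as above:

client = pymongo.MongoClient('mongodb://bla:bla@localhost:27017')
collection = init()

client['admin'].command('fsync', lock=True)
try:
    critical_section(collection)
finally:
    # always release the lock, even if critical_section raised
    client['admin'].command('fsyncUnlock')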

  1. How can I lock for reading as well?
  2. Can I lock the collection, instead of the entire DB?
  3. I now think I can skip the find and just do if collection.update_one(...).modified_count == 1: # do stuff, though I'm not sure it solves exactly what I need: without the find, I can't know the _id of the item I updated (one way around that is sketched below)
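
One way to do item 3 and still get the document's _id is find_one_and_update, which filters, updates, and returns the matched document in a single atomic operation. A minimal sketch, assuming the collection layout above ('ok': False marks an unclaimed document; claim_and_process is a hypothetical replacement for critical_section):

from pymongo import ReturnDocument


def claim_and_process(collection):
    # atomically claim one unclaimed document and get it back, including its _id
    doc = collection.find_one_and_update(
        {'ok': False},
        {'$set': {'ok': True}},
        return_document=ReturnDocument.AFTER,
    )
    if doc is not None:
        # do stuff with doc['_id']
        pass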
question from: https://stackoverflow.com/questions/65943426/how-to-lock-pymongo-collection-to-reading-writing


1 Answer

See "MongoDB concurrent update to same document is not behaving atomic" for an example implementation.

Can I lock the collection, instead of the entire DB?

Trying to solve concurrency issues with locking (i.e. making the database unusable by more than one client at a time) is suboptimal: it doesn't scale, so it isn't useful precisely in the cases where you actually have concurrency.

This is why conditional updates, MVCC/snapshot read concern and similar constructs provide atomicity and consistency but not locking.
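
A conditional update for the collection above could look like the following (an assumption about how to apply the idea here, not code from the linked answer): the expected current state goes into the update filter, and modified_count tells each process whether it won the race.

import pymongo

client = pymongo.MongoClient('mongodb://bla:bla@localhost:27017')
collection = client['db']['col']

doc = collection.find_one({'ok': False})
if doc is not None:
    # conditional update: only succeeds if the document is still unclaimed
    result = collection.update_one(
        {'_id': doc['_id'], 'ok': False},
        {'$set': {'ok': True}},
    )
    if result.modified_count == 1:
        # this process won the race; do stuff with doc['_id']
        pass
    # otherwise another process claimed the document first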

